Mirror of https://github.com/ClickHouse/ClickHouse.git, synced 2024-11-24 16:42:05 +00:00

Commit d46e8fc84b: Merge branch 'master' into fp16

Changed files:
.clang-tidy

@@ -37,7 +37,6 @@ Checks: [
     '-cert-oop54-cpp',
     '-cert-oop57-cpp',
 
-    '-clang-analyzer-optin.core.EnumCastOutOfRange', # https://github.com/abseil/abseil-cpp/issues/1667
     '-clang-analyzer-optin.performance.Padding',
 
     '-clang-analyzer-unix.Malloc',
.editorconfig

@@ -19,3 +19,7 @@ charset = utf-8
 indent_style = space
 indent_size = 4
 trim_trailing_whitespace = true
+
+# Some SQL results have trailing whitespace which is removed by IDEs
+[tests/queries/**.reference]
+trim_trailing_whitespace = false
.github/ISSUE_TEMPLATE/10_question.md (vendored, deleted, 12 lines)

@@ -1,12 +0,0 @@
----
-name: Question
-about: Ask a question about ClickHouse
-title: ''
-labels: question
-assignees: ''
-
----
-
-> Make sure to check documentation https://clickhouse.com/docs/en/ first. If the question is concise and probably has a short answer, asking it in [community Slack](https://join.slack.com/t/clickhousedb/shared_invite/zt-1gh9ds7f4-PgDhJAaF8ad5RbWBAAjzFg) is probably the fastest way to find the answer. For more complicated questions, consider asking them on StackOverflow with "clickhouse" tag https://stackoverflow.com/questions/tagged/clickhouse
-
-> If you still prefer GitHub issues, remove all this text and ask your question here.
.github/ISSUE_TEMPLATE/10_question.yaml (vendored, new file, 20 lines)

@@ -0,0 +1,20 @@
+name: Question
+description: Ask a question about ClickHouse
+labels: ["question"]
+body:
+  - type: markdown
+    attributes:
+      value: |
+        > Make sure to check documentation https://clickhouse.com/docs/en/ first. If the question is concise and probably has a short answer, asking it in [community Slack](https://join.slack.com/t/clickhousedb/shared_invite/zt-1gh9ds7f4-PgDhJAaF8ad5RbWBAAjzFg) is probably the fastest way to find the answer. For more complicated questions, consider asking them on StackOverflow with "clickhouse" tag https://stackoverflow.com/questions/tagged/clickhouse
+  - type: textarea
+    attributes:
+      label: Company or project name
+      description: Put your company name or project description here.
+    validations:
+      required: false
+  - type: textarea
+    attributes:
+      label: Question
+      description: Please put your question here.
+    validations:
+      required: true
.github/ISSUE_TEMPLATE/20_feature-request.md (vendored, 4 changes)

@@ -9,6 +9,10 @@ assignees: ''
 
 > (you don't have to strictly follow this form)
 
+**Company or project name**
+
+> Put your company name or project description here
+
 **Use case**
 
 > A clear and concise description of what is the intended usage scenario is.
(another issue template)

@@ -9,6 +9,10 @@ assignees: ''
 
 (you don't have to strictly follow this form)
 
+**Company or project name**
+
+Put your company name or project description here
+
 **Describe the unexpected behaviour**
 A clear and concise description of what works not as it is supposed to.
 
(another issue template)

@@ -9,6 +9,10 @@ assignees: ''
 
 (you don't have to strictly follow this form)
 
+**Company or project name**
+
+Put your company name or project description here
+
 **Describe the unexpected behaviour**
 A clear and concise description of what works not as it is supposed to.
 
.github/ISSUE_TEMPLATE/45_usability-issue.md (vendored, 3 changes)

@@ -9,6 +9,9 @@ assignees: ''
 
 (you don't have to strictly follow this form)
 
+**Company or project name**
+Put your company name or project description here
+
 **Describe the issue**
 A clear and concise description of what works not as it is supposed to.
 
.github/ISSUE_TEMPLATE/50_build-issue.md (vendored, 4 changes)

@@ -9,6 +9,10 @@ assignees: ''
 
 > Make sure that `git diff` result is empty and you've just pulled fresh master. Try cleaning up cmake cache. Just in case, official build instructions are published here: https://clickhouse.com/docs/en/development/build/
 
+**Company or project name**
+
+> Put your company name or project description here
+
 **Operating system**
 
 > OS kind or distribution, specific version/release, non-standard kernel if any. If you are trying to build inside virtual machine, please mention it too.
(documentation issue template)

@@ -8,6 +8,9 @@ labels: comp-documentation
 
 (you don't have to strictly follow this form)
 
+**Company or project name**
+Put your company name or project description here
+
 **Describe the issue**
 A clear and concise description of what's wrong in documentation.
 
(performance issue template)

@@ -9,6 +9,9 @@ assignees: ''
 
 (you don't have to strictly follow this form)
 
+**Company or project name**
+Put your company name or project description here
+
 **Describe the situation**
 What exactly works slower than expected?
 
(another issue template)

@@ -9,6 +9,9 @@ assignees: ''
 
 (you don't have to strictly follow this form)
 
+**Company or project name**
+Put your company name or project description here
+
 **Describe the issue**
 A clear and concise description of what works not as it is supposed to.
 
.github/ISSUE_TEMPLATE/85_bug-report.md (vendored, 4 changes)

@@ -11,6 +11,10 @@ assignees: ''
 
 > You have to provide the following information whenever possible.
 
+**Company or project name**
+
+> Put your company name or project description here
+
 **Describe what's wrong**
 
 > A clear and concise description of what works not as it is supposed to.
(installation issue template)

@@ -7,6 +7,10 @@ assignees: ''
 
 ---
 
+**Company or project name**
+
+Put your company name or project description here
+
 **I have tried the following solutions**: https://clickhouse.com/docs/en/faq/troubleshooting/#troubleshooting-installation-errors
 
 **Installation type**
.github/PULL_REQUEST_TEMPLATE.md (vendored, 8 changes)

@@ -48,19 +48,17 @@ At a minimum, the following information should be added (but add more as needed)
 - [ ] <!---ci_include_stateful--> Allow: Stateful tests
 - [ ] <!---ci_include_integration--> Allow: Integration Tests
 - [ ] <!---ci_include_performance--> Allow: Performance tests
+- [ ] <!---ci_set_normal_builds--> Allow: Normal Builds
+- [ ] <!---ci_set_special_builds--> Allow: Special Builds
 - [ ] <!---ci_set_non_required--> Allow: All NOT Required Checks
 - [ ] <!---batch_0_1--> Allow: batch 1, 2 for multi-batch jobs
 - [ ] <!---batch_2_3--> Allow: batch 3, 4, 5, 6 for multi-batch jobs
 ---
 - [ ] <!---ci_exclude_style--> Exclude: Style check
 - [ ] <!---ci_exclude_fast--> Exclude: Fast test
-- [ ] <!---ci_exclude_integration--> Exclude: Integration Tests
-- [ ] <!---ci_exclude_stateless--> Exclude: Stateless tests
-- [ ] <!---ci_exclude_stateful--> Exclude: Stateful tests
-- [ ] <!---ci_exclude_performance--> Exclude: Performance tests
 - [ ] <!---ci_exclude_asan--> Exclude: All with ASAN
-- [ ] <!---ci_exclude_aarch64--> Exclude: All with Aarch64
 - [ ] <!---ci_exclude_tsan|msan|ubsan|coverage--> Exclude: All with TSAN, MSAN, UBSAN, Coverage
+- [ ] <!---ci_exclude_aarch64|release|debug--> Exclude: All with aarch64, release, debug
 ---
 - [ ] <!---do_not_test--> Do not test
 - [ ] <!---upload_all--> Upload binaries for special builds
.github/workflows/backport_branches.yml (vendored, 2 changes)

@@ -273,5 +273,5 @@ jobs:
       - name: Finish label
         run: |
           cd "$GITHUB_WORKSPACE/tests/ci"
-          python3 finish_check.py
+          python3 finish_check.py --wf-status ${{ contains(needs.*.result, 'failure') && 'failure' || 'success' }}
           python3 merge_pr.py
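(A note on the expression used here and repeated in the workflow files below: GitHub Actions expressions have no ternary operator, so `${{ contains(needs.*.result, 'failure') && 'failure' || 'success' }}` is the usual `&&`/`||` substitute. `needs.*.result` collects the results of all dependency jobs, and the whole expression evaluates to `failure` if any of them failed and to `success` otherwise.)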
.github/workflows/master.yml (vendored, 5 changes)

@@ -106,7 +106,8 @@ jobs:
       data: ${{ needs.RunConfig.outputs.data }}
   # stage for jobs that do not prohibit merge
   Tests_3:
-    needs: [RunConfig, Builds_1]
+    # Test_3 should not wait for Test_1/Test_2 and should not be blocked by them on master branch since all jobs need to run there.
+    needs: [RunConfig, Builds_1, Builds_2]
     if: ${{ !failure() && !cancelled() && contains(fromJson(needs.RunConfig.outputs.data).stages_data.stages_to_do, 'Tests_3') }}
     uses: ./.github/workflows/reusable_test_stage.yml
     with:

@@ -172,4 +173,4 @@ jobs:
       - name: Finish label
         run: |
           cd "$GITHUB_WORKSPACE/tests/ci"
-          python3 finish_check.py
+          python3 finish_check.py --wf-status ${{ contains(needs.*.result, 'failure') && 'failure' || 'success' }}
.github/workflows/merge_queue.yml (vendored, 4 changes)

@@ -99,7 +99,7 @@ jobs:
   ################################# Stage Final #################################
   #
   FinishCheck:
-    if: ${{ !failure() && !cancelled() }}
+    if: ${{ !cancelled() }}
     needs: [RunConfig, BuildDockers, StyleCheck, FastTest, Builds_1, Tests_1]
     runs-on: [self-hosted, style-checker-aarch64]
     steps:

@@ -112,4 +112,4 @@ jobs:
       - name: Finish label
         run: |
           cd "$GITHUB_WORKSPACE/tests/ci"
-          python3 finish_check.py ${{ (contains(needs.*.result, 'failure') && github.event_name == 'merge_group') && '--pipeline-failure' || '' }}
+          python3 finish_check.py --wf-status ${{ contains(needs.*.result, 'failure') && 'failure' || 'success' }}
.github/workflows/pull_request.yml (vendored, 4 changes)

@@ -135,7 +135,7 @@ jobs:
       data: ${{ needs.RunConfig.outputs.data }}
   # stage for jobs that do not prohibit merge
   Tests_3:
-    needs: [RunConfig, Tests_1, Tests_2]
+    needs: [RunConfig, Builds_1, Tests_1, Builds_2, Tests_2]
     if: ${{ !failure() && !cancelled() && contains(fromJson(needs.RunConfig.outputs.data).stages_data.stages_to_do, 'Tests_3') }}
     uses: ./.github/workflows/reusable_test_stage.yml
     with:

@@ -191,7 +191,7 @@ jobs:
       - name: Finish label
         run: |
           cd "$GITHUB_WORKSPACE/tests/ci"
-          python3 finish_check.py
+          python3 finish_check.py --wf-status ${{ contains(needs.*.result, 'failure') && 'failure' || 'success' }}
 
   #############################################################################################
   ###################################### JEPSEN TESTS #########################################
.github/workflows/release_branches.yml (vendored, 2 changes)

@@ -496,4 +496,4 @@ jobs:
       - name: Finish label
         run: |
           cd "$GITHUB_WORKSPACE/tests/ci"
-          python3 finish_check.py
+          python3 finish_check.py --wf-status ${{ contains(needs.*.result, 'failure') && 'failure' || 'success' }}
.github/workflows/reusable_test.yml (vendored, 2 changes)

@@ -58,7 +58,7 @@ jobs:
     env:
       GITHUB_JOB_OVERRIDDEN: ${{inputs.test_name}}${{ fromJson(inputs.data).jobs_data.jobs_params[inputs.test_name].num_batches > 1 && format('-{0}',matrix.batch) || '' }}
     strategy:
-      fail-fast: false # we always wait for entire matrix
+      fail-fast: false # we always wait for the entire matrix
       matrix:
         batch: ${{ fromJson(inputs.data).jobs_data.jobs_params[inputs.test_name].batches }}
     steps:
.github/workflows/tags_stable.yml (vendored, 5 changes)

@@ -46,9 +46,10 @@ jobs:
           ./utils/list-versions/list-versions.sh > ./utils/list-versions/version_date.tsv
           ./utils/list-versions/update-docker-version.sh
           GID=$(id -g "${UID}")
-          docker run -u "${UID}:${GID}" -e PYTHONUNBUFFERED=1 \
+          # --network=host and CI=1 are required for the S3 access from a container
+          docker run -u "${UID}:${GID}" -e PYTHONUNBUFFERED=1 -e CI=1 --network=host \
             --volume="${GITHUB_WORKSPACE}:/ClickHouse" clickhouse/style-test \
-            /ClickHouse/utils/changelog/changelog.py -v --debug-helpers \
+            /ClickHouse/tests/ci/changelog.py -v --debug-helpers \
             --gh-user-or-token="$GITHUB_TOKEN" --jobs=5 \
             --output="/ClickHouse/docs/changelogs/${GITHUB_TAG}.md" "${GITHUB_TAG}"
           git add "./docs/changelogs/${GITHUB_TAG}.md"
.gitignore (vendored, 3 changes)

@@ -21,6 +21,9 @@
 *.stderr
 *.stdout
+
+# llvm-xray logs
+xray-log.*
 
 /docs/build
 /docs/publish
 /docs/edit
.gitmessage (deleted, 29 lines)

@@ -1,29 +0,0 @@
-
-
-### CI modificators (add a leading space to apply) ###
-
-## To avoid a merge commit in CI:
-#no_merge_commit
-
-## To discard CI cache:
-#no_ci_cache
-
-## To not test (only style check):
-#do_not_test
-
-## To run specified set of tests in CI:
-#ci_set_<SET_NAME>
-#ci_set_reduced
-#ci_set_arm
-#ci_set_integration
-#ci_set_old_analyzer
-
-## To run specified job in CI:
-#job_<JOB NAME>
-#job_stateless_tests_release
-#job_package_debug
-#job_integration_tests_asan
-
-## To run only specified batches for multi-batch job(s)
-#batch_2
-#batch_1_2_3
.gitmodules (vendored, 8 changes)

@@ -91,13 +91,13 @@
 [submodule "contrib/aws"]
 	path = contrib/aws
 	url = https://github.com/ClickHouse/aws-sdk-cpp
-[submodule "aws-c-event-stream"]
+[submodule "contrib/aws-c-event-stream"]
 	path = contrib/aws-c-event-stream
 	url = https://github.com/awslabs/aws-c-event-stream
-[submodule "aws-c-common"]
+[submodule "contrib/aws-c-common"]
 	path = contrib/aws-c-common
 	url = https://github.com/awslabs/aws-c-common.git
-[submodule "aws-checksums"]
+[submodule "contrib/aws-checksums"]
 	path = contrib/aws-checksums
 	url = https://github.com/awslabs/aws-checksums
 [submodule "contrib/curl"]

@@ -163,7 +163,7 @@
 	url = https://github.com/xz-mirror/xz
 [submodule "contrib/abseil-cpp"]
 	path = contrib/abseil-cpp
-	url = https://github.com/abseil/abseil-cpp
+	url = https://github.com/ClickHouse/abseil-cpp.git
 [submodule "contrib/dragonbox"]
 	path = contrib/dragonbox
 	url = https://github.com/ClickHouse/dragonbox
CHANGELOG.md

@@ -11,7 +11,7 @@
 ### <a id="245"></a> ClickHouse release 24.5, 2024-05-30
 
 #### Backward Incompatible Change
-* Renamed "inverted indexes" to "full-text indexes" which is a less technical / more user-friendly name. This also changes internal table metadata and breaks tables with existing (experimental) inverted indexes. Please make to drop such indexes before upgrade and re-create them after upgrade. [#62884](https://github.com/ClickHouse/ClickHouse/pull/62884) ([Robert Schulze](https://github.com/rschu1ze)).
+* Renamed "inverted indexes" to "full-text indexes" which is a less technical / more user-friendly name. This also changes internal table metadata and breaks tables with existing (experimental) inverted indexes. Please make sure to drop such indexes before upgrade and re-create them after upgrade. [#62884](https://github.com/ClickHouse/ClickHouse/pull/62884) ([Robert Schulze](https://github.com/rschu1ze)).
 * Usage of functions `neighbor`, `runningAccumulate`, `runningDifferenceStartingWithFirstValue`, `runningDifference` deprecated (because it is error-prone). Proper window functions should be used instead. To enable them back, set `allow_deprecated_error_prone_window_functions = 1` or set `compatibility = '24.4'` or lower. [#63132](https://github.com/ClickHouse/ClickHouse/pull/63132) ([Nikita Taranov](https://github.com/nickitat)).
 * Queries from `system.columns` will work faster if there is a large number of columns, but many databases or tables are not granted for `SHOW TABLES`. Note that in previous versions, if you grant `SHOW COLUMNS` to individual columns without granting `SHOW TABLES` to the corresponding tables, the `system.columns` table will show these columns, but in a new version, it will skip the table entirely. Remove trace log messages "Access granted" and "Access denied" that slowed down queries. [#63439](https://github.com/ClickHouse/ClickHouse/pull/63439) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
 
CMakeLists.txt

@@ -122,6 +122,8 @@ add_library(global-libs INTERFACE)
 
 include (cmake/sanitize.cmake)
 
+include (cmake/xray_instrumentation.cmake)
+
 option(ENABLE_COLORED_BUILD "Enable colors in compiler output" ON)
 
 set (CMAKE_COLOR_MAKEFILE ${ENABLE_COLORED_BUILD}) # works only for the makefile generator

@@ -208,8 +210,6 @@ option(OMIT_HEAVY_DEBUG_SYMBOLS
     "Do not generate debugger info for heavy modules (ClickHouse functions and dictionaries, some contrib)"
     ${OMIT_HEAVY_DEBUG_SYMBOLS_DEFAULT})
 
-option(USE_DEBUG_HELPERS "Enable debug helpers" ${USE_DEBUG_HELPERS})
-
 option(BUILD_STANDALONE_KEEPER "Build keeper as small standalone binary" OFF)
 if (NOT BUILD_STANDALONE_KEEPER)
     option(CREATE_KEEPER_SYMLINK "Create symlink for clickhouse-keeper to main server binary" ON)

@@ -399,7 +399,7 @@ option (ENABLE_GWP_ASAN "Enable Gwp-Asan" ON)
 # but GWP-ASan also wants to use mmap frequently,
 # and due to a large number of memory mappings,
 # it does not work together well.
-if ((NOT OS_LINUX AND NOT OS_ANDROID) OR (CMAKE_BUILD_TYPE_UC STREQUAL "DEBUG"))
+if ((NOT OS_LINUX AND NOT OS_ANDROID) OR (CMAKE_BUILD_TYPE_UC STREQUAL "DEBUG") OR SANITIZE)
     set(ENABLE_GWP_ASAN OFF)
 endif ()
 
SECURITY.md

@@ -19,7 +19,10 @@ The following versions of ClickHouse server are currently supported with security updates:
 | 24.3 | ✔️ |
 | 24.2 | ❌ |
 | 24.1 | ❌ |
-| 23.* | ❌ |
+| 23.12 | ❌ |
+| 23.11 | ❌ |
+| 23.10 | ❌ |
+| 23.9 | ❌ |
 | 23.8 | ✔️ |
 | 23.7 | ❌ |
 | 23.6 | ❌ |
base/base/CMakeLists.txt

@@ -34,15 +34,6 @@ set (SRCS
     throwError.cpp
 )
 
-if (USE_DEBUG_HELPERS)
-    get_target_property(MAGIC_ENUM_INCLUDE_DIR ch_contrib::magic_enum INTERFACE_INCLUDE_DIRECTORIES)
-    # CMake generator expression will do insane quoting when it encounters special character like quotes, spaces, etc.
-    # Prefixing "SHELL:" will force it to use the original text.
-    set (INCLUDE_DEBUG_HELPERS "SHELL:-I\"${MAGIC_ENUM_INCLUDE_DIR}\" -include \"${ClickHouse_SOURCE_DIR}/base/base/iostream_debug_helpers.h\"")
-    # Use generator expression as we don't want to pollute CMAKE_CXX_FLAGS, which will interfere with CMake check system.
-    add_compile_options($<$<COMPILE_LANGUAGE:CXX>:${INCLUDE_DEBUG_HELPERS}>)
-endif ()
-
 add_library (common ${SRCS})
 
 if (WITH_COVERAGE)
(enum fmt::formatter header)

@@ -32,7 +32,7 @@ constexpr void static_for(F && f)
 template <is_enum T>
 struct fmt::formatter<T> : fmt::formatter<std::string_view>
 {
-    constexpr auto format(T value, auto& format_context)
+    constexpr auto format(T value, auto& format_context) const
    {
         return formatter<string_view>::format(magic_enum::enum_name(value), format_context);
    }
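The added `const` tracks a fmt API change: from fmt 10 on, `formatter<T>::format()` is invoked on a const formatter object, so non-const overloads stop compiling. A minimal self-contained sketch of the same pattern (the `Color` enum is hypothetical; assumes fmt and magic_enum are available):

    #include <fmt/core.h>
    #include <magic_enum.hpp>
    #include <string_view>
    #include <type_traits>

    enum class Color { Red, Green };  // hypothetical example enum

    // Format any enum by its name. fmt >= 10 calls format() on a const
    // formatter, hence the const qualifier on the method.
    template <typename T> requires std::is_enum_v<T>
    struct fmt::formatter<T> : fmt::formatter<std::string_view>
    {
        constexpr auto format(T value, auto & ctx) const
        {
            return fmt::formatter<std::string_view>::format(magic_enum::enum_name(value), ctx);
        }
    };

    int main()
    {
        fmt::print("{}\n", Color::Green); // prints "Green"
    }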
base/base/StringRef.h

@@ -12,6 +12,8 @@
 #include <base/types.h>
 #include <base/unaligned.h>
 #include <base/simd.h>
+#include <fmt/core.h>
+#include <fmt/ostream.h>
 
 #include <city.h>
 

@@ -376,3 +378,5 @@ namespace PackedZeroTraits
 
 
 std::ostream & operator<<(std::ostream & os, const StringRef & str);
+
+template<> struct fmt::formatter<StringRef> : fmt::ostream_formatter {};
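`fmt::ostream_formatter` (from `<fmt/ostream.h>`) is fmt's sanctioned way to reuse an existing `operator<<` for `{}`-style formatting instead of writing a formatter by hand. A minimal standalone sketch of the pattern, with a hypothetical `Point` type standing in for `StringRef`:

    #include <fmt/core.h>
    #include <fmt/ostream.h>
    #include <ostream>

    struct Point { int x, y; };  // hypothetical stand-in for StringRef

    std::ostream & operator<<(std::ostream & os, const Point & p)
    {
        return os << '(' << p.x << ", " << p.y << ')';
    }

    // Route fmt formatting through the existing operator<<.
    template <> struct fmt::formatter<Point> : fmt::ostream_formatter {};

    int main()
    {
        fmt::print("{}\n", Point{1, 2}); // prints "(1, 2)"
    }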
base/base/iostream_debug_helpers.h (deleted, 187 lines)

@@ -1,187 +0,0 @@
-#pragma once
-
-#include "demangle.h"
-#include "getThreadId.h"
-#include <type_traits>
-#include <tuple>
-#include <iomanip>
-#include <iostream>
-#include <magic_enum.hpp>
-
-/** Usage:
- *
- * DUMP(variable...)
- */
-
-
-template <typename Out, typename T>
-Out & dumpValue(Out &, T &&);
-
-
-/// Catch-all case.
-template <int priority, typename Out, typename T>
-requires(priority == -1)
-Out & dumpImpl(Out & out, T &&) // NOLINT(cppcoreguidelines-missing-std-forward)
-{
-    return out << "{...}";
-}
-
-/// An object, that could be output with operator <<.
-template <int priority, typename Out, typename T>
-requires(priority == 0)
-Out & dumpImpl(Out & out, T && x, std::decay_t<decltype(std::declval<Out &>() << std::declval<T>())> * = nullptr) // NOLINT(cppcoreguidelines-missing-std-forward)
-{
-    return out << x;
-}
-
-/// A pointer-like object.
-template <int priority, typename Out, typename T>
-requires(priority == 1
-    /// Protect from the case when operator * do effectively nothing (function pointer).
-    && !std::is_same_v<std::decay_t<T>, std::decay_t<decltype(*std::declval<T>())>>)
-Out & dumpImpl(Out & out, T && x, std::decay_t<decltype(*std::declval<T>())> * = nullptr) // NOLINT(cppcoreguidelines-missing-std-forward)
-{
-    if (!x)
-        return out << "nullptr";
-    return dumpValue(out, *x);
-}
-
-/// Container.
-template <int priority, typename Out, typename T>
-requires(priority == 2)
-Out & dumpImpl(Out & out, T && x, std::decay_t<decltype(std::begin(std::declval<T>()))> * = nullptr) // NOLINT(cppcoreguidelines-missing-std-forward)
-{
-    bool first = true;
-    out << "{";
-    for (const auto & elem : x)
-    {
-        if (first)
-            first = false;
-        else
-            out << ", ";
-        dumpValue(out, elem);
-    }
-    return out << "}";
-}
-
-
-template <int priority, typename Out, typename T>
-requires(priority == 3 && std::is_enum_v<std::decay_t<T>>)
-Out & dumpImpl(Out & out, T && x) // NOLINT(cppcoreguidelines-missing-std-forward)
-{
-    return out << magic_enum::enum_name(x);
-}
-
-/// string and const char * - output not as container or pointer.
-
-template <int priority, typename Out, typename T>
-requires(priority == 3 && (std::is_same_v<std::decay_t<T>, std::string> || std::is_same_v<std::decay_t<T>, const char *>))
-Out & dumpImpl(Out & out, T && x) // NOLINT(cppcoreguidelines-missing-std-forward)
-{
-    return out << std::quoted(x);
-}
-
-/// UInt8 - output as number, not char.
-
-template <int priority, typename Out, typename T>
-requires(priority == 3 && std::is_same_v<std::decay_t<T>, unsigned char>)
-Out & dumpImpl(Out & out, T && x) // NOLINT(cppcoreguidelines-missing-std-forward)
-{
-    return out << int(x);
-}
-
-
-/// Tuple, pair
-template <size_t N, typename Out, typename T>
-Out & dumpTupleImpl(Out & out, T && x) // NOLINT(cppcoreguidelines-missing-std-forward)
-{
-    if constexpr (N == 0)
-        out << "{";
-    else
-        out << ", ";
-
-    dumpValue(out, std::get<N>(x));
-
-    if constexpr (N + 1 == std::tuple_size_v<std::decay_t<T>>)
-        out << "}";
-    else
-        dumpTupleImpl<N + 1>(out, x);
-
-    return out;
-}
-
-template <int priority, typename Out, typename T>
-requires(priority == 4)
-Out & dumpImpl(Out & out, T && x, std::decay_t<decltype(std::get<0>(std::declval<T>()))> * = nullptr) // NOLINT(cppcoreguidelines-missing-std-forward)
-{
-    return dumpTupleImpl<0>(out, x);
-}
-
-
-template <int priority, typename Out, typename T>
-Out & dumpDispatchPriorities(Out & out, T && x, std::decay_t<decltype(dumpImpl<priority>(std::declval<Out &>(), std::declval<T>()))> *) // NOLINT(cppcoreguidelines-missing-std-forward)
-{
-    return dumpImpl<priority>(out, x);
-}
-
-// NOLINTNEXTLINE(google-explicit-constructor)
-struct LowPriority { LowPriority(void *) {} };
-
-template <int priority, typename Out, typename T>
-Out & dumpDispatchPriorities(Out & out, T && x, LowPriority) // NOLINT(cppcoreguidelines-missing-std-forward)
-{
-    return dumpDispatchPriorities<priority - 1>(out, x, nullptr);
-}
-
-
-template <typename Out, typename T>
-Out & dumpValue(Out & out, T && x) // NOLINT(cppcoreguidelines-missing-std-forward)
-{
-    return dumpDispatchPriorities<5>(out, x, nullptr);
-}
-
-
-template <typename Out, typename T>
-Out & dump(Out & out, const char * name, T && x) // NOLINT(cppcoreguidelines-missing-std-forward)
-{
-    // Dumping string literal, printing name and demangled type is irrelevant.
-    if constexpr (std::is_same_v<const char *, std::decay_t<std::remove_reference_t<T>>>)
-    {
-        const auto name_len = strlen(name);
-        const auto value_len = strlen(x);
-        // `name` is the same as quoted `x`
-        if (name_len > 2 && value_len > 0 && name[0] == '"' && name[name_len - 1] == '"'
-            && strncmp(name + 1, x, std::min(value_len, name_len) - 1) == 0)
-            return out << x;
-    }
-
-    out << demangle(typeid(x).name()) << " " << name << " = ";
-    return dumpValue(out, x) << "; ";
-}
-
-#pragma clang diagnostic ignored "-Wgnu-zero-variadic-macro-arguments"
-
-#define DUMPVAR(VAR) ::dump(std::cerr, #VAR, (VAR));
-#define DUMPHEAD std::cerr << __FILE__ << ':' << __LINE__ << " [ " << getThreadId() << " ] ";
-#define DUMPTAIL std::cerr << '\n';
-
-#define DUMP1(V1) do { DUMPHEAD DUMPVAR(V1) DUMPTAIL } while(0)
-#define DUMP2(V1, V2) do { DUMPHEAD DUMPVAR(V1) DUMPVAR(V2) DUMPTAIL } while(0)
-#define DUMP3(V1, V2, V3) do { DUMPHEAD DUMPVAR(V1) DUMPVAR(V2) DUMPVAR(V3) DUMPTAIL } while(0)
-#define DUMP4(V1, V2, V3, V4) do { DUMPHEAD DUMPVAR(V1) DUMPVAR(V2) DUMPVAR(V3) DUMPVAR(V4) DUMPTAIL } while(0)
-#define DUMP5(V1, V2, V3, V4, V5) do { DUMPHEAD DUMPVAR(V1) DUMPVAR(V2) DUMPVAR(V3) DUMPVAR(V4) DUMPVAR(V5) DUMPTAIL } while(0)
-#define DUMP6(V1, V2, V3, V4, V5, V6) do { DUMPHEAD DUMPVAR(V1) DUMPVAR(V2) DUMPVAR(V3) DUMPVAR(V4) DUMPVAR(V5) DUMPVAR(V6) DUMPTAIL } while(0)
-#define DUMP7(V1, V2, V3, V4, V5, V6, V7) do { DUMPHEAD DUMPVAR(V1) DUMPVAR(V2) DUMPVAR(V3) DUMPVAR(V4) DUMPVAR(V5) DUMPVAR(V6) DUMPVAR(V7) DUMPTAIL } while(0)
-#define DUMP8(V1, V2, V3, V4, V5, V6, V7, V8) do { DUMPHEAD DUMPVAR(V1) DUMPVAR(V2) DUMPVAR(V3) DUMPVAR(V4) DUMPVAR(V5) DUMPVAR(V6) DUMPVAR(V7) DUMPVAR(V8) DUMPTAIL } while(0)
-#define DUMP9(V1, V2, V3, V4, V5, V6, V7, V8, V9) do { DUMPHEAD DUMPVAR(V1) DUMPVAR(V2) DUMPVAR(V3) DUMPVAR(V4) DUMPVAR(V5) DUMPVAR(V6) DUMPVAR(V7) DUMPVAR(V8) DUMPVAR(V9) DUMPTAIL } while(0)
-
-/// https://groups.google.com/forum/#!searchin/kona-dev/variadic$20macro%7Csort:date/kona-dev/XMA-lDOqtlI/GCzdfZsD41sJ
-
-#define VA_NUM_ARGS_IMPL(x1, x2, x3, x4, x5, x6, x7, x8, x9, N, ...) N
-#define VA_NUM_ARGS(...) VA_NUM_ARGS_IMPL(__VA_ARGS__, 9, 8, 7, 6, 5, 4, 3, 2, 1)
-
-#define MAKE_VAR_MACRO_IMPL_CONCAT(PREFIX, NUM_ARGS) PREFIX ## NUM_ARGS
-#define MAKE_VAR_MACRO_IMPL(PREFIX, NUM_ARGS) MAKE_VAR_MACRO_IMPL_CONCAT(PREFIX, NUM_ARGS)
-#define MAKE_VAR_MACRO(PREFIX, ...) MAKE_VAR_MACRO_IMPL(PREFIX, VA_NUM_ARGS(__VA_ARGS__))
-
-#define DUMP(...) MAKE_VAR_MACRO(DUMP, __VA_ARGS__)(__VA_ARGS__)
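For reference, the core trick in this deleted header is dispatch over descending priorities: `dumpDispatchPriorities<N>` prefers the pointer-taking overload when `dumpImpl<N>` substitutes successfully, and otherwise falls through to `N - 1` via the worse-ranked `LowPriority` user-defined conversion. A trimmed, self-contained sketch of just that idiom (not the full header; names shortened):

    #include <iostream>
    #include <iterator>
    #include <type_traits>
    #include <utility>
    #include <vector>

    // Catch-all: lowest priority, prints a placeholder.
    template <int priority, typename T>
        requires(priority == -1)
    std::ostream & dumpImpl(std::ostream & out, T &&) { return out << "{...}"; }

    // Streamable objects: enabled only if `out << x` is well-formed.
    template <int priority, typename T>
        requires(priority == 0)
    std::ostream & dumpImpl(std::ostream & out, T && x,
        std::decay_t<decltype(std::declval<std::ostream &>() << std::declval<T>())> * = nullptr)
    { return out << x; }

    // Containers: enabled only if std::begin(x) is well-formed.
    template <int priority, typename T>
        requires(priority == 1)
    std::ostream & dumpImpl(std::ostream & out, T && x,
        std::decay_t<decltype(std::begin(std::declval<T>()))> * = nullptr)
    {
        out << "{ ";
        for (const auto & e : x) out << e << ' ';
        return out << "}";
    }

    // nullptr converts to void* (and then to LowPriority) only when no better
    // overload exists, so this conversion ranks below the pointer overload.
    struct LowPriority { LowPriority(void *) {} };

    // Chosen when dumpImpl<priority> is viable for T.
    template <int priority, typename T>
    std::ostream & dumpDispatch(std::ostream & out, T && x,
        std::decay_t<decltype(dumpImpl<priority>(std::declval<std::ostream &>(), std::declval<T>()))> *)
    { return dumpImpl<priority>(out, x); }

    // Fallback: retry with the next lower priority.
    template <int priority, typename T>
    std::ostream & dumpDispatch(std::ostream & out, T && x, LowPriority)
    { return dumpDispatch<priority - 1>(out, x, nullptr); }

    int main()
    {
        std::vector<int> v{1, 2, 3};
        dumpDispatch<1>(std::cout, v, nullptr) << '\n';   // container path: "{ 1 2 3 }"
        dumpDispatch<1>(std::cout, 42, nullptr) << '\n';  // falls through to streamable: "42"
    }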
(deleted examples CMakeLists.txt, 2 lines)

@@ -1,2 +0,0 @@
-clickhouse_add_executable (dump_variable dump_variable.cpp)
-target_link_libraries (dump_variable PRIVATE clickhouse_common_io)
dump_variable.cpp (deleted, 70 lines)

@@ -1,70 +0,0 @@
-#include <base/iostream_debug_helpers.h>
-
-#include <iostream>
-#include <memory>
-#include <vector>
-#include <map>
-#include <set>
-#include <tuple>
-#include <array>
-#include <utility>
-
-
-struct S1;
-struct S2 {};
-
-struct S3
-{
-    std::set<const char *> m1;
-};
-
-std::ostream & operator<<(std::ostream & stream, const S3 & what)
-{
-    stream << "S3 {m1=";
-    dumpValue(stream, what.m1) << "}";
-    return stream;
-}
-
-int main(int, char **)
-{
-    int x = 1;
-
-    DUMP(x);
-    DUMP(x, 1, &x);
-
-    DUMP(std::make_unique<int>(1));
-    DUMP(std::make_shared<int>(1));
-
-    std::vector<int> vec{1, 2, 3};
-    DUMP(vec);
-
-    auto pair = std::make_pair(1, 2);
-    DUMP(pair);
-
-    auto tuple = std::make_tuple(1, 2, 3);
-    DUMP(tuple);
-
-    std::map<int, std::string> map{{1, "hello"}, {2, "world"}};
-    DUMP(map);
-
-    std::initializer_list<const char *> list{"hello", "world"};
-    DUMP(list);
-
-    std::array<const char *, 2> arr{{"hello", "world"}};
-    DUMP(arr);
-
-    //DUMP([]{});
-
-    S1 * s = nullptr;
-    DUMP(s);
-
-    DUMP(S2());
-
-    std::set<const char *> variants = {"hello", "world"};
-    DUMP(variants);
-
-    S3 s3 {{"hello", "world"}};
-    DUMP(s3);
-
-    return 0;
-}
(wide::integer fmt::formatter header)

@@ -62,7 +62,7 @@ struct fmt::formatter<wide::integer<Bits, Signed>>
     }
 
     template <typename FormatContext>
-    auto format(const wide::integer<Bits, Signed> & value, FormatContext & ctx)
+    auto format(const wide::integer<Bits, Signed> & value, FormatContext & ctx) const
     {
         return fmt::format_to(ctx.out(), "{}", to_string(value));
     }
cmake/xray_instrumentation.cmake (vendored, new file, 20 lines)

@@ -0,0 +1,20 @@
+# https://llvm.org/docs/XRay.html
+
+option (ENABLE_XRAY "Enable LLVM XRay" OFF)
+
+if (NOT ENABLE_XRAY)
+    message (STATUS "Not using LLVM XRay")
+    return()
+endif()
+
+if (NOT (ARCH_AMD64 AND OS_LINUX))
+    message (STATUS "Not using LLVM XRay, only amd64 Linux or FreeBSD are supported")
+    return()
+endif()
+
+# The target clang must support xray, otherwise it should error on invalid option
+set (XRAY_FLAGS "-fxray-instrument -DUSE_XRAY")
+set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${XRAY_FLAGS}")
+set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${XRAY_FLAGS}")
+
+message (STATUS "Using LLVM XRay")
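For orientation (a hedged sketch, not part of the commit): building with `-fxray-instrument` makes clang insert patchable entry/exit sleds, individual functions can opt in or out with attributes, and tracing is switched on at run time via the `XRAY_OPTIONS` environment variable, which is what produces the `xray-log.*` files ignored in `.gitignore` above. A minimal standalone demo (the file and function names are hypothetical):

    // Build (assuming a clang with XRay support, as the CMake flags above set up):
    //   clang++ -fxray-instrument -fxray-instruction-threshold=1 xray_demo.cpp -o xray_demo
    // Run with instrumentation patched in at startup:
    //   XRAY_OPTIONS="patch_premain=true xray_mode=xray-basic" ./xray_demo
    // This writes a trace file named like xray-log.xray_demo.<suffix> in the
    // working directory, matching the .gitignore pattern above.

    #include <cstdio>

    [[clang::xray_always_instrument]] void traced_work()
    {
        std::puts("doing traced work");
    }

    [[clang::xray_never_instrument]] void untraced_work()
    {
        std::puts("doing untraced work");
    }

    int main()
    {
        traced_work();
        untraced_work();
    }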
contrib/abseil-cpp (vendored submodule, 2 changes)

@@ -1 +1 @@
-Subproject commit 3bd86026c93da5a40006fd53403dff9d5f5e30e3
+Subproject commit a3c4dd3e77f28b526efbb0eb394b72e29c633936
@ -1,6 +1,8 @@
|
|||||||
set(ABSL_ROOT_DIR "${ClickHouse_SOURCE_DIR}/contrib/abseil-cpp")
|
set(ABSL_ROOT_DIR "${ClickHouse_SOURCE_DIR}/contrib/abseil-cpp")
|
||||||
set(ABSL_COMMON_INCLUDE_DIRS "${ABSL_ROOT_DIR}")
|
set(ABSL_COMMON_INCLUDE_DIRS "${ABSL_ROOT_DIR}")
|
||||||
|
|
||||||
|
# This is a minimized version of the function definition in CMake/AbseilHelpers.cmake
|
||||||
|
|
||||||
#
|
#
|
||||||
# Copyright 2017 The Abseil Authors.
|
# Copyright 2017 The Abseil Authors.
|
||||||
#
|
#
|
||||||
@ -16,7 +18,6 @@ set(ABSL_COMMON_INCLUDE_DIRS "${ABSL_ROOT_DIR}")
|
|||||||
# See the License for the specific language governing permissions and
|
# See the License for the specific language governing permissions and
|
||||||
# limitations under the License.
|
# limitations under the License.
|
||||||
#
|
#
|
||||||
|
|
||||||
function(absl_cc_library)
|
function(absl_cc_library)
|
||||||
cmake_parse_arguments(ABSL_CC_LIB
|
cmake_parse_arguments(ABSL_CC_LIB
|
||||||
"DISABLE_INSTALL;PUBLIC;TESTONLY"
|
"DISABLE_INSTALL;PUBLIC;TESTONLY"
|
||||||
@ -76,6 +77,12 @@ function(absl_cc_library)
|
|||||||
add_library(absl::${ABSL_CC_LIB_NAME} ALIAS ${_NAME})
|
add_library(absl::${ABSL_CC_LIB_NAME} ALIAS ${_NAME})
|
||||||
endfunction()
|
endfunction()
|
||||||
|
|
||||||
|
# The following definitions are an amalgamation of the CMakeLists.txt files in absl/*/
|
||||||
|
# To refresh them when upgrading to a new version:
|
||||||
|
# - copy them over from upstream
|
||||||
|
# - remove calls of 'absl_cc_test'
|
||||||
|
# - remove calls of `absl_cc_library` that contain `TESTONLY`
|
||||||
|
# - append '${DIR}' to the file definitions
|
||||||
|
|
||||||
set(DIR ${ABSL_ROOT_DIR}/absl/algorithm)
|
set(DIR ${ABSL_ROOT_DIR}/absl/algorithm)
|
||||||
|
|
||||||
@ -102,12 +109,12 @@ absl_cc_library(
|
|||||||
absl::algorithm
|
absl::algorithm
|
||||||
absl::core_headers
|
absl::core_headers
|
||||||
absl::meta
|
absl::meta
|
||||||
|
absl::nullability
|
||||||
PUBLIC
|
PUBLIC
|
||||||
)
|
)
|
||||||
|
|
||||||
set(DIR ${ABSL_ROOT_DIR}/absl/base)
|
set(DIR ${ABSL_ROOT_DIR}/absl/base)
|
||||||
|
|
||||||
# Internal-only target, do not depend on directly.
|
|
||||||
absl_cc_library(
|
absl_cc_library(
|
||||||
NAME
|
NAME
|
||||||
atomic_hook
|
atomic_hook
|
||||||
@ -146,6 +153,18 @@ absl_cc_library(
|
|||||||
${ABSL_DEFAULT_COPTS}
|
${ABSL_DEFAULT_COPTS}
|
||||||
)
|
)
|
||||||
|
|
||||||
|
absl_cc_library(
|
||||||
|
NAME
|
||||||
|
no_destructor
|
||||||
|
HDRS
|
||||||
|
"${DIR}/no_destructor.h"
|
||||||
|
DEPS
|
||||||
|
absl::config
|
||||||
|
absl::nullability
|
||||||
|
COPTS
|
||||||
|
${ABSL_DEFAULT_COPTS}
|
||||||
|
)
|
||||||
|
|
||||||
absl_cc_library(
|
absl_cc_library(
|
||||||
NAME
|
NAME
|
||||||
nullability
|
nullability
|
||||||
@ -305,6 +324,8 @@ absl_cc_library(
|
|||||||
${ABSL_DEFAULT_COPTS}
|
${ABSL_DEFAULT_COPTS}
|
||||||
LINKOPTS
|
LINKOPTS
|
||||||
${ABSL_DEFAULT_LINKOPTS}
|
${ABSL_DEFAULT_LINKOPTS}
|
||||||
|
$<$<BOOL:${LIBRT}>:-lrt>
|
||||||
|
$<$<BOOL:${MINGW}>:-ladvapi32>
|
||||||
DEPS
|
DEPS
|
||||||
absl::atomic_hook
|
absl::atomic_hook
|
||||||
absl::base_internal
|
absl::base_internal
|
||||||
@ -312,6 +333,7 @@ absl_cc_library(
|
|||||||
absl::core_headers
|
absl::core_headers
|
||||||
absl::dynamic_annotations
|
absl::dynamic_annotations
|
||||||
absl::log_severity
|
absl::log_severity
|
||||||
|
absl::nullability
|
||||||
absl::raw_logging_internal
|
absl::raw_logging_internal
|
||||||
absl::spinlock_wait
|
absl::spinlock_wait
|
||||||
absl::type_traits
|
absl::type_traits
|
||||||
@ -357,6 +379,7 @@ absl_cc_library(
|
|||||||
absl::base
|
absl::base
|
||||||
absl::config
|
absl::config
|
||||||
absl::core_headers
|
absl::core_headers
|
||||||
|
absl::nullability
|
||||||
PUBLIC
|
PUBLIC
|
||||||
)
|
)
|
||||||
|
|
||||||
@ -467,10 +490,11 @@ absl_cc_library(
|
|||||||
LINKOPTS
|
LINKOPTS
|
||||||
${ABSL_DEFAULT_LINKOPTS}
|
${ABSL_DEFAULT_LINKOPTS}
|
||||||
DEPS
|
DEPS
|
||||||
absl::container_common
|
|
||||||
absl::common_policy_traits
|
absl::common_policy_traits
|
||||||
absl::compare
|
absl::compare
|
||||||
absl::compressed_tuple
|
absl::compressed_tuple
|
||||||
|
absl::config
|
||||||
|
absl::container_common
|
||||||
absl::container_memory
|
absl::container_memory
|
||||||
absl::cord
|
absl::cord
|
||||||
absl::core_headers
|
absl::core_headers
|
||||||
@ -480,7 +504,6 @@ absl_cc_library(
|
|||||||
absl::strings
|
absl::strings
|
||||||
absl::throw_delegate
|
absl::throw_delegate
|
||||||
absl::type_traits
|
absl::type_traits
|
||||||
absl::utility
|
|
||||||
)
|
)
|
||||||
|
|
||||||
# Internal-only target, do not depend on directly.
|
# Internal-only target, do not depend on directly.
|
||||||
@ -523,7 +546,9 @@ absl_cc_library(
|
|||||||
COPTS
|
COPTS
|
||||||
${ABSL_DEFAULT_COPTS}
|
${ABSL_DEFAULT_COPTS}
|
||||||
DEPS
|
DEPS
|
||||||
|
absl::base_internal
|
||||||
absl::compressed_tuple
|
absl::compressed_tuple
|
||||||
|
absl::config
|
||||||
absl::core_headers
|
absl::core_headers
|
||||||
absl::memory
|
absl::memory
|
||||||
absl::span
|
absl::span
|
||||||
@ -548,18 +573,6 @@ absl_cc_library(
|
|||||||
PUBLIC
|
PUBLIC
|
||||||
)
|
)
|
||||||
|
|
||||||
# Internal-only target, do not depend on directly.
|
|
||||||
absl_cc_library(
|
|
||||||
NAME
|
|
||||||
counting_allocator
|
|
||||||
HDRS
|
|
||||||
"${DIR}/internal/counting_allocator.h"
|
|
||||||
COPTS
|
|
||||||
${ABSL_DEFAULT_COPTS}
|
|
||||||
DEPS
|
|
||||||
absl::config
|
|
||||||
)
|
|
||||||
|
|
||||||
absl_cc_library(
|
absl_cc_library(
|
||||||
NAME
|
NAME
|
||||||
flat_hash_map
|
flat_hash_map
|
||||||
@ -570,7 +583,7 @@ absl_cc_library(
|
|||||||
DEPS
|
DEPS
|
||||||
absl::container_memory
|
absl::container_memory
|
||||||
absl::core_headers
|
absl::core_headers
|
||||||
absl::hash_function_defaults
|
absl::hash_container_defaults
|
||||||
absl::raw_hash_map
|
absl::raw_hash_map
|
||||||
absl::algorithm_container
|
absl::algorithm_container
|
||||||
absl::memory
|
absl::memory
|
||||||
@ -586,7 +599,7 @@ absl_cc_library(
|
|||||||
${ABSL_DEFAULT_COPTS}
|
${ABSL_DEFAULT_COPTS}
|
||||||
DEPS
|
DEPS
|
||||||
absl::container_memory
|
absl::container_memory
|
||||||
absl::hash_function_defaults
|
absl::hash_container_defaults
|
||||||
absl::raw_hash_set
|
absl::raw_hash_set
|
||||||
absl::algorithm_container
|
absl::algorithm_container
|
||||||
absl::core_headers
|
absl::core_headers
|
||||||
@ -604,7 +617,7 @@ absl_cc_library(
|
|||||||
DEPS
|
DEPS
|
||||||
absl::container_memory
|
absl::container_memory
|
||||||
absl::core_headers
|
absl::core_headers
|
||||||
absl::hash_function_defaults
|
absl::hash_container_defaults
|
||||||
absl::node_slot_policy
|
absl::node_slot_policy
|
||||||
absl::raw_hash_map
|
absl::raw_hash_map
|
||||||
absl::algorithm_container
|
absl::algorithm_container
|
||||||
@ -620,8 +633,9 @@ absl_cc_library(
|
|||||||
COPTS
|
COPTS
|
||||||
${ABSL_DEFAULT_COPTS}
|
${ABSL_DEFAULT_COPTS}
|
||||||
DEPS
|
DEPS
|
||||||
|
absl::container_memory
|
||||||
absl::core_headers
|
absl::core_headers
|
||||||
absl::hash_function_defaults
|
absl::hash_container_defaults
|
||||||
absl::node_slot_policy
|
absl::node_slot_policy
|
||||||
absl::raw_hash_set
|
absl::raw_hash_set
|
||||||
absl::algorithm_container
|
absl::algorithm_container
|
||||||
@ -629,6 +643,19 @@ absl_cc_library(
|
|||||||
PUBLIC
|
PUBLIC
|
||||||
)
|
)
|
||||||
|
|
||||||
|
absl_cc_library(
|
||||||
|
NAME
|
||||||
|
hash_container_defaults
|
||||||
|
HDRS
|
||||||
|
"${DIR}/hash_container_defaults.h"
|
||||||
|
COPTS
|
||||||
|
${ABSL_DEFAULT_COPTS}
|
||||||
|
DEPS
|
||||||
|
absl::config
|
||||||
|
absl::hash_function_defaults
|
||||||
|
PUBLIC
|
||||||
|
)
|
||||||
|
|
||||||
# Internal-only target, do not depend on directly.
|
# Internal-only target, do not depend on directly.
|
||||||
absl_cc_library(
|
absl_cc_library(
|
||||||
NAME
|
NAME
|
||||||
@ -655,9 +682,11 @@ absl_cc_library(
|
|||||||
${ABSL_DEFAULT_COPTS}
|
${ABSL_DEFAULT_COPTS}
|
||||||
DEPS
|
DEPS
|
||||||
absl::config
|
absl::config
|
||||||
|
absl::container_common
|
||||||
absl::cord
|
absl::cord
|
||||||
absl::hash
|
absl::hash
|
||||||
absl::strings
|
absl::strings
|
||||||
|
absl::type_traits
|
||||||
PUBLIC
|
PUBLIC
|
||||||
)
|
)
|
||||||
|
|
||||||
@ -703,6 +732,7 @@ absl_cc_library(
|
|||||||
absl::base
|
absl::base
|
||||||
absl::config
|
absl::config
|
||||||
absl::exponential_biased
|
absl::exponential_biased
|
||||||
|
absl::no_destructor
|
||||||
absl::raw_logging_internal
|
absl::raw_logging_internal
|
||||||
absl::sample_recorder
|
absl::sample_recorder
|
||||||
absl::synchronization
|
absl::synchronization
|
||||||
@ -756,7 +786,9 @@ absl_cc_library(
|
|||||||
COPTS
|
COPTS
|
||||||
${ABSL_DEFAULT_COPTS}
|
${ABSL_DEFAULT_COPTS}
|
||||||
DEPS
|
DEPS
|
||||||
|
absl::config
|
||||||
absl::container_memory
|
absl::container_memory
|
||||||
|
absl::core_headers
|
||||||
absl::raw_hash_set
|
absl::raw_hash_set
|
||||||
absl::throw_delegate
|
absl::throw_delegate
|
||||||
PUBLIC
|
PUBLIC
|
||||||
@ -817,6 +849,7 @@ absl_cc_library(
|
|||||||
DEPS
|
DEPS
|
||||||
absl::config
|
absl::config
|
||||||
absl::core_headers
|
absl::core_headers
|
||||||
|
absl::debugging_internal
|
||||||
absl::meta
|
absl::meta
|
||||||
absl::strings
|
absl::strings
|
||||||
absl::span
|
absl::span
|
||||||
@ -931,6 +964,7 @@ absl_cc_library(
|
|||||||
absl::crc32c
|
absl::crc32c
|
||||||
absl::config
|
absl::config
|
||||||
absl::strings
|
absl::strings
|
||||||
|
absl::no_destructor
|
||||||
)
|
)
|
||||||
|
|
||||||
set(DIR ${ABSL_ROOT_DIR}/absl/debugging)
|
set(DIR ${ABSL_ROOT_DIR}/absl/debugging)
|
||||||
@ -954,6 +988,8 @@ absl_cc_library(
|
|||||||
"${DIR}/stacktrace.cc"
|
"${DIR}/stacktrace.cc"
|
||||||
COPTS
|
COPTS
|
||||||
${ABSL_DEFAULT_COPTS}
|
${ABSL_DEFAULT_COPTS}
|
||||||
|
LINKOPTS
|
||||||
|
$<$<BOOL:${EXECINFO_LIBRARY}>:${EXECINFO_LIBRARY}>
|
||||||
DEPS
|
DEPS
|
||||||
absl::debugging_internal
|
absl::debugging_internal
|
||||||
absl::config
|
absl::config
|
||||||
@ -980,6 +1016,7 @@ absl_cc_library(
|
|||||||
${ABSL_DEFAULT_COPTS}
|
${ABSL_DEFAULT_COPTS}
|
||||||
LINKOPTS
|
LINKOPTS
|
||||||
${ABSL_DEFAULT_LINKOPTS}
|
${ABSL_DEFAULT_LINKOPTS}
|
||||||
|
$<$<BOOL:${MINGW}>:-ldbghelp>
|
||||||
DEPS
|
DEPS
|
||||||
absl::debugging_internal
|
absl::debugging_internal
|
||||||
absl::demangle_internal
|
absl::demangle_internal
|
||||||
@ -1058,8 +1095,10 @@ absl_cc_library(
|
|||||||
demangle_internal
|
demangle_internal
|
||||||
HDRS
|
HDRS
|
||||||
"${DIR}/internal/demangle.h"
|
"${DIR}/internal/demangle.h"
|
||||||
|
"${DIR}/internal/demangle_rust.h"
|
||||||
SRCS
|
SRCS
|
||||||
"${DIR}/internal/demangle.cc"
|
"${DIR}/internal/demangle.cc"
|
||||||
|
"${DIR}/internal/demangle_rust.cc"
|
||||||
COPTS
|
COPTS
|
||||||
${ABSL_DEFAULT_COPTS}
|
${ABSL_DEFAULT_COPTS}
|
||||||
DEPS
|
DEPS
|
||||||
@ -1252,6 +1291,7 @@ absl_cc_library(
|
|||||||
absl::strings
|
absl::strings
|
||||||
absl::synchronization
|
absl::synchronization
|
||||||
absl::flat_hash_map
|
absl::flat_hash_map
|
||||||
|
absl::no_destructor
|
||||||
)
|
)
|
||||||
|
|
||||||
# Internal-only target, do not depend on directly.
|
# Internal-only target, do not depend on directly.
|
||||||
@ -1283,12 +1323,9 @@ absl_cc_library(
|
|||||||
absl_cc_library(
|
absl_cc_library(
|
||||||
NAME
|
NAME
|
||||||
flags
|
flags
|
||||||
SRCS
|
|
||||||
"${DIR}/flag.cc"
|
|
||||||
HDRS
|
HDRS
|
||||||
"${DIR}/declare.h"
|
"${DIR}/declare.h"
|
||||||
"${DIR}/flag.h"
|
"${DIR}/flag.h"
|
||||||
"${DIR}/internal/flag_msvc.inc"
|
|
||||||
COPTS
|
COPTS
|
||||||
${ABSL_DEFAULT_COPTS}
|
${ABSL_DEFAULT_COPTS}
|
||||||
LINKOPTS
|
LINKOPTS
|
||||||
@ -1299,7 +1336,6 @@ absl_cc_library(
|
|||||||
absl::flags_config
|
absl::flags_config
|
||||||
absl::flags_internal
|
absl::flags_internal
|
||||||
absl::flags_reflection
|
absl::flags_reflection
|
||||||
absl::base
|
|
||||||
absl::core_headers
|
absl::core_headers
|
||||||
absl::strings
|
absl::strings
|
||||||
)
|
)
|
||||||
@ -1379,6 +1415,9 @@ absl_cc_library(
|
|||||||
absl::synchronization
|
absl::synchronization
|
||||||
)
|
)
|
||||||
|
|
||||||
|
############################################################################
|
||||||
|
# Unit tests in alphabetical order.
|
||||||
|
|
||||||
set(DIR ${ABSL_ROOT_DIR}/absl/functional)
|
set(DIR ${ABSL_ROOT_DIR}/absl/functional)
|
||||||
|
|
||||||
absl_cc_library(
|
absl_cc_library(
|
||||||
@ -1431,6 +1470,18 @@ absl_cc_library(
|
|||||||
PUBLIC
|
PUBLIC
|
||||||
)
|
)
|
||||||
|
|
||||||
|
absl_cc_library(
|
||||||
|
NAME
|
||||||
|
overload
|
||||||
|
HDRS
|
||||||
|
"${DIR}/overload.h"
|
||||||
|
COPTS
|
||||||
|
${ABSL_DEFAULT_COPTS}
|
||||||
|
DEPS
|
||||||
|
absl::meta
|
||||||
|
PUBLIC
|
||||||
|
)
|
||||||
|
|
||||||
set(DIR ${ABSL_ROOT_DIR}/absl/hash)
|
set(DIR ${ABSL_ROOT_DIR}/absl/hash)
|
||||||
|
|
||||||
absl_cc_library(
|
absl_cc_library(
|
||||||
@ -1640,6 +1691,7 @@ absl_cc_library(
|
|||||||
absl::log_internal_conditions
|
absl::log_internal_conditions
|
||||||
absl::log_internal_message
|
absl::log_internal_message
|
||||||
absl::log_internal_strip
|
absl::log_internal_strip
|
||||||
|
absl::absl_vlog_is_on
|
||||||
)
|
)
|
||||||
|
|
||||||
absl_cc_library(
|
absl_cc_library(
|
||||||
@ -1721,6 +1773,7 @@ absl_cc_library(
|
|||||||
absl::log_entry
|
absl::log_entry
|
||||||
absl::log_severity
|
absl::log_severity
|
||||||
absl::log_sink
|
absl::log_sink
|
||||||
|
absl::no_destructor
|
||||||
absl::raw_logging_internal
|
absl::raw_logging_internal
|
||||||
absl::synchronization
|
absl::synchronization
|
||||||
absl::span
|
absl::span
|
||||||
@ -1771,6 +1824,7 @@ absl_cc_library(
|
|||||||
LINKOPTS
|
LINKOPTS
|
||||||
${ABSL_DEFAULT_LINKOPTS}
|
${ABSL_DEFAULT_LINKOPTS}
|
||||||
DEPS
|
DEPS
|
||||||
|
absl::core_headers
|
||||||
absl::log_internal_message
|
absl::log_internal_message
|
||||||
absl::log_internal_nullstream
|
absl::log_internal_nullstream
|
||||||
absl::log_severity
|
absl::log_severity
|
||||||
@ -1876,6 +1930,11 @@ absl_cc_library(
|
|||||||
PUBLIC
|
PUBLIC
|
||||||
)
|
)
|
||||||
|
|
||||||
|
# Warning: Many linkers will strip the contents of this library because its
|
||||||
|
# symbols are only used in a global constructor. A workaround is for clients
|
||||||
|
# to link this using $<LINK_LIBRARY:WHOLE_ARCHIVE,absl::log_flags> instead of
|
||||||
|
# the plain absl::log_flags.
|
||||||
|
# TODO(b/320467376): Implement the equivalent of Bazel's alwayslink=True.
|
||||||
absl_cc_library(
|
absl_cc_library(
|
||||||
NAME
|
NAME
|
||||||
log_flags
|
log_flags
|
||||||
@@ -1897,6 +1956,7 @@ absl_cc_library(
     absl::flags
     absl::flags_marshalling
     absl::strings
+    absl::vlog_config_internal
   PUBLIC
 )

@@ -1919,6 +1979,7 @@ absl_cc_library(
     absl::log_severity
     absl::raw_logging_internal
     absl::strings
+    absl::vlog_config_internal
 )

 absl_cc_library(
@@ -1952,6 +2013,7 @@ absl_cc_library(
     ${ABSL_DEFAULT_LINKOPTS}
   DEPS
     absl::log_internal_log_impl
+    absl::vlog_is_on
   PUBLIC
 )

@@ -2064,21 +2126,75 @@ absl_cc_library(
 )

 absl_cc_library(
   NAME
-    log_internal_fnmatch
+    vlog_config_internal
   SRCS
-    "${DIR}/internal/fnmatch.cc"
+    "${DIR}/internal/vlog_config.cc"
   HDRS
-    "${DIR}/internal/fnmatch.h"
+    "${DIR}/internal/vlog_config.h"
   COPTS
     ${ABSL_DEFAULT_COPTS}
   LINKOPTS
     ${ABSL_DEFAULT_LINKOPTS}
   DEPS
-    absl::config
-    absl::strings
+    absl::base
+    absl::config
+    absl::core_headers
+    absl::log_internal_fnmatch
+    absl::memory
+    absl::no_destructor
+    absl::strings
+    absl::synchronization
+    absl::optional
 )

+absl_cc_library(
+  NAME
+    absl_vlog_is_on
+  COPTS
+    ${ABSL_DEFAULT_COPTS}
+  LINKOPTS
+    ${ABSL_DEFAULT_LINKOPTS}
+  HDRS
+    "${DIR}/absl_vlog_is_on.h"
+  DEPS
+    absl::vlog_config_internal
+    absl::config
+    absl::core_headers
+    absl::strings
+)
+
+absl_cc_library(
+  NAME
+    vlog_is_on
+  COPTS
+    ${ABSL_DEFAULT_COPTS}
+  LINKOPTS
+    ${ABSL_DEFAULT_LINKOPTS}
+  HDRS
+    "${DIR}/vlog_is_on.h"
+  DEPS
+    absl::absl_vlog_is_on
+)
+
+absl_cc_library(
+  NAME
+    log_internal_fnmatch
+  SRCS
+    "${DIR}/internal/fnmatch.cc"
+  HDRS
+    "${DIR}/internal/fnmatch.h"
+  COPTS
+    ${ABSL_DEFAULT_COPTS}
+  LINKOPTS
+    ${ABSL_DEFAULT_LINKOPTS}
+  DEPS
+    absl::config
+    absl::strings
+)
+
+# Test targets
+
 set(DIR ${ABSL_ROOT_DIR}/absl/memory)

 absl_cc_library(
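For context, the `absl_vlog_is_on`/`vlog_is_on` targets added above back Abseil's verbose-logging checks, with per-module verbosity resolved through `vlog_config_internal`. A minimal usage sketch in C++, assuming an Abseil revision that ships these headers (such as the one this commit vendors):

    #include "absl/log/log.h"         // LOG(INFO)
    #include "absl/log/vlog_is_on.h"  // VLOG_IS_ON(level)

    void Frobnicate() {
      // The check is cheap when verbose logging is off; the expensive trace
      // is only built when the verbosity for this file is at least 2.
      if (VLOG_IS_ON(2)) {
        LOG(INFO) << "detailed frobnication trace";
      }
    }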
@@ -2147,6 +2263,7 @@ absl_cc_library(
   COPTS
     ${ABSL_DEFAULT_COPTS}
   DEPS
+    absl::compare
     absl::config
     absl::core_headers
     absl::bits
@@ -2176,6 +2293,8 @@ absl_cc_library(
   PUBLIC
 )

+set(DIR ${ABSL_ROOT_DIR}/absl/profiling)
+
 absl_cc_library(
   NAME
     sample_recorder
@@ -2188,8 +2307,6 @@ absl_cc_library(
     absl::synchronization
 )

-set(DIR ${ABSL_ROOT_DIR}/absl/profiling)
-
 absl_cc_library(
   NAME
     exponential_biased
@@ -2265,6 +2382,7 @@ absl_cc_library(
   LINKOPTS
     ${ABSL_DEFAULT_LINKOPTS}
   DEPS
+    absl::config
     absl::fast_type_id
     absl::optional
 )
@@ -2336,11 +2454,13 @@ absl_cc_library(
   DEPS
     absl::config
     absl::inlined_vector
+    absl::nullability
     absl::random_internal_pool_urbg
     absl::random_internal_salted_seed_seq
     absl::random_internal_seed_material
     absl::random_seed_gen_exception
     absl::span
+    absl::string_view
 )

 # Internal-only target, do not depend on directly.
@@ -2399,6 +2519,7 @@ absl_cc_library(
     ${ABSL_DEFAULT_COPTS}
   LINKOPTS
     ${ABSL_DEFAULT_LINKOPTS}
+    $<$<BOOL:${MINGW}>:-lbcrypt>
   DEPS
     absl::core_headers
     absl::optional
@@ -2658,6 +2779,29 @@ absl_cc_library(
     absl::config
 )

+# Internal-only target, do not depend on directly.
+absl_cc_library(
+  NAME
+    random_internal_distribution_test_util
+  SRCS
+    "${DIR}/internal/chi_square.cc"
+    "${DIR}/internal/distribution_test_util.cc"
+  HDRS
+    "${DIR}/internal/chi_square.h"
+    "${DIR}/internal/distribution_test_util.h"
+  COPTS
+    ${ABSL_DEFAULT_COPTS}
+  LINKOPTS
+    ${ABSL_DEFAULT_LINKOPTS}
+  DEPS
+    absl::config
+    absl::core_headers
+    absl::raw_logging_internal
+    absl::strings
+    absl::str_format
+    absl::span
+)
+
 # Internal-only target, do not depend on directly.
 absl_cc_library(
   NAME
@@ -2699,6 +2843,8 @@ absl_cc_library(
     absl::function_ref
     absl::inlined_vector
     absl::memory
+    absl::no_destructor
+    absl::nullability
     absl::optional
     absl::raw_logging_internal
     absl::span
@@ -2724,8 +2870,11 @@ absl_cc_library(
     absl::base
     absl::config
     absl::core_headers
+    absl::has_ostream_operator
+    absl::nullability
     absl::raw_logging_internal
     absl::status
+    absl::str_format
     absl::strings
     absl::type_traits
     absl::utility
@@ -2748,6 +2897,7 @@ absl_cc_library(
     absl::base
     absl::config
     absl::core_headers
+    absl::nullability
     absl::throw_delegate
   PUBLIC
 )
@@ -2762,6 +2912,7 @@ absl_cc_library(
     "${DIR}/has_absl_stringify.h"
     "${DIR}/internal/damerau_levenshtein_distance.h"
     "${DIR}/internal/string_constant.h"
+    "${DIR}/internal/has_absl_stringify.h"
     "${DIR}/match.h"
     "${DIR}/numbers.h"
     "${DIR}/str_cat.h"
@@ -2805,6 +2956,7 @@ absl_cc_library(
     absl::endian
     absl::int128
     absl::memory
+    absl::nullability
     absl::raw_logging_internal
     absl::throw_delegate
     absl::type_traits
@@ -2824,6 +2976,18 @@ absl_cc_library(
   PUBLIC
 )

+absl_cc_library(
+  NAME
+    has_ostream_operator
+  HDRS
+    "${DIR}/has_ostream_operator.h"
+  COPTS
+    ${ABSL_DEFAULT_COPTS}
+  DEPS
+    absl::config
+  PUBLIC
+)
+
 # Internal-only target, do not depend on directly.
 absl_cc_library(
   NAME
@@ -2855,7 +3019,12 @@ absl_cc_library(
   COPTS
     ${ABSL_DEFAULT_COPTS}
   DEPS
+    absl::config
+    absl::core_headers
+    absl::nullability
+    absl::span
     absl::str_format_internal
+    absl::string_view
   PUBLIC
 )

@@ -2886,6 +3055,7 @@ absl_cc_library(
     absl::strings
     absl::config
     absl::core_headers
+    absl::fixed_array
     absl::inlined_vector
     absl::numeric_representation
     absl::type_traits
@@ -2989,6 +3159,7 @@ absl_cc_library(
   DEPS
     absl::base
     absl::config
+    absl::no_destructor
     absl::raw_logging_internal
     absl::synchronization
 )
@@ -3079,6 +3250,7 @@ absl_cc_library(
     absl::endian
     absl::function_ref
     absl::inlined_vector
+    absl::nullability
     absl::optional
     absl::raw_logging_internal
     absl::span
@@ -3246,6 +3418,8 @@ absl_cc_library(
     ${ABSL_DEFAULT_COPTS}
   DEPS
     Threads::Threads
+    # TODO(#1495): Use $<LINK_LIBRARY:FRAMEWORK,CoreFoundation> once our
+    # minimum CMake version >= 3.24
     $<$<PLATFORM_ID:Darwin>:-Wl,-framework,CoreFoundation>
 )

@@ -3286,8 +3460,8 @@ absl_cc_library(
   NAME
     bad_any_cast_impl
   SRCS
-    "${DIR}/bad_any_cast.h"
-    "${DIR}/bad_any_cast.cc"
+    "${DIR}/bad_any_cast.h"
+    "${DIR}/bad_any_cast.cc"
   COPTS
     ${ABSL_DEFAULT_COPTS}
   DEPS
@@ -3307,6 +3481,7 @@ absl_cc_library(
   DEPS
     absl::algorithm
     absl::core_headers
+    absl::nullability
     absl::throw_delegate
     absl::type_traits
   PUBLIC
@@ -3327,6 +3502,7 @@ absl_cc_library(
     absl::config
     absl::core_headers
     absl::memory
+    absl::nullability
     absl::type_traits
     absl::utility
   PUBLIC
@@ -3389,6 +3565,7 @@ absl_cc_library(
   COPTS
     ${ABSL_DEFAULT_COPTS}
   DEPS
+    absl::config
     absl::core_headers
     absl::type_traits
   PUBLIC
contrib/aws (vendored) | 2
@@ -1 +1 @@
-Subproject commit deeaa9e7c5fe690e3dacc4005d7ecfa7a66a32bb
+Subproject commit 1c2946bfcb7f1e3ae0a858de0b59d4f1a7b4ccaf

contrib/cld2 (vendored) | 2
@@ -1 +1 @@
-Subproject commit bc6d493a2f64ed1fc1c4c4b4294a542a04e04217
+Subproject commit 217ba8b8805b41557faadaa47bb6e99f2242eea3

contrib/fmtlib (vendored) | 2
@@ -1 +1 @@
-Subproject commit b6f4ceaed0a0a24ccf575fab6c56dd50ccf6f1a9
+Subproject commit a33701196adfad74917046096bf5a2aa0ab0bb50

@@ -13,7 +13,6 @@ set (SRCS
     ${FMT_SOURCE_DIR}/include/fmt/core.h
     ${FMT_SOURCE_DIR}/include/fmt/format.h
     ${FMT_SOURCE_DIR}/include/fmt/format-inl.h
-    ${FMT_SOURCE_DIR}/include/fmt/locale.h
     ${FMT_SOURCE_DIR}/include/fmt/os.h
     ${FMT_SOURCE_DIR}/include/fmt/ostream.h
     ${FMT_SOURCE_DIR}/include/fmt/printf.h

contrib/googletest (vendored) | 2
@@ -1 +1 @@
-Subproject commit e47544ad31cb3ceecd04cc13e8fe556f8df9fe0b
+Subproject commit a7f443b80b105f940225332ed3c31f2790092f47

contrib/openssl (vendored) | 2
@@ -1 +1 @@
-Subproject commit f7b8721dfc66abb147f24ca07b9c9d1d64f40f71
+Subproject commit 67c0b63e578e4c751ac9edf490f5a96124fff8dc

contrib/orc (vendored) | 2
@@ -1 +1 @@
-Subproject commit e24f2c2a3ca0769c96704ab20ad6f512a83ea2ad
+Subproject commit 947cebaf9432d708253ac08dc3012daa6b4ede6f

@@ -41,8 +41,7 @@
     "docker/test/stateless": {
         "name": "clickhouse/stateless-test",
         "dependent": [
-            "docker/test/stateful",
-            "docker/test/unit"
+            "docker/test/stateful"
         ]
     },
     "docker/test/stateful": {
@@ -122,15 +121,16 @@
     "docker/test/base": {
         "name": "clickhouse/test-base",
         "dependent": [
+            "docker/test/clickbench",
             "docker/test/fuzzer",
-            "docker/test/libfuzzer",
             "docker/test/integration/base",
             "docker/test/keeper-jepsen",
+            "docker/test/libfuzzer",
             "docker/test/server-jepsen",
             "docker/test/sqllogic",
             "docker/test/sqltest",
-            "docker/test/clickbench",
-            "docker/test/stateless"
+            "docker/test/stateless",
+            "docker/test/unit"
         ]
     },
     "docker/test/integration/kerberized_hadoop": {
@@ -34,7 +34,7 @@ RUN arch=${TARGETARCH:-amd64} \
 # lts / testing / prestable / etc
 ARG REPO_CHANNEL="stable"
 ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
-ARG VERSION="24.5.1.1763"
+ARG VERSION="24.5.3.5"
 ARG PACKAGES="clickhouse-keeper"
 ARG DIRECT_DOWNLOAD_URLS=""

@@ -32,7 +32,7 @@ RUN arch=${TARGETARCH:-amd64} \
 # lts / testing / prestable / etc
 ARG REPO_CHANNEL="stable"
 ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
-ARG VERSION="24.5.1.1763"
+ARG VERSION="24.5.3.5"
 ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"
 ARG DIRECT_DOWNLOAD_URLS=""

@@ -28,7 +28,7 @@ RUN sed -i "s|http://archive.ubuntu.com|${apt_archive}|g" /etc/apt/sources.list

 ARG REPO_CHANNEL="stable"
 ARG REPOSITORY="deb [signed-by=/usr/share/keyrings/clickhouse-keyring.gpg] https://packages.clickhouse.com/deb ${REPO_CHANNEL} main"
-ARG VERSION="24.5.1.1763"
+ARG VERSION="24.5.3.5"
 ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"

 #docker-official-library:off

@@ -208,6 +208,7 @@ handle SIGPIPE nostop noprint pass
 handle SIGTERM nostop noprint pass
 handle SIGUSR1 nostop noprint pass
 handle SIGUSR2 nostop noprint pass
+handle SIGSEGV nostop pass
 handle SIG$RTMIN nostop noprint pass
 info signals
 continue

@@ -20,6 +20,7 @@ handle SIGPIPE nostop noprint pass
 handle SIGTERM nostop noprint pass
 handle SIGUSR1 nostop noprint pass
 handle SIGUSR2 nostop noprint pass
+handle SIGSEGV nostop pass
 handle SIG$RTMIN nostop noprint pass
 info signals
 continue
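The added `handle SIGSEGV nostop pass` tells gdb neither to stop on a segfault nor to swallow it, so the debugged server's own signal handler still runs and can print its stack trace. A sketch of how a gdb command file like the ones above is typically driven non-interactively (the file name and process lookup are illustrative):

    # debug.gdb contains the handle/info signals/continue lines shown above
    gdb -batch -x debug.gdb -p "$(pidof clickhouse-server)"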
@@ -285,7 +285,7 @@ stop_logs_replication

 # Try to get logs while server is running
 failed_to_save_logs=0
-for table in query_log zookeeper_log trace_log transactions_info_log metric_log
+for table in query_log zookeeper_log trace_log transactions_info_log metric_log blob_storage_log
 do
     err=$(clickhouse-client -q "select * from system.$table into outfile '/test_output/$table.tsv.gz' format TSVWithNamesAndTypes")
     echo "$err"
@@ -339,7 +339,7 @@ if [ $failed_to_save_logs -ne 0 ]; then
     # directly
     # - even though ci auto-compress some files (but not *.tsv) it does this only
     # for files >64MB, we want this files to be compressed explicitly
-    for table in query_log zookeeper_log trace_log transactions_info_log metric_log
+    for table in query_log zookeeper_log trace_log transactions_info_log metric_log blob_storage_log
     do
         clickhouse-local "$data_path_config" --only-system-tables --stacktrace -q "select * from system.$table format TSVWithNamesAndTypes" | zstd --threads=0 > /test_output/$table.tsv.zst ||:
         if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]; then
@@ -10,21 +10,33 @@ RUN sed -i "s|http://archive.ubuntu.com|$apt_archive|g" /etc/apt/sources.list
 RUN apt-get update && env DEBIAN_FRONTEND=noninteractive apt-get install --yes \
     aspell \
     curl \
-    git \
-    gh \
     file \
+    gh \
+    git \
     libxml2-utils \
+    locales \
     moreutils \
-    python3-fuzzywuzzy \
     python3-pip \
     yamllint \
-    locales \
+    zstd \
     && apt-get clean \
     && rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/*

 # python-magic is the same version as in Ubuntu 22.04
-RUN pip3 install black==23.12.0 boto3 codespell==2.2.1 mypy==1.8.0 PyGithub unidiff pylint==3.1.0 \
-    python-magic==0.4.24 requests types-requests \
+RUN pip3 install \
+    PyGithub \
+    black==23.12.0 \
+    boto3 \
+    codespell==2.2.1 \
+    mypy==1.8.0 \
+    pylint==3.1.0 \
+    python-magic==0.4.24 \
+    flake8==4.0.1 \
+    requests \
+    thefuzz \
+    tqdm==4.66.4 \
+    types-requests \
+    unidiff \
     && rm -rf /root/.cache/pip

 RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && locale-gen en_US.UTF-8

@@ -9,6 +9,8 @@ echo "Check style" | ts
 ./check-style -n |& tee /test_output/style_output.txt
 echo "Check python formatting with black" | ts
 ./check-black -n |& tee /test_output/black_output.txt
+echo "Check python with flake8" | ts
+./check-flake8 |& tee /test_output/flake8_output.txt
 echo "Check python type hinting with mypy" | ts
 ./check-mypy -n |& tee /test_output/mypy_output.txt
 echo "Check typos" | ts

@@ -1,9 +1,7 @@
 # rebuild in #33610
 # docker build -t clickhouse/unit-test .
 ARG FROM_TAG=latest
-FROM clickhouse/stateless-test:$FROM_TAG
+FROM clickhouse/test-base:$FROM_TAG

-RUN apt-get install gdb
-
 COPY run.sh /
 CMD ["/bin/bash", "/run.sh"]
@@ -25,7 +25,8 @@ azurite-blob --blobHost 0.0.0.0 --blobPort 10000 --debug /azurite_log &
 ./setup_minio.sh stateless # to have a proper environment

 echo "Get previous release tag"
-previous_release_tag=$(dpkg --info package_folder/clickhouse-client*.deb | grep "Version: " | awk '{print $2}' | cut -f1 -d'+' | get_previous_release_tag)
+# shellcheck disable=SC2016
+previous_release_tag=$(dpkg-deb --showformat='${Version}' --show package_folder/clickhouse-client*.deb | get_previous_release_tag)
 echo $previous_release_tag

 echo "Clone previous release repository"
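The replacement asks dpkg-deb directly for the Version field instead of scraping `dpkg --info` output; the `# shellcheck disable=SC2016` directive is needed because `${Version}` is intentionally left unexpanded by the shell and interpreted by dpkg-deb itself. A sketch of the difference, with a hypothetical package file name:

    $ dpkg --info clickhouse-client_24.5.3.5+stable_amd64.deb | grep "Version: "
     Version: 24.5.3.5+stable
    $ dpkg-deb --showformat='${Version}' --show clickhouse-client_24.5.3.5+stable_amd64.deb
    24.5.3.5+stable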
@@ -65,46 +66,22 @@ function save_settings_clean()
     script -q -c "clickhouse-local -q \"select * from system.settings into outfile '$out'\"" --log-out /dev/null
 }

+# We save the (numeric) version of the old server to compare setting changes between the 2
+# We do this since we are testing against the latest release, not taking into account release candidates, so we might
+# be testing current master (24.6) against the latest stable release (24.4)
+function save_major_version()
+{
+    local out=$1 && shift
+    clickhouse-local -q "SELECT a[1]::UInt64 * 100 + a[2]::UInt64 as v FROM (Select splitByChar('.', version()) as a) into outfile '$out'"
+}
+
 save_settings_clean 'old_settings.native'
+save_major_version 'old_version.native'

 # Initial run without S3 to create system.*_log on local file system to make it
 # available for dump via clickhouse-local
 configure

-function remove_keeper_config()
-{
-    sudo sed -i "/<$1>$2<\/$1>/d" /etc/clickhouse-server/config.d/keeper_port.xml
-}
-
-# async_replication setting doesn't exist on some older versions
-remove_keeper_config "async_replication" "1"
-
-# create_if_not_exists feature flag doesn't exist on some older versions
-remove_keeper_config "create_if_not_exists" "[01]"
-
-#todo: remove these after 24.3 released.
-sudo sed -i "s|<object_storage_type>azure<|<object_storage_type>azure_blob_storage<|" /etc/clickhouse-server/config.d/azure_storage_conf.xml
-
-#todo: remove these after 24.3 released.
-sudo sed -i "s|<object_storage_type>local<|<object_storage_type>local_blob_storage<|" /etc/clickhouse-server/config.d/storage_conf.xml
-
-# latest_logs_cache_size_threshold setting doesn't exist on some older versions
-remove_keeper_config "latest_logs_cache_size_threshold" "[[:digit:]]\+"
-
-# commit_logs_cache_size_threshold setting doesn't exist on some older versions
-remove_keeper_config "commit_logs_cache_size_threshold" "[[:digit:]]\+"
-
-# it contains some new settings, but we can safely remove it
-rm /etc/clickhouse-server/config.d/merge_tree.xml
-rm /etc/clickhouse-server/config.d/enable_wait_for_shutdown_replicated_tables.xml
-rm /etc/clickhouse-server/config.d/zero_copy_destructive_operations.xml
-rm /etc/clickhouse-server/config.d/storage_conf_02963.xml
-rm /etc/clickhouse-server/config.d/backoff_failed_mutation.xml
-rm /etc/clickhouse-server/config.d/handlers.yaml
-rm /etc/clickhouse-server/users.d/nonconst_timezone.xml
-rm /etc/clickhouse-server/users.d/s3_cache_new.xml
-rm /etc/clickhouse-server/users.d/replicated_ddl_entry.xml
-
 start
 stop
 mv /var/log/clickhouse-server/clickhouse-server.log /var/log/clickhouse-server/clickhouse-server.initial.log
@@ -116,44 +93,11 @@ export USE_S3_STORAGE_FOR_MERGE_TREE=1
 export ZOOKEEPER_FAULT_INJECTION=0
 configure

-# force_sync=false doesn't work correctly on some older versions
-sudo sed -i "s|<force_sync>false</force_sync>|<force_sync>true</force_sync>|" /etc/clickhouse-server/config.d/keeper_port.xml
-
-#todo: remove these after 24.3 released.
-sudo sed -i "s|<object_storage_type>azure<|<object_storage_type>azure_blob_storage<|" /etc/clickhouse-server/config.d/azure_storage_conf.xml
-
-#todo: remove these after 24.3 released.
-sudo sed -i "s|<object_storage_type>local<|<object_storage_type>local_blob_storage<|" /etc/clickhouse-server/config.d/storage_conf.xml
-
-# async_replication setting doesn't exist on some older versions
-remove_keeper_config "async_replication" "1"
-
-# create_if_not_exists feature flag doesn't exist on some older versions
-remove_keeper_config "create_if_not_exists" "[01]"
-
-# latest_logs_cache_size_threshold setting doesn't exist on some older versions
-remove_keeper_config "latest_logs_cache_size_threshold" "[[:digit:]]\+"
-
-# commit_logs_cache_size_threshold setting doesn't exist on some older versions
-remove_keeper_config "commit_logs_cache_size_threshold" "[[:digit:]]\+"
-
 # But we still need default disk because some tables loaded only into it
 sudo sed -i "s|<main><disk>s3</disk></main>|<main><disk>s3</disk></main><default><disk>default</disk></default>|" /etc/clickhouse-server/config.d/s3_storage_policy_by_default.xml
 sudo chown clickhouse /etc/clickhouse-server/config.d/s3_storage_policy_by_default.xml
 sudo chgrp clickhouse /etc/clickhouse-server/config.d/s3_storage_policy_by_default.xml

-# it contains some new settings, but we can safely remove it
-rm /etc/clickhouse-server/config.d/merge_tree.xml
-rm /etc/clickhouse-server/config.d/enable_wait_for_shutdown_replicated_tables.xml
-rm /etc/clickhouse-server/config.d/zero_copy_destructive_operations.xml
-rm /etc/clickhouse-server/config.d/storage_conf_02963.xml
-rm /etc/clickhouse-server/config.d/backoff_failed_mutation.xml
-rm /etc/clickhouse-server/config.d/handlers.yaml
-rm /etc/clickhouse-server/config.d/block_number.xml
-rm /etc/clickhouse-server/users.d/nonconst_timezone.xml
-rm /etc/clickhouse-server/users.d/s3_cache_new.xml
-rm /etc/clickhouse-server/users.d/replicated_ddl_entry.xml
-
 start

 clickhouse-client --query="SELECT 'Server version: ', version()"
@@ -192,6 +136,7 @@ then
     save_settings_clean 'new_settings.native'
     clickhouse-local -nmq "
        CREATE TABLE old_settings AS file('old_settings.native');
+       CREATE TABLE old_version AS file('old_version.native');
        CREATE TABLE new_settings AS file('new_settings.native');

        SELECT
@@ -202,8 +147,11 @@ then
        LEFT JOIN old_settings ON new_settings.name = old_settings.name
        WHERE (new_settings.value != old_settings.value) AND (name NOT IN (
            SELECT arrayJoin(tupleElement(changes, 'name'))
-           FROM system.settings_changes
-           WHERE version = extract(version(), '^(?:\\d+\\.\\d+)')
+           FROM
+           (
+               SELECT *, splitByChar('.', version) AS version_array FROM system.settings_changes
+           )
+           WHERE (version_array[1]::UInt64 * 100 + version_array[2]::UInt64) > (SELECT v FROM old_version LIMIT 1)
        ))
        SETTINGS join_use_nulls = 1
        INTO OUTFILE 'changed_settings.txt'
@@ -216,8 +164,11 @@ then
            FROM old_settings
        )) AND (name NOT IN (
            SELECT arrayJoin(tupleElement(changes, 'name'))
-           FROM system.settings_changes
-           WHERE version = extract(version(), '^(?:\\d+\\.\\d+)')
+           FROM
+           (
+               SELECT *, splitByChar('.', version) AS version_array FROM system.settings_changes
+           )
+           WHERE (version_array[1]::UInt64 * 100 + version_array[2]::UInt64) > (SELECT v FROM old_version LIMIT 1)
        ))
        INTO OUTFILE 'new_settings.txt'
        FORMAT PrettyCompactNoEscapes;
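The filter above orders releases by encoding MAJOR.MINOR as MAJOR*100+MINOR, so only settings changed after the old server's release are excluded from the report. A quick illustration runnable with clickhouse-local (the version literal is illustrative):

    SELECT
        splitByChar('.', '24.5.3.5') AS a,
        a[1]::UInt64 * 100 + a[2]::UInt64 AS encoded
    -- encoded = 2405; an old 24.4 server stores 2404, so 2405 > 2404 and
    -- entries introduced in 24.5 count as expected changes against that server.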
docs/changelogs/v23.8.15.35-lts.md (new file) | 40
@@ -0,0 +1,40 @@
+---
+sidebar_position: 1
+sidebar_label: 2024
+---
+
+# 2024 Changelog
+
+### ClickHouse release v23.8.15.35-lts (060ff8e813a) FIXME as compared to v23.8.14.6-lts (967e51c1d6b)
+
+#### Build/Testing/Packaging Improvement
+* Backported in [#63621](https://github.com/ClickHouse/ClickHouse/issues/63621): The Dockerfile is reviewed by the docker official library in https://github.com/docker-library/official-images/pull/15846. [#63400](https://github.com/ClickHouse/ClickHouse/pull/63400) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
+* Backported in [#65153](https://github.com/ClickHouse/ClickHouse/issues/65153): Decrease the `unit-test` image a few times. [#65102](https://github.com/ClickHouse/ClickHouse/pull/65102) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
+
+#### Bug Fix (user-visible misbehavior in an official stable release)
+
+* Backported in [#64422](https://github.com/ClickHouse/ClickHouse/issues/64422): Fixes [#59989](https://github.com/ClickHouse/ClickHouse/issues/59989): runs init scripts when force-enabled or when no database exists, rather than the inverse. [#59991](https://github.com/ClickHouse/ClickHouse/pull/59991) ([jktng](https://github.com/jktng)).
+* Backported in [#64016](https://github.com/ClickHouse/ClickHouse/issues/64016): Fix "Invalid storage definition in metadata file" for parameterized views. [#60708](https://github.com/ClickHouse/ClickHouse/pull/60708) ([Azat Khuzhin](https://github.com/azat)).
+* Backported in [#63456](https://github.com/ClickHouse/ClickHouse/issues/63456): Fix the issue where the function `addDays` (and similar functions) reports an error when the first parameter is `DateTime64`. [#61561](https://github.com/ClickHouse/ClickHouse/pull/61561) ([Shuai li](https://github.com/loneylee)).
+* Backported in [#63289](https://github.com/ClickHouse/ClickHouse/issues/63289): Fix crash with untuple and unresolved lambda. [#63131](https://github.com/ClickHouse/ClickHouse/pull/63131) ([Raúl Marín](https://github.com/Algunenano)).
+* Backported in [#63512](https://github.com/ClickHouse/ClickHouse/issues/63512): Fix `X-ClickHouse-Timezone` header returning wrong timezone when using `session_timezone` as query level setting. [#63377](https://github.com/ClickHouse/ClickHouse/pull/63377) ([Andrey Zvonov](https://github.com/zvonand)).
+* Backported in [#63902](https://github.com/ClickHouse/ClickHouse/issues/63902): `query_plan_remove_redundant_distinct` can break queries with WINDOW FUNCTIONS (with `allow_experimental_analyzer` is on). Fixes [#62820](https://github.com/ClickHouse/ClickHouse/issues/62820). [#63776](https://github.com/ClickHouse/ClickHouse/pull/63776) ([Igor Nikonov](https://github.com/devcrafter)).
+* Backported in [#64104](https://github.com/ClickHouse/ClickHouse/issues/64104): Deserialize untrusted binary inputs in a safer way. [#64024](https://github.com/ClickHouse/ClickHouse/pull/64024) ([Robert Schulze](https://github.com/rschu1ze)).
+* Backported in [#64265](https://github.com/ClickHouse/ClickHouse/issues/64265): Prevent LOGICAL_ERROR on CREATE TABLE as MaterializedView. [#64174](https://github.com/ClickHouse/ClickHouse/pull/64174) ([Raúl Marín](https://github.com/Algunenano)).
+* Backported in [#64867](https://github.com/ClickHouse/ClickHouse/issues/64867): Fixed memory possible incorrect memory tracking in several kinds of queries: queries that read any data from S3, queries via http protocol, asynchronous inserts. [#64844](https://github.com/ClickHouse/ClickHouse/pull/64844) ([Anton Popov](https://github.com/CurtizJ)).
+
+#### NO CL CATEGORY
+
+* Backported in [#63704](https://github.com/ClickHouse/ClickHouse/issues/63704):. [#63415](https://github.com/ClickHouse/ClickHouse/pull/63415) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+
+#### NO CL ENTRY
+
+* NO CL ENTRY: 'Installation test has wrong check_state'. [#63994](https://github.com/ClickHouse/ClickHouse/pull/63994) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
+
+#### NOT FOR CHANGELOG / INSIGNIFICANT
+
+* Backported in [#63343](https://github.com/ClickHouse/ClickHouse/issues/63343): The commit url has different pattern. [#63331](https://github.com/ClickHouse/ClickHouse/pull/63331) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
+* Backported in [#63965](https://github.com/ClickHouse/ClickHouse/issues/63965): fix 02124_insert_deduplication_token_multiple_blocks. [#63950](https://github.com/ClickHouse/ClickHouse/pull/63950) ([Han Fei](https://github.com/hanfei1991)).
+* Backported in [#64043](https://github.com/ClickHouse/ClickHouse/issues/64043): Do not create new release in release branch automatically. [#64039](https://github.com/ClickHouse/ClickHouse/pull/64039) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
+* Pin requests to fix the integration tests. [#65183](https://github.com/ClickHouse/ClickHouse/pull/65183) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
+

docs/changelogs/v24.1.6.52-stable.md (new file) | 45
@@ -0,0 +1,45 @@
+---
+sidebar_position: 1
+sidebar_label: 2024
+---
+
+# 2024 Changelog
+
+### ClickHouse release v24.1.6.52-stable (fa09f677bc9) FIXME as compared to v24.1.5.6-stable (7f67181ff31)
+
+#### Improvement
+* Backported in [#60292](https://github.com/ClickHouse/ClickHouse/issues/60292): Copy S3 file GCP fallback to buffer copy in case GCP returned `Internal Error` with `GATEWAY_TIMEOUT` HTTP error code. [#60164](https://github.com/ClickHouse/ClickHouse/pull/60164) ([Maksim Kita](https://github.com/kitaisreal)).
+* Backported in [#60832](https://github.com/ClickHouse/ClickHouse/issues/60832): Update tzdata to 2024a. [#60768](https://github.com/ClickHouse/ClickHouse/pull/60768) ([Raúl Marín](https://github.com/Algunenano)).
+
+#### Bug Fix (user-visible misbehavior in an official stable release)
+
+* Backported in [#60413](https://github.com/ClickHouse/ClickHouse/issues/60413): Fix segmentation fault in KQL parser when the input query exceeds the `max_query_size`. Also re-enable the KQL dialect. Fixes [#59036](https://github.com/ClickHouse/ClickHouse/issues/59036) and [#59037](https://github.com/ClickHouse/ClickHouse/issues/59037). [#59626](https://github.com/ClickHouse/ClickHouse/pull/59626) ([Yong Wang](https://github.com/kashwy)).
+* Backported in [#60074](https://github.com/ClickHouse/ClickHouse/issues/60074): Fix error `Read beyond last offset` for `AsynchronousBoundedReadBuffer`. [#59630](https://github.com/ClickHouse/ClickHouse/pull/59630) ([Vitaly Baranov](https://github.com/vitlibar)).
+* Backported in [#60299](https://github.com/ClickHouse/ClickHouse/issues/60299): Fix having neigher acked nor nacked messages. If exception happens during read-write phase, messages will be nacked. [#59775](https://github.com/ClickHouse/ClickHouse/pull/59775) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Backported in [#60066](https://github.com/ClickHouse/ClickHouse/issues/60066): Fix optimize_uniq_to_count removing the column alias. [#60026](https://github.com/ClickHouse/ClickHouse/pull/60026) ([Raúl Marín](https://github.com/Algunenano)).
+* Backported in [#60638](https://github.com/ClickHouse/ClickHouse/issues/60638): Fixed a bug in parallel optimization for queries with `FINAL`, which could give an incorrect result in rare cases. [#60041](https://github.com/ClickHouse/ClickHouse/pull/60041) ([Maksim Kita](https://github.com/kitaisreal)).
+* Backported in [#60177](https://github.com/ClickHouse/ClickHouse/issues/60177): Fix cosineDistance crash with Nullable. [#60150](https://github.com/ClickHouse/ClickHouse/pull/60150) ([Raúl Marín](https://github.com/Algunenano)).
+* Backported in [#60279](https://github.com/ClickHouse/ClickHouse/issues/60279): Hide sensitive info for `S3Queue` table engine. [#60233](https://github.com/ClickHouse/ClickHouse/pull/60233) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Backported in [#61000](https://github.com/ClickHouse/ClickHouse/issues/61000): Reduce the number of read rows from `system.numbers`. Fixes [#59418](https://github.com/ClickHouse/ClickHouse/issues/59418). [#60546](https://github.com/ClickHouse/ClickHouse/pull/60546) ([JackyWoo](https://github.com/JackyWoo)).
+* Backported in [#60791](https://github.com/ClickHouse/ClickHouse/issues/60791): Fix buffer overflow that can happen if the attacker asks the HTTP server to decompress data with a composition of codecs and size triggering numeric overflow. Fix buffer overflow that can happen inside codec NONE on wrong input data. This was submitted by TIANGONG research team through our [Bug Bounty program](https://github.com/ClickHouse/ClickHouse/issues/38986). [#60731](https://github.com/ClickHouse/ClickHouse/pull/60731) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* Backported in [#60783](https://github.com/ClickHouse/ClickHouse/issues/60783): Functions for SQL/JSON were able to read uninitialized memory. This closes [#60017](https://github.com/ClickHouse/ClickHouse/issues/60017). Found by Fuzzer. [#60738](https://github.com/ClickHouse/ClickHouse/pull/60738) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* Backported in [#60803](https://github.com/ClickHouse/ClickHouse/issues/60803): Do not set aws custom metadata `x-amz-meta-*` headers on UploadPart & CompleteMultipartUpload calls. [#60748](https://github.com/ClickHouse/ClickHouse/pull/60748) ([Francisco J. Jurado Moreno](https://github.com/Beetelbrox)).
+* Backported in [#60820](https://github.com/ClickHouse/ClickHouse/issues/60820): Fix crash in arrayEnumerateRanked. [#60764](https://github.com/ClickHouse/ClickHouse/pull/60764) ([Raúl Marín](https://github.com/Algunenano)).
+* Backported in [#60841](https://github.com/ClickHouse/ClickHouse/issues/60841): Fix crash when using input() in INSERT SELECT JOIN. Closes [#60035](https://github.com/ClickHouse/ClickHouse/issues/60035). [#60765](https://github.com/ClickHouse/ClickHouse/pull/60765) ([Kruglov Pavel](https://github.com/Avogar)).
+* Backported in [#60904](https://github.com/ClickHouse/ClickHouse/issues/60904): Avoid segfault if too many keys are skipped when reading from S3. [#60849](https://github.com/ClickHouse/ClickHouse/pull/60849) ([Antonio Andelic](https://github.com/antonio2368)).
+
+#### NO CL CATEGORY
+
+* Backported in [#60186](https://github.com/ClickHouse/ClickHouse/issues/60186):. [#60181](https://github.com/ClickHouse/ClickHouse/pull/60181) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+
+#### NOT FOR CHANGELOG / INSIGNIFICANT
+
+* Backported in [#60333](https://github.com/ClickHouse/ClickHouse/issues/60333): CI: Fix job failures due to jepsen artifacts. [#59890](https://github.com/ClickHouse/ClickHouse/pull/59890) ([Max K.](https://github.com/maxknv)).
+* Backported in [#60034](https://github.com/ClickHouse/ClickHouse/issues/60034): Fix mark release ready. [#59994](https://github.com/ClickHouse/ClickHouse/pull/59994) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
+* Backported in [#60326](https://github.com/ClickHouse/ClickHouse/issues/60326): Ability to detect undead ZooKeeper sessions. [#60044](https://github.com/ClickHouse/ClickHouse/pull/60044) ([Alexander Tokmakov](https://github.com/tavplubix)).
+* Backported in [#60363](https://github.com/ClickHouse/ClickHouse/issues/60363): CI: hot fix for gh statuses. [#60201](https://github.com/ClickHouse/ClickHouse/pull/60201) ([Max K.](https://github.com/maxknv)).
+* Backported in [#60648](https://github.com/ClickHouse/ClickHouse/issues/60648): Detect io_uring in tests. [#60373](https://github.com/ClickHouse/ClickHouse/pull/60373) ([Azat Khuzhin](https://github.com/azat)).
+* Backported in [#60569](https://github.com/ClickHouse/ClickHouse/issues/60569): Remove broken test while we fix it. [#60547](https://github.com/ClickHouse/ClickHouse/pull/60547) ([Raúl Marín](https://github.com/Algunenano)).
+* Backported in [#60756](https://github.com/ClickHouse/ClickHouse/issues/60756): Update shellcheck. [#60553](https://github.com/ClickHouse/ClickHouse/pull/60553) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
+* Backported in [#60584](https://github.com/ClickHouse/ClickHouse/issues/60584): CI: fix docker build job name. [#60554](https://github.com/ClickHouse/ClickHouse/pull/60554) ([Max K.](https://github.com/maxknv)).
+

docs/changelogs/v24.3.4.147-lts.md (new file) | 100
@@ -0,0 +1,100 @@
+---
+sidebar_position: 1
+sidebar_label: 2024
+---
+
+# 2024 Changelog
+
+### ClickHouse release v24.3.4.147-lts (31a7bdc346d) FIXME as compared to v24.3.3.102-lts (7e7f3bdd9be)
+
+#### Improvement
+* Backported in [#63465](https://github.com/ClickHouse/ClickHouse/issues/63465): Make rabbitmq nack broken messages. Closes [#45350](https://github.com/ClickHouse/ClickHouse/issues/45350). [#60312](https://github.com/ClickHouse/ClickHouse/pull/60312) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Backported in [#64290](https://github.com/ClickHouse/ClickHouse/issues/64290): Fix logical-error when undoing quorum insert transaction. [#61953](https://github.com/ClickHouse/ClickHouse/pull/61953) ([Han Fei](https://github.com/hanfei1991)).
+
+#### Build/Testing/Packaging Improvement
+* Backported in [#63610](https://github.com/ClickHouse/ClickHouse/issues/63610): The Dockerfile is reviewed by the docker official library in https://github.com/docker-library/official-images/pull/15846. [#63400](https://github.com/ClickHouse/ClickHouse/pull/63400) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
+* Backported in [#65128](https://github.com/ClickHouse/ClickHouse/issues/65128): Decrease the `unit-test` image a few times. [#65102](https://github.com/ClickHouse/ClickHouse/pull/65102) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
+
+#### Bug Fix (user-visible misbehavior in an official stable release)
+
+* Backported in [#64277](https://github.com/ClickHouse/ClickHouse/issues/64277): Fix queries with FINAL give wrong result when table does not use adaptive granularity. [#62432](https://github.com/ClickHouse/ClickHouse/pull/62432) ([Duc Canh Le](https://github.com/canhld94)).
+* Backported in [#63716](https://github.com/ClickHouse/ClickHouse/issues/63716): Fix excessive memory usage for queries with nested lambdas. Fixes [#62036](https://github.com/ClickHouse/ClickHouse/issues/62036). [#62462](https://github.com/ClickHouse/ClickHouse/pull/62462) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Backported in [#63247](https://github.com/ClickHouse/ClickHouse/issues/63247): Fix size checks when updating materialized nested columns ( fixes [#62731](https://github.com/ClickHouse/ClickHouse/issues/62731) ). [#62773](https://github.com/ClickHouse/ClickHouse/pull/62773) ([Eliot Hautefeuille](https://github.com/hileef)).
+* Backported in [#62984](https://github.com/ClickHouse/ClickHouse/issues/62984): Fix the `Unexpected return type` error for queries that read from `StorageBuffer` with `PREWHERE` when the source table has different types. Fixes [#62545](https://github.com/ClickHouse/ClickHouse/issues/62545). [#62916](https://github.com/ClickHouse/ClickHouse/pull/62916) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Backported in [#63185](https://github.com/ClickHouse/ClickHouse/issues/63185): Sanity check: Clamp values instead of throwing. [#63119](https://github.com/ClickHouse/ClickHouse/pull/63119) ([Raúl Marín](https://github.com/Algunenano)).
+* Backported in [#63293](https://github.com/ClickHouse/ClickHouse/issues/63293): Fix crash with untuple and unresolved lambda. [#63131](https://github.com/ClickHouse/ClickHouse/pull/63131) ([Raúl Marín](https://github.com/Algunenano)).
+* Backported in [#63411](https://github.com/ClickHouse/ClickHouse/issues/63411): Fix a misbehavior when SQL security defaults don't load for old tables during server startup. [#63209](https://github.com/ClickHouse/ClickHouse/pull/63209) ([pufit](https://github.com/pufit)).
+* Backported in [#63616](https://github.com/ClickHouse/ClickHouse/issues/63616): Fix bug which could potentially lead to rare LOGICAL_ERROR during SELECT query with message: `Unexpected return type from materialize. Expected type_XXX. Got type_YYY.` Introduced in [#59379](https://github.com/ClickHouse/ClickHouse/issues/59379). [#63353](https://github.com/ClickHouse/ClickHouse/pull/63353) ([alesapin](https://github.com/alesapin)).
+* Backported in [#63455](https://github.com/ClickHouse/ClickHouse/issues/63455): Fix `X-ClickHouse-Timezone` header returning wrong timezone when using `session_timezone` as query level setting. [#63377](https://github.com/ClickHouse/ClickHouse/pull/63377) ([Andrey Zvonov](https://github.com/zvonand)).
+* Backported in [#63603](https://github.com/ClickHouse/ClickHouse/issues/63603): Fix backup of projection part in case projection was removed from table metadata, but part still has projection. [#63426](https://github.com/ClickHouse/ClickHouse/pull/63426) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Backported in [#63508](https://github.com/ClickHouse/ClickHouse/issues/63508): Fix 'Every derived table must have its own alias' error for MYSQL dictionary source, close [#63341](https://github.com/ClickHouse/ClickHouse/issues/63341). [#63481](https://github.com/ClickHouse/ClickHouse/pull/63481) ([vdimir](https://github.com/vdimir)).
+* Backported in [#63595](https://github.com/ClickHouse/ClickHouse/issues/63595): Avoid segafult in `MergeTreePrefetchedReadPool` while fetching projection parts. [#63513](https://github.com/ClickHouse/ClickHouse/pull/63513) ([Antonio Andelic](https://github.com/antonio2368)).
+* Backported in [#63748](https://github.com/ClickHouse/ClickHouse/issues/63748): Read only the necessary columns from VIEW (new analyzer). Closes [#62594](https://github.com/ClickHouse/ClickHouse/issues/62594). [#63688](https://github.com/ClickHouse/ClickHouse/pull/63688) ([Maksim Kita](https://github.com/kitaisreal)).
+* Backported in [#63770](https://github.com/ClickHouse/ClickHouse/issues/63770): Fix [#63539](https://github.com/ClickHouse/ClickHouse/issues/63539). Forbid WINDOW redefinition in new analyzer. [#63694](https://github.com/ClickHouse/ClickHouse/pull/63694) ([Dmitry Novik](https://github.com/novikd)).
+* Backported in [#64189](https://github.com/ClickHouse/ClickHouse/issues/64189): Fix `Not found column` and `CAST AS Map from array requires nested tuple of 2 elements` exceptions for distributed queries which use `Map(Nothing, Nothing)` type. Fixes [#63637](https://github.com/ClickHouse/ClickHouse/issues/63637). [#63753](https://github.com/ClickHouse/ClickHouse/pull/63753) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Backported in [#63845](https://github.com/ClickHouse/ClickHouse/issues/63845): Fix possible `ILLEGAL_COLUMN` error in `partial_merge` join, close [#37928](https://github.com/ClickHouse/ClickHouse/issues/37928). [#63755](https://github.com/ClickHouse/ClickHouse/pull/63755) ([vdimir](https://github.com/vdimir)).
+* Backported in [#63906](https://github.com/ClickHouse/ClickHouse/issues/63906): `query_plan_remove_redundant_distinct` can break queries with WINDOW FUNCTIONS (with `allow_experimental_analyzer` is on). Fixes [#62820](https://github.com/ClickHouse/ClickHouse/issues/62820). [#63776](https://github.com/ClickHouse/ClickHouse/pull/63776) ([Igor Nikonov](https://github.com/devcrafter)).
+* Backported in [#63989](https://github.com/ClickHouse/ClickHouse/issues/63989): Fix incorrect select query result when parallel replicas were used to read from a Materialized View. [#63861](https://github.com/ClickHouse/ClickHouse/pull/63861) ([Nikita Taranov](https://github.com/nickitat)).
+* Backported in [#64031](https://github.com/ClickHouse/ClickHouse/issues/64031): Fix a error `Database name is empty` for remote queries with lambdas over the cluster with modified default database. Fixes [#63471](https://github.com/ClickHouse/ClickHouse/issues/63471). [#63864](https://github.com/ClickHouse/ClickHouse/pull/63864) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Backported in [#64559](https://github.com/ClickHouse/ClickHouse/issues/64559): Fix SIGSEGV due to CPU/Real (`query_profiler_real_time_period_ns`/`query_profiler_cpu_time_period_ns`) profiler (has been an issue since 2022, that leads to periodic server crashes, especially if you were using distributed engine). [#63865](https://github.com/ClickHouse/ClickHouse/pull/63865) ([Azat Khuzhin](https://github.com/azat)).
+* Backported in [#64009](https://github.com/ClickHouse/ClickHouse/issues/64009): Fix analyzer - IN function with arbitrary deep sub-selects in materialized view to use insertion block. [#63930](https://github.com/ClickHouse/ClickHouse/pull/63930) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
+* Backported in [#64236](https://github.com/ClickHouse/ClickHouse/issues/64236): Fix resolve of unqualified COLUMNS matcher. Preserve the input columns order and forbid usage of unknown identifiers. [#63962](https://github.com/ClickHouse/ClickHouse/pull/63962) ([Dmitry Novik](https://github.com/novikd)).
+* Backported in [#64106](https://github.com/ClickHouse/ClickHouse/issues/64106): Deserialize untrusted binary inputs in a safer way. [#64024](https://github.com/ClickHouse/ClickHouse/pull/64024) ([Robert Schulze](https://github.com/rschu1ze)).
+* Backported in [#64168](https://github.com/ClickHouse/ClickHouse/issues/64168): Add missing settings to recoverLostReplica. [#64040](https://github.com/ClickHouse/ClickHouse/pull/64040) ([Raúl Marín](https://github.com/Algunenano)).
+* Backported in [#64320](https://github.com/ClickHouse/ClickHouse/issues/64320): This fix will use a proper redefined context with the correct definer for each individual view in the query pipeline Closes [#63777](https://github.com/ClickHouse/ClickHouse/issues/63777). [#64079](https://github.com/ClickHouse/ClickHouse/pull/64079) ([pufit](https://github.com/pufit)).
+* Backported in [#64380](https://github.com/ClickHouse/ClickHouse/issues/64380): Fix analyzer: "Not found column" error is fixed when using INTERPOLATE. [#64096](https://github.com/ClickHouse/ClickHouse/pull/64096) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
+* Backported in [#64567](https://github.com/ClickHouse/ClickHouse/issues/64567): Fix creating backups to S3 buckets with different credentials from the disk containing the file. [#64153](https://github.com/ClickHouse/ClickHouse/pull/64153) ([Antonio Andelic](https://github.com/antonio2368)).
+* Backported in [#64270](https://github.com/ClickHouse/ClickHouse/issues/64270): Prevent LOGICAL_ERROR on CREATE TABLE as MaterializedView. [#64174](https://github.com/ClickHouse/ClickHouse/pull/64174) ([Raúl Marín](https://github.com/Algunenano)).
+* Backported in [#64339](https://github.com/ClickHouse/ClickHouse/issues/64339): The query cache now considers two identical queries against different databases as different. The previous behavior could be used to bypass missing privileges to read from a table. [#64199](https://github.com/ClickHouse/ClickHouse/pull/64199) ([Robert Schulze](https://github.com/rschu1ze)).
+* Backported in [#64259](https://github.com/ClickHouse/ClickHouse/issues/64259): Ignore `text_log` config when using Keeper. [#64218](https://github.com/ClickHouse/ClickHouse/pull/64218) ([Antonio Andelic](https://github.com/antonio2368)).
+* Backported in [#64688](https://github.com/ClickHouse/ClickHouse/issues/64688): Fix Query Tree size validation. Closes [#63701](https://github.com/ClickHouse/ClickHouse/issues/63701). [#64377](https://github.com/ClickHouse/ClickHouse/pull/64377) ([Dmitry Novik](https://github.com/novikd)).
+* Backported in [#64725](https://github.com/ClickHouse/ClickHouse/issues/64725): Fixed `CREATE TABLE AS` queries for tables with default expressions. [#64455](https://github.com/ClickHouse/ClickHouse/pull/64455) ([Anton Popov](https://github.com/CurtizJ)).
+* Backported in [#64621](https://github.com/ClickHouse/ClickHouse/issues/64621): Fix an error `Cannot find column` in distributed queries with constant CTE in the `GROUP BY` key. [#64519](https://github.com/ClickHouse/ClickHouse/pull/64519) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Backported in [#64678](https://github.com/ClickHouse/ClickHouse/issues/64678): Fix [#64612](https://github.com/ClickHouse/ClickHouse/issues/64612). Do not rewrite aggregation if `-If` combinator is already used. [#64638](https://github.com/ClickHouse/ClickHouse/pull/64638) ([Dmitry Novik](https://github.com/novikd)).
+* Backported in [#64831](https://github.com/ClickHouse/ClickHouse/issues/64831): Fix bug which could lead to non-working TTLs with expressions. Fixes [#63700](https://github.com/ClickHouse/ClickHouse/issues/63700). [#64694](https://github.com/ClickHouse/ClickHouse/pull/64694) ([alesapin](https://github.com/alesapin)).
+* Backported in [#64940](https://github.com/ClickHouse/ClickHouse/issues/64940): Fix OrderByLimitByDuplicateEliminationVisitor across subqueries. [#64766](https://github.com/ClickHouse/ClickHouse/pull/64766) ([Raúl Marín](https://github.com/Algunenano)).
+* Backported in [#64869](https://github.com/ClickHouse/ClickHouse/issues/64869): Fixed memory possible incorrect memory tracking in several kinds of queries: queries that read any data from S3, queries via http protocol, asynchronous inserts. [#64844](https://github.com/ClickHouse/ClickHouse/pull/64844) ([Anton Popov](https://github.com/CurtizJ)).
+* Backported in [#64980](https://github.com/ClickHouse/ClickHouse/issues/64980): Fix the `Block structure mismatch` error for queries reading with `PREWHERE` from the materialized view when the materialized view has columns of different types than the source table. Fixes [#64611](https://github.com/ClickHouse/ClickHouse/issues/64611). [#64855](https://github.com/ClickHouse/ClickHouse/pull/64855) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Backported in [#64972](https://github.com/ClickHouse/ClickHouse/issues/64972): Fix rare crash when table has TTL with subquery + database replicated + parallel replicas + analyzer. It's really rare, but please don't use TTLs with subqueries. [#64858](https://github.com/ClickHouse/ClickHouse/pull/64858) ([alesapin](https://github.com/alesapin)).
+* Backported in [#65070](https://github.com/ClickHouse/ClickHouse/issues/65070): Fix `ALTER MODIFY COMMENT` query that was broken for parameterized VIEWs in https://github.com/ClickHouse/ClickHouse/pull/54211. [#65031](https://github.com/ClickHouse/ClickHouse/pull/65031) ([Nikolay Degterinsky](https://github.com/evillique)).
|
||||||
|
* Backported in [#65175](https://github.com/ClickHouse/ClickHouse/issues/65175): Fix the `Unknown expression identifier` error for remote queries with `INTERPOLATE (alias)` (new analyzer). Fixes [#64636](https://github.com/ClickHouse/ClickHouse/issues/64636). [#65090](https://github.com/ClickHouse/ClickHouse/pull/65090) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||||
|
|
||||||
|
#### Critical Bug Fix (crash, LOGICAL_ERROR, data loss, RBAC)
|
||||||
|
|
||||||
|
* Backported in [#64587](https://github.com/ClickHouse/ClickHouse/issues/64587): Disabled `enable_vertical_final` setting by default. This feature should not be used because it has a bug: [#64543](https://github.com/ClickHouse/ClickHouse/issues/64543). [#64544](https://github.com/ClickHouse/ClickHouse/pull/64544) ([Alexander Tokmakov](https://github.com/tavplubix)).
|
||||||
|
* Backported in [#64878](https://github.com/ClickHouse/ClickHouse/issues/64878): This PR fixes an error when a user in a specific situation can escalate their privileges on the default database without necessary grants. [#64769](https://github.com/ClickHouse/ClickHouse/pull/64769) ([pufit](https://github.com/pufit)).
|
||||||
|
|
||||||
|
#### NO CL CATEGORY
|
||||||
|
|
||||||
|
* Backported in [#63304](https://github.com/ClickHouse/ClickHouse/issues/63304):. [#63297](https://github.com/ClickHouse/ClickHouse/pull/63297) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||||
|
* Backported in [#63708](https://github.com/ClickHouse/ClickHouse/issues/63708):. [#63415](https://github.com/ClickHouse/ClickHouse/pull/63415) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||||
|
|
||||||
|
#### NO CL ENTRY
|
||||||
|
|
||||||
|
* NO CL ENTRY: 'Revert "Backport [#64363](https://github.com/ClickHouse/ClickHouse/issues/64363) to 24.3: Split tests 03039_dynamic_all_merge_algorithms to avoid timeouts"'. [#64907](https://github.com/ClickHouse/ClickHouse/pull/64907) ([Raúl Marín](https://github.com/Algunenano)).
|
||||||
|
|
||||||
|
#### NOT FOR CHANGELOG / INSIGNIFICANT
|
||||||
|
|
||||||
|
* Backported in [#63751](https://github.com/ClickHouse/ClickHouse/issues/63751): group_by_use_nulls strikes back. [#62922](https://github.com/ClickHouse/ClickHouse/pull/62922) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||||
|
* Backported in [#63558](https://github.com/ClickHouse/ClickHouse/issues/63558): Try fix segfault in `MergeTreeReadPoolBase::createTask`. [#63323](https://github.com/ClickHouse/ClickHouse/pull/63323) ([Antonio Andelic](https://github.com/antonio2368)).
|
||||||
|
* Backported in [#63336](https://github.com/ClickHouse/ClickHouse/issues/63336): The commit url has different pattern. [#63331](https://github.com/ClickHouse/ClickHouse/pull/63331) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
|
||||||
|
* Backported in [#63374](https://github.com/ClickHouse/ClickHouse/issues/63374): Add tags for the test 03000_traverse_shadow_system_data_paths.sql to make it stable. [#63366](https://github.com/ClickHouse/ClickHouse/pull/63366) ([Aleksei Filatov](https://github.com/aalexfvk)).
|
||||||
|
* Backported in [#63625](https://github.com/ClickHouse/ClickHouse/issues/63625): Workaround for `oklch()` inside canvas bug for firefox. [#63404](https://github.com/ClickHouse/ClickHouse/pull/63404) ([Sergei Trifonov](https://github.com/serxa)).
|
||||||
|
* Backported in [#63569](https://github.com/ClickHouse/ClickHouse/issues/63569): Add `jwcrypto` to integration tests runner. [#63551](https://github.com/ClickHouse/ClickHouse/pull/63551) ([Konstantin Bogdanov](https://github.com/thevar1able)).
|
||||||
|
* Backported in [#63649](https://github.com/ClickHouse/ClickHouse/issues/63649): Fix `02362_part_log_merge_algorithm` flaky test. [#63635](https://github.com/ClickHouse/ClickHouse/pull/63635) ([Miсhael Stetsyuk](https://github.com/mstetsyuk)).
|
||||||
|
* Backported in [#63762](https://github.com/ClickHouse/ClickHouse/issues/63762): Cancel S3 reads properly when parallel reads are used. [#63687](https://github.com/ClickHouse/ClickHouse/pull/63687) ([Antonio Andelic](https://github.com/antonio2368)).
|
||||||
|
* Backported in [#63741](https://github.com/ClickHouse/ClickHouse/issues/63741): Userspace page cache: don't collect stats if cache is unused. [#63730](https://github.com/ClickHouse/ClickHouse/pull/63730) ([Michael Kolupaev](https://github.com/al13n321)).
|
||||||
|
* Backported in [#63826](https://github.com/ClickHouse/ClickHouse/issues/63826): Fix `test_odbc_interaction` for arm64 on linux. [#63787](https://github.com/ClickHouse/ClickHouse/pull/63787) ([alesapin](https://github.com/alesapin)).
|
||||||
|
* Backported in [#63895](https://github.com/ClickHouse/ClickHouse/issues/63895): Fix `test_catboost_evaluate` for aarch64. [#63789](https://github.com/ClickHouse/ClickHouse/pull/63789) ([alesapin](https://github.com/alesapin)).
|
||||||
|
* Backported in [#63887](https://github.com/ClickHouse/ClickHouse/issues/63887): Fix `test_disk_types` for aarch64. [#63832](https://github.com/ClickHouse/ClickHouse/pull/63832) ([alesapin](https://github.com/alesapin)).
|
||||||
|
* Backported in [#63879](https://github.com/ClickHouse/ClickHouse/issues/63879): Fix `test_short_strings_aggregation` for arm. [#63836](https://github.com/ClickHouse/ClickHouse/pull/63836) ([alesapin](https://github.com/alesapin)).
|
||||||
|
* Backported in [#63916](https://github.com/ClickHouse/ClickHouse/issues/63916): Disable `test_non_default_compression/test.py::test_preconfigured_deflateqpl_codec` on arm. [#63839](https://github.com/ClickHouse/ClickHouse/pull/63839) ([alesapin](https://github.com/alesapin)).
|
||||||
|
* Backported in [#63969](https://github.com/ClickHouse/ClickHouse/issues/63969): fix 02124_insert_deduplication_token_multiple_blocks. [#63950](https://github.com/ClickHouse/ClickHouse/pull/63950) ([Han Fei](https://github.com/hanfei1991)).
|
||||||
|
* Backported in [#64047](https://github.com/ClickHouse/ClickHouse/issues/64047): Do not create new release in release branch automatically. [#64039](https://github.com/ClickHouse/ClickHouse/pull/64039) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
|
||||||
|
* Backported in [#64076](https://github.com/ClickHouse/ClickHouse/issues/64076): Files without shebang have mime 'text/plain' or 'inode/x-empty'. [#64062](https://github.com/ClickHouse/ClickHouse/pull/64062) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
|
||||||
|
* Backported in [#64142](https://github.com/ClickHouse/ClickHouse/issues/64142): Fix sanitizers. [#64090](https://github.com/ClickHouse/ClickHouse/pull/64090) ([Azat Khuzhin](https://github.com/azat)).
|
||||||
|
* Backported in [#64159](https://github.com/ClickHouse/ClickHouse/issues/64159): Add retries in `git submodule update`. [#64125](https://github.com/ClickHouse/ClickHouse/pull/64125) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||||
|
* Backported in [#64473](https://github.com/ClickHouse/ClickHouse/issues/64473): Split tests 03039_dynamic_all_merge_algorithms to avoid timeouts. [#64363](https://github.com/ClickHouse/ClickHouse/pull/64363) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||||
|
* Backported in [#65113](https://github.com/ClickHouse/ClickHouse/issues/65113): Adjust the `version_helper` and script to a new release scheme. [#64759](https://github.com/ClickHouse/ClickHouse/pull/64759) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
|
||||||
|
* Backported in [#64999](https://github.com/ClickHouse/ClickHouse/issues/64999): Fix crash with DISTINCT and window functions. [#64767](https://github.com/ClickHouse/ClickHouse/pull/64767) ([Igor Nikonov](https://github.com/devcrafter)).
|
||||||
|
|
101
docs/changelogs/v24.4.2.141-stable.md
Normal file
@ -0,0 +1,101 @@
---
sidebar_position: 1
sidebar_label: 2024
---

# 2024 Changelog

### ClickHouse release v24.4.2.141-stable (9e23d27bd11) FIXME as compared to v24.4.1.2088-stable (6d4b31322d1)

#### Improvement
* Backported in [#63467](https://github.com/ClickHouse/ClickHouse/issues/63467): Make rabbitmq nack broken messages. Closes [#45350](https://github.com/ClickHouse/ClickHouse/issues/45350). [#60312](https://github.com/ClickHouse/ClickHouse/pull/60312) ([Kseniia Sumarokova](https://github.com/kssenii)).

#### Build/Testing/Packaging Improvement
* Backported in [#63612](https://github.com/ClickHouse/ClickHouse/issues/63612): The Dockerfile is reviewed by the docker official library in https://github.com/docker-library/official-images/pull/15846. [#63400](https://github.com/ClickHouse/ClickHouse/pull/63400) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).

#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#64279](https://github.com/ClickHouse/ClickHouse/issues/64279): Fix queries with FINAL giving a wrong result when the table does not use adaptive granularity. [#62432](https://github.com/ClickHouse/ClickHouse/pull/62432) ([Duc Canh Le](https://github.com/canhld94)).
* Backported in [#63295](https://github.com/ClickHouse/ClickHouse/issues/63295): Fix crash with untuple and unresolved lambda. [#63131](https://github.com/ClickHouse/ClickHouse/pull/63131) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#63978](https://github.com/ClickHouse/ClickHouse/issues/63978): Fix intersecting parts when restarting after a drop range. [#63202](https://github.com/ClickHouse/ClickHouse/pull/63202) ([Han Fei](https://github.com/hanfei1991)).
* Backported in [#63413](https://github.com/ClickHouse/ClickHouse/issues/63413): Fix a misbehavior when SQL security defaults don't load for old tables during server startup. [#63209](https://github.com/ClickHouse/ClickHouse/pull/63209) ([pufit](https://github.com/pufit)).
* Backported in [#63388](https://github.com/ClickHouse/ClickHouse/issues/63388): Fix JOIN filter push down for filled JOIN. Closes [#63228](https://github.com/ClickHouse/ClickHouse/issues/63228). [#63234](https://github.com/ClickHouse/ClickHouse/pull/63234) ([Maksim Kita](https://github.com/kitaisreal)).
* Backported in [#63618](https://github.com/ClickHouse/ClickHouse/issues/63618): Fix a bug which could potentially lead to a rare LOGICAL_ERROR during SELECT queries with the message: `Unexpected return type from materialize. Expected type_XXX. Got type_YYY.` Introduced in [#59379](https://github.com/ClickHouse/ClickHouse/issues/59379). [#63353](https://github.com/ClickHouse/ClickHouse/pull/63353) ([alesapin](https://github.com/alesapin)).
* Backported in [#63451](https://github.com/ClickHouse/ClickHouse/issues/63451): Fix the `X-ClickHouse-Timezone` header returning a wrong timezone when using `session_timezone` as a query-level setting. [#63377](https://github.com/ClickHouse/ClickHouse/pull/63377) ([Andrey Zvonov](https://github.com/zvonand)).
* Backported in [#63605](https://github.com/ClickHouse/ClickHouse/issues/63605): Fix backup of a projection part in case the projection was removed from the table metadata but the part still has the projection. [#63426](https://github.com/ClickHouse/ClickHouse/pull/63426) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Backported in [#63510](https://github.com/ClickHouse/ClickHouse/issues/63510): Fix the 'Every derived table must have its own alias' error for the MYSQL dictionary source, close [#63341](https://github.com/ClickHouse/ClickHouse/issues/63341). [#63481](https://github.com/ClickHouse/ClickHouse/pull/63481) ([vdimir](https://github.com/vdimir)).
* Backported in [#63592](https://github.com/ClickHouse/ClickHouse/issues/63592): Avoid a segfault in `MergeTreePrefetchedReadPool` while fetching projection parts. [#63513](https://github.com/ClickHouse/ClickHouse/pull/63513) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#63750](https://github.com/ClickHouse/ClickHouse/issues/63750): Read only the necessary columns from VIEW (new analyzer). Closes [#62594](https://github.com/ClickHouse/ClickHouse/issues/62594). [#63688](https://github.com/ClickHouse/ClickHouse/pull/63688) ([Maksim Kita](https://github.com/kitaisreal)).
* Backported in [#63772](https://github.com/ClickHouse/ClickHouse/issues/63772): Fix [#63539](https://github.com/ClickHouse/ClickHouse/issues/63539). Forbid WINDOW redefinition in the new analyzer. [#63694](https://github.com/ClickHouse/ClickHouse/pull/63694) ([Dmitry Novik](https://github.com/novikd)).
* Backported in [#63872](https://github.com/ClickHouse/ClickHouse/issues/63872): Fix `flatten_nested` being broken with Replicated databases. [#63695](https://github.com/ClickHouse/ClickHouse/pull/63695) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#63854](https://github.com/ClickHouse/ClickHouse/issues/63854): Fix `Not found column` and `CAST AS Map from array requires nested tuple of 2 elements` exceptions for distributed queries which use the `Map(Nothing, Nothing)` type. Fixes [#63637](https://github.com/ClickHouse/ClickHouse/issues/63637). [#63753](https://github.com/ClickHouse/ClickHouse/pull/63753) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#63847](https://github.com/ClickHouse/ClickHouse/issues/63847): Fix a possible `ILLEGAL_COLUMN` error in `partial_merge` join, close [#37928](https://github.com/ClickHouse/ClickHouse/issues/37928). [#63755](https://github.com/ClickHouse/ClickHouse/pull/63755) ([vdimir](https://github.com/vdimir)).
* Backported in [#63908](https://github.com/ClickHouse/ClickHouse/issues/63908): `query_plan_remove_redundant_distinct` can break queries with WINDOW FUNCTIONS (when `allow_experimental_analyzer` is on). Fixes [#62820](https://github.com/ClickHouse/ClickHouse/issues/62820). [#63776](https://github.com/ClickHouse/ClickHouse/pull/63776) ([Igor Nikonov](https://github.com/devcrafter)).
* Backported in [#63955](https://github.com/ClickHouse/ClickHouse/issues/63955): Fix a possible crash with SYSTEM UNLOAD PRIMARY KEY. [#63778](https://github.com/ClickHouse/ClickHouse/pull/63778) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#63938](https://github.com/ClickHouse/ClickHouse/issues/63938): Allow JOIN filter push down to both streams if only a single equivalent column is used in the query. Closes [#63799](https://github.com/ClickHouse/ClickHouse/issues/63799). [#63819](https://github.com/ClickHouse/ClickHouse/pull/63819) ([Maksim Kita](https://github.com/kitaisreal)).
* Backported in [#63991](https://github.com/ClickHouse/ClickHouse/issues/63991): Fix an incorrect select query result when parallel replicas were used to read from a Materialized View. [#63861](https://github.com/ClickHouse/ClickHouse/pull/63861) ([Nikita Taranov](https://github.com/nickitat)).
* Backported in [#64033](https://github.com/ClickHouse/ClickHouse/issues/64033): Fix an error `Database name is empty` for remote queries with lambdas over the cluster with modified default database. Fixes [#63471](https://github.com/ClickHouse/ClickHouse/issues/63471). [#63864](https://github.com/ClickHouse/ClickHouse/pull/63864) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#64561](https://github.com/ClickHouse/ClickHouse/issues/64561): Fix SIGSEGV due to the CPU/Real (`query_profiler_real_time_period_ns`/`query_profiler_cpu_time_period_ns`) profiler (an issue since 2022 that led to periodic server crashes, especially when the Distributed engine was used). [#63865](https://github.com/ClickHouse/ClickHouse/pull/63865) ([Azat Khuzhin](https://github.com/azat)).
* Backported in [#64011](https://github.com/ClickHouse/ClickHouse/issues/64011): Fix analyzer: make the IN function with arbitrarily deep sub-selects in a materialized view use the insertion block. [#63930](https://github.com/ClickHouse/ClickHouse/pull/63930) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Backported in [#64238](https://github.com/ClickHouse/ClickHouse/issues/64238): Fix resolution of the unqualified COLUMNS matcher. Preserve the input column order and forbid usage of unknown identifiers. [#63962](https://github.com/ClickHouse/ClickHouse/pull/63962) ([Dmitry Novik](https://github.com/novikd)).
* Backported in [#64103](https://github.com/ClickHouse/ClickHouse/issues/64103): Deserialize untrusted binary inputs in a safer way. [#64024](https://github.com/ClickHouse/ClickHouse/pull/64024) ([Robert Schulze](https://github.com/rschu1ze)).
* Backported in [#64170](https://github.com/ClickHouse/ClickHouse/issues/64170): Add missing settings to recoverLostReplica. [#64040](https://github.com/ClickHouse/ClickHouse/pull/64040) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#64322](https://github.com/ClickHouse/ClickHouse/issues/64322): Use a properly redefined context with the correct definer for each individual view in the query pipeline. Closes [#63777](https://github.com/ClickHouse/ClickHouse/issues/63777). [#64079](https://github.com/ClickHouse/ClickHouse/pull/64079) ([pufit](https://github.com/pufit)).
* Backported in [#64382](https://github.com/ClickHouse/ClickHouse/issues/64382): Fix analyzer: the "Not found column" error when using INTERPOLATE. [#64096](https://github.com/ClickHouse/ClickHouse/pull/64096) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Backported in [#64568](https://github.com/ClickHouse/ClickHouse/issues/64568): Fix creating backups to S3 buckets with credentials different from those of the disk containing the file. [#64153](https://github.com/ClickHouse/ClickHouse/pull/64153) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#64272](https://github.com/ClickHouse/ClickHouse/issues/64272): Prevent LOGICAL_ERROR on CREATE TABLE as MaterializedView. [#64174](https://github.com/ClickHouse/ClickHouse/pull/64174) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#64330](https://github.com/ClickHouse/ClickHouse/issues/64330): The query cache now considers two identical queries against different databases as different. The previous behavior could be used to bypass missing privileges to read from a table. [#64199](https://github.com/ClickHouse/ClickHouse/pull/64199) ([Robert Schulze](https://github.com/rschu1ze)).
* Backported in [#64254](https://github.com/ClickHouse/ClickHouse/issues/64254): Ignore `text_log` config when using Keeper. [#64218](https://github.com/ClickHouse/ClickHouse/pull/64218) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#64690](https://github.com/ClickHouse/ClickHouse/issues/64690): Fix Query Tree size validation. Closes [#63701](https://github.com/ClickHouse/ClickHouse/issues/63701). [#64377](https://github.com/ClickHouse/ClickHouse/pull/64377) ([Dmitry Novik](https://github.com/novikd)).
* Backported in [#64409](https://github.com/ClickHouse/ClickHouse/issues/64409): Fix `Logical error: Bad cast` for `Buffer` table with `PREWHERE`. Fixes [#64172](https://github.com/ClickHouse/ClickHouse/issues/64172). [#64388](https://github.com/ClickHouse/ClickHouse/pull/64388) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#64727](https://github.com/ClickHouse/ClickHouse/issues/64727): Fixed `CREATE TABLE AS` queries for tables with default expressions. [#64455](https://github.com/ClickHouse/ClickHouse/pull/64455) ([Anton Popov](https://github.com/CurtizJ)).
* Backported in [#64623](https://github.com/ClickHouse/ClickHouse/issues/64623): Fix an error `Cannot find column` in distributed queries with a constant CTE in the `GROUP BY` key. [#64519](https://github.com/ClickHouse/ClickHouse/pull/64519) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#64680](https://github.com/ClickHouse/ClickHouse/issues/64680): Fix [#64612](https://github.com/ClickHouse/ClickHouse/issues/64612). Do not rewrite aggregation if the `-If` combinator is already used. [#64638](https://github.com/ClickHouse/ClickHouse/pull/64638) ([Dmitry Novik](https://github.com/novikd)).
* Backported in [#64942](https://github.com/ClickHouse/ClickHouse/issues/64942): Fix OrderByLimitByDuplicateEliminationVisitor across subqueries. [#64766](https://github.com/ClickHouse/ClickHouse/pull/64766) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#64871](https://github.com/ClickHouse/ClickHouse/issues/64871): Fixed possible incorrect memory tracking in several kinds of queries: queries that read any data from S3, queries via the HTTP protocol, and asynchronous inserts. [#64844](https://github.com/ClickHouse/ClickHouse/pull/64844) ([Anton Popov](https://github.com/CurtizJ)).

#### CI Fix or Improvement (changelog entry is not required)
* Backported in [#63364](https://github.com/ClickHouse/ClickHouse/issues/63364): Implement cumulative A Sync status. [#61464](https://github.com/ClickHouse/ClickHouse/pull/61464) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Backported in [#63338](https://github.com/ClickHouse/ClickHouse/issues/63338): Use `/commit/` to have the URLs in [reports](https://play.clickhouse.com/play?user=play#c2VsZWN0IGRpc3RpbmN0IGNvbW1pdF91cmwgZnJvbSBjaGVja3Mgd2hlcmUgY2hlY2tfc3RhcnRfdGltZSA+PSBub3coKSAtIGludGVydmFsIDEgbW9udGggYW5kIHB1bGxfcmVxdWVzdF9udW1iZXI9NjA1MzI=) like https://github.com/ClickHouse/ClickHouse/commit/44f8bc5308b53797bec8cccc3bd29fab8a00235d and not like https://github.com/ClickHouse/ClickHouse/commits/44f8bc5308b53797bec8cccc3bd29fab8a00235d. [#63331](https://github.com/ClickHouse/ClickHouse/pull/63331) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Backported in [#63376](https://github.com/ClickHouse/ClickHouse/issues/63376):. [#63366](https://github.com/ClickHouse/ClickHouse/pull/63366) ([Aleksei Filatov](https://github.com/aalexfvk)).
* Backported in [#63571](https://github.com/ClickHouse/ClickHouse/issues/63571):. [#63551](https://github.com/ClickHouse/ClickHouse/pull/63551) ([Konstantin Bogdanov](https://github.com/thevar1able)).
* Backported in [#63651](https://github.com/ClickHouse/ClickHouse/issues/63651): Fix the 02362_part_log_merge_algorithm flaky test. [#63635](https://github.com/ClickHouse/ClickHouse/pull/63635) ([Miсhael Stetsyuk](https://github.com/mstetsyuk)).
* Backported in [#63828](https://github.com/ClickHouse/ClickHouse/issues/63828): Fix test_odbc_interaction for aarch64 [#61457](https://github.com/ClickHouse/ClickHouse/issues/61457). [#63787](https://github.com/ClickHouse/ClickHouse/pull/63787) ([alesapin](https://github.com/alesapin)).
* Backported in [#63897](https://github.com/ClickHouse/ClickHouse/issues/63897): Fix the test `test_catboost_evaluate` for aarch64. [#61457](https://github.com/ClickHouse/ClickHouse/issues/61457). [#63789](https://github.com/ClickHouse/ClickHouse/pull/63789) ([alesapin](https://github.com/alesapin)).
* Backported in [#63889](https://github.com/ClickHouse/ClickHouse/issues/63889): Remove HDFS from the disks config for one integration test for arm. [#61457](https://github.com/ClickHouse/ClickHouse/issues/61457). [#63832](https://github.com/ClickHouse/ClickHouse/pull/63832) ([alesapin](https://github.com/alesapin)).
* Backported in [#63881](https://github.com/ClickHouse/ClickHouse/issues/63881): Bump the version of the old image in test_short_strings_aggregation to make it work on arm. [#61457](https://github.com/ClickHouse/ClickHouse/issues/61457). [#63836](https://github.com/ClickHouse/ClickHouse/pull/63836) ([alesapin](https://github.com/alesapin)).
* Backported in [#63919](https://github.com/ClickHouse/ClickHouse/issues/63919): Disable the test `test_non_default_compression/test.py::test_preconfigured_deflateqpl_codec` on arm. [#61457](https://github.com/ClickHouse/ClickHouse/issues/61457). [#63839](https://github.com/ClickHouse/ClickHouse/pull/63839) ([alesapin](https://github.com/alesapin)).
* Backported in [#63971](https://github.com/ClickHouse/ClickHouse/issues/63971): Fix 02124_insert_deduplication_token_multiple_blocks. [#63950](https://github.com/ClickHouse/ClickHouse/pull/63950) ([Han Fei](https://github.com/hanfei1991)).
* Backported in [#64049](https://github.com/ClickHouse/ClickHouse/issues/64049): Add the `ClickHouseVersion.copy` method. Create a branch release in advance without spinning out the release to increase the stability. [#64039](https://github.com/ClickHouse/ClickHouse/pull/64039) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Backported in [#64078](https://github.com/ClickHouse/ClickHouse/issues/64078): The mime type is not 100% reliable for Python and shell scripts without shebangs; add a check for the file extension. [#64062](https://github.com/ClickHouse/ClickHouse/pull/64062) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Backported in [#64161](https://github.com/ClickHouse/ClickHouse/issues/64161): Add retries in git submodule update. [#64125](https://github.com/ClickHouse/ClickHouse/pull/64125) ([Alexey Milovidov](https://github.com/alexey-milovidov)).

#### Critical Bug Fix (crash, LOGICAL_ERROR, data loss, RBAC)
* Backported in [#64589](https://github.com/ClickHouse/ClickHouse/issues/64589): Disabled the `enable_vertical_final` setting by default. This feature should not be used because it has a bug: [#64543](https://github.com/ClickHouse/ClickHouse/issues/64543). [#64544](https://github.com/ClickHouse/ClickHouse/pull/64544) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Backported in [#64880](https://github.com/ClickHouse/ClickHouse/issues/64880): This PR fixes an error where a user in a specific situation could escalate their privileges on the default database without the necessary grants. [#64769](https://github.com/ClickHouse/ClickHouse/pull/64769) ([pufit](https://github.com/pufit)).

#### NO CL CATEGORY
* Backported in [#63306](https://github.com/ClickHouse/ClickHouse/issues/63306):. [#63297](https://github.com/ClickHouse/ClickHouse/pull/63297) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Backported in [#63710](https://github.com/ClickHouse/ClickHouse/issues/63710):. [#63415](https://github.com/ClickHouse/ClickHouse/pull/63415) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).

#### NO CL ENTRY
* NO CL ENTRY: 'Revert "Backport [#64363](https://github.com/ClickHouse/ClickHouse/issues/64363) to 24.4: Split tests 03039_dynamic_all_merge_algorithms to avoid timeouts"'. [#64905](https://github.com/ClickHouse/ClickHouse/pull/64905) ([Raúl Marín](https://github.com/Algunenano)).

#### NOT FOR CHANGELOG / INSIGNIFICANT
* group_by_use_nulls strikes back [#62922](https://github.com/ClickHouse/ClickHouse/pull/62922) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Add `FROM` keyword to `TRUNCATE ALL TABLES` [#63241](https://github.com/ClickHouse/ClickHouse/pull/63241) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
* More checks for concurrently deleted files and dirs in system.remote_data_paths [#63274](https://github.com/ClickHouse/ClickHouse/pull/63274) ([Alexander Gololobov](https://github.com/davenger)).
* Try to fix a segfault in `MergeTreeReadPoolBase::createTask` [#63323](https://github.com/ClickHouse/ClickHouse/pull/63323) ([Antonio Andelic](https://github.com/antonio2368)).
* Skip inaccessible table dirs in system.remote_data_paths [#63330](https://github.com/ClickHouse/ClickHouse/pull/63330) ([Alexander Gololobov](https://github.com/davenger)).
* Workaround for the `oklch()` inside canvas bug for Firefox [#63404](https://github.com/ClickHouse/ClickHouse/pull/63404) ([Sergei Trifonov](https://github.com/serxa)).
* Cancel S3 reads properly when parallel reads are used [#63687](https://github.com/ClickHouse/ClickHouse/pull/63687) ([Antonio Andelic](https://github.com/antonio2368)).
* Userspace page cache: don't collect stats if the cache is unused [#63730](https://github.com/ClickHouse/ClickHouse/pull/63730) ([Michael Kolupaev](https://github.com/al13n321)).
* Fix sanitizers [#64090](https://github.com/ClickHouse/ClickHouse/pull/64090) ([Azat Khuzhin](https://github.com/azat)).
* Split tests 03039_dynamic_all_merge_algorithms to avoid timeouts [#64363](https://github.com/ClickHouse/ClickHouse/pull/64363) ([Kruglov Pavel](https://github.com/Avogar)).
* CI: Critical bugfix category in PR template [#64480](https://github.com/ClickHouse/ClickHouse/pull/64480) ([Max K.](https://github.com/maxknv)).

38
docs/changelogs/v24.5.2.34-stable.md
Normal file
@ -0,0 +1,38 @@
---
sidebar_position: 1
sidebar_label: 2024
---

# 2024 Changelog

### ClickHouse release v24.5.2.34-stable (45589aeee49) FIXME as compared to v24.5.1.1763-stable (647c154a94d)

#### Improvement
* Backported in [#65096](https://github.com/ClickHouse/ClickHouse/issues/65096): The setting `allow_experimental_join_condition` was accidentally marked as important, which may prevent distributed queries in a mixed-versions cluster from being executed successfully. [#65008](https://github.com/ClickHouse/ClickHouse/pull/65008) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).

#### Build/Testing/Packaging Improvement
* Backported in [#65132](https://github.com/ClickHouse/ClickHouse/issues/65132): Decrease the size of the `unit-test` image a few times. [#65102](https://github.com/ClickHouse/ClickHouse/pull/65102) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).

#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#64729](https://github.com/ClickHouse/ClickHouse/issues/64729): Fixed `CREATE TABLE AS` queries for tables with default expressions. [#64455](https://github.com/ClickHouse/ClickHouse/pull/64455) ([Anton Popov](https://github.com/CurtizJ)).
* Backported in [#65061](https://github.com/ClickHouse/ClickHouse/issues/65061): Fix the `Expression nodes list expected 1 projection names` and `Unknown expression or identifier` errors for queries with aliases to `GLOBAL IN`. Fixes [#64445](https://github.com/ClickHouse/ClickHouse/issues/64445). [#64517](https://github.com/ClickHouse/ClickHouse/pull/64517) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#65088](https://github.com/ClickHouse/ClickHouse/issues/65088): Fix removing the `WHERE` and `PREWHERE` expressions which are always true (for the new analyzer). Fixes [#64575](https://github.com/ClickHouse/ClickHouse/issues/64575). [#64695](https://github.com/ClickHouse/ClickHouse/pull/64695) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#64944](https://github.com/ClickHouse/ClickHouse/issues/64944): Fix OrderByLimitByDuplicateEliminationVisitor across subqueries. [#64766](https://github.com/ClickHouse/ClickHouse/pull/64766) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#64873](https://github.com/ClickHouse/ClickHouse/issues/64873): Fixed possible incorrect memory tracking in several kinds of queries: queries that read any data from S3, queries via the HTTP protocol, and asynchronous inserts. [#64844](https://github.com/ClickHouse/ClickHouse/pull/64844) ([Anton Popov](https://github.com/CurtizJ)).
* Backported in [#64984](https://github.com/ClickHouse/ClickHouse/issues/64984): Fix the `Block structure mismatch` error for queries reading with `PREWHERE` from a materialized view when the materialized view has columns of different types than the source table. Fixes [#64611](https://github.com/ClickHouse/ClickHouse/issues/64611). [#64855](https://github.com/ClickHouse/ClickHouse/pull/64855) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#64976](https://github.com/ClickHouse/ClickHouse/issues/64976): Fix a rare crash when a table has a TTL with a subquery + database replicated + parallel replicas + analyzer. It's really rare, but please don't use TTLs with subqueries. [#64858](https://github.com/ClickHouse/ClickHouse/pull/64858) ([alesapin](https://github.com/alesapin)).
* Backported in [#65074](https://github.com/ClickHouse/ClickHouse/issues/65074): Fix the `ALTER MODIFY COMMENT` query that was broken for parameterized VIEWs in https://github.com/ClickHouse/ClickHouse/pull/54211. [#65031](https://github.com/ClickHouse/ClickHouse/pull/65031) ([Nikolay Degterinsky](https://github.com/evillique)).
* Backported in [#65179](https://github.com/ClickHouse/ClickHouse/issues/65179): Fix the `Unknown expression identifier` error for remote queries with `INTERPOLATE (alias)` (new analyzer). Fixes [#64636](https://github.com/ClickHouse/ClickHouse/issues/64636). [#65090](https://github.com/ClickHouse/ClickHouse/pull/65090) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#65163](https://github.com/ClickHouse/ClickHouse/issues/65163): Fix pushing arithmetic operations out of aggregation. In the new analyzer, the optimization was applied only once. Part of [#62245](https://github.com/ClickHouse/ClickHouse/issues/62245). [#65104](https://github.com/ClickHouse/ClickHouse/pull/65104) ([Dmitry Novik](https://github.com/novikd)).

#### Critical Bug Fix (crash, LOGICAL_ERROR, data loss, RBAC)
* Backported in [#64882](https://github.com/ClickHouse/ClickHouse/issues/64882): This PR fixes an error where a user in a specific situation could escalate their privileges on the default database without the necessary grants. [#64769](https://github.com/ClickHouse/ClickHouse/pull/64769) ([pufit](https://github.com/pufit)).

#### NOT FOR CHANGELOG / INSIGNIFICANT
* Backported in [#65002](https://github.com/ClickHouse/ClickHouse/issues/65002): Be more graceful with existing tables with `inverted` indexes. [#64656](https://github.com/ClickHouse/ClickHouse/pull/64656) ([Robert Schulze](https://github.com/rschu1ze)).
* Backported in [#65115](https://github.com/ClickHouse/ClickHouse/issues/65115): Adjust the `version_helper` and script to a new release scheme. [#64759](https://github.com/ClickHouse/ClickHouse/pull/64759) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Backported in [#64796](https://github.com/ClickHouse/ClickHouse/issues/64796): Fix crash with DISTINCT and window functions. [#64767](https://github.com/ClickHouse/ClickHouse/pull/64767) ([Igor Nikonov](https://github.com/devcrafter)).

14
docs/changelogs/v24.5.3.5-stable.md
Normal file
@ -0,0 +1,14 @@
---
sidebar_position: 1
sidebar_label: 2024
---

# 2024 Changelog

### ClickHouse release v24.5.3.5-stable (e0eb66f8e17) FIXME as compared to v24.5.2.34-stable (45589aeee49)

#### NOT FOR CHANGELOG / INSIGNIFICANT
* Backported in [#65227](https://github.com/ClickHouse/ClickHouse/issues/65227): Capture weak_ptr of ContextAccess for safety. [#65051](https://github.com/ClickHouse/ClickHouse/pull/65051) ([Alexander Gololobov](https://github.com/davenger)).
* Backported in [#65219](https://github.com/ClickHouse/ClickHouse/issues/65219): Fix false-positive memory leak warnings in OpenSSL. [#65125](https://github.com/ClickHouse/ClickHouse/pull/65125) ([Robert Schulze](https://github.com/rschu1ze)).

@ -71,7 +71,7 @@ If it fails, fix the style errors following the [code style guide](style.md).
```sh
mkdir -p /tmp/test_output
# running all checks
-docker run --rm --volume=.:/ClickHouse --volume=/tmp/test_output:/test_output -u $(id -u ${USER}):$(id -g ${USER}) --cap-add=SYS_PTRACE clickhouse/style-test
+python3 tests/ci/style_check.py --no-push

# run specified check script (e.g.: ./check-mypy)
docker run --rm --volume=.:/ClickHouse --volume=/tmp/test_output:/test_output -u $(id -u ${USER}):$(id -g ${USER}) --cap-add=SYS_PTRACE --entrypoint= -w/ClickHouse/utils/check-style clickhouse/style-test ./check-mypy
@ -91,6 +91,9 @@ cd ./utils/check-style
# Check python type hinting with mypy
./check-mypy

+# Check python with flake8
+./check-flake8

# Check code with codespell
./check-typos

@ -229,6 +229,10 @@ For production builds, clang is used, but we also test make gcc builds. For deve
## Sanitizers {#sanitizers}

+:::note
+If the process (ClickHouse server or client) crashes at startup when running it locally, you might need to disable address space layout randomization: `sudo sysctl kernel.randomize_va_space=0`
+:::

### Address sanitizer
We run functional, integration, stress and unit tests under ASan on a per-commit basis.

@ -54,6 +54,7 @@ SELECT * FROM test_table;
- `_path` — Path to the file. Type: `LowCardinality(String)`.
- `_file` — Name of the file. Type: `LowCardinality(String)`.
- `_size` — Size of the file in bytes. Type: `Nullable(UInt64)`. If the size is unknown, the value is `NULL`.
+- `_time` — Last modified time of the file. Type: `Nullable(DateTime)`. If the time is unknown, the value is `NULL`.

## See also
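To make the virtual columns above concrete, a minimal sketch (the file name `data.csv` and its presence in the server's `user_files` directory are assumptions, not part of the diff):

```sql
-- Read the virtual columns, including the newly added _time, alongside file data;
-- data.csv is a hypothetical file assumed to exist in the server's user_files directory.
SELECT _path, _file, _size, _time
FROM file('data.csv', 'CSVWithNames')
LIMIT 5;
```
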
@ -235,6 +235,7 @@ libhdfs3 supports HDFS namenode HA.
- `_path` — Path to the file. Type: `LowCardinality(String)`.
- `_file` — Name of the file. Type: `LowCardinality(String)`.
- `_size` — Size of the file in bytes. Type: `Nullable(UInt64)`. If the size is unknown, the value is `NULL`.
+- `_time` — Last modified time of the file. Type: `Nullable(DateTime)`. If the time is unknown, the value is `NULL`.

## Storage Settings {#storage-settings}

@ -34,10 +34,11 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name
- `options` — MongoDB connection string options (optional parameter).

:::tip
-If you are using the MongoDB Atlas cloud offering please add these options:
+If you are using the MongoDB Atlas cloud offering:

```
-'connectTimeoutMS=10000&ssl=true&authSource=admin'
+- the connection URL can be obtained from the 'Atlas SQL' option
+- use options: 'connectTimeoutMS=10000&ssl=true&authSource=admin'
```

:::
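For context, a hedged sketch of a table using those Atlas options; the host, database, collection, and credentials below are placeholders, not values from the original:

```sql
-- Hypothetical MongoDB Atlas connection; every identifier here is a placeholder.
-- The options string is the one recommended in the tip above.
CREATE TABLE atlas_collection
(
    id UInt64,
    name String
)
ENGINE = MongoDB('mycluster.mongodb.net:27017', 'mydb', 'mycollection', 'myuser', 'mypassword',
                 'connectTimeoutMS=10000&ssl=true&authSource=admin');
```
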
@ -145,6 +145,7 @@ Code: 48. DB::Exception: Received from localhost:9000. DB::Exception: Reading fr
- `_path` — Path to the file. Type: `LowCardinality(String)`.
- `_file` — Name of the file. Type: `LowCardinality(String)`.
- `_size` — Size of the file in bytes. Type: `Nullable(UInt64)`. If the size is unknown, the value is `NULL`.
+- `_time` — Last modified time of the file. Type: `Nullable(DateTime)`. If the time is unknown, the value is `NULL`.

For more information about virtual columns see [here](../../../engines/table-engines/index.md#table_engines-virtual_columns).

@ -75,7 +75,7 @@ Possible values:
- unordered — With unordered mode, the set of all already processed files is tracked with persistent nodes in ZooKeeper.
- ordered — With ordered mode, only the max name of the successfully consumed file, and the names of files that will be retried after an unsuccessful loading attempt, are stored in ZooKeeper.

-Default value: `unordered`.
+Default value: `ordered` in versions before 24.6. Starting with 24.6, there is no default value and the setting must be specified manually. For tables created on earlier versions, the default value remains `Ordered` for compatibility.

### after_processing {#after_processing}
|
|||||||
|
|
||||||
Default value: `30000`.
|
Default value: `30000`.
|
||||||
|
|
||||||
|
### s3queue_buckets {#buckets}
|
||||||
|
|
||||||
|
For 'Ordered' mode. Available since `24.6`. If there are several replicas of S3Queue table, each working with the same metadata directory in keeper, the value of `s3queue_buckets` needs to be equal to at least the number of replicas. If `s3queue_processing_threads` setting is used as well, it makes sense to increase the value of `s3queue_buckets` setting even further, as it defines the actual parallelism of `S3Queue` processing.
|
||||||
|
|
||||||
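
A hedged sketch of how this might look for a table with three replicas; the bucket URL, format, and column are placeholders, and authentication is omitted:

```sql
-- Hypothetical S3Queue table shared by three replicas: s3queue_buckets is set to
-- at least the replica count, as the paragraph above recommends.
CREATE TABLE s3queue_events (data String)
ENGINE = S3Queue('https://example-bucket.s3.amazonaws.com/events/*', 'JSONEachRow')
SETTINGS mode = 'ordered', s3queue_buckets = 3;
```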

## S3-related Settings {#s3-settings}

The engine supports all S3-related settings. For more information about S3 settings see [here](../../../engines/table-engines/integrations/s3.md).
@ -267,7 +271,7 @@ For introspection use `system.s3queue` stateless table and `system.s3queue_log`
    `exception` String
)
ENGINE = SystemS3Queue
-COMMENT 'SYSTEM TABLE is built on the fly.' │
+COMMENT 'Contains in-memory state of S3Queue metadata and currently processed rows per file.' │
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
```

@ -37,7 +37,7 @@ ways, for example with respect to their DDL/DQL syntax or performance/compressio
To use full-text indexes, first enable them in the configuration:

```sql
-SET allow_experimental_inverted_index = true;
+SET allow_experimental_full_text_index = true;
```

A full-text index can be defined on a string column using the following syntax
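The exact definition syntax follows later on the original page; purely as a hedged sketch, a declaration could look roughly like this (the `full_text` index type name is an assumption, not taken from this diff):

```sql
SET allow_experimental_full_text_index = true;

-- Hypothetical illustration: the index type name full_text is an assumption.
CREATE TABLE tab
(
    key UInt64,
    str String,
    INDEX inv_idx(str) TYPE full_text
)
ENGINE = MergeTree
ORDER BY key;
```
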
@ -6,41 +6,32 @@ sidebar_label: MergeTree

# MergeTree

-The `MergeTree` engine and other engines of this family (`*MergeTree`) are the most commonly used and most robust ClickHouse table engines.
+The `MergeTree` engine and other engines of the `MergeTree` family (e.g. `ReplacingMergeTree`, `AggregatingMergeTree`) are the most commonly used and most robust table engines in ClickHouse.

-Engines in the `MergeTree` family are designed for inserting a very large amount of data into a table. The data is quickly written to the table part by part, then rules are applied for merging the parts in the background. This method is much more efficient than continually rewriting the data in storage during insert.
+`MergeTree`-family table engines are designed for high data ingest rates and huge data volumes.
+Insert operations create table parts which are merged by a background process with other table parts.

-Main features:
+Main features of `MergeTree`-family table engines:

-- Stores data sorted by primary key.
+- The table's primary key determines the sort order within each table part (clustered index). The primary key does not reference individual rows but blocks of 8192 rows called granules. This makes primary keys of huge data sets small enough to remain loaded in main memory, while still providing fast access to on-disk data.

-  This allows you to create a small sparse index that helps find data faster.
+- Tables can be partitioned using an arbitrary partition expression. Partition pruning ensures partitions are omitted from reading when the query allows it.

-- Partitions can be used if the [partitioning key](/docs/en/engines/table-engines/mergetree-family/custom-partitioning-key.md) is specified.
+- Data can be replicated across multiple cluster nodes for high availability, failover, and zero downtime upgrades. See [Data replication](/docs/en/engines/table-engines/mergetree-family/replication.md).

-  ClickHouse supports certain operations with partitions that are more efficient than general operations on the same data with the same result. ClickHouse also automatically cuts off the partition data where the partitioning key is specified in the query.
+- `MergeTree` table engines support various statistics kinds and sampling methods to help query optimization.

-- Data replication support.
-
-  The family of `ReplicatedMergeTree` tables provides data replication. For more information, see [Data replication](/docs/en/engines/table-engines/mergetree-family/replication.md).
-
-- Data sampling support.
-
-  If necessary, you can set the data sampling method in the table.
-
-:::info
-The [Merge](/docs/en/engines/table-engines/special/merge.md/#merge) engine does not belong to the `*MergeTree` family.
-:::
+:::note
+Despite a similar name, the [Merge](/docs/en/engines/table-engines/special/merge.md/#merge) engine is different from `*MergeTree` engines.
+:::

-If you need to update rows frequently, we recommend using the [`ReplacingMergeTree`](/docs/en/engines/table-engines/mergetree-family/replacingmergetree.md) table engine. Using `ALTER TABLE my_table UPDATE` to update rows triggers a mutation, which causes parts to be re-written and uses IO/resources. With `ReplacingMergeTree`, you can simply insert the updated rows and the old rows will be replaced according to the table sorting key.
-
-## Creating a Table {#table_engine-mergetree-creating-a-table}
+## Creating Tables {#table_engine-mergetree-creating-a-table}

``` sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
-    name1 [type1] [[NOT] NULL] [DEFAULT|MATERIALIZED|ALIAS|EPHEMERAL expr1] [COMMENT ...] [CODEC(codec1)] [STATISTIC(stat1)] [TTL expr1] [PRIMARY KEY] [SETTINGS (name = value, ...)],
+    name1 [type1] [[NOT] NULL] [DEFAULT|MATERIALIZED|ALIAS|EPHEMERAL expr1] [COMMENT ...] [CODEC(codec1)] [STATISTICS(stat1)] [TTL expr1] [PRIMARY KEY] [SETTINGS (name = value, ...)],
-    name2 [type2] [[NOT] NULL] [DEFAULT|MATERIALIZED|ALIAS|EPHEMERAL expr2] [COMMENT ...] [CODEC(codec2)] [STATISTIC(stat2)] [TTL expr2] [PRIMARY KEY] [SETTINGS (name = value, ...)],
+    name2 [type2] [[NOT] NULL] [DEFAULT|MATERIALIZED|ALIAS|EPHEMERAL expr2] [COMMENT ...] [CODEC(codec2)] [STATISTICS(stat2)] [TTL expr2] [PRIMARY KEY] [SETTINGS (name = value, ...)],
    ...
    INDEX index_name1 expr1 TYPE type1(...) [GRANULARITY value1],
    INDEX index_name2 expr2 TYPE type2(...) [GRANULARITY value2],
@ -59,23 +50,24 @@ ORDER BY expr
[SETTINGS name = value, ...]
```

-For a description of parameters, see the [CREATE query description](/docs/en/sql-reference/statements/create/table.md).
+For a detailed description of the parameters, see the [CREATE TABLE](/docs/en/sql-reference/statements/create/table.md) statement.

### Query Clauses {#mergetree-query-clauses}

#### ENGINE

-`ENGINE` — Name and parameters of the engine. `ENGINE = MergeTree()`. The `MergeTree` engine does not have parameters.
+`ENGINE` — Name and parameters of the engine. `ENGINE = MergeTree()`. The `MergeTree` engine has no parameters.

#### ORDER_BY

`ORDER BY` — The sorting key.

-A tuple of column names or arbitrary expressions. Example: `ORDER BY (CounterID, EventDate)`.
+A tuple of column names or arbitrary expressions. Example: `ORDER BY (CounterID + 1, EventDate)`.

-ClickHouse uses the sorting key as a primary key if the primary key is not defined explicitly by the `PRIMARY KEY` clause.
+If no primary key is defined (i.e. `PRIMARY KEY` was not specified), ClickHouse uses the sorting key as the primary key.

-Use the `ORDER BY tuple()` syntax, if you do not need sorting, or set `create_table_empty_primary_key_by_default` to `true` to use the `ORDER BY tuple()` syntax by default. See [Selecting the Primary Key](#selecting-the-primary-key).
+If no sorting is required, you can use the syntax `ORDER BY tuple()`.
+Alternatively, if the setting `create_table_empty_primary_key_by_default` is enabled, `ORDER BY tuple()` is implicitly added to `CREATE TABLE` statements. See [Selecting a Primary Key](#selecting-a-primary-key).

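To make the two cases above concrete, a small sketch (table and column names are illustrative):

```sql
-- Sorting key built from an arbitrary expression.
CREATE TABLE hits
(
    CounterID UInt32,
    EventDate Date
)
ENGINE = MergeTree
ORDER BY (CounterID + 1, EventDate);

-- A table without any sorting.
CREATE TABLE unsorted
(
    x UInt64
)
ENGINE = MergeTree
ORDER BY tuple();
```
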
#### PARTITION BY

@ -87,96 +79,32 @@ For partitioning by month, use the `toYYYYMM(date_column)` expression, where `da

`PRIMARY KEY` — The primary key if it [differs from the sorting key](#choosing-a-primary-key-that-differs-from-the-sorting-key). Optional.

-By default the primary key is the same as the sorting key (which is specified by the `ORDER BY` clause). Thus in most cases it is unnecessary to specify a separate `PRIMARY KEY` clause.
+Specifying a sorting key (using the `ORDER BY` clause) implicitly specifies a primary key.
+It is usually not necessary to specify the primary key in addition to the sorting key.

#### SAMPLE BY

`SAMPLE BY` — A sampling expression. Optional.

If specified, it must be contained in the primary key.
The sampling expression must result in an unsigned integer.

Example: `SAMPLE BY intHash32(UserID) ORDER BY (CounterID, EventDate, intHash32(UserID))`.

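At query time, the sampling key lets `SELECT` read an approximate subset of the data; a sketch (the table name is hypothetical):

```sql
-- Assuming a table created with SAMPLE BY intHash32(UserID):
SELECT count()
FROM visits
SAMPLE 1 / 10; -- read roughly 10% of the data
```
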
#### TTL

`TTL` — A list of rules that specify the storage duration of rows and the logic of automatic parts movement [between disks and volumes](#table_engine-mergetree-multiple-volumes). Optional.

Expression must result in a `Date` or `DateTime`, e.g. `TTL date + INTERVAL 1 DAY`.

Type of the rule `DELETE|TO DISK 'xxx'|TO VOLUME 'xxx'|GROUP BY` specifies an action to be done with the part if the expression is satisfied (reaches current time): removal of expired rows, moving a part (if the expression is satisfied for all rows in a part) to the specified disk (`TO DISK 'xxx'`) or volume (`TO VOLUME 'xxx'`), or aggregating values in expired rows. The default rule type is removal (`DELETE`). A list of multiple rules can be specified, but there should be no more than one `DELETE` rule.

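As a sketch, several rules can be combined on one table (the volume name `slow_disks` is hypothetical and must exist in the table's storage policy):

```sql
CREATE TABLE ttl_example
(
    d DateTime,
    value UInt64
)
ENGINE = MergeTree
ORDER BY d
TTL d + INTERVAL 1 WEEK TO VOLUME 'slow_disks', -- move week-old parts to a cheaper volume
    d + INTERVAL 1 MONTH;                       -- DELETE is the implicit default action
```
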
For more details, see [TTL for columns and tables](#table_engine-mergetree-ttl).

#### SETTINGS

See [MergeTree Settings](../../../operations/settings/merge-tree-settings.md).

**Example of Sections Setting**

@@ -266,7 +194,7 @@ ClickHouse does not require a unique primary key. You can insert multiple rows w

You can use `Nullable`-typed expressions in the `PRIMARY KEY` and `ORDER BY` clauses but it is strongly discouraged. To allow this feature, turn on the [allow_nullable_key](/docs/en/operations/settings/settings.md/#allow-nullable-key) setting. The [NULLS_LAST](/docs/en/sql-reference/statements/select/order-by.md/#sorting-of-special-values) principle applies for `NULL` values in the `ORDER BY` clause.

### Selecting a Primary Key {#selecting-a-primary-key}

The number of columns in the primary key is not explicitly limited. Depending on the data structure, you can include more or fewer columns in the primary key. This may:

@@ -1039,12 +967,12 @@ ClickHouse versions 22.3 through 22.7 use a different cache configuration, see [

## Column Statistics (Experimental) {#column-statistics}

The statistics declaration is in the columns section of the `CREATE` query for tables from the `*MergeTree*` family when we enable `set allow_experimental_statistics = 1`.

``` sql
CREATE TABLE tab
(
    a Int64 STATISTICS(TDigest, Uniq),
    b Float64
)
ENGINE = MergeTree
ORDER BY a
```

We can also manipulate statistics with `ALTER` statements.

```sql
ALTER TABLE tab ADD STATISTICS b TYPE TDigest, Uniq;
ALTER TABLE tab DROP STATISTICS a;
```

These lightweight statistics aggregate information about the distribution of values in columns. Statistics are stored in every part and updated on every insert.
They can be used for prewhere optimization only if we enable `set allow_statistics_optimize = 1`.

#### Available Types of Column Statistics {#available-types-of-column-statistics}

- `TDigest`

    Stores the distribution of values from numeric columns in a [TDigest](https://github.com/tdunning/t-digest) sketch.

- `Uniq`

    Estimates the number of distinct values of a column by HyperLogLog.

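A hedged sketch of how these statistics are meant to be used, reusing the table `tab` from above (the filter values are arbitrary):

```sql
SET allow_experimental_statistics = 1;
SET allow_statistics_optimize = 1;

-- With a TDigest sketch on `a`, the optimizer can estimate how selective
-- the condition `a < 100` is and order prewhere conditions accordingly.
SELECT count()
FROM tab
WHERE a < 100 AND b > 0.5;
```
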
## Column-level Settings {#column-level-settings}

Certain MergeTree settings can be overridden at the column level:

@@ -102,6 +102,7 @@ For partitioning by month, use the `toYYYYMM(date_column)` expression, where `da

- `_path` — Path to the file. Type: `LowCardinality(String)`.
- `_file` — Name of the file. Type: `LowCardinality(String)`.
- `_size` — Size of the file in bytes. Type: `Nullable(UInt64)`. If the size is unknown, the value is `NULL`.
- `_time` — Last modified time of the file. Type: `Nullable(DateTime)`. If the time is unknown, the value is `NULL`.

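As an illustration, a sketch that surfaces these virtual columns while reading (assuming the `file` table function, which exposes the same virtual columns):

```sql
SELECT _path, _file, _size, _time, count() AS rows
FROM file('data/*.csv', CSV)
GROUP BY _path, _file, _size, _time;
```
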
## Settings {#settings}

@@ -108,6 +108,7 @@ For partitioning by month, use the `toYYYYMM(date_column)` expression, where `da

- `_path` — Path to the `URL`. Type: `LowCardinality(String)`.
- `_file` — Resource name of the `URL`. Type: `LowCardinality(String)`.
- `_size` — Size of the resource in bytes. Type: `Nullable(UInt64)`. If the size is unknown, the value is `NULL`.
- `_time` — Last modified time of the file. Type: `Nullable(DateTime)`. If the time is unknown, the value is `NULL`.

## Storage Settings {#storage-settings}

@@ -480,7 +480,7 @@ The CSV format supports the output of totals and extremes the same way as `TabSe

- [input_format_csv_detect_header](/docs/en/operations/settings/settings-formats.md/#input_format_csv_detect_header) - automatically detect header with names and types in CSV format. Default value - `true`.
- [input_format_csv_skip_trailing_empty_lines](/docs/en/operations/settings/settings-formats.md/#input_format_csv_skip_trailing_empty_lines) - skip trailing empty lines at the end of data. Default value - `false`.
- [input_format_csv_trim_whitespaces](/docs/en/operations/settings/settings-formats.md/#input_format_csv_trim_whitespaces) - trim spaces and tabs in non-quoted CSV strings. Default value - `true`.
- [input_format_csv_allow_whitespace_or_tab_as_delimiter](/docs/en/operations/settings/settings-formats.md/#input_format_csv_allow_whitespace_or_tab_as_delimiter) - allow to use whitespace or tab as field delimiter in CSV strings. Default value - `false`.
- [input_format_csv_allow_variable_number_of_columns](/docs/en/operations/settings/settings-formats.md/#input_format_csv_allow_variable_number_of_columns) - allow variable number of columns in CSV format, ignore extra columns and use default values on missing columns. Default value - `false`.
- [input_format_csv_use_default_on_bad_values](/docs/en/operations/settings/settings-formats.md/#input_format_csv_use_default_on_bad_values) - allow to set default value to column when CSV field deserialization fails on a bad value. Default value - `false`.
- [input_format_csv_try_infer_numbers_from_strings](/docs/en/operations/settings/settings-formats.md/#input_format_csv_try_infer_numbers_from_strings) - try to infer numbers from string fields during schema inference. Default value - `false`.

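A quick sketch of applying some of these settings when importing a file (the file name is hypothetical):

```sql
SELECT *
FROM file('data.csv', CSV)
SETTINGS input_format_csv_detect_header = 1,
         input_format_csv_skip_trailing_empty_lines = 1;
```
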
@@ -2165,6 +2165,8 @@ To exchange data with Hadoop, you can use [HDFS table engine](/docs/en/engines/t

- [output_format_parquet_fixed_string_as_fixed_byte_array](/docs/en/operations/settings/settings-formats.md/#output_format_parquet_fixed_string_as_fixed_byte_array) - use Parquet FIXED_LENGTH_BYTE_ARRAY type instead of Binary/String for FixedString columns. Default value - `true`.
- [output_format_parquet_version](/docs/en/operations/settings/settings-formats.md/#output_format_parquet_version) - the version of Parquet format used in output format. Default value - `2.latest`.
- [output_format_parquet_compression_method](/docs/en/operations/settings/settings-formats.md/#output_format_parquet_compression_method) - compression method used in output Parquet format. Default value - `lz4`.
- [input_format_parquet_max_block_size](/docs/en/operations/settings/settings-formats.md/#input_format_parquet_max_block_size) - max block row size for the Parquet reader. Default value - `65409`.
- [input_format_parquet_prefer_block_bytes](/docs/en/operations/settings/settings-formats.md/#input_format_parquet_prefer_block_bytes) - average block bytes output by the Parquet reader. Default value - `16744704`.

## ParquetMetadata {data-format-parquet-metadata}

docs/en/operations/analyzer.md (new file, 194 lines)

@@ -0,0 +1,194 @@

---
slug: /en/operations/analyzer
sidebar_label: Analyzer
title: Analyzer
description: Details about ClickHouse's query analyzer
keywords: [analyzer]
---

# Analyzer

<BetaBadge />

## Known incompatibilities

In ClickHouse version `24.3`, the new query analyzer was enabled by default.
Despite fixing a large number of bugs and introducing new optimizations, it also introduces some breaking changes in ClickHouse behaviour. Please read the following changes to determine how to rewrite your queries for the new analyzer.

### Invalid queries are no longer optimized

The previous query planning infrastructure applied AST-level optimizations before the query validation step.
Optimizations could rewrite the initial query so that it became valid and could be executed.

In the new analyzer, query validation takes place before the optimization step.
This means that invalid queries that could previously be executed are now unsupported.
In such cases, the query must be fixed manually.

**Example 1:**

```sql
SELECT number
FROM numbers(1)
GROUP BY toString(number)
```

This query uses column `number` in the projection list when only `toString(number)` is available after the aggregation.
In the old analyzer, `GROUP BY toString(number)` was optimized into `GROUP BY number`, making the query valid.

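To make this example valid under the new analyzer, one straightforward rewrite (a sketch, not from the original text) is to select the grouped expression itself:

```sql
SELECT toString(number) AS number_str
FROM numbers(1)
GROUP BY toString(number)
```
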
**Example 2:**

```sql
SELECT
    number % 2 AS n,
    sum(number)
FROM numbers(10)
GROUP BY n
HAVING number > 5
```

The same problem occurs in this query: column `number` is used after aggregation with another key.
The previous query analyzer fixed this query by moving the `number > 5` filter from the `HAVING` clause to the `WHERE` clause.

To fix the query, you should move all conditions that apply to non-aggregated columns to the `WHERE` section to conform to standard SQL syntax:

```sql
SELECT
    number % 2 AS n,
    sum(number)
FROM numbers(10)
WHERE number > 5
GROUP BY n
```

### CREATE VIEW with invalid query

The new analyzer always performs type-checking.
Previously, it was possible to create a `VIEW` with an invalid `SELECT` query. It would then fail during the first `SELECT` or `INSERT` (in the case of `MATERIALIZED VIEW`).

Now, it's not possible to create such `VIEW`s anymore.

**Example:**

```sql
CREATE TABLE source (data String) ENGINE=MergeTree ORDER BY tuple();

CREATE VIEW some_view
AS SELECT JSONExtract(data, 'test', 'DateTime64(3)')
FROM source;
```

### Known incompatibilities of the `JOIN` clause

#### Join using column from projection

An alias from the `SELECT` list cannot be used as a `JOIN USING` key by default.

A new setting, `analyzer_compatibility_join_using_top_level_identifier`, when enabled, alters the behavior of `JOIN USING` to prefer to resolve identifiers based on expressions from the projection list of the `SELECT` query, rather than using the columns from the left table directly.

**Example:**

```sql
SELECT a + 1 AS b, t2.s
FROM Values('a UInt64, b UInt64', (1, 1)) AS t1
JOIN Values('b UInt64, s String', (1, 'one'), (2, 'two')) t2
USING (b);
```

With `analyzer_compatibility_join_using_top_level_identifier` set to `true`, the join condition is interpreted as `t1.a + 1 = t2.b`, matching the behavior of earlier versions. So, the result will be `2, 'two'`.
When the setting is `false`, the join condition defaults to `t1.b = t2.b`, and the query will return `2, 'one'`.
If `b` is not present in `t1`, the query will fail with an error.

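To keep the old behavior during migration, the setting can also be applied per query; a sketch:

```sql
SELECT a + 1 AS b, t2.s
FROM Values('a UInt64, b UInt64', (1, 1)) AS t1
JOIN Values('b UInt64, s String', (1, 'one'), (2, 'two')) t2
USING (b)
SETTINGS analyzer_compatibility_join_using_top_level_identifier = 1;
```
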
#### Changes in behavior with `JOIN USING` and `ALIAS`/`MATERIALIZED` columns

In the new analyzer, using `*` in a `JOIN USING` query that involves `ALIAS` or `MATERIALIZED` columns will include those columns in the result set by default.

**Example:**

```sql
CREATE TABLE t1 (id UInt64, payload ALIAS sipHash64(id)) ENGINE = MergeTree ORDER BY id;
INSERT INTO t1 VALUES (1), (2);

CREATE TABLE t2 (id UInt64, payload ALIAS sipHash64(id)) ENGINE = MergeTree ORDER BY id;
INSERT INTO t2 VALUES (2), (3);

SELECT * FROM t1
FULL JOIN t2 USING (payload);
```

In the new analyzer, the result of this query will include the `payload` column along with `id` from both tables. In contrast, the previous analyzer would only include these `ALIAS` columns if specific settings (`asterisk_include_alias_columns` or `asterisk_include_materialized_columns`) were enabled, and the columns might appear in a different order.

To ensure consistent and expected results, especially when migrating old queries to the new analyzer, it is advisable to specify columns explicitly in the `SELECT` clause rather than using `*`.

#### Handling of Type Modifiers for columns in `USING` Clause

In the new version of the analyzer, the rules for determining the common supertype for columns specified in the `USING` clause have been standardized to produce more predictable outcomes, especially when dealing with type modifiers like `LowCardinality` and `Nullable`.

- `LowCardinality(T)` and `T`: When a column of type `LowCardinality(T)` is joined with a column of type `T`, the resulting common supertype will be `T`, effectively discarding the `LowCardinality` modifier.

- `Nullable(T)` and `T`: When a column of type `Nullable(T)` is joined with a column of type `T`, the resulting common supertype will be `Nullable(T)`, ensuring that the nullable property is preserved.

**Example:**

```sql
SELECT id, toTypeName(id) FROM Values('id LowCardinality(String)', ('a')) AS t1
FULL OUTER JOIN Values('id String', ('b')) AS t2
USING (id);
```

In this query, the common supertype for `id` is determined as `String`, discarding the `LowCardinality` modifier from `t1`.

### Projection column names changes

During projection names computation, aliases are not substituted.

```sql
SELECT
    1 + 1 AS x,
    x + 1
SETTINGS allow_experimental_analyzer = 0
FORMAT PrettyCompact

┌─x─┬─plus(plus(1, 1), 1)─┐
1. │ 2 │                   3 │
└───┴─────────────────────┘

SELECT
    1 + 1 AS x,
    x + 1
SETTINGS allow_experimental_analyzer = 1
FORMAT PrettyCompact

┌─x─┬─plus(x, 1)─┐
1. │ 2 │          3 │
└───┴────────────┘
```

### Incompatible function arguments types

In the new analyzer, type inference happens during initial query analysis.
This change means that type checks are done before short-circuit evaluation; thus, `if` function arguments must always have a common supertype.

**Example:**

The following query fails with `There is no supertype for types Array(UInt8), String because some of them are Array and some of them are not`:

```sql
SELECT toTypeName(if(0, [2, 3, 4], 'String'))
```

### Heterogeneous clusters

The new analyzer significantly changed the communication protocol between servers in the cluster. Thus, it's impossible to run distributed queries on servers with different `allow_experimental_analyzer` setting values.

### Mutations are interpreted by previous analyzer

Mutations still use the old analyzer.
This means some new ClickHouse SQL features can't be used in mutations, for example, the `QUALIFY` clause.
Status can be checked [here](https://github.com/ClickHouse/ClickHouse/issues/61563).

### Unsupported features

The list of features the new analyzer currently does not support:

- Annoy index.
- Hypothesis index. Work in progress [here](https://github.com/ClickHouse/ClickHouse/pull/48381).
- Window view is not supported. There are no plans to support it in the future.

@@ -7,6 +7,8 @@ sidebar_label: Configuration Files

# Configuration Files

The ClickHouse server can be configured with configuration files in XML or YAML syntax. In most installation types, the ClickHouse server runs with `/etc/clickhouse-server/config.xml` as default configuration file, but it is also possible to specify the location of the configuration file manually at server startup using command line option `--config-file=` or `-C`. Additional configuration files may be placed into directory `config.d/` relative to the main configuration file, for example into directory `/etc/clickhouse-server/config.d/`. Files in this directory and the main configuration are merged in a preprocessing step before the configuration is applied in ClickHouse server. Configuration files are merged in alphabetical order. To simplify updates and improve modularization, it is best practice to keep the default `config.xml` file unmodified and place additional customization into `config.d/`.

(The ClickHouse Keeper configuration lives in `/etc/clickhouse-keeper/keeper_config.xml`, so the additional files need to be placed in `/etc/clickhouse-keeper/keeper_config.d/`.)

It is possible to mix XML and YAML configuration files, for example you could have a main configuration file `config.xml` and additional configuration files `config.d/network.xml`, `config.d/timezone.yaml` and `config.d/keeper.yaml`. Mixing XML and YAML within a single configuration file is not supported. XML configuration files should use `<clickhouse>...</clickhouse>` as top-level tag. In YAML configuration files, `clickhouse:` is optional, the parser inserts it implicitly if absent.

@@ -67,6 +67,23 @@ To manage named collections with DDL a user must have the `named_control_collect

In the above example the `password_sha256_hex` value is the hexadecimal representation of the SHA256 hash of the password. This configuration for the user `default` has the attribute `replace=true` because the default configuration has a plain text `password` set, and it is not possible to have both plain text and sha256 hex passwords set for a user.
:::

### Storage for named collections

Named collections can either be stored on local disk or in ZooKeeper/Keeper. By default, local storage is used.

To configure named collections storage in Keeper, add a `type` (equal to either `keeper` or `zookeeper`) and a `path` (the path in Keeper where named collections will be stored) to the `named_collections_storage` section of the configuration file:

```xml
<clickhouse>
    <named_collections_storage>
        <type>zookeeper</type>
        <path>/named_collections_path/</path>
        <update_timeout_ms>1000</update_timeout_ms>
    </named_collections_storage>
</clickhouse>
```

The optional configuration parameter `update_timeout_ms` is equal to `5000` by default.

## Storing named collections in configuration files

### XML example

@@ -443,3 +460,59 @@ SELECT dictGet('dict', 'b', 1);

│ a │
└─────────────────────────┘
```

## Named collections for accessing Kafka

For the description of the parameters, see [Kafka](../engines/table-engines/integrations/kafka.md).

### DDL example

```sql
CREATE NAMED COLLECTION my_kafka_cluster AS
kafka_broker_list = 'localhost:9092',
kafka_topic_list = 'kafka_topic',
kafka_group_name = 'consumer_group',
kafka_format = 'JSONEachRow',
kafka_max_block_size = '1048576';
```

### XML example

```xml
<clickhouse>
    <named_collections>
        <my_kafka_cluster>
            <kafka_broker_list>localhost:9092</kafka_broker_list>
            <kafka_topic_list>kafka_topic</kafka_topic_list>
            <kafka_group_name>consumer_group</kafka_group_name>
            <kafka_format>JSONEachRow</kafka_format>
            <kafka_max_block_size>1048576</kafka_max_block_size>
        </my_kafka_cluster>
    </named_collections>
</clickhouse>
```

### Example of using named collections with a Kafka table

Both of the following examples use the same named collection `my_kafka_cluster`:

```sql
CREATE TABLE queue
(
    timestamp UInt64,
    level String,
    message String
)
ENGINE = Kafka(my_kafka_cluster);

CREATE TABLE queue
(
    timestamp UInt64,
    level String,
    message String
)
ENGINE = Kafka(my_kafka_cluster)
SETTINGS kafka_num_consumers = 4,
         kafka_thread_per_consumer = 1;
```

@@ -1206,6 +1206,16 @@ Expired time for HSTS in seconds. The default value is 0 means clickhouse disabl

<hsts_max_age>600000</hsts_max_age>
```

## mlock_executable {#mlock_executable}

Perform mlockall after startup to lower first queries latency and to prevent the clickhouse executable from being paged out under high IO load. Enabling this option is recommended but will lead to increased startup time of up to a few seconds.
Keep in mind that this parameter does not work without the "CAP_IPC_LOCK" capability.

**Example**

``` xml
<mlock_executable>false</mlock_executable>
```

## include_from {#include_from}

The path to the file with substitutions. Both XML and YAML formats are supported.

@@ -1353,6 +1363,26 @@ Examples:

<listen_host>127.0.0.1</listen_host>
```

## listen_try {#listen_try}

The server will not exit if IPv6 or IPv4 networks are unavailable while trying to listen.

Examples:

``` xml
<listen_try>0</listen_try>
```

## listen_reuse_port {#listen_reuse_port}

Allow multiple servers to listen on the same address:port. Requests will be routed to a random server by the operating system. Enabling this setting is not recommended.

Examples:

``` xml
<listen_reuse_port>0</listen_reuse_port>
```

## listen_backlog {#listen_backlog}

Backlog (queue size of pending connections) of the listen socket.

@@ -2894,6 +2924,8 @@ Define proxy servers for HTTP and HTTPS requests, currently supported by S3 stor

There are three ways to define proxy servers: environment variables, proxy lists, and remote proxy resolvers.

Bypassing proxy servers for specific hosts is also supported with the use of `no_proxy`.

### Environment variables

The `http_proxy` and `https_proxy` environment variables allow you to specify a

@@ -3003,6 +3035,29 @@ This also allows a mix of resolver types can be used.

By default, tunneling (i.e., `HTTP CONNECT`) is used to make `HTTPS` requests over an `HTTP` proxy. This setting can be used to disable it.

### no_proxy

By default, all requests will go through the proxy. In order to disable it for specific hosts, the `no_proxy` variable must be set.
It can be set inside the `<proxy>` clause for list and remote resolvers, and as an environment variable for the environment resolver.
It supports IP addresses, domains, subdomains and the `'*'` wildcard for full bypass. Leading dots are stripped, just like curl does.

Example:

The configuration below bypasses proxy requests to `clickhouse.cloud` and all of its subdomains (e.g., `auth.clickhouse.cloud`).
The same applies to GitLab, even though it has a leading dot. Both `gitlab.com` and `about.gitlab.com` would bypass the proxy.

``` xml
<proxy>
    <no_proxy>clickhouse.cloud,.gitlab.com</no_proxy>
    <http>
        <uri>http://proxy1</uri>
        <uri>http://proxy2:3128</uri>
    </http>
    <https>
        <uri>http://proxy1:3128</uri>
    </https>
</proxy>
```

## max_materialized_views_count_for_table {#max_materialized_views_count_for_table}

A limit on the number of materialized views attached to a table.

@@ -3,9 +3,126 @@ slug: /en/operations/settings/merge-tree-settings

title: "MergeTree tables settings"
---

System table `system.merge_tree_settings` shows the globally set MergeTree settings.

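For instance, the current global value of a setting can be checked directly; a small sketch:

```sql
SELECT name, value, changed
FROM system.merge_tree_settings
WHERE name = 'max_suspicious_broken_parts';
```
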
MergeTree settings can be set in the `merge_tree` section of the server config file, or specified for each `MergeTree` table individually in
the `SETTINGS` clause of the `CREATE TABLE` statement.

Example for customizing setting `max_suspicious_broken_parts`:

Configure the default for all `MergeTree` tables in the server configuration file:

``` text
<merge_tree>
    <max_suspicious_broken_parts>5</max_suspicious_broken_parts>
</merge_tree>
```

Set for a particular table:

``` sql
CREATE TABLE tab
(
    `A` Int64
)
ENGINE = MergeTree
ORDER BY tuple()
SETTINGS max_suspicious_broken_parts = 500;
```

Change the settings for a particular table using `ALTER TABLE ... MODIFY SETTING`:

```sql
ALTER TABLE tab MODIFY SETTING max_suspicious_broken_parts = 100;

-- reset to global default (value from system.merge_tree_settings)
ALTER TABLE tab RESET SETTING max_suspicious_broken_parts;
```

## index_granularity

Maximum number of data rows between the marks of an index.

Default value: 8192.

## index_granularity_bytes

Maximum size of data granules in bytes.

Default value: 10Mb.

To restrict the granule size only by the number of rows, set to 0 (not recommended).

## min_index_granularity_bytes

Minimum allowed size of data granules in bytes.

Default value: 1024b.

This provides a safeguard against accidentally creating tables with a very low `index_granularity_bytes` value.

## enable_mixed_granularity_parts

Enables or disables transitioning to control the granule size with the `index_granularity_bytes` setting. Before version 19.11, there was only the `index_granularity` setting for restricting granule size. The `index_granularity_bytes` setting improves ClickHouse performance when selecting data from tables with big rows (tens and hundreds of megabytes). If you have tables with big rows, you can enable this setting for the tables to improve the efficiency of `SELECT` queries.

## use_minimalistic_part_header_in_zookeeper

Storage method of the data parts headers in ZooKeeper. If enabled, ZooKeeper stores less data. For details, see [here](../server-configuration-parameters/settings.md/#server-settings-use_minimalistic_part_header_in_zookeeper).

## min_merge_bytes_to_use_direct_io

The minimum data volume for a merge operation that is required for using direct I/O access to the storage disk.
When merging data parts, ClickHouse calculates the total storage volume of all the data to be merged.
If the volume exceeds `min_merge_bytes_to_use_direct_io` bytes, ClickHouse reads and writes the data to the storage disk using the direct I/O interface (`O_DIRECT` option).
If `min_merge_bytes_to_use_direct_io = 0`, then direct I/O is disabled.

Default value: `10 * 1024 * 1024 * 1024` bytes.

## merge_with_ttl_timeout

Minimum delay in seconds before repeating a merge with delete TTL.

Default value: `14400` seconds (4 hours).

## merge_with_recompression_ttl_timeout

Minimum delay in seconds before repeating a merge with recompression TTL.

Default value: `14400` seconds (4 hours).

## write_final_mark

Enables or disables writing the final index mark at the end of the data part (after the last byte).

Default value: 1.

Don't change it, or bad things will happen.

## storage_policy

Storage policy.

## min_bytes_for_wide_part

Minimum number of bytes/rows in a data part that can be stored in `Wide` format.
You can set one, both or none of these settings.

## max_compress_block_size

Maximum size of blocks of uncompressed data before compressing for writing to a table.
You can also specify this setting in the global settings (see the [max_compress_block_size](/docs/en/operations/settings/settings.md/#max-compress-block-size) setting).
The value specified when the table is created overrides the global value for this setting.

## min_compress_block_size

Minimum size of blocks of uncompressed data required for compression when writing the next mark.
You can also specify this setting in the global settings (see the [min_compress_block_size](/docs/en/operations/settings/settings.md/#min-compress-block-size) setting).
The value specified when the table is created overrides the global value for this setting.

## max_partitions_to_read

Limits the maximum number of partitions that can be accessed in one query.
You can also specify the [max_partitions_to_read](/docs/en/operations/settings/merge-tree-settings.md/#max-partitions-to-read) setting globally.

## max_suspicious_broken_parts

@@ -17,37 +134,6 @@ Possible values:

Default value: 100.

## parts_to_throw_insert {#parts-to-throw-insert}

If the number of active parts in a single partition exceeds the `parts_to_throw_insert` value, `INSERT` is interrupted with the `Too many parts (N). Merges are processing significantly slower than inserts` exception.

@@ -301,6 +387,8 @@ Default value: 10800

## try_fetch_recompressed_part_timeout

Timeout (in seconds) before starting a merge with recompression. During this time ClickHouse tries to fetch the recompressed part from the replica which was assigned this merge with recompression.

Recompression works slowly in most cases, so we don't start a merge with recompression until this timeout expires and instead try to fetch the recompressed part from the replica which was assigned this merge.

Possible values:

@@ -885,3 +973,49 @@ Default value: false

**See Also**

- [exclude_deleted_rows_for_part_size_in_merge](#exclude_deleted_rows_for_part_size_in_merge) setting

### optimize_row_order

Controls if the row order should be optimized during inserts to improve the compressibility of the newly inserted table part.

Only has an effect for ordinary MergeTree-engine tables. Does nothing for specialized MergeTree engine tables (e.g. CollapsingMergeTree).

MergeTree tables are (optionally) compressed using [compression codecs](../../sql-reference/statements/create/table.md#column_compression_codec).
Generic compression codecs such as LZ4 and ZSTD achieve maximum compression rates if the data exposes patterns.
Long runs of the same value typically compress very well.

If this setting is enabled, ClickHouse attempts to store the data in newly inserted parts in a row order that minimizes the number of equal-value runs across the columns of the new table part.
In other words, a small number of equal-value runs means that individual runs are long and compress well.

Finding the optimal row order is computationally infeasible (NP hard).
Therefore, ClickHouse uses heuristics to quickly find a row order which still improves compression rates over the original row order.

<details markdown="1">

<summary>Heuristics for finding a row order</summary>

It is generally possible to shuffle the rows of a table (or table part) freely as SQL considers the same table (table part) in different row order equivalent.

This freedom of shuffling rows is restricted when a primary key is defined for the table.
In ClickHouse, a primary key `C1, C2, ..., CN` enforces that the table rows are sorted by columns `C1`, `C2`, ..., `Cn` ([clustered index](https://en.wikipedia.org/wiki/Database_index#Clustered)).
As a result, rows can only be shuffled within "equivalence classes" of rows, i.e. rows which have the same values in their primary key columns.
The intuition is that high-cardinality primary keys, e.g. primary keys involving a `DateTime64` timestamp column, lead to many small equivalence classes.
Likewise, tables with a low-cardinality primary key create few and large equivalence classes.
A table with no primary key represents the extreme case of a single equivalence class which spans all rows.

The fewer and the larger the equivalence classes are, the higher the degree of freedom when re-shuffling rows.

The heuristic applied to find the best row order within each equivalence class was suggested by D. Lemire and O. Kaser in [Reordering columns for smaller indexes](https://doi.org/10.1016/j.ins.2011.02.002) and is based on sorting the rows within each equivalence class by ascending cardinality of the non-primary-key columns.
It performs three steps:
1. Find all equivalence classes based on the row values in primary key columns.
2. For each equivalence class, calculate (usually estimate) the cardinalities of the non-primary-key columns.
3. For each equivalence class, sort the rows in order of ascending non-primary-key column cardinality.

</details>

If enabled, insert operations incur additional CPU costs to analyze and optimize the row order of the new data.
INSERTs are expected to take 30-50% longer depending on the data characteristics.
Compression rates of LZ4 or ZSTD improve on average by 20-40%.

This setting works best for tables with no primary key or a low-cardinality primary key, i.e. a table with only few distinct primary key values.
High-cardinality primary keys, e.g. involving timestamp columns of type `DateTime64`, are not expected to benefit from this setting.

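A sketch of enabling it on a table where it is likely to pay off (the schema is hypothetical):

```sql
-- Low-cardinality sorting key: few, large equivalence classes,
-- so row reordering can create long equal-value runs.
CREATE TABLE events
(
    event_type LowCardinality(String),
    browser    LowCardinality(String),
    url        String
)
ENGINE = MergeTree
ORDER BY event_type
SETTINGS optimize_row_order = 1;
```
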
@@ -1417,6 +1417,17 @@ Compression method used in output Parquet format. Supported codecs: `snappy`, `l

Default value: `lz4`.

### input_format_parquet_max_block_size {#input_format_parquet_max_block_size}

Max block row size for the Parquet reader. By controlling the number of rows in each block, you can control the memory usage,
and for operators that cache blocks, you can improve the accuracy of the operator's memory control.

Default value: `65409`.

### input_format_parquet_prefer_block_bytes {#input_format_parquet_prefer_block_bytes}

Average block bytes output by the Parquet reader. Lowering this setting when reading highly compressed Parquet data relieves memory pressure.

Default value: `65409 * 256 = 16744704`

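A hedged example of tightening the reader's block size when importing a highly compressed file (the file name is hypothetical):

```sql
SELECT count()
FROM file('data.parquet', Parquet)
SETTINGS input_format_parquet_max_block_size = 8192;
```
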
## Hive format settings {#hive-format-settings}

### input_format_hive_text_fields_delimiter {#input_format_hive_text_fields_delimiter}

@ -1590,6 +1590,22 @@ Possible values:
|
|||||||
|
|
||||||
Default value: `default`.
|
Default value: `default`.
|
||||||
|
|
||||||
|
## parallel_replicas_custom_key_range_lower {#parallel_replicas_custom_key_range_lower}
|
||||||
|
|
||||||
|
Allows the filter type `range` to split the work evenly between replicas based on the custom range `[parallel_replicas_custom_key_range_lower, INT_MAX]`.
|
||||||
|
|
||||||
|
When used in conjuction with [parallel_replicas_custom_key_range_upper](#parallel_replicas_custom_key_range_upper), it lets the filter evenly split the work over replicas for the range `[parallel_replicas_custom_key_range_lower, parallel_replicas_custom_key_range_upper]`.
|
||||||
|
|
||||||
|
Note: This setting will not cause any additional data to be filtered during query processing, rather it changes the points at which the range filter breaks up the range `[0, INT_MAX]` for parallel processing.
|
||||||
|
|
||||||
|
## parallel_replicas_custom_key_range_upper {#parallel_replicas_custom_key_range_upper}

Allows the filter type `range` to split the work evenly between replicas based on the custom range `[0, parallel_replicas_custom_key_range_upper]`. A value of 0 disables the upper bound, setting it to the max value of the custom key expression.

When used in conjunction with [parallel_replicas_custom_key_range_lower](#parallel_replicas_custom_key_range_lower), it lets the filter evenly split the work over replicas for the range `[parallel_replicas_custom_key_range_lower, parallel_replicas_custom_key_range_upper]`.

Note: This setting will not cause any additional data to be filtered during query processing; rather, it changes the points at which the range filter breaks up the range `[0, INT_MAX]` for parallel processing.
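A sketch of how these bounds are typically combined with a custom key. The companion settings `parallel_replicas_custom_key` and `parallel_replicas_custom_key_filter_type` are assumed from the custom-key documentation, and the table, key, and values are illustrative:

```sql
-- Hypothetical example: split work over replicas by user_id,
-- dividing the assumed key range [0, 10000000] instead of [0, INT_MAX].
SELECT count()
FROM events
SETTINGS
    max_parallel_replicas = 3,
    parallel_replicas_custom_key = 'user_id',            -- assumed companion setting
    parallel_replicas_custom_key_filter_type = 'range',  -- assumed companion setting
    parallel_replicas_custom_key_range_lower = 0,
    parallel_replicas_custom_key_range_upper = 10000000;
```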
## allow_experimental_parallel_reading_from_replicas

Enables or disables sending SELECT queries to all replicas of a table (up to `max_parallel_replicas`). Reading is parallelized and coordinated dynamically. It will work for any kind of MergeTree table.
@ -3170,6 +3186,18 @@ Possible values:

Default value: `0`.
## lightweight_deletes_sync {#lightweight_deletes_sync}

The same as `mutations_sync`, but controls only the execution of lightweight deletes.

Possible values:

- 0 - Mutations execute asynchronously.
- 1 - The query waits for the lightweight deletes to complete on the current server.
- 2 - The query waits for the lightweight deletes to complete on all replicas (if they exist).

Default value: `2`.
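For example, a lightweight delete can be made fully asynchronous for a single query (the table and filter below are illustrative):

```sql
-- Sketch: fire-and-forget lightweight delete; with the default of 2,
-- the query would instead wait for all replicas to finish.
DELETE FROM events WHERE event_date < '2023-01-01'
SETTINGS lightweight_deletes_sync = 0;
```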
**See Also**

- [Synchronicity of ALTER Queries](../../sql-reference/statements/alter/index.md#synchronicity-of-alter-queries)
@ -3850,6 +3878,10 @@ Possible values:

Default value: 30.
:::note
It's applicable only to the default profile. A server reboot is required for the changes to take effect.
:::
## http_receive_timeout {#http_receive_timeout}

HTTP receive timeout (in seconds).
@ -5108,7 +5140,7 @@ a Tuple(
)
```
## allow_experimental_statistic {#allow_experimental_statistic}
## allow_experimental_statistics {#allow_experimental_statistics}

Allows defining columns with [statistics](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) and [manipulating statistics](../../engines/table-engines/mergetree-family/mergetree.md#column-statistics).
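A hedged sketch of what this flag gates; the `STATISTICS(tdigest)` column syntax and the table below are assumptions taken from the linked MergeTree page, not from this hunk:

```sql
-- Hypothetical example of declaring a column statistic.
SET allow_experimental_statistics = 1;

CREATE TABLE tab
(
    a Int64 STATISTICS(tdigest),  -- assumed statistic type
    b String
)
ENGINE = MergeTree
ORDER BY a;
```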
@ -18,7 +18,7 @@ This tool works via HTTP, not via pipes, shared memory, or TCP because:

However it can be used as a standalone tool from the command line with the following
parameters in the POST-request URL:
- `connection_string` -- ODBC connection string.
- `columns` -- columns in ClickHouse NamesAndTypesList format, name in backticks,
- `sample_block` -- columns description in ClickHouse NamesAndTypesList format, name in backticks,
  type as string. Name and type are space separated, rows separated with
  newline.
- `max_block_size` -- optional parameter, sets maximum size of single block.
@ -106,8 +106,8 @@ To work with these states, use:

- [AggregatingMergeTree](../../engines/table-engines/mergetree-family/aggregatingmergetree.md) table engine.
- [finalizeAggregation](../../sql-reference/functions/other-functions.md#function-finalizeaggregation) function.
- [runningAccumulate](../../sql-reference/functions/other-functions.md#runningaccumulate) function.
- [-Merge](#aggregate_functions_combinators-merge) combinator.
- [-Merge](#-merge) combinator.
- [-MergeState](#aggregate_functions_combinators-mergestate) combinator.
- [-MergeState](#-mergestate) combinator.
|
||||||
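A minimal sketch of the round trip these combinators enable (the table and column names are illustrative):

```sql
-- Hypothetical table holding partial aggregation states.
CREATE TABLE visits_agg
(
    day Date,
    users AggregateFunction(uniq, UInt64)
)
ENGINE = AggregatingMergeTree
ORDER BY day;

-- -State produces intermediate states instead of final values...
INSERT INTO visits_agg
SELECT toDate(now()) AS day, uniqState(toUInt64(number))
FROM numbers(1000)
GROUP BY day;

-- ...and -Merge combines the stored states into the final result.
SELECT day, uniqMerge(users) AS unique_users
FROM visits_agg
GROUP BY day;
```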
## -Merge
@ -82,10 +82,12 @@ FROM

In this case, you should remember that you do not know the histogram bin borders.

## sequenceMatch(pattern)(timestamp, cond1, cond2, ...)
## sequenceMatch

Checks whether the sequence contains an event chain that matches the pattern.

**Syntax**

``` sql
sequenceMatch(pattern)(timestamp, cond1, cond2, ...)
```
@ -102,7 +104,7 @@ Events that occur at the same second may lay in the sequence in an undefined ord

**Parameters**

- `pattern` — Pattern string. See [Pattern syntax](#sequence-function-pattern-syntax).
- `pattern` — Pattern string. See [Pattern syntax](#sequencematch).

**Returned values**
|
|||||||
|
|
||||||
**See Also**
|
**See Also**
|
||||||
|
|
||||||
- [sequenceCount](#function-sequencecount)
|
- [sequenceCount](#sequencecount)
|

## sequenceCount(pattern)(time, cond1, cond2, ...)
## sequenceCount

Counts the number of event chains that matched the pattern. The function searches event chains that do not overlap. It starts to search for the next chain after the current chain is matched.

@ -180,6 +182,8 @@ Counts the number of event chains that matched the pattern. The function searche

Events that occur at the same second may lie in the sequence in an undefined order, affecting the result.

:::

**Syntax**

``` sql
sequenceCount(pattern)(timestamp, cond1, cond2, ...)
```
|
|||||||
|
|
||||||
**Parameters**
|
**Parameters**
|
||||||
|
|
||||||
- `pattern` — Pattern string. See [Pattern syntax](#sequence-function-pattern-syntax).
|
- `pattern` — Pattern string. See [Pattern syntax](#sequencematch).
|
||||||
|
|
||||||
**Returned values**
|
**Returned values**
|
||||||
|
|
||||||
@ -229,7 +233,7 @@ SELECT sequenceCount('(?1).*(?2)')(time, number = 1, number = 2) FROM t
|
|||||||
|
|
||||||
**See Also**
|
**See Also**
|
||||||
|
|
||||||
- [sequenceMatch](#function-sequencematch)
|
- [sequenceMatch](#sequencematch)
|
||||||
|
|
||||||
## windowFunnel
|
## windowFunnel
|
||||||
|
|
||||||
|
@ -5,10 +5,57 @@ sidebar_position: 107

# corr

Syntax: `corr(x, y)`
Calculates the [Pearson correlation coefficient](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient):

$$
\frac{\Sigma{(x - \bar{x})(y - \bar{y})}}{\sqrt{\Sigma{(x - \bar{x})^2} * \Sigma{(y - \bar{y})^2}}}
$$

Calculates the Pearson correlation coefficient: `Σ((x - x̅)(y - y̅)) / sqrt(Σ((x - x̅)^2) * Σ((y - y̅)^2))`.

:::note
This function uses a numerically unstable algorithm. If you need [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability) in calculations, use the `corrStable` function. It works slower but provides a lower computational error.
This function uses a numerically unstable algorithm. If you need [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability) in calculations, use the [`corrStable`](../reference/corrstable.md) function. It is slower but provides a more accurate result.
:::

**Syntax**

```sql
corr(x, y)
```

**Arguments**

- `x` — first variable. [(U)Int*](../../data-types/int-uint.md), [Float*](../../data-types/float.md), [Decimal](../../data-types/decimal.md).
- `y` — second variable. [(U)Int*](../../data-types/int-uint.md), [Float*](../../data-types/float.md), [Decimal](../../data-types/decimal.md).

**Returned Value**

- The Pearson correlation coefficient. [Float64](../../data-types/float.md).

**Example**

Query:

```sql
DROP TABLE IF EXISTS series;
CREATE TABLE series
(
    i UInt32,
    x_value Float64,
    y_value Float64
)
ENGINE = Memory;
INSERT INTO series(i, x_value, y_value) VALUES (1, 5.6, -4.4),(2, -9.6, 3),(3, -1.3, -4),(4, 5.3, 9.7),(5, 4.4, 0.037),(6, -8.6, -7.8),(7, 5.1, 9.3),(8, 7.9, -3.6),(9, -8.2, 0.62),(10, -3, 7.3);
```

```sql
SELECT corr(x_value, y_value)
FROM series;
```

Result:

```response
┌─corr(x_value, y_value)─┐
│     0.1730265755453256 │
└────────────────────────┘
```
@ -0,0 +1,55 @@

---
slug: /en/sql-reference/aggregate-functions/reference/corrmatrix
sidebar_position: 108
---

# corrMatrix

Computes the correlation matrix over N variables.

**Syntax**

```sql
corrMatrix(x[, ...])
```

**Arguments**

- `x` — a variable number of parameters. [(U)Int*](../../data-types/int-uint.md), [Float*](../../data-types/float.md), [Decimal](../../data-types/decimal.md).

**Returned value**

- Correlation matrix. [Array](../../data-types/array.md)([Array](../../data-types/array.md)([Float64](../../data-types/float.md))).

**Example**

Query:

```sql
DROP TABLE IF EXISTS test;
CREATE TABLE test
(
    a UInt32,
    b Float64,
    c Float64,
    d Float64
)
ENGINE = Memory;
INSERT INTO test(a, b, c, d) VALUES (1, 5.6, -4.4, 2.6), (2, -9.6, 3, 3.3), (3, -1.3, -4, 1.2), (4, 5.3, 9.7, 2.3), (5, 4.4, 0.037, 1.222), (6, -8.6, -7.8, 2.1233), (7, 5.1, 9.3, 8.1222), (8, 7.9, -3.6, 9.837), (9, -8.2, 0.62, 8.43555), (10, -3, 7.3, 6.762);
```

```sql
SELECT arrayMap(x -> round(x, 3), arrayJoin(corrMatrix(a, b, c, d))) AS corrMatrix
FROM test;
```

Result:

```response
   ┌─corrMatrix─────────────┐
1. │ [1,-0.096,0.243,0.746] │
2. │ [-0.096,1,0.173,0.106] │
3. │ [0.243,0.173,1,0.258]  │
4. │ [0.746,0.106,0.258,1]  │
   └────────────────────────┘
```
@ -0,0 +1,58 @@

---
slug: /en/sql-reference/aggregate-functions/reference/corrstable
sidebar_position: 107
---

# corrStable

Calculates the [Pearson correlation coefficient](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient):

$$
\frac{\Sigma{(x - \bar{x})(y - \bar{y})}}{\sqrt{\Sigma{(x - \bar{x})^2} * \Sigma{(y - \bar{y})^2}}}
$$

Similar to the [`corr`](../reference/corr.md) function, but uses a numerically stable algorithm. As a result, `corrStable` is slower than `corr` but produces a more accurate result.

**Syntax**

```sql
corrStable(x, y)
```

**Arguments**

- `x` — first variable. [(U)Int*](../../data-types/int-uint.md), [Float*](../../data-types/float.md), [Decimal](../../data-types/decimal.md).
- `y` — second variable. [(U)Int*](../../data-types/int-uint.md), [Float*](../../data-types/float.md), [Decimal](../../data-types/decimal.md).

**Returned Value**

- The Pearson correlation coefficient. [Float64](../../data-types/float.md).

**Example**

Query:

```sql
DROP TABLE IF EXISTS series;
CREATE TABLE series
(
    i UInt32,
    x_value Float64,
    y_value Float64
)
ENGINE = Memory;
INSERT INTO series(i, x_value, y_value) VALUES (1, 5.6, -4.4),(2, -9.6, 3),(3, -1.3, -4),(4, 5.3, 9.7),(5, 4.4, 0.037),(6, -8.6, -7.8),(7, 5.1, 9.3),(8, 7.9, -3.6),(9, -8.2, 0.62),(10, -3, 7.3);
```

```sql
SELECT corrStable(x_value, y_value)
FROM series;
```

Result:

```response
┌─corrStable(x_value, y_value)─┐
│          0.17302657554532558 │
└──────────────────────────────┘
```
@ -1,14 +1,54 @@

---
slug: /en/sql-reference/aggregate-functions/reference/covarpop
sidebar_position: 36
sidebar_position: 37
---

# covarPop

Syntax: `covarPop(x, y)`
Calculates the population covariance:

$$
\frac{\Sigma{(x - \bar{x})(y - \bar{y})}}{n}
$$

Calculates the value of `Σ((x - x̅)(y - y̅)) / n`.

:::note
This function uses a numerically unstable algorithm. If you need [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability) in calculations, use the `covarPopStable` function. It works slower but provides a lower computational error.
This function uses a numerically unstable algorithm. If you need [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability) in calculations, use the [`covarPopStable`](../reference/covarpopstable.md) function. It works slower but provides a lower computational error.
:::

**Syntax**

```sql
covarPop(x, y)
```

**Arguments**

- `x` — first variable. [(U)Int*](../../data-types/int-uint.md), [Float*](../../data-types/float.md), [Decimal](../../data-types/decimal.md).
- `y` — second variable. [(U)Int*](../../data-types/int-uint.md), [Float*](../../data-types/float.md), [Decimal](../../data-types/decimal.md).

**Returned Value**

- The population covariance between `x` and `y`. [Float64](../../data-types/float.md).

**Example**

Query:

```sql
DROP TABLE IF EXISTS series;
CREATE TABLE series(i UInt32, x_value Float64, y_value Float64) ENGINE = Memory;
INSERT INTO series(i, x_value, y_value) VALUES (1, 5.6, -4.4),(2, -9.6, 3),(3, -1.3, -4),(4, 5.3, 9.7),(5, 4.4, 0.037),(6, -8.6, -7.8),(7, 5.1, 9.3),(8, 7.9, -3.6),(9, -8.2, 0.62),(10, -3, 7.3);
```

```sql
SELECT covarPop(x_value, y_value)
FROM series;
```

Result:

```response
┌─covarPop(x_value, y_value)─┐
│                   6.485648 │
└────────────────────────────┘
```
@ -0,0 +1,55 @@

---
slug: /en/sql-reference/aggregate-functions/reference/covarpopmatrix
sidebar_position: 36
---

# covarPopMatrix

Returns the population covariance matrix over N variables.

**Syntax**

```sql
covarPopMatrix(x[, ...])
```

**Arguments**

- `x` — a variable number of parameters. [(U)Int*](../../data-types/int-uint.md), [Float*](../../data-types/float.md), [Decimal](../../data-types/decimal.md).

**Returned Value**

- Population covariance matrix. [Array](../../data-types/array.md)([Array](../../data-types/array.md)([Float64](../../data-types/float.md))).

**Example**

Query:

```sql
DROP TABLE IF EXISTS test;
CREATE TABLE test
(
    a UInt32,
    b Float64,
    c Float64,
    d Float64
)
ENGINE = Memory;
INSERT INTO test(a, b, c, d) VALUES (1, 5.6, -4.4, 2.6), (2, -9.6, 3, 3.3), (3, -1.3, -4, 1.2), (4, 5.3, 9.7, 2.3), (5, 4.4, 0.037, 1.222), (6, -8.6, -7.8, 2.1233), (7, 5.1, 9.3, 8.1222), (8, 7.9, -3.6, 9.837), (9, -8.2, 0.62, 8.43555), (10, -3, 7.3, 6.762);
```

```sql
SELECT arrayMap(x -> round(x, 3), arrayJoin(covarPopMatrix(a, b, c, d))) AS covarPopMatrix
FROM test;
```

Result:

```response
   ┌─covarPopMatrix────────────┐
1. │ [8.25,-1.76,4.08,6.748]   │
2. │ [-1.76,41.07,6.486,2.132] │
3. │ [4.08,6.486,34.21,4.755]  │
4. │ [6.748,2.132,4.755,9.93]  │
   └───────────────────────────┘
```
@ -0,0 +1,60 @@

---
slug: /en/sql-reference/aggregate-functions/reference/covarpopstable
sidebar_position: 36
---

# covarPopStable

Calculates the value of the population covariance:

$$
\frac{\Sigma{(x - \bar{x})(y - \bar{y})}}{n}
$$

It is similar to the [covarPop](../reference/covarpop.md) function, but uses a numerically stable algorithm. As a result, `covarPopStable` is slower than `covarPop` but produces a more accurate result.

**Syntax**

```sql
covarPopStable(x, y)
```

**Arguments**

- `x` — first variable. [(U)Int*](../../data-types/int-uint.md), [Float*](../../data-types/float.md), [Decimal](../../data-types/decimal.md).
- `y` — second variable. [(U)Int*](../../data-types/int-uint.md), [Float*](../../data-types/float.md), [Decimal](../../data-types/decimal.md).

**Returned Value**

- The population covariance between `x` and `y`. [Float64](../../data-types/float.md).

**Example**

Query:

```sql
DROP TABLE IF EXISTS series;
CREATE TABLE series(i UInt32, x_value Float64, y_value Float64) ENGINE = Memory;
INSERT INTO series(i, x_value, y_value) VALUES (1, 5.6,-4.4),(2, -9.6,3),(3, -1.3,-4),(4, 5.3,9.7),(5, 4.4,0.037),(6, -8.6,-7.8),(7, 5.1,9.3),(8, 7.9,-3.6),(9, -8.2,0.62),(10, -3,7.3);
```

```sql
SELECT covarPopStable(x_value, y_value)
FROM
(
    SELECT
        x_value,
        y_value
    FROM series
);
```

Result:

```response
┌─covarPopStable(x_value, y_value)─┐
│                         6.485648 │
└──────────────────────────────────┘
```
@ -7,8 +7,74 @@ sidebar_position: 37

Calculates the value of `Σ((x - x̅)(y - y̅)) / (n - 1)`.

Returns Float64. When `n <= 1`, returns `nan`.

:::note
This function uses a numerically unstable algorithm. If you need [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability) in calculations, use the `covarSampStable` function. It works slower but provides a lower computational error.
This function uses a numerically unstable algorithm. If you need [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability) in calculations, use the [`covarSampStable`](../reference/covarsampstable.md) function. It works slower but provides a lower computational error.
:::

**Syntax**

```sql
covarSamp(x, y)
```

**Arguments**

- `x` — first variable. [(U)Int*](../../data-types/int-uint.md), [Float*](../../data-types/float.md), [Decimal](../../data-types/decimal.md).
- `y` — second variable. [(U)Int*](../../data-types/int-uint.md), [Float*](../../data-types/float.md), [Decimal](../../data-types/decimal.md).

**Returned Value**

- The sample covariance between `x` and `y`. For `n <= 1`, `nan` is returned. [Float64](../../data-types/float.md).

**Example**

Query:

```sql
DROP TABLE IF EXISTS series;
CREATE TABLE series(i UInt32, x_value Float64, y_value Float64) ENGINE = Memory;
INSERT INTO series(i, x_value, y_value) VALUES (1, 5.6,-4.4),(2, -9.6,3),(3, -1.3,-4),(4, 5.3,9.7),(5, 4.4,0.037),(6, -8.6,-7.8),(7, 5.1,9.3),(8, 7.9,-3.6),(9, -8.2,0.62),(10, -3,7.3);
```

```sql
SELECT covarSamp(x_value, y_value)
FROM
(
    SELECT
        x_value,
        y_value
    FROM series
);
```

Result:

```response
┌─covarSamp(x_value, y_value)─┐
│           7.206275555555556 │
└─────────────────────────────┘
```

Query:

```sql
SELECT covarSamp(x_value, y_value)
FROM
(
    SELECT
        x_value,
        y_value
    FROM series LIMIT 1
);
```

Result:

```response
┌─covarSamp(x_value, y_value)─┐
│                         nan │
└─────────────────────────────┘
```
@ -0,0 +1,57 @@

---
slug: /en/sql-reference/aggregate-functions/reference/covarsampmatrix
sidebar_position: 38
---

# covarSampMatrix

Returns the sample covariance matrix over N variables.

**Syntax**

```sql
covarSampMatrix(x[, ...])
```

**Arguments**

- `x` — a variable number of parameters. [(U)Int*](../../data-types/int-uint.md), [Float*](../../data-types/float.md), [Decimal](../../data-types/decimal.md).

**Returned Value**

- Sample covariance matrix. [Array](../../data-types/array.md)([Array](../../data-types/array.md)([Float64](../../data-types/float.md))).

**Example**

Query:

```sql
DROP TABLE IF EXISTS test;
CREATE TABLE test
(
    a UInt32,
    b Float64,
    c Float64,
    d Float64
)
ENGINE = Memory;
INSERT INTO test(a, b, c, d) VALUES (1, 5.6, -4.4, 2.6), (2, -9.6, 3, 3.3), (3, -1.3, -4, 1.2), (4, 5.3, 9.7, 2.3), (5, 4.4, 0.037, 1.222), (6, -8.6, -7.8, 2.1233), (7, 5.1, 9.3, 8.1222), (8, 7.9, -3.6, 9.837), (9, -8.2, 0.62, 8.43555), (10, -3, 7.3, 6.762);
```

```sql
SELECT arrayMap(x -> round(x, 3), arrayJoin(covarSampMatrix(a, b, c, d))) AS covarSampMatrix
FROM test;
```

Result:

```response
   ┌─covarSampMatrix─────────────┐
1. │ [9.167,-1.956,4.534,7.498]  │
2. │ [-1.956,45.634,7.206,2.369] │
3. │ [4.534,7.206,38.011,5.283]  │
4. │ [7.498,2.369,5.283,11.034]  │
   └─────────────────────────────┘
```
@ -0,0 +1,73 @@

---
slug: /en/sql-reference/aggregate-functions/reference/covarsampstable
sidebar_position: 37
---

# covarSampStable

Calculates the value of `Σ((x - x̅)(y - y̅)) / (n - 1)`. Similar to [covarSamp](../reference/covarsamp.md) but works slower while providing a lower computational error.

**Syntax**

```sql
covarSampStable(x, y)
```

**Arguments**

- `x` — first variable. [(U)Int*](../../data-types/int-uint.md), [Float*](../../data-types/float.md), [Decimal](../../data-types/decimal.md).
- `y` — second variable. [(U)Int*](../../data-types/int-uint.md), [Float*](../../data-types/float.md), [Decimal](../../data-types/decimal.md).

**Returned Value**

- The sample covariance between `x` and `y`. For `n <= 1`, `inf` is returned. [Float64](../../data-types/float.md).

**Example**

Query:

```sql
DROP TABLE IF EXISTS series;
CREATE TABLE series(i UInt32, x_value Float64, y_value Float64) ENGINE = Memory;
INSERT INTO series(i, x_value, y_value) VALUES (1, 5.6,-4.4),(2, -9.6,3),(3, -1.3,-4),(4, 5.3,9.7),(5, 4.4,0.037),(6, -8.6,-7.8),(7, 5.1,9.3),(8, 7.9,-3.6),(9, -8.2,0.62),(10, -3,7.3);
```

```sql
SELECT covarSampStable(x_value, y_value)
FROM
(
    SELECT
        x_value,
        y_value
    FROM series
);
```

Result:

```response
┌─covarSampStable(x_value, y_value)─┐
│                 7.206275555555556 │
└───────────────────────────────────┘
```

Query:

```sql
SELECT covarSampStable(x_value, y_value)
FROM
(
    SELECT
        x_value,
        y_value
    FROM series LIMIT 1
);
```

Result:

```response
┌─covarSampStable(x_value, y_value)─┐
│                               inf │
└───────────────────────────────────┘
```
@ -0,0 +1,95 @@

---
slug: /en/sql-reference/aggregate-functions/reference/flamegraph
sidebar_position: 110
---

# flameGraph

Aggregate function which builds a [flamegraph](https://www.brendangregg.com/flamegraphs.html) using the list of stacktraces. Outputs an array of strings which can be used by the [flamegraph.pl utility](https://github.com/brendangregg/FlameGraph) to render an SVG of the flamegraph.

## Syntax

```sql
flameGraph(traces, [size], [ptr])
```

## Parameters

- `traces` — a stacktrace. [Array](../../data-types/array.md)([UInt64](../../data-types/int-uint.md)).
- `size` — an allocation size for memory profiling. (optional - default `1`). [UInt64](../../data-types/int-uint.md).
- `ptr` — an allocation address. (optional - default `0`). [UInt64](../../data-types/int-uint.md).

:::note
In the case where `ptr != 0`, a flameGraph will map allocations (size > 0) and deallocations (size < 0) with the same size and ptr.
Only allocations which were not freed are shown. Non-mapped deallocations are ignored.
:::

## Returned value

- An array of strings for use with the [flamegraph.pl utility](https://github.com/brendangregg/FlameGraph). [Array](../../data-types/array.md)([String](../../data-types/string.md)).

## Examples

### Building a flamegraph based on a CPU query profiler

```sql
SET query_profiler_cpu_time_period_ns=10000000;
SELECT SearchPhrase, COUNT(DISTINCT UserID) AS u FROM hits WHERE SearchPhrase <> '' GROUP BY SearchPhrase ORDER BY u DESC LIMIT 10;
```

```text
clickhouse client --allow_introspection_functions=1 -q "select arrayJoin(flameGraph(arrayReverse(trace))) from system.trace_log where trace_type = 'CPU' and query_id = 'xxx'" | ~/dev/FlameGraph/flamegraph.pl > flame_cpu.svg
```

### Building a flamegraph based on a memory query profiler, showing all allocations

```sql
SET memory_profiler_sample_probability=1, max_untracked_memory=1;
SELECT SearchPhrase, COUNT(DISTINCT UserID) AS u FROM hits WHERE SearchPhrase <> '' GROUP BY SearchPhrase ORDER BY u DESC LIMIT 10;
```

```text
clickhouse client --allow_introspection_functions=1 -q "select arrayJoin(flameGraph(trace, size)) from system.trace_log where trace_type = 'MemorySample' and query_id = 'xxx'" | ~/dev/FlameGraph/flamegraph.pl --countname=bytes --color=mem > flame_mem.svg
```

### Building a flamegraph based on a memory query profiler, showing allocations which were not deallocated in the query context

```sql
SET memory_profiler_sample_probability=1, max_untracked_memory=1, use_uncompressed_cache=1, merge_tree_max_rows_to_use_cache=100000000000, merge_tree_max_bytes_to_use_cache=1000000000000;
SELECT SearchPhrase, COUNT(DISTINCT UserID) AS u FROM hits WHERE SearchPhrase <> '' GROUP BY SearchPhrase ORDER BY u DESC LIMIT 10;
```

```text
clickhouse client --allow_introspection_functions=1 -q "SELECT arrayJoin(flameGraph(trace, size, ptr)) FROM system.trace_log WHERE trace_type = 'MemorySample' AND query_id = 'xxx'" | ~/dev/FlameGraph/flamegraph.pl --countname=bytes --color=mem > flame_mem_untracked.svg
```

### Building a flamegraph based on a memory query profiler, showing active allocations at a fixed point in time

```sql
SET memory_profiler_sample_probability=1, max_untracked_memory=1;
SELECT SearchPhrase, COUNT(DISTINCT UserID) AS u FROM hits WHERE SearchPhrase <> '' GROUP BY SearchPhrase ORDER BY u DESC LIMIT 10;
```

- 1 - Memory usage per second

```sql
SELECT event_time, m, formatReadableSize(max(s) as m) FROM (SELECT event_time, sum(size) OVER (ORDER BY event_time) AS s FROM system.trace_log WHERE query_id = 'xxx' AND trace_type = 'MemorySample') GROUP BY event_time ORDER BY event_time;
```

- 2 - Find a time point with maximal memory usage

```sql
SELECT argMax(event_time, s), max(s) FROM (SELECT event_time, sum(size) OVER (ORDER BY event_time) AS s FROM system.trace_log WHERE query_id = 'xxx' AND trace_type = 'MemorySample');
```

- 3 - Fix active allocations at the fixed point of time

```text
clickhouse client --allow_introspection_functions=1 -q "SELECT arrayJoin(flameGraph(trace, size, ptr)) FROM (SELECT * FROM system.trace_log WHERE trace_type = 'MemorySample' AND query_id = 'xxx' AND event_time <= 'yyy' ORDER BY event_time)" | ~/dev/FlameGraph/flamegraph.pl --countname=bytes --color=mem > flame_mem_time_point_pos.svg
```

- 4 - Find deallocations at the fixed point of time

```text
clickhouse client --allow_introspection_functions=1 -q "SELECT arrayJoin(flameGraph(trace, -size, ptr)) FROM (SELECT * FROM system.trace_log WHERE trace_type = 'MemorySample' AND query_id = 'xxx' AND event_time > 'yyy' ORDER BY event_time desc)" | ~/dev/FlameGraph/flamegraph.pl --countname=bytes --color=mem > flame_mem_time_point_neg.svg
```
@ -9,110 +9,117 @@ toc_hidden: true

Standard aggregate functions:

- [count](../reference/count.md)
- [min](../reference/min.md)
- [max](../reference/max.md)
- [sum](../reference/sum.md)
- [avg](../reference/avg.md)
- [any](../reference/any.md)
- [stddevPop](../reference/stddevpop.md)
- [stddevPopStable](../reference/stddevpopstable.md)
- [stddevSamp](../reference/stddevsamp.md)
- [stddevSampStable](../reference/stddevsampstable.md)
- [varPop](../reference/varpop.md)
- [varSamp](../reference/varsamp.md)
- [corr](../reference/corr.md)
- [corrStable](../reference/corrstable.md)
- [corrMatrix](../reference/corrmatrix.md)
- [covarPop](../reference/covarpop.md)
- [covarPopStable](../reference/covarpopstable.md)
- [covarPopMatrix](../reference/covarpopmatrix.md)
- [covarSamp](../reference/covarsamp.md)
- [covarSampStable](../reference/covarsampstable.md)
- [covarSampMatrix](../reference/covarsampmatrix.md)
- [entropy](../reference/entropy.md)
- [exponentialMovingAverage](../reference/exponentialmovingaverage.md)
- [intervalLengthSum](../reference/intervalLengthSum.md)
- [kolmogorovSmirnovTest](../reference/kolmogorovsmirnovtest.md)
- [mannwhitneyutest](../reference/mannwhitneyutest.md)
- [median](../reference/median.md)
- [rankCorr](../reference/rankCorr.md)
- [sumKahan](../reference/sumkahan.md)
- [studentTTest](../reference/studentttest.md)
- [welchTTest](../reference/welchttest.md)

ClickHouse-specific aggregate functions:

- [analysisOfVariance](../reference/analysis_of_variance.md)
- [any](../reference/any_respect_nulls.md)
- [anyHeavy](../reference/anyheavy.md)
- [anyLast](../reference/anylast.md)
- [anyLast](../reference/anylast_respect_nulls.md)
- [boundingRatio](../reference/boundrat.md)
- [first_value](../reference/first_value.md)
- [last_value](../reference/last_value.md)
- [argMin](../reference/argmin.md)
- [argMax](../reference/argmax.md)
- [avgWeighted](../reference/avgweighted.md)
- [topK](../reference/topk.md)
- [topKWeighted](../reference/topkweighted.md)
- [deltaSum](../reference/deltasum.md)
- [deltaSumTimestamp](../reference/deltasumtimestamp.md)
- [flameGraph](../reference/flame_graph.md)
- [groupArray](../reference/grouparray.md)
- [groupArrayLast](../reference/grouparraylast.md)
- [groupUniqArray](../reference/groupuniqarray.md)
- [groupArrayInsertAt](../reference/grouparrayinsertat.md)
- [groupArrayMovingAvg](../reference/grouparraymovingavg.md)
- [groupArrayMovingSum](../reference/grouparraymovingsum.md)
- [groupArraySample](../reference/grouparraysample.md)
- [groupArraySorted](../reference/grouparraysorted.md)
- [groupArrayIntersect](../reference/grouparrayintersect.md)
- [groupBitAnd](../reference/groupbitand.md)
- [groupBitOr](../reference/groupbitor.md)
- [groupBitXor](../reference/groupbitxor.md)
- [groupBitmap](../reference/groupbitmap.md)
- [groupBitmapAnd](../reference/groupbitmapand.md)
- [groupBitmapOr](../reference/groupbitmapor.md)
- [groupBitmapXor](../reference/groupbitmapxor.md)
- [sumWithOverflow](../reference/sumwithoverflow.md)
- [sumMap](../reference/summap.md)
- [sumMapWithOverflow](../reference/summapwithoverflow.md)
- [sumMapFiltered](../parametric-functions.md/#summapfiltered)
- [sumMapFilteredWithOverflow](../parametric-functions.md/#summapfilteredwithoverflow)
- [minMap](../reference/minmap.md)
- [maxMap](../reference/maxmap.md)
- [skewSamp](../reference/skewsamp.md)
- [skewPop](../reference/skewpop.md)
- [kurtSamp](../reference/kurtsamp.md)
- [kurtPop](../reference/kurtpop.md)
- [uniq](../reference/uniq.md)
- [uniqExact](../reference/uniqexact.md)
- [uniqCombined](../reference/uniqcombined.md)
- [uniqCombined64](../reference/uniqcombined64.md)
- [uniqHLL12](../reference/uniqhll12.md)
- [uniqTheta](../reference/uniqthetasketch.md)
- [quantile](../reference/quantile.md)
- [quantiles](../reference/quantiles.md)
- [quantileExact](../reference/quantileexact.md)
- [quantileExactLow](../reference/quantileexact.md#quantileexactlow)
- [quantileExactHigh](../reference/quantileexact.md#quantileexacthigh)
- [quantileExactWeighted](../reference/quantileexactweighted.md)
- [quantileTiming](../reference/quantiletiming.md)
- [quantileTimingWeighted](../reference/quantiletimingweighted.md)
- [quantileDeterministic](../reference/quantiledeterministic.md)
- [quantileTDigest](../reference/quantiletdigest.md)
- [quantileTDigestWeighted](../reference/quantiletdigestweighted.md)
- [quantileBFloat16](../reference/quantilebfloat16.md#quantilebfloat16)
- [quantileBFloat16Weighted](../reference/quantilebfloat16.md#quantilebfloat16weighted)
- [quantileDD](../reference/quantileddsketch.md#quantileddsketch)
- [simpleLinearRegression](../reference/simplelinearregression.md)
- [singleValueOrNull](../reference/singlevalueornull.md)
- [stochasticLinearRegression](../reference/stochasticlinearregression.md)
- [stochasticLogisticRegression](../reference/stochasticlogisticregression.md)
- [categoricalInformationValue](../reference/categoricalinformationvalue.md)
- [contingency](../reference/contingency.md)
- [cramersV](../reference/cramersv.md)
- [cramersVBiasCorrected](../reference/cramersvbiascorrected.md)
- [theilsU](../reference/theilsu.md)
- [maxIntersections](../reference/maxintersections.md)
- [maxIntersectionsPosition](../reference/maxintersectionsposition.md)
- [meanZTest](../reference/meanztest.md)
- [quantileGK](../reference/quantileGK.md)
- [quantileInterpolatedWeighted](../reference/quantileinterpolatedweighted.md)
- [sparkBar](../reference/sparkbar.md)
- [sumCount](../reference/sumcount.md)
- [largestTriangleThreeBuckets](../reference/largestTriangleThreeBuckets.md)
@ -24,6 +24,8 @@ Alias: `lttb`.

- `x` — x coordinate. [Integer](../../../sql-reference/data-types/int-uint.md), [Float](../../../sql-reference/data-types/float.md), [Decimal](../../../sql-reference/data-types/decimal.md), [Date](../../../sql-reference/data-types/date.md), [Date32](../../../sql-reference/data-types/date32.md), [DateTime](../../../sql-reference/data-types/datetime.md), [DateTime64](../../../sql-reference/data-types/datetime64.md).
- `y` — y coordinate. [Integer](../../../sql-reference/data-types/int-uint.md), [Float](../../../sql-reference/data-types/float.md), [Decimal](../../../sql-reference/data-types/decimal.md), [Date](../../../sql-reference/data-types/date.md), [Date32](../../../sql-reference/data-types/date32.md), [DateTime](../../../sql-reference/data-types/datetime.md), [DateTime64](../../../sql-reference/data-types/datetime64.md).

NaNs are ignored in the provided series, meaning that any NaN values will be excluded from the analysis. This ensures that the function operates only on valid numerical data.

**Parameters**

- `n` — number of points in the resulting series. [UInt64](../../../sql-reference/data-types/int-uint.md).

@ -61,7 +63,7 @@ Result:

``` text
┌────────largestTriangleThreeBuckets(4)(x, y)───────────┐
│ [(1,10),(3,15),(5,40),(10,70)]                        │
│ [(1,10),(3,15),(9,55),(10,70)]                        │
└───────────────────────────────────────────────────────┘
```
@ -3,7 +3,7 @@ slug: /en/sql-reference/aggregate-functions/reference/stochasticlinearregression
sidebar_position: 221
---

# stochasticLinearRegression
# stochasticLinearRegression {#agg_functions_stochasticlinearregression_parameters}

This function implements stochastic linear regression. It supports custom parameters for learning rate, L2 regularization coefficient, mini-batch size, and has a few methods for updating weights ([Adam](https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Adam) (used by default), [simple SGD](https://en.wikipedia.org/wiki/Stochastic_gradient_descent), [Momentum](https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Momentum), and [Nesterov](https://mipt.ru/upload/medialibrary/d7e/41-91.pdf)).

@ -72,5 +72,5 @@ The query will return a column of predicted values. Note that first argument of

**See Also**

- [stochasticLogisticRegression](../../../sql-reference/aggregate-functions/reference/stochasticlogisticregression.md#agg_functions-stochasticlogisticregression)
- [stochasticLogisticRegression](../../../sql-reference/aggregate-functions/reference/stochasticlogisticregression.md#stochasticlogisticregression)
- [Difference between linear and logistic regressions](https://stackoverflow.com/questions/12146914/what-is-the-difference-between-linear-regression-and-logistic-regression)
|
|||||||
|
|
||||||
Parameters are exactly the same as in stochasticLinearRegression:
|
Parameters are exactly the same as in stochasticLinearRegression:
|
||||||
`learning rate`, `l2 regularization coefficient`, `mini-batch size`, `method for updating weights`.
|
`learning rate`, `l2 regularization coefficient`, `mini-batch size`, `method for updating weights`.
|
||||||
For more information see [parameters](#agg_functions-stochasticlinearregression-parameters).
|
For more information see [parameters](../reference/stochasticlinearregression.md/#parameters).
|
||||||
|
|
||||||
``` text
|
``` text
|
||||||
stochasticLogisticRegression(1.0, 1.0, 10, 'SGD')
|
stochasticLogisticRegression(1.0, 1.0, 10, 'SGD')
|
||||||
|
@ -27,7 +27,7 @@ Returns an integer of type `Float64`.
|
|||||||
|
|
||||||
**Implementation details**
|
**Implementation details**
|
||||||
|
|
||||||
This function uses a numerically unstable algorithm. If you need numerical stability in calculations, use the slower but more stable [`varPopStable` function](#varPopStable).
|
This function uses a numerically unstable algorithm. If you need numerical stability in calculations, use the slower but more stable [`varPopStable`](#varpopstable) function.
|
||||||
|
|
||||||
**Example**
|
**Example**
|
||||||
|
|
||||||
@ -76,7 +76,7 @@ Returns an integer of type `Float64`.
|
|||||||
|
|
||||||
**Implementation details**
|
**Implementation details**
|
||||||
|
|
||||||
Unlike [`varPop()`](#varPop), this function uses a stable, numerically accurate algorithm to calculate the population variance to avoid issues like catastrophic cancellation or loss of precision. This function also handles `NaN` and `Inf` values correctly, excluding them from calculations.
|
Unlike [`varPop`](#varpop), this function uses a stable, numerically accurate algorithm to calculate the population variance to avoid issues like catastrophic cancellation or loss of precision. This function also handles `NaN` and `Inf` values correctly, excluding them from calculations.
|
||||||
|
|
||||||
**Example**
|
**Example**
|
||||||
|
|
||||||
|
@ -40,7 +40,7 @@ Where:
|
|||||||
|
|
||||||
The function assumes that the input data set represents a sample from a larger population. If you want to calculate the variance of the entire population (when you have the complete data set), you should use the [`varPop()` function](./varpop#varpop) instead.
|
The function assumes that the input data set represents a sample from a larger population. If you want to calculate the variance of the entire population (when you have the complete data set), you should use the [`varPop()` function](./varpop#varpop) instead.
|
||||||
|
|
||||||
This function uses a numerically unstable algorithm. If you need numerical stability in calculations, use the slower but more stable [`varSampStable` function](#varSampStable).
|
This function uses a numerically unstable algorithm. If you need numerical stability in calculations, use the slower but more stable [`varSampStable`](#varsampstable) function.
|
||||||
|
|
||||||
**Example**
|
**Example**
|
||||||
|
|
||||||
@ -82,11 +82,11 @@ varSampStable(expr)
|
|||||||
|
|
||||||
**Returned value**
|
**Returned value**
|
||||||
|
|
||||||
The `varSampStable()` function returns a Float64 value representing the sample variance of the input data set.
|
The `varSampStable` function returns a Float64 value representing the sample variance of the input data set.
|
||||||
|
|
||||||
**Implementation details**
|
**Implementation details**
|
||||||
|
|
||||||
The `varSampStable()` function calculates the sample variance using the same formula as the [`varSamp()`](#varSamp function):
|
The `varSampStable` function calculates the sample variance using the same formula as the [`varSamp`](#varsamp) function:
|
||||||
|
|
||||||
```plaintext
|
```plaintext
|
||||||
∑(x - mean(x))^2 / (n - 1)
|
∑(x - mean(x))^2 / (n - 1)
|
||||||
@ -97,9 +97,9 @@ Where:
|
|||||||
- `mean(x)` is the arithmetic mean of the data set.
|
- `mean(x)` is the arithmetic mean of the data set.
|
||||||
- `n` is the number of data points in the data set.
|
- `n` is the number of data points in the data set.
|
||||||
|
|
||||||
The difference between `varSampStable()` and `varSamp()` is that `varSampStable()` is designed to provide a more deterministic and stable result when dealing with floating-point arithmetic. It uses an algorithm that minimizes the accumulation of rounding errors, which can be particularly important when dealing with large data sets or data with a wide range of values.
|
The difference between `varSampStable` and `varSamp` is that `varSampStable` is designed to provide a more deterministic and stable result when dealing with floating-point arithmetic. It uses an algorithm that minimizes the accumulation of rounding errors, which can be particularly important when dealing with large data sets or data with a wide range of values.
|
||||||
|
|
||||||
Like `varSamp()`, the `varSampStable()` function assumes that the input data set represents a sample from a larger population. If you want to calculate the variance of the entire population (when you have the complete data set), you should use the [`varPopStable()` function](./varpop#varpopstable) instead.
|
Like `varSamp`, the `varSampStable` function assumes that the input data set represents a sample from a larger population. If you want to calculate the variance of the entire population (when you have the complete data set), you should use the [`varPopStable`](./varpop#varpopstable) function instead.
|
||||||
|
|
||||||
**Example**
|
**Example**
|
||||||
|
|
||||||
@ -125,4 +125,4 @@ Response:
|
|||||||
0.865
|
0.865
|
||||||
```
|
```
|
||||||
|
|
||||||
This query calculates the sample variance of the `value` column in the `example_table` using the `varSampStable()` function. The result shows that the sample variance of the values `[10.5, 12.3, 9.8, 11.2, 10.7]` is approximately 0.865, which may differ slightly from the result of `varSamp()` due to the more precise handling of floating-point arithmetic.
|
This query calculates the sample variance of the `value` column in the `example_table` using the `varSampStable()` function. The result shows that the sample variance of the values `[10.5, 12.3, 9.8, 11.2, 10.7]` is approximately 0.865, which may differ slightly from the result of `varSamp` due to the more precise handling of floating-point arithmetic.
|
||||||
|