Merge remote-tracking branch 'origin/master' into BetterLWDProjException
Commit 19cf12e205

.clang-tidy (20 changes)
@@ -22,6 +22,7 @@ Checks: [
  '-bugprone-exception-escape',
  '-bugprone-forward-declaration-namespace',
  '-bugprone-implicit-widening-of-multiplication-result',
+ '-bugprone-multi-level-implicit-pointer-conversion',
  '-bugprone-narrowing-conversions',
  '-bugprone-not-null-terminated-result',
  '-bugprone-reserved-identifier', # useful but too slow, TODO retry when https://reviews.llvm.org/rG1c282052624f9d0bd273bde0b47b30c96699c6c7 is merged

@@ -98,6 +99,7 @@ Checks: [
  '-modernize-use-nodiscard',
  '-modernize-use-trailing-return-type',

+ '-performance-enum-size',
  '-performance-inefficient-string-concatenation',
  '-performance-no-int-to-ptr',
  '-performance-avoid-endl',

@@ -105,6 +107,7 @@ Checks: [

  '-portability-simd-intrinsics',

+ '-readability-avoid-nested-conditional-operator',
  '-readability-avoid-unconditional-preprocessor-if',
  '-readability-braces-around-statements',
  '-readability-convert-member-functions-to-static',

@@ -118,6 +121,12 @@ Checks: [
  '-readability-magic-numbers',
  '-readability-named-parameter',
  '-readability-redundant-declaration',
+ '-readability-redundant-inline-specifier',
+ '-readability-redundant-member-init', # Useful but triggers another problem. Imagine a struct S with multiple String members. Structs are often instantiated via designated
+ # initializer S s{.s1 = [...], .s2 = [...], [...]}. In this case, compiler warning `missing-field-initializers` requires to specify all members which are not in-struct
+ # initialized (example: s1 in struct S { String s1; String s2{};}; is not in-struct initialized, therefore it must be specified at instantiation time). As explicitly
+ # specifying all members is tedious for large structs, `missing-field-initializers` makes programmers initialize as many members as possible in-struct. Clang-tidy
+ # warning `readability-redundant-member-init` does the opposite thing, both are not compatible with each other.
  '-readability-simplify-boolean-expr',
  '-readability-suspicious-call-argument',
  '-readability-uppercase-literal-suffix',

@@ -125,17 +134,6 @@ Checks: [

  '-zircon-*',

- # These are new in clang-18, and we have to sort them out:
- '-readability-avoid-nested-conditional-operator',
- '-modernize-use-designated-initializers',
- '-performance-enum-size',
- '-readability-redundant-inline-specifier',
- '-readability-redundant-member-init',
- '-bugprone-crtp-constructor-accessibility',
- '-bugprone-suspicious-stringview-data-usage',
- '-bugprone-multi-level-implicit-pointer-conversion',
- '-cert-err33-c',
-
  # This is a good check, but clang-tidy crashes, see https://github.com/llvm/llvm-project/issues/91872
  '-modernize-use-constraints',
  # https://github.com/abseil/abseil-cpp/issues/1667
.github/PULL_REQUEST_TEMPLATE.md (9 changes, vendored)
@@ -42,25 +42,25 @@ At a minimum, the following information should be added (but add more as needed)
  > Information about CI checks: https://clickhouse.com/docs/en/development/continuous-integration/

  <details>
- <summary>Modify your CI run</summary>
+ <summary>CI Settings</summary>

  **NOTE:** If your merge the PR with modified CI you **MUST KNOW** what you are doing
  **NOTE:** Checked options will be applied if set before CI RunConfig/PrepareRunConfig step

- #### Include tests (required builds will be added automatically):
- - [ ] <!---ci_include_fast--> Fast test
+ #### Run these jobs only (required builds will be added automatically):
  - [ ] <!---ci_include_integration--> Integration Tests
  - [ ] <!---ci_include_stateless--> Stateless tests
  - [ ] <!---ci_include_stateful--> Stateful tests
  - [ ] <!---ci_include_unit--> Unit tests
  - [ ] <!---ci_include_performance--> Performance tests
  - [ ] <!---ci_include_aarch64--> All with aarch64
  - [ ] <!---ci_include_asan--> All with ASAN
  - [ ] <!---ci_include_tsan--> All with TSAN
  - [ ] <!---ci_include_analyzer--> All with Analyzer
  - [ ] <!---ci_include_azure --> All with Azure
  - [ ] <!---ci_include_KEYWORD--> Add your option here

- #### Exclude tests:
+ #### Deny these jobs:
  - [ ] <!---ci_exclude_fast--> Fast test
  - [ ] <!---ci_exclude_integration--> Integration Tests
  - [ ] <!---ci_exclude_stateless--> Stateless tests

@@ -72,7 +72,6 @@ At a minimum, the following information should be added (but add more as needed)
  - [ ] <!---ci_exclude_ubsan--> All with UBSAN
  - [ ] <!---ci_exclude_coverage--> All with Coverage
  - [ ] <!---ci_exclude_aarch64--> All with Aarch64
- - [ ] <!---ci_exclude_KEYWORD--> Add your option here

  #### Extra options:
  - [ ] <!---do_not_test--> do not test (only style check)
.github/workflows/merge_queue.yml (3 changes, vendored)
@@ -22,6 +22,9 @@ jobs:
  clear-repository: true # to ensure correct digests
  fetch-depth: 0 # to get version
  filter: tree:0
+ - name: Cancel PR workflow
+   run: |
+     python3 "$GITHUB_WORKSPACE/tests/ci/ci.py" --cancel-previous-run
  - name: Python unit tests
    run: |
      cd "$GITHUB_WORKSPACE/tests/ci"
.github/workflows/pull_request.yml (40 changes, vendored)
@@ -130,15 +130,21 @@ jobs:
  with:
    stage: Tests_2
    data: ${{ needs.RunConfig.outputs.data }}
+ # stage for jobs that do not prohibit merge
+ Tests_3:
+   needs: [RunConfig, Tests_1, Tests_2]
+   if: ${{ !failure() && !cancelled() && contains(fromJson(needs.RunConfig.outputs.data).stages_data.stages_to_do, 'Tests_3') }}
+   uses: ./.github/workflows/reusable_test_stage.yml
+   with:
+     stage: Tests_3
+     data: ${{ needs.RunConfig.outputs.data }}

  ################################# Reports #################################
- # Reports should by run even if Builds_1/2 fail, so put them separatly in wf (not in Tests_1/2)
+ # Reports should by run even if Builds_1/2 fail, so put them separately in wf (not in Tests_1/2)
  Builds_1_Report:
    # run report check for failed builds to indicate the CI error
-   if: ${{ !cancelled() && contains(fromJson(needs.RunConfig.outputs.data).jobs_data.jobs_to_do, 'ClickHouse build check') }}
-   needs:
-     - RunConfig
-     - Builds_1
+   if: ${{ !cancelled() && needs.StyleCheck.result == 'success' && contains(fromJson(needs.RunConfig.outputs.data).jobs_data.jobs_to_do, 'ClickHouse build check') }}
+   needs: [RunConfig, StyleCheck, Builds_1]
    uses: ./.github/workflows/reusable_test.yml
    with:
      test_name: ClickHouse build check

@@ -146,25 +152,39 @@ jobs:
      data: ${{ needs.RunConfig.outputs.data }}
  Builds_2_Report:
    # run report check for failed builds to indicate the CI error
-   if: ${{ !cancelled() && contains(fromJson(needs.RunConfig.outputs.data).jobs_data.jobs_to_do, 'ClickHouse special build check') }}
-   needs:
-     - RunConfig
-     - Builds_2
+   if: ${{ !cancelled() && needs.StyleCheck.result == 'success' && contains(fromJson(needs.RunConfig.outputs.data).jobs_data.jobs_to_do, 'ClickHouse special build check') }}
+   needs: [RunConfig, StyleCheck, Builds_2]
    uses: ./.github/workflows/reusable_test.yml
    with:
      test_name: ClickHouse special build check
      runner_type: style-checker-aarch64
      data: ${{ needs.RunConfig.outputs.data }}

+ CheckReadyForMerge:
+   if: ${{ !cancelled() && needs.StyleCheck.result == 'success' }}
+   needs: [RunConfig, BuildDockers, StyleCheck, FastTest, Builds_1, Builds_2, Builds_1_Report, Builds_2_Report, Tests_1, Tests_2]
+   runs-on: [self-hosted, style-checker-aarch64]
+   steps:
+     - name: Check out repository code
+       uses: ClickHouse/checkout@v1
+       with:
+         filter: tree:0
+     - name: Check and set merge status
+       run: |
+         cd "$GITHUB_WORKSPACE/tests/ci"
+         python3 merge_pr.py --set-ci-status --wf-status ${{ contains(needs.*.result, 'failure') && 'failure' || 'success' }}
+
  ################################# Stage Final #################################
  #
  FinishCheck:
+   if: ${{ !failure() && !cancelled() }}
-   needs: [RunConfig, BuildDockers, StyleCheck, FastTest, Builds_1, Builds_2, Builds_1_Report, Builds_2_Report, Tests_1, Tests_2]
+   needs: [RunConfig, BuildDockers, StyleCheck, FastTest, Builds_1, Builds_2, Builds_1_Report, Builds_2_Report, Tests_1, Tests_2, Tests_3]
    runs-on: [self-hosted, style-checker]
    steps:
      - name: Check out repository code
        uses: ClickHouse/checkout@v1
        with:
+         filter: tree:0
      - name: Finish label
        run: |
          cd "$GITHUB_WORKSPACE/tests/ci"
CMakeLists.txt

@@ -61,13 +61,16 @@ if (ENABLE_CHECK_HEAVY_BUILDS)
  # set CPU time limit to 1000 seconds
  set (RLIMIT_CPU 1000)

- # Sanitizers are too heavy
- if (SANITIZE OR SANITIZE_COVERAGE OR WITH_COVERAGE)
-   set (RLIMIT_DATA 10000000000) # 10G
+ # Sanitizers are too heavy. Some architectures too.
+ if (SANITIZE OR SANITIZE_COVERAGE OR WITH_COVERAGE OR ARCH_RISCV64 OR ARCH_LOONGARCH64)
+   # Twice as large
+   set (RLIMIT_DATA 10000000000)
+   set (RLIMIT_AS 20000000000)
  endif()

- # For some files currently building RISCV64 might be too slow. TODO: Improve compilation times per file
- if (ARCH_RISCV64)
+ # For some files currently building RISCV64/LOONGARCH64 might be too slow.
+ # TODO: Improve compilation times per file
+ if (ARCH_RISCV64 OR ARCH_LOONGARCH64)
    set (RLIMIT_CPU 1800)
  endif()
@@ -9,11 +9,18 @@
  bool cgroupsV2Enabled()
  {
  #if defined(OS_LINUX)
- /// This file exists iff the host has cgroups v2 enabled.
- auto controllers_file = default_cgroups_mount / "cgroup.controllers";
- if (!std::filesystem::exists(controllers_file))
-     return false;
- return true;
+ try
+ {
+     /// This file exists iff the host has cgroups v2 enabled.
+     auto controllers_file = default_cgroups_mount / "cgroup.controllers";
+     if (!std::filesystem::exists(controllers_file))
+         return false;
+     return true;
+ }
+ catch (const std::filesystem::filesystem_error &) /// all "underlying OS API errors", typically: permission denied
+ {
+     return false; /// not logging the exception as most callers fall back to cgroups v1
+ }
  #else
  return false;
  #endif
cmake/autogenerated_versions.txt

@@ -2,11 +2,11 @@
  # NOTE: has nothing common with DBMS_TCP_PROTOCOL_VERSION,
  # only DBMS_TCP_PROTOCOL_VERSION should be incremented on protocol changes.
- SET(VERSION_REVISION 54486)
+ SET(VERSION_REVISION 54487)
  SET(VERSION_MAJOR 24)
- SET(VERSION_MINOR 5)
+ SET(VERSION_MINOR 6)
  SET(VERSION_PATCH 1)
- SET(VERSION_GITHASH 6d4b31322d168356c8b10c43b4cef157c82337ff)
- SET(VERSION_DESCRIBE v24.5.1.1-testing)
- SET(VERSION_STRING 24.5.1.1)
+ SET(VERSION_GITHASH 70a1d3a63d47f0be077d67b8deb907230fc7cfb0)
+ SET(VERSION_DESCRIBE v24.6.1.1-testing)
+ SET(VERSION_STRING 24.6.1.1)
  # end of autochange
contrib/libunwind (2 changes, vendored)

@@ -1 +1 @@
- Subproject commit 854538ce337d631b619010528adff22cd58f9dce
+ Subproject commit d6a01c46327e56fd86beb8aaa31591fcd9a6b7df
@@ -11,6 +11,7 @@ RUN apt-get update && env DEBIAN_FRONTEND=noninteractive apt-get install --yes \
  aspell \
  curl \
  git \
+ gh \
  file \
  libxml2-utils \
  moreutils \
@@ -2,7 +2,7 @@

  Description.

- For the switch setting, use the typical phrase: “Enables or disables something …”.
+ For the switch setting, use the typical phrase: “Enables or disables something ...”.

  Possible values:
@@ -166,4 +166,4 @@
  * NO CL ENTRY: 'Revert "Abort on std::out_of_range in debug builds"'. [#12752](https://github.com/ClickHouse/ClickHouse/pull/12752) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
  * NO CL ENTRY: 'Bump protobuf from 3.12.2 to 3.12.4 in /docs/tools'. [#13102](https://github.com/ClickHouse/ClickHouse/pull/13102) ([dependabot-preview[bot]](https://github.com/apps/dependabot-preview)).
  * NO CL ENTRY: 'Merge [#12574](https://github.com/ClickHouse/ClickHouse/issues/12574)'. [#13158](https://github.com/ClickHouse/ClickHouse/pull/13158) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
- * NO CL ENTRY: 'Revert "Add QueryTimeMicroseconds, SelectQueryTimeMicroseconds and InsertQuer…"'. [#13303](https://github.com/ClickHouse/ClickHouse/pull/13303) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+ * NO CL ENTRY: 'Revert "Add QueryTimeMicroseconds, SelectQueryTimeMicroseconds and InsertQuer..."'. [#13303](https://github.com/ClickHouse/ClickHouse/pull/13303) ([Alexey Milovidov](https://github.com/alexey-milovidov)).

@@ -421,5 +421,5 @@ sidebar_label: 2022
  * Fix possible crash in DataTypeAggregateFunction [#32287](https://github.com/ClickHouse/ClickHouse/pull/32287) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
  * Update backport.py [#32323](https://github.com/ClickHouse/ClickHouse/pull/32323) ([Kseniia Sumarokova](https://github.com/kssenii)).
  * Fix graphite-bench build [#32351](https://github.com/ClickHouse/ClickHouse/pull/32351) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
- * Revert "graphite: split tagged/plain rollup rules (for merges perfoma… [#32376](https://github.com/ClickHouse/ClickHouse/pull/32376) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
+ * Revert "graphite: split tagged/plain rollup rules (for merges perfoma... [#32376](https://github.com/ClickHouse/ClickHouse/pull/32376) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
  * Another attempt to fix unit test Executor::RemoveTasksStress [#32390](https://github.com/ClickHouse/ClickHouse/pull/32390) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).

@@ -18,4 +18,4 @@ sidebar_label: 2022
  #### NOT FOR CHANGELOG / INSIGNIFICANT

- * fix incorrect number of rows for Chunks with no columns in PartialSor… [#21761](https://github.com/ClickHouse/ClickHouse/pull/21761) ([Alexander Kuzmenkov](https://github.com/akuzm)).
+ * fix incorrect number of rows for Chunks with no columns in PartialSor... [#21761](https://github.com/ClickHouse/ClickHouse/pull/21761) ([Alexander Kuzmenkov](https://github.com/akuzm)).

@@ -223,7 +223,7 @@ sidebar_label: 2022
  * Do not overlap zookeeper path for ReplicatedMergeTree in stateless *.sh tests [#21724](https://github.com/ClickHouse/ClickHouse/pull/21724) ([Azat Khuzhin](https://github.com/azat)).
  * make the fuzzer use sources from the CI [#21754](https://github.com/ClickHouse/ClickHouse/pull/21754) ([Alexander Kuzmenkov](https://github.com/akuzm)).
  * Add one more variant to memcpy benchmark [#21759](https://github.com/ClickHouse/ClickHouse/pull/21759) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
- * fix incorrect number of rows for Chunks with no columns in PartialSor… [#21761](https://github.com/ClickHouse/ClickHouse/pull/21761) ([Alexander Kuzmenkov](https://github.com/akuzm)).
+ * fix incorrect number of rows for Chunks with no columns in PartialSor... [#21761](https://github.com/ClickHouse/ClickHouse/pull/21761) ([Alexander Kuzmenkov](https://github.com/akuzm)).
  * docs(fix): typo [#21775](https://github.com/ClickHouse/ClickHouse/pull/21775) ([Ali Demirci](https://github.com/depyronick)).
  * DDLWorker.cpp: fixed exceeded amount of tries typo [#21807](https://github.com/ClickHouse/ClickHouse/pull/21807) ([Eldar Nasyrov](https://github.com/3ldar-nasyrov)).
  * fix integration MaterializeMySQL test [#21819](https://github.com/ClickHouse/ClickHouse/pull/21819) ([TCeason](https://github.com/TCeason)).

@@ -226,7 +226,7 @@ sidebar_label: 2022
  * Do not overlap zookeeper path for ReplicatedMergeTree in stateless *.sh tests [#21724](https://github.com/ClickHouse/ClickHouse/pull/21724) ([Azat Khuzhin](https://github.com/azat)).
  * make the fuzzer use sources from the CI [#21754](https://github.com/ClickHouse/ClickHouse/pull/21754) ([Alexander Kuzmenkov](https://github.com/akuzm)).
  * Add one more variant to memcpy benchmark [#21759](https://github.com/ClickHouse/ClickHouse/pull/21759) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
- * fix incorrect number of rows for Chunks with no columns in PartialSor… [#21761](https://github.com/ClickHouse/ClickHouse/pull/21761) ([Alexander Kuzmenkov](https://github.com/akuzm)).
+ * fix incorrect number of rows for Chunks with no columns in PartialSor... [#21761](https://github.com/ClickHouse/ClickHouse/pull/21761) ([Alexander Kuzmenkov](https://github.com/akuzm)).
  * docs(fix): typo [#21775](https://github.com/ClickHouse/ClickHouse/pull/21775) ([Ali Demirci](https://github.com/depyronick)).
  * DDLWorker.cpp: fixed exceeded amount of tries typo [#21807](https://github.com/ClickHouse/ClickHouse/pull/21807) ([Eldar Nasyrov](https://github.com/3ldar-nasyrov)).
  * fix integration MaterializeMySQL test [#21819](https://github.com/ClickHouse/ClickHouse/pull/21819) ([TCeason](https://github.com/TCeason)).

@@ -160,7 +160,7 @@ sidebar_label: 2022
  * fix toString error on DatatypeDate32. [#37775](https://github.com/ClickHouse/ClickHouse/pull/37775) ([LiuNeng](https://github.com/liuneng1994)).
  * The clickhouse-keeper setting `dead_session_check_period_ms` was transformed into microseconds (multiplied by 1000), which lead to dead sessions only being cleaned up after several minutes (instead of 500ms). [#37824](https://github.com/ClickHouse/ClickHouse/pull/37824) ([Michael Lex](https://github.com/mlex)).
  * Fix possible "No more packets are available" for distributed queries (in case of `async_socket_for_remote`/`use_hedged_requests` is disabled). [#37826](https://github.com/ClickHouse/ClickHouse/pull/37826) ([Azat Khuzhin](https://github.com/azat)).
- * Do not drop the inner target table when executing `ALTER TABLE … MODIFY QUERY` in WindowView. [#37879](https://github.com/ClickHouse/ClickHouse/pull/37879) ([vxider](https://github.com/Vxider)).
+ * Do not drop the inner target table when executing `ALTER TABLE ... MODIFY QUERY` in WindowView. [#37879](https://github.com/ClickHouse/ClickHouse/pull/37879) ([vxider](https://github.com/Vxider)).
  * Fix directory ownership of coordination dir in clickhouse-keeper Docker image. Fixes [#37914](https://github.com/ClickHouse/ClickHouse/issues/37914). [#37915](https://github.com/ClickHouse/ClickHouse/pull/37915) ([James Maidment](https://github.com/jamesmaidment)).
  * Dictionaries fix custom query with update field and `{condition}`. Closes [#33746](https://github.com/ClickHouse/ClickHouse/issues/33746). [#37947](https://github.com/ClickHouse/ClickHouse/pull/37947) ([Maksim Kita](https://github.com/kitaisreal)).
  * Fix possible incorrect result of `SELECT ... WITH FILL` in the case when `ORDER BY` should be applied after `WITH FILL` result (e.g. for outer query). Incorrect result was caused by optimization for `ORDER BY` expressions ([#35623](https://github.com/ClickHouse/ClickHouse/issues/35623)). Closes [#37904](https://github.com/ClickHouse/ClickHouse/issues/37904). [#37959](https://github.com/ClickHouse/ClickHouse/pull/37959) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).

@@ -180,7 +180,7 @@ sidebar_label: 2022
  #### NO CL ENTRY

  * NO CL ENTRY: 'Revert "Fix mutations in tables with columns of type `Object`"'. [#37355](https://github.com/ClickHouse/ClickHouse/pull/37355) ([Alexander Tokmakov](https://github.com/tavplubix)).
- * NO CL ENTRY: 'Revert "Remove height restrictions from the query div in play web tool, and m…"'. [#37501](https://github.com/ClickHouse/ClickHouse/pull/37501) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+ * NO CL ENTRY: 'Revert "Remove height restrictions from the query div in play web tool, and m..."'. [#37501](https://github.com/ClickHouse/ClickHouse/pull/37501) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
  * NO CL ENTRY: 'Revert "Add support for preprocessing ZooKeeper operations in `clickhouse-keeper`"'. [#37534](https://github.com/ClickHouse/ClickHouse/pull/37534) ([Antonio Andelic](https://github.com/antonio2368)).
  * NO CL ENTRY: 'Revert "(only with zero-copy replication, non-production experimental feature not recommended to use) fix possible deadlock during fetching part"'. [#37545](https://github.com/ClickHouse/ClickHouse/pull/37545) ([Alexander Tokmakov](https://github.com/tavplubix)).
  * NO CL ENTRY: 'Revert "RFC: Fix converting types for UNION queries (may produce LOGICAL_ERROR)"'. [#37582](https://github.com/ClickHouse/ClickHouse/pull/37582) ([Dmitry Novik](https://github.com/novikd)).

@@ -410,7 +410,7 @@ sidebar_label: 2022
  * Add test for [#39132](https://github.com/ClickHouse/ClickHouse/issues/39132) [#39173](https://github.com/ClickHouse/ClickHouse/pull/39173) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
  * Suppression for BC check (`Cannot parse string 'Hello' as UInt64`) [#39176](https://github.com/ClickHouse/ClickHouse/pull/39176) ([Alexander Tokmakov](https://github.com/tavplubix)).
  * Fix 01961_roaring_memory_tracking test [#39187](https://github.com/ClickHouse/ClickHouse/pull/39187) ([Dmitry Novik](https://github.com/novikd)).
- * Cleanup: done during [#38719](https://github.com/ClickHouse/ClickHouse/issues/38719) (SortingStep: deduce way to sort based on … [#39191](https://github.com/ClickHouse/ClickHouse/pull/39191) ([Igor Nikonov](https://github.com/devcrafter)).
+ * Cleanup: done during [#38719](https://github.com/ClickHouse/ClickHouse/issues/38719) (SortingStep: deduce way to sort based on ... [#39191](https://github.com/ClickHouse/ClickHouse/pull/39191) ([Igor Nikonov](https://github.com/devcrafter)).
  * Fix exception in AsynchronousMetrics for s390x [#39193](https://github.com/ClickHouse/ClickHouse/pull/39193) ([Harry Lee](https://github.com/HarryLeeIBM)).
  * Optimize accesses to system.stack_trace (filter by name before sending signal) [#39212](https://github.com/ClickHouse/ClickHouse/pull/39212) ([Azat Khuzhin](https://github.com/azat)).
  * Enable warning "-Wdeprecated-dynamic-exception-spec" [#39213](https://github.com/ClickHouse/ClickHouse/pull/39213) ([Robert Schulze](https://github.com/rschu1ze)).

@@ -20,4 +20,4 @@ sidebar_label: 2023
  * Fix wrong approved_at, simplify conditions [#45302](https://github.com/ClickHouse/ClickHouse/pull/45302) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
  * Get rid of artifactory in favor of r2 + ch-repos-manager [#45421](https://github.com/ClickHouse/ClickHouse/pull/45421) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
  * Trim refs/tags/ from GITHUB_TAG in release workflow [#45636](https://github.com/ClickHouse/ClickHouse/pull/45636) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
- * Merge pull request [#38262](https://github.com/ClickHouse/ClickHouse/issues/38262) from PolyProgrammist/fix-ordinary-system-un… [#45650](https://github.com/ClickHouse/ClickHouse/pull/45650) ([alesapin](https://github.com/alesapin)).
+ * Merge pull request [#38262](https://github.com/ClickHouse/ClickHouse/issues/38262) from PolyProgrammist/fix-ordinary-system-un... [#45650](https://github.com/ClickHouse/ClickHouse/pull/45650) ([alesapin](https://github.com/alesapin)).

@@ -217,7 +217,7 @@ sidebar_label: 2023
  * S3Queue minor fix [#56999](https://github.com/ClickHouse/ClickHouse/pull/56999) ([Kseniia Sumarokova](https://github.com/kssenii)).
  * Fix file path validation for DatabaseFileSystem [#57029](https://github.com/ClickHouse/ClickHouse/pull/57029) ([San](https://github.com/santrancisco)).
  * Fix `fuzzBits` with `ARRAY JOIN` [#57033](https://github.com/ClickHouse/ClickHouse/pull/57033) ([Antonio Andelic](https://github.com/antonio2368)).
- * Fix Nullptr dereference in partial merge join with joined_subquery_re… [#57048](https://github.com/ClickHouse/ClickHouse/pull/57048) ([vdimir](https://github.com/vdimir)).
+ * Fix Nullptr dereference in partial merge join with joined_subquery_re... [#57048](https://github.com/ClickHouse/ClickHouse/pull/57048) ([vdimir](https://github.com/vdimir)).
  * Fix race condition in RemoteSource [#57052](https://github.com/ClickHouse/ClickHouse/pull/57052) ([Raúl Marín](https://github.com/Algunenano)).
  * Implement `bitHammingDistance` for big integers [#57073](https://github.com/ClickHouse/ClickHouse/pull/57073) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
  * S3-style links bug fix [#57075](https://github.com/ClickHouse/ClickHouse/pull/57075) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).

@@ -272,7 +272,7 @@ sidebar_label: 2023
  * Bump Azure to v1.6.0 [#58052](https://github.com/ClickHouse/ClickHouse/pull/58052) ([Robert Schulze](https://github.com/rschu1ze)).
  * Correct values for randomization [#58058](https://github.com/ClickHouse/ClickHouse/pull/58058) ([Anton Popov](https://github.com/CurtizJ)).
  * Non post request should be readonly [#58060](https://github.com/ClickHouse/ClickHouse/pull/58060) ([San](https://github.com/santrancisco)).
- * Revert "Merge pull request [#55710](https://github.com/ClickHouse/ClickHouse/issues/55710) from guoxiaolongzte/clickhouse-test… [#58066](https://github.com/ClickHouse/ClickHouse/pull/58066) ([Raúl Marín](https://github.com/Algunenano)).
+ * Revert "Merge pull request [#55710](https://github.com/ClickHouse/ClickHouse/issues/55710) from guoxiaolongzte/clickhouse-test... [#58066](https://github.com/ClickHouse/ClickHouse/pull/58066) ([Raúl Marín](https://github.com/Algunenano)).
  * fix typo in the test 02479 [#58072](https://github.com/ClickHouse/ClickHouse/pull/58072) ([Sema Checherinda](https://github.com/CheSema)).
  * Bump Azure to 1.7.2 [#58075](https://github.com/ClickHouse/ClickHouse/pull/58075) ([Robert Schulze](https://github.com/rschu1ze)).
  * Fix flaky test `02567_and_consistency` [#58076](https://github.com/ClickHouse/ClickHouse/pull/58076) ([Anton Popov](https://github.com/CurtizJ)).

@@ -520,7 +520,7 @@ sidebar_label: 2023
  * Improve script for updating clickhouse-docs [#48135](https://github.com/ClickHouse/ClickHouse/pull/48135) ([Alexander Tokmakov](https://github.com/tavplubix)).
  * Fix stdlib compatibility issues [#48150](https://github.com/ClickHouse/ClickHouse/pull/48150) ([DimasKovas](https://github.com/DimasKovas)).
  * Make test test_disallow_concurrency less flaky [#48152](https://github.com/ClickHouse/ClickHouse/pull/48152) ([Vitaly Baranov](https://github.com/vitlibar)).
- * Remove unused mockSystemDatabase from gtest_transform_query_for_exter… [#48162](https://github.com/ClickHouse/ClickHouse/pull/48162) ([Vladimir C](https://github.com/vdimir)).
+ * Remove unused mockSystemDatabase from gtest_transform_query_for_exter... [#48162](https://github.com/ClickHouse/ClickHouse/pull/48162) ([Vladimir C](https://github.com/vdimir)).
  * Update environmental-sensors.md [#48166](https://github.com/ClickHouse/ClickHouse/pull/48166) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
  * Correctly handle NULL constants in logical optimizer for new analyzer [#48168](https://github.com/ClickHouse/ClickHouse/pull/48168) ([Antonio Andelic](https://github.com/antonio2368)).
  * Try making KeeperMap test more stable [#48170](https://github.com/ClickHouse/ClickHouse/pull/48170) ([Antonio Andelic](https://github.com/antonio2368)).

@@ -474,7 +474,7 @@ sidebar_label: 2023
  * Fix flakiness of test_distributed_load_balancing test [#49921](https://github.com/ClickHouse/ClickHouse/pull/49921) ([Azat Khuzhin](https://github.com/azat)).
  * Add some logging [#49925](https://github.com/ClickHouse/ClickHouse/pull/49925) ([Kseniia Sumarokova](https://github.com/kssenii)).
  * Support hardlinking parts transactionally [#49931](https://github.com/ClickHouse/ClickHouse/pull/49931) ([Michael Kolupaev](https://github.com/al13n321)).
- * Fix for analyzer: 02377_ optimize_sorting_by_input_stream_properties_e… [#49943](https://github.com/ClickHouse/ClickHouse/pull/49943) ([Igor Nikonov](https://github.com/devcrafter)).
+ * Fix for analyzer: 02377_ optimize_sorting_by_input_stream_properties_e... [#49943](https://github.com/ClickHouse/ClickHouse/pull/49943) ([Igor Nikonov](https://github.com/devcrafter)).
  * Follow up to [#49429](https://github.com/ClickHouse/ClickHouse/issues/49429) [#49964](https://github.com/ClickHouse/ClickHouse/pull/49964) ([Kseniia Sumarokova](https://github.com/kssenii)).
  * Fix flaky test_ssl_cert_authentication to use urllib3 [#49982](https://github.com/ClickHouse/ClickHouse/pull/49982) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
  * Fix woboq codebrowser build with -Wno-poison-system-directories [#49992](https://github.com/ClickHouse/ClickHouse/pull/49992) ([Azat Khuzhin](https://github.com/azat)).

@@ -272,7 +272,7 @@ sidebar_label: 2023
  * Add more checks into ThreadStatus ctor. [#42019](https://github.com/ClickHouse/ClickHouse/pull/42019) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
  * Refactor Query Tree visitor [#46740](https://github.com/ClickHouse/ClickHouse/pull/46740) ([Dmitry Novik](https://github.com/novikd)).
  * Revert "Revert "Randomize JIT settings in tests"" [#48282](https://github.com/ClickHouse/ClickHouse/pull/48282) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
- * Fix outdated cache configuration in s3 tests: s3_storage_policy_by_defau… [#48424](https://github.com/ClickHouse/ClickHouse/pull/48424) ([Kseniia Sumarokova](https://github.com/kssenii)).
+ * Fix outdated cache configuration in s3 tests: s3_storage_policy_by_defau... [#48424](https://github.com/ClickHouse/ClickHouse/pull/48424) ([Kseniia Sumarokova](https://github.com/kssenii)).
  * Fix IN with decimal in analyzer [#48754](https://github.com/ClickHouse/ClickHouse/pull/48754) ([vdimir](https://github.com/vdimir)).
  * Some unclear change in StorageBuffer::reschedule() for something [#49723](https://github.com/ClickHouse/ClickHouse/pull/49723) ([DimasKovas](https://github.com/DimasKovas)).
  * MergeTree & SipHash checksum big-endian support [#50276](https://github.com/ClickHouse/ClickHouse/pull/50276) ([ltrk2](https://github.com/ltrk2)).

@@ -13,7 +13,7 @@ sidebar_label: 2024
  #### Bug Fix (user-visible misbehavior in an official stable release)

- * Fix `ASTAlterCommand::formatImpl` in case of column specific settings… [#59445](https://github.com/ClickHouse/ClickHouse/pull/59445) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
+ * Fix `ASTAlterCommand::formatImpl` in case of column specific settings... [#59445](https://github.com/ClickHouse/ClickHouse/pull/59445) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
  * Make MAX use the same rules as permutation for complex types [#59498](https://github.com/ClickHouse/ClickHouse/pull/59498) ([Raúl Marín](https://github.com/Algunenano)).
  * Fix corner case when passing `update_insert_deduplication_token_in_dependent_materialized_views` [#59544](https://github.com/ClickHouse/ClickHouse/pull/59544) ([Jordi Villar](https://github.com/jrdi)).
  * Fix incorrect result of arrayElement / map[] on empty value [#59594](https://github.com/ClickHouse/ClickHouse/pull/59594) ([Raúl Marín](https://github.com/Algunenano)).

@@ -130,7 +130,7 @@ sidebar_label: 2024
  * Fix translate() with FixedString input [#59356](https://github.com/ClickHouse/ClickHouse/pull/59356) ([Raúl Marín](https://github.com/Algunenano)).
  * Fix digest calculation in Keeper [#59439](https://github.com/ClickHouse/ClickHouse/pull/59439) ([Antonio Andelic](https://github.com/antonio2368)).
  * Fix stacktraces for binaries without debug symbols [#59444](https://github.com/ClickHouse/ClickHouse/pull/59444) ([Azat Khuzhin](https://github.com/azat)).
- * Fix `ASTAlterCommand::formatImpl` in case of column specific settings… [#59445](https://github.com/ClickHouse/ClickHouse/pull/59445) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
+ * Fix `ASTAlterCommand::formatImpl` in case of column specific settings... [#59445](https://github.com/ClickHouse/ClickHouse/pull/59445) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
  * Fix `SELECT * FROM [...] ORDER BY ALL` with Analyzer [#59462](https://github.com/ClickHouse/ClickHouse/pull/59462) ([zhongyuankai](https://github.com/zhongyuankai)).
  * Fix possible uncaught exception during distributed query cancellation [#59487](https://github.com/ClickHouse/ClickHouse/pull/59487) ([Azat Khuzhin](https://github.com/azat)).
  * Make MAX use the same rules as permutation for complex types [#59498](https://github.com/ClickHouse/ClickHouse/pull/59498) ([Raúl Marín](https://github.com/Algunenano)).

@@ -526,7 +526,7 @@ sidebar_label: 2024
  * No "please" [#61916](https://github.com/ClickHouse/ClickHouse/pull/61916) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
  * Update version_date.tsv and changelogs after v23.12.6.19-stable [#61917](https://github.com/ClickHouse/ClickHouse/pull/61917) ([robot-clickhouse](https://github.com/robot-clickhouse)).
  * Update version_date.tsv and changelogs after v24.1.8.22-stable [#61918](https://github.com/ClickHouse/ClickHouse/pull/61918) ([robot-clickhouse](https://github.com/robot-clickhouse)).
- * Fix flaky test_broken_projestions/test.py::test_broken_ignored_replic… [#61932](https://github.com/ClickHouse/ClickHouse/pull/61932) ([Kseniia Sumarokova](https://github.com/kssenii)).
+ * Fix flaky test_broken_projestions/test.py::test_broken_ignored_replic... [#61932](https://github.com/ClickHouse/ClickHouse/pull/61932) ([Kseniia Sumarokova](https://github.com/kssenii)).
  * Check is Rust avaiable for build, if not, suggest a way to disable Rust support [#61938](https://github.com/ClickHouse/ClickHouse/pull/61938) ([Azat Khuzhin](https://github.com/azat)).
  * CI: new ci menu in PR body [#61948](https://github.com/ClickHouse/ClickHouse/pull/61948) ([Max K.](https://github.com/maxknv)).
  * Remove flaky test `01193_metadata_loading` [#61961](https://github.com/ClickHouse/ClickHouse/pull/61961) ([Nikita Taranov](https://github.com/nickitat)).
@@ -57,7 +57,7 @@ memcpy(&buf[place_value], &x, sizeof(x));
  for (size_t i = 0; i < rows; i += storage.index_granularity)
  ```

- **7.** Add spaces around binary operators (`+`, `-`, `*`, `/`, `%`, …) and the ternary operator `?:`.
+ **7.** Add spaces around binary operators (`+`, `-`, `*`, `/`, `%`, ...) and the ternary operator `?:`.

  ``` cpp
  UInt16 year = (s[0] - '0') * 1000 + (s[1] - '0') * 100 + (s[2] - '0') * 10 + (s[3] - '0');

@@ -86,7 +86,7 @@ dst.ClickGoodEvent = click.GoodEvent;

  If necessary, the operator can be wrapped to the next line. In this case, the offset in front of it is increased.

- **11.** Do not use a space to separate unary operators (`--`, `++`, `*`, `&`, …) from the argument.
+ **11.** Do not use a space to separate unary operators (`--`, `++`, `*`, `&`, ...) from the argument.

  **12.** Put a space after a comma, but not before it. The same rule goes for a semicolon inside a `for` expression.

@@ -115,7 +115,7 @@ public:

  **16.** If the same `namespace` is used for the entire file, and there isn’t anything else significant, an offset is not necessary inside `namespace`.

- **17.** If the block for an `if`, `for`, `while`, or other expression consists of a single `statement`, the curly brackets are optional. Place the `statement` on a separate line, instead. This rule is also valid for nested `if`, `for`, `while`, …
+ **17.** If the block for an `if`, `for`, `while`, or other expression consists of a single `statement`, the curly brackets are optional. Place the `statement` on a separate line, instead. This rule is also valid for nested `if`, `for`, `while`, ...

  But if the inner `statement` contains curly brackets or `else`, the external block should be written in curly brackets.
@@ -118,7 +118,7 @@ If the listing of files contains number ranges with leading zeros, use the const

  **Example**

- Create table with files named `file000`, `file001`, … , `file999`:
+ Create table with files named `file000`, `file001`, ... , `file999`:

  ``` sql
  CREATE TABLE big_table (name String, value UInt32) ENGINE = HDFS('hdfs://hdfs1:9000/big_dir/file{0..9}{0..9}{0..9}', 'CSV')
@@ -178,7 +178,7 @@ If the listing of files contains number ranges with leading zeros, use the const

  **Example with wildcards 1**

- Create table with files named `file-000.csv`, `file-001.csv`, … , `file-999.csv`:
+ Create table with files named `file-000.csv`, `file-001.csv`, ... , `file-999.csv`:

  ``` sql
  CREATE TABLE big_table (name String, value UInt32)
@@ -71,7 +71,7 @@ WHERE table = 'visits'
  └───────────┴───────────────────┴────────┘
  ```

- The `partition` column contains the names of the partitions. There are two partitions in this example: `201901` and `201902`. You can use this column value to specify the partition name in [ALTER … PARTITION](../../../sql-reference/statements/alter/partition.md) queries.
+ The `partition` column contains the names of the partitions. There are two partitions in this example: `201901` and `201902`. You can use this column value to specify the partition name in [ALTER ... PARTITION](../../../sql-reference/statements/alter/partition.md) queries.

  The `name` column contains the names of the partition data parts. You can use this column to specify the name of the part in the [ALTER ATTACH PART](../../../sql-reference/statements/alter/partition.md#alter_attach-partition) query.
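To make the renamed link concrete, a minimal hedged sketch of an `ALTER ... PARTITION` query against the example above (table `visits`, partitions `201901`/`201902`); the choice of `DETACH` is illustrative only:

```sql
-- Illustrative only: operate on one of the partitions listed in system.parts above.
ALTER TABLE visits DETACH PARTITION 201901;
```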
@@ -954,7 +954,7 @@ In the case of `MergeTree` tables, data is getting to disk in different ways:
  - As a result of an insert (`INSERT` query).
  - During background merges and [mutations](/docs/en/sql-reference/statements/alter/index.md#alter-mutations).
  - When downloading from another replica.
- - As a result of partition freezing [ALTER TABLE … FREEZE PARTITION](/docs/en/sql-reference/statements/alter/partition.md/#alter_freeze-partition).
+ - As a result of partition freezing [ALTER TABLE ... FREEZE PARTITION](/docs/en/sql-reference/statements/alter/partition.md/#alter_freeze-partition).

  In all these cases except for mutations and partition freezing, a part is stored on a volume and a disk according to the given storage policy:
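As a sketch of the partition-freezing path from the last bullet (table and partition names are reused from this page's examples, not from the commit):

```sql
-- Freezing creates hard links to the part files under the shadow/ directory on the part's disk.
ALTER TABLE visits FREEZE PARTITION 201902;
```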
@@ -966,7 +966,7 @@ Under the hood, mutations and partition freezing make use of [hard links](https:
  In the background, parts are moved between volumes on the basis of the amount of free space (`move_factor` parameter) according to the order the volumes are declared in the configuration file.
  Data is never transferred from the last one and into the first one. One may use system tables [system.part_log](/docs/en/operations/system-tables/part_log.md/#system_tables-part-log) (field `type = MOVE_PART`) and [system.parts](/docs/en/operations/system-tables/parts.md/#system_tables-parts) (fields `path` and `disk`) to monitor background moves. Also, the detailed information can be found in server logs.

- User can force moving a part or a partition from one volume to another using the query [ALTER TABLE … MOVE PART\|PARTITION … TO VOLUME\|DISK …](/docs/en/sql-reference/statements/alter/partition.md/#alter_move-partition), all the restrictions for background operations are taken into account. The query initiates a move on its own and does not wait for background operations to be completed. User will get an error message if not enough free space is available or if any of the required conditions are not met.
+ User can force moving a part or a partition from one volume to another using the query [ALTER TABLE ... MOVE PART\|PARTITION ... TO VOLUME\|DISK ...](/docs/en/sql-reference/statements/alter/partition.md/#alter_move-partition), all the restrictions for background operations are taken into account. The query initiates a move on its own and does not wait for background operations to be completed. User will get an error message if not enough free space is available or if any of the required conditions are not met.

  Moving data does not interfere with data replication. Therefore, different storage policies can be specified for the same table on different replicas.
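A minimal sketch of such a forced move; the volume name 'cold' is an assumption for illustration:

```sql
-- Hypothetical volume name; the query starts the move itself and does not wait for background operations.
ALTER TABLE visits MOVE PARTITION 201902 TO VOLUME 'cold';
```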
@@ -29,7 +29,7 @@ Only a single table can be retrieved from stdin.
  The following parameters are optional: **–name**– Name of the table. If omitted, _data is used.
  **–format** – Data format in the file. If omitted, TabSeparated is used.

- One of the following parameters is required:**–types** – A list of comma-separated column types. For example: `UInt64,String`. The columns will be named _1, _2, …
+ One of the following parameters is required:**–types** – A list of comma-separated column types. For example: `UInt64,String`. The columns will be named _1, _2, ...
  **–structure**– The table structure in the format`UserID UInt64`, `URL String`. Defines the column names and types.

  The files specified in ‘file’ will be parsed by the format specified in ‘format’, using the data types specified in ‘types’ or ‘structure’. The table will be uploaded to the server and accessible there as a temporary table with the name in ‘name’.
@@ -14,6 +14,10 @@ Usage scenarios:
  - Convert data from one format to another.
  - Updating data in ClickHouse via editing a file on a disk.

+ :::note
+ This engine is not currently available in ClickHouse Cloud, please [use the S3 table function instead](/docs/en/sql-reference/table-functions/s3.md).
+ :::
+
  ## Usage in ClickHouse Server {#usage-in-clickhouse-server}

  ``` sql
@@ -197,6 +197,7 @@ SELECT * FROM nestedt FORMAT TSV
  - [input_format_tsv_enum_as_number](/docs/en/operations/settings/settings-formats.md/#input_format_tsv_enum_as_number) - treat inserted enum values in TSV formats as enum indices. Default value - `false`.
  - [input_format_tsv_use_best_effort_in_schema_inference](/docs/en/operations/settings/settings-formats.md/#input_format_tsv_use_best_effort_in_schema_inference) - use some tweaks and heuristics to infer schema in TSV format. If disabled, all fields will be inferred as Strings. Default value - `true`.
  - [output_format_tsv_crlf_end_of_line](/docs/en/operations/settings/settings-formats.md/#output_format_tsv_crlf_end_of_line) - if it is set true, end of line in TSV output format will be `\r\n` instead of `\n`. Default value - `false`.
+ - [input_format_tsv_crlf_end_of_line](/docs/en/operations/settings/settings-formats.md/#input_format_tsv_crlf_end_of_line) - if it is set true, end of line in TSV input format will be `\r\n` instead of `\n`. Default value - `false`.
  - [input_format_tsv_skip_first_lines](/docs/en/operations/settings/settings-formats.md/#input_format_tsv_skip_first_lines) - skip specified number of lines at the beginning of data. Default value - `0`.
  - [input_format_tsv_detect_header](/docs/en/operations/settings/settings-formats.md/#input_format_tsv_detect_header) - automatically detect header with names and types in TSV format. Default value - `true`.
  - [input_format_tsv_skip_trailing_empty_lines](/docs/en/operations/settings/settings-formats.md/#input_format_tsv_skip_trailing_empty_lines) - skip trailing empty lines at the end of data. Default value - `false`.
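A short hedged sketch of the newly listed input setting in use, reusing the `nestedt` table from the hunk context:

```sql
-- Accept \r\n line endings in incoming TSV data for this session.
SET input_format_tsv_crlf_end_of_line = 1;
-- Subsequent INSERT INTO nestedt ... FORMAT TSV statements will then parse CRLF-terminated rows.
```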
@@ -22,7 +22,7 @@ description: In order to effectively mitigate possible human errors, you should
  TEMPORARY TABLE table_name [AS table_name_in_backup] |
  VIEW view_name [AS view_name_in_backup]
  ALL TEMPORARY TABLES [EXCEPT ...] |
- ALL DATABASES [EXCEPT ...] } [,...]
+ ALL [EXCEPT ...] } [,...]
  [ON CLUSTER 'cluster_name']
  TO|FROM File('<path>/<filename>') | Disk('<disk_name>', '<path>/') | S3('<S3 endpoint>/<path>', '<Access key ID>', '<Secret access key>')
  [SETTINGS base_backup = File('<path>/<filename>') | Disk(...) | S3('<S3 endpoint>/<path>', '<Access key ID>', '<Secret access key>')]
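For orientation, one minimal instance of the grammar above; the disk name 'backups' is an assumed, preconfigured destination:

```sql
-- Sketch: back up a single table to a named disk.
BACKUP TABLE test.table TO Disk('backups', '1.zip');
```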
@@ -561,6 +561,25 @@ Default value: 5000
  <max_table_num_to_warn>400</max_table_num_to_warn>
  ```

+ ## max\_view\_num\_to\_warn {#max-view-num-to-warn}
+ If the number of attached views exceeds the specified value, clickhouse server will add warning messages to `system.warnings` table.
+ Default value: 10000
+
+ **Example**
+
+ ``` xml
+ <max_view_num_to_warn>400</max_view_num_to_warn>
+ ```
+
+ ## max\_dictionary\_num\_to\_warn {#max-dictionary-num-to-warn}
+ If the number of attached dictionaries exceeds the specified value, clickhouse server will add warning messages to `system.warnings` table.
+ Default value: 1000
+
+ **Example**
+
+ ``` xml
+ <max_dictionary_num_to_warn>400</max_dictionary_num_to_warn>
+ ```

  ## max\_part\_num\_to\_warn {#max-part-num-to-warn}
  If the number of active parts exceeds the specified value, clickhouse server will add warning messages to `system.warnings` table.
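The warnings produced by these `max_*_num_to_warn` thresholds land in the `system.warnings` table, so a simple way to inspect them is:

```sql
-- List active server warnings, including the max_*_num_to_warn messages.
SELECT message FROM system.warnings;
```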
@@ -303,7 +303,7 @@ What to do when the amount of data exceeds one of the limits: ‘throw’ or ‘

  Limits the number of rows in the hash table that is used when joining tables.

- This settings applies to [SELECT … JOIN](../../sql-reference/statements/select/join.md#select-join) operations and the [Join](../../engines/table-engines/special/join.md) table engine.
+ This settings applies to [SELECT ... JOIN](../../sql-reference/statements/select/join.md#select-join) operations and the [Join](../../engines/table-engines/special/join.md) table engine.

  If a query contains multiple joins, ClickHouse checks this setting for every intermediate result.
@@ -320,7 +320,7 @@ Default value: 0.

  Limits the size in bytes of the hash table used when joining tables.

- This setting applies to [SELECT … JOIN](../../sql-reference/statements/select/join.md#select-join) operations and [Join table engine](../../engines/table-engines/special/join.md).
+ This setting applies to [SELECT ... JOIN](../../sql-reference/statements/select/join.md#select-join) operations and [Join table engine](../../engines/table-engines/special/join.md).

  If the query contains joins, ClickHouse checks this setting for every intermediate result.
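A hedged sketch combining the two join limits with the overflow mode mentioned in the first hunk header ('throw' or 'break'); the numeric values are arbitrary assumptions:

```sql
-- Cap the join hash table by rows and bytes; 'break' returns a partial result instead of an error.
SET max_rows_in_join = 1000000, max_bytes_in_join = 100000000, join_overflow_mode = 'break';
```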
@@ -831,7 +831,13 @@ Default value: `0`.

  ### output_format_tsv_crlf_end_of_line {#output_format_tsv_crlf_end_of_line}

- Use DOC/Windows-style line separator (CRLF) in TSV instead of Unix style (LF).
+ Use DOS/Windows-style line separator (CRLF) in TSV instead of Unix style (LF).

  Disabled by default.

+ ### input_format_tsv_crlf_end_of_line {#input_format_tsv_crlf_end_of_line}
+
+ Use DOS/Windows-style line separator (CRLF) for TSV input files instead of Unix style (LF).
+
+ Disabled by default.
+
@@ -2248,7 +2248,7 @@ Default value: 0.

  ## count_distinct_implementation {#count_distinct_implementation}

- Specifies which of the `uniq*` functions should be used to perform the [COUNT(DISTINCT …)](../../sql-reference/aggregate-functions/reference/count.md/#agg_function-count) construction.
+ Specifies which of the `uniq*` functions should be used to perform the [COUNT(DISTINCT ...)](../../sql-reference/aggregate-functions/reference/count.md/#agg_function-count) construction.

  Possible values:
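As a hedged usage sketch (table and column are assumptions), the setting picks the `uniq*` function that backs the rewritten aggregate:

```sql
SET count_distinct_implementation = 'uniqCombined';
SELECT count(DISTINCT UserID) FROM visits; -- executed as uniqCombined(UserID)
```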
@@ -7,27 +7,27 @@ title: "External Disks for Storing Data"

  Data, processed in ClickHouse, is usually stored in the local file system — on the same machine with the ClickHouse server. That requires large-capacity disks, which can be expensive enough. To avoid that you can store the data remotely. Various storages are supported:
  1. [Amazon S3](https://aws.amazon.com/s3/) object storage.
- 2. The Hadoop Distributed File System ([HDFS](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html))
- 3. [Azure Blob Storage](https://azure.microsoft.com/en-us/products/storage/blobs).
+ 2. [Azure Blob Storage](https://azure.microsoft.com/en-us/products/storage/blobs).
+ 3. Unsupported: The Hadoop Distributed File System ([HDFS](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html))

  :::note ClickHouse also has support for external table engines, which are different from external storage option described on this page as they allow to read data stored in some general file format (like Parquet), while on this page we are describing storage configuration for ClickHouse `MergeTree` family or `Log` family tables.
  1. to work with data stored on `Amazon S3` disks, use [S3](/docs/en/engines/table-engines/integrations/s3.md) table engine.
- 2. to work with data in the Hadoop Distributed File System — [HDFS](/docs/en/engines/table-engines/integrations/hdfs.md) table engine.
- 3. to work with data stored in Azure Blob Storage use [AzureBlobStorage](/docs/en/engines/table-engines/integrations/azureBlobStorage.md) table engine.
+ 2. to work with data stored in Azure Blob Storage use [AzureBlobStorage](/docs/en/engines/table-engines/integrations/azureBlobStorage.md) table engine.
+ 3. Unsupported: to work with data in the Hadoop Distributed File System — [HDFS](/docs/en/engines/table-engines/integrations/hdfs.md) table engine.
  :::

  ## Configuring external storage {#configuring-external-storage}

- [MergeTree](/docs/en/engines/table-engines/mergetree-family/mergetree.md) and [Log](/docs/en/engines/table-engines/log-family/log.md) family table engines can store data to `S3`, `AzureBlobStorage`, `HDFS` using a disk with types `s3`, `azure_blob_storage`, `hdfs` accordingly.
+ [MergeTree](/docs/en/engines/table-engines/mergetree-family/mergetree.md) and [Log](/docs/en/engines/table-engines/log-family/log.md) family table engines can store data to `S3`, `AzureBlobStorage`, `HDFS` (unsupported) using a disk with types `s3`, `azure_blob_storage`, `hdfs` (unsupported) accordingly.

  Disk configuration requires:
- 1. `type` section, equal to one of `s3`, `azure_blob_storage`, `hdfs`, `local_blob_storage`, `web`.
+ 1. `type` section, equal to one of `s3`, `azure_blob_storage`, `hdfs` (unsupported), `local_blob_storage`, `web`.
  2. Configuration of a specific external storage type.

  Starting from 24.1 clickhouse version, it is possible to use a new configuration option.
  It requires to specify:
  1. `type` equal to `object_storage`
- 2. `object_storage_type`, equal to one of `s3`, `azure_blob_storage` (or just `azure` from `24.3`), `hdfs`, `local_blob_storage` (or just `local` from `24.3`), `web`.
+ 2. `object_storage_type`, equal to one of `s3`, `azure_blob_storage` (or just `azure` from `24.3`), `hdfs` (unsupported), `local_blob_storage` (or just `local` from `24.3`), `web`.
  Optionally, `metadata_type` can be specified (it is equal to `local` by default), but it can also be set to `plain`, `web` and, starting from `24.4`, `plain_rewritable`.
  Usage of `plain` metadata type is described in [plain storage section](/docs/en/operations/storing-data.md/#storing-data-on-webserver), `web` metadata type can be used only with `web` object storage type, `local` metadata type stores metadata files locally (each metadata files contains mapping to files in object storage and some additional meta information about them).
||||
@ -328,7 +328,7 @@ Configuration:
|
||||
</s3_plain>
|
||||
```
|
||||
|
||||
Starting from `24.1` it is possible configure any object storage disk (`s3`, `azure`, `hdfs`, `local`) using `plain` metadata type.
|
||||
Starting from `24.1` it is possible configure any object storage disk (`s3`, `azure`, `hdfs` (unsupported), `local`) using `plain` metadata type.
|
||||
|
||||
Configuration:
|
||||
``` xml
|
||||
@@ -421,6 +421,7 @@ Other parameters:
  * `skip_access_check` - If true, disk access checks will not be performed on disk start-up. Default value is `false`.
  * `read_resource` — Resource name to be used for [scheduling](/docs/en/operations/workload-scheduling.md) of read requests to this disk. Default value is empty string (IO scheduling is not enabled for this disk).
  * `write_resource` — Resource name to be used for [scheduling](/docs/en/operations/workload-scheduling.md) of write requests to this disk. Default value is empty string (IO scheduling is not enabled for this disk).
+ * `metadata_keep_free_space_bytes` - the amount of free metadata disk space to be reserved.

  Examples of working configurations can be found in integration tests directory (see e.g. [test_merge_tree_azure_blob_storage](https://github.com/ClickHouse/ClickHouse/blob/master/tests/integration/test_merge_tree_azure_blob_storage/configs/config.d/storage_conf.xml) or [test_azure_blob_storage_zero_copy_replication](https://github.com/ClickHouse/ClickHouse/blob/master/tests/integration/test_azure_blob_storage_zero_copy_replication/configs/config.d/storage_conf.xml)).
@@ -428,12 +429,14 @@ Examples of working configurations can be found in integration tests directory (
  Zero-copy replication is disabled by default in ClickHouse version 22.8 and higher. This feature is not recommended for production use.
  :::

- ## Using HDFS storage {#hdfs-storage}
+ ## Using HDFS storage (Unsupported)

  In this sample configuration:
- - the disk is of type `hdfs`
+ - the disk is of type `hdfs` (unsupported)
  - the data is hosted at `hdfs://hdfs1:9000/clickhouse/`

+ By the way, HDFS is unsupported and therefore there might be issues when using it. Feel free to make a pull request with the fix if any issue arises.
+
  ```xml
  <clickhouse>
  <storage_configuration>

@@ -464,9 +467,11 @@ In this sample configuration:
  </clickhouse>
  ```

+ Keep in mind that HDFS may not work in corner cases.
+
  ### Using Data Encryption {#encrypted-virtual-file-system}

- You can encrypt the data stored on [S3](/docs/en/engines/table-engines/mergetree-family/mergetree.md/#table_engine-mergetree-s3), or [HDFS](#configuring-hdfs) external disks, or on a local disk. To turn on the encryption mode, in the configuration file you must define a disk with the type `encrypted` and choose a disk on which the data will be saved. An `encrypted` disk ciphers all written files on the fly, and when you read files from an `encrypted` disk it deciphers them automatically. So you can work with an `encrypted` disk like with a normal one.
+ You can encrypt the data stored on [S3](/docs/en/engines/table-engines/mergetree-family/mergetree.md/#table_engine-mergetree-s3), or [HDFS](#configuring-hdfs) (unsupported) external disks, or on a local disk. To turn on the encryption mode, in the configuration file you must define a disk with the type `encrypted` and choose a disk on which the data will be saved. An `encrypted` disk ciphers all written files on the fly, and when you read files from an `encrypted` disk it deciphers them automatically. So you can work with an `encrypted` disk like with a normal one.

  Example of disk configuration:
@ -529,7 +534,7 @@ Example of disk configuration:

It is possible to configure local cache over disks in storage configuration starting from version 22.3.
For versions 22.3 - 22.7 cache is supported only for `s3` disk type. For versions >= 22.8 cache is supported for any disk type: S3, Azure, Local, Encrypted, etc.
For versions >= 23.5 cache is supported only for remote disk types: S3, Azure, HDFS.
For versions >= 23.5 cache is supported only for remote disk types: S3, Azure, HDFS (unsupported).
Cache uses `LRU` cache policy.
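As a rough sketch of the shape such a cache configuration takes (the disk name, path and size here are illustrative placeholders):

```xml
<clickhouse>
    <storage_configuration>
        <disks>
            <!-- a cache disk wraps an existing remote disk -->
            <s3_cache>
                <type>cache</type>
                <disk>s3_disk</disk>
                <path>/var/lib/clickhouse/disks/s3_cache/</path>
                <max_size>10Gi</max_size>
            </s3_cache>
        </disks>
    </storage_configuration>
</clickhouse>
```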
@ -971,7 +976,7 @@ Use [http_max_single_read_retries](/docs/en/operations/settings/settings.md/#htt

### Zero-copy Replication (not ready for production) {#zero-copy}

Zero-copy replication is possible, but not recommended, with `S3` and `HDFS` disks. Zero-copy replication means that if the data is stored remotely on several machines and needs to be synchronized, then only the metadata is replicated (paths to the data parts), but not the data itself.
Zero-copy replication is possible, but not recommended, with `S3` and `HDFS` (unsupported) disks. Zero-copy replication means that if the data is stored remotely on several machines and needs to be synchronized, then only the metadata is replicated (paths to the data parts), but not the data itself.

:::note Zero-copy replication is not ready for production
Zero-copy replication is disabled by default in ClickHouse version 22.8 and higher. This feature is not recommended for production use.
@ -82,7 +82,7 @@ FROM

In this case, you should remember that you do not know the histogram bin borders.

## sequenceMatch(pattern)(timestamp, cond1, cond2, …)
## sequenceMatch(pattern)(timestamp, cond1, cond2, ...)

Checks whether the sequence contains an event chain that matches the pattern.

@ -172,7 +172,7 @@ SELECT sequenceMatch('(?1)(?2)')(time, number = 1, number = 2, number = 4) FROM

- [sequenceCount](#function-sequencecount)

## sequenceCount(pattern)(time, cond1, cond2, …)
## sequenceCount(pattern)(time, cond1, cond2, ...)

Counts the number of event chains that matched the pattern. The function searches event chains that do not overlap. It starts to search for the next chain after the current chain is matched.
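A small illustration of the non-overlapping search on a synthetic event stream (the query and the expected count are illustrative, not taken from the page above):

```sql
-- Odd numbers act as "event 1" and even numbers as "event 2"; chains may not overlap,
-- so 0..9 should yield the four chains (1,2), (3,4), (5,6) and (7,8), i.e. 4.
SELECT sequenceCount('(?1)(?2)')(number, number % 2 = 1, number % 2 = 0)
FROM numbers(10);
```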
@ -0,0 +1,45 @@

---
slug: /en/sql-reference/aggregate-functions/reference/analysis_of_variance
sidebar_position: 6
---

# analysisOfVariance

Provides a statistical test for one-way analysis of variance (ANOVA test). It is a test over several groups of normally distributed observations to find out whether all groups have the same mean or not.

**Syntax**

```sql
analysisOfVariance(val, group_no)
```

Aliases: `anova`

**Parameters**
- `val`: value.
- `group_no`: group number that `val` belongs to.

:::note
Groups are enumerated starting from 0 and there should be at least two groups to perform a test.
There should be at least one group with the number of observations greater than one.
:::

**Returned value**

- `(f_statistic, p_value)`. [Tuple](../../data-types/tuple.md)([Float64](../../data-types/float.md), [Float64](../../data-types/float.md)).

**Example**

Query:

```sql
SELECT analysisOfVariance(number, number % 2) FROM numbers(1048575);
```

Result:

```response
┌─analysisOfVariance(number, modulo(number, 2))─┐
│ (0,1) │
└───────────────────────────────────────────────┘
```
@ -37,6 +37,7 @@ Standard aggregate functions:

ClickHouse-specific aggregate functions:

- [analysisOfVariance](/docs/en/sql-reference/aggregate-functions/reference/analysis_of_variance.md)
- [any](/docs/en/sql-reference/aggregate-functions/reference/any_respect_nulls.md)
- [anyHeavy](/docs/en/sql-reference/aggregate-functions/reference/anyheavy.md)
- [anyLast](/docs/en/sql-reference/aggregate-functions/reference/anylast.md)
@ -7,7 +7,7 @@ sidebar_position: 201

## quantiles

Syntax: `quantiles(level1, level2, …)(x)`
Syntax: `quantiles(level1, level2, ...)(x)`

All the quantile functions also have corresponding quantiles functions: `quantiles`, `quantilesDeterministic`, `quantilesTiming`, `quantilesTimingWeighted`, `quantilesExact`, `quantilesExactWeighted`, `quantileInterpolatedWeighted`, `quantilesTDigest`, `quantilesBFloat16`, `quantilesDD`. These functions calculate all the quantiles of the listed levels in one pass, and return an array of the resulting values.
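For instance, one pass over the numbers 0–100 can produce several levels at once (an illustrative query):

```sql
-- quantiles returns an Array with one entry per requested level
SELECT quantiles(0.25, 0.5, 0.75)(number) AS q
FROM numbers(101);
-- expected: q = [25, 50, 75]
```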
|
@ -6,9 +6,9 @@ sidebar_label: AggregateFunction
|
||||
|
||||
# AggregateFunction
|
||||
|
||||
Aggregate functions can have an implementation-defined intermediate state that can be serialized to an `AggregateFunction(…)` data type and stored in a table, usually, by means of [a materialized view](../../sql-reference/statements/create/view.md). The common way to produce an aggregate function state is by calling the aggregate function with the `-State` suffix. To get the final result of aggregation in the future, you must use the same aggregate function with the `-Merge`suffix.
|
||||
Aggregate functions can have an implementation-defined intermediate state that can be serialized to an `AggregateFunction(...)` data type and stored in a table, usually, by means of [a materialized view](../../sql-reference/statements/create/view.md). The common way to produce an aggregate function state is by calling the aggregate function with the `-State` suffix. To get the final result of aggregation in the future, you must use the same aggregate function with the `-Merge`suffix.
|
||||
|
||||
`AggregateFunction(name, types_of_arguments…)` — parametric data type.
|
||||
`AggregateFunction(name, types_of_arguments...)` — parametric data type.
|
||||
|
||||
**Parameters**
|
||||
|
||||
|
495
docs/en/sql-reference/data-types/dynamic.md
Normal file
@ -0,0 +1,495 @@

---
slug: /en/sql-reference/data-types/dynamic
sidebar_position: 56
sidebar_label: Dynamic
---

# Dynamic

This type allows storing values of any type inside it without knowing all of them in advance.

To declare a column of `Dynamic` type, use the following syntax:

``` sql
<column_name> Dynamic(max_types=N)
```

Where `N` is an optional parameter between `1` and `255` indicating how many different data types can be stored inside a column with type `Dynamic` within a single block of data that is stored separately (for example, within a single data part for a MergeTree table). If this limit is exceeded, all new types will be converted to type `String`. The default value of `max_types` is `32`.

:::note
The Dynamic data type is an experimental feature. To use it, set `allow_experimental_dynamic_type = 1`.
:::
## Creating Dynamic

Using `Dynamic` type in table column definition:

```sql
CREATE TABLE test (d Dynamic) ENGINE = Memory;
INSERT INTO test VALUES (NULL), (42), ('Hello, World!'), ([1, 2, 3]);
SELECT d, dynamicType(d) FROM test;
```

```text
┌─d─────────────┬─dynamicType(d)─┐
│ ᴺᵁᴸᴸ │ None │
│ 42 │ Int64 │
│ Hello, World! │ String │
│ [1,2,3] │ Array(Int64) │
└───────────────┴────────────────┘
```

Using CAST from ordinary column:

```sql
SELECT 'Hello, World!'::Dynamic as d, dynamicType(d);
```

```text
┌─d─────────────┬─dynamicType(d)─┐
│ Hello, World! │ String │
└───────────────┴────────────────┘
```

Using CAST from `Variant` column:

```sql
SET allow_experimental_variant_type = 1, use_variant_as_common_type = 1;
SELECT multiIf((number % 3) = 0, number, (number % 3) = 1, range(number + 1), NULL)::Dynamic AS d, dynamicType(d) FROM numbers(3)
```

```text
┌─d─────┬─dynamicType(d)─┐
│ 0 │ UInt64 │
│ [0,1] │ Array(UInt64) │
│ ᴺᵁᴸᴸ │ None │
└───────┴────────────────┘
```
## Reading Dynamic nested types as subcolumns

`Dynamic` type supports reading a single nested type from a `Dynamic` column using the type name as a subcolumn.
So, if you have a column `d Dynamic`, you can read a subcolumn of any valid type `T` using the syntax `d.T`.
This subcolumn has type `Nullable(T)` if `T` can be inside `Nullable`, and `T` otherwise. It is the same size
as the original `Dynamic` column and contains `NULL` values (or empty values if `T` cannot be inside `Nullable`)
in all rows in which the original `Dynamic` column doesn't have type `T`.

`Dynamic` subcolumns can also be read using the function `dynamicElement(dynamic_column, type_name)`.

Examples:

```sql
CREATE TABLE test (d Dynamic) ENGINE = Memory;
INSERT INTO test VALUES (NULL), (42), ('Hello, World!'), ([1, 2, 3]);
SELECT d, dynamicType(d), d.String, d.Int64, d.`Array(Int64)`, d.Date, d.`Array(String)` FROM test;
```

```text
┌─d─────────────┬─dynamicType(d)─┬─d.String──────┬─d.Int64─┬─d.Array(Int64)─┬─d.Date─┬─d.Array(String)─┐
│ ᴺᵁᴸᴸ │ None │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ [] │ ᴺᵁᴸᴸ │ [] │
│ 42 │ Int64 │ ᴺᵁᴸᴸ │ 42 │ [] │ ᴺᵁᴸᴸ │ [] │
│ Hello, World! │ String │ Hello, World! │ ᴺᵁᴸᴸ │ [] │ ᴺᵁᴸᴸ │ [] │
│ [1,2,3] │ Array(Int64) │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ [1,2,3] │ ᴺᵁᴸᴸ │ [] │
└───────────────┴────────────────┴───────────────┴─────────┴────────────────┴────────┴─────────────────┘
```

```sql
SELECT toTypeName(d.String), toTypeName(d.Int64), toTypeName(d.`Array(Int64)`), toTypeName(d.Date), toTypeName(d.`Array(String)`) FROM test LIMIT 1;
```

```text
┌─toTypeName(d.String)─┬─toTypeName(d.Int64)─┬─toTypeName(d.Array(Int64))─┬─toTypeName(d.Date)─┬─toTypeName(d.Array(String))─┐
│ Nullable(String) │ Nullable(Int64) │ Array(Int64) │ Nullable(Date) │ Array(String) │
└──────────────────────┴─────────────────────┴────────────────────────────┴────────────────────┴─────────────────────────────┘
```

```sql
SELECT d, dynamicType(d), dynamicElement(d, 'String'), dynamicElement(d, 'Int64'), dynamicElement(d, 'Array(Int64)'), dynamicElement(d, 'Date'), dynamicElement(d, 'Array(String)') FROM test;
```

```text
┌─d─────────────┬─dynamicType(d)─┬─dynamicElement(d, 'String')─┬─dynamicElement(d, 'Int64')─┬─dynamicElement(d, 'Array(Int64)')─┬─dynamicElement(d, 'Date')─┬─dynamicElement(d, 'Array(String)')─┐
│ ᴺᵁᴸᴸ │ None │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ [] │ ᴺᵁᴸᴸ │ [] │
│ 42 │ Int64 │ ᴺᵁᴸᴸ │ 42 │ [] │ ᴺᵁᴸᴸ │ [] │
│ Hello, World! │ String │ Hello, World! │ ᴺᵁᴸᴸ │ [] │ ᴺᵁᴸᴸ │ [] │
│ [1,2,3] │ Array(Int64) │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ [1,2,3] │ ᴺᵁᴸᴸ │ [] │
└───────────────┴────────────────┴─────────────────────────────┴────────────────────────────┴───────────────────────────────────┴───────────────────────────┴────────────────────────────────────┘
```

To find out which type is stored in each row, the function `dynamicType(dynamic_column)` can be used. It returns a `String` with the value type name for each row (or `'None'` if the row is `NULL`).

Example:
```sql
CREATE TABLE test (d Dynamic) ENGINE = Memory;
INSERT INTO test VALUES (NULL), (42), ('Hello, World!'), ([1, 2, 3]);
SELECT dynamicType(d) from test;
```

```text
┌─dynamicType(d)─┐
│ None │
│ Int64 │
│ String │
│ Array(Int64) │
└────────────────┘
```

## Conversion between Dynamic column and other columns

There are 4 possible conversions that can be performed with a `Dynamic` column.
### Converting an ordinary column to a Dynamic column

```sql
SELECT 'Hello, World!'::Dynamic as d, dynamicType(d);
```

```text
┌─d─────────────┬─dynamicType(d)─┐
│ Hello, World! │ String │
└───────────────┴────────────────┘
```

### Converting a String column to a Dynamic column through parsing

To parse `Dynamic` type values from a `String` column you can enable the setting `cast_string_to_dynamic_use_inference`:

```sql
SET cast_string_to_dynamic_use_inference = 1;
SELECT CAST(materialize(map('key1', '42', 'key2', 'true', 'key3', '2020-01-01')), 'Map(String, Dynamic)') as map_of_dynamic, mapApply((k, v) -> (k, dynamicType(v)), map_of_dynamic) as map_of_dynamic_types;
```

```text
┌─map_of_dynamic──────────────────────────────┬─map_of_dynamic_types─────────────────────────┐
│ {'key1':42,'key2':true,'key3':'2020-01-01'} │ {'key1':'Int64','key2':'Bool','key3':'Date'} │
└─────────────────────────────────────────────┴──────────────────────────────────────────────┘
```

### Converting a Dynamic column to an ordinary column

It is possible to convert a `Dynamic` column to an ordinary column. In this case all nested types will be converted to the destination type:

```sql
CREATE TABLE test (d Dynamic) ENGINE = Memory;
INSERT INTO test VALUES (NULL), (42), ('42.42'), (true), ('e10');
SELECT d::Nullable(Float64) FROM test;
```

```text
┌─CAST(d, 'Nullable(Float64)')─┐
│ ᴺᵁᴸᴸ │
│ 42 │
│ 42.42 │
│ 1 │
│ 0 │
└──────────────────────────────┘
```

### Converting a Variant column to Dynamic column

```sql
CREATE TABLE test (v Variant(UInt64, String, Array(UInt64))) ENGINE = Memory;
INSERT INTO test VALUES (NULL), (42), ('String'), ([1, 2, 3]);
SELECT v::Dynamic as d, dynamicType(d) from test;
```

```text
┌─d───────┬─dynamicType(d)─┐
│ ᴺᵁᴸᴸ │ None │
│ 42 │ UInt64 │
│ String │ String │
│ [1,2,3] │ Array(UInt64) │
└─────────┴────────────────┘
```
### Converting a Dynamic(max_types=N) column to another Dynamic(max_types=K)

If `K >= N`, then during conversion the data doesn't change:

```sql
CREATE TABLE test (d Dynamic(max_types=3)) ENGINE = Memory;
INSERT INTO test VALUES (NULL), (42), (43), ('42.42'), (true);
SELECT d::Dynamic(max_types=5) as d2, dynamicType(d2) FROM test;
```

```text
┌─d─────┬─dynamicType(d)─┐
│ ᴺᵁᴸᴸ │ None │
│ 42 │ Int64 │
│ 43 │ Int64 │
│ 42.42 │ String │
│ true │ Bool │
└───────┴────────────────┘
```

If `K < N`, then the values with the rarest types are converted to `String`:

```sql
CREATE TABLE test (d Dynamic(max_types=4)) ENGINE = Memory;
INSERT INTO test VALUES (NULL), (42), (43), ('42.42'), (true), ([1, 2, 3]);
SELECT d, dynamicType(d), d::Dynamic(max_types=2) as d2, dynamicType(d2) FROM test;
```

```text
┌─d───────┬─dynamicType(d)─┬─d2──────┬─dynamicType(d2)─┐
│ ᴺᵁᴸᴸ │ None │ ᴺᵁᴸᴸ │ None │
│ 42 │ Int64 │ 42 │ Int64 │
│ 43 │ Int64 │ 43 │ Int64 │
│ 42.42 │ String │ 42.42 │ String │
│ true │ Bool │ true │ String │
│ [1,2,3] │ Array(Int64) │ [1,2,3] │ String │
└─────────┴────────────────┴─────────┴─────────────────┘
```

If `K=1`, all types are converted to `String`:

```sql
CREATE TABLE test (d Dynamic(max_types=4)) ENGINE = Memory;
INSERT INTO test VALUES (NULL), (42), (43), ('42.42'), (true), ([1, 2, 3]);
SELECT d, dynamicType(d), d::Dynamic(max_types=1) as d2, dynamicType(d2) FROM test;
```

```text
┌─d───────┬─dynamicType(d)─┬─d2──────┬─dynamicType(d2)─┐
│ ᴺᵁᴸᴸ │ None │ ᴺᵁᴸᴸ │ None │
│ 42 │ Int64 │ 42 │ String │
│ 43 │ Int64 │ 43 │ String │
│ 42.42 │ String │ 42.42 │ String │
│ true │ Bool │ true │ String │
│ [1,2,3] │ Array(Int64) │ [1,2,3] │ String │
└─────────┴────────────────┴─────────┴─────────────────┘
```
## Reading Dynamic type from the data

All text formats (TSV, CSV, CustomSeparated, Values, JSONEachRow, etc.) support reading `Dynamic` type. During data parsing ClickHouse tries to infer the type of each value and uses it during insertion into the `Dynamic` column.

Example:

```sql
SELECT
    d,
    dynamicType(d),
    dynamicElement(d, 'String') AS str,
    dynamicElement(d, 'Int64') AS num,
    dynamicElement(d, 'Float64') AS float,
    dynamicElement(d, 'Date') AS date,
    dynamicElement(d, 'Array(Int64)') AS arr
FROM format(JSONEachRow, 'd Dynamic', $$
{"d" : "Hello, World!"},
{"d" : 42},
{"d" : 42.42},
{"d" : "2020-01-01"},
{"d" : [1, 2, 3]}
$$)
```

```text
┌─d─────────────┬─dynamicType(d)─┬─str───────────┬──num─┬─float─┬───────date─┬─arr─────┐
│ Hello, World! │ String │ Hello, World! │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ [] │
│ 42 │ Int64 │ ᴺᵁᴸᴸ │ 42 │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ [] │
│ 42.42 │ Float64 │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ 42.42 │ ᴺᵁᴸᴸ │ [] │
│ 2020-01-01 │ Date │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ 2020-01-01 │ [] │
│ [1,2,3] │ Array(Int64) │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ [1,2,3] │
└───────────────┴────────────────┴───────────────┴──────┴───────┴────────────┴─────────┘
```

## Comparing values of Dynamic type

Values of `Dynamic` types are compared similarly to values of `Variant` type:
The result of operator `<` for values `d1` with underlying type `T1` and `d2` with underlying type `T2` of a type `Dynamic` is defined as follows:
- If `T1 = T2 = T`, the result will be `d1.T < d2.T` (underlying values will be compared).
- If `T1 != T2`, the result will be `T1 < T2` (type names will be compared).
Examples:

```sql
CREATE TABLE test (d1 Dynamic, d2 Dynamic) ENGINE=Memory;
INSERT INTO test VALUES (42, 42), (42, 43), (42, 'abc'), (42, [1, 2, 3]), (42, []), (42, NULL);
```

```sql
SELECT d2, dynamicType(d2) as d2_type from test order by d2;
```

```text
┌─d2──────┬─d2_type──────┐
│ [] │ Array(Int64) │
│ [1,2,3] │ Array(Int64) │
│ 42 │ Int64 │
│ 43 │ Int64 │
│ abc │ String │
│ ᴺᵁᴸᴸ │ None │
└─────────┴──────────────┘
```

```sql
SELECT d1, dynamicType(d1) as d1_type, d2, dynamicType(d2) as d2_type, d1 = d2, d1 < d2, d1 > d2 from test;
```

```text
┌─d1─┬─d1_type─┬─d2──────┬─d2_type──────┬─equals(d1, d2)─┬─less(d1, d2)─┬─greater(d1, d2)─┐
│ 42 │ Int64 │ 42 │ Int64 │ 1 │ 0 │ 0 │
│ 42 │ Int64 │ 43 │ Int64 │ 0 │ 1 │ 0 │
│ 42 │ Int64 │ abc │ String │ 0 │ 1 │ 0 │
│ 42 │ Int64 │ [1,2,3] │ Array(Int64) │ 0 │ 0 │ 1 │
│ 42 │ Int64 │ [] │ Array(Int64) │ 0 │ 0 │ 1 │
│ 42 │ Int64 │ ᴺᵁᴸᴸ │ None │ 0 │ 1 │ 0 │
└────┴─────────┴─────────┴──────────────┴────────────────┴──────────────┴─────────────────┘
```

If you need to find the row with a specific `Dynamic` value, you can do one of the following:

- Cast the value to the `Dynamic` type:

```sql
SELECT * FROM test WHERE d2 == [1,2,3]::Array(UInt32)::Dynamic;
```

```text
┌─d1─┬─d2──────┐
│ 42 │ [1,2,3] │
└────┴─────────┘
```

- Compare a `Dynamic` subcolumn with the required type:
```sql
SELECT * FROM test WHERE d2.`Array(Int64)` == [1,2,3] -- or using dynamicElement(d2, 'Array(Int64)')
```

```text
┌─d1─┬─d2──────┐
│ 42 │ [1,2,3] │
└────┴─────────┘
```

Sometimes it can be useful to make an additional check on the dynamic type, as subcolumns with complex types like `Array/Map/Tuple` cannot be inside `Nullable` and will have default values instead of `NULL` on rows with different types:

```sql
SELECT d2, d2.`Array(Int64)`, dynamicType(d2) FROM test WHERE d2.`Array(Int64)` == [];
```

```text
┌─d2───┬─d2.Array(Int64)─┬─dynamicType(d2)─┐
│ 42 │ [] │ Int64 │
│ 43 │ [] │ Int64 │
│ abc │ [] │ String │
│ [] │ [] │ Array(Int64) │
│ ᴺᵁᴸᴸ │ [] │ None │
└──────┴─────────────────┴─────────────────┘
```

```sql
SELECT d2, d2.`Array(Int64)`, dynamicType(d2) FROM test WHERE dynamicType(d2) == 'Array(Int64)' AND d2.`Array(Int64)` == [];
```

```text
┌─d2─┬─d2.Array(Int64)─┬─dynamicType(d2)─┐
│ [] │ [] │ Array(Int64) │
└────┴─────────────────┴─────────────────┘
```

**Note:** values of dynamic types with different numeric types are considered as different values and are not compared between each other; their type names are compared instead.

Example:

```sql
CREATE TABLE test (d Dynamic) ENGINE=Memory;
INSERT INTO test VALUES (1::UInt32), (1::Int64), (100::UInt32), (100::Int64);
SELECT d, dynamicType(d) FROM test ORDER by d;
```

```text
┌─d───┬─dynamicType(d)─┐
│ 1 │ Int64 │
│ 100 │ Int64 │
│ 1 │ UInt32 │
│ 100 │ UInt32 │
└─────┴────────────────┘
```
## Reaching the limit in number of different data types stored inside Dynamic

`Dynamic` data type can store only a limited number of different data types inside. By default, this limit is 32, but you can change it in the type declaration using the syntax `Dynamic(max_types=N)` where N is between 1 and 255 (due to implementation details, it's impossible to have more than 255 different data types inside Dynamic).
When the limit is reached, all new data types inserted into a `Dynamic` column will be cast to `String` and stored as `String` values.

Let's see what happens when the limit is reached in different scenarios.

### Reaching the limit during data parsing

During parsing of `Dynamic` values from the data, when the limit is reached for the current block of data, all new values will be inserted as `String` values:

```sql
SELECT d, dynamicType(d) FROM format(JSONEachRow, 'd Dynamic(max_types=3)', '
{"d" : 42}
{"d" : [1, 2, 3]}
{"d" : "Hello, World!"}
{"d" : "2020-01-01"}
{"d" : ["str1", "str2", "str3"]}
{"d" : {"a" : 1, "b" : [1, 2, 3]}}
')
```

```text
┌─d──────────────────────────┬─dynamicType(d)─┐
│ 42 │ Int64 │
│ [1,2,3] │ Array(Int64) │
│ Hello, World! │ String │
│ 2020-01-01 │ String │
│ ["str1", "str2", "str3"] │ String │
│ {"a" : 1, "b" : [1, 2, 3]} │ String │
└────────────────────────────┴────────────────┘
```

As we can see, after inserting 3 different data types `Int64`, `Array(Int64)` and `String`, all new types were converted to `String`.
### During merges of data parts in MergeTree table engines

During a merge of several data parts in a MergeTree table, the `Dynamic` column in the resulting data part can reach the limit of different data types and won't be able to store all types from the source parts.
In this case ClickHouse chooses which types will remain after the merge and which types will be cast to `String`. In most cases ClickHouse tries to keep the most frequent types and casts the rarest types to `String`, but it depends on the implementation.

Let's see an example of such a merge. First, let's create a table with a `Dynamic` column, set the limit of different data types to `3` and insert values with `5` different types:

```sql
CREATE TABLE test (id UInt64, d Dynamic(max_types=3)) engine=MergeTree ORDER BY id;
SYSTEM STOP MERGES test;
INSERT INTO test SELECT number, number FROM numbers(5);
INSERT INTO test SELECT number, range(number) FROM numbers(4);
INSERT INTO test SELECT number, toDate(number) FROM numbers(3);
INSERT INTO test SELECT number, map(number, number) FROM numbers(2);
INSERT INTO test SELECT number, 'str_' || toString(number) FROM numbers(1);
```

Each insert will create a separate data part with a `Dynamic` column containing a single type:

```sql
SELECT count(), dynamicType(d), _part FROM test GROUP BY _part, dynamicType(d) ORDER BY _part;
```

```text
┌─count()─┬─dynamicType(d)──────┬─_part─────┐
│ 5 │ UInt64 │ all_1_1_0 │
│ 4 │ Array(UInt64) │ all_2_2_0 │
│ 3 │ Date │ all_3_3_0 │
│ 2 │ Map(UInt64, UInt64) │ all_4_4_0 │
│ 1 │ String │ all_5_5_0 │
└─────────┴─────────────────────┴───────────┘
```

Now, let's merge all parts into one and see what happens:

```sql
SYSTEM START MERGES test;
OPTIMIZE TABLE test FINAL;
SELECT count(), dynamicType(d), _part FROM test GROUP BY _part, dynamicType(d) ORDER BY _part;
```

```text
┌─count()─┬─dynamicType(d)─┬─_part─────┐
│ 5 │ UInt64 │ all_1_5_2 │
│ 6 │ String │ all_1_5_2 │
│ 4 │ Array(UInt64) │ all_1_5_2 │
└─────────┴────────────────┴───────────┘
```

As we can see, ClickHouse kept the most frequent types `UInt64` and `Array(UInt64)` and cast all other types to `String`.
@ -21,8 +21,8 @@ The `FixedString` type is efficient when data has the length of precisely `N` by

Examples of the values that can be efficiently stored in `FixedString`-typed columns:

- The binary representation of IP addresses (`FixedString(16)` for IPv6).
- Language codes (ru_RU, en_US … ).
- Currency codes (USD, RUB … ).
- Language codes (ru_RU, en_US ... ).
- Currency codes (USD, RUB ... ).
- Binary representation of hashes (`FixedString(16)` for MD5, `FixedString(32)` for SHA256).

To store UUID values, use the [UUID](../../sql-reference/data-types/uuid.md) data type.
@ -6,7 +6,7 @@ sidebar_label: Nested(Name1 Type1, Name2 Type2, ...)

# Nested

## Nested(name1 Type1, Name2 Type2, …)
## Nested(name1 Type1, Name2 Type2, ...)

A nested data structure is like a table inside a cell. The parameters of a nested data structure – the column names and types – are specified the same way as in a [CREATE TABLE](../../../sql-reference/statements/create/table.md) query. Each table row can correspond to any number of rows in a nested data structure.
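As a quick sketch of the declaration and access pattern (the table and column names are made up for the example):

```sql
CREATE TABLE visits_example
(
    CounterID UInt32,
    Goals Nested(ID UInt32, EventTime DateTime)
) ENGINE = MergeTree ORDER BY CounterID;

-- each nested column behaves as an array of the same length per row
SELECT Goals.ID, Goals.EventTime FROM visits_example;
```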
@ -5,7 +5,7 @@ sidebar_label: SimpleAggregateFunction
---

# SimpleAggregateFunction

`SimpleAggregateFunction(name, types_of_arguments…)` data type stores current value of the aggregate function, and does not store its full state as [`AggregateFunction`](../../sql-reference/data-types/aggregatefunction.md) does. This optimization can be applied to functions for which the following property holds: the result of applying a function `f` to a row set `S1 UNION ALL S2` can be obtained by applying `f` to parts of the row set separately, and then again applying `f` to the results: `f(S1 UNION ALL S2) = f(f(S1) UNION ALL f(S2))`. This property guarantees that partial aggregation results are enough to compute the combined one, so we do not have to store and process any extra data.
`SimpleAggregateFunction(name, types_of_arguments...)` data type stores current value of the aggregate function, and does not store its full state as [`AggregateFunction`](../../sql-reference/data-types/aggregatefunction.md) does. This optimization can be applied to functions for which the following property holds: the result of applying a function `f` to a row set `S1 UNION ALL S2` can be obtained by applying `f` to parts of the row set separately, and then again applying `f` to the results: `f(S1 UNION ALL S2) = f(f(S1) UNION ALL f(S2))`. This property guarantees that partial aggregation results are enough to compute the combined one, so we do not have to store and process any extra data.

The common way to produce an aggregate function value is by calling the aggregate function with the [-SimpleState](../../sql-reference/aggregate-functions/combinators.md#agg-functions-combinator-simplestate) suffix.
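A minimal sketch of producing such a value with the `-SimpleState` combinator (illustrative):

```sql
SELECT toTypeName(sumSimpleState(number)) AS t FROM numbers(5);
-- expected: t = SimpleAggregateFunction(sum, UInt64)
```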
@ -140,6 +140,70 @@ Same as `intDiv` but returns zero when dividing by zero or when dividing a minim

intDivOrZero(a, b)
```
## isFinite

Returns 1 if the Float32 or Float64 argument is not infinite and not a NaN, otherwise this function returns 0.

**Syntax**

```sql
isFinite(x)
```
## isInfinite

Returns 1 if the Float32 or Float64 argument is infinite, otherwise this function returns 0. Note that 0 is returned for a NaN.

**Syntax**

```sql
isInfinite(x)
```
## ifNotFinite

Checks whether a floating point value is finite.

**Syntax**

```sql
ifNotFinite(x,y)
```

**Arguments**

- `x` — Value to check for infinity. [Float\*](../../sql-reference/data-types/float.md).
- `y` — Fallback value. [Float\*](../../sql-reference/data-types/float.md).

**Returned value**

- `x` if `x` is finite.
- `y` if `x` is not finite.

**Example**

Query:

    SELECT 1/0 as infimum, ifNotFinite(infimum,42)

Result:

    ┌─infimum─┬─ifNotFinite(divide(1, 0), 42)─┐
    │ inf │ 42 │
    └─────────┴───────────────────────────────┘

You can get a similar result by using the [ternary operator](../../sql-reference/functions/conditional-functions.md#ternary-operator): `isFinite(x) ? x : y`.

## isNaN

Returns 1 if the Float32 or Float64 argument is NaN, otherwise this function returns 0.

**Syntax**

```sql
isNaN(x)
```
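A compact side-by-side illustration of the three checks (expected results shown as a comment; illustrative):

```sql
SELECT isFinite(1.0), isInfinite(1 / 0), isNaN(0 / 0);
-- 1, 1, 1  (1/0 evaluates to +inf, 0/0 to nan)
```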
## modulo

Calculates the remainder of dividing `a` by `b`.
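For example (illustrative):

```sql
SELECT modulo(7, 3), 7 % 3;
-- 1, 1  (the % operator is shorthand for modulo)
```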
@ -561,7 +561,7 @@ Result:

└─────────────┴─────────────┴────────────────┴─────────────────┘
```

## array(x1, …), operator \[x1, …\]
## array(x1, ...), operator \[x1, ...\]

Creates an array from the function arguments.
The arguments must be constants and have types for which a smallest common type exists. At least one argument must be passed, because otherwise it isn’t clear which type of array to create. That is, you can’t use this function to create an empty array (to do that, use the ‘emptyArray\*’ function described above).
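A small illustration of the common-type rule (illustrative):

```sql
SELECT array(1, 2.5) AS arr, toTypeName(arr);
-- [1,2.5]  Array(Float64): UInt8 and Float64 widen to Float64
```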
@ -768,9 +768,9 @@ SELECT indexOf([1, 3, NULL, NULL], NULL)

Elements set to `NULL` are handled as normal values.

## arrayCount(\[func,\] arr1, …)
## arrayCount(\[func,\] arr1, ...)

Returns the number of elements for which `func(arr1[i], …, arrN[i])` returns something other than 0. If `func` is not specified, it returns the number of non-zero elements in the array.
Returns the number of elements for which `func(arr1[i], ..., arrN[i])` returns something other than 0. If `func` is not specified, it returns the number of non-zero elements in the array.

Note that the `arrayCount` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.
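For instance (illustrative):

```sql
SELECT arrayCount(x -> x % 2, [1, 2, 3, 4, 5]), arrayCount([0, 1, 2, 0]);
-- 3 (odd elements), 2 (non-zero elements)
```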
@ -847,7 +847,7 @@ SELECT countEqual([1, 2, NULL, NULL], NULL)

## arrayEnumerate(arr)

Returns the array \[1, 2, 3, …, length (arr) \]
Returns the array \[1, 2, 3, ..., length (arr) \]

This function is normally used with ARRAY JOIN. It allows counting something just once for each array after applying ARRAY JOIN. Example:

@ -887,7 +887,7 @@ WHERE (CounterID = 160656) AND notEmpty(GoalsReached)

This function can also be used in higher-order functions. For example, you can use it to get array indexes for elements that match a condition.

## arrayEnumerateUniq(arr, …)
## arrayEnumerateUniq(arr, ...)

Returns an array the same size as the source array, indicating for each element what its position is among elements with the same value.
For example: arrayEnumerateUniq(\[10, 20, 10, 30\]) = \[1, 1, 2, 1\].
@ -1206,7 +1206,7 @@ Result:

└───────────────────┘
```

## arraySort(\[func,\] arr, …) {#sort}
## arraySort(\[func,\] arr, ...) {#sort}

Sorts the elements of the `arr` array in ascending order. If the `func` function is specified, sorting order is determined by the result of the `func` function applied to the elements of the array. If `func` accepts multiple arguments, the `arraySort` function is passed several arrays that the arguments of `func` will correspond to. Detailed examples are shown at the end of `arraySort` description.

@ -1307,11 +1307,11 @@ SELECT arraySort((x, y) -> -y, [0, 1, 2], [1, 2, 3]) as res;

To improve sorting efficiency, the [Schwartzian transform](https://en.wikipedia.org/wiki/Schwartzian_transform) is used.
:::

## arrayPartialSort(\[func,\] limit, arr, …)
## arrayPartialSort(\[func,\] limit, arr, ...)

Same as `arraySort` with additional `limit` argument allowing partial sorting. Returns an array of the same size as the original array where elements in range `[1..limit]` are sorted in ascending order. Remaining elements `(limit..N]` shall contain elements in unspecified order.
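A quick sketch of the partial guarantee; only the first `limit` elements are ordered (illustrative):

```sql
SELECT arrayPartialSort(3, [5, 9, 1, 3, 2]) AS res;
-- res begins with [1, 2, 3]; the remaining elements may appear in any order
```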
## arrayReverseSort(\[func,\] arr, …) {#reverse-sort}
## arrayReverseSort(\[func,\] arr, ...) {#reverse-sort}

Sorts the elements of the `arr` array in descending order. If the `func` function is specified, `arr` is sorted according to the result of the `func` function applied to the elements of the array, and then the sorted array is reversed. If `func` accepts multiple arguments, the `arrayReverseSort` function is passed several arrays that the arguments of `func` will correspond to. Detailed examples are shown at the end of `arrayReverseSort` description.

@ -1412,7 +1412,7 @@ SELECT arrayReverseSort((x, y) -> -y, [4, 3, 5], [1, 2, 3]) AS res;

└─────────┘
```

## arrayPartialReverseSort(\[func,\] limit, arr, …)
## arrayPartialReverseSort(\[func,\] limit, arr, ...)

Same as `arrayReverseSort` with additional `limit` argument allowing partial sorting. Returns an array of the same size as the original array where elements in range `[1..limit]` are sorted in descending order. Remaining elements `(limit..N]` shall contain elements in unspecified order.

@ -1535,7 +1535,7 @@ Result:

[3,9,1,4,5,6,7,8,2,10]
```

## arrayUniq(arr, …)
## arrayUniq(arr, ...)

If one argument is passed, it counts the number of different elements in the array.
If multiple arguments are passed, it counts the number of different tuples of elements at corresponding positions in multiple arrays.
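For example (illustrative):

```sql
SELECT arrayUniq([1, 2, 2, 3]), arrayUniq([1, 1, 2], [1, 2, 2]);
-- 3, 3  (the second form counts the distinct tuples (1,1), (1,2), (2,2))
```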
@ -2079,9 +2079,9 @@ Result:

└───────────────────────────────────────────────┘
```

## arrayMap(func, arr1, …)
## arrayMap(func, arr1, ...)

Returns an array obtained from the original arrays by application of `func(arr1[i], …, arrN[i])` for each element. Arrays `arr1` … `arrN` must have the same number of elements.
Returns an array obtained from the original arrays by application of `func(arr1[i], ..., arrN[i])` for each element. Arrays `arr1` ... `arrN` must have the same number of elements.

Examples:

@ -2109,9 +2109,9 @@ SELECT arrayMap((x, y) -> (x, y), [1, 2, 3], [4, 5, 6]) AS res

Note that the `arrayMap` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.

## arrayFilter(func, arr1, …)
## arrayFilter(func, arr1, ...)

Returns an array containing only the elements in `arr1` for which `func(arr1[i], …, arrN[i])` returns something other than 0.
Returns an array containing only the elements in `arr1` for which `func(arr1[i], ..., arrN[i])` returns something other than 0.

Examples:

@ -2142,9 +2142,9 @@ SELECT

Note that the `arrayFilter` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.

## arrayFill(func, arr1, …)
## arrayFill(func, arr1, ...)

Scan through `arr1` from the first element to the last element and replace `arr1[i]` by `arr1[i - 1]` if `func(arr1[i], …, arrN[i])` returns 0. The first element of `arr1` will not be replaced.
Scan through `arr1` from the first element to the last element and replace `arr1[i]` by `arr1[i - 1]` if `func(arr1[i], ..., arrN[i])` returns 0. The first element of `arr1` will not be replaced.

Examples:

@ -2160,9 +2160,9 @@ SELECT arrayFill(x -> not isNull(x), [1, null, 3, 11, 12, null, null, 5, 6, 14,

Note that the `arrayFill` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.

## arrayReverseFill(func, arr1, …)
## arrayReverseFill(func, arr1, ...)

Scan through `arr1` from the last element to the first element and replace `arr1[i]` by `arr1[i + 1]` if `func(arr1[i], …, arrN[i])` returns 0. The last element of `arr1` will not be replaced.
Scan through `arr1` from the last element to the first element and replace `arr1[i]` by `arr1[i + 1]` if `func(arr1[i], ..., arrN[i])` returns 0. The last element of `arr1` will not be replaced.

Examples:

@ -2178,9 +2178,9 @@ SELECT arrayReverseFill(x -> not isNull(x), [1, null, 3, 11, 12, null, null, 5,

Note that the `arrayReverseFill` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.

## arraySplit(func, arr1, …)
## arraySplit(func, arr1, ...)

Split `arr1` into multiple arrays. When `func(arr1[i], …, arrN[i])` returns something other than 0, the array will be split on the left hand side of the element. The array will not be split before the first element.
Split `arr1` into multiple arrays. When `func(arr1[i], ..., arrN[i])` returns something other than 0, the array will be split on the left hand side of the element. The array will not be split before the first element.

Examples:

@ -2196,9 +2196,9 @@ SELECT arraySplit((x, y) -> y, [1, 2, 3, 4, 5], [1, 0, 0, 1, 0]) AS res

Note that the `arraySplit` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.

## arrayReverseSplit(func, arr1, …)
## arrayReverseSplit(func, arr1, ...)

Split `arr1` into multiple arrays. When `func(arr1[i], …, arrN[i])` returns something other than 0, the array will be split on the right hand side of the element. The array will not be split after the last element.
Split `arr1` into multiple arrays. When `func(arr1[i], ..., arrN[i])` returns something other than 0, the array will be split on the right hand side of the element. The array will not be split after the last element.

Examples:

@ -2214,30 +2214,30 @@ SELECT arrayReverseSplit((x, y) -> y, [1, 2, 3, 4, 5], [1, 0, 0, 1, 0]) AS res

Note that the `arrayReverseSplit` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.

## arrayExists(\[func,\] arr1, …)
## arrayExists(\[func,\] arr1, ...)

Returns 1 if there is at least one element in `arr` for which `func(arr1[i], …, arrN[i])` returns something other than 0. Otherwise, it returns 0.
Returns 1 if there is at least one element in `arr` for which `func(arr1[i], ..., arrN[i])` returns something other than 0. Otherwise, it returns 0.

Note that the `arrayExists` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.

## arrayAll(\[func,\] arr1, …)
## arrayAll(\[func,\] arr1, ...)

Returns 1 if `func(arr1[i], …, arrN[i])` returns something other than 0 for all the elements in arrays. Otherwise, it returns 0.
Returns 1 if `func(arr1[i], ..., arrN[i])` returns something other than 0 for all the elements in arrays. Otherwise, it returns 0.

Note that the `arrayAll` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.

## arrayFirst(func, arr1, …)
## arrayFirst(func, arr1, ...)

Returns the first element in the `arr1` array for which `func(arr1[i], …, arrN[i])` returns something other than 0.
Returns the first element in the `arr1` array for which `func(arr1[i], ..., arrN[i])` returns something other than 0.
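For instance (illustrative):

```sql
SELECT arrayFirst(x -> x > 2, [1, 2, 3, 4]);
-- 3, the first element for which the lambda returns non-zero
```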
## arrayFirstOrNull

Returns the first element in the `arr1` array for which `func(arr1[i], …, arrN[i])` returns something other than 0, otherwise it returns `NULL`.
Returns the first element in the `arr1` array for which `func(arr1[i], ..., arrN[i])` returns something other than 0, otherwise it returns `NULL`.

**Syntax**

```sql
arrayFirstOrNull(func, arr1, …)
arrayFirstOrNull(func, arr1, ...)
```

**Parameters**

@ -2292,20 +2292,20 @@ Result:

\N
```

## arrayLast(func, arr1, …)
## arrayLast(func, arr1, ...)

Returns the last element in the `arr1` array for which `func(arr1[i], …, arrN[i])` returns something other than 0.
Returns the last element in the `arr1` array for which `func(arr1[i], ..., arrN[i])` returns something other than 0.

Note that the `arrayLast` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.

## arrayLastOrNull

Returns the last element in the `arr1` array for which `func(arr1[i], …, arrN[i])` returns something other than 0, otherwise returns `NULL`.
Returns the last element in the `arr1` array for which `func(arr1[i], ..., arrN[i])` returns something other than 0, otherwise returns `NULL`.

**Syntax**

```sql
arrayLastOrNull(func, arr1, …)
arrayLastOrNull(func, arr1, ...)
```

**Parameters**
@ -2348,15 +2348,15 @@ Result:

\N
```

## arrayFirstIndex(func, arr1, …)
## arrayFirstIndex(func, arr1, ...)

Returns the index of the first element in the `arr1` array for which `func(arr1[i], …, arrN[i])` returns something other than 0.
Returns the index of the first element in the `arr1` array for which `func(arr1[i], ..., arrN[i])` returns something other than 0.

Note that the `arrayFirstIndex` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.

## arrayLastIndex(func, arr1, …)
## arrayLastIndex(func, arr1, ...)

Returns the index of the last element in the `arr1` array for which `func(arr1[i], …, arrN[i])` returns something other than 0.
Returns the index of the last element in the `arr1` array for which `func(arr1[i], ..., arrN[i])` returns something other than 0.

Note that the `arrayLastIndex` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.

@ -2580,9 +2580,9 @@ Result:

└─────┘
```

## arrayCumSum(\[func,\] arr1, …)
## arrayCumSum(\[func,\] arr1, ...)

Returns an array of the partial (running) sums of the elements in the source array `arr1`. If `func` is specified, then the sum is computed from applying `func` to `arr1`, `arr2`, ..., `arrN`, i.e. `func(arr1[i], …, arrN[i])`.
Returns an array of the partial (running) sums of the elements in the source array `arr1`. If `func` is specified, then the sum is computed from applying `func` to `arr1`, `arr2`, ..., `arrN`, i.e. `func(arr1[i], ..., arrN[i])`.

**Syntax**

@ -2614,9 +2614,9 @@ SELECT arrayCumSum([1, 1, 1, 1]) AS res

Note that the `arrayCumSum` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.

## arrayCumSumNonNegative(\[func,\] arr1, …)
## arrayCumSumNonNegative(\[func,\] arr1, ...)

Same as `arrayCumSum`, returns an array of the partial (running) sums of the elements in the source array. If `func` is specified, then the sum is computed from applying `func` to `arr1`, `arr2`, ..., `arrN`, i.e. `func(arr1[i], …, arrN[i])`. Unlike `arrayCumSum`, if the current running sum is smaller than `0`, it is replaced by `0`.
Same as `arrayCumSum`, returns an array of the partial (running) sums of the elements in the source array. If `func` is specified, then the sum is computed from applying `func` to `arr1`, `arr2`, ..., `arrN`, i.e. `func(arr1[i], ..., arrN[i])`. Unlike `arrayCumSum`, if the current running sum is smaller than `0`, it is replaced by `0`.

**Syntax**
@ -1499,7 +1499,7 @@ This function returns the week number for date or datetime. The two-argument for

The following table describes how the mode argument works.

| Mode | First day of week | Range | Week 1 is the first week … |
| Mode | First day of week | Range | Week 1 is the first week ... |
|------|-------------------|-------|-------------------------------|
| 0    | Sunday            | 0-53  | with a Sunday in this year    |
| 1    | Monday            | 0-53  | with 4 or more days this year |
@ -386,7 +386,7 @@ SELECT isValidJSON('{"a": "hello", "b": [-100, 200.0, 300]}') = 1

SELECT isValidJSON('not a json') = 0
```

## JSONHas(json\[, indices_or_keys\]…)
## JSONHas(json\[, indices_or_keys\]...)

If the value exists in the JSON document, `1` will be returned.

@ -419,7 +419,7 @@ SELECT JSONExtractKey('{"a": "hello", "b": [-100, 200.0, 300]}', -2) = 'a'

SELECT JSONExtractString('{"a": "hello", "b": [-100, 200.0, 300]}', 1) = 'hello'
```

## JSONLength(json\[, indices_or_keys\]…)
## JSONLength(json\[, indices_or_keys\]...)

Return the length of a JSON array or a JSON object.

@ -432,7 +432,7 @@ SELECT JSONLength('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = 3

SELECT JSONLength('{"a": "hello", "b": [-100, 200.0, 300]}') = 2
```

## JSONType(json\[, indices_or_keys\]…)
## JSONType(json\[, indices_or_keys\]...)

Return the type of a JSON value.

@ -446,13 +446,13 @@ SELECT JSONType('{"a": "hello", "b": [-100, 200.0, 300]}', 'a') = 'String'

SELECT JSONType('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = 'Array'
```

## JSONExtractUInt(json\[, indices_or_keys\]…)
## JSONExtractUInt(json\[, indices_or_keys\]...)

## JSONExtractInt(json\[, indices_or_keys\]…)
## JSONExtractInt(json\[, indices_or_keys\]...)

## JSONExtractFloat(json\[, indices_or_keys\]…)
## JSONExtractFloat(json\[, indices_or_keys\]...)

## JSONExtractBool(json\[, indices_or_keys\]…)
## JSONExtractBool(json\[, indices_or_keys\]...)

Parses a JSON and extracts a value. These functions are similar to `visitParam` functions.

@ -466,7 +466,7 @@ SELECT JSONExtractFloat('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', 2) = 200

SELECT JSONExtractUInt('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', -1) = 300
```

## JSONExtractString(json\[, indices_or_keys\]…)
## JSONExtractString(json\[, indices_or_keys\]...)

Parses a JSON and extracts a string. This function is similar to `visitParamExtractString` functions.

@ -484,7 +484,7 @@ SELECT JSONExtractString('{"abc":"\\u263"}', 'abc') = ''

SELECT JSONExtractString('{"abc":"hello}', 'abc') = ''
```

## JSONExtract(json\[, indices_or_keys…\], Return_type)
## JSONExtract(json\[, indices_or_keys...\], Return_type)

Parses a JSON and extracts a value of the given ClickHouse data type.

@ -506,7 +506,7 @@ SELECT JSONExtract('{"day": "Thursday"}', 'day', 'Enum8(\'Sunday\' = 0, \'Monday

SELECT JSONExtract('{"day": 5}', 'day', 'Enum8(\'Sunday\' = 0, \'Monday\' = 1, \'Tuesday\' = 2, \'Wednesday\' = 3, \'Thursday\' = 4, \'Friday\' = 5, \'Saturday\' = 6)') = 'Friday'
```

## JSONExtractKeysAndValues(json\[, indices_or_keys…\], Value_type)
## JSONExtractKeysAndValues(json\[, indices_or_keys...\], Value_type)

Parses key-value pairs from a JSON where the values are of the given ClickHouse data type.

@ -554,7 +554,7 @@ text

└────────────────────────────────────────────────────────────┘
```

## JSONExtractRaw(json\[, indices_or_keys\]…)
## JSONExtractRaw(json\[, indices_or_keys\]...)

Returns a part of JSON as an unparsed string.

@ -566,7 +566,7 @@ Example:

SELECT JSONExtractRaw('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = '[-100, 200.0, 300]';
```

## JSONExtractArrayRaw(json\[, indices_or_keys…\])
## JSONExtractArrayRaw(json\[, indices_or_keys...\])

Returns an array with elements of a JSON array, each represented as an unparsed string.
@ -947,3 +947,49 @@ Result:

│ 11 │
└──────────────────────────────────┘
```

## proportionsZTest

Returns test statistics for the two-proportion Z-test, a statistical test for comparing the proportions from two populations `x` and `y`.

**Syntax**

```sql
proportionsZTest(successes_x, successes_y, trials_x, trials_y, conf_level, pool_type)
```

**Arguments**

- `successes_x`: Number of successes in population `x`. [UInt64](../data-types/int-uint.md).
- `successes_y`: Number of successes in population `y`. [UInt64](../data-types/int-uint.md).
- `trials_x`: Number of trials in population `x`. [UInt64](../data-types/int-uint.md).
- `trials_y`: Number of trials in population `y`. [UInt64](../data-types/int-uint.md).
- `conf_level`: Confidence level for the test. [Float64](../data-types/float.md).
- `pool_type`: Selection of pooling (way in which the standard error is estimated). Can be either `unpooled` or `pooled`. [String](../data-types/string.md).

:::note
For argument `pool_type`: In the pooled version, the two proportions are averaged, and only one proportion is used to estimate the standard error. In the unpooled version, the two proportions are used separately.
:::

**Returned value**

- `z_stat`: Z statistic. [Float64](../data-types/float.md).
- `p_val`: P value. [Float64](../data-types/float.md).
- `ci_low`: The lower confidence interval. [Float64](../data-types/float.md).
- `ci_high`: The upper confidence interval. [Float64](../data-types/float.md).

**Example**

Query:

```sql
SELECT proportionsZTest(10, 11, 100, 101, 0.95, 'unpooled');
```

Result:

```response
┌─proportionsZTest(10, 11, 100, 101, 0.95, 'unpooled')───────────────────────────────┐
│ (-0.20656724435948853,0.8363478437079654,-0.09345975390115283,0.07563797172293502) │
└────────────────────────────────────────────────────────────────────────────────────┘
```
File diff suppressed because it is too large
Load Diff
@ -139,7 +139,7 @@ Format the `pattern` string with the values (strings, integers, etc.) listed in

**Syntax**

```sql
format(pattern, s0, s1, …)
format(pattern, s0, s1, ...)
```

**Example**
@ -799,7 +799,7 @@ If you only want to search multiple substrings in a string, you can use function

**Syntax**

```sql
multiMatchAny(haystack, \[pattern<sub>1</sub>, pattern<sub>2</sub>, …, pattern<sub>n</sub>\])
multiMatchAny(haystack, \[pattern<sub>1</sub>, pattern<sub>2</sub>, ..., pattern<sub>n</sub>\])
```
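A tiny illustration; note that matching is case-sensitive (illustrative):

```sql
SELECT multiMatchAny('Hello, world!', ['hello', 'world']);
-- 1  ('world' matches even though 'hello' does not)
```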
## multiMatchAnyIndex
|
||||
@ -809,7 +809,7 @@ Like `multiMatchAny` but returns any index that matches the haystack.
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
multiMatchAnyIndex(haystack, \[pattern<sub>1</sub>, pattern<sub>2</sub>, …, pattern<sub>n</sub>\])
|
||||
multiMatchAnyIndex(haystack, \[pattern<sub>1</sub>, pattern<sub>2</sub>, ..., pattern<sub>n</sub>\])
|
||||
```
|
||||
|
||||
## multiMatchAllIndices
|
||||
@ -819,7 +819,7 @@ Like `multiMatchAny` but returns the array of all indices that match the haystac
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
multiMatchAllIndices(haystack, \[pattern<sub>1</sub>, pattern<sub>2</sub>, …, pattern<sub>n</sub>\])
|
||||
multiMatchAllIndices(haystack, \[pattern<sub>1</sub>, pattern<sub>2</sub>, ..., pattern<sub>n</sub>\])
|
||||
```
|
||||
|
||||
## multiFuzzyMatchAny
|
||||
@ -833,7 +833,7 @@ Like `multiMatchAny` but returns 1 if any pattern matches the haystack within a
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
multiFuzzyMatchAny(haystack, distance, \[pattern<sub>1</sub>, pattern<sub>2</sub>, …, pattern<sub>n</sub>\])
|
||||
multiFuzzyMatchAny(haystack, distance, \[pattern<sub>1</sub>, pattern<sub>2</sub>, ..., pattern<sub>n</sub>\])
|
||||
```
## multiFuzzyMatchAnyIndex

@ -843,7 +843,7 @@ Like `multiFuzzyMatchAny` but returns any index that matches the haystack within

**Syntax**

```sql
multiFuzzyMatchAnyIndex(haystack, distance, \[pattern<sub>1</sub>, pattern<sub>2</sub>, …, pattern<sub>n</sub>\])
multiFuzzyMatchAnyIndex(haystack, distance, \[pattern<sub>1</sub>, pattern<sub>2</sub>, ..., pattern<sub>n</sub>\])
```

## multiFuzzyMatchAllIndices

@ -853,7 +853,7 @@ Like `multiFuzzyMatchAny` but returns the array of all indices in any order that

**Syntax**

```sql
multiFuzzyMatchAllIndices(haystack, distance, \[pattern<sub>1</sub>, pattern<sub>2</sub>, …, pattern<sub>n</sub>\])
multiFuzzyMatchAllIndices(haystack, distance, \[pattern<sub>1</sub>, pattern<sub>2</sub>, ..., pattern<sub>n</sub>\])
```
## extract

@ -7,15 +7,15 @@ sidebar_label: Tuples

## tuple

A function that allows grouping multiple columns.
For columns with the types T1, T2, …, it returns a Tuple(T1, T2, …) type tuple containing these columns. There is no cost to execute the function.
For columns with the types T1, T2, ..., it returns a Tuple(T1, T2, ...) type tuple containing these columns. There is no cost to execute the function.
Tuples are normally used as intermediate values for an argument of IN operators, or for creating a list of formal parameters of lambda functions. Tuples can’t be written to a table.

The function implements the operator `(x, y, …)`.
The function implements the operator `(x, y, ...)`.

**Syntax**

``` sql
tuple(x, y, …)
tuple(x, y, ...)
```
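For example (the expected result is the tuple `(1,'a')` with type `Tuple(UInt8, String)`):

```sql
SELECT tuple(1, 'a') AS t, toTypeName(t);
```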
## tupleElement

@ -589,7 +589,7 @@ mapApply(func, map)

**Returned value**

- Returns a map obtained from the original map by application of `func(map1[i], …, mapN[i])` for each element.
- Returns a map obtained from the original map by application of `func(map1[i], ..., mapN[i])` for each element.

**Example**
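As a short sketch, doubling every value of a map (the expected result is `{'a':2,'b':4}`):

```sql
SELECT mapApply((k, v) -> (k, v * 2), map('a', 1, 'b', 2)) AS m;
```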
@ -629,7 +629,7 @@ mapFilter(func, map)

**Returned value**

- Returns a map containing only the elements in `map` for which `func(map1[i], …, mapN[i])` returns something other than 0.
- Returns a map containing only the elements in `map` for which `func(map1[i], ..., mapN[i])` returns something other than 0.

**Example**
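As a short sketch, keeping only the entries with even values (the expected result is `{10:0,30:2}`):

```sql
SELECT mapFilter((k, v) -> ((v % 2) = 0), map(10, 0, 20, 1, 30, 2)) AS m;
```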
@ -16,7 +16,7 @@ If the relevant part isn’t present in a URL, an empty string is returned.

Extracts the protocol from a URL.

Examples of typical returned values: http, https, ftp, mailto, tel, magnet…
Examples of typical returned values: http, https, ftp, mailto, tel, magnet...
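For instance (the expected result is `https`):

```sql
SELECT protocol('https://clickhouse.com/docs');
```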
### domain

@ -4,7 +4,7 @@ sidebar_position: 51
sidebar_label: COMMENT
---

# ALTER TABLE … MODIFY COMMENT
# ALTER TABLE ... MODIFY COMMENT

Adds, modifies, or removes a comment on the table, regardless of whether it was set before. The comment change is reflected in both [system.tables](../../../operations/system-tables/tables.md) and the `SHOW CREATE TABLE` query.
@ -4,7 +4,7 @@ sidebar_position: 39
sidebar_label: DELETE
---

# ALTER TABLE … DELETE Statement
# ALTER TABLE ... DELETE Statement

``` sql
ALTER TABLE [db.]table [ON CLUSTER cluster] DELETE WHERE filter_expr
```

@ -42,7 +42,7 @@ These `ALTER` statements modify entities related to role-based access control:

## Mutations

`ALTER` queries that are intended to manipulate table data are implemented with a mechanism called “mutations”, most notably [ALTER TABLE … DELETE](/docs/en/sql-reference/statements/alter/delete.md) and [ALTER TABLE … UPDATE](/docs/en/sql-reference/statements/alter/update.md). They are asynchronous background processes, similar to merges in [MergeTree](/docs/en/engines/table-engines/mergetree-family/index.md) tables, that produce new “mutated” versions of parts.
`ALTER` queries that are intended to manipulate table data are implemented with a mechanism called “mutations”, most notably [ALTER TABLE ... DELETE](/docs/en/sql-reference/statements/alter/delete.md) and [ALTER TABLE ... UPDATE](/docs/en/sql-reference/statements/alter/update.md). They are asynchronous background processes, similar to merges in [MergeTree](/docs/en/engines/table-engines/mergetree-family/index.md) tables, that produce new “mutated” versions of parts.

For `*MergeTree` tables mutations execute by **rewriting whole data parts**. There is no atomicity: parts are substituted for mutated parts as soon as they are ready, and a `SELECT` query that started executing during a mutation will see data from parts that have already been mutated along with data from parts that have not been mutated yet.
@ -4,7 +4,7 @@ sidebar_position: 40
sidebar_label: UPDATE
---

# ALTER TABLE … UPDATE Statements
# ALTER TABLE ... UPDATE Statements

``` sql
ALTER TABLE [db.]table [ON CLUSTER cluster] UPDATE column1 = expr1 [, ...] [IN PARTITION partition_id] WHERE filter_expr
```
@ -4,9 +4,9 @@ sidebar_position: 50
sidebar_label: VIEW
---

# ALTER TABLE … MODIFY QUERY Statement
# ALTER TABLE ... MODIFY QUERY Statement

You can modify the `SELECT` query that was specified when a [materialized view](../create/view.md#materialized) was created with the `ALTER TABLE … MODIFY QUERY` statement, without interrupting the ingestion process.
You can modify the `SELECT` query that was specified when a [materialized view](../create/view.md#materialized) was created with the `ALTER TABLE ... MODIFY QUERY` statement, without interrupting the ingestion process.

This command is intended for materialized views created with a `TO [db.]name` clause. It does not change the structure of the underlying storage table, nor the column definitions of the materialized view; because of this, its applicability is very limited for materialized views created without a `TO [db.]name` clause.

@ -198,6 +198,6 @@ SELECT * FROM mv;

The `ALTER LIVE VIEW ... REFRESH` statement refreshes a [Live view](../create/view.md#live-view). See [Force Live View Refresh](../create/view.md#live-view-alter-refresh).

## ALTER TABLE … MODIFY REFRESH Statement
## ALTER TABLE ... MODIFY REFRESH Statement

The `ALTER TABLE ... MODIFY REFRESH` statement changes the refresh parameters of a [Refreshable Materialized View](../create/view.md#refreshable-materialized-view). See [Changing Refresh Parameters](../create/view.md#changing-refresh-parameters).
@ -306,7 +306,7 @@ CREATE WINDOW VIEW test.wv TO test.dst WATERMARK=ASCENDING ALLOWED_LATENESS=INTE

Note that elements emitted by a late firing should be treated as updated results of a previous computation. Instead of firing at the end of windows, the window view will fire immediately when the late event arrives. Thus, it will result in multiple outputs for the same window. Users need to take these duplicated results into account or deduplicate them.

You can modify the `SELECT` query that was specified in the window view by using the `ALTER TABLE … MODIFY QUERY` statement. The data structure resulting from the new `SELECT` query should be the same as the original `SELECT` query, with or without the `TO [db.]name` clause. Note that the data in the current window will be lost because the intermediate state cannot be reused.
You can modify the `SELECT` query that was specified in the window view by using the `ALTER TABLE ... MODIFY QUERY` statement. The data structure resulting from the new `SELECT` query should be the same as the original `SELECT` query, with or without the `TO [db.]name` clause. Note that the data in the current window will be lost because the intermediate state cannot be reused.

### Monitoring New Windows
@ -73,7 +73,7 @@ Data can be passed to the INSERT in any [format](../../interfaces/formats.md#for

INSERT INTO [db.]table [(c1, c2, c3)] FORMAT format_name data_set
```

For example, the following query format is identical to the basic version of INSERT … VALUES:
For example, the following query format is identical to the basic version of INSERT ... VALUES:

``` sql
INSERT INTO [db.]table [(c1, c2, c3)] FORMAT Values (v11, v12, v13), (v21, v22, v23), ...
```
@ -17,11 +17,11 @@ If there is no [ORDER BY](../../../sql-reference/statements/select/order-by.md)

The number of rows in the result set can also depend on the [limit](../../../operations/settings/settings.md#limit) setting.
:::

## LIMIT … WITH TIES Modifier
## LIMIT ... WITH TIES Modifier

When you set the `WITH TIES` modifier for `LIMIT n[,m]` and specify `ORDER BY expr_list`, the result contains the first `n` (or `n,m`) rows plus all additional rows whose `ORDER BY` field values equal those of the row at position `n` (for `LIMIT n`) or `m` (for `LIMIT n,m`).
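A small sketch: with `n = number DIV 2` the source values are `0,0,1,1,2,2,...`, so `LIMIT 3 WITH TIES` is expected to return four rows (`0,0,1,1`), because the fourth row ties with the third on `n`:

```sql
SELECT number DIV 2 AS n
FROM numbers(10)
ORDER BY n
LIMIT 3 WITH TIES;
```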
This modifier can also be combined with the [ORDER BY … WITH FILL modifier](../../../sql-reference/statements/select/order-by.md#orderby-with-fill).
This modifier can also be combined with the [ORDER BY ... WITH FILL modifier](../../../sql-reference/statements/select/order-by.md#orderby-with-fill).

For example, the following query

@ -283,7 +283,7 @@ In `MaterializedView`-engine tables the optimization works with views like `SELE

## ORDER BY Expr WITH FILL Modifier

This modifier can also be combined with the [LIMIT … WITH TIES modifier](../../../sql-reference/statements/select/limit.md#limit-with-ties).
This modifier can also be combined with the [LIMIT ... WITH TIES modifier](../../../sql-reference/statements/select/limit.md#limit-with-ties).

The `WITH FILL` modifier can be set after `ORDER BY expr` with optional `FROM expr`, `TO expr` and `STEP expr` parameters.
All missing values of the `expr` column will be filled sequentially, and the other columns will be filled with defaults.
|
||||
|
@ -169,7 +169,7 @@ If your listing of files contains number ranges with leading zeros, use the cons
|
||||
|
||||
**Example**
|
||||
|
||||
Query the total number of rows in files named `file000`, `file001`, … , `file999`:
|
||||
Query the total number of rows in files named `file000`, `file001`, ... , `file999`:
|
||||
|
||||
``` sql
|
||||
SELECT count(*) FROM file('big_dir/file{0..9}{0..9}{0..9}', 'CSV', 'name String, value UInt32');
@ -130,7 +130,7 @@ FROM gcs('https://storage.googleapis.com/my-test-bucket-768/{some,another}_prefi

If your listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use `?`.
:::

Count the total amount of rows in files named `file-000.csv`, `file-001.csv`, … , `file-999.csv`:
Count the total amount of rows in files named `file-000.csv`, `file-001.csv`, ... , `file-999.csv`:

``` sql
SELECT count(*)
```

@ -85,7 +85,7 @@ If your listing of files contains number ranges with leading zeros, use the cons

**Example**

Query the data from files named `file000`, `file001`, … , `file999`:
Query the data from files named `file000`, `file001`, ... , `file999`:

``` sql
SELECT count(*)
```

@ -137,7 +137,7 @@ FROM s3('https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/

If your listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use `?`.
:::

Count the total amount of rows in files named `file-000.csv`, `file-001.csv`, … , `file-999.csv`:
Count the total amount of rows in files named `file-000.csv`, `file-001.csv`, ... , `file-999.csv`:

``` sql
SELECT count(*)
```
@ -57,7 +57,7 @@ memcpy(&buf[place_value], &x, sizeof(x));

for (size_t i = 0; i < rows; i += storage.index_granularity)
```

**7.** Binary operators (`+`, `-`, `*`, `/`, `%`, …) and the ternary operator `?:` are surrounded by spaces.
**7.** Binary operators (`+`, `-`, `*`, `/`, `%`, ...) and the ternary operator `?:` are surrounded by spaces.

``` cpp
UInt16 year = (s[0] - '0') * 1000 + (s[1] - '0') * 100 + (s[2] - '0') * 10 + (s[3] - '0');
```

@ -86,7 +86,7 @@ dst.ClickGoodEvent = click.GoodEvent;

If necessary, an operator can be wrapped onto the next line. In this case, the indentation in front of it is increased.

**11.** Unary operators (`--`, `++`, `*`, `&`, …) are not separated from the argument by a space.
**11.** Unary operators (`--`, `++`, `*`, `&`, ...) are not separated from the argument by a space.

**12.** A space is placed after a comma, but not before it. The same applies to a semicolon inside a `for` expression.

@ -115,7 +115,7 @@ public:

**16.** If the whole file is one `namespace` and there is nothing else significant in it, no indentation is needed inside the `namespace`.

**17.** If the block for an `if`, `for`, `while`, … expression consists of a single `statement`, the curly braces are optional. Place the `statement` on a separate line instead. This rule also applies to nested `if`, `for`, `while`, …
**17.** If the block for an `if`, `for`, `while`, ... expression consists of a single `statement`, the curly braces are optional. Place the `statement` on a separate line instead. This rule also applies to nested `if`, `for`, `while`, ...

If the inner `statement` contains curly braces or `else`, the outer block should be written with curly braces.

@ -266,7 +266,7 @@ void executeQuery(

The example was taken from http://home.tamk.fi/~jaalto/course/coding-style/doc/unmaintainable-code/.

**7.** Do not write garbage comments (author, creation date…) at the beginning of each file.
**7.** Do not write garbage comments (author, creation date...) at the beginning of each file.

**8.** Single-line comments begin with three slashes: `///`, and multi-line comments with `/**`. These comments are considered “documentation”.
@ -103,7 +103,7 @@ CREATE TABLE table_with_asterisk (name String, value UInt32) ENGINE = HDFS('hdfs

**Example**

Create a table with files named `file000`, `file001`, … , `file999`:
Create a table with files named `file000`, `file001`, ... , `file999`:

``` sql
CREATE TABLE big_table (name String, value UInt32) ENGINE = HDFS('hdfs://hdfs1:9000/big_dir/file{0..9}{0..9}{0..9}', 'CSV')
```

@ -73,7 +73,7 @@ SELECT * FROM s3_engine_table LIMIT 2;

**Substitution example 1**

The table contains data from files named `file-000.csv`, `file-001.csv`, … , `file-999.csv`:
The table contains data from files named `file-000.csv`, `file-001.csv`, ... , `file-999.csv`:

``` sql
CREATE TABLE big_table (name String, value UInt32)
```
@ -66,7 +66,7 @@ WHERE table = 'visits'

└───────────┴───────────────────┴────────┘
```

The `partition` column contains the names of all the partitions of the table. The `visits` table from our example has two partitions: `201901` and `201902`. Use the values from this column in [ALTER … PARTITION](../../../sql-reference/statements/alter/partition.md) queries.
The `partition` column contains the names of all the partitions of the table. The `visits` table from our example has two partitions: `201901` and `201902`. Use the values from this column in [ALTER ... PARTITION](../../../sql-reference/statements/alter/partition.md) queries.

The `name` column contains the names of the partition data parts. Values from this column can be used in [ALTER ATTACH PART](../../../sql-reference/statements/alter/partition.md#alter_attach-partition) queries.

@ -771,7 +771,7 @@ SETTINGS storage_policy = 'moving_from_ssd_to_hdd'

- As a result of an insert (an `INSERT` query).
- During background merge and [mutation](../../../sql-reference/statements/alter/index.md#mutations) operations.
- When downloading data from another replica.
- As a result of freezing partitions with [ALTER TABLE … FREEZE PARTITION](../../../engines/table-engines/mergetree-family/mergetree.md#alter_freeze-partition).
- As a result of freezing partitions with [ALTER TABLE ... FREEZE PARTITION](../../../engines/table-engines/mergetree-family/mergetree.md#alter_freeze-partition).

In all of these cases except for mutations and partition freezing, a part is stored on a volume and a disk according to the specified storage configuration:

@ -781,7 +781,7 @@ SETTINGS storage_policy = 'moving_from_ssd_to_hdd'

Internally, mutations and partition freeze queries make use of [hard links](https://ru.wikipedia.org/wiki/%D0%96%D1%91%D1%81%D1%82%D0%BA%D0%B0%D1%8F_%D1%81%D1%81%D1%8B%D0%BB%D0%BA%D0%B0). Hard links between different disks are not supported, so in such operations the resulting parts are stored on the same disks as the initial ones.

In the background, parts are moved between volumes based on the amount of occupied space (the `move_factor` setting), in the order the volumes are listed in the configuration. Data is never moved off the last volume or onto the first volume. Background moves can be tracked with the system tables [system.part_log](../../../engines/table-engines/mergetree-family/mergetree.md#system_tables-part-log) (field `type = MOVE_PART`) and [system.parts](../../../engines/table-engines/mergetree-family/mergetree.md#system_tables-parts) (fields `path` and `disk`). Detailed information about moves is also available in the server logs.

With the [ALTER TABLE … MOVE PART\|PARTITION … TO VOLUME\|DISK …](../../../engines/table-engines/mergetree-family/mergetree.md#alter_move-partition) query, a user can force a move of a part or a partition from one volume to another. All the restrictions for background operations are taken into account. The query initiates the move on its own, without waiting for background operations. The user gets an error message if not enough free space is available or if any of the required conditions are not met.
With the [ALTER TABLE ... MOVE PART\|PARTITION ... TO VOLUME\|DISK ...](../../../engines/table-engines/mergetree-family/mergetree.md#alter_move-partition) query, a user can force a move of a part or a partition from one volume to another. All the restrictions for background operations are taken into account. The query initiates the move on its own, without waiting for background operations. The user gets an error message if not enough free space is available or if any of the required conditions are not met.

Moving data does not interfere with data replication, so different storage policies can be specified for the same table on different replicas.

@ -31,7 +31,7 @@ ClickHouse allows sending the server the data needed for processing a query,

- **--format**: the format of the data in the file. If not specified, TabSeparated is used.

One of the following parameters must be specified:
- **--types**: a comma-separated list of column types. For example, `UInt64,String`. The columns will be named _1, _2, …
- **--types**: a comma-separated list of column types. For example, `UInt64,String`. The columns will be named _1, _2, ...
- **--structure**: the table structure in the form `UserID UInt64`, `URL String`. Defines the column names and types.

The files specified in `file` will be parsed by the format specified in `format`, using the data types specified in `types` or `structure`. The table will be uploaded to the server and be accessible there as a temporary table with the name in `name`.
@ -9,13 +9,13 @@ sidebar_position: 100

[OLAP](https://ru.wikipedia.org/wiki/OLAP) (OnLine Analytical Processing) stands for online analytical processing. It is a broad term that can be looked at from two perspectives: a technical one and a business one. At the most general level, you can just read it backwards:

**Processing**
Some source data is processed…
Some source data is processed...

**Analytical**
: … to produce some analytical reports or gain new knowledge…
: ... to produce some analytical reports or gain new knowledge...

**OnLine**
: … in real time, with practically no delay for processing.
: ... in real time, with practically no delay for processing.

## OLAP from the Business Perspective {#olap-from-the-business-perspective}

@ -196,7 +196,7 @@ real 75m56.214s

(Importing data directly from Postgres is also possible, using `COPY ... TO PROGRAM`.)

Unfortunately, all the weather-related fields (precipitation…average_wind_speed) are filled with NULL. Because of this, we will exclude them from the final dataset.
Unfortunately, all the weather-related fields (precipitation...average_wind_speed) are filled with NULL. Because of this, we will exclude them from the final dataset.

To start, we will create a table on a single server. Later we will make the table distributed.
@ -12,10 +12,10 @@ ClickHouse is a columnar database management system

| Row | WatchID | JavaEnable | Title | GoodEvent | EventTime |
|--------|-------------|------------|--------------------|-----------|---------------------|
| #0 | 89354350662 | 1 | Investor Relations | 1 | 2016-05-18 05:19:20 |
| #1 | 90329509958 | 0 | Contact us | 1 | 2016-05-18 08:10:20 |
| #2 | 89953706054 | 1 | Mission | 1 | 2016-05-18 07:38:00 |
| #N | … | … | … | … | … |
| #0 | 89354350662 | 1 | Investor Relations | 1 | 2016-05-18 05:19:20 |
| #1 | 90329509958 | 0 | Contact us | 1 | 2016-05-18 08:10:20 |
| #2 | 89953706054 | 1 | Mission | 1 | 2016-05-18 07:38:00 |
| #N | ... | ... | ... | ... | ... |

That is, the values related to a single row are physically stored next to each other.

@ -24,13 +24,13 @@ ClickHouse is a columnar database management system

In columnar DBMSs, data is stored in this order:

| Row: | #0 | #1 | #2 | #N |
| Row: | #0 | #1 | #2 | #N |
|-------------|---------------------|---------------------|---------------------|-----|
| WatchID: | 89354350662 | 90329509958 | 89953706054 | … |
| JavaEnable: | 1 | 0 | 1 | … |
| Title: | Investor Relations | Contact us | Mission | … |
| GoodEvent: | 1 | 1 | 1 | … |
| EventTime: | 2016-05-18 05:19:20 | 2016-05-18 08:10:20 | 2016-05-18 07:38:00 | … |
| WatchID: | 89354350662 | 90329509958 | 89953706054 | ... |
| JavaEnable: | 1 | 0 | 1 | ... |
| Title: | Investor Relations | Contact us | Mission | ... |
| GoodEvent: | 1 | 1 | 1 | ... |
| EventTime: | 2016-05-18 05:19:20 | 2016-05-18 08:10:20 | 2016-05-18 07:38:00 | ... |

These examples only show the order in which the data is arranged.
That is, values from different columns are stored separately, and data for a single column is stored together.
@ -119,6 +119,7 @@ Hello\nworld

Hello\
world
```

`\n\r` (CRLF) is supported with the `input_format_tsv_crlf_end_of_line` setting.

The second variant is supported because MySQL uses it when writing tab-separated dumps.
@ -260,7 +260,7 @@ FORMAT Null;

Limits the number of rows in the hash table that is used when joining tables.

This setting applies to [SELECT… JOIN](../../sql-reference/statements/select/join.md#select-join) operations and the [Join](../../engines/table-engines/special/join.md) table engine.
This setting applies to [SELECT... JOIN](../../sql-reference/statements/select/join.md#select-join) operations and the [Join](../../engines/table-engines/special/join.md) table engine.

If a query contains multiple `JOIN`s, ClickHouse checks this setting for every intermediate result.

@ -277,7 +277,7 @@ FORMAT Null;

Limits the size (in bytes) of the hash table that is used when joining tables.

This setting applies to [SELECT… JOIN](../../sql-reference/statements/select/join.md#select-join) operations and the [Join](../../engines/table-engines/special/join.md) table engine.
This setting applies to [SELECT... JOIN](../../sql-reference/statements/select/join.md#select-join) operations and the [Join](../../engines/table-engines/special/join.md) table engine.

If a query contains multiple `JOIN`s, ClickHouse checks this setting for every intermediate result.

@ -1859,7 +1859,7 @@ SELECT * FROM test_table

## count_distinct_implementation {#settings-count_distinct_implementation}

Specifies which of the `uniq*` functions should be used to perform the [COUNT(DISTINCT …)](../../sql-reference/aggregate-functions/reference/count.md#agg_function-count) construction.
Specifies which of the `uniq*` functions should be used to perform the [COUNT(DISTINCT ...)](../../sql-reference/aggregate-functions/reference/count.md#agg_function-count) construction.

Possible values:
@ -82,7 +82,7 @@ FROM

In this case, you should remember that the histogram bin boundaries are not known.

## sequenceMatch(pattern)(timestamp, cond1, cond2, …) {#function-sequencematch}
## sequenceMatch(pattern)(timestamp, cond1, cond2, ...) {#function-sequencematch}

Checks whether the sequence of events contains an event chain that matches the specified pattern.

@ -172,7 +172,7 @@ SELECT sequenceMatch('(?1)(?2)')(time, number = 1, number = 2, number = 4) FROM

- [sequenceCount](#function-sequencecount)

## sequenceCount(pattern)(time, cond1, cond2, …) {#function-sequencecount}
## sequenceCount(pattern)(time, cond1, cond2, ...) {#function-sequencecount}

Counts the number of event chains that match the pattern. The function only finds non-overlapping chains of events: it starts searching for the next chain only after the current chain has been fully matched.

@ -7,7 +7,7 @@ sidebar_position: 201

## quantiles {#quantiles}

Syntax: `quantiles(level1, level2, …)(x)`
Syntax: `quantiles(level1, level2, ...)(x)`

All functions for computing quantiles have corresponding functions for computing several quantiles at once: `quantiles`, `quantilesDeterministic`, `quantilesTiming`, `quantilesTimingWeighted`, `quantilesExact`, `quantilesExactWeighted`, `quantilesTDigest`, `quantilesBFloat16`. These functions compute all the quantiles of the specified levels in a single pass and return an array of the resulting values.
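For example, computing three quartiles in a single pass (with 100 consecutive numbers the expected result is close to `[24.75, 49.5, 74.25]`):

```sql
SELECT quantiles(0.25, 0.5, 0.75)(number) FROM numbers(100);
```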
@ -6,9 +6,9 @@ sidebar_label: AggregateFunction

# AggregateFunction {#data-type-aggregatefunction}

Aggregate functions can have an implementation-defined intermediate state that can be serialized into an AggregateFunction(…) data type and written to a table, usually by means of a [materialized view](../../sql-reference/statements/create/view.md). To obtain an intermediate state, aggregate functions with the `-State` suffix are normally used. To get the final aggregated data later, the same aggregate functions must be used with the `-Merge` suffix.
Aggregate functions can have an implementation-defined intermediate state that can be serialized into an AggregateFunction(...) data type and written to a table, usually by means of a [materialized view](../../sql-reference/statements/create/view.md). To obtain an intermediate state, aggregate functions with the `-State` suffix are normally used. To get the final aggregated data later, the same aggregate functions must be used with the `-Merge` suffix.

`AggregateFunction(name, types_of_arguments…)` is a parametric data type.
`AggregateFunction(name, types_of_arguments...)` is a parametric data type.

**Parameters**

@ -21,8 +21,8 @@ sidebar_label: FixedString(N)

Examples of values that can be stored efficiently in `FixedString` columns:

- The binary representation of IP addresses (`FixedString(16)` for IPv6).
- Language codes (ru_RU, en_US … ).
- Currency codes (USD, RUB … ).
- Language codes (ru_RU, en_US ... ).
- Currency codes (USD, RUB ... ).
- The binary representation of hashes (`FixedString(16)` for MD5, `FixedString(32)` for SHA256).

To store UUID values, use the [UUID](uuid.md) data type.

@ -3,7 +3,7 @@ slug: /ru/sql-reference/data-types/nested-data-structures/nested
---
# Nested {#nested}

## Nested(Name1 Type1, Name2 Type2, …) {#nestedname1-type1-name2-type2}
## Nested(Name1 Type1, Name2 Type2, ...) {#nestedname1-type1-name2-type2}

A nested data structure is like a nested table. The parameters of a nested data structure, the column names and types, are specified the same way as in a CREATE query. Each table row can correspond to any number of rows of the nested data structure.

@ -4,7 +4,7 @@ sidebar_position: 54
sidebar_label: Tuple(T1, T2, ...)
---

# Tuple(T1, T2, …) {#tuplet1-t2}
# Tuple(T1, T2, ...) {#tuplet1-t2}

A tuple of elements of any [type](index.md#data_types). The tuple elements can have the same or different types.
@ -161,7 +161,7 @@ SELECT range(5), range(1, 5), range(1, 5, 2);

```

## array(x1, …), operator \[x1, …\] {#arrayx1-operator-x1}
## array(x1, ...), operator \[x1, ...\] {#arrayx1-operator-x1}

Creates an array from the function arguments.
The arguments must be constants and have types for which there is a smallest common type. At least one argument must be passed, because otherwise it is unclear what type of array to create. That is, you can't use this function to create an empty array (to do that, use the emptyArray\* functions described above).

@ -308,7 +308,7 @@ SELECT indexOf([1, 3, NULL, NULL], NULL)

Elements equal to `NULL` are handled as normal values.

## arrayCount(\[func,\] arr1, …) {#array-count}
## arrayCount(\[func,\] arr1, ...) {#array-count}

Returns the number of elements of the `arr` array for which the `func` function returns something other than 0. If `func` is not specified, it returns the number of non-zero elements of the array.
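For example (the expected result is `2`, since two elements are greater than 1):

```sql
SELECT arrayCount(x -> x > 1, [1, 2, 3]);
```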
@ -335,7 +335,7 @@ SELECT countEqual([1, 2, NULL, NULL], NULL)

## arrayEnumerate(arr) {#array_functions-arrayenumerate}

Returns the array \[1, 2, 3, …, length(arr)\]
Returns the array \[1, 2, 3, ..., length(arr)\]

This function is normally used with ARRAY JOIN. It allows counting something just once for each array after applying ARRAY JOIN. Example:

@ -375,7 +375,7 @@ WHERE (CounterID = 160656) AND notEmpty(GoalsReached)

This function can also be used in higher-order functions. For example, you can use it to get the array indexes of elements that match a condition.

## arrayEnumerateUniq(arr, …) {#arrayenumerateuniqarr}
## arrayEnumerateUniq(arr, ...) {#arrayenumerateuniqarr}

Returns an array the same size as the source array, indicating for each element what its position is among elements with the same value.
For example: arrayEnumerateUniq(\[10, 20, 10, 30\]) = \[1, 1, 2, 1\].

@ -597,7 +597,7 @@ SELECT arraySlice([1, 2, NULL, 4, 5], 2, 3) AS res;

Array elements equal to `NULL` are handled as normal values.

## arraySort(\[func,\] arr, …) {#array_functions-sort}
## arraySort(\[func,\] arr, ...) {#array_functions-sort}

Returns the `arr` array sorted in ascending order. If the `func` function is specified, the sorting order is determined by the result of applying `func` to the elements of the array. If `func` accepts multiple arguments, the `arraySort` function must be passed several arrays that the arguments of `func` will correspond to. Detailed examples are shown at the end of the `arraySort` description.
@ -698,11 +698,11 @@ SELECT arraySort((x, y) -> -y, [0, 1, 2], [1, 2, 3]) as res;

To improve sorting efficiency, the [Schwartzian transform](https://ru.wikipedia.org/wiki/%D0%9F%D1%80%D0%B5%D0%BE%D0%B1%D1%80%D0%B0%D0%B7%D0%BE%D0%B2%D0%B0%D0%BD%D0%B8%D0%B5_%D0%A8%D0%B2%D0%B0%D1%80%D1%86%D0%B0) is used.
:::

## arrayPartialSort(\[func,\] limit, arr, …) {#array_functions-sort}
## arrayPartialSort(\[func,\] limit, arr, ...) {#array_functions-sort}

The same as `arraySort`, with an additional `limit` argument allowing partial sorting. Returns an array of the same size as the original array, where the elements in the range `[1..limit]` are sorted in ascending order. The remaining elements `(limit..N]` are left in an unspecified order.

## arrayReverseSort(\[func,\] arr, …) {#array_functions-reverse-sort}
## arrayReverseSort(\[func,\] arr, ...) {#array_functions-reverse-sort}

Returns the `arr` array sorted in descending order. If the `func` function is specified, `arr` is sorted first according to the order determined by `func`, and then the sorted array is reversed. If `func` accepts multiple arguments, the `arrayReverseSort` function must be passed several arrays that the arguments of `func` will correspond to. Detailed examples are shown at the end of the `arrayReverseSort` description.

@ -803,11 +803,11 @@ SELECT arrayReverseSort((x, y) -> -y, [4, 3, 5], [1, 2, 3]) AS res;

└─────────┘
```

## arrayPartialReverseSort(\[func,\] limit, arr, …) {#array_functions-sort}
## arrayPartialReverseSort(\[func,\] limit, arr, ...) {#array_functions-sort}

The same as `arrayReverseSort`, with an additional `limit` argument allowing partial sorting. Returns an array of the same size as the original array, where the elements in the range `[1..limit]` are sorted in descending order. The remaining elements `(limit..N]` are left in an unspecified order.

## arrayUniq(arr, …) {#array-functions-arrayuniq}
## arrayUniq(arr, ...) {#array-functions-arrayuniq}

If one argument is passed, counts the number of distinct elements in the array.
If multiple arguments are passed, counts the number of distinct tuples of elements at corresponding positions across several arrays.
@ -1174,7 +1174,7 @@ SELECT arrayZip(['a', 'b', 'c'], [5, 2, 1]);

└──────────────────────────────────────┘
```

## arrayMap(func, arr1, …) {#array-map}
## arrayMap(func, arr1, ...) {#array-map}

Returns an array obtained by applying the `func` function to each element of the `arr` array.

@ -1204,7 +1204,7 @@ SELECT arrayMap((x, y) -> (x, y), [1, 2, 3], [4, 5, 6]) AS res;

The `arrayMap` function is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions): a lambda function must be passed as the first argument, and it cannot be omitted.

## arrayFilter(func, arr1, …) {#array-filter}
## arrayFilter(func, arr1, ...) {#array-filter}

Returns an array containing only the elements of `arr1` for which `func` returns something other than 0.

@ -1237,7 +1237,7 @@ SELECT

The `arrayFilter` function is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions): a lambda function must be passed as the first argument, and it cannot be omitted.

## arrayFill(func, arr1, …) {#array-fill}
## arrayFill(func, arr1, ...) {#array-fill}

Scans `arr1` from the first element to the last and replaces `arr1[i]` with `arr1[i - 1]` if `func` returned 0. The first element of `arr1` is not replaced.

@ -1255,7 +1255,7 @@ SELECT arrayFill(x -> not isNull(x), [1, null, 3, 11, 12, null, null, 5, 6, 14,

The `arrayFill` function is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions): a lambda function must be passed as the first argument, and it cannot be omitted.

## arrayReverseFill(func, arr1, …) {#array-reverse-fill}
## arrayReverseFill(func, arr1, ...) {#array-reverse-fill}

Scans `arr1` from the last element to the first and replaces `arr1[i]` with `arr1[i + 1]` if `func` returned 0. The last element of `arr1` is not replaced.

@ -1273,7 +1273,7 @@ SELECT arrayReverseFill(x -> not isNull(x), [1, null, 3, 11, 12, null, null, 5,

The `arrayReverseFill` function is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions): a lambda function must be passed as the first argument, and it cannot be omitted.

## arraySplit(func, arr1, …) {#array-split}
## arraySplit(func, arr1, ...) {#array-split}

Splits the `arr1` array into several arrays. When `func` returns something other than 0, the array is split on the left side of the element. The array is not split before the first element.

@ -1291,7 +1291,7 @@ SELECT arraySplit((x, y) -> y, [1, 2, 3, 4, 5], [1, 0, 0, 1, 0]) AS res

The `arraySplit` function is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions): a lambda function must be passed as the first argument, and it cannot be omitted.

## arrayReverseSplit(func, arr1, …) {#array-reverse-split}
## arrayReverseSplit(func, arr1, ...) {#array-reverse-split}

Splits the `arr1` array into several arrays. When `func` returns something other than 0, the array is split on the right side of the element. The array is not split after the last element.

@ -1309,25 +1309,25 @@ SELECT arrayReverseSplit((x, y) -> y, [1, 2, 3, 4, 5], [1, 0, 0, 1, 0]) AS res

The `arrayReverseSplit` function is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions): a lambda function must be passed as the first argument, and it cannot be omitted.

## arrayExists(\[func,\] arr1, …) {#arrayexistsfunc-arr1}
## arrayExists(\[func,\] arr1, ...) {#arrayexistsfunc-arr1}

Returns 1 if there is at least one element of the `arr` array for which `func` returns something other than 0. Otherwise, it returns 0.

The `arrayExists` function is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions): a lambda function can be passed as the first argument.

## arrayAll(\[func,\] arr1, …) {#arrayallfunc-arr1}
## arrayAll(\[func,\] arr1, ...) {#arrayallfunc-arr1}

Returns 1 if `func` returns something other than 0 for all elements of the `arr` array. Otherwise, it returns 0.

The `arrayAll` function is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions): a lambda function can be passed as the first argument.

## arrayFirst(func, arr1, …) {#array-first}
## arrayFirst(func, arr1, ...) {#array-first}

Returns the first element of the `arr1` array for which `func` returns something other than 0.

The `arrayFirst` function is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions): a lambda function must be passed as the first argument, and it cannot be omitted.

## arrayFirstIndex(func, arr1, …) {#array-first-index}
## arrayFirstIndex(func, arr1, ...) {#array-first-index}

Returns the index of the first element of the `arr1` array for which `func` returns something other than 0.

@ -1599,7 +1599,7 @@ SELECT arraySum(x -> x*x, [2, 3]) AS res;

└─────┘
```

## arrayCumSum(\[func,\] arr1, …) {#arraycumsumfunc-arr1}
## arrayCumSum(\[func,\] arr1, ...) {#arraycumsumfunc-arr1}

Returns an array of the partial sums of the elements of the source array (a running sum). If the `func` function is specified, the values of the array elements are converted by this function before summing.
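For example (the expected result is `[1, 2, 3, 4]`):

```sql
SELECT arrayCumSum([1, 1, 1, 1]) AS res;
```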
@ -559,7 +559,7 @@ SELECT

Description of the modes (mode):

| Mode | First day of week | Range | Week 1 is the first week … |
| Mode | First day of week | Range | Week 1 is the first week ... |
| ----------- | -------- | -------- | ------------------ |
|0|Sunday|0-53|with a Sunday in this year
|1|Monday|0-53|with 4 or more days this year

@ -88,7 +88,7 @@ SELECT isValidJSON('{"a": "hello", "b": [-100, 200.0, 300]}') = 1

SELECT isValidJSON('not a json') = 0
```
## JSONHas(json\[, indices_or_keys\]…) {#jsonhasjson-indices-or-keys}
## JSONHas(json\[, indices_or_keys\]...) {#jsonhasjson-indices-or-keys}

If the value exists in the JSON document, `1` is returned.

@ -121,7 +121,7 @@ SELECT JSONExtractKey('{"a": "hello", "b": [-100, 200.0, 300]}', -2) = 'a'

SELECT JSONExtractString('{"a": "hello", "b": [-100, 200.0, 300]}', 1) = 'hello'
```

## JSONLength(json\[, indices_or_keys\]…) {#jsonlengthjson-indices-or-keys}
## JSONLength(json\[, indices_or_keys\]...) {#jsonlengthjson-indices-or-keys}

Returns the length of a JSON array or a JSON object.

@ -134,7 +134,7 @@ SELECT JSONLength('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = 3

SELECT JSONLength('{"a": "hello", "b": [-100, 200.0, 300]}') = 2
```

## JSONType(json\[, indices_or_keys\]…) {#jsontypejson-indices-or-keys}
## JSONType(json\[, indices_or_keys\]...) {#jsontypejson-indices-or-keys}

Returns the type of a JSON value.

@ -148,13 +148,13 @@ SELECT JSONType('{"a": "hello", "b": [-100, 200.0, 300]}', 'a') = 'String'

SELECT JSONType('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = 'Array'
```

## JSONExtractUInt(json\[, indices_or_keys\]…) {#jsonextractuintjson-indices-or-keys}
## JSONExtractUInt(json\[, indices_or_keys\]...) {#jsonextractuintjson-indices-or-keys}

## JSONExtractInt(json\[, indices_or_keys\]…) {#jsonextractintjson-indices-or-keys}
## JSONExtractInt(json\[, indices_or_keys\]...) {#jsonextractintjson-indices-or-keys}

## JSONExtractFloat(json\[, indices_or_keys\]…) {#jsonextractfloatjson-indices-or-keys}
## JSONExtractFloat(json\[, indices_or_keys\]...) {#jsonextractfloatjson-indices-or-keys}

## JSONExtractBool(json\[, indices_or_keys\]…) {#jsonextractbooljson-indices-or-keys}
## JSONExtractBool(json\[, indices_or_keys\]...) {#jsonextractbooljson-indices-or-keys}

Parses JSON and extracts a value. These functions are similar to the `visitParam` functions.

@ -168,7 +168,7 @@ SELECT JSONExtractFloat('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', 2) = 200

SELECT JSONExtractUInt('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', -1) = 300
```

## JSONExtractString(json\[, indices_or_keys\]…) {#jsonextractstringjson-indices-or-keys}
## JSONExtractString(json\[, indices_or_keys\]...) {#jsonextractstringjson-indices-or-keys}

Parses JSON and extracts a string. This function is similar to the `visitParamExtractString` function.

@ -186,7 +186,7 @@ SELECT JSONExtractString('{"abc":"\\u263"}', 'abc') = ''

SELECT JSONExtractString('{"abc":"hello}', 'abc') = ''
```

## JSONExtract(json\[, indices_or_keys…\], Return_type) {#jsonextractjson-indices-or-keys-return-type}
## JSONExtract(json\[, indices_or_keys...\], Return_type) {#jsonextractjson-indices-or-keys-return-type}

Parses JSON and extracts a value with the given data type.

@ -207,7 +207,7 @@ SELECT JSONExtract('{"day": "Thursday"}', 'day', 'Enum8(\'Sunday\' = 0, \'Monday

SELECT JSONExtract('{"day": 5}', 'day', 'Enum8(\'Sunday\' = 0, \'Monday\' = 1, \'Tuesday\' = 2, \'Wednesday\' = 3, \'Thursday\' = 4, \'Friday\' = 5, \'Saturday\' = 6)') = 'Friday'
```

## JSONExtractKeysAndValues(json\[, indices_or_keys…\], Value_type) {#jsonextractkeysandvaluesjson-indices-or-keys-value-type}
## JSONExtractKeysAndValues(json\[, indices_or_keys...\], Value_type) {#jsonextractkeysandvaluesjson-indices-or-keys-value-type}

Parses key-value pairs from JSON where the values have the given ClickHouse data type.

@ -255,7 +255,7 @@ text

└────────────────────────────────────────────────────────────┘
```

## JSONExtractRaw(json\[, indices_or_keys\]…) {#jsonextractrawjson-indices-or-keys}
## JSONExtractRaw(json\[, indices_or_keys\]...) {#jsonextractrawjson-indices-or-keys}

Returns part of the JSON as an unparsed string.

@ -267,7 +267,7 @@ text

SELECT JSONExtractRaw('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = '[-100, 200.0, 300]';
```

## JSONExtractArrayRaw(json\[, indices_or_keys\]…) {#jsonextractarrayrawjson-indices-or-keys}
## JSONExtractArrayRaw(json\[, indices_or_keys\]...) {#jsonextractarrayrawjson-indices-or-keys}

Returns an array of elements of a JSON array, each represented as an unparsed string.
@ -286,7 +286,7 @@ SELECT byteSize(NULL, 1, 0.3, '');

Turns a constant into a full column containing just one value.
In ClickHouse, full columns and constants are represented in memory differently. Functions work differently for constant arguments and normal arguments (different code is executed), although the result is almost always the same. This function is for debugging this behavior.

## ignore(…) {#ignore}
## ignore(...) {#ignore}

Accepts any arguments, including `NULL`, and always returns 0.
The argument is still evaluated, though. This can be used for benchmarks.

@ -358,7 +358,7 @@ SELECT repeat('abc', 10);

Reverses a sequence of Unicode code points, assuming that the string contains a set of bytes representing UTF-8 text. Otherwise, it does something else (it doesn't throw an exception).

## format(pattern, s0, s1, …) {#format}
## format(pattern, s0, s1, ...) {#format}

Formats a constant pattern with the strings listed in the arguments. `pattern` is a simplified Python format pattern. It contains “replacement fields” surrounded by curly braces `{}`. Anything not contained in braces is considered literal text and is copied verbatim. If you need to include a brace character in the literal text, it can be escaped by doubling: `{{ '{{' }}` and `{{ '}}' }}`. Field names can be numbers (numbering starts from zero) or empty (then they are treated as consecutive numbers).
@ -311,19 +311,19 @@ Result:

See `multiSearchAllPositions`.

## multiSearchFirstPosition(haystack, \[needle<sub>1</sub>, needle<sub>2</sub>, …, needle<sub>n</sub>\]) {#multisearchfirstpositionhaystack-needle1-needle2-needlen}
## multiSearchFirstPosition(haystack, \[needle<sub>1</sub>, needle<sub>2</sub>, ..., needle<sub>n</sub>\]) {#multisearchfirstpositionhaystack-needle1-needle2-needlen}

The same as `position`, but returns the offset of the first occurrence of any of the needles.

For case-insensitive search and/or search in UTF-8, use the functions `multiSearchFirstPositionCaseInsensitive, multiSearchFirstPositionUTF8, multiSearchFirstPositionCaseInsensitiveUTF8`.

## multiSearchFirstIndex(haystack, \[needle<sub>1</sub>, needle<sub>2</sub>, …, needle<sub>n</sub>\]) {#multisearchfirstindexhaystack-needle1-needle2-needlen}
## multiSearchFirstIndex(haystack, \[needle<sub>1</sub>, needle<sub>2</sub>, ..., needle<sub>n</sub>\]) {#multisearchfirstindexhaystack-needle1-needle2-needlen}

Returns the index `i` (starting from 1) of the first found needle<sub>i</sub> in the string `haystack`, and 0 otherwise.

For case-insensitive search and/or search in UTF-8, use the functions `multiSearchFirstIndexCaseInsensitive, multiSearchFirstIndexUTF8, multiSearchFirstIndexCaseInsensitiveUTF8`.

## multiSearchAny(haystack, \[needle<sub>1</sub>, needle<sub>2</sub>, …, needle<sub>n</sub>\]) {#function-multisearchany}
## multiSearchAny(haystack, \[needle<sub>1</sub>, needle<sub>2</sub>, ..., needle<sub>n</sub>\]) {#function-multisearchany}

Returns 1 if at least one of the needle<sub>i</sub> substrings is found in the string `haystack`, and 0 otherwise.
@ -343,30 +343,30 @@ Result:

The regular expression works with the string as a set of bytes. A regular expression can't contain null bytes.
For patterns that search for substrings in a string, it is better to use LIKE or position, since they work much faster.

## multiMatchAny(haystack, \[pattern<sub>1</sub>, pattern<sub>2</sub>, …, pattern<sub>n</sub>\]) {#multimatchanyhaystack-pattern1-pattern2-patternn}
## multiMatchAny(haystack, \[pattern<sub>1</sub>, pattern<sub>2</sub>, ..., pattern<sub>n</sub>\]) {#multimatchanyhaystack-pattern1-pattern2-patternn}

The same as `match`, but returns 0 if none of the regular expressions matched and 1 if at least one did. It uses the [hyperscan](https://github.com/intel/hyperscan) library for regular expression matching. For patterns that search for many substrings in a string, it is better to use `multiSearchAny`, since it works much faster.

:::note Note
The length of any of the `haystack` strings must be less than 2<sup>32</sup> bytes, otherwise an exception is thrown. This restriction comes from the hyperscan API.
:::
## multiMatchAnyIndex(haystack, \[pattern<sub>1</sub>, pattern<sub>2</sub>, …, pattern<sub>n</sub>\]) {#multimatchanyindexhaystack-pattern1-pattern2-patternn}
## multiMatchAnyIndex(haystack, \[pattern<sub>1</sub>, pattern<sub>2</sub>, ..., pattern<sub>n</sub>\]) {#multimatchanyindexhaystack-pattern1-pattern2-patternn}

The same as `multiMatchAny`, but returns any index of a matching regular expression.

## multiMatchAllIndices(haystack, \[pattern<sub>1</sub>, pattern<sub>2</sub>, …, pattern<sub>n</sub>\]) {#multimatchallindiceshaystack-pattern1-pattern2-patternn}
## multiMatchAllIndices(haystack, \[pattern<sub>1</sub>, pattern<sub>2</sub>, ..., pattern<sub>n</sub>\]) {#multimatchallindiceshaystack-pattern1-pattern2-patternn}

The same as `multiMatchAny`, but returns an array of all indices of all matching regular expressions, in any order.

## multiFuzzyMatchAny(haystack, distance, \[pattern<sub>1</sub>, pattern<sub>2</sub>, …, pattern<sub>n</sub>\]) {#multifuzzymatchanyhaystack-distance-pattern1-pattern2-patternn}
## multiFuzzyMatchAny(haystack, distance, \[pattern<sub>1</sub>, pattern<sub>2</sub>, ..., pattern<sub>n</sub>\]) {#multifuzzymatchanyhaystack-distance-pattern1-pattern2-patternn}

The same as `multiMatchAny`, but returns 1 if any pattern matches the haystack within a constant [edit distance](https://en.wikipedia.org/wiki/Edit_distance). This function relies on the experimental [hyperscan](https://intel.github.io/hyperscan/dev-reference/compilation.html#approximate-matching) library and can be slow for some corner cases. Performance depends on the edit distance value and the patterns used, but it is always slower compared to the non-fuzzy variants.

## multiFuzzyMatchAnyIndex(haystack, distance, \[pattern<sub>1</sub>, pattern<sub>2</sub>, …, pattern<sub>n</sub>\]) {#multifuzzymatchanyindexhaystack-distance-pattern1-pattern2-patternn}
## multiFuzzyMatchAnyIndex(haystack, distance, \[pattern<sub>1</sub>, pattern<sub>2</sub>, ..., pattern<sub>n</sub>\]) {#multifuzzymatchanyindexhaystack-distance-pattern1-pattern2-patternn}

The same as `multiFuzzyMatchAny`, but returns any index of a matching regular expression within a constant edit distance.

## multiFuzzyMatchAllIndices(haystack, distance, \[pattern<sub>1</sub>, pattern<sub>2</sub>, …, pattern<sub>n</sub>\]) {#multifuzzymatchallindiceshaystack-distance-pattern1-pattern2-patternn}
## multiFuzzyMatchAllIndices(haystack, distance, \[pattern<sub>1</sub>, pattern<sub>2</sub>, ..., pattern<sub>n</sub>\]) {#multifuzzymatchallindiceshaystack-distance-pattern1-pattern2-patternn}

The same as `multiFuzzyMatchAny`, but returns an array of all indices of all matching regular expressions, in any order, within a constant edit distance.
@ -9,15 +9,15 @@ sidebar_label: Functions for working with tuples

## tuple {#tuple}

A function that allows grouping multiple columns.
For columns with the types T1, T2, …, it returns a Tuple(T1, T2, …) type tuple containing these columns. There is no cost to execute the function.
For columns with the types T1, T2, ..., it returns a Tuple(T1, T2, ...) type tuple containing these columns. There is no cost to execute the function.
Tuples are normally used as intermediate values for an argument of IN operators, or for creating a list of formal parameters of lambda functions. Tuples can't be written to a table.

The function implements the operator `(x, y, …)`.
The function implements the operator `(x, y, ...)`.

**Syntax**

``` sql
tuple(x, y, …)
tuple(x, y, ...)
```

## tupleElement {#tupleelement}

@ -14,7 +14,7 @@ sidebar_label: "Functions for working with URLs"

### protocol {#protocol}

Returns the protocol. Examples: http, ftp, mailto, magnet…
Returns the protocol. Examples: http, ftp, mailto, magnet...

### domain {#domain}
@ -4,7 +4,7 @@ sidebar_position: 51
sidebar_label: COMMENT
---

# ALTER TABLE … MODIFY COMMENT {#alter-modify-comment}
# ALTER TABLE ... MODIFY COMMENT {#alter-modify-comment}

Adds, changes, or removes a comment on a table, regardless of whether it was set before. The comment change is reflected both in the [system.tables](../../../operations/system-tables/tables.md) system table and in the output of the `SHOW CREATE TABLE` query.

Some files were not shown because too many files have changed in this diff.