Merge branch 'master' into rabbit-optimize

alesapin 2020-10-28 10:24:05 +03:00 committed by GitHub
commit 617e42ddb4
520 changed files with 23659 additions and 3762 deletions


@ -409,7 +409,7 @@
## ClickHouse release 20.6
### ClickHouse release v20.6.3.28-stable
#### New Feature
@ -2362,7 +2362,7 @@ No changes compared to v20.4.3.16-stable.
* `Live View` table engine refactoring. [#8519](https://github.com/ClickHouse/ClickHouse/pull/8519) ([vzakaznikov](https://github.com/vzakaznikov))
* Add additional checks for external dictionaries created from DDL-queries. [#8127](https://github.com/ClickHouse/ClickHouse/pull/8127) ([alesapin](https://github.com/alesapin))
* Fix error `Column ... already exists` while using `FINAL` and `SAMPLE` together, e.g. `select count() from table final sample 1/2`. Fixes [#5186](https://github.com/ClickHouse/ClickHouse/issues/5186). [#7907](https://github.com/ClickHouse/ClickHouse/pull/7907) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* Now table the first argument of `joinGet` function can be table indentifier. [#7707](https://github.com/ClickHouse/ClickHouse/pull/7707) ([Amos Bird](https://github.com/amosbird))
+* Now the first argument of the `joinGet` function can be a table identifier. [#7707](https://github.com/ClickHouse/ClickHouse/pull/7707) ([Amos Bird](https://github.com/amosbird))
* Allow using `MaterializedView` with subqueries above `Kafka` tables. [#8197](https://github.com/ClickHouse/ClickHouse/pull/8197) ([filimonov](https://github.com/filimonov))
* Now background moves between disks run in a separate thread pool. [#7670](https://github.com/ClickHouse/ClickHouse/pull/7670) ([Vladimir Chebotarev](https://github.com/excitoon))
* `SYSTEM RELOAD DICTIONARY` now executes synchronously. [#8240](https://github.com/ClickHouse/ClickHouse/pull/8240) ([Vitaly Baranov](https://github.com/vitlibar))


@ -17,4 +17,6 @@ ClickHouse is an open-source column-oriented database management system that all
## Upcoming Events
* [ClickHouse virtual office hours](https://www.eventbrite.com/e/clickhouse-october-virtual-meetup-office-hours-tickets-123129500651) on October 22, 2020.
* [The Second ClickHouse Meetup East (online)](https://www.eventbrite.com/e/the-second-clickhouse-meetup-east-tickets-126787955187) on October 31, 2020.
* [ClickHouse for Enterprise Meetup (online in Russian)](https://arenadata-events.timepad.ru/event/1465249/) on November 10, 2020.


@ -51,7 +51,7 @@ struct StringRef
};
/// Here constexpr doesn't imply inline, see https://www.viva64.com/en/w/v1043/
-/// nullptr can't be used because the StringRef values are used in SipHash's pointer arithmetics
+/// nullptr can't be used because the StringRef values are used in SipHash's pointer arithmetic
/// and the UBSan thinks that something like nullptr + 8 is UB.
constexpr const inline char empty_string_ref_addr{};
constexpr const inline StringRef EMPTY_STRING_REF{&empty_string_ref_addr, 0};


@ -11,11 +11,11 @@ CFLAGS (GLOBAL -DDBMS_VERSION_MAJOR=${VERSION_MAJOR})
CFLAGS (GLOBAL -DDBMS_VERSION_MINOR=${VERSION_MINOR})
CFLAGS (GLOBAL -DDBMS_VERSION_PATCH=${VERSION_PATCH})
CFLAGS (GLOBAL -DVERSION_FULL=\"\\\"${VERSION_FULL}\\\"\")
CFLAGS (GLOBAL -DVERSION_MAJOR=${VERSION_MAJOR})
CFLAGS (GLOBAL -DVERSION_MINOR=${VERSION_MINOR})
CFLAGS (GLOBAL -DVERSION_PATCH=${VERSION_PATCH})
-# TODO: not supported yet, not sure if ya.make supports arithmetics.
+# TODO: not supported yet, not sure if ya.make supports arithmetic.
CFLAGS (GLOBAL -DVERSION_INTEGER=0)
CFLAGS (GLOBAL -DVERSION_NAME=\"\\\"${VERSION_NAME}\\\"\")


@ -192,7 +192,7 @@ set(SRCS
${HDFS3_SOURCE_DIR}/common/FileWrapper.h
)
-# old kernels (< 3.17) doens't have SYS_getrandom. Always use POSIX implementation to have better compatibility
+# old kernels (< 3.17) don't have SYS_getrandom. Always use the POSIX implementation to have better compatibility
set_source_files_properties(${HDFS3_SOURCE_DIR}/rpc/RpcClient.cpp PROPERTIES COMPILE_FLAGS "-DBOOST_UUID_RANDOM_PROVIDER_FORCE_POSIX=1")
# target

@ -1 +1 @@
-Subproject commit f5638e954a79f50bac7c7a5deaa5a241e0ce8b5f
+Subproject commit 1485b0de3eaa1508dfe49a5ba1e4aa2a71fd8335


@ -31,10 +31,6 @@ RUN curl -O https://clickhouse-builds.s3.yandex.net/utils/1/dpkg-deb \
&& chmod +x dpkg-deb \
&& cp dpkg-deb /usr/bin
-RUN export CODENAME="$(lsb_release --codename --short | tr 'A-Z' 'a-z')" \
-&& wget -nv -O /tmp/arrow-keyring.deb "https://apache.bintray.com/arrow/ubuntu/apache-arrow-archive-keyring-latest-${CODENAME}.deb" \
-&& dpkg -i /tmp/arrow-keyring.deb
# Libraries from OS are only needed to test the "unbundled" build (this is not used in production).
RUN apt-get update \
&& apt-get install \


@ -1,6 +1,10 @@
# docker build -t yandex/clickhouse-unbundled-builder .
FROM yandex/clickhouse-deb-builder
+RUN export CODENAME="$(lsb_release --codename --short | tr 'A-Z' 'a-z')" \
+&& wget -nv -O /tmp/arrow-keyring.deb "https://apache.bintray.com/arrow/ubuntu/apache-arrow-archive-keyring-latest-${CODENAME}.deb" \
+&& dpkg -i /tmp/arrow-keyring.deb
# Libraries from OS are only needed to test the "unbundled" build (that is not used in production).
RUN apt-get update \
&& apt-get install \


@ -82,6 +82,7 @@ RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
ENV COMMIT_SHA=''
ENV PULL_REQUEST_NUMBER=''
ENV COPY_CLICKHOUSE_BINARY_TO_OUTPUT=0
COPY run.sh /
CMD ["/bin/bash", "/run.sh"]


@ -172,6 +172,9 @@ function build
(
cd "$FASTTEST_BUILD"
time ninja clickhouse-bundle | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/build_log.txt"
if [ "$COPY_CLICKHOUSE_BINARY_TO_OUTPUT" -eq "1" ]; then
cp programs/clickhouse "$FASTTEST_OUTPUT/clickhouse"
fi
ccache --show-stats ||:
)
}
@ -268,7 +271,7 @@ TESTS_TO_SKIP=(
00974_query_profiler
# Look at DistributedFilesToInsert, so cannot run in parallel.
-01457_DistributedFilesToInsert
+01460_DistributedFilesToInsert
01541_max_memory_usage_for_user


@ -63,7 +63,7 @@ function configure
# Make copies of the original db for both servers. Use hardlinks instead
# of copying to save space. Before that, remove preprocessed configs and
# system tables, because sharing them between servers with hardlinks may
# lead to weird effects.
rm -r left/db ||:
rm -r right/db ||:
rm -r db0/preprocessed_configs ||:
@ -77,15 +77,12 @@ function restart
while killall clickhouse-server; do echo . ; sleep 1 ; done
echo all killed
-# Disable percpu arenas because they segfault when the process is bound to
-# a particular NUMA node: https://github.com/jemalloc/jemalloc/pull/1939
-#
-# About the jemalloc settings:
+# Change the jemalloc settings here.
# https://github.com/jemalloc/jemalloc/wiki/Getting-Started
-export MALLOC_CONF="percpu_arena:disabled,confirm_conf:true"
+export MALLOC_CONF="confirm_conf:true"
set -m # Spawn servers in their own process groups
left/clickhouse-server --config-file=left/config/config.xml \
-- --path left/db --user_files_path left/db/user_files \
&>> left-server-log.log &
@ -211,7 +208,7 @@ function run_tests
echo test "$test_name"
# Don't profile if we're past the time limit.
-# Use awk because bash doesn't support floating point arithmetics.
+# Use awk because bash doesn't support floating point arithmetic.
profile_seconds=$(awk "BEGIN { print ($profile_seconds_left > 0 ? 10 : 0) }")
TIMEFORMAT=$(printf "$test_name\t%%3R\t%%3U\t%%3S\n")
@ -544,10 +541,10 @@ create table queries engine File(TSVWithNamesAndTypes, 'report/queries.tsv')
as select
abs(diff) > report_threshold and abs(diff) > stat_threshold as changed_fail,
abs(diff) > report_threshold - 0.05 and abs(diff) > stat_threshold as changed_show,
not changed_fail and stat_threshold > report_threshold + 0.10 as unstable_fail,
not changed_show and stat_threshold > report_threshold - 0.05 as unstable_show,
left, right, diff, stat_threshold,
if(report_threshold > 0, report_threshold, 0.10) as report_threshold,
query_metric_stats.test test, query_metric_stats.query_index query_index,
@ -770,7 +767,7 @@ create table all_tests_report engine File(TSV, 'report/all-queries.tsv') as
-- The threshold for 2) is significantly larger than the threshold for 1), to
-- avoid jitter.
create view shortness
as select
(test, query_index) in
(select * from file('analyze/marked-short-queries.tsv', TSV,
'test text, query_index int'))


@ -17,14 +17,24 @@ service clickhouse-server start && sleep 5
if grep -q -- "--use-skip-list" /usr/bin/clickhouse-test; then
SKIP_LIST_OPT="--use-skip-list"
fi
-# We can have several additional options so we path them as array because it's
-# more idiologically correct.
-read -ra ADDITIONAL_OPTIONS <<< "${ADDITIONAL_OPTIONS:-}"
function run_tests()
{
+# We can have several additional options so we pass them as an array because it's
+# more ideologically correct.
+read -ra ADDITIONAL_OPTIONS <<< "${ADDITIONAL_OPTIONS:-}"
# Skip these tests, because they fail when we rerun them multiple times
if [ "$NUM_TRIES" -gt "1" ]; then
ADDITIONAL_OPTIONS+=('--skip')
ADDITIONAL_OPTIONS+=('00000_no_tests_to_skip')
fi
for i in $(seq 1 $NUM_TRIES); do
-clickhouse-test --testname --shard --zookeeper --hung-check --print-time "$SKIP_LIST_OPT" "${ADDITIONAL_OPTIONS[@]}" "$SKIP_TESTS_OPTION" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee -a test_output/test_result.txt
+clickhouse-test --testname --shard --zookeeper --hung-check --print-time "$SKIP_LIST_OPT" "${ADDITIONAL_OPTIONS[@]}" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee -a test_output/test_result.txt
if [ ${PIPESTATUS[0]} -ne "0" ]; then
break;
fi
done
}


@ -35,7 +35,7 @@ RUN apt-get update \
ENV TZ=Europe/Moscow
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
-RUN pip3 install urllib3 testflows==1.6.57 docker-compose docker dicttoxml kazoo tzlocal
+RUN pip3 install urllib3 testflows==1.6.59 docker-compose docker dicttoxml kazoo tzlocal
ENV DOCKER_CHANNEL stable
ENV DOCKER_VERSION 17.09.1-ce


@ -18,4 +18,14 @@ toc_title: Cloud
- Encryption and isolation
- Automated maintenance
## Altinity.Cloud {#altinity.cloud}
[Altinity.Cloud](https://altinity.com/cloud-database/) is a fully managed ClickHouse-as-a-Service for the Amazon public cloud.
- Fast deployment of ClickHouse clusters on Amazon resources
- Easy scale-out/scale-in as well as vertical scaling of nodes
- Isolated per-tenant VPCs with public endpoint or VPC peering
- Configurable storage types and volume configurations
- Cross-AZ scaling for performance and high availability
- Built-in monitoring and SQL query editor
{## [Original article](https://clickhouse.tech/docs/en/commercial/cloud/) ##}


@ -30,4 +30,4 @@ Instead of inserting data manually, you might consider to use one of [client lib
- `input_format_import_nested_json` allows to insert nested JSON objects into columns of [Nested](../../sql-reference/data-types/nested-data-structures/nested.md) type.
!!! note "Note"
-Settings are specified as `GET` parameters for the HTTP interface or as additional command-line arguments prefixed with `--` for the CLI interface.
+Settings are specified as `GET` parameters for the HTTP interface or as additional command-line arguments prefixed with `--` for the `CLI` interface.


@ -1,5 +1,5 @@
---
-toc_priority: 17
+toc_priority: 19
toc_title: AMPLab Big Data Benchmark
---


@ -1,5 +1,5 @@
---
-toc_priority: 19
+toc_priority: 18
toc_title: Terabyte Click Logs from Criteo
---


@ -1,6 +1,6 @@
---
toc_folder_title: Example Datasets
-toc_priority: 15
+toc_priority: 14
toc_title: Introduction
---
@ -18,4 +18,4 @@ The list of documented datasets:
- [New York Taxi Data](../../getting-started/example-datasets/nyc-taxi.md)
- [OnTime](../../getting-started/example-datasets/ontime.md)
[Original article](https://clickhouse.tech/docs/en/getting_started/example_datasets) <!--hide-->


@ -1,5 +1,5 @@
---
-toc_priority: 14
+toc_priority: 15
toc_title: Yandex.Metrica Data
---


@ -1,5 +1,5 @@
---
-toc_priority: 16
+toc_priority: 20
toc_title: New York Taxi Data
---


@ -1,5 +1,5 @@
---
-toc_priority: 15
+toc_priority: 21
toc_title: OnTime
---


@ -1,5 +1,5 @@
---
-toc_priority: 20
+toc_priority: 16
toc_title: Star Schema Benchmark
---


@ -1,5 +1,5 @@
---
-toc_priority: 18
+toc_priority: 17
toc_title: WikiStat
---


@ -460,7 +460,7 @@ See also the [JSONEachRow](#jsoneachrow) format.
## JSONString {#jsonstring}
-Differs from JSON only in that data fields are output in strings, not in typed json values.
+Differs from JSON only in that data fields are output in strings, not in typed JSON values.
Example:
@ -596,7 +596,7 @@ When inserting the data, you should provide a separate JSON value for each row.
## JSONEachRowWithProgress {#jsoneachrowwithprogress}
## JSONStringEachRowWithProgress {#jsonstringeachrowwithprogress}
-Differs from JSONEachRow/JSONStringEachRow in that ClickHouse will also yield progress information as JSON objects.
+Differs from `JSONEachRow`/`JSONStringEachRow` in that ClickHouse will also yield progress information as JSON values.
```json
{"row":{"'hello'":"hello","multiply(42, number)":"0","range(5)":[0,1,2,3,4]}}
@ -608,7 +608,7 @@ Differs from JSONEachRow/JSONStringEachRow in that ClickHouse will also yield pr
## JSONCompactEachRowWithNamesAndTypes {#jsoncompacteachrowwithnamesandtypes}
## JSONCompactStringEachRowWithNamesAndTypes {#jsoncompactstringeachrowwithnamesandtypes}
-Differs from JSONCompactEachRow/JSONCompactStringEachRow in that the column names and types are written as the first two rows.
+Differs from `JSONCompactEachRow`/`JSONCompactStringEachRow` in that the column names and types are written as the first two rows.
```json
["'hello'", "multiply(42, number)", "range(5)"]


@ -6,7 +6,7 @@ toc_title: Client Libraries
# Client Libraries from Third-party Developers {#client-libraries-from-third-party-developers}
!!! warning "Disclaimer"
-Yandex does **not** maintain the libraries listed below and haven't done any extensive testing to ensure their quality.
+Yandex does **not** maintain the libraries listed below and hasn't done any extensive testing to ensure their quality.
- Python
- [infi.clickhouse_orm](https://github.com/Infinidat/infi.clickhouse_orm)


@ -0,0 +1,69 @@
---
toc_priority: 62
toc_title: OpenTelemetry Support
---
# [experimental] OpenTelemetry Support
[OpenTelemetry](https://opentelemetry.io/) is an open standard for collecting
traces and metrics from distributed applications. ClickHouse has some support
for OpenTelemetry.
!!! warning "Warning"
This is an experimental feature that will change in backwards-incompatible ways in future releases.
## Supplying Trace Context to ClickHouse
ClickHouse accepts trace context HTTP headers, as described by
the [W3C recommendation](https://www.w3.org/TR/trace-context/).
It also accepts trace context over the native protocol that is used for
communication between ClickHouse servers or between the client and server.
For manual testing, trace context headers conforming to the Trace Context
recommendation can be supplied to `clickhouse-client` using
`--opentelemetry-traceparent` and `--opentelemetry-tracestate` flags.
If no parent trace context is supplied, ClickHouse can start a new trace, with
probability controlled by the `opentelemetry_start_trace_probability` setting.
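For example, to have the server start a trace for roughly one query in ten, the setting can be changed per session (a minimal sketch; the value is illustrative):

``` sql
SET opentelemetry_start_trace_probability = 0.1;
```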
## Propagating the Trace Context
The trace context is propagated to downstream services in the following cases:
* Queries to remote ClickHouse servers, such as when using `Distributed` table
engine.
* `URL` table function. Trace context information is sent in HTTP headers.
## Tracing ClickHouse Itself
ClickHouse creates _trace spans_ for each query and some of the query execution
stages, such as query planning or distributed queries.
To be useful, the tracing information has to be exported to a monitoring system
that supports OpenTelemetry, such as Jaeger or Prometheus. ClickHouse avoids
a dependency on a particular monitoring system, instead only
providing the tracing data conforming to the standard. A natural way to do so
in an SQL RDBMS is a system table. OpenTelemetry trace span information
[required by the standard](https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/overview.md#span)
is stored in the system table called `system.opentelemetry_span_log`.
The table must be enabled in the server configuration; see the `opentelemetry_span_log`
element in the default config file `config.xml`. It is enabled by default.
The table has the following columns:
- `trace_id`
- `span_id`
- `parent_span_id`
- `operation_name`
- `start_time`
- `finish_time`
- `finish_date`
- `attribute.name`
- `attribute.values`
The tags or attributes are saved as two parallel arrays, containing the keys
and values. Use `ARRAY JOIN` to work with them.
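
As a sketch of that pattern (using the column names listed above; the filter is illustrative), the attributes of today's spans can be unnested like this:

``` sql
SELECT
    operation_name,
    attr_key,
    attr_value
FROM system.opentelemetry_span_log
ARRAY JOIN
    attribute.name AS attr_key,
    attribute.values AS attr_value
WHERE finish_date = today()
```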


@ -0,0 +1,48 @@
# system.crash_log {#system-tables_crash_log}
Contains information about stack traces for fatal errors. The table does not exist in the database by default; it is created only when fatal errors occur.
Columns:
- `event_date` ([Datetime](../../sql-reference/data-types/datetime.md)) — Date of the event.
- `event_time` ([Datetime](../../sql-reference/data-types/datetime.md)) — Time of the event.
- `timestamp_ns` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Timestamp of the event with nanosecond precision.
- `signal` ([Int32](../../sql-reference/data-types/int-uint.md)) — Signal number.
- `thread_id` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Thread ID.
- `query_id` ([String](../../sql-reference/data-types/string.md)) — Query ID.
- `trace` ([Array](../../sql-reference/data-types/array.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Stack trace at the moment of crash. Each element is a virtual memory address inside ClickHouse server process.
- `trace_full` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — Stack trace at the moment of crash. Each element contains a called method inside ClickHouse server process.
- `version` ([String](../../sql-reference/data-types/string.md)) — ClickHouse server version.
- `revision` ([UInt32](../../sql-reference/data-types/int-uint.md)) — ClickHouse server revision.
- `build_id` ([String](../../sql-reference/data-types/string.md)) — BuildID that is generated by the compiler.
**Example**
Query:
``` sql
SELECT * FROM system.crash_log ORDER BY event_time DESC LIMIT 1;
```
Result (not full):
``` text
Row 1:
──────
event_date: 2020-10-14
event_time: 2020-10-14 15:47:40
timestamp_ns: 1602679660271312710
signal: 11
thread_id: 23624
query_id: 428aab7c-8f5c-44e9-9607-d16b44467e69
trace: [188531193,...]
trace_full: ['3. DB::(anonymous namespace)::FunctionFormatReadableTimeDelta::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> >&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long) const @ 0xb3cc1f9 in /home/username/work/ClickHouse/build/programs/clickhouse',...]
version: ClickHouse 20.11.1.1
revision: 54442
build_id:
```
**See also**
- [trace_log](../../operations/system-tables/trace_log.md) system table
[Original article](https://clickhouse.tech/docs/en/operations/system-tables/crash-log)


@ -20,8 +20,8 @@ The `system.query_log` table registers two kinds of queries:
Each query creates one or two rows in the `query_log` table, depending on the status (see the `type` column) of the query:
-1. If the query execution was successful, two rows with the `QueryStart` and `QueryFinish` types are created .
-2. If an error occurred during query processing, two events with the `QueryStart` and `ExceptionWhileProcessing` types are created .
+1. If the query execution was successful, two rows with the `QueryStart` and `QueryFinish` types are created.
+2. If an error occurred during query processing, two events with the `QueryStart` and `ExceptionWhileProcessing` types are created.
3. If an error occurred before launching the query, a single event with the `ExceptionBeforeStart` type is created.
Columns:
@ -37,8 +37,8 @@ Columns:
- `query_start_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — Start time of query execution.
- `query_start_time_microseconds` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — Start time of query execution with microsecond precision.
- `query_duration_ms` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Duration of query execution in milliseconds.
-- `read_rows` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Total number or rows read from all tables and table functions participated in query. It includes usual subqueries, subqueries for `IN` and `JOIN`. For distributed queries `read_rows` includes the total number of rows read at all replicas. Each replica sends its `read_rows` value, and the server-initiator of the query summarize all received and local values. The cache volumes doesn't affect this value.
-- `read_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Total number or bytes read from all tables and table functions participated in query. It includes usual subqueries, subqueries for `IN` and `JOIN`. For distributed queries `read_bytes` includes the total number of rows read at all replicas. Each replica sends its `read_bytes` value, and the server-initiator of the query summarize all received and local values. The cache volumes doesn't affect this value.
+- `read_rows` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Total number of rows read from all tables and table functions that participated in the query. It includes usual subqueries, subqueries for `IN` and `JOIN`. For distributed queries `read_rows` includes the total number of rows read at all replicas. Each replica sends its `read_rows` value, and the server-initiator of the query summarizes all received and local values. The cache volumes don't affect this value.
+- `read_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Total number of bytes read from all tables and table functions that participated in the query. It includes usual subqueries, subqueries for `IN` and `JOIN`. For distributed queries `read_bytes` includes the total number of bytes read at all replicas. Each replica sends its `read_bytes` value, and the server-initiator of the query summarizes all received and local values. The cache volumes don't affect this value.
- `written_rows` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — For `INSERT` queries, the number of written rows. For other queries, the column value is 0.
- `written_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — For `INSERT` queries, the number of written bytes. For other queries, the column value is 0.
- `result_rows` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Number of rows in a result of the `SELECT` query, or a number of rows in the `INSERT` query.
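
For a quick look at recent entries, a query along these lines works (a sketch; it assumes the query log is enabled, which it is by default):

``` sql
SELECT event_time, query_duration_ms, read_rows, read_bytes, query
FROM system.query_log
WHERE type = 'QueryFinish'
ORDER BY event_time DESC
LIMIT 5
```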


@ -1,6 +1,6 @@
# system.query_thread_log {#system_tables-query_thread_log}
-Contains information about threads which execute queries, for example, thread name, thread start time, duration of query processing.
+Contains information about threads that execute queries, for example, thread name, thread start time, duration of query processing.
To start logging:


@ -1,6 +1,6 @@
# system.text_log {#system_tables-text_log}
-Contains logging entries. Logging level which goes to this table can be limited with `text_log.level` server setting.
+Contains logging entries. The logging level which goes to this table can be limited with the `text_log.level` server setting.
Columns:


@ -18,7 +18,7 @@ Columns:
- `revision` ([UInt32](../../sql-reference/data-types/int-uint.md)) — ClickHouse server build revision.
-When connecting to server by `clickhouse-client`, you see the string similar to `Connected to ClickHouse server version 19.18.1 revision 54429.`. This field contains the `revision`, but not the `version` of a server.
+When connecting to the server by `clickhouse-client`, you see a string similar to `Connected to ClickHouse server version 19.18.1 revision 54429.`. This field contains the `revision`, but not the `version` of a server.
- `timer_type` ([Enum8](../../sql-reference/data-types/enum.md)) — Timer type:


@ -80,4 +80,4 @@ Code: 43. DB::Exception: Received from localhost:9000. DB::Exception: Wrong argu
## See Also {#see-also}
- [INTERVAL](../../../sql-reference/operators/index.md#operator-interval) operator
-- [toInterval](../../../sql-reference/functions/type-conversion-functions.md#function-tointerval) type convertion functions
+- [toInterval](../../../sql-reference/functions/type-conversion-functions.md#function-tointerval) type conversion functions


@ -6,7 +6,7 @@ toc_title: Encoding
# Encoding Functions {#encoding-functions}
## char {#char}
Returns a string with length equal to the number of passed arguments, where each byte has the value of the corresponding argument. Accepts multiple arguments of numeric types. If the value of an argument is out of range of the UInt8 data type, it is converted to UInt8 with possible rounding and overflow.
**Syntax**


@ -551,7 +551,7 @@ formatReadableTimeDelta(column[, maximum_unit])
**Parameters**
- `column` — A column with a numeric time delta.
- `maximum_unit` — Optional. Maximum unit to show. Acceptable values: seconds, minutes, hours, days, months, years.
Example:
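
A sketch of a typical call (the exact output wording may vary between versions):

``` sql
SELECT formatReadableTimeDelta(7650) AS time_delta
-- 2 hours, 7 minutes, 30 seconds
```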
@ -1584,7 +1584,7 @@ isDecimalOverflow(d, [p])
**Parameters**
- `d` — value. [Decimal](../../sql-reference/data-types/decimal.md).
-- `p` — precision. Optional. If omitted, the initial presicion of the first argument is used. Using of this paratemer could be helpful for data extraction to another DBMS or file. [UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges).
+- `p` — precision. Optional. If omitted, the initial precision of the first argument is used. Using this parameter could be helpful for data extraction to another DBMS or file. [UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges).
**Returned values**


@ -61,6 +61,54 @@ SELECT toUUID('61f0c404-5cb3-11e7-907b-a6006ad3dba0') AS uuid
└──────────────────────────────────────┘
```
## toUUIDOrNull (x) {#touuidornull-x}
It takes an argument of type String and tries to parse it into UUID. If it fails, it returns NULL.
``` sql
toUUIDOrNull(String)
```
**Returned value**
The Nullable(UUID) type value.
**Usage example**
``` sql
SELECT toUUIDOrNull('61f0c404-5cb3-11e7-907b-a6006ad3dba0T') AS uuid
```
``` text
┌─uuid─┐
│ ᴺᵁᴸᴸ │
└──────┘
```
## toUUIDOrZero (x) {#touuidorzero-x}
It takes an argument of type String and tries to parse it into UUID. If it fails, it returns a zero UUID.
``` sql
toUUIDOrZero(String)
```
**Returned value**
The UUID type value.
**Usage example**
``` sql
SELECT toUUIDOrZero('61f0c404-5cb3-11e7-907b-a6006ad3dba0T') AS uuid
```
``` text
┌─────────────────────────────────uuid─┐
│ 00000000-0000-0000-0000-000000000000 │
└──────────────────────────────────────┘
```
## UUIDStringToNum {#uuidstringtonum}
Accepts a string containing 36 characters in the format `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`, and returns it as a set of bytes in a [FixedString(16)](../../sql-reference/data-types/fixedstring.md).
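
A short usage sketch (the literal UUID here is just an example value):

``` sql
SELECT
    '612f3c40-5d3b-217e-707b-6a546a3d7b29' AS uuid,
    UUIDStringToNum(uuid) AS bytes
```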


@ -1,5 +1,5 @@
---
-toc_priority: 37
+toc_priority: 38
toc_title: Operators
---
@ -169,7 +169,7 @@ SELECT now() AS current_date_time, current_date_time + INTERVAL 4 DAY + INTERVAL
**See Also**
- [Interval](../../sql-reference/data-types/special-data-types/interval.md) data type
-- [toInterval](../../sql-reference/functions/type-conversion-functions.md#function-tointerval) type convertion functions
+- [toInterval](../../sql-reference/functions/type-conversion-functions.md#function-tointerval) type conversion functions
## Logical Negation Operator {#logical-negation-operator}


@ -1,5 +1,5 @@
---
-toc_priority: 36
+toc_priority: 35
toc_title: ALTER
---


@ -5,16 +5,16 @@ toc_title: SAMPLE BY
# Manipulating Sampling-Key Expressions {#manipulations-with-sampling-key-expressions}
Syntax:
``` sql
ALTER TABLE [db].name [ON CLUSTER cluster] MODIFY SAMPLE BY new_expression
```
The command changes the [sampling key](../../../engines/table-engines/mergetree-family/mergetree.md) of the table to `new_expression` (an expression or a tuple of expressions).
-The command is lightweight in a sense that it only changes metadata. The primary key must contain the new sample key.
+The command is lightweight in the sense that it only changes metadata. The primary key must contain the new sample key.
!!! note "Note"
-It only works for tables in the [`MergeTree`](../../../engines/table-engines/mergetree-family/mergetree.md) family (including
-[replicated](../../../engines/table-engines/mergetree-family/replication.md) tables).
+It only works for tables in the [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md) family (including
+[replicated](../../../engines/table-engines/mergetree-family/replication.md) tables).
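
A usage sketch (the table name and expression are hypothetical; the primary key of `hits` is assumed to contain `intHash32(UserID)`):

``` sql
ALTER TABLE hits MODIFY SAMPLE BY intHash32(UserID)
```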


@ -1,5 +1,5 @@
---
-toc_priority: 42
+toc_priority: 40
toc_title: ATTACH
---


@ -1,5 +1,5 @@
---
-toc_priority: 43
+toc_priority: 41
toc_title: CHECK
---


@ -1,5 +1,5 @@
---
-toc_priority: 1
+toc_priority: 35
toc_title: DATABASE
---


@ -1,5 +1,5 @@
---
-toc_priority: 4
+toc_priority: 38
toc_title: DICTIONARY
---


@ -1,6 +1,6 @@
---
toc_folder_title: CREATE
-toc_priority: 35
+toc_priority: 34
toc_title: Overview
---


@ -1,5 +1,5 @@
---
-toc_priority: 8
+toc_priority: 42
toc_title: QUOTA
---


@ -1,5 +1,5 @@
---
-toc_priority: 6
+toc_priority: 40
toc_title: ROLE
---


@ -1,5 +1,5 @@
---
-toc_priority: 7
+toc_priority: 41
toc_title: ROW POLICY
---


@ -1,5 +1,5 @@
---
-toc_priority: 9
+toc_priority: 43
toc_title: SETTINGS PROFILE
---


@ -1,5 +1,5 @@
---
-toc_priority: 2
+toc_priority: 36
toc_title: TABLE
---
@ -121,7 +121,7 @@ Defines storage time for values. Can be specified only for MergeTree-family tabl
## Column Compression Codecs {#codecs}
By default, ClickHouse applies the `lz4` compression method. For `MergeTree`-engine family you can change the default compression method in the [compression](../../../operations/server-configuration-parameters/settings.md#server-settings-compression) section of a server configuration.
You can also define the compression method for each individual column in the `CREATE TABLE` query.
@ -138,7 +138,7 @@ ENGINE = <Engine>
...
```
-The `Default` codec can be specified to reference default compression which may dependend on different settings (and properties of data) in runtime.
+The `Default` codec can be specified to reference default compression which may depend on different settings (and properties of data) in runtime.
Example: `value UInt64 CODEC(Default)` — the same as no codec specification.
You can also remove the current CODEC from the column and use the default compression from config.xml:
@ -149,7 +149,7 @@ ALTER TABLE codec_example MODIFY COLUMN float_value CODEC(Default);
Codecs can be combined in a pipeline, for example, `CODEC(Delta, Default)`.
To select the best codec combination for your project, pass benchmarks similar to those described in the Altinity [New Encodings to Improve ClickHouse Efficiency](https://www.altinity.com/blog/2019/7/new-encodings-to-improve-clickhouse) article. One thing to note is that a codec can't be applied to an ALIAS column.
!!! warning "Warning"
You can't decompress ClickHouse database files with external utilities like `lz4`. Instead, use the special [clickhouse-compressor](https://github.com/ClickHouse/ClickHouse/tree/master/programs/compressor) utility.
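
For instance, a table combining several of the codecs above might look like this (a sketch; column names and codec choices are illustrative):

``` sql
CREATE TABLE codec_example
(
    dt Date CODEC(ZSTD),
    ts DateTime CODEC(DoubleDelta, LZ4),
    float_value Float32 CODEC(Gorilla),
    value UInt64 CODEC(Default)
)
ENGINE = MergeTree()
ORDER BY dt
```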


@ -1,5 +1,5 @@
---
-toc_priority: 5
+toc_priority: 39
toc_title: USER
---


@ -1,5 +1,5 @@
---
-toc_priority: 3
+toc_priority: 37
toc_title: VIEW
---


@ -1,5 +1,5 @@
---
-toc_priority: 44
+toc_priority: 42
toc_title: DESCRIBE
---


@ -1,5 +1,5 @@
---
-toc_priority: 45
+toc_priority: 43
toc_title: DETACH
---


@ -1,88 +1,100 @@
---
-toc_priority: 46
+toc_priority: 44
toc_title: DROP
---
# DROP Statements {#drop}
-Deletes existing entity. If `IF EXISTS` clause is specified, these queries doesn't return an error if the entity doesn't exist.
+Deletes an existing entity. If the `IF EXISTS` clause is specified, these queries don't return an error if the entity doesn't exist.
## DROP DATABASE {#drop-database}
Deletes all tables inside the `db` database, then deletes the `db` database itself.
Syntax:
``` sql
DROP DATABASE [IF EXISTS] db [ON CLUSTER cluster]
```
## DROP TABLE {#drop-table}
Deletes the table.
Syntax:
``` sql
DROP [TEMPORARY] TABLE [IF EXISTS] [db.]name [ON CLUSTER cluster]
```
## DROP DICTIONARY {#drop-dictionary}
Deletes the dictionary.
Syntax:
``` sql
DROP DICTIONARY [IF EXISTS] [db.]name
```
## DROP USER {#drop-user-statement}
Deletes a user.
Syntax:
``` sql
DROP USER [IF EXISTS] name [,...] [ON CLUSTER cluster_name]
```
## DROP ROLE {#drop-role-statement}
Deletes a role. The deleted role is revoked from all the entities where it was assigned.
Syntax:
``` sql
DROP ROLE [IF EXISTS] name [,...] [ON CLUSTER cluster_name]
```
## DROP ROW POLICY {#drop-row-policy-statement}
Deletes a row policy. Deleted row policy is revoked from all the entities where it was assigned.
Syntax:
``` sql
DROP [ROW] POLICY [IF EXISTS] name [,...] ON [database.]table [,...] [ON CLUSTER cluster_name]
```
## DROP QUOTA {#drop-quota-statement}
Deletes a quota. The deleted quota is revoked from all the entities where it was assigned.
Syntax:
``` sql
DROP QUOTA [IF EXISTS] name [,...] [ON CLUSTER cluster_name]
```
## DROP SETTINGS PROFILE {#drop-settings-profile-statement}
Deletes a settings profile. The deleted settings profile is revoked from all the entities where it was assigned.
Syntax:
``` sql
DROP [SETTINGS] PROFILE [IF EXISTS] name [,...] [ON CLUSTER cluster_name]
```
## DROP VIEW {#drop-view}
Deletes a view. Views can be deleted by a `DROP TABLE` command as well but `DROP VIEW` checks that `[db.]name` is a view.
Syntax:
``` sql
DROP VIEW [IF EXISTS] [db.]name [ON CLUSTER cluster]
```
[Original article](https://clickhouse.tech/docs/en/sql-reference/statements/drop/) <!--hide-->


@ -1,5 +1,5 @@
---
-toc_priority: 47
+toc_priority: 45
toc_title: EXISTS
---


@ -1,5 +1,5 @@
---
-toc_priority: 39
+toc_priority: 38
toc_title: GRANT
---


@ -1,5 +1,5 @@
---
-toc_priority: 34
+toc_priority: 33
toc_title: INSERT INTO
---


@ -1,5 +1,5 @@
---
-toc_priority: 48
+toc_priority: 46
toc_title: KILL
---


@ -1,5 +1,5 @@
---
-toc_priority: 49
+toc_priority: 47
toc_title: OPTIMIZE
---


@ -1,5 +1,5 @@
---
-toc_priority: 50
+toc_priority: 48
toc_title: RENAME
---


@ -1,5 +1,5 @@
---
-toc_priority: 40
+toc_priority: 39
toc_title: REVOKE
---


@ -1,7 +1,7 @@
---
title: SELECT Query
toc_folder_title: SELECT
-toc_priority: 33
+toc_priority: 32
toc_title: Overview
---


@ -1,5 +1,5 @@
---
-toc_priority: 52
+toc_priority: 51
toc_title: SET ROLE
---


@ -1,5 +1,5 @@
---
-toc_priority: 51
+toc_priority: 49
toc_title: SET
---


@ -1,5 +1,5 @@
---
-toc_priority: 38
+toc_priority: 37
toc_title: SHOW
---


@ -1,5 +1,5 @@
---
-toc_priority: 37
+toc_priority: 36
toc_title: SYSTEM
---


@ -1,5 +1,5 @@
---
-toc_priority: 53
+toc_priority: 52
toc_title: TRUNCATE
---


@ -1,5 +1,5 @@
---
-toc_priority: 54
+toc_priority: 53
toc_title: USE
---


@ -1,3 +1,8 @@
---
toc_priority: 1
toc_title: "\u041f\u043e\u0441\u0442\u0430\u0432\u0449\u0438\u043a\u0438\u0020\u043e\u0431\u043b\u0430\u0447\u043d\u044b\u0445\u0020\u0443\u0441\u043b\u0443\u0433\u0020\u0043\u006c\u0069\u0063\u006b\u0048\u006f\u0075\u0073\u0065"
---
# ClickHouse Cloud Service Providers {#clickhouse-cloud-service-providers}
!!! info "Info"


@ -1,3 +1,8 @@
---
toc_priority: 62
toc_title: "\u041e\u0431\u0437\u043e\u0440\u0020\u0430\u0440\u0445\u0438\u0442\u0435\u043a\u0442\u0443\u0440\u044b\u0020\u0043\u006c\u0069\u0063\u006b\u0048\u006f\u0075\u0073\u0065"
---
# Overview of ClickHouse Architecture {#overview-of-clickhouse-architecture}
ClickHouse is a true column-oriented DBMS. Data is stored by columns, and during processing — in arrays (vectors or chunks of columns). Whenever possible, operations are performed on arrays rather than on individual values. This is called "vectorized query execution", and it helps lower the cost of actual data processing.


@ -1,3 +1,9 @@
---
toc_priority: 71
toc_title: "\u041d\u0430\u0432\u0438\u0433\u0430\u0446\u0438\u044f\u0020\u043f\u043e\u0020\u043a\u043e\u0434\u0443\u0020\u0043\u006c\u0069\u0063\u006b\u0048\u006f\u0075\u0073\u0065"
---
# Navigating the ClickHouse Code {#navigatsiia-po-kodu-clickhouse}
For online code navigation, **Woboq** is available [here](https://clickhouse.tech/codebrowser/html_report///ClickHouse/src/index.html). It provides convenient navigation between source files, semantic highlighting, tooltips, indexing, and search. The code snapshot is updated daily.


@ -1,3 +1,9 @@
---
toc_priority: 70
toc_title: "\u0418\u0441\u043f\u043e\u043b\u044c\u0437\u0443\u0435\u043c\u044b\u0435\u0020\u0441\u0442\u043e\u0440\u043e\u043d\u043d\u0438\u0435\u0020\u0431\u0438\u0431\u043b\u0438\u043e\u0442\u0435\u043a\u0438"
---
# Third-Party Libraries Used {#ispolzuemye-storonnie-biblioteki}
| Library | License |


@ -1,3 +1,8 @@
---
toc_priority: 61
toc_title: "\u0418\u043d\u0441\u0442\u0440\u0443\u043a\u0446\u0438\u044f\u0020\u0434\u043b\u044f\u0020\u0440\u0430\u0437\u0440\u0430\u0431\u043e\u0442\u0447\u0438\u043a\u043e\u0432"
---
# Developer Instructions
Building ClickHouse is supported on Linux, FreeBSD, and Mac OS X.


@ -1,3 +1,9 @@
---
toc_priority: 68
toc_title: "\u041a\u0430\u043a\u0020\u043f\u0438\u0441\u0430\u0442\u044c\u0020\u043a\u043e\u0434\u0020\u043d\u0430\u0020\u0043\u002b\u002b"
---
# How to Write C++ Code {#kak-pisat-kod-na-c}
## General {#obshchee}


@ -1,3 +1,10 @@
---
toc_folder_title: "\u0414\u0432\u0438\u0436\u043a\u0438\u0020\u0431\u0430\u0437\u0020\u0434\u0430\u043d\u043d\u044b\u0445"
toc_priority: 27
toc_title: "\u0412\u0432\u0435\u0434\u0435\u043d\u0438\u0435"
---
# Database Engines {#dvizhki-baz-dannykh}
Database engines provide support for working with tables.


@ -1,3 +1,8 @@
---
toc_priority: 31
toc_title: Lazy
---
# Lazy {#lazy}
Keeps tables in RAM only for `expiration_time_in_seconds` seconds after the last access. Can be used only with \*Log tables.


@ -1,3 +1,8 @@
---
toc_priority: 30
toc_title: MySQL
---
# MySQL {#mysql}
Allows connecting to databases on a remote MySQL server and running `INSERT` and `SELECT` queries to exchange data between ClickHouse and MySQL.


@ -1,6 +1,6 @@
---
toc_folder_title: "\u0414\u0432\u0438\u0436\u043A\u0438"
toc_folder_title: "\u0045\u006e\u0067\u0069\u006e\u0065\u0073"
toc_hidden: true
toc_priority: 25
toc_title: hidden
---


@ -1,3 +1,10 @@
---
toc_folder_title: "\u0414\u0432\u0438\u0436\u043a\u0438\u0020\u0442\u0430\u0431\u043b\u0438\u0446"
toc_priority: 26
toc_title: "\u0412\u0432\u0435\u0434\u0435\u043d\u0438\u0435"
---
# Table Engines {#table_engines}
A table engine (type of table) determines:


@ -1,3 +1,8 @@
---
toc_priority: 4
toc_title: HDFS
---
# HDFS {#table_engines-hdfs}
Manages data in HDFS. This engine is similar to the [File](../special/file.md#table_engines-file) and [URL](../special/url.md#table_engines-url) engines.


@ -1,5 +1,5 @@
---
-toc_folder_title: Integrations
+toc_folder_title: "\u0414\u0432\u0438\u0436\u043a\u0438\u0020\u0442\u0430\u0431\u043b\u0438\u0446\u0020\u0434\u043b\u044f\u0020\u0438\u043d\u0442\u0435\u0433\u0440\u0430\u0446\u0438\u0438"
toc_priority: 30
---


@ -1,3 +1,8 @@
---
toc_priority: 2
toc_title: JDBC
---
# JDBC {#table-engine-jdbc}
Allows ClickHouse to connect to external databases via [JDBC](https://en.wikipedia.org/wiki/Java_Database_Connectivity).


@ -1,3 +1,8 @@
---
toc_priority: 5
toc_title: Kafka
---
# Kafka {#kafka}
The engine works with [Apache Kafka](http://kafka.apache.org/).


@ -1,3 +1,8 @@
---
toc_priority: 3
toc_title: MySQL
---
# MySQL {#mysql}
The MySQL engine lets you run `SELECT` queries over data stored on a remote MySQL server.
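
A minimal sketch of creating such a table (the host, credentials, and column set are placeholders):

``` sql
CREATE TABLE mysql_table
(
    id UInt32,
    name String
)
ENGINE = MySQL('host:3306', 'database', 'table', 'user', 'password')
```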


@ -1,3 +1,8 @@
---
toc_priority: 1
toc_title: ODBC
---
# ODBC {#table-engine-odbc}
Allows ClickHouse to connect to external databases via [ODBC](https://en.wikipedia.org/wiki/Open_Database_Connectivity).


@ -1,6 +1,6 @@
---
-toc_folder_title: Семейство Log
-toc_title: Введение
+toc_folder_title: "\u0421\u0435\u043c\u0435\u0439\u0441\u0442\u0432\u043e\u0020\u004c\u006f\u0067"
+toc_title: "\u0412\u0432\u0435\u0434\u0435\u043d\u0438\u0435"
toc_priority: 29
---


@ -1,3 +1,8 @@
---
toc_priority: 33
toc_title: Log
---
# Log {#log}
The engine belongs to the Log family of engines. See the common properties of Log engines and their differences in the [Log Family](index.md) article.


@ -1,3 +1,8 @@
---
toc_priority: 32
toc_title: StripeLog
---
# StripeLog {#stripelog}
The engine belongs to the Log family of engines. See the common properties of Log engines and their differences in the [Log Family](index.md) article.


@ -1,3 +1,8 @@
---
toc_priority: 34
toc_title: TinyLog
---
# TinyLog {#tinylog}
The engine belongs to the Log family of engines. See the common properties of Log engines and their differences in the [Log Family](index.md) article.


@ -1,3 +1,8 @@
---
toc_priority: 35
toc_title: AggregatingMergeTree
---
# AggregatingMergeTree {#aggregatingmergetree}
The engine inherits the functionality of [MergeTree](mergetree.md#table_engines-mergetree), altering the logic for merging data parts. ClickHouse replaces all rows with the same primary key (more precisely, the same [sorting key](mergetree.md)) with a single row (within one data part) that stores a combination of states of aggregate functions.


@ -1,3 +1,8 @@
---
toc_priority: 36
toc_title: CollapsingMergeTree
---
# CollapsingMergeTree {#table_engine-collapsingmergetree}
The engine inherits the functionality of [MergeTree](mergetree.md) and adds logic for collapsing (deleting) rows to the data-parts merge algorithm.


@ -1,3 +1,9 @@
---
toc_priority: 32
toc_title: "\u041f\u0440\u043e\u0438\u0437\u0432\u043e\u043b\u044c\u043d\u044b\u0439\u0020\u043a\u043b\u044e\u0447\u0020\u043f\u0430\u0440\u0442\u0438\u0446\u0438\u043e\u043d\u0438\u0440\u043e\u0432\u0430\u043d\u0438\u044f"
---
# Custom Partitioning Key {#proizvolnyi-kliuch-partitsionirovaniia}
Partitioning is available for tables of the [MergeTree](mergetree.md) family (including [replicated tables](replication.md)). [MaterializedView](../special/materializedview.md#materializedview) tables created from MergeTree tables also support partitioning.


@ -1,3 +1,8 @@
---
toc_priority: 38
toc_title: GraphiteMergeTree
---
# GraphiteMergeTree {#graphitemergetree}
The engine is designed for thinning and aggregating/averaging (rollup) [Graphite](http://graphite.readthedocs.io/en/latest/index.html) data. It may be of interest to developers who want to use ClickHouse as a data store for Graphite.


@ -1,6 +1,5 @@
---
toc_folder_title: MergeTree Family
toc_priority: 28
toc_title: "\u0412\u0432\u0435\u0434\u0435\u043d\u0438\u0435"
---


@ -1,3 +1,8 @@
---
toc_priority: 33
toc_title: ReplacingMergeTree
---
# ReplacingMergeTree {#replacingmergetree}
The engine differs from [MergeTree](mergetree.md#table_engines-mergetree) in that it removes duplicate entries with the same [sorting key](mergetree.md) value.


@ -1,3 +1,8 @@
---
toc_priority: 31
toc_title: "\u0420\u0435\u043f\u043b\u0438\u043a\u0430\u0446\u0438\u044f\u0020\u0434\u0430\u043d\u043d\u044b\u0445"
---
# Data Replication {#table_engines-replication}
Replication is supported only for tables in the MergeTree family:


@ -1,3 +1,8 @@
---
toc_priority: 34
toc_title: SummingMergeTree
---
# SummingMergeTree {#summingmergetree}
The engine inherits the functionality of [MergeTree](mergetree.md#table_engines-mergetree). The difference is that when merging data parts of `SummingMergeTree` tables, ClickHouse replaces all rows with the same primary key (more precisely, the same [sorting key](mergetree.md)) with a single row that stores only the sums of the values in columns with a numeric data type. If the sorting key is chosen so that many rows correspond to a single key value, this significantly reduces storage volume and speeds up subsequent data selection.


@ -1,3 +1,8 @@
---
toc_priority: 37
toc_title: VersionedCollapsingMergeTree
---
# VersionedCollapsingMergeTree {#versionedcollapsingmergetree}
The engine:


@ -1,3 +1,8 @@
---
toc_priority: 45
toc_title: Buffer
---
# Buffer {#buffer}
Buffers the data to be written in RAM, periodically flushing it to another table. During reads, data is read from the buffer and from the other table simultaneously.
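
A sketch of a typical declaration (the names and thresholds are illustrative; the parameters are the destination database and table, the number of buffer layers, and the min/max time, rows, and bytes that trigger a flush):

``` sql
CREATE TABLE merge.hits_buffer AS merge.hits
ENGINE = Buffer(merge, hits, 16, 10, 100, 10000, 1000000, 10000000, 100000000)
```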


@ -1,3 +1,8 @@
---
toc_priority: 35
toc_title: Dictionary
---
# Dictionary {#dictionary}
The `Dictionary` engine displays the data of a [dictionary](../../../sql-reference/dictionaries/external-dictionaries/external-dicts.md) as a ClickHouse table.


@ -1,3 +1,8 @@
---
toc_priority: 33
toc_title: Distributed
---
# Distributed {#distributed}
**The Distributed engine does not store data itself**, but allows distributed query processing across multiple servers. Reading is automatically parallelized. During a read, the table indexes on the remote servers are used, if available.

Some files were not shown because too many files have changed in this diff.