Merge branch 'master' into master5

alesapin 2023-06-26 16:24:18 +02:00, committed by GitHub
commit 727caadf64
538 changed files with 7614 additions and 3847 deletions

View File

@ -22,12 +22,13 @@ curl https://clickhouse.com/ | sh
## Upcoming Events
* [**v23.5 Release Webinar**](https://clickhouse.com/company/events/v23-5-release-webinar?utm_source=github&utm_medium=social&utm_campaign=release-webinar-2023-05) - Jun 8 - 23.5 is rapidly approaching. Original creator, co-founder, and CTO of ClickHouse Alexey Milovidov will walk us through the highlights of the release.
* [**ClickHouse Meetup in Bangalore**](https://www.meetup.com/clickhouse-bangalore-user-group/events/293740066/) - Jun 7
* [**ClickHouse Meetup in San Francisco**](https://www.meetup.com/clickhouse-silicon-valley-meetup-group/events/293426725/) - Jun 7
* [**v23.6 Release Webinar**](https://clickhouse.com/company/events/v23-6-release-call?utm_source=github&utm_medium=social&utm_campaign=release-webinar-2023-06) - Jun 29 - 23.6 is rapidly approaching. Original creator, co-founder, and CTO of ClickHouse Alexey Milovidov will walk us through the highlights of the release.
* [**ClickHouse Meetup in Paris**](https://www.meetup.com/clickhouse-france-user-group/events/294283460) - Jul 4
* [**ClickHouse Meetup in Boston**](https://www.meetup.com/clickhouse-boston-user-group/events/293913596) - Jul 18
* [**ClickHouse Meetup in NYC**](https://www.meetup.com/clickhouse-new-york-user-group/events/293913441) - Jul 19
* [**ClickHouse Meetup in Toronto**](https://www.meetup.com/clickhouse-toronto-user-group/events/294183127) - Jul 20
Also, keep an eye out for upcoming meetups in Amsterdam, Boston, NYC, Beijing, and Toronto. Somewhere else you want us to be? Please feel free to reach out to tyler <at> clickhouse <dot> com.
Also, keep an eye out for upcoming meetups around the world. Somewhere else you want us to be? Please feel free to reach out to tyler <at> clickhouse <dot> com.
## Recent Recordings
* **Recent Meetup Videos**: [Meetup Playlist](https://www.youtube.com/playlist?list=PL0Z2YDlm0b3iNDUzpY1S3L_iV4nARda_U) Whenever possible, recordings of the ClickHouse Community Meetups are edited and presented as individual talks. Currently featuring "Modern SQL in 2023", "Fast, Concurrent, and Consistent Asynchronous INSERTS in ClickHouse", and "Full-Text Indices: Design and Experiments"

View File

@ -2,6 +2,7 @@
#include <cstdint>
#include <string>
#include <array>
#if defined(__SSE2__)
#include <emmintrin.h>

View File

@ -11,3 +11,8 @@ constexpr double interpolateExponential(double min, double max, double ratio)
assert(min > 0 && ratio >= 0 && ratio <= 1);
return min * std::pow(max / min, ratio);
}
constexpr double interpolateLinear(double min, double max, double ratio)
{
return std::lerp(min, max, ratio);
}

View File

@ -146,7 +146,7 @@ add_contrib (amqpcpp-cmake AMQP-CPP) # requires: libuv
add_contrib (cassandra-cmake cassandra) # requires: libuv
if (NOT OS_DARWIN)
add_contrib (curl-cmake curl)
add_contrib (azure-cmake azure)
add_contrib (azure-cmake azure) # requires: curl
add_contrib (sentry-native-cmake sentry-native) # requires: curl
endif()
add_contrib (fmtlib-cmake fmtlib)
@ -157,7 +157,7 @@ add_contrib (librdkafka-cmake librdkafka) # requires: libgsasl
add_contrib (nats-io-cmake nats-io)
add_contrib (isa-l-cmake isa-l)
add_contrib (libhdfs3-cmake libhdfs3) # requires: google-protobuf, krb5, isa-l
add_contrib (hive-metastore-cmake hive-metastore) # requires: thrift/avro/arrow/libhdfs3
add_contrib (hive-metastore-cmake hive-metastore) # requires: thrift, avro, arrow, libhdfs3
add_contrib (cppkafka-cmake cppkafka)
add_contrib (libpqxx-cmake libpqxx)
add_contrib (libpq-cmake libpq)

View File

@ -31,12 +31,12 @@ endif()
set (CMAKE_CXX_STANDARD 17)
set(ARROW_VERSION "6.0.1")
set(ARROW_VERSION "11.0.0")
string(REGEX MATCH "^[0-9]+\\.[0-9]+\\.[0-9]+" ARROW_BASE_VERSION "${ARROW_VERSION}")
set(ARROW_VERSION_MAJOR "6")
set(ARROW_VERSION_MAJOR "11")
set(ARROW_VERSION_MINOR "0")
set(ARROW_VERSION_PATCH "1")
set(ARROW_VERSION_PATCH "0")
if(ARROW_VERSION_MAJOR STREQUAL "0")
# Arrow 0.x.y => SO version is "x", full SO version is "x.y.0"
@ -116,43 +116,79 @@ configure_file("${ORC_SOURCE_SRC_DIR}/Adaptor.hh.in" "${ORC_BUILD_INCLUDE_DIR}/A
# ARROW_ORC + adapters/orc/CMakefiles
set(ORC_SRCS
"${CMAKE_CURRENT_BINARY_DIR}/orc_proto.pb.h"
"${ORC_SOURCE_SRC_DIR}/sargs/ExpressionTree.cc"
"${ORC_SOURCE_SRC_DIR}/sargs/Literal.cc"
"${ORC_SOURCE_SRC_DIR}/sargs/PredicateLeaf.cc"
"${ORC_SOURCE_SRC_DIR}/sargs/SargsApplier.cc"
"${ORC_SOURCE_SRC_DIR}/sargs/SearchArgument.cc"
"${ORC_SOURCE_SRC_DIR}/sargs/TruthValue.cc"
"${ORC_SOURCE_SRC_DIR}/Exceptions.cc"
"${ORC_SOURCE_SRC_DIR}/OrcFile.cc"
"${ORC_SOURCE_SRC_DIR}/Reader.cc"
"${ORC_ADDITION_SOURCE_DIR}/orc_proto.pb.cc"
"${ORC_SOURCE_SRC_DIR}/Adaptor.cc"
"${ORC_SOURCE_SRC_DIR}/Adaptor.hh.in"
"${ORC_SOURCE_SRC_DIR}/BlockBuffer.cc"
"${ORC_SOURCE_SRC_DIR}/BlockBuffer.hh"
"${ORC_SOURCE_SRC_DIR}/BloomFilter.cc"
"${ORC_SOURCE_SRC_DIR}/BloomFilter.hh"
"${ORC_SOURCE_SRC_DIR}/Bpacking.hh"
"${ORC_SOURCE_SRC_DIR}/BpackingDefault.cc"
"${ORC_SOURCE_SRC_DIR}/BpackingDefault.hh"
"${ORC_SOURCE_SRC_DIR}/ByteRLE.cc"
"${ORC_SOURCE_SRC_DIR}/ByteRLE.hh"
"${ORC_SOURCE_SRC_DIR}/CMakeLists.txt"
"${ORC_SOURCE_SRC_DIR}/ColumnPrinter.cc"
"${ORC_SOURCE_SRC_DIR}/ColumnReader.cc"
"${ORC_SOURCE_SRC_DIR}/ColumnReader.hh"
"${ORC_SOURCE_SRC_DIR}/ColumnWriter.cc"
"${ORC_SOURCE_SRC_DIR}/ColumnWriter.hh"
"${ORC_SOURCE_SRC_DIR}/Common.cc"
"${ORC_SOURCE_SRC_DIR}/Compression.cc"
"${ORC_SOURCE_SRC_DIR}/Compression.hh"
"${ORC_SOURCE_SRC_DIR}/ConvertColumnReader.cc"
"${ORC_SOURCE_SRC_DIR}/ConvertColumnReader.hh"
"${ORC_SOURCE_SRC_DIR}/CpuInfoUtil.cc"
"${ORC_SOURCE_SRC_DIR}/CpuInfoUtil.hh"
"${ORC_SOURCE_SRC_DIR}/Dispatch.hh"
"${ORC_SOURCE_SRC_DIR}/Exceptions.cc"
"${ORC_SOURCE_SRC_DIR}/Int128.cc"
"${ORC_SOURCE_SRC_DIR}/LzoDecompressor.cc"
"${ORC_SOURCE_SRC_DIR}/LzoDecompressor.hh"
"${ORC_SOURCE_SRC_DIR}/MemoryPool.cc"
"${ORC_SOURCE_SRC_DIR}/Murmur3.cc"
"${ORC_SOURCE_SRC_DIR}/Murmur3.hh"
"${ORC_SOURCE_SRC_DIR}/Options.hh"
"${ORC_SOURCE_SRC_DIR}/OrcFile.cc"
"${ORC_SOURCE_SRC_DIR}/RLE.cc"
"${ORC_SOURCE_SRC_DIR}/RLE.hh"
"${ORC_SOURCE_SRC_DIR}/RLEV2Util.cc"
"${ORC_SOURCE_SRC_DIR}/RLEV2Util.hh"
"${ORC_SOURCE_SRC_DIR}/RLEv1.cc"
"${ORC_SOURCE_SRC_DIR}/RLEv1.hh"
"${ORC_SOURCE_SRC_DIR}/RLEv2.hh"
"${ORC_SOURCE_SRC_DIR}/Reader.cc"
"${ORC_SOURCE_SRC_DIR}/Reader.hh"
"${ORC_SOURCE_SRC_DIR}/RleDecoderV2.cc"
"${ORC_SOURCE_SRC_DIR}/RleEncoderV2.cc"
"${ORC_SOURCE_SRC_DIR}/RLEV2Util.cc"
"${ORC_SOURCE_SRC_DIR}/SchemaEvolution.cc"
"${ORC_SOURCE_SRC_DIR}/SchemaEvolution.hh"
"${ORC_SOURCE_SRC_DIR}/Statistics.cc"
"${ORC_SOURCE_SRC_DIR}/Statistics.hh"
"${ORC_SOURCE_SRC_DIR}/StripeStream.cc"
"${ORC_SOURCE_SRC_DIR}/StripeStream.hh"
"${ORC_SOURCE_SRC_DIR}/Timezone.cc"
"${ORC_SOURCE_SRC_DIR}/Timezone.hh"
"${ORC_SOURCE_SRC_DIR}/TypeImpl.cc"
"${ORC_SOURCE_SRC_DIR}/TypeImpl.hh"
"${ORC_SOURCE_SRC_DIR}/Utils.hh"
"${ORC_SOURCE_SRC_DIR}/Vector.cc"
"${ORC_SOURCE_SRC_DIR}/Writer.cc"
"${ORC_SOURCE_SRC_DIR}/Adaptor.cc"
"${ORC_SOURCE_SRC_DIR}/BloomFilter.cc"
"${ORC_SOURCE_SRC_DIR}/Murmur3.cc"
"${ORC_SOURCE_SRC_DIR}/BlockBuffer.cc"
"${ORC_SOURCE_SRC_DIR}/wrap/orc-proto-wrapper.cc"
"${ORC_SOURCE_SRC_DIR}/io/InputStream.cc"
"${ORC_SOURCE_SRC_DIR}/io/InputStream.hh"
"${ORC_SOURCE_SRC_DIR}/io/OutputStream.cc"
"${ORC_ADDITION_SOURCE_DIR}/orc_proto.pb.cc"
"${ORC_SOURCE_SRC_DIR}/io/OutputStream.hh"
"${ORC_SOURCE_SRC_DIR}/sargs/ExpressionTree.cc"
"${ORC_SOURCE_SRC_DIR}/sargs/ExpressionTree.hh"
"${ORC_SOURCE_SRC_DIR}/sargs/Literal.cc"
"${ORC_SOURCE_SRC_DIR}/sargs/PredicateLeaf.cc"
"${ORC_SOURCE_SRC_DIR}/sargs/PredicateLeaf.hh"
"${ORC_SOURCE_SRC_DIR}/sargs/SargsApplier.cc"
"${ORC_SOURCE_SRC_DIR}/sargs/SargsApplier.hh"
"${ORC_SOURCE_SRC_DIR}/sargs/SearchArgument.cc"
"${ORC_SOURCE_SRC_DIR}/sargs/SearchArgument.hh"
"${ORC_SOURCE_SRC_DIR}/sargs/TruthValue.cc"
)
add_library(_orc ${ORC_SRCS})
@ -478,6 +514,10 @@ if (SANITIZE STREQUAL "undefined")
target_compile_options(_arrow PRIVATE -fno-sanitize=undefined)
endif ()
# Define Thrift version for parquet (we use 0.16.0)
add_definitions(-DPARQUET_THRIFT_VERSION_MAJOR=0)
add_definitions(-DPARQUET_THRIFT_VERSION_MINOR=16)
# === tools
set(TOOLS_DIR "${ClickHouse_SOURCE_DIR}/contrib/arrow/cpp/tools/parquet")

View File

@ -1,6 +1,6 @@
option (ENABLE_AZURE_BLOB_STORAGE "Enable Azure blob storage" ${ENABLE_LIBRARIES})
if (NOT ENABLE_AZURE_BLOB_STORAGE OR BUILD_STANDALONE_KEEPER OR OS_FREEBSD OR (NOT ARCH_AMD64))
if (NOT ENABLE_AZURE_BLOB_STORAGE OR BUILD_STANDALONE_KEEPER OR OS_FREEBSD)
message(STATUS "Not using Azure blob storage")
return()
endif()

View File

@ -1,11 +1,11 @@
if(NOT ARCH_AARCH64 AND NOT OS_FREEBSD AND NOT APPLE AND NOT ARCH_PPC64LE AND NOT ARCH_S390X)
if(NOT OS_FREEBSD AND NOT APPLE AND NOT ARCH_PPC64LE AND NOT ARCH_S390X)
option(ENABLE_HDFS "Enable HDFS" ${ENABLE_LIBRARIES})
elseif(ENABLE_HDFS)
message (${RECONFIGURE_MESSAGE_LEVEL} "Cannot use HDFS3 with current configuration")
endif()
if(NOT ENABLE_HDFS)
message(STATUS "Not using hdfs")
message(STATUS "Not using HDFS")
return()
endif()

contrib/orc (vendored)

@ -1 +1 @@
Subproject commit c5d7755ba0b9a95631c8daea4d094101f26ec761
Subproject commit 568d1d60c250af1890f226c182bc15bd8cc94cf1

View File

@ -59,6 +59,8 @@ install_packages previous_release_package_folder
# available for dump via clickhouse-local
configure
# it contains some new settings, but we can safely remove it
rm /etc/clickhouse-server/config.d/merge_tree.xml
rm /etc/clickhouse-server/users.d/nonconst_timezone.xml
start
@ -85,6 +87,8 @@ export USE_S3_STORAGE_FOR_MERGE_TREE=1
export ZOOKEEPER_FAULT_INJECTION=0
configure
# it contains some new settings, but we can safely remove it
rm /etc/clickhouse-server/config.d/merge_tree.xml
rm /etc/clickhouse-server/users.d/nonconst_timezone.xml
start
@ -115,6 +119,13 @@ mv /var/log/clickhouse-server/clickhouse-server.log /var/log/clickhouse-server/c
install_packages package_folder
export ZOOKEEPER_FAULT_INJECTION=1
configure
# Just in case previous version left some garbage in zk
sudo cat /etc/clickhouse-server/config.d/lost_forever_check.xml \
| sed "s|>1<|>0<|g" \
> /etc/clickhouse-server/config.d/lost_forever_check.xml.tmp
sudo mv /etc/clickhouse-server/config.d/lost_forever_check.xml.tmp /etc/clickhouse-server/config.d/lost_forever_check.xml
start 500
clickhouse-client --query "SELECT 'Server successfully started', 'OK', NULL, ''" >> /test_output/test_results.tsv \
|| (rg --text "<Error>.*Application" /var/log/clickhouse-server/clickhouse-server.log > /test_output/application_errors.txt \

View File

@ -0,0 +1,19 @@
---
sidebar_position: 1
sidebar_label: 2023
---
# 2023 Changelog
### ClickHouse release v23.3.5.9-lts (f5fbc2fd2b3) FIXME as compared to v23.3.4.17-lts (2c99b73ff40)
#### Bug Fix (user-visible misbehavior in an official stable release)
* Fix broken index analysis when binary operator contains a null constant argument [#50177](https://github.com/ClickHouse/ClickHouse/pull/50177) ([Amos Bird](https://github.com/amosbird)).
* Cleanup moving parts [#50489](https://github.com/ClickHouse/ClickHouse/pull/50489) ([vdimir](https://github.com/vdimir)).
* Do not apply projection if read-in-order was enabled. [#50923](https://github.com/ClickHouse/ClickHouse/pull/50923) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Increase max array size in group bitmap [#50620](https://github.com/ClickHouse/ClickHouse/pull/50620) ([Kruglov Pavel](https://github.com/Avogar)).

View File

@ -1,6 +1,6 @@
---
slug: /en/engines/table-engines/integrations/ExternalDistributed
sidebar_position: 12
sidebar_position: 55
sidebar_label: ExternalDistributed
title: ExternalDistributed
---

View File

@ -1,5 +1,6 @@
---
slug: /en/engines/table-engines/integrations/azureBlobStorage
sidebar_position: 10
sidebar_label: Azure Blob Storage
---
@ -29,8 +30,8 @@ CREATE TABLE azure_blob_storage_table (name String, value UInt32)
**Example**
``` sql
CREATE TABLE test_table (key UInt64, data String)
ENGINE = AzureBlobStorage('DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite1:10000/devstoreaccount1/;',
'test_container', 'test_table', 'CSV');
INSERT INTO test_table VALUES (1, 'a'), (2, 'b'), (3, 'c');

View File

@ -1,5 +1,6 @@
---
slug: /en/engines/table-engines/integrations/deltalake
sidebar_position: 40
sidebar_label: DeltaLake
---

View File

@ -1,6 +1,6 @@
---
slug: /en/engines/table-engines/integrations/embedded-rocksdb
sidebar_position: 9
sidebar_position: 50
sidebar_label: EmbeddedRocksDB
---
@ -99,7 +99,7 @@ INSERT INTO test VALUES ('some key', 1, 'value', 3.2);
### Deletes
Rows can be deleted using `DELETE` query or `TRUNCATE`.
```sql
DELETE FROM test WHERE key LIKE 'some%' AND v1 > 1;

View File

@ -1,6 +1,6 @@
---
slug: /en/engines/table-engines/integrations/hdfs
sidebar_position: 6
sidebar_position: 80
sidebar_label: HDFS
---
@ -63,7 +63,7 @@ SELECT * FROM hdfs_engine_table LIMIT 2
- `ALTER` and `SELECT...SAMPLE` operations.
- Indexes.
- [Zero-copy](../../../operations/storing-data.md#zero-copy) replication is possible, but not recommended.
:::note Zero-copy replication is not ready for production
Zero-copy replication is disabled by default in ClickHouse version 22.8 and higher. This feature is not recommended for production use.
:::

View File

@ -1,6 +1,6 @@
---
slug: /en/engines/table-engines/integrations/hive
sidebar_position: 4
sidebar_position: 84
sidebar_label: Hive
---

View File

@ -1,5 +1,6 @@
---
slug: /en/engines/table-engines/integrations/hudi
sidebar_position: 86
sidebar_label: Hudi
---

View File

@ -1,5 +1,6 @@
---
slug: /en/engines/table-engines/integrations/iceberg
sidebar_position: 90
sidebar_label: Iceberg
---

View File

@ -1,6 +1,6 @@
---
slug: /en/engines/table-engines/integrations/jdbc
sidebar_position: 3
sidebar_position: 100
sidebar_label: JDBC
---

View File

@ -1,6 +1,6 @@
---
slug: /en/engines/table-engines/integrations/kafka
sidebar_position: 8
sidebar_position: 110
sidebar_label: Kafka
---

View File

@ -1,6 +1,6 @@
---
slug: /en/engines/table-engines/integrations/materialized-postgresql
sidebar_position: 12
sidebar_position: 130
sidebar_label: MaterializedPostgreSQL
title: MaterializedPostgreSQL
---

View File

@ -1,6 +1,6 @@
---
slug: /en/engines/table-engines/integrations/mongodb
sidebar_position: 5
sidebar_position: 135
sidebar_label: MongoDB
---

View File

@ -1,6 +1,6 @@
---
slug: /en/engines/table-engines/integrations/mysql
sidebar_position: 4
sidebar_position: 138
sidebar_label: MySQL
---

View File

@ -1,6 +1,6 @@
---
slug: /en/engines/table-engines/integrations/nats
sidebar_position: 14
sidebar_position: 140
sidebar_label: NATS
---
@ -83,12 +83,12 @@ You can select one of the subjects the table reads from and publish your data th
CREATE TABLE queue (
key UInt64,
value UInt64
) ENGINE = NATS
SETTINGS nats_url = 'localhost:4444',
nats_subjects = 'subject1,subject2',
nats_format = 'JSONEachRow';
INSERT INTO queue
SETTINGS stream_like_engine_insert_queue = 'subject2'
VALUES (1, 1);
```
@ -102,7 +102,7 @@ Example:
key UInt64,
value UInt64,
date DateTime
) ENGINE = NATS
SETTINGS nats_url = 'localhost:4444',
nats_subjects = 'subject1',
nats_format = 'JSONEachRow',
@ -137,7 +137,7 @@ Example:
CREATE TABLE queue (
key UInt64,
value UInt64
) ENGINE = NATS
SETTINGS nats_url = 'localhost:4444',
nats_subjects = 'subject1',
nats_format = 'JSONEachRow',

View File

@ -1,6 +1,6 @@
---
slug: /en/engines/table-engines/integrations/odbc
sidebar_position: 2
sidebar_position: 150
sidebar_label: ODBC
---

View File

@ -1,6 +1,6 @@
---
slug: /en/engines/table-engines/integrations/postgresql
sidebar_position: 11
sidebar_position: 160
sidebar_label: PostgreSQL
---

View File

@ -1,6 +1,6 @@
---
slug: /en/engines/table-engines/integrations/rabbitmq
sidebar_position: 10
sidebar_position: 170
sidebar_label: RabbitMQ
---

View File

@ -1,6 +1,6 @@
---
slug: /en/engines/table-engines/integrations/redis
sidebar_position: 43
sidebar_position: 175
sidebar_label: Redis
---

View File

@ -1,6 +1,6 @@
---
slug: /en/engines/table-engines/integrations/s3
sidebar_position: 7
sidebar_position: 180
sidebar_label: S3
---
@ -8,30 +8,7 @@ sidebar_label: S3
This engine provides integration with [Amazon S3](https://aws.amazon.com/s3/) ecosystem. This engine is similar to the [HDFS](../../../engines/table-engines/special/file.md#table_engines-hdfs) engine, but provides S3-specific features.
## Create Table {#creating-a-table}
``` sql
CREATE TABLE s3_engine_table (name String, value UInt32)
ENGINE = S3(path [, NOSIGN | aws_access_key_id, aws_secret_access_key,] format, [compression])
[PARTITION BY expr]
[SETTINGS ...]
```
**Engine parameters**
- `path` — Bucket url with path to file. Supports following wildcards in readonly mode: `*`, `?`, `{abc,def}` and `{N..M}` where `N`, `M` — numbers, `'abc'`, `'def'` — strings. For more information see [below](#wildcards-in-path).
- `NOSIGN` - If this keyword is provided in place of credentials, all the requests will not be signed.
- `format` — The [format](../../../interfaces/formats.md#formats) of the file.
- `aws_access_key_id`, `aws_secret_access_key` - Long-term credentials for the [AWS](https://aws.amazon.com/) account user. You can use these to authenticate your requests. Parameter is optional. If credentials are not specified, they are used from the configuration file. For more information see [Using S3 for Data Storage](../mergetree-family/mergetree.md#table_engine-mergetree-s3).
- `compression` — Compression type. Supported values: `none`, `gzip/gz`, `brotli/br`, `xz/LZMA`, `zstd/zst`. Parameter is optional. By default, it will auto-detect compression by file extension.
### PARTITION BY
`PARTITION BY` — Optional. In most cases you don't need a partition key, and if it is needed you generally don't need a partition key more granular than by month. Partitioning does not speed up queries (in contrast to the ORDER BY expression). You should never use too granular partitioning. Don't partition your data by client identifiers or names (instead, make client identifier or name the first column in the ORDER BY expression).
For partitioning by month, use the `toYYYYMM(date_column)` expression, where `date_column` is a column with a date of the type [Date](/docs/en/sql-reference/data-types/date.md). The partition names here have the `"YYYYMM"` format.
**Example**
## Example
``` sql
CREATE TABLE s3_engine_table (name String, value UInt32)
@ -49,6 +26,135 @@ SELECT * FROM s3_engine_table LIMIT 2;
│ two │ 2 │
└──────┴───────┘
```
## Create Table {#creating-a-table}
``` sql
CREATE TABLE s3_engine_table (name String, value UInt32)
ENGINE = S3(path [, NOSIGN | aws_access_key_id, aws_secret_access_key,] format, [compression])
[PARTITION BY expr]
[SETTINGS ...]
```
### Engine parameters
- `path` — Bucket url with path to file. Supports following wildcards in readonly mode: `*`, `?`, `{abc,def}` and `{N..M}` where `N`, `M` — numbers, `'abc'`, `'def'` — strings. For more information see [below](#wildcards-in-path).
- `NOSIGN` - If this keyword is provided in place of credentials, all the requests will not be signed.
- `format` — The [format](../../../interfaces/formats.md#formats) of the file.
- `aws_access_key_id`, `aws_secret_access_key` - Long-term credentials for the [AWS](https://aws.amazon.com/) account user. You can use these to authenticate your requests. Parameter is optional. If credentials are not specified, they are used from the configuration file. For more information see [Using S3 for Data Storage](../mergetree-family/mergetree.md#table_engine-mergetree-s3).
- `compression` — Compression type. Supported values: `none`, `gzip/gz`, `brotli/br`, `xz/LZMA`, `zstd/zst`. Parameter is optional. By default, it will auto-detect compression by file extension.
### PARTITION BY
`PARTITION BY` — Optional. In most cases you don't need a partition key, and if it is needed you generally don't need a partition key more granular than by month. Partitioning does not speed up queries (in contrast to the ORDER BY expression). You should never use too granular partitioning. Don't partition your data by client identifiers or names (instead, make client identifier or name the first column in the ORDER BY expression).
For partitioning by month, use the `toYYYYMM(date_column)` expression, where `date_column` is a column with a date of the type [Date](/docs/en/sql-reference/data-types/date.md). The partition names here have the `"YYYYMM"` format.
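As an illustrative sketch only (the table name is hypothetical; the endpoint and credentials reuse the MinIO values from the example below), a monthly partitioned S3 table could look like this:
```sql
-- Hypothetical monthly-partitioned table; partition names look like 202306.
CREATE TABLE s3_monthly (d Date, value UInt32)
    ENGINE = S3('http://minio:10000/clickhouse//monthly_{_partition_id}.csv', 'minioadmin', 'minioadminpassword', 'CSV')
    PARTITION BY toYYYYMM(d);
```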
### Querying partitioned data
This example uses the [docker compose recipe](https://github.com/ClickHouse/examples/tree/5fdc6ff72f4e5137e23ea075c88d3f44b0202490/docker-compose-recipes/recipes/ch-and-minio-S3), which integrates ClickHouse and MinIO. You should be able to reproduce the same queries using S3 by replacing the endpoint and authentication values.
Notice that the S3 endpoint in the `ENGINE` configuration uses the parameter token `{_partition_id}` as part of the S3 object (filename), and that the SELECT queries select against those resulting object names (e.g., `test_3.csv`).
:::note
As shown in the example, querying from S3 tables that are partitioned is
not directly supported at this time, but can be accomplished by querying the bucket contents with a wildcard.
The primary use-case for writing
partitioned data in S3 is to enable transferring that data into another
ClickHouse system (for example, moving from on-prem systems to ClickHouse
Cloud). Because ClickHouse datasets are often very large, and network
reliability is sometimes imperfect it makes sense to transfer datasets
in subsets, hence partitioned writes.
:::
#### Create the table
```sql
CREATE TABLE p
(
`column1` UInt32,
`column2` UInt32,
`column3` UInt32
)
ENGINE = S3(
# highlight-next-line
'http://minio:10000/clickhouse//test_{_partition_id}.csv',
'minioadmin',
'minioadminpassword',
'CSV')
PARTITION BY column3
```
#### Insert data
```sql
insert into p values (1, 2, 3), (3, 2, 1), (78, 43, 45)
```
#### Select from partition 3
:::tip
This query uses the s3 table function
:::
```sql
SELECT *
FROM s3('http://minio:10000/clickhouse//test_3.csv', 'minioadmin', 'minioadminpassword', 'CSV')
```
```response
┌─c1─┬─c2─┬─c3─┐
│ 1 │ 2 │ 3 │
└────┴────┴────┘
```
#### Select from partition 1
```sql
SELECT *
FROM s3('http://minio:10000/clickhouse//test_1.csv', 'minioadmin', 'minioadminpassword', 'CSV')
```
```response
┌─c1─┬─c2─┬─c3─┐
│ 3 │ 2 │ 1 │
└────┴────┴────┘
```
#### Select from partition 45
```sql
SELECT *
FROM s3('http://minio:10000/clickhouse//test_45.csv', 'minioadmin', 'minioadminpassword', 'CSV')
```
```response
┌─c1─┬─c2─┬─c3─┐
│ 78 │ 43 │ 45 │
└────┴────┴────┘
```
#### Select from all partitions
```sql
SELECT *
FROM s3('http://minio:10000/clickhouse//**', 'minioadmin', 'minioadminpassword', 'CSV')
```
```response
┌─c1─┬─c2─┬─c3─┐
│ 3 │ 2 │ 1 │
└────┴────┴────┘
┌─c1─┬─c2─┬─c3─┐
│ 1 │ 2 │ 3 │
└────┴────┴────┘
┌─c1─┬─c2─┬─c3─┐
│ 78 │ 43 │ 45 │
└────┴────┴────┘
```
You may naturally try to `Select * from p`, but as noted above, this query will fail; use the preceding query.
```sql
SELECT * FROM p
```
```response
Received exception from server (version 23.4.1):
Code: 48. DB::Exception: Received from localhost:9000. DB::Exception: Reading from a partitioned S3 storage is not implemented yet. (NOT_IMPLEMENTED)
```
## Virtual columns {#virtual-columns}
- `_path` — Path to the file.
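A hedged sketch of reading `_path` alongside the data, reusing the MinIO endpoint and credentials from the examples above:
```sql
-- Hypothetical bucket/object reused from the MinIO examples above.
SELECT _path, *
FROM s3('http://minio:10000/clickhouse//test_3.csv', 'minioadmin', 'minioadminpassword', 'CSV');
```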

View File

@ -1,6 +1,6 @@
---
slug: /en/engines/table-engines/integrations/sqlite
sidebar_position: 7
sidebar_position: 185
sidebar_label: SQLite
---

View File

@ -1298,8 +1298,8 @@ For output it uses the following correspondence between ClickHouse types and BSO
| [Tuple](/docs/en/sql-reference/data-types/tuple.md) | `\x04` array |
| [Named Tuple](/docs/en/sql-reference/data-types/tuple.md) | `\x03` document |
| [Map](/docs/en/sql-reference/data-types/map.md) | `\x03` document |
| [IPv4](/docs/en/sql-reference/data-types/domains/ipv4.md) | `\x10` int32 |
| [IPv6](/docs/en/sql-reference/data-types/domains/ipv6.md) | `\x05` binary, `\x00` binary subtype |
| [IPv4](/docs/en/sql-reference/data-types/ipv4.md) | `\x10` int32 |
| [IPv6](/docs/en/sql-reference/data-types/ipv6.md) | `\x05` binary, `\x00` binary subtype |
For input it uses the following correspondence between BSON types and ClickHouse types:
@ -1309,7 +1309,7 @@ For input it uses the following correspondence between BSON types and ClickHouse
| `\x02` string | [String](/docs/en/sql-reference/data-types/string.md)/[FixedString](/docs/en/sql-reference/data-types/fixedstring.md) |
| `\x03` document | [Map](/docs/en/sql-reference/data-types/map.md)/[Named Tuple](/docs/en/sql-reference/data-types/tuple.md) |
| `\x04` array | [Array](/docs/en/sql-reference/data-types/array.md)/[Tuple](/docs/en/sql-reference/data-types/tuple.md) |
| `\x05` binary, `\x00` binary subtype | [String](/docs/en/sql-reference/data-types/string.md)/[FixedString](/docs/en/sql-reference/data-types/fixedstring.md)/[IPv6](/docs/en/sql-reference/data-types/domains/ipv6.md) |
| `\x05` binary, `\x00` binary subtype | [String](/docs/en/sql-reference/data-types/string.md)/[FixedString](/docs/en/sql-reference/data-types/fixedstring.md)/[IPv6](/docs/en/sql-reference/data-types/ipv6.md) |
| `\x05` binary, `\x02` old binary subtype | [String](/docs/en/sql-reference/data-types/string.md)/[FixedString](/docs/en/sql-reference/data-types/fixedstring.md) |
| `\x05` binary, `\x03` old uuid subtype | [UUID](/docs/en/sql-reference/data-types/uuid.md) |
| `\x05` binary, `\x04` uuid subtype | [UUID](/docs/en/sql-reference/data-types/uuid.md) |
@ -1319,7 +1319,7 @@ For input it uses the following correspondence between BSON types and ClickHouse
| `\x0A` null value | [NULL](/docs/en/sql-reference/data-types/nullable.md) |
| `\x0D` JavaScript code | [String](/docs/en/sql-reference/data-types/string.md)/[FixedString](/docs/en/sql-reference/data-types/fixedstring.md) |
| `\x0E` symbol | [String](/docs/en/sql-reference/data-types/string.md)/[FixedString](/docs/en/sql-reference/data-types/fixedstring.md) |
| `\x10` int32 | [Int32/UInt32](/docs/en/sql-reference/data-types/int-uint.md)/[Decimal32](/docs/en/sql-reference/data-types/decimal.md)/[IPv4](/docs/en/sql-reference/data-types/domains/ipv4.md)/[Enum8/Enum16](/docs/en/sql-reference/data-types/enum.md) |
| `\x10` int32 | [Int32/UInt32](/docs/en/sql-reference/data-types/int-uint.md)/[Decimal32](/docs/en/sql-reference/data-types/decimal.md)/[IPv4](/docs/en/sql-reference/data-types/ipv4.md)/[Enum8/Enum16](/docs/en/sql-reference/data-types/enum.md) |
| `\x12` int64 | [Int64/UInt64](/docs/en/sql-reference/data-types/int-uint.md)/[Decimal64](/docs/en/sql-reference/data-types/decimal.md)/[DateTime64](/docs/en/sql-reference/data-types/datetime64.md) |
Other BSON types are not supported. Also, it performs conversion between different integer types (for example, you can insert BSON int32 value into ClickHouse UInt8).
@ -1669,8 +1669,8 @@ The table below shows supported data types and how they match ClickHouse [data t
| `ENUM` | [Enum(8/16)](/docs/en/sql-reference/data-types/enum.md) | `ENUM` |
| `LIST` | [Array](/docs/en/sql-reference/data-types/array.md) | `LIST` |
| `STRUCT` | [Tuple](/docs/en/sql-reference/data-types/tuple.md) | `STRUCT` |
| `UINT32` | [IPv4](/docs/en/sql-reference/data-types/domains/ipv4.md) | `UINT32` |
| `DATA` | [IPv6](/docs/en/sql-reference/data-types/domains/ipv6.md) | `DATA` |
| `UINT32` | [IPv4](/docs/en/sql-reference/data-types/ipv4.md) | `UINT32` |
| `DATA` | [IPv6](/docs/en/sql-reference/data-types/ipv6.md) | `DATA` |
| `DATA` | [Int128/UInt128/Int256/UInt256](/docs/en/sql-reference/data-types/int-uint.md) | `DATA` |
| `DATA` | [Decimal128/Decimal256](/docs/en/sql-reference/data-types/decimal.md) | `DATA` |
| `STRUCT(entries LIST(STRUCT(key Key, value Value)))` | [Map](/docs/en/sql-reference/data-types/map.md) | `STRUCT(entries LIST(STRUCT(key Key, value Value)))` |
@ -1872,8 +1872,8 @@ The table below shows supported data types and how they match ClickHouse [data t
| `long (timestamp-millis)` \** | [DateTime64(3)](/docs/en/sql-reference/data-types/datetime.md) | `long (timestamp-millis)` \** |
| `long (timestamp-micros)` \** | [DateTime64(6)](/docs/en/sql-reference/data-types/datetime.md) | `long (timestamp-micros)` \** |
| `bytes (decimal)` \** | [DateTime64(N)](/docs/en/sql-reference/data-types/datetime.md) | `bytes (decimal)` \** |
| `int` | [IPv4](/docs/en/sql-reference/data-types/domains/ipv4.md) | `int` |
| `fixed(16)` | [IPv6](/docs/en/sql-reference/data-types/domains/ipv6.md) | `fixed(16)` |
| `int` | [IPv4](/docs/en/sql-reference/data-types/ipv4.md) | `int` |
| `fixed(16)` | [IPv6](/docs/en/sql-reference/data-types/ipv6.md) | `fixed(16)` |
| `bytes (decimal)` \** | [Decimal(P, S)](/docs/en/sql-reference/data-types/decimal.md) | `bytes (decimal)` \** |
| `string (uuid)` \** | [UUID](/docs/en/sql-reference/data-types/uuid.md) | `string (uuid)` \** |
| `fixed(16)` | [Int128/UInt128](/docs/en/sql-reference/data-types/int-uint.md) | `fixed(16)` |
@ -2026,9 +2026,9 @@ The table below shows supported data types and how they match ClickHouse [data t
| `LIST` | [Array](/docs/en/sql-reference/data-types/array.md) | `LIST` |
| `STRUCT` | [Tuple](/docs/en/sql-reference/data-types/tuple.md) | `STRUCT` |
| `MAP` | [Map](/docs/en/sql-reference/data-types/map.md) | `MAP` |
| `UINT32` | [IPv4](/docs/en/sql-reference/data-types/domains/ipv4.md) | `UINT32` |
| `FIXED_LENGTH_BYTE_ARRAY`, `BINARY` | [IPv6](/docs/en/sql-reference/data-types/domains/ipv6.md) | `FIXED_LENGTH_BYTE_ARRAY` |
| `FIXED_LENGTH_BYTE_ARRAY`, `BINARY` | [Int128/UInt128/Int256/UInt256](/docs/en/sql-reference/data-types/int-uint.md) | `FIXED_LENGTH_BYTE_ARRAY` |
| `UINT32` | [IPv4](/docs/en/sql-reference/data-types/ipv4.md) | `UINT32` |
| `FIXED_LENGTH_BYTE_ARRAY`, `BINARY` | [IPv6](/docs/en/sql-reference/data-types/ipv6.md) | `FIXED_LENGTH_BYTE_ARRAY` |
| `FIXED_LENGTH_BYTE_ARRAY`, `BINARY` | [Int128/UInt128/Int256/UInt256](/docs/en/sql-reference/data-types/int-uint.md) | `FIXED_LENGTH_BYTE_ARRAY` |
Arrays can be nested and can have a value of the `Nullable` type as an argument. `Tuple` and `Map` types also can be nested.
@ -2082,7 +2082,7 @@ Special format for reading Parquet file metadata (https://parquet.apache.org/doc
- logical_type - column logical type
- compression - compression used for this column
- total_uncompressed_size - total uncompressed bytes size of the column, calculated as the sum of total_uncompressed_size of the column from all row groups
- total_compressed_size - total compressed bytes size of the column, calculated as the sum of total_compressed_size of the column from all row groups
- space_saved - percent of space saved by compression, calculated as (1 - total_compressed_size/total_uncompressed_size).
- encodings - the list of encodings used for this column
- row_groups - the list of row groups metadata with the next structure:
@ -2229,9 +2229,9 @@ The table below shows supported data types and how they match ClickHouse [data t
| `LIST` | [Array](/docs/en/sql-reference/data-types/array.md) | `LIST` |
| `STRUCT` | [Tuple](/docs/en/sql-reference/data-types/tuple.md) | `STRUCT` |
| `MAP` | [Map](/docs/en/sql-reference/data-types/map.md) | `MAP` |
| `UINT32` | [IPv4](/docs/en/sql-reference/data-types/domains/ipv4.md) | `UINT32` |
| `FIXED_SIZE_BINARY`, `BINARY` | [IPv6](/docs/en/sql-reference/data-types/domains/ipv6.md) | `FIXED_SIZE_BINARY` |
| `FIXED_SIZE_BINARY`, `BINARY` | [Int128/UInt128/Int256/UInt256](/docs/en/sql-reference/data-types/int-uint.md) | `FIXED_SIZE_BINARY` |
| `UINT32` | [IPv4](/docs/en/sql-reference/data-types/ipv4.md) | `UINT32` |
| `FIXED_SIZE_BINARY`, `BINARY` | [IPv6](/docs/en/sql-reference/data-types/ipv6.md) | `FIXED_SIZE_BINARY` |
| `FIXED_SIZE_BINARY`, `BINARY` | [Int128/UInt128/Int256/UInt256](/docs/en/sql-reference/data-types/int-uint.md) | `FIXED_SIZE_BINARY` |
Arrays can be nested and can have a value of the `Nullable` type as an argument. `Tuple` and `Map` types also can be nested.
@ -2297,7 +2297,7 @@ The table below shows supported data types and how they match ClickHouse [data t
| `Struct` | [Tuple](/docs/en/sql-reference/data-types/tuple.md) | `Struct` |
| `Map` | [Map](/docs/en/sql-reference/data-types/map.md) | `Map` |
| `Int` | [IPv4](/docs/en/sql-reference/data-types/int-uint.md) | `Int` |
| `Binary` | [IPv6](/docs/en/sql-reference/data-types/domains/ipv6.md) | `Binary` |
| `Binary` | [IPv6](/docs/en/sql-reference/data-types/ipv6.md) | `Binary` |
| `Binary` | [Int128/UInt128/Int256/UInt256](/docs/en/sql-reference/data-types/int-uint.md) | `Binary` |
| `Binary` | [Decimal256](/docs/en/sql-reference/data-types/decimal.md) | `Binary` |
@ -2510,7 +2510,7 @@ ClickHouse supports reading and writing [MessagePack](https://msgpack.org/) data
| `uint 64` | [DateTime64](/docs/en/sql-reference/data-types/datetime.md) | `uint 64` |
| `fixarray`, `array 16`, `array 32` | [Array](/docs/en/sql-reference/data-types/array.md)/[Tuple](/docs/en/sql-reference/data-types/tuple.md) | `fixarray`, `array 16`, `array 32` |
| `fixmap`, `map 16`, `map 32` | [Map](/docs/en/sql-reference/data-types/map.md) | `fixmap`, `map 16`, `map 32` |
| `uint 32` | [IPv4](/docs/en/sql-reference/data-types/domains/ipv4.md) | `uint 32` |
| `uint 32` | [IPv4](/docs/en/sql-reference/data-types/ipv4.md) | `uint 32` |
| `bin 8` | [String](/docs/en/sql-reference/data-types/string.md) | `bin 8` |
| `int 8` | [Enum8](/docs/en/sql-reference/data-types/enum.md) | `int 8` |
| `bin 8` | [(U)Int128/(U)Int256](/docs/en/sql-reference/data-types/int-uint.md) | `bin 8` |

View File

@ -6,32 +6,43 @@ sidebar_label: Configuration Files
# Configuration Files
ClickHouse supports multi-file configuration management. The main server configuration file is `/etc/clickhouse-server/config.xml` or `/etc/clickhouse-server/config.yaml`. Other files must be in the `/etc/clickhouse-server/config.d` directory. Note, that any configuration file can be written either in XML or YAML, but mixing formats in one file is not supported. For example, you can have main configs as `config.xml` and `users.xml` and write additional files in `config.d` and `users.d` directories in `.yaml`.
The ClickHouse server can be configured with configuration files in XML or YAML syntax. In most installation types, the ClickHouse server runs with `/etc/clickhouse-server/config.xml` as default configuration file but it is also possible to specify the location of the configuration file manually at server startup using command line option `--config-file=` or `-C`. Additional configuration files may be placed into directory `config.d/` relative to the main configuration file, for example into directory `/etc/clickhouse-server/config.d/`. Files in this directory and the main configuration are merged in a preprocessing step before the configuration is applied in ClickHouse server. Configuration files are merged in alphabetical order. To simplify updates and improve modularization, it is best practice to keep the default `config.xml` file unmodified and place additional customization into `config.d/`.
All XML files should have the same root element, usually `<clickhouse>`. As for YAML, `clickhouse:` should not be present, the parser will insert it automatically.
It is possible to mix XML and YAML configuration files, for example you could have a main configuration file `config.xml` and additional configuration files `config.d/network.xml`, `config.d/timezone.yaml` and `config.d/keeper.yaml`. Mixing XML and YAML within a single configuration file is not supported. XML configuration files should use `<clickhouse>...</clickhouse>` as top-level tag. In YAML configuration files, `clickhouse:` is optional, the parser inserts it implicitly if absent.
## Override {#override}
## Overriding Configuration {#override}
Some settings specified in the main configuration file can be overridden in other configuration files:
The merge of configuration files behaves as one intuitively expects: The contents of both files are combined recursively, children with the same name are replaced by the element of the more specific configuration file. The merge can be customized using attributes `replace` and `remove`.
- Attribute `replace` means that the element is replaced by the specified one.
- Attribute `remove` means that the element is deleted.
- The `replace` or `remove` attributes can be specified for the elements of these configuration files.
- If neither is specified, it combines the contents of elements recursively, replacing values of duplicate children.
- If `replace` is specified, it replaces the entire element with the specified one.
- If `remove` is specified, it deletes the element.
To specify that a value of an element should be replaced by the value of an environment variable, you can use attribute `from_env`.
You can also declare attributes as coming from environment variables by using `from_env="VARIABLE_NAME"`:
Example with `$MAX_QUERY_SIZE = 150000`:
```xml
<clickhouse>
<macros>
<replica from_env="REPLICA" />
<layer from_env="LAYER" />
<shard from_env="SHARD" />
</macros>
<profiles>
<default>
<max_query_size from_env="MAX_QUERY_SIZE"/>
</default>
</profiles>
</clickhouse>
```
## Substitution {#substitution}
which is equal to
``` xml
<clickhouse>
<profiles>
<default>
<max_query_size>150000</max_query_size>
</default>
</profiles>
</clickhouse>
```
## Substituting Configuration {#substitution}
The config can also define “substitutions”. If an element has the `incl` attribute, the corresponding substitution from the file will be used as the value. By default, the path to the file with substitutions is `/etc/metrika.xml`. This can be changed in the [include_from](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-include_from) element in the server config. The substitution values are specified in `/clickhouse/substitution_name` elements in this file. If a substitution specified in `incl` does not exist, it is recorded in the log. To prevent ClickHouse from logging missing substitutions, specify the `optional="true"` attribute (for example, settings for [macros](../operations/server-configuration-parameters/settings.md#macros)).

View File

@ -50,7 +50,7 @@ To manage named collections with DDL a user must have the `named_control_collect
```
:::tip
In the above example the `passowrd_sha256_hex` value is the hexadecimal representation of the SHA256 hash of the password. This configuration for the user `default` has the attribute `replace=true` as in the default configuration has a plain text `password` set, and it is not possible to have both plain text and sha256 hex passwords set for a user.
In the above example the `password_sha256_hex` value is the hexadecimal representation of the SHA256 hash of the password. This configuration for the user `default` has the attribute `replace=true` because the default configuration has a plain text `password` set, and it is not possible to have both plain text and sha256 hex passwords set for a user.
:::
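A minimal sketch of the DDL this privilege gates (the collection name and keys are hypothetical):
```sql
-- Hypothetical collection name and keys, shown for illustration only.
CREATE NAMED COLLECTION mymysql_settings AS
    host = 'localhost',
    port = 3306,
    user = 'default',
    password = 'secret';
```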
## Storing named collections in configuration files

View File

@ -1,9 +0,0 @@
---
slug: /en/operations/optimizing-performance/
sidebar_label: Optimizing Performance
sidebar_position: 52
---
# Optimizing Performance
- [Sampling query profiler](../../operations/optimizing-performance/sampling-query-profiler.md)

View File

@ -1,16 +0,0 @@
---
slug: /en/operations/server-configuration-parameters/
sidebar_position: 54
sidebar_label: Server Configuration Parameters
pagination_next: en/operations/server-configuration-parameters/settings
---
# Server Configuration Parameters
This section contains descriptions of server settings that cannot be changed at the session or query level.
These settings are stored in the `config.xml` file on the ClickHouse server.
Other settings are described in the “[Settings](../../operations/settings/index.md#session-settings-intro)” section.
Before studying the settings, read the [Configuration files](../../operations/configuration-files.md#configuration_files) section and note the use of substitutions (the `incl` and `optional` attributes).

View File

@ -7,6 +7,14 @@ description: This section contains descriptions of server settings that cannot b
# Server Settings
This section contains descriptions of server settings that cannot be changed at the session or query level.
These settings are stored in the `config.xml` file on the ClickHouse server.
Other settings are described in the “[Settings](../../operations/settings/index.md#session-settings-intro)” section.
Before studying the settings, read the [Configuration files](../../operations/configuration-files.md#configuration_files) section and note the use of substitutions (the `incl` and `optional` attributes).
## allow_use_jemalloc_memory
Allows to use jemalloc memory.
@ -1967,6 +1975,10 @@ The time zone is necessary for conversions between String and DateTime formats w
<timezone>Asia/Istanbul</timezone>
```
**See also**
- [session_timezone](../settings/settings.md#session_timezone)
## tcp_port {#server_configuration_parameters-tcp_port}
Port for communicating with clients over the TCP protocol.

View File

@ -2941,7 +2941,7 @@ Default value: `0`.
## mutations_sync {#mutations_sync}
Allows to execute `ALTER TABLE ... UPDATE|DELETE` queries ([mutations](../../sql-reference/statements/alter/index.md#mutations)) synchronously.
Allows to execute `ALTER TABLE ... UPDATE|DELETE|MATERIALIZE INDEX|MATERIALIZE PROJECTION|MATERIALIZE COLUMN` queries ([mutations](../../sql-reference/statements/alter/index.md#mutations)) synchronously.
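For example, a sketch with a hypothetical table, waiting for the mutation to finish on all replicas:
```sql
-- Hypothetical table; mutations_sync = 2 waits for all replicas.
ALTER TABLE test_table DELETE WHERE key = 1 SETTINGS mutations_sync = 2;
```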
Possible values:
@ -4251,6 +4251,69 @@ Default value: `0`.
Use this setting only for backward compatibility if your use cases depend on old syntax.
:::
## session_timezone {#session_timezone}
Sets the implicit time zone of the current session or query.
The implicit time zone is the time zone applied to values of type DateTime/DateTime64 which have no explicitly specified time zone.
The setting takes precedence over the globally configured (server-level) implicit time zone.
A value of '' (empty string) means that the implicit time zone of the current session or query is equal to the [server time zone](../server-configuration-parameters/settings.md#server_configuration_parameters-timezone).
You can use functions `timeZone()` and `serverTimeZone()` to get the session time zone and server time zone.
Possible values:
- Any time zone name from `system.time_zones`, e.g. `Europe/Berlin`, `UTC` or `Zulu`
Default value: `''`.
Examples:
```sql
SELECT timeZone(), serverTimeZone() FORMAT TSV
Europe/Berlin Europe/Berlin
```
```sql
SELECT timeZone(), serverTimeZone() SETTINGS session_timezone = 'Asia/Novosibirsk' FORMAT TSV
Asia/Novosibirsk Europe/Berlin
```
Assign session time zone 'America/Denver' to the inner DateTime without explicitly specified time zone:
```sql
SELECT toDateTime64(toDateTime64('1999-12-12 23:23:23.123', 3), 3, 'Europe/Zurich') SETTINGS session_timezone = 'America/Denver' FORMAT TSV
1999-12-13 07:23:23.123
```
:::warning
Not all functions that parse DateTime/DateTime64 respect `session_timezone`. This can lead to subtle errors.
See the following example and explanation.
:::
```sql
CREATE TABLE test_tz (`d` DateTime('UTC')) ENGINE = Memory AS SELECT toDateTime('2000-01-01 00:00:00', 'UTC');
SELECT *, timeZone() FROM test_tz WHERE d = toDateTime('2000-01-01 00:00:00') SETTINGS session_timezone = 'Asia/Novosibirsk'
0 rows in set.
SELECT *, timeZone() FROM test_tz WHERE d = '2000-01-01 00:00:00' SETTINGS session_timezone = 'Asia/Novosibirsk'
┌───────────────────d─┬─timeZone()───────┐
│ 2000-01-01 00:00:00 │ Asia/Novosibirsk │
└─────────────────────┴──────────────────┘
```
This happens due to different parsing pipelines:
- `toDateTime()` without explicitly given time zone used in the first `SELECT` query honors setting `session_timezone` and the global time zone.
- In the second query, a DateTime is parsed from a String, and inherits the type and time zone of the existing column `d`. Thus, setting `session_timezone` and the global time zone are not honored.
**See also**
- [timezone](../server-configuration-parameters/settings.md#server_configuration_parameters-timezone)
## final {#final}
Automatically applies [FINAL](../../sql-reference/statements/select/from.md#final-modifier) modifier to all tables in a query, to tables where [FINAL](../../sql-reference/statements/select/from.md#final-modifier) is applicable, including joined tables and tables in sub-queries, and

View File

@ -71,11 +71,11 @@ Columns:
- 0 — Query was initiated by another query as part of distributed query execution.
- `user` ([String](../../sql-reference/data-types/string.md)) — Name of the user who initiated the current query.
- `query_id` ([String](../../sql-reference/data-types/string.md)) — ID of the query.
- `address` ([IPv6](../../sql-reference/data-types/domains/ipv6.md)) — IP address that was used to make the query.
- `address` ([IPv6](../../sql-reference/data-types/ipv6.md)) — IP address that was used to make the query.
- `port` ([UInt16](../../sql-reference/data-types/int-uint.md)) — The client port that was used to make the query.
- `initial_user` ([String](../../sql-reference/data-types/string.md)) — Name of the user who ran the initial query (for distributed query execution).
- `initial_query_id` ([String](../../sql-reference/data-types/string.md)) — ID of the initial query (for distributed query execution).
- `initial_address` ([IPv6](../../sql-reference/data-types/domains/ipv6.md)) — IP address that the parent query was launched from.
- `initial_address` ([IPv6](../../sql-reference/data-types/ipv6.md)) — IP address that the parent query was launched from.
- `initial_port` ([UInt16](../../sql-reference/data-types/int-uint.md)) — The client port that was used to make the parent query.
- `initial_query_start_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — Initial query starting time (for distributed query execution).
- `initial_query_start_time_microseconds` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — Initial query starting time with microseconds precision (for distributed query execution).

View File

@ -40,11 +40,11 @@ Columns:
- 0 — Query was initiated by another query for distributed query execution.
- `user` ([String](../../sql-reference/data-types/string.md)) — Name of the user who initiated the current query.
- `query_id` ([String](../../sql-reference/data-types/string.md)) — ID of the query.
- `address` ([IPv6](../../sql-reference/data-types/domains/ipv6.md)) — IP address that was used to make the query.
- `address` ([IPv6](../../sql-reference/data-types/ipv6.md)) — IP address that was used to make the query.
- `port` ([UInt16](../../sql-reference/data-types/int-uint.md#uint-ranges)) — The client port that was used to make the query.
- `initial_user` ([String](../../sql-reference/data-types/string.md)) — Name of the user who ran the initial query (for distributed query execution).
- `initial_query_id` ([String](../../sql-reference/data-types/string.md)) — ID of the initial query (for distributed query execution).
- `initial_address` ([IPv6](../../sql-reference/data-types/domains/ipv6.md)) — IP address that the parent query was launched from.
- `initial_address` ([IPv6](../../sql-reference/data-types/ipv6.md)) — IP address that the parent query was launched from.
- `initial_port` ([UInt16](../../sql-reference/data-types/int-uint.md#uint-ranges)) — The client port that was used to make the parent query.
- `interface` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Interface that the query was initiated from. Possible values:
- 1 — TCP.

View File

@ -28,7 +28,7 @@ Columns:
- `profiles` ([Array](../../sql-reference/data-types/array.md)([LowCardinality(String)](../../sql-reference/data-types/lowcardinality.md))) — The list of profiles set for all roles and/or users.
- `roles` ([Array](../../sql-reference/data-types/array.md)([LowCardinality(String)](../../sql-reference/data-types/lowcardinality.md))) — The list of roles to which the profile is applied.
- `settings` ([Array](../../sql-reference/data-types/array.md)([Tuple](../../sql-reference/data-types/tuple.md)([LowCardinality(String)](../../sql-reference/data-types/lowcardinality.md), [String](../../sql-reference/data-types/string.md)))) — Settings that were changed when the client logged in/out.
- `client_address` ([IPv6](../../sql-reference/data-types/domains/ipv6.md)) — The IP address that was used to log in/out.
- `client_address` ([IPv6](../../sql-reference/data-types/ipv6.md)) — The IP address that was used to log in/out.
- `client_port` ([UInt16](../../sql-reference/data-types/int-uint.md)) — The client port that was used to log in/out.
- `interface` ([Enum8](../../sql-reference/data-types/enum.md)) — The interface from which the login was initiated. Possible values:
- `TCP`

View File

@ -11,7 +11,8 @@ Columns:
- `host` ([String](../../sql-reference/data-types/string.md)) — The hostname/IP of the ZooKeeper node that ClickHouse connected to.
- `port` ([String](../../sql-reference/data-types/string.md)) — The port of the ZooKeeper node that ClickHouse connected to.
- `index` ([UInt8](../../sql-reference/data-types/int-uint.md)) — The index of the ZooKeeper node that ClickHouse connected to. The index is from ZooKeeper config.
- `connected_time` ([String](../../sql-reference/data-types/string.md)) — When the connection was established
- `connected_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — When the connection was established
- `session_uptime_elapsed_seconds` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Seconds elapsed since the connection was established
- `is_expired` ([UInt8](../../sql-reference/data-types/int-uint.md)) — Is the current connection expired.
- `keeper_api_version` ([String](../../sql-reference/data-types/string.md)) — Keeper API version.
- `client_id` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Session id of the connection.
@ -23,7 +24,7 @@ SELECT * FROM system.zookeeper_connection;
```
``` text
┌─name──────────────┬─host──────┬─port─┬─index─┬──────connected_time─┬─is_expired─┬─keeper_api_version─┬──────────client_id─┐
│ default_zookeeper │ 127.0.0.1 │ 2181 │     0 │ 2023-05-19 14:30:16 │          0 │                  0 │ 216349144108826660 │
└───────────────────┴───────────┴──────┴───────┴─────────────────────┴────────────┴────────────────────┴────────────────────┘
┌─name────┬─host──────┬─port─┬─index─┬──────connected_time─┬─session_uptime_elapsed_seconds─┬─is_expired─┬─keeper_api_version─┬─client_id─┐
│ default │ 127.0.0.1 │ 9181 │     0 │ 2023-06-15 14:36:01 │                           3058 │          0 │                  3 │         5 │
└─────────┴───────────┴──────┴───────┴─────────────────────┴────────────────────────────────┴────────────┴────────────────────┴───────────┘
```

View File

@ -15,7 +15,7 @@ Columns with request parameters:
- `Finalize` — The connection is lost, no response was received.
- `event_date` ([Date](../../sql-reference/data-types/date.md)) — The date when the event happened.
- `event_time` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — The date and time when the event happened.
- `address` ([IPv6](../../sql-reference/data-types/domains/ipv6.md)) — IP address of ZooKeeper server that was used to make the request.
- `address` ([IPv6](../../sql-reference/data-types/ipv6.md)) — IP address of ZooKeeper server that was used to make the request.
- `port` ([UInt16](../../sql-reference/data-types/int-uint.md)) — The port of ZooKeeper server that was used to make the request.
- `session_id` ([Int64](../../sql-reference/data-types/int-uint.md)) — The session ID that the ZooKeeper server sets for each connection.
- `xid` ([Int32](../../sql-reference/data-types/int-uint.md)) — The ID of the request within the session. This is usually a sequential request number. It is the same for the request row and the paired `response`/`finalize` row.

View File

@ -32,7 +32,7 @@ For example, Decimal32(4) can contain numbers from -99999.9999 to 99999.9999 wit
Internally data is represented as normal signed integers with respective bit width. Real value ranges that can be stored in memory are a bit larger than specified above, which are checked only on conversion from a string.
Because modern CPUs do not support 128-bit integers natively, operations on Decimal128 are emulated. Because of this Decimal128 works significantly slower than Decimal32/Decimal64.
Because modern CPUs do not support 128-bit and 256-bit integers natively, operations on Decimal128 and Decimal256 are emulated. Thus, Decimal128 and Decimal256 work significantly slower than Decimal32/Decimal64.
## Operations and Result Type
@ -59,6 +59,10 @@ Some functions on Decimal return result as Float64 (for example, var or stddev).
During calculations on Decimal, integer overflows might happen. Excessive digits in a fraction are discarded (not rounded). Excessive digits in integer part will lead to an exception.
:::warning
Overflow check is not implemented for Decimal128 and Decimal256. In case of overflow incorrect result is returned, no exception is thrown.
:::
``` sql
SELECT toDecimal32(2, 4) AS x, x / 3
```

View File

@ -28,6 +28,6 @@ ClickHouse data types include:
- **Nested data structures**: A [`Nested` data structure](./nested-data-structures/index.md) is like a table inside a cell
- **Tuples**: A [`Tuple` of elements](./tuple.md), each having an individual type.
- **Nullable**: [`Nullable`](./nullable.md) allows you to store a value as `NULL` when a value is "missing" (instead of the column getting its default value for the data type)
- **IP addresses**: use [`IPv4`](./domains/ipv4.md) and [`IPv6`](./domains/ipv6.md) to efficiently store IP addresses
- **IP addresses**: use [`IPv4`](./ipv4.md) and [`IPv6`](./ipv6.md) to efficiently store IP addresses
- **Geo types**: for [geographical data](./geo.md), including `Point`, `Ring`, `Polygon` and `MultiPolygon`
- **Special data types**: including [`Expression`](./special-data-types/expression.md), [`Set`](./special-data-types/set.md), [`Nothing`](./special-data-types/nothing.md) and [`Interval`](./special-data-types/interval.md)

View File

@ -1,12 +1,12 @@
---
slug: /en/sql-reference/data-types/domains/ipv4
slug: /en/sql-reference/data-types/ipv4
sidebar_position: 59
sidebar_label: IPv4
---
## IPv4
`IPv4` is a domain based on `UInt32` type and serves as a typed replacement for storing IPv4 values. It provides compact storage with the human-friendly input-output format and column type information on inspection.
IPv4 addresses. Stored in 4 bytes as UInt32.
### Basic Usage
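The unchanged usage example is elided by the diff; a minimal sketch of what it covers (the table and values are illustrative):
```sql
-- Illustrative table and values.
CREATE TABLE hits (url String, from IPv4) ENGINE = MergeTree() ORDER BY url;

INSERT INTO hits (url, from) VALUES ('https://clickhouse.com', '116.106.34.242');

SELECT toTypeName(from), hex(from) FROM hits LIMIT 1;
```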
@ -57,25 +57,6 @@ SELECT toTypeName(from), hex(from) FROM hits LIMIT 1;
└──────────────────┴───────────┘
```
Domain values are not implicitly convertible to types other than `UInt32`.
If you want to convert `IPv4` value to a string, you have to do that explicitly with `IPv4NumToString()` function:
``` sql
SELECT toTypeName(s), IPv4NumToString(from) as s FROM hits LIMIT 1;
```
``` text
┌─toTypeName(IPv4NumToString(from))─┬─s──────────────┐
│ String                            │ 183.247.232.58 │
└───────────────────────────────────┴────────────────┘
```
Or cast to a `UInt32` value:
``` sql
SELECT toTypeName(i), CAST(from as UInt32) as i FROM hits LIMIT 1;
```
``` text
┌─toTypeName(CAST(from, 'UInt32'))─┬──────────i─┐
│ UInt32 │ 3086477370 │
└──────────────────────────────────┴────────────┘
```
**See Also**

- [Functions for Working with IPv4 and IPv6 Addresses](../functions/ip-address-functions.md)

View File

@ -1,12 +1,12 @@
---
slug: /en/sql-reference/data-types/domains/ipv6
slug: /en/sql-reference/data-types/ipv6
sidebar_position: 60
sidebar_label: IPv6
---
## IPv6
`IPv6` is a domain based on `FixedString(16)` type and serves as a typed replacement for storing IPv6 values. It provides compact storage with the human-friendly input-output format and column type information on inspection.
IPv6 addresses. Stored in 16 bytes as UInt128 big-endian.
### Basic Usage
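As with IPv4, the unchanged usage example is elided here; a minimal sketch (table and values illustrative, reusing the address shown later in this section):
```sql
-- Illustrative table and values.
CREATE TABLE hits (url String, from IPv6) ENGINE = MergeTree() ORDER BY url;

INSERT INTO hits (url, from) VALUES ('https://clickhouse.com', '2001:44c8:129:2632:33:0:252:2');

SELECT toTypeName(from), hex(from) FROM hits LIMIT 1;
```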
@ -57,27 +57,6 @@ SELECT toTypeName(from), hex(from) FROM hits LIMIT 1;
└──────────────────┴──────────────────────────────────┘
```
Domain values are not implicitly convertible to types other than `FixedString(16)`.
If you want to convert `IPv6` value to a string, you have to do that explicitly with `IPv6NumToString()` function:
**See Also**
``` sql
SELECT toTypeName(s), IPv6NumToString(from) as s FROM hits LIMIT 1;
```
``` text
┌─toTypeName(IPv6NumToString(from))─┬─s─────────────────────────────┐
│ String │ 2001:44c8:129:2632:33:0:252:2 │
└───────────────────────────────────┴───────────────────────────────┘
```
Or cast to a `FixedString(16)` value:
``` sql
SELECT toTypeName(i), CAST(from as FixedString(16)) as i FROM hits LIMIT 1;
```
``` text
┌─toTypeName(CAST(from, 'FixedString(16)'))─┬─i───────┐
│ FixedString(16)                           │ <binary> │
└───────────────────────────────────────────┴─────────┘
```
- [Functions for Working with IPv4 and IPv6 Addresses](../functions/ip-address-functions.md)

View File

@ -139,8 +139,8 @@ makeDateTime32(year, month, day, hour, minute, second[, fraction[, precision[, t
## timeZone
Returns the timezone of the server.
If the function is executed in the context of a distributed table, it generates a normal column with values relevant to each shard, otherwise it produces a constant value.
Returns the timezone of the current session, i.e. the value of setting [session_timezone](../../operations/settings/settings.md#session_timezone).
If the function is executed in the context of a distributed table, then it generates a normal column with values relevant to each shard, otherwise it produces a constant value.
**Syntax**
@ -156,6 +156,33 @@ Alias: `timezone`.
Type: [String](../../sql-reference/data-types/string.md).
**See also**
- [serverTimeZone](#serverTimeZone)
## serverTimeZone
Returns the timezone of the server, i.e. the value of setting [timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone).
If the function is executed in the context of a distributed table, then it generates a normal column with values relevant to each shard. Otherwise, it produces a constant value.
**Syntax**
``` sql
serverTimeZone()
```
Alias: `serverTimezone`.
**Returned value**
- Timezone.
Type: [String](../../sql-reference/data-types/string.md).
**See also**
- [timeZone](#timeZone)
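For illustration, a minimal query comparing the two (a sketch; actual values depend on the `session_timezone` setting and the server's `timezone` configuration):

```sql
SELECT timeZone() AS session_tz, serverTimeZone() AS server_tz;
```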
## toTimeZone
Converts a date or date with time to the specified time zone. Does not change the internal value (the number of Unix seconds) of the data; only the value's time zone attribute and its string representation change.

View File

@ -237,6 +237,43 @@ Result:
└────────────────────────────┘
```
## L2SquaredDistance
Calculates the sum of the squares of the differences between the corresponding elements of two vectors.
**Syntax**
```sql
L2SquaredDistance(vector1, vector2)
```
Alias: `distanceL2Squared`.
**Arguments**
- `vector1` — First vector. [Tuple](../../sql-reference/data-types/tuple.md) or [Array](../../sql-reference/data-types/array.md).
- `vector2` — Second vector. [Tuple](../../sql-reference/data-types/tuple.md) or [Array](../../sql-reference/data-types/array.md).
**Returned value**
Type: [Float](../../sql-reference/data-types/float.md).
**Example**
Query:
```sql
SELECT L2SquaredDistance([1, 2, 3], [0, 0, 0])
```
Result:
```response
┌─L2SquaredDistance([1, 2, 3], [0, 0, 0])─┐
│ 14 │
└─────────────────────────────────────────┘
```
## LinfDistance
Calculates the distance between two points (the values of the vectors are the coordinates) in `L_{inf}` space ([maximum norm](https://en.wikipedia.org/wiki/Norm_(mathematics)#Maximum_norm_(special_case_of:_infinity_norm,_uniform_norm,_or_supremum_norm))).
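A short sketch of `LinfDistance`, which returns the maximum absolute difference between corresponding components:

```sql
SELECT LinfDistance([1, 2, 3], [0, 0, 0]) -- max(|1|, |2|, |3|) = 3
```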

View File

@ -248,7 +248,7 @@ SELECT IPv6CIDRToRange(toIPv6('2001:0db8:0000:85a3:0000:0000:ac1f:8001'), 32);
## toIPv4(string)
An alias to `IPv4StringToNum()` that takes a string form of IPv4 address and returns value of [IPv4](../../sql-reference/data-types/domains/ipv4.md) type, which is binary equal to value returned by `IPv4StringToNum()`.
An alias to `IPv4StringToNum()` that takes a string form of IPv4 address and returns value of [IPv4](../../sql-reference/data-types/ipv4.md) type, which is binary equal to value returned by `IPv4StringToNum()`.
``` sql
WITH
@ -296,7 +296,7 @@ Same as `toIPv6`, but if the IPv6 address has an invalid format, it returns null
## toIPv6
Converts a string form of IPv6 address to [IPv6](../../sql-reference/data-types/domains/ipv6.md) type. If the IPv6 address has an invalid format, returns an empty value.
Converts a string form of IPv6 address to [IPv6](../../sql-reference/data-types/ipv6.md) type. If the IPv6 address has an invalid format, returns an empty value.
Similar to the [IPv6StringToNum](#ipv6stringtonums) function, which converts an IPv6 address to binary format.
If the input string contains a valid IPv4 address, then the IPv6 equivalent of the IPv4 address is returned.
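For instance (a sketch of the IPv4-mapping behavior):

```sql
SELECT toIPv6('127.0.0.1') -- returns ::ffff:127.0.0.1
```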
@ -315,7 +315,7 @@ toIPv6(string)
- IP address.
Type: [IPv6](../../sql-reference/data-types/domains/ipv6.md).
Type: [IPv6](../../sql-reference/data-types/ipv6.md).
**Examples**

View File

@ -4,6 +4,8 @@ sidebar_position: 130
sidebar_label: NLP (experimental)
---
# Natural Language Processing (NLP) Functions
:::note
This is an experimental feature that is currently in development and is not ready for general use. It will change in unpredictable backwards-incompatible ways in future releases. Set `allow_experimental_nlp_functions = 1` to enable it.
:::

View File

@ -232,6 +232,7 @@ ALTER TABLE table_with_ttl MODIFY COLUMN column_ttl REMOVE TTL;
Materializes or updates a column with an expression for a default value (`DEFAULT` or `MATERIALIZED`).
It is useful when adding or updating a column with a complicated expression, because evaluating such an expression directly at `SELECT` time can be expensive.
Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md#mutations).
Syntax:

View File

@ -60,7 +60,7 @@ You can specify how long (in seconds) to wait for inactive replicas to execute a
For all `ALTER` queries, if `alter_sync = 2` and some replicas stay inactive for longer than the time specified in the `replication_wait_for_inactive_replica_timeout` setting, an exception `UNFINISHED` is thrown.
:::
For `ALTER TABLE ... UPDATE|DELETE` queries the synchronicity is defined by the [mutations_sync](/docs/en/operations/settings/settings.md/#mutations_sync) setting.
For `ALTER TABLE ... UPDATE|DELETE|MATERIALIZE INDEX|MATERIALIZE PROJECTION|MATERIALIZE COLUMN` queries the synchronicity is defined by the [mutations_sync](/docs/en/operations/settings/settings.md/#mutations_sync) setting.
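As a hedged sketch (table and column names are illustrative), waiting synchronously for a mutation on all replicas:

```sql
SET mutations_sync = 2;                       -- 2 waits for all replicas; 1 waits only for the current server
ALTER TABLE t UPDATE value = 0 WHERE id = 1;  -- returns after the mutation completes
```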
## Related content

View File

@ -142,19 +142,19 @@ The following operations with [projections](/docs/en/engines/table-engines/merge
## ADD PROJECTION
`ALTER TABLE [db].name ADD PROJECTION [IF NOT EXISTS] name ( SELECT <COLUMN LIST EXPR> [GROUP BY] [ORDER BY] )` - Adds projection description to tables metadata.
`ALTER TABLE [db.]name [ON CLUSTER cluster] ADD PROJECTION [IF NOT EXISTS] name ( SELECT <COLUMN LIST EXPR> [GROUP BY] [ORDER BY] )` - Adds projection description to tables metadata.
## DROP PROJECTION
`ALTER TABLE [db].name DROP PROJECTION [IF EXISTS] name` - Removes projection description from tables metadata and deletes projection files from disk. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md#mutations).
`ALTER TABLE [db.]name [ON CLUSTER cluster] DROP PROJECTION [IF EXISTS] name` - Removes projection description from tables metadata and deletes projection files from disk. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md#mutations).
## MATERIALIZE PROJECTION
`ALTER TABLE [db.]table MATERIALIZE PROJECTION name IN PARTITION partition_name` - The query rebuilds the projection `name` in the partition `partition_name`. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md#mutations).
`ALTER TABLE [db.]table [ON CLUSTER cluster] MATERIALIZE PROJECTION [IF EXISTS] name [IN PARTITION partition_name]` - The query rebuilds the projection `name` in the partition `partition_name`. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md#mutations).
## CLEAR PROJECTION
`ALTER TABLE [db.]table CLEAR PROJECTION [IF EXISTS] name IN PARTITION partition_name` - Deletes projection files from disk without removing description. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md#mutations).
`ALTER TABLE [db.]table [ON CLUSTER cluster] CLEAR PROJECTION [IF EXISTS] name [IN PARTITION partition_name]` - Deletes projection files from disk without removing description. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md#mutations).
The commands `ADD`, `DROP`, and `CLEAR` are lightweight in the sense that they only change metadata or remove files.
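For context, a hedged example of the full cycle (table, projection name, and columns are illustrative):

```sql
ALTER TABLE visits ADD PROJECTION p_agg
(
    SELECT user_id, sum(duration)
    GROUP BY user_id
);

-- rebuild the projection for data inserted before the ADD; runs as a mutation
ALTER TABLE visits MATERIALIZE PROJECTION p_agg;
```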

View File

@ -10,15 +10,25 @@ sidebar_label: INDEX
The following operations are available:
- `ALTER TABLE [db].table_name [ON CLUSTER cluster] ADD INDEX name expression TYPE type [GRANULARITY value] [FIRST|AFTER name]` - Adds index description to tables metadata.
## ADD INDEX
- `ALTER TABLE [db].table_name [ON CLUSTER cluster] DROP INDEX name` - Removes index description from tables metadata and deletes index files from disk. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md#mutations).
`ALTER TABLE [db.]table_name [ON CLUSTER cluster] ADD INDEX [IF NOT EXISTS] name expression TYPE type [GRANULARITY value] [FIRST|AFTER name]` - Adds index description to tables metadata.
- `ALTER TABLE [db.]table_name [ON CLUSTER cluster] MATERIALIZE INDEX name [IN PARTITION partition_name]` - Rebuilds the secondary index `name` for the specified `partition_name`. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md#mutations). If `IN PARTITION` part is omitted then it rebuilds the index for the whole table data.
## DROP INDEX
The first two commands are lightweight in a sense that they only change metadata or remove files.
`ALTER TABLE [db.]table_name [ON CLUSTER cluster] DROP INDEX [IF EXISTS] name` - Removes index description from tables metadata and deletes index files from disk. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md#mutations).
Also, they are replicated, syncing indices metadata via ZooKeeper.
## MATERIALIZE INDEX
`ALTER TABLE [db.]table_name [ON CLUSTER cluster] MATERIALIZE INDEX [IF EXISTS] name [IN PARTITION partition_name]` - Rebuilds the secondary index `name` for the specified `partition_name`. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md#mutations). If `IN PARTITION` part is omitted then it rebuilds the index for the whole table data.
## CLEAR INDEX
`ALTER TABLE [db.]table_name [ON CLUSTER cluster] CLEAR INDEX [IF EXISTS] name [IN PARTITION partition_name]` - Deletes the secondary index files from disk without removing description. Implemented as a [mutation](/docs/en/sql-reference/statements/alter/index.md#mutations).
The commands `ADD`, `DROP`, and `CLEAR` are lightweight in the sense that they only change metadata or remove files.
Also, they are replicated, syncing indices metadata via ClickHouse Keeper or ZooKeeper.
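A hedged sketch of a typical sequence (table, index name, expression, and index type are illustrative):

```sql
ALTER TABLE hits ADD INDEX idx_url url TYPE bloom_filter GRANULARITY 4;
ALTER TABLE hits MATERIALIZE INDEX idx_url; -- builds the index for already existing data
```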
:::note
Index manipulation is supported only for tables with [`*MergeTree`](/docs/en/engines/table-engines/mergetree-family/mergetree.md) engine (including [replicated](/docs/en/engines/table-engines/mergetree-family/replication.md) variants).

View File

@ -82,6 +82,35 @@ LIFETIME(MIN 0 MAX 1000)
LAYOUT(FLAT())
```
:::note
When using the SQL console in [ClickHouse Cloud](https://clickhouse.com), you must specify a user (`default` or any other user with the role `default_role`) and password when creating a dictionary.
:::
```sql
CREATE USER IF NOT EXISTS clickhouse_admin
IDENTIFIED WITH sha256_password BY 'passworD43$x';
GRANT default_role TO clickhouse_admin;
CREATE DATABASE foo_db;
CREATE TABLE foo_db.source_table (
id UInt64,
value String
) ENGINE = MergeTree
PRIMARY KEY id;
CREATE DICTIONARY foo_db.id_value_dictionary
(
id UInt64,
value String
)
PRIMARY KEY id
SOURCE(CLICKHOUSE(TABLE 'source_table' USER 'clickhouse_admin' PASSWORD 'passworD43$x' DB 'foo_db' ))
LAYOUT(FLAT())
LIFETIME(MIN 0 MAX 1000);
```
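Once created, the dictionary can be queried, for example (a sketch using the names above):

```sql
SELECT dictGet('foo_db.id_value_dictionary', 'value', toUInt64(1));
```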
### Create a dictionary from a table in a remote ClickHouse service
Input table (in the remote ClickHouse service) `source_table`:

View File

@ -380,11 +380,15 @@ High compression levels are useful for asymmetric scenarios, like compress once,
`DEFLATE_QPL` — [Deflate compression algorithm](https://github.com/intel/qpl) implemented by Intel® Query Processing Library. Some limitations apply:
- DEFLATE_QPL is experimental and can only be used after setting configuration parameter `allow_experimental_codecs=1`.
- DEFLATE_QPL is disabled by default and can only be used after setting configuration parameter `enable_deflate_qpl_codec = 1`.
- DEFLATE_QPL requires a ClickHouse build compiled with SSE 4.2 instructions (by default, this is the case). Refer to [Build Clickhouse with DEFLATE_QPL](/docs/en/development/building_and_benchmarking_deflate_qpl.md/#Build-Clickhouse-with-DEFLATE_QPL) for more details.
- DEFLATE_QPL works best if the system has an Intel® IAA (In-Memory Analytics Accelerator) offloading device. Refer to [Accelerator Configuration](https://intel.github.io/qpl/documentation/get_started_docs/installation.html#accelerator-configuration) and [Benchmark with DEFLATE_QPL](/docs/en/development/building_and_benchmarking_deflate_qpl.md/#Run-Benchmark-with-DEFLATE_QPL) for more details.
- DEFLATE_QPL-compressed data can only be transferred between ClickHouse nodes compiled with SSE 4.2 enabled.
:::note
DEFLATE_QPL is not available in ClickHouse Cloud.
:::
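A hedged sketch of applying the codec to a column (requires `enable_deflate_qpl_codec = 1`; table and column names are illustrative):

```sql
CREATE TABLE t
(
    s String CODEC(DEFLATE_QPL)
)
ENGINE = MergeTree
ORDER BY tuple();
```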
### Specialized Codecs
These codecs are designed to make compression more effective by using specific features of the data. Some of these codecs do not compress data themselves. Instead, they prepare the data for a general-purpose codec, which then compresses it better than it could without this preparation.

View File

@ -10,7 +10,7 @@ sidebar_label: SET
SET param = value
```
Assigns `value` to the `param` [setting](../../operations/settings/index.md) for the current session. You cannot change [server settings](../../operations/server-configuration-parameters/index.md) this way.
Assigns `value` to the `param` [setting](../../operations/settings/index.md) for the current session. You cannot change [server settings](../../operations/server-configuration-parameters/settings.md) this way.
You can also set all the values from the specified settings profile in a single query.
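For example (a sketch; the profile name is illustrative):

```sql
SET max_threads = 8;        -- change a single setting for the current session
SET profile = 'web_user';   -- apply every setting from the 'web_user' settings profile
```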

View File

@ -1,5 +1,6 @@
---
slug: /en/sql-reference/table-functions/azureBlobStorage
sidebar_position: 10
sidebar_label: azureBlobStorage
keywords: [azure blob storage]
---
@ -34,16 +35,16 @@ A table with the specified structure for reading or writing data in the specifie
Write data into Azure Blob Storage using the following:
```sql
INSERT INTO TABLE FUNCTION azureBlobStorage('http://azurite1:10000/devstoreaccount1',
INSERT INTO TABLE FUNCTION azureBlobStorage('http://azurite1:10000/devstoreaccount1',
'test_container', 'test_{_partition_id}.csv', 'devstoreaccount1', 'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==',
'CSV', 'auto', 'column1 UInt32, column2 UInt32, column3 UInt32') PARTITION BY column3 VALUES (1, 2, 3), (3, 2, 1), (78, 43, 3);
```
And then it can be read using
And then it can be read using
```sql
SELECT * FROM azureBlobStorage('http://azurite1:10000/devstoreaccount1',
'test_container', 'test_1.csv', 'devstoreaccount1', 'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==',
SELECT * FROM azureBlobStorage('http://azurite1:10000/devstoreaccount1',
'test_container', 'test_1.csv', 'devstoreaccount1', 'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==',
'CSV', 'auto', 'column1 UInt32, column2 UInt32, column3 UInt32');
```

View File

@ -1,6 +1,6 @@
---
slug: /en/sql-reference/table-functions/cluster
sidebar_position: 50
sidebar_position: 30
sidebar_label: cluster
title: "cluster, clusterAllReplicas"
---
@ -9,7 +9,7 @@ Allows to access all shards in an existing cluster which configured in `remote_s
`clusterAllReplicas` function — same as `cluster`, but all replicas are queried. Each replica in a cluster is used as a separate shard/connection.
:::note
:::note
All available clusters are listed in the [system.clusters](../../operations/system-tables/clusters.md) table.
:::
@ -23,9 +23,9 @@ clusterAllReplicas('cluster_name', db, table[, sharding_key])
```
**Arguments**
- `cluster_name` Name of a cluster that is used to build a set of addresses and connection parameters to remote and local servers.
- `db.table` or `db`, `table` - Name of a database and a table.
- `sharding_key` - A sharding key. Optional. Needs to be specified if the cluster has more than one shard.
- `cluster_name` Name of a cluster that is used to build a set of addresses and connection parameters to remote and local servers.
- `db.table` or `db`, `table` - Name of a database and a table.
- `sharding_key` - A sharding key. Optional. Needs to be specified if the cluster has more than one shard.
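As a hedged illustration of the two argument forms above (the cluster name must exist in your `remote_servers` configuration):

```sql
SELECT count() FROM cluster('default', system.one);
SELECT count() FROM clusterAllReplicas('default', system, one);
```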
**Returned value**

View File

@ -1,6 +1,7 @@
---
slug: /en/sql-reference/table-functions/deltalake
sidebar_label: DeltaLake
sidebar_position: 45
sidebar_label: deltaLake
---
# deltaLake Table Function

View File

@ -1,6 +1,6 @@
---
slug: /en/sql-reference/table-functions/dictionary
sidebar_position: 54
sidebar_position: 47
sidebar_label: dictionary
title: dictionary
---

View File

@ -1,6 +1,6 @@
---
slug: /en/engines/table-functions/executable
sidebar_position: 55
sidebar_position: 50
sidebar_label: executable
keywords: [udf, user defined function, clickhouse, executable, table, function]
---

View File

@ -1,6 +1,6 @@
---
slug: /en/sql-reference/table-functions/file
sidebar_position: 37
sidebar_position: 60
sidebar_label: file
---

View File

@ -1,6 +1,6 @@
---
slug: /en/sql-reference/table-functions/format
sidebar_position: 56
sidebar_position: 65
sidebar_label: format
---

View File

@ -1,6 +1,6 @@
---
slug: /en/sql-reference/table-functions/gcs
sidebar_position: 45
sidebar_position: 70
sidebar_label: gcs
keywords: [gcs, bucket]
---
@ -16,7 +16,7 @@ gcs(path [,hmac_key, hmac_secret] [,format] [,structure] [,compression])
```
:::tip GCS
The GCS Table Function integrates with Google Cloud Storage by using the GCS XML API and HMAC keys. See the [Google interoperability docs]( https://cloud.google.com/storage/docs/interoperability) for more details about the endpoint and HMAC.
The GCS Table Function integrates with Google Cloud Storage by using the GCS XML API and HMAC keys. See the [Google interoperability docs]( https://cloud.google.com/storage/docs/interoperability) for more details about the endpoint and HMAC.
:::

View File

@ -1,6 +1,6 @@
---
slug: /en/sql-reference/table-functions/generate
sidebar_position: 47
sidebar_position: 75
sidebar_label: generateRandom
---

View File

@ -1,6 +1,6 @@
---
slug: /en/sql-reference/table-functions/hdfs
sidebar_position: 45
sidebar_position: 80
sidebar_label: hdfs
---
@ -79,7 +79,7 @@ SELECT count(*)
FROM hdfs('hdfs://hdfs1:9000/{some,another}_dir/*', 'TSV', 'name String, value UInt32')
```
:::note
:::note
If your listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use `?`.
:::
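For example (a sketch), write `file_{0..9}{0..9}` or `file_??` instead of `file_{00..99}`.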

View File

@ -1,6 +1,6 @@
---
slug: /en/sql-reference/table-functions/hdfsCluster
sidebar_position: 55
sidebar_position: 81
sidebar_label: hdfsCluster
---
@ -50,7 +50,7 @@ SELECT count(*)
FROM hdfsCluster('cluster_simple', 'hdfs://hdfs1:9000/{some,another}_dir/*', 'TSV', 'name String, value UInt32')
```
:::note
:::note
If your listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use `?`.
:::

View File

@ -1,6 +1,7 @@
---
slug: /en/sql-reference/table-functions/hudi
sidebar_label: Hudi
sidebar_position: 85
sidebar_label: hudi
---
# hudi Table Function

View File

@ -1,6 +1,7 @@
---
slug: /en/sql-reference/table-functions/iceberg
sidebar_label: Iceberg
sidebar_position: 90
sidebar_label: iceberg
---
# iceberg Table Function

View File

@ -1,10 +1,10 @@
---
slug: /en/sql-reference/table-functions/
sidebar_label: Table Functions
sidebar_position: 34
sidebar_position: 1
---
# Table Functions
# Table Functions
Table functions are methods for constructing tables.

View File

@ -1,6 +1,6 @@
---
slug: /en/sql-reference/table-functions/input
sidebar_position: 46
sidebar_position: 95
sidebar_label: input
---

View File

@ -1,6 +1,6 @@
---
slug: /en/sql-reference/table-functions/jdbc
sidebar_position: 43
sidebar_position: 100
sidebar_label: jdbc
---

View File

@ -1,6 +1,6 @@
---
slug: /en/sql-reference/table-functions/merge
sidebar_position: 38
sidebar_position: 130
sidebar_label: merge
---
@ -16,7 +16,7 @@ merge('db_name', 'tables_regexp')
**Arguments**
- `db_name` — Possible values:
- database name,
- database name,
- constant expression that returns a string with a database name, for example, `currentDatabase()`,
- `REGEXP(expression)`, where `expression` is a regular expression to match the DB names.

View File

@ -1,6 +1,6 @@
---
slug: /en/sql-reference/table-functions/mongodb
sidebar_position: 42
sidebar_position: 135
sidebar_label: mongodb
---

View File

@ -1,6 +1,6 @@
---
slug: /en/sql-reference/table-functions/mysql
sidebar_position: 42
sidebar_position: 137
sidebar_label: mysql
---

View File

@ -1,6 +1,6 @@
---
slug: /en/sql-reference/table-functions/null
sidebar_position: 53
sidebar_position: 140
sidebar_label: null function
title: 'null'
---

View File

@ -1,6 +1,6 @@
---
slug: /en/sql-reference/table-functions/numbers
sidebar_position: 39
sidebar_position: 145
sidebar_label: numbers
---

View File

@ -1,6 +1,6 @@
---
slug: /en/sql-reference/table-functions/odbc
sidebar_position: 44
sidebar_position: 150
sidebar_label: odbc
---

View File

@ -1,6 +1,6 @@
---
slug: /en/sql-reference/table-functions/postgresql
sidebar_position: 42
sidebar_position: 160
sidebar_label: postgresql
---

View File

@ -1,6 +1,6 @@
---
slug: /en/sql-reference/table-functions/redis
sidebar_position: 43
sidebar_position: 170
sidebar_label: redis
---
@ -31,7 +31,7 @@ redis(host:port, key, structure[, db_index[, password[, pool_size]]])
- `primary` must be specified; it supports only one column in the primary key. The primary key is serialized in binary as the Redis key.
- columns other than the primary key are serialized in binary as the Redis value, in the corresponding order.
- queries with equality or `IN` filters on the key are optimized into a multi-key lookup in Redis; queries without a key filter cause a full table scan, which is a heavy operation.
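A hedged sketch of the call shape (host, structure, and key value are illustrative):

```sql
SELECT * FROM redis('localhost:6379', 'key', 'key String, v1 UInt32') WHERE key = 'k1';
```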

View File

@ -1,6 +1,6 @@
---
slug: /en/sql-reference/table-functions/remote
sidebar_position: 40
sidebar_position: 175
sidebar_label: remote
---
@ -89,10 +89,10 @@ SELECT * FROM remote_table;
```
### Migration of tables from one system to another:
This example uses one table from a sample dataset. The database is `imdb`, and the table is `actors`.
This example uses one table from a sample dataset. The database is `imdb`, and the table is `actors`.
#### On the source ClickHouse system (the system that currently hosts the data)
- Verify the source database and table name (`imdb.actors`)
- Verify the source database and table name (`imdb.actors`)
```sql
show databases
```
@ -114,9 +114,8 @@ This example uses one table from a sample dataset. The database is `imdb`, and
`first_name` String,
`last_name` String,
`gender` FixedString(1))
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
ORDER BY (id, first_name, last_name, gender)
SETTINGS index_granularity = 8192
ENGINE = MergeTree
ORDER BY (id, first_name, last_name, gender);
```
#### On the destination ClickHouse system:
@ -132,9 +131,8 @@ This example uses one table from a sample dataset. The database is `imdb`, and
`first_name` String,
`last_name` String,
`gender` FixedString(1))
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
ORDER BY (id, first_name, last_name, gender)
SETTINGS index_granularity = 8192
ENGINE = MergeTree
ORDER BY (id, first_name, last_name, gender);
```
#### Back on the source deployment:
@ -142,7 +140,7 @@ This example uses one table from a sample dataset. The database is `imdb`, and
Insert into the new database and table created on the remote system. You will need the host, port, username, password, destination database, and destination table.
```sql
INSERT INTO FUNCTION
remoteSecure('remote.clickhouse.cloud:9440', 'imdb.actors', 'USER', 'PASSWORD', rand())
remoteSecure('remote.clickhouse.cloud:9440', 'imdb.actors', 'USER', 'PASSWORD')
SELECT * from imdb.actors
```

View File

@ -1,6 +1,6 @@
---
slug: /en/sql-reference/table-functions/s3
sidebar_position: 45
sidebar_position: 180
sidebar_label: s3
keywords: [s3, gcs, bucket]
---
@ -33,7 +33,7 @@ For GCS, substitute your HMAC key and HMAC secret where you see `aws_access_key_
and not ~~https://storage.cloud.google.com~~.
:::
- `NOSIGN` - If this keyword is provided in place of credentials, all the requests will not be signed.
- `NOSIGN` - If this keyword is provided in place of credentials, all the requests will not be signed.
- `format` — The [format](../../interfaces/formats.md#formats) of the file.
- `structure` — Structure of the table. Format `'column1_name column1_type, column2_name column2_type, ...'`.
- `compression` — Parameter is optional. Supported values: `none`, `gzip/gz`, `brotli/br`, `xz/LZMA`, `zstd/zst`. By default, it will autodetect compression by file extension.
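For instance, a hedged call shape combining these parameters (the bucket URL is illustrative):

```sql
SELECT count()
FROM s3('https://example-bucket.s3.amazonaws.com/data/*.csv.gz', NOSIGN, 'CSV', 'name String, value UInt32', 'gzip');
```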

View File

@ -1,6 +1,6 @@
---
slug: /en/sql-reference/table-functions/s3Cluster
sidebar_position: 55
sidebar_position: 181
sidebar_label: s3Cluster
title: "s3Cluster Table Function"
---
@ -31,18 +31,18 @@ Select the data from all the files in the `/root/data/clickhouse` and `/root/dat
``` sql
SELECT * FROM s3Cluster(
'cluster_simple',
'http://minio1:9001/root/data/{clickhouse,database}/*',
'minio',
'minio123',
'CSV',
'cluster_simple',
'http://minio1:9001/root/data/{clickhouse,database}/*',
'minio',
'minio123',
'CSV',
'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))'
) ORDER BY (name, value, polygon);
```
Count the total number of rows in all files in the cluster `cluster_simple`:
:::tip
:::tip
If your listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use `?`.
:::

View File

@ -1,19 +1,19 @@
---
slug: /en/sql-reference/table-functions/sqlite
sidebar_position: 55
sidebar_position: 185
sidebar_label: sqlite
title: sqlite
---
Allows to perform queries on data stored in an [SQLite](../../engines/database-engines/sqlite.md) database.
**Syntax**
**Syntax**
``` sql
sqlite('db_path', 'table_name')
```
**Arguments**
**Arguments**
- `db_path` — Path to a file with an SQLite database. [String](../../sql-reference/data-types/string.md).
- `table_name` — Name of a table in the SQLite database. [String](../../sql-reference/data-types/string.md).
@ -40,6 +40,6 @@ Result:
└───────┴──────┘
```
**See Also**
**See Also**
- [SQLite](../../engines/table-engines/integrations/sqlite.md) table engine

View File

@ -1,6 +1,6 @@
---
slug: /en/sql-reference/table-functions/url
sidebar_position: 41
sidebar_position: 200
sidebar_label: url
---

View File

@ -1,6 +1,6 @@
---
slug: /en/sql-reference/table-functions/urlCluster
sidebar_position: 55
sidebar_position: 201
sidebar_label: urlCluster
---

View File

@ -1,6 +1,6 @@
---
slug: /en/sql-reference/table-functions/view
sidebar_position: 51
sidebar_position: 210
sidebar_label: view
title: view
---

View File

@ -1,453 +1,6 @@
agg_functions/combinators.md query-language/agg-functions/combinators.md
agg_functions/index.md query-language/agg-functions/index.md
agg_functions/parametric_functions.md query-language/agg-functions/parametric-functions.md
agg_functions/reference.md query-language/agg-functions/reference.md
changelog/2017.md whats-new/changelog/2017.md
changelog/2018.md whats-new/changelog/2018.md
changelog/2019.md whats-new/changelog/2019.md
changelog/index.md whats-new/changelog/index.md
commercial/cloud.md https://clickhouse.com/cloud/
data_types/array.md sql-reference/data-types/array.md
data_types/boolean.md sql-reference/data-types/boolean.md
data_types/date.md sql-reference/data-types/date.md
data_types/datetime.md sql-reference/data-types/datetime.md
data_types/datetime64.md sql-reference/data-types/datetime64.md
data_types/decimal.md sql-reference/data-types/decimal.md
data_types/domains/ipv4.md sql-reference/data-types/domains/ipv4.md
data_types/domains/ipv6.md sql-reference/data-types/domains/ipv6.md
data_types/domains/overview.md sql-reference/data-types/domains/overview.md
data_types/enum.md sql-reference/data-types/enum.md
data_types/fixedstring.md sql-reference/data-types/fixedstring.md
data_types/float.md sql-reference/data-types/float.md
data_types/index.md sql-reference/data-types/index.md
data_types/int_uint.md sql-reference/data-types/int-uint.md
data_types/nested_data_structures/aggregatefunction.md sql-reference/data-types/aggregatefunction.md
data_types/nested_data_structures/index.md sql-reference/data-types/nested-data-structures/index.md
data_types/nested_data_structures/nested.md sql-reference/data-types/nested-data-structures/nested.md
data_types/nullable.md sql-reference/data-types/nullable.md
data_types/special_data_types/expression.md sql-reference/data-types/special-data-types/expression.md
data_types/special_data_types/index.md sql-reference/data-types/special-data-types/index.md
data_types/special_data_types/interval.md sql-reference/data-types/special-data-types/interval.md
data_types/special_data_types/nothing.md sql-reference/data-types/special-data-types/nothing.md
data_types/special_data_types/set.md sql-reference/data-types/special-data-types/set.md
data_types/string.md sql-reference/data-types/string.md
data_types/tuple.md sql-reference/data-types/tuple.md
data_types/uuid.md sql-reference/data-types/uuid.md
database_engines/index.md engines/database-engines/index.md
database_engines/lazy.md engines/database-engines/lazy.md
database_engines/mysql.md engines/database-engines/mysql.md
development/browse_code.md development/browse-code.md
development/build_cross_arm.md development/build-cross-arm.md
development/build_cross_osx.md development/build-cross-osx.md
development/build_osx.md development/build-osx.md
development/developer_instruction.md development/developer-instruction.md
dicts/external_dicts.md query-language/dicts/external-dicts.md
dicts/external_dicts_dict.md query-language/dicts/external-dicts-dict.md
dicts/external_dicts_dict_layout.md query-language/dicts/external-dicts-dict-layout.md
dicts/external_dicts_dict_lifetime.md query-language/dicts/external-dicts-dict-lifetime.md
dicts/external_dicts_dict_sources.md query-language/dicts/external-dicts-dict-sources.md
dicts/external_dicts_dict_structure.md query-language/dicts/external-dicts-dict-structure.md
dicts/index.md query-language/dicts/index.md
dicts/internal_dicts.md query-language/dicts/internal-dicts.md
engines/database_engines/index.md engines/database-engines/index.md
engines/database_engines/lazy.md engines/database-engines/lazy.md
engines/database_engines/mysql.md engines/database-engines/mysql.md
engines/table-engines/log-family/log-family.md engines/table-engines/log-family/index.md
engines/table_engines/index.md engines/table-engines/index.md
engines/table_engines/integrations/hdfs.md engines/table-engines/integrations/hdfs.md
engines/table_engines/integrations/index.md engines/table-engines/integrations/index.md
engines/table_engines/integrations/jdbc.md engines/table-engines/integrations/jdbc.md
engines/table_engines/integrations/kafka.md engines/table-engines/integrations/kafka.md
engines/table_engines/integrations/mysql.md engines/table-engines/integrations/mysql.md
engines/table_engines/integrations/odbc.md engines/table-engines/integrations/odbc.md
engines/table_engines/log_family/index.md engines/table-engines/log-family/index.md
engines/table_engines/log_family/log.md engines/table-engines/log-family/log.md
engines/table_engines/log_family/log_family.md engines/table-engines/log-family/log-family.md
engines/table_engines/log_family/stripelog.md engines/table-engines/log-family/stripelog.md
engines/table_engines/log_family/tinylog.md engines/table-engines/log-family/tinylog.md
engines/table_engines/mergetree_family/aggregatingmergetree.md engines/table-engines/mergetree-family/aggregatingmergetree.md
engines/table_engines/mergetree_family/collapsingmergetree.md engines/table-engines/mergetree-family/collapsingmergetree.md
engines/table_engines/mergetree_family/custom_partitioning_key.md engines/table-engines/mergetree-family/custom-partitioning-key.md
engines/table_engines/mergetree_family/graphitemergetree.md engines/table-engines/mergetree-family/graphitemergetree.md
engines/table_engines/mergetree_family/index.md engines/table-engines/mergetree-family/index.md
engines/table_engines/mergetree_family/mergetree.md engines/table-engines/mergetree-family/mergetree.md
engines/table_engines/mergetree_family/replacingmergetree.md engines/table-engines/mergetree-family/replacingmergetree.md
engines/table_engines/mergetree_family/replication.md engines/table-engines/mergetree-family/replication.md
engines/table_engines/mergetree_family/summingmergetree.md engines/table-engines/mergetree-family/summingmergetree.md
engines/table_engines/mergetree_family/versionedcollapsingmergetree.md engines/table-engines/mergetree-family/versionedcollapsingmergetree.md
engines/table_engines/special/buffer.md engines/table-engines/special/buffer.md
engines/table_engines/special/dictionary.md engines/table-engines/special/dictionary.md
engines/table_engines/special/distributed.md engines/table-engines/special/distributed.md
engines/table_engines/special/external_data.md engines/table-engines/special/external-data.md
engines/table_engines/special/file.md engines/table-engines/special/file.md
engines/table_engines/special/generate.md engines/table-engines/special/generate.md
engines/table_engines/special/index.md engines/table-engines/special/index.md
engines/table_engines/special/join.md engines/table-engines/special/join.md
engines/table_engines/special/materializedview.md engines/table-engines/special/materializedview.md
engines/table_engines/special/memory.md engines/table-engines/special/memory.md
engines/table_engines/special/merge.md engines/table-engines/special/merge.md
engines/table_engines/special/null.md engines/table-engines/special/null.md
engines/table_engines/special/set.md engines/table-engines/special/set.md
engines/table_engines/special/url.md engines/table-engines/special/url.md
engines/table_engines/special/view.md engines/table-engines/special/view.md
extended_roadmap.md whats-new/extended-roadmap.md
formats.md interfaces/formats.md
formats/capnproto.md interfaces/formats.md
formats/csv.md interfaces/formats.md
formats/csvwithnames.md interfaces/formats.md
formats/json.md interfaces/formats.md
formats/jsoncompact.md interfaces/formats.md
formats/jsoneachrow.md interfaces/formats.md
formats/native.md interfaces/formats.md
formats/null.md interfaces/formats.md
formats/pretty.md interfaces/formats.md
formats/prettycompact.md interfaces/formats.md
formats/prettycompactmonoblock.md interfaces/formats.md
formats/prettynoescapes.md interfaces/formats.md
formats/prettyspace.md interfaces/formats.md
formats/rowbinary.md interfaces/formats.md
formats/tabseparated.md interfaces/formats.md
formats/tabseparatedraw.md interfaces/formats.md
formats/tabseparatedwithnames.md interfaces/formats.md
formats/tabseparatedwithnamesandtypes.md interfaces/formats.md
formats/tskv.md interfaces/formats.md
formats/values.md interfaces/formats.md
formats/vertical.md interfaces/formats.md
formats/verticalraw.md interfaces/formats.md
formats/xml.md interfaces/formats.md
functions/arithmetic_functions.md query-language/functions/arithmetic-functions.md
functions/array_functions.md query-language/functions/array-functions.md
functions/array_join.md query-language/functions/array-join.md
functions/bit_functions.md query-language/functions/bit-functions.md
functions/bitmap_functions.md query-language/functions/bitmap-functions.md
functions/comparison_functions.md query-language/functions/comparison-functions.md
functions/conditional_functions.md query-language/functions/conditional-functions.md
functions/date_time_functions.md query-language/functions/date-time-functions.md
functions/encoding_functions.md query-language/functions/encoding-functions.md
functions/ext_dict_functions.md query-language/functions/ext-dict-functions.md
functions/hash_functions.md query-language/functions/hash-functions.md
functions/higher_order_functions.md query-language/functions/higher-order-functions.md
functions/in_functions.md query-language/functions/in-functions.md
functions/index.md query-language/functions/index.md
functions/ip_address_functions.md query-language/functions/ip-address-functions.md
functions/json_functions.md query-language/functions/json-functions.md
functions/logical_functions.md query-language/functions/logical-functions.md
functions/math_functions.md query-language/functions/math-functions.md
functions/other_functions.md query-language/functions/other-functions.md
functions/random_functions.md query-language/functions/random-functions.md
functions/rounding_functions.md query-language/functions/rounding-functions.md
functions/splitting_merging_functions.md query-language/functions/splitting-merging-functions.md
functions/string_functions.md query-language/functions/string-functions.md
functions/string_replace_functions.md query-language/functions/string-replace-functions.md
functions/string_search_functions.md query-language/functions/string-search-functions.md
functions/type_conversion_functions.md query-language/functions/type-conversion-functions.md
functions/url_functions.md query-language/functions/url-functions.md
functions/ym_dict_functions.md query-language/functions/ym-dict-functions.md
getting_started/example_datasets/amplab_benchmark.md getting-started/example-datasets/amplab-benchmark.md
getting_started/example_datasets/criteo.md getting-started/example-datasets/criteo.md
getting_started/example_datasets/index.md getting-started/example-datasets/index.md
getting_started/example_datasets/metrica.md getting-started/example-datasets/metrica.md
getting_started/example_datasets/nyc_taxi.md getting-started/example-datasets/nyc-taxi.md
getting_started/example_datasets/ontime.md getting-started/example-datasets/ontime.md
getting_started/example_datasets/star_schema.md getting-started/example-datasets/star-schema.md
getting_started/example_datasets/wikistat.md getting-started/example-datasets/wikistat.md
getting_started/index.md getting-started/index.md
getting_started/install.md getting-started/install.md
getting_started/playground.md getting-started/playground.md
getting_started/tutorial.md getting-started/tutorial.md
images/column_oriented.gif images/column-oriented.gif
images/row_oriented.gif images/row-oriented.gif
interfaces/http_interface.md interfaces/http.md
interfaces/third-party/client_libraries.md interfaces/third-party/client-libraries.md
interfaces/third-party_client_libraries.md interfaces/third-party/client-libraries.md
interfaces/third-party_gui.md interfaces/third-party/gui.md
interfaces/third_party/index.md interfaces/third-party/index.md
introduction/index.md
introduction/distinctive_features.md introduction/distinctive-features.md
introduction/features_considered_disadvantages.md introduction/distinctive-features.md
introduction/possible_silly_questions.md faq/general.md
introduction/ya_metrika_task.md introduction/history.md
operations/access_rights.md operations/access-rights.md
operations/configuration_files.md operations/configuration-files.md
operations/optimizing_performance/index.md operations/optimizing-performance/index.md
operations/optimizing_performance/sampling_query_profiler.md operations/optimizing-performance/sampling-query-profiler.md
operations/performance/sampling_query_profiler.md operations/optimizing-performance/sampling-query-profiler.md
operations/performance_test.md operations/performance-test.md
operations/server_configuration_parameters/index.md operations/server-configuration-parameters/index.md
operations/server_configuration_parameters/settings.md operations/server-configuration-parameters/settings.md
operations/server_settings/index.md operations/server-configuration-parameters/index.md
operations/server_settings/settings.md operations/server-configuration-parameters/settings.md
operations/settings/constraints_on_settings.md operations/settings/constraints-on-settings.md
operations/settings/permissions_for_queries.md operations/settings/permissions-for-queries.md
operations/settings/query_complexity.md operations/settings/query-complexity.md
operations/settings/settings_profiles.md operations/settings/settings-profiles.md
operations/settings/settings_users.md operations/settings/settings-users.md
operations/system_tables.md operations/system-tables.md
operations/table_engines/aggregatingmergetree.md engines/table-engines/mergetree-family/aggregatingmergetree.md
operations/table_engines/buffer.md engines/table-engines/special/buffer.md
operations/table_engines/collapsingmergetree.md engines/table-engines/mergetree-family/collapsingmergetree.md
operations/table_engines/custom_partitioning_key.md engines/table-engines/mergetree-family/custom-partitioning-key.md
operations/table_engines/dictionary.md engines/table-engines/special/dictionary.md
operations/table_engines/distributed.md engines/table-engines/special/distributed.md
operations/table_engines/external_data.md engines/table-engines/special/external-data.md
operations/table_engines/file.md engines/table-engines/special/file.md
operations/table_engines/generate.md engines/table-engines/special/generate.md
operations/table_engines/graphitemergetree.md engines/table-engines/mergetree-family/graphitemergetree.md
operations/table_engines/hdfs.md engines/table-engines/integrations/hdfs.md
operations/table_engines/index.md engines/table-engines/index.md
operations/table_engines/jdbc.md engines/table-engines/integrations/jdbc.md
operations/table_engines/join.md engines/table-engines/special/join.md
operations/table_engines/kafka.md engines/table-engines/integrations/kafka.md
operations/table_engines/log.md engines/table-engines/log-family/log.md
operations/table_engines/log_family.md engines/table-engines/log-family/log-family.md
operations/table_engines/materializedview.md engines/table-engines/special/materializedview.md
operations/table_engines/memory.md engines/table-engines/special/memory.md
operations/table_engines/merge.md engines/table-engines/special/merge.md
operations/table_engines/mergetree.md engines/table-engines/mergetree-family/mergetree.md
operations/table_engines/mysql.md engines/table-engines/integrations/mysql.md
operations/table_engines/null.md engines/table-engines/special/null.md
operations/table_engines/odbc.md engines/table-engines/integrations/odbc.md
operations/table_engines/replacingmergetree.md engines/table-engines/mergetree-family/replacingmergetree.md
operations/table_engines/replication.md engines/table-engines/mergetree-family/replication.md
operations/table_engines/set.md engines/table-engines/special/set.md
operations/table_engines/stripelog.md engines/table-engines/log-family/stripelog.md
operations/table_engines/summingmergetree.md engines/table-engines/mergetree-family/summingmergetree.md
operations/table_engines/tinylog.md engines/table-engines/log-family/tinylog.md
operations/table_engines/url.md engines/table-engines/special/url.md
operations/table_engines/versionedcollapsingmergetree.md engines/table-engines/mergetree-family/versionedcollapsingmergetree.md
operations/table_engines/view.md engines/table-engines/special/view.md
operations/utils/clickhouse-benchmark.md operations/utilities/clickhouse-benchmark.md
operations/utils/clickhouse-copier.md operations/utilities/clickhouse-copier.md
operations/utils/clickhouse-local.md operations/utilities/clickhouse-local.md
operations/utils/index.md operations/utilities/index.md
query_language/agg_functions/combinators.md sql-reference/aggregate-functions/combinators.md
query_language/agg_functions/index.md sql-reference/aggregate-functions/index.md
query_language/agg_functions/parametric_functions.md sql-reference/aggregate-functions/parametric-functions.md
query_language/agg_functions/reference.md sql-reference/aggregate-functions/reference.md
query_language/alter.md sql-reference/statements/alter.md
query_language/create.md sql-reference/statements/create.md
query_language/dicts/external_dicts.md sql-reference/dictionaries/external-dictionaries/external-dicts.md
query_language/dicts/external_dicts_dict.md sql-reference/dictionaries/external-dictionaries/external-dicts-dict.md
query_language/dicts/external_dicts_dict_hierarchical.md sql-reference/dictionaries/external-dictionaries/external-dicts-dict-hierarchical.md
query_language/dicts/external_dicts_dict_layout.md sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md
query_language/dicts/external_dicts_dict_lifetime.md sql-reference/dictionaries/external-dictionaries/external-dicts-dict-lifetime.md
query_language/dicts/external_dicts_dict_sources.md sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md
query_language/dicts/external_dicts_dict_structure.md sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md
query_language/dicts/index.md sql-reference/dictionaries/index.md
query_language/dicts/internal_dicts.md sql-reference/dictionaries/internal-dicts.md
query_language/functions/arithmetic_functions.md sql-reference/functions/arithmetic-functions.md
query_language/functions/array_functions.md sql-reference/functions/array-functions.md
query_language/functions/array_join.md sql-reference/functions/array-join.md
query_language/functions/bit_functions.md sql-reference/functions/bit-functions.md
query_language/functions/bitmap_functions.md sql-reference/functions/bitmap-functions.md
query_language/functions/comparison_functions.md sql-reference/functions/comparison-functions.md
query_language/functions/conditional_functions.md sql-reference/functions/conditional-functions.md
query_language/functions/date_time_functions.md sql-reference/functions/date-time-functions.md
query_language/functions/encoding_functions.md sql-reference/functions/encoding-functions.md
query_language/functions/ext_dict_functions.md sql-reference/functions/ext-dict-functions.md
query_language/functions/functions_for_nulls.md sql-reference/functions/functions-for-nulls.md
query_language/functions/geo.md sql-reference/functions/geo.md
query_language/functions/hash_functions.md sql-reference/functions/hash-functions.md
query_language/functions/higher_order_functions.md sql-reference/functions/higher-order-functions.md
query_language/functions/in_functions.md sql-reference/functions/in-functions.md
query_language/functions/index.md sql-reference/functions/index.md
query_language/functions/introspection.md sql-reference/functions/introspection.md
query_language/functions/ip_address_functions.md sql-reference/functions/ip-address-functions.md
query_language/functions/json_functions.md sql-reference/functions/json-functions.md
query_language/functions/logical_functions.md sql-reference/functions/logical-functions.md
query_language/functions/machine_learning_functions.md sql-reference/functions/machine-learning-functions.md
query_language/functions/math_functions.md sql-reference/functions/math-functions.md
query_language/functions/other_functions.md sql-reference/functions/other-functions.md
query_language/functions/random_functions.md sql-reference/functions/random-functions.md
query_language/functions/rounding_functions.md sql-reference/functions/rounding-functions.md
query_language/functions/splitting_merging_functions.md sql-reference/functions/splitting-merging-functions.md
query_language/functions/string_functions.md sql-reference/functions/string-functions.md
query_language/functions/string_replace_functions.md sql-reference/functions/string-replace-functions.md
query_language/functions/string_search_functions.md sql-reference/functions/string-search-functions.md
query_language/functions/type_conversion_functions.md sql-reference/functions/type-conversion-functions.md
query_language/functions/url_functions.md sql-reference/functions/url-functions.md
query_language/functions/uuid_functions.md sql-reference/functions/uuid-functions.md
query_language/functions/ym_dict_functions.md sql-reference/functions/ym-dict-functions.md
query_language/index.md sql-reference/index.md
query_language/insert_into.md sql-reference/statements/insert-into.md
query_language/misc.md sql-reference/statements/misc.md
query_language/operators.md sql-reference/operators.md
query_language/queries.md query-language.md
query_language/select.md sql-reference/statements/select.md
query_language/show.md sql-reference/statements/show.md
query_language/syntax.md sql-reference/syntax.md
query_language/system.md sql-reference/statements/system.md
query_language/table_functions/file.md sql-reference/table-functions/file.md
query_language/table_functions/generate.md sql-reference/table-functions/generate.md
query_language/table_functions/hdfs.md sql-reference/table-functions/hdfs.md
query_language/table_functions/index.md sql-reference/table-functions/index.md
query_language/table_functions/input.md sql-reference/table-functions/input.md
query_language/table_functions/jdbc.md sql-reference/table-functions/jdbc.md
query_language/table_functions/merge.md sql-reference/table-functions/merge.md
query_language/table_functions/mysql.md sql-reference/table-functions/mysql.md
query_language/table_functions/numbers.md sql-reference/table-functions/numbers.md
query_language/table_functions/odbc.md sql-reference/table-functions/odbc.md
query_language/table_functions/remote.md sql-reference/table-functions/remote.md
query_language/table_functions/url.md sql-reference/table-functions/url.md
roadmap.md whats-new/roadmap.md
security_changelog.md whats-new/security-changelog.md
sql-reference/data-types/domains/overview.md sql-reference/data-types/domains/index.md
sql_reference/aggregate_functions/combinators.md sql-reference/aggregate-functions/combinators.md
sql_reference/aggregate_functions/index.md sql-reference/aggregate-functions/index.md
sql_reference/aggregate_functions/parametric_functions.md sql-reference/aggregate-functions/parametric-functions.md
sql_reference/aggregate_functions/reference.md sql-reference/aggregate-functions/reference.md
sql_reference/ansi.md sql-reference/ansi.md
sql_reference/data_types/aggregatefunction.md sql-reference/data-types/aggregatefunction.md
sql_reference/data_types/array.md sql-reference/data-types/array.md
sql_reference/data_types/boolean.md sql-reference/data-types/boolean.md
sql_reference/data_types/date.md sql-reference/data-types/date.md
sql_reference/data_types/datetime.md sql-reference/data-types/datetime.md
sql_reference/data_types/datetime64.md sql-reference/data-types/datetime64.md
sql_reference/data_types/decimal.md sql-reference/data-types/decimal.md
sql_reference/data_types/domains/index.md sql-reference/data-types/domains/index.md
sql_reference/data_types/domains/ipv4.md sql-reference/data-types/domains/ipv4.md
sql_reference/data_types/domains/ipv6.md sql-reference/data-types/domains/ipv6.md
sql_reference/data_types/domains/overview.md sql-reference/data-types/domains/overview.md
sql_reference/data_types/enum.md sql-reference/data-types/enum.md
sql_reference/data_types/fixedstring.md sql-reference/data-types/fixedstring.md
sql_reference/data_types/float.md sql-reference/data-types/float.md
sql_reference/data_types/index.md sql-reference/data-types/index.md
sql_reference/data_types/int_uint.md sql-reference/data-types/int-uint.md
sql_reference/data_types/nested_data_structures/index.md sql-reference/data-types/nested-data-structures/index.md
sql_reference/data_types/nested_data_structures/nested.md sql-reference/data-types/nested-data-structures/nested.md
sql_reference/data_types/nullable.md sql-reference/data-types/nullable.md
sql_reference/data_types/simpleaggregatefunction.md sql-reference/data-types/simpleaggregatefunction.md
sql_reference/data_types/special_data_types/expression.md sql-reference/data-types/special-data-types/expression.md
sql_reference/data_types/special_data_types/index.md sql-reference/data-types/special-data-types/index.md
sql_reference/data_types/special_data_types/interval.md sql-reference/data-types/special-data-types/interval.md
sql_reference/data_types/special_data_types/nothing.md sql-reference/data-types/special-data-types/nothing.md
sql_reference/data_types/special_data_types/set.md sql-reference/data-types/special-data-types/set.md
sql_reference/data_types/string.md sql-reference/data-types/string.md
sql_reference/data_types/tuple.md sql-reference/data-types/tuple.md
sql_reference/data_types/uuid.md sql-reference/data-types/uuid.md
sql_reference/dictionaries/external_dictionaries/external_dicts.md sql-reference/dictionaries/external-dictionaries/external-dicts.md
sql_reference/dictionaries/external_dictionaries/external_dicts_dict.md sql-reference/dictionaries/external-dictionaries/external-dicts-dict.md
sql_reference/dictionaries/external_dictionaries/external_dicts_dict_hierarchical.md sql-reference/dictionaries/external-dictionaries/external-dicts-dict-hierarchical.md
sql_reference/dictionaries/external_dictionaries/external_dicts_dict_layout.md sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md
sql_reference/dictionaries/external_dictionaries/external_dicts_dict_lifetime.md sql-reference/dictionaries/external-dictionaries/external-dicts-dict-lifetime.md
sql_reference/dictionaries/external_dictionaries/external_dicts_dict_sources.md sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md
sql_reference/dictionaries/external_dictionaries/external_dicts_dict_structure.md sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md
sql_reference/dictionaries/external_dictionaries/index.md sql-reference/dictionaries/external-dictionaries/index.md
sql_reference/dictionaries/index.md sql-reference/dictionaries/index.md
sql_reference/dictionaries/internal_dicts.md sql-reference/dictionaries/internal-dicts.md
sql_reference/functions/arithmetic_functions.md sql-reference/functions/arithmetic-functions.md
sql_reference/functions/array_functions.md sql-reference/functions/array-functions.md
sql_reference/functions/array_join.md sql-reference/functions/array-join.md
sql_reference/functions/bit_functions.md sql-reference/functions/bit-functions.md
sql_reference/functions/bitmap_functions.md sql-reference/functions/bitmap-functions.md
sql_reference/functions/comparison_functions.md sql-reference/functions/comparison-functions.md
sql_reference/functions/conditional_functions.md sql-reference/functions/conditional-functions.md
sql_reference/functions/date_time_functions.md sql-reference/functions/date-time-functions.md
sql_reference/functions/encoding_functions.md sql-reference/functions/encoding-functions.md
sql_reference/functions/ext_dict_functions.md sql-reference/functions/ext-dict-functions.md
sql_reference/functions/functions_for_nulls.md sql-reference/functions/functions-for-nulls.md
sql_reference/functions/geo.md sql-reference/functions/geo.md
sql_reference/functions/hash_functions.md sql-reference/functions/hash-functions.md
sql_reference/functions/higher_order_functions.md sql-reference/functions/higher-order-functions.md
sql_reference/functions/in_functions.md sql-reference/functions/in-functions.md
sql_reference/functions/index.md sql-reference/functions/index.md
sql_reference/functions/introspection.md sql-reference/functions/introspection.md
sql_reference/functions/ip_address_functions.md sql-reference/functions/ip-address-functions.md
sql_reference/functions/json_functions.md sql-reference/functions/json-functions.md
sql_reference/functions/logical_functions.md sql-reference/functions/logical-functions.md
sql_reference/functions/machine_learning_functions.md sql-reference/functions/machine-learning-functions.md
sql_reference/functions/math_functions.md sql-reference/functions/math-functions.md
sql_reference/functions/other_functions.md sql-reference/functions/other-functions.md
sql_reference/functions/random_functions.md sql-reference/functions/random-functions.md
sql_reference/functions/rounding_functions.md sql-reference/functions/rounding-functions.md
sql_reference/functions/splitting_merging_functions.md sql-reference/functions/splitting-merging-functions.md
sql_reference/functions/string_functions.md sql-reference/functions/string-functions.md
sql_reference/functions/string_replace_functions.md sql-reference/functions/string-replace-functions.md
sql_reference/functions/string_search_functions.md sql-reference/functions/string-search-functions.md
sql_reference/functions/type_conversion_functions.md sql-reference/functions/type-conversion-functions.md
sql_reference/functions/url_functions.md sql-reference/functions/url-functions.md
sql_reference/functions/uuid_functions.md sql-reference/functions/uuid-functions.md
sql_reference/functions/ym_dict_functions.md sql-reference/functions/ym-dict-functions.md
sql_reference/index.md sql-reference/index.md
sql_reference/operators.md sql-reference/operators.md
sql_reference/statements/alter.md sql-reference/statements/alter.md
sql_reference/statements/create.md sql-reference/statements/create.md
sql_reference/statements/index.md sql-reference/statements/index.md
sql_reference/statements/insert_into.md sql-reference/statements/insert-into.md
sql_reference/statements/misc.md sql-reference/statements/misc.md
sql_reference/statements/select.md sql-reference/statements/select.md
sql_reference/statements/show.md sql-reference/statements/show.md
sql_reference/statements/system.md sql-reference/statements/system.md
sql_reference/syntax.md sql-reference/syntax.md
sql_reference/table_functions/file.md sql-reference/table-functions/file.md
sql_reference/table_functions/generate.md sql-reference/table-functions/generate.md
sql_reference/table_functions/hdfs.md sql-reference/table-functions/hdfs.md
sql_reference/table_functions/index.md sql-reference/table-functions/index.md
sql_reference/table_functions/input.md sql-reference/table-functions/input.md
sql_reference/table_functions/jdbc.md sql-reference/table-functions/jdbc.md
sql_reference/table_functions/merge.md sql-reference/table-functions/merge.md
sql_reference/table_functions/mysql.md sql-reference/table-functions/mysql.md
sql_reference/table_functions/numbers.md sql-reference/table-functions/numbers.md
sql_reference/table_functions/odbc.md sql-reference/table-functions/odbc.md
sql_reference/table_functions/remote.md sql-reference/table-functions/remote.md
sql_reference/table_functions/url.md sql-reference/table-functions/url.md
system_tables.md operations/system-tables.md
system_tables/system.asynchronous_metrics.md operations/system-tables.md
system_tables/system.clusters.md operations/system-tables.md
system_tables/system.columns.md operations/system-tables.md
system_tables/system.databases.md operations/system-tables.md
system_tables/system.dictionaries.md operations/system-tables.md
system_tables/system.events.md operations/system-tables.md
system_tables/system.functions.md operations/system-tables.md
system_tables/system.merges.md operations/system-tables.md
system_tables/system.metrics.md operations/system-tables.md
system_tables/system.numbers.md operations/system-tables.md
system_tables/system.numbers_mt.md operations/system-tables.md
system_tables/system.one.md operations/system-tables.md
system_tables/system.parts.md operations/system-tables.md
system_tables/system.processes.md operations/system-tables.md
system_tables/system.replicas.md operations/system-tables.md
system_tables/system.settings.md operations/system-tables.md
system_tables/system.tables.md operations/system-tables.md
system_tables/system.zookeeper.md operations/system-tables.md
table_engines.md operations/table-engines.md
table_engines/aggregatingmergetree.md operations/table-engines/aggregatingmergetree.md
table_engines/buffer.md operations/table-engines/buffer.md
table_engines/collapsingmergetree.md operations/table-engines/collapsingmergetree.md
table_engines/custom_partitioning_key.md operations/table-engines/custom-partitioning-key.md
table_engines/dictionary.md operations/table-engines/dictionary.md
table_engines/distributed.md operations/table-engines/distributed.md
table_engines/external_data.md operations/table-engines/external-data.md
table_engines/file.md operations/table-engines/file.md
table_engines/graphitemergetree.md operations/table-engines/graphitemergetree.md
table_engines/index.md operations/table-engines/index.md
table_engines/join.md operations/table-engines/join.md
table_engines/kafka.md operations/table-engines/kafka.md
table_engines/log.md operations/table-engines/log.md
table_engines/materializedview.md operations/table-engines/materializedview.md
table_engines/memory.md operations/table-engines/memory.md
table_engines/merge.md operations/table-engines/merge.md
table_engines/mergetree.md operations/table-engines/mergetree.md
table_engines/mysql.md operations/table-engines/mysql.md
table_engines/null.md operations/table-engines/null.md
table_engines/replacingmergetree.md operations/table-engines/replacingmergetree.md
table_engines/replication.md operations/table-engines/replication.md
table_engines/set.md operations/table-engines/set.md
table_engines/summingmergetree.md operations/table-engines/summingmergetree.md
table_engines/tinylog.md operations/table-engines/tinylog.md
table_engines/view.md operations/table-engines/view.md
table_functions/file.md query-language/table-functions/file.md
table_functions/index.md query-language/table-functions/index.md
table_functions/merge.md query-language/table-functions/merge.md
table_functions/numbers.md query-language/table-functions/numbers.md
table_functions/remote.md query-language/table-functions/remote.md
utils.md operations/utils.md
utils/clickhouse-copier.md operations/utils/clickhouse-copier.md
utils/clickhouse-local.md operations/utils/clickhouse-local.md
whats_new/changelog/2017.md whats-new/changelog/2017.md
whats_new/changelog/2018.md whats-new/changelog/2018.md
whats_new/changelog/2019.md whats-new/changelog/2019.md
whats_new/changelog/index.md whats-new/changelog/index.md
whats_new/index.md whats-new/index.md
whats_new/roadmap.md whats-new/roadmap.md
whats_new/security_changelog.md whats-new/security-changelog.md
The redirects from this file have been moved to the Docusaurus configuration file.
If you need to add a redirect, please either open a PR in
https://github.com/clickhouse/clickhouse-docs that adds the redirect to
https://github.com/ClickHouse/clickhouse-docs/blob/main/docusaurus.config.js,
or open an issue in the same repo with the old URL and the new URL so the
redirect can be added.


@@ -1355,6 +1355,10 @@ Parameters:
<timezone>Europe/Moscow</timezone>
```
**See also**
- [session_timezone](../settings/settings.md#session_timezone)
## tcp_port {#server_configuration_parameters-tcp_port}
The port for communicating with clients over the TCP protocol.


@@ -4127,6 +4127,63 @@ SELECT sum(number) FROM numbers(10000000000) SETTINGS partial_result_on_first_ca
Default value: `false`
## session_timezone {#session_timezone}
Sets the implicit time zone (session_timezone) for the current session, instead of the [server time zone](../server-configuration-parameters/settings.md#server_configuration_parameters-timezone). That is, all DateTime/DateTime64 values for which no time zone is set explicitly are interpreted as belonging to the specified zone.
With the value `''` (empty string), the session time zone matches the server time zone.
The `timeZone()` and `serverTimezone()` functions return the time zone of the current session and of the server, respectively.
Examples:
```sql
SELECT timeZone(), serverTimezone() FORMAT TSV
Europe/Berlin Europe/Berlin
```
```sql
SELECT timeZone(), serverTimezone() SETTINGS session_timezone = 'Asia/Novosibirsk' FORMAT TSV
Asia/Novosibirsk Europe/Berlin
```
```sql
SELECT toDateTime64(toDateTime64('1999-12-12 23:23:23.123', 3), 3, 'Europe/Zurich') SETTINGS session_timezone = 'America/Denver' FORMAT TSV
1999-12-13 07:23:23.123
```
Possible values:
- Any zone from `system.time_zones`, for example `Europe/Berlin`, `UTC` or `Zulu`
Default value: `''`.
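The accepted values can be listed straight from the server; a minimal sketch (which rows come back depends on the server's tzdata build):

```sql
SELECT time_zone
FROM system.time_zones
WHERE time_zone LIKE 'Europe/%'
LIMIT 3
```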
:::warning
Sometimes, when `DateTime` and `DateTime64` values are formed, the `session_timezone` setting may be ignored.
This can lead to confusion. See the example and explanation below.
:::
```sql
CREATE TABLE test_tz (`d` DateTime('UTC')) ENGINE = Memory AS SELECT toDateTime('2000-01-01 00:00:00', 'UTC');
SELECT *, timezone() FROM test_tz WHERE d = toDateTime('2000-01-01 00:00:00') SETTINGS session_timezone = 'Asia/Novosibirsk'
0 rows in set.
SELECT *, timezone() FROM test_tz WHERE d = '2000-01-01 00:00:00' SETTINGS session_timezone = 'Asia/Novosibirsk'
┌───────────────────d─┬─timezone()───────┐
│ 2000-01-01 00:00:00 │ Asia/Novosibirsk │
└─────────────────────┴──────────────────┘
```
This happens because of the different origins of the value used for the comparison:
- In the first query, the `toDateTime()` function takes the `session_timezone` setting from the query context into account when it creates the `DateTime` value;
- In the second query, the `DateTime` value is formed from the string implicitly, inheriting the type of column `d` (including its time zone), so the `session_timezone` setting is ignored; see the sketch below.
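To get a match with an explicitly constructed value, the zone can be pinned in the literal itself; a minimal sketch against the `test_tz` table above, assuming the same session settings:

```sql
SELECT *, timezone() FROM test_tz WHERE d = toDateTime('2000-01-01 00:00:00', 'UTC') SETTINGS session_timezone = 'Asia/Novosibirsk'
```

Here `toDateTime()` is told the zone of the literal directly, so `session_timezone` no longer affects how the comparison value is interpreted.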
**See also**
- [timezone](../server-configuration-parameters/settings.md#server_configuration_parameters-timezone)
## rename_files_after_processing
- **Type:** String


@@ -69,11 +69,11 @@ ClickHouse не удаляет данные из таблица автомати
- 0 — the query was initiated by another query as part of a distributed query.
- `user` ([String](../../sql-reference/data-types/string.md)) — the user who ran the current query.
- `query_id` ([String](../../sql-reference/data-types/string.md)) — the query ID.
- `address` ([IPv6](../../sql-reference/data-types/domains/ipv6.md)) — the IP address the query came from.
- `address` ([IPv6](../../sql-reference/data-types/ipv6.md)) — the IP address the query came from.
- `port` ([UInt16](../../sql-reference/data-types/int-uint.md)) — the port the client used to make the query.
- `initial_user` ([String](../../sql-reference/data-types/string.md)) — the user who ran the initial query (for distributed queries).
- `initial_query_id` ([String](../../sql-reference/data-types/string.md)) — the ID of the parent query.
- `initial_address` ([IPv6](../../sql-reference/data-types/domains/ipv6.md)) — the IP address the parent query came from.
- `initial_address` ([IPv6](../../sql-reference/data-types/ipv6.md)) — the IP address the parent query came from.
- `initial_port` ([UInt16](../../sql-reference/data-types/int-uint.md)) — the port the client used to make the parent query.
- `initial_query_start_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — the start time of query processing (for distributed queries).
- `initial_query_start_time_microseconds` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — the start time of query processing with microsecond precision (for distributed queries).
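Judging by the column set, this hunk appears to come from the `system.query_log` page; assuming that, a quick way to eyeball these columns (a sketch, not part of the original docs):

```sql
SELECT user, query_id, address, initial_address
FROM system.query_log
LIMIT 5
```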


@@ -39,11 +39,11 @@ ClickHouse не удаляет данные из таблицы автомати
- 0 — the query was initiated by another query as part of a distributed query.
- `user` ([String](../../sql-reference/data-types/string.md)) — the user who ran the current query.
- `query_id` ([String](../../sql-reference/data-types/string.md)) — the query ID.
- `address` ([IPv6](../../sql-reference/data-types/domains/ipv6.md)) — the IP address the query came from.
- `address` ([IPv6](../../sql-reference/data-types/ipv6.md)) — the IP address the query came from.
- `port` ([UInt16](../../sql-reference/data-types/int-uint.md#uint-ranges)) — the port the query came from.
- `initial_user` ([String](../../sql-reference/data-types/string.md)) — the user who ran the initial query (for distributed queries).
- `initial_query_id` ([String](../../sql-reference/data-types/string.md)) — the ID of the parent query.
- `initial_address` ([IPv6](../../sql-reference/data-types/domains/ipv6.md)) — the IP address the parent query came from.
- `initial_address` ([IPv6](../../sql-reference/data-types/ipv6.md)) — the IP address the parent query came from.
- `initial_port` ([UInt16](../../sql-reference/data-types/int-uint.md#uint-ranges)) — the port the parent query came from.
- `interface` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — the interface the query was initiated from. Possible values:
- 1 — TCP.


@@ -27,7 +27,7 @@ slug: /ru/operations/system-tables/session_log
- `profiles` ([Array](../../sql-reference/data-types/array.md)([LowCardinality(String)](../../sql-reference/data-types/lowcardinality.md))) — the list of profiles set for all roles and/or users.
- `roles` ([Array](../../sql-reference/data-types/array.md)([LowCardinality(String)](../../sql-reference/data-types/lowcardinality.md))) — the list of roles the profile is applied to.
- `settings` ([Array](../../sql-reference/data-types/array.md)([Tuple](../../sql-reference/data-types/tuple.md)([LowCardinality(String)](../../sql-reference/data-types/lowcardinality.md), [String](../../sql-reference/data-types/string.md)))) — the settings that were changed when the client logged in or out.
- `client_address` ([IPv6](../../sql-reference/data-types/domains/ipv6.md)) — the IP address used to log in or out.
- `client_address` ([IPv6](../../sql-reference/data-types/ipv6.md)) — the IP address used to log in or out.
- `client_port` ([UInt16](../../sql-reference/data-types/int-uint.md)) — the client port used to log in or out.
- `interface` ([Enum8](../../sql-reference/data-types/enum.md)) — the interface the login was initiated from. Possible values:
- `TCP`


@@ -15,7 +15,7 @@ slug: /ru/operations/system-tables/zookeeper_log
- `Finalize` — the connection was lost and no response was received.
- `event_date` ([Date](../../sql-reference/data-types/date.md)) — the date when the event happened.
- `event_time` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — the date and time when the event happened.
- `address` ([IPv6](../../sql-reference/data-types/domains/ipv6.md)) — the IP address of the ZooKeeper server that was used to make the request.
- `address` ([IPv6](../../sql-reference/data-types/ipv6.md)) — the IP address of the ZooKeeper server that was used to make the request.
- `port` ([UInt16](../../sql-reference/data-types/int-uint.md)) — the port of the ZooKeeper server that was used to make the request.
- `session_id` ([Int64](../../sql-reference/data-types/int-uint.md)) — the session ID that the ZooKeeper server creates for each connection.
- `xid` ([Int32](../../sql-reference/data-types/int-uint.md)) — the ID of the request within the session. It is usually a sequential request number, shared by the request row and its paired `response`/`finalize` row.


@@ -31,7 +31,7 @@ sidebar_label: Decimal
## Internal representation {#vnutrennee-predstavlenie}
Internally, data is represented as signed integers of the corresponding bit width. The actual value ranges that fit in memory are somewhat larger than the declared ones; the declared Decimal ranges are checked only when a number is parsed from its string representation.
Because modern CPUs do not support 128-bit numbers, operations on Decimal128 are emulated in software. As a result, Decimal128 is significantly slower than Decimal32/Decimal64.
Because modern CPUs do not support 128-bit or 256-bit numbers, operations on Decimal128 and Decimal256 are emulated in software. These types are significantly slower than Decimal32/Decimal64.
## Operations and result types {#operatsii-i-tipy-rezultata}
@@ -59,6 +59,10 @@ sidebar_label: Decimal
Integer overflow can occur during operations on Decimal values. Extra digits in the fractional part are discarded (not rounded); extra digits in the integer part raise an exception.
:::warning
Overflow checks are not implemented for Decimal128 and Decimal256: on overflow, an incorrect result is returned and no exception is thrown.
:::
``` sql
SELECT toDecimal32(2, 4) AS x, x / 3
```
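The query above returns the truncated value `0.6666`. For Decimal32/Decimal64, the overflow check mentioned in the warning does fire; a minimal sketch (the exact exception text may vary between server versions):

```sql
SELECT toDecimal32(4.2, 8) AS x, 6 * x
-- fails with an exception along the lines of:
-- DB::Exception: Decimal math overflow
```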


@@ -1,12 +1,12 @@
---
slug: /ru/sql-reference/data-types/domains/ipv4
slug: /ru/sql-reference/data-types/ipv4
sidebar_position: 59
sidebar_label: IPv4
---
## IPv4 {#ipv4}
`IPv4` is a domain based on the `UInt32` data type, intended for storing IPv4 addresses. It provides compact storage with a human-friendly input/output format, and the data type is shown explicitly in the table structure.
IPv4 addresses. Stored in 4 bytes as UInt32.
### Usage {#primenenie}
@@ -57,27 +57,6 @@ SELECT toTypeName(from), hex(from) FROM hits LIMIT 1;
└──────────────────┴───────────┘
```
Values with the domain data type are not implicitly convertible to any data type other than `UInt32`.
If you need to convert an `IPv4` value to a string, you have to do that explicitly with the `IPv4NumToString()` function:
**See also**
``` sql
SELECT toTypeName(s), IPv4NumToString(from) AS s FROM hits LIMIT 1;
```
``` text
┌─toTypeName(IPv4NumToString(from))─┬─s──────────────┐
│ String │ 183.247.232.58 │
└───────────────────────────────────┴────────────────┘
```
Or cast it to the `UInt32` data type:
``` sql
SELECT toTypeName(i), CAST(from AS UInt32) AS i FROM hits LIMIT 1;
```
``` text
┌─toTypeName(CAST(from, 'UInt32'))─┬──────────i─┐
│ UInt32 │ 3086477370 │
└──────────────────────────────────┴────────────┘
```
- [Functions for Working with IPv4 and IPv6 Addresses](../functions/ip-address-functions.md)
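For the new-style native type, a round trip through `toIPv4()` is enough to see the behaviour; a small sketch (the literal address is made up):

```sql
SELECT toIPv4('116.106.34.242') AS ip, toTypeName(ip)
```

`toTypeName(ip)` reports `IPv4`, and the value still prints in the familiar dotted-quad form.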


@@ -1,5 +1,5 @@
---
slug: /ru/sql-reference/data-types/domains/ipv6
slug: /ru/sql-reference/data-types/ipv6
sidebar_position: 60
sidebar_label: IPv6
---


@@ -26,7 +26,8 @@ SELECT
## timeZone {#timezone}
Returns the time zone of the server.
Returns the time zone the server treats as the default for the current session: the value of the [session_timezone](../../operations/settings/settings.md#session_timezone) setting, if it is set.
If the function is executed in the context of a distributed table, it generates a regular column with values relevant to each shard; otherwise it produces a constant.
**Syntax**
@@ -43,6 +44,33 @@ timeZone()
Type: [String](../../sql-reference/data-types/string.md).
**See also**
- [serverTimeZone](#servertimezone)
## serverTimeZone {#servertimezone}
Returns the default time zone of the server, i.e. the one set by the [timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) server setting.
If the function is executed in the context of a distributed table, it generates a regular column with values relevant to each shard; otherwise it produces a constant.
**Syntax**
``` sql
serverTimeZone()
```
Alias: `serverTimezone`.
**Returned value**
- The time zone.
Type: [String](../../sql-reference/data-types/string.md).
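A minimal call, shown here as a sketch (the returned zone obviously depends on the server configuration):

```sql
SELECT serverTimeZone()
-- e.g. Europe/Berlin
```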
**See also**
- [timeZone](#timezone)
## toTimeZone {#totimezone}
Converts a date or a date with time to the specified time zone. The time zone is an attribute of the `Date` and `DateTime` types; the internal value (the number of seconds) of a table field or result column does not change, only the field's type and, consequently, its textual representation.
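A short sketch of this behaviour (timestamps and zones picked arbitrarily):

```sql
SELECT
    toDateTime('2023-01-01 00:00:00', 'UTC') AS t_utc,
    toTimeZone(t_utc, 'Asia/Tokyo') AS t_tokyo
-- t_utc:   2023-01-01 00:00:00
-- t_tokyo: 2023-01-01 09:00:00
```

Both columns hold the same number of seconds since the epoch; only the display zone differs.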


@@ -5,7 +5,7 @@ sidebar_label: "Функции для работы с внешними слов
---
:::note "Attention"
For dictionaries created with [DDL queries](../../sql-reference/statements/create/dictionary.md), the `dict_name` parameter must be the full name of the dictionary, including the database, for example `<database>.<dict_name>`. If no database is specified, the current one is used.
:::
# Functions for working with external dictionaries {#ext_dict_functions}
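To illustrate the note above, a hedged sketch of a fully qualified lookup (the dictionary, attribute, and key names are made up):

```sql
SELECT dictGet('db.my_dict', 'attr_name', toUInt64(1))
```

With a plain `'my_dict'` the lookup would be resolved against the current database instead.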
