Merge pull request #35858 from rfraposa/master

Format changes for new docs
Rich Raposa 2022-04-03 09:13:19 -06:00 committed by GitHub
commit e56b6f4828
GPG Key ID: 4AEE18F83AFDEB23
519 changed files with 2451 additions and 4240 deletions


@@ -1,121 +0,0 @@
name: DocsReleaseChecks
env:
  # Force the stdout and stderr streams to be unbuffered
  PYTHONUNBUFFERED: 1
concurrency:
  group: master-release
  cancel-in-progress: true
on: # yamllint disable-line rule:truthy
  push:
    branches:
      - master
    paths:
      - 'docs/**'
      - 'website/**'
      - 'benchmark/**'
      - 'docker/**'
      - '.github/**'
  workflow_dispatch:
jobs:
  DockerHubPushAarch64:
    runs-on: [self-hosted, style-checker-aarch64]
    steps:
      - name: Clear repository
        run: |
          sudo rm -fr "$GITHUB_WORKSPACE" && mkdir "$GITHUB_WORKSPACE"
      - name: Check out repository code
        uses: actions/checkout@v2
      - name: Images check
        run: |
          cd "$GITHUB_WORKSPACE/tests/ci"
          python3 docker_images_check.py --suffix aarch64
      - name: Upload images files to artifacts
        uses: actions/upload-artifact@v2
        with:
          name: changed_images_aarch64
          path: ${{ runner.temp }}/docker_images_check/changed_images_aarch64.json
  DockerHubPushAmd64:
    runs-on: [self-hosted, style-checker]
    steps:
      - name: Clear repository
        run: |
          sudo rm -fr "$GITHUB_WORKSPACE" && mkdir "$GITHUB_WORKSPACE"
      - name: Check out repository code
        uses: actions/checkout@v2
      - name: Images check
        run: |
          cd "$GITHUB_WORKSPACE/tests/ci"
          python3 docker_images_check.py --suffix amd64
      - name: Upload images files to artifacts
        uses: actions/upload-artifact@v2
        with:
          name: changed_images_amd64
          path: ${{ runner.temp }}/docker_images_check/changed_images_amd64.json
  DockerHubPush:
    needs: [DockerHubPushAmd64, DockerHubPushAarch64]
    runs-on: [self-hosted, style-checker]
    steps:
      - name: Clear repository
        run: |
          sudo rm -fr "$GITHUB_WORKSPACE" && mkdir "$GITHUB_WORKSPACE"
      - name: Check out repository code
        uses: actions/checkout@v2
      - name: Download changed aarch64 images
        uses: actions/download-artifact@v2
        with:
          name: changed_images_aarch64
          path: ${{ runner.temp }}
      - name: Download changed amd64 images
        uses: actions/download-artifact@v2
        with:
          name: changed_images_amd64
          path: ${{ runner.temp }}
      - name: Images check
        run: |
          cd "$GITHUB_WORKSPACE/tests/ci"
          python3 docker_manifests_merge.py --suffix amd64 --suffix aarch64
      - name: Upload images files to artifacts
        uses: actions/upload-artifact@v2
        with:
          name: changed_images
          path: ${{ runner.temp }}/changed_images.json
  DocsRelease:
    needs: DockerHubPush
    runs-on: [self-hosted, func-tester]
    steps:
      - name: Set envs
        # https://docs.github.com/en/actions/learn-github-actions/workflow-commands-for-github-actions#multiline-strings
        run: |
          cat >> "$GITHUB_ENV" << 'EOF'
          TEMP_PATH=${{runner.temp}}/docs_release
          REPO_COPY=${{runner.temp}}/docs_release/ClickHouse
          CLOUDFLARE_TOKEN=${{secrets.CLOUDFLARE}}
          ROBOT_CLICKHOUSE_SSH_KEY<<RCSK
          ${{secrets.ROBOT_CLICKHOUSE_SSH_KEY}}
          RCSK
          EOF
      - name: Clear repository
        run: |
          sudo rm -fr "$GITHUB_WORKSPACE" && mkdir "$GITHUB_WORKSPACE"
      - name: Check out repository code
        uses: actions/checkout@v2
      - name: Download changed images
        uses: actions/download-artifact@v2
        with:
          name: changed_images
          path: ${{ env.TEMP_PATH }}
      - name: Docs Release
        run: |
          sudo rm -fr "$TEMP_PATH"
          mkdir -p "$TEMP_PATH"
          cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
          cd "$REPO_COPY/tests/ci"
          python3 docs_release.py
      - name: Cleanup
        if: always()
        run: |
          docker kill "$(docker ps -q)" ||:
          docker rm -f "$(docker ps -a -q)" ||:
          sudo rm -fr "$TEMP_PATH"
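The `Set envs` step of the deleted workflow above uses the `NAME<<DELIMITER` heredoc syntax that GitHub Actions accepts in `$GITHUB_ENV` for multiline values (here, an SSH key delimited by `RCSK`). As an illustration only, here is a toy Python parser for that format; the sample values below are made up and not part of the workflow:

```python
# Toy sketch (not GitHub's implementation) of how NAME<<DELIMITER blocks
# appended to $GITHUB_ENV are interpreted: everything between the delimiter
# line and its repeat becomes one multiline value.
def parse_github_env(text):
    env, lines = {}, iter(text.splitlines())
    for line in lines:
        if '<<' in line:                      # multiline value: NAME<<DELIM
            name, delim = line.split('<<', 1)
            value = []
            for v in lines:
                if v == delim:                # closing delimiter ends the block
                    break
                value.append(v)
            env[name] = '\n'.join(value)
        elif '=' in line:                     # simple NAME=value pair
            name, _, value = line.partition('=')
            env[name] = value
    return env

sample = """TEMP_PATH=/tmp/docs_release
ROBOT_CLICKHOUSE_SSH_KEY<<RCSK
-----BEGIN KEY-----
abc123
-----END KEY-----
RCSK"""

env = parse_github_env(sample)
assert env['TEMP_PATH'] == '/tmp/docs_release'
assert env['ROBOT_CLICKHOUSE_SSH_KEY'].splitlines()[1] == 'abc123'
```

This is why the workflow wraps the secret in `RCSK` markers: a plain `NAME=value` line could not carry the key's line breaks.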

docs/en/_category_.yml (new file)

@@ -0,0 +1,8 @@
position: 50
label: 'Reference Guides'
collapsible: true
collapsed: true
link:
  type: generated-index
  title: Reference Guides
  slug: /en


@@ -1,9 +0,0 @@
---
toc_priority: 1
toc_title: Cloud
---

# ClickHouse Cloud Service {#clickhouse-cloud-service}

!!! info "Info"
    Detailed public description for ClickHouse cloud services is not ready yet, please [contact us](https://clickhouse.com/company/#contact) to learn more.


@@ -1,13 +0,0 @@
---
toc_folder_title: Commercial
toc_priority: 70
toc_title: Introduction
---
# ClickHouse Commercial Services {#clickhouse-commercial-services}
Service categories:
- [Cloud](../commercial/cloud.md)
- [Support](../commercial/support.md)


@@ -1,9 +0,0 @@
---
toc_priority: 3
toc_title: Support
---

# ClickHouse Commercial Support Service {#clickhouse-commercial-support-service}

!!! info "Info"
    Detailed public description for ClickHouse support services is not ready yet, please [contact us](https://clickhouse.com/company/#contact) to learn more.


@@ -0,0 +1,7 @@
position: 100
label: 'Building ClickHouse'
collapsible: true
collapsed: true
link:
  type: generated-index
  title: Building ClickHouse


@@ -1,3 +1,9 @@
---
sidebar_label: Adding Test Queries
sidebar_position: 63
description: Instructions on how to add a test case to ClickHouse continuous integration
---
# How to add test queries to ClickHouse CI

ClickHouse has hundreds (or even thousands) of features. Every commit gets checked by a complex set of tests containing many thousands of test cases.


@@ -1,11 +1,12 @@
---
-toc_priority: 62
-toc_title: Architecture Overview
+sidebar_label: Architecture Overview
+sidebar_position: 62
---

-# Overview of ClickHouse Architecture {#overview-of-clickhouse-architecture}
+# Overview of ClickHouse Architecture

-ClickHouse is a true column-oriented DBMS. Data is stored by columns, and during the execution of arrays (vectors or chunks of columns). Whenever possible, operations are dispatched on arrays, rather than on individual values. It is called “vectorized query execution” and it helps lower the cost of actual data processing.
+ClickHouse is a true column-oriented DBMS. Data is stored by columns and is processed by arrays (vectors or chunks of columns) during query execution.
+Whenever possible, operations are dispatched on arrays, rather than on individual values. This is called “vectorized query execution” and it helps lower the cost of actual data processing.

> This idea is nothing new. It dates back to the `APL` (A programming language, 1957) and its descendants: `A +` (APL dialect), `J` (1990), `K` (1993), and `Q` (programming language from Kx Systems, 2003). Array programming is used in scientific data processing. Neither is this idea something new in relational databases: for example, it is used in the `VectorWise` system (also known as Actian Vector Analytic Database by Actian Corporation).
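The idea in the paragraph above can be sketched in a few lines of Python (a toy illustration, not ClickHouse code): work is dispatched once per chunk of a column instead of once per value, so per-call overhead is amortized over many values.

```python
# Toy illustration of "vectorized query execution" (not ClickHouse code).
# Scalar execution dispatches the operation once per value; vectorized
# execution dispatches it once per chunk (array) of a column.

def scalar_sum(column):
    total = 0
    for value in column:          # one dispatch per value
        total += value
    return total

def vectorized_sum(chunks):
    # one dispatch per chunk; the inner sum() runs as a tight loop
    return sum(sum(chunk) for chunk in chunks)

column = list(range(1, 101))
chunks = [column[i:i + 25] for i in range(0, 100, 25)]

assert scalar_sum(column) == vectorized_sum(chunks) == 5050
```

Both compute the same result; the vectorized form simply pays the dispatch cost once per array rather than once per element.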
@@ -154,8 +155,9 @@ The server initializes the `Context` class with the necessary environment for query execution
We maintain full backward and forward compatibility for the server TCP protocol: old clients can talk to new servers, and new clients can talk to old servers. But we do not want to maintain it eternally, and we are removing support for old versions after about one year.

-!!! note "Note"
-    For most external applications, we recommend using the HTTP interface because it is simple and easy to use. The TCP protocol is more tightly linked to internal data structures: it uses an internal format for passing blocks of data, and it uses custom framing for compressed data. We haven’t released a C library for that protocol because it requires linking most of the ClickHouse codebase, which is not practical.
+:::note
+For most external applications, we recommend using the HTTP interface because it is simple and easy to use. The TCP protocol is more tightly linked to internal data structures: it uses an internal format for passing blocks of data, and it uses custom framing for compressed data. We haven’t released a C library for that protocol because it requires linking most of the ClickHouse codebase, which is not practical.
+:::

## Distributed Query Execution {#distributed-query-execution}
@@ -193,7 +195,8 @@ Replication is physical: only compressed parts are transferred between nodes, not queries.
Besides, each replica stores its state in ZooKeeper as the set of parts and its checksums. When the state on the local filesystem diverges from the reference state in ZooKeeper, the replica restores its consistency by downloading missing and broken parts from other replicas. When there is some unexpected or broken data in the local filesystem, ClickHouse does not remove it, but moves it to a separate directory and forgets it.

-!!! note "Note"
-    The ClickHouse cluster consists of independent shards, and each shard consists of replicas. The cluster is **not elastic**, so after adding a new shard, data is not rebalanced between shards automatically. Instead, the cluster load is supposed to be adjusted to be uneven. This implementation gives you more control, and it is ok for relatively small clusters, such as tens of nodes. But for clusters with hundreds of nodes that we are using in production, this approach becomes a significant drawback. We should implement a table engine that spans across the cluster with dynamically replicated regions that could be split and balanced between clusters automatically.
+:::note
+The ClickHouse cluster consists of independent shards, and each shard consists of replicas. The cluster is **not elastic**, so after adding a new shard, data is not rebalanced between shards automatically. Instead, the cluster load is supposed to be adjusted to be uneven. This implementation gives you more control, and it is ok for relatively small clusters, such as tens of nodes. But for clusters with hundreds of nodes that we are using in production, this approach becomes a significant drawback. We should implement a table engine that spans across the cluster with dynamically replicated regions that could be split and balanced between clusters automatically.
+:::

-{## [Original article](https://clickhouse.com/docs/en/development/architecture/) ##}
+[Original article](https://clickhouse.com/docs/en/development/architecture/)


@@ -1,12 +1,13 @@
---
-toc_priority: 72
-toc_title: Source Code Browser
+sidebar_label: Source Code Browser
+sidebar_position: 72
+description: Various ways to browse and edit the source code
---

-# Browse ClickHouse Source Code {#browse-clickhouse-source-code}
+# Browse ClickHouse Source Code

-You can use **Woboq** online code browser available [here](https://clickhouse.com/codebrowser/ClickHouse/src/index.html). It provides code navigation and semantic highlighting, search and indexing. The code snapshot is updated daily.
+You can use the **Woboq** online code browser available [here](https://clickhouse.com/codebrowser/ClickHouse/src/index.html). It provides code navigation and semantic highlighting, search and indexing. The code snapshot is updated daily.

Also, you can browse sources on [GitHub](https://github.com/ClickHouse/ClickHouse) as usual.

-If you’re interested what IDE to use, we recommend CLion, QT Creator, VS Code and KDevelop (with caveats). You can use any favourite IDE. Vim and Emacs also count.
+If you’re interested in which IDE to use, we recommend CLion, QT Creator, VS Code and KDevelop (with caveats). You can use any favorite IDE. Vim and Emacs also count.


@@ -1,11 +1,12 @@
---
-toc_priority: 67
-toc_title: Build on Linux for AARCH64 (ARM64)
+sidebar_position: 67
+sidebar_label: Build on Linux for AARCH64 (ARM64)
---

-# How to Build ClickHouse on Linux for AARCH64 (ARM64) Architecture {#how-to-build-clickhouse-on-linux-for-aarch64-arm64-architecture}
+# How to Build ClickHouse on Linux for AARCH64 (ARM64) Architecture

-This is for the case when you have Linux machine and want to use it to build `clickhouse` binary that will run on another Linux machine with AARCH64 CPU architecture. This is intended for continuous integration checks that run on Linux servers.
+This is for the case when you have a Linux machine and want to use it to build the `clickhouse` binary that will run on another Linux machine with AARCH64 CPU architecture.
+This is intended for continuous integration checks that run on Linux servers.

The cross-build for AARCH64 is based on the [Build instructions](../development/build.md), follow them first.


@@ -1,11 +1,12 @@
---
-toc_priority: 66
-toc_title: Build on Linux for Mac OS X
+sidebar_position: 66
+sidebar_label: Build on Linux for Mac OS X
---

-# How to Build ClickHouse on Linux for Mac OS X {#how-to-build-clickhouse-on-linux-for-mac-os-x}
+# How to Build ClickHouse on Linux for Mac OS X

-This is for the case when you have Linux machine and want to use it to build `clickhouse` binary that will run on OS X. This is intended for continuous integration checks that run on Linux servers. If you want to build ClickHouse directly on Mac OS X, then proceed with [another instruction](../development/build-osx.md).
+This is for the case when you have a Linux machine and want to use it to build `clickhouse` binary that will run on OS X.
+This is intended for continuous integration checks that run on Linux servers. If you want to build ClickHouse directly on Mac OS X, then proceed with [another instruction](../development/build-osx.md).

The cross-build for Mac OS X is based on the [Build instructions](../development/build.md), follow them first.


@@ -1,9 +1,9 @@
---
-toc_priority: 68
-toc_title: Build on Linux for RISC-V 64
+sidebar_position: 68
+sidebar_label: Build on Linux for RISC-V 64
---

-# How to Build ClickHouse on Linux for RISC-V 64 Architecture {#how-to-build-clickhouse-on-linux-for-risc-v-64-architecture}
+# How to Build ClickHouse on Linux for RISC-V 64 Architecture

As of writing (11.11.2021), building for RISC-V is considered to be highly experimental. Not all features can be enabled.


@@ -1,16 +1,21 @@
---
-toc_priority: 65
-toc_title: Build on Mac OS X
+sidebar_position: 65
+sidebar_label: Build on Mac OS X
+description: How to build ClickHouse on Mac OS X
---

-# How to Build ClickHouse on Mac OS X {#how-to-build-clickhouse-on-mac-os-x}
+# How to Build ClickHouse on Mac OS X

-!!! info "You don't have to build ClickHouse yourself"
-    You can install pre-built ClickHouse as described in [Quick Start](https://clickhouse.com/#quick-start).
-    Follow `macOS (Intel)` or `macOS (Apple silicon)` installation instructions.
+:::info You don't have to build ClickHouse yourself!
+You can install pre-built ClickHouse as described in [Quick Start](https://clickhouse.com/#quick-start). Follow **macOS (Intel)** or **macOS (Apple silicon)** installation instructions.
+:::

Build should work on x86_64 (Intel) and arm64 (Apple silicon) based macOS 10.15 (Catalina) and higher with Homebrew's vanilla Clang.

-It is always recommended to use vanilla `clang` compiler. It is possible to use XCode's `apple-clang` or `gcc` but it's strongly discouraged.
+It is always recommended to use vanilla `clang` compiler.
+
+:::note
+It is possible to use XCode's `apple-clang` or `gcc`, but it's strongly discouraged.
+:::
## Install Homebrew {#install-homebrew} ## Install Homebrew {#install-homebrew}
@@ -89,8 +94,9 @@ cmake --build . --config RelWithDebInfo
If you intend to run `clickhouse-server`, make sure to increase the system’s maxfiles variable.

-!!! info "Note"
-    You’ll need to use sudo.
+:::note
+You’ll need to use sudo.
+:::

To do so, create the `/Library/LaunchDaemons/limit.maxfiles.plist` file with the following content:


@@ -1,9 +1,10 @@
---
-toc_priority: 64
-toc_title: Build on Linux
+sidebar_position: 64
+sidebar_label: Build on Linux
+description: How to build ClickHouse on Linux
---

-# How to Build ClickHouse on Linux {#how-to-build-clickhouse-for-development}
+# How to Build ClickHouse on Linux

Supported platforms:


@@ -1,6 +1,7 @@
---
-toc_priority: 62
-toc_title: Continuous Integration Checks
+sidebar_position: 62
+sidebar_label: Continuous Integration Checks
+description: When you submit a pull request, some automated checks are run for your code by the ClickHouse continuous integration (CI) system
---

# Continuous Integration Checks
@@ -71,8 +72,6 @@ This check means that the CI system started to process the pull request. When it
Performs some simple regex-based checks of code style, using the [`utils/check-style/check-style`](https://github.com/ClickHouse/ClickHouse/blob/master/utils/check-style/check-style) binary (note that it can be run locally).
If it fails, fix the style errors following the [code style guide](style.md).

-Python code is checked with [black](https://github.com/psf/black/).

### Report Details
- [Status page example](https://clickhouse-test-reports.s3.yandex.net/12550/659c78c7abb56141723af6a81bfae39335aa8cb2/style_check.html)
- `output.txt` contains the check resulting errors (invalid tabulation etc), blank page means no errors. [Successful result example](https://clickhouse-test-reports.s3.yandex.net/12550/659c78c7abb56141723af6a81bfae39335aa8cb2/style_check/output.txt).
@@ -152,7 +151,7 @@ checks page](../development/build.md#you-dont-have-to-build-clickhouse), or buil
## Functional Stateful Tests

-Runs [stateful functional tests](tests.md#functional-tests). Treat them in the same way as the functional stateless tests. The difference is that they require `hits` and `visits` tables from the [Yandex.Metrica dataset](../getting-started/example-datasets/metrica.md) to run.
+Runs [stateful functional tests](tests.md#functional-tests). Treat them in the same way as the functional stateless tests. The difference is that they require `hits` and `visits` tables from the [clickstream dataset](../example-datasets/metrica.md) to run.

## Integration Tests


@@ -1,9 +1,10 @@
---
-toc_priority: 71
-toc_title: Third-Party Libraries Used
+sidebar_position: 71
+sidebar_label: Third-Party Libraries
+description: A list of third-party libraries used
---

-# Third-Party Libraries Used {#third-party-libraries-used}
+# Third-Party Libraries Used

The list of third-party libraries:


@@ -1,11 +1,12 @@
---
-toc_priority: 61
-toc_title: For Beginners
+sidebar_position: 61
+sidebar_label: Getting Started
+description: Prerequisites and an overview of how to build ClickHouse
---

-# The Beginner ClickHouse Developer Instruction {#the-beginner-clickhouse-developer-instruction}
+# Getting Started Guide for Building ClickHouse

-Building of ClickHouse is supported on Linux, FreeBSD and Mac OS X.
+The building of ClickHouse is supported on Linux, FreeBSD and Mac OS X.

If you use Windows, you need to create a virtual machine with Ubuntu. To start working with a virtual machine please install VirtualBox. You can download Ubuntu from the website: https://www.ubuntu.com/#download. Please create a virtual machine from the downloaded image (you should reserve at least 4GB of RAM for it). To run a command-line terminal in Ubuntu, please locate a program containing the word “terminal” in its name (gnome-terminal, konsole etc.) or just press Ctrl+Alt+T.
@@ -229,25 +230,6 @@ As simple code editors, you can use Sublime Text or Visual Studio Code, or Kate
Just in case, it is worth mentioning that CLion creates `build` path on its own, it also on its own selects `debug` for build type, for configuration it uses a version of CMake that is defined in CLion and not the one installed by you, and finally, CLion will use `make` to run build tasks instead of `ninja`. This is normal behaviour, just keep that in mind to avoid confusion.

-## Debugging
-
-Many graphical IDEs offer an integrated debugger, but you can also use a standalone debugger.
-
-### GDB
-
-### LLDB
-
-    # tell LLDB where to find the source code
-    settings set target.source-map /path/to/build/dir /path/to/source/dir
-
-    # configure LLDB to display code before/after currently executing line
-    settings set stop-line-count-before 10
-    settings set stop-line-count-after 10
-
-    target create ./clickhouse-client
-    # <set breakpoints here>
-    process launch -- --query="SELECT * FROM TAB"
## Writing Code {#writing-code}

The description of ClickHouse architecture can be found here: https://clickhouse.com/docs/en/development/architecture/


@@ -1,10 +0,0 @@
---
toc_folder_title: Development
toc_hidden: true
toc_priority: 58
toc_title: hidden
---
# ClickHouse Development {#clickhouse-development}
[Original article](https://clickhouse.com/docs/en/development/) <!--hide-->


@@ -1,9 +1,10 @@
---
-toc_priority: 69
-toc_title: C++ Guide
+sidebar_position: 69
+sidebar_label: C++ Guide
+description: A list of recommendations regarding coding style, naming convention, formatting and more
---

-# How to Write C++ Code {#how-to-write-c-code}
+# How to Write C++ Code

## General Recommendations {#general-recommendations}


@@ -1,11 +1,12 @@
---
-toc_priority: 70
-toc_title: Testing
+sidebar_position: 70
+sidebar_label: Testing
+description: Most of ClickHouse features can be tested with functional tests and they are mandatory to use for every change in ClickHouse code that can be tested that way.
---

-# ClickHouse Testing {#clickhouse-testing}
+# ClickHouse Testing

-## Functional Tests {#functional-tests}
+## Functional Tests

Functional tests are the most simple and convenient to use. Most of ClickHouse features can be tested with functional tests and they are mandatory to use for every change in ClickHouse code that can be tested that way.


@@ -0,0 +1,8 @@
position: 30
label: 'Database & Table Engines'
collapsible: true
collapsed: true
link:
  type: generated-index
  title: Database & Table Engines
  slug: /en/table-engines


@@ -1,9 +1,9 @@
---
-toc_priority: 32
-toc_title: Atomic
+sidebar_label: Atomic
+sidebar_position: 10
---

-# Atomic {#atomic}
+# Atomic

It supports non-blocking [DROP TABLE](#drop-detach-table) and [RENAME TABLE](#rename-table) queries and atomic [EXCHANGE TABLES](#exchange-tables) queries. `Atomic` database engine is used by default.
@@ -18,14 +18,21 @@ CREATE DATABASE test [ENGINE = Atomic];
### Table UUID {#table-uuid}

All tables in database `Atomic` have persistent [UUID](../../sql-reference/data-types/uuid.md) and store data in directory `/clickhouse_path/store/xxx/xxxyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy/`, where `xxxyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy` is UUID of the table.

-Usually, the UUID is generated automatically, but the user can also explicitly specify the UUID in the same way when creating the table (this is not recommended). To display the `SHOW CREATE` query with the UUID you can use setting [show_table_uuid_in_table_create_query_if_not_nil](../../operations/settings/settings.md#show_table_uuid_in_table_create_query_if_not_nil). For example:
+Usually, the UUID is generated automatically, but the user can also explicitly specify the UUID in the same way when creating the table (this is not recommended).
+For example:

```sql
CREATE TABLE name UUID '28f1c61c-2970-457a-bffe-454156ddcfef' (n UInt64) ENGINE = ...;
```
:::note
You can use the [show_table_uuid_in_table_create_query_if_not_nil](../../operations/settings/settings.md#show_table_uuid_in_table_create_query_if_not_nil) setting to display the UUID with the `SHOW CREATE` query.
:::
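Following the directory pattern described above, where the `xxx` prefix appears to be the first three characters of the table UUID, the store path for the example table can be derived like this (an illustrative sketch; `/clickhouse_path` stands in for the server's data directory):

```python
# Sketch: derive the on-disk store directory from a table UUID, following
# the /clickhouse_path/store/xxx/xxxyyyyy-.../ pattern shown above.
# Assumption (from the pattern, not a documented API): 'xxx' is uuid[:3].
table_uuid = '28f1c61c-2970-457a-bffe-454156ddcfef'  # UUID from the example
prefix = table_uuid[:3]                              # the 'xxx' subdirectory
store_path = f'/clickhouse_path/store/{prefix}/{table_uuid}/'

assert store_path == '/clickhouse_path/store/28f/28f1c61c-2970-457a-bffe-454156ddcfef/'
```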
### RENAME TABLE {#rename-table}

-[RENAME](../../sql-reference/statements/rename.md) queries are performed without changing UUID and moving table data. These queries do not wait for the completion of queries using the table and are executed instantly.
+[RENAME](../../sql-reference/statements/rename.md) queries are performed without changing the UUID or moving table data. These queries do not wait for the completion of queries using the table and are executed instantly.

### DROP/DETACH TABLE {#drop-detach-table}


@@ -6,11 +6,11 @@ toc_title: Introduction
# Database Engines {#database-engines}

-Database engines allow you to work with tables.
-
-By default, ClickHouse uses database engine [Atomic](../../engines/database-engines/atomic.md). It provides configurable [table engines](../../engines/table-engines/index.md) and an [SQL dialect](../../sql-reference/syntax.md).
-
-You can also use the following database engines:
+Database engines allow you to work with tables. By default, ClickHouse uses the [Atomic](../../engines/database-engines/atomic.md) database engine, which provides configurable [table engines](../../engines/table-engines/index.md) and an [SQL dialect](../../sql-reference/syntax.md).
+
+Here is a complete list of available database engines. Follow the links for more details:
+
+- [Atomic](../../engines/database-engines/atomic.md)

- [MySQL](../../engines/database-engines/mysql.md)
@@ -18,8 +18,6 @@ You can also use the following database engines:
- [Lazy](../../engines/database-engines/lazy.md)

-- [Atomic](../../engines/database-engines/atomic.md)

- [PostgreSQL](../../engines/database-engines/postgresql.md)
- [Replicated](../../engines/database-engines/replicated.md)


@@ -1,6 +1,6 @@
---
-toc_priority: 31
-toc_title: Lazy
+sidebar_label: Lazy
+sidebar_position: 20
---

# Lazy {#lazy}


@@ -1,16 +1,15 @@
---
-toc_priority: 29
-toc_title: MaterializedMySQL
+sidebar_label: MaterializedMySQL
+sidebar_position: 70
---

-# [experimental] MaterializedMySQL {#materialized-mysql}
+# [experimental] MaterializedMySQL

-!!! warning "Warning"
-    This is an experimental feature that should not be used in production.
+:::warning
+This is an experimental feature that should not be used in production.
+:::

-Creates ClickHouse database with all the tables existing in MySQL, and all the data in those tables.
-
-ClickHouse server works as MySQL replica. It reads binlog and performs DDL and DML queries.
+Creates a ClickHouse database with all the tables existing in MySQL, and all the data in those tables. The ClickHouse server works as a MySQL replica. It reads `binlog` and performs DDL and DML queries.
## Creating a Database {#creating-a-database} ## Creating a Database {#creating-a-database}
@ -31,8 +30,6 @@ ENGINE = MaterializedMySQL('host:port', ['database' | database], 'user', 'passwo
- `max_rows_in_buffer` — Maximum number of rows that can be cached in memory (for a single table; the cached data cannot be queried). When this number is exceeded, the data will be materialized. Default: `65 505`.
- `max_bytes_in_buffer` — Maximum number of bytes that can be cached in memory (for a single table; the cached data cannot be queried). When this number is exceeded, the data will be materialized. Default: `1 048 576`.
- `max_rows_in_buffers` — Maximum number of rows that can be cached in memory (for the database; the cached data cannot be queried). When this number is exceeded, the data will be materialized. Default: `65 505`.
- `max_bytes_in_buffers` — Maximum number of bytes that can be cached in memory (for the database; the cached data cannot be queried). When this number is exceeded, the data will be materialized. Default: `1 048 576`.
- `max_flush_data_time` — Maximum number of milliseconds that data can be cached in memory (for the database; the cached data cannot be queried). When this time is exceeded, the data will be materialized. Default: `1000`.
- `max_wait_time_when_mysql_unavailable` — Retry interval when MySQL is not available (in milliseconds). A negative value disables retries. Default: `1000`.
- `allows_query_when_mysql_lost` — Allows querying a materialized table when MySQL is lost. Default: `0` (`false`).
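For illustration, several of these settings can be set when the database is created. A minimal sketch (the host, database name, and credentials are placeholders):

``` sql
CREATE DATABASE mysql_db
ENGINE = MaterializedMySQL('localhost:3306', 'db', 'user', '***')
SETTINGS
    allows_query_when_mysql_lost = 1,
    max_wait_time_when_mysql_unavailable = 10000;
```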
@@ -52,8 +49,9 @@ For the correct work of `MaterializedMySQL`, there are few mandatory `MySQL`-sid
- `default_authentication_plugin = mysql_native_password` since `MaterializedMySQL` can only authorize with this method.
- `gtid_mode = on` since GTID-based logging is mandatory for providing correct `MaterializedMySQL` replication.

:::note
While turning on `gtid_mode` you should also specify `enforce_gtid_consistency = on`.
:::

## Virtual Columns {#virtual-columns}
@@ -76,7 +74,7 @@ When working with the `MaterializedMySQL` database engine, [ReplacingMergeTree](
| FLOAT | [Float32](../../sql-reference/data-types/float.md) |
| DOUBLE | [Float64](../../sql-reference/data-types/float.md) |
| DECIMAL, NEWDECIMAL | [Decimal](../../sql-reference/data-types/decimal.md) |
| DATE, NEWDATE | [Date](../../sql-reference/data-types/date.md) |
| DATETIME, TIMESTAMP | [DateTime](../../sql-reference/data-types/datetime.md) |
| DATETIME2, TIMESTAMP2 | [DateTime64](../../sql-reference/data-types/datetime64.md) |
| YEAR | [UInt16](../../sql-reference/data-types/int-uint.md) |
@@ -220,13 +218,14 @@ extra care needs to be taken.
You may specify overrides for tables that do not exist yet.

:::warning
It is easy to break replication with table overrides if not used with care. For example:

* If an ALIAS column is added with a table override, and a column with the same name is later added to the source
  MySQL table, the converted ALTER TABLE query in ClickHouse will fail and replication stops.
* It is currently possible to add overrides that reference nullable columns where not-nullable are required, such as in
  `ORDER BY` or `PARTITION BY`. This will cause CREATE TABLE queries that will fail, also causing replication to stop.
:::

## Examples of Use {#examples-of-use}
@@ -1,6 +1,6 @@
---
sidebar_label: MaterializedPostgreSQL
sidebar_position: 60
---

# [experimental] MaterializedPostgreSQL {#materialize-postgresql}
@@ -46,7 +46,9 @@ After `MaterializedPostgreSQL` database is created, it does not automatically de
ATTACH TABLE postgres_database.new_table;
```

:::warning
Before version 22.1, adding a table to replication left an unremoved temporary replication slot (named `{db_name}_ch_replication_slot_tmp`). If attaching tables in a ClickHouse version before 22.1, make sure to delete it manually (`SELECT pg_drop_replication_slot('{db_name}_ch_replication_slot_tmp')`). Otherwise disk usage will grow. This issue is fixed in 22.1.
:::
## Dynamically removing tables from replication {#dynamically-removing-table-from-replication}

@@ -135,69 +137,70 @@ FROM pg_class
WHERE oid = 'postgres_table'::regclass;
```

:::warning
Replication of [**TOAST**](https://www.postgresql.org/docs/9.5/storage-toast.html) values is not supported. The default value for the data type will be used.
:::
## Settings {#settings}

1. `materialized_postgresql_tables_list` {#materialized-postgresql-tables-list}

    Sets a comma-separated list of PostgreSQL database tables, which will be replicated via the [MaterializedPostgreSQL](../../engines/database-engines/materialized-postgresql.md) database engine.

    Default value: empty list — means the whole PostgreSQL database will be replicated.

2. `materialized_postgresql_schema` {#materialized-postgresql-schema}

    Default value: empty string. (The default schema is used.)

3. `materialized_postgresql_schema_list` {#materialized-postgresql-schema-list}

    Default value: empty list. (The default schema is used.)

4. `materialized_postgresql_allow_automatic_update` {#materialized-postgresql-allow-automatic-update}

    Do not use this setting before version 22.1.

    Allows reloading a table in the background when schema changes are detected. DDL queries on the PostgreSQL side are not replicated via the ClickHouse [MaterializedPostgreSQL](../../engines/database-engines/materialized-postgresql.md) engine, because this is not allowed with the PostgreSQL logical replication protocol, but the fact of DDL changes is detected transactionally. In this case, the default behaviour is to stop replicating those tables once DDL is detected. However, if this setting is enabled then, instead of stopping the replication of those tables, they will be reloaded in the background via a database snapshot without data loss, and replication will continue for them.

    Possible values:

    - 0 — The table is not automatically updated in the background when schema changes are detected.
    - 1 — The table is automatically updated in the background when schema changes are detected.

    Default value: `0`.

5. `materialized_postgresql_max_block_size` {#materialized-postgresql-max-block-size}

    Sets the number of rows collected in memory before flushing data into a PostgreSQL database table.

    Possible values:

    - Positive integer.

    Default value: `65536`.

6. `materialized_postgresql_replication_slot` {#materialized-postgresql-replication-slot}

    A user-created replication slot. Must be used together with `materialized_postgresql_snapshot`.

7. `materialized_postgresql_snapshot` {#materialized-postgresql-snapshot}

    A text string identifying a snapshot, from which the [initial dump of PostgreSQL tables](../../engines/database-engines/materialized-postgresql.md) will be performed. Must be used together with `materialized_postgresql_replication_slot`.
``` sql
CREATE DATABASE database1
ENGINE = MaterializedPostgreSQL('postgres1:5432', 'postgres_database', 'postgres_user', 'postgres_password')
SETTINGS materialized_postgresql_tables_list = 'table1,table2,table3';

SELECT * FROM database1.table1;
```
The settings can be changed, if necessary, using a DDL query. But it is impossible to change the setting `materialized_postgresql_tables_list`. To update the list of tables in this setting use the `ATTACH TABLE` query.
``` sql
ALTER DATABASE postgres_database MODIFY SETTING materialized_postgresql_max_block_size = <new_size>;
```
## Notes {#notes}

@@ -213,47 +216,47 @@ Please note that this should be used only if it is actually needed. If there is
1. Configure a replication slot in PostgreSQL.
```yaml
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-demo-cluster
spec:
  numberOfInstances: 2
  postgresql:
    parameters:
      wal_level: logical
  patroni:
    slots:
      clickhouse_sync:
        type: logical
        database: demodb
        plugin: pgoutput
```
2. Wait for the replication slot to be ready, then begin a transaction and export the transaction snapshot identifier:

```sql
BEGIN;
SELECT pg_export_snapshot();
```
3. In ClickHouse, create the database:

```sql
CREATE DATABASE demodb
ENGINE = MaterializedPostgreSQL('postgres1:5432', 'postgres_database', 'postgres_user', 'postgres_password')
SETTINGS
    materialized_postgresql_replication_slot = 'clickhouse_sync',
    materialized_postgresql_snapshot = '0000000A-0000023F-3',
    materialized_postgresql_tables_list = 'table1,table2,table3';
```
4. End the PostgreSQL transaction once replication to the ClickHouse database is confirmed. Verify that replication continues after failover:

```bash
kubectl exec acid-demo-cluster-0 -c postgres -- su postgres -c 'patronictl failover --candidate acid-demo-cluster-1 --force'
```
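As a rough sanity check after the failover, the replicated tables should remain queryable and keep receiving new rows. A sketch, assuming the `demodb` database and table list created above:

``` sql
SELECT count() FROM demodb.table1;
```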
### Required permissions
@@ -1,9 +1,9 @@
---
sidebar_position: 50
sidebar_label: MySQL
---

# MySQL

Allows connecting to databases on a remote MySQL server and performing `INSERT` and `SELECT` queries to exchange data between ClickHouse and MySQL.
@@ -49,8 +49,6 @@ ENGINE = MySQL('host:port', ['database' | database], 'user', 'password')
All other MySQL data types are converted into [String](../../sql-reference/data-types/string.md).

Because the ClickHouse `Date` type has a different range from the MySQL date range, if a MySQL date is out of the range of the ClickHouse `Date` type, you can use the setting `mysql_datatypes_support_level` to modify the mapping from the MySQL date type to the ClickHouse date type: `date2Date32` (convert MySQL's date type to ClickHouse `Date32`), `date2String` (convert MySQL's date type to ClickHouse `String`; this is usually used when your MySQL data contains dates before 1925), or `default` (convert MySQL's date type to ClickHouse `Date`).

[Nullable](../../sql-reference/data-types/nullable.md) is supported.
## Global Variables Support {#global-variables-support}

@@ -61,8 +59,9 @@ These variables are supported:
- `version`
- `max_allowed_packet`

:::warning
For now, these variables are stubs and do not correspond to anything.
:::

Example:
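For instance, a supported variable can be read with MySQL-style syntax (a minimal sketch; the value returned depends on the server configuration):

``` sql
SELECT @@max_allowed_packet;
```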
@@ -1,6 +1,6 @@
---
sidebar_position: 40
sidebar_label: PostgreSQL
---

# PostgreSQL {#postgresql}
@@ -1,6 +1,6 @@
---
sidebar_position: 30
sidebar_label: Replicated
---

# [experimental] Replicated {#replicated}
@@ -20,8 +20,9 @@ One ClickHouse server can have multiple replicated databases running and updatin
- `shard_name` — Shard name. Database replicas are grouped into shards by `shard_name`.
- `replica_name` — Replica name. Replica names must be different for all replicas of the same shard.

:::warning
For [ReplicatedMergeTree](../table-engines/mergetree-family/replication.md#table_engines-replication) tables, if no arguments are provided, the default arguments `/clickhouse/tables/{uuid}/{shard}` and `{replica}` are used. These can be changed in the server settings [default_replica_path](../../operations/server-configuration-parameters/settings.md#default_replica_path) and [default_replica_name](../../operations/server-configuration-parameters/settings.md#default_replica_name). The macro `{uuid}` is unfolded to the table's uuid, while `{shard}` and `{replica}` are unfolded to values from the server config, not from the database engine arguments. But in the future, it will be possible to use the `shard_name` and `replica_name` of the Replicated database.
:::
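A minimal sketch of creating such a database (the ZooKeeper path, shard, and replica names are illustrative):

``` sql
CREATE DATABASE r
ENGINE = Replicated('some/path/r', 'shard1', 'replica1');
```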
## Specifics and Recommendations {#specifics-and-recommendations}
@@ -1,6 +1,6 @@
---
sidebar_position: 55
sidebar_label: SQLite
---

# SQLite {#sqlite}
@@ -1,15 +0,0 @@
---
toc_folder_title: Engines
toc_hidden: true
toc_priority: 25
toc_title: hidden
---
# ClickHouse Engines {#clickhouse-engines}
There are two key engine kinds in ClickHouse:
- [Table engines](../engines/table-engines/index.md)
- [Database engines](../engines/database-engines/index.md)
{## [Original article](https://clickhouse.com/docs/en/engines/) ##}
@@ -1,6 +1,6 @@
---
sidebar_position: 12
sidebar_label: ExternalDistributed
---

# ExternalDistributed {#externaldistributed}
@@ -51,3 +51,6 @@ You can specify any number of shards and any number of replicas for each shard.
- [MySQL table engine](../../../engines/table-engines/integrations/mysql.md)
- [PostgreSQL table engine](../../../engines/table-engines/integrations/postgresql.md)
- [Distributed table engine](../../../engines/table-engines/special/distributed.md)
[Original article](https://clickhouse.com/docs/en/engines/table-engines/integrations/ExternalDistributed/) <!--hide-->
@@ -1,6 +1,6 @@
---
sidebar_position: 9
sidebar_label: EmbeddedRocksDB
---

# EmbeddedRocksDB Engine {#EmbeddedRocksDB-engine}
@@ -1,6 +1,6 @@
---
sidebar_position: 6
sidebar_label: HDFS
---

# HDFS {#table_engines-hdfs}
@@ -98,8 +98,9 @@ Table consists of all the files in both directories (all files should satisfy fo
CREATE TABLE table_with_asterisk (name String, value UInt32) ENGINE = HDFS('hdfs://hdfs1:9000/{some,another}_dir/*', 'TSV')
```

:::warning
If the listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use `?`.
:::
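For instance, files named `file000` through `file999` can be matched with one brace pair per digit (a sketch reusing the cluster address from the sample above; the directory name is a placeholder):

``` sql
CREATE TABLE table_with_leading_zeros (name String, value UInt32)
ENGINE = HDFS('hdfs://hdfs1:9000/some_dir/file{0..9}{0..9}{0..9}', 'TSV')
```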
**Example**
@@ -1,6 +1,6 @@
---
sidebar_position: 4
sidebar_label: Hive
---

# Hive {#hive}
@@ -137,7 +137,7 @@ CREATE TABLE test.test_orc
    `f_array_array_float` Array(Array(Float32)),
    `day` String
)
ENGINE = Hive('thrift://202.168.117.26:9083', 'test', 'test_orc')
PARTITION BY day
```
@@ -406,3 +406,5 @@ f_char: hello world
f_bool: true
day: 2021-09-18
```
[Original article](https://clickhouse.com/docs/en/engines/table-engines/integrations/hive/) <!--hide-->
@@ -1,6 +1,6 @@
---
sidebar_position: 40
sidebar_label: Integrations
---

# Table Engines for Integrations {#table-engines-for-integrations}
@@ -1,6 +1,6 @@
---
sidebar_position: 3
sidebar_label: JDBC
---

# JDBC {#table-engine-jdbc}
@@ -1,6 +1,6 @@
---
sidebar_position: 8
sidebar_label: Kafka
---

# Kafka {#kafka}
@@ -87,8 +87,9 @@ Examples:
<summary>Deprecated Method for Creating a Table</summary>

:::warning
Do not use this method in new projects. If possible, switch old projects to the method described above.
:::
``` sql
Kafka(kafka_broker_list, kafka_topic_list, kafka_group_name, kafka_format
@@ -133,7 +134,7 @@ Example:
SELECT level, sum(total) FROM daily GROUP BY level;
```

To improve performance, received messages are grouped into blocks the size of [max_insert_block_size](../../../operations/settings/settings.md#settings-max_insert_block_size). If the block wasn't formed within [stream_flush_interval_ms](../../../operations/settings/settings.md#stream-flush-interval-ms) milliseconds, the data will be flushed to the table regardless of the completeness of the block.
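Both thresholds can be adjusted at the session or profile level. A hedged sketch (the values are illustrative, not recommendations):

``` sql
-- Smaller blocks and a shorter flush interval trade throughput for latency.
SET max_insert_block_size = 1048576;
SET stream_flush_interval_ms = 1000;
```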
To stop receiving topic data or to change the conversion logic, detach the materialized view:
@@ -1,6 +1,6 @@
---
sidebar_position: 12
sidebar_label: MaterializedPostgreSQL
---

# MaterializedPostgreSQL {#materialize-postgresql}
@@ -52,5 +52,8 @@ PRIMARY KEY key;
SELECT key, value, _version FROM postgresql_db.postgresql_replica;
```

:::warning
Replication of [**TOAST**](https://www.postgresql.org/docs/9.5/storage-toast.html) values is not supported. The default value for the data type will be used.
:::
[Original article](https://clickhouse.com/docs/en/engines/table-engines/integrations/materialized-postgresql) <!--hide-->
@@ -1,6 +1,6 @@
---
sidebar_position: 5
sidebar_label: MongoDB
---

# MongoDB {#mongodb}
@@ -1,6 +1,6 @@
---
sidebar_position: 4
sidebar_label: MySQL
---

# MySQL {#mysql}
@@ -148,3 +148,5 @@ Default value: `16`.
- [The mysql table function](../../../sql-reference/table-functions/mysql.md)
- [Using MySQL as a source of external dictionary](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md#dicts-external_dicts_dict_sources-mysql)
[Original article](https://clickhouse.com/docs/en/engines/table-engines/integrations/mysql/) <!--hide-->
@@ -1,6 +1,6 @@
---
sidebar_position: 2
sidebar_label: ODBC
---

# ODBC {#table-engine-odbc}
@@ -1,6 +1,6 @@
---
sidebar_position: 11
sidebar_label: PostgreSQL
---

# PostgreSQL {#postgresql}
@@ -73,8 +73,9 @@ All joins, aggregations, sorting, `IN [ array ]` conditions and the `LIMIT` samp
PostgreSQL `Array` types are converted into ClickHouse arrays.
:::warning
Be careful: in PostgreSQL, array data created like `type_name[]` may contain multi-dimensional arrays of different dimensions in different table rows of the same column. In ClickHouse, however, all table rows of the same column must contain multidimensional arrays with the same number of dimensions.
:::
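A sketch of the ClickHouse side of this constraint (the table and values are hypothetical):

``` sql
CREATE TABLE arr_demo (a Array(Array(Int32))) ENGINE = Memory;
INSERT INTO arr_demo VALUES ([[1, 2], [3]]); -- OK: every row has two dimensions
-- A row with a different number of dimensions would be rejected.
```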
Supports multiple replicas that must be listed by `|`. For example:
@@ -1,6 +1,6 @@
---
sidebar_position: 10
sidebar_label: RabbitMQ
---

# RabbitMQ Engine {#rabbitmq-engine}
@@ -1,6 +1,6 @@
---
sidebar_position: 7
sidebar_label: S3
---

# S3 Table Engine {#table-engine-s3}
@@ -66,8 +66,9 @@ For more information about virtual columns see [here](../../../engines/table-eng
Constructions with `{}` are similar to the [remote](../../../sql-reference/table-functions/remote.md) table function.

:::warning
If the listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use `?`.
:::
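For instance, files named `file-000.csv` through `file-999.csv` can be matched with one brace pair per digit (a sketch; the bucket URL is a placeholder):

``` sql
CREATE TABLE big_table (name String, value UInt32)
ENGINE = S3('https://storage.example.com/my-bucket/my_folder/file-{0..9}{0..9}{0..9}.csv', 'CSV');
```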
**Example with wildcards 1**
@@ -158,3 +159,5 @@ The following settings can be specified in configuration file for given endpoint
## See also

- [s3 table function](../../../sql-reference/table-functions/s3.md)
[Original article](https://clickhouse.com/docs/en/engines/table-engines/integrations/s3/) <!--hide-->
@@ -1,6 +1,6 @@
---
sidebar_position: 7
sidebar_label: SQLite
---

# SQLite {#sqlite}
@@ -57,3 +57,6 @@ SELECT * FROM sqlite_db.table2 ORDER BY col1;
- [SQLite](../../../engines/database-engines/sqlite.md) engine
- [sqlite](../../../sql-reference/table-functions/sqlite.md) table function
[Original article](https://clickhouse.com/docs/en/engines/table-engines/integrations/sqlite/) <!--hide-->
@@ -1,7 +1,6 @@
---
sidebar_position: 20
sidebar_label: Log Family
---

# Log Engine Family {#log-engine-family}
@@ -10,3 +10,6 @@ The engine belongs to the family of `Log` engines. See the common properties of
`Log` differs from [TinyLog](../../../engines/table-engines/log-family/tinylog.md) in that a small file of "marks" resides with the column files. These marks are written on every data block and contain offsets that indicate where to start reading the file in order to skip the specified number of rows. This makes it possible to read table data in multiple threads.

For concurrent data access, the read operations can be performed simultaneously, while write operations block reads and each other.

The `Log` engine does not support indexes. Similarly, if writing to a table failed, the table is broken, and reading from it returns an error. The `Log` engine is appropriate for temporary data, write-once tables, and for testing or demonstration purposes.
[Original article](https://clickhouse.com/docs/en/engines/table-engines/log-family/log/) <!--hide-->
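A minimal sketch of using the engine (the table and data are hypothetical):

``` sql
CREATE TABLE log_demo (ts DateTime, message String) ENGINE = Log;
INSERT INTO log_demo VALUES (now(), 'first'), (now(), 'second');
SELECT message FROM log_demo;
```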
@@ -1,6 +1,6 @@
---
sidebar_position: 60
sidebar_label: AggregatingMergeTree
---

# AggregatingMergeTree {#aggregatingmergetree}
@ -42,8 +42,9 @@ When creating a `AggregatingMergeTree` table the same [clauses](../../../engines
<summary>Deprecated Method for Creating a Table</summary> <summary>Deprecated Method for Creating a Table</summary>
!!! attention "Attention" :::warning
Do not use this method in new projects and, if possible, switch the old projects to the method described above. Do not use this method in new projects and, if possible, switch the old projects to the method described above.
:::
``` sql ``` sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster] CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]

View File

@@ -1,6 +1,6 @@
 ---
-toc_priority: 36
-toc_title: CollapsingMergeTree
+sidebar_position: 70
+sidebar_label: CollapsingMergeTree
 ---
 # CollapsingMergeTree {#table_engine-collapsingmergetree}
@@ -42,8 +42,9 @@ When creating a `CollapsingMergeTree` table, the same [query clauses](../../../e
 <summary>Deprecated Method for Creating a Table</summary>
-!!! attention "Attention"
-    Do not use this method in new projects and, if possible, switch the old projects to the method described above.
+:::warning
+Do not use this method in new projects and, if possible, switch old projects to the method described above.
+:::
 ``` sql
 CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]

View File

@@ -1,12 +1,15 @@
 ---
-toc_priority: 32
-toc_title: Custom Partitioning Key
+sidebar_position: 30
+sidebar_label: Custom Partitioning Key
 ---
 # Custom Partitioning Key {#custom-partitioning-key}
-!!! warning "Warning"
-    In most cases you don't need partition key, and in most other cases you don't need partition key more granular than by months. Partitioning does not speed up queries (in contrast to the ORDER BY expression). You should never use too granular partitioning. Don't partition your data by client identifiers or names (instead make client identifier or name the first column in the ORDER BY expression).
+:::warning
+In most cases you do not need a partition key, and in most other cases you do not need a partition key more granular than by months. Partitioning does not speed up queries (in contrast to the ORDER BY expression).
+You should never use too granular of partitioning. Don't partition your data by client identifiers or names. Instead, make a client identifier or name the first column in the ORDER BY expression.
+:::
 Partitioning is available for the [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md) family tables (including [replicated](../../../engines/table-engines/mergetree-family/replication.md) tables). [Materialized views](../../../engines/table-engines/special/materializedview.md#materializedview) based on MergeTree tables support partitioning, as well.
@@ -40,8 +43,9 @@ By default, the floating-point partition key is not supported. To use it enable
 When inserting new data to a table, this data is stored as a separate part (chunk) sorted by the primary key. In 10-15 minutes after inserting, the parts of the same partition are merged into the entire part.
-!!! info "Info"
+:::info
 A merge only works for data parts that have the same value for the partitioning expression. This means **you shouldn’t make overly granular partitions** (more than about a thousand partitions). Otherwise, the `SELECT` query performs poorly because of an unreasonably large number of files in the file system and open file descriptors.
+:::
 Use the [system.parts](../../../operations/system-tables/parts.md#system_tables-parts) table to view the table parts and partitions. For example, let’s assume that we have a `visits` table with partitioning by month. Let’s perform the `SELECT` query for the `system.parts` table:
@@ -78,8 +82,9 @@ Let’s break down the name of the part: `201901_1_9_2_11`:
 - `2` is the chunk level (the depth of the merge tree it is formed from).
 - `11` is the mutation version (if a part mutated)
-!!! info "Info"
+:::info
 The parts of old-type tables have the name: `20190117_20190123_2_2_0` (minimum date - maximum date - minimum block number - maximum block number - level).
+:::
 The `active` column shows the status of the part. `1` is active; `0` is inactive. The inactive parts are, for example, source parts remaining after merging to a larger part. The corrupted data parts are also indicated as inactive.
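The monthly-partitioning recommendation and the `system.parts` inspection above can be sketched as follows (the `visits_example` table is a hypothetical stand-in for the docs' `visits` table):

``` sql
-- Illustrative only: partition by month, as the warning above recommends.
CREATE TABLE visits_example
(
    VisitDate Date,
    CounterID UInt32,
    Sign Int8
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(VisitDate)
ORDER BY (CounterID, VisitDate);

-- Inspect the resulting parts and partitions:
SELECT partition, name, active
FROM system.parts
WHERE table = 'visits_example';
```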

View File

@@ -1,6 +1,6 @@
 ---
-toc_priority: 38
-toc_title: GraphiteMergeTree
+sidebar_position: 90
+sidebar_label: GraphiteMergeTree
 ---
 # GraphiteMergeTree {#graphitemergetree}
@@ -54,8 +54,9 @@ When creating a `GraphiteMergeTree` table, the same [clauses](../../../engines/t
 <summary>Deprecated Method for Creating a Table</summary>
-!!! attention "Attention"
-    Do not use this method in new projects and, if possible, switch the old projects to the method described above.
+:::warning
+Do not use this method in new projects and, if possible, switch old projects to the method described above.
+:::
 ``` sql
 CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
@@ -119,12 +120,13 @@ default
 ...
 ```
-!!! warning "Attention"
+:::warning
 Patterns must be strictly ordered:
 1. Patterns without `function` or `retention`.
 1. Patterns with both `function` and `retention`.
 1. Pattern `default`.
+:::
 When processing a row, ClickHouse checks the rules in the `pattern` sections. Each of `pattern` (including `default`) sections can contain `function` parameter for aggregation, `retention` parameters or both. If the metric name matches the `regexp`, the rules from the `pattern` section (or sections) are applied; otherwise, the rules from the `default` section are used.
@@ -253,7 +255,6 @@ Valid values:
 ```
-!!! warning "Warning"
+:::warning
 Data rollup is performed during merges. Usually, for old partitions, merges are not started, so for rollup it is necessary to trigger an unscheduled merge using [optimize](../../../sql-reference/statements/optimize.md). Or use additional tools, for example [graphite-ch-optimizer](https://github.com/innogames/graphite-ch-optimizer).
+:::
-[Original article](https://clickhouse.com/docs/en/operations/table_engines/graphitemergetree/) <!--hide-->

View File

@@ -1,7 +1,6 @@
 ---
-toc_folder_title: MergeTree Family
-toc_priority: 28
-toc_title: Introduction
+sidebar_position: 10
+sidebar_label: MergeTree Family
 ---
 # MergeTree Engine Family {#mergetree-engine-family}

View File

@@ -1,6 +1,6 @@
 ---
-toc_priority: 30
-toc_title: MergeTree
+sidebar_position: 11
+sidebar_label: MergeTree
 ---
 # MergeTree {#table_engines-mergetree}
@@ -27,8 +27,9 @@ Main features:
 If necessary, you can set the data sampling method in the table.
-!!! info "Info"
+:::info
 The [Merge](../../../engines/table-engines/special/merge.md#merge) engine does not belong to the `*MergeTree` family.
+:::
 ## Creating a Table {#table_engine-mergetree-creating-a-table}
@@ -127,8 +128,9 @@ The `index_granularity` setting can be omitted because 8192 is the default value
 <summary>Deprecated Method for Creating a Table</summary>
-!!! attention "Attention"
+:::warning
 Do not use this method in new projects. If possible, switch old projects to the method described above.
+:::
 ``` sql
 CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
@@ -304,8 +306,8 @@ CREATE TABLE table_name
 Indices from the example can be used by ClickHouse to reduce the amount of data to read from disk in the following queries:
 ``` sql
-SELECT count() FROM table WHERE s < 'z'
-SELECT count() FROM table WHERE u64 * i32 == 10 AND u64 * length(s) >= 1234
+SELECT count() FROM table WHERE s &lt; 'z'
+SELECT count() FROM table WHERE u64 * i32 == 10 AND u64 * length(s) &gt;= 1234
 ```
 #### Available Types of Indices {#available-types-of-indices}
@@ -364,7 +366,7 @@ The `set` index can be used with all functions. Function subsets for other index
 | Function (operator) / Index | primary key | minmax | ngrambf_v1 | tokenbf_v1 | bloom_filter |
 |------------------------------------------------------------------------------------------------------------|-------------|--------|-------------|-------------|---------------|
 | [equals (=, ==)](../../../sql-reference/functions/comparison-functions.md#function-equals) | ✔ | ✔ | ✔ | ✔ | ✔ |
-| [notEquals(!=, <>)](../../../sql-reference/functions/comparison-functions.md#function-notequals) | ✔ | ✔ | ✔ | ✔ | ✔ |
+| [notEquals(!=, &lt;&gt;)](../../../sql-reference/functions/comparison-functions.md#function-notequals) | ✔ | ✔ | ✔ | ✔ | ✔ |
 | [like](../../../sql-reference/functions/string-search-functions.md#function-like) | ✔ | ✔ | ✔ | ✔ | ✗ |
 | [notLike](../../../sql-reference/functions/string-search-functions.md#function-notlike) | ✔ | ✔ | ✔ | ✔ | ✗ |
 | [startsWith](../../../sql-reference/functions/string-functions.md#startswith) | ✔ | ✔ | ✔ | ✔ | ✗ |
@@ -382,8 +384,10 @@ The `set` index can be used with all functions. Function subsets for other index
 Functions with a constant argument that is less than ngram size can’t be used by `ngrambf_v1` for query optimization.
-!!! note "Note"
-    Bloom filters can have false positive matches, so the `ngrambf_v1`, `tokenbf_v1`, and `bloom_filter` indexes can’t be used for optimizing queries where the result of a function is expected to be false, for example:
+:::note
+Bloom filters can have false positive matches, so the `ngrambf_v1`, `tokenbf_v1`, and `bloom_filter` indexes can not be used for optimizing queries where the result of a function is expected to be false.
+For example:
 - Can be optimized:
 - `s LIKE '%test%'`
@@ -391,12 +395,13 @@ Functions with a constant argument that is less than ngram size can’t be used
 - `s = 1`
 - `NOT s != 1`
 - `startsWith(s, 'test')`
-- Can’t be optimized:
+- Can not be optimized:
 - `NOT s LIKE '%test%'`
 - `s NOT LIKE '%test%'`
 - `NOT s = 1`
 - `s != 1`
 - `NOT startsWith(s, 'test')`
+:::
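As a minimal sketch of the distinction drawn in the note above (the table and index names here are hypothetical, and the `tokenbf_v1` parameters are just plausible values):

``` sql
-- Hypothetical table with a tokenbf_v1 data skipping index on a string column.
CREATE TABLE index_example
(
    id UInt64,
    s String,
    INDEX s_idx s TYPE tokenbf_v1(512, 3, 0) GRANULARITY 4
)
ENGINE = MergeTree
ORDER BY id;

SELECT count() FROM index_example WHERE s LIKE '%test%';      -- can use the index
SELECT count() FROM index_example WHERE s NOT LIKE '%test%';  -- can not (see note above)
```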
 ## Projections {#projections}
 Projections are like [materialized views](../../../sql-reference/statements/create/view.md#materialized) but defined in part-level. It provides consistency guarantees along with automatic usage in queries.

View File

@@ -1,6 +1,6 @@
 ---
-toc_priority: 33
-toc_title: ReplacingMergeTree
+sidebar_position: 40
+sidebar_label: ReplacingMergeTree
 ---
 # ReplacingMergeTree {#replacingmergetree}
@@ -29,8 +29,9 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
 For a description of request parameters, see [statement description](../../../sql-reference/statements/create/table.md).
-!!! note "Attention"
+:::warning
 Uniqueness of rows is determined by the `ORDER BY` table section, not `PRIMARY KEY`.
+:::
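The deduplication rule in the warning above can be illustrated with a small sketch (names are hypothetical; `OPTIMIZE ... FINAL` is used only to force the merge for the demonstration):

``` sql
-- Duplicates are collapsed by the ORDER BY key, keeping the row with the
-- highest `ver` value, and only when parts are merged.
CREATE TABLE replacing_example
(
    key UInt64,
    value String,
    ver UInt32
)
ENGINE = ReplacingMergeTree(ver)
ORDER BY key;

INSERT INTO replacing_example VALUES (1, 'old', 1), (1, 'new', 2);
OPTIMIZE TABLE replacing_example FINAL;  -- force the merge
SELECT * FROM replacing_example;         -- one row per key remains
```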
 **ReplacingMergeTree Parameters**
@@ -49,8 +50,9 @@ When creating a `ReplacingMergeTree` table the same [clauses](../../../engines/t
 <summary>Deprecated Method for Creating a Table</summary>
-!!! attention "Attention"
-    Do not use this method in new projects and, if possible, switch the old projects to the method described above.
+:::warning
+Do not use this method in new projects and, if possible, switch old projects to the method described above.
+:::
 ``` sql
 CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]

View File

@@ -1,6 +1,6 @@
 ---
-toc_priority: 31
-toc_title: Data Replication
+sidebar_position: 20
+sidebar_label: Data Replication
 ---
 # Data Replication {#table_engines-replication}
@@ -31,8 +31,9 @@ ClickHouse uses [Apache ZooKeeper](https://zookeeper.apache.org) for storing rep
 To use replication, set parameters in the [zookeeper](../../../operations/server-configuration-parameters/settings.md#server-settings_zookeeper) server configuration section.
-!!! attention "Attention"
+:::warning
 Don’t neglect the security setting. ClickHouse supports the `digest` [ACL scheme](https://zookeeper.apache.org/doc/current/zookeeperProgrammers.html#sc_ZooKeeperAccessControl) of the ZooKeeper security subsystem.
+:::
 Example of setting the addresses of the ZooKeeper cluster:

View File

@@ -1,6 +1,6 @@
 ---
-toc_priority: 34
-toc_title: SummingMergeTree
+sidebar_position: 50
+sidebar_label: SummingMergeTree
 ---
 # SummingMergeTree {#summingmergetree}
@@ -41,8 +41,9 @@ When creating a `SummingMergeTree` table the same [clauses](../../../engines/tab
 <summary>Deprecated Method for Creating a Table</summary>
-!!! attention "Attention"
+:::warning
 Do not use this method in new projects and, if possible, switch the old projects to the method described above.
+:::
 ``` sql
 CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]

View File

@@ -1,6 +1,6 @@
 ---
-toc_priority: 37
-toc_title: VersionedCollapsingMergeTree
+sidebar_position: 80
+sidebar_label: VersionedCollapsingMergeTree
 ---
 # VersionedCollapsingMergeTree {#versionedcollapsingmergetree}
@@ -53,8 +53,9 @@ When creating a `VersionedCollapsingMergeTree` table, the same [clauses](../../.
 <summary>Deprecated Method for Creating a Table</summary>
-!!! attention "Attention"
-    Do not use this method in new projects. If possible, switch the old projects to the method described above.
+:::warning
+Do not use this method in new projects. If possible, switch old projects to the method described above.
+:::
 ``` sql
 CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]

View File

@@ -1,6 +1,6 @@
 ---
-toc_priority: 45
-toc_title: Buffer
+sidebar_position: 120
+sidebar_label: Buffer
 ---
 # Buffer Table Engine {#buffer}
@@ -54,8 +54,9 @@ If the set of columns in the Buffer table does not match the set of columns in a
 If the types do not match for one of the columns in the Buffer table and a subordinate table, an error message is entered in the server log, and the buffer is cleared.
 The same thing happens if the subordinate table does not exist when the buffer is flushed.
-!!! attention "Attention"
+:::warning
 Running ALTER on the Buffer table in releases made before 26 Oct 2021 will cause a `Block structure mismatch` error (see [#15117](https://github.com/ClickHouse/ClickHouse/issues/15117) and [#30565](https://github.com/ClickHouse/ClickHouse/pull/30565)), so deleting the Buffer table and then recreating is the only option. It is advisable to check that this error is fixed in your release before trying to run ALTER on the Buffer table.
+:::
 If the server is restarted abnormally, the data in the buffer is lost.
@@ -73,4 +74,4 @@ A Buffer table is used when too many INSERTs are received from a large number of
 Note that it does not make sense to insert data one row at a time, even for Buffer tables. This will only produce a speed of a few thousand rows per second, while inserting larger blocks of data can produce over a million rows per second (see the section “Performance”).
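A minimal sketch of the setup discussed above, assuming a hypothetical destination table `dest` (the threshold values are arbitrary examples, not recommendations):

``` sql
-- A Buffer table in front of a MergeTree table; the buffer is flushed when any
-- max_* threshold is reached, or when all min_* thresholds are reached.
CREATE TABLE dest (id UInt64, msg String) ENGINE = MergeTree ORDER BY id;
CREATE TABLE dest_buffer AS dest
ENGINE = Buffer(currentDatabase(), dest,
                16,                   -- num_layers
                10, 100,              -- min_time, max_time (seconds)
                10000, 1000000,       -- min_rows, max_rows
                10000000, 100000000); -- min_bytes, max_bytes
-- INSERTs go to dest_buffer and are periodically flushed to dest.
```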
-[Original article](https://clickhouse.com/docs/en/operations/table_engines/buffer/) <!--hide-->
+[Original article](https://clickhouse.com/docs/en/engines/table-engines/special/buffer/) <!--hide-->

View File

@@ -1,6 +1,6 @@
 ---
-toc_priority: 35
-toc_title: Dictionary
+sidebar_position: 20
+sidebar_label: Dictionary
 ---
 # Dictionary Table Engine {#dictionary}
@@ -97,3 +97,5 @@ select * from products limit 1;
 **See Also**
 - [Dictionary function](../../../sql-reference/table-functions/dictionary.md#dictionary-function)
+[Original article](https://clickhouse.com/docs/en/engines/table-engines/special/dictionary/) <!--hide-->

View File

@@ -1,6 +1,6 @@
 ---
-toc_priority: 33
-toc_title: Distributed
+sidebar_position: 10
+sidebar_label: Distributed
 ---
 # Distributed Table Engine {#distributed}
@@ -64,19 +64,19 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster] AS [db2.]name2
 - `monitor_max_sleep_time_ms` - same as [distributed_directory_monitor_max_sleep_time_ms](../../../operations/settings/settings.md#distributed_directory_monitor_max_sleep_time_ms)
-!!! note "Note"
+:::note
 **Durability settings** (`fsync_...`):
 - Affect only asynchronous INSERTs (i.e. `insert_distributed_sync=false`) when data first stored on the initiator node disk and later asynchronously send to shards.
 - May significantly decrease the inserts' performance
 - Affect writing the data stored inside Distributed table folder into the **node which accepted your insert**. If you need to have guarantees of writing data to underlying MergeTree tables - see durability settings (`...fsync...`) in `system.merge_tree_settings`
 For **Insert limit settings** (`..._insert`) see also:
 - [insert_distributed_sync](../../../operations/settings/settings.md#insert_distributed_sync) setting
 - [prefer_localhost_replica](../../../operations/settings/settings.md#settings-prefer-localhost-replica) setting
 - `bytes_to_throw_insert` handled before `bytes_to_delay_insert`, so you should not set it to the value less then `bytes_to_delay_insert`
+:::
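The durability settings described in the note above can be sketched as follows (the cluster `my_cluster` and table `local_table` are assumed to exist; illustrative only):

``` sql
-- Enable fsync for asynchronous Distributed inserts: safer on the initiator
-- node, but can noticeably slow inserts, as the note warns.
CREATE TABLE dist_example AS local_table
ENGINE = Distributed(my_cluster, currentDatabase(), local_table, rand())
SETTINGS fsync_after_insert = 1, fsync_directories = 1;
```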
 **Example**
@@ -215,8 +215,9 @@ To learn more about how distibuted `in` and `global in` queries are processed, r
 - `_shard_num` — Contains the `shard_num` value from the table `system.clusters`. Type: [UInt32](../../../sql-reference/data-types/int-uint.md).
-!!! note "Note"
+:::note
 Since [remote](../../../sql-reference/table-functions/remote.md) and [cluster](../../../sql-reference/table-functions/cluster.md) table functions internally create temporary Distributed table, `_shard_num` is available there too.
+:::
 **See Also**
@@ -225,3 +226,4 @@ To learn more about how distibuted `in` and `global in` queries are processed, r
 - [shardNum()](../../../sql-reference/functions/other-functions.md#shard-num) and [shardCount()](../../../sql-reference/functions/other-functions.md#shard-count) functions
+[Original article](https://clickhouse.com/docs/en/engines/table-engines/special/distributed/) <!--hide-->

View File

@@ -1,6 +1,6 @@
 ---
-toc_priority: 45
-toc_title: External Data
+sidebar_position: 130
+sidebar_label: External Data
 ---
 # External Data for Query Processing {#external-data-for-query-processing}
@@ -63,4 +63,3 @@ $ curl -F 'passwd=@passwd.tsv;' 'http://localhost:8123/?query=SELECT+shell,+coun
 For distributed query processing, the temporary tables are sent to all the remote servers.
-[Original article](https://clickhouse.com/docs/en/operations/table_engines/external_data/) <!--hide-->

View File

@@ -1,6 +1,6 @@
 ---
-toc_priority: 37
-toc_title: File
+sidebar_position: 40
+sidebar_label: File
 ---
 # File Table Engine {#table_engines-file}
@@ -30,8 +30,9 @@ When creating table using `File(Format)` it creates empty subdirectory in that f
 You may manually create this subfolder and file in server filesystem and then [ATTACH](../../../sql-reference/statements/attach.md) it to table information with matching name, so you can query data from that file.
-!!! warning "Warning"
+:::warning
 Be careful with this functionality, because ClickHouse does not keep track of external changes to such files. The result of simultaneous writes via ClickHouse and outside of ClickHouse is undefined.
+:::
 ## Example {#example}
@@ -85,4 +86,4 @@ $ echo -e "1,2\n3,4" | clickhouse-local -q "CREATE TABLE table (a Int64, b Int64
 - Indices
 - Replication
-[Original article](https://clickhouse.com/docs/en/operations/table_engines/file/) <!--hide-->
+[Original article](https://clickhouse.com/docs/en/operations/table_engines/special/file/) <!--hide-->

View File

@@ -1,6 +1,6 @@
 ---
-toc_priority: 46
-toc_title: GenerateRandom
+sidebar_position: 140
+sidebar_label: GenerateRandom
 ---
 # GenerateRandom Table Engine {#table_engines-generate}
@@ -56,4 +56,4 @@ SELECT * FROM generate_engine_table LIMIT 3
 - Indices
 - Replication
-[Original article](https://clickhouse.com/docs/en/operations/table_engines/generate/) <!--hide-->
+[Original article](https://clickhouse.com/docs/en/engines/table-engines/special/generate/) <!--hide-->

View File

@@ -1,6 +1,6 @@
 ---
-toc_folder_title: Special
-toc_priority: 31
+sidebar_position: 50
+sidebar_label: Special
 ---
 # Special Table Engines {#special-table-engines}

View File

@@ -1,14 +1,15 @@
 ---
-toc_priority: 40
-toc_title: Join
+sidebar_position: 70
+sidebar_label: Join
 ---
 # Join Table Engine {#join}
 Optional prepared data structure for usage in [JOIN](../../../sql-reference/statements/select/join.md#select-join) operations.
-!!! note "Note"
+:::note
 This is not an article about the [JOIN clause](../../../sql-reference/statements/select/join.md#select-join) itself.
+:::
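A minimal sketch of what the Join engine holds in memory (table and column names are illustrative):

``` sql
-- Prepared right-hand side for an ANY LEFT JOIN on `id`.
CREATE TABLE id_val (id UInt32, val UInt8) ENGINE = Join(ANY, LEFT, id);
INSERT INTO id_val VALUES (1, 11), (2, 12);

-- The stored keys can also be probed directly with joinGet:
SELECT joinGet('id_val', 'val', toUInt32(1));
```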
 ## Creating a Table {#creating-a-table}
@@ -125,3 +126,5 @@ ALTER TABLE id_val_join DELETE WHERE id = 3;
 │  1 │  21 │
 └────┴─────┘
 ```
+[Original article](https://clickhouse.com/docs/en/operations/table_engines/special/join/) <!--hide-->

View File

@@ -1,10 +1,10 @@
 ---
-toc_priority: 43
-toc_title: MaterializedView
+sidebar_position: 100
+sidebar_label: MaterializedView
 ---
 # MaterializedView Table Engine {#materializedview}
 Used for implementing materialized views (for more information, see [CREATE VIEW](../../../sql-reference/statements/create/view.md#materialized)). For storing data, it uses a different engine that was specified when creating the view. When reading from a table, it just uses that engine.
-[Original article](https://clickhouse.com/docs/en/operations/table_engines/materializedview/) <!--hide-->
+[Original article](https://clickhouse.com/docs/en/engines/table-engines/special/materializedview/) <!--hide-->

View File

@@ -1,6 +1,6 @@
---
-toc_priority: 44
+sidebar_position: 110
-toc_title: Memory
+sidebar_label: Memory
---

# Memory Table Engine {#memory}

@@ -15,4 +15,4 @@ Normally, using this table engine is not justified. However, it can be used for
The Memory engine is used by the system for temporary tables with external query data (see the section “External data for processing a query”), and for implementing `GLOBAL IN` (see the section “IN operators”).

-[Original article](https://clickhouse.com/docs/en/operations/table_engines/memory/) <!--hide-->
+[Original article](https://clickhouse.com/docs/en/engines/table-engines/special/memory/) <!--hide-->
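For illustration, a minimal Memory table (names are made up; data lives only in RAM and disappears on server restart):

``` sql
CREATE TABLE session_lookup (id UInt32, name String) ENGINE = Memory;

INSERT INTO session_lookup VALUES (1, 'alice'), (2, 'bob');

SELECT name FROM session_lookup WHERE id = 2;
```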


@@ -1,6 +1,6 @@
---
-toc_priority: 36
+sidebar_position: 30
-toc_title: Merge
+sidebar_label: Merge
---

# Merge Table Engine {#merge}

@@ -12,7 +12,7 @@ Reading is automatically parallelized. Writing to a table is not supported. When
## Creating a Table {#creating-a-table}

``` sql
CREATE TABLE ... Engine=Merge(db_name, tables_regexp)
```

**Engine Parameters**

@@ -81,3 +81,5 @@ SELECT * FROM WatchLog;
- [Virtual columns](../../../engines/table-engines/special/index.md#table_engines-virtual_columns)
- [merge](../../../sql-reference/table-functions/merge.md) table function
+
+[Original article](https://clickhouse.com/docs/en/operations/table_engines/special/merge/) <!--hide-->
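A hedged sketch of the `WatchLog` pattern referenced in the hunk above (two underlying tables plus a Merge table whose regexp matches them; column lists are abbreviated, and a Merge table never reads from itself):

``` sql
CREATE TABLE WatchLog_old (date Date, UserId Int64)
ENGINE = MergeTree ORDER BY date;

CREATE TABLE WatchLog_new (date Date, UserId Int64)
ENGINE = MergeTree ORDER BY date;

-- Reads from every table in the current database matching '^WatchLog'.
CREATE TABLE WatchLog AS WatchLog_old
ENGINE = Merge(currentDatabase(), '^WatchLog');

SELECT * FROM WatchLog;
```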


@@ -1,13 +1,15 @@
---
-toc_priority: 38
+sidebar_position: 50
-toc_title: 'Null'
+sidebar_label: 'Null'
---

# Null Table Engine {#null}

When writing to a `Null` table, data is ignored. When reading from a `Null` table, the response is empty.

-!!! info "Hint"
+:::note
-However, you can create a materialized view on a `Null` table. So the data written to the table will end up affecting the view, but original raw data will still be discarded.
+If you are wondering why this is useful, note that you can create a materialized view on a `Null` table. So the data written to the table will end up affecting the view, but original raw data will still be discarded.
+:::

-[Original article](https://clickhouse.com/docs/en/operations/table_engines/null/) <!--hide-->
+[Original article](https://clickhouse.com/docs/en/operations/table_engines/special/null/) <!--hide-->
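A hedged sketch of that hint (names are illustrative): writes to the `Null` table are discarded, but a materialized view attached to it still receives every inserted block:

``` sql
CREATE TABLE raw_events (message String) ENGINE = Null;

-- The view has its own MergeTree storage and sees all inserts into raw_events.
CREATE MATERIALIZED VIEW kept_events
ENGINE = MergeTree ORDER BY message
AS SELECT message FROM raw_events;

INSERT INTO raw_events VALUES ('hello');

SELECT count() FROM raw_events;  -- the Null table reads back empty
SELECT count() FROM kept_events; -- the view retained the row
```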


@@ -1,6 +1,6 @@
---
-toc_priority: 39
+sidebar_position: 60
-toc_title: Set
+sidebar_label: Set
---

# Set Table Engine {#set}

@@ -20,4 +20,4 @@ When creating a table, the following settings are applied:
- [persistent](../../../operations/settings/settings.md#persistent)

-[Original article](https://clickhouse.com/docs/en/operations/table_engines/set/) <!--hide-->
+[Original article](https://clickhouse.com/docs/en/operations/table_engines/special/set/) <!--hide-->
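An illustrative sketch (the `visits` table is hypothetical): a Set table stores a persistent set that can only be used on the right side of the `IN` operator:

``` sql
CREATE TABLE userid_set (userid UInt64) ENGINE = Set;

INSERT INTO userid_set VALUES (123), (456);

SELECT count() FROM visits WHERE userid IN userid_set;
```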


@@ -1,6 +1,6 @@
---
-toc_priority: 41
+sidebar_position: 80
-toc_title: URL
+sidebar_label: URL
---

# URL Table Engine {#table_engines-url}

@@ -89,4 +89,4 @@ SELECT * FROM url_engine_table
- Indexes.
- Replication.

-[Original article](https://clickhouse.com/docs/en/operations/table_engines/url/) <!--hide-->
+[Original article](https://clickhouse.com/docs/en/operations/table_engines/special/url/) <!--hide-->
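For reference, the `url_engine_table` queried in the hunk above can be declared roughly like this (the address and format are illustrative; the remote server must answer HTTP GET for SELECTs and POST for INSERTs):

``` sql
CREATE TABLE url_engine_table (word String, value UInt64)
ENGINE = URL('http://127.0.0.1:12345/', CSV);
```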


@@ -1,10 +1,10 @@
---
-toc_priority: 42
+sidebar_position: 90
-toc_title: View
+sidebar_label: View
---

# View Table Engine {#table_engines-view}

Used for implementing views (for more information, see the `CREATE VIEW` query). It does not store data, but only stores the specified `SELECT` query. When reading from a table, it runs this query (and deletes all unnecessary columns from the query).

-[Original article](https://clickhouse.com/docs/en/operations/table_engines/view/) <!--hide-->
+[Original article](https://clickhouse.com/docs/en/operations/table_engines/special/view/) <!--hide-->
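An illustrative sketch (table and view names are invented): the view stores only its `SELECT`, and reading from it re-runs that query:

``` sql
CREATE VIEW active_users AS
SELECT user_id FROM visits WHERE is_active = 1;

-- Executes the stored SELECT against `visits` at read time.
SELECT count() FROM active_users;
```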


@@ -0,0 +1,8 @@
position: 10
label: 'Example Datasets'
collapsible: true
collapsed: true
link:
  type: generated-index
  title: Example Datasets
  slug: /en/example-datasets


@@ -1,6 +1,6 @@
---
-toc_priority: 19
+sidebar_label: AMPLab Big Data Benchmark
-toc_title: AMPLab Big Data Benchmark
+description: A benchmark dataset used for comparing the performance of data warehousing solutions.
---

# AMPLab Big Data Benchmark {#amplab-big-data-benchmark}


@@ -1,6 +1,6 @@
---
-toc_priority: 20
+sidebar_label: Brown University Benchmark
-toc_title: Brown University Benchmark
+description: A new analytical benchmark for machine-generated log data
---

# Brown University Benchmark


@@ -1,9 +1,8 @@
---
-toc_priority: 21
+sidebar_label: Cell Towers
-toc_title: Cell Towers
---

-# Cell Towers {#cell-towers}
+# Cell Towers

This dataset is from [OpenCellid](https://www.opencellid.org/) - The world's largest Open Database of Cell Towers.

@@ -96,7 +95,7 @@ SELECT mcc, count() FROM cell_towers GROUP BY mcc ORDER BY count() DESC LIMIT 10
So, the top countries are: the USA, Germany, and Russia.

-You may want to create an [External Dictionary](../../sql-reference/dictionaries/external-dictionaries/external-dicts.md) in ClickHouse to decode these values.
+You may want to create an [External Dictionary](../sql-reference/dictionaries/external-dictionaries/external-dicts.md) in ClickHouse to decode these values.

## Use case {#use-case}


@@ -1,9 +1,8 @@
---
-toc_priority: 18
+sidebar_label: Terabyte Click Logs from Criteo
-toc_title: Terabyte Click Logs from Criteo
---

-# Terabyte of Click Logs from Criteo {#terabyte-of-click-logs-from-criteo}
+# Terabyte of Click Logs from Criteo

Download the data from http://labs.criteo.com/downloads/download-terabyte-click-logs/


@@ -1,6 +1,5 @@
---
-toc_priority: 11
+sidebar_label: GitHub Events
-toc_title: GitHub Events
---

# GitHub Events Dataset


@@ -1,9 +1,8 @@
---
-toc_priority: 21
+sidebar_label: New York Public Library "What's on the Menu?" Dataset
-toc_title: Menus
---

-# New York Public Library "What's on the Menu?" Dataset {#menus-dataset}
+# New York Public Library "What's on the Menu?" Dataset

The dataset is created by the New York Public Library. It contains historical data on the menus of hotels, restaurants and cafes with the dishes along with their prices.

@@ -40,7 +39,7 @@ The data is normalized and consists of four tables:
## Create the Tables {#create-tables}

-We use [Decimal](../../sql-reference/data-types/decimal.md) data type to store prices.
+We use [Decimal](../sql-reference/data-types/decimal.md) data type to store prices.

```sql
CREATE TABLE dish

@@ -116,17 +115,17 @@ clickhouse-client --format_csv_allow_single_quotes 0 --input_format_null_as_defa
clickhouse-client --format_csv_allow_single_quotes 0 --input_format_null_as_default 0 --date_time_input_format best_effort --query "INSERT INTO menu_item FORMAT CSVWithNames" < MenuItem.csv
```

-We use [CSVWithNames](../../interfaces/formats.md#csvwithnames) format as the data is represented by CSV with header.
+We use [CSVWithNames](../interfaces/formats.md#csvwithnames) format as the data is represented by CSV with header.

We disable `format_csv_allow_single_quotes` as only double quotes are used for data fields and single quotes can be inside the values and should not confuse the CSV parser.

-We disable [input_format_null_as_default](../../operations/settings/settings.md#settings-input-format-null-as-default) as our data does not have [NULL](../../sql-reference/syntax.md#null-literal). Otherwise ClickHouse will try to parse `\N` sequences and can be confused with `\` in data.
+We disable [input_format_null_as_default](../operations/settings/settings.md#settings-input-format-null-as-default) as our data does not have [NULL](../sql-reference/syntax.md#null-literal). Otherwise ClickHouse will try to parse `\N` sequences and can be confused with `\` in data.

-The setting [date_time_input_format best_effort](../../operations/settings/settings.md#settings-date_time_input_format) allows to parse [DateTime](../../sql-reference/data-types/datetime.md) fields in wide variety of formats. For example, ISO-8601 without seconds like '2000-01-01 01:02' will be recognized. Without this setting only fixed DateTime format is allowed.
+The setting [date_time_input_format best_effort](../operations/settings/settings.md#settings-date_time_input_format) allows to parse [DateTime](../sql-reference/data-types/datetime.md) fields in wide variety of formats. For example, ISO-8601 without seconds like '2000-01-01 01:02' will be recognized. Without this setting only fixed DateTime format is allowed.

## Denormalize the Data {#denormalize-data}

-Data is presented in multiple tables in [normalized form](https://en.wikipedia.org/wiki/Database_normalization#Normal_forms). It means you have to perform [JOIN](../../sql-reference/statements/select/join.md#select-join) if you want to query, e.g. dish names from menu items.
+Data is presented in multiple tables in [normalized form](https://en.wikipedia.org/wiki/Database_normalization#Normal_forms). It means you have to perform [JOIN](../sql-reference/statements/select/join.md#select-join) if you want to query, e.g. dish names from menu items.

For typical analytical tasks it is way more efficient to deal with pre-JOINed data to avoid doing `JOIN` every time. It is called "denormalized" data.

We will create a table `menu_item_denorm` which will contain all the data JOINed together:
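A hedged sketch of such a denormalizing statement (the column names and join keys are illustrative guesses based on the four table names above; the real statement carries many more columns):

``` sql
CREATE TABLE menu_item_denorm
ENGINE = MergeTree ORDER BY (dish_id, menu_id)
AS SELECT
    menu_item.price,
    dish.id AS dish_id,
    dish.name AS dish_name,
    menu.id AS menu_id,
    menu.event AS menu_event
FROM menu_item
    JOIN dish ON menu_item.dish_id = dish.id
    JOIN menu_page ON menu_item.menu_page_id = menu_page.id
    JOIN menu ON menu_page.menu_id = menu.id;
```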


@@ -1,9 +1,9 @@
---
-toc_priority: 15
+sidebar_label: Web Analytics Data
-toc_title: Web Analytics Data
+description: Dataset consists of two tables containing anonymized web analytics data with hits and visits
---

-# Anonymized Web Analytics Data {#anonymized-web-analytics-data}
+# Anonymized Web Analytics Data

Dataset consists of two tables containing anonymized web analytics data with hits (`hits_v1`) and visits (`visits_v1`).

@@ -73,6 +73,6 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1"
## Example Queries {#example-queries}

-[The ClickHouse tutorial](../../getting-started/tutorial.md) is based on this web analytics dataset, and the recommended way to get started with this dataset is to go through the tutorial.
+[The ClickHouse tutorial](../../tutorial.md) is based on this web analytics dataset, and the recommended way to get started with this dataset is to go through the tutorial.

Additional examples of queries to these tables can be found among [stateful tests](https://github.com/ClickHouse/ClickHouse/tree/master/tests/queries/1_stateful) of ClickHouse (they are named `test.hits` and `test.visits` there).

View File

@@ -1,9 +1,9 @@
---
-toc_priority: 20
+sidebar_label: New York Taxi Data
-toc_title: New York Taxi Data
+description: Data for billions of taxi and for-hire vehicle (Uber, Lyft, etc.) trips originating in New York City since 2009
---

-# New York Taxi Data {#new-york-taxi-data}
+# New York Taxi Data

This dataset can be obtained in two ways:

@@ -290,8 +290,9 @@ $ sudo service clickhouse-server restart
$ clickhouse-client --query "select count(*) from datasets.trips_mergetree"
```

-!!! info "Info"
+:::info
If you run the queries described below, you have to use the full table name, `datasets.trips_mergetree`.
+:::

## Results on Single Server {#results-on-single-server}


@@ -1,9 +1,9 @@
---
-toc_priority: 21
+sidebar_label: OnTime Airline Flight Data
-toc_title: OnTime
+description: Dataset containing the on-time performance of airline flights
---

-# OnTime {#ontime}
+# OnTime

This dataset can be obtained in two ways:

@@ -156,8 +156,9 @@ $ sudo service clickhouse-server restart
$ clickhouse-client --query "select count(*) from datasets.ontime"
```

-!!! info "Info"
+:::note
If you run the queries described below, you have to use the full table name, `datasets.ontime`.
+:::

## Queries {#queries}


@@ -1,11 +1,11 @@
---
-toc_priority: 20
+sidebar_label: Air Traffic Data
-toc_title: OpenSky
+description: The data in this dataset is derived and cleaned from the full OpenSky dataset to illustrate the development of air traffic during the COVID-19 pandemic.
---

-# Crowdsourced air traffic data from The OpenSky Network 2020 {#opensky}
+# Crowdsourced air traffic data from The OpenSky Network 2020

-"The data in this dataset is derived and cleaned from the full OpenSky dataset to illustrate the development of air traffic during the COVID-19 pandemic. It spans all flights seen by the network's more than 2500 members since 1 January 2019. More data will be periodically included in the dataset until the end of the COVID-19 pandemic".
+The data in this dataset is derived and cleaned from the full OpenSky dataset to illustrate the development of air traffic during the COVID-19 pandemic. It spans all flights seen by the network's more than 2500 members since 1 January 2019. More data will be periodically included in the dataset until the end of the COVID-19 pandemic.

Source: https://zenodo.org/record/5092942#.YRBCyTpRXYd

@@ -60,9 +60,9 @@ ls -1 flightlist_*.csv.gz | xargs -P100 -I{} bash -c 'gzip -c -d "{}" | clickhou
`xargs -P100` specifies to use up to 100 parallel workers but as we only have 30 files, the number of workers will be only 30.

- For every file, `xargs` will run a script with `bash -c`. The script has substitution in form of `{}` and the `xargs` command will substitute the filename to it (we have asked it for `xargs` with `-I{}`).
- The script will decompress the file (`gzip -c -d "{}"`) to standard output (`-c` parameter) and the output is redirected to `clickhouse-client`.
-- We also asked to parse [DateTime](../../sql-reference/data-types/datetime.md) fields with extended parser ([--date_time_input_format best_effort](../../operations/settings/settings.md#settings-date_time_input_format)) to recognize ISO-8601 format with timezone offsets.
+- We also asked to parse [DateTime](../sql-reference/data-types/datetime.md) fields with extended parser ([--date_time_input_format best_effort](../operations/settings/settings.md#settings-date_time_input_format)) to recognize ISO-8601 format with timezone offsets.

-Finally, `clickhouse-client` will do insertion. It will read input data in [CSVWithNames](../../interfaces/formats.md#csvwithnames) format.
+Finally, `clickhouse-client` will do insertion. It will read input data in [CSVWithNames](../interfaces/formats.md#csvwithnames) format.

Parallel upload takes 24 seconds.


@@ -1,6 +1,5 @@
---
-toc_priority: 16
+sidebar_label: Recipes Dataset
-toc_title: Recipes Dataset
---

# Recipes Dataset

@@ -51,13 +50,13 @@ clickhouse-client --query "
This is a showcase of how to parse custom CSV, as it requires multiple tweaks.

Explanation:

-- The dataset is in CSV format, but it requires some preprocessing on insertion; we use table function [input](../../sql-reference/table-functions/input.md) to perform preprocessing;
+- The dataset is in CSV format, but it requires some preprocessing on insertion; we use table function [input](../sql-reference/table-functions/input.md) to perform preprocessing;
- The structure of CSV file is specified in the argument of the table function `input`;
- The field `num` (row number) is unneeded - we parse it from file and ignore;
- We use `FORMAT CSVWithNames` but the header in CSV will be ignored (by command line parameter `--input_format_with_names_use_header 0`), because the header does not contain the name for the first field;
- File is using only double quotes to enclose CSV strings; some strings are not enclosed in double quotes, and single quote must not be parsed as the string enclosing - that's why we also add the `--format_csv_allow_single_quote 0` parameter;
- Some strings from CSV cannot be parsed, because they contain `\M/` sequence at the beginning of the value; the only value starting with backslash in CSV can be `\N` that is parsed as SQL NULL. We add `--input_format_allow_errors_num 10` parameter and up to ten malformed records can be skipped;
-- There are arrays for ingredients, directions and NER fields; these arrays are represented in unusual form: they are serialized into string as JSON and then placed in CSV - we parse them as String and then use [JSONExtract](../../sql-reference/functions/json-functions/) function to transform it to Array.
+- There are arrays for ingredients, directions and NER fields; these arrays are represented in unusual form: they are serialized into string as JSON and then placed in CSV - we parse them as String and then use [JSONExtract](../sql-reference/functions/json-functions/) function to transform it to Array.

## Validate the Inserted Data

@@ -81,7 +80,7 @@ Result:
### Top Components by the Number of Recipes:

-In this example we learn how to use [arrayJoin](../../sql-reference/functions/array-join/) function to expand an array into a set of rows.
+In this example we learn how to use [arrayJoin](../sql-reference/functions/array-join/) function to expand an array into a set of rows.

Query:

@@ -186,7 +185,7 @@ Result:
10 rows in set. Elapsed: 0.215 sec. Processed 2.23 million rows, 1.48 GB (10.35 million rows/s., 6.86 GB/s.)
```

-In this example, we involve [has](../../sql-reference/functions/array-functions/#hasarr-elem) function to filter by array elements and sort by the number of directions.
+In this example, we involve [has](../sql-reference/functions/array-functions/#hasarr-elem) function to filter by array elements and sort by the number of directions.

There is a wedding cake that requires the whole 126 steps to produce! Show those directions:
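Putting the two functions above together, a hedged sketch against the `recipes` table (the `NER`, `directions`, and `title` columns come from this dataset; the exact queries in the guide may differ):

``` sql
-- arrayJoin expands each array element into its own row:
SELECT arrayJoin(NER) AS ingredient, count() AS c
FROM recipes
GROUP BY ingredient
ORDER BY c DESC
LIMIT 10;

-- has() keeps rows whose array contains the given element:
SELECT title, length(directions) AS num_steps
FROM recipes
WHERE has(NER, 'strawberry')
ORDER BY num_steps DESC
LIMIT 5;
```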


@@ -1,9 +1,11 @@
---
-toc_priority: 16
+sidebar_label: Star Schema Benchmark
-toc_title: Star Schema Benchmark
+description: "Dataset based on the TPC-H dbgen source. The coding style and architecture
+  follows the TPCH dbgen."
---

-# Star Schema Benchmark {#star-schema-benchmark}
+# Star Schema Benchmark

Compiling dbgen:

@@ -15,8 +17,9 @@ $ make
Generating data:

-!!! warning "Attention"
+:::warning
With `-s 100` dbgen generates 600 million rows (67 GB), while with `-s 1000` it generates 6 billion rows (which takes a lot of time)
+:::

``` bash
$ ./dbgen -s 1000 -T c


@@ -1,9 +1,8 @@
---
-toc_priority: 20
+sidebar_label: UK Property Price Paid
-toc_title: UK Property Price Paid
---

-# UK Property Price Paid {#uk-property-price-paid}
+# UK Property Price Paid

The dataset contains data about prices paid for real-estate property in England and Wales. The data is available since year 1995.

The size of the dataset in uncompressed form is about 4 GiB and it will take about 278 MiB in ClickHouse.

@@ -55,9 +54,9 @@ In this example, we define the structure of source data from the CSV file and sp
The preprocessing is:

- splitting the postcode to two different columns `postcode1` and `postcode2` that is better for storage and queries;
- converting the `time` field to date as it only contains 00:00 time;
-- ignoring the [UUid](../../sql-reference/data-types/uuid.md) field because we don't need it for analysis;
+- ignoring the [UUid](../sql-reference/data-types/uuid.md) field because we don't need it for analysis;
-- transforming `type` and `duration` to more readable Enum fields with function [transform](../../sql-reference/functions/other-functions.md#transform);
+- transforming `type` and `duration` to more readable Enum fields with function [transform](../sql-reference/functions/other-functions.md#transform);
-- transforming `is_new` and `category` fields from single-character string (`Y`/`N` and `A`/`B`) to [UInt8](../../sql-reference/data-types/int-uint.md#uint8-uint16-uint32-uint64-uint256-int8-int16-int32-int64-int128-int256) field with 0 and 1.
+- transforming `is_new` and `category` fields from single-character string (`Y`/`N` and `A`/`B`) to [UInt8](../sql-reference/data-types/int-uint.md#uint8-uint16-uint32-uint64-uint256-int8-int16-int32-int64-int128-int256) field with 0 and 1.
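The `type` mapping can be sketched with the `transform` function like this (the five codes and their labels are a hedged reconstruction of the dataset's property-type field; the real import expression may differ):

``` sql
-- transform(value, from_array, to_array, default)
SELECT transform('T',
    ['T', 'S', 'D', 'F', 'O'],
    ['terraced', 'semi-detached', 'detached', 'flat', 'other'],
    'unknown') AS property_type;
```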
Preprocessed data is piped directly to `clickhouse-client` to be inserted into ClickHouse table in streaming fashion.

@@ -353,7 +352,7 @@ Result:
## Let's Speed Up Queries Using Projections {#speedup-with-projections}

-[Projections](../../sql-reference/statements/alter/projection.md) allow to improve queries speed by storing pre-aggregated data.
+[Projections](../sql-reference/statements/alter/projection.md) allow to improve queries speed by storing pre-aggregated data.

### Build a Projection {#build-projection}

@@ -389,7 +388,7 @@ SETTINGS mutations_sync = 1;
Let's run the same 3 queries.

-[Enable](../../operations/settings/settings.md#allow-experimental-projection-optimization) projections for selects:
+[Enable](../operations/settings/settings.md#allow-experimental-projection-optimization) projections for selects:

```sql
SET allow_experimental_projection_optimization = 1;
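For reference, a projection like the one this section builds is created and then materialized in two steps; a hedged sketch (the projection name and aggregates are illustrative, and `SETTINGS mutations_sync = 1` makes the ALTER wait for completion, matching the hunk above):

``` sql
ALTER TABLE uk_price_paid
    ADD PROJECTION projection_by_year_district_town
    (
        SELECT toYear(date), district, town, avg(price), sum(price), count()
        GROUP BY toYear(date), district, town
    );

ALTER TABLE uk_price_paid
    MATERIALIZE PROJECTION projection_by_year_district_town
SETTINGS mutations_sync = 1;
```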


@@ -1,11 +1,10 @@
---
-toc_priority: 17
+sidebar_label: WikiStat
-toc_title: WikiStat
---

-# WikiStat {#wikistat}
+# WikiStat

-See: http://dumps.wikimedia.org/other/pagecounts-raw/
+See http://dumps.wikimedia.org/other/pagecounts-raw/ for details.

Creating a table:


@@ -1,25 +0,0 @@
---
title: What is a columnar database?
toc_hidden: true
toc_priority: 101
---
# What Is a Columnar Database? {#what-is-a-columnar-database}
A columnar database stores the data of each column independently. This allows reading data from disk only for those columns that are used in any given query. The cost is that operations that affect whole rows become proportionally more expensive. The synonym for a columnar database is a column-oriented database management system. ClickHouse is a typical example of such a system.
Key columnar database advantages are:
- Queries that use only a few columns out of many.
- Aggregating queries against large volumes of data.
- Column-wise data compression.
Here is the illustration of the difference between traditional row-oriented systems and columnar databases when building reports:
**Traditional row-oriented**
![Traditional row-oriented](https://clickhouse.com/docs/en/images/row-oriented.gif#)
**Columnar**
![Columnar](https://clickhouse.com/docs/en/images/column-oriented.gif#)
A columnar database is the preferred choice for analytical applications because it allows having many columns in a table just in case, without paying the cost for unused columns at read query execution time. Column-oriented databases are designed for big data processing and data warehousing, because they often natively scale using distributed clusters of low-cost hardware to increase throughput. ClickHouse does it with a combination of [distributed](../../engines/table-engines/special/distributed.md) and [replicated](../../engines/table-engines/mergetree-family/replication.md) tables.


@@ -1,17 +0,0 @@
---
title: "What does \u201CClickHouse\u201D mean?"
toc_hidden: true
toc_priority: 10
---
# What Does “ClickHouse” Mean? {#what-does-clickhouse-mean}
It’s a combination of “**Click**stream” and “Data ware**House**”. It comes from the original use case at Yandex.Metrica, where ClickHouse was supposed to keep records of all clicks by people from all over the Internet, and it still does the job. You can read more about this use case on the [ClickHouse history](../../introduction/history.md) page.
This two-part meaning has two consequences:
- The only correct way to write Click**H**ouse is with capital H.
- If you need to abbreviate it, use **CH**. For some historical reasons, abbreviating as CK is also popular in China, mostly because one of the first talks about ClickHouse in Chinese used this form.
!!! info "Fun fact"
Many years after ClickHouse got its name, this approach of combining two words that are meaningful on their own has been highlighted as the best way to name a database in [research by Andy Pavlo](https://www.cs.cmu.edu/~pavlo/blog/2020/03/on-naming-a-database-management-system.html), an Associate Professor of Databases at Carnegie Mellon University. ClickHouse shared its “best database name of all time” award with Postgres.


@@ -1,15 +0,0 @@
---
title: How do I contribute code to ClickHouse?
toc_hidden: true
toc_priority: 120
---
# How do I contribute code to ClickHouse? {#how-do-i-contribute-code-to-clickhouse}
ClickHouse is an open-source project [developed on GitHub](https://github.com/ClickHouse/ClickHouse).
As customary, contribution instructions are published in [CONTRIBUTING.md](https://github.com/ClickHouse/ClickHouse/blob/master/CONTRIBUTING.md) file in the root of the source code repository.
If you want to suggest a substantial change to ClickHouse, consider [opening a GitHub issue](https://github.com/ClickHouse/ClickHouse/issues/new/choose) explaining what you want to do, to discuss it with maintainers and community first. [Examples of such RFC issues](https://github.com/ClickHouse/ClickHouse/issues?q=is%3Aissue+is%3Aopen+rfc).
If your contributions are security related, please check out [our security policy](https://github.com/ClickHouse/ClickHouse/security/policy/) too.


@@ -1,25 +0,0 @@
---
title: General questions about ClickHouse
toc_hidden_folder: true
toc_priority: 1
toc_title: General
---
# General Questions About ClickHouse {#general-questions}
Questions:
- [What is ClickHouse?](../../index.md#what-is-clickhouse)
- [Why ClickHouse is so fast?](../../faq/general/why-clickhouse-is-so-fast.md)
- [Who is using ClickHouse?](../../faq/general/who-is-using-clickhouse.md)
- [What does “ClickHouse” mean?](../../faq/general/dbms-naming.md)
- [What does “Не тормозит” mean?](../../faq/general/ne-tormozit.md)
- [What is OLAP?](../../faq/general/olap.md)
- [What is a columnar database?](../../faq/general/columnar-database.md)
- [Why not use something like MapReduce?](../../faq/general/mapreduce.md)
- [How do I contribute code to ClickHouse?](../../faq/general/how-do-i-contribute-code-to-clickhouse.md)
!!! info "Don’t see what you were looking for?"
Check out [other F.A.Q. categories](../../faq/index.md) or browse around main documentation articles found in the left sidebar.
{## [Original article](https://clickhouse.com/docs/en/faq/general/) ##}


@@ -1,13 +0,0 @@
---
title: Why not use something like MapReduce?
toc_hidden: true
toc_priority: 110
---
# Why Not Use Something Like MapReduce? {#why-not-use-something-like-mapreduce}
We can refer to systems like MapReduce as distributed computing systems in which the reduce operation is based on distributed sorting. The most common open-source solution in this class is [Apache Hadoop](http://hadoop.apache.org). Large IT companies often have proprietary in-house solutions.
These systems aren't appropriate for online queries due to their high latency. In other words, they can't be used as the back-end for a web interface, and they aren't useful for real-time data updates. Distributed sorting isn't the best way to perform a reduce operation if its result and all the intermediate results (if there are any) fit in the RAM of a single server, which is usually the case for online queries. In such a case, a hash table is the optimal way to perform reduce operations. A common approach to optimizing map-reduce tasks is pre-aggregation (partial reduce) using an in-RAM hash table; the user performs this optimization manually. Distributed sorting is one of the main causes of reduced performance when running simple map-reduce tasks.
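The pre-aggregation (partial reduce) idea above can be sketched in a few lines. This is an illustrative plain-Python sketch, not code from any real MapReduce framework; the function names are made up:

```python
from collections import defaultdict

def mapper_with_combiner(records):
    """Map phase with pre-aggregation (partial reduce) in a RAM hash table.

    Instead of emitting one (key, 1) pair per record and letting the
    distributed sort group them, we sum counts locally first, so far
    fewer pairs have to be shuffled and sorted across the cluster.
    """
    partial = defaultdict(int)  # the in-memory hash table
    for key in records:
        partial[key] += 1
    return list(partial.items())  # emit already-combined pairs

def reducer(shuffled):
    """Final reduce: merge the partial sums from all mappers."""
    totals = defaultdict(int)
    for key, count in shuffled:
        totals[key] += count
    return dict(totals)

# Two "mappers", each pre-aggregating its own chunk of data:
chunk1 = ["a", "b", "a", "a"]
chunk2 = ["b", "a", "b"]
shuffled = mapper_with_combiner(chunk1) + mapper_with_combiner(chunk2)
result = reducer(shuffled)
# result == {"a": 4, "b": 3}, and only 4 pairs were shuffled instead of 7
```

The hash table does the grouping in one pass and in RAM, which is exactly why distributed sorting is unnecessary overhead for results that fit on a single server.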
Most MapReduce implementations allow you to execute arbitrary code on a cluster, but a declarative query language is better suited to OLAP because it lets you run experiments quickly. For example, Hadoop has Hive and Pig. Also consider Cloudera Impala or Shark (outdated) for Spark, as well as Spark SQL, Presto, and Apache Drill. Performance when running such tasks is highly sub-optimal compared to specialized systems, and their relatively high latency makes it unrealistic to use them as the backend for a web interface.

---
title: "What does \u201C\u043D\u0435 \u0442\u043E\u0440\u043C\u043E\u0437\u0438\u0442\
\u201D mean?"
toc_hidden: true
toc_priority: 11
---
# What Does “Не тормозит” Mean? {#what-does-ne-tormozit-mean}
This question usually arises when people see official ClickHouse t-shirts. They have large words **“ClickHouse не тормозит”** on the front.
Before ClickHouse became open-source, it was developed as an in-house storage system by Yandex, the largest Russian IT company. That's why its slogan was originally written in Russian: “не тормозит” (pronounced “ne tormozit”). After the open-source release, we first produced some of those t-shirts for events in Russia, and it was a no-brainer to use the slogan as-is.
One of the following batches was supposed to be given away at events outside of Russia, so we tried to make an English version of the slogan. Unfortunately, Russian is rather elegant at expressing things, space on a t-shirt is limited, and we failed to come up with a good enough translation (most options turned out to be either too long or inaccurate), so we decided to keep the slogan in Russian even on t-shirts produced for international events. It turned out to be a great decision because people all over the world are positively surprised and curious when they see it.
So, what does it mean? Here are some ways to translate *“не тормозит”*:
- If you translate it literally, it'd be something like *“ClickHouse does not press the brake pedal”*.
- If you want to express it the way it sounds to a Russian person with an IT background, it'd be something like *“If your larger system lags, it's not because it uses ClickHouse”*.
- Shorter, but less precise, versions could be *“ClickHouse is not slow”*, *“ClickHouse does not lag”*, or just *“ClickHouse is fast”*.
If you haven't seen one of those t-shirts in person, you can check them out online in many ClickHouse-related videos. For example, this one:
![iframe](https://www.youtube.com/embed/bSyQahMVZ7w)
P.S. These t-shirts are not for sale. They are given away for free at most [ClickHouse Meetups](https://clickhouse.com/#meet), usually for the best questions or other forms of active participation.

---
title: What is OLAP?
toc_hidden: true
toc_priority: 100
---
# What Is OLAP? {#what-is-olap}
[OLAP](https://en.wikipedia.org/wiki/Online_analytical_processing) stands for Online Analytical Processing. It is a broad term that can be looked at from two perspectives: technical and business. But at a very high level, you can just read these words backward:
Processing
: Some source data is processed…
Analytical
: …to produce some analytical reports and insights…
Online
: …in real-time.
## OLAP from the Business Perspective {#olap-from-the-business-perspective}
In recent years, business people have started to realize the value of data. Companies that make their decisions blindly more often than not fail to keep up with the competition. The data-driven approach of successful companies forces them to collect all data that might be remotely useful for making business decisions, and they need mechanisms to analyze that data in a timely manner. Here's where OLAP database management systems (DBMS) come in.
In a business sense, OLAP allows companies to continuously plan, analyze, and report on operational activities, thus maximizing efficiency, reducing expenses, and ultimately gaining market share. This could be done either with an in-house system or outsourced to SaaS providers like web/mobile analytics services, CRM services, etc. OLAP is the technology behind many Business Intelligence (BI) applications.
ClickHouse is an OLAP database management system that is quite often used as a backend for those SaaS solutions for analyzing domain-specific data. However, some businesses are still reluctant to share their data with third-party providers, so an in-house data warehouse is also a viable scenario.
## OLAP from the Technical Perspective {#olap-from-the-technical-perspective}
All database management systems can be classified into two groups: OLAP (Online **Analytical** Processing) and OLTP (Online **Transactional** Processing). The former focuses on building reports, each based on large volumes of historical data, but doing it not so frequently, while the latter usually handles a continuous stream of transactions, constantly modifying the current state of data.
In practice, OLAP and OLTP are not categories but more of a spectrum. Most real systems focus on one of them but provide some solutions or workarounds for the opposite kind of workload. This situation often forces businesses to operate multiple integrated storage systems, which might not be such a big deal by itself, but more systems are more expensive to maintain. So the trend of recent years is HTAP (**Hybrid Transactional/Analytical Processing**), in which both kinds of workload are handled equally well by a single database management system.
Even if a DBMS started out as pure OLAP or pure OLTP, it is forced to move in that HTAP direction to keep up with the competition. ClickHouse is no exception: it was initially designed to be a [fast-as-possible OLAP system](../../faq/general/why-clickhouse-is-so-fast.md), and it still does not have full-fledged transaction support, but some features like consistent reads/writes and mutations for updating/deleting data had to be added.
The fundamental trade-off between OLAP and OLTP systems remains:
- To build analytical reports efficiently it's crucial to be able to read columns separately, which is why most OLAP databases are [columnar](../../faq/general/columnar-database.md).
- Storing columns separately increases the cost of operations on rows, like appends or in-place modifications, proportionally to the number of columns (which can be huge if the system tries to collect all details of an event just in case). This is why most OLTP systems store data arranged by rows.
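This trade-off can be made concrete with a toy sketch in plain Python (hypothetical data; real systems operate on compressed on-disk blocks, not Python lists):

```python
# Toy illustration of the columnar vs. row-oriented trade-off.
# Row-oriented layout: a list of complete rows (tuples).
row_store = [(i, f"user{i}", i % 5) for i in range(1000)]

# Columnar layout: one array per column.
col_store = {
    "id":     [r[0] for r in row_store],
    "name":   [r[1] for r in row_store],
    "region": [r[2] for r in row_store],
}

# Analytical query: aggregate a single column.
# Columnar: scan exactly one contiguous array, ignore the rest.
avg_region_col = sum(col_store["region"]) / len(col_store["region"])

# Row-oriented: every full row must be visited just to reach one field.
avg_region_row = sum(r[2] for r in row_store) / len(row_store)
assert avg_region_col == avg_region_row  # same answer, different cost profile

# Append a row: one operation in a row store ...
row_store.append((1000, "user1000", 0))
# ... but one operation *per column* in a column store.
for col, value in (("id", 1000), ("name", "user1000"), ("region", 0)):
    col_store[col].append(value)
```

The analytical read touches one array in the columnar layout, while the row append touches every column array, which is the proportional-to-column-count cost described above.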

---
title: Who is using ClickHouse?
toc_hidden: true
toc_priority: 9
---
# Who Is Using ClickHouse? {#who-is-using-clickhouse}
Being an open-source product makes this question not so straightforward to answer. You do not have to tell anyone if you want to start using ClickHouse; you just grab the source code or pre-compiled packages. There's no contract to sign, and the [Apache 2.0 license](https://github.com/ClickHouse/ClickHouse/blob/master/LICENSE) allows for unconstrained software distribution.
Also, the technology stack is often in a grey zone of what's covered by an NDA. Some companies consider the technologies they use a competitive advantage, even if they are open-source, and do not allow employees to share any details publicly. Others see PR risks and allow employees to share implementation details only with approval from their PR department.
So how can you tell who is using ClickHouse?
One way is to **ask around**. If it's not in writing, people are much more willing to share what technologies are used in their companies, what the use cases are, what kind of hardware is used, data volumes, etc. We talk with users regularly at [ClickHouse Meetups](https://www.youtube.com/channel/UChtmrD-dsdpspr42P_PyRAw/playlists) all over the world and have heard stories about 1000+ companies that use ClickHouse. Unfortunately, that's not reproducible, and we try to treat such stories as if they were told under NDA to avoid any potential troubles. But you can come to any of our future meetups and talk with other users yourself. There are multiple ways meetups are announced; for example, you can subscribe to [our Twitter](http://twitter.com/ClickHouseDB/).
The second way is to look for companies **publicly saying** that they use ClickHouse. It's more substantial because there's usually some hard evidence, like a blog post, a talk recording, or a slide deck. We collect links to such evidence on our **[Adopters](../../introduction/adopters.md)** page. Feel free to contribute the story of your employer or just some links you've stumbled upon (but try not to violate your NDA in the process).
You can find names of very large companies in the adopters list, like Bloomberg, Cisco, China Telecom, Tencent, or Uber, but with the first approach we found that there are many more. For example, if you take [the list of largest IT companies by Forbes (2020)](https://www.forbes.com/sites/hanktucker/2020/05/13/worlds-largest-technology-companies-2020-apple-stays-on-top-zoom-and-uber-debut/), over half of them use ClickHouse in some way. Also, it would be unfair not to mention [Yandex](../../introduction/history.md), the company that initially open-sourced ClickHouse in 2016 and happens to be one of the largest IT companies in Europe.

---
title: Why is ClickHouse so fast?
toc_hidden: true
toc_priority: 8
---
# Why Is ClickHouse So Fast? {#why-clickhouse-is-so-fast}
It was designed to be fast. Query execution performance has always been a top priority during the development process, but other important characteristics like user-friendliness, scalability, and security were also considered so ClickHouse could become a real production system.
ClickHouse was initially built as a prototype to do just a single task well: to filter and aggregate data as fast as possible. That's what needs to be done to build a typical analytical report, and that's what a typical [GROUP BY](../../sql-reference/statements/select/group-by.md) query does. The ClickHouse team made several high-level decisions that, combined, made achieving this task possible:
Column-oriented storage
: Source data often contains hundreds or even thousands of columns, while a report uses just a few of them. The system needs to avoid reading unnecessary columns; otherwise expensive disk read operations are wasted.
Indexes
: ClickHouse keeps data structures in memory that allow reading only the necessary row ranges of only the necessary columns.
Data compression
: Storing different values of the same column together often leads to better compression ratios (compared to row-oriented systems) because in real data a column often has the same value, or only a few distinct values, for neighboring rows. In addition to general-purpose compression, ClickHouse supports [specialized codecs](../../sql-reference/statements/create/table.md#create-query-specialized-codecs) that can make data even more compact.
Vectorized query execution
: ClickHouse not only stores data in columns but also processes data in columns. This leads to better CPU cache utilization and allows using [SIMD](https://en.wikipedia.org/wiki/SIMD) CPU instructions.
Scalability
: ClickHouse can leverage all available CPU cores and disks to execute even a single query, and not only on a single server: it can use all the CPU cores and disks of a cluster as well.
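The data-compression point above is easy to demonstrate with a small sketch using Python's standard `zlib` module (the data is made up, and real columnar formats use far more sophisticated encodings):

```python
import zlib

# Hypothetical event table: an id column plus a low-cardinality column.
ids = [str(i) for i in range(1000)]
browsers = ["Chrome"] * 600 + ["Firefox"] * 300 + ["Safari"] * 100

# Row-oriented serialization interleaves the two columns ...
row_oriented = "".join(i + b for i, b in zip(ids, browsers)).encode()
# ... while column-oriented serialization keeps equal neighboring
# values of the browser column next to each other.
column_oriented = ("".join(ids) + "".join(browsers)).encode()

row_size = len(zlib.compress(row_oriented))
col_size = len(zlib.compress(column_oriented))
assert col_size < row_size  # identical content, better ratio columnwise
```

Both byte strings contain exactly the same characters; only their arrangement differs, which is why the columnar arrangement compresses better.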
But many other database management systems use similar techniques. What really makes ClickHouse stand out is **attention to low-level details**. Most programming languages provide implementations of the most common algorithms and data structures, but they tend to be too generic to be effective. Every task can be considered a landscape with its own characteristics, so instead of just throwing in a random implementation, ask the right questions. For example, if you need a hash table, here are some key questions to consider:
- Which hash function to choose?
- Collision resolution algorithm: [open addressing](https://en.wikipedia.org/wiki/Open_addressing) vs [chaining](https://en.wikipedia.org/wiki/Hash_table#Separate_chaining)?
- Memory layout: one array for keys and values or separate arrays? Will it store small or large values?
- Fill factor: when and how to resize? How to move values around on resize?
- Will values be removed and which algorithm will work better if they will?
- Will we need fast probing with bitmaps, inline placement of string keys, support for non-movable values, prefetch, and batching?
A hash table is the key data structure behind the `GROUP BY` implementation, and ClickHouse automatically chooses one of [30+ variations](https://github.com/ClickHouse/ClickHouse/blob/master/src/Interpreters/Aggregator.h) for each specific query.
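Stripped of all those specializations, hash-table aggregation boils down to a single pass over the data. A minimal plain-Python sketch (toy example, nothing like ClickHouse's actual C++ variants):

```python
def group_by_sum(rows, key_col, value_col):
    """Toy hash-table GROUP BY: one pass over the rows, no sorting."""
    groups = {}  # the hash table: grouping key -> running aggregate
    for row in rows:
        key = row[key_col]
        groups[key] = groups.get(key, 0) + row[value_col]
    return groups

rows = [
    {"region": "EU", "sales": 10},
    {"region": "US", "sales": 7},
    {"region": "EU", "sales": 5},
]
print(group_by_sum(rows, "region", "sales"))  # {'EU': 15, 'US': 7}
```

Every question in the list above (hash function, collision strategy, layout, resizing) changes how this one loop performs on a particular key distribution, which is why a single generic implementation is never optimal.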
The same goes for algorithms. For example, for sorting you might consider:
- What will be sorted: an array of numbers, tuples, strings, or structures?
- Is all data available completely in RAM?
- Do we need a stable sort?
- Do we need a full sort? Maybe a partial sort or n-th element will suffice?
- How to implement comparisons?
- Are we sorting data that has already been partially sorted?
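The "partial sort or n-th element" question above matters in practice: a top-N query does not need a full sort. A quick sketch with Python's standard `heapq` module (toy data, not ClickHouse code):

```python
import heapq
import random

random.seed(0)
values = [random.randrange(1_000_000) for _ in range(100_000)]

# Full sort: O(n log n) work on the entire array.
top10_full = sorted(values, reverse=True)[:10]

# Partial sort: a bounded heap does O(n log k) work for the top k,
# which is all that an `ORDER BY ... LIMIT 10` query actually needs.
top10_partial = heapq.nlargest(10, values)

assert top10_full == top10_partial  # same answer, far less work
```

Both approaches return the same ten values; the partial sort simply avoids ordering the 99,990 rows the query will never return.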
Algorithms that rely on the characteristics of the data they work with can often do better than their generic counterparts. If the characteristics are not really known in advance, the system can try various implementations and choose the one that works best at runtime. For example, see an [article on how LZ4 decompression is implemented in ClickHouse](https://habr.com/en/company/yandex/blog/457612/).
Last but not least, the ClickHouse team always monitors the Internet for people claiming that they came up with the best implementation, algorithm, or data structure to do something, and tries those claims out. They mostly turn out to be false, but from time to time you'll indeed find a gem.
!!! info "Tips for building your own high-performance software"
- Keep in mind low-level details when designing your system.
- Design based on hardware capabilities.
- Choose data structures and abstractions based on the needs of the task.
- Provide specializations for special cases.
- Try the new, “best” algorithms that you read about yesterday.
- Choose an algorithm at runtime based on statistics.
- Benchmark on real datasets.
- Test for performance regressions in CI.
- Measure and observe everything.

---
toc_folder_title: F.A.Q.
toc_hidden: true
toc_priority: 76
---
# ClickHouse F.A.Q. {#clickhouse-f-a-q}
This section of the documentation is a place to collect answers to ClickHouse-related questions that arise often.
Categories:
- **[General](../faq/general/index.md)**
- [What is ClickHouse?](../index.md#what-is-clickhouse)
- [Why is ClickHouse so fast?](../faq/general/why-clickhouse-is-so-fast.md)
- [Who is using ClickHouse?](../faq/general/who-is-using-clickhouse.md)
- [What does “ClickHouse” mean?](../faq/general/dbms-naming.md)
- [What does “Не тормозит” mean?](../faq/general/ne-tormozit.md)
- [What is OLAP?](../faq/general/olap.md)
- [What is a columnar database?](../faq/general/columnar-database.md)
- [Why not use something like MapReduce?](../faq/general/mapreduce.md)
- **[Use Cases](../faq/use-cases/index.md)**
- [Can I use ClickHouse as a time-series database?](../faq/use-cases/time-series.md)
- [Can I use ClickHouse as a key-value storage?](../faq/use-cases/key-value.md)
- **[Operations](../faq/operations/index.md)**
- [Which ClickHouse version to use in production?](../faq/operations/production.md)
- [Is it possible to delete old records from a ClickHouse table?](../faq/operations/delete-old-data.md)
- [Does ClickHouse support multi-region replication?](../faq/operations/multi-region-replication.md)
- **[Integration](../faq/integration/index.md)**
- [How do I export data from ClickHouse to a file?](../faq/integration/file-export.md)
- [What if I have a problem with encodings when connecting to Oracle via ODBC?](../faq/integration/oracle-odbc.md)
{## TODO
Question candidates:
- How to choose a primary key?
- How to add a column in ClickHouse?
- Too many parts
- How to filter ClickHouse table by an array column contents?
- How to insert all rows from one table to another of identical structure?
- How to kill a process (query) in ClickHouse?
- How to implement pivot (like in pandas)?
- How to remove the default ClickHouse user through users.d?
- Importing MySQL dump to ClickHouse
- Window function workarounds (row_number, lag/lead, running diff/sum/average)
##}
{## [Original article](https://clickhouse.com/docs/en/faq) ##}

---
title: How do I export data from ClickHouse to a file?
toc_hidden: true
toc_priority: 10
---
# How Do I Export Data from ClickHouse to a File? {#how-to-export-to-file}
## Using INTO OUTFILE Clause {#using-into-outfile-clause}
Add an [INTO OUTFILE](../../sql-reference/statements/select/into-outfile.md#into-outfile-clause) clause to your query.
For example:
``` sql
SELECT * FROM table INTO OUTFILE 'file'
```
By default, ClickHouse uses the [TabSeparated](../../interfaces/formats.md#tabseparated) format for output data. To select the [data format](../../interfaces/formats.md), use the [FORMAT clause](../../sql-reference/statements/select/format.md#format-clause).
For example:
``` sql
SELECT * FROM table INTO OUTFILE 'file' FORMAT CSV
```
## Using a File-Engine Table {#using-a-file-engine-table}
See [File](../../engines/table-engines/special/file.md) table engine.
## Using Command-Line Redirection {#using-command-line-redirection}
``` bash
$ clickhouse-client --query "SELECT * FROM table" --format FormatName > result.txt
```
See [clickhouse-client](../../interfaces/cli.md).

---
title: Questions about integrating ClickHouse and other systems
toc_hidden_folder: true
toc_priority: 4
toc_title: Integration
---
# Questions About Integrating ClickHouse and Other Systems {#question-about-integrating-clickhouse-and-other-systems}
Questions:
- [How do I export data from ClickHouse to a file?](../../faq/integration/file-export.md)
- [How to import JSON into ClickHouse?](../../faq/integration/json-import.md)
- [What if I have a problem with encodings when connecting to Oracle via ODBC?](../../faq/integration/oracle-odbc.md)
!!! info "Don't see what you were looking for?"
    Check out [other F.A.Q. categories](../../faq/index.md) or browse around the main documentation articles found in the left sidebar.
{## [Original article](https://clickhouse.com/docs/en/faq/integration/) ##}
