From dcb5d48f1b598d8135ba460ff36506c1399b21d3 Mon Sep 17 00:00:00 2001
From: Ivan Blinkov
Date: Tue, 18 Jun 2019 17:00:18 +0300
Subject: [PATCH 001/312] Create SECURITY.md
---
SECURITY.md | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
create mode 100644 SECURITY.md
diff --git a/SECURITY.md b/SECURITY.md
new file mode 100644
index 00000000000..034e8480320
--- /dev/null
+++ b/SECURITY.md
@@ -0,0 +1,21 @@
+# Security Policy
+
+## Supported Versions
+
+Use this section to tell people which versions of your project are
+currently supported with security updates.
+
+| Version | Supported          |
+| ------- | ------------------ |
+| 5.1.x   | :white_check_mark: |
+| 5.0.x   | :x:                |
+| 4.0.x   | :white_check_mark: |
+| < 4.0   | :x:                |
+
+## Reporting a Vulnerability
+
+Use this section to tell people how to report a vulnerability.
+
+Tell them where to go, how often they can expect to get an update on a
+reported vulnerability, what to expect if the vulnerability is accepted or
+declined, etc.
From 0d4f3d8cc725d6e8f5297c7cfc1df851146aa15b Mon Sep 17 00:00:00 2001
From: Ivan Blinkov
Date: Mon, 19 Aug 2019 14:09:21 +0300
Subject: [PATCH 002/312] [experimental] auto-mark documentation PRs with
labels
---
.github/label-pr.yml | 2 ++
.github/main.workflow | 9 +++++++++
2 files changed, 11 insertions(+)
create mode 100644 .github/label-pr.yml
create mode 100644 .github/main.workflow
diff --git a/.github/label-pr.yml b/.github/label-pr.yml
new file mode 100644
index 00000000000..4ae73a2e720
--- /dev/null
+++ b/.github/label-pr.yml
@@ -0,0 +1,2 @@
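+# Apply the labels below to any pull request whose changed files match this regular expression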
+- regExp: ".*\\.md$"
+ labels: ["documentation", "pr-documentation"]
diff --git a/.github/main.workflow b/.github/main.workflow
new file mode 100644
index 00000000000..a450195b955
--- /dev/null
+++ b/.github/main.workflow
@@ -0,0 +1,9 @@
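+# Legacy GitHub Actions HCL syntax: run the "Label PR" action on every pull_request event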
+workflow "Main workflow" {
+ resolves = ["Label PR"]
+ on = "pull_request"
+}
+
+action "Label PR" {
+ uses = "decathlon/pull-request-labeler-action@v1.0.0"
+ secrets = ["GITHUB_TOKEN"]
+}
From 453020593f1c8372100b70e47e9c4b26c9be09d5 Mon Sep 17 00:00:00 2001
From: Ivan Blinkov
Date: Mon, 19 Aug 2019 16:14:39 +0300
Subject: [PATCH 003/312] revert #6544
---
.github/label-pr.yml | 2 --
.github/main.workflow | 9 ---------
2 files changed, 11 deletions(-)
delete mode 100644 .github/label-pr.yml
delete mode 100644 .github/main.workflow
diff --git a/.github/label-pr.yml b/.github/label-pr.yml
deleted file mode 100644
index 4ae73a2e720..00000000000
--- a/.github/label-pr.yml
+++ /dev/null
@@ -1,2 +0,0 @@
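-# Apply the labels below to any pull request whose changed files match this regular expression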
-- regExp: ".*\\.md$"
- labels: ["documentation", "pr-documentation"]
diff --git a/.github/main.workflow b/.github/main.workflow
deleted file mode 100644
index a450195b955..00000000000
--- a/.github/main.workflow
+++ /dev/null
@@ -1,9 +0,0 @@
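-# Legacy GitHub Actions HCL syntax: run the "Label PR" action on every pull_request event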
-workflow "Main workflow" {
- resolves = ["Label PR"]
- on = "pull_request"
-}
-
-action "Label PR" {
- uses = "decathlon/pull-request-labeler-action@v1.0.0"
- secrets = ["GITHUB_TOKEN"]
-}
From a187fa736035f247d8ae30fe6a25d827951afa25 Mon Sep 17 00:00:00 2001
From: Ivan Blinkov
Date: Tue, 20 Aug 2019 15:50:42 +0300
Subject: [PATCH 004/312] Sync RPM packages instructions to other docs
languages
---
docs/en/getting_started/index.md | 4 ++--
docs/fa/getting_started/index.md | 27 +++++++++++++++++++++++++
docs/ru/getting_started/index.md | 22 ++++++++++++++++-----
docs/zh/getting_started/index.md | 34 ++++++++++++++++++++++++--------
4 files changed, 72 insertions(+), 15 deletions(-)
diff --git a/docs/en/getting_started/index.md b/docs/en/getting_started/index.md
index 8f6308cd0ab..8cdbae86e5e 100644
--- a/docs/en/getting_started/index.md
+++ b/docs/en/getting_started/index.md
@@ -37,9 +37,9 @@ You can also download and install packages manually from here:
diff --git a/docs/fa/getting_started/index.md b/docs/fa/getting_started/index.md
+همچنین می توانید بسته ها را به صورت دستی از اینجا بارگیری و نصب کنید: .
+
+### از Docker Image
+
+برای اجرای ClickHouse در Docker، دستورالعمل های [Docker Hub](https://hub.docker.com/r/yandex/clickhouse-server/) را دنبال کنید. این تصاویر در داخل از بسته های رسمی `deb` استفاده می کنند.
+
+
### نصب از طریق Source
برای Compile، دستورالعمل های فایل build.md را دنبال کنید:
diff --git a/docs/ru/getting_started/index.md b/docs/ru/getting_started/index.md
index 8091f297019..e3fb2ab0985 100644
--- a/docs/ru/getting_started/index.md
+++ b/docs/ru/getting_started/index.md
@@ -37,13 +37,25 @@ sudo apt-get install clickhouse-client clickhouse-server
### Из RPM пакетов
-Яндекс не использует ClickHouse на поддерживающих `rpm` дистрибутивах Linux, а также `rpm` пакеты менее тщательно тестируются. Таким образом, использовать их стоит на свой страх и риск, но, тем не менее, многие другие компании успешно работают на них в production без каких-либо серьезных проблем.
+Команда ClickHouse в Яндексе рекомендует использовать официальные предкомпилированные `rpm` пакеты для CentOS, RedHat и всех остальных дистрибутивов Linux, основанных на rpm.
-Для CentOS, RHEL и Fedora возможны следующие варианты:
+Сначала нужно подключить официальный репозиторий:
+```bash
+sudo yum install yum-utils
+sudo rpm --import https://repo.yandex.ru/clickhouse/CLICKHOUSE-KEY.GPG
+sudo yum-config-manager --add-repo https://repo.yandex.ru/clickhouse/rpm/stable/x86_64
+```
-* Пакеты из генерируются на основе официальных `deb` пакетов от Яндекса и содержат в точности тот же исполняемый файл.
-* Пакеты из собираются независимой компанией Altinity, но широко используются без каких-либо нареканий.
-* Либо можно использовать Docker (см. ниже).
+Для использования наиболее свежих версий нужно заменить `stable` на `testing` (рекомендуется для тестовых окружений).
+
+Для, собственно, установки пакетов необходимо выполнить следующие команды:
+
+```bash
+sudo yum install clickhouse-server clickhouse-client
+```
+
+Также есть возможность установить пакеты вручную, скачав отсюда: .
### Из Docker образа
diff --git a/docs/zh/getting_started/index.md b/docs/zh/getting_started/index.md
index 20d3c8ff9b1..f51323ce7e8 100644
--- a/docs/zh/getting_started/index.md
+++ b/docs/zh/getting_started/index.md
@@ -43,6 +43,32 @@ ClickHouse包含访问控制配置,它们位于`users.xml`文件中(与'config
默认情况下,允许从任何地方使用默认的‘default’用户无密码的访问ClickHouse。参考‘user/default/networks’。
有关更多信息,请参考"Configuration files"部分。
+### 来自RPM包
+
+Yandex ClickHouse团队建议使用官方预编译的`rpm`软件包,用于CentOS,RedHat和所有其他基于rpm的Linux发行版。
+
+首先,您需要添加官方存储库:
+
+```bash
+sudo yum install yum-utils
+sudo rpm --import https://repo.yandex.ru/clickhouse/CLICKHOUSE-KEY.GPG
+sudo yum-config-manager --add-repo https://repo.yandex.ru/clickhouse/rpm/stable/x86_64
+```
+
+如果您想使用最新版本,请将`stable`替换为`testing`(建议您在测试环境中使用)。
+
+然后运行这些命令以实际安装包:
+
+```bash
+sudo yum install clickhouse-server clickhouse-client
+```
+
+您也可以从此处手动下载和安装软件包:。
+
+### 来自Docker
+
+要在Docker中运行ClickHouse,请遵循[Docker Hub](https://hub.docker.com/r/yandex/clickhouse-server/)上的指南。这些镜像内部使用官方的`deb`包。
+
### 使用源码安装
具体编译方式可以参考build.md。
@@ -67,14 +93,6 @@ Server: dbms/programs/clickhouse-server
日志的路径可以在server config (src/dbms/programs/server/config.xml)中配置。
-### 其他的安装方法
-
-Docker image:
-
-CentOS或RHEL安装包:
-
-Gentoo:`emerge clickhouse`
-
## 启动
可以运行如下命令在后台启动服务:
From 551de6ec31aaa3f0b54ea679616147b6c90ecd08 Mon Sep 17 00:00:00 2001
From: Ivan Blinkov
Date: Fri, 23 Aug 2019 15:41:23 +0300
Subject: [PATCH 005/312] Move tutorial to documentation with old content (for
now)
---
docs/en/getting_started/tutorial.md | 341 +++++++++++++++
docs/toc_en.yml | 1 +
website/nginx/default.conf | 3 +-
website/tutorial.html | 649 ----------------------------
4 files changed, 344 insertions(+), 650 deletions(-)
create mode 100644 docs/en/getting_started/tutorial.md
delete mode 100644 website/tutorial.html
diff --git a/docs/en/getting_started/tutorial.md b/docs/en/getting_started/tutorial.md
new file mode 100644
index 00000000000..48faf5bd327
--- /dev/null
+++ b/docs/en/getting_started/tutorial.md
@@ -0,0 +1,341 @@
+# ClickHouse Tutorial
+
+## Setup
+
+Let's get started with a sample dataset from open sources. We will use USA civil flights data from 1987 to 2015. It's hard to call this dataset big data (it contains 166 million rows, 63 GB of uncompressed data), but it allows us to get to work quickly. The dataset is available for download [here](https://yadi.sk/d/pOZxpa42sDdgm). You can also download it from the original data source [as described here](example_datasets/ontime.md).
+
+First, we will deploy ClickHouse on a single server. Later we will also review the process of deploying to a cluster with support for sharding and replication.
+
+ClickHouse is usually installed from [deb](index.md#from-deb-packages) or [rpm](index.md#from-rpm-packages) packages, but there are [alternatives](index.md#from-docker-image) for the operating systems that do not support them. Here is what those packages contain:
+
+* The `clickhouse-client` package contains the [clickhouse-client](../interfaces/cli.md) application, an interactive ClickHouse console client.
+* The `clickhouse-common` package contains the ClickHouse executable file.
+* The `clickhouse-server` package contains configuration files to run ClickHouse as a server.
+
+Server config files are located in `/etc/clickhouse-server/`. Before getting to work, please notice the `path` element in the config: it determines the location for data storage. Directly editing the `config.xml` file is not very handy given package updates, so the recommended way to override config elements is to create [files in the config.d directory](../operations/configuration_files.md). You may also want to [set up access rights](../operations/access_rights.md) early on.
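+
+For example, a minimal override sketch could look like this (the file name and the storage path are just illustrations, not defaults shipped with the packages):
+
+``` xml
+<!-- /etc/clickhouse-server/config.d/path.xml (hypothetical override file) -->
+<yandex>
+    <path>/data/clickhouse/</path>
+</yandex>
+```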
+
+`clickhouse-server` won't be launched automatically after package installation. It won't be automatically restarted after updates either. Start the server with:
+``` bash
+sudo service clickhouse-server start
+```
+
+The default location for server logs is `/var/log/clickhouse-server/`.
+
+The server is ready to handle client connections once the `Ready for connections` message has been logged.
+
+Use `clickhouse-client` to connect to the server.
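+
+For a quick sanity check you could run something like this (the remote host is just an illustration):
+
+``` bash
+clickhouse-client                      # interactive mode on localhost
+clickhouse-client --query='SELECT 1'   # batch mode: run one query and exit
+clickhouse-client --host=example.com   # connect to a remote server instead
+```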
+
+
diff --git a/website/tutorial.html b/website/tutorial.html
deleted file mode 100644
-
-
-
-
-
ClickHouse
-
Tutorial
-
-
-
Let's get started with sample dataset from open sources. We will use USA civil flights data since 1987 till 2015.
- It's hard to call this sample a Big Data (contains 166 millions rows, 63 Gb of uncompressed data) but this
- allows us to quickly get to work. Dataset is available for download here.
- Also you may download it from the original datasource as described here.
-
-
Firstly we will deploy ClickHouse to a single server. Below that we will also review the process of deployment to
- a cluster with support for sharding and replication.
-
-
On Ubuntu and Debian Linux ClickHouse can be installed from packages.
- For other Linux distributions you can compile
- ClickHouse from sources and then install.
-
-
clickhouse-client package contains clickhouse-client application —
- interactive ClickHouse client. clickhouse-common contains a clickhouse-server binary file. clickhouse-server
- — contains config files for the clickhouse-server.
-
-
Server config files are located in /etc/clickhouse-server/. Before getting to work please notice the path
- element in config. Path determines the location for data storage. It's not really handy to directly
- edit config.xml file considering package updates. Recommended way is to override the config elements in
- files of config.d directory.
- Also you may want to set up access
- rights at the start.
-
-
clickhouse-server won't be launched automatically after package installation. It won't be automatically
- restarted after updates either. Start the server with:
-
sudo service clickhouse-server start
- Default location for server logs is /var/log/clickhouse-server/
- Server is ready to handle client connections once "Ready for connections" message was logged.
-
-
Use clickhouse-client to connect to the server.
-
-
Tips for clickhouse-client
-
- Interactive mode:
-
-clickhouse-client
-clickhouse-client --host=... --port=... --user=... --password=...
-
- Enable multiline queries:
-
-clickhouse-client -m
-clickhouse-client --multiline
-
- Run queries in batch-mode:
-
-clickhouse-client --query='SELECT 1'
-echo 'SELECT 1' | clickhouse-client
-
- Insert data from file of a specified format:
-
-clickhouse-client --query='INSERT INTO table VALUES' < data.txt
-clickhouse-client --query='INSERT INTO table FORMAT TabSeparated' < data.tsv
-
-
-
-
-
Create table for sample dataset
-
Create table query
-
-
-$ clickhouse-client --multiline
-ClickHouse client version 0.0.53720.
-Connecting to localhost:9000.
-Connected to ClickHouse server version 0.0.53720.
-
-:) CREATE TABLE ontime
-(
- Year UInt16,
- Quarter UInt8,
- Month UInt8,
- DayofMonth UInt8,
- DayOfWeek UInt8,
- FlightDate Date,
- UniqueCarrier FixedString(7),
- AirlineID Int32,
- Carrier FixedString(2),
- TailNum String,
- FlightNum String,
- OriginAirportID Int32,
- OriginAirportSeqID Int32,
- OriginCityMarketID Int32,
- Origin FixedString(5),
- OriginCityName String,
- OriginState FixedString(2),
- OriginStateFips String,
- OriginStateName String,
- OriginWac Int32,
- DestAirportID Int32,
- DestAirportSeqID Int32,
- DestCityMarketID Int32,
- Dest FixedString(5),
- DestCityName String,
- DestState FixedString(2),
- DestStateFips String,
- DestStateName String,
- DestWac Int32,
- CRSDepTime Int32,
- DepTime Int32,
- DepDelay Int32,
- DepDelayMinutes Int32,
- DepDel15 Int32,
- DepartureDelayGroups String,
- DepTimeBlk String,
- TaxiOut Int32,
- WheelsOff Int32,
- WheelsOn Int32,
- TaxiIn Int32,
- CRSArrTime Int32,
- ArrTime Int32,
- ArrDelay Int32,
- ArrDelayMinutes Int32,
- ArrDel15 Int32,
- ArrivalDelayGroups Int32,
- ArrTimeBlk String,
- Cancelled UInt8,
- CancellationCode FixedString(1),
- Diverted UInt8,
- CRSElapsedTime Int32,
- ActualElapsedTime Int32,
- AirTime Int32,
- Flights Int32,
- Distance Int32,
- DistanceGroup UInt8,
- CarrierDelay Int32,
- WeatherDelay Int32,
- NASDelay Int32,
- SecurityDelay Int32,
- LateAircraftDelay Int32,
- FirstDepTime String,
- TotalAddGTime String,
- LongestAddGTime String,
- DivAirportLandings String,
- DivReachedDest String,
- DivActualElapsedTime String,
- DivArrDelay String,
- DivDistance String,
- Div1Airport String,
- Div1AirportID Int32,
- Div1AirportSeqID Int32,
- Div1WheelsOn String,
- Div1TotalGTime String,
- Div1LongestGTime String,
- Div1WheelsOff String,
- Div1TailNum String,
- Div2Airport String,
- Div2AirportID Int32,
- Div2AirportSeqID Int32,
- Div2WheelsOn String,
- Div2TotalGTime String,
- Div2LongestGTime String,
- Div2WheelsOff String,
- Div2TailNum String,
- Div3Airport String,
- Div3AirportID Int32,
- Div3AirportSeqID Int32,
- Div3WheelsOn String,
- Div3TotalGTime String,
- Div3LongestGTime String,
- Div3WheelsOff String,
- Div3TailNum String,
- Div4Airport String,
- Div4AirportID Int32,
- Div4AirportSeqID Int32,
- Div4WheelsOn String,
- Div4TotalGTime String,
- Div4LongestGTime String,
- Div4WheelsOff String,
- Div4TailNum String,
- Div5Airport String,
- Div5AirportID Int32,
- Div5AirportSeqID Int32,
- Div5WheelsOn String,
- Div5TotalGTime String,
- Div5LongestGTime String,
- Div5WheelsOff String,
- Div5TailNum String
-)
-ENGINE = MergeTree(FlightDate, (Year, FlightDate), 8192);
-
-
-
-
-
Now we have a table of MergeTree type.
- MergeTree table type is recommended for usage in production. Table of this kind has a primary key used for
- incremental sort of table data. This allows fast execution of queries in ranges of a primary key.
-
-
-
Note
- We store ad network banners impressions logs in ClickHouse. Each table entry looks like:
- [Advertiser ID, Impression ID, attribute1, attribute2, …].
- Let assume that our aim is to provide a set of reports for each advertiser. Common and frequently demanded query
- would be to count impressions for a specific Advertiser ID. This means that table primary key should start with
- Advertiser ID. In this case ClickHouse needs to read smaller amount of data to perform the query for a
- given Advertiser ID.
-
-
-
Load data
-
xz -v -c -d < ontime.csv.xz | clickhouse-client --query="INSERT INTO ontime FORMAT CSV"
-
ClickHouse INSERT query allows to load data in any supported
- format. Data load requires just O(1) RAM consumption. INSERT query can receive any data volume as input.
- It's strongly recommended to insert data with not too small
- size blocks. Notice that insert of blocks with size up to max_insert_block_size (= 1 048 576
- rows by default) is an atomic operation: data block will be inserted completely or not inserted at all. In case
- of disconnect during insert operation you may not know if the block was inserted successfully. To achieve
- exactly-once semantics ClickHouse supports idempotency for replicated tables. This means
- that you may retry insert of the same data block (possibly on a different replicas) but this block will be
- inserted just once. Anyway in this guide we will load data from our localhost so we may not take care about data
- blocks generation and exactly-once semantics.
-
-
INSERT query into tables of MergeTree type is non-blocking (so does a SELECT query). You can execute SELECT
- queries right after of during insert operation.
-
-
Our sample dataset is a bit not optimal. There are two reasons.
-
-
The first is that String data type is used in cases when Enum or numeric type would fit best.
-
-
⚖ When set of possible values is determined and known to be small. (E.g. OS name, browser
- vendors etc.) it's recommended to use Enums or numbers to improve performance.
- When set of possible values is not limited (search query, URL, etc.) just go ahead with String.
-
-
The second is that dataset contains redundant fields like Year, Quarter, Month, DayOfMonth, DayOfWeek. In fact a
- single FlightDate would be enough. Most likely they have been added to improve performance for other DBMS'es
- which DateTime handling functions may be not efficient.
-
-
✯ ClickHouse functions
- for operating with DateTime fields are well-optimized so such redundancy is not required. Anyway much
- columns is not a reason to worry — ClickHouse is a column-oriented
- DBMS. This allows you to have as much fields as you need. Hundreds of columns in a table is fine for
- ClickHouse.
-
-
Querying the sample dataset
-
-
Here are some examples of the queries from our test data.
-
-
- -
-
the most popular destinations in 2015;
-
-
-SELECT
- OriginCityName,
- DestCityName,
- count(*) AS flights,
- bar(flights, 0, 20000, 40)
-FROM ontime WHERE Year = 2015 GROUP BY OriginCityName, DestCityName ORDER BY flights DESC LIMIT 20
-
-
-
-SELECT
- OriginCityName < DestCityName ? OriginCityName : DestCityName AS a,
- OriginCityName < DestCityName ? DestCityName : OriginCityName AS b,
- count(*) AS flights,
- bar(flights, 0, 40000, 40)
-FROM ontime WHERE Year = 2015 GROUP BY a, b ORDER BY flights DESC LIMIT 20
-
-
-
-
- -
-
-
- -
-
-
- -
-
-
- -
-
-
- -
-
flights of maximum duration;
-
-
-SELECT OriginCityName, DestCityName, count(*) AS flights, avg(AirTime) AS duration
-FROM ontime
-GROUP BY OriginCityName, DestCityName
-ORDER BY duration DESC
-LIMIT 20
-
-
-
-
- -
-
-
- -
-
-
- -
-
most trending destination cities in 2015;
-
-
-SELECT
- DestCityName,
- sum(Year = 2014) AS c2014,
- sum(Year = 2015) AS c2015,
- c2015 / c2014 AS diff
-FROM ontime
-WHERE Year IN (2014, 2015)
-GROUP BY DestCityName
-HAVING c2014 > 10000 AND c2015 > 1000 AND diff > 1
-ORDER BY diff DESC
-
-
-
-
- -
-
destination cities with maximum popularity-season
- dependency.
-
-
-SELECT
- DestCityName,
- any(total),
- avg(abs(monthly * 12 - total) / total) AS avg_month_diff
-FROM
-(
- SELECT DestCityName, count() AS total
- FROM ontime GROUP BY DestCityName HAVING total > 100000
-)
-ALL INNER JOIN
-(
- SELECT DestCityName, Month, count() AS monthly
- FROM ontime GROUP BY DestCityName, Month HAVING monthly > 10000
-)
-USING DestCityName
-GROUP BY DestCityName
-ORDER BY avg_month_diff DESC
-LIMIT 20
-
-
-
-
-
-
-
ClickHouse deployment to cluster
-
ClickHouse cluster is a homogenous cluster. Steps to set up:
-
- - Install ClickHouse server on all machines of the cluster
- - Set up cluster configs in configuration file
- - Create local tables on each instance
- - Create a Distributed table
-
-
-
-
Distributed-table is actually a kind of
- "view" to local tables of ClickHouse cluster. SELECT query from a distributed table will be executed using
- resources of all cluster's shards. You may specify configs for multiple clusters and create multiple
- Distributed-tables providing views to different clusters.
-
-
Config for cluster of three shards. Each shard stores data on a single
- replica
-
-
-<remote_servers>
- <perftest_3shards_1replicas>
- <shard>
- <replica>
- <host>example-perftest01j.yandex.ru</host>
- <port>9000</port>
- </replica>
- </shard>
- <shard>
- <replica>
- <host>example-perftest02j.yandex.ru</host>
- <port>9000</port>
- </replica>
- </shard>
- <shard>
- <replica>
- <host>example-perftest03j.yandex.ru</host>
- <port>9000</port>
- </replica>
- </shard>
- </perftest_3shards_1replicas>
-</remote_servers>
-
-
-
- Creating a local table:
-
CREATE TABLE ontime_local (...) ENGINE = MergeTree(FlightDate, (Year, FlightDate), 8192);
- Creating a distributed table providing a view into local tables of the cluster:
-
CREATE TABLE ontime_all AS ontime_local
- ENGINE = Distributed(perftest_3shards_1replicas, default, ontime_local, rand());
-
-
You can create a Distributed table on all machines in the cluster. This would allow to run distributed queries on
- any machine of the cluster. Besides distributed table you can also use *remote* table function.
-
-
Let's run INSERT SELECT into Distributed table
- to spread the table to multiple servers.
-
-
INSERT INTO ontime_all SELECT * FROM ontime;
-
-
⚠ Worth to notice that the approach given above wouldn't fit for sharding of large
- tables.
-
-
As you could expect heavy queries are executed N times faster being launched on 3 servers instead of one.
-
See here
-
-
-
-
You may have noticed that quantiles calculation are slightly different. This happens due to t-digest
- algorithm implementation which is non-deterministic — it depends on the order of data processing.
-
-
-
-
In this case we have used a cluster with 3 shards each contains a single replica.
-
-
To provide for resilience in production environment we recommend that each shard should contain 2-3 replicas
- distributed between multiple data-centers. Note that ClickHouse supports unlimited number of replicas.
-
-
Config for cluster of one shard containing three replicas
-
-
-<remote_servers>
- ...
- <perftest_1shards_3replicas>
- <shard>
- <replica>
- <host>example-perftest01j.yandex.ru</host>
- <port>9000</port>
- </replica>
- <replica>
- <host>example-perftest02j.yandex.ru</host>
- <port>9000</port>
- </replica>
- <replica>
- <host>example-perftest03j.yandex.ru</host>
- <port>9000</port>
- </replica>
- </shard>
- </perftest_1shards_3replicas>
-</remote_servers>
-
-
-
-
-
To enable replication ZooKeeper is required.
- ClickHouse will take care of data consistency on all replicas and run restore procedure after failure
- automatically. It's recommended to deploy ZooKeeper cluster to separate servers.
-
-
ZooKeeper is not a requirement — in some simple cases you can duplicate the data by writing it into all the
- replicas from your application code. This approach is not recommended — in this case ClickHouse is not able to
- guarantee data consistency on all replicas. This remains the responsibility of your application.
-
-
Set ZooKeeper locations in configuration file
-
-
-<zookeeper-servers>
- <node>
- <host>zoo01.yandex.ru</host>
- <port>2181</port>
- </node>
- <node>
- <host>zoo02.yandex.ru</host>
- <port>2181</port>
- </node>
- <node>
- <host>zoo03.yandex.ru</host>
- <port>2181</port>
- </node>
-</zookeeper-servers>
-
-
-
-
-
Also we need to set macros for identifying shard and replica — it will be used on table creation
-
-<macros>
- <shard>01</shard>
- <replica>01</replica>
-</macros>
-
-
If there are no replicas at the moment on replicated table creation — a new first replica will be instantiated.
- If there are already live replicas — new replica will clone the data from existing ones. You have an option to
- create all replicated tables first and that insert data to it. Another option is to create some replicas and add
- the others after or during data insertion.
-
-
-CREATE TABLE ontime_replica (...)
-ENGINE = ReplicatedMergeTree(
- '/clickhouse_perftest/tables/{shard}/ontime',
- '{replica}',
- FlightDate,
- (Year, FlightDate),
- 8192);
-
-
Here we use ReplicatedMergeTree
- table type. In parameters we specify ZooKeeper path containing shard and replica identifiers.
-
-
INSERT INTO ontime_replica SELECT * FROM ontime;
-
Replication operates in multi-master mode. Data can be loaded into any replica — it will be synced with other
- instances automatically. Replication is asynchronous so at a given moment of time not all replicas may contain
- recently inserted data. To allow data insertion at least one replica should be up. Others will sync up data and
- repair consistency once they will become active again. Please notice that such scheme allows for the possibility
- of just appended data loss.
-
-
- ClickHouse source code is published under Apache 2.0 License. Software is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- KIND, either express or implied.
-
-
-
-
-
-
-
-
-
-
From: Ivan Blinkov
Date: Fri, 23 Aug 2019 17:36:51 +0300
Subject: [PATCH 006/312] refactor installation guide a bit
---
.../getting_started/{index.md => install.md} | 6 +-
docs/fa/getting_started/index.md | 198 +-----------------
docs/fa/getting_started/install.md | 197 +++++++++++++++++
docs/ru/getting_started/index.md | 140 +------------
docs/ru/getting_started/install.md | 142 +++++++++++++
docs/toc_en.yml | 3 +-
docs/toc_fa.yml | 12 +-
docs/toc_ru.yml | 4 +-
docs/toc_zh.yml | 4 +-
docs/zh/getting_started/index.md | 160 +-------------
docs/zh/getting_started/install.md | 156 ++++++++++++++
11 files changed, 529 insertions(+), 493 deletions(-)
rename docs/en/getting_started/{index.md => install.md} (98%)
create mode 100644 docs/fa/getting_started/install.md
create mode 100644 docs/ru/getting_started/install.md
create mode 100644 docs/zh/getting_started/install.md
diff --git a/docs/en/getting_started/index.md b/docs/en/getting_started/install.md
similarity index 98%
rename from docs/en/getting_started/index.md
rename to docs/en/getting_started/install.md
index 8cdbae86e5e..779abba905b 100644
--- a/docs/en/getting_started/index.md
+++ b/docs/en/getting_started/install.md
@@ -1,4 +1,4 @@
-# Getting Started
+# Installation
## System Requirements
@@ -10,7 +10,7 @@ Though pre-built binaries are typically compiled to leverage SSE 4.2 instruction
$ grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not supported"
```
-## Installation
+## Available Installation Options
### From DEB Packages
@@ -148,4 +148,4 @@ SELECT 1
To continue experimenting, you can download one of test data sets or go through [tutorial](https://clickhouse.yandex/tutorial.html).
-[Original article](https://clickhouse.yandex/docs/en/getting_started/)
+[Original article](https://clickhouse.yandex/docs/en/getting_started/install/)
diff --git a/docs/fa/getting_started/index.md b/docs/fa/getting_started/index.md
index 778393aed91..57496c474e2 100644
--- a/docs/fa/getting_started/index.md
+++ b/docs/fa/getting_started/index.md
@@ -1,197 +1,11 @@
+# شروع شدن
-# شروع به کار
+اگر تازه وارد ClickHouse هستید و می خواهید عملکرد آن را از نزدیک احساس کنید، اول از همه باید [مراحل نصب](install.md) را طی کنید.
+پس از آن می توانید یکی از گزینه های زیر را انتخاب کنید:
-## نیازمندی های سیستم
-
-این یک سیستم چند سکویی (Cross-Platform) نمی باشد. این ابزار نیاز به Linux Ubuntu Precise (12.04) یا جدیدتر، با معماری x86\_64 و پشتیبانی از SSE 4.2 می باشد. برای چک کردن SSE 4.2 خروجی دستور زیر را بررسی کنید:
+* [آموزش مفصل را طی کنید](tutorial.md)
+* [با داده های نمونه آزمایش کنید](example_datasets/ontime.md)
+[مقاله اصلی](https://clickhouse.yandex/docs/fa/getting_started/)
-
-```bash
-grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not supported"
-```
-
-
-
-پیشنهاد می کنیم از Ubuntu TrustyT، Ubuntu Xenial یا Ubuntu Precise استفاده کنید. ترمینال باید از UTF-8 پشتیبانی کند. (به صورت پیش فرض در Ubuntu پشتیبانی می شود).
-
-## نصب
-
-### نصب از طریق پکیج های Debian/Ubuntu
-
-در فایل `/etc/apt/sources.list` (یا در یک فایل جدا `/etc/apt/sources.list.d/clickhouse.list`)، Repo زیر را اضافه کنید:
-
-
-
-```
-deb http://repo.yandex.ru/clickhouse/deb/stable/ main/
-```
-
-
-
-اگر شما میخوایید جدیدترین نسخه ی تست را استفاده کنید، 'stable' رو به 'testing' تغییر بدید.
-
-سپس دستورات زیر را اجرا کنید:
-
-
-
-```bash
-sudo apt-get install dirmngr # optional
-sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv E0C56BD4 # optional
-sudo apt-get update
-sudo apt-get install clickhouse-client clickhouse-server
-```
-
-
-
-شما همچنین می توانید از طریق لینک زیر پکیج ClickHouse را به صورت دستی دانلود و نصب کنید: .
-
-ClickHouse دارای تنظیمات محدودیت دسترسی می باشد. این تنظیمات در فایل 'users.xml' (کنار 'config.xml') می باشد. به صورت پیش فرض دسترسی برای کاربر 'default' از همه جا بدون نیاز به پسورد وجود دارد. 'user/default/networks' را مشاهده کنید. برای اطلاعات بیشتر قسمت "تنظیمات فایل ها" را مشاهده کنید.
-
-### از بسته های RPM
-
-تیم ClickHouse در Yandex استفاده از بسته های `rpm` رسمی از پیش کامپایل شده را برای CentOS، RedHat و سایر توزیع های لینوکس مبتنی بر rpm توصیه می کند.
-
-ابتدا باید مخزن رسمی را اضافه کنید:
-
-```bash
-sudo yum install yum-utils
-sudo rpm --import https://repo.yandex.ru/clickhouse/CLICKHOUSE-KEY.GPG
-sudo yum-config-manager --add-repo https://repo.yandex.ru/clickhouse/rpm/stable/x86_64
-```
-
-اگر می خواهید از جدیدترین نسخه ها استفاده کنید، `stable` را با `testing` جایگزین کنید (این برای محیط های آزمایش شما توصیه می شود).
-
-سپس این دستورات را برای نصب بسته ها اجرا کنید:
-
-```bash
-sudo yum install clickhouse-server clickhouse-client
-```
-
-همچنین می توانید بسته ها را به صورت دستی از اینجا بارگیری و نصب کنید: .
-
-### از Docker Image
-
-برای اجرای ClickHouse در Docker، دستورالعمل های [Docker Hub](https://hub.docker.com/r/yandex/clickhouse-server/) را دنبال کنید. این تصاویر در داخل از بسته های رسمی `deb` استفاده می کنند.
-
-
-### نصب از طریق Source
-
-برای Compile، دستورالعمل های فایل build.md را دنبال کنید:
-
-شما میتوانید پکیج را compile و نصب کنید. شما همچنین می توانید بدون نصب پکیج از برنامه ها استفاده کنید.
-
-
-
-```
-Client: dbms/programs/clickhouse-client
-Server: dbms/programs/clickhouse-server
-```
-
-
-
-برای سرور، یک کاتالوگ با دیتا بسازید، مانند
-
-
-
-```
-/opt/clickhouse/data/default/
-/opt/clickhouse/metadata/default/
-```
-
-
-
-(قابل تنظیم در تنظیمات سرور). 'chown' را برای کاربر دلخواه اجرا کنید.
-
-به مسیر لاگ ها در تنظیمات سرور توجه کنید (src/dbms/programs/config.xml).
-
-### روش های دیگر نصب
-
-Docker image:
-
-پکیج RPM برای CentOS یا RHEL:
-
-Gentoo: `emerge clickhouse`
-
-## راه اندازی
-
-برای استارت سرور (به صورت daemon)، دستور زیر را اجرا کنید:
-
-
-
-```bash
-sudo service clickhouse-server start
-```
-
-
-
-لاگ های دایرکتوری `/var/log/clickhouse-server/` directory. را مشاهده کنید.
-
-اگر سرور استارت نشد، فایل تنظیمات را بررسی کنید `/etc/clickhouse-server/config.xml.`
-
-شما همچنین می توانید سرور را از طریق کنسول راه اندازی کنید:
-
-
-
-```bash
-clickhouse-server --config-file=/etc/clickhouse-server/config.xml
-```
-
-
-
-در این مورد که مناسب زمان توسعه می باشد، لاگ ها در کنسول پرینت می شوند. اگر فایل تنظیمات در دایرکتوری جاری باشد، نیازی به مشخص کردن '--config-file' نمی باشد. به صورت پیش فرض از './config.xml' استفاده می شود.
-
-شما می توانید از کلاینت command-line برای اتصال به سرور استفاده کنید:
-
-
-
-```bash
-clickhouse-client
-```
-
-
-
-پارامترهای پیش فرض، نشان از اتصال به localhost:9000 از طرف کاربر 'default' بدون پسورد را می دهد. از کلاینت میتوان برای اتصال به یک سرور remote استفاده کرد. مثال:
-
-
-
-```bash
-clickhouse-client --host=example.com
-```
-
-
-
-برای اطلاعات بیشتر، بخش "کلاینت Command-line" را مشاهده کنید.
-
-چک کردن سیستم:
-
-
-
-```bash
-milovidov@hostname:~/work/metrica/src/dbms/src/Client$ ./clickhouse-client
-ClickHouse client version 0.0.18749.
-Connecting to localhost:9000.
-Connected to ClickHouse server version 0.0.18749.
-
-:) SELECT 1
-
-SELECT 1
-
-┌─1─┐
-│ 1 │
-└───┘
-
-1 rows in set. Elapsed: 0.003 sec.
-
-:)
-```
-
-
-
-**تبریک میگم، سیستم کار می کنه!**
-
-برای ادامه آزمایشات، شما میتوانید دیتاست های تستی را دریافت و امتحان کنید.
-
-
-[مقاله اصلی](https://clickhouse.yandex/docs/fa/getting_started/)
diff --git a/docs/fa/getting_started/install.md b/docs/fa/getting_started/install.md
new file mode 100644
index 00000000000..2313d29578b
--- /dev/null
+++ b/docs/fa/getting_started/install.md
@@ -0,0 +1,197 @@
+
+
+# نصب و راه اندازی
+
+## نیازمندی های سیستم
+
+این یک سیستم چند سکویی (Cross-Platform) نمی باشد. این ابزار نیاز به Linux Ubuntu Precise (12.04) یا جدیدتر، با معماری x86\_64 و پشتیبانی از SSE 4.2 می باشد. برای چک کردن SSE 4.2 خروجی دستور زیر را بررسی کنید:
+
+
+
+```bash
+grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not supported"
+```
+
+
+
+پیشنهاد می کنیم از Ubuntu Trusty، Ubuntu Xenial یا Ubuntu Precise استفاده کنید. ترمینال باید از UTF-8 پشتیبانی کند. (به صورت پیش فرض در Ubuntu پشتیبانی می شود).
+
+## گزینه های نصب موجود
+
+### نصب از طریق پکیج های Debian/Ubuntu
+
+در فایل `/etc/apt/sources.list` (یا در یک فایل جدا `/etc/apt/sources.list.d/clickhouse.list`)، Repo زیر را اضافه کنید:
+
+
+
+```
+deb http://repo.yandex.ru/clickhouse/deb/stable/ main/
+```
+
+
+
+اگر شما میخوایید جدیدترین نسخه ی تست را استفاده کنید، 'stable' رو به 'testing' تغییر بدید.
+
+سپس دستورات زیر را اجرا کنید:
+
+
+
+```bash
+sudo apt-get install dirmngr # optional
+sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv E0C56BD4 # optional
+sudo apt-get update
+sudo apt-get install clickhouse-client clickhouse-server
+```
+
+
+
+شما همچنین می توانید از طریق لینک زیر پکیج ClickHouse را به صورت دستی دانلود و نصب کنید: .
+
+ClickHouse دارای تنظیمات محدودیت دسترسی می باشد. این تنظیمات در فایل 'users.xml' (کنار 'config.xml') می باشد. به صورت پیش فرض دسترسی برای کاربر 'default' از همه جا بدون نیاز به پسورد وجود دارد. 'user/default/networks' را مشاهده کنید. برای اطلاعات بیشتر قسمت "تنظیمات فایل ها" را مشاهده کنید.
+
+### از بسته های RPM
+
+تیم ClickHouse در Yandex استفاده از بسته های `rpm` رسمی از پیش کامپایل شده را برای CentOS، RedHat و سایر توزیع های لینوکس مبتنی بر rpm توصیه می کند.
+
+ابتدا باید مخزن رسمی را اضافه کنید:
+
+```bash
+sudo yum install yum-utils
+sudo rpm --import https://repo.yandex.ru/clickhouse/CLICKHOUSE-KEY.GPG
+sudo yum-config-manager --add-repo https://repo.yandex.ru/clickhouse/rpm/stable/x86_64
+```
+
+اگر می خواهید از جدیدترین نسخه ها استفاده کنید، `stable` را با `testing` جایگزین کنید (این برای محیط های آزمایش شما توصیه می شود).
+
+سپس این دستورات را برای نصب بسته ها اجرا کنید:
+
+```bash
+sudo yum install clickhouse-server clickhouse-client
+```
+
+همچنین می توانید بسته ها را به صورت دستی از اینجا بارگیری و نصب کنید: .
+
+### از Docker Image
+
+برای اجرای ClickHouse در Docker، دستورالعمل های [Docker Hub](https://hub.docker.com/r/yandex/clickhouse-server/) را دنبال کنید. این تصاویر در داخل از بسته های رسمی `deb` استفاده می کنند.
+
+
+### نصب از طریق Source
+
+برای Compile، دستورالعمل های فایل build.md را دنبال کنید:
+
+شما میتوانید پکیج را compile و نصب کنید. شما همچنین می توانید بدون نصب پکیج از برنامه ها استفاده کنید.
+
+
+
+```
+Client: dbms/programs/clickhouse-client
+Server: dbms/programs/clickhouse-server
+```
+
+
+
+برای سرور، یک کاتالوگ با دیتا بسازید، مانند
+
+
+
+```
+/opt/clickhouse/data/default/
+/opt/clickhouse/metadata/default/
+```
+
+
+
+(قابل تنظیم در تنظیمات سرور). 'chown' را برای کاربر دلخواه اجرا کنید.
+
+به مسیر لاگ ها در تنظیمات سرور توجه کنید (src/dbms/programs/config.xml).
+
+### روش های دیگر نصب
+
+Docker image:
+
+پکیج RPM برای CentOS یا RHEL:
+
+Gentoo: `emerge clickhouse`
+
+## راه اندازی
+
+برای استارت سرور (به صورت daemon)، دستور زیر را اجرا کنید:
+
+
+
+```bash
+sudo service clickhouse-server start
+```
+
+
+
+لاگ های دایرکتوری `/var/log/clickhouse-server/` directory. را مشاهده کنید.
+
+اگر سرور استارت نشد، فایل تنظیمات را بررسی کنید `/etc/clickhouse-server/config.xml.`
+
+شما همچنین می توانید سرور را از طریق کنسول راه اندازی کنید:
+
+
+
+```bash
+clickhouse-server --config-file=/etc/clickhouse-server/config.xml
+```
+
+
+
+در این مورد که مناسب زمان توسعه می باشد، لاگ ها در کنسول پرینت می شوند. اگر فایل تنظیمات در دایرکتوری جاری باشد، نیازی به مشخص کردن '--config-file' نمی باشد. به صورت پیش فرض از './config.xml' استفاده می شود.
+
+شما می توانید از کلاینت command-line برای اتصال به سرور استفاده کنید:
+
+
+
+```bash
+clickhouse-client
+```
+
+
+
+پارامترهای پیش فرض، نشان از اتصال به localhost:9000 از طرف کاربر 'default' بدون پسورد را می دهد. از کلاینت میتوان برای اتصال به یک سرور remote استفاده کرد. مثال:
+
+
+
+```bash
+clickhouse-client --host=example.com
+```
+
+
+
+برای اطلاعات بیشتر، بخش "کلاینت Command-line" را مشاهده کنید.
+
+چک کردن سیستم:
+
+
+
+```bash
+milovidov@hostname:~/work/metrica/src/dbms/src/Client$ ./clickhouse-client
+ClickHouse client version 0.0.18749.
+Connecting to localhost:9000.
+Connected to ClickHouse server version 0.0.18749.
+
+:) SELECT 1
+
+SELECT 1
+
+┌─1─┐
+│ 1 │
+└───┘
+
+1 rows in set. Elapsed: 0.003 sec.
+
+:)
+```
+
+
+
+**تبریک میگم، سیستم کار می کنه!**
+
+برای ادامه آزمایشات، شما میتوانید دیتاست های تستی را دریافت و امتحان کنید.
+
+
+[مقاله اصلی](https://clickhouse.yandex/docs/fa/getting_started/install/)
diff --git a/docs/ru/getting_started/index.md b/docs/ru/getting_started/index.md
index e3fb2ab0985..a8d0fbaa5b1 100644
--- a/docs/ru/getting_started/index.md
+++ b/docs/ru/getting_started/index.md
@@ -1,142 +1,10 @@
# Начало работы
-## Системные требования
+Если вы новичок в ClickHouse и хотите вживую оценить его производительность, прежде всего нужно пройти через [процесс установки](install.md).
-ClickHouse может работать на любом Linux, FreeBSD или Mac OS X с архитектурой процессора x86\_64.
+После этого можно выбрать один из следующих вариантов:
-Хотя предсобранные релизы обычно компилируются с использованием набора инструкций SSE 4.2, что добавляет использование поддерживающего его процессора в список системных требований. Команда для проверки наличия поддержки инструкций SSE 4.2 на текущем процессоре:
-
-```bash
-$ grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not supported"
-```
-
-## Установка
-
-### Из DEB пакетов
-
-Яндекс рекомендует использовать официальные скомпилированные `deb` пакеты для Debian или Ubuntu.
-
-Чтобы установить официальные пакеты, пропишите репозиторий Яндекса в `/etc/apt/sources.list` или в отдельный файл `/etc/apt/sources.list.d/clickhouse.list`:
-
-```
-deb http://repo.yandex.ru/clickhouse/deb/stable/ main/
-```
-
-Если вы хотите использовать наиболее свежую тестовую, замените `stable` на `testing` (не рекомендуется для production окружений).
-
-Затем для самой установки пакетов выполните:
-
-```bash
-sudo apt-get install dirmngr # optional
-sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv E0C56BD4 # optional
-sudo apt-get update
-sudo apt-get install clickhouse-client clickhouse-server
-```
-
-Также эти пакеты можно скачать и установить вручную отсюда: .
-
-### Из RPM пакетов
-
-Команда ClickHouse в Яндексе рекомендует использовать официальные предкомпилированные `rpm` пакеты для CentOS, RedHat и всех остальных дистрибутивов Linux, основанных на rpm.
-
-Сначала нужно подключить официальный репозиторий:
-```bash
-sudo yum install yum-utils
-sudo rpm --import https://repo.yandex.ru/clickhouse/CLICKHOUSE-KEY.GPG
-sudo yum-config-manager --add-repo https://repo.yandex.ru/clickhouse/rpm/stable/x86_64
-```
-
-Для использования наиболее свежих версий нужно заменить `stable` на `testing` (рекомендуется для тестовых окружений).
-
-Для, собственно, установки пакетов необходимо выполнить следующие команды:
-
-```bash
-sudo yum install clickhouse-server clickhouse-client
-```
-
-Также есть возможность установить пакеты вручную, скачав отсюда: .
-
-### Из Docker образа
-
-Для запуска ClickHouse в Docker нужно следовать инструкции на [Docker Hub](https://hub.docker.com/r/yandex/clickhouse-server/). Внутри образов используются официальные `deb` пакеты.
-
-### Из исходникого кода
-
-Для компиляции ClickHouse вручную, используйте инструкцию для [Linux](../development/build.md) или [Mac OS X](../development/build_osx.md).
-
-Можно скомпилировать пакеты и установить их, либо использовать программы без установки пакетов. Также при ручой сборке можно отключить необходимость поддержки набора инструкций SSE 4.2 или собрать под процессоры архитектуры AArch64.
-
-```
-Client: dbms/programs/clickhouse-client
-Server: dbms/programs/clickhouse-server
-```
-
-Для работы собранного вручную сервера необходимо создать директории для данных и метаданных, а также сделать их `chown` для желаемого пользователя. Пути к этим директориям могут быть изменены в конфигурационном файле сервера (src/dbms/programs/server/config.xml), по умолчанию используются следующие:
-
-```
-/opt/clickhouse/data/default/
-/opt/clickhouse/metadata/default/
-```
-
-На Gentoo для установки ClickHouse из исходного кода можно использовать просто `emerge clickhouse`.
-
-## Запуск
-
-Для запуска сервера в качестве демона, выполните:
-
-``` bash
-$ sudo service clickhouse-server start
-```
-
-Смотрите логи в директории `/var/log/clickhouse-server/`.
-
-Если сервер не стартует, проверьте корректность конфигурации в файле `/etc/clickhouse-server/config.xml`
-
-Также можно запустить сервер вручную из консоли:
-
-``` bash
-$ clickhouse-server --config-file=/etc/clickhouse-server/config.xml
-```
-
-При этом, лог будет выводиться в консоль, что удобно для разработки.
-Если конфигурационный файл лежит в текущей директории, то указывать параметр `--config-file` не требуется, по умолчанию будет использован файл `./config.xml`.
-
-После запуска сервера, соединиться с ним можно с помощью клиента командной строки:
-
-``` bash
-$ clickhouse-client
-```
-
-По умолчанию он соединяется с localhost:9000, от имени пользователя `default` без пароля. Также клиент может быть использован для соединения с удалённым сервером с помощью аргумента `--host`.
-
-Терминал должен использовать кодировку UTF-8.
-
-Более подробная информация о клиенте располагается в разделе [«Клиент командной строки»](../interfaces/cli.md).
-
-Пример проверки работоспособности системы:
-
-``` bash
-$ ./clickhouse-client
-ClickHouse client version 0.0.18749.
-Connecting to localhost:9000.
-Connected to ClickHouse server version 0.0.18749.
-
-:) SELECT 1
-
-SELECT 1
-
-┌─1─┐
-│ 1 │
-└───┘
-
-1 rows in set. Elapsed: 0.003 sec.
-
-:)
-```
-
-**Поздравляем, система работает!**
-
-Для дальнейших экспериментов можно попробовать загрузить один из тестовых наборов данных или пройти [пошаговое руководство для начинающих](https://clickhouse.yandex/tutorial.html).
+* [Пройти подробное руководство для начинающих](tutorial.md)
+* [Поэкспериментировать с тестовыми наборами данных](example_datasets/ontime.md)
[Оригинальная статья](https://clickhouse.yandex/docs/ru/getting_started/)
diff --git a/docs/ru/getting_started/install.md b/docs/ru/getting_started/install.md
new file mode 100644
index 00000000000..ae8075e0157
--- /dev/null
+++ b/docs/ru/getting_started/install.md
@@ -0,0 +1,142 @@
+# Установка
+
+## Системные требования
+
+ClickHouse может работать на любом Linux, FreeBSD или Mac OS X с архитектурой процессора x86\_64.
+
+Предсобранные релизы обычно компилируются с использованием набора инструкций SSE 4.2, поэтому поддерживающий его процессор добавляется в список системных требований. Команда для проверки наличия поддержки инструкций SSE 4.2 на текущем процессоре:
+
+```bash
+$ grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not supported"
+```
+
+## Доступные варианты установки
+
+### Из DEB пакетов
+
+Яндекс рекомендует использовать официальные скомпилированные `deb` пакеты для Debian или Ubuntu.
+
+Чтобы установить официальные пакеты, пропишите репозиторий Яндекса в `/etc/apt/sources.list` или в отдельный файл `/etc/apt/sources.list.d/clickhouse.list`:
+
+```
+deb http://repo.yandex.ru/clickhouse/deb/stable/ main/
+```
+
+Если вы хотите использовать наиболее свежую тестовую версию, замените `stable` на `testing` (не рекомендуется для production окружений).
+
+Затем для самой установки пакетов выполните:
+
+```bash
+sudo apt-get install dirmngr # optional
+sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv E0C56BD4 # optional
+sudo apt-get update
+sudo apt-get install clickhouse-client clickhouse-server
+```
+
+Также эти пакеты можно скачать и установить вручную отсюда: .
+
+### Из RPM пакетов
+
+Команда ClickHouse в Яндексе рекомендует использовать официальные предкомпилированные `rpm` пакеты для CentOS, RedHat и всех остальных дистрибутивов Linux, основанных на rpm.
+
+Сначала нужно подключить официальный репозиторий:
+```bash
+sudo yum install yum-utils
+sudo rpm --import https://repo.yandex.ru/clickhouse/CLICKHOUSE-KEY.GPG
+sudo yum-config-manager --add-repo https://repo.yandex.ru/clickhouse/rpm/stable/x86_64
+```
+
+Для использования наиболее свежих версий нужно заменить `stable` на `testing` (рекомендуется для тестовых окружений).
+
+Для, собственно, установки пакетов необходимо выполнить следующие команды:
+
+```bash
+sudo yum install clickhouse-server clickhouse-client
+```
+
+Также есть возможность установить пакеты вручную, скачав отсюда: .
+
+### Из Docker образа
+
+Для запуска ClickHouse в Docker нужно следовать инструкции на [Docker Hub](https://hub.docker.com/r/yandex/clickhouse-server/). Внутри образов используются официальные `deb` пакеты.
+
+### Из исходного кода
+
+Для компиляции ClickHouse вручную, используйте инструкцию для [Linux](../development/build.md) или [Mac OS X](../development/build_osx.md).
+
+Можно скомпилировать пакеты и установить их, либо использовать программы без установки пакетов. Также при ручной сборке можно отключить необходимость поддержки набора инструкций SSE 4.2 или собрать под процессоры архитектуры AArch64.
+
+```
+Client: dbms/programs/clickhouse-client
+Server: dbms/programs/clickhouse-server
+```
+
+Для работы собранного вручную сервера необходимо создать директории для данных и метаданных, а также сделать их `chown` для желаемого пользователя. Пути к этим директориям могут быть изменены в конфигурационном файле сервера (src/dbms/programs/server/config.xml), по умолчанию используются следующие:
+
+```
+/opt/clickhouse/data/default/
+/opt/clickhouse/metadata/default/
+```
+
+На Gentoo для установки ClickHouse из исходного кода можно использовать просто `emerge clickhouse`.
+
+## Запуск
+
+Для запуска сервера в качестве демона, выполните:
+
+``` bash
+$ sudo service clickhouse-server start
+```
+
+Смотрите логи в директории `/var/log/clickhouse-server/`.
+
+Если сервер не стартует, проверьте корректность конфигурации в файле `/etc/clickhouse-server/config.xml`
+
+Также можно запустить сервер вручную из консоли:
+
+``` bash
+$ clickhouse-server --config-file=/etc/clickhouse-server/config.xml
+```
+
+При этом, лог будет выводиться в консоль, что удобно для разработки.
+Если конфигурационный файл лежит в текущей директории, то указывать параметр `--config-file` не требуется, по умолчанию будет использован файл `./config.xml`.
+
+После запуска сервера, соединиться с ним можно с помощью клиента командной строки:
+
+``` bash
+$ clickhouse-client
+```
+
+По умолчанию он соединяется с localhost:9000, от имени пользователя `default` без пароля. Также клиент может быть использован для соединения с удалённым сервером с помощью аргумента `--host`.
+
+Терминал должен использовать кодировку UTF-8.
+
+Более подробная информация о клиенте располагается в разделе [«Клиент командной строки»](../interfaces/cli.md).
+
+Пример проверки работоспособности системы:
+
+``` bash
+$ ./clickhouse-client
+ClickHouse client version 0.0.18749.
+Connecting to localhost:9000.
+Connected to ClickHouse server version 0.0.18749.
+
+:) SELECT 1
+
+SELECT 1
+
+┌─1─┐
+│ 1 │
+└───┘
+
+1 rows in set. Elapsed: 0.003 sec.
+
+:)
+```
+
+**Поздравляем, система работает!**
+
+Для дальнейших экспериментов можно попробовать загрузить один из тестовых наборов данных или пройти [пошаговое руководство для начинающих](https://clickhouse.yandex/tutorial.html).
+
+[Оригинальная статья](https://clickhouse.yandex/docs/ru/getting_started/install/)
diff --git a/docs/toc_en.yml b/docs/toc_en.yml
index d823e026164..bf05e68e04a 100644
--- a/docs/toc_en.yml
+++ b/docs/toc_en.yml
@@ -8,8 +8,9 @@ nav:
- 'The Yandex.Metrica Task': 'introduction/ya_metrika_task.md'
- 'Getting Started':
+ - 'hidden': 'getting_started/index.md'
+ - 'Installation': 'getting_started/install.md'
- 'Tutorial': 'getting_started/tutorial.md'
- - 'Deploying and Running': 'getting_started/index.md'
- 'Example Datasets':
- 'OnTime': 'getting_started/example_datasets/ontime.md'
- 'New York Taxi Data': 'getting_started/example_datasets/nyc_taxi.md'
diff --git a/docs/toc_fa.yml b/docs/toc_fa.yml
index 1799093df24..682e9197bac 100644
--- a/docs/toc_fa.yml
+++ b/docs/toc_fa.yml
@@ -1,6 +1,6 @@
nav:
-- 'Introduction':
+- 'معرفی':
- 'ClickHouse چیست؟': 'index.md'
- ' ویژگی های برجسته ClickHouse': 'introduction/distinctive_features.md'
- ' ویژگی های از ClickHouse که می تواند معایبی باشد': 'introduction/features_considered_disadvantages.md'
@@ -8,8 +8,10 @@ nav:
- 'The Yandex.Metrica task': 'introduction/ya_metrika_task.md'
- 'Getting started':
- - ' شروع به کار': 'getting_started/index.md'
- - 'Example datasets':
+ - 'hidden': 'getting_started/index.md'
+ - 'نصب و راه اندازی': 'getting_started/install.md'
+ - 'آموزش': 'getting_started/tutorial.md'
+ - 'مجموعه داده های نمونه':
- 'OnTime': 'getting_started/example_datasets/ontime.md'
- ' داده های تاکسی New York': 'getting_started/example_datasets/nyc_taxi.md'
- ' بنچمارک AMPLab Big Data': 'getting_started/example_datasets/amplab_benchmark.md'
@@ -18,7 +20,7 @@ nav:
- ' بنچمارک Star Schema': 'getting_started/example_datasets/star_schema.md'
- 'Yandex.Metrica Data': 'getting_started/example_datasets/metrica.md'
-- 'Interfaces':
+- 'رابط':
- 'Interface ها': 'interfaces/index.md'
- ' کلاینت Command-line': 'interfaces/cli.md'
- 'Native interface (TCP)': 'interfaces/tcp.md'
@@ -32,7 +34,7 @@ nav:
- 'رابط های بصری': 'interfaces/third-party/gui.md'
- 'پروکسی': 'interfaces/third-party/proxy.md'
-- 'Data types':
+- 'انواع داده':
- 'Introduction': 'data_types/index.md'
- 'UInt8, UInt16, UInt32, UInt64, Int8, Int16, Int32, Int64': 'data_types/int_uint.md'
- 'Float32, Float64': 'data_types/float.md'
diff --git a/docs/toc_ru.yml b/docs/toc_ru.yml
index 682f171a1f3..60b1a8afd23 100644
--- a/docs/toc_ru.yml
+++ b/docs/toc_ru.yml
@@ -9,7 +9,9 @@ nav:
- 'Информационная поддержка': 'introduction/info.md'
- 'Начало работы':
- - 'Установка и запуск': 'getting_started/index.md'
+ - 'hidden': 'getting_started/index.md'
+ - 'Установка': 'getting_started/install.md'
+ - 'Руководство для начинающих': 'getting_started/tutorial.md'
- 'Тестовые наборы данных':
- 'OnTime': 'getting_started/example_datasets/ontime.md'
- 'Данные о такси в Нью-Йорке': 'getting_started/example_datasets/nyc_taxi.md'
diff --git a/docs/toc_zh.yml b/docs/toc_zh.yml
index d140ec88d64..ef4f05c6172 100644
--- a/docs/toc_zh.yml
+++ b/docs/toc_zh.yml
@@ -8,7 +8,9 @@ nav:
- 'Yandex.Metrica使用案例': 'introduction/ya_metrika_task.md'
- '入门指南':
- - '部署运行': 'getting_started/index.md'
+ - 'hidden': 'getting_started/index.md'
+ - '安装': 'getting_started/install.md'
+ - '教程': 'getting_started/tutorial.md'
- '示例数据集':
- '航班飞行数据': 'getting_started/example_datasets/ontime.md'
- '纽约市出租车数据': 'getting_started/example_datasets/nyc_taxi.md'
diff --git a/docs/zh/getting_started/index.md b/docs/zh/getting_started/index.md
index f51323ce7e8..c73181a6068 100644
--- a/docs/zh/getting_started/index.md
+++ b/docs/zh/getting_started/index.md
@@ -1,158 +1,10 @@
-# 入门指南
+# 入门
-## 系统要求
+如果您是ClickHouse的新手,并希望亲身体验它的性能,首先您需要完成[安装过程](install.md)。
-如果从官方仓库安装,需要确保您使用的是x86\_64处理器构架的Linux并且支持SSE 4.2指令集
+之后,您可以选择以下选项之一:
-检查是否支持SSE 4.2:
+* [通过详细的教程](tutorial.md)
+* [试验示例数据集](example_datasets/ontime.md)
-```bash
-grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not supported"
-```
-
-我们推荐使用Ubuntu或者Debian。终端必须使用UTF-8编码。
-
-基于rpm的系统,你可以使用第三方的安装包:https://packagecloud.io/altinity/clickhouse 或者直接安装debian安装包。
-
-ClickHouse还可以在FreeBSD与Mac OS X上工作。同时它可以在不支持SSE 4.2的x86\_64构架和AArch64 CPUs上编译。
-
-## 安装
-
-### 为Debian/Ubuntu安装
-
-在`/etc/apt/sources.list` (或创建`/etc/apt/sources.list.d/clickhouse.list`文件)中添加仓库:
-
-```text
-deb http://repo.yandex.ru/clickhouse/deb/stable/ main/
-```
-
-如果你想使用最新的测试版本,请使用'testing'替换'stable'。
-
-然后运行:
-
-```bash
-sudo apt-get install dirmngr # optional
-sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv E0C56BD4 # optional
-sudo apt-get update
-sudo apt-get install clickhouse-client clickhouse-server
-```
-
-你也可以从这里手动下载安装包:。
-
-ClickHouse包含访问控制配置,它们位于`users.xml`文件中(与'config.xml'同目录)。
-默认情况下,允许从任何地方使用默认的‘default’用户无密码的访问ClickHouse。参考‘user/default/networks’。
-有关更多信息,请参考"Configuration files"部分。
-
-### 来自RPM包
-
-Yandex ClickHouse团队建议使用官方预编译的`rpm`软件包,用于CentOS,RedHat和所有其他基于rpm的Linux发行版。
-
-首先,您需要添加官方存储库:
-
-```bash
-sudo yum install yum-utils
-sudo rpm --import https://repo.yandex.ru/clickhouse/CLICKHOUSE-KEY.GPG
-sudo yum-config-manager --add-repo https://repo.yandex.ru/clickhouse/rpm/stable/x86_64
-```
-
-如果您想使用最新版本,请将`stable`替换为`testing`(建议您在测试环境中使用)。
-
-然后运行这些命令以实际安装包:
-
-```bash
-sudo yum install clickhouse-server clickhouse-client
-```
-
-您也可以从此处手动下载和安装软件包:。
-
-### 来自Docker
-
-要在Docker中运行ClickHouse,请遵循[Docker Hub](https://hub.docker.com/r/yandex/clickhouse-server/)上的指南。这些镜像内部使用官方的`deb`包。
-
-### 使用源码安装
-
-具体编译方式可以参考build.md。
-
-你可以编译并安装它们。
-你也可以直接使用而不进行安装。
-
-```text
-Client: dbms/programs/clickhouse-client
-Server: dbms/programs/clickhouse-server
-```
-
-在服务器中为数据创建如下目录:
-
-```text
-/opt/clickhouse/data/default/
-/opt/clickhouse/metadata/default/
-```
-
-(它们可以在server config中配置。)
-为需要的用户运行‘chown’
-
-日志的路径可以在server config (src/dbms/programs/server/config.xml)中配置。
-
-## 启动
-
-可以运行如下命令在后台启动服务:
-
-```bash
-sudo service clickhouse-server start
-```
-
-可以在`/var/log/clickhouse-server/`目录中查看日志。
-
-如果服务没有启动,请检查配置文件 `/etc/clickhouse-server/config.xml`。
-
-你也可以在控制台中直接启动服务:
-
-```bash
-clickhouse-server --config-file=/etc/clickhouse-server/config.xml
-```
-
-在这种情况下,日志将被打印到控制台中,这在开发过程中很方便。
-如果配置文件在当前目录中,你可以不指定‘--config-file’参数。它默认使用‘./config.xml’。
-
-你可以使用命令行客户端连接到服务:
-
-```bash
-clickhouse-client
-```
-
-默认情况下它使用‘default’用户无密码的与localhost:9000服务建立连接。
-客户端也可以用于连接远程服务,例如:
-
-```bash
-clickhouse-client --host=example.com
-```
-
-有关更多信息,请参考"Command-line client"部分。
-
-检查系统是否工作:
-
-```bash
-milovidov@hostname:~/work/metrica/src/dbms/src/Client$ ./clickhouse-client
-ClickHouse client version 0.0.18749.
-Connecting to localhost:9000.
-Connected to ClickHouse server version 0.0.18749.
-
-:) SELECT 1
-
-SELECT 1
-
-┌─1─┐
-│ 1 │
-└───┘
-
-1 rows in set. Elapsed: 0.003 sec.
-
-:)
-```
-
-**恭喜,系统已经工作了!**
-
-为了继续进行实验,你可以尝试下载测试数据集。
-
-
-[Original article](https://clickhouse.yandex/docs/en/getting_started/)
+[来源文章](https://clickhouse.yandex/docs/zh/getting_started/)
diff --git a/docs/zh/getting_started/install.md b/docs/zh/getting_started/install.md
new file mode 100644
index 00000000000..29aee915bfa
--- /dev/null
+++ b/docs/zh/getting_started/install.md
@@ -0,0 +1,156 @@
+## 系统要求
+
+如果从官方仓库安装,需要确保您使用的是x86\_64处理器构架的Linux并且支持SSE 4.2指令集
+
+检查是否支持SSE 4.2:
+
+```bash
+grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not supported"
+```
+
+我们推荐使用Ubuntu或者Debian。终端必须使用UTF-8编码。
+
+基于rpm的系统,你可以使用第三方的安装包:https://packagecloud.io/altinity/clickhouse 或者直接安装debian安装包。
+
+ClickHouse还可以在FreeBSD与Mac OS X上工作。同时它可以在不支持SSE 4.2的x86\_64构架和AArch64 CPUs上编译。
+
+## 可用的安装选项
+
+### 为Debian/Ubuntu安装
+
+在`/etc/apt/sources.list` (或创建`/etc/apt/sources.list.d/clickhouse.list`文件)中添加仓库:
+
+```text
+deb http://repo.yandex.ru/clickhouse/deb/stable/ main/
+```
+
+如果你想使用最新的测试版本,请使用'testing'替换'stable'。
+
+然后运行:
+
+```bash
+sudo apt-get install dirmngr # optional
+sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv E0C56BD4 # optional
+sudo apt-get update
+sudo apt-get install clickhouse-client clickhouse-server
+```
+
+你也可以从这里手动下载安装包:。
+
+ClickHouse包含访问控制配置,它们位于`users.xml`文件中(与'config.xml'同目录)。
+默认情况下,允许从任何地方使用默认的‘default’用户无密码的访问ClickHouse。参考‘user/default/networks’。
+有关更多信息,请参考"Configuration files"部分。
+
+### 来自RPM包
+
+Yandex ClickHouse团队建议使用官方预编译的`rpm`软件包,用于CentOS,RedHat和所有其他基于rpm的Linux发行版。
+
+首先,您需要添加官方存储库:
+
+```bash
+sudo yum install yum-utils
+sudo rpm --import https://repo.yandex.ru/clickhouse/CLICKHOUSE-KEY.GPG
+sudo yum-config-manager --add-repo https://repo.yandex.ru/clickhouse/rpm/stable/x86_64
+```
+
+如果您想使用最新版本,请将`stable`替换为`testing`(建议您在测试环境中使用)。
+
+然后运行这些命令以实际安装包:
+
+```bash
+sudo yum install clickhouse-server clickhouse-client
+```
+
+您也可以从此处手动下载和安装软件包:。
+
+### 来自Docker
+
+要在Docker中运行ClickHouse,请遵循[Docker Hub](https://hub.docker.com/r/yandex/clickhouse-server/)上的指南。这些镜像内部使用官方的`deb`包。
+
+### 使用源码安装
+
+具体编译方式可以参考build.md。
+
+你可以编译并安装它们。
+你也可以直接使用而不进行安装。
+
+```text
+Client: dbms/programs/clickhouse-client
+Server: dbms/programs/clickhouse-server
+```
+
+在服务器中为数据创建如下目录:
+
+```text
+/opt/clickhouse/data/default/
+/opt/clickhouse/metadata/default/
+```
+
+(它们可以在server config中配置。)
+为需要的用户运行‘chown’
+
+日志的路径可以在server config (src/dbms/programs/server/config.xml)中配置。
+
+## Launch
+
+To start the server as a daemon, run:
+
+```bash
+sudo service clickhouse-server start
+```
+
+See the logs in the `/var/log/clickhouse-server/` directory.
+
+If the server doesn't start, check the configuration file `/etc/clickhouse-server/config.xml`.
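+
+For example, the error log is usually the fastest pointer to what went wrong (the file name below is the default one):
+
+```bash
+sudo tail -n 50 /var/log/clickhouse-server/clickhouse-server.err.log
+```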
+
+You can also launch the server from the console manually:
+
+```bash
+clickhouse-server --config-file=/etc/clickhouse-server/config.xml
+```
+
+In this case, the log will be printed to the console, which is convenient during development.
+If the configuration file is in the current directory, you don't need to specify the `--config-file` parameter; by default it uses `./config.xml`.
+
+You can use the command-line client to connect to the server:
+
+```bash
+clickhouse-client
+```
+
+By default, it connects to localhost:9000 on behalf of the `default` user without a password.
+The client can also be used to connect to a remote server. For example:
+
+```bash
+clickhouse-client --host=example.com
+```
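+
+The client accepts further standard connection options; for instance (values here are placeholders):
+
+```bash
+clickhouse-client --host=example.com --port=9000 --user=default --password=''
+```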
+
+For more information, see the section "Command-line client".
+
+To check whether the system is working:
+
+```bash
+milovidov@hostname:~/work/metrica/src/dbms/src/Client$ ./clickhouse-client
+ClickHouse client version 0.0.18749.
+Connecting to localhost:9000.
+Connected to ClickHouse server version 0.0.18749.
+
+:) SELECT 1
+
+SELECT 1
+
+┌─1─┐
+│ 1 │
+└───┘
+
+1 rows in set. Elapsed: 0.003 sec.
+
+:)
+```
+
+**Congratulations, the system works!**
+
+To continue experimenting, you can try downloading one of the test datasets.
+
+
+[Original article](https://clickhouse.yandex/docs/en/getting_started/install/)
From 9e8e7709f6d91eb752dec57f79cc2eabbcfbb534 Mon Sep 17 00:00:00 2001
From: Ivan Blinkov
Date: Fri, 23 Aug 2019 18:29:58 +0300
Subject: [PATCH 007/312] add ../en/getting_started/index.md
---
docs/en/getting_started/index.md | 10 ++++++++++
1 file changed, 10 insertions(+)
create mode 100644 docs/en/getting_started/index.md
diff --git a/docs/en/getting_started/index.md b/docs/en/getting_started/index.md
new file mode 100644
index 00000000000..390cb3183c4
--- /dev/null
+++ b/docs/en/getting_started/index.md
@@ -0,0 +1,10 @@
+# Getting Started
+
+If you are new to ClickHouse and want to get a hands-on feeling of its performance, first of all you need to go through the [installation process](install.md).
+
+After that you can choose one of the following options:
+
+* [Go through detailed tutorial](tutorial.md)
+* [Experiment with example datasets](example_datasets/ontime.md)
+
+[Original article](https://clickhouse.yandex/docs/en/getting_started/)
From 6b936b0c04b375ed9c1042444acd3a1bcb10b465 Mon Sep 17 00:00:00 2001
From: Ivan Blinkov
Date: Wed, 28 Aug 2019 11:51:03 +0300
Subject: [PATCH 008/312] Rename ya_metrica_task.md
---
docs/en/introduction/history.md | 50 +++++++++++++++++++++++++++++++++
docs/fa/introduction/history.md | 49 ++++++++++++++++++++++++++++++++
docs/redirects.txt | 1 +
docs/ru/introduction/history.md | 49 ++++++++++++++++++++++++++++++++
docs/toc_en.yml | 2 +-
docs/toc_fa.yml | 4 +--
docs/toc_ru.yml | 2 +-
docs/toc_zh.yml | 2 +-
docs/zh/introduction/history.md | 50 +++++++++++++++++++++++++++++++++
9 files changed, 204 insertions(+), 5 deletions(-)
create mode 100644 docs/en/introduction/history.md
create mode 100644 docs/fa/introduction/history.md
create mode 100644 docs/ru/introduction/history.md
create mode 100644 docs/zh/introduction/history.md
diff --git a/docs/en/introduction/history.md b/docs/en/introduction/history.md
new file mode 100644
index 00000000000..e8f373880f1
--- /dev/null
+++ b/docs/en/introduction/history.md
@@ -0,0 +1,50 @@
+# ClickHouse History
+
+ClickHouse was originally developed to power [Yandex.Metrica](https://metrica.yandex.com/), [the second largest web analytics platform in the world](http://w3techs.com/technologies/overview/traffic_analysis/all), and continues to be the core component of this system. With more than 13 trillion records in the database and more than 20 billion events daily, ClickHouse allows generating custom reports on the fly directly from non-aggregated data. This article briefly covers the goals of ClickHouse in the early stages of its development.
+
+Yandex.Metrica builds customized reports on the fly based on hits and sessions, with arbitrary segments defined by the user. This often requires building complex aggregates, such as the number of unique users. New data for building a report is received in real time.
+
+As of April 2014, Yandex.Metrica was tracking about 12 billion events (page views and clicks) daily. All these events must be stored in order to build custom reports. A single query may require scanning millions of rows within a few hundred milliseconds, or hundreds of millions of rows in just a few seconds.
+
+## Usage in Yandex.Metrica and Other Yandex Services
+
+ClickHouse is used for multiple purposes in Yandex.Metrica.
+Its main task is to build reports in online mode using non-aggregated data. It uses a cluster of 374 servers, which store over 20.3 trillion rows in the database. The volume of compressed data, without counting duplication and replication, is about 2 PB. The volume of uncompressed data (in TSV format) would be approximately 17 PB.
+
+ClickHouse is also used for:
+
+- Storing data for Session Replay from Yandex.Metrica.
+- Processing intermediate data.
+- Building global reports with Analytics.
+- Running queries for debugging the Yandex.Metrica engine.
+- Analyzing logs from the API and the user interface.
+
+ClickHouse has at least a dozen installations in other Yandex services: in search verticals, Market, Direct, business analytics, mobile development, AdFox, personal services, and others.
+
+## Aggregated and Non-aggregated Data
+
+There is a popular opinion that in order to effectively calculate statistics, you must aggregate data, since this reduces the volume of data.
+
+But data aggregation is a very limited solution, for the following reasons:
+
+- You must have a pre-defined list of reports the user will need.
+- The user can't make custom reports.
+- When aggregating a large quantity of keys, the volume of data is not reduced, and aggregation is useless.
+- For a large number of reports, there are too many aggregation variations (combinatorial explosion).
+- When aggregating keys with high cardinality (such as URLs), the volume of data is not reduced by much (less than twofold).
+- For this reason, the volume of data with aggregation might grow instead of shrink.
+- Users do not view all the reports we generate for them. A large portion of calculations are useless.
+- The logical integrity of data may be violated for various aggregations.
+
+If we do not aggregate anything and work with non-aggregated data, this might actually reduce the volume of calculations.
+
+However, with aggregation, a significant part of the work is taken offline and completed relatively calmly. In contrast, online calculations require calculating as fast as possible, since the user is waiting for the result.
+
+Yandex.Metrica has a specialized system for aggregating data called Metrage, which is used for the majority of reports.
+Starting in 2009, Yandex.Metrica also used a specialized OLAP database for non-aggregated data called OLAPServer, which was previously used for the report builder.
+OLAPServer worked well for non-aggregated data, but it had many restrictions that did not allow it to be used for all reports as desired. These included the lack of support for data types (only numbers), and the inability to incrementally update data in real-time (it could only be done by rewriting data daily). OLAPServer is not a DBMS, but a specialized DB.
+
+To remove the limitations of OLAPServer and solve the problem of working with non-aggregated data for all reports, we developed the ClickHouse DBMS.
+
+
+[Original article](https://clickhouse.yandex/docs/en/introduction/history/)
diff --git a/docs/fa/introduction/history.md b/docs/fa/introduction/history.md
new file mode 100644
index 00000000000..abde10aa6f3
--- /dev/null
+++ b/docs/fa/introduction/history.md
@@ -0,0 +1,49 @@
+
+
+# ClickHouse History
+
+ClickHouse was initially developed to power Yandex.Metrica, the second largest web analytics platform in the world, and remains its core component. With more than 13 trillion records in the database and more than 20 billion events per day, ClickHouse makes it possible to produce on-the-fly reports from non-aggregated data. This article gives historical background on the main goals of ClickHouse before it became an open-source product.
+
+Yandex.Metrica produces reports on the fly based on hits and sessions, with arbitrary segments and time periods chosen by the user. Complex aggregates are often required, such as the number of unique visitors. New data for the reports arrives in real time.
+
+As of April 2014, Yandex.Metrica received approximately 12 billion events (page views and clicks) per day. All of these events had to be stored in order to build custom reports. A single query might need to scan millions of rows in less than a few hundred milliseconds, or hundreds of millions of rows within a few seconds.
+
+## Usage in Yandex.Metrica and Other Yandex Services
+
+ClickHouse is used for multiple purposes in Yandex.Metrica. Its main task is building online reports from non-aggregated data. It runs on a cluster of 374 servers that stores more than 20.3 trillion rows in the database. The compressed data size, not counting duplicates and replication, is about 2 PB. The uncompressed data size (in TSV format) is approximately 17 PB.
+
+ClickHouse is also used for:
+
+- Storing data for Session Replay from Yandex.Metrica.
+- Processing intermediate data.
+- Building global reports with Analytics.
+- Running queries for debugging the Yandex.Metrica engine.
+- Analyzing logs from the APIs and the user interface.
+
+ClickHouse has at least a dozen installations elsewhere at Yandex: in search verticals, Market, Direct, Business Analytics, Mobile Development, AdFox, personal services, and more.
+
+## Aggregated and Non-aggregated Data
+
+There is a popular view that you must aggregate your data in order to reduce its size.
+
+But for the following reasons, aggregating data is a very limited solution:
+
+- You must have a pre-defined list of the reports users will need.
+- The user can't make custom reports.
+- When aggregating a very large number of keys, the data size is not reduced and aggregation is useless.
+- For a large number of reports, there are too many varying aggregations (combinatorial explosion).
+- When aggregating keys with high cardinality (such as URLs), the data size does not decrease by much (less than twofold).
+- For this reason, the data size with aggregation may grow rather than shrink.
+- Users do not look at all the reports we generate for them. A large portion of the calculations is useless.
+- The logical integrity of the data may be violated for different aggregations.
+
+If we do not aggregate anything and work with non-aggregated data, this may actually reduce the volume of calculations.
+
+However, with aggregation a significant part of the work is done offline and finishes relatively calmly. In contrast, online calculations must be done as fast as possible, since the user is waiting for the results.
+
+Yandex.Metrica has a specialized system for aggregating data called Metrage, which is used for the majority of reports. Starting in 2009, Yandex.Metrica also used a specialized OLAP database for non-aggregated data called OLAPServer, which was previously used for building reports. OLAPServer worked well on non-aggregated data, but it had many restrictions that did not allow using it for all the desired reports, such as the lack of support for data types (numbers only) and the inability to update data incrementally in real time (this could only be done by rewriting the data daily). OLAPServer was not a database management system, but a specialized database.
+
+To remove the limitations of OLAPServer and solve the problem of working with non-aggregated data for all reports, we developed the ClickHouse DBMS.
+
+
+[Original article](https://clickhouse.yandex/docs/fa/introduction/ya_metrika_task/)
diff --git a/docs/redirects.txt b/docs/redirects.txt
index 0ff077b660c..b38f6d242f2 100644
--- a/docs/redirects.txt
+++ b/docs/redirects.txt
@@ -1,3 +1,4 @@
+introduction/ya_metrika_task.md introduction/history.md
system_tables.md operations/system_tables.md
system_tables/system.asynchronous_metrics.md operations/system_tables.md
system_tables/system.clusters.md operations/system_tables.md
diff --git a/docs/ru/introduction/history.md b/docs/ru/introduction/history.md
new file mode 100644
index 00000000000..c0035b51f82
--- /dev/null
+++ b/docs/ru/introduction/history.md
@@ -0,0 +1,49 @@
+# ClickHouse History
+
+ClickHouse was originally developed to power [Yandex.Metrica](https://metrika.yandex.ru/), [the second largest web analytics platform in the world](http://w3techs.com/technologies/overview/traffic_analysis/all), and continues to be its key component. With more than 13 trillion records in the database and more than 20 billion events per day, ClickHouse allows generating custom reports on the fly directly from non-aggregated data. This article briefly describes the goals that ClickHouse historically faced in the early stages of its development.
+
+Yandex.Metrica builds custom reports on the fly based on hits and visits, with a period and arbitrary segments defined by the end user. Building complex aggregates is often required, such as the number of unique users. New data for building a report arrives in real time.
+
+As of April 2014, Yandex.Metrica received about 12 billion events (page views and mouse clicks) daily. All these events had to be stored in order to build arbitrary reports. A single query may require scanning millions of rows within no more than a few hundred milliseconds, or hundreds of millions of rows within no more than a few seconds.
+
+## Usage in Yandex.Metrica and Other Yandex Departments
+
+ClickHouse is used for several tasks in Yandex.Metrica.
+Its main task is building online reports from non-aggregated data. This task is served by a cluster of 374 servers storing more than 20.3 trillion rows in the database. The volume of compressed data, without counting duplication and replication, is about 2 PB. The volume of uncompressed data (in TSV format) would be approximately 17 PB.
+
+ClickHouse is also used:
+
+- for storing Webvisor data;
+- for processing intermediate data;
+- for building global reports by Analysts;
+- for running queries to debug the Metrica engine;
+- for analyzing logs of the API and the user interface.
+
+ClickHouse has more than a dozen installations in other Yandex departments: in search verticals, Market, Direct, BK, Business Analytics, Mobile Development, AdFox, personal services, and so on.
+
+## Aggregated and Non-aggregated Data
+
+There is an opinion that in order to calculate statistics effectively, data must be aggregated, since this reduces the volume of data.
+
+But aggregated data is a very limited solution, for the following reasons:
+
+- you must know the list of reports the user needs in advance;
+- that is, the user can't build an arbitrary report;
+- when aggregating over a large number of keys, the data volume is not reduced and aggregation is useless;
+- with a large number of reports, there are too many aggregation variants (combinatorial explosion);
+- when aggregating over keys with high cardinality (such as URLs), the data volume does not decrease much (less than twofold);
+- because of this, the data volume with aggregation might not shrink but grow;
+- users won't view all the reports that we calculate for them, i.e. a large part of the calculations is useless;
+- the logical integrity of the data may be violated for different aggregations;
+
+As you can see, not aggregating anything and working with non-aggregated data might actually reduce the volume of calculations.
+
+However, with aggregation a significant part of the work is taken offline and can be done relatively calmly. By comparison, online calculations have to be done as fast as possible, since the user is waiting for the result at the very moment of calculation.
+
+Yandex.Metrica has a specialized system for aggregated data called Metrage, which powers the majority of reports.
+Since 2009, Yandex.Metrica also used a specialized OLAP database for non-aggregated data called OLAPServer, which previously powered the report builder.
+OLAPServer worked well for non-aggregated data, but it had many restrictions that did not allow using it for all reports as desired: the lack of support for data types (numbers only), and the inability to update data incrementally in real time (only by rewriting data daily). OLAPServer is not a DBMS, but a specialized DB.
+
+To remove the limitations of OLAPServer and solve the problem of working with non-aggregated data for all reports, the ClickHouse DBMS was developed.
+
+[Original article](https://clickhouse.yandex/docs/ru/introduction/ya_metrika_task/)
diff --git a/docs/toc_en.yml b/docs/toc_en.yml
index bf05e68e04a..756dc15414d 100644
--- a/docs/toc_en.yml
+++ b/docs/toc_en.yml
@@ -5,7 +5,7 @@ nav:
- 'Distinctive Features of ClickHouse': 'introduction/distinctive_features.md'
- 'ClickHouse Features that Can Be Considered Disadvantages': 'introduction/features_considered_disadvantages.md'
- 'Performance': 'introduction/performance.md'
- - 'The Yandex.Metrica Task': 'introduction/ya_metrika_task.md'
+ - 'History': 'introduction/history.md'
- 'Getting Started':
- 'hidden': 'getting_started/index.md'
diff --git a/docs/toc_fa.yml b/docs/toc_fa.yml
index 682e9197bac..31484cd596c 100644
--- a/docs/toc_fa.yml
+++ b/docs/toc_fa.yml
@@ -4,8 +4,8 @@ nav:
- 'ClickHouse چیست؟': 'index.md'
- ' ویژگی های برجسته ClickHouse': 'introduction/distinctive_features.md'
- ' ویژگی های از ClickHouse که می تواند معایبی باشد': 'introduction/features_considered_disadvantages.md'
- - 'Performance': 'introduction/performance.md'
- - 'The Yandex.Metrica task': 'introduction/ya_metrika_task.md'
+ - 'Performance': 'introduction/performance.md'
+ - 'History': 'introduction/history.md'
- 'Getting started':
- 'hidden': 'getting_started/index.md'
diff --git a/docs/toc_ru.yml b/docs/toc_ru.yml
index 60b1a8afd23..f14dce709ac 100644
--- a/docs/toc_ru.yml
+++ b/docs/toc_ru.yml
@@ -5,7 +5,7 @@ nav:
- 'Отличительные возможности ClickHouse': 'introduction/distinctive_features.md'
- 'Особенности ClickHouse, которые могут считаться недостатками': 'introduction/features_considered_disadvantages.md'
- 'Производительность': 'introduction/performance.md'
- - 'Постановка задачи в Яндекс.Метрике': 'introduction/ya_metrika_task.md'
+ - 'History': 'introduction/history.md'
- 'Информационная поддержка': 'introduction/info.md'
- 'Начало работы':
diff --git a/docs/toc_zh.yml b/docs/toc_zh.yml
index ef4f05c6172..3aef09a24ed 100644
--- a/docs/toc_zh.yml
+++ b/docs/toc_zh.yml
@@ -5,7 +5,7 @@ nav:
- 'ClickHouse的独特功能': 'introduction/distinctive_features.md'
- 'ClickHouse功能可被视为缺点': 'introduction/features_considered_disadvantages.md'
- '性能': 'introduction/performance.md'
- - 'Yandex.Metrica使用案例': 'introduction/ya_metrika_task.md'
+ - 'History': 'introduction/history.md'
- '入门指南':
- 'hidden': 'getting_started/index.md'
diff --git a/docs/zh/introduction/history.md b/docs/zh/introduction/history.md
new file mode 100644
index 00000000000..86fe02f84d5
--- /dev/null
+++ b/docs/zh/introduction/history.md
@@ -0,0 +1,50 @@
+# ClickHouse History
+
+ClickHouse was originally developed for [Yandex.Metrica](https://metrica.yandex.com/), [the world's second largest web analytics platform](http://w3techs.com/technologies/overview/traffic_analysis/all), and has served as the core component of that system for many years. The system currently holds more than 13 trillion records in ClickHouse and processes more than 20 billion events every day, generating reports on the fly directly from non-aggregated data. This article briefly describes the goals of ClickHouse in the early stages of its development.
+
+Yandex.Metrica generates real-time statistical reports on hits and sessions based on user-defined fields. Doing so often requires building complex aggregates, such as deduplicating visiting users. The data for building the reports is new data that is received and stored in real time.
+
+As of April 2014, Yandex.Metrica was tracking about 12 billion events (user clicks and page views) daily. All of these events had to be stored so that custom reports could be created. A single query may need to scan millions of rows of data within a few hundred milliseconds, or hundreds of millions of rows within a few seconds.
+
+## Usage in Yandex.Metrica and Other Yandex Services
+
+ClickHouse is used in multiple scenarios in Yandex.Metrica.
+Its main task is to serve various online reports from non-aggregated data. It uses a cluster of 374 servers storing 20.3 trillion rows of data. Excluding duplicates and replicas, the compressed data amounts to 2 PB; uncompressed (in TSV format) it would be about 17 PB.
+
+ClickHouse is also used for:
+
+- Storing Session Replay data from Yandex.Metrica.
+- Processing intermediate data.
+- Building global reports with Analytics.
+- Running queries for debugging the Yandex.Metrica engine.
+- Analyzing log data from the API and the user interface.
+
+ClickHouse has at least 12 installations in other Yandex services: search verticals, Market, Direct, business analytics, mobile development, AdFox, personal services, and others.
+
+## Aggregated and Non-aggregated Data
+
+There is a popular view that effective statistics calculation requires aggregating the data, because aggregation reduces the data volume.
+
+But data aggregation is a solution with many limitations, for example:
+
+- You must know the list of fields for user-defined reports in advance.
+- Users cannot build custom reports.
+- When aggregating over too many conditions, the data volume may not shrink, making aggregation useless.
+- With a large number of reports, there are too many aggregation variations (combinatorial explosion).
+- When aggregating over high-cardinality keys (such as URLs), the data volume is not reduced by much (less than twofold).
+- The volume of aggregated data may grow instead of shrink.
+- Users will not view all the reports we generate for them; most of the calculations are wasted.
+- Various aggregations may break the logical integrity of the data.
+
+If we use non-aggregated data directly without any aggregation, the volume of calculations may actually go down.
+
+However, whereas a large share of the work in the aggregation approach is done offline, online calculations must finish as quickly as possible, because the user is waiting for the result.
+
+Yandex.Metrica has a system specialized in data aggregation called Metrage, which serves most of its reports.
+Starting in 2009, Yandex.Metrica also used a specialized OLAP database for non-aggregated data called OLAPServer, which was previously used for the report builder.
+OLAPServer worked well on non-aggregated data, but it had many restrictions that prevented using it for all the desired reports, e.g. the lack of support for data types (numbers only) and the inability to update data incrementally in real time (only by rewriting the data daily). OLAPServer is not a database management system, just a specialized database.
+
+To remove these limitations of OLAPServer and solve the problem of using non-aggregated data for all reports, we developed the ClickHouse database management system.
+
+
+[Original article](https://clickhouse.yandex/docs/en/introduction/ya_metrika_task/)
From 86de9486cf5fca46c6e31046e4eee8287327edba Mon Sep 17 00:00:00 2001
From: Ivan Blinkov
Date: Wed, 28 Aug 2019 11:51:27 +0300
Subject: [PATCH 009/312] Rename ya_metrica_task.md
---
docs/en/introduction/ya_metrika_task.md | 50 -------------------------
docs/fa/introduction/ya_metrika_task.md | 49 ------------------------
docs/ru/introduction/ya_metrika_task.md | 49 ------------------------
docs/zh/introduction/ya_metrika_task.md | 50 -------------------------
4 files changed, 198 deletions(-)
delete mode 100644 docs/en/introduction/ya_metrika_task.md
delete mode 100644 docs/fa/introduction/ya_metrika_task.md
delete mode 100644 docs/ru/introduction/ya_metrika_task.md
delete mode 100644 docs/zh/introduction/ya_metrika_task.md
diff --git a/docs/en/introduction/ya_metrika_task.md b/docs/en/introduction/ya_metrika_task.md
deleted file mode 100644
index 41b33eff581..00000000000
--- a/docs/en/introduction/ya_metrika_task.md
+++ /dev/null
@@ -1,50 +0,0 @@
-# Yandex.Metrica Use Case
-
-ClickHouse was originally developed to power [Yandex.Metrica](https://metrica.yandex.com/), [the second largest web analytics platform in the world](http://w3techs.com/technologies/overview/traffic_analysis/all), and continues to be the core component of this system. With more than 13 trillion records in the database and more than 20 billion events daily, ClickHouse allows generating custom reports on the fly directly from non-aggregated data. This article briefly covers the goals of ClickHouse in the early stages of its development.
-
-Yandex.Metrica builds customized reports on the fly based on hits and sessions, with arbitrary segments defined by the user. This often requires building complex aggregates, such as the number of unique users. New data for building a report is received in real time.
-
-As of April 2014, Yandex.Metrica was tracking about 12 billion events (page views and clicks) daily. All these events must be stored in order to build custom reports. A single query may require scanning millions of rows within a few hundred milliseconds, or hundreds of millions of rows in just a few seconds.
-
-## Usage in Yandex.Metrica and Other Yandex Services
-
-ClickHouse is used for multiple purposes in Yandex.Metrica.
-Its main task is to build reports in online mode using non-aggregated data. It uses a cluster of 374 servers, which store over 20.3 trillion rows in the database. The volume of compressed data, without counting duplication and replication, is about 2 PB. The volume of uncompressed data (in TSV format) would be approximately 17 PB.
-
-ClickHouse is also used for:
-
-- Storing data for Session Replay from Yandex.Metrica.
-- Processing intermediate data.
-- Building global reports with Analytics.
-- Running queries for debugging the Yandex.Metrica engine.
-- Analyzing logs from the API and the user interface.
-
-ClickHouse has at least a dozen installations in other Yandex services: in search verticals, Market, Direct, business analytics, mobile development, AdFox, personal services, and others.
-
-## Aggregated and Non-aggregated Data
-
-There is a popular opinion that in order to effectively calculate statistics, you must aggregate data, since this reduces the volume of data.
-
-But data aggregation is a very limited solution, for the following reasons:
-
-- You must have a pre-defined list of reports the user will need.
-- The user can't make custom reports.
-- When aggregating a large quantity of keys, the volume of data is not reduced, and aggregation is useless.
-- For a large number of reports, there are too many aggregation variations (combinatorial explosion).
-- When aggregating keys with high cardinality (such as URLs), the volume of data is not reduced by much (less than twofold).
-- For this reason, the volume of data with aggregation might grow instead of shrink.
-- Users do not view all the reports we generate for them. A large portion of calculations are useless.
-- The logical integrity of data may be violated for various aggregations.
-
-If we do not aggregate anything and work with non-aggregated data, this might actually reduce the volume of calculations.
-
-However, with aggregation, a significant part of the work is taken offline and completed relatively calmly. In contrast, online calculations require calculating as fast as possible, since the user is waiting for the result.
-
-Yandex.Metrica has a specialized system for aggregating data called Metrage, which is used for the majority of reports.
-Starting in 2009, Yandex.Metrica also used a specialized OLAP database for non-aggregated data called OLAPServer, which was previously used for the report builder.
-OLAPServer worked well for non-aggregated data, but it had many restrictions that did not allow it to be used for all reports as desired. These included the lack of support for data types (only numbers), and the inability to incrementally update data in real-time (it could only be done by rewriting data daily). OLAPServer is not a DBMS, but a specialized DB.
-
-To remove the limitations of OLAPServer and solve the problem of working with non-aggregated data for all reports, we developed the ClickHouse DBMS.
-
-
-[Original article](https://clickhouse.yandex/docs/en/introduction/ya_metrika_task/)
diff --git a/docs/fa/introduction/ya_metrika_task.md b/docs/fa/introduction/ya_metrika_task.md
deleted file mode 100644
index 1ea434f248c..00000000000
--- a/docs/fa/introduction/ya_metrika_task.md
+++ /dev/null
@@ -1,49 +0,0 @@
-
-
-# Yandex.Metrica use case
-
-ClickHouse در ابتدا برای قدرت به Yandex.Metrica دومین بستر آنالیز وب در دنیا توسعه داده شد، و همچنان جز اصلی آن است. ClickHouse اجازه می دهند که با بیش از 13 تریلیون رکورد در دیتابیس و بیش از 20 میلیارد event در روز، گزارش های مستقیم (On the fly) از داده های non-aggregate تهیه کنیم. این مقاله پیشنیه ی تاریخی در ارتباط با اهداف اصلی ClickHouse قبل از آنکه به یک محصول open source تبدیل شود، می دهد.
-
-Yandex.Metrica تولید گزارش های برپایه بازدید و session ها به صورت on the fly و با استفده از بخش های دلخواه و دوره ی زمانی که توسط کاربر انتخاب می شود را انجام می دهد. aggregate های پیچیده معمولا مورد نیاز هستند، مانند تعداد بازدیدکنندگان unique. داده های جدید برای تهیه گزارش گیری به صورت real-time می رسند.
-
-از آوریل 2014، Yandex.Metrica تقریبا 12 میلیارد event شامل page view و click در روز دریافت کرد. تمام این event ها باید به ترتیب برای ساخت گزارش های سفارشی ذخیره سازی می شدند. یک query ممکن است نیاز به اسکن کردن میلیون ها سطر با زمان کمتر از چند صد میلی ثانیه، یا چند صد میلیون سطر در عرض چند ثانیه داشته باشد.
-
-## استفاده در Yandex.Metrica و دیگر سرویس های Yandex
-
-ClickHouse با چندین اهداف در Yandex.Metrica استفاده می شود. وظیفه اصلی آن ساخت گزارش های آنلاین از داده های non-aggregate می باشد. ClickHouse در یک کلاستر با سایز 374 سرور، که بیش از 20.3 تریلیون سطر در دیتابیس را دارد مورد استفاده قرار می گیرد. اندازه فشرده داده ها، بدون شمارش داده های تکراری و replication، حدود 2 پتابایت می باشد. اندازه ی غیرفشرده داده ها (در فرمت TSV) حدودا 17 پتابایت می باشد.
-
-ClickHouse همچنین در موارد زیراستفاده می شود:
-
-- ذخیره سازی داده ها برای Session replay از Yandex.Metrica.
-- پردازش داده های Intermediate.
-- ساخت گزارش های سراسری از آنالیز ها.
-- اجرای query ها برای debug کردن موتور Yandex.Metrica.
-- آنالیز لاگ های به دست آمده از API ها و user interface.
-
-ClickHouse حداقل در دوازده جای دیگر سرویس Yandex نصب شده است: در search verticals، Market، Direct، Business Analytics، Mobile Development، AdFox، سرویس های شخصی و..
-
-## داده های Aggregate , Non-Aggregate
-
-یک دیدگاه محبوب وجود دارد که شما باید، داده های خود را به منظور کاهش اندازه داده ها Aggregate کنید.
-
-اما به دلایل زیر، aggregate کردن داده ها راه حل بسیار محدودی است:
-
-- شما باید لیست گزارش های از قبل تعریف شده توسط کاربر که نیاز به تهیه گزارش آنها را دارید، داشته باشید.
-- کاربر نمیتواند گزارش های سفارشی تهیه کند.
-- در هنگام aggregate کردن تعداد بسیار زیاد key، اندازه ی داده ها کم نمی شود و aggregate بی فایده است.
-- برای تعداد زیادی از گزارش ها، aggregate های متنوع و تغییرپذیر زیادی وجود دارد. (انفجار ترکیبی).
-- هنگام aggregate کردن key ها با cardinality بالا (مثل URL ها)، اندازه داده ها به اندازه کافی کاهش پیدا نمی کند (کمتر از دو برابر).
-- به این دلیل اندازه ی داده ها با aggregate کردن ممکن است به جای شکستن، رشد هم بکند.
-- کاربر تمام گزارش هایی که ما تولید کردیم را نگاه نمی کند. بخش بزرگی از محاسبات بی فایده است.
-- یکپارچگی منطقی داده ها ممکن است برای aggregate های مختلف نقض شود.
-
-اگر ما هیچ چیزی را aggregate نکنیم و با داده های non-aggregate کار کنیم، در واقع این ممکن است باعث کاهش اندازه ی محاسبات شود.
-
-با این حال، با aggregate کردن، بخش قابل توجهی از کار به صورت آفلاین انجام می شود و نسبتا آرام به پایان می رسد. در مقابل، محاسبات آنلاین به دلیل اینکه کاربر منتظر نمایش نتایج می باشد، نیازمند محاسبه سریع تا جایی که ممکن است می باشد.
-
-Yandex.Metrica دارای یک سیستم تخصصی برای aggregate کردن داده ها به اسم Metrage می باشد، که برای اکثریت گزارش های مورد استفاده قرار می گیرد. شروع سال 2009، Yandex.Metrica همچنین از یک دیتابیس تخصصی OLAP برای داده های non-aggregate به نام OLAPServer، که قبلا برای ساخت گزارش ها استفاده می شد، استفاده می کرد. OLAPServer به خوبی روی داده های Non-Aggregate کار می کرد، اما محدودیت های بسیار زیادی داشت که اجازه ی استفاده در تمام گزارش های دلخواه را نمی داد. مواردی از قبیل عدم پشتیبانی از data type ها (فقط عدد)، و عدم توانایی در بروزرسانی افزایشی داده ها به صورت real-time (این کار فقط به rewrite کردن داده ها به صورت روزانه امکام پذیر بود). OLAPServer یک مدیریت دیتابیس نبود اما یک دیتابیس تخصصی بود.
-
-برای حذف محدودیت های OLAPServer و حل مشکلات کار با داده های Non-Aggregate برای تمام گزارش ها، ما مدیریت دیتابیس ClicHouse را توسعه دادیم..
-
-
-[مقاله اصلی](https://clickhouse.yandex/docs/fa/introduction/ya_metrika_task/)
diff --git a/docs/ru/introduction/ya_metrika_task.md b/docs/ru/introduction/ya_metrika_task.md
deleted file mode 100644
index c7e22346ae5..00000000000
--- a/docs/ru/introduction/ya_metrika_task.md
+++ /dev/null
@@ -1,49 +0,0 @@
-# Постановка задачи в Яндекс.Метрике
-
-ClickHouse изначально разрабатывался для обеспечения работы [Яндекс.Метрики](https://metrika.yandex.ru/), [второй крупнейшей в мире](http://w3techs.com/technologies/overview/traffic_analysis/all) платформы для веб аналитики, и продолжает быть её ключевым компонентом. При более 13 триллионах записей в базе данных и более 20 миллиардах событий в сутки, ClickHouse позволяет генерировать индивидуально настроенные отчёты на лету напрямую из неагрегированных данных. Данная статья вкратце демонстрирует какие цели исторически стояли перед ClickHouse на ранних этапах его развития.
-
-Яндекс.Метрика на лету строит индивидуальные отчёты на основе хитов и визитов, с периодом и произвольными сегментами, задаваемыми конечным пользователем. Часто требуется построение сложных агрегатов, например числа уникальных пользователей. Новые данные для построения отчета поступают в реальном времени.
-
-На апрель 2014, в Яндекс.Метрику поступало около 12 миллиардов событий (показов страниц и кликов мыши) ежедневно. Все эти события должны быть сохранены для возможности строить произвольные отчёты. Один запрос может потребовать просканировать миллионы строк за время не более нескольких сотен миллисекунд, или сотни миллионов строк за время не более нескольких секунд.
-
-## Использование в Яндекс.Метрике и других отделах Яндекса
-
-В Яндекс.Метрике ClickHouse используется для нескольких задач.
-Основная задача - построение отчётов в режиме онлайн по неагрегированным данным. Для решения этой задачи используется кластер из 374 серверов, хранящий более 20,3 триллионов строк в базе данных. Объём сжатых данных, без учёта дублирования и репликации, составляет около 2 ПБ. Объём несжатых данных (в формате tsv) составил бы, приблизительно, 17 ПБ.
-
-Также ClickHouse используется:
-
-- для хранения данных Вебвизора;
-- для обработки промежуточных данных;
-- для построения глобальных отчётов Аналитиками;
-- для выполнения запросов в целях отладки движка Метрики;
-- для анализа логов работы API и пользовательского интерфейса.
-
-ClickHouse имеет более десятка инсталляций в других отделах Яндекса: в Вертикальных сервисах, Маркете, Директе, БК, Бизнес аналитике, Мобильной разработке, AdFox, Персональных сервисах и т п.
-
-## Агрегированные и неагрегированные данные
-
-Существует мнение, что для того, чтобы эффективно считать статистику, данные нужно агрегировать, так как это позволяет уменьшить объём данных.
-
-Но агрегированные данные являются очень ограниченным решением, по следующим причинам:
-
-- вы должны заранее знать перечень отчётов, необходимых пользователю;
-- то есть, пользователь не может построить произвольный отчёт;
-- при агрегации по большому количеству ключей, объём данных не уменьшается и агрегация бесполезна;
-- при большом количестве отчётов, получается слишком много вариантов агрегации (комбинаторный взрыв);
-- при агрегации по ключам высокой кардинальности (например, URL) объём данных уменьшается не сильно (менее чем в 2 раза);
-- из-за этого, объём данных при агрегации может не уменьшиться, а вырасти;
-- пользователи будут смотреть не все отчёты, которые мы для них посчитаем - то есть, большая часть вычислений бесполезна;
-- возможно нарушение логической целостности данных для разных агрегаций;
-
-Как видно, если ничего не агрегировать, и работать с неагрегированными данными, то это даже может уменьшить объём вычислений.
-
-Впрочем, при агрегации, существенная часть работы выносится в оффлайне, и её можно делать сравнительно спокойно. Для сравнения, при онлайн вычислениях, вычисления надо делать так быстро, как это возможно, так как именно в момент вычислений пользователь ждёт результата.
-
-В Яндекс.Метрике есть специализированная система для агрегированных данных - Metrage, на основе которой работает большинство отчётов.
-Также в Яндекс.Метрике с 2009 года использовалась специализированная OLAP БД для неагрегированных данных - OLAPServer, на основе которой раньше работал конструктор отчётов.
-OLAPServer хорошо подходил для неагрегированных данных, но содержал много ограничений, не позволяющих использовать его для всех отчётов так, как хочется: отсутствие поддержки типов данных (только числа), невозможность инкрементального обновления данных в реальном времени (только перезаписью данных за сутки). OLAPServer не является СУБД, а является специализированной БД.
-
-Чтобы снять ограничения OLAPServer-а и решить задачу работы с неагрегированными данными для всех отчётов, разработана СУБД ClickHouse.
-
-[Оригинальная статья](https://clickhouse.yandex/docs/ru/introduction/ya_metrika_task/)
diff --git a/docs/zh/introduction/ya_metrika_task.md b/docs/zh/introduction/ya_metrika_task.md
deleted file mode 100644
index da4b18826e0..00000000000
--- a/docs/zh/introduction/ya_metrika_task.md
+++ /dev/null
@@ -1,50 +0,0 @@
-# Yandex.Metrica的使用案例
-
-ClickHouse最初是为 [Yandex.Metrica](https://metrica.yandex.com/) [世界第二大Web分析平台](http://w3techs.com/technologies/overview/traffic_analysis/all) 而开发的。多年来一直作为该系统的核心组件被该系统持续使用着。目前为止,该系统在ClickHouse中有超过13万亿条记录,并且每天超过200多亿个事件被处理。它允许直接从原始数据中动态查询并生成报告。本文简要介绍了ClickHouse在其早期发展阶段的目标。
-
-Yandex.Metrica基于用户定义的字段,对实时访问、连接会话,生成实时的统计报表。这种需求往往需要复杂聚合方式,比如对访问用户进行去重。构建报表的数据,是实时接收存储的新数据。
-
-截至2014年4月,Yandex.Metrica每天跟踪大约120亿个事件(用户的点击和浏览)。为了可以创建自定义的报表,我们必须存储全部这些事件。同时,这些查询可能需要在几百毫秒内扫描数百万行的数据,或在几秒内扫描数亿行的数据。
-
-## Yandex.Metrica以及其他Yandex服务的使用案例
-
-在Yandex.Metrica中,ClickHouse被用于多个场景中。
-它的主要任务是使用原始数据在线的提供各种数据报告。它使用374台服务器的集群,存储了20.3万亿行的数据。在去除重复与副本数据的情况下,压缩后的数据达到了2PB。未压缩前(TSV格式)它大概有17PB。
-
-ClickHouse还被使用在:
-
-- 存储来自Yandex.Metrica回话重放数据。
-- 处理中间数据
-- 与Analytics一起构建全球报表。
-- 为调试Yandex.Metrica引擎运行查询
-- 分析来自API和用户界面的日志数据
-
-ClickHouse在其他Yandex服务中至少有12个安装:search verticals, Market, Direct, business analytics, mobile development, AdFox, personal services等。
-
-## 聚合与非聚合数据
-
-有一种流行的观点认为,想要有效的计算统计数据,必须要聚合数据,因为聚合将降低数据量。
-
-但是数据聚合是一个有诸多限制的解决方案,例如:
-
-- 你必须提前知道用户定义的报表的字段列表
-- 用户无法自定义报表
-- 当聚合条件过多时,可能不会减少数据,聚合是无用的。
-- 存在大量报表时,有太多的聚合变化(组合爆炸)
-- 当聚合条件有非常大的基数时(如:url),数据量没有太大减少(少于两倍)
-- 聚合的数据量可能会增长而不是收缩
-- 用户不会查看我们为他生成的所有报告,大部分计算将是无用的
-- 各种聚合可能违背了数据的逻辑完整性
-
-如果我们直接使用非聚合数据而不进行任何聚合时,我们的计算量可能是减少的。
-
-然而,相对于聚合中很大一部分工作被离线完成,在线计算需要尽快的完成计算,因为用户在等待结果。
-
-Yandex.Metrica 有一个专门用于聚合数据的系统,称为Metrage,它可以用作大部分报表。
-从2009年开始,Yandex.Metrica还为非聚合数据使用专门的OLAP数据库,称为OLAPServer,它以前用于报表构建系统。
-OLAPServer可以很好的工作在非聚合数据上,但是它有诸多限制,导致无法根据需要将其用于所有报表中。如,缺少对数据类型的支持(只支持数据),无法实时增量的更新数据(只能通过每天重写数据完成)。OLAPServer不是一个数据库管理系统,它只是一个数据库。
-
-为了消除OLAPServer的这些局限性,解决所有报表使用非聚合数据的问题,我们开发了ClickHouse数据库管理系统。
-
-
-[来源文章](https://clickhouse.yandex/docs/en/introduction/ya_metrika_task/)
From 4a7f22040991b4054a4e2bee4cf2504e688d6d8d Mon Sep 17 00:00:00 2001
From: Ivan Blinkov
Date: Wed, 28 Aug 2019 11:52:07 +0300
Subject: [PATCH 010/312] Refactor Yandex.Metrica dataset description
---
.../example_datasets/metrica.md | 30 ++++++++++++-------
1 file changed, 19 insertions(+), 11 deletions(-)
diff --git a/docs/en/getting_started/example_datasets/metrica.md b/docs/en/getting_started/example_datasets/metrica.md
index 75741ba0b54..e3a9556adb4 100644
--- a/docs/en/getting_started/example_datasets/metrica.md
+++ b/docs/en/getting_started/example_datasets/metrica.md
@@ -1,9 +1,13 @@
# Anonymized Yandex.Metrica Data
-Dataset consists of two tables containing anonymized data about hits (`hits_v1`) and visits (`visits_v1`) of Yandex.Metrica. Each of the tables can be downloaded as a compressed `tsv.xz` file or as prepared partitions. In addition to that, an extended version of the `hits` table containing 100 million rows is available as TSV at `https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_100m_obfuscated_v1.tsv.xz` and as prepared partitions at `https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_100m_obfuscated_v1.tar.xz`.
+The dataset consists of two tables containing anonymized data about hits (`hits_v1`) and visits (`visits_v1`) of Yandex.Metrica. You can read more about Yandex.Metrica in the [ClickHouse history](../../introduction/history.md) section.
+
+Either of the tables can be downloaded as a compressed `tsv.xz` file or as prepared partitions. In addition, an extended version of the `hits` table containing 100 million rows is available as TSV at https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_100m_obfuscated_v1.tsv.xz and as prepared partitions at https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_100m_obfuscated_v1.tar.xz.
## Obtaining Tables from Prepared Partitions
-**Download and import hits:**
-```bash
+
+Download and import the hits table:
+
+``` bash
curl -O https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_v1.tar
tar xvf hits_v1.tar -C /var/lib/clickhouse # path to ClickHouse data directory
# check permissions on unpacked data, fix if required
@@ -11,8 +15,9 @@ sudo service clickhouse-server restart
clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1"
```
-**Download and import visits:**
-```bash
+Download and import visits:
+
+``` bash
curl -O https://clickhouse-datasets.s3.yandex.net/visits/partitions/visits_v1.tar
tar xvf visits_v1.tar -C /var/lib/clickhouse # path to ClickHouse data directory
# check permissions on unpacked data, fix if required
@@ -20,9 +25,11 @@ sudo service clickhouse-server restart
clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1"
```
-## Obtaining Tables from Compressed tsv-file
-**Download and import hits from compressed tsv-file**
-```bash
+## Obtaining Tables from Compressed TSV File
+
+Download and import hits from the compressed TSV file:
+
+``` bash
curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
# now create table
clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets"
@@ -34,8 +41,9 @@ clickhouse-client --query "OPTIMIZE TABLE datasets.hits_v1 FINAL"
clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1"
```
-**Download and import visits from compressed tsv-file**
-```bash
+Download and import visits from the compressed TSV file:
+
+``` bash
curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
# now create table
clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets"
@@ -48,4 +56,4 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1"
```
## Queries
-Examples of queries to these tables (they are named `test.hits` and `test.visits`) can be found among [stateful tests](https://github.com/yandex/ClickHouse/tree/master/dbms/tests/queries/1_stateful) and in some [performance tests](https://github.com/yandex/ClickHouse/tree/master/dbms/tests/performance/test_hits) of ClickHouse.
+Examples of queries to these tables (they are named `test.hits` and `test.visits`) can be found among [stateful tests](https://github.com/yandex/ClickHouse/tree/master/dbms/tests/queries/1_stateful) of ClickHouse.
From a95105453f4a4b176edf198294a5678ccaee80da Mon Sep 17 00:00:00 2001
From: Ivan Blinkov
Date: Wed, 28 Aug 2019 14:14:42 +0300
Subject: [PATCH 011/312] WIP on rewriting tutorial
---
.../example_datasets/metrica.md | 7 +-
docs/en/getting_started/install.md | 6 +-
docs/en/getting_started/tutorial.md | 577 +++++++++++++-----
3 files changed, 442 insertions(+), 148 deletions(-)
diff --git a/docs/en/getting_started/example_datasets/metrica.md b/docs/en/getting_started/example_datasets/metrica.md
index e3a9556adb4..b26f26f5acb 100644
--- a/docs/en/getting_started/example_datasets/metrica.md
+++ b/docs/en/getting_started/example_datasets/metrica.md
@@ -55,5 +55,8 @@ clickhouse-client --query "OPTIMIZE TABLE datasets.visits_v1 FINAL"
clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1"
```
-## Queries
-Examples of queries to these tables (they are named `test.hits` and `test.visits`) can be found among [stateful tests](https://github.com/yandex/ClickHouse/tree/master/dbms/tests/queries/1_stateful) of ClickHouse.
+## Example Queries
+
+The [ClickHouse tutorial](../tutorial.md) is based on the Yandex.Metrica dataset, and the recommended way to get started with this dataset is simply to go through the tutorial.
+
+Additional examples of queries to these tables can be found among the [stateful tests](https://github.com/yandex/ClickHouse/tree/master/dbms/tests/queries/1_stateful) of ClickHouse (they are named `test.hits` and `test.visits` there).
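+
+For instance, a quick sanity check on the imported data might look like this (the column names follow the `visits_v1` schema used in the tutorial):
+
+``` bash
+clickhouse-client --query "
+    SELECT StartURL AS URL, avg(Duration) AS AvgDuration
+    FROM datasets.visits_v1
+    GROUP BY URL
+    ORDER BY AvgDuration DESC
+    LIMIT 10"
+```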
diff --git a/docs/en/getting_started/install.md b/docs/en/getting_started/install.md
index 779abba905b..6bb7a17d340 100644
--- a/docs/en/getting_started/install.md
+++ b/docs/en/getting_started/install.md
@@ -2,14 +2,16 @@
## System Requirements
-ClickHouse can run on any Linux, FreeBSD or Mac OS X with x86\_64 CPU architecture.
+ClickHouse can run on any Linux, FreeBSD or Mac OS X with x86\_64, AArch64 or PowerPC64LE CPU architecture.
-Though pre-built binaries are typically compiled to leverage SSE 4.2 instruction set, so unless otherwise stated usage of CPU that supports it becomes an additional system requirement. Here's the command to check if current CPU has support for SSE 4.2:
+Pre-built binaries are typically compiled for x86\_64 and leverage the SSE 4.2 instruction set, so unless otherwise stated, a CPU that supports it is an additional system requirement. Here's the command to check whether the current CPU supports SSE 4.2:
``` bash
$ grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not supported"
```
+To run ClickHouse on processors that do not support SSE 4.2 or have AArch64 or PowerPC64LE architecture, you should [build ClickHouse from sources](#from-sources) with proper configuration adjustments.
+
## Available Installation Options
### From DEB Packages
diff --git a/docs/en/getting_started/tutorial.md b/docs/en/getting_started/tutorial.md
index 48faf5bd327..222d6d7517b 100644
--- a/docs/en/getting_started/tutorial.md
+++ b/docs/en/getting_started/tutorial.md
@@ -1,31 +1,48 @@
# ClickHouse Tutorial
-## Setup
+## What to Expect from This Tutorial?
-Let's get started with sample dataset from open sources. We will use USA civil flights data from 1987 to 2015. It's hard to call this sample a Big Data (contains 166 millions rows, 63 Gb of uncompressed data) but this allows us to quickly get to work. Dataset is available for download [here](https://yadi.sk/d/pOZxpa42sDdgm). Also you may download it from the original datasource [as described here](example_datasets/ontime.md).
+By going through this tutorial, you'll learn how to set up a basic ClickHouse cluster; it'll be small, but fault-tolerant and scalable. We will use one of the example datasets to fill it with data and execute some demo queries.
-At first we will deploy ClickHouse to a single server. Later we will also review the process of deployment to a cluster with support for sharding and replication.
+## Single Node Setup
-ClickHouse is usually installed from [deb](index.md#from-deb-packages) or [rpm](index.md#from-rpm-packages) packages, but there are [alternatives](index.md#from-docker-image) for the operating systems that do no support them. What do we have in those packages:
+To postpone the complexities of a distributed environment, we'll start by deploying ClickHouse on a single server or virtual machine. ClickHouse is usually installed from [deb](index.md#from-deb-packages) or [rpm](index.md#from-rpm-packages) packages, but there are [alternatives](index.md#from-docker-image) for the operating systems that do not support them.
+
+For example, suppose you have chosen `deb` packages and executed:
+``` bash
+sudo apt-get install dirmngr
+sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv E0C56BD4
+
+echo "deb http://repo.yandex.ru/clickhouse/deb/stable/ main/" | sudo tee /etc/apt/sources.list.d/clickhouse.list
+sudo apt-get update
+
+sudo apt-get install -y clickhouse-server clickhouse-client
+```
+
+What do we have in the packages that got installed:
* `clickhouse-client` package contains [clickhouse-client](../interfaces/cli.md) application, interactive ClickHouse console client.
* `clickhouse-common` package contains a ClickHouse executable file.
* `clickhouse-server` package contains configuration files to run ClickHouse as a server.
-Server config files are located in /etc/clickhouse-server/. Before getting to work please notice the `path` element in config. Path dtermines the location for data storage. It's not really handy to directly edit `config.xml` file considering package updates. Recommended way to override the config elements is to create [files in config.d directory](../operations/configuration_files.md). Also you may want to [set up access rights](../operations/access_rights.md) early on.
+Server config files are located in `/etc/clickhouse-server/`. Before going further, please notice the `<path>` element in `config.xml`. It determines the location for data storage, so it should be placed on a volume with large disk capacity; the default value is `/var/lib/clickhouse/`. Directly editing `config.xml` is not really handy, considering it might get rewritten on future package updates. The recommended way to override config elements is to create [files in the config.d directory](../operations/configuration_files.md), which serve as "patches" to config.xml.
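+
+A minimal sketch of such a "patch" (the file name is arbitrary; the `<path>` value is just an example):
+
+``` bash
+sudo tee /etc/clickhouse-server/config.d/data-path.xml <<'EOF'
+<yandex>
+    <path>/mnt/data/clickhouse/</path>
+</yandex>
+EOF
+```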
+
+As you might have noticed, `clickhouse-server` is not launched automatically after package installation. It won't be automatically restarted after updates either. The way you start the server depends on your init system; usually it's:
-`clickhouse-server` won't be launched automatically after package installation. It won't be automatically restarted after updates either. Start the server with:
``` bash
sudo service clickhouse-server start
```
+or
-The default location for server logs is `/var/log/clickhouse-server/`.
+``` bash
+sudo /etc/init.d/clickhouse-server start
+```
-Server is ready to handle client connections once `Ready for connections` message was logged.
+The default location for server logs is `/var/log/clickhouse-server/`. The server is ready to handle client connections once the `Ready for connections` message has been logged.
-Use `clickhouse-client` to connect to the server.
+Once `clickhouse-server` is up and running, we can use `clickhouse-client` to connect to the server and run some test queries like `SELECT 'Hello, world!';`.
-Tips for clickhouse-client
+Quick tips for clickhouse-client
Interactive mode:
``` bash
clickhouse-client
@@ -42,160 +59,432 @@ Run queries in batch-mode:
``` bash
clickhouse-client --query='SELECT 1'
echo 'SELECT 1' | clickhouse-client
+clickhouse-client <<< 'SELECT 1'
```
-Insert data from file of a specified format:
+Insert data from a file in the specified format:
``` bash
-clickhouse-client --query='INSERT INTO table VALUES' < data.txt
-clickhouse-client --query='INSERT INTO table FORMAT TabSeparated' < data.tsv
+clickhouse-client --query='INSERT INTO table VALUES' < data.txt
+clickhouse-client --query='INSERT INTO table FORMAT TabSeparated' < data.tsv
```
-## Create Table for Sample Dataset
-Create table query
-``` bash
-$ clickhouse-client --multiline
-ClickHouse client version 0.0.53720.
-Connecting to localhost:9000.
-Connected to ClickHouse server version 0.0.53720.
+## Import Sample Dataset
-:) CREATE TABLE ontime
+Now it's time to fill our ClickHouse server with some sample data. In this tutorial, we'll use anonymized data from Yandex.Metrica, the first service to run ClickHouse in production, way before it became open-source (more on that in the [history section](../introduction/history.md)). There are [multiple ways to import the Yandex.Metrica dataset](example_datasets/metrica.md), and for the sake of the tutorial we'll go with the most realistic one.
+
+### Download and Extract Table Data
+
+``` bash
+curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
+curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
+```
+
+The extracted files are about 10GB in size.
+
+### Create Tables
+
+Tables are logically grouped into "databases". There's a `default` database, but we'll create a new one named `tutorial`:
+
+``` bash
+clickhouse-client --query "CREATE DATABASE IF NOT EXISTS tutorial"
+```
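+
+Optionally, verify that it appeared:
+
+``` bash
+clickhouse-client --query "SHOW DATABASES"
+```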
+
+The syntax for creating tables is much more complicated than for databases (see the [reference](../query_language/create.md)). In general, a `CREATE TABLE` statement has to specify three key things:
+
+1. The name of the table to create.
+2. The table schema, i.e. the list of columns and their [data types](../data_types/index.md).
+3. The [table engine](../operations/table_engines/index.md) and its settings, which determine all the details of how queries to this table will be physically executed.
+
+Yandex.Metrica is a web analytics service, and the sample dataset doesn't cover its full functionality, so there are only two tables to create:
+
+* `hits` is a table with each action done by all users on all websites covered by the service.
+* `visits` is a table that contains pre-built sessions instead of individual actions.
+
+Let's look at and execute the real `CREATE TABLE` queries for these tables:
+
+``` sql
+CREATE TABLE tutorial.hits_v1
(
- Year UInt16,
- Quarter UInt8,
- Month UInt8,
- DayofMonth UInt8,
- DayOfWeek UInt8,
- FlightDate Date,
- UniqueCarrier FixedString(7),
- AirlineID Int32,
- Carrier FixedString(2),
- TailNum String,
- FlightNum String,
- OriginAirportID Int32,
- OriginAirportSeqID Int32,
- OriginCityMarketID Int32,
- Origin FixedString(5),
- OriginCityName String,
- OriginState FixedString(2),
- OriginStateFips String,
- OriginStateName String,
- OriginWac Int32,
- DestAirportID Int32,
- DestAirportSeqID Int32,
- DestCityMarketID Int32,
- Dest FixedString(5),
- DestCityName String,
- DestState FixedString(2),
- DestStateFips String,
- DestStateName String,
- DestWac Int32,
- CRSDepTime Int32,
- DepTime Int32,
- DepDelay Int32,
- DepDelayMinutes Int32,
- DepDel15 Int32,
- DepartureDelayGroups String,
- DepTimeBlk String,
- TaxiOut Int32,
- WheelsOff Int32,
- WheelsOn Int32,
- TaxiIn Int32,
- CRSArrTime Int32,
- ArrTime Int32,
- ArrDelay Int32,
- ArrDelayMinutes Int32,
- ArrDel15 Int32,
- ArrivalDelayGroups Int32,
- ArrTimeBlk String,
- Cancelled UInt8,
- CancellationCode FixedString(1),
- Diverted UInt8,
- CRSElapsedTime Int32,
- ActualElapsedTime Int32,
- AirTime Int32,
- Flights Int32,
- Distance Int32,
- DistanceGroup UInt8,
- CarrierDelay Int32,
- WeatherDelay Int32,
- NASDelay Int32,
- SecurityDelay Int32,
- LateAircraftDelay Int32,
- FirstDepTime String,
- TotalAddGTime String,
- LongestAddGTime String,
- DivAirportLandings String,
- DivReachedDest String,
- DivActualElapsedTime String,
- DivArrDelay String,
- DivDistance String,
- Div1Airport String,
- Div1AirportID Int32,
- Div1AirportSeqID Int32,
- Div1WheelsOn String,
- Div1TotalGTime String,
- Div1LongestGTime String,
- Div1WheelsOff String,
- Div1TailNum String,
- Div2Airport String,
- Div2AirportID Int32,
- Div2AirportSeqID Int32,
- Div2WheelsOn String,
- Div2TotalGTime String,
- Div2LongestGTime String,
- Div2WheelsOff String,
- Div2TailNum String,
- Div3Airport String,
- Div3AirportID Int32,
- Div3AirportSeqID Int32,
- Div3WheelsOn String,
- Div3TotalGTime String,
- Div3LongestGTime String,
- Div3WheelsOff String,
- Div3TailNum String,
- Div4Airport String,
- Div4AirportID Int32,
- Div4AirportSeqID Int32,
- Div4WheelsOn String,
- Div4TotalGTime String,
- Div4LongestGTime String,
- Div4WheelsOff String,
- Div4TailNum String,
- Div5Airport String,
- Div5AirportID Int32,
- Div5AirportSeqID Int32,
- Div5WheelsOn String,
- Div5TotalGTime String,
- Div5LongestGTime String,
- Div5WheelsOff String,
- Div5TailNum String
+ `WatchID` UInt64,
+ `JavaEnable` UInt8,
+ `Title` String,
+ `GoodEvent` Int16,
+ `EventTime` DateTime,
+ `EventDate` Date,
+ `CounterID` UInt32,
+ `ClientIP` UInt32,
+ `ClientIP6` FixedString(16),
+ `RegionID` UInt32,
+ `UserID` UInt64,
+ `CounterClass` Int8,
+ `OS` UInt8,
+ `UserAgent` UInt8,
+ `URL` String,
+ `Referer` String,
+ `URLDomain` String,
+ `RefererDomain` String,
+ `Refresh` UInt8,
+ `IsRobot` UInt8,
+ `RefererCategories` Array(UInt16),
+ `URLCategories` Array(UInt16),
+ `URLRegions` Array(UInt32),
+ `RefererRegions` Array(UInt32),
+ `ResolutionWidth` UInt16,
+ `ResolutionHeight` UInt16,
+ `ResolutionDepth` UInt8,
+ `FlashMajor` UInt8,
+ `FlashMinor` UInt8,
+ `FlashMinor2` String,
+ `NetMajor` UInt8,
+ `NetMinor` UInt8,
+ `UserAgentMajor` UInt16,
+ `UserAgentMinor` FixedString(2),
+ `CookieEnable` UInt8,
+ `JavascriptEnable` UInt8,
+ `IsMobile` UInt8,
+ `MobilePhone` UInt8,
+ `MobilePhoneModel` String,
+ `Params` String,
+ `IPNetworkID` UInt32,
+ `TraficSourceID` Int8,
+ `SearchEngineID` UInt16,
+ `SearchPhrase` String,
+ `AdvEngineID` UInt8,
+ `IsArtifical` UInt8,
+ `WindowClientWidth` UInt16,
+ `WindowClientHeight` UInt16,
+ `ClientTimeZone` Int16,
+ `ClientEventTime` DateTime,
+ `SilverlightVersion1` UInt8,
+ `SilverlightVersion2` UInt8,
+ `SilverlightVersion3` UInt32,
+ `SilverlightVersion4` UInt16,
+ `PageCharset` String,
+ `CodeVersion` UInt32,
+ `IsLink` UInt8,
+ `IsDownload` UInt8,
+ `IsNotBounce` UInt8,
+ `FUniqID` UInt64,
+ `HID` UInt32,
+ `IsOldCounter` UInt8,
+ `IsEvent` UInt8,
+ `IsParameter` UInt8,
+ `DontCountHits` UInt8,
+ `WithHash` UInt8,
+ `HitColor` FixedString(1),
+ `UTCEventTime` DateTime,
+ `Age` UInt8,
+ `Sex` UInt8,
+ `Income` UInt8,
+ `Interests` UInt16,
+ `Robotness` UInt8,
+ `GeneralInterests` Array(UInt16),
+ `RemoteIP` UInt32,
+ `RemoteIP6` FixedString(16),
+ `WindowName` Int32,
+ `OpenerName` Int32,
+ `HistoryLength` Int16,
+ `BrowserLanguage` FixedString(2),
+ `BrowserCountry` FixedString(2),
+ `SocialNetwork` String,
+ `SocialAction` String,
+ `HTTPError` UInt16,
+ `SendTiming` Int32,
+ `DNSTiming` Int32,
+ `ConnectTiming` Int32,
+ `ResponseStartTiming` Int32,
+ `ResponseEndTiming` Int32,
+ `FetchTiming` Int32,
+ `RedirectTiming` Int32,
+ `DOMInteractiveTiming` Int32,
+ `DOMContentLoadedTiming` Int32,
+ `DOMCompleteTiming` Int32,
+ `LoadEventStartTiming` Int32,
+ `LoadEventEndTiming` Int32,
+ `NSToDOMContentLoadedTiming` Int32,
+ `FirstPaintTiming` Int32,
+ `RedirectCount` Int8,
+ `SocialSourceNetworkID` UInt8,
+ `SocialSourcePage` String,
+ `ParamPrice` Int64,
+ `ParamOrderID` String,
+ `ParamCurrency` FixedString(3),
+ `ParamCurrencyID` UInt16,
+ `GoalsReached` Array(UInt32),
+ `OpenstatServiceName` String,
+ `OpenstatCampaignID` String,
+ `OpenstatAdID` String,
+ `OpenstatSourceID` String,
+ `UTMSource` String,
+ `UTMMedium` String,
+ `UTMCampaign` String,
+ `UTMContent` String,
+ `UTMTerm` String,
+ `FromTag` String,
+ `HasGCLID` UInt8,
+ `RefererHash` UInt64,
+ `URLHash` UInt64,
+ `CLID` UInt32,
+ `YCLID` UInt64,
+ `ShareService` String,
+ `ShareURL` String,
+ `ShareTitle` String,
+ `ParsedParams` Nested(
+ Key1 String,
+ Key2 String,
+ Key3 String,
+ Key4 String,
+ Key5 String,
+ ValueDouble Float64),
+ `IslandID` FixedString(16),
+ `RequestNum` UInt32,
+ `RequestTry` UInt8
)
-ENGINE = MergeTree(FlightDate, (Year, FlightDate), 8192);
+ENGINE = MergeTree()
+PARTITION BY toYYYYMM(EventDate)
+ORDER BY (CounterID, EventDate, intHash32(UserID))
+SAMPLE BY intHash32(UserID)
+SETTINGS index_granularity = 8192
```
-
-Now we have a table of [MergeTree type](../operations/table_engines/mergetree.md). MergeTree table engine family is recommended for usage in production. Tables of this kind has a primary key used for incremental sort of table data. This allows fast execution of queries in ranges of a primary key.
+``` sql
+CREATE TABLE tutorial.visits_v1
+(
+ `CounterID` UInt32,
+ `StartDate` Date,
+ `Sign` Int8,
+ `IsNew` UInt8,
+ `VisitID` UInt64,
+ `UserID` UInt64,
+ `StartTime` DateTime,
+ `Duration` UInt32,
+ `UTCStartTime` DateTime,
+ `PageViews` Int32,
+ `Hits` Int32,
+ `IsBounce` UInt8,
+ `Referer` String,
+ `StartURL` String,
+ `RefererDomain` String,
+ `StartURLDomain` String,
+ `EndURL` String,
+ `LinkURL` String,
+ `IsDownload` UInt8,
+ `TraficSourceID` Int8,
+ `SearchEngineID` UInt16,
+ `SearchPhrase` String,
+ `AdvEngineID` UInt8,
+ `PlaceID` Int32,
+ `RefererCategories` Array(UInt16),
+ `URLCategories` Array(UInt16),
+ `URLRegions` Array(UInt32),
+ `RefererRegions` Array(UInt32),
+ `IsYandex` UInt8,
+ `GoalReachesDepth` Int32,
+ `GoalReachesURL` Int32,
+ `GoalReachesAny` Int32,
+ `SocialSourceNetworkID` UInt8,
+ `SocialSourcePage` String,
+ `MobilePhoneModel` String,
+ `ClientEventTime` DateTime,
+ `RegionID` UInt32,
+ `ClientIP` UInt32,
+ `ClientIP6` FixedString(16),
+ `RemoteIP` UInt32,
+ `RemoteIP6` FixedString(16),
+ `IPNetworkID` UInt32,
+ `SilverlightVersion3` UInt32,
+ `CodeVersion` UInt32,
+ `ResolutionWidth` UInt16,
+ `ResolutionHeight` UInt16,
+ `UserAgentMajor` UInt16,
+ `UserAgentMinor` UInt16,
+ `WindowClientWidth` UInt16,
+ `WindowClientHeight` UInt16,
+ `SilverlightVersion2` UInt8,
+ `SilverlightVersion4` UInt16,
+ `FlashVersion3` UInt16,
+ `FlashVersion4` UInt16,
+ `ClientTimeZone` Int16,
+ `OS` UInt8,
+ `UserAgent` UInt8,
+ `ResolutionDepth` UInt8,
+ `FlashMajor` UInt8,
+ `FlashMinor` UInt8,
+ `NetMajor` UInt8,
+ `NetMinor` UInt8,
+ `MobilePhone` UInt8,
+ `SilverlightVersion1` UInt8,
+ `Age` UInt8,
+ `Sex` UInt8,
+ `Income` UInt8,
+ `JavaEnable` UInt8,
+ `CookieEnable` UInt8,
+ `JavascriptEnable` UInt8,
+ `IsMobile` UInt8,
+ `BrowserLanguage` UInt16,
+ `BrowserCountry` UInt16,
+ `Interests` UInt16,
+ `Robotness` UInt8,
+ `GeneralInterests` Array(UInt16),
+ `Params` Array(String),
+ `Goals` Nested(
+ ID UInt32,
+ Serial UInt32,
+ EventTime DateTime,
+ Price Int64,
+ OrderID String,
+ CurrencyID UInt32),
+ `WatchIDs` Array(UInt64),
+ `ParamSumPrice` Int64,
+ `ParamCurrency` FixedString(3),
+ `ParamCurrencyID` UInt16,
+ `ClickLogID` UInt64,
+ `ClickEventID` Int32,
+ `ClickGoodEvent` Int32,
+ `ClickEventTime` DateTime,
+ `ClickPriorityID` Int32,
+ `ClickPhraseID` Int32,
+ `ClickPageID` Int32,
+ `ClickPlaceID` Int32,
+ `ClickTypeID` Int32,
+ `ClickResourceID` Int32,
+ `ClickCost` UInt32,
+ `ClickClientIP` UInt32,
+ `ClickDomainID` UInt32,
+ `ClickURL` String,
+ `ClickAttempt` UInt8,
+ `ClickOrderID` UInt32,
+ `ClickBannerID` UInt32,
+ `ClickMarketCategoryID` UInt32,
+ `ClickMarketPP` UInt32,
+ `ClickMarketCategoryName` String,
+ `ClickMarketPPName` String,
+ `ClickAWAPSCampaignName` String,
+ `ClickPageName` String,
+ `ClickTargetType` UInt16,
+ `ClickTargetPhraseID` UInt64,
+ `ClickContextType` UInt8,
+ `ClickSelectType` Int8,
+ `ClickOptions` String,
+ `ClickGroupBannerID` Int32,
+ `OpenstatServiceName` String,
+ `OpenstatCampaignID` String,
+ `OpenstatAdID` String,
+ `OpenstatSourceID` String,
+ `UTMSource` String,
+ `UTMMedium` String,
+ `UTMCampaign` String,
+ `UTMContent` String,
+ `UTMTerm` String,
+ `FromTag` String,
+ `HasGCLID` UInt8,
+ `FirstVisit` DateTime,
+ `PredLastVisit` Date,
+ `LastVisit` Date,
+ `TotalVisits` UInt32,
+ `TraficSource` Nested(
+ ID Int8,
+ SearchEngineID UInt16,
+ AdvEngineID UInt8,
+ PlaceID UInt16,
+ SocialSourceNetworkID UInt8,
+ Domain String,
+ SearchPhrase String,
+ SocialSourcePage String),
+ `Attendance` FixedString(16),
+ `CLID` UInt32,
+ `YCLID` UInt64,
+ `NormalizedRefererHash` UInt64,
+ `SearchPhraseHash` UInt64,
+ `RefererDomainHash` UInt64,
+ `NormalizedStartURLHash` UInt64,
+ `StartURLDomainHash` UInt64,
+ `NormalizedEndURLHash` UInt64,
+ `TopLevelDomain` UInt64,
+ `URLScheme` UInt64,
+ `OpenstatServiceNameHash` UInt64,
+ `OpenstatCampaignIDHash` UInt64,
+ `OpenstatAdIDHash` UInt64,
+ `OpenstatSourceIDHash` UInt64,
+ `UTMSourceHash` UInt64,
+ `UTMMediumHash` UInt64,
+ `UTMCampaignHash` UInt64,
+ `UTMContentHash` UInt64,
+ `UTMTermHash` UInt64,
+ `FromHash` UInt64,
+ `WebVisorEnabled` UInt8,
+ `WebVisorActivity` UInt32,
+ `ParsedParams` Nested(
+ Key1 String,
+ Key2 String,
+ Key3 String,
+ Key4 String,
+ Key5 String,
+ ValueDouble Float64),
+ `Market` Nested(
+ Type UInt8,
+ GoalID UInt32,
+ OrderID String,
+ OrderPrice Int64,
+ PP UInt32,
+ DirectPlaceID UInt32,
+ DirectOrderID UInt32,
+ DirectBannerID UInt32,
+ GoodID String,
+ GoodName String,
+ GoodQuantity Int32,
+ GoodPrice Int64),
+ `IslandID` FixedString(16)
+)
+ENGINE = CollapsingMergeTree(Sign)
+PARTITION BY toYYYYMM(StartDate)
+ORDER BY (CounterID, StartDate, intHash32(UserID), VisitID)
+SAMPLE BY intHash32(UserID)
+SETTINGS index_granularity = 8192
+```
+
+You can execute those queries using the interactive mode of `clickhouse-client` (just launch it in a terminal without specifying a query in advance) or try some [alternative interface](../interfaces/index.md) if you want.
+
+As we can see, `hits_v1` uses the [basic MergeTree engine](../operations/table_engines/mergetree.md), while `visits_v1` uses the [Collapsing](../operations/table_engines/collapsingmergetree.md) variant.
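+
+To make the collapsing behavior concrete, here is a minimal sketch under an assumed toy table (`tutorial.collapse_demo` is hypothetical and not part of this dataset): a row with `Sign = -1` cancels an earlier row with the same sorting key, so aggregating with `Sign` returns the latest state even before background merges physically remove the pair.
+
+``` sql
+CREATE TABLE tutorial.collapse_demo
+(
+    `UserID` UInt64,
+    `PageViews` UInt32,
+    `Sign` Int8
+)
+ENGINE = CollapsingMergeTree(Sign)
+ORDER BY UserID;
+
+-- initial state of the visit
+INSERT INTO tutorial.collapse_demo VALUES (1, 5, 1);
+-- cancel the old state and write the updated one
+INSERT INTO tutorial.collapse_demo VALUES (1, 5, -1), (1, 6, 1);
+
+-- multiply by Sign so that cancelled rows net out
+SELECT UserID, sum(PageViews * Sign) AS PageViews
+FROM tutorial.collapse_demo
+GROUP BY UserID
+HAVING sum(Sign) > 0;
+```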
+
+### Import Data
+
+Data import into ClickHouse is done via an [INSERT INTO](../query_language/insert_into.md) query, as in many other SQL databases. However, data is usually provided in one of the [supported formats](../interfaces/formats.md) instead of the `VALUES` clause (which is also supported).
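+
+For instance, both of the following express the same small insert (the column list is purely illustrative; columns that are not listed receive their default values):
+
+``` sql
+INSERT INTO tutorial.hits_v1 (WatchID, UserID) VALUES (1, 42);
+
+INSERT INTO tutorial.hits_v1 (WatchID, UserID) FORMAT TSV
+1	42
+```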
+
+The files we downloaded earlier are in tab-separated format, so here's how to import them via the console client:
-## Load Data
``` bash
-xz -v -c -d < ontime.csv.xz | clickhouse-client --query="INSERT INTO ontime FORMAT CSV"
+clickhouse-client --query "INSERT INTO tutorial.hits_v1 FORMAT TSV" --max_insert_block_size=100000 < hits_v1.tsv
+clickhouse-client --query "INSERT INTO tutorial.visits_v1 FORMAT TSV" --max_insert_block_size=100000 < visits_v1.tsv
```
-ClickHouse INSERT query allows to load data in any [supported format](../interfaces/formats.md). Data load requires just O(1) RAM consumption. INSERT query can receive any data volume as input. It is strongly recommended to insert data with [not so small blocks](../introduction/performance/#performance-when-inserting-data). Notice that insert of blocks with size up to max_insert_block_size (= 1 048 576
- rows by default) is an atomic operation: data block will be inserted completely or not inserted at all. In case of disconnect during insert operation you may not know if the block was inserted successfully. To achieve exactly-once semantics ClickHouse supports idempotency for [replicated tables](../operations/table_engines/replication.md). This means that you may retry insert of the same data block (possibly on a different replicas) but this block will be inserted just once. Anyway in this guide we will load data from our localhost so we may not take care about data blocks generation and exactly-once semantics.
+ClickHouse has a lot of [settings to tune](../operations/settings.md) and one way to specify them in the console client is via arguments, as we can see with `--max_insert_block_size`. The easiest way to figure out what settings are available, what they mean and what the defaults are is to query the `system.settings` table:
-INSERT query into tables of MergeTree type is non-blocking (so does a SELECT query). You can execute SELECT queries right after of during insert operation.
+``` sql
+SELECT name, value, changed, description
+FROM system.settings
+WHERE name LIKE '%max_insert_b%'
+FORMAT TSV
-Our sample dataset is a bit not optimal. There are two reasons:
+max_insert_block_size 1048576 0 "The maximum block size for insertion, if we control the creation of blocks for insertion."
+```
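+
+Settings can also be changed for the current session with `SET`, which may be more convenient than client arguments when experimenting; for example, for the same setting as above:
+
+``` sql
+SET max_insert_block_size = 1048576;
+
+SELECT name, value, changed
+FROM system.settings
+WHERE name = 'max_insert_block_size';
+```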
-* The first is that String data type is used in cases when [Enum](../data_types/enum.md) or numeric type would fit better.
-* The second is that dataset contains redundant fields like Year, Quarter, Month, DayOfMonth, DayOfWeek. In fact a single FlightDate would be enough. Most likely they have been added to improve performance for other DBMS'es where DateTime handling functions may be not efficient.
+Optionally you can [OPTIMIZE](../query_language/misc/#misc_operations-optimize) the tables after import. Tables that are configured with an engine from the MergeTree family always merge data parts in the background to optimize data storage (or at least check if it makes sense). These queries just force the table engine to do storage optimization right now instead of some time later:
+``` bash
+clickhouse-client --query "OPTIMIZE TABLE tutorial.hits_v1 FINAL"
+clickhouse-client --query "OPTIMIZE TABLE tutorial.visits_v1 FINAL"
+```
-!!! note "Tip"
- ClickHouse [functions for manipulating DateTime fields](../query_language/functions/date_time_functions/) are well-optimized so such redundancy is not required. Anyway many columns is not a reason to worry, because ClickHouse is a [column-oriented DBMS](https://en.wikipedia.org/wiki/Column-oriented_DBMS). This allows you to have as many fields as you need with minimal impact on performance. Hundreds of columns in a table is totally fine for ClickHouse.
+This is an I/O- and CPU-intensive operation, so if the table constantly receives new data it's better to leave it alone and let merges run in the background.
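+
+If you are curious what the background merge activity looks like, the system tables can be queried at any time; a quick sketch, assuming the tables created in this tutorial:
+
+``` sql
+-- merges currently in progress, if any
+SELECT database, table, elapsed, progress
+FROM system.merges;
+
+-- number of active data parts per partition of hits_v1
+SELECT partition, count() AS parts
+FROM system.parts
+WHERE database = 'tutorial' AND table = 'hits_v1' AND active
+GROUP BY partition
+ORDER BY partition;
+```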
-## Querying the Sample Dataset
+Now we can check that the tables were imported successfully:
+``` bash
+clickhouse-client --query "SELECT COUNT(*) FROM tutorial.hits_v1"
+clickhouse-client --query "SELECT COUNT(*) FROM tutorial.visits_v1"
+```
+
+## Queries
TODO
From 431968d69b4ed39f8502ed01713295b007cb710a Mon Sep 17 00:00:00 2001
From: Nikolai Kochetov
Date: Fri, 15 Nov 2019 19:23:48 +0300
Subject: [PATCH 012/312] Processors and system.numbers
---
.../Storages/System/StorageSystemNumbers.cpp | 70 ++++++++++++-------
.../Storages/System/StorageSystemNumbers.h | 4 +-
2 files changed, 47 insertions(+), 27 deletions(-)
diff --git a/dbms/src/Storages/System/StorageSystemNumbers.cpp b/dbms/src/Storages/System/StorageSystemNumbers.cpp
index 2f155e22a11..aa457eee9ca 100644
--- a/dbms/src/Storages/System/StorageSystemNumbers.cpp
+++ b/dbms/src/Storages/System/StorageSystemNumbers.cpp
@@ -5,6 +5,9 @@
#include
#include
+#include <Processors/Sources/SourceWithProgress.h>
+#include <Processors/LimitTransform.h>
+#include <Processors/Pipe.h>
namespace DB
{
@@ -12,21 +15,16 @@ namespace DB
namespace
{
-class NumbersBlockInputStream : public IBlockInputStream
+class NumbersSource : public SourceWithProgress
{
public:
- NumbersBlockInputStream(UInt64 block_size_, UInt64 offset_, UInt64 step_)
- : block_size(block_size_), next(offset_), step(step_) {}
+ NumbersSource(UInt64 block_size_, UInt64 offset_, UInt64 step_)
+ : SourceWithProgress(createHeader()), block_size(block_size_), next(offset_), step(step_) {}
String getName() const override { return "Numbers"; }
- Block getHeader() const override
- {
- return { ColumnWithTypeAndName(ColumnUInt64::create(), std::make_shared<DataTypeUInt64>(), "number") };
- }
-
protected:
- Block readImpl() override
+ Chunk generate() override
{
auto column = ColumnUInt64::create(block_size);
ColumnUInt64::Container & vec = column->getData();
@@ -38,12 +36,19 @@ protected:
*pos++ = curr++;
next += step;
- return { ColumnWithTypeAndName(std::move(column), std::make_shared(), "number") };
+
+ return { Columns {std::move(column)}, block_size };
}
+
private:
UInt64 block_size;
UInt64 next;
UInt64 step;
+
+ static Block createHeader()
+ {
+ return { ColumnWithTypeAndName(ColumnUInt64::create(), std::make_shared<DataTypeUInt64>(), "number") };
+ }
};
@@ -55,21 +60,19 @@ struct NumbersMultiThreadedState
using NumbersMultiThreadedStatePtr = std::shared_ptr<NumbersMultiThreadedState>;
-class NumbersMultiThreadedBlockInputStream : public IBlockInputStream
+class NumbersMultiThreadedSource : public SourceWithProgress
{
public:
- NumbersMultiThreadedBlockInputStream(NumbersMultiThreadedStatePtr state_, UInt64 block_size_, UInt64 max_counter_)
- : state(std::move(state_)), block_size(block_size_), max_counter(max_counter_) {}
+ NumbersMultiThreadedSource(NumbersMultiThreadedStatePtr state_, UInt64 block_size_, UInt64 max_counter_)
+ : SourceWithProgress(createHeader())
+ , state(std::move(state_))
+ , block_size(block_size_)
+ , max_counter(max_counter_) {}
String getName() const override { return "NumbersMt"; }
- Block getHeader() const override
- {
- return { ColumnWithTypeAndName(ColumnUInt64::create(), std::make_shared<DataTypeUInt64>(), "number") };
- }
-
protected:
- Block readImpl() override
+ Chunk generate() override
{
if (block_size == 0)
return {};
@@ -90,7 +93,7 @@ protected:
while (pos < end)
*pos++ = curr++;
- return { ColumnWithTypeAndName(std::move(column), std::make_shared(), "number") };
+ return { Columns {std::move(column)}, block_size };
}
private:
@@ -98,6 +101,11 @@ private:
UInt64 block_size;
UInt64 max_counter;
+
+ Block createHeader() const
+ {
+ return { ColumnWithTypeAndName(ColumnUInt64::create(), std::make_shared<DataTypeUInt64>(), "number") };
+ }
};
}
@@ -109,7 +117,7 @@ StorageSystemNumbers::StorageSystemNumbers(const std::string & name_, bool multi
setColumns(ColumnsDescription({{"number", std::make_shared<DataTypeUInt64>()}}));
}
-BlockInputStreams StorageSystemNumbers::read(
+Pipes StorageSystemNumbers::readWithProcessors(
const Names & column_names,
const SelectQueryInfo &,
const Context & /*context*/,
@@ -128,7 +136,8 @@ BlockInputStreams StorageSystemNumbers::read(
if (!multithreaded)
num_streams = 1;
- BlockInputStreams res(num_streams);
+ Pipes res;
+ res.reserve(num_streams);
if (num_streams > 1 && !even_distribution && *limit)
{
@@ -136,17 +145,26 @@ BlockInputStreams StorageSystemNumbers::read(
UInt64 max_counter = offset + *limit;
for (size_t i = 0; i < num_streams; ++i)
- res[i] = std::make_shared<NumbersMultiThreadedBlockInputStream>(state, max_block_size, max_counter);
+ res.emplace_back(std::make_shared<NumbersMultiThreadedSource>(state, max_block_size, max_counter));
return res;
}
for (size_t i = 0; i < num_streams; ++i)
{
- res[i] = std::make_shared<NumbersBlockInputStream>(max_block_size, offset + i * max_block_size, num_streams * max_block_size);
+ auto source = std::make_shared<NumbersSource>(max_block_size, offset + i * max_block_size, num_streams * max_block_size);
- if (limit) /// This formula is how to split 'limit' elements to 'num_streams' chunks almost uniformly.
- res[i] = std::make_shared<LimitBlockInputStream>(res[i], *limit * (i + 1) / num_streams - *limit * i / num_streams, 0, false, true);
+ if (limit && i == 0)
+ source->addTotalRowsApprox(*limit);
+
+ res.emplace_back(std::move(source));
+
+ if (limit)
+ {
+ /// This formula is how to split 'limit' elements to 'num_streams' chunks almost uniformly.
+ res.back().addSimpleTransform(std::make_shared<LimitTransform>(
+ res.back().getHeader(), *limit * (i + 1) / num_streams - *limit * i / num_streams, 0, false));
+ }
}
return res;
diff --git a/dbms/src/Storages/System/StorageSystemNumbers.h b/dbms/src/Storages/System/StorageSystemNumbers.h
index 76070839012..2db97683279 100644
--- a/dbms/src/Storages/System/StorageSystemNumbers.h
+++ b/dbms/src/Storages/System/StorageSystemNumbers.h
@@ -31,7 +31,7 @@ public:
std::string getTableName() const override { return name; }
std::string getDatabaseName() const override { return "system"; }
- BlockInputStreams read(
+ Pipes readWithProcessors(
const Names & column_names,
const SelectQueryInfo & query_info,
const Context & context,
@@ -39,6 +39,8 @@ public:
size_t max_block_size,
unsigned num_streams) override;
+ bool supportProcessorsPipeline() const override { return true; }
+
private:
const std::string name;
bool multithreaded;
From ac89b60d5af7ee07407bc4c239617804c517132b Mon Sep 17 00:00:00 2001
From: Ivan Blinkov
Date: Tue, 3 Dec 2019 16:13:14 +0300
Subject: [PATCH 013/312] tmp commit
---
docs/en/getting_started/tutorial.md | 31 +++++++++++++++++++++++------
1 file changed, 25 insertions(+), 6 deletions(-)
diff --git a/docs/en/getting_started/tutorial.md b/docs/en/getting_started/tutorial.md
index 222d6d7517b..9251967d567 100644
--- a/docs/en/getting_started/tutorial.md
+++ b/docs/en/getting_started/tutorial.md
@@ -76,8 +76,8 @@ Now it's time to fill our ClickHouse server with some sample data. In this tutor
### Download and Extract Table Data
``` bash
-curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
-curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
+curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
+curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
```
The extracted files are about 10GB in size.
@@ -484,9 +484,28 @@ clickhouse-client --query "SELECT COUNT(*) FROM tutorial.hits_v1"
clickhouse-client --query "SELECT COUNT(*) FROM tutorial.visits_v1"
```
-## Queries
+## Example Queries
-TODO
+``` sql
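+-- ten start pages with the longest average visit duration during one week of the dataset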
+SELECT
+ StartURL AS URL,
+ AVG(Duration) AS AvgDuration
+FROM tutorial.visits_v1
+WHERE StartDate BETWEEN '2014-03-23' AND '2014-03-30'
+GROUP BY URL
+ORDER BY AvgDuration DESC
+LIMIT 10
+```
+
+
+``` sql
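+-- Sign comes from CollapsingMergeTree: summing it makes cancelled visit rows net out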
+SELECT
+ sum(Sign) AS visits,
+ sumIf(Sign, has(Goals.ID, 1105530)) AS goal_visits,
+ (100. * goal_visits) / visits AS goal_percent
+FROM tutorial.visits_v1
+WHERE (CounterID = 912887) AND (toYYYYMM(StartDate) = 201403) AND (domain(StartURL) = 'yandex.ru')
+```
## Cluster Deployment
@@ -525,7 +544,7 @@ ClickHouse cluster is a homogenous cluster. Steps to set up:
```
-
+
Creating a local table:
``` sql
CREATE TABLE ontime_local (...) ENGINE = MergeTree(FlightDate, (Year, FlightDate), 8192);
@@ -577,7 +596,7 @@ To provide for resilience in production environment we recommend that each shard
```
-
+
To enable replication ZooKeeper is required. ClickHouse will take care of data consistency on all replicas and run restore procedure after failure
automatically. It's recommended to deploy ZooKeeper cluster to separate servers.
From c136bee4ce9cfbec249aa1d729b5f88d34b90d2f Mon Sep 17 00:00:00 2001
From: Ivan Blinkov
Date: Tue, 3 Dec 2019 19:19:07 +0300
Subject: [PATCH 014/312] lots of docs fixes
---
.../example_datasets/metrica.md | 2 +-
docs/en/getting_started/tutorial.md | 2 +-
docs/en/index.md | 108 +++++++------
docs/fa/getting_started/install.md | 8 +-
docs/fa/getting_started/tutorial.md | 1 +
docs/ja/changelog.md | 1 +
docs/ja/data_types/array.md | 1 +
docs/ja/data_types/boolean.md | 1 +
docs/ja/data_types/date.md | 1 +
docs/ja/data_types/datetime.md | 1 +
docs/ja/data_types/decimal.md | 1 +
docs/ja/data_types/domains/ipv4.md | 1 +
docs/ja/data_types/domains/ipv6.md | 1 +
docs/ja/data_types/domains/overview.md | 1 +
docs/ja/data_types/enum.md | 1 +
docs/ja/data_types/fixedstring.md | 1 +
docs/ja/data_types/float.md | 1 +
docs/ja/data_types/index.md | 1 +
docs/ja/data_types/int_uint.md | 1 +
.../aggregatefunction.md | 1 +
.../nested_data_structures/index.md | 1 +
.../nested_data_structures/nested.md | 1 +
docs/ja/data_types/nullable.md | 1 +
.../special_data_types/expression.md | 1 +
.../ja/data_types/special_data_types/index.md | 1 +
.../data_types/special_data_types/interval.md | 1 +
.../data_types/special_data_types/nothing.md | 1 +
docs/ja/data_types/special_data_types/set.md | 1 +
docs/ja/data_types/string.md | 1 +
docs/ja/data_types/tuple.md | 1 +
docs/ja/data_types/uuid.md | 1 +
docs/ja/database_engines/index.md | 1 +
docs/ja/database_engines/lazy.md | 1 +
docs/ja/database_engines/mysql.md | 1 +
docs/ja/development/architecture.md | 1 +
docs/ja/development/build.md | 1 +
docs/ja/development/build_cross_arm.md | 1 +
docs/ja/development/build_cross_osx.md | 1 +
docs/ja/development/build_osx.md | 1 +
docs/ja/development/contrib.md | 1 +
docs/ja/development/developer_instruction.md | 1 +
docs/ja/development/index.md | 1 +
docs/ja/development/style.md | 1 +
docs/ja/development/tests.md | 1 +
.../tests/developer_instruction_ru.md | 1 +
.../development/tests/easy_tasks_sorted_ru.md | 1 +
docs/ja/development/tests/sanitizers.md | 1 +
docs/ja/faq/general.md | 1 +
.../example_datasets/amplab_benchmark.md | 1 +
.../example_datasets/criteo.md | 1 +
.../example_datasets/metrica.md | 1 +
.../example_datasets/nyc_taxi.md | 1 +
.../example_datasets/ontime.md | 1 +
.../example_datasets/star_schema.md | 1 +
.../example_datasets/wikistat.md | 1 +
docs/ja/getting_started/index.md | 1 +
docs/ja/getting_started/install.md | 1 +
docs/ja/getting_started/tutorial.md | 1 +
docs/ja/guides/apply_catboost_model.md | 1 +
docs/ja/guides/index.md | 1 +
docs/ja/images/column_oriented.gif | Bin 0 -> 45485 bytes
docs/ja/images/logo.svg | 12 ++
docs/ja/images/row_oriented.gif | Bin 0 -> 41571 bytes
docs/ja/index.md | 143 +-----------------
docs/ja/interfaces/cli.md | 1 +
docs/ja/interfaces/cpp.md | 1 +
docs/ja/interfaces/formats.md | 1 +
docs/ja/interfaces/http.md | 1 +
docs/ja/interfaces/index.md | 1 +
docs/ja/interfaces/jdbc.md | 1 +
docs/ja/interfaces/odbc.md | 1 +
docs/ja/interfaces/tcp.md | 1 +
.../third-party/client_libraries.md | 1 +
docs/ja/interfaces/third-party/gui.md | 1 +
.../ja/interfaces/third-party/integrations.md | 1 +
docs/ja/interfaces/third-party/proxy.md | 1 +
docs/ja/introduction/distinctive_features.md | 1 +
.../features_considered_disadvantages.md | 1 +
docs/ja/introduction/history.md | 1 +
docs/ja/introduction/performance.md | 1 +
docs/ja/operations/access_rights.md | 1 +
docs/ja/operations/backup.md | 1 +
docs/ja/operations/configuration_files.md | 1 +
docs/ja/operations/index.md | 1 +
docs/ja/operations/monitoring.md | 1 +
docs/ja/operations/quotas.md | 1 +
docs/ja/operations/requirements.md | 1 +
docs/ja/operations/server_settings/index.md | 1 +
.../ja/operations/server_settings/settings.md | 1 +
.../settings/constraints_on_settings.md | 1 +
docs/ja/operations/settings/index.md | 1 +
.../settings/permissions_for_queries.md | 1 +
.../operations/settings/query_complexity.md | 1 +
docs/ja/operations/settings/settings.md | 1 +
.../operations/settings/settings_profiles.md | 1 +
docs/ja/operations/settings/settings_users.md | 1 +
docs/ja/operations/system_tables.md | 1 +
.../table_engines/aggregatingmergetree.md | 1 +
docs/ja/operations/table_engines/buffer.md | 1 +
.../table_engines/collapsingmergetree.md | 1 +
.../table_engines/custom_partitioning_key.md | 1 +
.../ja/operations/table_engines/dictionary.md | 1 +
.../operations/table_engines/distributed.md | 1 +
.../operations/table_engines/external_data.md | 1 +
docs/ja/operations/table_engines/file.md | 1 +
.../table_engines/graphitemergetree.md | 1 +
docs/ja/operations/table_engines/hdfs.md | 1 +
docs/ja/operations/table_engines/index.md | 1 +
docs/ja/operations/table_engines/jdbc.md | 1 +
docs/ja/operations/table_engines/join.md | 1 +
docs/ja/operations/table_engines/kafka.md | 1 +
docs/ja/operations/table_engines/log.md | 1 +
.../ja/operations/table_engines/log_family.md | 1 +
.../table_engines/materializedview.md | 1 +
docs/ja/operations/table_engines/memory.md | 1 +
docs/ja/operations/table_engines/merge.md | 1 +
docs/ja/operations/table_engines/mergetree.md | 1 +
docs/ja/operations/table_engines/mysql.md | 1 +
docs/ja/operations/table_engines/null.md | 1 +
docs/ja/operations/table_engines/odbc.md | 1 +
.../table_engines/replacingmergetree.md | 1 +
.../operations/table_engines/replication.md | 1 +
docs/ja/operations/table_engines/set.md | 1 +
docs/ja/operations/table_engines/stripelog.md | 1 +
.../table_engines/summingmergetree.md | 1 +
docs/ja/operations/table_engines/tinylog.md | 1 +
docs/ja/operations/table_engines/url.md | 1 +
.../versionedcollapsingmergetree.md | 1 +
docs/ja/operations/table_engines/view.md | 1 +
docs/ja/operations/tips.md | 1 +
docs/ja/operations/troubleshooting.md | 1 +
docs/ja/operations/update.md | 1 +
docs/ja/operations/utils/clickhouse-copier.md | 1 +
docs/ja/operations/utils/clickhouse-local.md | 1 +
docs/ja/operations/utils/index.md | 1 +
.../agg_functions/combinators.md | 1 +
docs/ja/query_language/agg_functions/index.md | 1 +
.../agg_functions/parametric_functions.md | 1 +
.../query_language/agg_functions/reference.md | 1 +
docs/ja/query_language/alter.md | 1 +
docs/ja/query_language/create.md | 1 +
.../ja/query_language/dicts/external_dicts.md | 1 +
.../dicts/external_dicts_dict.md | 1 +
.../dicts/external_dicts_dict_layout.md | 1 +
.../dicts/external_dicts_dict_lifetime.md | 1 +
.../dicts/external_dicts_dict_sources.md | 1 +
.../dicts/external_dicts_dict_structure.md | 1 +
docs/ja/query_language/dicts/index.md | 1 +
.../ja/query_language/dicts/internal_dicts.md | 1 +
.../functions/arithmetic_functions.md | 1 +
.../functions/array_functions.md | 1 +
.../ja/query_language/functions/array_join.md | 1 +
.../query_language/functions/bit_functions.md | 1 +
.../functions/bitmap_functions.md | 1 +
.../functions/comparison_functions.md | 1 +
.../functions/conditional_functions.md | 1 +
.../functions/date_time_functions.md | 1 +
.../functions/encoding_functions.md | 1 +
.../functions/ext_dict_functions.md | 1 +
.../functions/functions_for_nulls.md | 1 +
docs/ja/query_language/functions/geo.md | 1 +
.../functions/hash_functions.md | 1 +
.../functions/higher_order_functions.md | 1 +
.../query_language/functions/in_functions.md | 1 +
docs/ja/query_language/functions/index.md | 1 +
.../functions/ip_address_functions.md | 1 +
.../functions/json_functions.md | 1 +
.../functions/logical_functions.md | 1 +
.../functions/machine_learning_functions.md | 1 +
.../functions/math_functions.md | 1 +
.../functions/other_functions.md | 1 +
.../functions/random_functions.md | 1 +
.../functions/rounding_functions.md | 1 +
.../functions/splitting_merging_functions.md | 1 +
.../functions/string_functions.md | 1 +
.../functions/string_replace_functions.md | 1 +
.../functions/string_search_functions.md | 1 +
.../functions/type_conversion_functions.md | 1 +
.../query_language/functions/url_functions.md | 1 +
.../functions/uuid_functions.md | 1 +
.../functions/ym_dict_functions.md | 1 +
docs/ja/query_language/index.md | 1 +
docs/ja/query_language/insert_into.md | 1 +
docs/ja/query_language/misc.md | 1 +
docs/ja/query_language/operators.md | 1 +
docs/ja/query_language/select.md | 1 +
docs/ja/query_language/show.md | 1 +
docs/ja/query_language/syntax.md | 1 +
docs/ja/query_language/system.md | 1 +
.../ja/query_language/table_functions/file.md | 1 +
.../ja/query_language/table_functions/hdfs.md | 1 +
.../query_language/table_functions/index.md | 1 +
.../query_language/table_functions/input.md | 1 +
.../ja/query_language/table_functions/jdbc.md | 1 +
.../query_language/table_functions/merge.md | 1 +
.../query_language/table_functions/mysql.md | 1 +
.../query_language/table_functions/numbers.md | 1 +
.../ja/query_language/table_functions/odbc.md | 1 +
.../query_language/table_functions/remote.md | 1 +
docs/ja/query_language/table_functions/url.md | 1 +
docs/ja/roadmap.md | 1 +
docs/ja/security_changelog.md | 1 +
docs/ru/getting_started/install.md | 6 +-
docs/ru/getting_started/tutorial.md | 1 +
docs/toc_en.yml | 3 +-
docs/toc_ja.yml | 11 +-
docs/toc_ru.yml | 4 +-
docs/toc_zh.yml | 3 +-
docs/tools/make_links.sh | 2 +-
.../mkdocs-material-theme/assets/flags/ja.svg | 11 +-
.../partials/language/ja.html | 6 +
.../example_datasets/metrica.md | 1 +
docs/zh/getting_started/install.md | 6 +-
docs/zh/getting_started/tutorial.md | 1 +
214 files changed, 308 insertions(+), 216 deletions(-)
create mode 120000 docs/fa/getting_started/tutorial.md
create mode 120000 docs/ja/changelog.md
create mode 120000 docs/ja/data_types/array.md
create mode 120000 docs/ja/data_types/boolean.md
create mode 120000 docs/ja/data_types/date.md
create mode 120000 docs/ja/data_types/datetime.md
create mode 120000 docs/ja/data_types/decimal.md
create mode 120000 docs/ja/data_types/domains/ipv4.md
create mode 120000 docs/ja/data_types/domains/ipv6.md
create mode 120000 docs/ja/data_types/domains/overview.md
create mode 120000 docs/ja/data_types/enum.md
create mode 120000 docs/ja/data_types/fixedstring.md
create mode 120000 docs/ja/data_types/float.md
create mode 120000 docs/ja/data_types/index.md
create mode 120000 docs/ja/data_types/int_uint.md
create mode 120000 docs/ja/data_types/nested_data_structures/aggregatefunction.md
create mode 120000 docs/ja/data_types/nested_data_structures/index.md
create mode 120000 docs/ja/data_types/nested_data_structures/nested.md
create mode 120000 docs/ja/data_types/nullable.md
create mode 120000 docs/ja/data_types/special_data_types/expression.md
create mode 120000 docs/ja/data_types/special_data_types/index.md
create mode 120000 docs/ja/data_types/special_data_types/interval.md
create mode 120000 docs/ja/data_types/special_data_types/nothing.md
create mode 120000 docs/ja/data_types/special_data_types/set.md
create mode 120000 docs/ja/data_types/string.md
create mode 120000 docs/ja/data_types/tuple.md
create mode 120000 docs/ja/data_types/uuid.md
create mode 120000 docs/ja/database_engines/index.md
create mode 120000 docs/ja/database_engines/lazy.md
create mode 120000 docs/ja/database_engines/mysql.md
create mode 120000 docs/ja/development/architecture.md
create mode 120000 docs/ja/development/build.md
create mode 120000 docs/ja/development/build_cross_arm.md
create mode 120000 docs/ja/development/build_cross_osx.md
create mode 120000 docs/ja/development/build_osx.md
create mode 120000 docs/ja/development/contrib.md
create mode 120000 docs/ja/development/developer_instruction.md
create mode 120000 docs/ja/development/index.md
create mode 120000 docs/ja/development/style.md
create mode 120000 docs/ja/development/tests.md
create mode 120000 docs/ja/development/tests/developer_instruction_ru.md
create mode 120000 docs/ja/development/tests/easy_tasks_sorted_ru.md
create mode 120000 docs/ja/development/tests/sanitizers.md
create mode 120000 docs/ja/faq/general.md
create mode 120000 docs/ja/getting_started/example_datasets/amplab_benchmark.md
create mode 120000 docs/ja/getting_started/example_datasets/criteo.md
create mode 120000 docs/ja/getting_started/example_datasets/metrica.md
create mode 120000 docs/ja/getting_started/example_datasets/nyc_taxi.md
create mode 120000 docs/ja/getting_started/example_datasets/ontime.md
create mode 120000 docs/ja/getting_started/example_datasets/star_schema.md
create mode 120000 docs/ja/getting_started/example_datasets/wikistat.md
create mode 120000 docs/ja/getting_started/index.md
create mode 120000 docs/ja/getting_started/install.md
create mode 120000 docs/ja/getting_started/tutorial.md
create mode 120000 docs/ja/guides/apply_catboost_model.md
create mode 120000 docs/ja/guides/index.md
create mode 100644 docs/ja/images/column_oriented.gif
create mode 100644 docs/ja/images/logo.svg
create mode 100644 docs/ja/images/row_oriented.gif
mode change 100644 => 120000 docs/ja/index.md
create mode 120000 docs/ja/interfaces/cli.md
create mode 120000 docs/ja/interfaces/cpp.md
create mode 120000 docs/ja/interfaces/formats.md
create mode 120000 docs/ja/interfaces/http.md
create mode 120000 docs/ja/interfaces/index.md
create mode 120000 docs/ja/interfaces/jdbc.md
create mode 120000 docs/ja/interfaces/odbc.md
create mode 120000 docs/ja/interfaces/tcp.md
create mode 120000 docs/ja/interfaces/third-party/client_libraries.md
create mode 120000 docs/ja/interfaces/third-party/gui.md
create mode 120000 docs/ja/interfaces/third-party/integrations.md
create mode 120000 docs/ja/interfaces/third-party/proxy.md
create mode 120000 docs/ja/introduction/distinctive_features.md
create mode 120000 docs/ja/introduction/features_considered_disadvantages.md
create mode 120000 docs/ja/introduction/history.md
create mode 120000 docs/ja/introduction/performance.md
create mode 120000 docs/ja/operations/access_rights.md
create mode 120000 docs/ja/operations/backup.md
create mode 120000 docs/ja/operations/configuration_files.md
create mode 120000 docs/ja/operations/index.md
create mode 120000 docs/ja/operations/monitoring.md
create mode 120000 docs/ja/operations/quotas.md
create mode 120000 docs/ja/operations/requirements.md
create mode 120000 docs/ja/operations/server_settings/index.md
create mode 120000 docs/ja/operations/server_settings/settings.md
create mode 120000 docs/ja/operations/settings/constraints_on_settings.md
create mode 120000 docs/ja/operations/settings/index.md
create mode 120000 docs/ja/operations/settings/permissions_for_queries.md
create mode 120000 docs/ja/operations/settings/query_complexity.md
create mode 120000 docs/ja/operations/settings/settings.md
create mode 120000 docs/ja/operations/settings/settings_profiles.md
create mode 120000 docs/ja/operations/settings/settings_users.md
create mode 120000 docs/ja/operations/system_tables.md
create mode 120000 docs/ja/operations/table_engines/aggregatingmergetree.md
create mode 120000 docs/ja/operations/table_engines/buffer.md
create mode 120000 docs/ja/operations/table_engines/collapsingmergetree.md
create mode 120000 docs/ja/operations/table_engines/custom_partitioning_key.md
create mode 120000 docs/ja/operations/table_engines/dictionary.md
create mode 120000 docs/ja/operations/table_engines/distributed.md
create mode 120000 docs/ja/operations/table_engines/external_data.md
create mode 120000 docs/ja/operations/table_engines/file.md
create mode 120000 docs/ja/operations/table_engines/graphitemergetree.md
create mode 120000 docs/ja/operations/table_engines/hdfs.md
create mode 120000 docs/ja/operations/table_engines/index.md
create mode 120000 docs/ja/operations/table_engines/jdbc.md
create mode 120000 docs/ja/operations/table_engines/join.md
create mode 120000 docs/ja/operations/table_engines/kafka.md
create mode 120000 docs/ja/operations/table_engines/log.md
create mode 120000 docs/ja/operations/table_engines/log_family.md
create mode 120000 docs/ja/operations/table_engines/materializedview.md
create mode 120000 docs/ja/operations/table_engines/memory.md
create mode 120000 docs/ja/operations/table_engines/merge.md
create mode 120000 docs/ja/operations/table_engines/mergetree.md
create mode 120000 docs/ja/operations/table_engines/mysql.md
create mode 120000 docs/ja/operations/table_engines/null.md
create mode 120000 docs/ja/operations/table_engines/odbc.md
create mode 120000 docs/ja/operations/table_engines/replacingmergetree.md
create mode 120000 docs/ja/operations/table_engines/replication.md
create mode 120000 docs/ja/operations/table_engines/set.md
create mode 120000 docs/ja/operations/table_engines/stripelog.md
create mode 120000 docs/ja/operations/table_engines/summingmergetree.md
create mode 120000 docs/ja/operations/table_engines/tinylog.md
create mode 120000 docs/ja/operations/table_engines/url.md
create mode 120000 docs/ja/operations/table_engines/versionedcollapsingmergetree.md
create mode 120000 docs/ja/operations/table_engines/view.md
create mode 120000 docs/ja/operations/tips.md
create mode 120000 docs/ja/operations/troubleshooting.md
create mode 120000 docs/ja/operations/update.md
create mode 120000 docs/ja/operations/utils/clickhouse-copier.md
create mode 120000 docs/ja/operations/utils/clickhouse-local.md
create mode 120000 docs/ja/operations/utils/index.md
create mode 120000 docs/ja/query_language/agg_functions/combinators.md
create mode 120000 docs/ja/query_language/agg_functions/index.md
create mode 120000 docs/ja/query_language/agg_functions/parametric_functions.md
create mode 120000 docs/ja/query_language/agg_functions/reference.md
create mode 120000 docs/ja/query_language/alter.md
create mode 120000 docs/ja/query_language/create.md
create mode 120000 docs/ja/query_language/dicts/external_dicts.md
create mode 120000 docs/ja/query_language/dicts/external_dicts_dict.md
create mode 120000 docs/ja/query_language/dicts/external_dicts_dict_layout.md
create mode 120000 docs/ja/query_language/dicts/external_dicts_dict_lifetime.md
create mode 120000 docs/ja/query_language/dicts/external_dicts_dict_sources.md
create mode 120000 docs/ja/query_language/dicts/external_dicts_dict_structure.md
create mode 120000 docs/ja/query_language/dicts/index.md
create mode 120000 docs/ja/query_language/dicts/internal_dicts.md
create mode 120000 docs/ja/query_language/functions/arithmetic_functions.md
create mode 120000 docs/ja/query_language/functions/array_functions.md
create mode 120000 docs/ja/query_language/functions/array_join.md
create mode 120000 docs/ja/query_language/functions/bit_functions.md
create mode 120000 docs/ja/query_language/functions/bitmap_functions.md
create mode 120000 docs/ja/query_language/functions/comparison_functions.md
create mode 120000 docs/ja/query_language/functions/conditional_functions.md
create mode 120000 docs/ja/query_language/functions/date_time_functions.md
create mode 120000 docs/ja/query_language/functions/encoding_functions.md
create mode 120000 docs/ja/query_language/functions/ext_dict_functions.md
create mode 120000 docs/ja/query_language/functions/functions_for_nulls.md
create mode 120000 docs/ja/query_language/functions/geo.md
create mode 120000 docs/ja/query_language/functions/hash_functions.md
create mode 120000 docs/ja/query_language/functions/higher_order_functions.md
create mode 120000 docs/ja/query_language/functions/in_functions.md
create mode 120000 docs/ja/query_language/functions/index.md
create mode 120000 docs/ja/query_language/functions/ip_address_functions.md
create mode 120000 docs/ja/query_language/functions/json_functions.md
create mode 120000 docs/ja/query_language/functions/logical_functions.md
create mode 120000 docs/ja/query_language/functions/machine_learning_functions.md
create mode 120000 docs/ja/query_language/functions/math_functions.md
create mode 120000 docs/ja/query_language/functions/other_functions.md
create mode 120000 docs/ja/query_language/functions/random_functions.md
create mode 120000 docs/ja/query_language/functions/rounding_functions.md
create mode 120000 docs/ja/query_language/functions/splitting_merging_functions.md
create mode 120000 docs/ja/query_language/functions/string_functions.md
create mode 120000 docs/ja/query_language/functions/string_replace_functions.md
create mode 120000 docs/ja/query_language/functions/string_search_functions.md
create mode 120000 docs/ja/query_language/functions/type_conversion_functions.md
create mode 120000 docs/ja/query_language/functions/url_functions.md
create mode 120000 docs/ja/query_language/functions/uuid_functions.md
create mode 120000 docs/ja/query_language/functions/ym_dict_functions.md
create mode 120000 docs/ja/query_language/index.md
create mode 120000 docs/ja/query_language/insert_into.md
create mode 120000 docs/ja/query_language/misc.md
create mode 120000 docs/ja/query_language/operators.md
create mode 120000 docs/ja/query_language/select.md
create mode 120000 docs/ja/query_language/show.md
create mode 120000 docs/ja/query_language/syntax.md
create mode 120000 docs/ja/query_language/system.md
create mode 120000 docs/ja/query_language/table_functions/file.md
create mode 120000 docs/ja/query_language/table_functions/hdfs.md
create mode 120000 docs/ja/query_language/table_functions/index.md
create mode 120000 docs/ja/query_language/table_functions/input.md
create mode 120000 docs/ja/query_language/table_functions/jdbc.md
create mode 120000 docs/ja/query_language/table_functions/merge.md
create mode 120000 docs/ja/query_language/table_functions/mysql.md
create mode 120000 docs/ja/query_language/table_functions/numbers.md
create mode 120000 docs/ja/query_language/table_functions/odbc.md
create mode 120000 docs/ja/query_language/table_functions/remote.md
create mode 120000 docs/ja/query_language/table_functions/url.md
create mode 120000 docs/ja/roadmap.md
create mode 120000 docs/ja/security_changelog.md
create mode 120000 docs/ru/getting_started/tutorial.md
create mode 120000 docs/zh/getting_started/example_datasets/metrica.md
create mode 120000 docs/zh/getting_started/tutorial.md
diff --git a/docs/en/getting_started/example_datasets/metrica.md b/docs/en/getting_started/example_datasets/metrica.md
index b26f26f5acb..d89fe54f4eb 100644
--- a/docs/en/getting_started/example_datasets/metrica.md
+++ b/docs/en/getting_started/example_datasets/metrica.md
@@ -57,6 +57,6 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1"
## Example Queries
-[ClickHouse tutorial](../tutorial.md) is based on Yandex.Metrica dataset and the recommended way to get started with this dataset is to just go through tutorial.
+The [ClickHouse tutorial](../../getting_started/tutorial.md) is based on the Yandex.Metrica dataset, and the recommended way to get started with this dataset is to just go through the tutorial.
Additional examples of queries to these tables can be found among [stateful tests](https://github.com/yandex/ClickHouse/tree/master/dbms/tests/queries/1_stateful) of ClickHouse (they are named `test.hits` and `test.visits` there).
diff --git a/docs/en/getting_started/tutorial.md b/docs/en/getting_started/tutorial.md
index 9251967d567..f1fd9ee2beb 100644
--- a/docs/en/getting_started/tutorial.md
+++ b/docs/en/getting_started/tutorial.md
@@ -459,7 +459,7 @@ clickhouse-client --query "INSERT INTO tutorial.hits_v1 FORMAT TSV" --max_insert
clickhouse-client --query "INSERT INTO tutorial.visits_v1 FORMAT TSV" --max_insert_block_size=100000 < visits_v1.tsv
```
-ClickHouse has a lot of [settings to tune](../operations/settings.md) and one way to specify them in the console client is via arguments, as we can see with `--max_insert_block_size`. The easiest way to figure out what settings are available, what they mean and what the defaults are is to query the `system.settings` table:
+ClickHouse has a lot of [settings to tune](../operations/settings/index.md) and one way to specify them in the console client is via arguments, as we can see with `--max_insert_block_size`. The easiest way to figure out what settings are available, what they mean and what the defaults are is to query the `system.settings` table:
``` sql
SELECT name, value, changed, description
diff --git a/docs/en/index.md b/docs/en/index.md
index 40158f524ec..6dea5f6570b 100644
--- a/docs/en/index.md
+++ b/docs/en/index.md
@@ -1,8 +1,8 @@
-# What is ClickHouse?
+# ClickHouseとは?
-ClickHouse is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP).
+ClickHouseは、クエリのオンライン分析処理(OLAP)用の列指向のデータベース管理システム(DBMS)です。
-In a "normal" row-oriented DBMS, data is stored in this order:
+「通常の」行指向のDBMSでは、データは次の順序で保存されます。
| Row | WatchID | JavaEnable | Title | GoodEvent | EventTime |
| ------ | ------------------- | ---------- | ------------------ | --------- | ------------------- |
@@ -11,12 +11,12 @@ In a "normal" row-oriented DBMS, data is stored in this order:
| #2 | 89953706054 | 1 | Mission | 1 | 2016-05-18 07:38:00 |
| #N | ... | ... | ... | ... | ... |
-In other words, all the values related to a row are physically stored next to each other.
+つまり、行に関連するすべての値は物理的に隣り合わせに格納されます。
-Examples of a row-oriented DBMS are MySQL, Postgres, and MS SQL Server.
+行指向のDBMSの例:MySQL, Postgres および MS SQL Server
{: .grey }
-In a column-oriented DBMS, data is stored like this:
+列指向のDBMSでは、データは次のように保存されます:
| Row: | #0 | #1 | #2 | #N |
| ----------- | ------------------- | ------------------- | ------------------- | ------------------- |
@@ -26,68 +26,74 @@ In a column-oriented DBMS, data is stored like this:
| GoodEvent: | 1 | 1 | 1 | ... |
| EventTime: | 2016-05-18 05:19:20 | 2016-05-18 08:10:20 | 2016-05-18 07:38:00 | ... |
-These examples only show the order that data is arranged in.
-The values from different columns are stored separately, and data from the same column is stored together.
+これらの例は、データが配置される順序のみを示しています。
+異なる列の値は別々に保存され、同じ列のデータは一緒に保存されます。
-Examples of a column-oriented DBMS: Vertica, Paraccel (Actian Matrix and Amazon Redshift), Sybase IQ, Exasol, Infobright, InfiniDB, MonetDB (VectorWise and Actian Vector), LucidDB, SAP HANA, Google Dremel, Google PowerDrill, Druid, and kdb+.
+列指向DBMSの例:Vertica, Paraccel (Actian Matrix and Amazon Redshift), Sybase IQ, Exasol, Infobright, InfiniDB, MonetDB (VectorWise and Actian Vector), LucidDB, SAP HANA, Google Dremel, Google PowerDrill, Druid および kdb+
{: .grey }
-Different orders for storing data are better suited to different scenarios.
-The data access scenario refers to what queries are made, how often, and in what proportion; how much data is read for each type of query – rows, columns, and bytes; the relationship between reading and updating data; the working size of the data and how locally it is used; whether transactions are used, and how isolated they are; requirements for data replication and logical integrity; requirements for latency and throughput for each type of query, and so on.
+異なったデータ格納の順序は、異なったシナリオにより適します。
+データアクセスシナリオとは、クエリの実行内容、頻度、割合を指します。クエリで読み取られるの各種データの量(行、列、バイト)。データの読み取りと更新の関係。作業データのサイズとローカルでの使用方法。トランザクションが使用されるかどうか、およびそれらがどの程度分離されているか。データ複製と論理的整合性の要件。クエリの種類ごとの遅延とスループットの要件など。
-The higher the load on the system, the more important it is to customize the system set up to match the requirements of the usage scenario, and the more fine grained this customization becomes. There is no system that is equally well-suited to significantly different scenarios. If a system is adaptable to a wide set of scenarios, under a high load, the system will handle all the scenarios equally poorly, or will work well for just one or few of possible scenarios.
+システムの負荷が高いほど、使用シナリオの要件に一致するようにセットアップされたシステムをカスタマイズすることがより重要になり、このカスタマイズはより細かくなります。大きく異なるシナリオに等しく適したシステムはありません。システムがさまざまなシナリオに適応可能である場合、高負荷下では、システムはすべてのシナリオを同等に不十分に処理するか、1つまたはいくつかの可能なシナリオでうまく機能します。
-## Key Properties of the OLAP scenario
+## OLAPシナリオの主要なプロパティ
-- The vast majority of requests are for read access.
-- Data is updated in fairly large batches (> 1000 rows), not by single rows; or it is not updated at all.
-- Data is added to the DB but is not modified.
-- For reads, quite a large number of rows are extracted from the DB, but only a small subset of columns.
-- Tables are "wide," meaning they contain a large number of columns.
-- Queries are relatively rare (usually hundreds of queries per server or less per second).
-- For simple queries, latencies around 50 ms are allowed.
-- Column values are fairly small: numbers and short strings (for example, 60 bytes per URL).
-- Requires high throughput when processing a single query (up to billions of rows per second per server).
-- Transactions are not necessary.
-- Low requirements for data consistency.
-- There is one large table per query. All tables are small, except for one.
-- A query result is significantly smaller than the source data. In other words, data is filtered or aggregated, so the result fits in a single server's RAM.
+- リクエストの大部分は読み取りアクセス用である。
+- データは、単一行ではなく、かなり大きなバッチ(> 1000行)で更新されます。または、まったく更新されない。
+- データはDBに追加されるが、変更されない。
+- 読み取りの場合、非常に多くの行がDBから抽出されるが、一部の列のみ。
+- テーブルは「幅が広く」、多数の列が含まれる。
+- クエリは比較的まれ(通常、サーバーあたり毎秒数百あるいはそれ以下の数のクエリ)。
+- 単純なクエリでは、約50ミリ秒の遅延が容認される。
+- 列の値はかなり小さく、数値や短い文字列(たとえば、URLごとに60バイト)。
+- 単一のクエリを処理する場合、高いスループットが必要(サーバーあたり毎秒最大数十億行)。
+- トランザクションは必要ない。
+- データの一貫性の要件が低い。
+- クエリごとに1つの大きなテーブルがある。 1つを除くすべてのテーブルは小さい。
+- クエリ結果は、ソースデータよりも大幅に小さくなる。つまり、データはフィルター処理または集計されるため、結果は単一サーバーのRAMに収まる。
-It is easy to see that the OLAP scenario is very different from other popular scenarios (such as OLTP or Key-Value access). So it doesn't make sense to try to use OLTP or a Key-Value DB for processing analytical queries if you want to get decent performance. For example, if you try to use MongoDB or Redis for analytics, you will get very poor performance compared to OLAP databases.
+OLAPシナリオは、他の一般的なシナリオ(OLTPやKey-Valueアクセスなど)とは非常に異なることが容易にわかります。 したがって、まともなパフォーマンスを得るには、OLTPまたはKey-Value DBを使用して分析クエリを処理しようとするのは無意味です。 たとえば、分析にMongoDBまたはRedisを使用しようとすると、OLAPデータベースに比べてパフォーマンスが非常に低下します。
-## Why Column-Oriented Databases Work Better in the OLAP Scenario
+## OLAPシナリオで列指向データベースがよりよく機能する理由
-Column-oriented databases are better suited to OLAP scenarios: they are at least 100 times faster in processing most queries. The reasons are explained in detail below, but the fact is easier to demonstrate visually:
+列指向データベースは、OLAPシナリオにより適しています。ほとんどのクエリの処理が少なくとも100倍高速です。 理由を以下に詳しく説明しますが、その根拠は視覚的に簡単に説明できます:
-**Row-oriented DBMS**
+**行指向DBMS**
![Row-oriented](images/row_oriented.gif#)
-**Column-oriented DBMS**
+**列指向DBMS**
![Column-oriented](images/column_oriented.gif#)
-See the difference?
+違いがわかりましたか?
### Input/output
-1. For an analytical query, only a small number of table columns need to be read. In a column-oriented database, you can read just the data you need. For example, if you need 5 columns out of 100, you can expect a 20-fold reduction in I/O.
-2. Since data is read in packets, it is easier to compress. Data in columns is also easier to compress. This further reduces the I/O volume.
-3. Due to the reduced I/O, more data fits in the system cache.
+1. 分析クエリでは、少数のテーブル列のみを読み取る必要があります。列指向のデータベースでは、必要なデータのみを読み取ることができます。たとえば、100のうち5つの列が必要な場合、I/Oが20倍削減されることが期待できます。
+2. データはパケットで読み取られるため、圧縮が容易です。列のデータも圧縮が簡単です。これにより、I/Oボリュームがさらに削減されます。
+3. I/Oの削減により、より多くのデータがシステムキャッシュに収まります。
-For example, the query "count the number of records for each advertising platform" requires reading one "advertising platform ID" column, which takes up 1 byte uncompressed. If most of the traffic was not from advertising platforms, you can expect at least 10-fold compression of this column. When using a quick compression algorithm, data decompression is possible at a speed of at least several gigabytes of uncompressed data per second. In other words, this query can be processed at a speed of approximately several billion rows per second on a single server. This speed is actually achieved in practice.
+たとえば、「各広告プラットフォームのレコード数をカウントする」クエリでは、1つの「広告プラットフォームID」列を読み取る必要がありますが、これは非圧縮では1バイトの領域を要します。トラフィックのほとんどが広告プラットフォームからのものではない場合、この列は少なくとも10倍の圧縮が期待できます。高速な圧縮アルゴリズムを使用すれば、1秒あたり少なくとも非圧縮データに換算して数ギガバイトの速度でデータを展開できます。つまり、このクエリは、単一のサーバーで1秒あたり約数十億行の速度で処理できます。この速度はまさに実際に達成されます。
Example
-```bash
+```
$ clickhouse-client
ClickHouse client version 0.0.52053.
Connecting to localhost:9000.
Connected to ClickHouse server version 0.0.52053.
-```
-```sql
-SELECT CounterID, count() FROM hits GROUP BY CounterID ORDER BY count() DESC LIMIT 20
-```
-```text
+
+:) SELECT CounterID, count() FROM hits GROUP BY CounterID ORDER BY count() DESC LIMIT 20
+
+SELECT
+CounterID,
+count()
+FROM hits
+GROUP BY CounterID
+ORDER BY count() DESC
+LIMIT 20
+
┌─CounterID─┬──count()─┐
│ 114208 │ 56057344 │
│ 115080 │ 51619590 │
@@ -110,23 +116,27 @@ SELECT CounterID, count() FROM hits GROUP BY CounterID ORDER BY count() DESC LIM
│ 115079 │ 8837972 │
│ 337234 │ 8205961 │
└───────────┴──────────┘
+
+20 rows in set. Elapsed: 0.153 sec. Processed 1.00 billion rows, 4.00 GB (6.53 billion rows/s., 26.10 GB/s.)
+
+:)
```
### CPU
-Since executing a query requires processing a large number of rows, it helps to dispatch all operations for entire vectors instead of for separate rows, or to implement the query engine so that there is almost no dispatching cost. If you don't do this, with any half-decent disk subsystem, the query interpreter inevitably stalls the CPU.
-It makes sense to both store data in columns and process it, when possible, by columns.
+クエリを実行するには大量の行を処理する必要があるため、個別の行ではなくベクター全体のすべての操作をディスパッチするか、ディスパッチコストがほとんどないようにクエリエンジンを実装すると効率的です。 適切なディスクサブシステムでこれを行わないと、クエリインタープリターが必然的にCPUを失速させます。
+データを列に格納し、可能な場合は列ごとに処理することは理にかなっています。
-There are two ways to do this:
+これを行うには2つの方法があります:
-1. A vector engine. All operations are written for vectors, instead of for separate values. This means you don't need to call operations very often, and dispatching costs are negligible. Operation code contains an optimized internal cycle.
+1. ベクトルエンジン。 すべての操作は、個別の値ではなく、ベクトルに対して記述されます。 これは、オペレーションを頻繁に呼び出す必要がなく、ディスパッチコストが無視できることを意味します。 操作コードには、最適化された内部サイクルが含まれています。
-2. Code generation. The code generated for the query has all the indirect calls in it.
+2. コード生成。 クエリ用に生成されたコードには、すべての間接的な呼び出しが含まれています。
-This is not done in "normal" databases, because it doesn't make sense when running simple queries. However, there are exceptions. For example, MemSQL uses code generation to reduce latency when processing SQL queries. (For comparison, analytical DBMSs require optimization of throughput, not latency.)
+これは、単純なクエリを実行する場合には意味がないため、「通常の」データベースでは実行されません。 ただし、例外があります。 たとえば、MemSQLはコード生成を使用して、SQLクエリを処理する際の遅延を減らします。 (比較のために、分析DBMSではレイテンシではなくスループットの最適化が必要です。)
-Note that for CPU efficiency, the query language must be declarative (SQL or MDX), or at least a vector (J, K). The query should only contain implicit loops, allowing for optimization.
+CPU効率のために、クエリ言語は宣言型(SQLまたはMDX)、または少なくともベクトル(J、K)でなければなりません。 クエリには、最適化を可能にする暗黙的なループのみを含める必要があります。
[Original article](https://clickhouse.yandex/docs/en/)
diff --git a/docs/fa/getting_started/install.md b/docs/fa/getting_started/install.md
index 4922724bf6d..790c9381007 100644
--- a/docs/fa/getting_started/install.md
+++ b/docs/fa/getting_started/install.md
@@ -20,7 +20,7 @@ grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not su
##ﺩﻮﺟﻮﻣ ﺐﺼﻧ ﯼﺎﻫ ﻪﻨﯾﺰﮔ
-### نصب از طریق پکیج های Debian/Ubuntu
+### نصب از طریق پکیج های Debian/Ubuntu {#from-deb-packages}
در فایل `/etc/apt/sources.list` (یا در یک فایل جدا `/etc/apt/sources.list.d/clickhouse.list`)، Repo زیر را اضافه کنید:
@@ -51,7 +51,7 @@ sudo apt-get install clickhouse-client clickhouse-server
ClickHouse دارای تنظیمات محدودیت دسترسی می باشد. این تنظیمات در فایل 'users.xml' (کنار 'config.xml') می باشد. به صورت پیش فرض دسترسی برای کاربر 'default' از همه جا بدون نیاز به پسورد وجود دارد. 'user/default/networks' را مشاهده کنید. برای اطلاعات بیشتر قسمت "تنظیمات فایل ها" را مشاهده کنید.
-### RPM ﯼﺎﻫ ﻪﺘﺴﺑ ﺯﺍ
+### RPM ﯼﺎﻫ ﻪﺘﺴﺑ ﺯﺍ {#from-rpm-packages}
.ﺪﻨﮐ ﯽﻣ ﻪﯿﺻﻮﺗ ﺲﮐﻮﻨﯿﻟ ﺮﺑ ﯽﻨﺘﺒﻣ rpm ﺮﺑ ﯽﻨﺘﺒﻣ ﯼﺎﻫ ﻊﯾﺯﻮﺗ ﺮﯾﺎﺳ ﻭ CentOS ، RedHat ﯼﺍ
@@ -78,7 +78,7 @@ sudo yum install clickhouse-server clickhouse-client
.ﺪﻨﻨﮐ ﯽﻣ ﻩﺩﺎﻔﺘﺳﺍ ﻞﺧﺍﺩ ﺭﺩ "deb" ﯽﻤﺳﺭ ﯼﺎﻫ ﻪﺘﺴﺑ ﺯﺍ ﺮﯾﻭﺎﺼﺗ ﻦﯾﺍ .ﺪﯿﻨﮐ ﻝﺎﺒﻧﺩ ﺍﺭ (/ht
-### نصب از طریق Source
+### نصب از طریق Source {#from-sources}
برای Compile، دستورالعمل های فایل build.md را دنبال کنید:
@@ -108,7 +108,7 @@ Server: dbms/programs/clickhouse-server
به مسیر لاگ ها در تنظیمات سرور توجه کنید (src/dbms/programs/config.xml).
-### روش های دیگر نصب
+### روش های دیگر نصب {#from-docker-image}
Docker image:
diff --git a/docs/fa/getting_started/tutorial.md b/docs/fa/getting_started/tutorial.md
new file mode 120000
index 00000000000..8bc40816ab2
--- /dev/null
+++ b/docs/fa/getting_started/tutorial.md
@@ -0,0 +1 @@
+../../en/getting_started/tutorial.md
\ No newline at end of file
diff --git a/docs/ja/changelog.md b/docs/ja/changelog.md
new file mode 120000
index 00000000000..699cc9e7b7c
--- /dev/null
+++ b/docs/ja/changelog.md
@@ -0,0 +1 @@
+../../CHANGELOG.md
\ No newline at end of file
diff --git a/docs/ja/data_types/array.md b/docs/ja/data_types/array.md
new file mode 120000
index 00000000000..808c98bf91a
--- /dev/null
+++ b/docs/ja/data_types/array.md
@@ -0,0 +1 @@
+../../en/data_types/array.md
\ No newline at end of file
diff --git a/docs/ja/data_types/boolean.md b/docs/ja/data_types/boolean.md
new file mode 120000
index 00000000000..42e84f1e52a
--- /dev/null
+++ b/docs/ja/data_types/boolean.md
@@ -0,0 +1 @@
+../../en/data_types/boolean.md
\ No newline at end of file
diff --git a/docs/ja/data_types/date.md b/docs/ja/data_types/date.md
new file mode 120000
index 00000000000..d1ebc137e8f
--- /dev/null
+++ b/docs/ja/data_types/date.md
@@ -0,0 +1 @@
+../../en/data_types/date.md
\ No newline at end of file
diff --git a/docs/ja/data_types/datetime.md b/docs/ja/data_types/datetime.md
new file mode 120000
index 00000000000..2eb9f44e6eb
--- /dev/null
+++ b/docs/ja/data_types/datetime.md
@@ -0,0 +1 @@
+../../en/data_types/datetime.md
\ No newline at end of file
diff --git a/docs/ja/data_types/decimal.md b/docs/ja/data_types/decimal.md
new file mode 120000
index 00000000000..ccea440adfa
--- /dev/null
+++ b/docs/ja/data_types/decimal.md
@@ -0,0 +1 @@
+../../en/data_types/decimal.md
\ No newline at end of file
diff --git a/docs/ja/data_types/domains/ipv4.md b/docs/ja/data_types/domains/ipv4.md
new file mode 120000
index 00000000000..eb4cc7d57b5
--- /dev/null
+++ b/docs/ja/data_types/domains/ipv4.md
@@ -0,0 +1 @@
+../../../en/data_types/domains/ipv4.md
\ No newline at end of file
diff --git a/docs/ja/data_types/domains/ipv6.md b/docs/ja/data_types/domains/ipv6.md
new file mode 120000
index 00000000000..cca37a22458
--- /dev/null
+++ b/docs/ja/data_types/domains/ipv6.md
@@ -0,0 +1 @@
+../../../en/data_types/domains/ipv6.md
\ No newline at end of file
diff --git a/docs/ja/data_types/domains/overview.md b/docs/ja/data_types/domains/overview.md
new file mode 120000
index 00000000000..13465d655ee
--- /dev/null
+++ b/docs/ja/data_types/domains/overview.md
@@ -0,0 +1 @@
+../../../en/data_types/domains/overview.md
\ No newline at end of file
diff --git a/docs/ja/data_types/enum.md b/docs/ja/data_types/enum.md
new file mode 120000
index 00000000000..23ebe64773e
--- /dev/null
+++ b/docs/ja/data_types/enum.md
@@ -0,0 +1 @@
+../../en/data_types/enum.md
\ No newline at end of file
diff --git a/docs/ja/data_types/fixedstring.md b/docs/ja/data_types/fixedstring.md
new file mode 120000
index 00000000000..53092fcb884
--- /dev/null
+++ b/docs/ja/data_types/fixedstring.md
@@ -0,0 +1 @@
+../../en/data_types/fixedstring.md
\ No newline at end of file
diff --git a/docs/ja/data_types/float.md b/docs/ja/data_types/float.md
new file mode 120000
index 00000000000..d2ae6bd11de
--- /dev/null
+++ b/docs/ja/data_types/float.md
@@ -0,0 +1 @@
+../../en/data_types/float.md
\ No newline at end of file
diff --git a/docs/ja/data_types/index.md b/docs/ja/data_types/index.md
new file mode 120000
index 00000000000..c9f29d637f3
--- /dev/null
+++ b/docs/ja/data_types/index.md
@@ -0,0 +1 @@
+../../en/data_types/index.md
\ No newline at end of file
diff --git a/docs/ja/data_types/int_uint.md b/docs/ja/data_types/int_uint.md
new file mode 120000
index 00000000000..3a913c9328e
--- /dev/null
+++ b/docs/ja/data_types/int_uint.md
@@ -0,0 +1 @@
+../../en/data_types/int_uint.md
\ No newline at end of file
diff --git a/docs/ja/data_types/nested_data_structures/aggregatefunction.md b/docs/ja/data_types/nested_data_structures/aggregatefunction.md
new file mode 120000
index 00000000000..36544324d2b
--- /dev/null
+++ b/docs/ja/data_types/nested_data_structures/aggregatefunction.md
@@ -0,0 +1 @@
+../../../en/data_types/nested_data_structures/aggregatefunction.md
\ No newline at end of file
diff --git a/docs/ja/data_types/nested_data_structures/index.md b/docs/ja/data_types/nested_data_structures/index.md
new file mode 120000
index 00000000000..a5659a9c5cd
--- /dev/null
+++ b/docs/ja/data_types/nested_data_structures/index.md
@@ -0,0 +1 @@
+../../../en/data_types/nested_data_structures/index.md
\ No newline at end of file
diff --git a/docs/ja/data_types/nested_data_structures/nested.md b/docs/ja/data_types/nested_data_structures/nested.md
new file mode 120000
index 00000000000..653a1ce31c3
--- /dev/null
+++ b/docs/ja/data_types/nested_data_structures/nested.md
@@ -0,0 +1 @@
+../../../en/data_types/nested_data_structures/nested.md
\ No newline at end of file
diff --git a/docs/ja/data_types/nullable.md b/docs/ja/data_types/nullable.md
new file mode 120000
index 00000000000..0233f91d954
--- /dev/null
+++ b/docs/ja/data_types/nullable.md
@@ -0,0 +1 @@
+../../en/data_types/nullable.md
\ No newline at end of file
diff --git a/docs/ja/data_types/special_data_types/expression.md b/docs/ja/data_types/special_data_types/expression.md
new file mode 120000
index 00000000000..4cec632b416
--- /dev/null
+++ b/docs/ja/data_types/special_data_types/expression.md
@@ -0,0 +1 @@
+../../../en/data_types/special_data_types/expression.md
\ No newline at end of file
diff --git a/docs/ja/data_types/special_data_types/index.md b/docs/ja/data_types/special_data_types/index.md
new file mode 120000
index 00000000000..f3ca4a47f98
--- /dev/null
+++ b/docs/ja/data_types/special_data_types/index.md
@@ -0,0 +1 @@
+../../../en/data_types/special_data_types/index.md
\ No newline at end of file
diff --git a/docs/ja/data_types/special_data_types/interval.md b/docs/ja/data_types/special_data_types/interval.md
new file mode 120000
index 00000000000..6829f5ced00
--- /dev/null
+++ b/docs/ja/data_types/special_data_types/interval.md
@@ -0,0 +1 @@
+../../../en/data_types/special_data_types/interval.md
\ No newline at end of file
diff --git a/docs/ja/data_types/special_data_types/nothing.md b/docs/ja/data_types/special_data_types/nothing.md
new file mode 120000
index 00000000000..197a752ce9c
--- /dev/null
+++ b/docs/ja/data_types/special_data_types/nothing.md
@@ -0,0 +1 @@
+../../../en/data_types/special_data_types/nothing.md
\ No newline at end of file
diff --git a/docs/ja/data_types/special_data_types/set.md b/docs/ja/data_types/special_data_types/set.md
new file mode 120000
index 00000000000..5beb14114d3
--- /dev/null
+++ b/docs/ja/data_types/special_data_types/set.md
@@ -0,0 +1 @@
+../../../en/data_types/special_data_types/set.md
\ No newline at end of file
diff --git a/docs/ja/data_types/string.md b/docs/ja/data_types/string.md
new file mode 120000
index 00000000000..7bdd739398f
--- /dev/null
+++ b/docs/ja/data_types/string.md
@@ -0,0 +1 @@
+../../en/data_types/string.md
\ No newline at end of file
diff --git a/docs/ja/data_types/tuple.md b/docs/ja/data_types/tuple.md
new file mode 120000
index 00000000000..d30a8463aeb
--- /dev/null
+++ b/docs/ja/data_types/tuple.md
@@ -0,0 +1 @@
+../../en/data_types/tuple.md
\ No newline at end of file
diff --git a/docs/ja/data_types/uuid.md b/docs/ja/data_types/uuid.md
new file mode 120000
index 00000000000..aba05e889ac
--- /dev/null
+++ b/docs/ja/data_types/uuid.md
@@ -0,0 +1 @@
+../../en/data_types/uuid.md
\ No newline at end of file
diff --git a/docs/ja/database_engines/index.md b/docs/ja/database_engines/index.md
new file mode 120000
index 00000000000..bbdb762a4ad
--- /dev/null
+++ b/docs/ja/database_engines/index.md
@@ -0,0 +1 @@
+../../en/database_engines/index.md
\ No newline at end of file
diff --git a/docs/ja/database_engines/lazy.md b/docs/ja/database_engines/lazy.md
new file mode 120000
index 00000000000..66830dcdb2f
--- /dev/null
+++ b/docs/ja/database_engines/lazy.md
@@ -0,0 +1 @@
+../../en/database_engines/lazy.md
\ No newline at end of file
diff --git a/docs/ja/database_engines/mysql.md b/docs/ja/database_engines/mysql.md
new file mode 120000
index 00000000000..51ac4126e2d
--- /dev/null
+++ b/docs/ja/database_engines/mysql.md
@@ -0,0 +1 @@
+../../en/database_engines/mysql.md
\ No newline at end of file
diff --git a/docs/ja/development/architecture.md b/docs/ja/development/architecture.md
new file mode 120000
index 00000000000..abda4dd48a8
--- /dev/null
+++ b/docs/ja/development/architecture.md
@@ -0,0 +1 @@
+../../en/development/architecture.md
\ No newline at end of file
diff --git a/docs/ja/development/build.md b/docs/ja/development/build.md
new file mode 120000
index 00000000000..480dbc2e9f5
--- /dev/null
+++ b/docs/ja/development/build.md
@@ -0,0 +1 @@
+../../en/development/build.md
\ No newline at end of file
diff --git a/docs/ja/development/build_cross_arm.md b/docs/ja/development/build_cross_arm.md
new file mode 120000
index 00000000000..983a9872dc1
--- /dev/null
+++ b/docs/ja/development/build_cross_arm.md
@@ -0,0 +1 @@
+../../en/development/build_cross_arm.md
\ No newline at end of file
diff --git a/docs/ja/development/build_cross_osx.md b/docs/ja/development/build_cross_osx.md
new file mode 120000
index 00000000000..72e64e8631f
--- /dev/null
+++ b/docs/ja/development/build_cross_osx.md
@@ -0,0 +1 @@
+../../en/development/build_cross_osx.md
\ No newline at end of file
diff --git a/docs/ja/development/build_osx.md b/docs/ja/development/build_osx.md
new file mode 120000
index 00000000000..f9adaf24584
--- /dev/null
+++ b/docs/ja/development/build_osx.md
@@ -0,0 +1 @@
+../../en/development/build_osx.md
\ No newline at end of file
diff --git a/docs/ja/development/contrib.md b/docs/ja/development/contrib.md
new file mode 120000
index 00000000000..4749f95f9ef
--- /dev/null
+++ b/docs/ja/development/contrib.md
@@ -0,0 +1 @@
+../../en/development/contrib.md
\ No newline at end of file
diff --git a/docs/ja/development/developer_instruction.md b/docs/ja/development/developer_instruction.md
new file mode 120000
index 00000000000..bdfa9047aa2
--- /dev/null
+++ b/docs/ja/development/developer_instruction.md
@@ -0,0 +1 @@
+../../en/development/developer_instruction.md
\ No newline at end of file
diff --git a/docs/ja/development/index.md b/docs/ja/development/index.md
new file mode 120000
index 00000000000..1e2ad97dcc5
--- /dev/null
+++ b/docs/ja/development/index.md
@@ -0,0 +1 @@
+../../en/development/index.md
\ No newline at end of file
diff --git a/docs/ja/development/style.md b/docs/ja/development/style.md
new file mode 120000
index 00000000000..c1bbf11f421
--- /dev/null
+++ b/docs/ja/development/style.md
@@ -0,0 +1 @@
+../../en/development/style.md
\ No newline at end of file
diff --git a/docs/ja/development/tests.md b/docs/ja/development/tests.md
new file mode 120000
index 00000000000..c03d36c3916
--- /dev/null
+++ b/docs/ja/development/tests.md
@@ -0,0 +1 @@
+../../en/development/tests.md
\ No newline at end of file
diff --git a/docs/ja/development/tests/developer_instruction_ru.md b/docs/ja/development/tests/developer_instruction_ru.md
new file mode 120000
index 00000000000..c053faed45e
--- /dev/null
+++ b/docs/ja/development/tests/developer_instruction_ru.md
@@ -0,0 +1 @@
+../../../en/development/tests/developer_instruction_ru.md
\ No newline at end of file
diff --git a/docs/ja/development/tests/easy_tasks_sorted_ru.md b/docs/ja/development/tests/easy_tasks_sorted_ru.md
new file mode 120000
index 00000000000..fb8630a5e8b
--- /dev/null
+++ b/docs/ja/development/tests/easy_tasks_sorted_ru.md
@@ -0,0 +1 @@
+../../../en/development/tests/easy_tasks_sorted_ru.md
\ No newline at end of file
diff --git a/docs/ja/development/tests/sanitizers.md b/docs/ja/development/tests/sanitizers.md
new file mode 120000
index 00000000000..c71a3d7aa25
--- /dev/null
+++ b/docs/ja/development/tests/sanitizers.md
@@ -0,0 +1 @@
+../../../en/development/tests/sanitizers.md
\ No newline at end of file
diff --git a/docs/ja/faq/general.md b/docs/ja/faq/general.md
new file mode 120000
index 00000000000..bc267395b1b
--- /dev/null
+++ b/docs/ja/faq/general.md
@@ -0,0 +1 @@
+../../en/faq/general.md
\ No newline at end of file
diff --git a/docs/ja/getting_started/example_datasets/amplab_benchmark.md b/docs/ja/getting_started/example_datasets/amplab_benchmark.md
new file mode 120000
index 00000000000..78c93906bb0
--- /dev/null
+++ b/docs/ja/getting_started/example_datasets/amplab_benchmark.md
@@ -0,0 +1 @@
+../../../en/getting_started/example_datasets/amplab_benchmark.md
\ No newline at end of file
diff --git a/docs/ja/getting_started/example_datasets/criteo.md b/docs/ja/getting_started/example_datasets/criteo.md
new file mode 120000
index 00000000000..507dc68cd62
--- /dev/null
+++ b/docs/ja/getting_started/example_datasets/criteo.md
@@ -0,0 +1 @@
+../../../en/getting_started/example_datasets/criteo.md
\ No newline at end of file
diff --git a/docs/ja/getting_started/example_datasets/metrica.md b/docs/ja/getting_started/example_datasets/metrica.md
new file mode 120000
index 00000000000..984023973eb
--- /dev/null
+++ b/docs/ja/getting_started/example_datasets/metrica.md
@@ -0,0 +1 @@
+../../../en/getting_started/example_datasets/metrica.md
\ No newline at end of file
diff --git a/docs/ja/getting_started/example_datasets/nyc_taxi.md b/docs/ja/getting_started/example_datasets/nyc_taxi.md
new file mode 120000
index 00000000000..c47fc83a293
--- /dev/null
+++ b/docs/ja/getting_started/example_datasets/nyc_taxi.md
@@ -0,0 +1 @@
+../../../en/getting_started/example_datasets/nyc_taxi.md
\ No newline at end of file
diff --git a/docs/ja/getting_started/example_datasets/ontime.md b/docs/ja/getting_started/example_datasets/ontime.md
new file mode 120000
index 00000000000..87cfbb8be91
--- /dev/null
+++ b/docs/ja/getting_started/example_datasets/ontime.md
@@ -0,0 +1 @@
+../../../en/getting_started/example_datasets/ontime.md
\ No newline at end of file
diff --git a/docs/ja/getting_started/example_datasets/star_schema.md b/docs/ja/getting_started/example_datasets/star_schema.md
new file mode 120000
index 00000000000..1c26392dd23
--- /dev/null
+++ b/docs/ja/getting_started/example_datasets/star_schema.md
@@ -0,0 +1 @@
+../../../en/getting_started/example_datasets/star_schema.md
\ No newline at end of file
diff --git a/docs/ja/getting_started/example_datasets/wikistat.md b/docs/ja/getting_started/example_datasets/wikistat.md
new file mode 120000
index 00000000000..bf6e780fb27
--- /dev/null
+++ b/docs/ja/getting_started/example_datasets/wikistat.md
@@ -0,0 +1 @@
+../../../en/getting_started/example_datasets/wikistat.md
\ No newline at end of file
diff --git a/docs/ja/getting_started/index.md b/docs/ja/getting_started/index.md
new file mode 120000
index 00000000000..1acedb0f03e
--- /dev/null
+++ b/docs/ja/getting_started/index.md
@@ -0,0 +1 @@
+../../en/getting_started/index.md
\ No newline at end of file
diff --git a/docs/ja/getting_started/install.md b/docs/ja/getting_started/install.md
new file mode 120000
index 00000000000..60aa3fb93a4
--- /dev/null
+++ b/docs/ja/getting_started/install.md
@@ -0,0 +1 @@
+../../en/getting_started/install.md
\ No newline at end of file
diff --git a/docs/ja/getting_started/tutorial.md b/docs/ja/getting_started/tutorial.md
new file mode 120000
index 00000000000..8bc40816ab2
--- /dev/null
+++ b/docs/ja/getting_started/tutorial.md
@@ -0,0 +1 @@
+../../en/getting_started/tutorial.md
\ No newline at end of file
diff --git a/docs/ja/guides/apply_catboost_model.md b/docs/ja/guides/apply_catboost_model.md
new file mode 120000
index 00000000000..dd36e885974
--- /dev/null
+++ b/docs/ja/guides/apply_catboost_model.md
@@ -0,0 +1 @@
+../../en/guides/apply_catboost_model.md
\ No newline at end of file
diff --git a/docs/ja/guides/index.md b/docs/ja/guides/index.md
new file mode 120000
index 00000000000..162dcbc3b8f
--- /dev/null
+++ b/docs/ja/guides/index.md
@@ -0,0 +1 @@
+../../en/guides/index.md
\ No newline at end of file
diff --git a/docs/ja/images/column_oriented.gif b/docs/ja/images/column_oriented.gif
new file mode 100644
index 0000000000000000000000000000000000000000..15f4b12e697ac40c60bf77f964645316410da946
GIT binary patch
literal 45485
(base85-encoded binary data for docs/ja/images/column_oriented.gif omitted)