fix some links

Ivan Blinkov 2018-12-12 19:35:45 +03:00
parent 291948f0ba
commit 5cd6469cae
24 changed files with 85 additions and 89 deletions

View File

@ -61,7 +61,7 @@ ClickHouse checks `min_part_size` and `min_part_size_ratio` and processes the
The default database.
-To get a list of databases, use the [ SHOW DATABASES](../../query_language/misc.md#query_language_queries_show_databases) query.
+To get a list of databases, use the [SHOW DATABASES](../../query_language/misc.md#query_language_queries_show_databases) query.
**Example**
@ -74,7 +74,7 @@ To get a list of databases, use the [ SHOW DATABASES](../../query_language/misc.
Default settings profile.
-Settings profiles are located in the file specified in the parameter [user_config]().
+Settings profiles are located in the file specified in the parameter [user_config](#user-config).
**Example**

View File

@ -14,7 +14,7 @@ Restrictions:
- If the subquery concerns a distributed table containing more than one shard,
- Not used for a table-valued [remote](../../query_language/table_functions/remote.md) function.
The possible values are:
- `deny` — Default value. Prohibits using these types of subqueries (returns the "Double-distributed in/JOIN subqueries is denied" exception).
- `local` — Replaces the database and table in the subquery with local ones for the destination server (shard), leaving the normal `IN` / `JOIN`.
@ -273,7 +273,7 @@ This parameter is useful when you are using formats that require a schema defini
## stream_flush_interval_ms
-Works for tables with streaming in the case of a timeout, or when a thread generates[max_insert_block_size]() rows.
+Works for tables with streaming in the case of a timeout, or when a thread generates [max_insert_block_size](#max-insert-block-size) rows.
The default value is 7500.
@ -381,7 +381,7 @@ The default value is 0.
All the replicas in the quorum are consistent, i.e., they contain data from all previous `INSERT` queries. The `INSERT` sequence is linearized.
-When reading the data written from the `insert_quorum`, you can use the[select_sequential_consistency]() option.
+When reading the data written from the `insert_quorum`, you can use the [select_sequential_consistency](#select-sequential-consistency) option.
**ClickHouse generates an exception**
@ -390,8 +390,8 @@ When reading the data written from the `insert_quorum`, you can use the[select_s
**See also the following parameters:**
-- [insert_quorum_timeout]()
-- [select_sequential_consistency]()
+- [insert_quorum_timeout](#insert-quorum-timeout)
+- [select_sequential_consistency](#select-sequential-consistency)
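The quorum-write rule these settings control can be sketched as a toy model (an illustration only: the helper name `quorum_insert` and its arguments are invented here, and real quorum inserts are coordinated through ZooKeeper):

```python
def quorum_insert(replica_acks, insert_quorum):
    """Return True if enough replicas confirmed the write.

    replica_acks: number of replicas that acknowledged the INSERT.
    insert_quorum: required number of acknowledgements (0 or 1 disables the check).
    """
    if insert_quorum <= 1:          # quorum disabled: a single write suffices
        return True
    return replica_acks >= insert_quorum

# 3 replicas, quorum of 2: the INSERT succeeds once 2 replicas confirm.
print(quorum_insert(replica_acks=2, insert_quorum=2))  # True
# Only 1 ack arriving within insert_quorum_timeout would raise an exception
# in the real server; here it simply reports failure.
print(quorum_insert(replica_acks=1, insert_quorum=2))  # False
```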
## insert_quorum_timeout
@ -402,8 +402,8 @@ By default, 60 seconds.
**See also the following parameters:**
-- [insert_quorum]()
-- [select_sequential_consistency]()
+- [insert_quorum](#insert-quorum)
+- [select_sequential_consistency](#select-sequential-consistency)
## select_sequential_consistency
@ -417,8 +417,8 @@ When sequential consistency is enabled, ClickHouse allows the client to execute
See also the following parameters:
-- [insert_quorum]()
-- [insert_quorum_timeout]()
+- [insert_quorum](#insert-quorum)
+- [insert_quorum_timeout](#insert-quorum-timeout)
[Original article](https://clickhouse.yandex/docs/en/operations/settings/settings/) <!--hide-->

View File

@ -4,7 +4,7 @@
The engine inherits from [MergeTree](mergetree.md) and adds the logic of rows collapsing to data parts merge algorithm.
-`CollapsingMergeTree` asynchronously deletes (collapses) pairs of rows if all of the fields in a row are equivalent excepting the particular field `Sign` which can have `1` and `-1` values. Rows without a pair are kept. For more details see the [Collapsing]() section of the document.
+`CollapsingMergeTree` asynchronously deletes (collapses) pairs of rows if all of the fields in a row are equivalent except for the particular field `Sign`, which can have the values `1` and `-1`. Rows without a pair are kept. For more details, see the [Collapsing](#collapsing) section of the document.
As a consequence, the engine may significantly reduce the volume of storage and increase the efficiency of `SELECT` queries.
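The collapsing behaviour described above can be sketched as a toy model (an illustration, not ClickHouse's actual merge code; the row layout is assumed for the example):

```python
from collections import defaultdict

def collapse(rows):
    """Cancel out +1/-1 row pairs that share the same non-sign fields.

    Each row is (key_fields, sign): a "state" row (sign=1) is cancelled
    by a matching "cancel" row (sign=-1). Unpaired rows are kept.
    """
    balance = defaultdict(int)          # net sign per unique row content
    for key, sign in rows:
        balance[key] += sign
    kept = []
    for key, net in balance.items():
        # net > 0: surviving state rows; net < 0: surviving cancel rows
        for _ in range(abs(net)):
            kept.append((key, 1 if net > 0 else -1))
    return kept

rows = [(("u1", 5), 1), (("u1", 5), -1), (("u2", 3), 1)]
print(collapse(rows))   # the ("u1", 5) pair collapses; ("u2", 3) survives
```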

View File

@ -172,7 +172,7 @@ If all data and metadata disappeared from one of the servers, follow these steps
Then start the server (restart it if it is already running). Data will be downloaded from replicas.
-An alternative recovery option is to delete information about the lost replica from ZooKeeper (`/path_to_table/replica_name`), then create the replica again as described in "[Creating replicatable tables]()".
+An alternative recovery option is to delete information about the lost replica from ZooKeeper (`/path_to_table/replica_name`), then create the replica again as described in "[Creating replicated tables](#creating-replicated-tables)".
There is no restriction on network bandwidth during recovery. Keep this in mind if you are restoring many replicas at once.

View File

@ -72,7 +72,7 @@ Insert data to it:
:) INSERT INTO summtt Values(1,1),(1,2),(2,1)
```
-ClickHouse may sum all the rows not completely ([see below]()), so we use an aggregate function `sum` and `GROUP BY` clause in the query.
+ClickHouse may not sum all the rows completely ([see below](#data-processing)), so we use the aggregate function `sum` and a `GROUP BY` clause in the query.
```sql
SELECT key, sum(value) FROM summtt GROUP BY key

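The partial-summing caveat can be modelled in a few lines (a toy sketch: ClickHouse sums rows only within the data parts it happens to merge, which is why the query above still needs `sum` and `GROUP BY`):

```python
from collections import defaultdict

def merge_part(rows):
    """Sum the value column for rows with equal keys, as a part merge would."""
    acc = defaultdict(int)
    for key, value in rows:
        acc[key] += value
    return list(acc.items())

# Two data parts that have not been merged with each other yet:
part_a = merge_part([(1, 1), (1, 2)])   # -> [(1, 3)]
part_b = merge_part([(2, 1)])           # -> [(2, 1)]

# A SELECT without aggregation could still see key 1 split across parts,
# so the query aggregates again, like `SELECT key, sum(value) ... GROUP BY key`:
total = merge_part(part_a + part_b)
print(total)   # [(1, 3), (2, 1)]
```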
View File

@ -6,7 +6,7 @@ This engine:
- Allows quick writing of continually changing states of objects.
- Deletes old states of objects in the background. This significantly reduces the volume of storage.
-See the section [Collapsing]() for details.
+See the section [Collapsing](#collapsing) for details.
The engine inherits from [MergeTree](mergetree.md#table_engines-mergetree) and adds the logic of rows collapsing to the data parts merge algorithm. `VersionedCollapsingMergeTree` solves the same problem as [CollapsingMergeTree](collapsingmergetree.md) but uses a different collapsing algorithm that allows inserting data in any order with multiple threads. The special `Version` column helps to collapse the rows properly even if they are inserted in the wrong order. In contrast, `CollapsingMergeTree` allows only strictly consecutive insertion.

View File

@ -2,11 +2,11 @@
# Storing Dictionaries in Memory
-There are a [variety of ways]() to store dictionaries in memory.
+There are a variety of ways to store dictionaries in memory.
-We recommend [flat](#dicts-external_dicts_dict_layout-flat), [hashed](#dicts-external_dicts_dict_layout-hashed)and[complex_key_hashed](). which provide optimal processing speed.
+We recommend [flat](#flat), [hashed](#hashed) and [complex_key_hashed](#complex-key-hashed), which provide optimal processing speed.
-Caching is not recommended because of potentially poor performance and difficulties in selecting optimal parameters. Read more in the section "[cache]()".
+Caching is not recommended because of potentially poor performance and difficulties in selecting optimal parameters. Read more in the section "[cache](#cache)".
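The trade-off between the recommended layouts can be illustrated with a toy model (assumptions: `flat` behaves like an array indexed by the numeric key, `hashed` like a hash map; the real layouts have far more machinery):

```python
class FlatLayout:
    """Array indexed directly by the numeric key: fastest lookup,
    but memory grows with the maximum key value, not the item count."""
    def __init__(self, pairs, max_key):
        self.values = [None] * (max_key + 1)
        for key, value in pairs:
            self.values[key] = value
    def get(self, key):
        return self.values[key]

class HashedLayout:
    """Hash map keyed by id: handles sparse or large keys at a small cost."""
    def __init__(self, pairs):
        self.values = dict(pairs)
    def get(self, key):
        return self.values.get(key)

pairs = [(1, "one"), (500, "five hundred")]
print(FlatLayout(pairs, max_key=500).get(500))   # five hundred
print(HashedLayout(pairs).get(1))                # one
```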
There are several ways to improve dictionary performance:
@ -39,15 +39,13 @@ The configuration looks like this:
## Ways to Store Dictionaries in Memory
-- [flat]()
-- [hashed]()
-- [cache]()
-- [range_hashed]()
-- [complex_key_hashed]()
-- [complex_key_cache]()
-- [ip_trie]()
-<a name="dicts-external_dicts_dict_layout-flat"></a>
+- [flat](#flat)
+- [hashed](#hashed)
+- [cache](#cache)
+- [range_hashed](#range-hashed)
+- [complex_key_hashed](#complex-key-hashed)
+- [complex_key_cache](#complex-key-cache)
+- [ip_trie](#ip-trie)
### flat

View File

@ -25,14 +25,14 @@ The source is configured in the `source` section.
Types of sources (`source_type`):
-- [Local file]()
-- [Executable file]()
-- [HTTP(s)]()
+- [Local file](#local-file)
+- [Executable file](#executable-file)
+- [HTTP(s)](#http-s)
- DBMS
-- [MySQL]()
-- [ClickHouse]()
-- [MongoDB]()
-- [ODBC]()
+- [MySQL](#mysql)
+- [ClickHouse](#clickhouse)
+- [MongoDB](#mongodb)
+- [ODBC](#odbc)
<a name="dicts-external_dicts_dict_sources-local_file"></a>

View File

@ -339,7 +339,7 @@ The corresponding conversion can be performed before the WHERE/PREWHERE clause (
Joins the data in the usual [SQL JOIN](https://en.wikipedia.org/wiki/Join_(SQL)) sense.
!!! info "Note"
-Not related to [ARRAY JOIN]().
+Not related to [ARRAY JOIN](#array-join).
``` sql
@ -374,7 +374,7 @@ When using a normal `JOIN`, the query is sent to remote servers. Subqueries are
When using `GLOBAL ... JOIN`, first the requestor server runs a subquery to calculate the right table. This temporary table is passed to each remote server, and queries are run on them using the temporary data that was transmitted.
-Be careful when using `GLOBAL`. For more information, see the section [Distributed subqueries]().
+Be careful when using `GLOBAL`. For more information, see the section [Distributed subqueries](#distributed-subqueries).
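The `GLOBAL ... JOIN` flow described above can be sketched as a toy model (the shard and table structures here are invented for illustration):

```python
def global_join(shards, build_right):
    """Compute the right table once on the initiator, broadcast it to every
    shard, and join locally -- the GLOBAL strategy described above."""
    right = build_right()                    # computed once on the requestor server
    results = []
    for shard_rows in shards:                # each shard receives the temp table
        for key, left_val in shard_rows:
            if key in right:
                results.append((key, left_val, right[key]))
    return results

shards = [[(1, "a"), (2, "b")], [(2, "c"), (3, "d")]]
print(global_join(shards, lambda: {2: "x", 3: "y"}))
# [(2, 'b', 'x'), (2, 'c', 'x'), (3, 'd', 'y')]
```

With a plain `JOIN` each shard would evaluate `build_right` itself; `GLOBAL` trades one broadcast of the right table for those repeated subquery runs.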
**Usage Recommendations**

View File

@ -118,7 +118,7 @@ expr AS alias
For example `SELECT column_name * 2 AS double FROM some_table`.
-- `alias` — [string literal](). If an alias contains spaces, enclose it in double quotes or backticks.
+- `alias` — [string literal](#string-literals). If an alias contains spaces, enclose it in double quotes or backticks.
For example, `SELECT "table t".col_name FROM t AS "table t"`.

View File

@ -74,7 +74,7 @@ ClickHouse проверит условия `min_part_size` и `min_part_size_rat
The default settings profile.
-Settings profiles are located in the file specified in the [user_config]() parameter.
+Settings profiles are located in the file specified in the [user_config](#user-config) parameter.
**Example**
@ -197,7 +197,7 @@ ClickHouse проверит условия `min_part_size` и `min_part_size_rat
The port for connecting to the server over HTTP(s).
-If `https_port` is specified, [openSSL]() must be configured.
+If `https_port` is specified, [openSSL](#openssl) must be configured.
If `http_port` is specified, the openSSL configuration is ignored even if it is set.

View File

@ -256,7 +256,7 @@ ClickHouse применяет настройку в тех случаях, ко
## stream_flush_interval_ms
-Works for tables with streaming in the case of a timeout, or when a thread generates [max_insert_block_size]() rows.
+Works for tables with streaming in the case of a timeout, or when a thread generates [max_insert_block_size](#max-insert-block-size) rows.
The default value is 7500.
@ -362,7 +362,7 @@ ClickHouse применяет настройку в тех случаях, ко
All the replicas in the quorum are consistent, i.e. they contain data from all earlier `INSERT` queries. The `INSERT` sequence is linearized.
-When reading data written with `insert_quorum`, you can use the [select_sequential_consistency]() setting.
+When reading data written with `insert_quorum`, you can use the [select_sequential_consistency](#select-sequential-consistency) setting.
**ClickHouse generates an exception**
@ -371,8 +371,8 @@ ClickHouse применяет настройку в тех случаях, ко
**See also the following parameters:**
-- [insert_quorum_timeout]()
-- [select_sequential_consistency]()
+- [insert_quorum_timeout](#insert-quorum-timeout)
+- [select_sequential_consistency](#select-sequential-consistency)
## insert_quorum_timeout
@ -383,8 +383,8 @@ ClickHouse применяет настройку в тех случаях, ко
**See also the following parameters:**
-- [insert_quorum]()
-- [select_sequential_consistency]()
+- [insert_quorum](#insert-quorum)
+- [select_sequential_consistency](#select-sequential-consistency)
## select_sequential_consistency
@ -398,7 +398,7 @@ ClickHouse применяет настройку в тех случаях, ко
See also the following parameters:
-- [insert_quorum]()
-- [insert_quorum_timeout]()
+- [insert_quorum](#insert-quorum)
+- [insert_quorum_timeout](#insert-quorum-timeout)
[Original article](https://clickhouse.yandex/docs/ru/operations/settings/settings/) <!--hide-->

View File

@ -168,7 +168,7 @@ sudo -u clickhouse touch /var/lib/clickhouse/flags/force_restore_data
Then start the server (restart it if it is already running). Data will be downloaded from replicas.
-As an alternative recovery option, you can delete the information about the lost replica from ZooKeeper (`/path_to_table/replica_name`) and then create the replica again, as described in "[Creating replicated tables]()".
+As an alternative recovery option, you can delete the information about the lost replica from ZooKeeper (`/path_to_table/replica_name`) and then create the replica again, as described in "[Creating replicated tables](#sozdanie-replitsiruemykh-tablits)".
There is no restriction on network bandwidth during recovery. Keep this in mind if you are restoring many replicas at once.

View File

@ -72,7 +72,7 @@ ORDER BY key
:) INSERT INTO summtt Values(1,1),(1,2),(2,1)
```
-ClickHouse may not sum all the rows completely ([see below]()), so we use the aggregate function `sum` and a `GROUP BY` clause in the query.
+ClickHouse may not sum all the rows completely ([see below](#obrabotka-dannykh)), so we use the aggregate function `sum` and a `GROUP BY` clause in the query.
```sql
SELECT key, sum(value) FROM summtt GROUP BY key

View File

@ -2,11 +2,11 @@
# Storing Dictionaries in Memory
-Dictionaries can be stored in memory in a [variety of ways]().
+Dictionaries can be stored in memory in a variety of ways.
-We recommend [flat](#dicts-external_dicts_dict_layout-flat), [hashed](#dicts-external_dicts_dict_layout-hashed) and [complex_key_hashed](), which provide the best processing speed.
+We recommend [flat](#flat), [hashed](#hashed) and [complex_key_hashed](#complex-key-hashed), which provide the best processing speed.
-Caching is not recommended because of potentially poor performance and difficulties in selecting optimal parameters. Read more about this in the "[cache]()" section.
+Caching is not recommended because of potentially poor performance and difficulties in selecting optimal parameters. Read more about this in the "[cache](#cache)" section.
Dictionary performance can be improved in the following ways:
@ -39,15 +39,13 @@
## Ways to Store Dictionaries in Memory
-- [flat]()
-- [hashed]()
-- [cache]()
-- [range_hashed]()
-- [complex_key_hashed]()
-- [complex_key_cache]()
-- [ip_trie]()
-<a name="dicts-external_dicts_dict_layout-flat"></a>
+- [flat](#flat)
+- [hashed](#hashed)
+- [cache](#cache)
+- [range_hashed](#range-hashed)
+- [complex_key_hashed](#complex-key-hashed)
+- [complex_key_cache](#complex-key-cache)
+- [ip_trie](#ip-trie)
### flat

View File

@ -24,14 +24,14 @@
Types of sources (`source_type`):
-- [Local file]()
-- [Executable file]()
-- [HTTP(s)]()
+- [Local file](#lokalnyi-fail)
+- [Executable file](#ispolniaemyi-fail)
+- [HTTP(s)](#http-s)
- DBMS:
-- [ODBC]()
-- [MySQL]()
-- [ClickHouse]()
-- [MongoDB]()
+- [ODBC](#odbc)
+- [MySQL](#mysql)
+- [ClickHouse](#clickhouse)
+- [MongoDB](#mongodb)
## Local file

View File

@ -442,13 +442,13 @@ LIMIT 10
### WHERE Clause
-Lets you specify an expression that ClickHouse uses to filter the data before all other actions in the query, except the expressions contained in the [PREWHERE]() clause. It is usually an expression with logical operators.
+Lets you specify an expression that ClickHouse uses to filter the data before all other actions in the query, except the expressions contained in the [PREWHERE](#prewhere) clause. It is usually an expression with logical operators.
The result of the expression must have the `UInt8` type.
-ClickHouse uses indexes in the expression if the [table engine](../operations/table_engines/index.md#table_engines) allows it.
+ClickHouse uses indexes in the expression if the [table engine](../operations/table_engines/index.md) allows it.
-If you need to test for [NULL](syntax.md#null-literal) in the clause, use the [IS NULL](operators.md#operator-is-null) and [IS NOT NULL](operators.md) operators, as well as the corresponding `isNull` and `isNotNull` functions. Otherwise, the expression is always considered not met.
+If you need to test for [NULL](syntax.md#null-literal) in the clause, use the [IS NULL](operators.md#operator-is-null) and [IS NOT NULL](operators.md#is-not-null) operators, as well as the corresponding `isNull` and `isNotNull` functions. Otherwise, the expression is always considered not met.
Example of checking for `NULL`:
@ -469,7 +469,7 @@ WHERE isNull(y)
### PREWHERE Clause
-Has the same meaning as the [WHERE]() clause. The difference is in which data is read from the table.
+Has the same meaning as the [WHERE](#where) clause. The difference is in which data is read from the table.
When using `PREWHERE`, only the columns needed to evaluate the `PREWHERE` expression are read from the table first. Then the remaining columns needed to run the query are read, but only from the blocks where the `PREWHERE` expression is true.
It makes sense to use `PREWHERE` when there are filtering conditions that use a minority of the columns in the query but filter the data strongly. This reduces the amount of data read.
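The column-pruning idea behind `PREWHERE` can be sketched with a simplified column-store model (illustrative only; the real implementation works on compressed blocks, not single rows):

```python
def prewhere_select(columns, prewhere_col, predicate, select_cols):
    """Read only `prewhere_col` first; fetch the other columns
    only for the rows that passed the filter."""
    passed = [i for i, v in enumerate(columns[prewhere_col]) if predicate(v)]
    # Only now touch the remaining (possibly heavy) columns:
    return [{c: columns[c][i] for c in select_cols} for i in passed]

columns = {
    "flag": [1, 0, 1],
    "payload": ["big1", "big2", "big3"],   # pretend this column is expensive to read
}
rows = prewhere_select(columns, "flag", lambda v: v == 1, ["payload"])
print(rows)   # [{'payload': 'big1'}, {'payload': 'big3'}]
```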

View File

@ -14,7 +14,7 @@ If `replace` is specified, it replaces the entire element with the specified one
If `remove` is specified, it deletes the element.
-The config can also define "substitutions". If an element has the `incl` attribute, the corresponding substitution from the file will be used as the value. By default, the path to the file with substitutions is `/etc/metrika.xml`. This can be changed in the [include_from](server_settings/settings.md#server_settings-include_from) element in the server config. The substitution values are specified in `/yandex/substitution_name` elements in this file. If a substitution specified in `incl` does not exist, it is recorded in the log. To prevent ClickHouse from logging missing substitutions, specify the `optional="true"` attribute (for example, settings for [macros]() server_settings/settings.md)).
+The config can also define "substitutions". If an element has the `incl` attribute, the corresponding substitution from the file will be used as the value. By default, the path to the file with substitutions is `/etc/metrika.xml`. This can be changed in the [include_from](server_settings/settings.md#server_settings-include_from) element in the server config. The substitution values are specified in `/yandex/substitution_name` elements in this file. If a substitution specified in `incl` does not exist, it is recorded in the log. To prevent ClickHouse from logging missing substitutions, specify the `optional="true"` attribute (for example, settings for [macros](server_settings/settings.md#macros)).
Substitutions can also be performed from ZooKeeper. To do this, specify the attribute `from_zk = "/path/to/node"`. The element value is replaced with the contents of the node at `/path/to/node` in ZooKeeper. You can also put an entire XML subtree on the ZooKeeper node and it will be fully inserted into the source element.
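The `incl` substitution mechanism can be sketched with the standard library (a simplified model; the real server also handles `optional="true"` and `from_zk` as described above):

```python
import xml.etree.ElementTree as ET

def apply_substitutions(config_xml, substitutions_xml):
    """Fill any element carrying an `incl` attribute with the matching
    /yandex/<name> node from the substitutions file."""
    config = ET.fromstring(config_xml)
    subs = {child.tag: child for child in ET.fromstring(substitutions_xml)}
    for elem in config.iter():
        name = elem.attrib.pop("incl", None)   # drop the incl marker
        if name is not None and name in subs:
            elem.text = subs[name].text        # copy substituted content
            elem.extend(list(subs[name]))
    return ET.tostring(config, encoding="unicode")

config = '<yandex><macros incl="macros_sub"/></yandex>'
subs = '<yandex><macros_sub><shard>01</shard></macros_sub></yandex>'
print(apply_substitutions(config, subs))
```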

View File

@ -61,7 +61,7 @@ ClickHouse checks `min_part_size` and `min_part_size_ratio` and processes the
The default database.
-To get a list of databases, use the [ SHOW DATABASES](../../query_language/misc.md#query_language_queries_show_databases) query.
+To get a list of databases, use the [SHOW DATABASES](../../query_language/misc.md#query_language_queries_show_databases) query.
**Example**
@ -74,7 +74,7 @@ To get a list of databases, use the [ SHOW DATABASES](../../query_language/misc.
Default settings profile.
-Settings profiles are located in the file specified in the parameter [user_config]().
+Settings profiles are located in the file specified in the parameter [user_config](#user-config).
**Example**
@ -196,7 +196,7 @@ For more details, see [GraphiteMergeTree](../../operations/table_engines/graphit
The port for connecting to the server over HTTP(s).
-If `https_port` is specified, [openSSL]() must be configured.
+If `https_port` is specified, [openSSL](#openssl) must be configured.
If `http_port` is specified, the openSSL configuration is ignored even if it is set.

View File

@ -258,7 +258,7 @@ This parameter is useful when you are using formats that require a schema defini
## stream_flush_interval_ms
-Works for tables with streaming in the case of a timeout, or when a thread generates[max_insert_block_size]() rows.
+Works for tables with streaming in the case of a timeout, or when a thread generates [max_insert_block_size](#max-insert-block-size) rows.
The default value is 7500.
@ -366,7 +366,7 @@ The default value is 0.
All the replicas in the quorum are consistent, i.e., they contain data from all previous `INSERT` queries. The `INSERT` sequence is linearized.
-When reading the data written from the `insert_quorum`, you can use the[select_sequential_consistency]() option.
+When reading the data written from the `insert_quorum`, you can use the [select_sequential_consistency](#select-sequential-consistency) option.
**ClickHouse generates an exception**
@ -375,8 +375,8 @@ When reading the data written from the `insert_quorum`, you can use the[select_s
**See also the following parameters:**
-- [insert_quorum_timeout]()
-- [select_sequential_consistency]()
+- [insert_quorum_timeout](#insert-quorum-timeout)
+- [select_sequential_consistency](#select-sequential-consistency)
## insert_quorum_timeout
@ -387,8 +387,8 @@ By default, 60 seconds.
**See also the following parameters:**
-- [insert_quorum]()
-- [select_sequential_consistency]()
+- [insert_quorum](#insert-quorum)
+- [select_sequential_consistency](#select-sequential-consistency)
## select_sequential_consistency
@ -402,8 +402,8 @@ When sequential consistency is enabled, ClickHouse allows the client to execute
See also the following parameters:
-- [insert_quorum]()
-- [insert_quorum_timeout]()
+- [insert_quorum](#insert-quorum)
+- [insert_quorum_timeout](#insert-quorum-timeout)
[Original article](https://clickhouse.yandex/docs/en/operations/settings/settings/) <!--hide-->

View File

@ -4,7 +4,7 @@
The engine inherits from [MergeTree](mergetree.md) and adds the logic of rows collapsing to data parts merge algorithm.
-`CollapsingMergeTree` asynchronously deletes (collapses) pairs of rows if all of the fields in a row are equivalent excepting the particular field `Sign` which can have `1` and `-1` values. Rows without a pair are kept. For more details see the [Collapsing]() section of the document.
+`CollapsingMergeTree` asynchronously deletes (collapses) pairs of rows if all of the fields in a row are equivalent except for the particular field `Sign`, which can have the values `1` and `-1`. Rows without a pair are kept. For more details, see the [Collapsing](#collapsing) section of the document.
As a consequence, the engine may significantly reduce the volume of storage and increase the efficiency of `SELECT` queries.

View File

@ -169,7 +169,7 @@ If all data and metadata disappeared from one of the servers, follow these steps
Then start the server (restart it if it is already running). Data will be downloaded from replicas.
-An alternative recovery option is to delete information about the lost replica from ZooKeeper (`/path_to_table/replica_name`), then create the replica again as described in "[Creating replicatable tables]()".
+An alternative recovery option is to delete information about the lost replica from ZooKeeper (`/path_to_table/replica_name`), then create the replica again as described in "[Creating replicated tables](#creating-replicated-tables)".
There is no restriction on network bandwidth during recovery. Keep this in mind if you are restoring many replicas at once.

View File

@ -72,7 +72,7 @@ Insert data to it:
:) INSERT INTO summtt Values(1,1),(1,2),(2,1)
```
-ClickHouse may sum all the rows not completely ([see below]()), so we use an aggregate function `sum` and `GROUP BY` clause in the query.
+ClickHouse may not sum all the rows completely ([see below](#data-processing)), so we use the aggregate function `sum` and a `GROUP BY` clause in the query.
```sql
SELECT key, sum(value) FROM summtt GROUP BY key

View File

@ -339,7 +339,7 @@ ARRAY JOIN nest AS n, arrayEnumerate(`nest.x`) AS num
The JOIN clause joins the data in the usual [SQL JOIN](https://en.wikipedia.org/wiki/Join_(SQL)) sense.
!!! info "Note"
-Not related to [ARRAY JOIN]().
+Not related to [ARRAY JOIN](#array-join).
``` sql
@ -373,7 +373,7 @@ FROM <left_subquery>
When using `GLOBAL ... JOIN`, the requestor server first runs a subquery to compute the right table, then sends it to every server as a temporary table; each server uses this temporary table directly in its computation.
-Be careful when using `GLOBAL`. For more information, see the [Distributed subqueries]() section.
+Be careful when using `GLOBAL`. For more information, see the [Distributed subqueries](#distributed-subqueries) section.
**Usage Recommendations**