Changed repository URL

Alexey Milovidov 2019-09-23 19:18:19 +03:00
parent 2d2bc052e1
commit 8579c26efb
47 changed files with 1442 additions and 1442 deletions

View File

@@ -17,7 +17,7 @@ A clear and concise description of what works not as it is supposed to.
* Which interface to use, if matters
* Non-default settings, if any
* `CREATE TABLE` statements for all tables involved
-* Sample data for all these tables, use [clickhouse-obfuscator](https://github.com/yandex/ClickHouse/blob/master/dbms/programs/obfuscator/Obfuscator.cpp#L42-L80) if necessary
+* Sample data for all these tables, use [clickhouse-obfuscator](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/programs/obfuscator/Obfuscator.cpp#L42-L80) if necessary
* Queries to run that lead to unexpected result
**Expected behavior**

View File

@@ -17,7 +17,7 @@ What exactly works slower than expected?
* Which interface to use, if matters
* Non-default settings, if any
* `CREATE TABLE` statements for all tables involved
-* Sample data for all these tables, use [clickhouse-obfuscator](https://github.com/yandex/ClickHouse/blob/master/dbms/programs/obfuscator/Obfuscator.cpp#L42-L80) if necessary
+* Sample data for all these tables, use [clickhouse-obfuscator](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/programs/obfuscator/Obfuscator.cpp#L42-L80) if necessary
* Queries to run that lead to slow performance
**Expected performance**

File diff suppressed because it is too large

View File

@@ -1,4 +1,4 @@
-[![ClickHouse — open source distributed column-oriented DBMS](https://github.com/yandex/ClickHouse/raw/master/website/images/logo-400x240.png)](https://clickhouse.yandex)
+[![ClickHouse — open source distributed column-oriented DBMS](https://github.com/ClickHouse/ClickHouse/raw/master/website/images/logo-400x240.png)](https://clickhouse.yandex)
ClickHouse is an open-source column-oriented database management system that allows generating analytical data reports in real time.

View File

@@ -14,4 +14,4 @@ currently being supported with security updates:
## Reporting a Vulnerability
To report a potential vulnerability in ClickHouse please use the security advisory feature of GitHub:
-https://github.com/yandex/ClickHouse/security/advisories
+https://github.com/ClickHouse/ClickHouse/security/advisories

View File

@@ -55,7 +55,7 @@ public:
size_t wipeSensitiveData(std::string & data) const;
/// setInstance is not thread-safe and should be called once in single-thread mode.
-/// https://github.com/yandex/ClickHouse/pull/6810#discussion_r321183367
+/// https://github.com/ClickHouse/ClickHouse/pull/6810#discussion_r321183367
static void setInstance(std::unique_ptr<SensitiveDataMasker> sensitive_data_masker_);
static SensitiveDataMasker * getInstance();

View File

@@ -20,7 +20,7 @@ class Context;
* - Query after optimization :
* SELECT id_1, name_1 FROM (SELECT id_1, name_1 FROM table_a WHERE id_1 = 1 UNION ALL SELECT id_2, name_2 FROM table_b WHERE id_2 = 1)
* WHERE id_1 = 1
-* For more details : https://github.com/yandex/ClickHouse/pull/2015#issuecomment-374283452
+* For more details : https://github.com/ClickHouse/ClickHouse/pull/2015#issuecomment-374283452
*/
class PredicateExpressionsOptimizer
{
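To make the comment above concrete, here is a hedged sketch of the kind of query this optimizer rewrites, before the predicate is pushed down (the table and column names come from the comment; the exact original form is an assumption):

```sql
-- Hypothetical query before optimization: the filter sits outside the UNION ALL,
-- so without push-down both table_a and table_b would be read in full.
SELECT id_1, name_1
FROM
(
    SELECT id_1, name_1 FROM table_a
    UNION ALL
    SELECT id_2, name_2 FROM table_b
)
WHERE id_1 = 1;
```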

View File

@@ -242,7 +242,7 @@ private:
// Lookups can be stored in a HashTable because it is memmovable
// A std::variant contains a currently active type id (memmovable), together with a union of the types
// The types are all std::unique_ptr, which contains a single pointer, which is memmovable.
-// Source: https://github.com/yandex/ClickHouse/issues/4906
+// Source: https://github.com/ClickHouse/ClickHouse/issues/4906
Lookups lookups;
};

View File

@@ -45,7 +45,7 @@ void ReplicatedMergeTreePartCheckThread::start()
void ReplicatedMergeTreePartCheckThread::stop()
{
-//based on discussion on https://github.com/yandex/ClickHouse/pull/1489#issuecomment-344756259
+//based on discussion on https://github.com/ClickHouse/ClickHouse/pull/1489#issuecomment-344756259
//using the schedule pool there is no problem in case stop is called two time in row and the start multiple times
std::lock_guard lock(start_stop_mutex);

View File

@@ -12,7 +12,7 @@
If you do not have an account, register at https://github.com/. Create SSH keys if you do not have them yet, and upload the public keys to GitHub. They are required for sending changes. You can use the same SSH keys for GitHub as for other SSH servers; most likely you already have them.
-Create a fork of the ClickHouse repository. To do this, click the "fork" button in the upper right corner of the https://github.com/yandex/ClickHouse page. You will get a full copy of the ClickHouse repository in your own account, called a "fork". The development process consists of making the required changes in your fork of the repository and then creating a "pull request" so that the changes are accepted into the main repository.
+Create a fork of the ClickHouse repository. To do this, click the "fork" button in the upper right corner of the https://github.com/ClickHouse/ClickHouse page. You will get a full copy of the ClickHouse repository in your own account, called a "fork". The development process consists of making the required changes in your fork of the repository and then creating a "pull request" so that the changes are accepted into the main repository.
To work with git repositories, install `git`.
@@ -61,7 +61,7 @@ and the repository exists.
You can also clone the repository over the https protocol:
```
-git clone https://github.com/yandex/ClickHouse.git
+git clone https://github.com/ClickHouse/ClickHouse.git
```
This option is not suitable for sending changes to the server. You can use it temporarily, then add SSH keys and replace the repository address with the `git remote` command.
@@ -228,7 +228,7 @@ sudo -u clickhouse ClickHouse/build/dbms/programs/clickhouse server --config-fil
Test development: https://clickhouse.yandex/docs/ru/development/tests/
-Task list: https://github.com/yandex/ClickHouse/blob/master/dbms/tests/instructions/easy_tasks_sorted_ru.md
+Task list: https://github.com/ClickHouse/ClickHouse/blob/master/dbms/tests/instructions/easy_tasks_sorted_ru.md
# Test data

View File

@@ -134,7 +134,7 @@ Geohash is a way of converting geographic
The entropy should be computed from a histogram. For an example of computing a histogram, see the implementation of the `quantileExact` function.
-https://github.com/yandex/ClickHouse/issues/3266
+https://github.com/ClickHouse/ClickHouse/issues/3266
## Functions that create and update an aggregate function state from a single tuple of arguments.
@@ -152,7 +152,7 @@ https://github.com/yandex/ClickHouse/issues/3266
## Correct comparison of Date and DateTime.
-https://github.com/yandex/ClickHouse/issues/2011
+https://github.com/ClickHouse/ClickHouse/issues/2011
Date and DateTime should be compared as if the Date were extended to a DateTime at the start of the day in the same time zone.
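To illustrate the semantics this task asks for (a hedged sketch; the literal values are arbitrary and not taken from the task), a Date compared with a DateTime should behave as if the Date were promoted to midnight of the same day:

```sql
-- Desired behaviour: Date is implicitly extended to DateTime at 00:00:00 in the same time zone.
SELECT
    toDate('2019-09-23') = toDateTime('2019-09-23 00:00:00') AS equal_at_midnight,
    toDate('2019-09-23') < toDateTime('2019-09-23 12:00:00') AS date_before_noon;
```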

View File

@@ -9,7 +9,7 @@ cluster = ClickHouseCluster(__file__)
node1 = cluster.add_instance('node1', main_configs=['configs/remote_servers.xml'])
node2 = cluster.add_instance('node2', main_configs=['configs/remote_servers.xml'])
-#test reproducing issue https://github.com/yandex/ClickHouse/issues/3162
+#test reproducing issue https://github.com/ClickHouse/ClickHouse/issues/3162
@pytest.fixture(scope="module")
def started_cluster():
try:

View File

@@ -17,5 +17,5 @@
<query>SELECT count() FROM system.numbers WHERE NOT ignore(rand() % 2 ? ['Hello', 'World'] : materialize(['a', 'b', 'c']))</query>
<query>SELECT count() FROM system.numbers WHERE NOT ignore(rand() % 2 ? materialize(['Hello', 'World']) : materialize(['a', 'b', 'c']))</query>
<query>SELECT count() FROM system.numbers WHERE NOT ignore(rand() % 2 ? materialize(['', '']) : emptyArrayString())</query>
-<query>SELECT count() FROM system.numbers WHERE NOT ignore(rand() % 2 ? materialize(['https://github.com/yandex/ClickHouse/pull/1070', 'https://www.google.ru/search?newwindow=1&amp;site=&amp;source=hp&amp;q=zookeeper+wire+protocol+exists&amp;oq=zookeeper+wire+protocol+exists&amp;gs_l=psy-ab.3...330.6300.0.6687.33.28.0.0.0.0.386.4838.0j5j9j5.19.0....0...1.1.64.psy-ab..14.17.4448.0..0j35i39k1j0i131k1j0i22i30k1j0i19k1j33i21k1.r_3uFoNOrSU']) : emptyArrayString())</query>
+<query>SELECT count() FROM system.numbers WHERE NOT ignore(rand() % 2 ? materialize(['https://github.com/ClickHouse/ClickHouse/pull/1070', 'https://www.google.ru/search?newwindow=1&amp;site=&amp;source=hp&amp;q=zookeeper+wire+protocol+exists&amp;oq=zookeeper+wire+protocol+exists&amp;gs_l=psy-ab.3...330.6300.0.6687.33.28.0.0.0.0.386.4838.0j5j9j5.19.0....0...1.1.64.psy-ab..14.17.4448.0..0j35i39k1j0i131k1j0i22i30k1j0i19k1j33i21k1.r_3uFoNOrSU']) : emptyArrayString())</query>
</test>

View File

@@ -3,7 +3,7 @@
CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
. $CURDIR/../shell_config.sh
-# https://github.com/yandex/ClickHouse/issues/1300
+# https://github.com/ClickHouse/ClickHouse/issues/1300
$CLICKHOUSE_CLIENT -q "DROP TABLE IF EXISTS advertiser";
$CLICKHOUSE_CLIENT -q "DROP TABLE IF EXISTS advertiser_test";

View File

@@ -1,5 +1,5 @@
--- https://github.com/yandex/ClickHouse/issues/1059
+-- https://github.com/ClickHouse/ClickHouse/issues/1059
DROP TABLE IF EXISTS union1;
DROP TABLE IF EXISTS union2;

View File

@@ -4,4 +4,4 @@ For more information see [ClickHouse Server Docker Image](https://hub.docker.com
## License
-View [license information](https://github.com/yandex/ClickHouse/blob/master/LICENSE) for the software contained in this image.
+View [license information](https://github.com/ClickHouse/ClickHouse/blob/master/LICENSE) for the software contained in this image.

View File

@@ -59,4 +59,4 @@ EOSQL
## License
-View [license information](https://github.com/yandex/ClickHouse/blob/master/LICENSE) for the software contained in this image.
+View [license information](https://github.com/ClickHouse/ClickHouse/blob/master/LICENSE) for the software contained in this image.

View File

@@ -2,4 +2,4 @@
## License
-View [license information](https://github.com/yandex/ClickHouse/blob/master/LICENSE) for the software contained in this image.
+View [license information](https://github.com/ClickHouse/ClickHouse/blob/master/LICENSE) for the software contained in this image.

View File

@@ -48,4 +48,4 @@ $ clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1"
```
## Queries
-Examples of queries to these tables (they are named `test.hits` and `test.visits`) can be found among [stateful tests](https://github.com/yandex/ClickHouse/tree/master/dbms/tests/queries/1_stateful) and in some [performance tests](https://github.com/yandex/ClickHouse/tree/master/dbms/tests/performance/test_hits) of ClickHouse.
+Examples of queries to these tables (they are named `test.hits` and `test.visits`) can be found among [stateful tests](https://github.com/ClickHouse/ClickHouse/tree/master/dbms/tests/queries/1_stateful) and in some [performance tests](https://github.com/ClickHouse/ClickHouse/tree/master/dbms/tests/performance/test_hits) of ClickHouse.
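Once the datasets are loaded, the tables can be queried directly; as a rough illustration (only the table name comes from the text above, the query itself is a generic sanity check):

```sql
-- Confirm the anonymized Yandex.Metrica hits dataset is in place.
SELECT count() AS total_rows
FROM test.hits;
```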

View File

@@ -1,5 +1,5 @@
# Native Interface (TCP)
-The native protocol is used in the [command-line client](cli.md), for interserver communication during distributed query processing, and also in other C++ programs. Unfortunately, the native ClickHouse protocol does not have a formal specification yet, but it can be reverse engineered from the ClickHouse source code (starting [around here](https://github.com/yandex/ClickHouse/tree/master/dbms/src/Client)) and/or by intercepting and analyzing TCP traffic.
+The native protocol is used in the [command-line client](cli.md), for interserver communication during distributed query processing, and also in other C++ programs. Unfortunately, the native ClickHouse protocol does not have a formal specification yet, but it can be reverse engineered from the ClickHouse source code (starting [around here](https://github.com/ClickHouse/ClickHouse/tree/master/dbms/src/Client)) and/or by intercepting and analyzing TCP traffic.
[Original article](https://clickhouse.yandex/docs/en/interfaces/tcp/) <!--hide-->

View File

@@ -1,6 +1,6 @@
# Data Backup
-While [replication](table_engines/replication.md) provides protection from hardware failures, it does not protect against human errors: accidental deletion of data, deletion of the wrong table or a table on the wrong cluster, and software bugs that result in incorrect data processing or data corruption. In many cases mistakes like these will affect all replicas. ClickHouse has built-in safeguards to prevent some types of mistakes — for example, by default [you can't just drop tables with a MergeTree-like engine containing more than 50 Gb of data](https://github.com/yandex/ClickHouse/blob/v18.14.18-stable/dbms/programs/server/config.xml#L322-L330). However, these safeguards don't cover all possible cases and can be circumvented.
+While [replication](table_engines/replication.md) provides protection from hardware failures, it does not protect against human errors: accidental deletion of data, deletion of the wrong table or a table on the wrong cluster, and software bugs that result in incorrect data processing or data corruption. In many cases mistakes like these will affect all replicas. ClickHouse has built-in safeguards to prevent some types of mistakes — for example, by default [you can't just drop tables with a MergeTree-like engine containing more than 50 Gb of data](https://github.com/ClickHouse/ClickHouse/blob/v18.14.18-stable/dbms/programs/server/config.xml#L322-L330). However, these safeguards don't cover all possible cases and can be circumvented.
In order to effectively mitigate possible human errors, you should carefully prepare a strategy for backing up and restoring your data **in advance**.
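One common building block of such a strategy (sketched here as an assumption; the table name is hypothetical and the statement presumes monthly partitioning) is taking a cheap local snapshot of a partition with `ALTER TABLE ... FREEZE`, which hard-links the partition's parts into the server's `shadow/` directory so they can then be copied off the host:

```sql
-- Hypothetical example: snapshot one month of data before a risky operation.
ALTER TABLE backups_demo.hits_local FREEZE PARTITION 201909;
```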

View File

@@ -37,7 +37,7 @@ Memory consumption is also restricted by the parameters `max_memory_usage_for_us
The maximum amount of RAM to use for running a user's queries on a single server.
-Default values are defined in [Settings.h](https://github.com/yandex/ClickHouse/blob/master/dbms/src/Core/Settings.h#L288). By default, the amount is not restricted (`max_memory_usage_for_user = 0`).
+Default values are defined in [Settings.h](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/src/Core/Settings.h#L288). By default, the amount is not restricted (`max_memory_usage_for_user = 0`).
See also the description of [max_memory_usage](#settings_max_memory_usage).
@@ -45,7 +45,7 @@ See also the description of [max_memory_usage](#settings_max_memory_usage).
The maximum amount of RAM to use for running all queries on a single server.
-Default values are defined in [Settings.h](https://github.com/yandex/ClickHouse/blob/master/dbms/src/Core/Settings.h#L289). By default, the amount is not restricted (`max_memory_usage_for_all_queries = 0`).
+Default values are defined in [Settings.h](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/src/Core/Settings.h#L289). By default, the amount is not restricted (`max_memory_usage_for_all_queries = 0`).
See also the description of [max_memory_usage](#settings_max_memory_usage).
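As a quick illustration (the values below are arbitrary, not recommendations), these memory limits are ordinary settings and can be configured in a user profile or overridden per session:

```sql
-- Arbitrary example values: ~10 GB per query, ~30 GB for all of one user's queries.
SET max_memory_usage = 10000000000;
SET max_memory_usage_for_user = 30000000000;
```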

View File

@@ -78,7 +78,7 @@ For a description of parameters, see the [CREATE query description](../../query_
For more details, see [TTL for columns and tables](#table_engine-mergetree-ttl)
- `SETTINGS` — Additional parameters that control the behavior of the `MergeTree`:
-- `index_granularity` — The granularity of an index. The number of data rows between the "marks" of an index. By default, 8192. For the list of available parameters, see [MergeTreeSettings.h](https://github.com/yandex/ClickHouse/blob/master/dbms/src/Storages/MergeTree/MergeTreeSettings.h).
+- `index_granularity` — The granularity of an index. The number of data rows between the "marks" of an index. By default, 8192. For the list of available parameters, see [MergeTreeSettings.h](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/src/Storages/MergeTree/MergeTreeSettings.h).
- `use_minimalistic_part_header_in_zookeeper` — Storage method of the data parts headers in ZooKeeper. If `use_minimalistic_part_header_in_zookeeper=1`, then ZooKeeper stores less data. For more information, see the [setting description](../server_settings/settings.md#server-settings-use_minimalistic_part_header_in_zookeeper) in "Server configuration parameters".
- `min_merge_bytes_to_use_direct_io` — The minimum data volume for merge operation that is required for using direct I/O access to the storage disk. When merging data parts, ClickHouse calculates the total storage volume of all the data to be merged. If the volume exceeds `min_merge_bytes_to_use_direct_io` bytes, ClickHouse reads and writes the data to the storage disk using the direct I/O interface (`O_DIRECT` option). If `min_merge_bytes_to_use_direct_io = 0`, then direct I/O is disabled. Default value: `10 * 1024 * 1024 * 1024` bytes.
<a name="mergetree_setting-merge_with_ttl_timeout"></a>
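For reference, a minimal `CREATE TABLE` sketch showing where these `SETTINGS` go (the database, table, and column names are made up; 8192 is already the default granularity):

```sql
-- Hypothetical MergeTree table partitioned by month and ordered by (CounterID, EventDate).
CREATE TABLE example_db.hits_local
(
    EventDate Date,
    CounterID UInt32,
    UserID UInt64
)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(EventDate)
ORDER BY (CounterID, EventDate)
SETTINGS index_granularity = 8192;
```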

View File

@@ -374,7 +374,7 @@ All the rules above are also true for the [OPTIMIZE](misc.md#misc_operations-opt
OPTIMIZE TABLE table_not_partitioned PARTITION tuple() FINAL;
```
-The examples of `ALTER ... PARTITION` queries are demonstrated in the tests [`00502_custom_partitioning_local`](https://github.com/yandex/ClickHouse/blob/master/dbms/tests/queries/0_stateless/00502_custom_partitioning_local.sql) and [`00502_custom_partitioning_replicated_zookeeper`](https://github.com/yandex/ClickHouse/blob/master/dbms/tests/queries/0_stateless/00502_custom_partitioning_replicated_zookeeper.sql).
+The examples of `ALTER ... PARTITION` queries are demonstrated in the tests [`00502_custom_partitioning_local`](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/tests/queries/0_stateless/00502_custom_partitioning_local.sql) and [`00502_custom_partitioning_replicated_zookeeper`](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/tests/queries/0_stateless/00502_custom_partitioning_replicated_zookeeper.sql).
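For orientation, two typical statements of the kind those tests exercise (the table name is hypothetical and assumes monthly partitioning):

```sql
-- Detach a partition from a hypothetical table, then attach it back.
ALTER TABLE example_db.visits_local DETACH PARTITION 201909;
ALTER TABLE example_db.visits_local ATTACH PARTITION 201909;
```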
### Synchronicity of ALTER Queries

View File

@@ -149,7 +149,7 @@ ENGINE = <Engine>
If a codec is specified, the default codec doesn't apply. Codecs can be combined in a pipeline, for example, `CODEC(Delta, ZSTD)`. To select the best codec combination for your project, run benchmarks similar to those described in the Altinity [New Encodings to Improve ClickHouse Efficiency](https://www.altinity.com/blog/2019/7/new-encodings-to-improve-clickhouse) article.
!!!warning
-You cannot decompress ClickHouse database files with external utilities, for example, `lz4`. Use the special utility, [clickhouse-compressor](https://github.com/yandex/ClickHouse/tree/master/dbms/programs/compressor).
+You cannot decompress ClickHouse database files with external utilities, for example, `lz4`. Use the special utility, [clickhouse-compressor](https://github.com/ClickHouse/ClickHouse/tree/master/dbms/programs/compressor).
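A short sketch of the pipeline syntax mentioned above (the table and column names are invented for illustration):

```sql
-- Per-column codecs: Delta stores differences between consecutive values and ZSTD
-- compresses the result; LZ4HC(9) trades extra CPU for a better compression ratio.
CREATE TABLE example_db.codec_demo
(
    ts  DateTime CODEC(Delta, ZSTD),
    val Float64  CODEC(LZ4HC(9))
)
ENGINE = MergeTree()
ORDER BY ts;
```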
Compression is supported for the table engines:

View File

@@ -126,7 +126,7 @@ FROM test.Orders;
└───────────┴────────────┴──────────┴───────────┴─────────────┴─────────────┘
```
-You can see more examples in [tests](https://github.com/yandex/ClickHouse/blob/master/dbms/tests/queries/0_stateless/00619_extract.sql).
+You can see more examples in [tests](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/tests/queries/0_stateless/00619_extract.sql).
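For a self-contained flavour of what those tests cover, a small `EXTRACT` example (the literal date is arbitrary):

```sql
-- EXTRACT pulls a named part out of a Date or DateTime value.
SELECT
    EXTRACT(YEAR FROM toDate('2019-09-23')) AS y,
    EXTRACT(MONTH FROM toDate('2019-09-23')) AS m,
    EXTRACT(DAY FROM toDate('2019-09-23')) AS d;
```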
## Logical Negation Operator

View File

@@ -2,7 +2,7 @@
# Native Interface (TCP)
-The native protocol is used in the [command-line client](cli.md), for server-to-server communication during distributed query processing, and also in other C++ programs. Unfortunately, the native ClickHouse protocol does not yet have a formal specification, but it can be reverse engineered from the ClickHouse source code (starting [here](https://github.com/yandex/ClickHouse/tree/master/dbms/src/Client)) and/or by intercepting and analyzing TCP traffic.
+The native protocol is used in the [command-line client](cli.md), for server-to-server communication during distributed query processing, and also in other C++ programs. Unfortunately, the native ClickHouse protocol does not yet have a formal specification, but it can be reverse engineered from the ClickHouse source code (starting [here](https://github.com/ClickHouse/ClickHouse/tree/master/dbms/src/Client)) and/or by intercepting and analyzing TCP traffic.
</div>
[Original article](https://clickhouse.yandex/docs/fa/interfaces/tcp/) <!--hide-->

View File

@@ -48,4 +48,4 @@ $ clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1"
```
## Queries
-Examples of queries to these tables (they are called `test.hits` and `test.visits`) can be found among the [stateful tests](https://github.com/yandex/ClickHouse/tree/master/dbms/tests/queries/1_stateful) and in some [performance tests](https://github.com/yandex/ClickHouse/tree/master/dbms/tests/performance/test_hits) of ClickHouse.
+Examples of queries to these tables (they are called `test.hits` and `test.visits`) can be found among the [stateful tests](https://github.com/ClickHouse/ClickHouse/tree/master/dbms/tests/queries/1_stateful) and in some [performance tests](https://github.com/ClickHouse/ClickHouse/tree/master/dbms/tests/performance/test_hits) of ClickHouse.

View File

@@ -1,5 +1,5 @@
# Native Interface (TCP)
-The native protocol is used in the [command-line client](cli.md), for communication between servers during distributed query processing, and also in other C++ programs. Unfortunately, the native ClickHouse protocol does not yet have a formal specification, but it can be figured out from the ClickHouse source code (starting [around here](https://github.com/yandex/ClickHouse/tree/master/dbms/src/Client)) and/or by intercepting and analyzing TCP traffic.
+The native protocol is used in the [command-line client](cli.md), for communication between servers during distributed query processing, and also in other C++ programs. Unfortunately, the native ClickHouse protocol does not yet have a formal specification, but it can be figured out from the ClickHouse source code (starting [around here](https://github.com/ClickHouse/ClickHouse/tree/master/dbms/src/Client)) and/or by intercepting and analyzing TCP traffic.
[Original article](https://clickhouse.yandex/docs/ru/interfaces/tcp/) <!--hide-->

View File

@@ -1,6 +1,6 @@
# Data Backup
-[Replication](table_engines/replication.md) provides protection from hardware failures, but it does not protect against human errors: accidental deletion of data, deletion of the wrong table or of a table on the wrong cluster, and software bugs that lead to incorrect data processing or data corruption. In many cases such mistakes affect all replicas. ClickHouse has built-in safeguards to prevent some types of mistakes: for example, by default [you cannot drop a *MergeTree table containing more than 50 GB of data with a single command](https://github.com/yandex/ClickHouse/blob/v18.14.18-stable/dbms/programs/server/config.xml#L322-L330). However, these safeguards do not cover all possible cases and can be circumvented.
+[Replication](table_engines/replication.md) provides protection from hardware failures, but it does not protect against human errors: accidental deletion of data, deletion of the wrong table or of a table on the wrong cluster, and software bugs that lead to incorrect data processing or data corruption. In many cases such mistakes affect all replicas. ClickHouse has built-in safeguards to prevent some types of mistakes: for example, by default [you cannot drop a *MergeTree table containing more than 50 GB of data with a single command](https://github.com/ClickHouse/ClickHouse/blob/v18.14.18-stable/dbms/programs/server/config.xml#L322-L330). However, these safeguards do not cover all possible cases and can be circumvented.
To effectively mitigate possible human errors, you should carefully prepare a strategy for backing up and restoring your data **in advance**.

View File

@@ -38,7 +38,7 @@
The maximum amount of RAM available for a user's queries on a single server.
-Default values are defined in [Settings.h](https://github.com/yandex/ClickHouse/blob/master/dbms/src/Core/Settings.h#L288). By default, the amount is not restricted (`max_memory_usage_for_user = 0`).
+Default values are defined in [Settings.h](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/src/Core/Settings.h#L288). By default, the amount is not restricted (`max_memory_usage_for_user = 0`).
See also the description of the [max_memory_usage](#settings_max_memory_usage) setting.
@@ -46,7 +46,7 @@
The maximum amount of RAM available for all queries on a single server.
-Default values are defined in [Settings.h](https://github.com/yandex/ClickHouse/blob/master/dbms/src/Core/Settings.h#L289). By default, the amount is not restricted (`max_memory_usage_for_all_queries = 0`).
+Default values are defined in [Settings.h](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/src/Core/Settings.h#L289). By default, the amount is not restricted (`max_memory_usage_for_all_queries = 0`).
See also the description of the [max_memory_usage](#settings_max_memory_usage) setting.

View File

@@ -75,7 +75,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
- `SETTINGS`: additional parameters that control the behavior of `MergeTree`:
-- `index_granularity`: the granularity of the index, i.e. the number of data rows between index "marks". Default: 8192. The full list of available parameters can be found in [MergeTreeSettings.h](https://github.com/yandex/ClickHouse/blob/master/dbms/src/Storages/MergeTree/MergeTreeSettings.h).
+- `index_granularity`: the granularity of the index, i.e. the number of data rows between index "marks". Default: 8192. The full list of available parameters can be found in [MergeTreeSettings.h](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/src/Storages/MergeTree/MergeTreeSettings.h).
- `min_merge_bytes_to_use_direct_io`: the minimum amount of data required for a merge operation to use direct (unbuffered) reads/writes (direct I/O) to disk. When merging data parts, ClickHouse calculates the total storage volume of all the data to be merged. If the total volume of the data to be read exceeds `min_bytes_to_use_direct_io` bytes, ClickHouse uses the `O_DIRECT` flag when reading data from disk. If `min_merge_bytes_to_use_direct_io = 0`, direct I/O is disabled. Default value: `10 * 1024 * 1024 * 1024` bytes.
<a name="mergetree_setting-merge_with_ttl_timeout"></a>
- `merge_with_ttl_timeout`: the minimum time in seconds before repeating a merge with TTL. Default: 86400 (1 day).

View File

@@ -399,7 +399,7 @@ ALTER TABLE hits MOVE PARTITION '2019-09-01' TO DISK 'fast_ssd'
OPTIMIZE TABLE table_not_partitioned PARTITION tuple() FINAL;
```
-Examples of `ALTER ... PARTITION` queries can be found in the tests [`00502_custom_partitioning_local`](https://github.com/yandex/ClickHouse/blob/master/dbms/tests/queries/0_stateless/00502_custom_partitioning_local.sql) and [`00502_custom_partitioning_replicated_zookeeper`](https://github.com/yandex/ClickHouse/blob/master/dbms/tests/queries/0_stateless/00502_custom_partitioning_replicated_zookeeper.sql).
+Examples of `ALTER ... PARTITION` queries can be found in the tests [`00502_custom_partitioning_local`](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/tests/queries/0_stateless/00502_custom_partitioning_local.sql) and [`00502_custom_partitioning_replicated_zookeeper`](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/tests/queries/0_stateless/00502_custom_partitioning_replicated_zookeeper.sql).
### Synchronicity of ALTER Queries

View File

@@ -144,7 +144,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
High compression levels are useful for asymmetric scenarios, such as compressing once and decompressing many times. Higher levels provide better compression at the cost of higher CPU usage.
!!! warning "Warning"
-You cannot decompress ClickHouse database files with external utilities such as `lz4`. Use the dedicated utility [clickhouse-compressor](https://github.com/yandex/ClickHouse/tree/master/dbms/programs/compressor).
+You cannot decompress ClickHouse database files with external utilities such as `lz4`. Use the dedicated utility [clickhouse-compressor](https://github.com/ClickHouse/ClickHouse/tree/master/dbms/programs/compressor).
Usage example:

View File

@@ -126,7 +126,7 @@ FROM test.Orders;
└───────────┴────────────┴──────────┴───────────┴─────────────┴─────────────┘
```
-More examples can be found in the [tests](https://github.com/yandex/ClickHouse/blob/master/dbms/tests/queries/0_stateless/00619_extract.sql).
+More examples can be found in the [tests](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/tests/queries/0_stateless/00619_extract.sql).
## Logical Negation Operator

View File

@@ -12,7 +12,7 @@ import util
def choose_latest_releases():
seen = collections.OrderedDict()
-candidates = requests.get('https://api.github.com/repos/yandex/ClickHouse/tags?per_page=100').json()
+candidates = requests.get('https://api.github.com/repos/ClickHouse/ClickHouse/tags?per_page=100').json()
for tag in candidates:
name = tag.get('name', '')
if 'v18' in name or 'stable' not in name:

View File

@@ -1,5 +1,5 @@
# Native Client Interface (TCP)
-The native protocol is used in the [command-line client](cli.md), for inter-server communication during distributed query processing, and also in other C++ programs. Unfortunately, the native ClickHouse protocol does not yet have a formal specification, but it can be reverse engineered from the ClickHouse source code (starting [here](https://github.com/yandex/ClickHouse/tree/master/dbms/src/Client)) and/or by intercepting and analyzing TCP traffic.
+The native protocol is used in the [command-line client](cli.md), for inter-server communication during distributed query processing, and also in other C++ programs. Unfortunately, the native ClickHouse protocol does not yet have a formal specification, but it can be reverse engineered from the ClickHouse source code (starting [here](https://github.com/ClickHouse/ClickHouse/tree/master/dbms/src/Client)) and/or by intercepting and analyzing TCP traffic.
[Original article](https://clickhouse.yandex/docs/zh/interfaces/tcp/) <!--hide-->

View File

@@ -45,7 +45,7 @@ Memory consumption is also restricted by the parameters `max_memory_usage_for_us
The maximum amount of RAM to use for running a user's queries on a single server.
-Default values are defined in [Settings.h](https://github.com/yandex/ClickHouse/blob/master/dbms/src/Interpreters/Settings.h#L244). By default, the amount is not restricted (`max_memory_usage_for_user = 0`).
+Default values are defined in [Settings.h](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/src/Interpreters/Settings.h#L244). By default, the amount is not restricted (`max_memory_usage_for_user = 0`).
See also the description of [max_memory_usage](#settings_max_memory_usage).
@@ -53,7 +53,7 @@ See also the description of [max_memory_usage](#settings_max_memory_usage).
The maximum amount of RAM to use for running all queries on a single server.
-Default values are defined in [Settings.h](https://github.com/yandex/ClickHouse/blob/master/dbms/src/Interpreters/Settings.h#L245). By default, the amount is not restricted (`max_memory_usage_for_all_queries = 0`).
+Default values are defined in [Settings.h](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/src/Interpreters/Settings.h#L245). By default, the amount is not restricted (`max_memory_usage_for_all_queries = 0`).
See also the description of [max_memory_usage](#settings_max_memory_usage).

View File

@@ -69,7 +69,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
`SAMPLE BY intHash32(UserID) ORDER BY (CounterID, EventDate, intHash32(UserID))`
- `SETTINGS`: additional parameters that affect the performance of `MergeTree`:
-- `index_granularity`: the granularity of the index, i.e. the number of data rows between adjacent index "marks". Default: 8192. The full list of available parameters can be viewed in [MergeTreeSettings.h](https://github.com/yandex/ClickHouse/blob/master/dbms/src/Storages/MergeTree/MergeTreeSettings.h).
+- `index_granularity`: the granularity of the index, i.e. the number of data rows between adjacent index "marks". Default: 8192. The full list of available parameters can be viewed in [MergeTreeSettings.h](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/src/Storages/MergeTree/MergeTreeSettings.h).
- `use_minimalistic_part_header_in_zookeeper`: how data part headers are stored in ZooKeeper. If `use_minimalistic_part_header_in_zookeeper=1`, ZooKeeper stores less data. For more information, see the [setting description](../server_settings/settings.md#server-settings-use_minimalistic_part_header_in_zookeeper) in the "Server configuration parameters" chapter.
- `min_merge_bytes_to_use_direct_io`: the minimum amount of data required for a merge operation to use direct I/O access to the storage disk. When merging data parts, ClickHouse calculates the total storage volume of all the data to be merged. If the size exceeds `min_merge_bytes_to_use_direct_io` bytes, ClickHouse reads and writes the data to disk using the direct I/O interface (the `O_DIRECT` option). If `min_merge_bytes_to_use_direct_io = 0`, direct I/O is disabled. Default value: `10 * 1024 * 1024 * 1024` bytes.

View File

@@ -124,7 +124,7 @@ ENGINE = <Engine>
If a codec is specified, the default codec doesn't apply. Codecs can be combined in a pipeline, for example, `CODEC(Delta, ZSTD)`. To select the best codec combination for your project, run benchmarks similar to those described in the Altinity [New Encodings to Improve ClickHouse Efficiency](https://www.altinity.com/blog/2019/7/new-encodings-to-improve-clickhouse) article.
!!!warning
-You cannot decompress ClickHouse database files with external utilities, for example, `lz4`. Use the special utility, [clickhouse-compressor](https://github.com/yandex/ClickHouse/tree/master/dbms/programs/compressor).
+You cannot decompress ClickHouse database files with external utilities, for example, `lz4`. Use the special utility, [clickhouse-compressor](https://github.com/ClickHouse/ClickHouse/tree/master/dbms/programs/compressor).
Compression is supported for the table engines:

View File

@@ -13,7 +13,7 @@ PROJECT_ROOT=$(cd $SCRIPTPATH/.. && pwd)
# get-sources
SOURCES_METHOD=local # clone, local, tarball
-SOURCES_CLONE_URL="https://github.com/yandex/ClickHouse.git"
+SOURCES_CLONE_URL="https://github.com/ClickHouse/ClickHouse.git"
SOURCES_BRANCH="master"
SOURCES_COMMIT=HEAD # do checkout of this commit after clone

View File

@@ -1,11 +1,11 @@
#!/bin/sh -x
# Usages:
# sh -x clickhouse-report > ch.`hostname`.`date '+%Y%M%''d%H%M%''S'`.dmp 2>&1
-# curl https://raw.githubusercontent.com/yandex/ClickHouse/master/utils/report/clickhouse-report | sh -x > ch.`hostname`.`date '+%Y%M%''d%H%M%''S'`.dmp 2>&1
+# curl https://raw.githubusercontent.com/ClickHouse/ClickHouse/master/utils/report/clickhouse-report | sh -x > ch.`hostname`.`date '+%Y%M%''d%H%M%''S'`.dmp 2>&1
# Also dump some system info (can contain some private data) and get trace from running clickhouse-server process
# sh -x clickhouse-report system gdb > ch.`hostname`.`date '+%Y%M%''d%H%M%''S'`.dmp 2>&1
-# curl https://raw.githubusercontent.com/yandex/ClickHouse/master/utils/report/clickhouse-report | sh -s -x system gdb > ch.`hostname`.`date '+%Y%M%''d%H%M%''S'`.dmp 2>&1
+# curl https://raw.githubusercontent.com/ClickHouse/ClickHouse/master/utils/report/clickhouse-report | sh -s -x system gdb > ch.`hostname`.`date '+%Y%M%''d%H%M%''S'`.dmp 2>&1
for i in "$@" ; do

View File

@@ -1,2 +1,2 @@
-The ClickHouse website is built alongside its documentation via [docs/tools](https://github.com/yandex/ClickHouse/tree/master/docs/tools), see [README.md there](https://github.com/yandex/ClickHouse/tree/master/docs/tools/README.md).
+The ClickHouse website is built alongside its documentation via [docs/tools](https://github.com/ClickHouse/ClickHouse/tree/master/docs/tools), see [README.md there](https://github.com/ClickHouse/ClickHouse/tree/master/docs/tools/README.md).

View File

@@ -453,7 +453,7 @@ By default, access is allowed from everywhere for the default user without a pas
===Installing from source===
-To build, follow the instructions in <a href="https://github.com/yandex/ClickHouse/blob/master/doc/build.md">build.md</a> (for Linux) or in <a href="https://github.com/yandex/ClickHouse/blob/master/doc/build_osx.md">build_osx.md</a> (for Mac OS X).
+To build, follow the instructions in <a href="https://github.com/ClickHouse/ClickHouse/blob/master/doc/build.md">build.md</a> (for Linux) or in <a href="https://github.com/ClickHouse/ClickHouse/blob/master/doc/build_osx.md">build_osx.md</a> (for Mac OS X).
You can compile packages and install them. You can also use programs without installing packages.
@@ -550,7 +550,7 @@ Congratulations, it works!
If you are a Yandex employee, you can use Yandex.Metrica test data to explore the system&#39;s capabilities. You can find instructions for using the test data <a href="https://github.yandex-team.ru/Metrika/ClickHouse_private/tree/master/tests">here</a>.
-Otherwise, you could use one of the available public datasets, described <a href="https://github.com/yandex/ClickHouse/tree/master/doc/example_datasets">here</a>.
+Otherwise, you could use one of the available public datasets, described <a href="https://github.com/ClickHouse/ClickHouse/tree/master/doc/example_datasets">here</a>.
==If you have questions==

View File

@@ -464,7 +464,7 @@ ClickHouse contains settings that restrict access
===Installing from source===
-To build, follow the instructions in <a href="https://github.com/yandex/ClickHouse/blob/master/doc/build.md">build.md</a> (for Linux) or <a href="https://github.com/yandex/ClickHouse/blob/master/doc/build_osx.md">build_osx.md</a> (for Mac OS X).
+To build, follow the instructions in <a href="https://github.com/ClickHouse/ClickHouse/blob/master/doc/build.md">build.md</a> (for Linux) or <a href="https://github.com/ClickHouse/ClickHouse/blob/master/doc/build_osx.md">build_osx.md</a> (for Mac OS X).
You can build packages and install them.
You can also use the programs without installing packages.
@@ -564,7 +564,7 @@ Connected to ClickHouse server version 0.0.18749.
If you are a Yandex employee, you can use Yandex.Metrica test data to explore the system's capabilities.
How to load the test data is described <a href='https://github.yandex-team.ru/Metrika/ClickHouse_private/tree/master/tests'>here</a>.
-If you are an external user of the system, you can use the publicly available datasets, whose download instructions are given <a href='https://github.com/yandex/ClickHouse/tree/master/doc/example_datasets'>here</a>.
+If you are an external user of the system, you can use the publicly available datasets, whose download instructions are given <a href='https://github.com/ClickHouse/ClickHouse/tree/master/doc/example_datasets'>here</a>.
==If you have questions==

View File

@@ -429,7 +429,7 @@ clickhouse-client
target="_blank">
official Docker images of ClickHouse</a>
. Alternatively you can build ClickHouse from <a
-href="https://github.com/yandex/ClickHouse" rel="external nofollow"
+href="https://github.com/ClickHouse/ClickHouse" rel="external nofollow"
target="_blank">sources</a>
according to the <a
href="https://clickhouse.yandex/docs/en/development/build.html" rel="external nofollow"
@@ -457,7 +457,7 @@ clickhouse-client
<li>Follow official <a
href="https://twitter.com/ClickHouseDB"
rel="external nofollow" target="_blank">Twitter account</a>.</li>
-<li>Open <a href="https://github.com/yandex/ClickHouse/issues/new/choose"
+<li>Open <a href="https://github.com/ClickHouse/ClickHouse/issues/new/choose"
rel="external nofollow" target="_blank">GitHub issue</a> if you have a bug report or feature request.</li>
<li>Or email Yandex ClickHouse team directly at
<a id="feedback_email" href="">turn on JavaScript to see email address</a>.
@@ -476,7 +476,7 @@ clickhouse-client
if you are interested and we'll get in touch.
Short reports about previous meetups are <a href="https://clickhouse.yandex/blog/en?tag=meetup" target="_blank">published in official ClickHouse blog</a>.</p>
-<p class="warranty"><a href="https://github.com/yandex/ClickHouse/blob/master/LICENSE"
+<p class="warranty"><a href="https://github.com/ClickHouse/ClickHouse/blob/master/LICENSE"
rel="external nofollow" target="_blank">
ClickHouse source code is published under Apache 2.0 License.</a> Software is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.</p>
@@ -486,7 +486,7 @@ clickhouse-client
</div>
<a id="github_link"
-href="https://github.com/yandex/ClickHouse"
+href="https://github.com/ClickHouse/ClickHouse"
rel="external nofollow"
target="_blank"
><div id="github">Fork me on GitHub</div></a>
@@ -494,7 +494,7 @@ clickhouse-client
<script type="text/javascript" src="https://yastatic.net/jquery/3.1.1/jquery.min.js"></script>
<script type="text/javascript">
$(document).ready(function () {
-$.get('https://raw.githubusercontent.com/yandex/ClickHouse/master/README.md', function(e) {
+$.get('https://raw.githubusercontent.com/ClickHouse/ClickHouse/master/README.md', function(e) {
var skip = true;
var lines = e.split('\n');
var result = [];

View File

@@ -591,7 +591,7 @@ ENGINE = ReplicatedMergeTree(
repair consistency once they will become active again. Please notice that such scheme allows for the possibility
of just appended data loss.</p>
-<p class="warranty"><a href="https://github.com/yandex/ClickHouse/blob/master/LICENSE"
+<p class="warranty"><a href="https://github.com/ClickHouse/ClickHouse/blob/master/LICENSE"
rel="external nofollow" target="_blank">
ClickHouse source code is published under Apache 2.0 License.</a> Software is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.</p>