mirror of https://github.com/ClickHouse/ClickHouse.git

commit 1e6f9ac635: Merge branch 'master' into yandex-to-clickhouse-in-configs

.github/PULL_REQUEST_TEMPLATE.md (vendored, 2 changes)
@@ -1,5 +1,3 @@
-I hereby agree to the terms of the CLA available at: https://yandex.ru/legal/cla/?lang=en
-
 Changelog category (leave one):
 - New Feature
 - Improvement
@@ -9,35 +9,3 @@ Thank you.
 We have a [developer's guide](https://clickhouse.com/docs/en/development/developer_instruction/) for writing code for ClickHouse. Besides this guide, you can find [Overview of ClickHouse Architecture](https://clickhouse.com/docs/en/development/architecture/) and instructions on how to build ClickHouse in different environments.
 
 If you want to contribute to documentation, read the [Contributing to ClickHouse Documentation](docs/README.md) guide.
-
-## Legal Info
-
-In order for us (YANDEX LLC) to accept patches and other contributions from you, you may adopt our Yandex Contributor License Agreement (the "**CLA**"). The current version of the CLA you may find here:
-1) https://yandex.ru/legal/cla/?lang=en (in English) and
-2) https://yandex.ru/legal/cla/?lang=ru (in Russian).
-
-By adopting the CLA, you state the following:
-
-* You obviously wish and are willingly licensing your contributions to us for our open source projects under the terms of the CLA,
-* You have read the terms and conditions of the CLA and agree with them in full,
-* You are legally able to provide and license your contributions as stated,
-* We may use your contributions for our open source projects and for any other our project too,
-* We rely on your assurances concerning the rights of third parties in relation to your contributions.
-
-If you agree with these principles, please read and adopt our CLA. By providing us your contributions, you hereby declare that you have already read and adopt our CLA, and we may freely merge your contributions with our corresponding open source project and use it in further in accordance with terms and conditions of the CLA.
-
-If you have already adopted terms and conditions of the CLA, you are able to provide your contributes. When you submit your pull request, please add the following information into it:
-
-```
-I hereby agree to the terms of the CLA available at: [link].
-```
-
-Replace the bracketed text as follows:
-* [link] is the link at the current version of the CLA (you may add here a link https://yandex.ru/legal/cla/?lang=en (in English) or a link https://yandex.ru/legal/cla/?lang=ru (in Russian).
-
-It is enough to provide us such notification once.
-
-As an alternative, you can provide DCO instead of CLA. You can find the text of DCO here: https://developercertificate.org/
-It is enough to read and copy it verbatim to your pull request.
-
-If you don't agree with the CLA and don't want to provide DCO, you still can open a pull request to provide your contributions.
@@ -1,7 +1,7 @@
 sudo apt-get install apt-transport-https ca-certificates dirmngr
 sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv E0C56BD4
 
-echo "deb https://repo.clickhouse.tech/deb/stable/ main/" | sudo tee \
+echo "deb https://repo.clickhouse.com/deb/stable/ main/" | sudo tee \
     /etc/apt/sources.list.d/clickhouse.list
 sudo apt-get update
 
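After switching the repository URL, it is worth confirming that apt actually resolves packages from the new host. A quick check (a sketch, assuming an apt-based system with the list file shown above):

```bash
# Hypothetical verification: show where apt would fetch clickhouse-server from.
# The candidate's origin should now mention repo.clickhouse.com.
apt-cache policy clickhouse-server | head -n 8
```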
@@ -1,6 +1,6 @@
 sudo yum install yum-utils
-sudo rpm --import https://repo.clickhouse.tech/CLICKHOUSE-KEY.GPG
-sudo yum-config-manager --add-repo https://repo.clickhouse.tech/rpm/clickhouse.repo
+sudo rpm --import https://repo.clickhouse.com/CLICKHOUSE-KEY.GPG
+sudo yum-config-manager --add-repo https://repo.clickhouse.com/rpm/clickhouse.repo
 sudo yum install clickhouse-server clickhouse-client
 
 sudo /etc/init.d/clickhouse-server start
@@ -1,9 +1,9 @@
-export LATEST_VERSION=$(curl -s https://repo.clickhouse.tech/tgz/stable/ | \
+export LATEST_VERSION=$(curl -s https://repo.clickhouse.com/tgz/stable/ | \
     grep -Eo '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | sort -V -r | head -n 1)
-curl -O https://repo.clickhouse.tech/tgz/stable/clickhouse-common-static-$LATEST_VERSION.tgz
-curl -O https://repo.clickhouse.tech/tgz/stable/clickhouse-common-static-dbg-$LATEST_VERSION.tgz
-curl -O https://repo.clickhouse.tech/tgz/stable/clickhouse-server-$LATEST_VERSION.tgz
-curl -O https://repo.clickhouse.tech/tgz/stable/clickhouse-client-$LATEST_VERSION.tgz
+curl -O https://repo.clickhouse.com/tgz/stable/clickhouse-common-static-$LATEST_VERSION.tgz
+curl -O https://repo.clickhouse.com/tgz/stable/clickhouse-common-static-dbg-$LATEST_VERSION.tgz
+curl -O https://repo.clickhouse.com/tgz/stable/clickhouse-server-$LATEST_VERSION.tgz
+curl -O https://repo.clickhouse.com/tgz/stable/clickhouse-client-$LATEST_VERSION.tgz
 
 tar -xzvf clickhouse-common-static-$LATEST_VERSION.tgz
 sudo clickhouse-common-static-$LATEST_VERSION/install/doinst.sh
@@ -29,7 +29,7 @@ It is recommended to use official pre-compiled `deb` packages for Debian or Ubun
 
 If you want to use the most recent version, replace `stable` with `testing` (this is recommended for your testing environments).
 
-You can also download and install packages manually from [here](https://repo.clickhouse.tech/deb/stable/main/).
+You can also download and install packages manually from [here](https://repo.clickhouse.com/deb/stable/main/).
 
 #### Packages {#packages}
 
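To make the `stable` to `testing` switch concrete, here is a minimal sketch of what the resulting apt source could look like (an illustration, not part of the diff; the file name follows the convention used in the snippets above):

```bash
# Sketch: point apt at the testing channel instead of stable.
echo "deb https://repo.clickhouse.com/deb/testing/ main/" | sudo tee \
    /etc/apt/sources.list.d/clickhouse.list
sudo apt-get update
```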
@@ -50,8 +50,8 @@ First, you need to add the official repository:
 
 ``` bash
 sudo yum install yum-utils
-sudo rpm --import https://repo.clickhouse.tech/CLICKHOUSE-KEY.GPG
-sudo yum-config-manager --add-repo https://repo.clickhouse.tech/rpm/stable/x86_64
+sudo rpm --import https://repo.clickhouse.com/CLICKHOUSE-KEY.GPG
+sudo yum-config-manager --add-repo https://repo.clickhouse.com/rpm/stable/x86_64
 ```
 
 If you want to use the most recent version, replace `stable` with `testing` (this is recommended for your testing environments). `prestable` is sometimes also available.
@@ -62,21 +62,21 @@ Then run these commands to install packages:
 sudo yum install clickhouse-server clickhouse-client
 ```
 
-You can also download and install packages manually from [here](https://repo.clickhouse.tech/rpm/stable/x86_64).
+You can also download and install packages manually from [here](https://repo.clickhouse.com/rpm/stable/x86_64).
 
 ### From Tgz Archives {#from-tgz-archives}
 
 It is recommended to use official pre-compiled `tgz` archives for all Linux distributions, where installation of `deb` or `rpm` packages is not possible.
 
-The required version can be downloaded with `curl` or `wget` from repository https://repo.clickhouse.tech/tgz/.
+The required version can be downloaded with `curl` or `wget` from repository https://repo.clickhouse.com/tgz/.
 After that downloaded archives should be unpacked and installed with installation scripts. Example for the latest version:
 
 ``` bash
 export LATEST_VERSION=`curl https://api.github.com/repos/ClickHouse/ClickHouse/tags 2>/dev/null | grep -Eo '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | head -n 1`
-curl -O https://repo.clickhouse.tech/tgz/clickhouse-common-static-$LATEST_VERSION.tgz
-curl -O https://repo.clickhouse.tech/tgz/clickhouse-common-static-dbg-$LATEST_VERSION.tgz
-curl -O https://repo.clickhouse.tech/tgz/clickhouse-server-$LATEST_VERSION.tgz
-curl -O https://repo.clickhouse.tech/tgz/clickhouse-client-$LATEST_VERSION.tgz
+curl -O https://repo.clickhouse.com/tgz/clickhouse-common-static-$LATEST_VERSION.tgz
+curl -O https://repo.clickhouse.com/tgz/clickhouse-common-static-dbg-$LATEST_VERSION.tgz
+curl -O https://repo.clickhouse.com/tgz/clickhouse-server-$LATEST_VERSION.tgz
+curl -O https://repo.clickhouse.com/tgz/clickhouse-client-$LATEST_VERSION.tgz
 
 tar -xzvf clickhouse-common-static-$LATEST_VERSION.tgz
 sudo clickhouse-common-static-$LATEST_VERSION/install/doinst.sh
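The four `curl` downloads and per-package install steps above follow one pattern; a compact equivalent, assuming the same `repo.clickhouse.com/tgz/stable/` layout and `doinst.sh` convention shown in the snippets:

```bash
# Sketch: download, unpack, and install each package in dependency order.
for pkg in clickhouse-common-static clickhouse-common-static-dbg clickhouse-server clickhouse-client
do
    curl -O "https://repo.clickhouse.com/tgz/stable/${pkg}-${LATEST_VERSION}.tgz"
    tar -xzvf "${pkg}-${LATEST_VERSION}.tgz"
    sudo "${pkg}-${LATEST_VERSION}/install/doinst.sh"
done
```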
@@ -30,7 +30,7 @@ It is recommended to use the official pre-compiled `deb` packages for Debian or Ubuntu.
 
 If you want to use the most recent version, replace `stable` with `testing` (this is recommended for your testing environments).
 
-You can also download and install packages manually from [here](https://repo.clickhouse.tech/deb/stable/main/).
+You can also download and install packages manually from [here](https://repo.clickhouse.com/deb/stable/main/).
 
 #### Packages {#packages}
 
@@ -47,8 +47,8 @@ For CentOS, RedHat, and all other rpm-based Linux distributions
 
 ``` bash
 sudo yum install yum-utils
-sudo rpm --import https://repo.clickhouse.tech/CLICKHOUSE-KEY.GPG
-sudo yum-config-manager --add-repo https://repo.clickhouse.tech/rpm/stable/x86_64
+sudo rpm --import https://repo.clickhouse.com/CLICKHOUSE-KEY.GPG
+sudo yum-config-manager --add-repo https://repo.clickhouse.com/rpm/stable/x86_64
 ```
 
 If you want to use the most recent version, replace `stable` with `testing` (this is recommended for your testing environments). `prestable` is sometimes also available.
@@ -59,20 +59,20 @@ sudo yum-config-manager --add-repo https://repo.clickhouse.tech/rpm/stable/x86_6
 sudo yum install clickhouse-server clickhouse-client
 ```
 
-You can also download and install packages manually from [here](https://repo.clickhouse.tech/rpm/stable/x86_64).
+You can also download and install packages manually from [here](https://repo.clickhouse.com/rpm/stable/x86_64).
 
 ### From Tgz Archives {#from-tgz-archives}
 
 For all Linux distributions where installation of `deb` or `rpm` packages is not possible, it is recommended to use the official pre-compiled `tgz` archives.
 
-The required version can be downloaded with `curl` or `wget` from the repository https://repo.clickhouse.tech/tgz/. After that, unpack the downloaded archives and install them with the installation scripts. Example for the latest version:
+The required version can be downloaded with `curl` or `wget` from the repository https://repo.clickhouse.com/tgz/. After that, unpack the downloaded archives and install them with the installation scripts. Example for the latest version:
 
 ``` bash
 export LATEST_VERSION=`curl https://api.github.com/repos/ClickHouse/ClickHouse/tags 2>/dev/null | grep -Eo '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | head -n 1`
-curl -O https://repo.clickhouse.tech/tgz/clickhouse-common-static-$LATEST_VERSION.tgz
-curl -O https://repo.clickhouse.tech/tgz/clickhouse-common-static-dbg-$LATEST_VERSION.tgz
-curl -O https://repo.clickhouse.tech/tgz/clickhouse-server-$LATEST_VERSION.tgz
-curl -O https://repo.clickhouse.tech/tgz/clickhouse-client-$LATEST_VERSION.tgz
+curl -O https://repo.clickhouse.com/tgz/clickhouse-common-static-$LATEST_VERSION.tgz
+curl -O https://repo.clickhouse.com/tgz/clickhouse-common-static-dbg-$LATEST_VERSION.tgz
+curl -O https://repo.clickhouse.com/tgz/clickhouse-server-$LATEST_VERSION.tgz
+curl -O https://repo.clickhouse.com/tgz/clickhouse-client-$LATEST_VERSION.tgz
 
 tar -xzvf clickhouse-common-static-$LATEST_VERSION.tgz
 sudo clickhouse-common-static-$LATEST_VERSION/install/doinst.sh
@@ -27,11 +27,11 @@ grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not su
 {% include 'install/deb.sh' %}
 ```
 
-These packages can also be downloaded and installed manually from here: https://repo.clickhouse.tech/deb/stable/main/.
+These packages can also be downloaded and installed manually from here: https://repo.clickhouse.com/deb/stable/main/.
 
 If you want to use the most recent version, replace `stable` with `testing` (recommended for testing environments).
 
-You can also manually download and install packages from the [repository](https://repo.clickhouse.tech/deb/stable/main/).
+You can also manually download and install packages from the [repository](https://repo.clickhouse.com/deb/stable/main/).
 
 #### Packages {#packages}
 
@@ -52,8 +52,8 @@ grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not su
 
 ``` bash
 sudo yum install yum-utils
-sudo rpm --import https://repo.clickhouse.tech/CLICKHOUSE-KEY.GPG
-sudo yum-config-manager --add-repo https://repo.clickhouse.tech/rpm/stable/x86_64
+sudo rpm --import https://repo.clickhouse.com/CLICKHOUSE-KEY.GPG
+sudo yum-config-manager --add-repo https://repo.clickhouse.com/rpm/stable/x86_64
 ```
 
 To use the most recent versions, replace `stable` with `testing` (recommended for testing environments). `prestable` is also sometimes available.
@@ -64,21 +64,21 @@ sudo yum-config-manager --add-repo https://repo.clickhouse.tech/rpm/stable/x86_6
 sudo yum install clickhouse-server clickhouse-client
 ```
 
-It is also possible to install packages manually by downloading them from here: https://repo.clickhouse.tech/rpm/stable/x86_64.
+It is also possible to install packages manually by downloading them from here: https://repo.clickhouse.com/rpm/stable/x86_64.
 
 ### From Tgz Archives {#from-tgz-archives}
 
 The ClickHouse team at Yandex recommends using pre-compiled binaries from `tgz` archives for all distributions where installation of `deb` and `rpm` packages is not possible.
 
-The desired version of the archives can be downloaded manually with `curl` or `wget` from the repository https://repo.clickhouse.tech/tgz/.
+The desired version of the archives can be downloaded manually with `curl` or `wget` from the repository https://repo.clickhouse.com/tgz/.
 After that, the archives need to be unpacked and the installation scripts used. Example of installing the latest version:
 
 ``` bash
 export LATEST_VERSION=`curl https://api.github.com/repos/ClickHouse/ClickHouse/tags 2>/dev/null | grep -Eo '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | head -n 1`
-curl -O https://repo.clickhouse.tech/tgz/clickhouse-common-static-$LATEST_VERSION.tgz
-curl -O https://repo.clickhouse.tech/tgz/clickhouse-common-static-dbg-$LATEST_VERSION.tgz
-curl -O https://repo.clickhouse.tech/tgz/clickhouse-server-$LATEST_VERSION.tgz
-curl -O https://repo.clickhouse.tech/tgz/clickhouse-client-$LATEST_VERSION.tgz
+curl -O https://repo.clickhouse.com/tgz/clickhouse-common-static-$LATEST_VERSION.tgz
+curl -O https://repo.clickhouse.com/tgz/clickhouse-common-static-dbg-$LATEST_VERSION.tgz
+curl -O https://repo.clickhouse.com/tgz/clickhouse-server-$LATEST_VERSION.tgz
+curl -O https://repo.clickhouse.com/tgz/clickhouse-client-$LATEST_VERSION.tgz
 
 tar -xzvf clickhouse-common-static-$LATEST_VERSION.tgz
 sudo clickhouse-common-static-$LATEST_VERSION/install/doinst.sh
@@ -5,7 +5,7 @@ BASE_DIR=$(dirname $(readlink -f $0))
 BUILD_DIR="${BASE_DIR}/../build"
 PUBLISH_DIR="${BASE_DIR}/../publish"
 BASE_DOMAIN="${BASE_DOMAIN:-content.clickhouse.com}"
-GIT_TEST_URI="${GIT_TEST_URI:-git@github.com:ClickHouse/clickhouse-website-content.git}"
+GIT_TEST_URI="${GIT_TEST_URI:-git@github.com:ClickHouse/clickhouse-com-content.git}"
 GIT_PROD_URI="git@github.com:ClickHouse/clickhouse-website-content.git"
 EXTRA_BUILD_ARGS="${EXTRA_BUILD_ARGS:---minify --verbose}"
 
@@ -29,7 +29,7 @@ $ grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not
 
 If you want to use the most recent version, replace `stable` with `testing` (we recommend this only for testing environments).
 
-You can also download and install packages manually from [here](https://repo.clickhouse.tech/deb/stable/main/).
+You can also download and install packages manually from [here](https://repo.clickhouse.com/deb/stable/main/).
 
 Package list:
 
@@ -46,8 +46,8 @@ $ grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not
 
 ``` bash
 sudo yum install yum-utils
-sudo rpm --import https://repo.clickhouse.tech/CLICKHOUSE-KEY.GPG
-sudo yum-config-manager --add-repo https://repo.clickhouse.tech/rpm/stable/x86_64
+sudo rpm --import https://repo.clickhouse.com/CLICKHOUSE-KEY.GPG
+sudo yum-config-manager --add-repo https://repo.clickhouse.com/rpm/stable/x86_64
 ```
 
 If you want to use the most recent version, replace `stable` with `testing` (we recommend this only for testing environments). `prestable` is sometimes also available.
@@ -58,22 +58,22 @@ sudo yum-config-manager --add-repo https://repo.clickhouse.tech/rpm/stable/x86_6
 sudo yum install clickhouse-server clickhouse-client
 ```
 
-You can also download and install packages manually from [here](https://repo.clickhouse.tech/rpm/stable/x86_64).
+You can also download and install packages manually from [here](https://repo.clickhouse.com/rpm/stable/x86_64).
 
 ### `Tgz` Packages {#from-tgz-archives}
 
 If your operating system does not support installing `deb` or `rpm` packages, we recommend using the official pre-compiled `tgz` packages.
 
-The required version can be downloaded with `curl` or `wget` from the repository `https://repo.clickhouse.tech/tgz/`.
+The required version can be downloaded with `curl` or `wget` from the repository `https://repo.clickhouse.com/tgz/`.
 
 After downloading, unpack the archive files and install them with the installation script. Below is an example of installing the latest version:
 
 ``` bash
 export LATEST_VERSION=`curl https://api.github.com/repos/ClickHouse/ClickHouse/tags 2>/dev/null | grep -Eo '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | head -n 1`
-curl -O https://repo.clickhouse.tech/tgz/clickhouse-common-static-$LATEST_VERSION.tgz
-curl -O https://repo.clickhouse.tech/tgz/clickhouse-common-static-dbg-$LATEST_VERSION.tgz
-curl -O https://repo.clickhouse.tech/tgz/clickhouse-server-$LATEST_VERSION.tgz
-curl -O https://repo.clickhouse.tech/tgz/clickhouse-client-$LATEST_VERSION.tgz
+curl -O https://repo.clickhouse.com/tgz/clickhouse-common-static-$LATEST_VERSION.tgz
+curl -O https://repo.clickhouse.com/tgz/clickhouse-common-static-dbg-$LATEST_VERSION.tgz
+curl -O https://repo.clickhouse.com/tgz/clickhouse-server-$LATEST_VERSION.tgz
+curl -O https://repo.clickhouse.com/tgz/clickhouse-client-$LATEST_VERSION.tgz
 
 tar -xzvf clickhouse-common-static-$LATEST_VERSION.tgz
 sudo clickhouse-common-static-$LATEST_VERSION/install/doinst.sh
@@ -859,6 +859,9 @@ if (ThreadFuzzer::instance().isEffective())
             if (config->has("max_partition_size_to_drop"))
                 global_context->setMaxPartitionSizeToDrop(config->getUInt64("max_partition_size_to_drop"));
 
+            if (config->has("max_concurrent_queries"))
+                global_context->getProcessList().setMaxSize(config->getInt("max_concurrent_queries", 0));
+
             if (!initial_loading)
             {
                 /// We do not load ZooKeeper configuration on the first config loading
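The added block makes `max_concurrent_queries` one of the settings picked up on configuration reload, so it can be changed without a server restart. A sketch of exercising it (assumes the standard `config.d` override layout; the file name is hypothetical):

```bash
# Drop in an override; the running server applies it on the next config reload.
cat <<'EOF' | sudo tee /etc/clickhouse-server/config.d/max_concurrent_queries.xml
<clickhouse>
    <max_concurrent_queries>200</max_concurrent_queries>
</clickhouse>
EOF
```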
@@ -350,7 +350,7 @@ class IColumn;
     M(UInt64, max_memory_usage, 0, "Maximum memory usage for processing of single query. Zero means unlimited.", 0) \
     M(UInt64, max_memory_usage_for_user, 0, "Maximum memory usage for processing all concurrently running queries for the user. Zero means unlimited.", 0) \
     M(UInt64, max_untracked_memory, (4 * 1024 * 1024), "Small allocations and deallocations are grouped in thread local variable and tracked or profiled only when amount (in absolute value) becomes larger than specified value. If the value is higher than 'memory_profiler_step' it will be effectively lowered to 'memory_profiler_step'.", 0) \
-    M(UInt64, memory_profiler_step, 0, "Whenever query memory usage becomes larger than every next step in number of bytes the memory profiler will collect the allocating stack trace. Zero means disabled memory profiler. Values lower than a few megabytes will slow down query processing.", 0) \
+    M(UInt64, memory_profiler_step, (4 * 1024 * 1024), "Whenever query memory usage becomes larger than every next step in number of bytes the memory profiler will collect the allocating stack trace. Zero means disabled memory profiler. Values lower than a few megabytes will slow down query processing.", 0) \
     M(Float, memory_profiler_sample_probability, 0., "Collect random allocations and deallocations and write them into system.trace_log with 'MemorySample' trace_type. The probability is for every alloc/free regardless to the size of the allocation. Note that sampling happens only when the amount of untracked memory exceeds 'max_untracked_memory'. You may want to set 'max_untracked_memory' to 0 for extra fine grained sampling.", 0) \
     \
     M(UInt64, max_network_bandwidth, 0, "The maximum speed of data exchange over the network in bytes per second for a query. Zero means unlimited.", 0) \
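With the default `memory_profiler_step` raised from 0 (disabled) to 4 MiB, queries now record allocating stack traces into `system.trace_log` out of the box. A quick way to see the effect (illustration only; assumes a running server with `trace_log` enabled):

```bash
# Count memory-profiler samples collected today; with the old default of 0
# the profiler was off and this would stay at zero.
clickhouse-client --query "
    SELECT count()
    FROM system.trace_log
    WHERE trace_type = 'Memory' AND event_date = today()"
```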
@@ -107,6 +107,8 @@ public:
     FunctionBasePtr buildImpl(const ColumnsWithTypeAndName & arguments, const DataTypePtr & return_type) const override
     {
         DataTypes argument_types;
+        for (const auto & argument : arguments)
+            argument_types.push_back(argument.type);
 
         /// More efficient specialization for two numeric arguments.
         if (arguments.size() == 2 && isNumber(arguments[0].type) && isNumber(arguments[1].type))
@@ -511,8 +511,6 @@ bool MergeTreeIndexConditionBloomFilter::traverseASTEquals(
     RPNElement & out,
     const ASTPtr & parent)
 {
-    std::cerr << "MergeTreeIndexConditionBloomFilter::traverseASTEquals " << function_name << " ast " << key_ast->formatForErrorMessage() << std::endl;
-
     if (header.has(key_ast->getColumnName()))
     {
         size_t position = header.getPositionByName(key_ast->getColumnName());
@@ -1,66 +0,0 @@
-#include <DataStreams/IBlockInputStream.h>
-#include <Interpreters/Context.h>
-#include <IO/ReadBufferFromString.h>
-#include <Storages/RocksDB/StorageEmbeddedRocksDB.h>
-#include <Storages/RocksDB/EmbeddedRocksDBBlockInputStream.h>
-
-#include <rocksdb/db.h>
-
-
-namespace DB
-{
-
-namespace ErrorCodes
-{
-    extern const int ROCKSDB_ERROR;
-}
-
-EmbeddedRocksDBBlockInputStream::EmbeddedRocksDBBlockInputStream(
-    StorageEmbeddedRocksDB & storage_,
-    const StorageMetadataPtr & metadata_snapshot_,
-    size_t max_block_size_)
-    : storage(storage_)
-    , metadata_snapshot(metadata_snapshot_)
-    , max_block_size(max_block_size_)
-{
-    sample_block = metadata_snapshot->getSampleBlock();
-    primary_key_pos = sample_block.getPositionByName(storage.primary_key);
-}
-
-Block EmbeddedRocksDBBlockInputStream::readImpl()
-{
-    if (finished)
-        return {};
-
-    if (!iterator)
-    {
-        iterator = std::unique_ptr<rocksdb::Iterator>(storage.rocksdb_ptr->NewIterator(rocksdb::ReadOptions()));
-        iterator->SeekToFirst();
-    }
-
-    MutableColumns columns = sample_block.cloneEmptyColumns();
-
-    for (size_t rows = 0; iterator->Valid() && rows < max_block_size; ++rows, iterator->Next())
-    {
-        ReadBufferFromString key_buffer(iterator->key());
-        ReadBufferFromString value_buffer(iterator->value());
-
-        size_t idx = 0;
-        for (const auto & elem : sample_block)
-        {
-            auto serialization = elem.type->getDefaultSerialization();
-            serialization->deserializeBinary(*columns[idx], idx == primary_key_pos ? key_buffer : value_buffer);
-            ++idx;
-        }
-    }
-
-    finished = !iterator->Valid();
-    if (!iterator->status().ok())
-    {
-        throw Exception("Engine " + getName() + " got error while seeking key value data: " + iterator->status().ToString(),
-            ErrorCodes::ROCKSDB_ERROR);
-    }
-    return sample_block.cloneWithColumns(std::move(columns));
-}
-
-}
@@ -1,39 +0,0 @@
-#pragma once
-
-#include <DataStreams/IBlockInputStream.h>
-
-
-namespace rocksdb
-{
-    class Iterator;
-}
-
-namespace DB
-{
-
-class StorageEmbeddedRocksDB;
-struct StorageInMemoryMetadata;
-using StorageMetadataPtr = std::shared_ptr<const StorageInMemoryMetadata>;
-
-class EmbeddedRocksDBBlockInputStream : public IBlockInputStream
-{
-
-public:
-    EmbeddedRocksDBBlockInputStream(
-        StorageEmbeddedRocksDB & storage_, const StorageMetadataPtr & metadata_snapshot_, size_t max_block_size_);
-
-    String getName() const override { return "EmbeddedRocksDB"; }
-    Block getHeader() const override { return sample_block; }
-    Block readImpl() override;
-
-private:
-    StorageEmbeddedRocksDB & storage;
-    StorageMetadataPtr metadata_snapshot;
-    const size_t max_block_size;
-
-    Block sample_block;
-    std::unique_ptr<rocksdb::Iterator> iterator;
-    size_t primary_key_pos;
-    bool finished = false;
-};
-}
@@ -1,6 +1,5 @@
 #include <Storages/RocksDB/StorageEmbeddedRocksDB.h>
 #include <Storages/RocksDB/EmbeddedRocksDBSink.h>
-#include <Storages/RocksDB/EmbeddedRocksDBBlockInputStream.h>
 
 #include <DataTypes/DataTypesNumber.h>
 
@@ -182,15 +181,15 @@ class EmbeddedRocksDBSource : public SourceWithProgress
 public:
     EmbeddedRocksDBSource(
         const StorageEmbeddedRocksDB & storage_,
-        const StorageMetadataPtr & metadata_snapshot_,
+        const Block & header,
         FieldVectorPtr keys_,
         FieldVector::const_iterator begin_,
         FieldVector::const_iterator end_,
         const size_t max_block_size_)
-        : SourceWithProgress(metadata_snapshot_->getSampleBlock())
+        : SourceWithProgress(header)
         , storage(storage_)
-        , metadata_snapshot(metadata_snapshot_)
-        , keys(std::move(keys_))
+        , primary_key_pos(header.getPositionByName(storage.getPrimaryKey()))
+        , keys(keys_)
         , begin(begin_)
         , end(end_)
         , it(begin)
@@ -198,12 +197,29 @@ public:
     {
     }
 
-    String getName() const override
+    EmbeddedRocksDBSource(
+        const StorageEmbeddedRocksDB & storage_,
+        const Block & header,
+        std::unique_ptr<rocksdb::Iterator> iterator_,
+        const size_t max_block_size_)
+        : SourceWithProgress(header)
+        , storage(storage_)
+        , primary_key_pos(header.getPositionByName(storage.getPrimaryKey()))
+        , iterator(std::move(iterator_))
+        , max_block_size(max_block_size_)
     {
-        return storage.getName();
     }
 
+    String getName() const override { return storage.getName(); }
+
     Chunk generate() override
     {
+        if (keys)
+            return generateWithKeys();
+        return generateFullScan();
+    }
+
+    Chunk generateWithKeys()
+    {
         if (it >= end)
             return {};
@@ -213,16 +229,15 @@ public:
         std::vector<std::string> serialized_keys(num_keys);
         std::vector<rocksdb::Slice> slices_keys(num_keys);
 
-        const auto & sample_block = metadata_snapshot->getSampleBlock();
-        const auto & key_column = sample_block.getByName(storage.getPrimaryKey());
-        auto columns = sample_block.cloneEmptyColumns();
-        size_t primary_key_pos = sample_block.getPositionByName(storage.getPrimaryKey());
+        const auto & sample_block = getPort().getHeader();
+
+        const auto & key_column_type = sample_block.getByName(storage.getPrimaryKey()).type;
 
         size_t rows_processed = 0;
         while (it < end && rows_processed < max_block_size)
         {
             WriteBufferFromString wb(serialized_keys[rows_processed]);
-            key_column.type->getDefaultSerialization()->serializeBinary(*it, wb);
+            key_column_type->getDefaultSerialization()->serializeBinary(*it, wb);
             wb.finalize();
             slices_keys[rows_processed] = std::move(serialized_keys[rows_processed]);
 
@@ -230,6 +245,7 @@ public:
             ++rows_processed;
         }
 
+        MutableColumns columns = sample_block.cloneEmptyColumns();
         std::vector<String> values;
         auto statuses = storage.multiGet(slices_keys, values);
         for (size_t i = 0; i < statuses.size(); ++i)
@@ -238,13 +254,7 @@ public:
             {
                 ReadBufferFromString key_buffer(slices_keys[i]);
                 ReadBufferFromString value_buffer(values[i]);
-
-                size_t idx = 0;
-                for (const auto & elem : sample_block)
-                {
-                    elem.type->getDefaultSerialization()->deserializeBinary(*columns[idx], idx == primary_key_pos ? key_buffer : value_buffer);
-                    ++idx;
-                }
+                fillColumns(key_buffer, value_buffer, columns);
             }
         }
 
@@ -252,14 +262,54 @@ public:
         return Chunk(std::move(columns), num_rows);
     }
 
+    Chunk generateFullScan()
+    {
+        if (!iterator->Valid())
+            return {};
+
+        const auto & sample_block = getPort().getHeader();
+        MutableColumns columns = sample_block.cloneEmptyColumns();
+
+        for (size_t rows = 0; iterator->Valid() && rows < max_block_size; ++rows, iterator->Next())
+        {
+            ReadBufferFromString key_buffer(iterator->key());
+            ReadBufferFromString value_buffer(iterator->value());
+            fillColumns(key_buffer, value_buffer, columns);
+        }
+
+        if (!iterator->status().ok())
+        {
+            throw Exception("Engine " + getName() + " got error while seeking key value data: " + iterator->status().ToString(),
+                ErrorCodes::ROCKSDB_ERROR);
+        }
+        Block block = sample_block.cloneWithColumns(std::move(columns));
+        return Chunk(block.getColumns(), block.rows());
+    }
+
+    void fillColumns(ReadBufferFromString & key_buffer, ReadBufferFromString & value_buffer, MutableColumns & columns)
+    {
+        size_t idx = 0;
+        for (const auto & elem : getPort().getHeader())
+        {
+            elem.type->getDefaultSerialization()->deserializeBinary(*columns[idx], idx == primary_key_pos ? key_buffer : value_buffer);
+            ++idx;
+        }
+    }
+
 private:
     const StorageEmbeddedRocksDB & storage;
 
-    const StorageMetadataPtr metadata_snapshot;
-    FieldVectorPtr keys;
+    size_t primary_key_pos;
+
+    /// For key scan
+    FieldVectorPtr keys = nullptr;
     FieldVector::const_iterator begin;
     FieldVector::const_iterator end;
     FieldVector::const_iterator it;
 
+    /// For full scan
+    std::unique_ptr<rocksdb::Iterator> iterator = nullptr;
+
     const size_t max_block_size;
 };
@@ -379,7 +429,6 @@ void StorageEmbeddedRocksDB::initDB()
     rocksdb_ptr = std::unique_ptr<rocksdb::DB>(db);
 }
 
-
 Pipe StorageEmbeddedRocksDB::read(
     const Names & column_names,
     const StorageMetadataPtr & metadata_snapshot,
@@ -394,13 +443,14 @@ Pipe StorageEmbeddedRocksDB::read(
     FieldVectorPtr keys;
     bool all_scan = false;
 
-    auto primary_key_data_type = metadata_snapshot->getSampleBlock().getByName(primary_key).type;
+    Block sample_block = metadata_snapshot->getSampleBlock();
+    auto primary_key_data_type = sample_block.getByName(primary_key).type;
     std::tie(keys, all_scan) = getFilterKeys(primary_key, primary_key_data_type, query_info);
     if (all_scan)
     {
-        auto reader = std::make_shared<EmbeddedRocksDBBlockInputStream>(
-            *this, metadata_snapshot, max_block_size);
-        return Pipe(std::make_shared<SourceFromInputStream>(reader));
+        auto iterator = std::unique_ptr<rocksdb::Iterator>(rocksdb_ptr->NewIterator(rocksdb::ReadOptions()));
+        iterator->SeekToFirst();
+        return Pipe(std::make_shared<EmbeddedRocksDBSource>(*this, sample_block, std::move(iterator), max_block_size));
     }
     else
     {
@@ -424,7 +474,7 @@ Pipe StorageEmbeddedRocksDB::read(
         size_t end = num_keys * (thread_idx + 1) / num_threads;
 
         pipes.emplace_back(std::make_shared<EmbeddedRocksDBSource>(
-            *this, metadata_snapshot, keys, keys->begin() + begin, keys->begin() + end, max_block_size));
+            *this, sample_block, keys, keys->begin() + begin, keys->begin() + end, max_block_size));
     }
     return Pipe::unitePipes(std::move(pipes));
 }
@@ -436,7 +486,6 @@ SinkToStoragePtr StorageEmbeddedRocksDB::write(
     return std::make_shared<EmbeddedRocksDBSink>(*this, metadata_snapshot);
 }
 
-
 static StoragePtr create(const StorageFactory::Arguments & args)
 {
     // TODO custom RocksDBSettings, table function
@@ -23,7 +23,6 @@ class StorageEmbeddedRocksDB final : public shared_ptr_helper<StorageEmbeddedRoc
 {
     friend struct shared_ptr_helper<StorageEmbeddedRocksDB>;
     friend class EmbeddedRocksDBSink;
-    friend class EmbeddedRocksDBBlockInputStream;
 public:
     std::string getName() const override { return "EmbeddedRocksDB"; }
 
@@ -1050,7 +1050,7 @@ class ClickHouseCluster:
                 errors += [str(ex)]
                 time.sleep(interval)
 
-        run_and_check(['docker-compose', 'ps', '--services', '--all'])
+        run_and_check(['docker', 'ps', '--all'])
         logging.error("Can't connect to URL:{}".format(errors))
         raise Exception("Cannot wait URL {}(interval={}, timeout={}, attempts={})".format(
             url, interval, timeout, attempts))
@@ -1434,6 +1434,7 @@ class ClickHouseCluster:
             for dir in self.zookeeper_dirs_to_create:
                 os.makedirs(dir)
             run_and_check(self.base_zookeeper_cmd + common_opts, env=self.env)
+            self.up_called = True
 
             self.wait_zookeeper_secure_to_start()
             for command in self.pre_zookeeper_commands:
@@ -1459,6 +1460,7 @@ class ClickHouseCluster:
                 shutil.copy(os.path.join(HELPERS_DIR, f'keeper_config{i}.xml'), os.path.join(self.keeper_instance_dir_prefix + f"{i}", "config" ))
 
             run_and_check(self.base_zookeeper_cmd + common_opts, env=self.env)
+            self.up_called = True
 
            self.wait_zookeeper_to_start()
             for command in self.pre_zookeeper_commands:
@@ -1476,6 +1478,7 @@ class ClickHouseCluster:
             os.makedirs(self.mysql_logs_dir)
             os.chmod(self.mysql_logs_dir, stat.S_IRWXO)
             subprocess_check_call(self.base_mysql_cmd + common_opts)
+            self.up_called = True
             self.wait_mysql_to_start()
 
         if self.with_mysql8 and self.base_mysql8_cmd:
@@ -1495,6 +1498,7 @@ class ClickHouseCluster:
             os.chmod(self.mysql_cluster_logs_dir, stat.S_IRWXO)
 
             subprocess_check_call(self.base_mysql_cluster_cmd + common_opts)
+            self.up_called = True
             self.wait_mysql_cluster_to_start()
 
         if self.with_postgres and self.base_postgres_cmd:
@@ -1505,6 +1509,7 @@ class ClickHouseCluster:
             os.chmod(self.postgres_logs_dir, stat.S_IRWXO)
 
             subprocess_check_call(self.base_postgres_cmd + common_opts)
+            self.up_called = True
             self.wait_postgres_to_start()
 
         if self.with_postgres_cluster and self.base_postgres_cluster_cmd:
@@ -1516,17 +1521,20 @@ class ClickHouseCluster:
             os.makedirs(self.postgres4_logs_dir)
             os.chmod(self.postgres4_logs_dir, stat.S_IRWXO)
             subprocess_check_call(self.base_postgres_cluster_cmd + common_opts)
+            self.up_called = True
             self.wait_postgres_cluster_to_start()
 
         if self.with_kafka and self.base_kafka_cmd:
             logging.debug('Setup Kafka')
             subprocess_check_call(self.base_kafka_cmd + common_opts + ['--renew-anon-volumes'])
+            self.up_called = True
             self.wait_kafka_is_available(self.kafka_docker_id, self.kafka_port)
             self.wait_schema_registry_to_start()
 
         if self.with_kerberized_kafka and self.base_kerberized_kafka_cmd:
             logging.debug('Setup kerberized kafka')
             run_and_check(self.base_kerberized_kafka_cmd + common_opts + ['--renew-anon-volumes'])
+            self.up_called = True
             self.wait_kafka_is_available(self.kerberized_kafka_docker_id, self.kerberized_kafka_port, 100)
 
         if self.with_rabbitmq and self.base_rabbitmq_cmd:
@@ -1536,6 +1544,7 @@ class ClickHouseCluster:
 
             for i in range(5):
                 subprocess_check_call(self.base_rabbitmq_cmd + common_opts + ['--renew-anon-volumes'])
+                self.up_called = True
                 self.rabbitmq_docker_id = self.get_instance_docker_id('rabbitmq1')
                 logging.debug(f"RabbitMQ checking container try: {i}")
                 if self.wait_rabbitmq_to_start(throw=(i==4)):
@@ -1546,6 +1555,7 @@ class ClickHouseCluster:
             os.makedirs(self.hdfs_logs_dir)
             os.chmod(self.hdfs_logs_dir, stat.S_IRWXO)
             subprocess_check_call(self.base_hdfs_cmd + common_opts)
+            self.up_called = True
             self.make_hdfs_api()
             self.wait_hdfs_to_start()
 
@@ -1554,23 +1564,27 @@ class ClickHouseCluster:
             os.makedirs(self.hdfs_kerberized_logs_dir)
             os.chmod(self.hdfs_kerberized_logs_dir, stat.S_IRWXO)
             run_and_check(self.base_kerberized_hdfs_cmd + common_opts)
+            self.up_called = True
             self.make_hdfs_api(kerberized=True)
             self.wait_hdfs_to_start(check_marker=True)
 
         if self.with_nginx and self.base_nginx_cmd:
             logging.debug('Setup nginx')
             subprocess_check_call(self.base_nginx_cmd + common_opts + ['--renew-anon-volumes'])
+            self.up_called = True
             self.nginx_docker_id = self.get_instance_docker_id('nginx')
             self.wait_nginx_to_start()
 
         if self.with_mongo and self.base_mongo_cmd:
             logging.debug('Setup Mongo')
             run_and_check(self.base_mongo_cmd + common_opts)
+            self.up_called = True
             self.wait_mongo_to_start(30, secure=self.with_mongo_secure)
 
         if self.with_redis and self.base_redis_cmd:
             logging.debug('Setup Redis')
             subprocess_check_call(self.base_redis_cmd + common_opts)
+            self.up_called = True
             time.sleep(10)
 
         if self.with_minio and self.base_minio_cmd:
@@ -1585,11 +1599,13 @@ class ClickHouseCluster:
 
             logging.info("Trying to create Minio instance by command %s", ' '.join(map(str, minio_start_cmd)))
             run_and_check(minio_start_cmd)
+            self.up_called = True
             logging.info("Trying to connect to Minio...")
             self.wait_minio_to_start(secure=self.minio_certs_dir is not None)
 
         if self.with_cassandra and self.base_cassandra_cmd:
             subprocess_check_call(self.base_cassandra_cmd + ['up', '-d'])
+            self.up_called = True
             self.wait_cassandra_to_start()
 
         if self.with_jdbc_bridge and self.base_jdbc_bridge_cmd:
|
||||
os.chmod(self.jdbc_driver_logs_dir, stat.S_IRWXO)
|
||||
|
||||
subprocess_check_call(self.base_jdbc_bridge_cmd + ['up', '-d'])
|
||||
self.up_called = True
|
||||
self.jdbc_bridge_ip = self.get_instance_ip(self.jdbc_bridge_host)
|
||||
self.wait_for_url(f"http://{self.jdbc_bridge_ip}:{self.jdbc_bridge_port}/ping")
|
||||
|
||||
@@ -1668,6 +1685,8 @@ class ClickHouseCluster:
                 subprocess_check_call(self.base_cmd + ['down', '--volumes'])
             except Exception as e:
                 logging.debug("Down + remove orphans failed durung shutdown. {}".format(repr(e)))
+        else:
+            logging.warning("docker-compose up was not called. Trying to export docker.log for running containers")
 
         self.cleanup()
 
@@ -1750,6 +1769,7 @@ CLICKHOUSE_START_COMMAND = "clickhouse server --config-file=/etc/clickhouse-serv
 
 CLICKHOUSE_STAY_ALIVE_COMMAND = 'bash -c "trap \'killall tail\' INT TERM; {} --daemon; coproc tail -f /dev/null; wait $$!"'.format(CLICKHOUSE_START_COMMAND)
 
+# /run/xtables.lock passed inside for correct iptables --wait
 DOCKER_COMPOSE_TEMPLATE = '''
 version: '2.3'
 services:
@@ -1761,6 +1781,7 @@ services:
             - {db_dir}:/var/lib/clickhouse/
             - {logs_dir}:/var/log/clickhouse-server/
             - /etc/passwd:/etc/passwd:ro
+            - /run/xtables.lock:/run/xtables.lock:ro
             {binary_volume}
             {odbc_bridge_volume}
             {library_bridge_volume}
@@ -212,6 +212,8 @@ class _NetworkManager:
         self._container = self._docker_client.containers.run('clickhouse/integration-helper',
                                                              auto_remove=True,
                                                              command=('sleep %s' % self.container_exit_timeout),
+                                                             # /run/xtables.lock passed inside for correct iptables --wait
+                                                             volumes={'/run/xtables.lock': {'bind': '/run/xtables.lock', 'mode': 'ro' }},
                                                              detach=True, network_mode='host')
         container_id = self._container.id
         self._container_expire_time = time.time() + self.container_expire_timeout
@@ -276,6 +276,7 @@ if __name__ == "__main__":
         --volume={library_bridge_bin}:/clickhouse-library-bridge --volume={bin}:/clickhouse \
         --volume={base_cfg}:/clickhouse-config --volume={cases_dir}:/ClickHouse/tests/integration \
         --volume={src_dir}/Server/grpc_protos:/ClickHouse/src/Server/grpc_protos \
+        --volume=/run/xtables.lock:/run/xtables.lock:ro \
         {dockerd_internal_volume} -e DOCKER_CLIENT_TIMEOUT=300 -e COMPOSE_HTTP_TIMEOUT=600 \
         {env_tags} {env_cleanup} -e PYTEST_OPTS='{parallel} {opts} {tests_list} -vvv' {img} {command}".format(
         net=net,
@@ -1,3 +1,16 @@
+---Q1---
 s s s
+---Q2---
 s s s
+---Q3---
 s s s
+---Q4---
+s s s
+\N s1 s1
+\N \N s2
+---Q5---
+---Q6---
+---Q7---
+---Q8---
+s s s
+---Q9---
@@ -7,10 +7,34 @@ ENGINE = MergeTree() ORDER BY (id,s) SETTINGS allow_nullable_key = 1;
 
 INSERT into test_23634 values ('s','s','s'), (null,'s1','s1'), (null,null,'s2'), (null,null,null);
 
+select '---Q1---';
 select * from test_23634 where id !='';
 
+select '---Q2---';
 select * from test_23634 where id !='' and s != '';
 
+select '---Q3---';
 select * from test_23634 where id !='' and s != '' and s1 != '';
 
+set force_primary_key=0;
+
+select '---Q4---';
+select * from test_23634 where (id, s, s1) != ('', '', '') order by id, s1, s1;
+
+select '---Q5---';
+select * from test_23634 where (id, s, s1) = ('', '', '') order by id, s1, s1;
+
+select '---Q6---';
+select * from test_23634 where (id, s, s1) = ('', '', 's2') order by id, s1, s1;
+
+select '---Q7---';
+select * from test_23634 where (id, s, s1) = ('', 's1', 's1') order by id, s1, s1;
+
+select '---Q8---';
+select * from test_23634 where (id, s, s1) = ('s', 's', 's') order by id, s1, s1;
+
+select '---Q9---';
+select * from test_23634 where (id, s, s1) = (null::Nullable(String), null::Nullable(String), null::Nullable(String)) order by id, s1, s1;
+
 drop table test_23634;
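Q5 through Q7 and Q9 in the reference produce no rows: a comparison involving NULL evaluates to NULL, which `WHERE` treats as false. A standalone check of that behaviour (a sketch using `clickhouse-local`, which needs no server):

```bash
# Tuple equality with NULL elements yields NULL (\N), not 1, so such rows
# never pass a WHERE filter.
clickhouse-local --query "SELECT (NULL, 's') = (NULL, 's')"
```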
@@ -153,3 +153,4 @@ select number % 2 and toLowCardinality(number) from numbers(5);
 select number % 2 or toLowCardinality(number) from numbers(5);
 select if(toLowCardinality(number) % 2, number, number + 1) from numbers(10);
+select multiIf(toLowCardinality(number) % 2, number, number + 1) from numbers(10);
@@ -0,0 +1,20 @@
+0
+1
+1
+1
+1
+1
+0
+1
+1
+1
+0
+1
+0
+0
+0
+1
+0
+1
+0
+0
@@ -0,0 +1,2 @@
+select 1 and greatest(number % 2, number % 3) from numbers(10);
+select 1 and least(number % 2, number % 3) from numbers(10);
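The 20 values in the new reference file are the first query's 10 results followed by the second's: `greatest(number % 2, number % 3)` is non-zero except at multiples of 6, while `least(...)` is non-zero only when both remainders are. To eyeball this per row (a sketch using `clickhouse-local`):

```bash
# Show each row next to the truthiness the test asserts.
clickhouse-local --query "
    SELECT number,
           1 AND greatest(number % 2, number % 3) AS g,
           1 AND least(number % 2, number % 3)    AS l
    FROM numbers(10)"
```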
@@ -195,6 +195,38 @@ find $ROOT_PATH/{src,programs,utils} -name '*.h' -or -name '*.cpp' |
     grep -vP $EXCLUDE_DIRS |
     xargs grep -P 'std::[io]?stringstream' | grep -v "STYLE_CHECK_ALLOW_STD_STRING_STREAM" && echo "Use WriteBufferFromOwnString or ReadBufferFromString instead of std::stringstream"
 
+# Forbid std::cerr/std::cout in src (fine in programs/utils)
+std_cerr_cout_excludes=(
+    /examples/
+    /tests/
+    _fuzzer
+    # OK
+    src/Common/ProgressIndication.cpp
+    # only under #ifdef DBMS_HASH_MAP_DEBUG_RESIZES, that is used only in tests
+    src/Common/HashTable/HashTable.h
+    # SensitiveDataMasker::printStats()
+    src/Common/SensitiveDataMasker.cpp
+    # StreamStatistics::print()
+    src/Compression/LZ4_decompress_faster.cpp
+    # ContextSharedPart with subsequent std::terminate()
+    src/Interpreters/Context.cpp
+    # IProcessor::dump()
+    src/Processors/IProcessor.cpp
+)
+sources_with_std_cerr_cout=( $(
+    find $ROOT_PATH/src -name '*.h' -or -name '*.cpp' | \
+        grep -vP $EXCLUDE_DIRS | \
+        grep -F -v $(printf -- "-e %s " "${std_cerr_cout_excludes[@]}") | \
+        xargs grep -F --with-filename -e std::cerr -e std::cout | cut -d: -f1 | sort -u
+) )
+# Exclude comments
+for src in "${sources_with_std_cerr_cout[@]}"; do
+    # suppress stderr, since it may contain warning for #pargma once in headers
+    if gcc -fpreprocessed -dD -E "$src" 2>/dev/null | grep -F -q -e std::cerr -e std::cout; then
+        echo "$src: uses std::cerr/std::cout"
+    fi
+done
+
 # Conflict markers
 find $ROOT_PATH/{src,base,programs,utils,tests,docs,website,cmake} -name '*.md' -or -name '*.cpp' -or -name '*.h' |
     xargs grep -P '^(<<<<<<<|=======|>>>>>>>)$' | grep -P '.' && echo "Conflict markers are found in files"
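The `gcc -fpreprocessed -dD -E` pass in the added check strips comments without expanding includes, so a commented-out `std::cout` no longer triggers a false positive. A minimal standalone demonstration (hypothetical file; assumes gcc is installed):

```bash
cat > /tmp/style_check_demo.cpp <<'EOF'
// std::cout << "commented out, must not be reported";
int main() { return 0; }
EOF
# grep finds the token in the raw file, but not after comment stripping.
grep -c 'std::cout' /tmp/style_check_demo.cpp                                          # -> 1
gcc -fpreprocessed -dD -E /tmp/style_check_demo.cpp 2>/dev/null | grep -c 'std::cout'  # -> 0
```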
@@ -8,27 +8,27 @@ tags: ['article', 'CDN', 'Cloudflare', 'repository', 'deb', 'rpm', 'tgz']
 On initial open-source launch, ClickHouse packages were published at an independent repository implemented on Yandex infrastructure. We'd love to use the default repositories of Linux distributions, but, unfortunately, they have their own strict rules on third-party library usage and software compilation options. These rules happen to contradict with how ClickHouse is produced. In 2018 ClickHouse was added to [official Debian repository](https://packages.debian.org/sid/clickhouse-server) as an experiment, but it didn't get much traction. Adaptation to those rules ended up producing more like a demo version of ClickHouse with crippled performance and limited features.
 
 !!! info "TL;DR"
-    If you have configured your system to use <http://repo.yandex.ru/clickhouse/> for fetching ClickHouse packages, replace it with <https://repo.clickhouse.tech/>.
+    If you have configured your system to use <http://repo.yandex.ru/clickhouse/> for fetching ClickHouse packages, replace it with <https://repo.clickhouse.com/>.
 
 Distributing packages via our own repository was working totally fine until ClickHouse has started getting traction in countries far from Moscow, most notably the USA and China. Downloading large files of packages from remote location was especially painful for Chinese ClickHouse users, likely due to how China is connected to the rest of the world via its famous firewall. But at least it worked (with high latencies and low throughput), while in some smaller countries there was completely no access to this repository and people living there had to host their own mirrors on neutral ground as a workaround.
 
-Earlier this year we made the ClickHouse official website to be served via global CDN by [Cloudflare](https://www.cloudflare.com) on a `clickhouse.com` domain. To solve the download issues discussed above, we have also configured a new location for ClickHouse packages that are also served by Cloudflare at [repo.clickhouse.tech](https://repo.clickhouse.tech). It used to have some quirks, but now it seems to be working fine while improving throughput and latencies in remote geographical locations by over an order of magnitude.
+Earlier this year we made the ClickHouse official website to be served via global CDN by [Cloudflare](https://www.cloudflare.com) on a `clickhouse.tech` domain. To solve the download issues discussed above, we have also configured a new location for ClickHouse packages that are also served by Cloudflare at [repo.clickhouse.com](https://repo.clickhouse.com). It used to have some quirks, but now it seems to be working fine while improving throughput and latencies in remote geographical locations by over an order of magnitude.
 
 ## Switching To Repository Behind CDN
 
-This transition has some more benefits besides improving the package fetching, but let's get back to them in a minute. One of the key reasons for this post is that we can't actually influence the repository configuration of ClickHouse users. We have updated all instructions, but for people who have followed these instructions earlier, **action is required** to use the new location behind CDN. Basically, you need to replace `http://repo.yandex.ru/clickhouse/` with `https://repo.clickhouse.tech/` in your package manager configuration.
+This transition has some more benefits besides improving the package fetching, but let's get back to them in a minute. One of the key reasons for this post is that we can't actually influence the repository configuration of ClickHouse users. We have updated all instructions, but for people who have followed these instructions earlier, **action is required** to use the new location behind CDN. Basically, you need to replace `http://repo.yandex.ru/clickhouse/` with `https://repo.clickhouse.com/` in your package manager configuration.
 
 One-liner for Ubuntu or Debian:
 ```bash
-sudo apt-get install apt-transport-https ca-certificates && sudo perl -pi -e 's|http://repo.yandex.ru/clickhouse/|https://repo.clickhouse.tech/|g' /etc/apt/sources.list.d/clickhouse.list && sudo apt-get update
+sudo apt-get install apt-transport-https ca-certificates && sudo perl -pi -e 's|http://repo.yandex.ru/clickhouse/|https://repo.clickhouse.com/|g' /etc/apt/sources.list.d/clickhouse.list && sudo apt-get update
 ```
 
 One-liner for RedHat or CentOS:
 ```bash
-sudo perl -pi -e 's|http://repo.yandex.ru/clickhouse/|https://repo.clickhouse.tech/|g' /etc/yum.repos.d/clickhouse*
+sudo perl -pi -e 's|http://repo.yandex.ru/clickhouse/|https://repo.clickhouse.com/|g' /etc/yum.repos.d/clickhouse*
 ```
 
-As you might have noticed, the domain name is not the only thing that has changed: the new URL uses `https://` protocol. Usually, it's considered less important for package repositories compared to normal websites because most package managers check [GPG signatures](https://en.wikipedia.org/wiki/GNU_Privacy_Guard) for what they download anyway. However it still has some benefits: for example, it's not so uncommon for people to download packages via browser, `curl` or `wget`, and install them manually (while for [tgz](https://repo.clickhouse.tech/tgz/) builds it's the only option). Fewer opportunities for sniffing traffic can't hurt either. The downside is that `apt` in some Debian flavors has no HTTPS support by default and needs a couple more packages to be installed (`apt-transport-https` and `ca-certificates`).
+As you might have noticed, the domain name is not the only thing that has changed: the new URL uses `https://` protocol. Usually, it's considered less important for package repositories compared to normal websites because most package managers check [GPG signatures](https://en.wikipedia.org/wiki/GNU_Privacy_Guard) for what they download anyway. However it still has some benefits: for example, it's not so uncommon for people to download packages via browser, `curl` or `wget`, and install them manually (while for [tgz](https://repo.clickhouse.com/tgz/) builds it's the only option). Fewer opportunities for sniffing traffic can't hurt either. The downside is that `apt` in some Debian flavors has no HTTPS support by default and needs a couple more packages to be installed (`apt-transport-https` and `ca-certificates`).
 
 ## Investigating Repository Usage
 
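After running either one-liner, a quick check that nothing still points at the old host (a sketch; paths as in the commands above):

```bash
# Should print nothing on an apt-based system once the switch is complete.
grep -R "repo.yandex.ru/clickhouse" /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null \
    || echo "all sources updated"
```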
@@ -61,7 +61,7 @@ There's not so much data collected yet, but here's a live example of how the res
 While here we confirmed that `rpm` is at least as popular as `deb`:
 ![iframe](https://datalens.yandex/lfvldsf92i2uh?_embedded=1)
 
-Or you can take a look at all key charts for `repo.clickhouse.tech` together on a handy **[dashboard](https://datalens.yandex/pjzq4rot3t2ql)** with a filtering possibility.
+Or you can take a look at all key charts for `repo.clickhouse.com` together on a handy **[dashboard](https://datalens.yandex/pjzq4rot3t2ql)** with a filtering possibility.
 
 ## Lessons Learned
 