Merge branch 'master' into pr/content-type

mergify[bot] 2022-02-26 00:45:54 +00:00 committed by GitHub
commit 82e115c097
27 changed files with 196 additions and 28 deletions

View File

@ -3,25 +3,25 @@ compilers and build settings. Correctly configured Docker daemon is single depen
Usage:
Build deb package with `clang-11` in `debug` mode:
Build deb package with `clang-14` in `debug` mode:
```
$ mkdir deb/test_output
$ ./packager --output-dir deb/test_output/ --package-type deb --compiler=clang-11 --build-type=debug
$ ./packager --output-dir deb/test_output/ --package-type deb --compiler=clang-14 --build-type=debug
$ ls -l deb/test_output
-rw-r--r-- 1 root root 3730 clickhouse-client_18.14.2+debug_all.deb
-rw-r--r-- 1 root root 84221888 clickhouse-common-static_18.14.2+debug_amd64.deb
-rw-r--r-- 1 root root 255967314 clickhouse-common-static-dbg_18.14.2+debug_amd64.deb
-rw-r--r-- 1 root root 14940 clickhouse-server_18.14.2+debug_all.deb
-rw-r--r-- 1 root root 340206010 clickhouse-server-base_18.14.2+debug_amd64.deb
-rw-r--r-- 1 root root 7900 clickhouse-server-common_18.14.2+debug_all.deb
-rw-r--r-- 1 root root 3730 clickhouse-client_22.2.2+debug_all.deb
-rw-r--r-- 1 root root 84221888 clickhouse-common-static_22.2.2+debug_amd64.deb
-rw-r--r-- 1 root root 255967314 clickhouse-common-static-dbg_22.2.2+debug_amd64.deb
-rw-r--r-- 1 root root 14940 clickhouse-server_22.2.2+debug_all.deb
-rw-r--r-- 1 root root 340206010 clickhouse-server-base_22.2.2+debug_amd64.deb
-rw-r--r-- 1 root root 7900 clickhouse-server-common_22.2.2+debug_all.deb
```
Build ClickHouse binary with `clang-11` and `address` sanitizer in `relwithdebuginfo`
Build ClickHouse binary with `clang-14` and `address` sanitizer in `relwithdebuginfo`
mode:
```
$ mkdir $HOME/some_clickhouse
$ ./packager --output-dir=$HOME/some_clickhouse --package-type binary --compiler=clang-11 --sanitizer=address
$ ./packager --output-dir=$HOME/some_clickhouse --package-type binary --compiler=clang-14 --sanitizer=address
$ ls -l $HOME/some_clickhouse
-rwxr-xr-x 1 root root 787061952 clickhouse
lrwxrwxrwx 1 root root 10 clickhouse-benchmark -> clickhouse

View File

@ -322,7 +322,7 @@ std::string getName() const override { return "Memory"; }
class StorageMemory : public IStorage
```
**4.** `using` are named the same way as classes, or with `_t` on the end.
**4.** `using` are named the same way as classes.
**5.** Names of template type arguments: in simple cases, use `T`; `T`, `U`; `T1`, `T2`.
@ -490,7 +490,7 @@ if (0 != close(fd))
throwFromErrno("Cannot close file " + file_name, ErrorCodes::CANNOT_CLOSE_FILE);
```
`Do not use assert`.
You can use assert to check invariants in code.
**4.** Exception types.
@ -571,7 +571,7 @@ Don't use these types for numbers: `signed/unsigned long`, `long long`, `short
**13.** Passing arguments.
Pass complex values by reference (including `std::string`).
Pass complex values by value if they are going to be moved, and use std::move; pass by reference if you want to update a value in a loop.
If a function captures ownership of an object created in the heap, make the argument type `shared_ptr` or `unique_ptr`.
@ -581,7 +581,7 @@ In most cases, just use `return`. Do not write `return std::move(res)`.
If the function allocates an object on heap and returns it, use `shared_ptr` or `unique_ptr`.
In rare cases you might need to return the value via an argument. In this case, the argument should be a reference.
In rare cases (updating a value in a loop) you might need to return the value via an argument. In this case, the argument should be a reference.
``` cpp
using AggregateFunctionPtr = std::shared_ptr<IAggregateFunction>;

View File

@ -143,6 +143,10 @@ Features:
- Backup and restore.
- RBAC.
### Zeppelin-Interpreter-for-ClickHouse {#zeppelin-interpreter-for-clickhouse}
[Zeppelin-Interpreter-for-ClickHouse](https://github.com/SiderZhang/Zeppelin-Interpreter-for-ClickHouse) is a [Zeppelin](https://zeppelin.apache.org) interpreter for ClickHouse. Compared with the JDBC interpreter, it can provide better timeout control for long-running queries.
## Commercial {#commercial}
### DataGrip {#datagrip}

View File

@ -5,7 +5,7 @@ toc_title: Array(T)
# Array(t) {#data-type-array}
An array of `T`-type items. `T` can be any data type, including an array.
An array of `T`-type items; array indexes start from 1. `T` can be any data type, including an array.
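A minimal illustration of the 1-based indexing:
```sql
SELECT [10, 20, 30] AS arr, arr[1] AS first
-- first = 10: the first element has index 1, not 0
```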
## Creating an Array {#creating-an-array}

View File

@ -7,6 +7,8 @@ toc_title: Date
A date. Stored in two bytes as the number of days since 1970-01-01 (unsigned). Allows storing values from just after the beginning of the Unix Epoch to the upper threshold defined by a constant at the compilation stage (currently, this is until the year 2149, but the final fully-supported year is 2148).
Supported range of values: \[1970-01-01, 2149-06-06\].
The date value is stored without the time zone.
**Example**

View File

@ -13,7 +13,7 @@ Syntax:
DateTime([timezone])
```
Supported range of values: \[1970-01-01 00:00:00, 2105-12-31 23:59:59\].
Supported range of values: \[1970-01-01 00:00:00, 2106-02-07 06:28:15\].
Resolution: 1 second.
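The new upper bound is simply the largest value of the unsigned 32-bit seconds counter, which you can check directly:
```sql
SELECT toDateTime(4294967295, 'UTC') AS max_datetime
-- 2^32 - 1 seconds after 1970-01-01 00:00:00 is 2106-02-07 06:28:15
```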

View File

@ -18,7 +18,7 @@ DateTime64(precision, [timezone])
Internally, stores data as a number of ticks since epoch start (1970-01-01 00:00:00 UTC) as Int64. The tick resolution is determined by the precision parameter. Additionally, the `DateTime64` type can store a time zone that is the same for the entire column, which affects how `DateTime64` values are displayed in text format and how values specified as strings are parsed (2020-01-01 05:00:01.000). The time zone is not stored in the rows of the table (or in the result set), but in the column metadata. See details in [DateTime](../../sql-reference/data-types/datetime.md).
Supported range from January 1, 1925 till November 11, 2283.
Supported range of values: \[1925-01-01 00:00:00, 2283-11-11 23:59:59.99999999\] (Note: The precision of the maximum value is 8).
## Examples {#examples}

View File

@ -127,7 +127,7 @@ ARRAY JOIN [1, 2, 3] AS arr_external;
└─────────────┴──────────────┘
```
Multiple arrays can be comma-separated in the `ARRAY JOIN` clause. In this case, `JOIN` is performed with them simultaneously (the direct sum, not the cartesian product). Note that all the arrays must have the same size. Example:
Multiple arrays can be comma-separated in the `ARRAY JOIN` clause. In this case, `JOIN` is performed with them simultaneously (the direct sum, not the cartesian product). Note that all the arrays must have the same size by default. Example:
``` sql
SELECT s, arr, a, num, mapped
@ -162,6 +162,25 @@ ARRAY JOIN arr AS a, arrayEnumerate(arr) AS num;
│ World │ [3,4,5] │ 5 │ 3 │ [1,2,3] │
└───────┴─────────┴───┴─────┴─────────────────────┘
```
Multiple arrays with different sizes can be joined by using: `SETTINGS enable_unaligned_array_join = 1`. Example:
```sql
SELECT s, arr, a, b
FROM arrays_test ARRAY JOIN arr as a, [['a','b'],['c']] as b
SETTINGS enable_unaligned_array_join = 1;
```
```text
┌─s───────┬─arr─────┬─a─┬─b─────────┐
│ Hello │ [1,2] │ 1 │ ['a','b'] │
│ Hello │ [1,2] │ 2 │ ['c'] │
│ World │ [3,4,5] │ 3 │ ['a','b'] │
│ World │ [3,4,5] │ 4 │ ['c'] │
│ World │ [3,4,5] │ 5 │ [] │
│ Goodbye │ [] │ 0 │ ['a','b'] │
│ Goodbye │ [] │ 0 │ ['c'] │
└─────────┴─────────┴───┴───────────┘
```
## ARRAY JOIN with Nested Data Structure {#array-join-with-nested-data-structure}

View File

@ -7,6 +7,8 @@ toc_title: "\u65E5\u4ED8"
A date. Stored in two bytes as the number of days since 1970-01-01 (unsigned). Allows storing values from just after the beginning of the Unix Epoch up to an upper threshold defined as a constant at the compilation stage (currently, until the year 2106, but the last fully-supported year is 2105).
Supported range of values: \[1970-01-01, 2149-06-06\].
The date value is stored without the time zone.
[元の記事](https://clickhouse.com/docs/en/data_types/date/) <!--hide-->

View File

@ -15,7 +15,7 @@ toc_title: DateTime
DateTime([timezone])
```
Supported range of values: \[1970-01-01 00:00:00, 2105-12-31 23:59:59\].
Supported range of values: \[1970-01-01 00:00:00, 2106-02-07 06:28:15\].
Resolution: 1 second.

View File

@ -19,6 +19,8 @@ DateTime64(precision, [timezone])
Internally, stores data as a number of ticks since epoch start (1970-01-01 00:00:00 UTC) as Int64. The tick resolution is determined by the precision parameter. Additionally, the `DateTime64` type can store a time zone that is the same for the entire column; it affects how `DateTime64` values are displayed in text format and how values specified as strings are parsed (2020-01-01 05:00:01.000). The time zone is not stored in the rows of the table (or in the result set), but in the column metadata. See details in [DateTime](datetime.md).
Supported range of values: \[1925-01-01 00:00:00, 2283-11-11 23:59:59.99999999\] (Note: the precision of the maximum value is 8).
## Examples {#examples}
**1.** Creating a table with a `DateTime64`-type column and inserting data into it:

View File

@ -5,7 +5,7 @@ toc_title: Array(T)
# Array(T) {#data-type-array}
An array of elements of type `T`. `T` can be any type, including an array; this way, multidimensional arrays are supported.
An array of elements of type `T`. `T` can be any type, including an array; this way, multidimensional arrays are supported. The first array element has index 1.
## Creating an Array {#creating-an-array}

View File

@ -7,6 +7,8 @@ toc_title: Date
A date. Stored in two bytes as the (unsigned) number of days elapsed since 1970-01-01. Allows storing values from just after the beginning of the Unix Epoch up to an upper threshold defined by a constant at the compilation stage (currently, until the year 2106; the last fully-supported year is 2105).
Supported range of values: \[1970-01-01, 2149-06-06\].
The date is stored without regard to the time zone.
**Example**

View File

@ -13,7 +13,7 @@ toc_title: DateTime
DateTime([timezone])
```
Supported range of values: \[1970-01-01 00:00:00, 2105-12-31 23:59:59\].
Supported range of values: \[1970-01-01 00:00:00, 2106-02-07 06:28:15\].
Precision: 1 second.

View File

@ -18,7 +18,7 @@ DateTime64(precision, [timezone])
Data is stored as the number of 'ticks' elapsed since the start of the epoch (1970-01-01 00:00:00 UTC), as Int64. The tick size is determined by the precision parameter. Additionally, the `DateTime64` type can store a time zone that is the same for the entire column and that affects how `DateTime64` values are displayed in text form and how values specified as strings are parsed (2020-01-01 05:00:01.000). The time zone is not stored in the rows of the table (or in the result set), but in the column metadata. See details in [DateTime](datetime.md).
Values from January 1, 1925 to November 11, 2283 are supported.
Supported range of values: \[1925-01-01 00:00:00, 2283-11-11 23:59:59.99999999\] (Note: the precision of the maximum value is 8).
## Examples {#examples}

View File

@ -1 +0,0 @@
../../../en/faq/general/who-is-using-clickhouse.md

View File

@ -0,0 +1,19 @@
---
title: Who is using ClickHouse?
toc_hidden: true
toc_priority: 9
---
# Who is using ClickHouse? {#who-is-using-clickhouse}
As an open-source product, this question is not so straightforward to answer. You don't have to tell anyone if you want to start using ClickHouse; you just grab the source code or the pre-compiled packages. There is no contract to sign, and the [Apache 2.0 license](https://github.com/ClickHouse/ClickHouse/blob/master/LICENSE) allows unconstrained software distribution.
Also, the technology stack often lies in a gray zone covered by NDAs. Some companies consider the technologies they use a competitive advantage, even if they are open-source, and do not allow employees to share any details publicly. Some see PR risks and allow employees to share implementation details only with the approval of their PR department.
So how do you tell who is using ClickHouse?
One way is to ask around. If it's not in writing, people are much more willing to share what technologies their companies use, their use cases, the kind of hardware used, data volumes, and so on. We talk with users regularly at [ClickHouse meetups](https://www.youtube.com/channel/UChtmrD-dsdpspr42P_PyRAw/playlists) all over the world and have heard stories from more than 1000 companies that use ClickHouse. Unfortunately, this is not reproducible, and we try to treat such stories as if they were told under NDA, to avoid any potential trouble. But you can come to any of our future meetups and talk with other users yourself. There are multiple ways meetups are announced; for example, you can subscribe to [our Twitter](http://twitter.com/ClickHouseDB/).
The second way is to look for companies **publicly saying** that they use ClickHouse. This is more substantial because there is usually some hard evidence, such as a blog post, a talk video recording, or a slide deck. We collect links to such evidence on our [**Adopters**](../../introduction/adopters.md) page. Feel free to contribute your employer's story, or just some links you have stumbled upon (but try not to violate your NDA in the process).
You can find some very large companies on the adopters list, such as Bloomberg, Cisco, China Telecom, Tencent, or Uber, but with the first approach we found that there are many more. For example, if you look at the [list of the largest IT companies by Forbes (2020)](https://www.forbes.com/sites/hanktucker/2020/05/13/worlds-largest-technology-companies-2020-apple-stays-on-top-zoom-and-uber-debut/), more than half of them use ClickHouse in some way. Also, it would be unfair not to mention [Yandex](../../introduction/history.md), the company that initially open-sourced ClickHouse in 2016 and happens to be one of the largest IT companies in Europe.

View File

@ -2,4 +2,6 @@
A date. Stored in two bytes as the (unsigned) number of days since 1970-01-01. Allows storing values from the beginning of the Unix Epoch up to an upper threshold defined by a constant at the compilation stage (currently until the year 2106, but the last fully-supported year is 2105). The minimum value is output as 1970-01-01.
Supported range of values: \[1970-01-01, 2149-06-06\].
No time zone information is stored with a date.

View File

@ -2,6 +2,8 @@
A timestamp. Stored in four bytes as an (unsigned) Unix timestamp. Allows storing values in the same range as the Date type. The minimum value is 1970-01-01 00:00:00. Timestamp values are accurate to the second (no leap seconds).
Supported range of values: \[1970-01-01 00:00:00, 2106-02-07 06:28:15\].
## Time Zones {#shi-qu}
The system time zone in effect when the client or server starts is used to convert timestamps from text (decomposed into components) to binary and back. In text format, information about daylight saving time is lost.

View File

@ -19,6 +19,8 @@ DateTime64(precision, [timezone])
Internally, this type stores data as the number of 'ticks' since the start of the epoch (1970-01-01 00:00:00 UTC) as Int64. The tick resolution is determined by the precision parameter. Additionally, the `DateTime64` type can store a time zone for the entire column, just like other data columns; the time zone affects how `DateTime64` values are displayed in text format and how values specified as strings are parsed (2020-01-01 05:00:01.000). The time zone is not stored in the rows of the table (or in the result set), but in the column metadata. For details, see the [DateTime](datetime.md) data type.
Supported range of values: \[1925-01-01 00:00:00, 2283-11-11 23:59:59.99999999\] (Note: the precision of the maximum value is 8).
## Examples {#examples}
**1.** Creating a table with a `DateTime64`-type column and inserting data into it:

View File

@ -149,8 +149,6 @@ void ODBCSource::insertValue(
DateTime64 time = 0;
const auto * datetime_type = assert_cast<const DataTypeDateTime64 *>(data_type.get());
readDateTime64Text(time, datetime_type->getScale(), in, datetime_type->getTimeZone());
if (time < 0)
time = 0;
assert_cast<DataTypeDateTime64::ColumnType &>(column).insertValue(time);
break;
}

View File

@ -108,8 +108,6 @@ void insertPostgreSQLValue(
ReadBufferFromString in(value);
DateTime64 time = 0;
readDateTime64Text(time, 6, in, assert_cast<const DataTypeDateTime64 *>(data_type.get())->getTimeZone());
if (time < 0)
time = 0;
assert_cast<DataTypeDateTime64::ColumnType &>(column).insertValue(time);
break;
}

View File

@ -121,7 +121,8 @@ public:
auto set = column_set->getData();
auto set_types = set->getDataTypes();
if (tuple && (set_types.size() != 1 || !set_types[0]->equals(*type_tuple)))
if (tuple && set_types.size() != 1 && set_types.size() == tuple->tupleSize())
{
const auto & tuple_columns = tuple->getColumns();
const DataTypes & tuple_types = type_tuple->getElements();

View File

@ -447,6 +447,16 @@ def test_where_false(started_cluster):
cursor.execute("DROP TABLE test")
def test_datetime64(started_cluster):
cursor = started_cluster.postgres_conn.cursor()
cursor.execute("drop table if exists test")
cursor.execute("create table test (ts timestamp)")
cursor.execute("insert into test select '1960-01-01 20:00:00';")
result = node1.query("select * from postgresql(postgres1, table='test')")
assert(result.strip() == '1960-01-01 20:00:00.000000')
if __name__ == '__main__':
cluster.start()
input("Cluster created, press any key to destroy...")

View File

@ -0,0 +1 @@
2001 2

View File

@ -0,0 +1,13 @@
DROP TABLE IF EXISTS calendar;
DROP TABLE IF EXISTS events32;
CREATE TABLE calendar ( `year` Int64, `month` Int64 ) ENGINE = TinyLog;
INSERT INTO calendar VALUES (2000, 1), (2001, 2), (2000, 3);
CREATE TABLE events32 ( `year` Int32, `month` Int32 ) ENGINE = TinyLog;
INSERT INTO events32 VALUES (2001, 2), (2001, 3);
SELECT * FROM calendar WHERE (year, month) IN ( SELECT (year, month) FROM events32 );
DROP TABLE IF EXISTS calendar;
DROP TABLE IF EXISTS events32;

View File

@ -1,4 +1,6 @@
v22.2.3.5-stable 2022-02-25
v22.2.2.1-stable 2022-02-17
v22.1.4.30-stable 2022-02-25
v22.1.3.7-stable 2022-01-23
v22.1.2.2-stable 2022-01-19
v21.12.4.1-stable 2022-01-23


View File

@ -0,0 +1,90 @@
---
title: 'ClickHouse 22.2 Released'
image: 'https://blog-images.clickhouse.com/en/2022/clickhouse-v22-2/featured.jpg'
date: '2022-02-23'
author: 'Alexey Milovidov'
tags: ['company', 'community']
---
We prepared a new ClickHouse release, 22.2, so it would have been nice if you had tried it on 2022-02-22. If not, you can try it today. This latest release includes 2,140 new commits from 118 contributors, including 41 new contributors:
> Aaron Katz, Andre Marianiello, Andrew, Andrii Buriachevskyi, Brian Hunter, CoolT2, Federico Rodriguez, Filippov Denis, Gaurav Kumar, Geoff Genz, HarryLeeIBM, Heena Bansal, ILya Limarenko, Igor Nikonov, IlyaTsoi, Jake Liu, JaySon-Huang, Lemore, Leonid Krylov, Michail Safronov, Mikhail Fursov, Nikita, RogerYK, Roy Bellingan, Saad Ur Rahman, W, Yakov Olkhovskiy, alexeypavlenko, cnmade, grantovsky, hanqf-git, liuneng1994, mlkui, s-kat, tesw yew isal, vahid-sohrabloo, yakov-olkhovskiy, zhifeng, zkun, zxealous, 박동철.
Let me tell you what is most interesting in 22.2...
## Projections are production ready
Projections allow you to have multiple data representations in the same table. For example, you can have data aggregations along with the raw data. There are no restrictions on which aggregate functions can be used - you can have count distinct, quantiles, or whatever you want. You can have data in multiple different sorting orders. ClickHouse will automatically select the most suitable projection for your query, so the query will be automatically optimized.
Projections are somewhat similar to Materialized Views, which also allow you to have incremental aggregation and multiple sorting orders. But unlike Materialized Views, projections are updated atomically and consistently with the main table. The data for projections is stored in the same "data parts" of the table and is merged in the same way as the main data.
The feature was developed by **Amos Bird**, a prominent ClickHouse contributor. The [prototype](https://github.com/ClickHouse/ClickHouse/pull/20202) had been available since Feb 2021; it was merged into the main codebase by **Nikolai Kochetov** in May 2021 under an experimental flag, and after 21 follow-up pull requests we ensured that it passes the full set of test suites and enabled it by default.
Read an example of how to optimize queries with projections [in our docs](https://clickhouse.com/docs/en/getting-started/example-datasets/uk-price-paid/#speedup-with-projections).
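A sketch of what this looks like in SQL (the table and column names here are hypothetical; the statements follow the ADD/MATERIALIZE PROJECTION syntax from the docs):
```sql
-- Add an aggregating projection and build it for existing parts
ALTER TABLE hits ADD PROJECTION daily_totals
(
    SELECT toDate(event_time), count()
    GROUP BY toDate(event_time)
);
ALTER TABLE hits MATERIALIZE PROJECTION daily_totals;

-- Matching queries are rewritten automatically to read the projection
SELECT toDate(event_time) AS day, count() FROM hits GROUP BY day;
```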
## Control of file creation and rewriting on data export
When you export your data with an `INSERT INTO TABLE FUNCTION` statement into `file`, `s3` or `hdfs` and the target file already exists, you can now control how to deal with it: you can append new data to the file if possible, rewrite it with new data, or create another file with a similar name like 'data.1.parquet.gz'.
Some storage systems like `s3` and some formats like `Parquet` don't support data appending. In previous ClickHouse versions, if you inserted multiple times into a file in Parquet format, you would end up with a file that is not recognized by other systems. Now you can choose between throwing an exception on subsequent inserts or creating more files.
So, new settings were introduced: `s3_truncate_on_insert`, `s3_create_new_file_on_insert`, `hdfs_truncate_on_insert`, `hdfs_create_new_file_on_insert`, `engine_file_allow_create_multiple_files`.
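For example, with the `s3` table function (a sketch; the bucket URL and column are hypothetical):
```sql
-- Parquet cannot be appended to, so on a repeated INSERT either
-- create data.1.parquet, data.2.parquet, ... next to the original file:
INSERT INTO FUNCTION s3('https://my-bucket.s3.amazonaws.com/data.parquet', 'Parquet', 'n UInt64')
SELECT number AS n FROM numbers(10)
SETTINGS s3_create_new_file_on_insert = 1;

-- ...or overwrite the existing file:
INSERT INTO FUNCTION s3('https://my-bucket.s3.amazonaws.com/data.parquet', 'Parquet', 'n UInt64')
SELECT number AS n FROM numbers(10)
SETTINGS s3_truncate_on_insert = 1;
```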
This feature [was developed](https://github.com/ClickHouse/ClickHouse/pull/33302) by **Pavel Kruglov**.
## Custom deduplication token
`ReplicatedMergeTree` and `MergeTree` types of tables implement block-level deduplication. When a block of data is inserted, its cryptographic hash is calculated and if the same block was already inserted before, then the duplicate is skipped and the insert query succeeds. This makes it possible to implement exactly-once semantics for inserts.
In ClickHouse version 22.2 you can provide your own deduplication token instead of an automatically calculated hash. This makes sense if you already have batch identifiers from some other system and you want to reuse them. It also makes sense when blocks can be identical but they should actually be inserted multiple times. Or the opposite - when blocks contain some random data and you want to deduplicate only by significant columns.
This is implemented by adding the setting `insert_deduplication_token`. The feature was contributed by **Igor Nikonov**.
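A sketch (the `events` table and token value are hypothetical, and deduplication is assumed to be enabled for the table):
```sql
-- The second INSERT carries the same token, so it is treated as a
-- duplicate and skipped, even though its data differs
INSERT INTO events SETTINGS insert_deduplication_token = 'batch-42' VALUES (1);
INSERT INTO events SETTINGS insert_deduplication_token = 'batch-42' VALUES (2);
```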
## DEFAULT keyword for INSERT
A small addition for SQL compatibility - we now allow the `DEFAULT` keyword in place of a value in an `INSERT INTO ... VALUES` statement. It looks like this:
`INSERT INTO test VALUES (1, 'Hello', DEFAULT)`
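In context, with a hypothetical table definition (a sketch):
```sql
CREATE TABLE test (id UInt32, s String, created DateTime DEFAULT now()) ENGINE = MergeTree ORDER BY id;
-- DEFAULT in VALUES fills the column from its DEFAULT expression
INSERT INTO test VALUES (1, 'Hello', DEFAULT);
```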
Thanks to **Andrii Buriachevskyi** for this feature.
## EPHEMERAL columns
A column in a table can have a `DEFAULT` expression like `c INT DEFAULT a + b`. In ClickHouse you can also use `MATERIALIZED` instead of `DEFAULT` if you want the column to be always calculated with the provided expression instead of allowing a user to insert data. And you can use `ALIAS` if you don't want the column to be stored at all but instead to be calculated on the fly if referenced.
Since version 22.2 there is a new type of column: the `EPHEMERAL` column. The user can insert data into this column, but the column is not stored in the table; it's ephemeral. The purpose of this column is to provide data for calculating other columns that reference it in `DEFAULT` or `MATERIALIZED` expressions.
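A sketch of the idea (the table and JSON layout are hypothetical): `raw` is accepted on INSERT but never stored; it only feeds the `DEFAULT` expressions of the stored columns.
```sql
CREATE TABLE events
(
    raw String EPHEMERAL '',  -- accepted on INSERT, not stored
    id UInt64 DEFAULT JSONExtractUInt(raw, 'id'),
    message String DEFAULT JSONExtractString(raw, 'message')
)
ENGINE = MergeTree ORDER BY id;

INSERT INTO events (raw) VALUES ('{"id": 1, "message": "Hello"}');
-- Only 'id' and 'message' end up on disk
```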
This feature was made by **Yakov Olkhovskiy**.
## Improvements for multi-disk configuration
You can configure multiple disks to store ClickHouse data instead of managing RAID, and ClickHouse will automatically manage the data placement.
Since version 22.2 ClickHouse can automatically repair broken disks without a server restart by downloading the missing parts from replicas and placing them on the healthy disks.
This feature was implemented by **Amos Bird** and has already been used in production at Kuaishou for more than 1.5 years.
Another improvement is the option to specify TTL MOVE TO DISK/VOLUME **IF EXISTS**. It allows replicas with non-uniform disk configurations: one replica can move old data to cold storage while another replica keeps all the data on hot storage. Data will be moved only on replicas that have the specified disk or volume, hence *if exists*. This was developed by **Anton Popov**.
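For example (a sketch; the table, column and volume names are hypothetical, and the exact placement of `IF EXISTS` should be checked against the docs):
```sql
-- Moves happen only on replicas that actually have a 'cold' volume configured
ALTER TABLE hits MODIFY TTL event_time + INTERVAL 30 DAY TO VOLUME 'cold' IF EXISTS;
```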
## Flexible memory limits
We split per-query and per-user memory limits into a pair of hard and soft limits. The settings `max_memory_usage` and `max_memory_usage_for_user` act as hard limits. When memory consumption approaches the hard limit, an exception is thrown. Two other settings, `max_guaranteed_memory_usage` and `max_guaranteed_memory_usage_for_user`, act as soft limits.
A query is allowed to use more memory than its soft limit if memory is available. But if there is a memory shortage (relative to the per-user hard limit or the total per-server memory consumption), we calculate the "overcommit ratio" - how much more memory each query is consuming relative to its soft limit - and kill the most overcommitted query to let other queries run.
In short, your query will not be limited to a few gigabytes of RAM if you have hundreds of gigabytes available.
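As a sketch, using the settings named above (the values are arbitrary):
```sql
-- Soft limit: queries may exceed 10 GB while memory is available.
-- Hard limit: exceeding 100 GB always throws an exception.
SET max_guaranteed_memory_usage = 10000000000;
SET max_memory_usage = 100000000000;
```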
This experimental feature was implemented by **Dmitry Novik** and continues to be developed.
## Shell-style comments in SQL
Now we allow comments starting with `# ` or `#!`, similar to MySQL. The `#!` variant makes it possible to write shell scripts with a "shebang" line interpreted by `clickhouse-local`.
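For example (a minimal sketch, assuming `clickhouse-local` is installed at `/usr/bin`):
```sql
#!/usr/bin/clickhouse-local
# A MySQL-style comment
SELECT 1 + 1 AS result;
```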
This feature was contributed by **Aaron Katz**. Very nice.
## And many more...
Maxim Kita, Danila Kutenin, Anton Popov, zhanglistar, Federico Rodriguez, Raúl Marín, Amos Bird and Alexey Milovidov have contributed a ton of performance optimizations for this release. We are obsessed with high performance, as usual. :)
Read the [full changelog](https://github.com/ClickHouse/ClickHouse/blob/master/CHANGELOG.md) for the 22.2 release and follow [the roadmap](https://github.com/ClickHouse/ClickHouse/issues/32513).