mirror of https://github.com/ClickHouse/ClickHouse.git
synced 2024-10-06 08:30:54 +00:00

Add some backlinks to official website from mirrors that just blindly take markdown sources

This commit is contained in:
parent 73d412d2af
commit ce208ce291
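The pattern of this commit is mechanical: every hunk below appends a blank line plus an `[Original article]` backlink to the end of a documentation page, deriving the URL from the page's path. A bulk edit like that could be scripted roughly as follows (a hypothetical sketch; the `docs/en` layout and the path-to-URL mapping are assumptions, not taken from this commit):

```shell
# Hypothetical sketch: append an "[Original article]" backlink to every
# markdown doc, deriving the URL from the file path. The docs/en layout
# and URL scheme are assumptions, not part of the actual commit.
find docs/en -name '*.md' | while read -r f; do
    rel="${f#docs/en/}"                                  # e.g. data_types/array.md
    url="https://clickhouse.yandex/docs/en/${rel%.md}/"  # e.g. .../data_types/array/
    printf '\n[Original article](%s) <!--hide-->\n' "$url" >> "$f"
done
```

The `<!--hide-->` comment lets the official docs build strip the line, so the backlink only shows up on mirrors that render the raw markdown.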
@@ -83,3 +83,5 @@ Code: 386. DB::Exception: Received from localhost:9000, 127.0.0.1. DB::Exception
 0 rows in set. Elapsed: 0.246 sec.
 ```

+
+[Original article](https://clickhouse.yandex/docs/en/data_types/array/) <!--hide-->
@@ -2,3 +2,5 @@

 There isn't a separate type for boolean values. They use the UInt8 type, restricted to the values 0 or 1.

+
+[Original article](https://clickhouse.yandex/docs/en/data_types/boolean/) <!--hide-->
@@ -5,3 +5,5 @@ The minimum value is output as 0000-00-00.

 The date is stored without the time zone.

+
+[Original article](https://clickhouse.yandex/docs/en/data_types/date/) <!--hide-->
@@ -13,3 +13,5 @@ By default, the client switches to the timezone of the server when it connects.

 So when working with a textual date (for example, when saving text dumps), keep in mind that there may be ambiguity during changes for daylight savings time, and there may be problems matching data if the time zone changed.

+
+[Original article](https://clickhouse.yandex/docs/en/data_types/datetime/) <!--hide-->
@@ -95,3 +95,5 @@ SELECT toDecimal32(1, 8) < 100
 ```
 DB::Exception: Can't compare.
 ```
+
+[Original article](https://clickhouse.yandex/docs/en/data_types/decimal/) <!--hide-->
@@ -113,3 +113,5 @@ The Enum type can be changed without cost using ALTER, if only the set of values

 Using ALTER, it is possible to change an Enum8 to an Enum16 or vice versa, just like changing an Int8 to Int16.

+
+[Original article](https://clickhouse.yandex/docs/en/data_types/enum/) <!--hide-->
@@ -8,3 +8,5 @@ Note that this behavior differs from MySQL behavior for the CHAR type (where str

 Fewer functions can work with the FixedString(N) type than with String, so it is less convenient to use.

+
+[Original article](https://clickhouse.yandex/docs/en/data_types/fixedstring/) <!--hide-->
@@ -69,3 +69,5 @@ SELECT 0 / 0

 See the rules for `NaN` sorting in the section [ORDER BY clause](../query_language/select.md#query_language-queries-order_by).

+
+[Original article](https://clickhouse.yandex/docs/en/data_types/float/) <!--hide-->
@@ -6,3 +6,5 @@ ClickHouse can store various types of data in table cells.

 This section describes the supported data types and special considerations when using and/or implementing them, if any.

+
+[Original article](https://clickhouse.yandex/docs/en/data_types/) <!--hide-->
@@ -18,3 +18,5 @@ Fixed-length integers, with or without a sign.
 - UInt32 - [0 : 4294967295]
 - UInt64 - [0 : 18446744073709551615]

+
+[Original article](https://clickhouse.yandex/docs/en/data_types/int_uint/) <!--hide-->
@@ -2,3 +2,5 @@

 The intermediate state of an aggregate function. To get it, use aggregate functions with the '-State' suffix. For more information, see "AggregatingMergeTree".

+
+[Original article](https://clickhouse.yandex/docs/en/data_types/nested_data_structures/aggregatefunction/) <!--hide-->
@@ -1,2 +1,4 @@
 # Nested Data Structures

+
+[Original article](https://clickhouse.yandex/docs/en/data_types/nested_data_structures/) <!--hide-->
@@ -96,3 +96,5 @@ For a DESCRIBE query, the columns in a nested data structure are listed separate

 The ALTER query is very limited for elements in a nested data structure.

+
+[Original article](https://clickhouse.yandex/docs/en/data_types/nested_data_structures/nested/) <!--hide-->
@@ -53,3 +53,5 @@ FROM t_null

 2 rows in set. Elapsed: 0.144 sec.
 ```
+
+[Original article](https://clickhouse.yandex/docs/en/data_types/nullable/) <!--hide-->
@@ -2,3 +2,5 @@

 Used for representing lambda expressions in high-order functions.

+
+[Original article](https://clickhouse.yandex/docs/en/data_types/special_data_types/expression/) <!--hide-->
@@ -2,3 +2,5 @@

 Special data type values can't be saved to a table or output in results, but are used as the intermediate result of running a query.

+
+[Original article](https://clickhouse.yandex/docs/en/data_types/special_data_types/) <!--hide-->
@@ -20,3 +20,5 @@ SELECT toTypeName([])
 1 rows in set. Elapsed: 0.062 sec.
 ```

+
+[Original article](https://clickhouse.yandex/docs/en/data_types/special_data_types/nothing/) <!--hide-->
@@ -2,3 +2,5 @@

 Used for the right half of an IN expression.

+
+[Original article](https://clickhouse.yandex/docs/en/data_types/special_data_types/set/) <!--hide-->
@@ -12,3 +12,5 @@ If you need to store texts, we recommend using UTF-8 encoding. At the very least
 Similarly, certain functions for working with strings have separate variations that work under the assumption that the string contains a set of bytes representing a UTF-8 encoded text.
 For example, the 'length' function calculates the string length in bytes, while the 'lengthUTF8' function calculates the string length in Unicode code points, assuming that the value is UTF-8 encoded.

+
+[Original article](https://clickhouse.yandex/docs/en/data_types/string/) <!--hide-->
@@ -52,3 +52,5 @@ SELECT
 1 rows in set. Elapsed: 0.002 sec.
 ```

+
+[Original article](https://clickhouse.yandex/docs/en/data_types/tuple/) <!--hide-->
@@ -193,3 +193,5 @@ In addition, each replica stores its state in ZooKeeper as the set of parts and

 > The ClickHouse cluster consists of independent shards, and each shard consists of replicas. The cluster is not elastic, so after adding a new shard, data is not rebalanced between shards automatically. Instead, the cluster load will be uneven. This implementation gives you more control, and it is fine for relatively small clusters such as tens of nodes. But for clusters with hundreds of nodes that we are using in production, this approach becomes a significant drawback. We should implement a table engine that will span its data across the cluster with dynamically replicated regions that could be split and balanced between clusters automatically.

+
+[Original article](https://clickhouse.yandex/docs/en/development/architecture/) <!--hide-->
@@ -95,3 +95,5 @@ cd ..
 To create an executable, run `ninja clickhouse`.
 This will create the `dbms/programs/clickhouse` executable, which can be used with `client` or `server` arguments.

+
+[Original article](https://clickhouse.yandex/docs/en/development/build/) <!--hide-->
@@ -79,3 +79,5 @@ Reboot.

 To check if it's working, you can use `ulimit -n` command.

+
+[Original article](https://clickhouse.yandex/docs/en/development/build_osx/) <!--hide-->
@@ -1,2 +1,4 @@
 # ClickHouse Development

+
+[Original article](https://clickhouse.yandex/docs/en/development/) <!--hide-->
@@ -834,3 +834,5 @@ function(
 const & RangesInDataParts ranges,
 size_t limit)
 ```
+
+[Original article](https://clickhouse.yandex/docs/en/development/style/) <!--hide-->
@@ -249,3 +249,5 @@ In Travis CI due to limit on time and computational power we can afford only sub
 In Jenkins we run functional tests for each commit and for each pull request from trusted users; the same under ASan; we also run quorum tests, dictionary tests, Metrica B2B tests. We use Jenkins to prepare and publish releases. Worth to note that we are not happy with Jenkins at all.

 One of our goals is to provide reliable testing infrastructure that will be available to community.

+
+[Original article](https://clickhouse.yandex/docs/en/development/tests/) <!--hide-->
@@ -11,3 +11,5 @@ Distributed sorting is one of the main causes of reduced performance when runnin

 Most MapReduce implementations allow you to execute arbitrary code on a cluster. But a declarative query language is better suited to OLAP in order to run experiments quickly. For example, Hadoop has Hive and Pig. Also consider Cloudera Impala or Shark (outdated) for Spark, as well as Spark SQL, Presto, and Apache Drill. Performance when running such tasks is highly sub-optimal compared to specialized systems, but relatively high latency makes it unrealistic to use these systems as the backend for a web interface.

+
+[Original article](https://clickhouse.yandex/docs/en/faq/general/) <!--hide-->
@@ -119,3 +119,5 @@ ORDER BY totalRevenue DESC
 LIMIT 1
 ```

+
+[Original article](https://clickhouse.yandex/docs/en/getting_started/example_datasets/amplab_benchmark/) <!--hide-->
@@ -71,3 +71,5 @@ INSERT INTO criteo SELECT date, clicked, int1, int2, int3, int4, int5, int6, int
 DROP TABLE criteo_log;
 ```

+
+[Original article](https://clickhouse.yandex/docs/en/getting_started/example_datasets/criteo/) <!--hide-->
@@ -364,3 +364,5 @@ We ran queries using a client located in a Yandex datacenter in Finland on a clu
 | 3 | 0.212 | 0.438 | 0.733 | 1.241 |
 | 140 | 0.028 | 0.043 | 0.051 | 0.072 |

+
+[Original article](https://clickhouse.yandex/docs/en/getting_started/example_datasets/nyc_taxi/) <!--hide-->
@@ -317,3 +317,5 @@ This performance test was created by Vadim Tkachenko. See:
 - <https://www.percona.com/blog/2016/01/07/apache-spark-with-air-ontime-performance-data/>
 - <http://nickmakos.blogspot.ru/2012/08/analyzing-air-traffic-performance-with.html>

+
+[Original article](https://clickhouse.yandex/docs/en/getting_started/example_datasets/ontime/) <!--hide-->
@@ -83,3 +83,5 @@ cat customer.tbl | sed 's/$/2000-01-01/' | clickhouse-client --query "INSERT INT
 cat lineorder.tbl | clickhouse-client --query "INSERT INTO lineorder FORMAT CSV"
 ```

+
+[Original article](https://clickhouse.yandex/docs/en/getting_started/example_datasets/star_schema/) <!--hide-->
@@ -25,3 +25,5 @@ cat links.txt | while read link; do wget http://dumps.wikimedia.org/other/pageco
 ls -1 /opt/wikistat/ | grep gz | while read i; do echo $i; gzip -cd /opt/wikistat/$i | ./wikistat-loader --time="$(echo -n $i | sed -r 's/pagecounts-([0-9]{4})([0-9]{2})([0-9]{2})-([0-9]{2})([0-9]{2})([0-9]{2})\.gz/\1-\2-\3 \4-00-00/')" | clickhouse-client --query="INSERT INTO wikistat FORMAT TabSeparated"; done
 ```

+
+[Original article](https://clickhouse.yandex/docs/en/getting_started/example_datasets/wikistat/) <!--hide-->
@@ -137,3 +137,5 @@ SELECT 1

 To continue experimenting, you can try to download from the test data sets.

+
+[Original article](https://clickhouse.yandex/docs/en/getting_started/) <!--hide-->
@@ -138,3 +138,5 @@ There are two ways to do this:
 This is not done in "normal" databases, because it doesn't make sense when running simple queries. However, there are exceptions. For example, MemSQL uses code generation to reduce latency when processing SQL queries. (For comparison, analytical DBMSs require optimization of throughput, not latency.)

 Note that for CPU efficiency, the query language must be declarative (SQL or MDX), or at least a vector (J, K). The query should only contain implicit loops, allowing for optimization.

+
+[Original article](https://clickhouse.yandex/docs/en/) <!--hide-->
@@ -113,3 +113,5 @@ Example of a config file:
 </config>
 ```

+
+[Original article](https://clickhouse.yandex/docs/en/interfaces/cli/) <!--hide-->
@@ -621,3 +621,5 @@ struct Message {
 Schema files are in the file that is located in the directory specified in [ format_schema_path](../operations/server_settings/settings.md#server_settings-format_schema_path) in the server configuration.

 Deserialization is effective and usually doesn't increase the system load.

+
+[Original article](https://clickhouse.yandex/docs/en/interfaces/formats/) <!--hide-->
@@ -218,3 +218,5 @@ curl -sS 'http://localhost:8123/?max_result_bytes=4000000&buffer_size=3000000&wa

 Use buffering to avoid situations where a query processing error occurred after the response code and HTTP headers were sent to the client. In this situation, an error message is written at the end of the response body, and on the client side, the error can only be detected at the parsing stage.

+
+[Original article](https://clickhouse.yandex/docs/en/interfaces/http_interface/) <!--hide-->
@@ -4,3 +4,5 @@

 To explore the system's capabilities, download data to tables, or make manual queries, use the clickhouse-client program.

+
+[Original article](https://clickhouse.yandex/docs/en/interfaces/) <!--hide-->
@@ -3,3 +3,5 @@
 - [Official driver](https://github.com/yandex/clickhouse-jdbc).
 - Third-party driver from [ClickHouse-Native-JDBC](https://github.com/housepower/ClickHouse-Native-JDBC).

+
+[Original article](https://clickhouse.yandex/docs/en/interfaces/jdbc/) <!--hide-->
@@ -2,3 +2,5 @@

 The native interface is used in the "clickhouse-client" command-line client for interaction between servers with distributed query processing, and also in C++ programs. We will only cover the command-line client.

+
+[Original article](https://clickhouse.yandex/docs/en/interfaces/tcp/) <!--hide-->
@@ -47,3 +47,5 @@ We have not tested the libraries listed below.
 - Nim
 - [nim-clickhouse](https://github.com/leonardoce/nim-clickhouse)

+
+[Original article](https://clickhouse.yandex/docs/en/interfaces/third-party_client_libraries/) <!--hide-->
@@ -45,3 +45,5 @@ Key features:
 - Query development with syntax highlight.
 - Table preview.
 - Autocompletion.
+
+[Original article](https://clickhouse.yandex/docs/en/interfaces/third-party_gui/) <!--hide-->
@@ -60,3 +60,5 @@ ClickHouse provides various ways to trade accuracy for performance:
 Uses asynchronous multimaster replication. After being written to any available replica, data is distributed to all the remaining replicas in the background. The system maintains identical data on different replicas. Recovery after most failures is performed automatically, and in complex cases — semi-automatically.

 For more information, see the section [Data replication](../operations/table_engines/replication.md#table_engines-replication).

+
+[Original article](https://clickhouse.yandex/docs/en/introduction/distinctive_features/) <!--hide-->
@@ -3,3 +3,5 @@
 1. No full-fledged transactions.
 2. Lack of ability to modify or delete already inserted data with high rate and low latency. There are batch deletes and updates available to clean up or modify data, for example to comply with [GDPR](https://gdpr-info.eu).
 3. The sparse index makes ClickHouse not really suitable for point queries retrieving single rows by their keys.
+
+[Original article](https://clickhouse.yandex/docs/en/introduction/features_considered_disadvantages/) <!--hide-->
@@ -21,3 +21,5 @@ Under the same conditions, ClickHouse can handle several hundred queries per sec
 ## Performance When Inserting Data

 We recommend inserting data in packets of at least 1000 rows, or no more than a single request per second. When inserting to a MergeTree table from a tab-separated dump, the insertion speed will be from 50 to 200 MB/s. If the inserted rows are around 1 Kb in size, the speed will be from 50,000 to 200,000 rows per second. If the rows are small, the performance will be higher in rows per second (on Banner System data -`>` 500,000 rows per second; on Graphite data -`>` 1,000,000 rows per second). To improve performance, you can make multiple INSERT queries in parallel, and performance will increase linearly.
+
+[Original article](https://clickhouse.yandex/docs/en/introduction/performance/) <!--hide-->
@@ -46,3 +46,5 @@ OLAPServer worked well for non-aggregated data, but it had many restrictions tha

 To remove the limitations of OLAPServer and solve the problem of working with non-aggregated data for all reports, we developed the ClickHouse DBMS.

+
+[Original article](https://clickhouse.yandex/docs/en/introduction/ya_metrika_task/) <!--hide-->
@@ -99,3 +99,5 @@ The user can get a list of all databases and tables in them by using `SHOW` quer

 Database access is not related to the [readonly](settings/query_complexity.md#query_complexity_readonly) setting. You can't grant full access to one database and `readonly` access to another one.

+
+[Original article](https://clickhouse.yandex/docs/en/operations/access_rights/) <!--hide-->
@@ -40,3 +40,5 @@ $ cat /etc/clickhouse-server/users.d/alice.xml
 For each config file, the server also generates `file-preprocessed.xml` files when starting. These files contain all the completed substitutions and overrides, and they are intended for informational use. If ZooKeeper substitutions were used in the config files but ZooKeeper is not available on the server start, the server loads the configuration from the preprocessed file.

 The server tracks changes in config files, as well as files and ZooKeeper nodes that were used when performing substitutions and overrides, and reloads the settings for users and clusters on the fly. This means that you can modify the cluster, users, and their settings without restarting the server.

+
+[Original article](https://clickhouse.yandex/docs/en/operations/configuration_files/) <!--hide-->
@@ -1,2 +1,4 @@
 # Operations

+
+[Original article](https://clickhouse.yandex/docs/en/operations/) <!--hide-->
@@ -104,3 +104,5 @@ For distributed query processing, the accumulated amounts are stored on the requ

 When the server is restarted, quotas are reset.

+
+[Original article](https://clickhouse.yandex/docs/en/operations/quotas/) <!--hide-->
@@ -10,3 +10,5 @@ Other settings are described in the "[Settings](../settings/index.md#settings)"

 Before studying the settings, read the [Configuration files](../configuration_files.md#configuration_files) section and note the use of substitutions (the `incl` and `optional` attributes).

+
+[Original article](https://clickhouse.yandex/docs/en/operations/server_settings/) <!--hide-->
@@ -717,3 +717,5 @@ For more information, see the section "[Replication](../../operations/table_engi
 <zookeeper incl="zookeeper-servers" optional="true" />
 ```

+
+[Original article](https://clickhouse.yandex/docs/en/operations/server_settings/settings/) <!--hide-->
@@ -22,3 +22,5 @@ Similarly, you can use ClickHouse sessions in the HTTP protocol. To do this, you

 Settings that can only be made in the server config file are not covered in this section.

+
+[Original article](https://clickhouse.yandex/docs/en/operations/settings/) <!--hide-->
@@ -193,3 +193,5 @@ Maximum number of bytes (uncompressed data) that can be passed to a remote serve
 ## transfer_overflow_mode

 What to do when the amount of data exceeds one of the limits: 'throw' or 'break'. By default, throw.
+
+[Original article](https://clickhouse.yandex/docs/en/operations/settings/query_complexity/) <!--hide-->
@@ -417,3 +417,5 @@ See also the following parameters:
 - [insert_quorum](#setting-insert_quorum)
 - [insert_quorum_timeout](#setting-insert_quorum_timeout)

+
+[Original article](https://clickhouse.yandex/docs/en/operations/settings/settings/) <!--hide-->
@@ -63,3 +63,5 @@ The example specifies two profiles: `default` and `web`. The `default` profile

 Settings profiles can inherit from each other. To use inheritance, indicate the `profile` setting before the other settings that are listed in the profile.

+
+[Original article](https://clickhouse.yandex/docs/en/operations/settings/settings_profiles/) <!--hide-->
@ -435,3 +435,5 @@ numChildren: 7
|
|||||||
pzxid: 987021252247
|
pzxid: 987021252247
|
||||||
path: /clickhouse/tables/01-08/visits/replicas
|
path: /clickhouse/tables/01-08/visits/replicas
|
||||||
```
|
```
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/system_tables/) <!--hide-->
|
||||||
|
@ -85,3 +85,5 @@ You can create a materialized view like this and assign a normal view to it that
|
|||||||
|
|
||||||
Note that in most cases, using `AggregatingMergeTree` is not justified, since queries can be run efficiently enough on non-aggregated data.
|
Note that in most cases, using `AggregatingMergeTree` is not justified, since queries can be run efficiently enough on non-aggregated data.
|
||||||
|
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/aggregatingmergetree/) <!--hide-->
|
||||||
|
@ -52,3 +52,5 @@ A Buffer table is used when too many INSERTs are received from a large number of
|
|||||||
|
|
||||||
Note that it doesn't make sense to insert data one row at a time, even for Buffer tables. This will only produce a speed of a few thousand rows per second, while inserting larger blocks of data can produce over a million rows per second (see the section "Performance").
|
Note that it doesn't make sense to insert data one row at a time, even for Buffer tables. This will only produce a speed of a few thousand rows per second, while inserting larger blocks of data can produce over a million rows per second (see the section "Performance").
|
||||||
|
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/buffer/) <!--hide-->
|
||||||
|
@ -36,3 +36,5 @@ There are several ways to get completely "collapsed" data from a `CollapsingMerg
|
|||||||
1. Write a query with GROUP BY and aggregate functions that accounts for the sign. For example, to calculate quantity, write 'sum(Sign)' instead of 'count()'. To calculate the sum of something, write 'sum(Sign * x)' instead of 'sum(x)', and so on, and also add 'HAVING sum(Sign) `>` 0'. Not all amounts can be calculated this way. For example, the aggregate functions 'min' and 'max' can't be rewritten.
|
1. Write a query with GROUP BY and aggregate functions that accounts for the sign. For example, to calculate quantity, write 'sum(Sign)' instead of 'count()'. To calculate the sum of something, write 'sum(Sign * x)' instead of 'sum(x)', and so on, and also add 'HAVING sum(Sign) `>` 0'. Not all amounts can be calculated this way. For example, the aggregate functions 'min' and 'max' can't be rewritten.
|
||||||
2. If you must extract data without aggregation (for example, to check whether rows are present whose newest values match certain conditions), you can use the FINAL modifier for the FROM clause. This approach is significantly less efficient.
|
2. If you must extract data without aggregation (for example, to check whether rows are present whose newest values match certain conditions), you can use the FINAL modifier for the FROM clause. This approach is significantly less efficient.
|
||||||
|
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/collapsingmergetree/) <!--hide-->
|
||||||
|
@ -45,3 +45,5 @@ The partition ID is its string identifier (human-readable, if possible) that is
|
|||||||
|
|
||||||
For more examples, see the tests [`00502_custom_partitioning_local`](https://github.com/yandex/ClickHouse/blob/master/dbms/tests/queries/0_stateless/00502_custom_partitioning_local.sql) and [`00502_custom_partitioning_replicated_zookeeper`](https://github.com/yandex/ClickHouse/blob/master/dbms/tests/queries/0_stateless/00502_custom_partitioning_replicated_zookeeper.sql).
|
For more examples, see the tests [`00502_custom_partitioning_local`](https://github.com/yandex/ClickHouse/blob/master/dbms/tests/queries/0_stateless/00502_custom_partitioning_local.sql) and [`00502_custom_partitioning_replicated_zookeeper`](https://github.com/yandex/ClickHouse/blob/master/dbms/tests/queries/0_stateless/00502_custom_partitioning_replicated_zookeeper.sql).
|
||||||
|
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/custom_partitioning_key/) <!--hide-->
|
||||||
|
@ -108,3 +108,5 @@ LIMIT 1
|
|||||||
1 rows in set. Elapsed: 0.006 sec.
|
1 rows in set. Elapsed: 0.006 sec.
|
||||||
```
|
```
|
||||||
|
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/dictionary/) <!--hide-->
|
||||||
|
@ -122,3 +122,5 @@ If the server ceased to exist or had a rough restart (for example, after a devic
|
|||||||
|
|
||||||
When the max_parallel_replicas option is enabled, query processing is parallelized across all replicas within a single shard. For more information, see the section "Settings, max_parallel_replicas".
|
When the max_parallel_replicas option is enabled, query processing is parallelized across all replicas within a single shard. For more information, see the section "Settings, max_parallel_replicas".
|
||||||
|
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/distributed/) <!--hide-->
|
||||||
|
@ -60,3 +60,5 @@ curl -F 'passwd=@passwd.tsv;' 'http://localhost:8123/?query=SELECT+shell,+count(
|
|||||||
|
|
||||||
For distributed query processing, the temporary tables are sent to all the remote servers.
|
For distributed query processing, the temporary tables are sent to all the remote servers.
|
||||||
|
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/external_data/) <!--hide-->
|
||||||
|
@ -76,3 +76,5 @@ $ echo -e "1,2\n3,4" | clickhouse-local -q "CREATE TABLE table (a Int64, b Int64
|
|||||||
- `SELECT ... SAMPLE`
|
- `SELECT ... SAMPLE`
|
||||||
- Indices
|
- Indices
|
||||||
- Replication
|
- Replication
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/file/) <!--hide-->
|
||||||
|
@ -84,3 +84,5 @@ Example of settings:
|
|||||||
</graphite_rollup>
|
</graphite_rollup>
|
||||||
```
|
```
|
||||||
|
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/graphitemergetree/) <!--hide-->
|
||||||
|
@ -14,3 +14,5 @@ The table engine (type of table) determines:
|
|||||||
When reading, the engine is only required to output the requested columns, but in some cases the engine can partially process data when responding to the request.
|
When reading, the engine is only required to output the requested columns, but in some cases the engine can partially process data when responding to the request.
|
||||||
|
|
||||||
For most serious tasks, you should use engines from the `MergeTree` family.
|
For most serious tasks, you should use engines from the `MergeTree` family.
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/) <!--hide-->
|
||||||
|
@ -15,3 +15,5 @@ You can use INSERT to add data to the table, similar to the Set engine. For ANY,
|
|||||||
|
|
||||||
Storing data on the disk is the same as for the Set engine.
|
Storing data on the disk is the same as for the Set engine.
|
||||||
|
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/join/) <!--hide-->
|
||||||
|
@ -136,3 +136,5 @@ Similar to GraphiteMergeTree, the Kafka engine supports extended configuration u
|
|||||||
```
|
```
|
||||||
|
|
||||||
For a list of possible configuration options, see the [librdkafka configuration reference](https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md). Use the underscore (`_`) instead of a dot in the ClickHouse configuration. For example, `check.crcs=true` will be `<check_crcs>true</check_crcs>`.
|
For a list of possible configuration options, see the [librdkafka configuration reference](https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md). Use the underscore (`_`) instead of a dot in the ClickHouse configuration. For example, `check.crcs=true` will be `<check_crcs>true</check_crcs>`.
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/kafka/) <!--hide-->
|
||||||
|
@ -4,3 +4,5 @@ Log differs from TinyLog in that a small file of "marks" resides with the column
|
|||||||
For concurrent data access, the read operations can be performed simultaneously, while write operations block reads and each other.
|
For concurrent data access, the read operations can be performed simultaneously, while write operations block reads and each other.
|
||||||
The Log engine does not support indexes. Similarly, if writing to a table failed, the table is broken, and reading from it returns an error. The Log engine is appropriate for temporary data, write-once tables, and for testing or demonstration purposes.
|
The Log engine does not support indexes. Similarly, if writing to a table failed, the table is broken, and reading from it returns an error. The Log engine is appropriate for temporary data, write-once tables, and for testing or demonstration purposes.
|
||||||
|
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/log/) <!--hide-->
|
||||||
|
@ -2,3 +2,5 @@
|
|||||||
|
|
||||||
Used for implementing materialized views (for more information, see [CREATE TABLE](../../query_language/create.md#query_language-queries-create_table)). For storing data, it uses a different engine that was specified when creating the view. When reading from a table, it just uses this engine.
|
Used for implementing materialized views (for more information, see [CREATE TABLE](../../query_language/create.md#query_language-queries-create_table)). For storing data, it uses a different engine that was specified when creating the view. When reading from a table, it just uses this engine.
|
||||||
|
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/materializedview/) <!--hide-->
|
||||||
|
@ -9,3 +9,5 @@ Normally, using this table engine is not justified. However, it can be used for
|
|||||||
|
|
||||||
The Memory engine is used by the system for temporary tables with external query data (see the section "External data for processing a query"), and for implementing GLOBAL IN (see the section "IN operators").
|
The Memory engine is used by the system for temporary tables with external query data (see the section "External data for processing a query"), and for implementing GLOBAL IN (see the section "IN operators").
|
||||||
|
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/memory/) <!--hide-->
|
||||||
|
@ -65,3 +65,5 @@ The `Merge` type table contains a virtual `_table` column of the `String` type.
|
|||||||
|
|
||||||
If the `WHERE/PREWHERE` clause contains conditions for the `_table` column that do not depend on other table columns (as one of the conjunction elements, or as an entire expression), these conditions are used as an index. The conditions are performed on a data set of table names to read data from, and the read operation will be performed from only those tables that the condition was triggered on.
|
If the `WHERE/PREWHERE` clause contains conditions for the `_table` column that do not depend on other table columns (as one of the conjunction elements, or as an entire expression), these conditions are used as an index. The conditions are performed on a data set of table names to read data from, and the read operation will be performed from only those tables that the condition was triggered on.
|
||||||
|
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/merge/) <!--hide-->
|
||||||
|
@ -182,3 +182,5 @@ For concurrent table access, we use multi-versioning. In other words, when a tab
|
|||||||
|
|
||||||
Reading from a table is automatically parallelized.
|
Reading from a table is automatically parallelized.
|
||||||
|
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/mergetree/) <!--hide-->
|
||||||
|
@ -26,3 +26,5 @@ The rest of the conditions and the `LIMIT` sampling constraint are executed in C
|
|||||||
|
|
||||||
The `MySQL` engine does not support the [Nullable](../../data_types/nullable.md#data_type-nullable) data type, so when reading data from MySQL tables, `NULL` is converted to default values for the specified column type (usually 0 or an empty string).
|
The `MySQL` engine does not support the [Nullable](../../data_types/nullable.md#data_type-nullable) data type, so when reading data from MySQL tables, `NULL` is converted to default values for the specified column type (usually 0 or an empty string).
|
||||||
|
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/mysql/) <!--hide-->
|
||||||
|
@ -4,3 +4,5 @@ When writing to a Null table, data is ignored. When reading from a Null table, t
|
|||||||
|
|
||||||
However, you can create a materialized view on a Null table. So the data written to the table will end up in the view.
|
However, you can create a materialized view on a Null table. So the data written to the table will end up in the view.
|
||||||
|
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/null/) <!--hide-->
|
||||||
|
@ -16,3 +16,5 @@ Thus, `ReplacingMergeTree` is suitable for clearing out duplicate data in the b
|
|||||||
|
|
||||||
*This engine is not used in Yandex.Metrica, but it has been applied in other Yandex projects.*
|
*This engine is not used in Yandex.Metrica, but it has been applied in other Yandex projects.*
|
||||||
|
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/replacingmergetree/) <!--hide-->
|
||||||
|
@ -180,3 +180,5 @@ After this, you can launch the server, create a `MergeTree` table, move the data
|
|||||||
## Recovery When Metadata in The ZooKeeper Cluster is Lost or Damaged
|
## Recovery When Metadata in The ZooKeeper Cluster is Lost or Damaged
|
||||||
|
|
||||||
If the data in ZooKeeper was lost or damaged, you can save data by moving it to an unreplicated table as described above.
|
If the data in ZooKeeper was lost or damaged, you can save data by moving it to an unreplicated table as described above.
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/replication/) <!--hide-->
|
||||||
|
@ -9,3 +9,5 @@ Data is always located in RAM. For INSERT, the blocks of inserted data are also
|
|||||||
|
|
||||||
For a rough server restart, the block of data on the disk might be lost or damaged. In the latter case, you may need to manually delete the file with damaged data.
|
For a rough server restart, the block of data on the disk might be lost or damaged. In the latter case, you may need to manually delete the file with damaged data.
|
||||||
|
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/set/) <!--hide-->
|
||||||
|
@ -45,3 +45,5 @@ For nested data structures, you don't need to specify the columns as a list of c
|
|||||||
|
|
||||||
This table engine is not particularly useful. Remember that when saving just pre-aggregated data, you lose some of the system's advantages.
|
This table engine is not particularly useful. Remember that when saving just pre-aggregated data, you lose some of the system's advantages.
|
||||||
|
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/summingmergetree/) <!--hide-->
|
||||||
|
@ -17,3 +17,5 @@ The situation when you have a large number of small tables guarantees poor produ
|
|||||||
|
|
||||||
In Yandex.Metrica, TinyLog tables are used for intermediary data that is processed in small batches.
|
In Yandex.Metrica, TinyLog tables are used for intermediary data that is processed in small batches.
|
||||||
|
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/tinylog/) <!--hide-->
|
||||||
|
@ -71,3 +71,5 @@ SELECT * FROM url_engine_table
|
|||||||
- `ALTER` and `SELECT...SAMPLE` operations.
|
- `ALTER` and `SELECT...SAMPLE` operations.
|
||||||
- Indexes.
|
- Indexes.
|
||||||
- Replication.
|
- Replication.
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/url/) <!--hide-->
|
||||||
|
@ -2,3 +2,5 @@
|
|||||||
|
|
||||||
Used for implementing views (for more information, see the `CREATE VIEW query`). It does not store data, but only stores the specified `SELECT` query. When reading from a table, it runs this query (and deletes all unnecessary columns from the query).
|
Used for implementing views (for more information, see the `CREATE VIEW query`). It does not store data, but only stores the specified `SELECT` query. When reading from a table, it runs this query (and deletes all unnecessary columns from the query).
|
||||||
|
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/view/) <!--hide-->
|
||||||
|
@ -255,3 +255,5 @@ script
|
|||||||
end script
|
end script
|
||||||
```
|
```
|
||||||
|
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/tips/) <!--hide-->
|
||||||
|
@ -159,3 +159,5 @@ Parameters:
|
|||||||
|
|
||||||
`clickhouse-copier` tracks the changes in `/task/path/description` and applies them on the fly. For instance, if you change the value of `max_workers`, the number of processes running tasks will also change.
|
`clickhouse-copier` tracks the changes in `/task/path/description` and applies them on the fly. For instance, if you change the value of `max_workers`, the number of processes running tasks will also change.
|
||||||
|
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/utils/clickhouse-copier/) <!--hide-->
|
||||||
|
@ -71,3 +71,5 @@ Read 186 rows, 4.15 KiB in 0.035 sec., 5302 rows/sec., 118.34 KiB/sec.
|
|||||||
├──────────┼──────────┤
|
├──────────┼──────────┤
|
||||||
...
|
...
|
||||||
```
|
```
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/utils/clickhouse-local/) <!--hide-->
|
||||||
|
@ -3,3 +3,5 @@
|
|||||||
* [clickhouse-local](clickhouse-local.md#utils-clickhouse-local) — Allows running SQL queries on data without stopping the ClickHouse server, similar to how `awk` does this.
|
* [clickhouse-local](clickhouse-local.md#utils-clickhouse-local) — Allows running SQL queries on data without stopping the ClickHouse server, similar to how `awk` does this.
|
||||||
* [clickhouse-copier](clickhouse-copier.md#utils-clickhouse-copier) — Copies (and reshards) data from one cluster to another cluster.
|
* [clickhouse-copier](clickhouse-copier.md#utils-clickhouse-copier) — Copies (and reshards) data from one cluster to another cluster.
|
||||||
|
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/operations/utils/) <!--hide-->
|
||||||
|
@ -38,3 +38,5 @@ Merges the intermediate aggregation states in the same way as the -Merge combina
|
|||||||
|
|
||||||
Converts an aggregate function for tables into an aggregate function for arrays that aggregates the corresponding array items and returns an array of results. For example, `sumForEach` for the arrays `[1, 2]`, `[3, 4, 5]`and`[6, 7]`returns the result `[10, 13, 5]` after adding together the corresponding array items.
|
Converts an aggregate function for tables into an aggregate function for arrays that aggregates the corresponding array items and returns an array of results. For example, `sumForEach` for the arrays `[1, 2]`, `[3, 4, 5]`and`[6, 7]`returns the result `[10, 13, 5]` after adding together the corresponding array items.
|
||||||
|
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/query_language/agg_functions/combinators/) <!--hide-->
|
||||||
|
@ -61,3 +61,5 @@ FROM t_null_big
|
|||||||
|
|
||||||
`groupArray` does not include `NULL` in the resulting array.
|
`groupArray` does not include `NULL` in the resulting array.
|
||||||
|
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/query_language/agg_functions/) <!--hide-->
|
||||||
|
@ -155,3 +155,5 @@ Usage example:
|
|||||||
Problem: Generate a report that shows only keywords that produced at least 5 unique users.
|
Problem: Generate a report that shows only keywords that produced at least 5 unique users.
|
||||||
Solution: Write in the GROUP BY query SearchPhrase HAVING uniqUpTo(4)(UserID) >= 5
|
Solution: Write in the GROUP BY query SearchPhrase HAVING uniqUpTo(4)(UserID) >= 5
|
||||||
```
|
```
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/query_language/agg_functions/parametric_functions/) <!--hide-->
|
||||||
|
@ -350,3 +350,5 @@ Calculates the value of `Σ((x - x̅)(y - y̅)) / n`.
|
|||||||
|
|
||||||
Calculates the Pearson correlation coefficient: `Σ((x - x̅)(y - y̅)) / sqrt(Σ((x - x̅)^2) * Σ((y - y̅)^2))`.
|
Calculates the Pearson correlation coefficient: `Σ((x - x̅)(y - y̅)) / sqrt(Σ((x - x̅)^2) * Σ((y - y̅)^2))`.
|
||||||
|
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/query_language/agg_functions/reference/) <!--hide-->
|
||||||
|
@ -272,3 +272,5 @@ The table contains information about mutations of MergeTree tables and their pro
|
|||||||
|
|
||||||
**is_done** - Is the mutation done? Note that even if `parts_to_do = 0` it is possible that a mutation of a replicated table is not done yet because of a long-running INSERT that will create a new data part that will need to be mutated.
|
**is_done** - Is the mutation done? Note that even if `parts_to_do = 0` it is possible that a mutation of a replicated table is not done yet because of a long-running INSERT that will create a new data part that will need to be mutated.
|
||||||
|
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/query_language/alter/) <!--hide-->
|
||||||
|
@ -152,3 +152,5 @@ The execution of `ALTER` queries on materialized views has not been fully develo
|
|||||||
Views look the same as normal tables. For example, they are listed in the result of the `SHOW TABLES` query.
|
Views look the same as normal tables. For example, they are listed in the result of the `SHOW TABLES` query.
|
||||||
|
|
||||||
There isn't a separate query for deleting views. To delete a view, use `DROP TABLE`.
|
There isn't a separate query for deleting views. To delete a view, use `DROP TABLE`.
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/query_language/create/) <!--hide-->
|
||||||
|
@ -41,3 +41,5 @@ See also "[Functions for working with external dictionaries](../functions/ext_di
|
|||||||
|
|
||||||
!!! attention
|
!!! attention
|
||||||
You can convert values for a small dictionary by describing it in a `SELECT` query (see the [transform](../functions/other_functions.md#other_functions-transform) function). This functionality is not related to external dictionaries.
|
You can convert values for a small dictionary by describing it in a `SELECT` query (see the [transform](../functions/other_functions.md#other_functions-transform) function). This functionality is not related to external dictionaries.
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/query_language/dicts/external_dicts/) <!--hide-->
|
||||||
|
@ -31,3 +31,5 @@ The dictionary configuration has the following structure:
|
|||||||
- [layout](external_dicts_dict_layout.md#dicts-external_dicts_dict_layout) — Dictionary layout in memory.
|
- [layout](external_dicts_dict_layout.md#dicts-external_dicts_dict_layout) — Dictionary layout in memory.
|
||||||
- [structure](external_dicts_dict_structure.md#dicts-external_dicts_dict_structure) — Structure of the dictionary . A key and attributes that can be retrieved by this key.
|
- [structure](external_dicts_dict_structure.md#dicts-external_dicts_dict_structure) — Structure of the dictionary . A key and attributes that can be retrieved by this key.
|
||||||
- [lifetime](external_dicts_dict_lifetime.md#dicts-external_dicts_dict_lifetime) — Frequency of dictionary updates.
|
- [lifetime](external_dicts_dict_lifetime.md#dicts-external_dicts_dict_lifetime) — Frequency of dictionary updates.
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/query_language/dicts/external_dicts_dict/) <!--hide-->
|
||||||
|
@ -292,3 +292,5 @@ dictGetString('prefix', 'asn', tuple(IPv6StringToNum('2001:db8::1')))
|
|||||||
Other types are not supported yet. The function returns the attribute for the prefix that corresponds to this IP address. If there are overlapping prefixes, the most specific one is returned.
|
Other types are not supported yet. The function returns the attribute for the prefix that corresponds to this IP address. If there are overlapping prefixes, the most specific one is returned.
|
||||||
|
|
||||||
Data is stored in a `trie`. It must completely fit into RAM.
|
Data is stored in a `trie`. It must completely fit into RAM.
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/query_language/dicts/external_dicts_dict_layout/) <!--hide-->
|
||||||
|
@ -57,3 +57,5 @@ Example of settings:
|
|||||||
</dictionary>
|
</dictionary>
|
||||||
```
|
```
|
||||||
|
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/query_language/dicts/external_dicts_dict_lifetime/) <!--hide-->
|
||||||
|
@ -427,3 +427,5 @@ Setting fields:
|
|||||||
- `password` – Password of the MongoDB user.
|
- `password` – Password of the MongoDB user.
|
||||||
- `db` – Name of the database.
|
- `db` – Name of the database.
|
||||||
- `collection` – Name of the collection.
|
- `collection` – Name of the collection.
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/query_language/dicts/external_dicts_dict_sources/) <!--hide-->
|
||||||
|
@ -115,3 +115,5 @@ Configuration fields:
|
|||||||
- `hierarchical` – Hierarchical support. Mirrored to the parent identifier. By default, ` false`.
|
- `hierarchical` – Hierarchical support. Mirrored to the parent identifier. By default, ` false`.
|
||||||
- `injective` – Whether the `id -> attribute` image is injective. If ` true`, then you can optimize the ` GROUP BY` clause. By default, `false`.
|
- `injective` – Whether the `id -> attribute` image is injective. If ` true`, then you can optimize the ` GROUP BY` clause. By default, `false`.
|
||||||
- `is_object_id` – Whether the query is executed for a MongoDB document by `ObjectID`.
|
- `is_object_id` – Whether the query is executed for a MongoDB document by `ObjectID`.
|
||||||
|
|
||||||
|
[Original article](https://clickhouse.yandex/docs/en/query_language/dicts/external_dicts_dict_structure/) <!--hide-->
|
||||||
|
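The same two-line footer (a blank line plus a hidden "Original article" backlink) repeats for every file in this commit, which suggests it was generated rather than typed by hand. Below is a minimal, hypothetical sketch of how such a footer could be appended in bulk; the `docs/en` layout, the sample page, and the URL prefix are assumptions inferred from the links above, not the script actually used.

```shell
#!/bin/sh
# Hypothetical sketch: append a hidden "Original article" backlink to each
# English doc page. The docs root and URL prefix are assumptions.
DOCS_ROOT="docs/en"
URL_PREFIX="https://clickhouse.yandex/docs/en"

# Create one sample page so the sketch is self-contained.
mkdir -p "$DOCS_ROOT/data_types"
printf '# Boolean values\n\nThere is no separate type for boolean values.\n' \
  > "$DOCS_ROOT/data_types/boolean.md"

find "$DOCS_ROOT" -name '*.md' | while read -r f; do
  rel="${f#$DOCS_ROOT/}"   # path relative to the docs root
  rel="${rel%.md}"         # drop the .md extension
  printf '\n\n[Original article](%s/%s/) <!--hide-->\n' "$URL_PREFIX" "$rel" >> "$f"
done

tail -n 1 "$DOCS_ROOT/data_types/boolean.md"
```

The `<!--hide-->` comment lets the site's own renderer suppress the link, while mirrors that blindly render the markdown sources still show the backlink.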
Some files were not shown because too many files have changed in this diff.