Merge branch 'ClickHouse:master' into short_circut_func

李扬 2024-11-18 15:09:14 +08:00 committed by GitHub
commit b1e816f60c
11 changed files with 91 additions and 24 deletions

View File

@@ -16,7 +16,7 @@ You have four options for getting up and running with ClickHouse:
- **[ClickHouse Cloud](https://clickhouse.com/cloud/):** The official ClickHouse as a service, built by, maintained, and supported by the creators of ClickHouse
- **[Quick Install](#quick-install):** an easy-to-download binary for testing and developing with ClickHouse
- **[Production Deployments](#available-installation-options):** ClickHouse can run on any Linux, FreeBSD, or macOS with x86-64, modern ARM (ARMv8.2-A up), or PowerPC64LE CPU architecture
- **[Docker Image](https://hub.docker.com/r/clickhouse/clickhouse-server/):** use the official Docker image on Docker Hub
- **[Docker Image](https://hub.docker.com/_/clickhouse):** use the official Docker image on Docker Hub
## ClickHouse Cloud

View File

@@ -5,9 +5,14 @@ sidebar_label: EXCEPT
# EXCEPT Clause
The `EXCEPT` clause returns only those rows that result from the first query without the second. The queries must match the number of columns, order, and type. The result of `EXCEPT` can contain duplicate rows.
The `EXCEPT` clause returns only those rows that result from the first query without the second.
Multiple `EXCEPT` statements are executed left to right if parenthesis are not specified. The `EXCEPT` operator has the same priority as the `UNION` clause and lower priority than the `INTERSECT` clause.
- Both queries must have the same number of columns, in the same order, and with matching data types.
- The result of `EXCEPT` can contain duplicate rows. Use `EXCEPT DISTINCT` if this is not desirable, as shown in the sketch after this list.
- Multiple `EXCEPT` statements are executed from left to right if parentheses are not specified.
- The `EXCEPT` operator has the same priority as the `UNION` clause and lower priority than the `INTERSECT` clause.
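As a minimal sketch of the duplicate-row behaviour (the values and the use of `arrayJoin` here are chosen purely for illustration), plain `EXCEPT` keeps both copies of `1`, while `EXCEPT DISTINCT` returns each surviving row once:
```sql
-- The left-hand query produces 1, 1, 2, 3; the right-hand query produces 3.
SELECT arrayJoin([1, 1, 2, 3]) AS x
EXCEPT
SELECT 3;   -- returns 1, 1, 2 (duplicates are kept)

SELECT arrayJoin([1, 1, 2, 3]) AS x
EXCEPT DISTINCT
SELECT 3;   -- returns 1, 2 (duplicates are removed)
```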
## Syntax
``` sql
SELECT column1 [, column2 ]
@@ -19,18 +24,33 @@ EXCEPT
SELECT column1 [, column2 ]
FROM table2
[WHERE condition]
```
The condition could be any expression based on your requirements.
Additionally, `EXCEPT()` can be used to exclude columns from the result of a query on the same table, as is possible in BigQuery (Google Cloud), using the following syntax:
```sql
SELECT column1 [, column2 ] EXCEPT (column3 [, column4])
FROM table1
[WHERE condition]
```
## Examples
The examples in this section demonstrate usage of the `EXCEPT` clause.
### Filtering Numbers Using the `EXCEPT` Clause
Here is a simple example that returns the numbers 1 to 10 that are _not_ part of the numbers 3 to 8:
Query:
``` sql
SELECT number FROM numbers(1,10) EXCEPT SELECT number FROM numbers(3,6);
SELECT number
FROM numbers(1, 10)
EXCEPT
SELECT number
FROM numbers(3, 6)
```
Result:
@@ -44,7 +64,53 @@ Result:
└────────┘
```
`EXCEPT` and `INTERSECT` can often be used interchangeably with different Boolean logic, and they are both useful if you have two tables that share a common column (or columns). For example, suppose we have a few million rows of historical cryptocurrency data that contains trade prices and volume:
### Excluding Specific Columns Using `EXCEPT()`
`EXCEPT()` can be used to quickly exclude columns from a result, for instance when we want to select all columns from a table except a few, as shown in the example below:
Query:
```sql
SHOW COLUMNS IN system.settings

SELECT * EXCEPT (default, alias_for, readonly, description)
FROM system.settings
LIMIT 5
```
Result:
```response
┌─field───────┬─type─────────────────────────────────────────────────────────────────────┬─null─┬─key─┬─default─┬─extra─┐
1. │ alias_for │ String │ NO │ │ ᴺᵁᴸᴸ │ │
2. │ changed │ UInt8 │ NO │ │ ᴺᵁᴸᴸ │ │
3. │ default │ String │ NO │ │ ᴺᵁᴸᴸ │ │
4. │ description │ String │ NO │ │ ᴺᵁᴸᴸ │ │
5. │ is_obsolete │ UInt8 │ NO │ │ ᴺᵁᴸᴸ │ │
6. │ max │ Nullable(String) │ YES │ │ ᴺᵁᴸᴸ │ │
7. │ min │ Nullable(String) │ YES │ │ ᴺᵁᴸᴸ │ │
8. │ name │ String │ NO │ │ ᴺᵁᴸᴸ │ │
9. │ readonly │ UInt8 │ NO │ │ ᴺᵁᴸᴸ │ │
10. │ tier │ Enum8('Production' = 0, 'Obsolete' = 4, 'Experimental' = 8, 'Beta' = 12) │ NO │ │ ᴺᵁᴸᴸ │ │
11. │ type │ String │ NO │ │ ᴺᵁᴸᴸ │ │
12. │ value │ String │ NO │ │ ᴺᵁᴸᴸ │ │
└─────────────┴──────────────────────────────────────────────────────────────────────────┴──────┴─────┴─────────┴───────┘
┌─name────────────────────┬─value──────┬─changed─┬─min──┬─max──┬─type────┬─is_obsolete─┬─tier───────┐
1. │ dialect │ clickhouse │ 0 │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ Dialect │ 0 │ Production │
2. │ min_compress_block_size │ 65536 │ 0 │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ UInt64 │ 0 │ Production │
3. │ max_compress_block_size │ 1048576 │ 0 │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ UInt64 │ 0 │ Production │
4. │ max_block_size │ 65409 │ 0 │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ UInt64 │ 0 │ Production │
5. │ max_insert_block_size │ 1048449 │ 0 │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ UInt64 │ 0 │ Production │
└─────────────────────────┴────────────┴─────────┴──────┴──────┴─────────┴─────────────┴────────────┘
```
### Using `EXCEPT` and `INTERSECT` with Cryptocurrency Data
`EXCEPT` and `INTERSECT` can often be used interchangeably with different Boolean logic, and they are both useful if you have two tables that share a common column (or columns).
For example, suppose we have a few million rows of historical cryptocurrency data that contains trade prices and volume:
Query:
```sql
CREATE TABLE crypto_prices
@@ -72,6 +138,8 @@ ORDER BY trade_date DESC
LIMIT 10;
```
Result:
```response
┌─trade_date─┬─crypto_name─┬──────volume─┬────price─┬───market_cap─┬──change_1_day─┐
│ 2020-11-02 │ Bitcoin │ 30771456000 │ 13550.49 │ 251119860000 │ -0.013585099 │
@@ -127,7 +195,7 @@ Result:
This means that, of the four cryptocurrencies we own, only Bitcoin has never dropped below $10 (based on the limited data we have in this example).
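To illustrate the interchangeability with `INTERSECT` mentioned above, a sketch of the complementary query returns the coins that _have_ traded below $10. The `holdings` table name is an assumption standing in for the portfolio table created in the part of this example that falls outside the hunk; `crypto_prices` and its columns are as shown above:
```sql
-- Sketch only: `holdings` is a hypothetical name for the portfolio table
-- used earlier in this example; it is assumed to have a crypto_name column.
SELECT crypto_name
FROM holdings
INTERSECT
SELECT crypto_name
FROM crypto_prices
WHERE price < 10.00;
```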
## EXCEPT DISTINCT
### Using `EXCEPT DISTINCT`
Notice that the previous `EXCEPT` query returned multiple Bitcoin holdings in the result. You can add `DISTINCT` to `EXCEPT` to eliminate duplicate rows from the result:
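The full query falls outside this hunk; a minimal sketch of the `DISTINCT` variant, under the same assumption about a `holdings` table as above, would be:
```sql
-- Sketch only: same hypothetical `holdings` table as in the INTERSECT sketch.
SELECT crypto_name
FROM holdings
EXCEPT DISTINCT
SELECT crypto_name
FROM crypto_prices
WHERE price < 10.00;
```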
@@ -146,7 +214,6 @@ Result:
└─────────────┘
```
**See Also**
- [UNION](union.md#union-clause)

View File

@@ -154,7 +154,7 @@ sudo "clickhouse-client-$LATEST_VERSION/install/doinst.sh"
### From a Docker image {#from-docker-image}
To run ClickHouse in Docker, follow the instructions on [Docker Hub](https://hub.docker.com/r/clickhouse/clickhouse-server/). The official `deb` packages are used inside the images.
To run ClickHouse in Docker, follow the instructions on [Docker Hub](https://hub.docker.com/_/clickhouse). The official `deb` packages are used inside the images.
### From a single binary {#from-single-binary}

View File

@@ -132,7 +132,7 @@ sudo "clickhouse-client-$LATEST_VERSION/install/doinst.sh"
### `Docker` image {#from-docker-image}
To run ClickHouse in Docker, follow the guide on [Docker Hub](https://hub.docker.com/r/clickhouse/clickhouse-server/). The images use the official `deb` packages.
To run ClickHouse in Docker, follow the guide on [Docker Hub](https://hub.docker.com/_/clickhouse). The images use the official `deb` packages.
### Installation packages for other environments {#from-other}

View File

@@ -98,9 +98,9 @@ check_cli_and_http
function cmp_http_compression() {
$CLICKHOUSE_CLIENT -q "$(query "$1")" > "${CLICKHOUSE_TMP}"/res0
ch_url 'compress=1' "$1" | "${CLICKHOUSE_BINARY}"-compressor --decompress > "${CLICKHOUSE_TMP}"/res1
ch_url "compress=1&buffer_size=$2&wait_end_of_query=0" "$1" | "${CLICKHOUSE_BINARY}"-compressor --decompress > "${CLICKHOUSE_TMP}"/res2
ch_url "compress=1&buffer_size=$2&wait_end_of_query=1" "$1" | "${CLICKHOUSE_BINARY}"-compressor --decompress > "${CLICKHOUSE_TMP}"/res3
ch_url 'compress=1' "$1" | "${CLICKHOUSE_COMPRESSOR}" --decompress > "${CLICKHOUSE_TMP}"/res1
ch_url "compress=1&buffer_size=$2&wait_end_of_query=0" "$1" | "${CLICKHOUSE_COMPRESSOR}" --decompress > "${CLICKHOUSE_TMP}"/res2
ch_url "compress=1&buffer_size=$2&wait_end_of_query=1" "$1" | "${CLICKHOUSE_COMPRESSOR}" --decompress > "${CLICKHOUSE_TMP}"/res3
cmp "${CLICKHOUSE_TMP}"/res0 "${CLICKHOUSE_TMP}"/res1
cmp "${CLICKHOUSE_TMP}"/res1 "${CLICKHOUSE_TMP}"/res2
cmp "${CLICKHOUSE_TMP}"/res1 "${CLICKHOUSE_TMP}"/res3

View File

@@ -35,7 +35,7 @@ CREATE DICTIONARY {CLICKHOUSE_DATABASE:Identifier}.dict_ipv4_trie
cca2 String
)
PRIMARY KEY prefix
SOURCE(CLICKHOUSE(host 'localhost' port 9000 user 'default' db currentDatabase() table 'table_ipv4_trie'))
SOURCE(CLICKHOUSE(host 'localhost' port tcpPort() user 'default' db currentDatabase() table 'table_ipv4_trie'))
LAYOUT(IP_TRIE())
LIFETIME(MIN 10 MAX 100)
SETTINGS(dictionary_use_async_executor=1, max_threads=8)
@@ -139,7 +139,7 @@ CREATE DICTIONARY {CLICKHOUSE_DATABASE:Identifier}.dict_ipv4_trie
val UInt32
)
PRIMARY KEY prefix
SOURCE(CLICKHOUSE(host 'localhost' port 9000 user 'default' db currentDatabase() table 'table_ipv4_trie'))
SOURCE(CLICKHOUSE(host 'localhost' port tcpPort() user 'default' db currentDatabase() table 'table_ipv4_trie'))
LAYOUT(IP_TRIE())
LIFETIME(MIN 10 MAX 100);
@@ -209,7 +209,7 @@ INSERT INTO {CLICKHOUSE_DATABASE:Identifier}.table_ipv4_trie VALUES ('127.255.25
CREATE DICTIONARY {CLICKHOUSE_DATABASE:Identifier}.dict_ipv4_trie ( prefix String, val UInt32 )
PRIMARY KEY prefix
SOURCE(CLICKHOUSE(host 'localhost' port 9000 user 'default' db currentDatabase() table 'table_ipv4_trie'))
SOURCE(CLICKHOUSE(host 'localhost' port tcpPort() user 'default' db currentDatabase() table 'table_ipv4_trie'))
LAYOUT(IP_TRIE(ACCESS_TO_KEY_FROM_ATTRIBUTES 1))
LIFETIME(MIN 10 MAX 100);
@@ -287,7 +287,7 @@ CREATE DICTIONARY {CLICKHOUSE_DATABASE:Identifier}.dict_ip_trie
val String
)
PRIMARY KEY prefix
SOURCE(CLICKHOUSE(host 'localhost' port 9000 user 'default' db currentDatabase() table 'table_ip_trie'))
SOURCE(CLICKHOUSE(host 'localhost' port tcpPort() user 'default' db currentDatabase() table 'table_ip_trie'))
LAYOUT(IP_TRIE(ACCESS_TO_KEY_FROM_ATTRIBUTES 1))
LIFETIME(MIN 10 MAX 100);
@@ -493,7 +493,7 @@ CREATE DICTIONARY {CLICKHOUSE_DATABASE:Identifier}.dict_ip_trie
val String
)
PRIMARY KEY prefix
SOURCE(CLICKHOUSE(host 'localhost' port 9000 user 'default' db currentDatabase() table 'table_ip_trie'))
SOURCE(CLICKHOUSE(host 'localhost' port tcpPort() user 'default' db currentDatabase() table 'table_ip_trie'))
LAYOUT(IP_TRIE())
LIFETIME(MIN 10 MAX 100);

View File

@@ -1,5 +1,5 @@
#!/usr/bin/env bash
# Tags: no-replicated-database, no-shared-merge-tree
# Tags: zookeeper, no-replicated-database, no-shared-merge-tree
# Tag no-replicated-database: CREATE AS SELECT is disabled
# Tag no-shared-merge-tree -- implemented separate test, just bad substitution here

View File

@@ -1,5 +1,5 @@
#!/usr/bin/env bash
# Tags: long, no-replicated-database, no-ordinary-database
# Tags: long, no-replicated-database, no-ordinary-database, zookeeper
# shellcheck disable=SC2015

View File

@@ -1,4 +1,4 @@
-- Tags: no-ordinary-database, use-rocksdb
-- Tags: zookeeper, no-ordinary-database, use-rocksdb
DROP TABLE IF EXISTS 02526_keeper_map;
DROP TABLE IF EXISTS 02526_rocksdb;

View File

@@ -5,5 +5,5 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
. "$CURDIR"/../shell_config.sh
${CLICKHOUSE_CLIENT} --query="CREATE TABLE ${CLICKHOUSE_DATABASE}.x ENGINE = MergeTree() ORDER BY number AS SELECT * FROM numbers(2)"
${CLICKHOUSE_LOCAL} --query="SELECT count() FROM remote('127.0.0.{2,3}', '${CLICKHOUSE_DATABASE}.x') SETTINGS enable_analyzer = 1" 2>&1 \
${CLICKHOUSE_LOCAL} --query="SELECT count() FROM remote('127.0.0.{2,3}:${CLICKHOUSE_PORT_TCP}', '${CLICKHOUSE_DATABASE}.x') SETTINGS enable_analyzer = 1" 2>&1 \
| grep -av "ASan doesn't fully support makecontext/swapcontext functions"

View File

@@ -1,4 +1,4 @@
-- Tags: no-replicated-database
-- Tags: no-replicated-database, shard
-- Closes: https://github.com/ClickHouse/ClickHouse/issues/6571
SET enable_analyzer=1;