Update /sql-reference docs

rfraposa 2022-03-29 22:06:21 -06:00
parent 01ec63c909
commit 560471f991
276 changed files with 903 additions and 787 deletions


@ -155,8 +155,9 @@ The server initializes the `Context` class with the necessary environment for qu
We maintain full backward and forward compatibility for the server TCP protocol: old clients can talk to new servers, and new clients can talk to old servers. But we do not want to maintain it eternally, and we are removing support for old versions after about one year.
!!! note "Note"
For most external applications, we recommend using the HTTP interface because it is simple and easy to use. The TCP protocol is more tightly linked to internal data structures: it uses an internal format for passing blocks of data, and it uses custom framing for compressed data. We haven't released a C library for that protocol because it requires linking most of the ClickHouse codebase, which is not practical.
:::note
For most external applications, we recommend using the HTTP interface because it is simple and easy to use. The TCP protocol is more tightly linked to internal data structures: it uses an internal format for passing blocks of data, and it uses custom framing for compressed data. We haven't released a C library for that protocol because it requires linking most of the ClickHouse codebase, which is not practical.
:::
## Distributed Query Execution {#distributed-query-execution}
@ -194,7 +195,8 @@ Replication is physical: only compressed parts are transferred between nodes, no
Besides, each replica stores its state in ZooKeeper as the set of parts and its checksums. When the state on the local filesystem diverges from the reference state in ZooKeeper, the replica restores its consistency by downloading missing and broken parts from other replicas. When there is some unexpected or broken data in the local filesystem, ClickHouse does not remove it, but moves it to a separate directory and forgets it.
!!! note "Note"
The ClickHouse cluster consists of independent shards, and each shard consists of replicas. The cluster is **not elastic**, so after adding a new shard, data is not rebalanced between shards automatically. Instead, the cluster load is supposed to be adjusted to be uneven. This implementation gives you more control, and it is OK for relatively small clusters, such as tens of nodes. But for clusters with hundreds of nodes that we use in production, this approach becomes a significant drawback. We should implement a table engine that spans the cluster with dynamically replicated regions that could be split and balanced between clusters automatically.
:::note
The ClickHouse cluster consists of independent shards, and each shard consists of replicas. The cluster is **not elastic**, so after adding a new shard, data is not rebalanced between shards automatically. Instead, the cluster load is supposed to be adjusted to be uneven. This implementation gives you more control, and it is OK for relatively small clusters, such as tens of nodes. But for clusters with hundreds of nodes that we use in production, this approach becomes a significant drawback. We should implement a table engine that spans the cluster with dynamically replicated regions that could be split and balanced between clusters automatically.
:::
{## [Original article](https://clickhouse.com/docs/en/development/architecture/) ##}
[Original article](https://clickhouse.com/docs/en/development/architecture/)


@ -94,8 +94,9 @@ cmake --build . --config RelWithDebInfo
If you intend to run `clickhouse-server`, make sure to increase the system's maxfiles variable.
!!! info "Note"
You'll need to use sudo.
:::note
You'll need to use sudo.
:::
To do so, create the `/Library/LaunchDaemons/limit.maxfiles.plist` file with the following content:


@ -20,8 +20,9 @@ One ClickHouse server can have multiple replicated databases running and updatin
- `shard_name` — Shard name. Database replicas are grouped into shards by `shard_name`.
- `replica_name` — Replica name. Replica names must be different for all replicas of the same shard.
!!! note "Warning"
For [ReplicatedMergeTree](../table-engines/mergetree-family/replication.md#table_engines-replication) tables, if no arguments are provided, the default arguments `/clickhouse/tables/{uuid}/{shard}` and `{replica}` are used. These can be changed in the server settings [default_replica_path](../../operations/server-configuration-parameters/settings.md#default_replica_path) and [default_replica_name](../../operations/server-configuration-parameters/settings.md#default_replica_name). The `{uuid}` macro expands to the table's UUID; `{shard}` and `{replica}` expand to values from the server config, not from the database engine arguments. In the future, it will be possible to use the `shard_name` and `replica_name` of the Replicated database.
:::warning
For [ReplicatedMergeTree](../table-engines/mergetree-family/replication.md#table_engines-replication) tables, if no arguments are provided, the default arguments `/clickhouse/tables/{uuid}/{shard}` and `{replica}` are used. These can be changed in the server settings [default_replica_path](../../operations/server-configuration-parameters/settings.md#default_replica_path) and [default_replica_name](../../operations/server-configuration-parameters/settings.md#default_replica_name). The `{uuid}` macro expands to the table's UUID; `{shard}` and `{replica}` expand to values from the server config, not from the database engine arguments. In the future, it will be possible to use the `shard_name` and `replica_name` of the Replicated database.
:::
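To make the two forms concrete, here is a minimal sketch; the database, table, and column names are hypothetical:
``` sql
-- Explicit ZooKeeper path and replica name:
CREATE TABLE db.events (id UInt64)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
ORDER BY id;

-- Shorthand that relies on default_replica_path and default_replica_name:
CREATE TABLE db.events_short (id UInt64)
ENGINE = ReplicatedMergeTree
ORDER BY id;
```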
## Specifics and Recommendations {#specifics-and-recommendations}


@ -290,8 +290,9 @@ $ sudo service clickhouse-server restart
$ clickhouse-client --query "select count(*) from datasets.trips_mergetree"
```
!!! info "Info"
To run the queries described below, you must use the full table name, `datasets.trips_mergetree`.
:::info
To run the queries described below, you must use the full table name, `datasets.trips_mergetree`.
:::
## Results on Single Server {#results-on-single-server}


@ -156,8 +156,9 @@ $ sudo service clickhouse-server restart
$ clickhouse-client --query "select count(*) from datasets.ontime"
```
!!! info "Info"
To run the queries described below, you must use the full table name, `datasets.ontime`.
:::note
To run the queries described below, you must use the full table name, `datasets.ontime`.
:::
## Queries {#queries}


@ -17,8 +17,9 @@ $ make
Generating data:
!!! warning "Attention"
With `-s 100`, dbgen generates 600 million rows (67 GB); with `-s 1000`, it generates 6 billion rows (which takes a long time)
:::warning
With `-s 100`, dbgen generates 600 million rows (67 GB); with `-s 1000`, it generates 6 billion rows (which takes a long time)
:::
``` bash
$ ./dbgen -s 1000 -T c


@ -69,9 +69,10 @@ You can also download and install packages manually from [here](https://packages
- `clickhouse-client` — Creates symbolic links for `clickhouse-client` and other client-related tools, and installs client configuration files.
- `clickhouse-common-static-dbg` — Installs ClickHouse compiled binary files with debug info.
!!! attention "Attention"
If you need to install a specific version of ClickHouse, you have to install all packages with the same version:
`sudo apt-get install clickhouse-server=21.8.5.7 clickhouse-client=21.8.5.7 clickhouse-common-static=21.8.5.7`
:::info
If you need to install a specific version of ClickHouse, you have to install all packages with the same version:
`sudo apt-get install clickhouse-server=21.8.5.7 clickhouse-client=21.8.5.7 clickhouse-common-static=21.8.5.7`
:::
### From RPM Packages {#from-rpm-packages}


@ -1,7 +1,4 @@
position: 15
label: 'SQL Reference'
collapsible: true
collapsed: true
link:
type: generated-index
title: SQL Reference
collapsed: true


@ -1,6 +1,6 @@
---
toc_priority: 37
toc_title: Combinators
sidebar_position: 37
sidebar_label: Combinators
---
# Aggregate Function Combinators {#aggregate_functions_combinators}


@ -1,10 +1,9 @@
---
toc_folder_title: Aggregate Functions
toc_priority: 33
toc_title: Introduction
sidebar_label: Aggregate Functions
sidebar_position: 33
---
# Aggregate Functions {#aggregate-functions}
# Aggregate Functions
Aggregate functions work in the [normal](http://www.sql-tutorial.com/sql-aggregate-functions-sql-tutorial) way as expected by database experts.


@ -1,6 +1,6 @@
---
toc_priority: 38
toc_title: Parametric
sidebar_position: 38
sidebar_label: Parametric
---
# Parametric Aggregate Functions {#aggregate_functions_parametric}
@ -89,8 +89,9 @@ Checks whether the sequence contains an event chain that matches the pattern.
sequenceMatch(pattern)(timestamp, cond1, cond2, ...)
```
!!! warning "Warning"
Events that occur during the same second may appear in the sequence in an undefined order, affecting the result.
:::warning
Events that occur during the same second may appear in the sequence in an undefined order, affecting the result.
:::
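A hedged usage sketch follows; the table `t` and its columns `time` and `number` are hypothetical stand-ins:
``` sql
-- Returns 1 if some event with number = 1 is eventually followed by one with number = 2:
SELECT sequenceMatch('(?1)(?2)')(time, number = 1, number = 2)
FROM t;
```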
**Arguments**
@ -174,8 +175,9 @@ SELECT sequenceMatch('(?1)(?2)')(time, number = 1, number = 2, number = 4) FROM
Counts the number of event chains that matched the pattern. The function searches event chains that do not overlap. It starts to search for the next chain after the current chain is matched.
!!! warning "Warning"
Events that occur during the same second may appear in the sequence in an undefined order, affecting the result.
:::warning
Events that occur during the same second may appear in the sequence in an undefined order, affecting the result.
:::
``` sql
sequenceCount(pattern)(timestamp, cond1, cond2, ...)
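-- A hedged usage sketch (the table t and its columns time and number are hypothetical):
--   SELECT sequenceCount('(?1).*(?2)')(time, number = 1, number = 2) FROM t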


@ -1,5 +1,5 @@
---
toc_priority: 6
sidebar_position: 6
---
# any {#agg_function-any}


@ -1,5 +1,5 @@
---
toc_priority: 103
sidebar_position: 103
---
# anyHeavy {#anyheavyx}


@ -1,5 +1,5 @@
---
toc_priority: 104
sidebar_position: 104
---
## anyLast {#anylastx}


@ -1,5 +1,5 @@
---
toc_priority: 106
sidebar_position: 106
---
# argMax {#agg-function-argmax}


@ -1,5 +1,5 @@
---
toc_priority: 105
sidebar_position: 105
---
# argMin {#agg-function-argmin}


@ -1,5 +1,5 @@
---
toc_priority: 5
sidebar_position: 5
---
# avg {#agg_function-avg}


@ -1,5 +1,5 @@
---
toc_priority: 107
sidebar_position: 107
---
# avgWeighted {#avgweighted}


@ -1,5 +1,5 @@
---
toc_priority: 250
sidebar_position: 250
---
# categoricalInformationValue {#categoricalinformationvalue}


@ -1,5 +1,5 @@
---
toc_priority: 107
sidebar_position: 107
---
# corr {#corrx-y}
@ -8,5 +8,6 @@ Syntax: `corr(x, y)`
Calculates the Pearson correlation coefficient: `Σ((x - x̅)(y - y̅)) / sqrt(Σ((x - x̅)^2) * Σ((y - y̅)^2))`.
!!! note "Note"
This function uses a numerically unstable algorithm. If you need [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability) in calculations, use the `corrStable` function. It works slower but provides a lower computational error.
:::note
This function uses a numerically unstable algorithm. If you need [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability) in calculations, use the `corrStable` function. It works slower but provides a lower computational error.
:::
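A minimal usage sketch, assuming a hypothetical table `samples` with numeric columns `x` and `y`:
``` sql
SELECT corr(x, y) FROM samples;
-- Numerically stable (but slower) variant:
SELECT corrStable(x, y) FROM samples;
```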


@ -1,5 +1,5 @@
---
toc_priority: 1
sidebar_position: 1
---
# count {#agg_function-count}


@ -1,5 +1,5 @@
---
toc_priority: 36
sidebar_position: 36
---
# covarPop {#covarpop}
@ -8,5 +8,6 @@ Syntax: `covarPop(x, y)`
Calculates the value of `Σ((x - x̅)(y - y̅)) / n`.
!!! note "Note"
This function uses a numerically unstable algorithm. If you need [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability) in calculations, use the `covarPopStable` function. It works slower but provides a lower computational error.
:::note
This function uses a numerically unstable algorithm. If you need [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability) in calculations, use the `covarPopStable` function. It works slower but provides a lower computational error.
:::
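As a hedged illustration, the following contrasts population and sample covariance over ten fabricated rows:
``` sql
SELECT covarPop(x, y), covarSamp(x, y)
FROM (SELECT number AS x, number * 2 AS y FROM numbers(10));
```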


@ -1,5 +1,5 @@
---
toc_priority: 37
sidebar_position: 37
---
# covarSamp {#covarsamp}
@ -8,5 +8,6 @@ Calculates the value of `Σ((x - x̅)(y - y̅)) / (n - 1)`.
Returns Float64. When `n <= 1`, returns +∞.
!!! note "Note"
This function uses a numerically unstable algorithm. If you need [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability) in calculations, use the `covarSampStable` function. It works slower but provides a lower computational error.
:::note
This function uses a numerically unstable algorithm. If you need [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability) in calculations, use the `covarSampStable` function. It works slower but provides a lower computational error.
:::


@ -1,13 +1,14 @@
---
toc_priority: 141
sidebar_position: 141
---
# deltaSum {#agg_functions-deltasum}
Sums the arithmetic difference between consecutive rows. If the difference is negative, it is ignored.
!!! info "Note"
The underlying data must be sorted for this function to work properly. If you would like to use this function in a [materialized view](../../../sql-reference/statements/create/view.md#materialized), you most likely want to use the [deltaSumTimestamp](../../../sql-reference/aggregate-functions/reference/deltasumtimestamp.md#agg_functions-deltasumtimestamp) method instead.
:::note
The underlying data must be sorted for this function to work properly. If you would like to use this function in a [materialized view](../../../sql-reference/statements/create/view.md#materialized), you most likely want to use the [deltaSumTimestamp](../../../sql-reference/aggregate-functions/reference/deltasumtimestamp.md#agg_functions-deltasumtimestamp) method instead.
:::
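A quick worked illustration before the formal syntax: over the values 1, 2, 3 the consecutive deltas are 1 and 1, so the result is 2.
``` sql
SELECT deltaSum(arrayJoin([1, 2, 3]));  -- returns 2; a negative delta would be ignored
```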
**Syntax**


@ -1,5 +1,5 @@
---
toc_priority: 141
sidebar_position: 141
---
# deltaSumTimestamp {#agg_functions-deltasumtimestamp}


@ -1,5 +1,5 @@
---
toc_priority: 302
sidebar_position: 302
---
# entropy {#entropy}


@ -1,5 +1,5 @@
---
toc_priority: 108
sidebar_position: 108
---
## exponentialMovingAverage {#exponential-moving-average}


@ -1,5 +1,5 @@
---
toc_priority: 110
sidebar_position: 110
---
# groupArray {#agg_function-grouparray}


@ -1,5 +1,5 @@
---
toc_priority: 112
sidebar_position: 112
---
# groupArrayInsertAt {#grouparrayinsertat}


@ -1,5 +1,5 @@
---
toc_priority: 114
sidebar_position: 114
---
# groupArrayMovingAvg {#agg_function-grouparraymovingavg}


@ -1,5 +1,5 @@
---
toc_priority: 113
sidebar_position: 113
---
# groupArrayMovingSum {#agg_function-grouparraymovingsum}


@ -1,5 +1,5 @@
---
toc_priority: 114
sidebar_position: 114
---
# groupArraySample {#grouparraysample}


@ -1,5 +1,5 @@
---
toc_priority: 125
sidebar_position: 125
---
# groupBitAnd {#groupbitand}


@ -1,5 +1,5 @@
---
toc_priority: 128
sidebar_position: 128
---
# groupBitmap {#groupbitmap}


@ -1,5 +1,5 @@
---
toc_priority: 129
sidebar_position: 129
---
# groupBitmapAnd {#groupbitmapand}


@ -1,5 +1,5 @@
---
toc_priority: 130
sidebar_position: 130
---
# groupBitmapOr {#groupbitmapor}


@ -1,5 +1,5 @@
---
toc_priority: 131
sidebar_position: 131
---
# groupBitmapXor {#groupbitmapxor}


@ -1,5 +1,5 @@
---
toc_priority: 126
sidebar_position: 126
---
# groupBitOr {#groupbitor}


@ -1,5 +1,5 @@
---
toc_priority: 127
sidebar_position: 127
---
# groupBitXor {#groupbitxor}


@ -1,5 +1,5 @@
---
toc_priority: 111
sidebar_position: 111
---
# groupUniqArray {#groupuniqarray}


@ -1,6 +1,6 @@
---
toc_folder_title: Reference
toc_priority: 36
sidebar_position: 36
toc_hidden: true
---


@ -1,6 +1,6 @@
---
toc_priority: 146
toc_title: intervalLengthSum
sidebar_position: 146
sidebar_label: intervalLengthSum
---
# intervalLengthSum {#agg_function-intervallengthsum}
@ -18,8 +18,9 @@ intervalLengthSum(start, end)
- `start` — The starting value of the interval. [Int32](../../../sql-reference/data-types/int-uint.md#uint8-uint16-uint32-uint64-int8-int16-int32-int64), [Int64](../../../sql-reference/data-types/int-uint.md#uint8-uint16-uint32-uint64-int8-int16-int32-int64), [UInt32](../../../sql-reference/data-types/int-uint.md#uint8-uint16-uint32-uint64-int8-int16-int32-int64), [UInt64](../../../sql-reference/data-types/int-uint.md#uint8-uint16-uint32-uint64-int8-int16-int32-int64), [Float32](../../../sql-reference/data-types/float.md#float32-float64), [Float64](../../../sql-reference/data-types/float.md#float32-float64), [DateTime](../../../sql-reference/data-types/datetime.md#data_type-datetime) or [Date](../../../sql-reference/data-types/date.md#data_type-date).
- `end` — The ending value of the interval. [Int32](../../../sql-reference/data-types/int-uint.md#uint8-uint16-uint32-uint64-int8-int16-int32-int64), [Int64](../../../sql-reference/data-types/int-uint.md#uint8-uint16-uint32-uint64-int8-int16-int32-int64), [UInt32](../../../sql-reference/data-types/int-uint.md#uint8-uint16-uint32-uint64-int8-int16-int32-int64), [UInt64](../../../sql-reference/data-types/int-uint.md#uint8-uint16-uint32-uint64-int8-int16-int32-int64), [Float32](../../../sql-reference/data-types/float.md#float32-float64), [Float64](../../../sql-reference/data-types/float.md#float32-float64), [DateTime](../../../sql-reference/data-types/datetime.md#data_type-datetime) or [Date](../../../sql-reference/data-types/date.md#data_type-date).
!!! info "Note"
Arguments must be of the same data type. Otherwise, an exception will be thrown.
:::note
Arguments must be of the same data type. Otherwise, an exception will be thrown.
:::
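A hedged sketch of the same-type requirement; the interval values are made up:
``` sql
-- Both arguments are Float64, so this is accepted:
SELECT intervalLengthSum(start, end)
FROM (SELECT 1.1 AS start, 3.5 AS end);
-- Mixing types, e.g. a DateTime start with a Float64 end, throws an exception.
```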
**Returned value**


@ -1,5 +1,5 @@
---
toc_priority: 153
sidebar_position: 153
---
# kurtPop {#kurtpop}


@ -1,5 +1,5 @@
---
toc_priority: 154
sidebar_position: 154
---
# kurtSamp {#kurtsamp}


@ -1,6 +1,6 @@
---
toc_priority: 310
toc_title: mannWhitneyUTest
sidebar_position: 310
sidebar_label: mannWhitneyUTest
---
# mannWhitneyUTest {#mannwhitneyutest}


@ -1,5 +1,5 @@
---
toc_priority: 3
sidebar_position: 3
---
# max {#agg_function-max}


@ -1,5 +1,5 @@
---
toc_priority: 143
sidebar_position: 143
---
# maxMap {#agg_functions-maxmap}


@ -1,6 +1,6 @@
---
toc_priority: 303
toc_title: meanZTest
sidebar_position: 303
sidebar_label: meanZTest
---
# meanZTest {#meanztest}


@ -1,5 +1,5 @@
---
toc_priority: 212
sidebar_position: 212
---
# median {#median}


@ -1,5 +1,5 @@
---
toc_priority: 2
sidebar_position: 2
---
## min {#agg_function-min}


@ -1,5 +1,5 @@
---
toc_priority: 142
sidebar_position: 142
---
# minMap {#agg_functions-minmap}


@ -1,5 +1,5 @@
---
toc_priority: 200
sidebar_position: 200
---
# quantile {#quantile}


@ -1,5 +1,5 @@
---
toc_priority: 209
sidebar_position: 209
---
# quantileBFloat16 {#quantilebfloat16}


@ -1,5 +1,5 @@
---
toc_priority: 206
sidebar_position: 206
---
# quantileDeterministic {#quantiledeterministic}


@ -1,5 +1,5 @@
---
toc_priority: 202
sidebar_position: 202
---
# quantileExact Functions {#quantileexact-functions}


@ -1,5 +1,5 @@
---
toc_priority: 203
sidebar_position: 203
---
# quantileExactWeighted {#quantileexactweighted}


@ -1,5 +1,5 @@
---
toc_priority: 201
sidebar_position: 201
---
# quantiles Functions {#quantiles-functions}


@ -1,5 +1,5 @@
---
toc_priority: 207
sidebar_position: 207
---
# quantileTDigest {#quantiletdigest}


@ -1,5 +1,5 @@
---
toc_priority: 208
sidebar_position: 208
---
# quantileTDigestWeighted {#quantiletdigestweighted}
@ -12,8 +12,9 @@ The result depends on the order of running the query, and is nondeterministic.
When using multiple `quantile*` functions with different levels in a query, the internal states are not combined (that is, the query works less efficiently than it could). In this case, use the [quantiles](../../../sql-reference/aggregate-functions/reference/quantiles.md#quantiles) function.
!!! note "Note"
Using `quantileTDigestWeighted` [is not recommended for tiny data sets](https://github.com/tdunning/t-digest/issues/167#issuecomment-828650275) and can lead to significant error. In this case, consider using [`quantileTDigest`](../../../sql-reference/aggregate-functions/reference/quantiletdigest.md) instead.
:::note
Using `quantileTDigestWeighted` [is not recommended for tiny data sets](https://github.com/tdunning/t-digest/issues/167#issuecomment-828650275) and can lead to significant error. In this case, consider using [`quantileTDigest`](../../../sql-reference/aggregate-functions/reference/quantiletdigest.md) instead.
:::
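A minimal sketch before the formal syntax: the median of the integers 0 through 9, each with weight 1:
``` sql
SELECT quantileTDigestWeighted(0.5)(number, 1) FROM numbers(10);
```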
**Syntax**


@ -1,5 +1,5 @@
---
toc_priority: 204
sidebar_position: 204
---
# quantileTiming {#quantiletiming}
@ -36,8 +36,9 @@ The calculation is accurate if:
Otherwise, the result of the calculation is rounded to the nearest multiple of 16 ms.
!!! note "Note"
For calculating page loading time quantiles, this function is more effective and accurate than [quantile](../../../sql-reference/aggregate-functions/reference/quantile.md#quantile).
:::note
For calculating page loading time quantiles, this function is more effective and accurate than [quantile](../../../sql-reference/aggregate-functions/reference/quantile.md#quantile).
:::
**Returned value**
@ -45,8 +46,9 @@ Otherwise, the result of the calculation is rounded to the nearest multiple of 1
Type: `Float32`.
!!! note "Note"
If no values are passed to the function (when using `quantileTimingIf`), [NaN](../../../sql-reference/data-types/float.md#data_type-float-nan-inf) is returned. The purpose of this is to differentiate these cases from cases that result in zero. See [ORDER BY clause](../../../sql-reference/statements/select/order-by.md#select-order-by) for notes on sorting `NaN` values.
:::note
If no values are passed to the function (when using `quantileTimingIf`), [NaN](../../../sql-reference/data-types/float.md#data_type-float-nan-inf) is returned. The purpose of this is to differentiate these cases from cases that result in zero. See [ORDER BY clause](../../../sql-reference/statements/select/order-by.md#select-order-by) for notes on sorting `NaN` values.
:::
**Example**


@ -1,5 +1,5 @@
---
toc_priority: 205
sidebar_position: 205
---
# quantileTimingWeighted {#quantiletimingweighted}
@ -38,8 +38,9 @@ The calculation is accurate if:
Otherwise, the result of the calculation is rounded to the nearest multiple of 16 ms.
!!! note "Note"
For calculating page loading time quantiles, this function is more effective and accurate than [quantile](../../../sql-reference/aggregate-functions/reference/quantile.md#quantile).
:::note
For calculating page loading time quantiles, this function is more effective and accurate than [quantile](../../../sql-reference/aggregate-functions/reference/quantile.md#quantile).
:::
**Returned value**
@ -47,8 +48,9 @@ Otherwise, the result of the calculation is rounded to the nearest multiple of 1
Type: `Float32`.
!!! note "Note"
If no values are passed to the function (when using `quantileTimingIf`), [NaN](../../../sql-reference/data-types/float.md#data_type-float-nan-inf) is returned. The purpose of this is to differentiate these cases from cases that result in zero. See [ORDER BY clause](../../../sql-reference/statements/select/order-by.md#select-order-by) for notes on sorting `NaN` values.
:::note
If no values are passed to the function (when using `quantileTimingIf`), [NaN](../../../sql-reference/data-types/float.md#data_type-float-nan-inf) is returned. The purpose of this is to differentiate these cases from cases that result in zero. See [ORDER BY clause](../../../sql-reference/statements/select/order-by.md#select-order-by) for notes on sorting `NaN` values.
:::
**Example**


@ -1,5 +1,5 @@
---
toc_priority: 145
sidebar_position: 145
---
# rankCorr {#agg_function-rankcorr}


@ -1,5 +1,5 @@
---
toc_priority: 220
sidebar_position: 220
---
# simpleLinearRegression {#simplelinearregression}


@ -1,5 +1,5 @@
---
toc_priority: 150
sidebar_position: 150
---
# skewPop {#skewpop}


@ -1,5 +1,5 @@
---
toc_priority: 151
sidebar_position: 151
---
# skewSamp {#skewsamp}


@ -1,6 +1,6 @@
---
toc_priority: 311
toc_title: sparkbar
sidebar_position: 311
sidebar_label: sparkbar
---
# sparkbar {#sparkbar}


@ -1,10 +1,11 @@
---
toc_priority: 30
sidebar_position: 30
---
# stddevPop {#stddevpop}
The result is equal to the square root of [varPop](../../../sql-reference/aggregate-functions/reference/varpop.md).
!!! note "Note"
This function uses a numerically unstable algorithm. If you need [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability) in calculations, use the `stddevPopStable` function. It works slower but provides a lower computational error.
:::note
This function uses a numerically unstable algorithm. If you need [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability) in calculations, use the `stddevPopStable` function. It works slower but provides a lower computational error.
:::


@ -1,10 +1,11 @@
---
toc_priority: 31
sidebar_position: 31
---
# stddevSamp {#stddevsamp}
The result is equal to the square root of [varSamp](../../../sql-reference/aggregate-functions/reference/varsamp.md).
!!! note "Note"
This function uses a numerically unstable algorithm. If you need [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability) in calculations, use the `stddevSampStable` function. It works slower but provides a lower computational error.
:::note
This function uses a numerically unstable algorithm. If you need [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability) in calculations, use the `stddevSampStable` function. It works slower but provides a lower computational error.
:::


@ -1,5 +1,5 @@
---
toc_priority: 221
sidebar_position: 221
---
# stochasticLinearRegression {#agg_functions-stochasticlinearregression}


@ -1,5 +1,5 @@
---
toc_priority: 222
sidebar_position: 222
---
# stochasticLogisticRegression {#agg_functions-stochasticlogisticregression}


@ -1,6 +1,6 @@
---
toc_priority: 300
toc_title: studentTTest
sidebar_position: 300
sidebar_label: studentTTest
---
# studentTTest {#studentttest}


@ -1,5 +1,5 @@
---
toc_priority: 4
sidebar_position: 4
---
# sum {#agg_function-sum}


@ -1,5 +1,5 @@
---
toc_priority: 144
sidebar_position: 144
---
# sumCount {#agg_function-sumCount}


@ -1,5 +1,5 @@
---
toc_priority: 145
sidebar_position: 145
---
# sumKahan {#agg_function-sumKahan}


@ -1,5 +1,5 @@
---
toc_priority: 141
sidebar_position: 141
---
# sumMap {#agg_functions-summap}


@ -1,5 +1,5 @@
---
toc_priority: 140
sidebar_position: 140
---
# sumWithOverflow {#sumwithoverflowx}


@ -1,5 +1,5 @@
---
toc_priority: 108
sidebar_position: 108
---
# topK {#topk}


@ -1,5 +1,5 @@
---
toc_priority: 109
sidebar_position: 109
---
# topKWeighted {#topkweighted}


@ -1,5 +1,5 @@
---
toc_priority: 190
sidebar_position: 190
---
# uniq {#agg_function-uniq}


@ -1,5 +1,5 @@
---
toc_priority: 192
sidebar_position: 192
---
# uniqCombined {#agg_function-uniqcombined}
@ -34,8 +34,9 @@ Function:
- Provides the result deterministically (it does not depend on the query processing order).
!!! note "Note"
Since it uses a 32-bit hash for non-`String` types, the result has a very high error for cardinalities significantly larger than `UINT_MAX` (the error rises quickly after a few tens of billions of distinct values). In this case, use [uniqCombined64](../../../sql-reference/aggregate-functions/reference/uniqcombined64.md#agg_function-uniqcombined64).
:::note
Since it uses a 32-bit hash for non-`String` types, the result has a very high error for cardinalities significantly larger than `UINT_MAX` (the error rises quickly after a few tens of billions of distinct values). In this case, use [uniqCombined64](../../../sql-reference/aggregate-functions/reference/uniqcombined64.md#agg_function-uniqcombined64).
:::
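A hedged sketch of switching to the 64-bit variant recommended above:
``` sql
SELECT uniqCombined(number), uniqCombined64(number) FROM numbers(1000000);
```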
Compared to the [uniq](../../../sql-reference/aggregate-functions/reference/uniq.md#agg_function-uniq) function, the `uniqCombined`:


@ -1,5 +1,5 @@
---
toc_priority: 193
sidebar_position: 193
---
# uniqCombined64 {#agg_function-uniqcombined64}


@ -1,5 +1,5 @@
---
toc_priority: 191
sidebar_position: 191
---
# uniqExact {#agg_function-uniqexact}


@ -1,5 +1,5 @@
---
toc_priority: 194
sidebar_position: 194
---
# uniqHLL12 {#agg_function-uniqhll12}


@ -1,5 +1,5 @@
---
toc_priority: 195
sidebar_position: 195
---
# uniqTheta {#agg_function-uniqthetasketch}


@ -1,5 +1,5 @@
---
toc_priority: 32
sidebar_position: 32
---
# varPop(x) {#varpopx}
@ -8,5 +8,6 @@ Calculates the amount `Σ((x - x̅)^2) / n`, where `n` is the sample size and `x
In other words, dispersion for a set of values. Returns `Float64`.
!!! note "Note"
This function uses a numerically unstable algorithm. If you need [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability) in calculations, use the `varPopStable` function. It works slower but provides a lower computational error.
:::note
This function uses a numerically unstable algorithm. If you need [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability) in calculations, use the `varPopStable` function. It works slower but provides a lower computational error.
:::
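A minimal sketch: the population variance of the integers 0 through 9 (mean 4.5), which evaluates to 8.25:
``` sql
SELECT varPop(number) FROM numbers(10);
-- varPopStable(number) is the slower, numerically stable variant.
```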


@ -1,5 +1,5 @@
---
toc_priority: 33
sidebar_position: 33
---
# varSamp {#varsamp}
@ -10,5 +10,6 @@ It represents an unbiased estimate of the variance of a random variable if passe
Returns `Float64`. When `n <= 1`, returns `+∞`.
!!! note "Note"
This function uses a numerically unstable algorithm. If you need [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability) in calculations, use the `varSampStable` function. It works slower but provides a lower computational error.
:::note
This function uses a numerically unstable algorithm. If you need [numerical stability](https://en.wikipedia.org/wiki/Numerical_stability) in calculations, use the `varSampStable` function. It works slower but provides a lower computational error.
:::


@ -1,6 +1,6 @@
---
toc_priority: 301
toc_title: welchTTest
sidebar_position: 301
sidebar_label: welchTTest
---
# welchTTest {#welchttest}


@ -1,12 +1,13 @@
---
toc_priority: 40
toc_title: ANSI Compatibility
sidebar_position: 40
sidebar_label: ANSI Compatibility
---
# ANSI SQL Compatibility of ClickHouse SQL Dialect {#ansi-sql-compatibility-of-clickhouse-sql-dialect}
!!! note "Note"
This article relies on Table 38, “Feature taxonomy and definition for mandatory features”, Annex F of [ISO/IEC CD 9075-2:2011](https://www.iso.org/obp/ui/#iso:std:iso-iec:9075:-2:ed-4:v1:en:sec:8).
:::note
This article relies on Table 38, “Feature taxonomy and definition for mandatory features”, Annex F of [ISO/IEC CD 9075-2:2011](https://www.iso.org/obp/ui/#iso:std:iso-iec:9075:-2:ed-4:v1:en:sec:8).
:::
## Differences in Behaviour {#differences-in-behaviour}


@ -1,6 +1,6 @@
---
toc_priority: 53
toc_title: AggregateFunction
sidebar_position: 53
sidebar_label: AggregateFunction
---
# AggregateFunction {#data-type-aggregatefunction}


@ -1,6 +1,6 @@
---
toc_priority: 52
toc_title: Array(T)
sidebar_position: 52
sidebar_label: Array(T)
---
# Array(t) {#data-type-array}


@ -1,6 +1,6 @@
---
toc_priority: 43
toc_title: Boolean
sidebar_position: 43
sidebar_label: Boolean
---
# Boolean Values {#boolean-values}


@ -1,6 +1,6 @@
---
toc_priority: 47
toc_title: Date
sidebar_position: 47
sidebar_label: Date
---
# Date {#data_type-date}


@ -1,6 +1,6 @@
---
toc_priority: 48
toc_title: Date32
sidebar_position: 48
sidebar_label: Date32
---
# Date32 {#data_type-datetime32}


@ -1,6 +1,6 @@
---
toc_priority: 48
toc_title: DateTime
sidebar_position: 48
sidebar_label: DateTime
---
# Datetime {#data_type-datetime}


@ -1,6 +1,6 @@
---
toc_priority: 49
toc_title: DateTime64
sidebar_position: 49
sidebar_label: DateTime64
---
# Datetime64 {#data_type-datetime64}


@ -1,6 +1,6 @@
---
toc_priority: 42
toc_title: Decimal
sidebar_position: 42
sidebar_label: Decimal
---
# Decimal(P, S), Decimal32(S), Decimal64(S), Decimal128(S), Decimal256(S) {#decimal}


@ -1,7 +1,6 @@
---
toc_folder_title: Domains
toc_priority: 56
toc_title: Overview
sidebar_position: 56
sidebar_label: Domains
---
# Domains {#domains}


@ -1,6 +1,6 @@
---
toc_priority: 59
toc_title: IPv4
sidebar_position: 59
sidebar_label: IPv4
---
## IPv4 {#ipv4}


@ -1,6 +1,6 @@
---
toc_priority: 60
toc_title: IPv6
sidebar_position: 60
sidebar_label: IPv6
---
## IPv6 {#ipv6}


@ -1,6 +1,6 @@
---
toc_priority: 50
toc_title: Enum
sidebar_position: 50
sidebar_label: Enum
---
# Enum {#enum}

Some files were not shown because too many files have changed in this diff.