Mirror of https://github.com/ClickHouse/ClickHouse.git
Synced 2024-11-10 01:25:21 +00:00

Commit 9bd9c3d1d1: "Fix some typos in docs"
Parent 13f03ddb65
@@ -138,7 +138,7 @@ It's important to name tests correctly, so one could turn some tests subset off

 | Tester flag| What should be in test name | When flag should be added |
 |---|---|---|---|
-| `--[no-]zookeeper`| "zookeeper" or "replica" | Test uses tables from ReplicatedMergeTree family |
+| `--[no-]zookeeper`| "zookeeper" or "replica" | Test uses tables from `ReplicatedMergeTree` family |
 | `--[no-]shard` | "shard" or "distributed" or "global"| Test using connections to 127.0.0.2 or similar |
 | `--[no-]long` | "long" or "deadlock" or "race" | Test runs longer than 60 seconds |
@@ -5,7 +5,7 @@ sidebar_position: 62

 # Overview of ClickHouse Architecture

 ClickHouse is a true column-oriented DBMS. Data is stored by columns, and during the execution of arrays (vectors or chunks of columns).
 Whenever possible, operations are dispatched on arrays, rather than on individual values. It is called “vectorized query execution” and it helps lower the cost of actual data processing.

 > This idea is nothing new. It dates back to the `APL` (A programming language, 1957) and its descendants: `A +` (APL dialect), `J` (1990), `K` (1993), and `Q` (programming language from Kx Systems, 2003). Array programming is used in scientific data processing. Neither is this idea something new in relational databases: for example, it is used in the `VectorWise` system (also known as Actian Vector Analytic Database by Actian Corporation).
@@ -155,7 +155,7 @@ The server initializes the `Context` class with the necessary environment for qu

 We maintain full backward and forward compatibility for the server TCP protocol: old clients can talk to new servers, and new clients can talk to old servers. But we do not want to maintain it eternally, and we are removing support for old versions after about one year.

 :::note
 For most external applications, we recommend using the HTTP interface because it is simple and easy to use. The TCP protocol is more tightly linked to internal data structures: it uses an internal format for passing blocks of data, and it uses custom framing for compressed data. We haven’t released a C library for that protocol because it requires linking most of the ClickHouse codebase, which is not practical.
 :::
@@ -189,7 +189,7 @@ IO thread pool is implemented as a plain `ThreadPool` accessible via `IOThreadPo

 For periodic task execution there is `BackgroundSchedulePool` class. You can register tasks using `BackgroundSchedulePool::TaskHolder` objects and the pool ensures that no task runs two jobs at the same time. It also allows you to postpone task execution to a specific instant in the future or temporarily deactivate task. Global `Context` provides a few instances of this class for different purposes. For general purpose tasks `Context::getSchedulePool()` is used.

-There are also specialized thread pools for preemptable tasks. Such `IExecutableTask` task can be split into ordered sequence of jobs, called steps. To schedule these tasks in a manner allowing short tasks to be prioritied over long ones `MergeTreeBackgroundExecutor` is used. As name suggests it is used for background MergeTree related operations such as merges, mutations, fetches and moves. Pool instances are available using `Context::getCommonExecutor()` and other similar methods.
+There are also specialized thread pools for preemptable tasks. Such `IExecutableTask` task can be split into ordered sequence of jobs, called steps. To schedule these tasks in a manner allowing short tasks to be prioritized over long ones `MergeTreeBackgroundExecutor` is used. As name suggests it is used for background MergeTree related operations such as merges, mutations, fetches and moves. Pool instances are available using `Context::getCommonExecutor()` and other similar methods.

 No matter what pool is used for a job, at start `ThreadStatus` instance is created for this job. It encapsulates all per-thread information: thread id, query id, performance counters, resource consumption and many other useful data. Job can access it via thread local pointer by `CurrentThread::get()` call, so we do not need to pass it to every function.
@@ -201,7 +201,7 @@ Servers in a cluster setup are mostly independent. You can create a `Distributed

 Things become more complicated when you have subqueries in IN or JOIN clauses, and each of them uses a `Distributed` table. We have different strategies for the execution of these queries.

-There is no global query plan for distributed query execution. Each node has its local query plan for its part of the job. We only have simple one-pass distributed query execution: we send queries for remote nodes and then merge the results. But this is not feasible for complicated queries with high cardinality GROUP BYs or with a large amount of temporary data for JOIN. In such cases, we need to “reshuffle” data between servers, which requires additional coordination. ClickHouse does not support that kind of query execution, and we need to work on it.
+There is no global query plan for distributed query execution. Each node has its local query plan for its part of the job. We only have simple one-pass distributed query execution: we send queries for remote nodes and then merge the results. But this is not feasible for complicated queries with high cardinality `GROUP BY`s or with a large amount of temporary data for JOIN. In such cases, we need to “reshuffle” data between servers, which requires additional coordination. ClickHouse does not support that kind of query execution, and we need to work on it.

 ## Merge Tree {#merge-tree}
@@ -231,7 +231,7 @@ Replication is physical: only compressed parts are transferred between nodes, no

 Besides, each replica stores its state in ZooKeeper as the set of parts and its checksums. When the state on the local filesystem diverges from the reference state in ZooKeeper, the replica restores its consistency by downloading missing and broken parts from other replicas. When there is some unexpected or broken data in the local filesystem, ClickHouse does not remove it, but moves it to a separate directory and forgets it.

 :::note
 The ClickHouse cluster consists of independent shards, and each shard consists of replicas. The cluster is **not elastic**, so after adding a new shard, data is not rebalanced between shards automatically. Instead, the cluster load is supposed to be adjusted to be uneven. This implementation gives you more control, and it is ok for relatively small clusters, such as tens of nodes. But for clusters with hundreds of nodes that we are using in production, this approach becomes a significant drawback. We should implement a table engine that spans across the cluster with dynamically replicated regions that could be split and balanced between clusters automatically.
 :::
@@ -4,7 +4,7 @@ sidebar_label: Build on Mac OS X
 description: How to build ClickHouse on Mac OS X
 ---

 # How to Build ClickHouse on Mac OS X

 :::info You don't have to build ClickHouse yourself!
 You can install pre-built ClickHouse as described in [Quick Start](https://clickhouse.com/#quick-start). Follow **macOS (Intel)** or **macOS (Apple silicon)** installation instructions.
@@ -20,9 +20,9 @@ It is also possible to compile with Apple's XCode `apple-clang` or Homebrew's `g

 First install [Homebrew](https://brew.sh/)

-## For Apple's Clang (discouraged): Install Xcode and Command Line Tools {#install-xcode-and-command-line-tools}
+## For Apple's Clang (discouraged): Install XCode and Command Line Tools {#install-xcode-and-command-line-tools}

-Install the latest [Xcode](https://apps.apple.com/am/app/xcode/id497799835?mt=12) from App Store.
+Install the latest [XCode](https://apps.apple.com/am/app/xcode/id497799835?mt=12) from App Store.

 Open it at least once to accept the end-user license agreement and automatically install the required components.
@@ -62,7 +62,7 @@ cmake --build build
 # The resulting binary will be created at: build/programs/clickhouse
 ```

-To build using Xcode's native AppleClang compiler in Xcode IDE (this option is only for development builds and workflows, and is **not recommended** unless you know what you are doing):
+To build using XCode native AppleClang compiler in XCode IDE (this option is only for development builds and workflows, and is **not recommended** unless you know what you are doing):

 ``` bash
 cd ClickHouse
@@ -71,7 +71,7 @@ mkdir build
 cd build
 XCODE_IDE=1 ALLOW_APPLECLANG=1 cmake -G Xcode -DCMAKE_BUILD_TYPE=Debug -DENABLE_JEMALLOC=OFF ..
 cmake --open .
-# ...then, in Xcode IDE select ALL_BUILD scheme and start the building process.
+# ...then, in XCode IDE select ALL_BUILD scheme and start the building process.
 # The resulting binary will be created at: ./programs/Debug/clickhouse
 ```
@@ -93,7 +93,7 @@ cmake --build build

 If you intend to run `clickhouse-server`, make sure to increase the system’s maxfiles variable.

 :::note
 You’ll need to use sudo.
 :::
@@ -130,7 +130,7 @@ Here is an example of how to install the new `cmake` from the official website:
 ```
 wget https://github.com/Kitware/CMake/releases/download/v3.22.2/cmake-3.22.2-linux-x86_64.sh
 chmod +x cmake-3.22.2-linux-x86_64.sh
 ./cmake-3.22.2-linux-x86_64.sh
 export PATH=/home/milovidov/work/cmake-3.22.2-linux-x86_64/bin/:${PATH}
 hash cmake
 ```
@@ -163,7 +163,7 @@ ClickHouse is available in pre-built binaries and packages. Binaries are portabl

 They are built for stable, prestable and testing releases as long as for every commit to master and for every pull request.

-To find the freshest build from `master`, go to [commits page](https://github.com/ClickHouse/ClickHouse/commits/master), click on the first green checkmark or red cross near commit, and click to the “Details” link right after “ClickHouse Build Check”.
+To find the freshest build from `master`, go to [commits page](https://github.com/ClickHouse/ClickHouse/commits/master), click on the first green check mark or red cross near commit, and click to the “Details” link right after “ClickHouse Build Check”.

 ## Faster builds for development: Split build configuration {#split-build}
@@ -19,7 +19,7 @@ cmake .. \

 ## CMake files types

-1. ClickHouse's source CMake files (located in the root directory and in /src).
+1. ClickHouse source CMake files (located in the root directory and in /src).
 2. Arch-dependent CMake files (located in /cmake/*os_name*).
 3. Libraries finders (search for contrib libraries, located in /contrib/*/CMakeLists.txt).
 4. Contrib build CMake files (used instead of libraries' own CMake files, located in /cmake/modules)
@@ -456,7 +456,7 @@ option(ENABLE_TESTS "Provide unit_test_dbms target with Google.test unit tests"

 #### If the option's state could produce unwanted (or unusual) result, explicitly warn the user.

-Suppose you have an option that may strip debug symbols from the ClickHouse's part.
+Suppose you have an option that may strip debug symbols from the ClickHouse part.
 This can speed up the linking process, but produces a binary that cannot be debugged.
 In that case, prefer explicitly raising a warning telling the developer that he may be doing something wrong.
 Also, such options should be disabled if applies.
@@ -57,7 +57,7 @@ You have to specify a changelog category for your change (e.g., Bug Fix), and
 write a user-readable message describing the change for [CHANGELOG.md](../whats-new/changelog/)


-## Push To Dockerhub
+## Push To DockerHub

 Builds docker images used for build and tests, then pushes them to DockerHub.
@@ -118,7 +118,7 @@ Builds ClickHouse in various configurations for use in further steps. You have t
 - **Compiler**: `gcc-9` or `clang-10` (or `clang-10-xx` for other architectures e.g. `clang-10-freebsd`).
 - **Build type**: `Debug` or `RelWithDebInfo` (cmake).
 - **Sanitizer**: `none` (without sanitizers), `address` (ASan), `memory` (MSan), `undefined` (UBSan), or `thread` (TSan).
-- **Splitted** `splitted` is a [split build](../development/build.md#split-build)
+- **Split** `splitted` is a [split build](../development/build.md#split-build)
 - **Status**: `success` or `fail`
 - **Build log**: link to the building and files copying log, useful when build failed.
 - **Build time**.
@@ -96,9 +96,9 @@ SELECT library_name, license_type, license_path FROM system.licenses ORDER BY li

 ## Adding new third-party libraries and maintaining patches in third-party libraries {#adding-third-party-libraries}

-1. Each third-party libary must reside in a dedicated directory under the `contrib/` directory of the ClickHouse repository. Avoid dumps/copies of external code, instead use Git's submodule feature to pull third-party code from an external upstream repository.
-2. Submodules are listed in `.gitmodule`. If the external library can be used as-is, you may reference the upstream repository directly. Otherwise, i.e. the external libary requires patching/customization, create a fork of the official repository in the [Clickhouse organization in GitHub](https://github.com/ClickHouse).
+1. Each third-party library must reside in a dedicated directory under the `contrib/` directory of the ClickHouse repository. Avoid dumps/copies of external code, instead use Git submodule feature to pull third-party code from an external upstream repository.
+2. Submodules are listed in `.gitmodule`. If the external library can be used as-is, you may reference the upstream repository directly. Otherwise, i.e. the external library requires patching/customization, create a fork of the official repository in the [Clickhouse organization in GitHub](https://github.com/ClickHouse).
 3. In the latter case, create a branch with `clickhouse/` prefix from the branch you want to integrate, e.g. `clickhouse/master` (for `master`) or `clickhouse/release/vX.Y.Z` (for a `release/vX.Y.Z` tag). The purpose of this branch is to isolate customization of the library from upstream work. For example, pulls from the upstream repository into the fork will leave all `clickhouse/` branches unaffected. Submodules in `contrib/` must only track `clickhouse/` branches of forked third-party repositories.
 4. To patch a fork of a third-party library, create a dedicated branch with `clickhouse/` prefix in the fork, e.g. `clickhouse/fix-some-desaster`. Finally, merge the patch branch into the custom tracking branch (e.g. `clickhouse/master` or `clickhouse/release/vX.Y.Z`) using a PR.
-5. Always create patches of third-party libraries with the official repository in mind. Once a PR of a patch branch to the `clickhouse/` branch in the fork repository is done and the submodule version in ClickHouse's official repository is bumped, consider opening another PR from the patch branch to the upstream library repository. This ensures, that 1) the contribution has more than a single use case and importance, 2) others will also benefit from it, 3) the change will not remain a maintenance burden solely on ClickHouse developers.
+5. Always create patches of third-party libraries with the official repository in mind. Once a PR of a patch branch to the `clickhouse/` branch in the fork repository is done and the submodule version in ClickHouse official repository is bumped, consider opening another PR from the patch branch to the upstream library repository. This ensures, that 1) the contribution has more than a single use case and importance, 2) others will also benefit from it, 3) the change will not remain a maintenance burden solely on ClickHouse developers.
 9. To update a submodule with changes in the upstream repository, first merge upstream `master` (or a new `versionX.Y.Z` tag) into the `clickhouse`-tracking branch in the fork repository. Conflicts with patches/customization will need to be resolved in this merge (see Step 4.). Once the merge is done, bump the submodule in ClickHouse to point to the new hash in the fork.
@@ -70,7 +70,7 @@ You can also clone the repository via https protocol:

 This, however, will not let you send your changes to the server. You can still use it temporarily and add the SSH keys later replacing the remote address of the repository with `git remote` command.

-You can also add original ClickHouse repo’s address to your local repository to pull updates from there:
+You can also add original ClickHouse repo address to your local repository to pull updates from there:

 git remote add upstream git@github.com:ClickHouse/ClickHouse.git
@@ -269,7 +269,7 @@ Developing ClickHouse often requires loading realistic datasets. It is particula

 Navigate to your fork repository in GitHub’s UI. If you have been developing in a branch, you need to select that branch. There will be a “Pull request” button located on the screen. In essence, this means “create a request for accepting my changes into the main repository”.

-A pull request can be created even if the work is not completed yet. In this case please put the word “WIP” (work in progress) at the beginning of the title, it can be changed later. This is useful for cooperative reviewing and discussion of changes as well as for running all of the available tests. It is important that you provide a brief description of your changes, it will later be used for generating release changelogs.
+A pull request can be created even if the work is not completed yet. In this case please put the word “WIP” (work in progress) at the beginning of the title, it can be changed later. This is useful for cooperative reviewing and discussion of changes as well as for running all of the available tests. It is important that you provide a brief description of your changes, it will later be used for generating release changelog.

 Testing will commence as soon as ClickHouse employees label your PR with a tag “can be tested”. The results of some first checks (e.g. code style) will come in within several minutes. Build check results will arrive within half an hour. And the main set of tests will report itself within an hour.
@@ -2,7 +2,7 @@

 Rust library integration will be described based on BLAKE3 hash-function integration.

-The first step is forking a library and making neccessary changes for Rust and C/C++ compatibility.
+The first step is forking a library and making necessary changes for Rust and C/C++ compatibility.

 After forking library repository you need to change target settings in Cargo.toml file. Firstly, you need to switch build to static library. Secondly, you need to add cbindgen crate to the crate list. We will use it later to generate C-header automatically.
@@ -51,7 +51,7 @@ pub unsafe extern "C" fn blake3_apply_shim(
 }
 ```

-This method gets C-compatible string, its size and output string pointer as input. Then, it converts C-compatible inputs into types that are used by actual library methods and calls them. After that, it should convert library methods' outputs back into C-compatible type. In that particular case library supported direct writing into pointer by method fill(), so the convertion was not needed. The main advice here is to create less methods, so you will need to do less convertions on each method call and won't create much overhead.
+This method gets C-compatible string, its size and output string pointer as input. Then, it converts C-compatible inputs into types that are used by actual library methods and calls them. After that, it should convert library methods' outputs back into C-compatible type. In that particular case library supported direct writing into pointer by method fill(), so the conversion was not needed. The main advice here is to create less methods, so you will need to do less conversions on each method call and won't create much overhead.

 Also, you should use attribute #[no_mangle] and extern "C" for every C-compatible attribute. Without it library can compile incorrectly and cbindgen won't launch header autogeneration.
@@ -4,7 +4,7 @@ sidebar_label: C++ Guide
 description: A list of recommendations regarding coding style, naming convention, formatting and more
 ---

 # How to Write C++ Code

 ## General Recommendations {#general-recommendations}
@@ -196,7 +196,7 @@ std::cerr << static_cast<int>(c) << std::endl;

 The same is true for small methods in any classes or structs.

-For templated classes and structs, do not separate the method declarations from the implementation (because otherwise they must be defined in the same translation unit).
+For template classes and structs, do not separate the method declarations from the implementation (because otherwise they must be defined in the same translation unit).

 **31.** You can wrap lines at 140 characters, instead of 80.
@@ -285,7 +285,7 @@ Note: You can use Doxygen to generate documentation from these comments. But Dox
 /// WHAT THE FAIL???
 ```
 
-**14.** Do not use comments to make delimeters.
+**14.** Do not use comments to make delimiters.
 
 ``` cpp
 ///******************************************************
@@ -4,7 +4,7 @@ sidebar_label: Testing
 description: Most of ClickHouse features can be tested with functional tests and they are mandatory to use for every change in ClickHouse code that can be tested that way.
 ---
 
 # ClickHouse Testing
 
 ## Functional Tests
 
@@ -228,7 +228,7 @@ Our Security Team did some basic overview of ClickHouse capabilities from the se
 
 We run `clang-tidy` on per-commit basis. `clang-static-analyzer` checks are also enabled. `clang-tidy` is also used for some style checks.
 
 We have evaluated `clang-tidy`, `Coverity`, `cppcheck`, `PVS-Studio`, `tscancode`, `CodeQL`. You will find instructions for usage in `tests/instructions/` directory.
 
 If you use `CLion` as an IDE, you can leverage some `clang-tidy` checks out of the box.
 
@@ -244,7 +244,7 @@ In debug build we also involve a customization of libc that ensures that no "har
 
 Debug assertions are used extensively.
 
-In debug build, if exception with "logical error" code (implies a bug) is being thrown, the program is terminated prematurally. It allows to use exceptions in release build but make it an assertion in debug build.
+In debug build, if exception with "logical error" code (implies a bug) is being thrown, the program is terminated prematurely. It allows to use exceptions in release build but make it an assertion in debug build.
 
 Debug version of jemalloc is used for debug builds.
 Debug version of libc++ is used for debug builds.
@@ -253,7 +253,7 @@ Debug version of libc++ is used for debug builds.
 
 Data stored on disk is checksummed. Data in MergeTree tables is checksummed in three ways simultaneously* (compressed data blocks, uncompressed data blocks, the total checksum across blocks). Data transferred over network between client and server or between servers is also checksummed. Replication ensures bit-identical data on replicas.
 
-It is required to protect from faulty hardware (bit rot on storage media, bit flips in RAM on server, bit flips in RAM of network controller, bit flips in RAM of network switch, bit flips in RAM of client, bit flips on the wire). Note that bit flips are common and likely to occur even for ECC RAM and in presense of TCP checksums (if you manage to run thousands of servers processing petabytes of data each day). [See the video (russian)](https://www.youtube.com/watch?v=ooBAQIe0KlQ).
+It is required to protect from faulty hardware (bit rot on storage media, bit flips in RAM on server, bit flips in RAM of network controller, bit flips in RAM of network switch, bit flips in RAM of client, bit flips on the wire). Note that bit flips are common and likely to occur even for ECC RAM and in presence of TCP checksums (if you manage to run thousands of servers processing petabytes of data each day). [See the video (russian)](https://www.youtube.com/watch?v=ooBAQIe0KlQ).
 
 ClickHouse provides diagnostics that will help ops engineers to find faulty hardware.
 
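Why checksums catch the bit flips this hunk talks about can be shown with a toy 64-bit FNV-1a hash. This is only an illustration of the principle; it is not the checksum ClickHouse actually uses.

```rust
/// Toy 64-bit FNV-1a hash; stands in for a real block checksum.
fn fnv1a(data: &[u8]) -> u64 {
    let mut h: u64 = 0xcbf29ce484222325;
    for &b in data {
        h ^= b as u64;
        h = h.wrapping_mul(0x100000001b3);
    }
    h
}

fn main() {
    let block = b"example compressed block".to_vec();
    let checksum = fnv1a(&block);

    // Simulate a single bit flip, as caused by faulty RAM or a bad wire.
    let mut corrupted = block.clone();
    corrupted[3] ^= 0b0000_0100;

    // The stored checksum no longer matches, so the corruption is detected.
    assert_ne!(fnv1a(&corrupted), checksum);
}
```

A single flipped bit changes the hash with overwhelming probability, which is exactly what the per-block checksums rely on.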
@@ -190,8 +190,7 @@ sudo ./clickhouse install
 
 ### From Precompiled Binaries for Non-Standard Environments {#from-binaries-non-linux}
 
-For non-Linux operating systems and for AArch64 CPU arhitecture, ClickHouse builds are provided as a cross-compiled binary from the latest commit of the `master` branch (with a few hours delay).
+For non-Linux operating systems and for AArch64 CPU architecture, ClickHouse builds are provided as a cross-compiled binary from the latest commit of the `master` branch (with a few hours delay).
 
-
 - [MacOS x86_64](https://builds.clickhouse.com/master/macos/clickhouse)
 ```bash
@@ -119,7 +119,7 @@ Dates with times are written in the format `YYYY-MM-DD hh:mm:ss` and parsed in t
 This all occurs in the system time zone at the time the client or server starts (depending on which of them formats data). For dates with times, daylight saving time is not specified. So if a dump has times during daylight saving time, the dump does not unequivocally match the data, and parsing will select one of the two times.
 During a read operation, incorrect dates and dates with times can be parsed with natural overflow or as null dates and times, without an error message.
 
-As an exception, parsing dates with times is also supported in Unix timestamp format, if it consists of exactly 10 decimal digits. The result is not time zone-dependent. The formats YYYY-MM-DD hh:mm:ss and NNNNNNNNNN are differentiated automatically.
+As an exception, parsing dates with times is also supported in Unix timestamp format, if it consists of exactly 10 decimal digits. The result is not time zone-dependent. The formats `YYYY-MM-DD hh:mm:ss` and `NNNNNNNNNN` are differentiated automatically.
 
 Strings are output with backslash-escaped special characters. The following escape sequences are used for output: `\b`, `\f`, `\r`, `\n`, `\t`, `\0`, `\'`, `\\`. Parsing also supports the sequences `\a`, `\v`, and `\xHH` (hex escape sequences) and any `\c` sequences, where `c` is any character (these sequences are converted to `c`). Thus, reading data supports formats where a line feed can be written as `\n` or `\`, or as a line feed. For example, the string `Hello world` with a line feed between the words instead of space can be parsed in any of the following variations:
 
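The "differentiated automatically" rule in this hunk boils down to a shape check: a value is taken as a Unix timestamp only when it is exactly 10 decimal digits. A hedged sketch of that rule (not ClickHouse's actual parser):

```rust
/// True if `s` has the `NNNNNNNNNN` shape: exactly 10 ASCII decimal digits.
fn looks_like_unix_timestamp(s: &str) -> bool {
    s.len() == 10 && s.bytes().all(|b| b.is_ascii_digit())
}

fn main() {
    assert!(looks_like_unix_timestamp("1546300800"));           // NNNNNNNNNN form
    assert!(!looks_like_unix_timestamp("2019-01-01 00:00:00")); // YYYY-MM-DD hh:mm:ss form
    assert!(!looks_like_unix_timestamp("123"));                 // too short
}
```

Anything containing separators (or of a different length) falls through to the `YYYY-MM-DD hh:mm:ss` path.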
@@ -1363,9 +1363,9 @@ Columns `name` ([String](../sql-reference/data-types/string.md)) and `value` (nu
 Rows may optionally contain `help` ([String](../sql-reference/data-types/string.md)) and `timestamp` (number).
 Column `type` ([String](../sql-reference/data-types/string.md)) is either `counter`, `gauge`, `histogram`, `summary`, `untyped` or empty.
 Each metric value may also have some `labels` ([Map(String, String)](../sql-reference/data-types/map.md)).
-Several consequent rows may refer to the one metric with different lables. The table should be sorted by metric name (e.g., with `ORDER BY name`).
+Several consequent rows may refer to the one metric with different labels. The table should be sorted by metric name (e.g., with `ORDER BY name`).
 
-There's special requirements for labels for `histogram` and `summary`, see [Prometheus doc](https://prometheus.io/docs/instrumenting/exposition_formats/#histograms-and-summaries) for the details. Special rules applied to row with labels `{'count':''}` and `{'sum':''}`, they'll be convered to `<metric_name>_count` and `<metric_name>_sum` respectively.
+There's special requirements for labels for `histogram` and `summary`, see [Prometheus doc](https://prometheus.io/docs/instrumenting/exposition_formats/#histograms-and-summaries) for the details. Special rules applied to row with labels `{'count':''}` and `{'sum':''}`, they'll be converted to `<metric_name>_count` and `<metric_name>_sum` respectively.
 
 **Example:**
 
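The `{'count':''}`/`{'sum':''}` renaming rule in this hunk can be sketched as a small mapping function. This is an assumption-level illustration of the documented behavior, not the exporter's real code:

```rust
use std::collections::BTreeMap;

/// Sketch: a row whose labels contain an empty-valued "count" or "sum" key
/// is exported as `<metric_name>_count` / `<metric_name>_sum`.
fn exported_name(metric: &str, labels: &BTreeMap<String, String>) -> String {
    if labels.get("count").map_or(false, |v| v.is_empty()) {
        format!("{}_count", metric)
    } else if labels.get("sum").map_or(false, |v| v.is_empty()) {
        format!("{}_sum", metric)
    } else {
        metric.to_string()
    }
}

fn main() {
    let mut labels = BTreeMap::new();
    labels.insert("count".to_string(), String::new());
    assert_eq!(exported_name("request_duration", &labels), "request_duration_count");
}
```

Ordinary label maps leave the metric name untouched; only the two special empty-valued keys trigger the suffix.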
@@ -1845,7 +1845,7 @@ When working with the `Regexp` format, you can use the following settings:
 - Quoted (similarly to [Values](#data-format-values))
 - Raw (extracts subpatterns as a whole, no escaping rules, similarly to [TSVRaw](#tabseparatedraw))
 
-- `format_regexp_skip_unmatched` — [UInt8](../sql-reference/data-types/int-uint.md). Defines the need to throw an exeption in case the `format_regexp` expression does not match the imported data. Can be set to `0` or `1`.
+- `format_regexp_skip_unmatched` — [UInt8](../sql-reference/data-types/int-uint.md). Defines the need to throw an exception in case the `format_regexp` expression does not match the imported data. Can be set to `0` or `1`.
 
 **Usage**
 
@@ -5,7 +5,7 @@ sidebar_label: PostgreSQL Interface
 
 # PostgreSQL Interface
 
-ClickHouse supports the PostgreSQL wire protocol, which allows you to use Postgres clients to connect to ClickHouse. In a sense, ClickHouse can pretend to be a PostgreSQL instance - allowing you to connect a PostgreSQL client application to ClickHouse that is not already directy supported by ClickHouse (for example, Amazon Redshift).
+ClickHouse supports the PostgreSQL wire protocol, which allows you to use Postgres clients to connect to ClickHouse. In a sense, ClickHouse can pretend to be a PostgreSQL instance - allowing you to connect a PostgreSQL client application to ClickHouse that is not already directly supported by ClickHouse (for example, Amazon Redshift).
 
 To enable the PostgreSQL wire protocol, add the [postgresql_port](../operations/server-configuration-parameters/settings#server_configuration_parameters-postgresql_port) setting to your server's configuration file. For example, you could define the port in a new XML file in your `config.d` folder:
 
@@ -59,7 +59,7 @@ The PostgreSQL protocol currently only supports plain-text passwords.
 
 ## Using SSL
 
-If you have SSL/TLS configured on your ClickHouse instance, then `postgresql_port` will use the same settings (the port is shared for both secure and unsecure clients).
+If you have SSL/TLS configured on your ClickHouse instance, then `postgresql_port` will use the same settings (the port is shared for both secure and insecure clients).
 
 Each client has their own method of how to connect using SSL. The following command demonstrates how to pass in the certificates and key to securely connect `psql` to ClickHouse:
 
@@ -57,7 +57,7 @@ Substitutions can also be performed from ZooKeeper. To do this, specify the attr
 
 The `config.xml` file can specify a separate config with user settings, profiles, and quotas. The relative path to this config is set in the `users_config` element. By default, it is `users.xml`. If `users_config` is omitted, the user settings, profiles, and quotas are specified directly in `config.xml`.
 
-Users configuration can be splitted into separate files similar to `config.xml` and `config.d/`.
+Users configuration can be split into separate files similar to `config.xml` and `config.d/`.
 Directory name is defined as `users_config` setting without `.xml` postfix concatenated with `.d`.
 Directory `users.d` is used by default, as `users_config` defaults to `users.xml`.
 
@@ -94,7 +94,7 @@ Use at least a 10 GB network, if possible. 1 Gb will also work, but it will be m
 
 ## Huge Pages {#huge-pages}
 
-If you are using old Linux kernel, disable transparent huge pages. It interferes with memory allocators, which leads to significant performance degradation.
+If you are using old Linux kernel, disable transparent huge pages. It interferes with memory allocator, which leads to significant performance degradation.
 On newer Linux kernels transparent huge pages are alright.
 
 ``` bash
@@ -136,7 +136,7 @@ Do not change `minSessionTimeout` setting, large values may affect ClickHouse re
 
 With the default settings, ZooKeeper is a time bomb:
 
-> The ZooKeeper server won’t delete files from old snapshots and logs when using the default configuration (see autopurge), and this is the responsibility of the operator.
+> The ZooKeeper server won’t delete files from old snapshots and logs when using the default configuration (see `autopurge`), and this is the responsibility of the operator.
 
 This bomb must be defused.
 
@@ -3,7 +3,7 @@ sidebar_position: 46
 sidebar_label: Troubleshooting
 ---
 
 # Troubleshooting
 
 - [Installation](#troubleshooting-installation-errors)
 - [Connecting to the server](#troubleshooting-accepts-no-connections)
@@ -26,7 +26,7 @@ Possible issues:
 
 ### Server Is Not Running {#server-is-not-running}
 
-**Check if server is runnnig**
+**Check if server is running**
 
 Command:
 
@@ -4,7 +4,7 @@ sidebar_label: H3 Indexes
 
 # Functions for Working with H3 Indexes
 
-[H3](https://eng.uber.com/h3/) is a geographical indexing system where Earth’s surface divided into a grid of even hexagonal cells. This system is hierarchical, i. e. each hexagon on the top level ("parent") can be splitted into seven even but smaller ones ("children"), and so on.
+[H3](https://eng.uber.com/h3/) is a geographical indexing system where Earth’s surface divided into a grid of even hexagonal cells. This system is hierarchical, i. e. each hexagon on the top level ("parent") can be split into seven even but smaller ones ("children"), and so on.
 
 The level of the hierarchy is called `resolution` and can receive a value from `0` till `15`, where `0` is the `base` level with the largest and coarsest cells.
 
@@ -1398,4 +1398,4 @@ Result:
 │ [(37.42012867767779,-122.03773496427027),(37.33755608435299,-122.090428929044)] │
 └─────────────────────────────────────────────────────────────────────────────────┘
 ```
 [Original article](https://clickhouse.com/docs/en/sql-reference/functions/geo/h3) <!--hide-->
@@ -32,7 +32,7 @@ Integer value in the `Int8`, `Int16`, `Int32`, `Int64`, `Int128` or `Int256` dat
 
 Functions use [rounding towards zero](https://en.wikipedia.org/wiki/Rounding#Rounding_towards_zero), meaning they truncate fractional digits of numbers.
 
-The behavior of functions for the [NaN and Inf](../../sql-reference/data-types/float.md#data_type-float-nan-inf) arguments is undefined. Remember about [numeric convertions issues](#numeric-conversion-issues), when using the functions.
+The behavior of functions for the [NaN and Inf](../../sql-reference/data-types/float.md#data_type-float-nan-inf) arguments is undefined. Remember about [numeric conversions issues](#numeric-conversion-issues), when using the functions.
 
 **Example**
 
@@ -131,7 +131,7 @@ Integer value in the `UInt8`, `UInt16`, `UInt32`, `UInt64` or `UInt256` data typ
 
 Functions use [rounding towards zero](https://en.wikipedia.org/wiki/Rounding#Rounding_towards_zero), meaning they truncate fractional digits of numbers.
 
-The behavior of functions for negative agruments and for the [NaN and Inf](../../sql-reference/data-types/float.md#data_type-float-nan-inf) arguments is undefined. If you pass a string with a negative number, for example `'-32'`, ClickHouse raises an exception. Remember about [numeric convertions issues](#numeric-conversion-issues), when using the functions.
+The behavior of functions for negative agruments and for the [NaN and Inf](../../sql-reference/data-types/float.md#data_type-float-nan-inf) arguments is undefined. If you pass a string with a negative number, for example `'-32'`, ClickHouse raises an exception. Remember about [numeric conversions issues](#numeric-conversion-issues), when using the functions.
 
 **Example**
 
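"Rounding towards zero" in the hunks above is plain truncation. Rust's primitive float-to-int `as` cast has the same semantics, so it makes a quick, language-level illustration of the rounding mode (Rust semantics only, not ClickHouse code):

```rust
fn main() {
    // Truncation: fractional digits are dropped, moving the value toward zero.
    assert_eq!(2.9_f64 as i64, 2);
    assert_eq!(-2.9_f64 as i64, -2); // NOT -3: rounding toward zero, not floor
    assert_eq!(0.99_f64 as i64, 0);
}
```

Note the negative case: truncation and floor differ for negative inputs, which is the usual source of surprise.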
@@ -689,7 +689,7 @@ x::t
 
 - Converted value.
 
 :::note
 If the input value does not fit the bounds of the target type, the result overflows. For example, `CAST(-1, 'UInt8')` returns `255`.
 :::
 
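The overflow in the note above is ordinary two's-complement wrapping: the source bit pattern is reinterpreted modulo the target width. In Rust terms (illustrative, not ClickHouse code):

```rust
fn main() {
    // CAST(-1, 'UInt8') = 255: the bit pattern of -1 reinterpreted as unsigned.
    assert_eq!(-1_i32 as u8, 255);
    // Values beyond the target range wrap modulo 256 as well: 300 mod 256 = 44.
    assert_eq!(300_i32 as u8, 44);
}
```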
@@ -1433,7 +1433,7 @@ Result:
 
 Converts a `DateTime64` to a `Int64` value with fixed sub-second precision. Input value is scaled up or down appropriately depending on it precision.
 
 :::note
 The output value is a timestamp in UTC, not in the timezone of `DateTime64`.
 :::
 
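The "scaled up or down depending on its precision" above amounts to multiplying or dividing the tick count by a power of ten. A hypothetical helper sketching that arithmetic (`rescale` is not the actual function's implementation):

```rust
/// Rescale a fixed-point sub-second tick count between precisions.
/// E.g. 1_500 ticks at precision 3 (milliseconds) = 1_500_000 at precision 6.
fn rescale(value: i64, from_precision: u32, to_precision: u32) -> i64 {
    if to_precision >= from_precision {
        value * 10_i64.pow(to_precision - from_precision) // scale up
    } else {
        value / 10_i64.pow(from_precision - to_precision) // scale down
    }
}

fn main() {
    assert_eq!(rescale(1_500, 3, 6), 1_500_000); // ms -> µs
    assert_eq!(rescale(1_500_000, 6, 3), 1_500); // µs -> ms
}
```

Scaling down discards the extra sub-second digits, consistent with the fixed output precision.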