Merge remote-tracking branch 'blessed/master' into faster_debug_startup

Raúl Marín 2024-09-25 22:59:50 +02:00
commit e0c5037a9b
489 changed files with 12636 additions and 3934 deletions

.gitmodules

@@ -363,6 +363,12 @@
[submodule "contrib/double-conversion"]
path = contrib/double-conversion
url = https://github.com/ClickHouse/double-conversion.git
[submodule "contrib/mongo-cxx-driver"]
path = contrib/mongo-cxx-driver
url = https://github.com/ClickHouse/mongo-cxx-driver.git
[submodule "contrib/mongo-c-driver"]
path = contrib/mongo-c-driver
url = https://github.com/ClickHouse/mongo-c-driver.git
[submodule "contrib/numactl"]
path = contrib/numactl
url = https://github.com/ClickHouse/numactl.git


@@ -1,5 +1,6 @@
### Table of Contents
**[ClickHouse release v24.8 LTS, 2024-08-20](#243)**<br/>
**[ClickHouse release v24.9, 2024-09-26](#249)**<br/>
**[ClickHouse release v24.8 LTS, 2024-08-20](#248)**<br/>
**[ClickHouse release v24.7, 2024-07-30](#247)**<br/>
**[ClickHouse release v24.6, 2024-07-01](#246)**<br/>
**[ClickHouse release v24.5, 2024-05-30](#245)**<br/>
@@ -11,6 +12,174 @@
# 2024 Changelog
### <a id="249"></a> ClickHouse release 24.9, 2024-09-26
#### Backward Incompatible Change
* Do not allow explicitly specifying UUID when creating a table in Replicated database. Also, do not allow explicitly specifying ZooKeeper path and replica name for *MergeTree tables in Replicated databases. It introduces a new setting `database_replicated_allow_explicit_uuid` and changes the type of `database_replicated_allow_replicated_engine_arguments` from Bool to UInt64 [#66104](https://github.com/ClickHouse/ClickHouse/pull/66104) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Expressions like `a[b].c` are supported for named tuples, as well as named subscripts from arbitrary expressions, e.g., `expr().name`. This is useful for processing JSON. This closes [#54965](https://github.com/ClickHouse/ClickHouse/issues/54965). In previous versions, an expression of form `expr().name` was parsed as `tupleElement(expr(), name)`, and the query analyzer was searching for a column `name` rather than for the corresponding tuple element; while in the new version, it is changed to `tupleElement(expr(), 'name')`. In most cases, the previous version was not working, but it is possible to imagine a very unusual scenario when this change could lead to incompatibility: if you stored names of tuple elements in a column or an alias, that was named differently than the tuple element's name: `SELECT 'b' AS a, CAST([tuple(123)] AS 'Array(Tuple(b UInt8))') AS t, t[1].a`. It is very unlikely that you used such queries, but we still have to mark this change as potentially backward incompatible. [#68435](https://github.com/ClickHouse/ClickHouse/pull/68435) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* When the setting `print_pretty_type_names` is enabled, it will print `Tuple` data type in a pretty form in `SHOW CREATE TABLE` statements, `formatQuery` function, and in the interactive mode in `clickhouse-client` and `clickhouse-local`. In previous versions, this setting was only applied to `DESCRIBE` queries and `toTypeName`. This closes [#65753](https://github.com/ClickHouse/ClickHouse/issues/65753). [#68492](https://github.com/ClickHouse/ClickHouse/pull/68492) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
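A minimal sketch of the two changes above, assuming a 24.9 `clickhouse-client` session; the queries are illustrative and table-free:

```sql
-- `a[b].c` and `expr().name` now resolve to named tuple elements:
SELECT [(1, 'x')]::Array(Tuple(id UInt8, s String)) AS arr, arr[1].s; -- 'x'

-- With print_pretty_type_names enabled, Tuple types are printed in a pretty
-- (multi-line) form in SHOW CREATE TABLE, formatQuery, and interactive clients:
SELECT toTypeName(CAST((1, 'x'), 'Tuple(id UInt8, s String)'))
SETTINGS print_pretty_type_names = 1;
```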
#### New Feature
* Allow a user to have multiple authentication methods instead of only one. Allow authentication methods to be reset to the most recently added one. If you plan to run some instances on 24.8 and others on 24.9 for some time, it's better to set `max_authentication_methods_per_user = 1` for that period to avoid potential errors. [#65277](https://github.com/ClickHouse/ClickHouse/pull/65277) ([Arthur Passos](https://github.com/arthurpassos)).
* Function `toStartOfInterval()` now has a new overload which emulates TimescaleDB's `time_bucket()` function and PostgreSQL's `date_bin()` function ([#55619](https://github.com/ClickHouse/ClickHouse/issues/55619)). It allows aligning date or timestamp values to multiples of a given interval from an *arbitrary* origin (instead of the *fixed* origin 0000-01-01 00:00:00.000). For example, `SELECT toStartOfInterval(toDateTime('2023-01-01 14:45:00'), INTERVAL 1 MINUTE, toDateTime('2023-01-01 14:35:30'));` returns `2023-01-01 14:44:30`, which is a multiple of 1-minute intervals counted from the origin `2023-01-01 14:35:30`. [#56738](https://github.com/ClickHouse/ClickHouse/pull/56738) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
* Add support for `ATTACH PARTITION ALL FROM`. [#61987](https://github.com/ClickHouse/ClickHouse/pull/61987) ([Kirill Nikiforov](https://github.com/allmazz)).
* Introduced JSONCompactWithProgress format where ClickHouse outputs each row as a newline-delimited JSON object, including metadata, data, progress, totals, and statistics. [#66205](https://github.com/ClickHouse/ClickHouse/pull/66205) ([Alexey Korepanov](https://github.com/alexkorep)).
* Add the `input_format_json_empty_as_default` setting which, when enabled, treats empty fields in JSON inputs as default values. Closes [#59339](https://github.com/ClickHouse/ClickHouse/issues/59339). [#66782](https://github.com/ClickHouse/ClickHouse/pull/66782) ([Alexis Arnaud](https://github.com/a-a-f)).
* Added functions `overlay` and `overlayUTF8` which replace parts of a string by another string. Example: `SELECT overlay('Hello New York', 'Jersey', 11)` returns `Hello New Jersey`. [#66933](https://github.com/ClickHouse/ClickHouse/pull/66933) ([李扬](https://github.com/taiyang-li)).
* Add support for lightweight deletes in partitions: `DELETE FROM [db.]table [ON CLUSTER cluster] [IN PARTITION partition_expr] WHERE expr;`. [#67805](https://github.com/ClickHouse/ClickHouse/pull/67805) ([sunny](https://github.com/sunny19930321)).
* Implemented comparison for `Interval` data type values, so they are now converted to the least supertype. [#68057](https://github.com/ClickHouse/ClickHouse/pull/68057) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
* Add `create_if_not_exists` setting to default to `IF NOT EXISTS` behavior during CREATE statements. [#68164](https://github.com/ClickHouse/ClickHouse/pull/68164) ([Peter Nguyen](https://github.com/petern48)).
* Make it possible to read Iceberg tables in Azure and locally. [#68210](https://github.com/ClickHouse/ClickHouse/pull/68210) ([Daniil Ivanik](https://github.com/divanik)).
* Query cache entries can now be dropped by tag. For example, the query cache entry created by `SELECT 1 SETTINGS use_query_cache = true, query_cache_tag = 'abc'` can now be dropped by `SYSTEM DROP QUERY CACHE TAG 'abc'`. [#68477](https://github.com/ClickHouse/ClickHouse/pull/68477) ([Michał Tabaszewski](https://github.com/pinsvin00)).
* Add storage encryption for named collections. [#68615](https://github.com/ClickHouse/ClickHouse/pull/68615) ([Pablo Marcos](https://github.com/pamarcos)).
* Added `ripeMD160` function, which computes the RIPEMD-160 cryptographic hash of a string. Example: `SELECT hex(ripeMD160('The quick brown fox jumps over the lazy dog'))` returns `37F332F68DB77BD9D7EDD4969571AD671CF9DD3B`. [#68639](https://github.com/ClickHouse/ClickHouse/pull/68639) ([Dergousov Maxim](https://github.com/m7kss1)).
* Add a virtual column `_headers` for the URL table engine. Closes [#65026](https://github.com/ClickHouse/ClickHouse/issues/65026). [#68867](https://github.com/ClickHouse/ClickHouse/pull/68867) ([flynn](https://github.com/ucasfl)).
* Add `system.projections` table to track available projections. [#68901](https://github.com/ClickHouse/ClickHouse/pull/68901) ([Jordi Villar](https://github.com/jrdi)).
* Add new function `arrayZipUnaligned` for Spark compatibility (`arrays_zip`), which allows unaligned arrays, based on the original `arrayZip`. Example: `SELECT arrayZipUnaligned([1], [1, 2, 3])` (see the combined sketch after this list). [#69030](https://github.com/ClickHouse/ClickHouse/pull/69030) ([李扬](https://github.com/taiyang-li)).
* Added `cp`/`mv` commands for the Keeper client, which atomically copy/move nodes. [#69034](https://github.com/ClickHouse/ClickHouse/pull/69034) ([Mikhail Artemenko](https://github.com/Michicosun)).
* Adds an argument `scale` (default: `true`) to the function `arrayAUC`, which allows skipping the normalization step (issue [#69609](https://github.com/ClickHouse/ClickHouse/issues/69609)). [#69717](https://github.com/ClickHouse/ClickHouse/pull/69717) ([gabrielmcg44](https://github.com/gabrielmcg44)).
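A combined sketch of a few of the new features above; the result comments are our expectation rather than verified output:

```sql
-- overlay: replace part of a string starting at position 11
SELECT overlay('Hello New York', 'Jersey', 11); -- 'Hello New Jersey'

-- arrayZipUnaligned: Spark-style arrays_zip over arrays of different lengths
-- (we assume the shorter array is padded with NULLs)
SELECT arrayZipUnaligned([1], [1, 2, 3]);

-- tag a query cache entry, then drop all entries with that tag
SELECT 1 SETTINGS use_query_cache = true, query_cache_tag = 'abc';
SYSTEM DROP QUERY CACHE TAG 'abc';
```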
#### Experimental feature
* Adds a setting `input_format_try_infer_variants` which allows Variant type to be inferred during schema inference for text formats when there is more than one possible type for column/array elements. [#63798](https://github.com/ClickHouse/ClickHouse/pull/63798) ([Shaun Struwig](https://github.com/Blargian)).
* Add aggregate functions `distinctDynamicTypes`/`distinctJSONPaths`/`distinctJSONPathsAndTypes` for better introspection of JSON column type content. [#68463](https://github.com/ClickHouse/ClickHouse/pull/68463) ([Kruglov Pavel](https://github.com/Avogar)).
* New algorithm to determine the unit of mark distribution between replicas by consistent hashing. Different numbers of marks are chosen for different read patterns to improve performance. [#68424](https://github.com/ClickHouse/ClickHouse/pull/68424) ([Nikita Taranov](https://github.com/nickitat)).
* Previously, the algorithmic complexity of the part-deduplication logic in parallel replica announcement handling was O(n^2), which could take noticeable time for tables with many parts (or partitions). This change makes the complexity O(n*log(n)). [#69596](https://github.com/ClickHouse/ClickHouse/pull/69596) ([Alexander Gololobov](https://github.com/davenger)).
* Refreshable materialized view improvements: append mode (`... REFRESH EVERY 1 MINUTE APPEND ...`) to add rows to existing table instead of overwriting the whole table, retries (disabled by default, configured in SETTINGS section of the query), `SYSTEM WAIT VIEW <name>` query that waits for the currently running refresh, some fixes. [#58934](https://github.com/ClickHouse/ClickHouse/pull/58934) ([Michael Kolupaev](https://github.com/al13n321)).
* Added `min_max` as a new type of (experimental) statistics. It supports estimating range predicates over numeric columns, e.g. `x < 100`. [#67013](https://github.com/ClickHouse/ClickHouse/pull/67013) ([JackyWoo](https://github.com/JackyWoo)).
#### Performance Improvement
* Improve join performance by rearranging the right table by keys when the table keys are dense in left or inner hash joins. [#60341](https://github.com/ClickHouse/ClickHouse/pull/60341) ([kevinyhzou](https://github.com/KevinyhZou)).
* Improve overall join performance by appending `RowRefList` or `RowRef` to `AddedColumns` for lazy output; during `buildOutput`, `RowRefList`/`RowRef` are used for output, and the `is_join_get` condition is removed from the `buildOutput` loop. [#63677](https://github.com/ClickHouse/ClickHouse/pull/63677) ([kevinyhzou](https://github.com/KevinyhZou)).
* Load filesystem cache metadata asynchronously during the boot process, in order to make restarts faster (controlled by setting `load_metadata_asynchronously`). [#65736](https://github.com/ClickHouse/ClickHouse/pull/65736) ([Daniel Pozo Escalona](https://github.com/danipozo)).
* Functions `array` and `map` were optimized to process certain common cases much faster. [#67707](https://github.com/ClickHouse/ClickHouse/pull/67707) ([李扬](https://github.com/taiyang-li)).
* Trivial optimization of ORC string reading, especially when the column contains no NULLs. [#67794](https://github.com/ClickHouse/ClickHouse/pull/67794) ([李扬](https://github.com/taiyang-li)).
* Improved overall performance of merges by reducing the overhead of scheduling steps of merges. [#68016](https://github.com/ClickHouse/ClickHouse/pull/68016) ([Anton Popov](https://github.com/CurtizJ)).
* Speed up requests to S3 when a profile is not set, credentials are not set, and IMDS is not available (for example, when you are querying a public bucket on a machine outside of a cloud). This closes [#52771](https://github.com/ClickHouse/ClickHouse/issues/52771). [#68082](https://github.com/ClickHouse/ClickHouse/pull/68082) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Devirtualize format reader in RowInputFormatWithNamesAndTypes for some performance improvement. [#68437](https://github.com/ClickHouse/ClickHouse/pull/68437) ([李扬](https://github.com/taiyang-li)).
* Add a parallel merge-with-key implementation to maximize CPU utilization. [#68441](https://github.com/ClickHouse/ClickHouse/pull/68441) ([Jiebin Sun](https://github.com/jiebinn)).
* Add setting `output_format_orc_dictionary_key_size_threshold` to allow users to enable dictionary encoding for string columns in the ORC output format. It helps reduce the output ORC file size and significantly improves reading performance (see the sketch after this list). [#68591](https://github.com/ClickHouse/ClickHouse/pull/68591) ([李扬](https://github.com/taiyang-li)).
* Implemented reading of required files only during hive partitioning. [#68963](https://github.com/ClickHouse/ClickHouse/pull/68963) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
* Introduce a new Keeper request, RemoveRecursive, which removes a node together with its whole subtree. [#69332](https://github.com/ClickHouse/ClickHouse/pull/69332) ([Mikhail Artemenko](https://github.com/Michicosun)).
* Speed up insert performance with the vector similarity index by adding data to the vector index in parallel. [#69493](https://github.com/ClickHouse/ClickHouse/pull/69493) ([flynn](https://github.com/ucasfl)).
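A sketch of the new ORC dictionary-encoding setting from the entry above; the output path and the threshold value are illustrative assumptions (check the setting's documentation for its exact semantics and accepted range):

```sql
-- write an ORC file, allowing dictionary encoding for the string column;
-- the threshold value below is a placeholder, not a recommended default
INSERT INTO FUNCTION file('out.orc', 'ORC')
SELECT toString(number % 100) AS s
FROM numbers(1000000)
SETTINGS output_format_orc_dictionary_key_size_threshold = 0.1;
```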
#### Improvement
* Hardened parts of the codebase related to parsing of small entities. The following (minor) bugs were found and fixed: if a `DeltaLake` table is partitioned by Bool, the partition value was always interpreted as false; the `ExternalDistributed` table was using only a single shard of the provided addresses; the value of the `max_threads` setting and similar were printed as `'auto(N)'` instead of `auto(N)`. [#52503](https://github.com/ClickHouse/ClickHouse/pull/52503) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Use cgroup-specific metrics for CPU usage accounting instead of system-wide metrics. [#62003](https://github.com/ClickHouse/ClickHouse/pull/62003) ([Nikita Taranov](https://github.com/nickitat)).
* IO scheduling for remote S3 disks is now done on the level of HTTP socket streams (instead of the whole S3 requests) to resolve `bandwidth_limit` throttling issues. [#65182](https://github.com/ClickHouse/ClickHouse/pull/65182) ([Sergei Trifonov](https://github.com/serxa)).
* Functions `upperUTF8` and `lowerUTF8` were previously only able to uppercase / lowercase Cyrillic characters. This limitation is now removed and characters in arbitrary languages are uppercased/lowercased. Example: `SELECT upperUTF8('Süden')` now returns `SÜDEN`. [#65761](https://github.com/ClickHouse/ClickHouse/pull/65761) ([李扬](https://github.com/taiyang-li)).
* When a lightweight delete happens on a table with projection(s), users previously had two options: throw an exception (by default) or drop the projection(s) so the lightweight delete can proceed. Now there is a third option: proceed with the lightweight delete and then rebuild the projection(s). [#66169](https://github.com/ClickHouse/ClickHouse/pull/66169) ([jsc0218](https://github.com/jsc0218)).
* Two options (`dns_allow_resolve_names_to_ipv4` and `dns_allow_resolve_names_to_ipv6`) have been added to allow blocking connections by IP family. [#66895](https://github.com/ClickHouse/ClickHouse/pull/66895) ([MikhailBurdukov](https://github.com/MikhailBurdukov)).
* Make Ctrl-Z handling configurable (`ignore_shell_suspend`) in clickhouse-client. [#67134](https://github.com/ClickHouse/ClickHouse/pull/67134) ([Azat Khuzhin](https://github.com/azat)).
* Improve `castOrDefault` from Variant/Dynamic columns so it works when the inner types are not convertible at all. [#67150](https://github.com/ClickHouse/ClickHouse/pull/67150) ([Kruglov Pavel](https://github.com/Avogar)).
* Improve unicode encoding in JSON output formats. Ensures that valid JSON is generated in the case of certain byte sequences in the result data. [#67938](https://github.com/ClickHouse/ClickHouse/pull/67938) ([mwoenker](https://github.com/mwoenker)).
* Added profile events for merges and mutations for better introspection. [#68015](https://github.com/ClickHouse/ClickHouse/pull/68015) ([Anton Popov](https://github.com/CurtizJ)).
* ODBC: get `http_max_tries` from the server configuration. [#68128](https://github.com/ClickHouse/ClickHouse/pull/68128) ([Rodolphe Dugé de Bernonville](https://github.com/RodolpheDuge)).
* Add wildcard support for user identification in x509 SubjectAltName extension. [#68236](https://github.com/ClickHouse/ClickHouse/pull/68236) ([Marco Vilas Boas](https://github.com/marco-vb)).
* Improve schema inference of date-times. Now `DateTime64` is used only when the date-time value has a fractional part; otherwise, regular `DateTime` is used. Inference of Date/DateTime is stricter now, especially with `date_time_input_format='best_effort'`, to avoid inferring date-times from strings in corner cases. [#68382](https://github.com/ClickHouse/ClickHouse/pull/68382) ([Kruglov Pavel](https://github.com/Avogar)).
* Delete the old named-collections code from dictionaries and replace it with the new code, which allows using DDL-created named collections in dictionaries. Closes [#60936](https://github.com/ClickHouse/ClickHouse/issues/60936), closes [#36890](https://github.com/ClickHouse/ClickHouse/issues/36890). [#68412](https://github.com/ClickHouse/ClickHouse/pull/68412) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Use HTTP/1.1 instead of HTTP/1.0 (set by default) for external HTTP authentication. [#68456](https://github.com/ClickHouse/ClickHouse/pull/68456) ([Aleksei Filatov](https://github.com/aalexfvk)).
* Added a new set of metrics for Thread Pool introspection, providing deeper insights into thread pool performance and behavior. [#68674](https://github.com/ClickHouse/ClickHouse/pull/68674) ([filimonov](https://github.com/filimonov)).
* Support query parameters in async inserts with format `Values`. [#68741](https://github.com/ClickHouse/ClickHouse/pull/68741) ([Anton Popov](https://github.com/CurtizJ)).
* Support `Date32` in `dateTrunc` and `toStartOfInterval`. [#68874](https://github.com/ClickHouse/ClickHouse/pull/68874) ([LiuNeng](https://github.com/liuneng1994)).
* Add a new setting `output_format_always_quote_identifiers` to always quote identifiers when enabled, and a new setting `output_format_identifier_quoting_style` to set the identifier quoting style. Applicable values: `'None'`, `'Backticks'` (default), `'DoubleQuotes'`, `'BackticksMySQL'` (see the sketch after this list). [#68896](https://github.com/ClickHouse/ClickHouse/pull/68896) ([tuanpach](https://github.com/tuanpach)).
* Add `plan_step_name` and `plan_step_description` columns to `system.processors_profile_log`. [#68954](https://github.com/ClickHouse/ClickHouse/pull/68954) ([Alexander Gololobov](https://github.com/davenger)).
* Support for the Spanish language in the embedded dictionaries. [#69035](https://github.com/ClickHouse/ClickHouse/pull/69035) ([Vasily Okunev](https://github.com/VOkunev)).
* Add CPU arch to short fault info. [#69037](https://github.com/ClickHouse/ClickHouse/pull/69037) ([Konstantin Bogdanov](https://github.com/thevar1able)).
* `CREATE TABLE AS` now copies `PRIMARY KEY`, `ORDER BY`, and similar clauses. Currently this is supported only for the MergeTree family of table engines. [#69076](https://github.com/ClickHouse/ClickHouse/pull/69076) ([sakulali](https://github.com/sakulali)).
* Replication of a subset of columns is now available through MaterializedPostgreSQL. Closes [#33748](https://github.com/ClickHouse/ClickHouse/issues/33748). [#69092](https://github.com/ClickHouse/ClickHouse/pull/69092) ([Kruglov Kirill](https://github.com/1on)).
* Do not block retries when establishing a new keeper connection. [#69148](https://github.com/ClickHouse/ClickHouse/pull/69148) ([Raúl Marín](https://github.com/Algunenano)).
* Update `DatabaseFactory` so that user-defined database engines can have arguments, settings, and table overrides (similar to `StorageFactory`). [#69201](https://github.com/ClickHouse/ClickHouse/pull/69201) ([Барыкин Никита Олегович](https://github.com/NikBarykin)).
* The restore mode that replaces all external table engines and functions with Null (the `restore_replace_external_engines_to_null` and `restore_replace_external_table_functions_to_null` settings) was failing if the table had SETTINGS. Now it removes settings from the table definition in this case and allows restoring such tables. [#69253](https://github.com/ClickHouse/ClickHouse/pull/69253) ([Ilya Yatsishin](https://github.com/qoega)).
* Reduce memory usage of inserts into JSON by using an adaptive write-buffer size. Many files created by a JSON column in a wide part contain a small amount of data, and it doesn't make sense to allocate a 1 MB buffer for them. [#69272](https://github.com/ClickHouse/ClickHouse/pull/69272) ([Kruglov Pavel](https://github.com/Avogar)).
* `CLICKHOUSE_PASSWORD` is escaped for XML in the clickhouse image's entrypoint. [#69301](https://github.com/ClickHouse/ClickHouse/pull/69301) ([aohoyd](https://github.com/aohoyd)).
* Do not retain threads in the concurrent hash join thread pool, to avoid queries spawning threads excessively. [#69406](https://github.com/ClickHouse/ClickHouse/pull/69406) ([Duc Canh Le](https://github.com/canhld94)).
* Allow empty arguments for `arrayZip`/`arrayZipUnaligned`, as `concat` does (https://github.com/ClickHouse/ClickHouse/pull/65887). This is for Spark compatibility in the Gluten CH backend. [#69576](https://github.com/ClickHouse/ClickHouse/pull/69576) ([李扬](https://github.com/taiyang-li)).
* Support more advanced SSL options for Keeper's internal communication (e.g. private keys with passphrase). [#69582](https://github.com/ClickHouse/ClickHouse/pull/69582) ([Antonio Andelic](https://github.com/antonio2368)).
* Sleep for 10 ms before retrying to acquire a lock in `DatabaseReplicatedDDLWorker::enqueueQueryImpl`. [#69588](https://github.com/ClickHouse/ClickHouse/pull/69588) ([Miсhael Stetsyuk](https://github.com/mstetsyuk)).
* Index analysis can take noticeable time for big tables with many parts or partitions. This change should enable killing a heavy query at that stage. [#69606](https://github.com/ClickHouse/ClickHouse/pull/69606) ([Alexander Gololobov](https://github.com/davenger)).
* Mask sensitive info in the `gcs()` table function. [#69611](https://github.com/ClickHouse/ClickHouse/pull/69611) ([Vitaly Baranov](https://github.com/vitlibar)).
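A sketch of two of the improvements above. The `Date32` call follows the entry directly; for the quoting settings, whether `formatQuery` honors them and the exact rendering in the comment are our assumptions:

```sql
-- Date32 now works with dateTrunc and toStartOfInterval
SELECT dateTrunc('month', toDate32('1950-02-17'));

-- identifier quoting style for formatted query output
SELECT formatQuery('SELECT a, b FROM t')
SETTINGS output_format_identifier_quoting_style = 'DoubleQuotes',
         output_format_always_quote_identifiers = 1;
-- expected along the lines of: SELECT "a", "b" FROM "t"
```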
#### Bug Fix (user-visible misbehavior in an official stable release)
* Rebuild projections for merges that reduce the number of rows. [#62364](https://github.com/ClickHouse/ClickHouse/pull/62364) ([cangyin](https://github.com/cangyin)).
* Fix attaching a table in MaterializedPostgreSQL when the PostgreSQL database name contains "-". [#62730](https://github.com/ClickHouse/ClickHouse/pull/62730) ([takakawa](https://github.com/takakawa)).
* Storage Join: support nullable columns in the left table. Closes [#61247](https://github.com/ClickHouse/ClickHouse/issues/61247). [#66926](https://github.com/ClickHouse/ClickHouse/pull/66926) ([vdimir](https://github.com/vdimir)).
* Fix incorrect query results with parallel replicas (distributed queries as well) when the `IN` operator contains a conversion to Decimal. The bug was introduced with the new analyzer. [#67234](https://github.com/ClickHouse/ClickHouse/pull/67234) ([Igor Nikonov](https://github.com/devcrafter)).
* Fix the problem that `ALTER ... MODIFY ORDER BY` causes inconsistent metadata. [#67436](https://github.com/ClickHouse/ClickHouse/pull/67436) ([iceFireser](https://github.com/iceFireser)).
* Fix the upper bound of the function `fromModifiedJulianDay`. It was supposed to be `9999-12-31` but was mistakenly set to `9999-01-01`. [#67583](https://github.com/ClickHouse/ClickHouse/pull/67583) ([PHO](https://github.com/depressed-pho)).
* Fix `IN` queries when the index is not at the beginning of the tuple. [#67626](https://github.com/ClickHouse/ClickHouse/pull/67626) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
* Fixed an error on generated columns in MaterializedPostgreSQL when adnum ordering is broken [#63161](https://github.com/ClickHouse/ClickHouse/issues/63161). Fixed an error on an id column with a nextval expression as default in MaterializedPostgreSQL when there are generated columns in the table. Fixed an error on dropping a publication with symbols outside [a-z1-9-]. [#67664](https://github.com/ClickHouse/ClickHouse/pull/67664) ([Kruglov Kirill](https://github.com/1on)).
* Fix expiration in `RoleCache`. [#67748](https://github.com/ClickHouse/ClickHouse/pull/67748) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix window view missing blocks due to slow flush to view. [#67983](https://github.com/ClickHouse/ClickHouse/pull/67983) ([Raúl Marín](https://github.com/Algunenano)).
* Fix MSAN issue caused by incorrect date format. [#68105](https://github.com/ClickHouse/ClickHouse/pull/68105) ([JackyWoo](https://github.com/JackyWoo)).
* Fixed crash in Parquet filtering when data types in the file substantially differ from requested types (e.g. `... FROM file('a.parquet', Parquet, 'x String')`, but the file has `x Int64`). Without this fix, use `input_format_parquet_filter_push_down = 0` as a workaround. [#68131](https://github.com/ClickHouse/ClickHouse/pull/68131) ([Michael Kolupaev](https://github.com/al13n321)).
* Fix crash in `lag`/`lead` which is introduced in [#67091](https://github.com/ClickHouse/ClickHouse/issues/67091). [#68262](https://github.com/ClickHouse/ClickHouse/pull/68262) ([lgbo](https://github.com/lgbo-ustc)).
* Try to fix a PostgreSQL crash when a query is cancelled. [#68288](https://github.com/ClickHouse/ClickHouse/pull/68288) ([Kseniia Sumarokova](https://github.com/kssenii)).
* After https://github.com/ClickHouse/ClickHouse/pull/61984, `schema_inference_make_columns_nullable=0` could still make columns `Nullable` in Parquet/Arrow formats. The change was backward incompatible and users noticed the change in behaviour. This PR makes `schema_inference_make_columns_nullable=0` work as before (no Nullable columns will be inferred) and introduces a new value `auto` for this setting, which makes columns `Nullable` only if the data has information about nullability (see the sketch after this list). [#68298](https://github.com/ClickHouse/ClickHouse/pull/68298) ([Kruglov Pavel](https://github.com/Avogar)).
* Fixes [#50868](https://github.com/ClickHouse/ClickHouse/issues/50868). Small DateTime64 constant values returned by a nested subquery inside a distributed query were wrongly transformed to Nulls, thus causing errors and possible incorrect query results. [#68323](https://github.com/ClickHouse/ClickHouse/pull/68323) ([Shankar](https://github.com/shiyer7474)).
* Fix missing sync replica mode in query `SYSTEM SYNC REPLICA`. [#68326](https://github.com/ClickHouse/ClickHouse/pull/68326) ([Duc Canh Le](https://github.com/canhld94)).
* Fix bug in key condition. [#68354](https://github.com/ClickHouse/ClickHouse/pull/68354) ([Han Fei](https://github.com/hanfei1991)).
* Fix a crash on dropping or renaming a role that is used in an LDAP external user directory. [#68355](https://github.com/ClickHouse/ClickHouse/pull/68355) ([Andrey Zvonov](https://github.com/zvonand)).
* Fix the `Progress` column value of `system.view_refreshes` being greater than 1 [#68377](https://github.com/ClickHouse/ClickHouse/issues/68377). [#68378](https://github.com/ClickHouse/ClickHouse/pull/68378) ([megao](https://github.com/jetgm)).
* Process regexp flags correctly. [#68389](https://github.com/ClickHouse/ClickHouse/pull/68389) ([Han Fei](https://github.com/hanfei1991)).
* PostgreSQL-style cast operator (`::`) works correctly even for SQL-style hex and binary string literals (e.g., `SELECT x'414243'::String`). This closes [#68324](https://github.com/ClickHouse/ClickHouse/issues/68324). [#68482](https://github.com/ClickHouse/ClickHouse/pull/68482) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Minor patch for https://github.com/ClickHouse/ClickHouse/pull/68131. [#68494](https://github.com/ClickHouse/ClickHouse/pull/68494) ([Chang chen](https://github.com/baibaichen)).
* Fix [#68239](https://github.com/ClickHouse/ClickHouse/issues/68239) SAMPLE n where n is an integer. [#68499](https://github.com/ClickHouse/ClickHouse/pull/68499) ([Denis Hananein](https://github.com/denis-hananein)).
* Fix a bug in `mannWhitneyUTest` when the sizes of the two distributions are not equal. [#68556](https://github.com/ClickHouse/ClickHouse/pull/68556) ([Han Fei](https://github.com/hanfei1991)).
* Fix a failure to start replication of ReplicatedMergeTree after an unexpected restart, caused by abnormal handling of a covered-by-broken part. [#68584](https://github.com/ClickHouse/ClickHouse/pull/68584) ([baolin](https://github.com/baolinhuang)).
* Fix `LOGICAL_ERROR`s when functions `sipHash64Keyed`, `sipHash128Keyed`, or `sipHash128ReferenceKeyed` are applied to empty arrays or tuples. [#68630](https://github.com/ClickHouse/ClickHouse/pull/68630) ([Robert Schulze](https://github.com/rschu1ze)).
* The full-text index could filter incorrectly when indexing multiple columns, because it didn't reset `row_id` between different columns. The reproduction procedure is in tests/queries/0_stateless/03228_full_text_with_multi_col.sql. [#68644](https://github.com/ClickHouse/ClickHouse/pull/68644) ([siyuan](https://github.com/linkwk7)).
* Fix invalid characters '\t' and '\n' in `replica_name` when creating a Replicated table, which caused incorrect parsing of 'source replica' in LogEntry. Mentioned in issue [#68640](https://github.com/ClickHouse/ClickHouse/issues/68640). [#68645](https://github.com/ClickHouse/ClickHouse/pull/68645) ([Zhigao Hong](https://github.com/zghong)).
* Added back the virtual columns `_table` and `_database` to distributed tables. They were available until version 24.3. [#68672](https://github.com/ClickHouse/ClickHouse/pull/68672) ([Anton Popov](https://github.com/CurtizJ)).
* Fix possible error `Size of permutation (0) is less than required (...)` during Variant column permutation. [#68681](https://github.com/ClickHouse/ClickHouse/pull/68681) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix possible error `DB::Exception: Block structure mismatch in joined block stream: different columns:` with new JSON column. [#68686](https://github.com/ClickHouse/ClickHouse/pull/68686) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix issue with materialized constant keys when hashing maps with arrays as keys in functions `sipHash(64/128)Keyed`. [#68731](https://github.com/ClickHouse/ClickHouse/pull/68731) ([Salvatore Mesoraca](https://github.com/aiven-sal)).
* Make `ColumnsDescription::toString` format each column using the same `IAST::FormatState` object. This results in uniform column metadata being written to disk and ZooKeeper. [#68733](https://github.com/ClickHouse/ClickHouse/pull/68733) ([Miсhael Stetsyuk](https://github.com/mstetsyuk)).
* Fix merging of aggregated data for grouping sets. [#68744](https://github.com/ClickHouse/ClickHouse/pull/68744) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a logical error when we create a replicated merge tree, alter a column, and then execute MODIFY STATISTICS. [#68820](https://github.com/ClickHouse/ClickHouse/pull/68820) ([Han Fei](https://github.com/hanfei1991)).
* Fix resolving dynamic subcolumns from subqueries in analyzer. [#68824](https://github.com/ClickHouse/ClickHouse/pull/68824) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix complex types metadata parsing in DeltaLake. Closes [#68739](https://github.com/ClickHouse/ClickHouse/issues/68739). [#68836](https://github.com/ClickHouse/ClickHouse/pull/68836) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fixed asynchronous inserts in case when metadata of table is changed (by `ALTER ADD/MODIFY COLUMN` queries) after insert but before flush to the table. [#68837](https://github.com/ClickHouse/ClickHouse/pull/68837) ([Anton Popov](https://github.com/CurtizJ)).
* Fix unexpected exception when passing empty tuple in array. This fixes [#68618](https://github.com/ClickHouse/ClickHouse/issues/68618). [#68848](https://github.com/ClickHouse/ClickHouse/pull/68848) ([Amos Bird](https://github.com/amosbird)).
* Fix parsing pure metadata mutations commands. [#68935](https://github.com/ClickHouse/ClickHouse/pull/68935) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Fix possible wrong result during anyHeavy state merge. [#68950](https://github.com/ClickHouse/ClickHouse/pull/68950) ([Raúl Marín](https://github.com/Algunenano)).
* Fixed writing to Materialized Views with enabled setting `optimize_functions_to_subcolumns`. [#68951](https://github.com/ClickHouse/ClickHouse/pull/68951) ([Anton Popov](https://github.com/CurtizJ)).
* Don't use the serializations cache in const Dynamic column methods. It could lead to a use-of-uninitialized value or even a race condition during aggregations. [#68953](https://github.com/ClickHouse/ClickHouse/pull/68953) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix parsing error when null should be inserted as default in some cases during JSON type parsing. [#68955](https://github.com/ClickHouse/ClickHouse/pull/68955) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix `Content-Encoding` not sent in some compressed responses. [#64802](https://github.com/ClickHouse/ClickHouse/issues/64802). [#68975](https://github.com/ClickHouse/ClickHouse/pull/68975) ([Konstantin Bogdanov](https://github.com/thevar1able)).
* There were cases when a path was concatenated incorrectly and contained a `//` part; this is solved using path normalization. [#69066](https://github.com/ClickHouse/ClickHouse/pull/69066) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
* Fix logical error when we have empty async insert. [#69080](https://github.com/ClickHouse/ClickHouse/pull/69080) ([Han Fei](https://github.com/hanfei1991)).
* Fixed data race of progress indication in clickhouse-client during query canceling. [#69081](https://github.com/ClickHouse/ClickHouse/pull/69081) ([Sergei Trifonov](https://github.com/serxa)).
* Fix a bug that the vector similarity index (currently experimental) was not utilized when used with cosine distance as distance function. [#69090](https://github.com/ClickHouse/ClickHouse/pull/69090) ([flynn](https://github.com/ucasfl)).
* This change addresses an issue where attempting to create a Replicated database again after a server failure during the initial creation process could result in an error. [#69102](https://github.com/ClickHouse/ClickHouse/pull/69102) ([Miсhael Stetsyuk](https://github.com/mstetsyuk)).
* Don't infer Bool type from String in CSV when `input_format_csv_try_infer_numbers_from_strings = 1` because we don't allow reading bool values from strings. [#69109](https://github.com/ClickHouse/ClickHouse/pull/69109) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix EXPLAIN AST INSERT query parsing errors on the client when `--multiquery` is enabled. [#69123](https://github.com/ClickHouse/ClickHouse/pull/69123) ([wxybear](https://github.com/wxybear)).
* The `UNION` clause in subqueries wasn't handled correctly in queries with parallel replicas and led to a LOGICAL_ERROR `Duplicate announcement received for replica`. [#69146](https://github.com/ClickHouse/ClickHouse/pull/69146) ([Igor Nikonov](https://github.com/devcrafter)).
* Fix propagating the structure argument in s3Cluster. Previously, the `DEFAULT` expression of a column could be lost when sending the query to the replicas in s3Cluster. [#69147](https://github.com/ClickHouse/ClickHouse/pull/69147) ([Kruglov Pavel](https://github.com/Avogar)).
* Respect format settings in Values format during conversion from expression to the destination type. [#69149](https://github.com/ClickHouse/ClickHouse/pull/69149) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix `clickhouse-client --queries-file` for readonly users (previously it failed with `Cannot modify 'log_comment' setting in readonly mode`). [#69175](https://github.com/ClickHouse/ClickHouse/pull/69175) ([Azat Khuzhin](https://github.com/azat)).
* Fix data race in clickhouse-client when it's piped to a process that terminated early. [#69186](https://github.com/ClickHouse/ClickHouse/pull/69186) ([vdimir](https://github.com/vdimir)).
* Fix incorrect results of `uniq` and GROUP BY for JSON/Dynamic types. [#69203](https://github.com/ClickHouse/ClickHouse/pull/69203) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix the INFILE format detection for asynchronous inserts. If the format is not explicitly defined in the FORMAT clause, it can be detected from the INFILE file extension. [#69237](https://github.com/ClickHouse/ClickHouse/pull/69237) ([Julia Kartseva](https://github.com/jkartseva)).
* After [this issue](https://github.com/ClickHouse/ClickHouse/pull/59946#issuecomment-1943653197) there are quite a few table replicas in production such that their `metadata_version` node value is both equal to `0` and is different from the respective table's `metadata` node version. This leads to `alter` queries failing on such replicas. [#69274](https://github.com/ClickHouse/ClickHouse/pull/69274) ([Miсhael Stetsyuk](https://github.com/mstetsyuk)).
* Mark Dynamic type as not safe primary key type to avoid issues with Fields. [#69311](https://github.com/ClickHouse/ClickHouse/pull/69311) ([Kruglov Pavel](https://github.com/Avogar)).
* Improve restoring of access entities' dependencies. [#69346](https://github.com/ClickHouse/ClickHouse/pull/69346) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix undefined behavior when all connection attempts fail getting a connection for insertions. [#69390](https://github.com/ClickHouse/ClickHouse/pull/69390) ([Pablo Marcos](https://github.com/pamarcos)).
* Close [#69135](https://github.com/ClickHouse/ClickHouse/issues/69135). Reusing joined data for a `CROSS` join cannot happen in ClickHouse at present; it's better to keep `have_compressed` in `reuseJoinedData`. [#69404](https://github.com/ClickHouse/ClickHouse/pull/69404) ([lgbo](https://github.com/lgbo-ustc)).
* Make `materialize()` function return full column when parameter is a sparse column. [#69429](https://github.com/ClickHouse/ClickHouse/pull/69429) ([Alexander Gololobov](https://github.com/davenger)).
* Fixed a `LOGICAL_ERROR` with function `sqidDecode` ([#69450](https://github.com/ClickHouse/ClickHouse/issues/69450)). [#69451](https://github.com/ClickHouse/ClickHouse/pull/69451) ([Robert Schulze](https://github.com/rschu1ze)).
* Quick fix for an s3queue problem on 24.6, or for a CREATE query with a Replicated database. [#69454](https://github.com/ClickHouse/ClickHouse/pull/69454) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fixed case when memory consumption was too high because of the squashing in `INSERT INTO ... SELECT` or `CREATE TABLE AS SELECT` queries. [#69469](https://github.com/ClickHouse/ClickHouse/pull/69469) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
* Statements `SHOW COLUMNS` and `SHOW INDEX` now work properly if the table has dots in its name. [#69514](https://github.com/ClickHouse/ClickHouse/pull/69514) ([Salvatore Mesoraca](https://github.com/aiven-sal)).
* Usage of the query cache for queries with an overflow mode other than 'throw' is now disallowed. This prevents situations where potentially truncated and incorrect query results could be stored in the query cache (issue [#67476](https://github.com/ClickHouse/ClickHouse/issues/67476)). [#69549](https://github.com/ClickHouse/ClickHouse/pull/69549) ([Robert Schulze](https://github.com/rschu1ze)).
* Keep the original order of conditions during the move to PREWHERE. Previously the order could change, which could lead to failing queries when the order is important. [#69560](https://github.com/ClickHouse/ClickHouse/pull/69560) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix Keeper multi-request preprocessing after ZNOAUTH error. [#69627](https://github.com/ClickHouse/ClickHouse/pull/69627) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix METADATA_MISMATCH that might have happened due to TTL with a WHERE clause in DatabaseReplicated when creating a new replica. [#69736](https://github.com/ClickHouse/ClickHouse/pull/69736) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix `StorageS3(Azure)Queue` settings `tracked_file_ttl_sec`. We wrote it to keeper with key `tracked_file_ttl_sec`, but read as `tracked_files_ttl_sec`, which was a typo. [#69742](https://github.com/ClickHouse/ClickHouse/pull/69742) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Use `tryConvertFieldToType` in `getHyperrectangleForRowGroup`. [#69745](https://github.com/ClickHouse/ClickHouse/pull/69745) ([Miсhael Stetsyuk](https://github.com/mstetsyuk)).
* Revert "Fix prewhere without columns and without adaptive index granularity (almost w/o anything)"'. Due to the reverted changes some errors might happen when reading data parts produced by old CH releases(presumably 2021 or older). [#68897](https://github.com/ClickHouse/ClickHouse/pull/68897) ([Alexander Gololobov](https://github.com/davenger)).
### <a id="248"></a> ClickHouse release 24.8 LTS, 2024-08-20
#### Backward Incompatible Change
@@ -28,7 +197,7 @@
* Add `_etag` virtual column for S3 table engine. Fixes [#65312](https://github.com/ClickHouse/ClickHouse/issues/65312). [#65386](https://github.com/ClickHouse/ClickHouse/pull/65386) ([skyoct](https://github.com/skyoct)).
* Added a tagging (namespace) mechanism for the query cache. The same queries with different tags are considered different by the query cache. Example: `SELECT 1 SETTINGS use_query_cache = 1, query_cache_tag = 'abc'` and `SELECT 1 SETTINGS use_query_cache = 1, query_cache_tag = 'def'` now create different query cache entries. [#68235](https://github.com/ClickHouse/ClickHouse/pull/68235) ([sakulali](https://github.com/sakulali)).
* Support more variants of JOIN strictness (`LEFT/RIGHT SEMI/ANTI/ANY JOIN`) with inequality conditions which involve columns from both left and right table. e.g. `t1.y < t2.y` (see the setting `allow_experimental_join_condition`). [#64281](https://github.com/ClickHouse/ClickHouse/pull/64281) ([lgbo](https://github.com/lgbo-ustc)).
* Intrpret Hive-style partitioning for different engines (`File`, `URL`, `S3`, `AzureBlobStorage`, `HDFS`). Hive-style partitioning organizes data into partitioned sub-directories, making it efficient to query and manage large datasets. Currently, it only creates virtual columns with the appropriate name and data. The follow-up PR will introduce the appropriate data filtering (performance speedup). [#65997](https://github.com/ClickHouse/ClickHouse/pull/65997) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
* Interpret Hive-style partitioning for different engines (`File`, `URL`, `S3`, `AzureBlobStorage`, `HDFS`). Hive-style partitioning organizes data into partitioned sub-directories, making it efficient to query and manage large datasets. Currently, it only creates virtual columns with the appropriate name and data. The follow-up PR will introduce the appropriate data filtering (performance speedup). [#65997](https://github.com/ClickHouse/ClickHouse/pull/65997) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
* Add function `printf` for Spark compatibility (but you can use the existing `format` function). [#66257](https://github.com/ClickHouse/ClickHouse/pull/66257) ([李扬](https://github.com/taiyang-li)).
* Add options `restore_replace_external_engines_to_null` and `restore_replace_external_table_functions_to_null` to replace external engines and table_engines to `Null` engine that can be useful for testing. It should work for RESTORE and explicit table creation. [#66536](https://github.com/ClickHouse/ClickHouse/pull/66536) ([Ilya Yatsishin](https://github.com/qoega)).
* Added support for reading `MULTILINESTRING` geometry in `WKT` format using function `readWKTLineString`. [#67647](https://github.com/ClickHouse/ClickHouse/pull/67647) ([Jacob Reckhard](https://github.com/jacobrec)).
@@ -80,7 +249,6 @@
* Automatically retry Keeper requests in KeeperMap if they happen because of timeout or connection loss. [#67448](https://github.com/ClickHouse/ClickHouse/pull/67448) ([Antonio Andelic](https://github.com/antonio2368)).
* Add `-no-pie` to Aarch64 Linux builds to allow proper introspection and symbolizing of stacktraces after a ClickHouse restart. [#67916](https://github.com/ClickHouse/ClickHouse/pull/67916) ([filimonov](https://github.com/filimonov)).
* Added profile events for merges and mutations for better introspection. [#68015](https://github.com/ClickHouse/ClickHouse/pull/68015) ([Anton Popov](https://github.com/CurtizJ)).
* Fix settings and `current_database` in `system.processes` for async BACKUP/RESTORE. [#68163](https://github.com/ClickHouse/ClickHouse/pull/68163) ([Azat Khuzhin](https://github.com/azat)).
* Remove unnecessary logs for non-replicated `MergeTree`. [#68238](https://github.com/ClickHouse/ClickHouse/pull/68238) ([Daniil Ivanik](https://github.com/divanik)).
#### Build/Testing/Packaging Improvement


@@ -42,8 +42,6 @@ Keep an eye out for upcoming meetups and events around the world. Somewhere else
Upcoming meetups
* [Bangalore Meetup](https://www.meetup.com/clickhouse-bangalore-user-group/events/303208274/) - September 18
* [Tel Aviv Meetup](https://www.meetup.com/clickhouse-meetup-israel/events/303095121) - September 22
* [Jakarta Meetup](https://www.meetup.com/clickhouse-indonesia-user-group/events/303191359/) - October 1
* [Singapore Meetup](https://www.meetup.com/clickhouse-singapore-meetup-group/events/303212064/) - October 3
* [Madrid Meetup](https://www.meetup.com/clickhouse-spain-user-group/events/303096564/) - October 22
@@ -67,6 +65,8 @@ Recently completed meetups
* [Chicago Meetup (Jump Capital)](https://lu.ma/43tvmrfw) - September 12
* [London Meetup](https://www.meetup.com/clickhouse-london-user-group/events/302977267) - September 17
* [Austin Meetup](https://www.meetup.com/clickhouse-austin-user-group/events/302558689/) - September 17
* [Bangalore Meetup](https://www.meetup.com/clickhouse-bangalore-user-group/events/303208274/) - September 18
* [Tel Aviv Meetup](https://www.meetup.com/clickhouse-meetup-israel/events/303095121) - September 22
## Recent Recordings
* **Recent Meetup Videos**: [Meetup Playlist](https://www.youtube.com/playlist?list=PL0Z2YDlm0b3iNDUzpY1S3L_iV4nARda_U) Whenever possible, recordings of the ClickHouse Community Meetups are edited and presented as individual talks. Currently featuring "Modern SQL in 2023", "Fast, Concurrent, and Consistent Asynchronous INSERTS in ClickHouse", and "Full-Text Indices: Design and Experiments"


@@ -3,7 +3,11 @@ add_subdirectory (Data)
add_subdirectory (Data/ODBC)
add_subdirectory (Foundation)
add_subdirectory (JSON)
add_subdirectory (MongoDB)
if (USE_MONGODB)
    add_subdirectory(MongoDB)
endif()
add_subdirectory (Net)
add_subdirectory (NetSSL_OpenSSL)
add_subdirectory (Redis)


@@ -2,11 +2,11 @@
# NOTE: VERSION_REVISION has nothing common with DBMS_TCP_PROTOCOL_VERSION,
# only DBMS_TCP_PROTOCOL_VERSION should be incremented on protocol changes.
SET(VERSION_REVISION 54490)
SET(VERSION_REVISION 54491)
SET(VERSION_MAJOR 24)
SET(VERSION_MINOR 9)
SET(VERSION_MINOR 10)
SET(VERSION_PATCH 1)
SET(VERSION_GITHASH e02b434d2fc0c4fbee29ca675deab7474d274608)
SET(VERSION_DESCRIBE v24.9.1.1-testing)
SET(VERSION_STRING 24.9.1.1)
SET(VERSION_GITHASH b12a367741812f9e5fe754d19ebae600e2a2614c)
SET(VERSION_DESCRIBE v24.10.1.1-testing)
SET(VERSION_STRING 24.10.1.1)
# end of autochange


@@ -12,11 +12,13 @@ macro(add_headers_only prefix common_path)
    add_glob(${prefix}_headers ${common_path}/*.h)
endmacro()

# Assumes the path is under src and that src/ needs to be removed (valid for any subdirectory under src/)
macro(extract_into_parent_list src_list dest_list)
    list(REMOVE_ITEM ${src_list} ${ARGN})
    get_filename_component(__dir_name ${CMAKE_CURRENT_SOURCE_DIR} NAME)
    file(RELATIVE_PATH relative ${CMAKE_SOURCE_DIR} ${CMAKE_CURRENT_SOURCE_DIR})
    string(REPLACE "src/" "" relative ${relative})
    foreach(file IN ITEMS ${ARGN})
        list(APPEND ${dest_list} ${__dir_name}/${file})
        list(APPEND ${dest_list} ${relative}/${file})
    endforeach()
    set(${dest_list} "${${dest_list}}" PARENT_SCOPE)
endmacro()


@@ -18,4 +18,4 @@ set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} --gcc-toolchain=${TOOLCHAIN_PATH}")
set (CMAKE_ASM_FLAGS "${CMAKE_ASM_FLAGS} --gcc-toolchain=${TOOLCHAIN_PATH}")
set (USE_MUSL 1)
add_definitions(-DUSE_MUSL=1)
add_definitions(-DUSE_MUSL=1 -D__MUSL__=1)


@@ -160,6 +160,12 @@ add_contrib (datasketches-cpp-cmake datasketches-cpp)
add_contrib (incbin-cmake incbin)
add_contrib (sqids-cpp-cmake sqids-cpp)
option(USE_MONGODB "Enable MongoDB support" ${ENABLE_LIBRARIES})
if (USE_MONGODB)
    add_contrib (mongo-c-driver-cmake mongo-c-driver) # requires: zlib
    add_contrib (mongo-cxx-driver-cmake mongo-cxx-driver) # requires: libmongoc, libbson
endif()

option(ENABLE_NLP "Enable NLP functions support" ${ENABLE_LIBRARIES})
if (ENABLE_NLP)
    add_contrib (libstemmer-c-cmake libstemmer_c)

contrib/mongo-c-driver (vendored submodule)

@@ -0,0 +1 @@
Subproject commit d55410c69183c90d18fd3b3f1d9db3d224fc8d52


@@ -0,0 +1,150 @@
option(USE_MONGODB "Enable MongoDB support" ${ENABLE_LIBRARIES})
if(NOT USE_MONGODB)
    message(STATUS "Not using libmongoc and libbson")
    return()
endif()
set(libbson_VERSION_MAJOR 1)
set(libbson_VERSION_MINOR 27)
set(libbson_VERSION_PATCH 0)
set(libbson_VERSION 1.27.0)
set(libmongoc_VERSION_MAJOR 1)
set(libmongoc_VERSION_MINOR 27)
set(libmongoc_VERSION_PATCH 0)
set(libmongoc_VERSION 1.27.0)
set(LIBBSON_SOURCES_ROOT "${ClickHouse_SOURCE_DIR}/contrib/mongo-c-driver/src")
set(LIBBSON_SOURCE_DIR "${LIBBSON_SOURCES_ROOT}/libbson/src")
file(GLOB_RECURSE LIBBSON_SOURCES "${LIBBSON_SOURCE_DIR}/*.c")
set(LIBBSON_BINARY_ROOT "${ClickHouse_BINARY_DIR}/contrib/mongo-c-driver/src")
set(LIBBSON_BINARY_DIR "${LIBBSON_BINARY_ROOT}/libbson/src")
include(TestBigEndian)
test_big_endian(BSON_BIG_ENDIAN)
if(BSON_BIG_ENDIAN)
    set(BSON_BYTE_ORDER 4321)
else()
    set(BSON_BYTE_ORDER 1234)
endif()
set(BSON_OS 1)
set(BSON_EXTRA_ALIGN 1)
set(BSON_HAVE_SNPRINTF 1)
set(BSON_HAVE_TIMESPEC 1)
set(BSON_HAVE_GMTIME_R 1)
set(BSON_HAVE_RAND_R 1)
set(BSON_HAVE_STRINGS_H 1)
set(BSON_HAVE_STRLCPY 0)
set(BSON_HAVE_STRNLEN 1)
set(BSON_HAVE_STDBOOL_H 1)
set(BSON_HAVE_CLOCK_GETTIME 1)
# common settings
set(MONGOC_TRACE 0)
set(MONGOC_ENABLE_STATIC_BUILD 1)
set(MONGOC_ENABLE_DEBUG_ASSERTIONS 0)
set(MONGOC_ENABLE_MONGODB_AWS_AUTH 0)
set(MONGOC_ENABLE_SASL_CYRUS 0)
set(MONGOC_ENABLE_SASL 0)
set(MONGOC_ENABLE_SASL_SSPI 0)
set(MONGOC_HAVE_SASL_CLIENT_DONE 0)
set(MONGOC_ENABLE_SRV 0)
# DNS
set(MONGOC_HAVE_DNSAPI 0)
set(MONGOC_HAVE_RES_SEARCH 0)
set(MONGOC_HAVE_RES_NSEARCH 0)
set(MONGOC_HAVE_RES_NCLOSE 0)
set(MONGOC_HAVE_RES_NDESTROY 0)
set(MONGOC_ENABLE_COMPRESSION 1)
set(MONGOC_ENABLE_COMPRESSION_ZLIB 0)
set(MONGOC_ENABLE_COMPRESSION_SNAPPY 0)
set(MONGOC_ENABLE_COMPRESSION_ZSTD 1)
# SSL
set(MONGOC_ENABLE_CRYPTO 0)
set(MONGOC_ENABLE_CRYPTO_CNG 0)
set(MONGOC_ENABLE_CRYPTO_COMMON_CRYPTO 0)
set(MONGOC_ENABLE_CRYPTO_SYSTEM_PROFILE 0)
set(MONGOC_ENABLE_SSL 0)
set(MONGOC_ENABLE_SSL_OPENSSL 0)
set(MONGOC_ENABLE_SSL_SECURE_CHANNEL 0)
set(MONGOC_ENABLE_SSL_SECURE_TRANSPORT 0)
set(MONGOC_ENABLE_SSL_LIBRESSL 0)
set(MONGOC_ENABLE_CRYPTO_LIBCRYPTO 0)
set(MONGOC_ENABLE_CLIENT_SIDE_ENCRYPTION 0)
set(MONGOC_HAVE_ASN1_STRING_GET0_DATA 0)
if(ENABLE_SSL)
    set(MONGOC_ENABLE_SSL 1)
    set(MONGOC_ENABLE_CRYPTO 1)
    set(MONGOC_ENABLE_SSL_OPENSSL 1)
    set(MONGOC_ENABLE_CRYPTO_LIBCRYPTO 1)
    set(MONGOC_HAVE_ASN1_STRING_GET0_DATA 1)
else()
    message(WARNING "Building mongoc without SSL")
endif()
set(CMAKE_EXTRA_INCLUDE_FILES "sys/socket.h")
set(MONGOC_SOCKET_ARG2 "struct sockaddr")
set(MONGOC_HAVE_SOCKLEN 1)
set(MONGOC_SOCKET_ARG3 "socklen_t")
set(MONGOC_ENABLE_RDTSCP 0)
set(MONGOC_NO_AUTOMATIC_GLOBALS 1)
set(MONGOC_ENABLE_STATIC_INSTALL 0)
set(MONGOC_ENABLE_SHM_COUNTERS 0)
set(MONGOC_HAVE_SCHED_GETCPU 0)
set(MONGOC_HAVE_SS_FAMILY 0)
configure_file(
    ${LIBBSON_SOURCE_DIR}/bson/bson-config.h.in
    ${LIBBSON_BINARY_DIR}/bson/bson-config.h
)
configure_file(
    ${LIBBSON_SOURCE_DIR}/bson/bson-version.h.in
    ${LIBBSON_BINARY_DIR}/bson/bson-version.h
)
set(COMMON_SOURCE_DIR "${LIBBSON_SOURCES_ROOT}/common")
file(GLOB_RECURSE COMMON_SOURCES "${COMMON_SOURCE_DIR}/*.c")
configure_file(
    ${COMMON_SOURCE_DIR}/common-config.h.in
    ${COMMON_SOURCE_DIR}/common-config.h
)
add_library(_libbson ${LIBBSON_SOURCES} ${COMMON_SOURCES})
add_library(ch_contrib::libbson ALIAS _libbson)
target_include_directories(_libbson SYSTEM PUBLIC ${LIBBSON_SOURCE_DIR} ${LIBBSON_BINARY_DIR} ${COMMON_SOURCE_DIR})
target_compile_definitions(_libbson PRIVATE BSON_COMPILATION)
if(OS_LINUX)
    target_compile_definitions(_libbson PRIVATE -D_GNU_SOURCE -D_POSIX_C_SOURCE=199309L -D_XOPEN_SOURCE=600)
elseif(OS_DARWIN)
    target_compile_definitions(_libbson PRIVATE -D_DARWIN_C_SOURCE)
endif()
set(LIBMONGOC_SOURCE_DIR "${LIBBSON_SOURCES_ROOT}/libmongoc/src")
file(GLOB_RECURSE LIBMONGOC_SOURCES "${LIBMONGOC_SOURCE_DIR}/*.c")
set(UTF8PROC_SOURCE_DIR "${LIBBSON_SOURCES_ROOT}/utf8proc-2.8.0")
set(UTF8PROC_SOURCES "${UTF8PROC_SOURCE_DIR}/utf8proc.c")
set(UTHASH_SOURCE_DIR "${LIBBSON_SOURCES_ROOT}/uthash")
configure_file(
    ${LIBMONGOC_SOURCE_DIR}/mongoc/mongoc-config.h.in
    ${LIBMONGOC_SOURCE_DIR}/mongoc/mongoc-config.h
)
configure_file(
    ${LIBMONGOC_SOURCE_DIR}/mongoc/mongoc-version.h.in
    ${LIBMONGOC_SOURCE_DIR}/mongoc/mongoc-version.h
)
add_library(_libmongoc ${LIBMONGOC_SOURCES} ${COMMON_SOURCES} ${UTF8PROC_SOURCES})
add_library(ch_contrib::libmongoc ALIAS _libmongoc)
target_include_directories(_libmongoc SYSTEM PUBLIC ${LIBMONGOC_SOURCE_DIR} ${COMMON_SOURCE_DIR} ${UTF8PROC_SOURCE_DIR} ${UTHASH_SOURCE_DIR})
target_include_directories(_libmongoc SYSTEM PRIVATE ${LIBMONGOC_SOURCE_DIR}/mongoc ${UTHASH_SOURCE_DIR})
target_compile_definitions(_libmongoc PRIVATE MONGOC_COMPILATION)
target_link_libraries(_libmongoc ch_contrib::libbson ch_contrib::c-ares ch_contrib::zstd)
if(ENABLE_SSL)
    target_link_libraries(_libmongoc OpenSSL::SSL)
endif()

contrib/mongo-cxx-driver (vendored submodule)

@@ -0,0 +1 @@
Subproject commit 3166bdb49b717ce1bc30f46cc2b274ab1de7005b


@@ -0,0 +1,192 @@
option(USE_MONGODB "Enable MongoDB support" ${ENABLE_LIBRARIES})
if(NOT USE_MONGODB)
    message(STATUS "Not using mongocxx and bsoncxx")
    return()
endif()
set(BSONCXX_SOURCES_DIR "${ClickHouse_SOURCE_DIR}/contrib/mongo-cxx-driver/src/bsoncxx")
set(BSONCXX_BINARY_DIR "${ClickHouse_BINARY_DIR}/contrib/mongo-cxx-driver/src/bsoncxx")
set(BSONCXX_SOURCES
    ${BSONCXX_SOURCES_DIR}/lib/bsoncxx/v_noabi/bsoncxx/array/element.cpp
    ${BSONCXX_SOURCES_DIR}/lib/bsoncxx/v_noabi/bsoncxx/array/value.cpp
    ${BSONCXX_SOURCES_DIR}/lib/bsoncxx/v_noabi/bsoncxx/array/view.cpp
    ${BSONCXX_SOURCES_DIR}/lib/bsoncxx/v_noabi/bsoncxx/builder/core.cpp
    ${BSONCXX_SOURCES_DIR}/lib/bsoncxx/v_noabi/bsoncxx/decimal128.cpp
    ${BSONCXX_SOURCES_DIR}/lib/bsoncxx/v_noabi/bsoncxx/document/element.cpp
    ${BSONCXX_SOURCES_DIR}/lib/bsoncxx/v_noabi/bsoncxx/document/value.cpp
    ${BSONCXX_SOURCES_DIR}/lib/bsoncxx/v_noabi/bsoncxx/document/view.cpp
    ${BSONCXX_SOURCES_DIR}/lib/bsoncxx/v_noabi/bsoncxx/exception/error_code.cpp
    ${BSONCXX_SOURCES_DIR}/lib/bsoncxx/v_noabi/bsoncxx/json.cpp
    ${BSONCXX_SOURCES_DIR}/lib/bsoncxx/v_noabi/bsoncxx/oid.cpp
    ${BSONCXX_SOURCES_DIR}/lib/bsoncxx/v_noabi/bsoncxx/private/itoa.cpp
    ${BSONCXX_SOURCES_DIR}/lib/bsoncxx/v_noabi/bsoncxx/string/view_or_value.cpp
    ${BSONCXX_SOURCES_DIR}/lib/bsoncxx/v_noabi/bsoncxx/types.cpp
    ${BSONCXX_SOURCES_DIR}/lib/bsoncxx/v_noabi/bsoncxx/types/bson_value/value.cpp
    ${BSONCXX_SOURCES_DIR}/lib/bsoncxx/v_noabi/bsoncxx/types/bson_value/view.cpp
    ${BSONCXX_SOURCES_DIR}/lib/bsoncxx/v_noabi/bsoncxx/validate.cpp
)
set(BSONCXX_POLY_USE_IMPLS ON)
configure_file(
    ${BSONCXX_SOURCES_DIR}/lib/bsoncxx/v_noabi/bsoncxx/config/config.hpp.in
    ${BSONCXX_BINARY_DIR}/lib/bsoncxx/v_noabi/bsoncxx/config/config.hpp
)
configure_file(
    ${BSONCXX_SOURCES_DIR}/lib/bsoncxx/v_noabi/bsoncxx/config/version.hpp.in
    ${BSONCXX_BINARY_DIR}/lib/bsoncxx/v_noabi/bsoncxx/config/version.hpp
)
configure_file(
    ${BSONCXX_SOURCES_DIR}/lib/bsoncxx/v_noabi/bsoncxx/config/private/config.hh.in
    ${BSONCXX_BINARY_DIR}/lib/bsoncxx/v_noabi/bsoncxx/config/private/config.hh
)
add_library(_bsoncxx ${BSONCXX_SOURCES})
add_library(ch_contrib::bsoncxx ALIAS _bsoncxx)
target_include_directories(_bsoncxx SYSTEM PUBLIC "${BSONCXX_SOURCES_DIR}/include/bsoncxx/v_noabi" "${BSONCXX_SOURCES_DIR}/lib/bsoncxx/v_noabi" "${BSONCXX_BINARY_DIR}/lib/bsoncxx/v_noabi")
target_compile_definitions(_bsoncxx PUBLIC BSONCXX_STATIC)
target_link_libraries(_bsoncxx ch_contrib::libbson)
include(GenerateExportHeader)
generate_export_header(_bsoncxx
BASE_NAME BSONCXX
EXPORT_MACRO_NAME BSONCXX_API
NO_EXPORT_MACRO_NAME BSONCXX_PRIVATE
EXPORT_FILE_NAME ${BSONCXX_BINARY_DIR}/lib/bsoncxx/v_noabi/bsoncxx/config/export.hpp
STATIC_DEFINE BSONCXX_STATIC
)
set(MONGOCXX_SOURCES_DIR "${ClickHouse_SOURCE_DIR}/contrib/mongo-cxx-driver/src/mongocxx")
set(MONGOCXX_BINARY_DIR "${ClickHouse_BINARY_DIR}/contrib/mongo-cxx-driver/src/mongocxx")
set(MONGOCXX_SOURCES
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/bulk_write.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/change_stream.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/client.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/client_encryption.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/client_session.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/collection.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/cursor.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/database.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/events/command_failed_event.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/events/command_started_event.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/events/command_succeeded_event.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/events/heartbeat_failed_event.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/events/heartbeat_started_event.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/events/heartbeat_succeeded_event.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/events/server_changed_event.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/events/server_closed_event.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/events/server_description.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/events/server_opening_event.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/events/topology_changed_event.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/events/topology_closed_event.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/events/topology_description.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/events/topology_opening_event.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/exception/error_code.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/exception/operation_exception.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/exception/server_error_code.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/gridfs/bucket.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/gridfs/downloader.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/gridfs/uploader.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/hint.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/index_model.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/index_view.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/instance.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/logger.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/model/delete_many.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/model/delete_one.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/model/insert_one.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/model/replace_one.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/model/update_many.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/model/update_one.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/model/write.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/aggregate.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/apm.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/auto_encryption.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/bulk_write.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/change_stream.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/client.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/client_encryption.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/client_session.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/count.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/create_collection.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/data_key.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/delete.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/distinct.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/encrypt.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/estimated_document_count.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/find.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/find_one_and_delete.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/find_one_and_replace.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/find_one_and_update.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/gridfs/bucket.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/gridfs/upload.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/index.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/index_view.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/insert.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/pool.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/range.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/replace.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/rewrap_many_datakey.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/server_api.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/tls.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/transaction.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/options/update.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/pipeline.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/pool.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/private/conversions.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/private/libbson.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/private/libmongoc.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/private/numeric_casting.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/read_concern.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/read_preference.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/result/bulk_write.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/result/delete.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/result/gridfs/upload.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/result/insert_many.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/result/insert_one.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/result/replace_one.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/result/rewrap_many_datakey.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/result/update.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/search_index_model.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/search_index_view.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/uri.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/validation_criteria.cpp
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/write_concern.cpp
)
set(MONGOCXX_COMPILER_VERSION "${CMAKE_CXX_COMPILER_VERSION}")
set(MONGOCXX_COMPILER_ID "${CMAKE_CXX_COMPILER_ID}")
set(MONGOCXX_LINK_WITH_STATIC_MONGOC 1)
set(MONGOCXX_BUILD_STATIC 1)
if(ENABLE_SSL)
set(MONGOCXX_ENABLE_SSL 1)
endif()
configure_file(
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/config/config.hpp.in
${MONGOCXX_BINARY_DIR}/lib/mongocxx/v_noabi/mongocxx/config/config.hpp
)
configure_file(
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/config/version.hpp.in
${MONGOCXX_BINARY_DIR}/lib/mongocxx/v_noabi/mongocxx/config/version.hpp
)
configure_file(
${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi/mongocxx/config/private/config.hh.in
${MONGOCXX_BINARY_DIR}/lib/mongocxx/v_noabi/mongocxx/config/private/config.hh
)
add_library(_mongocxx ${MONGOCXX_SOURCES})
add_library(ch_contrib::mongocxx ALIAS _mongocxx)
target_include_directories(_mongocxx SYSTEM PUBLIC "${MONGOCXX_SOURCES_DIR}/include/mongocxx/v_noabi" "${MONGOCXX_SOURCES_DIR}/lib/mongocxx/v_noabi" "${MONGOCXX_BINARY_DIR}/lib/mongocxx/v_noabi")
target_compile_definitions(_mongocxx PUBLIC MONGOCXX_STATIC)
target_link_libraries(_mongocxx ch_contrib::bsoncxx ch_contrib::libmongoc)
generate_export_header(_mongocxx
BASE_NAME MONGOCXX
EXPORT_MACRO_NAME MONGOCXX_API
NO_EXPORT_MACRO_NAME MONGOCXX_PRIVATE
EXPORT_FILE_NAME ${MONGOCXX_BINARY_DIR}/lib/mongocxx/v_noabi/mongocxx/config/export.hpp
STATIC_DEFINE MONGOCXX_STATIC
)

2
contrib/postgres vendored

@ -1 +1 @@
Subproject commit 2e51f82e27f4be389cc239d1b8784bbf2f01d33a
Subproject commit c041ed8cbda02eb881de8d7645ca96b6e4b2327d

2
contrib/sysroot vendored

@ -1 +1 @@
Subproject commit 5be834147d5b5dd77ca2b821f356982029320513
Subproject commit 738138e665809a5b28c453983c5f48f23a340ed6

View File

@ -1,4 +1,11 @@
# docker build -t clickhouse/docs-builder .
FROM golang:alpine AS htmltest-builder
ARG HTMLTEST_VERSION=0.17.0
RUN CGO_ENABLED=0 go install github.com/wjdp/htmltest@v${HTMLTEST_VERSION} \
&& mv "${GOPATH}/bin/htmltest" /usr/bin/htmltest
# nodejs 17 prefers ipv6 and is broken in our environment
FROM node:16-alpine
@ -17,6 +24,13 @@ RUN yarn config set registry https://registry.npmjs.org \
&& yarn install \
&& yarn cache clean
ENV HOME /opt/clickhouse-docs
RUN mkdir /output_path \
&& chmod -R a+w . /output_path \
&& git config --global --add safe.directory /opt/clickhouse-docs
COPY run.sh /run.sh
COPY --from=htmltest-builder /usr/bin/htmltest /usr/bin/htmltest
ENTRYPOINT ["/run.sh"]

View File

@ -97,6 +97,10 @@ cmake --debug-trycompile -DCMAKE_VERBOSE_MAKEFILE=1 -LA "-DCMAKE_BUILD_TYPE=$BUI
# shellcheck disable=SC2086 # No quotes because I want it to expand to nothing if empty.
ninja $NINJA_FLAGS $BUILD_TARGET
# We don't allow dirty files in the source directory after build
git ls-files --others --exclude-standard | grep . && echo "^ Dirty files in the working copy after build" && exit 1
git submodule foreach --quiet git ls-files --others --exclude-standard | grep . && echo "^ Dirty files in submodules after build" && exit 1
ls -la ./programs
ccache_status

View File

@ -1,4 +1,4 @@
FROM ubuntu:20.04 AS glibc-donor
FROM ubuntu:22.04 AS glibc-donor
ARG TARGETARCH
RUN arch=${TARGETARCH:-amd64} \
@ -6,8 +6,11 @@ RUN arch=${TARGETARCH:-amd64} \
amd64) rarch=x86_64 ;; \
arm64) rarch=aarch64 ;; \
esac \
&& ln -s "${rarch}-linux-gnu" /lib/linux-gnu
&& ln -s "${rarch}-linux-gnu" /lib/linux-gnu \
&& case $arch in \
amd64) ln /lib/linux-gnu/ld-linux-x86-64.so.2 /lib/linux-gnu/ld-2.35.so ;; \
arm64) ln /lib/linux-gnu/ld-linux-aarch64.so.1 /lib/linux-gnu/ld-2.35.so ;; \
esac
FROM alpine
@ -17,7 +20,7 @@ ENV LANG=en_US.UTF-8 \
TZ=UTC \
CLICKHOUSE_CONFIG=/etc/clickhouse-server/config.xml
COPY --from=glibc-donor /lib/linux-gnu/libc.so.6 /lib/linux-gnu/libdl.so.2 /lib/linux-gnu/libm.so.6 /lib/linux-gnu/libpthread.so.0 /lib/linux-gnu/librt.so.1 /lib/linux-gnu/libnss_dns.so.2 /lib/linux-gnu/libnss_files.so.2 /lib/linux-gnu/libresolv.so.2 /lib/linux-gnu/ld-2.31.so /lib/
COPY --from=glibc-donor /lib/linux-gnu/libc.so.6 /lib/linux-gnu/libdl.so.2 /lib/linux-gnu/libm.so.6 /lib/linux-gnu/libpthread.so.0 /lib/linux-gnu/librt.so.1 /lib/linux-gnu/libnss_dns.so.2 /lib/linux-gnu/libnss_files.so.2 /lib/linux-gnu/libresolv.so.2 /lib/linux-gnu/ld-2.35.so /lib/
COPY --from=glibc-donor /etc/nsswitch.conf /etc/
COPY docker_related_config.xml /etc/clickhouse-server/config.d/
COPY entrypoint.sh /entrypoint.sh
@ -25,8 +28,8 @@ COPY entrypoint.sh /entrypoint.sh
ARG TARGETARCH
RUN arch=${TARGETARCH:-amd64} \
&& case $arch in \
amd64) mkdir -p /lib64 && ln -sf /lib/ld-2.31.so /lib64/ld-linux-x86-64.so.2 ;; \
arm64) ln -sf /lib/ld-2.31.so /lib/ld-linux-aarch64.so.1 ;; \
amd64) mkdir -p /lib64 && ln -sf /lib/ld-2.35.so /lib64/ld-linux-x86-64.so.2 ;; \
arm64) ln -sf /lib/ld-2.35.so /lib/ld-linux-aarch64.so.1 ;; \
esac
# lts / testing / prestable / etc

9
docs/.htmltest.yml Normal file
View File

@ -0,0 +1,9 @@
DirectoryPath: /test
IgnoreDirectoryMissingTrailingSlash: true
CheckExternal: false
CheckInternal: false
CheckInternalHash: true
CheckMailto: false
IgnoreAltMissing: true
IgnoreEmptyHref: true
IgnoreInternalEmptyHash: true

View File

@ -6,7 +6,15 @@ sidebar_label: MongoDB
# MongoDB
MongoDB engine is read-only table engine which allows to read data (`SELECT` queries) from remote MongoDB collection. Engine supports only non-nested data types. `INSERT` queries are not supported.
The MongoDB engine is a read-only table engine which allows reading data from a remote [MongoDB](https://www.mongodb.com/) collection.
Only MongoDB v3.6+ servers are supported.
[Seed list (`mongodb+srv`)](https://www.mongodb.com/docs/manual/reference/glossary/#std-term-seed-list) is not yet supported.
:::note
If you run into problems, please report the issue and try using [the legacy implementation](../../../operations/server-configuration-parameters/settings.md#use_legacy_mongodb_integration).
Keep in mind that it is deprecated and will be removed in future releases.
:::
## Creating a Table {#creating-a-table}
@ -34,55 +42,147 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name
- `options` — MongoDB connection string options (optional parameter).
:::tip
If you are using the MongoDB Atlas cloud offering:
If you are using the MongoDB Atlas cloud offering, the connection URL can be obtained from the 'Atlas SQL' option.
Seed list (`mongodb+srv`) is not yet supported, but will be added in future releases.
:::
Also, you can simply pass a URI:
``` sql
ENGINE = MongoDB(uri, collection);
```
- connection url can be obtained from 'Atlas SQL' option
- use options: 'connectTimeoutMS=10000&ssl=true&authSource=admin'
**Engine Parameters**
- `uri` — MongoDB server's connection URI
- `collection` — Remote collection name.
## Type mappings
| MongoDB | ClickHouse |
|--------------------|-----------------------------------------------------------------------|
| bool, int32, int64 | *any numeric type*, String |
| double | Float64, String |
| date | Date, Date32, DateTime, DateTime64, String |
| string | String, UUID |
| document | String (as JSON) |
| array | Array, String (as JSON) |
| oid | String |
| binary | String if in a column, base64-encoded string if in an array or document |
| *any other* | String |
If a key is not found in the MongoDB document (for example, the column name doesn't match), the default value, or `NULL` if the column is nullable, will be inserted.
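As a sketch of this behavior (the table name, the `comment` field, and the connection string below are hypothetical), a `Nullable` column simply yields `NULL` for documents that lack the corresponding key:
```sql
CREATE TABLE mongo_nullable_demo
(
_id String,
comment Nullable(String) -- documents without a `comment` key produce NULL here
) ENGINE = MongoDB('mongodb://testuser:clickhouse@mongo1:27017/test', 'some_collection');

SELECT count() FROM mongo_nullable_demo WHERE comment IS NULL;
```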
## Supported clauses
Only queries with simple expressions are supported (for example, `WHERE field = <constant> ORDER BY field2 LIMIT <constant>`).
Such expressions are translated to the MongoDB query language and executed on the server side.
You can disable all these restrictions using [mongodb_throw_on_unsupported_query](../../../operations/settings/settings.md#mongodb_throw_on_unsupported_query).
In that case, ClickHouse tries to convert the query on a best-effort basis, but this can lead to a full table scan and processing on the ClickHouse side.
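For illustration, a filter of the supported shape below is translated and executed by MongoDB, while the setting relaxes the check for everything else (a minimal sketch; `mongo_table` and its columns are hypothetical):
```sql
-- Simple comparisons with constants, ORDER BY, and LIMIT can be pushed down
SELECT * FROM mongo_table WHERE field = 42 ORDER BY field2 LIMIT 10;

-- Allow best-effort conversion of unsupported expressions (may cause a full collection scan)
SET mongodb_throw_on_unsupported_query = 0;
SELECT * FROM mongo_table WHERE lower(data) LIKE '%pattern%';
```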
:::note
It's always better to explicitly set the type of a literal because Mongo requires strictly typed filters.\
For example, if you want to filter by `Date`:
```sql
SELECT * FROM mongo_table WHERE date = '2024-01-01'
```
This will not work because Mongo will not cast the string to `Date`, so you need to cast it manually:
```sql
SELECT * FROM mongo_table WHERE date = '2024-01-01'::Date OR date = toDate('2024-01-01')
```
This applies to `Date`, `Date32`, `DateTime`, `Bool`, and `UUID`.
:::
## Usage Example {#usage-example}
Assuming MongoDB has the [sample_mflix](https://www.mongodb.com/docs/atlas/sample-data/sample-mflix) dataset loaded, create a table in ClickHouse that allows reading data from the MongoDB collection:
``` sql
CREATE TABLE mongo_table
(
key UInt64,
data String
) ENGINE = MongoDB('mongo1:27017', 'test', 'simple_table', 'testuser', 'clickhouse');
```
To read from an SSL-secured MongoDB server:
``` sql
CREATE TABLE mongo_table_ssl
(
key UInt64,
data String
) ENGINE = MongoDB('mongo2:27017', 'test', 'simple_table', 'testuser', 'clickhouse', 'ssl=true');
```
``` sql
CREATE TABLE sample_mflix_table
(
_id String,
title String,
plot String,
genres Array(String),
directors Array(String),
writers Array(String),
released Date,
imdb String,
year String
) ENGINE = MongoDB('mongodb://<USERNAME>:<PASSWORD>@atlas-sql-6634be87cefd3876070caf96-98lxs.a.query.mongodb.net/sample_mflix?ssl=true&authSource=admin', 'movies');
```
Query:
``` sql
SELECT COUNT() FROM mongo_table;
SELECT count() FROM sample_mflix_table
```
``` text
┌─count()─┐
│       4 │
└─────────┘
   ┌─count()─┐
1. │   21349 │
   └─────────┘
```
You can also adjust the connection timeout:
``` sql
CREATE TABLE mongo_table
(
key UInt64,
data String
) ENGINE = MongoDB('mongo2:27017', 'test', 'simple_table', 'testuser', 'clickhouse', 'connectTimeoutMS=100000');
```
```sql
-- JSONExtractString cannot be pushed down to MongoDB
SET mongodb_throw_on_unsupported_query = 0;

-- Find all 'Back to the Future' sequels with rating > 7.5
SELECT title, plot, genres, directors, released FROM sample_mflix_table
WHERE title IN ('Back to the Future', 'Back to the Future Part II', 'Back to the Future Part III')
AND toFloat32(JSONExtractString(imdb, 'rating')) > 7.5
ORDER BY year
FORMAT Vertical;
```
```text
Row 1:
──────
title: Back to the Future
plot: A young man is accidentally sent 30 years into the past in a time-traveling DeLorean invented by his friend, Dr. Emmett Brown, and must make sure his high-school-age parents unite in order to save his own existence.
genres: ['Adventure','Comedy','Sci-Fi']
directors: ['Robert Zemeckis']
released: 1985-07-03
Row 2:
──────
title: Back to the Future Part II
plot: After visiting 2015, Marty McFly must repeat his visit to 1955 to prevent disastrous changes to 1985... without interfering with his first trip.
genres: ['Action','Adventure','Comedy']
directors: ['Robert Zemeckis']
released: 1989-11-22
```
```sql
-- Find top 3 movies based on Cormac McCarthy's books
SELECT title, toFloat32(JSONExtractString(imdb, 'rating')) as rating
FROM sample_mflix_table
WHERE arrayExists(x -> x like 'Cormac McCarthy%', writers)
ORDER BY rating DESC
LIMIT 3;
```
```text
   ┌─title──────────────────┬─rating─┐
1. │ No Country for Old Men │    8.1 │
2. │ The Sunset Limited     │    7.4 │
3. │ The Road               │    7.3 │
   └────────────────────────┴────────┘
```
## Troubleshooting
You can see the generated MongoDB query in DEBUG-level logs.
Implementation details can be found in the [mongocxx](https://github.com/mongodb/mongo-cxx-driver) and [mongoc](https://github.com/mongodb/mongo-c-driver) documentation.
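For example, in `clickhouse-client` you can raise the log level for a session to see the translated query inline (a sketch):
```sql
SET send_logs_level = 'debug';
SELECT count() FROM sample_mflix_table; -- the generated MongoDB query is printed among the debug logs
```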

View File

@ -1,17 +1,20 @@
---
slug: /en/engines/table-engines/integrations/postgresql
title: PostgreSQL Table Engine
sidebar_position: 160
sidebar_label: PostgreSQL
---
# PostgreSQL
The PostgreSQL engine allows to perform `SELECT` and `INSERT` queries on data that is stored on a remote PostgreSQL server.
The PostgreSQL engine allows `SELECT` and `INSERT` queries on data stored on a remote PostgreSQL server.
:::note
Currently, only PostgreSQL versions 12 and up are supported.
:::
:::note Replicating or migrating Postgres data with PeerDB
> In addition to the Postgres table engine, you can use [PeerDB](https://docs.peerdb.io/introduction) by ClickHouse to set up a continuous data pipeline from Postgres to ClickHouse. PeerDB is a tool designed specifically to replicate data from Postgres to ClickHouse using change data capture (CDC).
:::
## Creating a Table {#creating-a-table}
``` sql

View File

@ -191,6 +191,15 @@ For 'Ordered' mode. Available since `24.6`. If there are several replicas of S3Q
Engine supports all s3 related settings. For more information about S3 settings see [here](../../../engines/table-engines/integrations/s3.md).
## S3Queue Ordered mode {#ordered-mode}
The `ordered` processing mode of `S3Queue` allows storing less metadata in ZooKeeper, but it has the limitation that files added later in time must have alphanumerically larger names.
`S3Queue` `ordered` mode, like `unordered`, supports the `(s3queue_)processing_threads_num` setting (the `s3queue_` prefix is optional), which controls the number of threads that process `S3` files locally on the server.
In addition, `ordered` mode introduces another setting, `(s3queue_)buckets`, which defines the number of "logical threads". In a distributed scenario with several servers holding `S3Queue` table replicas, this setting defines the number of processing units: each processing thread on each `S3Queue` replica tries to lock a certain `bucket` for processing, and each `bucket` is attributed to certain files by a hash of the file name. Therefore, in a distributed scenario it is highly recommended to set `(s3queue_)buckets` to at least the number of replicas, or more; it is fine for the number of buckets to exceed the number of replicas. The optimal configuration is for `(s3queue_)buckets` to equal `number_of_replicas` multiplied by `(s3queue_)processing_threads_num`.
The setting `(s3queue_)processing_threads_num` is not recommended for usage before version `24.6`.
The setting `(s3queue_)buckets` is available starting with version `24.6`.
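As a sketch of the recommendation above: with 2 replicas and 4 processing threads per replica, 8 buckets matches `number_of_replicas * processing_threads_num` (the table definition and S3 path below are hypothetical):
```sql
CREATE TABLE s3queue_ordered (name String, value UInt32)
ENGINE = S3Queue('https://mybucket.s3.amazonaws.com/data/*', 'CSVWithNames')
SETTINGS
mode = 'ordered',
processing_threads_num = 4, -- threads per replica
buckets = 8;                -- 2 replicas * 4 threads per replica
```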
## Description {#description}
`SELECT` is not particularly useful for streaming import (except for debugging), because each file can be imported only once. It is more practical to create real-time threads using [materialized views](../../../sql-reference/statements/create/view.md). To do this:

View File

@ -79,7 +79,7 @@ All of the parameters excepting `config_section` have the same meaning as in `Me
## Rollup Configuration {#rollup-configuration}
The settings for rollup are defined by the [graphite_rollup](../../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-graphite) parameter in the server configuration. The name of the parameter could be any. You can create several configurations and use them for different tables.
The settings for rollup are defined by the [graphite_rollup](../../../operations/server-configuration-parameters/settings.md#graphite) parameter in the server configuration. The parameter can have any name. You can create several configurations and use them for different tables.
Rollup configuration structure:

View File

@ -710,7 +710,7 @@ Data part is the minimum movable unit for `MergeTree`-engine tables. The data be
### Terms {#terms}
- Disk — Block device mounted to the filesystem.
- Default disk — Disk that stores the path specified in the [path](/docs/en/operations/server-configuration-parameters/settings.md/#server_configuration_parameters-path) server setting.
- Default disk — Disk that stores the path specified in the [path](/docs/en/operations/server-configuration-parameters/settings.md/#path) server setting.
- Volume — Ordered set of equal disks (similar to [JBOD](https://en.wikipedia.org/wiki/Non-RAID_drive_architectures)).
- Storage policy — Set of volumes and the rules for moving data between them.

View File

@ -127,7 +127,7 @@ By default, an INSERT query waits for confirmation of writing the data from only
Each block of data is written atomically. The INSERT query is divided into blocks up to `max_insert_block_size = 1048576` rows. In other words, if the `INSERT` query has less than 1048576 rows, it is made atomically.
Data blocks are deduplicated. For multiple writes of the same data block (data blocks of the same size containing the same rows in the same order), the block is only written once. The reason for this is in case of network failures when the client application does not know if the data was written to the DB, so the `INSERT` query can simply be repeated. It does not matter which replica INSERTs were sent to with identical data. `INSERTs` are idempotent. Deduplication parameters are controlled by [merge_tree](/docs/en/operations/server-configuration-parameters/settings.md/#server_configuration_parameters-merge_tree) server settings.
Data blocks are deduplicated. For multiple writes of the same data block (data blocks of the same size containing the same rows in the same order), the block is only written once. The reason for this is in case of network failures when the client application does not know if the data was written to the DB, so the `INSERT` query can simply be repeated. It does not matter which replica INSERTs were sent to with identical data. `INSERTs` are idempotent. Deduplication parameters are controlled by [merge_tree](/docs/en/operations/server-configuration-parameters/settings.md/#merge_tree) server settings.
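For instance (a sketch; `replicated_table` is a hypothetical table), retrying an identical `INSERT` after a connection error is safe because the repeated block is deduplicated:
```sql
INSERT INTO replicated_table VALUES (1, 'a'), (2, 'b');
-- Retrying the exact same statement writes nothing new: same rows, same order, same block
INSERT INTO replicated_table VALUES (1, 'a'), (2, 'b');
```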
During replication, only the source data to insert is transferred over the network. Further data transformation (merging) is coordinated and performed on all the replicas in the same way. This minimizes network usage, which means that replication works well when replicas reside in different datacenters. (Note that duplicating data in different datacenters is the main goal of replication.)

View File

@ -1,35 +1,38 @@
---
slug: /en/getting-started/example-datasets/star-schema
sidebar_label: Star Schema Benchmark
description: "Dataset based on the TPC-H dbgen source. The coding style and architecture
follows the TPCH dbgen."
description: "The Star Schema Benchmark (SSB) data set and queries"
---
# Star Schema Benchmark
# Star Schema Benchmark (SSB, 2009)
The Star Schema Benchmark is roughly based on the [TPC-H](tpch.md)'s tables and queries but unlike TPC-H, it uses a star schema layout.
The bulk of the data sits in a gigantic fact table which is surrounded by multiple small dimension tables.
The queries join the fact table with one or more dimension tables to apply filter criteria, e.g. `MONTH = 'JANUARY'`.
Compiling dbgen:
References:
- [Star Schema Benchmark](https://cs.umb.edu/~poneil/StarSchemaB.pdf) (O'Neil et al.), 2009
- [Variations of the Star Schema Benchmark to Test the Effects of Data Skew on Query Performance](https://doi.org/10.1145/2479871.2479927) (Rabl et al.), 2013
First, check out the Star Schema Benchmark repository and compile the data generator:
``` bash
$ git clone git@github.com:vadimtk/ssb-dbgen.git
$ cd ssb-dbgen
$ make
git clone https://github.com/vadimtk/ssb-dbgen.git
cd ssb-dbgen
make
```
Generating data:
:::note
With `-s 100`, dbgen generates 600 million rows (67 GB), while with `-s 1000` it generates 6 billion rows (which takes a lot of time)
:::
Then, generate the data. Parameter `-s` specifies the scale factor. For example, with `-s 100`, 600 million rows are generated.
``` bash
$ ./dbgen -s 1000 -T c
$ ./dbgen -s 1000 -T l
$ ./dbgen -s 1000 -T p
$ ./dbgen -s 1000 -T s
./dbgen -s 1000 -T c
./dbgen -s 1000 -T l
./dbgen -s 1000 -T p
./dbgen -s 1000 -T s
./dbgen -s 1000 -T d
```
Creating tables in ClickHouse:
Now create tables in ClickHouse:
``` sql
CREATE TABLE customer
@ -92,18 +95,42 @@ CREATE TABLE supplier
S_PHONE String
)
ENGINE = MergeTree ORDER BY S_SUPPKEY;
CREATE TABLE date
(
D_DATEKEY Date,
D_DATE FixedString(18),
D_DAYOFWEEK LowCardinality(String),
D_MONTH LowCardinality(String),
D_YEAR UInt16,
D_YEARMONTHNUM UInt32,
D_YEARMONTH LowCardinality(FixedString(7)),
D_DAYNUMINWEEK UInt8,
D_DAYNUMINMONTH UInt8,
D_DAYNUMINYEAR UInt16,
D_MONTHNUMINYEAR UInt8,
D_WEEKNUMINYEAR UInt8,
D_SELLINGSEASON String,
D_LASTDAYINWEEKFL UInt8,
D_LASTDAYINMONTHFL UInt8,
D_HOLIDAYFL UInt8,
D_WEEKDAYFL UInt8
)
ENGINE = MergeTree ORDER BY D_DATEKEY;
```
Inserting data:
The data can be imported as follows:
``` bash
$ clickhouse-client --query "INSERT INTO customer FORMAT CSV" < customer.tbl
$ clickhouse-client --query "INSERT INTO part FORMAT CSV" < part.tbl
$ clickhouse-client --query "INSERT INTO supplier FORMAT CSV" < supplier.tbl
$ clickhouse-client --query "INSERT INTO lineorder FORMAT CSV" < lineorder.tbl
clickhouse-client --query "INSERT INTO customer FORMAT CSV" < customer.tbl
clickhouse-client --query "INSERT INTO part FORMAT CSV" < part.tbl
clickhouse-client --query "INSERT INTO supplier FORMAT CSV" < supplier.tbl
clickhouse-client --query "INSERT INTO lineorder FORMAT CSV" < lineorder.tbl
clickhouse-client --query "INSERT INTO date FORMAT CSV" < date.tbl
```
Converting “star schema” to denormalized “flat schema”:
In many use cases of ClickHouse, multiple tables are converted into a single denormalized flat table.
This step is optional; the queries below are listed both in their original form and rewritten for the denormalized table.
``` sql
SET max_memory_usage = 20000000000;
@ -155,42 +182,131 @@ INNER JOIN supplier AS s ON s.S_SUPPKEY = l.LO_SUPPKEY
INNER JOIN part AS p ON p.P_PARTKEY = l.LO_PARTKEY;
```
Running the queries:
The queries are generated by `./qgen -s <scaling_factor>`. Example queries for `s = 100`:
Q1.1
```sql
SELECT
sum(LO_EXTENDEDPRICE * LO_DISCOUNT) AS REVENUE
FROM
lineorder,
date
WHERE
LO_ORDERDATE = D_DATEKEY
AND D_YEAR = 1993
AND LO_DISCOUNT BETWEEN 1 AND 3
AND LO_QUANTITY < 25;
```
Denormalized table:
``` sql
SELECT sum(LO_EXTENDEDPRICE * LO_DISCOUNT) AS revenue
FROM lineorder_flat
WHERE toYear(LO_ORDERDATE) = 1993 AND LO_DISCOUNT BETWEEN 1 AND 3 AND LO_QUANTITY < 25;
SELECT
sum(LO_EXTENDEDPRICE * LO_DISCOUNT) AS revenue
FROM
lineorder_flat
WHERE
toYear(LO_ORDERDATE) = 1993
AND LO_DISCOUNT BETWEEN 1 AND 3
AND LO_QUANTITY < 25;
```
Q1.2
``` sql
SELECT sum(LO_EXTENDEDPRICE * LO_DISCOUNT) AS revenue
FROM lineorder_flat
WHERE toYYYYMM(LO_ORDERDATE) = 199401 AND LO_DISCOUNT BETWEEN 4 AND 6 AND LO_QUANTITY BETWEEN 26 AND 35;
```sql
SELECT
sum(LO_EXTENDEDPRICE * LO_DISCOUNT) AS REVENUE
FROM
lineorder,
date
WHERE
LO_ORDERDATE = D_DATEKEY
AND D_YEARMONTHNUM = 199401
AND LO_DISCOUNT BETWEEN 4 AND 6
AND LO_QUANTITY BETWEEN 26 AND 35;
```
Denormalized table:
```sql
SELECT
sum(LO_EXTENDEDPRICE * LO_DISCOUNT) AS revenue
FROM
lineorder_flat
WHERE
toYYYYMM(LO_ORDERDATE) = 199401
AND LO_DISCOUNT BETWEEN 4 AND 6
AND LO_QUANTITY BETWEEN 26 AND 35;
```
Q1.3
``` sql
SELECT sum(LO_EXTENDEDPRICE * LO_DISCOUNT) AS revenue
FROM lineorder_flat
WHERE toISOWeek(LO_ORDERDATE) = 6 AND toYear(LO_ORDERDATE) = 1994
AND LO_DISCOUNT BETWEEN 5 AND 7 AND LO_QUANTITY BETWEEN 26 AND 35;
```sql
SELECT
sum(LO_EXTENDEDPRICE*LO_DISCOUNT) AS REVENUE
FROM
lineorder,
date
WHERE
LO_ORDERDATE = D_DATEKEY
AND D_WEEKNUMINYEAR = 6
AND D_YEAR = 1994
AND LO_DISCOUNT BETWEEN 5 AND 7
AND LO_QUANTITY BETWEEN 26 AND 35;
```
Denormalized table:
```sql
SELECT
sum(LO_EXTENDEDPRICE * LO_DISCOUNT) AS revenue
FROM
lineorder_flat
WHERE
toISOWeek(LO_ORDERDATE) = 6
AND toYear(LO_ORDERDATE) = 1994
AND LO_DISCOUNT BETWEEN 5 AND 7
AND LO_QUANTITY BETWEEN 26 AND 35;
```
Q2.1
``` sql
```sql
SELECT
sum(LO_REVENUE),
D_YEAR,
P_BRAND
FROM
lineorder,
date,
part,
supplier
WHERE
LO_ORDERDATE = D_DATEKEY
AND LO_PARTKEY = P_PARTKEY
AND LO_SUPPKEY = S_SUPPKEY
AND P_CATEGORY = 'MFGR#12'
AND S_REGION = 'AMERICA'
GROUP BY
D_YEAR,
P_BRAND
ORDER BY
D_YEAR,
P_BRAND;
```
Denormalized table:
```sql
SELECT
sum(LO_REVENUE),
toYear(LO_ORDERDATE) AS year,
P_BRAND
FROM lineorder_flat
WHERE P_CATEGORY = 'MFGR#12' AND S_REGION = 'AMERICA'
WHERE
P_CATEGORY = 'MFGR#12'
AND S_REGION = 'AMERICA'
GROUP BY
year,
P_BRAND
@ -201,7 +317,34 @@ ORDER BY
Q2.2
``` sql
```sql
SELECT
sum(LO_REVENUE),
D_YEAR,
P_BRAND
FROM
lineorder,
date,
part,
supplier
WHERE
LO_ORDERDATE = D_DATEKEY
AND LO_PARTKEY = P_PARTKEY
AND LO_SUPPKEY = S_SUPPKEY
AND P_BRAND BETWEEN
'MFGR#2221' AND 'MFGR#2228'
AND S_REGION = 'ASIA'
GROUP BY
D_YEAR,
P_BRAND
ORDER BY
D_YEAR,
P_BRAND;
```
Denormalized table:
```sql
SELECT
sum(LO_REVENUE),
toYear(LO_ORDERDATE) AS year,
@ -218,7 +361,33 @@ ORDER BY
Q2.3
``` sql
```sql
SELECT
sum(LO_REVENUE),
D_YEAR,
P_BRAND
FROM
lineorder,
date,
part,
supplier
WHERE
LO_ORDERDATE = D_DATEKEY
AND LO_PARTKEY = P_PARTKEY
AND LO_SUPPKEY = S_SUPPKEY
AND P_BRAND = 'MFGR#2221'
AND S_REGION = 'EUROPE'
GROUP BY
D_YEAR,
P_BRAND
ORDER BY
D_YEAR,
P_BRAND;
```
Denormalized table:
```sql
SELECT
sum(LO_REVENUE),
toYear(LO_ORDERDATE) AS year,
@ -235,14 +404,46 @@ ORDER BY
Q3.1
``` sql
```sql
SELECT
C_NATION,
S_NATION,
D_YEAR,
sum(LO_REVENUE) AS REVENUE
FROM
customer,
lineorder,
supplier,
date
WHERE
LO_CUSTKEY = C_CUSTKEY
AND LO_SUPPKEY = S_SUPPKEY
AND LO_ORDERDATE = D_DATEKEY
AND C_REGION = 'ASIA' AND S_REGION = 'ASIA'
AND D_YEAR >= 1992 AND D_YEAR <= 1997
GROUP BY
C_NATION,
S_NATION,
D_YEAR
ORDER BY
D_YEAR ASC,
REVENUE DESC;
```
Denormalized table:
```sql
SELECT
C_NATION,
S_NATION,
toYear(LO_ORDERDATE) AS year,
sum(LO_REVENUE) AS revenue
FROM lineorder_flat
WHERE C_REGION = 'ASIA' AND S_REGION = 'ASIA' AND year >= 1992 AND year <= 1997
WHERE
C_REGION = 'ASIA'
AND S_REGION = 'ASIA'
AND year >= 1992
AND year <= 1997
GROUP BY
C_NATION,
S_NATION,
@ -254,14 +455,47 @@ ORDER BY
Q3.2
``` sql
```sql
SELECT
C_CITY,
S_CITY,
D_YEAR,
sum(LO_REVENUE) AS REVENUE
FROM
customer,
lineorder,
supplier,
date
WHERE
LO_CUSTKEY = C_CUSTKEY
AND LO_SUPPKEY = S_SUPPKEY
AND LO_ORDERDATE = D_DATEKEY
AND C_NATION = 'UNITED STATES'
AND S_NATION = 'UNITED STATES'
AND D_YEAR >= 1992 AND D_YEAR <= 1997
GROUP BY
C_CITY,
S_CITY,
D_YEAR
ORDER BY
D_YEAR ASC,
REVENUE DESC;
```
Denormalized table:
```sql
SELECT
C_CITY,
S_CITY,
toYear(LO_ORDERDATE) AS year,
sum(LO_REVENUE) AS revenue
FROM lineorder_flat
WHERE C_NATION = 'UNITED STATES' AND S_NATION = 'UNITED STATES' AND year >= 1992 AND year <= 1997
WHERE
C_NATION = 'UNITED STATES'
AND S_NATION = 'UNITED STATES'
AND year >= 1992
AND year <= 1997
GROUP BY
C_CITY,
S_CITY,
@ -273,14 +507,48 @@ ORDER BY
Q3.3
``` sql
```sql
SELECT
C_CITY,
S_CITY,
D_YEAR,
sum(LO_REVENUE) AS revenue
FROM
customer,
lineorder,
supplier,
date
WHERE
LO_CUSTKEY = C_CUSTKEY
AND LO_SUPPKEY = S_SUPPKEY
AND LO_ORDERDATE = D_DATEKEY
AND (C_CITY = 'UNITED KI1' OR C_CITY = 'UNITED KI5')
AND (S_CITY = 'UNITED KI1' OR S_CITY = 'UNITED KI5')
AND D_YEAR >= 1992
AND D_YEAR <= 1997
GROUP BY
C_CITY,
S_CITY,
D_YEAR
ORDER BY
D_YEAR ASC,
revenue DESC;
```
Denormalized table:
```sql
SELECT
C_CITY,
S_CITY,
toYear(LO_ORDERDATE) AS year,
sum(LO_REVENUE) AS revenue
FROM lineorder_flat
WHERE (C_CITY = 'UNITED KI1' OR C_CITY = 'UNITED KI5') AND (S_CITY = 'UNITED KI1' OR S_CITY = 'UNITED KI5') AND year >= 1992 AND year <= 1997
WHERE
(C_CITY = 'UNITED KI1' OR C_CITY = 'UNITED KI5')
AND (S_CITY = 'UNITED KI1' OR S_CITY = 'UNITED KI5')
AND year >= 1992
AND year <= 1997
GROUP BY
C_CITY,
S_CITY,
@ -292,14 +560,46 @@ ORDER BY
Q3.4
``` sql
```sql
SELECT
C_CITY,
S_CITY,
D_YEAR,
sum(LO_REVENUE) AS revenue
FROM
customer,
lineorder,
supplier,
date
WHERE
LO_CUSTKEY = C_CUSTKEY
AND LO_SUPPKEY = S_SUPPKEY
AND LO_ORDERDATE = D_DATEKEY
AND (C_CITY='UNITED KI1' OR C_CITY='UNITED KI5')
AND (S_CITY='UNITED KI1' OR S_CITY='UNITED KI5')
AND D_YEARMONTH = 'Dec1997'
GROUP BY
C_CITY,
S_CITY,
D_YEAR
ORDER BY
D_YEAR ASC,
revenue DESC;
```
Denormalized table:
```sql
SELECT
C_CITY,
S_CITY,
toYear(LO_ORDERDATE) AS year,
sum(LO_REVENUE) AS revenue
FROM lineorder_flat
WHERE (C_CITY = 'UNITED KI1' OR C_CITY = 'UNITED KI5') AND (S_CITY = 'UNITED KI1' OR S_CITY = 'UNITED KI5') AND toYYYYMM(LO_ORDERDATE) = 199712
WHERE
(C_CITY = 'UNITED KI1' OR C_CITY = 'UNITED KI5')
AND (S_CITY = 'UNITED KI1' OR S_CITY = 'UNITED KI5')
AND toYYYYMM(LO_ORDERDATE) = 199712
GROUP BY
C_CITY,
S_CITY,
@ -311,7 +611,36 @@ ORDER BY
Q4.1
``` sql
```sql
SELECT
D_YEAR,
C_NATION,
sum(LO_REVENUE - LO_SUPPLYCOST) AS PROFIT
FROM
date,
customer,
supplier,
part,
lineorder
WHERE
LO_CUSTKEY = C_CUSTKEY
AND LO_SUPPKEY = S_SUPPKEY
AND LO_PARTKEY = P_PARTKEY
AND LO_ORDERDATE = D_DATEKEY
AND C_REGION = 'AMERICA'
AND S_REGION = 'AMERICA'
AND (P_MFGR = 'MFGR#1' OR P_MFGR = 'MFGR#2')
GROUP BY
D_YEAR,
C_NATION
ORDER BY
D_YEAR,
C_NATION
```
Denormalized table:
```sql
SELECT
toYear(LO_ORDERDATE) AS year,
C_NATION,
@ -328,14 +657,51 @@ ORDER BY
Q4.2
``` sql
```sql
SELECT
D_YEAR,
S_NATION,
P_CATEGORY,
sum(LO_REVENUE - LO_SUPPLYCOST) AS profit
FROM
date,
customer,
supplier,
part,
lineorder
WHERE
LO_CUSTKEY = C_CUSTKEY
AND LO_SUPPKEY = S_SUPPKEY
AND LO_PARTKEY = P_PARTKEY
AND LO_ORDERDATE = D_DATEKEY
AND C_REGION = 'AMERICA'
AND S_REGION = 'AMERICA'
AND (D_YEAR = 1997 OR D_YEAR = 1998)
AND (P_MFGR = 'MFGR#1' OR P_MFGR = 'MFGR#2')
GROUP BY
D_YEAR,
S_NATION,
P_CATEGORY
ORDER BY
D_YEAR,
S_NATION,
P_CATEGORY
```
Denormalized table:
```sql
SELECT
toYear(LO_ORDERDATE) AS year,
S_NATION,
P_CATEGORY,
sum(LO_REVENUE - LO_SUPPLYCOST) AS profit
FROM lineorder_flat
WHERE C_REGION = 'AMERICA' AND S_REGION = 'AMERICA' AND (year = 1997 OR year = 1998) AND (P_MFGR = 'MFGR#1' OR P_MFGR = 'MFGR#2')
WHERE
C_REGION = 'AMERICA'
AND S_REGION = 'AMERICA'
AND (year = 1997 OR year = 1998)
AND (P_MFGR = 'MFGR#1' OR P_MFGR = 'MFGR#2')
GROUP BY
year,
S_NATION,
@ -348,14 +714,51 @@ ORDER BY
Q4.3
``` sql
```sql
SELECT
D_YEAR,
S_CITY,
P_BRAND,
sum(LO_REVENUE - LO_SUPPLYCOST) AS profit
FROM
date,
customer,
supplier,
part,
lineorder
WHERE
LO_CUSTKEY = C_CUSTKEY
AND LO_SUPPKEY = S_SUPPKEY
AND LO_PARTKEY = P_PARTKEY
AND LO_ORDERDATE = D_DATEKEY
AND C_REGION = 'AMERICA'
AND S_NATION = 'UNITED STATES'
AND (D_YEAR = 1997 OR D_YEAR = 1998)
AND P_CATEGORY = 'MFGR#14'
GROUP BY
D_YEAR,
S_CITY,
P_BRAND
ORDER BY
D_YEAR,
S_CITY,
P_BRAND
```
Denormalized table:
```sql
SELECT
toYear(LO_ORDERDATE) AS year,
S_CITY,
P_BRAND,
sum(LO_REVENUE - LO_SUPPLYCOST) AS profit
FROM lineorder_flat
WHERE S_NATION = 'UNITED STATES' AND (year = 1997 OR year = 1998) AND P_CATEGORY = 'MFGR#14'
FROM
lineorder_flat
WHERE
S_NATION = 'UNITED STATES'
AND (year = 1997 OR year = 1998)
AND P_CATEGORY = 'MFGR#14'
GROUP BY
year,
S_CITY,
@ -365,3 +768,4 @@ ORDER BY
S_CITY ASC,
P_BRAND ASC;
```

View File

@ -0,0 +1,596 @@
---
slug: /en/getting-started/example-datasets/tpcds
sidebar_label: TPC-DS
description: "The TPC-DS benchmark data set and queries."
---
# TPC-DS (2012)
Similar to the [Star Schema Benchmark (SSB)](star-schema.md), TPC-DS is based on [TPC-H](tpch.md), but it took the opposite route, i.e. it expanded the number of joins needed by storing the data in a complex snowflake schema (24 instead of 8 tables).
The data distribution is skewed (e.g. normal and Poisson distributions).
It includes 99 reporting and ad-hoc queries with random substitutions.
References
- [The Making of TPC-DS](https://dl.acm.org/doi/10.5555/1182635.1164217) (Nambiar), 2006
First, check out the TPC-DS repository and compile the data generator:
``` bash
git clone https://github.com/gregrahn/tpcds-kit.git
cd tpcds-kit/tools
make
```
Then, generate the data. Parameter `-scale` specifies the scale factor.
``` bash
./dsdgen -scale 1
```
Then, generate the queries (use the same scale factor):
```bash
./dsqgen -DIRECTORY ../query_templates/ -INPUT ../query_templates/templates.lst -SCALE 1 # generates 99 queries in out/query_0.sql
```
Now create tables in ClickHouse.
You can either use the original table definitions in `tools/tpcds.sql` or "tuned" table definitions with properly defined primary key indexes and `LowCardinality` column types where they make sense.
```sql
CREATE TABLE call_center(
cc_call_center_sk Int64,
cc_call_center_id LowCardinality(String),
cc_rec_start_date Nullable(Date),
cc_rec_end_date Nullable(Date),
cc_closed_date_sk Nullable(UInt32),
cc_open_date_sk Nullable(UInt32),
cc_name LowCardinality(String),
cc_class LowCardinality(String),
cc_employees Int32,
cc_sq_ft Int32,
cc_hours LowCardinality(String),
cc_manager LowCardinality(String),
cc_mkt_id Int32,
cc_mkt_class LowCardinality(String),
cc_mkt_desc LowCardinality(String),
cc_market_manager LowCardinality(String),
cc_division Int32,
cc_division_name LowCardinality(String),
cc_company Int32,
cc_company_name LowCardinality(String),
cc_street_number LowCardinality(String),
cc_street_name LowCardinality(String),
cc_street_type LowCardinality(String),
cc_suite_number LowCardinality(String),
cc_city LowCardinality(String),
cc_county LowCardinality(String),
cc_state LowCardinality(String),
cc_zip LowCardinality(String),
cc_country LowCardinality(String),
cc_gmt_offset Decimal(7,2),
cc_tax_percentage Decimal(7,2),
PRIMARY KEY (cc_call_center_sk)
);
CREATE TABLE catalog_page(
cp_catalog_page_sk Int64,
cp_catalog_page_id LowCardinality(String),
cp_start_date_sk Nullable(UInt32),
cp_end_date_sk Nullable(UInt32),
cp_department LowCardinality(Nullable(String)),
cp_catalog_number Nullable(Int32),
cp_catalog_page_number Nullable(Int32),
cp_description LowCardinality(Nullable(String)),
cp_type LowCardinality(Nullable(String)),
PRIMARY KEY (cp_catalog_page_sk)
);
CREATE TABLE catalog_returns(
cr_returned_date_sk Int32,
cr_returned_time_sk Int64,
cr_item_sk Int64,
cr_refunded_customer_sk Nullable(Int64),
cr_refunded_cdemo_sk Nullable(Int64),
cr_refunded_hdemo_sk Nullable(Int64),
cr_refunded_addr_sk Nullable(Int64),
cr_returning_customer_sk Nullable(Int64),
cr_returning_cdemo_sk Nullable(Int64),
cr_returning_hdemo_sk Nullable(Int64),
cr_returning_addr_sk Nullable(Int64),
cr_call_center_sk Nullable(Int64),
cr_catalog_page_sk Nullable(Int64),
cr_ship_mode_sk Nullable(Int64),
cr_warehouse_sk Nullable(Int64),
cr_reason_sk Nullable(Int64),
cr_order_number Int64,
cr_return_quantity Nullable(Int32),
cr_return_amount Nullable(Decimal(7,2)),
cr_return_tax Nullable(Decimal(7,2)),
cr_return_amt_inc_tax Nullable(Decimal(7,2)),
cr_fee Nullable(Decimal(7,2)),
cr_return_ship_cost Nullable(Decimal(7,2)),
cr_refunded_cash Nullable(Decimal(7,2)),
cr_reversed_charge Nullable(Decimal(7,2)),
cr_store_credit Nullable(Decimal(7,2)),
cr_net_loss Nullable(Decimal(7,2)),
PRIMARY KEY (cr_item_sk, cr_order_number)
);
CREATE TABLE catalog_sales (
cs_sold_date_sk Nullable(UInt32),
cs_sold_time_sk Nullable(Int64),
cs_ship_date_sk Nullable(UInt32),
cs_bill_customer_sk Nullable(Int64),
cs_bill_cdemo_sk Nullable(Int64),
cs_bill_hdemo_sk Nullable(Int64),
cs_bill_addr_sk Nullable(Int64),
cs_ship_customer_sk Nullable(Int64),
cs_ship_cdemo_sk Nullable(Int64),
cs_ship_hdemo_sk Nullable(Int64),
cs_ship_addr_sk Nullable(Int64),
cs_call_center_sk Nullable(Int64),
cs_catalog_page_sk Nullable(Int64),
cs_ship_mode_sk Nullable(Int64),
cs_warehouse_sk Nullable(Int64),
cs_item_sk Int64,
cs_promo_sk Nullable(Int64),
cs_order_number Int64,
cs_quantity Nullable(Int32),
cs_wholesale_cost Nullable(Decimal(7,2)),
cs_list_price Nullable(Decimal(7,2)),
cs_sales_price Nullable(Decimal(7,2)),
cs_ext_discount_amt Nullable(Decimal(7,2)),
cs_ext_sales_price Nullable(Decimal(7,2)),
cs_ext_wholesale_cost Nullable(Decimal(7,2)),
cs_ext_list_price Nullable(Decimal(7,2)),
cs_ext_tax Nullable(Decimal(7,2)),
cs_coupon_amt Nullable(Decimal(7,2)),
cs_ext_ship_cost Nullable(Decimal(7,2)),
cs_net_paid Nullable(Decimal(7,2)),
cs_net_paid_inc_tax Nullable(Decimal(7,2)),
cs_net_paid_inc_ship Nullable(Decimal(7,2)),
cs_net_paid_inc_ship_tax Nullable(Decimal(7,2)),
cs_net_profit Decimal(7,2),
PRIMARY KEY (cs_item_sk, cs_order_number)
);
CREATE TABLE customer_address (
ca_address_sk Int64,
ca_address_id LowCardinality(String),
ca_street_number LowCardinality(Nullable(String)),
ca_street_name LowCardinality(Nullable(String)),
ca_street_type LowCardinality(Nullable(String)),
ca_suite_number LowCardinality(Nullable(String)),
ca_city LowCardinality(Nullable(String)),
ca_county LowCardinality(Nullable(String)),
ca_state LowCardinality(Nullable(String)),
ca_zip LowCardinality(Nullable(String)),
ca_country LowCardinality(Nullable(String)),
ca_gmt_offset Nullable(Decimal(7,2)),
ca_location_type LowCardinality(Nullable(String)),
PRIMARY KEY (ca_address_sk)
);
CREATE TABLE customer_demographics (
cd_demo_sk Int64,
cd_gender LowCardinality(String),
cd_marital_status LowCardinality(String),
cd_education_status LowCardinality(String),
cd_purchase_estimate Int32,
cd_credit_rating LowCardinality(String),
cd_dep_count Int32,
cd_dep_employed_count Int32,
cd_dep_college_count Int32,
PRIMARY KEY (cd_demo_sk)
);
CREATE TABLE customer (
c_customer_sk Int64,
c_customer_id LowCardinality(String),
c_current_cdemo_sk Nullable(Int64),
c_current_hdemo_sk Nullable(Int64),
c_current_addr_sk Nullable(Int64),
c_first_shipto_date_sk Nullable(UInt32),
c_first_sales_date_sk Nullable(UInt32),
c_salutation LowCardinality(Nullable(String)),
c_first_name LowCardinality(Nullable(String)),
c_last_name LowCardinality(Nullable(String)),
c_preferred_cust_flag LowCardinality(Nullable(String)),
c_birth_day Nullable(Int32),
c_birth_month Nullable(Int32),
c_birth_year Nullable(Int32),
c_birth_country LowCardinality(Nullable(String)),
c_login LowCardinality(Nullable(String)),
c_email_address LowCardinality(Nullable(String)),
c_last_review_date LowCardinality(Nullable(String)),
PRIMARY KEY (c_customer_sk)
);
CREATE TABLE date_dim (
d_date_sk UInt32,
d_date_id LowCardinality(String),
d_date Date,
d_month_seq UInt16,
d_week_seq UInt16,
d_quarter_seq UInt16,
d_year UInt16,
d_dow UInt16,
d_moy UInt16,
d_dom UInt16,
d_qoy UInt16,
d_fy_year UInt16,
d_fy_quarter_seq UInt16,
d_fy_week_seq UInt16,
d_day_name LowCardinality(String),
d_quarter_name LowCardinality(String),
d_holiday LowCardinality(String),
d_weekend LowCardinality(String),
d_following_holiday LowCardinality(String),
d_first_dom Int32,
d_last_dom Int32,
d_same_day_ly Int32,
d_same_day_lq Int32,
d_current_day LowCardinality(String),
d_current_week LowCardinality(String),
d_current_month LowCardinality(String),
d_current_quarter LowCardinality(String),
d_current_year LowCardinality(String),
PRIMARY KEY (d_date_sk)
);
CREATE TABLE household_demographics (
hd_demo_sk Int64,
hd_income_band_sk Int64,
hd_buy_potential LowCardinality(String),
hd_dep_count Int32,
hd_vehicle_count Int32,
PRIMARY KEY (hd_demo_sk)
);
CREATE TABLE income_band(
ib_income_band_sk Int64,
ib_lower_bound Int32,
ib_upper_bound Int32,
PRIMARY KEY (ib_income_band_sk)
);
CREATE TABLE inventory (
inv_date_sk UInt32,
inv_item_sk Int64,
inv_warehouse_sk Int64,
inv_quantity_on_hand Nullable(Int32),
PRIMARY KEY (inv_date_sk, inv_item_sk, inv_warehouse_sk)
);
CREATE TABLE item (
i_item_sk Int64,
i_item_id LowCardinality(String),
i_rec_start_date LowCardinality(Nullable(String)),
i_rec_end_date LowCardinality(Nullable(String)),
i_item_desc LowCardinality(Nullable(String)),
i_current_price Nullable(Decimal(7,2)),
i_wholesale_cost Nullable(Decimal(7,2)),
i_brand_id Nullable(Int32),
i_brand LowCardinality(Nullable(String)),
i_class_id Nullable(Int32),
i_class LowCardinality(Nullable(String)),
i_category_id Nullable(Int32),
i_category LowCardinality(Nullable(String)),
i_manufact_id Nullable(Int32),
i_manufact LowCardinality(Nullable(String)),
i_size LowCardinality(Nullable(String)),
i_formulation LowCardinality(Nullable(String)),
i_color LowCardinality(Nullable(String)),
i_units LowCardinality(Nullable(String)),
i_container LowCardinality(Nullable(String)),
i_manager_id Nullable(Int32),
i_product_name LowCardinality(Nullable(String)),
PRIMARY KEY (i_item_sk)
);
CREATE TABLE promotion (
p_promo_sk Int64,
p_promo_id LowCardinality(String),
p_start_date_sk Nullable(UInt32),
p_end_date_sk Nullable(UInt32),
p_item_sk Nullable(Int64),
p_cost Nullable(Decimal(15,2)),
p_response_target Nullable(Int32),
p_promo_name LowCardinality(Nullable(String)),
p_channel_dmail LowCardinality(Nullable(String)),
p_channel_email LowCardinality(Nullable(String)),
p_channel_catalog LowCardinality(Nullable(String)),
p_channel_tv LowCardinality(Nullable(String)),
p_channel_radio LowCardinality(Nullable(String)),
p_channel_press LowCardinality(Nullable(String)),
p_channel_event LowCardinality(Nullable(String)),
p_channel_demo LowCardinality(Nullable(String)),
p_channel_details LowCardinality(Nullable(String)),
p_purpose LowCardinality(Nullable(String)),
p_discount_active LowCardinality(Nullable(String)),
PRIMARY KEY (p_promo_sk)
);
CREATE TABLE reason(
r_reason_sk Int64,
r_reason_id LowCardinality(String),
r_reason_desc LowCardinality(String),
PRIMARY KEY (r_reason_sk)
);
CREATE TABLE ship_mode(
sm_ship_mode_sk Int64,
sm_ship_mode_id LowCardinality(String),
sm_type LowCardinality(String),
sm_code LowCardinality(String),
sm_carrier LowCardinality(String),
sm_contract LowCardinality(String),
PRIMARY KEY (sm_ship_mode_sk)
);
CREATE TABLE store_returns (
sr_returned_date_sk Nullable(UInt32),
sr_return_time_sk Nullable(Int64),
sr_item_sk Int64,
sr_customer_sk Nullable(Int64),
sr_cdemo_sk Nullable(Int64),
sr_hdemo_sk Nullable(Int64),
sr_addr_sk Nullable(Int64),
sr_store_sk Nullable(Int64),
sr_reason_sk Nullable(Int64),
sr_ticket_number Int64,
sr_return_quantity Nullable(Int32),
sr_return_amt Nullable(Decimal(7,2)),
sr_return_tax Nullable(Decimal(7,2)),
sr_return_amt_inc_tax Nullable(Decimal(7,2)),
sr_fee Nullable(Decimal(7,2)),
sr_return_ship_cost Nullable(Decimal(7,2)),
sr_refunded_cash Nullable(Decimal(7,2)),
sr_reversed_charge Nullable(Decimal(7,2)),
sr_store_credit Nullable(Decimal(7,2)),
sr_net_loss Nullable(Decimal(7,2)),
PRIMARY KEY (sr_item_sk, sr_ticket_number)
);
CREATE TABLE store_sales (
ss_sold_date_sk Nullable(UInt32),
ss_sold_time_sk Nullable(Int64),
ss_item_sk Int64,
ss_customer_sk Nullable(Int64),
ss_cdemo_sk Nullable(Int64),
ss_hdemo_sk Nullable(Int64),
ss_addr_sk Nullable(Int64),
ss_store_sk Nullable(Int64),
ss_promo_sk Nullable(Int64),
ss_ticket_number Int64,
ss_quantity Nullable(Int32),
ss_wholesale_cost Nullable(Decimal(7,2)),
ss_list_price Nullable(Decimal(7,2)),
ss_sales_price Nullable(Decimal(7,2)),
ss_ext_discount_amt Nullable(Decimal(7,2)),
ss_ext_sales_price Nullable(Decimal(7,2)),
ss_ext_wholesale_cost Nullable(Decimal(7,2)),
ss_ext_list_price Nullable(Decimal(7,2)),
ss_ext_tax Nullable(Decimal(7,2)),
ss_coupon_amt Nullable(Decimal(7,2)),
ss_net_paid Nullable(Decimal(7,2)),
ss_net_paid_inc_tax Nullable(Decimal(7,2)),
ss_net_profit Nullable(Decimal(7,2)),
PRIMARY KEY (ss_item_sk, ss_ticket_number)
);
CREATE TABLE store (
s_store_sk Int64,
s_store_id LowCardinality(String),
s_rec_start_date LowCardinality(Nullable(String)),
s_rec_end_date LowCardinality(Nullable(String)),
s_closed_date_sk Nullable(UInt32),
s_store_name LowCardinality(Nullable(String)),
s_number_employees Nullable(Int32),
s_floor_space Nullable(Int32),
s_hours LowCardinality(Nullable(String)),
s_manager LowCardinality(Nullable(String)),
s_market_id Nullable(Int32),
s_geography_class LowCardinality(Nullable(String)),
s_market_desc LowCardinality(Nullable(String)),
s_market_manager LowCardinality(Nullable(String)),
s_division_id Nullable(Int32),
s_division_name LowCardinality(Nullable(String)),
s_company_id Nullable(Int32),
s_company_name LowCardinality(Nullable(String)),
s_street_number LowCardinality(Nullable(String)),
s_street_name LowCardinality(Nullable(String)),
s_street_type LowCardinality(Nullable(String)),
s_suite_number LowCardinality(Nullable(String)),
s_city LowCardinality(Nullable(String)),
s_county LowCardinality(Nullable(String)),
s_state LowCardinality(Nullable(String)),
s_zip LowCardinality(Nullable(String)),
s_country LowCardinality(Nullable(String)),
s_gmt_offset Nullable(Decimal(7,2)),
s_tax_precentage Nullable(Decimal(7,2)),
PRIMARY KEY (s_store_sk)
);
CREATE TABLE time_dim (
t_time_sk UInt32,
t_time_id LowCardinality(String),
t_time UInt32,
t_hour UInt8,
t_minute UInt8,
t_second UInt8,
t_am_pm LowCardinality(String),
t_shift LowCardinality(String),
t_sub_shift LowCardinality(String),
t_meal_time LowCardinality(Nullable(String)),
PRIMARY KEY (t_time_sk)
);
CREATE TABLE warehouse(
w_warehouse_sk Int64,
w_warehouse_id LowCardinality(String),
w_warehouse_name LowCardinality(Nullable(String)),
w_warehouse_sq_ft Nullable(Int32),
w_street_number LowCardinality(Nullable(String)),
w_street_name LowCardinality(Nullable(String)),
w_street_type LowCardinality(Nullable(String)),
w_suite_number LowCardinality(Nullable(String)),
w_city LowCardinality(Nullable(String)),
w_county LowCardinality(Nullable(String)),
w_state LowCardinality(Nullable(String)),
w_zip LowCardinality(Nullable(String)),
w_country LowCardinality(Nullable(String)),
w_gmt_offset Decimal(7,2),
PRIMARY KEY (w_warehouse_sk)
);
CREATE TABLE web_page(
wp_web_page_sk Int64,
wp_web_page_id LowCardinality(String),
wp_rec_start_date LowCardinality(Nullable(String)),
wp_rec_end_date LowCardinality(Nullable(String)),
wp_creation_date_sk Nullable(UInt32),
wp_access_date_sk Nullable(UInt32),
wp_autogen_flag LowCardinality(Nullable(String)),
wp_customer_sk Nullable(Int64),
wp_url LowCardinality(Nullable(String)),
wp_type LowCardinality(Nullable(String)),
wp_char_count Nullable(Int32),
wp_link_count Nullable(Int32),
wp_image_count Nullable(Int32),
wp_max_ad_count Nullable(Int32),
PRIMARY KEY (wp_web_page_sk)
);
CREATE TABLE web_returns (
wr_returned_date_sk Nullable(UInt32),
wr_returned_time_sk Nullable(Int64),
wr_item_sk Int64,
wr_refunded_customer_sk Nullable(Int64),
wr_refunded_cdemo_sk Nullable(Int64),
wr_refunded_hdemo_sk Nullable(Int64),
wr_refunded_addr_sk Nullable(Int64),
wr_returning_customer_sk Nullable(Int64),
wr_returning_cdemo_sk Nullable(Int64),
wr_returning_hdemo_sk Nullable(Int64),
wr_returning_addr_sk Nullable(Int64),
wr_web_page_sk Nullable(Int64),
wr_reason_sk Nullable(Int64),
wr_order_number Int64,
wr_return_quantity Nullable(Int32),
wr_return_amt Nullable(Decimal(7,2)),
wr_return_tax Nullable(Decimal(7,2)),
wr_return_amt_inc_tax Nullable(Decimal(7,2)),
wr_fee Nullable(Decimal(7,2)),
wr_return_ship_cost Nullable(Decimal(7,2)),
wr_refunded_cash Nullable(Decimal(7,2)),
wr_reversed_charge Nullable(Decimal(7,2)),
wr_account_credit Nullable(Decimal(7,2)),
wr_net_loss Nullable(Decimal(7,2)),
PRIMARY KEY (wr_item_sk, wr_order_number)
);
CREATE TABLE web_sales (
ws_sold_date_sk Nullable(UInt32),
ws_sold_time_sk Nullable(Int64),
ws_ship_date_sk Nullable(UInt32),
ws_item_sk Int64,
ws_bill_customer_sk Nullable(Int64),
ws_bill_cdemo_sk Nullable(Int64),
ws_bill_hdemo_sk Nullable(Int64),
ws_bill_addr_sk Nullable(Int64),
ws_ship_customer_sk Nullable(Int64),
ws_ship_cdemo_sk Nullable(Int64),
ws_ship_hdemo_sk Nullable(Int64),
ws_ship_addr_sk Nullable(Int64),
ws_web_page_sk Nullable(Int64),
ws_web_site_sk Nullable(Int64),
ws_ship_mode_sk Nullable(Int64),
ws_warehouse_sk Nullable(Int64),
ws_promo_sk Nullable(Int64),
ws_order_number Int64,
ws_quantity Nullable(Int32),
ws_wholesale_cost Nullable(Decimal(7,2)),
ws_list_price Nullable(Decimal(7,2)),
ws_sales_price Nullable(Decimal(7,2)),
ws_ext_discount_amt Nullable(Decimal(7,2)),
ws_ext_sales_price Nullable(Decimal(7,2)),
ws_ext_wholesale_cost Nullable(Decimal(7,2)),
ws_ext_list_price Nullable(Decimal(7,2)),
ws_ext_tax Nullable(Decimal(7,2)),
ws_coupon_amt Nullable(Decimal(7,2)),
ws_ext_ship_cost Nullable(Decimal(7,2)),
ws_net_paid Nullable(Decimal(7,2)),
ws_net_paid_inc_tax Nullable(Decimal(7,2)),
ws_net_paid_inc_ship Decimal(7,2),
ws_net_paid_inc_ship_tax Decimal(7,2),
ws_net_profit Decimal(7,2),
PRIMARY KEY (ws_item_sk, ws_order_number)
);
CREATE TABLE web_site (
web_site_sk Int64,
web_site_id LowCardinality(String),
web_rec_start_date LowCardinality(String),
web_rec_end_date LowCardinality(Nullable(String)),
web_name LowCardinality(String),
web_open_date_sk UInt32,
web_close_date_sk Nullable(UInt32),
web_class LowCardinality(String),
web_manager LowCardinality(String),
web_mkt_id Int32,
web_mkt_class LowCardinality(String),
web_mkt_desc LowCardinality(String),
web_market_manager LowCardinality(String),
web_company_id Int32,
web_company_name LowCardinality(String),
web_street_number LowCardinality(String),
web_street_name LowCardinality(String),
web_street_type LowCardinality(String),
web_suite_number LowCardinality(String),
web_city LowCardinality(String),
web_county LowCardinality(String),
web_state LowCardinality(String),
web_zip LowCardinality(String),
web_country LowCardinality(String),
web_gmt_offset Decimal(7,2),
web_tax_percentage Decimal(7,2),
PRIMARY KEY (web_site_sk)
);
```
The data can be imported as follows:
``` bash
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO call_center FORMAT CSV" < call_center.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO catalog_page FORMAT CSV" < catalog_page.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO catalog_returns FORMAT CSV" < catalog_returns.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO catalog_sales FORMAT CSV" < catalog_sales.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO customer FORMAT CSV" < customer.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO customer_address FORMAT CSV" < customer_address.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO customer_demographics FORMAT CSV" < customer_demographics.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO date_dim FORMAT CSV" < date_dim.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO household_demographics FORMAT CSV" < household_demographics.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO income_band FORMAT CSV" < income_band.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO inventory FORMAT CSV" < inventory.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO item FORMAT CSV" < item.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO promotion FORMAT CSV" < promotion.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO reason FORMAT CSV" < reason.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO ship_mode FORMAT CSV" < ship_mode.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO store FORMAT CSV" < store.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO store_returns FORMAT CSV" < store_returns.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO store_sales FORMAT CSV" < store_sales.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO time_dim FORMAT CSV" < time_dim.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO warehouse FORMAT CSV" < warehouse.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO web_page FORMAT CSV" < web_page.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO web_returns FORMAT CSV" < web_returns.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO web_sales FORMAT CSV" < web_sales.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO web_site FORMAT CSV" < web_site.tbl
```
Then run the generated queries, as sketched below.
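A minimal sketch for running the generated queries with `clickhouse-client` (the file name `query_0.sql` is only an example; it depends on how the query generator was invoked):
``` bash
# Hypothetical file name; point this at wherever your query generator wrote its output.
clickhouse-client --queries-file query_0.sql
```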
::::warning
TPC-DS makes heavy use of correlated subqueries, which at the time of writing (September 2024) are not supported by ClickHouse ([issue #6697](https://github.com/ClickHouse/ClickHouse/issues/6697)).
As a result, many of the benchmark queries above will fail with errors.
::::
View File
@ -0,0 +1,882 @@
---
slug: /en/getting-started/example-datasets/tpch
sidebar_label: TPC-H
description: "The TPC-H benchmark data set and queries."
---
# TPC-H (1999)
A popular benchmark which models the internal data warehouse of a wholesale supplier.
The data is stored in a third normal form representation, requiring many joins at query runtime.
Despite its age and its unrealistic assumption that the data is uniformly and independently distributed, TPC-H remains the most popular OLAP benchmark to date.
References
- [TPC-H](https://www.tpc.org/tpc_documents_current_versions/current_specifications5.asp)
- [New TPC Benchmarks for Decision Support and Web Commerce](https://doi.org/10.1145/369275.369291) (Poess et al., 2000)
- [TPC-H Analyzed: Hidden Messages and Lessons Learned from an Influential Benchmark](https://doi.org/10.1007/978-3-319-04936-6_5) (Boncz et al., 2013)
- [Quantifying TPC-H Choke Points and Their Optimizations](https://doi.org/10.14778/3389133.3389138) (Dreseler et al., 2020)
First, check out the TPC-H repository and compile the data generator:
``` bash
git clone https://github.com/gregrahn/tpch-kit.git
cd tpch-kit/dbgen
make
```
Then, generate the data. The parameter `-s` specifies the scale factor. For example, with `-s 100`, 600 million rows are generated for the table `lineitem`.
``` bash
./dbgen -s 100
```
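As a quick sanity check, you can count the lines of the generated files; each `.tbl` file contains one pipe-delimited row per line, so at `-s 100` the file `lineitem.tbl` should have roughly 600 million lines:
``` bash
# One row per line; expect roughly 600 million lines at scale factor 100.
wc -l lineitem.tbl
```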
Now create tables in ClickHouse:
```sql
CREATE TABLE nation (
n_nationkey Int32,
n_name String,
n_regionkey Int32,
n_comment String)
ORDER BY (n_regionkey, n_name);
CREATE TABLE region (
r_regionkey Int32,
r_name String,
r_comment String)
ORDER BY (r_name);
CREATE TABLE part (
p_partkey Int32,
p_name String,
p_mfgr String,
p_brand String,
p_type String,
p_size Int32,
p_container String,
p_retailprice Decimal(15,2),
p_comment String)
ORDER BY (p_mfgr, p_brand, p_type, p_name);
CREATE TABLE supplier (
s_suppkey Int32,
s_name String,
s_address String,
s_nationkey Int32,
s_phone String,
s_acctbal Decimal(15,2),
s_comment String)
ORDER BY (s_nationkey, s_address, s_name);
CREATE TABLE partsupp (
ps_partkey Int32,
ps_suppkey Int32,
ps_availqty Int32,
ps_supplycost Decimal(15,2),
ps_comment String)
ORDER BY (ps_suppkey, ps_availqty, ps_supplycost, ps_partkey);
CREATE TABLE customer (
c_custkey Int32,
c_name String,
c_address String,
c_nationkey Int32,
c_phone String,
c_acctbal Decimal(15,2),
c_mktsegment String,
c_comment String)
ORDER BY (c_nationkey, c_mktsegment, c_address, c_name, c_custkey);
CREATE TABLE orders (
o_orderkey Int32,
o_custkey Int32,
o_orderstatus String,
o_totalprice Decimal(15,2),
o_orderdate Date,
o_orderpriority String,
o_clerk String,
o_shippriority Int32,
o_comment String)
ORDER BY (o_orderdate, o_orderstatus, o_custkey);
CREATE TABLE lineitem (
l_orderkey Int32,
l_partkey Int32,
l_suppkey Int32,
l_linenumber Int32,
l_quantity Decimal(15,2),
l_extendedprice Decimal(15,2),
l_discount Decimal(15,2),
l_tax Decimal(15,2),
l_returnflag String,
l_linestatus String,
l_shipdate Date,
l_commitdate Date,
l_receiptdate Date,
l_shipinstruct String,
l_shipmode String,
l_comment String)
ORDER BY (l_suppkey, l_partkey, l_shipdate, l_commitdate, l_receiptdate);
```
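Note that the statements above do not specify a table engine. A minimal sketch for providing one via the session setting `default_table_engine`, assuming MergeTree is the engine wanted for benchmarking (whether this is needed depends on your server version and configuration):
```sql
-- Run in the same session as the CREATE TABLE statements above.
-- Assumption: MergeTree is the desired engine for benchmarking.
SET default_table_engine = 'MergeTree';
```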
The data can be imported as follows:
``` bash
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO nation FORMAT CSV" < nation.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO region FORMAT CSV" < region.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO part FORMAT CSV" < part.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO supplier FORMAT CSV" < supplier.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO partsupp FORMAT CSV" < partsupp.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO customer FORMAT CSV" < customer.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO orders FORMAT CSV" < orders.tbl
clickhouse-client --format_csv_delimiter '|' --query "INSERT INTO lineitem FORMAT CSV" < lineitem.tbl
```
The queries are generated by `./qgen -s <scaling_factor>`, as sketched below. Example queries for `s = 100` are listed further down.
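A sketch of the invocation, assuming the query templates live in the `queries/` directory of the tpch-kit checkout (`qgen` locates them via the `DSS_QUERY` environment variable; the output file name is arbitrary):
``` bash
cd tpch-kit/dbgen
# DSS_QUERY points qgen at the directory containing the query templates.
DSS_QUERY=queries ./qgen -s 100 > tpch-queries.sql
```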
::::warning
TPC-H makes heavy use of correlated subqueries, which at the time of writing (September 2024) are not supported by ClickHouse ([issue #6697](https://github.com/ClickHouse/ClickHouse/issues/6697)).
As a result, many of the benchmark queries below will fail with errors.
::::
Q1
```sql
SELECT
l_returnflag,
l_linestatus,
sum(l_quantity) AS sum_qty,
sum(l_extendedprice) AS sum_base_price,
sum(l_extendedprice * (1 - l_discount)) AS sum_disc_price,
sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) AS sum_charge,
avg(l_quantity) AS avg_qty,
avg(l_extendedprice) AS avg_price,
avg(l_discount) AS avg_disc,
count(*) AS count_order
FROM
lineitem
WHERE
l_shipdate <= date '1998-12-01' - interval '100' day
GROUP BY
l_returnflag,
l_linestatus
ORDER BY
l_returnflag,
l_linestatus;
```
Q2
```sql
SELECT
s_acctbal,
s_name,
n_name,
p_partkey,
p_mfgr,
s_address,
s_phone,
s_comment
FROM
part,
supplier,
partsupp,
nation,
region
WHERE
p_partkey = ps_partkey
AND s_suppkey = ps_suppkey
AND p_size = 21
AND p_type LIKE '%COPPER'
AND s_nationkey = n_nationkey
AND n_regionkey = r_regionkey
AND r_name = 'AMERICA'
AND ps_supplycost = (
SELECT
min(ps_supplycost)
FROM
partsupp,
supplier,
nation,
region
WHERE
p_partkey = ps_partkey
AND s_suppkey = ps_suppkey
AND s_nationkey = n_nationkey
AND n_regionkey = r_regionkey
AND r_name = 'AMERICA'
)
ORDER BY
s_acctbal desc,
n_name,
s_name,
p_partkey
LIMIT 100;
```
Q3
```sql
SELECT
l_orderkey,
sum(l_extendedprice * (1 - l_discount)) AS revenue,
o_orderdate,
o_shippriority
FROM
customer,
orders,
lineitem
WHERE
c_mktsegment = 'BUILDING'
AND c_custkey = o_custkey
AND l_orderkey = o_orderkey
AND o_orderdate < date '1995-03-10'
AND l_shipdate > date '1995-03-10'
GROUP BY
l_orderkey,
o_orderdate,
o_shippriority
ORDER BY
revenue desc,
o_orderdate
LIMIT 10;
```
Q4
```sql
SELECT
o_orderpriority,
count(*) AS order_count
FROM
orders
WHERE
o_orderdate >= date '1994-07-01'
AND o_orderdate < date '1994-07-01' + interval '3' month
AND EXISTS (
SELECT
*
FROM
lineitem
WHERE
l_orderkey = o_orderkey
AND l_commitdate < l_receiptdate
)
GROUP BY
o_orderpriority
ORDER BY
o_orderpriority;
```
Q5
```sql
SELECT
n_name,
sum(l_extendedprice * (1 - l_discount)) AS revenue
FROM
customer,
orders,
lineitem,
supplier,
nation,
region
WHERE
c_custkey = o_custkey
AND l_orderkey = o_orderkey
AND l_suppkey = s_suppkey
AND c_nationkey = s_nationkey
AND s_nationkey = n_nationkey
AND n_regionkey = r_regionkey
AND r_name = 'MIDDLE EAST'
AND o_orderdate >= date '1994-01-01'
AND o_orderdate < date '1994-01-01' + interval '1' year
GROUP BY
n_name
ORDER BY
revenue desc;
```
Q6
```sql
SELECT
sum(l_extendedprice * l_discount) AS revenue
FROM
lineitem
WHERE
l_shipdate >= date '1994-01-01'
AND l_shipdate < date '1994-01-01' + interval '1' year
AND l_discount between 0.09 - 0.01 AND 0.09 + 0.01
AND l_quantity < 24;
```
Q7
```sql
SELECT
supp_nation,
cust_nation,
l_year,
sum(volume) AS revenue
FROM
(
SELECT
n1.n_name AS supp_nation,
n2.n_name AS cust_nation,
extract(year FROM l_shipdate) AS l_year,
l_extendedprice * (1 - l_discount) AS volume
FROM
supplier,
lineitem,
orders,
customer,
nation n1,
nation n2
WHERE
s_suppkey = l_suppkey
AND o_orderkey = l_orderkey
AND c_custkey = o_custkey
AND s_nationkey = n1.n_nationkey
AND c_nationkey = n2.n_nationkey
AND (
(n1.n_name = 'UNITED KINGDOM' AND n2.n_name = 'ETHIOPIA')
OR (n1.n_name = 'ETHIOPIA' AND n2.n_name = 'UNITED KINGDOM')
)
AND l_shipdate between date '1995-01-01' AND date '1996-12-31'
) AS shipping
GROUP BY
supp_nation,
cust_nation,
l_year
ORDER BY
supp_nation,
cust_nation,
l_year;
```
Q8
```sql
SELECT
o_year,
sum(CASE
WHEN nation = 'ETHIOPIA' THEN volume
ELSE 0
END) / sum(volume) AS mkt_share
FROM
(
SELECT
extract(year FROM o_orderdate) AS o_year,
l_extendedprice * (1 - l_discount) AS volume,
n2.n_name AS nation
FROM
part,
supplier,
lineitem,
orders,
customer,
nation n1,
nation n2,
region
WHERE
p_partkey = l_partkey
AND s_suppkey = l_suppkey
AND l_orderkey = o_orderkey
AND o_custkey = c_custkey
AND c_nationkey = n1.n_nationkey
AND n1.n_regionkey = r_regionkey
AND r_name = 'AFRICA'
AND s_nationkey = n2.n_nationkey
AND o_orderdate between date '1995-01-01' AND date '1996-12-31'
AND p_type = 'SMALL POLISHED TIN'
) AS all_nations
GROUP BY
o_year
ORDER BY
o_year;
```
Q9
```sql
SELECT
nation,
o_year,
sum(amount) AS sum_profit
FROM
(
SELECT
n_name AS nation,
extract(year FROM o_orderdate) AS o_year,
l_extendedprice * (1 - l_discount) - ps_supplycost * l_quantity AS amount
FROM
part,
supplier,
lineitem,
partsupp,
orders,
nation
WHERE
s_suppkey = l_suppkey
AND ps_suppkey = l_suppkey
AND ps_partkey = l_partkey
AND p_partkey = l_partkey
AND o_orderkey = l_orderkey
AND s_nationkey = n_nationkey
AND p_name LIKE '%drab%'
) AS profit
GROUP BY
nation,
o_year
ORDER BY
nation,
o_year desc;
```
Q10
```sql
SELECT
c_custkey,
c_name,
sum(l_extendedprice * (1 - l_discount)) AS revenue,
c_acctbal,
n_name,
c_address,
c_phone,
c_comment
FROM
customer,
orders,
lineitem,
nation
WHERE
c_custkey = o_custkey
AND l_orderkey = o_orderkey
AND o_orderdate >= date '1993-06-01'
AND o_orderdate < date '1993-06-01' + interval '3' month
AND l_returnflag = 'R'
AND c_nationkey = n_nationkey
GROUP BY
c_custkey,
c_name,
c_acctbal,
c_phone,
n_name,
c_address,
c_comment
ORDER BY
revenue desc
LIMIT 20;
```
Q11
```sql
SELECT
ps_partkey,
sum(ps_supplycost * ps_availqty) AS value
FROM
partsupp,
supplier,
nation
WHERE
ps_suppkey = s_suppkey
AND s_nationkey = n_nationkey
AND n_name = 'MOZAMBIQUE'
GROUP BY
ps_partkey having
sum(ps_supplycost * ps_availqty) > (
SELECT
sum(ps_supplycost * ps_availqty) * 0.0000010000
FROM
partsupp,
supplier,
nation
WHERE
ps_suppkey = s_suppkey
AND s_nationkey = n_nationkey
AND n_name = 'MOZAMBIQUE'
)
ORDER BY
value desc;
```
Q12
```sql
SELECT
l_shipmode,
sum(CASE
WHEN o_orderpriority = '1-URGENT'
OR o_orderpriority = '2-HIGH'
THEN 1
ELSE 0
END) AS high_line_count,
sum(CASE
WHEN o_orderpriority <> '1-URGENT'
AND o_orderpriority <> '2-HIGH'
THEN 1
ELSE 0
END) AS low_line_count
FROM
orders,
lineitem
WHERE
o_orderkey = l_orderkey
AND l_shipmode in ('MAIL', 'AIR')
AND l_commitdate < l_receiptdate
AND l_shipdate < l_commitdate
AND l_receiptdate >= date '1996-01-01'
AND l_receiptdate < date '1996-01-01' + interval '1' year
GROUP BY
l_shipmode
ORDER BY
l_shipmode;
```
Q13
```sql
SELECT
c_count,
count(*) AS custdist
FROM
(
SELECT
c_custkey,
count(o_orderkey)
FROM
customer LEFT OUTER JOIN orders ON
c_custkey = o_custkey
AND o_comment NOT LIKE '%special%deposits%'
GROUP BY
c_custkey
) AS c_orders
GROUP BY
c_count
ORDER BY
custdist desc,
c_count desc;
```
Q14
```sql
SELECT
100.00 * sum(CASE
WHEN p_type LIKE 'PROMO%'
THEN l_extendedprice * (1 - l_discount)
ELSE 0
END) / sum(l_extendedprice * (1 - l_discount)) AS promo_revenue
FROM
lineitem,
part
WHERE
l_partkey = p_partkey
AND l_shipdate >= date '1996-10-01'
AND l_shipdate < date '1996-10-01' + interval '1' month;
```
Q15
```sql
CREATE VIEW revenue0 (supplier_no, total_revenue) AS
SELECT
l_suppkey,
sum(l_extendedprice * (1 - l_discount))
FROM
lineitem
WHERE
l_shipdate >= date '1997-06-01'
AND l_shipdate < date '1997-06-01' + interval '3' month
GROUP BY
l_suppkey;
SELECT
s_suppkey,
s_name,
s_address,
s_phone,
total_revenue
FROM
supplier,
revenue0
WHERE
s_suppkey = supplier_no
AND total_revenue = (
SELECT
max(total_revenue)
FROM
revenue0
)
ORDER BY
s_suppkey;
DROP VIEW revenue0;
```
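Q15 consists of three statements (CREATE VIEW, SELECT, DROP VIEW), so the client must be allowed to run several semicolon-separated statements in one go. A sketch, assuming the statements above were saved to a file named `q15.sql` (depending on the client version, the `--multiquery` flag may or may not be required):
``` bash
# Executes all semicolon-separated statements from the file in one session.
clickhouse-client --multiquery < q15.sql
```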
Q16
```sql
SELECT
p_brand,
p_type,
p_size,
count(distinct ps_suppkey) AS supplier_cnt
FROM
partsupp,
part
WHERE
p_partkey = ps_partkey
AND p_brand <> 'Brand#15'
AND p_type NOT LIKE 'SMALL POLISHED%'
AND p_size in (21, 9, 46, 34, 50, 33, 17, 36)
AND ps_suppkey NOT in (
SELECT
s_suppkey
FROM
supplier
WHERE
s_comment LIKE '%Customer%Complaints%'
)
GROUP BY
p_brand,
p_type,
p_size
ORDER BY
supplier_cnt desc,
p_brand,
p_type,
p_size;
```
Q17
```sql
SELECT
sum(l_extendedprice) / 7.0 AS avg_yearly
FROM
lineitem,
part
WHERE
p_partkey = l_partkey
AND p_brand = 'Brand#52'
AND p_container = 'MED CASE'
AND l_quantity < (
SELECT
0.2 * avg(l_quantity)
FROM
lineitem
WHERE
l_partkey = p_partkey
);
```
Q18
```sql
SELECT
c_name,
c_custkey,
o_orderkey,
o_orderdate,
o_totalprice,
sum(l_quantity)
FROM
customer,
orders,
lineitem
WHERE
o_orderkey in (
SELECT
l_orderkey
FROM
lineitem
GROUP BY
l_orderkey having
sum(l_quantity) > 313
)
AND c_custkey = o_custkey
AND o_orderkey = l_orderkey
GROUP BY
c_name,
c_custkey,
o_orderkey,
o_orderdate,
o_totalprice
ORDER BY
o_totalprice desc,
o_orderdate
LIMIT 100;
```
Q19
```sql
SELECT
sum(l_extendedprice* (1 - l_discount)) AS revenue
FROM
lineitem,
part
WHERE
(
p_partkey = l_partkey
AND p_brand = 'Brand#31'
AND p_container in ('SM CASE', 'SM BOX', 'SM PACK', 'SM PKG')
AND l_quantity >= 3 AND l_quantity <= 3 + 10
AND p_size between 1 AND 5
AND l_shipmode in ('AIR', 'AIR REG')
AND l_shipinstruct = 'DELIVER IN PERSON'
)
OR
(
p_partkey = l_partkey
AND p_brand = 'Brand#54'
AND p_container in ('MED BAG', 'MED BOX', 'MED PKG', 'MED PACK')
AND l_quantity >= 17 AND l_quantity <= 17 + 10
AND p_size between 1 AND 10
AND l_shipmode in ('AIR', 'AIR REG')
AND l_shipinstruct = 'DELIVER IN PERSON'
)
OR
(
p_partkey = l_partkey
AND p_brand = 'Brand#54'
AND p_container in ('LG CASE', 'LG BOX', 'LG PACK', 'LG PKG')
AND l_quantity >= 26 AND l_quantity <= 26 + 10
AND p_size between 1 AND 15
AND l_shipmode in ('AIR', 'AIR REG')
AND l_shipinstruct = 'DELIVER IN PERSON'
);
```
Q20
```sql
SELECT
s_name,
s_address
FROM
supplier,
nation
WHERE
s_suppkey in (
SELECT
ps_suppkey
FROM
partsupp
WHERE
ps_partkey in (
SELECT
p_partkey
FROM
part
WHERE
p_name LIKE 'chiffon%'
)
AND ps_availqty > (
SELECT
0.5 * sum(l_quantity)
FROM
lineitem
WHERE
l_partkey = ps_partkey
AND l_suppkey = ps_suppkey
AND l_shipdate >= date '1997-01-01'
AND l_shipdate < date '1997-01-01' + interval '1' year
)
)
AND s_nationkey = n_nationkey
AND n_name = 'MOZAMBIQUE'
ORDER BY
s_name;
```
Q21
```sql
SELECT
s_name,
count(*) AS numwait
FROM
supplier,
lineitem l1,
orders,
nation
WHERE
s_suppkey = l1.l_suppkey
AND o_orderkey = l1.l_orderkey
AND o_orderstatus = 'F'
AND l1.l_receiptdate > l1.l_commitdate
AND EXISTS (
SELECT
*
FROM
lineitem l2
WHERE
l2.l_orderkey = l1.l_orderkey
AND l2.l_suppkey <> l1.l_suppkey
)
AND NOT EXISTS (
SELECT
*
FROM
lineitem l3
WHERE
l3.l_orderkey = l1.l_orderkey
AND l3.l_suppkey <> l1.l_suppkey
AND l3.l_receiptdate > l3.l_commitdate
)
AND s_nationkey = n_nationkey
AND n_name = 'RUSSIA'
GROUP BY
s_name
ORDER BY
numwait desc,
s_name
LIMIT 100;
```
Q22
```sql
SELECT
cntrycode,
count(*) AS numcust,
sum(c_acctbal) AS totacctbal
FROM
(
SELECT
substring(c_phone FROM 1 for 2) AS cntrycode,
c_acctbal
FROM
customer
WHERE
substring(c_phone FROM 1 for 2) in
('26', '34', '10', '18', '27', '12', '11')
AND c_acctbal > (
SELECT
avg(c_acctbal)
FROM
customer
WHERE
c_acctbal > 0.00
AND substring(c_phone FROM 1 for 2) in
('26', '34', '10', '18', '27', '12', '11')
)
AND NOT EXISTS (
SELECT
*
FROM
orders
WHERE
o_custkey = c_custkey
)
) AS custsale
GROUP BY
cntrycode
ORDER BY
cntrycode;
```
View File
@ -155,7 +155,7 @@ Format a query as usual, then place the values that you want to pass from the ap
``` bash
$ clickhouse-client --param_tuple_in_tuple="(10, ('dt', 10))" -q "SELECT * FROM table WHERE val = {tuple_in_tuple:Tuple(UInt8, Tuple(String, UInt8))}"
$ clickhouse-client --param_tbl="numbers" --param_db="system" --param_col="number" --query "SELECT {col:Identifier} FROM {db:Identifier}.{tbl:Identifier} LIMIT 10"
$ clickhouse-client --param_tbl="numbers" --param_db="system" --param_col="number" --param_alias="top_ten" --query "SELECT {col:Identifier} as {alias:Identifier} FROM {db:Identifier}.{tbl:Identifier} LIMIT 10"
```
## Configuring {#interfaces_cli_configuration}
@ -188,7 +188,7 @@ You can pass parameters to `clickhouse-client` (all parameters have a default va
- `--memory-usage` If specified, print memory usage to stderr in non-interactive mode. Possible values: 'none' - do not print memory usage, 'default' - print number of bytes, 'readable' - print memory usage in human-readable format.
- `--stacktrace` If specified, also print the stack trace if an exception occurs.
- `--config-file` The name of the configuration file.
- `--secure` If specified, will connect to server over secure connection (TLS). You might need to configure your CA certificates in the [configuration file](#configuration_files). The available configuration settings are the same as for [server-side TLS configuration](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-openssl).
- `--secure` If specified, will connect to server over secure connection (TLS). You might need to configure your CA certificates in the [configuration file](#configuration_files). The available configuration settings are the same as for [server-side TLS configuration](../operations/server-configuration-parameters/settings.md#openssl).
- `--history_file` — Path to a file containing command history.
- `--param_<name>` — Value for a [query with parameters](#cli-queries-with-parameters).
- `--hardware-utilization` — Print hardware utilization information in progress bar.
View File
@ -2059,7 +2059,7 @@ In this case autogenerated Protobuf schema will be saved in file `path/to/schema
### Drop Protobuf cache
To reload Protobuf schema loaded from [format_schema_path](../operations/server-configuration-parameters/settings.md/#server_configuration_parameters-format_schema_path) use [SYSTEM DROP ... FORMAT CACHE](../sql-reference/statements/system.md/#system-drop-schema-format) statement.
To reload Protobuf schema loaded from [format_schema_path](../operations/server-configuration-parameters/settings.md/#format_schema_path) use [SYSTEM DROP ... FORMAT CACHE](../sql-reference/statements/system.md/#system-drop-schema-format) statement.
```sql
SYSTEM DROP FORMAT SCHEMA CACHE FOR Protobuf
@ -2622,7 +2622,7 @@ Result:
## Npy {#data-format-npy}
This function is designed to load a NumPy array from a .npy file into ClickHouse. The NumPy file format is a binary format used for efficiently storing arrays of numerical data. During import, ClickHouse treats top level dimension as an array of rows with single column. Supported Npy data types and their corresponding type in ClickHouse:
| Npy data type (`INSERT`) | ClickHouse data type | Npy data type (`SELECT`) |
|--------------------------|-----------------------------------------------------------------|--------------------------|
@ -2774,7 +2774,7 @@ can contain an absolute path or a path relative to the current directory on the
If you use the client in the [batch mode](/docs/en/interfaces/cli.md/#cli_usage), the path to the schema must be relative due to security reasons.
If you input or output data via the [HTTP interface](/docs/en/interfaces/http.md) the file name specified in the format schema
should be located in the directory specified in [format_schema_path](/docs/en/operations/server-configuration-parameters/settings.md/#server_configuration_parameters-format_schema_path)
should be located in the directory specified in [format_schema_path](/docs/en/operations/server-configuration-parameters/settings.md/#format_schema_path)
in the server configuration.
## Skipping Errors {#skippingerrors}
View File
@ -11,7 +11,7 @@ The HTTP interface lets you use ClickHouse on any platform from any programming
By default, `clickhouse-server` listens for HTTP on port 8123 (this can be changed in the config).
HTTPS can be enabled as well with port 8443 by default.
If you make a `GET /` request without parameters, it returns 200 response code and the string which defined in [http_server_default_response](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-http_server_default_response) default value “Ok.” (with a line feed at the end)
If you make a `GET /` request without parameters, it returns 200 response code and the string which defined in [http_server_default_response](../operations/server-configuration-parameters/settings.md#http_server_default_response) default value “Ok.” (with a line feed at the end)
``` bash
$ curl 'http://localhost:8123/'
@ -58,7 +58,7 @@ Connection: Close
Content-Type: text/tab-separated-values; charset=UTF-8
X-ClickHouse-Server-Display-Name: clickhouse.ru-central1.internal
X-ClickHouse-Query-Id: 5abe861c-239c-467f-b955-8a201abb8b7f
X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334", "real_time_microseconds": "0"}
X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
1
```
@ -472,7 +472,7 @@ $ curl -v 'http://localhost:8123/predefined_query'
< X-ClickHouse-Format: Template
< X-ClickHouse-Timezone: Asia/Shanghai
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334", "real_time_microseconds":"0"}
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
<
# HELP "Query" "Number of executing queries"
# TYPE "Query" counter
@ -668,7 +668,7 @@ $ curl -vv -H 'XXX:xxx' 'http://localhost:8123/hi'
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334", "real_time_microseconds":"0"}
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
<
* Connection #0 to host localhost left intact
Say Hi!%
@ -708,7 +708,7 @@ $ curl -v -H 'XXX:xxx' 'http://localhost:8123/get_config_static_handler'
< Content-Type: text/plain; charset=UTF-8
< Transfer-Encoding: chunked
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334", "real_time_microseconds":"0"}
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
<
* Connection #0 to host localhost left intact
<html ng-app="SMI2"><head><base href="http://ui.tabix.io/"></head><body><div ui-view="" class="content-ui"></div><script src="http://loader.tabix.io/master.js"></script></body></html>%
@ -766,7 +766,7 @@ $ curl -vv -H 'XXX:xxx' 'http://localhost:8123/get_absolute_path_static_handler'
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334", "real_time_microseconds":"0"}
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
<
<html><body>Absolute Path File</body></html>
* Connection #0 to host localhost left intact
@ -785,13 +785,13 @@ $ curl -vv -H 'XXX:xxx' 'http://localhost:8123/get_relative_path_static_handler'
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334", "real_time_microseconds":"0"}
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
<
<html><body>Relative Path File</body></html>
* Connection #0 to host localhost left intact
```
## Valid JSON/XML response on exception during HTTP streaming {#valid-output-on-exception-http-streaming}
During query execution over HTTP, an exception can happen when part of the data has already been sent. Usually, the exception is sent to the client in plain text,
even if some specific data format was used to output data, and the output may become invalid in terms of the specified data format.
View File
@ -21,6 +21,11 @@ If there is a native driver available (e.g., [DBeaver](../integrations/dbeaver))
If your use case involves a particular tool that does not have a native ClickHouse driver, and you would like to use it via the MySQL interface and you found certain incompatibilities - please [create an issue](https://github.com/ClickHouse/ClickHouse/issues) in the ClickHouse repository.
::::note
To support the SQL dialect of the above BI tools better, ClickHouse's MySQL interface implicitly runs SELECT queries with the setting [prefer_column_name_to_alias = 1](../operations/settings/settings.md#prefer-column-name-to-alias).
This cannot be turned off, and in rare edge cases it can lead to different behavior between queries sent to ClickHouse's normal and MySQL query interfaces.
::::
## Enabling the MySQL Interface On ClickHouse Cloud
1. After creating your ClickHouse Cloud Service, on the credentials screen, select the MySQL tab
@ -96,7 +101,7 @@ In this case, ensure that the username follows the `mysql4<subdomain>_<username>
## Enabling the MySQL Interface On Self-managed ClickHouse
Add the [mysql_port](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-mysql_port) setting to your server's configuration file. For example, you could define the port in a new XML file in your `config.d/` [folder](../operations/configuration-files):
Add the [mysql_port](../operations/server-configuration-parameters/settings.md#mysql_port) setting to your server's configuration file. For example, you could define the port in a new XML file in your `config.d/` [folder](../operations/configuration-files):
``` xml
<clickhouse>
View File
@ -8,7 +8,7 @@ sidebar_label: PostgreSQL Interface
ClickHouse supports the PostgreSQL wire protocol, which allows you to use Postgres clients to connect to ClickHouse. In a sense, ClickHouse can pretend to be a PostgreSQL instance - allowing you to connect a PostgreSQL client application to ClickHouse that is not already directly supported by ClickHouse (for example, Amazon Redshift).
To enable the PostgreSQL wire protocol, add the [postgresql_port](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-postgresql_port) setting to your server's configuration file. For example, you could define the port in a new XML file in your `config.d` folder:
To enable the PostgreSQL wire protocol, add the [postgresql_port](../operations/server-configuration-parameters/settings.md#postgresql_port) setting to your server's configuration file. For example, you could define the port in a new XML file in your `config.d` folder:
```xml
<clickhouse>
View File
@ -151,7 +151,7 @@ Check:
- Endpoint settings.
Check [listen_host](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-listen_host) and [tcp_port](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-tcp_port) settings.
Check [listen_host](../operations/server-configuration-parameters/settings.md#listen_host) and [tcp_port](../operations/server-configuration-parameters/settings.md#tcp_port) settings.
ClickHouse server accepts localhost connections only by default.
@ -163,8 +163,8 @@ Check:
Check:
- The [tcp_port_secure](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-tcp_port_secure) setting.
- Settings for [SSL certificates](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-openssl).
- The [tcp_port_secure](../operations/server-configuration-parameters/settings.md#tcp_port_secure) setting.
- Settings for [SSL certificates](../operations/server-configuration-parameters/settings.md#openssl).
Use proper parameters while connecting. For example, use the `port_secure` parameter with `clickhouse-client`.
View File
@ -7,7 +7,7 @@ sidebar_label: Configuration Files
# Configuration Files
The ClickHouse server can be configured with configuration files in XML or YAML syntax. In most installation types, the ClickHouse server runs with `/etc/clickhouse-server/config.xml` as default configuration file, but it is also possible to specify the location of the configuration file manually at server startup using command line option `--config-file=` or `-C`. Additional configuration files may be placed into directory `config.d/` relative to the main configuration file, for example into directory `/etc/clickhouse-server/config.d/`. Files in this directory and the main configuration are merged in a preprocessing step before the configuration is applied in ClickHouse server. Configuration files are merged in alphabetical order. To simplify updates and improve modularization, it is best practice to keep the default `config.xml` file unmodified and place additional customization into `config.d/`.
(The ClickHouse keeper configuration lives in `/etc/clickhouse-keeper/keeper_config.xml` and thus the additional files need to be placed in `/etc/clickhouse-keeper/keeper_config.d/`)
It is possible to mix XML and YAML configuration files, for example you could have a main configuration file `config.xml` and additional configuration files `config.d/network.xml`, `config.d/timezone.yaml` and `config.d/keeper.yaml`. Mixing XML and YAML within a single configuration file is not supported. XML configuration files should use `<clickhouse>...</clickhouse>` as top-level tag. In YAML configuration files, `clickhouse:` is optional, the parser inserts it implicitly if absent.
@ -154,7 +154,7 @@ will take the default value
The config can define substitutions. There are two types of substitutions:
- If an element has the `incl` attribute, the corresponding substitution from the file will be used as the value. By default, the path to the file with substitutions is `/etc/metrika.xml`. This can be changed in the [include_from](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-include_from) element in the server config. The substitution values are specified in `/clickhouse/substitution_name` elements in this file. If a substitution specified in `incl` does not exist, it is recorded in the log. To prevent ClickHouse from logging missing substitutions, specify the `optional="true"` attribute (for example, settings for [macros](../operations/server-configuration-parameters/settings.md#macros)).
- If an element has the `incl` attribute, the corresponding substitution from the file will be used as the value. By default, the path to the file with substitutions is `/etc/metrika.xml`. This can be changed in the [include_from](../operations/server-configuration-parameters/settings.md#include_from) element in the server config. The substitution values are specified in `/clickhouse/substitution_name` elements in this file. If a substitution specified in `incl` does not exist, it is recorded in the log. To prevent ClickHouse from logging missing substitutions, specify the `optional="true"` attribute (for example, settings for [macros](../operations/server-configuration-parameters/settings.md#macros)).
- If you want to replace an entire element with a substitution, use `include` as the element name. Substitutions can also be performed from ZooKeeper by specifying attribute `from_zk = "/path/to/node"`. In this case, the element value is replaced with the contents of the ZooKeeper node at `/path/to/node`. This also works if you store an entire XML subtree as a ZooKeeper node; it will be fully inserted into the source element.
View File
@ -6,7 +6,7 @@ import SelfManaged from '@site/docs/en/_snippets/_self_managed_only_no_roadmap.m
<SelfManaged />
[SSL 'strict' option](../server-configuration-parameters/settings.md#server_configuration_parameters-openssl) enables mandatory certificate validation for the incoming connections. In this case, only connections with trusted certificates can be established. Connections with untrusted certificates will be rejected. Thus, certificate validation allows to uniquely authenticate an incoming connection. `Common Name` or `subjectAltName extension` field of the certificate is used to identify the connected user. `subjectAltName extension` supports the usage of one wildcard '*' in the server configuration. This allows to associate multiple certificates with the same user. Additionally, reissuing and revoking of the certificates does not affect the ClickHouse configuration.
[SSL 'strict' option](../server-configuration-parameters/settings.md#openssl) enables mandatory certificate validation for the incoming connections. In this case, only connections with trusted certificates can be established. Connections with untrusted certificates will be rejected. Thus, certificate validation allows to uniquely authenticate an incoming connection. `Common Name` or `subjectAltName extension` field of the certificate is used to identify the connected user. `subjectAltName extension` supports the usage of one wildcard '*' in the server configuration. This allows to associate multiple certificates with the same user. Additionally, reissuing and revoking of the certificates does not affect the ClickHouse configuration.
To enable SSL certificate authentication, a list of `Common Name`'s or `Subject Alt Name`'s for each ClickHouse user must be specified in the settings file `users.xml `:
@ -40,4 +40,4 @@ To enable SSL certificate authentication, a list of `Common Name`'s or `Subject
</clickhouse>
```
For the SSL [`chain of trust`](https://en.wikipedia.org/wiki/Chain_of_trust) to work correctly, it is also important to make sure that the [`caConfig`](../server-configuration-parameters/settings.md#server_configuration_parameters-openssl) parameter is configured properly.
For the SSL [`chain of trust`](https://en.wikipedia.org/wiki/Chain_of_trust) to work correctly, it is also important to make sure that the [`caConfig`](../server-configuration-parameters/settings.md#openssl) parameter is configured properly.
View File
@ -50,7 +50,7 @@ This data is collected in the `system.asynchronous_metric_log` table.
ClickHouse server has embedded instruments for self-state monitoring.
To track server events use server logs. See the [logger](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-logger) section of the configuration file.
To track server events use server logs. See the [logger](../operations/server-configuration-parameters/settings.md#logger) section of the configuration file.
ClickHouse collects:
@ -59,9 +59,9 @@ ClickHouse collects:
You can find metrics in the [system.metrics](../operations/system-tables/metrics.md#system_tables-metrics), [system.events](../operations/system-tables/events.md#system_tables-events), and [system.asynchronous_metrics](../operations/system-tables/asynchronous_metrics.md#system_tables-asynchronous_metrics) tables.
You can configure ClickHouse to export metrics to [Graphite](https://github.com/graphite-project). See the [Graphite section](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-graphite) in the ClickHouse server configuration file. Before configuring export of metrics, you should set up Graphite by following their official [guide](https://graphite.readthedocs.io/en/latest/install.html).
You can configure ClickHouse to export metrics to [Graphite](https://github.com/graphite-project). See the [Graphite section](../operations/server-configuration-parameters/settings.md#graphite) in the ClickHouse server configuration file. Before configuring export of metrics, you should set up Graphite by following their official [guide](https://graphite.readthedocs.io/en/latest/install.html).
You can configure ClickHouse to export metrics to [Prometheus](https://prometheus.io). See the [Prometheus section](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-prometheus) in the ClickHouse server configuration file. Before configuring export of metrics, you should set up Prometheus by following their official [guide](https://prometheus.io/docs/prometheus/latest/installation/).
You can configure ClickHouse to export metrics to [Prometheus](https://prometheus.io). See the [Prometheus section](../operations/server-configuration-parameters/settings.md#prometheus) in the ClickHouse server configuration file. Before configuring export of metrics, you should set up Prometheus by following their official [guide](https://prometheus.io/docs/prometheus/latest/installation/).
Additionally, you can monitor server availability through the HTTP API. Send the `HTTP GET` request to `/ping`. If the server is available, it responds with `200 OK`.
View File
@ -28,7 +28,7 @@ SETTINGS allow_introspection_functions = 1
In self-managed deployments, to use query profiler:
- Setup the [trace_log](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-trace_log) section of the server configuration.
- Setup the [trace_log](../../operations/server-configuration-parameters/settings.md#trace_log) section of the server configuration.
This section configures the [trace_log](../../operations/system-tables/trace_log.md#system_tables-trace_log) system table containing the results of the profiler functioning. It is configured by default. Remember that data in this table is valid only for a running server. After the server restart, ClickHouse does not clean up the table, and all the stored virtual memory addresses may become invalid.
View File
@ -3162,3 +3162,11 @@ Type: UInt64
Default value: 100
Zero means unlimited
## use_legacy_mongodb_integration
Use the legacy MongoDB integration implementation. Deprecated.
Type: Bool
Default value: `true`.
View File
@ -933,7 +933,7 @@ Default value: `1`.
Setting up query logging.
Queries sent to ClickHouse with this setup are logged according to the rules in the [query_log](../../operations/server-configuration-parameters/settings.md/#server_configuration_parameters-query-log) server configuration parameter.
Queries sent to ClickHouse with this setup are logged according to the rules in the [query_log](../../operations/server-configuration-parameters/settings.md/#query-log) server configuration parameter.
Example:
@ -978,7 +978,7 @@ log_queries_min_type='EXCEPTION_WHILE_PROCESSING'
Setting up query threads logging.
Query threads log into the [system.query_thread_log](../../operations/system-tables/query_thread_log.md) table. This setting has effect only when [log_queries](#log-queries) is true. Queries threads run by ClickHouse with this setup are logged according to the rules in the [query_thread_log](../../operations/server-configuration-parameters/settings.md/#server_configuration_parameters-query_thread_log) server configuration parameter.
Query threads log into the [system.query_thread_log](../../operations/system-tables/query_thread_log.md) table. This setting has effect only when [log_queries](#log-queries) is true. Queries threads run by ClickHouse with this setup are logged according to the rules in the [query_thread_log](../../operations/server-configuration-parameters/settings.md/#query_thread_log) server configuration parameter.
Possible values:
@ -997,7 +997,7 @@ log_query_threads=1
Setting up query views logging.
When a query run by ClickHouse with this setting enabled has associated views (materialized or live views), they are logged in the [query_views_log](../../operations/server-configuration-parameters/settings.md/#server_configuration_parameters-query_views_log) server configuration parameter.
When a query run by ClickHouse with this setting enabled has associated views (materialized or live views), they are logged in the [query_views_log](../../operations/server-configuration-parameters/settings.md/#query_views_log) server configuration parameter.
Example:
@ -1588,25 +1588,12 @@ This setting is useful for any replicated table.
An arbitrary integer expression that can be used to split work between replicas for a specific table.
The value can be any integer expression.
A query may be processed faster if it is executed on several servers in parallel but it depends on the used [parallel_replicas_custom_key](#parallel_replicas_custom_key)
and [parallel_replicas_custom_key_filter_type](#parallel_replicas_custom_key_filter_type).
Simple expressions using primary keys are preferred.
If the setting is used on a cluster that consists of a single shard with multiple replicas, those replicas will be converted into virtual shards.
Otherwise, it will behave same as for `SAMPLE` key, it will use multiple replicas of each shard.
## parallel_replicas_custom_key_filter_type {#parallel_replicas_custom_key_filter_type}
How to use `parallel_replicas_custom_key` expression for splitting work between replicas.
Possible values:
- `default` — Use the default implementation using modulo operation on the `parallel_replicas_custom_key`.
- `range` — Split the entire value space of the expression in the ranges. This type of filtering is useful if values of `parallel_replicas_custom_key` are uniformly spread across the entire integer space, e.g. hash values.
Default value: `default`.
## parallel_replicas_custom_key_range_lower {#parallel_replicas_custom_key_range_lower}
Allows the filter type `range` to split the work evenly between replicas based on the custom range `[parallel_replicas_custom_key_range_lower, INT_MAX]`.
@ -1621,9 +1608,9 @@ Allows the filter type `range` to split the work evenly between replicas based o
When used in conjunction with [parallel_replicas_custom_key_range_lower](#parallel_replicas_custom_key_range_lower), it lets the filter evenly split the work over replicas for the range `[parallel_replicas_custom_key_range_lower, parallel_replicas_custom_key_range_upper]`.
Note: This setting will not cause any additional data to be filtered during query processing, rather it changes the points at which the range filter breaks up the range `[0, INT_MAX]` for parallel processing.
Note: This setting will not cause any additional data to be filtered during query processing, rather it changes the points at which the range filter breaks up the range `[0, INT_MAX]` for parallel processing
## allow_experimental_parallel_reading_from_replicas
## enable_parallel_replicas
Enables or disables sending SELECT queries to all replicas of a table (up to `max_parallel_replicas`). Reading is parallelized and coordinated dynamically. It will work for any kind of MergeTree table.
@ -4773,7 +4760,7 @@ Use this setting only for backward compatibility if your use cases depend on old
Sets the implicit time zone of the current session or query.
The implicit time zone is the time zone applied to values of type DateTime/DateTime64 which have no explicitly specified time zone.
The setting takes precedence over the globally configured (server-level) implicit time zone.
A value of '' (empty string) means that the implicit time zone of the current session or query is equal to the [server time zone](../server-configuration-parameters/settings.md#server_configuration_parameters-timezone).
A value of '' (empty string) means that the implicit time zone of the current session or query is equal to the [server time zone](../server-configuration-parameters/settings.md#timezone).
You can use functions `timeZone()` and `serverTimeZone()` to get the session time zone and server time zone.
@ -4829,7 +4816,7 @@ This happens due to different parsing pipelines:
**See also**
- [timezone](../server-configuration-parameters/settings.md#server_configuration_parameters-timezone)
- [timezone](../server-configuration-parameters/settings.md#timezone)
## final {#final}
@ -5682,3 +5669,11 @@ Default value: `0`.
Enable `IF NOT EXISTS` for `CREATE` statement by default. If either this setting or `IF NOT EXISTS` is specified and a table with the provided name already exists, no exception will be thrown.
Default value: `false`.
## mongodb_throw_on_unsupported_query
If enabled, MongoDB tables will return an error when a MongoDB query can't be built.
Not applied for the legacy implementation, or when `allow_experimental_analyzer = 0`.
Default value: `true`.
View File
@ -5,9 +5,9 @@ slug: /en/operations/system-tables/asynchronous_insert_log
Contains information about async inserts. Each entry represents an insert query buffered into an async insert query.
To start logging configure parameters in the [asynchronous_insert_log](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-asynchronous_insert_log) section.
To start logging configure parameters in the [asynchronous_insert_log](../../operations/server-configuration-parameters/settings.md#asynchronous_insert_log) section.
The flushing period of data is set in `flush_interval_milliseconds` parameter of the [asynchronous_insert_log](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-asynchronous_insert_log) server settings section. To force flushing, use the [SYSTEM FLUSH LOGS](../../sql-reference/statements/system.md#query_language-system-flush_logs) query.
The flushing period of data is set in `flush_interval_milliseconds` parameter of the [asynchronous_insert_log](../../operations/server-configuration-parameters/settings.md#asynchronous_insert_log) server settings section. To force flushing, use the [SYSTEM FLUSH LOGS](../../sql-reference/statements/system.md#query_language-system-flush_logs) query.
ClickHouse does not delete data from the table automatically. See [Introduction](../../operations/system-tables/index.md#system-tables-introduction) for more details.
View File
@ -3,7 +3,7 @@ slug: /en/operations/system-tables/graphite_retentions
---
# graphite_retentions
Contains information about parameters [graphite_rollup](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-graphite) which are used in tables with [\*GraphiteMergeTree](../../engines/table-engines/mergetree-family/graphitemergetree.md) engines.
Contains information about parameters [graphite_rollup](../../operations/server-configuration-parameters/settings.md#graphite) which are used in tables with [\*GraphiteMergeTree](../../engines/table-engines/mergetree-family/graphitemergetree.md) engines.
Columns:
View File
@ -3,7 +3,7 @@ slug: /en/operations/system-tables/part_log
---
# part_log
The `system.part_log` table is created only if the [part_log](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-part-log) server setting is specified.
The `system.part_log` table is created only if the [part_log](../../operations/server-configuration-parameters/settings.md#part-log) server setting is specified.
This table contains information about events that occurred with [data parts](../../engines/table-engines/mergetree-family/custom-partitioning-key.md) in the [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) family tables, such as adding or merging data.
View File
@ -9,11 +9,11 @@ Contains information about executed queries, for example, start time, duration o
This table does not contain the ingested data for `INSERT` queries.
:::
You can change settings of queries logging in the [query_log](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-query-log) section of the server configuration.
You can change settings of queries logging in the [query_log](../../operations/server-configuration-parameters/settings.md#query-log) section of the server configuration.
You can disable query logging by setting [log_queries = 0](../../operations/settings/settings.md#log-queries). We do not recommend turning off logging because information in this table is important for solving issues.
The flushing period of data is set in `flush_interval_milliseconds` parameter of the [query_log](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-query-log) server settings section. To force flushing, use the [SYSTEM FLUSH LOGS](../../sql-reference/statements/system.md#query_language-system-flush_logs) query.
The flushing period of data is set in `flush_interval_milliseconds` parameter of the [query_log](../../operations/server-configuration-parameters/settings.md#query-log) server settings section. To force flushing, use the [SYSTEM FLUSH LOGS](../../sql-reference/statements/system.md#query_language-system-flush_logs) query.
ClickHouse does not delete data from the table automatically. See [Introduction](../../operations/system-tables/index.md#system-tables-introduction) for more details.
View File
@ -7,10 +7,10 @@ Contains information about threads that execute queries, for example, thread nam
To start logging:
1. Configure parameters in the [query_thread_log](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-query_thread_log) section.
1. Configure parameters in the [query_thread_log](../../operations/server-configuration-parameters/settings.md#query_thread_log) section.
2. Set [log_query_threads](../../operations/settings/settings.md#log-query-threads) to 1.
The flushing period of data is set in `flush_interval_milliseconds` parameter of the [query_thread_log](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-query_thread_log) server settings section. To force flushing, use the [SYSTEM FLUSH LOGS](../../sql-reference/statements/system.md#query_language-system-flush_logs) query.
The flushing period of data is set in `flush_interval_milliseconds` parameter of the [query_thread_log](../../operations/server-configuration-parameters/settings.md#query_thread_log) server settings section. To force flushing, use the [SYSTEM FLUSH LOGS](../../sql-reference/statements/system.md#query_language-system-flush_logs) query.
ClickHouse does not delete data from the table automatically. See [Introduction](../../operations/system-tables/index.md#system-tables-introduction) for more details.
View File
@ -7,10 +7,10 @@ Contains information about the dependent views executed when running a query, fo
To start logging:
1. Configure parameters in the [query_views_log](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-query_views_log) section.
1. Configure parameters in the [query_views_log](../../operations/server-configuration-parameters/settings.md#query_views_log) section.
2. Set [log_query_views](../../operations/settings/settings.md#log-query-views) to 1.
The flushing period of data is set in `flush_interval_milliseconds` parameter of the [query_views_log](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-query_views_log) server settings section. To force flushing, use the [SYSTEM FLUSH LOGS](../../sql-reference/statements/system.md#query_language-system-flush_logs) query.
The flushing period of data is set in `flush_interval_milliseconds` parameter of the [query_views_log](../../operations/server-configuration-parameters/settings.md#query_views_log) server settings section. To force flushing, use the [SYSTEM FLUSH LOGS](../../sql-reference/statements/system.md#query_language-system-flush_logs) query.
ClickHouse does not delete data from the table automatically. See [Introduction](../../operations/system-tables/index.md#system-tables-introduction) for more details.
View File
@ -4,7 +4,7 @@ slug: /en/operations/system-tables/server_settings
# server_settings
Contains information about global settings for the server, which were specified in `config.xml`.
Currently, the table shows only settings from the first layer of `config.xml` and doesn't support nested configs (e.g. [logger](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-logger)).
Currently, the table shows only settings from the first layer of `config.xml` and doesn't support nested configs (e.g. [logger](../../operations/server-configuration-parameters/settings.md#logger)).
Columns:
@ -50,7 +50,7 @@ WHERE name LIKE '%thread_pool%'
```
Using `WHERE changed` can be useful, for example, when you want to check
whether settings in configuration files are loaded correctly and are in use.
<!-- -->
View File
@ -5,7 +5,7 @@ slug: /en/operations/system-tables/trace_log
Contains stack traces collected by the [sampling query profiler](../../operations/optimizing-performance/sampling-query-profiler.md).
ClickHouse creates this table when the [trace_log](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-trace_log) server configuration section is set. Also see settings: [query_profiler_real_time_period_ns](../../operations/settings/settings.md#query_profiler_real_time_period_ns), [query_profiler_cpu_time_period_ns](../../operations/settings/settings.md#query_profiler_cpu_time_period_ns), [memory_profiler_step](../../operations/settings/settings.md#memory_profiler_step),
ClickHouse creates this table when the [trace_log](../../operations/server-configuration-parameters/settings.md#trace_log) server configuration section is set. Also see settings: [query_profiler_real_time_period_ns](../../operations/settings/settings.md#query_profiler_real_time_period_ns), [query_profiler_cpu_time_period_ns](../../operations/settings/settings.md#query_profiler_cpu_time_period_ns), [memory_profiler_step](../../operations/settings/settings.md#memory_profiler_step),
[memory_profiler_sample_probability](../../operations/settings/settings.md#memory_profiler_sample_probability), [trace_profile_events](../../operations/settings/settings.md#trace_profile_events).
To analyze logs, use the `addressToLine`, `addressToLineWithInlines`, `addressToSymbol` and `demangle` introspection functions.
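For example, a sketch that symbolizes the topmost frame of each sample (introspection functions require `allow_introspection_functions = 1`):

```sql
SET allow_introspection_functions = 1;

-- Resolve the top stack frame of each sample to a readable symbol.
SELECT
    count() AS samples,
    demangle(addressToSymbol(trace[1])) AS symbol
FROM system.trace_log
GROUP BY symbol
ORDER BY samples DESC
LIMIT 5;
```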


@ -12,7 +12,7 @@ A utility providing filesystem-like operations for ClickHouse disks. It can work
* `--config-file, -C` -- path to ClickHouse config, defaults to `/etc/clickhouse-server/config.xml`.
* `--save-logs` -- Log progress of invoked commands to `/var/log/clickhouse-server/clickhouse-disks.log`.
* `--log-level` -- What [type](../server-configuration-parameters/settings#server_configuration_parameters-logger) of events to log, defaults to `none`.
* `--log-level` -- What [type](../server-configuration-parameters/settings#logger) of events to log, defaults to `none`.
* `--disk` -- what disk to use for `mkdir, move, read, write, remove` commands. Defaults to `default`.
* `--query, -q` -- single query that can be executed without launching interactive mode
* `--help, -h` -- print all the options and commands with description
@ -23,7 +23,7 @@ After the launch two disks are initialized. The first one is a disk `local` that
## Clickhouse-disks state
For each disk that was added, the utility stores the current directory (as in a usual filesystem). The user can change the current directory and switch between disks.
State is reflected in a prompt "`disk_name`:`path_name`"
## Commands
@ -35,7 +35,7 @@ In these documentation file all mandatory positional arguments are referred as `
Recursively copy data from `path-from` on disk `disk_1` (defaults to the current disk, the `disk` parameter in non-interactive mode)
to `path-to` on disk `disk_2` (defaults to the current disk, the `disk` parameter in non-interactive mode).
* `current_disk_with_path (current, current_disk, current_path)`
Print the current state in the format:
`Disk: "current_disk" Path: "current path on current disk"`
* `help [<command>]`
Print help message about command `command`. If `command` is not specified print information about all commands.
@ -54,6 +54,6 @@ In these documentation file all mandatory positional arguments are referred as `
* `read (r) <path-from> [--path-to path]`
Read a file from `path-from` to `path` (`stdout` if not supplied).
* `switch-disk [--path path] <disk>`
Switch to disk `disk` on path `path` (if `path` is not specified, the default is the previous path on disk `disk`).
* `write (w) [--path-from path] <path-to>`.
Write a file from `path` (`stdin` if `path` is not supplied, input must finish by Ctrl+D) to `path-to`.


@ -32,7 +32,7 @@ Timezone agnostic Unix timestamp is stored in tables, and the timezone is used t
A list of supported time zones can be found in the [IANA Time Zone Database](https://www.iana.org/time-zones) and also can be queried by `SELECT * FROM system.time_zones`. [The list](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) is also available at Wikipedia.
You can explicitly set a time zone for `DateTime`-type columns when creating a table. Example: `DateTime('UTC')`. If the time zone isnt set, ClickHouse uses the value of the [timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) parameter in the server settings or the operating system settings at the moment of the ClickHouse server start.
You can explicitly set a time zone for `DateTime`-type columns when creating a table. Example: `DateTime('UTC')`. If the time zone isnt set, ClickHouse uses the value of the [timezone](../../operations/server-configuration-parameters/settings.md#timezone) parameter in the server settings or the operating system settings at the moment of the ClickHouse server start.
The [clickhouse-client](../../interfaces/cli.md) applies the server time zone by default if a time zone isnt explicitly set when initializing the data type. To use the client time zone, run `clickhouse-client` with the `--use_client_time_zone` parameter.
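For example, a minimal sketch (the table and column names are illustrative):

```sql
CREATE TABLE events
(
    -- Stored as a timezone-agnostic Unix timestamp; rendered in UTC.
    event_time DateTime('UTC')
)
ENGINE = MergeTree
ORDER BY event_time;

-- Render the same instant in another time zone at query time.
SELECT toTimeZone(event_time, 'Europe/Amsterdam') FROM events;
```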
@ -149,7 +149,7 @@ Time shifts for multiple days. Some pacific islands changed their timezone offse
- [Functions for working with arrays](../../sql-reference/functions/array-functions.md)
- [The `date_time_input_format` setting](../../operations/settings/settings-formats.md#date_time_input_format)
- [The `date_time_output_format` setting](../../operations/settings/settings-formats.md#date_time_output_format)
- [The `timezone` server configuration parameter](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone)
- [The `timezone` server configuration parameter](../../operations/server-configuration-parameters/settings.md#timezone)
- [The `session_timezone` setting](../../operations/settings/settings.md#session_timezone)
- [Operators for working with dates and times](../../sql-reference/operators/index.md#operators-datetime)
- [The `Date` data type](../../sql-reference/data-types/date.md)


@ -119,7 +119,7 @@ FROM dt64;
- [Functions for working with dates and times](../../sql-reference/functions/date-time-functions.md)
- [The `date_time_input_format` setting](../../operations/settings/settings-formats.md#date_time_input_format)
- [The `date_time_output_format` setting](../../operations/settings/settings-formats.md#date_time_output_format)
- [The `timezone` server configuration parameter](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone)
- [The `timezone` server configuration parameter](../../operations/server-configuration-parameters/settings.md#timezone)
- [The `session_timezone` setting](../../operations/settings/settings.md#session_timezone)
- [Operators for working with dates and times](../../sql-reference/operators/index.md#operators-for-working-with-dates-and-times)
- [`Date` data type](../../sql-reference/data-types/date.md)


@ -31,9 +31,9 @@ ClickHouse:
- Periodically updates dictionaries and dynamically loads missing values. In other words, dictionaries can be loaded dynamically.
- Allows creating dictionaries with xml files or [DDL queries](../../sql-reference/statements/create/dictionary.md).
The configuration of dictionaries can be located in one or more xml-files. The path to the configuration is specified in the [dictionaries_config](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-dictionaries_config) parameter.
The configuration of dictionaries can be located in one or more xml-files. The path to the configuration is specified in the [dictionaries_config](../../operations/server-configuration-parameters/settings.md#dictionaries_config) parameter.
Dictionaries can be loaded at server startup or at first use, depending on the [dictionaries_lazy_load](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-dictionaries_lazy_load) setting.
Dictionaries can be loaded at server startup or at first use, depending on the [dictionaries_lazy_load](../../operations/server-configuration-parameters/settings.md#dictionaries_lazy_load) setting.
The [dictionaries](../../operations/system-tables/dictionaries.md#system_tables-dictionaries) system table contains information about dictionaries configured on the server. For each dictionary you can find there:
@ -1156,7 +1156,7 @@ Setting fields:
- `command_read_timeout` - Timeout for reading data from command stdout in milliseconds. Default value 10000. Optional parameter.
- `command_write_timeout` - Timeout for writing data to command stdin in milliseconds. Default value 10000. Optional parameter.
- `implicit_key` — The executable source file can return only values, and the correspondence to the requested keys is determined implicitly — by the order of rows in the result. Default value is false.
- `execute_direct` - If `execute_direct` = `1`, then `command` will be searched inside user_scripts folder specified by [user_scripts_path](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-user_scripts_path). Additional script arguments can be specified using a whitespace separator. Example: `script_name arg1 arg2`. If `execute_direct` = `0`, `command` is passed as argument for `bin/sh -c`. Default value is `0`. Optional parameter.
- `execute_direct` - If `execute_direct` = `1`, then `command` will be searched inside user_scripts folder specified by [user_scripts_path](../../operations/server-configuration-parameters/settings.md#user_scripts_path). Additional script arguments can be specified using a whitespace separator. Example: `script_name arg1 arg2`. If `execute_direct` = `0`, `command` is passed as argument for `bin/sh -c`. Default value is `0`. Optional parameter.
- `send_chunk_header` - controls whether to send row count before sending a chunk of data to process. Optional. Default value is `false`.
That dictionary source can be configured only via XML configuration. Creating dictionaries with executable source via DDL is disabled; otherwise, the DB user would be able to execute arbitrary binaries on the ClickHouse node.
@ -1191,7 +1191,7 @@ Setting fields:
- `command_read_timeout` - timeout for reading data from command stdout in milliseconds. Default value 10000. Optional parameter.
- `command_write_timeout` - timeout for writing data to command stdin in milliseconds. Default value 10000. Optional parameter.
- `implicit_key` — The executable source file can return only values, and the correspondence to the requested keys is determined implicitly — by the order of rows in the result. Default value is false. Optional parameter.
- `execute_direct` - If `execute_direct` = `1`, then `command` will be searched inside user_scripts folder specified by [user_scripts_path](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-user_scripts_path). Additional script arguments can be specified using whitespace separator. Example: `script_name arg1 arg2`. If `execute_direct` = `0`, `command` is passed as argument for `bin/sh -c`. Default value is `1`. Optional parameter.
- `execute_direct` - If `execute_direct` = `1`, then `command` will be searched inside user_scripts folder specified by [user_scripts_path](../../operations/server-configuration-parameters/settings.md#user_scripts_path). Additional script arguments can be specified using whitespace separator. Example: `script_name arg1 arg2`. If `execute_direct` = `0`, `command` is passed as argument for `bin/sh -c`. Default value is `1`. Optional parameter.
- `send_chunk_header` - controls whether to send row count before sending a chunk of data to process. Optional. Default value is `false`.
That dictionary source can be configured only via XML configuration. Creating dictionaries with executable source via DDL is disabled; otherwise, the DB user would be able to execute arbitrary binaries on the ClickHouse node.
@ -1232,7 +1232,7 @@ SOURCE(HTTP(
))
```
In order for ClickHouse to access an HTTPS resource, you must [configure openSSL](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-openssl) in the server configuration.
In order for ClickHouse to access an HTTPS resource, you must [configure openSSL](../../operations/server-configuration-parameters/settings.md#openssl) in the server configuration.
Setting fields:
@ -1680,7 +1680,7 @@ Setting fields:
The `table` or `where` fields cannot be used together with the `query` field. And either one of the `table` or `query` fields must be declared.
:::
#### Mongodb
#### MongoDB
Example of settings:
@ -1700,6 +1700,17 @@ Example of settings:
or
``` xml
<source>
<mongodb>
<uri>mongodb://localhost:27017/test?ssl=true</uri>
<collection>dictionary_source</collection>
</mongodb>
</source>
```
or
``` sql
SOURCE(MONGODB(
host 'localhost'
@ -1722,6 +1733,22 @@ Setting fields:
- `collection` - Name of the collection.
- `options` - MongoDB connection string options (optional parameter).
or
``` sql
SOURCE(MONGODB(
uri 'mongodb://localhost:27017/clickhouse'
collection 'dictionary_source'
))
```
Setting fields:
- `uri` - URI for establishing the connection.
- `collection` - Name of the collection.
[More information about the engine](../../engines/table-engines/integrations/mongodb.md)
#### Redis
@ -2038,7 +2065,7 @@ Configuration fields:
| `expression` | [Expression](../../sql-reference/syntax.md#expressions) that ClickHouse executes on the value.<br/>The expression can be a column name in the remote SQL database. Thus, you can use it to create an alias for the remote column.<br/><br/>Default value: no expression. | No |
| <a name="hierarchical-dict-attr"></a> `hierarchical` | If `true`, the attribute contains the value of a parent key for the current key. See [Hierarchical Dictionaries](#hierarchical-dictionaries).<br/><br/>Default value: `false`. | No |
| `injective` | Flag that shows whether the `id -> attribute` image is [injective](https://en.wikipedia.org/wiki/Injective_function).<br/>If `true`, ClickHouse can automatically place after the `GROUP BY` clause the requests to dictionaries with injection. Usually it significantly reduces the amount of such requests.<br/><br/>Default value: `false`. | No |
| `is_object_id` | Flag that shows whether the query is executed for a MongoDB document by `ObjectID`.<br/><br/>Default value: `false`.
## Hierarchical Dictionaries


@ -595,6 +595,7 @@ SELECT arrayConcat([1, 2], [3, 4], [5, 6]) AS res
Get the element with the index `n` from the array `arr`. `n` must be any integer type.
Indexes in an array begin from one.
Negative indexes are supported. In this case, it selects the corresponding element numbered from the end. For example, `arr[-1]` is the last item in the array.
If the index falls outside of the bounds of an array, it returns some default value (0 for numbers, an empty string for strings, etc.), except for the case with a non-constant array and a constant index 0 (in this case there will be an error `Array indices are 1-based`).
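For example:

```sql
-- Negative indexes count from the end; out-of-range indexes
-- return the default value for the element type (here, 0).
SELECT [1, 2, 3][-1] AS last, [1, 2, 3][4] AS out_of_range;
```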
@ -616,6 +617,27 @@ SELECT has([1, 2, NULL], NULL)
└─────────────────────────┘
```
## arrayElementOrNull(arr, n)
Get the element with the index `n` from the array `arr`. `n` must be any integer type.
Indexes in an array begin from one.
Negative indexes are supported. In this case, it selects the corresponding element numbered from the end. For example, `arr[-1]` is the last item in the array.
If the index falls outside of the bounds of an array, it returns `NULL` instead of a default value.
### Examples
``` sql
SELECT arrayElementOrNull([1, 2, 3], 2), arrayElementOrNull([1, 2, 3], 4)
```
``` text
┌─arrayElementOrNull([1, 2, 3], 2)─┬─arrayElementOrNull([1, 2, 3], 4)─┐
│ 2 │ ᴺᵁᴸᴸ │
└──────────────────────────────────┴──────────────────────────────────┘
```
## hasAll {#hasall}
Checks whether one array is a subset of another.
@ -1717,6 +1739,24 @@ Result:
[[1,1,2,3],[1,2,3,4]]
```
## arrayUnion(arr)
Takes multiple arrays, returns an array that contains all elements that are present in any of the source arrays.
Example:
```sql
SELECT
arrayUnion([-2, 1], [10, 1], [-2], []) as num_example,
arrayUnion(['hi'], [], ['hello', 'hi']) as str_example,
arrayUnion([1, 3, NULL], [2, 3, NULL]) as null_example
```
```text
┌─num_example─┬─str_example────┬─null_example─┐
│ [10,-2,1] │ ['hello','hi'] │ [3,2,1,NULL] │
└─────────────┴────────────────┴──────────────┘
```
## arrayIntersect(arr)
Takes multiple arrays, returns an array with elements that are present in all source arrays.


@ -83,7 +83,7 @@ Result:
```
## makeDate32
Creates a date of type [Date32](../../sql-reference/data-types/date32.md) from a year, month, day (or optionally a year and a day).
**Syntax**
@ -153,7 +153,7 @@ makeDateTime(year, month, day, hour, minute, second[, timezone])
- `hour` — Hour. [Integer](../data-types/int-uint.md), [Float](../data-types/float.md) or [Decimal](../data-types/decimal.md).
- `minute` — Minute. [Integer](../data-types/int-uint.md), [Float](../data-types/float.md) or [Decimal](../data-types/decimal.md).
- `second` — Second. [Integer](../data-types/int-uint.md), [Float](../data-types/float.md) or [Decimal](../data-types/decimal.md).
- `timezone` — [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) for the returned value (optional).
- `timezone` — [Timezone](../../operations/server-configuration-parameters/settings.md#timezone) for the returned value (optional).
**Returned value**
@ -186,11 +186,11 @@ makeDateTime64(year, month, day, hour, minute, second[, precision])
**Arguments**
- `year` — Year (0-9999). [Integer](../../sql-reference/data-types/int-uint.md), [Float](../../sql-reference/data-types/float.md) or [Decimal](../../sql-reference/data-types/decimal.md).
- `month` — Month (1-12). [Integer](../../sql-reference/data-types/int-uint.md), [Float](../../sql-reference/data-types/float.md) or [Decimal](../../sql-reference/data-types/decimal.md).
- `day` — Day (1-31). [Integer](../../sql-reference/data-types/int-uint.md), [Float](../../sql-reference/data-types/float.md) or [Decimal](../../sql-reference/data-types/decimal.md).
- `hour` — Hour (0-23). [Integer](../../sql-reference/data-types/int-uint.md), [Float](../../sql-reference/data-types/float.md) or [Decimal](../../sql-reference/data-types/decimal.md).
- `minute` — Minute (0-59). [Integer](../../sql-reference/data-types/int-uint.md), [Float](../../sql-reference/data-types/float.md) or [Decimal](../../sql-reference/data-types/decimal.md).
- `second` — Second (0-59). [Integer](../../sql-reference/data-types/int-uint.md), [Float](../../sql-reference/data-types/float.md) or [Decimal](../../sql-reference/data-types/decimal.md).
- `precision` — Optional precision of the sub-second component (0-9). [Integer](../../sql-reference/data-types/int-uint.md).
**Returned value**
@ -294,7 +294,7 @@ Result:
## serverTimeZone
Returns the timezone of the server, i.e. the value of setting [timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone).
Returns the timezone of the server, i.e. the value of setting [timezone](../../operations/server-configuration-parameters/settings.md#timezone).
If the function is executed in the context of a distributed table, then it generates a normal column with values relevant to each shard. Otherwise, it produces a constant value.
**Syntax**
@ -1269,7 +1269,7 @@ toStartOfSecond(value, [timezone])
**Arguments**
- `value` — Date and time. [DateTime64](../data-types/datetime64.md).
- `timezone` — [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) for the returned value (optional). If not specified, the function uses the timezone of the `value` parameter. [String](../data-types/string.md).
- `timezone` — [Timezone](../../operations/server-configuration-parameters/settings.md#timezone) for the returned value (optional). If not specified, the function uses the timezone of the `value` parameter. [String](../data-types/string.md).
**Returned value**
@ -1309,7 +1309,7 @@ Result:
**See also**
- [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) server configuration parameter.
- [Timezone](../../operations/server-configuration-parameters/settings.md#timezone) server configuration parameter.
## toStartOfMillisecond
@ -1324,7 +1324,7 @@ toStartOfMillisecond(value, [timezone])
**Arguments**
- `value` — Date and time. [DateTime64](../../sql-reference/data-types/datetime64.md).
- `timezone` — [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) for the returned value (optional). If not specified, the function uses the timezone of the `value` parameter. [String](../../sql-reference/data-types/string.md).
- `timezone` — [Timezone](../../operations/server-configuration-parameters/settings.md#timezone) for the returned value (optional). If not specified, the function uses the timezone of the `value` parameter. [String](../../sql-reference/data-types/string.md).
**Returned value**
@ -1376,7 +1376,7 @@ toStartOfMicrosecond(value, [timezone])
**Arguments**
- `value` — Date and time. [DateTime64](../../sql-reference/data-types/datetime64.md).
- `timezone` — [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) for the returned value (optional). If not specified, the function uses the timezone of the `value` parameter. [String](../../sql-reference/data-types/string.md).
- `timezone` — [Timezone](../../operations/server-configuration-parameters/settings.md#timezone) for the returned value (optional). If not specified, the function uses the timezone of the `value` parameter. [String](../../sql-reference/data-types/string.md).
**Returned value**
@ -1416,7 +1416,7 @@ Result:
**See also**
- [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) server configuration parameter.
- [Timezone](../../operations/server-configuration-parameters/settings.md#timezone) server configuration parameter.
## toStartOfNanosecond
@ -1431,7 +1431,7 @@ toStartOfNanosecond(value, [timezone])
**Arguments**
- `value` — Date and time. [DateTime64](../../sql-reference/data-types/datetime64.md).
- `timezone` — [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) for the returned value (optional). If not specified, the function uses the timezone of the `value` parameter. [String](../../sql-reference/data-types/string.md).
- `timezone` — [Timezone](../../operations/server-configuration-parameters/settings.md#timezone) for the returned value (optional). If not specified, the function uses the timezone of the `value` parameter. [String](../../sql-reference/data-types/string.md).
**Returned value**
@ -1471,7 +1471,7 @@ Result:
**See also**
- [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) server configuration parameter.
- [Timezone](../../operations/server-configuration-parameters/settings.md#timezone) server configuration parameter.
## toStartOfFiveMinutes
@ -1674,7 +1674,7 @@ Result:
## toRelativeYearNum
Converts a date, or date with time, to the number of years elapsed since a certain fixed point in the past.
**Syntax**
@ -2178,7 +2178,7 @@ age('unit', startdate, enddate, [timezone])
- `enddate` — The second time value to subtract from (the minuend). [Date](../data-types/date.md), [Date32](../data-types/date32.md), [DateTime](../data-types/datetime.md) or [DateTime64](../data-types/datetime64.md).
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) (optional). If specified, it is applied to both `startdate` and `enddate`. If not specified, timezones of `startdate` and `enddate` are used. If they are not the same, the result is unspecified. [String](../data-types/string.md).
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#timezone) (optional). If specified, it is applied to both `startdate` and `enddate`. If not specified, timezones of `startdate` and `enddate` are used. If they are not the same, the result is unspecified. [String](../data-types/string.md).
**Returned value**
@ -2254,7 +2254,7 @@ Aliases: `dateDiff`, `DATE_DIFF`, `timestampDiff`, `timestamp_diff`, `TIMESTAMP_
- `enddate` — The second time value to subtract from (the minuend). [Date](../data-types/date.md), [Date32](../data-types/date32.md), [DateTime](../data-types/datetime.md) or [DateTime64](../data-types/datetime64.md).
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) (optional). If specified, it is applied to both `startdate` and `enddate`. If not specified, timezones of `startdate` and `enddate` are used. If they are not the same, the result is unspecified. [String](../data-types/string.md).
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#timezone) (optional). If specified, it is applied to both `startdate` and `enddate`. If not specified, timezones of `startdate` and `enddate` are used. If they are not the same, the result is unspecified. [String](../data-types/string.md).
**Returned value**
@ -2323,7 +2323,7 @@ Alias: `dateTrunc`.
`unit` argument is case-insensitive.
- `value` — Date and time. [Date](../data-types/date.md), [Date32](../data-types/date32.md), [DateTime](../data-types/datetime.md) or [DateTime64](../data-types/datetime64.md).
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) for the returned value (optional). If not specified, the function uses the timezone of the `value` parameter. [String](../data-types/string.md).
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#timezone) for the returned value (optional). If not specified, the function uses the timezone of the `value` parameter. [String](../data-types/string.md).
**Returned value**
@ -2703,7 +2703,7 @@ now([timezone])
**Arguments**
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) for the returned value (optional). [String](../data-types/string.md).
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#timezone) for the returned value (optional). [String](../data-types/string.md).
**Returned value**
@ -2752,7 +2752,7 @@ now64([scale], [timezone])
**Arguments**
- `scale` - Tick size (precision): 10<sup>-precision</sup> seconds. Valid range: [ 0 : 9 ]. Typical values are 3 (default, milliseconds), 6 (microseconds), and 9 (nanoseconds).
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) for the returned value (optional). [String](../data-types/string.md).
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#timezone) for the returned value (optional). [String](../data-types/string.md).
**Returned value**
@ -2786,7 +2786,7 @@ nowInBlock([timezone])
**Arguments**
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) for the returned value (optional). [String](../data-types/string.md).
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#timezone) for the returned value (optional). [String](../data-types/string.md).
**Returned value**
@ -2975,7 +2975,7 @@ YYYYMMDDhhmmssToDateTime(yyyymmddhhmmss[, timezone]);
**Arguments**
- `yyyymmddhhmmss` - A number representing the year, month and day. [Integer](../data-types/int-uint.md), [Float](../data-types/float.md) or [Decimal](../data-types/decimal.md).
- `timezone` - [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) for the returned value (optional).
- `timezone` - [Timezone](../../operations/server-configuration-parameters/settings.md#timezone) for the returned value (optional).
**Returned value**
@ -3631,7 +3631,7 @@ addInterval(interval_1, interval_2)
- Returns a tuple of intervals. [tuple](../data-types/tuple.md)([interval](../data-types/special-data-types/interval.md)).
:::note
Intervals of the same type will be combined into a single interval. For instance if `toIntervalDay(1)` and `toIntervalDay(2)` are passed then the result will be `(3)` rather than `(1,1)`.
:::
**Example**
@ -3682,7 +3682,7 @@ addTupleOfIntervals(interval_1, interval_2)
Query:
```sql
WITH toDate('2018-01-01') AS date
SELECT addTupleOfIntervals(date, (INTERVAL 1 DAY, INTERVAL 1 MONTH, INTERVAL 1 YEAR))
```
@ -4802,4 +4802,3 @@ timeDiff(toDateTime64('1927-01-01 00:00:00', 3), toDate32('1927-01-02'));
## Related content
- Blog: [Working with time series data in ClickHouse](https://clickhouse.com/blog/working-with-time-series-data-and-functions-ClickHouse)


@ -18,7 +18,7 @@ file(path[, default])
**Arguments**
- `path` — The path of the file relative to [user_files_path](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-user_files_path). Supports wildcards `*`, `**`, `?`, `{abc,def}` and `{N..M}` where `N`, `M` are numbers and `'abc', 'def'` are strings.
- `path` — The path of the file relative to [user_files_path](../../operations/server-configuration-parameters/settings.md#user_files_path). Supports wildcards `*`, `**`, `?`, `{abc,def}` and `{N..M}` where `N`, `M` are numbers and `'abc', 'def'` are strings.
- `default` — The value returned if the file does not exist or cannot be accessed. Supported data types: [String](../data-types/string.md) and [NULL](../../sql-reference/syntax.md#null-literal).
**Example**


@ -347,7 +347,7 @@ Result:
## intHash64
Calculates a 64-bit hash code from any type of integer.
This is a relatively fast non-cryptographic hash function of average quality for numbers.
It works faster than [intHash32](#inthash32).
**Syntax**
@ -651,7 +651,7 @@ For more information, see the link: [JumpConsistentHash](https://arxiv.org/pdf/1
## kostikConsistentHash
An O(1) time and space consistent hash algorithm by Konstantin 'kostik' Oblakov. Previously `yandexConsistentHash`.
**Syntax**
@ -672,7 +672,7 @@ Alias: `yandexConsistentHash` (left for backwards compatibility sake).
**Implementation details**
It is efficient only if n <= 32768.
**Example**
@ -688,14 +688,14 @@ SELECT kostikConsistentHash(16045690984833335023, 2);
└───────────────────────────────────────────────┘
```
## ripeMD160
## RIPEMD160
Produces [RIPEMD-160](https://en.wikipedia.org/wiki/RIPEMD) hash value.
**Syntax**
```sql
ripeMD160(input)
RIPEMD160(input)
```
**Parameters**
@ -713,11 +713,11 @@ Use the [hex](../functions/encoding-functions.md/#hex) function to represent the
Query:
```sql
SELECT hex(ripeMD160('The quick brown fox jumps over the lazy dog'));
SELECT hex(RIPEMD160('The quick brown fox jumps over the lazy dog'));
```
```response
┌─hex(ripeMD160('The quick brown fox jumps over the lazy dog'))─┐
┌─hex(RIPEMD160('The quick brown fox jumps over the lazy dog'))─┐
│ 37F332F68DB77BD9D7EDD4969571AD671CF9DD3B │
└───────────────────────────────────────────────────────────────┘
```
@ -937,7 +937,7 @@ SELECT xxHash64('')
**Returned value**
- Hash value. [UInt32/64](../data-types/int-uint.md).
:::note
The return type will be `UInt32` for `xxHash32` and `UInt64` for `xxHash64`.


@ -86,11 +86,11 @@ Returns the fully qualified domain name of the ClickHouse server.
fqdn();
```
Aliases: `fullHostName`, `FQDN`.
**Returned value**
- String with the fully qualified domain name. [String](../data-types/string.md).
**Example**
@ -245,7 +245,7 @@ Result:
3. │ 5 │
4. │ 5 │
5. │ 5 │
└─────────────┘
```
## byteSize
@ -347,7 +347,7 @@ Result:
Turns a constant into a full column containing a single value.
Full columns and constants are represented differently in memory.
Functions usually execute different code for normal and constant arguments, although the result should typically be the same.
This function can be used to debug this behavior.
**Syntax**
@ -366,8 +366,8 @@ materialize(x)
**Example**
In the example below the `countMatches` function expects a constant second argument.
This behaviour can be debugged by using the `materialize` function to turn a constant into a full column,
verifying that the function throws an error for a non-constant argument.
Query:
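(a minimal sketch of the idea; the exact query in the reference may differ)

```sql
-- countMatches expects a constant pattern; materialize() turns the constant
-- into a full column, so this call is expected to throw an error.
SELECT countMatches('foobarfoo', materialize('foo'));
```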
@ -1837,7 +1837,7 @@ Returns the default value for the given data type.
Does not include default values for custom columns set by the user.
**Syntax**
```sql
defaultValueOfArgumentType(expression)
@ -2164,7 +2164,7 @@ Result:
## filesystemCapacity
Returns the capacity of the filesystem in bytes. Needs the [path](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-path) to the data directory to be configured.
Returns the capacity of the filesystem in bytes. Needs the [path](../../operations/server-configuration-parameters/settings.md#path) to the data directory to be configured.
**Syntax**
@ -2592,7 +2592,7 @@ joinGetOrNull(join_storage_table_name, `value_column`, join_keys)
**Arguments**
- `join_storage_table_name` — an [identifier](../../sql-reference/syntax.md#syntax-identifiers) indicating where the search is performed.
- `value_column` — name of the column of the table that contains required data.
- `join_keys` — list of keys.
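As an illustration, a hedged sketch assuming a Join-engine table named `id_val` (the table, columns, and values are illustrative):

```sql
CREATE TABLE id_val (id UInt32, val UInt32)
ENGINE = Join(ANY, LEFT, id);

INSERT INTO id_val VALUES (1, 11), (2, 12);

-- Key 3 is absent, so the result is NULL rather than a type default.
SELECT joinGetOrNull('id_val', 'val', toUInt32(3));
```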
@ -2917,7 +2917,7 @@ Result:
**See Also**
- [tcp_port](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-tcp_port)
- [tcp_port](../../operations/server-configuration-parameters/settings.md#tcp_port)
## currentProfiles
@ -4031,7 +4031,7 @@ displayName()
**Example**
The `display_name` can be set in `config.xml`. For example, consider a server with `display_name` configured to 'production':
```xml
<!-- It is the name that will be shown in the clickhouse-client.
@ -4279,4 +4279,3 @@ Result:
1. │ ['{ArraySizes}','{ArrayElements, TupleElement(keys), Regular}','{ArrayElements, TupleElement(values), Regular}'] │
└──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
```


@ -10,7 +10,7 @@ Time window functions return the inclusive lower and exclusive upper bound of th
## tumble
A tumbling time window assigns records to non-overlapping, continuous windows with a fixed duration (`interval`).
**Syntax**
@ -21,7 +21,7 @@ tumble(time_attr, interval [, timezone])
**Arguments**
- `time_attr` — Date and time. [DateTime](../data-types/datetime.md).
- `interval` — Window interval in [Interval](../data-types/special-data-types/interval.md).
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) (optional).
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#timezone) (optional).
**Returned values**
@ -57,7 +57,7 @@ tumbleStart(time_attr, interval [, timezone]);
- `time_attr` — Date and time. [DateTime](../data-types/datetime.md).
- `interval` — Window interval in [Interval](../data-types/special-data-types/interval.md).
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) (optional).
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#timezone) (optional).
The parameters above can also be passed to the function as a [tuple](../data-types/tuple.md).
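For instance, a sketch using the current time and a one-minute window:

```sql
-- Lower bound of the tumbling window that contains now().
SELECT tumbleStart(now(), toIntervalMinute(1));
```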
@ -95,7 +95,7 @@ tumbleEnd(time_attr, interval [, timezone]);
- `time_attr` — Date and time. [DateTime](../data-types/datetime.md).
- `interval` — Window interval in [Interval](../data-types/special-data-types/interval.md).
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) (optional).
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#timezone) (optional).
The parameters above can also be passed to the function as a [tuple](../data-types/tuple.md).
@ -132,7 +132,7 @@ hop(time_attr, hop_interval, window_interval [, timezone])
- `time_attr` — Date and time. [DateTime](../data-types/datetime.md).
- `hop_interval` — Positive Hop interval. [Interval](../data-types/special-data-types/interval.md).
- `window_interval` — Positive Window interval. [Interval](../data-types/special-data-types/interval.md).
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) (optional).
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#timezone) (optional).
**Returned values**
@ -172,7 +172,7 @@ hopStart(time_attr, hop_interval, window_interval [, timezone]);
- `time_attr` — Date and time. [DateTime](../data-types/datetime.md).
- `hop_interval` — Positive Hop interval. [Interval](../data-types/special-data-types/interval.md).
- `window_interval` — Positive Window interval. [Interval](../data-types/special-data-types/interval.md).
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) (optional).
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#timezone) (optional).
The parameters above can also be passed to the function as a [tuple](../data-types/tuple.md).
@ -212,9 +212,9 @@ hopEnd(time_attr, hop_interval, window_interval [, timezone]);
**Arguments**
- `time_attr` — Date and time. [DateTime](../data-types/datetime.md).
- `hop_interval` — Positive Hop interval. [Interval](../data-types/special-data-types/interval.md).
- `window_interval` — Positive Window interval. [Interval](../data-types/special-data-types/interval.md).
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) (optional).
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#timezone) (optional).
The parameters above can also be passed to the function as a [tuple](../data-types/tuple.md).


@ -121,7 +121,7 @@ Unsupported arguments:
- String representations of binary and hexadecimal values, e.g. `SELECT toInt8('0xc0fe');`.
:::note
If the input value cannot be represented within the bounds of [Int8](../data-types/int-uint.md), overflow or underflow of the result occurs.
This is not considered an error.
For example: `SELECT toInt8(128) == -128;`.
:::
@ -193,7 +193,7 @@ This is not considered an error.
- 8-bit integer value if successful, otherwise `0`. [Int8](../data-types/int-uint.md).
:::note
The function uses [rounding towards zero](https://en.wikipedia.org/wiki/Rounding#Rounding_towards_zero), meaning it truncates fractional digits of numbers.
:::
**Example**
@ -492,7 +492,7 @@ Unsupported arguments (return `\N`)
- String representations of binary and hexadecimal values, e.g. `SELECT toInt16OrNull('0xc0fe');`.
:::note
If the input value cannot be represented within the bounds of [Int16](../data-types/int-uint.md), overflow or underflow of the result occurs.
This is not considered an error.
:::
@ -555,7 +555,7 @@ Arguments for which the default value is returned:
- String representations of binary and hexadecimal values, e.g. `SELECT toInt16OrDefault('0xc0fe', CAST('-1', 'Int16'));`.
:::note
If the input value cannot be represented within the bounds of [Int16](../data-types/int-uint.md), overflow or underflow of the result occurs.
This is not considered an error.
:::
@ -617,7 +617,7 @@ Unsupported arguments:
- String representations of binary and hexadecimal values, e.g. `SELECT toInt32('0xc0fe');`.
:::note
If the input value cannot be represented within the bounds of [Int32](../data-types/int-uint.md), the result overflows or underflows.
This is not considered an error.
For example: `SELECT toInt32(2147483648) == -2147483648;`
:::
@ -680,7 +680,7 @@ Unsupported arguments (return `0`):
- String representations of binary and hexadecimal values, e.g. `SELECT toInt32OrZero('0xc0fe');`.
:::note
If the input value cannot be represented within the bounds of [Int32](../data-types/int-uint.md), overflow or underflow of the result occurs.
This is not considered an error.
:::
@ -739,7 +739,7 @@ Unsupported arguments (return `\N`)
- String representations of binary and hexadecimal values, e.g. `SELECT toInt32OrNull('0xc0fe');`.
:::note
If the input value cannot be represented within the bounds of [Int32](../data-types/int-uint.md), overflow or underflow of the result occurs.
This is not considered an error.
:::
@ -802,7 +802,7 @@ Arguments for which the default value is returned:
- String representations of binary and hexadecimal values, e.g. `SELECT toInt32OrDefault('0xc0fe', CAST('-1', 'Int32'));`.
:::note
If the input value cannot be represented within the bounds of [Int32](../data-types/int-uint.md), overflow or underflow of the result occurs.
This is not considered an error.
:::
@ -864,7 +864,7 @@ Unsupported types:
- String representations of binary and hexadecimal values, e.g. `SELECT toInt64('0xc0fe');`.
:::note
If the input value cannot be represented within the bounds of [Int64](../data-types/int-uint.md), the result overflows or underflows.
This is not considered an error.
For example: `SELECT toInt64(9223372036854775808) == -9223372036854775808;`
:::
@ -927,7 +927,7 @@ Unsupported arguments (return `0`):
- String representations of binary and hexadecimal values, e.g. `SELECT toInt64OrZero('0xc0fe');`.
:::note
If the input value cannot be represented within the bounds of [Int64](../data-types/int-uint.md), overflow or underflow of the result occurs.
This is not considered an error.
:::
@ -987,7 +987,7 @@ Unsupported arguments (return `\N`)
- String representations of binary and hexadecimal values, e.g. `SELECT toInt64OrNull('0xc0fe');`.
:::note
If the input value cannot be represented within the bounds of [Int64](../data-types/int-uint.md), overflow or underflow of the result occurs.
This is not considered an error.
:::
@ -1112,7 +1112,7 @@ Unsupported arguments:
- String representations of binary and hexadecimal values, e.g. `SELECT toInt128('0xc0fe');`.
:::note
If the input value cannot be represented within the bounds of [Int128](../data-types/int-uint.md), the result overflows or underflows.
This is not considered an error.
:::
@ -1234,7 +1234,7 @@ Unsupported arguments (return `\N`)
- String representations of binary and hexadecimal values, e.g. `SELECT toInt128OrNull('0xc0fe');`.
:::note
If the input value cannot be represented within the bounds of [Int128](../data-types/int-uint.md), overflow or underflow of the result occurs.
This is not considered an error.
:::
@ -1298,7 +1298,7 @@ Arguments for which the default value is returned:
- String representations of binary and hexadecimal values, e.g. `SELECT toInt128OrDefault('0xc0fe', CAST('-1', 'Int128'));`.
:::note
If the input value cannot be represented within the bounds of [Int128](../data-types/int-uint.md), overflow or underflow of the result occurs.
This is not considered an error.
:::
@ -1360,7 +1360,7 @@ Unsupported arguments:
- String representations of binary and hexadecimal values, e.g. `SELECT toInt256('0xc0fe');`.
:::note
If the input value cannot be represented within the bounds of [Int256](../data-types/int-uint.md), the result overflows or underflows.
This is not considered an error.
:::
@ -1422,7 +1422,7 @@ Unsupported arguments (return `0`):
- String representations of binary and hexadecimal values, e.g. `SELECT toInt256OrZero('0xc0fe');`.
:::note
If the input value cannot be represented within the bounds of [Int256](../data-types/int-uint.md), overflow or underflow of the result occurs.
This is not considered an error.
:::
@ -1482,7 +1482,7 @@ Unsupported arguments (return `\N`)
- String representations of binary and hexadecimal values, e.g. `SELECT toInt256OrNull('0xc0fe');`.
:::note
If the input value cannot be represented within the bounds of [Int256](../data-types/int-uint.md), overflow or underflow of the result occurs.
This is not considered an error.
:::
@ -4063,7 +4063,7 @@ Result:
## toDateTime64OrDefault
Like [toDateTime64](#todatetime64), this function converts an input value to a value of type [DateTime64](../data-types/datetime64.md),
but returns either the default value of [DateTime64](../data-types/datetime64.md)
or the provided default if an invalid argument is received.
@ -4132,8 +4132,8 @@ Unsupported arguments:
- String representations of binary and hexadecimal values, e.g. `SELECT toDecimal32('0xc0fe', 1);`.
:::note
An overflow can occur if the value of `expr` exceeds the bounds of `Decimal32`: `( -1 * 10^(9 - S), 1 * 10^(9 - S) )`.
Excessive digits in a fraction are discarded (not rounded).
Excessive digits in the integer part will lead to an exception.
:::
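For instance:

```sql
-- Scale 2 keeps two fractional digits; the trailing 9 is discarded, not rounded.
SELECT toDecimal32('1.119', 2) AS value;  -- 1.11
```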
@ -5261,7 +5261,7 @@ Result:
## reinterpretAsUInt8
Performs byte reinterpretation by treating the input value as a value of type UInt8. Unlike [`CAST`](#cast), the function does not attempt to preserve the original value - if the target type is not able to represent the input type, the output is meaningless.
**Syntax**
@ -5299,7 +5299,7 @@ Result:
## reinterpretAsUInt16
Performs byte reinterpretation by treating the input value as a value of type UInt16. Unlike [`CAST`](#cast), the function does not attempt to preserve the original value - if the target type is not able to represent the input type, the output is meaningless.
**Syntax**
@ -5375,7 +5375,7 @@ Result:
## reinterpretAsUInt64
Performs byte reinterpretation by treating the input value as a value of type UInt64. Unlike [`CAST`](#cast), the function does not attempt to preserve the original value - if the target type is not able to represent the input type, the output is meaningless.
**Syntax**
@ -5413,7 +5413,7 @@ Result:
## reinterpretAsUInt128
Performs byte reinterpretation by treating the input value as a value of type UInt128. Unlike [`CAST`](#cast), the function does not attempt to preserve the original value - if the target type is not able to represent the input type, the output is meaningless.
**Syntax**
@ -5489,7 +5489,7 @@ Result:
## reinterpretAsInt8
Performs byte reinterpretation by treating the input value as a value of type Int8. Unlike [`CAST`](#cast), the function does not attempt to preserve the original value - if the target type is not able to represent the input type, the output is meaningless.
**Syntax**
@ -5565,7 +5565,7 @@ Result:
## reinterpretAsInt32
Performs byte reinterpretation by treating the input value as a value of type Int32. Unlike [`CAST`](#cast), the function does not attempt to preserve the original value - if the target type is not able to represent the input type, the output is meaningless.
**Syntax**
@ -5603,7 +5603,7 @@ Result:
## reinterpretAsInt64
Performs byte reinterpretation by treating the input value as a value of type Int64. Unlike [`CAST`](#cast), the function does not attempt to preserve the original value - if the target type is not able to represent the input type, the output is meaningless.
**Syntax**
@ -5641,7 +5641,7 @@ Result:
## reinterpretAsInt128
Performs byte reinterpretation by treating the input value as a value of type Int128. Unlike [`CAST`](#cast), the function does not attempt to preserve the original value - if the target type is not able to represent the input type, the output is meaningless.
**Syntax**
@ -5679,7 +5679,7 @@ Result:
## reinterpretAsInt256
Performs byte reinterpretation by treating the input value as a value of type Int256. Unlike [`CAST`](#cast), the function does not attempt to preserve the original value - if the target type is not able to represent the input type, the output is meaningless.
**Syntax**
@ -5717,7 +5717,7 @@ Result:
## reinterpretAsFloat32
Performs byte reinterpretation by treating the input value as a value of type Float32. Unlike [`CAST`](#cast), the function does not attempt to preserve the original value - if the target type is not able to represent the input type, the output is meaningless.
**Syntax**
@ -5751,7 +5751,7 @@ Result:
## reinterpretAsFloat64
Performs byte reinterpretation by treating the input value as a value of type Float64. Unlike [`CAST`](#cast), the function does not attempt to preserve the original value - if the target type is not able to represent the input type, the output is meaningless.
**Syntax**
@ -5804,7 +5804,7 @@ reinterpretAsDate(x)
**Implementation details**
:::note
If the provided string isn’t long enough, the function works as if the string is padded with the necessary number of null bytes. If the string is longer than needed, the extra bytes are ignored.
:::
**Example**
@ -5844,7 +5844,7 @@ reinterpretAsDateTime(x)
**Implementation details**
:::note
If the provided string isn’t long enough, the function works as if the string is padded with the necessary number of null bytes. If the string is longer than needed, the extra bytes are ignored.
:::
**Example**
@ -5886,8 +5886,8 @@ reinterpretAsString(x)
Query:
```sql
SELECT
    reinterpretAsString(toDateTime('1970-01-01 01:01:05')),
reinterpretAsString(toDate('1970-03-07'));
```
@ -5922,8 +5922,8 @@ reinterpretAsFixedString(x)
Query:
```sql
SELECT
    reinterpretAsFixedString(toDateTime('1970-01-01 01:01:05')),
    reinterpretAsFixedString(toDate('1970-03-07'));
```
@ -6702,7 +6702,7 @@ parseDateTime(str[, format[, timezone]])
- `str` — The String to be parsed
- `format` — The format string. Optional. `%Y-%m-%d %H:%i:%s` if not specified.
- `timezone` — [Timezone](/docs/en/operations/server-configuration-parameters/settings.md/#server_configuration_parameters-timezone). Optional.
- `timezone` — [Timezone](/docs/en/operations/server-configuration-parameters/settings.md#timezone). Optional.
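For instance, a quick sketch with an explicit MySQL-style format string:

```sql
-- Parses the string according to the given format and returns a DateTime.
SELECT parseDateTime('2021-01-04+23:00:00', '%Y-%m-%d+%H:%i:%s');
-- 2021-01-04 23:00:00
```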
**Returned value(s)**
@ -6751,7 +6751,7 @@ parseDateTimeInJodaSyntax(str[, format[, timezone]])
- `str` — The String to be parsed
- `format` — The format string. Optional. `yyyy-MM-dd HH:mm:ss` if not specified.
- `timezone` — [Timezone](/docs/en/operations/server-configuration-parameters/settings.md/#server_configuration_parameters-timezone). Optional.
- `timezone` — [Timezone](/docs/en/operations/server-configuration-parameters/settings.md#timezone). Optional.
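For instance, a short sketch with Joda-style specifiers and an explicit time zone:

```sql
-- Same parsing idea, but with Joda syntax; the result is interpreted in Europe/Minsk.
SELECT parseDateTimeInJodaSyntax('2023-02-24 14:53:31', 'yyyy-MM-dd HH:mm:ss', 'Europe/Minsk');
-- 2023-02-24 14:53:31
```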
**Returned value(s)**
@ -6958,7 +6958,7 @@ parseDateTime64BestEffort(time_string [, precision [, time_zone]])
- `time_string` — String containing a date or date with time to convert. [String](../data-types/string.md).
- `precision` — Required precision. `3` — for milliseconds, `6` — for microseconds. Default — `3`. Optional. [UInt8](../data-types/int-uint.md).
- `time_zone` — [Timezone](/docs/en/operations/server-configuration-parameters/settings.md/#server_configuration_parameters-timezone). The function parses `time_string` according to the timezone. Optional. [String](../data-types/string.md).
- `time_zone` — [Timezone](/docs/en/operations/server-configuration-parameters/settings.md#timezone). The function parses `time_string` according to the timezone. Optional. [String](../data-types/string.md).
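For example, with the default precision of 3 the fractional part is kept to milliseconds:

```sql
SELECT parseDateTime64BestEffort('2021-01-01 01:01:00.12346') AS a, toTypeName(a) AS t;
-- 2021-01-01 01:01:00.123    DateTime64(3)
```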
**Returned value**
@ -7028,7 +7028,7 @@ toLowCardinality(expr)
**Returned values**
- Result of `expr`. [LowCardinality](../data-types/lowcardinality.md) of the type of `expr`.
**Example**
@ -7249,7 +7249,7 @@ Result:
## fromUnixTimestamp64Nano
Converts an `Int64` to a `DateTime64` value with fixed nanosecond precision and optional timezone. The input value is scaled up or down appropriately depending on its precision.
:::note
Please note that the input value is treated as a UTC timestamp, not a timestamp in the given (or implicit) timezone.
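A small sketch of the scaling; `'UTC'` is passed explicitly so the rendered result does not depend on the server time zone:

```sql
-- 1234567891011 input ticks are nanoseconds: 1234.567891011 s after the epoch.
SELECT fromUnixTimestamp64Nano(toInt64(1234567891011), 'UTC') AS dt, toTypeName(dt) AS t;
-- 1970-01-01 00:20:34.567891011    DateTime64(9, 'UTC')
```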
View File
@ -10,7 +10,7 @@ sidebar_label: UDF
## Executable User Defined Functions
ClickHouse can call any external executable program or script to process data.
The configuration of executable user defined functions can be located in one or more XML files. The path to the configuration is specified in the [user_defined_executable_functions_config](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-user_defined_executable_functions_config) parameter.
The configuration of executable user defined functions can be located in one or more XML files. The path to the configuration is specified in the [user_defined_executable_functions_config](../../operations/server-configuration-parameters/settings.md#user_defined_executable_functions_config) parameter.
A function configuration contains the following settings:
@ -27,7 +27,7 @@ A function configuration contains the following settings:
- `command_write_timeout` - timeout for writing data to command stdin in milliseconds. Default value 10000. Optional parameter.
- `pool_size` - the size of a command pool. Optional. Default value is `16`.
- `send_chunk_header` - controls whether to send row count before sending a chunk of data to process. Optional. Default value is `false`.
- `execute_direct` - If `execute_direct` = `1`, then `command` will be searched inside user_scripts folder specified by [user_scripts_path](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-user_scripts_path). Additional script arguments can be specified using whitespace separator. Example: `script_name arg1 arg2`. If `execute_direct` = `0`, `command` is passed as argument for `bin/sh -c`. Default value is `1`. Optional parameter.
- `execute_direct` - If `execute_direct` = `1`, then `command` will be searched inside user_scripts folder specified by [user_scripts_path](../../operations/server-configuration-parameters/settings.md#user_scripts_path). Additional script arguments can be specified using whitespace separator. Example: `script_name arg1 arg2`. If `execute_direct` = `0`, `command` is passed as argument for `bin/sh -c`. Default value is `1`. Optional parameter.
- `lifetime` - the reload interval of a function in seconds. If it is set to `0` then the function is not reloaded. Default value is `0`. Optional parameter.
The command must read arguments from `STDIN` and must output the result to `STDOUT`. The command must process arguments iteratively. That is, after processing a chunk of arguments, it must wait for the next chunk.
View File
@ -61,7 +61,7 @@ ULIDStringToDateTime(ulid[, timezone])
**Arguments**
- `ulid` — Input ULID. [String](../data-types/string.md) or [FixedString(26)](../data-types/fixedstring.md).
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) for the returned value (optional). [String](../data-types/string.md).
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#timezone) for the returned value (optional). [String](../data-types/string.md).
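For example (the ULID below encodes a timestamp from late 2022):

```sql
-- The first 48 bits of a ULID are a millisecond Unix timestamp.
SELECT ULIDStringToDateTime('01GNB2S2FGN2P93QPXDNB4EN2R') AS dt;
-- 2022-12-28 00:40:37.616
```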
**Returned value**
View File
@ -493,7 +493,7 @@ UUIDv7ToDateTime(uuid[, timezone])
**Arguments**
- `uuid` — [UUID](../data-types/uuid.md) of version 7.
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) for the returned value (optional). [String](../data-types/string.md).
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#timezone) for the returned value (optional). [String](../data-types/string.md).
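A minimal sketch, assuming `generateUUIDv7` is available to produce a version-7 UUID:

```sql
-- UUIDv7 embeds a millisecond Unix timestamp in its high bits;
-- the extracted DateTime64(3) depends on when the query runs.
SELECT UUIDv7ToDateTime(generateUUIDv7()) AS dt;
```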
**Returned value**
@ -637,7 +637,7 @@ snowflakeToDateTime(value[, time_zone])
**Arguments**
- `value` — Snowflake ID. [Int64](../data-types/int-uint.md).
- `time_zone` — [Timezone](/docs/en/operations/server-configuration-parameters/settings.md/#server_configuration_parameters-timezone). The function parses `time_string` according to the timezone. Optional. [String](../data-types/string.md).
- `time_zone` — [Timezone](/docs/en/operations/server-configuration-parameters/settings.md#timezone). The function parses `time_string` according to the timezone. Optional. [String](../data-types/string.md).
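For example (on recent releases this deprecated function may additionally require `allow_deprecated_snowflake_conversion_functions = 1`, as sketched here):

```sql
SELECT snowflakeToDateTime(CAST('1426860702823350272', 'Int64'), 'UTC') AS dt
SETTINGS allow_deprecated_snowflake_conversion_functions = 1;
-- 2021-08-15 10:57:56
```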
**Returned value**
@ -678,7 +678,7 @@ snowflakeToDateTime64(value[, time_zone])
**Arguments**
- `value` — Snowflake ID. [Int64](../data-types/int-uint.md).
- `time_zone` — [Timezone](/docs/en/operations/server-configuration-parameters/settings.md/#server_configuration_parameters-timezone). The function parses `time_string` according to the timezone. Optional. [String](../data-types/string.md).
- `time_zone` — [Timezone](/docs/en/operations/server-configuration-parameters/settings.md#timezone). The function parses `time_string` according to the timezone. Optional. [String](../data-types/string.md).
**Returned value**
@ -793,7 +793,7 @@ snowflakeIDToDateTime(value[, epoch[, time_zone]])
- `value` — Snowflake ID. [UInt64](../data-types/int-uint.md).
- `epoch` - Epoch of the Snowflake ID in milliseconds since 1970-01-01. Defaults to 0 (1970-01-01). For the Twitter/X epoch (2015-01-01), provide 1288834974657. Optional. [UInt*](../data-types/int-uint.md).
- `time_zone` — [Timezone](/docs/en/operations/server-configuration-parameters/settings.md/#server_configuration_parameters-timezone). The function parses `time_string` according to the timezone. Optional. [String](../data-types/string.md).
- `time_zone` — [Timezone](/docs/en/operations/server-configuration-parameters/settings.md#timezone). The function parses `time_string` according to the timezone. Optional. [String](../data-types/string.md).
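For example, with the default epoch:

```sql
SELECT snowflakeIDToDateTime(7204436857747984384) AS dt;
-- 2024-06-06 10:59:58
```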
**Returned value**
@ -829,7 +829,7 @@ snowflakeIDToDateTime64(value[, epoch[, time_zone]])
- `value` — Snowflake ID. [UInt64](../data-types/int-uint.md).
- `epoch` - Epoch of the Snowflake ID in milliseconds since 1970-01-01. Defaults to 0 (1970-01-01). For the Twitter/X epoch (2015-01-01), provide 1288834974657. Optional. [UInt*](../data-types/int-uint.md).
- `time_zone` — [Timezone](/docs/en/operations/server-configuration-parameters/settings.md/#server_configuration_parameters-timezone). The function parses `time_string` according to the timezone. Optional. [String](../data-types/string.md).
- `time_zone` — [Timezone](/docs/en/operations/server-configuration-parameters/settings.md#timezone). The function parses `time_string` according to the timezone. Optional. [String](../data-types/string.md).
**Returned value**
View File
@ -11,7 +11,7 @@ Changes roles.
Syntax:
``` sql
ALTER ROLE [IF EXISTS] name1 [ON CLUSTER cluster_name1] [RENAME TO new_name1]
[, name2 [ON CLUSTER cluster_name2] [RENAME TO new_name2] ...]
ALTER ROLE [IF EXISTS] name1 [RENAME TO new_name |, name2 [,...]]
[ON CLUSTER cluster_name]
[SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [CONST|READONLY|WRITABLE|CHANGEABLE_IN_READONLY] | PROFILE 'profile_name'] [,...]
```
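To illustrate the updated form, a sketch with hypothetical role and cluster names:

```sql
-- Several roles altered in one statement; ON CLUSTER now appears once, after the name list.
ALTER ROLE admin_role, analyst_role ON CLUSTER my_cluster
    SETTINGS max_memory_usage = 10000000 READONLY;
```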
View File
@ -11,7 +11,8 @@ Changes settings profiles.
Syntax:
``` sql
ALTER SETTINGS PROFILE [IF EXISTS] TO name1 [ON CLUSTER cluster_name1] [RENAME TO new_name1]
[, name2 [ON CLUSTER cluster_name2] [RENAME TO new_name2] ...]
ALTER SETTINGS PROFILE [IF EXISTS] name1 [RENAME TO new_name |, name2 [,...]]
[ON CLUSTER cluster_name]
[SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [CONST|READONLY|WRITABLE|CHANGEABLE_IN_READONLY] | INHERIT 'profile_name'] [,...]
[TO {{role1 | user1 [, role2 | user2 ...]} | NONE | ALL | ALL EXCEPT {role1 | user1 [, role2 | user2 ...]}}]
```
View File
@ -10,8 +10,8 @@ Changes ClickHouse user accounts.
Syntax:
``` sql
ALTER USER [IF EXISTS] name1 [ON CLUSTER cluster_name1] [RENAME TO new_name1]
[, name2 [ON CLUSTER cluster_name2] [RENAME TO new_name2] ...]
ALTER USER [IF EXISTS] name1 [RENAME TO new_name |, name2 [,...]]
[ON CLUSTER cluster_name]
[NOT IDENTIFIED | IDENTIFIED | ADD IDENTIFIED {[WITH {no_password | plaintext_password | sha256_password | sha256_hash | double_sha1_password | double_sha1_hash}] BY {'password' | 'hash'}} | {WITH ldap SERVER 'server_name'} | {WITH kerberos [REALM 'realm']} | {WITH ssl_certificate CN 'common_name' | SAN 'TYPE:subject_alt_name'}]
[[ADD | DROP] HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] | ANY | NONE]
[VALID UNTIL datetime]
View File
@ -10,7 +10,7 @@ Creates new [roles](../../../guides/sre/user-management/index.md#role-management
Syntax:
``` sql
CREATE ROLE [IF NOT EXISTS | OR REPLACE] name1 [ON CLUSTER cluster_name1] [, name2 [ON CLUSTER cluster_name2] ...]
CREATE ROLE [IF NOT EXISTS | OR REPLACE] name1 [, name2 [,...]] [ON CLUSTER cluster_name]
[IN access_storage_type]
[SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [CONST|READONLY|WRITABLE|CHANGEABLE_IN_READONLY] | PROFILE 'profile_name'] [,...]
```
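A sketch of the updated multi-name form (role and cluster names are hypothetical):

```sql
-- Two roles created in one statement and replicated via distributed DDL.
CREATE ROLE IF NOT EXISTS accountant, auditor ON CLUSTER my_cluster
    SETTINGS max_memory_usage = 10000000;
```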
View File
@ -10,10 +10,11 @@ Creates [settings profiles](../../../guides/sre/user-management/index.md#setting
Syntax:
``` sql
CREATE SETTINGS PROFILE [IF NOT EXISTS | OR REPLACE] name1 [ON CLUSTER cluster_name1]
[, name2 [ON CLUSTER cluster_name2] ...]
CREATE SETTINGS PROFILE [IF NOT EXISTS | OR REPLACE] name1 [, name2 [,...]]
[ON CLUSTER cluster_name]
[IN access_storage_type]
[SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [CONST|READONLY|WRITABLE|CHANGEABLE_IN_READONLY] | INHERIT 'profile_name'] [,...]
[TO {{role1 | user1 [, role2 | user2 ...]} | NONE | ALL | ALL EXCEPT {role1 | user1 [, role2 | user2 ...]}}]
```
The `ON CLUSTER` clause allows creating settings profiles on a cluster; see [Distributed DDL](../../../sql-reference/distributed-ddl.md).
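For example, a sketch with hypothetical profile and role names:

```sql
-- One profile on every cluster host, assigned to a role.
CREATE SETTINGS PROFILE IF NOT EXISTS readonly_profile ON CLUSTER my_cluster
    SETTINGS readonly = 1 TO analyst_role;
```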
View File
@ -10,8 +10,7 @@ Creates [user accounts](../../../guides/sre/user-management/index.md#user-accoun
Syntax:
``` sql
CREATE USER [IF NOT EXISTS | OR REPLACE] name1 [ON CLUSTER cluster_name1]
[, name2 [ON CLUSTER cluster_name2] ...]
CREATE USER [IF NOT EXISTS | OR REPLACE] name1 [, name2 [,...]] [ON CLUSTER cluster_name]
[NOT IDENTIFIED | IDENTIFIED {[WITH {no_password | plaintext_password | sha256_password | sha256_hash | double_sha1_password | double_sha1_hash}] BY {'password' | 'hash'}} | {WITH ldap SERVER 'server_name'} | {WITH kerberos [REALM 'realm']} | {WITH ssl_certificate CN 'common_name' | SAN 'TYPE:subject_alt_name'} | {WITH ssh_key BY KEY 'public_key' TYPE 'ssh-rsa|...'} | {WITH http SERVER 'server_name' [SCHEME 'Basic']}]
[HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] | ANY | NONE]
[VALID UNTIL datetime]
View File
@ -6,7 +6,7 @@ sidebar_label: VIEW
# CREATE VIEW
Creates a new view. Views can be [normal](#normal-view), [materialized](#materialized-view), [live](#live-view-deprecated), and [window](#window-view-experimental) (live view and window view are experimental features).
Creates a new view. Views can be [normal](#normal-view), [materialized](#materialized-view), [refreshable materialized](#refreshable-materialized-view), and [window](#window-view-experimental) (refreshable materialized view and window view are experimental features).
## Normal View
View File
@ -15,7 +15,7 @@ Always returns `Ok.` regardless of the result of the internal dictionary update.
## RELOAD DICTIONARIES
Reloads all dictionaries that have been successfully loaded before.
By default, dictionaries are loaded lazily (see [dictionaries_lazy_load](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-dictionaries_lazy_load)), so instead of being loaded automatically at startup, they are initialized on first access through the dictGet function or a SELECT from tables with ENGINE = Dictionary. The `SYSTEM RELOAD DICTIONARIES` query reloads such dictionaries (LOADED).
By default, dictionaries are loaded lazily (see [dictionaries_lazy_load](../../operations/server-configuration-parameters/settings.md#dictionaries_lazy_load)), so instead of being loaded automatically at startup, they are initialized on first access through the dictGet function or a SELECT from tables with ENGINE = Dictionary. The `SYSTEM RELOAD DICTIONARIES` query reloads such dictionaries (LOADED).
Always returns `Ok.` regardless of the result of the dictionary update.
**Syntax**
@ -146,7 +146,7 @@ If a tag is specified, only query cache entries with the specified tag are delet
## DROP FORMAT SCHEMA CACHE {#system-drop-schema-format}
Clears cache for schemas loaded from [format_schema_path](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-format_schema_path).
Clears cache for schemas loaded from [format_schema_path](../../operations/server-configuration-parameters/settings.md#format_schema_path).
Supported formats:
View File
@ -18,7 +18,7 @@ file([path_to_archive ::] path [,format] [,structure] [,compression])
**Parameters**
- `path` — The relative path to the file from [user_files_path](/docs/en/operations/server-configuration-parameters/settings.md#server_configuration_parameters-user_files_path). Supports in read-only mode the following [globs](#globs-in-path): `*`, `?`, `{abc,def}` (with `'abc'` and `'def'` being strings) and `{N..M}` (with `N` and `M` being numbers).
- `path` — The relative path to the file from [user_files_path](/docs/en/operations/server-configuration-parameters/settings.md#user_files_path). Supports in read-only mode the following [globs](#globs-in-path): `*`, `?`, `{abc,def}` (with `'abc'` and `'def'` being strings) and `{N..M}` (with `N` and `M` being numbers).
- `path_to_archive` - The relative path to a zip/tar/7z archive. Supports the same globs as `path`.
- `format` — The [format](/docs/en/interfaces/formats.md#formats) of the file.
- `structure` — Structure of the table. Format: `'column1_name column1_type, column2_name column2_type, ...'`.
@ -128,7 +128,7 @@ Reading data from `table.csv`, located in `archive1.zip` or/and `archive2.zip`:
SELECT * FROM file('user_files/archives/archive{1..2}.zip :: table.csv');
```
## Globs in path
Paths may use globbing. Files must match the whole path pattern, not only the suffix or prefix. There is one exception: if the path refers to an existing
directory and does not use globs, a `*` will be implicitly added to the path so
View File
@ -22,7 +22,7 @@ fileCluster(cluster_name, path[, format, structure, compression_method])
**Arguments**
- `cluster_name` — Name of a cluster that is used to build a set of addresses and connection parameters to remote and local servers.
- `path` — The relative path to the file from [user_files_path](/docs/en/operations/server-configuration-parameters/settings.md#server_configuration_parameters-user_files_path). Path to file also supports [globs](#globs-in-path).
- `path` — The relative path to the file from [user_files_path](/docs/en/operations/server-configuration-parameters/settings.md#user_files_path). Path to file also supports [globs](#globs-in-path).
- `format` — [Format](../../interfaces/formats.md#formats) of the files. Type: [String](../../sql-reference/data-types/string.md).
- `structure` — Table structure in `'UserID UInt64, Name String'` format. Determines column names and types. Type: [String](../../sql-reference/data-types/string.md).
- `compression_method` — Compression method. Supported compression types are `gz`, `br`, `xz`, `zst`, `lz4`, and `bz2`.
View File
@ -39,6 +39,18 @@ If you are using the MongoDB Atlas cloud offering please add these options:
:::
You can also connect by URI:
``` sql
mongodb(uri, collection, structure)
```
**Arguments**
- `uri` — Connection string.
- `collection` — Remote collection name.
- `structure` — The schema for the ClickHouse table returned from this function.
**Returned Value**
A table object with the same columns as the original MongoDB table.
@ -76,6 +88,16 @@ SELECT * FROM mongodb(
)
```
or:
```sql
SELECT * FROM mongodb(
'mongodb://test_user:password@127.0.0.1:27017/test?connectionTimeoutMS=10000',
'my_collection',
'log_type String, host String, command String'
)
```
**See Also**
- [The `MongoDB` table engine](/docs/en/engines/table-engines/integrations/mongodb.md)
View File
@ -151,3 +151,7 @@ CREATE TABLE pg_table_schema_with_dots (a UInt32)
- Blog: [ClickHouse and PostgreSQL - a match made in data heaven - part 1](https://clickhouse.com/blog/migrating-data-between-clickhouse-postgres)
- Blog: [ClickHouse and PostgreSQL - a Match Made in Data Heaven - part 2](https://clickhouse.com/blog/migrating-data-between-clickhouse-postgres-part-2)
### Replicating or migrating Postgres data with PeerDB
> In addition to table functions, you can always use [PeerDB](https://docs.peerdb.io/introduction) by ClickHouse to set up a continuous data pipeline from Postgres to ClickHouse. PeerDB is a tool designed specifically to replicate data from Postgres to ClickHouse using change data capture (CDC).
View File
@ -27,7 +27,7 @@ remoteSecure(named_collection[, option=value [,..]])
The `host` can be specified as a server name, or as an IPv4 or IPv6 address. An IPv6 address must be specified in square brackets.
The `port` is the TCP port on the remote server. If the port is omitted, it uses [tcp_port](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-tcp_port) from the server config file for table function `remote` (by default, 9000) and [tcp_port_secure](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-tcp_port_secure) for table function `remoteSecure` (by default, 9440).
The `port` is the TCP port on the remote server. If the port is omitted, it uses [tcp_port](../../operations/server-configuration-parameters/settings.md#tcp_port) from the server config file for table function `remote` (by default, 9000) and [tcp_port_secure](../../operations/server-configuration-parameters/settings.md#tcp_port_secure) for table function `remoteSecure` (by default, 9440).
For IPv6 addresses, a port is required.
View File
@ -58,11 +58,11 @@ Count the total amount of rows in all files in the cluster `cluster_simple`:
If your listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use `?`.
:::
For production use cases it is recommended to use [named collections](/docs/en/operations/named-collections.md). Here is the example:
For production use cases, it is recommended to use [named collections](/docs/en/operations/named-collections.md). Here is the example:
``` sql
CREATE NAMED COLLECTION creds AS
access_key_id = 'minio'
access_key_id = 'minio',
secret_access_key = 'minio123';
SELECT count(*) FROM s3Cluster(
'cluster_simple', creds, url='https://s3-object-url.csv',
View File
@ -50,7 +50,7 @@ Connection: Close
Content-Type: text/tab-separated-values; charset=UTF-8
X-ClickHouse-Server-Display-Name: clickhouse.ru-central1.internal
X-ClickHouse-Query-Id: 5abe861c-239c-467f-b955-8a201abb8b7f
X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334", "real_time_microseconds":"0"}
X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
1
```
@ -367,7 +367,7 @@ $ curl -v 'http://localhost:8123/predefined_query'
< X-ClickHouse-Format: Template
< X-ClickHouse-Timezone: Asia/Shanghai
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0", "elapsed_ns":"662334", "real_time_microseconds":"0"}
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0"}
<
# HELP "Query" "Number of executing queries"
# TYPE "Query" counter
@ -601,7 +601,7 @@ $ curl -v -H 'XXX:xxx' 'http://localhost:8123/get_config_static_handler'
< Content-Type: text/plain; charset=UTF-8
< Transfer-Encoding: chunked
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334", "real_time_microseconds":"0"}
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
<
* Connection #0 to host localhost left intact
<html ng-app="SMI2"><head><base href="http://ui.tabix.io/"></head><body><div ui-view="" class="content-ui"></div><script src="http://loader.tabix.io/master.js"></script></body></html>%
@ -659,7 +659,7 @@ $ curl -vv -H 'XXX:xxx' 'http://localhost:8123/get_absolute_path_static_handler'
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334", "real_time_microseconds":"0"}
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
<
<html><body>Absolute Path File</body></html>
* Connection #0 to host localhost left intact
@ -678,7 +678,7 @@ $ curl -vv -H 'XXX:xxx' 'http://localhost:8123/get_relative_path_static_handler'
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334", "real_time_microseconds":"0"}
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
<
<html><body>Relative Path File</body></html>
* Connection #0 to host localhost left intact
View File
@ -110,7 +110,7 @@ ClickHouse Keeper может использоваться как равноце
## How to run {#how-to-run}
ClickHouse Keeper is included in the `clickhouse-server` package: just add the `<keeper_server>` configuration and start the ClickHouse server as usual. If you want to run ClickHouse Keeper standalone, you can start it in a similar way:
```bash
clickhouse-keeper --config /etc/your_path_to_config/config.xml --daemon
View File
@ -6,7 +6,7 @@ sidebar_label: Date32
# Date32 {#data_type-datetime32}
A date. Supports the same range of dates as [DateTime64](../../sql-reference/data-types/datetime64.md). The value is stored in four bytes and corresponds to the number of days from 1900-01-01 to 2299-12-31.
A date. Supports the same range of dates as [DateTime64](../../sql-reference/data-types/datetime64.md). The value is stored in four bytes representing the days since 1970-01-01 (0 represents 1970-01-01, and negative values represent days before 1970).
**Example**
View File
@ -9,12 +9,12 @@ sidebar_label: DateTime64
Allows storing an instant in time that can be expressed as a calendar date and a time of day, with a given sub-second precision.
Tick size (precision): 10<sup>-precision</sup> seconds, where precision is an integer parameter. Possible values: [ 0 : 9 ].
Commonly used values are 3 (milliseconds), 6 (microseconds), and 9 (nanoseconds).
Commonly used values are 3 (milliseconds, the default), 6 (microseconds), and 9 (nanoseconds).
**Syntax:**
``` sql
DateTime64(precision, [timezone])
DateTime64[(precision, [timezone])]
```
Data is stored as the number of 'ticks' elapsed since the start of the epoch (1970-01-01 00:00:00 UTC), as Int64. The tick size is determined by the precision parameter. Additionally, the `DateTime64` type can store a time zone that is the same for the entire column and affects how `DateTime64` values are displayed in text form and how values specified as strings are parsed (2020-01-01 05:00:01.000). The time zone is not stored in the table rows (or in query results), but in the column metadata. See [DateTime](datetime.md) for details.
View File
@ -1,17 +1,17 @@
---
slug: /ru/sql-reference/distributed-ddl
sidebar_position: 32
sidebar_label: "Распределенные DDL запросы"
sidebar_label: "Распределенные DDL-запросы"
---
# Distributed DDL queries (ON CLUSTER clause) {#raspredelennye-ddl-zaprosy-sektsiia-on-cluster}
The `CREATE`, `DROP`, `ALTER`, and `RENAME` queries support distributed execution on a cluster.
For example, the following query creates a Distributed table `all_hits` on every host in `cluster`:
``` sql
CREATE TABLE IF NOT EXISTS all_hits ON CLUSTER cluster (p Date, i Int32) ENGINE = Distributed(cluster, default, hits)
```
For such queries to run correctly, every host must have the same cluster definition (to simplify syncing configs, you can use substitutions from ZooKeeper). The hosts must also be able to connect to the ZooKeeper servers.
The local version of the query is eventually executed on every host of the cluster, even if some hosts are currently unavailable. The order in which queries are executed on a single host is guaranteed.
View File
@ -219,8 +219,8 @@ FROM system.numbers LIMIT 5 FORMAT TabSeparated;
``` text
(0,'2019-05-20') 0 \N \N (NULL,NULL)
(1,'2019-05-20') 1 First First ('First','First')
(2,'2019-05-20') 0 \N \N (NULL,NULL)
(3,'2019-05-20') 0 \N \N (NULL,NULL)
(2,'2019-05-20') 1 Second \N ('Second',NULL)
(3,'2019-05-20') 1 Third Third ('Third','Third')
(4,'2019-05-20') 0 \N \N (NULL,NULL)
```
View File
@ -124,14 +124,14 @@ SELECT hex(sipHash128('foo', '\x01', 3));
└──────────────────────────────────┘
```
## ripeMD160
## RIPEMD160
Generates the [RIPEMD-160](https://en.wikipedia.org/wiki/RIPEMD) hash of a string.
**Syntax**
```sql
ripeMD160(input)
RIPEMD160(input)
```
**Arguments**
@ -149,11 +149,11 @@ ripeMD160(input)
Query:
```sql
SELECT hex(ripeMD160('The quick brown fox jumps over the lazy dog'));
SELECT hex(RIPEMD160('The quick brown fox jumps over the lazy dog'));
```
Result:
```response
┌─hex(ripeMD160('The quick brown fox jumps over the lazy dog'))─┐
┌─hex(RIPEMD160('The quick brown fox jumps over the lazy dog'))─┐
│ 37F332F68DB77BD9D7EDD4969571AD671CF9DD3B │
└───────────────────────────────────────────────────────────────┘
```
View File
@ -11,7 +11,7 @@ sidebar_label: ROLE
Syntax:
``` sql
ALTER ROLE [IF EXISTS] name1 [ON CLUSTER cluster_name1] [RENAME TO new_name1]
[, name2 [ON CLUSTER cluster_name2] [RENAME TO new_name2] ...]
ALTER ROLE [IF EXISTS] name1 [RENAME TO new_name |, name2 [,...]]
[ON CLUSTER cluster_name]
[SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [CONST|READONLY|WRITABLE|CHANGEABLE_IN_READONLY] | PROFILE 'profile_name'] [,...]
```
View File
@ -11,7 +11,8 @@ sidebar_label: SETTINGS PROFILE
Syntax:
``` sql
ALTER SETTINGS PROFILE [IF EXISTS] TO name1 [ON CLUSTER cluster_name1] [RENAME TO new_name1]
[, name2 [ON CLUSTER cluster_name2] [RENAME TO new_name2] ...]
ALTER SETTINGS PROFILE [IF EXISTS] name1 [RENAME TO new_name |, name2 [,...]]
[ON CLUSTER cluster_name]
[SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [CONST|READONLY|WRITABLE|CHANGEABLE_IN_READONLY] | INHERIT 'profile_name'] [,...]
[TO {{role1 | user1 [, role2 | user2 ...]} | NONE | ALL | ALL EXCEPT {role1 | user1 [, role2 | user2 ...]}}]
```
View File
@ -11,8 +11,8 @@ sidebar_label: USER
Syntax:
``` sql
ALTER USER [IF EXISTS] name1 [ON CLUSTER cluster_name1] [RENAME TO new_name1]
[, name2 [ON CLUSTER cluster_name2] [RENAME TO new_name2] ...]
ALTER USER [IF EXISTS] name1 [RENAME TO new_name |, name2 [,...]]
[ON CLUSTER cluster_name]
[NOT IDENTIFIED | IDENTIFIED {[WITH {no_password | plaintext_password | sha256_password | sha256_hash | double_sha1_password | double_sha1_hash}] BY {'password' | 'hash'}} | {WITH ldap SERVER 'server_name'} | {WITH kerberos [REALM 'realm']}]
[[ADD | DROP] HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] | ANY | NONE]
[DEFAULT ROLE role [,...] | ALL | ALL EXCEPT role [,...] ]
View File
@ -12,7 +12,8 @@ sidebar_label: "Квота"
``` sql
CREATE QUOTA [IF NOT EXISTS | OR REPLACE] name [ON CLUSTER cluster_name]
[KEYED BY {user_name | ip_address | client_key | client_key, user_name | client_key, ip_address} | NOT KEYED]
[IN access_storage_type]
[KEYED BY {user_name | ip_address | client_key | client_key,user_name | client_key,ip_address} | NOT KEYED]
[FOR [RANDOMIZED] INTERVAL number {second | minute | hour | day | week | month | quarter | year}
{MAX { {queries | query_selects | query_inserts | errors | result_rows | result_bytes | read_rows | read_bytes | execution_time} = number } [,...] |
NO LIMITS | TRACKING ONLY} [,...]]
View File
@ -11,7 +11,8 @@ sidebar_label: "Роль"
Syntax:
```sql
CREATE ROLE [IF NOT EXISTS | OR REPLACE] name1 [ON CLUSTER cluster_name1] [, name2 [ON CLUSTER cluster_name2] ...]
CREATE ROLE [IF NOT EXISTS | OR REPLACE] name1 [, name2 [,...]] [ON CLUSTER cluster_name]
[IN access_storage_type]
[SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [CONST|READONLY|WRITABLE|CHANGEABLE_IN_READONLY] | PROFILE 'profile_name'] [,...]
```
View File
@ -11,9 +11,11 @@ sidebar_label: "Профиль настроек"
Syntax:
``` sql
CREATE SETTINGS PROFILE [IF NOT EXISTS | OR REPLACE] name1 [ON CLUSTER cluster_name1]
[, name2 [ON CLUSTER cluster_name2] ...]
CREATE SETTINGS PROFILE [IF NOT EXISTS | OR REPLACE] name1 [, name2 [,...]]
[ON CLUSTER cluster_name]
[IN access_storage_type]
[SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [CONST|READONLY|WRITABLE|CHANGEABLE_IN_READONLY] | INHERIT 'profile_name'] [,...]
[TO {{role1 | user1 [, role2 | user2 ...]} | NONE | ALL | ALL EXCEPT {role1 | user1 [, role2 | user2 ...]}}]
```
The `ON CLUSTER` clause allows creating profiles on a cluster; see [Distributed DDL queries](../../../sql-reference/distributed-ddl.md).
View File
@ -11,8 +11,7 @@ sidebar_label: "Пользователь"
Syntax:
``` sql
CREATE USER [IF NOT EXISTS | OR REPLACE] name1 [ON CLUSTER cluster_name1]
[, name2 [ON CLUSTER cluster_name2] ...]
CREATE USER [IF NOT EXISTS | OR REPLACE] name1 [, name2 [,...]] [ON CLUSTER cluster_name]
[NOT IDENTIFIED | IDENTIFIED {[WITH {no_password | plaintext_password | sha256_password | sha256_hash | double_sha1_password | double_sha1_hash}] BY {'password' | 'hash'}} | {WITH ldap SERVER 'server_name'} | {WITH kerberos [REALM 'realm']} | {WITH ssl_certificate CN 'common_name' | SAN 'TYPE:subject_alt_name'} | {WITH ssh_key BY KEY 'public_key' TYPE 'ssh-rsa|...'}]
[HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] | ANY | NONE]
[DEFAULT ROLE role [,...]]
View File
@ -53,7 +53,7 @@ Connection: Close
Content-Type: text/tab-separated-values; charset=UTF-8
X-ClickHouse-Server-Display-Name: clickhouse.ru-central1.internal
X-ClickHouse-Query-Id: 5abe861c-239c-467f-b955-8a201abb8b7f
X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334","real_time_microseconds":"0"}
X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
1
```
@ -363,7 +363,7 @@ $ curl -v 'http://localhost:8123/predefined_query'
< X-ClickHouse-Format: Template
< X-ClickHouse-Timezone: Asia/Shanghai
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334", "real_time_microseconds":"0"}
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
<
# HELP "Query" "Number of executing queries"
# TYPE "Query" counter
@ -524,7 +524,7 @@ $ curl -vv -H 'XXX:xxx' 'http://localhost:8123/hi'
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334", "real_time_microseconds":"0"}
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
<
* Connection #0 to host localhost left intact
Say Hi!%
@ -564,7 +564,7 @@ $ curl -v -H 'XXX:xxx' 'http://localhost:8123/get_config_static_handler'
< Content-Type: text/plain; charset=UTF-8
< Transfer-Encoding: chunked
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334","real_time_microseconds":"0"}
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
<
* Connection #0 to host localhost left intact
<html ng-app="SMI2"><head><base href="http://ui.tabix.io/"></head><body><div ui-view="" class="content-ui"></div><script src="http://loader.tabix.io/master.js"></script></body></html>%
@ -616,7 +616,7 @@ $ curl -vv -H 'XXX:xxx' 'http://localhost:8123/get_absolute_path_static_handler'
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334","real_time_microseconds":"0"}
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
<
<html><body>Absolute Path File</body></html>
* Connection #0 to host localhost left intact
@ -635,7 +635,7 @@ $ curl -vv -H 'XXX:xxx' 'http://localhost:8123/get_relative_path_static_handler'
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334","real_time_microseconds":"0"}
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
<
<html><body>Relative Path File</body></html>
View File

@ -35,7 +35,7 @@ CREATE TABLE table_with_ttl
)
ENGINE MergeTree()
ORDER BY tuple()
TTL event_time + INTERVAL 3 MONTH;
TTL event_time + INTERVAL 3 MONTH
SETTINGS min_bytes_for_wide_part = 0;
INSERT INTO table_with_ttl VALUES (now(), 1, 'username1');
View File
@ -188,9 +188,9 @@ int mainEntryClickHouseFormat(int argc, char ** argv)
registerInterpreters();
registerFunctions();
registerAggregateFunctions();
registerTableFunctions();
registerTableFunctions(false);
registerDatabases();
registerStorages();
registerStorages(false);
registerFormats();
std::unordered_set<std::string> additional_names;
View File
@ -19,7 +19,6 @@ target_link_libraries(clickhouse-library-bridge PRIVATE
daemon
dbms
bridge
clickhouse_functions
)
set_target_properties(clickhouse-library-bridge PROPERTIES RUNTIME_OUTPUT_DIRECTORY ..)
View File
@ -507,10 +507,10 @@ try
/// Don't initialize DateLUT
registerFunctions();
registerAggregateFunctions();
registerTableFunctions();
registerTableFunctions(server_settings.use_legacy_mongodb_integration);
registerDatabases();
registerStorages();
registerDictionaries();
registerStorages(server_settings.use_legacy_mongodb_integration);
registerDictionaries(server_settings.use_legacy_mongodb_integration);
registerDisks(/* global_skip_access_check= */ true);
registerFormats();
View File
@ -24,7 +24,6 @@ target_link_libraries(clickhouse-odbc-bridge PRIVATE
clickhouse_parsers
ch_contrib::nanodbc
ch_contrib::unixodbc
clickhouse_functions
)
View File

@ -1,2 +1,2 @@
clickhouse_add_executable (validate-odbc-connection-string validate-odbc-connection-string.cpp ../validateODBCConnectionString.cpp)
target_link_libraries (validate-odbc-connection-string PRIVATE clickhouse_common_io clickhouse_common_config)
target_link_libraries (validate-odbc-connection-string PRIVATE dbms clickhouse_functions clickhouse_common_io clickhouse_common_config)
View File
@ -784,10 +784,10 @@ try
registerInterpreters();
registerFunctions();
registerAggregateFunctions();
registerTableFunctions();
registerTableFunctions(server_settings.use_legacy_mongodb_integration);
registerDatabases();
registerStorages();
registerDictionaries();
registerStorages(server_settings.use_legacy_mongodb_integration);
registerDictionaries(server_settings.use_legacy_mongodb_integration);
registerDisks(/* global_skip_access_check= */ false);
registerFormats();
registerRemoteFileMetadatas();
View File
@ -191,8 +191,8 @@ void LDAPAccessStorage::applyRoleChangeNoLock(bool grant, const UUID & role_id,
}
else
{
granted_role_names.erase(role_id);
granted_role_ids.erase(role_name);
granted_role_names.erase(role_id);
}
}
View File
@ -1,4 +1,4 @@
if (TARGET ch_contrib::krb5)
clickhouse_add_executable (kerberos_init kerberos_init.cpp)
target_link_libraries (kerberos_init PRIVATE dbms clickhouse_functions ch_contrib::krb5)
target_link_libraries (kerberos_init PRIVATE dbms ch_contrib::krb5)
endif()
View File
@ -3,20 +3,17 @@ add_headers_and_sources(clickhouse_aggregate_functions .)
add_headers_and_sources(clickhouse_aggregate_functions Combinators)
extract_into_parent_list(clickhouse_aggregate_functions_sources dbms_sources
IAggregateFunction.cpp
AggregateFunctionFactory.cpp
Combinators/AggregateFunctionCombinatorFactory.cpp
Combinators/AggregateFunctionState.cpp
AggregateFunctionCount.cpp
parseAggregateFunctionParameters.cpp
AggregateFunctionFactory.cpp # AggregateFunctionFactory::instance() Used many times
Combinators/AggregateFunctionCombinatorFactory.cpp # AggregateFunctionCombinatorFactory::instance() Used by AggregateFunctionFactory.cpp
AggregateFunctionCount.cpp # Used in optimizeUseAggregateProjection.cpp
IAggregateFunction.cpp # For AggregateFunctionCount.cpp, WindowTransform.cpp, FunctionsConversion.cpp...
parseAggregateFunctionParameters.cpp # getAggregateFunctionNameAndParametersArray, getAggregateFunctionParametersArray (Graphite.cpp, ExpressionAnalyzer.cpp)
)
extract_into_parent_list(clickhouse_aggregate_functions_headers dbms_headers
IAggregateFunction.h
Combinators/IAggregateFunctionCombinator.h
AggregateFunctionFactory.h
Combinators/AggregateFunctionCombinatorFactory.h
Combinators/AggregateFunctionState.h
AggregateFunctionCount.cpp
FactoryHelpers.h
parseAggregateFunctionParameters.h
)

Some files were not shown because too many files have changed in this diff.