Mirror of https://github.com/ClickHouse/ClickHouse.git, synced 2024-12-05 05:52:05 +00:00

Merge branch 'master' into tuple-of-intervals

Commit 3f37636654
.clang-tidy

@@ -1,7 +1,7 @@
# To run clang-tidy from CMake, build ClickHouse with -DENABLE_CLANG_TIDY=1. To show all warnings, it is
# recommended to pass "-k0" to Ninja.

-# Enable all checks + disale selected checks. Feel free to remove disabled checks from below list if
+# Enable all checks + disable selected checks. Feel free to remove disabled checks from below list if
# a) the new check is not controversial (this includes many checks in readability-* and google-*) or
# b) too noisy (checks with > 100 new warnings are considered noisy, this includes e.g. cppcoreguidelines-*).
.gitattributes (vendored): 1 changed line

@@ -1,3 +1,4 @@
contrib/* linguist-vendored
*.h linguist-language=C++
tests/queries/0_stateless/data_json/* binary
+tests/queries/0_stateless/*.reference -crlf
CHANGELOG.md: 51 changed lines

@@ -1,6 +1,6 @@
### Table of Contents
**[ClickHouse release v22.9, 2022-09-22](#229)**<br/>
-**[ClickHouse release v22.8, 2022-08-18](#228)**<br/>
+**[ClickHouse release v22.8-lts, 2022-08-18](#228)**<br/>
**[ClickHouse release v22.7, 2022-07-21](#227)**<br/>
**[ClickHouse release v22.6, 2022-06-16](#226)**<br/>
**[ClickHouse release v22.5, 2022-05-19](#225)**<br/>
@@ -10,10 +10,10 @@
**[ClickHouse release v22.1, 2022-01-18](#221)**<br/>
**[Changelog for 2021](https://clickhouse.com/docs/en/whats-new/changelog/2021/)**<br/>

### <a id="229"></a> ClickHouse release 22.9, 2022-09-22

#### Backward Incompatible Change

* Upgrade from 20.3 and older to 22.9 and newer should be done through an intermediate version if there are any `ReplicatedMergeTree` tables, otherwise server with the new version will not start. [#40641](https://github.com/ClickHouse/ClickHouse/pull/40641) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Remove the functions `accurate_Cast` and `accurate_CastOrNull` (they are different to `accurateCast` and `accurateCastOrNull` by underscore in the name and they are not affected by the value of `cast_keep_nullable` setting). These functions were undocumented, untested, unused, and unneeded. They appeared to be alive due to code generalization. [#40682](https://github.com/ClickHouse/ClickHouse/pull/40682) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Add a test to ensure that every new table function will be documented. See [#40649](https://github.com/ClickHouse/ClickHouse/issues/40649). Rename table function `MeiliSearch` to `meilisearch`. [#40709](https://github.com/ClickHouse/ClickHouse/pull/40709) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
@@ -21,6 +21,7 @@
* Make interpretation of YAML configs to be more conventional. [#41044](https://github.com/ClickHouse/ClickHouse/pull/41044) ([Vitaly Baranov](https://github.com/vitlibar)).

#### New Feature

* Support `insert_quorum = 'auto'` to use majority number. [#39970](https://github.com/ClickHouse/ClickHouse/pull/39970) ([Sachin](https://github.com/SachinSetiya)).
* Add embedded dashboards to ClickHouse server. This is a demo project about how to achieve 90% results with 1% effort using ClickHouse features. [#40461](https://github.com/ClickHouse/ClickHouse/pull/40461) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Added new settings constraint writability kind `changeable_in_readonly`. [#40631](https://github.com/ClickHouse/ClickHouse/pull/40631) ([Sergei Trifonov](https://github.com/serxa)).
@@ -38,6 +39,7 @@
* Improvement for in-memory data parts: remove completely processed WAL files. [#40592](https://github.com/ClickHouse/ClickHouse/pull/40592) ([Azat Khuzhin](https://github.com/azat)).

#### Performance Improvement

* Implement compression of marks and primary key. Close [#34437](https://github.com/ClickHouse/ClickHouse/issues/34437). [#37693](https://github.com/ClickHouse/ClickHouse/pull/37693) ([zhongyuankai](https://github.com/zhongyuankai)).
* Allow to load marks with threadpool in advance. Regulated by setting `load_marks_asynchronously` (default: 0). [#40821](https://github.com/ClickHouse/ClickHouse/pull/40821) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Virtual filesystem over s3 will use random object names split into multiple path prefixes for better performance on AWS. [#40968](https://github.com/ClickHouse/ClickHouse/pull/40968) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
@@ -58,6 +60,7 @@
* Parallel hash JOIN for Float data types might be suboptimal. Make it better. [#41183](https://github.com/ClickHouse/ClickHouse/pull/41183) ([Alexey Milovidov](https://github.com/alexey-milovidov)).

#### Improvement

* During startup and ATTACH call, `ReplicatedMergeTree` tables will be readonly until the ZooKeeper connection is made and the setup is finished. [#40148](https://github.com/ClickHouse/ClickHouse/pull/40148) ([Antonio Andelic](https://github.com/antonio2368)).
* Add `enable_extended_results_for_datetime_functions` option to return results of type Date32 for functions toStartOfYear, toStartOfISOYear, toStartOfQuarter, toStartOfMonth, toStartOfWeek, toMonday and toLastDayOfMonth when argument is Date32 or DateTime64, otherwise results of Date type are returned. For compatibility reasons default value is ‘0’. [#41214](https://github.com/ClickHouse/ClickHouse/pull/41214) ([Roman Vasin](https://github.com/rvasin)).
* For security and stability reasons, CatBoost models are no longer evaluated within the ClickHouse server. Instead, the evaluation is now done in the clickhouse-library-bridge, a separate process that loads the catboost library and communicates with the server process via HTTP. [#40897](https://github.com/ClickHouse/ClickHouse/pull/40897) ([Robert Schulze](https://github.com/rschu1ze)). [#39629](https://github.com/ClickHouse/ClickHouse/pull/39629) ([Robert Schulze](https://github.com/rschu1ze)).
@@ -108,6 +111,7 @@
* Add `has_lightweight_delete` to system.parts. [#41564](https://github.com/ClickHouse/ClickHouse/pull/41564) ([Kseniia Sumarokova](https://github.com/kssenii)).

#### Build/Testing/Packaging Improvement

* Enforce documentation for every setting. [#40644](https://github.com/ClickHouse/ClickHouse/pull/40644) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Enforce documentation for every current metric. [#40645](https://github.com/ClickHouse/ClickHouse/pull/40645) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Enforce documentation for every profile event counter. Write the documentation where it was missing. [#40646](https://github.com/ClickHouse/ClickHouse/pull/40646) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
@@ -217,15 +221,16 @@
* Fix read bytes/rows in X-ClickHouse-Summary with materialized views. [#41586](https://github.com/ClickHouse/ClickHouse/pull/41586) ([Raúl Marín](https://github.com/Algunenano)).
* Fix possible `pipeline stuck` exception for queries with `OFFSET`. The error was found with `enable_optimize_predicate_expression = 0` and always false condition in `WHERE`. Fixes [#41383](https://github.com/ClickHouse/ClickHouse/issues/41383). [#41588](https://github.com/ClickHouse/ClickHouse/pull/41588) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).

-### <a id="228"></a> ClickHouse release 22.8, 2022-08-18
+### <a id="228"></a> ClickHouse release 22.8-lts, 2022-08-18

#### Backward Incompatible Change

* Extended range of `Date32` and `DateTime64` to support dates from the year 1900 to 2299. In previous versions, the supported interval was only from the year 1925 to 2283. The implementation is using the proleptic Gregorian calendar (which is conformant with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601):2004 (clause 3.2.1 The Gregorian calendar)) instead of accounting for historical transitions from the Julian to the Gregorian calendar. This change affects implementation-specific behavior for out-of-range arguments. E.g. if in previous versions the value of `1899-01-01` was clamped to `1925-01-01`, in the new version it will be clamped to `1900-01-01`. It changes the behavior of rounding with `toStartOfInterval` if you pass `INTERVAL 3 QUARTER` up to one quarter because the intervals are counted from an implementation-specific point of time. Closes [#28216](https://github.com/ClickHouse/ClickHouse/issues/28216), improves [#38393](https://github.com/ClickHouse/ClickHouse/issues/38393). [#39425](https://github.com/ClickHouse/ClickHouse/pull/39425) ([Roman Vasin](https://github.com/rvasin)).
* Now, all relevant dictionary sources respect `remote_url_allow_hosts` setting. It was already done for HTTP, Cassandra, Redis. Added ClickHouse, MongoDB, MySQL, PostgreSQL. Host is checked only for dictionaries created from DDL. [#39184](https://github.com/ClickHouse/ClickHouse/pull/39184) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Make the remote filesystem cache composable, allow not to evict certain files (regarding idx, mrk, ..), delete old cache version. Now it is possible to configure cache over Azure blob storage disk, over Local disk, over StaticWeb disk, etc. This PR is marked backward incompatible because cache configuration changes and in order for cache to work need to update the config file. Old cache will still be used with new configuration. The server will startup fine with the old cache configuration. Closes https://github.com/ClickHouse/ClickHouse/issues/36140. Closes https://github.com/ClickHouse/ClickHouse/issues/37889. ([Kseniia Sumarokova](https://github.com/kssenii)). [#36171](https://github.com/ClickHouse/ClickHouse/pull/36171))

#### New Feature

* Query parameters can be set in interactive mode as `SET param_abc = 'def'` and transferred via the native protocol as settings. [#39906](https://github.com/ClickHouse/ClickHouse/pull/39906) ([Nikita Taranov](https://github.com/nickitat)).
* Quota key can be set in the native protocol ([Yakov Olkhovsky](https://github.com/ClickHouse/ClickHouse/pull/39874)).
* Added a setting `exact_rows_before_limit` (0/1). When enabled, ClickHouse will provide exact value for `rows_before_limit_at_least` statistic, but with the cost that the data before limit will have to be read completely. This closes [#6613](https://github.com/ClickHouse/ClickHouse/issues/6613). [#25333](https://github.com/ClickHouse/ClickHouse/pull/25333) ([kevin wan](https://github.com/MaxWk)).
@@ -240,12 +245,14 @@
* Add new setting schema_inference_hints that allows to specify structure hints in schema inference for specific columns. Closes [#39569](https://github.com/ClickHouse/ClickHouse/issues/39569). [#40068](https://github.com/ClickHouse/ClickHouse/pull/40068) ([Kruglov Pavel](https://github.com/Avogar)).

#### Experimental Feature

* Support SQL standard DELETE FROM syntax on merge tree tables and lightweight delete implementation for merge tree families. [#37893](https://github.com/ClickHouse/ClickHouse/pull/37893) ([Jianmei Zhang](https://github.com/zhangjmruc)) ([Alexander Gololobov](https://github.com/davenger)). Note: this new feature does not make ClickHouse an HTAP DBMS.

#### Performance Improvement

* Improved memory usage during memory efficient merging of aggregation results. [#39429](https://github.com/ClickHouse/ClickHouse/pull/39429) ([Nikita Taranov](https://github.com/nickitat)).
* Added concurrency control logic to limit total number of concurrent threads created by queries. [#37558](https://github.com/ClickHouse/ClickHouse/pull/37558) ([Sergei Trifonov](https://github.com/serxa)). Add `concurrent_threads_soft_limit parameter` to increase performance in case of high QPS by means of limiting total number of threads for all queries. [#37285](https://github.com/ClickHouse/ClickHouse/pull/37285) ([Roman Vasin](https://github.com/rvasin)).
* Add `SLRU` cache policy for uncompressed cache and marks cache. ([Kseniia Sumarokova](https://github.com/kssenii)). [#34651](https://github.com/ClickHouse/ClickHouse/pull/34651) ([alexX512](https://github.com/alexX512)). Decoupling local cache function and cache algorithm [#38048](https://github.com/ClickHouse/ClickHouse/pull/38048) ([Han Shukai](https://github.com/KinderRiven)).
* Intel® In-Memory Analytics Accelerator (Intel® IAA) is a hardware accelerator available in the upcoming generation of Intel® Xeon® Scalable processors ("Sapphire Rapids"). Its goal is to speed up common operations in analytics like data (de)compression and filtering. ClickHouse gained the new "DeflateQpl" compression codec which utilizes the Intel® IAA offloading technology to provide a high-performance DEFLATE implementation. The codec uses the [Intel® Query Processing Library (QPL)](https://github.com/intel/qpl) which abstracts access to the hardware accelerator, respectively to a software fallback in case the hardware accelerator is not available. DEFLATE provides in general higher compression rates than ClickHouse's LZ4 default codec, and as a result, offers less disk I/O and lower main memory consumption. [#36654](https://github.com/ClickHouse/ClickHouse/pull/36654) ([jasperzhu](https://github.com/jinjunzh)). [#39494](https://github.com/ClickHouse/ClickHouse/pull/39494) ([Robert Schulze](https://github.com/rschu1ze)).
* `DISTINCT` in order with `ORDER BY`: Deduce way to sort based on input stream sort description. Skip sorting if input stream is already sorted. [#38719](https://github.com/ClickHouse/ClickHouse/pull/38719) ([Igor Nikonov](https://github.com/devcrafter)). Improve memory usage (significantly) and query execution time + use `DistinctSortedChunkTransform` for final distinct when `DISTINCT` columns match `ORDER BY` columns, but rename to `DistinctSortedStreamTransform` in `EXPLAIN PIPELINE` → this improves memory usage significantly + remove unnecessary allocations in hot loop in `DistinctSortedChunkTransform`. [#39432](https://github.com/ClickHouse/ClickHouse/pull/39432) ([Igor Nikonov](https://github.com/devcrafter)). Use `DistinctSortedTransform` only when sort description is applicable to DISTINCT columns, otherwise fall back to ordinary DISTINCT implementation + it allows making less checks during `DistinctSortedTransform` execution. [#39528](https://github.com/ClickHouse/ClickHouse/pull/39528) ([Igor Nikonov](https://github.com/devcrafter)). Fix: `DistinctSortedTransform` didn't take advantage of sorting. It never cleared HashSet since clearing_columns were detected incorrectly (always empty). So, it basically worked as ordinary `DISTINCT` (`DistinctTransform`). The fix reduces memory usage significantly. [#39538](https://github.com/ClickHouse/ClickHouse/pull/39538) ([Igor Nikonov](https://github.com/devcrafter)).
* Use local node as first priority to get structure of remote table when executing `cluster` and similar table functions. [#39440](https://github.com/ClickHouse/ClickHouse/pull/39440) ([Mingliang Pan](https://github.com/liangliangpan)).
@@ -256,6 +263,7 @@
* Improve bytes to bits mask transform for SSE/AVX/AVX512. [#39586](https://github.com/ClickHouse/ClickHouse/pull/39586) ([Guo Wangyang](https://github.com/guowangy)).

#### Improvement

* Normalize `AggregateFunction` types and state representations because optimizations like [#35788](https://github.com/ClickHouse/ClickHouse/pull/35788) will treat `count(not null columns)` as `count()`, which might confuses distributed interpreters with the following error : `Conversion from AggregateFunction(count) to AggregateFunction(count, Int64) is not supported`. [#39420](https://github.com/ClickHouse/ClickHouse/pull/39420) ([Amos Bird](https://github.com/amosbird)). The functions with identical states can be used in materialized views interchangeably.
* Rework and simplify the `system.backups` table, remove the `internal` column, allow user to set the ID of operation, add columns `num_files`, `uncompressed_size`, `compressed_size`, `start_time`, `end_time`. [#39503](https://github.com/ClickHouse/ClickHouse/pull/39503) ([Vitaly Baranov](https://github.com/vitlibar)).
* Improved structure of DDL query result table for `Replicated` database (separate columns with shard and replica name, more clear status) - `CREATE TABLE ... ON CLUSTER` queries can be normalized on initiator first if `distributed_ddl_entry_format_version` is set to 3 (default value). It means that `ON CLUSTER` queries may not work if initiator does not belong to the cluster that specified in query. Fixes [#37318](https://github.com/ClickHouse/ClickHouse/issues/37318), [#39500](https://github.com/ClickHouse/ClickHouse/issues/39500) - Ignore `ON CLUSTER` clause if database is `Replicated` and cluster name equals to database name. Related to [#35570](https://github.com/ClickHouse/ClickHouse/issues/35570) - Miscellaneous minor fixes for `Replicated` database engine - Check metadata consistency when starting up `Replicated` database, start replica recovery in case of mismatch of local metadata and metadata in Keeper. Resolves [#24880](https://github.com/ClickHouse/ClickHouse/issues/24880). [#37198](https://github.com/ClickHouse/ClickHouse/pull/37198) ([Alexander Tokmakov](https://github.com/tavplubix)).
@@ -294,6 +302,7 @@
* Add support for LARGE_BINARY/LARGE_STRING with Arrow (Closes [#32401](https://github.com/ClickHouse/ClickHouse/issues/32401)). [#40293](https://github.com/ClickHouse/ClickHouse/pull/40293) ([Josh Taylor](https://github.com/joshuataylor)).

#### Build/Testing/Packaging Improvement

* [ClickFiddle](https://fiddle.clickhouse.com/): A new tool for testing ClickHouse versions in read/write mode (**Igor Baliuk**).
* ClickHouse binary is made self-extracting [#35775](https://github.com/ClickHouse/ClickHouse/pull/35775) ([Yakov Olkhovskiy, Arthur Filatenkov](https://github.com/yakov-olkhovskiy)).
* Update tzdata to 2022b to support the new timezone changes. See https://github.com/google/cctz/pull/226. Chile's 2022 DST start is delayed from September 4 to September 11. Iran plans to stop observing DST permanently, after it falls back on 2022-09-21. There are corrections of the historical time zone of Asia/Tehran in the year 1977: Iran adopted standard time in 1935, not 1946. In 1977 it observed DST from 03-21 23:00 to 10-20 24:00; its 1978 transitions were on 03-24 and 08-05, not 03-20 and 10-20; and its spring 1979 transition was on 05-27, not 03-21 (https://data.iana.org/time-zones/tzdb/NEWS). ([Alexey Milovidov](https://github.com/alexey-milovidov)).
@@ -308,6 +317,7 @@
* Docker: Now entrypoint.sh in docker image creates and executes chown for all folders it found in config for multidisk setup [#17717](https://github.com/ClickHouse/ClickHouse/issues/17717). [#39121](https://github.com/ClickHouse/ClickHouse/pull/39121) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).

#### Bug Fix

* Fix possible segfault in `CapnProto` input format. This bug was found and send through ClickHouse bug-bounty [program](https://github.com/ClickHouse/ClickHouse/issues/38986) by *kiojj*. [#40241](https://github.com/ClickHouse/ClickHouse/pull/40241) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix a very rare case of incorrect behavior of array subscript operator. This closes [#28720](https://github.com/ClickHouse/ClickHouse/issues/28720). [#40185](https://github.com/ClickHouse/ClickHouse/pull/40185) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix insufficient argument check for encryption functions (found by query fuzzer). This closes [#39987](https://github.com/ClickHouse/ClickHouse/issues/39987). [#40194](https://github.com/ClickHouse/ClickHouse/pull/40194) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
@@ -358,16 +368,17 @@
* A fix for reverse DNS resolution. [#40134](https://github.com/ClickHouse/ClickHouse/pull/40134) ([Arthur Passos](https://github.com/arthurpassos)).
* Fix unexpected result `arrayDifference` of `Array(UInt32). [#40211](https://github.com/ClickHouse/ClickHouse/pull/40211) ([Duc Canh Le](https://github.com/canhld94)).

### <a id="227"></a> ClickHouse release 22.7, 2022-07-21

#### Upgrade Notes

* Enable setting `enable_positional_arguments` by default. It allows queries like `SELECT ... ORDER BY 1, 2` where 1, 2 are the references to the select clause. If you need to return the old behavior, disable this setting. [#38204](https://github.com/ClickHouse/ClickHouse/pull/38204) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Disable `format_csv_allow_single_quotes` by default. See [#37096](https://github.com/ClickHouse/ClickHouse/issues/37096). ([Kruglov Pavel](https://github.com/Avogar)).
* `Ordinary` database engine and old storage definition syntax for `*MergeTree` tables are deprecated. By default it's not possible to create new databases with `Ordinary` engine. If `system` database has `Ordinary` engine it will be automatically converted to `Atomic` on server startup. There are settings to keep old behavior (`allow_deprecated_database_ordinary` and `allow_deprecated_syntax_for_merge_tree`), but these settings may be removed in future releases. [#38335](https://github.com/ClickHouse/ClickHouse/pull/38335) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Force rewriting comma join to inner by default (set default value `cross_to_inner_join_rewrite = 2`). To have old behavior set `cross_to_inner_join_rewrite = 1`. [#39326](https://github.com/ClickHouse/ClickHouse/pull/39326) ([Vladimir C](https://github.com/vdimir)). If you will face any incompatibilities, you can turn this setting back.

#### New Feature

* Support expressions with window functions. Closes [#19857](https://github.com/ClickHouse/ClickHouse/issues/19857). [#37848](https://github.com/ClickHouse/ClickHouse/pull/37848) ([Dmitry Novik](https://github.com/novikd)).
* Add new `direct` join algorithm for `EmbeddedRocksDB` tables, see [#33582](https://github.com/ClickHouse/ClickHouse/issues/33582). [#35363](https://github.com/ClickHouse/ClickHouse/pull/35363) ([Vladimir C](https://github.com/vdimir)).
* Added full sorting merge join algorithm. [#35796](https://github.com/ClickHouse/ClickHouse/pull/35796) ([Vladimir C](https://github.com/vdimir)).
@@ -395,9 +406,11 @@
* Add `clickhouse-diagnostics` binary to the packages. [#38647](https://github.com/ClickHouse/ClickHouse/pull/38647) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).

#### Experimental Feature

* Adds new setting `implicit_transaction` to run standalone queries inside a transaction. It handles both creation and closing (via COMMIT if the query succeeded or ROLLBACK if it didn't) of the transaction automatically. [#38344](https://github.com/ClickHouse/ClickHouse/pull/38344) ([Raúl Marín](https://github.com/Algunenano)).

#### Performance Improvement

* Distinct optimization for sorted columns. Use specialized distinct transformation in case input stream is sorted by column(s) in distinct. Optimization can be applied to pre-distinct, final distinct, or both. Initial implementation by @dimarub2000. [#37803](https://github.com/ClickHouse/ClickHouse/pull/37803) ([Igor Nikonov](https://github.com/devcrafter)).
* Improve performance of `ORDER BY`, `MergeTree` merges, window functions using batch version of `BinaryHeap`. [#38022](https://github.com/ClickHouse/ClickHouse/pull/38022) ([Maksim Kita](https://github.com/kitaisreal)).
* More parallel execution for queries with `FINAL` [#36396](https://github.com/ClickHouse/ClickHouse/pull/36396) ([Nikita Taranov](https://github.com/nickitat)).
@@ -407,7 +420,7 @@
* Improve performance of insertion to columns of type `JSON`. [#38320](https://github.com/ClickHouse/ClickHouse/pull/38320) ([Anton Popov](https://github.com/CurtizJ)).
* Optimized insertion and lookups in the HashTable. [#38413](https://github.com/ClickHouse/ClickHouse/pull/38413) ([Nikita Taranov](https://github.com/nickitat)).
* Fix performance degradation from [#32493](https://github.com/ClickHouse/ClickHouse/issues/32493). [#38417](https://github.com/ClickHouse/ClickHouse/pull/38417) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Improve performance of joining with numeric columns using SIMD instructions. [#37235](https://github.com/ClickHouse/ClickHouse/pull/37235) ([zzachimed](https://github.com/zzachimed)). [#38565](https://github.com/ClickHouse/ClickHouse/pull/38565) ([Maksim Kita](https://github.com/kitaisreal)).
* Norm and Distance functions for arrays speed up 1.2-2 times. [#38740](https://github.com/ClickHouse/ClickHouse/pull/38740) ([Alexander Gololobov](https://github.com/davenger)).
* Add AVX-512 VBMI optimized `copyOverlap32Shuffle` for LZ4 decompression. In other words, LZ4 decompression performance is improved. [#37891](https://github.com/ClickHouse/ClickHouse/pull/37891) ([Guo Wangyang](https://github.com/guowangy)).
* `ORDER BY (a, b)` will use all the same benefits as `ORDER BY a, b`. [#38873](https://github.com/ClickHouse/ClickHouse/pull/38873) ([Igor Nikonov](https://github.com/devcrafter)).
@@ -419,6 +432,7 @@
* The table `system.asynchronous_metric_log` is further optimized for storage space. This closes [#38134](https://github.com/ClickHouse/ClickHouse/issues/38134). See the [YouTube video](https://www.youtube.com/watch?v=0fSp9SF8N8A). [#38428](https://github.com/ClickHouse/ClickHouse/pull/38428) ([Alexey Milovidov](https://github.com/alexey-milovidov)).

#### Improvement

* Support SQL standard CREATE INDEX and DROP INDEX syntax. [#35166](https://github.com/ClickHouse/ClickHouse/pull/35166) ([Jianmei Zhang](https://github.com/zhangjmruc)).
* Send profile events for INSERT queries (previously only SELECT was supported). [#37391](https://github.com/ClickHouse/ClickHouse/pull/37391) ([Azat Khuzhin](https://github.com/azat)).
* Implement in order aggregation (`optimize_aggregation_in_order`) for fully materialized projections. [#37469](https://github.com/ClickHouse/ClickHouse/pull/37469) ([Azat Khuzhin](https://github.com/azat)).
@@ -464,6 +478,7 @@
* Allow to declare `RabbitMQ` queue without default arguments `x-max-length` and `x-overflow`. [#39259](https://github.com/ClickHouse/ClickHouse/pull/39259) ([rnbondarenko](https://github.com/rnbondarenko)).

#### Build/Testing/Packaging Improvement

* Apply Clang Thread Safety Analysis (TSA) annotations to ClickHouse. [#38068](https://github.com/ClickHouse/ClickHouse/pull/38068) ([Robert Schulze](https://github.com/rschu1ze)).
* Adapt universal installation script for FreeBSD. [#39302](https://github.com/ClickHouse/ClickHouse/pull/39302) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Preparation for building on `s390x` platform. [#39193](https://github.com/ClickHouse/ClickHouse/pull/39193) ([Harry Lee](https://github.com/HarryLeeIBM)).
@@ -473,6 +488,7 @@
* Change `all|noarch` packages to architecture-dependent - Fix some documentation for it - Push aarch64|arm64 packages to artifactory and release assets - Fixes [#36443](https://github.com/ClickHouse/ClickHouse/issues/36443). [#38580](https://github.com/ClickHouse/ClickHouse/pull/38580) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).

#### Bug Fix (user-visible misbehavior in official stable or prestable release)

* Fix rounding for `Decimal128/Decimal256` with more than 19-digits long scale. [#38027](https://github.com/ClickHouse/ClickHouse/pull/38027) ([Igor Nikonov](https://github.com/devcrafter)).
* Fixed crash caused by data race in storage `Hive` (integration table engine). [#38887](https://github.com/ClickHouse/ClickHouse/pull/38887) ([lgbo](https://github.com/lgbo-ustc)).
* Fix crash when executing GRANT ALL ON *.* with ON CLUSTER. It was broken in https://github.com/ClickHouse/ClickHouse/pull/35767. This closes [#38618](https://github.com/ClickHouse/ClickHouse/issues/38618). [#38674](https://github.com/ClickHouse/ClickHouse/pull/38674) ([Vitaly Baranov](https://github.com/vitlibar)).
@@ -529,6 +545,7 @@
### <a id="226"></a> ClickHouse release 22.6, 2022-06-16

#### Backward Incompatible Change

* Remove support for octal number literals in SQL. In previous versions they were parsed as Float64. [#37765](https://github.com/ClickHouse/ClickHouse/pull/37765) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Changes how settings using `seconds` as type are parsed to support floating point values (for example: `max_execution_time=0.5`). Infinity or NaN values will throw an exception. [#37187](https://github.com/ClickHouse/ClickHouse/pull/37187) ([Raúl Marín](https://github.com/Algunenano)).
* Changed format of binary serialization of columns of experimental type `Object`. New format is more convenient to implement by third-party clients. [#37482](https://github.com/ClickHouse/ClickHouse/pull/37482) ([Anton Popov](https://github.com/CurtizJ)).
@@ -537,6 +554,7 @@
* If you run different ClickHouse versions on a cluster with AArch64 CPU or mix AArch64 and amd64 on a cluster, and use distributed queries with GROUP BY multiple keys of fixed-size type that fit in 256 bits but don't fit in 64 bits, and the size of the result is huge, the data will not be fully aggregated in the result of these queries during upgrade. Workaround: upgrade with downtime instead of a rolling upgrade.

#### New Feature

* Add `GROUPING` function. It allows to disambiguate the records in the queries with `ROLLUP`, `CUBE` or `GROUPING SETS`. Closes [#19426](https://github.com/ClickHouse/ClickHouse/issues/19426). [#37163](https://github.com/ClickHouse/ClickHouse/pull/37163) ([Dmitry Novik](https://github.com/novikd)).
* A new codec [FPC](https://userweb.cs.txstate.edu/~burtscher/papers/dcc07a.pdf) algorithm for floating point data compression. [#37553](https://github.com/ClickHouse/ClickHouse/pull/37553) ([Mikhail Guzov](https://github.com/koloshmet)).
* Add new columnar JSON formats: `JSONColumns`, `JSONCompactColumns`, `JSONColumnsWithMetadata`. Closes [#36338](https://github.com/ClickHouse/ClickHouse/issues/36338) Closes [#34509](https://github.com/ClickHouse/ClickHouse/issues/34509). [#36975](https://github.com/ClickHouse/ClickHouse/pull/36975) ([Kruglov Pavel](https://github.com/Avogar)).
@@ -557,11 +575,13 @@
* Added `SYSTEM UNFREEZE` query that deletes the whole backup regardless if the corresponding table is deleted or not. [#36424](https://github.com/ClickHouse/ClickHouse/pull/36424) ([Vadim Volodin](https://github.com/PolyProgrammist)).

#### Experimental Feature

* Enables `POPULATE` for `WINDOW VIEW`. [#36945](https://github.com/ClickHouse/ClickHouse/pull/36945) ([vxider](https://github.com/Vxider)).
* `ALTER TABLE ... MODIFY QUERY` support for `WINDOW VIEW`. [#37188](https://github.com/ClickHouse/ClickHouse/pull/37188) ([vxider](https://github.com/Vxider)).
* This PR changes the behavior of the `ENGINE` syntax in `WINDOW VIEW`, to make it like in `MATERIALIZED VIEW`. [#37214](https://github.com/ClickHouse/ClickHouse/pull/37214) ([vxider](https://github.com/Vxider)).

#### Performance Improvement

* Added numerous optimizations for ARM NEON [#38093](https://github.com/ClickHouse/ClickHouse/pull/38093)([Daniel Kutenin](https://github.com/danlark1)), ([Alexandra Pilipyuk](https://github.com/chalice19)) Note: if you run different ClickHouse versions on a cluster with ARM CPU and use distributed queries with GROUP BY multiple keys of fixed-size type that fit in 256 bits but don't fit in 64 bits, the result of the aggregation query will be wrong during upgrade. Workaround: upgrade with downtime instead of a rolling upgrade.
* Improve performance and memory usage for select of subset of columns for formats Native, Protobuf, CapnProto, JSONEachRow, TSKV, all formats with suffixes WithNames/WithNamesAndTypes. Previously while selecting only subset of columns from files in these formats all columns were read and stored in memory. Now only required columns are read. This PR enables setting `input_format_skip_unknown_fields` by default, because otherwise in case of select of subset of columns exception will be thrown. [#37192](https://github.com/ClickHouse/ClickHouse/pull/37192) ([Kruglov Pavel](https://github.com/Avogar)).
* Now more filters can be pushed down for join. [#37472](https://github.com/ClickHouse/ClickHouse/pull/37472) ([Amos Bird](https://github.com/amosbird)).
@@ -592,6 +612,7 @@
* In function: CompressedWriteBuffer::nextImpl(), there is an unnecessary write-copy step that would happen frequently during inserting data. Below shows the differentiation with this patch: - Before: 1. Compress "working_buffer" into "compressed_buffer" 2. write-copy into "out" - After: Directly Compress "working_buffer" into "out". [#37242](https://github.com/ClickHouse/ClickHouse/pull/37242) ([jasperzhu](https://github.com/jinjunzh)).

#### Improvement

* Support types with non-standard defaults in ROLLUP, CUBE, GROUPING SETS. Closes [#37360](https://github.com/ClickHouse/ClickHouse/issues/37360). [#37667](https://github.com/ClickHouse/ClickHouse/pull/37667) ([Dmitry Novik](https://github.com/novikd)).
* Fix stack traces collection on ARM. Closes [#37044](https://github.com/ClickHouse/ClickHouse/issues/37044). Closes [#15638](https://github.com/ClickHouse/ClickHouse/issues/15638). [#37797](https://github.com/ClickHouse/ClickHouse/pull/37797) ([Maksim Kita](https://github.com/kitaisreal)).
* Client will try every IP address returned by DNS resolution until successful connection. [#37273](https://github.com/ClickHouse/ClickHouse/pull/37273) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
@@ -633,6 +654,7 @@
* Add implicit grants with grant option too. For example `GRANT CREATE TABLE ON test.* TO A WITH GRANT OPTION` now allows `A` to execute `GRANT CREATE VIEW ON test.* TO B`. [#38017](https://github.com/ClickHouse/ClickHouse/pull/38017) ([Vitaly Baranov](https://github.com/vitlibar)).

#### Build/Testing/Packaging Improvement

* Use `clang-14` and LLVM infrastructure version 14 for builds. This closes [#34681](https://github.com/ClickHouse/ClickHouse/issues/34681). [#34754](https://github.com/ClickHouse/ClickHouse/pull/34754) ([Alexey Milovidov](https://github.com/alexey-milovidov)). Note: `clang-14` has [a bug](https://github.com/google/sanitizers/issues/1540) in ThreadSanitizer that makes our CI work worse.
* Allow to drop privileges at startup. This simplifies Docker images. Closes [#36293](https://github.com/ClickHouse/ClickHouse/issues/36293). [#36341](https://github.com/ClickHouse/ClickHouse/pull/36341) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Add docs spellcheck to CI. [#37790](https://github.com/ClickHouse/ClickHouse/pull/37790) ([Vladimir C](https://github.com/vdimir)).
@@ -690,7 +712,6 @@
* Fix possible heap-use-after-free error when reading system.projection_parts and system.projection_parts_columns . This fixes [#37184](https://github.com/ClickHouse/ClickHouse/issues/37184). [#37185](https://github.com/ClickHouse/ClickHouse/pull/37185) ([Amos Bird](https://github.com/amosbird)).
* Fixed `DateTime64` fractional seconds behavior prior to Unix epoch. [#37697](https://github.com/ClickHouse/ClickHouse/pull/37697) ([Andrey Zvonov](https://github.com/zvonand)). [#37039](https://github.com/ClickHouse/ClickHouse/pull/37039) ([李扬](https://github.com/taiyang-li)).

### <a id="225"></a> ClickHouse release 22.5, 2022-05-19

#### Upgrade Notes
@@ -743,7 +764,7 @@
* Implement partial GROUP BY key for optimize_aggregation_in_order. [#35111](https://github.com/ClickHouse/ClickHouse/pull/35111) ([Azat Khuzhin](https://github.com/azat)).

#### Improvement

* Show names of erroneous files in case of parsing errors while executing table functions `file`, `s3` and `url`. [#36314](https://github.com/ClickHouse/ClickHouse/pull/36314) ([Anton Popov](https://github.com/CurtizJ)).
* Allowed to increase the number of threads for executing background operations (merges, mutations, moves and fetches) at runtime if they are specified at top level config. [#36425](https://github.com/ClickHouse/ClickHouse/pull/36425) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Now date time conversion functions that generates time before 1970-01-01 00:00:00 with partial hours/minutes timezones will be saturated to zero instead of overflow. This is the continuation of https://github.com/ClickHouse/ClickHouse/pull/29953 which addresses https://github.com/ClickHouse/ClickHouse/pull/29953#discussion_r800550280 . Mark as improvement because it's implementation defined behavior (and very rare case) and we are allowed to break it. [#36656](https://github.com/ClickHouse/ClickHouse/pull/36656) ([Amos Bird](https://github.com/amosbird)).
@@ -852,7 +873,6 @@
* Fix ALTER DROP COLUMN of nested column with compact parts (i.e. `ALTER TABLE x DROP COLUMN n`, when there is column `n.d`). [#35797](https://github.com/ClickHouse/ClickHouse/pull/35797) ([Azat Khuzhin](https://github.com/azat)).
* Fix substring function range error length when `offset` and `length` is negative constant and `s` is not constant. [#33861](https://github.com/ClickHouse/ClickHouse/pull/33861) ([RogerYK](https://github.com/RogerYK)).

### <a id="224"></a> ClickHouse release 22.4, 2022-04-19

#### Backward Incompatible Change
@@ -1004,8 +1024,7 @@
* Fix mutations in tables with enabled sparse columns. [#35284](https://github.com/ClickHouse/ClickHouse/pull/35284) ([Anton Popov](https://github.com/CurtizJ)).
* Do not delay final part writing by default (fixes possible `Memory limit exceeded` during `INSERT` by adding `max_insert_delayed_streams_for_parallel_write` with default to 1000 for writes to s3 and disabled as before otherwise). [#34780](https://github.com/ClickHouse/ClickHouse/pull/34780) ([Azat Khuzhin](https://github.com/azat)).

-## <a id="223"></a> ClickHouse release v22.3-lts, 2022-03-17
+### <a id="223"></a> ClickHouse release v22.3-lts, 2022-03-17

#### Backward Incompatible Change
@@ -1132,7 +1151,6 @@
* Fix incorrect result of trivial count query when part movement feature is used [#34089](https://github.com/ClickHouse/ClickHouse/issues/34089). [#34385](https://github.com/ClickHouse/ClickHouse/pull/34385) ([nvartolomei](https://github.com/nvartolomei)).
* Fix inconsistency of `max_query_size` limitation in distributed subqueries. [#34078](https://github.com/ClickHouse/ClickHouse/pull/34078) ([Chao Ma](https://github.com/godliness)).

### <a id="222"></a> ClickHouse release v22.2, 2022-02-17

#### Upgrade Notes
@@ -1308,7 +1326,6 @@
* Fix issue [#18206](https://github.com/ClickHouse/ClickHouse/issues/18206). [#33977](https://github.com/ClickHouse/ClickHouse/pull/33977) ([Vitaly Baranov](https://github.com/vitlibar)).
* This PR allows using multiple LDAP storages in the same list of user directories. It worked earlier but was broken because LDAP tests are disabled (they are part of the testflows tests). [#33574](https://github.com/ClickHouse/ClickHouse/pull/33574) ([Vitaly Baranov](https://github.com/vitlibar)).

### <a id="221"></a> ClickHouse release v22.1, 2022-01-18

#### Upgrade Notes
@@ -1335,7 +1352,6 @@
* Add function `decodeURLFormComponent` slightly different to `decodeURLComponent`. Close [#10298](https://github.com/ClickHouse/ClickHouse/issues/10298). [#33451](https://github.com/ClickHouse/ClickHouse/pull/33451) ([SuperDJY](https://github.com/cmsxbc)).
* Allow to split `GraphiteMergeTree` rollup rules for plain/tagged metrics (optional rule_type field). [#33494](https://github.com/ClickHouse/ClickHouse/pull/33494) ([Michail Safronov](https://github.com/msaf1980)).

#### Performance Improvement

* Support moving conditions to `PREWHERE` (setting `optimize_move_to_prewhere`) for tables of `Merge` engine if its all underlying tables supports `PREWHERE`. [#33300](https://github.com/ClickHouse/ClickHouse/pull/33300) ([Anton Popov](https://github.com/CurtizJ)).
@@ -1351,7 +1367,6 @@
* Optimize selecting of MergeTree parts that can be moved between volumes. [#33225](https://github.com/ClickHouse/ClickHouse/pull/33225) ([OnePiece](https://github.com/zhongyuankai)).
* Fix `sparse_hashed` dict performance with sequential keys (wrong hash function). [#32536](https://github.com/ClickHouse/ClickHouse/pull/32536) ([Azat Khuzhin](https://github.com/azat)).

#### Experimental Feature

* Parallel reading from multiple replicas within a shard during distributed query without using sample key. To enable this, set `allow_experimental_parallel_reading_from_replicas = 1` and `max_parallel_replicas` to any number. This closes [#26748](https://github.com/ClickHouse/ClickHouse/issues/26748). [#29279](https://github.com/ClickHouse/ClickHouse/pull/29279) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
@@ -1364,7 +1379,6 @@
* Fix ACL with explicit digit hash in `clickhouse-keeper`: now the behavior consistent with ZooKeeper and generated digest is always accepted. [#33249](https://github.com/ClickHouse/ClickHouse/pull/33249) ([小路](https://github.com/nicelulu)). [#33246](https://github.com/ClickHouse/ClickHouse/pull/33246).
* Fix unexpected projection removal when detaching parts. [#32067](https://github.com/ClickHouse/ClickHouse/pull/32067) ([Amos Bird](https://github.com/amosbird)).

#### Improvement

* Now date time conversion functions that generates time before `1970-01-01 00:00:00` will be saturated to zero instead of overflow. [#29953](https://github.com/ClickHouse/ClickHouse/pull/29953) ([Amos Bird](https://github.com/amosbird)). It also fixes a bug in index analysis if date truncation function would yield result before the Unix epoch.
@@ -1411,7 +1425,6 @@
* Updating `modification_time` for data part in `system.parts` after part movement [#32964](https://github.com/ClickHouse/ClickHouse/issues/32964). [#32965](https://github.com/ClickHouse/ClickHouse/pull/32965) ([save-my-heart](https://github.com/save-my-heart)).
* Potential issue, cannot be exploited: integer overflow may happen in array resize. [#33024](https://github.com/ClickHouse/ClickHouse/pull/33024) ([varadarajkumar](https://github.com/varadarajkumar)).

#### Build/Testing/Packaging Improvement

* Add packages, functional tests and Docker builds for AArch64 (ARM) version of ClickHouse. [#32911](https://github.com/ClickHouse/ClickHouse/pull/32911) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). [#32415](https://github.com/ClickHouse/ClickHouse/pull/32415)
@@ -1426,7 +1439,6 @@
* Inject git information into clickhouse binary file. So we can get source code revision easily from clickhouse binary file. [#33124](https://github.com/ClickHouse/ClickHouse/pull/33124) ([taiyang-li](https://github.com/taiyang-li)).
* Remove obsolete code from ConfigProcessor. Yandex specific code is not used anymore. The code contained one minor defect. This defect was reported by [Mallik Hassan](https://github.com/SadiHassan) in [#33032](https://github.com/ClickHouse/ClickHouse/issues/33032). This closes [#33032](https://github.com/ClickHouse/ClickHouse/issues/33032). [#33026](https://github.com/ClickHouse/ClickHouse/pull/33026) ([alexey-milovidov](https://github.com/alexey-milovidov)).

#### Bug Fix (user-visible misbehavior in official stable or prestable release)

* Several fixes for format parsing. This is relevant if `clickhouse-server` is open for write access to adversary. Specifically crafted input data for `Native` format may lead to reading uninitialized memory or crash. This is relevant if `clickhouse-server` is open for write access to adversary. [#33050](https://github.com/ClickHouse/ClickHouse/pull/33050) ([Heena Bansal](https://github.com/HeenaBansal2009)). Fixed Apache Avro Union type index out of boundary issue in Apache Avro binary format. [#33022](https://github.com/ClickHouse/ClickHouse/pull/33022) ([Harry Lee](https://github.com/HarryLeeIBM)). Fix null pointer dereference in `LowCardinality` data when deserializing `LowCardinality` data in the Native format. [#33021](https://github.com/ClickHouse/ClickHouse/pull/33021) ([Harry Lee](https://github.com/HarryLeeIBM)).
@@ -1485,5 +1497,4 @@
* Fix possible crash (or incorrect result) in case of `LowCardinality` arguments of window function. Fixes [#31114](https://github.com/ClickHouse/ClickHouse/issues/31114). [#31888](https://github.com/ClickHouse/ClickHouse/pull/31888) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix hang up with command `DROP TABLE system.query_log sync`. [#33293](https://github.com/ClickHouse/ClickHouse/pull/33293) ([zhanghuajie](https://github.com/zhanghuajieHIT)).

## [Changelog for 2021](https://clickhouse.com/docs/en/whats-new/changelog/2021)
CMakeLists.txt

@@ -495,6 +495,14 @@ endif ()

enable_testing() # Enable for tests without binary

+option(ENABLE_EXTERNAL_OPENSSL "This option is insecure and not recommended for any occasions. If it is enabled, it allows building with alternative OpenSSL library. By default, ClickHouse is using BoringSSL, which is better. Do not use this option." OFF)
+
+if (ENABLE_EXTERNAL_OPENSSL)
+message (STATUS "Build and uses OpenSSL library instead of BoringSSL. This is strongly discouraged. Your build of ClickHouse will be unsupported.")
+set(ENABLE_SSL 1)
+target_compile_options(global-group INTERFACE "-Wno-deprecated-declarations")
+endif ()
+
# when installing to /usr - place configs to /etc but for /usr/local place to /usr/local/etc
if (CMAKE_INSTALL_PREFIX STREQUAL "/usr")
set (CLICKHOUSE_ETC_DIR "/etc")
@@ -567,8 +575,8 @@ function (add_native_target)
set_property (GLOBAL APPEND PROPERTY NATIVE_BUILD_TARGETS ${ARGV})
endfunction (add_native_target)

-set(ConfigIncludePath ${CMAKE_CURRENT_BINARY_DIR}/includes/configs CACHE INTERNAL "Path to generated configuration files.")
-include_directories(${ConfigIncludePath})
+set(CONFIG_INCLUDE_PATH ${CMAKE_CURRENT_BINARY_DIR}/includes/configs CACHE INTERNAL "Path to generated configuration files.")
+include_directories(${CONFIG_INCLUDE_PATH})

# Add as many warnings as possible for our own code.
include (cmake/warnings.cmake)
README.md

@@ -5,15 +5,16 @@ ClickHouse® is an open-source column-oriented database management system that a
## Useful Links

* [Official website](https://clickhouse.com/) has a quick high-level overview of ClickHouse on the main page.
+* [ClickHouse Cloud](https://clickhouse.com/cloud) ClickHouse as a service, built by the creators and maintainers.
* [Tutorial](https://clickhouse.com/docs/en/getting_started/tutorial/) shows how to set up and query a small ClickHouse cluster.
* [Documentation](https://clickhouse.com/docs/en/) provides more in-depth information.
* [YouTube channel](https://www.youtube.com/c/ClickHouseDB) has a lot of content about ClickHouse in video format.
* [Slack](https://join.slack.com/t/clickhousedb/shared_invite/zt-rxm3rdrk-lIUmhLC3V8WTaL0TGxsOmg) and [Telegram](https://telegram.me/clickhouse_en) allow chatting with ClickHouse users in real-time.
-* [Blog](https://clickhouse.com/blog/en/) contains various ClickHouse-related articles, as well as announcements and reports about events.
+* [Blog](https://clickhouse.com/blog/) contains various ClickHouse-related articles, as well as announcements and reports about events.
* [Code Browser (Woboq)](https://clickhouse.com/codebrowser/ClickHouse/index.html) with syntax highlight and navigation.
* [Code Browser (github.dev)](https://github.dev/ClickHouse/ClickHouse) with syntax highlight, powered by github.dev.
* [Contacts](https://clickhouse.com/company/contact) can help to get your questions answered if there are any.
## Upcoming events
* [**v22.9 Release Webinar**](https://clickhouse.com/company/events/v22-9-release-webinar) Original creator, co-founder, and CTO of ClickHouse Alexey Milovidov will walk us through the highlights of the release, provide live demos, and share vision into what is coming in the roadmap.
* [**ClickHouse for Analytics @ Barracuda Networks**](https://www.meetup.com/clickhouse-silicon-valley-meetup-group/events/288140358/) Join us for this in person meetup hosted by our friends at Barracuda in Bay Area.
* [**v22.10 Release Webinar**](https://clickhouse.com/company/events/v22-10-release-webinar) Original creator, co-founder, and CTO of ClickHouse Alexey Milovidov will walk us through the highlights of the release, provide live demos, and share vision into what is coming in the roadmap.
* [**Introducing ClickHouse Cloud**](https://clickhouse.com/company/events/cloud-beta) Introducing ClickHouse as a service, built by creators and maintainers of the fastest OLAP database on earth. Join Tanya Bragin for a detailed walkthrough of ClickHouse Cloud capabilities, as well as a peek behind the curtain to understand the unique architecture that makes our service tick.
ReplxxLineReader.cpp

@@ -23,7 +23,7 @@ namespace
{

/// Trim ending whitespace inplace
-void trim(String & s)
+void rightTrim(String & s)
{
s.erase(std::find_if(s.rbegin(), s.rend(), [](unsigned char ch) { return !std::isspace(ch); }).base(), s.end());
}
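As an aside on the idiom itself: the erase/reverse-iterator line above is dense, so here is a minimal self-contained restatement of it (editor's sketch, not part of the diff; plain `std::string` stands in for ClickHouse's `String` alias, and `main` is hypothetical):

```cpp
#include <algorithm>
#include <cctype>
#include <iostream>
#include <string>

// Same idiom as rightTrim above: find_if over the reverse range locates the
// last non-space character; .base() converts that reverse iterator back to a
// forward iterator pointing one past it, and erase() drops the tail.
void rightTrim(std::string & s)
{
    s.erase(std::find_if(s.rbegin(), s.rend(), [](unsigned char ch) { return !std::isspace(ch); }).base(), s.end());
}

int main()
{
    std::string line = "SELECT 1;   \t\n";
    rightTrim(line);
    std::cout << '[' << line << "]\n"; // prints [SELECT 1;]
}
```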
@@ -441,7 +441,7 @@ LineReader::InputStatus ReplxxLineReader::readOneLine(const String & prompt)
return (errno != EAGAIN) ? ABORT : RESET_LINE;
input = cinput;

-trim(input);
+rightTrim(input);
return INPUT_LINE;
}
@@ -512,6 +512,9 @@ void ReplxxLineReader::openInteractiveHistorySearch()
/// NOTE: You can use one of the following to configure the behaviour additionally:
/// - SKIM_DEFAULT_OPTIONS
/// - FZF_DEFAULT_OPTS
+///
+/// And also note, that fzf and skim is 95% compatible (at least option
+/// that is used here)
std::string fuzzy_finder_command = fmt::format(
"{} --read0 --tac --no-sort --tiebreak=index --bind=ctrl-r:toggle-sort --height=30% < {} > {}",
fuzzy_finder, history_file.getPath(), output_file.getPath());
@@ -521,7 +524,8 @@ void ReplxxLineReader::openInteractiveHistorySearch()
{
if (executeCommand(argv) == 0)
{
-const std::string & new_query = readFile(output_file.getPath());
+std::string new_query = readFile(output_file.getPath());
+rightTrim(new_query);
rx.set_state(replxx::Replxx::State(new_query.c_str(), new_query.size()));
}
}
base/defines.h

@@ -123,11 +123,15 @@
/// - tries to print failed assertion into server log
/// It can be used for all assertions except heavy ones.
/// Heavy assertions (that run loops or call complex functions) are allowed in debug builds only.
+/// Also it makes sense to call abort() instead of __builtin_unreachable() in debug builds,
+/// because SIGABRT is easier to debug than SIGTRAP (the second one makes gdb crazy)
#if !defined(chassert)
#if defined(ABORT_ON_LOGICAL_ERROR)
#define chassert(x) static_cast<bool>(x) ? void(0) : abortOnFailedAssertion(#x)
+#define UNREACHABLE() abort()
#else
#define chassert(x) ((void)0)
+#define UNREACHABLE() __builtin_unreachable()
#endif
#endif
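For context on how this pair of macros is consumed, here is a minimal self-contained sketch (editor's illustration, not part of the diff; the function `sign` is hypothetical, and `ABORT_ON_LOGICAL_ERROR` is force-defined so the example builds standalone):

```cpp
#include <cstdio>
#include <cstdlib>

// Stand-ins for the real definitions in base/defines.h shown above:
#define ABORT_ON_LOGICAL_ERROR
void abortOnFailedAssertion(const char * description)
{
    std::fprintf(stderr, "Failed assertion: %s\n", description);
    std::abort();
}
#if defined(ABORT_ON_LOGICAL_ERROR)
    #define chassert(x) static_cast<bool>(x) ? void(0) : abortOnFailedAssertion(#x)
    #define UNREACHABLE() abort()
#else
    #define chassert(x) ((void)0)
    #define UNREACHABLE() __builtin_unreachable()
#endif

int sign(int x)
{
    chassert(x != 0);  // cheap sanity check: aborts in checked builds, no-op otherwise
    if (x > 0) return 1;
    if (x < 0) return -1;
    UNREACHABLE();     // abort() when checked, __builtin_unreachable() when not
}

int main() { return sign(42) == 1 ? 0 : 1; }
```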
@@ -142,7 +146,9 @@
# define TSA_NO_THREAD_SAFETY_ANALYSIS __attribute__((no_thread_safety_analysis)) /// disable TSA for a function

/// Macros for suppressing TSA warnings for specific reads/writes (instead of suppressing it for the whole function)
-/// Consider adding a comment before using these macros.
+/// They use a lambda function to apply function attribute to a single statement. This enable us to suppress warnings locally instead of
+/// suppressing them in the whole function
+/// Consider adding a comment when using these macros.
# define TSA_SUPPRESS_WARNING_FOR_READ(x) ([&]() TSA_NO_THREAD_SAFETY_ANALYSIS -> const auto & { return (x); }())
# define TSA_SUPPRESS_WARNING_FOR_WRITE(x) ([&]() TSA_NO_THREAD_SAFETY_ANALYSIS -> auto & { return (x); }())
@@ -159,9 +165,9 @@
# define TSA_REQUIRES_SHARED(...)
# define TSA_NO_THREAD_SAFETY_ANALYSIS

-# define TSA_SUPPRESS_WARNING_FOR_READ(x)
-# define TSA_SUPPRESS_WARNING_FOR_WRITE(x)
-# define TSA_READ_ONE_THREAD(x)
+# define TSA_SUPPRESS_WARNING_FOR_READ(x) (x)
+# define TSA_SUPPRESS_WARNING_FOR_WRITE(x) (x)
+# define TSA_READ_ONE_THREAD(x) TSA_SUPPRESS_WARNING_FOR_READ(x)
#endif

/// A template function for suppressing warnings about unused variables or function results.
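The lambda trick that the new comment describes is worth seeing in isolation: the attribute is applied to an immediately-invoked lambda, so the analysis is switched off for one expression only. A minimal sketch (editor's illustration, not from the commit; assumes clang with `-Wthread-safety`, and `Counter` is a hypothetical type):

```cpp
#include <mutex>

// Simplified stand-ins for the macros above:
#define TSA_GUARDED_BY(...) __attribute__((guarded_by(__VA_ARGS__)))
#define TSA_NO_THREAD_SAFETY_ANALYSIS __attribute__((no_thread_safety_analysis))
#define TSA_SUPPRESS_WARNING_FOR_READ(x) ([&]() TSA_NO_THREAD_SAFETY_ANALYSIS -> const auto & { return (x); }())

struct Counter
{
    std::mutex mutex;
    int value TSA_GUARDED_BY(mutex) = 0;

    int add(int x)
    {
        std::lock_guard lock(mutex); // analyzed: the lock is held for the write
        return value += x;
    }

    // A deliberately unlocked read: the immediately-invoked lambda carries
    // no_thread_safety_analysis, so only this one expression is exempt while
    // the rest of the function (and struct) stays under analysis. Annotating
    // the whole function would turn the checker off everywhere inside it.
    int peekUnlocked() { return TSA_SUPPRESS_WARNING_FOR_READ(value); }
};
```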
safeExit.cpp

@@ -1,6 +1,7 @@
#if defined(OS_LINUX)
# include <sys/syscall.h>
#endif
#include <cstdlib>
#include <unistd.h>
#include <base/safeExit.h>
+#include <base/defines.h>

@@ -11,7 +12,7 @@
/// Thread sanitizer tries to do something on exit that we don't need if we want to exit immediately,
/// while connection handling threads are still run.
(void)syscall(SYS_exit_group, code);
-__builtin_unreachable();
+UNREACHABLE();
#else
_exit(code);
#endif
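The swap matters because `syscall` is not declared `[[noreturn]]`: the compiler needs an explicit marker that control never passes the call, and with this change a checked build gets a loud `abort()` there instead of plain undefined behavior. A minimal sketch of the pattern (editor's illustration; `fakeExitGroup` and `terminateNow` are hypothetical stand-ins):

```cpp
#include <cstdlib>

// Stand-in for a raw exit syscall: it never returns, but nothing in its
// declaration says so, exactly like syscall(SYS_exit_group, code) above.
int fakeExitGroup(int code)
{
    std::_Exit(code);
}

int terminateNow(int code)
{
    (void)fakeExitGroup(code);
    // The compiler cannot see that fakeExitGroup never returns, so without a
    // marker here it warns that a value-returning function falls off the end.
    // Checked builds now put abort() here (a loud crash if the "never returns"
    // assumption is ever wrong); release builds keep __builtin_unreachable(),
    // a pure optimizer hint that is undefined behavior if actually reached.
    abort();
}

int main()
{
    return terminateNow(0);
}
```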
@@ -3,15 +3,15 @@
# This is a workaround for bug in llvm/clang,
# that does not produce .debug_aranges with LTO
#
-# NOTE: this is a temporary solution, that should be removed once [1] will be
-# resolved.
+# NOTE: this is a temporary solution, that should be removed after upgrading to
+# clang-16/llvm-16.
#
-# [1]: https://discourse.llvm.org/t/clang-does-not-produce-full-debug-aranges-section-with-thinlto/64898/8
+# Refs: https://reviews.llvm.org/D133092

# NOTE: only -flto=thin is supported.
# NOTE: it is not possible to check was there -gdwarf-aranges initially or not.
if [[ "$*" =~ -plugin-opt=thinlto ]]; then
-exec "@LLD_PATH@" -mllvm -generate-arange-section "$@"
+exec "@LLD_PATH@" -plugin-opt=-generate-arange-section "$@"
else
exec "@LLD_PATH@" "$@"
fi
contrib/AMQP-CPP (vendored): 2 changed lines

@@ -1 +1 @@
-Subproject commit 1a6c51f4ac51ac56610fa95081bd2f349911375a
+Subproject commit 818c2d8ad96a08a5d20fece7d1e1e8855a2b0860
6
contrib/CMakeLists.txt
vendored
6
contrib/CMakeLists.txt
vendored
@ -74,7 +74,11 @@ add_contrib (re2-cmake re2)
|
||||
add_contrib (xz-cmake xz)
|
||||
add_contrib (brotli-cmake brotli)
|
||||
add_contrib (double-conversion-cmake double-conversion)
|
||||
add_contrib (boringssl-cmake boringssl)
|
||||
if (NOT ENABLE_EXTERNAL_OPENSSL)
|
||||
add_contrib (boringssl-cmake boringssl)
|
||||
else ()
|
||||
add_contrib (openssl-cmake openssl)
|
||||
endif ()
|
||||
add_contrib (poco-cmake poco)
|
||||
add_contrib (croaring-cmake croaring)
|
||||
add_contrib (zstd-cmake zstd)
|
||||
|
@@ -4,6 +4,11 @@ if (NOT ENABLE_AMQPCPP)
message(STATUS "Not using AMQP-CPP")
return()
endif()
if (OS_FREEBSD)
message(STATUS "Not using AMQP-CPP because libuv is disabled")
return()
endif()


# can be removed once libuv build on MacOS with GCC is possible
if (NOT TARGET ch_contrib::uv)

2
contrib/cctz
vendored
@@ -1 +1 @@
Subproject commit 49c656c62fbd36a1bc20d64c476853bdb7cf7bb9
Subproject commit 7a454c25c7d16053bcd327cdd16329212a08fa4a
@@ -578,6 +578,12 @@ if(CMAKE_SYSTEM_NAME MATCHES "Darwin")
list(APPEND ALL_SRCS "${CMAKE_CURRENT_BINARY_DIR}/include_private/kcmrpc.c")
endif()

if (ENABLE_EXTERNAL_OPENSSL)
list(REMOVE_ITEM ALL_SRCS "${KRB5_SOURCE_DIR}/lib/crypto/openssl/enc_provider/aes.c")
list(APPEND ALL_SRCS "${CMAKE_CURRENT_SOURCE_DIR}/aes.c")
endif ()


target_sources(_krb5 PRIVATE
${ALL_SRCS}
)

@@ -59,6 +59,12 @@ set(SRCS

add_library(_libpq ${SRCS})

if (ENABLE_EXTERNAL_OPENSSL)
add_definitions(-DHAVE_BIO_METH_NEW)
add_definitions(-DHAVE_HMAC_CTX_NEW)
add_definitions(-DHAVE_HMAC_CTX_FREE)
endif ()

target_include_directories (_libpq SYSTEM PUBLIC ${LIBPQ_SOURCE_DIR})
target_include_directories (_libpq SYSTEM PUBLIC "${LIBPQ_SOURCE_DIR}/include")
target_include_directories (_libpq SYSTEM PRIVATE "${LIBPQ_SOURCE_DIR}/configs")

2
contrib/llvm-project
vendored
@@ -1 +1 @@
Subproject commit dc972a767ff2e9488d96cb2a6e67de160fbe15a7
Subproject commit 3a39038345a400e7e767811b142a94355d511215
@@ -1,4 +1,4 @@
if (APPLE OR NOT ARCH_AMD64 OR SANITIZE STREQUAL "undefined" OR NOT USE_STATIC_LIBRARIES)
if (APPLE OR NOT ARCH_AMD64 OR SANITIZE STREQUAL "undefined")
set (ENABLE_EMBEDDED_COMPILER_DEFAULT OFF)
else()
set (ENABLE_EMBEDDED_COMPILER_DEFAULT ON)
@@ -6,15 +6,16 @@ endif()

option (ENABLE_EMBEDDED_COMPILER "Enable support for 'compile_expressions' option for query execution" ${ENABLE_EMBEDDED_COMPILER_DEFAULT})

# If USE_STATIC_LIBRARIES=0 was passed to CMake, we'll still build LLVM statically to keep complexity minimal.

if (NOT ENABLE_EMBEDDED_COMPILER)
message(STATUS "Not using LLVM")
return()
endif()

# TODO: Enable shared library build
# TODO: Enable compilation on AArch64

set (LLVM_VERSION "14.0.0bundled")
set (LLVM_VERSION "15.0.0bundled")
set (LLVM_INCLUDE_DIRS
"${ClickHouse_SOURCE_DIR}/contrib/llvm-project/llvm/include"
"${ClickHouse_BINARY_DIR}/contrib/llvm-project/llvm/include"
@@ -62,9 +63,6 @@ set (REQUIRED_LLVM_LIBRARIES
# list(APPEND REQUIRED_LLVM_LIBRARIES LLVMAArch64Info LLVMAArch64Desc LLVMAArch64CodeGen)
# endif ()

# ld: unknown option: --color-diagnostics
# set (LINKER_SUPPORTS_COLOR_DIAGNOSTICS 0 CACHE INTERNAL "")

set (CMAKE_INSTALL_RPATH "ON") # Do not adjust RPATH in llvm, since then it will not be able to find libcxx/libcxxabi/libunwind
set (LLVM_COMPILER_CHECKED 1 CACHE INTERNAL "") # Skip internal compiler selection
set (LLVM_ENABLE_EH 1 CACHE INTERNAL "") # With exception handling
@@ -80,6 +78,7 @@ set(LLVM_ENABLE_LIBXML2 0 CACHE INTERNAL "")
set(LLVM_ENABLE_LIBEDIT 0 CACHE INTERNAL "")
set(LLVM_ENABLE_LIBPFM 0 CACHE INTERNAL "")
set(LLVM_ENABLE_ZLIB 0 CACHE INTERNAL "")
set(LLVM_ENABLE_ZSTD 0 CACHE INTERNAL "")
set(LLVM_ENABLE_Z3_SOLVER 0 CACHE INTERNAL "")
set(LLVM_INCLUDE_TOOLS 0 CACHE INTERNAL "")
set(LLVM_BUILD_TOOLS 0 CACHE INTERNAL "")
@@ -96,9 +95,6 @@ set(LLVM_INCLUDE_DOCS 0 CACHE INTERNAL "")
set(LLVM_ENABLE_OCAMLDOC 0 CACHE INTERNAL "")
set(LLVM_ENABLE_BINDINGS 0 CACHE INTERNAL "")

# C++20 is currently not supported due to ambiguous operator != etc.
set (CMAKE_CXX_STANDARD 17)

set (LLVM_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/llvm-project/llvm")
set (LLVM_BINARY_DIR "${ClickHouse_BINARY_DIR}/contrib/llvm-project/llvm")
add_subdirectory ("${LLVM_SOURCE_DIR}" "${LLVM_BINARY_DIR}")

@@ -36,10 +36,7 @@ RUN arch=${TARGETARCH:-amd64} \
# repo versions doesn't work correctly with C++17
# also we push reports to s3, so we add index.html to subfolder urls
# https://github.com/ClickHouse-Extras/woboq_codebrowser/commit/37e15eaf377b920acb0b48dbe82471be9203f76b
# TODO: remove branch in a few weeks after merge, e.g. in May or June 2022
#
# FIXME: update location of a repo
RUN git clone https://github.com/azat/woboq_codebrowser --branch llvm-15 \
RUN git clone https://github.com/ClickHouse/woboq_codebrowser \
&& cd woboq_codebrowser \
&& cmake . -G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_COMPILER=clang\+\+-${LLVM_VERSION} -DCMAKE_C_COMPILER=clang-${LLVM_VERSION} \
&& ninja \

6
docker/test/fuzzer/allow-nullable-key.xml
Normal file
@@ -0,0 +1,6 @@
<clickhouse>
<!-- Allow nullable key to avoid errors while fuzzing definitions of tables -->
<merge_tree>
<allow_nullable_key>1</allow_nullable_key>
</merge_tree>
</clickhouse>
@@ -94,6 +94,7 @@ function configure
# TODO figure out which ones are needed
cp -av --dereference "$repo_dir"/tests/config/config.d/listen.xml db/config.d
cp -av --dereference "$script_dir"/query-fuzzer-tweaks-users.xml db/users.d
cp -av --dereference "$script_dir"/allow-nullable-key.xml db/config.d

cat > db/config.d/core.xml <<EOL
<clickhouse>
@@ -240,6 +241,7 @@ quit
--receive_data_timeout_ms=10000 \
--stacktrace \
--query-fuzzer-runs=1000 \
--create-query-fuzzer-runs=50 \
--queries-file $(ls -1 ch/tests/queries/0_stateless/*.sql | sort -R) \
$NEW_TESTS_OPT \
> >(tail -n 100000 > fuzzer.log) \

@@ -11,6 +11,7 @@ RUN apt-get update -y \
apt-get install --yes --no-install-recommends \
awscli \
brotli \
lz4 \
expect \
golang \
lsof \
22
docker/test/stress/run.sh
Executable file → Normal file
@@ -47,7 +47,6 @@ function install_packages()

function configure()
{
export ZOOKEEPER_FAULT_INJECTION=1
# install test configs
export USE_DATABASE_ORDINARY=1
export EXPORT_S3_STORAGE_POLICIES=1
@@ -203,6 +202,7 @@ quit

install_packages package_folder

export ZOOKEEPER_FAULT_INJECTION=1
configure

azurite-blob --blobHost 0.0.0.0 --blobPort 10000 --debug /azurite_log &
@@ -243,6 +243,7 @@ stop

# Let's enable S3 storage by default
export USE_S3_STORAGE_FOR_MERGE_TREE=1
export ZOOKEEPER_FAULT_INJECTION=1
configure

# But we still need default disk because some tables loaded only into it
@@ -375,6 +376,8 @@ else
install_packages previous_release_package_folder

# Start server from previous release
# Previous version may not be ready for fault injections
export ZOOKEEPER_FAULT_INJECTION=0
configure

# Avoid "Setting s3_check_objects_after_upload is neither a builtin setting..."
@@ -389,12 +392,23 @@ else

clickhouse-client --query="SELECT 'Server version: ', version()"

# Install new package before running stress test because we should use new clickhouse-client and new clickhouse-test
# But we should leave old binary in /usr/bin/ for gdb (so it will print sane stacktarces)
# Install new package before running stress test because we should use new
# clickhouse-client and new clickhouse-test.
#
# But we should leave old binary in /usr/bin/ and debug symbols in
# /usr/lib/debug/usr/bin (if any) for gdb and internal DWARF parser, so it
# will print sane stacktraces and also to avoid possible crashes.
#
# FIXME: those files can be extracted directly from debian package, but
# actually better solution will be to use different PATH instead of playing
# games with files from packages.
mv /usr/bin/clickhouse previous_release_package_folder/
mv /usr/lib/debug/usr/bin/clickhouse.debug previous_release_package_folder/
install_packages package_folder
mv /usr/bin/clickhouse package_folder/
mv /usr/lib/debug/usr/bin/clickhouse.debug package_folder/
mv previous_release_package_folder/clickhouse /usr/bin/
mv previous_release_package_folder/clickhouse.debug /usr/lib/debug/usr/bin/clickhouse.debug

mkdir tmp_stress_output

@@ -410,6 +424,8 @@ else

# Start new server
mv package_folder/clickhouse /usr/bin/
mv package_folder/clickhouse.debug /usr/lib/debug/usr/bin/clickhouse.debug
export ZOOKEEPER_FAULT_INJECTION=1
configure
start 500
clickhouse-client --query "SELECT 'Backward compatibility check: Server successfully started', 'OK'" >> /test_output/test_results.tsv \

@@ -5,6 +5,7 @@ FROM ubuntu:20.04
ARG apt_archive="http://archive.ubuntu.com"
RUN sed -i "s|http://archive.ubuntu.com|$apt_archive|g" /etc/apt/sources.list

# 15.0.2
ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=15

RUN apt-get update \
@@ -58,6 +59,9 @@ RUN apt-get update \
RUN ln -s /usr/bin/lld-${LLVM_VERSION} /usr/bin/ld.lld
# for external_symbolizer_path
RUN ln -s /usr/bin/llvm-symbolizer-${LLVM_VERSION} /usr/bin/llvm-symbolizer
# FIXME: workaround for "The imported target "merge-fdata" references the file" error
# https://salsa.debian.org/pkg-llvm-team/llvm-toolchain/-/commit/992e52c0b156a5ba9c6a8a54f8c4857ddd3d371d
RUN sed -i '/_IMPORT_CHECK_FILES_FOR_\(mlir-\|llvm-bolt\|merge-fdata\|MLIR\)/ {s|^|#|}' /usr/lib/llvm-${LLVM_VERSION}/lib/cmake/llvm/LLVMExports-*.cmake

ARG CCACHE_VERSION=4.6.1
RUN mkdir /tmp/ccache \

23
docs/changelogs/v22.6.9.11-stable.md
Normal file
@@ -0,0 +1,23 @@
---
sidebar_position: 1
sidebar_label: 2022
---

# 2022 Changelog

### ClickHouse release v22.6.9.11-stable (9ec61dcac49) FIXME as compared to v22.6.8.35-stable (b91dc59a565)

#### Improvement
* Backported in [#42089](https://github.com/ClickHouse/ClickHouse/issues/42089): Replace back `clickhouse su` command with `sudo -u` in start in order to respect limits in `/etc/security/limits.conf`. [#41847](https://github.com/ClickHouse/ClickHouse/pull/41847) ([Eugene Konkov](https://github.com/ekonkov)).

#### Build/Testing/Packaging Improvement
* Backported in [#41558](https://github.com/ClickHouse/ClickHouse/issues/41558): Add `source` field to deb packages, update `nfpm`. [#41531](https://github.com/ClickHouse/ClickHouse/pull/41531) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).

#### Bug Fix (user-visible misbehavior in official stable or prestable release)

* Backported in [#41504](https://github.com/ClickHouse/ClickHouse/issues/41504): Writing data in Apache `ORC` format might lead to a buffer overrun. [#41458](https://github.com/ClickHouse/ClickHouse/pull/41458) ([Alexey Milovidov](https://github.com/alexey-milovidov)).

#### NOT FOR CHANGELOG / INSIGNIFICANT

* Build latest tags ONLY from master branch [#41567](https://github.com/ClickHouse/ClickHouse/pull/41567) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).

@@ -1,14 +0,0 @@
---
slug: /en/development/browse-code
sidebar_label: Source Code Browser
sidebar_position: 72
description: Various ways to browse and edit the source code
---

# Browse ClickHouse Source Code

You can use the **Woboq** online code browser available [here](https://clickhouse.com/codebrowser/ClickHouse/src/index.html). It provides code navigation and semantic highlighting, search and indexing. The code snapshot is updated daily.

Also, you can browse sources on [GitHub](https://github.com/ClickHouse/ClickHouse) as usual.

If you’re interested what IDE to use, we recommend CLion, QT Creator, VS Code and KDevelop (with caveats). You can use any favorite IDE. Vim and Emacs also count.
@@ -38,13 +38,13 @@ For other Linux distribution - check the availability of the [prebuild packages]
#### Use the latest clang for Builds

``` bash
export CC=clang-14
export CXX=clang++-14
export CC=clang-15
export CXX=clang++-15
```

In this example we use version 14 that is the latest as of Feb 2022.
In this example we use version 15 that is the latest as of Sept 2022.

Gcc can also be used though it is discouraged.
Gcc cannot be used.

### Checkout ClickHouse Sources {#checkout-clickhouse-sources}

@@ -122,7 +122,7 @@ If you use Arch or Gentoo, you probably know it yourself how to install CMake.

## C++ Compiler {#c-compiler}

Compilers Clang starting from version 12 is supported for building ClickHouse.
Compilers Clang starting from version 15 is supported for building ClickHouse.

Clang should be used instead of gcc. Though, our continuous integration (CI) platform runs checks for about a dozen of build combinations.

@@ -146,7 +146,7 @@ While inside the `build` directory, configure your build by running CMake. Befor
export CC=clang CXX=clang++
cmake ..

If you installed clang using the automatic installation script above, also specify the version of clang installed in the first command, e.g. `export CC=clang-14 CXX=clang++-14`. The clang version will be in the script output.
If you installed clang using the automatic installation script above, also specify the version of clang installed in the first command, e.g. `export CC=clang-15 CXX=clang++-15`. The clang version will be in the script output.

The `CC` variable specifies the compiler for C (short for C Compiler), and `CXX` variable instructs which C++ compiler is to be used for building.

@@ -178,7 +178,7 @@ If you get the message: `ninja: error: loading 'build.ninja': No such file or di

Upon the successful start of the building process, you’ll see the build progress - the number of processed tasks and the total number of tasks.

While building messages about protobuf files in libhdfs2 library like `libprotobuf WARNING` may show up. They affect nothing and are safe to be ignored.
While building messages about LLVM library may show up. They affect nothing and are safe to be ignored.

Upon successful build you get an executable file `ClickHouse/<build_dir>/programs/clickhouse`:

@@ -272,15 +272,10 @@ Most probably some of the builds will fail at first times. This is due to the fa

You can use the **Woboq** online code browser available [here](https://clickhouse.com/codebrowser/ClickHouse/src/index.html). It provides code navigation, semantic highlighting, search and indexing. The code snapshot is updated daily.

You can use GitHub integrated code browser [here](https://github.dev/ClickHouse/ClickHouse).

Also, you can browse sources on [GitHub](https://github.com/ClickHouse/ClickHouse) as usual.

## Faster builds for development: Split build configuration {#split-build}

ClickHouse is normally statically linked into a single static `clickhouse` binary with minimal dependencies. This is convenient for distribution, but it means that for every change the entire binary needs to be re-linked, which is slow and inconvenient for development. As an alternative, you can instead build dynamically linked shared libraries, allowing for faster incremental builds. To use it, add the following flags to your `cmake` invocation:
```
-DUSE_STATIC_LIBRARIES=0 -DSPLIT_SHARED_LIBRARIES=1
```

If you are not interested in functionality provided by third-party libraries, you can further speed up the build using `cmake` options
```
-DENABLE_LIBRARIES=0 -DENABLE_EMBEDDED_COMPILER=0
@@ -419,6 +419,8 @@ Supported data types: `Int*`, `UInt*`, `Float*`, `Enum`, `Date`, `DateTime`, `St

For `Map` data type client can specify if index should be created for keys or values using [mapKeys](../../../sql-reference/functions/tuple-map-functions.md#mapkeys) or [mapValues](../../../sql-reference/functions/tuple-map-functions.md#mapvalues) function.

There are also special-purpose and experimental indexes to support approximate nearest neighbor (ANN) queries. See [here](annindexes.md) for details.

The following functions can use the filter: [equals](../../../sql-reference/functions/comparison-functions.md), [notEquals](../../../sql-reference/functions/comparison-functions.md), [in](../../../sql-reference/functions/in-functions), [notIn](../../../sql-reference/functions/in-functions), [has](../../../sql-reference/functions/array-functions#hasarr-elem), [hasAny](../../../sql-reference/functions/array-functions#hasany), [hasAll](../../../sql-reference/functions/array-functions#hasall).

Example of index creation for `Map` data type

@@ -1,8 +1,7 @@
position: 10
position: 1
label: 'Example Datasets'
collapsible: true
collapsed: true
link:
type: generated-index
title: Example Datasets
slug: /en/getting-started/example-datasets
type: doc
id: en/getting-started/example-datasets/
@@ -1,9 +1,16 @@
---
slug: /en/getting-started/example-datasets/cell-towers
sidebar_label: Cell Towers
sidebar_position: 3
title: "Cell Towers"
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import CodeBlock from '@theme/CodeBlock';
import ActionsMenu from '@site/docs/en/_snippets/_service_actions_menu.md';
import SQLConsoleDetail from '@site/docs/en/_snippets/_launch_sql_console.md';

This dataset is from [OpenCellid](https://www.opencellid.org/) - The world's largest Open Database of Cell Towers.

As of 2021, it contains more than 40 million records about cell towers (GSM, LTE, UMTS, etc.) around the world with their geographical coordinates and metadata (country code, network, etc).
@@ -13,6 +20,26 @@ OpenCelliD Project is licensed under a Creative Commons Attribution-ShareAlike 4

## Get the Dataset {#get-the-dataset}

<Tabs groupId="deployMethod">
<TabItem value="serverless" label="ClickHouse Cloud" default>

ClickHouse Cloud provides an easy-button for uploading this dataset from S3. Log in to your ClickHouse Cloud organization, or create a free trial at [ClickHouse.cloud](https://clickhouse.cloud).
<ActionsMenu menu="Load Data" />

Choose the **Cell Towers** dataset from the **Sample data** tab, and **Load data**:

![Load cell towers dataset](@site/docs/en/_snippets/images/cloud-load-data-sample.png)

Examine the schema of the cell_towers table:
```sql
DESCRIBE TABLE cell_towers
```

<SQLConsoleDetail />

</TabItem>
<TabItem value="selfmanaged" label="Self-managed">

1. Download the snapshot of the dataset from February 2021: [cell_towers.csv.xz](https://datasets.clickhouse.com/cell_towers.csv.xz) (729 MB).

2. Validate the integrity (optional step):
@@ -56,7 +83,10 @@ ENGINE = MergeTree ORDER BY (radio, mcc, net, created);
clickhouse-client --query "INSERT INTO cell_towers FORMAT CSVWithNames" < cell_towers.csv
```

## Examples {#examples}
</TabItem>
</Tabs>

## Example queries {#examples}

1. A number of cell towers by type:

@@ -101,18 +131,31 @@ So, the top countries are: the USA, Germany, and Russia.

You may want to create an [External Dictionary](../../sql-reference/dictionaries/external-dictionaries/external-dicts.md) in ClickHouse to decode these values.


## Use case {#use-case}
## Use case: Incorporate geo data {#use-case}

Using `pointInPolygon` function.

1. Create a table where we will store polygons:

<Tabs groupId="deployMethod">
<TabItem value="serverless" label="ClickHouse Cloud" default>

```sql
CREATE TABLE moscow (polygon Array(Tuple(Float64, Float64)))
ORDER BY polygon;
```

</TabItem>
<TabItem value="selfmanaged" label="Self-managed">

```sql
CREATE TEMPORARY TABLE
moscow (polygon Array(Tuple(Float64, Float64)));
```

</TabItem>
</Tabs>

2. This is a rough shape of Moscow (without "new Moscow"):

```sql
File diff suppressed because one or more lines are too long
@@ -13,16 +13,6 @@ Description of the fields: https://www.gov.uk/guidance/about-the-price-paid-data

Contains HM Land Registry data © Crown copyright and database right 2021. This data is licensed under the Open Government Licence v3.0.

## Download the Dataset {#download-dataset}

Run the command:

```bash
wget http://prod.publicdata.landregistry.gov.uk.s3-website-eu-west-1.amazonaws.com/pp-complete.csv
```

Download will take about 2 minutes with good internet connection.

## Create the Table {#create-table}

```sql
@@ -41,31 +31,49 @@ CREATE TABLE uk_price_paid
locality LowCardinality(String),
town LowCardinality(String),
district LowCardinality(String),
county LowCardinality(String),
category UInt8
) ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2);
county LowCardinality(String)
)
ENGINE = MergeTree
ORDER BY (postcode1, postcode2, addr1, addr2);
```

## Preprocess and Import Data {#preprocess-import-data}
## Preprocess and Insert the Data {#preprocess-import-data}

We will use `clickhouse-local` tool for data preprocessing and `clickhouse-client` to upload it.
We will use the `url` function to stream the data into ClickHouse. We need to preprocess some of the incoming data first, which includes:
- splitting the `postcode` to two different columns - `postcode1` and `postcode2`, which is better for storage and queries
- converting the `time` field to date as it only contains 00:00 time
- ignoring the [UUid](../../sql-reference/data-types/uuid.md) field because we don't need it for analysis
- transforming `type` and `duration` to more readable `Enum` fields using the [transform](../../sql-reference/functions/other-functions.md#transform) function
- transforming the `is_new` field from a single-character string (`Y`/`N`) to a [UInt8](../../sql-reference/data-types/int-uint.md#uint8-uint16-uint32-uint64-uint256-int8-int16-int32-int64-int128-int256) field with 0 or 1
- drop the last two columns since they all have the same value (which is 0)

In this example, we define the structure of source data from the CSV file and specify a query to preprocess the data with `clickhouse-local`.
The `url` function streams the data from the web server into your ClickHouse table. The following command inserts 5 million rows into the `uk_price_paid` table:

The preprocessing is:
- splitting the postcode to two different columns `postcode1` and `postcode2` that is better for storage and queries;
- coverting the `time` field to date as it only contains 00:00 time;
- ignoring the [UUid](../../sql-reference/data-types/uuid.md) field because we don't need it for analysis;
- transforming `type` and `duration` to more readable Enum fields with function [transform](../../sql-reference/functions/other-functions.md#transform);
- transforming `is_new` and `category` fields from single-character string (`Y`/`N` and `A`/`B`) to [UInt8](../../sql-reference/data-types/int-uint.md#uint8-uint16-uint32-uint64-uint256-int8-int16-int32-int64-int128-int256) field with 0 and 1.

Preprocessed data is piped directly to `clickhouse-client` to be inserted into ClickHouse table in streaming fashion.

```bash
clickhouse-local --input-format CSV --structure '
uuid String,
price UInt32,
time DateTime,
```sql
INSERT INTO uk_price_paid
WITH
splitByChar(' ', postcode) AS p
SELECT
toUInt32(price_string) AS price,
parseDateTimeBestEffortUS(time) AS date,
p[1] AS postcode1,
p[2] AS postcode2,
transform(a, ['T', 'S', 'D', 'F', 'O'], ['terraced', 'semi-detached', 'detached', 'flat', 'other']) AS type,
b = 'Y' AS is_new,
transform(c, ['F', 'L', 'U'], ['freehold', 'leasehold', 'unknown']) AS duration,
addr1,
addr2,
street,
locality,
town,
district,
county
FROM url(
'http://prod.publicdata.landregistry.gov.uk.s3-website-eu-west-1.amazonaws.com/pp-complete.csv',
'CSV',
'uuid_string String,
price_string String,
time String,
postcode String,
a String,
b String,
@@ -78,154 +86,136 @@ clickhouse-local --input-format CSV --structure '
district String,
county String,
d String,
e String
' --query "
WITH splitByChar(' ', postcode) AS p
SELECT
price,
toDate(time) AS date,
p[1] AS postcode1,
p[2] AS postcode2,
transform(a, ['T', 'S', 'D', 'F', 'O'], ['terraced', 'semi-detached', 'detached', 'flat', 'other']) AS type,
b = 'Y' AS is_new,
transform(c, ['F', 'L', 'U'], ['freehold', 'leasehold', 'unknown']) AS duration,
addr1,
addr2,
street,
locality,
town,
district,
county,
d = 'B' AS category
FROM table" --date_time_input_format best_effort < pp-complete.csv | clickhouse-client --query "INSERT INTO uk_price_paid FORMAT TSV"
e String'
) SETTINGS max_http_get_redirects=10;
```

It will take about 40 seconds.
Wait for the data to insert - it will take a minute or two depending on the network speed.

## Validate the Data {#validate-data}

Query:
Let's verify it worked by seeing how many rows were inserted:

```sql
SELECT count() FROM uk_price_paid;
SELECT count()
FROM uk_price_paid
```

Result:

```text
┌──count()─┐
│ 26321785 │
└──────────┘
```

The size of dataset in ClickHouse is just 278 MiB, check it.

Query:
At the time this query was executed, the dataset had 27,450,499 rows. Let's see what the storage size is of the table in ClickHouse:

```sql
SELECT formatReadableSize(total_bytes) FROM system.tables WHERE name = 'uk_price_paid';
SELECT formatReadableSize(total_bytes)
FROM system.tables
WHERE name = 'uk_price_paid'
```

Result:

```text
┌─formatReadableSize(total_bytes)─┐
│ 278.80 MiB │
└─────────────────────────────────┘
```
Notice the size of the table is just 221.43 MiB!

## Run Some Queries {#run-queries}

Let's run some queries to analyze the data:

### Query 1. Average Price Per Year {#average-price}

Query:

```sql
SELECT toYear(date) AS year, round(avg(price)) AS price, bar(price, 0, 1000000, 80) FROM uk_price_paid GROUP BY year ORDER BY year;
SELECT
toYear(date) AS year,
round(avg(price)) AS price,
bar(price, 0, 1000000, 80)
FROM uk_price_paid
GROUP BY year
ORDER BY year
```

Result:
The result looks like:

```text
```response
┌─year─┬──price─┬─bar(round(avg(price)), 0, 1000000, 80)─┐
│ 1995 │ 67932 │ █████▍ │
│ 1996 │ 71505 │ █████▋ │
│ 1997 │ 78532 │ ██████▎ │
│ 1998 │ 85436 │ ██████▋ │
│ 1999 │ 96037 │ ███████▋ │
│ 2000 │ 107479 │ ████████▌ │
│ 2001 │ 118885 │ █████████▌ │
│ 2002 │ 137941 │ ███████████ │
│ 2003 │ 155889 │ ████████████▍ │
│ 2004 │ 178885 │ ██████████████▎ │
│ 2005 │ 189351 │ ███████████████▏ │
│ 2006 │ 203528 │ ████████████████▎ │
│ 2007 │ 219378 │ █████████████████▌ │
│ 1995 │ 67934 │ █████▍ │
│ 1996 │ 71508 │ █████▋ │
│ 1997 │ 78536 │ ██████▎ │
│ 1998 │ 85441 │ ██████▋ │
│ 1999 │ 96038 │ ███████▋ │
│ 2000 │ 107487 │ ████████▌ │
│ 2001 │ 118888 │ █████████▌ │
│ 2002 │ 137948 │ ███████████ │
│ 2003 │ 155893 │ ████████████▍ │
│ 2004 │ 178888 │ ██████████████▎ │
│ 2005 │ 189359 │ ███████████████▏ │
│ 2006 │ 203532 │ ████████████████▎ │
│ 2007 │ 219375 │ █████████████████▌ │
│ 2008 │ 217056 │ █████████████████▎ │
│ 2009 │ 213419 │ █████████████████ │
│ 2010 │ 236109 │ ██████████████████▊ │
│ 2010 │ 236110 │ ██████████████████▊ │
│ 2011 │ 232805 │ ██████████████████▌ │
│ 2012 │ 238367 │ ███████████████████ │
│ 2013 │ 256931 │ ████████████████████▌ │
│ 2014 │ 279915 │ ██████████████████████▍ │
│ 2015 │ 297266 │ ███████████████████████▋ │
│ 2016 │ 313201 │ █████████████████████████ │
│ 2017 │ 346097 │ ███████████████████████████▋ │
│ 2018 │ 350116 │ ████████████████████████████ │
│ 2019 │ 351013 │ ████████████████████████████ │
│ 2020 │ 369420 │ █████████████████████████████▌ │
│ 2021 │ 386903 │ ██████████████████████████████▊ │
│ 2012 │ 238381 │ ███████████████████ │
│ 2013 │ 256927 │ ████████████████████▌ │
│ 2014 │ 280008 │ ██████████████████████▍ │
│ 2015 │ 297263 │ ███████████████████████▋ │
│ 2016 │ 313518 │ █████████████████████████ │
│ 2017 │ 346371 │ ███████████████████████████▋ │
│ 2018 │ 350556 │ ████████████████████████████ │
│ 2019 │ 352184 │ ████████████████████████████▏ │
│ 2020 │ 375808 │ ██████████████████████████████ │
│ 2021 │ 381105 │ ██████████████████████████████▍ │
│ 2022 │ 362572 │ █████████████████████████████ │
└──────┴────────┴────────────────────────────────────────┘
```

### Query 2. Average Price per Year in London {#average-price-london}

Query:

```sql
SELECT toYear(date) AS year, round(avg(price)) AS price, bar(price, 0, 2000000, 100) FROM uk_price_paid WHERE town = 'LONDON' GROUP BY year ORDER BY year;
SELECT
toYear(date) AS year,
round(avg(price)) AS price,
bar(price, 0, 2000000, 100)
FROM uk_price_paid
WHERE town = 'LONDON'
GROUP BY year
ORDER BY year
```

Result:
The result looks like:

```text
```response
┌─year─┬───price─┬─bar(round(avg(price)), 0, 2000000, 100)───────────────┐
│ 1995 │ 109116 │ █████▍ │
│ 1996 │ 118667 │ █████▊ │
│ 1997 │ 136518 │ ██████▋ │
│ 1998 │ 152983 │ ███████▋ │
│ 1999 │ 180637 │ █████████ │
│ 2000 │ 215838 │ ██████████▋ │
│ 2001 │ 232994 │ ███████████▋ │
│ 2002 │ 263670 │ █████████████▏ │
│ 2003 │ 278394 │ █████████████▊ │
│ 2004 │ 304666 │ ███████████████▏ │
│ 2005 │ 322875 │ ████████████████▏ │
│ 2006 │ 356191 │ █████████████████▋ │
│ 2007 │ 404054 │ ████████████████████▏ │
│ 1995 │ 109110 │ █████▍ │
│ 1996 │ 118659 │ █████▊ │
│ 1997 │ 136526 │ ██████▋ │
│ 1998 │ 153002 │ ███████▋ │
│ 1999 │ 180633 │ █████████ │
│ 2000 │ 215849 │ ██████████▋ │
│ 2001 │ 232987 │ ███████████▋ │
│ 2002 │ 263668 │ █████████████▏ │
│ 2003 │ 278424 │ █████████████▊ │
│ 2004 │ 304664 │ ███████████████▏ │
│ 2005 │ 322887 │ ████████████████▏ │
│ 2006 │ 356195 │ █████████████████▋ │
│ 2007 │ 404062 │ ████████████████████▏ │
│ 2008 │ 420741 │ █████████████████████ │
│ 2009 │ 427753 │ █████████████████████▍ │
│ 2010 │ 480306 │ ████████████████████████ │
│ 2011 │ 496274 │ ████████████████████████▋ │
│ 2012 │ 519442 │ █████████████████████████▊ │
│ 2013 │ 616212 │ ██████████████████████████████▋ │
│ 2014 │ 724154 │ ████████████████████████████████████▏ │
│ 2015 │ 792129 │ ███████████████████████████████████████▌ │
│ 2016 │ 843655 │ ██████████████████████████████████████████▏ │
│ 2017 │ 982642 │ █████████████████████████████████████████████████▏ │
│ 2018 │ 1016835 │ ██████████████████████████████████████████████████▋ │
│ 2019 │ 1042849 │ ████████████████████████████████████████████████████▏ │
│ 2020 │ 1011889 │ ██████████████████████████████████████████████████▌ │
│ 2021 │ 960343 │ ████████████████████████████████████████████████ │
│ 2009 │ 427754 │ █████████████████████▍ │
│ 2010 │ 480322 │ ████████████████████████ │
│ 2011 │ 496278 │ ████████████████████████▋ │
│ 2012 │ 519482 │ █████████████████████████▊ │
│ 2013 │ 616195 │ ██████████████████████████████▋ │
│ 2014 │ 724121 │ ████████████████████████████████████▏ │
│ 2015 │ 792101 │ ███████████████████████████████████████▌ │
│ 2016 │ 843589 │ ██████████████████████████████████████████▏ │
│ 2017 │ 983523 │ █████████████████████████████████████████████████▏ │
│ 2018 │ 1016753 │ ██████████████████████████████████████████████████▋ │
│ 2019 │ 1041673 │ ████████████████████████████████████████████████████ │
│ 2020 │ 1060027 │ █████████████████████████████████████████████████████ │
│ 2021 │ 958249 │ ███████████████████████████████████████████████▊ │
│ 2022 │ 902596 │ █████████████████████████████████████████████▏ │
└──────┴─────────┴───────────────────────────────────────────────────────┘
```

Something happened in 2013. I don't have a clue. Maybe you have a clue what happened in 2020?
Something happened to home prices in 2020! But that is probably not a surprise...

### Query 3. The Most Expensive Neighborhoods {#most-expensive-neighborhoods}

Query:

```sql
SELECT
town,
@@ -240,124 +230,123 @@ GROUP BY
district
HAVING c >= 100
ORDER BY price DESC
LIMIT 100;
LIMIT 100
```

Result:
The result looks like:

```text

┌─town─────────────────┬─district───────────────┬────c─┬───price─┬─bar(round(avg(price)), 0, 5000000, 100)────────────────────────────┐
│ LONDON │ CITY OF WESTMINSTER │ 3606 │ 3280239 │ █████████████████████████████████████████████████████████████████▌ │
│ LONDON │ CITY OF LONDON │ 274 │ 3160502 │ ███████████████████████████████████████████████████████████████▏ │
│ LONDON │ KENSINGTON AND CHELSEA │ 2550 │ 2308478 │ ██████████████████████████████████████████████▏ │
│ LEATHERHEAD │ ELMBRIDGE │ 114 │ 1897407 │ █████████████████████████████████████▊ │
│ LONDON │ CAMDEN │ 3033 │ 1805404 │ ████████████████████████████████████ │
│ VIRGINIA WATER │ RUNNYMEDE │ 156 │ 1753247 │ ███████████████████████████████████ │
│ WINDLESHAM │ SURREY HEATH │ 108 │ 1677613 │ █████████████████████████████████▌ │
│ THORNTON HEATH │ CROYDON │ 546 │ 1671721 │ █████████████████████████████████▍ │
│ BARNET │ ENFIELD │ 124 │ 1505840 │ ██████████████████████████████ │
│ COBHAM │ ELMBRIDGE │ 387 │ 1237250 │ ████████████████████████▋ │
│ LONDON │ ISLINGTON │ 2668 │ 1236980 │ ████████████████████████▋ │
│ OXFORD │ SOUTH OXFORDSHIRE │ 321 │ 1220907 │ ████████████████████████▍ │
│ LONDON │ RICHMOND UPON THAMES │ 704 │ 1215551 │ ████████████████████████▎ │
│ LONDON │ HOUNSLOW │ 671 │ 1207493 │ ████████████████████████▏ │
│ ASCOT │ WINDSOR AND MAIDENHEAD │ 407 │ 1183299 │ ███████████████████████▋ │
│ BEACONSFIELD │ BUCKINGHAMSHIRE │ 330 │ 1175615 │ ███████████████████████▌ │
│ RICHMOND │ RICHMOND UPON THAMES │ 874 │ 1110444 │ ██████████████████████▏ │
│ LONDON │ HAMMERSMITH AND FULHAM │ 3086 │ 1053983 │ █████████████████████ │
│ SURBITON │ ELMBRIDGE │ 100 │ 1011800 │ ████████████████████▏ │
│ RADLETT │ HERTSMERE │ 283 │ 1011712 │ ████████████████████▏ │
│ SALCOMBE │ SOUTH HAMS │ 127 │ 1011624 │ ████████████████████▏ │
│ WEYBRIDGE │ ELMBRIDGE │ 655 │ 1007265 │ ████████████████████▏ │
│ ESHER │ ELMBRIDGE │ 485 │ 986581 │ ███████████████████▋ │
│ LEATHERHEAD │ GUILDFORD │ 202 │ 977320 │ ███████████████████▌ │
│ BURFORD │ WEST OXFORDSHIRE │ 111 │ 966893 │ ███████████████████▎ │
│ BROCKENHURST │ NEW FOREST │ 129 │ 956675 │ ███████████████████▏ │
│ HINDHEAD │ WAVERLEY │ 137 │ 953753 │ ███████████████████ │
│ GERRARDS CROSS │ BUCKINGHAMSHIRE │ 419 │ 951121 │ ███████████████████ │
│ EAST MOLESEY │ ELMBRIDGE │ 192 │ 936769 │ ██████████████████▋ │
│ CHALFONT ST GILES │ BUCKINGHAMSHIRE │ 146 │ 925515 │ ██████████████████▌ │
│ LONDON │ TOWER HAMLETS │ 4388 │ 918304 │ ██████████████████▎ │
│ OLNEY │ MILTON KEYNES │ 235 │ 910646 │ ██████████████████▏ │
│ HENLEY-ON-THAMES │ SOUTH OXFORDSHIRE │ 540 │ 902418 │ ██████████████████ │
│ LONDON │ SOUTHWARK │ 3885 │ 892997 │ █████████████████▋ │
│ KINGSTON UPON THAMES │ KINGSTON UPON THAMES │ 960 │ 885969 │ █████████████████▋ │
│ LONDON │ EALING │ 2658 │ 871755 │ █████████████████▍ │
│ CRANBROOK │ TUNBRIDGE WELLS │ 431 │ 862348 │ █████████████████▏ │
│ LONDON │ MERTON │ 2099 │ 859118 │ █████████████████▏ │
│ BELVEDERE │ BEXLEY │ 346 │ 842423 │ ████████████████▋ │
│ GUILDFORD │ WAVERLEY │ 143 │ 841277 │ ████████████████▋ │
│ HARPENDEN │ ST ALBANS │ 657 │ 841216 │ ████████████████▋ │
│ LONDON │ HACKNEY │ 3307 │ 837090 │ ████████████████▋ │
│ LONDON │ WANDSWORTH │ 6566 │ 832663 │ ████████████████▋ │
│ MAIDENHEAD │ BUCKINGHAMSHIRE │ 123 │ 824299 │ ████████████████▍ │
│ KINGS LANGLEY │ DACORUM │ 145 │ 821331 │ ████████████████▍ │
│ BERKHAMSTED │ DACORUM │ 543 │ 818415 │ ████████████████▎ │
│ GREAT MISSENDEN │ BUCKINGHAMSHIRE │ 226 │ 802807 │ ████████████████ │
│ BILLINGSHURST │ CHICHESTER │ 144 │ 797829 │ ███████████████▊ │
│ WOKING │ GUILDFORD │ 176 │ 793494 │ ███████████████▋ │
│ STOCKBRIDGE │ TEST VALLEY │ 178 │ 793269 │ ███████████████▋ │
│ EPSOM │ REIGATE AND BANSTEAD │ 172 │ 791862 │ ███████████████▋ │
│ TONBRIDGE │ TUNBRIDGE WELLS │ 360 │ 787876 │ ███████████████▋ │
│ TEDDINGTON │ RICHMOND UPON THAMES │ 595 │ 786492 │ ███████████████▋ │
│ TWICKENHAM │ RICHMOND UPON THAMES │ 1155 │ 786193 │ ███████████████▋ │
│ LYNDHURST │ NEW FOREST │ 102 │ 785593 │ ███████████████▋ │
│ LONDON │ LAMBETH │ 5228 │ 774574 │ ███████████████▍ │
│ LONDON │ BARNET │ 3955 │ 773259 │ ███████████████▍ │
│ OXFORD │ VALE OF WHITE HORSE │ 353 │ 772088 │ ███████████████▍ │
│ TONBRIDGE │ MAIDSTONE │ 305 │ 770740 │ ███████████████▍ │
│ LUTTERWORTH │ HARBOROUGH │ 538 │ 768634 │ ███████████████▎ │
│ WOODSTOCK │ WEST OXFORDSHIRE │ 140 │ 766037 │ ███████████████▎ │
│ MIDHURST │ CHICHESTER │ 257 │ 764815 │ ███████████████▎ │
│ MARLOW │ BUCKINGHAMSHIRE │ 327 │ 761876 │ ███████████████▏ │
│ LONDON │ NEWHAM │ 3237 │ 761784 │ ███████████████▏ │
│ ALDERLEY EDGE │ CHESHIRE EAST │ 178 │ 757318 │ ███████████████▏ │
│ LUTON │ CENTRAL BEDFORDSHIRE │ 212 │ 754283 │ ███████████████ │
│ PETWORTH │ CHICHESTER │ 154 │ 754220 │ ███████████████ │
│ ALRESFORD │ WINCHESTER │ 219 │ 752718 │ ███████████████ │
│ POTTERS BAR │ WELWYN HATFIELD │ 174 │ 748465 │ ██████████████▊ │
│ HASLEMERE │ CHICHESTER │ 128 │ 746907 │ ██████████████▊ │
│ TADWORTH │ REIGATE AND BANSTEAD │ 502 │ 743252 │ ██████████████▋ │
│ THAMES DITTON │ ELMBRIDGE │ 244 │ 741913 │ ██████████████▋ │
│ REIGATE │ REIGATE AND BANSTEAD │ 581 │ 738198 │ ██████████████▋ │
│ BOURNE END │ BUCKINGHAMSHIRE │ 138 │ 735190 │ ██████████████▋ │
│ SEVENOAKS │ SEVENOAKS │ 1156 │ 730018 │ ██████████████▌ │
│ OXTED │ TANDRIDGE │ 336 │ 729123 │ ██████████████▌ │
│ INGATESTONE │ BRENTWOOD │ 166 │ 728103 │ ██████████████▌ │
│ LONDON │ BRENT │ 2079 │ 720605 │ ██████████████▍ │
│ LONDON │ HARINGEY │ 3216 │ 717780 │ ██████████████▎ │
│ PURLEY │ CROYDON │ 575 │ 716108 │ ██████████████▎ │
│ WELWYN │ WELWYN HATFIELD │ 222 │ 710603 │ ██████████████▏ │
│ RICKMANSWORTH │ THREE RIVERS │ 798 │ 704571 │ ██████████████ │
│ BANSTEAD │ REIGATE AND BANSTEAD │ 401 │ 701293 │ ██████████████ │
│ CHIGWELL │ EPPING FOREST │ 261 │ 701203 │ ██████████████ │
│ PINNER │ HARROW │ 528 │ 698885 │ █████████████▊ │
│ HASLEMERE │ WAVERLEY │ 280 │ 696659 │ █████████████▊ │
│ SLOUGH │ BUCKINGHAMSHIRE │ 396 │ 694917 │ █████████████▊ │
│ WALTON-ON-THAMES │ ELMBRIDGE │ 946 │ 692395 │ █████████████▋ │
│ READING │ SOUTH OXFORDSHIRE │ 318 │ 691988 │ █████████████▋ │
│ NORTHWOOD │ HILLINGDON │ 271 │ 690643 │ █████████████▋ │
│ FELTHAM │ HOUNSLOW │ 763 │ 688595 │ █████████████▋ │
│ ASHTEAD │ MOLE VALLEY │ 303 │ 687923 │ █████████████▋ │
│ BARNET │ BARNET │ 975 │ 686980 │ █████████████▋ │
│ WOKING │ SURREY HEATH │ 283 │ 686669 │ █████████████▋ │
│ MALMESBURY │ WILTSHIRE │ 323 │ 683324 │ █████████████▋ │
│ AMERSHAM │ BUCKINGHAMSHIRE │ 496 │ 680962 │ █████████████▌ │
│ CHISLEHURST │ BROMLEY │ 430 │ 680209 │ █████████████▌ │
│ HYTHE │ FOLKESTONE AND HYTHE │ 490 │ 676908 │ █████████████▌ │
│ MAYFIELD │ WEALDEN │ 101 │ 676210 │ █████████████▌ │
│ ASCOT │ BRACKNELL FOREST │ 168 │ 676004 │ █████████████▌ │
└──────────────────────┴────────────────────────┴──────┴─────────┴────────────────────────────────────────────────────────────────────┘
```response
┌─town─────────────────┬─district───────────────┬─────c─┬───price─┬─bar(round(avg(price)), 0, 5000000, 100)─────────────────────────┐
│ LONDON │ CITY OF LONDON │ 578 │ 3149590 │ ██████████████████████████████████████████████████████████████▊ │
│ LONDON │ CITY OF WESTMINSTER │ 7083 │ 2903794 │ ██████████████████████████████████████████████████████████ │
│ LONDON │ KENSINGTON AND CHELSEA │ 4986 │ 2333782 │ ██████████████████████████████████████████████▋ │
│ LEATHERHEAD │ ELMBRIDGE │ 203 │ 2071595 │ █████████████████████████████████████████▍ │
│ VIRGINIA WATER │ RUNNYMEDE │ 308 │ 1939465 │ ██████████████████████████████████████▋ │
│ LONDON │ CAMDEN │ 5750 │ 1673687 │ █████████████████████████████████▍ │
│ WINDLESHAM │ SURREY HEATH │ 182 │ 1428358 │ ████████████████████████████▌ │
│ NORTHWOOD │ THREE RIVERS │ 112 │ 1404170 │ ████████████████████████████ │
│ BARNET │ ENFIELD │ 259 │ 1338299 │ ██████████████████████████▋ │
│ LONDON │ ISLINGTON │ 5504 │ 1275520 │ █████████████████████████▌ │
│ LONDON │ RICHMOND UPON THAMES │ 1345 │ 1261935 │ █████████████████████████▏ │
│ COBHAM │ ELMBRIDGE │ 727 │ 1251403 │ █████████████████████████ │
│ BEACONSFIELD │ BUCKINGHAMSHIRE │ 680 │ 1199970 │ ███████████████████████▊ │
│ LONDON │ TOWER HAMLETS │ 10012 │ 1157827 │ ███████████████████████▏ │
│ LONDON │ HOUNSLOW │ 1278 │ 1144389 │ ██████████████████████▊ │
│ BURFORD │ WEST OXFORDSHIRE │ 182 │ 1139393 │ ██████████████████████▋ │
│ RICHMOND │ RICHMOND UPON THAMES │ 1649 │ 1130076 │ ██████████████████████▌ │
│ KINGSTON UPON THAMES │ RICHMOND UPON THAMES │ 147 │ 1126111 │ ██████████████████████▌ │
│ ASCOT │ WINDSOR AND MAIDENHEAD │ 773 │ 1106109 │ ██████████████████████ │
│ LONDON │ HAMMERSMITH AND FULHAM │ 6162 │ 1056198 │ █████████████████████ │
│ RADLETT │ HERTSMERE │ 513 │ 1045758 │ ████████████████████▊ │
│ LEATHERHEAD │ GUILDFORD │ 354 │ 1045175 │ ████████████████████▊ │
│ WEYBRIDGE │ ELMBRIDGE │ 1275 │ 1036702 │ ████████████████████▋ │
│ FARNHAM │ EAST HAMPSHIRE │ 107 │ 1033682 │ ████████████████████▋ │
│ ESHER │ ELMBRIDGE │ 915 │ 1032753 │ ████████████████████▋ │
│ FARNHAM │ HART │ 102 │ 1002692 │ ████████████████████ │
│ GERRARDS CROSS │ BUCKINGHAMSHIRE │ 845 │ 983639 │ ███████████████████▋ │
│ CHALFONT ST GILES │ BUCKINGHAMSHIRE │ 286 │ 973993 │ ███████████████████▍ │
│ SALCOMBE │ SOUTH HAMS │ 215 │ 965724 │ ███████████████████▎ │
│ SURBITON │ ELMBRIDGE │ 181 │ 960346 │ ███████████████████▏ │
│ BROCKENHURST │ NEW FOREST │ 226 │ 951278 │ ███████████████████ │
│ SUTTON COLDFIELD │ LICHFIELD │ 110 │ 930757 │ ██████████████████▌ │
│ EAST MOLESEY │ ELMBRIDGE │ 372 │ 927026 │ ██████████████████▌ │
│ LLANGOLLEN │ WREXHAM │ 127 │ 925681 │ ██████████████████▌ │
│ OXFORD │ SOUTH OXFORDSHIRE │ 638 │ 923830 │ ██████████████████▍ │
│ LONDON │ MERTON │ 4383 │ 923194 │ ██████████████████▍ │
│ GUILDFORD │ WAVERLEY │ 261 │ 905733 │ ██████████████████ │
│ TEDDINGTON │ RICHMOND UPON THAMES │ 1147 │ 894856 │ █████████████████▊ │
│ HARPENDEN │ ST ALBANS │ 1271 │ 893079 │ █████████████████▋ │
│ HENLEY-ON-THAMES │ SOUTH OXFORDSHIRE │ 1042 │ 887557 │ █████████████████▋ │
│ POTTERS BAR │ WELWYN HATFIELD │ 314 │ 863037 │ █████████████████▎ │
│ LONDON │ WANDSWORTH │ 13210 │ 857318 │ █████████████████▏ │
│ BILLINGSHURST │ CHICHESTER │ 255 │ 856508 │ █████████████████▏ │
│ LONDON │ SOUTHWARK │ 7742 │ 843145 │ ████████████████▋ │
│ LONDON │ HACKNEY │ 6656 │ 839716 │ ████████████████▋ │
│ LUTTERWORTH │ HARBOROUGH │ 1096 │ 836546 │ ████████████████▋ │
│ KINGSTON UPON THAMES │ KINGSTON UPON THAMES │ 1846 │ 828990 │ ████████████████▌ │
│ LONDON │ EALING │ 5583 │ 820135 │ ████████████████▍ │
│ INGATESTONE │ CHELMSFORD │ 120 │ 815379 │ ████████████████▎ │
│ MARLOW │ BUCKINGHAMSHIRE │ 718 │ 809943 │ ████████████████▏ │
│ EAST GRINSTEAD │ TANDRIDGE │ 105 │ 809461 │ ████████████████▏ │
│ CHIGWELL │ EPPING FOREST │ 484 │ 809338 │ ████████████████▏ │
│ EGHAM │ RUNNYMEDE │ 989 │ 807858 │ ████████████████▏ │
│ HASLEMERE │ CHICHESTER │ 223 │ 804173 │ ████████████████ │
│ PETWORTH │ CHICHESTER │ 288 │ 803206 │ ████████████████ │
│ TWICKENHAM │ RICHMOND UPON THAMES │ 2194 │ 802616 │ ████████████████ │
│ WEMBLEY │ BRENT │ 1698 │ 801733 │ ████████████████ │
│ HINDHEAD │ WAVERLEY │ 233 │ 801482 │ ████████████████ │
│ LONDON │ BARNET │ 8083 │ 792066 │ ███████████████▋ │
│ WOKING │ GUILDFORD │ 343 │ 789360 │ ███████████████▋ │
│ STOCKBRIDGE │ TEST VALLEY │ 318 │ 777909 │ ███████████████▌ │
│ BERKHAMSTED │ DACORUM │ 1049 │ 776138 │ ███████████████▌ │
│ MAIDENHEAD │ BUCKINGHAMSHIRE │ 236 │ 775572 │ ███████████████▌ │
│ SOLIHULL │ STRATFORD-ON-AVON │ 142 │ 770727 │ ███████████████▍ │
│ GREAT MISSENDEN │ BUCKINGHAMSHIRE │ 431 │ 764493 │ ███████████████▎ │
│ TADWORTH │ REIGATE AND BANSTEAD │ 920 │ 757511 │ ███████████████▏ │
│ LONDON │ BRENT │ 4124 │ 757194 │ ███████████████▏ │
│ THAMES DITTON │ ELMBRIDGE │ 470 │ 750828 │ ███████████████ │
│ LONDON │ LAMBETH │ 10431 │ 750532 │ ███████████████ │
│ RICKMANSWORTH │ THREE RIVERS │ 1500 │ 747029 │ ██████████████▊ │
│ KINGS LANGLEY │ DACORUM │ 281 │ 746536 │ ██████████████▊ │
│ HARLOW │ EPPING FOREST │ 172 │ 739423 │ ██████████████▋ │
│ TONBRIDGE │ SEVENOAKS │ 103 │ 738740 │ ██████████████▋ │
│ BELVEDERE │ BEXLEY │ 686 │ 736385 │ ██████████████▋ │
│ CRANBROOK │ TUNBRIDGE WELLS │ 769 │ 734328 │ ██████████████▋ │
│ SOLIHULL │ WARWICK │ 116 │ 733286 │ ██████████████▋ │
│ ALDERLEY EDGE │ CHESHIRE EAST │ 357 │ 732882 │ ██████████████▋ │
│ WELWYN │ WELWYN HATFIELD │ 404 │ 730281 │ ██████████████▌ │
│ CHISLEHURST │ BROMLEY │ 870 │ 730279 │ ██████████████▌ │
│ LONDON │ HARINGEY │ 6488 │ 726715 │ ██████████████▌ │
│ AMERSHAM │ BUCKINGHAMSHIRE │ 965 │ 725426 │ ██████████████▌ │
│ SEVENOAKS │ SEVENOAKS │ 2183 │ 725102 │ ██████████████▌ │
│ BOURNE END │ BUCKINGHAMSHIRE │ 269 │ 724595 │ ██████████████▍ │
│ NORTHWOOD │ HILLINGDON │ 568 │ 722436 │ ██████████████▍ │
│ PURFLEET │ THURROCK │ 143 │ 722205 │ ██████████████▍ │
│ SLOUGH │ BUCKINGHAMSHIRE │ 832 │ 721529 │ ██████████████▍ │
│ INGATESTONE │ BRENTWOOD │ 301 │ 718292 │ ██████████████▎ │
│ EPSOM │ REIGATE AND BANSTEAD │ 315 │ 709264 │ ██████████████▏ │
│ ASHTEAD │ MOLE VALLEY │ 524 │ 708646 │ ██████████████▏ │
│ BETCHWORTH │ MOLE VALLEY │ 155 │ 708525 │ ██████████████▏ │
│ OXTED │ TANDRIDGE │ 645 │ 706946 │ ██████████████▏ │
│ READING │ SOUTH OXFORDSHIRE │ 593 │ 705466 │ ██████████████ │
│ FELTHAM │ HOUNSLOW │ 1536 │ 703815 │ ██████████████ │
│ TUNBRIDGE WELLS │ WEALDEN │ 207 │ 703296 │ ██████████████ │
│ LEWES │ WEALDEN │ 116 │ 701349 │ ██████████████ │
│ OXFORD │ OXFORD │ 3656 │ 700813 │ ██████████████ │
│ MAYFIELD │ WEALDEN │ 177 │ 698158 │ █████████████▊ │
│ PINNER │ HARROW │ 997 │ 697876 │ █████████████▊ │
│ LECHLADE │ COTSWOLD │ 155 │ 696262 │ █████████████▊ │
│ WALTON-ON-THAMES │ ELMBRIDGE │ 1850 │ 690102 │ █████████████▋ │
└──────────────────────┴────────────────────────┴───────┴─────────┴─────────────────────────────────────────────────────────────────┘
```

## Let's Speed Up Queries Using Projections {#speedup-with-projections}

[Projections](../../sql-reference/statements/alter/projection.md) allow to improve queries speed by storing pre-aggregated data.
[Projections](../../sql-reference/statements/alter/projection.md) allow you to improve query speeds by storing pre-aggregated data in whatever format you want. In this example, we create a projection that keeps track of the average price, total price, and count of properties grouped by the year, district and town. At execution time, ClickHouse will use your projection if it thinks the projection can improve the performance of the query (you don't have to do anything special to use the projection - ClickHouse decides for you when the projection will be useful).

### Build a Projection {#build-projection}

Create an aggregate projection by dimensions `toYear(date)`, `district`, `town`:
Let's create an aggregate projection by the dimensions `toYear(date)`, `district`, and `town`:

```sql
ALTER TABLE uk_price_paid
@@ -374,25 +363,23 @@ ALTER TABLE uk_price_paid
toYear(date),
district,
town
);
)
```

Populate the projection for existing data (without it projection will be created for only newly inserted data):
Populate the projection for existing data. (Without materializing it, the projection will be created for only newly inserted data):

```sql
ALTER TABLE uk_price_paid
MATERIALIZE PROJECTION projection_by_year_district_town
SETTINGS mutations_sync = 1;
SETTINGS mutations_sync = 1
```

## Test Performance {#test-performance}

Let's run the same 3 queries.
Let's run the same 3 queries again:

### Query 1. Average Price Per Year {#average-price-projections}

Query:

```sql
SELECT
toYear(date) AS year,
@@ -400,47 +387,18 @@ SELECT
bar(price, 0, 1000000, 80)
FROM uk_price_paid
GROUP BY year
ORDER BY year ASC;
ORDER BY year ASC
```

Result:

```text
┌─year─┬──price─┬─bar(round(avg(price)), 0, 1000000, 80)─┐
│ 1995 │ 67932 │ █████▍ │
│ 1996 │ 71505 │ █████▋ │
│ 1997 │ 78532 │ ██████▎ │
│ 1998 │ 85436 │ ██████▋ │
│ 1999 │ 96037 │ ███████▋ │
│ 2000 │ 107479 │ ████████▌ │
│ 2001 │ 118885 │ █████████▌ │
│ 2002 │ 137941 │ ███████████ │
│ 2003 │ 155889 │ ████████████▍ │
│ 2004 │ 178885 │ ██████████████▎ │
│ 2005 │ 189351 │ ███████████████▏ │
│ 2006 │ 203528 │ ████████████████▎ │
│ 2007 │ 219378 │ █████████████████▌ │
│ 2008 │ 217056 │ █████████████████▎ │
│ 2009 │ 213419 │ █████████████████ │
│ 2010 │ 236109 │ ██████████████████▊ │
│ 2011 │ 232805 │ ██████████████████▌ │
│ 2012 │ 238367 │ ███████████████████ │
│ 2013 │ 256931 │ ████████████████████▌ │
│ 2014 │ 279915 │ ██████████████████████▍ │
│ 2015 │ 297266 │ ███████████████████████▋ │
│ 2016 │ 313201 │ █████████████████████████ │
│ 2017 │ 346097 │ ███████████████████████████▋ │
│ 2018 │ 350116 │ ████████████████████████████ │
│ 2019 │ 351013 │ ████████████████████████████ │
│ 2020 │ 369420 │ █████████████████████████████▌ │
│ 2021 │ 386903 │ ██████████████████████████████▊ │
└──────┴────────┴────────────────────────────────────────┘
The result is the same, but the performance is better!
```response
No projection: 28 rows in set. Elapsed: 1.775 sec. Processed 27.45 million rows, 164.70 MB (15.47 million rows/s., 92.79 MB/s.)
With projection: 28 rows in set. Elapsed: 0.665 sec. Processed 87.51 thousand rows, 3.21 MB (131.51 thousand rows/s., 4.82 MB/s.)
```


### Query 2. Average Price Per Year in London {#average-price-london-projections}
|
||||
|
||||
Query:
|
||||
|
||||
```sql
|
||||
SELECT
|
||||
toYear(date) AS year,
|
||||
@ -449,48 +407,19 @@ SELECT
|
||||
FROM uk_price_paid
|
||||
WHERE town = 'LONDON'
|
||||
GROUP BY year
|
||||
ORDER BY year ASC;
|
||||
ORDER BY year ASC
|
||||
```
|
||||
|
||||
Result:
|
||||
Same result, but notice the improvement in query performance:
|
||||
|
||||
```text
|
||||
┌─year─┬───price─┬─bar(round(avg(price)), 0, 2000000, 100)───────────────┐
│ 1995 │ 109116 │ █████▍ │
│ 1996 │ 118667 │ █████▊ │
│ 1997 │ 136518 │ ██████▋ │
│ 1998 │ 152983 │ ███████▋ │
│ 1999 │ 180637 │ █████████ │
│ 2000 │ 215838 │ ██████████▋ │
│ 2001 │ 232994 │ ███████████▋ │
│ 2002 │ 263670 │ █████████████▏ │
│ 2003 │ 278394 │ █████████████▊ │
│ 2004 │ 304666 │ ███████████████▏ │
│ 2005 │ 322875 │ ████████████████▏ │
│ 2006 │ 356191 │ █████████████████▋ │
│ 2007 │ 404054 │ ████████████████████▏ │
│ 2008 │ 420741 │ █████████████████████ │
│ 2009 │ 427753 │ █████████████████████▍ │
│ 2010 │ 480306 │ ████████████████████████ │
│ 2011 │ 496274 │ ████████████████████████▋ │
│ 2012 │ 519442 │ █████████████████████████▊ │
│ 2013 │ 616212 │ ██████████████████████████████▋ │
│ 2014 │ 724154 │ ████████████████████████████████████▏ │
│ 2015 │ 792129 │ ███████████████████████████████████████▌ │
│ 2016 │ 843655 │ ██████████████████████████████████████████▏ │
│ 2017 │ 982642 │ █████████████████████████████████████████████████▏ │
│ 2018 │ 1016835 │ ██████████████████████████████████████████████████▋ │
│ 2019 │ 1042849 │ ████████████████████████████████████████████████████▏ │
│ 2020 │ 1011889 │ ██████████████████████████████████████████████████▌ │
│ 2021 │ 960343 │ ████████████████████████████████████████████████ │
└──────┴─────────┴───────────────────────────────────────────────────────┘
```

```response
No projection: 28 rows in set. Elapsed: 0.720 sec. Processed 27.45 million rows, 46.61 MB (38.13 million rows/s., 64.74 MB/s.)
With projection: 28 rows in set. Elapsed: 0.015 sec. Processed 87.51 thousand rows, 3.51 MB (5.74 million rows/s., 230.24 MB/s.)
```

### Query 3. The Most Expensive Neighborhoods {#most-expensive-neighborhoods-projections}

The condition (date >= '2020-01-01') needs to be modified to match projection dimension (toYear(date) >= 2020).

Query:
The condition (date >= '2020-01-01') needs to be modified so that it matches the projection dimension (`toYear(date) >= 2020`):

```sql
SELECT
@ -506,138 +435,16 @@ GROUP BY
    district
HAVING c >= 100
ORDER BY price DESC
LIMIT 100;
LIMIT 100
```

Result:
Again, the result is the same but notice the improvement in query performance:

```text
┌─town─────────────────┬─district───────────────┬────c─┬───price─┬─bar(round(avg(price)), 0, 5000000, 100)────────────────────────────┐
│ LONDON │ CITY OF WESTMINSTER │ 3606 │ 3280239 │ █████████████████████████████████████████████████████████████████▌ │
│ LONDON │ CITY OF LONDON │ 274 │ 3160502 │ ███████████████████████████████████████████████████████████████▏ │
│ LONDON │ KENSINGTON AND CHELSEA │ 2550 │ 2308478 │ ██████████████████████████████████████████████▏ │
│ LEATHERHEAD │ ELMBRIDGE │ 114 │ 1897407 │ █████████████████████████████████████▊ │
│ LONDON │ CAMDEN │ 3033 │ 1805404 │ ████████████████████████████████████ │
│ VIRGINIA WATER │ RUNNYMEDE │ 156 │ 1753247 │ ███████████████████████████████████ │
│ WINDLESHAM │ SURREY HEATH │ 108 │ 1677613 │ █████████████████████████████████▌ │
│ THORNTON HEATH │ CROYDON │ 546 │ 1671721 │ █████████████████████████████████▍ │
│ BARNET │ ENFIELD │ 124 │ 1505840 │ ██████████████████████████████ │
│ COBHAM │ ELMBRIDGE │ 387 │ 1237250 │ ████████████████████████▋ │
│ LONDON │ ISLINGTON │ 2668 │ 1236980 │ ████████████████████████▋ │
│ OXFORD │ SOUTH OXFORDSHIRE │ 321 │ 1220907 │ ████████████████████████▍ │
│ LONDON │ RICHMOND UPON THAMES │ 704 │ 1215551 │ ████████████████████████▎ │
│ LONDON │ HOUNSLOW │ 671 │ 1207493 │ ████████████████████████▏ │
│ ASCOT │ WINDSOR AND MAIDENHEAD │ 407 │ 1183299 │ ███████████████████████▋ │
│ BEACONSFIELD │ BUCKINGHAMSHIRE │ 330 │ 1175615 │ ███████████████████████▌ │
│ RICHMOND │ RICHMOND UPON THAMES │ 874 │ 1110444 │ ██████████████████████▏ │
│ LONDON │ HAMMERSMITH AND FULHAM │ 3086 │ 1053983 │ █████████████████████ │
│ SURBITON │ ELMBRIDGE │ 100 │ 1011800 │ ████████████████████▏ │
│ RADLETT │ HERTSMERE │ 283 │ 1011712 │ ████████████████████▏ │
│ SALCOMBE │ SOUTH HAMS │ 127 │ 1011624 │ ████████████████████▏ │
│ WEYBRIDGE │ ELMBRIDGE │ 655 │ 1007265 │ ████████████████████▏ │
│ ESHER │ ELMBRIDGE │ 485 │ 986581 │ ███████████████████▋ │
│ LEATHERHEAD │ GUILDFORD │ 202 │ 977320 │ ███████████████████▌ │
│ BURFORD │ WEST OXFORDSHIRE │ 111 │ 966893 │ ███████████████████▎ │
│ BROCKENHURST │ NEW FOREST │ 129 │ 956675 │ ███████████████████▏ │
│ HINDHEAD │ WAVERLEY │ 137 │ 953753 │ ███████████████████ │
│ GERRARDS CROSS │ BUCKINGHAMSHIRE │ 419 │ 951121 │ ███████████████████ │
│ EAST MOLESEY │ ELMBRIDGE │ 192 │ 936769 │ ██████████████████▋ │
│ CHALFONT ST GILES │ BUCKINGHAMSHIRE │ 146 │ 925515 │ ██████████████████▌ │
│ LONDON │ TOWER HAMLETS │ 4388 │ 918304 │ ██████████████████▎ │
│ OLNEY │ MILTON KEYNES │ 235 │ 910646 │ ██████████████████▏ │
│ HENLEY-ON-THAMES │ SOUTH OXFORDSHIRE │ 540 │ 902418 │ ██████████████████ │
│ LONDON │ SOUTHWARK │ 3885 │ 892997 │ █████████████████▋ │
│ KINGSTON UPON THAMES │ KINGSTON UPON THAMES │ 960 │ 885969 │ █████████████████▋ │
│ LONDON │ EALING │ 2658 │ 871755 │ █████████████████▍ │
│ CRANBROOK │ TUNBRIDGE WELLS │ 431 │ 862348 │ █████████████████▏ │
│ LONDON │ MERTON │ 2099 │ 859118 │ █████████████████▏ │
│ BELVEDERE │ BEXLEY │ 346 │ 842423 │ ████████████████▋ │
│ GUILDFORD │ WAVERLEY │ 143 │ 841277 │ ████████████████▋ │
│ HARPENDEN │ ST ALBANS │ 657 │ 841216 │ ████████████████▋ │
│ LONDON │ HACKNEY │ 3307 │ 837090 │ ████████████████▋ │
│ LONDON │ WANDSWORTH │ 6566 │ 832663 │ ████████████████▋ │
│ MAIDENHEAD │ BUCKINGHAMSHIRE │ 123 │ 824299 │ ████████████████▍ │
│ KINGS LANGLEY │ DACORUM │ 145 │ 821331 │ ████████████████▍ │
│ BERKHAMSTED │ DACORUM │ 543 │ 818415 │ ████████████████▎ │
│ GREAT MISSENDEN │ BUCKINGHAMSHIRE │ 226 │ 802807 │ ████████████████ │
│ BILLINGSHURST │ CHICHESTER │ 144 │ 797829 │ ███████████████▊ │
│ WOKING │ GUILDFORD │ 176 │ 793494 │ ███████████████▋ │
│ STOCKBRIDGE │ TEST VALLEY │ 178 │ 793269 │ ███████████████▋ │
│ EPSOM │ REIGATE AND BANSTEAD │ 172 │ 791862 │ ███████████████▋ │
│ TONBRIDGE │ TUNBRIDGE WELLS │ 360 │ 787876 │ ███████████████▋ │
│ TEDDINGTON │ RICHMOND UPON THAMES │ 595 │ 786492 │ ███████████████▋ │
│ TWICKENHAM │ RICHMOND UPON THAMES │ 1155 │ 786193 │ ███████████████▋ │
│ LYNDHURST │ NEW FOREST │ 102 │ 785593 │ ███████████████▋ │
│ LONDON │ LAMBETH │ 5228 │ 774574 │ ███████████████▍ │
│ LONDON │ BARNET │ 3955 │ 773259 │ ███████████████▍ │
│ OXFORD │ VALE OF WHITE HORSE │ 353 │ 772088 │ ███████████████▍ │
│ TONBRIDGE │ MAIDSTONE │ 305 │ 770740 │ ███████████████▍ │
│ LUTTERWORTH │ HARBOROUGH │ 538 │ 768634 │ ███████████████▎ │
│ WOODSTOCK │ WEST OXFORDSHIRE │ 140 │ 766037 │ ███████████████▎ │
│ MIDHURST │ CHICHESTER │ 257 │ 764815 │ ███████████████▎ │
│ MARLOW │ BUCKINGHAMSHIRE │ 327 │ 761876 │ ███████████████▏ │
│ LONDON │ NEWHAM │ 3237 │ 761784 │ ███████████████▏ │
│ ALDERLEY EDGE │ CHESHIRE EAST │ 178 │ 757318 │ ███████████████▏ │
│ LUTON │ CENTRAL BEDFORDSHIRE │ 212 │ 754283 │ ███████████████ │
│ PETWORTH │ CHICHESTER │ 154 │ 754220 │ ███████████████ │
│ ALRESFORD │ WINCHESTER │ 219 │ 752718 │ ███████████████ │
│ POTTERS BAR │ WELWYN HATFIELD │ 174 │ 748465 │ ██████████████▊ │
│ HASLEMERE │ CHICHESTER │ 128 │ 746907 │ ██████████████▊ │
│ TADWORTH │ REIGATE AND BANSTEAD │ 502 │ 743252 │ ██████████████▋ │
│ THAMES DITTON │ ELMBRIDGE │ 244 │ 741913 │ ██████████████▋ │
│ REIGATE │ REIGATE AND BANSTEAD │ 581 │ 738198 │ ██████████████▋ │
│ BOURNE END │ BUCKINGHAMSHIRE │ 138 │ 735190 │ ██████████████▋ │
│ SEVENOAKS │ SEVENOAKS │ 1156 │ 730018 │ ██████████████▌ │
│ OXTED │ TANDRIDGE │ 336 │ 729123 │ ██████████████▌ │
│ INGATESTONE │ BRENTWOOD │ 166 │ 728103 │ ██████████████▌ │
│ LONDON │ BRENT │ 2079 │ 720605 │ ██████████████▍ │
│ LONDON │ HARINGEY │ 3216 │ 717780 │ ██████████████▎ │
│ PURLEY │ CROYDON │ 575 │ 716108 │ ██████████████▎ │
│ WELWYN │ WELWYN HATFIELD │ 222 │ 710603 │ ██████████████▏ │
│ RICKMANSWORTH │ THREE RIVERS │ 798 │ 704571 │ ██████████████ │
│ BANSTEAD │ REIGATE AND BANSTEAD │ 401 │ 701293 │ ██████████████ │
│ CHIGWELL │ EPPING FOREST │ 261 │ 701203 │ ██████████████ │
│ PINNER │ HARROW │ 528 │ 698885 │ █████████████▊ │
│ HASLEMERE │ WAVERLEY │ 280 │ 696659 │ █████████████▊ │
│ SLOUGH │ BUCKINGHAMSHIRE │ 396 │ 694917 │ █████████████▊ │
│ WALTON-ON-THAMES │ ELMBRIDGE │ 946 │ 692395 │ █████████████▋ │
│ READING │ SOUTH OXFORDSHIRE │ 318 │ 691988 │ █████████████▋ │
│ NORTHWOOD │ HILLINGDON │ 271 │ 690643 │ █████████████▋ │
│ FELTHAM │ HOUNSLOW │ 763 │ 688595 │ █████████████▋ │
│ ASHTEAD │ MOLE VALLEY │ 303 │ 687923 │ █████████████▋ │
│ BARNET │ BARNET │ 975 │ 686980 │ █████████████▋ │
│ WOKING │ SURREY HEATH │ 283 │ 686669 │ █████████████▋ │
│ MALMESBURY │ WILTSHIRE │ 323 │ 683324 │ █████████████▋ │
│ AMERSHAM │ BUCKINGHAMSHIRE │ 496 │ 680962 │ █████████████▌ │
│ CHISLEHURST │ BROMLEY │ 430 │ 680209 │ █████████████▌ │
│ HYTHE │ FOLKESTONE AND HYTHE │ 490 │ 676908 │ █████████████▌ │
│ MAYFIELD │ WEALDEN │ 101 │ 676210 │ █████████████▌ │
│ ASCOT │ BRACKNELL FOREST │ 168 │ 676004 │ █████████████▌ │
└──────────────────────┴────────────────────────┴──────┴─────────┴────────────────────────────────────────────────────────────────────┘
```

```response
No projection: 100 rows in set. Elapsed: 0.928 sec. Processed 27.45 million rows, 103.80 MB (29.56 million rows/s., 111.80 MB/s.)
With projection: 100 rows in set. Elapsed: 0.336 sec. Processed 17.32 thousand rows, 1.23 MB (51.61 thousand rows/s., 3.65 MB/s.)
```

### Summary {#summary}

All 3 queries work much faster and read fewer rows.

```text
Query 1

no projection: 27 rows in set. Elapsed: 0.158 sec. Processed 26.32 million rows, 157.93 MB (166.57 million rows/s., 999.39 MB/s.)
   projection: 27 rows in set. Elapsed: 0.007 sec. Processed 105.96 thousand rows, 3.33 MB (14.58 million rows/s., 458.13 MB/s.)


Query 2

no projection: 27 rows in set. Elapsed: 0.163 sec. Processed 26.32 million rows, 80.01 MB (161.75 million rows/s., 491.64 MB/s.)
   projection: 27 rows in set. Elapsed: 0.008 sec. Processed 105.96 thousand rows, 3.67 MB (13.29 million rows/s., 459.89 MB/s.)

Query 3

no projection: 100 rows in set. Elapsed: 0.069 sec. Processed 26.32 million rows, 62.47 MB (382.13 million rows/s., 906.93 MB/s.)
   projection: 100 rows in set. Elapsed: 0.029 sec. Processed 8.08 thousand rows, 511.08 KB (276.06 thousand rows/s., 17.47 MB/s.)
```

### Test It in Playground {#playground}
### Test it in the Playground {#playground}

The dataset is also available in the [Online Playground](https://play.clickhouse.com/play?user=play#U0VMRUNUIHRvd24sIGRpc3RyaWN0LCBjb3VudCgpIEFTIGMsIHJvdW5kKGF2ZyhwcmljZSkpIEFTIHByaWNlLCBiYXIocHJpY2UsIDAsIDUwMDAwMDAsIDEwMCkgRlJPTSB1a19wcmljZV9wYWlkIFdIRVJFIGRhdGUgPj0gJzIwMjAtMDEtMDEnIEdST1VQIEJZIHRvd24sIGRpc3RyaWN0IEhBVklORyBjID49IDEwMCBPUkRFUiBCWSBwcmljZSBERVNDIExJTUlUIDEwMA==).

docs/en/getting-started/index.md (new file, 26 lines)
@ -0,0 +1,26 @@
---
slug: /en/getting-started/example-datasets/
sidebar_position: 0
sidebar_label: Overview
keywords: [clickhouse, install, tutorial, sample, datasets]
pagination_next: 'en/tutorial'
---

# Tutorials and Example Datasets

We have a lot of resources for helping you get started and learn how ClickHouse works:

- If you need to get ClickHouse up and running, check out our [Quick Start](../quick-start.mdx)
- The [ClickHouse Tutorial](../tutorial.md) analyzes a dataset of New York City taxi rides

In addition, the sample datasets provide a great experience on working with ClickHouse, learning important techniques and tricks, and seeing how to take advantage of the many powerful functions in ClickHouse. The sample datasets include:

- The [UK Property Price Paid dataset](../getting-started/example-datasets/uk-price-paid.md) is a good starting point with some interesting SQL queries
- The [New York Taxi Data](../getting-started/example-datasets/nyc-taxi.md) has an example of how to insert data from S3 into ClickHouse
- The [Cell Towers dataset](../getting-started/example-datasets/cell-towers.md) imports a CSV into ClickHouse
- The [NYPD Complaint Data](../getting-started/example-datasets/nypd_complaint_data.md) demonstrates how to use data inference to simplify creating tables
- The ["What's on the Menu?" dataset](../getting-started/example-datasets/menus.md) has an example of denormalizing data

View the **Tutorials and Datasets** menu for a complete list of sample datasets.

@ -1,13 +1,34 @@
---
sidebar_label: Installation
sidebar_position: 1
keywords: [clickhouse, install, installation, docs]
description: ClickHouse can run on any Linux, FreeBSD, or Mac OS X with x86_64, AArch64, or PowerPC64LE CPU architecture.
slug: /en/getting-started/install
title: Installation
sidebar_label: Install
keywords: [clickhouse, install, getting started, quick start]
slug: /en/install
---

## System Requirements {#system-requirements}
# Installing ClickHouse

You have two options for getting up and running with ClickHouse:

- **[ClickHouse Cloud](https://clickhouse.cloud/):** the official ClickHouse as a service, built, maintained, and supported by the creators of ClickHouse
- **Self-managed ClickHouse:** ClickHouse can run on any Linux, FreeBSD, or Mac OS X with x86_64, AArch64, or PowerPC64LE CPU architecture

## ClickHouse Cloud

The quickest and easiest way to get up and running with ClickHouse is to create a new service in [ClickHouse Cloud](https://clickhouse.cloud/):

<div class="eighty-percent">

![Create a ClickHouse Cloud service](@site/docs/en/_snippets/images/createservice1.png)
</div>

Once your Cloud service is provisioned, you will be able to [connect to it](/docs/en/integrations/connect-a-client.md) and start [inserting data](/docs/en/integrations/data-ingestion.md).

:::note
The [Quick Start](/docs/en/quick-start.mdx) walks through the steps to get a ClickHouse Cloud service up and running, connecting to it, and inserting data.
:::

## Self-Managed Requirements

### CPU Architecture

ClickHouse can run on any Linux, FreeBSD, or Mac OS X with x86_64, AArch64, or PowerPC64LE CPU architecture.

@ -19,6 +40,55 @@ $ grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not

To run ClickHouse on processors that do not support SSE 4.2 or have AArch64 or PowerPC64LE architecture, you should [build ClickHouse from sources](#from-sources) with proper configuration adjustments.

ClickHouse implements parallel data processing and uses all the hardware resources available. When choosing a processor, take into account that ClickHouse works more efficiently at configurations with a large number of cores but a lower clock rate than at configurations with fewer cores and a higher clock rate. For example, 16 cores with 2600 MHz is preferable to 8 cores with 3600 MHz.

It is recommended to use **Turbo Boost** and **hyper-threading** technologies. They significantly improve performance with a typical workload.

### RAM {#ram}

We recommend using a minimum of 4GB of RAM to perform non-trivial queries. The ClickHouse server can run with a much smaller amount of RAM, but it requires memory for processing queries.

The required volume of RAM depends on:

- The complexity of queries.
- The amount of data that is processed in queries.

To calculate the required volume of RAM, you should estimate the size of temporary data for [GROUP BY](/docs/en/sql-reference/statements/select/group-by.md#select-group-by-clause), [DISTINCT](/docs/en/sql-reference/statements/select/distinct.md#select-distinct), [JOIN](/docs/en/sql-reference/statements/select/join.md#select-join) and other operations you use.

ClickHouse can use external memory for temporary data. See [GROUP BY in External Memory](/docs/en/sql-reference/statements/select/group-by.md#select-group-by-in-external-memory) for details.
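
For a rough idea of how this looks in practice, external aggregation can be enabled per query. This is only a sketch: the `hits` table and both thresholds below are hypothetical, and in a real deployment you would size them against your memory limits:

```sql
-- Sketch: let a heavy GROUP BY spill to disk after ~10 GB instead of failing.
-- 'hits' is a hypothetical table; the values are illustrative only.
SELECT UserID, count() AS c
FROM hits
GROUP BY UserID
ORDER BY c DESC
LIMIT 10
SETTINGS max_bytes_before_external_group_by = 10000000000, max_memory_usage = 20000000000;
```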

### Swap File {#swap-file}

Disable the swap file for production environments.

### Storage Subsystem {#storage-subsystem}

You need to have 2GB of free disk space to install ClickHouse.

The volume of storage required for your data should be calculated separately. Assessment should include:

- Estimation of the data volume.

    You can take a sample of the data and get the average size of a row from it. Then multiply the value by the number of rows you plan to store.

- The data compression coefficient.

    To estimate the data compression coefficient, load a sample of your data into ClickHouse, and compare the actual size of the data with the size of the table stored. For example, clickstream data is usually compressed by 6-10 times. One way to measure the achieved ratio is sketched right after this list.
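
The compression coefficient can be read from the `system.parts` table once the sample is loaded; `my_table` below is a hypothetical name:

```sql
-- Sketch: achieved compression ratio over the active parts of one table.
SELECT
    sum(data_uncompressed_bytes) / sum(data_compressed_bytes) AS compression_ratio
FROM system.parts
WHERE active AND database = 'default' AND table = 'my_table';
```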

To calculate the final volume of data to be stored, apply the compression coefficient to the estimated data volume. If you plan to store data in several replicas, then multiply the estimated volume by the number of replicas.

### Network {#network}

If possible, use networks of 10G or higher class.

The network bandwidth is critical for processing distributed queries with a large amount of intermediate data. Besides, network speed affects replication processes.

### Software {#software}

ClickHouse is developed primarily for the Linux family of operating systems. The recommended Linux distribution is Ubuntu. The `tzdata` package should be installed in the system.

## Self-Managed Install

## Available Installation Options {#available-installation-options}

### From DEB Packages {#install-from-deb-packages}
@ -58,9 +128,9 @@ clickhouse-client # or "clickhouse-client --password" if you set up a password.

</details>

You can replace `stable` with `lts` to use different [release kinds](../faq/operations/production.md) based on your needs.
You can replace `stable` with `lts` to use different [release kinds](/docs/en/faq/operations/production.md) based on your needs.

You can also download and install packages manually from [here](https://packages.clickhouse.com/deb/pool/stable).
You can also download and install packages manually from [here](https://packages.clickhouse.com/deb/pool/main/c/).

#### Packages {#packages}

@ -105,7 +175,7 @@ clickhouse-client # or "clickhouse-client --password" if you set up a password.

</details>

You can replace `stable` with `lts` to use different [release kinds](../faq/operations/production.md) based on your needs.
You can replace `stable` with `lts` to use different [release kinds](/docs/en/faq/operations/production.md) based on your needs.

Then run these commands to install packages:

@ -226,7 +296,7 @@ Use the `clickhouse client` to connect to the server, or `clickhouse local` to p

### From Sources {#from-sources}

To manually compile ClickHouse, follow the instructions for [Linux](../development/build.md) or [Mac OS X](../development/build-osx.md).
To manually compile ClickHouse, follow the instructions for [Linux](/docs/en/development/build.md) or [Mac OS X](/docs/en/development/build-osx.md).

You can compile packages and install them, or use programs without installing packages. Also, by building manually you can disable the SSE 4.2 requirement or build for AArch64 CPUs.

@ -281,7 +351,7 @@ If the configuration file is in the current directory, you do not need to specif

ClickHouse supports access restriction settings. They are located in the `users.xml` file (next to `config.xml`).
By default, access is allowed from anywhere for the `default` user, without a password. See `user/default/networks`.
For more information, see the section [“Configuration Files”](../operations/configuration-files.md).
For more information, see the section [“Configuration Files”](/docs/en/operations/configuration-files.md).

After launching the server, you can use the command-line client to connect to it:

@ -292,7 +362,7 @@ $ clickhouse-client

By default, it connects to `localhost:9000` on behalf of the user `default` without a password. It can also be used to connect to a remote server using the `--host` argument.

The terminal must use UTF-8 encoding.
For more information, see the section [“Command-line client”](../interfaces/cli.md).
For more information, see the section [“Command-line client”](/docs/en/interfaces/cli.md).

Example:

@ -317,6 +387,5 @@ SELECT 1

**Congratulations, the system works!**

To continue experimenting, you can download one of the test data sets or go through [tutorial](./../tutorial.md).
To continue experimenting, you can download one of the test data sets or go through [tutorial](/docs/en/tutorial.md).

[Original article](https://clickhouse.com/docs/en/getting_started/install/) <!--hide-->

@ -3,6 +3,7 @@ slug: /en/interfaces/cli
sidebar_position: 17
sidebar_label: Command-Line Client
---
import ConnectionDetails from '@site/docs/en/_snippets/_gather_your_details_native.md';

# Command-line Client

@ -24,26 +25,76 @@ Connected to ClickHouse server version 20.13.1 revision 54442.
Different client and server versions are compatible with one another, but some features may not be available in older clients. We recommend using the same version of the client as the server. When you use a client that is older than the server, `clickhouse-client` displays the message:

```response
ClickHouse client version is older than ClickHouse server. It may lack support for new features.
ClickHouse client version is older than ClickHouse server.
It may lack support for new features.
```

## Usage {#cli_usage}

The client can be used in interactive and non-interactive (batch) mode. To use batch mode, specify the ‘query’ parameter, or send data to ‘stdin’ (it verifies that ‘stdin’ is not a terminal), or both. Similar to the HTTP interface, when using the ‘query’ parameter and sending data to ‘stdin’, the request is a concatenation of the ‘query’ parameter, a line feed, and the data in ‘stdin’. This is convenient for large INSERT queries.
The client can be used in interactive and non-interactive (batch) mode.

Example of using the client to insert data:
### Gather your connection details
<ConnectionDetails />

### Interactive

To connect to your ClickHouse Cloud service, or any ClickHouse server using TLS and passwords, interactively use `--secure`, port 9440, and provide your username and password:

```bash
clickhouse-client --host <HOSTNAME> \
                  --secure \
                  --port 9440 \
                  --user <USERNAME> \
                  --password <PASSWORD>
```

To connect to a self-managed ClickHouse server you will need the details for that server. Whether or not TLS is used, port numbers, and passwords are all configurable. Use the above example for ClickHouse Cloud as a starting point.

### Batch

To use batch mode, specify the ‘query’ parameter, or send data to ‘stdin’ (it verifies that ‘stdin’ is not a terminal), or both. Similar to the HTTP interface, when using the ‘query’ parameter and sending data to ‘stdin’, the request is a concatenation of the ‘query’ parameter, a line feed, and the data in ‘stdin’. This is convenient for large INSERT queries.

Examples of using the client to insert data:

#### Inserting a CSV file into a remote ClickHouse service

This example is appropriate for ClickHouse Cloud, or any ClickHouse server using TLS and a password. In this example, a sample dataset CSV file, `cell_towers.csv`, is inserted into an existing table `cell_towers` in the `default` database:

```bash
clickhouse-client --host HOSTNAME.clickhouse.cloud \
                  --secure \
                  --port 9440 \
                  --user default \
                  --password PASSWORD \
                  --query "INSERT INTO cell_towers FORMAT CSVWithNames" \
                  < cell_towers.csv
```

:::note
To concentrate on the query syntax, the rest of the examples leave off the connection details (`--host`, `--port`, etc.). Add them in when you try the commands.
:::

#### Three different ways of inserting data

``` bash
$ echo -ne "1, 'some text', '2016-08-14 00:00:00'\n2, 'some more text', '2016-08-14 00:00:01'" | clickhouse-client --database=test --query="INSERT INTO test FORMAT CSV";
echo -ne "1, 'some text', '2016-08-14 00:00:00'\n2, 'some more text', '2016-08-14 00:00:01'" | \
  clickhouse-client --database=test --query="INSERT INTO test FORMAT CSV";
```

$ cat <<_EOF | clickhouse-client --database=test --query="INSERT INTO test FORMAT CSV";
```bash
cat <<_EOF | clickhouse-client --database=test --query="INSERT INTO test FORMAT CSV";
3, 'some text', '2016-08-14 00:00:00'
4, 'some more text', '2016-08-14 00:00:01'
_EOF

$ cat file.csv | clickhouse-client --database=test --query="INSERT INTO test FORMAT CSV";
```

```bash
cat file.csv | clickhouse-client --database=test --query="INSERT INTO test FORMAT CSV";
```

### Notes

In batch mode, the default data format is TabSeparated. You can set the format in the FORMAT clause of the query.

By default, you can only process a single query in batch mode. To make multiple queries from a “script,” use the `--multiquery` parameter. This works for all queries except INSERT. Query results are output consecutively without additional separators. Similarly, to process a large number of queries, you can run ‘clickhouse-client’ for each query. Note that it may take tens of milliseconds to launch the ‘clickhouse-client’ program.

@ -1020,6 +1020,62 @@ Example:
}
```

To use an object name as a column value, you can use the special setting [format_json_object_each_row_column_for_object_name](../operations/settings/settings.md#format_json_object_each_row_column_for_object_name). Its value is set to the name of the column that is used as the JSON key for each row in the resulting object.
Examples:

For output:

Let's say we have table `test` with two columns:
```
┌─object_name─┬─number─┐
│ first_obj │ 1 │
│ second_obj │ 2 │
│ third_obj │ 3 │
└─────────────┴────────┘
```
Let's output it in `JSONObjectEachRow` format and use `format_json_object_each_row_column_for_object_name` setting:

```sql
select * from test settings format_json_object_each_row_column_for_object_name='object_name'
```

The output:
```json
{
    "first_obj": {"number": 1},
    "second_obj": {"number": 2},
    "third_obj": {"number": 3}
}
```

For input:

Let's say we stored the output from the previous example in a file named `data.json`:
```sql
select * from file('data.json', JSONObjectEachRow, 'object_name String, number UInt64') settings format_json_object_each_row_column_for_object_name='object_name'
```

```
┌─object_name─┬─number─┐
│ first_obj │ 1 │
│ second_obj │ 2 │
│ third_obj │ 3 │
└─────────────┴────────┘
```

It also works in schema inference:

```sql
desc file('data.json', JSONObjectEachRow) settings format_json_object_each_row_column_for_object_name='object_name'
```

```
┌─name────────┬─type────────────┐
│ object_name │ String │
│ number │ Nullable(Int64) │
└─────────────┴─────────────────┘
```


### Inserting Data {#json-inserting-data}

@ -41,6 +41,7 @@ ClickHouse Inc does **not** maintain the libraries listed below and hasn’t don
    - [node-clickhouse](https://github.com/apla/node-clickhouse)
    - [nestjs-clickhouse](https://github.com/depyronick/nestjs-clickhouse)
    - [clickhouse-client](https://github.com/depyronick/clickhouse-client)
    - [node-clickhouse-orm](https://github.com/zimv/node-clickhouse-orm)
- Perl
    - [perl-DBD-ClickHouse](https://github.com/elcamlost/perl-DBD-ClickHouse)
    - [HTTP-ClickHouse](https://metacpan.org/release/HTTP-ClickHouse)

@ -171,6 +171,55 @@ end_time: 2022-08-30 09:21:46
1 row in set. Elapsed: 0.002 sec.
```

## Backup to S3

It is possible to `BACKUP`/`RESTORE` to S3, but the disk must be configured properly, because by default you would also need to back up metadata from the local disk to make the backup complete.

First, you need to configure the S3 disk in a special way:

```xml
<clickhouse>
    <storage_configuration>
        <disks>
            <s3_plain>
                <type>s3_plain</type>
                <endpoint></endpoint>
                <access_key_id></access_key_id>
                <secret_access_key></secret_access_key>
            </s3_plain>
        </disks>
        <policies>
            <s3>
                <volumes>
                    <main>
                        <disk>s3_plain</disk>
                    </main>
                </volumes>
            </s3>
        </policies>
    </storage_configuration>

    <backups>
        <allowed_disk>s3_plain</allowed_disk>
    </backups>
</clickhouse>
```

And then `BACKUP`/`RESTORE` as usual:

```sql
BACKUP TABLE data TO Disk('s3_plain', 'cloud_backup');
RESTORE TABLE data AS data_restored FROM Disk('s3_plain', 'cloud_backup');
```

:::note
Keep in mind that:
- This disk should not be used for `MergeTree` itself, only for `BACKUP`/`RESTORE`
- It makes an excessive number of API calls
:::

## Alternatives

ClickHouse stores data on disk, and there are many ways to back up disks. These are some alternatives that have been used in the past, and that may fit in well in your environment.

@ -5,6 +5,9 @@ sidebar_label: ClickHouse Keeper
---

# ClickHouse Keeper
import SelfManaged from '@site/docs/en/_snippets/_self_managed_only_automated.md';

<SelfManaged />

ClickHouse Keeper provides the coordination system for data [replication](../engines/table-engines/mergetree-family/replication.md) and [distributed DDL](../sql-reference/distributed-ddl.md) queries execution. ClickHouse Keeper is compatible with ZooKeeper.

@ -3,7 +3,11 @@ slug: /en/operations/external-authenticators/
sidebar_position: 48
sidebar_label: External User Authenticators and Directories
title: "External User Authenticators and Directories"
pagination_next: 'en/operations/external-authenticators/kerberos'
---
import SelfManaged from '@site/docs/en/_snippets/_self_managed_only_no_roadmap.md';

<SelfManaged />

ClickHouse supports authenticating and managing users using external services.

@ -2,6 +2,9 @@
slug: /en/operations/external-authenticators/kerberos
---
# Kerberos
import SelfManaged from '@site/docs/en/_snippets/_self_managed_only_no_roadmap.md';

<SelfManaged />

Existing and properly configured ClickHouse users can be authenticated via the Kerberos authentication protocol.

@ -2,6 +2,9 @@
slug: /en/operations/external-authenticators/ldap
title: "LDAP"
---
import SelfManaged from '@site/docs/en/_snippets/_self_managed_only_no_roadmap.md';

<SelfManaged />

An LDAP server can be used to authenticate ClickHouse users. There are two different approaches for doing this:

@ -2,6 +2,9 @@
slug: /en/operations/external-authenticators/ssl-x509
title: "SSL X.509 certificate authentication"
---
import SelfManaged from '@site/docs/en/_snippets/_self_managed_only_no_roadmap.md';

<SelfManaged />

The [SSL 'strict' option](../server-configuration-parameters/settings.md#server_configuration_parameters-openssl) enables mandatory certificate validation for incoming connections. In this case, only connections with trusted certificates can be established; connections with untrusted certificates are rejected. Thus, certificate validation makes it possible to uniquely authenticate an incoming connection. The `Common Name` field of the certificate is used to identify the connected user, which makes it possible to associate multiple certificates with the same user. Additionally, reissuing and revoking certificates does not affect the ClickHouse configuration.

@ -5,6 +5,9 @@ sidebar_label: Monitoring
---

# Monitoring
import SelfManaged from '@site/docs/en/_snippets/_self_managed_only_automated.md';

<SelfManaged />

You can monitor:

@ -3,9 +3,12 @@ slug: /en/operations/optimizing-performance/sampling-query-profiler
sidebar_position: 54
sidebar_label: Query Profiling
---
import SelfManaged from '@site/docs/en/_snippets/_self_managed_only_no_roadmap.md';

# Sampling Query Profiler

<SelfManaged />

ClickHouse runs a sampling profiler that allows analyzing query execution. Using the profiler you can find source code routines that are used most frequently during query execution. You can trace CPU time and wall-clock time spent, including idle time.

To use profiler:

@ -5,6 +5,10 @@ sidebar_label: Testing Hardware
title: "How to Test Your Hardware with ClickHouse"
---

import SelfManaged from '@site/docs/en/_snippets/_self_managed_only_no_roadmap.md';

<SelfManaged />

You can run a basic ClickHouse performance test on any server without installation of ClickHouse packages.

@ -1,60 +0,0 @@
---
slug: /en/operations/requirements
sidebar_position: 44
sidebar_label: Requirements
---

# Requirements

## CPU

For installation from prebuilt deb packages, use a CPU with x86_64 architecture and support for SSE 4.2 instructions. To run ClickHouse with processors that do not support SSE 4.2 or have AArch64 or PowerPC64LE architecture, you should build ClickHouse from sources.

ClickHouse implements parallel data processing and uses all the hardware resources available. When choosing a processor, take into account that ClickHouse works more efficiently at configurations with a large number of cores but a lower clock rate than at configurations with fewer cores and a higher clock rate. For example, 16 cores with 2600 MHz is preferable to 8 cores with 3600 MHz.

It is recommended to use **Turbo Boost** and **hyper-threading** technologies. It significantly improves performance with a typical workload.

## RAM {#ram}

We recommend using a minimum of 4GB of RAM to perform non-trivial queries. The ClickHouse server can run with a much smaller amount of RAM, but it requires memory for processing queries.

The required volume of RAM depends on:

- The complexity of queries.
- The amount of data that is processed in queries.

To calculate the required volume of RAM, you should estimate the size of temporary data for [GROUP BY](../sql-reference/statements/select/group-by.md#select-group-by-clause), [DISTINCT](../sql-reference/statements/select/distinct.md#select-distinct), [JOIN](../sql-reference/statements/select/join.md#select-join) and other operations you use.

ClickHouse can use external memory for temporary data. See [GROUP BY in External Memory](../sql-reference/statements/select/group-by.md#select-group-by-in-external-memory) for details.

## Swap File {#swap-file}

Disable the swap file for production environments.

## Storage Subsystem {#storage-subsystem}

You need to have 2GB of free disk space to install ClickHouse.

The volume of storage required for your data should be calculated separately. Assessment should include:

- Estimation of the data volume.

    You can take a sample of the data and get the average size of a row from it. Then multiply the value by the number of rows you plan to store.

- The data compression coefficient.

    To estimate the data compression coefficient, load a sample of your data into ClickHouse, and compare the actual size of the data with the size of the table stored. For example, clickstream data is usually compressed by 6-10 times.

To calculate the final volume of data to be stored, apply the compression coefficient to the estimated data volume. If you plan to store data in several replicas, then multiply the estimated volume by the number of replicas.

## Network {#network}

If possible, use networks of 10G or higher class.

The network bandwidth is critical for processing distributed queries with a large amount of intermediate data. Besides, network speed affects replication processes.

## Software {#software}

ClickHouse is developed primarily for the Linux family of operating systems. The recommended Linux distribution is Ubuntu. The `tzdata` package should be installed in the system.

ClickHouse can also work in other operating system families. See details in the [install guide](../getting-started/install.md) section of the documentation.

@ -2,6 +2,7 @@
slug: /en/operations/server-configuration-parameters/
sidebar_position: 54
sidebar_label: Server Configuration Parameters
pagination_next: en/operations/server-configuration-parameters/settings
---

# Server Configuration Parameters

@ -666,6 +666,7 @@ Keys:
- `http_proxy` - Configure HTTP proxy for sending crash reports.
- `debug` - Sets the Sentry client into debug mode.
- `tmp_path` - Filesystem path for temporary crash report state.
- `environment` - An arbitrary name of an environment in which the ClickHouse server is running. It will be mentioned in each crash report. The default value is `test` or `prod` depending on the version of ClickHouse.

**Recommended way to use**

@ -1501,6 +1502,21 @@ If not set, [tmp_path](#tmp-path) is used, otherwise it is ignored.
- Policy should have exactly one volume with local disks.
:::

## max_temporary_data_on_disk_size {#max_temporary_data_on_disk_size}

Limit the amount of disk space consumed by temporary files in `tmp_path` for the server.
Queries that exceed this limit will fail with an exception.

Default value: `0`.

**See also**

- [max_temporary_data_on_disk_size_for_user](../../operations/settings/query-complexity.md#settings_max_temporary_data_on_disk_size_for_user)
- [max_temporary_data_on_disk_size_for_query](../../operations/settings/query-complexity.md#settings_max_temporary_data_on_disk_size_for_query)
- [tmp_path](#tmp-path)
- [tmp_policy](#tmp-policy)
- [max_server_memory_usage](#max_server_memory_usage)

## uncompressed_cache_size {#server-settings-uncompressed_cache_size}

Cache size (in bytes) for uncompressed data used by table engines from the [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md).

@ -2,6 +2,7 @@
sidebar_label: Settings
sidebar_position: 51
slug: /en/operations/settings/
pagination_next: en/operations/settings/settings
---

# Settings Overview

@ -313,4 +313,19 @@ When inserting data, ClickHouse calculates the number of partitions in the inser

> “Too many partitions for single INSERT block (more than” + toString(max_parts) + “). The limit is controlled by ‘max_partitions_per_insert_block’ setting. A large number of partitions is a common misconception. It will lead to severe negative performance impact, including slow server startup, slow INSERT queries and slow SELECT queries. Recommended total number of partitions for a table is under 1000..10000. Please note, that partitioning is not intended to speed up SELECT queries (ORDER BY key is sufficient to make range queries fast). Partitions are intended for data manipulation (DROP PARTITION, etc).”
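
If a one-off INSERT legitimately has to touch many partitions (for example, a historical backfill), the limit can be raised for that session. The setting name is real, but the value and table names below are hypothetical:

```sql
-- Sketch: temporarily raise the per-INSERT partition limit for a backfill.
SET max_partitions_per_insert_block = 1000;
INSERT INTO backfill_target SELECT * FROM staging_source;
```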

## max_temporary_data_on_disk_size_for_user {#settings_max_temporary_data_on_disk_size_for_user}

The maximum amount of data consumed by temporary files on disk in bytes for all concurrently running user queries.
Zero means unlimited.

Default value: 0.

## max_temporary_data_on_disk_size_for_query {#settings_max_temporary_data_on_disk_size_for_query}

The maximum amount of data consumed by temporary files on disk in bytes for all concurrently running queries.
Zero means unlimited.

Default value: 0.
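
Both limits are ordinary settings, so they can also be applied per session; the 10 GB value below is purely illustrative:

```sql
-- Sketch: cap temporary on-disk data for queries in this session at ~10 GB.
-- A spilling sort or GROUP BY that exceeds the cap fails with an exception.
SET max_temporary_data_on_disk_size_for_query = 10000000000;
```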

[Original article](https://clickhouse.com/docs/en/operations/settings/query_complexity/) <!--hide-->

@ -35,7 +35,7 @@ Structure of the `users` section:
        <database_name>
            <table_name>
                <filter>expression</filter>
            <table_name>
            </table_name>
        </database_name>
    </databases>
</user_name>

@ -668,7 +668,7 @@ log_query_views=1

## log_formatted_queries {#settings-log-formatted-queries}

Allows to log formatted queries to the [system.query_log](../../operations/system-tables/query_log.md) system table.
Allows to log formatted queries to the [system.query_log](../../operations/system-tables/query_log.md) system table (populates `formatted_query` column in the [system.query_log](../../operations/system-tables/query_log.md)).

Possible values:

@ -1599,7 +1599,7 @@ Right now it requires `optimize_skip_unused_shards` (the reason behind this is t

## optimize_throw_if_noop {#setting-optimize_throw_if_noop}

Enables or disables throwing an exception if an [OPTIMIZE](../../sql-reference/statements/misc.md#misc_operations-optimize) query didn’t perform a merge.
Enables or disables throwing an exception if an [OPTIMIZE](../../sql-reference/statements/optimize.md) query didn’t perform a merge.

By default, `OPTIMIZE` returns successfully even if it didn’t do anything. This setting lets you differentiate these situations and get the reason in an exception message.
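
As a minimal illustration (`my_table` is a hypothetical name):

```sql
SET optimize_throw_if_noop = 1;
-- With the setting enabled, this throws an exception instead of silently
-- succeeding when no merge is actually performed.
OPTIMIZE TABLE my_table FINAL;
```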

@ -2629,12 +2629,6 @@ Sets the maximum number of inserted blocks after which mergeable blocks are drop

Default value: `64`.

## temporary_live_view_timeout {#temporary-live-view-timeout}

Sets the interval in seconds after which [live view](../../sql-reference/statements/create/view.md#live-view) with timeout is deleted.

Default value: `5`.

## periodic_live_view_refresh {#periodic-live-view-refresh}

Sets the interval in seconds after which periodically refreshed [live view](../../sql-reference/statements/create/view.md#live-view) is forced to refresh.
@ -3908,6 +3902,13 @@ Controls validation of UTF-8 sequences in JSON output formats, doesn't impact fo

Disabled by default.

### format_json_object_each_row_column_for_object_name {#format_json_object_each_row_column_for_object_name}

The name of the column that will be used for storing/writing object names in [JSONObjectEachRow](../../interfaces/formats.md#jsonobjecteachrow) format.
The column type should be String. If the value is empty, the default names `row_{i}` will be used for object names.

Default value: ''.

## TSV format settings {#tsv-format-settings}

### input_format_tsv_empty_as_default {#input_format_tsv_empty_as_default}

@ -5,6 +5,9 @@ sidebar_label: Secured Communication with Zookeeper
---

# Optional secured communication between ClickHouse and Zookeeper
import SelfManaged from '@site/docs/en/_snippets/_self_managed_only_automated.md';

<SelfManaged />

You should specify `ssl.keyStore.location`, `ssl.keyStore.password` and `ssl.trustStore.location`, `ssl.trustStore.password` for communication with the ClickHouse client over SSL. These options are available from ZooKeeper version 3.5.2.

@ -5,7 +5,7 @@ slug: /en/operations/system-tables/columns

Contains information about columns in all the tables.

You can use this table to get information similar to the [DESCRIBE TABLE](../../sql-reference/statements/misc.md#misc-describe-table) query, but for multiple tables at once.
You can use this table to get information similar to the [DESCRIBE TABLE](../../sql-reference/statements/describe-table.md) query, but for multiple tables at once.

Columns from [temporary tables](../../sql-reference/statements/create/table.md#temporary-tables) are visible in `system.columns` only in the sessions where they were created. They are shown with an empty `database` field.
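
As an illustration, a DESCRIBE-like listing for one table can be fetched with a plain SELECT; `my_table` is a hypothetical name:

```sql
-- Sketch: column names and types driven by system.columns.
SELECT name, type, default_expression
FROM system.columns
WHERE database = 'default' AND table = 'my_table';
```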

@ -11,6 +11,7 @@ Columns:
- `path` ([String](../../sql-reference/data-types/string.md)) — Path to the mount point in the file system.
- `free_space` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Free space on disk in bytes.
- `total_space` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Disk volume in bytes.
- `unreserved_space` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Free space which is not taken by reservations (`free_space` minus the size of reservations taken by merges, inserts, and other disk write operations currently running).
- `keep_free_space` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Amount of disk space that should stay free on disk in bytes. Defined in the `keep_free_space_bytes` parameter of disk configuration.

**Example**

@ -4,6 +4,9 @@ sidebar_position: 58
sidebar_label: Usage Recommendations
title: "Usage Recommendations"
---
import SelfManaged from '@site/docs/en/_snippets/_self_managed_only_automated.md';

<SelfManaged />

## CPU Scaling Governor

@ -294,6 +294,53 @@ Result:

Notice how only a portion of the data was properly decrypted, and the rest is gibberish since either `mode`, `key`, or `iv` were different upon encryption.

## tryDecrypt

Similar to `decrypt`, but returns NULL if decryption fails because of using the wrong key.

**Examples**

Let's create a table where `user_id` is the unique user id, `encrypted` is an encrypted string field, and `iv` is an initialization vector for decryption/encryption. Assume that users know their id and the key to decrypt the encrypted field:

```sql
CREATE TABLE decrypt_null (
  dt DateTime,
  user_id UInt32,
  encrypted String,
  iv String
) ENGINE = Memory;
```

Insert some data:

```sql
INSERT INTO decrypt_null VALUES
('2022-08-02 00:00:00', 1, encrypt('aes-256-gcm', 'value1', 'keykeykeykeykeykeykeykeykeykey01', 'iv1'), 'iv1'),
('2022-09-02 00:00:00', 2, encrypt('aes-256-gcm', 'value2', 'keykeykeykeykeykeykeykeykeykey02', 'iv2'), 'iv2'),
('2022-09-02 00:00:01', 3, encrypt('aes-256-gcm', 'value3', 'keykeykeykeykeykeykeykeykeykey03', 'iv3'), 'iv3');
```

Query:

```sql
SELECT
  dt,
  user_id,
  tryDecrypt('aes-256-gcm', encrypted, 'keykeykeykeykeykeykeykeykeykey02', iv) AS value
FROM decrypt_null
ORDER BY user_id ASC
```

Result:

```
┌──────────────────dt─┬─user_id─┬─value──┐
│ 2022-08-02 00:00:00 │ 1 │ ᴺᵁᴸᴸ │
│ 2022-09-02 00:00:00 │ 2 │ value2 │
│ 2022-09-02 00:00:01 │ 3 │ ᴺᵁᴸᴸ │
└─────────────────────┴─────────┴────────┘
```

## aes_decrypt_mysql

Compatible with MySQL encryption and decrypts data encrypted with the [AES_ENCRYPT](https://dev.mysql.com/doc/refman/8.0/en/encryption-functions.html#function_aes-encrypt) function.

@ -127,7 +127,7 @@ Adds a comment to the column. If the `IF EXISTS` clause is specified, the query

Each column can have one comment. If a comment already exists for the column, a new comment overwrites the previous comment.

Comments are stored in the `comment_expression` column returned by the [DESCRIBE TABLE](../../../sql-reference/statements/misc.md#misc-describe-table) query.
Comments are stored in the `comment_expression` column returned by the [DESCRIBE TABLE](../../../sql-reference/statements/describe-table.md) query.

Example:

@ -253,7 +253,7 @@ The `ALTER` query lets you create and delete separate elements (columns) in nest

There is no support for deleting columns in the primary key or the sampling key (columns that are used in the `ENGINE` expression). Changing the type for columns that are included in the primary key is only possible if this change does not cause the data to be modified (for example, you are allowed to add values to an Enum or to change a type from `DateTime` to `UInt32`).

If the `ALTER` query is not sufficient to make the table changes you need, you can create a new table, copy the data to it using the [INSERT SELECT](../../../sql-reference/statements/insert-into.md#insert_query_insert-select) query, then switch the tables using the [RENAME](../../../sql-reference/statements/misc.md#misc_operations-rename) query and delete the old table. You can use the [clickhouse-copier](../../../operations/utilities/clickhouse-copier.md) as an alternative to the `INSERT SELECT` query.
If the `ALTER` query is not sufficient to make the table changes you need, you can create a new table, copy the data to it using the [INSERT SELECT](../../../sql-reference/statements/insert-into.md#insert_query_insert-select) query, then switch the tables using the [RENAME](../../../sql-reference/statements/rename.md#rename-table) query and delete the old table. You can use the [clickhouse-copier](../../../operations/utilities/clickhouse-copier.md) as an alternative to the `INSERT SELECT` query.

The `ALTER` query blocks all reads and writes for the table. In other words, if a long `SELECT` is running at the time of the `ALTER` query, the `ALTER` query will wait for it to complete. At the same time, all new queries to the same table will wait while this `ALTER` is running.

@ -44,7 +44,7 @@ For `*MergeTree` tables mutations execute by **rewriting whole data parts**. The

Mutations are totally ordered by their creation order and are applied to each part in that order. Mutations are also partially ordered with `INSERT INTO` queries: data that was inserted into the table before the mutation was submitted will be mutated and data that was inserted after that will not be mutated. Note that mutations do not block inserts in any way.

A mutation query returns immediately after the mutation entry is added (in case of replicated tables to ZooKeeper, for non-replicated tables - to the filesystem). The mutation itself executes asynchronously using the system profile settings. To track the progress of mutations you can use the [`system.mutations`](../../../operations/system-tables/mutations.md#system_tables-mutations) table. A mutation that was successfully submitted will continue to execute even if ClickHouse servers are restarted. There is no way to roll back the mutation once it is submitted, but if the mutation is stuck for some reason it can be cancelled with the [`KILL MUTATION`](../../../sql-reference/statements/misc.md#kill-mutation) query.
A mutation query returns immediately after the mutation entry is added (in case of replicated tables to ZooKeeper, for non-replicated tables - to the filesystem). The mutation itself executes asynchronously using the system profile settings. To track the progress of mutations you can use the [`system.mutations`](../../../operations/system-tables/mutations.md#system_tables-mutations) table. A mutation that was successfully submitted will continue to execute even if ClickHouse servers are restarted. There is no way to roll back the mutation once it is submitted, but if the mutation is stuck for some reason it can be cancelled with the [`KILL MUTATION`](../../../sql-reference/statements/kill.md#kill-mutation) query.
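
For instance, a stuck mutation can be located and cancelled like this (the table name and mutation id are hypothetical):

```sql
-- Sketch: inspect mutations that have not finished yet.
SELECT mutation_id, command, is_done, latest_fail_reason
FROM system.mutations
WHERE database = 'default' AND table = 'my_table' AND NOT is_done;

-- Cancel a stuck mutation by its id.
KILL MUTATION WHERE database = 'default' AND table = 'my_table' AND mutation_id = 'mutation_3.txt';
```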
Entries for finished mutations are not deleted right away (the number of preserved entries is determined by the `finished_mutations_to_keep` storage engine parameter). Older mutation entries are deleted.
|
||||
|
||||
|
@ -319,7 +319,7 @@ You can specify the partition expression in `ALTER ... PARTITION` queries in dif
|
||||
|
||||
Usage of quotes when specifying the partition depends on the type of partition expression. For example, for the `String` type, you have to specify its name in quotes (`'`). For the `Date` and `Int*` types no quotes are needed.
|
||||
|
||||
All the rules above are also true for the [OPTIMIZE](../../../sql-reference/statements/misc.md#misc_operations-optimize) query. If you need to specify the only partition when optimizing a non-partitioned table, set the expression `PARTITION tuple()`. For example:
|
||||
All the rules above are also true for the [OPTIMIZE](../../../sql-reference/statements/optimize.md) query. If you need to specify the only partition when optimizing a non-partitioned table, set the expression `PARTITION tuple()`. For example:
|
||||
|
||||
``` sql
|
||||
OPTIMIZE TABLE table_not_partitioned PARTITION tuple() FINAL;
|
||||
|
@ -1,7 +1,7 @@
|
||||
---
|
||||
slug: /en/sql-reference/statements/check-table
|
||||
sidebar_position: 41
|
||||
sidebar_label: CHECK
|
||||
sidebar_label: CHECK TABLE
|
||||
title: "CHECK TABLE Statement"
|
||||
---
|
||||
|
||||
|
@ -166,23 +166,6 @@ SELECT * FROM [db.]live_view WHERE ...

You can force live view refresh using the `ALTER LIVE VIEW [db.]table_name REFRESH` statement.

### WITH TIMEOUT Clause

When a live view is created with a `WITH TIMEOUT` clause then the live view will be dropped automatically after the specified number of seconds elapse since the end of the last [WATCH](../../../sql-reference/statements/watch.md) query that was watching the live view.

```sql
CREATE LIVE VIEW [db.]table_name WITH TIMEOUT [value_in_sec] AS SELECT ...
```

If the timeout value is not specified then the value specified by the [temporary_live_view_timeout](../../../operations/settings/settings.md#temporary-live-view-timeout) setting is used.

**Example:**

```sql
CREATE TABLE mt (x Int8) Engine = MergeTree ORDER BY x;
CREATE LIVE VIEW lv WITH TIMEOUT 15 AS SELECT sum(x) FROM mt;
```

### WITH REFRESH Clause

When a live view is created with a `WITH REFRESH` clause then it will be automatically refreshed after the specified number of seconds elapse since the last refresh or trigger.
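A minimal illustration, following the syntax above:

```sql
CREATE LIVE VIEW lv WITH REFRESH 5 AS SELECT now();
```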
@ -212,20 +195,6 @@ WATCH lv
└─────────────────────┴──────────┘
```

You can combine `WITH TIMEOUT` and `WITH REFRESH` clauses using an `AND` clause.

```sql
CREATE LIVE VIEW [db.]table_name WITH TIMEOUT [value_in_sec] AND REFRESH [value_in_sec] AS SELECT ...
```

**Example:**

```sql
CREATE LIVE VIEW lv WITH TIMEOUT 15 AND REFRESH 5 AS SELECT now();
```

After 15 sec the live view will be automatically dropped if there are no active `WATCH` queries.

```sql
WATCH lv
```
@ -1,7 +1,7 @@
---
slug: /en/sql-reference/statements/describe-table
sidebar_position: 42
sidebar_label: DESCRIBE
sidebar_label: DESCRIBE TABLE
title: "DESCRIBE TABLE"
---
@ -221,7 +221,7 @@ By default, a user account or a role has no privileges.

If a user or a role has no privileges, it is displayed as [NONE](#grant-none) privilege.

Some queries by their implementation require a set of privileges. For example, to execute the [RENAME](../../sql-reference/statements/misc.md#misc_operations-rename) query you need the following privileges: `SELECT`, `CREATE TABLE`, `INSERT` and `DROP TABLE`.
Some queries by their implementation require a set of privileges. For example, to execute the [RENAME](../../sql-reference/statements/rename.md) query you need the following privileges: `SELECT`, `CREATE TABLE`, `INSERT` and `DROP TABLE`.

### SELECT

@ -304,11 +304,11 @@ Examples of how this hierarchy is treated:

- The `MODIFY SETTING` privilege allows modifying table engine settings. It does not affect settings or server configuration parameters.
- The `ATTACH` operation needs the [CREATE](#grant-create) privilege.
- The `DETACH` operation needs the [DROP](#grant-drop) privilege.
- To stop mutation by the [KILL MUTATION](../../sql-reference/statements/misc.md#kill-mutation) query, you need to have a privilege to start this mutation. For example, if you want to stop the `ALTER UPDATE` query, you need the `ALTER UPDATE`, `ALTER TABLE`, or `ALTER` privilege.
- To stop mutation by the [KILL MUTATION](../../sql-reference/statements/kill.md#kill-mutation) query, you need to have a privilege to start this mutation. For example, if you want to stop the `ALTER UPDATE` query, you need the `ALTER UPDATE`, `ALTER TABLE`, or `ALTER` privilege.
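For instance, granting the mutation privilege (which also allows killing that mutation) might look like this (a sketch; the user and table names are hypothetical):

```sql
GRANT ALTER UPDATE ON db.table TO john;
```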
### CREATE

Allows executing [CREATE](../../sql-reference/statements/create/index.md) and [ATTACH](../../sql-reference/statements/misc.md#attach) DDL-queries according to the following hierarchy of privileges:
Allows executing [CREATE](../../sql-reference/statements/create/index.md) and [ATTACH](../../sql-reference/statements/attach.md) DDL-queries according to the following hierarchy of privileges:

- `CREATE`. Level: `GROUP`
- `CREATE DATABASE`. Level: `DATABASE`

@ -323,7 +323,7 @@ Allows executing [CREATE](../../sql-reference/statements/create/index.md) and [A

### DROP

Allows executing [DROP](../../sql-reference/statements/misc.md#drop) and [DETACH](../../sql-reference/statements/misc.md#detach) queries according to the following hierarchy of privileges:
Allows executing [DROP](../../sql-reference/statements/drop.md) and [DETACH](../../sql-reference/statements/detach.md) queries according to the following hierarchy of privileges:

- `DROP`. Level: `GROUP`
- `DROP DATABASE`. Level: `DATABASE`

@ -333,13 +333,13 @@ Allows executing [DROP](../../sql-reference/statements/misc.md#drop) and [DETACH

### TRUNCATE

Allows executing [TRUNCATE](../../sql-reference/statements/misc.md#truncate-statement) queries.
Allows executing [TRUNCATE](../../sql-reference/statements/truncate.md) queries.

Privilege level: `TABLE`.

### OPTIMIZE

Allows executing [OPTIMIZE TABLE](../../sql-reference/statements/misc.md#misc_operations-optimize) queries.
Allows executing [OPTIMIZE TABLE](../../sql-reference/statements/optimize.md) queries.

Privilege level: `TABLE`.

@ -359,7 +359,7 @@ A user has the `SHOW` privilege if it has any other privilege concerning the spe

### KILL QUERY

Allows executing [KILL](../../sql-reference/statements/misc.md#kill-query-statement) queries according to the following hierarchy of privileges:
Allows executing [KILL](../../sql-reference/statements/kill.md#kill-query) queries according to the following hierarchy of privileges:

Privilege level: `GLOBAL`.
@ -1,14 +0,0 @@
---
slug: /ru/development/browse-code
sidebar_position: 72
sidebar_label: "Browsing the ClickHouse source code"
---

# Browsing the ClickHouse source code {#navigatsiia-po-kodu-clickhouse}

For online code navigation, **Woboq** is available [here](https://clickhouse.com/codebrowser/ClickHouse/src/index.html). It offers convenient navigation between source files, semantic highlighting, hints, indexing, and search. The code snapshot is updated daily.

You can also browse the sources on [GitHub](https://github.com/ClickHouse/ClickHouse).

If you are wondering which IDE to choose for working on ClickHouse, we recommend CLion, QT Creator, VSCode, or KDevelop (with some caveats). You can use your favorite IDE; Vim and Emacs also count.
@ -488,7 +488,7 @@ FORMAT TSV

max_insert_block_size 1048576 0 "The maximum block size for insertion, if we control the creation of blocks for insertion."
```

Optionally you can [OPTIMIZE](../sql-reference/statements/misc.md#misc_operations-optimize) the tables after import. Tables that are configured with an engine from MergeTree-family always do merges of data parts in the background to optimize data storage (or at least check if it makes sense). These queries force the table engine to do storage optimization right now instead of some time later:
Optionally you can [OPTIMIZE](../sql-reference/statements/optimize.md) the tables after import. Tables that are configured with an engine from MergeTree-family always do merges of data parts in the background to optimize data storage (or at least check if it makes sense). These queries force the table engine to do storage optimization right now instead of some time later:

``` bash
clickhouse-client --query "OPTIMIZE TABLE tutorial.hits_v1 FINAL"
```
@ -34,6 +34,7 @@ sidebar_label: "Client libraries from third-party developers"

- [node-clickhouse](https://github.com/apla/node-clickhouse)
- [nestjs-clickhouse](https://github.com/depyronick/nestjs-clickhouse)
- [clickhouse-client](https://github.com/depyronick/clickhouse-client)
- [node-clickhouse-orm](https://github.com/zimv/node-clickhouse-orm)
- Perl
  - [perl-DBD-ClickHouse](https://github.com/elcamlost/perl-DBD-ClickHouse)
  - [HTTP-ClickHouse](https://metacpan.org/release/HTTP-ClickHouse)
@ -64,7 +64,7 @@ ClickHouse supports access control based on...

- [CREATE USER](../sql-reference/statements/create/user.md#create-user-statement)
- [ALTER USER](../sql-reference/statements/alter/user.md)
- [DROP USER](../sql-reference/statements/misc.md#drop-user-statement)
- [DROP USER](../sql-reference/statements/drop.md#drop-user)
- [SHOW CREATE USER](../sql-reference/statements/show.md#show-create-user-statement)

### Applying settings {#access-control-settings-applying}

@ -91,9 +91,9 @@ ClickHouse supports access control based on...

- [CREATE ROLE](../sql-reference/statements/create/index.md#create-role-statement)
- [ALTER ROLE](../sql-reference/statements/alter/role.md)
- [DROP ROLE](../sql-reference/statements/misc.md#drop-role-statement)
- [SET ROLE](../sql-reference/statements/misc.md#set-role-statement)
- [SET DEFAULT ROLE](../sql-reference/statements/misc.md#set-default-role-statement)
- [DROP ROLE](../sql-reference/statements/drop.md#drop-role)
- [SET ROLE](../sql-reference/statements/set-role.md)
- [SET DEFAULT ROLE](../sql-reference/statements/set-role.md#set-default-role)
- [SHOW CREATE ROLE](../sql-reference/statements/show.md#show-create-role-statement)

Privileges can be granted to a role with the [GRANT](../sql-reference/statements/grant.md) query. To revoke privileges from a role, ClickHouse provides the [REVOKE](../sql-reference/statements/revoke.md) query.

@ -106,7 +106,7 @@ ClickHouse supports access control based on...

- [CREATE ROW POLICY](../sql-reference/statements/create/index.md#create-row-policy-statement)
- [ALTER ROW POLICY](../sql-reference/statements/alter/row-policy.md)
- [DROP ROW POLICY](../sql-reference/statements/misc.md#drop-row-policy-statement)
- [DROP ROW POLICY](../sql-reference/statements/drop.md#drop-row-policy)
- [SHOW CREATE ROW POLICY](../sql-reference/statements/show.md#show-create-row-policy-statement)

@ -118,7 +118,7 @@ ClickHouse supports access control based on...

- [CREATE SETTINGS PROFILE](../sql-reference/statements/create/index.md#create-settings-profile-statement)
- [ALTER SETTINGS PROFILE](../sql-reference/statements/alter/settings-profile.md)
- [DROP SETTINGS PROFILE](../sql-reference/statements/misc.md#drop-settings-profile-statement)
- [DROP SETTINGS PROFILE](../sql-reference/statements/drop.md#drop-settings-profile)
- [SHOW CREATE SETTINGS PROFILE](../sql-reference/statements/show.md#show-create-settings-profile-statement)

@ -132,7 +132,7 @@ ClickHouse supports access control based on...

- [CREATE QUOTA](../sql-reference/statements/create/index.md#create-quota-statement)
- [ALTER QUOTA](../sql-reference/statements/alter/quota.md)
- [DROP QUOTA](../sql-reference/statements/misc.md#drop-quota-statement)
- [DROP QUOTA](../sql-reference/statements/drop.md#drop-quota)
- [SHOW CREATE QUOTA](../sql-reference/statements/show.md#show-create-quota-statement)
|
@ -624,6 +624,7 @@ ClickHouse поддерживает динамическое изменение
|
||||
- `http_proxy` - Настройка HTTP proxy для отсылки отчетов о сбоях.
|
||||
- `debug` - Настроить клиентскую библиотеку Sentry в debug режим.
|
||||
- `tmp_path` - Путь в файловой системе для временного хранения состояния отчетов о сбоях перед отправкой на сервер Sentry.
|
||||
- `environment` - Произвольное название среды, в которой запущен сервер ClickHouse, которое будет упомянуто в каждом отчете от сбое. По умолчанию имеет значение `test` или `prod` в зависимости от версии ClickHouse.
|
||||
|
||||
**Рекомендованные настройки**
|
||||
|
||||
|
@ -1986,7 +1986,7 @@ SELECT * FROM test_table

## optimize_throw_if_noop {#setting-optimize_throw_if_noop}

Enables or disables throwing an exception when an [OPTIMIZE](../../sql-reference/statements/misc.md#misc_operations-optimize) query does not perform a merge.
Enables or disables throwing an exception when an [OPTIMIZE](../../sql-reference/statements/optimize.md) query does not perform a merge.

By default, `OPTIMIZE` completes successfully even when it did nothing. This setting lets you distinguish such cases and reports the reason in an explanatory exception message.
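A quick way to see the effect (a sketch; the table name is hypothetical):

```sql
SET optimize_throw_if_noop = 1;
OPTIMIZE TABLE hits FINAL; -- throws an exception if no merge was performed
```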
@ -3258,12 +3258,6 @@ SELECT * FROM test2;

Default value: `64`.

## temporary_live_view_timeout {#temporary-live-view-timeout}

Sets the time in seconds after which a [LIVE VIEW](../../sql-reference/statements/create/view.md#live-view) is deleted.

Default value: `5`.

## periodic_live_view_refresh {#periodic-live-view-refresh}

Sets the time in seconds after which a [LIVE VIEW](../../sql-reference/statements/create/view.md#live-view) with auto-refresh enabled is refreshed.
@ -5,7 +5,7 @@ slug: /ru/operations/system-tables/columns

Contains information about the columns of all tables.

Using this table, you can get information similar to the [DESCRIBE TABLE](../../sql-reference/statements/misc.md#misc-describe-table) query, but for many tables at once.
Using this table, you can get information similar to the [DESCRIBE TABLE](../../sql-reference/statements/describe-table.md) query, but for many tables at once.
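For instance (a sketch):

```sql
SELECT table, name, type
FROM system.columns
WHERE database = 'default'
LIMIT 10;
```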
Columns of [temporary tables](../../sql-reference/statements/create/table.md#temporary-tables) appear in `system.columns` only in the sessions where those tables were created. The `database` field for such columns is empty.
@ -11,5 +11,6 @@ Contains information about disks defined in the [server configuration]...

- `path` ([String](../../sql-reference/data-types/string.md)) — the path to the mount point in the filesystem.
- `free_space` ([UInt64](../../sql-reference/data-types/int-uint.md)) — free disk space in bytes.
- `total_space` ([UInt64](../../sql-reference/data-types/int-uint.md)) — disk size in bytes.
- `unreserved_space` ([UInt64](../../sql-reference/data-types/int-uint.md)) — free space not taken by reservations (`free_space` minus the space reserved for merges, inserts, and other disk write operations currently running).
- `keep_free_space` ([UInt64](../../sql-reference/data-types/int-uint.md)) — the amount of disk space that must stay free, in bytes. Defined by the `keep_free_space_bytes` parameter of the disk configuration.
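A quick look at these columns (a sketch):

```sql
SELECT name, path, formatReadableSize(free_space) AS free,
    formatReadableSize(unreserved_space) AS unreserved
FROM system.disks;
```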
@ -1053,6 +1053,7 @@ formatDateTime(Time, Format[, Timezone])

| %w | weekday number, starting from Sunday (0-6) | 2 |
| %y | year, last 2 digits (00-99) | 18 |
| %Y | year, 4 digits | 2018 |
| %z | time offset from UTC as +HHMM or -HHMM | -0500 |
| %% | the % character | % |

**Example**
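For instance, the newly added `%z` specifier can be exercised like this (a sketch; the output depends on the server timezone):

```sql
SELECT formatDateTime(now(), '%Y-%m-%d %z');
```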
@ -10,5 +10,4 @@ sidebar_position: 28

- [INSERT INTO](statements/insert-into.md)
- [CREATE](statements/create/index.md)
- [ALTER](statements/alter/index.md#query_language_queries_alter)
- [Other kinds of queries](statements/misc.md)
@ -128,7 +128,7 @@ COMMENT COLUMN [IF EXISTS] name 'Text comment'

Each column can have only one comment. When the query is executed, an existing comment is replaced with the new one.

Comments can be viewed in the `comment_expression` column of the [DESCRIBE TABLE](../misc.md#misc-describe-table) query.
Comments can be viewed in the `comment_expression` column of the [DESCRIBE TABLE](../describe-table.md) query.

Example:
@ -254,7 +254,7 @@ SELECT groupArray(x), groupArray(s) FROM tmp;

Columns that are part of the primary key or the sampling key (in general, those that appear in the `ENGINE` expression) cannot be dropped. Changing the type of a primary key column is only possible if the change does not alter the data (for example, adding a value to an Enum, or changing the type from `DateTime` to `UInt32`, is allowed).

If the `ALTER` query is not sufficient for the table change you need, you can create a new table, copy the data into it with an [INSERT SELECT](../insert-into.md#insert_query_insert-select) query, swap the tables with a [RENAME](../misc.md#misc_operations-rename) query, and drop the old table. As an alternative to `INSERT SELECT`, you can use the [clickhouse-copier](../../../sql-reference/statements/alter/index.md) tool.
If the `ALTER` query is not sufficient for the table change you need, you can create a new table, copy the data into it with an [INSERT SELECT](../insert-into.md#insert_query_insert-select) query, swap the tables with a [RENAME](../rename.md#rename-table) query, and drop the old table. As an alternative to `INSERT SELECT`, you can use the [clickhouse-copier](../../../sql-reference/statements/alter/index.md) tool.
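That workaround, sketched end to end (the table, column, and key names are hypothetical):

```sql
CREATE TABLE hits_new (id UInt64, event_time DateTime) ENGINE = MergeTree ORDER BY id;
INSERT INTO hits_new SELECT id, event_time FROM hits;
RENAME TABLE hits TO hits_old, hits_new TO hits;
DROP TABLE hits_old;
```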
The `ALTER` query blocks all reads and writes for the table. That is, if a long `SELECT` is running when the `ALTER` query is issued, the `ALTER` query waits for it to complete, and during that time all new queries to the same table wait for this `ALTER` to finish.
@ -1,7 +1,7 @@
---
slug: /ru/sql-reference/statements/check-table
sidebar_position: 41
sidebar_label: CHECK
sidebar_label: CHECK TABLE
---

# CHECK TABLE Statement {#check-table}
@ -17,13 +17,13 @@ CREATE ROLE [IF NOT EXISTS | OR REPLACE] name1 [ON CLUSTER cluster_name1] [, nam

## Managing roles {#managing-roles}

A user can be assigned multiple roles. Users can apply the assigned roles in arbitrary combinations with the [SET ROLE](../misc.md#set-role-statement) statement. The resulting scope of privileges is the combination of all privileges of all applied roles. If a user also has privileges granted directly to their account, they are added to the privileges granted through roles.
A user can be assigned multiple roles. Users can apply the assigned roles in arbitrary combinations with the [SET ROLE](../set-role.md) statement. The resulting scope of privileges is the combination of all privileges of all applied roles. If a user also has privileges granted directly to their account, they are added to the privileges granted through roles.

Default roles are applied when the user logs in. You can set the default roles with the [SET DEFAULT ROLE](../misc.md#set-default-role-statement) or [ALTER USER](../alter/index.md#alter-user-statement) statements.
Default roles are applied when the user logs in. You can set the default roles with the [SET DEFAULT ROLE](../set-role.md#set-default-role) or [ALTER USER](../alter/index.md#alter-user-statement) statements.

To revoke a role, use the [REVOKE](../../../sql-reference/statements/revoke.md) statement.

To delete a role, use the [DROP ROLE](../misc.md#drop-role-statement) statement. The deleted role is automatically revoked from all users it was assigned to.
To delete a role, use the [DROP ROLE](../drop.md#drop-role) statement. The deleted role is automatically revoked from all users it was assigned to.

## Examples {#create-role-examples}
@ -156,23 +156,6 @@ SELECT * FROM [db.]live_view WHERE ...

To force a live view to refresh, use the `ALTER LIVE VIEW [db.]table_name REFRESH` query.

### WITH TIMEOUT Clause {#live-view-with-timeout}

A live view created with the `WITH TIMEOUT` parameter is dropped automatically after the specified number of seconds have elapsed since the last [WATCH](../../../sql-reference/statements/watch.md) query applied to this live view.

```sql
CREATE LIVE VIEW [db.]table_name WITH TIMEOUT [value_in_sec] AS SELECT ...
```

If the time interval is not specified, the value of the [temporary_live_view_timeout](../../../operations/settings/settings.md#temporary-live-view-timeout) setting is used.

**Example:**

```sql
CREATE TABLE mt (x Int8) Engine = MergeTree ORDER BY x;
CREATE LIVE VIEW lv WITH TIMEOUT 15 AS SELECT sum(x) FROM mt;
```

### WITH REFRESH Clause {#live-view-with-refresh}

A live view created with the `WITH REFRESH` parameter is refreshed automatically at the specified interval, counted from the moment of the last refresh.
@ -202,20 +185,6 @@ WATCH lv;
└─────────────────────┴──────────┘
```

The `WITH TIMEOUT` and `WITH REFRESH` parameters can be combined using `AND`.

```sql
CREATE LIVE VIEW [db.]table_name WITH TIMEOUT [value_in_sec] AND REFRESH [value_in_sec] AS SELECT ...
```

**Example:**

```sql
CREATE LIVE VIEW lv WITH TIMEOUT 15 AND REFRESH 5 AS SELECT now();
```

After 15 seconds the view is dropped automatically if there is no active `WATCH` query.

```sql
WATCH lv;
```
@ -1,7 +1,7 @@
---
slug: /ru/sql-reference/statements/describe-table
sidebar_position: 42
sidebar_label: DESCRIBE
sidebar_label: DESCRIBE TABLE
---

# DESCRIBE TABLE {#misc-describe-table}
@ -221,7 +221,7 @@ GRANT SELECT(x,y) ON db.table TO john WITH GRANT OPTION

The absence of privileges for a user or role is displayed as the [NONE](#grant-none) privilege.

Some queries require a certain set of privileges by their implementation. For example, to execute the [RENAME](misc.md#misc_operations-rename) query, you need the following privileges: `SELECT`, `CREATE TABLE`, `INSERT`, and `DROP TABLE`.
Some queries require a certain set of privileges by their implementation. For example, to execute the [RENAME](rename.md#rename-table) query, you need the following privileges: `SELECT`, `CREATE TABLE`, `INSERT`, and `DROP TABLE`.

### SELECT {#grant-select}

@ -309,7 +309,7 @@ GRANT INSERT(x,y) ON db.table TO john

### CREATE {#grant-create}

Allows executing [CREATE](../../sql-reference/statements/create/index.md) and [ATTACH](misc.md#attach) DDL queries according to the following hierarchy of privileges:
Allows executing [CREATE](../../sql-reference/statements/create/index.md) and [ATTACH](attach.md) DDL queries according to the following hierarchy of privileges:

- `CREATE`. Level: `GROUP`
- `CREATE DATABASE`. Level: `DATABASE`

@ -324,7 +324,7 @@ GRANT INSERT(x,y) ON db.table TO john

### DROP {#grant-drop}

Allows executing [DROP](misc.md#drop) and [DETACH](misc.md#detach-statement) queries according to the following hierarchy of privileges:
Allows executing [DROP](drop.md) and [DETACH](detach.md) queries according to the following hierarchy of privileges:

- `DROP`. Level: `GROUP`
- `DROP DATABASE`. Level: `DATABASE`

@ -340,7 +340,7 @@ GRANT INSERT(x,y) ON db.table TO john

### OPTIMIZE {#grant-optimize}

Allows executing [OPTIMIZE TABLE](misc.md#misc_operations-optimize) queries.
Allows executing [OPTIMIZE TABLE](optimize.md) queries.

Level: `TABLE`.
@ -1,13 +0,0 @@
---
slug: /zh/development/browse-code
sidebar_position: 63
sidebar_label: "Browse the source code"
---

# Browse the ClickHouse source code {#browse-clickhouse-source-code}

You can use the **Woboq** online code browser, available [here](https://clickhouse.com/codebrowser/ClickHouse/src/index.html). It provides code navigation and semantic highlighting, search, and indexing. The code snapshot is updated daily.

You can also browse the sources on [GitHub](https://github.com/ClickHouse/ClickHouse) as usual.

If you are wondering which IDE to choose, we recommend CLion, QT Creator, VS Code, and KDevelop (with caveats). You can use any IDE you like; Vim and Emacs also count.
@ -1,10 +1,460 @@
---
slug: /zh/getting-started/example-datasets/brown-benchmark
sidebar_label: Brown University Benchmark
description: A new analytical benchmark for machine-generated log data
title: "Brown University Benchmark"
sidebar_label: Brown University Benchmark
description: A new analytical benchmark for machine-generated log data
title: "Brown University Benchmark"
---

import Content from '@site/docs/en/getting-started/example-datasets/brown-benchmark.md';
`MgBench` is a new analytical benchmark for machine-generated log data, by [Andrew Crotty](http://cs.brown.edu/people/acrotty/).

<Content />
Download the data:

```bash
wget https://datasets.clickhouse.com/mgbench{1..3}.csv.xz
```

Unpack the data:

```bash
xz -v -d mgbench{1..3}.csv.xz
```

Create the database and tables:

```sql
CREATE DATABASE mgbench;
```

```sql
USE mgbench;
```

```sql
CREATE TABLE mgbench.logs1 (
  log_time      DateTime,
  machine_name  LowCardinality(String),
  machine_group LowCardinality(String),
  cpu_idle      Nullable(Float32),
  cpu_nice      Nullable(Float32),
  cpu_system    Nullable(Float32),
  cpu_user      Nullable(Float32),
  cpu_wio       Nullable(Float32),
  disk_free     Nullable(Float32),
  disk_total    Nullable(Float32),
  part_max_used Nullable(Float32),
  load_fifteen  Nullable(Float32),
  load_five     Nullable(Float32),
  load_one      Nullable(Float32),
  mem_buffers   Nullable(Float32),
  mem_cached    Nullable(Float32),
  mem_free      Nullable(Float32),
  mem_shared    Nullable(Float32),
  swap_free     Nullable(Float32),
  bytes_in      Nullable(Float32),
  bytes_out     Nullable(Float32)
)
ENGINE = MergeTree()
ORDER BY (machine_group, machine_name, log_time);
```

```sql
CREATE TABLE mgbench.logs2 (
  log_time    DateTime,
  client_ip   IPv4,
  request     String,
  status_code UInt16,
  object_size UInt64
)
ENGINE = MergeTree()
ORDER BY log_time;
```

```sql
CREATE TABLE mgbench.logs3 (
  log_time     DateTime64,
  device_id    FixedString(15),
  device_name  LowCardinality(String),
  device_type  LowCardinality(String),
  device_floor UInt8,
  event_type   LowCardinality(String),
  event_unit   FixedString(1),
  event_value  Nullable(Float32)
)
ENGINE = MergeTree()
ORDER BY (event_type, log_time);
```

Insert the data:

```bash
clickhouse-client --query "INSERT INTO mgbench.logs1 FORMAT CSVWithNames" < mgbench1.csv
clickhouse-client --query "INSERT INTO mgbench.logs2 FORMAT CSVWithNames" < mgbench2.csv
clickhouse-client --query "INSERT INTO mgbench.logs3 FORMAT CSVWithNames" < mgbench3.csv
```

## Run the benchmark queries:

```sql
USE mgbench;
```

```sql
-- Q1.1: What is the CPU/network utilization for each web server since midnight?

SELECT machine_name,
       MIN(cpu) AS cpu_min,
       MAX(cpu) AS cpu_max,
       AVG(cpu) AS cpu_avg,
       MIN(net_in) AS net_in_min,
       MAX(net_in) AS net_in_max,
       AVG(net_in) AS net_in_avg,
       MIN(net_out) AS net_out_min,
       MAX(net_out) AS net_out_max,
       AVG(net_out) AS net_out_avg
FROM (
  SELECT machine_name,
         COALESCE(cpu_user, 0.0) AS cpu,
         COALESCE(bytes_in, 0.0) AS net_in,
         COALESCE(bytes_out, 0.0) AS net_out
  FROM logs1
  WHERE machine_name IN ('anansi','aragog','urd')
    AND log_time >= TIMESTAMP '2017-01-11 00:00:00'
) AS r
GROUP BY machine_name;
```

```sql
-- Q1.2: Which computer lab machines have been offline in the past day?

SELECT machine_name,
       log_time
FROM logs1
WHERE (machine_name LIKE 'cslab%' OR
       machine_name LIKE 'mslab%')
  AND load_one IS NULL
  AND log_time >= TIMESTAMP '2017-01-10 00:00:00'
ORDER BY machine_name,
         log_time;
```

```sql
-- Q1.3: What are the hourly average metrics during the past 10 days for a specific workstation?

SELECT dt,
       hr,
       AVG(load_fifteen) AS load_fifteen_avg,
       AVG(load_five) AS load_five_avg,
       AVG(load_one) AS load_one_avg,
       AVG(mem_free) AS mem_free_avg,
       AVG(swap_free) AS swap_free_avg
FROM (
  SELECT CAST(log_time AS DATE) AS dt,
         EXTRACT(HOUR FROM log_time) AS hr,
         load_fifteen,
         load_five,
         load_one,
         mem_free,
         swap_free
  FROM logs1
  WHERE machine_name = 'babbage'
    AND load_fifteen IS NOT NULL
    AND load_five IS NOT NULL
    AND load_one IS NOT NULL
    AND mem_free IS NOT NULL
    AND swap_free IS NOT NULL
    AND log_time >= TIMESTAMP '2017-01-01 00:00:00'
) AS r
GROUP BY dt,
         hr
ORDER BY dt,
         hr;
```

```sql
-- Q1.4: Over one month, how often was each server blocked on disk I/O?

SELECT machine_name,
       COUNT(*) AS spikes
FROM logs1
WHERE machine_group = 'Servers'
  AND cpu_wio > 0.99
  AND log_time >= TIMESTAMP '2016-12-01 00:00:00'
  AND log_time < TIMESTAMP '2017-01-01 00:00:00'
GROUP BY machine_name
ORDER BY spikes DESC
LIMIT 10;
```

```sql
-- Q1.5: Which externally reachable VMs have run low on memory?

SELECT machine_name,
       dt,
       MIN(mem_free) AS mem_free_min
FROM (
  SELECT machine_name,
         CAST(log_time AS DATE) AS dt,
         mem_free
  FROM logs1
  WHERE machine_group = 'DMZ'
    AND mem_free IS NOT NULL
) AS r
GROUP BY machine_name,
         dt
HAVING MIN(mem_free) < 10000
ORDER BY machine_name,
         dt;
```

```sql
-- Q1.6: What is the total hourly network traffic across all file servers?

SELECT dt,
       hr,
       SUM(net_in) AS net_in_sum,
       SUM(net_out) AS net_out_sum,
       SUM(net_in) + SUM(net_out) AS both_sum
FROM (
  SELECT CAST(log_time AS DATE) AS dt,
         EXTRACT(HOUR FROM log_time) AS hr,
         COALESCE(bytes_in, 0.0) / 1000000000.0 AS net_in,
         COALESCE(bytes_out, 0.0) / 1000000000.0 AS net_out
  FROM logs1
  WHERE machine_name IN ('allsorts','andes','bigred','blackjack','bonbon',
      'cadbury','chiclets','cotton','crows','dove','fireball','hearts','huey',
      'lindt','milkduds','milkyway','mnm','necco','nerds','orbit','peeps',
      'poprocks','razzles','runts','smarties','smuggler','spree','stride',
      'tootsie','trident','wrigley','york')
) AS r
GROUP BY dt,
         hr
ORDER BY both_sum DESC
LIMIT 10;
```

```sql
-- Q2.1: Which requests have caused server errors within the past two weeks?

SELECT *
FROM logs2
WHERE status_code >= 500
  AND log_time >= TIMESTAMP '2012-12-18 00:00:00'
ORDER BY log_time;
```

```sql
-- Q2.2: During a specific two-week period, was the user password file leaked?

SELECT *
FROM logs2
WHERE status_code >= 200
  AND status_code < 300
  AND request LIKE '%/etc/passwd%'
  AND log_time >= TIMESTAMP '2012-05-06 00:00:00'
  AND log_time < TIMESTAMP '2012-05-20 00:00:00';
```

```sql
-- Q2.3: What was the average path depth for top-level requests in the past month?

SELECT top_level,
       AVG(LENGTH(request) - LENGTH(REPLACE(request, '/', ''))) AS depth_avg
FROM (
  SELECT SUBSTRING(request FROM 1 FOR len) AS top_level,
         request
  FROM (
    SELECT POSITION(SUBSTRING(request FROM 2), '/') AS len,
           request
    FROM logs2
    WHERE status_code >= 200
      AND status_code < 300
      AND log_time >= TIMESTAMP '2012-12-01 00:00:00'
  ) AS r
  WHERE len > 0
) AS s
WHERE top_level IN ('/about','/courses','/degrees','/events',
                    '/grad','/industry','/news','/people',
                    '/publications','/research','/teaching','/ugrad')
GROUP BY top_level
ORDER BY top_level;
```

```sql
-- Q2.4: During the last 3 months, which clients have made an excessive number of requests?

SELECT client_ip,
       COUNT(*) AS num_requests
FROM logs2
WHERE log_time >= TIMESTAMP '2012-10-01 00:00:00'
GROUP BY client_ip
HAVING COUNT(*) >= 100000
ORDER BY num_requests DESC;
```

```sql
-- Q2.5: What are the daily unique visitor counts?

SELECT dt,
       COUNT(DISTINCT client_ip)
FROM (
  SELECT CAST(log_time AS DATE) AS dt,
         client_ip
  FROM logs2
) AS r
GROUP BY dt
ORDER BY dt;
```

```sql
-- Q2.6: What are the average and maximum data transfer rates (Gbps)?

SELECT AVG(transfer) / 125000000.0 AS transfer_avg,
       MAX(transfer) / 125000000.0 AS transfer_max
FROM (
  SELECT log_time,
         SUM(object_size) AS transfer
  FROM logs2
  GROUP BY log_time
) AS r;
```

```sql
-- Q3.1: Has the indoor temperature reached freezing at any point since 2019/11/29 17:00?

SELECT *
FROM logs3
WHERE event_type = 'temperature'
  AND event_value <= 32.0
  AND log_time >= '2019-11-29 17:00:00.000';
```

```sql
-- Q3.4: Over the past 6 months, how frequently was each door opened?

SELECT device_name,
       device_floor,
       COUNT(*) AS ct
FROM logs3
WHERE event_type = 'door_open'
  AND log_time >= '2019-06-01 00:00:00.000'
GROUP BY device_name,
         device_floor
ORDER BY ct DESC;
```

Query 3.5 below uses the UNION keyword. Set the mode for combining SELECT query results. The setting is only used when UNION is specified without explicitly choosing UNION ALL or UNION DISTINCT.

```sql
SET union_default_mode = 'DISTINCT'
```

```sql
-- Q3.5: Where in the building do large temperature variations occur in winter and summer?

WITH temperature AS (
  SELECT dt,
         device_name,
         device_type,
         device_floor
  FROM (
    SELECT dt,
           hr,
           device_name,
           device_type,
           device_floor,
           AVG(event_value) AS temperature_hourly_avg
    FROM (
      SELECT CAST(log_time AS DATE) AS dt,
             EXTRACT(HOUR FROM log_time) AS hr,
             device_name,
             device_type,
             device_floor,
             event_value
      FROM logs3
      WHERE event_type = 'temperature'
    ) AS r
    GROUP BY dt,
             hr,
             device_name,
             device_type,
             device_floor
  ) AS s
  GROUP BY dt,
           device_name,
           device_type,
           device_floor
  HAVING MAX(temperature_hourly_avg) - MIN(temperature_hourly_avg) >= 25.0
)
SELECT DISTINCT device_name,
       device_type,
       device_floor,
       'WINTER'
FROM temperature
WHERE dt >= DATE '2018-12-01'
  AND dt < DATE '2019-03-01'
UNION
SELECT DISTINCT device_name,
       device_type,
       device_floor,
       'SUMMER'
FROM temperature
WHERE dt >= DATE '2019-06-01'
  AND dt < DATE '2019-09-01';
```

```sql
-- Q3.6: For each device category, what are the monthly power consumption metrics?

SELECT yr,
       mo,
       SUM(coffee_hourly_avg) AS coffee_monthly_sum,
       AVG(coffee_hourly_avg) AS coffee_monthly_avg,
       SUM(printer_hourly_avg) AS printer_monthly_sum,
       AVG(printer_hourly_avg) AS printer_monthly_avg,
       SUM(projector_hourly_avg) AS projector_monthly_sum,
       AVG(projector_hourly_avg) AS projector_monthly_avg,
       SUM(vending_hourly_avg) AS vending_monthly_sum,
       AVG(vending_hourly_avg) AS vending_monthly_avg
FROM (
  SELECT dt,
         yr,
         mo,
         hr,
         AVG(coffee) AS coffee_hourly_avg,
         AVG(printer) AS printer_hourly_avg,
         AVG(projector) AS projector_hourly_avg,
         AVG(vending) AS vending_hourly_avg
  FROM (
    SELECT CAST(log_time AS DATE) AS dt,
           EXTRACT(YEAR FROM log_time) AS yr,
           EXTRACT(MONTH FROM log_time) AS mo,
           EXTRACT(HOUR FROM log_time) AS hr,
           CASE WHEN device_name LIKE 'coffee%' THEN event_value END AS coffee,
           CASE WHEN device_name LIKE 'printer%' THEN event_value END AS printer,
           CASE WHEN device_name LIKE 'projector%' THEN event_value END AS projector,
           CASE WHEN device_name LIKE 'vending%' THEN event_value END AS vending
    FROM logs3
    WHERE device_type = 'meter'
  ) AS r
  GROUP BY dt,
           yr,
           mo,
           hr
) AS s
GROUP BY yr,
         mo
ORDER BY yr,
         mo;
```

This dataset is available for interactive queries in the [Playground](https://play.clickhouse.com/play?user=play), [example](https://play.clickhouse.com/play?user=play#U0VMRUNUIG1hY2hpbmVfbmFtZSwKICAgICAgIE1JTihjcHUpIEFTIGNwdV9taW4sCiAgICAgICBNQVgoY3B1KSBBUyBjcHVfbWF4LAogICAgICAgQVZHKGNwdSkgQVMgY3B1X2F2ZywKICAgICAgIE1JTihuZXRfaW4pIEFTIG5ldF9pbl9taW4sCiAgICAgICBNQVgobmV0X2luKSBBUyBuZXRfaW5fbWF4LAogICAgICAgQVZHKG5ldF9pbikgQVMgbmV0X2luX2F2ZywKICAgICAgIE1JTihuZXRfb3V0KSBBUyBuZXRfb3V0X21pbiwKICAgICAgIE1BWChuZXRfb3V0KSBBUyBuZXRfb3V0X21heCwKICAgICAgIEFWRyhuZXRfb3V0KSBBUyBuZXRfb3V0X2F2ZwpGUk9NICgKICBTRUxFQ1QgbWFjaGluZV9uYW1lLAogICAgICAgICBDT0FMRVNDRShjcHVfdXNlciwgMC4wKSBBUyBjcHUsCiAgICAgICAgIENPQUxFU0NFKGJ5dGVzX2luLCAwLjApIEFTIG5ldF9pbiwKICAgICAgICAgQ09BTEVTQ0UoYnl0ZXNfb3V0LCAwLjApIEFTIG5ldF9vdXQKICBGUk9NIG1nYmVuY2gubG9nczEKICBXSEVSRSBtYWNoaW5lX25hbWUgSU4gKCdhbmFuc2knLCdhcmFnb2cnLCd1cmQnKQogICAgQU5EIGxvZ190aW1lID49IFRJTUVTVEFNUCAnMjAxNy0wMS0xMSAwMDowMDowMCcKKSBBUyByCkdST1VQIEJZIG1hY2hpbmVfbmFtZQ==).
@ -1,9 +1,232 @@
---
slug: /zh/getting-started/example-datasets/cell-towers
sidebar_label: Cell Towers
title: "Cell Towers"
sidebar_label: Cell Towers
sidebar_position: 3
title: "Cell Towers"
---

import Content from '@site/docs/en/getting-started/example-datasets/cell-towers.md';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import CodeBlock from '@theme/CodeBlock';
import ActionsMenu from '@site/docs/en/_snippets/_service_actions_menu.md';
import SQLConsoleDetail from '@site/docs/en/_snippets/_launch_sql_console.md';

This dataset is from [OpenCellid](https://www.opencellid.org/) - the world's largest open database of cell towers.

As of 2021, it contains more than 40 million records about cell towers (GSM, LTE, UMTS, etc.) around the world with their geographical coordinates and metadata (country code, network, etc.).

The OpenCelliD Project is licensed under the `Creative Commons Attribution-ShareAlike 4.0 International License`, and we redistribute a snapshot of this dataset under the terms of the same license. The up-to-date version of the dataset is available to download after sign-in.

## Get the dataset {#get-the-dataset}

<Tabs groupId="deployMethod">
<TabItem value="serverless" label="ClickHouse Cloud" default>

In ClickHouse Cloud this dataset can be uploaded from S3 with a single button. Log in to your ClickHouse Cloud organization, or create a free trial at [ClickHouse.cloud](https://clickhouse.cloud).<ActionsMenu menu="Load Data" />

Choose the **Cell Towers** dataset from the **Sample data** tab, and **Load data**:

![Load the dataset](@site/docs/en/_snippets/images/cloud-load-data-sample.png)

Examine the schema of the cell_towers table:

```sql
DESCRIBE TABLE cell_towers
```

<SQLConsoleDetail />

</TabItem>
<TabItem value="selfmanaged" label="Self-managed">

1. Download a snapshot of the dataset from February 2021: [cell_towers.csv.xz](https://datasets.clickhouse.com/cell_towers.csv.xz) (729 MB).

2. Validate the integrity (optional step):

```bash
md5sum cell_towers.csv.xz
```

```response
8cf986f4a0d9f12c6f384a0e9192c908  cell_towers.csv.xz
```

3. Decompress it with the following command:

```bash
xz -d cell_towers.csv.xz
```

4. Create a table:

```sql
CREATE TABLE cell_towers
(
    radio Enum8('' = 0, 'CDMA' = 1, 'GSM' = 2, 'LTE' = 3, 'NR' = 4, 'UMTS' = 5),
    mcc UInt16,
    net UInt16,
    area UInt16,
    cell UInt64,
    unit Int16,
    lon Float64,
    lat Float64,
    range UInt32,
    samples UInt32,
    changeable UInt8,
    created DateTime,
    updated DateTime,
    averageSignal UInt8
)
ENGINE = MergeTree ORDER BY (radio, mcc, net, created);
```

5. Insert the dataset:

```bash
clickhouse-client --query "INSERT INTO cell_towers FORMAT CSVWithNames" < cell_towers.csv
```

</TabItem>
</Tabs>

## Example queries {#examples}

1. Number of cell towers by type:

```sql
SELECT radio, count() AS c FROM cell_towers GROUP BY radio ORDER BY c DESC
```
```response
┌─radio─┬────────c─┐
│ UMTS  │ 20686487 │
│ LTE   │ 12101148 │
│ GSM   │  9931312 │
│ CDMA  │   556344 │
│ NR    │      867 │
└───────┴──────────┘

5 rows in set. Elapsed: 0.011 sec. Processed 43.28 million rows, 43.28 MB (3.83 billion rows/s., 3.83 GB/s.)
```

2. Number of cell towers by [Mobile Country Code (MCC)](https://en.wikipedia.org/wiki/Mobile_country_code):

```sql
SELECT mcc, count() FROM cell_towers GROUP BY mcc ORDER BY count() DESC LIMIT 10
```
```response
┌─mcc─┬─count()─┐
│ 310 │ 5024650 │
│ 262 │ 2622423 │
│ 250 │ 1953176 │
│ 208 │ 1891187 │
│ 724 │ 1836150 │
│ 404 │ 1729151 │
│ 234 │ 1618924 │
│ 510 │ 1353998 │
│ 440 │ 1343355 │
│ 311 │ 1332798 │
└─────┴─────────┘

10 rows in set. Elapsed: 0.019 sec. Processed 43.28 million rows, 86.55 MB (2.33 billion rows/s., 4.65 GB/s.)
```

The top countries are: the USA, Germany, and Russia.

You can decode these values by creating an [External Dictionary](../../sql-reference/dictionaries/external-dictionaries/external-dicts.md) in ClickHouse.
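A rough illustration of that idea (a sketch; the dictionary name and the source table `mcc_codes`, mapping MCC to a country name, are hypothetical):

```sql
CREATE DICTIONARY mcc_names
(
    mcc UInt64,
    country String
)
PRIMARY KEY mcc
SOURCE(CLICKHOUSE(TABLE 'mcc_codes'))
LAYOUT(FLAT())
LIFETIME(0);

SELECT dictGet('mcc_names', 'country', toUInt64(310));
```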
## Use case: Incorporate geo data {#use-case}

Using the `pointInPolygon` function.

1. Create a table where we will store polygons:

<Tabs groupId="deployMethod">
<TabItem value="serverless" label="ClickHouse Cloud" default>

```sql
CREATE TABLE moscow (polygon Array(Tuple(Float64, Float64)))
ORDER BY polygon;
```

</TabItem>
<TabItem value="selfmanaged" label="Self-managed">

```sql
CREATE TEMPORARY TABLE
moscow (polygon Array(Tuple(Float64, Float64)));
```

</TabItem>
</Tabs>

2. These points roughly form a geofence of Moscow (without "New Moscow"):

```sql
INSERT INTO moscow VALUES ([(37.84172564285271, 55.78000432402266),
(37.8381207618713, 55.775874525970494), (37.83979446823122, 55.775626746008065), (37.84243326983639, 55.77446586811748), (37.84262672750849, 55.771974101091104), (37.84153238623039, 55.77114545193181), (37.841124690460184, 55.76722010265554),
(37.84239076983644, 55.76654891107098), (37.842283558197025, 55.76258709833121), (37.8421759312134, 55.758073999993734), (37.84198330422974, 55.75381499999371), (37.8416827275085, 55.749277102484484), (37.84157576190186, 55.74794544108413),
(37.83897929098507, 55.74525257875241), (37.83739676451868, 55.74404373042019), (37.838732481460525, 55.74298009816793), (37.841183997352545, 55.743060321833575), (37.84097476190185, 55.73938799999373), (37.84048155819702, 55.73570799999372),
(37.840095812164286, 55.73228210777237), (37.83983814285274, 55.73080491981639), (37.83846476321406, 55.729799917464675), (37.83835745269769, 55.72919751082619), (37.838636380279524, 55.72859509486539), (37.8395161005249, 55.727705075632784),
(37.83897964285276, 55.722727886185154), (37.83862557539366, 55.72034817326636), (37.83559735744853, 55.71944437307499), (37.835370708803126, 55.71831419154461), (37.83738169402022, 55.71765218986692), (37.83823396494291, 55.71691750159089),
(37.838056931213345, 55.71547311301385), (37.836812846557606, 55.71221445615604), (37.83522525396725, 55.709331054395555), (37.83269301586908, 55.70953687463627), (37.829667367706236, 55.70903403789297), (37.83311126588435, 55.70552351822608),
(37.83058993121339, 55.70041317726053), (37.82983872750851, 55.69883771404813), (37.82934501586913, 55.69718947487017), (37.828926414016685, 55.69504441658371), (37.82876530422971, 55.69287499999378), (37.82894754100031, 55.690759754047335),
(37.827697554878185, 55.68951421135665), (37.82447346292115, 55.68965045405069), (37.83136543914793, 55.68322046195302), (37.833554015869154, 55.67814012759211), (37.83544184655761, 55.67295011628339), (37.837480388885474, 55.6672498719639),
(37.838960677246064, 55.66316274139358), (37.83926093121332, 55.66046999999383), (37.839025050262435, 55.65869897264431), (37.83670784390257, 55.65794084879904), (37.835656529083245, 55.65694309303843), (37.83704060449217, 55.65689306460552),
(37.83696819873806, 55.65550363526252), (37.83760389616388, 55.65487847246661), (37.83687972750851, 55.65356745541324), (37.83515216004943, 55.65155951234079), (37.83312418518067, 55.64979413590619), (37.82801726983639, 55.64640836412121),
(37.820614174591, 55.64164525405531), (37.818908190475426, 55.6421883258084), (37.81717543386075, 55.64112490388471), (37.81690987037274, 55.63916106913107), (37.815099354492155, 55.637925371757085), (37.808769150787356, 55.633798276884455),
(37.80100123544311, 55.62873670012244), (37.79598013491824, 55.62554336109055), (37.78634567724606, 55.62033499605651), (37.78334147619623, 55.618768681480326), (37.77746201055901, 55.619855533402706), (37.77527329626457, 55.61909966711279),
(37.77801986242668, 55.618770300976294), (37.778212973541216, 55.617257701952106), (37.77784818518065, 55.61574504433011), (37.77016867724609, 55.61148576294007), (37.760191219573976, 55.60599579539028), (37.75338926983641, 55.60227892751446),
(37.746329965606634, 55.59920577639331), (37.73939925396728, 55.59631430313617), (37.73273665739439, 55.5935318803559), (37.7299954450912, 55.59350760316188), (37.7268679946899, 55.59469840523759), (37.72626726983634, 55.59229549697373),
(37.7262673598022, 55.59081598950582), (37.71897193121335, 55.5877595845419), (37.70871550793456, 55.58393177431724), (37.700497489410374, 55.580917323756644), (37.69204305026244, 55.57778089778455), (37.68544477378839, 55.57815154690915),
(37.68391050793454, 55.57472945079756), (37.678803592590306, 55.57328235936491), (37.6743402539673, 55.57255251445782), (37.66813862698363, 55.57216388774464), (37.617927457672096, 55.57505691895805), (37.60443099999999, 55.5757737568051),
(37.599683515869145, 55.57749105910326), (37.59754177842709, 55.57796291823627), (37.59625834786988, 55.57906686095235), (37.59501783265684, 55.57746616444403), (37.593090671936025, 55.57671634534502), (37.587018007904, 55.577944600233785),
(37.578692203704804, 55.57982895000019), (37.57327546607398, 55.58116294118248), (37.57385012109279, 55.581550362779), (37.57399562266922, 55.5820107079112), (37.5735356072979, 55.58226289171689), (37.57290393054962, 55.582393529795155),
(37.57037722355653, 55.581919415056234), (37.5592298306885, 55.584471614867844), (37.54189249206543, 55.58867650795186), (37.5297256269836, 55.59158133551745), (37.517837865081766, 55.59443656218868), (37.51200186508174, 55.59635625174229),
(37.506808949737554, 55.59907823904434), (37.49820432275389, 55.6062944994944), (37.494406071441674, 55.60967103463367), (37.494760001358024, 55.61066689753365), (37.49397137107085, 55.61220931698269), (37.49016528606031, 55.613417718449064),
(37.48773249206542, 55.61530616333343), (37.47921386508177, 55.622640129112334), (37.470652153442394, 55.62993723476164), (37.46273446298218, 55.6368075123157), (37.46350692265317, 55.64068225239439), (37.46050283203121, 55.640794546982576),
(37.457627470916734, 55.64118904154646), (37.450718034393326, 55.64690488145138), (37.44239252645875, 55.65397824729769), (37.434587576721185, 55.66053543155961), (37.43582144975277, 55.661693766520735), (37.43576786245721, 55.662755031737014),
(37.430982915344174, 55.664610641628116), (37.428547447097685, 55.66778515273695), (37.42945134592044, 55.668633314343566), (37.42859571562949, 55.66948145750025), (37.4262836402282, 55.670813882451405), (37.418709037048295, 55.6811141674414),
(37.41922139651101, 55.68235377885389), (37.419218771842885, 55.68359335082235), (37.417196501327446, 55.684375235224735), (37.41607020370478, 55.68540557585352), (37.415640857147146, 55.68686637150793), (37.414632153442334, 55.68903015131686),
(37.413344899475064, 55.690896881757396), (37.41171432275391, 55.69264232162232), (37.40948282275393, 55.69455101638112), (37.40703674603271, 55.69638690385348), (37.39607169577025, 55.70451821283731), (37.38952706878662, 55.70942491932811),
(37.387778313491815, 55.71149057784176), (37.39049275399779, 55.71419814298992), (37.385557272491454, 55.7155489617061), (37.38388335714726, 55.71849856042102), (37.378368238098155, 55.7292763261685), (37.37763597123337, 55.730845879211614),
(37.37890062088197, 55.73167906388319), (37.37750451918789, 55.734703664681774), (37.375610832015965, 55.734851959522246), (37.3723813571472, 55.74105626086403), (37.37014935714723, 55.746115620904355), (37.36944173016362, 55.750883999993725),
(37.36975304365541, 55.76335905525834), (37.37244070571134, 55.76432079697595), (37.3724259757175, 55.76636979670426), (37.369922155757884, 55.76735417953104), (37.369892695770275, 55.76823419316575), (37.370214730163575, 55.782312184391266),
(37.370493611114505, 55.78436801120489), (37.37120164550783, 55.78596427165359), (37.37284851456452, 55.7874378183096), (37.37608325135799, 55.7886695054807), (37.3764587460632, 55.78947647305964), (37.37530000265506, 55.79146512926804),
(37.38235915344241, 55.79899647809345), (37.384344043655396, 55.80113596939471), (37.38594269577028, 55.80322699999366), (37.38711208598329, 55.804919036911976), (37.3880239841309, 55.806610999993666), (37.38928977249147, 55.81001864976979),
(37.39038389947512, 55.81348641242801), (37.39235781481933, 55.81983538336746), (37.393709457672124, 55.82417822811877), (37.394685720901464, 55.82792275755836), (37.39557615344238, 55.830447148154136), (37.39844478226658, 55.83167107969975),
(37.40019761214057, 55.83151823557964), (37.400398790382326, 55.83264967594742), (37.39659544313046, 55.83322180909622), (37.39667059524539, 55.83402792148566), (37.39682089947515, 55.83638877400216), (37.39643489154053, 55.83861656112751),
(37.3955338994751, 55.84072348043264), (37.392680272491454, 55.84502158126453), (37.39241188227847, 55.84659117913199), (37.392529730163616, 55.84816071336481), (37.39486835714723, 55.85288092980303), (37.39873052645878, 55.859893456073635),
(37.40272161111449, 55.86441833633205), (37.40697072750854, 55.867579567544375), (37.410007082016016, 55.868369880337), (37.4120992989502, 55.86920843741314), (37.412668021163924, 55.87055369615854), (37.41482461111453, 55.87170587948249),
(37.41862266137694, 55.873183961039565), (37.42413732540892, 55.874879126654704), (37.4312182698669, 55.875614937236705), (37.43111093783558, 55.8762723478417), (37.43332105622856, 55.87706546369396), (37.43385747619623, 55.87790681284802),
(37.441303050262405, 55.88027084462084), (37.44747234260555, 55.87942070143253), (37.44716141796871, 55.88072960917233), (37.44769797085568, 55.88121221323979), (37.45204320500181, 55.882080694420715), (37.45673176190186, 55.882346110794586),
(37.463383999999984, 55.88252729504517), (37.46682797486874, 55.88294937719063), (37.470014457672086, 55.88361266759345), (37.47751410450743, 55.88546991372396), (37.47860317658232, 55.88534929207307), (37.48165826025772, 55.882563306475106),
(37.48316434442331, 55.8815803226785), (37.483831555817645, 55.882427612793315), (37.483182967125686, 55.88372791409729), (37.483092277908824, 55.88495581062434), (37.4855716508179, 55.8875561994203), (37.486440636245746, 55.887827444039566),
(37.49014203439328, 55.88897899871799), (37.493210285705544, 55.890208937135604), (37.497512451065035, 55.891342397444696), (37.49780744510645, 55.89174030252967), (37.49940333499519, 55.89239745507079), (37.50018383334346, 55.89339220941865),
(37.52421672750851, 55.903869074155224), (37.52977457672118, 55.90564076517974), (37.53503220370484, 55.90661661218259), (37.54042858064267, 55.90714113744566), (37.54320461007303, 55.905645048442985), (37.545686966066306, 55.906608607018505),
(37.54743976120755, 55.90788552162358), (37.55796999999999, 55.90901557907218), (37.572711542327866, 55.91059395704873), (37.57942799999998, 55.91073854155573), (37.58502865872187, 55.91009969268444), (37.58739968913264, 55.90794809960554),
(37.59131567193598, 55.908713267595054), (37.612687423278814, 55.902866854295375), (37.62348079629517, 55.90041967242986), (37.635797880950896, 55.898141151686396), (37.649487626983664, 55.89639275532968), (37.65619302513125, 55.89572360207488),
(37.66294133862307, 55.895295577183965), (37.66874564418033, 55.89505457604897), (37.67375601586915, 55.89254677027454), (37.67744661901856, 55.8947775867987), (37.688347, 55.89450045676125), (37.69480554232789, 55.89422926332761),
(37.70107096560668, 55.89322256101114), (37.705962965606716, 55.891763491662616), (37.711885134918205, 55.889110234998974), (37.71682005026245, 55.886577568759876), (37.7199315476074, 55.88458159806678), (37.72234560316464, 55.882281005794134),
(37.72364385977171, 55.8809452036196), (37.725371142837474, 55.8809722706006), (37.727870902099546, 55.88037213862385), (37.73394330422971, 55.877941504088696), (37.745339592590376, 55.87208120378722), (37.75525267724611, 55.86703807949492),
(37.76919976190188, 55.859821640197474), (37.827835219574, 55.82962968399116), (37.83341438888553, 55.82575289922351), (37.83652584655761, 55.82188784027888), (37.83809213491821, 55.81612575504693), (37.83605359521481, 55.81460347077685),
(37.83632178569025, 55.81276696067908), (37.838623105812026, 55.811486181656385), (37.83912198147584, 55.807329380532785), (37.839079078033414, 55.80510270463816), (37.83965844708251, 55.79940712529036), (37.840581150787344, 55.79131399999368),
(37.84172564285271, 55.78000432402266)]);
```

3. Check how many cell towers are in Moscow:

```sql
SELECT count() FROM cell_towers
WHERE pointInPolygon((lon, lat), (SELECT * FROM moscow))
```
```response
┌─count()─┐
│  310463 │
└─────────┘

1 rows in set. Elapsed: 0.067 sec. Processed 43.28 million rows, 692.42 MB (645.83 million rows/s., 10.33 GB/s.)
```

Although you cannot create temporary tables there, this dataset is still available for interactive queries in the [Playground](https://play.clickhouse.com/play?user=play), [example](https://play.clickhouse.com/play?user=play#U0VMRUNUIG1jYywgY291bnQoKSBGUk9NIGNlbGxfdG93ZXJzIEdST1VQIEJZIG1jYyBPUkRFUiBCWSBjb3VudCgpIERFU0M=).

<Content />
@ -1,9 +1,352 @@
---
slug: /zh/getting-started/example-datasets/menus
sidebar_label: New York Public Library "What's on the Menu?" Dataset
title: "New York Public Library \"What's on the Menu?\" Dataset"
---

import Content from '@site/docs/en/getting-started/example-datasets/menus.md';

The dataset was created by the New York Public Library. It contains historical data on the menus of hotels, restaurants and cafés: the dishes along with their prices.

<Content />

Source: http://menus.nypl.org/data
The data is open data.

The data comes from the library's archive, so it may be incomplete and hard to use for statistical analysis. Nevertheless, it is a very interesting dataset. There are only 1.3 million records about dishes in menus - a very small data volume for ClickHouse, but it is still a good example.

## Download the Dataset {#download-dataset}

Run the command:

```bash
wget https://s3.amazonaws.com/menusdata.nypl.org/gzips/2021_08_01_07_01_17_data.tgz
```

If needed, replace the link with the latest one from http://menus.nypl.org/data. The download size is about 35 MB.

## Unpack the Dataset {#unpack-dataset}

```bash
tar xvf 2021_08_01_07_01_17_data.tgz
```

The uncompressed size is about 150 MB.

The data consists of four tables:

- `Menu` - information about menus: the name of the restaurant, the date when the menu was seen, etc.
- `Dish` - information about dishes: the name of a dish along with some characteristics.
- `MenuPage` - information about the pages in the menus; every page belongs to some `Menu`.
- `MenuItem` - an item of a menu: a dish along with its price on some menu page, with links to `Dish` and `MenuPage`.

## Create Tables {#create-tables}

We use the [Decimal](/docs/zh/sql-reference/data-types/decimal.md) data type to store prices.

```sql
CREATE TABLE dish
(
    id UInt32,
    name String,
    description String,
    menus_appeared UInt32,
    times_appeared Int32,
    first_appeared UInt16,
    last_appeared UInt16,
    lowest_price Decimal64(3),
    highest_price Decimal64(3)
) ENGINE = MergeTree ORDER BY id;

CREATE TABLE menu
(
    id UInt32,
    name String,
    sponsor String,
    event String,
    venue String,
    place String,
    physical_description String,
    occasion String,
    notes String,
    call_number String,
    keywords String,
    language String,
    date String,
    location String,
    location_type String,
    currency String,
    currency_symbol String,
    status String,
    page_count UInt16,
    dish_count UInt16
) ENGINE = MergeTree ORDER BY id;

CREATE TABLE menu_page
(
    id UInt32,
    menu_id UInt32,
    page_number UInt16,
    image_id String,
    full_height UInt16,
    full_width UInt16,
    uuid UUID
) ENGINE = MergeTree ORDER BY id;

CREATE TABLE menu_item
(
    id UInt32,
    menu_page_id UInt32,
    price Decimal64(3),
    high_price Decimal64(3),
    dish_id UInt32,
    created_at DateTime,
    updated_at DateTime,
    xpos Float64,
    ypos Float64
) ENGINE = MergeTree ORDER BY id;
```

## Import Data {#import-data}

Run the following commands to import the data into ClickHouse:

```bash
clickhouse-client --format_csv_allow_single_quotes 0 --input_format_null_as_default 0 --query "INSERT INTO dish FORMAT CSVWithNames" < Dish.csv
clickhouse-client --format_csv_allow_single_quotes 0 --input_format_null_as_default 0 --query "INSERT INTO menu FORMAT CSVWithNames" < Menu.csv
clickhouse-client --format_csv_allow_single_quotes 0 --input_format_null_as_default 0 --query "INSERT INTO menu_page FORMAT CSVWithNames" < MenuPage.csv
clickhouse-client --format_csv_allow_single_quotes 0 --input_format_null_as_default 0 --date_time_input_format best_effort --query "INSERT INTO menu_item FORMAT CSVWithNames" < MenuItem.csv
```

We use the [CSVWithNames](/docs/zh/interfaces/formats.md#csvwithnames) format because the data is represented as CSV with a header.

We disable `format_csv_allow_single_quotes` because only double quotes are used for data fields, while single quotes may appear inside values and would otherwise confuse the CSV parser.

We disable [input_format_null_as_default](/docs/zh/operations/settings/settings.md#settings-input-format-null-as-default) because there are no [NULL](/docs/zh/sql-reference/syntax.md#null-literal) values in the data. Otherwise ClickHouse would try to parse `\N` sequences and may get confused by `\` in the data.

The setting [date_time_input_format best_effort](/docs/zh/operations/settings/settings.md#settings-date_time_input_format) allows parsing [DateTime](/docs/zh/sql-reference/data-types/datetime.md) fields in a wide variety of formats. For example, ISO-8601 time strings without seconds, like '2000-01-01 01:02', are recognized. Without this setting only the fixed DateTime format is allowed.
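To see what `best_effort` parsing accepts, you can try the underlying function directly. A minimal sketch (the sample timestamps are made up):

```sql
-- parseDateTimeBestEffort handles ISO-8601 variants that the fixed
-- DateTime parser would reject, e.g. a timestamp without seconds.
SELECT
    parseDateTimeBestEffort('2000-01-01 01:02')     AS no_seconds,
    parseDateTimeBestEffort('2000-01-01T01:02:03Z') AS iso_with_zone;
```

Both values come back as proper `DateTime`s; with `--date_time_input_format best_effort`, the same logic is applied to DateTime columns during CSV import.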

## Denormalize the Data {#denormalize-data}

Data is presented in multiple tables in [normalized form](https://en.wikipedia.org/wiki/Database_normalization#Normal_forms). It means you have to perform a [JOIN](/docs/zh/sql-reference/statements/select/join.md#select-join) if you want to query, e.g., dish names from menu items. For typical analytical tasks it is far more efficient to deal with pre-JOINed data to avoid doing a `JOIN` every time. This is called "denormalized" data.

We will create a table `menu_item_denorm` that will contain all the data JOINed together:

```sql
CREATE TABLE menu_item_denorm
ENGINE = MergeTree ORDER BY (dish_name, created_at)
AS SELECT
    price,
    high_price,
    created_at,
    updated_at,
    xpos,
    ypos,
    dish.id AS dish_id,
    dish.name AS dish_name,
    dish.description AS dish_description,
    dish.menus_appeared AS dish_menus_appeared,
    dish.times_appeared AS dish_times_appeared,
    dish.first_appeared AS dish_first_appeared,
    dish.last_appeared AS dish_last_appeared,
    dish.lowest_price AS dish_lowest_price,
    dish.highest_price AS dish_highest_price,
    menu.id AS menu_id,
    menu.name AS menu_name,
    menu.sponsor AS menu_sponsor,
    menu.event AS menu_event,
    menu.venue AS menu_venue,
    menu.place AS menu_place,
    menu.physical_description AS menu_physical_description,
    menu.occasion AS menu_occasion,
    menu.notes AS menu_notes,
    menu.call_number AS menu_call_number,
    menu.keywords AS menu_keywords,
    menu.language AS menu_language,
    menu.date AS menu_date,
    menu.location AS menu_location,
    menu.location_type AS menu_location_type,
    menu.currency AS menu_currency,
    menu.currency_symbol AS menu_currency_symbol,
    menu.status AS menu_status,
    menu.page_count AS menu_page_count,
    menu.dish_count AS menu_dish_count
FROM menu_item
    JOIN dish ON menu_item.dish_id = dish.id
    JOIN menu_page ON menu_item.menu_page_id = menu_page.id
    JOIN menu ON menu_page.menu_id = menu.id;
```

## Validate the Data {#validate-data}

Query:

```sql
SELECT count() FROM menu_item_denorm;
```

Result:

```text
┌─count()─┐
│ 1329175 │
└─────────┘
```

## Run Some Queries {#run-queries}

### Averaged Historical Prices of Dishes {#query-averaged-historical-prices}

Query:

```sql
SELECT
    round(toUInt32OrZero(extract(menu_date, '^\\d{4}')), -1) AS d,
    count(),
    round(avg(price), 2),
    bar(avg(price), 0, 100, 100)
FROM menu_item_denorm
WHERE (menu_currency = 'Dollars') AND (d > 0) AND (d < 2022)
GROUP BY d
ORDER BY d ASC;
```

Result:

```text
┌────d─┬─count()─┬─round(avg(price), 2)─┬─bar(avg(price), 0, 100, 100)─┐
│ 1850 │ 618 │ 1.5 │ █▍ │
│ 1860 │ 1634 │ 1.29 │ █▎ │
│ 1870 │ 2215 │ 1.36 │ █▎ │
│ 1880 │ 3909 │ 1.01 │ █ │
│ 1890 │ 8837 │ 1.4 │ █▍ │
│ 1900 │ 176292 │ 0.68 │ ▋ │
│ 1910 │ 212196 │ 0.88 │ ▊ │
│ 1920 │ 179590 │ 0.74 │ ▋ │
│ 1930 │ 73707 │ 0.6 │ ▌ │
│ 1940 │ 58795 │ 0.57 │ ▌ │
│ 1950 │ 41407 │ 0.95 │ ▊ │
│ 1960 │ 51179 │ 1.32 │ █▎ │
│ 1970 │ 12914 │ 1.86 │ █▋ │
│ 1980 │ 7268 │ 4.35 │ ████▎ │
│ 1990 │ 11055 │ 6.03 │ ██████ │
│ 2000 │ 2467 │ 11.85 │ ███████████▋ │
│ 2010 │ 597 │ 25.66 │ █████████████████████████▋ │
└──────┴─────────┴──────────────────────┴──────────────────────────────┘
```

Take it with a grain of salt.

### Burger Prices {#query-burger-prices}

Query:

```sql
SELECT
    round(toUInt32OrZero(extract(menu_date, '^\\d{4}')), -1) AS d,
    count(),
    round(avg(price), 2),
    bar(avg(price), 0, 50, 100)
FROM menu_item_denorm
WHERE (menu_currency = 'Dollars') AND (d > 0) AND (d < 2022) AND (dish_name ILIKE '%burger%')
GROUP BY d
ORDER BY d ASC;
```

Result:

```text
┌────d─┬─count()─┬─round(avg(price), 2)─┬─bar(avg(price), 0, 50, 100)───────────┐
│ 1880 │ 2 │ 0.42 │ ▋ │
│ 1890 │ 7 │ 0.85 │ █▋ │
│ 1900 │ 399 │ 0.49 │ ▊ │
│ 1910 │ 589 │ 0.68 │ █▎ │
│ 1920 │ 280 │ 0.56 │ █ │
│ 1930 │ 74 │ 0.42 │ ▋ │
│ 1940 │ 119 │ 0.59 │ █▏ │
│ 1950 │ 134 │ 1.09 │ ██▏ │
│ 1960 │ 272 │ 0.92 │ █▋ │
│ 1970 │ 108 │ 1.18 │ ██▎ │
│ 1980 │ 88 │ 2.82 │ █████▋ │
│ 1990 │ 184 │ 3.68 │ ███████▎ │
│ 2000 │ 21 │ 7.14 │ ██████████████▎ │
│ 2010 │ 6 │ 18.42 │ ████████████████████████████████████▋ │
└──────┴─────────┴──────────────────────┴───────────────────────────────────────┘
```

### Vodka {#query-vodka}

Query:

```sql
SELECT
    round(toUInt32OrZero(extract(menu_date, '^\\d{4}')), -1) AS d,
    count(),
    round(avg(price), 2),
    bar(avg(price), 0, 50, 100)
FROM menu_item_denorm
WHERE (menu_currency IN ('Dollars', '')) AND (d > 0) AND (d < 2022) AND (dish_name ILIKE '%vodka%')
GROUP BY d
ORDER BY d ASC;
```

Result:

```text
┌────d─┬─count()─┬─round(avg(price), 2)─┬─bar(avg(price), 0, 50, 100)─┐
│ 1910 │ 2 │ 0 │ │
│ 1920 │ 1 │ 0.3 │ ▌ │
│ 1940 │ 21 │ 0.42 │ ▋ │
│ 1950 │ 14 │ 0.59 │ █▏ │
│ 1960 │ 113 │ 2.17 │ ████▎ │
│ 1970 │ 37 │ 0.68 │ █▎ │
│ 1980 │ 19 │ 2.55 │ █████ │
│ 1990 │ 86 │ 3.6 │ ███████▏ │
│ 2000 │ 2 │ 3.98 │ ███████▊ │
└──────┴─────────┴──────────────────────┴─────────────────────────────┘
```

To match vodka we have to query with `ILIKE '%vodka%'`, and that definitely makes a statement.

### Caviar {#query-caviar}

Print the prices of caviar. Also print the name of any dish with caviar.

Query:

```sql
SELECT
    round(toUInt32OrZero(extract(menu_date, '^\\d{4}')), -1) AS d,
    count(),
    round(avg(price), 2),
    bar(avg(price), 0, 50, 100),
    any(dish_name)
FROM menu_item_denorm
WHERE (menu_currency IN ('Dollars', '')) AND (d > 0) AND (d < 2022) AND (dish_name ILIKE '%caviar%')
GROUP BY d
ORDER BY d ASC;
```

Result:

```text
┌────d─┬─count()─┬─round(avg(price), 2)─┬─bar(avg(price), 0, 50, 100)──────┬─any(dish_name)──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ 1090 │ 1 │ 0 │ │ Caviar │
│ 1880 │ 3 │ 0 │ │ Caviar │
│ 1890 │ 39 │ 0.59 │ █▏ │ Butter and caviar │
│ 1900 │ 1014 │ 0.34 │ ▋ │ Anchovy Caviar on Toast │
│ 1910 │ 1588 │ 1.35 │ ██▋ │ 1/1 Brötchen Caviar │
│ 1920 │ 927 │ 1.37 │ ██▋ │ ASTRAKAN CAVIAR │
│ 1930 │ 289 │ 1.91 │ ███▋ │ Astrachan caviar │
│ 1940 │ 201 │ 0.83 │ █▋ │ (SPECIAL) Domestic Caviar Sandwich │
│ 1950 │ 81 │ 2.27 │ ████▌ │ Beluga Caviar │
│ 1960 │ 126 │ 2.21 │ ████▍ │ Beluga Caviar │
│ 1970 │ 105 │ 0.95 │ █▊ │ BELUGA MALOSSOL CAVIAR AMERICAN DRESSING │
│ 1980 │ 12 │ 7.22 │ ██████████████▍ │ Authentic Iranian Beluga Caviar the world's finest black caviar presented in ice garni and a sampling of chilled 100° Russian vodka │
│ 1990 │ 74 │ 14.42 │ ████████████████████████████▋ │ Avocado Salad, Fresh cut avocado with caviare │
│ 2000 │ 3 │ 7.82 │ ███████████████▋ │ Aufgeschlagenes Kartoffelsueppchen mit Forellencaviar │
│ 2010 │ 6 │ 15.58 │ ███████████████████████████████▏ │ "OYSTERS AND PEARLS" "Sabayon" of Pearl Tapioca with Island Creek Oysters and Russian Sevruga Caviar │
└──────┴─────────┴──────────────────────┴──────────────────────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
```

At least they have caviar with vodka. Very nice.

## Online Playground {#playground}

The data is uploaded to ClickHouse Playground, [example](https://play.clickhouse.com/play?user=play#U0VMRUNUCiAgICByb3VuZCh0b1VJbnQzMk9yWmVybyhleHRyYWN0KG1lbnVfZGF0ZSwgJ15cXGR7NH0nKSksIC0xKSBBUyBkLAogICAgY291bnQoKSwKICAgIHJvdW5kKGF2ZyhwcmljZSksIDIpLAogICAgYmFyKGF2ZyhwcmljZSksIDAsIDUwLCAxMDApLAogICAgYW55KGRpc2hfbmFtZSkKRlJPTSBtZW51X2l0ZW1fZGVub3JtCldIRVJFIChtZW51X2N1cnJlbmN5IElOICgnRG9sbGFycycsICcnKSkgQU5EIChkID4gMCkgQU5EIChkIDwgMjAyMikgQU5EIChkaXNoX25hbWUgSUxJS0UgJyVjYXZpYXIlJykKR1JPVVAgQlkgZApPUkRFUiBCWSBkIEFTQw==).

@ -1,9 +1,416 @@
---
slug: /zh/getting-started/example-datasets/opensky
sidebar_label: Air Traffic Data
title: "Crowdsourced air traffic data from The OpenSky Network 2020"
description: The data in this dataset is derived and cleaned from the full OpenSky dataset to illustrate the development of air traffic during the COVID-19 pandemic.
---

import Content from '@site/docs/en/getting-started/example-datasets/opensky.md';

The data in this dataset is derived and cleaned from the full OpenSky dataset to illustrate the development of air traffic during the COVID-19 pandemic. It covers all flights seen by the network's more than 2500 members since 1 January 2019. More data will be periodically added to the dataset until the end of the COVID-19 pandemic.

<Content />

Source: https://zenodo.org/record/5092942#.YRBCyTpRXYd

Martin Strohmeier, Xavier Olive, Jannis Lübbe, Matthias Schäfer, and Vincent Lenders, "Crowdsourced air traffic data from the OpenSky Network 2019-2020", Earth System Science Data 13(2), 2021 https://doi.org/10.5194/essd-13-357-2021

## Download the Dataset {#download-dataset}

Run the command:

```bash
wget -O- https://zenodo.org/record/5092942 | grep -oP 'https://zenodo.org/record/5092942/files/flightlist_\d+_\d+\.csv\.gz' | xargs wget
```

The download will take about 2 minutes over a good internet connection. There are 30 files with a total size of 4.3 GB.

## Create the Table {#create-table}

```sql
CREATE TABLE opensky
(
    callsign String,
    number String,
    icao24 String,
    registration String,
    typecode String,
    origin String,
    destination String,
    firstseen DateTime,
    lastseen DateTime,
    day DateTime,
    latitude_1 Float64,
    longitude_1 Float64,
    altitude_1 Float64,
    latitude_2 Float64,
    longitude_2 Float64,
    altitude_2 Float64
) ENGINE = MergeTree ORDER BY (origin, destination, callsign);
```

## Import Data {#import-data}

Upload the data to ClickHouse in parallel:

```bash
ls -1 flightlist_*.csv.gz | xargs -P100 -I{} bash -c 'gzip -c -d "{}" | clickhouse-client --date_time_input_format best_effort --query "INSERT INTO opensky FORMAT CSVWithNames"'
```

- Here we pass the list of files (`ls -1 flightlist_*.csv.gz`) to `xargs` for parallel processing. `xargs -P100` specifies up to 100 parallel workers, but since we only have 30 files, the number of workers will be just 30.
- For every file, `xargs` runs a script via `bash -c`. The script uses `{}` as a placeholder for the file name, which `xargs` substitutes into the command (with `-I{}`); a worked expansion of this command is shown below.
- The script decompresses the file (`gzip -c -d "{}"`) to standard output (the `-c` parameter) and redirects the output to `clickhouse-client`.
- We also ask to parse [DateTime](../../sql-reference/data-types/datetime.md) fields with the extended parser ([--date_time_input_format best_effort](../../operations/settings/settings.md#settings-date_time_input_format)) to recognize ISO-8601 format with timezone offsets.

Finally, `clickhouse-client` reads the input data in [CSVWithNames](../../interfaces/formats.md#csvwithnames) format and performs the insertion.
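Spelled out for a single input file, each worker ends up running the following pipeline (the file name here is only an example of the `flightlist_*` pattern):

```bash
# What xargs executes for one file after substituting {}:
gzip -c -d "flightlist_20190101_20190131.csv.gz" \
  | clickhouse-client --date_time_input_format best_effort \
      --query "INSERT INTO opensky FORMAT CSVWithNames"
```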

The parallel upload takes 24 seconds.

If you prefer not to use the parallel upload, here is the sequential variant:

```bash
for file in flightlist_*.csv.gz; do gzip -c -d "$file" | clickhouse-client --date_time_input_format best_effort --query "INSERT INTO opensky FORMAT CSVWithNames"; done
```

## Validate the Data {#validate-data}

Query:

```sql
SELECT count() FROM opensky;
```

Result:

```text
┌──count()─┐
│ 66010819 │
└──────────┘
```

The size of the dataset in ClickHouse is just 2.66 GiB; let's check it.

Query:

```sql
SELECT formatReadableSize(total_bytes) FROM system.tables WHERE name = 'opensky';
```

Result:

```text
┌─formatReadableSize(total_bytes)─┐
│ 2.66 GiB │
└─────────────────────────────────┘
```

## Run Some Queries {#run-queries}

The total distance traveled is 68 billion kilometers.

Query:

```sql
SELECT formatReadableQuantity(sum(geoDistance(longitude_1, latitude_1, longitude_2, latitude_2)) / 1000) FROM opensky;
```

Result:

```text
┌─formatReadableQuantity(divide(sum(geoDistance(longitude_1, latitude_1, longitude_2, latitude_2)), 1000))─┐
│ 68.72 billion │
└──────────────────────────────────────────────────────────────────────────────────────────────────────────┘
```

The average flight distance is around 1000 km.

Query:

```sql
SELECT avg(geoDistance(longitude_1, latitude_1, longitude_2, latitude_2)) FROM opensky;
```

Result:

```text
┌─avg(geoDistance(longitude_1, latitude_1, longitude_2, latitude_2))─┐
│ 1041090.6465708319 │
└────────────────────────────────────────────────────────────────────┘
```

### Most Busy Origin Airports and the Average Distance Seen {#busy-airports-average-distance}

Query:

```sql
SELECT
    origin,
    count(),
    round(avg(geoDistance(longitude_1, latitude_1, longitude_2, latitude_2))) AS distance,
    bar(distance, 0, 10000000, 100) AS bar
FROM opensky
WHERE origin != ''
GROUP BY origin
ORDER BY count() DESC
LIMIT 100;
```

Result:

```text
┌─origin─┬─count()─┬─distance─┬─bar────────────────────────────────────┐
1. │ KORD │ 745007 │ 1546108 │ ███████████████▍ │
2. │ KDFW │ 696702 │ 1358721 │ █████████████▌ │
3. │ KATL │ 667286 │ 1169661 │ ███████████▋ │
4. │ KDEN │ 582709 │ 1287742 │ ████████████▊ │
5. │ KLAX │ 581952 │ 2628393 │ ██████████████████████████▎ │
6. │ KLAS │ 447789 │ 1336967 │ █████████████▎ │
7. │ KPHX │ 428558 │ 1345635 │ █████████████▍ │
8. │ KSEA │ 412592 │ 1757317 │ █████████████████▌ │
9. │ KCLT │ 404612 │ 880355 │ ████████▋ │
10. │ VIDP │ 363074 │ 1445052 │ ██████████████▍ │
11. │ EDDF │ 362643 │ 2263960 │ ██████████████████████▋ │
12. │ KSFO │ 361869 │ 2445732 │ ████████████████████████▍ │
13. │ KJFK │ 349232 │ 2996550 │ █████████████████████████████▊ │
14. │ KMSP │ 346010 │ 1287328 │ ████████████▋ │
15. │ LFPG │ 344748 │ 2206203 │ ██████████████████████ │
16. │ EGLL │ 341370 │ 3216593 │ ████████████████████████████████▏ │
17. │ EHAM │ 340272 │ 2116425 │ █████████████████████▏ │
18. │ KEWR │ 337696 │ 1826545 │ ██████████████████▎ │
19. │ KPHL │ 320762 │ 1291761 │ ████████████▊ │
20. │ OMDB │ 308855 │ 2855706 │ ████████████████████████████▌ │
21. │ UUEE │ 307098 │ 1555122 │ ███████████████▌ │
22. │ KBOS │ 304416 │ 1621675 │ ████████████████▏ │
23. │ LEMD │ 291787 │ 1695097 │ ████████████████▊ │
24. │ YSSY │ 272979 │ 1875298 │ ██████████████████▋ │
25. │ KMIA │ 265121 │ 1923542 │ ███████████████████▏ │
26. │ ZGSZ │ 263497 │ 745086 │ ███████▍ │
27. │ EDDM │ 256691 │ 1361453 │ █████████████▌ │
28. │ WMKK │ 254264 │ 1626688 │ ████████████████▎ │
29. │ CYYZ │ 251192 │ 2175026 │ █████████████████████▋ │
30. │ KLGA │ 248699 │ 1106935 │ ███████████ │
31. │ VHHH │ 248473 │ 3457658 │ ██████████████████████████████████▌ │
32. │ RJTT │ 243477 │ 1272744 │ ████████████▋ │
33. │ KBWI │ 241440 │ 1187060 │ ███████████▋ │
34. │ KIAD │ 239558 │ 1683485 │ ████████████████▋ │
35. │ KIAH │ 234202 │ 1538335 │ ███████████████▍ │
36. │ KFLL │ 223447 │ 1464410 │ ██████████████▋ │
37. │ KDAL │ 212055 │ 1082339 │ ██████████▋ │
38. │ KDCA │ 207883 │ 1013359 │ ██████████▏ │
39. │ LIRF │ 207047 │ 1427965 │ ██████████████▎ │
40. │ PANC │ 206007 │ 2525359 │ █████████████████████████▎ │
41. │ LTFJ │ 205415 │ 860470 │ ████████▌ │
42. │ KDTW │ 204020 │ 1106716 │ ███████████ │
43. │ VABB │ 201679 │ 1300865 │ █████████████ │
44. │ OTHH │ 200797 │ 3759544 │ █████████████████████████████████████▌ │
45. │ KMDW │ 200796 │ 1232551 │ ████████████▎ │
46. │ KSAN │ 198003 │ 1495195 │ ██████████████▊ │
47. │ KPDX │ 197760 │ 1269230 │ ████████████▋ │
48. │ SBGR │ 197624 │ 2041697 │ ████████████████████▍ │
49. │ VOBL │ 189011 │ 1040180 │ ██████████▍ │
50. │ LEBL │ 188956 │ 1283190 │ ████████████▋ │
51. │ YBBN │ 188011 │ 1253405 │ ████████████▌ │
52. │ LSZH │ 187934 │ 1572029 │ ███████████████▋ │
53. │ YMML │ 187643 │ 1870076 │ ██████████████████▋ │
54. │ RCTP │ 184466 │ 2773976 │ ███████████████████████████▋ │
55. │ KSNA │ 180045 │ 778484 │ ███████▋ │
56. │ EGKK │ 176420 │ 1694770 │ ████████████████▊ │
57. │ LOWW │ 176191 │ 1274833 │ ████████████▋ │
58. │ UUDD │ 176099 │ 1368226 │ █████████████▋ │
59. │ RKSI │ 173466 │ 3079026 │ ██████████████████████████████▋ │
60. │ EKCH │ 172128 │ 1229895 │ ████████████▎ │
61. │ KOAK │ 171119 │ 1114447 │ ███████████▏ │
62. │ RPLL │ 170122 │ 1440735 │ ██████████████▍ │
63. │ KRDU │ 167001 │ 830521 │ ████████▎ │
64. │ KAUS │ 164524 │ 1256198 │ ████████████▌ │
65. │ KBNA │ 163242 │ 1022726 │ ██████████▏ │
66. │ KSDF │ 162655 │ 1380867 │ █████████████▋ │
67. │ ENGM │ 160732 │ 910108 │ █████████ │
68. │ LIMC │ 160696 │ 1564620 │ ███████████████▋ │
69. │ KSJC │ 159278 │ 1081125 │ ██████████▋ │
70. │ KSTL │ 157984 │ 1026699 │ ██████████▎ │
71. │ UUWW │ 156811 │ 1261155 │ ████████████▌ │
72. │ KIND │ 153929 │ 987944 │ █████████▊ │
73. │ ESSA │ 153390 │ 1203439 │ ████████████ │
74. │ KMCO │ 153351 │ 1508657 │ ███████████████ │
75. │ KDVT │ 152895 │ 74048 │ ▋ │
76. │ VTBS │ 152645 │ 2255591 │ ██████████████████████▌ │
77. │ CYVR │ 149574 │ 2027413 │ ████████████████████▎ │
78. │ EIDW │ 148723 │ 1503985 │ ███████████████ │
79. │ LFPO │ 143277 │ 1152964 │ ███████████▌ │
80. │ EGSS │ 140830 │ 1348183 │ █████████████▍ │
81. │ KAPA │ 140776 │ 420441 │ ████▏ │
82. │ KHOU │ 138985 │ 1068806 │ ██████████▋ │
83. │ KTPA │ 138033 │ 1338223 │ █████████████▍ │
84. │ KFFZ │ 137333 │ 55397 │ ▌ │
85. │ NZAA │ 136092 │ 1581264 │ ███████████████▋ │
86. │ YPPH │ 133916 │ 1271550 │ ████████████▋ │
87. │ RJBB │ 133522 │ 1805623 │ ██████████████████ │
88. │ EDDL │ 133018 │ 1265919 │ ████████████▋ │
89. │ ULLI │ 130501 │ 1197108 │ ███████████▊ │
90. │ KIWA │ 127195 │ 250876 │ ██▌ │
91. │ KTEB │ 126969 │ 1189414 │ ███████████▊ │
92. │ VOMM │ 125616 │ 1127757 │ ███████████▎ │
93. │ LSGG │ 123998 │ 1049101 │ ██████████▍ │
94. │ LPPT │ 122733 │ 1779187 │ █████████████████▋ │
95. │ WSSS │ 120493 │ 3264122 │ ████████████████████████████████▋ │
96. │ EBBR │ 118539 │ 1579939 │ ███████████████▋ │
97. │ VTBD │ 118107 │ 661627 │ ██████▌ │
98. │ KVNY │ 116326 │ 692960 │ ██████▊ │
99. │ EDDT │ 115122 │ 941740 │ █████████▍ │
100. │ EFHK │ 114860 │ 1629143 │ ████████████████▎ │
└────────┴─────────┴──────────┴────────────────────────────────────────┘
```

### Number of Flights from Three Major Moscow Airports, Weekly {#flights-from-moscow}

Query:

```sql
SELECT
    toMonday(day) AS k,
    count() AS c,
    bar(c, 0, 10000, 100) AS bar
FROM opensky
WHERE origin IN ('UUEE', 'UUDD', 'UUWW')
GROUP BY k
ORDER BY k ASC;
```

Result:

```text
┌──────────k─┬────c─┬─bar──────────────────────────────────────────────────────────────────────────┐
1. │ 2018-12-31 │ 5248 │ ████████████████████████████████████████████████████▍ │
2. │ 2019-01-07 │ 6302 │ ███████████████████████████████████████████████████████████████ │
3. │ 2019-01-14 │ 5701 │ █████████████████████████████████████████████████████████ │
4. │ 2019-01-21 │ 5638 │ ████████████████████████████████████████████████████████▍ │
5. │ 2019-01-28 │ 5731 │ █████████████████████████████████████████████████████████▎ │
6. │ 2019-02-04 │ 5683 │ ████████████████████████████████████████████████████████▋ │
7. │ 2019-02-11 │ 5759 │ █████████████████████████████████████████████████████████▌ │
8. │ 2019-02-18 │ 5736 │ █████████████████████████████████████████████████████████▎ │
9. │ 2019-02-25 │ 5873 │ ██████████████████████████████████████████████████████████▋ │
10. │ 2019-03-04 │ 5965 │ ███████████████████████████████████████████████████████████▋ │
11. │ 2019-03-11 │ 5900 │ ███████████████████████████████████████████████████████████ │
12. │ 2019-03-18 │ 5823 │ ██████████████████████████████████████████████████████████▏ │
13. │ 2019-03-25 │ 5899 │ ██████████████████████████████████████████████████████████▊ │
14. │ 2019-04-01 │ 6043 │ ████████████████████████████████████████████████████████████▍ │
15. │ 2019-04-08 │ 6098 │ ████████████████████████████████████████████████████████████▊ │
16. │ 2019-04-15 │ 6196 │ █████████████████████████████████████████████████████████████▊ │
17. │ 2019-04-22 │ 6486 │ ████████████████████████████████████████████████████████████████▋ │
18. │ 2019-04-29 │ 6682 │ ██████████████████████████████████████████████████████████████████▋ │
19. │ 2019-05-06 │ 6739 │ ███████████████████████████████████████████████████████████████████▍ │
20. │ 2019-05-13 │ 6600 │ ██████████████████████████████████████████████████████████████████ │
21. │ 2019-05-20 │ 6575 │ █████████████████████████████████████████████████████████████████▋ │
22. │ 2019-05-27 │ 6786 │ ███████████████████████████████████████████████████████████████████▋ │
23. │ 2019-06-03 │ 6872 │ ████████████████████████████████████████████████████████████████████▋ │
24. │ 2019-06-10 │ 7045 │ ██████████████████████████████████████████████████████████████████████▍ │
25. │ 2019-06-17 │ 7045 │ ██████████████████████████████████████████████████████████████████████▍ │
26. │ 2019-06-24 │ 6852 │ ████████████████████████████████████████████████████████████████████▌ │
27. │ 2019-07-01 │ 7248 │ ████████████████████████████████████████████████████████████████████████▍ │
28. │ 2019-07-08 │ 7284 │ ████████████████████████████████████████████████████████████████████████▋ │
29. │ 2019-07-15 │ 7142 │ ███████████████████████████████████████████████████████████████████████▍ │
30. │ 2019-07-22 │ 7108 │ ███████████████████████████████████████████████████████████████████████ │
31. │ 2019-07-29 │ 7251 │ ████████████████████████████████████████████████████████████████████████▌ │
32. │ 2019-08-05 │ 7403 │ ██████████████████████████████████████████████████████████████████████████ │
33. │ 2019-08-12 │ 7457 │ ██████████████████████████████████████████████████████████████████████████▌ │
34. │ 2019-08-19 │ 7502 │ ███████████████████████████████████████████████████████████████████████████ │
35. │ 2019-08-26 │ 7540 │ ███████████████████████████████████████████████████████████████████████████▍ │
36. │ 2019-09-02 │ 7237 │ ████████████████████████████████████████████████████████████████████████▎ │
37. │ 2019-09-09 │ 7328 │ █████████████████████████████████████████████████████████████████████████▎ │
38. │ 2019-09-16 │ 5566 │ ███████████████████████████████████████████████████████▋ │
39. │ 2019-09-23 │ 7049 │ ██████████████████████████████████████████████████████████████████████▍ │
40. │ 2019-09-30 │ 6880 │ ████████████████████████████████████████████████████████████████████▋ │
41. │ 2019-10-07 │ 6518 │ █████████████████████████████████████████████████████████████████▏ │
42. │ 2019-10-14 │ 6688 │ ██████████████████████████████████████████████████████████████████▊ │
43. │ 2019-10-21 │ 6667 │ ██████████████████████████████████████████████████████████████████▋ │
44. │ 2019-10-28 │ 6303 │ ███████████████████████████████████████████████████████████████ │
45. │ 2019-11-04 │ 6298 │ ██████████████████████████████████████████████████████████████▊ │
46. │ 2019-11-11 │ 6137 │ █████████████████████████████████████████████████████████████▎ │
47. │ 2019-11-18 │ 6051 │ ████████████████████████████████████████████████████████████▌ │
48. │ 2019-11-25 │ 5820 │ ██████████████████████████████████████████████████████████▏ │
49. │ 2019-12-02 │ 5942 │ ███████████████████████████████████████████████████████████▍ │
50. │ 2019-12-09 │ 4891 │ ████████████████████████████████████████████████▊ │
51. │ 2019-12-16 │ 5682 │ ████████████████████████████████████████████████████████▋ │
52. │ 2019-12-23 │ 6111 │ █████████████████████████████████████████████████████████████ │
53. │ 2019-12-30 │ 5870 │ ██████████████████████████████████████████████████████████▋ │
54. │ 2020-01-06 │ 5953 │ ███████████████████████████████████████████████████████████▌ │
55. │ 2020-01-13 │ 5698 │ ████████████████████████████████████████████████████████▊ │
56. │ 2020-01-20 │ 5339 │ █████████████████████████████████████████████████████▍ │
57. │ 2020-01-27 │ 5566 │ ███████████████████████████████████████████████████████▋ │
58. │ 2020-02-03 │ 5801 │ ██████████████████████████████████████████████████████████ │
59. │ 2020-02-10 │ 5692 │ ████████████████████████████████████████████████████████▊ │
60. │ 2020-02-17 │ 5912 │ ███████████████████████████████████████████████████████████ │
61. │ 2020-02-24 │ 6031 │ ████████████████████████████████████████████████████████████▎ │
62. │ 2020-03-02 │ 6105 │ █████████████████████████████████████████████████████████████ │
63. │ 2020-03-09 │ 5823 │ ██████████████████████████████████████████████████████████▏ │
64. │ 2020-03-16 │ 4659 │ ██████████████████████████████████████████████▌ │
65. │ 2020-03-23 │ 3720 │ █████████████████████████████████████▏ │
66. │ 2020-03-30 │ 1720 │ █████████████████▏ │
67. │ 2020-04-06 │ 849 │ ████████▍ │
68. │ 2020-04-13 │ 710 │ ███████ │
69. │ 2020-04-20 │ 725 │ ███████▏ │
70. │ 2020-04-27 │ 920 │ █████████▏ │
71. │ 2020-05-04 │ 859 │ ████████▌ │
72. │ 2020-05-11 │ 1047 │ ██████████▍ │
73. │ 2020-05-18 │ 1135 │ ███████████▎ │
74. │ 2020-05-25 │ 1266 │ ████████████▋ │
75. │ 2020-06-01 │ 1793 │ █████████████████▊ │
76. │ 2020-06-08 │ 1979 │ ███████████████████▋ │
77. │ 2020-06-15 │ 2297 │ ██████████████████████▊ │
78. │ 2020-06-22 │ 2788 │ ███████████████████████████▊ │
79. │ 2020-06-29 │ 3389 │ █████████████████████████████████▊ │
80. │ 2020-07-06 │ 3545 │ ███████████████████████████████████▍ │
81. │ 2020-07-13 │ 3569 │ ███████████████████████████████████▋ │
82. │ 2020-07-20 │ 3784 │ █████████████████████████████████████▋ │
83. │ 2020-07-27 │ 3960 │ ███████████████████████████████████████▌ │
84. │ 2020-08-03 │ 4323 │ ███████████████████████████████████████████▏ │
85. │ 2020-08-10 │ 4581 │ █████████████████████████████████████████████▋ │
86. │ 2020-08-17 │ 4791 │ ███████████████████████████████████████████████▊ │
87. │ 2020-08-24 │ 4928 │ █████████████████████████████████████████████████▎ │
88. │ 2020-08-31 │ 4687 │ ██████████████████████████████████████████████▋ │
89. │ 2020-09-07 │ 4643 │ ██████████████████████████████████████████████▍ │
90. │ 2020-09-14 │ 4594 │ █████████████████████████████████████████████▊ │
91. │ 2020-09-21 │ 4478 │ ████████████████████████████████████████████▋ │
92. │ 2020-09-28 │ 4382 │ ███████████████████████████████████████████▋ │
93. │ 2020-10-05 │ 4261 │ ██████████████████████████████████████████▌ │
94. │ 2020-10-12 │ 4243 │ ██████████████████████████████████████████▍ │
95. │ 2020-10-19 │ 3941 │ ███████████████████████████████████████▍ │
96. │ 2020-10-26 │ 3616 │ ████████████████████████████████████▏ │
97. │ 2020-11-02 │ 3586 │ ███████████████████████████████████▋ │
98. │ 2020-11-09 │ 3403 │ ██████████████████████████████████ │
99. │ 2020-11-16 │ 3336 │ █████████████████████████████████▎ │
100. │ 2020-11-23 │ 3230 │ ████████████████████████████████▎ │
101. │ 2020-11-30 │ 3183 │ ███████████████████████████████▋ │
102. │ 2020-12-07 │ 3285 │ ████████████████████████████████▋ │
103. │ 2020-12-14 │ 3367 │ █████████████████████████████████▋ │
104. │ 2020-12-21 │ 3748 │ █████████████████████████████████████▍ │
105. │ 2020-12-28 │ 3986 │ ███████████████████████████████████████▋ │
106. │ 2021-01-04 │ 3906 │ ███████████████████████████████████████ │
107. │ 2021-01-11 │ 3425 │ ██████████████████████████████████▎ │
108. │ 2021-01-18 │ 3144 │ ███████████████████████████████▍ │
109. │ 2021-01-25 │ 3115 │ ███████████████████████████████▏ │
110. │ 2021-02-01 │ 3285 │ ████████████████████████████████▋ │
111. │ 2021-02-08 │ 3321 │ █████████████████████████████████▏ │
112. │ 2021-02-15 │ 3475 │ ██████████████████████████████████▋ │
113. │ 2021-02-22 │ 3549 │ ███████████████████████████████████▍ │
114. │ 2021-03-01 │ 3755 │ █████████████████████████████████████▌ │
115. │ 2021-03-08 │ 3080 │ ██████████████████████████████▋ │
116. │ 2021-03-15 │ 3789 │ █████████████████████████████████████▊ │
117. │ 2021-03-22 │ 3804 │ ██████████████████████████████████████ │
118. │ 2021-03-29 │ 4238 │ ██████████████████████████████████████████▍ │
119. │ 2021-04-05 │ 4307 │ ███████████████████████████████████████████ │
120. │ 2021-04-12 │ 4225 │ ██████████████████████████████████████████▎ │
121. │ 2021-04-19 │ 4391 │ ███████████████████████████████████████████▊ │
122. │ 2021-04-26 │ 4868 │ ████████████████████████████████████████████████▋ │
123. │ 2021-05-03 │ 4977 │ █████████████████████████████████████████████████▋ │
124. │ 2021-05-10 │ 5164 │ ███████████████████████████████████████████████████▋ │
125. │ 2021-05-17 │ 4986 │ █████████████████████████████████████████████████▋ │
126. │ 2021-05-24 │ 5024 │ ██████████████████████████████████████████████████▏ │
127. │ 2021-05-31 │ 4824 │ ████████████████████████████████████████████████▏ │
128. │ 2021-06-07 │ 5652 │ ████████████████████████████████████████████████████████▌ │
129. │ 2021-06-14 │ 5613 │ ████████████████████████████████████████████████████████▏ │
130. │ 2021-06-21 │ 6061 │ ████████████████████████████████████████████████████████████▌ │
131. │ 2021-06-28 │ 2554 │ █████████████████████████▌ │
└────────────┴──────┴──────────────────────────────────────────────────────────────────────────────┘
```

### Online Playground {#playground}

You can test other queries against this dataset using the interactive resource [Online Playground](https://play.clickhouse.com/play?user=play). For example, [run this query](https://play.clickhouse.com/play?user=play#U0VMRUNUCiAgICBvcmlnaW4sCiAgICBjb3VudCgpLAogICAgcm91bmQoYXZnKGdlb0Rpc3RhbmNlKGxvbmdpdHVkZV8xLCBsYXRpdHVkZV8xLCBsb25naXR1ZGVfMiwgbGF0aXR1ZGVfMikpKSBBUyBkaXN0YW5jZSwKICAgIGJhcihkaXN0YW5jZSwgMCwgMTAwMDAwMDAsIDEwMCkgQVMgYmFyCkZST00gb3BlbnNreQpXSEVSRSBvcmlnaW4gIT0gJycKR1JPVVAgQlkgb3JpZ2luCk9SREVSIEJZIGNvdW50KCkgREVTQwpMSU1JVCAxMDA=). However, please note that you cannot create temporary tables in the Playground.

@ -1,9 +1,339 @@
---
slug: /zh/getting-started/example-datasets/recipes
sidebar_label: Recipes Dataset
title: "Recipes Dataset"
---

import Content from '@site/docs/en/getting-started/example-datasets/recipes.md';

The RecipeNLG dataset is available for download [here](https://recipenlg.cs.put.poznan.pl/dataset). It contains 2.2 million recipes. The size is slightly less than 1 GB.

<Content />

## Download and Unpack the Dataset

1. Go to the download page [https://recipenlg.cs.put.poznan.pl/dataset](https://recipenlg.cs.put.poznan.pl/dataset).
2. Accept the Terms and Conditions and download the zip file.
3. Unpack the zip file with `unzip` to get the `full_dataset.csv` file.

## Create a Table

Run clickhouse-client and execute the following CREATE query:

``` sql
CREATE TABLE recipes
(
    title String,
    ingredients Array(String),
    directions Array(String),
    link String,
    source LowCardinality(String),
    NER Array(String)
) ENGINE = MergeTree ORDER BY title;
```

## Insert the Data

Run the following command:

``` bash
clickhouse-client --query "
INSERT INTO recipes
SELECT
    title,
    JSONExtract(ingredients, 'Array(String)'),
    JSONExtract(directions, 'Array(String)'),
    link,
    source,
    JSONExtract(NER, 'Array(String)')
FROM input('num UInt32, title String, ingredients String, directions String, link String, source LowCardinality(String), NER String')
FORMAT CSVWithNames
" --input_format_with_names_use_header 0 --format_csv_allow_single_quotes 0 --input_format_allow_errors_num 10 < full_dataset.csv
```

This is an example of how to parse custom CSV, and it involves multiple tweaks.

Explanation:
- The dataset is in CSV format, but it requires some preprocessing on insertion; we use the table function [input](../../sql-reference/table-functions/input.md) to perform the preprocessing;
- The structure of the CSV file is specified in the argument of the table function `input`;
- The field `num` (row number) is unneeded - we parse it from the file but ignore it;
- We use `FORMAT CSVWithNames`, but the header in the CSV does not contain the name of the first field, so the header is ignored via the command-line parameter `--input_format_with_names_use_header 0`;
- The file uses only double quotes to enclose CSV strings; some strings are not enclosed in double quotes, and a single quote must not be parsed as a string enclosure - that is why we also add the `--format_csv_allow_single_quotes 0` parameter;
- Some strings from the CSV cannot be parsed because they start with the `\M/` sequence; the only value that may start with a backslash in CSV is `\N`, which is parsed as SQL NULL. We add the `--input_format_allow_errors_num 10` parameter, allowing up to ten malformed records to be skipped during import;
- The `ingredients`, `directions` and `NER` fields in the dataset are arrays, but they are not represented in the usual array form: they are serialized as JSON strings and then put into the CSV - so on import we read them as String and then convert them to arrays with the [JSONExtract](../../sql-reference/functions/json-functions.md) function (a small sketch of this conversion follows this list).
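As a quick illustration of the last point, here is how `JSONExtract` turns one of those JSON-serialized strings into a proper array (the literal is a made-up sample, not a row from the dataset):

``` sql
-- A JSON array serialized into a string, as found in the CSV fields:
SELECT JSONExtract('["salt", "sugar", "flour"]', 'Array(String)') AS parsed;
```

The result is `['salt','sugar','flour']` with type `Array(String)`, which is exactly what the `INSERT ... SELECT` above stores.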

## Validate the Inserted Data

Check the number of rows.

Query:

``` sql
SELECT count() FROM recipes;
```

Result:

``` text
┌─count()─┐
│ 2231141 │
└─────────┘
```

## Example Queries

### Top Components by the Number of Recipes

In this example we learn how to use the [arrayJoin](../../sql-reference/functions/array-join/) function to expand an array into a set of rows.

Query:

``` sql
SELECT
    arrayJoin(NER) AS k,
    count() AS c
FROM recipes
GROUP BY k
ORDER BY c DESC
LIMIT 50
```

Result:

``` text
┌─k────────────────────┬──────c─┐
│ salt │ 890741 │
│ sugar │ 620027 │
│ butter │ 493823 │
│ flour │ 466110 │
│ eggs │ 401276 │
│ onion │ 372469 │
│ garlic │ 358364 │
│ milk │ 346769 │
│ water │ 326092 │
│ vanilla │ 270381 │
│ olive oil │ 197877 │
│ pepper │ 179305 │
│ brown sugar │ 174447 │
│ tomatoes │ 163933 │
│ egg │ 160507 │
│ baking powder │ 148277 │
│ lemon juice │ 146414 │
│ Salt │ 122557 │
│ cinnamon │ 117927 │
│ sour cream │ 116682 │
│ cream cheese │ 114423 │
│ margarine │ 112742 │
│ celery │ 112676 │
│ baking soda │ 110690 │
│ parsley │ 102151 │
│ chicken │ 101505 │
│ onions │ 98903 │
│ vegetable oil │ 91395 │
│ oil │ 85600 │
│ mayonnaise │ 84822 │
│ pecans │ 79741 │
│ nuts │ 78471 │
│ potatoes │ 75820 │
│ carrots │ 75458 │
│ pineapple │ 74345 │
│ soy sauce │ 70355 │
│ black pepper │ 69064 │
│ thyme │ 68429 │
│ mustard │ 65948 │
│ chicken broth │ 65112 │
│ bacon │ 64956 │
│ honey │ 64626 │
│ oregano │ 64077 │
│ ground beef │ 64068 │
│ unsalted butter │ 63848 │
│ mushrooms │ 61465 │
│ Worcestershire sauce │ 59328 │
│ cornstarch │ 58476 │
│ green pepper │ 58388 │
│ Cheddar cheese │ 58354 │
└──────────────────────┴────────┘

50 rows in set. Elapsed: 0.112 sec. Processed 2.23 million rows, 361.57 MB (19.99 million rows/s., 3.24 GB/s.)
```

### The Most Complex Recipes with Strawberry

``` sql
SELECT
    title,
    length(NER),
    length(directions)
FROM recipes
WHERE has(NER, 'strawberry')
ORDER BY length(directions) DESC
LIMIT 10
```

Result:

``` text
┌─title────────────────────────────────────────────────────────────┬─length(NER)─┬─length(directions)─┐
│ Chocolate-Strawberry-Orange Wedding Cake │ 24 │ 126 │
│ Strawberry Cream Cheese Crumble Tart │ 19 │ 47 │
│ Charlotte-Style Ice Cream │ 11 │ 45 │
│ Sinfully Good a Million Layers Chocolate Layer Cake, With Strawb │ 31 │ 45 │
│ Sweetened Berries With Elderflower Sherbet │ 24 │ 44 │
│ Chocolate-Strawberry Mousse Cake │ 15 │ 42 │
│ Rhubarb Charlotte with Strawberries and Rum │ 20 │ 42 │
│ Chef Joey's Strawberry Vanilla Tart │ 7 │ 37 │
│ Old-Fashioned Ice Cream Sundae Cake │ 17 │ 37 │
│ Watermelon Cake │ 16 │ 36 │
└──────────────────────────────────────────────────────────────────┴─────────────┴────────────────────┘

10 rows in set. Elapsed: 0.215 sec. Processed 2.23 million rows, 1.48 GB (10.35 million rows/s., 6.86 GB/s.)
```

In this example, we use the [has](../../sql-reference/functions/array-functions/#hasarr-elem) function to filter by array elements and sort by the number of directions.
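`has` is a simple membership test; a minimal sketch with made-up values:

``` sql
-- has(arr, elem) returns 1 if elem is present in arr, otherwise 0.
SELECT has(['strawberry', 'sugar', 'flour'], 'strawberry') AS found;
```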

There is a wedding cake that requires the whole 126 steps to produce! Show the directions:

Query:

``` sql
SELECT arrayJoin(directions)
FROM recipes
WHERE title = 'Chocolate-Strawberry-Orange Wedding Cake'
```

Result:

``` text
┌─arrayJoin(directions)───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ Position 1 rack in center and 1 rack in bottom third of oven and preheat to 350F. │
│ Butter one 5-inch-diameter cake pan with 2-inch-high sides, one 8-inch-diameter cake pan with 2-inch-high sides and one 12-inch-diameter cake pan with 2-inch-high sides. │
│ Dust pans with flour; line bottoms with parchment. │
│ Combine 1/3 cup orange juice and 2 ounces unsweetened chocolate in heavy small saucepan. │
│ Stir mixture over medium-low heat until chocolate melts. │
│ Remove from heat. │
│ Gradually mix in 1 2/3 cups orange juice. │
│ Sift 3 cups flour, 2/3 cup cocoa, 2 teaspoons baking soda, 1 teaspoon salt and 1/2 teaspoon baking powder into medium bowl. │
│ using electric mixer, beat 1 cup (2 sticks) butter and 3 cups sugar in large bowl until blended (mixture will look grainy). │
│ Add 4 eggs, 1 at a time, beating to blend after each. │
│ Beat in 1 tablespoon orange peel and 1 tablespoon vanilla extract. │
│ Add dry ingredients alternately with orange juice mixture in 3 additions each, beating well after each addition. │
│ Mix in 1 cup chocolate chips. │
│ Transfer 1 cup plus 2 tablespoons batter to prepared 5-inch pan, 3 cups batter to prepared 8-inch pan and remaining batter (about 6 cups) to 12-inch pan. │
│ Place 5-inch and 8-inch pans on center rack of oven. │
│ Place 12-inch pan on lower rack of oven. │
│ Bake cakes until tester inserted into center comes out clean, about 35 minutes. │
│ Transfer cakes in pans to racks and cool completely. │
│ Mark 4-inch diameter circle on one 6-inch-diameter cardboard cake round. │
│ Cut out marked circle. │
│ Mark 7-inch-diameter circle on one 8-inch-diameter cardboard cake round. │
│ Cut out marked circle. │
│ Mark 11-inch-diameter circle on one 12-inch-diameter cardboard cake round. │
│ Cut out marked circle. │
│ Cut around sides of 5-inch-cake to loosen. │
│ Place 4-inch cardboard over pan. │
│ Hold cardboard and pan together; turn cake out onto cardboard. │
│ Peel off parchment.Wrap cakes on its cardboard in foil. │
│ Repeat turning out, peeling off parchment and wrapping cakes in foil, using 7-inch cardboard for 8-inch cake and 11-inch cardboard for 12-inch cake. │
│ Using remaining ingredients, make 1 more batch of cake batter and bake 3 more cake layers as described above. │
│ Cool cakes in pans. │
│ Cover cakes in pans tightly with foil. │
│ (Can be prepared ahead. │
│ Let stand at room temperature up to 1 day or double-wrap all cake layers and freeze up to 1 week. │
│ Bring cake layers to room temperature before using.) │
│ Place first 12-inch cake on its cardboard on work surface. │
│ Spread 2 3/4 cups ganache over top of cake and all the way to edge. │
│ Spread 2/3 cup jam over ganache, leaving 1/2-inch chocolate border at edge. │
│ Drop 1 3/4 cups white chocolate frosting by spoonfuls over jam. │
│ Gently spread frosting over jam, leaving 1/2-inch chocolate border at edge. │
│ Rub some cocoa powder over second 12-inch cardboard. │
│ Cut around sides of second 12-inch cake to loosen. │
│ Place cardboard, cocoa side down, over pan. │
│ Turn cake out onto cardboard. │
│ Peel off parchment. │
│ Carefully slide cake off cardboard and onto filling on first 12-inch cake. │
│ Refrigerate. │
│ Place first 8-inch cake on its cardboard on work surface. │
│ Spread 1 cup ganache over top all the way to edge. │
│ Spread 1/4 cup jam over, leaving 1/2-inch chocolate border at edge. │
│ Drop 1 cup white chocolate frosting by spoonfuls over jam. │
│ Gently spread frosting over jam, leaving 1/2-inch chocolate border at edge. │
│ Rub some cocoa over second 8-inch cardboard. │
│ Cut around sides of second 8-inch cake to loosen. │
│ Place cardboard, cocoa side down, over pan. │
│ Turn cake out onto cardboard. │
│ Peel off parchment. │
│ Slide cake off cardboard and onto filling on first 8-inch cake. │
│ Refrigerate. │
│ Place first 5-inch cake on its cardboard on work surface. │
│ Spread 1/2 cup ganache over top of cake and all the way to edge. │
│ Spread 2 tablespoons jam over, leaving 1/2-inch chocolate border at edge. │
│ Drop 1/3 cup white chocolate frosting by spoonfuls over jam. │
│ Gently spread frosting over jam, leaving 1/2-inch chocolate border at edge. │
│ Rub cocoa over second 6-inch cardboard. │
│ Cut around sides of second 5-inch cake to loosen. │
│ Place cardboard, cocoa side down, over pan. │
│ Turn cake out onto cardboard. │
│ Peel off parchment. │
│ Slide cake off cardboard and onto filling on first 5-inch cake. │
│ Chill all cakes 1 hour to set filling. │
│ Place 12-inch tiered cake on its cardboard on revolving cake stand. │
│ Spread 2 2/3 cups frosting over top and sides of cake as a first coat. │
│ Refrigerate cake. │
│ Place 8-inch tiered cake on its cardboard on cake stand. │
│ Spread 1 1/4 cups frosting over top and sides of cake as a first coat. │
│ Refrigerate cake. │
│ Place 5-inch tiered cake on its cardboard on cake stand. │
│ Spread 3/4 cup frosting over top and sides of cake as a first coat. │
│ Refrigerate all cakes until first coats of frosting set, about 1 hour. │
│ (Cakes can be made to this point up to 1 day ahead; cover and keep refrigerate.) │
│ Prepare second batch of frosting, using remaining frosting ingredients and following directions for first batch. │
│ Spoon 2 cups frosting into pastry bag fitted with small star tip. │
│ Place 12-inch cake on its cardboard on large flat platter. │
│ Place platter on cake stand. │
│ Using icing spatula, spread 2 1/2 cups frosting over top and sides of cake; smooth top. │
│ Using filled pastry bag, pipe decorative border around top edge of cake. │
│ Refrigerate cake on platter. │
│ Place 8-inch cake on its cardboard on cake stand. │
│ Using icing spatula, spread 1 1/2 cups frosting over top and sides of cake; smooth top. │
│ Using pastry bag, pipe decorative border around top edge of cake. │
│ Refrigerate cake on its cardboard. │
│ Place 5-inch cake on its cardboard on cake stand. │
│ Using icing spatula, spread 3/4 cup frosting over top and sides of cake; smooth top. │
│ Using pastry bag, pipe decorative border around top edge of cake, spooning more frosting into bag if necessary. │
│ Refrigerate cake on its cardboard. │
│ Keep all cakes refrigerated until frosting sets, about 2 hours. │
│ (Can be prepared 2 days ahead. │
│ Cover loosely; keep refrigerated.) │
│ Place 12-inch cake on platter on work surface. │
│ Press 1 wooden dowel straight down into and completely through center of cake. │
│ Mark dowel 1/4 inch above top of frosting. │
│ Remove dowel and cut with serrated knife at marked point. │
│ Cut 4 more dowels to same length. │
│ Press 1 cut dowel back into center of cake. │
│ Press remaining 4 cut dowels into cake, positioning 3 1/2 inches inward from cake edges and spacing evenly. │
│ Place 8-inch cake on its cardboard on work surface. │
│ Press 1 dowel straight down into and completely through center of cake. │
│ Mark dowel 1/4 inch above top of frosting. │
│ Remove dowel and cut with serrated knife at marked point. │
│ Cut 3 more dowels to same length. │
│ Press 1 cut dowel back into center of cake. │
│ Press remaining 3 cut dowels into cake, positioning 2 1/2 inches inward from edges and spacing evenly. │
│ Using large metal spatula as aid, place 8-inch cake on its cardboard atop dowels in 12-inch cake, centering carefully. │
│ Gently place 5-inch cake on its cardboard atop dowels in 8-inch cake, centering carefully. │
│ Using citrus stripper, cut long strips of orange peel from oranges. │
│ Cut strips into long segments. │
│ To make orange peel coils, wrap peel segment around handle of wooden spoon; gently slide peel off handle so that peel keeps coiled shape. │
│ Garnish cake with orange peel coils, ivy or mint sprigs, and some berries. │
│ (Assembled cake can be made up to 8 hours ahead. │
│ Let stand at cool room temperature.) │
│ Remove top and middle cake tiers. │
│ Remove dowels from cakes. │
│ Cut top and middle cakes into slices. │
│ To cut 12-inch cake: Starting 3 inches inward from edge and inserting knife straight down, cut through from top to bottom to make 6-inch-diameter circle in center of cake. │
│ Cut outer portion of cake into slices; cut inner portion into slices and serve with strawberries. │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘

126 rows in set. Elapsed: 0.011 sec. Processed 8.19 thousand rows, 5.34 MB (737.75 thousand rows/s., 480.59 MB/s.)
```

### Online Playground

The dataset is also available in the [Online Playground](https://play.clickhouse.com/play?user=play#U0VMRUNUCiAgICBhcnJheUpvaW4oTkVSKSBBUyBrLAogICAgY291bnQoKSBBUyBjCkZST00gcmVjaXBlcwpHUk9VUCBCWSBrCk9SREVSIEJZIGMgREVTQwpMSU1JVCA1MA==).

[Original article](https://clickhouse.com/docs/en/getting-started/example-datasets/recipes/)

@ -1,10 +1,450 @@
---
slug: /zh/getting-started/example-datasets/uk-price-paid
sidebar_label: UK Property Price Paid
sidebar_position: 1
title: "UK Property Price Paid"
---

import Content from '@site/docs/en/getting-started/example-datasets/uk-price-paid.md';

The dataset contains data about prices paid for real estate property in England and Wales since 1995. The uncompressed size is about 4 GiB, and it takes about 278 MiB in ClickHouse.

<Content />

Source: https://www.gov.uk/government/statistical-data-sets/price-paid-data-downloads
Description of the fields: https://www.gov.uk/guidance/about-the-price-data

Contains HM Land Registry data © Crown copyright and database right 2021. This dataset must be used under the Open Government License v3.0.

## Create the Table {#create-table}

```sql
CREATE TABLE uk_price_paid
(
    price UInt32,
    date Date,
    postcode1 LowCardinality(String),
    postcode2 LowCardinality(String),
    type Enum8('terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4, 'other' = 0),
    is_new UInt8,
    duration Enum8('freehold' = 1, 'leasehold' = 2, 'unknown' = 0),
    addr1 String,
    addr2 String,
    street LowCardinality(String),
    locality LowCardinality(String),
    town LowCardinality(String),
    district LowCardinality(String),
    county LowCardinality(String)
)
ENGINE = MergeTree
ORDER BY (postcode1, postcode2, addr1, addr2);
```

## Preprocess and Insert the Data {#preprocess-import-data}

We will use the `url` function to stream the data into ClickHouse. Some of the incoming data needs preprocessing first, which includes:

- splitting `postcode` into two different columns - `postcode1` and `postcode2` - since this is better for storage and queries;
- converting the `time` field to `Date`, since it only contains 00:00 time;
- ignoring the [UUID](/docs/zh/sql-reference/data-types/uuid.md) field, because we do not need it for analysis;
- converting the `type` and `duration` fields to more readable `Enum` fields with the [transform](/docs/zh/sql-reference/functions/other-functions.md#transform) function;
- converting the `is_new` field from a single-character string (`Y`/`N`) to a [UInt8](/docs/zh/sql-reference/data-types/int-uint.md#uint8-uint16-uint32-uint64-uint256-int8-int16-int32-int64-int128-int256) field with 0 or 1;
- dropping the last two columns, since they all have the same value (which is 0).
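Before looking at the full statement below, here is a tiny sketch of the two helper functions doing most of this work (the sample values are made up):

```sql
-- splitByChar splits the postcode on the space; transform maps the
-- single-letter codes to readable names (unmatched values pass through).
SELECT
    splitByChar(' ', 'SW1A 1AA') AS p,   -- ['SW1A', '1AA']
    p[1] AS postcode1,
    p[2] AS postcode2,
    transform('T', ['T', 'S', 'D', 'F', 'O'],
        ['terraced', 'semi-detached', 'detached', 'flat', 'other']) AS type,
    'Y' = 'Y' AS is_new;                 -- string comparison yields 0/1
```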

The `url` function streams the data from a web server into a ClickHouse table. The following command inserts 5 million rows into the `uk_price_paid` table:

```sql
INSERT INTO uk_price_paid
WITH
   splitByChar(' ', postcode) AS p
SELECT
    toUInt32(price_string) AS price,
    parseDateTimeBestEffortUS(time) AS date,
    p[1] AS postcode1,
    p[2] AS postcode2,
    transform(a, ['T', 'S', 'D', 'F', 'O'], ['terraced', 'semi-detached', 'detached', 'flat', 'other']) AS type,
    b = 'Y' AS is_new,
    transform(c, ['F', 'L', 'U'], ['freehold', 'leasehold', 'unknown']) AS duration,
    addr1,
    addr2,
    street,
    locality,
    town,
    district,
    county
FROM url(
    'http://prod.publicdata.landregistry.gov.uk.s3-website-eu-west-1.amazonaws.com/pp-complete.csv',
    'CSV',
    'uuid_string String,
    price_string String,
    time String,
    postcode String,
    a String,
    b String,
    c String,
    addr1 String,
    addr2 String,
    street String,
    locality String,
    town String,
    district String,
    county String,
    d String,
    e String'
) SETTINGS max_http_get_redirects=10;
```

Wait a minute or two for the data to insert; the exact time depends on network speed.

## Validate the Data {#validate-data}

Let's verify that it worked by checking how many rows were inserted:

```sql
SELECT count()
FROM uk_price_paid
```

At the time this query was run, the dataset had 27,450,499 rows. Let's see how much storage the table uses in ClickHouse:

```sql
SELECT formatReadableSize(total_bytes)
FROM system.tables
WHERE name = 'uk_price_paid'
```

Notice that the size of the table is just 221.43 MiB!

## Run Some Queries {#run-queries}

Let's run some queries to analyze the data. The queries below use the `bar` function to render values as text charts; a quick sketch of how it behaves follows.
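A minimal demonstration of `bar` (the values are made up):

```sql
-- bar(x, min, max, width) draws x, scaled from [min, max], as a bar of
-- at most `width` block characters.
SELECT number * 10 AS x, bar(x, 0, 100, 20) AS chart
FROM numbers(11);
```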
|
||||
|
||||
### 查询 1. 每年平均价格 {#average-price}
|
||||
|
||||
```sql
|
||||
SELECT
|
||||
toYear(date) AS year,
|
||||
round(avg(price)) AS price,
|
||||
bar(price, 0, 1000000, 80
|
||||
)
|
||||
FROM uk_price_paid
|
||||
GROUP BY year
|
||||
ORDER BY year
|
||||
```
|
||||
|
||||
结果如下所示:
|
||||
|
||||
```response
|
||||
┌─year─┬──price─┬─bar(round(avg(price)), 0, 1000000, 80)─┐
|
||||
│ 1995 │ 67934 │ █████▍ │
|
||||
│ 1996 │ 71508 │ █████▋ │
|
||||
│ 1997 │ 78536 │ ██████▎ │
|
||||
│ 1998 │ 85441 │ ██████▋ │
|
||||
│ 1999 │ 96038 │ ███████▋ │
|
||||
│ 2000 │ 107487 │ ████████▌ │
|
||||
│ 2001 │ 118888 │ █████████▌ │
|
||||
│ 2002 │ 137948 │ ███████████ │
|
||||
│ 2003 │ 155893 │ ████████████▍ │
|
||||
│ 2004 │ 178888 │ ██████████████▎ │
|
||||
│ 2005 │ 189359 │ ███████████████▏ │
|
||||
│ 2006 │ 203532 │ ████████████████▎ │
|
||||
│ 2007 │ 219375 │ █████████████████▌ │
|
||||
│ 2008 │ 217056 │ █████████████████▎ │
|
||||
│ 2009 │ 213419 │ █████████████████ │
|
||||
│ 2010 │ 236110 │ ██████████████████▊ │
|
||||
│ 2011 │ 232805 │ ██████████████████▌ │
|
||||
│ 2012 │ 238381 │ ███████████████████ │
|
||||
│ 2013 │ 256927 │ ████████████████████▌ │
|
||||
│ 2014 │ 280008 │ ██████████████████████▍ │
|
||||
│ 2015 │ 297263 │ ███████████████████████▋ │
|
||||
│ 2016 │ 313518 │ █████████████████████████ │
|
||||
│ 2017 │ 346371 │ ███████████████████████████▋ │
|
||||
│ 2018 │ 350556 │ ████████████████████████████ │
|
||||
│ 2019 │ 352184 │ ████████████████████████████▏ │
|
||||
│ 2020 │ 375808 │ ██████████████████████████████ │
|
||||
│ 2021 │ 381105 │ ██████████████████████████████▍ │
|
||||
│ 2022 │ 362572 │ █████████████████████████████ │
|
||||
└──────┴────────┴────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
### 查询 2. 伦敦每年的平均价格 {#average-price-london}
|
||||
|
||||
```sql
|
||||
SELECT
|
||||
toYear(date) AS year,
|
||||
round(avg(price)) AS price,
|
||||
bar(price, 0, 2000000, 100
|
||||
)
|
||||
FROM uk_price_paid
|
||||
WHERE town = 'LONDON'
|
||||
GROUP BY year
|
||||
ORDER BY year
|
||||
```
|
||||
|
||||
结果如下所示:
|
||||
|
||||
```response
┌─year─┬───price─┬─bar(round(avg(price)), 0, 2000000, 100)───────────────┐
│ 1995 │  109110 │ █████▍ │
│ 1996 │  118659 │ █████▊ │
│ 1997 │  136526 │ ██████▋ │
│ 1998 │  153002 │ ███████▋ │
│ 1999 │  180633 │ █████████ │
│ 2000 │  215849 │ ██████████▋ │
│ 2001 │  232987 │ ███████████▋ │
│ 2002 │  263668 │ █████████████▏ │
│ 2003 │  278424 │ █████████████▊ │
│ 2004 │  304664 │ ███████████████▏ │
│ 2005 │  322887 │ ████████████████▏ │
│ 2006 │  356195 │ █████████████████▋ │
│ 2007 │  404062 │ ████████████████████▏ │
│ 2008 │  420741 │ █████████████████████ │
│ 2009 │  427754 │ █████████████████████▍ │
│ 2010 │  480322 │ ████████████████████████ │
│ 2011 │  496278 │ ████████████████████████▋ │
│ 2012 │  519482 │ █████████████████████████▊ │
│ 2013 │  616195 │ ██████████████████████████████▋ │
│ 2014 │  724121 │ ████████████████████████████████████▏ │
│ 2015 │  792101 │ ███████████████████████████████████████▌ │
│ 2016 │  843589 │ ██████████████████████████████████████████▏ │
│ 2017 │  983523 │ █████████████████████████████████████████████████▏ │
│ 2018 │ 1016753 │ ██████████████████████████████████████████████████▋ │
│ 2019 │ 1041673 │ ████████████████████████████████████████████████████ │
│ 2020 │ 1060027 │ █████████████████████████████████████████████████████ │
│ 2021 │  958249 │ ███████████████████████████████████████████████▊ │
│ 2022 │  902596 │ █████████████████████████████████████████████▏ │
└──────┴─────────┴───────────────────────────────────────────────────────┘
```

Something happened to home prices in 2020! But that is hardly a surprise...

### Query 3. The Most Expensive Neighborhoods {#most-expensive-neighborhoods}

```sql
SELECT
    town,
    district,
    count() AS c,
    round(avg(price)) AS price,
    bar(price, 0, 5000000, 100)
FROM uk_price_paid
WHERE date >= '2020-01-01'
GROUP BY
    town,
    district
HAVING c >= 100
ORDER BY price DESC
LIMIT 100
```

The result looks like this:

```response
┌─town─────────────────┬─district───────────────┬─────c─┬───price─┬─bar(round(avg(price)), 0, 5000000, 100)─────────────────────────┐
│ LONDON               │ CITY OF LONDON         │   578 │ 3149590 │ ██████████████████████████████████████████████████████████████▊ │
│ LONDON               │ CITY OF WESTMINSTER    │  7083 │ 2903794 │ ██████████████████████████████████████████████████████████ │
│ LONDON               │ KENSINGTON AND CHELSEA │  4986 │ 2333782 │ ██████████████████████████████████████████████▋ │
│ LEATHERHEAD          │ ELMBRIDGE              │   203 │ 2071595 │ █████████████████████████████████████████▍ │
│ VIRGINIA WATER       │ RUNNYMEDE              │   308 │ 1939465 │ ██████████████████████████████████████▋ │
│ LONDON               │ CAMDEN                 │  5750 │ 1673687 │ █████████████████████████████████▍ │
│ WINDLESHAM           │ SURREY HEATH           │   182 │ 1428358 │ ████████████████████████████▌ │
│ NORTHWOOD            │ THREE RIVERS           │   112 │ 1404170 │ ████████████████████████████ │
│ BARNET               │ ENFIELD                │   259 │ 1338299 │ ██████████████████████████▋ │
│ LONDON               │ ISLINGTON              │  5504 │ 1275520 │ █████████████████████████▌ │
│ LONDON               │ RICHMOND UPON THAMES   │  1345 │ 1261935 │ █████████████████████████▏ │
│ COBHAM               │ ELMBRIDGE              │   727 │ 1251403 │ █████████████████████████ │
│ BEACONSFIELD         │ BUCKINGHAMSHIRE        │   680 │ 1199970 │ ███████████████████████▊ │
│ LONDON               │ TOWER HAMLETS          │ 10012 │ 1157827 │ ███████████████████████▏ │
│ LONDON               │ HOUNSLOW               │  1278 │ 1144389 │ ██████████████████████▊ │
│ BURFORD              │ WEST OXFORDSHIRE       │   182 │ 1139393 │ ██████████████████████▋ │
│ RICHMOND             │ RICHMOND UPON THAMES   │  1649 │ 1130076 │ ██████████████████████▌ │
│ KINGSTON UPON THAMES │ RICHMOND UPON THAMES   │   147 │ 1126111 │ ██████████████████████▌ │
│ ASCOT                │ WINDSOR AND MAIDENHEAD │   773 │ 1106109 │ ██████████████████████ │
│ LONDON               │ HAMMERSMITH AND FULHAM │  6162 │ 1056198 │ █████████████████████ │
│ RADLETT              │ HERTSMERE              │   513 │ 1045758 │ ████████████████████▊ │
│ LEATHERHEAD          │ GUILDFORD              │   354 │ 1045175 │ ████████████████████▊ │
│ WEYBRIDGE            │ ELMBRIDGE              │  1275 │ 1036702 │ ████████████████████▋ │
│ FARNHAM              │ EAST HAMPSHIRE         │   107 │ 1033682 │ ████████████████████▋ │
│ ESHER                │ ELMBRIDGE              │   915 │ 1032753 │ ████████████████████▋ │
│ FARNHAM              │ HART                   │   102 │ 1002692 │ ████████████████████ │
│ GERRARDS CROSS       │ BUCKINGHAMSHIRE        │   845 │  983639 │ ███████████████████▋ │
│ CHALFONT ST GILES    │ BUCKINGHAMSHIRE        │   286 │  973993 │ ███████████████████▍ │
│ SALCOMBE             │ SOUTH HAMS             │   215 │  965724 │ ███████████████████▎ │
│ SURBITON             │ ELMBRIDGE              │   181 │  960346 │ ███████████████████▏ │
│ BROCKENHURST         │ NEW FOREST             │   226 │  951278 │ ███████████████████ │
│ SUTTON COLDFIELD     │ LICHFIELD              │   110 │  930757 │ ██████████████████▌ │
│ EAST MOLESEY         │ ELMBRIDGE              │   372 │  927026 │ ██████████████████▌ │
│ LLANGOLLEN           │ WREXHAM                │   127 │  925681 │ ██████████████████▌ │
│ OXFORD               │ SOUTH OXFORDSHIRE      │   638 │  923830 │ ██████████████████▍ │
│ LONDON               │ MERTON                 │  4383 │  923194 │ ██████████████████▍ │
│ GUILDFORD            │ WAVERLEY               │   261 │  905733 │ ██████████████████ │
│ TEDDINGTON           │ RICHMOND UPON THAMES   │  1147 │  894856 │ █████████████████▊ │
│ HARPENDEN            │ ST ALBANS              │  1271 │  893079 │ █████████████████▋ │
│ HENLEY-ON-THAMES     │ SOUTH OXFORDSHIRE      │  1042 │  887557 │ █████████████████▋ │
│ POTTERS BAR          │ WELWYN HATFIELD        │   314 │  863037 │ █████████████████▎ │
│ LONDON               │ WANDSWORTH             │ 13210 │  857318 │ █████████████████▏ │
│ BILLINGSHURST        │ CHICHESTER             │   255 │  856508 │ █████████████████▏ │
│ LONDON               │ SOUTHWARK              │  7742 │  843145 │ ████████████████▋ │
│ LONDON               │ HACKNEY                │  6656 │  839716 │ ████████████████▋ │
│ LUTTERWORTH          │ HARBOROUGH             │  1096 │  836546 │ ████████████████▋ │
│ KINGSTON UPON THAMES │ KINGSTON UPON THAMES   │  1846 │  828990 │ ████████████████▌ │
│ LONDON               │ EALING                 │  5583 │  820135 │ ████████████████▍ │
│ INGATESTONE          │ CHELMSFORD             │   120 │  815379 │ ████████████████▎ │
│ MARLOW               │ BUCKINGHAMSHIRE        │   718 │  809943 │ ████████████████▏ │
│ EAST GRINSTEAD       │ TANDRIDGE              │   105 │  809461 │ ████████████████▏ │
│ CHIGWELL             │ EPPING FOREST          │   484 │  809338 │ ████████████████▏ │
│ EGHAM                │ RUNNYMEDE              │   989 │  807858 │ ████████████████▏ │
│ HASLEMERE            │ CHICHESTER             │   223 │  804173 │ ████████████████ │
│ PETWORTH             │ CHICHESTER             │   288 │  803206 │ ████████████████ │
│ TWICKENHAM           │ RICHMOND UPON THAMES   │  2194 │  802616 │ ████████████████ │
│ WEMBLEY              │ BRENT                  │  1698 │  801733 │ ████████████████ │
│ HINDHEAD             │ WAVERLEY               │   233 │  801482 │ ████████████████ │
│ LONDON               │ BARNET                 │  8083 │  792066 │ ███████████████▋ │
│ WOKING               │ GUILDFORD              │   343 │  789360 │ ███████████████▋ │
│ STOCKBRIDGE          │ TEST VALLEY            │   318 │  777909 │ ███████████████▌ │
│ BERKHAMSTED          │ DACORUM                │  1049 │  776138 │ ███████████████▌ │
│ MAIDENHEAD           │ BUCKINGHAMSHIRE        │   236 │  775572 │ ███████████████▌ │
│ SOLIHULL             │ STRATFORD-ON-AVON      │   142 │  770727 │ ███████████████▍ │
│ GREAT MISSENDEN      │ BUCKINGHAMSHIRE        │   431 │  764493 │ ███████████████▎ │
│ TADWORTH             │ REIGATE AND BANSTEAD   │   920 │  757511 │ ███████████████▏ │
│ LONDON               │ BRENT                  │  4124 │  757194 │ ███████████████▏ │
│ THAMES DITTON        │ ELMBRIDGE              │   470 │  750828 │ ███████████████ │
│ LONDON               │ LAMBETH                │ 10431 │  750532 │ ███████████████ │
│ RICKMANSWORTH        │ THREE RIVERS           │  1500 │  747029 │ ██████████████▊ │
│ KINGS LANGLEY        │ DACORUM                │   281 │  746536 │ ██████████████▊ │
│ HARLOW               │ EPPING FOREST          │   172 │  739423 │ ██████████████▋ │
│ TONBRIDGE            │ SEVENOAKS              │   103 │  738740 │ ██████████████▋ │
│ BELVEDERE            │ BEXLEY                 │   686 │  736385 │ ██████████████▋ │
│ CRANBROOK            │ TUNBRIDGE WELLS        │   769 │  734328 │ ██████████████▋ │
│ SOLIHULL             │ WARWICK                │   116 │  733286 │ ██████████████▋ │
│ ALDERLEY EDGE        │ CHESHIRE EAST          │   357 │  732882 │ ██████████████▋ │
│ WELWYN               │ WELWYN HATFIELD        │   404 │  730281 │ ██████████████▌ │
│ CHISLEHURST          │ BROMLEY                │   870 │  730279 │ ██████████████▌ │
│ LONDON               │ HARINGEY               │  6488 │  726715 │ ██████████████▌ │
│ AMERSHAM             │ BUCKINGHAMSHIRE        │   965 │  725426 │ ██████████████▌ │
│ SEVENOAKS            │ SEVENOAKS              │  2183 │  725102 │ ██████████████▌ │
│ BOURNE END           │ BUCKINGHAMSHIRE        │   269 │  724595 │ ██████████████▍ │
│ NORTHWOOD            │ HILLINGDON             │   568 │  722436 │ ██████████████▍ │
│ PURFLEET             │ THURROCK               │   143 │  722205 │ ██████████████▍ │
│ SLOUGH               │ BUCKINGHAMSHIRE        │   832 │  721529 │ ██████████████▍ │
│ INGATESTONE          │ BRENTWOOD              │   301 │  718292 │ ██████████████▎ │
│ EPSOM                │ REIGATE AND BANSTEAD   │   315 │  709264 │ ██████████████▏ │
│ ASHTEAD              │ MOLE VALLEY            │   524 │  708646 │ ██████████████▏ │
│ BETCHWORTH           │ MOLE VALLEY            │   155 │  708525 │ ██████████████▏ │
│ OXTED                │ TANDRIDGE              │   645 │  706946 │ ██████████████▏ │
│ READING              │ SOUTH OXFORDSHIRE      │   593 │  705466 │ ██████████████ │
│ FELTHAM              │ HOUNSLOW               │  1536 │  703815 │ ██████████████ │
│ TUNBRIDGE WELLS      │ WEALDEN                │   207 │  703296 │ ██████████████ │
│ LEWES                │ WEALDEN                │   116 │  701349 │ ██████████████ │
│ OXFORD               │ OXFORD                 │  3656 │  700813 │ ██████████████ │
│ MAYFIELD             │ WEALDEN                │   177 │  698158 │ █████████████▊ │
│ PINNER               │ HARROW                 │   997 │  697876 │ █████████████▊ │
│ LECHLADE             │ COTSWOLD               │   155 │  696262 │ █████████████▊ │
│ WALTON-ON-THAMES     │ ELMBRIDGE              │  1850 │  690102 │ █████████████▋ │
└──────────────────────┴────────────────────────┴───────┴─────────┴─────────────────────────────────────────────────────────────────┘
```

## Speed Up Queries Using Projections {#speedup-with-projections}

[Projections](/docs/zh/sql-reference/statements/alter/projection.mdx) allow you to speed up queries by storing pre-aggregated data in whatever arrangement you choose. In this example, we create a projection that keeps the average price, total price, and count of properties grouped by year, district, and town. At query time, ClickHouse uses the projection whenever it thinks doing so will improve performance (when to use it is up to ClickHouse).

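Once the projection exists and has been used, you can double-check which queries benefited from it. The following is a sketch; it assumes the query log is enabled (it is by default) and relies on the `projections` column of `system.query_log`:

```sql
-- Recent queries that were served, at least in part, from a projection.
SELECT query, projections
FROM system.query_log
WHERE type = 'QueryFinish' AND notEmpty(projections)
ORDER BY event_time DESC
LIMIT 10
```
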
### Build a Projection {#build-projection}

Let's create an aggregate projection over the dimensions `toYear(date)`, `district`, and `town`:

```sql
ALTER TABLE uk_price_paid
    ADD PROJECTION projection_by_year_district_town
    (
        SELECT
            toYear(date),
            district,
            town,
            avg(price),
            sum(price),
            count()
        GROUP BY
            toYear(date),
            district,
            town
    )
```

Populate the projection for existing data. (Without a materialize operation, ClickHouse creates the projection only for newly inserted data):

```sql
ALTER TABLE uk_price_paid
    MATERIALIZE PROJECTION projection_by_year_district_town
SETTINGS mutations_sync = 1
```

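With `mutations_sync = 1` the statement blocks until the backfill completes. If you materialize asynchronously instead (by dropping that setting), a sketch like the following, using the `system.mutations` table, shows whether the rewrite is still running:

```sql
-- An empty result means the projection backfill has finished.
SELECT mutation_id, command, is_done
FROM system.mutations
WHERE table = 'uk_price_paid' AND is_done = 0
```
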
## Test Performance {#test-performance}

Let's run the same 3 queries again:

### Query 1. Average Price per Year {#average-price-projections}

```sql
SELECT
    toYear(date) AS year,
    round(avg(price)) AS price,
    bar(price, 0, 1000000, 80)
FROM uk_price_paid
GROUP BY year
ORDER BY year ASC
```

The result is the same, but the performance is better!

```response
No projection:   28 rows in set. Elapsed: 1.775 sec. Processed 27.45 million rows, 164.70 MB (15.47 million rows/s., 92.79 MB/s.)
With projection: 28 rows in set. Elapsed: 0.665 sec. Processed 87.51 thousand rows, 3.21 MB (131.51 thousand rows/s., 4.82 MB/s.)
```

### Query 2. Average Price per Year in London {#average-price-london-projections}

```sql
SELECT
    toYear(date) AS year,
    round(avg(price)) AS price,
    bar(price, 0, 2000000, 100)
FROM uk_price_paid
WHERE town = 'LONDON'
GROUP BY year
ORDER BY year ASC
```

Same result, but notice the improvement in query performance:

```response
No projection:   28 rows in set. Elapsed: 0.720 sec. Processed 27.45 million rows, 46.61 MB (38.13 million rows/s., 64.74 MB/s.)
With projection: 28 rows in set. Elapsed: 0.015 sec. Processed 87.51 thousand rows, 3.51 MB (5.74 million rows/s., 230.24 MB/s.)
```

### Query 3. The Most Expensive Neighborhoods {#most-expensive-neighborhoods-projections}

Note: the predicate `(date >= '2020-01-01')` has to be changed to `(toYear(date) >= 2020)` so that it matches the dimension used in the projection definition:

```sql
SELECT
    town,
    district,
    count() AS c,
    round(avg(price)) AS price,
    bar(price, 0, 5000000, 100)
FROM uk_price_paid
WHERE toYear(date) >= 2020
GROUP BY
    town,
    district
HAVING c >= 100
ORDER BY price DESC
LIMIT 100
```

Again, the result is the same, but notice the improvement in query performance:

```response
No projection:   100 rows in set. Elapsed: 0.928 sec. Processed 27.45 million rows, 103.80 MB (29.56 million rows/s., 111.80 MB/s.)
With projection: 100 rows in set. Elapsed: 0.336 sec. Processed 17.32 thousand rows, 1.23 MB (51.61 thousand rows/s., 3.65 MB/s.)
```

### Test It in the Playground {#playground}

The dataset is also available in the [Online Playground](https://play.clickhouse.com/play?user=play#U0VMRUNUIHRvd24sIGRpc3RyaWN0LCBjb3VudCgpIEFTIGMsIHJvdW5kKGF2ZyhwcmljZSkpIEFTIHByaWNlLCBiYXIocHJpY2UsIDAsIDUwMDAwMDAsIDEwMCkgRlJPTSB1a19wcmljZV9wYWlkIFdIRVJFIGRhdGUgPj0gJzIwMjAtMDEtMDEnIEdST1VQIEJZIHRvd24sIGRpc3RyaWN0IEhBVklORyBjID49IDEwMCBPUkRFUiBCWSBwcmljZSBERVNDIExJTUlUIDEwMA==).

@ -35,6 +35,9 @@ Yandex does **not** maintain the libraries listed below, nor has it done any extensive testing of them.
- NodeJs
    - [clickhouse (NodeJs)](https://github.com/TimonKK/clickhouse)
    - [node-clickhouse](https://github.com/apla/node-clickhouse)
    - [nestjs-clickhouse](https://github.com/depyronick/nestjs-clickhouse)
    - [clickhouse-client](https://github.com/depyronick/clickhouse-client)
    - [node-clickhouse-orm](https://github.com/zimv/node-clickhouse-orm)
- Perl
    - [perl-DBD-ClickHouse](https://github.com/elcamlost/perl-DBD-ClickHouse)
    - [HTTP-ClickHouse](https://metacpan.org/release/HTTP-ClickHouse)

@ -164,23 +164,6 @@ SELECT * FROM [db.]live_view WHERE ...

You can force a refresh using the `ALTER LIVE VIEW [db.]table_name REFRESH` syntax.

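For example, a manual refresh of the example view used later on this page would look like this (a sketch; `lv` is the example view name from the snippets below):

```sql
ALTER LIVE VIEW lv REFRESH
```
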
### WITH TIMEOUT Clause {#live-view-with-timeout}

When a live view is created with a `WITH TIMEOUT` clause, it is dropped automatically after the specified number of seconds have elapsed since the last [WATCH](../../../sql-reference/statements/watch.md) query watching the live view ended.

```sql
CREATE LIVE VIEW [db.]table_name WITH TIMEOUT [value_in_sec] AS SELECT ...
```

If the timeout value is not specified, it is determined by the value of [temporary_live_view_timeout](../../../operations/settings/settings.md#temporary-live-view-timeout).

**Example:**

```sql
CREATE TABLE mt (x Int8) Engine = MergeTree ORDER BY x;
CREATE LIVE VIEW lv WITH TIMEOUT 15 AS SELECT sum(x) FROM mt;
```

### WITH REFRESH Clause {#live-view-with-refresh}

When a live view is created with a `WITH REFRESH` clause, it refreshes automatically after the specified number of seconds have elapsed since the last refresh or trigger.

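The syntax block and the start of the example for this clause fall outside the lines shown in this diff; a minimal sketch, following the pattern of the `WITH TIMEOUT` example above, would be:

```sql
CREATE LIVE VIEW lv WITH REFRESH 5 AS SELECT now();
```
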
@ -210,20 +193,6 @@ WATCH lv
└─────────────────────┴──────────┘
```

You can combine the `WITH TIMEOUT` and `WITH REFRESH` clauses using `AND`.

```sql
CREATE LIVE VIEW [db.]table_name WITH TIMEOUT [value_in_sec] AND REFRESH [value_in_sec] AS SELECT ...
```

**Example:**

```sql
CREATE LIVE VIEW lv WITH TIMEOUT 15 AND REFRESH 5 AS SELECT now();
```

After 15 seconds, the live view is dropped automatically if there are no active `WATCH` queries.

```sql
WATCH lv
```

@ -120,7 +120,11 @@ use_cron()
    if [ -x "/bin/systemctl" ] && [ -f /etc/systemd/system/clickhouse-server.service ] && [ -d /run/systemd/system ]; then
        return 1
    fi
    # 2. disabled by config
    # 2. checking whether the config file exists
    if [ ! -f "$CLICKHOUSE_CRONFILE" ]; then
        return 1
    fi
    # 3. disabled by config
    if [ -z "$CLICKHOUSE_CRONFILE" ]; then
        return 2
    fi

@ -189,7 +189,7 @@ else()
    message(STATUS "ClickHouse su: OFF")
endif()

configure_file (config_tools.h.in ${ConfigIncludePath}/config_tools.h)
configure_file (config_tools.h.in ${CONFIG_INCLUDE_PATH}/config_tools.h)

macro(clickhouse_target_link_split_lib target name)
    if(NOT CLICKHOUSE_ONE_SHARED)

@ -12,10 +12,11 @@
#include <string>
#include "Client.h"
#include "Core/Protocol.h"
#include "Parsers/formatAST.h"

#include <base/find_symbols.h>

#include <Common/config_version.h>
#include "config_version.h"
#include <Common/Exception.h>
#include <Common/formatReadable.h>
#include <Common/TerminalSize.h>

@ -514,6 +515,66 @@ static bool queryHasWithClause(const IAST & ast)
    return false;
}

std::optional<bool> Client::processFuzzingStep(const String & query_to_execute, const ASTPtr & parsed_query)
{
    processParsedSingleQuery(query_to_execute, query_to_execute, parsed_query);

    const auto * exception = server_exception ? server_exception.get() : client_exception.get();
    // Sometimes you may get TOO_DEEP_RECURSION from the server,
    // and TOO_DEEP_RECURSION should not fail the fuzzer check.
    if (have_error && exception->code() == ErrorCodes::TOO_DEEP_RECURSION)
    {
        have_error = false;
        server_exception.reset();
        client_exception.reset();
        return true;
    }

    if (have_error)
    {
        fmt::print(stderr, "Error on processing query '{}': {}\n", parsed_query->formatForErrorMessage(), exception->message());

        // Try to reconnect after errors, for two reasons:
        // 1. We might not have realized that the server died, e.g. if
        //    it sent us a <Fatal> trace and closed connection properly.
        // 2. The connection might have gotten into a wrong state and
        //    the next query will get false positive about
        //    "Unknown packet from server".
        try
        {
            connection->forceConnected(connection_parameters.timeouts);
        }
        catch (...)
        {
            // Just report it, we'll terminate below.
            fmt::print(stderr,
                "Error while reconnecting to the server: {}\n",
                getCurrentExceptionMessage(true));

            // The reconnection might fail, but we'll still be connected
            // in the sense of `connection->isConnected() = true`,
            // in case when the requested database doesn't exist.
            // Disconnect manually now, so that the following code doesn't
            // have any doubts, and the connection state is predictable.
            connection->disconnect();
        }
    }

    if (!connection->isConnected())
    {
        // Probably the server is dead because we found an assertion
        // failure. Fail fast.
        fmt::print(stderr, "Lost connection to the server.\n");

        // Print the changed settings because they might be needed to
        // reproduce the error.
        printChangedSettings();

        return false;
    }

    return std::nullopt;
}

/// Returns false when server is not available.
bool Client::processWithFuzzing(const String & full_query)
@ -558,18 +619,33 @@ bool Client::processWithFuzzing(const String & full_query)
    // - SET -- The time to fuzz the settings has not yet come
    //   (see comments in Client/QueryFuzzer.cpp)
    size_t this_query_runs = query_fuzzer_runs;
    if (orig_ast->as<ASTInsertQuery>() ||
        orig_ast->as<ASTCreateQuery>() ||
        orig_ast->as<ASTDropQuery>() ||
        orig_ast->as<ASTSetQuery>())
    ASTs queries_for_fuzzed_tables;

    if (orig_ast->as<ASTSetQuery>())
    {
        this_query_runs = 1;
    }
    else if (const auto * create = orig_ast->as<ASTCreateQuery>())
    {
        if (QueryFuzzer::isSuitableForFuzzing(*create))
            this_query_runs = create_query_fuzzer_runs;
        else
            this_query_runs = 1;
    }
    else if (const auto * insert = orig_ast->as<ASTInsertQuery>())
    {
        this_query_runs = 1;
        queries_for_fuzzed_tables = fuzzer.getInsertQueriesForFuzzedTables(full_query);
    }
    else if (const auto * drop = orig_ast->as<ASTDropQuery>())
    {
        this_query_runs = 1;
        queries_for_fuzzed_tables = fuzzer.getDropQueriesForFuzzedTables(*drop);
    }

    String query_to_execute;
    ASTPtr parsed_query;

    ASTPtr fuzz_base = orig_ast;

    for (size_t fuzz_step = 0; fuzz_step < this_query_runs; ++fuzz_step)
    {
        fmt::print(stderr, "Fuzzing step {} out of {}\n", fuzz_step, this_query_runs);
@ -630,9 +706,9 @@ bool Client::processWithFuzzing(const String & full_query)
                continue;
            }

            parsed_query = ast_to_process;
            query_to_execute = parsed_query->formatForErrorMessage();
            processParsedSingleQuery(full_query, query_to_execute, parsed_query);
            query_to_execute = ast_to_process->formatForErrorMessage();
            if (auto res = processFuzzingStep(query_to_execute, ast_to_process))
                return *res;
        }
        catch (...)
        {
@ -645,60 +721,6 @@ bool Client::processWithFuzzing(const String & full_query)
            have_error = true;
        }

        const auto * exception = server_exception ? server_exception.get() : client_exception.get();
        // Sometimes you may get TOO_DEEP_RECURSION from the server,
        // and TOO_DEEP_RECURSION should not fail the fuzzer check.
        if (have_error && exception->code() == ErrorCodes::TOO_DEEP_RECURSION)
        {
            have_error = false;
            server_exception.reset();
            client_exception.reset();
            return true;
        }

        if (have_error)
        {
            fmt::print(stderr, "Error on processing query '{}': {}\n", ast_to_process->formatForErrorMessage(), exception->message());

            // Try to reconnect after errors, for two reasons:
            // 1. We might not have realized that the server died, e.g. if
            //    it sent us a <Fatal> trace and closed connection properly.
            // 2. The connection might have gotten into a wrong state and
            //    the next query will get false positive about
            //    "Unknown packet from server".
            try
            {
                connection->forceConnected(connection_parameters.timeouts);
            }
            catch (...)
            {
                // Just report it, we'll terminate below.
                fmt::print(stderr,
                    "Error while reconnecting to the server: {}\n",
                    getCurrentExceptionMessage(true));

                // The reconnection might fail, but we'll still be connected
                // in the sense of `connection->isConnected() = true`,
                // in case when the requested database doesn't exist.
                // Disconnect manually now, so that the following code doesn't
                // have any doubts, and the connection state is predictable.
                connection->disconnect();
            }
        }

        if (!connection->isConnected())
        {
            // Probably the server is dead because we found an assertion
            // failure. Fail fast.
            fmt::print(stderr, "Lost connection to the server.\n");

            // Print the changed settings because they might be needed to
            // reproduce the error.
            printChangedSettings();

            return false;
        }

        // Check that after the query is formatted, we can parse it back,
        // format again and get the same result. Unfortunately, we can't
        // compare the ASTs, which would be more sensitive to errors. This
@ -729,13 +751,12 @@ bool Client::processWithFuzzing(const String & full_query)
        // query, but second and third.
        // If you have to add any more workarounds to this check, just remove
        // it altogether, it's not so useful.
        if (parsed_query && !have_error && !queryHasWithClause(*parsed_query))
        if (ast_to_process && !have_error && !queryHasWithClause(*ast_to_process))
        {
            ASTPtr ast_2;
            try
            {
                const auto * tmp_pos = query_to_execute.c_str();

                ast_2 = parseQuery(tmp_pos, tmp_pos + query_to_execute.size(), false /* allow_multi_statements */);
            }
            catch (Exception & e)
@ -762,7 +783,7 @@ bool Client::processWithFuzzing(const String & full_query)
                    "Got the following (different) text after formatting the fuzzed query and parsing it back:\n'{}'\n, expected:\n'{}'\n",
                    text_3, text_2);
                fmt::print(stderr, "In more detail:\n");
                fmt::print(stderr, "AST-1 (generated by fuzzer):\n'{}'\n", parsed_query->dumpTree());
                fmt::print(stderr, "AST-1 (generated by fuzzer):\n'{}'\n", ast_to_process->dumpTree());
                fmt::print(stderr, "Text-1 (AST-1 formatted):\n'{}'\n", query_to_execute);
                fmt::print(stderr, "AST-2 (Text-1 parsed):\n'{}'\n", ast_2->dumpTree());
                fmt::print(stderr, "Text-2 (AST-2 formatted):\n'{}'\n", text_2);
@ -784,6 +805,7 @@ bool Client::processWithFuzzing(const String & full_query)
            // so that it doesn't influence the exit code.
            server_exception.reset();
            client_exception.reset();
            fuzzer.notifyQueryFailed(ast_to_process);
            have_error = false;
        }
        else if (ast_to_process->formatForErrorMessage().size() > 500)
@ -800,6 +822,35 @@ bool Client::processWithFuzzing(const String & full_query)
        }
    }

    for (const auto & query : queries_for_fuzzed_tables)
    {
        std::cout << std::endl;
        WriteBufferFromOStream ast_buf(std::cout, 4096);
        formatAST(*query, ast_buf, false /*highlight*/);
        ast_buf.next();
        std::cout << std::endl << std::endl;

        try
        {
            query_to_execute = query->formatForErrorMessage();
            if (auto res = processFuzzingStep(query_to_execute, query))
                return *res;
        }
        catch (...)
        {
            client_exception = std::make_unique<Exception>(getCurrentExceptionMessage(print_stack_trace), getCurrentExceptionCode());
            have_error = true;
        }

        if (have_error)
        {
            server_exception.reset();
            client_exception.reset();
            fuzzer.notifyQueryFailed(query);
            have_error = false;
        }
    }

    return true;
}

@ -834,6 +885,7 @@ void Client::addOptions(OptionsDescription & options_description)
    ("compression", po::value<bool>(), "enable or disable compression (enabled by default for remote communication and disabled for localhost communication).")

    ("query-fuzzer-runs", po::value<int>()->default_value(0), "After executing every SELECT query, do random mutations in it and run again specified number of times. This is used for testing to discover unexpected corner cases.")
    ("create-query-fuzzer-runs", po::value<int>()->default_value(0), "")
    ("interleave-queries-file", po::value<std::vector<std::string>>()->multitoken(),
        "file path with queries to execute before every file from 'queries-file'; multiple files can be specified (--queries-file file1 file2...); this is needed to enable more aggressive fuzzing of newly added tests (see 'query-fuzzer-runs' option)")

@ -994,6 +1046,17 @@ void Client::processOptions(const OptionsDescription & options_description,
        ignore_error = true;
    }

    if ((create_query_fuzzer_runs = options["create-query-fuzzer-runs"].as<int>()))
    {
        // Fuzzer implies multiquery.
        config().setBool("multiquery", true);
        // Ignore errors in parsing queries.
        config().setBool("ignore-error", true);

        global_context->setSetting("allow_suspicious_low_cardinality_types", true);
        ignore_error = true;
    }

    if (options.count("opentelemetry-traceparent"))
    {
        String traceparent = options["opentelemetry-traceparent"].as<std::string>();
@ -17,6 +17,7 @@ public:

protected:
    bool processWithFuzzing(const String & full_query) override;
    std::optional<bool> processFuzzingStep(const String & query_to_execute, const ASTPtr & parsed_query);

    void connect() override;

@ -19,7 +19,6 @@
{host}
{port}
{user}
{database}
{display_name}
Terminal colors: https://misc.flogisoft.com/bash/tip_colors_and_formatting
See also: https://wiki.hackzine.org/development/misc/readline-color-prompt.html

@ -1,6 +1,6 @@
#pragma once
/// This file was autogenerated by CMake

// .h autogenerated by cmake !
#pragma once

#cmakedefine01 ENABLE_CLICKHOUSE_SERVER
#cmakedefine01 ENABLE_CLICKHOUSE_CLIENT

@ -1,6 +1,6 @@
module github.com/ClickHouse/ClickHouse/programs/diagnostics

go 1.17
go 1.19

require (
    github.com/ClickHouse/clickhouse-go/v2 v2.0.12

@ -65,7 +65,6 @@ github.com/Azure/go-autorest/logger v0.2.0/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZ
github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/ClickHouse/clickhouse-go v1.5.3 h1:Vok8zUb/wlqc9u8oEqQzBMBRDoFd8NxPRqgYEqMnV88=
github.com/ClickHouse/clickhouse-go v1.5.3/go.mod h1:EaI/sW7Azgz9UATzd5ZdZHRUhHgv5+JMS9NSr2smCJI=
github.com/ClickHouse/clickhouse-go/v2 v2.0.12 h1:Nbl/NZwoM6LGJm7smNBgvtdr/rxjlIssSW3eG/Nmb9E=
github.com/ClickHouse/clickhouse-go/v2 v2.0.12/go.mod h1:u4RoNQLLM2W6hNSPYrIESLJqaWSInZVmfM+MlaAhXcg=
@ -457,7 +456,6 @@ github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgf
github.com/grpc-ecosystem/grpc-gateway v1.9.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
github.com/hashicorp/consul/api v1.11.0/go.mod h1:XjsvQN+RJGWI2TWy1/kqaE16HrR2J/FWgkYjdZQsX9M=
github.com/hashicorp/consul/api v1.12.0/go.mod h1:6pVBMo0ebnYdt2S3H87XhekM/HHrUoTD2XXb/VrZVy0=
github.com/hashicorp/consul/sdk v0.8.0/go.mod h1:GBvyrGALthsZObzUGsfgHZQDXjg4lOjagTIwIR1vPms=
github.com/hashicorp/errwrap v0.0.0-20141028054710-7554cd9344ce/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
@ -663,9 +661,7 @@ github.com/paulmach/protoscan v0.2.1-0.20210522164731-4e53c6875432/go.mod h1:2sV
github.com/pelletier/go-toml v1.9.4 h1:tjENF6MfZAg8e4ZmZTeWaWiT2vXtsoO6+iuOjFhECwM=
github.com/pelletier/go-toml v1.9.4/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/pierrec/lz4 v2.0.5+incompatible h1:2xWsjqPFWcplujydGg4WmhC/6fZqK42wMM8aXeqhl0I=
github.com/pierrec/lz4 v2.0.5+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY=
github.com/pierrec/lz4/v4 v4.1.12/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
github.com/pierrec/lz4/v4 v4.1.14 h1:+fL8AQEZtz/ijeNnpduH0bROTu0O3NZAlPjQxGn8LwE=
github.com/pierrec/lz4/v4 v4.1.14/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
@ -717,7 +713,6 @@ github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQD
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/safchain/ethtool v0.0.0-20190326074333-42ed695e3de8/go.mod h1:Z0q5wiBQGYcxhMZ6gUqHn6pYNLypFAvaL3UvgZLR0U4=
github.com/sagikazarmark/crypt v0.3.0/go.mod h1:uD/D+6UF4SrIR1uGEv7bBNkNqLGqUr43MRiaGWX1Nig=
github.com/sagikazarmark/crypt v0.4.0/go.mod h1:ALv2SRj7GxYV4HO9elxH9nS6M9gW+xDNxqmyJ6RfDFM=
github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/seccomp/libseccomp-golang v0.9.1/go.mod h1:GbW5+tmTXfcxTToHLXlScSlAvWlF4P2Ca7zGrPiEpWo=
@ -1083,7 +1078,6 @@ golang.org/x/sys v0.0.0-20211109184856-51b60fd695b3/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20211110154304-99a53858aa08/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211124211545-fe61309f8881/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211205182925-97ca703d548d/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211210111614-af8b64212486/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220114195835-da31bd327af9 h1:XfKQ4OlFl8okEOr5UvAqFRVj8pY/4yfcXrddB8qAbU0=
golang.org/x/sys v0.0.0-20220114195835-da31bd327af9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
@ -1202,7 +1196,6 @@ google.golang.org/api v0.57.0/go.mod h1:dVPlbZyBo2/OjBpmvNdpn2GRm6rPy75jyU7bmhdr
google.golang.org/api v0.59.0/go.mod h1:sT2boj7M9YJxZzgeZqXogmhfmRWDtPzT31xkieUbuZU=
google.golang.org/api v0.61.0/go.mod h1:xQRti5UdCmoCEqFxcz93fTl338AVqDgyaDRuOZ3hg9I=
google.golang.org/api v0.62.0/go.mod h1:dKmwPCydfsad4qCH08MSdgWjfHOyfpd4VtDGgRFdavw=
google.golang.org/api v0.63.0/go.mod h1:gs4ij2ffTRXwuzzgJl/56BdwJaA194ijkfn++9tDuPo=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=

@ -58,7 +58,7 @@ void DisksApp::addOptions(
        ("disk", po::value<String>(), "Set disk name")
        ("command_name", po::value<String>(), "Name for command to do")
        ("send-logs", "Send logs")
        ("log-level", "Logging level")
        ("log-level", po::value<String>(), "Logging level")
        ;

    positional_options_description.add("command_name", 1);

@ -45,6 +45,7 @@ if (BUILD_STANDALONE_KEEPER)
    ${CMAKE_CURRENT_SOURCE_DIR}/../../src/Coordination/KeeperLogStore.cpp
    ${CMAKE_CURRENT_SOURCE_DIR}/../../src/Coordination/KeeperServer.cpp
    ${CMAKE_CURRENT_SOURCE_DIR}/../../src/Coordination/KeeperSnapshotManager.cpp
    ${CMAKE_CURRENT_SOURCE_DIR}/../../src/Coordination/KeeperSnapshotManagerS3.cpp
    ${CMAKE_CURRENT_SOURCE_DIR}/../../src/Coordination/KeeperStateMachine.cpp
    ${CMAKE_CURRENT_SOURCE_DIR}/../../src/Coordination/KeeperStateManager.cpp
    ${CMAKE_CURRENT_SOURCE_DIR}/../../src/Coordination/KeeperStorage.cpp

Some files were not shown because too many files have changed in this diff.