Merge branch 'master' into async-read-from-socket

Nikolai Kochetov 2020-12-14 17:45:38 +03:00
commit 8de5cd5bc7
389 changed files with 25788 additions and 4056 deletions


@ -1,3 +1,126 @@
## ClickHouse release 20.12
### ClickHouse release v20.12.3.3-stable, 2020-12-13
#### Backward Incompatible Change
* Enable `use_compact_format_in_distributed_parts_names` by default (see the documentation for the reference). [#16728](https://github.com/ClickHouse/ClickHouse/pull/16728) ([Azat Khuzhin](https://github.com/azat)).
* Accept user settings related to file formats (e.g. `format_csv_delimiter`) in the `SETTINGS` clause when creating a table that uses `File` engine, and use these settings in all `INSERT`s and `SELECT`s. The file format settings changed in the current user session, or in the `SETTINGS` clause of a DML query itself, no longer affect the query. [#16591](https://github.com/ClickHouse/ClickHouse/pull/16591) ([Alexander Kuzmenkov](https://github.com/akuzm)).
#### New Feature
* Add `*.xz` compression/decompression support. It enables using `*.xz` in the `file()` function. This closes [#8828](https://github.com/ClickHouse/ClickHouse/issues/8828). [#16578](https://github.com/ClickHouse/ClickHouse/pull/16578) ([Abi Palagashvili](https://github.com/fibersel)).
* Introduce the query `ALTER TABLE ... DROP|DETACH PART 'part_name'` (see the sketch after this list). [#15511](https://github.com/ClickHouse/ClickHouse/pull/15511) ([nvartolomei](https://github.com/nvartolomei)).
* Added new ALTER UPDATE/DELETE IN PARTITION syntax. [#13403](https://github.com/ClickHouse/ClickHouse/pull/13403) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Allow formatting named tuples as JSON objects when using JSON input/output formats, controlled by the `output_format_json_named_tuples_as_objects` setting, disabled by default. [#17175](https://github.com/ClickHouse/ClickHouse/pull/17175) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Add the possibility to input an enum value as its id in TSV and CSV formats by default. [#16834](https://github.com/ClickHouse/ClickHouse/pull/16834) ([Kruglov Pavel](https://github.com/Avogar)).
* Add COLLATE support for Nullable, LowCardinality, Array and Tuple, where nested type is String. Also refactor the code associated with collations in ColumnString.cpp. [#16273](https://github.com/ClickHouse/ClickHouse/pull/16273) ([Kruglov Pavel](https://github.com/Avogar)).
* New `tcpPort` function returns TCP port listened by this server. [#17134](https://github.com/ClickHouse/ClickHouse/pull/17134) ([Ivan](https://github.com/abyss7)).
* Add new math functions: `acosh`, `asinh`, `atan2`, `atanh`, `cosh`, `hypot`, `log1p`, `sinh`. [#16636](https://github.com/ClickHouse/ClickHouse/pull/16636) ([Konstantin Malanchev](https://github.com/hombit)).
* Possibility to distribute the merges between different replicas. Introduces the `execute_merges_on_single_replica_time_threshold` mergetree setting. [#16424](https://github.com/ClickHouse/ClickHouse/pull/16424) ([filimonov](https://github.com/filimonov)).
* Add setting `aggregate_functions_null_for_empty` for SQL standard compatibility. This option will rewrite all aggregate functions in a query, adding -OrNull suffix to them. Implements [#10273](https://github.com/ClickHouse/ClickHouse/issues/10273). [#16123](https://github.com/ClickHouse/ClickHouse/pull/16123) ([flynn](https://github.com/ucasFL)).
* Updated DateTime, DateTime64 parsing to accept string Date literal format. [#16040](https://github.com/ClickHouse/ClickHouse/pull/16040) ([Maksim Kita](https://github.com/kitaisreal)).
* Make it possible to change the path to history file in `clickhouse-client` using the `--history_file` parameter. [#15960](https://github.com/ClickHouse/ClickHouse/pull/15960) ([Maksim Kita](https://github.com/kitaisreal)).
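A minimal sketch of the new per-part syntax from [#15511](https://github.com/ClickHouse/ClickHouse/pull/15511); the table and part names below are hypothetical:

```sql
-- Detach a single data part by name (it is moved to the detached/ directory).
ALTER TABLE visits DETACH PART 'all_1_1_0';
-- Drop a single data part by name.
ALTER TABLE visits DROP PART 'all_2_2_0';
```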
#### Bug Fix
* Fix the issue when server can stop accepting connections in very rare cases. [#17542](https://github.com/ClickHouse/ClickHouse/pull/17542) ([Amos Bird](https://github.com/amosbird)).
* Fixed `Function not implemented` error when executing `RENAME` query in `Atomic` database with ClickHouse running on Windows Subsystem for Linux. Fixes [#17661](https://github.com/ClickHouse/ClickHouse/issues/17661). [#17664](https://github.com/ClickHouse/ClickHouse/pull/17664) ([tavplubix](https://github.com/tavplubix)).
* Do not restore parts from WAL if `in_memory_parts_enable_wal` is disabled. [#17802](https://github.com/ClickHouse/ClickHouse/pull/17802) ([detailyang](https://github.com/detailyang)).
* Fix incorrect initialization of `max_compress_block_size` in MergeTreeWriterSettings with `min_compress_block_size`. [#17833](https://github.com/ClickHouse/ClickHouse/pull/17833) ([flynn](https://github.com/ucasFL)).
* Exception message about max table size to drop was displayed incorrectly. [#17764](https://github.com/ClickHouse/ClickHouse/pull/17764) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed possible segfault when there is not enough space when inserting into `Distributed` table. [#17737](https://github.com/ClickHouse/ClickHouse/pull/17737) ([tavplubix](https://github.com/tavplubix)).
* Fixed problem when ClickHouse fails to resume connection to MySQL servers. [#17681](https://github.com/ClickHouse/ClickHouse/pull/17681) ([Alexander Kazakov](https://github.com/Akazz)).
* It might be determined incorrectly whether a cluster is circular- (cross-) replicated or not when executing an `ON CLUSTER` query, due to a race condition when `pool_size` > 1. It's fixed. [#17640](https://github.com/ClickHouse/ClickHouse/pull/17640) ([tavplubix](https://github.com/tavplubix)).
* Exception `fmt::v7::format_error` can be logged in background for MergeTree tables. This fixes [#17613](https://github.com/ClickHouse/ClickHouse/issues/17613). [#17615](https://github.com/ClickHouse/ClickHouse/pull/17615) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* When clickhouse-client is used in interactive mode with multiline queries, a single-line comment was erroneously extended to the end of the query. This fixes [#13654](https://github.com/ClickHouse/ClickHouse/issues/13654). [#17565](https://github.com/ClickHouse/ClickHouse/pull/17565) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix alter query hang when the corresponding mutation was killed on a different replica. Fixes [#16953](https://github.com/ClickHouse/ClickHouse/issues/16953). [#17499](https://github.com/ClickHouse/ClickHouse/pull/17499) ([alesapin](https://github.com/alesapin)).
* Fix issue when mark cache size was underestimated by ClickHouse. It may happen when there are a lot of tiny files with marks. [#17496](https://github.com/ClickHouse/ClickHouse/pull/17496) ([alesapin](https://github.com/alesapin)).
* Fix `ORDER BY` with enabled setting `optimize_redundant_functions_in_order_by`. [#17471](https://github.com/ClickHouse/ClickHouse/pull/17471) ([Anton Popov](https://github.com/CurtizJ)).
* Fix duplicates after `DISTINCT` which were possible because of incorrect optimization. Fixes [#17294](https://github.com/ClickHouse/ClickHouse/issues/17294). [#17296](https://github.com/ClickHouse/ClickHouse/pull/17296) ([li chengxiang](https://github.com/chengxianglibra)). [#17439](https://github.com/ClickHouse/ClickHouse/pull/17439) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix crash while reading from `JOIN` table with `LowCardinality` types. Fixes [#17228](https://github.com/ClickHouse/ClickHouse/issues/17228). [#17397](https://github.com/ClickHouse/ClickHouse/pull/17397) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix `toInt256(inf)` stack overflow. Int256 is an experimental feature. Closes [#17235](https://github.com/ClickHouse/ClickHouse/issues/17235). [#17257](https://github.com/ClickHouse/ClickHouse/pull/17257) ([flynn](https://github.com/ucasFL)).
* Fix possible `Unexpected packet Data received from client` error logged for Distributed queries with `LIMIT`. [#17254](https://github.com/ClickHouse/ClickHouse/pull/17254) ([Azat Khuzhin](https://github.com/azat)).
* Fix set index invalidation when there are const columns in the subquery. This fixes [#17246](https://github.com/ClickHouse/ClickHouse/issues/17246). [#17249](https://github.com/ClickHouse/ClickHouse/pull/17249) ([Amos Bird](https://github.com/amosbird)).
* Fix possible wrong index analysis when the types of the index comparison are different. This fixes [#17122](https://github.com/ClickHouse/ClickHouse/issues/17122). [#17145](https://github.com/ClickHouse/ClickHouse/pull/17145) ([Amos Bird](https://github.com/amosbird)).
* Fix ColumnConst comparison which leads to crash. This fixed [#17088](https://github.com/ClickHouse/ClickHouse/issues/17088) . [#17135](https://github.com/ClickHouse/ClickHouse/pull/17135) ([Amos Bird](https://github.com/amosbird)).
* Multiple fixes for MaterializeMySQL (experimental feature). Fixes [#16923](https://github.com/ClickHouse/ClickHouse/issues/16923), fixes [#15883](https://github.com/ClickHouse/ClickHouse/issues/15883). Fix MaterializeMySQL SYNC failure when modifying the MySQL binlog_checksum. [#17091](https://github.com/ClickHouse/ClickHouse/pull/17091) ([Winter Zhang](https://github.com/zhang2014)).
* Fix bug when `ON CLUSTER` queries may hang forever for non-leader ReplicatedMergeTreeTables. [#17089](https://github.com/ClickHouse/ClickHouse/pull/17089) ([alesapin](https://github.com/alesapin)).
* Fixed crash on `CREATE TABLE ... AS some_table` query when `some_table` was created `AS table_function()`. Fixes [#16944](https://github.com/ClickHouse/ClickHouse/issues/16944). [#17072](https://github.com/ClickHouse/ClickHouse/pull/17072) ([tavplubix](https://github.com/tavplubix)).
* Fix buggy unfinished implementation of the function `fuzzBits`, related issue: [#16980](https://github.com/ClickHouse/ClickHouse/issues/16980). [#17051](https://github.com/ClickHouse/ClickHouse/pull/17051) ([hexiaoting](https://github.com/hexiaoting)).
* Fix LLVM's libunwind in the case when CFA register is RAX. This is the [bug](https://bugs.llvm.org/show_bug.cgi?id=48186) in [LLVM's libunwind](https://github.com/llvm/llvm-project/tree/master/libunwind). We already have workarounds for this bug. [#17046](https://github.com/ClickHouse/ClickHouse/pull/17046) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Avoid unnecessary network errors for remote queries which may be cancelled while execution, like queries with `LIMIT`. [#17006](https://github.com/ClickHouse/ClickHouse/pull/17006) ([Azat Khuzhin](https://github.com/azat)).
* Fix `optimize_distributed_group_by_sharding_key` setting (that is disabled by default) for query with OFFSET only. [#16996](https://github.com/ClickHouse/ClickHouse/pull/16996) ([Azat Khuzhin](https://github.com/azat)).
* Fix for Merge tables over Distributed tables with JOIN. [#16993](https://github.com/ClickHouse/ClickHouse/pull/16993) ([Azat Khuzhin](https://github.com/azat)).
* Fixed wrong result in big integers (128, 256 bit) when casting from double. Big integers support is experimental. [#16986](https://github.com/ClickHouse/ClickHouse/pull/16986) ([Mike](https://github.com/myrrc)).
* Fix possible server crash after `ALTER TABLE ... MODIFY COLUMN ... NewType` when a `SELECT` has a `WHERE` expression on the altered column and the alter hasn't finished yet. [#16968](https://github.com/ClickHouse/ClickHouse/pull/16968) ([Amos Bird](https://github.com/amosbird)).
* Blame info was not calculated correctly in `clickhouse-git-import`. [#16959](https://github.com/ClickHouse/ClickHouse/pull/16959) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix order by optimization with monotonous functions. Fixes [#16107](https://github.com/ClickHouse/ClickHouse/issues/16107). [#16956](https://github.com/ClickHouse/ClickHouse/pull/16956) ([Anton Popov](https://github.com/CurtizJ)).
* Fix optimization of group by with enabled setting `optimize_aggregators_of_group_by_keys` and joins. Fixes [#12604](https://github.com/ClickHouse/ClickHouse/issues/12604). [#16951](https://github.com/ClickHouse/ClickHouse/pull/16951) ([Anton Popov](https://github.com/CurtizJ)).
* Fix possible error `Illegal type of argument` for queries with `ORDER BY`. Fixes [#16580](https://github.com/ClickHouse/ClickHouse/issues/16580). [#16928](https://github.com/ClickHouse/ClickHouse/pull/16928) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix strange code in InterpreterShowAccessQuery. [#16866](https://github.com/ClickHouse/ClickHouse/pull/16866) ([tavplubix](https://github.com/tavplubix)).
* Prevent clickhouse server crashes when using the function `timeSeriesGroupSum`. The function is removed from newer ClickHouse releases. [#16865](https://github.com/ClickHouse/ClickHouse/pull/16865) ([filimonov](https://github.com/filimonov)).
* Fix rare silent crashes when query profiler is on and ClickHouse is installed on OS with glibc version that has (supposedly) broken asynchronous unwind tables for some functions. This fixes [#15301](https://github.com/ClickHouse/ClickHouse/issues/15301). This fixes [#13098](https://github.com/ClickHouse/ClickHouse/issues/13098). [#16846](https://github.com/ClickHouse/ClickHouse/pull/16846) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix crash when using `any` without any arguments. This is for [#16803](https://github.com/ClickHouse/ClickHouse/issues/16803) . cc @azat. [#16826](https://github.com/ClickHouse/ClickHouse/pull/16826) ([Amos Bird](https://github.com/amosbird)).
* If no memory can be allocated while writing table metadata on disk, a broken metadata file could be written. [#16772](https://github.com/ClickHouse/ClickHouse/pull/16772) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix trivial query optimization with partition predicate. [#16767](https://github.com/ClickHouse/ClickHouse/pull/16767) ([Azat Khuzhin](https://github.com/azat)).
* Fix the `IN` operator over several columns and tuples with the `transform_null_in` setting enabled (see the sketch after this list). Fixes [#15310](https://github.com/ClickHouse/ClickHouse/issues/15310). [#16722](https://github.com/ClickHouse/ClickHouse/pull/16722) ([Anton Popov](https://github.com/CurtizJ)).
* Return the number of affected rows for INSERT queries via the MySQL protocol. Previously ClickHouse always returned 0; it's fixed. Fixes [#16605](https://github.com/ClickHouse/ClickHouse/issues/16605). [#16715](https://github.com/ClickHouse/ClickHouse/pull/16715) ([Winter Zhang](https://github.com/zhang2014)).
* Fix remote query failure when using the 'if' suffix aggregate function. Fixes [#16574](https://github.com/ClickHouse/ClickHouse/issues/16574), fixes [#16231](https://github.com/ClickHouse/ClickHouse/issues/16231). [#16610](https://github.com/ClickHouse/ClickHouse/pull/16610) ([Winter Zhang](https://github.com/zhang2014)).
* Fix inconsistent behavior caused by `select_sequential_consistency` for optimized trivial count query and system.tables. [#16309](https://github.com/ClickHouse/ClickHouse/pull/16309) ([Hao Chen](https://github.com/haoch)).
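For the `transform_null_in` fix above, a hedged sketch of the multi-column `IN` shape the entry refers to; the table is hypothetical and the result is not asserted here:

```sql
-- Hypothetical table: t(x Int32, y Nullable(Int32)).
SET transform_null_in = 1;
SELECT x, y FROM t WHERE (x, y) IN ((1, NULL), (2, 3));
```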
#### Improvement
* Remove empty parts after they were pruned by TTL, mutation, or collapsing merge algorithm. [#16895](https://github.com/ClickHouse/ClickHouse/pull/16895) ([Anton Popov](https://github.com/CurtizJ)).
* Enable compact format of directories for asynchronous sends in Distributed tables: `use_compact_format_in_distributed_parts_names` is set to 1 by default. [#16788](https://github.com/ClickHouse/ClickHouse/pull/16788) ([Azat Khuzhin](https://github.com/azat)).
* Abort multipart upload if no data was written to S3. [#16840](https://github.com/ClickHouse/ClickHouse/pull/16840) ([Pavel Kovalenko](https://github.com/Jokser)).
* Reresolve the IP of the `format_avro_schema_registry_url` in case of errors. [#16985](https://github.com/ClickHouse/ClickHouse/pull/16985) ([filimonov](https://github.com/filimonov)).
* Mask password in data_path in the system.distribution_queue. [#16727](https://github.com/ClickHouse/ClickHouse/pull/16727) ([Azat Khuzhin](https://github.com/azat)).
* Throw an error when a column transformer replaces a non-existing column. [#16183](https://github.com/ClickHouse/ClickHouse/pull/16183) ([hexiaoting](https://github.com/hexiaoting)).
* Turn off parallel parsing when there is not enough memory for all threads to work simultaneously. Also, there could be exceptions like "Memory limit exceeded" when somebody tries to insert extremely huge rows (> min_chunk_bytes_for_parallel_parsing), because each piece to parse has to be an independent set of strings (one or more). [#16721](https://github.com/ClickHouse/ClickHouse/pull/16721) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Install script should always create subdirs in config folders. This is only relevant for Docker build with custom config. [#16936](https://github.com/ClickHouse/ClickHouse/pull/16936) ([filimonov](https://github.com/filimonov)).
* Correct grammar in error message in JSONEachRow, JSONCompactEachRow, and RegexpRow input formats. [#17205](https://github.com/ClickHouse/ClickHouse/pull/17205) ([nico piderman](https://github.com/sneako)).
* Set default `host` and `port` parameters for `SOURCE(CLICKHOUSE(...))` to the current instance and set the default `user` value to `'default'` (see the sketch after this list). [#16997](https://github.com/ClickHouse/ClickHouse/pull/16997) ([vdimir](https://github.com/vdimir)).
* Throw an informative error message when doing ATTACH/DETACH TABLE <DICTIONARY>. Before this PR, `detach table <dict>` worked but led to ill-formed in-memory metadata. [#16885](https://github.com/ClickHouse/ClickHouse/pull/16885) ([Amos Bird](https://github.com/amosbird)).
* Add cutToFirstSignificantSubdomainWithWWW(). [#16845](https://github.com/ClickHouse/ClickHouse/pull/16845) ([Azat Khuzhin](https://github.com/azat)).
* Server refuses to start up with an exception message if a wrong configuration is given (`metric_log`.`collect_interval_milliseconds` is missing). [#16815](https://github.com/ClickHouse/ClickHouse/pull/16815) ([Ivan](https://github.com/abyss7)).
* Better exception message when configuration for distributed DDL is absent. This fixes [#5075](https://github.com/ClickHouse/ClickHouse/issues/5075). [#16769](https://github.com/ClickHouse/ClickHouse/pull/16769) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Usability improvement: better suggestions in syntax error message when `CODEC` expression is misplaced in `CREATE TABLE` query. This fixes [#12493](https://github.com/ClickHouse/ClickHouse/issues/12493). [#16768](https://github.com/ClickHouse/ClickHouse/pull/16768) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Remove empty directories for async INSERT at start of Distributed engine. [#16729](https://github.com/ClickHouse/ClickHouse/pull/16729) ([Azat Khuzhin](https://github.com/azat)).
* Workaround for using S3 with an nginx server as a proxy. Nginx currently does not accept URLs with an empty path like `http://domain.com?delete`, but vanilla aws-sdk-cpp produces this kind of URLs. This commit uses a patched aws-sdk-cpp version, which makes URLs with "/" as the path in such cases, like `http://domain.com/?delete`. [#16709](https://github.com/ClickHouse/ClickHouse/pull/16709) ([ianton-ru](https://github.com/ianton-ru)).
* Allow `reinterpretAs*` functions to work for integers and floats of the same size. Implements [#16640](https://github.com/ClickHouse/ClickHouse/issues/16640). [#16657](https://github.com/ClickHouse/ClickHouse/pull/16657) ([flynn](https://github.com/ucasFL)).
* Now, `<auxiliary_zookeepers>` configuration can be changed in `config.xml` and reloaded without server startup. [#16627](https://github.com/ClickHouse/ClickHouse/pull/16627) ([Amos Bird](https://github.com/amosbird)).
* Support SNI in https connections to remote resources. This will allow to connect to Cloudflare servers that require SNI. This fixes [#10055](https://github.com/ClickHouse/ClickHouse/issues/10055). [#16252](https://github.com/ClickHouse/ClickHouse/pull/16252) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Make it possible to connect to `clickhouse-server` secure endpoint which requires SNI. This is possible when `clickhouse-server` is hosted behind TLS proxy. [#16938](https://github.com/ClickHouse/ClickHouse/pull/16938) ([filimonov](https://github.com/filimonov)).
* Fix possible stack overflow if a loop of materialized views is created. This closes [#15732](https://github.com/ClickHouse/ClickHouse/issues/15732). [#16048](https://github.com/ClickHouse/ClickHouse/pull/16048) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Simplify the implementation of background tasks processing for the MergeTree table engines family. There should be no visible changes for user. [#15983](https://github.com/ClickHouse/ClickHouse/pull/15983) ([alesapin](https://github.com/alesapin)).
* Improvement for MaterializeMySQL (experimental feature). Throw an exception about the required sync privileges when the MySQL sync user has wrong privileges. [#15977](https://github.com/ClickHouse/ClickHouse/pull/15977) ([TCeason](https://github.com/TCeason)).
* Made `indexOf()` use BloomFilter. [#14977](https://github.com/ClickHouse/ClickHouse/pull/14977) ([achimbab](https://github.com/achimbab)).
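A sketch of a DDL dictionary relying on the new `SOURCE(CLICKHOUSE(...))` defaults from [#16997](https://github.com/ClickHouse/ClickHouse/pull/16997): `host`/`port` fall back to the current instance and `user` to `'default'`. The dictionary and source table names are hypothetical:

```sql
CREATE DICTIONARY dict_example
(
    key UInt64,
    value String
)
PRIMARY KEY key
-- No HOST/PORT/USER given: the current instance and the 'default' user are assumed.
SOURCE(CLICKHOUSE(DB 'default' TABLE 'source_table'))
LAYOUT(FLAT())
LIFETIME(MIN 0 MAX 300);
```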
#### Performance Improvement
* Use the Floyd-Rivest algorithm; it is the best for the ClickHouse use case of partial sorting. Benchmarks are in https://github.com/danlark1/miniselect and [here](https://drive.google.com/drive/folders/1DHEaeXgZuX6AJ9eByeZ8iQVQv0ueP8XM). [#16825](https://github.com/ClickHouse/ClickHouse/pull/16825) ([Danila Kutenin](https://github.com/danlark1)).
* Now the `ReplicatedMergeTree` family of table engines uses a separate thread pool for replicated fetches. The size of the pool is limited by the setting `background_fetches_pool_size`, which can be tuned with a server restart. The default value of the setting is 3, meaning that the maximum number of parallel fetches is 3 (which allows to utilize a 10G network). Fixes #520. [#16390](https://github.com/ClickHouse/ClickHouse/pull/16390) ([alesapin](https://github.com/alesapin)).
* Fixed uncontrolled growth of the state of `quantileTDigest`. [#16680](https://github.com/ClickHouse/ClickHouse/pull/16680) ([hrissan](https://github.com/hrissan)).
* Add `VIEW` subquery description to `EXPLAIN`. Limit push down optimisation for `VIEW`. Add local replicas of `Distributed` to query plan. [#14936](https://github.com/ClickHouse/ClickHouse/pull/14936) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix optimize_read_in_order/optimize_aggregation_in_order with max_threads > 0 and expression in ORDER BY. [#16637](https://github.com/ClickHouse/ClickHouse/pull/16637) ([Azat Khuzhin](https://github.com/azat)).
* Fix performance of reading from `Merge` tables over huge number of `MergeTree` tables. Fixes [#7748](https://github.com/ClickHouse/ClickHouse/issues/7748). [#16988](https://github.com/ClickHouse/ClickHouse/pull/16988) ([Anton Popov](https://github.com/CurtizJ)).
* Now we can safely prune partitions with an exact match. Useful case: suppose a table is partitioned by `intHash64(x) % 100` and the query has a condition on `intHash64(x) % 100` verbatim, not on x (see the sketch below). [#16253](https://github.com/ClickHouse/ClickHouse/pull/16253) ([Amos Bird](https://github.com/amosbird)).
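A sketch of the exact-match pruning case described in the last item; the table is hypothetical:

```sql
CREATE TABLE events
(
    x UInt64,
    payload String
)
ENGINE = MergeTree
PARTITION BY intHash64(x) % 100
ORDER BY x;

-- The condition repeats the partition expression verbatim, so partitions are pruned exactly.
SELECT count() FROM events WHERE intHash64(x) % 100 = 42;
```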
#### Experimental Feature
* Add `EmbeddedRocksDB` table engine (can be used for dictionaries). [#15073](https://github.com/ClickHouse/ClickHouse/pull/15073) ([sundyli](https://github.com/sundy-li)).
#### Build/Testing/Packaging Improvement
* Improvements in test coverage building images. [#17233](https://github.com/ClickHouse/ClickHouse/pull/17233) ([alesapin](https://github.com/alesapin)).
* Update embedded timezone data to version 2020d (also update cctz to the latest master). [#17204](https://github.com/ClickHouse/ClickHouse/pull/17204) ([filimonov](https://github.com/filimonov)).
* Fix UBSan report in Poco. This closes [#12719](https://github.com/ClickHouse/ClickHouse/issues/12719). [#16765](https://github.com/ClickHouse/ClickHouse/pull/16765) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Do not instrument 3rd-party libraries with UBSan. [#16764](https://github.com/ClickHouse/ClickHouse/pull/16764) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix UBSan report in cache dictionaries. This closes [#12641](https://github.com/ClickHouse/ClickHouse/issues/12641). [#16763](https://github.com/ClickHouse/ClickHouse/pull/16763) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix UBSan report when trying to convert infinite floating point number to integer. This closes [#14190](https://github.com/ClickHouse/ClickHouse/issues/14190). [#16677](https://github.com/ClickHouse/ClickHouse/pull/16677) ([alexey-milovidov](https://github.com/alexey-milovidov)).
## ClickHouse release 20.11
### ClickHouse release v20.11.3.3-stable, 2020-11-13
@ -15,7 +138,7 @@
* Restrict the use of non-comparable data types (like `AggregateFunction`) in keys (Sorting key, Primary key, Partition key, and so on). [#16601](https://github.com/ClickHouse/ClickHouse/pull/16601) ([alesapin](https://github.com/alesapin)).
* Remove `ANALYZE` and `AST` queries, and make the setting `enable_debug_queries` obsolete since now it is the part of full featured `EXPLAIN` query. [#16536](https://github.com/ClickHouse/ClickHouse/pull/16536) ([Ivan](https://github.com/abyss7)).
* Aggregate functions `boundingRatio`, `rankCorr`, `retention`, `timeSeriesGroupSum`, `timeSeriesGroupRateSum`, `windowFunnel` were erroneously made case-insensitive. Now their names are made case sensitive as designed. Only functions that are specified in SQL standard or made for compatibility with other DBMS or functions similar to those should be case-insensitive. [#16407](https://github.com/ClickHouse/ClickHouse/pull/16407) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Make `rankCorr` function return nan on insufficient data https://github.com/ClickHouse/ClickHouse/issues/16124. [#16135](https://github.com/ClickHouse/ClickHouse/pull/16135) ([hexiaoting](https://github.com/hexiaoting)).
* Make `rankCorr` function return nan on insufficient data [#16124](https://github.com/ClickHouse/ClickHouse/issues/16124). [#16135](https://github.com/ClickHouse/ClickHouse/pull/16135) ([hexiaoting](https://github.com/hexiaoting)).
* When upgrading from versions older than 20.5: if a rolling update is performed and the cluster contains both versions 20.5-or-greater and less-than-20.5, and ClickHouse nodes with old versions are restarted while newer versions are already running, it may lead to `Part ... intersects previous part` errors. To prevent this error, first install newer clickhouse-server packages on all cluster nodes and then do restarts (so, when clickhouse-server is restarted, it will start up with the new version).
#### New Feature
@ -33,7 +156,7 @@
* Now we can provide identifiers via query parameters, and these parameters can be used as table objects or columns (see the sketch after this list). [#16594](https://github.com/ClickHouse/ClickHouse/pull/16594) ([Amos Bird](https://github.com/amosbird)).
* Added big integers (UInt256, Int128, Int256) and UUID data types support for MergeTree BloomFilter index. Big integers are an experimental feature. [#16642](https://github.com/ClickHouse/ClickHouse/pull/16642) ([Maksim Kita](https://github.com/kitaisreal)).
* Add `farmFingerprint64` function (non-cryptographic string hashing). [#16570](https://github.com/ClickHouse/ClickHouse/pull/16570) ([Jacob Hayes](https://github.com/JacobHayes)).
* Add `log_queries_min_query_duration_ms`, only queries slower then the value of this setting will go to `query_log`/`query_thread_log` (i.e. something like `slow_query_log` in mysql). [#16529](https://github.com/ClickHouse/ClickHouse/pull/16529) ([Azat Khuzhin](https://github.com/azat)).
* Add `log_queries_min_query_duration_ms`, only queries slower than the value of this setting will go to `query_log`/`query_thread_log` (i.e. something like `slow_query_log` in mysql). [#16529](https://github.com/ClickHouse/ClickHouse/pull/16529) ([Azat Khuzhin](https://github.com/azat)).
* Ability to create a docker image on the top of `Alpine`. Uses precompiled binary and glibc components from ubuntu 20.04. [#16479](https://github.com/ClickHouse/ClickHouse/pull/16479) ([filimonov](https://github.com/filimonov)).
* Added `toUUIDOrNull`, `toUUIDOrZero` cast functions. [#16337](https://github.com/ClickHouse/ClickHouse/pull/16337) ([Maksim Kita](https://github.com/kitaisreal)).
* Add `max_concurrent_queries_for_all_users` setting, see [#6636](https://github.com/ClickHouse/ClickHouse/issues/6636) for use cases. [#16154](https://github.com/ClickHouse/ClickHouse/pull/16154) ([nvartolomei](https://github.com/nvartolomei)).
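A sketch of identifier substitution via query parameters from [#16594](https://github.com/ClickHouse/ClickHouse/pull/16594); the table name and the `--param_table` invocation are assumptions for illustration:

```sql
-- Assumed invocation: clickhouse-client --param_table='hits' --query "SELECT count() FROM {table:Identifier}"
SELECT count() FROM {table:Identifier};
```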
@ -178,7 +301,7 @@
* Add `JSONStrings` format which outputs data in arrays of strings. [#14333](https://github.com/ClickHouse/ClickHouse/pull/14333) ([hcz](https://github.com/hczhcz)).
* Add support for "Raw" column format for `Regexp` format. It allows to simply extract subpatterns as a whole without any escaping rules. [#15363](https://github.com/ClickHouse/ClickHouse/pull/15363) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Allow configurable `NULL` representation for `TSV` output format. It is controlled by the setting `output_format_tsv_null_representation` which is `\N` by default. This closes [#9375](https://github.com/ClickHouse/ClickHouse/issues/9375). Note that the setting only controls the output format, and `\N` is the only supported `NULL` representation for the `TSV` input format (see the sketch after this list). [#14586](https://github.com/ClickHouse/ClickHouse/pull/14586) ([Kruglov Pavel](https://github.com/Avogar)).
* Support Decimal data type for `MaterializedMySQL`. `MaterializedMySQL` is an experimental feature. [#14535](https://github.com/ClickHouse/ClickHouse/pull/14535) ([Winter Zhang](https://github.com/zhang2014)).
* Support Decimal data type for `MaterializeMySQL`. `MaterializeMySQL` is an experimental feature. [#14535](https://github.com/ClickHouse/ClickHouse/pull/14535) ([Winter Zhang](https://github.com/zhang2014)).
* Add new feature: `SHOW DATABASES LIKE 'xxx'`. [#14521](https://github.com/ClickHouse/ClickHouse/pull/14521) ([hexiaoting](https://github.com/hexiaoting)).
* Added a script to import (arbitrary) git repository to ClickHouse as a sample dataset. [#14471](https://github.com/ClickHouse/ClickHouse/pull/14471) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Now insert statements can have asterisk (or variants) with column transformers in the column list. [#14453](https://github.com/ClickHouse/ClickHouse/pull/14453) ([Amos Bird](https://github.com/amosbird)).
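A sketch of the configurable `NULL` representation for `TSV` output mentioned above:

```sql
SET output_format_tsv_null_representation = 'NULL';
-- NULL values are printed as the string NULL instead of \N in TSV output.
SELECT NULL AS x, 1 AS y FORMAT TSV;
```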
@ -200,18 +323,18 @@
* Fix a very wrong code in TwoLevelStringHashTable implementation, which might lead to memory leak. [#16264](https://github.com/ClickHouse/ClickHouse/pull/16264) ([Amos Bird](https://github.com/amosbird)).
* Fix segfault in some cases of wrong aggregation in lambdas. [#16082](https://github.com/ClickHouse/ClickHouse/pull/16082) ([Anton Popov](https://github.com/CurtizJ)).
* Fix `ALTER MODIFY ... ORDER BY` query hang for `ReplicatedVersionedCollapsingMergeTree`. This fixes [#15980](https://github.com/ClickHouse/ClickHouse/issues/15980). [#16011](https://github.com/ClickHouse/ClickHouse/pull/16011) ([alesapin](https://github.com/alesapin)).
* `MaterializedMySQL` (experimental feature): Fix collate name & charset name parser and support `length = 0` for string type. [#16008](https://github.com/ClickHouse/ClickHouse/pull/16008) ([Winter Zhang](https://github.com/zhang2014)).
* `MaterializeMySQL` (experimental feature): Fix collate name & charset name parser and support `length = 0` for string type. [#16008](https://github.com/ClickHouse/ClickHouse/pull/16008) ([Winter Zhang](https://github.com/zhang2014)).
* Allow to use `direct` layout for dictionaries with complex keys. [#16007](https://github.com/ClickHouse/ClickHouse/pull/16007) ([Anton Popov](https://github.com/CurtizJ)).
* Prevent replica hang for 5-10 mins when replication error happens after a period of inactivity. [#15987](https://github.com/ClickHouse/ClickHouse/pull/15987) ([filimonov](https://github.com/filimonov)).
* Fix rare segfaults when inserting into or selecting from MaterializedView and concurrently dropping target table (for Atomic database engine). [#15984](https://github.com/ClickHouse/ClickHouse/pull/15984) ([tavplubix](https://github.com/tavplubix)).
* Fix ambiguity in parsing of settings profiles: `CREATE USER ... SETTINGS profile readonly` is now considered as using a profile named `readonly`, not a setting named `profile` with the readonly constraint. This fixes [#15628](https://github.com/ClickHouse/ClickHouse/issues/15628). [#15982](https://github.com/ClickHouse/ClickHouse/pull/15982) ([Vitaly Baranov](https://github.com/vitlibar)).
* `MaterializedMySQL` (experimental feature): Fix crash on create database failure. [#15954](https://github.com/ClickHouse/ClickHouse/pull/15954) ([Winter Zhang](https://github.com/zhang2014)).
* `MaterializeMySQL` (experimental feature): Fix crash on create database failure. [#15954](https://github.com/ClickHouse/ClickHouse/pull/15954) ([Winter Zhang](https://github.com/zhang2014)).
* Fixed `DROP TABLE IF EXISTS` failure with `Table ... doesn't exist` error when table is concurrently renamed (for Atomic database engine). Fixed rare deadlock when concurrently executing some DDL queries with multiple tables (like `DROP DATABASE` and `RENAME TABLE`) - Fixed `DROP/DETACH DATABASE` failure with `Table ... doesn't exist` when concurrently executing `DROP/DETACH TABLE`. [#15934](https://github.com/ClickHouse/ClickHouse/pull/15934) ([tavplubix](https://github.com/tavplubix)).
* Fix incorrect empty result for query from `Distributed` table if query has `WHERE`, `PREWHERE` and `GLOBAL IN`. Fixes [#15792](https://github.com/ClickHouse/ClickHouse/issues/15792). [#15933](https://github.com/ClickHouse/ClickHouse/pull/15933) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixes [#12513](https://github.com/ClickHouse/ClickHouse/issues/12513): difference expressions with same alias when query is reanalyzed. [#15886](https://github.com/ClickHouse/ClickHouse/pull/15886) ([Winter Zhang](https://github.com/zhang2014)).
* Fix possible very rare deadlocks in RBAC implementation. [#15875](https://github.com/ClickHouse/ClickHouse/pull/15875) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix exception `Block structure mismatch` in `SELECT ... ORDER BY DESC` queries which were executed after `ALTER MODIFY COLUMN` query. Fixes [#15800](https://github.com/ClickHouse/ClickHouse/issues/15800). [#15852](https://github.com/ClickHouse/ClickHouse/pull/15852) ([alesapin](https://github.com/alesapin)).
* `MaterializedMySQL` (experimental feature): Fix `select count()` inaccuracy. [#15767](https://github.com/ClickHouse/ClickHouse/pull/15767) ([tavplubix](https://github.com/tavplubix)).
* `MaterializeMySQL` (experimental feature): Fix `select count()` inaccuracy. [#15767](https://github.com/ClickHouse/ClickHouse/pull/15767) ([tavplubix](https://github.com/tavplubix)).
* Fix some cases of queries, in which only virtual columns are selected. Previously `Not found column _nothing in block` exception may be thrown. Fixes [#12298](https://github.com/ClickHouse/ClickHouse/issues/12298). [#15756](https://github.com/ClickHouse/ClickHouse/pull/15756) ([Anton Popov](https://github.com/CurtizJ)).
* Fix drop of materialized view with inner table in Atomic database (hangs all subsequent DROP TABLE due to hang of the worker thread, due to recursive DROP TABLE for inner table of MV). [#15743](https://github.com/ClickHouse/ClickHouse/pull/15743) ([Azat Khuzhin](https://github.com/azat)).
* Possibility to move a part to another disk/volume if the first attempt failed. [#15723](https://github.com/ClickHouse/ClickHouse/pull/15723) ([Pavel Kovalenko](https://github.com/Jokser)).
@ -243,37 +366,37 @@
* Fix hang of queries with a lot of subqueries to the same table of `MySQL` engine. Previously, if there were more than 16 subqueries to the same `MySQL` table in a query, it hung forever. [#15299](https://github.com/ClickHouse/ClickHouse/pull/15299) ([Anton Popov](https://github.com/CurtizJ)).
* Fix MSan report in QueryLog. Uninitialized memory can be used for the field `memory_usage`. [#15258](https://github.com/ClickHouse/ClickHouse/pull/15258) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix 'Unknown identifier' in GROUP BY when query has JOIN over Merge table. [#15242](https://github.com/ClickHouse/ClickHouse/pull/15242) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix instance crash when using `joinGet` with `LowCardinality` types. This fixes https://github.com/ClickHouse/ClickHouse/issues/15214. [#15220](https://github.com/ClickHouse/ClickHouse/pull/15220) ([Amos Bird](https://github.com/amosbird)).
* Fix instance crash when using `joinGet` with `LowCardinality` types. This fixes [#15214](https://github.com/ClickHouse/ClickHouse/issues/15214). [#15220](https://github.com/ClickHouse/ClickHouse/pull/15220) ([Amos Bird](https://github.com/amosbird)).
* Fix bug in table engine `Buffer` which doesn't allow to insert data of new structure into `Buffer` after `ALTER` query. Fixes [#15117](https://github.com/ClickHouse/ClickHouse/issues/15117). [#15192](https://github.com/ClickHouse/ClickHouse/pull/15192) ([alesapin](https://github.com/alesapin)).
* Adjust Decimal field size in MySQL column definition packet. [#15152](https://github.com/ClickHouse/ClickHouse/pull/15152) ([maqroll](https://github.com/maqroll)).
* Fixes `Data compressed with different methods` in `join_algorithm='auto'`. Keep LowCardinality as type for left table join key in `join_algorithm='partial_merge'`. [#15088](https://github.com/ClickHouse/ClickHouse/pull/15088) ([Artem Zuikov](https://github.com/4ertus2)).
* Update `jemalloc` to fix `percpu_arena` with affinity mask. [#15035](https://github.com/ClickHouse/ClickHouse/pull/15035) ([Azat Khuzhin](https://github.com/azat)). [#14957](https://github.com/ClickHouse/ClickHouse/pull/14957) ([Azat Khuzhin](https://github.com/azat)).
* We already use padded comparison between String and FixedString (https://github.com/ClickHouse/ClickHouse/blob/master/src/Functions/FunctionsComparison.h#L333). This PR applies the same logic to field comparison which corrects the usage of FixedString as primary keys. This fixes https://github.com/ClickHouse/ClickHouse/issues/14908. [#15033](https://github.com/ClickHouse/ClickHouse/pull/15033) ([Amos Bird](https://github.com/amosbird)).
* We already use padded comparison between String and FixedString (https://github.com/ClickHouse/ClickHouse/blob/master/src/Functions/FunctionsComparison.h#L333). This PR applies the same logic to field comparison which corrects the usage of FixedString as primary keys. This fixes [#14908](https://github.com/ClickHouse/ClickHouse/issues/14908). [#15033](https://github.com/ClickHouse/ClickHouse/pull/15033) ([Amos Bird](https://github.com/amosbird)).
* If function `bar` was called with specifically crafted arguments, buffer overflow was possible. This closes [#13926](https://github.com/ClickHouse/ClickHouse/issues/13926). [#15028](https://github.com/ClickHouse/ClickHouse/pull/15028) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed `Cannot rename ... errno: 22, strerror: Invalid argument` error on DDL query execution in Atomic database when running clickhouse-server in Docker on Mac OS. [#15024](https://github.com/ClickHouse/ClickHouse/pull/15024) ([tavplubix](https://github.com/tavplubix)).
* Fix crash in RIGHT or FULL JOIN with `join_algorithm='auto'` when the memory limit is exceeded and we should switch from HashJoin to MergeJoin. [#15002](https://github.com/ClickHouse/ClickHouse/pull/15002) ([Artem Zuikov](https://github.com/4ertus2)).
* Now settings `number_of_free_entries_in_pool_to_execute_mutation` and `number_of_free_entries_in_pool_to_lower_max_size_of_merge` can be equal to `background_pool_size`. [#14975](https://github.com/ClickHouse/ClickHouse/pull/14975) ([alesapin](https://github.com/alesapin)).
* Fix to make predicate push down work when subquery contains `finalizeAggregation` function. Fixes [#14847](https://github.com/ClickHouse/ClickHouse/issues/14847). [#14937](https://github.com/ClickHouse/ClickHouse/pull/14937) ([filimonov](https://github.com/filimonov)).
* Publish CPU frequencies per logical core in `system.asynchronous_metrics`. This fixes https://github.com/ClickHouse/ClickHouse/issues/14923. [#14924](https://github.com/ClickHouse/ClickHouse/pull/14924) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* `MaterializedMySQL` (experimental feature): Fixed `.metadata.tmp File exists` error. [#14898](https://github.com/ClickHouse/ClickHouse/pull/14898) ([Winter Zhang](https://github.com/zhang2014)).
* Publish CPU frequencies per logical core in `system.asynchronous_metrics`. This fixes [#14923](https://github.com/ClickHouse/ClickHouse/issues/14923). [#14924](https://github.com/ClickHouse/ClickHouse/pull/14924) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* `MaterializeMySQL` (experimental feature): Fixed `.metadata.tmp File exists` error. [#14898](https://github.com/ClickHouse/ClickHouse/pull/14898) ([Winter Zhang](https://github.com/zhang2014)).
* Fix the issue when some invocations of `extractAllGroups` function may trigger "Memory limit exceeded" error. This fixes [#13383](https://github.com/ClickHouse/ClickHouse/issues/13383). [#14889](https://github.com/ClickHouse/ClickHouse/pull/14889) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix SIGSEGV for an attempt to INSERT into StorageFile with file descriptor. [#14887](https://github.com/ClickHouse/ClickHouse/pull/14887) ([Azat Khuzhin](https://github.com/azat)).
* Fixed segfault in `cache` dictionary [#14837](https://github.com/ClickHouse/ClickHouse/issues/14837). [#14879](https://github.com/ClickHouse/ClickHouse/pull/14879) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* `MaterializedMySQL` (experimental feature): Fixed bug in parsing MySQL binlog events, which causes `Attempt to read after eof` and `Packet payload is not fully read` in `MaterializeMySQL` database engine. [#14852](https://github.com/ClickHouse/ClickHouse/pull/14852) ([Winter Zhang](https://github.com/zhang2014)).
* `MaterializeMySQL` (experimental feature): Fixed bug in parsing MySQL binlog events, which causes `Attempt to read after eof` and `Packet payload is not fully read` in `MaterializeMySQL` database engine. [#14852](https://github.com/ClickHouse/ClickHouse/pull/14852) ([Winter Zhang](https://github.com/zhang2014)).
* Fix rare error in `SELECT` queries when the queried column has a `DEFAULT` expression which depends on another column which also has `DEFAULT` and is not present in the select query and does not exist on disk. Partially fixes [#14531](https://github.com/ClickHouse/ClickHouse/issues/14531). [#14845](https://github.com/ClickHouse/ClickHouse/pull/14845) ([alesapin](https://github.com/alesapin)).
* Fix a problem where the server may get stuck on startup while talking to ZooKeeper, if the configuration files have to be fetched from ZK (using the `from_zk` include option). This fixes [#14814](https://github.com/ClickHouse/ClickHouse/issues/14814). [#14843](https://github.com/ClickHouse/ClickHouse/pull/14843) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Fix wrong monotonicity detection for shrunk `Int -> Int` cast of signed types. It might lead to incorrect query result. This bug is unveiled in [#14513](https://github.com/ClickHouse/ClickHouse/issues/14513). [#14783](https://github.com/ClickHouse/ClickHouse/pull/14783) ([Amos Bird](https://github.com/amosbird)).
* `Replace` column transformer should replace identifiers with cloned ASTs. This fixes https://github.com/ClickHouse/ClickHouse/issues/14695 . [#14734](https://github.com/ClickHouse/ClickHouse/pull/14734) ([Amos Bird](https://github.com/amosbird)).
* `Replace` column transformer should replace identifiers with cloned ASTs. This fixes [#14695](https://github.com/ClickHouse/ClickHouse/issues/14695) . [#14734](https://github.com/ClickHouse/ClickHouse/pull/14734) ([Amos Bird](https://github.com/amosbird)).
* Fixed missed default database name in metadata of materialized view when executing `ALTER ... MODIFY QUERY`. [#14664](https://github.com/ClickHouse/ClickHouse/pull/14664) ([tavplubix](https://github.com/tavplubix)).
* Fix bug when `ALTER UPDATE` mutation with `Nullable` column in assignment expression and constant value (like `UPDATE x = 42`) leads to incorrect value in column or segfault. Fixes [#13634](https://github.com/ClickHouse/ClickHouse/issues/13634), [#14045](https://github.com/ClickHouse/ClickHouse/issues/14045). [#14646](https://github.com/ClickHouse/ClickHouse/pull/14646) ([alesapin](https://github.com/alesapin)).
* Fix wrong Decimal multiplication result caused wrong decimal scale of result column. [#14603](https://github.com/ClickHouse/ClickHouse/pull/14603) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix function `has` with `LowCardinality` of `Nullable`. [#14591](https://github.com/ClickHouse/ClickHouse/pull/14591) ([Mike](https://github.com/myrrc)).
* Cleanup data directory after Zookeeper exceptions during CreateQuery for StorageReplicatedMergeTree Engine. [#14563](https://github.com/ClickHouse/ClickHouse/pull/14563) ([Bharat Nallan](https://github.com/bharatnc)).
* Fix rare segfaults in functions with combinator `-Resample`, which could appear in result of overflow with very large parameters. [#14562](https://github.com/ClickHouse/ClickHouse/pull/14562) ([Anton Popov](https://github.com/CurtizJ)).
* Fix a bug when converting `Nullable(String)` to Enum. Introduced by https://github.com/ClickHouse/ClickHouse/pull/12745. This fixes https://github.com/ClickHouse/ClickHouse/issues/14435. [#14530](https://github.com/ClickHouse/ClickHouse/pull/14530) ([Amos Bird](https://github.com/amosbird)).
* Fix a bug when converting `Nullable(String)` to Enum. Introduced by [#12745](https://github.com/ClickHouse/ClickHouse/pull/12745). This fixes [#14435](https://github.com/ClickHouse/ClickHouse/issues/14435). [#14530](https://github.com/ClickHouse/ClickHouse/pull/14530) ([Amos Bird](https://github.com/amosbird)).
* Fixed the incorrect sorting order of `Nullable` column. This fixes [#14344](https://github.com/ClickHouse/ClickHouse/issues/14344). [#14495](https://github.com/ClickHouse/ClickHouse/pull/14495) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix `currentDatabase()` function cannot be used in `ON CLUSTER` ddl query. [#14211](https://github.com/ClickHouse/ClickHouse/pull/14211) ([Winter Zhang](https://github.com/zhang2014)).
* `MaterializedMySQL` (experimental feature): Fixed `Packet payload is not fully read` error in `MaterializeMySQL` database engine. [#14696](https://github.com/ClickHouse/ClickHouse/pull/14696) ([BohuTANG](https://github.com/BohuTANG)).
* `MaterializeMySQL` (experimental feature): Fixed `Packet payload is not fully read` error in `MaterializeMySQL` database engine. [#14696](https://github.com/ClickHouse/ClickHouse/pull/14696) ([BohuTANG](https://github.com/BohuTANG)).
#### Improvement
@ -308,7 +431,7 @@
* Add an option to skip access checks for `DiskS3`. `s3` disk is an experimental feature. [#14497](https://github.com/ClickHouse/ClickHouse/pull/14497) ([Pavel Kovalenko](https://github.com/Jokser)).
* Speed up server shutdown process if there are ongoing S3 requests. [#14496](https://github.com/ClickHouse/ClickHouse/pull/14496) ([Pavel Kovalenko](https://github.com/Jokser)).
* `SYSTEM RELOAD CONFIG` now throws an exception if failed to reload and continues using the previous users.xml. The background periodic reloading also continues using the previous users.xml if failed to reload. [#14492](https://github.com/ClickHouse/ClickHouse/pull/14492) ([Vitaly Baranov](https://github.com/vitlibar)).
* For INSERTs with inline data in VALUES format in the script mode of `clickhouse-client`, support semicolon as the data terminator, in addition to the new line. Closes https://github.com/ClickHouse/ClickHouse/issues/12288. [#13192](https://github.com/ClickHouse/ClickHouse/pull/13192) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* For INSERTs with inline data in VALUES format in the script mode of `clickhouse-client`, support semicolon as the data terminator, in addition to the new line. Closes [#12288](https://github.com/ClickHouse/ClickHouse/issues/12288). [#13192](https://github.com/ClickHouse/ClickHouse/pull/13192) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Support custom codecs in compact parts. [#12183](https://github.com/ClickHouse/ClickHouse/pull/12183) ([Anton Popov](https://github.com/CurtizJ)).
#### Performance Improvement
@ -320,7 +443,7 @@
* Improve performance of 256-bit types using (u)int64_t as base type for wide integers. Original wide integers use 8-bit types as base. [#14859](https://github.com/ClickHouse/ClickHouse/pull/14859) ([Artem Zuikov](https://github.com/4ertus2)).
* Explicitly use a temporary disk to store vertical merge temporary data. [#15639](https://github.com/ClickHouse/ClickHouse/pull/15639) ([Grigory Pervakov](https://github.com/GrigoryPervakov)).
* Use one S3 DeleteObjects request instead of multiple DeleteObject in a loop. No any functionality changes, so covered by existing tests like integration/test_log_family_s3. [#15238](https://github.com/ClickHouse/ClickHouse/pull/15238) ([ianton-ru](https://github.com/ianton-ru)).
* Fix `DateTime <op> DateTime` mistakenly choosing the slow generic implementation. This fixes https://github.com/ClickHouse/ClickHouse/issues/15153. [#15178](https://github.com/ClickHouse/ClickHouse/pull/15178) ([Amos Bird](https://github.com/amosbird)).
* Fix `DateTime <op> DateTime` mistakenly choosing the slow generic implementation. This fixes [#15153](https://github.com/ClickHouse/ClickHouse/issues/15153). [#15178](https://github.com/ClickHouse/ClickHouse/pull/15178) ([Amos Bird](https://github.com/amosbird)).
* Improve performance of GROUP BY key of type `FixedString`. [#15034](https://github.com/ClickHouse/ClickHouse/pull/15034) ([Amos Bird](https://github.com/amosbird)).
* Only `mlock` the code segment when starting clickhouse-server. In previous versions, all mapped regions were locked in memory, including debug info. Debug info is usually split into a separate file, but if it isn't, this led to +2..3 GiB memory usage. [#14929](https://github.com/ClickHouse/ClickHouse/pull/14929) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* The ClickHouse binary becomes smaller due to link-time optimization.
@ -387,7 +510,7 @@
* Allow to use direct layout for dictionaries with complex keys. [#16007](https://github.com/ClickHouse/ClickHouse/pull/16007) ([Anton Popov](https://github.com/CurtizJ)).
* Prevent replica hang for 5-10 mins when replication error happens after a period of inactivity. [#15987](https://github.com/ClickHouse/ClickHouse/pull/15987) ([filimonov](https://github.com/filimonov)).
* Fix rare segfaults when inserting into or selecting from MaterializedView and concurrently dropping target table (for Atomic database engine). [#15984](https://github.com/ClickHouse/ClickHouse/pull/15984) ([tavplubix](https://github.com/tavplubix)).
* Fix ambiguity in parsing of settings profiles: `CREATE USER ... SETTINGS profile readonly` is now considered as using a profile named `readonly`, not a setting named `profile` with the readonly constraint. This fixes https://github.com/ClickHouse/ClickHouse/issues/15628. [#15982](https://github.com/ClickHouse/ClickHouse/pull/15982) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix ambiguity in parsing of settings profiles: `CREATE USER ... SETTINGS profile readonly` is now considered as using a profile named `readonly`, not a setting named `profile` with the readonly constraint. This fixes [#15628](https://github.com/ClickHouse/ClickHouse/issues/15628). [#15982](https://github.com/ClickHouse/ClickHouse/pull/15982) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix a crash when database creation fails. [#15954](https://github.com/ClickHouse/ClickHouse/pull/15954) ([Winter Zhang](https://github.com/zhang2014)).
* Fixed `DROP TABLE IF EXISTS` failure with `Table ... doesn't exist` error when table is concurrently renamed (for Atomic database engine). Fixed rare deadlock when concurrently executing some DDL queries with multiple tables (like `DROP DATABASE` and `RENAME TABLE`) Fixed `DROP/DETACH DATABASE` failure with `Table ... doesn't exist` when concurrently executing `DROP/DETACH TABLE`. [#15934](https://github.com/ClickHouse/ClickHouse/pull/15934) ([tavplubix](https://github.com/tavplubix)).
* Fix incorrect empty result for query from `Distributed` table if query has `WHERE`, `PREWHERE` and `GLOBAL IN`. Fixes [#15792](https://github.com/ClickHouse/ClickHouse/issues/15792). [#15933](https://github.com/ClickHouse/ClickHouse/pull/15933) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
@ -398,7 +521,7 @@
* Fixed too low default value of `max_replicated_logs_to_keep` setting, which might cause replicas to become lost too often. Improve lost replica recovery process by choosing the most up-to-date replica to clone. Also do not remove old parts from lost replica, detach them instead. [#15701](https://github.com/ClickHouse/ClickHouse/pull/15701) ([tavplubix](https://github.com/tavplubix)).
* Fix error `Cannot add simple transform to empty Pipe` which happened while reading from `Buffer` table which has different structure than destination table. It was possible if destination table returned empty result for query. Fixes [#15529](https://github.com/ClickHouse/ClickHouse/issues/15529). [#15662](https://github.com/ClickHouse/ClickHouse/pull/15662) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed bug with globs in S3 table function, region from URL was not applied to S3 client configuration. [#15646](https://github.com/ClickHouse/ClickHouse/pull/15646) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Decrement the `ReadonlyReplica` metric when detaching read-only tables. This fixes https://github.com/ClickHouse/ClickHouse/issues/15598. [#15592](https://github.com/ClickHouse/ClickHouse/pull/15592) ([sundyli](https://github.com/sundy-li)).
* Decrement the `ReadonlyReplica` metric when detaching read-only tables. This fixes [#15598](https://github.com/ClickHouse/ClickHouse/issues/15598). [#15592](https://github.com/ClickHouse/ClickHouse/pull/15592) ([sundyli](https://github.com/sundy-li)).
* Throw an error when a single parameter is passed to ReplicatedMergeTree instead of ignoring it. [#15516](https://github.com/ClickHouse/ClickHouse/pull/15516) ([nvartolomei](https://github.com/nvartolomei)).
#### Improvement
@ -422,11 +545,11 @@
* Fix `Missing columns` errors when selecting columns which absent in data, but depend on other columns which also absent in data. Fixes [#15530](https://github.com/ClickHouse/ClickHouse/issues/15530). [#15532](https://github.com/ClickHouse/ClickHouse/pull/15532) ([alesapin](https://github.com/alesapin)).
* Fix bug with event subscription in DDLWorker which rarely may lead to query hangs in `ON CLUSTER`. Introduced in [#13450](https://github.com/ClickHouse/ClickHouse/issues/13450). [#15477](https://github.com/ClickHouse/ClickHouse/pull/15477) ([alesapin](https://github.com/alesapin)).
* Report proper error when the second argument of `boundingRatio` aggregate function has a wrong type. [#15407](https://github.com/ClickHouse/ClickHouse/pull/15407) ([detailyang](https://github.com/detailyang)).
* Fix bug where queries like SELECT toStartOfDay(today()) fail complaining about empty time_zone argument. [#15319](https://github.com/ClickHouse/ClickHouse/pull/15319) ([Bharat Nallan](https://github.com/bharatnc)).
* Fix bug where queries like `SELECT toStartOfDay(today())` fail complaining about empty time_zone argument. [#15319](https://github.com/ClickHouse/ClickHouse/pull/15319) ([Bharat Nallan](https://github.com/bharatnc)).
* Fix race condition during MergeTree table rename and background cleanup. [#15304](https://github.com/ClickHouse/ClickHouse/pull/15304) ([alesapin](https://github.com/alesapin)).
* Fix rare race condition on server startup when system.logs are enabled. [#15300](https://github.com/ClickHouse/ClickHouse/pull/15300) ([alesapin](https://github.com/alesapin)).
* Fix MSan report in QueryLog. Uninitialized memory can be used for the field `memory_usage`. [#15258](https://github.com/ClickHouse/ClickHouse/pull/15258) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix instance crash when using joinGet with LowCardinality types. This fixes https://github.com/ClickHouse/ClickHouse/issues/15214. [#15220](https://github.com/ClickHouse/ClickHouse/pull/15220) ([Amos Bird](https://github.com/amosbird)).
* Fix instance crash when using joinGet with LowCardinality types. This fixes [#15214](https://github.com/ClickHouse/ClickHouse/issues/15214). [#15220](https://github.com/ClickHouse/ClickHouse/pull/15220) ([Amos Bird](https://github.com/amosbird)).
* Fix bug in table engine `Buffer` which didn't allow inserting data with a new structure into `Buffer` after an `ALTER` query. Fixes [#15117](https://github.com/ClickHouse/ClickHouse/issues/15117). [#15192](https://github.com/ClickHouse/ClickHouse/pull/15192) ([alesapin](https://github.com/alesapin)).
* Adjust the decimals field size in the MySQL column definition packet. [#15152](https://github.com/ClickHouse/ClickHouse/pull/15152) ([maqroll](https://github.com/maqroll)).
* Fixed `Cannot rename ... errno: 22, strerror: Invalid argument` error on DDL query execution in Atomic database when running clickhouse-server in docker on Mac OS. [#15024](https://github.com/ClickHouse/ClickHouse/pull/15024) ([tavplubix](https://github.com/tavplubix)).
@ -455,10 +578,10 @@
* Fix bug when `ALTER UPDATE` mutation with Nullable column in assignment expression and constant value (like `UPDATE x = 42`) leads to incorrect value in column or segfault. Fixes [#13634](https://github.com/ClickHouse/ClickHouse/issues/13634), [#14045](https://github.com/ClickHouse/ClickHouse/issues/14045). [#14646](https://github.com/ClickHouse/ClickHouse/pull/14646) ([alesapin](https://github.com/alesapin)).
* Fix wrong Decimal multiplication result that caused a wrong decimal scale of the result column. [#14603](https://github.com/ClickHouse/ClickHouse/pull/14603) ([Artem Zuikov](https://github.com/4ertus2)).
* Fixed the incorrect sorting order of `Nullable` column. This fixes [#14344](https://github.com/ClickHouse/ClickHouse/issues/14344). [#14495](https://github.com/ClickHouse/ClickHouse/pull/14495) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fixed inconsistent comparison with primary key of type `FixedString` on index analysis if they're compared with a string of smaller size. This fixes https://github.com/ClickHouse/ClickHouse/issues/14908. [#15033](https://github.com/ClickHouse/ClickHouse/pull/15033) ([Amos Bird](https://github.com/amosbird)).
* Fixed inconsistent comparison with primary key of type `FixedString` on index analysis if they're compared with a string of smaller size. This fixes [#14908](https://github.com/ClickHouse/ClickHouse/issues/14908). [#15033](https://github.com/ClickHouse/ClickHouse/pull/15033) ([Amos Bird](https://github.com/amosbird)).
* Fix bug which leads to wrong merges assignment if table has partitions with a single part. [#14444](https://github.com/ClickHouse/ClickHouse/pull/14444) ([alesapin](https://github.com/alesapin)).
* If function `bar` was called with specifically crafted arguments, buffer overflow was possible. This closes [#13926](https://github.com/ClickHouse/ClickHouse/issues/13926). [#15028](https://github.com/ClickHouse/ClickHouse/pull/15028) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Publish CPU frequencies per logical core in `system.asynchronous_metrics`. This fixes https://github.com/ClickHouse/ClickHouse/issues/14923. [#14924](https://github.com/ClickHouse/ClickHouse/pull/14924) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Publish CPU frequencies per logical core in `system.asynchronous_metrics`. This fixes [#14923](https://github.com/ClickHouse/ClickHouse/issues/14923). [#14924](https://github.com/ClickHouse/ClickHouse/pull/14924) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Fixed `.metadata.tmp File exists` error when using `MaterializeMySQL` database engine. [#14898](https://github.com/ClickHouse/ClickHouse/pull/14898) ([Winter Zhang](https://github.com/zhang2014)).
* Fix the issue when some invocations of `extractAllGroups` function may trigger "Memory limit exceeded" error. This fixes [#13383](https://github.com/ClickHouse/ClickHouse/issues/13383). [#14889](https://github.com/ClickHouse/ClickHouse/pull/14889) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix SIGSEGV for an attempt to INSERT into StorageFile(fd). [#14887](https://github.com/ClickHouse/ClickHouse/pull/14887) ([Azat Khuzhin](https://github.com/azat)).
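For the `ALTER UPDATE` with a Nullable column entry above, a minimal sketch; table and column names are hypothetical:

```sql
-- Hypothetical table where x is Nullable and the mutation assigns a constant.
CREATE TABLE upd_demo (id UInt64, x Nullable(UInt32)) ENGINE = MergeTree ORDER BY id;
INSERT INTO upd_demo VALUES (1, NULL), (2, 7);

-- This kind of mutation previously could write an incorrect value or crash; it should simply set x = 42 in every row.
ALTER TABLE upd_demo UPDATE x = 42 WHERE 1;
```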
@ -501,7 +624,7 @@
#### Performance Improvement
* Optimize queries with LIMIT/LIMIT BY/ORDER BY for distributed with GROUP BY sharding_key (under optimize_skip_unused_shards and optimize_distributed_group_by_sharding_key). [#10373](https://github.com/ClickHouse/ClickHouse/pull/10373) ([Azat Khuzhin](https://github.com/azat)).
* Optimize queries with LIMIT/LIMIT BY/ORDER BY for distributed with GROUP BY sharding_key (under `optimize_skip_unused_shards` and `optimize_distributed_group_by_sharding_key`). [#10373](https://github.com/ClickHouse/ClickHouse/pull/10373) ([Azat Khuzhin](https://github.com/azat)).
* Creating sets for multiple `JOIN` and `IN` in parallel. It may slightly improve performance for queries with several different `IN subquery` expressions. [#14412](https://github.com/ClickHouse/ClickHouse/pull/14412) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Improve Kafka engine performance by providing independent thread for each consumer. Separate thread pool for streaming engines (like Kafka). [#13939](https://github.com/ClickHouse/ClickHouse/pull/13939) ([fastio](https://github.com/fastio)).
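A sketch of the query shape targeted by the distributed `LIMIT/ORDER BY` with `GROUP BY` sharding_key optimization above; only the two settings come from the entry itself, the table and sharding key are hypothetical:

```sql
SET optimize_skip_unused_shards = 1;
SET optimize_distributed_group_by_sharding_key = 1;

-- Hypothetical Distributed table sharded by user_id. Grouping by the sharding key lets
-- LIMIT/ORDER BY be executed on the shards instead of merging everything on the initiator.
SELECT user_id, count() AS hits
FROM dist_hits
GROUP BY user_id
ORDER BY hits DESC
LIMIT 10;
```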
@ -579,15 +702,15 @@
* Fix race condition during MergeTree table rename and background cleanup. [#15304](https://github.com/ClickHouse/ClickHouse/pull/15304) ([alesapin](https://github.com/alesapin)).
* Fix rare race condition on server startup when system.logs are enabled. [#15300](https://github.com/ClickHouse/ClickHouse/pull/15300) ([alesapin](https://github.com/alesapin)).
* Fix MSan report in QueryLog. Uninitialized memory can be used for the field `memory_usage`. [#15258](https://github.com/ClickHouse/ClickHouse/pull/15258) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix instance crash when using joinGet with LowCardinality types. This fixes https://github.com/ClickHouse/ClickHouse/issues/15214. [#15220](https://github.com/ClickHouse/ClickHouse/pull/15220) ([Amos Bird](https://github.com/amosbird)).
* Fix instance crash when using joinGet with LowCardinality types. This fixes [#15214](https://github.com/ClickHouse/ClickHouse/issues/15214). [#15220](https://github.com/ClickHouse/ClickHouse/pull/15220) ([Amos Bird](https://github.com/amosbird)).
* Fix bug in table engine `Buffer` which didn't allow inserting data with a new structure into `Buffer` after an `ALTER` query. Fixes [#15117](https://github.com/ClickHouse/ClickHouse/issues/15117). [#15192](https://github.com/ClickHouse/ClickHouse/pull/15192) ([alesapin](https://github.com/alesapin)).
* Adjust the decimals field size in the MySQL column definition packet. [#15152](https://github.com/ClickHouse/ClickHouse/pull/15152) ([maqroll](https://github.com/maqroll)).
* We already use padded comparison between String and FixedString (https://github.com/ClickHouse/ClickHouse/blob/master/src/Functions/FunctionsComparison.h#L333). This PR applies the same logic to field comparison which corrects the usage of FixedString as primary keys. This fixes https://github.com/ClickHouse/ClickHouse/issues/14908. [#15033](https://github.com/ClickHouse/ClickHouse/pull/15033) ([Amos Bird](https://github.com/amosbird)).
* If function `bar` was called with specifically crafter arguments, buffer overflow was possible. This closes [#13926](https://github.com/ClickHouse/ClickHouse/issues/13926). [#15028](https://github.com/ClickHouse/ClickHouse/pull/15028) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* We already use padded comparison between String and FixedString (https://github.com/ClickHouse/ClickHouse/blob/master/src/Functions/FunctionsComparison.h#L333). This PR applies the same logic to field comparison which corrects the usage of FixedString as primary keys. This fixes [#14908](https://github.com/ClickHouse/ClickHouse/issues/14908). [#15033](https://github.com/ClickHouse/ClickHouse/pull/15033) ([Amos Bird](https://github.com/amosbird)).
* If function `bar` was called with specifically crafted arguments, buffer overflow was possible. This closes [#13926](https://github.com/ClickHouse/ClickHouse/issues/13926). [#15028](https://github.com/ClickHouse/ClickHouse/pull/15028) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed `Cannot rename ... errno: 22, strerror: Invalid argument` error on DDL query execution in Atomic database when running clickhouse-server in docker on Mac OS. [#15024](https://github.com/ClickHouse/ClickHouse/pull/15024) ([tavplubix](https://github.com/tavplubix)).
* Now settings `number_of_free_entries_in_pool_to_execute_mutation` and `number_of_free_entries_in_pool_to_lower_max_size_of_merge` can be equal to `background_pool_size`. [#14975](https://github.com/ClickHouse/ClickHouse/pull/14975) ([alesapin](https://github.com/alesapin)).
* Fix to make predicate push down work when subquery contains finalizeAggregation function. Fixes [#14847](https://github.com/ClickHouse/ClickHouse/issues/14847). [#14937](https://github.com/ClickHouse/ClickHouse/pull/14937) ([filimonov](https://github.com/filimonov)).
* Publish CPU frequencies per logical core in `system.asynchronous_metrics`. This fixes https://github.com/ClickHouse/ClickHouse/issues/14923. [#14924](https://github.com/ClickHouse/ClickHouse/pull/14924) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Publish CPU frequencies per logical core in `system.asynchronous_metrics`. This fixes [#14923](https://github.com/ClickHouse/ClickHouse/issues/14923). [#14924](https://github.com/ClickHouse/ClickHouse/pull/14924) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Fixed `.metadata.tmp File exists` error when using `MaterializeMySQL` database engine. [#14898](https://github.com/ClickHouse/ClickHouse/pull/14898) ([Winter Zhang](https://github.com/zhang2014)).
* Fix a problem where the server may get stuck on startup while talking to ZooKeeper, if the configuration files have to be fetched from ZK (using the `from_zk` include option). This fixes [#14814](https://github.com/ClickHouse/ClickHouse/issues/14814). [#14843](https://github.com/ClickHouse/ClickHouse/pull/14843) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Fix wrong monotonicity detection for shrunk `Int -> Int` cast of signed types. It might lead to incorrect query result. This bug is unveiled in [#14513](https://github.com/ClickHouse/ClickHouse/issues/14513). [#14783](https://github.com/ClickHouse/ClickHouse/pull/14783) ([Amos Bird](https://github.com/amosbird)).
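The per-core CPU frequencies mentioned above can be inspected through `system.asynchronous_metrics`; the exact metric names are an assumption (something like `CPUFrequencyMHz_0`), so the pattern may need adjusting:

```sql
SELECT metric, value
FROM system.asynchronous_metrics
WHERE metric LIKE 'CPUFrequencyMHz%'   -- assumed metric name prefix, one row per logical core
ORDER BY metric;
```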
@ -647,16 +770,16 @@
* Fix visible data clobbering by progress bar in client in interactive mode. This fixes [#12562](https://github.com/ClickHouse/ClickHouse/issues/12562) and [#13369](https://github.com/ClickHouse/ClickHouse/issues/13369) and [#13584](https://github.com/ClickHouse/ClickHouse/issues/13584) and fixes [#12964](https://github.com/ClickHouse/ClickHouse/issues/12964). [#13691](https://github.com/ClickHouse/ClickHouse/pull/13691) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed incorrect sorting order of `LowCardinality` columns when sorting by multiple columns. This fixes [#13958](https://github.com/ClickHouse/ClickHouse/issues/13958). [#14223](https://github.com/ClickHouse/ClickHouse/pull/14223) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Check for array size overflow in `topK` aggregate function. Without this check the user may send a query with carefully crafter parameters that will lead to server crash. This closes [#14452](https://github.com/ClickHouse/ClickHouse/issues/14452). [#14467](https://github.com/ClickHouse/ClickHouse/pull/14467) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Check for array size overflow in `topK` aggregate function. Without this check the user may send a query with carefully crafted parameters that will lead to server crash. This closes [#14452](https://github.com/ClickHouse/ClickHouse/issues/14452). [#14467](https://github.com/ClickHouse/ClickHouse/pull/14467) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix bug which can lead to wrong merges assignment if table has partitions with a single part. [#14444](https://github.com/ClickHouse/ClickHouse/pull/14444) ([alesapin](https://github.com/alesapin)).
* Stop query execution if an exception happened in `PipelineExecutor` itself. This prevents a rare possible query hang. Continuation of [#14334](https://github.com/ClickHouse/ClickHouse/issues/14334). [#14402](https://github.com/ClickHouse/ClickHouse/pull/14402) [#14334](https://github.com/ClickHouse/ClickHouse/pull/14334) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix crash during `ALTER` query for table which was created `AS table_function`. Fixes [#14212](https://github.com/ClickHouse/ClickHouse/issues/14212). [#14326](https://github.com/ClickHouse/ClickHouse/pull/14326) ([alesapin](https://github.com/alesapin)).
* Fix exception during ALTER LIVE VIEW query with REFRESH command. Live view is an experimental feature. [#14320](https://github.com/ClickHouse/ClickHouse/pull/14320) ([Bharat Nallan](https://github.com/bharatnc)).
* Fix QueryPlan lifetime (for EXPLAIN PIPELINE graph=1) for queries with nested interpreter. [#14315](https://github.com/ClickHouse/ClickHouse/pull/14315) ([Azat Khuzhin](https://github.com/azat)).
* Fix segfault in `clickhouse-odbc-bridge` during schema fetch from some external sources. This PR fixes https://github.com/ClickHouse/ClickHouse/issues/13861. [#14267](https://github.com/ClickHouse/ClickHouse/pull/14267) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix crash in mark inclusion search introduced in https://github.com/ClickHouse/ClickHouse/pull/12277. [#14225](https://github.com/ClickHouse/ClickHouse/pull/14225) ([Amos Bird](https://github.com/amosbird)).
* Fix segfault in `clickhouse-odbc-bridge` during schema fetch from some external sources. This PR fixes [#13861](https://github.com/ClickHouse/ClickHouse/issues/13861). [#14267](https://github.com/ClickHouse/ClickHouse/pull/14267) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix crash in mark inclusion search introduced in [#12277](https://github.com/ClickHouse/ClickHouse/pull/12277). [#14225](https://github.com/ClickHouse/ClickHouse/pull/14225) ([Amos Bird](https://github.com/amosbird)).
* Fix creation of tables with named tuples. This fixes [#13027](https://github.com/ClickHouse/ClickHouse/issues/13027). [#14143](https://github.com/ClickHouse/ClickHouse/pull/14143) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix formatting of minimal negative decimal numbers. This fixes https://github.com/ClickHouse/ClickHouse/issues/14111. [#14119](https://github.com/ClickHouse/ClickHouse/pull/14119) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Fix formatting of minimal negative decimal numbers. This fixes [#14111](https://github.com/ClickHouse/ClickHouse/issues/14111). [#14119](https://github.com/ClickHouse/ClickHouse/pull/14119) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Fix `DistributedFilesToInsert` metric (zeroed when it should not). [#14095](https://github.com/ClickHouse/ClickHouse/pull/14095) ([Azat Khuzhin](https://github.com/azat)).
* Fix `pointInPolygon` with const 2d array as polygon. [#14079](https://github.com/ClickHouse/ClickHouse/pull/14079) ([Alexey Ilyukhov](https://github.com/livace)).
* Fixed wrong mount point in extra info for `Poco::Exception: no space left on device`. [#14050](https://github.com/ClickHouse/ClickHouse/pull/14050) ([tavplubix](https://github.com/tavplubix)).
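A minimal, safe use of `topK` related to the array size overflow entry above; the data is synthetic:

```sql
-- topK(K)(x) returns an array of (approximately) the K most frequent values of x.
-- With this fix, absurdly large K parameters are rejected instead of crashing the server.
SELECT topK(3)(number % 5) FROM numbers(1000);
```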
@ -685,10 +808,10 @@
* Fix wrong code in function `netloc`. This fixes [#13335](https://github.com/ClickHouse/ClickHouse/issues/13335). [#13446](https://github.com/ClickHouse/ClickHouse/pull/13446) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix possible race in `StorageMemory`. [#13416](https://github.com/ClickHouse/ClickHouse/pull/13416) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix missing or excessive headers in `TSV/CSVWithNames` formats in HTTP protocol. This fixes [#12504](https://github.com/ClickHouse/ClickHouse/issues/12504). [#13343](https://github.com/ClickHouse/ClickHouse/pull/13343) ([Azat Khuzhin](https://github.com/azat)).
* Fix parsing row policies from users.xml when names of databases or tables contain dots. This fixes https://github.com/ClickHouse/ClickHouse/issues/5779, https://github.com/ClickHouse/ClickHouse/issues/12527. [#13199](https://github.com/ClickHouse/ClickHouse/pull/13199) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix parsing row policies from users.xml when names of databases or tables contain dots. This fixes [#5779](https://github.com/ClickHouse/ClickHouse/issues/5779), [#12527](https://github.com/ClickHouse/ClickHouse/issues/12527). [#13199](https://github.com/ClickHouse/ClickHouse/pull/13199) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix access to `redis` dictionary after connection was dropped once. It may happen with `cache` and `direct` dictionary layouts. [#13082](https://github.com/ClickHouse/ClickHouse/pull/13082) ([Anton Popov](https://github.com/CurtizJ)).
* Removed wrong auth access check when using ClickHouseDictionarySource to query remote tables. [#12756](https://github.com/ClickHouse/ClickHouse/pull/12756) ([sundyli](https://github.com/sundy-li)).
* Properly distinguish subqueries in some cases for common subexpression elimination. https://github.com/ClickHouse/ClickHouse/issues/8333. [#8367](https://github.com/ClickHouse/ClickHouse/pull/8367) ([Amos Bird](https://github.com/amosbird)).
* Properly distinguish subqueries in some cases for common subexpression elimination. [#8333](https://github.com/ClickHouse/ClickHouse/issues/8333). [#8367](https://github.com/ClickHouse/ClickHouse/pull/8367) ([Amos Bird](https://github.com/amosbird)).
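A quick illustration of the corrected `netloc` function from the entry above; the URL is arbitrary and the commented result is an expectation, not verified output:

```sql
SELECT netloc('https://user:password@example.com:8080/path?query=1');
-- expected: 'user:password@example.com:8080'
```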
#### Improvement
@ -756,7 +879,7 @@
* Updating LDAP user authentication suite to check that it works with RBAC. [#13656](https://github.com/ClickHouse/ClickHouse/pull/13656) ([vzakaznikov](https://github.com/vzakaznikov)).
* Removed `-DENABLE_CURL_CLIENT` for `contrib/aws`. [#13628](https://github.com/ClickHouse/ClickHouse/pull/13628) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Increasing health-check timeouts for ClickHouse nodes and adding support to dump docker-compose logs if unhealthy containers found. [#13612](https://github.com/ClickHouse/ClickHouse/pull/13612) ([vzakaznikov](https://github.com/vzakaznikov)).
* Make sure https://github.com/ClickHouse/ClickHouse/issues/10977 is invalid. [#13539](https://github.com/ClickHouse/ClickHouse/pull/13539) ([Amos Bird](https://github.com/amosbird)).
* Make sure [#10977](https://github.com/ClickHouse/ClickHouse/issues/10977) is invalid. [#13539](https://github.com/ClickHouse/ClickHouse/pull/13539) ([Amos Bird](https://github.com/amosbird)).
* Skip PRs from robot-clickhouse. [#13489](https://github.com/ClickHouse/ClickHouse/pull/13489) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Move Dockerfiles from integration tests to `docker/test` directory. docker_compose files are available in `runner` docker container. Docker images are built in CI and not in integration tests. [#13448](https://github.com/ClickHouse/ClickHouse/pull/13448) ([Ilya Yatsishin](https://github.com/qoega)).
@ -788,7 +911,7 @@
* Add `FROM_UNIXTIME` function for compatibility with MySQL, related to [12149](https://github.com/ClickHouse/ClickHouse/issues/12149). [#12484](https://github.com/ClickHouse/ClickHouse/pull/12484) ([flynn](https://github.com/ucasFL)).
* Allow Nullable types as keys in MergeTree tables if `allow_nullable_key` table setting is enabled. Closes [#5319](https://github.com/ClickHouse/ClickHouse/issues/5319). [#12433](https://github.com/ClickHouse/ClickHouse/pull/12433) ([Amos Bird](https://github.com/amosbird)).
* Integration with [COS](https://intl.cloud.tencent.com/product/cos). [#12386](https://github.com/ClickHouse/ClickHouse/pull/12386) ([fastio](https://github.com/fastio)).
* Add mapAdd and mapSubtract functions for adding/subtracting key-mapped values. [#11735](https://github.com/ClickHouse/ClickHouse/pull/11735) ([Ildus Kurbangaliev](https://github.com/ildus)).
* Add `mapAdd` and `mapSubtract` functions for adding/subtracting key-mapped values. [#11735](https://github.com/ClickHouse/ClickHouse/pull/11735) ([Ildus Kurbangaliev](https://github.com/ildus)).
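A sketch of the new `mapAdd`/`mapSubtract` functions; each map is passed as a tuple of a keys array and a values array, which is my understanding of the interface in this release, and the commented results are assumptions:

```sql
-- Sum values of matching keys; keys present in only one argument keep their value.
SELECT mapAdd(([1, 2], [10, 20]), ([1, 3], [1, 1])) AS added;        -- expected: ([1, 2, 3], [11, 20, 1])
SELECT mapSubtract(([1, 2], [10, 20]), ([1, 2], [1, 5])) AS subbed;  -- expected: ([1, 2], [9, 15])
```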
#### Bug Fix
@ -1071,7 +1194,7 @@
* Improved performance of `ORDER BY` and `GROUP BY` by a prefix of the sorting key (enabled with the `optimize_aggregation_in_order` setting, disabled by default). [#11696](https://github.com/ClickHouse/ClickHouse/pull/11696) ([Anton Popov](https://github.com/CurtizJ)).
* Removed injective functions inside `uniq*()` when `optimize_injective_functions_inside_uniq = 1` is set. [#12337](https://github.com/ClickHouse/ClickHouse/pull/12337) ([Ruslan Kamalov](https://github.com/kamalov-ruslan)).
* Index not used for IN operator with literals", performance regression introduced around v19.3. This fixes "[#10574](https://github.com/ClickHouse/ClickHouse/issues/10574). [#12062](https://github.com/ClickHouse/ClickHouse/pull/12062) ([nvartolomei](https://github.com/nvartolomei)).
* Index not used for IN operator with literals, performance regression introduced around v19.3. This fixes [#10574](https://github.com/ClickHouse/ClickHouse/issues/10574). [#12062](https://github.com/ClickHouse/ClickHouse/pull/12062) ([nvartolomei](https://github.com/nvartolomei)).
* Implemented single part uploads for DiskS3 (experimental feature). [#12026](https://github.com/ClickHouse/ClickHouse/pull/12026) ([Vladimir Chebotarev](https://github.com/excitoon)).
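For the `optimize_aggregation_in_order` entry above, a minimal sketch with a hypothetical table whose sorting key starts with the grouped column:

```sql
SET optimize_aggregation_in_order = 1;

-- Grouping by k, a prefix of the sorting key (k, ts), can be executed in order
-- without building a full hash aggregation state.
CREATE TABLE agg_demo (k UInt32, ts DateTime, v Float64) ENGINE = MergeTree ORDER BY (k, ts);
SELECT k, sum(v) FROM agg_demo GROUP BY k;
```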
#### Experimental Feature
@ -1133,7 +1256,7 @@
#### Performance Improvement
* Index not used for IN operator with literals", performance regression introduced around v19.3. This fixes "[#10574](https://github.com/ClickHouse/ClickHouse/issues/10574). [#12062](https://github.com/ClickHouse/ClickHouse/pull/12062) ([nvartolomei](https://github.com/nvartolomei)).
* Index not used for IN operator with literals, performance regression introduced around v19.3. This fixes [#10574](https://github.com/ClickHouse/ClickHouse/issues/10574). [#12062](https://github.com/ClickHouse/ClickHouse/pull/12062) ([nvartolomei](https://github.com/nvartolomei)).
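A sketch of the query shape affected by the `IN` with literals regression above; the table and key column are hypothetical:

```sql
-- Hypothetical MergeTree table with `id` in the primary key: a literal IN list
-- should again use the primary index instead of scanning all parts.
SELECT count() FROM events WHERE id IN (1, 2, 3);
```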
#### Build/Testing/Packaging Improvement
@ -1213,7 +1336,7 @@
* Fix wrong result of comparison of FixedString with constant String. This fixes [#11393](https://github.com/ClickHouse/ClickHouse/issues/11393). This bug appeared in version 20.4. [#11828](https://github.com/ClickHouse/ClickHouse/pull/11828) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix wrong result for `if` with NULLs in condition. [#11807](https://github.com/ClickHouse/ClickHouse/pull/11807) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix using too many threads for queries. [#11788](https://github.com/ClickHouse/ClickHouse/pull/11788) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed `Scalar doesn't exist` exception when using `WITH <scalar subquery> ...` in `SELECT ... FROM merge_tree_table ...` https://github.com/ClickHouse/ClickHouse/issues/11621. [#11767](https://github.com/ClickHouse/ClickHouse/pull/11767) ([Amos Bird](https://github.com/amosbird)).
* Fixed `Scalar doesn't exist` exception when using `WITH <scalar subquery> ...` in `SELECT ... FROM merge_tree_table ...` [#11621](https://github.com/ClickHouse/ClickHouse/issues/11621). [#11767](https://github.com/ClickHouse/ClickHouse/pull/11767) ([Amos Bird](https://github.com/amosbird)).
* Fix unexpected behaviour of queries like `SELECT *, xyz.*`, which succeeded while an error was expected. [#11753](https://github.com/ClickHouse/ClickHouse/pull/11753) ([hexiaoting](https://github.com/hexiaoting)).
* Now replicated fetches will be cancelled during metadata alter. [#11744](https://github.com/ClickHouse/ClickHouse/pull/11744) ([alesapin](https://github.com/alesapin)).
* Parse metadata stored in zookeeper before checking for equality. [#11739](https://github.com/ClickHouse/ClickHouse/pull/11739) ([Azat Khuzhin](https://github.com/azat)).
@ -1264,8 +1387,8 @@
* Fix potential uninitialized memory in conversion. Example: `SELECT toIntervalSecond(now64())`. [#11311](https://github.com/ClickHouse/ClickHouse/pull/11311) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix the issue when index analysis cannot work if a table has Array column in primary key and if a query is filtering by this column with `empty` or `notEmpty` functions. This fixes [#11286](https://github.com/ClickHouse/ClickHouse/issues/11286). [#11303](https://github.com/ClickHouse/ClickHouse/pull/11303) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix bug when query speed estimation can be incorrect and the limit of `min_execution_speed` may not work or work incorrectly if the query is throttled by `max_network_bandwidth`, `max_execution_speed` or `priority` settings. Change the default value of `timeout_before_checking_execution_speed` to non-zero, because otherwise the settings `min_execution_speed` and `max_execution_speed` have no effect. This fixes [#11297](https://github.com/ClickHouse/ClickHouse/issues/11297). This fixes [#5732](https://github.com/ClickHouse/ClickHouse/issues/5732). This fixes [#6228](https://github.com/ClickHouse/ClickHouse/issues/6228). Usability improvement: avoid concatenation of exception message with progress bar in `clickhouse-client`. [#11296](https://github.com/ClickHouse/ClickHouse/pull/11296) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix crash when `SET DEFAULT ROLE` is called with wrong arguments. This fixes https://github.com/ClickHouse/ClickHouse/issues/10586. [#11278](https://github.com/ClickHouse/ClickHouse/pull/11278) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix crash while reading malformed data in `Protobuf` format. This fixes https://github.com/ClickHouse/ClickHouse/issues/5957, fixes https://github.com/ClickHouse/ClickHouse/issues/11203. [#11258](https://github.com/ClickHouse/ClickHouse/pull/11258) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix crash when `SET DEFAULT ROLE` is called with wrong arguments. This fixes [#10586](https://github.com/ClickHouse/ClickHouse/issues/10586). [#11278](https://github.com/ClickHouse/ClickHouse/pull/11278) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix crash while reading malformed data in `Protobuf` format. This fixes [#5957](https://github.com/ClickHouse/ClickHouse/issues/5957), fixes [#11203](https://github.com/ClickHouse/ClickHouse/issues/11203). [#11258](https://github.com/ClickHouse/ClickHouse/pull/11258) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fixed a bug when `cache` dictionary could return default value instead of normal (when there are only expired keys). This affects only string fields. [#11233](https://github.com/ClickHouse/ClickHouse/pull/11233) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix error `Block structure mismatch in QueryPipeline` while reading from `VIEW` with constants in inner query. Fixes [#11181](https://github.com/ClickHouse/ClickHouse/issues/11181). [#11205](https://github.com/ClickHouse/ClickHouse/pull/11205) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix possible exception `Invalid status for associated output`. [#11200](https://github.com/ClickHouse/ClickHouse/pull/11200) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
@ -1331,7 +1454,7 @@
* Fix error `the BloomFilter false positive must be a double number between 0 and 1` [#10551](https://github.com/ClickHouse/ClickHouse/issues/10551). [#10569](https://github.com/ClickHouse/ClickHouse/pull/10569) ([Winter Zhang](https://github.com/zhang2014)).
* Fix SELECT of column ALIAS which default expression type different from column type. [#10563](https://github.com/ClickHouse/ClickHouse/pull/10563) ([Azat Khuzhin](https://github.com/azat)).
* Implemented comparison between DateTime64 and String values (just like for DateTime). [#10560](https://github.com/ClickHouse/ClickHouse/pull/10560) ([Vasily Nemkov](https://github.com/Enmk)).
* Fix index corruption, which may accur in some cases after merge compact parts into another compact part. [#10531](https://github.com/ClickHouse/ClickHouse/pull/10531) ([Anton Popov](https://github.com/CurtizJ)).
* Fix index corruption, which may occur in some cases after merge compact parts into another compact part. [#10531](https://github.com/ClickHouse/ClickHouse/pull/10531) ([Anton Popov](https://github.com/CurtizJ)).
* Disable GROUP BY sharding_key optimization by default (`optimize_distributed_group_by_sharding_key` had been introduced and turned off by default, because of the tricky nature of sharding_key analysis; a simple example is `if` in the sharding key) and fix it for WITH ROLLUP/CUBE/TOTALS. [#10516](https://github.com/ClickHouse/ClickHouse/pull/10516) ([Azat Khuzhin](https://github.com/azat)).
* Fixes: [#10263](https://github.com/ClickHouse/ClickHouse/issues/10263) (after that PR distributed sends via INSERT had been postponed on each INSERT). Fixes: [#8756](https://github.com/ClickHouse/ClickHouse/issues/8756) (that PR broke distributed sends when all of the following conditions are met (an unlikely setup for now I guess): `internal_replication == false`, multiple local shards (activates the hardlinking code) and `distributed_storage_policy` (makes `link(2)` fail on `EXDEV`)). [#10486](https://github.com/ClickHouse/ClickHouse/pull/10486) ([Azat Khuzhin](https://github.com/azat)).
* Fixed error with the `max_rows_to_sort` limit. [#10268](https://github.com/ClickHouse/ClickHouse/pull/10268) ([alexey-milovidov](https://github.com/alexey-milovidov)).
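For the DateTime64/String comparison entry above, a small example; the expected result of `1` (equal) is my assumption of the DateTime-like behaviour:

```sql
SELECT toDateTime64('2020-04-01 10:20:30', 3) = '2020-04-01 10:20:30' AS eq;  -- expected: 1
```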
@ -1488,7 +1611,7 @@
* Lower memory usage in tests. [#10617](https://github.com/ClickHouse/ClickHouse/pull/10617) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixing hard coded timeouts in new live view tests. [#10604](https://github.com/ClickHouse/ClickHouse/pull/10604) ([vzakaznikov](https://github.com/vzakaznikov)).
* Increasing timeout when opening a client in tests/queries/0_stateless/helpers/client.py. [#10599](https://github.com/ClickHouse/ClickHouse/pull/10599) ([vzakaznikov](https://github.com/vzakaznikov)).
* Enable ThinLTO for clang builds, continuation of https://github.com/ClickHouse/ClickHouse/pull/10435. [#10585](https://github.com/ClickHouse/ClickHouse/pull/10585) ([Amos Bird](https://github.com/amosbird)).
* Enable ThinLTO for clang builds, continuation of [#10435](https://github.com/ClickHouse/ClickHouse/pull/10435). [#10585](https://github.com/ClickHouse/ClickHouse/pull/10585) ([Amos Bird](https://github.com/amosbird)).
* Adding fuzzers and preparing for oss-fuzz integration. [#10546](https://github.com/ClickHouse/ClickHouse/pull/10546) ([kyprizel](https://github.com/kyprizel)).
* Fix FreeBSD build. [#10150](https://github.com/ClickHouse/ClickHouse/pull/10150) ([Ivan](https://github.com/abyss7)).
* Add new build for query tests using pytest framework. [#10039](https://github.com/ClickHouse/ClickHouse/pull/10039) ([Ivan](https://github.com/abyss7)).
@ -1563,7 +1686,7 @@
#### Performance Improvement
* Index not used for IN operator with literals", performance regression introduced around v19.3. This fixes "[#10574](https://github.com/ClickHouse/ClickHouse/issues/10574). [#12062](https://github.com/ClickHouse/ClickHouse/pull/12062) ([nvartolomei](https://github.com/nvartolomei)).
* Index not used for IN operator with literals, performance regression introduced around v19.3. This fixes [#10574](https://github.com/ClickHouse/ClickHouse/issues/10574). [#12062](https://github.com/ClickHouse/ClickHouse/pull/12062) ([nvartolomei](https://github.com/nvartolomei)).
#### Build/Testing/Packaging Improvement
@ -1617,7 +1740,7 @@
* Fix the error `Data compressed with different methods` that can happen if `min_bytes_to_use_direct_io` is enabled and PREWHERE is active and using SAMPLE or high number of threads. This fixes [#11539](https://github.com/ClickHouse/ClickHouse/issues/11539). [#11540](https://github.com/ClickHouse/ClickHouse/pull/11540) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix return compressed size for codecs. [#11448](https://github.com/ClickHouse/ClickHouse/pull/11448) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix server crash when a column has compression codec with non-literal arguments. Fixes [#11365](https://github.com/ClickHouse/ClickHouse/issues/11365). [#11431](https://github.com/ClickHouse/ClickHouse/pull/11431) ([alesapin](https://github.com/alesapin)).
* Fix pointInPolygon with nan as point. Fixes https://github.com/ClickHouse/ClickHouse/issues/11375. [#11421](https://github.com/ClickHouse/ClickHouse/pull/11421) ([Alexey Ilyukhov](https://github.com/livace)).
* Fix pointInPolygon with nan as point. Fixes [#11375](https://github.com/ClickHouse/ClickHouse/issues/11375). [#11421](https://github.com/ClickHouse/ClickHouse/pull/11421) ([Alexey Ilyukhov](https://github.com/livace)).
* Fix potential uninitialized memory read in MergeTree shutdown if table was not created successfully. [#11420](https://github.com/ClickHouse/ClickHouse/pull/11420) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed geohashesInBox with arguments outside of latitude/longitude range. [#11403](https://github.com/ClickHouse/ClickHouse/pull/11403) ([Vasily Nemkov](https://github.com/Enmk)).
* Fix possible `Pipeline stuck` error for queries with external sort and limit. Fixes [#11359](https://github.com/ClickHouse/ClickHouse/issues/11359). [#11366](https://github.com/ClickHouse/ClickHouse/pull/11366) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
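For the `pointInPolygon` entry above, a sketch of the previously crashing call; after the fix a NaN coordinate should simply not match the polygon (the exact return value is an assumption):

```sql
-- Point with a NaN coordinate tested against a unit-square polygon.
SELECT pointInPolygon((nan, 0.5), [(0., 0.), (1., 0.), (1., 1.), (0., 1.)]);
```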
@ -1633,8 +1756,8 @@
* Fix potential uninitialized memory in conversion. Example: `SELECT toIntervalSecond(now64())`. [#11311](https://github.com/ClickHouse/ClickHouse/pull/11311) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix the issue when index analysis cannot work if a table has Array column in primary key and if a query is filtering by this column with `empty` or `notEmpty` functions. This fixes [#11286](https://github.com/ClickHouse/ClickHouse/issues/11286). [#11303](https://github.com/ClickHouse/ClickHouse/pull/11303) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix bug when query speed estimation can be incorrect and the limit of `min_execution_speed` may not work or work incorrectly if the query is throttled by `max_network_bandwidth`, `max_execution_speed` or `priority` settings. Change the default value of `timeout_before_checking_execution_speed` to non-zero, because otherwise the settings `min_execution_speed` and `max_execution_speed` have no effect. This fixes [#11297](https://github.com/ClickHouse/ClickHouse/issues/11297). This fixes [#5732](https://github.com/ClickHouse/ClickHouse/issues/5732). This fixes [#6228](https://github.com/ClickHouse/ClickHouse/issues/6228). Usability improvement: avoid concatenation of exception message with progress bar in `clickhouse-client`. [#11296](https://github.com/ClickHouse/ClickHouse/pull/11296) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix crash when SET DEFAULT ROLE is called with wrong arguments. This fixes https://github.com/ClickHouse/ClickHouse/issues/10586. [#11278](https://github.com/ClickHouse/ClickHouse/pull/11278) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix crash while reading malformed data in Protobuf format. This fixes https://github.com/ClickHouse/ClickHouse/issues/5957, fixes https://github.com/ClickHouse/ClickHouse/issues/11203. [#11258](https://github.com/ClickHouse/ClickHouse/pull/11258) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix crash when SET DEFAULT ROLE is called with wrong arguments. This fixes [#10586](https://github.com/ClickHouse/ClickHouse/issues/10586). [#11278](https://github.com/ClickHouse/ClickHouse/pull/11278) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix crash while reading malformed data in Protobuf format. This fixes [#5957](https://github.com/ClickHouse/ClickHouse/issues/5957), fixes [#11203](https://github.com/ClickHouse/ClickHouse/issues/11203). [#11258](https://github.com/ClickHouse/ClickHouse/pull/11258) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fixed a bug when cache-dictionary could return default value instead of normal (when there are only expired keys). This affects only string fields. [#11233](https://github.com/ClickHouse/ClickHouse/pull/11233) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix error `Block structure mismatch in QueryPipeline` while reading from `VIEW` with constants in inner query. Fixes [#11181](https://github.com/ClickHouse/ClickHouse/issues/11181). [#11205](https://github.com/ClickHouse/ClickHouse/pull/11205) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix possible exception `Invalid status for associated output`. [#11200](https://github.com/ClickHouse/ClickHouse/pull/11200) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
@ -1679,7 +1802,7 @@ No changes compared to v20.4.3.16-stable.
* Now constraints are updated if the column participating in `CONSTRAINT` expression was renamed. Fixes [#10844](https://github.com/ClickHouse/ClickHouse/issues/10844). [#10847](https://github.com/ClickHouse/ClickHouse/pull/10847) ([alesapin](https://github.com/alesapin)).
* Fixed potential read of uninitialized memory in cache-dictionary. [#10834](https://github.com/ClickHouse/ClickHouse/pull/10834) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed columns order after `Block::sortColumns()`. [#10826](https://github.com/ClickHouse/ClickHouse/pull/10826) ([Azat Khuzhin](https://github.com/azat)).
* Fixed the issue with `ODBC` bridge when no quoting of identifiers is requested. Fixes [#7984] (https://github.com/ClickHouse/ClickHouse/issues/7984). [#10821](https://github.com/ClickHouse/ClickHouse/pull/10821) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed the issue with `ODBC` bridge when no quoting of identifiers is requested. Fixes [#7984](https://github.com/ClickHouse/ClickHouse/issues/7984). [#10821](https://github.com/ClickHouse/ClickHouse/pull/10821) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed `UBSan` and `MSan` report in `DateLUT`. [#10798](https://github.com/ClickHouse/ClickHouse/pull/10798) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed incorrect type conversion in key conditions. Fixes [#6287](https://github.com/ClickHouse/ClickHouse/issues/6287). [#10791](https://github.com/ClickHouse/ClickHouse/pull/10791) ([Andrew Onyshchuk](https://github.com/oandrew)).
* Fixed `parallel_view_processing` behavior. Now all insertions into `MATERIALIZED VIEW` without exception should be finished if exception happened. Fixes [#10241](https://github.com/ClickHouse/ClickHouse/issues/10241). [#10757](https://github.com/ClickHouse/ClickHouse/pull/10757) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
@ -1707,15 +1830,15 @@ No changes compared to v20.4.3.16-stable.
#### New Feature
* Add support for secured connection from ClickHouse to Zookeeper [#10184](https://github.com/ClickHouse/ClickHouse/pull/10184) ([Konstantin Lebedev](https://github.com/xzkostyan))
* Support custom HTTP handlers. See ISSUES-5436 for description. [#7572](https://github.com/ClickHouse/ClickHouse/pull/7572) ([Winter Zhang](https://github.com/zhang2014))
* Support custom HTTP handlers. See [#5436](https://github.com/ClickHouse/ClickHouse/issues/5436) for description. [#7572](https://github.com/ClickHouse/ClickHouse/pull/7572) ([Winter Zhang](https://github.com/zhang2014))
* Add MessagePack Input/Output format. [#9889](https://github.com/ClickHouse/ClickHouse/pull/9889) ([Kruglov Pavel](https://github.com/Avogar))
* Add Regexp input format. [#9196](https://github.com/ClickHouse/ClickHouse/pull/9196) ([Kruglov Pavel](https://github.com/Avogar))
* Added output format `Markdown` for embedding tables in markdown documents. [#10317](https://github.com/ClickHouse/ClickHouse/pull/10317) ([Kruglov Pavel](https://github.com/Avogar))
* Added support for custom settings section in dictionaries. Also fixes issue [#2829](https://github.com/ClickHouse/ClickHouse/issues/2829). [#10137](https://github.com/ClickHouse/ClickHouse/pull/10137) ([Artem Streltsov](https://github.com/kekekekule))
* Added custom settings support in DDL-queries for CREATE DICTIONARY [#10465](https://github.com/ClickHouse/ClickHouse/pull/10465) ([Artem Streltsov](https://github.com/kekekekule))
* Added custom settings support in DDL-queries for `CREATE DICTIONARY` [#10465](https://github.com/ClickHouse/ClickHouse/pull/10465) ([Artem Streltsov](https://github.com/kekekekule))
* Add simple server-wide memory profiler that will collect allocation contexts when server memory usage becomes higher than the next allocation threshold. [#10444](https://github.com/ClickHouse/ClickHouse/pull/10444) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Add setting `always_fetch_merged_part` which restricts the replica from merging parts by itself and makes it always prefer downloading merged parts from other replicas. [#10379](https://github.com/ClickHouse/ClickHouse/pull/10379) ([alesapin](https://github.com/alesapin))
* Add function JSONExtractKeysAndValuesRaw which extracts raw data from JSON objects [#10378](https://github.com/ClickHouse/ClickHouse/pull/10378) ([hcz](https://github.com/hczhcz))
* Add function `JSONExtractKeysAndValuesRaw` which extracts raw data from JSON objects [#10378](https://github.com/ClickHouse/ClickHouse/pull/10378) ([hcz](https://github.com/hczhcz))
* Add memory usage from OS to `system.asynchronous_metrics`. [#10361](https://github.com/ClickHouse/ClickHouse/pull/10361) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Added generic variants for functions `least` and `greatest`. Now they work with arbitrary number of arguments of arbitrary types. This fixes [#4767](https://github.com/ClickHouse/ClickHouse/issues/4767) [#10318](https://github.com/ClickHouse/ClickHouse/pull/10318) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Now ClickHouse controls timeouts of dictionary sources on its side. Two new settings added to cache dictionary configuration: `strict_max_lifetime_seconds`, which is `max_lifetime` by default, and `query_wait_timeout_milliseconds`, which is one minute by default. The first setting is also useful with the `allow_read_expired_keys` setting (to forbid reading very expired keys). [#10337](https://github.com/ClickHouse/ClickHouse/pull/10337) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov))
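For the generic `least`/`greatest` entry above, a small example mixing argument counts and types; the commented results are expectations rather than verified output:

```sql
SELECT least(3, 1.5, toUInt8(2)) AS lo, greatest('b', 'a', 'c') AS hi;
-- expected: lo = 1.5, hi = 'c'
```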
@ -1728,7 +1851,7 @@ No changes compared to v20.4.3.16-stable.
* Add ability to query Distributed over Distributed (w/o `distributed_group_by_no_merge`) ... [#9923](https://github.com/ClickHouse/ClickHouse/pull/9923) ([Azat Khuzhin](https://github.com/azat))
* Add function `arrayReduceInRanges` which aggregates array elements in given ranges. [#9598](https://github.com/ClickHouse/ClickHouse/pull/9598) ([hcz](https://github.com/hczhcz))
* Add Dictionary Status on prometheus exporter. [#9622](https://github.com/ClickHouse/ClickHouse/pull/9622) ([Guillaume Tassery](https://github.com/YiuRULE))
* Add function arrayAUC [#8698](https://github.com/ClickHouse/ClickHouse/pull/8698) ([taiyang-li](https://github.com/taiyang-li))
* Add function `arrayAUC` [#8698](https://github.com/ClickHouse/ClickHouse/pull/8698) ([taiyang-li](https://github.com/taiyang-li))
* Support `DROP VIEW` statement for better TPC-H compatibility. [#9831](https://github.com/ClickHouse/ClickHouse/pull/9831) ([Amos Bird](https://github.com/amosbird))
* Add `strict_order` option to `windowFunnel()` [#9773](https://github.com/ClickHouse/ClickHouse/pull/9773) ([achimbab](https://github.com/achimbab))
* Support `DATE` and `TIMESTAMP` SQL operators, e.g. `SELECT date '2001-01-01'` [#9691](https://github.com/ClickHouse/ClickHouse/pull/9691) ([Artem Zuikov](https://github.com/4ertus2))
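A sketch of the new `arrayAUC` function from the list above; the first array is assumed to hold predicted scores and the second the binary labels, and the commented value is an assumption:

```sql
SELECT arrayAUC([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1]);
-- expected: 0.75
```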
@ -1932,7 +2055,7 @@ No changes compared to v20.4.3.16-stable.
* Move integration tests docker files to docker/ directory. [#10335](https://github.com/ClickHouse/ClickHouse/pull/10335) ([Ilya Yatsishin](https://github.com/qoega))
* Allow to use `clang-10` in CI. It ensures that [#10238](https://github.com/ClickHouse/ClickHouse/issues/10238) is fixed. [#10384](https://github.com/ClickHouse/ClickHouse/pull/10384) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Update OpenSSL to upstream master. Fixed the issue when TLS connections may fail with the message `OpenSSL SSL_read: error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error` and `SSL Exception: error:2400006E:random number generator::error retrieving entropy`. The issue was present in version 20.1. [#8956](https://github.com/ClickHouse/ClickHouse/pull/8956) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix clang-10 build. https://github.com/ClickHouse/ClickHouse/issues/10238 [#10370](https://github.com/ClickHouse/ClickHouse/pull/10370) ([Amos Bird](https://github.com/amosbird))
* Fix clang-10 build. [#10238](https://github.com/ClickHouse/ClickHouse/issues/10238) [#10370](https://github.com/ClickHouse/ClickHouse/pull/10370) ([Amos Bird](https://github.com/amosbird))
* Add performance test for [Parallel INSERT for materialized view](https://github.com/ClickHouse/ClickHouse/pull/10052). [#10345](https://github.com/ClickHouse/ClickHouse/pull/10345) ([vxider](https://github.com/Vxider))
* Fix flaky test `test_settings_constraints_distributed.test_insert_clamps_settings`. [#10346](https://github.com/ClickHouse/ClickHouse/pull/10346) ([Vitaly Baranov](https://github.com/vitlibar))
* Add a utility for test results upload in ClickHouse CI [#10330](https://github.com/ClickHouse/ClickHouse/pull/10330) ([Ilya Yatsishin](https://github.com/qoega))
@ -2106,7 +2229,7 @@ No changes compared to v20.4.3.16-stable.
#### Performance Improvement
* Index not used for IN operator with literals", performance regression introduced around v19.3. This fixes "[#10574](https://github.com/ClickHouse/ClickHouse/issues/10574). [#12062](https://github.com/ClickHouse/ClickHouse/pull/12062) ([nvartolomei](https://github.com/nvartolomei)).
* Index not used for IN operator with literals, performance regression introduced around v19.3. This fixes [#10574](https://github.com/ClickHouse/ClickHouse/issues/10574). [#12062](https://github.com/ClickHouse/ClickHouse/pull/12062) ([nvartolomei](https://github.com/nvartolomei)).
### ClickHouse release v20.3.12.112-lts, 2020-06-25
@ -2148,7 +2271,7 @@ No changes compared to v20.4.3.16-stable.
* Fix the error `Data compressed with different methods` that can happen if `min_bytes_to_use_direct_io` is enabled and PREWHERE is active and using SAMPLE or high number of threads. This fixes [#11539](https://github.com/ClickHouse/ClickHouse/issues/11539). [#11540](https://github.com/ClickHouse/ClickHouse/pull/11540) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix return compressed size for codecs. [#11448](https://github.com/ClickHouse/ClickHouse/pull/11448) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix server crash when a column has compression codec with non-literal arguments. Fixes [#11365](https://github.com/ClickHouse/ClickHouse/issues/11365). [#11431](https://github.com/ClickHouse/ClickHouse/pull/11431) ([alesapin](https://github.com/alesapin)).
* Fix pointInPolygon with nan as point. Fixes https://github.com/ClickHouse/ClickHouse/issues/11375. [#11421](https://github.com/ClickHouse/ClickHouse/pull/11421) ([Alexey Ilyukhov](https://github.com/livace)).
* Fix pointInPolygon with nan as point. Fixes [#11375](https://github.com/ClickHouse/ClickHouse/issues/11375). [#11421](https://github.com/ClickHouse/ClickHouse/pull/11421) ([Alexey Ilyukhov](https://github.com/livace)).
* Fix crash in JOIN over LowCardinality(T) and Nullable(T). [#11380](https://github.com/ClickHouse/ClickHouse/issues/11380). [#11414](https://github.com/ClickHouse/ClickHouse/pull/11414) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix error code for wrong `USING` key. [#11373](https://github.com/ClickHouse/ClickHouse/issues/11373). [#11404](https://github.com/ClickHouse/ClickHouse/pull/11404) ([Artem Zuikov](https://github.com/4ertus2)).
* Fixed geohashesInBox with arguments outside of latitude/longitude range. [#11403](https://github.com/ClickHouse/ClickHouse/pull/11403) ([Vasily Nemkov](https://github.com/Enmk)).
@ -2165,7 +2288,7 @@ No changes compared to v20.4.3.16-stable.
* Fix potential uninitialized memory in conversion. Example: `SELECT toIntervalSecond(now64())`. [#11311](https://github.com/ClickHouse/ClickHouse/pull/11311) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix the issue when index analysis cannot work if a table has Array column in primary key and if a query is filtering by this column with `empty` or `notEmpty` functions. This fixes [#11286](https://github.com/ClickHouse/ClickHouse/issues/11286). [#11303](https://github.com/ClickHouse/ClickHouse/pull/11303) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix bug when query speed estimation can be incorrect and the limit of `min_execution_speed` may not work or work incorrectly if the query is throttled by `max_network_bandwidth`, `max_execution_speed` or `priority` settings. Change the default value of `timeout_before_checking_execution_speed` to non-zero, because otherwise the settings `min_execution_speed` and `max_execution_speed` have no effect. This fixes [#11297](https://github.com/ClickHouse/ClickHouse/issues/11297). This fixes [#5732](https://github.com/ClickHouse/ClickHouse/issues/5732). This fixes [#6228](https://github.com/ClickHouse/ClickHouse/issues/6228). Usability improvement: avoid concatenation of exception message with progress bar in `clickhouse-client`. [#11296](https://github.com/ClickHouse/ClickHouse/pull/11296) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix crash while reading malformed data in Protobuf format. This fixes https://github.com/ClickHouse/ClickHouse/issues/5957, fixes https://github.com/ClickHouse/ClickHouse/issues/11203. [#11258](https://github.com/ClickHouse/ClickHouse/pull/11258) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix crash while reading malformed data in Protobuf format. This fixes [#5957](https://github.com/ClickHouse/ClickHouse/issues/5957), fixes [#11203](https://github.com/ClickHouse/ClickHouse/issues/11203). [#11258](https://github.com/ClickHouse/ClickHouse/pull/11258) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fixed a bug when cache-dictionary could return default value instead of normal (when there are only expired keys). This affects only string fields. [#11233](https://github.com/ClickHouse/ClickHouse/pull/11233) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix error `Block structure mismatch in QueryPipeline` while reading from `VIEW` with constants in inner query. Fixes [#11181](https://github.com/ClickHouse/ClickHouse/issues/11181). [#11205](https://github.com/ClickHouse/ClickHouse/pull/11205) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix possible exception `Invalid status for associated output`. [#11200](https://github.com/ClickHouse/ClickHouse/pull/11200) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
@ -2196,7 +2319,7 @@ No changes compared to v20.4.3.16-stable.
* Fixed `SIGSEGV` in `StringHashTable` if such a key does not exist. [#10870](https://github.com/ClickHouse/ClickHouse/pull/10870) ([Azat Khuzhin](https://github.com/azat)).
* Fixed bug in `ReplicatedMergeTree` which might cause some `ALTER` or `OPTIMIZE` query to hang waiting for some replica after it became inactive. [#10849](https://github.com/ClickHouse/ClickHouse/pull/10849) ([tavplubix](https://github.com/tavplubix)).
* Fixed columns order after `Block::sortColumns()`. [#10826](https://github.com/ClickHouse/ClickHouse/pull/10826) ([Azat Khuzhin](https://github.com/azat)).
* Fixed the issue with `ODBC` bridge when no quoting of identifiers is requested. Fixes [#7984] (https://github.com/ClickHouse/ClickHouse/issues/7984). [#10821](https://github.com/ClickHouse/ClickHouse/pull/10821) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed the issue with `ODBC` bridge when no quoting of identifiers is requested. Fixes [#7984](https://github.com/ClickHouse/ClickHouse/issues/7984). [#10821](https://github.com/ClickHouse/ClickHouse/pull/10821) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed `UBSan` and `MSan` report in `DateLUT`. [#10798](https://github.com/ClickHouse/ClickHouse/pull/10798) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed incorrect type conversion in key conditions. Fixes [#6287](https://github.com/ClickHouse/ClickHouse/issues/6287). [#10791](https://github.com/ClickHouse/ClickHouse/pull/10791) ([Andrew Onyshchuk](https://github.com/oandrew))
* Fixed `parallel_view_processing` behavior. Now all insertions into `MATERIALIZED VIEW` without exception should be finished if exception happened. Fixes [#10241](https://github.com/ClickHouse/ClickHouse/issues/10241). [#10757](https://github.com/ClickHouse/ClickHouse/pull/10757) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
@ -2215,7 +2338,7 @@ No changes compared to v20.4.3.16-stable.
* Fixed incorrect scalar results inside inner query of `MATERIALIZED VIEW` in case if this query contained dependent table. [#10603](https://github.com/ClickHouse/ClickHouse/pull/10603) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed `SELECT` of column `ALIAS` which default expression type different from column type. [#10563](https://github.com/ClickHouse/ClickHouse/pull/10563) ([Azat Khuzhin](https://github.com/azat)).
* Implemented comparison between DateTime64 and String values. [#10560](https://github.com/ClickHouse/ClickHouse/pull/10560) ([Vasily Nemkov](https://github.com/Enmk)).
* Fixed index corruption, which may accur in some cases after merge compact parts into another compact part. [#10531](https://github.com/ClickHouse/ClickHouse/pull/10531) ([Anton Popov](https://github.com/CurtizJ)).
* Fixed index corruption, which may occur in some cases after merge compact parts into another compact part. [#10531](https://github.com/ClickHouse/ClickHouse/pull/10531) ([Anton Popov](https://github.com/CurtizJ)).
* Fixed the situation when a mutation finished all parts but hung up with `is_done=0`. [#10526](https://github.com/ClickHouse/ClickHouse/pull/10526) ([alesapin](https://github.com/alesapin)).
* Fixed overflow at beginning of unix epoch for timezones with fractional offset from `UTC`. This fixes [#9335](https://github.com/ClickHouse/ClickHouse/issues/9335). [#10513](https://github.com/ClickHouse/ClickHouse/pull/10513) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed improper shutdown of `Distributed` storage. [#10491](https://github.com/ClickHouse/ClickHouse/pull/10491) ([Azat Khuzhin](https://github.com/azat)).
@ -2225,14 +2348,14 @@ No changes compared to v20.4.3.16-stable.
#### Build/Testing/Packaging Improvement
* Fix UBSan report in LZ4 library. [#10631](https://github.com/ClickHouse/ClickHouse/pull/10631) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix clang-10 build. https://github.com/ClickHouse/ClickHouse/issues/10238. [#10370](https://github.com/ClickHouse/ClickHouse/pull/10370) ([Amos Bird](https://github.com/amosbird)).
* Fix clang-10 build. [#10238](https://github.com/ClickHouse/ClickHouse/issues/10238). [#10370](https://github.com/ClickHouse/ClickHouse/pull/10370) ([Amos Bird](https://github.com/amosbird)).
* Added failing tests about `max_rows_to_sort` setting. [#10268](https://github.com/ClickHouse/ClickHouse/pull/10268) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added some improvements in printing diagnostic info in input formats. Fixes [#10204](https://github.com/ClickHouse/ClickHouse/issues/10204). [#10418](https://github.com/ClickHouse/ClickHouse/pull/10418) ([tavplubix](https://github.com/tavplubix)).
* Added CA certificates to clickhouse-server docker image. [#10476](https://github.com/ClickHouse/ClickHouse/pull/10476) ([filimonov](https://github.com/filimonov)).
#### Bug Fix
* #10551. [#10569](https://github.com/ClickHouse/ClickHouse/pull/10569) ([Winter Zhang](https://github.com/zhang2014)).
* Fix error `the BloomFilter false positive must be a double number between 0 and 1` [#10551](https://github.com/ClickHouse/ClickHouse/issues/10551). [#10569](https://github.com/ClickHouse/ClickHouse/pull/10569) ([Winter Zhang](https://github.com/zhang2014)).
### ClickHouse release v20.3.8.53, 2020-04-23
@ -2424,7 +2547,7 @@ No changes compared to v20.4.3.16-stable.
* Fixed the behaviour of `match` and `extract` functions when haystack has zero bytes. The behaviour was wrong when haystack was constant. This fixes [#9160](https://github.com/ClickHouse/ClickHouse/issues/9160) [#9163](https://github.com/ClickHouse/ClickHouse/pull/9163) ([alexey-milovidov](https://github.com/alexey-milovidov)) [#9345](https://github.com/ClickHouse/ClickHouse/pull/9345) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Avoid throwing from destructor in Apache Avro 3rd-party library. [#9066](https://github.com/ClickHouse/ClickHouse/pull/9066) ([Andrew Onyshchuk](https://github.com/oandrew))
* Don't commit a batch polled from `Kafka` partially as it can lead to holes in data. [#8876](https://github.com/ClickHouse/ClickHouse/pull/8876) ([filimonov](https://github.com/filimonov))
* Fix `joinGet` with nullable return types. https://github.com/ClickHouse/ClickHouse/issues/8919 [#9014](https://github.com/ClickHouse/ClickHouse/pull/9014) ([Amos Bird](https://github.com/amosbird))
* Fix `joinGet` with nullable return types. [#8919](https://github.com/ClickHouse/ClickHouse/issues/8919) [#9014](https://github.com/ClickHouse/ClickHouse/pull/9014) ([Amos Bird](https://github.com/amosbird))
* Fix data incompatibility when compressed with `T64` codec. [#9016](https://github.com/ClickHouse/ClickHouse/pull/9016) ([Artem Zuikov](https://github.com/4ertus2)) Fix data type ids in `T64` compression codec that leads to wrong (de)compression in affected versions. [#9033](https://github.com/ClickHouse/ClickHouse/pull/9033) ([Artem Zuikov](https://github.com/4ertus2))
* Add setting `enable_early_constant_folding` and disable it in some cases that leads to errors. [#9010](https://github.com/ClickHouse/ClickHouse/pull/9010) ([Artem Zuikov](https://github.com/4ertus2))
* Fix pushdown predicate optimizer with VIEW and enable the test [#9011](https://github.com/ClickHouse/ClickHouse/pull/9011) ([Winter Zhang](https://github.com/zhang2014))
@ -2626,7 +2749,7 @@ No changes compared to v20.4.3.16-stable.
* Fix the error `Data compressed with different methods` that can happen if `min_bytes_to_use_direct_io` is enabled and PREWHERE is active and using SAMPLE or high number of threads. This fixes [#11539](https://github.com/ClickHouse/ClickHouse/issues/11539). [#11540](https://github.com/ClickHouse/ClickHouse/pull/11540) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix return compressed size for codecs. [#11448](https://github.com/ClickHouse/ClickHouse/pull/11448) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix server crash when a column has compression codec with non-literal arguments. Fixes [#11365](https://github.com/ClickHouse/ClickHouse/issues/11365). [#11431](https://github.com/ClickHouse/ClickHouse/pull/11431) ([alesapin](https://github.com/alesapin)).
* Fix pointInPolygon with nan as point. Fixes https://github.com/ClickHouse/ClickHouse/issues/11375. [#11421](https://github.com/ClickHouse/ClickHouse/pull/11421) ([Alexey Ilyukhov](https://github.com/livace)).
* Fix pointInPolygon with nan as point. Fixes [#11375](https://github.com/ClickHouse/ClickHouse/issues/11375). [#11421](https://github.com/ClickHouse/ClickHouse/pull/11421) ([Alexey Ilyukhov](https://github.com/livace)).
* Fixed geohashesInBox with arguments outside of latitude/longitude range. [#11403](https://github.com/ClickHouse/ClickHouse/pull/11403) ([Vasily Nemkov](https://github.com/Enmk)).
* Fix possible `Pipeline stuck` error for queries with external sort and limit. Fixes [#11359](https://github.com/ClickHouse/ClickHouse/issues/11359). [#11366](https://github.com/ClickHouse/ClickHouse/pull/11366) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix crash in `quantilesExactWeightedArray`. [#11337](https://github.com/ClickHouse/ClickHouse/pull/11337) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
@ -2636,7 +2759,7 @@ No changes compared to v20.4.3.16-stable.
* Fix potential uninitialized memory in conversion. Example: `SELECT toIntervalSecond(now64())`. [#11311](https://github.com/ClickHouse/ClickHouse/pull/11311) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix the issue when index analysis cannot work if a table has Array column in primary key and if a query is filtering by this column with `empty` or `notEmpty` functions. This fixes [#11286](https://github.com/ClickHouse/ClickHouse/issues/11286). [#11303](https://github.com/ClickHouse/ClickHouse/pull/11303) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix bug when query speed estimation can be incorrect and the limit of `min_execution_speed` may not work or work incorrectly if the query is throttled by `max_network_bandwidth`, `max_execution_speed` or `priority` settings. Change the default value of `timeout_before_checking_execution_speed` to non-zero, because otherwise the settings `min_execution_speed` and `max_execution_speed` have no effect. This fixes [#11297](https://github.com/ClickHouse/ClickHouse/issues/11297). This fixes [#5732](https://github.com/ClickHouse/ClickHouse/issues/5732). This fixes [#6228](https://github.com/ClickHouse/ClickHouse/issues/6228). Usability improvement: avoid concatenation of exception message with progress bar in `clickhouse-client`. [#11296](https://github.com/ClickHouse/ClickHouse/pull/11296) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix crash while reading malformed data in Protobuf format. This fixes https://github.com/ClickHouse/ClickHouse/issues/5957, fixes https://github.com/ClickHouse/ClickHouse/issues/11203. [#11258](https://github.com/ClickHouse/ClickHouse/pull/11258) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix crash while reading malformed data in Protobuf format. This fixes [#5957](https://github.com/ClickHouse/ClickHouse/issues/5957), fixes [#11203](https://github.com/ClickHouse/ClickHouse/issues/11203). [#11258](https://github.com/ClickHouse/ClickHouse/pull/11258) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix possible error `Cannot capture column` for higher-order functions with `Array(Array(LowCardinality))` captured argument. [#11185](https://github.com/ClickHouse/ClickHouse/pull/11185) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* If data skipping index is dependent on columns that are going to be modified during background merge (for SummingMergeTree, AggregatingMergeTree as well as for TTL GROUP BY), it was calculated incorrectly. This issue is fixed by moving index calculation after merge so the index is calculated on merged data. [#11162](https://github.com/ClickHouse/ClickHouse/pull/11162) ([Azat Khuzhin](https://github.com/azat)).
* Remove logging from mutation finalization task if nothing was finalized. [#11109](https://github.com/ClickHouse/ClickHouse/pull/11109) ([alesapin](https://github.com/alesapin)).
@ -2914,7 +3037,7 @@ No changes compared to v20.4.3.16-stable.
* Several improvements ClickHouse grammar in `.g4` file. [#8294](https://github.com/ClickHouse/ClickHouse/pull/8294) ([taiyang-li](https://github.com/taiyang-li))
* Fix bug that leads to crashes in `JOIN`s with tables with engine `Join`. This fixes [#7556](https://github.com/ClickHouse/ClickHouse/issues/7556) [#8254](https://github.com/ClickHouse/ClickHouse/issues/8254) [#7915](https://github.com/ClickHouse/ClickHouse/issues/7915) [#8100](https://github.com/ClickHouse/ClickHouse/issues/8100). [#8298](https://github.com/ClickHouse/ClickHouse/pull/8298) ([Artem Zuikov](https://github.com/4ertus2))
* Fix redundant dictionaries reload on `CREATE DATABASE`. [#7916](https://github.com/ClickHouse/ClickHouse/pull/7916) ([Azat Khuzhin](https://github.com/azat))
* Limit maximum number of streams for read from `StorageFile` and `StorageHDFS`. Fixes https://github.com/ClickHouse/ClickHouse/issues/7650. [#7981](https://github.com/ClickHouse/ClickHouse/pull/7981) ([alesapin](https://github.com/alesapin))
* Limit maximum number of streams for read from `StorageFile` and `StorageHDFS`. Fixes [#7650](https://github.com/ClickHouse/ClickHouse/issues/7650). [#7981](https://github.com/ClickHouse/ClickHouse/pull/7981) ([alesapin](https://github.com/alesapin))
* Fix bug in `ALTER ... MODIFY ... CODEC` query, when user specify both default expression and codec. Fixes [8593](https://github.com/ClickHouse/ClickHouse/issues/8593). [#8614](https://github.com/ClickHouse/ClickHouse/pull/8614) ([alesapin](https://github.com/alesapin))
* Fix error in background merge of columns with `SimpleAggregateFunction(LowCardinality)` type. [#8613](https://github.com/ClickHouse/ClickHouse/pull/8613) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Fixed type check in function `toDateTime64`. [#8375](https://github.com/ClickHouse/ClickHouse/pull/8375) ([Vasily Nemkov](https://github.com/Enmk))
@ -2998,7 +3121,7 @@ No changes compared to v20.4.3.16-stable.
* Added check for extra parts of `MergeTree` at different disks, in order to not allow to miss data parts at undefined disks. [#8118](https://github.com/ClickHouse/ClickHouse/pull/8118) ([Vladimir Chebotarev](https://github.com/excitoon))
* Enable SSL support for Mac client and server. [#8297](https://github.com/ClickHouse/ClickHouse/pull/8297) ([Ivan](https://github.com/abyss7))
* Now ClickHouse can work as MySQL federated server (see https://dev.mysql.com/doc/refman/5.7/en/federated-create-server.html). [#7717](https://github.com/ClickHouse/ClickHouse/pull/7717) ([Maxim Fedotov](https://github.com/MaxFedotov))
* `clickhouse-client` now only enable `bracketed-paste` when multiquery is on and multiline is off. This fixes (#7757)[https://github.com/ClickHouse/ClickHouse/issues/7757]. [#7761](https://github.com/ClickHouse/ClickHouse/pull/7761) ([Amos Bird](https://github.com/amosbird))
* `clickhouse-client` now only enable `bracketed-paste` when multiquery is on and multiline is off. This fixes [#7757](https://github.com/ClickHouse/ClickHouse/issues/7757). [#7761](https://github.com/ClickHouse/ClickHouse/pull/7761) ([Amos Bird](https://github.com/amosbird))
* Support `Array(Decimal)` in `if` function. [#7721](https://github.com/ClickHouse/ClickHouse/pull/7721) ([Artem Zuikov](https://github.com/4ertus2))
* Support Decimals in `arrayDifference`, `arrayCumSum` and `arrayCumSumNegative` functions. [#7724](https://github.com/ClickHouse/ClickHouse/pull/7724) ([Artem Zuikov](https://github.com/4ertus2))
* Added `lifetime` column to `system.dictionaries` table. [#6820](https://github.com/ClickHouse/ClickHouse/issues/6820) [#7727](https://github.com/ClickHouse/ClickHouse/pull/7727) ([kekekekule](https://github.com/kekekekule))

View File

@ -223,16 +223,16 @@ if (ARCH_NATIVE)
set (COMPILER_FLAGS "${COMPILER_FLAGS} -march=native")
endif ()
if (UNBUNDLED AND (COMPILER_GCC OR COMPILER_CLANG))
# to make numeric_limits<__int128> works for unbundled build
set (_CXX_STANDARD "-std=gnu++2a")
if (COMPILER_GCC OR COMPILER_CLANG)
# to make numeric_limits<__int128> works with GCC
set (_CXX_STANDARD "gnu++2a")
else()
set (_CXX_STANDARD "-std=c++2a")
set (_CXX_STANDARD "c++2a")
endif()
# cmake < 3.12 doesn't support 20. We'll set CMAKE_CXX_FLAGS for now
# set (CMAKE_CXX_STANDARD 20)
set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${_CXX_STANDARD}")
set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=${_CXX_STANDARD}")
set (CMAKE_CXX_EXTENSIONS 0) # https://cmake.org/cmake/help/latest/prop_tgt/CXX_EXTENSIONS.html#prop_tgt:CXX_EXTENSIONS
set (CMAKE_CXX_STANDARD_REQUIRED ON)

View File

@ -58,6 +58,8 @@ ReplxxLineReader::ReplxxLineReader(
}
}
rx.install_window_change_handler();
auto callback = [&suggest] (const String & context, size_t context_size)
{
if (auto range = suggest.getCompletions(context, context_size))

View File

@ -58,8 +58,7 @@ public:
using signed_base_type = int64_t;
// ctors
integer() = default;
constexpr integer() noexcept;
template <typename T>
constexpr integer(T rhs) noexcept;
template <typename T>

View File

@ -916,6 +916,11 @@ public:
// Members
template <size_t Bits, typename Signed>
constexpr integer<Bits, Signed>::integer() noexcept
: items{}
{}
template <size_t Bits, typename Signed>
template <typename T>
constexpr integer<Bits, Signed>::integer(T rhs) noexcept

View File

@ -761,7 +761,7 @@ void BaseDaemon::initializeTerminationAndSignalProcessing()
static KillingErrorHandler killing_error_handler;
Poco::ErrorHandler::set(&killing_error_handler);
signal_pipe.setNonBlocking();
signal_pipe.setNonBlockingWrite();
signal_pipe.tryIncreaseSize(1 << 20);
signal_listener = std::make_unique<SignalListener>(*this);

View File

@ -22,4 +22,12 @@ ResultBase::~ResultBase()
mysql_free_result(res);
}
std::string ResultBase::getFieldName(size_t n) const
{
if (num_fields <= n)
throw Exception(std::string("Unknown column position ") + std::to_string(n));
return fields[n].name;
}
}

View File

@ -31,6 +31,8 @@ public:
MYSQL_RES * getRes() { return res; }
const Query * getQuery() const { return query; }
std::string getFieldName(size_t n) const;
virtual ~ResultBase();
protected:

View File

@ -35,6 +35,15 @@ if (NOT PARALLEL_LINK_JOBS AND AVAILABLE_PHYSICAL_MEMORY AND MAX_LINKER_MEMORY)
endif ()
endif ()
# ThinLTO provides its own parallel linking
# But use 2 parallel jobs, since:
# - this is what llvm does
# - and I've verified that lld-11 does not use all available CPU time (in peak) while linking one binary
if (ENABLE_THINLTO AND PARALLEL_LINK_JOBS GREATER 2)
message(STATUS "ThinLTO provides its own parallel linking - limiting parallel link jobs to 2.")
set (PARALLEL_LINK_JOBS 2)
endif()
if (PARALLEL_LINK_JOBS AND (NOT NUMBER_OF_LOGICAL_CORES OR PARALLEL_COMPILE_JOBS LESS NUMBER_OF_LOGICAL_CORES))
set(CMAKE_JOB_POOL_LINK link_job_pool${CMAKE_CURRENT_SOURCE_DIR})
string (REGEX REPLACE "[^a-zA-Z0-9]+" "_" CMAKE_JOB_POOL_LINK ${CMAKE_JOB_POOL_LINK})

View File

@ -26,6 +26,7 @@ add_subdirectory (boost-cmake)
add_subdirectory (cctz-cmake)
add_subdirectory (consistent-hashing-sumbur)
add_subdirectory (consistent-hashing)
add_subdirectory (dragonbox-cmake)
add_subdirectory (FastMemcpy)
add_subdirectory (hyperscan-cmake)
add_subdirectory (jemalloc-cmake)
@ -240,6 +241,14 @@ if (USE_EMBEDDED_COMPILER AND USE_INTERNAL_LLVM_LIBRARY)
set (LLVM_ENABLE_RTTI 1 CACHE INTERNAL "")
set (LLVM_ENABLE_PIC 0 CACHE INTERNAL "")
set (LLVM_TARGETS_TO_BUILD "X86;AArch64" CACHE STRING "")
# Yes it is set globally, but this is not enough, since llvm will add -std=c++11 after default
# And c++2a cannot be used, due to ambiguous operator !=
if (COMPILER_GCC OR COMPILER_CLANG)
set (_CXX_STANDARD "gnu++17")
else()
set (_CXX_STANDARD "c++17")
endif()
set (LLVM_CXX_STD ${_CXX_STANDARD} CACHE STRING "" FORCE)
add_subdirectory (llvm/llvm)
target_include_directories(LLVMSupport SYSTEM BEFORE PRIVATE ${ZLIB_INCLUDE_DIR})
endif ()
@ -322,5 +331,5 @@ if (USE_INTERNAL_ROCKSDB_LIBRARY)
add_subdirectory(rocksdb-cmake)
endif()
add_subdirectory(dragonbox)
add_subdirectory(fast_float)

View File

@ -0,0 +1,5 @@
set(LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/dragonbox")
add_library(dragonbox_to_chars "${LIBRARY_DIR}/source/dragonbox_to_chars.cpp")
target_include_directories(dragonbox_to_chars SYSTEM BEFORE PUBLIC "${LIBRARY_DIR}/include/")

2
contrib/librdkafka vendored

@ -1 +1 @@
Subproject commit 9902bc4fb18bb441fa55ca154b341cdda191e5d3
Subproject commit f2f6616419d567c9198aef0d1133a2e9b4f02276

2
contrib/libunwind vendored

@ -1 +1 @@
Subproject commit 7d78d3618910752c256b2b58c3895f4efea47fac
Subproject commit 51b84d9b6d2548f1cbdcafe622d5a753853b6149

2
contrib/poco vendored

@ -1 +1 @@
Subproject commit b5523bb9b4bc4239640cbfec4d734be8b8585639
Subproject commit 08974cc024b2e748f5b1d45415396706b3521d0f

2
contrib/replxx vendored

@ -1 +1 @@
Subproject commit 8cf626c04e9a74313fb0b474cdbe2297c0f3cdc8
Subproject commit 254be98ae7f2fd92d6db768f8e11ea5a5226cbf5

View File

@ -53,7 +53,7 @@ if (NOT LIBRARY_REPLXX OR NOT INCLUDE_REPLXX OR NOT EXTERNAL_REPLXX_WORKS)
"${LIBRARY_DIR}/src/ConvertUTF.cpp"
"${LIBRARY_DIR}/src/escape.cxx"
"${LIBRARY_DIR}/src/history.cxx"
"${LIBRARY_DIR}/src/io.cxx"
"${LIBRARY_DIR}/src/terminal.cxx"
"${LIBRARY_DIR}/src/prompt.cxx"
"${LIBRARY_DIR}/src/replxx_impl.cxx"
"${LIBRARY_DIR}/src/replxx.cxx"

4
debian/control vendored
View File

@ -5,8 +5,8 @@ Maintainer: Alexey Milovidov <milovidov@yandex-team.ru>
Build-Depends: debhelper (>= 9),
cmake | cmake3,
ninja-build,
gcc-9 [amd64 i386] | gcc-8 [amd64 i386], g++-9 [amd64 i386] | g++-8 [amd64 i386],
clang-8 [arm64 armhf] | clang-7 [arm64 armhf] | clang-6.0 [arm64 armhf],
clang-11,
llvm-11,
libc6-dev,
libicu-dev,
libreadline-dev,

View File

@ -1,6 +1,6 @@
FROM ubuntu:19.10
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=10
ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=11
RUN apt-get update \
&& apt-get install ca-certificates lsb-release wget gnupg apt-transport-https \

View File

@ -4,7 +4,7 @@ set -e
#ccache -s # uncomment to display CCache statistics
mkdir -p /server/build_docker
cd /server/build_docker
cmake -G Ninja /server "-DCMAKE_C_COMPILER=$(command -v gcc-9)" "-DCMAKE_CXX_COMPILER=$(command -v g++-9)"
cmake -G Ninja /server "-DCMAKE_C_COMPILER=$(command -v clang-11)" "-DCMAKE_CXX_COMPILER=$(command -v clang++-11)"
# Set the number of build jobs to the half of number of virtual CPU cores (rounded up).
# By default, ninja use all virtual CPU cores, that leads to very high memory consumption without much improvement in build time.

View File

@ -64,7 +64,14 @@ function stop_server
function start_server
{
set -m # Spawn server in its own process groups
clickhouse-server --config-file="$FASTTEST_DATA/config.xml" -- --path "$FASTTEST_DATA" --user_files_path "$FASTTEST_DATA/user_files" &>> "$FASTTEST_OUTPUT/server.log" &
local opts=(
--config-file="$FASTTEST_DATA/config.xml"
--
--path "$FASTTEST_DATA"
--user_files_path "$FASTTEST_DATA/user_files"
--top_level_domains_path "$FASTTEST_DATA/top_level_domains"
)
clickhouse-server "${opts[@]}" &>> "$FASTTEST_OUTPUT/server.log" &
server_pid=$!
set +m

View File

@ -53,4 +53,3 @@ COPY * /
CMD ["bash", "-c", "node=$((RANDOM % $(numactl --hardware | sed -n 's/^.*available:\\(.*\\)nodes.*$/\\1/p'))); echo Will bind to NUMA node $node; numactl --cpunodebind=$node --membind=$node /entrypoint.sh"]
# docker run --network=host --volume <workspace>:/workspace --volume=<output>:/output -e PR_TO_TEST=<> -e SHA_TO_TEST=<> yandex/clickhouse-performance-comparison

View File

@ -55,6 +55,7 @@ function configure
# server *config* directives overrides
--path db0
--user_files_path db0/user_files
--top_level_domains_path /top_level_domains
--tcp_port $LEFT_SERVER_PORT
)
left/clickhouse-server "${setup_left_server_opts[@]}" &> setup-server-log.log &
@ -102,6 +103,7 @@ function restart
# server *config* directives overrides
--path left/db
--user_files_path left/db/user_files
--top_level_domains_path /top_level_domains
--tcp_port $LEFT_SERVER_PORT
)
left/clickhouse-server "${left_server_opts[@]}" &>> left-server-log.log &
@ -116,6 +118,7 @@ function restart
# server *config* directives overrides
--path right/db
--user_files_path right/db/user_files
--top_level_domains_path /top_level_domains
--tcp_port $RIGHT_SERVER_PORT
)
right/clickhouse-server "${right_server_opts[@]}" &>> right-server-log.log &

View File

@ -0,0 +1,5 @@
<yandex>
<top_level_domains_lists>
<public_suffix_list>public_suffix_list.dat</public_suffix_list>
</top_level_domains_lists>
</yandex>

File diff suppressed because it is too large

View File

@ -177,8 +177,6 @@ When you `INSERT` a bunch of data into `MergeTree`, that bunch is sorted by prim
`MergeTree` is not an LSM tree because it doesn't contain “memtable” and “log”: inserted data is written directly to the filesystem. This makes it suitable only to INSERT data in batches, not by individual row and not very frequently: about once per second is ok, but a thousand times a second is not. We did it this way for simplicity's sake, and because we are already inserting data in batches in our applications.
> MergeTree tables can only have one (primary) index: there aren't any secondary indices. It would be nice to allow multiple physical representations under one logical table, for example, to store data in more than one physical order or even to allow representations with pre-aggregated data along with original data.
There are MergeTree engines that are doing additional work during background merges. Examples are `CollapsingMergeTree` and `AggregatingMergeTree`. This could be treated as special support for updates. Keep in mind that these are not real updates because users usually have no control over the time when background merges are executed, and data in a `MergeTree` table is almost always stored in more than one part, not in completely merged form.
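To make the batching point concrete, here is a minimal sketch (the table and its columns are hypothetical, not part of the original text): each `INSERT` statement creates a new part, so applications should pack many rows into one statement instead of inserting rows one by one.
``` sql
-- Hypothetical table used only for illustration.
CREATE TABLE example_events
(
    event_date Date,
    user_id UInt64,
    value Float64
)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(event_date)
ORDER BY (event_date, user_id);

-- One batched INSERT produces a single new part on disk;
-- thousands of single-row INSERTs per second would produce thousands of parts.
INSERT INTO example_events VALUES
    ('2020-12-01', 1, 0.5),
    ('2020-12-01', 2, 1.5),
    ('2020-12-01', 3, 2.5);
```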
## Replication {#replication}

View File

@ -35,10 +35,12 @@ $ cd ClickHouse
## Build ClickHouse {#build-clickhouse}
> Please note: ClickHouse doesn't support build with native Apple Clang compiler, we need use clang from LLVM.
``` bash
$ mkdir build
$ cd build
$ cmake .. -DCMAKE_CXX_COMPILER=`which clang++` -DCMAKE_C_COMPILER=`which clang`
$ cmake .. -DCMAKE_C_COMPILER=`brew --prefix llvm`/bin/clang -DCMAKE_CXX_COMPILER=`brew --prefix llvm`/bin/clang++ -DCMAKE_PREFIX_PATH=`brew --prefix llvm`
$ ninja
$ cd ..
```

View File

@ -253,8 +253,8 @@ Developing ClickHouse often requires loading realistic datasets. It is particula
sudo apt install wget xz-utils
wget https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz
wget https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz
wget https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz
wget https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz
xz -v -d hits_v1.tsv.xz
xz -v -d visits_v1.tsv.xz

View File

@ -577,7 +577,7 @@ If a function captures ownership of an object created in the heap, make the argu
**14.** Return values.
In most cases, just use `return`. Do not write `[return std::move(res)]{.strike}`.
In most cases, just use `return`. Do not write `return std::move(res)`.
If the function allocates an object on heap and returns it, use `shared_ptr` or `unique_ptr`.
@ -671,7 +671,7 @@ Always use `#pragma once` instead of include guards.
**24.** Do not use `trailing return type` for functions unless necessary.
``` cpp
[auto f() -> void;]{.strike}
auto f() -> void
```
**25.** Declaration and initialization of variables.

View File

@ -6,7 +6,7 @@ toc_priority: 101
# Can I Use ClickHouse As a Key-Value Storage? {#can-i-use-clickhouse-as-a-key-value-storage}
The short answer is **“no”**. The key-value workload is among top positions in the list of cases when NOT{.text-danger} to use ClickHouse. It's an [OLAP](../../faq/general/olap.md) system after all, while there are many excellent key-value storage systems out there.
The short answer is **“no”**. The key-value workload is among top positions in the list of cases when **NOT**{.text-danger} to use ClickHouse. It's an [OLAP](../../faq/general/olap.md) system after all, while there are many excellent key-value storage systems out there.
However, there might be situations where it still makes sense to use ClickHouse for key-value-like queries. Usually, it's some low-budget products where the main workload is analytical in nature and fits ClickHouse well, but there's also some secondary process that needs a key-value pattern with not so high request throughput and without strict latency requirements. If you had an unlimited budget, you would have installed a secondary key-value database for this secondary workload, but in reality, there's an additional cost of maintaining one more storage system (monitoring, backups, etc.) which might be desirable to avoid.

View File

@ -0,0 +1,11 @@
---
toc_priority: 11
toc_title: GitHub Events
---
# GitHub Events Dataset
The dataset contains all events on GitHub from 2011 to Dec 6 2020; its size is 3.1 billion records. The download size is 75 GB, and it requires up to 200 GB of disk space if stored in a table with lz4 compression.
The full dataset description, insights, download instructions and interactive queries are posted [here](https://github-sql.github.io/explorer/).

View File

@ -1,6 +1,6 @@
---
toc_folder_title: Example Datasets
toc_priority: 14
toc_priority: 10
toc_title: Introduction
---
@ -10,6 +10,7 @@ This section describes how to obtain example datasets and import them into Click
The list of documented datasets:
- [GitHub Events](../../getting-started/example-datasets/github-events.md)
- [Anonymized Yandex.Metrica Dataset](../../getting-started/example-datasets/metrica.md)
- [Star Schema Benchmark](../../getting-started/example-datasets/star-schema.md)
- [WikiStat](../../getting-started/example-datasets/wikistat.md)
@ -18,4 +19,4 @@ The list of documented datasets:
- [New York Taxi Data](../../getting-started/example-datasets/nyc-taxi.md)
- [OnTime](../../getting-started/example-datasets/ontime.md)
[Original article](https://clickhouse.tech/docs/en/getting_started/example_datasets) <!--hide-->
[Original article](https://clickhouse.tech/docs/en/getting_started/example_datasets) <!--hide-->

View File

@ -7,14 +7,14 @@ toc_title: Yandex.Metrica Data
The dataset consists of two tables containing anonymized data about hits (`hits_v1`) and visits (`visits_v1`) of Yandex.Metrica. You can read more about Yandex.Metrica in the [ClickHouse history](../../introduction/history.md) section.
The dataset consists of two tables, either of which can be downloaded as a compressed `tsv.xz` file or as prepared partitions. In addition to that, an extended version of the `hits` table containing 100 million rows is available as TSV at https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_100m_obfuscated_v1.tsv.xz and as prepared partitions at https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_100m_obfuscated_v1.tar.xz.
The dataset consists of two tables, either of which can be downloaded as a compressed `tsv.xz` file or as prepared partitions. In addition to that, an extended version of the `hits` table containing 100 million rows is available as TSV at https://datasets.clickhouse.tech/hits/tsv/hits_100m_obfuscated_v1.tsv.xz and as prepared partitions at https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz.
## Obtaining Tables from Prepared Partitions {#obtaining-tables-from-prepared-partitions}
Download and import hits table:
``` bash
curl -O https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_v1.tar
curl -O https://datasets.clickhouse.tech/hits/partitions/hits_v1.tar
tar xvf hits_v1.tar -C /var/lib/clickhouse # path to ClickHouse data directory
# check permissions on unpacked data, fix if required
sudo service clickhouse-server restart
@ -24,7 +24,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1"
Download and import visits:
``` bash
curl -O https://clickhouse-datasets.s3.yandex.net/visits/partitions/visits_v1.tar
curl -O https://datasets.clickhouse.tech/visits/partitions/visits_v1.tar
tar xvf visits_v1.tar -C /var/lib/clickhouse # path to ClickHouse data directory
# check permissions on unpacked data, fix if required
sudo service clickhouse-server restart
@ -36,7 +36,10 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1"
Download and import hits from compressed TSV file:
``` bash
curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
# Validate the checksum
md5sum hits_v1.tsv
# Checksum should be equal to: f3631b6295bf06989c1437491f7592cb
# now create table
clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets"
clickhouse-client --query "CREATE TABLE datasets.hits_v1 ( WatchID UInt64, JavaEnable UInt8, Title String, GoodEvent Int16, EventTime DateTime, EventDate Date, CounterID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RegionID UInt32, UserID UInt64, CounterClass Int8, OS UInt8, UserAgent UInt8, URL String, Referer String, URLDomain String, RefererDomain String, Refresh UInt8, IsRobot UInt8, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), ResolutionWidth UInt16, ResolutionHeight UInt16, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, FlashMinor2 String, NetMajor UInt8, NetMinor UInt8, UserAgentMajor UInt16, UserAgentMinor FixedString(2), CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, MobilePhone UInt8, MobilePhoneModel String, Params String, IPNetworkID UInt32, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, IsArtifical UInt8, WindowClientWidth UInt16, WindowClientHeight UInt16, ClientTimeZone Int16, ClientEventTime DateTime, SilverlightVersion1 UInt8, SilverlightVersion2 UInt8, SilverlightVersion3 UInt32, SilverlightVersion4 UInt16, PageCharset String, CodeVersion UInt32, IsLink UInt8, IsDownload UInt8, IsNotBounce UInt8, FUniqID UInt64, HID UInt32, IsOldCounter UInt8, IsEvent UInt8, IsParameter UInt8, DontCountHits UInt8, WithHash UInt8, HitColor FixedString(1), UTCEventTime DateTime, Age UInt8, Sex UInt8, Income UInt8, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), RemoteIP UInt32, RemoteIP6 FixedString(16), WindowName Int32, OpenerName Int32, HistoryLength Int16, BrowserLanguage FixedString(2), BrowserCountry FixedString(2), SocialNetwork String, SocialAction String, HTTPError UInt16, SendTiming Int32, DNSTiming Int32, ConnectTiming Int32, ResponseStartTiming Int32, ResponseEndTiming Int32, FetchTiming Int32, RedirectTiming Int32, DOMInteractiveTiming Int32, DOMContentLoadedTiming Int32, DOMCompleteTiming Int32, LoadEventStartTiming Int32, LoadEventEndTiming Int32, NSToDOMContentLoadedTiming Int32, FirstPaintTiming Int32, RedirectCount Int8, SocialSourceNetworkID UInt8, SocialSourcePage String, ParamPrice Int64, ParamOrderID String, ParamCurrency FixedString(3), ParamCurrencyID UInt16, GoalsReached Array(UInt32), OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, RefererHash UInt64, URLHash UInt64, CLID UInt32, YCLID UInt64, ShareService String, ShareURL String, ShareTitle String, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), IslandID FixedString(16), RequestNum UInt32, RequestTry UInt8) ENGINE = MergeTree() PARTITION BY toYYYYMM(EventDate) ORDER BY (CounterID, EventDate, intHash32(UserID)) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192"
@ -50,7 +53,10 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1"
Download and import visits from compressed tsv-file:
``` bash
curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
# Validate the checksum
md5sum visits_v1.tsv
# Checksum should be equal to: 6dafe1a0f24e59e3fc2d0fed85601de6
# now create table
clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets"
clickhouse-client --query "CREATE TABLE datasets.visits_v1 ( CounterID UInt32, StartDate Date, Sign Int8, IsNew UInt8, VisitID UInt64, UserID UInt64, StartTime DateTime, Duration UInt32, UTCStartTime DateTime, PageViews Int32, Hits Int32, IsBounce UInt8, Referer String, StartURL String, RefererDomain String, StartURLDomain String, EndURL String, LinkURL String, IsDownload UInt8, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, PlaceID Int32, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), IsYandex UInt8, GoalReachesDepth Int32, GoalReachesURL Int32, GoalReachesAny Int32, SocialSourceNetworkID UInt8, SocialSourcePage String, MobilePhoneModel String, ClientEventTime DateTime, RegionID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RemoteIP UInt32, RemoteIP6 FixedString(16), IPNetworkID UInt32, SilverlightVersion3 UInt32, CodeVersion UInt32, ResolutionWidth UInt16, ResolutionHeight UInt16, UserAgentMajor UInt16, UserAgentMinor UInt16, WindowClientWidth UInt16, WindowClientHeight UInt16, SilverlightVersion2 UInt8, SilverlightVersion4 UInt16, FlashVersion3 UInt16, FlashVersion4 UInt16, ClientTimeZone Int16, OS UInt8, UserAgent UInt8, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, NetMajor UInt8, NetMinor UInt8, MobilePhone UInt8, SilverlightVersion1 UInt8, Age UInt8, Sex UInt8, Income UInt8, JavaEnable UInt8, CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, BrowserLanguage UInt16, BrowserCountry UInt16, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), Params Array(String), Goals Nested(ID UInt32, Serial UInt32, EventTime DateTime, Price Int64, OrderID String, CurrencyID UInt32), WatchIDs Array(UInt64), ParamSumPrice Int64, ParamCurrency FixedString(3), ParamCurrencyID UInt16, ClickLogID UInt64, ClickEventID Int32, ClickGoodEvent Int32, ClickEventTime DateTime, ClickPriorityID Int32, ClickPhraseID Int32, ClickPageID Int32, ClickPlaceID Int32, ClickTypeID Int32, ClickResourceID Int32, ClickCost UInt32, ClickClientIP UInt32, ClickDomainID UInt32, ClickURL String, ClickAttempt UInt8, ClickOrderID UInt32, ClickBannerID UInt32, ClickMarketCategoryID UInt32, ClickMarketPP UInt32, ClickMarketCategoryName String, ClickMarketPPName String, ClickAWAPSCampaignName String, ClickPageName String, ClickTargetType UInt16, ClickTargetPhraseID UInt64, ClickContextType UInt8, ClickSelectType Int8, ClickOptions String, ClickGroupBannerID Int32, OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, FirstVisit DateTime, PredLastVisit Date, LastVisit Date, TotalVisits UInt32, TraficSource Nested(ID Int8, SearchEngineID UInt16, AdvEngineID UInt8, PlaceID UInt16, SocialSourceNetworkID UInt8, Domain String, SearchPhrase String, SocialSourcePage String), Attendance FixedString(16), CLID UInt32, YCLID UInt64, NormalizedRefererHash UInt64, SearchPhraseHash UInt64, RefererDomainHash UInt64, NormalizedStartURLHash UInt64, StartURLDomainHash UInt64, NormalizedEndURLHash UInt64, TopLevelDomain UInt64, URLScheme UInt64, OpenstatServiceNameHash UInt64, OpenstatCampaignIDHash UInt64, OpenstatAdIDHash UInt64, OpenstatSourceIDHash UInt64, UTMSourceHash UInt64, UTMMediumHash UInt64, UTMCampaignHash UInt64, UTMContentHash UInt64, UTMTermHash UInt64, FromHash UInt64, WebVisorEnabled UInt8, WebVisorActivity UInt32, 
ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), Market Nested(Type UInt8, GoalID UInt32, OrderID String, OrderPrice Int64, PP UInt32, DirectPlaceID UInt32, DirectOrderID UInt32, DirectBannerID UInt32, GoodID String, GoodName String, GoodQuantity Int32, GoodPrice Int64), IslandID FixedString(16)) ENGINE = CollapsingMergeTree(Sign) PARTITION BY toYYYYMM(StartDate) ORDER BY (CounterID, StartDate, intHash32(UserID), VisitID) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192"

View File

@ -283,7 +283,7 @@ Among other things, you can run the OPTIMIZE query on MergeTree. But it's not
## Download of Prepared Partitions {#download-of-prepared-partitions}
``` bash
$ curl -O https://clickhouse-datasets.s3.yandex.net/trips_mergetree/partitions/trips_mergetree.tar
$ curl -O https://datasets.clickhouse.tech/trips_mergetree/partitions/trips_mergetree.tar
$ tar xvf trips_mergetree.tar -C /var/lib/clickhouse # path to ClickHouse data directory
$ # check permissions of unpacked data, fix if required
$ sudo service clickhouse-server restart

View File

@ -154,7 +154,7 @@ $ for i in *.zip; do echo $i; unzip -cq $i '*.csv' | sed 's/\.00//g' | clickhous
## Download of Prepared Partitions {#download-of-prepared-partitions}
``` bash
$ curl -O https://clickhouse-datasets.s3.yandex.net/ontime/partitions/ontime.tar
$ curl -O https://datasets.clickhouse.tech/ontime/partitions/ontime.tar
$ tar xvf ontime.tar -C /var/lib/clickhouse # path to ClickHouse data directory
$ # check permissions of unpacked data, fix if required
$ sudo service clickhouse-server restart

View File

@ -85,8 +85,8 @@ Now it's time to fill our ClickHouse server with some sample data. In this tut
### Download and Extract Table Data {#download-and-extract-table-data}
``` bash
curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
```
The extracted files are about 10GB in size.

View File

@ -58,6 +58,7 @@ The supported formats are:
| [XML](#xml) | ✗ | ✔ |
| [CapnProto](#capnproto) | ✔ | ✗ |
| [LineAsString](#lineasstring) | ✔ | ✗ |
| [RawBLOB](#rawblob) | ✔ | ✔ |
You can control some format processing parameters with the ClickHouse settings. For more information read the [Settings](../operations/settings/settings.md) section.
@ -1370,4 +1371,45 @@ Result:
└───────────────────────────────────────────────────┘
```
## RawBLOB {#rawblob}
In this format, all input data is read into a single value. It is possible to parse only a table with a single field of type [String](../sql-reference/data-types/string.md) or similar.
The result is output in binary format without delimiters and escaping. If more than one value is output, the format is ambiguous, and it will be impossible to read the data back.
Below is a comparison of the formats `RawBLOB` and [TabSeparatedRaw](#tabseparatedraw).
`RawBLOB`:
- data is output in binary format, no escaping;
- there are no delimiters between values;
- no newline at the end of each value.
[TabSeparatedRaw](#tabseparatedraw):
- data is output without escaping;
- the rows contain values separated by tabs;
- there is a line feed after the last value in every row.
The following is a comparison of the `RawBLOB` and [RowBinary](#rowbinary) formats.
`RawBLOB`:
- String fields are output without being prefixed by length.
`RowBinary`:
- String fields are represented as length in varint format (unsigned [LEB128](https://en.wikipedia.org/wiki/LEB128)), followed by the bytes of the string.
When empty data is passed to the `RawBLOB` input, ClickHouse throws an exception:
``` text
Code: 108. DB::Exception: No data to insert
```
**Example**
``` bash
$ clickhouse-client --query "CREATE TABLE {some_table} (a String) ENGINE = Memory;"
$ cat {filename} | clickhouse-client --query="INSERT INTO {some_table} FORMAT RawBLOB"
$ clickhouse-client --query "SELECT * FROM {some_table} FORMAT RawBLOB" | md5sum
```
Result:
``` text
f9725a22f9191e064120d718e26862a9 -
```
[Original article](https://clickhouse.tech/docs/en/interfaces/formats/) <!--hide-->

View File

@ -21,6 +21,7 @@ toc_title: Client Libraries
- [seva-code/php-click-house-client](https://packagist.org/packages/seva-code/php-click-house-client)
- [SeasClick C++ client](https://github.com/SeasX/SeasClick)
- [one-ck](https://github.com/lizhichao/one-ck)
- [glushkovds/phpclickhouse-laravel](https://packagist.org/packages/glushkovds/phpclickhouse-laravel)
- Go
- [clickhouse](https://github.com/kshvakov/clickhouse/)
- [go-clickhouse](https://github.com/roistat/go-clickhouse)

View File

@ -82,6 +82,7 @@ toc_title: Adopters
| <a href="http://www.pragma-innovation.fr/" class="favicon">Pragma Innovation</a> | Telemetry and Big Data Analysis | Main product | — | — | [Slides in English, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup18/4_pragma_innovation.pdf) |
| <a href="https://www.qingcloud.com/" class="favicon">QINGCLOUD</a> | Cloud services | Main product | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/4.%20Cloud%20%2B%20TSDB%20for%20ClickHouse%20张健%20QingCloud.pdf) |
| <a href="https://qrator.net" class="favicon">Qrator</a> | DDoS protection | Main product | — | — | [Blog Post, March 2019](https://blog.qrator.net/en/clickhouse-ddos-mitigation_37/) |
| <a href="https://www.rbinternational.com/" class="favicon">Raiffeisenbank</a> | Banking | Analytics | — | — | [Lecture in Russian, December 2020](https://cs.hse.ru/announcements/421965599.html) |
| <a href="https://rambler.ru" class="favicon">Rambler</a> | Internet services | Analytics | — | — | [Talk in Russian, April 2018](https://medium.com/@ramblertop/разработка-api-clickhouse-для-рамблер-топ-100-f4c7e56f3141) |
| <a href="https://retell.cc/" class="favicon">Retell</a> | Speech synthesis | Analytics | — | — | [Blog Article, August 2020](https://vc.ru/services/153732-kak-sozdat-audiostati-na-vashem-sayte-i-zachem-eto-nuzhno) |
| <a href="https://rspamd.com/" class="favicon">Rspamd</a> | Antispam | Analytics | — | — | [Official Website](https://rspamd.com/doc/modules/clickhouse.html) |
@ -102,6 +103,7 @@ toc_title: Adopters
| <a href="https://www.teralytics.net/" class="favicon">Teralytics</a> | Mobility | Analytics | — | — | [Tech blog](https://www.teralytics.net/knowledge-hub/visualizing-mobility-data-the-scalability-challenge) |
| <a href="https://www.tencent.com" class="favicon">Tencent</a> | Big Data | Data processing | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/5.%20ClickHouse大数据集群应用_李俊飞腾讯网媒事业部.pdf) |
| <a href="https://www.tencent.com" class="favicon">Tencent</a> | Messaging | Logging | — | — | [Talk in Chinese, November 2019](https://youtu.be/T-iVQRuw-QY?t=5050) |
| <a href="https://www.tencentmusic.com/" class="favicon">Tencent Music Entertainment (TME)</a> | BigData | Data processing | — | — | [Blog in Chinese, June 2020](https://cloud.tencent.com/developer/article/1637840) |
| <a href="https://trafficstars.com/" class="favicon">Traffic Stars</a> | AD network | — | — | — | [Slides in Russian, May 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup15/lightning/ninja.pdf) |
| <a href="https://www.uber.com" class="favicon">Uber</a> | Taxi | Logging | — | — | [Slides, February 2020](https://presentations.clickhouse.tech/meetup40/uber.pdf) |
| <a href="https://vk.com" class="favicon">VKontakte</a> | Social Network | Statistics, Logging | — | — | [Slides in Russian, August 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup17/3_vk.pdf) |

View File

@ -27,7 +27,7 @@ wget https://raw.githubusercontent.com/ClickHouse/ClickHouse/master/benchmark/cl
```
6. Download test data according to the [Yandex.Metrica dataset](../getting-started/example-datasets/metrica.md) instruction (“hits” table containing 100 million rows).
```bash
wget https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_100m_obfuscated_v1.tar.xz
wget https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz
tar xvf hits_100m_obfuscated_v1.tar.xz -C .
mv hits_100m_obfuscated_v1/* .
```

View File

@ -1093,9 +1093,14 @@ See the section “WITH TOTALS modifier”.
## max_parallel_replicas {#settings-max_parallel_replicas}
The maximum number of replicas for each shard when executing a query.
For consistency (to get different parts of the same data split), this option only works when the sampling key is set.
Replica lag is not controlled.
The maximum number of replicas for each shard when executing a query. In limited circumstances, this can make a query faster by executing it on more servers. This setting is only useful for replicated tables with a sampling key. There are cases where performance will not improve or will even degrade:
- the position of the sampling key in the partitioning key's order doesn't allow efficient range scans
- adding a sampling key to the table makes filtering by other columns less efficient
- the sampling key is an expression that is expensive to calculate
- the cluster's latency distribution has a long tail, so that querying more servers increases the query's overall latency
In addition, this setting produces incorrect results when joins or subqueries are involved and not all tables meet certain conditions. See [Distributed Subqueries and max_parallel_replicas](../../sql-reference/operators/in.md/#max_parallel_replica-subqueries) for more details.
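A rough sketch of the intended usage (the cluster, table and column names below are hypothetical): the table declares a sampling key, and with `max_parallel_replicas` set, each replica of a shard processes a different portion of the sample space.
``` sql
-- Hypothetical replicated table with a sampling key.
CREATE TABLE hits_sample ON CLUSTER my_cluster
(
    EventDate Date,
    UserID UInt64,
    URL String
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/hits_sample', '{replica}')
PARTITION BY toYYYYMM(EventDate)
ORDER BY (EventDate, intHash32(UserID))
SAMPLE BY intHash32(UserID);

-- Up to 3 replicas per shard participate, each reading a different sample range.
SET max_parallel_replicas = 3;
SELECT count() FROM hits_sample WHERE EventDate = '2020-12-01';
```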
## compile {#compile}
@ -2360,10 +2365,41 @@ Default value: `1`.
## output_format_tsv_null_representation {#output_format_tsv_null_representation}
Allows configurable `NULL` representation for [TSV](../../interfaces/formats.md#tabseparated) output format. The setting only controls output format and `\N` is the only supported `NULL` representation for TSV input format.
Defines the representation of `NULL` for the [TSV](../../interfaces/formats.md#tabseparated) output format. The user can set any string as a value, for example, `My NULL`.
Default value: `\N`.
**Examples**
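The examples query a table named `tsv_custom_null` whose definition is not shown here; the following is only a plausible sketch of a table that would produce the output below (a single `Nullable` column with one non-NULL row and two NULL rows).
``` sql
CREATE TABLE tsv_custom_null (a Nullable(UInt32)) ENGINE = Memory;
INSERT INTO tsv_custom_null VALUES (788), (NULL), (NULL);
```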
Query
```sql
SELECT * FROM tsv_custom_null FORMAT TSV;
```
Result
```text
788
\N
\N
```
Query
```sql
SET output_format_tsv_null_representation = 'My NULL';
SELECT * FROM tsv_custom_null FORMAT TSV;
```
Result
```text
788
My NULL
My NULL
```
## output_format_json_array_of_rows {#output-format-json-array-of-rows}
Enables the ability to output all rows as a JSON array in the [JSONEachRow](../../interfaces/formats.md#jsoneachrow) format.
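A minimal sketch of using the setting (the exact output, e.g. quoting of 64-bit integers, also depends on other JSON format settings): with the setting enabled, each row becomes one element of a single JSON array instead of a standalone JSON object per line.
``` sql
SET output_format_json_array_of_rows = 1;
SELECT number FROM numbers(3) FORMAT JSONEachRow;
```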

View File

@ -0,0 +1,46 @@
# system.distribution_queue {#system_tables-distribution_queue}
Contains information about local files that are in the queue to be sent to the shards. These local files contain new parts that are created by inserting new data into the Distributed table in asynchronous mode.
Columns:
- `database` ([String](../../sql-reference/data-types/string.md)) — Name of the database.
- `table` ([String](../../sql-reference/data-types/string.md)) — Name of the table.
- `data_path` ([String](../../sql-reference/data-types/string.md)) — Path to the folder with local files.
- `is_blocked` ([UInt8](../../sql-reference/data-types/int-uint.md)) — Flag indicates whether sending local files to the server is blocked.
- `error_count` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Number of errors.
- `data_files` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Number of local files in a folder.
- `data_compressed_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Size of compressed data in local files, in bytes.
- `last_exception` ([String](../../sql-reference/data-types/string.md)) — Text message about the last error that occurred (if any).
**Example**
``` sql
SELECT * FROM system.distribution_queue LIMIT 1 FORMAT Vertical;
```
``` text
Row 1:
──────
database: default
table: dist
data_path: ./store/268/268bc070-3aad-4b1a-9cf2-4987580161af/default@127%2E0%2E0%2E2:9000/
is_blocked: 1
error_count: 0
data_files: 1
data_compressed_bytes: 499
last_exception:
```
**See also**
- [Distributed table engine](../../engines/table-engines/special/distributed.md)
[Original article](https://clickhouse.tech/docs/en/operations/system_tables/distribution_queue) <!--hide-->

View File

@ -24,58 +24,58 @@ The following table lists cases when query feature works in ClickHouse, but beha
| Feature ID | Feature Name | Status | Comment |
|------------|--------------------------------------------------------------------------------------------------------------------------|----------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| **E011** | **Numeric data types** | **Partial**{.text-warning} | |
| E011-01 | INTEGER and SMALLINT data types | Yes{.text-success} | |
| E011-02 | REAL, DOUBLE PRECISION and FLOAT data types | Partial{.text-warning} | `FLOAT(<binary_precision>)`, `REAL` and `DOUBLE PRECISION` are not supported |
| E011-03 | DECIMAL and NUMERIC data types | Partial{.text-warning} | Only `DECIMAL(p,s)` is supported, not `NUMERIC` |
| E011-04 | Arithmetic operators | Yes{.text-success} | |
| E011-05 | Numeric comparison | Yes{.text-success} | |
| E011-06 | Implicit casting among the numeric data types | No{.text-danger} | ANSI SQL allows arbitrary implicit cast between numeric types, while ClickHouse relies on functions having multiple overloads instead of implicit cast |
| E011-01 | INTEGER and SMALLINT data types | Yes {.text-success} | |
| E011-02 | REAL, DOUBLE PRECISION and FLOAT data types | Partial {.text-warning} | `FLOAT(<binary_precision>)`, `REAL` and `DOUBLE PRECISION` are not supported |
| E011-03 | DECIMAL and NUMERIC data types | Partial {.text-warning} | Only `DECIMAL(p,s)` is supported, not `NUMERIC` |
| E011-04 | Arithmetic operators | Yes {.text-success} | |
| E011-05 | Numeric comparison | Yes {.text-success} | |
| E011-06 | Implicit casting among the numeric data types | No {.text-danger} | ANSI SQL allows arbitrary implicit cast between numeric types, while ClickHouse relies on functions having multiple overloads instead of implicit cast |
| **E021** | **Character string types** | **Partial**{.text-warning} | |
| E021-01 | CHARACTER data type | No{.text-danger} | |
| E021-02 | CHARACTER VARYING data type | No{.text-danger} | `String` behaves similarly, but without length limit in parentheses |
| E021-03 | Character literals | Partial{.text-warning} | No automatic concatenation of consecutive literals and character set support |
| E021-04 | CHARACTER_LENGTH function | Partial{.text-warning} | No `USING` clause |
| E021-05 | OCTET_LENGTH function | No{.text-danger} | `LENGTH` behaves similarly |
| E021-06 | SUBSTRING | Partial{.text-warning} | No support for `SIMILAR` and `ESCAPE` clauses, no `SUBSTRING_REGEX` variant |
| E021-07 | Character concatenation | Partial{.text-warning} | No `COLLATE` clause |
| E021-08 | UPPER and LOWER functions | Yes{.text-success} | |
| E021-09 | TRIM function | Yes{.text-success} | |
| E021-10 | Implicit casting among the fixed-length and variable-length character string types | No{.text-danger} | ANSI SQL allows arbitrary implicit cast between string types, while ClickHouse relies on functions having multiple overloads instead of implicit cast |
| E021-11 | POSITION function | Partial{.text-warning} | No support for `IN` and `USING` clauses, no `POSITION_REGEX` variant |
| E021-12 | Character comparison | Yes{.text-success} | |
| E021-01 | CHARACTER data type | No {.text-danger} | |
| E021-02 | CHARACTER VARYING data type | No {.text-danger} | `String` behaves similarly, but without length limit in parentheses |
| E021-03 | Character literals | Partial {.text-warning} | No automatic concatenation of consecutive literals and character set support |
| E021-04 | CHARACTER_LENGTH function | Partial {.text-warning} | No `USING` clause |
| E021-05 | OCTET_LENGTH function | No {.text-danger} | `LENGTH` behaves similarly |
| E021-06 | SUBSTRING | Partial {.text-warning} | No support for `SIMILAR` and `ESCAPE` clauses, no `SUBSTRING_REGEX` variant |
| E021-07 | Character concatenation | Partial {.text-warning} | No `COLLATE` clause |
| E021-08 | UPPER and LOWER functions | Yes {.text-success} | |
| E021-09 | TRIM function | Yes {.text-success} | |
| E021-10 | Implicit casting among the fixed-length and variable-length character string types | No {.text-danger} | ANSI SQL allows arbitrary implicit cast between string types, while ClickHouse relies on functions having multiple overloads instead of implicit cast |
| E021-11 | POSITION function | Partial {.text-warning} | No support for `IN` and `USING` clauses, no `POSITION_REGEX` variant |
| E021-12 | Character comparison | Yes {.text-success} | |
| **E031** | **Identifiers** | **Partial**{.text-warning} | |
| E031-01 | Delimited identifiers | Partial{.text-warning} | Unicode literal support is limited |
| E031-02 | Lower case identifiers | Yes{.text-success} | |
| E031-03 | Trailing underscore | Yes{.text-success} | |
| E031-01 | Delimited identifiers | Partial {.text-warning} | Unicode literal support is limited |
| E031-02 | Lower case identifiers | Yes {.text-success} | |
| E031-03 | Trailing underscore | Yes {.text-success} | |
| **E051** | **Basic query specification** | **Partial**{.text-warning} | |
| E051-01 | SELECT DISTINCT | Yes{.text-success} | |
| E051-02 | GROUP BY clause | Yes{.text-success} | |
| E051-04 | GROUP BY can contain columns not in `<select list>` | Yes{.text-success} | |
| E051-05 | Select items can be renamed | Yes{.text-success} | |
| E051-06 | HAVING clause | Yes{.text-success} | |
| E051-07 | Qualified \* in select list | Yes{.text-success} | |
| E051-08 | Correlation name in the FROM clause | Yes{.text-success} | |
| E051-09 | Rename columns in the FROM clause | No{.text-danger} | |
| E051-01 | SELECT DISTINCT | Yes {.text-success} | |
| E051-02 | GROUP BY clause | Yes {.text-success} | |
| E051-04 | GROUP BY can contain columns not in `<select list>` | Yes {.text-success} | |
| E051-05 | Select items can be renamed | Yes {.text-success} | |
| E051-06 | HAVING clause | Yes {.text-success} | |
| E051-07 | Qualified \* in select list | Yes {.text-success} | |
| E051-08 | Correlation name in the FROM clause | Yes {.text-success} | |
| E051-09 | Rename columns in the FROM clause | No {.text-danger} | |
| **E061** | **Basic predicates and search conditions** | **Partial**{.text-warning} | |
| E061-01 | Comparison predicate | Yes{.text-success} | |
| E061-02 | BETWEEN predicate | Partial{.text-warning} | No `SYMMETRIC` and `ASYMMETRIC` clause |
| E061-03 | IN predicate with list of values | Yes{.text-success} | |
| E061-04 | LIKE predicate | Yes{.text-success} | |
| E061-05 | LIKE predicate: ESCAPE clause | No{.text-danger} | |
| E061-06 | NULL predicate | Yes{.text-success} | |
| E061-07 | Quantified comparison predicate | No{.text-danger} | |
| E061-08 | EXISTS predicate | No{.text-danger} | |
| E061-09 | Subqueries in comparison predicate | Yes{.text-success} | |
| E061-11 | Subqueries in IN predicate | Yes{.text-success} | |
| E061-12 | Subqueries in quantified comparison predicate | No{.text-danger} | |
| E061-13 | Correlated subqueries | No{.text-danger} | |
| E061-14 | Search condition | Yes{.text-success} | |
| E061-01 | Comparison predicate | Yes {.text-success} | |
| E061-02 | BETWEEN predicate | Partial {.text-warning} | No `SYMMETRIC` and `ASYMMETRIC` clause |
| E061-03 | IN predicate with list of values | Yes {.text-success} | |
| E061-04 | LIKE predicate | Yes {.text-success} | |
| E061-05 | LIKE predicate: ESCAPE clause | No {.text-danger} | |
| E061-06 | NULL predicate | Yes {.text-success} | |
| E061-07 | Quantified comparison predicate | No {.text-danger} | |
| E061-08 | EXISTS predicate | No {.text-danger} | |
| E061-09 | Subqueries in comparison predicate | Yes {.text-success} | |
| E061-11 | Subqueries in IN predicate | Yes {.text-success} | |
| E061-12 | Subqueries in quantified comparison predicate | No {.text-danger} | |
| E061-13 | Correlated subqueries | No {.text-danger} | |
| E061-14 | Search condition | Yes {.text-success} | |
| **E071** | **Basic query expressions** | **Partial**{.text-warning} | |
| E071-01 | UNION DISTINCT table operator | No{.text-danger} | |
| E071-02 | UNION ALL table operator | Yes{.text-success} | |
| E071-03 | EXCEPT DISTINCT table operator | No{.text-danger} | |
| E071-05 | Columns combined via table operators need not have exactly the same data type | Yes{.text-success} | |
| E071-06 | Table operators in subqueries | Yes{.text-success} | |
| E071-01 | UNION DISTINCT table operator | No {.text-danger} | |
| E071-02 | UNION ALL table operator | Yes {.text-success} | |
| E071-03 | EXCEPT DISTINCT table operator | No {.text-danger} | |
| E071-05 | Columns combined via table operators need not have exactly the same data type | Yes {.text-success} | |
| E071-06 | Table operators in subqueries | Yes {.text-success} | |
| **E081** | **Basic privileges** | **Partial**{.text-warning} | Work in progress |
| E081-01 | SELECT privilege at the table level | | |
| E081-02 | DELETE privilege | | |
@ -88,102 +88,102 @@ The following table lists cases when query feature works in ClickHouse, but beha
| E081-09 | USAGE privilege | | |
| E081-10 | EXECUTE privilege | | |
| **E091** | **Set functions** | **Yes**{.text-success} | |
| E091-01 | AVG | Yes{.text-success} | |
| E091-02 | COUNT | Yes{.text-success} | |
| E091-03 | MAX | Yes{.text-success} | |
| E091-04 | MIN | Yes{.text-success} | |
| E091-05 | SUM | Yes{.text-success} | |
| E091-06 | ALL quantifier | No{.text-danger} | |
| E091-07 | DISTINCT quantifier | Partial{.text-warning} | Not all aggregate functions supported |
| E091-01 | AVG | Yes {.text-success} | |
| E091-02 | COUNT | Yes {.text-success} | |
| E091-03 | MAX | Yes {.text-success} | |
| E091-04 | MIN | Yes {.text-success} | |
| E091-05 | SUM | Yes {.text-success} | |
| E091-06 | ALL quantifier | No {.text-danger} | |
| E091-07 | DISTINCT quantifier | Partial {.text-warning} | Not all aggregate functions supported |
| **E101** | **Basic data manipulation** | **Partial**{.text-warning} | |
| E101-01 | INSERT statement | Yes{.text-success} | Note: primary key in ClickHouse does not imply the `UNIQUE` constraint |
| E101-03 | Searched UPDATE statement | No{.text-danger} | There's an `ALTER UPDATE` statement for batch data modification |
| E101-04 | Searched DELETE statement | No{.text-danger} | There's an `ALTER DELETE` statement for batch data removal |
| E101-01 | INSERT statement | Yes {.text-success} | Note: primary key in ClickHouse does not imply the `UNIQUE` constraint |
| E101-03 | Searched UPDATE statement | No {.text-danger} | There's an `ALTER UPDATE` statement for batch data modification |
| E101-04 | Searched DELETE statement | No {.text-danger} | There's an `ALTER DELETE` statement for batch data removal |
| **E111** | **Single row SELECT statement** | **No**{.text-danger} | |
| **E121** | **Basic cursor support** | **No**{.text-danger} | |
| E121-01 | DECLARE CURSOR | No{.text-danger} | |
| E121-02 | ORDER BY columns need not be in select list | No{.text-danger} | |
| E121-03 | Value expressions in ORDER BY clause | No{.text-danger} | |
| E121-04 | OPEN statement | No{.text-danger} | |
| E121-06 | Positioned UPDATE statement | No{.text-danger} | |
| E121-07 | Positioned DELETE statement | No{.text-danger} | |
| E121-08 | CLOSE statement | No{.text-danger} | |
| E121-10 | FETCH statement: implicit NEXT | No{.text-danger} | |
| E121-17 | WITH HOLD cursors | No{.text-danger} | |
| E121-01 | DECLARE CURSOR | No {.text-danger} | |
| E121-02 | ORDER BY columns need not be in select list | No {.text-danger} | |
| E121-03 | Value expressions in ORDER BY clause | No {.text-danger} | |
| E121-04 | OPEN statement | No {.text-danger} | |
| E121-06 | Positioned UPDATE statement | No {.text-danger} | |
| E121-07 | Positioned DELETE statement | No {.text-danger} | |
| E121-08 | CLOSE statement | No {.text-danger} | |
| E121-10 | FETCH statement: implicit NEXT | No {.text-danger} | |
| E121-17 | WITH HOLD cursors | No {.text-danger} | |
| **E131** | **Null value support (nulls in lieu of values)** | **Partial**{.text-warning} | Some restrictions apply |
| **E141** | **Basic integrity constraints** | **Partial**{.text-warning} | |
| E141-01 | NOT NULL constraints | Yes{.text-success} | Note: `NOT NULL` is implied for table columns by default |
| E141-02 | UNIQUE constraint of NOT NULL columns | No{.text-danger} | |
| E141-03 | PRIMARY KEY constraints | No{.text-danger} | |
| E141-04 | Basic FOREIGN KEY constraint with the NO ACTION default for both referential delete action and referential update action | No{.text-danger} | |
| E141-06 | CHECK constraint | Yes{.text-success} | |
| E141-07 | Column defaults | Yes{.text-success} | |
| E141-08 | NOT NULL inferred on PRIMARY KEY | Yes{.text-success} | |
| E141-10 | Names in a foreign key can be specified in any order | No{.text-danger} | |
| E141-01 | NOT NULL constraints | Yes {.text-success} | Note: `NOT NULL` is implied for table columns by default |
| E141-02 | UNIQUE constraint of NOT NULL columns | No {.text-danger} | |
| E141-03 | PRIMARY KEY constraints | No {.text-danger} | |
| E141-04 | Basic FOREIGN KEY constraint with the NO ACTION default for both referential delete action and referential update action | No {.text-danger} | |
| E141-06 | CHECK constraint | Yes {.text-success} | |
| E141-07 | Column defaults | Yes {.text-success} | |
| E141-08 | NOT NULL inferred on PRIMARY KEY | Yes {.text-success} | |
| E141-10 | Names in a foreign key can be specified in any order | No {.text-danger} | |
| **E151** | **Transaction support** | **No**{.text-danger} | |
| E151-01 | COMMIT statement | No{.text-danger} | |
| E151-02 | ROLLBACK statement | No{.text-danger} | |
| E151-01 | COMMIT statement | No {.text-danger} | |
| E151-02 | ROLLBACK statement | No {.text-danger} | |
| **E152** | **Basic SET TRANSACTION statement** | **No**{.text-danger} | |
| E152-01 | SET TRANSACTION statement: ISOLATION LEVEL SERIALIZABLE clause | No{.text-danger} | |
| E152-02 | SET TRANSACTION statement: READ ONLY and READ WRITE clauses | No{.text-danger} | |
| E152-01 | SET TRANSACTION statement: ISOLATION LEVEL SERIALIZABLE clause | No {.text-danger} | |
| E152-02 | SET TRANSACTION statement: READ ONLY and READ WRITE clauses | No {.text-danger} | |
| **E153** | **Updatable queries with subqueries** | **No**{.text-danger} | |
| **E161** | **SQL comments using leading double minus** | **Yes**{.text-success} | |
| **E171** | **SQLSTATE support** | **No**{.text-danger} | |
| **E182** | **Host language binding** | **No**{.text-danger} | |
| **F031** | **Basic schema manipulation** | **Partial**{.text-warning} | |
| F031-01 | CREATE TABLE statement to create persistent base tables | Partial{.text-warning} | No `SYSTEM VERSIONING`, `ON COMMIT`, `GLOBAL`, `LOCAL`, `PRESERVE`, `DELETE`, `REF IS`, `WITH OPTIONS`, `UNDER`, `LIKE`, `PERIOD FOR` clauses and no support for user resolved data types |
| F031-02 | CREATE VIEW statement | Partial{.text-warning} | No `RECURSIVE`, `CHECK`, `UNDER`, `WITH OPTIONS` clauses and no support for user resolved data types |
| F031-03 | GRANT statement | Yes{.text-success} | |
| F031-04 | ALTER TABLE statement: ADD COLUMN clause | Partial{.text-warning} | No support for `GENERATED` clause and system time period |
| F031-13 | DROP TABLE statement: RESTRICT clause | No{.text-danger} | |
| F031-16 | DROP VIEW statement: RESTRICT clause | No{.text-danger} | |
| F031-19 | REVOKE statement: RESTRICT clause | No{.text-danger} | |
| F031-01 | CREATE TABLE statement to create persistent base tables | Partial {.text-warning} | No `SYSTEM VERSIONING`, `ON COMMIT`, `GLOBAL`, `LOCAL`, `PRESERVE`, `DELETE`, `REF IS`, `WITH OPTIONS`, `UNDER`, `LIKE`, `PERIOD FOR` clauses and no support for user resolved data types |
| F031-02 | CREATE VIEW statement | Partial {.text-warning} | No `RECURSIVE`, `CHECK`, `UNDER`, `WITH OPTIONS` clauses and no support for user resolved data types |
| F031-03 | GRANT statement | Yes {.text-success} | |
| F031-04 | ALTER TABLE statement: ADD COLUMN clause | Partial {.text-warning} | No support for `GENERATED` clause and system time period |
| F031-13 | DROP TABLE statement: RESTRICT clause | No {.text-danger} | |
| F031-16 | DROP VIEW statement: RESTRICT clause | No {.text-danger} | |
| F031-19 | REVOKE statement: RESTRICT clause | No {.text-danger} | |
| **F041** | **Basic joined table** | **Partial**{.text-warning} | |
| F041-01 | Inner join (but not necessarily the INNER keyword) | Yes{.text-success} | |
| F041-02 | INNER keyword | Yes{.text-success} | |
| F041-03 | LEFT OUTER JOIN | Yes{.text-success} | |
| F041-04 | RIGHT OUTER JOIN | Yes{.text-success} | |
| F041-05 | Outer joins can be nested | Yes{.text-success} | |
| F041-07 | The inner table in a left or right outer join can also be used in an inner join | Yes{.text-success} | |
| F041-08 | All comparison operators are supported (rather than just =) | No{.text-danger} | |
| F041-01 | Inner join (but not necessarily the INNER keyword) | Yes {.text-success} | |
| F041-02 | INNER keyword | Yes {.text-success} | |
| F041-03 | LEFT OUTER JOIN | Yes {.text-success} | |
| F041-04 | RIGHT OUTER JOIN | Yes {.text-success} | |
| F041-05 | Outer joins can be nested | Yes {.text-success} | |
| F041-07 | The inner table in a left or right outer join can also be used in an inner join | Yes {.text-success} | |
| F041-08 | All comparison operators are supported (rather than just =) | No {.text-danger} | |
| **F051** | **Basic date and time** | **Partial**{.text-warning} | |
| F051-01 | DATE data type (including support of DATE literal) | Partial{.text-warning} | No literal |
| F051-02 | TIME data type (including support of TIME literal) with fractional seconds precision of at least 0 | No{.text-danger} | |
| F051-03 | TIMESTAMP data type (including support of TIMESTAMP literal) with fractional seconds precision of at least 0 and 6 | No{.text-danger} | `DateTime64` type provides similar functionality |
| F051-04 | Comparison predicate on DATE, TIME, and TIMESTAMP data types | Partial{.text-warning} | Only one data type available |
| F051-05 | Explicit CAST between datetime types and character string types | Yes{.text-success} | |
| F051-06 | CURRENT_DATE | No{.text-danger} | `today()` is similar |
| F051-07 | LOCALTIME | No{.text-danger} | `now()` is similar |
| F051-08 | LOCALTIMESTAMP | No{.text-danger} | |
| F051-01 | DATE data type (including support of DATE literal) | Partial {.text-warning} | No literal |
| F051-02 | TIME data type (including support of TIME literal) with fractional seconds precision of at least 0 | No {.text-danger} | |
| F051-03 | TIMESTAMP data type (including support of TIMESTAMP literal) with fractional seconds precision of at least 0 and 6 | No {.text-danger} | `DateTime64` type provides similar functionality |
| F051-04 | Comparison predicate on DATE, TIME, and TIMESTAMP data types | Partial {.text-warning} | Only one data type available |
| F051-05 | Explicit CAST between datetime types and character string types | Yes {.text-success} | |
| F051-06 | CURRENT_DATE | No {.text-danger} | `today()` is similar |
| F051-07 | LOCALTIME | No {.text-danger} | `now()` is similar |
| F051-08 | LOCALTIMESTAMP | No {.text-danger} | |
| **F081** | **UNION and EXCEPT in views** | **Partial**{.text-warning} | |
| **F131** | **Grouped operations** | **Partial**{.text-warning} | |
| F131-01 | WHERE, GROUP BY, and HAVING clauses supported in queries with grouped views | Yes{.text-success} | |
| F131-02 | Multiple tables supported in queries with grouped views | Yes{.text-success} | |
| F131-03 | Set functions supported in queries with grouped views | Yes{.text-success} | |
| F131-04 | Subqueries with GROUP BY and HAVING clauses and grouped views | Yes{.text-success} | |
| F131-05 | Single row SELECT with GROUP BY and HAVING clauses and grouped views | No{.text-danger} | |
| F131-01 | WHERE, GROUP BY, and HAVING clauses supported in queries with grouped views | Yes {.text-success} | |
| F131-02 | Multiple tables supported in queries with grouped views | Yes {.text-success} | |
| F131-03 | Set functions supported in queries with grouped views | Yes {.text-success} | |
| F131-04 | Subqueries with GROUP BY and HAVING clauses and grouped views | Yes {.text-success} | |
| F131-05 | Single row SELECT with GROUP BY and HAVING clauses and grouped views | No {.text-danger} | |
| **F181** | **Multiple module support** | **No**{.text-danger} | |
| **F201** | **CAST function** | **Yes**{.text-success} | |
| **F221** | **Explicit defaults** | **No**{.text-danger} | |
| **F261** | **CASE expression** | **Yes**{.text-success} | |
| F261-01 | Simple CASE | Yes{.text-success} | |
| F261-02 | Searched CASE | Yes{.text-success} | |
| F261-03 | NULLIF | Yes{.text-success} | |
| F261-04 | COALESCE | Yes{.text-success} | |
| F261-01 | Simple CASE | Yes {.text-success} | |
| F261-02 | Searched CASE | Yes {.text-success} | |
| F261-03 | NULLIF | Yes {.text-success} | |
| F261-04 | COALESCE | Yes {.text-success} | |
| **F311** | **Schema definition statement** | **Partial**{.text-warning} | |
| F311-01 | CREATE SCHEMA | No{.text-danger} | |
| F311-02 | CREATE TABLE for persistent base tables | Yes{.text-success} | |
| F311-03 | CREATE VIEW | Yes{.text-success} | |
| F311-04 | CREATE VIEW: WITH CHECK OPTION | No{.text-danger} | |
| F311-05 | GRANT statement | Yes{.text-success} | |
| F311-01 | CREATE SCHEMA | No {.text-danger} | |
| F311-02 | CREATE TABLE for persistent base tables | Yes {.text-success} | |
| F311-03 | CREATE VIEW | Yes {.text-success} | |
| F311-04 | CREATE VIEW: WITH CHECK OPTION | No {.text-danger} | |
| F311-05 | GRANT statement | Yes {.text-success} | |
| **F471** | **Scalar subquery values** | **Yes**{.text-success} | |
| **F481** | **Expanded NULL predicate** | **Yes**{.text-success} | |
| **F812** | **Basic flagging** | **No**{.text-danger} | |
| **S011** | **Distinct data types** | | |
| **T321** | **Basic SQL-invoked routines** | **No**{.text-danger} | |
| T321-01 | User-defined functions with no overloading | No{.text-danger} | |
| T321-02 | User-defined stored procedures with no overloading | No{.text-danger} | |
| T321-03 | Function invocation | No{.text-danger} | |
| T321-04 | CALL statement | No{.text-danger} | |
| T321-05 | RETURN statement | No{.text-danger} | |
| T321-01 | User-defined functions with no overloading | No {.text-danger} | |
| T321-02 | User-defined stored procedures with no overloading | No {.text-danger} | |
| T321-03 | Function invocation | No {.text-danger} | |
| T321-04 | CALL statement | No {.text-danger} | |
| T321-05 | RETURN statement | No {.text-danger} | |
| **T631** | **IN predicate with one list element** | **Yes**{.text-success} | |

View File

@ -57,3 +57,5 @@ Functions:
- [A Magical Mystery Tour of the LowCardinality Data Type](https://www.altinity.com/blog/2019/3/27/low-cardinality).
- [Reducing Clickhouse Storage Cost with the Low Cardinality Type Lessons from an Instana Engineer](https://www.instana.com/blog/reducing-clickhouse-storage-cost-with-the-low-cardinality-type-lessons-from-an-instana-engineer/).
- [String Optimization (video presentation in Russian)](https://youtu.be/rqf-ILRgBdY?list=PL0Z2YDlm0b3iwXCpEFiOOYmwXzVmjJfEt). [Slides in English](https://github.com/yandex/clickhouse-presentations/raw/master/meetup19/string_optimization.pdf).
[Original article](https://clickhouse.tech/docs/en/sql-reference/data-types/lowcardinality/) <!--hide-->

View File

@ -93,6 +93,8 @@ Setting fields:
- `path` – The absolute path to the file.
- `format` – The file format. All the formats described in “[Formats](../../../interfaces/formats.md#formats)” are supported.
When a dictionary with the FILE source is created via a DDL command (`CREATE DICTIONARY ...`), the source file has to be located in the `user_files` directory to prevent DB users from accessing arbitrary files on the ClickHouse node.
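For instance, a minimal DDL sketch of a dictionary backed by a file in `user_files` might look like the following (the dictionary name, column set and file name are hypothetical):
``` sql
CREATE DICTIONARY regions_dict
(
    id UInt64,
    name String
)
PRIMARY KEY id
-- the source file must live under the user_files directory, as described above
SOURCE(FILE(path '/var/lib/clickhouse/user_files/regions.tsv' format 'TabSeparated'))
LAYOUT(FLAT())
LIFETIME(300)
```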
## Executable File {#dicts-external_dicts_dict_sources-executable}
Working with executable files depends on [how the dictionary is stored in memory](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md). If the dictionary is stored using `cache` and `complex_key_cache`, ClickHouse requests the necessary keys by sending a request to the executable file's STDIN. Otherwise, ClickHouse starts the executable file and treats its output as dictionary data.
@ -108,17 +110,13 @@ Example of settings:
</source>
```
or
``` sql
SOURCE(EXECUTABLE(command 'cat /opt/dictionaries/os.tsv' format 'TabSeparated'))
```
Setting fields:
- `command` – The absolute path to the executable file, or the file name (if the program directory is written to `PATH`).
- `format` – The file format. All the formats described in “[Formats](../../../interfaces/formats.md#formats)” are supported.
This dictionary source can be configured only via the XML configuration. Creating dictionaries with an executable source via DDL is disabled; otherwise, the DB user would be able to execute an arbitrary binary on the ClickHouse node.
## Http(s) {#dicts-external_dicts_dict_sources-http}
Working with an HTTP(s) server depends on [how the dictionary is stored in memory](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md). If the dictionary is stored using `cache` and `complex_key_cache`, ClickHouse requests the necessary keys by sending a request via the `POST` method.
@ -169,6 +167,8 @@ Setting fields:
- `name` – Identifier name used for the header sent with the request.
- `value` – Value set for the specified identifier name.
When creating a dictionary using the DDL command (`CREATE DICTIONARY ...`), remote hosts for HTTP dictionaries are checked against the `remote_url_allow_hosts` section of the config to prevent database users from accessing an arbitrary HTTP server.
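As an illustration, a DDL sketch for an HTTP(s) source might look like this (the host and file are hypothetical and would have to be allowed by `remote_url_allow_hosts`):
``` sql
CREATE DICTIONARY http_dict
(
    id UInt64,
    value String
)
PRIMARY KEY id
-- the host must be listed in the remote_url_allow_hosts section of the server config
SOURCE(HTTP(url 'https://dicts.example.com/data.tsv' format 'TabSeparated'))
LAYOUT(HASHED())
LIFETIME(MIN 300 MAX 360)
```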
## ODBC {#dicts-external_dicts_dict_sources-odbc}
You can use this method to connect to any database that has an ODBC driver.
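A hedged sketch of what the ODBC source clause of a `CREATE DICTIONARY` statement may look like (the DSN, database and table names are placeholders):
``` sql
-- SOURCE clause of a CREATE DICTIONARY statement; DSN, database and table names are placeholders
SOURCE(ODBC(db 'DatabaseName' table 'SchemaName.TableName' connection_string 'DSN=some_parameters' invalidate_query 'SQL_QUERY'))
```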

View File

@ -366,7 +366,7 @@ SELECT toDate('2016-12-27') AS date, toYearWeek(date) AS yearWeek0, toYearWeek(d
└────────────┴───────────┴───────────┴───────────┘
```
## date_trunc {#date_trunc}
## date\_trunc {#date_trunc}
Truncates date and time data to the specified part of date.
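For example, a quick sketch of the expected behaviour when truncating to the hour:
``` sql
SELECT date_trunc('hour', toDateTime('2020-01-15 10:20:30'));
-- expected to return 2020-01-15 10:00:00
```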
@ -435,7 +435,7 @@ Result:
- [toStartOfInterval](#tostartofintervaltime-or-data-interval-x-unit-time-zone)
# now {#now}
## now {#now}
Returns the current date and time.
@ -662,7 +662,7 @@ Result:
[Original article](https://clickhouse.tech/docs/en/query_language/functions/date_time_functions/) <!--hide-->
## FROM_UNIXTIME
## FROM\_UNIXTIME {#fromunixtime}
When there is only a single argument of integer type, it acts in the same way as `toDateTime` and returns the [DateTime](../../sql-reference/data-types/datetime.md) type.
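A short sketch of the single-argument form (the exact result depends on the server timezone):
``` sql
SELECT FROM_UNIXTIME(423543535);
-- expected to behave like toDateTime(423543535), e.g. 1983-06-04 10:58:55 in UTC
```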
@ -692,3 +692,147 @@ SELECT FROM_UNIXTIME(1234334543, '%Y-%m-%d %R:%S') AS DateTime
│ 2009-02-11 14:42:23 │
└─────────────────────┘
```
## toModifiedJulianDay {#tomodifiedjulianday}
Converts a [Proleptic Gregorian calendar](https://en.wikipedia.org/wiki/Proleptic_Gregorian_calendar) date in text form `YYYY-MM-DD` to a [Modified Julian Day](https://en.wikipedia.org/wiki/Julian_day#Variants) number in Int32. This function supports dates from `0000-01-01` to `9999-12-31`. It raises an exception if the argument cannot be parsed as a date, or the date is invalid.
**Syntax**
``` sql
toModifiedJulianDay(date)
```
**Parameters**
- `date` — Date in text form. [String](../../sql-reference/data-types/string.md) or [FixedString](../../sql-reference/data-types/fixedstring.md).
**Returned value**
- Modified Julian Day number.
Type: [Int32](../../sql-reference/data-types/int-uint.md).
**Example**
Query:
``` sql
SELECT toModifiedJulianDay('2020-01-01');
```
Result:
``` text
┌─toModifiedJulianDay('2020-01-01')─┐
│ 58849 │
└───────────────────────────────────┘
```
## toModifiedJulianDayOrNull {#tomodifiedjuliandayornull}
Similar to [toModifiedJulianDay()](#tomodifiedjulianday), but instead of raising exceptions it returns `NULL`.
**Syntax**
``` sql
toModifiedJulianDayOrNull(date)
```
**Parameters**
- `date` — Date in text form. [String](../../sql-reference/data-types/string.md) or [FixedString](../../sql-reference/data-types/fixedstring.md).
**Returned value**
- Modified Julian Day number.
Type: [Nullable(Int32)](../../sql-reference/data-types/int-uint.md).
**Example**
Query:
``` sql
SELECT toModifiedJulianDayOrNull('2020-01-01');
```
Result:
``` text
┌─toModifiedJulianDayOrNull('2020-01-01')─┐
│ 58849 │
└─────────────────────────────────────────┘
```
## fromModifiedJulianDay {#frommodifiedjulianday}
Converts a [Modified Julian Day](https://en.wikipedia.org/wiki/Julian_day#Variants) number to a [Proleptic Gregorian calendar](https://en.wikipedia.org/wiki/Proleptic_Gregorian_calendar) date in text form `YYYY-MM-DD`. This function supports day numbers from `-678941` to `2973119` (which represent `0000-01-01` and `9999-12-31` respectively). It raises an exception if the day number is outside of the supported range.
**Syntax**
``` sql
fromModifiedJulianDay(day)
```
**Parameters**
- `day` — Modified Julian Day number. [Any integral types](../../sql-reference/data-types/int-uint.md).
**Returned value**
- Date in text form.
Type: [String](../../sql-reference/data-types/string.md)
**Example**
Query:
``` sql
SELECT fromModifiedJulianDay(58849);
```
Result:
``` text
┌─fromModifiedJulianDay(58849)─┐
│ 2020-01-01 │
└──────────────────────────────┘
```
## fromModifiedJulianDayOrNull {#frommodifiedjuliandayornull}
Similar to [fromModifiedJulianDay()](#frommodifiedjulianday), but instead of raising exceptions it returns `NULL`.
**Syntax**
``` sql
fromModifiedJulianDayOrNull(day)
```
**Parameters**
- `day` — Modified Julian Day number. [Any integral types](../../sql-reference/data-types/int-uint.md).
**Returned value**
- Date in text form.
Type: [Nullable(String)](../../sql-reference/data-types/string.md)
**Example**
Query:
``` sql
SELECT fromModifiedJulianDayOrNull(58849);
```
Result:
``` text
┌─fromModifiedJulianDayOrNull(58849)─┐
│ 2020-01-01 │
└────────────────────────────────────┘
```

View File

@ -111,4 +111,306 @@ Accepts a numeric argument and returns a UInt64 number close to 2 to the power o
Accepts a numeric argument and returns a UInt64 number close to 10 to the power of x.
## cosh(x) {#coshx}
[Hyperbolic cosine](https://in.mathworks.com/help/matlab/ref/cosh.html).
**Syntax**
``` sql
cosh(x)
```
**Parameters**
- `x` — The angle, in radians. Values from the interval: `-∞ < x < +∞`. [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Returned value**
- Values from the interval: `1 <= cosh(x) < +∞`.
Type: [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Example**
Query:
``` sql
SELECT cosh(0);
```
Result:
``` text
┌─cosh(0)──┐
│ 1 │
└──────────┘
```
## acosh(x) {#acoshx}
[Inverse hyperbolic cosine](https://www.mathworks.com/help/matlab/ref/acosh.html).
**Syntax**
``` sql
acosh(x)
```
**Parameters**
- `x` — Hyperbolic cosine of angle. Values from the interval: `1 <= x < +∞`. [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Returned value**
- The angle, in radians. Values from the interval: `0 <= acosh(x) < +∞`.
Type: [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Example**
Query:
``` sql
SELECT acosh(1);
```
Result:
``` text
┌─acosh(1)─┐
│ 0 │
└──────────┘
```
**See Also**
- [cosh(x)](../../sql-reference/functions/math-functions.md#coshx)
## sinh(x) {#sinhx}
[Hyperbolic sine](https://www.mathworks.com/help/matlab/ref/sinh.html).
**Syntax**
``` sql
sinh(x)
```
**Parameters**
- `x` — The angle, in radians. Values from the interval: `-∞ < x < +∞`. [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Returned value**
- Values from the interval: `-∞ < sinh(x) < +∞`.
Type: [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Example**
Query:
``` sql
SELECT sinh(0);
```
Result:
``` text
┌─sinh(0)──┐
│ 0 │
└──────────┘
```
## asinh(x) {#asinhx}
[Inverse hyperbolic sine](https://www.mathworks.com/help/matlab/ref/asinh.html).
**Syntax**
``` sql
asinh(x)
```
**Parameters**
- `x` — Hyperbolic sine of angle. Values from the interval: `-∞ < x < +∞`. [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Returned value**
- The angle, in radians. Values from the interval: `-∞ < asinh(x) < +∞`.
Type: [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Example**
Query:
``` sql
SELECT asinh(0);
```
Result:
``` text
┌─asinh(0)─┐
│ 0 │
└──────────┘
```
**See Also**
- [sinh(x)](../../sql-reference/functions/math-functions.md#sinhx)
## atanh(x) {#atanhx}
[Inverse hyperbolic tangent](https://www.mathworks.com/help/matlab/ref/atanh.html).
**Syntax**
``` sql
atanh(x)
```
**Parameters**
- `x` — Hyperbolic tangent of angle. Values from the interval: `-1 < x < 1`. [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Returned value**
- The angle, in radians. Values from the interval: `-∞ < atanh(x) < +∞`.
Type: [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Example**
Query:
``` sql
SELECT atanh(0);
```
Result:
``` text
┌─atanh(0)─┐
│ 0 │
└──────────┘
```
## atan2(y, x) {#atan2yx}
The [function](https://en.wikipedia.org/wiki/Atan2) calculates the angle in the Euclidean plane, given in radians, between the positive x axis and the ray to the point `(x, y) ≠ (0, 0)`.
**Syntax**
``` sql
atan2(y, x)
```
**Parameters**
- `y` — y-coordinate of the point through which the ray passes. [Float64](../../sql-reference/data-types/float.md#float32-float64).
- `x` — x-coordinate of the point through which the ray passes. [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Returned value**
- The angle `θ` such that `−π < θ ≤ π`, in radians.
Type: [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Example**
Query:
``` sql
SELECT atan2(1, 1);
```
Result:
``` text
┌────────atan2(1, 1)─┐
│ 0.7853981633974483 │
└────────────────────┘
```
## hypot(x, y) {#hypotxy}
Calculates the length of the hypotenuse of a right-angle triangle. The [function](https://en.wikipedia.org/wiki/Hypot) avoids problems that occur when squaring very large or very small numbers.
**Syntax**
``` sql
hypot(x, y)
```
**Parameters**
- `x` — The first cathetus of a right-angle triangle. [Float64](../../sql-reference/data-types/float.md#float32-float64).
- `y` — The second cathetus of a right-angle triangle. [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Returned value**
- The length of the hypotenuse of a right-angle triangle.
Type: [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Example**
Query:
``` sql
SELECT hypot(1, 1);
```
Result:
``` text
┌────────hypot(1, 1)─┐
│ 1.4142135623730951 │
└────────────────────┘
```
## log1p(x) {#log1px}
Calculates `log(1+x)`. The [function](https://en.wikipedia.org/wiki/Natural_logarithm#lnp1) `log1p(x)` is more accurate than `log(1+x)` for small values of x.
**Syntax**
``` sql
log1p(x)
```
**Parameters**
- `x` — Values from the interval: `-1 < x < +∞`. [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Returned value**
- Values from the interval: `-∞ < log1p(x) < +∞`.
Type: [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Example**
Query:
``` sql
SELECT log1p(0);
```
Result:
``` text
┌─log1p(0)─┐
│ 0 │
└──────────┘
```
**See Also**
- [log(x)](../../sql-reference/functions/math-functions.md#logx-lnx)
[Original article](https://clickhouse.tech/docs/en/query_language/functions/math_functions/) <!--hide-->

View File

@ -131,6 +131,40 @@ For example:
- `cutToFirstSignificantSubdomain('www.tr') = 'www.tr'`.
- `cutToFirstSignificantSubdomain('tr') = ''`.
### cutToFirstSignificantSubdomainCustom {#cuttofirstsignificantsubdomaincustom}
Same as `cutToFirstSignificantSubdomain` but accepts a custom TLD list name, which is useful if:
- you need a fresh TLD list,
- or you have a custom one.
Configuration example:
```xml
<!-- <top_level_domains_path>/var/lib/clickhouse/top_level_domains/</top_level_domains_path> -->
<top_level_domains_lists>
<!-- https://publicsuffix.org/list/public_suffix_list.dat -->
<public_suffix_list>public_suffix_list.dat</public_suffix_list>
<!-- NOTE: path is under top_level_domains_path -->
</top_level_domains_lists>
```
Example:
- `cutToFirstSignificantSubdomain('https://news.yandex.com.tr/', 'public_suffix_list') = 'yandex.com.tr'`.
### cutToFirstSignificantSubdomainCustomWithWWW {#cuttofirstsignificantsubdomaincustomwithwww}
Same as `cutToFirstSignificantSubdomainWithWWW` but accepts a custom TLD list name.
### firstSignificantSubdomainCustom {#firstsignificantsubdomaincustom}
Same as `firstSignificantSubdomain` but accepts a custom TLD list name.
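For illustration, a hedged query over both functions, assuming the `public_suffix_list` TLD list from the configuration example above:
``` sql
SELECT
    cutToFirstSignificantSubdomainCustomWithWWW('https://www.yandex.com.tr/', 'public_suffix_list') AS with_www,
    firstSignificantSubdomainCustom('https://news.yandex.com.tr/', 'public_suffix_list') AS first_significant;
-- with_www is expected to be 'www.yandex.com.tr' and first_significant 'yandex'
```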
### port(URL\[, default_port = 0\]) {#port}
Returns the port or `default_port` if there is no port in the URL (or in case of validation error).
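A small sketch of how it is expected to behave:
``` sql
SELECT port('http://paul@www.example.com:80/');
-- expected to return 80; with no port in the URL, default_port (0 unless specified) is returned
```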

View File

@ -197,3 +197,25 @@ This is more optimal than using the normal IN. However, keep the following point
5. If you need to use GLOBAL IN often, plan the location of the ClickHouse cluster so that a single group of replicas resides in no more than one data center with a fast network between them, so that a query can be processed entirely within a single data center.
It also makes sense to specify a local table in the `GLOBAL IN` clause, in case this local table is only available on the requestor server and you want to use data from it on remote servers.
### Distributed Subqueries and max_parallel_replicas {#max_parallel_replica-subqueries}
When max_parallel_replicas is greater than 1, distributed queries are further transformed. For example, the following:
```sql
SELECT CounterID, count() FROM distributed_table_1 WHERE UserID IN (SELECT UserID FROM local_table_2 WHERE CounterID < 100)
SETTINGS max_parallel_replicas=3
```
is transformed on each server into
```sql
SELECT CounterID, count() FROM local_table_1 WHERE UserID IN (SELECT UserID FROM local_table_2 WHERE CounterID < 100)
SETTINGS parallel_replicas_count=3, parallel_replicas_offset=M
```
where M is between 1 and 3 depending on which replica the local query is executing on. These settings affect every MergeTree-family table in the query and have the same effect as applying `SAMPLE 1/3 OFFSET (M-1)/3` on each table.
Therefore, adding the max_parallel_replicas setting will only produce correct results if both tables have the same replication scheme and are sampled by UserID or a subkey of it. In particular, if local_table_2 does not have a sampling key, incorrect results will be produced. The same rule applies to JOIN.
One workaround, if local_table_2 does not meet these requirements, is to use `GLOBAL IN` or `GLOBAL JOIN`, as shown below.
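A hedged sketch of that workaround, reusing the hypothetical tables from the example above:
``` sql
SELECT CounterID, count() FROM distributed_table_1
WHERE UserID GLOBAL IN (SELECT UserID FROM local_table_2 WHERE CounterID < 100)
SETTINGS max_parallel_replicas=3
-- the subquery runs once on the initiator and its result is sent to every replica,
-- so local_table_2 no longer needs a compatible sampling key
```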

View File

@ -53,7 +53,7 @@ KILL MUTATION [ON CLUSTER cluster]
Tries to cancel and remove [mutations](../../sql-reference/statements/alter/index.md#alter-mutations) that are currently executing. Mutations to cancel are selected from the [`system.mutations`](../../operations/system-tables/mutations.md#system_tables-mutations) table using the filter specified by the `WHERE` clause of the `KILL` query.
A test query (`TEST`) only checks the user's rights and displays a list of queries to stop.
A test query (`TEST`) only checks the user's rights and displays a list of mutations to stop.
Examples:
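For instance (database and table names below are placeholders):
``` sql
-- Cancel and remove all mutations of a single table:
KILL MUTATION WHERE database = 'default' AND table = 'table'

-- Cancel a specific mutation:
KILL MUTATION WHERE database = 'default' AND table = 'table' AND mutation_id = 'mutation_3.txt'
```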

View File

@ -56,10 +56,188 @@ When floating point numbers are sorted, NaNs are separate from the other values.
## Collation Support {#collation-support}
For sorting by String values, you can specify collation (comparison). Example: `ORDER BY SearchPhrase COLLATE 'tr'` - for sorting by keyword in ascending order, using the Turkish alphabet, case insensitive, assuming that strings are UTF-8 encoded. `COLLATE` can be specified or not for each expression in ORDER BY independently. If `ASC` or `DESC` is specified, `COLLATE` is specified after it. When using `COLLATE`, sorting is always case-insensitive.
For sorting by [String](../../../sql-reference/data-types/string.md) values, you can specify collation (comparison). Example: `ORDER BY SearchPhrase COLLATE 'tr'` - for sorting by keyword in ascending order, using the Turkish alphabet, case insensitive, assuming that strings are UTF-8 encoded. `COLLATE` can be specified or not for each expression in ORDER BY independently. If `ASC` or `DESC` is specified, `COLLATE` is specified after it. When using `COLLATE`, sorting is always case-insensitive.
Collate is supported in [LowCardinality](../../../sql-reference/data-types/lowcardinality.md), [Nullable](../../../sql-reference/data-types/nullable.md), [Array](../../../sql-reference/data-types/array.md) and [Tuple](../../../sql-reference/data-types/tuple.md).
We only recommend using `COLLATE` for final sorting of a small number of rows, since sorting with `COLLATE` is less efficient than normal sorting by bytes.
## Collation Examples {#collation-examples}
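The queries below read from a table named `collate_test`. A minimal sketch that could produce the first input table (the column types and engine are assumptions) is:
``` sql
CREATE TABLE collate_test (x UInt32, s String) ENGINE = Memory;
INSERT INTO collate_test VALUES (1, 'bca'), (2, 'ABC'), (3, '123a'), (4, 'abc'), (5, 'BCA');
```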
Example only with [String](../../../sql-reference/data-types/string.md) values:
Input table:
``` text
┌─x─┬─s────┐
│ 1 │ bca │
│ 2 │ ABC │
│ 3 │ 123a │
│ 4 │ abc │
│ 5 │ BCA │
└───┴──────┘
```
Query:
```sql
SELECT * FROM collate_test ORDER BY s ASC COLLATE 'en';
```
Result:
``` text
┌─x─┬─s────┐
│ 3 │ 123a │
│ 4 │ abc │
│ 2 │ ABC │
│ 1 │ bca │
│ 5 │ BCA │
└───┴──────┘
```
Example with [Nullable](../../../sql-reference/data-types/nullable.md):
Input table:
``` text
┌─x─┬─s────┐
│ 1 │ bca │
│ 2 │ ᴺᵁᴸᴸ │
│ 3 │ ABC │
│ 4 │ 123a │
│ 5 │ abc │
│ 6 │ ᴺᵁᴸᴸ │
│ 7 │ BCA │
└───┴──────┘
```
Query:
```sql
SELECT * FROM collate_test ORDER BY s ASC COLLATE 'en';
```
Result:
``` text
┌─x─┬─s────┐
│ 4 │ 123a │
│ 5 │ abc │
│ 3 │ ABC │
│ 1 │ bca │
│ 7 │ BCA │
│ 6 │ ᴺᵁᴸᴸ │
│ 2 │ ᴺᵁᴸᴸ │
└───┴──────┘
```
Example with [Array](../../../sql-reference/data-types/array.md):
Input table:
``` text
┌─x─┬─s─────────────┐
│ 1 │ ['Z'] │
│ 2 │ ['z'] │
│ 3 │ ['a'] │
│ 4 │ ['A'] │
│ 5 │ ['z','a'] │
│ 6 │ ['z','a','a'] │
│ 7 │ [''] │
└───┴───────────────┘
```
Query:
```sql
SELECT * FROM collate_test ORDER BY s ASC COLLATE 'en';
```
Result:
``` text
┌─x─┬─s─────────────┐
│ 7 │ [''] │
│ 3 │ ['a'] │
│ 4 │ ['A'] │
│ 2 │ ['z'] │
│ 5 │ ['z','a'] │
│ 6 │ ['z','a','a'] │
│ 1 │ ['Z'] │
└───┴───────────────┘
```
Example with [LowCardinality](../../../sql-reference/data-types/lowcardinality.md) string:
Input table:
```text
┌─x─┬─s───┐
│ 1 │ Z │
│ 2 │ z │
│ 3 │ a │
│ 4 │ A │
│ 5 │ za │
│ 6 │ zaa │
│ 7 │ │
└───┴─────┘
```
Query:
```sql
SELECT * FROM collate_test ORDER BY s ASC COLLATE 'en';
```
Result:
```text
┌─x─┬─s───┐
│ 7 │ │
│ 3 │ a │
│ 4 │ A │
│ 2 │ z │
│ 1 │ Z │
│ 5 │ za │
│ 6 │ zaa │
└───┴─────┘
```
Example with [Tuple](../../../sql-reference/data-types/tuple.md):
```text
┌─x─┬─s───────┐
│ 1 │ (1,'Z') │
│ 2 │ (1,'z') │
│ 3 │ (1,'a') │
│ 4 │ (2,'z') │
│ 5 │ (1,'A') │
│ 6 │ (2,'Z') │
│ 7 │ (2,'A') │
└───┴─────────┘
```
Query:
```sql
SELECT * FROM collate_test ORDER BY s ASC COLLATE 'en';
```
Result:
```text
┌─x─┬─s───────┐
│ 3 │ (1,'a') │
│ 5 │ (1,'A') │
│ 2 │ (1,'z') │
│ 1 │ (1,'Z') │
│ 7 │ (2,'A') │
│ 4 │ (2,'z') │
│ 6 │ (2,'Z') │
└───┴─────────┘
```
## Implementation Details {#implementation-details}
Less RAM is used if a small enough [LIMIT](../../../sql-reference/statements/select/limit.md) is specified in addition to `ORDER BY`. Otherwise, the amount of memory spent is proportional to the volume of data for sorting. For distributed query processing, if [GROUP BY](../../../sql-reference/statements/select/group-by.md) is omitted, sorting is partially done on remote servers, and the results are merged on the requestor server. This means that for distributed sorting, the volume of data to sort can be greater than the amount of memory on a single server.
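As an illustration of the first point, a query of the following shape (the table name is hypothetical) only has to keep the current top rows in memory rather than buffering the whole sorted result:
``` sql
SELECT SearchPhrase
FROM hits
ORDER BY SearchPhrase
LIMIT 10
```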

View File

@ -257,8 +257,8 @@ ClickHouse development often requires loading realistic datasets
sudo apt install wget xz-utils
wget https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz
wget https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz
wget https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz
wget https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz
xz -v -d hits_v1.tsv.xz
xz -v -d visits_v1.tsv.xz

View File

@ -579,7 +579,7 @@ If a function captures ownership of an object created on the heap, make the
**14.** Return values.
In most cases, just use `return`. Do not write `[return std::move(res)]{.strike}`.
In most cases, just use `return`. Do not write `return std::move(res)`.
If the function allocates an object on the heap and returns it, use `shared_ptr` or `unique_ptr`.
@ -673,7 +673,7 @@ Always use `#pragma once` instead of include guards.
**24.** Do not use `trailing return type` for functions unless it is necessary.
``` cpp
[auto f() -&gt; void;]{.strike}
auto f() -> void
```
**25.** Declaration and initialization of variables.

View File

@ -9,14 +9,14 @@ toc_title: Yandex.Metrica Data
The dataset consists of two tables containing anonymized data about the hits (`hits_v1`) and visits (`visits_v1`) of Yandex.Metrica. You can read more about Yandex.Metrica in the [ClickHouse history](../../introduction/history.md) section.
The dataset consists of two tables; either of them can be downloaded as a compressed `tsv.xz` file or as prepared partitions. In addition, an extended version of the `hits` table containing 100 million rows is available as TSV at https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_100m_obfuscated_v1.tsv.xz and as prepared partitions at https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_100m_obfuscated_v1.tar.xz.
The dataset consists of two tables; either of them can be downloaded as a compressed `tsv.xz` file or as prepared partitions. In addition, an extended version of the `hits` table containing 100 million rows is available as TSV at https://datasets.clickhouse.tech/hits/tsv/hits_100m_obfuscated_v1.tsv.xz and as prepared partitions at https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz.
## Obtaining Tables from Prepared Partitions {#obtaining-tables-from-prepared-partitions}
Download and import the hits table:
``` bash
curl -O https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_v1.tar
curl -O https://datasets.clickhouse.tech/hits/partitions/hits_v1.tar
tar xvf hits_v1.tar -C /var/lib/clickhouse # path to ClickHouse data directory
# check permissions on unpacked data, fix if required
sudo service clickhouse-server restart
@ -26,7 +26,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1"
Download and import visits:
``` bash
curl -O https://clickhouse-datasets.s3.yandex.net/visits/partitions/visits_v1.tar
curl -O https://datasets.clickhouse.tech/visits/partitions/visits_v1.tar
tar xvf visits_v1.tar -C /var/lib/clickhouse # path to ClickHouse data directory
# check permissions on unpacked data, fix if required
sudo service clickhouse-server restart
@ -38,7 +38,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1"
Download and import hits from a compressed TSV file:
``` bash
curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
# now create table
clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets"
clickhouse-client --query "CREATE TABLE datasets.hits_v1 ( WatchID UInt64, JavaEnable UInt8, Title String, GoodEvent Int16, EventTime DateTime, EventDate Date, CounterID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RegionID UInt32, UserID UInt64, CounterClass Int8, OS UInt8, UserAgent UInt8, URL String, Referer String, URLDomain String, RefererDomain String, Refresh UInt8, IsRobot UInt8, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), ResolutionWidth UInt16, ResolutionHeight UInt16, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, FlashMinor2 String, NetMajor UInt8, NetMinor UInt8, UserAgentMajor UInt16, UserAgentMinor FixedString(2), CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, MobilePhone UInt8, MobilePhoneModel String, Params String, IPNetworkID UInt32, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, IsArtifical UInt8, WindowClientWidth UInt16, WindowClientHeight UInt16, ClientTimeZone Int16, ClientEventTime DateTime, SilverlightVersion1 UInt8, SilverlightVersion2 UInt8, SilverlightVersion3 UInt32, SilverlightVersion4 UInt16, PageCharset String, CodeVersion UInt32, IsLink UInt8, IsDownload UInt8, IsNotBounce UInt8, FUniqID UInt64, HID UInt32, IsOldCounter UInt8, IsEvent UInt8, IsParameter UInt8, DontCountHits UInt8, WithHash UInt8, HitColor FixedString(1), UTCEventTime DateTime, Age UInt8, Sex UInt8, Income UInt8, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), RemoteIP UInt32, RemoteIP6 FixedString(16), WindowName Int32, OpenerName Int32, HistoryLength Int16, BrowserLanguage FixedString(2), BrowserCountry FixedString(2), SocialNetwork String, SocialAction String, HTTPError UInt16, SendTiming Int32, DNSTiming Int32, ConnectTiming Int32, ResponseStartTiming Int32, ResponseEndTiming Int32, FetchTiming Int32, RedirectTiming Int32, DOMInteractiveTiming Int32, DOMContentLoadedTiming Int32, DOMCompleteTiming Int32, LoadEventStartTiming Int32, LoadEventEndTiming Int32, NSToDOMContentLoadedTiming Int32, FirstPaintTiming Int32, RedirectCount Int8, SocialSourceNetworkID UInt8, SocialSourcePage String, ParamPrice Int64, ParamOrderID String, ParamCurrency FixedString(3), ParamCurrencyID UInt16, GoalsReached Array(UInt32), OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, RefererHash UInt64, URLHash UInt64, CLID UInt32, YCLID UInt64, ShareService String, ShareURL String, ShareTitle String, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), IslandID FixedString(16), RequestNum UInt32, RequestTry UInt8) ENGINE = MergeTree() PARTITION BY toYYYYMM(EventDate) ORDER BY (CounterID, EventDate, intHash32(UserID)) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192"
@ -52,7 +52,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1"
Download and import visits from a compressed TSV file:
``` bash
curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
# now create table
clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets"
clickhouse-client --query "CREATE TABLE datasets.visits_v1 ( CounterID UInt32, StartDate Date, Sign Int8, IsNew UInt8, VisitID UInt64, UserID UInt64, StartTime DateTime, Duration UInt32, UTCStartTime DateTime, PageViews Int32, Hits Int32, IsBounce UInt8, Referer String, StartURL String, RefererDomain String, StartURLDomain String, EndURL String, LinkURL String, IsDownload UInt8, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, PlaceID Int32, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), IsYandex UInt8, GoalReachesDepth Int32, GoalReachesURL Int32, GoalReachesAny Int32, SocialSourceNetworkID UInt8, SocialSourcePage String, MobilePhoneModel String, ClientEventTime DateTime, RegionID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RemoteIP UInt32, RemoteIP6 FixedString(16), IPNetworkID UInt32, SilverlightVersion3 UInt32, CodeVersion UInt32, ResolutionWidth UInt16, ResolutionHeight UInt16, UserAgentMajor UInt16, UserAgentMinor UInt16, WindowClientWidth UInt16, WindowClientHeight UInt16, SilverlightVersion2 UInt8, SilverlightVersion4 UInt16, FlashVersion3 UInt16, FlashVersion4 UInt16, ClientTimeZone Int16, OS UInt8, UserAgent UInt8, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, NetMajor UInt8, NetMinor UInt8, MobilePhone UInt8, SilverlightVersion1 UInt8, Age UInt8, Sex UInt8, Income UInt8, JavaEnable UInt8, CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, BrowserLanguage UInt16, BrowserCountry UInt16, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), Params Array(String), Goals Nested(ID UInt32, Serial UInt32, EventTime DateTime, Price Int64, OrderID String, CurrencyID UInt32), WatchIDs Array(UInt64), ParamSumPrice Int64, ParamCurrency FixedString(3), ParamCurrencyID UInt16, ClickLogID UInt64, ClickEventID Int32, ClickGoodEvent Int32, ClickEventTime DateTime, ClickPriorityID Int32, ClickPhraseID Int32, ClickPageID Int32, ClickPlaceID Int32, ClickTypeID Int32, ClickResourceID Int32, ClickCost UInt32, ClickClientIP UInt32, ClickDomainID UInt32, ClickURL String, ClickAttempt UInt8, ClickOrderID UInt32, ClickBannerID UInt32, ClickMarketCategoryID UInt32, ClickMarketPP UInt32, ClickMarketCategoryName String, ClickMarketPPName String, ClickAWAPSCampaignName String, ClickPageName String, ClickTargetType UInt16, ClickTargetPhraseID UInt64, ClickContextType UInt8, ClickSelectType Int8, ClickOptions String, ClickGroupBannerID Int32, OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, FirstVisit DateTime, PredLastVisit Date, LastVisit Date, TotalVisits UInt32, TraficSource Nested(ID Int8, SearchEngineID UInt16, AdvEngineID UInt8, PlaceID UInt16, SocialSourceNetworkID UInt8, Domain String, SearchPhrase String, SocialSourcePage String), Attendance FixedString(16), CLID UInt32, YCLID UInt64, NormalizedRefererHash UInt64, SearchPhraseHash UInt64, RefererDomainHash UInt64, NormalizedStartURLHash UInt64, StartURLDomainHash UInt64, NormalizedEndURLHash UInt64, TopLevelDomain UInt64, URLScheme UInt64, OpenstatServiceNameHash UInt64, OpenstatCampaignIDHash UInt64, OpenstatAdIDHash UInt64, OpenstatSourceIDHash UInt64, UTMSourceHash UInt64, UTMMediumHash UInt64, UTMCampaignHash UInt64, UTMContentHash UInt64, UTMTermHash UInt64, FromHash UInt64, WebVisorEnabled UInt8, WebVisorActivity UInt32, 
ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), Market Nested(Type UInt8, GoalID UInt32, OrderID String, OrderPrice Int64, PP UInt32, DirectPlaceID UInt32, DirectOrderID UInt32, DirectBannerID UInt32, GoodID String, GoodName String, GoodQuantity Int32, GoodPrice Int64), IslandID FixedString(16)) ENGINE = CollapsingMergeTree(Sign) PARTITION BY toYYYYMM(StartDate) ORDER BY (CounterID, StartDate, intHash32(UserID), VisitID) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192"

View File

@ -285,7 +285,7 @@ Among other things, you can run the OPTIMIZE query on MergeTree. But it is not
## Download of Prepared Partitions {#download-of-prepared-partitions}
``` bash
$ curl -O https://clickhouse-datasets.s3.yandex.net/trips_mergetree/partitions/trips_mergetree.tar
$ curl -O https://datasets.clickhouse.tech/trips_mergetree/partitions/trips_mergetree.tar
$ tar xvf trips_mergetree.tar -C /var/lib/clickhouse # path to ClickHouse data directory
$ # check permissions of unpacked data, fix if required
$ sudo service clickhouse-server restart

View File

@ -156,7 +156,7 @@ $ for i in *.zip; do echo $i; unzip -cq $i '*.csv' | sed 's/\.00//g' | clickhous
## Download of Prepared Partitions {#download-of-prepared-partitions}
``` bash
$ curl -O https://clickhouse-datasets.s3.yandex.net/ontime/partitions/ontime.tar
$ curl -O https://datasets.clickhouse.tech/ontime/partitions/ontime.tar
$ tar xvf ontime.tar -C /var/lib/clickhouse # path to ClickHouse data directory
$ # check permissions of unpacked data, fix if required
$ sudo service clickhouse-server restart

View File

@ -87,8 +87,8 @@ Now it is time to fill our ClickHouse server with some sample data
### Download and Extract Table Data {#download-and-extract-table-data}
``` bash
curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
```
The extracted files are about 10 GB in size.

View File

@ -48,7 +48,7 @@ With this instruction, you can run a basic ClickHouse performance test
<!-- -->
wget https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_100m_obfuscated_v1.tar.xz
wget https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz
tar xvf hits_100m_obfuscated_v1.tar.xz -C .
mv hits_100m_obfuscated_v1/* .

View File

@ -26,155 +26,155 @@ The following table lists cases when query feature works in ClickHouse, but behaves not as specified in ANSI SQL
| Feature ID | Feature Name | Status | Comment |
|------------|--------------|--------|---------|
| **E011** | **Numeric data types** | **Partial**{.text-warning} | |
| E011-01 | INTEGER and SMALLINT data types | Yes{.text-success} | |
| E011-02 | REAL, DOUBLE PRECISION and FLOAT data types | Partial{.text-warning} | `FLOAT(<binary_precision>)`, `REAL` and `DOUBLE PRECISION` are not supported |
| E011-03 | DECIMAL and NUMERIC data types | Partial{.text-warning} | Only `DECIMAL(p,s)` is supported, not `NUMERIC` |
| E011-04 | Arithmetic operators | Yes{.text-success} | |
| E011-05 | Numeric comparison | Yes{.text-success} | |
| E011-06 | Implicit casting among the numeric data types | No{.text-danger} | ANSI SQL allows arbitrary implicit cast between numeric types, while ClickHouse relies on functions having multiple overloads instead of implicit cast |
| E011-01 | INTEGER and SMALLINT data types | Yes {.text-success} | |
| E011-02 | REAL, DOUBLE PRECISION and FLOAT data types | Partial {.text-warning} | `FLOAT(<binary_precision>)`, `REAL` and `DOUBLE PRECISION` are not supported |
| E011-03 | DECIMAL and NUMERIC data types | Partial {.text-warning} | Only `DECIMAL(p,s)` is supported, not `NUMERIC` |
| E011-04 | Arithmetic operators | Yes {.text-success} | |
| E011-05 | Numeric comparison | Yes {.text-success} | |
| E011-06 | Implicit casting among the numeric data types | No {.text-danger} | ANSI SQL allows arbitrary implicit cast between numeric types, while ClickHouse relies on functions having multiple overloads instead of implicit cast |
| **E021** | **Character string types** | **Partial**{.text-warning} | |
| E021-01 | CHARACTER data type | No{.text-danger} | |
| E021-02 | CHARACTER VARYING data type | No{.text-danger} | `String` behaves similarly, but without the length limit in parentheses |
| E021-03 | Character literals | Partial{.text-warning} | No automatic concatenation of consecutive literals and character set support |
| E021-04 | CHARACTER_LENGTH function | Partial{.text-warning} | No `USING` clause |
| E021-05 | OCTET_LENGTH function | No{.text-danger} | `LENGTH` behaves similarly |
| E021-06 | SUBSTRING | Partial{.text-warning} | No support for `SIMILAR` and `ESCAPE` clauses, no `SUBSTRING_REGEX` variant |
| E021-07 | Character concatenation | Partial{.text-warning} | No `COLLATE` clause |
| E021-08 | UPPER and LOWER functions | Yes{.text-success} | |
| E021-09 | TRIM function | Yes{.text-success} | |
| E021-10 | Implicit casting among the fixed-length and variable-length character string types | No{.text-danger} | ANSI SQL allows arbitrary implicit cast between string types, while ClickHouse relies on functions having multiple overloads instead of implicit cast |
| E021-11 | POSITION function | Partial{.text-warning} | No support for `IN` and `USING` clauses, no `POSITION_REGEX` variant |
| E021-12 | Character comparison | Yes{.text-success} | |
| E021-01 | CHARACTER data type | No {.text-danger} | |
| E021-02 | CHARACTER VARYING data type | No {.text-danger} | `String` behaves similarly, but without the length limit in parentheses |
| E021-03 | Character literals | Partial {.text-warning} | No automatic concatenation of consecutive literals and character set support |
| E021-04 | CHARACTER_LENGTH function | Partial {.text-warning} | No `USING` clause |
| E021-05 | OCTET_LENGTH function | No {.text-danger} | `LENGTH` behaves similarly |
| E021-06 | SUBSTRING | Partial {.text-warning} | No support for `SIMILAR` and `ESCAPE` clauses, no `SUBSTRING_REGEX` variant |
| E021-07 | Character concatenation | Partial {.text-warning} | No `COLLATE` clause |
| E021-08 | UPPER and LOWER functions | Yes {.text-success} | |
| E021-09 | TRIM function | Yes {.text-success} | |
| E021-10 | Implicit casting among the fixed-length and variable-length character string types | No {.text-danger} | ANSI SQL allows arbitrary implicit cast between string types, while ClickHouse relies on functions having multiple overloads instead of implicit cast |
| E021-11 | POSITION function | Partial {.text-warning} | No support for `IN` and `USING` clauses, no `POSITION_REGEX` variant |
| E021-12 | Character comparison | Yes {.text-success} | |
| **E031** | **Identifiers** | **Partial**{.text-warning} | |
| E031-01 | Delimited identifiers | Partial{.text-warning} | Unicode literal support is limited |
| E031-02 | Lower case identifiers | Yes{.text-success} | |
| E031-03 | Trailing underscore | Yes{.text-success} | |
| E031-01 | Delimited identifiers | Partial {.text-warning} | Unicode literal support is limited |
| E031-02 | Lower case identifiers | Yes {.text-success} | |
| E031-03 | Trailing underscore | Yes {.text-success} | |
| **E051** | **Basic query specification** | **Partial**{.text-warning} | |
| E051-01 | SELECT DISTINCT | Yes{.text-success} | |
| E051-02 | GROUP BY clause | Yes{.text-success} | |
| E051-04 | GROUP BY can contain columns not in `<select list>` | Yes{.text-success} | |
| E051-05 | Select items can be renamed | Yes{.text-success} | |
| E051-06 | HAVING clause | Yes{.text-success} | |
| E051-07 | Qualified \* in select list | Yes{.text-success} | |
| E051-08 | Correlation name in the FROM clause | Yes{.text-success} | |
| E051-09 | Rename columns in the FROM clause | No{.text-danger} | |
| E051-01 | SELECT DISTINCT | Yes {.text-success} | |
| E051-02 | GROUP BY clause | Yes {.text-success} | |
| E051-04 | GROUP BY can contain columns not in `<select list>` | Yes {.text-success} | |
| E051-05 | Select items can be renamed | Yes {.text-success} | |
| E051-06 | HAVING clause | Yes {.text-success} | |
| E051-07 | Qualified \* in select list | Yes {.text-success} | |
| E051-08 | Correlation name in the FROM clause | Yes {.text-success} | |
| E051-09 | Rename columns in the FROM clause | No {.text-danger} | |
| **E061** | **Basic predicates and search conditions** | **Partial**{.text-warning} | |
| E061-01 | Comparison predicate | Yes{.text-success} | |
| E061-02 | BETWEEN predicate | Partial{.text-warning} | No `SYMMETRIC` and `ASYMMETRIC` clause |
| E061-03 | IN predicate with list of values | Yes{.text-success} | |
| E061-04 | LIKE predicate | Yes{.text-success} | |
| E061-05 | LIKE predicate: ESCAPE clause | No{.text-danger} | |
| E061-06 | NULL predicate | Yes{.text-success} | |
| E061-07 | Quantified comparison predicate | No{.text-danger} | |
| E061-08 | EXISTS predicate | No{.text-danger} | |
| E061-09 | Subqueries in comparison predicate | Yes{.text-success} | |
| E061-11 | Subqueries in IN predicate | Yes{.text-success} | |
| E061-12 | Subqueries in quantified comparison predicate | No{.text-danger} | |
| E061-13 | Correlated subqueries | No{.text-danger} | |
| E061-14 | Search condition | Yes{.text-success} | |
| E061-01 | Comparison predicate | Yes {.text-success} | |
| E061-02 | BETWEEN predicate | Partial {.text-warning} | No `SYMMETRIC` and `ASYMMETRIC` clause |
| E061-03 | IN predicate with list of values | Yes {.text-success} | |
| E061-04 | LIKE predicate | Yes {.text-success} | |
| E061-05 | LIKE predicate: ESCAPE clause | No {.text-danger} | |
| E061-06 | NULL predicate | Yes {.text-success} | |
| E061-07 | Quantified comparison predicate | No {.text-danger} | |
| E061-08 | EXISTS predicate | No {.text-danger} | |
| E061-09 | Subqueries in comparison predicate | Yes {.text-success} | |
| E061-11 | Subqueries in IN predicate | Yes {.text-success} | |
| E061-12 | Subqueries in quantified comparison predicate | No {.text-danger} | |
| E061-13 | Correlated subqueries | No {.text-danger} | |
| E061-14 | Search condition | Yes {.text-success} | |
| **E071** | **Basic query expressions** | **Partial**{.text-warning} | |
| E071-01 | UNION DISTINCT table operator | No{.text-danger} | |
| E071-02 | UNION ALL table operator | Yes{.text-success} | |
| E071-03 | EXCEPT DISTINCT table operator | No{.text-danger} | |
| E071-05 | Columns combined via table operators need not have exactly the same data type | Yes{.text-success} | |
| E071-06 | Table operators in subqueries | Yes{.text-success} | |
| E071-01 | UNION DISTINCT table operator | No {.text-danger} | |
| E071-02 | UNION ALL table operator | Yes {.text-success} | |
| E071-03 | EXCEPT DISTINCT table operator | No {.text-danger} | |
| E071-05 | Columns combined via table operators need not have exactly the same data type | Yes {.text-success} | |
| E071-06 | Table operators in subqueries | Yes {.text-success} | |
| **E081** | **Basic privileges** | **Partial**{.text-warning} | Work in progress |
| **E091** | **Set functions** | **Yes**{.text-success} | |
| E091-01 | AVG | Yes{.text-success} | |
| E091-02 | COUNT | Yes{.text-success} | |
| E091-03 | MAX | Yes{.text-success} | |
| E091-04 | MIN | Yes{.text-success} | |
| E091-05 | SUM | Yes{.text-success} | |
| E091-06 | ALL quantifier | No{.text-danger} | |
| E091-07 | DISTINCT quantifier | Partial{.text-warning} | Not all aggregate functions supported |
| E091-01 | AVG | Yes {.text-success} | |
| E091-02 | COUNT | Yes {.text-success} | |
| E091-03 | MAX | Yes {.text-success} | |
| E091-04 | MIN | Yes {.text-success} | |
| E091-05 | SUM | Yes {.text-success} | |
| E091-06 | ALL quantifier | No {.text-danger} | |
| E091-07 | DISTINCT quantifier | Partial {.text-warning} | Not all aggregate functions supported |
| **E101** | **Basic data manipulation** | **Partial**{.text-warning} | |
| E101-01 | INSERT statement | Yes{.text-success} | Note: primary key in ClickHouse does not imply the `UNIQUE` constraint |
| E101-03 | Searched UPDATE statement | No{.text-danger} | There's an `ALTER UPDATE` statement for batch data modification |
| E101-04 | Searched DELETE statement | No{.text-danger} | There's an `ALTER DELETE` statement for batch data removal |
| E101-01 | INSERT statement | Yes {.text-success} | Note: primary key in ClickHouse does not imply the `UNIQUE` constraint |
| E101-03 | Searched UPDATE statement | No {.text-danger} | There's an `ALTER UPDATE` statement for batch data modification |
| E101-04 | Searched DELETE statement | No {.text-danger} | There's an `ALTER DELETE` statement for batch data removal |
| **E111** | **Single row SELECT statement** | **No**{.text-danger} | |
| **E121** | **Basic cursor support** | **No**{.text-danger} | |
| E121-01 | DECLARE CURSOR | No{.text-danger} | |
| E121-02 | ORDER BY columns need not be in select list | No{.text-danger} | |
| E121-03 | Value expressions in ORDER BY clause | No{.text-danger} | |
| E121-04 | OPEN statement | No{.text-danger} | |
| E121-06 | Positioned UPDATE statement | No{.text-danger} | |
| E121-07 | Positioned DELETE statement | No{.text-danger} | |
| E121-08 | CLOSE statement | No{.text-danger} | |
| E121-10 | FETCH statement: implicit NEXT | No{.text-danger} | |
| E121-17 | WITH HOLD cursors | No{.text-danger} | |
| E121-01 | DECLARE CURSOR | No {.text-danger} | |
| E121-02 | ORDER BY columns need not be in select list | No {.text-danger} | |
| E121-03 | Value expressions in ORDER BY clause | No {.text-danger} | |
| E121-04 | OPEN statement | No {.text-danger} | |
| E121-06 | Positioned UPDATE statement | No {.text-danger} | |
| E121-07 | Positioned DELETE statement | No {.text-danger} | |
| E121-08 | CLOSE statement | No {.text-danger} | |
| E121-10 | FETCH statement: implicit NEXT | No {.text-danger} | |
| E121-17 | WITH HOLD cursors | No {.text-danger} | |
| **E131** | **Null value support (nulls in lieu of values)** | **Partial**{.text-warning} | Some restrictions apply |
| **E141** | **Basic integrity constraints** | **Partial**{.text-warning} | |
| E141-01 | NOT NULL constraints | Yes{.text-success} | Note: `NOT NULL` is implied for table columns by default |
| E141-02 | UNIQUE constraint of NOT NULL columns | No{.text-danger} | |
| E141-03 | Restricciones PRIMARY KEY | No{.text-danger} | |
| E141-04 | Restricción básica FOREIGN KEY con el valor predeterminado NO ACTION para la acción de eliminación referencial y la acción de actualización referencial | No{.text-danger} | |
| E141-06 | Restricción CHECK | Sí{.text-success} | |
| E141-07 | Valores predeterminados de columna | Sí{.text-success} | |
| E141-08 | NO NULL inferido en CLAVE PRIMARIA | Sí{.text-success} | |
| E141-10 | Los nombres de una clave externa se pueden especificar en cualquier orden | No{.text-danger} | |
| E141-01 | Restricciones NOT NULL | Sí {.text-success} | Nota: `NOT NULL` está implícito para las columnas de tabla de forma predeterminada |
| E141-02 | Restricción UNIQUE de columnas NOT NULL | No {.text-danger} | |
| E141-03 | Restricciones PRIMARY KEY | No {.text-danger} | |
| E141-04 | Restricción básica FOREIGN KEY con el valor predeterminado NO ACTION para la acción de eliminación referencial y la acción de actualización referencial | No {.text-danger} | |
| E141-06 | Restricción CHECK | Sí {.text-success} | |
| E141-07 | Valores predeterminados de columna | Sí {.text-success} | |
| E141-08 | NO NULL inferido en CLAVE PRIMARIA | Sí {.text-success} | |
| E141-10 | Los nombres de una clave externa se pueden especificar en cualquier orden | No {.text-danger} | |
| **E151** | **Soporte de transacciones** | **No**{.text-danger} | |
| E151-01 | Declaración COMMIT | No{.text-danger} | |
| E151-02 | Instrucción ROLLBACK | No{.text-danger} | |
| E151-01 | Declaración COMMIT | No {.text-danger} | |
| E151-02 | Instrucción ROLLBACK | No {.text-danger} | |
| **E152** | **Instrucción SET TRANSACTION básica** | **No**{.text-danger} | |
| E152-01 | Instrucción SET TRANSACTION: cláusula ISOLATION LEVEL SERIALIZABLE | No{.text-danger} | |
| E152-02 | Instrucción SET TRANSACTION: cláusulas READ ONLY y READ WRITE | No{.text-danger} | |
| E152-01 | Instrucción SET TRANSACTION: cláusula ISOLATION LEVEL SERIALIZABLE | No {.text-danger} | |
| E152-02 | Instrucción SET TRANSACTION: cláusulas READ ONLY y READ WRITE | No {.text-danger} | |
| **E153** | **Consultas actualizables con subconsultas** | **No**{.text-danger} | |
| **E161** | **Comentarios SQL usando doble menos inicial** | **Sí**{.text-success} | |
| **E171** | **Soporte SQLSTATE** | **No**{.text-danger} | |
| **E182** | **Enlace de idioma de host** | **No**{.text-danger} | |
| **F031** | **Manipulación básica del esquema** | **Parcial**{.text-warning} | |
| F031-01 | Instrucción CREATE TABLE para crear tablas base persistentes | Parcial{.text-warning} | No `SYSTEM VERSIONING`, `ON COMMIT`, `GLOBAL`, `LOCAL`, `PRESERVE`, `DELETE`, `REF IS`, `WITH OPTIONS`, `UNDER`, `LIKE`, `PERIOD FOR` cláusulas y sin soporte para tipos de datos resueltos por el usuario |
| F031-02 | Instrucción CREATE VIEW | Parcial{.text-warning} | No `RECURSIVE`, `CHECK`, `UNDER`, `WITH OPTIONS` cláusulas y sin soporte para tipos de datos resueltos por el usuario |
| F031-03 | Declaración GRANT | Sí{.text-success} | |
| F031-04 | Sentencia ALTER TABLE: cláusula ADD COLUMN | Parcial{.text-warning} | No hay soporte para `GENERATED` cláusula y período de tiempo del sistema |
| F031-13 | Instrucción DROP TABLE: cláusula RESTRICT | No{.text-danger} | |
| F031-16 | Instrucción DROP VIEW: cláusula RESTRICT | No{.text-danger} | |
| F031-19 | Declaración REVOKE: cláusula RESTRICT | No{.text-danger} | |
| F031-01 | Instrucción CREATE TABLE para crear tablas base persistentes | Parcial {.text-warning} | No `SYSTEM VERSIONING`, `ON COMMIT`, `GLOBAL`, `LOCAL`, `PRESERVE`, `DELETE`, `REF IS`, `WITH OPTIONS`, `UNDER`, `LIKE`, `PERIOD FOR` cláusulas y sin soporte para tipos de datos resueltos por el usuario |
| F031-02 | Instrucción CREATE VIEW | Parcial {.text-warning} | No `RECURSIVE`, `CHECK`, `UNDER`, `WITH OPTIONS` cláusulas y sin soporte para tipos de datos resueltos por el usuario |
| F031-03 | Declaración GRANT | Sí {.text-success} | |
| F031-04 | Sentencia ALTER TABLE: cláusula ADD COLUMN | Parcial {.text-warning} | No hay soporte para `GENERATED` cláusula y período de tiempo del sistema |
| F031-13 | Instrucción DROP TABLE: cláusula RESTRICT | No {.text-danger} | |
| F031-16 | Instrucción DROP VIEW: cláusula RESTRICT | No {.text-danger} | |
| F031-19 | Declaración REVOKE: cláusula RESTRICT | No {.text-danger} | |
| **F041** | **Tabla unida básica** | **Parcial**{.text-warning} | |
| F041-01 | Unión interna (pero no necesariamente la palabra clave INNER) | Sí{.text-success} | |
| F041-02 | Palabra clave INTERNA | Sí{.text-success} | |
| F041-03 | LEFT OUTER JOIN | Sí{.text-success} | |
| F041-04 | RIGHT OUTER JOIN | Sí{.text-success} | |
| F041-05 | Las uniones externas se pueden anidar | Sí{.text-success} | |
| F041-07 | La tabla interna en una combinación externa izquierda o derecha también se puede usar en una combinación interna | Sí{.text-success} | |
| F041-08 | Todos los operadores de comparación son compatibles (en lugar de solo =) | No{.text-danger} | |
| F041-01 | Unión interna (pero no necesariamente la palabra clave INNER) | Sí {.text-success} | |
| F041-02 | Palabra clave INTERNA | Sí {.text-success} | |
| F041-03 | LEFT OUTER JOIN | Sí {.text-success} | |
| F041-04 | RIGHT OUTER JOIN | Sí {.text-success} | |
| F041-05 | Las uniones externas se pueden anidar | Sí {.text-success} | |
| F041-07 | La tabla interna en una combinación externa izquierda o derecha también se puede usar en una combinación interna | Sí {.text-success} | |
| F041-08 | Todos los operadores de comparación son compatibles (en lugar de solo =) | No {.text-danger} | |
| **F051** | **Fecha y hora básicas** | **Parcial**{.text-warning} | |
| F051-01 | Tipo de datos DATE (incluido el soporte del literal DATE) | Parcial{.text-warning} | No literal |
| F051-02 | Tipo de datos TIME (incluido el soporte del literal TIME) con una precisión de segundos fraccionarios de al menos 0 | No{.text-danger} | |
| F051-03 | Tipo de datos TIMESTAMP (incluido el soporte del literal TIMESTAMP) con una precisión de segundos fraccionarios de al menos 0 y 6 | No{.text-danger} | `DateTime64` tiempo proporciona una funcionalidad similar |
| F051-04 | Predicado de comparación en los tipos de datos DATE, TIME y TIMESTAMP | Parcial{.text-warning} | Sólo un tipo de datos disponible |
| F051-05 | CAST explícito entre tipos de fecha y hora y tipos de cadena de caracteres | Sí{.text-success} | |
| F051-06 | CURRENT_DATE | No{.text-danger} | `today()` es similar |
| F051-07 | LOCALTIME | No{.text-danger} | `now()` es similar |
| F051-08 | LOCALTIMESTAMP | No{.text-danger} | |
| F051-01 | Tipo de datos DATE (incluido el soporte del literal DATE) | Parcial {.text-warning} | No literal |
| F051-02 | Tipo de datos TIME (incluido el soporte del literal TIME) con una precisión de segundos fraccionarios de al menos 0 | No {.text-danger} | |
| F051-03 | Tipo de datos TIMESTAMP (incluido el soporte del literal TIMESTAMP) con una precisión de segundos fraccionarios de al menos 0 y 6 | No {.text-danger} | `DateTime64` tiempo proporciona una funcionalidad similar |
| F051-04 | Predicado de comparación en los tipos de datos DATE, TIME y TIMESTAMP | Parcial {.text-warning} | Sólo un tipo de datos disponible |
| F051-05 | CAST explícito entre tipos de fecha y hora y tipos de cadena de caracteres | Sí {.text-success} | |
| F051-06 | CURRENT_DATE | No {.text-danger} | `today()` es similar |
| F051-07 | LOCALTIME | No {.text-danger} | `now()` es similar |
| F051-08 | LOCALTIMESTAMP | No {.text-danger} | |
| **F081** | **UNIÓN y EXCEPTO en vistas** | **Parcial**{.text-warning} | |
| **F131** | **Operaciones agrupadas** | **Parcial**{.text-warning} | |
| F131-01 | Cláusulas WHERE, GROUP BY y HAVING admitidas en consultas con vistas agrupadas | Sí{.text-success} | |
| F131-02 | Múltiples tablas admitidas en consultas con vistas agrupadas | Sí{.text-success} | |
| F131-03 | Establecer funciones admitidas en consultas con vistas agrupadas | Sí{.text-success} | |
| F131-04 | Subconsultas con cláusulas GROUP BY y HAVING y vistas agrupadas | Sí{.text-success} | |
| F131-05 | SELECCIONAR una sola fila con cláusulas GROUP BY y HAVING y vistas agrupadas | No{.text-danger} | |
| F131-01 | Cláusulas WHERE, GROUP BY y HAVING admitidas en consultas con vistas agrupadas | Sí {.text-success} | |
| F131-02 | Múltiples tablas admitidas en consultas con vistas agrupadas | Sí {.text-success} | |
| F131-03 | Establecer funciones admitidas en consultas con vistas agrupadas | Sí {.text-success} | |
| F131-04 | Subconsultas con cláusulas GROUP BY y HAVING y vistas agrupadas | Sí {.text-success} | |
| F131-05 | SELECCIONAR una sola fila con cláusulas GROUP BY y HAVING y vistas agrupadas | No {.text-danger} | |
| **F181** | **Múltiples módulos de apoyo** | **No**{.text-danger} | |
| **F201** | **Función de fundición** | **Sí**{.text-success} | |
| **F221** | **Valores predeterminados explícitos** | **No**{.text-danger} | |
| **F261** | **Expresión CASE** | **Sí**{.text-success} | |
| F261-01 | Caso simple | Sí{.text-success} | |
| F261-02 | CASO buscado | Sí{.text-success} | |
| F261-03 | NULLIF | Sí{.text-success} | |
| F261-04 | COALESCE | Sí{.text-success} | |
| F261-01 | Caso simple | Sí {.text-success} | |
| F261-02 | CASO buscado | Sí {.text-success} | |
| F261-03 | NULLIF | Sí {.text-success} | |
| F261-04 | COALESCE | Sí {.text-success} | |
| **F311** | **Instrucción de definición de esquema** | **Parcial**{.text-warning} | |
| F311-01 | CREATE SCHEMA | No{.text-danger} | |
| F311-02 | CREATE TABLE para tablas base persistentes | Sí{.text-success} | |
| F311-03 | CREATE VIEW | Sí{.text-success} | |
| F311-04 | CREATE VIEW: WITH CHECK OPTION | No{.text-danger} | |
| F311-05 | Declaración GRANT | Sí{.text-success} | |
| F311-01 | CREATE SCHEMA | No {.text-danger} | |
| F311-02 | CREATE TABLE para tablas base persistentes | Sí {.text-success} | |
| F311-03 | CREATE VIEW | Sí {.text-success} | |
| F311-04 | CREATE VIEW: WITH CHECK OPTION | No {.text-danger} | |
| F311-05 | Declaración GRANT | Sí {.text-success} | |
| **F471** | **Valores escalares de la subconsulta** | **Sí**{.text-success} | |
| **F481** | **Predicado NULL expandido** | **Sí**{.text-success} | |
| **F812** | **Marcado básico** | **No**{.text-danger} | |
| **T321** | **Rutinas básicas invocadas por SQL** | **No**{.text-danger} | |
| T321-01 | Funciones definidas por el usuario sin sobrecarga | No{.text-danger} | |
| T321-02 | Procedimientos almacenados definidos por el usuario sin sobrecarga | No{.text-danger} | |
| T321-03 | Invocación de función | No{.text-danger} | |
| T321-04 | Declaración de LLAMADA | No{.text-danger} | |
| T321-05 | Declaración DEVOLUCIÓN | No{.text-danger} | |
| T321-01 | Funciones definidas por el usuario sin sobrecarga | No {.text-danger} | |
| T321-02 | Procedimientos almacenados definidos por el usuario sin sobrecarga | No {.text-danger} | |
| T321-03 | Invocación de función | No {.text-danger} | |
| T321-04 | Declaración de LLAMADA | No {.text-danger} | |
| T321-05 | Declaración DEVOLUCIÓN | No {.text-danger} | |
| **T631** | **Predicado IN con un elemento de lista** | **Sí**{.text-success} | |
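As noted for E101-03 and E101-04 above, ClickHouse has no searched UPDATE/DELETE but offers asynchronous batch mutations instead. A minimal sketch, assuming the `datasets.hits_v1` table created elsewhere in these docs (column names come from its schema; the filter values are purely illustrative):

``` bash
# rewrite empty titles in place (runs asynchronously as a mutation)
clickhouse-client --query "ALTER TABLE datasets.hits_v1 UPDATE Title = 'n/a' WHERE Title = ''"
# batch-delete rows flagged as robots
clickhouse-client --query "ALTER TABLE datasets.hits_v1 DELETE WHERE IsRobot = 1"
```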


@ -259,8 +259,8 @@ KDevelop و QTCreator دیگر از جایگزین های بسیار خوبی ا
sudo apt install wget xz-utils
wget https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz
wget https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz
wget https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz
wget https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz
xz -v -d hits_v1.tsv.xz
xz -v -d visits_v1.tsv.xz


@ -580,7 +580,7 @@ ready_any.set();
**14.** Return values.
In most cases, just use `return`. Do not write `[return std::move(res)]{.strike}`.
In most cases, just use `return`. Do not write `return std::move(res)`.
If the function allocates an object on the heap and returns it, use `shared_ptr` or `unique_ptr`.
@ -674,7 +674,7 @@ Loader() {}
**24.** Do not use `trailing return type` for functions unless necessary.
``` cpp
[auto f() -> void;]{.strike}
auto f() -> void
```
**25.** Declaration and initialization of variables.


@ -10,14 +10,14 @@ toc_title: "\u06CC\u0627\u0646\u062F\u06A9\u0633\u0627\u0637\u0644\u0627\u0639\u
The dataset consists of two tables containing anonymized data about hits (`hits_v1`) and visits (`visits_v1`) of Yandex.Metrica. You can read more about Yandex.Metrica in the [ClickHouse history](../../introduction/history.md) section.
The dataset consists of two tables, either of which can be downloaded as a compressed `tsv.xz` file or as prepared partitions. In addition, an extended version of the `hits` table containing 100 million rows is available as TSV at https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_100m_obfuscated_v1.tsv.xz and as prepared partitions at https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_100m_obfuscated_v1.tar.xz.
The dataset consists of two tables, either of which can be downloaded as a compressed `tsv.xz` file or as prepared partitions. In addition, an extended version of the `hits` table containing 100 million rows is available as TSV at https://datasets.clickhouse.tech/hits/tsv/hits_100m_obfuscated_v1.tsv.xz and as prepared partitions at https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz.
## Obtaining Tables from Prepared Partitions {#obtaining-tables-from-prepared-partitions}
Download and import the hits table:
``` bash
curl -O https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_v1.tar
curl -O https://datasets.clickhouse.tech/hits/partitions/hits_v1.tar
tar xvf hits_v1.tar -C /var/lib/clickhouse # path to ClickHouse data directory
# check permissions on unpacked data, fix if required
sudo service clickhouse-server restart
@ -27,7 +27,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1"
Download and import visits:
``` bash
curl -O https://clickhouse-datasets.s3.yandex.net/visits/partitions/visits_v1.tar
curl -O https://datasets.clickhouse.tech/visits/partitions/visits_v1.tar
tar xvf visits_v1.tar -C /var/lib/clickhouse # path to ClickHouse data directory
# check permissions on unpacked data, fix if required
sudo service clickhouse-server restart
@ -39,7 +39,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1"
Download and import hits from the compressed TSV file:
``` bash
curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
# now create table
clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets"
clickhouse-client --query "CREATE TABLE datasets.hits_v1 ( WatchID UInt64, JavaEnable UInt8, Title String, GoodEvent Int16, EventTime DateTime, EventDate Date, CounterID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RegionID UInt32, UserID UInt64, CounterClass Int8, OS UInt8, UserAgent UInt8, URL String, Referer String, URLDomain String, RefererDomain String, Refresh UInt8, IsRobot UInt8, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), ResolutionWidth UInt16, ResolutionHeight UInt16, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, FlashMinor2 String, NetMajor UInt8, NetMinor UInt8, UserAgentMajor UInt16, UserAgentMinor FixedString(2), CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, MobilePhone UInt8, MobilePhoneModel String, Params String, IPNetworkID UInt32, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, IsArtifical UInt8, WindowClientWidth UInt16, WindowClientHeight UInt16, ClientTimeZone Int16, ClientEventTime DateTime, SilverlightVersion1 UInt8, SilverlightVersion2 UInt8, SilverlightVersion3 UInt32, SilverlightVersion4 UInt16, PageCharset String, CodeVersion UInt32, IsLink UInt8, IsDownload UInt8, IsNotBounce UInt8, FUniqID UInt64, HID UInt32, IsOldCounter UInt8, IsEvent UInt8, IsParameter UInt8, DontCountHits UInt8, WithHash UInt8, HitColor FixedString(1), UTCEventTime DateTime, Age UInt8, Sex UInt8, Income UInt8, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), RemoteIP UInt32, RemoteIP6 FixedString(16), WindowName Int32, OpenerName Int32, HistoryLength Int16, BrowserLanguage FixedString(2), BrowserCountry FixedString(2), SocialNetwork String, SocialAction String, HTTPError UInt16, SendTiming Int32, DNSTiming Int32, ConnectTiming Int32, ResponseStartTiming Int32, ResponseEndTiming Int32, FetchTiming Int32, RedirectTiming Int32, DOMInteractiveTiming Int32, DOMContentLoadedTiming Int32, DOMCompleteTiming Int32, LoadEventStartTiming Int32, LoadEventEndTiming Int32, NSToDOMContentLoadedTiming Int32, FirstPaintTiming Int32, RedirectCount Int8, SocialSourceNetworkID UInt8, SocialSourcePage String, ParamPrice Int64, ParamOrderID String, ParamCurrency FixedString(3), ParamCurrencyID UInt16, GoalsReached Array(UInt32), OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, RefererHash UInt64, URLHash UInt64, CLID UInt32, YCLID UInt64, ShareService String, ShareURL String, ShareTitle String, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), IslandID FixedString(16), RequestNum UInt32, RequestTry UInt8) ENGINE = MergeTree() PARTITION BY toYYYYMM(EventDate) ORDER BY (CounterID, EventDate, intHash32(UserID)) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192"
@ -53,7 +53,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1"
Download and import visits from the compressed TSV file:
``` bash
curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
# now create table
clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets"
clickhouse-client --query "CREATE TABLE datasets.visits_v1 ( CounterID UInt32, StartDate Date, Sign Int8, IsNew UInt8, VisitID UInt64, UserID UInt64, StartTime DateTime, Duration UInt32, UTCStartTime DateTime, PageViews Int32, Hits Int32, IsBounce UInt8, Referer String, StartURL String, RefererDomain String, StartURLDomain String, EndURL String, LinkURL String, IsDownload UInt8, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, PlaceID Int32, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), IsYandex UInt8, GoalReachesDepth Int32, GoalReachesURL Int32, GoalReachesAny Int32, SocialSourceNetworkID UInt8, SocialSourcePage String, MobilePhoneModel String, ClientEventTime DateTime, RegionID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RemoteIP UInt32, RemoteIP6 FixedString(16), IPNetworkID UInt32, SilverlightVersion3 UInt32, CodeVersion UInt32, ResolutionWidth UInt16, ResolutionHeight UInt16, UserAgentMajor UInt16, UserAgentMinor UInt16, WindowClientWidth UInt16, WindowClientHeight UInt16, SilverlightVersion2 UInt8, SilverlightVersion4 UInt16, FlashVersion3 UInt16, FlashVersion4 UInt16, ClientTimeZone Int16, OS UInt8, UserAgent UInt8, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, NetMajor UInt8, NetMinor UInt8, MobilePhone UInt8, SilverlightVersion1 UInt8, Age UInt8, Sex UInt8, Income UInt8, JavaEnable UInt8, CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, BrowserLanguage UInt16, BrowserCountry UInt16, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), Params Array(String), Goals Nested(ID UInt32, Serial UInt32, EventTime DateTime, Price Int64, OrderID String, CurrencyID UInt32), WatchIDs Array(UInt64), ParamSumPrice Int64, ParamCurrency FixedString(3), ParamCurrencyID UInt16, ClickLogID UInt64, ClickEventID Int32, ClickGoodEvent Int32, ClickEventTime DateTime, ClickPriorityID Int32, ClickPhraseID Int32, ClickPageID Int32, ClickPlaceID Int32, ClickTypeID Int32, ClickResourceID Int32, ClickCost UInt32, ClickClientIP UInt32, ClickDomainID UInt32, ClickURL String, ClickAttempt UInt8, ClickOrderID UInt32, ClickBannerID UInt32, ClickMarketCategoryID UInt32, ClickMarketPP UInt32, ClickMarketCategoryName String, ClickMarketPPName String, ClickAWAPSCampaignName String, ClickPageName String, ClickTargetType UInt16, ClickTargetPhraseID UInt64, ClickContextType UInt8, ClickSelectType Int8, ClickOptions String, ClickGroupBannerID Int32, OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, FirstVisit DateTime, PredLastVisit Date, LastVisit Date, TotalVisits UInt32, TraficSource Nested(ID Int8, SearchEngineID UInt16, AdvEngineID UInt8, PlaceID UInt16, SocialSourceNetworkID UInt8, Domain String, SearchPhrase String, SocialSourcePage String), Attendance FixedString(16), CLID UInt32, YCLID UInt64, NormalizedRefererHash UInt64, SearchPhraseHash UInt64, RefererDomainHash UInt64, NormalizedStartURLHash UInt64, StartURLDomainHash UInt64, NormalizedEndURLHash UInt64, TopLevelDomain UInt64, URLScheme UInt64, OpenstatServiceNameHash UInt64, OpenstatCampaignIDHash UInt64, OpenstatAdIDHash UInt64, OpenstatSourceIDHash UInt64, UTMSourceHash UInt64, UTMMediumHash UInt64, UTMCampaignHash UInt64, UTMContentHash UInt64, UTMTermHash UInt64, FromHash UInt64, WebVisorEnabled UInt8, WebVisorActivity UInt32, 
ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), Market Nested(Type UInt8, GoalID UInt32, OrderID String, OrderPrice Int64, PP UInt32, DirectPlaceID UInt32, DirectOrderID UInt32, DirectBannerID UInt32, GoodID String, GoodName String, GoodQuantity Int32, GoodPrice Int64), IslandID FixedString(16)) ENGINE = CollapsingMergeTree(Sign) PARTITION BY toYYYYMM(StartDate) ORDER BY (CounterID, StartDate, intHash32(UserID), VisitID) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192"


@ -286,7 +286,7 @@ SELECT formatReadableSize(sum(bytes)) FROM system.parts WHERE table = 'trips_mer
## Download of Prepared Partitions {#download-of-prepared-partitions}
``` bash
$ curl -O https://clickhouse-datasets.s3.yandex.net/trips_mergetree/partitions/trips_mergetree.tar
$ curl -O https://datasets.clickhouse.tech/trips_mergetree/partitions/trips_mergetree.tar
$ tar xvf trips_mergetree.tar -C /var/lib/clickhouse # path to ClickHouse data directory
$ # check permissions of unpacked data, fix if required
$ sudo service clickhouse-server restart


@ -156,7 +156,7 @@ $ for i in *.zip; do echo $i; unzip -cq $i '*.csv' | sed 's/\.00//g' | clickhous
## Download of Prepared Partitions {#download-of-prepared-partitions}
``` bash
$ curl -O https://clickhouse-datasets.s3.yandex.net/ontime/partitions/ontime.tar
$ curl -O https://datasets.clickhouse.tech/ontime/partitions/ontime.tar
$ tar xvf ontime.tar -C /var/lib/clickhouse # path to ClickHouse data directory
$ # check permissions of unpacked data, fix if required
$ sudo service clickhouse-server restart


@ -87,8 +87,8 @@ clickhouse-client --query='INSERT INTO table FORMAT TabSeparated' < data.tsv
### Download and Extract Table Data {#download-and-extract-table-data}
``` bash
curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
```
The extracted files are about 10 GB in size.
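Once extracted, the files can be streamed back into ClickHouse over stdin. A minimal sketch, assuming a target table such as `datasets.hits_v1` has already been created with the `CREATE TABLE` statements shown in the snippets above (the `--max_insert_block_size` value is only an example):

``` bash
# load the uncompressed TSV into the previously created table
clickhouse-client --query "INSERT INTO datasets.hits_v1 FORMAT TSV" --max_insert_block_size=100000 < hits_v1.tsv
```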


@ -48,7 +48,7 @@ toc_title: "\u0633\u062E\u062A \u0627\u0641\u0632\u0627\u0631 \u062A\u0633\u062A
<!-- -->
wget https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_100m_obfuscated_v1.tar.xz
wget https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz
tar xvf hits_100m_obfuscated_v1.tar.xz -C .
mv hits_100m_obfuscated_v1/* .


@ -26,155 +26,155 @@ toc_title: "\u0633\u0627\u0632\u06AF\u0627\u0631\u06CC \u0627\u0646\u0633\u06CC"
| Feature ID | Feature Name | Status | Comment |
|------------|--------------|--------|---------|
| **E011** | **Numeric data types** | **Partial**{.text-warning} | |
| E011-01 | INTEGER and SMALLINT data types | Yes {.text-success} | |
| E011-02 | REAL, DOUBLE PRECISION and FLOAT data types | Partial {.text-warning} | `FLOAT(<binary_precision>)`, `REAL` and `DOUBLE PRECISION` are not supported |
| E011-03 | DECIMAL and NUMERIC data types | Partial {.text-warning} | Only `DECIMAL(p,s)` is supported, not `NUMERIC` |
| E011-04 | Arithmetic operators | Yes {.text-success} | |
| E011-05 | Numeric comparison | Yes {.text-success} | |
| E011-06 | Implicit casting among the numeric data types | No {.text-danger} | ANSI SQL allows arbitrary implicit cast between numeric types, while ClickHouse relies on functions having multiple overloads instead of implicit cast |
| **E021** | **Character string types** | **Partial**{.text-warning} | |
| E021-01 | CHARACTER data type | No {.text-danger} | |
| E021-02 | CHARACTER VARYING data type | No {.text-danger} | `String` behaves similarly, but without length limit in parentheses |
| E021-03 | Character literals | Partial {.text-warning} | No automatic concatenation of consecutive literals and character set support |
| E021-04 | CHARACTER_LENGTH function | Partial {.text-warning} | No `USING` clause |
| E021-05 | OCTET_LENGTH function | No {.text-danger} | `LENGTH` behaves similarly |
| E021-06 | SUBSTRING | Partial {.text-warning} | No support for `SIMILAR` and `ESCAPE` clauses, no `SUBSTRING_REGEX` variant |
| E021-07 | Character concatenation | Partial {.text-warning} | No `COLLATE` clause |
| E021-08 | UPPER and LOWER functions | Yes {.text-success} | |
| E021-09 | TRIM function | Yes {.text-success} | |
| E021-10 | Implicit casting among the fixed-length and variable-length character string types | No {.text-danger} | ANSI SQL allows arbitrary implicit cast between string types, while ClickHouse relies on functions having multiple overloads instead of implicit cast |
| E021-11 | POSITION function | Partial {.text-warning} | No support for `IN` and `USING` clauses, no `POSITION_REGEX` variant |
| E021-12 | Character comparison | Yes {.text-success} | |
| **E031** | **Identifiers** | **Partial**{.text-warning} | |
| E031-01 | Delimited identifiers | Partial {.text-warning} | Unicode literal support is limited |
| E031-02 | Lower case identifiers | Yes {.text-success} | |
| E031-03 | Trailing underscore | Yes {.text-success} | |
| **E051** | **Basic query specification** | **Partial**{.text-warning} | |
| E051-01 | SELECT DISTINCT | Yes {.text-success} | |
| E051-02 | GROUP BY clause | Yes {.text-success} | |
| E051-04 | GROUP BY can contain columns not in `<select list>` | Yes {.text-success} | |
| E051-05 | Select items can be renamed | Yes {.text-success} | |
| E051-06 | HAVING clause | Yes {.text-success} | |
| E051-07 | Qualified \* in select list | Yes {.text-success} | |
| E051-08 | Correlation name in the FROM clause | Yes {.text-success} | |
| E051-09 | Rename columns in the FROM clause | No {.text-danger} | |
| **E061** | **Basic predicates and search conditions** | **Partial**{.text-warning} | |
| E061-01 | Comparison predicate | Yes {.text-success} | |
| E061-02 | BETWEEN predicate | Partial {.text-warning} | No `SYMMETRIC` and `ASYMMETRIC` clause |
| E061-03 | IN predicate with list of values | Yes {.text-success} | |
| E061-04 | LIKE predicate | Yes {.text-success} | |
| E061-05 | LIKE predicate: ESCAPE clause | No {.text-danger} | |
| E061-06 | NULL predicate | Yes {.text-success} | |
| E061-07 | Quantified comparison predicate | No {.text-danger} | |
| E061-08 | EXISTS predicate | No {.text-danger} | |
| E061-09 | Subqueries in comparison predicate | Yes {.text-success} | |
| E061-11 | Subqueries in IN predicate | Yes {.text-success} | |
| E061-12 | Subqueries in quantified comparison predicate | No {.text-danger} | |
| E061-13 | Correlated subqueries | No {.text-danger} | |
| E061-14 | Search condition | Yes {.text-success} | |
| **E071** | **Basic query expressions** | **Partial**{.text-warning} | |
| E071-01 | UNION DISTINCT table operator | No {.text-danger} | |
| E071-02 | UNION ALL table operator | Yes {.text-success} | |
| E071-03 | EXCEPT DISTINCT table operator | No {.text-danger} | |
| E071-05 | Columns combined via table operators need not have exactly the same data type | Yes {.text-success} | |
| E071-06 | Table operators in subqueries | Yes {.text-success} | |
| **E081** | **Basic privileges** | **Partial**{.text-warning} | Work in progress |
| **E091** | **Set functions** | **Yes**{.text-success} | |
| E091-01 | AVG | Yes {.text-success} | |
| E091-02 | COUNT | Yes {.text-success} | |
| E091-03 | MAX | Yes {.text-success} | |
| E091-04 | MIN | Yes {.text-success} | |
| E091-05 | SUM | Yes {.text-success} | |
| E091-06 | ALL quantifier | No {.text-danger} | |
| E091-07 | DISTINCT quantifier | Partial {.text-warning} | Not all aggregate functions supported |
| **E101** | **Basic data manipulation** | **Partial**{.text-warning} | |
| E101-01 | INSERT statement | Yes {.text-success} | Note: the primary key in ClickHouse does not imply a `UNIQUE` constraint |
| E101-03 | Searched UPDATE statement | No {.text-danger} | There is an `ALTER UPDATE` statement for batch data modification |
| E101-04 | Searched DELETE statement | No {.text-danger} | There is an `ALTER DELETE` statement for batch data removal |
| **E111** | **Single row SELECT statement** | **No**{.text-danger} | |
| **E121** | **Basic cursor support** | **No**{.text-danger} | |
| E121-01 | DECLARE CURSOR | No {.text-danger} | |
| E121-02 | ORDER BY columns need not be in select list | No {.text-danger} | |
| E121-03 | Value expressions in ORDER BY clause | No {.text-danger} | |
| E121-04 | OPEN statement | No {.text-danger} | |
| E121-06 | Positioned UPDATE statement | No {.text-danger} | |
| E121-07 | Positioned DELETE statement | No {.text-danger} | |
| E121-08 | CLOSE statement | No {.text-danger} | |
| E121-10 | FETCH statement: implicit NEXT | No {.text-danger} | |
| E121-17 | WITH HOLD cursors | No {.text-danger} | |
| **E131** | **Null value support (nulls in lieu of values)** | **Partial**{.text-warning} | Some restrictions apply |
| **E141** | **Basic integrity constraints** | **Partial**{.text-warning} | |
| E141-01 | NOT NULL constraints | Yes {.text-success} | Note: `NOT NULL` is implied for table columns by default |
| E141-02 | UNIQUE constraint of NOT NULL columns | No {.text-danger} | |
| E141-03 | PRIMARY KEY constraints | No {.text-danger} | |
| E141-04 | Basic FOREIGN KEY constraint with the NO ACTION default for both referential delete action and referential update action | No {.text-danger} | |
| E141-06 | CHECK constraint | Yes {.text-success} | |
| E141-07 | Column defaults | Yes {.text-success} | |
| E141-08 | NOT NULL inferred on PRIMARY KEY | Yes {.text-success} | |
| E141-10 | Names in a foreign key can be specified in any order | No {.text-danger} | |
| **E151** | **Transaction support** | **No**{.text-danger} | |
| E151-01 | COMMIT statement | No {.text-danger} | |
| E151-02 | ROLLBACK statement | No {.text-danger} | |
| **E152** | **Basic SET TRANSACTION statement** | **No**{.text-danger} | |
| E152-01 | SET TRANSACTION statement: ISOLATION LEVEL SERIALIZABLE clause | No {.text-danger} | |
| E152-02 | SET TRANSACTION statement: READ ONLY and READ WRITE clauses | No {.text-danger} | |
| **E153** | **Updatable queries with subqueries** | **No**{.text-danger} | |
| **E161** | **SQL comments using leading double minus** | **Yes**{.text-success} | |
| **E171** | **SQLSTATE support** | **No**{.text-danger} | |
| **E182** | **Host language binding** | **No**{.text-danger} | |
| **F031** | **Basic schema manipulation** | **Partial**{.text-warning} | |
| F031-01 | CREATE TABLE statement to create persistent base tables | Partial {.text-warning} | No `SYSTEM VERSIONING`, `ON COMMIT`, `GLOBAL`, `LOCAL`, `PRESERVE`, `DELETE`, `REF IS`, `WITH OPTIONS`, `UNDER`, `LIKE`, `PERIOD FOR` clauses and no support for user-resolved data types |
| F031-02 | CREATE VIEW statement | Partial {.text-warning} | No `RECURSIVE`, `CHECK`, `UNDER`, `WITH OPTIONS` clauses and no support for user-resolved data types |
| F031-03 | GRANT statement | Yes {.text-success} | |
| F031-04 | ALTER TABLE statement: ADD COLUMN clause | Partial {.text-warning} | No support for the `GENERATED` clause and system time period |
| F031-13 | DROP TABLE statement: RESTRICT clause | No {.text-danger} | |
| F031-16 | DROP VIEW statement: RESTRICT clause | No {.text-danger} | |
| F031-19 | REVOKE statement: RESTRICT clause | No {.text-danger} | |
| **F041** | **Basic joined table** | **Partial**{.text-warning} | |
| F041-01 | Inner join (but not necessarily the INNER keyword) | Yes {.text-success} | |
| F041-02 | INNER keyword | Yes {.text-success} | |
| F041-03 | LEFT OUTER JOIN | Yes {.text-success} | |
| F041-04 | RIGHT OUTER JOIN | Yes {.text-success} | |
| F041-05 | Outer joins can be nested | Yes {.text-success} | |
| F041-07 | The inner table in a left or right outer join can also be used in an inner join | Yes {.text-success} | |
| F041-08 | All comparison operators are supported (rather than just =) | No {.text-danger} | |
| **F051** | **Basic date and time** | **Partial**{.text-warning} | |
| F051-01 | DATE data type (including support of DATE literal) | Partial {.text-warning} | No literal |
| F051-02 | TIME data type (including support of TIME literal) with fractional seconds precision of at least 0 | No {.text-danger} | |
| F051-03 | TIMESTAMP data type (including support of TIMESTAMP literal) with fractional seconds precision of at least 0 and 6 | No {.text-danger} | `DateTime64` provides similar functionality |
| F051-04 | Comparison predicate on DATE, TIME, and TIMESTAMP data types | Partial {.text-warning} | Only one data type available |
| F051-05 | Explicit CAST between datetime types and character string types | Yes {.text-success} | |
| F051-06 | CURRENT_DATE | No {.text-danger} | `today()` is similar |
| F051-07 | LOCALTIME | No {.text-danger} | `now()` is similar |
| F051-08 | LOCALTIMESTAMP | No {.text-danger} | |
| **F081** | **UNION and EXCEPT in views** | **Partial**{.text-warning} | |
| **F131** | **Grouped operations** | **Partial**{.text-warning} | |
| F131-01 | WHERE, GROUP BY, and HAVING clauses supported in queries with grouped views | Yes {.text-success} | |
| F131-02 | Multiple tables supported in queries with grouped views | Yes {.text-success} | |
| F131-03 | Set functions supported in queries with grouped views | Yes {.text-success} | |
| F131-04 | Subqueries with GROUP BY and HAVING clauses and grouped views | Yes {.text-success} | |
| F131-05 | Single row SELECT with GROUP BY and HAVING clauses and grouped views | No {.text-danger} | |
| **F181** | **Multiple module support** | **No**{.text-danger} | |
| **F201** | **CAST function** | **Yes**{.text-success} | |
| **F221** | **Explicit defaults** | **No**{.text-danger} | |
| **F261** | **CASE expression** | **Yes**{.text-success} | |
| F261-01 | Simple CASE | Yes {.text-success} | |
| F261-02 | Searched CASE | Yes {.text-success} | |
| F261-03 | NULLIF | Yes {.text-success} | |
| F261-04 | COALESCE | Yes {.text-success} | |
| **F311** | **Schema definition statement** | **Partial**{.text-warning} | |
| F311-01 | CREATE SCHEMA | No {.text-danger} | |
| F311-02 | CREATE TABLE for persistent base tables | Yes {.text-success} | |
| F311-03 | CREATE VIEW | Yes {.text-success} | |
| F311-04 | CREATE VIEW: WITH CHECK OPTION | No {.text-danger} | |
| F311-05 | GRANT statement | Yes {.text-success} | |
| **F471** | **Scalar subquery values** | **Yes**{.text-success} | |
| **F481** | **Expanded NULL predicate** | **Yes**{.text-success} | |
| **F812** | **Basic flagging** | **No**{.text-danger} | |
| **T321** | **Basic SQL-invoked routines** | **No**{.text-danger} | |
| T321-01 | User-defined functions with no overloading | No {.text-danger} | |
| T321-02 | User-defined stored procedures with no overloading | No {.text-danger} | |
| T321-03 | Function invocation | No {.text-danger} | |
| T321-04 | CALL statement | No {.text-danger} | |
| T321-05 | RETURN statement | No {.text-danger} | |
| **T631** | **IN predicate with one list element** | **Yes**{.text-success} | |
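The F051-06 and F051-07 rows above point to `today()` and `now()` as the closest ClickHouse equivalents of CURRENT_DATE and LOCALTIME. A quick check from the shell (the output naturally depends on the server's current time and timezone):

``` bash
clickhouse-client --query "SELECT today(), now()"
```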


@ -257,8 +257,8 @@ Le développement de ClickHouse nécessite souvent le chargement d'ensembles de
sudo apt install wget xz-utils
wget https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz
wget https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz
wget https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz
wget https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz
xz -v -d hits_v1.tsv.xz
xz -v -d visits_v1.tsv.xz


@ -579,7 +579,7 @@ Si une fonction capture la propriété d'un objet créé dans le tas, définisse
**14.** Return values.
In most cases, just use `return`. Do not write `[return std::move(res)]{.strike}`.
In most cases, just use `return`. Do not write `return std::move(res)`.
If the function allocates an object on the heap and returns it, use `shared_ptr` or `unique_ptr`.
@ -673,7 +673,7 @@ Toujours utiliser `#pragma once` au lieu d'inclure des gardes.
**24.** Do not use `trailing return type` for functions unless necessary.
``` cpp
[auto f() -> void;]{.strike}
auto f() -> void
```
**25.** Declaration and initialization of variables.


@ -9,14 +9,14 @@ toc_title: "Yandex.Metrica De Donn\xE9es"
The dataset consists of two tables containing anonymized data about hits (`hits_v1`) and visits (`visits_v1`) of Yandex.Metrica. You can read more about Yandex.Metrica in the [ClickHouse history](../../introduction/history.md) section.
The dataset consists of two tables, either of which can be downloaded as a compressed `tsv.xz` file or as prepared partitions. In addition, an extended version of the `hits` table containing 100 million rows is available as TSV at https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_100m_obfuscated_v1.tsv.xz and as prepared partitions at https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_100m_obfuscated_v1.tar.xz.
The dataset consists of two tables, either of which can be downloaded as a compressed `tsv.xz` file or as prepared partitions. In addition, an extended version of the `hits` table containing 100 million rows is available as TSV at https://datasets.clickhouse.tech/hits/tsv/hits_100m_obfuscated_v1.tsv.xz and as prepared partitions at https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz.
## Obtaining Tables from Prepared Partitions {#obtaining-tables-from-prepared-partitions}
Download and import the hits table:
``` bash
curl -O https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_v1.tar
curl -O https://datasets.clickhouse.tech/hits/partitions/hits_v1.tar
tar xvf hits_v1.tar -C /var/lib/clickhouse # path to ClickHouse data directory
# check permissions on unpacked data, fix if required
sudo service clickhouse-server restart
@ -26,7 +26,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1"
Download and import visits:
``` bash
curl -O https://clickhouse-datasets.s3.yandex.net/visits/partitions/visits_v1.tar
curl -O https://datasets.clickhouse.tech/visits/partitions/visits_v1.tar
tar xvf visits_v1.tar -C /var/lib/clickhouse # path to ClickHouse data directory
# check permissions on unpacked data, fix if required
sudo service clickhouse-server restart
@ -38,7 +38,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1"
Download and import hits from the compressed TSV file:
``` bash
curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
# now create table
clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets"
clickhouse-client --query "CREATE TABLE datasets.hits_v1 ( WatchID UInt64, JavaEnable UInt8, Title String, GoodEvent Int16, EventTime DateTime, EventDate Date, CounterID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RegionID UInt32, UserID UInt64, CounterClass Int8, OS UInt8, UserAgent UInt8, URL String, Referer String, URLDomain String, RefererDomain String, Refresh UInt8, IsRobot UInt8, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), ResolutionWidth UInt16, ResolutionHeight UInt16, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, FlashMinor2 String, NetMajor UInt8, NetMinor UInt8, UserAgentMajor UInt16, UserAgentMinor FixedString(2), CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, MobilePhone UInt8, MobilePhoneModel String, Params String, IPNetworkID UInt32, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, IsArtifical UInt8, WindowClientWidth UInt16, WindowClientHeight UInt16, ClientTimeZone Int16, ClientEventTime DateTime, SilverlightVersion1 UInt8, SilverlightVersion2 UInt8, SilverlightVersion3 UInt32, SilverlightVersion4 UInt16, PageCharset String, CodeVersion UInt32, IsLink UInt8, IsDownload UInt8, IsNotBounce UInt8, FUniqID UInt64, HID UInt32, IsOldCounter UInt8, IsEvent UInt8, IsParameter UInt8, DontCountHits UInt8, WithHash UInt8, HitColor FixedString(1), UTCEventTime DateTime, Age UInt8, Sex UInt8, Income UInt8, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), RemoteIP UInt32, RemoteIP6 FixedString(16), WindowName Int32, OpenerName Int32, HistoryLength Int16, BrowserLanguage FixedString(2), BrowserCountry FixedString(2), SocialNetwork String, SocialAction String, HTTPError UInt16, SendTiming Int32, DNSTiming Int32, ConnectTiming Int32, ResponseStartTiming Int32, ResponseEndTiming Int32, FetchTiming Int32, RedirectTiming Int32, DOMInteractiveTiming Int32, DOMContentLoadedTiming Int32, DOMCompleteTiming Int32, LoadEventStartTiming Int32, LoadEventEndTiming Int32, NSToDOMContentLoadedTiming Int32, FirstPaintTiming Int32, RedirectCount Int8, SocialSourceNetworkID UInt8, SocialSourcePage String, ParamPrice Int64, ParamOrderID String, ParamCurrency FixedString(3), ParamCurrencyID UInt16, GoalsReached Array(UInt32), OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, RefererHash UInt64, URLHash UInt64, CLID UInt32, YCLID UInt64, ShareService String, ShareURL String, ShareTitle String, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), IslandID FixedString(16), RequestNum UInt32, RequestTry UInt8) ENGINE = MergeTree() PARTITION BY toYYYYMM(EventDate) ORDER BY (CounterID, EventDate, intHash32(UserID)) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192"
@ -52,7 +52,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1"
Download and import visits from the compressed TSV file:
``` bash
curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
# now create table
clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets"
clickhouse-client --query "CREATE TABLE datasets.visits_v1 ( CounterID UInt32, StartDate Date, Sign Int8, IsNew UInt8, VisitID UInt64, UserID UInt64, StartTime DateTime, Duration UInt32, UTCStartTime DateTime, PageViews Int32, Hits Int32, IsBounce UInt8, Referer String, StartURL String, RefererDomain String, StartURLDomain String, EndURL String, LinkURL String, IsDownload UInt8, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, PlaceID Int32, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), IsYandex UInt8, GoalReachesDepth Int32, GoalReachesURL Int32, GoalReachesAny Int32, SocialSourceNetworkID UInt8, SocialSourcePage String, MobilePhoneModel String, ClientEventTime DateTime, RegionID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RemoteIP UInt32, RemoteIP6 FixedString(16), IPNetworkID UInt32, SilverlightVersion3 UInt32, CodeVersion UInt32, ResolutionWidth UInt16, ResolutionHeight UInt16, UserAgentMajor UInt16, UserAgentMinor UInt16, WindowClientWidth UInt16, WindowClientHeight UInt16, SilverlightVersion2 UInt8, SilverlightVersion4 UInt16, FlashVersion3 UInt16, FlashVersion4 UInt16, ClientTimeZone Int16, OS UInt8, UserAgent UInt8, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, NetMajor UInt8, NetMinor UInt8, MobilePhone UInt8, SilverlightVersion1 UInt8, Age UInt8, Sex UInt8, Income UInt8, JavaEnable UInt8, CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, BrowserLanguage UInt16, BrowserCountry UInt16, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), Params Array(String), Goals Nested(ID UInt32, Serial UInt32, EventTime DateTime, Price Int64, OrderID String, CurrencyID UInt32), WatchIDs Array(UInt64), ParamSumPrice Int64, ParamCurrency FixedString(3), ParamCurrencyID UInt16, ClickLogID UInt64, ClickEventID Int32, ClickGoodEvent Int32, ClickEventTime DateTime, ClickPriorityID Int32, ClickPhraseID Int32, ClickPageID Int32, ClickPlaceID Int32, ClickTypeID Int32, ClickResourceID Int32, ClickCost UInt32, ClickClientIP UInt32, ClickDomainID UInt32, ClickURL String, ClickAttempt UInt8, ClickOrderID UInt32, ClickBannerID UInt32, ClickMarketCategoryID UInt32, ClickMarketPP UInt32, ClickMarketCategoryName String, ClickMarketPPName String, ClickAWAPSCampaignName String, ClickPageName String, ClickTargetType UInt16, ClickTargetPhraseID UInt64, ClickContextType UInt8, ClickSelectType Int8, ClickOptions String, ClickGroupBannerID Int32, OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, FirstVisit DateTime, PredLastVisit Date, LastVisit Date, TotalVisits UInt32, TraficSource Nested(ID Int8, SearchEngineID UInt16, AdvEngineID UInt8, PlaceID UInt16, SocialSourceNetworkID UInt8, Domain String, SearchPhrase String, SocialSourcePage String), Attendance FixedString(16), CLID UInt32, YCLID UInt64, NormalizedRefererHash UInt64, SearchPhraseHash UInt64, RefererDomainHash UInt64, NormalizedStartURLHash UInt64, StartURLDomainHash UInt64, NormalizedEndURLHash UInt64, TopLevelDomain UInt64, URLScheme UInt64, OpenstatServiceNameHash UInt64, OpenstatCampaignIDHash UInt64, OpenstatAdIDHash UInt64, OpenstatSourceIDHash UInt64, UTMSourceHash UInt64, UTMMediumHash UInt64, UTMCampaignHash UInt64, UTMContentHash UInt64, UTMTermHash UInt64, FromHash UInt64, WebVisorEnabled UInt8, WebVisorActivity UInt32, 
ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), Market Nested(Type UInt8, GoalID UInt32, OrderID String, OrderPrice Int64, PP UInt32, DirectPlaceID UInt32, DirectOrderID UInt32, DirectBannerID UInt32, GoodID String, GoodName String, GoodQuantity Int32, GoodPrice Int64), IslandID FixedString(16)) ENGINE = CollapsingMergeTree(Sign) PARTITION BY toYYYYMM(StartDate) ORDER BY (CounterID, StartDate, intHash32(UserID), VisitID) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192"
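# A sketch of the import step that typically follows, assuming the visits_v1.tsv
# file extracted earlier in this section (the setting value shown is illustrative):
cat visits_v1.tsv | clickhouse-client --max_insert_block_size=100000 --query "INSERT INTO datasets.visits_v1 FORMAT TSV"
clickhouse-client --query "SELECT count() FROM datasets.visits_v1"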

View File

@ -285,7 +285,7 @@ Entre autres choses, vous pouvez exécuter la requête OPTIMIZE sur MergeTree. M
## Téléchargement des Partitions préparées {#download-of-prepared-partitions}
``` bash
$ curl -O https://clickhouse-datasets.s3.yandex.net/trips_mergetree/partitions/trips_mergetree.tar
$ curl -O https://datasets.clickhouse.tech/trips_mergetree/partitions/trips_mergetree.tar
$ tar xvf trips_mergetree.tar -C /var/lib/clickhouse # path to ClickHouse data directory
$ # check permissions of unpacked data, fix if required
$ sudo service clickhouse-server restart
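$ # optional sanity check after the restart, assuming the archive created the
$ # datasets.trips_mergetree table as described on this page:
$ clickhouse-client --query "SELECT count(*) FROM datasets.trips_mergetree"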

View File

@ -156,7 +156,7 @@ $ for i in *.zip; do echo $i; unzip -cq $i '*.csv' | sed 's/\.00//g' | clickhous
## Téléchargement des Partitions préparées {#download-of-prepared-partitions}
``` bash
$ curl -O https://clickhouse-datasets.s3.yandex.net/ontime/partitions/ontime.tar
$ curl -O https://datasets.clickhouse.tech/ontime/partitions/ontime.tar
$ tar xvf ontime.tar -C /var/lib/clickhouse # path to ClickHouse data directory
$ # check permissions of unpacked data, fix if required
$ sudo service clickhouse-server restart
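$ # optional check, assuming the prepared partitions create the datasets.ontime table:
$ clickhouse-client --query "SELECT count() FROM datasets.ontime"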

View File

@ -87,8 +87,8 @@ Maintenant, il est temps de remplir notre serveur ClickHouse avec quelques exemp
### Télécharger et extraire les données de la Table {#download-and-extract-table-data}
``` bash
curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
```
Les fichiers extraits ont une taille d'environ 10 Go.
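A condensed sketch of the import the tutorial performs next; `tutorial.hits_v1` is a placeholder for the table the tutorial creates in the following step:
``` bash
clickhouse-client --query "CREATE DATABASE IF NOT EXISTS tutorial"
cat hits_v1.tsv | clickhouse-client --max_insert_block_size=100000 --query "INSERT INTO tutorial.hits_v1 FORMAT TSV"
clickhouse-client --query "SELECT count() FROM tutorial.hits_v1"
```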

View File

@ -48,7 +48,7 @@ Avec cette instruction, vous pouvez exécuter le test de performance clickhouse
<!-- -->
wget https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_100m_obfuscated_v1.tar.xz
wget https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz
tar xvf hits_100m_obfuscated_v1.tar.xz -C .
mv hits_100m_obfuscated_v1/* .
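# a first smoke test once the server is started over this data, assuming the
# archive creates the hits_100m_obfuscated table in the default database:
clickhouse-client --query "SELECT count() FROM hits_100m_obfuscated"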

View File

@ -26,155 +26,155 @@ Le tableau suivant répertorie les cas où la fonctionnalité de requête foncti
| Feature ID | Nom De La Fonctionnalité | Statut | Commentaire |
|------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| **E011** | **Types de données numériques** | **Partiel**{.text-warning} | |
| E011-01 | Types de données INTEGER et SMALLINT | Oui{.text-success} | |
| E011-02 | Types de données réel, double précision et flottant types de données | Partiel{.text-warning} | `FLOAT(<binary_precision>)`, `REAL` et `DOUBLE PRECISION` ne sont pas pris en charge |
| E011-03 | Types de données décimales et numériques | Partiel{.text-warning} | Seulement `DECIMAL(p,s)` est pris en charge, pas `NUMERIC` |
| E011-04 | Opérateurs arithmétiques | Oui{.text-success} | |
| E011-05 | Comparaison numérique | Oui{.text-success} | |
| E011-06 | Casting implicite parmi les types de données numériques | Aucun{.text-danger} | ANSI SQL permet la distribution implicite arbitraire entre les types numériques, tandis que ClickHouse repose sur des fonctions ayant plusieurs surcharges au lieu de la distribution implicite |
| E011-01 | Types de données INTEGER et SMALLINT | Oui {.text-success} | |
| E011-02 | Types de données réel, double précision et flottant types de données | Partiel {.text-warning} | `FLOAT(<binary_precision>)`, `REAL` et `DOUBLE PRECISION` ne sont pas pris en charge |
| E011-03 | Types de données décimales et numériques | Partiel {.text-warning} | Seulement `DECIMAL(p,s)` est pris en charge, pas `NUMERIC` |
| E011-04 | Opérateurs arithmétiques | Oui {.text-success} | |
| E011-05 | Comparaison numérique | Oui {.text-success} | |
| E011-06 | Casting implicite parmi les types de données numériques | Aucun {.text-danger} | ANSI SQL permet la distribution implicite arbitraire entre les types numériques, tandis que ClickHouse repose sur des fonctions ayant plusieurs surcharges au lieu de la distribution implicite |
| **E021** | **Types de chaînes de caractères** | **Partiel**{.text-warning} | |
| E021-01 | Type de données CARACTÈRE | Aucun{.text-danger} | |
| E021-02 | TYPE DE DONNÉES variable de caractère | Aucun{.text-danger} | `String` se comporte de la même manière, mais sans limite de longueur entre parenthèses |
| E021-03 | Littéraux de caractères | Partiel{.text-warning} | Aucune concaténation automatique de littéraux consécutifs et prise en charge du jeu de caractères |
| E021-04 | Fonction CHARACTER_LENGTH | Partiel{.text-warning} | Aucun `USING` clause |
| E021-05 | Fonction OCTET_LENGTH | Aucun{.text-danger} | `LENGTH` se comporte de la même façon |
| E021-06 | SUBSTRING | Partiel{.text-warning} | Pas de support pour `SIMILAR` et `ESCAPE` clauses, pas de `SUBSTRING_REGEX` variante |
| E021-07 | Concaténation de caractères | Partiel{.text-warning} | Aucun `COLLATE` clause |
| E021-08 | Fonctions supérieures et inférieures | Oui{.text-success} | |
| E021-09 | La fonction TRIM | Oui{.text-success} | |
| E021-10 | Conversion implicite entre les types de chaînes de caractères de longueur fixe et de longueur variable | Aucun{.text-danger} | ANSI SQL permet la distribution implicite arbitraire entre les types de chaîne, tandis que ClickHouse repose sur des fonctions ayant plusieurs surcharges au lieu de la distribution implicite |
| E021-11 | La POSITION de la fonction | Partiel{.text-warning} | Pas de support pour `IN` et `USING` clauses, pas de `POSITION_REGEX` variante |
| E021-12 | Comparaison de caractères | Oui{.text-success} | |
| E021-01 | Type de données CARACTÈRE | Aucun {.text-danger} | |
| E021-02 | TYPE DE DONNÉES variable de caractère | Aucun {.text-danger} | `String` se comporte de la même manière, mais sans limite de longueur entre parenthèses |
| E021-03 | Littéraux de caractères | Partiel {.text-warning} | Aucune concaténation automatique de littéraux consécutifs et prise en charge du jeu de caractères |
| E021-04 | Fonction CHARACTER_LENGTH | Partiel {.text-warning} | Aucun `USING` clause |
| E021-05 | Fonction OCTET_LENGTH | Aucun {.text-danger} | `LENGTH` se comporte de la même façon |
| E021-06 | SUBSTRING | Partiel {.text-warning} | Pas de support pour `SIMILAR` et `ESCAPE` clauses, pas de `SUBSTRING_REGEX` variante |
| E021-07 | Concaténation de caractères | Partiel {.text-warning} | Aucun `COLLATE` clause |
| E021-08 | Fonctions supérieures et inférieures | Oui {.text-success} | |
| E021-09 | La fonction TRIM | Oui {.text-success} | |
| E021-10 | Conversion implicite entre les types de chaînes de caractères de longueur fixe et de longueur variable | Aucun {.text-danger} | ANSI SQL permet la distribution implicite arbitraire entre les types de chaîne, tandis que ClickHouse repose sur des fonctions ayant plusieurs surcharges au lieu de la distribution implicite |
| E021-11 | La POSITION de la fonction | Partiel {.text-warning} | Pas de support pour `IN` et `USING` clauses, pas de `POSITION_REGEX` variante |
| E021-12 | Comparaison de caractères | Oui {.text-success} | |
| **E031** | **Identificateur** | **Partiel**{.text-warning} | |
| E031-01 | Identificateurs délimités | Partiel{.text-warning} | Le support littéral Unicode est limité |
| E031-02 | Identificateurs minuscules | Oui{.text-success} | |
| E031-03 | Fuite de soulignement | Oui{.text-success} | |
| E031-01 | Identificateurs délimités | Partiel {.text-warning} | Le support littéral Unicode est limité |
| E031-02 | Identificateurs minuscules | Oui {.text-success} | |
| E031-03 | Fuite de soulignement | Oui {.text-success} | |
| **E051** | **Spécification de requête de base** | **Partiel**{.text-warning} | |
| E051-01 | SELECT DISTINCT | Oui{.text-success} | |
| E051-02 | Groupe par clause | Oui{.text-success} | |
| E051-04 | GROUP BY peut contenir des colonnes `<select list>` | Oui{.text-success} | |
| E051-05 | Les éléments sélectionnés peuvent être renommés | Oui{.text-success} | |
| E051-06 | Clause HAVING | Oui{.text-success} | |
| E051-07 | Qualifié \* dans la liste select | Oui{.text-success} | |
| E051-08 | Nom de corrélation dans la clause FROM | Oui{.text-success} | |
| E051-09 | Renommer les colonnes de la clause FROM | Aucun{.text-danger} | |
| E051-01 | SELECT DISTINCT | Oui {.text-success} | |
| E051-02 | Groupe par clause | Oui {.text-success} | |
| E051-04 | GROUP BY peut contenir des colonnes `<select list>` | Oui {.text-success} | |
| E051-05 | Les éléments sélectionnés peuvent être renommés | Oui {.text-success} | |
| E051-06 | Clause HAVING | Oui {.text-success} | |
| E051-07 | Qualifié \* dans la liste select | Oui {.text-success} | |
| E051-08 | Nom de corrélation dans la clause FROM | Oui {.text-success} | |
| E051-09 | Renommer les colonnes de la clause FROM | Aucun {.text-danger} | |
| **E061** | **Prédicats de base et conditions de recherche** | **Partiel**{.text-warning} | |
| E061-01 | Prédicat de comparaison | Oui{.text-success} | |
| E061-02 | Entre prédicat | Partiel{.text-warning} | Aucun `SYMMETRIC` et `ASYMMETRIC` clause |
| E061-03 | Dans le prédicat avec la liste des valeurs | Oui{.text-success} | |
| E061-04 | Comme prédicat | Oui{.text-success} | |
| E061-05 | Comme prédicat: clause D'échappement | Aucun{.text-danger} | |
| E061-06 | Prédicat NULL | Oui{.text-success} | |
| E061-07 | Prédicat de comparaison quantifié | Aucun{.text-danger} | |
| E061-08 | Existe prédicat | Aucun{.text-danger} | |
| E061-09 | Sous-requêtes dans le prédicat de comparaison | Oui{.text-success} | |
| E061-11 | Sous-requêtes dans dans le prédicat | Oui{.text-success} | |
| E061-12 | Sous-requêtes dans le prédicat de comparaison quantifiée | Aucun{.text-danger} | |
| E061-13 | Sous-requêtes corrélées | Aucun{.text-danger} | |
| E061-14 | Condition de recherche | Oui{.text-success} | |
| E061-01 | Prédicat de comparaison | Oui {.text-success} | |
| E061-02 | Entre prédicat | Partiel {.text-warning} | Aucun `SYMMETRIC` et `ASYMMETRIC` clause |
| E061-03 | Dans le prédicat avec la liste des valeurs | Oui {.text-success} | |
| E061-04 | Comme prédicat | Oui {.text-success} | |
| E061-05 | Comme prédicat: clause D'échappement | Aucun {.text-danger} | |
| E061-06 | Prédicat NULL | Oui {.text-success} | |
| E061-07 | Prédicat de comparaison quantifié | Aucun {.text-danger} | |
| E061-08 | Existe prédicat | Aucun {.text-danger} | |
| E061-09 | Sous-requêtes dans le prédicat de comparaison | Oui {.text-success} | |
| E061-11 | Sous-requêtes dans dans le prédicat | Oui {.text-success} | |
| E061-12 | Sous-requêtes dans le prédicat de comparaison quantifiée | Aucun {.text-danger} | |
| E061-13 | Sous-requêtes corrélées | Aucun {.text-danger} | |
| E061-14 | Condition de recherche | Oui {.text-success} | |
| **E071** | **Expressions de requête de base** | **Partiel**{.text-warning} | |
| E071-01 | Opérateur de table distinct UNION | Aucun{.text-danger} | |
| E071-02 | Opérateur de table UNION ALL | Oui{.text-success} | |
| E071-03 | Sauf opérateur de table DISTINCT | Aucun{.text-danger} | |
| E071-05 | Les colonnes combinées via les opérateurs de table n'ont pas besoin d'avoir exactement le même type de données | Oui{.text-success} | |
| E071-06 | Tableau des opérateurs dans les sous-requêtes | Oui{.text-success} | |
| E071-01 | Opérateur de table distinct UNION | Aucun {.text-danger} | |
| E071-02 | Opérateur de table UNION ALL | Oui {.text-success} | |
| E071-03 | Sauf opérateur de table DISTINCT | Aucun {.text-danger} | |
| E071-05 | Les colonnes combinées via les opérateurs de table n'ont pas besoin d'avoir exactement le même type de données | Oui {.text-success} | |
| E071-06 | Tableau des opérateurs dans les sous-requêtes | Oui {.text-success} | |
| **E081** | **Les privilèges de base** | **Partiel**{.text-warning} | Les travaux en cours |
| **E091** | **Les fonctions de jeu** | **Oui**{.text-success} | |
| E091-01 | AVG | Oui{.text-success} | |
| E091-02 | COUNT | Oui{.text-success} | |
| E091-03 | MAX | Oui{.text-success} | |
| E091-04 | MIN | Oui{.text-success} | |
| E091-05 | SUM | Oui{.text-success} | |
| E091-06 | TOUS les quantificateurs | Aucun{.text-danger} | |
| E091-07 | Quantificateur DISTINCT | Partiel{.text-warning} | Toutes les fonctions d'agrégation ne sont pas prises en charge |
| E091-01 | AVG | Oui {.text-success} | |
| E091-02 | COUNT | Oui {.text-success} | |
| E091-03 | MAX | Oui {.text-success} | |
| E091-04 | MIN | Oui {.text-success} | |
| E091-05 | SUM | Oui {.text-success} | |
| E091-06 | TOUS les quantificateurs | Aucun {.text-danger} | |
| E091-07 | Quantificateur DISTINCT | Partiel {.text-warning} | Toutes les fonctions d'agrégation ne sont pas prises en charge |
| **E101** | **Manipulation des données de base** | **Partiel**{.text-warning} | |
| E101-01 | Insérer une déclaration | Oui{.text-success} | Remarque: la clé primaire dans ClickHouse n'implique pas `UNIQUE` contrainte |
| E101-03 | Déclaration de mise à jour recherchée | Aucun{.text-danger} | Il y a un `ALTER UPDATE` déclaration pour la modification des données de lot |
| E101-04 | Requête de suppression recherchée | Aucun{.text-danger} | Il y a un `ALTER DELETE` déclaration pour la suppression de données par lots |
| E101-01 | Insérer une déclaration | Oui {.text-success} | Remarque: la clé primaire dans ClickHouse n'implique pas `UNIQUE` contrainte |
| E101-03 | Déclaration de mise à jour recherchée | Aucun {.text-danger} | Il y a un `ALTER UPDATE` déclaration pour la modification des données de lot |
| E101-04 | Requête de suppression recherchée | Aucun {.text-danger} | Il y a un `ALTER DELETE` déclaration pour la suppression de données par lots |
| **E111** | **Instruction SELECT à une ligne** | **Aucun**{.text-danger} | |
| **E121** | **Prise en charge du curseur de base** | **Aucun**{.text-danger} | |
| E121-01 | DECLARE CURSOR | Aucun{.text-danger} | |
| E121-02 | Les colonnes ORDER BY n'ont pas besoin d'être dans la liste select | Aucun{.text-danger} | |
| E121-03 | Expressions de valeur dans la clause ORDER BY | Aucun{.text-danger} | |
| E121-04 | Instruction OPEN | Aucun{.text-danger} | |
| E121-06 | Déclaration de mise à jour positionnée | Aucun{.text-danger} | |
| E121-07 | Instruction de suppression positionnée | Aucun{.text-danger} | |
| E121-08 | Déclaration de fermeture | Aucun{.text-danger} | |
| E121-10 | Instruction FETCH: implicite suivant | Aucun{.text-danger} | |
| E121-17 | Avec curseurs HOLD | Aucun{.text-danger} | |
| E121-01 | DECLARE CURSOR | Aucun {.text-danger} | |
| E121-02 | Les colonnes ORDER BY n'ont pas besoin d'être dans la liste select | Aucun {.text-danger} | |
| E121-03 | Expressions de valeur dans la clause ORDER BY | Aucun {.text-danger} | |
| E121-04 | Instruction OPEN | Aucun {.text-danger} | |
| E121-06 | Déclaration de mise à jour positionnée | Aucun {.text-danger} | |
| E121-07 | Instruction de suppression positionnée | Aucun {.text-danger} | |
| E121-08 | Déclaration de fermeture | Aucun {.text-danger} | |
| E121-10 | Instruction FETCH: implicite suivant | Aucun {.text-danger} | |
| E121-17 | Avec curseurs HOLD | Aucun {.text-danger} | |
| **E131** | **Support de valeur Null (nulls au lieu de valeurs)** | **Partiel**{.text-warning} | Certaines restrictions s'appliquent |
| **E141** | **Contraintes d'intégrité de base** | **Partiel**{.text-warning} | |
| E141-01 | Contraintes non nulles | Oui{.text-success} | Note: `NOT NULL` est implicite pour les colonnes de table par défaut |
| E141-02 | Contrainte UNIQUE de colonnes non nulles | Aucun{.text-danger} | |
| E141-03 | Contraintes de clé primaire | Aucun{.text-danger} | |
| E141-04 | Contrainte de clé étrangère de base avec la valeur par défaut NO ACTION Pour l'action de suppression référentielle et l'action de mise à jour référentielle | Aucun{.text-danger} | |
| E141-06 | Vérifier la contrainte | Oui{.text-success} | |
| E141-07 | Colonne par défaut | Oui{.text-success} | |
| E141-08 | Non NULL déduit sur la clé primaire | Oui{.text-success} | |
| E141-10 | Les noms dans une clé étrangère peut être spécifié dans n'importe quel ordre | Aucun{.text-danger} | |
| E141-01 | Contraintes non nulles | Oui {.text-success} | Note: `NOT NULL` est implicite pour les colonnes de table par défaut |
| E141-02 | Contrainte UNIQUE de colonnes non nulles | Aucun {.text-danger} | |
| E141-03 | Contraintes de clé primaire | Aucun {.text-danger} | |
| E141-04 | Contrainte de clé étrangère de base avec la valeur par défaut NO ACTION Pour l'action de suppression référentielle et l'action de mise à jour référentielle | Aucun {.text-danger} | |
| E141-06 | Vérifier la contrainte | Oui {.text-success} | |
| E141-07 | Colonne par défaut | Oui {.text-success} | |
| E141-08 | Non NULL déduit sur la clé primaire | Oui {.text-success} | |
| E141-10 | Les noms dans une clé étrangère peut être spécifié dans n'importe quel ordre | Aucun {.text-danger} | |
| **E151** | **Support de Transaction** | **Aucun**{.text-danger} | |
| E151-01 | COMMIT déclaration | Aucun{.text-danger} | |
| E151-02 | Déclaration de restauration | Aucun{.text-danger} | |
| E151-01 | COMMIT déclaration | Aucun {.text-danger} | |
| E151-02 | Déclaration de restauration | Aucun {.text-danger} | |
| **E152** | **Instruction de transaction set de base** | **Aucun**{.text-danger} | |
| E152-01 | SET TRANSACTION statement: clause sérialisable de niveau D'isolement | Aucun{.text-danger} | |
| E152-02 | SET TRANSACTION statement: clauses en lecture seule et en lecture écriture | Aucun{.text-danger} | |
| E152-01 | SET TRANSACTION statement: clause sérialisable de niveau D'isolement | Aucun {.text-danger} | |
| E152-02 | SET TRANSACTION statement: clauses en lecture seule et en lecture écriture | Aucun {.text-danger} | |
| **E153** | **Requêtes pouvant être mises à jour avec des sous requêtes** | **Aucun**{.text-danger} | |
| **E161** | **Commentaires SQL en utilisant le premier Double moins** | **Oui**{.text-success} | |
| **E171** | **Support SQLSTATE** | **Aucun**{.text-danger} | |
| **E182** | **Liaison du langage hôte** | **Aucun**{.text-danger} | |
| **F031** | **Manipulation de schéma de base** | **Partiel**{.text-warning} | |
| F031-01 | Instruction CREATE TABLE pour créer des tables de base persistantes | Partiel{.text-warning} | Aucun `SYSTEM VERSIONING`, `ON COMMIT`, `GLOBAL`, `LOCAL`, `PRESERVE`, `DELETE`, `REF IS`, `WITH OPTIONS`, `UNDER`, `LIKE`, `PERIOD FOR` clauses et aucun support pour les types de données résolus par l'utilisateur |
| F031-02 | Instruction créer une vue | Partiel{.text-warning} | Aucun `RECURSIVE`, `CHECK`, `UNDER`, `WITH OPTIONS` clauses et aucun support pour les types de données résolus par l'utilisateur |
| F031-03 | Déclaration de subvention | Oui{.text-success} | |
| F031-04 | ALTER TABLE statement: ajouter une clause de colonne | Partiel{.text-warning} | Pas de support pour `GENERATED` clause et période de temps du système |
| F031-13 | Instruction DROP TABLE: clause RESTRICT | Aucun{.text-danger} | |
| F031-16 | Instruction DROP VIEW: clause RESTRICT | Aucun{.text-danger} | |
| F031-19 | REVOKE statement: clause RESTRICT | Aucun{.text-danger} | |
| F031-01 | Instruction CREATE TABLE pour créer des tables de base persistantes | Partiel {.text-warning} | Aucun `SYSTEM VERSIONING`, `ON COMMIT`, `GLOBAL`, `LOCAL`, `PRESERVE`, `DELETE`, `REF IS`, `WITH OPTIONS`, `UNDER`, `LIKE`, `PERIOD FOR` clauses et aucun support pour les types de données résolus par l'utilisateur |
| F031-02 | Instruction créer une vue | Partiel {.text-warning} | Aucun `RECURSIVE`, `CHECK`, `UNDER`, `WITH OPTIONS` clauses et aucun support pour les types de données résolus par l'utilisateur |
| F031-03 | Déclaration de subvention | Oui {.text-success} | |
| F031-04 | ALTER TABLE statement: ajouter une clause de colonne | Partiel {.text-warning} | Pas de support pour `GENERATED` clause et période de temps du système |
| F031-13 | Instruction DROP TABLE: clause RESTRICT | Aucun {.text-danger} | |
| F031-16 | Instruction DROP VIEW: clause RESTRICT | Aucun {.text-danger} | |
| F031-19 | REVOKE statement: clause RESTRICT | Aucun {.text-danger} | |
| **F041** | **Table jointe de base** | **Partiel**{.text-warning} | |
| F041-01 | INNER join (mais pas nécessairement le mot-clé INNER) | Oui{.text-success} | |
| F041-02 | INTÉRIEURE mot-clé | Oui{.text-success} | |
| F041-03 | LEFT OUTER JOIN | Oui{.text-success} | |
| F041-04 | RIGHT OUTER JOIN | Oui{.text-success} | |
| F041-05 | Les jointures externes peuvent être imbriqués | Oui{.text-success} | |
| F041-07 | La table intérieure dans une jointure extérieure gauche ou droite peut également être utilisée dans une jointure intérieure | Oui{.text-success} | |
| F041-08 | Tous les opérateurs de comparaison sont pris en charge (plutôt que juste =) | Aucun{.text-danger} | |
| F041-01 | INNER join (mais pas nécessairement le mot-clé INNER) | Oui {.text-success} | |
| F041-02 | INTÉRIEURE mot-clé | Oui {.text-success} | |
| F041-03 | LEFT OUTER JOIN | Oui {.text-success} | |
| F041-04 | RIGHT OUTER JOIN | Oui {.text-success} | |
| F041-05 | Les jointures externes peuvent être imbriqués | Oui {.text-success} | |
| F041-07 | La table intérieure dans une jointure extérieure gauche ou droite peut également être utilisée dans une jointure intérieure | Oui {.text-success} | |
| F041-08 | Tous les opérateurs de comparaison sont pris en charge (plutôt que juste =) | Aucun {.text-danger} | |
| **F051** | **Date et heure de base** | **Partiel**{.text-warning} | |
| F051-01 | Type de données de DATE (y compris la prise en charge du littéral de DATE) | Partiel{.text-warning} | Aucun littéral |
| F051-02 | TYPE DE DONNÉES DE TEMPS (y compris la prise en charge du littéral de temps) avec une précision de secondes fractionnaires d'au moins 0 | Aucun{.text-danger} | |
| F051-03 | Type de données D'horodatage (y compris la prise en charge du littéral D'horodatage) avec une précision de secondes fractionnaires d'au moins 0 et 6 | Aucun{.text-danger} | `DateTime64` temps fournit des fonctionnalités similaires |
| F051-04 | Prédicat de comparaison sur les types de données DATE, heure et horodatage | Partiel{.text-warning} | Un seul type de données disponible |
| F051-05 | Distribution explicite entre les types datetime et les types de chaînes de caractères | Oui{.text-success} | |
| F051-06 | CURRENT_DATE | Aucun{.text-danger} | `today()` est similaire |
| F051-07 | LOCALTIME | Aucun{.text-danger} | `now()` est similaire |
| F051-08 | LOCALTIMESTAMP | Aucun{.text-danger} | |
| F051-01 | Type de données de DATE (y compris la prise en charge du littéral de DATE) | Partiel {.text-warning} | Aucun littéral |
| F051-02 | TYPE DE DONNÉES DE TEMPS (y compris la prise en charge du littéral de temps) avec une précision de secondes fractionnaires d'au moins 0 | Aucun {.text-danger} | |
| F051-03 | Type de données D'horodatage (y compris la prise en charge du littéral D'horodatage) avec une précision de secondes fractionnaires d'au moins 0 et 6 | Aucun {.text-danger} | `DateTime64` temps fournit des fonctionnalités similaires |
| F051-04 | Prédicat de comparaison sur les types de données DATE, heure et horodatage | Partiel {.text-warning} | Un seul type de données disponible |
| F051-05 | Distribution explicite entre les types datetime et les types de chaînes de caractères | Oui {.text-success} | |
| F051-06 | CURRENT_DATE | Aucun {.text-danger} | `today()` est similaire |
| F051-07 | LOCALTIME | Aucun {.text-danger} | `now()` est similaire |
| F051-08 | LOCALTIMESTAMP | Aucun {.text-danger} | |
| **F081** | **UNION et sauf dans les vues** | **Partiel**{.text-warning} | |
| **F131** | **Groupées des opérations** | **Partiel**{.text-warning} | |
| F131-01 | WHERE, GROUP BY et ayant des clauses prises en charge dans les requêtes avec des vues groupées | Oui{.text-success} | |
| F131-02 | Plusieurs tables prises en charge dans les requêtes avec des vues groupées | Oui{.text-success} | |
| F131-03 | Définir les fonctions prises en charge dans les requêtes groupées vues | Oui{.text-success} | |
| F131-04 | Sous requêtes avec des clauses GROUP BY et HAVING et des vues groupées | Oui{.text-success} | |
| F131-05 | Sélectionnez une seule ligne avec des clauses GROUP BY et HAVING et des vues groupées | Aucun{.text-danger} | |
| F131-01 | WHERE, GROUP BY et ayant des clauses prises en charge dans les requêtes avec des vues groupées | Oui {.text-success} | |
| F131-02 | Plusieurs tables prises en charge dans les requêtes avec des vues groupées | Oui {.text-success} | |
| F131-03 | Définir les fonctions prises en charge dans les requêtes groupées vues | Oui {.text-success} | |
| F131-04 | Sous requêtes avec des clauses GROUP BY et HAVING et des vues groupées | Oui {.text-success} | |
| F131-05 | Sélectionnez une seule ligne avec des clauses GROUP BY et HAVING et des vues groupées | Aucun {.text-danger} | |
| **F181** | **Support de module Multiple** | **Aucun**{.text-danger} | |
| **F201** | **Fonction de distribution** | **Oui**{.text-success} | |
| **F221** | **Valeurs par défaut explicites** | **Aucun**{.text-danger} | |
| **F261** | **Expression de cas** | **Oui**{.text-success} | |
| F261-01 | Cas Simple | Oui{.text-success} | |
| F261-02 | Cas recherché | Oui{.text-success} | |
| F261-03 | NULLIF | Oui{.text-success} | |
| F261-04 | COALESCE | Oui{.text-success} | |
| F261-01 | Cas Simple | Oui {.text-success} | |
| F261-02 | Cas recherché | Oui {.text-success} | |
| F261-03 | NULLIF | Oui {.text-success} | |
| F261-04 | COALESCE | Oui {.text-success} | |
| **F311** | **Déclaration de définition de schéma** | **Partiel**{.text-warning} | |
| F311-01 | CREATE SCHEMA | Aucun{.text-danger} | |
| F311-02 | Créer une TABLE pour les tables de base persistantes | Oui{.text-success} | |
| F311-03 | CREATE VIEW | Oui{.text-success} | |
| F311-04 | CREATE VIEW: WITH CHECK OPTION | Aucun{.text-danger} | |
| F311-05 | Déclaration de subvention | Oui{.text-success} | |
| F311-01 | CREATE SCHEMA | Aucun {.text-danger} | |
| F311-02 | Créer une TABLE pour les tables de base persistantes | Oui {.text-success} | |
| F311-03 | CREATE VIEW | Oui {.text-success} | |
| F311-04 | CREATE VIEW: WITH CHECK OPTION | Aucun {.text-danger} | |
| F311-05 | Déclaration de subvention | Oui {.text-success} | |
| **F471** | **Valeurs de sous-requête scalaire** | **Oui**{.text-success} | |
| **F481** | **Prédicat null étendu** | **Oui**{.text-success} | |
| **F812** | **Base de repérage** | **Aucun**{.text-danger} | |
| **T321** | **Routines SQL-invoked de base** | **Aucun**{.text-danger} | |
| T321-01 | Fonctions définies par l'utilisateur sans surcharge | Aucun{.text-danger} | |
| T321-02 | Procédures stockées définies par l'utilisateur sans surcharge | Aucun{.text-danger} | |
| T321-03 | L'invocation de la fonction | Aucun{.text-danger} | |
| T321-04 | L'instruction d'APPEL de | Aucun{.text-danger} | |
| T321-05 | Déclaration de retour | Aucun{.text-danger} | |
| T321-01 | Fonctions définies par l'utilisateur sans surcharge | Aucun {.text-danger} | |
| T321-02 | Procédures stockées définies par l'utilisateur sans surcharge | Aucun {.text-danger} | |
| T321-03 | L'invocation de la fonction | Aucun {.text-danger} | |
| T321-04 | L'instruction d'APPEL de | Aucun {.text-danger} | |
| T321-05 | Déclaration de retour | Aucun {.text-danger} | |
| **T631** | **Dans le prédicat avec un élément de liste** | **Oui**{.text-success} | |
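A quick way to see the behaviour noted in the F051-06/F051-07 rows above (`CURRENT_DATE` and `LOCALTIME` are not supported, while `today()` and `now()` serve the same purpose), assuming a locally running server:
``` bash
clickhouse-client --query "SELECT today(), now()"
```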

View File

@ -257,8 +257,8 @@ KDevelopとQTCreatorは、ClickHouseを開発するためのIDEの他の優れ
sudo apt install wget xz-utils
wget https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz
wget https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz
wget https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz
wget https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz
xz -v -d hits_v1.tsv.xz
xz -v -d visits_v1.tsv.xz
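# a sketch of the import that usually follows for functional tests; the test database
# and test.hits table name are assumptions (create the test tables first, as described
# further in this guide):
clickhouse-client --query "CREATE DATABASE IF NOT EXISTS test"
cat hits_v1.tsv | clickhouse-client --max_insert_block_size=100000 --query "INSERT INTO test.hits FORMAT TSV"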

View File

@ -579,7 +579,7 @@ Forkは並列化には使用されません。
**14.** 戻り値。
ほとんどの場合、 `return`. 書かない `[return std::move(res)]{.strike}`.
ほとんどの場合、 `return`. 書かない `return std::move(res)`.
関数がオブジェクトをヒープに割り当てて返す場合は、次のようにします `shared_ptr` または `unique_ptr`.
@ -673,7 +673,7 @@ Loader() {}
**24.** 使用しない `trailing return type` 必要がない限り機能のため。
``` cpp
[auto f() -&gt; void;]{.strike}
auto f() -> void
```
**25.** 変数の宣言と初期化。

View File

@ -9,14 +9,14 @@ toc_title: "Yandex.Metrica データ"
Yandex.Metricaについての詳細は [ClickHouse history](../../introduction/history.md) のセクションを参照してください。
データセットは2つのテーブルから構成されており、どちらも圧縮された `tsv.xz` ファイルまたは準備されたパーティションとしてダウンロードすることができます。
さらに、1億行を含む`hits`テーブルの拡張版が TSVとして https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_100m_obfuscated_v1.tsv.xz に、準備されたパーティションとして https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_100m_obfuscated_v1.tar.xz にあります。
さらに、1億行を含む`hits`テーブルの拡張版が TSVとして https://datasets.clickhouse.tech/hits/tsv/hits_100m_obfuscated_v1.tsv.xz に、準備されたパーティションとして https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz にあります。
## パーティション済みテーブルの取得 {#obtaining-tables-from-prepared-partitions}
hits テーブルのダウンロードとインポート:
``` bash
curl -O https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_v1.tar
curl -O https://datasets.clickhouse.tech/hits/partitions/hits_v1.tar
tar xvf hits_v1.tar -C /var/lib/clickhouse # ClickHouse のデータディレクトリへのパス
# 展開されたデータのパーミッションをチェックし、必要に応じて修正します。
sudo service clickhouse-server restart
@ -26,7 +26,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1"
visits のダウンロードとインポート:
``` bash
curl -O https://clickhouse-datasets.s3.yandex.net/visits/partitions/visits_v1.tar
curl -O https://datasets.clickhouse.tech/visits/partitions/visits_v1.tar
tar xvf visits_v1.tar -C /var/lib/clickhouse # ClickHouse のデータディレクトリへのパス
# 展開されたデータのパーミッションをチェックし、必要に応じて修正します。
sudo service clickhouse-server restart
@ -38,7 +38,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1"
圧縮TSVファイルのダウンロードと hits テーブルのインポート:
``` bash
curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
# now create table
clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets"
clickhouse-client --query "CREATE TABLE datasets.hits_v1 ( WatchID UInt64, JavaEnable UInt8, Title String, GoodEvent Int16, EventTime DateTime, EventDate Date, CounterID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RegionID UInt32, UserID UInt64, CounterClass Int8, OS UInt8, UserAgent UInt8, URL String, Referer String, URLDomain String, RefererDomain String, Refresh UInt8, IsRobot UInt8, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), ResolutionWidth UInt16, ResolutionHeight UInt16, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, FlashMinor2 String, NetMajor UInt8, NetMinor UInt8, UserAgentMajor UInt16, UserAgentMinor FixedString(2), CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, MobilePhone UInt8, MobilePhoneModel String, Params String, IPNetworkID UInt32, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, IsArtifical UInt8, WindowClientWidth UInt16, WindowClientHeight UInt16, ClientTimeZone Int16, ClientEventTime DateTime, SilverlightVersion1 UInt8, SilverlightVersion2 UInt8, SilverlightVersion3 UInt32, SilverlightVersion4 UInt16, PageCharset String, CodeVersion UInt32, IsLink UInt8, IsDownload UInt8, IsNotBounce UInt8, FUniqID UInt64, HID UInt32, IsOldCounter UInt8, IsEvent UInt8, IsParameter UInt8, DontCountHits UInt8, WithHash UInt8, HitColor FixedString(1), UTCEventTime DateTime, Age UInt8, Sex UInt8, Income UInt8, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), RemoteIP UInt32, RemoteIP6 FixedString(16), WindowName Int32, OpenerName Int32, HistoryLength Int16, BrowserLanguage FixedString(2), BrowserCountry FixedString(2), SocialNetwork String, SocialAction String, HTTPError UInt16, SendTiming Int32, DNSTiming Int32, ConnectTiming Int32, ResponseStartTiming Int32, ResponseEndTiming Int32, FetchTiming Int32, RedirectTiming Int32, DOMInteractiveTiming Int32, DOMContentLoadedTiming Int32, DOMCompleteTiming Int32, LoadEventStartTiming Int32, LoadEventEndTiming Int32, NSToDOMContentLoadedTiming Int32, FirstPaintTiming Int32, RedirectCount Int8, SocialSourceNetworkID UInt8, SocialSourcePage String, ParamPrice Int64, ParamOrderID String, ParamCurrency FixedString(3), ParamCurrencyID UInt16, GoalsReached Array(UInt32), OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, RefererHash UInt64, URLHash UInt64, CLID UInt32, YCLID UInt64, ShareService String, ShareURL String, ShareTitle String, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), IslandID FixedString(16), RequestNum UInt32, RequestTry UInt8) ENGINE = MergeTree() PARTITION BY toYYYYMM(EventDate) ORDER BY (CounterID, EventDate, intHash32(UserID)) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192"
@ -52,7 +52,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1"
圧縮TSVファイルのダウンロードと visits テーブルのインポート:
``` bash
curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
# now create table
clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets"
clickhouse-client --query "CREATE TABLE datasets.visits_v1 ( CounterID UInt32, StartDate Date, Sign Int8, IsNew UInt8, VisitID UInt64, UserID UInt64, StartTime DateTime, Duration UInt32, UTCStartTime DateTime, PageViews Int32, Hits Int32, IsBounce UInt8, Referer String, StartURL String, RefererDomain String, StartURLDomain String, EndURL String, LinkURL String, IsDownload UInt8, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, PlaceID Int32, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), IsYandex UInt8, GoalReachesDepth Int32, GoalReachesURL Int32, GoalReachesAny Int32, SocialSourceNetworkID UInt8, SocialSourcePage String, MobilePhoneModel String, ClientEventTime DateTime, RegionID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RemoteIP UInt32, RemoteIP6 FixedString(16), IPNetworkID UInt32, SilverlightVersion3 UInt32, CodeVersion UInt32, ResolutionWidth UInt16, ResolutionHeight UInt16, UserAgentMajor UInt16, UserAgentMinor UInt16, WindowClientWidth UInt16, WindowClientHeight UInt16, SilverlightVersion2 UInt8, SilverlightVersion4 UInt16, FlashVersion3 UInt16, FlashVersion4 UInt16, ClientTimeZone Int16, OS UInt8, UserAgent UInt8, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, NetMajor UInt8, NetMinor UInt8, MobilePhone UInt8, SilverlightVersion1 UInt8, Age UInt8, Sex UInt8, Income UInt8, JavaEnable UInt8, CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, BrowserLanguage UInt16, BrowserCountry UInt16, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), Params Array(String), Goals Nested(ID UInt32, Serial UInt32, EventTime DateTime, Price Int64, OrderID String, CurrencyID UInt32), WatchIDs Array(UInt64), ParamSumPrice Int64, ParamCurrency FixedString(3), ParamCurrencyID UInt16, ClickLogID UInt64, ClickEventID Int32, ClickGoodEvent Int32, ClickEventTime DateTime, ClickPriorityID Int32, ClickPhraseID Int32, ClickPageID Int32, ClickPlaceID Int32, ClickTypeID Int32, ClickResourceID Int32, ClickCost UInt32, ClickClientIP UInt32, ClickDomainID UInt32, ClickURL String, ClickAttempt UInt8, ClickOrderID UInt32, ClickBannerID UInt32, ClickMarketCategoryID UInt32, ClickMarketPP UInt32, ClickMarketCategoryName String, ClickMarketPPName String, ClickAWAPSCampaignName String, ClickPageName String, ClickTargetType UInt16, ClickTargetPhraseID UInt64, ClickContextType UInt8, ClickSelectType Int8, ClickOptions String, ClickGroupBannerID Int32, OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, FirstVisit DateTime, PredLastVisit Date, LastVisit Date, TotalVisits UInt32, TraficSource Nested(ID Int8, SearchEngineID UInt16, AdvEngineID UInt8, PlaceID UInt16, SocialSourceNetworkID UInt8, Domain String, SearchPhrase String, SocialSourcePage String), Attendance FixedString(16), CLID UInt32, YCLID UInt64, NormalizedRefererHash UInt64, SearchPhraseHash UInt64, RefererDomainHash UInt64, NormalizedStartURLHash UInt64, StartURLDomainHash UInt64, NormalizedEndURLHash UInt64, TopLevelDomain UInt64, URLScheme UInt64, OpenstatServiceNameHash UInt64, OpenstatCampaignIDHash UInt64, OpenstatAdIDHash UInt64, OpenstatSourceIDHash UInt64, UTMSourceHash UInt64, UTMMediumHash UInt64, UTMCampaignHash UInt64, UTMContentHash UInt64, UTMTermHash UInt64, FromHash UInt64, WebVisorEnabled UInt8, WebVisorActivity UInt32, 
ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), Market Nested(Type UInt8, GoalID UInt32, OrderID String, OrderPrice Int64, PP UInt32, DirectPlaceID UInt32, DirectOrderID UInt32, DirectBannerID UInt32, GoodID String, GoodName String, GoodQuantity Int32, GoodPrice Int64), IslandID FixedString(16)) ENGINE = CollapsingMergeTree(Sign) PARTITION BY toYYYYMM(StartDate) ORDER BY (CounterID, StartDate, intHash32(UserID), VisitID) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192"

View File

@ -286,7 +286,7 @@ SELECT formatReadableSize(sum(bytes)) FROM system.parts WHERE table = 'trips_mer
## パーティションされたデータのダウンロード {#download-of-prepared-partitions}
``` bash
$ curl -O https://clickhouse-datasets.s3.yandex.net/trips_mergetree/partitions/trips_mergetree.tar
$ curl -O https://datasets.clickhouse.tech/trips_mergetree/partitions/trips_mergetree.tar
$ tar xvf trips_mergetree.tar -C /var/lib/clickhouse # path to ClickHouse data directory
$ # check permissions of unpacked data, fix if required
$ sudo service clickhouse-server restart

View File

@ -154,7 +154,7 @@ $ for i in *.zip; do echo $i; unzip -cq $i '*.csv' | sed 's/\.00//g' | clickhous
## パーティション済みデータのダウンロード {#download-of-prepared-partitions}
``` bash
$ curl -O https://clickhouse-datasets.s3.yandex.net/ontime/partitions/ontime.tar
$ curl -O https://datasets.clickhouse.tech/ontime/partitions/ontime.tar
$ tar xvf ontime.tar -C /var/lib/clickhouse # path to ClickHouse data directory
$ # check permissions of unpacked data, fix if required
$ sudo service clickhouse-server restart

View File

@ -92,8 +92,8 @@ ClickHouseサーバーにいくつかのサンプルデータを入れてみま
### テーブルデータのダウンロードと展開 {#download-and-extract-table-data}
``` bash
curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
```
展開されたファイルのサイズは約10GBです。
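Before importing, a quick sanity check on the extracted files can be useful (purely illustrative):
``` bash
ls -lh hits_v1.tsv visits_v1.tsv
head -n 1 hits_v1.tsv | cut -f 1-5   # peek at the first few tab-separated columns
```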

View File

@ -48,7 +48,7 @@ toc_title: "\u30CF\u30FC\u30C9\u30A6\u30A7\u30A2\u8A66\u9A13"
<!-- -->
wget https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_100m_obfuscated_v1.tar.xz
wget https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz
tar xvf hits_100m_obfuscated_v1.tar.xz -C .
mv hits_100m_obfuscated_v1/* .
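# one way to time queries on this data once the server is running; clickhouse-benchmark
# reads queries from stdin, and -i (iterations) is assumed here to bound the run:
clickhouse-benchmark -i 10 <<< "SELECT count() FROM hits_100m_obfuscated"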

View File

@ -26,155 +26,155 @@ toc_title: "ANSI\u306E\u4E92\u63DB\u6027"
| Feature ID | 機能名 | 状態 | コメント |
|------------|---------------------------------------------------------------------------------------------------|----------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| **E011** | **数値データ型** | **部分的**{.text-warning} | |
| E011-01 | 整数およびSMALLINTデータ型 | はい。{.text-success} | |
| E011-02 | 実数、倍精度および浮動小数点データ型データ型 | 部分的{.text-warning} | `FLOAT(<binary_precision>)`, `REAL``DOUBLE PRECISION` 対応していません |
| E011-03 | DECIMALおよびNUMERICデータ型 | 部分的{.text-warning} | のみ `DECIMAL(p,s)` サポートされています。 `NUMERIC` |
| E011-04 | 算術演算子 | はい。{.text-success} | |
| E011-05 | 数値比較 | はい。{.text-success} | |
| E011-06 | 数値データ型間の暗黙的なキャスト | いいえ。{.text-danger} | ANSI SQLできる任意の暗黙的な数値型の間のキャストがClickHouseに依存しての機能を有する複数の過負荷の代わりに暗黙的なキャスト |
| E011-01 | 整数およびSMALLINTデータ型 | はい。 {.text-success} | |
| E011-02 | 実数、倍精度および浮動小数点データ型データ型 | 部分的 {.text-warning} | `FLOAT(<binary_precision>)`, `REAL``DOUBLE PRECISION` 対応していません |
| E011-03 | DECIMALおよびNUMERICデータ型 | 部分的 {.text-warning} | のみ `DECIMAL(p,s)` サポートされています。 `NUMERIC` |
| E011-04 | 算術演算子 | はい。 {.text-success} | |
| E011-05 | 数値比較 | はい。 {.text-success} | |
| E011-06 | 数値データ型間の暗黙的なキャスト | いいえ。 {.text-danger} | ANSI SQLできる任意の暗黙的な数値型の間のキャストがClickHouseに依存しての機能を有する複数の過負荷の代わりに暗黙的なキャスト |
| **E021** | **文字列タイプ** | **部分的**{.text-warning} | |
| E021-01 | 文字データ型 | いいえ。{.text-danger} | |
| E021-02 | 文字変化型データ型 | いいえ。{.text-danger} | `String` 動作同様に、長さの制限内 |
| E021-03 | 文字リテラル | 部分的{.text-warning} | 連続したリテラルと文字セットの自動連結はサポートされません |
| E021-04 | CHARACTER_LENGTH関数 | 部分的{.text-warning} | いいえ。 `USING` 句 |
| E021-05 | OCTET_LENGTH関数 | いいえ。{.text-danger} | `LENGTH` 同様に動作します |
| E021-06 | SUBSTRING | 部分的{.text-warning} | サポートなし `SIMILAR``ESCAPE` 句、ない `SUBSTRING_REGEX` バリアント |
| E021-07 | 文字の連結 | 部分的{.text-warning} | いいえ。 `COLLATE` 句 |
| E021-08 | 上部および下の機能 | はい。{.text-success} | |
| E021-09 | トリム機能 | はい。{.text-success} | |
| E021-10 | 固定長および可変長文字ストリング型間の暗黙的なキャスト | いいえ。{.text-danger} | ANSI SQLできる任意の暗黙の間のキャスト文字列の種類がClickHouseに依存しての機能を有する複数の過負荷の代わりに暗黙的なキャスト |
| E021-11 | 位置関数 | 部分的{.text-warning} | サポートなし `IN``USING` 句、ない `POSITION_REGEX` バリアント |
| E021-12 | 文字の比較 | はい。{.text-success} | |
| E021-02 | 文字変化型データ型 | いいえ。 {.text-danger} | `String` 動作同様に、長さの制限内 |
| E021-03 | 文字リテラル | 部分的 {.text-warning} | 連続したリテラルと文字セットの自動連結はサポートされません |
| E021-04 | CHARACTER_LENGTH関数 | 部分的 {.text-warning} | いいえ。 `USING` 句 |
| E021-05 | OCTET_LENGTH関数 | いいえ。 {.text-danger} | `LENGTH` 同様に動作します |
| E021-06 | SUBSTRING | 部分的 {.text-warning} | サポートなし `SIMILAR``ESCAPE` 句、ない `SUBSTRING_REGEX` バリアント |
| E021-07 | 文字の連結 | 部分的 {.text-warning} | いいえ。 `COLLATE` 句 |
| E021-08 | 上部および下の機能 | はい。 {.text-success} | |
| E021-09 | トリム機能 | はい。 {.text-success} | |
| E021-10 | 固定長および可変長文字ストリング型間の暗黙的なキャスト | いいえ。 {.text-danger} | ANSI SQLできる任意の暗黙の間のキャスト文字列の種類がClickHouseに依存しての機能を有する複数の過負荷の代わりに暗黙的なキャスト |
| E021-11 | 位置関数 | 部分的 {.text-warning} | サポートなし `IN``USING` 句、ない `POSITION_REGEX` バリアント |
| E021-12 | 文字の比較 | はい。 {.text-success} | |
| **E031** | **識別子** | **部分的**{.text-warning} | |
| E031-01 | 区切り識別子 | 部分的{.text-warning} | Unicodeリテラルの支援は限られ |
| E031-02 | 小文字の識別子 | はい。{.text-success} | |
| E031-03 | 末尾のアンダースコア | はい。{.text-success} | |
| E031-01 | 区切り識別子 | 部分的 {.text-warning} | Unicodeリテラルの支援は限られ |
| E031-02 | 小文字の識別子 | はい。 {.text-success} | |
| E031-03 | 末尾のアンダースコア | はい。 {.text-success} | |
| **E051** | **基本的なクエリ仕様** | **部分的**{.text-warning} | |
| E051-01 | SELECT DISTINCT | はい。{.text-success} | |
| E051-02 | GROUP BY句 | はい。{.text-success} | |
| E051-04 | グループによる列を含むことができない `<select list>` | はい。{.text-success} | |
| E051-05 | 選択した項目の名前を変更できます | はい。{.text-success} | |
| E051-06 | 句を持つ | はい。{.text-success} | |
| E051-07 | 選択リストの修飾\* | はい。{.text-success} | |
| E051-08 | FROM句の相関名 | はい。{.text-success} | |
| E051-09 | FROM句の列の名前を変更します | いいえ。{.text-danger} | |
| E051-01 | SELECT DISTINCT | はい。 {.text-success} | |
| E051-02 | GROUP BY句 | はい。 {.text-success} | |
| E051-04 | グループによる列を含むことができない `<select list>` | はい。 {.text-success} | |
| E051-05 | 選択した項目の名前を変更できます | はい。 {.text-success} | |
| E051-06 | 句を持つ | はい。 {.text-success} | |
| E051-07 | 選択リストの修飾\* | はい。 {.text-success} | |
| E051-08 | FROM句の相関名 | はい。 {.text-success} | |
| E051-09 | FROM句の列の名前を変更します | いいえ。 {.text-danger} | |
| **E061** | **基本的な述語と検索条件** | **部分的**{.text-warning} | |
| E061-01 | 比較述語 | はい。{.text-success} | |
| E061-02 | 述語の間 | 部分的{.text-warning} | いいえ。 `SYMMETRIC``ASYMMETRIC` 句 |
| E061-03 | 値のリストを持つ述語で | はい。{.text-success} | |
| E061-04 | 述語のように | はい。{.text-success} | |
| E061-05 | LIKE述語:エスケープ句 | いいえ。{.text-danger} | |
| E061-06 | Null述語 | はい。{.text-success} | |
| E061-07 | 定量化された比較述語 | いいえ。{.text-danger} | |
| E061-08 | 存在する述語 | いいえ。{.text-danger} | |
| E061-09 | 比較述語のサブクエリ | はい。{.text-success} | |
| E061-11 | In述語のサブクエリ | はい。{.text-success} | |
| E061-12 | 定量化された比較述語のサブクエリ | いいえ。{.text-danger} | |
| E061-13 | 相関サブクエリ | いいえ。{.text-danger} | |
| E061-14 | 検索条件 | はい。{.text-success} | |
| E061-01 | 比較述語 | はい。 {.text-success} | |
| E061-02 | 述語の間 | 部分的 {.text-warning} | いいえ。 `SYMMETRIC``ASYMMETRIC` 句 |
| E061-03 | 値のリストを持つ述語で | はい。 {.text-success} | |
| E061-04 | 述語のように | はい。 {.text-success} | |
| E061-05 | LIKE述語:エスケープ句 | いいえ。 {.text-danger} | |
| E061-06 | Null述語 | はい。 {.text-success} | |
| E061-07 | 定量化された比較述語 | いいえ。 {.text-danger} | |
| E061-08 | 存在する述語 | いいえ。 {.text-danger} | |
| E061-09 | 比較述語のサブクエリ | はい。 {.text-success} | |
| E061-11 | In述語のサブクエリ | はい。 {.text-success} | |
| E061-12 | 定量化された比較述語のサブクエリ | いいえ。 {.text-danger} | |
| E061-13 | 相関サブクエリ | いいえ。 {.text-danger} | |
| E061-14 | 検索条件 | はい。 {.text-success} | |
| **E071** | **基本的なクエリ式** | **部分的**{.text-warning} | |
| E071-01 | UNION DISTINCTテーブル演算子 | いいえ。{.text-danger} | |
| E071-02 | UNION ALLテーブル演算子 | はい。{.text-success} | |
| E071-03 | DISTINCTテーブル演算子を除く | いいえ。{.text-danger} | |
| E071-05 | 列の結合経由でテーブル事業者の必要のない全く同じデータ型 | はい。{.text-success} | |
| E071-06 | サブクエリ内のテーブル演算子 | はい。{.text-success} | |
| E071-01 | UNION DISTINCTテーブル演算子 | いいえ。 {.text-danger} | |
| E071-02 | UNION ALLテーブル演算子 | はい。 {.text-success} | |
| E071-03 | DISTINCTテーブル演算子を除く | いいえ。 {.text-danger} | |
| E071-05 | 列の結合経由でテーブル事業者の必要のない全く同じデータ型 | はい。 {.text-success} | |
| E071-06 | サブクエリ内のテーブル演算子 | はい。 {.text-success} | |
| **E081** | **基本権限** | **部分的**{.text-warning} | 進行中の作業 |
| **E091** | **関数の設定** | **はい。**{.text-success} | |
| E091-01 | AVG | はい。{.text-success} | |
| E091-02 | COUNT | はい。{.text-success} | |
| E091-03 | MAX | はい。{.text-success} | |
| E091-04 | MIN | はい。{.text-success} | |
| E091-05 | SUM | はい。{.text-success} | |
| E091-06 | すべての量指定子 | いいえ。{.text-danger} | |
| E091-07 | 異なる量指定子 | 部分的{.text-warning} | な集計機能に対応 |
| E091-01 | AVG | はい。 {.text-success} | |
| E091-02 | COUNT | はい。 {.text-success} | |
| E091-03 | MAX | はい。 {.text-success} | |
| E091-04 | MIN | はい。 {.text-success} | |
| E091-05 | SUM | はい。 {.text-success} | |
| E091-06 | すべての量指定子 | いいえ。 {.text-danger} | |
| E091-07 | 異なる量指定子 | 部分的 {.text-warning} | な集計機能に対応 |
| **E101** | **基本的なデータ操作** | **部分的**{.text-warning} | |
| E101-01 | INSERT文 | はい。{.text-success} | 注ClickHouseの主キーは、 `UNIQUE` 制約 |
| E101-03 | 検索されたUPDATE文 | いいえ。{.text-danger} | そこには `ALTER UPDATE` バッチデータ変更のための命令 |
| E101-04 | 検索されたDELETE文 | いいえ。{.text-danger} | そこには `ALTER DELETE` バッチデータ削除のための命令 |
| E101-01 | INSERT文 | はい。 {.text-success} | 注ClickHouseの主キーは、 `UNIQUE` 制約 |
| E101-03 | 検索されたUPDATE文 | いいえ。 {.text-danger} | そこには `ALTER UPDATE` バッチデータ変更のための命令 |
| E101-04 | 検索されたDELETE文 | いいえ。 {.text-danger} | そこには `ALTER DELETE` バッチデータ削除のための命令 |
| **E111** | **単一行SELECTステートメント** | **いいえ。**{.text-danger} | |
| **E121** | **基本的にカーソルを支援** | **いいえ。**{.text-danger} | |
| E121-01 | DECLARE CURSOR | いいえ。{.text-danger} | |
| E121-02 | ORDER BY列を選択リストに含める必要はありません | いいえ。{.text-danger} | |
| E121-03 | ORDER BY句の値式 | いいえ。{.text-danger} | |
| E121-04 | 開いた声明 | いいえ。{.text-danger} | |
| E121-06 | 位置付きUPDATE文 | いいえ。{.text-danger} | |
| E121-07 | 位置づけDELETEステートメント | いいえ。{.text-danger} | |
| E121-08 | 閉じる文 | いいえ。{.text-danger} | |
| E121-10 | FETCHステートメント:暗黙的なNEXT | いいえ。{.text-danger} | |
| E121-17 | ホールドカーソル付き | いいえ。{.text-danger} | |
| E121-01 | DECLARE CURSOR | いいえ。 {.text-danger} | |
| E121-02 | ORDER BY列を選択リストに含める必要はありません | いいえ。 {.text-danger} | |
| E121-03 | ORDER BY句の値式 | いいえ。 {.text-danger} | |
| E121-04 | 開いた声明 | いいえ。 {.text-danger} | |
| E121-06 | 位置付きUPDATE文 | いいえ。 {.text-danger} | |
| E121-07 | 位置づけDELETEステートメント | いいえ。 {.text-danger} | |
| E121-08 | 閉じる文 | いいえ。 {.text-danger} | |
| E121-10 | FETCHステートメント:暗黙的なNEXT | いいえ。 {.text-danger} | |
| E121-17 | ホールドカーソル付き | いいえ。 {.text-danger} | |
| **E131** | **Null値のサポート(値の代わりにnull)** | **部分的**{.text-warning} | 一部の制限が適用されます |
| **E141** | **基本的な整合性制約** | **部分的**{.text-warning} | |
| E141-01 | NULLでない制約 | はい。{.text-success} | 注: `NOT NULL` は黙示のためのテーブル列によるデフォルト |
| E141-02 | NULLでない列の一意制約 | いいえ。{.text-danger} | |
| E141-03 | 主キー制約 | いいえ。{.text-danger} | |
| E141-04 | 参照削除アクションと参照updateアクションの両方に対するNO ACTIONのデフォルトを持つ基本外部キー制約 | いいえ。{.text-danger} | |
| E141-06 | 制約のチェック | はい。{.text-success} | |
| E141-07 | 列の既定値 | はい。{.text-success} | |
| E141-08 | 主キーで推論されるNULLではありません | はい。{.text-success} | |
| E141-10 | 外部キーの名前は任意の順序で指定できます | いいえ。{.text-danger} | |
| E141-01 | NULLでない制約 | はい。 {.text-success} | 注: `NOT NULL` は黙示のためのテーブル列によるデフォルト |
| E141-02 | NULLでない列の一意制約 | いいえ。 {.text-danger} | |
| E141-03 | 主キー制約 | いいえ。 {.text-danger} | |
| E141-04 | 参照削除アクションと参照updateアクションの両方に対するNO ACTIONのデフォルトを持つ基本外部キー制約 | いいえ。 {.text-danger} | |
| E141-06 | 制約のチェック | はい。 {.text-success} | |
| E141-07 | 列の既定値 | はい。 {.text-success} | |
| E141-08 | 主キーで推論されるNULLではありません | はい。 {.text-success} | |
| E141-10 | 外部キーの名前は任意の順序で指定できます | いいえ。 {.text-danger} | |
| **E151** | **取引サポート** | **いいえ。**{.text-danger} | |
| E151-01 | COMMIT文 | いいえ。{.text-danger} | |
| E151-02 | ROLLBACKステートメント | いいえ。{.text-danger} | |
| E151-01 | COMMIT文 | いいえ。 {.text-danger} | |
| E151-02 | ROLLBACKステートメント | いいえ。 {.text-danger} | |
| **E152** | **基本セット取引明細書** | **いいえ。**{.text-danger} | |
| E152-01 | SET TRANSACTION文:分離レベルSERIALIZABLE句 | いいえ。{.text-danger} | |
| E152-02 | SET TRANSACTION文:READ ONLY句とREAD WRITE句 | いいえ。{.text-danger} | |
| E152-01 | SET TRANSACTION文:分離レベルSERIALIZABLE句 | いいえ。 {.text-danger} | |
| E152-02 | SET TRANSACTION文:READ ONLY句とREAD WRITE句 | いいえ。 {.text-danger} | |
| **E153** | **サブクエリを使用した更新可能なクエリ** | **いいえ。**{.text-danger} | |
| **E161** | **先頭のdouble minusを使用したSQLコメント** | **はい。**{.text-success} | |
| **E171** | **SQLSTATEサポート** | **いいえ。**{.text-danger} | |
| **E182** | **ホスト言語バインド** | **いいえ。**{.text-danger} | |
| **F031** | **基本的なスキーマ操作** | **部分的**{.text-warning} | |
| F031-01 | 永続ベーステーブルを作成するCREATE TABLE文 | 部分的{.text-warning} | いいえ。 `SYSTEM VERSIONING`, `ON COMMIT`, `GLOBAL`, `LOCAL`, `PRESERVE`, `DELETE`, `REF IS`, `WITH OPTIONS`, `UNDER`, `LIKE`, `PERIOD FOR` 句およびユーザー解決データ型のサポートなし |
| F031-02 | CREATE VIEW文 | 部分的{.text-warning} | いいえ。 `RECURSIVE`, `CHECK`, `UNDER`, `WITH OPTIONS` 句およびユーザー解決データ型のサポートなし |
| F031-03 | グラント声明 | はい。{.text-success} | |
| F031-04 | ALTER TABLE文:ADD COLUMN句 | 部分的{.text-warning} | サポートなし `GENERATED` 節およびシステム期間 |
| F031-13 | DROP TABLE文:RESTRICT句 | いいえ。{.text-danger} | |
| F031-16 | DROP VIEW文:RESTRICT句 | いいえ。{.text-danger} | |
| F031-19 | REVOKEステートメント:RESTRICT句 | いいえ。{.text-danger} | |
| F031-01 | 永続ベーステーブルを作成するCREATE TABLE文 | 部分的 {.text-warning} | いいえ。 `SYSTEM VERSIONING`, `ON COMMIT`, `GLOBAL`, `LOCAL`, `PRESERVE`, `DELETE`, `REF IS`, `WITH OPTIONS`, `UNDER`, `LIKE`, `PERIOD FOR` 句およびユーザー解決データ型のサポートなし |
| F031-02 | CREATE VIEW文 | 部分的 {.text-warning} | いいえ。 `RECURSIVE`, `CHECK`, `UNDER`, `WITH OPTIONS` 句およびユーザー解決データ型のサポートなし |
| F031-03 | グラント声明 | はい。 {.text-success} | |
| F031-04 | ALTER TABLE文:ADD COLUMN句 | 部分的 {.text-warning} | サポートなし `GENERATED` 節およびシステム期間 |
| F031-13 | DROP TABLE文:RESTRICT句 | いいえ。 {.text-danger} | |
| F031-16 | DROP VIEW文:RESTRICT句 | いいえ。 {.text-danger} | |
| F031-19 | REVOKEステートメント:RESTRICT句 | いいえ。 {.text-danger} | |
| **F041** | **基本的な結合テーブル** | **部分的**{.text-warning} | |
| F041-01 | Inner join必ずというわけではないが、内側のキーワード) | はい。{.text-success} | |
| F041-02 | 内部キーワード | はい。{.text-success} | |
| F041-03 | LEFT OUTER JOIN | はい。{.text-success} | |
| F041-04 | RIGHT OUTER JOIN | はい。{.text-success} | |
| F041-05 | 外部結合は入れ子にできます | はい。{.text-success} | |
| F041-07 | 左外部結合または右外部結合の内部テーブルは、内部結合でも使用できます | はい。{.text-success} | |
| F041-08 | すべての比較演算子がサポ) | いいえ。{.text-danger} | |
| F041-01 | Inner join必ずというわけではないが、内側のキーワード) | はい。 {.text-success} | |
| F041-02 | 内部キーワード | はい。 {.text-success} | |
| F041-03 | LEFT OUTER JOIN | Yes {.text-success} | |
| F041-04 | RIGHT OUTER JOIN | Yes {.text-success} | |
| F041-05 | Outer joins can be nested | Yes {.text-success} | |
| F041-07 | The inner table in a left or right outer join can also be used in an inner join | Yes {.text-success} | |
| F041-08 | All comparison operators are supported (rather than just =) | No {.text-danger} | |
| **F051** | **Basic date and time** | **Partial**{.text-warning} | |
| F051-01 | DATE data type (including support of DATE literal) | Partial {.text-warning} | No literal |
| F051-02 | TIME data type (including support of TIME literal) with fractional seconds precision of at least 0 | No {.text-danger} | |
| F051-03 | TIMESTAMP data type (including support of TIMESTAMP literal) with fractional seconds precision of at least 0 and 6 | No {.text-danger} | `DateTime64` type provides similar functionality |
| F051-04 | Comparison predicate on DATE, TIME, and TIMESTAMP data types | Partial {.text-warning} | Only one data type available |
| F051-05 | Explicit CAST between datetime types and character string types | Yes {.text-success} | |
| F051-06 | CURRENT_DATE | No {.text-danger} | `today()` is similar |
| F051-07 | LOCALTIME | No {.text-danger} | `now()` is similar |
| F051-08 | LOCALTIMESTAMP | No {.text-danger} | |
| **F081** | **UNION and EXCEPT in views** | **Partial**{.text-warning} | |
| **F131** | **Grouped operations** | **Partial**{.text-warning} | |
| F131-01 | WHERE, GROUP BY, and HAVING clauses supported in queries with grouped views | Yes {.text-success} | |
| F131-02 | Multiple tables supported in queries with grouped views | Yes {.text-success} | |
| F131-03 | Set functions supported in queries with grouped views | Yes {.text-success} | |
| F131-04 | Subqueries with GROUP BY and HAVING clauses and grouped views | Yes {.text-success} | |
| F131-05 | Single-row SELECT with GROUP BY and HAVING clauses and grouped views | No {.text-danger} | |
| **F181** | **Multiple module support** | **No**{.text-danger} | |
| **F201** | **CAST function** | **Yes**{.text-success} | |
| **F221** | **Explicit defaults** | **No**{.text-danger} | |
| **F261** | **CASE expression** | **Yes**{.text-success} | |
| F261-01 | Simple CASE | Yes {.text-success} | |
| F261-02 | Searched CASE | Yes {.text-success} | |
| F261-03 | NULLIF | Yes {.text-success} | |
| F261-04 | COALESCE | Yes {.text-success} | |
| **F311** | **Schema definition statement** | **Partial**{.text-warning} | |
| F311-01 | CREATE SCHEMA | No {.text-danger} | |
| F311-02 | CREATE TABLE for persistent base tables | Yes {.text-success} | |
| F311-03 | CREATE VIEW | Yes {.text-success} | |
| F311-04 | CREATE VIEW: WITH CHECK OPTION | No {.text-danger} | |
| F311-05 | GRANT statement | Yes {.text-success} | |
| **F471** | **Scalar subquery values** | **Yes**{.text-success} | |
| **F481** | **Expanded NULL predicate** | **Yes**{.text-success} | |
| **F812** | **Basic flagging** | **No**{.text-danger} | |
| **T321** | **Basic SQL-invoked routines** | **No**{.text-danger} | |
| T321-01 | User-defined functions with no overloading | No {.text-danger} | |
| T321-02 | User-defined stored procedures with no overloading | No {.text-danger} | |
| T321-03 | Function invocation | No {.text-danger} | |
| T321-04 | CALL statement | No {.text-danger} | |
| T321-05 | RETURN statement | No {.text-danger} | |
| **T631** | **IN predicate with one list element** | **Yes**{.text-success} | |

View File

@ -0,0 +1 @@
../../../../en/sql-reference/data-types/lowcardinality.md

View File

@ -259,8 +259,8 @@ Mac OS X:
sudo apt install wget xz-utils
wget https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz
wget https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz
wget https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz
wget https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz
xz -v -d hits_v1.tsv.xz
xz -v -d visits_v1.tsv.xz

View File

@ -582,7 +582,7 @@ Fork для распараллеливания не используется.
**14.** Return values.
In most cases, just return the value with `return`. Do not write `[return std::move(res)]{.strike}`.
In most cases, just return the value with `return`. Do not write `return std::move(res)`.
If a function allocates an object on the heap and returns it, return a `shared_ptr` or a `unique_ptr`.
@ -676,7 +676,7 @@ Loader() {}
**24.** Do not use a `trailing return type` for functions unless it is really necessary.
``` cpp
[auto f() -> void;]{.strike}
auto f() -> void
```
**25.** Declaring and initializing variables.

View File

@ -5,14 +5,14 @@ toc_title: "\u0410\u043d\u043e\u043d\u0438\u043c\u0438\u0437\u0438\u0440\u043e\u
# Anonymized Yandex.Metrica Data {#anonimizirovannye-dannye-iandeks-metriki}
The dataset consists of two tables containing anonymized data about hits (`hits_v1`) and visits (`visits_v1`) of Yandex.Metrica. Each of the tables can be downloaded as a compressed `.tsv.xz` file or as prepared partitions. You can also download an extended version of the `hits` table containing 100 million rows, available as an [archive with TSV files](https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_100m_obfuscated_v1.tsv.xz) and as [prepared partitions](https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_100m_obfuscated_v1.tar.xz).
The dataset consists of two tables containing anonymized data about hits (`hits_v1`) and visits (`visits_v1`) of Yandex.Metrica. Each of the tables can be downloaded as a compressed `.tsv.xz` file or as prepared partitions. You can also download an extended version of the `hits` table containing 100 million rows, available as an [archive with TSV files](https://datasets.clickhouse.tech/hits/tsv/hits_100m_obfuscated_v1.tsv.xz) and as [prepared partitions](https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz).
## Obtaining Tables from Prepared Partitions {#poluchenie-tablits-iz-partitsii}
**Downloading and importing hits partitions:**
``` bash
$ curl -O https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_v1.tar
$ curl -O https://datasets.clickhouse.tech/hits/partitions/hits_v1.tar
$ tar xvf hits_v1.tar -C /var/lib/clickhouse # path to the ClickHouse data directory
$ # make sure the files have the correct access rights
$ sudo service clickhouse-server restart
@ -22,7 +22,7 @@ $ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1"
**Downloading and importing visits partitions:**
``` bash
$ curl -O https://clickhouse-datasets.s3.yandex.net/visits/partitions/visits_v1.tar
$ curl -O https://datasets.clickhouse.tech/visits/partitions/visits_v1.tar
$ tar xvf visits_v1.tar -C /var/lib/clickhouse # path to the ClickHouse data directory
$ # make sure the files have the correct access rights
$ sudo service clickhouse-server restart
@ -34,7 +34,7 @@ $ clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1"
**Downloading and importing hits from a compressed TSV file**
``` bash
$ curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
$ curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
$ # now create the table
$ clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets"
$ clickhouse-client --query "CREATE TABLE datasets.hits_v1 ( WatchID UInt64, JavaEnable UInt8, Title String, GoodEvent Int16, EventTime DateTime, EventDate Date, CounterID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RegionID UInt32, UserID UInt64, CounterClass Int8, OS UInt8, UserAgent UInt8, URL String, Referer String, URLDomain String, RefererDomain String, Refresh UInt8, IsRobot UInt8, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), ResolutionWidth UInt16, ResolutionHeight UInt16, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, FlashMinor2 String, NetMajor UInt8, NetMinor UInt8, UserAgentMajor UInt16, UserAgentMinor FixedString(2), CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, MobilePhone UInt8, MobilePhoneModel String, Params String, IPNetworkID UInt32, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, IsArtifical UInt8, WindowClientWidth UInt16, WindowClientHeight UInt16, ClientTimeZone Int16, ClientEventTime DateTime, SilverlightVersion1 UInt8, SilverlightVersion2 UInt8, SilverlightVersion3 UInt32, SilverlightVersion4 UInt16, PageCharset String, CodeVersion UInt32, IsLink UInt8, IsDownload UInt8, IsNotBounce UInt8, FUniqID UInt64, HID UInt32, IsOldCounter UInt8, IsEvent UInt8, IsParameter UInt8, DontCountHits UInt8, WithHash UInt8, HitColor FixedString(1), UTCEventTime DateTime, Age UInt8, Sex UInt8, Income UInt8, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), RemoteIP UInt32, RemoteIP6 FixedString(16), WindowName Int32, OpenerName Int32, HistoryLength Int16, BrowserLanguage FixedString(2), BrowserCountry FixedString(2), SocialNetwork String, SocialAction String, HTTPError UInt16, SendTiming Int32, DNSTiming Int32, ConnectTiming Int32, ResponseStartTiming Int32, ResponseEndTiming Int32, FetchTiming Int32, RedirectTiming Int32, DOMInteractiveTiming Int32, DOMContentLoadedTiming Int32, DOMCompleteTiming Int32, LoadEventStartTiming Int32, LoadEventEndTiming Int32, NSToDOMContentLoadedTiming Int32, FirstPaintTiming Int32, RedirectCount Int8, SocialSourceNetworkID UInt8, SocialSourcePage String, ParamPrice Int64, ParamOrderID String, ParamCurrency FixedString(3), ParamCurrencyID UInt16, GoalsReached Array(UInt32), OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, RefererHash UInt64, URLHash UInt64, CLID UInt32, YCLID UInt64, ShareService String, ShareURL String, ShareTitle String, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), IslandID FixedString(16), RequestNum UInt32, RequestTry UInt8) ENGINE = MergeTree() PARTITION BY toYYYYMM(EventDate) ORDER BY (CounterID, EventDate, intHash32(UserID)) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192"
@ -48,7 +48,7 @@ $ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1"
**Downloading and importing visits from a compressed TSV file**
``` bash
$ curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
$ curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
$ # now create the table
$ clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets"
$ clickhouse-client --query "CREATE TABLE datasets.visits_v1 ( CounterID UInt32, StartDate Date, Sign Int8, IsNew UInt8, VisitID UInt64, UserID UInt64, StartTime DateTime, Duration UInt32, UTCStartTime DateTime, PageViews Int32, Hits Int32, IsBounce UInt8, Referer String, StartURL String, RefererDomain String, StartURLDomain String, EndURL String, LinkURL String, IsDownload UInt8, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, PlaceID Int32, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), IsYandex UInt8, GoalReachesDepth Int32, GoalReachesURL Int32, GoalReachesAny Int32, SocialSourceNetworkID UInt8, SocialSourcePage String, MobilePhoneModel String, ClientEventTime DateTime, RegionID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RemoteIP UInt32, RemoteIP6 FixedString(16), IPNetworkID UInt32, SilverlightVersion3 UInt32, CodeVersion UInt32, ResolutionWidth UInt16, ResolutionHeight UInt16, UserAgentMajor UInt16, UserAgentMinor UInt16, WindowClientWidth UInt16, WindowClientHeight UInt16, SilverlightVersion2 UInt8, SilverlightVersion4 UInt16, FlashVersion3 UInt16, FlashVersion4 UInt16, ClientTimeZone Int16, OS UInt8, UserAgent UInt8, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, NetMajor UInt8, NetMinor UInt8, MobilePhone UInt8, SilverlightVersion1 UInt8, Age UInt8, Sex UInt8, Income UInt8, JavaEnable UInt8, CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, BrowserLanguage UInt16, BrowserCountry UInt16, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), Params Array(String), Goals Nested(ID UInt32, Serial UInt32, EventTime DateTime, Price Int64, OrderID String, CurrencyID UInt32), WatchIDs Array(UInt64), ParamSumPrice Int64, ParamCurrency FixedString(3), ParamCurrencyID UInt16, ClickLogID UInt64, ClickEventID Int32, ClickGoodEvent Int32, ClickEventTime DateTime, ClickPriorityID Int32, ClickPhraseID Int32, ClickPageID Int32, ClickPlaceID Int32, ClickTypeID Int32, ClickResourceID Int32, ClickCost UInt32, ClickClientIP UInt32, ClickDomainID UInt32, ClickURL String, ClickAttempt UInt8, ClickOrderID UInt32, ClickBannerID UInt32, ClickMarketCategoryID UInt32, ClickMarketPP UInt32, ClickMarketCategoryName String, ClickMarketPPName String, ClickAWAPSCampaignName String, ClickPageName String, ClickTargetType UInt16, ClickTargetPhraseID UInt64, ClickContextType UInt8, ClickSelectType Int8, ClickOptions String, ClickGroupBannerID Int32, OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, FirstVisit DateTime, PredLastVisit Date, LastVisit Date, TotalVisits UInt32, TraficSource Nested(ID Int8, SearchEngineID UInt16, AdvEngineID UInt8, PlaceID UInt16, SocialSourceNetworkID UInt8, Domain String, SearchPhrase String, SocialSourcePage String), Attendance FixedString(16), CLID UInt32, YCLID UInt64, NormalizedRefererHash UInt64, SearchPhraseHash UInt64, RefererDomainHash UInt64, NormalizedStartURLHash UInt64, StartURLDomainHash UInt64, NormalizedEndURLHash UInt64, TopLevelDomain UInt64, URLScheme UInt64, OpenstatServiceNameHash UInt64, OpenstatCampaignIDHash UInt64, OpenstatAdIDHash UInt64, OpenstatSourceIDHash UInt64, UTMSourceHash UInt64, UTMMediumHash UInt64, UTMCampaignHash UInt64, UTMContentHash UInt64, UTMTermHash UInt64, FromHash UInt64, WebVisorEnabled UInt8, WebVisorActivity UInt32, 
ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), Market Nested(Type UInt8, GoalID UInt32, OrderID String, OrderPrice Int64, PP UInt32, DirectPlaceID UInt32, DirectOrderID UInt32, DirectBannerID UInt32, GoodID String, GoodName String, GoodQuantity Int32, GoodPrice Int64), IslandID FixedString(16)) ENGINE = CollapsingMergeTree(Sign) PARTITION BY toYYYYMM(StartDate) ORDER BY (CounterID, StartDate, intHash32(UserID), VisitID) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192"

View File

@ -283,7 +283,7 @@ SELECT formatReadableSize(sum(bytes)) FROM system.parts WHERE table = 'trips_mer
## Downloading Prepared Partitions {#skachivanie-gotovykh-partitsii}
``` bash
$ curl -O https://clickhouse-datasets.s3.yandex.net/trips_mergetree/partitions/trips_mergetree.tar
$ curl -O https://datasets.clickhouse.tech/trips_mergetree/partitions/trips_mergetree.tar
$ tar xvf trips_mergetree.tar -C /var/lib/clickhouse # path to the ClickHouse data directory
$ # make sure the files have the correct access rights
$ sudo service clickhouse-server restart

View File

@ -152,7 +152,7 @@ $ for i in *.zip; do echo $i; unzip -cq $i '*.csv' | sed 's/\.00//g' | clickhous
## Downloading Prepared Partitions {#skachivanie-gotovykh-partitsii}
``` bash
$ curl -O https://clickhouse-datasets.s3.yandex.net/ontime/partitions/ontime.tar
$ curl -O https://datasets.clickhouse.tech/ontime/partitions/ontime.tar
$ tar xvf ontime.tar -C /var/lib/clickhouse # path to the ClickHouse data directory
$ # make sure the files have the correct access rights
$ sudo service clickhouse-server restart

View File

@ -33,7 +33,7 @@ grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not su
### From RPM Packages {#from-rpm-packages}
The ClickHouse team at Yandex recommends using the official precompiled `rpm` packages for CentOS, RedHad, and all other rpm-based Linux distributions.
The ClickHouse team at Yandex recommends using the official precompiled `rpm` packages for CentOS, RedHat, and all other rpm-based Linux distributions.
First, you need to add the official repository:

View File

@ -85,8 +85,8 @@ Now its time to fill our ClickHouse server with some sample data. In this tut
### Download and Extract Table Data {#download-and-extract-table-data}
``` bash
curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
```
The extracted files are about 10GB in size.

View File

@ -9,6 +9,7 @@ ClickHouse может принимать (`INSERT`) и отдавать (`SELECT
The supported formats and whether they can be used in `INSERT` and `SELECT` queries are listed in the table below.
| Format | INSERT | SELECT |
|-----------------------------------------------------------------------------------------|--------|--------|
| [TabSeparated](#tabseparated) | ✔ | ✔ |
@ -56,6 +57,7 @@ ClickHouse может принимать (`INSERT`) и отдавать (`SELECT
| [XML](#xml) | ✗ | ✔ |
| [CapnProto](#capnproto) | ✔ | ✗ |
| [LineAsString](#lineasstring) | ✔ | ✗ |
| [RawBLOB](#rawblob) | ✔ | ✔ |
You can control some format processing parameters with ClickHouse settings. For more information read the [Settings](../operations/settings/settings.md) section.
@ -1248,4 +1250,45 @@ SELECT * FROM line_as_string;
└───────────────────────────────────────────────────┘
```
## RawBLOB {#rawblob}
In this format, all input data is read into a single value. It is possible to parse only a table with a single field of type [String](../sql-reference/data-types/string.md) or similar.
The result is output in binary format without delimiters or escaping. If more than one value is output, the format is ambiguous and it will be impossible to read the data back.
Below is a comparison of the `RawBLOB` and [TabSeparatedRaw](#tabseparatedraw) formats.
`RawBLOB`:
- data is output in binary format, no escaping;
- there are no delimiters between values;
- there is no newline at the end of each value.
[TabSeparatedRaw](#tabseparatedraw):
- data is output without escaping;
- a row contains values separated by tabs;
- there is a newline after the last value in a row.
The following is a comparison of the `RawBLOB` and [RowBinary](#rowbinary) formats.
`RawBLOB`:
- strings are output without a length prefix.
`RowBinary`:
- strings are represented as a length in varint format (unsigned [LEB128](https://en.wikipedia.org/wiki/LEB128)), followed by the bytes of the string.
When empty data is passed to the `RawBLOB` input, ClickHouse throws an exception:
``` text
Code: 108. DB::Exception: No data to insert
```
**Example**
``` bash
$ clickhouse-client --query "CREATE TABLE {some_table} (a String) ENGINE = Memory;"
$ cat {filename} | clickhouse-client --query="INSERT INTO {some_table} FORMAT RawBLOB"
$ clickhouse-client --query "SELECT * FROM {some_table} FORMAT RawBLOB" | md5sum
```
Result:
``` text
f9725a22f9191e064120d718e26862a9 -
```
[Original article](https://clickhouse.tech/docs/ru/interfaces/formats/) <!--hide-->

View File

@ -20,6 +20,7 @@ toc_title: "\u041a\u043b\u0438\u0435\u043d\u0442\u0441\u043a\u0438\u0435\u0020\u
- [simpod/clickhouse-client](https://packagist.org/packages/simpod/clickhouse-client)
- [seva-code/php-click-house-client](https://packagist.org/packages/seva-code/php-click-house-client)
- [SeasClick C++ client](https://github.com/SeasX/SeasClick)
- [glushkovds/phpclickhouse-laravel](https://packagist.org/packages/glushkovds/phpclickhouse-laravel)
- Go
- [clickhouse](https://github.com/kshvakov/clickhouse/)
- [go-clickhouse](https://github.com/roistat/go-clickhouse)

View File

@ -297,7 +297,7 @@ FORMAT Null;
**See Also**
- [Секция JOIN](../../sql-reference/statements/select/join.md#select-join)
- [Join table engine](../../engines/table-engines/special/join.md)
## max_partitions_per_insert_block {#max-partitions-per-insert-block}

View File

@ -2231,10 +2231,41 @@ SELECT CAST(toNullable(toInt32(0)) AS Int32) as x, toTypeName(x);
## output_format_tsv_null_representation {#output_format_tsv_null_representation}
Allows configuring the `NULL` representation for the [TSV](../../interfaces/formats.md#tabseparated) output format. The setting only controls the output format; `\N` is the only supported representation for the TSV input format.
Defines the `NULL` representation for the [TSV](../../interfaces/formats.md#tabseparated) output format. The user can set any string as the value.
Default value: `\N`.
**Examples**
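The queries below read from a small test table named `tsv_custom_null`; one possible definition (the column name and type are assumptions, chosen to match the sample output) is:
```sql
CREATE TABLE tsv_custom_null (id Nullable(UInt32)) ENGINE = Memory;
INSERT INTO tsv_custom_null VALUES (788), (NULL), (NULL);
```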
Query
```sql
SELECT * FROM tsv_custom_null FORMAT TSV;
```
Result
```text
788
\N
\N
```
Query
```sql
SET output_format_tsv_null_representation = 'My NULL';
SELECT * FROM tsv_custom_null FORMAT TSV;
```
Result
```text
788
My NULL
My NULL
```
## output_format_json_array_of_rows {#output-format-json-array-of-rows}
Enables output of all rows as a JSON array in the [JSONEachRow](../../interfaces/formats.md#jsoneachrow) format.
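A rough sketch of the effect (the exact JSON formatting may vary between versions):
```sql
SET output_format_json_array_of_rows = 1;
SELECT number FROM numbers(3) FORMAT JSONEachRow;
```
```text
[
{"number":"0"},
{"number":"1"},
{"number":"2"}
]
```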

View File

@ -0,0 +1,46 @@
# system.distribution_queue {#system_tables-distribution_queue}
Contains information about local files that are in the queue to be sent to the shards. These local files contain the new parts that are created by inserting new data into a Distributed table in asynchronous mode.
Columns:
- `database` ([String](../../sql-reference/data-types/string.md)) — name of the database.
- `table` ([String](../../sql-reference/data-types/string.md)) — name of the table.
- `data_path` ([String](../../sql-reference/data-types/string.md)) — path to the folder with the local files.
- `is_blocked` ([UInt8](../../sql-reference/data-types/int-uint.md)) — flag indicating whether sending local files to the shards is blocked.
- `error_count` ([UInt64](../../sql-reference/data-types/int-uint.md)) — number of errors.
- `data_files` ([UInt64](../../sql-reference/data-types/int-uint.md)) — number of local files in the folder.
- `data_compressed_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md)) — size of the compressed data in the local files, in bytes.
- `last_exception` ([String](../../sql-reference/data-types/string.md)) — text message about the last error that occurred, if any.
**Example**
``` sql
SELECT * FROM system.distribution_queue LIMIT 1 FORMAT Vertical;
```
``` text
Row 1:
──────
database: default
table: dist
data_path: ./store/268/268bc070-3aad-4b1a-9cf2-4987580161af/default@127%2E0%2E0%2E2:9000/
is_blocked: 1
error_count: 0
data_files: 1
data_compressed_bytes: 499
last_exception:
```
**See Also**
- [Distributed table engine](../../engines/table-engines/special/distributed.md)
[Original article](https://clickhouse.tech/docs/ru/operations/system_tables/distribution_queue) <!--hide-->

View File

@ -57,3 +57,5 @@ ORDER BY id
- [A Magical Mystery Tour of the LowCardinality Data Type](https://www.altinity.com/blog/2019/3/27/low-cardinality).
- [Reducing Clickhouse Storage Cost with the Low Cardinality Type Lessons from an Instana Engineer](https://www.instana.com/blog/reducing-clickhouse-storage-cost-with-the-low-cardinality-type-lessons-from-an-instana-engineer/).
- [String Optimization (video presentation in Russian)](https://youtu.be/rqf-ILRgBdY?list=PL0Z2YDlm0b3iwXCpEFiOOYmwXzVmjJfEt). [Slides in English](https://github.com/yandex/clickhouse-presentations/raw/master/meetup19/string_optimization.pdf).
[Original article](https://clickhouse.tech/docs/ru/sql-reference/data-types/lowcardinality/) <!--hide-->

View File

@ -63,10 +63,18 @@ int32samoa: 1546300800
Converts a date or date-with-time to a UInt16 number containing the year number (AD).
## toQuarter {#toquarter}
Converts a date or date-with-time to a UInt8 number containing the quarter number.
## toMonth {#tomonth}
Converts a date or date-with-time to a UInt8 number containing the month number (1-12).
## toDayOfYear {#todayofyear}
Converts a date or date-with-time to a UInt16 number containing the day-of-year number (1-366).
## toDayOfMonth {#todayofmonth}
Converts a date or date-with-time to a UInt8 number containing the day-of-month number (1-31).
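For illustration, the converters above can be combined in a single query (a minimal sketch; 2020-12-14 is day 349 of the leap year 2020):
```sql
SELECT
    toYear(d) AS year,
    toQuarter(d) AS quarter,
    toMonth(d) AS month,
    toDayOfYear(d) AS day_of_year,
    toDayOfMonth(d) AS day_of_month
FROM (SELECT toDate('2020-12-14') AS d);
```
It should return `2020`, `4`, `12`, `349`, and `14`.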
@ -128,6 +136,22 @@ SELECT toUnixTimestamp('2017-11-05 08:07:47', 'Asia/Tokyo') AS unix_timestamp
Rounds a date or date-with-time down to the first day of the year.
Returns a date.
## toStartOfISOYear {#tostartofisoyear}
Rounds a date or date-with-time down to the first day of the ISO year. Returns a date.
The start of the ISO year differs from the start of the regular year, because according to [ISO 8601:1988](https://en.wikipedia.org/wiki/ISO_8601) the first week of the year is the week that contains four or more days of that year.
January 1, 2017 is a Sunday, so the first ISO week of 2017 starts on Monday, January 2, which means that January 1, 2017 belongs to the ISO year 2016, which started on 2016-01-04.
```sql
SELECT toStartOfISOYear(toDate('2017-01-01')) AS ISOYear20170101;
```
```text
┌─ISOYear20170101─┐
│ 2016-01-04 │
└─────────────────┘
```
## toStartOfQuarter {#tostartofquarter}
Rounds a date or date-with-time down to the first day of the quarter.
@ -147,6 +171,12 @@ SELECT toUnixTimestamp('2017-11-05 08:07:47', 'Asia/Tokyo') AS unix_timestamp
Rounds a date or date-with-time down to the nearest Monday.
Returns a date.
## toStartOfWeek(t[,mode]) {#tostartofweek}
Rounds a date or date-with-time down to the nearest Sunday or Monday, according to mode.
Returns a date.
The mode argument works exactly like the mode argument of [toWeek()](#toweek). If the mode argument is omitted, mode 0 is used.
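A short illustrative query (2016-12-27 is a Tuesday; results assume default settings):
```sql
SELECT
    toStartOfWeek(toDate('2016-12-27')) AS week_mode0,
    toStartOfWeek(toDate('2016-12-27'), 1) AS week_mode1;
```
With mode 0 the week starts on Sunday, so this should return `2016-12-25`; with mode 1 it starts on Monday, so it should return `2016-12-26`.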
## toStartOfDay {#tostartofday}
Rounds a date-with-time down to the start of the day. Returns a date-with-time.
@ -243,6 +273,10 @@ WITH toDateTime64('2020-01-01 10:20:30.999', 3) AS dt64 SELECT toStartOfSecond(d
Converts a date-with-time or date to the number of the year, starting from a certain fixed point in the past.
## toRelativeQuarterNum {#torelativequarternum}
Converts a date-with-time or date to the number of the quarter, starting from a certain fixed point in the past.
## toRelativeMonthNum {#torelativemonthnum}
Converts a date-with-time or date to the number of the month, starting from a certain fixed point in the past.
@ -267,6 +301,102 @@ WITH toDateTime64('2020-01-01 10:20:30.999', 3) AS dt64 SELECT toStartOfSecond(d
Converts a date-with-time to the number of the second, starting from a certain fixed point in the past.
## toISOYear {#toisoyear}
Converts a date-with-time or date to a UInt16 number containing the ISO year number. The ISO year differs from the regular year, because according to [ISO 8601:1988](https://en.wikipedia.org/wiki/ISO_8601) the ISO year does not necessarily start on January 1.
Example:
```sql
SELECT
toDate('2017-01-01') AS date,
toYear(date),
toISOYear(date)
```
```text
┌───────date─┬─toYear(toDate('2017-01-01'))─┬─toISOYear(toDate('2017-01-01'))─┐
│ 2017-01-01 │ 2017 │ 2016 │
└────────────┴──────────────────────────────┴─────────────────────────────────┘
```
## toISOWeek {#toisoweek}
Converts a date-with-time or date to a UInt8 number containing the ISO week number.
The start of the ISO year differs from the start of the regular year, because according to [ISO 8601:1988](https://en.wikipedia.org/wiki/ISO_8601) the first week of the year is the week that contains four or more days of that year.
January 1, 2017 is a Sunday, so the first ISO week of 2017 starts on Monday, January 2, which means that January 1, 2017 belongs to the last week of 2016.
```sql
SELECT
toISOWeek(toDate('2017-01-01')) AS ISOWeek20170101,
toISOWeek(toDate('2017-01-02')) AS ISOWeek20170102
```
```text
┌─ISOWeek20170101─┬─ISOWeek20170102─┐
│ 52 │ 1 │
└─────────────────┴─────────────────┘
```
## toWeek(date\[, mode\]\[, timezone\]) {#toweek}
Converts a date-with-time or date to a UInt8 number containing the week number. The second argument, mode, specifies whether the week starts on Sunday or Monday and whether the return value should be in the range 0 to 53 or 1 to 53. If the mode argument is omitted, mode 0 is used.
`toISOWeek()` is equivalent to `toWeek(date,3)`.
Description of modes:
| Mode | First day of week | Range | Week 1 is the first week … |
| ---- | ----------------- | ----- | -------------------------- |
| 0 | Sunday | 0-53 | with a Sunday in this year |
| 1 | Monday | 0-53 | with 4 or more days this year |
| 2 | Sunday | 1-53 | with a Sunday in this year |
| 3 | Monday | 1-53 | with 4 or more days this year |
| 4 | Sunday | 0-53 | with 4 or more days this year |
| 5 | Monday | 0-53 | with a Monday in this year |
| 6 | Sunday | 1-53 | with 4 or more days this year |
| 7 | Monday | 1-53 | with a Monday in this year |
| 8 | Sunday | 1-53 | containing January 1 |
| 9 | Monday | 1-53 | containing January 1 |
For modes with the meaning "with 4 or more days this year", weeks are numbered according to ISO 8601:1988:
- If the week containing January 1 has 4 or more days in the new year, it is week 1.
- Otherwise, it is the last week of the previous year, and the next week is week 1.
For modes with the meaning "containing January 1", week 1 is the week that contains January 1. It does not matter how many days of the new year the week contains, even if it contains only one day.
**Example**
```sql
SELECT toDate('2016-12-27') AS date, toWeek(date) AS week0, toWeek(date,1) AS week1, toWeek(date,9) AS week9;
```
```text
┌───────date─┬─week0─┬─week1─┬─week9─┐
│ 2016-12-27 │ 52 │ 52 │ 1 │
└────────────┴───────┴───────┴───────┘
```
## toYearWeek(date[,mode]) {#toyearweek}
Returns the year and week for a date. The year in the result may differ from the year in the date argument for the first and the last week of the year.
The mode argument works exactly like the mode argument of [toWeek()](#toweek). If mode is not specified, mode 0 is used.
`toISOYear()` is equivalent to `intDiv(toYearWeek(date,3),100)`.
**Example**
```sql
SELECT toDate('2016-12-27') AS date, toYearWeek(date) AS yearWeek0, toYearWeek(date,1) AS yearWeek1, toYearWeek(date,9) AS yearWeek9;
```
```text
┌───────date─┬─yearWeek0─┬─yearWeek1─┬─yearWeek9─┐
│ 2016-12-27 │ 201652 │ 201652 │ 201701 │
└────────────┴───────────┴───────────┴───────────┘
```
## date_trunc {#date_trunc}
Truncates a date and time, discarding all parts smaller than the specified part.
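For example, a minimal sketch of truncation to different units:
```sql
SELECT
    date_trunc('hour', toDateTime('2020-01-01 10:20:30')) AS to_hour,
    date_trunc('month', toDateTime('2020-01-01 10:20:30')) AS to_month;
```
This should return the time truncated to `2020-01-01 10:00:00` and the date truncated to the first day of the month (`2020-01-01`).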

View File

@ -103,4 +103,306 @@ SELECT erf(3 / sqrt(2))
Принимает два числовых аргумента x и y. Возвращает число типа Float64, близкое к x в степени y.
## cosh(x) {#coshx}
[Гиперболический косинус](https://help.scilab.org/docs/5.4.0/ru_RU/cosh.html).
**Синтаксис**
``` sql
cosh(x)
```
**Параметры**
- `x` — угол в радианах. Значения из интервала: `-∞ < x < +∞`. [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Возвращаемое значение**
- Значения из интервала: `1 <= cosh(x) < +∞`.
Тип: [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Пример**
Запрос:
``` sql
SELECT cosh(0);
```
Результат:
``` text
┌─cosh(0)──┐
│ 1 │
└──────────┘
```
## acosh(x) {#acoshx}
[Обратный гиперболический косинус](https://help.scilab.org/docs/5.4.0/ru_RU/acosh.html).
**Синтаксис**
``` sql
acosh(x)
```
**Параметры**
- `x` — гиперболический косинус угла. Значения из интервала: `1 <= x < +∞`. [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Возвращаемое значение**
- Угол в радианах. Значения из интервала: `0 <= acosh(x) < +∞`.
Тип: [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Пример**
Запрос:
``` sql
SELECT acosh(1);
```
Результат:
``` text
┌─acosh(1)─┐
│ 0 │
└──────────┘
```
**Смотрите также**
- [cosh(x)](../../sql-reference/functions/math-functions.md#coshx)
## sinh(x) {#sinhx}
[Гиперболический синус](https://help.scilab.org/docs/5.4.0/ru_RU/sinh.html).
**Синтаксис**
``` sql
sinh(x)
```
**Параметры**
- `x` — угол в радианах. Значения из интервала: `-∞ < x < +∞`. [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Возвращаемое значение**
- Значения из интервала: `-∞ < sinh(x) < +∞`.
Тип: [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Пример**
Запрос:
``` sql
SELECT sinh(0);
```
Результат:
``` text
┌─sinh(0)──┐
│ 0 │
└──────────┘
```
## asinh(x) {#asinhx}
[Обратный гиперболический синус](https://help.scilab.org/docs/5.4.0/ru_RU/asinh.html).
**Синтаксис**
``` sql
asinh(x)
```
**Параметры**
- `x` — гиперболический синус угла. Значения из интервала: `-∞ < x < +∞`. [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Возвращаемое значение**
- Угол в радианах. Значения из интервала: `-∞ < asinh(x) < +∞`.
Тип: [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Пример**
Запрос:
``` sql
SELECT asinh(0);
```
Результат:
``` text
┌─asinh(0)─┐
│ 0 │
└──────────┘
```
**Смотрите также**
- [sinh(x)](../../sql-reference/functions/math-functions.md#sinhx)
## atanh(x) {#atanhx}
[Обратный гиперболический тангенс](https://help.scilab.org/docs/5.4.0/ru_RU/atanh.html).
**Синтаксис**
``` sql
atanh(x)
```
**Параметры**
- `x` — гиперболический тангенс угла. Значения из интервала: `-1 < x < 1`. [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Возвращаемое значение**
- Угол в радианах. Значения из интервала: `-∞ < atanh(x) < +∞`.
Тип: [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Пример**
Запрос:
``` sql
SELECT atanh(0);
```
Результат:
``` text
┌─atanh(0)─┐
│ 0 │
└──────────┘
```
## atan2(y, x) {#atan2yx}
[Функция](https://msoffice-prowork.com/ref/excel/excelfunc/math/atan2/) вычисляет угол в радианах между положительной осью x и линией, проведенной из начала координат в точку `(x, y) ≠ (0, 0)`.
**Синтаксис**
``` sql
atan2(y, x)
```
**Параметры**
- `y` — координата y точки, в которую проведена линия. [Float64](../../sql-reference/data-types/float.md#float32-float64).
- `x` — координата х точки, в которую проведена линия. [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Возвращаемое значение**
- Угол `θ` в радианах из интервала: `−π < θ ≤ π`.
Тип: [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Пример**
Запрос:
``` sql
SELECT atan2(1, 1);
```
Результат:
``` text
┌────────atan2(1, 1)─┐
│ 0.7853981633974483 │
└────────────────────┘
```
## hypot(x, y) {#hypotxy}
Вычисляет длину гипотенузы прямоугольного треугольника. При использовании этой [функции](https://php.ru/manual/function.hypot.html) не возникает проблем при возведении в квадрат очень больших или очень малых чисел.
**Синтаксис**
``` sql
hypot(x, y)
```
**Параметры**
- `x` — первый катет прямоугольного треугольника. [Float64](../../sql-reference/data-types/float.md#float32-float64).
- `y` — второй катет прямоугольного треугольника. [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Возвращаемое значение**
- Длина гипотенузы прямоугольного треугольника.
Тип: [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Пример**
Запрос:
``` sql
SELECT hypot(1, 1);
```
Результат:
``` text
┌────────hypot(1, 1)─┐
│ 1.4142135623730951 │
└────────────────────┘
```
## log1p(x) {#log1px}
Вычисляет `log(1+x)`. [Функция](https://help.scilab.org/docs/6.0.1/ru_RU/log1p.html) `log1p(x)` является более точной, чем функция `log(1+x)` для малых значений x.
**Синтаксис**
``` sql
log1p(x)
```
**Параметры**
- `x` — значения из интервала: `-1 < x < +∞`. [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Возвращаемое значение**
- Значения из интервала: `-∞ < log1p(x) < +∞`.
Тип: [Float64](../../sql-reference/data-types/float.md#float32-float64).
**Пример**
Запрос:
``` sql
SELECT log1p(0);
```
Результат:
``` text
┌─log1p(0)─┐
│ 0 │
└──────────┘
```
**Смотрите также**
- [log(x)](../../sql-reference/functions/math-functions.md#logx)
[Оригинальная статья](https://clickhouse.tech/docs/ru/query_language/functions/math_functions/) <!--hide-->

View File

@ -56,10 +56,188 @@ toc_title: ORDER BY
## Поддержка collation {#collation-support}
For sorting by String values you can specify collation (comparison). Example: `ORDER BY SearchPhrase COLLATE 'tr'` - to sort by search phrase in ascending order, using the Turkish alphabet, case-insensitively, assuming that strings are UTF-8 encoded. `COLLATE` may or may not be specified independently for each expression in ORDER BY. If `ASC` or `DESC` is specified, `COLLATE` is specified after it. When using `COLLATE`, sorting is always case-insensitive.
For sorting by values of type [String](../../../sql-reference/data-types/string.md) you can specify collation (comparison). Example: `ORDER BY SearchPhrase COLLATE 'tr'` - to sort by search phrase in ascending order, using the Turkish alphabet, case-insensitively, assuming that strings are UTF-8 encoded. `COLLATE` may or may not be specified independently for each expression in ORDER BY. If `ASC` or `DESC` is specified, `COLLATE` is specified after it. When using `COLLATE`, sorting is always case-insensitive.
Collation is supported for the [LowCardinality](../../../sql-reference/data-types/lowcardinality.md), [Nullable](../../../sql-reference/data-types/nullable.md), [Array](../../../sql-reference/data-types/array.md), and [Tuple](../../../sql-reference/data-types/tuple.md) types.
We recommend using `COLLATE` only for the final sorting of a small number of rows, because sorting with `COLLATE` is less performant than regular sorting by bytes.
## Collation Examples {#collation-examples}
Example with [String](../../../sql-reference/data-types/string.md) values:
Input table:
``` text
┌─x─┬─s────┐
│ 1 │ bca │
│ 2 │ ABC │
│ 3 │ 123a │
│ 4 │ abc │
│ 5 │ BCA │
└───┴──────┘
```
Query:
```sql
SELECT * FROM collate_test ORDER BY s ASC COLLATE 'en';
```
Result:
``` text
┌─x─┬─s────┐
│ 3 │ 123a │
│ 4 │ abc │
│ 2 │ ABC │
│ 1 │ bca │
│ 5 │ BCA │
└───┴──────┘
```
Example with [Nullable](../../../sql-reference/data-types/nullable.md) strings:
Input table:
``` text
┌─x─┬─s────┐
│ 1 │ bca │
│ 2 │ ᴺᵁᴸᴸ │
│ 3 │ ABC │
│ 4 │ 123a │
│ 5 │ abc │
│ 6 │ ᴺᵁᴸᴸ │
│ 7 │ BCA │
└───┴──────┘
```
Query:
```sql
SELECT * FROM collate_test ORDER BY s ASC COLLATE 'en';
```
Result:
``` text
┌─x─┬─s────┐
│ 4 │ 123a │
│ 5 │ abc │
│ 3 │ ABC │
│ 1 │ bca │
│ 7 │ BCA │
│ 6 │ ᴺᵁᴸᴸ │
│ 2 │ ᴺᵁᴸᴸ │
└───┴──────┘
```
Example with strings in an [Array](../../../sql-reference/data-types/array.md):
Input table:
``` text
┌─x─┬─s─────────────┐
│ 1 │ ['Z'] │
│ 2 │ ['z'] │
│ 3 │ ['a'] │
│ 4 │ ['A'] │
│ 5 │ ['z','a'] │
│ 6 │ ['z','a','a'] │
│ 7 │ [''] │
└───┴───────────────┘
```
Query:
```sql
SELECT * FROM collate_test ORDER BY s ASC COLLATE 'en';
```
Result:
``` text
┌─x─┬─s─────────────┐
│ 7 │ [''] │
│ 3 │ ['a'] │
│ 4 │ ['A'] │
│ 2 │ ['z'] │
│ 5 │ ['z','a'] │
│ 6 │ ['z','a','a'] │
│ 1 │ ['Z'] │
└───┴───────────────┘
```
Example with [LowCardinality](../../../sql-reference/data-types/lowcardinality.md) strings:
Input table:
```text
┌─x─┬─s───┐
│ 1 │ Z │
│ 2 │ z │
│ 3 │ a │
│ 4 │ A │
│ 5 │ za │
│ 6 │ zaa │
│ 7 │ │
└───┴─────┘
```
Query:
```sql
SELECT * FROM collate_test ORDER BY s ASC COLLATE 'en';
```
Result:
```text
┌─x─┬─s───┐
│ 7 │ │
│ 3 │ a │
│ 4 │ A │
│ 2 │ z │
│ 1 │ Z │
│ 5 │ za │
│ 6 │ zaa │
└───┴─────┘
```
Example with strings in a [Tuple](../../../sql-reference/data-types/tuple.md):
```text
┌─x─┬─s───────┐
│ 1 │ (1,'Z') │
│ 2 │ (1,'z') │
│ 3 │ (1,'a') │
│ 4 │ (2,'z') │
│ 5 │ (1,'A') │
│ 6 │ (2,'Z') │
│ 7 │ (2,'A') │
└───┴─────────┘
```
Query:
```sql
SELECT * FROM collate_test ORDER BY s ASC COLLATE 'en';
```
Result:
```text
┌─x─┬─s───────┐
│ 3 │ (1,'a') │
│ 5 │ (1,'A') │
│ 2 │ (1,'z') │
│ 1 │ (1,'Z') │
│ 7 │ (2,'A') │
│ 4 │ (2,'z') │
│ 6 │ (2,'Z') │
└───┴─────────┘
```
## Implementation Details {#implementation-details}
If a small enough [LIMIT](limit.md) is specified in addition to `ORDER BY`, less RAM is used. Otherwise, the amount of memory spent is proportional to the volume of data to sort. In distributed query processing, if [GROUP BY](group-by.md) is absent, sorting is partially done on the remote servers and the results are merged on the query initiator server. This means that for distributed sorting, the volume of data to sort can be greater than the amount of memory on a single server.

View File

@ -12,6 +12,7 @@ toc_title: SYSTEM
- [DROP MARK CACHE](#query_language-system-drop-mark-cache)
- [DROP UNCOMPRESSED CACHE](#query_language-system-drop-uncompressed-cache)
- [DROP COMPILED EXPRESSION CACHE](#query_language-system-drop-compiled-expression-cache)
- [DROP REPLICA](#query_language-system-drop-replica)
- [FLUSH LOGS](#query_language-system-flush_logs)
- [RELOAD CONFIG](#query_language-system-reload-config)
- [SHUTDOWN](#query_language-system-shutdown)
@ -66,6 +67,24 @@ SELECT name, status FROM system.dictionaries;
Resets the mark cache (`mark cache`). Used during ClickHouse development and in performance tests.
## DROP REPLICA {#query_language-system-drop-replica}
Dead replicas can be dropped using the following syntax:
``` sql
SYSTEM DROP REPLICA 'replica_name' FROM TABLE database.table;
SYSTEM DROP REPLICA 'replica_name' FROM DATABASE database;
SYSTEM DROP REPLICA 'replica_name';
SYSTEM DROP REPLICA 'replica_name' FROM ZKPATH '/path/to/table/in/zk';
```
Removes the replica path from ZooKeeper. This is useful when the replica is dead and its metadata cannot be removed from ZooKeeper with `DROP TABLE`, because the table no longer exists. `DROP REPLICA` can only drop an inactive/stale replica; it cannot drop the local replica, use `DROP TABLE` for that. `DROP REPLICA` does not drop any tables and does not remove any data or metadata from disk.
The first command removes the metadata of the `'replica_name'` replica for the `database.table` table.
The second command removes the metadata of the `'replica_name'` replica for all tables of the `database` database.
The third command removes the metadata of the `'replica_name'` replica for all tables that exist on the local server (the list of tables is generated from the local replica).
The fourth command is useful for removing the metadata of a dead replica when all other replicas of the table have already been dropped, so the table's ZooKeeper path must be specified explicitly. The ZooKeeper path is the first argument of the `ReplicatedMergeTree` engine when creating the table.
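As an illustration, a minimal sketch of the fourth form (the replica name and ZooKeeper path below are hypothetical):
``` sql
-- Remove the metadata of a dead replica when the table itself no longer exists;
-- the path is the first argument passed to the ReplicatedMergeTree engine at CREATE TABLE time.
SYSTEM DROP REPLICA 'replica_2' FROM ZKPATH '/clickhouse/tables/01/my_table';
```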
## DROP UNCOMPRESSED CACHE {#query_language-system-drop-uncompressed-cache}
Resets the uncompressed data cache. Used during ClickHouse development and in performance tests.

Some files were not shown because too many files have changed in this diff