diff --git a/CHANGELOG.md b/CHANGELOG.md
index c722e4a1ca0..6d9174c7d07 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,3 +1,126 @@
+### ClickHouse release 20.12
+
+### ClickHouse release v20.12.3.3-stable, 2020-12-13
+
+#### Backward Incompatible Change
+
+* Enable `use_compact_format_in_distributed_parts_names` by default (see the documentation for reference). [#16728](https://github.com/ClickHouse/ClickHouse/pull/16728) ([Azat Khuzhin](https://github.com/azat)).
+* Accept user settings related to file formats (e.g. `format_csv_delimiter`) in the `SETTINGS` clause when creating a table that uses the `File` engine, and use these settings in all `INSERT`s and `SELECT`s. The file format settings changed in the current user session, or in the `SETTINGS` clause of a DML query itself, no longer affect the query. [#16591](https://github.com/ClickHouse/ClickHouse/pull/16591) ([Alexander Kuzmenkov](https://github.com/akuzm)).
+
+#### New Feature
+
+* Add `*.xz` compression/decompression support. It enables using `*.xz` in the `file()` function. This closes [#8828](https://github.com/ClickHouse/ClickHouse/issues/8828). [#16578](https://github.com/ClickHouse/ClickHouse/pull/16578) ([Abi Palagashvili](https://github.com/fibersel)).
+* Introduce the query `ALTER TABLE ... DROP|DETACH PART 'part_name'` (see the sketch after this list). [#15511](https://github.com/ClickHouse/ClickHouse/pull/15511) ([nvartolomei](https://github.com/nvartolomei)).
+* Added new `ALTER UPDATE/DELETE IN PARTITION` syntax (see the sketch after this list). [#13403](https://github.com/ClickHouse/ClickHouse/pull/13403) ([Vladimir Chebotarev](https://github.com/excitoon)).
+* Allow formatting named tuples as JSON objects when using JSON input/output formats, controlled by the `output_format_json_named_tuples_as_objects` setting, disabled by default. [#17175](https://github.com/ClickHouse/ClickHouse/pull/17175) ([Alexander Kuzmenkov](https://github.com/akuzm)).
+* Add a possibility to input an enum value as its id in TSV and CSV formats by default. [#16834](https://github.com/ClickHouse/ClickHouse/pull/16834) ([Kruglov Pavel](https://github.com/Avogar)).
+* Add `COLLATE` support for `Nullable`, `LowCardinality`, `Array` and `Tuple`, where the nested type is `String`. Also refactor the code associated with collations in `ColumnString.cpp`. [#16273](https://github.com/ClickHouse/ClickHouse/pull/16273) ([Kruglov Pavel](https://github.com/Avogar)).
+* New `tcpPort` function returns the TCP port this server listens on. [#17134](https://github.com/ClickHouse/ClickHouse/pull/17134) ([Ivan](https://github.com/abyss7)).
+* Add new math functions: `acosh`, `asinh`, `atan2`, `atanh`, `cosh`, `hypot`, `log1p`, `sinh`. [#16636](https://github.com/ClickHouse/ClickHouse/pull/16636) ([Konstantin Malanchev](https://github.com/hombit)).
+* Possibility to distribute the merges between different replicas. Introduces the `execute_merges_on_single_replica_time_threshold` MergeTree setting. [#16424](https://github.com/ClickHouse/ClickHouse/pull/16424) ([filimonov](https://github.com/filimonov)).
+* Add setting `aggregate_functions_null_for_empty` for SQL standard compatibility. This option rewrites all aggregate functions in a query, adding the `-OrNull` suffix to them. Implements [#10273](https://github.com/ClickHouse/ClickHouse/issues/10273). [#16123](https://github.com/ClickHouse/ClickHouse/pull/16123) ([flynn](https://github.com/ucasFL)).
+* Updated `DateTime`, `DateTime64` parsing to accept string Date literal format. [#16040](https://github.com/ClickHouse/ClickHouse/pull/16040) ([Maksim Kita](https://github.com/kitaisreal)).
+* Make it possible to change the path to the history file in `clickhouse-client` using the `--history_file` parameter. [#15960](https://github.com/ClickHouse/ClickHouse/pull/15960) ([Maksim Kita](https://github.com/kitaisreal)).
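+
+A minimal sketch of the new queries above; the table name `t`, the part name and the partition value are hypothetical:
+
+```sql
+-- Detach a single data part by name (new DROP|DETACH PART syntax).
+ALTER TABLE t DETACH PART 'all_1_1_0';
+-- Run a mutation limited to one partition (new IN PARTITION syntax).
+ALTER TABLE t DELETE IN PARTITION 2020 WHERE x = 0;
+-- tcpPort() returns the TCP port this server listens on.
+SELECT tcpPort();
+-- One of the new math functions.
+SELECT hypot(3, 4);  -- 5
+```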
+
+#### Bug Fix
+
+* Fix the issue when the server can stop accepting connections in very rare cases. [#17542](https://github.com/ClickHouse/ClickHouse/pull/17542) ([Amos Bird](https://github.com/amosbird)).
+* Fixed `Function not implemented` error when executing `RENAME` query in `Atomic` database with ClickHouse running on Windows Subsystem for Linux. Fixes [#17661](https://github.com/ClickHouse/ClickHouse/issues/17661). [#17664](https://github.com/ClickHouse/ClickHouse/pull/17664) ([tavplubix](https://github.com/tavplubix)).
+* Do not restore parts from WAL if `in_memory_parts_enable_wal` is disabled. [#17802](https://github.com/ClickHouse/ClickHouse/pull/17802) ([detailyang](https://github.com/detailyang)).
+* Fix incorrect initialization of `max_compress_block_size` in MergeTreeWriterSettings with `min_compress_block_size`. [#17833](https://github.com/ClickHouse/ClickHouse/pull/17833) ([flynn](https://github.com/ucasFL)).
+* Exception message about max table size to drop was displayed incorrectly. [#17764](https://github.com/ClickHouse/ClickHouse/pull/17764) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Fixed possible segfault when there is not enough space when inserting into `Distributed` table. [#17737](https://github.com/ClickHouse/ClickHouse/pull/17737) ([tavplubix](https://github.com/tavplubix)).
+* Fixed problem when ClickHouse fails to resume connection to MySQL servers. [#17681](https://github.com/ClickHouse/ClickHouse/pull/17681) ([Alexander Kazakov](https://github.com/Akazz)).
+* It might be determined incorrectly whether a cluster is circular- (cross-) replicated when executing `ON CLUSTER` queries, due to a race condition when `pool_size` > 1. It's fixed. [#17640](https://github.com/ClickHouse/ClickHouse/pull/17640) ([tavplubix](https://github.com/tavplubix)).
+* Exception `fmt::v7::format_error` can be logged in background for MergeTree tables. This fixes [#17613](https://github.com/ClickHouse/ClickHouse/issues/17613). [#17615](https://github.com/ClickHouse/ClickHouse/pull/17615) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* When clickhouse-client is used in interactive mode with multiline queries, a single-line comment was erroneously extended till the end of the query. This fixes [#13654](https://github.com/ClickHouse/ClickHouse/issues/13654). [#17565](https://github.com/ClickHouse/ClickHouse/pull/17565) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Fix alter query hang when the corresponding mutation was killed on a different replica. Fixes [#16953](https://github.com/ClickHouse/ClickHouse/issues/16953). [#17499](https://github.com/ClickHouse/ClickHouse/pull/17499) ([alesapin](https://github.com/alesapin)).
+* Fix issue when mark cache size was underestimated by ClickHouse. It may happen when there are a lot of tiny files with marks. [#17496](https://github.com/ClickHouse/ClickHouse/pull/17496) ([alesapin](https://github.com/alesapin)).
+* Fix `ORDER BY` with enabled setting `optimize_redundant_functions_in_order_by`. [#17471](https://github.com/ClickHouse/ClickHouse/pull/17471) ([Anton Popov](https://github.com/CurtizJ)).
+* Fix duplicates after `DISTINCT` which were possible because of incorrect optimization. Fixes [#17294](https://github.com/ClickHouse/ClickHouse/issues/17294). [#17296](https://github.com/ClickHouse/ClickHouse/pull/17296) ([li chengxiang](https://github.com/chengxianglibra)). [#17439](https://github.com/ClickHouse/ClickHouse/pull/17439) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Fix crash while reading from `JOIN` table with `LowCardinality` types. Fixes [#17228](https://github.com/ClickHouse/ClickHouse/issues/17228). [#17397](https://github.com/ClickHouse/ClickHouse/pull/17397) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Fix `toInt256(inf)` stack overflow. Int256 is an experimental feature. Closes [#17235](https://github.com/ClickHouse/ClickHouse/issues/17235). [#17257](https://github.com/ClickHouse/ClickHouse/pull/17257) ([flynn](https://github.com/ucasFL)).
+* Fix possible `Unexpected packet Data received from client` error logged for Distributed queries with `LIMIT`. [#17254](https://github.com/ClickHouse/ClickHouse/pull/17254) ([Azat Khuzhin](https://github.com/azat)).
+* Fix set index invalidation when there are const columns in the subquery. This fixes [#17246](https://github.com/ClickHouse/ClickHouse/issues/17246). [#17249](https://github.com/ClickHouse/ClickHouse/pull/17249) ([Amos Bird](https://github.com/amosbird)).
+* Fix possible wrong index analysis when the types of the index comparison are different. This fixes [#17122](https://github.com/ClickHouse/ClickHouse/issues/17122). [#17145](https://github.com/ClickHouse/ClickHouse/pull/17145) ([Amos Bird](https://github.com/amosbird)).
+* Fix ColumnConst comparison which leads to crash. This fixes [#17088](https://github.com/ClickHouse/ClickHouse/issues/17088). [#17135](https://github.com/ClickHouse/ClickHouse/pull/17135) ([Amos Bird](https://github.com/amosbird)).
+* Multiple fixes for `MaterializeMySQL` (experimental feature). Fixes [#16923](https://github.com/ClickHouse/ClickHouse/issues/16923). Fixes [#15883](https://github.com/ClickHouse/ClickHouse/issues/15883). Fix `MaterializeMySQL` SYNC failure when modifying the MySQL `binlog_checksum`. [#17091](https://github.com/ClickHouse/ClickHouse/pull/17091) ([Winter Zhang](https://github.com/zhang2014)).
+* Fix bug when `ON CLUSTER` queries may hang forever for non-leader `ReplicatedMergeTree` tables. [#17089](https://github.com/ClickHouse/ClickHouse/pull/17089) ([alesapin](https://github.com/alesapin)).
+* Fixed crash on `CREATE TABLE ... AS some_table` query when `some_table` was created `AS table_function()`. Fixes [#16944](https://github.com/ClickHouse/ClickHouse/issues/16944). [#17072](https://github.com/ClickHouse/ClickHouse/pull/17072) ([tavplubix](https://github.com/tavplubix)).
+* Fix unfinished implementation of the function `fuzzBits`; related issue: [#16980](https://github.com/ClickHouse/ClickHouse/issues/16980). [#17051](https://github.com/ClickHouse/ClickHouse/pull/17051) ([hexiaoting](https://github.com/hexiaoting)).
+* Fix LLVM's libunwind in the case when the CFA register is RAX. This is the [bug](https://bugs.llvm.org/show_bug.cgi?id=48186) in [LLVM's libunwind](https://github.com/llvm/llvm-project/tree/master/libunwind). We already have workarounds for this bug. [#17046](https://github.com/ClickHouse/ClickHouse/pull/17046) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Avoid unnecessary network errors for remote queries which may be cancelled during execution, like queries with `LIMIT`. [#17006](https://github.com/ClickHouse/ClickHouse/pull/17006) ([Azat Khuzhin](https://github.com/azat)).
+* Fix `optimize_distributed_group_by_sharding_key` setting (that is disabled by default) for query with OFFSET only. [#16996](https://github.com/ClickHouse/ClickHouse/pull/16996) ([Azat Khuzhin](https://github.com/azat)).
+* Fix for Merge tables over Distributed tables with JOIN. [#16993](https://github.com/ClickHouse/ClickHouse/pull/16993) ([Azat Khuzhin](https://github.com/azat)).
+* Fixed wrong result in big integers (128, 256 bit) when casting from double. Big integers support is experimental. [#16986](https://github.com/ClickHouse/ClickHouse/pull/16986) ([Mike](https://github.com/myrrc)).
+* Fix possible server crash after `ALTER TABLE ... MODIFY COLUMN ... NewType` when `SELECT` has a `WHERE` expression on the altered column and the alter is not finished yet. [#16968](https://github.com/ClickHouse/ClickHouse/pull/16968) ([Amos Bird](https://github.com/amosbird)).
+* Blame info was not calculated correctly in `clickhouse-git-import`. [#16959](https://github.com/ClickHouse/ClickHouse/pull/16959) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Fix order by optimization with monotonous functions. Fixes [#16107](https://github.com/ClickHouse/ClickHouse/issues/16107). [#16956](https://github.com/ClickHouse/ClickHouse/pull/16956) ([Anton Popov](https://github.com/CurtizJ)).
+* Fix optimization of group by with enabled setting `optimize_aggregators_of_group_by_keys` and joins. Fixes [#12604](https://github.com/ClickHouse/ClickHouse/issues/12604). [#16951](https://github.com/ClickHouse/ClickHouse/pull/16951) ([Anton Popov](https://github.com/CurtizJ)).
+* Fix possible error `Illegal type of argument` for queries with `ORDER BY`. Fixes [#16580](https://github.com/ClickHouse/ClickHouse/issues/16580). [#16928](https://github.com/ClickHouse/ClickHouse/pull/16928) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Fix strange code in `InterpreterShowAccessQuery`. [#16866](https://github.com/ClickHouse/ClickHouse/pull/16866) ([tavplubix](https://github.com/tavplubix)).
+* Prevent ClickHouse server crashes when using the function `timeSeriesGroupSum`. The function is removed from newer ClickHouse releases. [#16865](https://github.com/ClickHouse/ClickHouse/pull/16865) ([filimonov](https://github.com/filimonov)).
+* Fix rare silent crashes when query profiler is on and ClickHouse is installed on OS with glibc version that has (supposedly) broken asynchronous unwind tables for some functions. This fixes [#15301](https://github.com/ClickHouse/ClickHouse/issues/15301). This fixes [#13098](https://github.com/ClickHouse/ClickHouse/issues/13098). [#16846](https://github.com/ClickHouse/ClickHouse/pull/16846) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Fix crash when using `any` without any arguments. This is for [#16803](https://github.com/ClickHouse/ClickHouse/issues/16803). cc @azat. [#16826](https://github.com/ClickHouse/ClickHouse/pull/16826) ([Amos Bird](https://github.com/amosbird)).
+* If no memory can be allocated while writing table metadata on disk, a broken metadata file could be written. [#16772](https://github.com/ClickHouse/ClickHouse/pull/16772) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Fix trivial query optimization with partition predicate. [#16767](https://github.com/ClickHouse/ClickHouse/pull/16767) ([Azat Khuzhin](https://github.com/azat)).
+* Fix `IN` operator over several columns and tuples with enabled `transform_null_in` setting (see the sketch after this list). Fixes [#15310](https://github.com/ClickHouse/ClickHouse/issues/15310). [#16722](https://github.com/ClickHouse/ClickHouse/pull/16722) ([Anton Popov](https://github.com/CurtizJ)).
+* Return the number of affected rows for INSERT queries via MySQL protocol. Previously ClickHouse used to always return 0; it's fixed. Fixes [#16605](https://github.com/ClickHouse/ClickHouse/issues/16605). [#16715](https://github.com/ClickHouse/ClickHouse/pull/16715) ([Winter Zhang](https://github.com/zhang2014)).
+* Fix remote query failure when using the `-If` suffix combinator with an aggregate function. Fixes [#16574](https://github.com/ClickHouse/ClickHouse/issues/16574). Fixes [#16231](https://github.com/ClickHouse/ClickHouse/issues/16231). [#16610](https://github.com/ClickHouse/ClickHouse/pull/16610) ([Winter Zhang](https://github.com/zhang2014)).
+* Fix inconsistent behavior caused by `select_sequential_consistency` for optimized trivial count query and `system.tables`. [#16309](https://github.com/ClickHouse/ClickHouse/pull/16309) ([Hao Chen](https://github.com/haoch)).
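+
+A small illustration of the `transform_null_in` fix mentioned above, assuming the post-fix behavior of tuple comparison inside `IN`:
+
+```sql
+-- With transform_null_in = 1, NULLs are compared as ordinary values inside IN,
+-- including tuples that span several columns.
+SELECT (1, NULL) IN ((1, NULL)) SETTINGS transform_null_in = 1;  -- 1
+```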
+
+#### Improvement
+
+* Remove empty parts after they were pruned by TTL, mutation, or collapsing merge algorithm. [#16895](https://github.com/ClickHouse/ClickHouse/pull/16895) ([Anton Popov](https://github.com/CurtizJ)).
+* Enable compact format of directories for asynchronous sends in Distributed tables: `use_compact_format_in_distributed_parts_names` is set to 1 by default. [#16788](https://github.com/ClickHouse/ClickHouse/pull/16788) ([Azat Khuzhin](https://github.com/azat)).
+* Abort multipart upload if no data was written to S3. [#16840](https://github.com/ClickHouse/ClickHouse/pull/16840) ([Pavel Kovalenko](https://github.com/Jokser)).
+* Reresolve the IP of the `format_avro_schema_registry_url` in case of errors. [#16985](https://github.com/ClickHouse/ClickHouse/pull/16985) ([filimonov](https://github.com/filimonov)).
+* Mask the password in `data_path` in `system.distribution_queue`. [#16727](https://github.com/ClickHouse/ClickHouse/pull/16727) ([Azat Khuzhin](https://github.com/azat)).
+* Throw an error when a column transformer replaces a non-existing column. [#16183](https://github.com/ClickHouse/ClickHouse/pull/16183) ([hexiaoting](https://github.com/hexiaoting)).
+* Turn off parallel parsing when there is not enough memory for all threads to work simultaneously. Also there could be exceptions like `Memory limit exceeded` when inserting extremely huge rows (greater than `min_chunk_bytes_for_parallel_parsing`), because each piece to parse has to be an independent set of strings (one or more). [#16721](https://github.com/ClickHouse/ClickHouse/pull/16721) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
+* Install script should always create subdirs in config folders. This is only relevant for Docker build with custom config. [#16936](https://github.com/ClickHouse/ClickHouse/pull/16936) ([filimonov](https://github.com/filimonov)).
+* Correct grammar in error message in JSONEachRow, JSONCompactEachRow, and RegexpRow input formats. [#17205](https://github.com/ClickHouse/ClickHouse/pull/17205) ([nico piderman](https://github.com/sneako)).
+* Set default `host` and `port` parameters for `SOURCE(CLICKHOUSE(...))` to the current instance and set default `user` value to `'default'`. [#16997](https://github.com/ClickHouse/ClickHouse/pull/16997) ([vdimir](https://github.com/vdimir)).
+* Throw an informative error message when doing `ATTACH/DETACH TABLE`. Before this PR, `DETACH TABLE` worked but led to ill-formed in-memory metadata. [#16885](https://github.com/ClickHouse/ClickHouse/pull/16885) ([Amos Bird](https://github.com/amosbird)).
+* Add `cutToFirstSignificantSubdomainWithWWW()`. [#16845](https://github.com/ClickHouse/ClickHouse/pull/16845) ([Azat Khuzhin](https://github.com/azat)).
+* Server refuses to start up with an exception message if a wrong config is given (`metric_log`.`collect_interval_milliseconds` is missing). [#16815](https://github.com/ClickHouse/ClickHouse/pull/16815) ([Ivan](https://github.com/abyss7)).
+* Better exception message when configuration for distributed DDL is absent. This fixes [#5075](https://github.com/ClickHouse/ClickHouse/issues/5075). [#16769](https://github.com/ClickHouse/ClickHouse/pull/16769) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
+* Usability improvement: better suggestions in syntax error message when `CODEC` expression is misplaced in `CREATE TABLE` query. This fixes [#12493](https://github.com/ClickHouse/ClickHouse/issues/12493). [#16768](https://github.com/ClickHouse/ClickHouse/pull/16768) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Remove empty directories for async INSERT at start of Distributed engine. [#16729](https://github.com/ClickHouse/ClickHouse/pull/16729) ([Azat Khuzhin](https://github.com/azat)).
+* Workaround to use S3 with an nginx server as a proxy. Nginx currently does not accept URLs with an empty path like `http://domain.com?delete`, but vanilla aws-sdk-cpp produces this kind of URL. This commit uses a patched aws-sdk-cpp version, which makes URLs with "/" as the path in such cases, like `http://domain.com/?delete`. [#16709](https://github.com/ClickHouse/ClickHouse/pull/16709) ([ianton-ru](https://github.com/ianton-ru)).
+* Allow `reinterpretAs*` functions to work for integers and floats of the same size (see the sketch after this list). Implements [#16640](https://github.com/ClickHouse/ClickHouse/issues/16640). [#16657](https://github.com/ClickHouse/ClickHouse/pull/16657) ([flynn](https://github.com/ucasFL)).
+* Now, `` configuration can be changed in `config.xml` and reloaded without a server restart. [#16627](https://github.com/ClickHouse/ClickHouse/pull/16627) ([Amos Bird](https://github.com/amosbird)).
+* Support SNI in https connections to remote resources. This allows connecting to Cloudflare servers that require SNI. This fixes [#10055](https://github.com/ClickHouse/ClickHouse/issues/10055). [#16252](https://github.com/ClickHouse/ClickHouse/pull/16252) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Make it possible to connect to a `clickhouse-server` secure endpoint which requires SNI. This is possible when `clickhouse-server` is hosted behind a TLS proxy. [#16938](https://github.com/ClickHouse/ClickHouse/pull/16938) ([filimonov](https://github.com/filimonov)).
+* Fix possible stack overflow if a loop of materialized views is created. This closes [#15732](https://github.com/ClickHouse/ClickHouse/issues/15732). [#16048](https://github.com/ClickHouse/ClickHouse/pull/16048) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Simplify the implementation of background tasks processing for the MergeTree table engines family. There should be no visible changes for the user. [#15983](https://github.com/ClickHouse/ClickHouse/pull/15983) ([alesapin](https://github.com/alesapin)).
+* Improvement for `MaterializeMySQL` (experimental feature). Throw an exception that names the required sync privileges when the MySQL sync user has wrong privileges. [#15977](https://github.com/ClickHouse/ClickHouse/pull/15977) ([TCeason](https://github.com/TCeason)).
+* Made `indexOf()` use BloomFilter. [#14977](https://github.com/ClickHouse/ClickHouse/pull/14977) ([achimbab](https://github.com/achimbab)).
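+
+A quick sketch of the relaxed `reinterpretAs*` behavior above, assuming the post-change semantics; the constant is the IEEE-754 bit pattern of 1.0 and is used only for illustration:
+
+```sql
+-- Reinterpret the bytes of a UInt64 as a Float64 of the same size, and back.
+SELECT reinterpretAsFloat64(toUInt64(4607182418800017408));  -- 1
+SELECT reinterpretAsUInt64(toFloat64(1));                    -- 4607182418800017408
+```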
+
+#### Performance Improvement
+
+* Use the Floyd-Rivest algorithm; it is the best fit for the ClickHouse use case of partial sorting. Benchmarks are in https://github.com/danlark1/miniselect and [here](https://drive.google.com/drive/folders/1DHEaeXgZuX6AJ9eByeZ8iQVQv0ueP8XM). [#16825](https://github.com/ClickHouse/ClickHouse/pull/16825) ([Danila Kutenin](https://github.com/danlark1)).
+* Now the `ReplicatedMergeTree` engines family uses a separate thread pool for replicated fetches. The size of the pool is limited by the setting `background_fetches_pool_size`, which can be tuned with a server restart. The default value of the setting is 3, which means that the maximum number of parallel fetches is 3 (allowing to saturate a 10G network). Fixes #520. [#16390](https://github.com/ClickHouse/ClickHouse/pull/16390) ([alesapin](https://github.com/alesapin)).
+* Fixed uncontrolled growth of the state of `quantileTDigest`. [#16680](https://github.com/ClickHouse/ClickHouse/pull/16680) ([hrissan](https://github.com/hrissan)).
+* Add `VIEW` subquery description to `EXPLAIN`. Limit push down optimisation for `VIEW`. Add local replicas of `Distributed` to query plan. [#14936](https://github.com/ClickHouse/ClickHouse/pull/14936) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Fix `optimize_read_in_order`/`optimize_aggregation_in_order` with `max_threads` > 0 and expression in `ORDER BY`. [#16637](https://github.com/ClickHouse/ClickHouse/pull/16637) ([Azat Khuzhin](https://github.com/azat)).
+* Fix performance of reading from `Merge` tables over a huge number of `MergeTree` tables. Fixes [#7748](https://github.com/ClickHouse/ClickHouse/issues/7748). [#16988](https://github.com/ClickHouse/ClickHouse/pull/16988) ([Anton Popov](https://github.com/CurtizJ)).
+* Now we can safely prune partitions with an exact match. Useful case: suppose a table is partitioned by `intHash64(x) % 100` and the query has a condition on `intHash64(x) % 100` verbatim, not on `x` (see the sketch at the end of this release's notes). [#16253](https://github.com/ClickHouse/ClickHouse/pull/16253) ([Amos Bird](https://github.com/amosbird)).
+
+#### Experimental Feature
+
+* Add `EmbeddedRocksDB` table engine (can be used for dictionaries). [#15073](https://github.com/ClickHouse/ClickHouse/pull/15073) ([sundyli](https://github.com/sundy-li)).
+
+#### Build/Testing/Packaging Improvement
+
+* Improvements in the images used for test coverage builds. [#17233](https://github.com/ClickHouse/ClickHouse/pull/17233) ([alesapin](https://github.com/alesapin)).
+* Update embedded timezone data to version 2020d (also update cctz to the latest master). [#17204](https://github.com/ClickHouse/ClickHouse/pull/17204) ([filimonov](https://github.com/filimonov)).
+* Fix UBSan report in Poco. This closes [#12719](https://github.com/ClickHouse/ClickHouse/issues/12719). [#16765](https://github.com/ClickHouse/ClickHouse/pull/16765) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Do not instrument 3rd-party libraries with UBSan. [#16764](https://github.com/ClickHouse/ClickHouse/pull/16764) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Fix UBSan report in cache dictionaries. This closes [#12641](https://github.com/ClickHouse/ClickHouse/issues/12641). [#16763](https://github.com/ClickHouse/ClickHouse/pull/16763) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Fix UBSan report when trying to convert infinite floating point number to integer. This closes [#14190](https://github.com/ClickHouse/ClickHouse/issues/14190). [#16677](https://github.com/ClickHouse/ClickHouse/pull/16677) ([alexey-milovidov](https://github.com/alexey-milovidov)).
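+
+A minimal sketch of the exact-match partition pruning described above; the table layout and values are hypothetical:
+
+```sql
+CREATE TABLE t (x UInt64, v String)
+ENGINE = MergeTree
+PARTITION BY intHash64(x) % 100
+ORDER BY x;
+
+-- The condition repeats the partition expression verbatim,
+-- so only the matching partition is read.
+SELECT count() FROM t WHERE intHash64(x) % 100 = 42;
+```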
+
+
 ## ClickHouse release 20.11
 
 ### ClickHouse release v20.11.3.3-stable, 2020-11-13
@@ -15,7 +138,7 @@
 * Restrict to use of non-comparable data types (like `AggregateFunction`) in keys (Sorting key, Primary key, Partition key, and so on). [#16601](https://github.com/ClickHouse/ClickHouse/pull/16601) ([alesapin](https://github.com/alesapin)).
 * Remove `ANALYZE` and `AST` queries, and make the setting `enable_debug_queries` obsolete since now it is the part of full featured `EXPLAIN` query. [#16536](https://github.com/ClickHouse/ClickHouse/pull/16536) ([Ivan](https://github.com/abyss7)).
 * Aggregate functions `boundingRatio`, `rankCorr`, `retention`, `timeSeriesGroupSum`, `timeSeriesGroupRateSum`, `windowFunnel` were erroneously made case-insensitive. Now their names are made case sensitive as designed. Only functions that are specified in SQL standard or made for compatibility with other DBMS or functions similar to those should be case-insensitive. [#16407](https://github.com/ClickHouse/ClickHouse/pull/16407) ([alexey-milovidov](https://github.com/alexey-milovidov)).
-* Make `rankCorr` function return nan on insufficient data https://github.com/ClickHouse/ClickHouse/issues/16124. [#16135](https://github.com/ClickHouse/ClickHouse/pull/16135) ([hexiaoting](https://github.com/hexiaoting)).
+* Make `rankCorr` function return nan on insufficient data [#16124](https://github.com/ClickHouse/ClickHouse/issues/16124). [#16135](https://github.com/ClickHouse/ClickHouse/pull/16135) ([hexiaoting](https://github.com/hexiaoting)).
 * When upgrading from versions older than 20.5, if rolling update is performed and cluster contains both versions 20.5 or greater and less than 20.5, if ClickHouse nodes with old versions are restarted and old version has been started up in presence of newer versions, it may lead to `Part ... intersects previous part` errors. To prevent this error, first install newer clickhouse-server packages on all cluster nodes and then do restarts (so, when clickhouse-server is restarted, it will start up with the new version).
 
 #### New Feature
@@ -33,7 +156,7 @@
 * Now we can provide identifiers via query parameters. And these parameters can be used as table objects or columns. [#16594](https://github.com/ClickHouse/ClickHouse/pull/16594) ([Amos Bird](https://github.com/amosbird)).
 * Added big integers (UInt256, Int128, Int256) and UUID data types support for MergeTree BloomFilter index. Big integers is an experimental feature. [#16642](https://github.com/ClickHouse/ClickHouse/pull/16642) ([Maksim Kita](https://github.com/kitaisreal)).
 * Add `farmFingerprint64` function (non-cryptographic string hashing). [#16570](https://github.com/ClickHouse/ClickHouse/pull/16570) ([Jacob Hayes](https://github.com/JacobHayes)).
-* Add `log_queries_min_query_duration_ms`, only queries slower then the value of this setting will go to `query_log`/`query_thread_log` (i.e. something like `slow_query_log` in mysql). [#16529](https://github.com/ClickHouse/ClickHouse/pull/16529) ([Azat Khuzhin](https://github.com/azat)).
+* Add `log_queries_min_query_duration_ms`, only queries slower than the value of this setting will go to `query_log`/`query_thread_log` (i.e. something like `slow_query_log` in mysql). [#16529](https://github.com/ClickHouse/ClickHouse/pull/16529) ([Azat Khuzhin](https://github.com/azat)).
 * Ability to create a docker image on the top of `Alpine`. Uses precompiled binary and glibc components from ubuntu 20.04. [#16479](https://github.com/ClickHouse/ClickHouse/pull/16479) ([filimonov](https://github.com/filimonov)).
 * Added `toUUIDOrNull`, `toUUIDOrZero` cast functions. [#16337](https://github.com/ClickHouse/ClickHouse/pull/16337) ([Maksim Kita](https://github.com/kitaisreal)).
 * Add `max_concurrent_queries_for_all_users` setting, see [#6636](https://github.com/ClickHouse/ClickHouse/issues/6636) for use cases. [#16154](https://github.com/ClickHouse/ClickHouse/pull/16154) ([nvartolomei](https://github.com/nvartolomei)).
@@ -178,7 +301,7 @@
 * Add `JSONStrings` format which output data in arrays of strings. [#14333](https://github.com/ClickHouse/ClickHouse/pull/14333) ([hcz](https://github.com/hczhcz)).
 * Add support for "Raw" column format for `Regexp` format. It allows to simply extract subpatterns as a whole without any escaping rules. [#15363](https://github.com/ClickHouse/ClickHouse/pull/15363) ([alexey-milovidov](https://github.com/alexey-milovidov)).
 * Allow configurable `NULL` representation for `TSV` output format. It is controlled by the setting `output_format_tsv_null_representation` which is `\N` by default. This closes [#9375](https://github.com/ClickHouse/ClickHouse/issues/9375). Note that the setting only controls output format and `\N` is the only supported `NULL` representation for `TSV` input format. [#14586](https://github.com/ClickHouse/ClickHouse/pull/14586) ([Kruglov Pavel](https://github.com/Avogar)).
-* Support Decimal data type for `MaterializedMySQL`. `MaterializedMySQL` is an experimental feature. [#14535](https://github.com/ClickHouse/ClickHouse/pull/14535) ([Winter Zhang](https://github.com/zhang2014)).
+* Support Decimal data type for `MaterializeMySQL`. `MaterializeMySQL` is an experimental feature. [#14535](https://github.com/ClickHouse/ClickHouse/pull/14535) ([Winter Zhang](https://github.com/zhang2014)).
 * Add new feature: `SHOW DATABASES LIKE 'xxx'`. [#14521](https://github.com/ClickHouse/ClickHouse/pull/14521) ([hexiaoting](https://github.com/hexiaoting)).
 * Added a script to import (arbitrary) git repository to ClickHouse as a sample dataset. [#14471](https://github.com/ClickHouse/ClickHouse/pull/14471) ([alexey-milovidov](https://github.com/alexey-milovidov)).
 * Now insert statements can have asterisk (or variants) with column transformers in the column list. [#14453](https://github.com/ClickHouse/ClickHouse/pull/14453) ([Amos Bird](https://github.com/amosbird)).
@@ -200,18 +323,18 @@
 * Fix a very wrong code in TwoLevelStringHashTable implementation, which might lead to memory leak. [#16264](https://github.com/ClickHouse/ClickHouse/pull/16264) ([Amos Bird](https://github.com/amosbird)).
 * Fix segfault in some cases of wrong aggregation in lambdas. [#16082](https://github.com/ClickHouse/ClickHouse/pull/16082) ([Anton Popov](https://github.com/CurtizJ)).
 * Fix `ALTER MODIFY ... ORDER BY` query hang for `ReplicatedVersionedCollapsingMergeTree`. This fixes [#15980](https://github.com/ClickHouse/ClickHouse/issues/15980). [#16011](https://github.com/ClickHouse/ClickHouse/pull/16011) ([alesapin](https://github.com/alesapin)).
-* `MaterializedMySQL` (experimental feature): Fix collate name & charset name parser and support `length = 0` for string type. [#16008](https://github.com/ClickHouse/ClickHouse/pull/16008) ([Winter Zhang](https://github.com/zhang2014)).
+* `MaterializeMySQL` (experimental feature): Fix collate name & charset name parser and support `length = 0` for string type. [#16008](https://github.com/ClickHouse/ClickHouse/pull/16008) ([Winter Zhang](https://github.com/zhang2014)).
 * Allow to use `direct` layout for dictionaries with complex keys. [#16007](https://github.com/ClickHouse/ClickHouse/pull/16007) ([Anton Popov](https://github.com/CurtizJ)).
 * Prevent replica hang for 5-10 mins when replication error happens after a period of inactivity. [#15987](https://github.com/ClickHouse/ClickHouse/pull/15987) ([filimonov](https://github.com/filimonov)).
 * Fix rare segfaults when inserting into or selecting from MaterializedView and concurrently dropping target table (for Atomic database engine). [#15984](https://github.com/ClickHouse/ClickHouse/pull/15984) ([tavplubix](https://github.com/tavplubix)).
 * Fix ambiguity in parsing of settings profiles: `CREATE USER ... SETTINGS profile readonly` is now considered as using a profile named `readonly`, not a setting named `profile` with the readonly constraint. This fixes [#15628](https://github.com/ClickHouse/ClickHouse/issues/15628). [#15982](https://github.com/ClickHouse/ClickHouse/pull/15982) ([Vitaly Baranov](https://github.com/vitlibar)).
-* `MaterializedMySQL` (experimental feature): Fix crash on create database failure. [#15954](https://github.com/ClickHouse/ClickHouse/pull/15954) ([Winter Zhang](https://github.com/zhang2014)).
+* `MaterializeMySQL` (experimental feature): Fix crash on create database failure. [#15954](https://github.com/ClickHouse/ClickHouse/pull/15954) ([Winter Zhang](https://github.com/zhang2014)).
 * Fixed `DROP TABLE IF EXISTS` failure with `Table ... doesn't exist` error when table is concurrently renamed (for Atomic database engine). Fixed rare deadlock when concurrently executing some DDL queries with multiple tables (like `DROP DATABASE` and `RENAME TABLE`) - Fixed `DROP/DETACH DATABASE` failure with `Table ... doesn't exist` when concurrently executing `DROP/DETACH TABLE`. [#15934](https://github.com/ClickHouse/ClickHouse/pull/15934) ([tavplubix](https://github.com/tavplubix)).
 * Fix incorrect empty result for query from `Distributed` table if query has `WHERE`, `PREWHERE` and `GLOBAL IN`. Fixes [#15792](https://github.com/ClickHouse/ClickHouse/issues/15792). [#15933](https://github.com/ClickHouse/ClickHouse/pull/15933) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
 * Fixes [#12513](https://github.com/ClickHouse/ClickHouse/issues/12513): difference expressions with same alias when query is reanalyzed. [#15886](https://github.com/ClickHouse/ClickHouse/pull/15886) ([Winter Zhang](https://github.com/zhang2014)).
 * Fix possible very rare deadlocks in RBAC implementation. [#15875](https://github.com/ClickHouse/ClickHouse/pull/15875) ([Vitaly Baranov](https://github.com/vitlibar)).
 * Fix exception `Block structure mismatch` in `SELECT ... ORDER BY DESC` queries which were executed after `ALTER MODIFY COLUMN` query. Fixes [#15800](https://github.com/ClickHouse/ClickHouse/issues/15800). [#15852](https://github.com/ClickHouse/ClickHouse/pull/15852) ([alesapin](https://github.com/alesapin)).
-* `MaterializedMySQL` (experimental feature): Fix `select count()` inaccuracy. [#15767](https://github.com/ClickHouse/ClickHouse/pull/15767) ([tavplubix](https://github.com/tavplubix)).
+* `MaterializeMySQL` (experimental feature): Fix `select count()` inaccuracy. [#15767](https://github.com/ClickHouse/ClickHouse/pull/15767) ([tavplubix](https://github.com/tavplubix)).
 * Fix some cases of queries, in which only virtual columns are selected. Previously `Not found column _nothing in block` exception may be thrown. Fixes [#12298](https://github.com/ClickHouse/ClickHouse/issues/12298). [#15756](https://github.com/ClickHouse/ClickHouse/pull/15756) ([Anton Popov](https://github.com/CurtizJ)).
 * Fix drop of materialized view with inner table in Atomic database (hangs all subsequent DROP TABLE due to hang of the worker thread, due to recursive DROP TABLE for inner table of MV). [#15743](https://github.com/ClickHouse/ClickHouse/pull/15743) ([Azat Khuzhin](https://github.com/azat)).
 * Possibility to move part to another disk/volume if the first attempt was failed. [#15723](https://github.com/ClickHouse/ClickHouse/pull/15723) ([Pavel Kovalenko](https://github.com/Jokser)).
@@ -243,37 +366,37 @@
 * Fix hang of queries with a lot of subqueries to same table of `MySQL` engine. Previously, if there were more than 16 subqueries to same `MySQL` table in query, it hang forever. [#15299](https://github.com/ClickHouse/ClickHouse/pull/15299) ([Anton Popov](https://github.com/CurtizJ)).
 * Fix MSan report in QueryLog. Uninitialized memory can be used for the field `memory_usage`. [#15258](https://github.com/ClickHouse/ClickHouse/pull/15258) ([alexey-milovidov](https://github.com/alexey-milovidov)).
 * Fix 'Unknown identifier' in GROUP BY when query has JOIN over Merge table. [#15242](https://github.com/ClickHouse/ClickHouse/pull/15242) ([Artem Zuikov](https://github.com/4ertus2)).
-* Fix instance crash when using `joinGet` with `LowCardinality` types. This fixes https://github.com/ClickHouse/ClickHouse/issues/15214. [#15220](https://github.com/ClickHouse/ClickHouse/pull/15220) ([Amos Bird](https://github.com/amosbird)).
+* Fix instance crash when using `joinGet` with `LowCardinality` types. This fixes [#15214](https://github.com/ClickHouse/ClickHouse/issues/15214). [#15220](https://github.com/ClickHouse/ClickHouse/pull/15220) ([Amos Bird](https://github.com/amosbird)).
 * Fix bug in table engine `Buffer` which doesn't allow to insert data of new structure into `Buffer` after `ALTER` query. Fixes [#15117](https://github.com/ClickHouse/ClickHouse/issues/15117). [#15192](https://github.com/ClickHouse/ClickHouse/pull/15192) ([alesapin](https://github.com/alesapin)).
 * Adjust Decimal field size in MySQL column definition packet. [#15152](https://github.com/ClickHouse/ClickHouse/pull/15152) ([maqroll](https://github.com/maqroll)).
 * Fixes `Data compressed with different methods` in `join_algorithm='auto'`. Keep LowCardinality as type for left table join key in `join_algorithm='partial_merge'`. [#15088](https://github.com/ClickHouse/ClickHouse/pull/15088) ([Artem Zuikov](https://github.com/4ertus2)).
 * Update `jemalloc` to fix `percpu_arena` with affinity mask. [#15035](https://github.com/ClickHouse/ClickHouse/pull/15035) ([Azat Khuzhin](https://github.com/azat)). [#14957](https://github.com/ClickHouse/ClickHouse/pull/14957) ([Azat Khuzhin](https://github.com/azat)).
-* We already use padded comparison between String and FixedString (https://github.com/ClickHouse/ClickHouse/blob/master/src/Functions/FunctionsComparison.h#L333). This PR applies the same logic to field comparison which corrects the usage of FixedString as primary keys. This fixes https://github.com/ClickHouse/ClickHouse/issues/14908. [#15033](https://github.com/ClickHouse/ClickHouse/pull/15033) ([Amos Bird](https://github.com/amosbird)).
+* We already use padded comparison between String and FixedString (https://github.com/ClickHouse/ClickHouse/blob/master/src/Functions/FunctionsComparison.h#L333). This PR applies the same logic to field comparison which corrects the usage of FixedString as primary keys. This fixes [#14908](https://github.com/ClickHouse/ClickHouse/issues/14908). [#15033](https://github.com/ClickHouse/ClickHouse/pull/15033) ([Amos Bird](https://github.com/amosbird)).
 * If function `bar` was called with specifically crafted arguments, buffer overflow was possible. This closes [#13926](https://github.com/ClickHouse/ClickHouse/issues/13926). [#15028](https://github.com/ClickHouse/ClickHouse/pull/15028) ([alexey-milovidov](https://github.com/alexey-milovidov)).
 * Fixed `Cannot rename ... errno: 22, strerror: Invalid argument` error on DDL query execution in Atomic database when running clickhouse-server in Docker on Mac OS. [#15024](https://github.com/ClickHouse/ClickHouse/pull/15024) ([tavplubix](https://github.com/tavplubix)).
 * Fix crash in RIGHT or FULL JOIN with join_algorith='auto' when memory limit exceeded and we should change HashJoin with MergeJoin. [#15002](https://github.com/ClickHouse/ClickHouse/pull/15002) ([Artem Zuikov](https://github.com/4ertus2)).
 * Now settings `number_of_free_entries_in_pool_to_execute_mutation` and `number_of_free_entries_in_pool_to_lower_max_size_of_merge` can be equal to `background_pool_size`. [#14975](https://github.com/ClickHouse/ClickHouse/pull/14975) ([alesapin](https://github.com/alesapin)).
 * Fix to make predicate push down work when subquery contains `finalizeAggregation` function. Fixes [#14847](https://github.com/ClickHouse/ClickHouse/issues/14847). [#14937](https://github.com/ClickHouse/ClickHouse/pull/14937) ([filimonov](https://github.com/filimonov)).
-* Publish CPU frequencies per logical core in `system.asynchronous_metrics`. This fixes https://github.com/ClickHouse/ClickHouse/issues/14923. [#14924](https://github.com/ClickHouse/ClickHouse/pull/14924) ([Alexander Kuzmenkov](https://github.com/akuzm)).
+* Publish CPU frequencies per logical core in `system.asynchronous_metrics`. This fixes [#14923](https://github.com/ClickHouse/ClickHouse/issues/14923). [#14924](https://github.com/ClickHouse/ClickHouse/pull/14924) ([Alexander Kuzmenkov](https://github.com/akuzm)).
-* `MaterializedMySQL` (experimental feature): Fixed `.metadata.tmp File exists` error. [#14898](https://github.com/ClickHouse/ClickHouse/pull/14898) ([Winter Zhang](https://github.com/zhang2014)).
+* `MaterializeMySQL` (experimental feature): Fixed `.metadata.tmp File exists` error. [#14898](https://github.com/ClickHouse/ClickHouse/pull/14898) ([Winter Zhang](https://github.com/zhang2014)).
 * Fix the issue when some invocations of `extractAllGroups` function may trigger "Memory limit exceeded" error. This fixes [#13383](https://github.com/ClickHouse/ClickHouse/issues/13383). [#14889](https://github.com/ClickHouse/ClickHouse/pull/14889) ([alexey-milovidov](https://github.com/alexey-milovidov)).
 * Fix SIGSEGV for an attempt to INSERT into StorageFile with file descriptor. [#14887](https://github.com/ClickHouse/ClickHouse/pull/14887) ([Azat Khuzhin](https://github.com/azat)).
 * Fixed segfault in `cache` dictionary [#14837](https://github.com/ClickHouse/ClickHouse/issues/14837). [#14879](https://github.com/ClickHouse/ClickHouse/pull/14879) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
-* `MaterializedMySQL` (experimental feature): Fixed bug in parsing MySQL binlog events, which causes `Attempt to read after eof` and `Packet payload is not fully read` in `MaterializeMySQL` database engine. [#14852](https://github.com/ClickHouse/ClickHouse/pull/14852) ([Winter Zhang](https://github.com/zhang2014)).
+* `MaterializeMySQL` (experimental feature): Fixed bug in parsing MySQL binlog events, which causes `Attempt to read after eof` and `Packet payload is not fully read` in `MaterializeMySQL` database engine. [#14852](https://github.com/ClickHouse/ClickHouse/pull/14852) ([Winter Zhang](https://github.com/zhang2014)).
 * Fix rare error in `SELECT` queries when the queried column has `DEFAULT` expression which depends on the other column which also has `DEFAULT` and not present in select query and not exists on disk. Partially fixes [#14531](https://github.com/ClickHouse/ClickHouse/issues/14531). [#14845](https://github.com/ClickHouse/ClickHouse/pull/14845) ([alesapin](https://github.com/alesapin)).
 * Fix a problem where the server may get stuck on startup while talking to ZooKeeper, if the configuration files have to be fetched from ZK (using the `from_zk` include option). This fixes [#14814](https://github.com/ClickHouse/ClickHouse/issues/14814). [#14843](https://github.com/ClickHouse/ClickHouse/pull/14843) ([Alexander Kuzmenkov](https://github.com/akuzm)).
 * Fix wrong monotonicity detection for shrunk `Int -> Int` cast of signed types. It might lead to incorrect query result. This bug is unveiled in [#14513](https://github.com/ClickHouse/ClickHouse/issues/14513). [#14783](https://github.com/ClickHouse/ClickHouse/pull/14783) ([Amos Bird](https://github.com/amosbird)).
-* `Replace` column transformer should replace identifiers with cloned ASTs. This fixes https://github.com/ClickHouse/ClickHouse/issues/14695 . [#14734](https://github.com/ClickHouse/ClickHouse/pull/14734) ([Amos Bird](https://github.com/amosbird)).
+* `Replace` column transformer should replace identifiers with cloned ASTs. This fixes [#14695](https://github.com/ClickHouse/ClickHouse/issues/14695) . [#14734](https://github.com/ClickHouse/ClickHouse/pull/14734) ([Amos Bird](https://github.com/amosbird)).
 * Fixed missed default database name in metadata of materialized view when executing `ALTER ... MODIFY QUERY`. [#14664](https://github.com/ClickHouse/ClickHouse/pull/14664) ([tavplubix](https://github.com/tavplubix)).
 * Fix bug when `ALTER UPDATE` mutation with `Nullable` column in assignment expression and constant value (like `UPDATE x = 42`) leads to incorrect value in column or segfault. Fixes [#13634](https://github.com/ClickHouse/ClickHouse/issues/13634), [#14045](https://github.com/ClickHouse/ClickHouse/issues/14045). [#14646](https://github.com/ClickHouse/ClickHouse/pull/14646) ([alesapin](https://github.com/alesapin)).
 * Fix wrong Decimal multiplication result caused wrong decimal scale of result column. [#14603](https://github.com/ClickHouse/ClickHouse/pull/14603) ([Artem Zuikov](https://github.com/4ertus2)).
 * Fix function `has` with `LowCardinality` of `Nullable`. [#14591](https://github.com/ClickHouse/ClickHouse/pull/14591) ([Mike](https://github.com/myrrc)).
 * Cleanup data directory after Zookeeper exceptions during CreateQuery for StorageReplicatedMergeTree Engine. [#14563](https://github.com/ClickHouse/ClickHouse/pull/14563) ([Bharat Nallan](https://github.com/bharatnc)).
 * Fix rare segfaults in functions with combinator `-Resample`, which could appear in result of overflow with very large parameters. [#14562](https://github.com/ClickHouse/ClickHouse/pull/14562) ([Anton Popov](https://github.com/CurtizJ)).
-* Fix a bug when converting `Nullable(String)` to Enum. Introduced by https://github.com/ClickHouse/ClickHouse/pull/12745. This fixes https://github.com/ClickHouse/ClickHouse/issues/14435. [#14530](https://github.com/ClickHouse/ClickHouse/pull/14530) ([Amos Bird](https://github.com/amosbird)).
+* Fix a bug when converting `Nullable(String)` to Enum. Introduced by [#12745](https://github.com/ClickHouse/ClickHouse/pull/12745). This fixes [#14435](https://github.com/ClickHouse/ClickHouse/issues/14435). [#14530](https://github.com/ClickHouse/ClickHouse/pull/14530) ([Amos Bird](https://github.com/amosbird)).
 * Fixed the incorrect sorting order of `Nullable` column. This fixes [#14344](https://github.com/ClickHouse/ClickHouse/issues/14344). [#14495](https://github.com/ClickHouse/ClickHouse/pull/14495) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
 * Fix `currentDatabase()` function cannot be used in `ON CLUSTER` ddl query. [#14211](https://github.com/ClickHouse/ClickHouse/pull/14211) ([Winter Zhang](https://github.com/zhang2014)).
-* `MaterializedMySQL` (experimental feature): Fixed `Packet payload is not fully read` error in `MaterializeMySQL` database engine. [#14696](https://github.com/ClickHouse/ClickHouse/pull/14696) ([BohuTANG](https://github.com/BohuTANG)).
+* `MaterializeMySQL` (experimental feature): Fixed `Packet payload is not fully read` error in `MaterializeMySQL` database engine. [#14696](https://github.com/ClickHouse/ClickHouse/pull/14696) ([BohuTANG](https://github.com/BohuTANG)).
 
 #### Improvement
@@ -308,7 +431,7 @@
 * Add an option to skip access checks for `DiskS3`. `s3` disk is an experimental feature. [#14497](https://github.com/ClickHouse/ClickHouse/pull/14497) ([Pavel Kovalenko](https://github.com/Jokser)).
 * Speed up server shutdown process if there are ongoing S3 requests. [#14496](https://github.com/ClickHouse/ClickHouse/pull/14496) ([Pavel Kovalenko](https://github.com/Jokser)).
 * `SYSTEM RELOAD CONFIG` now throws an exception if failed to reload and continues using the previous users.xml. The background periodic reloading also continues using the previous users.xml if failed to reload. [#14492](https://github.com/ClickHouse/ClickHouse/pull/14492) ([Vitaly Baranov](https://github.com/vitlibar)).
-* For INSERTs with inline data in VALUES format in the script mode of `clickhouse-client`, support semicolon as the data terminator, in addition to the new line. Closes https://github.com/ClickHouse/ClickHouse/issues/12288. [#13192](https://github.com/ClickHouse/ClickHouse/pull/13192) ([Alexander Kuzmenkov](https://github.com/akuzm)).
+* For INSERTs with inline data in VALUES format in the script mode of `clickhouse-client`, support semicolon as the data terminator, in addition to the new line. Closes [#12288](https://github.com/ClickHouse/ClickHouse/issues/12288). [#13192](https://github.com/ClickHouse/ClickHouse/pull/13192) ([Alexander Kuzmenkov](https://github.com/akuzm)).
 * Support custom codecs in compact parts. [#12183](https://github.com/ClickHouse/ClickHouse/pull/12183) ([Anton Popov](https://github.com/CurtizJ)).
 
 #### Performance Improvement
@@ -320,7 +443,7 @@
 * Improve performance of 256-bit types using (u)int64_t as base type for wide integers. Original wide integers use 8-bit types as base. [#14859](https://github.com/ClickHouse/ClickHouse/pull/14859) ([Artem Zuikov](https://github.com/4ertus2)).
 * Explicitly use a temporary disk to store vertical merge temporary data. [#15639](https://github.com/ClickHouse/ClickHouse/pull/15639) ([Grigory Pervakov](https://github.com/GrigoryPervakov)).
 * Use one S3 DeleteObjects request instead of multiple DeleteObject in a loop. No any functionality changes, so covered by existing tests like integration/test_log_family_s3. [#15238](https://github.com/ClickHouse/ClickHouse/pull/15238) ([ianton-ru](https://github.com/ianton-ru)).
-* Fix `DateTime DateTime` mistakenly choosing the slow generic implementation. This fixes https://github.com/ClickHouse/ClickHouse/issues/15153. [#15178](https://github.com/ClickHouse/ClickHouse/pull/15178) ([Amos Bird](https://github.com/amosbird)).
+* Fix `DateTime DateTime` mistakenly choosing the slow generic implementation. This fixes [#15153](https://github.com/ClickHouse/ClickHouse/issues/15153). [#15178](https://github.com/ClickHouse/ClickHouse/pull/15178) ([Amos Bird](https://github.com/amosbird)).
 * Improve performance of GROUP BY key of type `FixedString`. [#15034](https://github.com/ClickHouse/ClickHouse/pull/15034) ([Amos Bird](https://github.com/amosbird)).
 * Only `mlock` code segment when starting clickhouse-server. In previous versions, all mapped regions were locked in memory, including debug info. Debug info is usually splitted to a separate file but if it isn't, it led to +2..3 GiB memory usage. [#14929](https://github.com/ClickHouse/ClickHouse/pull/14929) ([alexey-milovidov](https://github.com/alexey-milovidov)).
 * ClickHouse binary become smaller due to link time optimization.
@@ -387,7 +510,7 @@
 * Allow to use direct layout for dictionaries with complex keys. [#16007](https://github.com/ClickHouse/ClickHouse/pull/16007) ([Anton Popov](https://github.com/CurtizJ)).
 * Prevent replica hang for 5-10 mins when replication error happens after a period of inactivity. [#15987](https://github.com/ClickHouse/ClickHouse/pull/15987) ([filimonov](https://github.com/filimonov)).
 * Fix rare segfaults when inserting into or selecting from MaterializedView and concurrently dropping target table (for Atomic database engine). [#15984](https://github.com/ClickHouse/ClickHouse/pull/15984) ([tavplubix](https://github.com/tavplubix)).
-* Fix ambiguity in parsing of settings profiles: `CREATE USER ... SETTINGS profile readonly` is now considered as using a profile named `readonly`, not a setting named `profile` with the readonly constraint. This fixes https://github.com/ClickHouse/ClickHouse/issues/15628. [#15982](https://github.com/ClickHouse/ClickHouse/pull/15982) ([Vitaly Baranov](https://github.com/vitlibar)).
+* Fix ambiguity in parsing of settings profiles: `CREATE USER ... SETTINGS profile readonly` is now considered as using a profile named `readonly`, not a setting named `profile` with the readonly constraint. This fixes [#15628](https://github.com/ClickHouse/ClickHouse/issues/15628). [#15982](https://github.com/ClickHouse/ClickHouse/pull/15982) ([Vitaly Baranov](https://github.com/vitlibar)).
 * Fix a crash when database creation fails. [#15954](https://github.com/ClickHouse/ClickHouse/pull/15954) ([Winter Zhang](https://github.com/zhang2014)).
 * Fixed `DROP TABLE IF EXISTS` failure with `Table ... doesn't exist` error when table is concurrently renamed (for Atomic database engine). Fixed rare deadlock when concurrently executing some DDL queries with multiple tables (like `DROP DATABASE` and `RENAME TABLE`) Fixed `DROP/DETACH DATABASE` failure with `Table ... doesn't exist` when concurrently executing `DROP/DETACH TABLE`. [#15934](https://github.com/ClickHouse/ClickHouse/pull/15934) ([tavplubix](https://github.com/tavplubix)).
 * Fix incorrect empty result for query from `Distributed` table if query has `WHERE`, `PREWHERE` and `GLOBAL IN`. Fixes [#15792](https://github.com/ClickHouse/ClickHouse/issues/15792). [#15933](https://github.com/ClickHouse/ClickHouse/pull/15933) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
@@ -398,7 +521,7 @@
 * Fixed too low default value of `max_replicated_logs_to_keep` setting, which might cause replicas to become lost too often. Improve lost replica recovery process by choosing the most up-to-date replica to clone. Also do not remove old parts from lost replica, detach them instead. [#15701](https://github.com/ClickHouse/ClickHouse/pull/15701) ([tavplubix](https://github.com/tavplubix)).
 * Fix error `Cannot add simple transform to empty Pipe` which happened while reading from `Buffer` table which has different structure than destination table. It was possible if destination table returned empty result for query. Fixes [#15529](https://github.com/ClickHouse/ClickHouse/issues/15529). [#15662](https://github.com/ClickHouse/ClickHouse/pull/15662) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
 * Fixed bug with globs in S3 table function, region from URL was not applied to S3 client configuration. [#15646](https://github.com/ClickHouse/ClickHouse/pull/15646) ([Vladimir Chebotarev](https://github.com/excitoon)).
-* Decrement the `ReadonlyReplica` metric when detaching read-only tables. This fixes https://github.com/ClickHouse/ClickHouse/issues/15598. [#15592](https://github.com/ClickHouse/ClickHouse/pull/15592) ([sundyli](https://github.com/sundy-li)).
+* Decrement the `ReadonlyReplica` metric when detaching read-only tables. This fixes [#15598](https://github.com/ClickHouse/ClickHouse/issues/15598). [#15592](https://github.com/ClickHouse/ClickHouse/pull/15592) ([sundyli](https://github.com/sundy-li)).
 * Throw an error when a single parameter is passed to ReplicatedMergeTree instead of ignoring it. [#15516](https://github.com/ClickHouse/ClickHouse/pull/15516) ([nvartolomei](https://github.com/nvartolomei)).
 
 #### Improvement
@@ -422,11 +545,11 @@
 * Fix `Missing columns` errors when selecting columns which absent in data, but depend on other columns which also absent in data. Fixes [#15530](https://github.com/ClickHouse/ClickHouse/issues/15530). [#15532](https://github.com/ClickHouse/ClickHouse/pull/15532) ([alesapin](https://github.com/alesapin)).
 * Fix bug with event subscription in DDLWorker which rarely may lead to query hangs in `ON CLUSTER`. Introduced in [#13450](https://github.com/ClickHouse/ClickHouse/issues/13450). [#15477](https://github.com/ClickHouse/ClickHouse/pull/15477) ([alesapin](https://github.com/alesapin)).
 * Report proper error when the second argument of `boundingRatio` aggregate function has a wrong type. [#15407](https://github.com/ClickHouse/ClickHouse/pull/15407) ([detailyang](https://github.com/detailyang)).
-* Fix bug where queries like SELECT toStartOfDay(today()) fail complaining about empty time_zone argument. [#15319](https://github.com/ClickHouse/ClickHouse/pull/15319) ([Bharat Nallan](https://github.com/bharatnc)).
+* Fix bug where queries like `SELECT toStartOfDay(today())` fail complaining about empty time_zone argument. [#15319](https://github.com/ClickHouse/ClickHouse/pull/15319) ([Bharat Nallan](https://github.com/bharatnc)).
 * Fix race condition during MergeTree table rename and background cleanup. [#15304](https://github.com/ClickHouse/ClickHouse/pull/15304) ([alesapin](https://github.com/alesapin)).
 * Fix rare race condition on server startup when system.logs are enabled. [#15300](https://github.com/ClickHouse/ClickHouse/pull/15300) ([alesapin](https://github.com/alesapin)).
 * Fix MSan report in QueryLog. Uninitialized memory can be used for the field `memory_usage`. [#15258](https://github.com/ClickHouse/ClickHouse/pull/15258) ([alexey-milovidov](https://github.com/alexey-milovidov)).
-* Fix instance crash when using joinGet with LowCardinality types. This fixes https://github.com/ClickHouse/ClickHouse/issues/15214. [#15220](https://github.com/ClickHouse/ClickHouse/pull/15220) ([Amos Bird](https://github.com/amosbird)).
+* Fix instance crash when using joinGet with LowCardinality types. This fixes [#15214](https://github.com/ClickHouse/ClickHouse/issues/15214). [#15220](https://github.com/ClickHouse/ClickHouse/pull/15220) ([Amos Bird](https://github.com/amosbird)).
 * Fix bug in table engine `Buffer` which doesn't allow to insert data of new structure into `Buffer` after `ALTER` query. Fixes [#15117](https://github.com/ClickHouse/ClickHouse/issues/15117). [#15192](https://github.com/ClickHouse/ClickHouse/pull/15192) ([alesapin](https://github.com/alesapin)).
 * Adjust decimals field size in mysql column definition packet. [#15152](https://github.com/ClickHouse/ClickHouse/pull/15152) ([maqroll](https://github.com/maqroll)).
 * Fixed `Cannot rename ... errno: 22, strerror: Invalid argument` error on DDL query execution in Atomic database when running clickhouse-server in docker on Mac OS. [#15024](https://github.com/ClickHouse/ClickHouse/pull/15024) ([tavplubix](https://github.com/tavplubix)).
@@ -455,10 +578,10 @@
 * Fix bug when `ALTER UPDATE` mutation with Nullable column in assignment expression and constant value (like `UPDATE x = 42`) leads to incorrect value in column or segfault. Fixes [#13634](https://github.com/ClickHouse/ClickHouse/issues/13634), [#14045](https://github.com/ClickHouse/ClickHouse/issues/14045). [#14646](https://github.com/ClickHouse/ClickHouse/pull/14646) ([alesapin](https://github.com/alesapin)).
 * Fix wrong Decimal multiplication result caused wrong decimal scale of result column. [#14603](https://github.com/ClickHouse/ClickHouse/pull/14603) ([Artem Zuikov](https://github.com/4ertus2)).
 * Fixed the incorrect sorting order of `Nullable` column. This fixes [#14344](https://github.com/ClickHouse/ClickHouse/issues/14344). [#14495](https://github.com/ClickHouse/ClickHouse/pull/14495) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
-* Fixed inconsistent comparison with primary key of type `FixedString` on index analysis if they're compered with a string of less size. This fixes https://github.com/ClickHouse/ClickHouse/issues/14908. [#15033](https://github.com/ClickHouse/ClickHouse/pull/15033) ([Amos Bird](https://github.com/amosbird)).
+* Fixed inconsistent comparison with primary key of type `FixedString` on index analysis if they're compared with a string of smaller size. This fixes [#14908](https://github.com/ClickHouse/ClickHouse/issues/14908). [#15033](https://github.com/ClickHouse/ClickHouse/pull/15033) ([Amos Bird](https://github.com/amosbird)).
 * Fix bug which leads to wrong merges assignment if table has partitions with a single part. [#14444](https://github.com/ClickHouse/ClickHouse/pull/14444) ([alesapin](https://github.com/alesapin)).
* If function `bar` was called with specifically crafted arguments, buffer overflow was possible. This closes [#13926](https://github.com/ClickHouse/ClickHouse/issues/13926). [#15028](https://github.com/ClickHouse/ClickHouse/pull/15028) ([alexey-milovidov](https://github.com/alexey-milovidov)). -* Publish CPU frequencies per logical core in `system.asynchronous_metrics`. This fixes https://github.com/ClickHouse/ClickHouse/issues/14923. [#14924](https://github.com/ClickHouse/ClickHouse/pull/14924) ([Alexander Kuzmenkov](https://github.com/akuzm)). +* Publish CPU frequencies per logical core in `system.asynchronous_metrics`. This fixes [#14923](https://github.com/ClickHouse/ClickHouse/issues/14923). [#14924](https://github.com/ClickHouse/ClickHouse/pull/14924) ([Alexander Kuzmenkov](https://github.com/akuzm)). * Fixed `.metadata.tmp File exists` error when using `MaterializeMySQL` database engine. [#14898](https://github.com/ClickHouse/ClickHouse/pull/14898) ([Winter Zhang](https://github.com/zhang2014)). * Fix the issue when some invocations of `extractAllGroups` function may trigger "Memory limit exceeded" error. This fixes [#13383](https://github.com/ClickHouse/ClickHouse/issues/13383). [#14889](https://github.com/ClickHouse/ClickHouse/pull/14889) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fix SIGSEGV for an attempt to INSERT into StorageFile(fd). [#14887](https://github.com/ClickHouse/ClickHouse/pull/14887) ([Azat Khuzhin](https://github.com/azat)). @@ -501,7 +624,7 @@ #### Performance Improvement -* Optimize queries with LIMIT/LIMIT BY/ORDER BY for distributed with GROUP BY sharding_key (under optimize_skip_unused_shards and optimize_distributed_group_by_sharding_key). [#10373](https://github.com/ClickHouse/ClickHouse/pull/10373) ([Azat Khuzhin](https://github.com/azat)). +* Optimize queries with LIMIT/LIMIT BY/ORDER BY for distributed with GROUP BY sharding_key (under `optimize_skip_unused_shards` and `optimize_distributed_group_by_sharding_key`). [#10373](https://github.com/ClickHouse/ClickHouse/pull/10373) ([Azat Khuzhin](https://github.com/azat)). * Creating sets for multiple `JOIN` and `IN` in parallel. It may slightly improve performance for queries with several different `IN subquery` expressions. [#14412](https://github.com/ClickHouse/ClickHouse/pull/14412) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). * Improve Kafka engine performance by providing independent thread for each consumer. Separate thread pool for streaming engines (like Kafka). [#13939](https://github.com/ClickHouse/ClickHouse/pull/13939) ([fastio](https://github.com/fastio)). @@ -579,15 +702,15 @@ * Fix race condition during MergeTree table rename and background cleanup. [#15304](https://github.com/ClickHouse/ClickHouse/pull/15304) ([alesapin](https://github.com/alesapin)). * Fix rare race condition on server startup when system.logs are enabled. [#15300](https://github.com/ClickHouse/ClickHouse/pull/15300) ([alesapin](https://github.com/alesapin)). * Fix MSan report in QueryLog. Uninitialized memory can be used for the field `memory_usage`. [#15258](https://github.com/ClickHouse/ClickHouse/pull/15258) ([alexey-milovidov](https://github.com/alexey-milovidov)). -* Fix instance crash when using joinGet with LowCardinality types. This fixes https://github.com/ClickHouse/ClickHouse/issues/15214. [#15220](https://github.com/ClickHouse/ClickHouse/pull/15220) ([Amos Bird](https://github.com/amosbird)). +* Fix instance crash when using joinGet with LowCardinality types. 
This fixes https://github.com/ClickHouse/ClickHouse/issues/15214. [#15220](https://github.com/ClickHouse/ClickHouse/pull/15220) ([Amos Bird](https://github.com/amosbird)). +* Fix instance crash when using joinGet with LowCardinality types. This fixes [#15214](https://github.com/ClickHouse/ClickHouse/issues/15214). [#15220](https://github.com/ClickHouse/ClickHouse/pull/15220) ([Amos Bird](https://github.com/amosbird)). * Fix bug in table engine `Buffer` which doesn't allow to insert data of new structure into `Buffer` after `ALTER` query. Fixes [#15117](https://github.com/ClickHouse/ClickHouse/issues/15117). [#15192](https://github.com/ClickHouse/ClickHouse/pull/15192) ([alesapin](https://github.com/alesapin)). * Adjust decimals field size in mysql column definition packet. [#15152](https://github.com/ClickHouse/ClickHouse/pull/15152) ([maqroll](https://github.com/maqroll)). -* We already use padded comparison between String and FixedString (https://github.com/ClickHouse/ClickHouse/blob/master/src/Functions/FunctionsComparison.h#L333). This PR applies the same logic to field comparison which corrects the usage of FixedString as primary keys. This fixes https://github.com/ClickHouse/ClickHouse/issues/14908. [#15033](https://github.com/ClickHouse/ClickHouse/pull/15033) ([Amos Bird](https://github.com/amosbird)). -* If function `bar` was called with specifically crafter arguments, buffer overflow was possible. This closes [#13926](https://github.com/ClickHouse/ClickHouse/issues/13926). [#15028](https://github.com/ClickHouse/ClickHouse/pull/15028) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* We already use padded comparison between String and FixedString (https://github.com/ClickHouse/ClickHouse/blob/master/src/Functions/FunctionsComparison.h#L333). This PR applies the same logic to field comparison which corrects the usage of FixedString as primary keys. This fixes [#14908](https://github.com/ClickHouse/ClickHouse/issues/14908). [#15033](https://github.com/ClickHouse/ClickHouse/pull/15033) ([Amos Bird](https://github.com/amosbird)). +* If function `bar` was called with specifically crafted arguments, buffer overflow was possible. This closes [#13926](https://github.com/ClickHouse/ClickHouse/issues/13926). [#15028](https://github.com/ClickHouse/ClickHouse/pull/15028) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fixed `Cannot rename ... errno: 22, strerror: Invalid argument` error on DDL query execution in Atomic database when running clickhouse-server in docker on Mac OS. [#15024](https://github.com/ClickHouse/ClickHouse/pull/15024) ([tavplubix](https://github.com/tavplubix)). * Now settings `number_of_free_entries_in_pool_to_execute_mutation` and `number_of_free_entries_in_pool_to_lower_max_size_of_merge` can be equal to `background_pool_size`. [#14975](https://github.com/ClickHouse/ClickHouse/pull/14975) ([alesapin](https://github.com/alesapin)). * Fix to make predicate push down work when subquery contains finalizeAggregation function. Fixes [#14847](https://github.com/ClickHouse/ClickHouse/issues/14847). [#14937](https://github.com/ClickHouse/ClickHouse/pull/14937) ([filimonov](https://github.com/filimonov)). -* Publish CPU frequencies per logical core in `system.asynchronous_metrics`. This fixes https://github.com/ClickHouse/ClickHouse/issues/14923. [#14924](https://github.com/ClickHouse/ClickHouse/pull/14924) ([Alexander Kuzmenkov](https://github.com/akuzm)). +* Publish CPU frequencies per logical core in `system.asynchronous_metrics`. This fixes [#14923](https://github.com/ClickHouse/ClickHouse/issues/14923). [#14924](https://github.com/ClickHouse/ClickHouse/pull/14924) ([Alexander Kuzmenkov](https://github.com/akuzm)).
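The per-core frequencies from the `system.asynchronous_metrics` entry above can be inspected as below; the metric-name prefix (`CPUFrequencyMHz_*`) is an assumption, not taken from this changelog:

```sql
-- List the per-logical-core CPU frequency metrics published by the server.
SELECT metric, value
FROM system.asynchronous_metrics
WHERE metric LIKE 'CPUFrequency%'
ORDER BY metric;
```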
* Fixed `.metadata.tmp File exists` error when using `MaterializeMySQL` database engine. [#14898](https://github.com/ClickHouse/ClickHouse/pull/14898) ([Winter Zhang](https://github.com/zhang2014)). * Fix a problem where the server may get stuck on startup while talking to ZooKeeper, if the configuration files have to be fetched from ZK (using the `from_zk` include option). This fixes [#14814](https://github.com/ClickHouse/ClickHouse/issues/14814). [#14843](https://github.com/ClickHouse/ClickHouse/pull/14843) ([Alexander Kuzmenkov](https://github.com/akuzm)). * Fix wrong monotonicity detection for shrunk `Int -> Int` cast of signed types. It might lead to incorrect query result. This bug is unveiled in [#14513](https://github.com/ClickHouse/ClickHouse/issues/14513). [#14783](https://github.com/ClickHouse/ClickHouse/pull/14783) ([Amos Bird](https://github.com/amosbird)). @@ -647,16 +770,16 @@ * Fix visible data clobbering by progress bar in client in interactive mode. This fixes [#12562](https://github.com/ClickHouse/ClickHouse/issues/12562) and [#13369](https://github.com/ClickHouse/ClickHouse/issues/13369) and [#13584](https://github.com/ClickHouse/ClickHouse/issues/13584) and fixes [#12964](https://github.com/ClickHouse/ClickHouse/issues/12964). [#13691](https://github.com/ClickHouse/ClickHouse/pull/13691) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fixed incorrect sorting order of `LowCardinality` columns when sorting by multiple columns. This fixes [#13958](https://github.com/ClickHouse/ClickHouse/issues/13958). [#14223](https://github.com/ClickHouse/ClickHouse/pull/14223) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). -* Check for array size overflow in `topK` aggregate function. Without this check the user may send a query with carefully crafter parameters that will lead to server crash. This closes [#14452](https://github.com/ClickHouse/ClickHouse/issues/14452). [#14467](https://github.com/ClickHouse/ClickHouse/pull/14467) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Check for array size overflow in `topK` aggregate function. Without this check the user may send a query with carefully crafted parameters that will lead to server crash. This closes [#14452](https://github.com/ClickHouse/ClickHouse/issues/14452). [#14467](https://github.com/ClickHouse/ClickHouse/pull/14467) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fix bug which can lead to wrong merges assignment if table has partitions with a single part. [#14444](https://github.com/ClickHouse/ClickHouse/pull/14444) ([alesapin](https://github.com/alesapin)). * Stop query execution if exception happened in `PipelineExecutor` itself. This could prevent a rare possible query hang. Continuation of [#14334](https://github.com/ClickHouse/ClickHouse/issues/14334). [#14402](https://github.com/ClickHouse/ClickHouse/pull/14402) [#14334](https://github.com/ClickHouse/ClickHouse/pull/14334) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). * Fix crash during `ALTER` query for table which was created `AS table_function`. Fixes [#14212](https://github.com/ClickHouse/ClickHouse/issues/14212). [#14326](https://github.com/ClickHouse/ClickHouse/pull/14326) ([alesapin](https://github.com/alesapin)). * Fix exception during ALTER LIVE VIEW query with REFRESH command. Live view is an experimental feature. [#14320](https://github.com/ClickHouse/ClickHouse/pull/14320) ([Bharat Nallan](https://github.com/bharatnc)).
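A sketch of the `ALTER LIVE VIEW ... REFRESH` statement from the entry above; live views are experimental, and the table and view names are hypothetical:

```sql
SET allow_experimental_live_view = 1;
CREATE TABLE src (x UInt32) ENGINE = MergeTree ORDER BY x;
CREATE LIVE VIEW lv AS SELECT count() FROM src;
-- The statement that previously threw an exception:
ALTER LIVE VIEW lv REFRESH;
```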
* Fix QueryPlan lifetime (for EXPLAIN PIPELINE graph=1) for queries with nested interpreter. [#14315](https://github.com/ClickHouse/ClickHouse/pull/14315) ([Azat Khuzhin](https://github.com/azat)). -* Fix segfault in `clickhouse-odbc-bridge` during schema fetch from some external sources. This PR fixes https://github.com/ClickHouse/ClickHouse/issues/13861. [#14267](https://github.com/ClickHouse/ClickHouse/pull/14267) ([Vitaly Baranov](https://github.com/vitlibar)). -* Fix crash in mark inclusion search introduced in https://github.com/ClickHouse/ClickHouse/pull/12277. [#14225](https://github.com/ClickHouse/ClickHouse/pull/14225) ([Amos Bird](https://github.com/amosbird)). +* Fix segfault in `clickhouse-odbc-bridge` during schema fetch from some external sources. This PR fixes [#13861](https://github.com/ClickHouse/ClickHouse/issues/13861). [#14267](https://github.com/ClickHouse/ClickHouse/pull/14267) ([Vitaly Baranov](https://github.com/vitlibar)). +* Fix crash in mark inclusion search introduced in [#12277](https://github.com/ClickHouse/ClickHouse/pull/12277). [#14225](https://github.com/ClickHouse/ClickHouse/pull/14225) ([Amos Bird](https://github.com/amosbird)). * Fix creation of tables with named tuples. This fixes [#13027](https://github.com/ClickHouse/ClickHouse/issues/13027). [#14143](https://github.com/ClickHouse/ClickHouse/pull/14143) ([alexey-milovidov](https://github.com/alexey-milovidov)). -* Fix formatting of minimal negative decimal numbers. This fixes https://github.com/ClickHouse/ClickHouse/issues/14111. [#14119](https://github.com/ClickHouse/ClickHouse/pull/14119) ([Alexander Kuzmenkov](https://github.com/akuzm)). +* Fix formatting of minimal negative decimal numbers. This fixes [#14111](https://github.com/ClickHouse/ClickHouse/issues/14111). [#14119](https://github.com/ClickHouse/ClickHouse/pull/14119) ([Alexander Kuzmenkov](https://github.com/akuzm)). * Fix `DistributedFilesToInsert` metric (zeroed when it should not). [#14095](https://github.com/ClickHouse/ClickHouse/pull/14095) ([Azat Khuzhin](https://github.com/azat)). * Fix `pointInPolygon` with const 2d array as polygon. [#14079](https://github.com/ClickHouse/ClickHouse/pull/14079) ([Alexey Ilyukhov](https://github.com/livace)). * Fixed wrong mount point in extra info for `Poco::Exception: no space left on device`. [#14050](https://github.com/ClickHouse/ClickHouse/pull/14050) ([tavplubix](https://github.com/tavplubix)). @@ -685,10 +808,10 @@ * Fix wrong code in function `netloc`. This fixes [#13335](https://github.com/ClickHouse/ClickHouse/issues/13335). [#13446](https://github.com/ClickHouse/ClickHouse/pull/13446) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fix possible race in `StorageMemory`. [#13416](https://github.com/ClickHouse/ClickHouse/pull/13416) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). * Fix missing or excessive headers in `TSV/CSVWithNames` formats in HTTP protocol. This fixes [#12504](https://github.com/ClickHouse/ClickHouse/issues/12504). [#13343](https://github.com/ClickHouse/ClickHouse/pull/13343) ([Azat Khuzhin](https://github.com/azat)). -* Fix parsing row policies from users.xml when names of databases or tables contain dots. This fixes https://github.com/ClickHouse/ClickHouse/issues/5779, https://github.com/ClickHouse/ClickHouse/issues/12527. [#13199](https://github.com/ClickHouse/ClickHouse/pull/13199) ([Vitaly Baranov](https://github.com/vitlibar)). +* Fix parsing row policies from users.xml when names of databases or tables contain dots. 
This fixes [#5779](https://github.com/ClickHouse/ClickHouse/issues/5779), [#12527](https://github.com/ClickHouse/ClickHouse/issues/12527). [#13199](https://github.com/ClickHouse/ClickHouse/pull/13199) ([Vitaly Baranov](https://github.com/vitlibar)). * Fix access to `redis` dictionary after connection was dropped once. It may happen with `cache` and `direct` dictionary layouts. [#13082](https://github.com/ClickHouse/ClickHouse/pull/13082) ([Anton Popov](https://github.com/CurtizJ)). * Removed wrong auth access check when using ClickHouseDictionarySource to query remote tables. [#12756](https://github.com/ClickHouse/ClickHouse/pull/12756) ([sundyli](https://github.com/sundy-li)). -* Properly distinguish subqueries in some cases for common subexpression elimination. https://github.com/ClickHouse/ClickHouse/issues/8333. [#8367](https://github.com/ClickHouse/ClickHouse/pull/8367) ([Amos Bird](https://github.com/amosbird)). +* Properly distinguish subqueries in some cases for common subexpression elimination. [#8333](https://github.com/ClickHouse/ClickHouse/issues/8333). [#8367](https://github.com/ClickHouse/ClickHouse/pull/8367) ([Amos Bird](https://github.com/amosbird)). #### Improvement @@ -756,7 +879,7 @@ * Updating LDAP user authentication suite to check that it works with RBAC. [#13656](https://github.com/ClickHouse/ClickHouse/pull/13656) ([vzakaznikov](https://github.com/vzakaznikov)). * Removed `-DENABLE_CURL_CLIENT` for `contrib/aws`. [#13628](https://github.com/ClickHouse/ClickHouse/pull/13628) ([Vladimir Chebotarev](https://github.com/excitoon)). * Increasing health-check timeouts for ClickHouse nodes and adding support to dump docker-compose logs if unhealthy containers are found. [#13612](https://github.com/ClickHouse/ClickHouse/pull/13612) ([vzakaznikov](https://github.com/vzakaznikov)). -* Make sure https://github.com/ClickHouse/ClickHouse/issues/10977 is invalid. [#13539](https://github.com/ClickHouse/ClickHouse/pull/13539) ([Amos Bird](https://github.com/amosbird)). +* Make sure [#10977](https://github.com/ClickHouse/ClickHouse/issues/10977) is invalid. [#13539](https://github.com/ClickHouse/ClickHouse/pull/13539) ([Amos Bird](https://github.com/amosbird)). * Skip PRs from robot-clickhouse. [#13489](https://github.com/ClickHouse/ClickHouse/pull/13489) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). * Move Dockerfiles from integration tests to `docker/test` directory. docker_compose files are available in `runner` docker container. Docker images are built in CI and not in integration tests. [#13448](https://github.com/ClickHouse/ClickHouse/pull/13448) ([Ilya Yatsishin](https://github.com/qoega)). @@ -788,7 +911,7 @@ * Add `FROM_UNIXTIME` function for compatibility with MySQL, related to [12149](https://github.com/ClickHouse/ClickHouse/issues/12149). [#12484](https://github.com/ClickHouse/ClickHouse/pull/12484) ([flynn](https://github.com/ucasFL)). * Allow Nullable types as keys in MergeTree tables if `allow_nullable_key` table setting is enabled. Closes [#5319](https://github.com/ClickHouse/ClickHouse/issues/5319). [#12433](https://github.com/ClickHouse/ClickHouse/pull/12433) ([Amos Bird](https://github.com/amosbird)). * Integration with [COS](https://intl.cloud.tencent.com/product/cos). [#12386](https://github.com/ClickHouse/ClickHouse/pull/12386) ([fastio](https://github.com/fastio)). -* Add mapAdd and mapSubtract functions for adding/subtracting key-mapped values.
[#11735](https://github.com/ClickHouse/ClickHouse/pull/11735) ([Ildus Kurbangaliev](https://github.com/ildus)). +* Add `mapAdd` and `mapSubtract` functions for adding/subtracting key-mapped values. [#11735](https://github.com/ClickHouse/ClickHouse/pull/11735) ([Ildus Kurbangaliev](https://github.com/ildus)). #### Bug Fix @@ -1071,7 +1194,7 @@ * Improved performance of 'ORDER BY' and 'GROUP BY' by prefix of sorting key (enabled with `optimize_aggregation_in_order` setting, disabled by default). [#11696](https://github.com/ClickHouse/ClickHouse/pull/11696) ([Anton Popov](https://github.com/CurtizJ)). * Removed injective functions inside `uniq*()` if `set optimize_injective_functions_inside_uniq=1`. [#12337](https://github.com/ClickHouse/ClickHouse/pull/12337) ([Ruslan Kamalov](https://github.com/kamalov-ruslan)). -* Index not used for IN operator with literals", performance regression introduced around v19.3. This fixes "[#10574](https://github.com/ClickHouse/ClickHouse/issues/10574). [#12062](https://github.com/ClickHouse/ClickHouse/pull/12062) ([nvartolomei](https://github.com/nvartolomei)). +* Index not used for IN operator with literals, performance regression introduced around v19.3. This fixes [#10574](https://github.com/ClickHouse/ClickHouse/issues/10574). [#12062](https://github.com/ClickHouse/ClickHouse/pull/12062) ([nvartolomei](https://github.com/nvartolomei)). * Implemented single part uploads for DiskS3 (experimental feature). [#12026](https://github.com/ClickHouse/ClickHouse/pull/12026) ([Vladimir Chebotarev](https://github.com/excitoon)). #### Experimental Feature @@ -1133,7 +1256,7 @@ #### Performance Improvement -* Index not used for IN operator with literals", performance regression introduced around v19.3. This fixes "[#10574](https://github.com/ClickHouse/ClickHouse/issues/10574). [#12062](https://github.com/ClickHouse/ClickHouse/pull/12062) ([nvartolomei](https://github.com/nvartolomei)). +* Index not used for IN operator with literals, performance regression introduced around v19.3. This fixes [#10574](https://github.com/ClickHouse/ClickHouse/issues/10574). [#12062](https://github.com/ClickHouse/ClickHouse/pull/12062) ([nvartolomei](https://github.com/nvartolomei)). #### Build/Testing/Packaging Improvement @@ -1213,7 +1336,7 @@ * Fix wrong result of comparison of FixedString with constant String. This fixes [#11393](https://github.com/ClickHouse/ClickHouse/issues/11393). This bug appeared in version 20.4. [#11828](https://github.com/ClickHouse/ClickHouse/pull/11828) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fix wrong result for `if` with NULLs in condition. [#11807](https://github.com/ClickHouse/ClickHouse/pull/11807) ([Artem Zuikov](https://github.com/4ertus2)). * Fix using too many threads for queries. [#11788](https://github.com/ClickHouse/ClickHouse/pull/11788) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). -* Fixed `Scalar doesn't exist` exception when using `WITH ...` in `SELECT ... FROM merge_tree_table ...` https://github.com/ClickHouse/ClickHouse/issues/11621. [#11767](https://github.com/ClickHouse/ClickHouse/pull/11767) ([Amos Bird](https://github.com/amosbird)). +* Fixed `Scalar doesn't exist` exception when using `WITH ...` in `SELECT ... FROM merge_tree_table ...` [#11621](https://github.com/ClickHouse/ClickHouse/issues/11621). [#11767](https://github.com/ClickHouse/ClickHouse/pull/11767) ([Amos Bird](https://github.com/amosbird)).
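A sketch of the query shape from the `Scalar doesn't exist` entry above, assuming a hypothetical MergeTree table `t` in place of `merge_tree_table`:

```sql
CREATE TABLE t (x UInt64) ENGINE = MergeTree ORDER BY x;
-- A scalar defined in WITH and used against a MergeTree table;
-- this shape previously raised `Scalar doesn't exist`.
WITH (SELECT max(x) FROM t) AS mx
SELECT x FROM t WHERE x = mx;
```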
* Fix unexpected behaviour of queries like `SELECT *, xyz.*` which succeeded while an error was expected. [#11753](https://github.com/ClickHouse/ClickHouse/pull/11753) ([hexiaoting](https://github.com/hexiaoting)). * Now replicated fetches will be cancelled during metadata alter. [#11744](https://github.com/ClickHouse/ClickHouse/pull/11744) ([alesapin](https://github.com/alesapin)). * Parse metadata stored in zookeeper before checking for equality. [#11739](https://github.com/ClickHouse/ClickHouse/pull/11739) ([Azat Khuzhin](https://github.com/azat)). @@ -1264,8 +1387,8 @@ * Fix potential uninitialized memory in conversion. Example: `SELECT toIntervalSecond(now64())`. [#11311](https://github.com/ClickHouse/ClickHouse/pull/11311) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fix the issue when index analysis cannot work if a table has Array column in primary key and if a query is filtering by this column with `empty` or `notEmpty` functions. This fixes [#11286](https://github.com/ClickHouse/ClickHouse/issues/11286). [#11303](https://github.com/ClickHouse/ClickHouse/pull/11303) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fix bug when query speed estimation can be incorrect and the limit of `min_execution_speed` may not work or work incorrectly if the query is throttled by `max_network_bandwidth`, `max_execution_speed` or `priority` settings. Change the default value of `timeout_before_checking_execution_speed` to non-zero, because otherwise the settings `min_execution_speed` and `max_execution_speed` have no effect. This fixes [#11297](https://github.com/ClickHouse/ClickHouse/issues/11297). This fixes [#5732](https://github.com/ClickHouse/ClickHouse/issues/5732). This fixes [#6228](https://github.com/ClickHouse/ClickHouse/issues/6228). Usability improvement: avoid concatenation of exception message with progress bar in `clickhouse-client`. [#11296](https://github.com/ClickHouse/ClickHouse/pull/11296) ([alexey-milovidov](https://github.com/alexey-milovidov)). -* Fix crash when `SET DEFAULT ROLE` is called with wrong arguments. This fixes https://github.com/ClickHouse/ClickHouse/issues/10586. [#11278](https://github.com/ClickHouse/ClickHouse/pull/11278) ([Vitaly Baranov](https://github.com/vitlibar)). -* Fix crash while reading malformed data in `Protobuf` format. This fixes https://github.com/ClickHouse/ClickHouse/issues/5957, fixes https://github.com/ClickHouse/ClickHouse/issues/11203. [#11258](https://github.com/ClickHouse/ClickHouse/pull/11258) ([Vitaly Baranov](https://github.com/vitlibar)). +* Fix crash when `SET DEFAULT ROLE` is called with wrong arguments. This fixes [#10586](https://github.com/ClickHouse/ClickHouse/issues/10586). [#11278](https://github.com/ClickHouse/ClickHouse/pull/11278) ([Vitaly Baranov](https://github.com/vitlibar)). +* Fix crash while reading malformed data in `Protobuf` format. This fixes [#5957](https://github.com/ClickHouse/ClickHouse/issues/5957), fixes [#11203](https://github.com/ClickHouse/ClickHouse/issues/11203). [#11258](https://github.com/ClickHouse/ClickHouse/pull/11258) ([Vitaly Baranov](https://github.com/vitlibar)). * Fixed a bug when `cache` dictionary could return default value instead of normal (when there are only expired keys). This affects only string fields. [#11233](https://github.com/ClickHouse/ClickHouse/pull/11233) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). * Fix error `Block structure mismatch in QueryPipeline` while reading from `VIEW` with constants in inner query.
Fixes [#11181](https://github.com/ClickHouse/ClickHouse/issues/11181). [#11205](https://github.com/ClickHouse/ClickHouse/pull/11205) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). * Fix possible exception `Invalid status for associated output`. [#11200](https://github.com/ClickHouse/ClickHouse/pull/11200) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). @@ -1331,7 +1454,7 @@ * Fix error `the BloomFilter false positive must be a double number between 0 and 1` [#10551](https://github.com/ClickHouse/ClickHouse/issues/10551). [#10569](https://github.com/ClickHouse/ClickHouse/pull/10569) ([Winter Zhang](https://github.com/zhang2014)). * Fix SELECT of column ALIAS whose default expression type differs from the column type. [#10563](https://github.com/ClickHouse/ClickHouse/pull/10563) ([Azat Khuzhin](https://github.com/azat)). * Implemented comparison between DateTime64 and String values (just like for DateTime). [#10560](https://github.com/ClickHouse/ClickHouse/pull/10560) ([Vasily Nemkov](https://github.com/Enmk)). -* Fix index corruption, which may accur in some cases after merge compact parts into another compact part. [#10531](https://github.com/ClickHouse/ClickHouse/pull/10531) ([Anton Popov](https://github.com/CurtizJ)). +* Fix index corruption, which may occur in some cases after merging compact parts into another compact part. [#10531](https://github.com/ClickHouse/ClickHouse/pull/10531) ([Anton Popov](https://github.com/CurtizJ)). * Disable GROUP BY sharding_key optimization by default (`optimize_distributed_group_by_sharding_key` had been introduced and turned off by default, due to trickery of sharding_key analyzing, simple example is `if` in sharding key) and fix it for WITH ROLLUP/CUBE/TOTALS. [#10516](https://github.com/ClickHouse/ClickHouse/pull/10516) ([Azat Khuzhin](https://github.com/azat)). * Fixes: [#10263](https://github.com/ClickHouse/ClickHouse/issues/10263) (after that PR dist send via INSERT had been postponing on each INSERT) Fixes: [#8756](https://github.com/ClickHouse/ClickHouse/issues/8756) (that PR breaks distributed sends with all of the following conditions met (unlikely setup for now I guess): `internal_replication == false`, multiple local shards (activates the hardlinking code) and `distributed_storage_policy` (makes `link(2)` fail on `EXDEV`)). [#10486](https://github.com/ClickHouse/ClickHouse/pull/10486) ([Azat Khuzhin](https://github.com/azat)). * Fixed error with "max_rows_to_sort" limit. [#10268](https://github.com/ClickHouse/ClickHouse/pull/10268) ([alexey-milovidov](https://github.com/alexey-milovidov)). @@ -1488,7 +1611,7 @@ * Lower memory usage in tests. [#10617](https://github.com/ClickHouse/ClickHouse/pull/10617) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fixing hard coded timeouts in new live view tests. [#10604](https://github.com/ClickHouse/ClickHouse/pull/10604) ([vzakaznikov](https://github.com/vzakaznikov)). * Increasing timeout when opening a client in tests/queries/0_stateless/helpers/client.py. [#10599](https://github.com/ClickHouse/ClickHouse/pull/10599) ([vzakaznikov](https://github.com/vzakaznikov)). -* Enable ThinLTO for clang builds, continuation of https://github.com/ClickHouse/ClickHouse/pull/10435. [#10585](https://github.com/ClickHouse/ClickHouse/pull/10585) ([Amos Bird](https://github.com/amosbird)). +* Enable ThinLTO for clang builds, continuation of [#10435](https://github.com/ClickHouse/ClickHouse/pull/10435).
[#10585](https://github.com/ClickHouse/ClickHouse/pull/10585) ([Amos Bird](https://github.com/amosbird)). * Adding fuzzers and preparing for oss-fuzz integration. [#10546](https://github.com/ClickHouse/ClickHouse/pull/10546) ([kyprizel](https://github.com/kyprizel)). * Fix FreeBSD build. [#10150](https://github.com/ClickHouse/ClickHouse/pull/10150) ([Ivan](https://github.com/abyss7)). * Add new build for query tests using pytest framework. [#10039](https://github.com/ClickHouse/ClickHouse/pull/10039) ([Ivan](https://github.com/abyss7)). @@ -1563,7 +1686,7 @@ #### Performance Improvement -* Index not used for IN operator with literals", performance regression introduced around v19.3. This fixes "[#10574](https://github.com/ClickHouse/ClickHouse/issues/10574). [#12062](https://github.com/ClickHouse/ClickHouse/pull/12062) ([nvartolomei](https://github.com/nvartolomei)). +* Index not used for IN operator with literals, performance regression introduced around v19.3. This fixes [#10574](https://github.com/ClickHouse/ClickHouse/issues/10574). [#12062](https://github.com/ClickHouse/ClickHouse/pull/12062) ([nvartolomei](https://github.com/nvartolomei)). #### Build/Testing/Packaging Improvement @@ -1617,7 +1740,7 @@ * Fix the error `Data compressed with different methods` that can happen if `min_bytes_to_use_direct_io` is enabled and PREWHERE is active and using SAMPLE or high number of threads. This fixes [#11539](https://github.com/ClickHouse/ClickHouse/issues/11539). [#11540](https://github.com/ClickHouse/ClickHouse/pull/11540) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fix return compressed size for codecs. [#11448](https://github.com/ClickHouse/ClickHouse/pull/11448) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). * Fix server crash when a column has compression codec with non-literal arguments. Fixes [#11365](https://github.com/ClickHouse/ClickHouse/issues/11365). [#11431](https://github.com/ClickHouse/ClickHouse/pull/11431) ([alesapin](https://github.com/alesapin)). -* Fix pointInPolygon with nan as point. Fixes https://github.com/ClickHouse/ClickHouse/issues/11375. [#11421](https://github.com/ClickHouse/ClickHouse/pull/11421) ([Alexey Ilyukhov](https://github.com/livace)). +* Fix pointInPolygon with nan as point. Fixes [#11375](https://github.com/ClickHouse/ClickHouse/issues/11375). [#11421](https://github.com/ClickHouse/ClickHouse/pull/11421) ([Alexey Ilyukhov](https://github.com/livace)). * Fix potential uninitialized memory read in MergeTree shutdown if table was not created successfully. [#11420](https://github.com/ClickHouse/ClickHouse/pull/11420) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fixed geohashesInBox with arguments outside of latitude/longitude range. [#11403](https://github.com/ClickHouse/ClickHouse/pull/11403) ([Vasily Nemkov](https://github.com/Enmk)). * Fix possible `Pipeline stuck` error for queries with external sort and limit. Fixes [#11359](https://github.com/ClickHouse/ClickHouse/issues/11359). [#11366](https://github.com/ClickHouse/ClickHouse/pull/11366) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). @@ -1633,8 +1756,8 @@ * Fix potential uninitialized memory in conversion. Example: `SELECT toIntervalSecond(now64())`. [#11311](https://github.com/ClickHouse/ClickHouse/pull/11311) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fix the issue when index analysis cannot work if a table has Array column in primary key and if a query is filtering by this column with `empty` or `notEmpty` functions. 
This fixes [#11286](https://github.com/ClickHouse/ClickHouse/issues/11286). [#11303](https://github.com/ClickHouse/ClickHouse/pull/11303) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fix bug when query speed estimation can be incorrect and the limit of `min_execution_speed` may not work or work incorrectly if the query is throttled by `max_network_bandwidth`, `max_execution_speed` or `priority` settings. Change the default value of `timeout_before_checking_execution_speed` to non-zero, because otherwise the settings `min_execution_speed` and `max_execution_speed` have no effect. This fixes [#11297](https://github.com/ClickHouse/ClickHouse/issues/11297). This fixes [#5732](https://github.com/ClickHouse/ClickHouse/issues/5732). This fixes [#6228](https://github.com/ClickHouse/ClickHouse/issues/6228). Usability improvement: avoid concatenation of exception message with progress bar in `clickhouse-client`. [#11296](https://github.com/ClickHouse/ClickHouse/pull/11296) ([alexey-milovidov](https://github.com/alexey-milovidov)). -* Fix crash when SET DEFAULT ROLE is called with wrong arguments. This fixes https://github.com/ClickHouse/ClickHouse/issues/10586. [#11278](https://github.com/ClickHouse/ClickHouse/pull/11278) ([Vitaly Baranov](https://github.com/vitlibar)). -* Fix crash while reading malformed data in Protobuf format. This fixes https://github.com/ClickHouse/ClickHouse/issues/5957, fixes https://github.com/ClickHouse/ClickHouse/issues/11203. [#11258](https://github.com/ClickHouse/ClickHouse/pull/11258) ([Vitaly Baranov](https://github.com/vitlibar)). +* Fix crash when SET DEFAULT ROLE is called with wrong arguments. This fixes [#10586](https://github.com/ClickHouse/ClickHouse/issues/10586). [#11278](https://github.com/ClickHouse/ClickHouse/pull/11278) ([Vitaly Baranov](https://github.com/vitlibar)). +* Fix crash while reading malformed data in Protobuf format. This fixes [#5957](https://github.com/ClickHouse/ClickHouse/issues/5957), fixes [#11203](https://github.com/ClickHouse/ClickHouse/issues/11203). [#11258](https://github.com/ClickHouse/ClickHouse/pull/11258) ([Vitaly Baranov](https://github.com/vitlibar)). * Fixed a bug when cache-dictionary could return default value instead of normal (when there are only expired keys). This affects only string fields. [#11233](https://github.com/ClickHouse/ClickHouse/pull/11233) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). * Fix error `Block structure mismatch in QueryPipeline` while reading from `VIEW` with constants in inner query. Fixes [#11181](https://github.com/ClickHouse/ClickHouse/issues/11181). [#11205](https://github.com/ClickHouse/ClickHouse/pull/11205) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). * Fix possible exception `Invalid status for associated output`. [#11200](https://github.com/ClickHouse/ClickHouse/pull/11200) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). @@ -1679,7 +1802,7 @@ No changes compared to v20.4.3.16-stable. * Now constraints are updated if the column participating in `CONSTRAINT` expression was renamed. Fixes [#10844](https://github.com/ClickHouse/ClickHouse/issues/10844). [#10847](https://github.com/ClickHouse/ClickHouse/pull/10847) ([alesapin](https://github.com/alesapin)). * Fixed potential read of uninitialized memory in cache-dictionary. [#10834](https://github.com/ClickHouse/ClickHouse/pull/10834) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fixed columns order after `Block::sortColumns()`. 
[#10826](https://github.com/ClickHouse/ClickHouse/pull/10826) ([Azat Khuzhin](https://github.com/azat)). -* Fixed the issue with `ODBC` bridge when no quoting of identifiers is requested. Fixes [#7984] (https://github.com/ClickHouse/ClickHouse/issues/7984). [#10821](https://github.com/ClickHouse/ClickHouse/pull/10821) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Fixed the issue with `ODBC` bridge when no quoting of identifiers is requested. Fixes [#7984](https://github.com/ClickHouse/ClickHouse/issues/7984). [#10821](https://github.com/ClickHouse/ClickHouse/pull/10821) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fixed `UBSan` and `MSan` report in `DateLUT`. [#10798](https://github.com/ClickHouse/ClickHouse/pull/10798) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fixed incorrect type conversion in key conditions. Fixes [#6287](https://github.com/ClickHouse/ClickHouse/issues/6287). [#10791](https://github.com/ClickHouse/ClickHouse/pull/10791) ([Andrew Onyshchuk](https://github.com/oandrew)). * Fixed `parallel_view_processing` behavior. Now all insertions into `MATERIALIZED VIEW` without exception should be finished if exception happened. Fixes [#10241](https://github.com/ClickHouse/ClickHouse/issues/10241). [#10757](https://github.com/ClickHouse/ClickHouse/pull/10757) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). @@ -1707,15 +1830,15 @@ #### New Feature * Add support for secured connection from ClickHouse to Zookeeper [#10184](https://github.com/ClickHouse/ClickHouse/pull/10184) ([Konstantin Lebedev](https://github.com/xzkostyan)) -* Support custom HTTP handlers. See ISSUES-5436 for description. [#7572](https://github.com/ClickHouse/ClickHouse/pull/7572) ([Winter Zhang](https://github.com/zhang2014)) +* Support custom HTTP handlers. See [#5436](https://github.com/ClickHouse/ClickHouse/issues/5436) for description. [#7572](https://github.com/ClickHouse/ClickHouse/pull/7572) ([Winter Zhang](https://github.com/zhang2014)) * Add MessagePack Input/Output format. [#9889](https://github.com/ClickHouse/ClickHouse/pull/9889) ([Kruglov Pavel](https://github.com/Avogar)) * Add Regexp input format. [#9196](https://github.com/ClickHouse/ClickHouse/pull/9196) ([Kruglov Pavel](https://github.com/Avogar)) * Added output format `Markdown` for embedding tables in markdown documents. [#10317](https://github.com/ClickHouse/ClickHouse/pull/10317) ([Kruglov Pavel](https://github.com/Avogar)) * Added support for custom settings section in dictionaries. Also fixes issue [#2829](https://github.com/ClickHouse/ClickHouse/issues/2829). [#10137](https://github.com/ClickHouse/ClickHouse/pull/10137) ([Artem Streltsov](https://github.com/kekekekule)) -* Added custom settings support in DDL-queries for CREATE DICTIONARY [#10465](https://github.com/ClickHouse/ClickHouse/pull/10465) ([Artem Streltsov](https://github.com/kekekekule)) +* Added custom settings support in DDL-queries for `CREATE DICTIONARY` [#10465](https://github.com/ClickHouse/ClickHouse/pull/10465) ([Artem Streltsov](https://github.com/kekekekule)) * Add simple server-wide memory profiler that will collect allocation contexts when server memory usage becomes higher than the next allocation threshold. [#10444](https://github.com/ClickHouse/ClickHouse/pull/10444) ([alexey-milovidov](https://github.com/alexey-milovidov)) * Add setting `always_fetch_merged_part` which restricts the replica from merging parts by itself and makes it always prefer downloading from other replicas.
[#10379](https://github.com/ClickHouse/ClickHouse/pull/10379) ([alesapin](https://github.com/alesapin)) -* Add function JSONExtractKeysAndValuesRaw which extracts raw data from JSON objects [#10378](https://github.com/ClickHouse/ClickHouse/pull/10378) ([hcz](https://github.com/hczhcz)) +* Add function `JSONExtractKeysAndValuesRaw` which extracts raw data from JSON objects [#10378](https://github.com/ClickHouse/ClickHouse/pull/10378) ([hcz](https://github.com/hczhcz)) * Add memory usage from OS to `system.asynchronous_metrics`. [#10361](https://github.com/ClickHouse/ClickHouse/pull/10361) ([alexey-milovidov](https://github.com/alexey-milovidov)) * Added generic variants for functions `least` and `greatest`. Now they work with arbitrary number of arguments of arbitrary types. This fixes [#4767](https://github.com/ClickHouse/ClickHouse/issues/4767) [#10318](https://github.com/ClickHouse/ClickHouse/pull/10318) ([alexey-milovidov](https://github.com/alexey-milovidov)) * Now ClickHouse controls timeouts of dictionary sources on its side. Two new settings added to cache dictionary configuration: `strict_max_lifetime_seconds`, which is `max_lifetime` by default, and `query_wait_timeout_milliseconds`, which is one minute by default. The first setting is also useful with the `allow_read_expired_keys` setting (to forbid reading very expired keys). [#10337](https://github.com/ClickHouse/ClickHouse/pull/10337) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)) @@ -1728,7 +1851,7 @@ * Add ability to query Distributed over Distributed (w/o `distributed_group_by_no_merge`) ... [#9923](https://github.com/ClickHouse/ClickHouse/pull/9923) ([Azat Khuzhin](https://github.com/azat)) * Add function `arrayReduceInRanges` which aggregates array elements in given ranges. [#9598](https://github.com/ClickHouse/ClickHouse/pull/9598) ([hcz](https://github.com/hczhcz)) * Add Dictionary Status on prometheus exporter. [#9622](https://github.com/ClickHouse/ClickHouse/pull/9622) ([Guillaume Tassery](https://github.com/YiuRULE)) -* Add function arrayAUC [#8698](https://github.com/ClickHouse/ClickHouse/pull/8698) ([taiyang-li](https://github.com/taiyang-li)) +* Add function `arrayAUC` [#8698](https://github.com/ClickHouse/ClickHouse/pull/8698) ([taiyang-li](https://github.com/taiyang-li)) * Support `DROP VIEW` statement for better TPC-H compatibility. [#9831](https://github.com/ClickHouse/ClickHouse/pull/9831) ([Amos Bird](https://github.com/amosbird)) * Add 'strict_order' option to windowFunnel() [#9773](https://github.com/ClickHouse/ClickHouse/pull/9773) ([achimbab](https://github.com/achimbab)) * Support `DATE` and `TIMESTAMP` SQL operators, e.g. `SELECT date '2001-01-01'` [#9691](https://github.com/ClickHouse/ClickHouse/pull/9691) ([Artem Zuikov](https://github.com/4ertus2))
Fixed the issue when TLS connections may fail with the message `OpenSSL SSL_read: error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error` and `SSL Exception: error:2400006E:random number generator::error retrieving entropy`. The issue was present in version 20.1. [#8956](https://github.com/ClickHouse/ClickHouse/pull/8956) ([alexey-milovidov](https://github.com/alexey-milovidov)) -* Fix clang-10 build. https://github.com/ClickHouse/ClickHouse/issues/10238 [#10370](https://github.com/ClickHouse/ClickHouse/pull/10370) ([Amos Bird](https://github.com/amosbird)) +* Fix clang-10 build. [#10238](https://github.com/ClickHouse/ClickHouse/issues/10238) [#10370](https://github.com/ClickHouse/ClickHouse/pull/10370) ([Amos Bird](https://github.com/amosbird)) * Add performance test for [Parallel INSERT for materialized view](https://github.com/ClickHouse/ClickHouse/pull/10052). [#10345](https://github.com/ClickHouse/ClickHouse/pull/10345) ([vxider](https://github.com/Vxider)) * Fix flaky test `test_settings_constraints_distributed.test_insert_clamps_settings`. [#10346](https://github.com/ClickHouse/ClickHouse/pull/10346) ([Vitaly Baranov](https://github.com/vitlibar)) * Add util to test results upload in CI ClickHouse [#10330](https://github.com/ClickHouse/ClickHouse/pull/10330) ([Ilya Yatsishin](https://github.com/qoega)) @@ -2106,7 +2229,7 @@ #### Performance Improvement -* Index not used for IN operator with literals", performance regression introduced around v19.3. This fixes "[#10574](https://github.com/ClickHouse/ClickHouse/issues/10574). [#12062](https://github.com/ClickHouse/ClickHouse/pull/12062) ([nvartolomei](https://github.com/nvartolomei)). +* Index not used for IN operator with literals, performance regression introduced around v19.3. This fixes [#10574](https://github.com/ClickHouse/ClickHouse/issues/10574). [#12062](https://github.com/ClickHouse/ClickHouse/pull/12062) ([nvartolomei](https://github.com/nvartolomei)). ### ClickHouse release v20.3.12.112-lts 2020-06-25 @@ -2148,7 +2271,7 @@ * Fix the error `Data compressed with different methods` that can happen if `min_bytes_to_use_direct_io` is enabled and PREWHERE is active and using SAMPLE or high number of threads. This fixes [#11539](https://github.com/ClickHouse/ClickHouse/issues/11539). [#11540](https://github.com/ClickHouse/ClickHouse/pull/11540) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fix return compressed size for codecs. [#11448](https://github.com/ClickHouse/ClickHouse/pull/11448) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). * Fix server crash when a column has compression codec with non-literal arguments. Fixes [#11365](https://github.com/ClickHouse/ClickHouse/issues/11365). [#11431](https://github.com/ClickHouse/ClickHouse/pull/11431) ([alesapin](https://github.com/alesapin)). -* Fix pointInPolygon with nan as point. Fixes https://github.com/ClickHouse/ClickHouse/issues/11375. [#11421](https://github.com/ClickHouse/ClickHouse/pull/11421) ([Alexey Ilyukhov](https://github.com/livace)). +* Fix pointInPolygon with nan as point. Fixes [#11375](https://github.com/ClickHouse/ClickHouse/issues/11375). [#11421](https://github.com/ClickHouse/ClickHouse/pull/11421) ([Alexey Ilyukhov](https://github.com/livace)). * Fix crash in JOIN over LowCardinality(T) and Nullable(T). [#11380](https://github.com/ClickHouse/ClickHouse/issues/11380).
[#11414](https://github.com/ClickHouse/ClickHouse/pull/11414) ([Artem Zuikov](https://github.com/4ertus2)). * Fix error code for wrong `USING` key. [#11373](https://github.com/ClickHouse/ClickHouse/issues/11373). [#11404](https://github.com/ClickHouse/ClickHouse/pull/11404) ([Artem Zuikov](https://github.com/4ertus2)). * Fixed geohashesInBox with arguments outside of latitude/longitude range. [#11403](https://github.com/ClickHouse/ClickHouse/pull/11403) ([Vasily Nemkov](https://github.com/Enmk)). @@ -2165,7 +2288,7 @@ No changes compared to v20.4.3.16-stable. * Fix potential uninitialized memory in conversion. Example: `SELECT toIntervalSecond(now64())`. [#11311](https://github.com/ClickHouse/ClickHouse/pull/11311) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fix the issue when index analysis cannot work if a table has Array column in primary key and if a query is filtering by this column with `empty` or `notEmpty` functions. This fixes [#11286](https://github.com/ClickHouse/ClickHouse/issues/11286). [#11303](https://github.com/ClickHouse/ClickHouse/pull/11303) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fix bug when query speed estimation can be incorrect and the limit of `min_execution_speed` may not work or work incorrectly if the query is throttled by `max_network_bandwidth`, `max_execution_speed` or `priority` settings. Change the default value of `timeout_before_checking_execution_speed` to non-zero, because otherwise the settings `min_execution_speed` and `max_execution_speed` have no effect. This fixes [#11297](https://github.com/ClickHouse/ClickHouse/issues/11297). This fixes [#5732](https://github.com/ClickHouse/ClickHouse/issues/5732). This fixes [#6228](https://github.com/ClickHouse/ClickHouse/issues/6228). Usability improvement: avoid concatenation of exception message with progress bar in `clickhouse-client`. [#11296](https://github.com/ClickHouse/ClickHouse/pull/11296) ([alexey-milovidov](https://github.com/alexey-milovidov)). -* Fix crash while reading malformed data in Protobuf format. This fixes https://github.com/ClickHouse/ClickHouse/issues/5957, fixes https://github.com/ClickHouse/ClickHouse/issues/11203. [#11258](https://github.com/ClickHouse/ClickHouse/pull/11258) ([Vitaly Baranov](https://github.com/vitlibar)). +* Fix crash while reading malformed data in Protobuf format. This fixes [#5957](https://github.com/ClickHouse/ClickHouse/issues/5957), fixes [#11203](https://github.com/ClickHouse/ClickHouse/issues/11203). [#11258](https://github.com/ClickHouse/ClickHouse/pull/11258) ([Vitaly Baranov](https://github.com/vitlibar)). * Fixed a bug when cache-dictionary could return default value instead of normal (when there are only expired keys). This affects only string fields. [#11233](https://github.com/ClickHouse/ClickHouse/pull/11233) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). * Fix error `Block structure mismatch in QueryPipeline` while reading from `VIEW` with constants in inner query. Fixes [#11181](https://github.com/ClickHouse/ClickHouse/issues/11181). [#11205](https://github.com/ClickHouse/ClickHouse/pull/11205) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). * Fix possible exception `Invalid status for associated output`. [#11200](https://github.com/ClickHouse/ClickHouse/pull/11200) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). @@ -2196,7 +2319,7 @@ No changes compared to v20.4.3.16-stable. * Fixed `SIGSEGV` in `StringHashTable` if such a key does not exist. 
[#10870](https://github.com/ClickHouse/ClickHouse/pull/10870) ([Azat Khuzhin](https://github.com/azat)). * Fixed bug in `ReplicatedMergeTree` which might cause some `ALTER` or `OPTIMIZE` query to hang waiting for some replica after it becomes inactive. [#10849](https://github.com/ClickHouse/ClickHouse/pull/10849) ([tavplubix](https://github.com/tavplubix)). * Fixed columns order after `Block::sortColumns()`. [#10826](https://github.com/ClickHouse/ClickHouse/pull/10826) ([Azat Khuzhin](https://github.com/azat)). -* Fixed the issue with `ODBC` bridge when no quoting of identifiers is requested. Fixes [#7984] (https://github.com/ClickHouse/ClickHouse/issues/7984). [#10821](https://github.com/ClickHouse/ClickHouse/pull/10821) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Fixed the issue with `ODBC` bridge when no quoting of identifiers is requested. Fixes [#7984](https://github.com/ClickHouse/ClickHouse/issues/7984). [#10821](https://github.com/ClickHouse/ClickHouse/pull/10821) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fixed `UBSan` and `MSan` report in `DateLUT`. [#10798](https://github.com/ClickHouse/ClickHouse/pull/10798) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fixed incorrect type conversion in key conditions. Fixes [#6287](https://github.com/ClickHouse/ClickHouse/issues/6287). [#10791](https://github.com/ClickHouse/ClickHouse/pull/10791) ([Andrew Onyshchuk](https://github.com/oandrew)). * Fixed `parallel_view_processing` behavior. Now all insertions into `MATERIALIZED VIEW` without exception should be finished if exception happened. Fixes [#10241](https://github.com/ClickHouse/ClickHouse/issues/10241). [#10757](https://github.com/ClickHouse/ClickHouse/pull/10757) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). @@ -2215,7 +2338,7 @@ * Fixed incorrect scalar results inside inner query of `MATERIALIZED VIEW` in case if this query contained dependent table. [#10603](https://github.com/ClickHouse/ClickHouse/pull/10603) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). * Fixed `SELECT` of column `ALIAS` whose default expression type differs from the column type. [#10563](https://github.com/ClickHouse/ClickHouse/pull/10563) ([Azat Khuzhin](https://github.com/azat)). * Implemented comparison between DateTime64 and String values. [#10560](https://github.com/ClickHouse/ClickHouse/pull/10560) ([Vasily Nemkov](https://github.com/Enmk)). -* Fixed index corruption, which may accur in some cases after merge compact parts into another compact part. [#10531](https://github.com/ClickHouse/ClickHouse/pull/10531) ([Anton Popov](https://github.com/CurtizJ)). +* Fixed index corruption, which may occur in some cases after merging compact parts into another compact part. [#10531](https://github.com/ClickHouse/ClickHouse/pull/10531) ([Anton Popov](https://github.com/CurtizJ)). * Fixed the situation when mutation finished all parts but hung up in `is_done=0`. [#10526](https://github.com/ClickHouse/ClickHouse/pull/10526) ([alesapin](https://github.com/alesapin)). * Fixed overflow at beginning of unix epoch for timezones with fractional offset from `UTC`. This fixes [#9335](https://github.com/ClickHouse/ClickHouse/issues/9335). [#10513](https://github.com/ClickHouse/ClickHouse/pull/10513) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fixed improper shutdown of `Distributed` storage. [#10491](https://github.com/ClickHouse/ClickHouse/pull/10491) ([Azat Khuzhin](https://github.com/azat)).
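To illustrate the DateTime64/String comparison implemented in this hunk (the literal values are made up):

```sql
-- DateTime64 can now be compared directly against a String value, as for DateTime.
SELECT toDateTime64('2020-05-14 10:00:00.000', 3) = '2020-05-14 10:00:00.000';
```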
@@ -2225,14 +2348,14 @@ No changes compared to v20.4.3.16-stable. #### Build/Testing/Packaging Improvement * Fix UBSan report in LZ4 library. [#10631](https://github.com/ClickHouse/ClickHouse/pull/10631) ([alexey-milovidov](https://github.com/alexey-milovidov)). -* Fix clang-10 build. https://github.com/ClickHouse/ClickHouse/issues/10238. [#10370](https://github.com/ClickHouse/ClickHouse/pull/10370) ([Amos Bird](https://github.com/amosbird)). +* Fix clang-10 build. [#10238](https://github.com/ClickHouse/ClickHouse/issues/10238). [#10370](https://github.com/ClickHouse/ClickHouse/pull/10370) ([Amos Bird](https://github.com/amosbird)). * Added failing tests about `max_rows_to_sort` setting. [#10268](https://github.com/ClickHouse/ClickHouse/pull/10268) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Added some improvements in printing diagnostic info in input formats. Fixes [#10204](https://github.com/ClickHouse/ClickHouse/issues/10204). [#10418](https://github.com/ClickHouse/ClickHouse/pull/10418) ([tavplubix](https://github.com/tavplubix)). * Added CA certificates to clickhouse-server docker image. [#10476](https://github.com/ClickHouse/ClickHouse/pull/10476) ([filimonov](https://github.com/filimonov)). #### Bug fix -* #10551. [#10569](https://github.com/ClickHouse/ClickHouse/pull/10569) ([Winter Zhang](https://github.com/zhang2014)). +* Fix error `the BloomFilter false positive must be a double number between 0 and 1` [#10551](https://github.com/ClickHouse/ClickHouse/issues/10551). [#10569](https://github.com/ClickHouse/ClickHouse/pull/10569) ([Winter Zhang](https://github.com/zhang2014)). ### ClickHouse release v20.3.8.53, 2020-04-23 @@ -2424,7 +2547,7 @@ No changes compared to v20.4.3.16-stable. * Fixed the behaviour of `match` and `extract` functions when haystack has zero bytes. The behaviour was wrong when haystack was constant. This fixes [#9160](https://github.com/ClickHouse/ClickHouse/issues/9160) [#9163](https://github.com/ClickHouse/ClickHouse/pull/9163) ([alexey-milovidov](https://github.com/alexey-milovidov)) [#9345](https://github.com/ClickHouse/ClickHouse/pull/9345) ([alexey-milovidov](https://github.com/alexey-milovidov)) * Avoid throwing from destructor in Apache Avro 3rd-party library. [#9066](https://github.com/ClickHouse/ClickHouse/pull/9066) ([Andrew Onyshchuk](https://github.com/oandrew)) * Don't commit a batch polled from `Kafka` partially as it can lead to holes in data. [#8876](https://github.com/ClickHouse/ClickHouse/pull/8876) ([filimonov](https://github.com/filimonov)) -* Fix `joinGet` with nullable return types. https://github.com/ClickHouse/ClickHouse/issues/8919 [#9014](https://github.com/ClickHouse/ClickHouse/pull/9014) ([Amos Bird](https://github.com/amosbird)) +* Fix `joinGet` with nullable return types. [#8919](https://github.com/ClickHouse/ClickHouse/issues/8919) [#9014](https://github.com/ClickHouse/ClickHouse/pull/9014) ([Amos Bird](https://github.com/amosbird)) * Fix data incompatibility when compressed with `T64` codec. [#9016](https://github.com/ClickHouse/ClickHouse/pull/9016) ([Artem Zuikov](https://github.com/4ertus2)) Fix data type ids in `T64` compression codec that leads to wrong (de)compression in affected versions. [#9033](https://github.com/ClickHouse/ClickHouse/pull/9033) ([Artem Zuikov](https://github.com/4ertus2)) * Add setting `enable_early_constant_folding` and disable it in some cases that leads to errors. 
[#9010](https://github.com/ClickHouse/ClickHouse/pull/9010) ([Artem Zuikov](https://github.com/4ertus2)) * Fix pushdown predicate optimizer with VIEW and enable the test [#9011](https://github.com/ClickHouse/ClickHouse/pull/9011) ([Winter Zhang](https://github.com/zhang2014)) @@ -2626,7 +2749,7 @@ No changes compared to v20.4.3.16-stable. * Fix the error `Data compressed with different methods` that can happen if `min_bytes_to_use_direct_io` is enabled and PREWHERE is active and using SAMPLE or high number of threads. This fixes [#11539](https://github.com/ClickHouse/ClickHouse/issues/11539). [#11540](https://github.com/ClickHouse/ClickHouse/pull/11540) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fix return compressed size for codecs. [#11448](https://github.com/ClickHouse/ClickHouse/pull/11448) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). * Fix server crash when a column has compression codec with non-literal arguments. Fixes [#11365](https://github.com/ClickHouse/ClickHouse/issues/11365). [#11431](https://github.com/ClickHouse/ClickHouse/pull/11431) ([alesapin](https://github.com/alesapin)). -* Fix pointInPolygon with nan as point. Fixes https://github.com/ClickHouse/ClickHouse/issues/11375. [#11421](https://github.com/ClickHouse/ClickHouse/pull/11421) ([Alexey Ilyukhov](https://github.com/livace)). +* Fix pointInPolygon with nan as point. Fixes [#11375](https://github.com/ClickHouse/ClickHouse/issues/11375). [#11421](https://github.com/ClickHouse/ClickHouse/pull/11421) ([Alexey Ilyukhov](https://github.com/livace)). * Fixed geohashesInBox with arguments outside of latitude/longitude range. [#11403](https://github.com/ClickHouse/ClickHouse/pull/11403) ([Vasily Nemkov](https://github.com/Enmk)). * Fix possible `Pipeline stuck` error for queries with external sort and limit. Fixes [#11359](https://github.com/ClickHouse/ClickHouse/issues/11359). [#11366](https://github.com/ClickHouse/ClickHouse/pull/11366) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). * Fix crash in `quantilesExactWeightedArray`. [#11337](https://github.com/ClickHouse/ClickHouse/pull/11337) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). @@ -2636,7 +2759,7 @@ No changes compared to v20.4.3.16-stable. * Fix potential uninitialized memory in conversion. Example: `SELECT toIntervalSecond(now64())`. [#11311](https://github.com/ClickHouse/ClickHouse/pull/11311) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fix the issue when index analysis cannot work if a table has Array column in primary key and if a query is filtering by this column with `empty` or `notEmpty` functions. This fixes [#11286](https://github.com/ClickHouse/ClickHouse/issues/11286). [#11303](https://github.com/ClickHouse/ClickHouse/pull/11303) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fix bug when query speed estimation can be incorrect and the limit of `min_execution_speed` may not work or work incorrectly if the query is throttled by `max_network_bandwidth`, `max_execution_speed` or `priority` settings. Change the default value of `timeout_before_checking_execution_speed` to non-zero, because otherwise the settings `min_execution_speed` and `max_execution_speed` have no effect. This fixes [#11297](https://github.com/ClickHouse/ClickHouse/issues/11297). This fixes [#5732](https://github.com/ClickHouse/ClickHouse/issues/5732). This fixes [#6228](https://github.com/ClickHouse/ClickHouse/issues/6228). 
Usability improvement: avoid concatenation of exception message with progress bar in `clickhouse-client`. [#11296](https://github.com/ClickHouse/ClickHouse/pull/11296) ([alexey-milovidov](https://github.com/alexey-milovidov)). -* Fix crash while reading malformed data in Protobuf format. This fixes https://github.com/ClickHouse/ClickHouse/issues/5957, fixes https://github.com/ClickHouse/ClickHouse/issues/11203. [#11258](https://github.com/ClickHouse/ClickHouse/pull/11258) ([Vitaly Baranov](https://github.com/vitlibar)). +* Fix crash while reading malformed data in Protobuf format. This fixes [#5957](https://github.com/ClickHouse/ClickHouse/issues/5957), fixes [#11203](https://github.com/ClickHouse/ClickHouse/issues/11203). [#11258](https://github.com/ClickHouse/ClickHouse/pull/11258) ([Vitaly Baranov](https://github.com/vitlibar)). * Fix possible error `Cannot capture column` for higher-order functions with `Array(Array(LowCardinality))` captured argument. [#11185](https://github.com/ClickHouse/ClickHouse/pull/11185) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). * If data skipping index is dependent on columns that are going to be modified during background merge (for SummingMergeTree, AggregatingMergeTree as well as for TTL GROUP BY), it was calculated incorrectly. This issue is fixed by moving index calculation after merge so the index is calculated on merged data. [#11162](https://github.com/ClickHouse/ClickHouse/pull/11162) ([Azat Khuzhin](https://github.com/azat)). * Remove logging from mutation finalization task if nothing was finalized. [#11109](https://github.com/ClickHouse/ClickHouse/pull/11109) ([alesapin](https://github.com/alesapin)). @@ -2914,7 +3037,7 @@ No changes compared to v20.4.3.16-stable. * Several improvements to the ClickHouse grammar in the `.g4` file. [#8294](https://github.com/ClickHouse/ClickHouse/pull/8294) ([taiyang-li](https://github.com/taiyang-li)) * Fix bug that leads to crashes in `JOIN`s with tables with engine `Join`. This fixes [#7556](https://github.com/ClickHouse/ClickHouse/issues/7556) [#8254](https://github.com/ClickHouse/ClickHouse/issues/8254) [#7915](https://github.com/ClickHouse/ClickHouse/issues/7915) [#8100](https://github.com/ClickHouse/ClickHouse/issues/8100). [#8298](https://github.com/ClickHouse/ClickHouse/pull/8298) ([Artem Zuikov](https://github.com/4ertus2)) * Fix redundant dictionaries reload on `CREATE DATABASE`. [#7916](https://github.com/ClickHouse/ClickHouse/pull/7916) ([Azat Khuzhin](https://github.com/azat)) -* Limit maximum number of streams for read from `StorageFile` and `StorageHDFS`. Fixes https://github.com/ClickHouse/ClickHouse/issues/7650. [#7981](https://github.com/ClickHouse/ClickHouse/pull/7981) ([alesapin](https://github.com/alesapin)) +* Limit maximum number of streams for read from `StorageFile` and `StorageHDFS`. Fixes [#7650](https://github.com/ClickHouse/ClickHouse/issues/7650). [#7981](https://github.com/ClickHouse/ClickHouse/pull/7981) ([alesapin](https://github.com/alesapin)) * Fix bug in `ALTER ... MODIFY ... CODEC` query, when the user specifies both default expression and codec. Fixes [8593](https://github.com/ClickHouse/ClickHouse/issues/8593). [#8614](https://github.com/ClickHouse/ClickHouse/pull/8614) ([alesapin](https://github.com/alesapin)) * Fix error in background merge of columns with `SimpleAggregateFunction(LowCardinality)` type. [#8613](https://github.com/ClickHouse/ClickHouse/pull/8613) ([Nikolai Kochetov](https://github.com/KochetovNicolai)) * Fixed type check in function `toDateTime64`.
[#8375](https://github.com/ClickHouse/ClickHouse/pull/8375) ([Vasily Nemkov](https://github.com/Enmk)) @@ -2998,7 +3121,7 @@ No changes compared to v20.4.3.16-stable. * Added check for extra parts of `MergeTree` at different disks, in order not to miss data parts at undefined disks. [#8118](https://github.com/ClickHouse/ClickHouse/pull/8118) ([Vladimir Chebotarev](https://github.com/excitoon)) * Enable SSL support for Mac client and server. [#8297](https://github.com/ClickHouse/ClickHouse/pull/8297) ([Ivan](https://github.com/abyss7)) * Now ClickHouse can work as MySQL federated server (see https://dev.mysql.com/doc/refman/5.7/en/federated-create-server.html). [#7717](https://github.com/ClickHouse/ClickHouse/pull/7717) ([Maxim Fedotov](https://github.com/MaxFedotov)) -* `clickhouse-client` now only enables `bracketed-paste` when multiquery is on and multiline is off. This fixes (#7757)[https://github.com/ClickHouse/ClickHouse/issues/7757]. [#7761](https://github.com/ClickHouse/ClickHouse/pull/7761) ([Amos Bird](https://github.com/amosbird)) +* `clickhouse-client` now only enables `bracketed-paste` when multiquery is on and multiline is off. This fixes [#7757](https://github.com/ClickHouse/ClickHouse/issues/7757). [#7761](https://github.com/ClickHouse/ClickHouse/pull/7761) ([Amos Bird](https://github.com/amosbird)) * Support `Array(Decimal)` in `if` function. [#7721](https://github.com/ClickHouse/ClickHouse/pull/7721) ([Artem Zuikov](https://github.com/4ertus2)) * Support Decimals in `arrayDifference`, `arrayCumSum` and `arrayCumSumNegative` functions. [#7724](https://github.com/ClickHouse/ClickHouse/pull/7724) ([Artem Zuikov](https://github.com/4ertus2)) * Added `lifetime` column to `system.dictionaries` table. [#6820](https://github.com/ClickHouse/ClickHouse/issues/6820) [#7727](https://github.com/ClickHouse/ClickHouse/pull/7727) ([kekekekule](https://github.com/kekekekule)) diff --git a/CMakeLists.txt b/CMakeLists.txt index 367af2bbf92..56faf4ca2b1 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -225,14 +225,14 @@ endif () if (COMPILER_GCC OR COMPILER_CLANG) # to make numeric_limits<__int128> work with GCC - set (_CXX_STANDARD "-std=gnu++2a") + set (_CXX_STANDARD "gnu++2a") else() - set (_CXX_STANDARD "-std=c++2a") + set (_CXX_STANDARD "c++2a") endif() # cmake < 3.12 doesn't support 20.
We'll set CMAKE_CXX_FLAGS for now # set (CMAKE_CXX_STANDARD 20) -set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${_CXX_STANDARD}") +set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=${_CXX_STANDARD}") set (CMAKE_CXX_EXTENSIONS 0) # https://cmake.org/cmake/help/latest/prop_tgt/CXX_EXTENSIONS.html#prop_tgt:CXX_EXTENSIONS set (CMAKE_CXX_STANDARD_REQUIRED ON) diff --git a/base/common/ReplxxLineReader.cpp b/base/common/ReplxxLineReader.cpp index 85b474e2021..4eb7b065fe3 100644 --- a/base/common/ReplxxLineReader.cpp +++ b/base/common/ReplxxLineReader.cpp @@ -58,6 +58,8 @@ ReplxxLineReader::ReplxxLineReader( } } + rx.install_window_change_handler(); + auto callback = [&suggest] (const String & context, size_t context_size) { if (auto range = suggest.getCompletions(context, context_size)) diff --git a/cmake/limit_jobs.cmake b/cmake/limit_jobs.cmake index 5b962f34c38..241fa509477 100644 --- a/cmake/limit_jobs.cmake +++ b/cmake/limit_jobs.cmake @@ -35,6 +35,15 @@ if (NOT PARALLEL_LINK_JOBS AND AVAILABLE_PHYSICAL_MEMORY AND MAX_LINKER_MEMORY) endif () endif () +# ThinLTO provides its own parallel linking +# But use 2 parallel jobs, since: +# - this is what llvm does +# - and I've verified that lld-11 does not use all available CPU time (in peak) while linking one binary +if (ENABLE_THINLTO AND PARALLEL_LINK_JOBS GREATER 2) + message(STATUS "ThinLTO provides its own parallel linking - limiting parallel link jobs to 2.") + set (PARALLEL_LINK_JOBS 2) +endif() + if (PARALLEL_LINK_JOBS AND (NOT NUMBER_OF_LOGICAL_CORES OR PARALLEL_COMPILE_JOBS LESS NUMBER_OF_LOGICAL_CORES)) set(CMAKE_JOB_POOL_LINK link_job_pool${CMAKE_CURRENT_SOURCE_DIR}) string (REGEX REPLACE "[^a-zA-Z0-9]+" "_" CMAKE_JOB_POOL_LINK ${CMAKE_JOB_POOL_LINK}) diff --git a/contrib/CMakeLists.txt b/contrib/CMakeLists.txt index b2621b176cb..a7b1abb9f49 100644 --- a/contrib/CMakeLists.txt +++ b/contrib/CMakeLists.txt @@ -26,6 +26,7 @@ add_subdirectory (boost-cmake) add_subdirectory (cctz-cmake) add_subdirectory (consistent-hashing-sumbur) add_subdirectory (consistent-hashing) +add_subdirectory (dragonbox-cmake) add_subdirectory (FastMemcpy) add_subdirectory (hyperscan-cmake) add_subdirectory (jemalloc-cmake) @@ -240,6 +241,14 @@ if (USE_EMBEDDED_COMPILER AND USE_INTERNAL_LLVM_LIBRARY) set (LLVM_ENABLE_RTTI 1 CACHE INTERNAL "") set (LLVM_ENABLE_PIC 0 CACHE INTERNAL "") set (LLVM_TARGETS_TO_BUILD "X86;AArch64" CACHE STRING "") + # Yes it is set globally, but this is not enough, since llvm will add -std=c++11 after default + # And c++2a cannot be used due to ambiguous operator != + if (COMPILER_GCC OR COMPILER_CLANG) + set (_CXX_STANDARD "gnu++17") + else() + set (_CXX_STANDARD "c++17") + endif() + set (LLVM_CXX_STD ${_CXX_STANDARD} CACHE STRING "" FORCE) add_subdirectory (llvm/llvm) target_include_directories(LLVMSupport SYSTEM BEFORE PRIVATE ${ZLIB_INCLUDE_DIR}) endif () @@ -322,5 +331,5 @@ if (USE_INTERNAL_ROCKSDB_LIBRARY) add_subdirectory(rocksdb-cmake) endif() -add_subdirectory(dragonbox) add_subdirectory(fast_float) + diff --git a/contrib/dragonbox-cmake/CMakeLists.txt b/contrib/dragonbox-cmake/CMakeLists.txt new file mode 100644 index 00000000000..604394c6dce --- /dev/null +++ b/contrib/dragonbox-cmake/CMakeLists.txt @@ -0,0 +1,5 @@ +set(LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/dragonbox") + +add_library(dragonbox_to_chars "${LIBRARY_DIR}/source/dragonbox_to_chars.cpp") + +target_include_directories(dragonbox_to_chars SYSTEM BEFORE PUBLIC "${LIBRARY_DIR}/include/")
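For context, a sketch of how the ThinLTO link-job clamp added to `cmake/limit_jobs.cmake` above behaves at configure time. This is a hypothetical invocation, not taken from the PR itself, and it assumes `PARALLEL_LINK_JOBS` can be passed as a CMake cache variable:

```bash
# Hypothetical configure run: with ThinLTO enabled, the new logic in
# cmake/limit_jobs.cmake should clamp an explicit PARALLEL_LINK_JOBS=4 down to 2
# and print "ThinLTO provides its own parallel linking - limiting parallel link jobs to 2."
mkdir -p build && cd build
cmake -G Ninja \
    -DCMAKE_C_COMPILER=clang-11 -DCMAKE_CXX_COMPILER=clang++-11 \
    -DENABLE_THINLTO=1 \
    -DPARALLEL_LINK_JOBS=4 \
    ..
```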
diff --git a/contrib/librdkafka b/contrib/librdkafka index 9902bc4fb18..f2f6616419d 160000 --- a/contrib/librdkafka +++ b/contrib/librdkafka @@ -1 +1 @@ -Subproject commit 9902bc4fb18bb441fa55ca154b341cdda191e5d3 +Subproject commit f2f6616419d567c9198aef0d1133a2e9b4f02276 diff --git a/contrib/libunwind b/contrib/libunwind index 7d78d361891..51b84d9b6d2 160000 --- a/contrib/libunwind +++ b/contrib/libunwind @@ -1 +1 @@ -Subproject commit 7d78d3618910752c256b2b58c3895f4efea47fac +Subproject commit 51b84d9b6d2548f1cbdcafe622d5a753853b6149 diff --git a/contrib/replxx b/contrib/replxx index 8cf626c04e9..254be98ae7f 160000 --- a/contrib/replxx +++ b/contrib/replxx @@ -1 +1 @@ -Subproject commit 8cf626c04e9a74313fb0b474cdbe2297c0f3cdc8 +Subproject commit 254be98ae7f2fd92d6db768f8e11ea5a5226cbf5 diff --git a/contrib/replxx-cmake/CMakeLists.txt b/contrib/replxx-cmake/CMakeLists.txt index 2c0ad86e583..df17e0ed646 100644 --- a/contrib/replxx-cmake/CMakeLists.txt +++ b/contrib/replxx-cmake/CMakeLists.txt @@ -53,7 +53,7 @@ if (NOT LIBRARY_REPLXX OR NOT INCLUDE_REPLXX OR NOT EXTERNAL_REPLXX_WORKS) "${LIBRARY_DIR}/src/ConvertUTF.cpp" "${LIBRARY_DIR}/src/escape.cxx" "${LIBRARY_DIR}/src/history.cxx" - "${LIBRARY_DIR}/src/io.cxx" + "${LIBRARY_DIR}/src/terminal.cxx" "${LIBRARY_DIR}/src/prompt.cxx" "${LIBRARY_DIR}/src/replxx_impl.cxx" "${LIBRARY_DIR}/src/replxx.cxx" diff --git a/debian/control b/debian/control index 12d69d9fff6..9b34e982698 100644 --- a/debian/control +++ b/debian/control @@ -5,8 +5,8 @@ Maintainer: Alexey Milovidov Build-Depends: debhelper (>= 9), cmake | cmake3, ninja-build, - gcc-9 [amd64 i386] | gcc-8 [amd64 i386], g++-9 [amd64 i386] | g++-8 [amd64 i386], - clang-8 [arm64 armhf] | clang-7 [arm64 armhf] | clang-6.0 [arm64 armhf], + clang-11, + llvm-11, libc6-dev, libicu-dev, libreadline-dev, diff --git a/docker/builder/Dockerfile b/docker/builder/Dockerfile index 68245a92c58..199b5217d79 100644 --- a/docker/builder/Dockerfile +++ b/docker/builder/Dockerfile @@ -1,6 +1,6 @@ -FROM ubuntu:19.10 +FROM ubuntu:20.04 -ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=10 +ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=11 RUN apt-get update \ && apt-get install ca-certificates lsb-release wget gnupg apt-transport-https \ diff --git a/docker/builder/build.sh b/docker/builder/build.sh index d814bcdf2b4..d4cf662e91b 100755 --- a/docker/builder/build.sh +++ b/docker/builder/build.sh @@ -4,7 +4,7 @@ set -e #ccache -s # uncomment to display CCache statistics mkdir -p /server/build_docker cd /server/build_docker -cmake -G Ninja /server "-DCMAKE_C_COMPILER=$(command -v gcc-9)" "-DCMAKE_CXX_COMPILER=$(command -v g++-9)" +cmake -G Ninja /server "-DCMAKE_C_COMPILER=$(command -v clang-11)" "-DCMAKE_CXX_COMPILER=$(command -v clang++-11)" # Set the number of build jobs to half the number of virtual CPU cores (rounded up). # By default, ninja uses all virtual CPU cores, which leads to very high memory consumption without much improvement in build time. diff --git a/docker/packager/packager b/docker/packager/packager index 6d075195003..65c03cc10e3 100755 --- a/docker/packager/packager +++ b/docker/packager/packager @@ -148,6 +148,10 @@ def parse_env_variables(build_type, compiler, sanitizer, package_type, image_typ if split_binary: cmake_flags.append('-DUSE_STATIC_LIBRARIES=0 -DSPLIT_SHARED_LIBRARIES=1 -DCLICKHOUSE_SPLIT_BINARY=1') + # We can't always build utils because it requires too much space, but + # we have to build them at least in some way in CI. The split build is + # probably the least heavy disk-wise.
+ cmake_flags.append('-DENABLE_UTILS=1') if clang_tidy: cmake_flags.append('-DENABLE_CLANG_TIDY=1') diff --git a/docker/server/README.md b/docker/server/README.md index e8e8d326de7..d8e9204dffa 100644 --- a/docker/server/README.md +++ b/docker/server/README.md @@ -15,6 +15,8 @@ For more information and documentation see https://clickhouse.yandex/. $ docker run -d --name some-clickhouse-server --ulimit nofile=262144:262144 yandex/clickhouse-server ``` +By default, ClickHouse will be accessible only via the docker network. See the [networking section below](#networking). + ### connect to it from a native client ```bash $ docker run -it --rm --link some-clickhouse-server:clickhouse-server yandex/clickhouse-client --host clickhouse-server @@ -22,6 +24,70 @@ $ docker run -it --rm --link some-clickhouse-server:clickhouse-server yandex/cli More information about [ClickHouse client](https://clickhouse.yandex/docs/en/interfaces/cli/). +### connect to it using curl + +```bash +echo "SELECT 'Hello, ClickHouse!'" | docker run -i --rm --link some-clickhouse-server:clickhouse-server curlimages/curl 'http://clickhouse-server:8123/?query=' -s --data-binary @- +``` +More information about the [ClickHouse HTTP Interface](https://clickhouse.tech/docs/en/interfaces/http/). + +### stopping / removing the container + +```bash +$ docker stop some-clickhouse-server +$ docker rm some-clickhouse-server +``` + +### networking + +You can expose your ClickHouse server running in docker by [mapping particular ports](https://docs.docker.com/config/containers/container-networking/) from inside the container to host ports: + +```bash +$ docker run -d -p 18123:8123 -p 19000:9000 --name some-clickhouse-server --ulimit nofile=262144:262144 yandex/clickhouse-server +$ echo 'SELECT version()' | curl 'http://localhost:18123/' --data-binary @- +20.12.3.3 +``` + +or by allowing the container to use [host ports directly](https://docs.docker.com/network/host/) using `--network=host` (which also allows achieving better network performance): + +```bash +$ docker run -d --network=host --name some-clickhouse-server --ulimit nofile=262144:262144 yandex/clickhouse-server +$ echo 'SELECT version()' | curl 'http://localhost:8123/' --data-binary @- +20.12.3.3 +``` + +### Volumes + +Typically you may want to mount the following folders inside your container to achieve persistency: + +* `/var/lib/clickhouse/` - main folder where ClickHouse stores the data +* `/var/log/clickhouse-server/` - logs + +```bash +$ docker run -d \ + -v $(realpath ./ch_data):/var/lib/clickhouse/ \ + -v $(realpath ./ch_logs):/var/log/clickhouse-server/ \ + --name some-clickhouse-server --ulimit nofile=262144:262144 yandex/clickhouse-server +``` + +You may also want to mount: + +* `/etc/clickhouse-server/config.d/*.xml` - files with server configuration adjustments +* `/etc/clickhouse-server/users.d/*.xml` - files with user settings adjustments +* `/docker-entrypoint-initdb.d/` - folder with database initialization scripts (see below).
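+
+For illustration, a minimal init script could be wired up as follows; scripts placed in `/docker-entrypoint-initdb.d/` are typically executed when the container initializes a fresh database. The file name `my-init.sql` and the `demo` database are hypothetical, invented for this example:
+
+```bash
+$ echo 'CREATE DATABASE IF NOT EXISTS demo' > my-init.sql
+$ docker run -d \
+    -v $(realpath ./my-init.sql):/docker-entrypoint-initdb.d/my-init.sql \
+    --name some-clickhouse-server --ulimit nofile=262144:262144 yandex/clickhouse-server
+```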
+ +### Linux capabilities + +ClickHouse has some advanced functionality which requires enabling several [linux capabilities](https://man7.org/linux/man-pages/man7/capabilities.7.html). + +It is optional and can be enabled using the following [docker command line arguments](https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities): + +```bash +$ docker run -d \ + --cap-add=SYS_NICE --cap-add=NET_ADMIN --cap-add=IPC_LOCK \ + --name some-clickhouse-server --ulimit nofile=262144:262144 yandex/clickhouse-server +``` + ## Configuration Container exposes 8123 port for [HTTP interface](https://clickhouse.yandex/docs/en/interfaces/http_interface/) and 9000 port for [native client](https://clickhouse.yandex/docs/en/interfaces/tcp/). diff --git a/docker/test/fasttest/run.sh b/docker/test/fasttest/run.sh index f58d1b1e779..a918cc44420 100755 --- a/docker/test/fasttest/run.sh +++ b/docker/test/fasttest/run.sh @@ -64,7 +64,14 @@ function stop_server function start_server { set -m # Spawn server in its own process groups - clickhouse-server --config-file="$FASTTEST_DATA/config.xml" -- --path "$FASTTEST_DATA" --user_files_path "$FASTTEST_DATA/user_files" &>> "$FASTTEST_OUTPUT/server.log" & + local opts=( + --config-file="$FASTTEST_DATA/config.xml" + -- + --path "$FASTTEST_DATA" + --user_files_path "$FASTTEST_DATA/user_files" + --top_level_domains_path "$FASTTEST_DATA/top_level_domains" + ) + clickhouse-server "${opts[@]}" &>> "$FASTTEST_OUTPUT/server.log" & server_pid=$! set +m diff --git a/docker/test/performance-comparison/Dockerfile b/docker/test/performance-comparison/Dockerfile index 8734e47e80f..5ec048de657 100644 --- a/docker/test/performance-comparison/Dockerfile +++ b/docker/test/performance-comparison/Dockerfile @@ -53,4 +53,3 @@ COPY * / CMD ["bash", "-c", "node=$((RANDOM % $(numactl --hardware | sed -n 's/^.*available:\\(.*\\)nodes.*$/\\1/p'))); echo Will bind to NUMA node $node; numactl --cpunodebind=$node --membind=$node /entrypoint.sh"] # docker run --network=host --volume :/workspace --volume=:/output -e PR_TO_TEST=<> -e SHA_TO_TEST=<> yandex/clickhouse-performance-comparison - diff --git a/docker/test/performance-comparison/compare.sh b/docker/test/performance-comparison/compare.sh index 6068a12f8a7..59d7cc98063 100755 --- a/docker/test/performance-comparison/compare.sh +++ b/docker/test/performance-comparison/compare.sh @@ -55,6 +55,7 @@ function configure # server *config* directives overrides --path db0 --user_files_path db0/user_files + --top_level_domains_path /top_level_domains --tcp_port $LEFT_SERVER_PORT ) left/clickhouse-server "${setup_left_server_opts[@]}" &> setup-server-log.log & @@ -102,6 +103,7 @@ function restart # server *config* directives overrides --path left/db --user_files_path left/db/user_files + --top_level_domains_path /top_level_domains --tcp_port $LEFT_SERVER_PORT ) left/clickhouse-server "${left_server_opts[@]}" &>> left-server-log.log & @@ -116,6 +118,7 @@ function restart # server *config* directives overrides --path right/db --user_files_path right/db/user_files + --top_level_domains_path /top_level_domains --tcp_port $RIGHT_SERVER_PORT ) right/clickhouse-server "${right_server_opts[@]}" &>> right-server-log.log & diff --git a/docker/test/performance-comparison/config/config.d/top_level_domains_lists.xml b/docker/test/performance-comparison/config/config.d/top_level_domains_lists.xml new file mode 100644 index 00000000000..7b5e6a5638a --- /dev/null +++ b/docker/test/performance-comparison/config/config.d/top_level_domains_lists.xml @@ -0,0 +1,5 @@ +<yandex> + <top_level_domains_lists> + <public_suffix_list>public_suffix_list.dat</public_suffix_list> + </top_level_domains_lists> +</yandex>
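The config file above registers the public suffix list under the `public_suffix_list` key of `top_level_domains_lists`, and the `--top_level_domains_path` options added to the server command lines tell the servers where to find the `.dat` file. A hedged sketch of how such a list could be exercised, assuming the `cutToFirstSignificantSubdomainCustom` function from this release line (the query is illustrative, not part of the PR):

```bash
# Hypothetical smoke test against a server started with the configs above;
# the second argument names the list registered in top_level_domains_lists.
clickhouse-client --query \
    "SELECT cutToFirstSignificantSubdomainCustom('foo.bar.co.uk', 'public_suffix_list')"
# Expected output: bar.co.uk, since co.uk is a public suffix in the list.
```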
diff --git a/docker/test/performance-comparison/config/config.d/zzz-perf-comparison-tweaks-config.xml b/docker/test/performance-comparison/config/config.d/zzz-perf-comparison-tweaks-config.xml index 81dab1a48b0..ee2006201b0 100644 --- a/docker/test/performance-comparison/config/config.d/zzz-perf-comparison-tweaks-config.xml +++ b/docker/test/performance-comparison/config/config.d/zzz-perf-comparison-tweaks-config.xml @@ -3,6 +3,7 @@ + <listen_host>::</listen_host> diff --git a/docker/test/performance-comparison/config/top_level_domains/public_suffix_list.dat b/docker/test/performance-comparison/config/top_level_domains/public_suffix_list.dat new file mode 100644 index 00000000000..1ede2b929a0 --- /dev/null +++ b/docker/test/performance-comparison/config/top_level_domains/public_suffix_list.dat @@ -0,0 +1,13491 @@ +// This Source Code Form is subject to the terms of the Mozilla Public +// License, v. 2.0. If a copy of the MPL was not distributed with this +// file, You can obtain one at https://mozilla.org/MPL/2.0/. + +// Please pull this list from, and only from https://publicsuffix.org/list/public_suffix_list.dat, +// rather than any other VCS sites. Pulling from any other URL is not guaranteed to be supported. + +// Instructions on pulling and using this list can be found at https://publicsuffix.org/list/. + +// ===BEGIN ICANN DOMAINS=== + +// ac : https://en.wikipedia.org/wiki/.ac +ac +com.ac +edu.ac +gov.ac +net.ac +mil.ac +org.ac + +// ad : https://en.wikipedia.org/wiki/.ad +ad +nom.ad + +// ae : https://en.wikipedia.org/wiki/.ae +// see also: "Domain Name Eligibility Policy" at http://www.aeda.ae/eng/aepolicy.php +ae +co.ae +net.ae +org.ae +sch.ae +ac.ae +gov.ae +mil.ae + +// aero : see https://www.information.aero/index.php?id=66 +aero +accident-investigation.aero +accident-prevention.aero +aerobatic.aero +aeroclub.aero +aerodrome.aero +agents.aero +aircraft.aero +airline.aero +airport.aero +air-surveillance.aero +airtraffic.aero +air-traffic-control.aero +ambulance.aero +amusement.aero +association.aero +author.aero +ballooning.aero +broker.aero +caa.aero +cargo.aero +catering.aero +certification.aero +championship.aero +charter.aero +civilaviation.aero +club.aero +conference.aero +consultant.aero +consulting.aero +control.aero +council.aero +crew.aero +design.aero +dgca.aero +educator.aero +emergency.aero +engine.aero +engineer.aero +entertainment.aero +equipment.aero +exchange.aero +express.aero +federation.aero +flight.aero +fuel.aero +gliding.aero +government.aero +groundhandling.aero +group.aero +hanggliding.aero +homebuilt.aero +insurance.aero +journal.aero +journalist.aero +leasing.aero +logistics.aero +magazine.aero +maintenance.aero +media.aero +microlight.aero +modelling.aero +navigation.aero +parachuting.aero +paragliding.aero +passenger-association.aero +pilot.aero +press.aero +production.aero +recreation.aero +repbody.aero +res.aero +research.aero +rotorcraft.aero +safety.aero +scientist.aero +services.aero +show.aero +skydiving.aero +software.aero +student.aero +trader.aero +trading.aero +trainer.aero +union.aero +workinggroup.aero +works.aero + +// af : http://www.nic.af/help.jsp +af +gov.af +com.af +org.af +net.af +edu.af + +// ag : http://www.nic.ag/prices.htm +ag +com.ag +org.ag +net.ag +co.ag +nom.ag + +// ai : http://nic.com.ai/ +ai +off.ai +com.ai +net.ai +org.ai + +// al : http://www.ert.gov.al/ert_alb/faq_det.html?Id=31 +al +com.al +edu.al +gov.al +mil.al +net.al +org.al + +// am : https://www.amnic.net/policy/en/Policy_EN.pdf +am +co.am +com.am +commune.am +net.am +org.am + +// ao : https://en.wikipedia.org/wiki/.ao +// http://www.dns.ao/REGISTR.DOC +ao +ed.ao +gv.ao +og.ao +co.ao
+pb.ao +it.ao + +// aq : https://en.wikipedia.org/wiki/.aq +aq + +// ar : https://nic.ar/nic-argentina/normativa-vigente +ar +com.ar +edu.ar +gob.ar +gov.ar +int.ar +mil.ar +musica.ar +net.ar +org.ar +tur.ar + +// arpa : https://en.wikipedia.org/wiki/.arpa +// Confirmed by registry 2008-06-18 +arpa +e164.arpa +in-addr.arpa +ip6.arpa +iris.arpa +uri.arpa +urn.arpa + +// as : https://en.wikipedia.org/wiki/.as +as +gov.as + +// asia : https://en.wikipedia.org/wiki/.asia +asia + +// at : https://en.wikipedia.org/wiki/.at +// Confirmed by registry 2008-06-17 +at +ac.at +co.at +gv.at +or.at +sth.ac.at + +// au : https://en.wikipedia.org/wiki/.au +// http://www.auda.org.au/ +au +// 2LDs +com.au +net.au +org.au +edu.au +gov.au +asn.au +id.au +// Historic 2LDs (closed to new registration, but sites still exist) +info.au +conf.au +oz.au +// CGDNs - http://www.cgdn.org.au/ +act.au +nsw.au +nt.au +qld.au +sa.au +tas.au +vic.au +wa.au +// 3LDs +act.edu.au +catholic.edu.au +// eq.edu.au - Removed at the request of the Queensland Department of Education +nsw.edu.au +nt.edu.au +qld.edu.au +sa.edu.au +tas.edu.au +vic.edu.au +wa.edu.au +// act.gov.au Bug 984824 - Removed at request of Greg Tankard +// nsw.gov.au Bug 547985 - Removed at request of +// nt.gov.au Bug 940478 - Removed at request of Greg Connors +qld.gov.au +sa.gov.au +tas.gov.au +vic.gov.au +wa.gov.au +// 4LDs +// education.tas.edu.au - Removed at the request of the Department of Education Tasmania +schools.nsw.edu.au + +// aw : https://en.wikipedia.org/wiki/.aw +aw +com.aw + +// ax : https://en.wikipedia.org/wiki/.ax +ax + +// az : https://en.wikipedia.org/wiki/.az +az +com.az +net.az +int.az +gov.az +org.az +edu.az +info.az +pp.az +mil.az +name.az +pro.az +biz.az + +// ba : http://nic.ba/users_data/files/pravilnik_o_registraciji.pdf +ba +com.ba +edu.ba +gov.ba +mil.ba +net.ba +org.ba + +// bb : https://en.wikipedia.org/wiki/.bb +bb +biz.bb +co.bb +com.bb +edu.bb +gov.bb +info.bb +net.bb +org.bb +store.bb +tv.bb + +// bd : https://en.wikipedia.org/wiki/.bd +*.bd + +// be : https://en.wikipedia.org/wiki/.be +// Confirmed by registry 2008-06-08 +be +ac.be + +// bf : https://en.wikipedia.org/wiki/.bf +bf +gov.bf + +// bg : https://en.wikipedia.org/wiki/.bg +// https://www.register.bg/user/static/rules/en/index.html +bg +a.bg +b.bg +c.bg +d.bg +e.bg +f.bg +g.bg +h.bg +i.bg +j.bg +k.bg +l.bg +m.bg +n.bg +o.bg +p.bg +q.bg +r.bg +s.bg +t.bg +u.bg +v.bg +w.bg +x.bg +y.bg +z.bg +0.bg +1.bg +2.bg +3.bg +4.bg +5.bg +6.bg +7.bg +8.bg +9.bg + +// bh : https://en.wikipedia.org/wiki/.bh +bh +com.bh +edu.bh +net.bh +org.bh +gov.bh + +// bi : https://en.wikipedia.org/wiki/.bi +// http://whois.nic.bi/ +bi +co.bi +com.bi +edu.bi +or.bi +org.bi + +// biz : https://en.wikipedia.org/wiki/.biz +biz + +// bj : https://en.wikipedia.org/wiki/.bj +bj +asso.bj +barreau.bj +gouv.bj + +// bm : http://www.bermudanic.bm/dnr-text.txt +bm +com.bm +edu.bm +gov.bm +net.bm +org.bm + +// bn : http://www.bnnic.bn/faqs +bn +com.bn +edu.bn +gov.bn +net.bn +org.bn + +// bo : https://nic.bo/delegacion2015.php#h-1.10 +bo +com.bo +edu.bo +gob.bo +int.bo +org.bo +net.bo +mil.bo +tv.bo +web.bo +// Social Domains +academia.bo +agro.bo +arte.bo +blog.bo +bolivia.bo +ciencia.bo +cooperativa.bo +democracia.bo +deporte.bo +ecologia.bo +economia.bo +empresa.bo +indigena.bo +industria.bo +info.bo +medicina.bo +movimiento.bo +musica.bo +natural.bo +nombre.bo +noticias.bo +patria.bo +politica.bo +profesional.bo +plurinacional.bo +pueblo.bo +revista.bo +salud.bo +tecnologia.bo +tksat.bo 
+transporte.bo +wiki.bo + +// br : http://registro.br/dominio/categoria.html +// Submitted by registry +br +9guacu.br +abc.br +adm.br +adv.br +agr.br +aju.br +am.br +anani.br +aparecida.br +app.br +arq.br +art.br +ato.br +b.br +barueri.br +belem.br +bhz.br +bib.br +bio.br +blog.br +bmd.br +boavista.br +bsb.br +campinagrande.br +campinas.br +caxias.br +cim.br +cng.br +cnt.br +com.br +contagem.br +coop.br +coz.br +cri.br +cuiaba.br +curitiba.br +def.br +des.br +det.br +dev.br +ecn.br +eco.br +edu.br +emp.br +enf.br +eng.br +esp.br +etc.br +eti.br +far.br +feira.br +flog.br +floripa.br +fm.br +fnd.br +fortal.br +fot.br +foz.br +fst.br +g12.br +geo.br +ggf.br +goiania.br +gov.br +// gov.br 26 states + df https://en.wikipedia.org/wiki/States_of_Brazil +ac.gov.br +al.gov.br +am.gov.br +ap.gov.br +ba.gov.br +ce.gov.br +df.gov.br +es.gov.br +go.gov.br +ma.gov.br +mg.gov.br +ms.gov.br +mt.gov.br +pa.gov.br +pb.gov.br +pe.gov.br +pi.gov.br +pr.gov.br +rj.gov.br +rn.gov.br +ro.gov.br +rr.gov.br +rs.gov.br +sc.gov.br +se.gov.br +sp.gov.br +to.gov.br +gru.br +imb.br +ind.br +inf.br +jab.br +jampa.br +jdf.br +joinville.br +jor.br +jus.br +leg.br +lel.br +log.br +londrina.br +macapa.br +maceio.br +manaus.br +maringa.br +mat.br +med.br +mil.br +morena.br +mp.br +mus.br +natal.br +net.br +niteroi.br +*.nom.br +not.br +ntr.br +odo.br +ong.br +org.br +osasco.br +palmas.br +poa.br +ppg.br +pro.br +psc.br +psi.br +pvh.br +qsl.br +radio.br +rec.br +recife.br +rep.br +ribeirao.br +rio.br +riobranco.br +riopreto.br +salvador.br +sampa.br +santamaria.br +santoandre.br +saobernardo.br +saogonca.br +seg.br +sjc.br +slg.br +slz.br +sorocaba.br +srv.br +taxi.br +tc.br +tec.br +teo.br +the.br +tmp.br +trd.br +tur.br +tv.br +udi.br +vet.br +vix.br +vlog.br +wiki.br +zlg.br + +// bs : http://www.nic.bs/rules.html +bs +com.bs +net.bs +org.bs +edu.bs +gov.bs + +// bt : https://en.wikipedia.org/wiki/.bt +bt +com.bt +edu.bt +gov.bt +net.bt +org.bt + +// bv : No registrations at this time. +// Submitted by registry +bv + +// bw : https://en.wikipedia.org/wiki/.bw +// http://www.gobin.info/domainname/bw.doc +// list of other 2nd level tlds ? +bw +co.bw +org.bw + +// by : https://en.wikipedia.org/wiki/.by +// http://tld.by/rules_2006_en.html +// list of other 2nd level tlds ? +by +gov.by +mil.by +// Official information does not indicate that com.by is a reserved +// second-level domain, but it's being used as one (see www.google.com.by and +// www.yahoo.com.by, for example), so we list it here for safety's sake. 
+com.by + +// http://hoster.by/ +of.by + +// bz : https://en.wikipedia.org/wiki/.bz +// http://www.belizenic.bz/ +bz +com.bz +net.bz +org.bz +edu.bz +gov.bz + +// ca : https://en.wikipedia.org/wiki/.ca +ca +// ca geographical names +ab.ca +bc.ca +mb.ca +nb.ca +nf.ca +nl.ca +ns.ca +nt.ca +nu.ca +on.ca +pe.ca +qc.ca +sk.ca +yk.ca +// gc.ca: https://en.wikipedia.org/wiki/.gc.ca +// see also: http://registry.gc.ca/en/SubdomainFAQ +gc.ca + +// cat : https://en.wikipedia.org/wiki/.cat +cat + +// cc : https://en.wikipedia.org/wiki/.cc +cc + +// cd : https://en.wikipedia.org/wiki/.cd +// see also: https://www.nic.cd/domain/insertDomain_2.jsp?act=1 +cd +gov.cd + +// cf : https://en.wikipedia.org/wiki/.cf +cf + +// cg : https://en.wikipedia.org/wiki/.cg +cg + +// ch : https://en.wikipedia.org/wiki/.ch +ch + +// ci : https://en.wikipedia.org/wiki/.ci +// http://www.nic.ci/index.php?page=charte +ci +org.ci +or.ci +com.ci +co.ci +edu.ci +ed.ci +ac.ci +net.ci +go.ci +asso.ci +aéroport.ci +int.ci +presse.ci +md.ci +gouv.ci + +// ck : https://en.wikipedia.org/wiki/.ck +*.ck +!www.ck + +// cl : https://www.nic.cl +// Confirmed by .CL registry +cl +aprendemas.cl +co.cl +gob.cl +gov.cl +mil.cl + +// cm : https://en.wikipedia.org/wiki/.cm plus bug 981927 +cm +co.cm +com.cm +gov.cm +net.cm + +// cn : https://en.wikipedia.org/wiki/.cn +// Submitted by registry +cn +ac.cn +com.cn +edu.cn +gov.cn +net.cn +org.cn +mil.cn +公司.cn +网络.cn +網絡.cn +// cn geographic names +ah.cn +bj.cn +cq.cn +fj.cn +gd.cn +gs.cn +gz.cn +gx.cn +ha.cn +hb.cn +he.cn +hi.cn +hl.cn +hn.cn +jl.cn +js.cn +jx.cn +ln.cn +nm.cn +nx.cn +qh.cn +sc.cn +sd.cn +sh.cn +sn.cn +sx.cn +tj.cn +xj.cn +xz.cn +yn.cn +zj.cn +hk.cn +mo.cn +tw.cn + +// co : https://en.wikipedia.org/wiki/.co +// Submitted by registry +co +arts.co +com.co +edu.co +firm.co +gov.co +info.co +int.co +mil.co +net.co +nom.co +org.co +rec.co +web.co + +// com : https://en.wikipedia.org/wiki/.com +com + +// coop : https://en.wikipedia.org/wiki/.coop +coop + +// cr : http://www.nic.cr/niccr_publico/showRegistroDominiosScreen.do +cr +ac.cr +co.cr +ed.cr +fi.cr +go.cr +or.cr +sa.cr + +// cu : https://en.wikipedia.org/wiki/.cu +cu +com.cu +edu.cu +org.cu +net.cu +gov.cu +inf.cu + +// cv : https://en.wikipedia.org/wiki/.cv +cv + +// cw : http://www.una.cw/cw_registry/ +// Confirmed by registry 2013-03-26 +cw +com.cw +edu.cw +net.cw +org.cw + +// cx : https://en.wikipedia.org/wiki/.cx +// list of other 2nd level tlds ? 
+cx +gov.cx + +// cy : http://www.nic.cy/ +// Submitted by registry Panayiotou Fotia +cy +ac.cy +biz.cy +com.cy +ekloges.cy +gov.cy +ltd.cy +name.cy +net.cy +org.cy +parliament.cy +press.cy +pro.cy +tm.cy + +// cz : https://en.wikipedia.org/wiki/.cz +cz + +// de : https://en.wikipedia.org/wiki/.de +// Confirmed by registry (with technical +// reservations) 2008-07-01 +de + +// dj : https://en.wikipedia.org/wiki/.dj +dj + +// dk : https://en.wikipedia.org/wiki/.dk +// Confirmed by registry 2008-06-17 +dk + +// dm : https://en.wikipedia.org/wiki/.dm +dm +com.dm +net.dm +org.dm +edu.dm +gov.dm + +// do : https://en.wikipedia.org/wiki/.do +do +art.do +com.do +edu.do +gob.do +gov.do +mil.do +net.do +org.do +sld.do +web.do + +// dz : http://www.nic.dz/images/pdf_nic/charte.pdf +dz +art.dz +asso.dz +com.dz +edu.dz +gov.dz +org.dz +net.dz +pol.dz +soc.dz +tm.dz + +// ec : http://www.nic.ec/reg/paso1.asp +// Submitted by registry +ec +com.ec +info.ec +net.ec +fin.ec +k12.ec +med.ec +pro.ec +org.ec +edu.ec +gov.ec +gob.ec +mil.ec + +// edu : https://en.wikipedia.org/wiki/.edu +edu + +// ee : http://www.eenet.ee/EENet/dom_reeglid.html#lisa_B +ee +edu.ee +gov.ee +riik.ee +lib.ee +med.ee +com.ee +pri.ee +aip.ee +org.ee +fie.ee + +// eg : https://en.wikipedia.org/wiki/.eg +eg +com.eg +edu.eg +eun.eg +gov.eg +mil.eg +name.eg +net.eg +org.eg +sci.eg + +// er : https://en.wikipedia.org/wiki/.er +*.er + +// es : https://www.nic.es/site_ingles/ingles/dominios/index.html +es +com.es +nom.es +org.es +gob.es +edu.es + +// et : https://en.wikipedia.org/wiki/.et +et +com.et +gov.et +org.et +edu.et +biz.et +name.et +info.et +net.et + +// eu : https://en.wikipedia.org/wiki/.eu +eu + +// fi : https://en.wikipedia.org/wiki/.fi +fi +// aland.fi : https://en.wikipedia.org/wiki/.ax +// This domain is being phased out in favor of .ax. As there are still many +// domains under aland.fi, we still keep it on the list until aland.fi is +// completely removed. 
+// TODO: Check for updates (expected to be phased out around Q1/2009) +aland.fi + +// fj : http://domains.fj/ +// Submitted by registry 2020-02-11 +fj +ac.fj +biz.fj +com.fj +gov.fj +info.fj +mil.fj +name.fj +net.fj +org.fj +pro.fj + +// fk : https://en.wikipedia.org/wiki/.fk +*.fk + +// fm : https://en.wikipedia.org/wiki/.fm +com.fm +edu.fm +net.fm +org.fm +fm + +// fo : https://en.wikipedia.org/wiki/.fo +fo + +// fr : http://www.afnic.fr/ +// domaines descriptifs : https://www.afnic.fr/medias/documents/Cadre_legal/Afnic_Naming_Policy_12122016_VEN.pdf +fr +asso.fr +com.fr +gouv.fr +nom.fr +prd.fr +tm.fr +// domaines sectoriels : https://www.afnic.fr/en/products-and-services/the-fr-tld/sector-based-fr-domains-4.html +aeroport.fr +avocat.fr +avoues.fr +cci.fr +chambagri.fr +chirurgiens-dentistes.fr +experts-comptables.fr +geometre-expert.fr +greta.fr +huissier-justice.fr +medecin.fr +notaires.fr +pharmacien.fr +port.fr +veterinaire.fr + +// ga : https://en.wikipedia.org/wiki/.ga +ga + +// gb : This registry is effectively dormant +// Submitted by registry +gb + +// gd : https://en.wikipedia.org/wiki/.gd +edu.gd +gov.gd +gd + +// ge : http://www.nic.net.ge/policy_en.pdf +ge +com.ge +edu.ge +gov.ge +org.ge +mil.ge +net.ge +pvt.ge + +// gf : https://en.wikipedia.org/wiki/.gf +gf + +// gg : http://www.channelisles.net/register-domains/ +// Confirmed by registry 2013-11-28 +gg +co.gg +net.gg +org.gg + +// gh : https://en.wikipedia.org/wiki/.gh +// see also: http://www.nic.gh/reg_now.php +// Although domains directly at second level are not possible at the moment, +// they have been possible for some time and may come back. +gh +com.gh +edu.gh +gov.gh +org.gh +mil.gh + +// gi : http://www.nic.gi/rules.html +gi +com.gi +ltd.gi +gov.gi +mod.gi +edu.gi +org.gi + +// gl : https://en.wikipedia.org/wiki/.gl +// http://nic.gl +gl +co.gl +com.gl +edu.gl +net.gl +org.gl + +// gm : http://www.nic.gm/htmlpages%5Cgm-policy.htm +gm + +// gn : http://psg.com/dns/gn/gn.txt +// Submitted by registry +gn +ac.gn +com.gn +edu.gn +gov.gn +org.gn +net.gn + +// gov : https://en.wikipedia.org/wiki/.gov +gov + +// gp : http://www.nic.gp/index.php?lang=en +gp +com.gp +net.gp +mobi.gp +edu.gp +org.gp +asso.gp + +// gq : https://en.wikipedia.org/wiki/.gq +gq + +// gr : https://grweb.ics.forth.gr/english/1617-B-2005.html +// Submitted by registry +gr +com.gr +edu.gr +net.gr +org.gr +gov.gr + +// gs : https://en.wikipedia.org/wiki/.gs +gs + +// gt : https://www.gt/sitio/registration_policy.php?lang=en +gt +com.gt +edu.gt +gob.gt +ind.gt +mil.gt +net.gt +org.gt + +// gu : http://gadao.gov.gu/register.html +// University of Guam : https://www.uog.edu +// Submitted by uognoc@triton.uog.edu +gu +com.gu +edu.gu +gov.gu +guam.gu +info.gu +net.gu +org.gu +web.gu + +// gw : https://en.wikipedia.org/wiki/.gw +gw + +// gy : https://en.wikipedia.org/wiki/.gy +// http://registry.gy/ +gy +co.gy +com.gy +edu.gy +gov.gy +net.gy +org.gy + +// hk : https://www.hkirc.hk +// Submitted by registry +hk +com.hk +edu.hk +gov.hk +idv.hk +net.hk +org.hk +公司.hk +教育.hk +敎育.hk +政府.hk +個人.hk +个人.hk +箇人.hk +網络.hk +网络.hk +组織.hk +網絡.hk +网絡.hk +组织.hk +組織.hk +組织.hk + +// hm : https://en.wikipedia.org/wiki/.hm +hm + +// hn : http://www.nic.hn/politicas/ps02,,05.html +hn +com.hn +edu.hn +org.hn +net.hn +mil.hn +gob.hn + +// hr : http://www.dns.hr/documents/pdf/HRTLD-regulations.pdf +hr +iz.hr +from.hr +name.hr +com.hr + +// ht : http://www.nic.ht/info/charte.cfm +ht +com.ht +shop.ht +firm.ht +info.ht +adult.ht +net.ht +pro.ht +org.ht +med.ht +art.ht +coop.ht 
+pol.ht +asso.ht +edu.ht +rel.ht +gouv.ht +perso.ht + +// hu : http://www.domain.hu/domain/English/sld.html +// Confirmed by registry 2008-06-12 +hu +co.hu +info.hu +org.hu +priv.hu +sport.hu +tm.hu +2000.hu +agrar.hu +bolt.hu +casino.hu +city.hu +erotica.hu +erotika.hu +film.hu +forum.hu +games.hu +hotel.hu +ingatlan.hu +jogasz.hu +konyvelo.hu +lakas.hu +media.hu +news.hu +reklam.hu +sex.hu +shop.hu +suli.hu +szex.hu +tozsde.hu +utazas.hu +video.hu + +// id : https://pandi.id/en/domain/registration-requirements/ +id +ac.id +biz.id +co.id +desa.id +go.id +mil.id +my.id +net.id +or.id +ponpes.id +sch.id +web.id + +// ie : https://en.wikipedia.org/wiki/.ie +ie +gov.ie + +// il : http://www.isoc.org.il/domains/ +il +ac.il +co.il +gov.il +idf.il +k12.il +muni.il +net.il +org.il + +// im : https://www.nic.im/ +// Submitted by registry +im +ac.im +co.im +com.im +ltd.co.im +net.im +org.im +plc.co.im +tt.im +tv.im + +// in : https://en.wikipedia.org/wiki/.in +// see also: https://registry.in/Policies +// Please note, that nic.in is not an official eTLD, but used by most +// government institutions. +in +co.in +firm.in +net.in +org.in +gen.in +ind.in +nic.in +ac.in +edu.in +res.in +gov.in +mil.in + +// info : https://en.wikipedia.org/wiki/.info +info + +// int : https://en.wikipedia.org/wiki/.int +// Confirmed by registry 2008-06-18 +int +eu.int + +// io : http://www.nic.io/rules.html +// list of other 2nd level tlds ? +io +com.io + +// iq : http://www.cmc.iq/english/iq/iqregister1.htm +iq +gov.iq +edu.iq +mil.iq +com.iq +org.iq +net.iq + +// ir : http://www.nic.ir/Terms_and_Conditions_ir,_Appendix_1_Domain_Rules +// Also see http://www.nic.ir/Internationalized_Domain_Names +// Two .ir entries added at request of , 2010-04-16 +ir +ac.ir +co.ir +gov.ir +id.ir +net.ir +org.ir +sch.ir +// xn--mgba3a4f16a.ir (.ir, Persian YEH) +ایران.ir +// xn--mgba3a4fra.ir (.ir, Arabic YEH) +ايران.ir + +// is : http://www.isnic.is/domain/rules.php +// Confirmed by registry 2008-12-06 +is +net.is +com.is +edu.is +gov.is +org.is +int.is + +// it : https://en.wikipedia.org/wiki/.it +it +gov.it +edu.it +// Reserved geo-names (regions and provinces): +// https://www.nic.it/sites/default/files/archivio/docs/Regulation_assignation_v7.1.pdf +// Regions +abr.it +abruzzo.it +aosta-valley.it +aostavalley.it +bas.it +basilicata.it +cal.it +calabria.it +cam.it +campania.it +emilia-romagna.it +emiliaromagna.it +emr.it +friuli-v-giulia.it +friuli-ve-giulia.it +friuli-vegiulia.it +friuli-venezia-giulia.it +friuli-veneziagiulia.it +friuli-vgiulia.it +friuliv-giulia.it +friulive-giulia.it +friulivegiulia.it +friulivenezia-giulia.it +friuliveneziagiulia.it +friulivgiulia.it +fvg.it +laz.it +lazio.it +lig.it +liguria.it +lom.it +lombardia.it +lombardy.it +lucania.it +mar.it +marche.it +mol.it +molise.it +piedmont.it +piemonte.it +pmn.it +pug.it +puglia.it +sar.it +sardegna.it +sardinia.it +sic.it +sicilia.it +sicily.it +taa.it +tos.it +toscana.it +trentin-sud-tirol.it +trentin-süd-tirol.it +trentin-sudtirol.it +trentin-südtirol.it +trentin-sued-tirol.it +trentin-suedtirol.it +trentino-a-adige.it +trentino-aadige.it +trentino-alto-adige.it +trentino-altoadige.it +trentino-s-tirol.it +trentino-stirol.it +trentino-sud-tirol.it +trentino-süd-tirol.it +trentino-sudtirol.it +trentino-südtirol.it +trentino-sued-tirol.it +trentino-suedtirol.it +trentino.it +trentinoa-adige.it +trentinoaadige.it +trentinoalto-adige.it +trentinoaltoadige.it +trentinos-tirol.it +trentinostirol.it +trentinosud-tirol.it +trentinosüd-tirol.it +trentinosudtirol.it 
+trentinosüdtirol.it +trentinosued-tirol.it +trentinosuedtirol.it +trentinsud-tirol.it +trentinsüd-tirol.it +trentinsudtirol.it +trentinsüdtirol.it +trentinsued-tirol.it +trentinsuedtirol.it +tuscany.it +umb.it +umbria.it +val-d-aosta.it +val-daosta.it +vald-aosta.it +valdaosta.it +valle-aosta.it +valle-d-aosta.it +valle-daosta.it +valleaosta.it +valled-aosta.it +valledaosta.it +vallee-aoste.it +vallée-aoste.it +vallee-d-aoste.it +vallée-d-aoste.it +valleeaoste.it +valléeaoste.it +valleedaoste.it +valléedaoste.it +vao.it +vda.it +ven.it +veneto.it +// Provinces +ag.it +agrigento.it +al.it +alessandria.it +alto-adige.it +altoadige.it +an.it +ancona.it +andria-barletta-trani.it +andria-trani-barletta.it +andriabarlettatrani.it +andriatranibarletta.it +ao.it +aosta.it +aoste.it +ap.it +aq.it +aquila.it +ar.it +arezzo.it +ascoli-piceno.it +ascolipiceno.it +asti.it +at.it +av.it +avellino.it +ba.it +balsan-sudtirol.it +balsan-südtirol.it +balsan-suedtirol.it +balsan.it +bari.it +barletta-trani-andria.it +barlettatraniandria.it +belluno.it +benevento.it +bergamo.it +bg.it +bi.it +biella.it +bl.it +bn.it +bo.it +bologna.it +bolzano-altoadige.it +bolzano.it +bozen-sudtirol.it +bozen-südtirol.it +bozen-suedtirol.it +bozen.it +br.it +brescia.it +brindisi.it +bs.it +bt.it +bulsan-sudtirol.it +bulsan-südtirol.it +bulsan-suedtirol.it +bulsan.it +bz.it +ca.it +cagliari.it +caltanissetta.it +campidano-medio.it +campidanomedio.it +campobasso.it +carbonia-iglesias.it +carboniaiglesias.it +carrara-massa.it +carraramassa.it +caserta.it +catania.it +catanzaro.it +cb.it +ce.it +cesena-forli.it +cesena-forlì.it +cesenaforli.it +cesenaforlì.it +ch.it +chieti.it +ci.it +cl.it +cn.it +co.it +como.it +cosenza.it +cr.it +cremona.it +crotone.it +cs.it +ct.it +cuneo.it +cz.it +dell-ogliastra.it +dellogliastra.it +en.it +enna.it +fc.it +fe.it +fermo.it +ferrara.it +fg.it +fi.it +firenze.it +florence.it +fm.it +foggia.it +forli-cesena.it +forlì-cesena.it +forlicesena.it +forlìcesena.it +fr.it +frosinone.it +ge.it +genoa.it +genova.it +go.it +gorizia.it +gr.it +grosseto.it +iglesias-carbonia.it +iglesiascarbonia.it +im.it +imperia.it +is.it +isernia.it +kr.it +la-spezia.it +laquila.it +laspezia.it +latina.it +lc.it +le.it +lecce.it +lecco.it +li.it +livorno.it +lo.it +lodi.it +lt.it +lu.it +lucca.it +macerata.it +mantova.it +massa-carrara.it +massacarrara.it +matera.it +mb.it +mc.it +me.it +medio-campidano.it +mediocampidano.it +messina.it +mi.it +milan.it +milano.it +mn.it +mo.it +modena.it +monza-brianza.it +monza-e-della-brianza.it +monza.it +monzabrianza.it +monzaebrianza.it +monzaedellabrianza.it +ms.it +mt.it +na.it +naples.it +napoli.it +no.it +novara.it +nu.it +nuoro.it +og.it +ogliastra.it +olbia-tempio.it +olbiatempio.it +or.it +oristano.it +ot.it +pa.it +padova.it +padua.it +palermo.it +parma.it +pavia.it +pc.it +pd.it +pe.it +perugia.it +pesaro-urbino.it +pesarourbino.it +pescara.it +pg.it +pi.it +piacenza.it +pisa.it +pistoia.it +pn.it +po.it +pordenone.it +potenza.it +pr.it +prato.it +pt.it +pu.it +pv.it +pz.it +ra.it +ragusa.it +ravenna.it +rc.it +re.it +reggio-calabria.it +reggio-emilia.it +reggiocalabria.it +reggioemilia.it +rg.it +ri.it +rieti.it +rimini.it +rm.it +rn.it +ro.it +roma.it +rome.it +rovigo.it +sa.it +salerno.it +sassari.it +savona.it +si.it +siena.it +siracusa.it +so.it +sondrio.it +sp.it +sr.it +ss.it +suedtirol.it +südtirol.it +sv.it +ta.it +taranto.it +te.it +tempio-olbia.it +tempioolbia.it +teramo.it +terni.it +tn.it +to.it +torino.it +tp.it +tr.it +trani-andria-barletta.it 
+trani-barletta-andria.it +traniandriabarletta.it +tranibarlettaandria.it +trapani.it +trento.it +treviso.it +trieste.it +ts.it +turin.it +tv.it +ud.it +udine.it +urbino-pesaro.it +urbinopesaro.it +va.it +varese.it +vb.it +vc.it +ve.it +venezia.it +venice.it +verbania.it +vercelli.it +verona.it +vi.it +vibo-valentia.it +vibovalentia.it +vicenza.it +viterbo.it +vr.it +vs.it +vt.it +vv.it + +// je : http://www.channelisles.net/register-domains/ +// Confirmed by registry 2013-11-28 +je +co.je +net.je +org.je + +// jm : http://www.com.jm/register.html +*.jm + +// jo : http://www.dns.jo/Registration_policy.aspx +jo +com.jo +org.jo +net.jo +edu.jo +sch.jo +gov.jo +mil.jo +name.jo + +// jobs : https://en.wikipedia.org/wiki/.jobs +jobs + +// jp : https://en.wikipedia.org/wiki/.jp +// http://jprs.co.jp/en/jpdomain.html +// Submitted by registry +jp +// jp organizational type names +ac.jp +ad.jp +co.jp +ed.jp +go.jp +gr.jp +lg.jp +ne.jp +or.jp +// jp prefecture type names +aichi.jp +akita.jp +aomori.jp +chiba.jp +ehime.jp +fukui.jp +fukuoka.jp +fukushima.jp +gifu.jp +gunma.jp +hiroshima.jp +hokkaido.jp +hyogo.jp +ibaraki.jp +ishikawa.jp +iwate.jp +kagawa.jp +kagoshima.jp +kanagawa.jp +kochi.jp +kumamoto.jp +kyoto.jp +mie.jp +miyagi.jp +miyazaki.jp +nagano.jp +nagasaki.jp +nara.jp +niigata.jp +oita.jp +okayama.jp +okinawa.jp +osaka.jp +saga.jp +saitama.jp +shiga.jp +shimane.jp +shizuoka.jp +tochigi.jp +tokushima.jp +tokyo.jp +tottori.jp +toyama.jp +wakayama.jp +yamagata.jp +yamaguchi.jp +yamanashi.jp +栃木.jp +愛知.jp +愛媛.jp +兵庫.jp +熊本.jp +茨城.jp +北海道.jp +千葉.jp +和歌山.jp +長崎.jp +長野.jp +新潟.jp +青森.jp +静岡.jp +東京.jp +石川.jp +埼玉.jp +三重.jp +京都.jp +佐賀.jp +大分.jp +大阪.jp +奈良.jp +宮城.jp +宮崎.jp +富山.jp +山口.jp +山形.jp +山梨.jp +岩手.jp +岐阜.jp +岡山.jp +島根.jp +広島.jp +徳島.jp +沖縄.jp +滋賀.jp +神奈川.jp +福井.jp +福岡.jp +福島.jp +秋田.jp +群馬.jp +香川.jp +高知.jp +鳥取.jp +鹿児島.jp +// jp geographic type names +// http://jprs.jp/doc/rule/saisoku-1.html +*.kawasaki.jp +*.kitakyushu.jp +*.kobe.jp +*.nagoya.jp +*.sapporo.jp +*.sendai.jp +*.yokohama.jp +!city.kawasaki.jp +!city.kitakyushu.jp +!city.kobe.jp +!city.nagoya.jp +!city.sapporo.jp +!city.sendai.jp +!city.yokohama.jp +// 4th level registration +aisai.aichi.jp +ama.aichi.jp +anjo.aichi.jp +asuke.aichi.jp +chiryu.aichi.jp +chita.aichi.jp +fuso.aichi.jp +gamagori.aichi.jp +handa.aichi.jp +hazu.aichi.jp +hekinan.aichi.jp +higashiura.aichi.jp +ichinomiya.aichi.jp +inazawa.aichi.jp +inuyama.aichi.jp +isshiki.aichi.jp +iwakura.aichi.jp +kanie.aichi.jp +kariya.aichi.jp +kasugai.aichi.jp +kira.aichi.jp +kiyosu.aichi.jp +komaki.aichi.jp +konan.aichi.jp +kota.aichi.jp +mihama.aichi.jp +miyoshi.aichi.jp +nishio.aichi.jp +nisshin.aichi.jp +obu.aichi.jp +oguchi.aichi.jp +oharu.aichi.jp +okazaki.aichi.jp +owariasahi.aichi.jp +seto.aichi.jp +shikatsu.aichi.jp +shinshiro.aichi.jp +shitara.aichi.jp +tahara.aichi.jp +takahama.aichi.jp +tobishima.aichi.jp +toei.aichi.jp +togo.aichi.jp +tokai.aichi.jp +tokoname.aichi.jp +toyoake.aichi.jp +toyohashi.aichi.jp +toyokawa.aichi.jp +toyone.aichi.jp +toyota.aichi.jp +tsushima.aichi.jp +yatomi.aichi.jp +akita.akita.jp +daisen.akita.jp +fujisato.akita.jp +gojome.akita.jp +hachirogata.akita.jp +happou.akita.jp +higashinaruse.akita.jp +honjo.akita.jp +honjyo.akita.jp +ikawa.akita.jp +kamikoani.akita.jp +kamioka.akita.jp +katagami.akita.jp +kazuno.akita.jp +kitaakita.akita.jp +kosaka.akita.jp +kyowa.akita.jp +misato.akita.jp +mitane.akita.jp +moriyoshi.akita.jp +nikaho.akita.jp +noshiro.akita.jp +odate.akita.jp +oga.akita.jp +ogata.akita.jp +semboku.akita.jp +yokote.akita.jp 
+yurihonjo.akita.jp +aomori.aomori.jp +gonohe.aomori.jp +hachinohe.aomori.jp +hashikami.aomori.jp +hiranai.aomori.jp +hirosaki.aomori.jp +itayanagi.aomori.jp +kuroishi.aomori.jp +misawa.aomori.jp +mutsu.aomori.jp +nakadomari.aomori.jp +noheji.aomori.jp +oirase.aomori.jp +owani.aomori.jp +rokunohe.aomori.jp +sannohe.aomori.jp +shichinohe.aomori.jp +shingo.aomori.jp +takko.aomori.jp +towada.aomori.jp +tsugaru.aomori.jp +tsuruta.aomori.jp +abiko.chiba.jp +asahi.chiba.jp +chonan.chiba.jp +chosei.chiba.jp +choshi.chiba.jp +chuo.chiba.jp +funabashi.chiba.jp +futtsu.chiba.jp +hanamigawa.chiba.jp +ichihara.chiba.jp +ichikawa.chiba.jp +ichinomiya.chiba.jp +inzai.chiba.jp +isumi.chiba.jp +kamagaya.chiba.jp +kamogawa.chiba.jp +kashiwa.chiba.jp +katori.chiba.jp +katsuura.chiba.jp +kimitsu.chiba.jp +kisarazu.chiba.jp +kozaki.chiba.jp +kujukuri.chiba.jp +kyonan.chiba.jp +matsudo.chiba.jp +midori.chiba.jp +mihama.chiba.jp +minamiboso.chiba.jp +mobara.chiba.jp +mutsuzawa.chiba.jp +nagara.chiba.jp +nagareyama.chiba.jp +narashino.chiba.jp +narita.chiba.jp +noda.chiba.jp +oamishirasato.chiba.jp +omigawa.chiba.jp +onjuku.chiba.jp +otaki.chiba.jp +sakae.chiba.jp +sakura.chiba.jp +shimofusa.chiba.jp +shirako.chiba.jp +shiroi.chiba.jp +shisui.chiba.jp +sodegaura.chiba.jp +sosa.chiba.jp +tako.chiba.jp +tateyama.chiba.jp +togane.chiba.jp +tohnosho.chiba.jp +tomisato.chiba.jp +urayasu.chiba.jp +yachimata.chiba.jp +yachiyo.chiba.jp +yokaichiba.chiba.jp +yokoshibahikari.chiba.jp +yotsukaido.chiba.jp +ainan.ehime.jp +honai.ehime.jp +ikata.ehime.jp +imabari.ehime.jp +iyo.ehime.jp +kamijima.ehime.jp +kihoku.ehime.jp +kumakogen.ehime.jp +masaki.ehime.jp +matsuno.ehime.jp +matsuyama.ehime.jp +namikata.ehime.jp +niihama.ehime.jp +ozu.ehime.jp +saijo.ehime.jp +seiyo.ehime.jp +shikokuchuo.ehime.jp +tobe.ehime.jp +toon.ehime.jp +uchiko.ehime.jp +uwajima.ehime.jp +yawatahama.ehime.jp +echizen.fukui.jp +eiheiji.fukui.jp +fukui.fukui.jp +ikeda.fukui.jp +katsuyama.fukui.jp +mihama.fukui.jp +minamiechizen.fukui.jp +obama.fukui.jp +ohi.fukui.jp +ono.fukui.jp +sabae.fukui.jp +sakai.fukui.jp +takahama.fukui.jp +tsuruga.fukui.jp +wakasa.fukui.jp +ashiya.fukuoka.jp +buzen.fukuoka.jp +chikugo.fukuoka.jp +chikuho.fukuoka.jp +chikujo.fukuoka.jp +chikushino.fukuoka.jp +chikuzen.fukuoka.jp +chuo.fukuoka.jp +dazaifu.fukuoka.jp +fukuchi.fukuoka.jp +hakata.fukuoka.jp +higashi.fukuoka.jp +hirokawa.fukuoka.jp +hisayama.fukuoka.jp +iizuka.fukuoka.jp +inatsuki.fukuoka.jp +kaho.fukuoka.jp +kasuga.fukuoka.jp +kasuya.fukuoka.jp +kawara.fukuoka.jp +keisen.fukuoka.jp +koga.fukuoka.jp +kurate.fukuoka.jp +kurogi.fukuoka.jp +kurume.fukuoka.jp +minami.fukuoka.jp +miyako.fukuoka.jp +miyama.fukuoka.jp +miyawaka.fukuoka.jp +mizumaki.fukuoka.jp +munakata.fukuoka.jp +nakagawa.fukuoka.jp +nakama.fukuoka.jp +nishi.fukuoka.jp +nogata.fukuoka.jp +ogori.fukuoka.jp +okagaki.fukuoka.jp +okawa.fukuoka.jp +oki.fukuoka.jp +omuta.fukuoka.jp +onga.fukuoka.jp +onojo.fukuoka.jp +oto.fukuoka.jp +saigawa.fukuoka.jp +sasaguri.fukuoka.jp +shingu.fukuoka.jp +shinyoshitomi.fukuoka.jp +shonai.fukuoka.jp +soeda.fukuoka.jp +sue.fukuoka.jp +tachiarai.fukuoka.jp +tagawa.fukuoka.jp +takata.fukuoka.jp +toho.fukuoka.jp +toyotsu.fukuoka.jp +tsuiki.fukuoka.jp +ukiha.fukuoka.jp +umi.fukuoka.jp +usui.fukuoka.jp +yamada.fukuoka.jp +yame.fukuoka.jp +yanagawa.fukuoka.jp +yukuhashi.fukuoka.jp +aizubange.fukushima.jp +aizumisato.fukushima.jp +aizuwakamatsu.fukushima.jp +asakawa.fukushima.jp +bandai.fukushima.jp +date.fukushima.jp +fukushima.fukushima.jp +furudono.fukushima.jp +futaba.fukushima.jp 
+hanawa.fukushima.jp +higashi.fukushima.jp +hirata.fukushima.jp +hirono.fukushima.jp +iitate.fukushima.jp +inawashiro.fukushima.jp +ishikawa.fukushima.jp +iwaki.fukushima.jp +izumizaki.fukushima.jp +kagamiishi.fukushima.jp +kaneyama.fukushima.jp +kawamata.fukushima.jp +kitakata.fukushima.jp +kitashiobara.fukushima.jp +koori.fukushima.jp +koriyama.fukushima.jp +kunimi.fukushima.jp +miharu.fukushima.jp +mishima.fukushima.jp +namie.fukushima.jp +nango.fukushima.jp +nishiaizu.fukushima.jp +nishigo.fukushima.jp +okuma.fukushima.jp +omotego.fukushima.jp +ono.fukushima.jp +otama.fukushima.jp +samegawa.fukushima.jp +shimogo.fukushima.jp +shirakawa.fukushima.jp +showa.fukushima.jp +soma.fukushima.jp +sukagawa.fukushima.jp +taishin.fukushima.jp +tamakawa.fukushima.jp +tanagura.fukushima.jp +tenei.fukushima.jp +yabuki.fukushima.jp +yamato.fukushima.jp +yamatsuri.fukushima.jp +yanaizu.fukushima.jp +yugawa.fukushima.jp +anpachi.gifu.jp +ena.gifu.jp +gifu.gifu.jp +ginan.gifu.jp +godo.gifu.jp +gujo.gifu.jp +hashima.gifu.jp +hichiso.gifu.jp +hida.gifu.jp +higashishirakawa.gifu.jp +ibigawa.gifu.jp +ikeda.gifu.jp +kakamigahara.gifu.jp +kani.gifu.jp +kasahara.gifu.jp +kasamatsu.gifu.jp +kawaue.gifu.jp +kitagata.gifu.jp +mino.gifu.jp +minokamo.gifu.jp +mitake.gifu.jp +mizunami.gifu.jp +motosu.gifu.jp +nakatsugawa.gifu.jp +ogaki.gifu.jp +sakahogi.gifu.jp +seki.gifu.jp +sekigahara.gifu.jp +shirakawa.gifu.jp +tajimi.gifu.jp +takayama.gifu.jp +tarui.gifu.jp +toki.gifu.jp +tomika.gifu.jp +wanouchi.gifu.jp +yamagata.gifu.jp +yaotsu.gifu.jp +yoro.gifu.jp +annaka.gunma.jp +chiyoda.gunma.jp +fujioka.gunma.jp +higashiagatsuma.gunma.jp +isesaki.gunma.jp +itakura.gunma.jp +kanna.gunma.jp +kanra.gunma.jp +katashina.gunma.jp +kawaba.gunma.jp +kiryu.gunma.jp +kusatsu.gunma.jp +maebashi.gunma.jp +meiwa.gunma.jp +midori.gunma.jp +minakami.gunma.jp +naganohara.gunma.jp +nakanojo.gunma.jp +nanmoku.gunma.jp +numata.gunma.jp +oizumi.gunma.jp +ora.gunma.jp +ota.gunma.jp +shibukawa.gunma.jp +shimonita.gunma.jp +shinto.gunma.jp +showa.gunma.jp +takasaki.gunma.jp +takayama.gunma.jp +tamamura.gunma.jp +tatebayashi.gunma.jp +tomioka.gunma.jp +tsukiyono.gunma.jp +tsumagoi.gunma.jp +ueno.gunma.jp +yoshioka.gunma.jp +asaminami.hiroshima.jp +daiwa.hiroshima.jp +etajima.hiroshima.jp +fuchu.hiroshima.jp +fukuyama.hiroshima.jp +hatsukaichi.hiroshima.jp +higashihiroshima.hiroshima.jp +hongo.hiroshima.jp +jinsekikogen.hiroshima.jp +kaita.hiroshima.jp +kui.hiroshima.jp +kumano.hiroshima.jp +kure.hiroshima.jp +mihara.hiroshima.jp +miyoshi.hiroshima.jp +naka.hiroshima.jp +onomichi.hiroshima.jp +osakikamijima.hiroshima.jp +otake.hiroshima.jp +saka.hiroshima.jp +sera.hiroshima.jp +seranishi.hiroshima.jp +shinichi.hiroshima.jp +shobara.hiroshima.jp +takehara.hiroshima.jp +abashiri.hokkaido.jp +abira.hokkaido.jp +aibetsu.hokkaido.jp +akabira.hokkaido.jp +akkeshi.hokkaido.jp +asahikawa.hokkaido.jp +ashibetsu.hokkaido.jp +ashoro.hokkaido.jp +assabu.hokkaido.jp +atsuma.hokkaido.jp +bibai.hokkaido.jp +biei.hokkaido.jp +bifuka.hokkaido.jp +bihoro.hokkaido.jp +biratori.hokkaido.jp +chippubetsu.hokkaido.jp +chitose.hokkaido.jp +date.hokkaido.jp +ebetsu.hokkaido.jp +embetsu.hokkaido.jp +eniwa.hokkaido.jp +erimo.hokkaido.jp +esan.hokkaido.jp +esashi.hokkaido.jp +fukagawa.hokkaido.jp +fukushima.hokkaido.jp +furano.hokkaido.jp +furubira.hokkaido.jp +haboro.hokkaido.jp +hakodate.hokkaido.jp +hamatonbetsu.hokkaido.jp +hidaka.hokkaido.jp +higashikagura.hokkaido.jp +higashikawa.hokkaido.jp +hiroo.hokkaido.jp +hokuryu.hokkaido.jp +hokuto.hokkaido.jp 
+honbetsu.hokkaido.jp +horokanai.hokkaido.jp +horonobe.hokkaido.jp +ikeda.hokkaido.jp +imakane.hokkaido.jp +ishikari.hokkaido.jp +iwamizawa.hokkaido.jp +iwanai.hokkaido.jp +kamifurano.hokkaido.jp +kamikawa.hokkaido.jp +kamishihoro.hokkaido.jp +kamisunagawa.hokkaido.jp +kamoenai.hokkaido.jp +kayabe.hokkaido.jp +kembuchi.hokkaido.jp +kikonai.hokkaido.jp +kimobetsu.hokkaido.jp +kitahiroshima.hokkaido.jp +kitami.hokkaido.jp +kiyosato.hokkaido.jp +koshimizu.hokkaido.jp +kunneppu.hokkaido.jp +kuriyama.hokkaido.jp +kuromatsunai.hokkaido.jp +kushiro.hokkaido.jp +kutchan.hokkaido.jp +kyowa.hokkaido.jp +mashike.hokkaido.jp +matsumae.hokkaido.jp +mikasa.hokkaido.jp +minamifurano.hokkaido.jp +mombetsu.hokkaido.jp +moseushi.hokkaido.jp +mukawa.hokkaido.jp +muroran.hokkaido.jp +naie.hokkaido.jp +nakagawa.hokkaido.jp +nakasatsunai.hokkaido.jp +nakatombetsu.hokkaido.jp +nanae.hokkaido.jp +nanporo.hokkaido.jp +nayoro.hokkaido.jp +nemuro.hokkaido.jp +niikappu.hokkaido.jp +niki.hokkaido.jp +nishiokoppe.hokkaido.jp +noboribetsu.hokkaido.jp +numata.hokkaido.jp +obihiro.hokkaido.jp +obira.hokkaido.jp +oketo.hokkaido.jp +okoppe.hokkaido.jp +otaru.hokkaido.jp +otobe.hokkaido.jp +otofuke.hokkaido.jp +otoineppu.hokkaido.jp +oumu.hokkaido.jp +ozora.hokkaido.jp +pippu.hokkaido.jp +rankoshi.hokkaido.jp +rebun.hokkaido.jp +rikubetsu.hokkaido.jp +rishiri.hokkaido.jp +rishirifuji.hokkaido.jp +saroma.hokkaido.jp +sarufutsu.hokkaido.jp +shakotan.hokkaido.jp +shari.hokkaido.jp +shibecha.hokkaido.jp +shibetsu.hokkaido.jp +shikabe.hokkaido.jp +shikaoi.hokkaido.jp +shimamaki.hokkaido.jp +shimizu.hokkaido.jp +shimokawa.hokkaido.jp +shinshinotsu.hokkaido.jp +shintoku.hokkaido.jp +shiranuka.hokkaido.jp +shiraoi.hokkaido.jp +shiriuchi.hokkaido.jp +sobetsu.hokkaido.jp +sunagawa.hokkaido.jp +taiki.hokkaido.jp +takasu.hokkaido.jp +takikawa.hokkaido.jp +takinoue.hokkaido.jp +teshikaga.hokkaido.jp +tobetsu.hokkaido.jp +tohma.hokkaido.jp +tomakomai.hokkaido.jp +tomari.hokkaido.jp +toya.hokkaido.jp +toyako.hokkaido.jp +toyotomi.hokkaido.jp +toyoura.hokkaido.jp +tsubetsu.hokkaido.jp +tsukigata.hokkaido.jp +urakawa.hokkaido.jp +urausu.hokkaido.jp +uryu.hokkaido.jp +utashinai.hokkaido.jp +wakkanai.hokkaido.jp +wassamu.hokkaido.jp +yakumo.hokkaido.jp +yoichi.hokkaido.jp +aioi.hyogo.jp +akashi.hyogo.jp +ako.hyogo.jp +amagasaki.hyogo.jp +aogaki.hyogo.jp +asago.hyogo.jp +ashiya.hyogo.jp +awaji.hyogo.jp +fukusaki.hyogo.jp +goshiki.hyogo.jp +harima.hyogo.jp +himeji.hyogo.jp +ichikawa.hyogo.jp +inagawa.hyogo.jp +itami.hyogo.jp +kakogawa.hyogo.jp +kamigori.hyogo.jp +kamikawa.hyogo.jp +kasai.hyogo.jp +kasuga.hyogo.jp +kawanishi.hyogo.jp +miki.hyogo.jp +minamiawaji.hyogo.jp +nishinomiya.hyogo.jp +nishiwaki.hyogo.jp +ono.hyogo.jp +sanda.hyogo.jp +sannan.hyogo.jp +sasayama.hyogo.jp +sayo.hyogo.jp +shingu.hyogo.jp +shinonsen.hyogo.jp +shiso.hyogo.jp +sumoto.hyogo.jp +taishi.hyogo.jp +taka.hyogo.jp +takarazuka.hyogo.jp +takasago.hyogo.jp +takino.hyogo.jp +tamba.hyogo.jp +tatsuno.hyogo.jp +toyooka.hyogo.jp +yabu.hyogo.jp +yashiro.hyogo.jp +yoka.hyogo.jp +yokawa.hyogo.jp +ami.ibaraki.jp +asahi.ibaraki.jp +bando.ibaraki.jp +chikusei.ibaraki.jp +daigo.ibaraki.jp +fujishiro.ibaraki.jp +hitachi.ibaraki.jp +hitachinaka.ibaraki.jp +hitachiomiya.ibaraki.jp +hitachiota.ibaraki.jp +ibaraki.ibaraki.jp +ina.ibaraki.jp +inashiki.ibaraki.jp +itako.ibaraki.jp +iwama.ibaraki.jp +joso.ibaraki.jp +kamisu.ibaraki.jp +kasama.ibaraki.jp +kashima.ibaraki.jp +kasumigaura.ibaraki.jp +koga.ibaraki.jp +miho.ibaraki.jp +mito.ibaraki.jp +moriya.ibaraki.jp +naka.ibaraki.jp 
+namegata.ibaraki.jp +oarai.ibaraki.jp +ogawa.ibaraki.jp +omitama.ibaraki.jp +ryugasaki.ibaraki.jp +sakai.ibaraki.jp +sakuragawa.ibaraki.jp +shimodate.ibaraki.jp +shimotsuma.ibaraki.jp +shirosato.ibaraki.jp +sowa.ibaraki.jp +suifu.ibaraki.jp +takahagi.ibaraki.jp +tamatsukuri.ibaraki.jp +tokai.ibaraki.jp +tomobe.ibaraki.jp +tone.ibaraki.jp +toride.ibaraki.jp +tsuchiura.ibaraki.jp +tsukuba.ibaraki.jp +uchihara.ibaraki.jp +ushiku.ibaraki.jp +yachiyo.ibaraki.jp +yamagata.ibaraki.jp +yawara.ibaraki.jp +yuki.ibaraki.jp +anamizu.ishikawa.jp +hakui.ishikawa.jp +hakusan.ishikawa.jp +kaga.ishikawa.jp +kahoku.ishikawa.jp +kanazawa.ishikawa.jp +kawakita.ishikawa.jp +komatsu.ishikawa.jp +nakanoto.ishikawa.jp +nanao.ishikawa.jp +nomi.ishikawa.jp +nonoichi.ishikawa.jp +noto.ishikawa.jp +shika.ishikawa.jp +suzu.ishikawa.jp +tsubata.ishikawa.jp +tsurugi.ishikawa.jp +uchinada.ishikawa.jp +wajima.ishikawa.jp +fudai.iwate.jp +fujisawa.iwate.jp +hanamaki.iwate.jp +hiraizumi.iwate.jp +hirono.iwate.jp +ichinohe.iwate.jp +ichinoseki.iwate.jp +iwaizumi.iwate.jp +iwate.iwate.jp +joboji.iwate.jp +kamaishi.iwate.jp +kanegasaki.iwate.jp +karumai.iwate.jp +kawai.iwate.jp +kitakami.iwate.jp +kuji.iwate.jp +kunohe.iwate.jp +kuzumaki.iwate.jp +miyako.iwate.jp +mizusawa.iwate.jp +morioka.iwate.jp +ninohe.iwate.jp +noda.iwate.jp +ofunato.iwate.jp +oshu.iwate.jp +otsuchi.iwate.jp +rikuzentakata.iwate.jp +shiwa.iwate.jp +shizukuishi.iwate.jp +sumita.iwate.jp +tanohata.iwate.jp +tono.iwate.jp +yahaba.iwate.jp +yamada.iwate.jp +ayagawa.kagawa.jp +higashikagawa.kagawa.jp +kanonji.kagawa.jp +kotohira.kagawa.jp +manno.kagawa.jp +marugame.kagawa.jp +mitoyo.kagawa.jp +naoshima.kagawa.jp +sanuki.kagawa.jp +tadotsu.kagawa.jp +takamatsu.kagawa.jp +tonosho.kagawa.jp +uchinomi.kagawa.jp +utazu.kagawa.jp +zentsuji.kagawa.jp +akune.kagoshima.jp +amami.kagoshima.jp +hioki.kagoshima.jp +isa.kagoshima.jp +isen.kagoshima.jp +izumi.kagoshima.jp +kagoshima.kagoshima.jp +kanoya.kagoshima.jp +kawanabe.kagoshima.jp +kinko.kagoshima.jp +kouyama.kagoshima.jp +makurazaki.kagoshima.jp +matsumoto.kagoshima.jp +minamitane.kagoshima.jp +nakatane.kagoshima.jp +nishinoomote.kagoshima.jp +satsumasendai.kagoshima.jp +soo.kagoshima.jp +tarumizu.kagoshima.jp +yusui.kagoshima.jp +aikawa.kanagawa.jp +atsugi.kanagawa.jp +ayase.kanagawa.jp +chigasaki.kanagawa.jp +ebina.kanagawa.jp +fujisawa.kanagawa.jp +hadano.kanagawa.jp +hakone.kanagawa.jp +hiratsuka.kanagawa.jp +isehara.kanagawa.jp +kaisei.kanagawa.jp +kamakura.kanagawa.jp +kiyokawa.kanagawa.jp +matsuda.kanagawa.jp +minamiashigara.kanagawa.jp +miura.kanagawa.jp +nakai.kanagawa.jp +ninomiya.kanagawa.jp +odawara.kanagawa.jp +oi.kanagawa.jp +oiso.kanagawa.jp +sagamihara.kanagawa.jp +samukawa.kanagawa.jp +tsukui.kanagawa.jp +yamakita.kanagawa.jp +yamato.kanagawa.jp +yokosuka.kanagawa.jp +yugawara.kanagawa.jp +zama.kanagawa.jp +zushi.kanagawa.jp +aki.kochi.jp +geisei.kochi.jp +hidaka.kochi.jp +higashitsuno.kochi.jp +ino.kochi.jp +kagami.kochi.jp +kami.kochi.jp +kitagawa.kochi.jp +kochi.kochi.jp +mihara.kochi.jp +motoyama.kochi.jp +muroto.kochi.jp +nahari.kochi.jp +nakamura.kochi.jp +nankoku.kochi.jp +nishitosa.kochi.jp +niyodogawa.kochi.jp +ochi.kochi.jp +okawa.kochi.jp +otoyo.kochi.jp +otsuki.kochi.jp +sakawa.kochi.jp +sukumo.kochi.jp +susaki.kochi.jp +tosa.kochi.jp +tosashimizu.kochi.jp +toyo.kochi.jp +tsuno.kochi.jp +umaji.kochi.jp +yasuda.kochi.jp +yusuhara.kochi.jp +amakusa.kumamoto.jp +arao.kumamoto.jp +aso.kumamoto.jp +choyo.kumamoto.jp +gyokuto.kumamoto.jp +kamiamakusa.kumamoto.jp +kikuchi.kumamoto.jp 
+kumamoto.kumamoto.jp +mashiki.kumamoto.jp +mifune.kumamoto.jp +minamata.kumamoto.jp +minamioguni.kumamoto.jp +nagasu.kumamoto.jp +nishihara.kumamoto.jp +oguni.kumamoto.jp +ozu.kumamoto.jp +sumoto.kumamoto.jp +takamori.kumamoto.jp +uki.kumamoto.jp +uto.kumamoto.jp +yamaga.kumamoto.jp +yamato.kumamoto.jp +yatsushiro.kumamoto.jp +ayabe.kyoto.jp +fukuchiyama.kyoto.jp +higashiyama.kyoto.jp +ide.kyoto.jp +ine.kyoto.jp +joyo.kyoto.jp +kameoka.kyoto.jp +kamo.kyoto.jp +kita.kyoto.jp +kizu.kyoto.jp +kumiyama.kyoto.jp +kyotamba.kyoto.jp +kyotanabe.kyoto.jp +kyotango.kyoto.jp +maizuru.kyoto.jp +minami.kyoto.jp +minamiyamashiro.kyoto.jp +miyazu.kyoto.jp +muko.kyoto.jp +nagaokakyo.kyoto.jp +nakagyo.kyoto.jp +nantan.kyoto.jp +oyamazaki.kyoto.jp +sakyo.kyoto.jp +seika.kyoto.jp +tanabe.kyoto.jp +uji.kyoto.jp +ujitawara.kyoto.jp +wazuka.kyoto.jp +yamashina.kyoto.jp +yawata.kyoto.jp +asahi.mie.jp +inabe.mie.jp +ise.mie.jp +kameyama.mie.jp +kawagoe.mie.jp +kiho.mie.jp +kisosaki.mie.jp +kiwa.mie.jp +komono.mie.jp +kumano.mie.jp +kuwana.mie.jp +matsusaka.mie.jp +meiwa.mie.jp +mihama.mie.jp +minamiise.mie.jp +misugi.mie.jp +miyama.mie.jp +nabari.mie.jp +shima.mie.jp +suzuka.mie.jp +tado.mie.jp +taiki.mie.jp +taki.mie.jp +tamaki.mie.jp +toba.mie.jp +tsu.mie.jp +udono.mie.jp +ureshino.mie.jp +watarai.mie.jp +yokkaichi.mie.jp +furukawa.miyagi.jp +higashimatsushima.miyagi.jp +ishinomaki.miyagi.jp +iwanuma.miyagi.jp +kakuda.miyagi.jp +kami.miyagi.jp +kawasaki.miyagi.jp +marumori.miyagi.jp +matsushima.miyagi.jp +minamisanriku.miyagi.jp +misato.miyagi.jp +murata.miyagi.jp +natori.miyagi.jp +ogawara.miyagi.jp +ohira.miyagi.jp +onagawa.miyagi.jp +osaki.miyagi.jp +rifu.miyagi.jp +semine.miyagi.jp +shibata.miyagi.jp +shichikashuku.miyagi.jp +shikama.miyagi.jp +shiogama.miyagi.jp +shiroishi.miyagi.jp +tagajo.miyagi.jp +taiwa.miyagi.jp +tome.miyagi.jp +tomiya.miyagi.jp +wakuya.miyagi.jp +watari.miyagi.jp +yamamoto.miyagi.jp +zao.miyagi.jp +aya.miyazaki.jp +ebino.miyazaki.jp +gokase.miyazaki.jp +hyuga.miyazaki.jp +kadogawa.miyazaki.jp +kawaminami.miyazaki.jp +kijo.miyazaki.jp +kitagawa.miyazaki.jp +kitakata.miyazaki.jp +kitaura.miyazaki.jp +kobayashi.miyazaki.jp +kunitomi.miyazaki.jp +kushima.miyazaki.jp +mimata.miyazaki.jp +miyakonojo.miyazaki.jp +miyazaki.miyazaki.jp +morotsuka.miyazaki.jp +nichinan.miyazaki.jp +nishimera.miyazaki.jp +nobeoka.miyazaki.jp +saito.miyazaki.jp +shiiba.miyazaki.jp +shintomi.miyazaki.jp +takaharu.miyazaki.jp +takanabe.miyazaki.jp +takazaki.miyazaki.jp +tsuno.miyazaki.jp +achi.nagano.jp +agematsu.nagano.jp +anan.nagano.jp +aoki.nagano.jp +asahi.nagano.jp +azumino.nagano.jp +chikuhoku.nagano.jp +chikuma.nagano.jp +chino.nagano.jp +fujimi.nagano.jp +hakuba.nagano.jp +hara.nagano.jp +hiraya.nagano.jp +iida.nagano.jp +iijima.nagano.jp +iiyama.nagano.jp +iizuna.nagano.jp +ikeda.nagano.jp +ikusaka.nagano.jp +ina.nagano.jp +karuizawa.nagano.jp +kawakami.nagano.jp +kiso.nagano.jp +kisofukushima.nagano.jp +kitaaiki.nagano.jp +komagane.nagano.jp +komoro.nagano.jp +matsukawa.nagano.jp +matsumoto.nagano.jp +miasa.nagano.jp +minamiaiki.nagano.jp +minamimaki.nagano.jp +minamiminowa.nagano.jp +minowa.nagano.jp +miyada.nagano.jp +miyota.nagano.jp +mochizuki.nagano.jp +nagano.nagano.jp +nagawa.nagano.jp +nagiso.nagano.jp +nakagawa.nagano.jp +nakano.nagano.jp +nozawaonsen.nagano.jp +obuse.nagano.jp +ogawa.nagano.jp +okaya.nagano.jp +omachi.nagano.jp +omi.nagano.jp +ookuwa.nagano.jp +ooshika.nagano.jp +otaki.nagano.jp +otari.nagano.jp +sakae.nagano.jp +sakaki.nagano.jp +saku.nagano.jp +sakuho.nagano.jp 
+shimosuwa.nagano.jp +shinanomachi.nagano.jp +shiojiri.nagano.jp +suwa.nagano.jp +suzaka.nagano.jp +takagi.nagano.jp +takamori.nagano.jp +takayama.nagano.jp +tateshina.nagano.jp +tatsuno.nagano.jp +togakushi.nagano.jp +togura.nagano.jp +tomi.nagano.jp +ueda.nagano.jp +wada.nagano.jp +yamagata.nagano.jp +yamanouchi.nagano.jp +yasaka.nagano.jp +yasuoka.nagano.jp +chijiwa.nagasaki.jp +futsu.nagasaki.jp +goto.nagasaki.jp +hasami.nagasaki.jp +hirado.nagasaki.jp +iki.nagasaki.jp +isahaya.nagasaki.jp +kawatana.nagasaki.jp +kuchinotsu.nagasaki.jp +matsuura.nagasaki.jp +nagasaki.nagasaki.jp +obama.nagasaki.jp +omura.nagasaki.jp +oseto.nagasaki.jp +saikai.nagasaki.jp +sasebo.nagasaki.jp +seihi.nagasaki.jp +shimabara.nagasaki.jp +shinkamigoto.nagasaki.jp +togitsu.nagasaki.jp +tsushima.nagasaki.jp +unzen.nagasaki.jp +ando.nara.jp +gose.nara.jp +heguri.nara.jp +higashiyoshino.nara.jp +ikaruga.nara.jp +ikoma.nara.jp +kamikitayama.nara.jp +kanmaki.nara.jp +kashiba.nara.jp +kashihara.nara.jp +katsuragi.nara.jp +kawai.nara.jp +kawakami.nara.jp +kawanishi.nara.jp +koryo.nara.jp +kurotaki.nara.jp +mitsue.nara.jp +miyake.nara.jp +nara.nara.jp +nosegawa.nara.jp +oji.nara.jp +ouda.nara.jp +oyodo.nara.jp +sakurai.nara.jp +sango.nara.jp +shimoichi.nara.jp +shimokitayama.nara.jp +shinjo.nara.jp +soni.nara.jp +takatori.nara.jp +tawaramoto.nara.jp +tenkawa.nara.jp +tenri.nara.jp +uda.nara.jp +yamatokoriyama.nara.jp +yamatotakada.nara.jp +yamazoe.nara.jp +yoshino.nara.jp +aga.niigata.jp +agano.niigata.jp +gosen.niigata.jp +itoigawa.niigata.jp +izumozaki.niigata.jp +joetsu.niigata.jp +kamo.niigata.jp +kariwa.niigata.jp +kashiwazaki.niigata.jp +minamiuonuma.niigata.jp +mitsuke.niigata.jp +muika.niigata.jp +murakami.niigata.jp +myoko.niigata.jp +nagaoka.niigata.jp +niigata.niigata.jp +ojiya.niigata.jp +omi.niigata.jp +sado.niigata.jp +sanjo.niigata.jp +seiro.niigata.jp +seirou.niigata.jp +sekikawa.niigata.jp +shibata.niigata.jp +tagami.niigata.jp +tainai.niigata.jp +tochio.niigata.jp +tokamachi.niigata.jp +tsubame.niigata.jp +tsunan.niigata.jp +uonuma.niigata.jp +yahiko.niigata.jp +yoita.niigata.jp +yuzawa.niigata.jp +beppu.oita.jp +bungoono.oita.jp +bungotakada.oita.jp +hasama.oita.jp +hiji.oita.jp +himeshima.oita.jp +hita.oita.jp +kamitsue.oita.jp +kokonoe.oita.jp +kuju.oita.jp +kunisaki.oita.jp +kusu.oita.jp +oita.oita.jp +saiki.oita.jp +taketa.oita.jp +tsukumi.oita.jp +usa.oita.jp +usuki.oita.jp +yufu.oita.jp +akaiwa.okayama.jp +asakuchi.okayama.jp +bizen.okayama.jp +hayashima.okayama.jp +ibara.okayama.jp +kagamino.okayama.jp +kasaoka.okayama.jp +kibichuo.okayama.jp +kumenan.okayama.jp +kurashiki.okayama.jp +maniwa.okayama.jp +misaki.okayama.jp +nagi.okayama.jp +niimi.okayama.jp +nishiawakura.okayama.jp +okayama.okayama.jp +satosho.okayama.jp +setouchi.okayama.jp +shinjo.okayama.jp +shoo.okayama.jp +soja.okayama.jp +takahashi.okayama.jp +tamano.okayama.jp +tsuyama.okayama.jp +wake.okayama.jp +yakage.okayama.jp +aguni.okinawa.jp +ginowan.okinawa.jp +ginoza.okinawa.jp +gushikami.okinawa.jp +haebaru.okinawa.jp +higashi.okinawa.jp +hirara.okinawa.jp +iheya.okinawa.jp +ishigaki.okinawa.jp +ishikawa.okinawa.jp +itoman.okinawa.jp +izena.okinawa.jp +kadena.okinawa.jp +kin.okinawa.jp +kitadaito.okinawa.jp +kitanakagusuku.okinawa.jp +kumejima.okinawa.jp +kunigami.okinawa.jp +minamidaito.okinawa.jp +motobu.okinawa.jp +nago.okinawa.jp +naha.okinawa.jp +nakagusuku.okinawa.jp +nakijin.okinawa.jp +nanjo.okinawa.jp +nishihara.okinawa.jp +ogimi.okinawa.jp +okinawa.okinawa.jp +onna.okinawa.jp +shimoji.okinawa.jp +taketomi.okinawa.jp 
+tarama.okinawa.jp +tokashiki.okinawa.jp +tomigusuku.okinawa.jp +tonaki.okinawa.jp +urasoe.okinawa.jp +uruma.okinawa.jp +yaese.okinawa.jp +yomitan.okinawa.jp +yonabaru.okinawa.jp +yonaguni.okinawa.jp +zamami.okinawa.jp +abeno.osaka.jp +chihayaakasaka.osaka.jp +chuo.osaka.jp +daito.osaka.jp +fujiidera.osaka.jp +habikino.osaka.jp +hannan.osaka.jp +higashiosaka.osaka.jp +higashisumiyoshi.osaka.jp +higashiyodogawa.osaka.jp +hirakata.osaka.jp +ibaraki.osaka.jp +ikeda.osaka.jp +izumi.osaka.jp +izumiotsu.osaka.jp +izumisano.osaka.jp +kadoma.osaka.jp +kaizuka.osaka.jp +kanan.osaka.jp +kashiwara.osaka.jp +katano.osaka.jp +kawachinagano.osaka.jp +kishiwada.osaka.jp +kita.osaka.jp +kumatori.osaka.jp +matsubara.osaka.jp +minato.osaka.jp +minoh.osaka.jp +misaki.osaka.jp +moriguchi.osaka.jp +neyagawa.osaka.jp +nishi.osaka.jp +nose.osaka.jp +osakasayama.osaka.jp +sakai.osaka.jp +sayama.osaka.jp +sennan.osaka.jp +settsu.osaka.jp +shijonawate.osaka.jp +shimamoto.osaka.jp +suita.osaka.jp +tadaoka.osaka.jp +taishi.osaka.jp +tajiri.osaka.jp +takaishi.osaka.jp +takatsuki.osaka.jp +tondabayashi.osaka.jp +toyonaka.osaka.jp +toyono.osaka.jp +yao.osaka.jp +ariake.saga.jp +arita.saga.jp +fukudomi.saga.jp +genkai.saga.jp +hamatama.saga.jp +hizen.saga.jp +imari.saga.jp +kamimine.saga.jp +kanzaki.saga.jp +karatsu.saga.jp +kashima.saga.jp +kitagata.saga.jp +kitahata.saga.jp +kiyama.saga.jp +kouhoku.saga.jp +kyuragi.saga.jp +nishiarita.saga.jp +ogi.saga.jp +omachi.saga.jp +ouchi.saga.jp +saga.saga.jp +shiroishi.saga.jp +taku.saga.jp +tara.saga.jp +tosu.saga.jp +yoshinogari.saga.jp +arakawa.saitama.jp +asaka.saitama.jp +chichibu.saitama.jp +fujimi.saitama.jp +fujimino.saitama.jp +fukaya.saitama.jp +hanno.saitama.jp +hanyu.saitama.jp +hasuda.saitama.jp +hatogaya.saitama.jp +hatoyama.saitama.jp +hidaka.saitama.jp +higashichichibu.saitama.jp +higashimatsuyama.saitama.jp +honjo.saitama.jp +ina.saitama.jp +iruma.saitama.jp +iwatsuki.saitama.jp +kamiizumi.saitama.jp +kamikawa.saitama.jp +kamisato.saitama.jp +kasukabe.saitama.jp +kawagoe.saitama.jp +kawaguchi.saitama.jp +kawajima.saitama.jp +kazo.saitama.jp +kitamoto.saitama.jp +koshigaya.saitama.jp +kounosu.saitama.jp +kuki.saitama.jp +kumagaya.saitama.jp +matsubushi.saitama.jp +minano.saitama.jp +misato.saitama.jp +miyashiro.saitama.jp +miyoshi.saitama.jp +moroyama.saitama.jp +nagatoro.saitama.jp +namegawa.saitama.jp +niiza.saitama.jp +ogano.saitama.jp +ogawa.saitama.jp +ogose.saitama.jp +okegawa.saitama.jp +omiya.saitama.jp +otaki.saitama.jp +ranzan.saitama.jp +ryokami.saitama.jp +saitama.saitama.jp +sakado.saitama.jp +satte.saitama.jp +sayama.saitama.jp +shiki.saitama.jp +shiraoka.saitama.jp +soka.saitama.jp +sugito.saitama.jp +toda.saitama.jp +tokigawa.saitama.jp +tokorozawa.saitama.jp +tsurugashima.saitama.jp +urawa.saitama.jp +warabi.saitama.jp +yashio.saitama.jp +yokoze.saitama.jp +yono.saitama.jp +yorii.saitama.jp +yoshida.saitama.jp +yoshikawa.saitama.jp +yoshimi.saitama.jp +aisho.shiga.jp +gamo.shiga.jp +higashiomi.shiga.jp +hikone.shiga.jp +koka.shiga.jp +konan.shiga.jp +kosei.shiga.jp +koto.shiga.jp +kusatsu.shiga.jp +maibara.shiga.jp +moriyama.shiga.jp +nagahama.shiga.jp +nishiazai.shiga.jp +notogawa.shiga.jp +omihachiman.shiga.jp +otsu.shiga.jp +ritto.shiga.jp +ryuoh.shiga.jp +takashima.shiga.jp +takatsuki.shiga.jp +torahime.shiga.jp +toyosato.shiga.jp +yasu.shiga.jp +akagi.shimane.jp +ama.shimane.jp +gotsu.shimane.jp +hamada.shimane.jp +higashiizumo.shimane.jp +hikawa.shimane.jp +hikimi.shimane.jp +izumo.shimane.jp +kakinoki.shimane.jp +masuda.shimane.jp 
+matsue.shimane.jp +misato.shimane.jp +nishinoshima.shimane.jp +ohda.shimane.jp +okinoshima.shimane.jp +okuizumo.shimane.jp +shimane.shimane.jp +tamayu.shimane.jp +tsuwano.shimane.jp +unnan.shimane.jp +yakumo.shimane.jp +yasugi.shimane.jp +yatsuka.shimane.jp +arai.shizuoka.jp +atami.shizuoka.jp +fuji.shizuoka.jp +fujieda.shizuoka.jp +fujikawa.shizuoka.jp +fujinomiya.shizuoka.jp +fukuroi.shizuoka.jp +gotemba.shizuoka.jp +haibara.shizuoka.jp +hamamatsu.shizuoka.jp +higashiizu.shizuoka.jp +ito.shizuoka.jp +iwata.shizuoka.jp +izu.shizuoka.jp +izunokuni.shizuoka.jp +kakegawa.shizuoka.jp +kannami.shizuoka.jp +kawanehon.shizuoka.jp +kawazu.shizuoka.jp +kikugawa.shizuoka.jp +kosai.shizuoka.jp +makinohara.shizuoka.jp +matsuzaki.shizuoka.jp +minamiizu.shizuoka.jp +mishima.shizuoka.jp +morimachi.shizuoka.jp +nishiizu.shizuoka.jp +numazu.shizuoka.jp +omaezaki.shizuoka.jp +shimada.shizuoka.jp +shimizu.shizuoka.jp +shimoda.shizuoka.jp +shizuoka.shizuoka.jp +susono.shizuoka.jp +yaizu.shizuoka.jp +yoshida.shizuoka.jp +ashikaga.tochigi.jp +bato.tochigi.jp +haga.tochigi.jp +ichikai.tochigi.jp +iwafune.tochigi.jp +kaminokawa.tochigi.jp +kanuma.tochigi.jp +karasuyama.tochigi.jp +kuroiso.tochigi.jp +mashiko.tochigi.jp +mibu.tochigi.jp +moka.tochigi.jp +motegi.tochigi.jp +nasu.tochigi.jp +nasushiobara.tochigi.jp +nikko.tochigi.jp +nishikata.tochigi.jp +nogi.tochigi.jp +ohira.tochigi.jp +ohtawara.tochigi.jp +oyama.tochigi.jp +sakura.tochigi.jp +sano.tochigi.jp +shimotsuke.tochigi.jp +shioya.tochigi.jp +takanezawa.tochigi.jp +tochigi.tochigi.jp +tsuga.tochigi.jp +ujiie.tochigi.jp +utsunomiya.tochigi.jp +yaita.tochigi.jp +aizumi.tokushima.jp +anan.tokushima.jp +ichiba.tokushima.jp +itano.tokushima.jp +kainan.tokushima.jp +komatsushima.tokushima.jp +matsushige.tokushima.jp +mima.tokushima.jp +minami.tokushima.jp +miyoshi.tokushima.jp +mugi.tokushima.jp +nakagawa.tokushima.jp +naruto.tokushima.jp +sanagochi.tokushima.jp +shishikui.tokushima.jp +tokushima.tokushima.jp +wajiki.tokushima.jp +adachi.tokyo.jp +akiruno.tokyo.jp +akishima.tokyo.jp +aogashima.tokyo.jp +arakawa.tokyo.jp +bunkyo.tokyo.jp +chiyoda.tokyo.jp +chofu.tokyo.jp +chuo.tokyo.jp +edogawa.tokyo.jp +fuchu.tokyo.jp +fussa.tokyo.jp +hachijo.tokyo.jp +hachioji.tokyo.jp +hamura.tokyo.jp +higashikurume.tokyo.jp +higashimurayama.tokyo.jp +higashiyamato.tokyo.jp +hino.tokyo.jp +hinode.tokyo.jp +hinohara.tokyo.jp +inagi.tokyo.jp +itabashi.tokyo.jp +katsushika.tokyo.jp +kita.tokyo.jp +kiyose.tokyo.jp +kodaira.tokyo.jp +koganei.tokyo.jp +kokubunji.tokyo.jp +komae.tokyo.jp +koto.tokyo.jp +kouzushima.tokyo.jp +kunitachi.tokyo.jp +machida.tokyo.jp +meguro.tokyo.jp +minato.tokyo.jp +mitaka.tokyo.jp +mizuho.tokyo.jp +musashimurayama.tokyo.jp +musashino.tokyo.jp +nakano.tokyo.jp +nerima.tokyo.jp +ogasawara.tokyo.jp +okutama.tokyo.jp +ome.tokyo.jp +oshima.tokyo.jp +ota.tokyo.jp +setagaya.tokyo.jp +shibuya.tokyo.jp +shinagawa.tokyo.jp +shinjuku.tokyo.jp +suginami.tokyo.jp +sumida.tokyo.jp +tachikawa.tokyo.jp +taito.tokyo.jp +tama.tokyo.jp +toshima.tokyo.jp +chizu.tottori.jp +hino.tottori.jp +kawahara.tottori.jp +koge.tottori.jp +kotoura.tottori.jp +misasa.tottori.jp +nanbu.tottori.jp +nichinan.tottori.jp +sakaiminato.tottori.jp +tottori.tottori.jp +wakasa.tottori.jp +yazu.tottori.jp +yonago.tottori.jp +asahi.toyama.jp +fuchu.toyama.jp +fukumitsu.toyama.jp +funahashi.toyama.jp +himi.toyama.jp +imizu.toyama.jp +inami.toyama.jp +johana.toyama.jp +kamiichi.toyama.jp +kurobe.toyama.jp +nakaniikawa.toyama.jp +namerikawa.toyama.jp +nanto.toyama.jp +nyuzen.toyama.jp 
+oyabe.toyama.jp +taira.toyama.jp +takaoka.toyama.jp +tateyama.toyama.jp +toga.toyama.jp +tonami.toyama.jp +toyama.toyama.jp +unazuki.toyama.jp +uozu.toyama.jp +yamada.toyama.jp +arida.wakayama.jp +aridagawa.wakayama.jp +gobo.wakayama.jp +hashimoto.wakayama.jp +hidaka.wakayama.jp +hirogawa.wakayama.jp +inami.wakayama.jp +iwade.wakayama.jp +kainan.wakayama.jp +kamitonda.wakayama.jp +katsuragi.wakayama.jp +kimino.wakayama.jp +kinokawa.wakayama.jp +kitayama.wakayama.jp +koya.wakayama.jp +koza.wakayama.jp +kozagawa.wakayama.jp +kudoyama.wakayama.jp +kushimoto.wakayama.jp +mihama.wakayama.jp +misato.wakayama.jp +nachikatsuura.wakayama.jp +shingu.wakayama.jp +shirahama.wakayama.jp +taiji.wakayama.jp +tanabe.wakayama.jp +wakayama.wakayama.jp +yuasa.wakayama.jp +yura.wakayama.jp +asahi.yamagata.jp +funagata.yamagata.jp +higashine.yamagata.jp +iide.yamagata.jp +kahoku.yamagata.jp +kaminoyama.yamagata.jp +kaneyama.yamagata.jp +kawanishi.yamagata.jp +mamurogawa.yamagata.jp +mikawa.yamagata.jp +murayama.yamagata.jp +nagai.yamagata.jp +nakayama.yamagata.jp +nanyo.yamagata.jp +nishikawa.yamagata.jp +obanazawa.yamagata.jp +oe.yamagata.jp +oguni.yamagata.jp +ohkura.yamagata.jp +oishida.yamagata.jp +sagae.yamagata.jp +sakata.yamagata.jp +sakegawa.yamagata.jp +shinjo.yamagata.jp +shirataka.yamagata.jp +shonai.yamagata.jp +takahata.yamagata.jp +tendo.yamagata.jp +tozawa.yamagata.jp +tsuruoka.yamagata.jp +yamagata.yamagata.jp +yamanobe.yamagata.jp +yonezawa.yamagata.jp +yuza.yamagata.jp +abu.yamaguchi.jp +hagi.yamaguchi.jp +hikari.yamaguchi.jp +hofu.yamaguchi.jp +iwakuni.yamaguchi.jp +kudamatsu.yamaguchi.jp +mitou.yamaguchi.jp +nagato.yamaguchi.jp +oshima.yamaguchi.jp +shimonoseki.yamaguchi.jp +shunan.yamaguchi.jp +tabuse.yamaguchi.jp +tokuyama.yamaguchi.jp +toyota.yamaguchi.jp +ube.yamaguchi.jp +yuu.yamaguchi.jp +chuo.yamanashi.jp +doshi.yamanashi.jp +fuefuki.yamanashi.jp +fujikawa.yamanashi.jp +fujikawaguchiko.yamanashi.jp +fujiyoshida.yamanashi.jp +hayakawa.yamanashi.jp +hokuto.yamanashi.jp +ichikawamisato.yamanashi.jp +kai.yamanashi.jp +kofu.yamanashi.jp +koshu.yamanashi.jp +kosuge.yamanashi.jp +minami-alps.yamanashi.jp +minobu.yamanashi.jp +nakamichi.yamanashi.jp +nanbu.yamanashi.jp +narusawa.yamanashi.jp +nirasaki.yamanashi.jp +nishikatsura.yamanashi.jp +oshino.yamanashi.jp +otsuki.yamanashi.jp +showa.yamanashi.jp +tabayama.yamanashi.jp +tsuru.yamanashi.jp +uenohara.yamanashi.jp +yamanakako.yamanashi.jp +yamanashi.yamanashi.jp + +// ke : http://www.kenic.or.ke/index.php/en/ke-domains/ke-domains +ke +ac.ke +co.ke +go.ke +info.ke +me.ke +mobi.ke +ne.ke +or.ke +sc.ke + +// kg : http://www.domain.kg/dmn_n.html +kg +org.kg +net.kg +com.kg +edu.kg +gov.kg +mil.kg + +// kh : http://www.mptc.gov.kh/dns_registration.htm +*.kh + +// ki : http://www.ki/dns/index.html +ki +edu.ki +biz.ki +net.ki +org.ki +gov.ki +info.ki +com.ki + +// km : https://en.wikipedia.org/wiki/.km +// http://www.domaine.km/documents/charte.doc +km +org.km +nom.km +gov.km +prd.km +tm.km +edu.km +mil.km +ass.km +com.km +// These are only mentioned as proposed suggestions at domaine.km, but +// https://en.wikipedia.org/wiki/.km says they're available for registration: +coop.km +asso.km +presse.km +medecin.km +notaires.km +pharmaciens.km +veterinaire.km +gouv.km + +// kn : https://en.wikipedia.org/wiki/.kn +// http://www.dot.kn/domainRules.html +kn +net.kn +org.kn +edu.kn +gov.kn + +// kp : http://www.kcce.kp/en_index.php +kp +com.kp +edu.kp +gov.kp +org.kp +rep.kp +tra.kp + +// kr : https://en.wikipedia.org/wiki/.kr +// see also: 
http://domain.nida.or.kr/eng/registration.jsp +kr +ac.kr +co.kr +es.kr +go.kr +hs.kr +kg.kr +mil.kr +ms.kr +ne.kr +or.kr +pe.kr +re.kr +sc.kr +// kr geographical names +busan.kr +chungbuk.kr +chungnam.kr +daegu.kr +daejeon.kr +gangwon.kr +gwangju.kr +gyeongbuk.kr +gyeonggi.kr +gyeongnam.kr +incheon.kr +jeju.kr +jeonbuk.kr +jeonnam.kr +seoul.kr +ulsan.kr + +// kw : https://www.nic.kw/policies/ +// Confirmed by registry +kw +com.kw +edu.kw +emb.kw +gov.kw +ind.kw +net.kw +org.kw + +// ky : http://www.icta.ky/da_ky_reg_dom.php +// Confirmed by registry 2008-06-17 +ky +edu.ky +gov.ky +com.ky +org.ky +net.ky + +// kz : https://en.wikipedia.org/wiki/.kz +// see also: http://www.nic.kz/rules/index.jsp +kz +org.kz +edu.kz +net.kz +gov.kz +mil.kz +com.kz + +// la : https://en.wikipedia.org/wiki/.la +// Submitted by registry +la +int.la +net.la +info.la +edu.la +gov.la +per.la +com.la +org.la + +// lb : https://en.wikipedia.org/wiki/.lb +// Submitted by registry +lb +com.lb +edu.lb +gov.lb +net.lb +org.lb + +// lc : https://en.wikipedia.org/wiki/.lc +// see also: http://www.nic.lc/rules.htm +lc +com.lc +net.lc +co.lc +org.lc +edu.lc +gov.lc + +// li : https://en.wikipedia.org/wiki/.li +li + +// lk : https://www.nic.lk/index.php/domain-registration/lk-domain-naming-structure +lk +gov.lk +sch.lk +net.lk +int.lk +com.lk +org.lk +edu.lk +ngo.lk +soc.lk +web.lk +ltd.lk +assn.lk +grp.lk +hotel.lk +ac.lk + +// lr : http://psg.com/dns/lr/lr.txt +// Submitted by registry +lr +com.lr +edu.lr +gov.lr +org.lr +net.lr + +// ls : http://www.nic.ls/ +// Confirmed by registry +ls +ac.ls +biz.ls +co.ls +edu.ls +gov.ls +info.ls +net.ls +org.ls +sc.ls + +// lt : https://en.wikipedia.org/wiki/.lt +lt +// gov.lt : http://www.gov.lt/index_en.php +gov.lt + +// lu : http://www.dns.lu/en/ +lu + +// lv : http://www.nic.lv/DNS/En/generic.php +lv +com.lv +edu.lv +gov.lv +org.lv +mil.lv +id.lv +net.lv +asn.lv +conf.lv + +// ly : http://www.nic.ly/regulations.php +ly +com.ly +net.ly +gov.ly +plc.ly +edu.ly +sch.ly +med.ly +org.ly +id.ly + +// ma : https://en.wikipedia.org/wiki/.ma +// http://www.anrt.ma/fr/admin/download/upload/file_fr782.pdf +ma +co.ma +net.ma +gov.ma +org.ma +ac.ma +press.ma + +// mc : http://www.nic.mc/ +mc +tm.mc +asso.mc + +// md : https://en.wikipedia.org/wiki/.md +md + +// me : https://en.wikipedia.org/wiki/.me +me +co.me +net.me +org.me +edu.me +ac.me +gov.me +its.me +priv.me + +// mg : http://nic.mg/nicmg/?page_id=39 +mg +org.mg +nom.mg +gov.mg +prd.mg +tm.mg +edu.mg +mil.mg +com.mg +co.mg + +// mh : https://en.wikipedia.org/wiki/.mh +mh + +// mil : https://en.wikipedia.org/wiki/.mil +mil + +// mk : https://en.wikipedia.org/wiki/.mk +// see also: http://dns.marnet.net.mk/postapka.php +mk +com.mk +org.mk +net.mk +edu.mk +gov.mk +inf.mk +name.mk + +// ml : http://www.gobin.info/domainname/ml-template.doc +// see also: https://en.wikipedia.org/wiki/.ml +ml +com.ml +edu.ml +gouv.ml +gov.ml +net.ml +org.ml +presse.ml + +// mm : https://en.wikipedia.org/wiki/.mm +*.mm + +// mn : https://en.wikipedia.org/wiki/.mn +mn +gov.mn +edu.mn +org.mn + +// mo : http://www.monic.net.mo/ +mo +com.mo +net.mo +org.mo +edu.mo +gov.mo + +// mobi : https://en.wikipedia.org/wiki/.mobi +mobi + +// mp : http://www.dot.mp/ +// Confirmed by registry 2008-06-17 +mp + +// mq : https://en.wikipedia.org/wiki/.mq +mq + +// mr : https://en.wikipedia.org/wiki/.mr +mr +gov.mr + +// ms : http://www.nic.ms/pdf/MS_Domain_Name_Rules.pdf +ms +com.ms +edu.ms +gov.ms +net.ms +org.ms + +// mt : https://www.nic.org.mt/go/policy +// Submitted by 
registry +mt +com.mt +edu.mt +net.mt +org.mt + +// mu : https://en.wikipedia.org/wiki/.mu +mu +com.mu +net.mu +org.mu +gov.mu +ac.mu +co.mu +or.mu + +// museum : http://about.museum/naming/ +// http://index.museum/ +museum +academy.museum +agriculture.museum +air.museum +airguard.museum +alabama.museum +alaska.museum +amber.museum +ambulance.museum +american.museum +americana.museum +americanantiques.museum +americanart.museum +amsterdam.museum +and.museum +annefrank.museum +anthro.museum +anthropology.museum +antiques.museum +aquarium.museum +arboretum.museum +archaeological.museum +archaeology.museum +architecture.museum +art.museum +artanddesign.museum +artcenter.museum +artdeco.museum +arteducation.museum +artgallery.museum +arts.museum +artsandcrafts.museum +asmatart.museum +assassination.museum +assisi.museum +association.museum +astronomy.museum +atlanta.museum +austin.museum +australia.museum +automotive.museum +aviation.museum +axis.museum +badajoz.museum +baghdad.museum +bahn.museum +bale.museum +baltimore.museum +barcelona.museum +baseball.museum +basel.museum +baths.museum +bauern.museum +beauxarts.museum +beeldengeluid.museum +bellevue.museum +bergbau.museum +berkeley.museum +berlin.museum +bern.museum +bible.museum +bilbao.museum +bill.museum +birdart.museum +birthplace.museum +bonn.museum +boston.museum +botanical.museum +botanicalgarden.museum +botanicgarden.museum +botany.museum +brandywinevalley.museum +brasil.museum +bristol.museum +british.museum +britishcolumbia.museum +broadcast.museum +brunel.museum +brussel.museum +brussels.museum +bruxelles.museum +building.museum +burghof.museum +bus.museum +bushey.museum +cadaques.museum +california.museum +cambridge.museum +can.museum +canada.museum +capebreton.museum +carrier.museum +cartoonart.museum +casadelamoneda.museum +castle.museum +castres.museum +celtic.museum +center.museum +chattanooga.museum +cheltenham.museum +chesapeakebay.museum +chicago.museum +children.museum +childrens.museum +childrensgarden.museum +chiropractic.museum +chocolate.museum +christiansburg.museum +cincinnati.museum +cinema.museum +circus.museum +civilisation.museum +civilization.museum +civilwar.museum +clinton.museum +clock.museum +coal.museum +coastaldefence.museum +cody.museum +coldwar.museum +collection.museum +colonialwilliamsburg.museum +coloradoplateau.museum +columbia.museum +columbus.museum +communication.museum +communications.museum +community.museum +computer.museum +computerhistory.museum +comunicações.museum +contemporary.museum +contemporaryart.museum +convent.museum +copenhagen.museum +corporation.museum +correios-e-telecomunicações.museum +corvette.museum +costume.museum +countryestate.museum +county.museum +crafts.museum +cranbrook.museum +creation.museum +cultural.museum +culturalcenter.museum +culture.museum +cyber.museum +cymru.museum +dali.museum +dallas.museum +database.museum +ddr.museum +decorativearts.museum +delaware.museum +delmenhorst.museum +denmark.museum +depot.museum +design.museum +detroit.museum +dinosaur.museum +discovery.museum +dolls.museum +donostia.museum +durham.museum +eastafrica.museum +eastcoast.museum +education.museum +educational.museum +egyptian.museum +eisenbahn.museum +elburg.museum +elvendrell.museum +embroidery.museum +encyclopedic.museum +england.museum +entomology.museum +environment.museum +environmentalconservation.museum +epilepsy.museum +essex.museum +estate.museum +ethnology.museum +exeter.museum +exhibition.museum +family.museum +farm.museum +farmequipment.museum +farmers.museum 
+farmstead.museum +field.museum +figueres.museum +filatelia.museum +film.museum +fineart.museum +finearts.museum +finland.museum +flanders.museum +florida.museum +force.museum +fortmissoula.museum +fortworth.museum +foundation.museum +francaise.museum +frankfurt.museum +franziskaner.museum +freemasonry.museum +freiburg.museum +fribourg.museum +frog.museum +fundacio.museum +furniture.museum +gallery.museum +garden.museum +gateway.museum +geelvinck.museum +gemological.museum +geology.museum +georgia.museum +giessen.museum +glas.museum +glass.museum +gorge.museum +grandrapids.museum +graz.museum +guernsey.museum +halloffame.museum +hamburg.museum +handson.museum +harvestcelebration.museum +hawaii.museum +health.museum +heimatunduhren.museum +hellas.museum +helsinki.museum +hembygdsforbund.museum +heritage.museum +histoire.museum +historical.museum +historicalsociety.museum +historichouses.museum +historisch.museum +historisches.museum +history.museum +historyofscience.museum +horology.museum +house.museum +humanities.museum +illustration.museum +imageandsound.museum +indian.museum +indiana.museum +indianapolis.museum +indianmarket.museum +intelligence.museum +interactive.museum +iraq.museum +iron.museum +isleofman.museum +jamison.museum +jefferson.museum +jerusalem.museum +jewelry.museum +jewish.museum +jewishart.museum +jfk.museum +journalism.museum +judaica.museum +judygarland.museum +juedisches.museum +juif.museum +karate.museum +karikatur.museum +kids.museum +koebenhavn.museum +koeln.museum +kunst.museum +kunstsammlung.museum +kunstunddesign.museum +labor.museum +labour.museum +lajolla.museum +lancashire.museum +landes.museum +lans.museum +läns.museum +larsson.museum +lewismiller.museum +lincoln.museum +linz.museum +living.museum +livinghistory.museum +localhistory.museum +london.museum +losangeles.museum +louvre.museum +loyalist.museum +lucerne.museum +luxembourg.museum +luzern.museum +mad.museum +madrid.museum +mallorca.museum +manchester.museum +mansion.museum +mansions.museum +manx.museum +marburg.museum +maritime.museum +maritimo.museum +maryland.museum +marylhurst.museum +media.museum +medical.museum +medizinhistorisches.museum +meeres.museum +memorial.museum +mesaverde.museum +michigan.museum +midatlantic.museum +military.museum +mill.museum +miners.museum +mining.museum +minnesota.museum +missile.museum +missoula.museum +modern.museum +moma.museum +money.museum +monmouth.museum +monticello.museum +montreal.museum +moscow.museum +motorcycle.museum +muenchen.museum +muenster.museum +mulhouse.museum +muncie.museum +museet.museum +museumcenter.museum +museumvereniging.museum +music.museum +national.museum +nationalfirearms.museum +nationalheritage.museum +nativeamerican.museum +naturalhistory.museum +naturalhistorymuseum.museum +naturalsciences.museum +nature.museum +naturhistorisches.museum +natuurwetenschappen.museum +naumburg.museum +naval.museum +nebraska.museum +neues.museum +newhampshire.museum +newjersey.museum +newmexico.museum +newport.museum +newspaper.museum +newyork.museum +niepce.museum +norfolk.museum +north.museum +nrw.museum +nyc.museum +nyny.museum +oceanographic.museum +oceanographique.museum +omaha.museum +online.museum +ontario.museum +openair.museum +oregon.museum +oregontrail.museum +otago.museum +oxford.museum +pacific.museum +paderborn.museum +palace.museum +paleo.museum +palmsprings.museum +panama.museum +paris.museum +pasadena.museum +pharmacy.museum +philadelphia.museum +philadelphiaarea.museum +philately.museum +phoenix.museum +photography.museum 
+pilots.museum +pittsburgh.museum +planetarium.museum +plantation.museum +plants.museum +plaza.museum +portal.museum +portland.museum +portlligat.museum +posts-and-telecommunications.museum +preservation.museum +presidio.museum +press.museum +project.museum +public.museum +pubol.museum +quebec.museum +railroad.museum +railway.museum +research.museum +resistance.museum +riodejaneiro.museum +rochester.museum +rockart.museum +roma.museum +russia.museum +saintlouis.museum +salem.museum +salvadordali.museum +salzburg.museum +sandiego.museum +sanfrancisco.museum +santabarbara.museum +santacruz.museum +santafe.museum +saskatchewan.museum +satx.museum +savannahga.museum +schlesisches.museum +schoenbrunn.museum +schokoladen.museum +school.museum +schweiz.museum +science.museum +scienceandhistory.museum +scienceandindustry.museum +sciencecenter.museum +sciencecenters.museum +science-fiction.museum +sciencehistory.museum +sciences.museum +sciencesnaturelles.museum +scotland.museum +seaport.museum +settlement.museum +settlers.museum +shell.museum +sherbrooke.museum +sibenik.museum +silk.museum +ski.museum +skole.museum +society.museum +sologne.museum +soundandvision.museum +southcarolina.museum +southwest.museum +space.museum +spy.museum +square.museum +stadt.museum +stalbans.museum +starnberg.museum +state.museum +stateofdelaware.museum +station.museum +steam.museum +steiermark.museum +stjohn.museum +stockholm.museum +stpetersburg.museum +stuttgart.museum +suisse.museum +surgeonshall.museum +surrey.museum +svizzera.museum +sweden.museum +sydney.museum +tank.museum +tcm.museum +technology.museum +telekommunikation.museum +television.museum +texas.museum +textile.museum +theater.museum +time.museum +timekeeping.museum +topology.museum +torino.museum +touch.museum +town.museum +transport.museum +tree.museum +trolley.museum +trust.museum +trustee.museum +uhren.museum +ulm.museum +undersea.museum +university.museum +usa.museum +usantiques.museum +usarts.museum +uscountryestate.museum +usculture.museum +usdecorativearts.museum +usgarden.museum +ushistory.museum +ushuaia.museum +uslivinghistory.museum +utah.museum +uvic.museum +valley.museum +vantaa.museum +versailles.museum +viking.museum +village.museum +virginia.museum +virtual.museum +virtuel.museum +vlaanderen.museum +volkenkunde.museum +wales.museum +wallonie.museum +war.museum +washingtondc.museum +watchandclock.museum +watch-and-clock.museum +western.museum +westfalen.museum +whaling.museum +wildlife.museum +williamsburg.museum +windmill.museum +workshop.museum +york.museum +yorkshire.museum +yosemite.museum +youth.museum +zoological.museum +zoology.museum +ירושלים.museum +иком.museum + +// mv : https://en.wikipedia.org/wiki/.mv +// "mv" included because, contra Wikipedia, google.mv exists. 
+mv +aero.mv +biz.mv +com.mv +coop.mv +edu.mv +gov.mv +info.mv +int.mv +mil.mv +museum.mv +name.mv +net.mv +org.mv +pro.mv + +// mw : http://www.registrar.mw/ +mw +ac.mw +biz.mw +co.mw +com.mw +coop.mw +edu.mw +gov.mw +int.mw +museum.mw +net.mw +org.mw + +// mx : http://www.nic.mx/ +// Submitted by registry +mx +com.mx +org.mx +gob.mx +edu.mx +net.mx + +// my : http://www.mynic.net.my/ +my +com.my +net.my +org.my +gov.my +edu.my +mil.my +name.my + +// mz : http://www.uem.mz/ +// Submitted by registry +mz +ac.mz +adv.mz +co.mz +edu.mz +gov.mz +mil.mz +net.mz +org.mz + +// na : http://www.na-nic.com.na/ +// http://www.info.na/domain/ +na +info.na +pro.na +name.na +school.na +or.na +dr.na +us.na +mx.na +ca.na +in.na +cc.na +tv.na +ws.na +mobi.na +co.na +com.na +org.na + +// name : has 2nd-level tlds, but there's no list of them +name + +// nc : http://www.cctld.nc/ +nc +asso.nc +nom.nc + +// ne : https://en.wikipedia.org/wiki/.ne +ne + +// net : https://en.wikipedia.org/wiki/.net +net + +// nf : https://en.wikipedia.org/wiki/.nf +nf +com.nf +net.nf +per.nf +rec.nf +web.nf +arts.nf +firm.nf +info.nf +other.nf +store.nf + +// ng : http://www.nira.org.ng/index.php/join-us/register-ng-domain/189-nira-slds +ng +com.ng +edu.ng +gov.ng +i.ng +mil.ng +mobi.ng +name.ng +net.ng +org.ng +sch.ng + +// ni : http://www.nic.ni/ +ni +ac.ni +biz.ni +co.ni +com.ni +edu.ni +gob.ni +in.ni +info.ni +int.ni +mil.ni +net.ni +nom.ni +org.ni +web.ni + +// nl : https://en.wikipedia.org/wiki/.nl +// https://www.sidn.nl/ +// ccTLD for the Netherlands +nl + +// no : https://www.norid.no/en/om-domenenavn/regelverk-for-no/ +// Norid geographical second level domains : https://www.norid.no/en/om-domenenavn/regelverk-for-no/vedlegg-b/ +// Norid category second level domains : https://www.norid.no/en/om-domenenavn/regelverk-for-no/vedlegg-c/ +// Norid category second-level domains managed by parties other than Norid : https://www.norid.no/en/om-domenenavn/regelverk-for-no/vedlegg-d/ +// RSS feed: https://teknisk.norid.no/en/feed/ +no +// Norid category second level domains : https://www.norid.no/en/om-domenenavn/regelverk-for-no/vedlegg-c/ +fhs.no +vgs.no +fylkesbibl.no +folkebibl.no +museum.no +idrett.no +priv.no +// Norid category second-level domains managed by parties other than Norid : https://www.norid.no/en/om-domenenavn/regelverk-for-no/vedlegg-d/ +mil.no +stat.no +dep.no +kommune.no +herad.no +// Norid geographical second level domains : https://www.norid.no/en/om-domenenavn/regelverk-for-no/vedlegg-b/ +// counties +aa.no +ah.no +bu.no +fm.no +hl.no +hm.no +jan-mayen.no +mr.no +nl.no +nt.no +of.no +ol.no +oslo.no +rl.no +sf.no +st.no +svalbard.no +tm.no +tr.no +va.no +vf.no +// primary and lower secondary schools per county +gs.aa.no +gs.ah.no +gs.bu.no +gs.fm.no +gs.hl.no +gs.hm.no +gs.jan-mayen.no +gs.mr.no +gs.nl.no +gs.nt.no +gs.of.no +gs.ol.no +gs.oslo.no +gs.rl.no +gs.sf.no +gs.st.no +gs.svalbard.no +gs.tm.no +gs.tr.no +gs.va.no +gs.vf.no +// cities +akrehamn.no +åkrehamn.no +algard.no +ålgård.no +arna.no +brumunddal.no +bryne.no +bronnoysund.no +brønnøysund.no +drobak.no +drøbak.no +egersund.no +fetsund.no +floro.no +florø.no +fredrikstad.no +hokksund.no +honefoss.no +hønefoss.no +jessheim.no +jorpeland.no +jørpeland.no +kirkenes.no +kopervik.no +krokstadelva.no +langevag.no +langevåg.no +leirvik.no +mjondalen.no +mjøndalen.no +mo-i-rana.no +mosjoen.no +mosjøen.no +nesoddtangen.no +orkanger.no +osoyro.no +osøyro.no +raholt.no +råholt.no +sandnessjoen.no +sandnessjøen.no +skedsmokorset.no +slattum.no 
+spjelkavik.no +stathelle.no +stavern.no +stjordalshalsen.no +stjørdalshalsen.no +tananger.no +tranby.no +vossevangen.no +// communities +afjord.no +åfjord.no +agdenes.no +al.no +ål.no +alesund.no +ålesund.no +alstahaug.no +alta.no +áltá.no +alaheadju.no +álaheadju.no +alvdal.no +amli.no +åmli.no +amot.no +åmot.no +andebu.no +andoy.no +andøy.no +andasuolo.no +ardal.no +årdal.no +aremark.no +arendal.no +ås.no +aseral.no +åseral.no +asker.no +askim.no +askvoll.no +askoy.no +askøy.no +asnes.no +åsnes.no +audnedaln.no +aukra.no +aure.no +aurland.no +aurskog-holand.no +aurskog-høland.no +austevoll.no +austrheim.no +averoy.no +averøy.no +balestrand.no +ballangen.no +balat.no +bálát.no +balsfjord.no +bahccavuotna.no +báhccavuotna.no +bamble.no +bardu.no +beardu.no +beiarn.no +bajddar.no +bájddar.no +baidar.no +báidár.no +berg.no +bergen.no +berlevag.no +berlevåg.no +bearalvahki.no +bearalváhki.no +bindal.no +birkenes.no +bjarkoy.no +bjarkøy.no +bjerkreim.no +bjugn.no +bodo.no +bodø.no +badaddja.no +bådåddjå.no +budejju.no +bokn.no +bremanger.no +bronnoy.no +brønnøy.no +bygland.no +bykle.no +barum.no +bærum.no +bo.telemark.no +bø.telemark.no +bo.nordland.no +bø.nordland.no +bievat.no +bievát.no +bomlo.no +bømlo.no +batsfjord.no +båtsfjord.no +bahcavuotna.no +báhcavuotna.no +dovre.no +drammen.no +drangedal.no +dyroy.no +dyrøy.no +donna.no +dønna.no +eid.no +eidfjord.no +eidsberg.no +eidskog.no +eidsvoll.no +eigersund.no +elverum.no +enebakk.no +engerdal.no +etne.no +etnedal.no +evenes.no +evenassi.no +evenášši.no +evje-og-hornnes.no +farsund.no +fauske.no +fuossko.no +fuoisku.no +fedje.no +fet.no +finnoy.no +finnøy.no +fitjar.no +fjaler.no +fjell.no +flakstad.no +flatanger.no +flekkefjord.no +flesberg.no +flora.no +fla.no +flå.no +folldal.no +forsand.no +fosnes.no +frei.no +frogn.no +froland.no +frosta.no +frana.no +fræna.no +froya.no +frøya.no +fusa.no +fyresdal.no +forde.no +førde.no +gamvik.no +gangaviika.no +gáŋgaviika.no +gaular.no +gausdal.no +gildeskal.no +gildeskål.no +giske.no +gjemnes.no +gjerdrum.no +gjerstad.no +gjesdal.no +gjovik.no +gjøvik.no +gloppen.no +gol.no +gran.no +grane.no +granvin.no +gratangen.no +grimstad.no +grong.no +kraanghke.no +kråanghke.no +grue.no +gulen.no +hadsel.no +halden.no +halsa.no +hamar.no +hamaroy.no +habmer.no +hábmer.no +hapmir.no +hápmir.no +hammerfest.no +hammarfeasta.no +hámmárfeasta.no +haram.no +hareid.no +harstad.no +hasvik.no +aknoluokta.no +ákŋoluokta.no +hattfjelldal.no +aarborte.no +haugesund.no +hemne.no +hemnes.no +hemsedal.no +heroy.more-og-romsdal.no +herøy.møre-og-romsdal.no +heroy.nordland.no +herøy.nordland.no +hitra.no +hjartdal.no +hjelmeland.no +hobol.no +hobøl.no +hof.no +hol.no +hole.no +holmestrand.no +holtalen.no +holtålen.no +hornindal.no +horten.no +hurdal.no +hurum.no +hvaler.no +hyllestad.no +hagebostad.no +hægebostad.no +hoyanger.no +høyanger.no +hoylandet.no +høylandet.no +ha.no +hå.no +ibestad.no +inderoy.no +inderøy.no +iveland.no +jevnaker.no +jondal.no +jolster.no +jølster.no +karasjok.no +karasjohka.no +kárášjohka.no +karlsoy.no +galsa.no +gálsá.no +karmoy.no +karmøy.no +kautokeino.no +guovdageaidnu.no +klepp.no +klabu.no +klæbu.no +kongsberg.no +kongsvinger.no +kragero.no +kragerø.no +kristiansand.no +kristiansund.no +krodsherad.no +krødsherad.no +kvalsund.no +rahkkeravju.no +ráhkkerávju.no +kvam.no +kvinesdal.no +kvinnherad.no +kviteseid.no +kvitsoy.no +kvitsøy.no +kvafjord.no +kvæfjord.no +giehtavuoatna.no +kvanangen.no +kvænangen.no +navuotna.no +návuotna.no +kafjord.no +kåfjord.no +gaivuotna.no +gáivuotna.no 
+larvik.no +lavangen.no +lavagis.no +loabat.no +loabát.no +lebesby.no +davvesiida.no +leikanger.no +leirfjord.no +leka.no +leksvik.no +lenvik.no +leangaviika.no +leaŋgaviika.no +lesja.no +levanger.no +lier.no +lierne.no +lillehammer.no +lillesand.no +lindesnes.no +lindas.no +lindås.no +lom.no +loppa.no +lahppi.no +láhppi.no +lund.no +lunner.no +luroy.no +lurøy.no +luster.no +lyngdal.no +lyngen.no +ivgu.no +lardal.no +lerdal.no +lærdal.no +lodingen.no +lødingen.no +lorenskog.no +lørenskog.no +loten.no +løten.no +malvik.no +masoy.no +måsøy.no +muosat.no +muosát.no +mandal.no +marker.no +marnardal.no +masfjorden.no +meland.no +meldal.no +melhus.no +meloy.no +meløy.no +meraker.no +meråker.no +moareke.no +moåreke.no +midsund.no +midtre-gauldal.no +modalen.no +modum.no +molde.no +moskenes.no +moss.no +mosvik.no +malselv.no +målselv.no +malatvuopmi.no +málatvuopmi.no +namdalseid.no +aejrie.no +namsos.no +namsskogan.no +naamesjevuemie.no +nååmesjevuemie.no +laakesvuemie.no +nannestad.no +narvik.no +narviika.no +naustdal.no +nedre-eiker.no +nes.akershus.no +nes.buskerud.no +nesna.no +nesodden.no +nesseby.no +unjarga.no +unjárga.no +nesset.no +nissedal.no +nittedal.no +nord-aurdal.no +nord-fron.no +nord-odal.no +norddal.no +nordkapp.no +davvenjarga.no +davvenjárga.no +nordre-land.no +nordreisa.no +raisa.no +ráisa.no +nore-og-uvdal.no +notodden.no +naroy.no +nærøy.no +notteroy.no +nøtterøy.no +odda.no +oksnes.no +øksnes.no +oppdal.no +oppegard.no +oppegård.no +orkdal.no +orland.no +ørland.no +orskog.no +ørskog.no +orsta.no +ørsta.no +os.hedmark.no +os.hordaland.no +osen.no +osteroy.no +osterøy.no +ostre-toten.no +østre-toten.no +overhalla.no +ovre-eiker.no +øvre-eiker.no +oyer.no +øyer.no +oygarden.no +øygarden.no +oystre-slidre.no +øystre-slidre.no +porsanger.no +porsangu.no +porsáŋgu.no +porsgrunn.no +radoy.no +radøy.no +rakkestad.no +rana.no +ruovat.no +randaberg.no +rauma.no +rendalen.no +rennebu.no +rennesoy.no +rennesøy.no +rindal.no +ringebu.no +ringerike.no +ringsaker.no +rissa.no +risor.no +risør.no +roan.no +rollag.no +rygge.no +ralingen.no +rælingen.no +rodoy.no +rødøy.no +romskog.no +rømskog.no +roros.no +røros.no +rost.no +røst.no +royken.no +røyken.no +royrvik.no +røyrvik.no +rade.no +råde.no +salangen.no +siellak.no +saltdal.no +salat.no +sálát.no +sálat.no +samnanger.no +sande.more-og-romsdal.no +sande.møre-og-romsdal.no +sande.vestfold.no +sandefjord.no +sandnes.no +sandoy.no +sandøy.no +sarpsborg.no +sauda.no +sauherad.no +sel.no +selbu.no +selje.no +seljord.no +sigdal.no +siljan.no +sirdal.no +skaun.no +skedsmo.no +ski.no +skien.no +skiptvet.no +skjervoy.no +skjervøy.no +skierva.no +skiervá.no +skjak.no +skjåk.no +skodje.no +skanland.no +skånland.no +skanit.no +skánit.no +smola.no +smøla.no +snillfjord.no +snasa.no +snåsa.no +snoasa.no +snaase.no +snåase.no +sogndal.no +sokndal.no +sola.no +solund.no +songdalen.no +sortland.no +spydeberg.no +stange.no +stavanger.no +steigen.no +steinkjer.no +stjordal.no +stjørdal.no +stokke.no +stor-elvdal.no +stord.no +stordal.no +storfjord.no +omasvuotna.no +strand.no +stranda.no +stryn.no +sula.no +suldal.no +sund.no +sunndal.no +surnadal.no +sveio.no +svelvik.no +sykkylven.no +sogne.no +søgne.no +somna.no +sømna.no +sondre-land.no +søndre-land.no +sor-aurdal.no +sør-aurdal.no +sor-fron.no +sør-fron.no +sor-odal.no +sør-odal.no +sor-varanger.no +sør-varanger.no +matta-varjjat.no +mátta-várjjat.no +sorfold.no +sørfold.no +sorreisa.no +sørreisa.no +sorum.no +sørum.no +tana.no +deatnu.no +time.no +tingvoll.no +tinn.no +tjeldsund.no 
+dielddanuorri.no +tjome.no +tjøme.no +tokke.no +tolga.no +torsken.no +tranoy.no +tranøy.no +tromso.no +tromsø.no +tromsa.no +romsa.no +trondheim.no +troandin.no +trysil.no +trana.no +træna.no +trogstad.no +trøgstad.no +tvedestrand.no +tydal.no +tynset.no +tysfjord.no +divtasvuodna.no +divttasvuotna.no +tysnes.no +tysvar.no +tysvær.no +tonsberg.no +tønsberg.no +ullensaker.no +ullensvang.no +ulvik.no +utsira.no +vadso.no +vadsø.no +cahcesuolo.no +čáhcesuolo.no +vaksdal.no +valle.no +vang.no +vanylven.no +vardo.no +vardø.no +varggat.no +várggát.no +vefsn.no +vaapste.no +vega.no +vegarshei.no +vegårshei.no +vennesla.no +verdal.no +verran.no +vestby.no +vestnes.no +vestre-slidre.no +vestre-toten.no +vestvagoy.no +vestvågøy.no +vevelstad.no +vik.no +vikna.no +vindafjord.no +volda.no +voss.no +varoy.no +værøy.no +vagan.no +vågan.no +voagat.no +vagsoy.no +vågsøy.no +vaga.no +vågå.no +valer.ostfold.no +våler.østfold.no +valer.hedmark.no +våler.hedmark.no + +// np : http://www.mos.com.np/register.html +*.np + +// nr : http://cenpac.net.nr/dns/index.html +// Submitted by registry +nr +biz.nr +info.nr +gov.nr +edu.nr +org.nr +net.nr +com.nr + +// nu : https://en.wikipedia.org/wiki/.nu +nu + +// nz : https://en.wikipedia.org/wiki/.nz +// Submitted by registry +nz +ac.nz +co.nz +cri.nz +geek.nz +gen.nz +govt.nz +health.nz +iwi.nz +kiwi.nz +maori.nz +mil.nz +māori.nz +net.nz +org.nz +parliament.nz +school.nz + +// om : https://en.wikipedia.org/wiki/.om +om +co.om +com.om +edu.om +gov.om +med.om +museum.om +net.om +org.om +pro.om + +// onion : https://tools.ietf.org/html/rfc7686 +onion + +// org : https://en.wikipedia.org/wiki/.org +org + +// pa : http://www.nic.pa/ +// Some additional second level "domains" resolve directly as hostnames, such as +// pannet.pa, so we add a rule for "pa". 
+pa +ac.pa +gob.pa +com.pa +org.pa +sld.pa +edu.pa +net.pa +ing.pa +abo.pa +med.pa +nom.pa + +// pe : https://www.nic.pe/InformeFinalComision.pdf +pe +edu.pe +gob.pe +nom.pe +mil.pe +org.pe +com.pe +net.pe + +// pf : http://www.gobin.info/domainname/formulaire-pf.pdf +pf +com.pf +org.pf +edu.pf + +// pg : https://en.wikipedia.org/wiki/.pg +*.pg + +// ph : http://www.domains.ph/FAQ2.asp +// Submitted by registry +ph +com.ph +net.ph +org.ph +gov.ph +edu.ph +ngo.ph +mil.ph +i.ph + +// pk : http://pk5.pknic.net.pk/pk5/msgNamepk.PK +pk +com.pk +net.pk +edu.pk +org.pk +fam.pk +biz.pk +web.pk +gov.pk +gob.pk +gok.pk +gon.pk +gop.pk +gos.pk +info.pk + +// pl http://www.dns.pl/english/index.html +// Submitted by registry +pl +com.pl +net.pl +org.pl +// pl functional domains (http://www.dns.pl/english/index.html) +aid.pl +agro.pl +atm.pl +auto.pl +biz.pl +edu.pl +gmina.pl +gsm.pl +info.pl +mail.pl +miasta.pl +media.pl +mil.pl +nieruchomosci.pl +nom.pl +pc.pl +powiat.pl +priv.pl +realestate.pl +rel.pl +sex.pl +shop.pl +sklep.pl +sos.pl +szkola.pl +targi.pl +tm.pl +tourism.pl +travel.pl +turystyka.pl +// Government domains +gov.pl +ap.gov.pl +ic.gov.pl +is.gov.pl +us.gov.pl +kmpsp.gov.pl +kppsp.gov.pl +kwpsp.gov.pl +psp.gov.pl +wskr.gov.pl +kwp.gov.pl +mw.gov.pl +ug.gov.pl +um.gov.pl +umig.gov.pl +ugim.gov.pl +upow.gov.pl +uw.gov.pl +starostwo.gov.pl +pa.gov.pl +po.gov.pl +psse.gov.pl +pup.gov.pl +rzgw.gov.pl +sa.gov.pl +so.gov.pl +sr.gov.pl +wsa.gov.pl +sko.gov.pl +uzs.gov.pl +wiih.gov.pl +winb.gov.pl +pinb.gov.pl +wios.gov.pl +witd.gov.pl +wzmiuw.gov.pl +piw.gov.pl +wiw.gov.pl +griw.gov.pl +wif.gov.pl +oum.gov.pl +sdn.gov.pl +zp.gov.pl +uppo.gov.pl +mup.gov.pl +wuoz.gov.pl +konsulat.gov.pl +oirm.gov.pl +// pl regional domains (http://www.dns.pl/english/index.html) +augustow.pl +babia-gora.pl +bedzin.pl +beskidy.pl +bialowieza.pl +bialystok.pl +bielawa.pl +bieszczady.pl +boleslawiec.pl +bydgoszcz.pl +bytom.pl +cieszyn.pl +czeladz.pl +czest.pl +dlugoleka.pl +elblag.pl +elk.pl +glogow.pl +gniezno.pl +gorlice.pl +grajewo.pl +ilawa.pl +jaworzno.pl +jelenia-gora.pl +jgora.pl +kalisz.pl +kazimierz-dolny.pl +karpacz.pl +kartuzy.pl +kaszuby.pl +katowice.pl +kepno.pl +ketrzyn.pl +klodzko.pl +kobierzyce.pl +kolobrzeg.pl +konin.pl +konskowola.pl +kutno.pl +lapy.pl +lebork.pl +legnica.pl +lezajsk.pl +limanowa.pl +lomza.pl +lowicz.pl +lubin.pl +lukow.pl +malbork.pl +malopolska.pl +mazowsze.pl +mazury.pl +mielec.pl +mielno.pl +mragowo.pl +naklo.pl +nowaruda.pl +nysa.pl +olawa.pl +olecko.pl +olkusz.pl +olsztyn.pl +opoczno.pl +opole.pl +ostroda.pl +ostroleka.pl +ostrowiec.pl +ostrowwlkp.pl +pila.pl +pisz.pl +podhale.pl +podlasie.pl +polkowice.pl +pomorze.pl +pomorskie.pl +prochowice.pl +pruszkow.pl +przeworsk.pl +pulawy.pl +radom.pl +rawa-maz.pl +rybnik.pl +rzeszow.pl +sanok.pl +sejny.pl +slask.pl +slupsk.pl +sosnowiec.pl +stalowa-wola.pl +skoczow.pl +starachowice.pl +stargard.pl +suwalki.pl +swidnica.pl +swiebodzin.pl +swinoujscie.pl +szczecin.pl +szczytno.pl +tarnobrzeg.pl +tgory.pl +turek.pl +tychy.pl +ustka.pl +walbrzych.pl +warmia.pl +warszawa.pl +waw.pl +wegrow.pl +wielun.pl +wlocl.pl +wloclawek.pl +wodzislaw.pl +wolomin.pl +wroclaw.pl +zachpomor.pl +zagan.pl +zarow.pl +zgora.pl +zgorzelec.pl + +// pm : http://www.afnic.fr/medias/documents/AFNIC-naming-policy2012.pdf +pm + +// pn : http://www.government.pn/PnRegistry/policies.htm +pn +gov.pn +co.pn +org.pn +edu.pn +net.pn + +// post : https://en.wikipedia.org/wiki/.post +post + +// pr : http://www.nic.pr/index.asp?f=1 +pr +com.pr +net.pr +org.pr +gov.pr 
+edu.pr +isla.pr +pro.pr +biz.pr +info.pr +name.pr +// these aren't mentioned on nic.pr, but on https://en.wikipedia.org/wiki/.pr +est.pr +prof.pr +ac.pr + +// pro : http://registry.pro/get-pro +pro +aaa.pro +aca.pro +acct.pro +avocat.pro +bar.pro +cpa.pro +eng.pro +jur.pro +law.pro +med.pro +recht.pro + +// ps : https://en.wikipedia.org/wiki/.ps +// http://www.nic.ps/registration/policy.html#reg +ps +edu.ps +gov.ps +sec.ps +plo.ps +com.ps +org.ps +net.ps + +// pt : http://online.dns.pt/dns/start_dns +pt +net.pt +gov.pt +org.pt +edu.pt +int.pt +publ.pt +com.pt +nome.pt + +// pw : https://en.wikipedia.org/wiki/.pw +pw +co.pw +ne.pw +or.pw +ed.pw +go.pw +belau.pw + +// py : http://www.nic.py/pautas.html#seccion_9 +// Submitted by registry +py +com.py +coop.py +edu.py +gov.py +mil.py +net.py +org.py + +// qa : http://domains.qa/en/ +qa +com.qa +edu.qa +gov.qa +mil.qa +name.qa +net.qa +org.qa +sch.qa + +// re : http://www.afnic.re/obtenir/chartes/nommage-re/annexe-descriptifs +re +asso.re +com.re +nom.re + +// ro : http://www.rotld.ro/ +ro +arts.ro +com.ro +firm.ro +info.ro +nom.ro +nt.ro +org.ro +rec.ro +store.ro +tm.ro +www.ro + +// rs : https://www.rnids.rs/en/domains/national-domains +rs +ac.rs +co.rs +edu.rs +gov.rs +in.rs +org.rs + +// ru : https://cctld.ru/files/pdf/docs/en/rules_ru-rf.pdf +// Submitted by George Georgievsky +ru + +// rw : https://www.ricta.org.rw/sites/default/files/resources/registry_registrar_contract_0.pdf +rw +ac.rw +co.rw +coop.rw +gov.rw +mil.rw +net.rw +org.rw + +// sa : http://www.nic.net.sa/ +sa +com.sa +net.sa +org.sa +gov.sa +med.sa +pub.sa +edu.sa +sch.sa + +// sb : http://www.sbnic.net.sb/ +// Submitted by registry +sb +com.sb +edu.sb +gov.sb +net.sb +org.sb + +// sc : http://www.nic.sc/ +sc +com.sc +gov.sc +net.sc +org.sc +edu.sc + +// sd : http://www.isoc.sd/sudanic.isoc.sd/billing_pricing.htm +// Submitted by registry +sd +com.sd +net.sd +org.sd +edu.sd +med.sd +tv.sd +gov.sd +info.sd + +// se : https://en.wikipedia.org/wiki/.se +// Submitted by registry +se +a.se +ac.se +b.se +bd.se +brand.se +c.se +d.se +e.se +f.se +fh.se +fhsk.se +fhv.se +g.se +h.se +i.se +k.se +komforb.se +kommunalforbund.se +komvux.se +l.se +lanbib.se +m.se +n.se +naturbruksgymn.se +o.se +org.se +p.se +parti.se +pp.se +press.se +r.se +s.se +t.se +tm.se +u.se +w.se +x.se +y.se +z.se + +// sg : http://www.nic.net.sg/page/registration-policies-procedures-and-guidelines +sg +com.sg +net.sg +org.sg +gov.sg +edu.sg +per.sg + +// sh : http://www.nic.sh/registrar.html +sh +com.sh +net.sh +gov.sh +org.sh +mil.sh + +// si : https://en.wikipedia.org/wiki/.si +si + +// sj : No registrations at this time. +// Submitted by registry +sj + +// sk : https://en.wikipedia.org/wiki/.sk +// list of 2nd level domains ? 
+sk + +// sl : http://www.nic.sl +// Submitted by registry +sl +com.sl +net.sl +edu.sl +gov.sl +org.sl + +// sm : https://en.wikipedia.org/wiki/.sm +sm + +// sn : https://en.wikipedia.org/wiki/.sn +sn +art.sn +com.sn +edu.sn +gouv.sn +org.sn +perso.sn +univ.sn + +// so : http://sonic.so/policies/ +so +com.so +edu.so +gov.so +me.so +net.so +org.so + +// sr : https://en.wikipedia.org/wiki/.sr +sr + +// ss : https://registry.nic.ss/ +// Submitted by registry +ss +biz.ss +com.ss +edu.ss +gov.ss +net.ss +org.ss + +// st : http://www.nic.st/html/policyrules/ +st +co.st +com.st +consulado.st +edu.st +embaixada.st +gov.st +mil.st +net.st +org.st +principe.st +saotome.st +store.st + +// su : https://en.wikipedia.org/wiki/.su +su + +// sv : http://www.svnet.org.sv/niveldos.pdf +sv +com.sv +edu.sv +gob.sv +org.sv +red.sv + +// sx : https://en.wikipedia.org/wiki/.sx +// Submitted by registry +sx +gov.sx + +// sy : https://en.wikipedia.org/wiki/.sy +// see also: http://www.gobin.info/domainname/sy.doc +sy +edu.sy +gov.sy +net.sy +mil.sy +com.sy +org.sy + +// sz : https://en.wikipedia.org/wiki/.sz +// http://www.sispa.org.sz/ +sz +co.sz +ac.sz +org.sz + +// tc : https://en.wikipedia.org/wiki/.tc +tc + +// td : https://en.wikipedia.org/wiki/.td +td + +// tel: https://en.wikipedia.org/wiki/.tel +// http://www.telnic.org/ +tel + +// tf : https://en.wikipedia.org/wiki/.tf +tf + +// tg : https://en.wikipedia.org/wiki/.tg +// http://www.nic.tg/ +tg + +// th : https://en.wikipedia.org/wiki/.th +// Submitted by registry +th +ac.th +co.th +go.th +in.th +mi.th +net.th +or.th + +// tj : http://www.nic.tj/policy.html +tj +ac.tj +biz.tj +co.tj +com.tj +edu.tj +go.tj +gov.tj +int.tj +mil.tj +name.tj +net.tj +nic.tj +org.tj +test.tj +web.tj + +// tk : https://en.wikipedia.org/wiki/.tk +tk + +// tl : https://en.wikipedia.org/wiki/.tl +tl +gov.tl + +// tm : http://www.nic.tm/local.html +tm +com.tm +co.tm +org.tm +net.tm +nom.tm +gov.tm +mil.tm +edu.tm + +// tn : https://en.wikipedia.org/wiki/.tn +// http://whois.ati.tn/ +tn +com.tn +ens.tn +fin.tn +gov.tn +ind.tn +intl.tn +nat.tn +net.tn +org.tn +info.tn +perso.tn +tourism.tn +edunet.tn +rnrt.tn +rns.tn +rnu.tn +mincom.tn +agrinet.tn +defense.tn +turen.tn + +// to : https://en.wikipedia.org/wiki/.to +// Submitted by registry +to +com.to +gov.to +net.to +org.to +edu.to +mil.to + +// tr : https://nic.tr/ +// https://nic.tr/forms/eng/policies.pdf +// https://nic.tr/index.php?USRACTN=PRICELST +tr +av.tr +bbs.tr +bel.tr +biz.tr +com.tr +dr.tr +edu.tr +gen.tr +gov.tr +info.tr +mil.tr +k12.tr +kep.tr +name.tr +net.tr +org.tr +pol.tr +tel.tr +tsk.tr +tv.tr +web.tr +// Used by Northern Cyprus +nc.tr +// Used by government agencies of Northern Cyprus +gov.nc.tr + +// tt : http://www.nic.tt/ +tt +co.tt +com.tt +org.tt +net.tt +biz.tt +info.tt +pro.tt +int.tt +coop.tt +jobs.tt +mobi.tt +travel.tt +museum.tt +aero.tt +name.tt +gov.tt +edu.tt + +// tv : https://en.wikipedia.org/wiki/.tv +// Not listing any 2LDs as reserved since none seem to exist in practice, +// Wikipedia notwithstanding. 
+tv + +// tw : https://en.wikipedia.org/wiki/.tw +tw +edu.tw +gov.tw +mil.tw +com.tw +net.tw +org.tw +idv.tw +game.tw +ebiz.tw +club.tw +網路.tw +組織.tw +商業.tw + +// tz : http://www.tznic.or.tz/index.php/domains +// Submitted by registry +tz +ac.tz +co.tz +go.tz +hotel.tz +info.tz +me.tz +mil.tz +mobi.tz +ne.tz +or.tz +sc.tz +tv.tz + +// ua : https://hostmaster.ua/policy/?ua +// Submitted by registry +ua +// ua 2LD +com.ua +edu.ua +gov.ua +in.ua +net.ua +org.ua +// ua geographic names +// https://hostmaster.ua/2ld/ +cherkassy.ua +cherkasy.ua +chernigov.ua +chernihiv.ua +chernivtsi.ua +chernovtsy.ua +ck.ua +cn.ua +cr.ua +crimea.ua +cv.ua +dn.ua +dnepropetrovsk.ua +dnipropetrovsk.ua +donetsk.ua +dp.ua +if.ua +ivano-frankivsk.ua +kh.ua +kharkiv.ua +kharkov.ua +kherson.ua +khmelnitskiy.ua +khmelnytskyi.ua +kiev.ua +kirovograd.ua +km.ua +kr.ua +krym.ua +ks.ua +kv.ua +kyiv.ua +lg.ua +lt.ua +lugansk.ua +lutsk.ua +lv.ua +lviv.ua +mk.ua +mykolaiv.ua +nikolaev.ua +od.ua +odesa.ua +odessa.ua +pl.ua +poltava.ua +rivne.ua +rovno.ua +rv.ua +sb.ua +sebastopol.ua +sevastopol.ua +sm.ua +sumy.ua +te.ua +ternopil.ua +uz.ua +uzhgorod.ua +vinnica.ua +vinnytsia.ua +vn.ua +volyn.ua +yalta.ua +zaporizhzhe.ua +zaporizhzhia.ua +zhitomir.ua +zhytomyr.ua +zp.ua +zt.ua + +// ug : https://www.registry.co.ug/ +ug +co.ug +or.ug +ac.ug +sc.ug +go.ug +ne.ug +com.ug +org.ug + +// uk : https://en.wikipedia.org/wiki/.uk +// Submitted by registry +uk +ac.uk +co.uk +gov.uk +ltd.uk +me.uk +net.uk +nhs.uk +org.uk +plc.uk +police.uk +*.sch.uk + +// us : https://en.wikipedia.org/wiki/.us +us +dni.us +fed.us +isa.us +kids.us +nsn.us +// us geographic names +ak.us +al.us +ar.us +as.us +az.us +ca.us +co.us +ct.us +dc.us +de.us +fl.us +ga.us +gu.us +hi.us +ia.us +id.us +il.us +in.us +ks.us +ky.us +la.us +ma.us +md.us +me.us +mi.us +mn.us +mo.us +ms.us +mt.us +nc.us +nd.us +ne.us +nh.us +nj.us +nm.us +nv.us +ny.us +oh.us +ok.us +or.us +pa.us +pr.us +ri.us +sc.us +sd.us +tn.us +tx.us +ut.us +vi.us +vt.us +va.us +wa.us +wi.us +wv.us +wy.us +// The registrar notes several more specific domains available in each state, +// such as state.*.us, dst.*.us, etc., but resolution of these is somewhat +// haphazard; in some states these domains resolve as addresses, while in others +// only subdomains are available, or even nothing at all. We include the +// most common ones where it's clear that different sites are different +// entities. 
+k12.ak.us
+k12.al.us
+k12.ar.us
+k12.as.us
+k12.az.us
+k12.ca.us
+k12.co.us
+k12.ct.us
+k12.dc.us
+k12.de.us
+k12.fl.us
+k12.ga.us
+k12.gu.us
+// k12.hi.us Bug 614565 - Hawaii has a state-wide DOE login
+k12.ia.us
+k12.id.us
+k12.il.us
+k12.in.us
+k12.ks.us
+k12.ky.us
+k12.la.us
+k12.ma.us
+k12.md.us
+k12.me.us
+k12.mi.us
+k12.mn.us
+k12.mo.us
+k12.ms.us
+k12.mt.us
+k12.nc.us
+// k12.nd.us Bug 1028347 - Removed at request of Travis Rosso
+k12.ne.us
+k12.nh.us
+k12.nj.us
+k12.nm.us
+k12.nv.us
+k12.ny.us
+k12.oh.us
+k12.ok.us
+k12.or.us
+k12.pa.us
+k12.pr.us
+// k12.ri.us Removed at request of Kim Cournoyer
+k12.sc.us
+// k12.sd.us Bug 934131 - Removed at request of James Booze
+k12.tn.us
+k12.tx.us
+k12.ut.us
+k12.vi.us
+k12.vt.us
+k12.va.us
+k12.wa.us
+k12.wi.us
+// k12.wv.us Bug 947705 - Removed at request of Verne Britton
+k12.wy.us
+cc.ak.us
+cc.al.us
+cc.ar.us
+cc.as.us
+cc.az.us
+cc.ca.us
+cc.co.us
+cc.ct.us
+cc.dc.us
+cc.de.us
+cc.fl.us
+cc.ga.us
+cc.gu.us
+cc.hi.us
+cc.ia.us
+cc.id.us
+cc.il.us
+cc.in.us
+cc.ks.us
+cc.ky.us
+cc.la.us
+cc.ma.us
+cc.md.us
+cc.me.us
+cc.mi.us
+cc.mn.us
+cc.mo.us
+cc.ms.us
+cc.mt.us
+cc.nc.us
+cc.nd.us
+cc.ne.us
+cc.nh.us
+cc.nj.us
+cc.nm.us
+cc.nv.us
+cc.ny.us
+cc.oh.us
+cc.ok.us
+cc.or.us
+cc.pa.us
+cc.pr.us
+cc.ri.us
+cc.sc.us
+cc.sd.us
+cc.tn.us
+cc.tx.us
+cc.ut.us
+cc.vi.us
+cc.vt.us
+cc.va.us
+cc.wa.us
+cc.wi.us
+cc.wv.us
+cc.wy.us
+lib.ak.us
+lib.al.us
+lib.ar.us
+lib.as.us
+lib.az.us
+lib.ca.us
+lib.co.us
+lib.ct.us
+lib.dc.us
+// lib.de.us Issue #243 - Moved to Private section at request of Ed Moore
+lib.fl.us
+lib.ga.us
+lib.gu.us
+lib.hi.us
+lib.ia.us
+lib.id.us
+lib.il.us
+lib.in.us
+lib.ks.us
+lib.ky.us
+lib.la.us
+lib.ma.us
+lib.md.us
+lib.me.us
+lib.mi.us
+lib.mn.us
+lib.mo.us
+lib.ms.us
+lib.mt.us
+lib.nc.us
+lib.nd.us
+lib.ne.us
+lib.nh.us
+lib.nj.us
+lib.nm.us
+lib.nv.us
+lib.ny.us
+lib.oh.us
+lib.ok.us
+lib.or.us
+lib.pa.us
+lib.pr.us
+lib.ri.us
+lib.sc.us
+lib.sd.us
+lib.tn.us
+lib.tx.us
+lib.ut.us
+lib.vi.us
+lib.vt.us
+lib.va.us
+lib.wa.us
+lib.wi.us
+// lib.wv.us Bug 941670 - Removed at request of Larry W Arnold
+lib.wy.us
+// k12.ma.us contains school districts in Massachusetts. The 4LDs are
+// managed independently except for private (PVT), charter (CHTR) and
+// parochial (PAROCH) schools. Those are delegated directly to the
+// 5LD operators.
+pvt.k12.ma.us
+chtr.k12.ma.us
+paroch.k12.ma.us
+// Merit Network, Inc. maintains the registry for =~ /(k12|cc|lib).mi.us/ and the following
+// see also: http://domreg.merit.edu
+// see also: whois -h whois.domreg.merit.edu help
+ann-arbor.mi.us
+cog.mi.us
+dst.mi.us
+eaton.mi.us
+gen.mi.us
+mus.mi.us
+tec.mi.us
+washtenaw.mi.us
+
+// uy : http://www.nic.org.uy/
+uy
+com.uy
+edu.uy
+gub.uy
+mil.uy
+net.uy
+org.uy
+
+// uz : http://www.reg.uz/
+uz
+co.uz
+com.uz
+net.uz
+org.uz
+
+// va : https://en.wikipedia.org/wiki/.va
+va
+
+// vc : https://en.wikipedia.org/wiki/.vc
+// Submitted by registry
+vc
+com.vc
+net.vc
+org.vc
+gov.vc
+mil.vc
+edu.vc
+
+// ve : https://registro.nic.ve/
+// Submitted by registry
+ve
+arts.ve
+co.ve
+com.ve
+e12.ve
+edu.ve
+firm.ve
+gob.ve
+gov.ve
+info.ve
+int.ve
+mil.ve
+net.ve
+org.ve
+rec.ve
+store.ve
+tec.ve
+web.ve
+
+// vg : https://en.wikipedia.org/wiki/.vg
+vg
+
+// vi : http://www.nic.vi/newdomainform.htm
+// http://www.nic.vi/Domain_Rules/body_domain_rules.html indicates some other
+// TLDs are "reserved", such as edu.vi and gov.vi, but doesn't actually say they
+// are available for registration (which they do not seem to be).
+vi
+co.vi
+com.vi
+k12.vi
+net.vi
+org.vi
+
+// vn : https://www.dot.vn/vnnic/vnnic/domainregistration.jsp
+vn
+com.vn
+net.vn
+org.vn
+edu.vn
+gov.vn
+int.vn
+ac.vn
+biz.vn
+info.vn
+name.vn
+pro.vn
+health.vn
+
+// vu : https://en.wikipedia.org/wiki/.vu
+// http://www.vunic.vu/
+vu
+com.vu
+edu.vu
+net.vu
+org.vu
+
+// wf : http://www.afnic.fr/medias/documents/AFNIC-naming-policy2012.pdf
+wf
+
+// ws : https://en.wikipedia.org/wiki/.ws
+// http://samoanic.ws/index.dhtml
+ws
+com.ws
+net.ws
+org.ws
+gov.ws
+edu.ws
+
+// yt : http://www.afnic.fr/medias/documents/AFNIC-naming-policy2012.pdf
+yt
+
+// IDN ccTLDs
+// When submitting patches, please maintain a sort by ISO 3166 ccTLD, then
+// U-label, and follow this format:
+// // A-Label ("<Latin renderings>", <language name>[, variant info]) : <ISO 3166 ccTLD>
+// // [sponsoring org]
+// U-Label
+
+// xn--mgbaam7a8h ("Emerat", Arabic) : AE
+// http://nic.ae/english/arabicdomain/rules.jsp
+امارات
+
+// xn--y9a3aq ("hye", Armenian) : AM
+// ISOC AM (operated by .am Registry)
+հայ
+
+// xn--54b7fta0cc ("Bangla", Bangla) : BD
+বাংলা
+
+// xn--90ae ("bg", Bulgarian) : BG
+бг
+
+// xn--90ais ("bel", Belarusian/Russian Cyrillic) : BY
+// Operated by .by registry
+бел
+
+// xn--fiqs8s ("Zhongguo/China", Chinese, Simplified) : CN
+// CNNIC
+// http://cnnic.cn/html/Dir/2005/10/11/3218.htm
+中国
+
+// xn--fiqz9s ("Zhongguo/China", Chinese, Traditional) : CN
+// CNNIC
+// http://cnnic.cn/html/Dir/2005/10/11/3218.htm
+中國
+
+// xn--lgbbat1ad8j ("Algeria/Al Jazair", Arabic) : DZ
+الجزائر
+
+// xn--wgbh1c ("Egypt/Masr", Arabic) : EG
+// http://www.dotmasr.eg/
+مصر
+
+// xn--e1a4c ("eu", Cyrillic) : EU
+// https://eurid.eu
+ею
+
+// xn--qxa6a ("eu", Greek) : EU
+// https://eurid.eu
+ευ
+
+// xn--mgbah1a3hjkrd ("Mauritania", Arabic) : MR
+موريتانيا
+
+// xn--node ("ge", Georgian Mkhedruli) : GE
+გე
+
+// xn--qxam ("el", Greek) : GR
+// Hellenic Ministry of Infrastructure, Transport, and Networks
+ελ
+
+// xn--j6w193g ("Hong Kong", Chinese) : HK
+// https://www.hkirc.hk
+// Submitted by registry
+// https://www.hkirc.hk/content.jsp?id=30#!/34
+香港
+公司.香港
+教育.香港
+政府.香港
+個人.香港
+網絡.香港
+組織.香港
+
+// xn--2scrj9c ("Bharat", Kannada) : IN
+// India
+ಭಾರತ
+
+// xn--3hcrj9c ("Bharat", Oriya) : IN
+// India
+ଭାରତ
+
+// xn--45br5cyl ("Bharatam", Assamese) : IN
+// India
+ভাৰত
+
+// xn--h2breg3eve ("Bharatam", Sanskrit) : IN
+// India
+भारतम्
+
+// xn--h2brj9c8c ("Bharot", Santali) : IN
+// India
+भारोत
+
+// xn--mgbgu82a ("Bharat", Sindhi) : IN
+// India
+ڀارت
+
+// xn--rvc1e0am3e ("Bharatam", Malayalam) : IN
+// India
+ഭാരതം
+
+// xn--h2brj9c ("Bharat", Devanagari) : IN
+// India
+भारत
+
+// xn--mgbbh1a ("Bharat", Kashmiri) : IN
+// India
+بارت
+
+// xn--mgbbh1a71e ("Bharat", Arabic) : IN
+// India
+بھارت
+
+// xn--fpcrj9c3d ("Bharat", Telugu) : IN
+// India
+భారత్
+
+// xn--gecrj9c ("Bharat", Gujarati) : IN
+// India
+ભારત
+
+// xn--s9brj9c ("Bharat", Gurmukhi) : IN
+// India
+ਭਾਰਤ
+
+// xn--45brj9c ("Bharat", Bengali) : IN
+// India
+ভারত
+
+// xn--xkc2dl3a5ee0h ("India", Tamil) : IN
+// India
+இந்தியா
+
+// xn--mgba3a4f16a ("Iran", Persian) : IR
+ایران
+
+// xn--mgba3a4fra ("Iran", Arabic) : IR
+ايران
+
+// xn--mgbtx2b ("Iraq", Arabic) : IQ
+// Communications and Media Commission
+عراق
+
+// xn--mgbayh7gpa ("al-Ordon", Arabic) : JO
+// National Information Technology Center (NITC)
+// Royal Scientific Society, Al-Jubeiha
+الاردن
+
+// xn--3e0b707e ("Republic of Korea", Hangul) : KR
+한국
+
+// xn--80ao21a ("Kaz", Kazakh) : KZ
+қаз
+
+// xn--fzc2c9e2c ("Lanka", Sinhalese-Sinhala) : LK
+// https://nic.lk
+ලංකා
+
+// xn--xkc2al3hye2a ("Ilangai", Tamil) : LK
+// https://nic.lk
+இலங்கை
+
+// xn--mgbc0a9azcg ("Morocco/al-Maghrib", Arabic) : MA
+المغرب
+
+// xn--d1alf ("mkd", Macedonian) : MK
+// MARnet
+мкд
+
+// xn--l1acc ("mon", Mongolian) : MN
+мон
+
+// xn--mix891f ("Macao", Chinese, Traditional) : MO
+// MONIC / HNET Asia (Registry Operator for .mo)
+澳門
+
+// xn--mix082f ("Macao", Chinese, Simplified) : MO
+澳门
+
+// xn--mgbx4cd0ab ("Malaysia", Malay) : MY
+مليسيا
+
+// xn--mgb9awbf ("Oman", Arabic) : OM
+عمان
+
+// xn--mgbai9azgqp6j ("Pakistan", Urdu/Arabic) : PK
+پاکستان
+
+// xn--mgbai9a5eva00b ("Pakistan", Urdu/Arabic, variant) : PK
+پاكستان
+
+// xn--ygbi2ammx ("Falasteen", Arabic) : PS
+// The Palestinian National Internet Naming Authority (PNINA)
+// http://www.pnina.ps
+فلسطين
+
+// xn--90a3ac ("srb", Cyrillic) : RS
+// https://www.rnids.rs/en/domains/national-domains
+срб
+пр.срб
+орг.срб
+обр.срб
+од.срб
+упр.срб
+ак.срб
+
+// xn--p1ai ("rf", Russian-Cyrillic) : RU
+// https://cctld.ru/files/pdf/docs/en/rules_ru-rf.pdf
+// Submitted by George Georgievsky
+рф
+
+// xn--wgbl6a ("Qatar", Arabic) : QA
+// http://www.ict.gov.qa/
+قطر
+
+// xn--mgberp4a5d4ar ("AlSaudiah", Arabic) : SA
+// http://www.nic.net.sa/
+السعودية
+
+// xn--mgberp4a5d4a87g ("AlSaudiah", Arabic, variant) : SA
+السعودیة
+
+// xn--mgbqly7c0a67fbc ("AlSaudiah", Arabic, variant) : SA
+السعودیۃ
+
+// xn--mgbqly7cvafr ("AlSaudiah", Arabic, variant) : SA
+السعوديه
+
+// xn--mgbpl2fh ("sudan", Arabic) : SD
+// Operated by .sd registry
+سودان
+
+// xn--yfro4i67o ("Singapore", Chinese) : SG
+新加坡
+
+// xn--clchc0ea0b2g2a9gcd ("Singapore", Tamil) : SG
+சிங்கப்பூர்
+
+// xn--ogbpf8fl ("Syria", Arabic) : SY
+سورية
+
+// xn--mgbtf8fl ("Syria", Arabic, variant) : SY
+سوريا
+
+// xn--o3cw4h ("Thai", Thai) : TH
+// http://www.thnic.co.th
+ไทย
+ศึกษา.ไทย
+ธุรกิจ.ไทย
+รัฐบาล.ไทย
+ทหาร.ไทย
+เน็ต.ไทย
+องค์กร.ไทย
+
+// xn--pgbs0dh ("Tunisia", Arabic) : TN
+// http://nic.tn
+تونس
+
+// xn--kpry57d ("Taiwan", Chinese, Traditional) : TW
+// http://www.twnic.net/english/dn/dn_07a.htm
+台灣
+
+// xn--kprw13d ("Taiwan", Chinese, Simplified) : TW
+// http://www.twnic.net/english/dn/dn_07a.htm
+台湾
+
+// xn--nnx388a ("Taiwan", Chinese, variant) : TW
+臺灣
+
+// xn--j1amh ("ukr", Cyrillic) : UA
+укр
+
+// xn--mgb2ddes ("AlYemen", Arabic) : YE
+اليمن
+
+// xxx : http://icmregistry.com
+xxx
+
+// ye : http://www.y.net.ye/services/domain_name.htm
+*.ye
+
+// za : https://www.zadna.org.za/content/page/domain-information/
+ac.za
+agric.za
+alt.za
+co.za
+edu.za
+gov.za
+grondar.za
+law.za
+mil.za
+net.za
+ngo.za
+nic.za
+nis.za
+nom.za
+org.za
+school.za
+tm.za
+web.za
+
+// zm : https://zicta.zm/
+// Submitted by registry
+zm
+ac.zm
+biz.zm
+co.zm
+com.zm
+edu.zm
+gov.zm
+info.zm
+mil.zm
+net.zm
+org.zm
+sch.zm
+
+// zw : https://www.potraz.gov.zw/
+// Confirmed by registry 2017-01-25
+zw
+ac.zw
+co.zw
+gov.zw
+mil.zw
+org.zw
+
+
+// newGTLDs
+
+// List of new gTLDs imported from https://www.icann.org/resources/registries/gtlds/v2/gtlds.json on 2020-11-30T20:26:10Z
+// This list is auto-generated, don't edit it manually.
+// aaa : 2015-02-26 American Automobile Association, Inc.
+aaa
+
+// aarp : 2015-05-21 AARP
+aarp
+
+// abarth : 2015-07-30 Fiat Chrysler Automobiles N.V.
+abarth
+
+// abb : 2014-10-24 ABB Ltd
+abb
+
+// abbott : 2014-07-24 Abbott Laboratories, Inc.
+abbott
+
+// abbvie : 2015-07-30 AbbVie Inc.
+abbvie
+
+// abc : 2015-07-30 Disney Enterprises, Inc.
+abc
+
+// able : 2015-06-25 Able Inc.
+able
+
+// abogado : 2014-04-24 Minds + Machines Group Limited
+abogado
+
+// abudhabi : 2015-07-30 Abu Dhabi Systems and Information Centre
+abudhabi
+
+// academy : 2013-11-07 Binky Moon, LLC
+academy
+
+// accenture : 2014-08-15 Accenture plc
+accenture
+
+// accountant : 2014-11-20 dot Accountant Limited
+accountant
+
+// accountants : 2014-03-20 Binky Moon, LLC
+accountants
+
+// aco : 2015-01-08 ACO Severin Ahlmann GmbH & Co. KG
+aco
+
+// actor : 2013-12-12 Dog Beach, LLC
+actor
+
+// adac : 2015-07-16 Allgemeiner Deutscher Automobil-Club e.V. (ADAC)
+adac
+
+// ads : 2014-12-04 Charleston Road Registry Inc.
+ads
+
+// adult : 2014-10-16 ICM Registry AD LLC
+adult
+
+// aeg : 2015-03-19 Aktiebolaget Electrolux
+aeg
+
+// aetna : 2015-05-21 Aetna Life Insurance Company
+aetna
+
+// afamilycompany : 2015-07-23 Johnson Shareholdings, Inc.
+afamilycompany
+
+// afl : 2014-10-02 Australian Football League
+afl
+
+// africa : 2014-03-24 ZA Central Registry NPC trading as Registry.Africa
+africa
+
+// agakhan : 2015-04-23 Fondation Aga Khan (Aga Khan Foundation)
+agakhan
+
+// agency : 2013-11-14 Binky Moon, LLC
+agency
+
+// aig : 2014-12-18 American International Group, Inc.
+aig
+
+// airbus : 2015-07-30 Airbus S.A.S.
+airbus
+
+// airforce : 2014-03-06 Dog Beach, LLC
+airforce
+
+// airtel : 2014-10-24 Bharti Airtel Limited
+airtel
+
+// akdn : 2015-04-23 Fondation Aga Khan (Aga Khan Foundation)
+akdn
+
+// alfaromeo : 2015-07-31 Fiat Chrysler Automobiles N.V.
+alfaromeo
+
+// alibaba : 2015-01-15 Alibaba Group Holding Limited
+alibaba
+
+// alipay : 2015-01-15 Alibaba Group Holding Limited
+alipay
+
+// allfinanz : 2014-07-03 Allfinanz Deutsche Vermögensberatung Aktiengesellschaft
+allfinanz
+
+// allstate : 2015-07-31 Allstate Fire and Casualty Insurance Company
+allstate
+
+// ally : 2015-06-18 Ally Financial Inc.
+ally
+
+// alsace : 2014-07-02 Region Grand Est
+alsace
+
+// alstom : 2015-07-30 ALSTOM
+alstom
+
+// amazon : 2019-12-19 Amazon Registry Services, Inc.
+amazon
+
+// americanexpress : 2015-07-31 American Express Travel Related Services Company, Inc.
+americanexpress
+
+// americanfamily : 2015-07-23 AmFam, Inc.
+americanfamily
+
+// amex : 2015-07-31 American Express Travel Related Services Company, Inc.
+amex
+
+// amfam : 2015-07-23 AmFam, Inc.
+amfam
+
+// amica : 2015-05-28 Amica Mutual Insurance Company
+amica
+
+// amsterdam : 2014-07-24 Gemeente Amsterdam
+amsterdam
+
+// analytics : 2014-12-18 Campus IP LLC
+analytics
+
+// android : 2014-08-07 Charleston Road Registry Inc.
+android
+
+// anquan : 2015-01-08 Beijing Qihu Keji Co., Ltd.
+anquan
+
+// anz : 2015-07-31 Australia and New Zealand Banking Group Limited
+anz
+
+// aol : 2015-09-17 Oath Inc.
+aol
+
+// apartments : 2014-12-11 Binky Moon, LLC
+apartments
+
+// app : 2015-05-14 Charleston Road Registry Inc.
+app
+
+// apple : 2015-05-14 Apple Inc.
+apple
+
+// aquarelle : 2014-07-24 Aquarelle.com
+aquarelle
+
+// arab : 2015-11-12 League of Arab States
+arab
+
+// aramco : 2014-11-20 Aramco Services Company
+aramco
+
+// archi : 2014-02-06 Afilias Limited
+archi
+
+// army : 2014-03-06 Dog Beach, LLC
+army
+
+// art : 2016-03-24 UK Creative Ideas Limited
+art
+
+// arte : 2014-12-11 Association Relative à la Télévision Européenne G.E.I.E.
+arte
+
+// asda : 2015-07-31 Wal-Mart Stores, Inc.
+asda
+
+// associates : 2014-03-06 Binky Moon, LLC
+associates
+
+// athleta : 2015-07-30 The Gap, Inc.
+athleta
+
+// attorney : 2014-03-20 Dog Beach, LLC
+attorney
+
+// auction : 2014-03-20 Dog Beach, LLC
+auction
+
+// audi : 2015-05-21 AUDI Aktiengesellschaft
+audi
+
+// audible : 2015-06-25 Amazon Registry Services, Inc.
+audible
+
+// audio : 2014-03-20 UNR Corp.
+audio
+
+// auspost : 2015-08-13 Australian Postal Corporation
+auspost
+
+// author : 2014-12-18 Amazon Registry Services, Inc.
+author
+
+// auto : 2014-11-13 XYZ.COM LLC
+auto
+
+// autos : 2014-01-09 XYZ.COM LLC
+autos
+
+// avianca : 2015-01-08 Avianca Holdings S.A.
+avianca
+
+// aws : 2015-06-25 Amazon Registry Services, Inc.
+aws
+
+// axa : 2013-12-19 AXA Group Operations SAS
+axa
+
+// azure : 2014-12-18 Microsoft Corporation
+azure
+
+// baby : 2015-04-09 XYZ.COM LLC
+baby
+
+// baidu : 2015-01-08 Baidu, Inc.
+baidu
+
+// banamex : 2015-07-30 Citigroup Inc.
+banamex
+
+// bananarepublic : 2015-07-31 The Gap, Inc.
+bananarepublic
+
+// band : 2014-06-12 Dog Beach, LLC
+band
+
+// bank : 2014-09-25 fTLD Registry Services LLC
+bank
+
+// bar : 2013-12-12 Punto 2012 Sociedad Anonima Promotora de Inversion de Capital Variable
+bar
+
+// barcelona : 2014-07-24 Municipi de Barcelona
+barcelona
+
+// barclaycard : 2014-11-20 Barclays Bank PLC
+barclaycard
+
+// barclays : 2014-11-20 Barclays Bank PLC
+barclays
+
+// barefoot : 2015-06-11 Gallo Vineyards, Inc.
+barefoot
+
+// bargains : 2013-11-14 Binky Moon, LLC
+bargains
+
+// baseball : 2015-10-29 MLB Advanced Media DH, LLC
+baseball
+
+// basketball : 2015-08-20 Fédération Internationale de Basketball (FIBA)
+basketball
+
+// bauhaus : 2014-04-17 Werkhaus GmbH
+bauhaus
+
+// bayern : 2014-01-23 Bayern Connect GmbH
+bayern
+
+// bbc : 2014-12-18 British Broadcasting Corporation
+bbc
+
+// bbt : 2015-07-23 BB&T Corporation
+bbt
+
+// bbva : 2014-10-02 BANCO BILBAO VIZCAYA ARGENTARIA, S.A.
+bbva
+
+// bcg : 2015-04-02 The Boston Consulting Group, Inc.
+bcg
+
+// bcn : 2014-07-24 Municipi de Barcelona
+bcn
+
+// beats : 2015-05-14 Beats Electronics, LLC
+beats
+
+// beauty : 2015-12-03 XYZ.COM LLC
+beauty
+
+// beer : 2014-01-09 Minds + Machines Group Limited
+beer
+
+// bentley : 2014-12-18 Bentley Motors Limited
+bentley
+
+// berlin : 2013-10-31 dotBERLIN GmbH & Co. KG
+berlin
+
+// best : 2013-12-19 BestTLD Pty Ltd
+best
+
+// bestbuy : 2015-07-31 BBY Solutions, Inc.
+bestbuy
+
+// bet : 2015-05-07 Afilias Limited
+bet
+
+// bharti : 2014-01-09 Bharti Enterprises (Holding) Private Limited
+bharti
+
+// bible : 2014-06-19 American Bible Society
+bible
+
+// bid : 2013-12-19 dot Bid Limited
+bid
+
+// bike : 2013-08-27 Binky Moon, LLC
+bike
+
+// bing : 2014-12-18 Microsoft Corporation
+bing
+
+// bingo : 2014-12-04 Binky Moon, LLC
+bingo
+
+// bio : 2014-03-06 Afilias Limited
+bio
+
+// black : 2014-01-16 Afilias Limited
+black
+
+// blackfriday : 2014-01-16 UNR Corp.
+blackfriday
+
+// blockbuster : 2015-07-30 Dish DBS Corporation
+blockbuster
+
+// blog : 2015-05-14 Knock Knock WHOIS There, LLC
+blog
+
+// bloomberg : 2014-07-17 Bloomberg IP Holdings LLC
+bloomberg
+
+// blue : 2013-11-07 Afilias Limited
+blue
+
+// bms : 2014-10-30 Bristol-Myers Squibb Company
+bms
+
+// bmw : 2014-01-09 Bayerische Motoren Werke Aktiengesellschaft
+bmw
+
+// bnpparibas : 2014-05-29 BNP Paribas
+bnpparibas
+
+// boats : 2014-12-04 XYZ.COM LLC
+boats
+
+// boehringer : 2015-07-09 Boehringer Ingelheim International GmbH
+boehringer
+
+// bofa : 2015-07-31 Bank of America Corporation
+bofa
+
+// bom : 2014-10-16 Núcleo de Informação e Coordenação do Ponto BR - NIC.br
+bom
+
+// bond : 2014-06-05 ShortDot SA
+bond
+
+// boo : 2014-01-30 Charleston Road Registry Inc.
+boo
+
+// book : 2015-08-27 Amazon Registry Services, Inc.
+book
+
+// booking : 2015-07-16 Booking.com B.V.
+booking
+
+// bosch : 2015-06-18 Robert Bosch GMBH
+bosch
+
+// bostik : 2015-05-28 Bostik SA
+bostik
+
+// boston : 2015-12-10 Boston TLD Management, LLC
+boston
+
+// bot : 2014-12-18 Amazon Registry Services, Inc.
+bot
+
+// boutique : 2013-11-14 Binky Moon, LLC
+boutique
+
+// box : 2015-11-12 Intercap Registry Inc.
+box
+
+// bradesco : 2014-12-18 Banco Bradesco S.A.
+bradesco
+
+// bridgestone : 2014-12-18 Bridgestone Corporation
+bridgestone
+
+// broadway : 2014-12-22 Celebrate Broadway, Inc.
+broadway
+
+// broker : 2014-12-11 Dotbroker Registry Limited
+broker
+
+// brother : 2015-01-29 Brother Industries, Ltd.
+brother
+
+// brussels : 2014-02-06 DNS.be vzw
+brussels
+
+// budapest : 2013-11-21 Minds + Machines Group Limited
+budapest
+
+// bugatti : 2015-07-23 Bugatti International SA
+bugatti
+
+// build : 2013-11-07 Plan Bee LLC
+build
+
+// builders : 2013-11-07 Binky Moon, LLC
+builders
+
+// business : 2013-11-07 Binky Moon, LLC
+business
+
+// buy : 2014-12-18 Amazon Registry Services, Inc.
+buy
+
+// buzz : 2013-10-02 DOTSTRATEGY CO.
+buzz
+
+// bzh : 2014-02-27 Association www.bzh
+bzh
+
+// cab : 2013-10-24 Binky Moon, LLC
+cab
+
+// cafe : 2015-02-11 Binky Moon, LLC
+cafe
+
+// cal : 2014-07-24 Charleston Road Registry Inc.
+cal
+
+// call : 2014-12-18 Amazon Registry Services, Inc.
+call
+
+// calvinklein : 2015-07-30 PVH gTLD Holdings LLC
+calvinklein
+
+// cam : 2016-04-21 AC Webconnecting Holding B.V.
+cam
+
+// camera : 2013-08-27 Binky Moon, LLC
+camera
+
+// camp : 2013-11-07 Binky Moon, LLC
+camp
+
+// cancerresearch : 2014-05-15 Australian Cancer Research Foundation
+cancerresearch
+
+// canon : 2014-09-12 Canon Inc.
+canon
+
+// capetown : 2014-03-24 ZA Central Registry NPC trading as ZA Central Registry
+capetown
+
+// capital : 2014-03-06 Binky Moon, LLC
+capital
+
+// capitalone : 2015-08-06 Capital One Financial Corporation
+capitalone
+
+// car : 2015-01-22 XYZ.COM LLC
+car
+
+// caravan : 2013-12-12 Caravan International, Inc.
+caravan
+
+// cards : 2013-12-05 Binky Moon, LLC
+cards
+
+// care : 2014-03-06 Binky Moon, LLC
+care
+
+// career : 2013-10-09 dotCareer LLC
+career
+
+// careers : 2013-10-02 Binky Moon, LLC
+careers
+
+// cars : 2014-11-13 XYZ.COM LLC
+cars
+
+// casa : 2013-11-21 Minds + Machines Group Limited
+casa
+
+// case : 2015-09-03 CNH Industrial N.V.
+case
+
+// caseih : 2015-09-03 CNH Industrial N.V.
+caseih
+
+// cash : 2014-03-06 Binky Moon, LLC
+cash
+
+// casino : 2014-12-18 Binky Moon, LLC
+casino
+
+// catering : 2013-12-05 Binky Moon, LLC
+catering
+
+// catholic : 2015-10-21 Pontificium Consilium de Comunicationibus Socialibus (PCCS) (Pontifical Council for Social Communication)
+catholic
+
+// cba : 2014-06-26 COMMONWEALTH BANK OF AUSTRALIA
+cba
+
+// cbn : 2014-08-22 The Christian Broadcasting Network, Inc.
+cbn
+
+// cbre : 2015-07-02 CBRE, Inc.
+cbre
+
+// cbs : 2015-08-06 CBS Domains Inc.
+cbs
+
+// ceb : 2015-04-09 The Corporate Executive Board Company
+ceb
+
+// center : 2013-11-07 Binky Moon, LLC
+center
+
+// ceo : 2013-11-07 CEOTLD Pty Ltd
+ceo
+
+// cern : 2014-06-05 European Organization for Nuclear Research ("CERN")
+cern
+
+// cfa : 2014-08-28 CFA Institute
+cfa
+
+// cfd : 2014-12-11 DotCFD Registry Limited
+cfd
+
+// chanel : 2015-04-09 Chanel International B.V.
+chanel
+
+// channel : 2014-05-08 Charleston Road Registry Inc.
+channel
+
+// charity : 2018-04-11 Binky Moon, LLC
+charity
+
+// chase : 2015-04-30 JPMorgan Chase Bank, National Association
+chase
+
+// chat : 2014-12-04 Binky Moon, LLC
+chat
+
+// cheap : 2013-11-14 Binky Moon, LLC
+cheap
+
+// chintai : 2015-06-11 CHINTAI Corporation
+chintai
+
+// christmas : 2013-11-21 UNR Corp.
+christmas
+
+// chrome : 2014-07-24 Charleston Road Registry Inc.
+chrome
+
+// church : 2014-02-06 Binky Moon, LLC
+church
+
+// cipriani : 2015-02-19 Hotel Cipriani Srl
+cipriani
+
+// circle : 2014-12-18 Amazon Registry Services, Inc.
+circle
+
+// cisco : 2014-12-22 Cisco Technology, Inc.
+cisco
+
+// citadel : 2015-07-23 Citadel Domain LLC
+citadel
+
+// citi : 2015-07-30 Citigroup Inc.
+citi
+
+// citic : 2014-01-09 CITIC Group Corporation
+citic
+
+// city : 2014-05-29 Binky Moon, LLC
+city
+
+// cityeats : 2014-12-11 Lifestyle Domain Holdings, Inc.
+cityeats
+
+// claims : 2014-03-20 Binky Moon, LLC
+claims
+
+// cleaning : 2013-12-05 Binky Moon, LLC
+cleaning
+
+// click : 2014-06-05 UNR Corp.
+click
+
+// clinic : 2014-03-20 Binky Moon, LLC
+clinic
+
+// clinique : 2015-10-01 The Estée Lauder Companies Inc.
+clinique
+
+// clothing : 2013-08-27 Binky Moon, LLC
+clothing
+
+// cloud : 2015-04-16 Aruba PEC S.p.A.
+cloud
+
+// club : 2013-11-08 .CLUB DOMAINS, LLC
+club
+
+// clubmed : 2015-06-25 Club Méditerranée S.A.
+clubmed
+
+// coach : 2014-10-09 Binky Moon, LLC
+coach
+
+// codes : 2013-10-31 Binky Moon, LLC
+codes
+
+// coffee : 2013-10-17 Binky Moon, LLC
+coffee
+
+// college : 2014-01-16 XYZ.COM LLC
+college
+
+// cologne : 2014-02-05 dotKoeln GmbH
+cologne
+
+// comcast : 2015-07-23 Comcast IP Holdings I, LLC
+comcast
+
+// commbank : 2014-06-26 COMMONWEALTH BANK OF AUSTRALIA
+commbank
+
+// community : 2013-12-05 Binky Moon, LLC
+community
+
+// company : 2013-11-07 Binky Moon, LLC
+company
+
+// compare : 2015-10-08 Registry Services, LLC
+compare
+
+// computer : 2013-10-24 Binky Moon, LLC
+computer
+
+// comsec : 2015-01-08 VeriSign, Inc.
+comsec
+
+// condos : 2013-12-05 Binky Moon, LLC
+condos
+
+// construction : 2013-09-16 Binky Moon, LLC
+construction
+
+// consulting : 2013-12-05 Dog Beach, LLC
+consulting
+
+// contact : 2015-01-08 Dog Beach, LLC
+contact
+
+// contractors : 2013-09-10 Binky Moon, LLC
+contractors
+
+// cooking : 2013-11-21 Minds + Machines Group Limited
+cooking
+
+// cookingchannel : 2015-07-02 Lifestyle Domain Holdings, Inc.
+cookingchannel
+
+// cool : 2013-11-14 Binky Moon, LLC
+cool
+
+// corsica : 2014-09-25 Collectivité de Corse
+corsica
+
+// country : 2013-12-19 DotCountry LLC
+country
+
+// coupon : 2015-02-26 Amazon Registry Services, Inc.
+coupon
+
+// coupons : 2015-03-26 Binky Moon, LLC
+coupons
+
+// courses : 2014-12-04 OPEN UNIVERSITIES AUSTRALIA PTY LTD
+courses
+
+// cpa : 2019-06-10 American Institute of Certified Public Accountants
+cpa
+
+// credit : 2014-03-20 Binky Moon, LLC
+credit
+
+// creditcard : 2014-03-20 Binky Moon, LLC
+creditcard
+
+// creditunion : 2015-01-22 CUNA Performance Resources, LLC
+creditunion
+
+// cricket : 2014-10-09 dot Cricket Limited
+cricket
+
+// crown : 2014-10-24 Crown Equipment Corporation
+crown
+
+// crs : 2014-04-03 Federated Co-operatives Limited
+crs
+
+// cruise : 2015-12-10 Viking River Cruises (Bermuda) Ltd.
+cruise
+
+// cruises : 2013-12-05 Binky Moon, LLC
+cruises
+
+// csc : 2014-09-25 Alliance-One Services, Inc.
+csc
+
+// cuisinella : 2014-04-03 SCHMIDT GROUPE S.A.S.
+cuisinella
+
+// cymru : 2014-05-08 Nominet UK
+cymru
+
+// cyou : 2015-01-22 ShortDot SA
+cyou
+
+// dabur : 2014-02-06 Dabur India Limited
+dabur
+
+// dad : 2014-01-23 Charleston Road Registry Inc.
+dad
+
+// dance : 2013-10-24 Dog Beach, LLC
+dance
+
+// data : 2016-06-02 Dish DBS Corporation
+data
+
+// date : 2014-11-20 dot Date Limited
+date
+
+// dating : 2013-12-05 Binky Moon, LLC
+dating
+
+// datsun : 2014-03-27 NISSAN MOTOR CO., LTD.
+datsun
+
+// day : 2014-01-30 Charleston Road Registry Inc.
+day
+
+// dclk : 2014-11-20 Charleston Road Registry Inc.
+dclk
+
+// dds : 2015-05-07 Minds + Machines Group Limited
+dds
+
+// deal : 2015-06-25 Amazon Registry Services, Inc.
+deal
+
+// dealer : 2014-12-22 Intercap Registry Inc.
+dealer
+
+// deals : 2014-05-22 Binky Moon, LLC
+deals
+
+// degree : 2014-03-06 Dog Beach, LLC
+degree
+
+// delivery : 2014-09-11 Binky Moon, LLC
+delivery
+
+// dell : 2014-10-24 Dell Inc.
+dell
+
+// deloitte : 2015-07-31 Deloitte Touche Tohmatsu
+deloitte
+
+// delta : 2015-02-19 Delta Air Lines, Inc.
+delta
+
+// democrat : 2013-10-24 Dog Beach, LLC
+democrat
+
+// dental : 2014-03-20 Binky Moon, LLC
+dental
+
+// dentist : 2014-03-20 Dog Beach, LLC
+dentist
+
+// desi : 2013-11-14 Desi Networks LLC
+desi
+
+// design : 2014-11-07 Top Level Design, LLC
+design
+
+// dev : 2014-10-16 Charleston Road Registry Inc.
+dev
+
+// dhl : 2015-07-23 Deutsche Post AG
+dhl
+
+// diamonds : 2013-09-22 Binky Moon, LLC
+diamonds
+
+// diet : 2014-06-26 UNR Corp.
+diet
+
+// digital : 2014-03-06 Binky Moon, LLC
+digital
+
+// direct : 2014-04-10 Binky Moon, LLC
+direct
+
+// directory : 2013-09-20 Binky Moon, LLC
+directory
+
+// discount : 2014-03-06 Binky Moon, LLC
+discount
+
+// discover : 2015-07-23 Discover Financial Services
+discover
+
+// dish : 2015-07-30 Dish DBS Corporation
+dish
+
+// diy : 2015-11-05 Lifestyle Domain Holdings, Inc.
+diy
+
+// dnp : 2013-12-13 Dai Nippon Printing Co., Ltd.
+dnp
+
+// docs : 2014-10-16 Charleston Road Registry Inc.
+docs
+
+// doctor : 2016-06-02 Binky Moon, LLC
+doctor
+
+// dog : 2014-12-04 Binky Moon, LLC
+dog
+
+// domains : 2013-10-17 Binky Moon, LLC
+domains
+
+// dot : 2015-05-21 Dish DBS Corporation
+dot
+
+// download : 2014-11-20 dot Support Limited
+download
+
+// drive : 2015-03-05 Charleston Road Registry Inc.
+drive
+
+// dtv : 2015-06-04 Dish DBS Corporation
+dtv
+
+// dubai : 2015-01-01 Dubai Smart Government Department
+dubai
+
+// duck : 2015-07-23 Johnson Shareholdings, Inc.
+duck
+
+// dunlop : 2015-07-02 The Goodyear Tire & Rubber Company
+dunlop
+
+// dupont : 2015-06-25 E. I. du Pont de Nemours and Company
+dupont
+
+// durban : 2014-03-24 ZA Central Registry NPC trading as ZA Central Registry
+durban
+
+// dvag : 2014-06-23 Deutsche Vermögensberatung Aktiengesellschaft DVAG
+dvag
+
+// dvr : 2016-05-26 DISH Technologies L.L.C.
+dvr
+
+// earth : 2014-12-04 Interlink Co., Ltd.
+earth
+
+// eat : 2014-01-23 Charleston Road Registry Inc.
+eat
+
+// eco : 2016-07-08 Big Room Inc.
+eco
+
+// edeka : 2014-12-18 EDEKA Verband kaufmännischer Genossenschaften e.V.
+edeka
+
+// education : 2013-11-07 Binky Moon, LLC
+education
+
+// email : 2013-10-31 Binky Moon, LLC
+email
+
+// emerck : 2014-04-03 Merck KGaA
+emerck
+
+// energy : 2014-09-11 Binky Moon, LLC
+energy
+
+// engineer : 2014-03-06 Dog Beach, LLC
+engineer
+
+// engineering : 2014-03-06 Binky Moon, LLC
+engineering
+
+// enterprises : 2013-09-20 Binky Moon, LLC
+enterprises
+
+// epson : 2014-12-04 Seiko Epson Corporation
+epson
+
+// equipment : 2013-08-27 Binky Moon, LLC
+equipment
+
+// ericsson : 2015-07-09 Telefonaktiebolaget L M Ericsson
+ericsson
+
+// erni : 2014-04-03 ERNI Group Holding AG
+erni
+
+// esq : 2014-05-08 Charleston Road Registry Inc.
+esq
+
+// estate : 2013-08-27 Binky Moon, LLC
+estate
+
+// etisalat : 2015-09-03 Emirates Telecommunications Corporation (trading as Etisalat)
+etisalat
+
+// eurovision : 2014-04-24 European Broadcasting Union (EBU)
+eurovision
+
+// eus : 2013-12-12 Puntueus Fundazioa
+eus
+
+// events : 2013-12-05 Binky Moon, LLC
+events
+
+// exchange : 2014-03-06 Binky Moon, LLC
+exchange
+
+// expert : 2013-11-21 Binky Moon, LLC
+expert
+
+// exposed : 2013-12-05 Binky Moon, LLC
+exposed
+
+// express : 2015-02-11 Binky Moon, LLC
+express
+
+// extraspace : 2015-05-14 Extra Space Storage LLC
+extraspace
+
+// fage : 2014-12-18 Fage International S.A.
+fage
+
+// fail : 2014-03-06 Binky Moon, LLC
+fail
+
+// fairwinds : 2014-11-13 FairWinds Partners, LLC
+fairwinds
+
+// faith : 2014-11-20 dot Faith Limited
+faith
+
+// family : 2015-04-02 Dog Beach, LLC
+family
+
+// fan : 2014-03-06 Dog Beach, LLC
+fan
+
+// fans : 2014-11-07 ZDNS International Limited
+fans
+
+// farm : 2013-11-07 Binky Moon, LLC
+farm
+
+// farmers : 2015-07-09 Farmers Insurance Exchange
+farmers
+
+// fashion : 2014-07-03 Minds + Machines Group Limited
+fashion
+
+// fast : 2014-12-18 Amazon Registry Services, Inc.
+fast
+
+// fedex : 2015-08-06 Federal Express Corporation
+fedex
+
+// feedback : 2013-12-19 Top Level Spectrum, Inc.
+feedback
+
+// ferrari : 2015-07-31 Fiat Chrysler Automobiles N.V.
+ferrari
+
+// ferrero : 2014-12-18 Ferrero Trading Lux S.A.
+ferrero
+
+// fiat : 2015-07-31 Fiat Chrysler Automobiles N.V.
+fiat
+
+// fidelity : 2015-07-30 Fidelity Brokerage Services LLC
+fidelity
+
+// fido : 2015-08-06 Rogers Communications Canada Inc.
+fido
+
+// film : 2015-01-08 Motion Picture Domain Registry Pty Ltd
+film
+
+// final : 2014-10-16 Núcleo de Informação e Coordenação do Ponto BR - NIC.br
+final
+
+// finance : 2014-03-20 Binky Moon, LLC
+finance
+
+// financial : 2014-03-06 Binky Moon, LLC
+financial
+
+// fire : 2015-06-25 Amazon Registry Services, Inc.
+fire
+
+// firestone : 2014-12-18 Bridgestone Licensing Services, Inc
+firestone
+
+// firmdale : 2014-03-27 Firmdale Holdings Limited
+firmdale
+
+// fish : 2013-12-12 Binky Moon, LLC
+fish
+
+// fishing : 2013-11-21 Minds + Machines Group Limited
+fishing
+
+// fit : 2014-11-07 Minds + Machines Group Limited
+fit
+
+// fitness : 2014-03-06 Binky Moon, LLC
+fitness
+
+// flickr : 2015-04-02 Flickr, Inc.
+flickr
+
+// flights : 2013-12-05 Binky Moon, LLC
+flights
+
+// flir : 2015-07-23 FLIR Systems, Inc.
+flir
+
+// florist : 2013-11-07 Binky Moon, LLC
+florist
+
+// flowers : 2014-10-09 UNR Corp.
+flowers
+
+// fly : 2014-05-08 Charleston Road Registry Inc.
+fly
+
+// foo : 2014-01-23 Charleston Road Registry Inc.
+foo
+
+// food : 2016-04-21 Lifestyle Domain Holdings, Inc.
+food
+
+// foodnetwork : 2015-07-02 Lifestyle Domain Holdings, Inc.
+foodnetwork
+
+// football : 2014-12-18 Binky Moon, LLC
+football
+
+// ford : 2014-11-13 Ford Motor Company
+ford
+
+// forex : 2014-12-11 Dotforex Registry Limited
+forex
+
+// forsale : 2014-05-22 Dog Beach, LLC
+forsale
+
+// forum : 2015-04-02 Fegistry, LLC
+forum
+
+// foundation : 2013-12-05 Binky Moon, LLC
+foundation
+
+// fox : 2015-09-11 FOX Registry, LLC
+fox
+
+// free : 2015-12-10 Amazon Registry Services, Inc.
+free
+
+// fresenius : 2015-07-30 Fresenius Immobilien-Verwaltungs-GmbH
+fresenius
+
+// frl : 2014-05-15 FRLregistry B.V.
+frl
+
+// frogans : 2013-12-19 OP3FT
+frogans
+
+// frontdoor : 2015-07-02 Lifestyle Domain Holdings, Inc.
+frontdoor
+
+// frontier : 2015-02-05 Frontier Communications Corporation
+frontier
+
+// ftr : 2015-07-16 Frontier Communications Corporation
+ftr
+
+// fujitsu : 2015-07-30 Fujitsu Limited
+fujitsu
+
+// fujixerox : 2015-07-23 Xerox DNHC LLC
+fujixerox
+
+// fun : 2016-01-14 DotSpace Inc.
+fun
+
+// fund : 2014-03-20 Binky Moon, LLC
+fund
+
+// furniture : 2014-03-20 Binky Moon, LLC
+furniture
+
+// futbol : 2013-09-20 Dog Beach, LLC
+futbol
+
+// fyi : 2015-04-02 Binky Moon, LLC
+fyi
+
+// gal : 2013-11-07 Asociación puntoGAL
+gal
+
+// gallery : 2013-09-13 Binky Moon, LLC
+gallery
+
+// gallo : 2015-06-11 Gallo Vineyards, Inc.
+gallo
+
+// gallup : 2015-02-19 Gallup, Inc.
+gallup
+
+// game : 2015-05-28 UNR Corp.
+game
+
+// games : 2015-05-28 Dog Beach, LLC
+games
+
+// gap : 2015-07-31 The Gap, Inc.
+gap
+
+// garden : 2014-06-26 Minds + Machines Group Limited
+garden
+
+// gay : 2019-05-23 Top Level Design, LLC
+gay
+
+// gbiz : 2014-07-17 Charleston Road Registry Inc.
+gbiz
+
+// gdn : 2014-07-31 Joint Stock Company "Navigation-information systems"
+gdn
+
+// gea : 2014-12-04 GEA Group Aktiengesellschaft
+gea
+
+// gent : 2014-01-23 COMBELL NV
+gent
+
+// genting : 2015-03-12 Resorts World Inc Pte. Ltd.
+genting
+
+// george : 2015-07-31 Wal-Mart Stores, Inc.
+george
+
+// ggee : 2014-01-09 GMO Internet, Inc.
+ggee
+
+// gift : 2013-10-17 DotGift, LLC
+gift
+
+// gifts : 2014-07-03 Binky Moon, LLC
+gifts
+
+// gives : 2014-03-06 Dog Beach, LLC
+gives
+
+// giving : 2014-11-13 Giving Limited
+giving
+
+// glade : 2015-07-23 Johnson Shareholdings, Inc.
+glade
+
+// glass : 2013-11-07 Binky Moon, LLC
+glass
+
+// gle : 2014-07-24 Charleston Road Registry Inc.
+gle
+
+// global : 2014-04-17 Dot Global Domain Registry Limited
+global
+
+// globo : 2013-12-19 Globo Comunicação e Participações S.A
+globo
+
+// gmail : 2014-05-01 Charleston Road Registry Inc.
+gmail
+
+// gmbh : 2016-01-29 Binky Moon, LLC
+gmbh
+
+// gmo : 2014-01-09 GMO Internet, Inc.
+gmo
+
+// gmx : 2014-04-24 1&1 Mail & Media GmbH
+gmx
+
+// godaddy : 2015-07-23 Go Daddy East, LLC
+godaddy
+
+// gold : 2015-01-22 Binky Moon, LLC
+gold
+
+// goldpoint : 2014-11-20 YODOBASHI CAMERA CO.,LTD.
+goldpoint
+
+// golf : 2014-12-18 Binky Moon, LLC
+golf
+
+// goo : 2014-12-18 NTT Resonant Inc.
+goo
+
+// goodyear : 2015-07-02 The Goodyear Tire & Rubber Company
+goodyear
+
+// goog : 2014-11-20 Charleston Road Registry Inc.
+goog
+
+// google : 2014-07-24 Charleston Road Registry Inc.
+google
+
+// gop : 2014-01-16 Republican State Leadership Committee, Inc.
+gop
+
+// got : 2014-12-18 Amazon Registry Services, Inc.
+got
+
+// grainger : 2015-05-07 Grainger Registry Services, LLC
+grainger
+
+// graphics : 2013-09-13 Binky Moon, LLC
+graphics
+
+// gratis : 2014-03-20 Binky Moon, LLC
+gratis
+
+// green : 2014-05-08 Afilias Limited
+green
+
+// gripe : 2014-03-06 Binky Moon, LLC
+gripe
+
+// grocery : 2016-06-16 Wal-Mart Stores, Inc.
+grocery
+
+// group : 2014-08-15 Binky Moon, LLC
+group
+
+// guardian : 2015-07-30 The Guardian Life Insurance Company of America
+guardian
+
+// gucci : 2014-11-13 Guccio Gucci S.p.a.
+gucci
+
+// guge : 2014-08-28 Charleston Road Registry Inc.
+guge
+
+// guide : 2013-09-13 Binky Moon, LLC
+guide
+
+// guitars : 2013-11-14 UNR Corp.
+guitars
+
+// guru : 2013-08-27 Binky Moon, LLC
+guru
+
+// hair : 2015-12-03 XYZ.COM LLC
+hair
+
+// hamburg : 2014-02-20 Hamburg Top-Level-Domain GmbH
+hamburg
+
+// hangout : 2014-11-13 Charleston Road Registry Inc.
+hangout
+
+// haus : 2013-12-05 Dog Beach, LLC
+haus
+
+// hbo : 2015-07-30 HBO Registry Services, Inc.
+hbo
+
+// hdfc : 2015-07-30 HOUSING DEVELOPMENT FINANCE CORPORATION LIMITED
+hdfc
+
+// hdfcbank : 2015-02-12 HDFC Bank Limited
+hdfcbank
+
+// health : 2015-02-11 DotHealth, LLC
+health
+
+// healthcare : 2014-06-12 Binky Moon, LLC
+healthcare
+
+// help : 2014-06-26 UNR Corp.
+help
+
+// helsinki : 2015-02-05 City of Helsinki
+helsinki
+
+// here : 2014-02-06 Charleston Road Registry Inc.
+here
+
+// hermes : 2014-07-10 HERMES INTERNATIONAL
+hermes
+
+// hgtv : 2015-07-02 Lifestyle Domain Holdings, Inc.
+hgtv
+
+// hiphop : 2014-03-06 UNR Corp.
+hiphop
+
+// hisamitsu : 2015-07-16 Hisamitsu Pharmaceutical Co.,Inc.
+hisamitsu
+
+// hitachi : 2014-10-31 Hitachi, Ltd.
+hitachi
+
+// hiv : 2014-03-13 UNR Corp.
+hiv
+
+// hkt : 2015-05-14 PCCW-HKT DataCom Services Limited
+hkt
+
+// hockey : 2015-03-19 Binky Moon, LLC
+hockey
+
+// holdings : 2013-08-27 Binky Moon, LLC
+holdings
+
+// holiday : 2013-11-07 Binky Moon, LLC
+holiday
+
+// homedepot : 2015-04-02 Home Depot Product Authority, LLC
+homedepot
+
+// homegoods : 2015-07-16 The TJX Companies, Inc.
+homegoods
+
+// homes : 2014-01-09 XYZ.COM LLC
+homes
+
+// homesense : 2015-07-16 The TJX Companies, Inc.
+homesense
+
+// honda : 2014-12-18 Honda Motor Co., Ltd.
+honda
+
+// horse : 2013-11-21 Minds + Machines Group Limited
+horse
+
+// hospital : 2016-10-20 Binky Moon, LLC
+hospital
+
+// host : 2014-04-17 DotHost Inc.
+host
+
+// hosting : 2014-05-29 UNR Corp.
+hosting
+
+// hot : 2015-08-27 Amazon Registry Services, Inc.
+hot
+
+// hoteles : 2015-03-05 Travel Reservations SRL
+hoteles
+
+// hotels : 2016-04-07 Booking.com B.V.
+hotels
+
+// hotmail : 2014-12-18 Microsoft Corporation
+hotmail
+
+// house : 2013-11-07 Binky Moon, LLC
+house
+
+// how : 2014-01-23 Charleston Road Registry Inc.
+how
+
+// hsbc : 2014-10-24 HSBC Global Services (UK) Limited
+hsbc
+
+// hughes : 2015-07-30 Hughes Satellite Systems Corporation
+hughes
+
+// hyatt : 2015-07-30 Hyatt GTLD, L.L.C.
+hyatt
+
+// hyundai : 2015-07-09 Hyundai Motor Company
+hyundai
+
+// ibm : 2014-07-31 International Business Machines Corporation
+ibm
+
+// icbc : 2015-02-19 Industrial and Commercial Bank of China Limited
+icbc
+
+// ice : 2014-10-30 IntercontinentalExchange, Inc.
+ice
+
+// icu : 2015-01-08 ShortDot SA
+icu
+
+// ieee : 2015-07-23 IEEE Global LLC
+ieee
+
+// ifm : 2014-01-30 ifm electronic gmbh
+ifm
+
+// ikano : 2015-07-09 Ikano S.A.
+ikano
+
+// imamat : 2015-08-06 Fondation Aga Khan (Aga Khan Foundation)
+imamat
+
+// imdb : 2015-06-25 Amazon Registry Services, Inc.
+imdb
+
+// immo : 2014-07-10 Binky Moon, LLC
+immo
+
+// immobilien : 2013-11-07 Dog Beach, LLC
+immobilien
+
+// inc : 2018-03-10 Intercap Registry Inc.
+inc
+
+// industries : 2013-12-05 Binky Moon, LLC
+industries
+
+// infiniti : 2014-03-27 NISSAN MOTOR CO., LTD.
+infiniti
+
+// ing : 2014-01-23 Charleston Road Registry Inc.
+ing
+
+// ink : 2013-12-05 Top Level Design, LLC
+ink
+
+// institute : 2013-11-07 Binky Moon, LLC
+institute
+
+// insurance : 2015-02-19 fTLD Registry Services LLC
+insurance
+
+// insure : 2014-03-20 Binky Moon, LLC
+insure
+
+// international : 2013-11-07 Binky Moon, LLC
+international
+
+// intuit : 2015-07-30 Intuit Administrative Services, Inc.
+intuit
+
+// investments : 2014-03-20 Binky Moon, LLC
+investments
+
+// ipiranga : 2014-08-28 Ipiranga Produtos de Petroleo S.A.
+ipiranga
+
+// irish : 2014-08-07 Binky Moon, LLC
+irish
+
+// ismaili : 2015-08-06 Fondation Aga Khan (Aga Khan Foundation)
+ismaili
+
+// ist : 2014-08-28 Istanbul Metropolitan Municipality
+ist
+
+// istanbul : 2014-08-28 Istanbul Metropolitan Municipality
+istanbul
+
+// itau : 2014-10-02 Itau Unibanco Holding S.A.
+itau
+
+// itv : 2015-07-09 ITV Services Limited
+itv
+
+// iveco : 2015-09-03 CNH Industrial N.V.
+iveco
+
+// jaguar : 2014-11-13 Jaguar Land Rover Ltd
+jaguar
+
+// java : 2014-06-19 Oracle Corporation
+java
+
+// jcb : 2014-11-20 JCB Co., Ltd.
+jcb
+
+// jeep : 2015-07-30 FCA US LLC.
+jeep
+
+// jetzt : 2014-01-09 Binky Moon, LLC
+jetzt
+
+// jewelry : 2015-03-05 Binky Moon, LLC
+jewelry
+
+// jio : 2015-04-02 Reliance Industries Limited
+jio
+
+// jll : 2015-04-02 Jones Lang LaSalle Incorporated
+jll
+
+// jmp : 2015-03-26 Matrix IP LLC
+jmp
+
+// jnj : 2015-06-18 Johnson & Johnson Services, Inc.
+jnj
+
+// joburg : 2014-03-24 ZA Central Registry NPC trading as ZA Central Registry
+joburg
+
+// jot : 2014-12-18 Amazon Registry Services, Inc.
+jot
+
+// joy : 2014-12-18 Amazon Registry Services, Inc.
+joy
+
+// jpmorgan : 2015-04-30 JPMorgan Chase Bank, National Association
+jpmorgan
+
+// jprs : 2014-09-18 Japan Registry Services Co., Ltd.
+jprs
+
+// juegos : 2014-03-20 UNR Corp.
+juegos
+
+// juniper : 2015-07-30 JUNIPER NETWORKS, INC.
+juniper
+
+// kaufen : 2013-11-07 Dog Beach, LLC
+kaufen
+
+// kddi : 2014-09-12 KDDI CORPORATION
+kddi
+
+// kerryhotels : 2015-04-30 Kerry Trading Co. Limited
+kerryhotels
+
+// kerrylogistics : 2015-04-09 Kerry Trading Co. Limited
+kerrylogistics
+
+// kerryproperties : 2015-04-09 Kerry Trading Co. Limited
+kerryproperties
+
+// kfh : 2014-12-04 Kuwait Finance House
+kfh
+
+// kia : 2015-07-09 KIA MOTORS CORPORATION
+kia
+
+// kim : 2013-09-23 Afilias Limited
+kim
+
+// kinder : 2014-11-07 Ferrero Trading Lux S.A.
+kinder
+
+// kindle : 2015-06-25 Amazon Registry Services, Inc.
+kindle
+
+// kitchen : 2013-09-20 Binky Moon, LLC
+kitchen
+
+// kiwi : 2013-09-20 DOT KIWI LIMITED
+kiwi
+
+// koeln : 2014-01-09 dotKoeln GmbH
+koeln
+
+// komatsu : 2015-01-08 Komatsu Ltd.
+komatsu
+
+// kosher : 2015-08-20 Kosher Marketing Assets LLC
+kosher
+
+// kpmg : 2015-04-23 KPMG International Cooperative (KPMG International Genossenschaft)
+kpmg
+
+// kpn : 2015-01-08 Koninklijke KPN N.V.
+kpn
+
+// krd : 2013-12-05 KRG Department of Information Technology
+krd
+
+// kred : 2013-12-19 KredTLD Pty Ltd
+kred
+
+// kuokgroup : 2015-04-09 Kerry Trading Co. Limited
+kuokgroup
+
+// kyoto : 2014-11-07 Academic Institution: Kyoto Jyoho Gakuen
+kyoto
+
+// lacaixa : 2014-01-09 Fundación Bancaria Caixa d’Estalvis i Pensions de Barcelona, “la Caixa”
+lacaixa
+
+// lamborghini : 2015-06-04 Automobili Lamborghini S.p.A.
+lamborghini
+
+// lamer : 2015-10-01 The Estée Lauder Companies Inc.
+lamer
+
+// lancaster : 2015-02-12 LANCASTER
+lancaster
+
+// lancia : 2015-07-31 Fiat Chrysler Automobiles N.V.
+lancia
+
+// land : 2013-09-10 Binky Moon, LLC
+land
+
+// landrover : 2014-11-13 Jaguar Land Rover Ltd
+landrover
+
+// lanxess : 2015-07-30 LANXESS Corporation
+lanxess
+
+// lasalle : 2015-04-02 Jones Lang LaSalle Incorporated
+lasalle
+
+// lat : 2014-10-16 ECOM-LAC Federaciòn de Latinoamèrica y el Caribe para Internet y el Comercio Electrònico
+lat
+
+// latino : 2015-07-30 Dish DBS Corporation
+latino
+
+// latrobe : 2014-06-16 La Trobe University
+latrobe
+
+// law : 2015-01-22 LW TLD Limited
+law
+
+// lawyer : 2014-03-20 Dog Beach, LLC
+lawyer
+
+// lds : 2014-03-20 IRI Domain Management, LLC
+lds
+
+// lease : 2014-03-06 Binky Moon, LLC
+lease
+
+// leclerc : 2014-08-07 A.C.D. LEC Association des Centres Distributeurs Edouard Leclerc
+leclerc
+
+// lefrak : 2015-07-16 LeFrak Organization, Inc.
+lefrak
+
+// legal : 2014-10-16 Binky Moon, LLC
+legal
+
+// lego : 2015-07-16 LEGO Juris A/S
+lego
+
+// lexus : 2015-04-23 TOYOTA MOTOR CORPORATION
+lexus
+
+// lgbt : 2014-05-08 Afilias Limited
+lgbt
+
+// lidl : 2014-09-18 Schwarz Domains und Services GmbH & Co. KG
+lidl
+
+// life : 2014-02-06 Binky Moon, LLC
+life
+
+// lifeinsurance : 2015-01-15 American Council of Life Insurers
+lifeinsurance
+
+// lifestyle : 2014-12-11 Lifestyle Domain Holdings, Inc.
+lifestyle
+
+// lighting : 2013-08-27 Binky Moon, LLC
+lighting
+
+// like : 2014-12-18 Amazon Registry Services, Inc.
+like
+
+// lilly : 2015-07-31 Eli Lilly and Company
+lilly
+
+// limited : 2014-03-06 Binky Moon, LLC
+limited
+
+// limo : 2013-10-17 Binky Moon, LLC
+limo
+
+// lincoln : 2014-11-13 Ford Motor Company
+lincoln
+
+// linde : 2014-12-04 Linde Aktiengesellschaft
+linde
+
+// link : 2013-11-14 UNR Corp.
+link
+
+// lipsy : 2015-06-25 Lipsy Ltd
+lipsy
+
+// live : 2014-12-04 Dog Beach, LLC
+live
+
+// living : 2015-07-30 Lifestyle Domain Holdings, Inc.
+living
+
+// lixil : 2015-03-19 LIXIL Group Corporation
+lixil
+
+// llc : 2017-12-14 Afilias Limited
+llc
+
+// llp : 2019-08-26 UNR Corp.
+llp
+
+// loan : 2014-11-20 dot Loan Limited
+loan
+
+// loans : 2014-03-20 Binky Moon, LLC
+loans
+
+// locker : 2015-06-04 Dish DBS Corporation
+locker
+
+// locus : 2015-06-25 Locus Analytics LLC
+locus
+
+// loft : 2015-07-30 Annco, Inc.
+loft
+
+// lol : 2015-01-30 UNR Corp.
+lol
+
+// london : 2013-11-14 Dot London Domains Limited
+london
+
+// lotte : 2014-11-07 Lotte Holdings Co., Ltd.
+lotte
+
+// lotto : 2014-04-10 Afilias Limited
+lotto
+
+// love : 2014-12-22 Merchant Law Group LLP
+love
+
+// lpl : 2015-07-30 LPL Holdings, Inc.
+lpl
+
+// lplfinancial : 2015-07-30 LPL Holdings, Inc.
+lplfinancial
+
+// ltd : 2014-09-25 Binky Moon, LLC
+ltd
+
+// ltda : 2014-04-17 InterNetX, Corp
+ltda
+
+// lundbeck : 2015-08-06 H. Lundbeck A/S
+lundbeck
+
+// lupin : 2014-11-07 LUPIN LIMITED
+lupin
+
+// luxe : 2014-01-09 Minds + Machines Group Limited
+luxe
+
+// luxury : 2013-10-17 Luxury Partners, LLC
+luxury
+
+// macys : 2015-07-31 Macys, Inc.
+macys
+
+// madrid : 2014-05-01 Comunidad de Madrid
+madrid
+
+// maif : 2014-10-02 Mutuelle Assurance Instituteur France (MAIF)
+maif
+
+// maison : 2013-12-05 Binky Moon, LLC
+maison
+
+// makeup : 2015-01-15 XYZ.COM LLC
+makeup
+
+// man : 2014-12-04 MAN SE
+man
+
+// management : 2013-11-07 Binky Moon, LLC
+management
+
+// mango : 2013-10-24 PUNTO FA S.L.
+mango
+
+// map : 2016-06-09 Charleston Road Registry Inc.
+map
+
+// market : 2014-03-06 Dog Beach, LLC
+market
+
+// marketing : 2013-11-07 Binky Moon, LLC
+marketing
+
+// markets : 2014-12-11 Dotmarkets Registry Limited
+markets
+
+// marriott : 2014-10-09 Marriott Worldwide Corporation
+marriott
+
+// marshalls : 2015-07-16 The TJX Companies, Inc.
+marshalls
+
+// maserati : 2015-07-31 Fiat Chrysler Automobiles N.V.
+maserati
+
+// mattel : 2015-08-06 Mattel Sites, Inc.
+mattel
+
+// mba : 2015-04-02 Binky Moon, LLC
+mba
+
+// mckinsey : 2015-07-31 McKinsey Holdings, Inc.
+mckinsey
+
+// med : 2015-08-06 Medistry LLC
+med
+
+// media : 2014-03-06 Binky Moon, LLC
+media
+
+// meet : 2014-01-16 Charleston Road Registry Inc.
+meet
+
+// melbourne : 2014-05-29 The Crown in right of the State of Victoria, represented by its Department of State Development, Business and Innovation
+melbourne
+
+// meme : 2014-01-30 Charleston Road Registry Inc.
+meme
+
+// memorial : 2014-10-16 Dog Beach, LLC
+memorial
+
+// men : 2015-02-26 Exclusive Registry Limited
+men
+
+// menu : 2013-09-11 Dot Menu Registry, LLC
+menu
+
+// merckmsd : 2016-07-14 MSD Registry Holdings, Inc.
+merckmsd
+
+// miami : 2013-12-19 Minds + Machines Group Limited
+miami
+
+// microsoft : 2014-12-18 Microsoft Corporation
+microsoft
+
+// mini : 2014-01-09 Bayerische Motoren Werke Aktiengesellschaft
+mini
+
+// mint : 2015-07-30 Intuit Administrative Services, Inc.
+mint
+
+// mit : 2015-07-02 Massachusetts Institute of Technology
+mit
+
+// mitsubishi : 2015-07-23 Mitsubishi Corporation
+mitsubishi
+
+// mlb : 2015-05-21 MLB Advanced Media DH, LLC
+mlb
+
+// mls : 2015-04-23 The Canadian Real Estate Association
+mls
+
+// mma : 2014-11-07 MMA IARD
+mma
+
+// mobile : 2016-06-02 Dish DBS Corporation
+mobile
+
+// moda : 2013-11-07 Dog Beach, LLC
+moda
+
+// moe : 2013-11-13 Interlink Co., Ltd.
+moe
+
+// moi : 2014-12-18 Amazon Registry Services, Inc.
+moi
+
+// mom : 2015-04-16 UNR Corp.
+mom
+
+// monash : 2013-09-30 Monash University
+monash
+
+// money : 2014-10-16 Binky Moon, LLC
+money
+
+// monster : 2015-09-11 XYZ.COM LLC
+monster
+
+// mormon : 2013-12-05 IRI Domain Management, LLC
+mormon
+
+// mortgage : 2014-03-20 Dog Beach, LLC
+mortgage
+
+// moscow : 2013-12-19 Foundation for Assistance for Internet Technologies and Infrastructure Development (FAITID)
+moscow
+
+// moto : 2015-06-04 Motorola Trademark Holdings, LLC
+moto
+
+// motorcycles : 2014-01-09 XYZ.COM LLC
+motorcycles
+
+// mov : 2014-01-30 Charleston Road Registry Inc.
+mov
+
+// movie : 2015-02-05 Binky Moon, LLC
+movie
+
+// msd : 2015-07-23 MSD Registry Holdings, Inc.
+msd
+
+// mtn : 2014-12-04 MTN Dubai Limited
+mtn
+
+// mtr : 2015-03-12 MTR Corporation Limited
+mtr
+
+// mutual : 2015-04-02 Northwestern Mutual MU TLD Registry, LLC
+mutual
+
+// nab : 2015-08-20 National Australia Bank Limited
+nab
+
+// nagoya : 2013-10-24 GMO Registry, Inc.
+nagoya
+
+// nationwide : 2015-07-23 Nationwide Mutual Insurance Company
+nationwide
+
+// natura : 2015-03-12 NATURA COSMÉTICOS S.A.
+natura
+
+// navy : 2014-03-06 Dog Beach, LLC
+navy
+
+// nba : 2015-07-31 NBA REGISTRY, LLC
+nba
+
+// nec : 2015-01-08 NEC Corporation
+nec
+
+// netbank : 2014-06-26 COMMONWEALTH BANK OF AUSTRALIA
+netbank
+
+// netflix : 2015-06-18 Netflix, Inc.
+netflix
+
+// network : 2013-11-14 Binky Moon, LLC
+network
+
+// neustar : 2013-12-05 NeuStar, Inc.
+neustar
+
+// new : 2014-01-30 Charleston Road Registry Inc.
+new
+
+// newholland : 2015-09-03 CNH Industrial N.V.
+newholland
+
+// news : 2014-12-18 Dog Beach, LLC
+news
+
+// next : 2015-06-18 Next plc
+next
+
+// nextdirect : 2015-06-18 Next plc
+nextdirect
+
+// nexus : 2014-07-24 Charleston Road Registry Inc.
+nexus
+
+// nfl : 2015-07-23 NFL Reg Ops LLC
+nfl
+
+// ngo : 2014-03-06 Public Interest Registry
+ngo
+
+// nhk : 2014-02-13 Japan Broadcasting Corporation (NHK)
+nhk
+
+// nico : 2014-12-04 DWANGO Co., Ltd.
+nico
+
+// nike : 2015-07-23 NIKE, Inc.
+nike
+
+// nikon : 2015-05-21 NIKON CORPORATION
+nikon
+
+// ninja : 2013-11-07 Dog Beach, LLC
+ninja
+
+// nissan : 2014-03-27 NISSAN MOTOR CO., LTD.
+nissan
+
+// nissay : 2015-10-29 Nippon Life Insurance Company
+nissay
+
+// nokia : 2015-01-08 Nokia Corporation
+nokia
+
+// northwesternmutual : 2015-06-18 Northwestern Mutual Registry, LLC
+northwesternmutual
+
+// norton : 2014-12-04 NortonLifeLock Inc.
+norton
+
+// now : 2015-06-25 Amazon Registry Services, Inc.
+now
+
+// nowruz : 2014-09-04 Asia Green IT System Bilgisayar San. ve Tic. Ltd. Sti.
+nowruz
+
+// nowtv : 2015-05-14 Starbucks (HK) Limited
+nowtv
+
+// nra : 2014-05-22 NRA Holdings Company, INC.
+nra
+
+// nrw : 2013-11-21 Minds + Machines GmbH
+nrw
+
+// ntt : 2014-10-31 NIPPON TELEGRAPH AND TELEPHONE CORPORATION
+ntt
+
+// nyc : 2014-01-23 The City of New York by and through the New York City Department of Information Technology & Telecommunications
+nyc
+
+// obi : 2014-09-25 OBI Group Holding SE & Co. KGaA
+obi
+
+// observer : 2015-04-30 Top Level Spectrum, Inc.
+observer
+
+// off : 2015-07-23 Johnson Shareholdings, Inc.
+off
+
+// office : 2015-03-12 Microsoft Corporation
+office
+
+// okinawa : 2013-12-05 BRregistry, Inc.
+okinawa
+
+// olayan : 2015-05-14 Crescent Holding GmbH
+olayan
+
+// olayangroup : 2015-05-14 Crescent Holding GmbH
+olayangroup
+
+// oldnavy : 2015-07-31 The Gap, Inc.
+oldnavy
+
+// ollo : 2015-06-04 Dish DBS Corporation
+ollo
+
+// omega : 2015-01-08 The Swatch Group Ltd
+omega
+
+// one : 2014-11-07 One.com A/S
+one
+
+// ong : 2014-03-06 Public Interest Registry
+ong
+
+// onl : 2013-09-16 iRegistry GmbH
+onl
+
+// online : 2015-01-15 DotOnline Inc.
+online
+
+// onyourside : 2015-07-23 Nationwide Mutual Insurance Company
+onyourside
+
+// ooo : 2014-01-09 INFIBEAM AVENUES LIMITED
+ooo
+
+// open : 2015-07-31 American Express Travel Related Services Company, Inc.
+open
+
+// oracle : 2014-06-19 Oracle Corporation
+oracle
+
+// orange : 2015-03-12 Orange Brand Services Limited
+orange
+
+// organic : 2014-03-27 Afilias Limited
+organic
+
+// origins : 2015-10-01 The Estée Lauder Companies Inc.
+origins
+
+// osaka : 2014-09-04 Osaka Registry Co., Ltd.
+osaka
+
+// otsuka : 2013-10-11 Otsuka Holdings Co., Ltd.
+otsuka
+
+// ott : 2015-06-04 Dish DBS Corporation
+ott
+
+// ovh : 2014-01-16 MédiaBC
+ovh
+
+// page : 2014-12-04 Charleston Road Registry Inc.
+page
+
+// panasonic : 2015-07-30 Panasonic Corporation
+panasonic
+
+// paris : 2014-01-30 City of Paris
+paris
+
+// pars : 2014-09-04 Asia Green IT System Bilgisayar San. ve Tic. Ltd. Sti.
+pars
+
+// partners : 2013-12-05 Binky Moon, LLC
+partners
+
+// parts : 2013-12-05 Binky Moon, LLC
+parts
+
+// party : 2014-09-11 Blue Sky Registry Limited
+party
+
+// passagens : 2015-03-05 Travel Reservations SRL
+passagens
+
+// pay : 2015-08-27 Amazon Registry Services, Inc.
+pay
+
+// pccw : 2015-05-14 PCCW Enterprises Limited
+pccw
+
+// pet : 2015-05-07 Afilias Limited
+pet
+
+// pfizer : 2015-09-11 Pfizer Inc.
+pfizer
+
+// pharmacy : 2014-06-19 National Association of Boards of Pharmacy
+pharmacy
+
+// phd : 2016-07-28 Charleston Road Registry Inc.
+phd
+
+// philips : 2014-11-07 Koninklijke Philips N.V.
+philips
+
+// phone : 2016-06-02 Dish DBS Corporation
+phone
+
+// photo : 2013-11-14 UNR Corp.
+photo
+
+// photography : 2013-09-20 Binky Moon, LLC
+photography
+
+// photos : 2013-10-17 Binky Moon, LLC
+photos
+
+// physio : 2014-05-01 PhysBiz Pty Ltd
+physio
+
+// pics : 2013-11-14 UNR Corp.
+pics
+
+// pictet : 2014-06-26 Pictet Europe S.A.
+pictet
+
+// pictures : 2014-03-06 Binky Moon, LLC
+pictures
+
+// pid : 2015-01-08 Top Level Spectrum, Inc.
+pid
+
+// pin : 2014-12-18 Amazon Registry Services, Inc.
+pin
+
+// ping : 2015-06-11 Ping Registry Provider, Inc.
+ping
+
+// pink : 2013-10-01 Afilias Limited
+pink
+
+// pioneer : 2015-07-16 Pioneer Corporation
+pioneer
+
+// pizza : 2014-06-26 Binky Moon, LLC
+pizza
+
+// place : 2014-04-24 Binky Moon, LLC
+place
+
+// play : 2015-03-05 Charleston Road Registry Inc.
+play
+
+// playstation : 2015-07-02 Sony Interactive Entertainment Inc.
+playstation
+
+// plumbing : 2013-09-10 Binky Moon, LLC
+plumbing
+
+// plus : 2015-02-05 Binky Moon, LLC
+plus
+
+// pnc : 2015-07-02 PNC Domain Co., LLC
+pnc
+
+// pohl : 2014-06-23 Deutsche Vermögensberatung Aktiengesellschaft DVAG
+pohl
+
+// poker : 2014-07-03 Afilias Limited
+poker
+
+// politie : 2015-08-20 Politie Nederland
+politie
+
+// porn : 2014-10-16 ICM Registry PN LLC
+porn
+
+// pramerica : 2015-07-30 Prudential Financial, Inc.
+pramerica
+
+// praxi : 2013-12-05 Praxi S.p.A.
+praxi
+
+// press : 2014-04-03 DotPress Inc.
+press
+
+// prime : 2015-06-25 Amazon Registry Services, Inc.
+prime
+
+// prod : 2014-01-23 Charleston Road Registry Inc.
+prod
+
+// productions : 2013-12-05 Binky Moon, LLC
+productions
+
+// prof : 2014-07-24 Charleston Road Registry Inc.
+prof
+
+// progressive : 2015-07-23 Progressive Casualty Insurance Company
+progressive
+
+// promo : 2014-12-18 Afilias Limited
+promo
+
+// properties : 2013-12-05 Binky Moon, LLC
+properties
+
+// property : 2014-05-22 UNR Corp.
+property
+
+// protection : 2015-04-23 XYZ.COM LLC
+protection
+
+// pru : 2015-07-30 Prudential Financial, Inc.
+pru
+
+// prudential : 2015-07-30 Prudential Financial, Inc.
+prudential
+
+// pub : 2013-12-12 Dog Beach, LLC
+pub
+
+// pwc : 2015-10-29 PricewaterhouseCoopers LLP
+pwc
+
+// qpon : 2013-11-14 dotCOOL, Inc.
+qpon
+
+// quebec : 2013-12-19 PointQuébec Inc
+quebec
+
+// quest : 2015-03-26 XYZ.COM LLC
+quest
+
+// qvc : 2015-07-30 QVC, Inc.
+qvc
+
+// racing : 2014-12-04 Premier Registry Limited
+racing
+
+// radio : 2016-07-21 European Broadcasting Union (EBU)
+radio
+
+// raid : 2015-07-23 Johnson Shareholdings, Inc.
+raid
+
+// read : 2014-12-18 Amazon Registry Services, Inc.
+read
+
+// realestate : 2015-09-11 dotRealEstate LLC
+realestate
+
+// realtor : 2014-05-29 Real Estate Domains LLC
+realtor
+
+// realty : 2015-03-19 Fegistry, LLC
+realty
+
+// recipes : 2013-10-17 Binky Moon, LLC
+recipes
+
+// red : 2013-11-07 Afilias Limited
+red
+
+// redstone : 2014-10-31 Redstone Haute Couture Co., Ltd.
+redstone
+
+// redumbrella : 2015-03-26 Travelers TLD, LLC
+redumbrella
+
+// rehab : 2014-03-06 Dog Beach, LLC
+rehab
+
+// reise : 2014-03-13 Binky Moon, LLC
+reise
+
+// reisen : 2014-03-06 Binky Moon, LLC
+reisen
+
+// reit : 2014-09-04 National Association of Real Estate Investment Trusts, Inc.
+reit
+
+// reliance : 2015-04-02 Reliance Industries Limited
+reliance
+
+// ren : 2013-12-12 ZDNS International Limited
+ren
+
+// rent : 2014-12-04 XYZ.COM LLC
+rent
+
+// rentals : 2013-12-05 Binky Moon, LLC
+rentals
+
+// repair : 2013-11-07 Binky Moon, LLC
+repair
+
+// report : 2013-12-05 Binky Moon, LLC
+report
+
+// republican : 2014-03-20 Dog Beach, LLC
+republican
+
+// rest : 2013-12-19 Punto 2012 Sociedad Anonima Promotora de Inversion de Capital Variable
+rest
+
+// restaurant : 2014-07-03 Binky Moon, LLC
+restaurant
+
+// review : 2014-11-20 dot Review Limited
+review
+
+// reviews : 2013-09-13 Dog Beach, LLC
+reviews
+
+// rexroth : 2015-06-18 Robert Bosch GMBH
+rexroth
+
+// rich : 2013-11-21 iRegistry GmbH
+rich
+
+// richardli : 2015-05-14 Pacific Century Asset Management (HK) Limited
+richardli
+
+// ricoh : 2014-11-20 Ricoh Company, Ltd.
+ricoh
+
+// ril : 2015-04-02 Reliance Industries Limited
+ril
+
+// rio : 2014-02-27 Empresa Municipal de Informática SA - IPLANRIO
+rio
+
+// rip : 2014-07-10 Dog Beach, LLC
+rip
+
+// rmit : 2015-11-19 Royal Melbourne Institute of Technology
+rmit
+
+// rocher : 2014-12-18 Ferrero Trading Lux S.A.
+rocher
+
+// rocks : 2013-11-14 Dog Beach, LLC
+rocks
+
+// rodeo : 2013-12-19 Minds + Machines Group Limited
+rodeo
+
+// rogers : 2015-08-06 Rogers Communications Canada Inc.
+rogers
+
+// room : 2014-12-18 Amazon Registry Services, Inc.
+room
+
+// rsvp : 2014-05-08 Charleston Road Registry Inc.
+rsvp
+
+// rugby : 2016-12-15 World Rugby Strategic Developments Limited
+rugby
+
+// ruhr : 2013-10-02 regiodot GmbH & Co. KG
+ruhr
+
+// run : 2015-03-19 Binky Moon, LLC
+run
+
+// rwe : 2015-04-02 RWE AG
+rwe
+
+// ryukyu : 2014-01-09 BRregistry, Inc.
+ryukyu
+
+// saarland : 2013-12-12 dotSaarland GmbH
+saarland
+
+// safe : 2014-12-18 Amazon Registry Services, Inc.
+safe
+
+// safety : 2015-01-08 Safety Registry Services, LLC.
+safety
+
+// sakura : 2014-12-18 SAKURA Internet Inc.
+sakura
+
+// sale : 2014-10-16 Dog Beach, LLC
+sale
+
+// salon : 2014-12-11 Binky Moon, LLC
+salon
+
+// samsclub : 2015-07-31 Wal-Mart Stores, Inc.
+samsclub + +// samsung : 2014-04-03 SAMSUNG SDS CO., LTD +samsung + +// sandvik : 2014-11-13 Sandvik AB +sandvik + +// sandvikcoromant : 2014-11-07 Sandvik AB +sandvikcoromant + +// sanofi : 2014-10-09 Sanofi +sanofi + +// sap : 2014-03-27 SAP AG +sap + +// sarl : 2014-07-03 Binky Moon, LLC +sarl + +// sas : 2015-04-02 Research IP LLC +sas + +// save : 2015-06-25 Amazon Registry Services, Inc. +save + +// saxo : 2014-10-31 Saxo Bank A/S +saxo + +// sbi : 2015-03-12 STATE BANK OF INDIA +sbi + +// sbs : 2014-11-07 SPECIAL BROADCASTING SERVICE CORPORATION +sbs + +// sca : 2014-03-13 SVENSKA CELLULOSA AKTIEBOLAGET SCA (publ) +sca + +// scb : 2014-02-20 The Siam Commercial Bank Public Company Limited ("SCB") +scb + +// schaeffler : 2015-08-06 Schaeffler Technologies AG & Co. KG +schaeffler + +// schmidt : 2014-04-03 SCHMIDT GROUPE S.A.S. +schmidt + +// scholarships : 2014-04-24 Scholarships.com, LLC +scholarships + +// school : 2014-12-18 Binky Moon, LLC +school + +// schule : 2014-03-06 Binky Moon, LLC +schule + +// schwarz : 2014-09-18 Schwarz Domains und Services GmbH & Co. KG +schwarz + +// science : 2014-09-11 dot Science Limited +science + +// scjohnson : 2015-07-23 Johnson Shareholdings, Inc. +scjohnson + +// scot : 2014-01-23 Dot Scot Registry Limited +scot + +// search : 2016-06-09 Charleston Road Registry Inc. +search + +// seat : 2014-05-22 SEAT, S.A. (Sociedad Unipersonal) +seat + +// secure : 2015-08-27 Amazon Registry Services, Inc. +secure + +// security : 2015-05-14 XYZ.COM LLC +security + +// seek : 2014-12-04 Seek Limited +seek + +// select : 2015-10-08 Registry Services, LLC +select + +// sener : 2014-10-24 Sener Ingeniería y Sistemas, S.A. +sener + +// services : 2014-02-27 Binky Moon, LLC +services + +// ses : 2015-07-23 SES +ses + +// seven : 2015-08-06 Seven West Media Ltd +seven + +// sew : 2014-07-17 SEW-EURODRIVE GmbH & Co KG +sew + +// sex : 2014-11-13 ICM Registry SX LLC +sex + +// sexy : 2013-09-11 UNR Corp. +sexy + +// sfr : 2015-08-13 Societe Francaise du Radiotelephone - SFR +sfr + +// shangrila : 2015-09-03 Shangri‐La International Hotel Management Limited +shangrila + +// sharp : 2014-05-01 Sharp Corporation +sharp + +// shaw : 2015-04-23 Shaw Cablesystems G.P. +shaw + +// shell : 2015-07-30 Shell Information Technology International Inc +shell + +// shia : 2014-09-04 Asia Green IT System Bilgisayar San. ve Tic. Ltd. Sti. +shia + +// shiksha : 2013-11-14 Afilias Limited +shiksha + +// shoes : 2013-10-02 Binky Moon, LLC +shoes + +// shop : 2016-04-08 GMO Registry, Inc. +shop + +// shopping : 2016-03-31 Binky Moon, LLC +shopping + +// shouji : 2015-01-08 Beijing Qihu Keji Co., Ltd. +shouji + +// show : 2015-03-05 Binky Moon, LLC +show + +// showtime : 2015-08-06 CBS Domains Inc. +showtime + +// silk : 2015-06-25 Amazon Registry Services, Inc. +silk + +// sina : 2015-03-12 Sina Corporation +sina + +// singles : 2013-08-27 Binky Moon, LLC +singles + +// site : 2015-01-15 DotSite Inc. +site + +// ski : 2015-04-09 Afilias Limited +ski + +// skin : 2015-01-15 XYZ.COM LLC +skin + +// sky : 2014-06-19 Sky International AG +sky + +// skype : 2014-12-18 Microsoft Corporation +skype + +// sling : 2015-07-30 DISH Technologies L.L.C. +sling + +// smart : 2015-07-09 Smart Communications, Inc. (SMART) +smart + +// smile : 2014-12-18 Amazon Registry Services, Inc. 
+smile + +// sncf : 2015-02-19 Société Nationale des Chemins de fer Francais S N C F +sncf + +// soccer : 2015-03-26 Binky Moon, LLC +soccer + +// social : 2013-11-07 Dog Beach, LLC +social + +// softbank : 2015-07-02 SoftBank Group Corp. +softbank + +// software : 2014-03-20 Dog Beach, LLC +software + +// sohu : 2013-12-19 Sohu.com Limited +sohu + +// solar : 2013-11-07 Binky Moon, LLC +solar + +// solutions : 2013-11-07 Binky Moon, LLC +solutions + +// song : 2015-02-26 Amazon Registry Services, Inc. +song + +// sony : 2015-01-08 Sony Corporation +sony + +// soy : 2014-01-23 Charleston Road Registry Inc. +soy + +// spa : 2019-09-19 Asia Spa and Wellness Promotion Council Limited +spa + +// space : 2014-04-03 DotSpace Inc. +space + +// sport : 2017-11-16 Global Association of International Sports Federations (GAISF) +sport + +// spot : 2015-02-26 Amazon Registry Services, Inc. +spot + +// spreadbetting : 2014-12-11 Dotspreadbetting Registry Limited +spreadbetting + +// srl : 2015-05-07 InterNetX, Corp +srl + +// stada : 2014-11-13 STADA Arzneimittel AG +stada + +// staples : 2015-07-30 Staples, Inc. +staples + +// star : 2015-01-08 Star India Private Limited +star + +// statebank : 2015-03-12 STATE BANK OF INDIA +statebank + +// statefarm : 2015-07-30 State Farm Mutual Automobile Insurance Company +statefarm + +// stc : 2014-10-09 Saudi Telecom Company +stc + +// stcgroup : 2014-10-09 Saudi Telecom Company +stcgroup + +// stockholm : 2014-12-18 Stockholms kommun +stockholm + +// storage : 2014-12-22 XYZ.COM LLC +storage + +// store : 2015-04-09 DotStore Inc. +store + +// stream : 2016-01-08 dot Stream Limited +stream + +// studio : 2015-02-11 Dog Beach, LLC +studio + +// study : 2014-12-11 OPEN UNIVERSITIES AUSTRALIA PTY LTD +study + +// style : 2014-12-04 Binky Moon, LLC +style + +// sucks : 2014-12-22 Vox Populi Registry Ltd. +sucks + +// supplies : 2013-12-19 Binky Moon, LLC +supplies + +// supply : 2013-12-19 Binky Moon, LLC +supply + +// support : 2013-10-24 Binky Moon, LLC +support + +// surf : 2014-01-09 Minds + Machines Group Limited +surf + +// surgery : 2014-03-20 Binky Moon, LLC +surgery + +// suzuki : 2014-02-20 SUZUKI MOTOR CORPORATION +suzuki + +// swatch : 2015-01-08 The Swatch Group Ltd +swatch + +// swiftcover : 2015-07-23 Swiftcover Insurance Services Limited +swiftcover + +// swiss : 2014-10-16 Swiss Confederation +swiss + +// sydney : 2014-09-18 State of New South Wales, Department of Premier and Cabinet +sydney + +// systems : 2013-11-07 Binky Moon, LLC +systems + +// tab : 2014-12-04 Tabcorp Holdings Limited +tab + +// taipei : 2014-07-10 Taipei City Government +taipei + +// talk : 2015-04-09 Amazon Registry Services, Inc. +talk + +// taobao : 2015-01-15 Alibaba Group Holding Limited +taobao + +// target : 2015-07-31 Target Domain Holdings, LLC +target + +// tatamotors : 2015-03-12 Tata Motors Ltd +tatamotors + +// tatar : 2014-04-24 Limited Liability Company "Coordination Center of Regional Domain of Tatarstan Republic" +tatar + +// tattoo : 2013-08-30 UNR Corp. +tattoo + +// tax : 2014-03-20 Binky Moon, LLC +tax + +// taxi : 2015-03-19 Binky Moon, LLC +taxi + +// tci : 2014-09-12 Asia Green IT System Bilgisayar San. ve Tic. Ltd. Sti. +tci + +// tdk : 2015-06-11 TDK Corporation +tdk + +// team : 2015-03-05 Binky Moon, LLC +team + +// tech : 2015-01-30 Personals TLD Inc. 
+tech + +// technology : 2013-09-13 Binky Moon, LLC +technology + +// temasek : 2014-08-07 Temasek Holdings (Private) Limited +temasek + +// tennis : 2014-12-04 Binky Moon, LLC +tennis + +// teva : 2015-07-02 Teva Pharmaceutical Industries Limited +teva + +// thd : 2015-04-02 Home Depot Product Authority, LLC +thd + +// theater : 2015-03-19 Binky Moon, LLC +theater + +// theatre : 2015-05-07 XYZ.COM LLC +theatre + +// tiaa : 2015-07-23 Teachers Insurance and Annuity Association of America +tiaa + +// tickets : 2015-02-05 Accent Media Limited +tickets + +// tienda : 2013-11-14 Binky Moon, LLC +tienda + +// tiffany : 2015-01-30 Tiffany and Company +tiffany + +// tips : 2013-09-20 Binky Moon, LLC +tips + +// tires : 2014-11-07 Binky Moon, LLC +tires + +// tirol : 2014-04-24 punkt Tirol GmbH +tirol + +// tjmaxx : 2015-07-16 The TJX Companies, Inc. +tjmaxx + +// tjx : 2015-07-16 The TJX Companies, Inc. +tjx + +// tkmaxx : 2015-07-16 The TJX Companies, Inc. +tkmaxx + +// tmall : 2015-01-15 Alibaba Group Holding Limited +tmall + +// today : 2013-09-20 Binky Moon, LLC +today + +// tokyo : 2013-11-13 GMO Registry, Inc. +tokyo + +// tools : 2013-11-21 Binky Moon, LLC +tools + +// top : 2014-03-20 .TOP Registry +top + +// toray : 2014-12-18 Toray Industries, Inc. +toray + +// toshiba : 2014-04-10 TOSHIBA Corporation +toshiba + +// total : 2015-08-06 Total SA +total + +// tours : 2015-01-22 Binky Moon, LLC +tours + +// town : 2014-03-06 Binky Moon, LLC +town + +// toyota : 2015-04-23 TOYOTA MOTOR CORPORATION +toyota + +// toys : 2014-03-06 Binky Moon, LLC +toys + +// trade : 2014-01-23 Elite Registry Limited +trade + +// trading : 2014-12-11 Dottrading Registry Limited +trading + +// training : 2013-11-07 Binky Moon, LLC +training + +// travel : 2015-10-09 Dog Beach, LLC +travel + +// travelchannel : 2015-07-02 Lifestyle Domain Holdings, Inc. +travelchannel + +// travelers : 2015-03-26 Travelers TLD, LLC +travelers + +// travelersinsurance : 2015-03-26 Travelers TLD, LLC +travelersinsurance + +// trust : 2014-10-16 UNR Corp. +trust + +// trv : 2015-03-26 Travelers TLD, LLC +trv + +// tube : 2015-06-11 Latin American Telecom LLC +tube + +// tui : 2014-07-03 TUI AG +tui + +// tunes : 2015-02-26 Amazon Registry Services, Inc. +tunes + +// tushu : 2014-12-18 Amazon Registry Services, Inc. +tushu + +// tvs : 2015-02-19 T V SUNDRAM IYENGAR & SONS LIMITED +tvs + +// ubank : 2015-08-20 National Australia Bank Limited +ubank + +// ubs : 2014-12-11 UBS AG +ubs + +// unicom : 2015-10-15 China United Network Communications Corporation Limited +unicom + +// university : 2014-03-06 Binky Moon, LLC +university + +// uno : 2013-09-11 DotSite Inc. +uno + +// uol : 2014-05-01 UBN INTERNET LTDA. +uol + +// ups : 2015-06-25 UPS Market Driver, Inc. +ups + +// vacations : 2013-12-05 Binky Moon, LLC +vacations + +// vana : 2014-12-11 Lifestyle Domain Holdings, Inc. +vana + +// vanguard : 2015-09-03 The Vanguard Group, Inc. +vanguard + +// vegas : 2014-01-16 Dot Vegas, Inc. +vegas + +// ventures : 2013-08-27 Binky Moon, LLC +ventures + +// verisign : 2015-08-13 VeriSign, Inc. +verisign + +// versicherung : 2014-03-20 tldbox GmbH +versicherung + +// vet : 2014-03-06 Dog Beach, LLC +vet + +// viajes : 2013-10-17 Binky Moon, LLC +viajes + +// video : 2014-10-16 Dog Beach, LLC +video + +// vig : 2015-05-14 VIENNA INSURANCE GROUP AG Wiener Versicherung Gruppe +vig + +// viking : 2015-04-02 Viking River Cruises (Bermuda) Ltd. 
+viking + +// villas : 2013-12-05 Binky Moon, LLC +villas + +// vin : 2015-06-18 Binky Moon, LLC +vin + +// vip : 2015-01-22 Minds + Machines Group Limited +vip + +// virgin : 2014-09-25 Virgin Enterprises Limited +virgin + +// visa : 2015-07-30 Visa Worldwide Pte. Limited +visa + +// vision : 2013-12-05 Binky Moon, LLC +vision + +// viva : 2014-11-07 Saudi Telecom Company +viva + +// vivo : 2015-07-31 Telefonica Brasil S.A. +vivo + +// vlaanderen : 2014-02-06 DNS.be vzw +vlaanderen + +// vodka : 2013-12-19 Minds + Machines Group Limited +vodka + +// volkswagen : 2015-05-14 Volkswagen Group of America Inc. +volkswagen + +// volvo : 2015-11-12 Volvo Holding Sverige Aktiebolag +volvo + +// vote : 2013-11-21 Monolith Registry LLC +vote + +// voting : 2013-11-13 Valuetainment Corp. +voting + +// voto : 2013-11-21 Monolith Registry LLC +voto + +// voyage : 2013-08-27 Binky Moon, LLC +voyage + +// vuelos : 2015-03-05 Travel Reservations SRL +vuelos + +// wales : 2014-05-08 Nominet UK +wales + +// walmart : 2015-07-31 Wal-Mart Stores, Inc. +walmart + +// walter : 2014-11-13 Sandvik AB +walter + +// wang : 2013-10-24 Zodiac Wang Limited +wang + +// wanggou : 2014-12-18 Amazon Registry Services, Inc. +wanggou + +// watch : 2013-11-14 Binky Moon, LLC +watch + +// watches : 2014-12-22 Richemont DNS Inc. +watches + +// weather : 2015-01-08 International Business Machines Corporation +weather + +// weatherchannel : 2015-03-12 International Business Machines Corporation +weatherchannel + +// webcam : 2014-01-23 dot Webcam Limited +webcam + +// weber : 2015-06-04 Saint-Gobain Weber SA +weber + +// website : 2014-04-03 DotWebsite Inc. +website + +// wedding : 2014-04-24 Minds + Machines Group Limited +wedding + +// weibo : 2015-03-05 Sina Corporation +weibo + +// weir : 2015-01-29 Weir Group IP Limited +weir + +// whoswho : 2014-02-20 Who's Who Registry +whoswho + +// wien : 2013-10-28 punkt.wien GmbH +wien + +// wiki : 2013-11-07 Top Level Design, LLC +wiki + +// williamhill : 2014-03-13 William Hill Organization Limited +williamhill + +// win : 2014-11-20 First Registry Limited +win + +// windows : 2014-12-18 Microsoft Corporation +windows + +// wine : 2015-06-18 Binky Moon, LLC +wine + +// winners : 2015-07-16 The TJX Companies, Inc. +winners + +// wme : 2014-02-13 William Morris Endeavor Entertainment, LLC +wme + +// wolterskluwer : 2015-08-06 Wolters Kluwer N.V. +wolterskluwer + +// woodside : 2015-07-09 Woodside Petroleum Limited +woodside + +// work : 2013-12-19 Minds + Machines Group Limited +work + +// works : 2013-11-14 Binky Moon, LLC +works + +// world : 2014-06-12 Binky Moon, LLC +world + +// wow : 2015-10-08 Amazon Registry Services, Inc. +wow + +// wtc : 2013-12-19 World Trade Centers Association, Inc. +wtc + +// wtf : 2014-03-06 Binky Moon, LLC +wtf + +// xbox : 2014-12-18 Microsoft Corporation +xbox + +// xerox : 2014-10-24 Xerox DNHC LLC +xerox + +// xfinity : 2015-07-09 Comcast IP Holdings I, LLC +xfinity + +// xihuan : 2015-01-08 Beijing Qihu Keji Co., Ltd. +xihuan + +// xin : 2014-12-11 Elegant Leader Limited +xin + +// xn--11b4c3d : 2015-01-15 VeriSign Sarl +कॉम + +// xn--1ck2e1b : 2015-02-26 Amazon Registry Services, Inc. +セール + +// xn--1qqw23a : 2014-01-09 Guangzhou YU Wei Information Technology Co., Ltd. +佛山 + +// xn--30rr7y : 2014-06-12 Excellent First Limited +慈善 + +// xn--3bst00m : 2013-09-13 Eagle Horizon Limited +集团 + +// xn--3ds443g : 2013-09-08 TLD REGISTRY LIMITED OY +在线 + +// xn--3oq18vl8pn36a : 2015-07-02 Volkswagen (China) Investment Co., Ltd. 
+大众汽车 + +// xn--3pxu8k : 2015-01-15 VeriSign Sarl +点看 + +// xn--42c2d9a : 2015-01-15 VeriSign Sarl +คอม + +// xn--45q11c : 2013-11-21 Zodiac Gemini Ltd +八卦 + +// xn--4gbrim : 2013-10-04 Fans TLD Limited +موقع + +// xn--55qw42g : 2013-11-08 China Organizational Name Administration Center +公益 + +// xn--55qx5d : 2013-11-14 China Internet Network Information Center (CNNIC) +公司 + +// xn--5su34j936bgsg : 2015-09-03 Shangri‐La International Hotel Management Limited +香格里拉 + +// xn--5tzm5g : 2014-12-22 Global Website TLD Asia Limited +网站 + +// xn--6frz82g : 2013-09-23 Afilias Limited +移动 + +// xn--6qq986b3xl : 2013-09-13 Tycoon Treasure Limited +我爱你 + +// xn--80adxhks : 2013-12-19 Foundation for Assistance for Internet Technologies and Infrastructure Development (FAITID) +москва + +// xn--80aqecdr1a : 2015-10-21 Pontificium Consilium de Comunicationibus Socialibus (PCCS) (Pontifical Council for Social Communication) +католик + +// xn--80asehdb : 2013-07-14 CORE Association +онлайн + +// xn--80aswg : 2013-07-14 CORE Association +сайт + +// xn--8y0a063a : 2015-03-26 China United Network Communications Corporation Limited +联通 + +// xn--9dbq2a : 2015-01-15 VeriSign Sarl +קום + +// xn--9et52u : 2014-06-12 RISE VICTORY LIMITED +时尚 + +// xn--9krt00a : 2015-03-12 Sina Corporation +微博 + +// xn--b4w605ferd : 2014-08-07 Temasek Holdings (Private) Limited +淡马锡 + +// xn--bck1b9a5dre4c : 2015-02-26 Amazon Registry Services, Inc. +ファッション + +// xn--c1avg : 2013-11-14 Public Interest Registry +орг + +// xn--c2br7g : 2015-01-15 VeriSign Sarl +नेट + +// xn--cck2b3b : 2015-02-26 Amazon Registry Services, Inc. +ストア + +// xn--cckwcxetd : 2019-12-19 Amazon Registry Services, Inc. +アマゾン + +// xn--cg4bki : 2013-09-27 SAMSUNG SDS CO., LTD +삼성 + +// xn--czr694b : 2014-01-16 Internet DotTrademark Organisation Limited +商标 + +// xn--czrs0t : 2013-12-19 Binky Moon, LLC +商店 + +// xn--czru2d : 2013-11-21 Zodiac Aquarius Limited +商城 + +// xn--d1acj3b : 2013-11-20 The Foundation for Network Initiatives “The Smart Internet” +дети + +// xn--eckvdtc9d : 2014-12-18 Amazon Registry Services, Inc. +ポイント + +// xn--efvy88h : 2014-08-22 Guangzhou YU Wei Information Technology Co., Ltd. +新闻 + +// xn--fct429k : 2015-04-09 Amazon Registry Services, Inc. +家電 + +// xn--fhbei : 2015-01-15 VeriSign Sarl +كوم + +// xn--fiq228c5hs : 2013-09-08 TLD REGISTRY LIMITED OY +中文网 + +// xn--fiq64b : 2013-10-14 CITIC Group Corporation +中信 + +// xn--fjq720a : 2014-05-22 Binky Moon, LLC +娱乐 + +// xn--flw351e : 2014-07-31 Charleston Road Registry Inc. +谷歌 + +// xn--fzys8d69uvgm : 2015-05-14 PCCW Enterprises Limited +電訊盈科 + +// xn--g2xx48c : 2015-01-30 Nawang Heli(Xiamen) Network Service Co., LTD. +购物 + +// xn--gckr3f0f : 2015-02-26 Amazon Registry Services, Inc. +クラウド + +// xn--gk3at1e : 2015-10-08 Amazon Registry Services, Inc. +通販 + +// xn--hxt814e : 2014-05-15 Zodiac Taurus Limited +网店 + +// xn--i1b6b1a6a2e : 2013-11-14 Public Interest Registry +संगठन + +// xn--imr513n : 2014-12-11 Internet DotTrademark Organisation Limited +餐厅 + +// xn--io0a7i : 2013-11-14 China Internet Network Information Center (CNNIC) +网络 + +// xn--j1aef : 2015-01-15 VeriSign Sarl +ком + +// xn--jlq480n2rg : 2019-12-19 Amazon Registry Services, Inc. +亚马逊 + +// xn--jlq61u9w7b : 2015-01-08 Nokia Corporation +诺基亚 + +// xn--jvr189m : 2015-02-26 Amazon Registry Services, Inc. +食品 + +// xn--kcrx77d1x4a : 2014-11-07 Koninklijke Philips N.V. 
+飞利浦 + +// xn--kput3i : 2014-02-13 Beijing RITT-Net Technology Development Co., Ltd +手机 + +// xn--mgba3a3ejt : 2014-11-20 Aramco Services Company +ارامكو + +// xn--mgba7c0bbn0a : 2015-05-14 Crescent Holding GmbH +العليان + +// xn--mgbaakc7dvf : 2015-09-03 Emirates Telecommunications Corporation (trading as Etisalat) +اتصالات + +// xn--mgbab2bd : 2013-10-31 CORE Association +بازار + +// xn--mgbca7dzdo : 2015-07-30 Abu Dhabi Systems and Information Centre +ابوظبي + +// xn--mgbi4ecexp : 2015-10-21 Pontificium Consilium de Comunicationibus Socialibus (PCCS) (Pontifical Council for Social Communication) +كاثوليك + +// xn--mgbt3dhd : 2014-09-04 Asia Green IT System Bilgisayar San. ve Tic. Ltd. Sti. +همراه + +// xn--mk1bu44c : 2015-01-15 VeriSign Sarl +닷컴 + +// xn--mxtq1m : 2014-03-06 Net-Chinese Co., Ltd. +政府 + +// xn--ngbc5azd : 2013-07-13 International Domain Registry Pty. Ltd. +شبكة + +// xn--ngbe9e0a : 2014-12-04 Kuwait Finance House +بيتك + +// xn--ngbrx : 2015-11-12 League of Arab States +عرب + +// xn--nqv7f : 2013-11-14 Public Interest Registry +机构 + +// xn--nqv7fs00ema : 2013-11-14 Public Interest Registry +组织机构 + +// xn--nyqy26a : 2014-11-07 Stable Tone Limited +健康 + +// xn--otu796d : 2017-08-06 Jiang Yu Liang Cai Technology Company Limited +招聘 + +// xn--p1acf : 2013-12-12 Rusnames Limited +рус + +// xn--pssy2u : 2015-01-15 VeriSign Sarl +大拿 + +// xn--q9jyb4c : 2013-09-17 Charleston Road Registry Inc. +みんな + +// xn--qcka1pmc : 2014-07-31 Charleston Road Registry Inc. +グーグル + +// xn--rhqv96g : 2013-09-11 Stable Tone Limited +世界 + +// xn--rovu88b : 2015-02-26 Amazon Registry Services, Inc. +書籍 + +// xn--ses554g : 2014-01-16 KNET Co., Ltd. +网址 + +// xn--t60b56a : 2015-01-15 VeriSign Sarl +닷넷 + +// xn--tckwe : 2015-01-15 VeriSign Sarl +コム + +// xn--tiq49xqyj : 2015-10-21 Pontificium Consilium de Comunicationibus Socialibus (PCCS) (Pontifical Council for Social Communication) +天主教 + +// xn--unup4y : 2013-07-14 Binky Moon, LLC +游戏 + +// xn--vermgensberater-ctb : 2014-06-23 Deutsche Vermögensberatung Aktiengesellschaft DVAG +vermögensberater + +// xn--vermgensberatung-pwb : 2014-06-23 Deutsche Vermögensberatung Aktiengesellschaft DVAG +vermögensberatung + +// xn--vhquv : 2013-08-27 Binky Moon, LLC +企业 + +// xn--vuq861b : 2014-10-16 Beijing Tele-info Network Technology Co., Ltd. +信息 + +// xn--w4r85el8fhu5dnra : 2015-04-30 Kerry Trading Co. Limited +嘉里大酒店 + +// xn--w4rs40l : 2015-07-30 Kerry Trading Co. Limited +嘉里 + +// xn--xhq521b : 2013-11-14 Guangzhou YU Wei Information Technology Co., Ltd. +广东 + +// xn--zfr164b : 2013-11-08 China Organizational Name Administration Center +政务 + +// xyz : 2013-12-05 XYZ.COM LLC +xyz + +// yachts : 2014-01-09 XYZ.COM LLC +yachts + +// yahoo : 2015-04-02 Yahoo! Domain Services Inc. +yahoo + +// yamaxun : 2014-12-18 Amazon Registry Services, Inc. +yamaxun + +// yandex : 2014-04-10 Yandex Europe B.V. +yandex + +// yodobashi : 2014-11-20 YODOBASHI CAMERA CO.,LTD. +yodobashi + +// yoga : 2014-05-29 Minds + Machines Group Limited +yoga + +// yokohama : 2013-12-12 GMO Registry, Inc. +yokohama + +// you : 2015-04-09 Amazon Registry Services, Inc. +you + +// youtube : 2014-05-01 Charleston Road Registry Inc. +youtube + +// yun : 2015-01-08 Beijing Qihu Keji Co., Ltd. +yun + +// zappos : 2015-06-25 Amazon Registry Services, Inc. +zappos + +// zara : 2014-11-07 Industria de Diseño Textil, S.A. (INDITEX, S.A.) +zara + +// zero : 2014-12-18 Amazon Registry Services, Inc. +zero + +// zip : 2014-05-08 Charleston Road Registry Inc. 
+zip + +// zone : 2013-11-14 Binky Moon, LLC +zone + +// zuerich : 2014-11-07 Kanton Zürich (Canton of Zurich) +zuerich + + +// ===END ICANN DOMAINS=== +// ===BEGIN PRIVATE DOMAINS=== +// (Note: these are in alphabetical order by company name) + +// 1GB LLC : https://www.1gb.ua/ +// Submitted by 1GB LLC +cc.ua +inf.ua +ltd.ua + +// 611coin : https://611project.org/ +611.to + +// Adobe : https://www.adobe.com/ +// Submitted by Ian Boston +adobeaemcloud.com +adobeaemcloud.net +*.dev.adobeaemcloud.com + +// Agnat sp. z o.o. : https://domena.pl +// Submitted by Przemyslaw Plewa +beep.pl + +// alboto.ca : http://alboto.ca +// Submitted by Anton Avramov +barsy.ca + +// Alces Software Ltd : http://alces-software.com +// Submitted by Mark J. Titorenko +*.compute.estate +*.alces.network + +// all-inkl.com : https://all-inkl.com +// Submitted by Werner Kaltofen +kasserver.com + +// Altervista: https://www.altervista.org +// Submitted by Carlo Cannas +altervista.org + +// alwaysdata : https://www.alwaysdata.com +// Submitted by Cyril +alwaysdata.net + +// Amazon CloudFront : https://aws.amazon.com/cloudfront/ +// Submitted by Donavan Miller +cloudfront.net + +// Amazon Elastic Compute Cloud : https://aws.amazon.com/ec2/ +// Submitted by Luke Wells +*.compute.amazonaws.com +*.compute-1.amazonaws.com +*.compute.amazonaws.com.cn +us-east-1.amazonaws.com + +// Amazon Elastic Beanstalk : https://aws.amazon.com/elasticbeanstalk/ +// Submitted by Luke Wells +cn-north-1.eb.amazonaws.com.cn +cn-northwest-1.eb.amazonaws.com.cn +elasticbeanstalk.com +ap-northeast-1.elasticbeanstalk.com +ap-northeast-2.elasticbeanstalk.com +ap-northeast-3.elasticbeanstalk.com +ap-south-1.elasticbeanstalk.com +ap-southeast-1.elasticbeanstalk.com +ap-southeast-2.elasticbeanstalk.com +ca-central-1.elasticbeanstalk.com +eu-central-1.elasticbeanstalk.com +eu-west-1.elasticbeanstalk.com +eu-west-2.elasticbeanstalk.com +eu-west-3.elasticbeanstalk.com +sa-east-1.elasticbeanstalk.com +us-east-1.elasticbeanstalk.com +us-east-2.elasticbeanstalk.com +us-gov-west-1.elasticbeanstalk.com +us-west-1.elasticbeanstalk.com +us-west-2.elasticbeanstalk.com + +// Amazon Elastic Load Balancing : https://aws.amazon.com/elasticloadbalancing/ +// Submitted by Luke Wells +*.elb.amazonaws.com +*.elb.amazonaws.com.cn + +// Amazon S3 : https://aws.amazon.com/s3/ +// Submitted by Luke Wells +s3.amazonaws.com +s3-ap-northeast-1.amazonaws.com +s3-ap-northeast-2.amazonaws.com +s3-ap-south-1.amazonaws.com +s3-ap-southeast-1.amazonaws.com +s3-ap-southeast-2.amazonaws.com +s3-ca-central-1.amazonaws.com +s3-eu-central-1.amazonaws.com +s3-eu-west-1.amazonaws.com +s3-eu-west-2.amazonaws.com +s3-eu-west-3.amazonaws.com +s3-external-1.amazonaws.com +s3-fips-us-gov-west-1.amazonaws.com +s3-sa-east-1.amazonaws.com +s3-us-gov-west-1.amazonaws.com +s3-us-east-2.amazonaws.com +s3-us-west-1.amazonaws.com +s3-us-west-2.amazonaws.com +s3.ap-northeast-2.amazonaws.com +s3.ap-south-1.amazonaws.com +s3.cn-north-1.amazonaws.com.cn +s3.ca-central-1.amazonaws.com +s3.eu-central-1.amazonaws.com +s3.eu-west-2.amazonaws.com +s3.eu-west-3.amazonaws.com +s3.us-east-2.amazonaws.com +s3.dualstack.ap-northeast-1.amazonaws.com +s3.dualstack.ap-northeast-2.amazonaws.com +s3.dualstack.ap-south-1.amazonaws.com +s3.dualstack.ap-southeast-1.amazonaws.com +s3.dualstack.ap-southeast-2.amazonaws.com +s3.dualstack.ca-central-1.amazonaws.com +s3.dualstack.eu-central-1.amazonaws.com +s3.dualstack.eu-west-1.amazonaws.com +s3.dualstack.eu-west-2.amazonaws.com +s3.dualstack.eu-west-3.amazonaws.com 
+s3.dualstack.sa-east-1.amazonaws.com +s3.dualstack.us-east-1.amazonaws.com +s3.dualstack.us-east-2.amazonaws.com +s3-website-us-east-1.amazonaws.com +s3-website-us-west-1.amazonaws.com +s3-website-us-west-2.amazonaws.com +s3-website-ap-northeast-1.amazonaws.com +s3-website-ap-southeast-1.amazonaws.com +s3-website-ap-southeast-2.amazonaws.com +s3-website-eu-west-1.amazonaws.com +s3-website-sa-east-1.amazonaws.com +s3-website.ap-northeast-2.amazonaws.com +s3-website.ap-south-1.amazonaws.com +s3-website.ca-central-1.amazonaws.com +s3-website.eu-central-1.amazonaws.com +s3-website.eu-west-2.amazonaws.com +s3-website.eu-west-3.amazonaws.com +s3-website.us-east-2.amazonaws.com + +// Amsterdam Wireless: https://www.amsterdamwireless.nl/ +// Submitted by Imre Jonk +amsw.nl + +// Amune : https://amune.org/ +// Submitted by Team Amune +t3l3p0rt.net +tele.amune.org + +// Apigee : https://apigee.com/ +// Submitted by Apigee Security Team +apigee.io + +// Aptible : https://www.aptible.com/ +// Submitted by Thomas Orozco +on-aptible.com + +// ASEINet : https://www.aseinet.com/ +// Submitted by Asei SEKIGUCHI +user.aseinet.ne.jp +gv.vc +d.gv.vc + +// Asociación Amigos de la Informática "Euskalamiga" : http://encounter.eus/ +// Submitted by Hector Martin +user.party.eus + +// Association potager.org : https://potager.org/ +// Submitted by Lunar +pimienta.org +poivron.org +potager.org +sweetpepper.org + +// ASUSTOR Inc. : http://www.asustor.com +// Submitted by Vincent Tseng +myasustor.com + +// AVM : https://avm.de +// Submitted by Andreas Weise +myfritz.net + +// AW AdvisorWebsites.com Software Inc : https://advisorwebsites.com +// Submitted by James Kennedy +*.awdev.ca +*.advisor.ws + +// b-data GmbH : https://www.b-data.io +// Submitted by Olivier Benz +b-data.io + +// backplane : https://www.backplane.io +// Submitted by Anthony Voutas +backplaneapp.io + +// Balena : https://www.balena.io +// Submitted by Petros Angelatos +balena-devices.com + +// Banzai Cloud +// Submitted by Janos Matyas +*.banzai.cloud +app.banzaicloud.io +*.backyards.banzaicloud.io + + +// BetaInABox +// Submitted by Adrian +betainabox.com + +// BinaryLane : http://www.binarylane.com +// Submitted by Nathan O'Sullivan +bnr.la + +// Blackbaud, Inc. 
: https://www.blackbaud.com +// Submitted by Paul Crowder +blackbaudcdn.net + +// Blatech : http://www.blatech.net +// Submitted by Luke Bratch +of.je + +// Boomla : https://boomla.com +// Submitted by Tibor Halter +boomla.net + +// Boxfuse : https://boxfuse.com +// Submitted by Axel Fontaine +boxfuse.io + +// bplaced : https://www.bplaced.net/ +// Submitted by Miroslav Bozic +square7.ch +bplaced.com +bplaced.de +square7.de +bplaced.net +square7.net + +// BrowserSafetyMark +// Submitted by Dave Tharp +browsersafetymark.io + +// Bytemark Hosting : https://www.bytemark.co.uk +// Submitted by Paul Cammish +uk0.bigv.io +dh.bytemark.co.uk +vm.bytemark.co.uk + +// callidomus : https://www.callidomus.com/ +// Submitted by Marcus Popp +mycd.eu + +// Carrd : https://carrd.co +// Submitted by AJ +carrd.co +crd.co +uwu.ai + +// CentralNic : http://www.centralnic.com/names/domains +// Submitted by registry +ae.org +br.com +cn.com +com.de +com.se +de.com +eu.com +gb.net +hu.net +jp.net +jpn.com +mex.com +ru.com +sa.com +se.net +uk.com +uk.net +us.com +za.bz +za.com + +// No longer operated by CentralNic, these entries should be adopted and/or removed by current operators +// Submitted by Gavin Brown +ar.com +gb.com +hu.com +kr.com +no.com +qc.com +uy.com + +// Africa.com Web Solutions Ltd : https://registry.africa.com +// Submitted by Gavin Brown +africa.com + +// iDOT Services Limited : http://www.domain.gr.com +// Submitted by Gavin Brown +gr.com + +// Radix FZC : http://domains.in.net +// Submitted by Gavin Brown +in.net +web.in + +// US REGISTRY LLC : http://us.org +// Submitted by Gavin Brown +us.org + +// co.com Registry, LLC : https://registry.co.com +// Submitted by Gavin Brown +co.com + +// Roar Domains LLC : https://roar.basketball/ +// Submitted by Gavin Brown +aus.basketball +nz.basketball + +// BRS Media : https://brsmedia.com/ +// Submitted by Gavin Brown +radio.am +radio.fm + +// Globe Hosting SRL : https://www.globehosting.com/ +// Submitted by Gavin Brown +co.ro +shop.ro + +// c.la : http://www.c.la/ +c.la + +// certmgr.org : https://certmgr.org +// Submitted by B. Blechschmidt +certmgr.org + +// Civilized Discourse Construction Kit, Inc. : https://www.discourse.org/ +// Submitted by Rishabh Nambiar & Michael Brown +discourse.group +discourse.team + +// ClearVox : http://www.clearvox.nl/ +// Submitted by Leon Rowland +virtueeldomein.nl + +// Clever Cloud : https://www.clever-cloud.com/ +// Submitted by Quentin Adam +cleverapps.io + +// Clerk : https://www.clerk.dev +// Submitted by Colin Sidoti +*.lcl.dev +*.stg.dev + +// Clic2000 : https://clic2000.fr +// Submitted by Mathilde Blanchemanche +clic2000.net + +// Cloud66 : https://www.cloud66.com/ +// Submitted by Khash Sajadi +c66.me +cloud66.ws +cloud66.zone + +// CloudAccess.net : https://www.cloudaccess.net/ +// Submitted by Pawel Panek +jdevcloud.com +wpdevcloud.com +cloudaccess.host +freesite.host +cloudaccess.net + +// cloudControl : https://www.cloudcontrol.com/ +// Submitted by Tobias Wilken +cloudcontrolled.com +cloudcontrolapp.com + +// Cloudera, Inc. : https://www.cloudera.com/ +// Submitted by Philip Langdale +cloudera.site + +// Cloudflare, Inc. : https://www.cloudflare.com/ +// Submitted by Cloudflare Team +pages.dev +trycloudflare.com +workers.dev + +// Clovyr : https://clovyr.io +// Submitted by Patrick Nielsen +wnext.app + +// co.ca : http://registry.co.ca/ +co.ca + +// Co & Co : https://co-co.nl/ +// Submitted by Govert Versluis +*.otap.co + +// i-registry s.r.o. 
: http://www.i-registry.cz/ +// Submitted by Martin Semrad +co.cz + +// CDN77.com : http://www.cdn77.com +// Submitted by Jan Krpes +c.cdn77.org +cdn77-ssl.net +r.cdn77.net +rsc.cdn77.org +ssl.origin.cdn77-secure.org + +// Cloud DNS Ltd : http://www.cloudns.net +// Submitted by Aleksander Hristov +cloudns.asia +cloudns.biz +cloudns.club +cloudns.cc +cloudns.eu +cloudns.in +cloudns.info +cloudns.org +cloudns.pro +cloudns.pw +cloudns.us + +// CNPY : https://cnpy.gdn +// Submitted by Angelo Gladding +cnpy.gdn + +// CoDNS B.V. +co.nl +co.no + +// Combell.com : https://www.combell.com +// Submitted by Thomas Wouters +webhosting.be +hosting-cluster.nl + +// Coordination Center for TLD RU and XN--P1AI : https://cctld.ru/en/domains/domens_ru/reserved/ +// Submitted by George Georgievsky +ac.ru +edu.ru +gov.ru +int.ru +mil.ru +test.ru + +// COSIMO GmbH : http://www.cosimo.de +// Submitted by Rene Marticke +dyn.cosidns.de +dynamisches-dns.de +dnsupdater.de +internet-dns.de +l-o-g-i-n.de +dynamic-dns.info +feste-ip.net +knx-server.net +static-access.net + +// Craynic, s.r.o. : http://www.craynic.com/ +// Submitted by Ales Krajnik +realm.cz + +// Cryptonomic : https://cryptonomic.net/ +// Submitted by Andrew Cady +*.cryptonomic.net + +// Cupcake : https://cupcake.io/ +// Submitted by Jonathan Rudenberg +cupcake.is + +// Curv UG : https://curv-labs.de/ +// Submitted by Marvin Wiesner +curv.dev + +// Customer OCI - Oracle Dyn https://cloud.oracle.com/home https://dyn.com/dns/ +// Submitted by Gregory Drake +// Note: This is intended to also include customer-oci.com due to wildcards implicitly including the current label +*.customer-oci.com +*.oci.customer-oci.com +*.ocp.customer-oci.com +*.ocs.customer-oci.com + +// cyon GmbH : https://www.cyon.ch/ +// Submitted by Dominic Luechinger +cyon.link +cyon.site + +// Danger Science Group: https://dangerscience.com/ +// Submitted by Skylar MacDonald +fnwk.site +folionetwork.site +platform0.app + +// Daplie, Inc : https://daplie.com +// Submitted by AJ ONeal +daplie.me +localhost.daplie.me + +// Datto, Inc. : https://www.datto.com/ +// Submitted by Philipp Heckel +dattolocal.com +dattorelay.com +dattoweb.com +mydatto.com +dattolocal.net +mydatto.net + +// Dansk.net : http://www.dansk.net/ +// Submitted by Anani Voule +biz.dk +co.dk +firm.dk +reg.dk +store.dk + +// dappnode.io : https://dappnode.io/ +// Submitted by Abel Boldu / DAppNode Team +dyndns.dappnode.io + +// dapps.earth : https://dapps.earth/ +// Submitted by Daniil Burdakov +*.dapps.earth +*.bzz.dapps.earth + +// Dark, Inc. : https://darklang.com +// Submitted by Paul Biggar +builtwithdark.com + +// Datawire, Inc : https://www.datawire.io +// Submitted by Richard Li +edgestack.me + +// Debian : https://www.debian.org/ +// Submitted by Peter Palfrader / Debian Sysadmin Team +debian.net + +// deSEC : https://desec.io/ +// Submitted by Peter Thomassen +dedyn.io + +// DNS Africa Ltd https://dns.business +// Submitted by Calvin Browne +jozi.biz + +// DNShome : https://www.dnshome.de/ +// Submitted by Norbert Auler +dnshome.de + +// DotArai : https://www.dotarai.com/ +// Submitted by Atsadawat Netcharadsang +online.th +shop.th + +// DrayTek Corp. : https://www.draytek.com/ +// Submitted by Paul Fang +drayddns.com + +// DreamHost : http://www.dreamhost.com/ +// Submitted by Andrew Farmer +dreamhosters.com + +// Drobo : http://www.drobo.com/ +// Submitted by Ricardo Padilha +mydrobo.com + +// Drud Holdings, LLC. 
: https://www.drud.com/ +// Submitted by Kevin Bridges +drud.io +drud.us + +// DuckDNS : http://www.duckdns.org/ +// Submitted by Richard Harper +duckdns.org + +// Bip : https://bip.sh +// Submitted by Joel Kennedy +bip.sh + +// bitbridge.net : Submitted by Craig Welch, abeliidev@gmail.com +bitbridge.net + +// dy.fi : http://dy.fi/ +// Submitted by Heikki Hannikainen +dy.fi +tunk.org + +// DynDNS.com : http://www.dyndns.com/services/dns/dyndns/ +dyndns-at-home.com +dyndns-at-work.com +dyndns-blog.com +dyndns-free.com +dyndns-home.com +dyndns-ip.com +dyndns-mail.com +dyndns-office.com +dyndns-pics.com +dyndns-remote.com +dyndns-server.com +dyndns-web.com +dyndns-wiki.com +dyndns-work.com +dyndns.biz +dyndns.info +dyndns.org +dyndns.tv +at-band-camp.net +ath.cx +barrel-of-knowledge.info +barrell-of-knowledge.info +better-than.tv +blogdns.com +blogdns.net +blogdns.org +blogsite.org +boldlygoingnowhere.org +broke-it.net +buyshouses.net +cechire.com +dnsalias.com +dnsalias.net +dnsalias.org +dnsdojo.com +dnsdojo.net +dnsdojo.org +does-it.net +doesntexist.com +doesntexist.org +dontexist.com +dontexist.net +dontexist.org +doomdns.com +doomdns.org +dvrdns.org +dyn-o-saur.com +dynalias.com +dynalias.net +dynalias.org +dynathome.net +dyndns.ws +endofinternet.net +endofinternet.org +endoftheinternet.org +est-a-la-maison.com +est-a-la-masion.com +est-le-patron.com +est-mon-blogueur.com +for-better.biz +for-more.biz +for-our.info +for-some.biz +for-the.biz +forgot.her.name +forgot.his.name +from-ak.com +from-al.com +from-ar.com +from-az.net +from-ca.com +from-co.net +from-ct.com +from-dc.com +from-de.com +from-fl.com +from-ga.com +from-hi.com +from-ia.com +from-id.com +from-il.com +from-in.com +from-ks.com +from-ky.com +from-la.net +from-ma.com +from-md.com +from-me.org +from-mi.com +from-mn.com +from-mo.com +from-ms.com +from-mt.com +from-nc.com +from-nd.com +from-ne.com +from-nh.com +from-nj.com +from-nm.com +from-nv.com +from-ny.net +from-oh.com +from-ok.com +from-or.com +from-pa.com +from-pr.com +from-ri.com +from-sc.com +from-sd.com +from-tn.com +from-tx.com +from-ut.com +from-va.com +from-vt.com +from-wa.com +from-wi.com +from-wv.com +from-wy.com +ftpaccess.cc +fuettertdasnetz.de +game-host.org +game-server.cc +getmyip.com +gets-it.net +go.dyndns.org +gotdns.com +gotdns.org +groks-the.info +groks-this.info +ham-radio-op.net +here-for-more.info +hobby-site.com +hobby-site.org +home.dyndns.org +homedns.org +homeftp.net +homeftp.org +homeip.net +homelinux.com +homelinux.net +homelinux.org +homeunix.com +homeunix.net +homeunix.org +iamallama.com +in-the-band.net +is-a-anarchist.com +is-a-blogger.com +is-a-bookkeeper.com +is-a-bruinsfan.org +is-a-bulls-fan.com +is-a-candidate.org +is-a-caterer.com +is-a-celticsfan.org +is-a-chef.com +is-a-chef.net +is-a-chef.org +is-a-conservative.com +is-a-cpa.com +is-a-cubicle-slave.com +is-a-democrat.com +is-a-designer.com +is-a-doctor.com +is-a-financialadvisor.com +is-a-geek.com +is-a-geek.net +is-a-geek.org +is-a-green.com +is-a-guru.com +is-a-hard-worker.com +is-a-hunter.com +is-a-knight.org +is-a-landscaper.com +is-a-lawyer.com +is-a-liberal.com +is-a-libertarian.com +is-a-linux-user.org +is-a-llama.com +is-a-musician.com +is-a-nascarfan.com +is-a-nurse.com +is-a-painter.com +is-a-patsfan.org +is-a-personaltrainer.com +is-a-photographer.com +is-a-player.com +is-a-republican.com +is-a-rockstar.com +is-a-socialist.com +is-a-soxfan.org +is-a-student.com +is-a-teacher.com +is-a-techie.com +is-a-therapist.com +is-an-accountant.com +is-an-actor.com 
+is-an-actress.com +is-an-anarchist.com +is-an-artist.com +is-an-engineer.com +is-an-entertainer.com +is-by.us +is-certified.com +is-found.org +is-gone.com +is-into-anime.com +is-into-cars.com +is-into-cartoons.com +is-into-games.com +is-leet.com +is-lost.org +is-not-certified.com +is-saved.org +is-slick.com +is-uberleet.com +is-very-bad.org +is-very-evil.org +is-very-good.org +is-very-nice.org +is-very-sweet.org +is-with-theband.com +isa-geek.com +isa-geek.net +isa-geek.org +isa-hockeynut.com +issmarterthanyou.com +isteingeek.de +istmein.de +kicks-ass.net +kicks-ass.org +knowsitall.info +land-4-sale.us +lebtimnetz.de +leitungsen.de +likes-pie.com +likescandy.com +merseine.nu +mine.nu +misconfused.org +mypets.ws +myphotos.cc +neat-url.com +office-on-the.net +on-the-web.tv +podzone.net +podzone.org +readmyblog.org +saves-the-whales.com +scrapper-site.net +scrapping.cc +selfip.biz +selfip.com +selfip.info +selfip.net +selfip.org +sells-for-less.com +sells-for-u.com +sells-it.net +sellsyourhome.org +servebbs.com +servebbs.net +servebbs.org +serveftp.net +serveftp.org +servegame.org +shacknet.nu +simple-url.com +space-to-rent.com +stuff-4-sale.org +stuff-4-sale.us +teaches-yoga.com +thruhere.net +traeumtgerade.de +webhop.biz +webhop.info +webhop.net +webhop.org +worse-than.tv +writesthisblog.com + +// ddnss.de : https://www.ddnss.de/ +// Submitted by Robert Niedziela +ddnss.de +dyn.ddnss.de +dyndns.ddnss.de +dyndns1.de +dyn-ip24.de +home-webserver.de +dyn.home-webserver.de +myhome-server.de +ddnss.org + +// Definima : http://www.definima.com/ +// Submitted by Maxence Bitterli +definima.net +definima.io + +// DigitalOcean : https://digitalocean.com/ +// Submitted by Braxton Huggins +ondigitalocean.app + +// dnstrace.pro : https://dnstrace.pro/ +// Submitted by Chris Partridge +bci.dnstrace.pro + +// Dynu.com : https://www.dynu.com/ +// Submitted by Sue Ye +ddnsfree.com +ddnsgeek.com +giize.com +gleeze.com +kozow.com +loseyourip.com +ooguy.com +theworkpc.com +casacam.net +dynu.net +accesscam.org +camdvr.org +freeddns.org +mywire.org +webredirect.org +myddns.rocks +blogsite.xyz + +// dynv6 : https://dynv6.com +// Submitted by Dominik Menke +dynv6.net + +// E4YOU spol. s.r.o. 
: https://e4you.cz/ +// Submitted by Vladimir Dudr +e4.cz + +// En root‽ : https://en-root.org +// Submitted by Emmanuel Raviart +en-root.fr + +// Enalean SAS: https://www.enalean.com +// Submitted by Thomas Cottier +mytuleap.com + +// ECG Robotics, Inc: https://ecgrobotics.org +// Submitted by +onred.one +staging.onred.one + +// One.com: https://www.one.com/ +// Submitted by Jacob Bunk Nielsen +service.one + +// Enonic : http://enonic.com/ +// Submitted by Erik Kaareng-Sunde +enonic.io +customer.enonic.io + +// EU.org https://eu.org/ +// Submitted by Pierre Beyssac +eu.org +al.eu.org +asso.eu.org +at.eu.org +au.eu.org +be.eu.org +bg.eu.org +ca.eu.org +cd.eu.org +ch.eu.org +cn.eu.org +cy.eu.org +cz.eu.org +de.eu.org +dk.eu.org +edu.eu.org +ee.eu.org +es.eu.org +fi.eu.org +fr.eu.org +gr.eu.org +hr.eu.org +hu.eu.org +ie.eu.org +il.eu.org +in.eu.org +int.eu.org +is.eu.org +it.eu.org +jp.eu.org +kr.eu.org +lt.eu.org +lu.eu.org +lv.eu.org +mc.eu.org +me.eu.org +mk.eu.org +mt.eu.org +my.eu.org +net.eu.org +ng.eu.org +nl.eu.org +no.eu.org +nz.eu.org +paris.eu.org +pl.eu.org +pt.eu.org +q-a.eu.org +ro.eu.org +ru.eu.org +se.eu.org +si.eu.org +sk.eu.org +tr.eu.org +uk.eu.org +us.eu.org + +// Evennode : http://www.evennode.com/ +// Submitted by Michal Kralik +eu-1.evennode.com +eu-2.evennode.com +eu-3.evennode.com +eu-4.evennode.com +us-1.evennode.com +us-2.evennode.com +us-3.evennode.com +us-4.evennode.com + +// eDirect Corp. : https://hosting.url.com.tw/ +// Submitted by C.S. chang +twmail.cc +twmail.net +twmail.org +mymailer.com.tw +url.tw + +// Fabrica Technologies, Inc. : https://www.fabrica.dev/ +// Submitted by Eric Jiang +onfabrica.com + +// Facebook, Inc. +// Submitted by Peter Ruibal +apps.fbsbx.com + +// FAITID : https://faitid.org/ +// Submitted by Maxim Alzoba +// https://www.flexireg.net/stat_info +ru.net +adygeya.ru +bashkiria.ru +bir.ru +cbg.ru +com.ru +dagestan.ru +grozny.ru +kalmykia.ru +kustanai.ru +marine.ru +mordovia.ru +msk.ru +mytis.ru +nalchik.ru +nov.ru +pyatigorsk.ru +spb.ru +vladikavkaz.ru +vladimir.ru +abkhazia.su +adygeya.su +aktyubinsk.su +arkhangelsk.su +armenia.su +ashgabad.su +azerbaijan.su +balashov.su +bashkiria.su +bryansk.su +bukhara.su +chimkent.su +dagestan.su +east-kazakhstan.su +exnet.su +georgia.su +grozny.su +ivanovo.su +jambyl.su +kalmykia.su +kaluga.su +karacol.su +karaganda.su +karelia.su +khakassia.su +krasnodar.su +kurgan.su +kustanai.su +lenug.su +mangyshlak.su +mordovia.su +msk.su +murmansk.su +nalchik.su +navoi.su +north-kazakhstan.su +nov.su +obninsk.su +penza.su +pokrovsk.su +sochi.su +spb.su +tashkent.su +termez.su +togliatti.su +troitsk.su +tselinograd.su +tula.su +tuva.su +vladikavkaz.su +vladimir.su +vologda.su + +// Fancy Bits, LLC : http://getchannels.com +// Submitted by Aman Gupta +channelsdvr.net +u.channelsdvr.net + +// Fastly Inc. : http://www.fastly.com/ +// Submitted by Fastly Security +fastly-terrarium.com +fastlylb.net +map.fastlylb.net +freetls.fastly.net +map.fastly.net +a.prod.fastly.net +global.prod.fastly.net +a.ssl.fastly.net +b.ssl.fastly.net +global.ssl.fastly.net + +// FASTVPS EESTI OU : https://fastvps.ru/ +// Submitted by Likhachev Vasiliy +fastvps-server.com +fastvps.host +myfast.host +fastvps.site +myfast.space + +// Featherhead : https://featherhead.xyz/ +// Submitted by Simon Menke +fhapp.xyz + +// Fedora : https://fedoraproject.org/ +// submitted by Patrick Uiterwijk +fedorainfracloud.org +fedorapeople.org +cloud.fedoraproject.org +app.os.fedoraproject.org +app.os.stg.fedoraproject.org + +// FearWorks Media Ltd. 
: https://fearworksmedia.co.uk +// submitted by Keith Fairley +conn.uk +copro.uk +couk.me +ukco.me + +// Fermax : https://fermax.com/ +// submitted by Koen Van Isterdael +mydobiss.com + +// FH Muenster : https://www.fh-muenster.de +// Submitted by Robin Naundorf +fh-muenster.io + +// Filegear Inc. : https://www.filegear.com +// Submitted by Jason Zhu +filegear.me +filegear-au.me +filegear-de.me +filegear-gb.me +filegear-ie.me +filegear-jp.me +filegear-sg.me + +// Firebase, Inc. +// Submitted by Chris Raynor +firebaseapp.com + +// fly.io: https://fly.io +// Submitted by Kurt Mackey +fly.dev +edgeapp.net +shw.io + +// Flynn : https://flynn.io +// Submitted by Jonathan Rudenberg +flynnhosting.net + +// Frederik Braun https://frederik-braun.com +// Submitted by Frederik Braun +0e.vc + +// Freebox : http://www.freebox.fr +// Submitted by Romain Fliedel +freebox-os.com +freeboxos.com +fbx-os.fr +fbxos.fr +freebox-os.fr +freeboxos.fr + +// freedesktop.org : https://www.freedesktop.org +// Submitted by Daniel Stone +freedesktop.org + +// FunkFeuer - Verein zur Förderung freier Netze : https://www.funkfeuer.at +// Submitted by Daniel A. Maierhofer +wien.funkfeuer.at + +// Futureweb OG : http://www.futureweb.at +// Submitted by Andreas Schnederle-Wagner +*.futurecms.at +*.ex.futurecms.at +*.in.futurecms.at +futurehosting.at +futuremailing.at +*.ex.ortsinfo.at +*.kunden.ortsinfo.at +*.statics.cloud + +// GDS : https://www.gov.uk/service-manual/operations/operating-servicegovuk-subdomains +// Submitted by David Illsley +service.gov.uk + +// Gehirn Inc. : https://www.gehirn.co.jp/ +// Submitted by Kohei YOSHIDA +gehirn.ne.jp +usercontent.jp + +// Gentlent, Inc. : https://www.gentlent.com +// Submitted by Tom Klein +gentapps.com +gentlentapis.com +lab.ms +cdn-edges.net + +// GitHub, Inc. +// Submitted by Patrick Toomey +github.io +githubusercontent.com + +// GitLab, Inc. +// Submitted by Alex Hanselka +gitlab.io + +// Gitplac.si - https://gitplac.si +// Submitted by Aljaž Starc +gitapp.si +gitpage.si + +// Glitch, Inc : https://glitch.com +// Submitted by Mads Hartmann +glitch.me + +// GMO Pepabo, Inc. : https://pepabo.com/ +// Submitted by dojineko +lolipop.io + +// GOV.UK Platform as a Service : https://www.cloud.service.gov.uk/ +// Submitted by Tom Whitwell +cloudapps.digital +london.cloudapps.digital + +// GOV.UK Pay : https://www.payments.service.gov.uk/ +// Submitted by Richard Baker +pymnt.uk + +// UKHomeOffice : https://www.gov.uk/government/organisations/home-office +// Submitted by Jon Shanks +homeoffice.gov.uk + +// GlobeHosting, Inc. +// Submitted by Zoltan Egresi +ro.im + +// GoIP DNS Services : http://www.goip.de +// Submitted by Christian Poulter +goip.de + +// Google, Inc. 
+// Submitted by Eduardo Vela +run.app +a.run.app +web.app +*.0emm.com +appspot.com +*.r.appspot.com +codespot.com +googleapis.com +googlecode.com +pagespeedmobilizer.com +publishproxy.com +withgoogle.com +withyoutube.com +*.gateway.dev +cloud.goog +translate.goog +cloudfunctions.net + +blogspot.ae +blogspot.al +blogspot.am +blogspot.ba +blogspot.be +blogspot.bg +blogspot.bj +blogspot.ca +blogspot.cf +blogspot.ch +blogspot.cl +blogspot.co.at +blogspot.co.id +blogspot.co.il +blogspot.co.ke +blogspot.co.nz +blogspot.co.uk +blogspot.co.za +blogspot.com +blogspot.com.ar +blogspot.com.au +blogspot.com.br +blogspot.com.by +blogspot.com.co +blogspot.com.cy +blogspot.com.ee +blogspot.com.eg +blogspot.com.es +blogspot.com.mt +blogspot.com.ng +blogspot.com.tr +blogspot.com.uy +blogspot.cv +blogspot.cz +blogspot.de +blogspot.dk +blogspot.fi +blogspot.fr +blogspot.gr +blogspot.hk +blogspot.hr +blogspot.hu +blogspot.ie +blogspot.in +blogspot.is +blogspot.it +blogspot.jp +blogspot.kr +blogspot.li +blogspot.lt +blogspot.lu +blogspot.md +blogspot.mk +blogspot.mr +blogspot.mx +blogspot.my +blogspot.nl +blogspot.no +blogspot.pe +blogspot.pt +blogspot.qa +blogspot.re +blogspot.ro +blogspot.rs +blogspot.ru +blogspot.se +blogspot.sg +blogspot.si +blogspot.sk +blogspot.sn +blogspot.td +blogspot.tw +blogspot.ug +blogspot.vn + +// Aaron Marais' Gitlab pages: https://lab.aaronleem.co.za +// Submitted by Aaron Marais +graphox.us + +// Group 53, LLC : https://www.group53.com +// Submitted by Tyler Todd +awsmppl.com + +// Hakaran group: http://hakaran.cz +// Submitted by Arseniy Sokolov +fin.ci +free.hr +caa.li +ua.rs +conf.se + +// Handshake : https://handshake.org +// Submitted by Mike Damm +hs.zone +hs.run + +// Hashbang : https://hashbang.sh +hashbang.sh + +// Hasura : https://hasura.io +// Submitted by Shahidh K Muhammed +hasura.app +hasura-app.io + +// Hepforge : https://www.hepforge.org +// Submitted by David Grellscheid +hepforge.org + +// Heroku : https://www.heroku.com/ +// Submitted by Tom Maher +herokuapp.com +herokussl.com + +// Hibernating Rhinos +// Submitted by Oren Eini +myravendb.com +ravendb.community +ravendb.me +development.run +ravendb.run + +// Hong Kong Productivity Council: https://www.hkpc.org/ +// Submitted by SECaaS Team +secaas.hk + +// HOSTBIP REGISTRY : https://www.hostbip.com/ +// Submitted by Atanunu Igbunuroghene +bpl.biz +orx.biz +ng.city +biz.gl +ng.ink +col.ng +firm.ng +gen.ng +ltd.ng +ngo.ng +ng.school +sch.so + +// HostyHosting (hostyhosting.com) +hostyhosting.io + +// Häkkinen.fi +// Submitted by Eero Häkkinen +häkkinen.fi + +// Ici la Lune : http://www.icilalune.com/ +// Submitted by Simon Morvan +*.moonscale.io +moonscale.net + +// iki.fi +// Submitted by Hannu Aronsson +iki.fi + +// Individual Network Berlin e.V. 
: https://www.in-berlin.de/ +// Submitted by Christian Seitz +dyn-berlin.de +in-berlin.de +in-brb.de +in-butter.de +in-dsl.de +in-dsl.net +in-dsl.org +in-vpn.de +in-vpn.net +in-vpn.org + +// info.at : http://www.info.at/ +biz.at +info.at + +// info.cx : http://info.cx +// Submitted by Jacob Slater +info.cx + +// Interlegis : http://www.interlegis.leg.br +// Submitted by Gabriel Ferreira +ac.leg.br +al.leg.br +am.leg.br +ap.leg.br +ba.leg.br +ce.leg.br +df.leg.br +es.leg.br +go.leg.br +ma.leg.br +mg.leg.br +ms.leg.br +mt.leg.br +pa.leg.br +pb.leg.br +pe.leg.br +pi.leg.br +pr.leg.br +rj.leg.br +rn.leg.br +ro.leg.br +rr.leg.br +rs.leg.br +sc.leg.br +se.leg.br +sp.leg.br +to.leg.br + +// intermetrics GmbH : https://pixolino.com/ +// Submitted by Wolfgang Schwarz +pixolino.com + +// Internet-Pro, LLP: https://netangels.ru/ +// Submitted by Vasiliy Sheredeko +na4u.ru + +// iopsys software solutions AB : https://iopsys.eu/ +// Submitted by Roman Azarenko +iopsys.se + +// IPiFony Systems, Inc. : https://www.ipifony.com/ +// Submitted by Matthew Hardeman +ipifony.net + +// IServ GmbH : https://iserv.eu +// Submitted by Kim-Alexander Brodowski +mein-iserv.de +schulserver.de +test-iserv.de +iserv.dev + +// I-O DATA DEVICE, INC. : http://www.iodata.com/ +// Submitted by Yuji Minagawa +iobb.net + +// Jelastic, Inc. : https://jelastic.com/ +// Submitted by Ihor Kolodyuk +mel.cloudlets.com.au +cloud.interhostsolutions.be +users.scale.virtualcloud.com.br +mycloud.by +alp1.ae.flow.ch +appengine.flow.ch +es-1.axarnet.cloud +diadem.cloud +vip.jelastic.cloud +jele.cloud +it1.eur.aruba.jenv-aruba.cloud +it1.jenv-aruba.cloud +it1-eur.jenv-arubabiz.cloud +oxa.cloud +tn.oxa.cloud +uk.oxa.cloud +primetel.cloud +uk.primetel.cloud +ca.reclaim.cloud +uk.reclaim.cloud +us.reclaim.cloud +ch.trendhosting.cloud +de.trendhosting.cloud +jele.club +clicketcloud.com +ams.cloudswitches.com +au.cloudswitches.com +sg.cloudswitches.com +dopaas.com +elastyco.com +nv.elastyco.com +hidora.com +paas.hosted-by-previder.com +rag-cloud.hosteur.com +rag-cloud-ch.hosteur.com +jcloud.ik-server.com +jcloud-ver-jpc.ik-server.com +demo.jelastic.com +kilatiron.com +paas.massivegrid.com +jed.wafaicloud.com +lon.wafaicloud.com +ryd.wafaicloud.com +j.scaleforce.com.cy +jelastic.dogado.eu +paas.leviracloud.eu +fi.cloudplatform.fi +demo.datacenter.fi +paas.datacenter.fi +jele.host +mircloud.host +jele.io +ocs.opusinteractive.io +cloud.unispace.io +cloud-de.unispace.io +cloud-fr1.unispace.io +jc.neen.it +cloud.jelastic.open.tim.it +jcloud.kz +upaas.kazteleport.kz +jl.serv.net.mx +cloudjiffy.net +fra1-de.cloudjiffy.net +west1-us.cloudjiffy.net +ams1.jls.docktera.net +jls-sto1.elastx.net +jls-sto2.elastx.net +jls-sto3.elastx.net +fr-1.paas.massivegrid.net +lon-1.paas.massivegrid.net +lon-2.paas.massivegrid.net +ny-1.paas.massivegrid.net +ny-2.paas.massivegrid.net +sg-1.paas.massivegrid.net +jelastic.saveincloud.net +nordeste-idc.saveincloud.net +j.scaleforce.net +jelastic.tsukaeru.net +atl.jelastic.vps-host.net +njs.jelastic.vps-host.net +unicloud.pl +mircloud.ru +jelastic.regruhosting.ru +enscaled.sg +jele.site +jelastic.team +orangecloud.tn +j.layershift.co.uk +phx.enscaled.us +mircloud.us + +// Jino : https://www.jino.ru +// Submitted by Sergey Ulyashin +myjino.ru +*.hosting.myjino.ru +*.landing.myjino.ru +*.spectrum.myjino.ru +*.vps.myjino.ru + +// Joyent : https://www.joyent.com/ +// Submitted by Brian Bennett +*.triton.zone +*.cns.joyent.com + +// JS.ORG : http://dns.js.org +// Submitted by Stefan Keim +js.org + +// KaasHosting : 
http://www.kaashosting.nl/ +// Submitted by Wouter Bakker +kaas.gg +khplay.nl + +// Keyweb AG : https://www.keyweb.de +// Submitted by Martin Dannehl +keymachine.de + +// KingHost : https://king.host +// Submitted by Felipe Keller Braz +kinghost.net +uni5.net + +// KnightPoint Systems, LLC : http://www.knightpoint.com/ +// Submitted by Roy Keene +knightpoint.systems + +// KUROKU LTD : https://kuroku.ltd/ +// Submitted by DisposaBoy +oya.to + +// .KRD : http://nic.krd/data/krd/Registration%20Policy.pdf +co.krd +edu.krd + +// LCube - Professional hosting e.K. : https://www.lcube-webhosting.de +// Submitted by Lars Laehn +git-repos.de +lcube-server.de +svn-repos.de + +// Leadpages : https://www.leadpages.net +// Submitted by Greg Dallavalle +leadpages.co +lpages.co +lpusercontent.com + +// Lelux.fi : https://lelux.fi/ +// Submitted by Lelux Admin +lelux.site + +// Lifetime Hosting : https://Lifetime.Hosting/ +// Submitted by Mike Fillator +co.business +co.education +co.events +co.financial +co.network +co.place +co.technology + +// Lightmaker Property Manager, Inc. : https://app.lmpm.com/ +// Submitted by Greg Holland +app.lmpm.com + +// linkyard ltd: https://www.linkyard.ch/ +// Submitted by Mario Siegenthaler +linkyard.cloud +linkyard-cloud.ch + +// Linode : https://linode.com +// Submitted by +members.linode.com +*.nodebalancer.linode.com +*.linodeobjects.com + +// LiquidNet Ltd : http://www.liquidnetlimited.com/ +// Submitted by Victor Velchev +we.bs + +// localzone.xyz +// Submitted by Kenny Niehage +localzone.xyz + +// Log'in Line : https://www.loginline.com/ +// Submitted by Rémi Mach +loginline.app +loginline.dev +loginline.io +loginline.services +loginline.site + +// LubMAN UMCS Sp. z o.o : https://lubman.pl/ +// Submitted by Ireneusz Maliszewski +krasnik.pl +leczna.pl +lubartow.pl +lublin.pl +poniatowa.pl +swidnik.pl + +// Lug.org.uk : https://lug.org.uk +// Submitted by Jon Spriggs +glug.org.uk +lug.org.uk +lugs.org.uk + +// Lukanet Ltd : https://lukanet.com +// Submitted by Anton Avramov +barsy.bg +barsy.co.uk +barsyonline.co.uk +barsycenter.com +barsyonline.com +barsy.club +barsy.de +barsy.eu +barsy.in +barsy.info +barsy.io +barsy.me +barsy.menu +barsy.mobi +barsy.net +barsy.online +barsy.org +barsy.pro +barsy.pub +barsy.shop +barsy.site +barsy.support +barsy.uk + +// Magento Commerce +// Submitted by Damien Tournoud +*.magentosite.cloud + +// May First - People Link : https://mayfirst.org/ +// Submitted by Jamie McClelland +mayfirst.info +mayfirst.org + +// Mail.Ru Group : https://hb.cldmail.ru +// Submitted by Ilya Zaretskiy +hb.cldmail.ru + +// mcpe.me : https://mcpe.me +// Submitted by Noa Heyl +mcpe.me + +// McHost : https://mchost.ru +// Submitted by Evgeniy Subbotin +mcdir.ru +vps.mcdir.ru + +// Memset hosting : https://www.memset.com +// Submitted by Tom Whitwell +miniserver.com +memset.net + +// MetaCentrum, CESNET z.s.p.o. : https://www.metacentrum.cz/en/ +// Submitted by Zdeněk Šustr +*.cloud.metacentrum.cz +custom.metacentrum.cz + +// MetaCentrum, CESNET z.s.p.o. 
: https://www.metacentrum.cz/en/ +// Submitted by Radim Janča +flt.cloud.muni.cz +usr.cloud.muni.cz + +// Meteor Development Group : https://www.meteor.com/hosting +// Submitted by Pierre Carrier +meteorapp.com +eu.meteorapp.com + +// Michau Enterprises Limited : http://www.co.pl/ +co.pl + +// Microsoft Corporation : http://microsoft.com +// Submitted by Mitch Webster +*.azurecontainer.io +azurewebsites.net +azure-mobile.net +cloudapp.net +azurestaticapps.net +centralus.azurestaticapps.net +eastasia.azurestaticapps.net +eastus2.azurestaticapps.net +westeurope.azurestaticapps.net +westus2.azurestaticapps.net + +// minion.systems : http://minion.systems +// Submitted by Robert Böttinger +csx.cc + +// MobileEducation, LLC : https://joinforte.com +// Submitted by Grayson Martin +forte.id + +// Mozilla Corporation : https://mozilla.com +// Submitted by Ben Francis +mozilla-iot.org + +// Mozilla Foundation : https://mozilla.org/ +// Submitted by glob +bmoattachments.org + +// MSK-IX : https://www.msk-ix.ru/ +// Submitted by Khannanov Roman +net.ru +org.ru +pp.ru + +// Mythic Beasts : https://www.mythic-beasts.com +// Submitted by Paul Cammish +hostedpi.com +customer.mythic-beasts.com +lynx.mythic-beasts.com +ocelot.mythic-beasts.com +onza.mythic-beasts.com +sphinx.mythic-beasts.com +vs.mythic-beasts.com +x.mythic-beasts.com +yali.mythic-beasts.com +cust.retrosnub.co.uk + +// Nabu Casa : https://www.nabucasa.com +// Submitted by Paulus Schoutsen +ui.nabu.casa + +// Names.of.London : https://names.of.london/ +// Submitted by James Stevens +pony.club +of.fashion +in.london +of.london +from.marketing +with.marketing +for.men +repair.men +and.mom +for.mom +for.one +under.one +for.sale +that.win +from.work +to.work + +// NCTU.ME : https://nctu.me/ +// Submitted by Tocknicsu +nctu.me + +// Netlify : https://www.netlify.com +// Submitted by Jessica Parsons +netlify.app + +// Neustar Inc. +// Submitted by Trung Tran +4u.com + +// ngrok : https://ngrok.com/ +// Submitted by Alan Shreve +ngrok.io + +// Nimbus Hosting Ltd. : https://www.nimbushosting.co.uk/ +// Submitted by Nicholas Ford +nh-serv.co.uk + +// NFSN, Inc. 
: https://www.NearlyFreeSpeech.NET/ +// Submitted by Jeff Wheelhouse +nfshost.com + +// Now-DNS : https://now-dns.com +// Submitted by Steve Russell +dnsking.ch +mypi.co +n4t.co +001www.com +ddnslive.com +myiphost.com +forumz.info +16-b.it +32-b.it +64-b.it +soundcast.me +tcp4.me +dnsup.net +hicam.net +now-dns.net +ownip.net +vpndns.net +dynserv.org +now-dns.org +x443.pw +now-dns.top +ntdll.top +freeddns.us +crafting.xyz +zapto.xyz + +// nsupdate.info : https://www.nsupdate.info/ +// Submitted by Thomas Waldmann +nsupdate.info +nerdpol.ovh + +// No-IP.com : https://noip.com/ +// Submitted by Deven Reza +blogsyte.com +brasilia.me +cable-modem.org +ciscofreak.com +collegefan.org +couchpotatofries.org +damnserver.com +ddns.me +ditchyourip.com +dnsfor.me +dnsiskinky.com +dvrcam.info +dynns.com +eating-organic.net +fantasyleague.cc +geekgalaxy.com +golffan.us +health-carereform.com +homesecuritymac.com +homesecuritypc.com +hopto.me +ilovecollege.info +loginto.me +mlbfan.org +mmafan.biz +myactivedirectory.com +mydissent.net +myeffect.net +mymediapc.net +mypsx.net +mysecuritycamera.com +mysecuritycamera.net +mysecuritycamera.org +net-freaks.com +nflfan.org +nhlfan.net +no-ip.ca +no-ip.co.uk +no-ip.net +noip.us +onthewifi.com +pgafan.net +point2this.com +pointto.us +privatizehealthinsurance.net +quicksytes.com +read-books.org +securitytactics.com +serveexchange.com +servehumour.com +servep2p.com +servesarcasm.com +stufftoread.com +ufcfan.org +unusualperson.com +workisboring.com +3utilities.com +bounceme.net +ddns.net +ddnsking.com +gotdns.ch +hopto.org +myftp.biz +myftp.org +myvnc.com +no-ip.biz +no-ip.info +no-ip.org +noip.me +redirectme.net +servebeer.com +serveblog.net +servecounterstrike.com +serveftp.com +servegame.com +servehalflife.com +servehttp.com +serveirc.com +serveminecraft.net +servemp3.com +servepics.com +servequake.com +sytes.net +webhop.me +zapto.org + +// NodeArt : https://nodeart.io +// Submitted by Konstantin Nosov +stage.nodeart.io + +// Nodum B.V. : https://nodum.io/ +// Submitted by Wietse Wind +nodum.co +nodum.io + +// Nucleos Inc. : https://nucleos.com +// Submitted by Piotr Zduniak +pcloud.host + +// NYC.mn : http://www.information.nyc.mn +// Submitted by Matthew Brown +nyc.mn + +// NymNom : https://nymnom.com/ +// Submitted by NymNom +nom.ae +nom.af +nom.ai +nom.al +nym.by +nom.bz +nym.bz +nom.cl +nym.ec +nom.gd +nom.ge +nom.gl +nym.gr +nom.gt +nym.gy +nym.hk +nom.hn +nym.ie +nom.im +nom.ke +nym.kz +nym.la +nym.lc +nom.li +nym.li +nym.lt +nym.lu +nom.lv +nym.me +nom.mk +nym.mn +nym.mx +nom.nu +nym.nz +nym.pe +nym.pt +nom.pw +nom.qa +nym.ro +nom.rs +nom.si +nym.sk +nom.st +nym.su +nym.sx +nom.tj +nym.tw +nom.ug +nom.uy +nom.vc +nom.vg + +// Observable, Inc. : https://observablehq.com +// Submitted by Mike Bostock +static.observableusercontent.com + +// Octopodal Solutions, LLC. : https://ulterius.io/ +// Submitted by Andrew Sampson +cya.gg + +// OMG.LOL : +// Submitted by Adam Newbold +omg.lol + +// Omnibond Systems, LLC. : https://www.omnibond.com +// Submitted by Cole Estep +cloudycluster.net + +// OmniWe Limited: https://omniwe.com +// Submitted by Vicary Archangel +omniwe.site + +// One Fold Media : http://www.onefoldmedia.com/ +// Submitted by Eddie Jones +nid.io + +// Open Social : https://www.getopensocial.com/ +// Submitted by Alexander Varwijk +opensocial.site + +// OpenCraft GmbH : http://opencraft.com/ +// Submitted by Sven Marnach +opencraft.hosting + +// Opera Software, A.S.A. 
+// Submitted by Yngve Pettersen +operaunite.com + +// Oursky Limited : https://skygear.io/ +// Submitted by Skygear Developer +skygearapp.com + +// OutSystems +// Submitted by Duarte Santos +outsystemscloud.com + +// OwnProvider GmbH: http://www.ownprovider.com +// Submitted by Jan Moennich +ownprovider.com +own.pm + +// OwO : https://whats-th.is/ +// Submitted by Dean Sheather +*.owo.codes + +// OX : http://www.ox.rs +// Submitted by Adam Grand +ox.rs + +// oy.lc +// Submitted by Charly Coste +oy.lc + +// Pagefog : https://pagefog.com/ +// Submitted by Derek Myers +pgfog.com + +// Pagefront : https://www.pagefronthq.com/ +// Submitted by Jason Kriss +pagefrontapp.com + +// PageXL : https://pagexl.com +// Submitted by Yann Guichard +pagexl.com + +// pcarrier.ca Software Inc: https://pcarrier.ca/ +// Submitted by Pierre Carrier +bar0.net +bar1.net +bar2.net +rdv.to + +// .pl domains (grandfathered) +art.pl +gliwice.pl +krakow.pl +poznan.pl +wroc.pl +zakopane.pl + +// Pantheon Systems, Inc. : https://pantheon.io/ +// Submitted by Gary Dylina +pantheonsite.io +gotpantheon.com + +// Peplink | Pepwave : http://peplink.com/ +// Submitted by Steve Leung +mypep.link + +// Perspecta : https://perspecta.com/ +// Submitted by Kenneth Van Alstyne +perspecta.cloud + +// PE Ulyanov Kirill Sergeevich : https://airy.host +// Submitted by Kirill Ulyanov +lk3.ru +ra-ru.ru +zsew.ru + +// Planet-Work : https://www.planet-work.com/ +// Submitted by Frédéric VANNIÈRE +on-web.fr + +// Platform.sh : https://platform.sh +// Submitted by Nikola Kotur +bc.platform.sh +ent.platform.sh +eu.platform.sh +us.platform.sh +*.platformsh.site + +// Platter: https://platter.dev +// Submitted by Patrick Flor +platter-app.com +platter-app.dev +platterp.us + +// Plesk : https://www.plesk.com/ +// Submitted by Anton Akhtyamov +pdns.page +plesk.page +pleskns.com + +// Port53 : https://port53.io/ +// Submitted by Maximilian Schieder +dyn53.io + +// Positive Codes Technology Company : http://co.bn/faq.html +// Submitted by Zulfais +co.bn + +// prgmr.com : https://prgmr.com/ +// Submitted by Sarah Newman +xen.prgmr.com + +// priv.at : http://www.nic.priv.at/ +// Submitted by registry +priv.at + +// privacytools.io : https://www.privacytools.io/ +// Submitted by Jonah Aragon +prvcy.page + +// Protocol Labs : https://protocol.ai/ +// Submitted by Michael Burns +*.dweb.link + +// Protonet GmbH : http://protonet.io +// Submitted by Martin Meier +protonet.io + +// Publication Presse Communication SARL : https://ppcom.fr +// Submitted by Yaacov Akiba Slama +chirurgiens-dentistes-en-france.fr +byen.site + +// pubtls.org: https://www.pubtls.org +// Submitted by Kor Nielsen +pubtls.org + +// QOTO, Org.
+// Submitted by Jeffrey Phillips Freeman +qoto.io + +// Qualifio : https://qualifio.com/ +// Submitted by Xavier De Cock +qualifioapp.com + +// QuickBackend: https://www.quickbackend.com +// Submitted by Dani Biro +qbuser.com + +// Redstar Consultants : https://www.redstarconsultants.com/ +// Submitted by Jons Slemmer +instantcloud.cn + +// Russian Academy of Sciences +// Submitted by Tech Support +ras.ru + +// QA2 +// Submitted by Daniel Dent (https://www.danieldent.com/) +qa2.com + +// QCX +// Submitted by Cassandra Beelen +qcx.io +*.sys.qcx.io + +// QNAP System Inc : https://www.qnap.com +// Submitted by Nick Chang +dev-myqnapcloud.com +alpha-myqnapcloud.com +myqnapcloud.com + +// Quip : https://quip.com +// Submitted by Patrick Linehan +*.quipelements.com + +// Qutheory LLC : http://qutheory.io +// Submitted by Jonas Schwartz +vapor.cloud +vaporcloud.io + +// Rackmaze LLC : https://www.rackmaze.com +// Submitted by Kirill Pertsev +rackmaze.com +rackmaze.net + +// Rakuten Games, Inc : https://dev.viberplay.io +// Submitted by Joshua Zhang +g.vbrplsbx.io + +// Rancher Labs, Inc : https://rancher.com +// Submitted by Vincent Fiduccia +*.on-k3s.io +*.on-rancher.cloud +*.on-rio.io + +// Read The Docs, Inc : https://www.readthedocs.org +// Submitted by David Fischer +readthedocs.io + +// Red Hat, Inc. OpenShift : https://openshift.redhat.com/ +// Submitted by Tim Kramer +rhcloud.com + +// Render : https://render.com +// Submitted by Anurag Goel +app.render.com +onrender.com + +// Repl.it : https://repl.it +// Submitted by Mason Clayton +repl.co +repl.run + +// Resin.io : https://resin.io +// Submitted by Tim Perry +resindevice.io +devices.resinstaging.io + +// RethinkDB : https://www.rethinkdb.com/ +// Submitted by Chris Kastorff +hzc.io + +// Revitalised Limited : http://www.revitalised.co.uk +// Submitted by Jack Price +wellbeingzone.eu +wellbeingzone.co.uk + +// Rochester Institute of Technology : http://www.rit.edu/ +// Submitted by Jennifer Herting +git-pages.rit.edu + +// Sandstorm Development Group, Inc. 
: https://sandcats.io/ +// Submitted by Asheesh Laroia +sandcats.io + +// SBE network solutions GmbH : https://www.sbe.de/ +// Submitted by Norman Meilick +logoip.de +logoip.com + +// schokokeks.org GbR : https://schokokeks.org/ +// Submitted by Hanno Böck +schokokeks.net + +// Scottish Government: https://www.gov.scot +// Submitted by Martin Ellis +gov.scot + +// Scry Security : http://www.scrysec.com +// Submitted by Shante Adam +scrysec.com + +// Securepoint GmbH : https://www.securepoint.de +// Submitted by Erik Anders +firewall-gateway.com +firewall-gateway.de +my-gateway.de +my-router.de +spdns.de +spdns.eu +firewall-gateway.net +my-firewall.org +myfirewall.org +spdns.org + +// Seidat : https://www.seidat.com +// Submitted by Artem Kondratev +seidat.net + +// Senseering GmbH : https://www.senseering.de +// Submitted by Felix Mönckemeyer +senseering.net + +// Service Online LLC : http://drs.ua/ +// Submitted by Serhii Bulakh +biz.ua +co.ua +pp.ua + +// ShiftEdit : https://shiftedit.net/ +// Submitted by Adam Jimenez +shiftedit.io + +// Shopblocks : http://www.shopblocks.com/ +// Submitted by Alex Bowers +myshopblocks.com + +// Shopit : https://www.shopitcommerce.com/ +// Submitted by Craig McMahon +shopitsite.com + +// shopware AG : https://shopware.com +// Submitted by Jens Küper +shopware.store + +// Siemens Mobility GmbH +// Submitted by Oliver Graebner +mo-siemens.io + +// SinaAppEngine : http://sae.sina.com.cn/ +// Submitted by SinaAppEngine +1kapp.com +appchizi.com +applinzi.com +sinaapp.com +vipsinaapp.com + +// Siteleaf : https://www.siteleaf.com/ +// Submitted by Skylar Challand +siteleaf.net + +// Skyhat : http://www.skyhat.io +// Submitted by Shante Adam +bounty-full.com +alpha.bounty-full.com +beta.bounty-full.com + +// Small Technology Foundation : https://small-tech.org +// Submitted by Aral Balkan +small-web.org + +// Stackhero : https://www.stackhero.io +// Submitted by Adrien Gillon +stackhero-network.com + +// staticland : https://static.land +// Submitted by Seth Vincent +static.land +dev.static.land +sites.static.land + +// Sony Interactive Entertainment LLC : https://sie.com/ +// Submitted by David Coles +playstation-cloud.com + +// SourceLair PC : https://www.sourcelair.com +// Submitted by Antonis Kalipetis +apps.lair.io +*.stolos.io + +// SpaceKit : https://www.spacekit.io/ +// Submitted by Reza Akhavan +spacekit.io + +// SpeedPartner GmbH: https://www.speedpartner.de/ +// Submitted by Stefan Neufeind +customer.speedpartner.de + +// Standard Library : https://stdlib.com +// Submitted by Jacob Lee +api.stdlib.com + +// Storj Labs Inc. : https://storj.io/ +// Submitted by Philip Hutchins +storj.farm + +// Studenten Net Twente : http://www.snt.utwente.nl/ +// Submitted by Silke Hofstra +utwente.io + +// Student-Run Computing Facility : https://www.srcf.net/ +// Submitted by Edwin Balani +soc.srcf.net +user.srcf.net + +// Sub 6 Limited: http://www.sub6.com +// Submitted by Dan Miller +temp-dns.com + +// Swisscom Application Cloud: https://developer.swisscom.com +// Submitted by Matthias.Winzeler +applicationcloud.io +scapp.io + +// Symfony, SAS : https://symfony.com/ +// Submitted by Fabien Potencier +*.s5y.io +*.sensiosite.cloud + +// Syncloud : https://syncloud.org +// Submitted by Boris Rybalkin +syncloud.it + +// Synology, Inc. 
: https://www.synology.com/ +// Submitted by Rony Weng +diskstation.me +dscloud.biz +dscloud.me +dscloud.mobi +dsmynas.com +dsmynas.net +dsmynas.org +familyds.com +familyds.net +familyds.org +i234.me +myds.me +synology.me +vpnplus.to +direct.quickconnect.to + +// TAIFUN Software AG : http://taifun-software.de +// Submitted by Bjoern Henke +taifun-dns.de + +// TASK geographical domains (www.task.gda.pl/uslugi/dns) +gda.pl +gdansk.pl +gdynia.pl +med.pl +sopot.pl + +// Teckids e.V. : https://www.teckids.org +// Submitted by Dominik George +edugit.org + +// Telebit : https://telebit.cloud +// Submitted by AJ ONeal +telebit.app +telebit.io +*.telebit.xyz + +// The Gwiddle Foundation : https://gwiddlefoundation.org.uk +// Submitted by Joshua Bayfield +gwiddle.co.uk + +// Thingdust AG : https://thingdust.com/ +// Submitted by Adrian Imboden +thingdustdata.com +cust.dev.thingdust.io +cust.disrec.thingdust.io +cust.prod.thingdust.io +cust.testing.thingdust.io +*.firenet.ch +*.svc.firenet.ch + +// Tlon.io : https://tlon.io +// Submitted by Mark Staarink +arvo.network +azimuth.network +tlon.network + +// TownNews.com : http://www.townnews.com +// Submitted by Dustin Ward +bloxcms.com +townnews-staging.com + +// TrafficPlex GmbH : https://www.trafficplex.de/ +// Submitted by Phillipp Röll +12hp.at +2ix.at +4lima.at +lima-city.at +12hp.ch +2ix.ch +4lima.ch +lima-city.ch +trafficplex.cloud +de.cool +12hp.de +2ix.de +4lima.de +lima-city.de +1337.pictures +clan.rip +lima-city.rocks +webspace.rocks +lima.zone + +// TransIP : https://www.transip.nl +// Submitted by Rory Breuk +*.transurl.be +*.transurl.eu +*.transurl.nl + +// TuxFamily : http://tuxfamily.org +// Submitted by TuxFamily administrators +tuxfamily.org + +// TwoDNS : https://www.twodns.de/ +// Submitted by TwoDNS-Support +dd-dns.de +diskstation.eu +diskstation.org +dray-dns.de +draydns.de +dyn-vpn.de +dynvpn.de +mein-vigor.de +my-vigor.de +my-wan.de +syno-ds.de +synology-diskstation.de +synology-ds.de + +// Uberspace : https://uberspace.de +// Submitted by Moritz Werner +uber.space +*.uberspace.de + +// UDR Limited : http://www.udr.hk.com +// Submitted by registry +hk.com +hk.org +ltd.hk +inc.hk + +// United Gameserver GmbH : https://united-gameserver.de +// Submitted by Stefan Schwarz +virtualuser.de +virtual-user.de + +// urown.net : https://urown.net +// Submitted by Hostmaster +urown.cloud +dnsupdate.info + +// .US +// Submitted by Ed Moore +lib.de.us + +// VeryPositive SIA : http://very.lv +// Submitted by Danko Aleksejevs +2038.io + +// Vercel, Inc : https://vercel.com/ +// Submitted by Connor Davis +vercel.app +vercel.dev +now.sh + +// Viprinet Europe GmbH : http://www.viprinet.com +// Submitted by Simon Kissel +router.management + +// Virtual-Info : https://www.virtual-info.info/ +// Submitted by Adnan RIHAN +v-info.info + +// Voorloper.com: https://voorloper.com +// Submitted by Nathan van Bakel +voorloper.cloud + +// Voxel.sh DNS : https://voxel.sh/dns/ +// Submitted by Mia Rehlinger +neko.am +nyaa.am +be.ax +cat.ax +es.ax +eu.ax +gg.ax +mc.ax +us.ax +xy.ax +nl.ci +xx.gl +app.gp +blog.gt +de.gt +to.gt +be.gy +cc.hn +blog.kg +io.kg +jp.kg +tv.kg +uk.kg +us.kg +de.ls +at.md +de.md +jp.md +to.md +uwu.nu +indie.porn +vxl.sh +ch.tc +me.tc +we.tc +nyan.to +at.vg +blog.vu +dev.vu +me.vu + +// V.UA Domain Administrator : https://domain.v.ua/ +// Submitted by Serhii Rostilo +v.ua + +// Waffle Computer Inc., Ltd. 
: https://docs.waffleinfo.com +// Submitted by Masayuki Note +wafflecell.com + +// WapBlog.ID : https://www.wapblog.id +// Submitted by Fajar Sodik +idnblogger.com +indowapblog.com +bloger.id +wblog.id +wbq.me +fastblog.net + +// WebHare bv: https://www.webhare.com/ +// Submitted by Arnold Hendriks +*.webhare.dev + +// WeDeploy by Liferay, Inc. : https://www.wedeploy.com +// Submitted by Henrique Vicente +wedeploy.io +wedeploy.me +wedeploy.sh + +// Western Digital Technologies, Inc : https://www.wdc.com +// Submitted by Jung Jin +remotewd.com + +// WIARD Enterprises : https://wiardweb.com +// Submitted by Kidd Hustle +pages.wiardweb.com + +// Wikimedia Labs : https://wikitech.wikimedia.org +// Submitted by Arturo Borrero Gonzalez +wmflabs.org +toolforge.org +wmcloud.org + +// WISP : https://wisp.gg +// Submitted by Stepan Fedotov +panel.gg +daemon.panel.gg + +// WoltLab GmbH : https://www.woltlab.com +// Submitted by Tim Düsterhus +myforum.community +community-pro.de +diskussionsbereich.de +community-pro.net +meinforum.net + +// www.com.vc : http://www.com.vc +// Submitted by Li Hui +cn.vu + +// XenonCloud GbR: https://xenoncloud.net +// Submitted by Julian Uphoff +half.host + +// XnBay Technology : http://www.xnbay.com/ +// Submitted by XnBay Developer +xnbay.com +u2.xnbay.com +u2-local.xnbay.com + +// XS4ALL Internet bv : https://www.xs4all.nl/ +// Submitted by Daniel Mostertman +cistron.nl +demon.nl +xs4all.space + +// Yandex.Cloud LLC: https://cloud.yandex.com +// Submitted by Alexander Lodin +yandexcloud.net +storage.yandexcloud.net +website.yandexcloud.net + +// YesCourse Pty Ltd : https://yescourse.com +// Submitted by Atul Bhouraskar +official.academy + +// Yola : https://www.yola.com/ +// Submitted by Stefano Rivera +yolasite.com + +// Yombo : https://yombo.net +// Submitted by Mitch Schwenk +ybo.faith +yombo.me +homelink.one +ybo.party +ybo.review +ybo.science +ybo.trade + +// Yunohost : https://yunohost.org +// Submitted by Valentin Grimaud +nohost.me +noho.st + +// ZaNiC : http://www.za.net/ +// Submitted by registry +za.net +za.org + +// Zine EOOD : https://zine.bg/ +// Submitted by Martin Angelov +bss.design + +// Zitcom A/S : https://www.zitcom.dk +// Submitted by Emil Stahl +basicserver.io +virtualserver.io +enterprisecloud.nu + +// Mintere : https://mintere.com/ +// Submitted by Ben Aubin +mintere.site + +// Cityhost LLC : https://cityhost.ua +// Submitted by Maksym Rivtin +cx.ua + +// WP Engine : https://wpengine.com/ +// Submitted by Michael Smith +// Submitted by Brandon DuRette +wpenginepowered.com +js.wpenginepowered.com + +// Impertrix Solutions : +// Submitted by Zhixiang Zhao +impertrixcdn.com +impertrix.com + +// GignoSystemJapan: http://gsj.bz +// Submitted by GignoSystemJapan +gsj.bz +// ===END PRIVATE DOMAINS=== diff --git a/docs/en/development/architecture.md b/docs/en/development/architecture.md index 19caa5241b0..4ef01f4e4fb 100644 --- a/docs/en/development/architecture.md +++ b/docs/en/development/architecture.md @@ -177,8 +177,6 @@ When you `INSERT` a bunch of data into `MergeTree`, that bunch is sorted by prim `MergeTree` is not an LSM tree because it doesn’t contain “memtable” and “log”: inserted data is written directly to the filesystem. This makes it suitable only to INSERT data in batches, not by individual row and not very frequently – about once per second is ok, but a thousand times a second is not. We did it this way for simplicity’s sake, and because we are already inserting data in batches in our applications. 
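+A minimal sketch of this batch-insert pattern (the table, columns, and values here are hypothetical, not taken from the docs): each `INSERT` creates a new part on the filesystem, so rows should be grouped into large batches on the application side.
+
+``` sql
+-- Hypothetical MergeTree table; any MergeTree-family engine behaves the same way.
+CREATE TABLE test.events
+(
+    EventDate Date,
+    UserID UInt64,
+    Value Float64
+)
+ENGINE = MergeTree()
+PARTITION BY toYYYYMM(EventDate)
+ORDER BY (EventDate, UserID);
+
+-- Good: one INSERT carrying many rows, issued about once per second.
+INSERT INTO test.events VALUES
+    ('2020-12-13', 1, 0.5),
+    ('2020-12-13', 2, 1.5),
+    ('2020-12-13', 3, 2.5);
+
+-- Bad: a separate single-row INSERT per event, a thousand times a second,
+-- would create a new part per statement and overwhelm background merges.
+```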
-> MergeTree tables can only have one (primary) index: there aren’t any secondary indices. It would be nice to allow multiple physical representations under one logical table, for example, to store data in more than one physical order or even to allow representations with pre-aggregated data along with original data. - There are MergeTree engines that are doing additional work during background merges. Examples are `CollapsingMergeTree` and `AggregatingMergeTree`. This could be treated as special support for updates. Keep in mind that these are not real updates because users usually have no control over the time when background merges are executed, and data in a `MergeTree` table is almost always stored in more than one part, not in completely merged form. ## Replication {#replication} diff --git a/docs/en/development/build-osx.md b/docs/en/development/build-osx.md index 7a4387f073b..c3a0a540b6d 100644 --- a/docs/en/development/build-osx.md +++ b/docs/en/development/build-osx.md @@ -35,10 +35,12 @@ $ cd ClickHouse ## Build ClickHouse {#build-clickhouse} +> Please note: ClickHouse doesn't support building with the native Apple Clang compiler; use clang from LLVM instead. + ``` bash $ mkdir build $ cd build -$ cmake .. -DCMAKE_CXX_COMPILER=`which clang++` -DCMAKE_C_COMPILER=`which clang` +$ cmake .. -DCMAKE_C_COMPILER=`brew --prefix llvm`/bin/clang -DCMAKE_CXX_COMPILER=`brew --prefix llvm`/bin/clang++ -DCMAKE_PREFIX_PATH=`brew --prefix llvm` $ ninja $ cd .. ``` diff --git a/docs/en/development/developer-instruction.md b/docs/en/development/developer-instruction.md index dc95c3ec50b..5511e8e19c7 100644 --- a/docs/en/development/developer-instruction.md +++ b/docs/en/development/developer-instruction.md @@ -253,8 +253,8 @@ Developing ClickHouse often requires loading realistic datasets. It is particula sudo apt install wget xz-utils - wget https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz - wget https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz + wget https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz + wget https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz xz -v -d hits_v1.tsv.xz xz -v -d visits_v1.tsv.xz diff --git a/docs/en/development/style.md b/docs/en/development/style.md index 828f866c23c..4c620c44aef 100644 --- a/docs/en/development/style.md +++ b/docs/en/development/style.md @@ -577,7 +577,7 @@ If a function captures ownership of an object created in the heap, make the argu **14.** Return values. -In most cases, just use `return`. Do not write `[return std::move(res)]{.strike}`. +In most cases, just use `return`. Do not write `return std::move(res)`. If the function allocates an object on heap and returns it, use `shared_ptr` or `unique_ptr`. @@ -671,7 +671,7 @@ Always use `#pragma once` instead of include guards. **24.** Do not use `trailing return type` for functions unless necessary. ``` cpp -[auto f() -> void;]{.strike} +auto f() -> void; ``` **25.** Declaration and initialization of variables. diff --git a/docs/en/faq/use-cases/key-value.md b/docs/en/faq/use-cases/key-value.md index 76bbcb98cf3..2827dd2fa58 100644 --- a/docs/en/faq/use-cases/key-value.md +++ b/docs/en/faq/use-cases/key-value.md @@ -6,7 +6,7 @@ toc_priority: 101 # Can I Use ClickHouse As a Key-Value Storage? {#can-i-use-clickhouse-as-a-key-value-storage} -The short answer is **“no”**. The key-value workload is among top positions in the list of cases when NOT{.text-danger} to use ClickHouse.
It’s an [OLAP](../../faq/general/olap.md) system after all, while there are many excellent key-value storage systems out there. +The short answer is **“no”**. The key-value workload is near the top of the list of cases when **NOT**{.text-danger} to use ClickHouse. It’s an [OLAP](../../faq/general/olap.md) system after all, while there are many excellent key-value storage systems out there. However, there might be situations where it still makes sense to use ClickHouse for key-value-like queries. Usually, it’s some low-budget products where the main workload is analytical in nature and fits ClickHouse well, but there’s also some secondary process that needs a key-value pattern with not so high request throughput and without strict latency requirements. If you had an unlimited budget, you would have installed a secondary key-value database for this secondary workload, but in reality, there’s an additional cost of maintaining one more storage system (monitoring, backups, etc.) which might be desirable to avoid. diff --git a/docs/en/getting-started/example-datasets/github-events.md b/docs/en/getting-started/example-datasets/github-events.md new file mode 100644 index 00000000000..a6c71733832 --- /dev/null +++ b/docs/en/getting-started/example-datasets/github-events.md @@ -0,0 +1,11 @@ +--- +toc_priority: 11 +toc_title: GitHub Events +--- + +# GitHub Events Dataset + +The dataset contains all events on GitHub from 2011 to Dec 6, 2020, and has 3.1 billion records. The download size is 75 GB, and it requires up to 200 GB of space on disk if stored in a table with lz4 compression. + +The full dataset description, insights, download instructions and interactive queries are posted [here](https://github-sql.github.io/explorer/). + diff --git a/docs/en/getting-started/example-datasets/index.md b/docs/en/getting-started/example-datasets/index.md index 35ac90f9beb..ed795a6c4de 100644 --- a/docs/en/getting-started/example-datasets/index.md +++ b/docs/en/getting-started/example-datasets/index.md @@ -1,6 +1,6 @@ --- toc_folder_title: Example Datasets -toc_priority: 14 +toc_priority: 10 toc_title: Introduction --- @@ -10,6 +10,7 @@ This section describes how to obtain example datasets and import them into Click The list of documented datasets: +- [GitHub Events](../../getting-started/example-datasets/github-events.md) - [Anonymized Yandex.Metrica Dataset](../../getting-started/example-datasets/metrica.md) - [Star Schema Benchmark](../../getting-started/example-datasets/star-schema.md) - [WikiStat](../../getting-started/example-datasets/wikistat.md) @@ -18,4 +19,4 @@ The list of documented datasets: - [New York Taxi Data](../../getting-started/example-datasets/nyc-taxi.md) - [OnTime](../../getting-started/example-datasets/ontime.md) -[Original article](https://clickhouse.tech/docs/en/getting_started/example_datasets) \ No newline at end of file +[Original article](https://clickhouse.tech/docs/en/getting_started/example_datasets) diff --git a/docs/en/getting-started/example-datasets/metrica.md b/docs/en/getting-started/example-datasets/metrica.md index b036973b255..cdbb9b56eeb 100644 --- a/docs/en/getting-started/example-datasets/metrica.md +++ b/docs/en/getting-started/example-datasets/metrica.md @@ -7,14 +7,14 @@ toc_title: Yandex.Metrica Data Dataset consists of two tables containing anonymized data about hits (`hits_v1`) and visits (`visits_v1`) of Yandex.Metrica. You can read more about Yandex.Metrica in [ClickHouse history](../../introduction/history.md) section.
-The dataset consists of two tables, either of them can be downloaded as a compressed `tsv.xz` file or as prepared partitions. In addition to that, an extended version of the `hits` table containing 100 million rows is available as TSV at https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_100m_obfuscated_v1.tsv.xz and as prepared partitions at https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_100m_obfuscated_v1.tar.xz. +The dataset consists of two tables, either of them can be downloaded as a compressed `tsv.xz` file or as prepared partitions. In addition to that, an extended version of the `hits` table containing 100 million rows is available as TSV at https://datasets.clickhouse.tech/hits/tsv/hits_100m_obfuscated_v1.tsv.xz and as prepared partitions at https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz. ## Obtaining Tables from Prepared Partitions {#obtaining-tables-from-prepared-partitions} Download and import hits table: ``` bash -curl -O https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_v1.tar +curl -O https://datasets.clickhouse.tech/hits/partitions/hits_v1.tar tar xvf hits_v1.tar -C /var/lib/clickhouse # path to ClickHouse data directory # check permissions on unpacked data, fix if required sudo service clickhouse-server restart @@ -24,7 +24,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1" Download and import visits: ``` bash -curl -O https://clickhouse-datasets.s3.yandex.net/visits/partitions/visits_v1.tar +curl -O https://datasets.clickhouse.tech/visits/partitions/visits_v1.tar tar xvf visits_v1.tar -C /var/lib/clickhouse # path to ClickHouse data directory # check permissions on unpacked data, fix if required sudo service clickhouse-server restart @@ -36,7 +36,10 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1" Download and import hits from compressed TSV file: ``` bash -curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv +curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv +# Validate the checksum +md5sum hits_v1.tsv +# Checksum should be equal to: f3631b6295bf06989c1437491f7592cb # now create table clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets" clickhouse-client --query "CREATE TABLE datasets.hits_v1 ( WatchID UInt64, JavaEnable UInt8, Title String, GoodEvent Int16, EventTime DateTime, EventDate Date, CounterID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RegionID UInt32, UserID UInt64, CounterClass Int8, OS UInt8, UserAgent UInt8, URL String, Referer String, URLDomain String, RefererDomain String, Refresh UInt8, IsRobot UInt8, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), ResolutionWidth UInt16, ResolutionHeight UInt16, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, FlashMinor2 String, NetMajor UInt8, NetMinor UInt8, UserAgentMajor UInt16, UserAgentMinor FixedString(2), CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, MobilePhone UInt8, MobilePhoneModel String, Params String, IPNetworkID UInt32, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, IsArtifical UInt8, WindowClientWidth UInt16, WindowClientHeight UInt16, ClientTimeZone Int16, ClientEventTime DateTime, SilverlightVersion1 UInt8, SilverlightVersion2 UInt8, SilverlightVersion3 UInt32, SilverlightVersion4 UInt16, PageCharset String, CodeVersion UInt32, IsLink 
UInt8, IsDownload UInt8, IsNotBounce UInt8, FUniqID UInt64, HID UInt32, IsOldCounter UInt8, IsEvent UInt8, IsParameter UInt8, DontCountHits UInt8, WithHash UInt8, HitColor FixedString(1), UTCEventTime DateTime, Age UInt8, Sex UInt8, Income UInt8, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), RemoteIP UInt32, RemoteIP6 FixedString(16), WindowName Int32, OpenerName Int32, HistoryLength Int16, BrowserLanguage FixedString(2), BrowserCountry FixedString(2), SocialNetwork String, SocialAction String, HTTPError UInt16, SendTiming Int32, DNSTiming Int32, ConnectTiming Int32, ResponseStartTiming Int32, ResponseEndTiming Int32, FetchTiming Int32, RedirectTiming Int32, DOMInteractiveTiming Int32, DOMContentLoadedTiming Int32, DOMCompleteTiming Int32, LoadEventStartTiming Int32, LoadEventEndTiming Int32, NSToDOMContentLoadedTiming Int32, FirstPaintTiming Int32, RedirectCount Int8, SocialSourceNetworkID UInt8, SocialSourcePage String, ParamPrice Int64, ParamOrderID String, ParamCurrency FixedString(3), ParamCurrencyID UInt16, GoalsReached Array(UInt32), OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, RefererHash UInt64, URLHash UInt64, CLID UInt32, YCLID UInt64, ShareService String, ShareURL String, ShareTitle String, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), IslandID FixedString(16), RequestNum UInt32, RequestTry UInt8) ENGINE = MergeTree() PARTITION BY toYYYYMM(EventDate) ORDER BY (CounterID, EventDate, intHash32(UserID)) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192" @@ -50,7 +53,10 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1" Download and import visits from compressed tsv-file: ``` bash -curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv +curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv +# Validate the checksum +md5sum visits_v1.tsv +# Checksum should be equal to: 6dafe1a0f24e59e3fc2d0fed85601de6 # now create table clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets" clickhouse-client --query "CREATE TABLE datasets.visits_v1 ( CounterID UInt32, StartDate Date, Sign Int8, IsNew UInt8, VisitID UInt64, UserID UInt64, StartTime DateTime, Duration UInt32, UTCStartTime DateTime, PageViews Int32, Hits Int32, IsBounce UInt8, Referer String, StartURL String, RefererDomain String, StartURLDomain String, EndURL String, LinkURL String, IsDownload UInt8, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, PlaceID Int32, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), IsYandex UInt8, GoalReachesDepth Int32, GoalReachesURL Int32, GoalReachesAny Int32, SocialSourceNetworkID UInt8, SocialSourcePage String, MobilePhoneModel String, ClientEventTime DateTime, RegionID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RemoteIP UInt32, RemoteIP6 FixedString(16), IPNetworkID UInt32, SilverlightVersion3 UInt32, CodeVersion UInt32, ResolutionWidth UInt16, ResolutionHeight UInt16, UserAgentMajor UInt16, UserAgentMinor UInt16, WindowClientWidth UInt16, WindowClientHeight UInt16, SilverlightVersion2 UInt8, SilverlightVersion4 UInt16, FlashVersion3 UInt16, FlashVersion4 UInt16, ClientTimeZone 
Int16, OS UInt8, UserAgent UInt8, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, NetMajor UInt8, NetMinor UInt8, MobilePhone UInt8, SilverlightVersion1 UInt8, Age UInt8, Sex UInt8, Income UInt8, JavaEnable UInt8, CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, BrowserLanguage UInt16, BrowserCountry UInt16, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), Params Array(String), Goals Nested(ID UInt32, Serial UInt32, EventTime DateTime, Price Int64, OrderID String, CurrencyID UInt32), WatchIDs Array(UInt64), ParamSumPrice Int64, ParamCurrency FixedString(3), ParamCurrencyID UInt16, ClickLogID UInt64, ClickEventID Int32, ClickGoodEvent Int32, ClickEventTime DateTime, ClickPriorityID Int32, ClickPhraseID Int32, ClickPageID Int32, ClickPlaceID Int32, ClickTypeID Int32, ClickResourceID Int32, ClickCost UInt32, ClickClientIP UInt32, ClickDomainID UInt32, ClickURL String, ClickAttempt UInt8, ClickOrderID UInt32, ClickBannerID UInt32, ClickMarketCategoryID UInt32, ClickMarketPP UInt32, ClickMarketCategoryName String, ClickMarketPPName String, ClickAWAPSCampaignName String, ClickPageName String, ClickTargetType UInt16, ClickTargetPhraseID UInt64, ClickContextType UInt8, ClickSelectType Int8, ClickOptions String, ClickGroupBannerID Int32, OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, FirstVisit DateTime, PredLastVisit Date, LastVisit Date, TotalVisits UInt32, TraficSource Nested(ID Int8, SearchEngineID UInt16, AdvEngineID UInt8, PlaceID UInt16, SocialSourceNetworkID UInt8, Domain String, SearchPhrase String, SocialSourcePage String), Attendance FixedString(16), CLID UInt32, YCLID UInt64, NormalizedRefererHash UInt64, SearchPhraseHash UInt64, RefererDomainHash UInt64, NormalizedStartURLHash UInt64, StartURLDomainHash UInt64, NormalizedEndURLHash UInt64, TopLevelDomain UInt64, URLScheme UInt64, OpenstatServiceNameHash UInt64, OpenstatCampaignIDHash UInt64, OpenstatAdIDHash UInt64, OpenstatSourceIDHash UInt64, UTMSourceHash UInt64, UTMMediumHash UInt64, UTMCampaignHash UInt64, UTMContentHash UInt64, UTMTermHash UInt64, FromHash UInt64, WebVisorEnabled UInt8, WebVisorActivity UInt32, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), Market Nested(Type UInt8, GoalID UInt32, OrderID String, OrderPrice Int64, PP UInt32, DirectPlaceID UInt32, DirectOrderID UInt32, DirectBannerID UInt32, GoodID String, GoodName String, GoodQuantity Int32, GoodPrice Int64), IslandID FixedString(16)) ENGINE = CollapsingMergeTree(Sign) PARTITION BY toYYYYMM(StartDate) ORDER BY (CounterID, StartDate, intHash32(UserID), VisitID) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192" diff --git a/docs/en/getting-started/example-datasets/nyc-taxi.md b/docs/en/getting-started/example-datasets/nyc-taxi.md index 9b9a12ba724..38a87a674f9 100644 --- a/docs/en/getting-started/example-datasets/nyc-taxi.md +++ b/docs/en/getting-started/example-datasets/nyc-taxi.md @@ -283,7 +283,7 @@ Among other things, you can run the OPTIMIZE query on MergeTree. 
But it’s not ## Download of Prepared Partitions {#download-of-prepared-partitions} ``` bash -$ curl -O https://clickhouse-datasets.s3.yandex.net/trips_mergetree/partitions/trips_mergetree.tar +$ curl -O https://datasets.clickhouse.tech/trips_mergetree/partitions/trips_mergetree.tar $ tar xvf trips_mergetree.tar -C /var/lib/clickhouse # path to ClickHouse data directory $ # check permissions of unpacked data, fix if required $ sudo service clickhouse-server restart diff --git a/docs/en/getting-started/example-datasets/ontime.md b/docs/en/getting-started/example-datasets/ontime.md index 6f2408af3b6..5e499cafb2a 100644 --- a/docs/en/getting-started/example-datasets/ontime.md +++ b/docs/en/getting-started/example-datasets/ontime.md @@ -154,7 +154,7 @@ $ for i in *.zip; do echo $i; unzip -cq $i '*.csv' | sed 's/\.00//g' | clickhous ## Download of Prepared Partitions {#download-of-prepared-partitions} ``` bash -$ curl -O https://clickhouse-datasets.s3.yandex.net/ontime/partitions/ontime.tar +$ curl -O https://datasets.clickhouse.tech/ontime/partitions/ontime.tar $ tar xvf ontime.tar -C /var/lib/clickhouse # path to ClickHouse data directory $ # check permissions of unpacked data, fix if required $ sudo service clickhouse-server restart diff --git a/docs/en/getting-started/tutorial.md b/docs/en/getting-started/tutorial.md index 3e051456a75..254fa37d47d 100644 --- a/docs/en/getting-started/tutorial.md +++ b/docs/en/getting-started/tutorial.md @@ -85,8 +85,8 @@ Now it’s time to fill our ClickHouse server with some sample data. In this tut ### Download and Extract Table Data {#download-and-extract-table-data} ``` bash -curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv -curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv +curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv +curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv ``` The extracted files are about 10GB in size. diff --git a/docs/en/interfaces/formats.md b/docs/en/interfaces/formats.md index cd29df43fdf..95b339a3269 100644 --- a/docs/en/interfaces/formats.md +++ b/docs/en/interfaces/formats.md @@ -58,6 +58,7 @@ The supported formats are: | [XML](#xml) | ✗ | ✔ | | [CapnProto](#capnproto) | ✔ | ✗ | | [LineAsString](#lineasstring) | ✔ | ✗ | +| [RawBLOB](#rawblob) | ✔ | ✔ | You can control some format processing parameters with the ClickHouse settings. For more information read the [Settings](../operations/settings/settings.md) section. @@ -1370,4 +1371,45 @@ Result: └───────────────────────────────────────────────────┘ ``` +## RawBLOB {#rawblob} + +In this format, all input data is read to a single value. It is possible to parse only a table with a single field of type [String](../sql-reference/data-types/string.md) or similar. +The result is output in binary format without delimiters and escaping. If more than one value is output, the format is ambiguous, and it will be impossible to read the data back. + +Below is a comparison of the formats `RawBLOB` and [TabSeparatedRaw](#tabseparatedraw). +`RawBLOB`: +- data is output in binary format, no escaping; +- there are no delimiters between values; +- no newline at the end of each value. +[TabSeparatedRaw](#tabseparatedraw): +- data is output without escaping; +- the rows contain values separated by tabs; +- there is a line feed after the last value in every row.
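+As an illustrative sketch of this comparison (the byte counts in the comments are our annotation, not from the docs), the same single `String` value comes out differently in the two formats:
+
+``` sql
+SELECT 'Hello' FORMAT TabSeparatedRaw; -- writes 6 bytes: Hello plus a trailing line feed
+SELECT 'Hello' FORMAT RawBLOB;         -- writes 5 bytes: Hello with no delimiter or newline
+```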
+ +The following is a comparison of the `RawBLOB` and [RowBinary](#rowbinary) formats. +`RawBLOB`: +- String fields are output without being prefixed by length. +`RowBinary`: +- String fields are represented as length in varint format (unsigned [LEB128](https://en.wikipedia.org/wiki/LEB128)), followed by the bytes of the string. + +When empty data is passed to the `RawBLOB` input, ClickHouse throws an exception: + +``` text +Code: 108. DB::Exception: No data to insert +``` + +**Example** + +``` bash +$ clickhouse-client --query "CREATE TABLE {some_table} (a String) ENGINE = Memory;" +$ cat {filename} | clickhouse-client --query="INSERT INTO {some_table} FORMAT RawBLOB" +$ clickhouse-client --query "SELECT * FROM {some_table} FORMAT RawBLOB" | md5sum +``` + +Result: + +``` text +f9725a22f9191e064120d718e26862a9 - +``` + [Original article](https://clickhouse.tech/docs/en/interfaces/formats/) diff --git a/docs/en/interfaces/third-party/client-libraries.md b/docs/en/interfaces/third-party/client-libraries.md index c737fad152f..047df3128b4 100644 --- a/docs/en/interfaces/third-party/client-libraries.md +++ b/docs/en/interfaces/third-party/client-libraries.md @@ -21,6 +21,7 @@ toc_title: Client Libraries - [seva-code/php-click-house-client](https://packagist.org/packages/seva-code/php-click-house-client) - [SeasClick C++ client](https://github.com/SeasX/SeasClick) - [one-ck](https://github.com/lizhichao/one-ck) + - [glushkovds/phpclickhouse-laravel](https://packagist.org/packages/glushkovds/phpclickhouse-laravel) - Go - [clickhouse](https://github.com/kshvakov/clickhouse/) - [go-clickhouse](https://github.com/roistat/go-clickhouse) diff --git a/docs/en/introduction/adopters.md b/docs/en/introduction/adopters.md index d0bb256439b..9b08de99fc0 100644 --- a/docs/en/introduction/adopters.md +++ b/docs/en/introduction/adopters.md @@ -82,6 +82,7 @@ toc_title: Adopters | Pragma Innovation | Telemetry and Big Data Analysis | Main product | — | — | [Slides in English, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup18/4_pragma_innovation.pdf) | | QINGCLOUD | Cloud services | Main product | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/4.%20Cloud%20%2B%20TSDB%20for%20ClickHouse%20张健%20QingCloud.pdf) | | Qrator | DDoS protection | Main product | — | — | [Blog Post, March 2019](https://blog.qrator.net/en/clickhouse-ddos-mitigation_37/) | +| Raiffeisenbank | Banking | Analytics | — | — | [Lecture in Russian, December 2020](https://cs.hse.ru/announcements/421965599.html) | | Rambler | Internet services | Analytics | — | — | [Talk in Russian, April 2018](https://medium.com/@ramblertop/разработка-api-clickhouse-для-рамблер-топ-100-f4c7e56f3141) | | Retell | Speech synthesis | Analytics | — | — | [Blog Article, August 2020](https://vc.ru/services/153732-kak-sozdat-audiostati-na-vashem-sayte-i-zachem-eto-nuzhno) | | Rspamd | Antispam | Analytics | — | — | [Official Website](https://rspamd.com/doc/modules/clickhouse.html) | @@ -102,6 +103,7 @@ toc_title: Adopters | Teralytics | Mobility | Analytics | — | — | [Tech blog](https://www.teralytics.net/knowledge-hub/visualizing-mobility-data-the-scalability-challenge) | | Tencent | Big Data | Data processing | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/5.%20ClickHouse大数据集群应用_李俊飞腾讯网媒事业部.pdf) | | Tencent | Messaging | Logging | — | — | [Talk in Chinese, November
2019](https://youtu.be/T-iVQRuw-QY?t=5050) | +| Tencent Music Entertainment (TME) | Big Data | Data processing | — | — | [Blog in Chinese, June 2020](https://cloud.tencent.com/developer/article/1637840) | | Traffic Stars | AD network | — | — | — | [Slides in Russian, May 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup15/lightning/ninja.pdf) | | Uber | Taxi | Logging | — | — | [Slides, February 2020](https://presentations.clickhouse.tech/meetup40/uber.pdf) | | VKontakte | Social Network | Statistics, Logging | — | — | [Slides in Russian, August 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup17/3_vk.pdf) | diff --git a/docs/en/operations/performance-test.md b/docs/en/operations/performance-test.md index 984bbe02174..ca805923ba9 100644 --- a/docs/en/operations/performance-test.md +++ b/docs/en/operations/performance-test.md @@ -27,7 +27,7 @@ wget https://raw.githubusercontent.com/ClickHouse/ClickHouse/master/benchmark/cl ``` 6. Download test data according to the [Yandex.Metrica dataset](../getting-started/example-datasets/metrica.md) instruction (“hits” table containing 100 million rows). ```bash -wget https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_100m_obfuscated_v1.tar.xz +wget https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz tar xvf hits_100m_obfuscated_v1.tar.xz -C . mv hits_100m_obfuscated_v1/* . ``` diff --git a/docs/en/operations/settings/settings.md b/docs/en/operations/settings/settings.md index c5e44d4c464..093e4e50dfe 100644 --- a/docs/en/operations/settings/settings.md +++ b/docs/en/operations/settings/settings.md @@ -1093,9 +1093,14 @@ See the section “WITH TOTALS modifier”. ## max_parallel_replicas {#settings-max_parallel_replicas} -The maximum number of replicas for each shard when executing a query. -For consistency (to get different parts of the same data split), this option only works when the sampling key is set. -Replica lag is not controlled. +The maximum number of replicas for each shard when executing a query. In limited circumstances, this can make a query faster by executing it on more servers. This setting is only useful for replicated tables with a sampling key. There are cases where performance will not improve or will even worsen: + +- the position of the sampling key in the partitioning key's order doesn't allow efficient range scans +- adding a sampling key to the table makes filtering by other columns less efficient +- the sampling key is an expression that is expensive to calculate +- the cluster's latency distribution has a long tail, so that querying more servers increases the query's overall latency + +In addition, this setting will produce incorrect results when joins or subqueries are involved and not all tables meet certain conditions. See [Distributed Subqueries and max_parallel_replicas](../../sql-reference/operators/in.md#max_parallel_replica-subqueries) for more details. ## compile {#compile} @@ -2360,10 +2365,41 @@ Default value: `1`. ## output_format_tsv_null_representation {#output_format_tsv_null_representation} -Allows configurable `NULL` representation for [TSV](../../interfaces/formats.md#tabseparated) output format. The setting only controls output format and `\N` is the only supported `NULL` representation for TSV input format. +Defines the representation of `NULL` for the [TSV](../../interfaces/formats.md#tabseparated) output format. The user can set any string as a value, for example, `My NULL`. Default value: `\N`.
+**Examples** + +Query + +```sql +SELECT * FROM tsv_custom_null FORMAT TSV; +``` + +Result + +```text +788 +\N +\N +``` + +Query + +```sql +SET output_format_tsv_null_representation = 'My NULL'; +SELECT * FROM tsv_custom_null FORMAT TSV; +``` + +Result + +```text +788 +My NULL +My NULL +``` + ## output_format_json_array_of_rows {#output-format-json-array-of-rows} Enables the ability to output all rows as a JSON array in the [JSONEachRow](../../interfaces/formats.md#jsoneachrow) format. diff --git a/docs/en/operations/system-tables/distribution_queue.md b/docs/en/operations/system-tables/distribution_queue.md new file mode 100644 index 00000000000..39d06c49e53 --- /dev/null +++ b/docs/en/operations/system-tables/distribution_queue.md @@ -0,0 +1,46 @@ +# system.distribution_queue {#system_tables-distribution_queue} + +Contains information about local files that are in the queue to be sent to the shards. These local files contain new parts that are created by inserting new data into the Distributed table in asynchronous mode. + +Columns: + +- `database` ([String](../../sql-reference/data-types/string.md)) — Name of the database. + +- `table` ([String](../../sql-reference/data-types/string.md)) — Name of the table. + +- `data_path` ([String](../../sql-reference/data-types/string.md)) — Path to the folder with local files. + +- `is_blocked` ([UInt8](../../sql-reference/data-types/int-uint.md)) — Flag indicating whether sending local files to the server is blocked. + +- `error_count` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Number of errors. + +- `data_files` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Number of local files in a folder. + +- `data_compressed_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Size of compressed data in local files, in bytes. + +- `last_exception` ([String](../../sql-reference/data-types/string.md)) — Text message about the last error that occurred (if any).
+ +**Example** + +``` sql +SELECT * FROM system.distribution_queue LIMIT 1 FORMAT Vertical; +``` + +``` text +Row 1: +────── +database: default +table: dist +data_path: ./store/268/268bc070-3aad-4b1a-9cf2-4987580161af/default@127%2E0%2E0%2E2:9000/ +is_blocked: 1 +error_count: 0 +data_files: 1 +data_compressed_bytes: 499 +last_exception: +``` + +**See also** + +- [Distributed table engine](../../engines/table-engines/special/distributed.md) + +[Original article](https://clickhouse.tech/docs/en/operations/system_tables/distribution_queue) diff --git a/docs/en/sql-reference/ansi.md b/docs/en/sql-reference/ansi.md index 61d0e274207..fc759f9f79a 100644 --- a/docs/en/sql-reference/ansi.md +++ b/docs/en/sql-reference/ansi.md @@ -24,58 +24,58 @@ The following table lists cases when query feature works in ClickHouse, but beha | Feature ID | Feature Name | Status | Comment | |------------|--------------------------------------------------------------------------------------------------------------------------|----------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | **E011** | **Numeric data types** | **Partial**{.text-warning} | | -| E011-01 | INTEGER and SMALLINT data types | Yes{.text-success} | | -| E011-02 | REAL, DOUBLE PRECISION and FLOAT data types data types | Partial{.text-warning} | `FLOAT()`, `REAL` and `DOUBLE PRECISION` are not supported | -| E011-03 | DECIMAL and NUMERIC data types | Partial{.text-warning} | Only `DECIMAL(p,s)` is supported, not `NUMERIC` | -| E011-04 | Arithmetic operators | Yes{.text-success} | | -| E011-05 | Numeric comparison | Yes{.text-success} | | -| E011-06 | Implicit casting among the numeric data types | No{.text-danger} | ANSI SQL allows arbitrary implicit cast between numeric types, while ClickHouse relies on functions having multiple overloads instead of implicit cast | +| E011-01 | INTEGER and SMALLINT data types | Yes {.text-success} | | +| E011-02 | REAL, DOUBLE PRECISION and FLOAT data types data types | Partial {.text-warning} | `FLOAT()`, `REAL` and `DOUBLE PRECISION` are not supported | +| E011-03 | DECIMAL and NUMERIC data types | Partial {.text-warning} | Only `DECIMAL(p,s)` is supported, not `NUMERIC` | +| E011-04 | Arithmetic operators | Yes {.text-success} | | +| E011-05 | Numeric comparison | Yes {.text-success} | | +| E011-06 | Implicit casting among the numeric data types | No {.text-danger} | ANSI SQL allows arbitrary implicit cast between numeric types, while ClickHouse relies on functions having multiple overloads instead of implicit cast | | **E021** | **Character string types** | **Partial**{.text-warning} | | -| E021-01 | CHARACTER data type | No{.text-danger} | | -| E021-02 | CHARACTER VARYING data type | No{.text-danger} | `String` behaves similarly, but without length limit in parentheses | -| E021-03 | Character literals | Partial{.text-warning} | No automatic concatenation of consecutive literals and character set support | -| E021-04 | CHARACTER_LENGTH function | Partial{.text-warning} | No `USING` clause | -| E021-05 | OCTET_LENGTH function | No{.text-danger} | `LENGTH` behaves similarly | -| E021-06 | SUBSTRING | Partial{.text-warning} | No support for `SIMILAR` and `ESCAPE` clauses, no `SUBSTRING_REGEX` variant | -| E021-07 | Character concatenation | Partial{.text-warning} | No `COLLATE` clause | -| E021-08 | UPPER and LOWER functions | Yes{.text-success} | | -| 
E021-09 | TRIM function | Yes{.text-success} | | -| E021-10 | Implicit casting among the fixed-length and variable-length character string types | No{.text-danger} | ANSI SQL allows arbitrary implicit cast between string types, while ClickHouse relies on functions having multiple overloads instead of implicit cast | -| E021-11 | POSITION function | Partial{.text-warning} | No support for `IN` and `USING` clauses, no `POSITION_REGEX` variant | -| E021-12 | Character comparison | Yes{.text-success} | | +| E021-01 | CHARACTER data type | No {.text-danger} | | +| E021-02 | CHARACTER VARYING data type | No {.text-danger} | `String` behaves similarly, but without length limit in parentheses | +| E021-03 | Character literals | Partial {.text-warning} | No automatic concatenation of consecutive literals and character set support | +| E021-04 | CHARACTER_LENGTH function | Partial {.text-warning} | No `USING` clause | +| E021-05 | OCTET_LENGTH function | No {.text-danger} | `LENGTH` behaves similarly | +| E021-06 | SUBSTRING | Partial {.text-warning} | No support for `SIMILAR` and `ESCAPE` clauses, no `SUBSTRING_REGEX` variant | +| E021-07 | Character concatenation | Partial {.text-warning} | No `COLLATE` clause | +| E021-08 | UPPER and LOWER functions | Yes {.text-success} | | +| E021-09 | TRIM function | Yes {.text-success} | | +| E021-10 | Implicit casting among the fixed-length and variable-length character string types | No {.text-danger} | ANSI SQL allows arbitrary implicit cast between string types, while ClickHouse relies on functions having multiple overloads instead of implicit cast | +| E021-11 | POSITION function | Partial {.text-warning} | No support for `IN` and `USING` clauses, no `POSITION_REGEX` variant | +| E021-12 | Character comparison | Yes {.text-success} | | | **E031** | **Identifiers** | **Partial**{.text-warning} | | -| E031-01 | Delimited identifiers | Partial{.text-warning} | Unicode literal support is limited | -| E031-02 | Lower case identifiers | Yes{.text-success} | | -| E031-03 | Trailing underscore | Yes{.text-success} | | +| E031-01 | Delimited identifiers | Partial {.text-warning} | Unicode literal support is limited | +| E031-02 | Lower case identifiers | Yes {.text-success} | | +| E031-03 | Trailing underscore | Yes {.text-success} | | | **E051** | **Basic query specification** | **Partial**{.text-warning} | | -| E051-01 | SELECT DISTINCT | Yes{.text-success} | | -| E051-02 | GROUP BY clause | Yes{.text-success} | | -| E051-04 | GROUP BY can contain columns not in `` | Yes {.text-success} | | +| E051-05 | Select items can be renamed | Yes {.text-success} | | +| E051-06 | HAVING clause | Yes {.text-success} | | +| E051-07 | Qualified \* in select list | Yes {.text-success} | | +| E051-08 | Correlation name in the FROM clause | Yes {.text-success} | | +| E051-09 | Rename columns in the FROM clause | No {.text-danger} | | | **E061** | **Basic predicates and search conditions** | **Partial**{.text-warning} | | -| E061-01 | Comparison predicate | Yes{.text-success} | | -| E061-02 | BETWEEN predicate | Partial{.text-warning} | No `SYMMETRIC` and `ASYMMETRIC` clause | -| E061-03 | IN predicate with list of values | Yes{.text-success} | | -| E061-04 | LIKE predicate | Yes{.text-success} | | -| E061-05 | LIKE predicate: ESCAPE clause | No{.text-danger} | | -| E061-06 | NULL predicate | Yes{.text-success} | | -| E061-07 | Quantified comparison predicate | No{.text-danger} | | -| E061-08 | EXISTS predicate | No{.text-danger} | | -| E061-09 | Subqueries in comparison 
predicate | Yes{.text-success} | | -| E061-11 | Subqueries in IN predicate | Yes{.text-success} | | -| E061-12 | Subqueries in quantified comparison predicate | No{.text-danger} | | -| E061-13 | Correlated subqueries | No{.text-danger} | | -| E061-14 | Search condition | Yes{.text-success} | | +| E061-01 | Comparison predicate | Yes {.text-success} | | +| E061-02 | BETWEEN predicate | Partial {.text-warning} | No `SYMMETRIC` and `ASYMMETRIC` clause | +| E061-03 | IN predicate with list of values | Yes {.text-success} | | +| E061-04 | LIKE predicate | Yes {.text-success} | | +| E061-05 | LIKE predicate: ESCAPE clause | No {.text-danger} | | +| E061-06 | NULL predicate | Yes {.text-success} | | +| E061-07 | Quantified comparison predicate | No {.text-danger} | | +| E061-08 | EXISTS predicate | No {.text-danger} | | +| E061-09 | Subqueries in comparison predicate | Yes {.text-success} | | +| E061-11 | Subqueries in IN predicate | Yes {.text-success} | | +| E061-12 | Subqueries in quantified comparison predicate | No {.text-danger} | | +| E061-13 | Correlated subqueries | No {.text-danger} | | +| E061-14 | Search condition | Yes {.text-success} | | | **E071** | **Basic query expressions** | **Partial**{.text-warning} | | -| E071-01 | UNION DISTINCT table operator | No{.text-danger} | | -| E071-02 | UNION ALL table operator | Yes{.text-success} | | -| E071-03 | EXCEPT DISTINCT table operator | No{.text-danger} | | -| E071-05 | Columns combined via table operators need not have exactly the same data type | Yes{.text-success} | | -| E071-06 | Table operators in subqueries | Yes{.text-success} | | +| E071-01 | UNION DISTINCT table operator | No {.text-danger} | | +| E071-02 | UNION ALL table operator | Yes {.text-success} | | +| E071-03 | EXCEPT DISTINCT table operator | No {.text-danger} | | +| E071-05 | Columns combined via table operators need not have exactly the same data type | Yes {.text-success} | | +| E071-06 | Table operators in subqueries | Yes {.text-success} | | | **E081** | **Basic privileges** | **Partial**{.text-warning} | Work in progress | | E081-01 | SELECT privilege at the table level | | | | E081-02 | DELETE privilege | | | @@ -88,102 +88,102 @@ The following table lists cases when query feature works in ClickHouse, but beha | E081-09 | USAGE privilege | | | | E081-10 | EXECUTE privilege | | | | **E091** | **Set functions** | **Yes**{.text-success} | | -| E091-01 | AVG | Yes{.text-success} | | -| E091-02 | COUNT | Yes{.text-success} | | -| E091-03 | MAX | Yes{.text-success} | | -| E091-04 | MIN | Yes{.text-success} | | -| E091-05 | SUM | Yes{.text-success} | | -| E091-06 | ALL quantifier | No{.text-danger} | | -| E091-07 | DISTINCT quantifier | Partial{.text-warning} | Not all aggregate functions supported | +| E091-01 | AVG | Yes {.text-success} | | +| E091-02 | COUNT | Yes {.text-success} | | +| E091-03 | MAX | Yes {.text-success} | | +| E091-04 | MIN | Yes {.text-success} | | +| E091-05 | SUM | Yes {.text-success} | | +| E091-06 | ALL quantifier | No {.text-danger} | | +| E091-07 | DISTINCT quantifier | Partial {.text-warning} | Not all aggregate functions supported | | **E101** | **Basic data manipulation** | **Partial**{.text-warning} | | -| E101-01 | INSERT statement | Yes{.text-success} | Note: primary key in ClickHouse does not imply the `UNIQUE` constraint | -| E101-03 | Searched UPDATE statement | No{.text-danger} | There’s an `ALTER UPDATE` statement for batch data modification | -| E101-04 | Searched DELETE statement | No{.text-danger} | There’s an `ALTER DELETE` 
statement for batch data removal | +| E101-01 | INSERT statement | Yes {.text-success} | Note: primary key in ClickHouse does not imply the `UNIQUE` constraint | +| E101-03 | Searched UPDATE statement | No {.text-danger} | There’s an `ALTER UPDATE` statement for batch data modification | +| E101-04 | Searched DELETE statement | No {.text-danger} | There’s an `ALTER DELETE` statement for batch data removal | | **E111** | **Single row SELECT statement** | **No**{.text-danger} | | | **E121** | **Basic cursor support** | **No**{.text-danger} | | -| E121-01 | DECLARE CURSOR | No{.text-danger} | | -| E121-02 | ORDER BY columns need not be in select list | No{.text-danger} | | -| E121-03 | Value expressions in ORDER BY clause | No{.text-danger} | | -| E121-04 | OPEN statement | No{.text-danger} | | -| E121-06 | Positioned UPDATE statement | No{.text-danger} | | -| E121-07 | Positioned DELETE statement | No{.text-danger} | | -| E121-08 | CLOSE statement | No{.text-danger} | | -| E121-10 | FETCH statement: implicit NEXT | No{.text-danger} | | -| E121-17 | WITH HOLD cursors | No{.text-danger} | | +| E121-01 | DECLARE CURSOR | No {.text-danger} | | +| E121-02 | ORDER BY columns need not be in select list | No {.text-danger} | | +| E121-03 | Value expressions in ORDER BY clause | No {.text-danger} | | +| E121-04 | OPEN statement | No {.text-danger} | | +| E121-06 | Positioned UPDATE statement | No {.text-danger} | | +| E121-07 | Positioned DELETE statement | No {.text-danger} | | +| E121-08 | CLOSE statement | No {.text-danger} | | +| E121-10 | FETCH statement: implicit NEXT | No {.text-danger} | | +| E121-17 | WITH HOLD cursors | No {.text-danger} | | | **E131** | **Null value support (nulls in lieu of values)** | **Partial**{.text-warning} | Some restrictions apply | | **E141** | **Basic integrity constraints** | **Partial**{.text-warning} | | -| E141-01 | NOT NULL constraints | Yes{.text-success} | Note: `NOT NULL` is implied for table columns by default | -| E141-02 | UNIQUE constraint of NOT NULL columns | No{.text-danger} | | -| E141-03 | PRIMARY KEY constraints | No{.text-danger} | | -| E141-04 | Basic FOREIGN KEY constraint with the NO ACTION default for both referential delete action and referential update action | No{.text-danger} | | -| E141-06 | CHECK constraint | Yes{.text-success} | | -| E141-07 | Column defaults | Yes{.text-success} | | -| E141-08 | NOT NULL inferred on PRIMARY KEY | Yes{.text-success} | | -| E141-10 | Names in a foreign key can be specified in any order | No{.text-danger} | | +| E141-01 | NOT NULL constraints | Yes {.text-success} | Note: `NOT NULL` is implied for table columns by default | +| E141-02 | UNIQUE constraint of NOT NULL columns | No {.text-danger} | | +| E141-03 | PRIMARY KEY constraints | No {.text-danger} | | +| E141-04 | Basic FOREIGN KEY constraint with the NO ACTION default for both referential delete action and referential update action | No {.text-danger} | | +| E141-06 | CHECK constraint | Yes {.text-success} | | +| E141-07 | Column defaults | Yes {.text-success} | | +| E141-08 | NOT NULL inferred on PRIMARY KEY | Yes {.text-success} | | +| E141-10 | Names in a foreign key can be specified in any order | No {.text-danger} | | | **E151** | **Transaction support** | **No**{.text-danger} | | -| E151-01 | COMMIT statement | No{.text-danger} | | -| E151-02 | ROLLBACK statement | No{.text-danger} | | +| E151-01 | COMMIT statement | No {.text-danger} | | +| E151-02 | ROLLBACK statement | No {.text-danger} | | | **E152** | **Basic SET TRANSACTION 
statement** | **No**{.text-danger} | | -| E152-01 | SET TRANSACTION statement: ISOLATION LEVEL SERIALIZABLE clause | No{.text-danger} | | -| E152-02 | SET TRANSACTION statement: READ ONLY and READ WRITE clauses | No{.text-danger} | | +| E152-01 | SET TRANSACTION statement: ISOLATION LEVEL SERIALIZABLE clause | No {.text-danger} | | +| E152-02 | SET TRANSACTION statement: READ ONLY and READ WRITE clauses | No {.text-danger} | | | **E153** | **Updatable queries with subqueries** | **No**{.text-danger} | | | **E161** | **SQL comments using leading double minus** | **Yes**{.text-success} | | | **E171** | **SQLSTATE support** | **No**{.text-danger} | | | **E182** | **Host language binding** | **No**{.text-danger} | | | **F031** | **Basic schema manipulation** | **Partial**{.text-warning} | | -| F031-01 | CREATE TABLE statement to create persistent base tables | Partial{.text-warning} | No `SYSTEM VERSIONING`, `ON COMMIT`, `GLOBAL`, `LOCAL`, `PRESERVE`, `DELETE`, `REF IS`, `WITH OPTIONS`, `UNDER`, `LIKE`, `PERIOD FOR` clauses and no support for user resolved data types | -| F031-02 | CREATE VIEW statement | Partial{.text-warning} | No `RECURSIVE`, `CHECK`, `UNDER`, `WITH OPTIONS` clauses and no support for user resolved data types | -| F031-03 | GRANT statement | Yes{.text-success} | | -| F031-04 | ALTER TABLE statement: ADD COLUMN clause | Partial{.text-warning} | No support for `GENERATED` clause and system time period | -| F031-13 | DROP TABLE statement: RESTRICT clause | No{.text-danger} | | -| F031-16 | DROP VIEW statement: RESTRICT clause | No{.text-danger} | | -| F031-19 | REVOKE statement: RESTRICT clause | No{.text-danger} | | +| F031-01 | CREATE TABLE statement to create persistent base tables | Partial {.text-warning} | No `SYSTEM VERSIONING`, `ON COMMIT`, `GLOBAL`, `LOCAL`, `PRESERVE`, `DELETE`, `REF IS`, `WITH OPTIONS`, `UNDER`, `LIKE`, `PERIOD FOR` clauses and no support for user resolved data types | +| F031-02 | CREATE VIEW statement | Partial {.text-warning} | No `RECURSIVE`, `CHECK`, `UNDER`, `WITH OPTIONS` clauses and no support for user resolved data types | +| F031-03 | GRANT statement | Yes {.text-success} | | +| F031-04 | ALTER TABLE statement: ADD COLUMN clause | Partial {.text-warning} | No support for `GENERATED` clause and system time period | +| F031-13 | DROP TABLE statement: RESTRICT clause | No {.text-danger} | | +| F031-16 | DROP VIEW statement: RESTRICT clause | No {.text-danger} | | +| F031-19 | REVOKE statement: RESTRICT clause | No {.text-danger} | | | **F041** | **Basic joined table** | **Partial**{.text-warning} | | -| F041-01 | Inner join (but not necessarily the INNER keyword) | Yes{.text-success} | | -| F041-02 | INNER keyword | Yes{.text-success} | | -| F041-03 | LEFT OUTER JOIN | Yes{.text-success} | | -| F041-04 | RIGHT OUTER JOIN | Yes{.text-success} | | -| F041-05 | Outer joins can be nested | Yes{.text-success} | | -| F041-07 | The inner table in a left or right outer join can also be used in an inner join | Yes{.text-success} | | -| F041-08 | All comparison operators are supported (rather than just =) | No{.text-danger} | | +| F041-01 | Inner join (but not necessarily the INNER keyword) | Yes {.text-success} | | +| F041-02 | INNER keyword | Yes {.text-success} | | +| F041-03 | LEFT OUTER JOIN | Yes {.text-success} | | +| F041-04 | RIGHT OUTER JOIN | Yes {.text-success} | | +| F041-05 | Outer joins can be nested | Yes {.text-success} | | +| F041-07 | The inner table in a left or right outer join can also be used in an inner join | Yes 
{.text-success} | | +| F041-08 | All comparison operators are supported (rather than just =) | No {.text-danger} | | | **F051** | **Basic date and time** | **Partial**{.text-warning} | | -| F051-01 | DATE data type (including support of DATE literal) | Partial{.text-warning} | No literal | -| F051-02 | TIME data type (including support of TIME literal) with fractional seconds precision of at least 0 | No{.text-danger} | | -| F051-03 | TIMESTAMP data type (including support of TIMESTAMP literal) with fractional seconds precision of at least 0 and 6 | No{.text-danger} | `DateTime64` time provides similar functionality | -| F051-04 | Comparison predicate on DATE, TIME, and TIMESTAMP data types | Partial{.text-warning} | Only one data type available | -| F051-05 | Explicit CAST between datetime types and character string types | Yes{.text-success} | | -| F051-06 | CURRENT_DATE | No{.text-danger} | `today()` is similar | -| F051-07 | LOCALTIME | No{.text-danger} | `now()` is similar | -| F051-08 | LOCALTIMESTAMP | No{.text-danger} | | +| F051-01 | DATE data type (including support of DATE literal) | Partial {.text-warning} | No literal | +| F051-02 | TIME data type (including support of TIME literal) with fractional seconds precision of at least 0 | No {.text-danger} | | +| F051-03 | TIMESTAMP data type (including support of TIMESTAMP literal) with fractional seconds precision of at least 0 and 6 | No {.text-danger} | `DateTime64` time provides similar functionality | +| F051-04 | Comparison predicate on DATE, TIME, and TIMESTAMP data types | Partial {.text-warning} | Only one data type available | +| F051-05 | Explicit CAST between datetime types and character string types | Yes {.text-success} | | +| F051-06 | CURRENT_DATE | No {.text-danger} | `today()` is similar | +| F051-07 | LOCALTIME | No {.text-danger} | `now()` is similar | +| F051-08 | LOCALTIMESTAMP | No {.text-danger} | | | **F081** | **UNION and EXCEPT in views** | **Partial**{.text-warning} | | | **F131** | **Grouped operations** | **Partial**{.text-warning} | | -| F131-01 | WHERE, GROUP BY, and HAVING clauses supported in queries with grouped views | Yes{.text-success} | | -| F131-02 | Multiple tables supported in queries with grouped views | Yes{.text-success} | | -| F131-03 | Set functions supported in queries with grouped views | Yes{.text-success} | | -| F131-04 | Subqueries with GROUP BY and HAVING clauses and grouped views | Yes{.text-success} | | -| F131-05 | Single row SELECT with GROUP BY and HAVING clauses and grouped views | No{.text-danger} | | +| F131-01 | WHERE, GROUP BY, and HAVING clauses supported in queries with grouped views | Yes {.text-success} | | +| F131-02 | Multiple tables supported in queries with grouped views | Yes {.text-success} | | +| F131-03 | Set functions supported in queries with grouped views | Yes {.text-success} | | +| F131-04 | Subqueries with GROUP BY and HAVING clauses and grouped views | Yes {.text-success} | | +| F131-05 | Single row SELECT with GROUP BY and HAVING clauses and grouped views | No {.text-danger} | | | **F181** | **Multiple module support** | **No**{.text-danger} | | | **F201** | **CAST function** | **Yes**{.text-success} | | | **F221** | **Explicit defaults** | **No**{.text-danger} | | | **F261** | **CASE expression** | **Yes**{.text-success} | | -| F261-01 | Simple CASE | Yes{.text-success} | | -| F261-02 | Searched CASE | Yes{.text-success} | | -| F261-03 | NULLIF | Yes{.text-success} | | -| F261-04 | COALESCE | Yes{.text-success} | | +| F261-01 | Simple CASE | Yes 
{.text-success} | | +| F261-02 | Searched CASE | Yes {.text-success} | | +| F261-03 | NULLIF | Yes {.text-success} | | +| F261-04 | COALESCE | Yes {.text-success} | | | **F311** | **Schema definition statement** | **Partial**{.text-warning} | | -| F311-01 | CREATE SCHEMA | No{.text-danger} | | -| F311-02 | CREATE TABLE for persistent base tables | Yes{.text-success} | | -| F311-03 | CREATE VIEW | Yes{.text-success} | | -| F311-04 | CREATE VIEW: WITH CHECK OPTION | No{.text-danger} | | -| F311-05 | GRANT statement | Yes{.text-success} | | +| F311-01 | CREATE SCHEMA | No {.text-danger} | | +| F311-02 | CREATE TABLE for persistent base tables | Yes {.text-success} | | +| F311-03 | CREATE VIEW | Yes {.text-success} | | +| F311-04 | CREATE VIEW: WITH CHECK OPTION | No {.text-danger} | | +| F311-05 | GRANT statement | Yes {.text-success} | | | **F471** | **Scalar subquery values** | **Yes**{.text-success} | | | **F481** | **Expanded NULL predicate** | **Yes**{.text-success} | | | **F812** | **Basic flagging** | **No**{.text-danger} | | | **S011** | **Distinct data types** | | | | **T321** | **Basic SQL-invoked routines** | **No**{.text-danger} | | -| T321-01 | User-defined functions with no overloading | No{.text-danger} | | -| T321-02 | User-defined stored procedures with no overloading | No{.text-danger} | | -| T321-03 | Function invocation | No{.text-danger} | | -| T321-04 | CALL statement | No{.text-danger} | | -| T321-05 | RETURN statement | No{.text-danger} | | +| T321-01 | User-defined functions with no overloading | No {.text-danger} | | +| T321-02 | User-defined stored procedures with no overloading | No {.text-danger} | | +| T321-03 | Function invocation | No {.text-danger} | | +| T321-04 | CALL statement | No {.text-danger} | | +| T321-05 | RETURN statement | No {.text-danger} | | | **T631** | **IN predicate with one list element** | **Yes**{.text-success} | | diff --git a/docs/en/sql-reference/data-types/lowcardinality.md b/docs/en/sql-reference/data-types/lowcardinality.md index 36c86bf443c..e0a483973e6 100644 --- a/docs/en/sql-reference/data-types/lowcardinality.md +++ b/docs/en/sql-reference/data-types/lowcardinality.md @@ -57,3 +57,5 @@ Functions: - [A Magical Mystery Tour of the LowCardinality Data Type](https://www.altinity.com/blog/2019/3/27/low-cardinality). - [Reducing Clickhouse Storage Cost with the Low Cardinality Type – Lessons from an Instana Engineer](https://www.instana.com/blog/reducing-clickhouse-storage-cost-with-the-low-cardinality-type-lessons-from-an-instana-engineer/). - [String Optimization (video presentation in Russian)](https://youtu.be/rqf-ILRgBdY?list=PL0Z2YDlm0b3iwXCpEFiOOYmwXzVmjJfEt). [Slides in English](https://github.com/yandex/clickhouse-presentations/raw/master/meetup19/string_optimization.pdf). + +[Original article](https://clickhouse.tech/docs/en/sql-reference/data-types/lowcardinality/) diff --git a/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md b/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md index 957f2b6ae53..e86ac7fe105 100644 --- a/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md +++ b/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md @@ -93,6 +93,8 @@ Setting fields: - `path` – The absolute path to the file. - `format` – The file format. All the formats described in “[Formats](../../../interfaces/formats.md#formats)” are supported. 
+When a dictionary with the FILE source is created via a DDL command (`CREATE DICTIONARY ...`), the source file has to be located in the `user_files` directory to prevent DB users from accessing arbitrary files on the ClickHouse node.
+
 ## Executable File {#dicts-external_dicts_dict_sources-executable}
 
 Working with executable files depends on [how the dictionary is stored in memory](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md). If the dictionary is stored using `cache` and `complex_key_cache`, ClickHouse requests the necessary keys by sending a request to the executable file’s STDIN. Otherwise, ClickHouse starts the executable file and treats its output as dictionary data.
 
@@ -108,17 +110,13 @@ Example of settings:
 ```
 
-or
-
-``` sql
-SOURCE(EXECUTABLE(command 'cat /opt/dictionaries/os.tsv' format 'TabSeparated'))
-```
-
 Setting fields:
 
 - `command` – The absolute path to the executable file, or the file name (if the program directory is written to `PATH`).
 - `format` – The file format. All the formats described in “[Formats](../../../interfaces/formats.md#formats)” are supported.
 
+This dictionary source can be configured only via the XML configuration. Creating dictionaries with an executable source via DDL is disabled; otherwise, the DB user would be able to execute an arbitrary binary on the ClickHouse node.
+
 ## Http(s) {#dicts-external_dicts_dict_sources-http}
 
 Working with an HTTP(s) server depends on [how the dictionary is stored in memory](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md). If the dictionary is stored using `cache` and `complex_key_cache`, ClickHouse requests the necessary keys by sending a request via the `POST` method.
 
@@ -169,6 +167,8 @@ Setting fields:
 - `name` – Identifier name used for the header sent in the request.
 - `value` – Value set for a specific identifier name.
 
+When creating a dictionary using the DDL command (`CREATE DICTIONARY ...`), remote hosts for HTTP dictionaries are checked against the `remote_url_allow_hosts` section of the config to prevent database users from accessing an arbitrary HTTP server.
+
 ## ODBC {#dicts-external_dicts_dict_sources-odbc}
 
 You can use this method to connect any database that has an ODBC driver.
diff --git a/docs/en/sql-reference/functions/date-time-functions.md b/docs/en/sql-reference/functions/date-time-functions.md
index 233585194e9..628c321adee 100644
--- a/docs/en/sql-reference/functions/date-time-functions.md
+++ b/docs/en/sql-reference/functions/date-time-functions.md
@@ -366,7 +366,7 @@ SELECT toDate('2016-12-27') AS date, toYearWeek(date) AS yearWeek0, toYearWeek(d
 └────────────┴───────────┴───────────┴───────────┘
 ```
 
-## date_trunc {#date_trunc}
+## date\_trunc {#date_trunc}
 
 Truncates date and time data to the specified part of date.
 
@@ -435,7 +435,7 @@ Result:
 
 - [toStartOfInterval](#tostartofintervaltime-or-data-interval-x-unit-time-zone)
 
-# now {#now}
+## now {#now}
 
 Returns the current date and time.
 
@@ -662,7 +662,7 @@ Result:
 
 [Original article](https://clickhouse.tech/docs/en/query_language/functions/date_time_functions/)
 
-## FROM_UNIXTIME
+## FROM\_UNIXTIME {#fromunixtime}
 
 When there is only a single argument of integer type, it acts in the same way as `toDateTime` and returns the [DateTime](../../sql-reference/data-types/datetime.md) type.
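To make the single-argument form just described concrete, here is a minimal sketch; `423543535` is an arbitrary epoch value, and the textual rendering of the result depends on the server timezone, so no exact output is claimed:

```sql
-- FROM_UNIXTIME with a single integer argument behaves like toDateTime:
SELECT
    FROM_UNIXTIME(423543535) AS via_from_unixtime,
    toDateTime(423543535) AS via_to_datetime;
-- both columns contain the same DateTime value (a day in June 1983);
-- how it is printed depends on the server timezone
```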
diff --git a/docs/en/sql-reference/functions/url-functions.md b/docs/en/sql-reference/functions/url-functions.md
index 0da74ce1b0e..006542f494a 100644
--- a/docs/en/sql-reference/functions/url-functions.md
+++ b/docs/en/sql-reference/functions/url-functions.md
@@ -131,6 +131,40 @@ For example:
 - `cutToFirstSignificantSubdomain('www.tr') = 'www.tr'`.
 - `cutToFirstSignificantSubdomain('tr') = ''`.
 
+### cutToFirstSignificantSubdomainCustom {#cuttofirstsignificantsubdomaincustom}
+
+Same as `cutToFirstSignificantSubdomain` but accepts a custom TLD list name, useful if:
+
+- you need a fresh TLD list,
+- or you have a custom one.
+
+Configuration example:
+
+```xml
+<yandex>
+    <top_level_domains_lists>
+        <public_suffix_list>public_suffix_list.dat</public_suffix_list>
+    </top_level_domains_lists>
+</yandex>
+```
+
+Example:
+
+- `cutToFirstSignificantSubdomainCustom('https://news.yandex.com.tr/', 'public_suffix_list') = 'yandex.com.tr'`.
+
+### cutToFirstSignificantSubdomainCustomWithWWW {#cuttofirstsignificantsubdomaincustomwithwww}
+
+Same as `cutToFirstSignificantSubdomainWithWWW` but accepts a custom TLD list name.
+
+### firstSignificantSubdomainCustom {#firstsignificantsubdomaincustom}
+
+Same as `firstSignificantSubdomain` but accepts a custom TLD list name.
+
 ### port(URL\[, default_port = 0\]) {#port}
 
 Returns the port or `default_port` if there is no port in the URL (or in case of validation error).
diff --git a/docs/en/sql-reference/operators/in.md b/docs/en/sql-reference/operators/in.md
index eca95dbc652..bfa8b3d1003 100644
--- a/docs/en/sql-reference/operators/in.md
+++ b/docs/en/sql-reference/operators/in.md
@@ -197,3 +197,25 @@ This is more optimal than using the normal IN. However, keep the following point
 5. If you need to use GLOBAL IN often, plan the location of the ClickHouse cluster so that a single group of replicas resides in no more than one data center with a fast network between them, so that a query can be processed entirely within a single data center.
 
 It also makes sense to specify a local table in the `GLOBAL IN` clause, in case this local table is only available on the requestor server and you want to use data from it on remote servers.
+
+### Distributed Subqueries and max_parallel_replicas {#max_parallel_replica-subqueries}
+
+When `max_parallel_replicas` is greater than 1, distributed queries are further transformed. For example, the following:
+
+```sql
+SELECT CounterID, count() FROM distributed_table_1 WHERE UserID IN (SELECT UserID FROM local_table_2 WHERE CounterID < 100)
+SETTINGS max_parallel_replicas=3
+```
+
+is transformed on each server into
+
+```sql
+SELECT CounterID, count() FROM local_table_1 WHERE UserID IN (SELECT UserID FROM local_table_2 WHERE CounterID < 100)
+SETTINGS parallel_replicas_count=3, parallel_replicas_offset=M
+```
+
+where M is between 1 and 3 depending on which replica the local query is executing on. These settings affect every MergeTree-family table in the query and have the same effect as applying `SAMPLE 1/3 OFFSET (M-1)/3` on each table.
+
+Therefore, adding the `max_parallel_replicas` setting will only produce correct results if both tables have the same replication scheme and are sampled by UserID or a subkey of it. In particular, if local_table_2 does not have a sampling key, incorrect results will be produced. The same rule applies to JOIN.
+
+If local_table_2 does not meet the requirements, one workaround is to use `GLOBAL IN` or `GLOBAL JOIN`, as sketched below.
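A sketch of that `GLOBAL IN` workaround, reusing the hypothetical tables from the example above; `distributed_table_2` is an assumed distributed counterpart of `local_table_2`:

```sql
-- GLOBAL IN evaluates the subquery once on the initiator and ships the
-- result to the remote servers, so local_table_2 no longer needs to be
-- sampled by UserID for the parallel replicas to return correct results
SELECT CounterID, count()
FROM distributed_table_1
WHERE UserID GLOBAL IN (SELECT UserID FROM distributed_table_2 WHERE CounterID < 100)
SETTINGS max_parallel_replicas = 3
```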
diff --git a/docs/en/sql-reference/statements/kill.md b/docs/en/sql-reference/statements/kill.md
index d3f2d9bb5c6..6aa09cca4ef 100644
--- a/docs/en/sql-reference/statements/kill.md
+++ b/docs/en/sql-reference/statements/kill.md
@@ -53,7 +53,7 @@ KILL MUTATION [ON CLUSTER cluster]
 
 Tries to cancel and remove [mutations](../../sql-reference/statements/alter/index.md#alter-mutations) that are currently executing. Mutations to cancel are selected from the [`system.mutations`](../../operations/system-tables/mutations.md#system_tables-mutations) table using the filter specified by the `WHERE` clause of the `KILL` query.
 
-A test query (`TEST`) only checks the user’s rights and displays a list of queries to stop.
+A test query (`TEST`) only checks the user’s rights and displays a list of mutations to stop.
 
 Examples:
diff --git a/docs/en/sql-reference/statements/select/order-by.md b/docs/en/sql-reference/statements/select/order-by.md
index 57e071d6734..fb1df445db1 100644
--- a/docs/en/sql-reference/statements/select/order-by.md
+++ b/docs/en/sql-reference/statements/select/order-by.md
@@ -56,10 +56,188 @@ When floating point numbers are sorted, NaNs are separate from the other values.
 
 ## Collation Support {#collation-support}
 
-For sorting by String values, you can specify collation (comparison). Example: `ORDER BY SearchPhrase COLLATE 'tr'` - for sorting by keyword in ascending order, using the Turkish alphabet, case insensitive, assuming that strings are UTF-8 encoded. `COLLATE` can be specified or not for each expression in ORDER BY independently. If `ASC` or `DESC` is specified, `COLLATE` is specified after it. When using `COLLATE`, sorting is always case-insensitive.
+For sorting by [String](../../../sql-reference/data-types/string.md) values, you can specify collation (comparison). Example: `ORDER BY SearchPhrase COLLATE 'tr'` - for sorting by keyword in ascending order, using the Turkish alphabet, case insensitive, assuming that strings are UTF-8 encoded. `COLLATE` can be specified or not for each expression in ORDER BY independently. If `ASC` or `DESC` is specified, `COLLATE` is specified after it. When using `COLLATE`, sorting is always case-insensitive.
+
+`COLLATE` is also supported for [LowCardinality](../../../sql-reference/data-types/lowcardinality.md), [Nullable](../../../sql-reference/data-types/nullable.md), [Array](../../../sql-reference/data-types/array.md) and [Tuple](../../../sql-reference/data-types/tuple.md). We only recommend using `COLLATE` for final sorting of a small number of rows, since sorting with `COLLATE` is less efficient than normal sorting by bytes.
+## Collation Examples {#collation-examples} + +Example only with [String](../../../sql-reference/data-types/string.md) values: + +Input table: + +``` text +┌─x─┬─s────┐ +│ 1 │ bca │ +│ 2 │ ABC │ +│ 3 │ 123a │ +│ 4 │ abc │ +│ 5 │ BCA │ +└───┴──────┘ +``` + +Query: + +```sql +SELECT * FROM collate_test ORDER BY s ASC COLLATE 'en'; +``` + +Result: + +``` text +┌─x─┬─s────┐ +│ 3 │ 123a │ +│ 4 │ abc │ +│ 2 │ ABC │ +│ 1 │ bca │ +│ 5 │ BCA │ +└───┴──────┘ +``` + +Example with [Nullable](../../../sql-reference/data-types/nullable.md): + +Input table: + +``` text +┌─x─┬─s────┐ +│ 1 │ bca │ +│ 2 │ ᴺᵁᴸᴸ │ +│ 3 │ ABC │ +│ 4 │ 123a │ +│ 5 │ abc │ +│ 6 │ ᴺᵁᴸᴸ │ +│ 7 │ BCA │ +└───┴──────┘ +``` + +Query: + +```sql +SELECT * FROM collate_test ORDER BY s ASC COLLATE 'en'; +``` + +Result: + +``` text +┌─x─┬─s────┐ +│ 4 │ 123a │ +│ 5 │ abc │ +│ 3 │ ABC │ +│ 1 │ bca │ +│ 7 │ BCA │ +│ 6 │ ᴺᵁᴸᴸ │ +│ 2 │ ᴺᵁᴸᴸ │ +└───┴──────┘ +``` + +Example with [Array](../../../sql-reference/data-types/array.md): + +Input table: + +``` text +┌─x─┬─s─────────────┐ +│ 1 │ ['Z'] │ +│ 2 │ ['z'] │ +│ 3 │ ['a'] │ +│ 4 │ ['A'] │ +│ 5 │ ['z','a'] │ +│ 6 │ ['z','a','a'] │ +│ 7 │ [''] │ +└───┴───────────────┘ +``` + +Query: + +```sql +SELECT * FROM collate_test ORDER BY s ASC COLLATE 'en'; +``` + +Result: + +``` text +┌─x─┬─s─────────────┐ +│ 7 │ [''] │ +│ 3 │ ['a'] │ +│ 4 │ ['A'] │ +│ 2 │ ['z'] │ +│ 5 │ ['z','a'] │ +│ 6 │ ['z','a','a'] │ +│ 1 │ ['Z'] │ +└───┴───────────────┘ +``` + +Example with [LowCardinality](../../../sql-reference/data-types/lowcardinality.md) string: + +Input table: + +```text +┌─x─┬─s───┐ +│ 1 │ Z │ +│ 2 │ z │ +│ 3 │ a │ +│ 4 │ A │ +│ 5 │ za │ +│ 6 │ zaa │ +│ 7 │ │ +└───┴─────┘ +``` + +Query: + +```sql +SELECT * FROM collate_test ORDER BY s ASC COLLATE 'en'; +``` + +Result: + +```text +┌─x─┬─s───┐ +│ 7 │ │ +│ 3 │ a │ +│ 4 │ A │ +│ 2 │ z │ +│ 1 │ Z │ +│ 5 │ za │ +│ 6 │ zaa │ +└───┴─────┘ +``` + +Example with [Tuple](../../../sql-reference/data-types/tuple.md): + +```text +┌─x─┬─s───────┐ +│ 1 │ (1,'Z') │ +│ 2 │ (1,'z') │ +│ 3 │ (1,'a') │ +│ 4 │ (2,'z') │ +│ 5 │ (1,'A') │ +│ 6 │ (2,'Z') │ +│ 7 │ (2,'A') │ +└───┴─────────┘ +``` + +Query: + +```sql +SELECT * FROM collate_test ORDER BY s ASC COLLATE 'en'; +``` + +Result: + +```text +┌─x─┬─s───────┐ +│ 3 │ (1,'a') │ +│ 5 │ (1,'A') │ +│ 2 │ (1,'z') │ +│ 1 │ (1,'Z') │ +│ 7 │ (2,'A') │ +│ 4 │ (2,'z') │ +│ 6 │ (2,'Z') │ +└───┴─────────┘ +``` + ## Implementation Details {#implementation-details} Less RAM is used if a small enough [LIMIT](../../../sql-reference/statements/select/limit.md) is specified in addition to `ORDER BY`. Otherwise, the amount of memory spent is proportional to the volume of data for sorting. For distributed query processing, if [GROUP BY](../../../sql-reference/statements/select/group-by.md) is omitted, sorting is partially done on remote servers, and the results are merged on the requestor server. This means that for distributed sorting, the volume of data to sort can be greater than the amount of memory on a single server. 
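For readers who want to reproduce the collation examples above, here is a minimal setup sketch for the first (plain `String`) case; the table name and sample rows come from the example itself, while the column types and the `Memory` engine are assumptions:

```sql
-- recreate the input table from the first collation example
CREATE TABLE collate_test (x UInt32, s String) ENGINE = Memory;
INSERT INTO collate_test VALUES (1, 'bca'), (2, 'ABC'), (3, '123a'), (4, 'abc'), (5, 'BCA');

-- case-insensitive sort using the 'en' locale, as in the examples:
SELECT * FROM collate_test ORDER BY s ASC COLLATE 'en';
-- expected order of s: 123a, abc, ABC, bca, BCA
```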
diff --git a/docs/es/development/developer-instruction.md b/docs/es/development/developer-instruction.md index 390ac55602d..0ce5d0b457a 100644 --- a/docs/es/development/developer-instruction.md +++ b/docs/es/development/developer-instruction.md @@ -257,8 +257,8 @@ El desarrollo de ClickHouse a menudo requiere cargar conjuntos de datos realista sudo apt install wget xz-utils - wget https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz - wget https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz + wget https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz + wget https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz xz -v -d hits_v1.tsv.xz xz -v -d visits_v1.tsv.xz diff --git a/docs/es/development/style.md b/docs/es/development/style.md index c358d613fca..ec55516fe2c 100644 --- a/docs/es/development/style.md +++ b/docs/es/development/style.md @@ -579,7 +579,7 @@ Si una función captura la propiedad de un objeto creado en el montón, cree el **14.** Valores devueltos. -En la mayoría de los casos, sólo tiene que utilizar `return`. No escribir `[return std::move(res)]{.strike}`. +En la mayoría de los casos, sólo tiene que utilizar `return`. No escribir `return std::move(res)`. Si la función asigna un objeto en el montón y lo devuelve, use `shared_ptr` o `unique_ptr`. @@ -673,7 +673,7 @@ Utilice siempre `#pragma once` en lugar de incluir guardias. **24.** No use `trailing return type` para funciones a menos que sea necesario. ``` cpp -[auto f() -> void;]{.strike} +auto f() -> void ``` **25.** Declaración e inicialización de variables. diff --git a/docs/es/getting-started/example-datasets/metrica.md b/docs/es/getting-started/example-datasets/metrica.md index b99346eda29..0b3bc8b6833 100644 --- a/docs/es/getting-started/example-datasets/metrica.md +++ b/docs/es/getting-started/example-datasets/metrica.md @@ -9,14 +9,14 @@ toc_title: El Yandex.Metrica Datos El conjunto de datos consta de dos tablas que contienen datos anónimos sobre los hits (`hits_v1`) y visitas (`visits_v1`) el Yandex.Métrica. Puedes leer más sobre Yandex.Metrica en [Historial de ClickHouse](../../introduction/history.md) apartado. -El conjunto de datos consta de dos tablas, cualquiera de ellas se puede descargar como `tsv.xz` o como particiones preparadas. Además, una versión extendida de la `hits` La tabla que contiene 100 millones de filas está disponible como TSV en https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_100m_obfuscated_v1.tsv.xz y como particiones preparadas en https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_100m_obfuscated_v1.tar.xz. +El conjunto de datos consta de dos tablas, cualquiera de ellas se puede descargar como `tsv.xz` o como particiones preparadas. Además, una versión extendida de la `hits` La tabla que contiene 100 millones de filas está disponible como TSV en https://datasets.clickhouse.tech/hits/tsv/hits_100m_obfuscated_v1.tsv.xz y como particiones preparadas en https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz. 
## Obtención de tablas a partir de particiones preparadas {#obtaining-tables-from-prepared-partitions} Descargar e importar tabla de hits: ``` bash -curl -O https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_v1.tar +curl -O https://datasets.clickhouse.tech/hits/partitions/hits_v1.tar tar xvf hits_v1.tar -C /var/lib/clickhouse # path to ClickHouse data directory # check permissions on unpacked data, fix if required sudo service clickhouse-server restart @@ -26,7 +26,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1" Descargar e importar visitas: ``` bash -curl -O https://clickhouse-datasets.s3.yandex.net/visits/partitions/visits_v1.tar +curl -O https://datasets.clickhouse.tech/visits/partitions/visits_v1.tar tar xvf visits_v1.tar -C /var/lib/clickhouse # path to ClickHouse data directory # check permissions on unpacked data, fix if required sudo service clickhouse-server restart @@ -38,7 +38,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1" Descargar e importar hits desde un archivo TSV comprimido: ``` bash -curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv +curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv # now create table clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets" clickhouse-client --query "CREATE TABLE datasets.hits_v1 ( WatchID UInt64, JavaEnable UInt8, Title String, GoodEvent Int16, EventTime DateTime, EventDate Date, CounterID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RegionID UInt32, UserID UInt64, CounterClass Int8, OS UInt8, UserAgent UInt8, URL String, Referer String, URLDomain String, RefererDomain String, Refresh UInt8, IsRobot UInt8, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), ResolutionWidth UInt16, ResolutionHeight UInt16, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, FlashMinor2 String, NetMajor UInt8, NetMinor UInt8, UserAgentMajor UInt16, UserAgentMinor FixedString(2), CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, MobilePhone UInt8, MobilePhoneModel String, Params String, IPNetworkID UInt32, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, IsArtifical UInt8, WindowClientWidth UInt16, WindowClientHeight UInt16, ClientTimeZone Int16, ClientEventTime DateTime, SilverlightVersion1 UInt8, SilverlightVersion2 UInt8, SilverlightVersion3 UInt32, SilverlightVersion4 UInt16, PageCharset String, CodeVersion UInt32, IsLink UInt8, IsDownload UInt8, IsNotBounce UInt8, FUniqID UInt64, HID UInt32, IsOldCounter UInt8, IsEvent UInt8, IsParameter UInt8, DontCountHits UInt8, WithHash UInt8, HitColor FixedString(1), UTCEventTime DateTime, Age UInt8, Sex UInt8, Income UInt8, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), RemoteIP UInt32, RemoteIP6 FixedString(16), WindowName Int32, OpenerName Int32, HistoryLength Int16, BrowserLanguage FixedString(2), BrowserCountry FixedString(2), SocialNetwork String, SocialAction String, HTTPError UInt16, SendTiming Int32, DNSTiming Int32, ConnectTiming Int32, ResponseStartTiming Int32, ResponseEndTiming Int32, FetchTiming Int32, RedirectTiming Int32, DOMInteractiveTiming Int32, DOMContentLoadedTiming Int32, DOMCompleteTiming Int32, LoadEventStartTiming Int32, LoadEventEndTiming Int32, NSToDOMContentLoadedTiming Int32, FirstPaintTiming Int32, RedirectCount Int8, SocialSourceNetworkID UInt8, SocialSourcePage 
String, ParamPrice Int64, ParamOrderID String, ParamCurrency FixedString(3), ParamCurrencyID UInt16, GoalsReached Array(UInt32), OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, RefererHash UInt64, URLHash UInt64, CLID UInt32, YCLID UInt64, ShareService String, ShareURL String, ShareTitle String, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), IslandID FixedString(16), RequestNum UInt32, RequestTry UInt8) ENGINE = MergeTree() PARTITION BY toYYYYMM(EventDate) ORDER BY (CounterID, EventDate, intHash32(UserID)) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192" @@ -52,7 +52,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1" Descargue e importe visitas desde un archivo tsv comprimido: ``` bash -curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv +curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv # now create table clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets" clickhouse-client --query "CREATE TABLE datasets.visits_v1 ( CounterID UInt32, StartDate Date, Sign Int8, IsNew UInt8, VisitID UInt64, UserID UInt64, StartTime DateTime, Duration UInt32, UTCStartTime DateTime, PageViews Int32, Hits Int32, IsBounce UInt8, Referer String, StartURL String, RefererDomain String, StartURLDomain String, EndURL String, LinkURL String, IsDownload UInt8, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, PlaceID Int32, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), IsYandex UInt8, GoalReachesDepth Int32, GoalReachesURL Int32, GoalReachesAny Int32, SocialSourceNetworkID UInt8, SocialSourcePage String, MobilePhoneModel String, ClientEventTime DateTime, RegionID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RemoteIP UInt32, RemoteIP6 FixedString(16), IPNetworkID UInt32, SilverlightVersion3 UInt32, CodeVersion UInt32, ResolutionWidth UInt16, ResolutionHeight UInt16, UserAgentMajor UInt16, UserAgentMinor UInt16, WindowClientWidth UInt16, WindowClientHeight UInt16, SilverlightVersion2 UInt8, SilverlightVersion4 UInt16, FlashVersion3 UInt16, FlashVersion4 UInt16, ClientTimeZone Int16, OS UInt8, UserAgent UInt8, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, NetMajor UInt8, NetMinor UInt8, MobilePhone UInt8, SilverlightVersion1 UInt8, Age UInt8, Sex UInt8, Income UInt8, JavaEnable UInt8, CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, BrowserLanguage UInt16, BrowserCountry UInt16, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), Params Array(String), Goals Nested(ID UInt32, Serial UInt32, EventTime DateTime, Price Int64, OrderID String, CurrencyID UInt32), WatchIDs Array(UInt64), ParamSumPrice Int64, ParamCurrency FixedString(3), ParamCurrencyID UInt16, ClickLogID UInt64, ClickEventID Int32, ClickGoodEvent Int32, ClickEventTime DateTime, ClickPriorityID Int32, ClickPhraseID Int32, ClickPageID Int32, ClickPlaceID Int32, ClickTypeID Int32, ClickResourceID Int32, ClickCost UInt32, ClickClientIP UInt32, ClickDomainID UInt32, ClickURL String, ClickAttempt UInt8, ClickOrderID UInt32, ClickBannerID UInt32, ClickMarketCategoryID UInt32, ClickMarketPP UInt32, 
ClickMarketCategoryName String, ClickMarketPPName String, ClickAWAPSCampaignName String, ClickPageName String, ClickTargetType UInt16, ClickTargetPhraseID UInt64, ClickContextType UInt8, ClickSelectType Int8, ClickOptions String, ClickGroupBannerID Int32, OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, FirstVisit DateTime, PredLastVisit Date, LastVisit Date, TotalVisits UInt32, TraficSource Nested(ID Int8, SearchEngineID UInt16, AdvEngineID UInt8, PlaceID UInt16, SocialSourceNetworkID UInt8, Domain String, SearchPhrase String, SocialSourcePage String), Attendance FixedString(16), CLID UInt32, YCLID UInt64, NormalizedRefererHash UInt64, SearchPhraseHash UInt64, RefererDomainHash UInt64, NormalizedStartURLHash UInt64, StartURLDomainHash UInt64, NormalizedEndURLHash UInt64, TopLevelDomain UInt64, URLScheme UInt64, OpenstatServiceNameHash UInt64, OpenstatCampaignIDHash UInt64, OpenstatAdIDHash UInt64, OpenstatSourceIDHash UInt64, UTMSourceHash UInt64, UTMMediumHash UInt64, UTMCampaignHash UInt64, UTMContentHash UInt64, UTMTermHash UInt64, FromHash UInt64, WebVisorEnabled UInt8, WebVisorActivity UInt32, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), Market Nested(Type UInt8, GoalID UInt32, OrderID String, OrderPrice Int64, PP UInt32, DirectPlaceID UInt32, DirectOrderID UInt32, DirectBannerID UInt32, GoodID String, GoodName String, GoodQuantity Int32, GoodPrice Int64), IslandID FixedString(16)) ENGINE = CollapsingMergeTree(Sign) PARTITION BY toYYYYMM(StartDate) ORDER BY (CounterID, StartDate, intHash32(UserID), VisitID) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192" diff --git a/docs/es/getting-started/example-datasets/nyc-taxi.md b/docs/es/getting-started/example-datasets/nyc-taxi.md index 4a2bae83a0a..c6441311c96 100644 --- a/docs/es/getting-started/example-datasets/nyc-taxi.md +++ b/docs/es/getting-started/example-datasets/nyc-taxi.md @@ -285,7 +285,7 @@ Entre otras cosas, puede ejecutar la consulta OPTIMIZE en MergeTree. 
Pero no es ## Descarga de Prepared Partitions {#download-of-prepared-partitions} ``` bash -$ curl -O https://clickhouse-datasets.s3.yandex.net/trips_mergetree/partitions/trips_mergetree.tar +$ curl -O https://datasets.clickhouse.tech/trips_mergetree/partitions/trips_mergetree.tar $ tar xvf trips_mergetree.tar -C /var/lib/clickhouse # path to ClickHouse data directory $ # check permissions of unpacked data, fix if required $ sudo service clickhouse-server restart diff --git a/docs/es/getting-started/example-datasets/ontime.md b/docs/es/getting-started/example-datasets/ontime.md index b0662ef8b53..f89d74048bd 100644 --- a/docs/es/getting-started/example-datasets/ontime.md +++ b/docs/es/getting-started/example-datasets/ontime.md @@ -156,7 +156,7 @@ $ for i in *.zip; do echo $i; unzip -cq $i '*.csv' | sed 's/\.00//g' | clickhous ## Descarga de Prepared Partitions {#download-of-prepared-partitions} ``` bash -$ curl -O https://clickhouse-datasets.s3.yandex.net/ontime/partitions/ontime.tar +$ curl -O https://datasets.clickhouse.tech/ontime/partitions/ontime.tar $ tar xvf ontime.tar -C /var/lib/clickhouse # path to ClickHouse data directory $ # check permissions of unpacked data, fix if required $ sudo service clickhouse-server restart diff --git a/docs/es/getting-started/tutorial.md b/docs/es/getting-started/tutorial.md index ccc07e50468..52699190b4d 100644 --- a/docs/es/getting-started/tutorial.md +++ b/docs/es/getting-started/tutorial.md @@ -87,8 +87,8 @@ Ahora es el momento de llenar nuestro servidor ClickHouse con algunos datos de m ### Descargar y extraer datos de tabla {#download-and-extract-table-data} ``` bash -curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv -curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv +curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv +curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv ``` Los archivos extraídos tienen un tamaño de aproximadamente 10 GB. diff --git a/docs/es/operations/performance-test.md b/docs/es/operations/performance-test.md index d0beb7bd2b4..97444f339cd 100644 --- a/docs/es/operations/performance-test.md +++ b/docs/es/operations/performance-test.md @@ -48,7 +48,7 @@ Con esta instrucción, puede ejecutar una prueba de rendimiento básica de Click - wget https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_100m_obfuscated_v1.tar.xz + wget https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz tar xvf hits_100m_obfuscated_v1.tar.xz -C . mv hits_100m_obfuscated_v1/* . 
diff --git a/docs/es/sql-reference/ansi.md b/docs/es/sql-reference/ansi.md index a16a4cb5798..29e2c5b12e9 100644 --- a/docs/es/sql-reference/ansi.md +++ b/docs/es/sql-reference/ansi.md @@ -26,155 +26,155 @@ En la tabla siguiente se enumeran los casos en que la característica de consult | Feature ID | Nombre de la función | Estatus | Comentario | |------------|---------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | **E011** | **Tipos de datos numéricos** | **Parcial**{.text-warning} | | -| E011-01 | Tipos de datos INTEGER y SMALLINT | Sí{.text-success} | | -| E011-02 | REAL, DOUBLE PRECISION y FLOAT tipos de datos tipos de datos | Parcial{.text-warning} | `FLOAT()`, `REAL` y `DOUBLE PRECISION` no son compatibles | -| E011-03 | Tipos de datos DECIMAL y NUMERIC | Parcial{.text-warning} | Solo `DECIMAL(p,s)` es compatible, no `NUMERIC` | -| E011-04 | Operadores aritméticos | Sí{.text-success} | | -| E011-05 | Comparación numérica | Sí{.text-success} | | -| E011-06 | Conversión implícita entre los tipos de datos numéricos | No{.text-danger} | ANSI SQL permite la conversión implícita arbitraria entre tipos numéricos, mientras que ClickHouse se basa en funciones que tienen múltiples sobrecargas en lugar de conversión implícita | +| E011-01 | Tipos de datos INTEGER y SMALLINT | Sí {.text-success} | | +| E011-02 | REAL, DOUBLE PRECISION y FLOAT tipos de datos tipos de datos | Parcial {.text-warning} | `FLOAT()`, `REAL` y `DOUBLE PRECISION` no son compatibles | +| E011-03 | Tipos de datos DECIMAL y NUMERIC | Parcial {.text-warning} | Solo `DECIMAL(p,s)` es compatible, no `NUMERIC` | +| E011-04 | Operadores aritméticos | Sí {.text-success} | | +| E011-05 | Comparación numérica | Sí {.text-success} | | +| E011-06 | Conversión implícita entre los tipos de datos numéricos | No {.text-danger} | ANSI SQL permite la conversión implícita arbitraria entre tipos numéricos, mientras que ClickHouse se basa en funciones que tienen múltiples sobrecargas en lugar de conversión implícita | | **E021** | **Tipos de cadena de caracteres** | **Parcial**{.text-warning} | | -| E021-01 | Tipo de datos CHARACTER | No{.text-danger} | | -| E021-02 | Tipo de datos CHARACTER VARYING | No{.text-danger} | `String` se comporta de manera similar, pero sin límite de longitud entre paréntesis | -| E021-03 | Literales de caracteres | Parcial{.text-warning} | Sin concatenación automática de literales consecutivos y compatibilidad con el conjunto de caracteres | -| E021-04 | Función CHARACTER_LENGTH | Parcial{.text-warning} | No `USING` clausula | -| E021-05 | Función OCTET_LENGTH | No{.text-danger} | `LENGTH` se comporta de manera similar | -| E021-06 | SUBSTRING | Parcial{.text-warning} | No hay soporte para `SIMILAR` y `ESCAPE` cláusulas, no `SUBSTRING_REGEX` variante | -| E021-07 | Concatenación de caracteres | Parcial{.text-warning} | No `COLLATE` clausula | -| E021-08 | Funciones SUPERIOR e INFERIOR | Sí{.text-success} | | -| E021-09 | Función TRIM | Sí{.text-success} | | -| E021-10 | Conversión implícita entre los tipos de cadena de caracteres de longitud fija y longitud variable | No{.text-danger} | ANSI SQL permite la conversión implícita arbitraria entre tipos de cadena, mientras que ClickHouse se 
basa en funciones que tienen múltiples sobrecargas en lugar de conversión implícita | -| E021-11 | Función POSITION | Parcial{.text-warning} | No hay soporte para `IN` y `USING` cláusulas, no `POSITION_REGEX` variante | -| E021-12 | Comparación de caracteres | Sí{.text-success} | | +| E021-01 | Tipo de datos CHARACTER | No {.text-danger} | | +| E021-02 | Tipo de datos CHARACTER VARYING | No {.text-danger} | `String` se comporta de manera similar, pero sin límite de longitud entre paréntesis | +| E021-03 | Literales de caracteres | Parcial {.text-warning} | Sin concatenación automática de literales consecutivos y compatibilidad con el conjunto de caracteres | +| E021-04 | Función CHARACTER_LENGTH | Parcial {.text-warning} | No `USING` clausula | +| E021-05 | Función OCTET_LENGTH | No {.text-danger} | `LENGTH` se comporta de manera similar | +| E021-06 | SUBSTRING | Parcial {.text-warning} | No hay soporte para `SIMILAR` y `ESCAPE` cláusulas, no `SUBSTRING_REGEX` variante | +| E021-07 | Concatenación de caracteres | Parcial {.text-warning} | No `COLLATE` clausula | +| E021-08 | Funciones SUPERIOR e INFERIOR | Sí {.text-success} | | +| E021-09 | Función TRIM | Sí {.text-success} | | +| E021-10 | Conversión implícita entre los tipos de cadena de caracteres de longitud fija y longitud variable | No {.text-danger} | ANSI SQL permite la conversión implícita arbitraria entre tipos de cadena, mientras que ClickHouse se basa en funciones que tienen múltiples sobrecargas en lugar de conversión implícita | +| E021-11 | Función POSITION | Parcial {.text-warning} | No hay soporte para `IN` y `USING` cláusulas, no `POSITION_REGEX` variante | +| E021-12 | Comparación de caracteres | Sí {.text-success} | | | **E031** | **Identificador** | **Parcial**{.text-warning} | | -| E031-01 | Identificadores delimitados | Parcial{.text-warning} | El soporte literal Unicode es limitado | -| E031-02 | Identificadores de minúsculas | Sí{.text-success} | | -| E031-03 | Trailing subrayado | Sí{.text-success} | | +| E031-01 | Identificadores delimitados | Parcial {.text-warning} | El soporte literal Unicode es limitado | +| E031-02 | Identificadores de minúsculas | Sí {.text-success} | | +| E031-03 | Trailing subrayado | Sí {.text-success} | | | **E051** | **Especificación básica de la consulta** | **Parcial**{.text-warning} | | -| E051-01 | SELECT DISTINCT | Sí{.text-success} | | -| E051-02 | Cláusula GROUP BY | Sí{.text-success} | | -| E051-04 | GROUP BY puede contener columnas que no estén en `` | Sí {.text-success} | | +| E051-05 | Los elementos seleccionados pueden ser renombrados | Sí {.text-success} | | +| E051-06 | Cláusula HAVING | Sí {.text-success} | | +| E051-07 | Calificado \* en la lista de selección | Sí {.text-success} | | +| E051-08 | Nombre de correlación en la cláusula FROM | Sí {.text-success} | | +| E051-09 | Cambiar el nombre de las columnas en la cláusula FROM | No {.text-danger} | | | **E061** | **Predicados básicos y condiciones de búsqueda** | **Parcial**{.text-warning} | | -| E061-01 | Predicado de comparación | Sí{.text-success} | | -| E061-02 | ENTRE predicado | Parcial{.text-warning} | No `SYMMETRIC` y `ASYMMETRIC` clausula | -| E061-03 | Predicado IN con lista de valores | Sí{.text-success} | | -| E061-04 | COMO predicado | Sí{.text-success} | | -| E061-05 | Predicado LIKE: cláusula ESCAPE | No{.text-danger} | | -| E061-06 | Predicado NULL | Sí{.text-success} | | -| E061-07 | Predicado de comparación cuantificado | No{.text-danger} | | -| E061-08 | Predicado EXISTS | No{.text-danger} | | -| 
E061-09 | Subconsultas en predicado de comparación | Sí{.text-success} | | -| E061-11 | Subconsultas en el predicado IN | Sí{.text-success} | | -| E061-12 | Subconsultas en predicado de comparación cuantificado | No{.text-danger} | | -| E061-13 | Subconsultas correlacionadas | No{.text-danger} | | -| E061-14 | Condición de búsqueda | Sí{.text-success} | | +| E061-01 | Predicado de comparación | Sí {.text-success} | | +| E061-02 | ENTRE predicado | Parcial {.text-warning} | No `SYMMETRIC` y `ASYMMETRIC` clausula | +| E061-03 | Predicado IN con lista de valores | Sí {.text-success} | | +| E061-04 | COMO predicado | Sí {.text-success} | | +| E061-05 | Predicado LIKE: cláusula ESCAPE | No {.text-danger} | | +| E061-06 | Predicado NULL | Sí {.text-success} | | +| E061-07 | Predicado de comparación cuantificado | No {.text-danger} | | +| E061-08 | Predicado EXISTS | No {.text-danger} | | +| E061-09 | Subconsultas en predicado de comparación | Sí {.text-success} | | +| E061-11 | Subconsultas en el predicado IN | Sí {.text-success} | | +| E061-12 | Subconsultas en predicado de comparación cuantificado | No {.text-danger} | | +| E061-13 | Subconsultas correlacionadas | No {.text-danger} | | +| E061-14 | Condición de búsqueda | Sí {.text-success} | | | **E071** | **Expresiones de consulta básicas** | **Parcial**{.text-warning} | | -| E071-01 | Operador de tabla UNION DISTINCT | No{.text-danger} | | -| E071-02 | Operador de tabla UNION ALL | Sí{.text-success} | | -| E071-03 | EXCEPTO operador de tabla DISTINCT | No{.text-danger} | | -| E071-05 | Las columnas combinadas a través de operadores de tabla no necesitan tener exactamente el mismo tipo de datos | Sí{.text-success} | | -| E071-06 | Operadores de tabla en subconsultas | Sí{.text-success} | | +| E071-01 | Operador de tabla UNION DISTINCT | No {.text-danger} | | +| E071-02 | Operador de tabla UNION ALL | Sí {.text-success} | | +| E071-03 | EXCEPTO operador de tabla DISTINCT | No {.text-danger} | | +| E071-05 | Las columnas combinadas a través de operadores de tabla no necesitan tener exactamente el mismo tipo de datos | Sí {.text-success} | | +| E071-06 | Operadores de tabla en subconsultas | Sí {.text-success} | | | **E081** | **Privilegios básicos** | **Parcial**{.text-warning} | Trabajo en curso | | **E091** | **Establecer funciones** | **Sí**{.text-success} | | -| E091-01 | AVG | Sí{.text-success} | | -| E091-02 | COUNT | Sí{.text-success} | | -| E091-03 | MAX | Sí{.text-success} | | -| E091-04 | MIN | Sí{.text-success} | | -| E091-05 | SUM | Sí{.text-success} | | -| E091-06 | Cuantificador ALL | No{.text-danger} | | -| E091-07 | Cuantificador DISTINCT | Parcial{.text-warning} | No se admiten todas las funciones agregadas | +| E091-01 | AVG | Sí {.text-success} | | +| E091-02 | COUNT | Sí {.text-success} | | +| E091-03 | MAX | Sí {.text-success} | | +| E091-04 | MIN | Sí {.text-success} | | +| E091-05 | SUM | Sí {.text-success} | | +| E091-06 | Cuantificador ALL | No {.text-danger} | | +| E091-07 | Cuantificador DISTINCT | Parcial {.text-warning} | No se admiten todas las funciones agregadas | | **E101** | **Manipulación de datos básicos** | **Parcial**{.text-warning} | | -| E101-01 | Instrucción INSERT | Sí{.text-success} | Nota: la clave principal en ClickHouse no implica el `UNIQUE` limitación | -| E101-03 | Instrucción UPDATE buscada | No{.text-danger} | Hay una `ALTER UPDATE` declaración para la modificación de datos por lotes | -| E101-04 | Instrucción DELETE buscada | No{.text-danger} | Hay una `ALTER DELETE` declaración para la 
eliminación de datos por lotes | +| E101-01 | Instrucción INSERT | Sí {.text-success} | Nota: la clave principal en ClickHouse no implica el `UNIQUE` limitación | +| E101-03 | Instrucción UPDATE buscada | No {.text-danger} | Hay una `ALTER UPDATE` declaración para la modificación de datos por lotes | +| E101-04 | Instrucción DELETE buscada | No {.text-danger} | Hay una `ALTER DELETE` declaración para la eliminación de datos por lotes | | **E111** | **Instrucción SELECT de una sola fila** | **No**{.text-danger} | | | **E121** | **Soporte básico del cursor** | **No**{.text-danger} | | -| E121-01 | DECLARE CURSOR | No{.text-danger} | | -| E121-02 | Las columnas PEDIR POR no necesitan estar en la lista de selección | No{.text-danger} | | -| E121-03 | Expresiones de valor en la cláusula ORDER BY | No{.text-danger} | | -| E121-04 | Declaración ABIERTA | No{.text-danger} | | -| E121-06 | Instrucción UPDATE posicionada | No{.text-danger} | | -| E121-07 | Instrucción DELETE posicionada | No{.text-danger} | | -| E121-08 | Declaración CERRAR | No{.text-danger} | | -| E121-10 | Declaración FETCH: implícita NEXT | No{.text-danger} | | -| E121-17 | CON Cursores HOLD | No{.text-danger} | | +| E121-01 | DECLARE CURSOR | No {.text-danger} | | +| E121-02 | Las columnas PEDIR POR no necesitan estar en la lista de selección | No {.text-danger} | | +| E121-03 | Expresiones de valor en la cláusula ORDER BY | No {.text-danger} | | +| E121-04 | Declaración ABIERTA | No {.text-danger} | | +| E121-06 | Instrucción UPDATE posicionada | No {.text-danger} | | +| E121-07 | Instrucción DELETE posicionada | No {.text-danger} | | +| E121-08 | Declaración CERRAR | No {.text-danger} | | +| E121-10 | Declaración FETCH: implícita NEXT | No {.text-danger} | | +| E121-17 | CON Cursores HOLD | No {.text-danger} | | | **E131** | **Soporte de valor nulo (nulos en lugar de valores)** | **Parcial**{.text-warning} | Se aplican algunas restricciones | | **E141** | **Restricciones de integridad básicas** | **Parcial**{.text-warning} | | -| E141-01 | Restricciones NOT NULL | Sí{.text-success} | Nota: `NOT NULL` está implícito para las columnas de tabla de forma predeterminada | -| E141-02 | Restricción UNIQUE de columnas NOT NULL | No{.text-danger} | | -| E141-03 | Restricciones PRIMARY KEY | No{.text-danger} | | -| E141-04 | Restricción básica FOREIGN KEY con el valor predeterminado NO ACTION para la acción de eliminación referencial y la acción de actualización referencial | No{.text-danger} | | -| E141-06 | Restricción CHECK | Sí{.text-success} | | -| E141-07 | Valores predeterminados de columna | Sí{.text-success} | | -| E141-08 | NO NULL inferido en CLAVE PRIMARIA | Sí{.text-success} | | -| E141-10 | Los nombres de una clave externa se pueden especificar en cualquier orden | No{.text-danger} | | +| E141-01 | Restricciones NOT NULL | Sí {.text-success} | Nota: `NOT NULL` está implícito para las columnas de tabla de forma predeterminada | +| E141-02 | Restricción UNIQUE de columnas NOT NULL | No {.text-danger} | | +| E141-03 | Restricciones PRIMARY KEY | No {.text-danger} | | +| E141-04 | Restricción básica FOREIGN KEY con el valor predeterminado NO ACTION para la acción de eliminación referencial y la acción de actualización referencial | No {.text-danger} | | +| E141-06 | Restricción CHECK | Sí {.text-success} | | +| E141-07 | Valores predeterminados de columna | Sí {.text-success} | | +| E141-08 | NO NULL inferido en CLAVE PRIMARIA | Sí {.text-success} | | +| E141-10 | Los nombres de una clave externa se pueden especificar en 
cualquier orden | No {.text-danger} | | | **E151** | **Soporte de transacciones** | **No**{.text-danger} | | -| E151-01 | Declaración COMMIT | No{.text-danger} | | -| E151-02 | Instrucción ROLLBACK | No{.text-danger} | | +| E151-01 | Declaración COMMIT | No {.text-danger} | | +| E151-02 | Instrucción ROLLBACK | No {.text-danger} | | | **E152** | **Instrucción SET TRANSACTION básica** | **No**{.text-danger} | | -| E152-01 | Instrucción SET TRANSACTION: cláusula ISOLATION LEVEL SERIALIZABLE | No{.text-danger} | | -| E152-02 | Instrucción SET TRANSACTION: cláusulas READ ONLY y READ WRITE | No{.text-danger} | | +| E152-01 | Instrucción SET TRANSACTION: cláusula ISOLATION LEVEL SERIALIZABLE | No {.text-danger} | | +| E152-02 | Instrucción SET TRANSACTION: cláusulas READ ONLY y READ WRITE | No {.text-danger} | | | **E153** | **Consultas actualizables con subconsultas** | **No**{.text-danger} | | | **E161** | **Comentarios SQL usando doble menos inicial** | **Sí**{.text-success} | | | **E171** | **Soporte SQLSTATE** | **No**{.text-danger} | | | **E182** | **Enlace de idioma de host** | **No**{.text-danger} | | | **F031** | **Manipulación básica del esquema** | **Parcial**{.text-warning} | | -| F031-01 | Instrucción CREATE TABLE para crear tablas base persistentes | Parcial{.text-warning} | No `SYSTEM VERSIONING`, `ON COMMIT`, `GLOBAL`, `LOCAL`, `PRESERVE`, `DELETE`, `REF IS`, `WITH OPTIONS`, `UNDER`, `LIKE`, `PERIOD FOR` cláusulas y sin soporte para tipos de datos resueltos por el usuario | -| F031-02 | Instrucción CREATE VIEW | Parcial{.text-warning} | No `RECURSIVE`, `CHECK`, `UNDER`, `WITH OPTIONS` cláusulas y sin soporte para tipos de datos resueltos por el usuario | -| F031-03 | Declaración GRANT | Sí{.text-success} | | -| F031-04 | Sentencia ALTER TABLE: cláusula ADD COLUMN | Parcial{.text-warning} | No hay soporte para `GENERATED` cláusula y período de tiempo del sistema | -| F031-13 | Instrucción DROP TABLE: cláusula RESTRICT | No{.text-danger} | | -| F031-16 | Instrucción DROP VIEW: cláusula RESTRICT | No{.text-danger} | | -| F031-19 | Declaración REVOKE: cláusula RESTRICT | No{.text-danger} | | +| F031-01 | Instrucción CREATE TABLE para crear tablas base persistentes | Parcial {.text-warning} | No `SYSTEM VERSIONING`, `ON COMMIT`, `GLOBAL`, `LOCAL`, `PRESERVE`, `DELETE`, `REF IS`, `WITH OPTIONS`, `UNDER`, `LIKE`, `PERIOD FOR` cláusulas y sin soporte para tipos de datos resueltos por el usuario | +| F031-02 | Instrucción CREATE VIEW | Parcial {.text-warning} | No `RECURSIVE`, `CHECK`, `UNDER`, `WITH OPTIONS` cláusulas y sin soporte para tipos de datos resueltos por el usuario | +| F031-03 | Declaración GRANT | Sí {.text-success} | | +| F031-04 | Sentencia ALTER TABLE: cláusula ADD COLUMN | Parcial {.text-warning} | No hay soporte para `GENERATED` cláusula y período de tiempo del sistema | +| F031-13 | Instrucción DROP TABLE: cláusula RESTRICT | No {.text-danger} | | +| F031-16 | Instrucción DROP VIEW: cláusula RESTRICT | No {.text-danger} | | +| F031-19 | Declaración REVOKE: cláusula RESTRICT | No {.text-danger} | | | **F041** | **Tabla unida básica** | **Parcial**{.text-warning} | | -| F041-01 | Unión interna (pero no necesariamente la palabra clave INNER) | Sí{.text-success} | | -| F041-02 | Palabra clave INTERNA | Sí{.text-success} | | -| F041-03 | LEFT OUTER JOIN | Sí{.text-success} | | -| F041-04 | RIGHT OUTER JOIN | Sí{.text-success} | | -| F041-05 | Las uniones externas se pueden anidar | Sí{.text-success} | | -| F041-07 | La tabla interna en una combinación externa izquierda o 
derecha también se puede usar en una combinación interna | Sí{.text-success} | | -| F041-08 | Todos los operadores de comparación son compatibles (en lugar de solo =) | No{.text-danger} | | +| F041-01 | Unión interna (pero no necesariamente la palabra clave INNER) | Sí {.text-success} | | +| F041-02 | Palabra clave INTERNA | Sí {.text-success} | | +| F041-03 | LEFT OUTER JOIN | Sí {.text-success} | | +| F041-04 | RIGHT OUTER JOIN | Sí {.text-success} | | +| F041-05 | Las uniones externas se pueden anidar | Sí {.text-success} | | +| F041-07 | La tabla interna en una combinación externa izquierda o derecha también se puede usar en una combinación interna | Sí {.text-success} | | +| F041-08 | Todos los operadores de comparación son compatibles (en lugar de solo =) | No {.text-danger} | | | **F051** | **Fecha y hora básicas** | **Parcial**{.text-warning} | | -| F051-01 | Tipo de datos DATE (incluido el soporte del literal DATE) | Parcial{.text-warning} | No literal | -| F051-02 | Tipo de datos TIME (incluido el soporte del literal TIME) con una precisión de segundos fraccionarios de al menos 0 | No{.text-danger} | | -| F051-03 | Tipo de datos TIMESTAMP (incluido el soporte del literal TIMESTAMP) con una precisión de segundos fraccionarios de al menos 0 y 6 | No{.text-danger} | `DateTime64` tiempo proporciona una funcionalidad similar | -| F051-04 | Predicado de comparación en los tipos de datos DATE, TIME y TIMESTAMP | Parcial{.text-warning} | Sólo un tipo de datos disponible | -| F051-05 | CAST explícito entre tipos de fecha y hora y tipos de cadena de caracteres | Sí{.text-success} | | -| F051-06 | CURRENT_DATE | No{.text-danger} | `today()` es similar | -| F051-07 | LOCALTIME | No{.text-danger} | `now()` es similar | -| F051-08 | LOCALTIMESTAMP | No{.text-danger} | | +| F051-01 | Tipo de datos DATE (incluido el soporte del literal DATE) | Parcial {.text-warning} | No literal | +| F051-02 | Tipo de datos TIME (incluido el soporte del literal TIME) con una precisión de segundos fraccionarios de al menos 0 | No {.text-danger} | | +| F051-03 | Tipo de datos TIMESTAMP (incluido el soporte del literal TIMESTAMP) con una precisión de segundos fraccionarios de al menos 0 y 6 | No {.text-danger} | `DateTime64` tiempo proporciona una funcionalidad similar | +| F051-04 | Predicado de comparación en los tipos de datos DATE, TIME y TIMESTAMP | Parcial {.text-warning} | Sólo un tipo de datos disponible | +| F051-05 | CAST explícito entre tipos de fecha y hora y tipos de cadena de caracteres | Sí {.text-success} | | +| F051-06 | CURRENT_DATE | No {.text-danger} | `today()` es similar | +| F051-07 | LOCALTIME | No {.text-danger} | `now()` es similar | +| F051-08 | LOCALTIMESTAMP | No {.text-danger} | | | **F081** | **UNIÓN y EXCEPTO en vistas** | **Parcial**{.text-warning} | | | **F131** | **Operaciones agrupadas** | **Parcial**{.text-warning} | | -| F131-01 | Cláusulas WHERE, GROUP BY y HAVING admitidas en consultas con vistas agrupadas | Sí{.text-success} | | -| F131-02 | Múltiples tablas admitidas en consultas con vistas agrupadas | Sí{.text-success} | | -| F131-03 | Establecer funciones admitidas en consultas con vistas agrupadas | Sí{.text-success} | | -| F131-04 | Subconsultas con cláusulas GROUP BY y HAVING y vistas agrupadas | Sí{.text-success} | | -| F131-05 | SELECCIONAR una sola fila con cláusulas GROUP BY y HAVING y vistas agrupadas | No{.text-danger} | | +| F131-01 | Cláusulas WHERE, GROUP BY y HAVING admitidas en consultas con vistas agrupadas | Sí {.text-success} | | +| F131-02 | Múltiples 
tablas admitidas en consultas con vistas agrupadas | Sí {.text-success} | | +| F131-03 | Establecer funciones admitidas en consultas con vistas agrupadas | Sí {.text-success} | | +| F131-04 | Subconsultas con cláusulas GROUP BY y HAVING y vistas agrupadas | Sí {.text-success} | | +| F131-05 | SELECCIONAR una sola fila con cláusulas GROUP BY y HAVING y vistas agrupadas | No {.text-danger} | | | **F181** | **Múltiples módulos de apoyo** | **No**{.text-danger} | | | **F201** | **Función de fundición** | **Sí**{.text-success} | | | **F221** | **Valores predeterminados explícitos** | **No**{.text-danger} | | | **F261** | **Expresión CASE** | **Sí**{.text-success} | | -| F261-01 | Caso simple | Sí{.text-success} | | -| F261-02 | CASO buscado | Sí{.text-success} | | -| F261-03 | NULLIF | Sí{.text-success} | | -| F261-04 | COALESCE | Sí{.text-success} | | +| F261-01 | Caso simple | Sí {.text-success} | | +| F261-02 | CASO buscado | Sí {.text-success} | | +| F261-03 | NULLIF | Sí {.text-success} | | +| F261-04 | COALESCE | Sí {.text-success} | | | **F311** | **Instrucción de definición de esquema** | **Parcial**{.text-warning} | | -| F311-01 | CREATE SCHEMA | No{.text-danger} | | -| F311-02 | CREATE TABLE para tablas base persistentes | Sí{.text-success} | | -| F311-03 | CREATE VIEW | Sí{.text-success} | | -| F311-04 | CREATE VIEW: WITH CHECK OPTION | No{.text-danger} | | -| F311-05 | Declaración GRANT | Sí{.text-success} | | +| F311-01 | CREATE SCHEMA | No {.text-danger} | | +| F311-02 | CREATE TABLE para tablas base persistentes | Sí {.text-success} | | +| F311-03 | CREATE VIEW | Sí {.text-success} | | +| F311-04 | CREATE VIEW: WITH CHECK OPTION | No {.text-danger} | | +| F311-05 | Declaración GRANT | Sí {.text-success} | | | **F471** | **Valores escalares de la subconsulta** | **Sí**{.text-success} | | | **F481** | **Predicado NULL expandido** | **Sí**{.text-success} | | | **F812** | **Marcado básico** | **No**{.text-danger} | | | **T321** | **Rutinas básicas invocadas por SQL** | **No**{.text-danger} | | -| T321-01 | Funciones definidas por el usuario sin sobrecarga | No{.text-danger} | | -| T321-02 | Procedimientos almacenados definidos por el usuario sin sobrecarga | No{.text-danger} | | -| T321-03 | Invocación de función | No{.text-danger} | | -| T321-04 | Declaración de LLAMADA | No{.text-danger} | | -| T321-05 | Declaración DEVOLUCIÓN | No{.text-danger} | | +| T321-01 | Funciones definidas por el usuario sin sobrecarga | No {.text-danger} | | +| T321-02 | Procedimientos almacenados definidos por el usuario sin sobrecarga | No {.text-danger} | | +| T321-03 | Invocación de función | No {.text-danger} | | +| T321-04 | Declaración de LLAMADA | No {.text-danger} | | +| T321-05 | Declaración DEVOLUCIÓN | No {.text-danger} | | | **T631** | **Predicado IN con un elemento de lista** | **Sí**{.text-success} | | diff --git a/docs/fa/development/developer-instruction.md b/docs/fa/development/developer-instruction.md index ee78050da07..9284d3a511a 100644 --- a/docs/fa/development/developer-instruction.md +++ b/docs/fa/development/developer-instruction.md @@ -259,8 +259,8 @@ KDevelop و QTCreator دیگر از جایگزین های بسیار خوبی ا sudo apt install wget xz-utils - wget https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz - wget https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz + wget https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz + wget https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz xz -v -d hits_v1.tsv.xz xz -v -d visits_v1.tsv.xz diff --git 
a/docs/fa/development/style.md b/docs/fa/development/style.md index 67af6fa1e76..c265a7426bd 100644 --- a/docs/fa/development/style.md +++ b/docs/fa/development/style.md @@ -580,7 +580,7 @@ ready_any.set(); **14.** ارزش بازگشت. -در اکثر موارد فقط استفاده کنید `return`. ننویس `[return std::move(res)]{.strike}`. +در اکثر موارد فقط استفاده کنید `return`. ننویس `return std::move(res)`. اگر تابع یک شی در پشته اختصاص و بازده, استفاده `shared_ptr` یا `unique_ptr`. @@ -674,7 +674,7 @@ Loader() {} **24.** استفاده نشود `trailing return type` برای توابع مگر اینکه لازم باشد. ``` cpp -[auto f() -> void;]{.strike} +auto f() -> void ``` **25.** اعلامیه و مقدار دهی اولیه از متغیرهای. diff --git a/docs/fa/getting-started/example-datasets/metrica.md b/docs/fa/getting-started/example-datasets/metrica.md index b1bdf3fd131..ac6743309ef 100644 --- a/docs/fa/getting-started/example-datasets/metrica.md +++ b/docs/fa/getting-started/example-datasets/metrica.md @@ -10,14 +10,14 @@ toc_title: "\u06CC\u0627\u0646\u062F\u06A9\u0633\u0627\u0637\u0644\u0627\u0639\u مجموعه داده شامل دو جدول حاوی داده های ناشناس در مورد بازدید (`hits_v1`) و بازدیدکننده داشته است (`visits_v1`) یاندکس . متریکا شما می توانید اطلاعات بیشتر در مورد یاندکس به عنوان خوانده شده.متریکا در [تاریخچه کلیک](../../introduction/history.md) بخش. -مجموعه داده ها شامل دو جدول است که هر کدام می توانند به عنوان یک فشرده دانلود شوند `tsv.xz` فایل و یا به عنوان پارتیشن تهیه شده است. علاوه بر این, یک نسخه طولانی از `hits` جدول حاوی 100 میلیون ردیف به عنوان تسو در دسترس است https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_100m_obfuscated_v1.tsv.xz و به عنوان پارتیشن تهیه شده در https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_100m_obfuscated_v1.tar.xz. +مجموعه داده ها شامل دو جدول است که هر کدام می توانند به عنوان یک فشرده دانلود شوند `tsv.xz` فایل و یا به عنوان پارتیشن تهیه شده است. علاوه بر این, یک نسخه طولانی از `hits` جدول حاوی 100 میلیون ردیف به عنوان تسو در دسترس است https://datasets.clickhouse.tech/hits/tsv/hits_100m_obfuscated_v1.tsv.xz و به عنوان پارتیشن تهیه شده در https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz. 
## اخذ جداول از پارتیشن های تهیه شده {#obtaining-tables-from-prepared-partitions} دانلود و وارد کردن جدول بازدید: ``` bash -curl -O https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_v1.tar +curl -O https://datasets.clickhouse.tech/hits/partitions/hits_v1.tar tar xvf hits_v1.tar -C /var/lib/clickhouse # path to ClickHouse data directory # check permissions on unpacked data, fix if required sudo service clickhouse-server restart @@ -27,7 +27,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1" دانلود و وارد کردن بازدیدکننده داشته است: ``` bash -curl -O https://clickhouse-datasets.s3.yandex.net/visits/partitions/visits_v1.tar +curl -O https://datasets.clickhouse.tech/visits/partitions/visits_v1.tar tar xvf visits_v1.tar -C /var/lib/clickhouse # path to ClickHouse data directory # check permissions on unpacked data, fix if required sudo service clickhouse-server restart @@ -39,7 +39,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1" دانلود و وارد کردن بازدید از فایل تسو فشرده: ``` bash -curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv +curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv # now create table clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets" clickhouse-client --query "CREATE TABLE datasets.hits_v1 ( WatchID UInt64, JavaEnable UInt8, Title String, GoodEvent Int16, EventTime DateTime, EventDate Date, CounterID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RegionID UInt32, UserID UInt64, CounterClass Int8, OS UInt8, UserAgent UInt8, URL String, Referer String, URLDomain String, RefererDomain String, Refresh UInt8, IsRobot UInt8, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), ResolutionWidth UInt16, ResolutionHeight UInt16, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, FlashMinor2 String, NetMajor UInt8, NetMinor UInt8, UserAgentMajor UInt16, UserAgentMinor FixedString(2), CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, MobilePhone UInt8, MobilePhoneModel String, Params String, IPNetworkID UInt32, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, IsArtifical UInt8, WindowClientWidth UInt16, WindowClientHeight UInt16, ClientTimeZone Int16, ClientEventTime DateTime, SilverlightVersion1 UInt8, SilverlightVersion2 UInt8, SilverlightVersion3 UInt32, SilverlightVersion4 UInt16, PageCharset String, CodeVersion UInt32, IsLink UInt8, IsDownload UInt8, IsNotBounce UInt8, FUniqID UInt64, HID UInt32, IsOldCounter UInt8, IsEvent UInt8, IsParameter UInt8, DontCountHits UInt8, WithHash UInt8, HitColor FixedString(1), UTCEventTime DateTime, Age UInt8, Sex UInt8, Income UInt8, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), RemoteIP UInt32, RemoteIP6 FixedString(16), WindowName Int32, OpenerName Int32, HistoryLength Int16, BrowserLanguage FixedString(2), BrowserCountry FixedString(2), SocialNetwork String, SocialAction String, HTTPError UInt16, SendTiming Int32, DNSTiming Int32, ConnectTiming Int32, ResponseStartTiming Int32, ResponseEndTiming Int32, FetchTiming Int32, RedirectTiming Int32, DOMInteractiveTiming Int32, DOMContentLoadedTiming Int32, DOMCompleteTiming Int32, LoadEventStartTiming Int32, LoadEventEndTiming Int32, NSToDOMContentLoadedTiming Int32, FirstPaintTiming Int32, RedirectCount Int8, SocialSourceNetworkID UInt8, SocialSourcePage String, ParamPrice Int64, 
ParamOrderID String, ParamCurrency FixedString(3), ParamCurrencyID UInt16, GoalsReached Array(UInt32), OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, RefererHash UInt64, URLHash UInt64, CLID UInt32, YCLID UInt64, ShareService String, ShareURL String, ShareTitle String, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), IslandID FixedString(16), RequestNum UInt32, RequestTry UInt8) ENGINE = MergeTree() PARTITION BY toYYYYMM(EventDate) ORDER BY (CounterID, EventDate, intHash32(UserID)) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192" @@ -53,7 +53,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1" دانلود و واردات بازدیدکننده داشته است از فشرده فایل: ``` bash -curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv +curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv # now create table clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets" clickhouse-client --query "CREATE TABLE datasets.visits_v1 ( CounterID UInt32, StartDate Date, Sign Int8, IsNew UInt8, VisitID UInt64, UserID UInt64, StartTime DateTime, Duration UInt32, UTCStartTime DateTime, PageViews Int32, Hits Int32, IsBounce UInt8, Referer String, StartURL String, RefererDomain String, StartURLDomain String, EndURL String, LinkURL String, IsDownload UInt8, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, PlaceID Int32, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), IsYandex UInt8, GoalReachesDepth Int32, GoalReachesURL Int32, GoalReachesAny Int32, SocialSourceNetworkID UInt8, SocialSourcePage String, MobilePhoneModel String, ClientEventTime DateTime, RegionID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RemoteIP UInt32, RemoteIP6 FixedString(16), IPNetworkID UInt32, SilverlightVersion3 UInt32, CodeVersion UInt32, ResolutionWidth UInt16, ResolutionHeight UInt16, UserAgentMajor UInt16, UserAgentMinor UInt16, WindowClientWidth UInt16, WindowClientHeight UInt16, SilverlightVersion2 UInt8, SilverlightVersion4 UInt16, FlashVersion3 UInt16, FlashVersion4 UInt16, ClientTimeZone Int16, OS UInt8, UserAgent UInt8, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, NetMajor UInt8, NetMinor UInt8, MobilePhone UInt8, SilverlightVersion1 UInt8, Age UInt8, Sex UInt8, Income UInt8, JavaEnable UInt8, CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, BrowserLanguage UInt16, BrowserCountry UInt16, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), Params Array(String), Goals Nested(ID UInt32, Serial UInt32, EventTime DateTime, Price Int64, OrderID String, CurrencyID UInt32), WatchIDs Array(UInt64), ParamSumPrice Int64, ParamCurrency FixedString(3), ParamCurrencyID UInt16, ClickLogID UInt64, ClickEventID Int32, ClickGoodEvent Int32, ClickEventTime DateTime, ClickPriorityID Int32, ClickPhraseID Int32, ClickPageID Int32, ClickPlaceID Int32, ClickTypeID Int32, ClickResourceID Int32, ClickCost UInt32, ClickClientIP UInt32, ClickDomainID UInt32, ClickURL String, ClickAttempt UInt8, ClickOrderID UInt32, ClickBannerID UInt32, ClickMarketCategoryID UInt32, ClickMarketPP UInt32, ClickMarketCategoryName String, ClickMarketPPName 
String, ClickAWAPSCampaignName String, ClickPageName String, ClickTargetType UInt16, ClickTargetPhraseID UInt64, ClickContextType UInt8, ClickSelectType Int8, ClickOptions String, ClickGroupBannerID Int32, OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, FirstVisit DateTime, PredLastVisit Date, LastVisit Date, TotalVisits UInt32, TraficSource Nested(ID Int8, SearchEngineID UInt16, AdvEngineID UInt8, PlaceID UInt16, SocialSourceNetworkID UInt8, Domain String, SearchPhrase String, SocialSourcePage String), Attendance FixedString(16), CLID UInt32, YCLID UInt64, NormalizedRefererHash UInt64, SearchPhraseHash UInt64, RefererDomainHash UInt64, NormalizedStartURLHash UInt64, StartURLDomainHash UInt64, NormalizedEndURLHash UInt64, TopLevelDomain UInt64, URLScheme UInt64, OpenstatServiceNameHash UInt64, OpenstatCampaignIDHash UInt64, OpenstatAdIDHash UInt64, OpenstatSourceIDHash UInt64, UTMSourceHash UInt64, UTMMediumHash UInt64, UTMCampaignHash UInt64, UTMContentHash UInt64, UTMTermHash UInt64, FromHash UInt64, WebVisorEnabled UInt8, WebVisorActivity UInt32, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), Market Nested(Type UInt8, GoalID UInt32, OrderID String, OrderPrice Int64, PP UInt32, DirectPlaceID UInt32, DirectOrderID UInt32, DirectBannerID UInt32, GoodID String, GoodName String, GoodQuantity Int32, GoodPrice Int64), IslandID FixedString(16)) ENGINE = CollapsingMergeTree(Sign) PARTITION BY toYYYYMM(StartDate) ORDER BY (CounterID, StartDate, intHash32(UserID), VisitID) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192" diff --git a/docs/fa/getting-started/example-datasets/nyc-taxi.md b/docs/fa/getting-started/example-datasets/nyc-taxi.md index 56255e1e09b..32fbd471b32 100644 --- a/docs/fa/getting-started/example-datasets/nyc-taxi.md +++ b/docs/fa/getting-started/example-datasets/nyc-taxi.md @@ -286,7 +286,7 @@ SELECT formatReadableSize(sum(bytes)) FROM system.parts WHERE table = 'trips_mer ## دانلود پارتیشن های تهیه شده {#download-of-prepared-partitions} ``` bash -$ curl -O https://clickhouse-datasets.s3.yandex.net/trips_mergetree/partitions/trips_mergetree.tar +$ curl -O https://datasets.clickhouse.tech/trips_mergetree/partitions/trips_mergetree.tar $ tar xvf trips_mergetree.tar -C /var/lib/clickhouse # path to ClickHouse data directory $ # check permissions of unpacked data, fix if required $ sudo service clickhouse-server restart diff --git a/docs/fa/getting-started/example-datasets/ontime.md b/docs/fa/getting-started/example-datasets/ontime.md index e8cc60113e7..7282c3c29bb 100644 --- a/docs/fa/getting-started/example-datasets/ontime.md +++ b/docs/fa/getting-started/example-datasets/ontime.md @@ -156,7 +156,7 @@ $ for i in *.zip; do echo $i; unzip -cq $i '*.csv' | sed 's/\.00//g' | clickhous ## دانلود پارتیشن های تهیه شده {#download-of-prepared-partitions} ``` bash -$ curl -O https://clickhouse-datasets.s3.yandex.net/ontime/partitions/ontime.tar +$ curl -O https://datasets.clickhouse.tech/ontime/partitions/ontime.tar $ tar xvf ontime.tar -C /var/lib/clickhouse # path to ClickHouse data directory $ # check permissions of unpacked data, fix if required $ sudo service clickhouse-server restart diff --git a/docs/fa/getting-started/tutorial.md b/docs/fa/getting-started/tutorial.md index 933dae2b55f..595d17667ae 100644 --- 
a/docs/fa/getting-started/tutorial.md +++ b/docs/fa/getting-started/tutorial.md @@ -87,8 +87,8 @@ clickhouse-client --query='INSERT INTO table FORMAT TabSeparated' < data.tsv ### دانلود و استخراج داده های جدول {#download-and-extract-table-data} ``` bash -curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv -curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv +curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv +curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv ``` فایل های استخراج شده حدود 10 گیگابایت است. diff --git a/docs/fa/operations/performance-test.md b/docs/fa/operations/performance-test.md index 1d061769d64..4bd5cc2c15f 100644 --- a/docs/fa/operations/performance-test.md +++ b/docs/fa/operations/performance-test.md @@ -48,7 +48,7 @@ toc_title: "\u0633\u062E\u062A \u0627\u0641\u0632\u0627\u0631 \u062A\u0633\u062A - wget https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_100m_obfuscated_v1.tar.xz + wget https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz tar xvf hits_100m_obfuscated_v1.tar.xz -C . mv hits_100m_obfuscated_v1/* . diff --git a/docs/fa/sql-reference/ansi.md b/docs/fa/sql-reference/ansi.md index 4e469891cd4..5be2c353157 100644 --- a/docs/fa/sql-reference/ansi.md +++ b/docs/fa/sql-reference/ansi.md @@ -26,155 +26,155 @@ toc_title: "\u0633\u0627\u0632\u06AF\u0627\u0631\u06CC \u0627\u0646\u0633\u06CC" | Feature ID | نام ویژگی | وضعیت | توضیح | |------------|---------------------------------------------------------------------------------------------------|-------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | **E011** | **انواع داده های عددی** | **نسبی**{.text-warning} | | -| E011-01 | عدد صحیح و SMALLINT انواع داده ها | بله{.text-success} | | -| E011-02 | انواع داده های دقیق و دوگانه واقعی و شناور | نسبی{.text-warning} | `FLOAT()`, `REAL` و `DOUBLE PRECISION` پشتیبانی نمیشود | -| E011-03 | دهدهی و انواع داده های عددی | نسبی{.text-warning} | فقط `DECIMAL(p,s)` پشتیبانی می شود, نه `NUMERIC` | -| E011-04 | اپراتورهای ریاضی | بله{.text-success} | | -| E011-05 | مقایسه عددی | بله{.text-success} | | -| E011-06 | ریخته گری ضمنی در میان انواع داده های عددی | نه{.text-danger} | انسی گذاشتن اجازه می دهد تا بازیگران ضمنی دلخواه بین انواع عددی, در حالی که تاتر متکی بر توابع داشتن اضافه بار متعدد به جای بازیگران ضمنی | +| E011-01 | عدد صحیح و SMALLINT انواع داده ها | بله {.text-success} | | +| E011-02 | انواع داده های دقیق و دوگانه واقعی و شناور | نسبی {.text-warning} | `FLOAT()`, `REAL` و `DOUBLE PRECISION` پشتیبانی نمیشود | +| E011-03 | دهدهی و انواع داده های عددی | نسبی {.text-warning} | فقط `DECIMAL(p,s)` پشتیبانی می شود, نه `NUMERIC` | +| E011-04 | اپراتورهای ریاضی | بله {.text-success} | | +| E011-05 | مقایسه عددی | بله {.text-success} | | +| E011-06 | ریخته گری ضمنی در میان انواع داده های عددی | نه {.text-danger} | انسی گذاشتن اجازه می دهد تا بازیگران ضمنی دلخواه بین انواع عددی, در حالی که تاتر متکی بر توابع داشتن اضافه بار متعدد به جای بازیگران ضمنی | | **E021** | **انواع رشته شخصیت** | **نسبی**{.text-warning} | | -| E021-01 | نوع دادههای نویسه | نه{.text-danger} | | -| E021-02 | شخصیت های مختلف نوع داده ها | نه{.text-danger} | `String` رفتار مشابه, اما 
بدون محدودیت طول در پرانتز | -| E021-03 | شخصیت literals | نسبی{.text-warning} | بدون الحاق خودکار از لیتر متوالی و شخصیت پشتیبانی مجموعه | -| E021-04 | تابع _شخصی | نسبی{.text-warning} | نه `USING` بند | -| E021-05 | تابع اکتبر | نه{.text-danger} | `LENGTH` رفتار مشابه | -| E021-06 | SUBSTRING | نسبی{.text-warning} | هیچ پشتیبانی برای `SIMILAR` و `ESCAPE` بند نه `SUBSTRING_REGEX` گزینه | -| E021-07 | الحاق شخصیت | نسبی{.text-warning} | نه `COLLATE` بند | -| E021-08 | توابع بالا و پایین | بله{.text-success} | | -| E021-09 | تابع اصلاح | بله{.text-success} | | -| E021-10 | ریخته گری ضمنی در میان ثابت طول و متغیر طول انواع رشته شخصیت | نه{.text-danger} | انسی گذاشتن اجازه می دهد تا بازیگران ضمنی دلخواه بین انواع رشته, در حالی که تاتر متکی بر توابع داشتن اضافه بار متعدد به جای بازیگران ضمنی | -| E021-11 | تابع موقعیت | نسبی{.text-warning} | هیچ پشتیبانی برای `IN` و `USING` بند نه `POSITION_REGEX` گزینه | -| E021-12 | مقایسه شخصیت | بله{.text-success} | | +| E021-01 | نوع دادههای نویسه | نه {.text-danger} | | +| E021-02 | شخصیت های مختلف نوع داده ها | نه {.text-danger} | `String` رفتار مشابه, اما بدون محدودیت طول در پرانتز | +| E021-03 | شخصیت literals | نسبی {.text-warning} | بدون الحاق خودکار از لیتر متوالی و شخصیت پشتیبانی مجموعه | +| E021-04 | تابع _شخصی | نسبی {.text-warning} | نه `USING` بند | +| E021-05 | تابع اکتبر | نه {.text-danger} | `LENGTH` رفتار مشابه | +| E021-06 | SUBSTRING | نسبی {.text-warning} | هیچ پشتیبانی برای `SIMILAR` و `ESCAPE` بند نه `SUBSTRING_REGEX` گزینه | +| E021-07 | الحاق شخصیت | نسبی {.text-warning} | نه `COLLATE` بند | +| E021-08 | توابع بالا و پایین | بله {.text-success} | | +| E021-09 | تابع اصلاح | بله {.text-success} | | +| E021-10 | ریخته گری ضمنی در میان ثابت طول و متغیر طول انواع رشته شخصیت | نه {.text-danger} | انسی گذاشتن اجازه می دهد تا بازیگران ضمنی دلخواه بین انواع رشته, در حالی که تاتر متکی بر توابع داشتن اضافه بار متعدد به جای بازیگران ضمنی | +| E021-11 | تابع موقعیت | نسبی {.text-warning} | هیچ پشتیبانی برای `IN` و `USING` بند نه `POSITION_REGEX` گزینه | +| E021-12 | مقایسه شخصیت | بله {.text-success} | | | **E031** | **شناسهها** | **نسبی**{.text-warning} | | -| E031-01 | شناسه های محدود | نسبی{.text-warning} | پشتیبانی تحت اللفظی یونیکد محدود است | -| E031-02 | شناسه های مورد پایین | بله{.text-success} | | -| E031-03 | انتهایی تاکید | بله{.text-success} | | +| E031-01 | شناسه های محدود | نسبی {.text-warning} | پشتیبانی تحت اللفظی یونیکد محدود است | +| E031-02 | شناسه های مورد پایین | بله {.text-success} | | +| E031-03 | انتهایی تاکید | بله {.text-success} | | | **E051** | **مشخصات پرس و جو عمومی** | **نسبی**{.text-warning} | | -| E051-01 | SELECT DISTINCT | بله{.text-success} | | -| E051-02 | گروه بر اساس بند | بله{.text-success} | | -| E051-04 | گروه توسط می تواند ستون ها را شامل نمی شود `<select list>` | بله{.text-success} | | -| E051-05 | انتخاب موارد را می توان تغییر نام داد | بله{.text-success} | | -| E051-06 | داشتن بند | بله{.text-success} | | -| E051-07 | واجد شرایط \* در انتخاب لیست | بله{.text-success} | | -| E051-08 | نام همبستگی در بند | بله{.text-success} | | -| E051-09 | تغییر نام ستون ها در بند | نه{.text-danger} | | +| E051-01 | SELECT DISTINCT | بله {.text-success} | | +| E051-02 | گروه بر اساس بند | بله {.text-success} | | +| E051-04 | گروه توسط می تواند ستون ها را شامل نمی شود `<select list>` | بله {.text-success} | | +| E051-05 | انتخاب موارد را می توان تغییر نام داد | بله {.text-success} | | +| E051-06 | داشتن بند | بله {.text-success} | | +| E051-07 | واجد شرایط \* در انتخاب لیست | بله {.text-success} | | +| E051-08 | نام همبستگی در بند | بله {.text-success} | | +| E051-09 | تغییر نام ستون ها در بند | نه {.text-danger} | | | **E061** | **مخمصه عمومی و شرایط جستجو** | **نسبی**{.text-warning} | | -| E061-01 | پیش فرض مقایسه | بله{.text-success} | | -| E061-02 | بین پیش فرض | نسبی{.text-warning} | نه `SYMMETRIC` و `ASYMMETRIC` بند | -| E061-03 | در گزاره با لیستی از ارزش ها | بله{.text-success} | | -| E061-04 | مثل گزاره | بله{.text-success} | | -| E061-05 | مانند گزاره: فرار بند | نه{.text-danger} | | -| E061-06 | پیش فرض پوچ |
بله{.text-success} | | -| E061-07 | گزاره مقایسه کمی | نه{.text-danger} | | -| E061-08 | پیش فرض وجود دارد | نه{.text-danger} | | -| E061-09 | Subqueries در مقایسه گزاره | بله{.text-success} | | -| E061-11 | در حال بارگذاری | بله{.text-success} | | -| E061-12 | زیرمجموعه ها در پیش بینی مقایسه اندازه گیری شده | نه{.text-danger} | | -| E061-13 | ارتباط subqueries | نه{.text-danger} | | -| E061-14 | وضعیت جستجو | بله{.text-success} | | +| E061-01 | پیش فرض مقایسه | بله {.text-success} | | +| E061-02 | بین پیش فرض | نسبی {.text-warning} | نه `SYMMETRIC` و `ASYMMETRIC` بند | +| E061-03 | در گزاره با لیستی از ارزش ها | بله {.text-success} | | +| E061-04 | مثل گزاره | بله {.text-success} | | +| E061-05 | مانند گزاره: فرار بند | نه {.text-danger} | | +| E061-06 | پیش فرض پوچ | بله {.text-success} | | +| E061-07 | گزاره مقایسه کمی | نه {.text-danger} | | +| E061-08 | پیش فرض وجود دارد | نه {.text-danger} | | +| E061-09 | Subqueries در مقایسه گزاره | بله {.text-success} | | +| E061-11 | در حال بارگذاری | بله {.text-success} | | +| E061-12 | زیرمجموعه ها در پیش بینی مقایسه اندازه گیری شده | نه {.text-danger} | | +| E061-13 | ارتباط subqueries | نه {.text-danger} | | +| E061-14 | وضعیت جستجو | بله {.text-success} | | | **E071** | **عبارتهای پرسوجو پایه** | **نسبی**{.text-warning} | | -| E071-01 | اتحادیه اپراتور جدول مجزا | نه{.text-danger} | | -| E071-02 | اتحادیه تمام اپراتور جدول | بله{.text-success} | | -| E071-03 | به جز اپراتور جدول مجزا | نه{.text-danger} | | -| E071-05 | ستون ترکیب از طریق اپراتورهای جدول نیاز دقیقا همان نوع داده ندارد | بله{.text-success} | | -| E071-06 | اپراتورهای جدول در زیرمجموعه | بله{.text-success} | | +| E071-01 | اتحادیه اپراتور جدول مجزا | نه {.text-danger} | | +| E071-02 | اتحادیه تمام اپراتور جدول | بله {.text-success} | | +| E071-03 | به جز اپراتور جدول مجزا | نه {.text-danger} | | +| E071-05 | ستون ترکیب از طریق اپراتورهای جدول نیاز دقیقا همان نوع داده ندارد | بله {.text-success} | | +| E071-06 | اپراتورهای جدول در زیرمجموعه | بله {.text-success} | | | **E081** | **امتیازات پایه** | **نسبی**{.text-warning} | کار در حال پیشرفت | | **E091** | **تنظیم توابع** | **بله**{.text-success} | | -| E091-01 | AVG | بله{.text-success} | | -| E091-02 | COUNT | بله{.text-success} | | -| E091-03 | MAX | بله{.text-success} | | -| E091-04 | MIN | بله{.text-success} | | -| E091-05 | SUM | بله{.text-success} | | -| E091-06 | همه کمی | نه{.text-danger} | | -| E091-07 | کمی متمایز | نسبی{.text-warning} | همه توابع مجموع پشتیبانی | +| E091-01 | AVG | بله {.text-success} | | +| E091-02 | COUNT | بله {.text-success} | | +| E091-03 | MAX | بله {.text-success} | | +| E091-04 | MIN | بله {.text-success} | | +| E091-05 | SUM | بله {.text-success} | | +| E091-06 | همه کمی | نه {.text-danger} | | +| E091-07 | کمی متمایز | نسبی {.text-warning} | همه توابع مجموع پشتیبانی | | **E101** | **دستکاری داده های پایه** | **نسبی**{.text-warning} | | -| E101-01 | درج بیانیه | بله{.text-success} | توجه داشته باشید: کلید اصلی در خانه کلیک می کند به این معنی نیست `UNIQUE` محدودیت | -| E101-03 | بیانیه به روز رسانی جستجو | نه{.text-danger} | یک `ALTER UPDATE` بیانیه ای برای اصلاح داده های دسته ای | -| E101-04 | جستجو حذف بیانیه | نه{.text-danger} | یک `ALTER DELETE` بیانیه ای برای حذف داده های دسته ای | +| E101-01 | درج بیانیه | بله {.text-success} | توجه داشته باشید: کلید اصلی در خانه کلیک می کند به این معنی نیست `UNIQUE` محدودیت | +| E101-03 | بیانیه به روز رسانی جستجو | نه {.text-danger} | یک `ALTER UPDATE` بیانیه ای برای اصلاح داده های دسته ای | +| E101-04 | جستجو حذف بیانیه | نه {.text-danger} | یک 
`ALTER DELETE` بیانیه ای برای حذف داده های دسته ای | | **E111** | **تک ردیف انتخاب بیانیه** | **نه**{.text-danger} | | | **E121** | **پشتیبانی عمومی مکان نما** | **نه**{.text-danger} | | -| E121-01 | DECLARE CURSOR | نه{.text-danger} | | -| E121-02 | سفارش ستون ها در لیست انتخاب نمی شود | نه{.text-danger} | | -| E121-03 | عبارات ارزش به ترتیب توسط بند | نه{.text-danger} | | -| E121-04 | بیانیه باز | نه{.text-danger} | | -| E121-06 | بیانیه به روز رسانی موقعیت | نه{.text-danger} | | -| E121-07 | موقعیت حذف بیانیه | نه{.text-danger} | | -| E121-08 | بستن بیانیه | نه{.text-danger} | | -| E121-10 | واکشی بیانیه: ضمنی بعدی | نه{.text-danger} | | -| E121-17 | با نشانگر نگه دارید | نه{.text-danger} | | +| E121-01 | DECLARE CURSOR | نه {.text-danger} | | +| E121-02 | سفارش ستون ها در لیست انتخاب نمی شود | نه {.text-danger} | | +| E121-03 | عبارات ارزش به ترتیب توسط بند | نه {.text-danger} | | +| E121-04 | بیانیه باز | نه {.text-danger} | | +| E121-06 | بیانیه به روز رسانی موقعیت | نه {.text-danger} | | +| E121-07 | موقعیت حذف بیانیه | نه {.text-danger} | | +| E121-08 | بستن بیانیه | نه {.text-danger} | | +| E121-10 | واکشی بیانیه: ضمنی بعدی | نه {.text-danger} | | +| E121-17 | با نشانگر نگه دارید | نه {.text-danger} | | | **E131** | **پشتیبانی ارزش صفر (صفر به جای ارزش)** | **نسبی**{.text-warning} | برخی از محدودیت ها اعمال می شود | | **E141** | **محدودیت یکپارچگی عمومی** | **نسبی**{.text-warning} | | -| E141-01 | محدودیت NOT NULL | بله{.text-success} | یادداشت: `NOT NULL` برای ستون های جدول به طور پیش فرض ضمنی | -| E141-02 | محدودیت منحصر به فرد از ستون تهی نیست | نه{.text-danger} | | -| E141-03 | محدودیت های کلیدی اولیه | نه{.text-danger} | | -| E141-04 | محدودیت کلید خارجی عمومی با هیچ پیش فرض اقدام برای هر دو عمل حذف ارجاعی و عمل به روز رسانی ارجاعی | نه{.text-danger} | | -| E141-06 | بررسی محدودیت | بله{.text-success} | | -| E141-07 | پیشفرض ستون | بله{.text-success} | | -| E141-08 | تهی نیست استنباط در کلید اولیه | بله{.text-success} | | -| E141-10 | نام در یک کلید خارجی را می توان در هر سفارش مشخص شده است | نه{.text-danger} | | +| E141-01 | محدودیت NOT NULL | بله {.text-success} | یادداشت: `NOT NULL` برای ستون های جدول به طور پیش فرض ضمنی | +| E141-02 | محدودیت منحصر به فرد از ستون تهی نیست | نه {.text-danger} | | +| E141-03 | محدودیت های کلیدی اولیه | نه {.text-danger} | | +| E141-04 | محدودیت کلید خارجی عمومی با هیچ پیش فرض اقدام برای هر دو عمل حذف ارجاعی و عمل به روز رسانی ارجاعی | نه {.text-danger} | | +| E141-06 | بررسی محدودیت | بله {.text-success} | | +| E141-07 | پیشفرض ستون | بله {.text-success} | | +| E141-08 | تهی نیست استنباط در کلید اولیه | بله {.text-success} | | +| E141-10 | نام در یک کلید خارجی را می توان در هر سفارش مشخص شده است | نه {.text-danger} | | | **E151** | **پشتیبانی تراکنش** | **نه**{.text-danger} | | -| E151-01 | بیانیه متعهد | نه{.text-danger} | | -| E151-02 | بیانیه عقبگرد | نه{.text-danger} | | +| E151-01 | بیانیه متعهد | نه {.text-danger} | | +| E151-02 | بیانیه عقبگرد | نه {.text-danger} | | | **E152** | **بیانیه معامله عمومی مجموعه** | **نه**{.text-danger} | | -| E152-01 | مجموعه بیانیه معامله: جداسازی سطح SERIALIZABLE بند | نه{.text-danger} | | -| E152-02 | تنظیم بیانیه معامله: فقط خواندن و خواندن نوشتن جملات | نه{.text-danger} | | +| E152-01 | مجموعه بیانیه معامله: جداسازی سطح SERIALIZABLE بند | نه {.text-danger} | | +| E152-02 | تنظیم بیانیه معامله: فقط خواندن و خواندن نوشتن جملات | نه {.text-danger} | | | **E153** | **نمایش داده شد بهروز با زیرمجموعه** | **نه**{.text-danger} | | | **E161** | **گذاشتن نظرات با استفاده از منجر منهای دو** | 
**بله**{.text-success} | | | **E171** | **SQLSTATE پشتیبانی** | **نه**{.text-danger} | | | **E182** | **اتصال زبان میزبان** | **نه**{.text-danger} | | | **F031** | **دستکاری طرح اولیه** | **نسبی**{.text-warning} | | -| F031-01 | ایجاد بیانیه جدول برای ایجاد جداول پایه مداوم | نسبی{.text-warning} | نه `SYSTEM VERSIONING`, `ON COMMIT`, `GLOBAL`, `LOCAL`, `PRESERVE`, `DELETE`, `REF IS`, `WITH OPTIONS`, `UNDER`, `LIKE`, `PERIOD FOR` بند و هیچ پشتیبانی برای کاربر حل و فصل انواع داده ها | -| F031-02 | ایجاد نمایش بیانیه | نسبی{.text-warning} | نه `RECURSIVE`, `CHECK`, `UNDER`, `WITH OPTIONS` بند و هیچ پشتیبانی برای کاربر حل و فصل انواع داده ها | -| F031-03 | بیانیه گرانت | بله{.text-success} | | -| F031-04 | تغییر بیانیه جدول: اضافه کردن بند ستون | نسبی{.text-warning} | هیچ پشتیبانی برای `GENERATED` بند و مدت زمان سیستم | -| F031-13 | بیانیه جدول قطره: محدود کردن بند | نه{.text-danger} | | -| F031-16 | قطره مشاهده بیانیه: محدود بند | نه{.text-danger} | | -| F031-19 | لغو بیانیه: محدود کردن بند | نه{.text-danger} | | +| F031-01 | ایجاد بیانیه جدول برای ایجاد جداول پایه مداوم | نسبی {.text-warning} | نه `SYSTEM VERSIONING`, `ON COMMIT`, `GLOBAL`, `LOCAL`, `PRESERVE`, `DELETE`, `REF IS`, `WITH OPTIONS`, `UNDER`, `LIKE`, `PERIOD FOR` بند و هیچ پشتیبانی برای کاربر حل و فصل انواع داده ها | +| F031-02 | ایجاد نمایش بیانیه | نسبی {.text-warning} | نه `RECURSIVE`, `CHECK`, `UNDER`, `WITH OPTIONS` بند و هیچ پشتیبانی برای کاربر حل و فصل انواع داده ها | +| F031-03 | بیانیه گرانت | بله {.text-success} | | +| F031-04 | تغییر بیانیه جدول: اضافه کردن بند ستون | نسبی {.text-warning} | هیچ پشتیبانی برای `GENERATED` بند و مدت زمان سیستم | +| F031-13 | بیانیه جدول قطره: محدود کردن بند | نه {.text-danger} | | +| F031-16 | قطره مشاهده بیانیه: محدود بند | نه {.text-danger} | | +| F031-19 | لغو بیانیه: محدود کردن بند | نه {.text-danger} | | | **F041** | **جدول پیوست عمومی** | **نسبی**{.text-warning} | | -| F041-01 | عضویت داخلی (اما نه لزوما کلمه کلیدی درونی) | بله{.text-success} | | -| F041-02 | کلیدواژه داخلی | بله{.text-success} | | -| F041-03 | LEFT OUTER JOIN | بله{.text-success} | | -| F041-04 | RIGHT OUTER JOIN | بله{.text-success} | | -| F041-05 | بیرونی می پیوندد می توان تو در تو | بله{.text-success} | | -| F041-07 | جدول درونی در بیرونی چپ یا راست عضویت نیز می تواند مورد استفاده قرار گیرد در عضویت درونی | بله{.text-success} | | -| F041-08 | همه اپراتورهای مقایسه پشتیبانی می شوند (و نه فقط =) | نه{.text-danger} | | -| **F051** | **تاریخ پایه و زمان** | **نسبی**{.text-warning} | | -| F051-01 | تاریخ نوع داده (از جمله پشتیبانی از تاریخ تحت اللفظی) | نسبی{.text-warning} | بدون تحت اللفظی | -| F051-02 | نوع داده زمان (از جمله پشتیبانی از زمان تحت اللفظی) با دقت ثانیه کسری حداقل 0 | نه{.text-danger} | | -| F051-03 | نوع داده برچسب زمان (از جمله پشتیبانی از تحت اللفظی برچسب زمان) با دقت ثانیه کسری از حداقل 0 و 6 | نه{.text-danger} | `DateTime64` زمان فراهم می کند قابلیت های مشابه | -| F051-04 | مقایسه گزاره در تاریخ, زمان, و انواع داده های برچسب زمان | نسبی{.text-warning} | فقط یک نوع داده موجود است | -| F051-05 | بازیگران صریح و روشن بین انواع تاریخ ساعت و انواع رشته شخصیت | بله{.text-success} | | -| F051-06 | CURRENT_DATE | نه{.text-danger} | `today()` مشابه است | -| F051-07 | LOCALTIME | نه{.text-danger} | `now()` مشابه است | -| F051-08 | LOCALTIMESTAMP | نه{.text-danger} | | +| F041-01 | عضویت داخلی (اما نه لزوما کلمه کلیدی درونی) | بله {.text-success} | | +| F041-02 | کلیدواژه داخلی | بله {.text-success} | | +| F041-03 | LEFT OUTER JOIN | بله {.text-success} | | +| F041-04 | RIGHT OUTER JOIN | بله 
{.text-success} | | +| F041-05 | بیرونی می پیوندد می توان تو در تو | بله {.text-success} | | +| F041-07 | جدول درونی در بیرونی چپ یا راست عضویت نیز می تواند مورد استفاده قرار گیرد در عضویت درونی | بله {.text-success} | | +| F041-08 | همه اپراتورهای مقایسه پشتیبانی می شوند (و نه فقط =) | نه {.text-danger} | | +| **F051** | **تاریخ پایه و زمان** | **نسبی** {.text-warning} | | +| F051-01 | تاریخ نوع داده (از جمله پشتیبانی از تاریخ تحت اللفظی) | نسبی {.text-warning} | بدون تحت اللفظی | +| F051-02 | نوع داده زمان (از جمله پشتیبانی از زمان تحت اللفظی) با دقت ثانیه کسری حداقل 0 | نه {.text-danger} | | +| F051-03 | نوع داده برچسب زمان (از جمله پشتیبانی از تحت اللفظی برچسب زمان) با دقت ثانیه کسری از حداقل 0 و 6 | نه {.text-danger} | `DateTime64` زمان فراهم می کند قابلیت های مشابه | +| F051-04 | مقایسه گزاره در تاریخ, زمان, و انواع داده های برچسب زمان | نسبی {.text-warning} | فقط یک نوع داده موجود است | +| F051-05 | بازیگران صریح و روشن بین انواع تاریخ ساعت و انواع رشته شخصیت | بله {.text-success} | | +| F051-06 | CURRENT_DATE | نه {.text-danger} | `today()` مشابه است | +| F051-07 | LOCALTIME | نه {.text-danger} | `now()` مشابه است | +| F051-08 | LOCALTIMESTAMP | نه {.text-danger} | | | **F081** | **اتحادیه و به جز در دیدگاه** | **نسبی**{.text-warning} | | | **F131** | **عملیات گروه بندی شده** | **نسبی**{.text-warning} | | -| F131-01 | جایی که, گروه های, و داشتن بند در نمایش داده شد با نمایش گروه بندی پشتیبانی | بله{.text-success} | | -| F131-02 | جداول چندگانه در نمایش داده شد با نمایش گروه بندی پشتیبانی می شود | بله{.text-success} | | -| F131-03 | تنظیم توابع پشتیبانی شده در نمایش داده شد با نمایش گروه بندی می شوند | بله{.text-success} | | -| F131-04 | Subqueries با گروه و داشتن بند و گروه بندی views | بله{.text-success} | | -| F131-05 | تک ردیف با گروه و داشتن بند و دیدگاه های گروه بندی شده را انتخاب کنید | نه{.text-danger} | | +| F131-01 | جایی که, گروه های, و داشتن بند در نمایش داده شد با نمایش گروه بندی پشتیبانی | بله {.text-success} | | +| F131-02 | جداول چندگانه در نمایش داده شد با نمایش گروه بندی پشتیبانی می شود | بله {.text-success} | | +| F131-03 | تنظیم توابع پشتیبانی شده در نمایش داده شد با نمایش گروه بندی می شوند | بله {.text-success} | | +| F131-04 | Subqueries با گروه و داشتن بند و گروه بندی views | بله {.text-success} | | +| F131-05 | تک ردیف با گروه و داشتن بند و دیدگاه های گروه بندی شده را انتخاب کنید | نه {.text-danger} | | | **F181** | **پشتیبانی از ماژول های متعدد** | **نه**{.text-danger} | | | **F201** | **تابع بازیگران** | **بله**{.text-success} | | | **F221** | **پیش فرض های صریح** | **نه**{.text-danger} | | | **F261** | **عبارت مورد** | **بله**{.text-success} | | -| F261-01 | مورد ساده | بله{.text-success} | | -| F261-02 | مورد جستجو | بله{.text-success} | | -| F261-03 | NULLIF | بله{.text-success} | | -| F261-04 | COALESCE | بله{.text-success} | | +| F261-01 | مورد ساده | بله {.text-success} | | +| F261-02 | مورد جستجو | بله {.text-success} | | +| F261-03 | NULLIF | بله {.text-success} | | +| F261-04 | COALESCE | بله {.text-success} | | | **F311** | **بیانیه تعریف طرح** | **نسبی**{.text-warning} | | -| F311-01 | CREATE SCHEMA | نه{.text-danger} | | -| F311-02 | ایجاد جدول برای جداول پایه مداوم | بله{.text-success} | | -| F311-03 | CREATE VIEW | بله{.text-success} | | -| F311-04 | CREATE VIEW: WITH CHECK OPTION | نه{.text-danger} | | -| F311-05 | بیانیه گرانت | بله{.text-success} | | +| F311-01 | CREATE SCHEMA | نه {.text-danger} | | +| F311-02 | ایجاد جدول برای جداول پایه مداوم | بله {.text-success} | | +| F311-03 | CREATE VIEW | بله {.text-success} | | +| F311-04 | CREATE 
VIEW: WITH CHECK OPTION | نه {.text-danger} | | +| F311-05 | بیانیه گرانت | بله {.text-success} | | | **F471** | **مقادیر زیر مقیاس** | **بله**{.text-success} | | | **F481** | **پیش فرض صفر گسترش یافته است** | **بله**{.text-success} | | | **F812** | **عمومی ضعیف** | **نه**{.text-danger} | | | **T321** | **روال عمومی گذاشتن استناد** | **نه**{.text-danger} | | -| T321-01 | توابع تعریف شده توسط کاربر بدون اضافه بار | نه{.text-danger} | | -| T321-02 | روش های ذخیره شده تعریف شده توسط کاربر بدون اضافه بار | نه{.text-danger} | | -| T321-03 | فراخوانی تابع | نه{.text-danger} | | -| T321-04 | بیانیه تماس | نه{.text-danger} | | -| T321-05 | بیانیه بازگشت | نه{.text-danger} | | +| T321-01 | توابع تعریف شده توسط کاربر بدون اضافه بار | نه {.text-danger} | | +| T321-02 | روش های ذخیره شده تعریف شده توسط کاربر بدون اضافه بار | نه {.text-danger} | | +| T321-03 | فراخوانی تابع | نه {.text-danger} | | +| T321-04 | بیانیه تماس | نه {.text-danger} | | +| T321-05 | بیانیه بازگشت | نه {.text-danger} | | | **T631** | **در گزاره با یک عنصر لیست** | **بله**{.text-success} | | diff --git a/docs/fr/development/developer-instruction.md b/docs/fr/development/developer-instruction.md index e78ed7ba6d9..610216925c3 100644 --- a/docs/fr/development/developer-instruction.md +++ b/docs/fr/development/developer-instruction.md @@ -257,8 +257,8 @@ Le développement de ClickHouse nécessite souvent le chargement d'ensembles de sudo apt install wget xz-utils - wget https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz - wget https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz + wget https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz + wget https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz xz -v -d hits_v1.tsv.xz xz -v -d visits_v1.tsv.xz diff --git a/docs/fr/development/style.md b/docs/fr/development/style.md index 1259962b577..ad3b470f7b4 100644 --- a/docs/fr/development/style.md +++ b/docs/fr/development/style.md @@ -579,7 +579,7 @@ Si une fonction capture la propriété d'un objet créé dans le tas, définisse **14.** Les valeurs de retour. -Dans la plupart des cas, il suffit d'utiliser `return`. Ne pas écrire `[return std::move(res)]{.strike}`. +Dans la plupart des cas, il suffit d'utiliser `return`. Ne pas écrire `return std::move(res)`. Si la fonction alloue un objet sur le tas et le renvoie, utilisez `shared_ptr` ou `unique_ptr`. @@ -673,7 +673,7 @@ Toujours utiliser `#pragma once` au lieu d'inclure des gardes. **24.** Ne pas utiliser de `trailing return type` pour les fonctions, sauf si nécessaire. ``` cpp -[auto f() -> void;]{.strike} +auto f() -> void ``` **25.** Déclaration et initialisation des variables. diff --git a/docs/fr/getting-started/example-datasets/metrica.md b/docs/fr/getting-started/example-datasets/metrica.md index f9d6c7b437b..453a30d436f 100644 --- a/docs/fr/getting-started/example-datasets/metrica.md +++ b/docs/fr/getting-started/example-datasets/metrica.md @@ -9,14 +9,14 @@ toc_title: "Yandex.Metrica De Donn\xE9es" Dataset se compose de deux tables contenant des données anonymisées sur les hits (`hits_v1`) et les visites (`visits_v1`) de Yandex.Metrica. Vous pouvez en savoir plus sur Yandex.Metrica dans [Histoire de ClickHouse](../../introduction/history.md) section. -L'ensemble de données se compose de deux tables, l'une d'elles peut être téléchargée sous forme compressée `tsv.xz` fichier ou comme partitions préparées. 
En outre, une version étendue de l' `hits` table contenant 100 millions de lignes est disponible comme TSV à https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_100m_obfuscated_v1.tsv.xz et comme partitions préparées à https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_100m_obfuscated_v1.tar.xz. +L'ensemble de données se compose de deux tables, l'une d'elles peut être téléchargée sous forme compressée `tsv.xz` fichier ou comme partitions préparées. En outre, une version étendue de l' `hits` table contenant 100 millions de lignes est disponible comme TSV à https://datasets.clickhouse.tech/hits/tsv/hits_100m_obfuscated_v1.tsv.xz et comme partitions préparées à https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz. ## Obtention de Tables à partir de Partitions préparées {#obtaining-tables-from-prepared-partitions} Télécharger et importer la table hits: ``` bash -curl -O https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_v1.tar +curl -O https://datasets.clickhouse.tech/hits/partitions/hits_v1.tar tar xvf hits_v1.tar -C /var/lib/clickhouse # path to ClickHouse data directory # check permissions on unpacked data, fix if required sudo service clickhouse-server restart @@ -26,7 +26,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1" Télécharger et importer des visites: ``` bash -curl -O https://clickhouse-datasets.s3.yandex.net/visits/partitions/visits_v1.tar +curl -O https://datasets.clickhouse.tech/visits/partitions/visits_v1.tar tar xvf visits_v1.tar -C /var/lib/clickhouse # path to ClickHouse data directory # check permissions on unpacked data, fix if required sudo service clickhouse-server restart @@ -38,7 +38,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1" Télécharger et importer des hits à partir du fichier TSV compressé: ``` bash -curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv +curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv # now create table clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets" clickhouse-client --query "CREATE TABLE datasets.hits_v1 ( WatchID UInt64, JavaEnable UInt8, Title String, GoodEvent Int16, EventTime DateTime, EventDate Date, CounterID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RegionID UInt32, UserID UInt64, CounterClass Int8, OS UInt8, UserAgent UInt8, URL String, Referer String, URLDomain String, RefererDomain String, Refresh UInt8, IsRobot UInt8, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), ResolutionWidth UInt16, ResolutionHeight UInt16, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, FlashMinor2 String, NetMajor UInt8, NetMinor UInt8, UserAgentMajor UInt16, UserAgentMinor FixedString(2), CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, MobilePhone UInt8, MobilePhoneModel String, Params String, IPNetworkID UInt32, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, IsArtifical UInt8, WindowClientWidth UInt16, WindowClientHeight UInt16, ClientTimeZone Int16, ClientEventTime DateTime, SilverlightVersion1 UInt8, SilverlightVersion2 UInt8, SilverlightVersion3 UInt32, SilverlightVersion4 UInt16, PageCharset String, CodeVersion UInt32, IsLink UInt8, IsDownload UInt8, IsNotBounce UInt8, FUniqID UInt64, HID UInt32, IsOldCounter UInt8, IsEvent UInt8, IsParameter UInt8, DontCountHits UInt8, WithHash UInt8, 
HitColor FixedString(1), UTCEventTime DateTime, Age UInt8, Sex UInt8, Income UInt8, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), RemoteIP UInt32, RemoteIP6 FixedString(16), WindowName Int32, OpenerName Int32, HistoryLength Int16, BrowserLanguage FixedString(2), BrowserCountry FixedString(2), SocialNetwork String, SocialAction String, HTTPError UInt16, SendTiming Int32, DNSTiming Int32, ConnectTiming Int32, ResponseStartTiming Int32, ResponseEndTiming Int32, FetchTiming Int32, RedirectTiming Int32, DOMInteractiveTiming Int32, DOMContentLoadedTiming Int32, DOMCompleteTiming Int32, LoadEventStartTiming Int32, LoadEventEndTiming Int32, NSToDOMContentLoadedTiming Int32, FirstPaintTiming Int32, RedirectCount Int8, SocialSourceNetworkID UInt8, SocialSourcePage String, ParamPrice Int64, ParamOrderID String, ParamCurrency FixedString(3), ParamCurrencyID UInt16, GoalsReached Array(UInt32), OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, RefererHash UInt64, URLHash UInt64, CLID UInt32, YCLID UInt64, ShareService String, ShareURL String, ShareTitle String, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), IslandID FixedString(16), RequestNum UInt32, RequestTry UInt8) ENGINE = MergeTree() PARTITION BY toYYYYMM(EventDate) ORDER BY (CounterID, EventDate, intHash32(UserID)) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192" @@ -52,7 +52,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1" Télécharger et importer des visites à partir du fichier TSV compressé: ``` bash -curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv +curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv # now create table clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets" clickhouse-client --query "CREATE TABLE datasets.visits_v1 ( CounterID UInt32, StartDate Date, Sign Int8, IsNew UInt8, VisitID UInt64, UserID UInt64, StartTime DateTime, Duration UInt32, UTCStartTime DateTime, PageViews Int32, Hits Int32, IsBounce UInt8, Referer String, StartURL String, RefererDomain String, StartURLDomain String, EndURL String, LinkURL String, IsDownload UInt8, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, PlaceID Int32, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), IsYandex UInt8, GoalReachesDepth Int32, GoalReachesURL Int32, GoalReachesAny Int32, SocialSourceNetworkID UInt8, SocialSourcePage String, MobilePhoneModel String, ClientEventTime DateTime, RegionID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RemoteIP UInt32, RemoteIP6 FixedString(16), IPNetworkID UInt32, SilverlightVersion3 UInt32, CodeVersion UInt32, ResolutionWidth UInt16, ResolutionHeight UInt16, UserAgentMajor UInt16, UserAgentMinor UInt16, WindowClientWidth UInt16, WindowClientHeight UInt16, SilverlightVersion2 UInt8, SilverlightVersion4 UInt16, FlashVersion3 UInt16, FlashVersion4 UInt16, ClientTimeZone Int16, OS UInt8, UserAgent UInt8, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, NetMajor UInt8, NetMinor UInt8, MobilePhone UInt8, SilverlightVersion1 UInt8, Age UInt8, Sex UInt8, Income UInt8, JavaEnable UInt8, CookieEnable UInt8, 
JavascriptEnable UInt8, IsMobile UInt8, BrowserLanguage UInt16, BrowserCountry UInt16, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), Params Array(String), Goals Nested(ID UInt32, Serial UInt32, EventTime DateTime, Price Int64, OrderID String, CurrencyID UInt32), WatchIDs Array(UInt64), ParamSumPrice Int64, ParamCurrency FixedString(3), ParamCurrencyID UInt16, ClickLogID UInt64, ClickEventID Int32, ClickGoodEvent Int32, ClickEventTime DateTime, ClickPriorityID Int32, ClickPhraseID Int32, ClickPageID Int32, ClickPlaceID Int32, ClickTypeID Int32, ClickResourceID Int32, ClickCost UInt32, ClickClientIP UInt32, ClickDomainID UInt32, ClickURL String, ClickAttempt UInt8, ClickOrderID UInt32, ClickBannerID UInt32, ClickMarketCategoryID UInt32, ClickMarketPP UInt32, ClickMarketCategoryName String, ClickMarketPPName String, ClickAWAPSCampaignName String, ClickPageName String, ClickTargetType UInt16, ClickTargetPhraseID UInt64, ClickContextType UInt8, ClickSelectType Int8, ClickOptions String, ClickGroupBannerID Int32, OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, FirstVisit DateTime, PredLastVisit Date, LastVisit Date, TotalVisits UInt32, TraficSource Nested(ID Int8, SearchEngineID UInt16, AdvEngineID UInt8, PlaceID UInt16, SocialSourceNetworkID UInt8, Domain String, SearchPhrase String, SocialSourcePage String), Attendance FixedString(16), CLID UInt32, YCLID UInt64, NormalizedRefererHash UInt64, SearchPhraseHash UInt64, RefererDomainHash UInt64, NormalizedStartURLHash UInt64, StartURLDomainHash UInt64, NormalizedEndURLHash UInt64, TopLevelDomain UInt64, URLScheme UInt64, OpenstatServiceNameHash UInt64, OpenstatCampaignIDHash UInt64, OpenstatAdIDHash UInt64, OpenstatSourceIDHash UInt64, UTMSourceHash UInt64, UTMMediumHash UInt64, UTMCampaignHash UInt64, UTMContentHash UInt64, UTMTermHash UInt64, FromHash UInt64, WebVisorEnabled UInt8, WebVisorActivity UInt32, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), Market Nested(Type UInt8, GoalID UInt32, OrderID String, OrderPrice Int64, PP UInt32, DirectPlaceID UInt32, DirectOrderID UInt32, DirectBannerID UInt32, GoodID String, GoodName String, GoodQuantity Int32, GoodPrice Int64), IslandID FixedString(16)) ENGINE = CollapsingMergeTree(Sign) PARTITION BY toYYYYMM(StartDate) ORDER BY (CounterID, StartDate, intHash32(UserID), VisitID) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192" diff --git a/docs/fr/getting-started/example-datasets/nyc-taxi.md b/docs/fr/getting-started/example-datasets/nyc-taxi.md index e351b9ec543..46aa944e718 100644 --- a/docs/fr/getting-started/example-datasets/nyc-taxi.md +++ b/docs/fr/getting-started/example-datasets/nyc-taxi.md @@ -285,7 +285,7 @@ Entre autres choses, vous pouvez exécuter la requête OPTIMIZE sur MergeTree. 
M ## Téléchargement des Partitions préparées {#download-of-prepared-partitions} ``` bash -$ curl -O https://clickhouse-datasets.s3.yandex.net/trips_mergetree/partitions/trips_mergetree.tar +$ curl -O https://datasets.clickhouse.tech/trips_mergetree/partitions/trips_mergetree.tar $ tar xvf trips_mergetree.tar -C /var/lib/clickhouse # path to ClickHouse data directory $ # check permissions of unpacked data, fix if required $ sudo service clickhouse-server restart diff --git a/docs/fr/getting-started/example-datasets/ontime.md b/docs/fr/getting-started/example-datasets/ontime.md index 4040e2a1900..8d036901b99 100644 --- a/docs/fr/getting-started/example-datasets/ontime.md +++ b/docs/fr/getting-started/example-datasets/ontime.md @@ -156,7 +156,7 @@ $ for i in *.zip; do echo $i; unzip -cq $i '*.csv' | sed 's/\.00//g' | clickhous ## Téléchargement des Partitions préparées {#download-of-prepared-partitions} ``` bash -$ curl -O https://clickhouse-datasets.s3.yandex.net/ontime/partitions/ontime.tar +$ curl -O https://datasets.clickhouse.tech/ontime/partitions/ontime.tar $ tar xvf ontime.tar -C /var/lib/clickhouse # path to ClickHouse data directory $ # check permissions of unpacked data, fix if required $ sudo service clickhouse-server restart diff --git a/docs/fr/getting-started/tutorial.md b/docs/fr/getting-started/tutorial.md index e8a0e8d81cb..b600456b484 100644 --- a/docs/fr/getting-started/tutorial.md +++ b/docs/fr/getting-started/tutorial.md @@ -87,8 +87,8 @@ Maintenant, il est temps de remplir notre serveur ClickHouse avec quelques exemp ### Télécharger et extraire les données de la Table {#download-and-extract-table-data} ``` bash -curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv -curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv +curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv +curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv ``` Les fichiers extraits ont une taille d'environ 10 Go. diff --git a/docs/fr/operations/performance-test.md b/docs/fr/operations/performance-test.md index 6093772aefe..7940b957642 100644 --- a/docs/fr/operations/performance-test.md +++ b/docs/fr/operations/performance-test.md @@ -48,7 +48,7 @@ Avec cette instruction, vous pouvez exécuter le test de performance clickhouse - wget https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_100m_obfuscated_v1.tar.xz + wget https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz tar xvf hits_100m_obfuscated_v1.tar.xz -C . mv hits_100m_obfuscated_v1/* . 
diff --git a/docs/fr/sql-reference/ansi.md b/docs/fr/sql-reference/ansi.md index 033d4c2c927..9fd4ed428a2 100644 --- a/docs/fr/sql-reference/ansi.md +++ b/docs/fr/sql-reference/ansi.md @@ -26,155 +26,155 @@ Le tableau suivant répertorie les cas où la fonctionnalité de requête foncti | Feature ID | Nom De La Fonctionnalité | Statut | Commentaire | |------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | **E011** | **Types de données numériques** | **Partiel**{.text-warning} | | -| E011-01 | Types de données INTEGER et SMALLINT | Oui{.text-success} | | -| E011-02 | Types de données réel, double précision et flottant types de données | Partiel{.text-warning} | `FLOAT()`, `REAL` et `DOUBLE PRECISION` ne sont pas pris en charge | -| E011-03 | Types de données décimales et numériques | Partiel{.text-warning} | Seulement `DECIMAL(p,s)` est pris en charge, pas `NUMERIC` | -| E011-04 | Opérateurs arithmétiques | Oui{.text-success} | | -| E011-05 | Comparaison numérique | Oui{.text-success} | | -| E011-06 | Casting implicite parmi les types de données numériques | Aucun{.text-danger} | ANSI SQL permet la distribution implicite arbitraire entre les types numériques, tandis que ClickHouse repose sur des fonctions ayant plusieurs surcharges au lieu de la distribution implicite | +| E011-01 | Types de données INTEGER et SMALLINT | Oui {.text-success} | | +| E011-02 | Types de données réel, double précision et flottant types de données | Partiel {.text-warning} | `FLOAT()`, `REAL` et `DOUBLE PRECISION` ne sont pas pris en charge | +| E011-03 | Types de données décimales et numériques | Partiel {.text-warning} | Seulement `DECIMAL(p,s)` est pris en charge, pas `NUMERIC` | +| E011-04 | Opérateurs arithmétiques | Oui {.text-success} | | +| E011-05 | Comparaison numérique | Oui {.text-success} | | +| E011-06 | Casting implicite parmi les types de données numériques | Aucun {.text-danger} | ANSI SQL permet la distribution implicite arbitraire entre les types numériques, tandis que ClickHouse repose sur des fonctions ayant plusieurs surcharges au lieu de la distribution implicite | | **E021** | **Types de chaînes de caractères** | **Partiel**{.text-warning} | | -| E021-01 | Type de données CARACTÈRE | Aucun{.text-danger} | | -| E021-02 | TYPE DE DONNÉES variable de caractère | Aucun{.text-danger} | `String` se comporte de la même manière, mais sans limite de longueur entre parenthèses | -| E021-03 | Littéraux de caractères | Partiel{.text-warning} | Aucune concaténation automatique de littéraux consécutifs et prise en charge du jeu de caractères | -| E021-04 | Fonction CHARACTER_LENGTH | Partiel{.text-warning} | Aucun `USING` clause | -| E021-05 | Fonction OCTET_LENGTH | Aucun{.text-danger} | `LENGTH` se comporte de la même façon | -| E021-06 | SUBSTRING | Partiel{.text-warning} | Pas de support pour `SIMILAR` et `ESCAPE` clauses, pas de `SUBSTRING_REGEX` variante | -| E021-07 | Concaténation de caractères | Partiel{.text-warning} | Aucun `COLLATE` clause | -| E021-08 | Fonctions supérieures et inférieures | Oui{.text-success} | | -| E021-09 | La fonction TRIM | Oui{.text-success} | | -| E021-10 | Conversion implicite entre les types de chaînes de 
caractères de longueur fixe et de longueur variable | Aucun{.text-danger} | ANSI SQL permet la distribution implicite arbitraire entre les types de chaîne, tandis que ClickHouse repose sur des fonctions ayant plusieurs surcharges au lieu de la distribution implicite | -| E021-11 | La POSITION de la fonction | Partiel{.text-warning} | Pas de support pour `IN` et `USING` clauses, pas de `POSITION_REGEX` variante | -| E021-12 | Comparaison de caractères | Oui{.text-success} | | +| E021-01 | Type de données CARACTÈRE | Aucun {.text-danger} | | +| E021-02 | TYPE DE DONNÉES variable de caractère | Aucun {.text-danger} | `String` se comporte de la même manière, mais sans limite de longueur entre parenthèses | +| E021-03 | Littéraux de caractères | Partiel {.text-warning} | Aucune concaténation automatique de littéraux consécutifs et prise en charge du jeu de caractères | +| E021-04 | Fonction CHARACTER_LENGTH | Partiel {.text-warning} | Aucun `USING` clause | +| E021-05 | Fonction OCTET_LENGTH | Aucun {.text-danger} | `LENGTH` se comporte de la même façon | +| E021-06 | SUBSTRING | Partiel {.text-warning} | Pas de support pour `SIMILAR` et `ESCAPE` clauses, pas de `SUBSTRING_REGEX` variante | +| E021-07 | Concaténation de caractères | Partiel {.text-warning} | Aucun `COLLATE` clause | +| E021-08 | Fonctions supérieures et inférieures | Oui {.text-success} | | +| E021-09 | La fonction TRIM | Oui {.text-success} | | +| E021-10 | Conversion implicite entre les types de chaînes de caractères de longueur fixe et de longueur variable | Aucun {.text-danger} | ANSI SQL permet la distribution implicite arbitraire entre les types de chaîne, tandis que ClickHouse repose sur des fonctions ayant plusieurs surcharges au lieu de la distribution implicite | +| E021-11 | La POSITION de la fonction | Partiel {.text-warning} | Pas de support pour `IN` et `USING` clauses, pas de `POSITION_REGEX` variante | +| E021-12 | Comparaison de caractères | Oui {.text-success} | | | **E031** | **Identificateur** | **Partiel**{.text-warning} | | -| E031-01 | Identificateurs délimités | Partiel{.text-warning} | Le support littéral Unicode est limité | -| E031-02 | Identificateurs minuscules | Oui{.text-success} | | -| E031-03 | Fuite de soulignement | Oui{.text-success} | | +| E031-01 | Identificateurs délimités | Partiel {.text-warning} | Le support littéral Unicode est limité | +| E031-02 | Identificateurs minuscules | Oui {.text-success} | | +| E031-03 | Fuite de soulignement | Oui {.text-success} | | | **E051** | **Spécification de requête de base** | **Partiel**{.text-warning} | | -| E051-01 | SELECT DISTINCT | Oui{.text-success} | | -| E051-02 | Groupe par clause | Oui{.text-success} | | -| E051-04 | GROUP BY peut contenir des colonnes `` | Oui {.text-success} | | +| E051-05 | Les éléments sélectionnés peuvent être renommés | Oui {.text-success} | | +| E051-06 | Clause HAVING | Oui {.text-success} | | +| E051-07 | Qualifié \* dans la liste select | Oui {.text-success} | | +| E051-08 | Nom de corrélation dans la clause FROM | Oui {.text-success} | | +| E051-09 | Renommer les colonnes de la clause FROM | Aucun {.text-danger} | | | **E061** | **Prédicats de base et conditions de recherche** | **Partiel**{.text-warning} | | -| E061-01 | Prédicat de comparaison | Oui{.text-success} | | -| E061-02 | Entre prédicat | Partiel{.text-warning} | Aucun `SYMMETRIC` et `ASYMMETRIC` clause | -| E061-03 | Dans le prédicat avec la liste des valeurs | Oui{.text-success} | | -| E061-04 | Comme prédicat | Oui{.text-success} | | -| E061-05 | 
Comme prédicat: clause D'échappement | Aucun{.text-danger} | | -| E061-06 | Prédicat NULL | Oui{.text-success} | | -| E061-07 | Prédicat de comparaison quantifié | Aucun{.text-danger} | | -| E061-08 | Existe prédicat | Aucun{.text-danger} | | -| E061-09 | Sous-requêtes dans le prédicat de comparaison | Oui{.text-success} | | -| E061-11 | Sous-requêtes dans dans le prédicat | Oui{.text-success} | | -| E061-12 | Sous-requêtes dans le prédicat de comparaison quantifiée | Aucun{.text-danger} | | -| E061-13 | Sous-requêtes corrélées | Aucun{.text-danger} | | -| E061-14 | Condition de recherche | Oui{.text-success} | | +| E061-01 | Prédicat de comparaison | Oui {.text-success} | | +| E061-02 | Entre prédicat | Partiel {.text-warning} | Aucun `SYMMETRIC` et `ASYMMETRIC` clause | +| E061-03 | Dans le prédicat avec la liste des valeurs | Oui {.text-success} | | +| E061-04 | Comme prédicat | Oui {.text-success} | | +| E061-05 | Comme prédicat: clause D'échappement | Aucun {.text-danger} | | +| E061-06 | Prédicat NULL | Oui {.text-success} | | +| E061-07 | Prédicat de comparaison quantifié | Aucun {.text-danger} | | +| E061-08 | Existe prédicat | Aucun {.text-danger} | | +| E061-09 | Sous-requêtes dans le prédicat de comparaison | Oui {.text-success} | | +| E061-11 | Sous-requêtes dans dans le prédicat | Oui {.text-success} | | +| E061-12 | Sous-requêtes dans le prédicat de comparaison quantifiée | Aucun {.text-danger} | | +| E061-13 | Sous-requêtes corrélées | Aucun {.text-danger} | | +| E061-14 | Condition de recherche | Oui {.text-success} | | | **E071** | **Expressions de requête de base** | **Partiel**{.text-warning} | | -| E071-01 | Opérateur de table distinct UNION | Aucun{.text-danger} | | -| E071-02 | Opérateur de table UNION ALL | Oui{.text-success} | | -| E071-03 | Sauf opérateur de table DISTINCT | Aucun{.text-danger} | | -| E071-05 | Les colonnes combinées via les opérateurs de table n'ont pas besoin d'avoir exactement le même type de données | Oui{.text-success} | | -| E071-06 | Tableau des opérateurs dans les sous-requêtes | Oui{.text-success} | | +| E071-01 | Opérateur de table distinct UNION | Aucun {.text-danger} | | +| E071-02 | Opérateur de table UNION ALL | Oui {.text-success} | | +| E071-03 | Sauf opérateur de table DISTINCT | Aucun {.text-danger} | | +| E071-05 | Les colonnes combinées via les opérateurs de table n'ont pas besoin d'avoir exactement le même type de données | Oui {.text-success} | | +| E071-06 | Tableau des opérateurs dans les sous-requêtes | Oui {.text-success} | | | **E081** | **Les privilèges de base** | **Partiel**{.text-warning} | Les travaux en cours | | **E091** | **Les fonctions de jeu** | **Oui**{.text-success} | | -| E091-01 | AVG | Oui{.text-success} | | -| E091-02 | COUNT | Oui{.text-success} | | -| E091-03 | MAX | Oui{.text-success} | | -| E091-04 | MIN | Oui{.text-success} | | -| E091-05 | SUM | Oui{.text-success} | | -| E091-06 | TOUS les quantificateurs | Aucun{.text-danger} | | -| E091-07 | Quantificateur DISTINCT | Partiel{.text-warning} | Toutes les fonctions d'agrégation ne sont pas prises en charge | +| E091-01 | AVG | Oui {.text-success} | | +| E091-02 | COUNT | Oui {.text-success} | | +| E091-03 | MAX | Oui {.text-success} | | +| E091-04 | MIN | Oui {.text-success} | | +| E091-05 | SUM | Oui {.text-success} | | +| E091-06 | TOUS les quantificateurs | Aucun {.text-danger} | | +| E091-07 | Quantificateur DISTINCT | Partiel {.text-warning} | Toutes les fonctions d'agrégation ne sont pas prises en charge | | **E101** | **Manipulation des 
données de base** | **Partiel**{.text-warning} | | -| E101-01 | Insérer une déclaration | Oui{.text-success} | Remarque: la clé primaire dans ClickHouse n'implique pas `UNIQUE` contrainte | -| E101-03 | Déclaration de mise à jour recherchée | Aucun{.text-danger} | Il y a un `ALTER UPDATE` déclaration pour la modification des données de lot | -| E101-04 | Requête de suppression recherchée | Aucun{.text-danger} | Il y a un `ALTER DELETE` déclaration pour la suppression de données par lots | +| E101-01 | Insérer une déclaration | Oui {.text-success} | Remarque: la clé primaire dans ClickHouse n'implique pas `UNIQUE` contrainte | +| E101-03 | Déclaration de mise à jour recherchée | Aucun {.text-danger} | Il y a un `ALTER UPDATE` déclaration pour la modification des données de lot | +| E101-04 | Requête de suppression recherchée | Aucun {.text-danger} | Il y a un `ALTER DELETE` déclaration pour la suppression de données par lots | | **E111** | **Instruction SELECT à une ligne** | **Aucun**{.text-danger} | | | **E121** | **Prise en charge du curseur de base** | **Aucun**{.text-danger} | | -| E121-01 | DECLARE CURSOR | Aucun{.text-danger} | | -| E121-02 | Les colonnes ORDER BY n'ont pas besoin d'être dans la liste select | Aucun{.text-danger} | | -| E121-03 | Expressions de valeur dans la clause ORDER BY | Aucun{.text-danger} | | -| E121-04 | Instruction OPEN | Aucun{.text-danger} | | -| E121-06 | Déclaration de mise à jour positionnée | Aucun{.text-danger} | | -| E121-07 | Instruction de suppression positionnée | Aucun{.text-danger} | | -| E121-08 | Déclaration de fermeture | Aucun{.text-danger} | | -| E121-10 | Instruction FETCH: implicite suivant | Aucun{.text-danger} | | -| E121-17 | Avec curseurs HOLD | Aucun{.text-danger} | | +| E121-01 | DECLARE CURSOR | Aucun {.text-danger} | | +| E121-02 | Les colonnes ORDER BY n'ont pas besoin d'être dans la liste select | Aucun {.text-danger} | | +| E121-03 | Expressions de valeur dans la clause ORDER BY | Aucun {.text-danger} | | +| E121-04 | Instruction OPEN | Aucun {.text-danger} | | +| E121-06 | Déclaration de mise à jour positionnée | Aucun {.text-danger} | | +| E121-07 | Instruction de suppression positionnée | Aucun {.text-danger} | | +| E121-08 | Déclaration de fermeture | Aucun {.text-danger} | | +| E121-10 | Instruction FETCH: implicite suivant | Aucun {.text-danger} | | +| E121-17 | Avec curseurs HOLD | Aucun {.text-danger} | | | **E131** | **Support de valeur Null (nulls au lieu de valeurs)** | **Partiel**{.text-warning} | Certaines restrictions s'appliquent | | **E141** | **Contraintes d'intégrité de base** | **Partiel**{.text-warning} | | -| E141-01 | Contraintes non nulles | Oui{.text-success} | Note: `NOT NULL` est implicite pour les colonnes de table par défaut | -| E141-02 | Contrainte UNIQUE de colonnes non nulles | Aucun{.text-danger} | | -| E141-03 | Contraintes de clé primaire | Aucun{.text-danger} | | -| E141-04 | Contrainte de clé étrangère de base avec la valeur par défaut NO ACTION Pour l'action de suppression référentielle et l'action de mise à jour référentielle | Aucun{.text-danger} | | -| E141-06 | Vérifier la contrainte | Oui{.text-success} | | -| E141-07 | Colonne par défaut | Oui{.text-success} | | -| E141-08 | Non NULL déduit sur la clé primaire | Oui{.text-success} | | -| E141-10 | Les noms dans une clé étrangère peut être spécifié dans n'importe quel ordre | Aucun{.text-danger} | | +| E141-01 | Contraintes non nulles | Oui {.text-success} | Note: `NOT NULL` est implicite pour les colonnes de table par défaut | +| 
E141-02 | Contrainte UNIQUE de colonnes non nulles | Aucun {.text-danger} | | +| E141-03 | Contraintes de clé primaire | Aucun {.text-danger} | | +| E141-04 | Contrainte de clé étrangère de base avec la valeur par défaut NO ACTION Pour l'action de suppression référentielle et l'action de mise à jour référentielle | Aucun {.text-danger} | | +| E141-06 | Vérifier la contrainte | Oui {.text-success} | | +| E141-07 | Colonne par défaut | Oui {.text-success} | | +| E141-08 | Non NULL déduit sur la clé primaire | Oui {.text-success} | | +| E141-10 | Les noms dans une clé étrangère peut être spécifié dans n'importe quel ordre | Aucun {.text-danger} | | | **E151** | **Support de Transaction** | **Aucun**{.text-danger} | | -| E151-01 | COMMIT déclaration | Aucun{.text-danger} | | -| E151-02 | Déclaration de restauration | Aucun{.text-danger} | | +| E151-01 | COMMIT déclaration | Aucun {.text-danger} | | +| E151-02 | Déclaration de restauration | Aucun {.text-danger} | | | **E152** | **Instruction de transaction set de base** | **Aucun**{.text-danger} | | -| E152-01 | SET TRANSACTION statement: clause sérialisable de niveau D'isolement | Aucun{.text-danger} | | -| E152-02 | SET TRANSACTION statement: clauses en lecture seule et en lecture écriture | Aucun{.text-danger} | | +| E152-01 | SET TRANSACTION statement: clause sérialisable de niveau D'isolement | Aucun {.text-danger} | | +| E152-02 | SET TRANSACTION statement: clauses en lecture seule et en lecture écriture | Aucun {.text-danger} | | | **E153** | **Requêtes pouvant être mises à jour avec des sous requêtes** | **Aucun**{.text-danger} | | | **E161** | **Commentaires SQL en utilisant le premier Double moins** | **Oui**{.text-success} | | | **E171** | **Support SQLSTATE** | **Aucun**{.text-danger} | | | **E182** | **Liaison du langage hôte** | **Aucun**{.text-danger} | | | **F031** | **Manipulation de schéma de base** | **Partiel**{.text-warning} | | -| F031-01 | Instruction CREATE TABLE pour créer des tables de base persistantes | Partiel{.text-warning} | Aucun `SYSTEM VERSIONING`, `ON COMMIT`, `GLOBAL`, `LOCAL`, `PRESERVE`, `DELETE`, `REF IS`, `WITH OPTIONS`, `UNDER`, `LIKE`, `PERIOD FOR` clauses et aucun support pour les types de données résolus par l'utilisateur | -| F031-02 | Instruction créer une vue | Partiel{.text-warning} | Aucun `RECURSIVE`, `CHECK`, `UNDER`, `WITH OPTIONS` clauses et aucun support pour les types de données résolus par l'utilisateur | -| F031-03 | Déclaration de subvention | Oui{.text-success} | | -| F031-04 | ALTER TABLE statement: ajouter une clause de colonne | Partiel{.text-warning} | Pas de support pour `GENERATED` clause et période de temps du système | -| F031-13 | Instruction DROP TABLE: clause RESTRICT | Aucun{.text-danger} | | -| F031-16 | Instruction DROP VIEW: clause RESTRICT | Aucun{.text-danger} | | -| F031-19 | REVOKE statement: clause RESTRICT | Aucun{.text-danger} | | +| F031-01 | Instruction CREATE TABLE pour créer des tables de base persistantes | Partiel {.text-warning} | Aucun `SYSTEM VERSIONING`, `ON COMMIT`, `GLOBAL`, `LOCAL`, `PRESERVE`, `DELETE`, `REF IS`, `WITH OPTIONS`, `UNDER`, `LIKE`, `PERIOD FOR` clauses et aucun support pour les types de données résolus par l'utilisateur | +| F031-02 | Instruction créer une vue | Partiel {.text-warning} | Aucun `RECURSIVE`, `CHECK`, `UNDER`, `WITH OPTIONS` clauses et aucun support pour les types de données résolus par l'utilisateur | +| F031-03 | Déclaration de subvention | Oui {.text-success} | | +| F031-04 | ALTER TABLE statement: ajouter une clause de 
colonne | Partiel {.text-warning} | Pas de support pour `GENERATED` clause et période de temps du système | +| F031-13 | Instruction DROP TABLE: clause RESTRICT | Aucun {.text-danger} | | +| F031-16 | Instruction DROP VIEW: clause RESTRICT | Aucun {.text-danger} | | +| F031-19 | REVOKE statement: clause RESTRICT | Aucun {.text-danger} | | | **F041** | **Table jointe de base** | **Partiel**{.text-warning} | | -| F041-01 | INNER join (mais pas nécessairement le mot-clé INNER) | Oui{.text-success} | | -| F041-02 | INTÉRIEURE mot-clé | Oui{.text-success} | | -| F041-03 | LEFT OUTER JOIN | Oui{.text-success} | | -| F041-04 | RIGHT OUTER JOIN | Oui{.text-success} | | -| F041-05 | Les jointures externes peuvent être imbriqués | Oui{.text-success} | | -| F041-07 | La table intérieure dans une jointure extérieure gauche ou droite peut également être utilisée dans une jointure intérieure | Oui{.text-success} | | -| F041-08 | Tous les opérateurs de comparaison sont pris en charge (plutôt que juste =) | Aucun{.text-danger} | | +| F041-01 | INNER join (mais pas nécessairement le mot-clé INNER) | Oui {.text-success} | | +| F041-02 | INTÉRIEURE mot-clé | Oui {.text-success} | | +| F041-03 | LEFT OUTER JOIN | Oui {.text-success} | | +| F041-04 | RIGHT OUTER JOIN | Oui {.text-success} | | +| F041-05 | Les jointures externes peuvent être imbriqués | Oui {.text-success} | | +| F041-07 | La table intérieure dans une jointure extérieure gauche ou droite peut également être utilisée dans une jointure intérieure | Oui {.text-success} | | +| F041-08 | Tous les opérateurs de comparaison sont pris en charge (plutôt que juste =) | Aucun {.text-danger} | | | **F051** | **Date et heure de base** | **Partiel**{.text-warning} | | -| F051-01 | Type de données de DATE (y compris la prise en charge du littéral de DATE) | Partiel{.text-warning} | Aucun littéral | -| F051-02 | TYPE DE DONNÉES DE TEMPS (y compris la prise en charge du littéral de temps) avec une précision de secondes fractionnaires d'au moins 0 | Aucun{.text-danger} | | -| F051-03 | Type de données D'horodatage (y compris la prise en charge du littéral D'horodatage) avec une précision de secondes fractionnaires d'au moins 0 et 6 | Aucun{.text-danger} | `DateTime64` temps fournit des fonctionnalités similaires | -| F051-04 | Prédicat de comparaison sur les types de données DATE, heure et horodatage | Partiel{.text-warning} | Un seul type de données disponible | -| F051-05 | Distribution explicite entre les types datetime et les types de chaînes de caractères | Oui{.text-success} | | -| F051-06 | CURRENT_DATE | Aucun{.text-danger} | `today()` est similaire | -| F051-07 | LOCALTIME | Aucun{.text-danger} | `now()` est similaire | -| F051-08 | LOCALTIMESTAMP | Aucun{.text-danger} | | +| F051-01 | Type de données de DATE (y compris la prise en charge du littéral de DATE) | Partiel {.text-warning} | Aucun littéral | +| F051-02 | TYPE DE DONNÉES DE TEMPS (y compris la prise en charge du littéral de temps) avec une précision de secondes fractionnaires d'au moins 0 | Aucun {.text-danger} | | +| F051-03 | Type de données D'horodatage (y compris la prise en charge du littéral D'horodatage) avec une précision de secondes fractionnaires d'au moins 0 et 6 | Aucun {.text-danger} | `DateTime64` temps fournit des fonctionnalités similaires | +| F051-04 | Prédicat de comparaison sur les types de données DATE, heure et horodatage | Partiel {.text-warning} | Un seul type de données disponible | +| F051-05 | Distribution explicite entre les types datetime et les types de chaînes 
de caractères | Oui {.text-success} | | +| F051-06 | CURRENT_DATE | Aucun {.text-danger} | `today()` est similaire | +| F051-07 | LOCALTIME | Aucun {.text-danger} | `now()` est similaire | +| F051-08 | LOCALTIMESTAMP | Aucun {.text-danger} | | | **F081** | **UNION et sauf dans les vues** | **Partiel**{.text-warning} | | | **F131** | **Groupées des opérations** | **Partiel**{.text-warning} | | -| F131-01 | WHERE, GROUP BY et ayant des clauses prises en charge dans les requêtes avec des vues groupées | Oui{.text-success} | | -| F131-02 | Plusieurs tables prises en charge dans les requêtes avec des vues groupées | Oui{.text-success} | | -| F131-03 | Définir les fonctions prises en charge dans les requêtes groupées vues | Oui{.text-success} | | -| F131-04 | Sous requêtes avec des clauses GROUP BY et HAVING et des vues groupées | Oui{.text-success} | | -| F131-05 | Sélectionnez une seule ligne avec des clauses GROUP BY et HAVING et des vues groupées | Aucun{.text-danger} | | +| F131-01 | WHERE, GROUP BY et ayant des clauses prises en charge dans les requêtes avec des vues groupées | Oui {.text-success} | | +| F131-02 | Plusieurs tables prises en charge dans les requêtes avec des vues groupées | Oui {.text-success} | | +| F131-03 | Définir les fonctions prises en charge dans les requêtes groupées vues | Oui {.text-success} | | +| F131-04 | Sous requêtes avec des clauses GROUP BY et HAVING et des vues groupées | Oui {.text-success} | | +| F131-05 | Sélectionnez une seule ligne avec des clauses GROUP BY et HAVING et des vues groupées | Aucun {.text-danger} | | | **F181** | **Support de module Multiple** | **Aucun**{.text-danger} | | | **F201** | **Fonction de distribution** | **Oui**{.text-success} | | | **F221** | **Valeurs par défaut explicites** | **Aucun**{.text-danger} | | | **F261** | **Expression de cas** | **Oui**{.text-success} | | -| F261-01 | Cas Simple | Oui{.text-success} | | -| F261-02 | Cas recherché | Oui{.text-success} | | -| F261-03 | NULLIF | Oui{.text-success} | | -| F261-04 | COALESCE | Oui{.text-success} | | +| F261-01 | Cas Simple | Oui {.text-success} | | +| F261-02 | Cas recherché | Oui {.text-success} | | +| F261-03 | NULLIF | Oui {.text-success} | | +| F261-04 | COALESCE | Oui {.text-success} | | | **F311** | **Déclaration de définition de schéma** | **Partiel**{.text-warning} | | -| F311-01 | CREATE SCHEMA | Aucun{.text-danger} | | -| F311-02 | Créer une TABLE pour les tables de base persistantes | Oui{.text-success} | | -| F311-03 | CREATE VIEW | Oui{.text-success} | | -| F311-04 | CREATE VIEW: WITH CHECK OPTION | Aucun{.text-danger} | | -| F311-05 | Déclaration de subvention | Oui{.text-success} | | +| F311-01 | CREATE SCHEMA | Aucun {.text-danger} | | +| F311-02 | Créer une TABLE pour les tables de base persistantes | Oui {.text-success} | | +| F311-03 | CREATE VIEW | Oui {.text-success} | | +| F311-04 | CREATE VIEW: WITH CHECK OPTION | Aucun {.text-danger} | | +| F311-05 | Déclaration de subvention | Oui {.text-success} | | | **F471** | **Valeurs de sous-requête scalaire** | **Oui**{.text-success} | | | **F481** | **Prédicat null étendu** | **Oui**{.text-success} | | | **F812** | **Base de repérage** | **Aucun**{.text-danger} | | | **T321** | **Routines SQL-invoked de base** | **Aucun**{.text-danger} | | -| T321-01 | Fonctions définies par l'utilisateur sans surcharge | Aucun{.text-danger} | | -| T321-02 | Procédures stockées définies par l'utilisateur sans surcharge | Aucun{.text-danger} | | -| T321-03 | L'invocation de la fonction | Aucun{.text-danger} | | -| 
T321-04 | L'instruction d'APPEL de | Aucun{.text-danger} | | -| T321-05 | Déclaration de retour | Aucun{.text-danger} | | +| T321-01 | Fonctions définies par l'utilisateur sans surcharge | Aucun {.text-danger} | | +| T321-02 | Procédures stockées définies par l'utilisateur sans surcharge | Aucun {.text-danger} | | +| T321-03 | L'invocation de la fonction | Aucun {.text-danger} | | +| T321-04 | L'instruction d'APPEL de | Aucun {.text-danger} | | +| T321-05 | Déclaration de retour | Aucun {.text-danger} | | | **T631** | **Dans le prédicat avec un élément de liste** | **Oui**{.text-success} | | diff --git a/docs/ja/development/developer-instruction.md b/docs/ja/development/developer-instruction.md index 988becf98c3..ccc3a177d1f 100644 --- a/docs/ja/development/developer-instruction.md +++ b/docs/ja/development/developer-instruction.md @@ -257,8 +257,8 @@ KDevelopとQTCreatorは、ClickHouseを開発するためのIDEの他の優れ sudo apt install wget xz-utils - wget https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz - wget https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz + wget https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz + wget https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz xz -v -d hits_v1.tsv.xz xz -v -d visits_v1.tsv.xz diff --git a/docs/ja/development/style.md b/docs/ja/development/style.md index ef962067650..f4b3f9c77dd 100644 --- a/docs/ja/development/style.md +++ b/docs/ja/development/style.md @@ -579,7 +579,7 @@ Forkは並列化には使用されません。 **14.** 戻り値。 -ほとんどの場合、 `return`. 書かない `[return std::move(res)]{.strike}`. +ほとんどの場合、 `return`. 書かない `return std::move(res)`. 関数がオブジェクトをヒープに割り当てて返す場合は、次のようにします `shared_ptr` または `unique_ptr`. @@ -673,7 +673,7 @@ Loader() {} **24.** 使用しない `trailing return type` 必要がない限り機能のため。 ``` cpp -[auto f() -> void;]{.strike} +auto f() -> void ``` **25.** 変数の宣言と初期化。 diff --git a/docs/ja/getting-started/example-datasets/metrica.md b/docs/ja/getting-started/example-datasets/metrica.md index 36c6411ff2f..5caf32928c3 100644 --- a/docs/ja/getting-started/example-datasets/metrica.md +++ b/docs/ja/getting-started/example-datasets/metrica.md @@ -9,14 +9,14 @@ toc_title: "Yandex.Metrica データ" Yandex.Metricaについての詳細は [ClickHouse history](../../introduction/history.md) のセクションを参照してください。 データセットは2つのテーブルから構成されており、どちらも圧縮された `tsv.xz` ファイルまたは準備されたパーティションとしてダウンロードすることができます。 -さらに、1億行を含む`hits`テーブルの拡張版が TSVとして https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_100m_obfuscated_v1.tsv.xz に、準備されたパーティションとして https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_100m_obfuscated_v1.tar.xz にあります。 +さらに、1億行を含む`hits`テーブルの拡張版が TSVとして https://datasets.clickhouse.tech/hits/tsv/hits_100m_obfuscated_v1.tsv.xz に、準備されたパーティションとして https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz にあります。 ## パーティション済みテーブルの取得 {#obtaining-tables-from-prepared-partitions} hits テーブルのダウンロードとインポート: ``` bash -curl -O https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_v1.tar +curl -O https://datasets.clickhouse.tech/hits/partitions/hits_v1.tar tar xvf hits_v1.tar -C /var/lib/clickhouse # ClickHouse のデータディレクトリへのパス # 展開されたデータのパーミッションをチェックし、必要に応じて修正します。 sudo service clickhouse-server restart @@ -26,7 +26,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1" visits のダウンロードとインポート: ``` bash -curl -O https://clickhouse-datasets.s3.yandex.net/visits/partitions/visits_v1.tar +curl -O https://datasets.clickhouse.tech/visits/partitions/visits_v1.tar tar xvf visits_v1.tar -C /var/lib/clickhouse # ClickHouse のデータディレクトリへのパス # 
展開されたデータのパーミッションをチェックし、必要に応じて修正します。 sudo service clickhouse-server restart @@ -38,7 +38,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1" 圧縮TSVファイルのダウンロードと hits テーブルのインポート: ``` bash -curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv +curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv # now create table clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets" clickhouse-client --query "CREATE TABLE datasets.hits_v1 ( WatchID UInt64, JavaEnable UInt8, Title String, GoodEvent Int16, EventTime DateTime, EventDate Date, CounterID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RegionID UInt32, UserID UInt64, CounterClass Int8, OS UInt8, UserAgent UInt8, URL String, Referer String, URLDomain String, RefererDomain String, Refresh UInt8, IsRobot UInt8, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), ResolutionWidth UInt16, ResolutionHeight UInt16, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, FlashMinor2 String, NetMajor UInt8, NetMinor UInt8, UserAgentMajor UInt16, UserAgentMinor FixedString(2), CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, MobilePhone UInt8, MobilePhoneModel String, Params String, IPNetworkID UInt32, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, IsArtifical UInt8, WindowClientWidth UInt16, WindowClientHeight UInt16, ClientTimeZone Int16, ClientEventTime DateTime, SilverlightVersion1 UInt8, SilverlightVersion2 UInt8, SilverlightVersion3 UInt32, SilverlightVersion4 UInt16, PageCharset String, CodeVersion UInt32, IsLink UInt8, IsDownload UInt8, IsNotBounce UInt8, FUniqID UInt64, HID UInt32, IsOldCounter UInt8, IsEvent UInt8, IsParameter UInt8, DontCountHits UInt8, WithHash UInt8, HitColor FixedString(1), UTCEventTime DateTime, Age UInt8, Sex UInt8, Income UInt8, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), RemoteIP UInt32, RemoteIP6 FixedString(16), WindowName Int32, OpenerName Int32, HistoryLength Int16, BrowserLanguage FixedString(2), BrowserCountry FixedString(2), SocialNetwork String, SocialAction String, HTTPError UInt16, SendTiming Int32, DNSTiming Int32, ConnectTiming Int32, ResponseStartTiming Int32, ResponseEndTiming Int32, FetchTiming Int32, RedirectTiming Int32, DOMInteractiveTiming Int32, DOMContentLoadedTiming Int32, DOMCompleteTiming Int32, LoadEventStartTiming Int32, LoadEventEndTiming Int32, NSToDOMContentLoadedTiming Int32, FirstPaintTiming Int32, RedirectCount Int8, SocialSourceNetworkID UInt8, SocialSourcePage String, ParamPrice Int64, ParamOrderID String, ParamCurrency FixedString(3), ParamCurrencyID UInt16, GoalsReached Array(UInt32), OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, RefererHash UInt64, URLHash UInt64, CLID UInt32, YCLID UInt64, ShareService String, ShareURL String, ShareTitle String, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), IslandID FixedString(16), RequestNum UInt32, RequestTry UInt8) ENGINE = MergeTree() PARTITION BY toYYYYMM(EventDate) ORDER BY (CounterID, EventDate, intHash32(UserID)) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192" @@ -52,7 +52,7 @@ clickhouse-client --query "SELECT COUNT(*) 
FROM datasets.hits_v1" 圧縮TSVファイルのダウンロードと visits テーブルのインポート: ``` bash -curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv +curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv # now create table clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets" clickhouse-client --query "CREATE TABLE datasets.visits_v1 ( CounterID UInt32, StartDate Date, Sign Int8, IsNew UInt8, VisitID UInt64, UserID UInt64, StartTime DateTime, Duration UInt32, UTCStartTime DateTime, PageViews Int32, Hits Int32, IsBounce UInt8, Referer String, StartURL String, RefererDomain String, StartURLDomain String, EndURL String, LinkURL String, IsDownload UInt8, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, PlaceID Int32, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), IsYandex UInt8, GoalReachesDepth Int32, GoalReachesURL Int32, GoalReachesAny Int32, SocialSourceNetworkID UInt8, SocialSourcePage String, MobilePhoneModel String, ClientEventTime DateTime, RegionID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RemoteIP UInt32, RemoteIP6 FixedString(16), IPNetworkID UInt32, SilverlightVersion3 UInt32, CodeVersion UInt32, ResolutionWidth UInt16, ResolutionHeight UInt16, UserAgentMajor UInt16, UserAgentMinor UInt16, WindowClientWidth UInt16, WindowClientHeight UInt16, SilverlightVersion2 UInt8, SilverlightVersion4 UInt16, FlashVersion3 UInt16, FlashVersion4 UInt16, ClientTimeZone Int16, OS UInt8, UserAgent UInt8, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, NetMajor UInt8, NetMinor UInt8, MobilePhone UInt8, SilverlightVersion1 UInt8, Age UInt8, Sex UInt8, Income UInt8, JavaEnable UInt8, CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, BrowserLanguage UInt16, BrowserCountry UInt16, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), Params Array(String), Goals Nested(ID UInt32, Serial UInt32, EventTime DateTime, Price Int64, OrderID String, CurrencyID UInt32), WatchIDs Array(UInt64), ParamSumPrice Int64, ParamCurrency FixedString(3), ParamCurrencyID UInt16, ClickLogID UInt64, ClickEventID Int32, ClickGoodEvent Int32, ClickEventTime DateTime, ClickPriorityID Int32, ClickPhraseID Int32, ClickPageID Int32, ClickPlaceID Int32, ClickTypeID Int32, ClickResourceID Int32, ClickCost UInt32, ClickClientIP UInt32, ClickDomainID UInt32, ClickURL String, ClickAttempt UInt8, ClickOrderID UInt32, ClickBannerID UInt32, ClickMarketCategoryID UInt32, ClickMarketPP UInt32, ClickMarketCategoryName String, ClickMarketPPName String, ClickAWAPSCampaignName String, ClickPageName String, ClickTargetType UInt16, ClickTargetPhraseID UInt64, ClickContextType UInt8, ClickSelectType Int8, ClickOptions String, ClickGroupBannerID Int32, OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, FirstVisit DateTime, PredLastVisit Date, LastVisit Date, TotalVisits UInt32, TraficSource Nested(ID Int8, SearchEngineID UInt16, AdvEngineID UInt8, PlaceID UInt16, SocialSourceNetworkID UInt8, Domain String, SearchPhrase String, SocialSourcePage String), Attendance FixedString(16), CLID UInt32, YCLID UInt64, NormalizedRefererHash UInt64, SearchPhraseHash UInt64, RefererDomainHash UInt64, NormalizedStartURLHash UInt64, 
StartURLDomainHash UInt64, NormalizedEndURLHash UInt64, TopLevelDomain UInt64, URLScheme UInt64, OpenstatServiceNameHash UInt64, OpenstatCampaignIDHash UInt64, OpenstatAdIDHash UInt64, OpenstatSourceIDHash UInt64, UTMSourceHash UInt64, UTMMediumHash UInt64, UTMCampaignHash UInt64, UTMContentHash UInt64, UTMTermHash UInt64, FromHash UInt64, WebVisorEnabled UInt8, WebVisorActivity UInt32, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), Market Nested(Type UInt8, GoalID UInt32, OrderID String, OrderPrice Int64, PP UInt32, DirectPlaceID UInt32, DirectOrderID UInt32, DirectBannerID UInt32, GoodID String, GoodName String, GoodQuantity Int32, GoodPrice Int64), IslandID FixedString(16)) ENGINE = CollapsingMergeTree(Sign) PARTITION BY toYYYYMM(StartDate) ORDER BY (CounterID, StartDate, intHash32(UserID), VisitID) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192" diff --git a/docs/ja/getting-started/example-datasets/nyc-taxi.md b/docs/ja/getting-started/example-datasets/nyc-taxi.md index 9d15ac14a36..a717b64b2b2 100644 --- a/docs/ja/getting-started/example-datasets/nyc-taxi.md +++ b/docs/ja/getting-started/example-datasets/nyc-taxi.md @@ -286,7 +286,7 @@ SELECT formatReadableSize(sum(bytes)) FROM system.parts WHERE table = 'trips_mer ## パーティションされたデータのダウンロード {#download-of-prepared-partitions} ``` bash -$ curl -O https://clickhouse-datasets.s3.yandex.net/trips_mergetree/partitions/trips_mergetree.tar +$ curl -O https://datasets.clickhouse.tech/trips_mergetree/partitions/trips_mergetree.tar $ tar xvf trips_mergetree.tar -C /var/lib/clickhouse # path to ClickHouse data directory $ # check permissions of unpacked data, fix if required $ sudo service clickhouse-server restart diff --git a/docs/ja/getting-started/example-datasets/ontime.md b/docs/ja/getting-started/example-datasets/ontime.md index 58d1999bf2b..bd049e8caad 100644 --- a/docs/ja/getting-started/example-datasets/ontime.md +++ b/docs/ja/getting-started/example-datasets/ontime.md @@ -154,7 +154,7 @@ $ for i in *.zip; do echo $i; unzip -cq $i '*.csv' | sed 's/\.00//g' | clickhous ## パーティション済みデータのダウンロード {#download-of-prepared-partitions} ``` bash -$ curl -O https://clickhouse-datasets.s3.yandex.net/ontime/partitions/ontime.tar +$ curl -O https://datasets.clickhouse.tech/ontime/partitions/ontime.tar $ tar xvf ontime.tar -C /var/lib/clickhouse # path to ClickHouse data directory $ # check permissions of unpacked data, fix if required $ sudo service clickhouse-server restart diff --git a/docs/ja/getting-started/tutorial.md b/docs/ja/getting-started/tutorial.md index 84f834a9486..d63954684f1 100644 --- a/docs/ja/getting-started/tutorial.md +++ b/docs/ja/getting-started/tutorial.md @@ -92,8 +92,8 @@ ClickHouseサーバーにいくつかのサンプルデータを入れてみま ### テーブルデータのダウンロードと展開 {#download-and-extract-table-data} ``` bash -curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv -curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv +curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv +curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv ``` 展開されたファイルのサイズは約10GBです。 diff --git a/docs/ja/operations/performance-test.md b/docs/ja/operations/performance-test.md index ff3a0192b49..a60a8872a16 100644 --- a/docs/ja/operations/performance-test.md +++ b/docs/ja/operations/performance-test.md @@ -48,7 
+48,7 @@ toc_title: "\u30CF\u30FC\u30C9\u30A6\u30A7\u30A2\u8A66\u9A13" - wget https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_100m_obfuscated_v1.tar.xz + wget https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz tar xvf hits_100m_obfuscated_v1.tar.xz -C . mv hits_100m_obfuscated_v1/* . diff --git a/docs/ja/sql-reference/ansi.md b/docs/ja/sql-reference/ansi.md index fbeb4b8b6db..58ccc7ce12b 100644 --- a/docs/ja/sql-reference/ansi.md +++ b/docs/ja/sql-reference/ansi.md @@ -26,155 +26,155 @@ toc_title: "ANSI\u306E\u4E92\u63DB\u6027" | Feature ID | 機能名 | 状態 | コメント | |------------|---------------------------------------------------------------------------------------------------|----------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | **E011** | **数値データ型** | **部分的**{.text-warning} | | -| E011-01 | 整数およびSMALLINTデータ型 | はい。{.text-success} | | -| E011-02 | 実数、倍精度および浮動小数点データ型データ型 | 部分的{.text-warning} | `FLOAT()`, `REAL` と `DOUBLE PRECISION` 対応していません | -| E011-03 | DECIMALおよびNUMERICデータ型 | 部分的{.text-warning} | のみ `DECIMAL(p,s)` サポートされています。 `NUMERIC` | -| E011-04 | 算術演算子 | はい。{.text-success} | | -| E011-05 | 数値比較 | はい。{.text-success} | | -| E011-06 | 数値データ型間の暗黙的なキャスト | いいえ。{.text-danger} | ANSI SQLできる任意の暗黙的な数値型の間のキャストがClickHouseに依存しての機能を有する複数の過負荷の代わりに暗黙的なキャスト | +| E011-01 | 整数およびSMALLINTデータ型 | はい。 {.text-success} | | +| E011-02 | 実数、倍精度および浮動小数点データ型データ型 | 部分的 {.text-warning} | `FLOAT()`, `REAL` と `DOUBLE PRECISION` 対応していません | +| E011-03 | DECIMALおよびNUMERICデータ型 | 部分的 {.text-warning} | のみ `DECIMAL(p,s)` サポートされています。 `NUMERIC` | +| E011-04 | 算術演算子 | はい。 {.text-success} | | +| E011-05 | 数値比較 | はい。 {.text-success} | | +| E011-06 | 数値データ型間の暗黙的なキャスト | いいえ。 {.text-danger} | ANSI SQLできる任意の暗黙的な数値型の間のキャストがClickHouseに依存しての機能を有する複数の過負荷の代わりに暗黙的なキャスト | | **E021** | **文字列タイプ** | **部分的**{.text-warning} | | | E021-01 | 文字データ型 | いいえ。{.text-danger} | | -| E021-02 | 文字変化型データ型 | いいえ。{.text-danger} | `String` 動作同様に、長さの制限内 | -| E021-03 | 文字リテラル | 部分的{.text-warning} | 連続したリテラルと文字セットの自動連結はサポートされません | -| E021-04 | CHARACTER_LENGTH関数 | 部分的{.text-warning} | いいえ。 `USING` 句 | -| E021-05 | OCTET_LENGTH関数 | いいえ。{.text-danger} | `LENGTH` 同様に動作します | -| E021-06 | SUBSTRING | 部分的{.text-warning} | サポートなし `SIMILAR` と `ESCAPE` 句、ない `SUBSTRING_REGEX` バリアント | -| E021-07 | 文字の連結 | 部分的{.text-warning} | いいえ。 `COLLATE` 句 | -| E021-08 | 上部および下の機能 | はい。{.text-success} | | -| E021-09 | トリム機能 | はい。{.text-success} | | -| E021-10 | 固定長および可変長文字ストリング型間の暗黙的なキャスト | いいえ。{.text-danger} | ANSI SQLできる任意の暗黙の間のキャスト文字列の種類がClickHouseに依存しての機能を有する複数の過負荷の代わりに暗黙的なキャスト | -| E021-11 | 位置関数 | 部分的{.text-warning} | サポートなし `IN` と `USING` 句、ない `POSITION_REGEX` バリアント | -| E021-12 | 文字の比較 | はい。{.text-success} | | +| E021-02 | 文字変化型データ型 | いいえ。 {.text-danger} | `String` 動作同様に、長さの制限内 | +| E021-03 | 文字リテラル | 部分的 {.text-warning} | 連続したリテラルと文字セットの自動連結はサポートされません | +| E021-04 | CHARACTER_LENGTH関数 | 部分的 {.text-warning} | いいえ。 `USING` 句 | +| E021-05 | OCTET_LENGTH関数 | いいえ。 {.text-danger} | `LENGTH` 同様に動作します | +| E021-06 | SUBSTRING | 部分的 {.text-warning} | サポートなし `SIMILAR` と `ESCAPE` 句、ない `SUBSTRING_REGEX` バリアント | +| E021-07 | 文字の連結 | 部分的 {.text-warning} | いいえ。 `COLLATE` 句 | +| E021-08 | 上部および下の機能 | はい。 {.text-success} | | +| E021-09 | トリム機能 | はい。 {.text-success} | | +| E021-10 | 固定長および可変長文字ストリング型間の暗黙的なキャスト | いいえ。 {.text-danger} | ANSI 
SQLできる任意の暗黙の間のキャスト文字列の種類がClickHouseに依存しての機能を有する複数の過負荷の代わりに暗黙的なキャスト | +| E021-11 | 位置関数 | 部分的 {.text-warning} | サポートなし `IN` と `USING` 句、ない `POSITION_REGEX` バリアント | +| E021-12 | 文字の比較 | はい。 {.text-success} | | | **E031** | **識別子** | **部分的**{.text-warning} | | -| E031-01 | 区切り識別子 | 部分的{.text-warning} | Unicodeリテラルの支援は限られ | -| E031-02 | 小文字の識別子 | はい。{.text-success} | | -| E031-03 | 末尾のアンダースコア | はい。{.text-success} | | +| E031-01 | 区切り識別子 | 部分的 {.text-warning} | Unicodeリテラルの支援は限られ | +| E031-02 | 小文字の識別子 | はい。 {.text-success} | | +| E031-03 | 末尾のアンダースコア | はい。 {.text-success} | | | **E051** | **基本的なクエリ仕様** | **部分的**{.text-warning} | | -| E051-01 | SELECT DISTINCT | はい。{.text-success} | | -| E051-02 | GROUP BY句 | はい。{.text-success} | | -| E051-04 | グループによる列を含むことができない `` | はい。 {.text-success} | | +| E051-05 | 選択した項目の名前を変更できます | はい。 {.text-success} | | +| E051-06 | 句を持つ | はい。 {.text-success} | | +| E051-07 | 選択リストの修飾\* | はい。 {.text-success} | | +| E051-08 | FROM句の相関名 | はい。 {.text-success} | | +| E051-09 | FROM句の列の名前を変更します | いいえ。 {.text-danger} | | | **E061** | **基本的な述語と検索条件** | **部分的**{.text-warning} | | -| E061-01 | 比較述語 | はい。{.text-success} | | -| E061-02 | 述語の間 | 部分的{.text-warning} | いいえ。 `SYMMETRIC` と `ASYMMETRIC` 句 | -| E061-03 | 値のリストを持つ述語で | はい。{.text-success} | | -| E061-04 | 述語のように | はい。{.text-success} | | -| E061-05 | LIKE述語:エスケープ句 | いいえ。{.text-danger} | | -| E061-06 | Null述語 | はい。{.text-success} | | -| E061-07 | 定量化された比較述語 | いいえ。{.text-danger} | | -| E061-08 | 存在する述語 | いいえ。{.text-danger} | | -| E061-09 | 比較述語のサブクエリ | はい。{.text-success} | | -| E061-11 | In述語のサブクエリ | はい。{.text-success} | | -| E061-12 | 定量化された比較述語のサブクエリ | いいえ。{.text-danger} | | -| E061-13 | 相関サブクエリ | いいえ。{.text-danger} | | -| E061-14 | 検索条件 | はい。{.text-success} | | +| E061-01 | 比較述語 | はい。 {.text-success} | | +| E061-02 | 述語の間 | 部分的 {.text-warning} | いいえ。 `SYMMETRIC` と `ASYMMETRIC` 句 | +| E061-03 | 値のリストを持つ述語で | はい。 {.text-success} | | +| E061-04 | 述語のように | はい。 {.text-success} | | +| E061-05 | LIKE述語:エスケープ句 | いいえ。 {.text-danger} | | +| E061-06 | Null述語 | はい。 {.text-success} | | +| E061-07 | 定量化された比較述語 | いいえ。 {.text-danger} | | +| E061-08 | 存在する述語 | いいえ。 {.text-danger} | | +| E061-09 | 比較述語のサブクエリ | はい。 {.text-success} | | +| E061-11 | In述語のサブクエリ | はい。 {.text-success} | | +| E061-12 | 定量化された比較述語のサブクエリ | いいえ。 {.text-danger} | | +| E061-13 | 相関サブクエリ | いいえ。 {.text-danger} | | +| E061-14 | 検索条件 | はい。 {.text-success} | | | **E071** | **基本的なクエリ式** | **部分的**{.text-warning} | | -| E071-01 | UNION DISTINCTテーブル演算子 | いいえ。{.text-danger} | | -| E071-02 | UNION ALLテーブル演算子 | はい。{.text-success} | | -| E071-03 | DISTINCTテーブル演算子を除く | いいえ。{.text-danger} | | -| E071-05 | 列の結合経由でテーブル事業者の必要のない全く同じデータ型 | はい。{.text-success} | | -| E071-06 | サブクエリ内のテーブル演算子 | はい。{.text-success} | | +| E071-01 | UNION DISTINCTテーブル演算子 | いいえ。 {.text-danger} | | +| E071-02 | UNION ALLテーブル演算子 | はい。 {.text-success} | | +| E071-03 | DISTINCTテーブル演算子を除く | いいえ。 {.text-danger} | | +| E071-05 | 列の結合経由でテーブル事業者の必要のない全く同じデータ型 | はい。 {.text-success} | | +| E071-06 | サブクエリ内のテーブル演算子 | はい。 {.text-success} | | | **E081** | **基本権限** | **部分的**{.text-warning} | 進行中の作業 | | **E091** | **関数の設定** | **はい。**{.text-success} | | -| E091-01 | AVG | はい。{.text-success} | | -| E091-02 | COUNT | はい。{.text-success} | | -| E091-03 | MAX | はい。{.text-success} | | -| E091-04 | MIN | はい。{.text-success} | | -| E091-05 | SUM | はい。{.text-success} | | -| E091-06 | すべての量指定子 | いいえ。{.text-danger} | | -| E091-07 | 異なる量指定子 | 部分的{.text-warning} | な集計機能に対応 | +| E091-01 | AVG | はい。 {.text-success} | | +| E091-02 | COUNT | 
はい。 {.text-success} | | +| E091-03 | MAX | はい。 {.text-success} | | +| E091-04 | MIN | はい。 {.text-success} | | +| E091-05 | SUM | はい。 {.text-success} | | +| E091-06 | すべての量指定子 | いいえ。 {.text-danger} | | +| E091-07 | 異なる量指定子 | 部分的 {.text-warning} | な集計機能に対応 | | **E101** | **基本的なデータ操作** | **部分的**{.text-warning} | | -| E101-01 | INSERT文 | はい。{.text-success} | 注:ClickHouseの主キーは、 `UNIQUE` 制約 | -| E101-03 | 検索されたUPDATE文 | いいえ。{.text-danger} | そこには `ALTER UPDATE` バッチデータ変更のための命令 | -| E101-04 | 検索されたDELETE文 | いいえ。{.text-danger} | そこには `ALTER DELETE` バッチデータ削除のための命令 | +| E101-01 | INSERT文 | はい。 {.text-success} | 注:ClickHouseの主キーは、 `UNIQUE` 制約 | +| E101-03 | 検索されたUPDATE文 | いいえ。 {.text-danger} | そこには `ALTER UPDATE` バッチデータ変更のための命令 | +| E101-04 | 検索されたDELETE文 | いいえ。 {.text-danger} | そこには `ALTER DELETE` バッチデータ削除のための命令 | | **E111** | **単一行SELECTステートメント** | **いいえ。**{.text-danger} | | | **E121** | **基本的にカーソルを支援** | **いいえ。**{.text-danger} | | -| E121-01 | DECLARE CURSOR | いいえ。{.text-danger} | | -| E121-02 | ORDER BY列を選択リストに含める必要はありません | いいえ。{.text-danger} | | -| E121-03 | ORDER BY句の値式 | いいえ。{.text-danger} | | -| E121-04 | 開いた声明 | いいえ。{.text-danger} | | -| E121-06 | 位置付きUPDATE文 | いいえ。{.text-danger} | | -| E121-07 | 位置づけDELETEステートメント | いいえ。{.text-danger} | | -| E121-08 | 閉じる文 | いいえ。{.text-danger} | | -| E121-10 | FETCHステートメント:暗黙的なNEXT | いいえ。{.text-danger} | | -| E121-17 | ホールドカーソル付き | いいえ。{.text-danger} | | +| E121-01 | DECLARE CURSOR | いいえ。 {.text-danger} | | +| E121-02 | ORDER BY列を選択リストに含める必要はありません | いいえ。 {.text-danger} | | +| E121-03 | ORDER BY句の値式 | いいえ。 {.text-danger} | | +| E121-04 | 開いた声明 | いいえ。 {.text-danger} | | +| E121-06 | 位置付きUPDATE文 | いいえ。 {.text-danger} | | +| E121-07 | 位置づけDELETEステートメント | いいえ。 {.text-danger} | | +| E121-08 | 閉じる文 | いいえ。 {.text-danger} | | +| E121-10 | FETCHステートメント:暗黙的なNEXT | いいえ。 {.text-danger} | | +| E121-17 | ホールドカーソル付き | いいえ。 {.text-danger} | | | **E131** | **Null値のサポート(値の代わりにnull)** | **部分的**{.text-warning} | 一部の制限が適用されます | | **E141** | **基本的な整合性制約** | **部分的**{.text-warning} | | -| E141-01 | NULLでない制約 | はい。{.text-success} | 注: `NOT NULL` は黙示のためのテーブル列によるデフォルト | -| E141-02 | NULLでない列の一意制約 | いいえ。{.text-danger} | | -| E141-03 | 主キー制約 | いいえ。{.text-danger} | | -| E141-04 | 参照削除アクションと参照updateアクションの両方に対するNO ACTIONのデフォルトを持つ基本外部キー制約 | いいえ。{.text-danger} | | -| E141-06 | 制約のチェック | はい。{.text-success} | | -| E141-07 | 列の既定値 | はい。{.text-success} | | -| E141-08 | 主キーで推論されるNULLではありません | はい。{.text-success} | | -| E141-10 | 外部キーの名前は任意の順序で指定できます | いいえ。{.text-danger} | | +| E141-01 | NULLでない制約 | はい。 {.text-success} | 注: `NOT NULL` は黙示のためのテーブル列によるデフォルト | +| E141-02 | NULLでない列の一意制約 | いいえ。 {.text-danger} | | +| E141-03 | 主キー制約 | いいえ。 {.text-danger} | | +| E141-04 | 参照削除アクションと参照updateアクションの両方に対するNO ACTIONのデフォルトを持つ基本外部キー制約 | いいえ。 {.text-danger} | | +| E141-06 | 制約のチェック | はい。 {.text-success} | | +| E141-07 | 列の既定値 | はい。 {.text-success} | | +| E141-08 | 主キーで推論されるNULLではありません | はい。 {.text-success} | | +| E141-10 | 外部キーの名前は任意の順序で指定できます | いいえ。 {.text-danger} | | | **E151** | **取引サポート** | **いいえ。**{.text-danger} | | -| E151-01 | COMMIT文 | いいえ。{.text-danger} | | -| E151-02 | ROLLBACKステートメント | いいえ。{.text-danger} | | +| E151-01 | COMMIT文 | いいえ。 {.text-danger} | | +| E151-02 | ROLLBACKステートメント | いいえ。 {.text-danger} | | | **E152** | **基本セット取引明細書** | **いいえ。**{.text-danger} | | -| E152-01 | SET TRANSACTION文:分離レベルSERIALIZABLE句 | いいえ。{.text-danger} | | -| E152-02 | SET TRANSACTION文:READ ONLY句とREAD WRITE句 | いいえ。{.text-danger} | | +| E152-01 | SET TRANSACTION文:分離レベルSERIALIZABLE句 | いいえ。 {.text-danger} | | +| E152-02 | SET 
TRANSACTION文:READ ONLY句とREAD WRITE句 | いいえ。 {.text-danger} | | | **E153** | **サブクエリを使用した更新可能なクエリ** | **いいえ。**{.text-danger} | | | **E161** | **先頭のdouble minusを使用したSQLコメント** | **はい。**{.text-success} | | | **E171** | **SQLSTATEサポート** | **いいえ。**{.text-danger} | | | **E182** | **ホスト言語バインド** | **いいえ。**{.text-danger} | | | **F031** | **基本的なスキーマ操作** | **部分的**{.text-warning} | | -| F031-01 | 永続ベーステーブルを作成するCREATE TABLE文 | 部分的{.text-warning} | いいえ。 `SYSTEM VERSIONING`, `ON COMMIT`, `GLOBAL`, `LOCAL`, `PRESERVE`, `DELETE`, `REF IS`, `WITH OPTIONS`, `UNDER`, `LIKE`, `PERIOD FOR` 句およびユーザー解決データ型のサポートなし | -| F031-02 | CREATE VIEW文 | 部分的{.text-warning} | いいえ。 `RECURSIVE`, `CHECK`, `UNDER`, `WITH OPTIONS` 句およびユーザー解決データ型のサポートなし | -| F031-03 | グラント声明 | はい。{.text-success} | | -| F031-04 | ALTER TABLE文:ADD COLUMN句 | 部分的{.text-warning} | サポートなし `GENERATED` 節およびシステム期間 | -| F031-13 | DROP TABLE文:RESTRICT句 | いいえ。{.text-danger} | | -| F031-16 | DROP VIEW文:RESTRICT句 | いいえ。{.text-danger} | | -| F031-19 | REVOKEステートメント:RESTRICT句 | いいえ。{.text-danger} | | +| F031-01 | 永続ベーステーブルを作成するCREATE TABLE文 | 部分的 {.text-warning} | いいえ。 `SYSTEM VERSIONING`, `ON COMMIT`, `GLOBAL`, `LOCAL`, `PRESERVE`, `DELETE`, `REF IS`, `WITH OPTIONS`, `UNDER`, `LIKE`, `PERIOD FOR` 句およびユーザー解決データ型のサポートなし | +| F031-02 | CREATE VIEW文 | 部分的 {.text-warning} | いいえ。 `RECURSIVE`, `CHECK`, `UNDER`, `WITH OPTIONS` 句およびユーザー解決データ型のサポートなし | +| F031-03 | グラント声明 | はい。 {.text-success} | | +| F031-04 | ALTER TABLE文:ADD COLUMN句 | 部分的 {.text-warning} | サポートなし `GENERATED` 節およびシステム期間 | +| F031-13 | DROP TABLE文:RESTRICT句 | いいえ。 {.text-danger} | | +| F031-16 | DROP VIEW文:RESTRICT句 | いいえ。 {.text-danger} | | +| F031-19 | REVOKEステートメント:RESTRICT句 | いいえ。 {.text-danger} | | | **F041** | **基本的な結合テーブル** | **部分的**{.text-warning} | | -| F041-01 | Inner join必ずというわけではないが、内側のキーワード) | はい。{.text-success} | | -| F041-02 | 内部キーワード | はい。{.text-success} | | -| F041-03 | LEFT OUTER JOIN | はい。{.text-success} | | -| F041-04 | RIGHT OUTER JOIN | はい。{.text-success} | | -| F041-05 | 外部結合は入れ子にできます | はい。{.text-success} | | -| F041-07 | 左外部結合または右外部結合の内部テーブルは、内部結合でも使用できます | はい。{.text-success} | | -| F041-08 | すべての比較演算子がサポ) | いいえ。{.text-danger} | | +| F041-01 | Inner join必ずというわけではないが、内側のキーワード) | はい。 {.text-success} | | +| F041-02 | 内部キーワード | はい。 {.text-success} | | +| F041-03 | LEFT OUTER JOIN | はい。 {.text-success} | | +| F041-04 | RIGHT OUTER JOIN | はい。 {.text-success} | | +| F041-05 | 外部結合は入れ子にできます | はい。 {.text-success} | | +| F041-07 | 左外部結合または右外部結合の内部テーブルは、内部結合でも使用できます | はい。 {.text-success} | | +| F041-08 | すべての比較演算子がサポ) | いいえ。 {.text-danger} | | | **F051** | **基本日時** | **部分的**{.text-warning} | | -| F051-01 | 日付データ型(日付リテラルのサポートを含む) | 部分的{.text-warning} | リテラルなし | -| F051-02 | 秒の小数部の精度が0以上の時刻データ型(時刻リテラルのサポートを含む) | いいえ。{.text-danger} | | -| F051-03 | タイムスタンプのデータ型を含む支援のタイムスタンプ文字と小数点以下の秒の精度で少なくとも0-6 | いいえ。{.text-danger} | `DateTime64` timeは同様の機能を提供します | -| F051-04 | 日付、時刻、およびタイムスタンプのデータ型の比較述語 | 部分的{.text-warning} | 使用可能なデータ型は一つだけです | -| F051-05 | Datetime型と文字列型の間の明示的なキャスト | はい。{.text-success} | | -| F051-06 | CURRENT_DATE | いいえ。{.text-danger} | `today()` 似ています | -| F051-07 | LOCALTIME | いいえ。{.text-danger} | `now()` 似ています | -| F051-08 | LOCALTIMESTAMP | いいえ。{.text-danger} | | +| F051-01 | 日付データ型(日付リテラルのサポートを含む) | 部分的 {.text-warning} | リテラルなし | +| F051-02 | 秒の小数部の精度が0以上の時刻データ型(時刻リテラルのサポートを含む) | いいえ。 {.text-danger} | | +| F051-03 | タイムスタンプのデータ型を含む支援のタイムスタンプ文字と小数点以下の秒の精度で少なくとも0-6 | いいえ。 {.text-danger} | `DateTime64` timeは同様の機能を提供します | +| F051-04 | 日付、時刻、およびタイムスタンプのデータ型の比較述語 | 部分的 {.text-warning} | 
使用可能なデータ型は一つだけです | +| F051-05 | Datetime型と文字列型の間の明示的なキャスト | はい。 {.text-success} | | +| F051-06 | CURRENT_DATE | いいえ。 {.text-danger} | `today()` 似ています | +| F051-07 | LOCALTIME | いいえ。 {.text-danger} | `now()` 似ています | +| F051-08 | LOCALTIMESTAMP | いいえ。 {.text-danger} | | | **F081** | **ビュー内の組合および除く** | **部分的**{.text-warning} | | | **F131** | **グループ化操作** | **部分的**{.text-warning} | | -| F131-01 | ここで、グループにより、条項対応してクエリを処理するクラウドの場合グ眺望 | はい。{.text-success} | | -| F131-02 | グループ化されたビュ | はい。{.text-success} | | -| F131-03 | セット機能に対応してクエリを処理するクラウドの場合グ眺望 | はい。{.text-success} | | -| F131-04 | GROUP BY句とHAVING句およびグループ化ビューを持つサブクエリ | はい。{.text-success} | | -| F131-05 | GROUP BY句およびHAVING句およびグループ化ビューを使用した単一行選択 | いいえ。{.text-danger} | | +| F131-01 | ここで、グループにより、条項対応してクエリを処理するクラウドの場合グ眺望 | はい。 {.text-success} | | +| F131-02 | グループ化されたビュ | はい。 {.text-success} | | +| F131-03 | セット機能に対応してクエリを処理するクラウドの場合グ眺望 | はい。 {.text-success} | | +| F131-04 | GROUP BY句とHAVING句およびグループ化ビューを持つサブクエリ | はい。 {.text-success} | | +| F131-05 | GROUP BY句およびHAVING句およびグループ化ビューを使用した単一行選択 | いいえ。 {.text-danger} | | | **F181** | **複数モジュール対応** | **いいえ。**{.text-danger} | | | **F201** | **キャスト機能** | **はい。**{.text-success} | | | **F221** | **明示的な既定値** | **いいえ。**{.text-danger} | | | **F261** | **大文字と小文字の式** | **はい。**{.text-success} | | -| F261-01 | 簡単な場合 | はい。{.text-success} | | -| F261-02 | 検索ケース | はい。{.text-success} | | -| F261-03 | NULLIF | はい。{.text-success} | | -| F261-04 | COALESCE | はい。{.text-success} | | +| F261-01 | 簡単な場合 | はい。 {.text-success} | | +| F261-02 | 検索ケース | はい。 {.text-success} | | +| F261-03 | NULLIF | はい。 {.text-success} | | +| F261-04 | COALESCE | はい。 {.text-success} | | | **F311** | **スキーマ定義文** | **部分的**{.text-warning} | | -| F311-01 | CREATE SCHEMA | いいえ。{.text-danger} | | -| F311-02 | 永続ベーステーブルのテーブルの作成 | はい。{.text-success} | | -| F311-03 | CREATE VIEW | はい。{.text-success} | | -| F311-04 | CREATE VIEW: WITH CHECK OPTION | いいえ。{.text-danger} | | -| F311-05 | グラント声明 | はい。{.text-success} | | +| F311-01 | CREATE SCHEMA | いいえ。 {.text-danger} | | +| F311-02 | 永続ベーステーブルのテーブルの作成 | はい。 {.text-success} | | +| F311-03 | CREATE VIEW | はい。 {.text-success} | | +| F311-04 | CREATE VIEW: WITH CHECK OPTION | いいえ。 {.text-danger} | | +| F311-05 | グラント声明 | はい。 {.text-success} | | | **F471** | **スカラーサブクエリ値** | **はい。**{.text-success} | | | **F481** | **展開されたNULL述語** | **はい。**{.text-success} | | | **F812** | **基本的なフラグ設定** | **いいえ。**{.text-danger} | | | **T321** | **基本的なSQL呼び出しルーチン** | **いいえ。**{.text-danger} | | -| T321-01 | オーバーロードのないユーザー定義関数 | いいえ。{.text-danger} | | -| T321-02 | 過負荷のないユーザー定義ストアドプロシージャ | いいえ。{.text-danger} | | -| T321-03 | 関数呼び出し | いいえ。{.text-danger} | | -| T321-04 | CALL文 | いいえ。{.text-danger} | | -| T321-05 | RETURN文 | いいえ。{.text-danger} | | +| T321-01 | オーバーロードのないユーザー定義関数 | いいえ。 {.text-danger} | | +| T321-02 | 過負荷のないユーザー定義ストアドプロシージャ | いいえ。 {.text-danger} | | +| T321-03 | 関数呼び出し | いいえ。 {.text-danger} | | +| T321-04 | CALL文 | いいえ。 {.text-danger} | | +| T321-05 | RETURN文 | いいえ。 {.text-danger} | | | **T631** | **一つのリスト要素を持つ述語で** | **はい。**{.text-success} | | diff --git a/docs/ja/sql-reference/data-types/lowcardinality.md b/docs/ja/sql-reference/data-types/lowcardinality.md new file mode 100644 index 00000000000..dd3a9aa1c0d --- /dev/null +++ b/docs/ja/sql-reference/data-types/lowcardinality.md @@ -0,0 +1 @@ +../../../../en/sql-reference/data-types/lowcardinality.md \ No newline at end of file diff --git a/docs/ru/development/developer-instruction.md b/docs/ru/development/developer-instruction.md index 4bdcf89004d..5d7c0caeafa 100644 
--- a/docs/ru/development/developer-instruction.md
+++ b/docs/ru/development/developer-instruction.md
@@ -259,8 +259,8 @@ Mac OS X:
 
     sudo apt install wget xz-utils
 
-    wget https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz
-    wget https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz
+    wget https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz
+    wget https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz
 
     xz -v -d hits_v1.tsv.xz
     xz -v -d visits_v1.tsv.xz
diff --git a/docs/ru/development/style.md b/docs/ru/development/style.md
index 671293a7bbd..4d71dca46a7 100644
--- a/docs/ru/development/style.md
+++ b/docs/ru/development/style.md
@@ -582,7 +582,7 @@ Fork для распараллеливания не используется.
 
 **14.** Возврат значений.
 
-В большинстве случаев, просто возвращайте значение с помощью `return`. Не пишите `[return std::move(res)]{.strike}`.
+В большинстве случаев, просто возвращайте значение с помощью `return`. Не пишите `return std::move(res)`.
 
 Если внутри функции создаётся объект на куче и отдаётся наружу, то возвращайте `shared_ptr` или `unique_ptr`.
@@ -676,7 +676,7 @@ Loader() {}
 
 **24.** Не нужно использовать `trailing return type` для функций, если в этом нет необходимости.
 
 ``` cpp
-[auto f() -> void;]{.strike}
+auto f() -> void
 ```
 
 **25.** Объявление и инициализация переменных.
diff --git a/docs/ru/getting-started/example-datasets/metrica.md b/docs/ru/getting-started/example-datasets/metrica.md
index e8a3163376c..3246eb5178c 100644
--- a/docs/ru/getting-started/example-datasets/metrica.md
+++ b/docs/ru/getting-started/example-datasets/metrica.md
@@ -5,14 +5,14 @@ toc_title: "\u0410\u043d\u043e\u043d\u0438\u043c\u0438\u0437\u0438\u0440\u043e\u
 
 # Анонимизированные данные Яндекс.Метрики {#anonimizirovannye-dannye-iandeks-metriki}
 
-Датасет состоит из двух таблиц, содержащих анонимизированные данные о хитах (`hits_v1`) и визитах (`visits_v1`) Яндекс.Метрики. Каждую из таблиц можно скачать в виде сжатого `.tsv.xz`-файла или в виде уже готовых партиций. Также можно скачать расширенную версию таблицы `hits`, содержащую 100 миллионов строк в виде [архива c файлами TSV](https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_100m_obfuscated_v1.tsv.xz) и в виде [готовых партиций](https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_100m_obfuscated_v1.tar.xz).
+Датасет состоит из двух таблиц, содержащих анонимизированные данные о хитах (`hits_v1`) и визитах (`visits_v1`) Яндекс.Метрики. Каждую из таблиц можно скачать в виде сжатого `.tsv.xz`-файла или в виде уже готовых партиций. Также можно скачать расширенную версию таблицы `hits`, содержащую 100 миллионов строк в виде [архива c файлами TSV](https://datasets.clickhouse.tech/hits/tsv/hits_100m_obfuscated_v1.tsv.xz) и в виде [готовых партиций](https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz).
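Alongside the partition route, the developer-instruction and metrica hunks above document a compressed-TSV route; the sketch below consolidates it for `hits_v1` (the hunk with the matching `CREATE TABLE` statement continues just below). The `INSERT ... FORMAT TSV` pipe and the `--max_insert_block_size` value are standard clickhouse-client usage supplied here as assumptions, not lines quoted from this patch.

``` bash
# Minimal sketch of the compressed-TSV import route for hits_v1.
curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets"
# Create datasets.hits_v1 with the CREATE TABLE statement from the metrica.md
# hunk below, then stream the file in; the block size is an assumed tuning value.
cat hits_v1.tsv | clickhouse-client --max_insert_block_size=100000 --query "INSERT INTO datasets.hits_v1 FORMAT TSV"
clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1"
```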
## Получение таблиц из партиций {#poluchenie-tablits-iz-partitsii} **Скачивание и импортирование партиций hits:** ``` bash -$ curl -O https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_v1.tar +$ curl -O https://datasets.clickhouse.tech/hits/partitions/hits_v1.tar $ tar xvf hits_v1.tar -C /var/lib/clickhouse # путь к папке с данными ClickHouse $ # убедитесь, что установлены корректные права доступа на файлы $ sudo service clickhouse-server restart @@ -22,7 +22,7 @@ $ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1" **Скачивание и импортирование партиций visits:** ``` bash -$ curl -O https://clickhouse-datasets.s3.yandex.net/visits/partitions/visits_v1.tar +$ curl -O https://datasets.clickhouse.tech/visits/partitions/visits_v1.tar $ tar xvf visits_v1.tar -C /var/lib/clickhouse # путь к папке с данными ClickHouse $ # убедитесь, что установлены корректные права доступа на файлы $ sudo service clickhouse-server restart @@ -34,7 +34,7 @@ $ clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1" **Скачивание и импортирование hits из сжатого tsv-файла** ``` bash -$ curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv +$ curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv $ # теперь создадим таблицу $ clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets" $ clickhouse-client --query "CREATE TABLE datasets.hits_v1 ( WatchID UInt64, JavaEnable UInt8, Title String, GoodEvent Int16, EventTime DateTime, EventDate Date, CounterID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RegionID UInt32, UserID UInt64, CounterClass Int8, OS UInt8, UserAgent UInt8, URL String, Referer String, URLDomain String, RefererDomain String, Refresh UInt8, IsRobot UInt8, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), ResolutionWidth UInt16, ResolutionHeight UInt16, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, FlashMinor2 String, NetMajor UInt8, NetMinor UInt8, UserAgentMajor UInt16, UserAgentMinor FixedString(2), CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, MobilePhone UInt8, MobilePhoneModel String, Params String, IPNetworkID UInt32, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, IsArtifical UInt8, WindowClientWidth UInt16, WindowClientHeight UInt16, ClientTimeZone Int16, ClientEventTime DateTime, SilverlightVersion1 UInt8, SilverlightVersion2 UInt8, SilverlightVersion3 UInt32, SilverlightVersion4 UInt16, PageCharset String, CodeVersion UInt32, IsLink UInt8, IsDownload UInt8, IsNotBounce UInt8, FUniqID UInt64, HID UInt32, IsOldCounter UInt8, IsEvent UInt8, IsParameter UInt8, DontCountHits UInt8, WithHash UInt8, HitColor FixedString(1), UTCEventTime DateTime, Age UInt8, Sex UInt8, Income UInt8, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), RemoteIP UInt32, RemoteIP6 FixedString(16), WindowName Int32, OpenerName Int32, HistoryLength Int16, BrowserLanguage FixedString(2), BrowserCountry FixedString(2), SocialNetwork String, SocialAction String, HTTPError UInt16, SendTiming Int32, DNSTiming Int32, ConnectTiming Int32, ResponseStartTiming Int32, ResponseEndTiming Int32, FetchTiming Int32, RedirectTiming Int32, DOMInteractiveTiming Int32, DOMContentLoadedTiming Int32, DOMCompleteTiming Int32, LoadEventStartTiming Int32, LoadEventEndTiming Int32, NSToDOMContentLoadedTiming Int32, FirstPaintTiming Int32, RedirectCount 
Int8, SocialSourceNetworkID UInt8, SocialSourcePage String, ParamPrice Int64, ParamOrderID String, ParamCurrency FixedString(3), ParamCurrencyID UInt16, GoalsReached Array(UInt32), OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, RefererHash UInt64, URLHash UInt64, CLID UInt32, YCLID UInt64, ShareService String, ShareURL String, ShareTitle String, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), IslandID FixedString(16), RequestNum UInt32, RequestTry UInt8) ENGINE = MergeTree() PARTITION BY toYYYYMM(EventDate) ORDER BY (CounterID, EventDate, intHash32(UserID)) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192" @@ -48,7 +48,7 @@ $ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1" **Скачивание и импортирование visits из сжатого tsv-файла** ``` bash -$ curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv +$ curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv $ # теперь создадим таблицу $ clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets" $ clickhouse-client --query "CREATE TABLE datasets.visits_v1 ( CounterID UInt32, StartDate Date, Sign Int8, IsNew UInt8, VisitID UInt64, UserID UInt64, StartTime DateTime, Duration UInt32, UTCStartTime DateTime, PageViews Int32, Hits Int32, IsBounce UInt8, Referer String, StartURL String, RefererDomain String, StartURLDomain String, EndURL String, LinkURL String, IsDownload UInt8, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, PlaceID Int32, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), IsYandex UInt8, GoalReachesDepth Int32, GoalReachesURL Int32, GoalReachesAny Int32, SocialSourceNetworkID UInt8, SocialSourcePage String, MobilePhoneModel String, ClientEventTime DateTime, RegionID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RemoteIP UInt32, RemoteIP6 FixedString(16), IPNetworkID UInt32, SilverlightVersion3 UInt32, CodeVersion UInt32, ResolutionWidth UInt16, ResolutionHeight UInt16, UserAgentMajor UInt16, UserAgentMinor UInt16, WindowClientWidth UInt16, WindowClientHeight UInt16, SilverlightVersion2 UInt8, SilverlightVersion4 UInt16, FlashVersion3 UInt16, FlashVersion4 UInt16, ClientTimeZone Int16, OS UInt8, UserAgent UInt8, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, NetMajor UInt8, NetMinor UInt8, MobilePhone UInt8, SilverlightVersion1 UInt8, Age UInt8, Sex UInt8, Income UInt8, JavaEnable UInt8, CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, BrowserLanguage UInt16, BrowserCountry UInt16, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), Params Array(String), Goals Nested(ID UInt32, Serial UInt32, EventTime DateTime, Price Int64, OrderID String, CurrencyID UInt32), WatchIDs Array(UInt64), ParamSumPrice Int64, ParamCurrency FixedString(3), ParamCurrencyID UInt16, ClickLogID UInt64, ClickEventID Int32, ClickGoodEvent Int32, ClickEventTime DateTime, ClickPriorityID Int32, ClickPhraseID Int32, ClickPageID Int32, ClickPlaceID Int32, ClickTypeID Int32, ClickResourceID Int32, ClickCost UInt32, ClickClientIP UInt32, ClickDomainID UInt32, ClickURL String, ClickAttempt UInt8, ClickOrderID UInt32, ClickBannerID UInt32, 
ClickMarketCategoryID UInt32, ClickMarketPP UInt32, ClickMarketCategoryName String, ClickMarketPPName String, ClickAWAPSCampaignName String, ClickPageName String, ClickTargetType UInt16, ClickTargetPhraseID UInt64, ClickContextType UInt8, ClickSelectType Int8, ClickOptions String, ClickGroupBannerID Int32, OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, FirstVisit DateTime, PredLastVisit Date, LastVisit Date, TotalVisits UInt32, TraficSource Nested(ID Int8, SearchEngineID UInt16, AdvEngineID UInt8, PlaceID UInt16, SocialSourceNetworkID UInt8, Domain String, SearchPhrase String, SocialSourcePage String), Attendance FixedString(16), CLID UInt32, YCLID UInt64, NormalizedRefererHash UInt64, SearchPhraseHash UInt64, RefererDomainHash UInt64, NormalizedStartURLHash UInt64, StartURLDomainHash UInt64, NormalizedEndURLHash UInt64, TopLevelDomain UInt64, URLScheme UInt64, OpenstatServiceNameHash UInt64, OpenstatCampaignIDHash UInt64, OpenstatAdIDHash UInt64, OpenstatSourceIDHash UInt64, UTMSourceHash UInt64, UTMMediumHash UInt64, UTMCampaignHash UInt64, UTMContentHash UInt64, UTMTermHash UInt64, FromHash UInt64, WebVisorEnabled UInt8, WebVisorActivity UInt32, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), Market Nested(Type UInt8, GoalID UInt32, OrderID String, OrderPrice Int64, PP UInt32, DirectPlaceID UInt32, DirectOrderID UInt32, DirectBannerID UInt32, GoodID String, GoodName String, GoodQuantity Int32, GoodPrice Int64), IslandID FixedString(16)) ENGINE = CollapsingMergeTree(Sign) PARTITION BY toYYYYMM(StartDate) ORDER BY (CounterID, StartDate, intHash32(UserID), VisitID) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192" diff --git a/docs/ru/getting-started/example-datasets/nyc-taxi.md b/docs/ru/getting-started/example-datasets/nyc-taxi.md index 1f981324261..a4472751a99 100644 --- a/docs/ru/getting-started/example-datasets/nyc-taxi.md +++ b/docs/ru/getting-started/example-datasets/nyc-taxi.md @@ -283,7 +283,7 @@ SELECT formatReadableSize(sum(bytes)) FROM system.parts WHERE table = 'trips_mer ## Скачивание готовых партиций {#skachivanie-gotovykh-partitsii} ``` bash -$ curl -O https://clickhouse-datasets.s3.yandex.net/trips_mergetree/partitions/trips_mergetree.tar +$ curl -O https://datasets.clickhouse.tech/trips_mergetree/partitions/trips_mergetree.tar $ tar xvf trips_mergetree.tar -C /var/lib/clickhouse # путь к папке с данными ClickHouse $ # убедитесь, что установлены корректные права доступа на файлы $ sudo service clickhouse-server restart diff --git a/docs/ru/getting-started/example-datasets/ontime.md b/docs/ru/getting-started/example-datasets/ontime.md index 4d3eea14da6..41a1c0d3142 100644 --- a/docs/ru/getting-started/example-datasets/ontime.md +++ b/docs/ru/getting-started/example-datasets/ontime.md @@ -152,7 +152,7 @@ $ for i in *.zip; do echo $i; unzip -cq $i '*.csv' | sed 's/\.00//g' | clickhous ## Скачивание готовых партиций {#skachivanie-gotovykh-partitsii} ``` bash -$ curl -O https://clickhouse-datasets.s3.yandex.net/ontime/partitions/ontime.tar +$ curl -O https://datasets.clickhouse.tech/ontime/partitions/ontime.tar $ tar xvf ontime.tar -C /var/lib/clickhouse # путь к папке с данными ClickHouse $ # убедитесь, что установлены корректные права доступа на файлы $ sudo service clickhouse-server restart diff --git 
a/docs/ru/getting-started/install.md b/docs/ru/getting-started/install.md index fb14ecfe599..04efe77712b 100644 --- a/docs/ru/getting-started/install.md +++ b/docs/ru/getting-started/install.md @@ -33,7 +33,7 @@ grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not su ### Из RPM пакетов {#from-rpm-packages} -Команда ClickHouse в Яндексе рекомендует использовать официальные предкомпилированные `rpm` пакеты для CentOS, RedHad и всех остальных дистрибутивов Linux, основанных на rpm. +Команда ClickHouse в Яндексе рекомендует использовать официальные предкомпилированные `rpm` пакеты для CentOS, RedHat и всех остальных дистрибутивов Linux, основанных на rpm. Сначала нужно подключить официальный репозиторий: diff --git a/docs/ru/getting-started/tutorial.md b/docs/ru/getting-started/tutorial.md index 0fc267e497c..bd987520dff 100644 --- a/docs/ru/getting-started/tutorial.md +++ b/docs/ru/getting-started/tutorial.md @@ -85,8 +85,8 @@ Now it’s time to fill our ClickHouse server with some sample data. In this tut ### Download and Extract Table Data {#download-and-extract-table-data} ``` bash -curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv -curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv +curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv +curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv ``` The extracted files are about 10GB in size. diff --git a/docs/ru/interfaces/formats.md b/docs/ru/interfaces/formats.md index ba0d3d9ccde..482e4999cea 100644 --- a/docs/ru/interfaces/formats.md +++ b/docs/ru/interfaces/formats.md @@ -9,6 +9,7 @@ ClickHouse может принимать (`INSERT`) и отдавать (`SELECT Поддерживаемые форматы и возможность использовать их в запросах `INSERT` и `SELECT` перечислены в таблице ниже. | Формат | INSERT | SELECT | |-----------------------------------------------------------------------------------------|--------|--------| | [TabSeparated](#tabseparated) | ✔ | ✔ | @@ -56,6 +57,7 @@ ClickHouse может принимать (`INSERT`) и отдавать (`SELECT | [XML](#xml) | ✗ | ✔ | | [CapnProto](#capnproto) | ✔ | ✗ | | [LineAsString](#lineasstring) | ✔ | ✗ | +| [RawBLOB](#rawblob) | ✔ | ✔ | Вы можете регулировать некоторые параметры работы с форматами с помощью настроек ClickHouse. За дополнительной информацией обращайтесь к разделу [Настройки](../operations/settings/settings.md). @@ -1248,4 +1250,45 @@ SELECT * FROM line_as_string; └───────────────────────────────────────────────────┘ ``` +## RawBLOB {#rawblob} + +В этом формате все входные данные считываются в одно значение. Парсить можно только таблицу с одним полем типа [String](../sql-reference/data-types/string.md) или подобным ему. +Результат выводится в бинарном виде без разделителей и экранирования. При выводе более одного значения формат неоднозначен и будет невозможно прочитать данные снова. + +Ниже приведено сравнение форматов `RawBLOB` и [TabSeparatedRaw](#tabseparatedraw). +`RawBLOB`: +- данные выводятся в бинарном виде, без экранирования; +- нет разделителей между значениями; +- нет перевода строки в конце каждого значения. +[TabSeparatedRaw](#tabseparatedraw): +- данные выводятся без экранирования; +- строка содержит значения, разделённые табуляцией; +- после последнего значения в строке есть перевод строки.
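To make the single-value restriction above concrete, a minimal sketch (the table name `raw_blob_demo` is hypothetical, not from the original docs):

```sql
-- Hypothetical illustration: a table with a single String column for RawBLOB I/O.
CREATE TABLE raw_blob_demo (data String) ENGINE = Memory;

INSERT INTO raw_blob_demo VALUES ('ab'), ('cd');

-- Emits the bytes 'abcd' with no separator between the two values,
-- which is why RawBLOB output is unambiguous only for a single value.
SELECT * FROM raw_blob_demo FORMAT RawBLOB;
```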
+ +Далее рассмотрено сравнение форматов `RawBLOB` и [RowBinary](#rowbinary). +`RawBLOB`: +- строки выводятся без их длины в начале. +`RowBinary`: +- строки представлены как длина в формате varint (unsigned [LEB128](https://en.wikipedia.org/wiki/LEB128)), а затем байты строки. + +При передаче на вход `RawBLOB` пустых данных, ClickHouse бросает исключение: + +``` text +Code: 108. DB::Exception: No data to insert +``` + +**Пример** + +``` bash +$ clickhouse-client --query "CREATE TABLE {some_table} (a String) ENGINE = Memory;" +$ cat {filename} | clickhouse-client --query="INSERT INTO {some_table} FORMAT RawBLOB" +$ clickhouse-client --query "SELECT * FROM {some_table} FORMAT RawBLOB" | md5sum +``` + +Результат: + +``` text +f9725a22f9191e064120d718e26862a9 - +``` + [Оригинальная статья](https://clickhouse.tech/docs/ru/interfaces/formats/) diff --git a/docs/ru/interfaces/third-party/client-libraries.md b/docs/ru/interfaces/third-party/client-libraries.md index f35acb9e968..c07aab5826c 100644 --- a/docs/ru/interfaces/third-party/client-libraries.md +++ b/docs/ru/interfaces/third-party/client-libraries.md @@ -20,6 +20,7 @@ toc_title: "\u041a\u043b\u0438\u0435\u043d\u0442\u0441\u043a\u0438\u0435\u0020\u - [simpod/clickhouse-client](https://packagist.org/packages/simpod/clickhouse-client) - [seva-code/php-click-house-client](https://packagist.org/packages/seva-code/php-click-house-client) - [SeasClick C++ client](https://github.com/SeasX/SeasClick) + - [glushkovds/phpclickhouse-laravel](https://packagist.org/packages/glushkovds/phpclickhouse-laravel) - Go - [clickhouse](https://github.com/kshvakov/clickhouse/) - [go-clickhouse](https://github.com/roistat/go-clickhouse) diff --git a/docs/ru/operations/settings/query-complexity.md b/docs/ru/operations/settings/query-complexity.md index a62e7523207..b0eac5d96e7 100644 --- a/docs/ru/operations/settings/query-complexity.md +++ b/docs/ru/operations/settings/query-complexity.md @@ -297,7 +297,7 @@ FORMAT Null; **Смотрите также** - [Секция JOIN](../../sql-reference/statements/select/join.md#select-join) -- [Движоy таблиц Join](../../engines/table-engines/special/join.md) +- [Движок таблиц Join](../../engines/table-engines/special/join.md) ## max_partitions_per_insert_block {#max-partitions-per-insert-block} diff --git a/docs/ru/operations/settings/settings.md b/docs/ru/operations/settings/settings.md index 0de29112d1b..0a8094231c2 100644 --- a/docs/ru/operations/settings/settings.md +++ b/docs/ru/operations/settings/settings.md @@ -2231,10 +2231,41 @@ SELECT CAST(toNullable(toInt32(0)) AS Int32) as x, toTypeName(x); ## output_format_tsv_null_representation {#output_format_tsv_null_representation} -Позволяет настраивать представление `NULL` для формата выходных данных [TSV](../../interfaces/formats.md#tabseparated). Настройка управляет форматом выходных данных, `\N` является единственным поддерживаемым представлением для формата входных данных TSV. +Определяет представление `NULL` для формата выходных данных [TSV](../../interfaces/formats.md#tabseparated). Пользователь может установить в качестве значения любую строку. Значение по умолчанию: `\N`. 
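The examples below query a table named `tsv_custom_null` whose definition is not shown; one possible setup that reproduces the output (the column type and engine are assumptions) is:

```sql
-- Assumed setup for the examples below: one Nullable column, three rows.
CREATE TABLE tsv_custom_null (n Nullable(UInt32)) ENGINE = TinyLog;

INSERT INTO tsv_custom_null VALUES (788), (NULL), (NULL);
```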
+**Примеры** + +Запрос + +```sql +SELECT * FROM tsv_custom_null FORMAT TSV; +``` + +Результат + +```text +788 +\N +\N +``` + +Запрос + +```sql +SET output_format_tsv_null_representation = 'My NULL'; +SELECT * FROM tsv_custom_null FORMAT TSV; +``` + +Результат + +```text +788 +My NULL +My NULL +``` + ## output_format_json_array_of_rows {#output-format-json-array-of-rows} Позволяет выводить все строки в виде массива JSON в формате [JSONEachRow](../../interfaces/formats.md#jsoneachrow). diff --git a/docs/ru/operations/system-tables/distribution_queue.md b/docs/ru/operations/system-tables/distribution_queue.md new file mode 100644 index 00000000000..18346b34e04 --- /dev/null +++ b/docs/ru/operations/system-tables/distribution_queue.md @@ -0,0 +1,46 @@ +# system.distribution_queue {#system_tables-distribution_queue} + +Содержит информацию о локальных файлах, которые находятся в очереди для отправки на шарды. Эти локальные файлы содержат новые куски, которые создаются путем вставки новых данных в Distributed таблицу в асинхронном режиме. + +Столбцы: + +- `database` ([String](../../sql-reference/data-types/string.md)) — имя базы данных. + +- `table` ([String](../../sql-reference/data-types/string.md)) — имя таблицы. + +- `data_path` ([String](../../sql-reference/data-types/string.md)) — путь к папке с локальными файлами. + +- `is_blocked` ([UInt8](../../sql-reference/data-types/int-uint.md)) — флаг, указывающий на блокировку отправки локальных файлов на шарды. + +- `error_count` ([UInt64](../../sql-reference/data-types/int-uint.md)) — количество ошибок. + +- `data_files` ([UInt64](../../sql-reference/data-types/int-uint.md)) — количество локальных файлов в папке. + +- `data_compressed_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md)) — размер сжатых данных в локальных файлах в байтах. + +- `last_exception` ([String](../../sql-reference/data-types/string.md)) — текстовое сообщение о последней возникшей ошибке, если таковые имеются. + +**Пример** + +``` sql +SELECT * FROM system.distribution_queue LIMIT 1 FORMAT Vertical; +``` + +``` text +Row 1: +────── +database: default +table: dist +data_path: ./store/268/268bc070-3aad-4b1a-9cf2-4987580161af/default@127%2E0%2E0%2E2:9000/ +is_blocked: 1 +error_count: 0 +data_files: 1 +data_compressed_bytes: 499 +last_exception: +``` + +**Смотрите также** + +- [Движок таблиц Distributed](../../engines/table-engines/special/distributed.md) + +[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/distribution_queue) diff --git a/docs/ru/sql-reference/data-types/lowcardinality.md b/docs/ru/sql-reference/data-types/lowcardinality.md index ec9e4e7588e..d94cedd29ce 100644 --- a/docs/ru/sql-reference/data-types/lowcardinality.md +++ b/docs/ru/sql-reference/data-types/lowcardinality.md @@ -57,3 +57,5 @@ ORDER BY id - [A Magical Mystery Tour of the LowCardinality Data Type](https://www.altinity.com/blog/2019/3/27/low-cardinality). - [Reducing Clickhouse Storage Cost with the Low Cardinality Type – Lessons from an Instana Engineer](https://www.instana.com/blog/reducing-clickhouse-storage-cost-with-the-low-cardinality-type-lessons-from-an-instana-engineer/). - [String Optimization (video presentation in Russian)](https://youtu.be/rqf-ILRgBdY?list=PL0Z2YDlm0b3iwXCpEFiOOYmwXzVmjJfEt). [Slides in English](https://github.com/yandex/clickhouse-presentations/raw/master/meetup19/string_optimization.pdf). 
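Returning to the `system.distribution_queue` table documented above, a hedged monitoring sketch (the aggregation is illustrative and not part of the original docs; it uses only the columns listed there):

```sql
-- Illustrative check of pending asynchronous Distributed inserts.
SELECT
    database,
    table,
    sum(data_files)            AS files_pending,
    sum(data_compressed_bytes) AS bytes_pending,
    max(error_count)           AS max_errors
FROM system.distribution_queue
GROUP BY database, table
ORDER BY bytes_pending DESC;
```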
+ +[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/data-types/lowcardinality/) diff --git a/docs/ru/sql-reference/functions/date-time-functions.md b/docs/ru/sql-reference/functions/date-time-functions.md index 3c9bd99de57..b7a077b3bd6 100644 --- a/docs/ru/sql-reference/functions/date-time-functions.md +++ b/docs/ru/sql-reference/functions/date-time-functions.md @@ -63,10 +63,18 @@ int32samoa: 1546300800 Переводит дату или дату-с-временем в число типа UInt16, содержащее номер года (AD). +## toQuarter {#toquarter} + +Переводит дату или дату-с-временем в число типа UInt8, содержащее номер квартала. + ## toMonth {#tomonth} Переводит дату или дату-с-временем в число типа UInt8, содержащее номер месяца (1-12). +## toDayOfYear {#todayofyear} + +Переводит дату или дату-с-временем в число типа UInt16, содержащее номер дня года (1-366). + ## toDayOfMonth {#todayofmonth} Переводит дату или дату-с-временем в число типа UInt8, содержащее номер дня в месяце (1-31). @@ -128,6 +136,22 @@ SELECT toUnixTimestamp('2017-11-05 08:07:47', 'Asia/Tokyo') AS unix_timestamp Округляет дату или дату-с-временем вниз до первого дня года. Возвращается дата. +## toStartOfISOYear {#tostartofisoyear} + +Округляет дату или дату-с-временем вниз до первого дня ISO года. Возвращается дата. +Начало ISO года отличается от начала обычного года, потому что в соответствии с [ISO 8601:1988](https://en.wikipedia.org/wiki/ISO_8601) первая неделя года - это неделя с четырьмя или более днями в этом году. + +1 января 2017 г. - воскресенье, т.е. первая ISO неделя 2017 года началась в понедельник 2 января, поэтому 1 января 2017 - это 2016 ISO-год, который начался 2016-01-04. + +```sql +SELECT toStartOfISOYear(toDate('2017-01-01')) AS ISOYear20170101; +``` +```text +┌─ISOYear20170101─┐ +│ 2016-01-04 │ +└─────────────────┘ +``` + ## toStartOfQuarter {#tostartofquarter} Округляет дату или дату-с-временем вниз до первого дня квартала. @@ -147,6 +171,12 @@ SELECT toUnixTimestamp('2017-11-05 08:07:47', 'Asia/Tokyo') AS unix_timestamp Округляет дату или дату-с-временем вниз до ближайшего понедельника. Возвращается дата. +## toStartOfWeek(t[,mode]) {#tostartofweek} + +Округляет дату или дату со временем до ближайшего воскресенья или понедельника в соответствии с mode. +Возвращается дата. +Аргумент mode работает точно так же, как аргумент mode функции [toWeek()](#toweek). Если аргумент mode опущен, то используется режим 0. + ## toStartOfDay {#tostartofday} Округляет дату-с-временем вниз до начала дня. Возвращается дата-с-временем. @@ -243,6 +273,10 @@ WITH toDateTime64('2020-01-01 10:20:30.999', 3) AS dt64 SELECT toStartOfSecond(d Переводит дату-с-временем или дату в номер года, начиная с некоторого фиксированного момента в прошлом. +## toRelativeQuarterNum {#torelativequarternum} + +Переводит дату-с-временем или дату в номер квартала, начиная с некоторого фиксированного момента в прошлом. + ## toRelativeMonthNum {#torelativemonthnum} Переводит дату-с-временем или дату в номер месяца, начиная с некоторого фиксированного момента в прошлом. @@ -267,6 +301,102 @@ WITH toDateTime64('2020-01-01 10:20:30.999', 3) AS dt64 SELECT toStartOfSecond(d Переводит дату-с-временем в номер секунды, начиная с некоторого фиксированного момента в прошлом. +## toISOYear {#toisoyear} + +Переводит дату-с-временем или дату в число типа UInt16, содержащее номер ISO года. ISO год отличается от обычного года, потому что в соответствии с [ISO 8601:1988](https://en.wikipedia.org/wiki/ISO_8601) ISO год начинается не обязательно первого января.
+ +Пример: + +```sql +SELECT + toDate('2017-01-01') AS date, + toYear(date), + toISOYear(date) +``` +```text +┌───────date─┬─toYear(toDate('2017-01-01'))─┬─toISOYear(toDate('2017-01-01'))─┐ +│ 2017-01-01 │ 2017 │ 2016 │ +└────────────┴──────────────────────────────┴─────────────────────────────────┘ +``` + +## toISOWeek {#toisoweek} + +Переводит дату-с-временем или дату в число типа UInt8, содержащее номер ISO недели. +Начало ISO года отличается от начала обычного года, потому что в соответствии с [ISO 8601:1988](https://en.wikipedia.org/wiki/ISO_8601) первая неделя года - это неделя с четырьмя или более днями в этом году. + +1 января 2017 г. - воскресенье, т.е. первая ISO неделя 2017 года началась в понедельник 2 января, поэтому 1 января 2017 - это последняя неделя 2016 года. + +```sql +SELECT + toISOWeek(toDate('2017-01-01')) AS ISOWeek20170101, + toISOWeek(toDate('2017-01-02')) AS ISOWeek20170102 +``` + +```text +┌─ISOWeek20170101─┬─ISOWeek20170102─┐ +│ 52 │ 1 │ +└─────────────────┴─────────────────┘ +``` + +## toWeek(date\[, mode\]\[, timezone\]) {#toweek} +Переводит дату-с-временем или дату в число UInt8, содержащее номер недели. Второй аргумент mode задает режим: начинается ли неделя с воскресенья или с понедельника и должно ли возвращаемое значение находиться в диапазоне от 0 до 53 или от 1 до 53. Если аргумент mode опущен, то используется режим 0. + +`toISOWeek()` эквивалентно `toWeek(date,3)`. + +Описание режимов (mode): + +| Mode | Первый день недели | Диапазон | Неделя 1 - это первая неделя … | | ----------- | -------- | -------- | ------------------ | |0|Воскресенье|0-53|с воскресеньем в этом году |1|Понедельник|0-53|с 4-мя или более днями в этом году |2|Воскресенье|1-53|с воскресеньем в этом году |3|Понедельник|1-53|с 4-мя или более днями в этом году |4|Воскресенье|0-53|с 4-мя или более днями в этом году |5|Понедельник|0-53|с понедельником в этом году |6|Воскресенье|1-53|с 4-мя или более днями в этом году |7|Понедельник|1-53|с понедельником в этом году |8|Воскресенье|1-53|содержащая 1 января |9|Понедельник|1-53|содержащая 1 января + Для режимов со значением «с 4 или более днями в этом году» недели нумеруются в соответствии с ISO 8601:1988: + +- Если неделя, содержащая 1 января, имеет 4 или более дней в новом году, это неделя 1. + +- В противном случае это последняя неделя предыдущего года, а следующая неделя - неделя 1. + +Для режимов со значением «содержит 1 января», неделя 1 – это неделя, содержащая 1 января. Не имеет значения, сколько дней в новом году содержала неделя, даже если она содержала только один день. + +**Пример** + +```sql +SELECT toDate('2016-12-27') AS date, toWeek(date) AS week0, toWeek(date,1) AS week1, toWeek(date,9) AS week9; +``` + +```text +┌───────date─┬─week0─┬─week1─┬─week9─┐ +│ 2016-12-27 │ 52 │ 52 │ 1 │ +└────────────┴───────┴───────┴───────┘ +``` + +## toYearWeek(date[,mode]) {#toyearweek} +Возвращает год и неделю для даты. Год в результате может отличаться от года в аргументе даты для первой и последней недели года. + +Аргумент mode работает точно так же, как аргумент mode функции [toWeek()](#toweek). Если mode не задан, используется режим 0. + +`toISOYear()` эквивалентно `intDiv(toYearWeek(date,3),100)`.
+ +**Пример** + +```sql +SELECT toDate('2016-12-27') AS date, toYearWeek(date) AS yearWeek0, toYearWeek(date,1) AS yearWeek1, toYearWeek(date,9) AS yearWeek9; +``` + +```text +┌───────date─┬─yearWeek0─┬─yearWeek1─┬─yearWeek9─┐ +│ 2016-12-27 │ 201652 │ 201652 │ 201701 │ +└────────────┴───────────┴───────────┴───────────┘ +``` + ## date_trunc {#date_trunc} Отсекает от даты и времени части, меньшие чем указанная часть. diff --git a/docs/ru/sql-reference/statements/select/order-by.md b/docs/ru/sql-reference/statements/select/order-by.md index ea0f40b2dc0..f8b838cbd15 100644 --- a/docs/ru/sql-reference/statements/select/order-by.md +++ b/docs/ru/sql-reference/statements/select/order-by.md @@ -56,10 +56,188 @@ toc_title: ORDER BY ## Поддержка collation {#collation-support} -Для сортировки по значениям типа String есть возможность указать collation (сравнение). Пример: `ORDER BY SearchPhrase COLLATE 'tr'` - для сортировки по поисковой фразе, по возрастанию, с учётом турецкого алфавита, регистронезависимо, при допущении, что строки в кодировке UTF-8. `COLLATE` может быть указан или не указан для каждого выражения в ORDER BY независимо. Если есть `ASC` или `DESC`, то `COLLATE` указывается после них. При использовании `COLLATE` сортировка всегда регистронезависима. +Для сортировки по значениям типа [String](../../../sql-reference/data-types/string.md) есть возможность указать collation (сравнение). Пример: `ORDER BY SearchPhrase COLLATE 'tr'` - для сортировки по поисковой фразе, по возрастанию, с учётом турецкого алфавита, регистронезависимо, при допущении, что строки в кодировке UTF-8. `COLLATE` может быть указан или не указан для каждого выражения в ORDER BY независимо. Если есть `ASC` или `DESC`, то `COLLATE` указывается после них. При использовании `COLLATE` сортировка всегда регистронезависима. + +Сравнение поддерживается при использовании типов [LowCardinality](../../../sql-reference/data-types/lowcardinality.md), [Nullable](../../../sql-reference/data-types/nullable.md), [Array](../../../sql-reference/data-types/array.md) и [Tuple](../../../sql-reference/data-types/tuple.md). Рекомендуется использовать `COLLATE` только для окончательной сортировки небольшого количества строк, так как производительность сортировки с указанием `COLLATE` меньше, чем обычной сортировки по байтам. 
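The collation examples in the next section query a table named `collate_test` whose `CREATE TABLE` is not shown; a possible setup for the first, `String`-typed example (engine and column types are assumptions inferred from the sample data) is:

```sql
-- Assumed setup matching the first input table of the examples below.
CREATE TABLE collate_test (x UInt32, s String) ENGINE = Memory;

INSERT INTO collate_test VALUES (1, 'bca'), (2, 'ABC'), (3, '123a'), (4, 'abc'), (5, 'BCA');
```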
+## Примеры с использованием сравнения {#collation-examples} + +Пример со значениями типа [String](../../../sql-reference/data-types/string.md): + +Входная таблица: + +``` text +┌─x─┬─s────┐ +│ 1 │ bca │ +│ 2 │ ABC │ +│ 3 │ 123a │ +│ 4 │ abc │ +│ 5 │ BCA │ +└───┴──────┘ +``` + +Запрос: + +```sql +SELECT * FROM collate_test ORDER BY s ASC COLLATE 'en'; +``` + +Результат: + +``` text +┌─x─┬─s────┐ +│ 3 │ 123a │ +│ 4 │ abc │ +│ 2 │ ABC │ +│ 1 │ bca │ +│ 5 │ BCA │ +└───┴──────┘ +``` + +Пример со строками типа [Nullable](../../../sql-reference/data-types/nullable.md): + +Входная таблица: + +``` text +┌─x─┬─s────┐ +│ 1 │ bca │ +│ 2 │ ᴺᵁᴸᴸ │ +│ 3 │ ABC │ +│ 4 │ 123a │ +│ 5 │ abc │ +│ 6 │ ᴺᵁᴸᴸ │ +│ 7 │ BCA │ +└───┴──────┘ +``` + +Запрос: + +```sql +SELECT * FROM collate_test ORDER BY s ASC COLLATE 'en'; +``` + +Результат: + +``` text +┌─x─┬─s────┐ +│ 4 │ 123a │ +│ 5 │ abc │ +│ 3 │ ABC │ +│ 1 │ bca │ +│ 7 │ BCA │ +│ 6 │ ᴺᵁᴸᴸ │ +│ 2 │ ᴺᵁᴸᴸ │ +└───┴──────┘ +``` + +Пример со строками в [Array](../../../sql-reference/data-types/array.md): + +Входная таблица: + +``` text +┌─x─┬─s─────────────┐ +│ 1 │ ['Z'] │ +│ 2 │ ['z'] │ +│ 3 │ ['a'] │ +│ 4 │ ['A'] │ +│ 5 │ ['z','a'] │ +│ 6 │ ['z','a','a'] │ +│ 7 │ [''] │ +└───┴───────────────┘ +``` + +Запрос: + +```sql +SELECT * FROM collate_test ORDER BY s ASC COLLATE 'en'; +``` + +Результат: + +``` text +┌─x─┬─s─────────────┐ +│ 7 │ [''] │ +│ 3 │ ['a'] │ +│ 4 │ ['A'] │ +│ 2 │ ['z'] │ +│ 5 │ ['z','a'] │ +│ 6 │ ['z','a','a'] │ +│ 1 │ ['Z'] │ +└───┴───────────────┘ +``` + +Пример со строками типа [LowCardinality](../../../sql-reference/data-types/lowcardinality.md): + +Входная таблица: + +```text +┌─x─┬─s───┐ +│ 1 │ Z │ +│ 2 │ z │ +│ 3 │ a │ +│ 4 │ A │ +│ 5 │ za │ +│ 6 │ zaa │ +│ 7 │ │ +└───┴─────┘ +``` + +Запрос: + +```sql +SELECT * FROM collate_test ORDER BY s ASC COLLATE 'en'; +``` + +Результат: + +```text +┌─x─┬─s───┐ +│ 7 │ │ +│ 3 │ a │ +│ 4 │ A │ +│ 2 │ z │ +│ 1 │ Z │ +│ 5 │ za │ +│ 6 │ zaa │ +└───┴─────┘ +``` + +Пример со строками в [Tuple](../../../sql-reference/data-types/tuple.md): + +```text +┌─x─┬─s───────┐ +│ 1 │ (1,'Z') │ +│ 2 │ (1,'z') │ +│ 3 │ (1,'a') │ +│ 4 │ (2,'z') │ +│ 5 │ (1,'A') │ +│ 6 │ (2,'Z') │ +│ 7 │ (2,'A') │ +└───┴─────────┘ +``` + +Запрос: + +```sql +SELECT * FROM collate_test ORDER BY s ASC COLLATE 'en'; +``` + +Результат: + +```text +┌─x─┬─s───────┐ +│ 3 │ (1,'a') │ +│ 5 │ (1,'A') │ +│ 2 │ (1,'z') │ +│ 1 │ (1,'Z') │ +│ 7 │ (2,'A') │ +│ 4 │ (2,'z') │ +│ 6 │ (2,'Z') │ +└───┴─────────┘ +``` + ## Деталь реализации {#implementation-details} Если кроме `ORDER BY` указан также не слишком большой [LIMIT](limit.md), то расходуется меньше оперативки. Иначе расходуется количество памяти, пропорциональное количеству данных для сортировки. При распределённой обработке запроса, если отсутствует [GROUP BY](group-by.md), сортировка частично делается на удалённых серверах, а на сервере-инициаторе запроса производится слияние результатов. Таким образом, при распределённой сортировке может сортироваться объём данных, превышающий размер памяти на одном сервере.
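As a hedged illustration of the `ORDER BY` plus `LIMIT` note above (the table is the `datasets.hits_v1` sample loaded earlier in this changelog; any wide table behaves the same way):

```sql
-- With a small LIMIT, ClickHouse keeps only the current top-N rows
-- in memory instead of materializing the full sort.
SELECT UserID, EventTime
FROM datasets.hits_v1
ORDER BY EventTime DESC
LIMIT 10;
```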
diff --git a/docs/ru/sql-reference/statements/system.md b/docs/ru/sql-reference/statements/system.md index 4f7ac98807d..a6a6c5047af 100644 --- a/docs/ru/sql-reference/statements/system.md +++ b/docs/ru/sql-reference/statements/system.md @@ -12,6 +12,7 @@ toc_title: SYSTEM - [DROP MARK CACHE](#query_language-system-drop-mark-cache) - [DROP UNCOMPRESSED CACHE](#query_language-system-drop-uncompressed-cache) - [DROP COMPILED EXPRESSION CACHE](#query_language-system-drop-compiled-expression-cache) +- [DROP REPLICA](#query_language-system-drop-replica) - [FLUSH LOGS](#query_language-system-flush_logs) - [RELOAD CONFIG](#query_language-system-reload-config) - [SHUTDOWN](#query_language-system-shutdown) @@ -66,6 +67,24 @@ SELECT name, status FROM system.dictionaries; Сбрасывает кеш «засечек» (`mark cache`). Используется при разработке ClickHouse и тестах производительности. +## DROP REPLICA {#query_language-system-drop-replica} + +Мертвые реплики можно удалить, используя следующий синтаксис: + +``` sql +SYSTEM DROP REPLICA 'replica_name' FROM TABLE database.table; +SYSTEM DROP REPLICA 'replica_name' FROM DATABASE database; +SYSTEM DROP REPLICA 'replica_name'; +SYSTEM DROP REPLICA 'replica_name' FROM ZKPATH '/path/to/table/in/zk'; +``` + +Удаляет путь реплики из ZooKeeper-а. Это полезно, когда реплика мертва и ее метаданные не могут быть удалены из ZooKeeper с помощью `DROP TABLE`, потому что такой таблицы больше нет. `DROP REPLICA` может удалить только неактивную / устаревшую реплику и не может удалить локальную реплику — используйте для этого `DROP TABLE`. `DROP REPLICA` не удаляет таблицы и не удаляет данные или метаданные с диска. + +Первая команда удаляет метаданные реплики `'replica_name'` для таблицы `database.table`. +Вторая команда удаляет метаданные реплики `'replica_name'` для всех таблиц базы данных `database`. +Третья команда удаляет метаданные реплики `'replica_name'` для всех таблиц, существующих на локальном сервере (список таблиц генерируется из локальной реплики). +Четвертая команда полезна для удаления метаданных мертвой реплики, когда все другие реплики таблицы уже были удалены ранее, поэтому необходимо явно указать ZooKeeper путь таблицы. ZooKeeper путь — это первый аргумент движка `ReplicatedMergeTree` при создании таблицы. + ## DROP UNCOMPRESSED CACHE {#query_language-system-drop-uncompressed-cache} Сбрасывает кеш несжатых данных. Используется при разработке ClickHouse и тестах производительности. diff --git a/docs/tr/development/developer-instruction.md b/docs/tr/development/developer-instruction.md index 51a6c4345c6..79c93104551 100644 --- a/docs/tr/development/developer-instruction.md +++ b/docs/tr/development/developer-instruction.md @@ -257,8 +257,8 @@ Clickhouse'un geliştirilmesi genellikle gerçekçi veri kümelerinin yüklenmes sudo apt install wget xz-utils - wget https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz - wget https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz + wget https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz + wget https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz xz -v -d hits_v1.tsv.xz xz -v -d visits_v1.tsv.xz diff --git a/docs/tr/development/style.md b/docs/tr/development/style.md index 7c8d7f3d569..d4b13371df0 100644 --- a/docs/tr/development/style.md +++ b/docs/tr/development/style.md @@ -579,7 +579,7 @@ Bir işlev öbekte oluşturulan bir nesnenin sahipliğini yakalarsa, bağımsız **14.** Değerleri döndürür. -Çoğu durumda, sadece kullanın `return`. Yaz domayın `[return std::move(res)]{.strike}`.
+Çoğu durumda, sadece kullanın `return`. Yaz domayın `return std::move(res)`. İşlev öbek üzerinde bir nesne ayırır ve döndürürse, şunları kullanın `shared_ptr` veya `unique_ptr`. @@ -673,7 +673,7 @@ Her zaman kullanın `#pragma once` korumaları dahil etmek yerine. **24.** Kullanmayın `trailing return type` gerekli olmadıkça fonksiyonlar için. ``` cpp -[auto f() -> void;]{.strike} +auto f() -> void ``` **25.** Değişkenlerin bildirimi ve başlatılması. diff --git a/docs/tr/getting-started/example-datasets/metrica.md b/docs/tr/getting-started/example-datasets/metrica.md index 6a727d1ab55..0ad7debf54d 100644 --- a/docs/tr/getting-started/example-datasets/metrica.md +++ b/docs/tr/getting-started/example-datasets/metrica.md @@ -9,14 +9,14 @@ toc_title: "\xDCye.Metrica Verileri" Veri kümesi, isabetlerle ilgili anonimleştirilmiş verileri içeren iki tablodan oluşur (`hits_v1`) ve ziyaret visitsler (`visits_v1`(kayıt olmak için).Metrica. Yandex hakkında daha fazla bilgi edinebilirsiniz.Metrica içinde [ClickHouse geçmişi](../../introduction/history.md) bölme. -Veri kümesi iki tablodan oluşur, bunlardan biri sıkıştırılmış olarak indirilebilir `tsv.xz` dosya veya hazırlanmış bölümler olarak. Buna ek olarak, genişletilmiş bir sürümü `hits` 100 milyon satır içeren tablo TSV olarak mevcuttur https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_100m_obfuscated_v1.tsv.xz ve hazırlanan bölümler olarak https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_100m_obfuscated_v1.tar.xz. +Veri kümesi iki tablodan oluşur, bunlardan biri sıkıştırılmış olarak indirilebilir `tsv.xz` dosya veya hazırlanmış bölümler olarak. Buna ek olarak, genişletilmiş bir sürümü `hits` 100 milyon satır içeren tablo TSV olarak mevcuttur https://datasets.clickhouse.tech/hits/tsv/hits_100m_obfuscated_v1.tsv.xz ve hazırlanan bölümler olarak https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz. 
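Whichever import path is used below, a quick post-import sanity check can look like this (an illustrative sketch; `datasets.hits_v1` and `datasets.visits_v1` are the tables created in the steps that follow, and `EventDate` is part of the `hits_v1` schema shown in this changelog):

```sql
-- Illustrative post-import checks for the Yandex.Metrica sample tables.
SELECT count() FROM datasets.hits_v1;
SELECT count() FROM datasets.visits_v1;

-- A cheap way to confirm the expected date range of the hits data.
SELECT min(EventDate), max(EventDate) FROM datasets.hits_v1;
```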
## Hazırlanan bölümlerden tablolar elde etme {#obtaining-tables-from-prepared-partitions} İndirme ve ithalat tablo hits: ``` bash -curl -O https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_v1.tar +curl -O https://datasets.clickhouse.tech/hits/partitions/hits_v1.tar tar xvf hits_v1.tar -C /var/lib/clickhouse # path to ClickHouse data directory # check permissions on unpacked data, fix if required sudo service clickhouse-server restart @@ -26,7 +26,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1" İndirme ve ithalat ziyaretleri: ``` bash -curl -O https://clickhouse-datasets.s3.yandex.net/visits/partitions/visits_v1.tar +curl -O https://datasets.clickhouse.tech/visits/partitions/visits_v1.tar tar xvf visits_v1.tar -C /var/lib/clickhouse # path to ClickHouse data directory # check permissions on unpacked data, fix if required sudo service clickhouse-server restart @@ -38,7 +38,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1" Sıkıştırılmış TSV dosyasından indir ve İçe Aktar: ``` bash -curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv +curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv # now create table clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets" clickhouse-client --query "CREATE TABLE datasets.hits_v1 ( WatchID UInt64, JavaEnable UInt8, Title String, GoodEvent Int16, EventTime DateTime, EventDate Date, CounterID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RegionID UInt32, UserID UInt64, CounterClass Int8, OS UInt8, UserAgent UInt8, URL String, Referer String, URLDomain String, RefererDomain String, Refresh UInt8, IsRobot UInt8, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), ResolutionWidth UInt16, ResolutionHeight UInt16, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, FlashMinor2 String, NetMajor UInt8, NetMinor UInt8, UserAgentMajor UInt16, UserAgentMinor FixedString(2), CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, MobilePhone UInt8, MobilePhoneModel String, Params String, IPNetworkID UInt32, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, IsArtifical UInt8, WindowClientWidth UInt16, WindowClientHeight UInt16, ClientTimeZone Int16, ClientEventTime DateTime, SilverlightVersion1 UInt8, SilverlightVersion2 UInt8, SilverlightVersion3 UInt32, SilverlightVersion4 UInt16, PageCharset String, CodeVersion UInt32, IsLink UInt8, IsDownload UInt8, IsNotBounce UInt8, FUniqID UInt64, HID UInt32, IsOldCounter UInt8, IsEvent UInt8, IsParameter UInt8, DontCountHits UInt8, WithHash UInt8, HitColor FixedString(1), UTCEventTime DateTime, Age UInt8, Sex UInt8, Income UInt8, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), RemoteIP UInt32, RemoteIP6 FixedString(16), WindowName Int32, OpenerName Int32, HistoryLength Int16, BrowserLanguage FixedString(2), BrowserCountry FixedString(2), SocialNetwork String, SocialAction String, HTTPError UInt16, SendTiming Int32, DNSTiming Int32, ConnectTiming Int32, ResponseStartTiming Int32, ResponseEndTiming Int32, FetchTiming Int32, RedirectTiming Int32, DOMInteractiveTiming Int32, DOMContentLoadedTiming Int32, DOMCompleteTiming Int32, LoadEventStartTiming Int32, LoadEventEndTiming Int32, NSToDOMContentLoadedTiming Int32, FirstPaintTiming Int32, RedirectCount Int8, SocialSourceNetworkID UInt8, SocialSourcePage String, ParamPrice 
Int64, ParamOrderID String, ParamCurrency FixedString(3), ParamCurrencyID UInt16, GoalsReached Array(UInt32), OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, RefererHash UInt64, URLHash UInt64, CLID UInt32, YCLID UInt64, ShareService String, ShareURL String, ShareTitle String, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), IslandID FixedString(16), RequestNum UInt32, RequestTry UInt8) ENGINE = MergeTree() PARTITION BY toYYYYMM(EventDate) ORDER BY (CounterID, EventDate, intHash32(UserID)) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192" @@ -52,7 +52,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1" Sıkıştırılmış tsv dosyasından ziyaretleri indirin ve içe aktarın: ``` bash -curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv +curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv # now create table clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets" clickhouse-client --query "CREATE TABLE datasets.visits_v1 ( CounterID UInt32, StartDate Date, Sign Int8, IsNew UInt8, VisitID UInt64, UserID UInt64, StartTime DateTime, Duration UInt32, UTCStartTime DateTime, PageViews Int32, Hits Int32, IsBounce UInt8, Referer String, StartURL String, RefererDomain String, StartURLDomain String, EndURL String, LinkURL String, IsDownload UInt8, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, PlaceID Int32, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), IsYandex UInt8, GoalReachesDepth Int32, GoalReachesURL Int32, GoalReachesAny Int32, SocialSourceNetworkID UInt8, SocialSourcePage String, MobilePhoneModel String, ClientEventTime DateTime, RegionID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RemoteIP UInt32, RemoteIP6 FixedString(16), IPNetworkID UInt32, SilverlightVersion3 UInt32, CodeVersion UInt32, ResolutionWidth UInt16, ResolutionHeight UInt16, UserAgentMajor UInt16, UserAgentMinor UInt16, WindowClientWidth UInt16, WindowClientHeight UInt16, SilverlightVersion2 UInt8, SilverlightVersion4 UInt16, FlashVersion3 UInt16, FlashVersion4 UInt16, ClientTimeZone Int16, OS UInt8, UserAgent UInt8, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, NetMajor UInt8, NetMinor UInt8, MobilePhone UInt8, SilverlightVersion1 UInt8, Age UInt8, Sex UInt8, Income UInt8, JavaEnable UInt8, CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, BrowserLanguage UInt16, BrowserCountry UInt16, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), Params Array(String), Goals Nested(ID UInt32, Serial UInt32, EventTime DateTime, Price Int64, OrderID String, CurrencyID UInt32), WatchIDs Array(UInt64), ParamSumPrice Int64, ParamCurrency FixedString(3), ParamCurrencyID UInt16, ClickLogID UInt64, ClickEventID Int32, ClickGoodEvent Int32, ClickEventTime DateTime, ClickPriorityID Int32, ClickPhraseID Int32, ClickPageID Int32, ClickPlaceID Int32, ClickTypeID Int32, ClickResourceID Int32, ClickCost UInt32, ClickClientIP UInt32, ClickDomainID UInt32, ClickURL String, ClickAttempt UInt8, ClickOrderID UInt32, ClickBannerID UInt32, ClickMarketCategoryID UInt32, ClickMarketPP UInt32, ClickMarketCategoryName String, 
ClickMarketPPName String, ClickAWAPSCampaignName String, ClickPageName String, ClickTargetType UInt16, ClickTargetPhraseID UInt64, ClickContextType UInt8, ClickSelectType Int8, ClickOptions String, ClickGroupBannerID Int32, OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, FirstVisit DateTime, PredLastVisit Date, LastVisit Date, TotalVisits UInt32, TraficSource Nested(ID Int8, SearchEngineID UInt16, AdvEngineID UInt8, PlaceID UInt16, SocialSourceNetworkID UInt8, Domain String, SearchPhrase String, SocialSourcePage String), Attendance FixedString(16), CLID UInt32, YCLID UInt64, NormalizedRefererHash UInt64, SearchPhraseHash UInt64, RefererDomainHash UInt64, NormalizedStartURLHash UInt64, StartURLDomainHash UInt64, NormalizedEndURLHash UInt64, TopLevelDomain UInt64, URLScheme UInt64, OpenstatServiceNameHash UInt64, OpenstatCampaignIDHash UInt64, OpenstatAdIDHash UInt64, OpenstatSourceIDHash UInt64, UTMSourceHash UInt64, UTMMediumHash UInt64, UTMCampaignHash UInt64, UTMContentHash UInt64, UTMTermHash UInt64, FromHash UInt64, WebVisorEnabled UInt8, WebVisorActivity UInt32, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), Market Nested(Type UInt8, GoalID UInt32, OrderID String, OrderPrice Int64, PP UInt32, DirectPlaceID UInt32, DirectOrderID UInt32, DirectBannerID UInt32, GoodID String, GoodName String, GoodQuantity Int32, GoodPrice Int64), IslandID FixedString(16)) ENGINE = CollapsingMergeTree(Sign) PARTITION BY toYYYYMM(StartDate) ORDER BY (CounterID, StartDate, intHash32(UserID), VisitID) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192" diff --git a/docs/tr/getting-started/example-datasets/nyc-taxi.md b/docs/tr/getting-started/example-datasets/nyc-taxi.md index 7c2fa26eb05..ebbb595aa5e 100644 --- a/docs/tr/getting-started/example-datasets/nyc-taxi.md +++ b/docs/tr/getting-started/example-datasets/nyc-taxi.md @@ -285,7 +285,7 @@ Diğer şeylerin yanı sıra, MERGETREE üzerinde en iyi duruma getirme sorgusun ## Hazırlanan Bölüm downloadlerin indir downloadilmesi {#download-of-prepared-partitions} ``` bash -$ curl -O https://clickhouse-datasets.s3.yandex.net/trips_mergetree/partitions/trips_mergetree.tar +$ curl -O https://datasets.clickhouse.tech/trips_mergetree/partitions/trips_mergetree.tar $ tar xvf trips_mergetree.tar -C /var/lib/clickhouse # path to ClickHouse data directory $ # check permissions of unpacked data, fix if required $ sudo service clickhouse-server restart diff --git a/docs/tr/getting-started/example-datasets/ontime.md b/docs/tr/getting-started/example-datasets/ontime.md index f1d477dbc6e..5754263fc00 100644 --- a/docs/tr/getting-started/example-datasets/ontime.md +++ b/docs/tr/getting-started/example-datasets/ontime.md @@ -156,7 +156,7 @@ $ for i in *.zip; do echo $i; unzip -cq $i '*.csv' | sed 's/\.00//g' | clickhous ## Hazırlanan Bölüm downloadlerin indir downloadilmesi {#download-of-prepared-partitions} ``` bash -$ curl -O https://clickhouse-datasets.s3.yandex.net/ontime/partitions/ontime.tar +$ curl -O https://datasets.clickhouse.tech/ontime/partitions/ontime.tar $ tar xvf ontime.tar -C /var/lib/clickhouse # path to ClickHouse data directory $ # check permissions of unpacked data, fix if required $ sudo service clickhouse-server restart diff --git a/docs/tr/getting-started/tutorial.md b/docs/tr/getting-started/tutorial.md index 
5cca914fb35..449710ba33b 100644 --- a/docs/tr/getting-started/tutorial.md +++ b/docs/tr/getting-started/tutorial.md @@ -87,8 +87,8 @@ clickhouse-client --query='INSERT INTO table FORMAT TabSeparated' < data.tsv ### Tablo verilerini indirin ve ayıklayın {#download-and-extract-table-data} ``` bash -curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv -curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv +curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv +curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv ``` Çıkarılan dosyalar yaklaşık 10GB boyutundadır. diff --git a/docs/tr/operations/performance-test.md b/docs/tr/operations/performance-test.md index 498f3851861..8f12d63490d 100644 --- a/docs/tr/operations/performance-test.md +++ b/docs/tr/operations/performance-test.md @@ -48,7 +48,7 @@ Bu talimat ile ClickHouse paketlerinin kurulumu olmadan herhangi bir sunucuda te - wget https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_100m_obfuscated_v1.tar.xz + wget https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz tar xvf hits_100m_obfuscated_v1.tar.xz -C . mv hits_100m_obfuscated_v1/* . diff --git a/docs/tr/sql-reference/ansi.md b/docs/tr/sql-reference/ansi.md index 650f32d257d..c7b6dac248f 100644 --- a/docs/tr/sql-reference/ansi.md +++ b/docs/tr/sql-reference/ansi.md @@ -26,155 +26,155 @@ Aşağıdaki tabloda, sorgu özelliği ClickHouse çalışır, ancak ANSI SQL'DE | Feature ID | Özellik Adı | Durum | Yorum | |------------|-----------------------------------------------------------------------------------------------------------------------------|--------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | **E011** | **Sayısal veri türleri** | **Kısmi**{.text-warning} | | -| E011-01 | Tamsayı ve SMALLİNT veri türleri | Evet{.text-success} | | -| E011-02 | Gerçek, çift hassas ve FLOAT veri türleri veri türleri | Kısmi{.text-warning} | `FLOAT()`, `REAL` ve `DOUBLE PRECISION` desteklenmiyor | -| E011-03 | Ondalık ve sayısal veri türleri | Kısmi{.text-warning} | Sadece `DECIMAL(p,s)` desteklenir, değil `NUMERIC` | -| E011-04 | Aritmetik operat operatorsörler | Evet{.text-success} | | -| E011-05 | Sayısal karşılaştırma | Evet{.text-success} | | -| E011-06 | Sayısal veri türleri arasında örtülü döküm | Hayır{.text-danger} | ANSI SQL, sayısal türler arasında rasgele örtülü döküm yapılmasına izin verirken, ClickHouse, örtülü döküm yerine birden fazla aşırı yüke sahip işlevlere dayanır | +| E011-01 | Tamsayı ve SMALLİNT veri türleri | Evet {.text-success} | | +| E011-02 | Gerçek, çift hassas ve FLOAT veri türleri veri türleri | Kısmi {.text-warning} | `FLOAT()`, `REAL` ve `DOUBLE PRECISION` desteklenmiyor | +| E011-03 | Ondalık ve sayısal veri türleri | Kısmi {.text-warning} | Sadece `DECIMAL(p,s)` desteklenir, değil `NUMERIC` | +| E011-04 | Aritmetik operat operatorsörler | Evet {.text-success} | | +| E011-05 | Sayısal karşılaştırma | Evet {.text-success} | | +| E011-06 | Sayısal veri türleri arasında örtülü döküm | Hayır {.text-danger} | ANSI SQL, sayısal türler arasında rasgele örtülü döküm yapılmasına izin verirken, ClickHouse, örtülü döküm yerine birden fazla aşırı yüke 
sahip işlevlere dayanır | | **E021** | **Karakter dizesi türleri** | **Kısmi**{.text-warning} | | -| E021-01 | Karakter veri türü | Hayır{.text-danger} | | -| E021-02 | Karakter değişken veri türü | Hayır{.text-danger} | `String` benzer şekilde davranır, ancak parantez içinde uzunluk sınırı olmadan | -| E021-03 | Karakter değişmezleri | Kısmi{.text-warning} | Ardışık değişmezlerin ve karakter seti desteğinin otomatik olarak birleştirilmesi yok | -| E021-04 | CHARACTER_LENGTH işlevi | Kısmi{.text-warning} | Hayır `USING` yan | -| E021-05 | OCTET_LENGTH işlevi | Hayır{.text-danger} | `LENGTH` benzer şekilde davranır | -| E021-06 | SUBSTRING | Kısmi{.text-warning} | İçin destek yok `SIMILAR` ve `ESCAPE` CLA ,us ,es, no `SUBSTRING_REGEX` varyant | -| E021-07 | Karakter birleştirme | Kısmi{.text-warning} | Hayır `COLLATE` yan | -| E021-08 | Üst ve alt fonksiyonlar | Evet{.text-success} | | -| E021-09 | TRİM fonksiyonu | Evet{.text-success} | | -| E021-10 | Sabit uzunlukta ve değişken uzunlukta karakter dizesi türleri arasında örtülü döküm | Hayır{.text-danger} | ANSI SQL, dize türleri arasında rasgele örtük döküm yapılmasına izin verirken, ClickHouse, örtük döküm yerine birden fazla aşırı yüke sahip işlevlere dayanır | -| E021-11 | Pozisyon fonksiyonu | Kısmi{.text-warning} | İçin destek yok `IN` ve `USING` CLA ,us ,es, no `POSITION_REGEX` varyant | -| E021-12 | Karakter karşılaştırma | Evet{.text-success} | | +| E021-01 | Karakter veri türü | Hayır {.text-danger} | | +| E021-02 | Karakter değişken veri türü | Hayır {.text-danger} | `String` benzer şekilde davranır, ancak parantez içinde uzunluk sınırı olmadan | +| E021-03 | Karakter değişmezleri | Kısmi {.text-warning} | Ardışık değişmezlerin ve karakter seti desteğinin otomatik olarak birleştirilmesi yok | +| E021-04 | CHARACTER_LENGTH işlevi | Kısmi {.text-warning} | Hayır `USING` yan | +| E021-05 | OCTET_LENGTH işlevi | Hayır {.text-danger} | `LENGTH` benzer şekilde davranır | +| E021-06 | SUBSTRING | Kısmi {.text-warning} | İçin destek yok `SIMILAR` ve `ESCAPE` CLA ,us ,es, no `SUBSTRING_REGEX` varyant | +| E021-07 | Karakter birleştirme | Kısmi {.text-warning} | Hayır `COLLATE` yan | +| E021-08 | Üst ve alt fonksiyonlar | Evet {.text-success} | | +| E021-09 | TRİM fonksiyonu | Evet {.text-success} | | +| E021-10 | Sabit uzunlukta ve değişken uzunlukta karakter dizesi türleri arasında örtülü döküm | Hayır {.text-danger} | ANSI SQL, dize türleri arasında rasgele örtük döküm yapılmasına izin verirken, ClickHouse, örtük döküm yerine birden fazla aşırı yüke sahip işlevlere dayanır | +| E021-11 | Pozisyon fonksiyonu | Kısmi {.text-warning} | İçin destek yok `IN` ve `USING` CLA ,us ,es, no `POSITION_REGEX` varyant | +| E021-12 | Karakter karşılaştırma | Evet {.text-success} | | | **E031** | **Tanıtıcılar** | **Kısmi**{.text-warning} | | -| E031-01 | Ayrılmış tanımlayıcılar | Kısmi{.text-warning} | Unicode literal desteği sınırlıdır | -| E031-02 | Küçük harf tanımlayıcıları | Evet{.text-success} | | -| E031-03 | Sondaki alt çizgi | Evet{.text-success} | | +| E031-01 | Ayrılmış tanımlayıcılar | Kısmi {.text-warning} | Unicode literal desteği sınırlıdır | +| E031-02 | Küçük harf tanımlayıcıları | Evet {.text-success} | | +| E031-03 | Sondaki alt çizgi | Evet {.text-success} | | | **E051** | **Temel sorgu belirtimi** | **Kısmi**{.text-warning} | | -| E051-01 | SELECT DISTINCT | Evet{.text-success} | | -| E051-02 | GROUP BY fık clausera | Evet{.text-success} | | -| E051-04 | GROUP BY içinde olmayan sütunlar içerebilir `` | Evet {.text-success} | | +| 
E051-05 | Seçme öğeler yeniden adlandırılabilir | Evet {.text-success} | |
+| E051-06 | HAVING yan tümcesi | Evet {.text-success} | |
+| E051-07 | Seçme listesinde nitelikli \* | Evet {.text-success} | |
+| E051-08 | FROM maddesindeki korelasyon adı | Evet {.text-success} | |
+| E051-09 | FROM yan tümcesinde sütunları yeniden adlandırma | Hayır {.text-danger} | |
| **E061** | **Temel yüklemler ve arama koşulları** | **Kısmi**{.text-warning} | |
-| E061-01 | Karşılaştırma yüklemi | Evet{.text-success} | |
-| E061-02 | Yüklem arasında | Kısmi{.text-warning} | Hayır `SYMMETRIC` ve `ASYMMETRIC` yan |
-| E061-03 | Değerler listesi ile yüklemde | Evet{.text-success} | |
-| E061-04 | Yüklem gibi | Evet{.text-success} | |
-| E061-05 | Yüklem gibi: kaçış maddesi | Hayır{.text-danger} | |
-| E061-06 | Boş yüklem | Evet{.text-success} | |
-| E061-07 | Sayısal karşılaştırma yüklemi | Hayır{.text-danger} | |
-| E061-08 | Var yüklemi | Hayır{.text-danger} | |
-| E061-09 | Karşılaştırma yükleminde alt sorgular | Evet{.text-success} | |
-| E061-11 | Yüklemde alt sorgular | Evet{.text-success} | |
-| E061-12 | Sayısal karşılaştırma yükleminde alt sorgular | Hayır{.text-danger} | |
-| E061-13 | İlişkili alt sorgular | Hayır{.text-danger} | |
-| E061-14 | Arama koşulu | Evet{.text-success} | |
+| E061-01 | Karşılaştırma yüklemi | Evet {.text-success} | |
+| E061-02 | BETWEEN yüklemi | Kısmi {.text-warning} | `SYMMETRIC` ve `ASYMMETRIC` yan tümceleri yok |
+| E061-03 | Değer listesi içeren IN yüklemi | Evet {.text-success} | |
+| E061-04 | LIKE yüklemi | Evet {.text-success} | |
+| E061-05 | LIKE yüklemi: ESCAPE yan tümcesi | Hayır {.text-danger} | |
+| E061-06 | NULL yüklemi | Evet {.text-success} | |
+| E061-07 | Nicelikli karşılaştırma yüklemi | Hayır {.text-danger} | |
+| E061-08 | EXISTS yüklemi | Hayır {.text-danger} | |
+| E061-09 | Karşılaştırma yükleminde alt sorgular | Evet {.text-success} | |
+| E061-11 | IN yükleminde alt sorgular | Evet {.text-success} | |
+| E061-12 | Nicelikli karşılaştırma yükleminde alt sorgular | Hayır {.text-danger} | |
+| E061-13 | İlişkili alt sorgular | Hayır {.text-danger} | |
+| E061-14 | Arama koşulu | Evet {.text-success} | |
| **E071** | **Temel sorgu ifadeleri** | **Kısmi**{.text-warning} | |
-| E071-01 | Sendika farklı tablo operatörü | Hayır{.text-danger} | |
-| E071-02 | UNİON ALL table operat operatoror | Evet{.text-success} | |
-| E071-03 | Dist DİSTİNCTİNC tablet table operatörü hariç | Hayır{.text-danger} | |
-| E071-05 | Tablo operatörleri ile birleştirilen sütunların tam olarak aynı veri türüne sahip olması gerekmez | Evet{.text-success} | |
-| E071-06 | Alt sorgularda tablo işleçleri | Evet{.text-success} | |
+| E071-01 | UNION DISTINCT tablo operatörü | Hayır {.text-danger} | |
+| E071-02 | UNION ALL tablo operatörü | Evet {.text-success} | |
+| E071-03 | EXCEPT DISTINCT tablo operatörü | Hayır {.text-danger} | |
+| E071-05 | Tablo operatörleri ile birleştirilen sütunların tam olarak aynı veri türüne sahip olması gerekmez | Evet {.text-success} | |
+| E071-06 | Alt sorgularda tablo işleçleri | Evet {.text-success} | |
| **E081** | **Temel ayrıcalıklar** | **Kısmi**{.text-warning} | Çalışmalar sürüyor |
| **E091** | **Set fonksiyonları** | **Evet**{.text-success} | |
-| E091-01 | AVG | Evet{.text-success} | |
-| E091-02 | COUNT | Evet{.text-success} | |
-| E091-03 | MAX | Evet{.text-success} | |
-| E091-04 | MIN | Evet{.text-success} | |
-| E091-05 | SUM | Evet{.text-success} | |
-| E091-06 | Tüm niceleyici | Hayır{.text-danger} | |
-| E091-07 | Farklı niceleyici | Kısmi{.text-warning} | Tüm toplama işlevleri desteklenmiyor |
+| E091-01 | AVG | Evet {.text-success} | |
+| E091-02 | COUNT | Evet {.text-success} | |
+| E091-03 | MAX | Evet {.text-success} | |
+| E091-04 | MIN | Evet {.text-success} | |
+| E091-05 | SUM | Evet {.text-success} | |
+| E091-06 | ALL niceleyicisi | Hayır {.text-danger} | |
+| E091-07 | DISTINCT niceleyicisi | Kısmi {.text-warning} | Tüm toplama işlevleri desteklenmiyor |
| **E101** | **Temel veri manipülasyonu** | **Kısmi**{.text-warning} | |
-| E101-01 | INSERT deyimi | Evet{.text-success} | Not: Clickhouse'daki birincil anahtar, `UNIQUE` kısıtlama |
-| E101-03 | Güncelleme deyimi Aran UPDATEDI | Hayır{.text-danger} | Bir `ALTER UPDATE` toplu veri değiştirme bildirimi |
-| E101-04 | Aranan DELETE deyimi | Hayır{.text-danger} | Bir `ALTER DELETE` toplu veri kaldırma bildirimi |
+| E101-01 | INSERT deyimi | Evet {.text-success} | Not: ClickHouse'daki birincil anahtar `UNIQUE` kısıtlaması anlamına gelmez |
+| E101-03 | Aranan UPDATE deyimi | Hayır {.text-danger} | Toplu veri değiştirme için `ALTER UPDATE` deyimi vardır |
+| E101-04 | Aranan DELETE deyimi | Hayır {.text-danger} | Toplu veri kaldırma için `ALTER DELETE` deyimi vardır |
| **E111** | **Tek sıra SELECT deyimi** | **Hayır**{.text-danger} | |
| **E121** | **Temel imleç desteği** | **Hayır**{.text-danger} | |
-| E121-01 | DECLARE CURSOR | Hayır{.text-danger} | |
-| E121-02 | Sütunlara göre siparişin seçim listesinde olması gerekmez | Hayır{.text-danger} | |
-| E121-03 | CLA clauseuse by ORDER in Value ifadeleri | Hayır{.text-danger} | |
-| E121-04 | Açık ifade | Hayır{.text-danger} | |
-| E121-06 | Konumlandırılmış güncelleme bildirimi | Hayır{.text-danger} | |
-| E121-07 | Konumlandırılmış silme deyimi | Hayır{.text-danger} | |
-| E121-08 | Kapat deyimi | Hayır{.text-danger} | |
-| E121-10 | FETCH deyimi: örtük sonraki | Hayır{.text-danger} | |
-| E121-17 | Tut imleçler ile | Hayır{.text-danger} | |
+| E121-01 | DECLARE CURSOR | Hayır {.text-danger} | |
+| E121-02 | ORDER BY sütunlarının seçme listesinde olması gerekmez | Hayır {.text-danger} | |
+| E121-03 | ORDER BY yan tümcesinde değer ifadeleri | Hayır {.text-danger} | |
+| E121-04 | OPEN deyimi | Hayır {.text-danger} | |
+| E121-06 | Konumlandırılmış UPDATE deyimi | Hayır {.text-danger} | |
+| E121-07 | Konumlandırılmış DELETE deyimi | Hayır {.text-danger} | |
+| E121-08 | CLOSE deyimi | Hayır {.text-danger} | |
+| E121-10 | FETCH deyimi: örtük NEXT | Hayır {.text-danger} | |
+| E121-17 | WITH HOLD imleçleri | Hayır {.text-danger} | |
| **E131** | **Boş değer desteği (değerler yerine boş değerler)** | **Kısmi**{.text-warning} | Bazı kısıtlamalar geçerlidir |
| **E141** | **Temel bütünlük kısıtlamaları** | **Kısmi**{.text-warning} | |
-| E141-01 | NOT NULL kısıtlamaları | Evet{.text-success} | Not: `NOT NULL` tablo sütunları için varsayılan olarak ima edilir |
-| E141-02 | NULL olmayan sütunların benzersiz kısıtlaması | Hayır{.text-danger} | |
-| E141-03 | Birincil anahtar kısıtlamaları | Hayır{.text-danger} | |
-| E141-04 | Hem referans silme eylemi hem de referans güncelleme eylemi için eylem yok varsayılanıyla temel yabancı anahtar kısıtlaması | Hayır{.text-danger} | |
-| E141-06 | Kontrol kısıt CHECKLAMASI | Evet{.text-success} | |
-| E141-07 | Sütun varsayılanları | Evet{.text-success} | |
-| E141-08 | NOT NULL birincil anahtar üzerinde çıkarıldı | Evet{.text-success} | |
-| E141-10 | Yabancı bir anahtardaki isimler herhangi bir sırada belirtilebilir | Hayır{.text-danger} | |
+| E141-01 | NOT NULL kısıtlamaları | Evet {.text-success} | Not: `NOT NULL` tablo sütunları için varsayılan olarak ima edilir |
+| E141-02 | NULL olmayan sütunlar için UNIQUE kısıtlaması | Hayır {.text-danger} | |
+| E141-03 | Birincil anahtar kısıtlamaları | Hayır {.text-danger} | |
+| E141-04 | Hem referans silme eylemi hem de referans güncelleme eylemi için eylem yok varsayılanıyla temel yabancı anahtar kısıtlaması | Hayır {.text-danger} | |
+| E141-06 | CHECK kısıtlaması | Evet {.text-success} | |
+| E141-07 | Sütun varsayılanları | Evet {.text-success} | |
+| E141-08 | Birincil anahtarda NOT NULL çıkarımı yapılır | Evet {.text-success} | |
+| E141-10 | Yabancı bir anahtardaki isimler herhangi bir sırada belirtilebilir | Hayır {.text-danger} | |
| **E151** | **İşlem desteği** | **Hayır**{.text-danger} | |
-| E151-01 | Taahhüt deyimi | Hayır{.text-danger} | |
-| E151-02 | ROLBACKL statementback deyimi | Hayır{.text-danger} | |
+| E151-01 | COMMIT deyimi | Hayır {.text-danger} | |
+| E151-02 | ROLLBACK deyimi | Hayır {.text-danger} | |
| **E152** | **Temel SET işlem deyimi** | **Hayır**{.text-danger} | |
-| E152-01 | Set TRANSACTİON deyimi: izolasyon düzeyi SERİALİZABLE yan tümcesi | Hayır{.text-danger} | |
-| E152-02 | Set TRANSACTİON deyimi: salt okunur ve okuma yazma yan tümceleri | Hayır{.text-danger} | |
+| E152-01 | SET TRANSACTION deyimi: ISOLATION LEVEL SERIALIZABLE yan tümcesi | Hayır {.text-danger} | |
+| E152-02 | SET TRANSACTION deyimi: salt okunur ve okuma yazma yan tümceleri | Hayır {.text-danger} | |
| **E153** | **Alt sorgularla güncellenebilir sorgular** | **Hayır**{.text-danger} | |
| **E161** | **Lider çift eksi kullanarak SQL yorumlar ** | **Evet**{.text-success} | |
| **E171** | **SQLSTATE desteği** | **Hayır**{.text-danger} | |
| **E182** | **Ana bilgisayar dili bağlama** | **Hayır**{.text-danger} | |
| **F031** | **Temel şema manipülasyonu** | **Kısmi**{.text-warning} | |
-| F031-01 | Kalıcı temel tablolar oluşturmak için tablo deyimi oluşturma | Kısmi{.text-warning} | Hayır `SYSTEM VERSIONING`, `ON COMMIT`, `GLOBAL`, `LOCAL`, `PRESERVE`, `DELETE`, `REF IS`, `WITH OPTIONS`, `UNDER`, `LIKE`, `PERIOD FOR` yan tümceleri ve kullanıcı çözümlenmiş veri türleri için destek yok |
-| F031-02 | Görünüm deyimi oluştur | Kısmi{.text-warning} | Hayır `RECURSIVE`, `CHECK`, `UNDER`, `WITH OPTIONS` yan tümceleri ve kullanıcı çözümlenmiş veri türleri için destek yok |
-| F031-03 | Hibe beyanı | Evet{.text-success} | |
-| F031-04 | ALTER TABLE deyimi: sütun yan tümcesi Ekle | Kısmi{.text-warning} | İçin destek yok `GENERATED` fık andra ve sistem süresi |
-| F031-13 | Dro :p TABLE deyimi: kısıt :lamak | Hayır{.text-danger} | |
-| F031-16 | Dro :p VİEW deyimi: kısıt :lamak | Hayır{.text-danger} | |
-| F031-19 | Rev REVOKEOKE deyimi: kısıt clauselamak | Hayır{.text-danger} | |
+| F031-01 | Kalıcı temel tablolar oluşturmak için CREATE TABLE deyimi | Kısmi {.text-warning} | `SYSTEM VERSIONING`, `ON COMMIT`, `GLOBAL`, `LOCAL`, `PRESERVE`, `DELETE`, `REF IS`, `WITH OPTIONS`, `UNDER`, `LIKE`, `PERIOD FOR` yan tümceleri ve kullanıcı çözümlenmiş veri türleri için destek yok |
+| F031-02 | CREATE VIEW deyimi | Kısmi {.text-warning} | `RECURSIVE`, `CHECK`, `UNDER`, `WITH OPTIONS` yan tümceleri ve kullanıcı çözümlenmiş veri türleri için destek yok |
+| F031-03 | GRANT deyimi | Evet {.text-success} | |
+| F031-04 | ALTER TABLE deyimi: ADD COLUMN yan tümcesi | Kısmi {.text-warning} | `GENERATED` yan tümcesi ve sistem zaman aralığı için destek yok |
+| F031-13 | DROP TABLE deyimi: RESTRICT yan tümcesi | Hayır {.text-danger} | |
+| F031-16 | DROP VIEW deyimi: RESTRICT yan tümcesi | Hayır {.text-danger} | |
+| F031-19 | REVOKE deyimi: RESTRICT yan tümcesi | Hayır {.text-danger} | |
| **F041** | **Temel birleştirilmiş tablo** | **Kısmi**{.text-warning} | |
-| F041-01 | Inner join (ancak mutlaka iç anahtar kelime değil) | Evet{.text-success} | |
-| F041-02 | İç anahtar kelime | Evet{.text-success} | |
-| F041-03 | LEFT OUTER JOIN | Evet{.text-success} | |
-| F041-04 | RIGHT OUTER JOIN | Evet{.text-success} | |
-| F041-05 | Dış birleşimler iç içe geçmiş olabilir | Evet{.text-success} | |
-| F041-07 | Sol veya sağ dış birleşimdeki iç tablo, bir iç birleşimde de kullanılabilir | Evet{.text-success} | |
-| F041-08 | Tüm karşılaştırma operatörleri desteklenir (sadece =yerine) | Hayır{.text-danger} | |
+| F041-01 | Inner join (INNER anahtar kelimesi zorunlu değil) | Evet {.text-success} | |
+| F041-02 | INNER anahtar kelimesi | Evet {.text-success} | |
+| F041-03 | LEFT OUTER JOIN | Evet {.text-success} | |
+| F041-04 | RIGHT OUTER JOIN | Evet {.text-success} | |
+| F041-05 | Dış birleşimler iç içe geçmiş olabilir | Evet {.text-success} | |
+| F041-07 | Sol veya sağ dış birleşimdeki iç tablo, bir iç birleşimde de kullanılabilir | Evet {.text-success} | |
+| F041-08 | Tüm karşılaştırma operatörleri desteklenir (yalnızca = değil) | Hayır {.text-danger} | |
| **F051** | **Temel tarih ve saat** | **Kısmi**{.text-warning} | |
-| F051-01 | Tarih veri türü (tarih literal desteği dahil) | Kısmi{.text-warning} | Hiçbir edebi |
-| F051-02 | En az 0 kesirli saniye hassasiyetle zaman veri türü (zaman literal desteği dahil) | Hayır{.text-danger} | |
-| F051-03 | Zaman damgası veri türü (zaman damgası literal desteği dahil) en az 0 ve 6 kesirli saniye hassasiyetle | Hayır{.text-danger} | `DateTime64` zaman benzer işlevsellik sağlar |
-| F051-04 | Tarih, Saat ve zaman damgası veri türlerinde karşılaştırma yüklemi | Kısmi{.text-warning} | Yalnızca bir veri türü kullanılabilir |
-| F051-05 | Datetime türleri ve karakter dizesi türleri arasında açık döküm | Evet{.text-success} | |
-| F051-06 | CURRENT_DATE | Hayır{.text-danger} | `today()` benzer mi |
-| F051-07 | LOCALTIME | Hayır{.text-danger} | `now()` benzer mi |
-| F051-08 | LOCALTIMESTAMP | Hayır{.text-danger} | |
+| F051-01 | Tarih veri türü (tarih literal desteği dahil) | Kısmi {.text-warning} | Tarih literali yok |
+| F051-02 | En az 0 kesirli saniye hassasiyetle zaman veri türü (zaman literal desteği dahil) | Hayır {.text-danger} | |
+| F051-03 | Zaman damgası veri türü (zaman damgası literal desteği dahil) en az 0 ve 6 kesirli saniye hassasiyetle | Hayır {.text-danger} | `DateTime64` türü benzer işlevsellik sağlar |
+| F051-04 | Tarih, Saat ve zaman damgası veri türlerinde karşılaştırma yüklemi | Kısmi {.text-warning} | Yalnızca bir veri türü kullanılabilir |
+| F051-05 | Datetime türleri ve karakter dizesi türleri arasında açık döküm | Evet {.text-success} | |
+| F051-06 | CURRENT_DATE | Hayır {.text-danger} | `today()` benzerdir |
+| F051-07 | LOCALTIME | Hayır {.text-danger} | `now()` benzerdir |
+| F051-08 | LOCALTIMESTAMP | Hayır {.text-danger} | |
| **F081** | **Sendika ve görüş EXCEPTLERDE** | **Kısmi**{.text-warning} | |
| **F131** | **Grup operationslandırılmış işlemler** | **Kısmi**{.text-warning} | |
-| F131-01 | WHERE, GROUP BY ve gruplandırılmış görünümlere sahip sorgularda desteklenen yan tümceleri olması | Evet{.text-success} | |
-| F131-02 | Gruplandırılmış görünümlere sahip sorgularda desteklenen birden çok tablo | Evet{.text-success} | |
-| F131-03 | Gruplandırılmış görünümlere sahip sorgularda desteklenen işlevleri ayarlayın | Evet{.text-success} | |
-| F131-04 | GROUP BY ile alt sorgular ve yan tümceleri ve gruplandırılmış görünümler | Evet{.text-success} | |
-| F131-05 | GROUP BY ile tek satır seçme ve yan tümceleri ve gruplandırılmış görünümleri sahip | Hayır{.text-danger} | |
+| F131-01 | Gruplandırılmış görünümlere sahip sorgularda WHERE, GROUP BY ve HAVING yan tümceleri desteklenir | Evet {.text-success} | |
+| F131-02 | Gruplandırılmış görünümlere sahip sorgularda desteklenen birden çok tablo | Evet {.text-success} | |
+| F131-03 | Gruplandırılmış görünümlere sahip sorgularda küme işlevleri desteklenir | Evet {.text-success} | |
+| F131-04 | GROUP BY ve HAVING yan tümceleri ve gruplandırılmış görünümler içeren alt sorgular | Evet {.text-success} | |
+| F131-05 | GROUP BY ve HAVING yan tümceleri ve gruplandırılmış görünümler içeren tek satırlık SELECT | Hayır {.text-danger} | |
| **F181** | **Çoklu modül desteği** | **Hayır**{.text-danger} | |
| **F201** | **Döküm fonksiyonu** | **Evet**{.text-success} | |
| **F221** | **Açık varsayılan** | **Hayır**{.text-danger} | |
| **F261** | **Durum ifadesi** | **Evet**{.text-success} | |
-| F261-01 | Basit durum | Evet{.text-success} | |
-| F261-02 | Aranan dava | Evet{.text-success} | |
-| F261-03 | NULLIF | Evet{.text-success} | |
-| F261-04 | COALESCE | Evet{.text-success} | |
+| F261-01 | Basit CASE | Evet {.text-success} | |
+| F261-02 | Aranan CASE | Evet {.text-success} | |
+| F261-03 | NULLIF | Evet {.text-success} | |
+| F261-04 | COALESCE | Evet {.text-success} | |
| **F311** | **Şema tanımı deyimi** | **Kısmi**{.text-warning} | |
-| F311-01 | CREATE SCHEMA | Hayır{.text-danger} | |
-| F311-02 | Kalıcı temel tablolar için tablo oluşturma | Evet{.text-success} | |
-| F311-03 | CREATE VIEW | Evet{.text-success} | |
-| F311-04 | CREATE VIEW: WITH CHECK OPTION | Hayır{.text-danger} | |
-| F311-05 | Hibe beyanı | Evet{.text-success} | |
+| F311-01 | CREATE SCHEMA | Hayır {.text-danger} | |
+| F311-02 | Kalıcı temel tablolar için CREATE TABLE | Evet {.text-success} | |
+| F311-03 | CREATE VIEW | Evet {.text-success} | |
+| F311-04 | CREATE VIEW: WITH CHECK OPTION | Hayır {.text-danger} | |
+| F311-05 | GRANT deyimi | Evet {.text-success} | |
| **F471** | **Skaler alt sorgu değerleri** | **Evet**{.text-success} | |
| **F481** | **Genişletilmiş boş yüklem** | **Evet**{.text-success} | |
| **F812** | **Temel işaretleme** | **Hayır**{.text-danger} | |
| **T321** | **Temel SQL-çağrılan rutinleri** | **Hayır**{.text-danger} | |
-| T321-01 | Hiçbir aşırı yükleme ile kullanıcı tanımlı fonksiyonlar | Hayır{.text-danger} | |
-| T321-02 | Hiçbir aşırı yükleme ile kullanıcı tanımlı saklı yordamlar | Hayır{.text-danger} | |
-| T321-03 | Fonksiyon çağırma | Hayır{.text-danger} | |
-| T321-04 | Çağrı bildirimi | Hayır{.text-danger} | |
-| T321-05 | Ret statementurn deyimi | Hayır{.text-danger} | |
+| T321-01 | Aşırı yükleme olmadan kullanıcı tanımlı fonksiyonlar | Hayır {.text-danger} | |
+| T321-02 | Aşırı yükleme olmadan kullanıcı tanımlı saklı yordamlar | Hayır {.text-danger} | |
+| T321-03 | Fonksiyon çağırma | Hayır {.text-danger} | |
+| T321-04 | CALL deyimi | Hayır {.text-danger} | |
+| T321-05 | RETURN deyimi | Hayır {.text-danger} | |
| **T631** | **Bir liste öğesi ile yüklemde** | **Evet**{.text-success} | |
diff --git a/docs/tr/sql-reference/data-types/lowcardinality.md b/docs/tr/sql-reference/data-types/lowcardinality.md
new file mode 100644
index
00000000000..dd3a9aa1c0d --- /dev/null +++ b/docs/tr/sql-reference/data-types/lowcardinality.md @@ -0,0 +1 @@ +../../../../en/sql-reference/data-types/lowcardinality.md \ No newline at end of file diff --git a/docs/zh/development/developer-instruction.md b/docs/zh/development/developer-instruction.md index 3e2ccf5da35..5a5d8d82144 100644 --- a/docs/zh/development/developer-instruction.md +++ b/docs/zh/development/developer-instruction.md @@ -242,8 +242,8 @@ ClickHouse的架构描述可以在此处查看:https://clickhouse.tech/docs/en sudo apt install wget xz-utils - wget https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz - wget https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz + wget https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz + wget https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz xz -v -d hits_v1.tsv.xz xz -v -d visits_v1.tsv.xz diff --git a/docs/zh/development/style.md b/docs/zh/development/style.md index 85a57658c06..36e4acb6a24 100644 --- a/docs/zh/development/style.md +++ b/docs/zh/development/style.md @@ -572,7 +572,7 @@ Fork不用于并行化。 **14.** 返回值 -大部分情况下使用 `return`。不要使用 `[return std::move(res)]{.strike}`。 +大部分情况下使用 `return`。不要使用 `return std::move(res)`。 如果函数在堆上分配对象并返回它,请使用 `shared_ptr` 或 `unique_ptr`。 @@ -666,7 +666,7 @@ Loader() {} **24.** 不要使用 `trailing return type` 为必要的功能。 ``` cpp -[auto f() -> void;]{.strike} +auto f() -> void ``` **25.** 声明和初始化变量。 diff --git a/docs/zh/getting-started/example-datasets/metrica.md b/docs/zh/getting-started/example-datasets/metrica.md index 353a24ce0cb..62ba28a92ef 100644 --- a/docs/zh/getting-started/example-datasets/metrica.md +++ b/docs/zh/getting-started/example-datasets/metrica.md @@ -7,14 +7,14 @@ toc_title: Yandex.Metrica Data 数据集由两个表组成,包含关于Yandex.Metrica的hits(`hits_v1`)和visit(`visits_v1`)的匿名数据。你可以阅读更多关于Yandex的信息。在[ClickHouse历史](../../introduction/history.md)的Metrica部分。 -数据集由两个表组成,他们中的任何一个都可以下载作为一个压缩`tsv.xz`的文件或准备的分区。除此之外,一个扩展版的`hits`表包含1亿行TSV在https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_100m_obfuscated_v1.tsv.xz,准备分区在https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_100m_obfuscated_v1.tar.xz。 +数据集由两个表组成,他们中的任何一个都可以下载作为一个压缩`tsv.xz`的文件或准备的分区。除此之外,一个扩展版的`hits`表包含1亿行TSV在https://datasets.clickhouse.tech/hits/tsv/hits_100m_obfuscated_v1.tsv.xz,准备分区在https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz。 ## 从准备好的分区获取表 {#obtaining-tables-from-prepared-partitions} 下载和导入`hits`表: ``` bash -curl -O https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_v1.tar +curl -O https://datasets.clickhouse.tech/hits/partitions/hits_v1.tar tar xvf hits_v1.tar -C /var/lib/clickhouse # path to ClickHouse data directory # check permissions on unpacked data, fix if required sudo service clickhouse-server restart @@ -24,7 +24,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1" 下载和导入`visits`表: ``` bash -curl -O https://clickhouse-datasets.s3.yandex.net/visits/partitions/visits_v1.tar +curl -O https://datasets.clickhouse.tech/visits/partitions/visits_v1.tar tar xvf visits_v1.tar -C /var/lib/clickhouse # path to ClickHouse data directory # check permissions on unpacked data, fix if required sudo service clickhouse-server restart @@ -36,7 +36,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1" 从TSV压缩文件下载并导入`hits`: ``` bash -curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv +curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv # now 
create table clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets" clickhouse-client --query "CREATE TABLE datasets.hits_v1 ( WatchID UInt64, JavaEnable UInt8, Title String, GoodEvent Int16, EventTime DateTime, EventDate Date, CounterID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RegionID UInt32, UserID UInt64, CounterClass Int8, OS UInt8, UserAgent UInt8, URL String, Referer String, URLDomain String, RefererDomain String, Refresh UInt8, IsRobot UInt8, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), ResolutionWidth UInt16, ResolutionHeight UInt16, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, FlashMinor2 String, NetMajor UInt8, NetMinor UInt8, UserAgentMajor UInt16, UserAgentMinor FixedString(2), CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, MobilePhone UInt8, MobilePhoneModel String, Params String, IPNetworkID UInt32, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, IsArtifical UInt8, WindowClientWidth UInt16, WindowClientHeight UInt16, ClientTimeZone Int16, ClientEventTime DateTime, SilverlightVersion1 UInt8, SilverlightVersion2 UInt8, SilverlightVersion3 UInt32, SilverlightVersion4 UInt16, PageCharset String, CodeVersion UInt32, IsLink UInt8, IsDownload UInt8, IsNotBounce UInt8, FUniqID UInt64, HID UInt32, IsOldCounter UInt8, IsEvent UInt8, IsParameter UInt8, DontCountHits UInt8, WithHash UInt8, HitColor FixedString(1), UTCEventTime DateTime, Age UInt8, Sex UInt8, Income UInt8, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), RemoteIP UInt32, RemoteIP6 FixedString(16), WindowName Int32, OpenerName Int32, HistoryLength Int16, BrowserLanguage FixedString(2), BrowserCountry FixedString(2), SocialNetwork String, SocialAction String, HTTPError UInt16, SendTiming Int32, DNSTiming Int32, ConnectTiming Int32, ResponseStartTiming Int32, ResponseEndTiming Int32, FetchTiming Int32, RedirectTiming Int32, DOMInteractiveTiming Int32, DOMContentLoadedTiming Int32, DOMCompleteTiming Int32, LoadEventStartTiming Int32, LoadEventEndTiming Int32, NSToDOMContentLoadedTiming Int32, FirstPaintTiming Int32, RedirectCount Int8, SocialSourceNetworkID UInt8, SocialSourcePage String, ParamPrice Int64, ParamOrderID String, ParamCurrency FixedString(3), ParamCurrencyID UInt16, GoalsReached Array(UInt32), OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, RefererHash UInt64, URLHash UInt64, CLID UInt32, YCLID UInt64, ShareService String, ShareURL String, ShareTitle String, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), IslandID FixedString(16), RequestNum UInt32, RequestTry UInt8) ENGINE = MergeTree() PARTITION BY toYYYYMM(EventDate) ORDER BY (CounterID, EventDate, intHash32(UserID)) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192" @@ -50,7 +50,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1" 从压缩tsv文件下载和导入`visits`: ``` bash -curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv +curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv # now create table clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets" clickhouse-client --query "CREATE TABLE 
datasets.visits_v1 ( CounterID UInt32, StartDate Date, Sign Int8, IsNew UInt8, VisitID UInt64, UserID UInt64, StartTime DateTime, Duration UInt32, UTCStartTime DateTime, PageViews Int32, Hits Int32, IsBounce UInt8, Referer String, StartURL String, RefererDomain String, StartURLDomain String, EndURL String, LinkURL String, IsDownload UInt8, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, PlaceID Int32, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), IsYandex UInt8, GoalReachesDepth Int32, GoalReachesURL Int32, GoalReachesAny Int32, SocialSourceNetworkID UInt8, SocialSourcePage String, MobilePhoneModel String, ClientEventTime DateTime, RegionID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RemoteIP UInt32, RemoteIP6 FixedString(16), IPNetworkID UInt32, SilverlightVersion3 UInt32, CodeVersion UInt32, ResolutionWidth UInt16, ResolutionHeight UInt16, UserAgentMajor UInt16, UserAgentMinor UInt16, WindowClientWidth UInt16, WindowClientHeight UInt16, SilverlightVersion2 UInt8, SilverlightVersion4 UInt16, FlashVersion3 UInt16, FlashVersion4 UInt16, ClientTimeZone Int16, OS UInt8, UserAgent UInt8, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, NetMajor UInt8, NetMinor UInt8, MobilePhone UInt8, SilverlightVersion1 UInt8, Age UInt8, Sex UInt8, Income UInt8, JavaEnable UInt8, CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, BrowserLanguage UInt16, BrowserCountry UInt16, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), Params Array(String), Goals Nested(ID UInt32, Serial UInt32, EventTime DateTime, Price Int64, OrderID String, CurrencyID UInt32), WatchIDs Array(UInt64), ParamSumPrice Int64, ParamCurrency FixedString(3), ParamCurrencyID UInt16, ClickLogID UInt64, ClickEventID Int32, ClickGoodEvent Int32, ClickEventTime DateTime, ClickPriorityID Int32, ClickPhraseID Int32, ClickPageID Int32, ClickPlaceID Int32, ClickTypeID Int32, ClickResourceID Int32, ClickCost UInt32, ClickClientIP UInt32, ClickDomainID UInt32, ClickURL String, ClickAttempt UInt8, ClickOrderID UInt32, ClickBannerID UInt32, ClickMarketCategoryID UInt32, ClickMarketPP UInt32, ClickMarketCategoryName String, ClickMarketPPName String, ClickAWAPSCampaignName String, ClickPageName String, ClickTargetType UInt16, ClickTargetPhraseID UInt64, ClickContextType UInt8, ClickSelectType Int8, ClickOptions String, ClickGroupBannerID Int32, OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, FirstVisit DateTime, PredLastVisit Date, LastVisit Date, TotalVisits UInt32, TraficSource Nested(ID Int8, SearchEngineID UInt16, AdvEngineID UInt8, PlaceID UInt16, SocialSourceNetworkID UInt8, Domain String, SearchPhrase String, SocialSourcePage String), Attendance FixedString(16), CLID UInt32, YCLID UInt64, NormalizedRefererHash UInt64, SearchPhraseHash UInt64, RefererDomainHash UInt64, NormalizedStartURLHash UInt64, StartURLDomainHash UInt64, NormalizedEndURLHash UInt64, TopLevelDomain UInt64, URLScheme UInt64, OpenstatServiceNameHash UInt64, OpenstatCampaignIDHash UInt64, OpenstatAdIDHash UInt64, OpenstatSourceIDHash UInt64, UTMSourceHash UInt64, UTMMediumHash UInt64, UTMCampaignHash UInt64, UTMContentHash UInt64, UTMTermHash UInt64, FromHash UInt64, WebVisorEnabled UInt8, WebVisorActivity UInt32, ParsedParams Nested(Key1 String, Key2 String, 
Key3 String, Key4 String, Key5 String, ValueDouble Float64), Market Nested(Type UInt8, GoalID UInt32, OrderID String, OrderPrice Int64, PP UInt32, DirectPlaceID UInt32, DirectOrderID UInt32, DirectBannerID UInt32, GoodID String, GoodName String, GoodQuantity Int32, GoodPrice Int64), IslandID FixedString(16)) ENGINE = CollapsingMergeTree(Sign) PARTITION BY toYYYYMM(StartDate) ORDER BY (CounterID, StartDate, intHash32(UserID), VisitID) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192" diff --git a/docs/zh/getting-started/example-datasets/nyc-taxi.md b/docs/zh/getting-started/example-datasets/nyc-taxi.md index c6b41e9d396..0e4ff67d6e7 100644 --- a/docs/zh/getting-started/example-datasets/nyc-taxi.md +++ b/docs/zh/getting-started/example-datasets/nyc-taxi.md @@ -283,7 +283,7 @@ SELECT formatReadableSize(sum(bytes)) FROM system.parts WHERE table = 'trips_mer ## 下载预处理好的分区数据 {#xia-zai-yu-chu-li-hao-de-fen-qu-shu-ju} ``` bash -$ curl -O https://clickhouse-datasets.s3.yandex.net/trips_mergetree/partitions/trips_mergetree.tar +$ curl -O https://datasets.clickhouse.tech/trips_mergetree/partitions/trips_mergetree.tar $ tar xvf trips_mergetree.tar -C /var/lib/clickhouse # path to ClickHouse data directory $ # check permissions of unpacked data, fix if required $ sudo service clickhouse-server restart diff --git a/docs/zh/getting-started/example-datasets/ontime.md b/docs/zh/getting-started/example-datasets/ontime.md index 4c21eee51a2..3921f71fc7e 100644 --- a/docs/zh/getting-started/example-datasets/ontime.md +++ b/docs/zh/getting-started/example-datasets/ontime.md @@ -154,7 +154,7 @@ $ for i in *.zip; do echo $i; unzip -cq $i '*.csv' | sed 's/\.00//g' | clickhous ## 下载预处理好的分区数据 {#xia-zai-yu-chu-li-hao-de-fen-qu-shu-ju} ``` bash -$ curl -O https://clickhouse-datasets.s3.yandex.net/ontime/partitions/ontime.tar +$ curl -O https://datasets.clickhouse.tech/ontime/partitions/ontime.tar $ tar xvf ontime.tar -C /var/lib/clickhouse # path to ClickHouse data directory $ # check permissions of unpacked data, fix if required $ sudo service clickhouse-server restart diff --git a/docs/zh/getting-started/index.md b/docs/zh/getting-started/index.md index ac6074eb72f..fdffca954f7 100644 --- a/docs/zh/getting-started/index.md +++ b/docs/zh/getting-started/index.md @@ -7,11 +7,14 @@ toc_priority: 2 # 入门 {#ru-men} -如果您是ClickHouse的新手,并希望亲身体验它的性能,首先您需要通过 [安装过程](install.md). +如果您是ClickHouse的新手,并希望亲身体验它的性能。 -之后,您可以选择以下选项之一: +首先需要进行 [环境安装与部署](install.md). 
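+
+安装完成后,可以先用 `clickhouse-client` 执行一条最小的查询来确认服务可用(此处仅为示意,`version()` 与 `now()` 均为内置函数,任何正常返回都说明部署成功):
+
+``` sql
+SELECT version(), now(); -- 返回服务端版本与当前服务器时间
+```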
+ +之后,您可以通过教程与示例数据完成自己的入门第一步: + +- [QuickStart教程](tutorial.md) 快速了解Clickhouse的操作流程 +- [示例数据集-航班飞行数据](example-datasets/ontime.md) 示例数据,提供了常用的SQL查询场景 -- [通过详细的教程](tutorial.md) -- [试验示例数据集](example-datasets/ontime.md) [来源文章](https://clickhouse.tech/docs/zh/getting_started/) diff --git a/docs/zh/getting-started/tutorial.md b/docs/zh/getting-started/tutorial.md index 93f368bc2dc..5e34729a9c5 100644 --- a/docs/zh/getting-started/tutorial.md +++ b/docs/zh/getting-started/tutorial.md @@ -85,8 +85,8 @@ clickhouse-client --query='INSERT INTO table FORMAT TabSeparated' < data.tsv ### 下载并提取表数据 {#download-and-extract-table-data} ``` bash -curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv -curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv +curl https://datasets.clickhouse.tech/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv +curl https://datasets.clickhouse.tech/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv ``` 提取的文件大小约为10GB。 diff --git a/docs/zh/guides/apply-catboost-model.md b/docs/zh/guides/apply-catboost-model.md index 8368fde0d26..5e374751052 100644 --- a/docs/zh/guides/apply-catboost-model.md +++ b/docs/zh/guides/apply-catboost-model.md @@ -7,27 +7,27 @@ toc_title: "\u5E94\u7528CatBoost\u6A21\u578B" # 在ClickHouse中应用Catboost模型 {#applying-catboost-model-in-clickhouse} -[CatBoost](https://catboost.ai) 是一个用于机器学习的免费开源梯度提升开发库 [Yandex](https://yandex.com/company/) 。 +[CatBoost](https://catboost.ai) 是一个由[Yandex](https://yandex.com/company/)开发的开源免费机器学习库。 -通过这篇指导,您将学会如何将预先从SQL推理出的运行模型作为训练好的模型应用到ClickHouse中去。 +通过这篇指导,您将学会如何用SQL建模,使用ClickHouse预先训练好的模型来推断数据。 -在ClickHouse中应用CatBoost模型: +在ClickHouse中应用CatBoost模型的一般过程: -1. [创建表](#create-table). +1. [创建数据表](#create-table). 2. [将数据插入到表中](#insert-data-to-table). -3. [将CatBoost集成到ClickHouse中](#integrate-catboost-into-clickhouse) (可选步骤)。 -4. [从SQL运行模型推理](#run-model-inference). +3. [将CatBoost集成到ClickHouse中](#integrate-catboost-into-clickhouse) (可跳过)。 +4. [从SQL运行模型推断](#run-model-inference). -有关训练CatBoost模型的详细信息,请参阅 [训练和使用模型](https://catboost.ai/docs/features/training.html#training). +有关训练CatBoost模型的详细信息,请参阅 [训练和模型应用](https://catboost.ai/docs/features/training.html#training). ## 先决条件 {#prerequisites} -请先安装好 [Docker](https://docs.docker.com/install/)。 +请先安装 [Docker](https://docs.docker.com/install/)。 !!! note "注" - [Docker](https://www.docker.com) 是一个软件平台,允许您创建容器,将CatBoost和ClickHouse安装与系统的其余部分隔离。 + [Docker](https://www.docker.com) 是一个软件平台,用户可以用来创建独立于其余系统、集成CatBoost和ClickHouse的容器。 在应用CatBoost模型之前: @@ -37,7 +37,7 @@ toc_title: "\u5E94\u7528CatBoost\u6A21\u578B" $ docker pull yandex/tutorial-catboost-clickhouse ``` -此Docker映像包含运行CatBoost和ClickHouse所需的所有内容:代码、运行时、库、环境变量和配置文件。 +此Docker映像包含运行CatBoost和ClickHouse所需的所有内容:代码、运行环境、库、环境变量和配置文件。 **2.** 确保已成功拉取Docker映像: @@ -53,7 +53,7 @@ yandex/tutorial-catboost-clickhouse latest 622e4d17945b 22 $ docker run -it -p 8888:8888 yandex/tutorial-catboost-clickhouse ``` -## 1. 创建表 {#create-table} +## 1. 创建数据表 {#create-table} 为训练样本创建ClickHouse表: @@ -124,19 +124,21 @@ FROM amazon_train ## 3. 将CatBoost集成到ClickHouse中 {#integrate-catboost-into-clickhouse} !!! note "注" - **可选步骤。** Docker映像包含运行CatBoost和ClickHouse所需的所有内容。 + **可跳过。** Docker映像包含运行CatBoost和ClickHouse所需的所有内容。 CatBoost集成到ClickHouse步骤: -**1.** 构建测试库文件。 +**1.** 构建评估库。 -测试CatBoost模型的最快方法是编译 `libcatboostmodel.` 库文件. 有关如何构建库文件的详细信息,请参阅 [CatBoost文件](https://catboost.ai/docs/concepts/c-plus-plus-api_dynamic-c-pluplus-wrapper.html). 
+评估CatBoost模型的最快方法是编译 `libcatboostmodel.` 库文件. -**2.** 任意创建一个新目录, 如 `data` 并将创建的库文件放入其中。 Docker映像已经包含了库 `data/libcatboostmodel.so`. +有关如何构建库文件的详细信息,请参阅 [CatBoost文件](https://catboost.ai/docs/concepts/c-plus-plus-api_dynamic-c-pluplus-wrapper.html). -**3.** 任意创建一个新目录来放配置模型, 如 `models`. +**2.** 创建一个新目录(位置与名称可随意指定), 如 `data` 并将创建的库文件放入其中。 Docker映像已经包含了库 `data/libcatboostmodel.so`. -**4.** 任意创建一个模型配置文件,如 `models/amazon_model.xml`. +**3.** 创建一个新目录来放配置模型, 如 `models`. + +**4.** 创建一个模型配置文件,如 `models/amazon_model.xml`. **5.** 描述模型配置: @@ -163,7 +165,7 @@ CatBoost集成到ClickHouse步骤: /home/catboost/models/*_model.xml ``` -## 4. 运行从SQL推理的模型 {#run-model-inference} +## 4. 运行从SQL推断的模型 {#run-model-inference} 测试模型是否正常,运行ClickHouse客户端 `$ clickhouse client`. @@ -189,7 +191,7 @@ LIMIT 10 !!! note "注" 函数 [modelEvaluate](../sql-reference/functions/other-functions.md#function-modelevaluate) 返回带有多类模型的每类原始预测的元组。 -让我们预测一下: +执行预测: ``` sql :) SELECT diff --git a/docs/zh/introduction/adopters.md b/docs/zh/introduction/adopters.md index fc7dfa4efeb..ed78abeacfb 100644 --- a/docs/zh/introduction/adopters.md +++ b/docs/zh/introduction/adopters.md @@ -64,6 +64,7 @@ toc_title: "ClickHouse用户" | [Splunk](https://www.splunk.com/) | 业务分析 | 主要产品 | — | — | [英文幻灯片,2018年1月](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup12/splunk.pdf) | | [Spotify](https://www.spotify.com) | 音乐 | 实验 | — | — | [幻灯片,七月2018](https://www.slideshare.net/glebus/using-clickhouse-for-experimentation-104247173) | | [腾讯](https://www.tencent.com) | 大数据 | 数据处理 | — | — | [中文幻灯片,2018年10月](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/5.%20ClickHouse大数据集群应用_李俊飞腾讯网媒事业部.pdf) | +| 腾讯QQ音乐(TME) | 大数据 | 数据处理 | — | — | [博客文章,2020年6月](https://cloud.tencent.com/developer/article/1637840) | [优步](https://www.uber.com) | 出租车 | 日志记录 | — | — | [幻灯片,二月2020](https://presentations.clickhouse.tech/meetup40/uber.pdf) | | [VKontakte](https://vk.com) | 社交网络 | 统计,日志记录 | — | — | [俄文幻灯片,八月2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup17/3_vk.pdf) | | [Wisebits](https://wisebits.com/) | IT解决方案 | 分析 | — | — | [俄文幻灯片,2019年5月](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup22/strategies.pdf) | diff --git a/docs/zh/operations/performance-test.md b/docs/zh/operations/performance-test.md index cde27a8547d..7728325bb9a 100644 --- a/docs/zh/operations/performance-test.md +++ b/docs/zh/operations/performance-test.md @@ -48,7 +48,7 @@ toc_title: "\u6D4B\u8BD5\u786C\u4EF6" - wget https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_100m_obfuscated_v1.tar.xz + wget https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz tar xvf hits_100m_obfuscated_v1.tar.xz -C . mv hits_100m_obfuscated_v1/* . diff --git a/docs/zh/operations/utilities/index.md b/docs/zh/operations/utilities/index.md index a2ab228f876..0d60fb8bbb9 100644 --- a/docs/zh/operations/utilities/index.md +++ b/docs/zh/operations/utilities/index.md @@ -1,7 +1,7 @@ -# ツ环板Utilityョツ嘉ッ {#clickhouse-utility} +# 实用工具 {#clickhouse-utility} -- [ツ环板-ョツ嘉ッツ偲](clickhouse-local.md) — Allows running SQL queries on data without stopping the ClickHouse server, similar to how `awk` 做到这一点。 -- [ツ环板-ョツ嘉ッツ偲](clickhouse-copier.md) — Copies (and reshards) data from one cluster to another cluster. -- [ツ暗ェツ氾环催ツ団](clickhouse-benchmark.md) — Loads server with the custom queries and settings. 
+- [本地查询](clickhouse-local.md) — 在不停止ClickHouse服务的情况下,对数据执行查询操作(类似于 `awk` 命令)。 +- [跨集群复制](clickhouse-copier.md) — 在不同集群间复制数据。 +- [性能测试](clickhouse-benchmark.md) — 连接到Clickhouse服务器,执行性能测试。 [原始文章](https://clickhouse.tech/docs/en/operations/utils/) diff --git a/docs/zh/sql-reference/ansi.md b/docs/zh/sql-reference/ansi.md index 8e8590079e2..cc69988f507 100644 --- a/docs/zh/sql-reference/ansi.md +++ b/docs/zh/sql-reference/ansi.md @@ -26,155 +26,155 @@ toc_title: "ANSI\u517C\u5BB9\u6027" | Feature ID | 功能名称 | 状态 | 评论 | |------------|----------------------------------------------------------------|--------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | **E011** | **数字数据类型** | **部分**{.text-warning} | | -| E011-01 | 整型和小型数据类型 | 是{.text-success} | | -| E011-02 | 真实、双精度和浮点数据类型数据类型 | 部分{.text-warning} | `FLOAT()`, `REAL` 和 `DOUBLE PRECISION` 不支持 | -| E011-03 | 十进制和数值数据类型 | 部分{.text-warning} | 只有 `DECIMAL(p,s)` 支持,而不是 `NUMERIC` | -| E011-04 | 算术运算符 | 是{.text-success} | | -| E011-05 | 数字比较 | 是{.text-success} | | -| E011-06 | 数字数据类型之间的隐式转换 | 非也。{.text-danger} | ANSI SQL允许在数值类型之间进行任意隐式转换,而ClickHouse依赖于具有多个重载的函数而不是隐式转换 | +| E011-01 | 整型和小型数据类型 | 是 {.text-success} | | +| E011-02 | 真实、双精度和浮点数据类型数据类型 | 部分 {.text-warning} | `FLOAT()`, `REAL` 和 `DOUBLE PRECISION` 不支持 | +| E011-03 | 十进制和数值数据类型 | 部分 {.text-warning} | 只有 `DECIMAL(p,s)` 支持,而不是 `NUMERIC` | +| E011-04 | 算术运算符 | 是 {.text-success} | | +| E011-05 | 数字比较 | 是 {.text-success} | | +| E011-06 | 数字数据类型之间的隐式转换 | 非也。 {.text-danger} | ANSI SQL允许在数值类型之间进行任意隐式转换,而ClickHouse依赖于具有多个重载的函数而不是隐式转换 | | **E021** | **字符串类型** | **部分**{.text-warning} | | -| E021-01 | 字符数据类型 | 非也。{.text-danger} | | -| E021-02 | 字符变化数据类型 | 非也。{.text-danger} | `String` 行为类似,但括号中没有长度限制 | -| E021-03 | 字符文字 | 部分{.text-warning} | 不自动连接连续文字和字符集支持 | -| E021-04 | 字符长度函数 | 部分{.text-warning} | 非也。 `USING` 条款 | -| E021-05 | OCTET_LENGTH函数 | 非也。{.text-danger} | `LENGTH` 表现类似 | -| E021-06 | SUBSTRING | 部分{.text-warning} | 不支持 `SIMILAR` 和 `ESCAPE` 条款,否 `SUBSTRING_REGEX` 备选案文 | -| E021-07 | 字符串联 | 部分{.text-warning} | 非也。 `COLLATE` 条款 | -| E021-08 | 上下功能 | 是{.text-success} | | -| E021-09 | 修剪功能 | 是{.text-success} | | -| E021-10 | 固定长度和可变长度字符串类型之间的隐式转换 | 非也。{.text-danger} | ANSI SQL允许在字符串类型之间进行任意隐式转换,而ClickHouse依赖于具有多个重载的函数而不是隐式转换 | -| E021-11 | 职位功能 | 部分{.text-warning} | 不支持 `IN` 和 `USING` 条款,否 `POSITION_REGEX` 备选案文 | -| E021-12 | 字符比较 | 是{.text-success} | | +| E021-01 | 字符数据类型 | 非也。 {.text-danger} | | +| E021-02 | 字符变化数据类型 | 非也。 {.text-danger} | `String` 行为类似,但括号中没有长度限制 | +| E021-03 | 字符文字 | 部分 {.text-warning} | 不自动连接连续文字和字符集支持 | +| E021-04 | 字符长度函数 | 部分 {.text-warning} | 非也。 `USING` 条款 | +| E021-05 | OCTET_LENGTH函数 | 非也。 {.text-danger} | `LENGTH` 表现类似 | +| E021-06 | SUBSTRING | 部分 {.text-warning} | 不支持 `SIMILAR` 和 `ESCAPE` 条款,否 `SUBSTRING_REGEX` 备选案文 | +| E021-07 | 字符串联 | 部分 {.text-warning} | 非也。 `COLLATE` 条款 | +| E021-08 | 上下功能 | 是 {.text-success} | | +| E021-09 | 修剪功能 | 是 {.text-success} | | +| E021-10 | 固定长度和可变长度字符串类型之间的隐式转换 | 非也。 {.text-danger} | ANSI SQL允许在字符串类型之间进行任意隐式转换,而ClickHouse依赖于具有多个重载的函数而不是隐式转换 | +| E021-11 | 职位功能 | 部分 {.text-warning} | 不支持 `IN` 和 `USING` 条款,否 `POSITION_REGEX` 备选案文 | +| E021-12 | 字符比较 | 是 {.text-success} | | | **E031** | **标识符** | **部分**{.text-warning} | | -| E031-01 | 分隔标识符 | 部分{.text-warning} | Unicode文字支持有限 | -| E031-02 | 小写标识符 | 是{.text-success} | | -| E031-03 | 尾部下划线 | 是{.text-success} | | +| E031-01 | 分隔标识符 | 部分 
{.text-warning} | Unicode文字支持有限 | +| E031-02 | 小写标识符 | 是 {.text-success} | | +| E031-03 | 尾部下划线 | 是 {.text-success} | | | **E051** | **基本查询规范** | **部分**{.text-warning} | | -| E051-01 | SELECT DISTINCT | 是{.text-success} | | -| E051-02 | GROUP BY子句 | 是{.text-success} | | -| E051-04 | 分组依据可以包含不在列 `` | 是 {.text-success} | | +| E051-05 | 选择项目可以重命名 | 是 {.text-success} | | +| E051-06 | 有条款 | 是 {.text-success} | | +| E051-07 | 合格\*在选择列表中 | 是 {.text-success} | | +| E051-08 | FROM子句中的关联名称 | 是 {.text-success} | | +| E051-09 | 重命名FROM子句中的列 | 非也。 {.text-danger} | | | **E061** | **基本谓词和搜索条件** | **部分**{.text-warning} | | -| E061-01 | 比较谓词 | 是{.text-success} | | -| E061-02 | 谓词之间 | 部分{.text-warning} | 非也。 `SYMMETRIC` 和 `ASYMMETRIC` 条款 | -| E061-03 | 在具有值列表的谓词中 | 是{.text-success} | | -| E061-04 | 像谓词 | 是{.text-success} | | -| E061-05 | LIKE谓词:逃避条款 | 非也。{.text-danger} | | -| E061-06 | 空谓词 | 是{.text-success} | | -| E061-07 | 量化比较谓词 | 非也。{.text-danger} | | -| E061-08 | 存在谓词 | 非也。{.text-danger} | | -| E061-09 | 比较谓词中的子查询 | 是{.text-success} | | -| E061-11 | 谓词中的子查询 | 是{.text-success} | | -| E061-12 | 量化比较谓词中的子查询 | 非也。{.text-danger} | | -| E061-13 | 相关子查询 | 非也。{.text-danger} | | -| E061-14 | 搜索条件 | 是{.text-success} | | +| E061-01 | 比较谓词 | 是 {.text-success} | | +| E061-02 | 谓词之间 | 部分 {.text-warning} | 非也。 `SYMMETRIC` 和 `ASYMMETRIC` 条款 | +| E061-03 | 在具有值列表的谓词中 | 是 {.text-success} | | +| E061-04 | 像谓词 | 是 {.text-success} | | +| E061-05 | LIKE谓词:逃避条款 | 非也。 {.text-danger} | | +| E061-06 | 空谓词 | 是 {.text-success} | | +| E061-07 | 量化比较谓词 | 非也。 {.text-danger} | | +| E061-08 | 存在谓词 | 非也。 {.text-danger} | | +| E061-09 | 比较谓词中的子查询 | 是 {.text-success} | | +| E061-11 | 谓词中的子查询 | 是 {.text-success} | | +| E061-12 | 量化比较谓词中的子查询 | 非也。 {.text-danger} | | +| E061-13 | 相关子查询 | 非也。 {.text-danger} | | +| E061-14 | 搜索条件 | 是 {.text-success} | | | **E071** | **基本查询表达式** | **部分**{.text-warning} | | -| E071-01 | UNION DISTINCT table运算符 | 非也。{.text-danger} | | -| E071-02 | 联合所有表运算符 | 是{.text-success} | | -| E071-03 | 除了不同的表运算符 | 非也。{.text-danger} | | -| E071-05 | 通过表运算符组合的列不必具有完全相同的数据类型 | 是{.text-success} | | -| E071-06 | 子查询中的表运算符 | 是{.text-success} | | +| E071-01 | UNION DISTINCT table运算符 | 非也。 {.text-danger} | | +| E071-02 | 联合所有表运算符 | 是 {.text-success} | | +| E071-03 | 除了不同的表运算符 | 非也。 {.text-danger} | | +| E071-05 | 通过表运算符组合的列不必具有完全相同的数据类型 | 是 {.text-success} | | +| E071-06 | 子查询中的表运算符 | 是 {.text-success} | | | **E081** | **基本特权** | **部分**{.text-warning} | 正在进行的工作 | | **E091** | **设置函数** | **是**{.text-success} | | -| E091-01 | AVG | 是{.text-success} | | -| E091-02 | COUNT | 是{.text-success} | | -| E091-03 | MAX | 是{.text-success} | | -| E091-04 | MIN | 是{.text-success} | | -| E091-05 | SUM | 是{.text-success} | | -| E091-06 | 全部量词 | 非也。{.text-danger} | | -| E091-07 | 不同的量词 | 部分{.text-warning} | 并非所有聚合函数都受支持 | +| E091-01 | AVG | 是 {.text-success} | | +| E091-02 | COUNT | 是 {.text-success} | | +| E091-03 | MAX | 是 {.text-success} | | +| E091-04 | MIN | 是 {.text-success} | | +| E091-05 | SUM | 是 {.text-success} | | +| E091-06 | 全部量词 | 非也。 {.text-danger} | | +| E091-07 | 不同的量词 | 部分 {.text-warning} | 并非所有聚合函数都受支持 | | **E101** | **基本数据操作** | **部分**{.text-warning} | | -| E101-01 | 插入语句 | 是{.text-success} | 注:ClickHouse中的主键并不意味着 `UNIQUE` 约束 | -| E101-03 | 搜索更新语句 | 非也。{.text-danger} | 有一个 `ALTER UPDATE` 批量数据修改语句 | -| E101-04 | 搜索的删除语句 | 非也。{.text-danger} | 有一个 `ALTER DELETE` 批量数据删除声明 | +| E101-01 | 插入语句 | 是 {.text-success} | 注:ClickHouse中的主键并不意味着 `UNIQUE` 约束 | +| E101-03 | 搜索更新语句 | 非也。 {.text-danger} | 有一个 `ALTER UPDATE` 批量数据修改语句 | +| E101-04 
| 搜索的删除语句 | 非也。 {.text-danger} | 有一个 `ALTER DELETE` 批量数据删除声明 | | **E111** | **单行SELECT语句** | **非也。**{.text-danger} | | | **E121** | **基本光标支持** | **非也。**{.text-danger} | | -| E121-01 | DECLARE CURSOR | 非也。{.text-danger} | | -| E121-02 | 按列排序不需要在选择列表中 | 非也。{.text-danger} | | -| E121-03 | 按顺序排列的值表达式 | 非也。{.text-danger} | | -| E121-04 | 公开声明 | 非也。{.text-danger} | | -| E121-06 | 定位更新语句 | 非也。{.text-danger} | | -| E121-07 | 定位删除语句 | 非也。{.text-danger} | | -| E121-08 | 关闭声明 | 非也。{.text-danger} | | -| E121-10 | FETCH语句:隐式NEXT | 非也。{.text-danger} | | -| E121-17 | 使用保持游标 | 非也。{.text-danger} | | +| E121-01 | DECLARE CURSOR | 非也。 {.text-danger} | | +| E121-02 | 按列排序不需要在选择列表中 | 非也。 {.text-danger} | | +| E121-03 | 按顺序排列的值表达式 | 非也。 {.text-danger} | | +| E121-04 | 公开声明 | 非也。 {.text-danger} | | +| E121-06 | 定位更新语句 | 非也。 {.text-danger} | | +| E121-07 | 定位删除语句 | 非也。 {.text-danger} | | +| E121-08 | 关闭声明 | 非也。 {.text-danger} | | +| E121-10 | FETCH语句:隐式NEXT | 非也。 {.text-danger} | | +| E121-17 | 使用保持游标 | 非也。 {.text-danger} | | | **E131** | **空值支持(空值代替值)** | **部分**{.text-warning} | 一些限制适用 | | **E141** | **基本完整性约束** | **部分**{.text-warning} | | -| E141-01 | 非空约束 | 是{.text-success} | 注: `NOT NULL` 默认情况下,表列隐含 | -| E141-02 | 非空列的唯一约束 | 非也。{.text-danger} | | -| E141-03 | 主键约束 | 非也。{.text-danger} | | -| E141-04 | 对于引用删除操作和引用更新操作,具有默认无操作的基本外键约束 | 非也。{.text-danger} | | -| E141-06 | 检查约束 | 是{.text-success} | | -| E141-07 | 列默认值 | 是{.text-success} | | -| E141-08 | 在主键上推断为非NULL | 是{.text-success} | | -| E141-10 | 可以按任何顺序指定外键中的名称 | 非也。{.text-danger} | | +| E141-01 | 非空约束 | 是 {.text-success} | 注: `NOT NULL` 默认情况下,表列隐含 | +| E141-02 | 非空列的唯一约束 | 非也。 {.text-danger} | | +| E141-03 | 主键约束 | 非也。 {.text-danger} | | +| E141-04 | 对于引用删除操作和引用更新操作,具有默认无操作的基本外键约束 | 非也。 {.text-danger} | | +| E141-06 | 检查约束 | 是 {.text-success} | | +| E141-07 | 列默认值 | 是 {.text-success} | | +| E141-08 | 在主键上推断为非NULL | 是 {.text-success} | | +| E141-10 | 可以按任何顺序指定外键中的名称 | 非也。 {.text-danger} | | | **E151** | **交易支持** | **非也。**{.text-danger} | | -| E151-01 | 提交语句 | 非也。{.text-danger} | | -| E151-02 | 回滚语句 | 非也。{.text-danger} | | +| E151-01 | 提交语句 | 非也。 {.text-danger} | | +| E151-02 | 回滚语句 | 非也。 {.text-danger} | | | **E152** | **基本设置事务语句** | **非也。**{.text-danger} | | -| E152-01 | SET TRANSACTION语句:隔离级别SERIALIZABLE子句 | 非也。{.text-danger} | | -| E152-02 | SET TRANSACTION语句:只读和读写子句 | 非也。{.text-danger} | | +| E152-01 | SET TRANSACTION语句:隔离级别SERIALIZABLE子句 | 非也。 {.text-danger} | | +| E152-02 | SET TRANSACTION语句:只读和读写子句 | 非也。 {.text-danger} | | | **E153** | **具有子查询的可更新查询** | **非也。**{.text-danger} | | | **E161** | **SQL注释使用前导双减** | **是**{.text-success} | | | **E171** | **SQLSTATE支持** | **非也。**{.text-danger} | | | **E182** | **主机语言绑定** | **非也。**{.text-danger} | | | **F031** | **基本架构操作** | **部分**{.text-warning} | | -| F031-01 | CREATE TABLE语句创建持久基表 | 部分{.text-warning} | 非也。 `SYSTEM VERSIONING`, `ON COMMIT`, `GLOBAL`, `LOCAL`, `PRESERVE`, `DELETE`, `REF IS`, `WITH OPTIONS`, `UNDER`, `LIKE`, `PERIOD FOR` 子句,不支持用户解析的数据类型 | -| F031-02 | 创建视图语句 | 部分{.text-warning} | 非也。 `RECURSIVE`, `CHECK`, `UNDER`, `WITH OPTIONS` 子句,不支持用户解析的数据类型 | -| F031-03 | 赠款声明 | 是{.text-success} | | -| F031-04 | ALTER TABLE语句:ADD COLUMN子句 | 部分{.text-warning} | 不支持 `GENERATED` 条款和系统时间段 | -| F031-13 | DROP TABLE语句:RESTRICT子句 | 非也。{.text-danger} | | -| F031-16 | DROP VIEW语句:RESTRICT子句 | 非也。{.text-danger} | | -| F031-19 | REVOKE语句:RESTRICT子句 | 非也。{.text-danger} | | +| F031-01 | CREATE TABLE语句创建持久基表 | 部分 {.text-warning} | 非也。 `SYSTEM VERSIONING`, `ON COMMIT`, `GLOBAL`, `LOCAL`, `PRESERVE`, `DELETE`, `REF IS`, 
`WITH OPTIONS`, `UNDER`, `LIKE`, `PERIOD FOR` 子句,不支持用户解析的数据类型 | +| F031-02 | 创建视图语句 | 部分 {.text-warning} | 非也。 `RECURSIVE`, `CHECK`, `UNDER`, `WITH OPTIONS` 子句,不支持用户解析的数据类型 | +| F031-03 | 赠款声明 | 是 {.text-success} | | +| F031-04 | ALTER TABLE语句:ADD COLUMN子句 | 部分 {.text-warning} | 不支持 `GENERATED` 条款和系统时间段 | +| F031-13 | DROP TABLE语句:RESTRICT子句 | 非也。 {.text-danger} | | +| F031-16 | DROP VIEW语句:RESTRICT子句 | 非也。 {.text-danger} | | +| F031-19 | REVOKE语句:RESTRICT子句 | 非也。 {.text-danger} | | | **F041** | **基本连接表** | **部分**{.text-warning} | | -| F041-01 | Inner join(但不一定是INNER关键字) | 是{.text-success} | | -| F041-02 | 内部关键字 | 是{.text-success} | | -| F041-03 | LEFT OUTER JOIN | 是{.text-success} | | -| F041-04 | RIGHT OUTER JOIN | 是{.text-success} | | -| F041-05 | 可以嵌套外部连接 | 是{.text-success} | | -| F041-07 | 左侧或右侧外部联接中的内部表也可用于内部联接 | 是{.text-success} | | -| F041-08 | 支持所有比较运算符(而不仅仅是=) | 非也。{.text-danger} | | +| F041-01 | Inner join(但不一定是INNER关键字) | 是 {.text-success} | | +| F041-02 | 内部关键字 | 是 {.text-success} | | +| F041-03 | LEFT OUTER JOIN | 是 {.text-success} | | +| F041-04 | RIGHT OUTER JOIN | 是 {.text-success} | | +| F041-05 | 可以嵌套外部连接 | 是 {.text-success} | | +| F041-07 | 左侧或右侧外部联接中的内部表也可用于内部联接 | 是 {.text-success} | | +| F041-08 | 支持所有比较运算符(而不仅仅是=) | 非也。 {.text-danger} | | | **F051** | **基本日期和时间** | **部分**{.text-warning} | | -| F051-01 | 日期数据类型(包括对日期文字的支持) | 部分{.text-warning} | 没有文字 | -| F051-02 | 时间数据类型(包括对时间文字的支持),秒小数精度至少为0 | 非也。{.text-danger} | | -| F051-03 | 时间戳数据类型(包括对时间戳文字的支持),小数秒精度至少为0和6 | 非也。{.text-danger} | `DateTime64` 时间提供了类似的功能 | -| F051-04 | 日期、时间和时间戳数据类型的比较谓词 | 部分{.text-warning} | 只有一种数据类型可用 | -| F051-05 | Datetime类型和字符串类型之间的显式转换 | 是{.text-success} | | -| F051-06 | CURRENT_DATE | 非也。{.text-danger} | `today()` 是相似的 | -| F051-07 | LOCALTIME | 非也。{.text-danger} | `now()` 是相似的 | -| F051-08 | LOCALTIMESTAMP | 非也。{.text-danger} | | +| F051-01 | 日期数据类型(包括对日期文字的支持) | 部分 {.text-warning} | 没有文字 | +| F051-02 | 时间数据类型(包括对时间文字的支持),秒小数精度至少为0 | 非也。 {.text-danger} | | +| F051-03 | 时间戳数据类型(包括对时间戳文字的支持),小数秒精度至少为0和6 | 非也。 {.text-danger} | `DateTime64` 时间提供了类似的功能 | +| F051-04 | 日期、时间和时间戳数据类型的比较谓词 | 部分 {.text-warning} | 只有一种数据类型可用 | +| F051-05 | Datetime类型和字符串类型之间的显式转换 | 是 {.text-success} | | +| F051-06 | CURRENT_DATE | 非也。 {.text-danger} | `today()` 是相似的 | +| F051-07 | LOCALTIME | 非也。 {.text-danger} | `now()` 是相似的 | +| F051-08 | LOCALTIMESTAMP | 非也。 {.text-danger} | | | **F081** | **联盟和视图除外** | **部分**{.text-warning} | | | **F131** | **分组操作** | **部分**{.text-warning} | | -| F131-01 | WHERE、GROUP BY和HAVING子句在具有分组视图的查询中受支持 | 是{.text-success} | | -| F131-02 | 具有分组视图的查询中支持的多个表 | 是{.text-success} | | -| F131-03 | 设置具有分组视图的查询中支持的函数 | 是{.text-success} | | -| F131-04 | 具有分组依据和具有子句和分组视图的子查询 | 是{.text-success} | | -| F131-05 | 单行选择具有GROUP BY和具有子句和分组视图 | 非也。{.text-danger} | | +| F131-01 | WHERE、GROUP BY和HAVING子句在具有分组视图的查询中受支持 | 是 {.text-success} | | +| F131-02 | 具有分组视图的查询中支持的多个表 | 是 {.text-success} | | +| F131-03 | 设置具有分组视图的查询中支持的函数 | 是 {.text-success} | | +| F131-04 | 具有分组依据和具有子句和分组视图的子查询 | 是 {.text-success} | | +| F131-05 | 单行选择具有GROUP BY和具有子句和分组视图 | 非也。 {.text-danger} | | | **F181** | **多模块支持** | **非也。**{.text-danger} | | | **F201** | **投函数** | **是**{.text-success} | | | **F221** | **显式默认值** | **非也。**{.text-danger} | | | **F261** | **案例表达式** | **是**{.text-success} | | -| F261-01 | 简单案例 | 是{.text-success} | | -| F261-02 | 检索案例 | 是{.text-success} | | -| F261-03 | NULLIF | 是{.text-success} | | -| F261-04 | COALESCE | 是{.text-success} | | +| F261-01 | 简单案例 | 是 {.text-success} | | +| F261-02 | 检索案例 | 是 {.text-success} | | 
+| F261-03 | NULLIF | 是 {.text-success} | |
+| F261-04 | COALESCE | 是 {.text-success} | |
| **F311** | **架构定义语句** | **部分**{.text-warning} | |
-| F311-01 | CREATE SCHEMA | 非也。{.text-danger} | |
-| F311-02 | 为持久基表创建表 | 是{.text-success} | |
-| F311-03 | CREATE VIEW | 是{.text-success} | |
-| F311-04 | CREATE VIEW: WITH CHECK OPTION | 非也。{.text-danger} | |
-| F311-05 | 赠款声明 | 是{.text-success} | |
+| F311-01 | CREATE SCHEMA | 非也。 {.text-danger} | |
+| F311-02 | 为持久基表创建表 | 是 {.text-success} | |
+| F311-03 | CREATE VIEW | 是 {.text-success} | |
+| F311-04 | CREATE VIEW: WITH CHECK OPTION | 非也。 {.text-danger} | |
+| F311-05 | 赠款声明 | 是 {.text-success} | |
| **F471** | **标量子查询值** | **是**{.text-success} | |
| **F481** | **扩展空谓词** | **是**{.text-success} | |
| **F812** | **基本标记** | **非也。**{.text-danger} | |
| **T321** | **基本的SQL调用例程** | **非也。**{.text-danger} | |
-| T321-01 | 无重载的用户定义函数 | 非也。{.text-danger} | |
-| T321-02 | 无重载的用户定义存储过程 | 非也。{.text-danger} | |
-| T321-03 | 函数调用 | 非也。{.text-danger} | |
-| T321-04 | 电话声明 | 非也。{.text-danger} | |
-| T321-05 | 退货声明 | 非也。{.text-danger} | |
+| T321-01 | 无重载的用户定义函数 | 非也。 {.text-danger} | |
+| T321-02 | 无重载的用户定义存储过程 | 非也。 {.text-danger} | |
+| T321-03 | 函数调用 | 非也。 {.text-danger} | |
+| T321-04 | 电话声明 | 非也。 {.text-danger} | |
+| T321-05 | 退货声明 | 非也。 {.text-danger} | |
| **T631** | **在一个列表元素的谓词中** | **是**{.text-success} | |
diff --git a/docs/zh/sql-reference/functions/date-time-functions.md b/docs/zh/sql-reference/functions/date-time-functions.md
index 65d331a7846..00dab5ee680 100644
--- a/docs/zh/sql-reference/functions/date-time-functions.md
+++ b/docs/zh/sql-reference/functions/date-time-functions.md
@@ -20,7 +20,37 @@ SELECT

 ## toTimeZone {#totimezone}

-将Date或DateTime转换为指定的时区。
+将Date或DateTime转换为指定的时区。 时区是Date/DateTime类型的属性。 表字段或结果集的列的内部值(秒数)不会更改,列的类型会更改,并且其字符串表示形式也会相应更改。
+
+```sql
+SELECT
+    toDateTime('2019-01-01 00:00:00', 'UTC') AS time_utc,
+    toTypeName(time_utc) AS type_utc,
+    toInt32(time_utc) AS int32utc,
+    toTimeZone(time_utc, 'Asia/Yekaterinburg') AS time_yekat,
+    toTypeName(time_yekat) AS type_yekat,
+    toInt32(time_yekat) AS int32yekat,
+    toTimeZone(time_utc, 'US/Samoa') AS time_samoa,
+    toTypeName(time_samoa) AS type_samoa,
+    toInt32(time_samoa) AS int32samoa
+FORMAT Vertical;
+```
+
+```text
+Row 1:
+──────
+time_utc: 2019-01-01 00:00:00
+type_utc: DateTime('UTC')
+int32utc: 1546300800
+time_yekat: 2019-01-01 05:00:00
+type_yekat: DateTime('Asia/Yekaterinburg')
+int32yekat: 1546300800
+time_samoa: 2018-12-31 13:00:00
+type_samoa: DateTime('US/Samoa')
+int32samoa: 1546300800
+```
+
+`toTimeZone(time_utc, 'Asia/Yekaterinburg')` 把 `DateTime('UTC')` 类型转换为 `DateTime('Asia/Yekaterinburg')`. 内部值 (Unix 时间戳) 1546300800 保持不变, 但是字符串表示(toString() 函数的结果值) 由 `time_utc: 2019-01-01 00:00:00` 转换为 `time_yekat: 2019-01-01 05:00:00`.
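+
+基于同样的行为还可以验证:由于内部秒数保持不变,同一时刻在不同时区下的两个 `DateTime` 值进行比较时相等(最小示意,字面量仅供演示):
+
+```sql
+SELECT toTimeZone(toDateTime('2019-01-01 00:00:00', 'UTC'), 'Asia/Yekaterinburg')
+     = toDateTime('2019-01-01 00:00:00', 'UTC') AS equal; -- 比较基于内部Unix时间戳,返回 1
+```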
## toYear {#toyear} @@ -34,15 +64,15 @@ SELECT 将Date或DateTime转换为包含月份编号(1-12)的UInt8类型的数字。 -## 今天一年 {#todayofyear} +## toDayOfYear {#todayofyear} 将Date或DateTime转换为包含一年中的某一天的编号的UInt16(1-366)类型的数字。 -## 今天月 {#todayofmonth} +## toDayOfMonth {#todayofmonth} 将Date或DateTime转换为包含一月中的某一天的编号的UInt8(1-31)类型的数字。 -## 今天一周 {#todayofweek} +## toDayOfWeek {#todayofweek} 将Date或DateTime转换为包含一周中的某一天的编号的UInt8(周一是1, 周日是7)类型的数字。 @@ -55,31 +85,61 @@ SELECT 将DateTime转换为包含一小时中分钟数(0-59)的UInt8数字。 -## 秒 {#tosecond} +## toSecond {#tosecond} 将DateTime转换为包含一分钟中秒数(0-59)的UInt8数字。 闰秒不计算在内。 -## toUnixTimestamp {#tounixtimestamp} +## toUnixTimestamp {#to-unix-timestamp} -将DateTime转换为unix时间戳。 +对于DateTime参数:将值转换为UInt32类型的数字-Unix时间戳(https://en.wikipedia.org/wiki/Unix_time)。 +对于String参数:根据时区将输入字符串转换为日期时间(可选的第二个参数,默认使用服务器时区),并返回相应的unix时间戳。 -## 开始一年 {#tostartofyear} +**语法** + +``` sql +toUnixTimestamp(datetime) +toUnixTimestamp(str, [timezone]) +``` + +**返回值** + +- 返回 unix timestamp. + +类型: `UInt32`. + +**示例** + +查询: + +``` sql +SELECT toUnixTimestamp('2017-11-05 08:07:47', 'Asia/Tokyo') AS unix_timestamp +``` + +结果: + +``` text +┌─unix_timestamp─┐ +│ 1509836867 │ +└────────────────┘ +``` + +## toStartOfYear {#tostartofyear} 将Date或DateTime向前取整到本年的第一天。 返回Date类型。 -## 今年开始 {#tostartofisoyear} +## toStartOfISOYear {#tostartofisoyear} 将Date或DateTime向前取整到ISO本年的第一天。 返回Date类型。 -## 四分之一开始 {#tostartofquarter} +## toStartOfQuarter {#tostartofquarter} 将Date或DateTime向前取整到本季度的第一天。 返回Date类型。 -## 到月份开始 {#tostartofmonth} +## toStartOfMonth {#tostartofmonth} 将Date或DateTime向前取整到本月的第一天。 返回Date类型。 @@ -92,27 +152,90 @@ SELECT 将Date或DateTime向前取整到本周的星期一。 返回Date类型。 -## 今天开始 {#tostartofday} +## toStartOfWeek(t\[,mode\]) {#tostartofweek} -将DateTime向前取整到当日的开始。 +按mode将Date或DateTime向前取整到最近的星期日或星期一。 +返回Date类型。 +mode参数的工作方式与toWeek()的mode参数完全相同。 对于单参数语法,mode使用默认值0。 -## 开始一小时 {#tostartofhour} +## toStartOfDay {#tostartofday} + +将DateTime向前取整到今天的开始。 + +## toStartOfHour {#tostartofhour} 将DateTime向前取整到当前小时的开始。 -## to startofminute {#tostartofminute} +## toStartOfMinute {#tostartofminute} 将DateTime向前取整到当前分钟的开始。 -## to startoffiveminute {#tostartoffiveminute} +## toStartOfSecond {#tostartofsecond} + +将DateTime向前取整到当前秒数的开始。 + +**语法** + +``` sql +toStartOfSecond(value[, timezone]) +``` + +**参数** + +- `value` — 时间和日期[DateTime64](../../sql-reference/data-types/datetime64.md). +- `timezone` — 返回值的[Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) (可选参数)。 如果未指定将使用 `value` 参数的时区。 [String](../../sql-reference/data-types/string.md)。 + +**返回值** + +- 输入值毫秒部分为零。 + +类型: [DateTime64](../../sql-reference/data-types/datetime64.md). 
+ +**示例** + +不指定时区查询: + +``` sql +WITH toDateTime64('2020-01-01 10:20:30.999', 3) AS dt64 +SELECT toStartOfSecond(dt64); +``` + +结果: + +``` text +┌───toStartOfSecond(dt64)─┐ +│ 2020-01-01 10:20:30.000 │ +└─────────────────────────┘ +``` + +指定时区查询: + +``` sql +WITH toDateTime64('2020-01-01 10:20:30.999', 3) AS dt64 +SELECT toStartOfSecond(dt64, 'Europe/Moscow'); +``` + +结果: + +``` text +┌─toStartOfSecond(dt64, 'Europe/Moscow')─┐ +│ 2020-01-01 13:20:30.000 │ +└────────────────────────────────────────┘ +``` + +**参考** + +- [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) 服务器配置选项。 + +## toStartOfFiveMinute {#tostartoffiveminute} 将DateTime以五分钟为单位向前取整到最接近的时间点。 -## 开始分钟 {#tostartoftenminutes} +## toStartOfTenMinutes {#tostartoftenminutes} 将DateTime以十分钟为单位向前取整到最接近的时间点。 -## 开始几分钟 {#tostartoffifteenminutes} +## toStartOfFifteenMinutes {#tostartoffifteenminutes} 将DateTime以十五分钟为单位向前取整到最接近的时间点。 @@ -168,31 +291,214 @@ SELECT 将Date或DateTime转换为包含ISO周数的UInt8类型的编号。 -## 现在 {#now} +## toWeek(date\[,mode\]) {#toweekdatemode} -不接受任何参数并在请求执行时的某一刻返回当前时间(DateTime)。 -此函数返回一个常量,即时请求需要很长时间能够完成。 +返回Date或DateTime的周数。两个参数形式可以指定星期是从星期日还是星期一开始,以及返回值应在0到53还是从1到53的范围内。如果省略了mode参数,则默认 模式为0。 +`toISOWeek()`是一个兼容函数,等效于`toWeek(date,3)`。 +下表描述了mode参数的工作方式。 -## 今天 {#today} +| Mode | First day of week | Range | Week 1 is the first week … | +|------|-------------------|-------|-------------------------------| +| 0 | Sunday | 0-53 | with a Sunday in this year | +| 1 | Monday | 0-53 | with 4 or more days this year | +| 2 | Sunday | 1-53 | with a Sunday in this year | +| 3 | Monday | 1-53 | with 4 or more days this year | +| 4 | Sunday | 0-53 | with 4 or more days this year | +| 5 | Monday | 0-53 | with a Monday in this year | +| 6 | Sunday | 1-53 | with 4 or more days this year | +| 7 | Monday | 1-53 | with a Monday in this year | +| 8 | Sunday | 1-53 | contains January 1 | +| 9 | Monday | 1-53 | contains January 1 | + +对于象“with 4 or more days this year,”的mode值,根据ISO 8601:1988对周进行编号: + +- 如果包含1月1日的一周在后一年度中有4天或更多天,则为第1周。 + +- 否则,它是上一年的最后一周,下周是第1周。 + +对于像“contains January 1”的mode值, 包含1月1日的那周为本年度的第1周。 + +``` sql +toWeek(date, [, mode][, Timezone]) +``` + +**参数** + +- `date` – Date 或 DateTime. +- `mode` – 可选参数, 取值范围 \[0,9\], 默认0。 +- `Timezone` – 可选参数, 可其他时间日期转换参数的行为一致。 + +**示例** + +``` sql +SELECT toDate('2016-12-27') AS date, toWeek(date) AS week0, toWeek(date,1) AS week1, toWeek(date,9) AS week9; +``` + +``` text +┌───────date─┬─week0─┬─week1─┬─week9─┐ +│ 2016-12-27 │ 52 │ 52 │ 1 │ +└────────────┴───────┴───────┴───────┘ +``` + +## toYearWeek(date\[,mode\]) {#toyearweekdatemode} + +返回Date的年和周。 结果中的年份可能因为Date为该年份的第一周和最后一周而于Date的年份不同。 + +mode参数的工作方式与toWeek()的mode参数完全相同。 对于单参数语法,mode使用默认值0。 + +`toISOYear()`是一个兼容函数,等效于`intDiv(toYearWeek(date,3),100)`. + +**示例** + +``` sql +SELECT toDate('2016-12-27') AS date, toYearWeek(date) AS yearWeek0, toYearWeek(date,1) AS yearWeek1, toYearWeek(date,9) AS yearWeek9; +``` + +``` text +┌───────date─┬─yearWeek0─┬─yearWeek1─┬─yearWeek9─┐ +│ 2016-12-27 │ 201652 │ 201652 │ 201701 │ +└────────────┴───────────┴───────────┴───────────┘ +``` + +## date_trunc {#date_trunc} + +将Date或DateTime按指定的单位向前取整到最接近的时间点。 + +**语法** + +``` sql +date_trunc(unit, value[, timezone]) +``` + +别名: `dateTrunc`. + +**参数** + +- `unit` — 单位. [String](../syntax.md#syntax-string-literal). 
+ 可选值: + + - `second` + - `minute` + - `hour` + - `day` + - `week` + - `month` + - `quarter` + - `year` + +- `value` — [DateTime](../../sql-reference/data-types/datetime.md) 或者 [DateTime64](../../sql-reference/data-types/datetime64.md). +- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) 返回值的时区(可选值)。如果未指定将使用`value`的时区。 [String](../../sql-reference/data-types/string.md). + +**返回值** + +- 按指定的单位向前取整后的DateTime。 + +类型: [Datetime](../../sql-reference/data-types/datetime.md). + +**示例** + +不指定时区查询: + +``` sql +SELECT now(), date_trunc('hour', now()); +``` + +结果: + +``` text +┌───────────────now()─┬─date_trunc('hour', now())─┐ +│ 2020-09-28 10:40:45 │ 2020-09-28 10:00:00 │ +└─────────────────────┴───────────────────────────┘ +``` + +指定时区查询: + +```sql +SELECT now(), date_trunc('hour', now(), 'Europe/Moscow'); +``` + +结果: + +```text +┌───────────────now()─┬─date_trunc('hour', now(), 'Europe/Moscow')─┐ +│ 2020-09-28 10:46:26 │ 2020-09-28 13:00:00 │ +└─────────────────────┴────────────────────────────────────────────┘ +``` + +**参考** + +- [toStartOfInterval](#tostartofintervaltime-or-data-interval-x-unit-time-zone) + +# now {#now} + +返回当前日期和时间。 + +**语法** + +``` sql +now([timezone]) +``` + +**参数** + +- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) 返回结果的时区(可先参数). [String](../../sql-reference/data-types/string.md). + +**返回值** + +- 当前日期和时间。 + +类型: [Datetime](../../sql-reference/data-types/datetime.md). + +**示例** + +不指定时区查询: + +``` sql +SELECT now(); +``` + +结果: + +``` text +┌───────────────now()─┐ +│ 2020-10-17 07:42:09 │ +└─────────────────────┘ +``` + +指定时区查询: + +``` sql +SELECT now('Europe/Moscow'); +``` + +结果: + +``` text +┌─now('Europe/Moscow')─┐ +│ 2020-10-17 10:42:23 │ +└──────────────────────┘ +``` + +## today {#today} 不接受任何参数并在请求执行时的某一刻返回当前日期(Date)。 -其功能与’toDate(now())’相同。 +其功能与’toDate(now())’相同。 -## 昨天 {#yesterday} +## yesterday {#yesterday} 不接受任何参数并在请求执行时的某一刻返回昨天的日期(Date)。 -其功能与’today() - 1’相同。 +其功能与’today() - 1’相同。 -## 时隙 {#timeslot} +## timeSlot {#timeslot} 将时间向前取整半小时。 此功能用于Yandex.Metrica,因为如果跟踪标记显示单个用户的连续综合浏览量在时间上严格超过此数量,则半小时是将会话分成两个会话的最短时间。这意味着(tag id,user id,time slot)可用于搜索相应会话中包含的综合浏览量。 -## toyyymm {#toyyyymm} +## toYYYMM {#toyyyymm} 将Date或DateTime转换为包含年份和月份编号的UInt32类型的数字(YYYY \* 100 + MM)。 -## toyyymmdd {#toyyyymmdd} +## toYYYMMDD {#toyyyymmdd} 将Date或DateTime转换为包含年份和月份编号的UInt32类型的数字(YYYY \* 10000 + MM \* 100 + DD)。 @@ -200,7 +506,7 @@ SELECT 将Date或DateTime转换为包含年份和月份编号的UInt64类型的数字(YYYY \* 10000000000 + MM \* 100000000 + DD \* 1000000 + hh \* 10000 + mm \* 100 + ss)。 -## 隆隆隆隆路虏脢,,陇,貌,垄拢卢虏禄quar陇,貌路,隆拢脳枚脢虏,麓脢,脱,,,录,禄庐戮,utes, {#addyears-addmonths-addweeks-adddays-addhours-addminutes-addseconds-addquarters} +## addYears, addMonths, addWeeks, addDays, addHours, addMinutes, addSeconds, addQuarters {#addyears-addmonths-addweeks-adddays-addhours-addminutes-addseconds-addquarters} 函数将一段时间间隔添加到Date/DateTime,然后返回Date/DateTime。例如: @@ -234,59 +540,145 @@ SELECT │ 2018-01-01 │ 2018-01-01 00:00:00 │ └──────────────────────────┴───────────────────────────────┘ -## dateDiff(‘unit’,t1,t2,\[时区\]) {#datediffunit-t1-t2-timezone} +## dateDiff {#datediff} -返回以’unit’为单位表示的两个时间之间的差异,例如`'hours'`。 ‘t1’和’t2’可以是Date或DateTime,如果指定’timezone’,它将应用于两个参数。如果不是,则使用来自数据类型’t1’和’t2’的时区。如果时区不相同,则结果将是未定义的。 +返回两个Date或DateTime类型之间的时差。 -支持的单位值: +**语法** -| 单位 | -|------| -| 第二 | -| 分钟 | -| 小时 | -| 日 | -| 周 | -| 月 | -| 季 | -| 年 | +``` sql +dateDiff('unit', startdate, 
 
-## toyyymm {#toyyyymm}
+## toYYYYMM {#toyyyymm}
 
 将Date或DateTime转换为包含年份和月份编号的UInt32类型的数字(YYYY \* 100 + MM)。
 
-## toyyymmdd {#toyyyymmdd}
+## toYYYYMMDD {#toyyyymmdd}
 
 将Date或DateTime转换为包含年份和月份编号的UInt32类型的数字(YYYY \* 10000 + MM \* 100 + DD)。
 
@@ -200,7 +506,7 @@ SELECT
 
 将Date或DateTime转换为包含年份和月份编号的UInt64类型的数字(YYYY \* 10000000000 + MM \* 100000000 + DD \* 1000000 + hh \* 10000 + mm \* 100 + ss)。
 
-## 隆隆隆隆路虏脢,,陇,貌,垄拢卢虏禄quar陇,貌路,隆拢脳枚脢虏,麓脢,脱,,,录,禄庐戮,utes, {#addyears-addmonths-addweeks-adddays-addhours-addminutes-addseconds-addquarters}
+## addYears, addMonths, addWeeks, addDays, addHours, addMinutes, addSeconds, addQuarters {#addyears-addmonths-addweeks-adddays-addhours-addminutes-addseconds-addquarters}
 
 函数将一段时间间隔添加到Date/DateTime,然后返回Date/DateTime。例如:
 
@@ -234,59 +540,145 @@ SELECT
 │ 2018-01-01 │ 2018-01-01 00:00:00 │
 └──────────────────────────┴───────────────────────────────┘
 
-## dateDiff(‘unit’,t1,t2,\[时区\]) {#datediffunit-t1-t2-timezone}
+## dateDiff {#datediff}
 
-返回以’unit’为单位表示的两个时间之间的差异,例如`'hours'`。 ‘t1’和’t2’可以是Date或DateTime,如果指定’timezone’,它将应用于两个参数。如果不是,则使用来自数据类型’t1’和’t2’的时区。如果时区不相同,则结果将是未定义的。
+返回两个Date或DateTime类型值之间的时差。
 
-支持的单位值:
+**语法**
 
-| 单位 |
-|------|
-| 第二 |
-| 分钟 |
-| 小时 |
-| 日   |
-| 周   |
-| 月   |
-| 季   |
-| 年   |
+``` sql
+dateDiff('unit', startdate, enddate, [timezone])
+```
+
-## 时隙(开始时间,持续时间,\[,大小\]) {#timeslotsstarttime-duration-size}
+**参数**
+
+- `unit` — 返回结果的时间单位。 [String](../../sql-reference/syntax.md#syntax-string-literal).
+
+    支持的时间单位: second, minute, hour, day, week, month, quarter, year.
+
+- `startdate` — 第一个待比较值。 [Date](../../sql-reference/data-types/date.md) 或 [DateTime](../../sql-reference/data-types/datetime.md).
+
+- `enddate` — 第二个待比较值。 [Date](../../sql-reference/data-types/date.md) 或 [DateTime](../../sql-reference/data-types/datetime.md).
+
+- `timezone` — 可选参数。 如果指定了,则同时适用于`startdate`和`enddate`。如果不指定,则使用`startdate`和`enddate`各自的时区。如果两者的时区不一致,则结果是未定义的。
+
+**返回值**
+
+以`unit`为单位的`startdate`和`enddate`之间的时差。
+
+类型: `int`.
+
+**示例**
+
+查询:
+
+``` sql
+SELECT dateDiff('hour', toDateTime('2018-01-01 22:00:00'), toDateTime('2018-01-02 23:00:00'));
+```
+
+结果:
+
+``` text
+┌─dateDiff('hour', toDateTime('2018-01-01 22:00:00'), toDateTime('2018-01-02 23:00:00'))─┐
+│                                                                                      25 │
+└─────────────────────────────────────────────────────────────────────────────────────────┘
+```
+
+## timeSlots(StartTime, Duration\[, Size\]) {#timeslotsstarttime-duration-size}
 
 它返回一个时间数组,其中包括从从«StartTime»开始到«StartTime + Duration 秒»内的所有符合«size»(以秒为单位)步长的时间点。其中«size»是一个可选参数,默认为1800。
 例如,`timeSlots(toDateTime('2012-01-01 12:20:00'),600) = [toDateTime('2012-01-01 12:00:00'),toDateTime('2012-01-01 12:30:00' )]`。
 这对于搜索在相应会话中综合浏览量是非常有用的。
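+
+把上面的内联例子写成可执行查询(这里用 `toUInt32(600)` 显式地把Duration指定为UInt32类型的秒数;结果仅作示意):
+
+``` sql
+SELECT timeSlots(toDateTime('2012-01-01 12:20:00'), toUInt32(600)) AS slots;
+```
+
+``` text
+┌─slots─────────────────────────────────────────┐
+│ ['2012-01-01 12:00:00','2012-01-01 12:30:00'] │
+└───────────────────────────────────────────────┘
+```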
 
-## formatDateTime(时间,格式\[,时区\]) {#formatdatetimetime-format-timezone}
+## formatDateTime {#formatdatetime}
 
 函数根据给定的格式字符串来格式化时间。请注意:格式字符串必须是常量表达式,例如:单个结果列不能有多种格式字符串。
 
-支持的格式修饰符:
-(«Example» 列是对`2018-01-02 22:33:44`的格式化结果)
+**语法**
 
-| 修饰符 | 产品描述 | 示例 |
-|--------|-------------------------------------------|------------|
-| %C | 年除以100并截断为整数(00-99) | 20 |
-| %d | 月中的一天,零填充(01-31) | 02 |
-| %D | 短MM/DD/YY日期,相当于%m/%d/%y | 01/02/2018 |
-| %e | 月中的一天,空格填充(1-31) | 2 |
-| %F | 短YYYY-MM-DD日期,相当于%Y-%m-%d | 2018-01-02 |
-| %H | 24小时格式(00-23) | 22 |
-| %I | 小时12h格式(01-12) | 10 |
-| %j | 一年(001-366) | 002 |
-| %m | 月份为十进制数(01-12) | 01 |
-| %M | 分钟(00-59) | 33 |
-| %n | 换行符(") | |
-| %p | AM或PM指定 | PM |
-| %R | 24小时HH:MM时间,相当于%H:%M | 22:33 |
-| %S | 第二(00-59) | 44 |
-| %t | 水平制表符(’) | |
-| %T | ISO8601时间格式(HH:MM:SS),相当于%H:%M:%S | 22:33:44 |
-| %u | ISO8601平日as编号,星期一为1(1-7) | 2 |
-| %V | ISO8601周编号(01-53) | 01 |
-| %w | 周日为十进制数,周日为0(0-6) | 2 |
-| %y | 年份,最后两位数字(00-99) | 18 |
-| %Y | 年 | 2018 |
-| %% | %符号 | % |
+``` sql
+formatDateTime(Time, Format\[, Timezone\])
+```
 
-[来源文章](https://clickhouse.tech/docs/en/query_language/functions/date_time_functions/)
+**返回值**
+
+根据指定格式返回的日期和时间。
+
+**支持的格式修饰符**
+
+使用格式修饰符来指定结果字符串的样式。«Example» 列是对`2018-01-02 22:33:44`的格式化结果。
+
+| 修饰符 | 描述 | 示例 |
+|--------|------------------------------------------------------------------------------------------------------------------|------------|
+| %C | 年除以100并截断为整数(00-99) | 20 |
+| %d | 月中的一天,零填充(01-31) | 02 |
+| %D | 短MM/DD/YY日期,相当于%m/%d/%y | 01/02/2018 |
+| %e | 月中的一天,空格填充(1-31) | 2 |
+| %F | 短YYYY-MM-DD日期,相当于%Y-%m-%d | 2018-01-02 |
+| %G | ISO周编号对应的四位数年份,根据[ISO 8601](https://en.wikipedia.org/wiki/ISO_8601#Week_dates)定义的基于周的年份计算得出,通常仅与%V一起使用 | 2018 |
+| %g | 两位数的年份格式,与ISO 8601一致,是四位数表示法的缩写 | 18 |
+| %H | 24小时格式(00-23) | 22 |
+| %I | 12小时格式(01-12) | 10 |
+| %j | 一年中的一天(001-366) | 002 |
+| %m | 月份为十进制数(01-12) | 01 |
+| %M | 分钟(00-59) | 33 |
+| %n | 换行符 | |
+| %p | AM或PM标识 | PM |
+| %R | 24小时HH:MM时间,相当于%H:%M | 22:33 |
+| %S | 秒(00-59) | 44 |
+| %t | 水平制表符 | |
+| %T | ISO8601时间格式(HH:MM:SS),相当于%H:%M:%S | 22:33:44 |
+| %u | ISO8601星期几编号,星期一为1(1-7) | 2 |
+| %V | ISO8601周编号(01-53) | 01 |
+| %w | 星期几为十进制数,周日为0(0-6) | 2 |
+| %y | 年份,最后两位数字(00-99) | 18 |
+| %Y | 年 | 2018 |
+| %% | %符号 | % |
+
+**示例**
+
+查询:
+
+``` sql
+SELECT formatDateTime(toDate('2010-01-04'), '%g')
+```
+
+结果:
+
+```
+┌─formatDateTime(toDate('2010-01-04'), '%g')─┐
+│ 10                                         │
+└────────────────────────────────────────────┘
+```
+
+[Original article](https://clickhouse.tech/docs/en/query_language/functions/date_time_functions/)
+
+## FROM_UNIXTIME
+
+当只有单个整数类型的参数时,它的作用与`toDateTime`相同,并返回[DateTime](../../sql-reference/data-types/datetime.md)类型。
+
+例如:
+
+```sql
+SELECT FROM_UNIXTIME(423543535)
+```
+
+```text
+┌─FROM_UNIXTIME(423543535)─┐
+│      1983-06-04 10:58:55 │
+└──────────────────────────┘
+```
+
+当有两个参数时,第一个是整型或DateTime类型,第二个是常量格式字符串,它的作用与`formatDateTime`相同,并返回`String`类型。
+
+例如:
+
+```sql
+SELECT FROM_UNIXTIME(1234334543, '%Y-%m-%d %R:%S') AS DateTime
+```
+
+```text
+┌─DateTime────────────┐
+│ 2009-02-11 14:42:23 │
+└─────────────────────┘
+```
diff --git a/docs/zh/sql-reference/functions/string-replace-functions.md b/docs/zh/sql-reference/functions/string-replace-functions.md
index 6e0745ba5b1..5f89bfb828f 100644
--- a/docs/zh/sql-reference/functions/string-replace-functions.md
+++ b/docs/zh/sql-reference/functions/string-replace-functions.md
@@ -1,21 +1,21 @@
 # 字符串替换函数 {#zi-fu-chuan-ti-huan-han-shu}
 
-## replaceOne(大海捞针,模式,更换) {#replaceonehaystack-pattern-replacement}
+## replaceOne(haystack, pattern, replacement) {#replaceonehaystack-pattern-replacement}
 
-用’replacement’子串替换’haystack’中与’pattern’子串第一个匹配的匹配项(如果存在)。
+用’replacement’子串替换’haystack’中第一次出现的’pattern’子串(如果存在)。
 ’pattern’和’replacement’必须是常量。
 
-## replaceAll(大海捞针,模式,替换),替换(大海捞针,模式,替换) {#replaceallhaystack-pattern-replacement-replacehaystack-pattern-replacement}
+## replaceAll(haystack, pattern, replacement), replace(haystack, pattern, replacement) {#replaceallhaystack-pattern-replacement-replacehaystack-pattern-replacement}
 
-用’replacement’子串替换’haystack’中出现的所有’pattern’子串。
+用’replacement’子串替换’haystack’中出现的所有’pattern’子串。
 
-## replaceRegexpOne(大海捞针,模式,更换) {#replaceregexponehaystack-pattern-replacement}
+## replaceRegexpOne(haystack, pattern, replacement) {#replaceregexponehaystack-pattern-replacement}
 
-使用’pattern’正则表达式替换。 ‘pattern’可以是任意一个有效的re2正则表达式。
-如果存在与正则表达式匹配的匹配项,仅替换第一个匹配项。
-同时‘replacement’可以指定为正则表达式中的捕获组。可以包含`\0-\9`。
-在这种情况下,函数将使用正则表达式的整个匹配项替换‘\\0’。使用其他与之对应的子模式替换对应的’\\1-\\9’。要在模版中使用’‘字符,请使用’’将其转义。
-另外还请记住,字符串文字需要额外的转义。
+使用’pattern’正则表达式进行替换。‘pattern’可以是任意一个有效的re2正则表达式。
+如果存在与’pattern’正则表达式匹配的匹配项,仅替换第一个匹配项。
+‘replacement’模板中可以包含替代`\0-\9`。
+替代`\0`对应正则表达式的整个匹配项,替代`\1-\9`对应于相应编号的子模式。要在模板中使用反斜杠`\`,请使用`\\`将其转义。
+另外还请记住,字符串字面值(literal)需要额外的转义。
 
 示例1.将日期转换为美国格式:
 
@@ -46,7 +46,7 @@ SELECT replaceRegexpOne('Hello, World!', '.*', '\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0')
 │ Hello, World!Hello, World!Hello, World!Hello, World!Hello, World!Hello, World!Hello, World!Hello, World!Hello, World!Hello, World! │
 └────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
 
-## replaceRegexpAll(大海捞针,模式,替换) {#replaceregexpallhaystack-pattern-replacement}
+## replaceRegexpAll(haystack, pattern, replacement) {#replaceregexpallhaystack-pattern-replacement}
 
 与replaceRegexpOne相同,但会替换所有出现的匹配项。例如:
 
@@ -58,7 +58,7 @@ SELECT replaceRegexpAll('Hello, World!', '.', '\\0\\0') AS res
 │ HHeelllloo,, WWoorrlldd!! 
│ └────────────────────────────┘ -例外的是,如果使用正则表达式捕获空白子串,则仅会进行一次替换。 +作为例外,对于空子字符串,正则表达式只会进行一次替换。 示例: ``` sql @@ -72,8 +72,9 @@ SELECT replaceRegexpAll('Hello, World!', '^', 'here: ') AS res ## regexpQuoteMeta(s) {#regexpquotemetas} 该函数用于在字符串中的某些预定义字符之前添加反斜杠。 -预定义字符:‘0’,‘\\’,‘\|’,‘(’,‘)’,‘^’,‘$’,‘。’,‘\[’,‘\]’,‘?’,‘\*’,‘+’,‘{’,‘:’,’ - ’。 -这个实现与re2 :: RE2 :: QuoteMeta略有不同。它以\\0而不是00转义零字节,它只转义所需的字符。 -有关详细信息,请参阅链接:\[RE2\](https://github.com/google/re2/blob/master/re2/re2.cc#L473) +预定义字符:`\0`, `\\`, `|`, `(`, `)`, `^`, `$`, `.`, `[`, `]`, `?`, `*`, `+`, `{`, `:`, `-`。 +这个实现与re2::RE2::QuoteMeta略有不同。它以`\0` 转义零字节,而不是`\x00`,并且只转义必需的字符。 +有关详细信息,请参阅链接:[RE2](https://github.com/google/re2/blob/master/re2/re2.cc#L473) [来源文章](https://clickhouse.tech/docs/en/query_language/functions/string_replace_functions/) + diff --git a/docs/zh/sql-reference/statements/misc.md b/docs/zh/sql-reference/statements/misc.md index a736ed2af5b..b4297d1ed4f 100644 --- a/docs/zh/sql-reference/statements/misc.md +++ b/docs/zh/sql-reference/statements/misc.md @@ -41,25 +41,25 @@ CHECK TABLE [db.]name 该 `CHECK TABLE` 查询支持下表引擎: -- [日志](../../engines/table-engines/log-family/log.md) +- [Log](../../engines/table-engines/log-family/log.md) - [TinyLog](../../engines/table-engines/log-family/tinylog.md) - [StripeLog](../../engines/table-engines/log-family/stripelog.md) -- [梅树家族](../../engines/table-engines/mergetree-family/mergetree.md) +- [MergeTree 家族](../../engines/table-engines/mergetree-family/mergetree.md) -使用另一个表引擎对表执行会导致异常。 +对其他不支持的表引擎的表执行会导致异常。 -从发动机 `*Log` 家庭不提供故障自动数据恢复。 使用 `CHECK TABLE` 查询以及时跟踪数据丢失。 +来自 `*Log` 家族的引擎不提供故障自动数据恢复。 使用 `CHECK TABLE` 查询及时跟踪数据丢失。 -为 `MergeTree` 家庭发动机, `CHECK TABLE` 查询显示本地服务器上表的每个单独数据部分的检查状态。 +对于 `MergeTree` 家族引擎, `CHECK TABLE` 查询显示本地服务器上表的每个单独数据部分的检查状态。 **如果数据已损坏** 如果表已损坏,则可以将未损坏的数据复制到另一个表。 要做到这一点: -1. 创建具有与损坏的表相同结构的新表。 要执行此操作,请执行查询 `CREATE TABLE AS `. -2. 设置 [max_threads](../../operations/settings/settings.md#settings-max_threads) 值为1以在单个线程中处理下一个查询。 要执行此操作,请运行查询 `SET max_threads = 1`. +1. 创建一个与损坏的表结构相同的新表。 要做到这一点,请执行查询 `CREATE TABLE AS `. +2. 将 [max_threads](../../operations/settings/settings.md#settings-max_threads) 值设置为1,以在单个线程中处理下一个查询。 要这样做,请运行查询 `SET max_threads = 1`. 3. 执行查询 `INSERT INTO SELECT * FROM `. 此请求将未损坏的数据从损坏的表复制到另一个表。 只有损坏部分之前的数据才会被复制。 -4. 重新启动 `clickhouse-client` 要重置 `max_threads` 价值。 +4. 重新启动 `clickhouse-client` 以重置 `max_threads` 值。 ## DESCRIBE TABLE {#misc-describe-table} @@ -67,57 +67,65 @@ CHECK TABLE [db.]name DESC|DESCRIBE TABLE [db.]table [INTO OUTFILE filename] [FORMAT format] ``` -返回以下内容 `String` 类型列: +返回以下 `String` 类型列: -- `name` — Column name. -- `type`— Column type. -- `default_type` — Clause that is used in [默认表达式](create.md#create-default-values) (`DEFAULT`, `MATERIALIZED` 或 `ALIAS`). 如果未指定默认表达式,则Column包含一个空字符串。 -- `default_expression` — Value specified in the `DEFAULT` 条款 -- `comment_expression` — Comment text. 
+- `name` — 列名。
+- `type` — 列的类型。
+- `default_type` — [默认表达式](create.md#create-default-values) (`DEFAULT`, `MATERIALIZED` 或 `ALIAS`)中使用的子句。 如果没有指定默认表达式,则列包含一个空字符串。
+- `default_expression` — `DEFAULT` 子句中指定的值。
+- `comment_expression` — 注释。
 
-嵌套的数据结构输出 “expanded” 格式。 每列分别显示,名称后面有一个点。
+嵌套数据结构以 “expanded” 格式输出。 每列分别显示,列名后加点号。
 
 ## DETACH {#detach}
 
-删除有关 ‘name’ 表从服务器。 服务器停止了解表的存在。
+从服务器中删除有关 ‘name’ 表的信息。 此后服务器将不再知道该表的存在。
 
 ``` sql
 DETACH TABLE [IF EXISTS] [db.]name [ON CLUSTER cluster]
 ```
 
-这不会删除表的数据或元数据。 在下一次服务器启动时,服务器将读取元数据并再次查找有关表的信息。
-同样,一个 “detached” 表可以使用重新连接 `ATTACH` 查询(系统表除外,它们没有为它们存储元数据)。
-
-没有 `DETACH DATABASE` 查询。
+这不会删除表的数据或元数据。 在下一次服务器启动时,服务器将读取元数据并再次识别该表。
+同样,可以使用 `ATTACH` 查询重新连接一个 “detached” 的表(系统表除外,系统表没有存储元数据)。
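+
+例如,先分离一个表,再重新连接它(`db.my_table` 仅为演示用的表名):
+
+``` sql
+DETACH TABLE IF EXISTS db.my_table;
+ATTACH TABLE db.my_table;
+```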
 
 ## DROP {#drop}
 
-此查询有两种类型: `DROP DATABASE` 和 `DROP TABLE`.
+删除已经存在的实体。如果指定了 `IF EXISTS`,则当实体不存在时不会返回错误。
+
+## DROP DATABASE {#drop-database}
+
+删除 `db` 数据库中的所有表,然后删除 `db` 数据库本身。
+
+语法:
 
 ``` sql
 DROP DATABASE [IF EXISTS] db [ON CLUSTER cluster]
 ```
+## DROP TABLE {#drop-table}
 
-删除内部的所有表 ‘db’ 数据库,然后删除 ‘db’ 数据库本身。
-如果 `IF EXISTS` 如果数据库不存在,则不会返回错误。
+删除表。
+
+语法:
 
 ``` sql
 DROP [TEMPORARY] TABLE [IF EXISTS] [db.]name [ON CLUSTER cluster]
 ```
 
-删除表。
-如果 `IF EXISTS` 如果表不存在或数据库不存在,则不会返回错误。
-
-    DROP DICTIONARY [IF EXISTS] [db.]name
+## DROP DICTIONARY {#drop-dictionary}
 
 删除字典。
-如果 `IF EXISTS` 如果表不存在或数据库不存在,则不会返回错误。
+
+语法:
+
+``` sql
+DROP DICTIONARY [IF EXISTS] [db.]name
+```
 
 ## DROP USER {#drop-user-statement}
 
 删除用户。
 
-### 语法 {#drop-user-syntax}
+语法:
 
 ``` sql
 DROP USER [IF EXISTS] name [,...] [ON CLUSTER cluster_name]
@@ -129,7 +137,7 @@ DROP USER [IF EXISTS] name [,...] [ON CLUSTER cluster_name]
 
 已删除的角色将从授予该角色的所有实体撤销。
 
-### 语法 {#drop-role-syntax}
+语法:
 
 ``` sql
 DROP ROLE [IF EXISTS] name [,...] [ON CLUSTER cluster_name]
@@ -141,7 +149,7 @@ DROP ROLE [IF EXISTS] name [,...] [ON CLUSTER cluster_name]
 
 已删除行策略将从分配该策略的所有实体撤销。
 
-### 语法 {#drop-row-policy-syntax}
+语法:
 
 ``` sql
 DROP [ROW] POLICY [IF EXISTS] name [,...] ON [database.]table [,...] [ON CLUSTER cluster_name]
@@ -153,7 +161,7 @@ DROP [ROW] POLICY [IF EXISTS] name [,...] ON [database.]table [,...] [ON CLUSTER
 
 已删除的配额将从分配该配额的所有实体撤销。
 
-### 语法 {#drop-quota-syntax}
+语法:
 
 ``` sql
 DROP QUOTA [IF EXISTS] name [,...] [ON CLUSTER cluster_name]
@@ -165,12 +173,22 @@ DROP QUOTA [IF EXISTS] name [,...] [ON CLUSTER cluster_name]
 
 已删除的settings配置将从分配该settings配置的所有实体撤销。
 
-### 语法 {#drop-settings-profile-syntax}
+语法:
 
 ``` sql
 DROP [SETTINGS] PROFILE [IF EXISTS] name [,...] [ON CLUSTER cluster_name]
 ```
 
+## DROP VIEW {#drop-view}
+
+删除视图。视图也可以通过 `DROP TABLE` 删除,但 `DROP VIEW` 会检查 `[db.]name` 是否确实是一个视图。
+
+语法:
+
+``` sql
+DROP VIEW [IF EXISTS] [db.]name [ON CLUSTER cluster]
+```
+
 ## EXISTS {#exists-statement}
 
 ``` sql
@@ -189,7 +207,7 @@ KILL QUERY [ON CLUSTER cluster]
 ```
 
 尝试强制终止当前正在运行的查询。
-要终止的查询是从系统中选择的。使用在定义的标准进程表 `WHERE` 《公约》条款 `KILL` 查询。
+要终止的查询,是根据 `KILL` 查询中 `WHERE` 子句定义的条件,从 system.processes 表中选择的。
 
 例:
 
@@ -206,13 +224,13 @@ KILL QUERY WHERE user='username' SYNC
 默认情况下,使用异步版本的查询 (`ASYNC`),不等待确认查询已停止。
 
 同步版本 (`SYNC`)等待所有查询停止,并在停止时显示有关每个进程的信息。
-响应包含 `kill_status` 列,它可以采用以下值:
+响应包含 `kill_status` 列,该列可以采用以下值:
 
-1. ‘finished’ – The query was terminated successfully.
-2. ‘waiting’ – Waiting for the query to end after sending it a signal to terminate.
-3. The other values explain why the query can't be stopped.
+1. ‘finished’ – 查询已成功终止。
+2. ‘waiting’ – 向查询发送终止信号后,等待查询结束。
+3. 其他值解释了查询无法停止的原因。
 
-测试查询 (`TEST`)仅检查用户的权限并显示要停止的查询列表。
+测试查询 (`TEST`)仅检查用户的权限,并显示要停止的查询列表。
 
 ## KILL MUTATION {#kill-mutation}
 
 ``` sql
 KILL MUTATION [ON CLUSTER cluster]
   [FORMAT format]
 ```
 
-尝试取消和删除 [突变](alter.md#alter-mutations) 当前正在执行。 要取消的突变选自 [`system.mutations`](../../operations/system-tables/mutations.md#system_tables-mutations) 表使用由指定的过滤器 `WHERE` 《公约》条款 `KILL` 查询。
+尝试取消和删除当前正在执行的 [mutations](alter.md#alter-mutations)。要取消的mutation,是根据 `KILL` 查询中 `WHERE` 子句指定的过滤器,从 [`system.mutations`](../../operations/system-tables/mutations.md#system_tables-mutations) 表中选择的。
 
-测试查询 (`TEST`)仅检查用户的权限并显示要停止的查询列表。
+测试查询 (`TEST`)仅检查用户的权限,并显示要停止的mutations列表。
 
 例:
 
 ``` sql
 KILL MUTATION WHERE database = 'default' AND table = 'table'
 
 KILL MUTATION WHERE database = 'default' AND table = 'table' AND mutation_id = 'mutation_3.txt'
 ```
 
-The query is useful when a mutation is stuck and cannot finish (e.g. if some function in the mutation query throws an exception when applied to the data contained in the table).
+当mutation卡住且无法完成时,该查询非常有用(例如,mutation查询中的某个函数在应用于表中数据时抛出了异常)。
 
-已经由突变所做的更改不会回滚。
+Mutation已经做出的更改不会回滚。
 
 ## OPTIMIZE {#misc_operations-optimize}
 
 ``` sql
@@ -247,19 +265,19 @@ OPTIMIZE TABLE [db.]name [ON CLUSTER cluster] [PARTITION partition | PARTITION ID 'partition_id'] [FINAL] [DEDUPLICATE]
 ```
 
-此查询尝试使用来自表引擎的表初始化表的数据部分的非计划合并 [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) 家人
+此查询尝试为 [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) 家族表引擎的表,发起一次计划外的数据部分合并。
 
-该 `OPTMIZE` 查询也支持 [MaterializedView](../../engines/table-engines/special/materializedview.md) 和 [缓冲区](../../engines/table-engines/special/buffer.md) 引擎 不支持其他表引擎。
+该 `OPTIMIZE` 查询也支持 [MaterializedView](../../engines/table-engines/special/materializedview.md) 和 [Buffer](../../engines/table-engines/special/buffer.md) 引擎。 不支持其他表引擎。
 
-当 `OPTIMIZE` 与使用 [ReplicatedMergeTree](../../engines/table-engines/mergetree-family/replication.md) 表引擎的家族,ClickHouse创建合并任务,并等待在所有节点上执行(如果 `replication_alter_partitions_sync` 设置已启用)。
+当 `OPTIMIZE` 与 [ReplicatedMergeTree](../../engines/table-engines/mergetree-family/replication.md) 家族的表引擎一起使用时,ClickHouse将创建一个合并任务,并等待其在所有节点上执行完毕(如果 `replication_alter_partitions_sync` 设置已启用)。
 
 - 如果 `OPTIMIZE` 出于任何原因不执行合并,它不通知客户端。 要启用通知,请使用 [optimize_throw_if_noop](../../operations/settings/settings.md#setting-optimize_throw_if_noop) 设置。
 - 如果您指定 `PARTITION`,仅优化指定的分区。 [如何设置分区表达式](alter.md#alter-how-to-specify-part-expr).
 - 如果您指定 `FINAL`,即使所有数据已经在一个部分中,也会执行优化。
-- 如果您指定 `DEDUPLICATE`,然后完全相同的行将被重复数据删除(所有列进行比较),这仅适用于MergeTree引擎。
+- 如果您指定 `DEDUPLICATE`,则将对完全相同的行进行去重(比较所有列),这仅适用于MergeTree引擎。
 
 !!! warning "警告"
-    `OPTIMIZE` 无法修复 “Too many parts” 错误
+    `OPTIMIZE` 无法修复 “Too many parts” 错误。
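+
+例如,对某个分区发起一次计划外合并并去重(表名 `my_table` 与分区ID仅为示例):
+
+``` sql
+OPTIMIZE TABLE my_table PARTITION ID '202011' FINAL DEDUPLICATE;
+```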
 
 ## RENAME {#misc_operations-rename}
 
 ``` sql
@@ -270,6 +288,7 @@ RENAME TABLE [db11.]name11 TO [db12.]name12, [db21.]name21 TO [db22.]name22, ...
 ```
 
 所有表都在全局锁定下重命名。 重命名表是一个轻型操作。 如果您在TO之后指定了另一个数据库,则表将被移动到此数据库。 但是,包含数据库的目录必须位于同一文件系统中(否则,将返回错误)。
+如果您在一个查询中重命名多个表,该操作不是原子的:它可能被部分执行,其他会话中的查询可能会收到 `Table ... doesn't exist ...` 错误。
 
 ## SET {#query-set}
 
 ``` sql
 SET param = value
 ```
 
-分配 `value` 到 `param` [设置](../../operations/settings/index.md) 对于当前会话。 你不能改变 [服务器设置](../../operations/server-configuration-parameters/index.md) 这边
+为当前会话的 [设置](../../operations/settings/index.md) `param` 赋值 `value`。 您不能通过这种方式更改 [服务器设置](../../operations/server-configuration-parameters/index.md)。
 
-您还可以在单个查询中设置指定设置配置文件中的所有值。
+您还可以在单个查询中应用指定设置配置文件(profile)中的所有值。
 
 ``` sql
 SET profile = 'profile-name-from-the-settings-file'
 ```
 
@@ -291,8 +310,6 @@ SET profile = 'profile-name-from-the-settings-file'
 
 激活当前用户的角色。
 
-### 语法 {#set-role-syntax}
-
 ``` sql
 SET ROLE {DEFAULT | NONE | role [,...] | ALL | ALL EXCEPT role [,...]}
 ```
 
 ## SET DEFAULT ROLE {#set-default-role-statement}
 
 将默认角色设置为用户。
 
-默认角色在用户登录时自动激活。 您只能将以前授予的角色设置为默认值。 如果未向用户授予角色,ClickHouse将引发异常。
-
-### 语法 {#set-default-role-syntax}
+默认角色在用户登录时自动激活。 您只能将以前授予的角色设置为默认角色。 如果角色没有授予用户,ClickHouse会抛出异常。
 
 ``` sql
 SET DEFAULT ROLE {NONE | role [,...] | ALL | ALL EXCEPT role [,...]} TO {user|CURRENT_USER} [,...]
 ```
 
-### 例 {#set-default-role-examples}
+### 示例 {#set-default-role-examples}
 
 为用户设置多个默认角色:
 
 ``` sql
 SET DEFAULT ROLE role1, role2, ... TO user
 ```
 
-将所有授予的角色设置为用户的默认值:
+将所有授予的角色设置为用户的默认角色:
 
 ``` sql
 SET DEFAULT ROLE ALL TO user
 ```
 
-从用户清除默认角色:
+清除用户的默认角色:
 
 ``` sql
 SET DEFAULT ROLE NONE TO user
 ```
 
-将所有授予的角色设置为默认角色,其中一些角色除外:
+将所有授予的角色设置为默认角色,但其中某些角色除外:
 
 ``` sql
 SET DEFAULT ROLE ALL EXCEPT role1, role2 TO user
 ```
 
@@ -341,9 +356,9 @@ SET DEFAULT ROLE ALL EXCEPT role1, role2 TO user
 
 TRUNCATE TABLE [IF EXISTS] [db.]name [ON CLUSTER cluster]
 ```
 
-从表中删除所有数据。 当条款 `IF EXISTS` 如果该表不存在,则查询返回错误。
+从表中删除所有数据。 当省略 `IF EXISTS` 子句时,如果该表不存在,则查询返回错误。
 
-该 `TRUNCATE` 查询不支持 [查看](../../engines/table-engines/special/view.md), [文件](../../engines/table-engines/special/file.md), [URL](../../engines/table-engines/special/url.md) 和 [Null](../../engines/table-engines/special/null.md) 表引擎.
+该 `TRUNCATE` 查询不支持 [View](../../engines/table-engines/special/view.md)、[File](../../engines/table-engines/special/file.md)、[URL](../../engines/table-engines/special/url.md) 和 [Null](../../engines/table-engines/special/null.md) 表引擎。
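+
+例如,清空一个表中的全部数据(`db.my_table` 仅为演示用的表名):
+
+``` sql
+TRUNCATE TABLE IF EXISTS db.my_table
+```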
## USE {#use} diff --git a/programs/benchmark/Benchmark.cpp b/programs/benchmark/Benchmark.cpp index 8c69a545017..ae1d16ce402 100644 --- a/programs/benchmark/Benchmark.cpp +++ b/programs/benchmark/Benchmark.cpp @@ -27,6 +27,7 @@ #include #include #include +#include #include #include #include @@ -95,6 +96,7 @@ public: } global_context.makeGlobalContext(); + global_context.setSettings(settings); std::cerr << std::fixed << std::setprecision(3); @@ -404,7 +406,7 @@ private: Stopwatch watch; RemoteBlockInputStream stream( *(*connection_entries[connection_index]), - query, {}, global_context, &settings, nullptr, Scalars(), Tables(), query_processing_stage); + query, {}, global_context, nullptr, Scalars(), Tables(), query_processing_stage); if (!query_id.empty()) stream.setQueryId(query_id); diff --git a/programs/copier/ClusterCopier.cpp b/programs/copier/ClusterCopier.cpp index 2f19fc47fd2..ca09e7c1889 100644 --- a/programs/copier/ClusterCopier.cpp +++ b/programs/copier/ClusterCopier.cpp @@ -5,6 +5,7 @@ #include #include #include +#include namespace DB @@ -1588,11 +1589,14 @@ void ClusterCopier::dropParticularPartitionPieceFromAllHelpingTables(const TaskT LOG_DEBUG(log, "All helping tables dropped partition {}", partition_name); } -String ClusterCopier::getRemoteCreateTable(const DatabaseAndTableName & table, Connection & connection, const Settings * settings) +String ClusterCopier::getRemoteCreateTable(const DatabaseAndTableName & table, Connection & connection, const Settings & settings) { + Context remote_context(context); + remote_context.setSettings(settings); + String query = "SHOW CREATE TABLE " + getQuotedTable(table); Block block = getBlockWithAllStreamData(std::make_shared( - connection, query, InterpreterShowCreateQuery::getSampleBlock(), context, settings)); + connection, query, InterpreterShowCreateQuery::getSampleBlock(), remote_context)); return typeid_cast(*block.safeGetByPosition(0).column).getDataAt(0).toString(); } @@ -1604,7 +1608,7 @@ ASTPtr ClusterCopier::getCreateTableForPullShard(const ConnectionTimeouts & time String create_query_pull_str = getRemoteCreateTable( task_shard.task_table.table_pull, *connection_entry, - &task_cluster->settings_pull); + task_cluster->settings_pull); ParserCreateQuery parser_create_query; const auto & settings = context.getSettingsRef(); @@ -1856,6 +1860,9 @@ UInt64 ClusterCopier::executeQueryOnCluster( auto timeouts = ConnectionTimeouts::getTCPTimeoutsWithFailover(shard_settings).getSaturated(shard_settings.max_execution_time); auto connections = shard.pool->getMany(timeouts, &shard_settings, pool_mode); + Context shard_context(context); + shard_context.setSettings(shard_settings); + for (auto & connection : connections) { if (connection.isNull()) @@ -1864,7 +1871,7 @@ UInt64 ClusterCopier::executeQueryOnCluster( try { /// CREATE TABLE and DROP PARTITION queries return empty block - RemoteBlockInputStream stream{*connection, query, Block{}, context, &shard_settings}; + RemoteBlockInputStream stream{*connection, query, Block{}, shard_context}; NullBlockOutputStream output{Block{}}; copyData(stream, output); diff --git a/programs/copier/ClusterCopier.h b/programs/copier/ClusterCopier.h index beaf247dfc8..9aff5493cf8 100644 --- a/programs/copier/ClusterCopier.h +++ b/programs/copier/ClusterCopier.h @@ -154,7 +154,7 @@ protected: /// table we can get rid of partition pieces (partitions in helping tables). 
void dropParticularPartitionPieceFromAllHelpingTables(const TaskTable & task_table, const String & partition_name); - String getRemoteCreateTable(const DatabaseAndTableName & table, Connection & connection, const Settings * settings = nullptr); + String getRemoteCreateTable(const DatabaseAndTableName & table, Connection & connection, const Settings & settings); ASTPtr getCreateTableForPullShard(const ConnectionTimeouts & timeouts, TaskShard & task_shard); diff --git a/programs/copier/ClusterCopierApp.cpp b/programs/copier/ClusterCopierApp.cpp index c2946e12c34..e3169a49ecf 100644 --- a/programs/copier/ClusterCopierApp.cpp +++ b/programs/copier/ClusterCopierApp.cpp @@ -1,6 +1,7 @@ #include "ClusterCopierApp.h" #include #include +#include #include #include diff --git a/programs/copier/TaskCluster.h b/programs/copier/TaskCluster.h index 68d98c648f5..5b28f461dd8 100644 --- a/programs/copier/TaskCluster.h +++ b/programs/copier/TaskCluster.h @@ -1,6 +1,7 @@ #pragma once #include "Aliases.h" +#include namespace DB { @@ -12,7 +13,9 @@ namespace ErrorCodes struct TaskCluster { TaskCluster(const String & task_zookeeper_path_, const String & default_local_database_) - : task_zookeeper_path(task_zookeeper_path_), default_local_database(default_local_database_) {} + : task_zookeeper_path(task_zookeeper_path_) + , default_local_database(default_local_database_) + {} void loadTasks(const Poco::Util::AbstractConfiguration & config, const String & base_key = ""); diff --git a/programs/server/Server.cpp b/programs/server/Server.cpp index 6a8944f6881..28d8301b920 100644 --- a/programs/server/Server.cpp +++ b/programs/server/Server.cpp @@ -34,6 +34,7 @@ #include #include #include +#include #include #include #include @@ -542,6 +543,12 @@ int Server::main(const std::vector & /*args*/) Poco::File(dictionaries_lib_path).createDirectories(); } + /// top_level_domains_lists + { + const std::string & top_level_domains_path = config().getString("top_level_domains_path", path + "top_level_domains/") + "/"; + TLDListsHolder::getInstance().parseConfig(top_level_domains_path, config()); + } + { Poco::File(path + "data/").createDirectories(); Poco::File(path + "metadata/").createDirectories(); diff --git a/programs/server/config.d/path.xml b/programs/server/config.d/path.xml index 8db1d18e8c7..466ed0d1663 100644 --- a/programs/server/config.d/path.xml +++ b/programs/server/config.d/path.xml @@ -4,4 +4,5 @@ ./user_files/ ./format_schemas/ ./access/ + ./top_level_domains/ diff --git a/programs/server/config.d/test_keeper_port.xml b/programs/server/config.d/test_keeper_port.xml new file mode 120000 index 00000000000..f3f721caae0 --- /dev/null +++ b/programs/server/config.d/test_keeper_port.xml @@ -0,0 +1 @@ +../../../tests/config/config.d/test_keeper_port.xml \ No newline at end of file diff --git a/programs/server/config.xml b/programs/server/config.xml index 851a7654d53..f41c346bbed 100644 --- a/programs/server/config.xml +++ b/programs/server/config.xml @@ -724,6 +724,19 @@ + + + + + + diff --git a/programs/server/play.html b/programs/server/play.html index 12435f55793..5c3d3566af4 100644 --- a/programs/server/play.html +++ b/programs/server/play.html @@ -534,6 +534,23 @@ var theme = window.localStorage.getItem('theme'); if (theme) { setColorTheme(theme); + } else { + /// Obtain system-level user preference + var media_query_list = window.matchMedia('prefers-color-scheme: dark') + + if (media_query_list.matches) { + /// Set without saving to localstorage + document.documentElement.setAttribute('data-theme', 'dark'); + } + 
+ /// There is a rumor that on some computers, the theme is changing automatically on day/night. + media_query_list.addEventListener('change', function(e) { + if (e.matches) { + document.documentElement.setAttribute('data-theme', 'dark'); + } else { + document.documentElement.setAttribute('data-theme', 'light'); + } + }); } document.getElementById('toggle-light').onclick = function() diff --git a/src/Access/ContextAccess.cpp b/src/Access/ContextAccess.cpp index 0e4f3fe7871..a74f6e8e7a0 100644 --- a/src/Access/ContextAccess.cpp +++ b/src/Access/ContextAccess.cpp @@ -41,42 +41,7 @@ namespace } - void applyParamsToAccessRights(AccessRights & access, const ContextAccessParams & params) - { - static const AccessFlags table_ddl = AccessType::CREATE_DATABASE | AccessType::CREATE_TABLE | AccessType::CREATE_VIEW - | AccessType::ALTER_TABLE | AccessType::ALTER_VIEW | AccessType::DROP_DATABASE | AccessType::DROP_TABLE | AccessType::DROP_VIEW - | AccessType::TRUNCATE; - - static const AccessFlags dictionary_ddl = AccessType::CREATE_DICTIONARY | AccessType::DROP_DICTIONARY; - static const AccessFlags table_and_dictionary_ddl = table_ddl | dictionary_ddl; - static const AccessFlags write_table_access = AccessType::INSERT | AccessType::OPTIMIZE; - static const AccessFlags write_dcl_access = AccessType::ACCESS_MANAGEMENT - AccessType::SHOW_ACCESS; - - if (params.readonly) - access.revoke(write_table_access | table_and_dictionary_ddl | write_dcl_access | AccessType::SYSTEM | AccessType::KILL_QUERY); - - if (params.readonly == 1) - { - /// Table functions are forbidden in readonly mode. - /// For readonly = 2 they're allowed. - access.revoke(AccessType::CREATE_TEMPORARY_TABLE); - } - - if (!params.allow_ddl) - access.revoke(table_and_dictionary_ddl); - - if (!params.allow_introspection) - access.revoke(AccessType::INTROSPECTION); - - if (params.readonly) - { - /// No grant option in readonly mode. - access.revokeGrantOption(AccessType::ALL); - } - } - - - void addImplicitAccessRights(AccessRights & access) + AccessRights addImplicitAccessRights(const AccessRights & access) { auto modifier = [&](const AccessFlags & flags, const AccessFlags & min_flags_with_children, const AccessFlags & max_flags_with_children, const std::string_view & database, const std::string_view & table, const std::string_view & column) -> AccessFlags { @@ -150,23 +115,12 @@ namespace return res; }; - access.modifyFlags(modifier); - - /// Transform access to temporary tables into access to "_temporary_and_external_tables" database. - if (access.isGranted(AccessType::CREATE_TEMPORARY_TABLE)) - access.grant(AccessFlags::allTableFlags() | AccessFlags::allColumnFlags(), DatabaseCatalog::TEMPORARY_DATABASE); + AccessRights res = access; + res.modifyFlags(modifier); /// Anyone has access to the "system" database. - access.grant(AccessType::SELECT, DatabaseCatalog::SYSTEM_DATABASE); - } - - - AccessRights calculateFinalAccessRights(const AccessRights & access_from_user_and_roles, const ContextAccessParams & params) - { - AccessRights res_access = access_from_user_and_roles; - applyParamsToAccessRights(res_access, params); - addImplicitAccessRights(res_access); - return res_access; + res.grant(AccessType::SELECT, DatabaseCatalog::SYSTEM_DATABASE); + return res; } @@ -176,6 +130,12 @@ namespace ids[0] = id; return ids; } + + /// Helper for using in templates. + std::string_view getDatabase() { return {}; } + + template + std::string_view getDatabase(const std::string_view & arg1, const OtherArgs &...) 
{ return arg1; } } @@ -203,10 +163,7 @@ void ContextAccess::setUser(const UserPtr & user_) const /// User has been dropped. auto nothing_granted = std::make_shared(); access = nothing_granted; - access_without_readonly = nothing_granted; - access_with_allow_ddl = nothing_granted; - access_with_allow_introspection = nothing_granted; - access_from_user_and_roles = nothing_granted; + access_with_implicit = nothing_granted; subscription_for_user_change = {}; subscription_for_roles_changes = {}; enabled_roles = nullptr; @@ -270,12 +227,8 @@ void ContextAccess::setRolesInfo(const std::shared_ptr & void ContextAccess::calculateAccessRights() const { - access_from_user_and_roles = std::make_shared(mixAccessRightsFromUserAndRoles(*user, *roles_info)); - access = std::make_shared(calculateFinalAccessRights(*access_from_user_and_roles, params)); - - access_without_readonly = nullptr; - access_with_allow_ddl = nullptr; - access_with_allow_introspection = nullptr; + access = std::make_shared(mixAccessRightsFromUserAndRoles(*user, *roles_info)); + access_with_implicit = std::make_shared(addImplicitAccessRights(*access)); if (trace_log) { @@ -287,6 +240,7 @@ void ContextAccess::calculateAccessRights() const } LOG_TRACE(trace_log, "Settings: readonly={}, allow_ddl={}, allow_introspection_functions={}", params.readonly, params.allow_ddl, params.allow_introspection); LOG_TRACE(trace_log, "List of all grants: {}", access->toString()); + LOG_TRACE(trace_log, "List of all grants including implicit: {}", access_with_implicit->toString()); } } @@ -340,6 +294,7 @@ std::shared_ptr ContextAccess::getFullAccess() static const std::shared_ptr res = [] { auto full_access = std::shared_ptr(new ContextAccess); + full_access->is_full_access = true; full_access->access = std::make_shared(AccessRights::getFullAccess()); full_access->enabled_quota = EnabledQuota::getUnlimitedQuota(); return full_access; @@ -362,323 +317,303 @@ std::shared_ptr ContextAccess::getSettingsConstraints } -std::shared_ptr ContextAccess::getAccess() const +std::shared_ptr ContextAccess::getAccessRights() const { std::lock_guard lock{mutex}; return access; } -template -bool ContextAccess::isGrantedImpl2(const AccessFlags & flags, const Args &... args) const +std::shared_ptr ContextAccess::getAccessRightsWithImplicit() const { - bool access_granted; - if constexpr (grant_option) - access_granted = getAccess()->hasGrantOption(flags, args...); - else - access_granted = getAccess()->isGranted(flags, args...); - - if (trace_log) - LOG_TRACE(trace_log, "Access {}: {}{}", (access_granted ? "granted" : "denied"), (AccessRightsElement{flags, args...}.toString()), - (grant_option ? " WITH GRANT OPTION" : "")); - - return access_granted; + std::lock_guard lock{mutex}; + return access_with_implicit; } -template -bool ContextAccess::isGrantedImpl(const AccessFlags & flags) const + +template +bool ContextAccess::checkAccessImpl(const AccessFlags & flags) const { - return isGrantedImpl2(flags); + return checkAccessImpl2(flags); } -template -bool ContextAccess::isGrantedImpl(const AccessFlags & flags, const std::string_view & database, const Args &... args) const +template +bool ContextAccess::checkAccessImpl(const AccessFlags & flags, const std::string_view & database, const Args &... args) const { - return isGrantedImpl2(flags, database.empty() ? params.current_database : database, args...); + return checkAccessImpl2(flags, database.empty() ? 
params.current_database : database, args...); } -template -bool ContextAccess::isGrantedImpl(const AccessRightsElement & element) const +template +bool ContextAccess::checkAccessImpl(const AccessRightsElement & element) const { if (element.any_database) - return isGrantedImpl(element.access_flags); + return checkAccessImpl(element.access_flags); else if (element.any_table) - return isGrantedImpl(element.access_flags, element.database); + return checkAccessImpl(element.access_flags, element.database); else if (element.any_column) - return isGrantedImpl(element.access_flags, element.database, element.table); + return checkAccessImpl(element.access_flags, element.database, element.table); else - return isGrantedImpl(element.access_flags, element.database, element.table, element.columns); + return checkAccessImpl(element.access_flags, element.database, element.table, element.columns); } -template -bool ContextAccess::isGrantedImpl(const AccessRightsElements & elements) const +template +bool ContextAccess::checkAccessImpl(const AccessRightsElements & elements) const { for (const auto & element : elements) - if (!isGrantedImpl(element)) + if (!checkAccessImpl(element)) return false; return true; } -bool ContextAccess::isGranted(const AccessFlags & flags) const { return isGrantedImpl(flags); } -bool ContextAccess::isGranted(const AccessFlags & flags, const std::string_view & database) const { return isGrantedImpl(flags, database); } -bool ContextAccess::isGranted(const AccessFlags & flags, const std::string_view & database, const std::string_view & table) const { return isGrantedImpl(flags, database, table); } -bool ContextAccess::isGranted(const AccessFlags & flags, const std::string_view & database, const std::string_view & table, const std::string_view & column) const { return isGrantedImpl(flags, database, table, column); } -bool ContextAccess::isGranted(const AccessFlags & flags, const std::string_view & database, const std::string_view & table, const std::vector & columns) const { return isGrantedImpl(flags, database, table, columns); } -bool ContextAccess::isGranted(const AccessFlags & flags, const std::string_view & database, const std::string_view & table, const Strings & columns) const { return isGrantedImpl(flags, database, table, columns); } -bool ContextAccess::isGranted(const AccessRightsElement & element) const { return isGrantedImpl(element); } -bool ContextAccess::isGranted(const AccessRightsElements & elements) const { return isGrantedImpl(elements); } - -bool ContextAccess::hasGrantOption(const AccessFlags & flags) const { return isGrantedImpl(flags); } -bool ContextAccess::hasGrantOption(const AccessFlags & flags, const std::string_view & database) const { return isGrantedImpl(flags, database); } -bool ContextAccess::hasGrantOption(const AccessFlags & flags, const std::string_view & database, const std::string_view & table) const { return isGrantedImpl(flags, database, table); } -bool ContextAccess::hasGrantOption(const AccessFlags & flags, const std::string_view & database, const std::string_view & table, const std::string_view & column) const { return isGrantedImpl(flags, database, table, column); } -bool ContextAccess::hasGrantOption(const AccessFlags & flags, const std::string_view & database, const std::string_view & table, const std::vector & columns) const { return isGrantedImpl(flags, database, table, columns); } -bool ContextAccess::hasGrantOption(const AccessFlags & flags, const std::string_view & database, const std::string_view & table, const Strings & columns) 
const { return isGrantedImpl(flags, database, table, columns); } -bool ContextAccess::hasGrantOption(const AccessRightsElement & element) const { return isGrantedImpl(element); } -bool ContextAccess::hasGrantOption(const AccessRightsElements & elements) const { return isGrantedImpl(elements); } - - -template -void ContextAccess::checkAccessImpl2(const AccessFlags & flags, const Args &... args) const +template +bool ContextAccess::checkAccessImpl2(const AccessFlags & flags, const Args &... args) const { - if constexpr (grant_option) + auto access_granted = [&] { - if (hasGrantOption(flags, args...)) - return; - } - else - { - if (isGranted(flags, args...)) - return; - } - - auto show_error = [&](const String & msg, int error_code) - { - throw Exception(user_name + ": " + msg, error_code); + if (trace_log) + LOG_TRACE(trace_log, "Access granted: {}{}", (AccessRightsElement{flags, args...}.toString()), + (grant_option ? " WITH GRANT OPTION" : "")); + return true; }; - std::lock_guard lock{mutex}; - - if (!user) - show_error("User has been dropped", ErrorCodes::UNKNOWN_USER); - - if (grant_option && access->isGranted(flags, args...)) + auto access_denied = [&](const String & error_msg, int error_code) { - show_error( - "Not enough privileges. " - "The required privileges have been granted, but without grant option. " - "To execute this query it's necessary to have the grant " - + AccessRightsElement{flags, args...}.toString() + " WITH GRANT OPTION", + if (trace_log) + LOG_TRACE(trace_log, "Access denied: {}{}", (AccessRightsElement{flags, args...}.toString()), + (grant_option ? " WITH GRANT OPTION" : "")); + if constexpr (throw_if_denied) + throw Exception(getUserName() + ": " + error_msg, error_code); + return false; + }; + + if (!flags || is_full_access) + return access_granted(); + + if (!getUser()) + return access_denied("User has been dropped", ErrorCodes::UNKNOWN_USER); + + /// If the current user was allowed to create a temporary table + /// then he is allowed to do with it whatever he wants. + if ((sizeof...(args) >= 2) && (getDatabase(args...) == DatabaseCatalog::TEMPORARY_DATABASE)) + return access_granted(); + + auto acs = getAccessRightsWithImplicit(); + bool granted; + if constexpr (grant_option) + granted = acs->hasGrantOption(flags, args...); + else + granted = acs->isGranted(flags, args...); + + if (!granted) + { + if (grant_option && acs->isGranted(flags, args...)) + { + return access_denied( + "Not enough privileges. " + "The required privileges have been granted, but without grant option. " + "To execute this query it's necessary to have grant " + + AccessRightsElement{flags, args...}.toString() + " WITH GRANT OPTION", + ErrorCodes::ACCESS_DENIED); + } + + return access_denied( + "Not enough privileges. To execute this query it's necessary to have grant " + + AccessRightsElement{flags, args...}.toString() + (grant_option ? 
" WITH GRANT OPTION" : ""), ErrorCodes::ACCESS_DENIED); } + struct PrecalculatedFlags + { + const AccessFlags table_ddl = AccessType::CREATE_DATABASE | AccessType::CREATE_TABLE | AccessType::CREATE_VIEW + | AccessType::ALTER_TABLE | AccessType::ALTER_VIEW | AccessType::DROP_DATABASE | AccessType::DROP_TABLE | AccessType::DROP_VIEW + | AccessType::TRUNCATE; + + const AccessFlags dictionary_ddl = AccessType::CREATE_DICTIONARY | AccessType::DROP_DICTIONARY; + const AccessFlags table_and_dictionary_ddl = table_ddl | dictionary_ddl; + const AccessFlags write_table_access = AccessType::INSERT | AccessType::OPTIMIZE; + const AccessFlags write_dcl_access = AccessType::ACCESS_MANAGEMENT - AccessType::SHOW_ACCESS; + + const AccessFlags not_readonly_flags = write_table_access | table_and_dictionary_ddl | write_dcl_access | AccessType::SYSTEM | AccessType::KILL_QUERY; + const AccessFlags not_readonly_1_flags = AccessType::CREATE_TEMPORARY_TABLE; + + const AccessFlags ddl_flags = table_ddl | dictionary_ddl; + const AccessFlags introspection_flags = AccessType::INTROSPECTION; + }; + static const PrecalculatedFlags precalc; + if (params.readonly) { - if (!access_without_readonly) - { - Params changed_params = params; - changed_params.readonly = 0; - access_without_readonly = std::make_shared(calculateFinalAccessRights(*access_from_user_and_roles, changed_params)); - } - - if (access_without_readonly->isGranted(flags, args...)) + if constexpr (grant_option) + return access_denied("Cannot change grants in readonly mode.", ErrorCodes::READONLY); + if ((flags & precalc.not_readonly_flags) || + ((params.readonly == 1) && (flags & precalc.not_readonly_1_flags))) { if (params.interface == ClientInfo::Interface::HTTP && params.http_method == ClientInfo::HTTPMethod::GET) - show_error( + { + return access_denied( "Cannot execute query in readonly mode. " "For queries over HTTP, method GET implies readonly. You should use method POST for modifying queries", ErrorCodes::READONLY); + } else - show_error("Cannot execute query in readonly mode", ErrorCodes::READONLY); + return access_denied("Cannot execute query in readonly mode", ErrorCodes::READONLY); } } - if (!params.allow_ddl) + if (!params.allow_ddl && !grant_option) { - if (!access_with_allow_ddl) - { - Params changed_params = params; - changed_params.allow_ddl = true; - access_with_allow_ddl = std::make_shared(calculateFinalAccessRights(*access_from_user_and_roles, changed_params)); - } - - if (access_with_allow_ddl->isGranted(flags, args...)) - { - show_error("Cannot execute query. DDL queries are prohibited for the user", ErrorCodes::QUERY_IS_PROHIBITED); - } + if (flags & precalc.ddl_flags) + return access_denied("Cannot execute query. 
DDL queries are prohibited for the user", ErrorCodes::QUERY_IS_PROHIBITED); } - if (!params.allow_introspection) + if (!params.allow_introspection && !grant_option) { - if (!access_with_allow_introspection) - { - Params changed_params = params; - changed_params.allow_introspection = true; - access_with_allow_introspection = std::make_shared(calculateFinalAccessRights(*access_from_user_and_roles, changed_params)); - } - - if (access_with_allow_introspection->isGranted(flags, args...)) - { - show_error("Introspection functions are disabled, because setting 'allow_introspection_functions' is set to 0", ErrorCodes::FUNCTION_NOT_ALLOWED); - } + if (flags & precalc.introspection_flags) + return access_denied("Introspection functions are disabled, because setting 'allow_introspection_functions' is set to 0", ErrorCodes::FUNCTION_NOT_ALLOWED); } - show_error( - "Not enough privileges. To execute this query it's necessary to have the grant " - + AccessRightsElement{flags, args...}.toString() + (grant_option ? " WITH GRANT OPTION" : ""), - ErrorCodes::ACCESS_DENIED); + return access_granted(); } -template -void ContextAccess::checkAccessImpl(const AccessFlags & flags) const + +bool ContextAccess::isGranted(const AccessFlags & flags) const { return checkAccessImpl(flags); } +bool ContextAccess::isGranted(const AccessFlags & flags, const std::string_view & database) const { return checkAccessImpl(flags, database); } +bool ContextAccess::isGranted(const AccessFlags & flags, const std::string_view & database, const std::string_view & table) const { return checkAccessImpl(flags, database, table); } +bool ContextAccess::isGranted(const AccessFlags & flags, const std::string_view & database, const std::string_view & table, const std::string_view & column) const { return checkAccessImpl(flags, database, table, column); } +bool ContextAccess::isGranted(const AccessFlags & flags, const std::string_view & database, const std::string_view & table, const std::vector & columns) const { return checkAccessImpl(flags, database, table, columns); } +bool ContextAccess::isGranted(const AccessFlags & flags, const std::string_view & database, const std::string_view & table, const Strings & columns) const { return checkAccessImpl(flags, database, table, columns); } +bool ContextAccess::isGranted(const AccessRightsElement & element) const { return checkAccessImpl(element); } +bool ContextAccess::isGranted(const AccessRightsElements & elements) const { return checkAccessImpl(elements); } + +bool ContextAccess::hasGrantOption(const AccessFlags & flags) const { return checkAccessImpl(flags); } +bool ContextAccess::hasGrantOption(const AccessFlags & flags, const std::string_view & database) const { return checkAccessImpl(flags, database); } +bool ContextAccess::hasGrantOption(const AccessFlags & flags, const std::string_view & database, const std::string_view & table) const { return checkAccessImpl(flags, database, table); } +bool ContextAccess::hasGrantOption(const AccessFlags & flags, const std::string_view & database, const std::string_view & table, const std::string_view & column) const { return checkAccessImpl(flags, database, table, column); } +bool ContextAccess::hasGrantOption(const AccessFlags & flags, const std::string_view & database, const std::string_view & table, const std::vector & columns) const { return checkAccessImpl(flags, database, table, columns); } +bool ContextAccess::hasGrantOption(const AccessFlags & flags, const std::string_view & database, const std::string_view & table, const Strings & columns) 
const { return checkAccessImpl(flags, database, table, columns); } +bool ContextAccess::hasGrantOption(const AccessRightsElement & element) const { return checkAccessImpl(element); } +bool ContextAccess::hasGrantOption(const AccessRightsElements & elements) const { return checkAccessImpl(elements); } + +void ContextAccess::checkAccess(const AccessFlags & flags) const { checkAccessImpl(flags); } +void ContextAccess::checkAccess(const AccessFlags & flags, const std::string_view & database) const { checkAccessImpl(flags, database); } +void ContextAccess::checkAccess(const AccessFlags & flags, const std::string_view & database, const std::string_view & table) const { checkAccessImpl(flags, database, table); } +void ContextAccess::checkAccess(const AccessFlags & flags, const std::string_view & database, const std::string_view & table, const std::string_view & column) const { checkAccessImpl(flags, database, table, column); } +void ContextAccess::checkAccess(const AccessFlags & flags, const std::string_view & database, const std::string_view & table, const std::vector & columns) const { checkAccessImpl(flags, database, table, columns); } +void ContextAccess::checkAccess(const AccessFlags & flags, const std::string_view & database, const std::string_view & table, const Strings & columns) const { checkAccessImpl(flags, database, table, columns); } +void ContextAccess::checkAccess(const AccessRightsElement & element) const { checkAccessImpl(element); } +void ContextAccess::checkAccess(const AccessRightsElements & elements) const { checkAccessImpl(elements); } + +void ContextAccess::checkGrantOption(const AccessFlags & flags) const { checkAccessImpl(flags); } +void ContextAccess::checkGrantOption(const AccessFlags & flags, const std::string_view & database) const { checkAccessImpl(flags, database); } +void ContextAccess::checkGrantOption(const AccessFlags & flags, const std::string_view & database, const std::string_view & table) const { checkAccessImpl(flags, database, table); } +void ContextAccess::checkGrantOption(const AccessFlags & flags, const std::string_view & database, const std::string_view & table, const std::string_view & column) const { checkAccessImpl(flags, database, table, column); } +void ContextAccess::checkGrantOption(const AccessFlags & flags, const std::string_view & database, const std::string_view & table, const std::vector & columns) const { checkAccessImpl(flags, database, table, columns); } +void ContextAccess::checkGrantOption(const AccessFlags & flags, const std::string_view & database, const std::string_view & table, const Strings & columns) const { checkAccessImpl(flags, database, table, columns); } +void ContextAccess::checkGrantOption(const AccessRightsElement & element) const { checkAccessImpl(element); } +void ContextAccess::checkGrantOption(const AccessRightsElements & elements) const { checkAccessImpl(elements); } + + +template +bool ContextAccess::checkAdminOptionImpl(const UUID & role_id) const { - checkAccessImpl2(flags); + return checkAdminOptionImpl2(to_array(role_id), [this](const UUID & id, size_t) { return manager->tryReadName(id); }); } -template -void ContextAccess::checkAccessImpl(const AccessFlags & flags, const std::string_view & database, const Args &... args) const +template +bool ContextAccess::checkAdminOptionImpl(const UUID & role_id, const String & role_name) const { - checkAccessImpl2(flags, database.empty() ? 
params.current_database : database, args...); + return checkAdminOptionImpl2(to_array(role_id), [&role_name](const UUID &, size_t) { return std::optional{role_name}; }); } -template -void ContextAccess::checkAccessImpl(const AccessRightsElement & element) const +template +bool ContextAccess::checkAdminOptionImpl(const UUID & role_id, const std::unordered_map & names_of_roles) const { - if (element.any_database) - checkAccessImpl(element.access_flags); - else if (element.any_table) - checkAccessImpl(element.access_flags, element.database); - else if (element.any_column) - checkAccessImpl(element.access_flags, element.database, element.table); - else - checkAccessImpl(element.access_flags, element.database, element.table, element.columns); + return checkAdminOptionImpl2(to_array(role_id), [&names_of_roles](const UUID & id, size_t) { auto it = names_of_roles.find(id); return (it != names_of_roles.end()) ? it->second : std::optional{}; }); } -template -void ContextAccess::checkAccessImpl(const AccessRightsElements & elements) const +template +bool ContextAccess::checkAdminOptionImpl(const std::vector & role_ids) const { - for (const auto & element : elements) - checkAccessImpl(element); + return checkAdminOptionImpl2(role_ids, [this](const UUID & id, size_t) { return manager->tryReadName(id); }); } -void ContextAccess::checkAccess(const AccessFlags & flags) const { checkAccessImpl(flags); } -void ContextAccess::checkAccess(const AccessFlags & flags, const std::string_view & database) const { checkAccessImpl(flags, database); } -void ContextAccess::checkAccess(const AccessFlags & flags, const std::string_view & database, const std::string_view & table) const { checkAccessImpl(flags, database, table); } -void ContextAccess::checkAccess(const AccessFlags & flags, const std::string_view & database, const std::string_view & table, const std::string_view & column) const { checkAccessImpl(flags, database, table, column); } -void ContextAccess::checkAccess(const AccessFlags & flags, const std::string_view & database, const std::string_view & table, const std::vector & columns) const { checkAccessImpl(flags, database, table, columns); } -void ContextAccess::checkAccess(const AccessFlags & flags, const std::string_view & database, const std::string_view & table, const Strings & columns) const { checkAccessImpl(flags, database, table, columns); } -void ContextAccess::checkAccess(const AccessRightsElement & element) const { checkAccessImpl(element); } -void ContextAccess::checkAccess(const AccessRightsElements & elements) const { checkAccessImpl(elements); } - -void ContextAccess::checkGrantOption(const AccessFlags & flags) const { checkAccessImpl(flags); } -void ContextAccess::checkGrantOption(const AccessFlags & flags, const std::string_view & database) const { checkAccessImpl(flags, database); } -void ContextAccess::checkGrantOption(const AccessFlags & flags, const std::string_view & database, const std::string_view & table) const { checkAccessImpl(flags, database, table); } -void ContextAccess::checkGrantOption(const AccessFlags & flags, const std::string_view & database, const std::string_view & table, const std::string_view & column) const { checkAccessImpl(flags, database, table, column); } -void ContextAccess::checkGrantOption(const AccessFlags & flags, const std::string_view & database, const std::string_view & table, const std::vector & columns) const { checkAccessImpl(flags, database, table, columns); } -void ContextAccess::checkGrantOption(const AccessFlags & flags, const std::string_view & 
database, const std::string_view & table, const Strings & columns) const { checkAccessImpl(flags, database, table, columns); } -void ContextAccess::checkGrantOption(const AccessRightsElement & element) const { checkAccessImpl(element); } -void ContextAccess::checkGrantOption(const AccessRightsElements & elements) const { checkAccessImpl(elements); } - - -template -bool ContextAccess::checkAdminOptionImpl(bool throw_on_error, const Container & role_ids, const GetNameFunction & get_name_function) const +template +bool ContextAccess::checkAdminOptionImpl(const std::vector & role_ids, const Strings & names_of_roles) const { + return checkAdminOptionImpl2(role_ids, [&names_of_roles](const UUID &, size_t i) { return std::optional{names_of_roles[i]}; }); +} + +template +bool ContextAccess::checkAdminOptionImpl(const std::vector & role_ids, const std::unordered_map & names_of_roles) const +{ + return checkAdminOptionImpl2(role_ids, [&names_of_roles](const UUID & id, size_t) { auto it = names_of_roles.find(id); return (it != names_of_roles.end()) ? it->second : std::optional{}; }); +} + +template +bool ContextAccess::checkAdminOptionImpl2(const Container & role_ids, const GetNameFunction & get_name_function) const +{ + if (!std::size(role_ids) || is_full_access) + return true; + + auto show_error = [this](const String & msg, int error_code) + { + UNUSED(this); + if constexpr (throw_if_denied) + throw Exception(getUserName() + ": " + msg, error_code); + }; + + if (!getUser()) + { + show_error("User has been dropped", ErrorCodes::UNKNOWN_USER); + return false; + } + if (isGranted(AccessType::ROLE_ADMIN)) return true; auto info = getRolesInfo(); - if (!info) - { - if (!user) - { - if (throw_on_error) - throw Exception(user_name + ": User has been dropped", ErrorCodes::UNKNOWN_USER); - else - return false; - } - return true; - } - size_t i = 0; for (auto it = std::begin(role_ids); it != std::end(role_ids); ++it, ++i) { const UUID & role_id = *it; - if (info->enabled_roles_with_admin_option.count(role_id)) + if (info && info->enabled_roles_with_admin_option.count(role_id)) continue; - auto role_name = get_name_function(role_id, i); - if (!role_name) - role_name = "ID {" + toString(role_id) + "}"; - String msg = "To execute this query it's necessary to have the role " + backQuoteIfNeed(*role_name) + " granted with ADMIN option"; - if (info->enabled_roles.count(role_id)) - msg = "Role " + backQuote(*role_name) + " is granted, but without ADMIN option. " + msg; - if (throw_on_error) - throw Exception(getUserName() + ": Not enough privileges. " + msg, ErrorCodes::ACCESS_DENIED); - else - return false; + if (throw_if_denied) + { + auto role_name = get_name_function(role_id, i); + if (!role_name) + role_name = "ID {" + toString(role_id) + "}"; + + if (info && info->enabled_roles.count(role_id)) + show_error("Not enough privileges. " + "Role " + backQuote(*role_name) + " is granted, but without ADMIN option. " + "To execute this query it's necessary to have the role " + backQuoteIfNeed(*role_name) + " granted with ADMIN option.", + ErrorCodes::ACCESS_DENIED); + else + show_error("Not enough privileges. 
" + "To execute this query it's necessary to have the role " + backQuoteIfNeed(*role_name) + " granted with ADMIN option.", + ErrorCodes::ACCESS_DENIED); + } + + return false; } return true; } -bool ContextAccess::hasAdminOption(const UUID & role_id) const -{ - return checkAdminOptionImpl(false, to_array(role_id), [this](const UUID & id, size_t) { return manager->tryReadName(id); }); -} +bool ContextAccess::hasAdminOption(const UUID & role_id) const { return checkAdminOptionImpl(role_id); } +bool ContextAccess::hasAdminOption(const UUID & role_id, const String & role_name) const { return checkAdminOptionImpl(role_id, role_name); } +bool ContextAccess::hasAdminOption(const UUID & role_id, const std::unordered_map & names_of_roles) const { return checkAdminOptionImpl(role_id, names_of_roles); } +bool ContextAccess::hasAdminOption(const std::vector & role_ids) const { return checkAdminOptionImpl(role_ids); } +bool ContextAccess::hasAdminOption(const std::vector & role_ids, const Strings & names_of_roles) const { return checkAdminOptionImpl(role_ids, names_of_roles); } +bool ContextAccess::hasAdminOption(const std::vector & role_ids, const std::unordered_map & names_of_roles) const { return checkAdminOptionImpl(role_ids, names_of_roles); } -bool ContextAccess::hasAdminOption(const UUID & role_id, const String & role_name) const -{ - return checkAdminOptionImpl(false, to_array(role_id), [&role_name](const UUID &, size_t) { return std::optional{role_name}; }); -} - -bool ContextAccess::hasAdminOption(const UUID & role_id, const std::unordered_map & names_of_roles) const -{ - return checkAdminOptionImpl(false, to_array(role_id), [&names_of_roles](const UUID & id, size_t) { auto it = names_of_roles.find(id); return (it != names_of_roles.end()) ? it->second : std::optional{}; }); -} - -bool ContextAccess::hasAdminOption(const std::vector & role_ids) const -{ - return checkAdminOptionImpl(false, role_ids, [this](const UUID & id, size_t) { return manager->tryReadName(id); }); -} - -bool ContextAccess::hasAdminOption(const std::vector & role_ids, const Strings & names_of_roles) const -{ - return checkAdminOptionImpl(false, role_ids, [&names_of_roles](const UUID &, size_t i) { return std::optional{names_of_roles[i]}; }); -} - -bool ContextAccess::hasAdminOption(const std::vector & role_ids, const std::unordered_map & names_of_roles) const -{ - return checkAdminOptionImpl(false, role_ids, [&names_of_roles](const UUID & id, size_t) { auto it = names_of_roles.find(id); return (it != names_of_roles.end()) ? it->second : std::optional{}; }); -} - -void ContextAccess::checkAdminOption(const UUID & role_id) const -{ - checkAdminOptionImpl(true, to_array(role_id), [this](const UUID & id, size_t) { return manager->tryReadName(id); }); -} - -void ContextAccess::checkAdminOption(const UUID & role_id, const String & role_name) const -{ - checkAdminOptionImpl(true, to_array(role_id), [&role_name](const UUID &, size_t) { return std::optional{role_name}; }); -} - -void ContextAccess::checkAdminOption(const UUID & role_id, const std::unordered_map & names_of_roles) const -{ - checkAdminOptionImpl(true, to_array(role_id), [&names_of_roles](const UUID & id, size_t) { auto it = names_of_roles.find(id); return (it != names_of_roles.end()) ? 
it->second : std::optional<String>{}; }); -} - -bool ContextAccess::hasAdminOption(const std::vector<UUID> & role_ids) const -{ - return checkAdminOptionImpl(false, role_ids, [this](const UUID & id, size_t) { return manager->tryReadName(id); }); -} - -bool ContextAccess::hasAdminOption(const std::vector<UUID> & role_ids, const Strings & names_of_roles) const -{ - return checkAdminOptionImpl(false, role_ids, [&names_of_roles](const UUID &, size_t i) { return std::optional<String>{names_of_roles[i]}; }); -} - -bool ContextAccess::hasAdminOption(const std::vector<UUID> & role_ids, const std::unordered_map<UUID, String> & names_of_roles) const -{ - return checkAdminOptionImpl(false, role_ids, [&names_of_roles](const UUID & id, size_t) { auto it = names_of_roles.find(id); return (it != names_of_roles.end()) ? it->second : std::optional<String>{}; }); -} - -void ContextAccess::checkAdminOption(const UUID & role_id) const -{ - checkAdminOptionImpl(true, to_array(role_id), [this](const UUID & id, size_t) { return manager->tryReadName(id); }); -} - -void ContextAccess::checkAdminOption(const UUID & role_id, const String & role_name) const -{ - checkAdminOptionImpl(true, to_array(role_id), [&role_name](const UUID &, size_t) { return std::optional<String>{role_name}; }); -} - -void ContextAccess::checkAdminOption(const UUID & role_id, const std::unordered_map<UUID, String> & names_of_roles) const -{ - checkAdminOptionImpl(true, to_array(role_id), [&names_of_roles](const UUID & id, size_t) { auto it = names_of_roles.find(id); return (it != names_of_roles.end()) ? it->second : std::optional<String>{}; }); -} - -void ContextAccess::checkAdminOption(const std::vector<UUID> & role_ids) const -{ - checkAdminOptionImpl(true, role_ids, [this](const UUID & id, size_t) { return manager->tryReadName(id); }); -} - -void ContextAccess::checkAdminOption(const std::vector<UUID> & role_ids, const Strings & names_of_roles) const -{ - checkAdminOptionImpl(true, role_ids, [&names_of_roles](const UUID &, size_t i) { return std::optional<String>{names_of_roles[i]}; }); -} - -void ContextAccess::checkAdminOption(const std::vector<UUID> & role_ids, const std::unordered_map<UUID, String> & names_of_roles) const -{ - checkAdminOptionImpl(true, role_ids, [&names_of_roles](const UUID & id, size_t) { auto it = names_of_roles.find(id); return (it != names_of_roles.end()) ? it->second : std::optional<String>{}; }); -} +void ContextAccess::checkAdminOption(const UUID & role_id) const { checkAdminOptionImpl<true>(role_id); } +void ContextAccess::checkAdminOption(const UUID & role_id, const String & role_name) const { checkAdminOptionImpl<true>(role_id, role_name); } +void ContextAccess::checkAdminOption(const UUID & role_id, const std::unordered_map<UUID, String> & names_of_roles) const { checkAdminOptionImpl<true>(role_id, names_of_roles); } +void ContextAccess::checkAdminOption(const std::vector<UUID> & role_ids) const { checkAdminOptionImpl<true>(role_ids); } +void ContextAccess::checkAdminOption(const std::vector<UUID> & role_ids, const Strings & names_of_roles) const { checkAdminOptionImpl<true>(role_ids, names_of_roles); } +void ContextAccess::checkAdminOption(const std::vector<UUID> & role_ids, const std::unordered_map<UUID, String> & names_of_roles) const { checkAdminOptionImpl<true>(role_ids, names_of_roles); } } diff --git a/src/Access/ContextAccess.h b/src/Access/ContextAccess.h index 319c8edb076..43e9f60a4c6 100644 --- a/src/Access/ContextAccess.h +++ b/src/Access/ContextAccess.h @@ -96,7 +96,8 @@ public: std::shared_ptr<const SettingsConstraints> getSettingsConstraints() const; /// Returns the current access rights. - std::shared_ptr<const AccessRights> getAccess() const; + std::shared_ptr<const AccessRights> getAccessRights() const; + std::shared_ptr<const AccessRights> getAccessRightsWithImplicit() const; /// Checks if a specified access is granted. bool isGranted(const AccessFlags & flags) const; @@ -166,41 +167,45 @@ private: void setSettingsAndConstraints() const; void calculateAccessRights() const; - template <bool grant_option> - bool isGrantedImpl(const AccessFlags & flags) const; + template <bool throw_if_denied, bool grant_option> + bool checkAccessImpl(const AccessFlags & flags) const; - template <bool grant_option, typename... Args> - bool isGrantedImpl(const AccessFlags & flags, const std::string_view & database, const Args &... args) const; + template <bool throw_if_denied, bool grant_option, typename... Args> + bool checkAccessImpl(const AccessFlags & flags, const std::string_view & database, const Args &... args) const; - template <bool grant_option> - bool isGrantedImpl(const AccessRightsElement & element) const; + template <bool throw_if_denied, bool grant_option> + bool checkAccessImpl(const AccessRightsElement & element) const; - template <bool grant_option> - bool isGrantedImpl(const AccessRightsElements & elements) const; + template <bool throw_if_denied, bool grant_option> + bool checkAccessImpl(const AccessRightsElements & elements) const; - template <bool grant_option, typename... Args> - bool isGrantedImpl2(const AccessFlags & flags, const Args &... args) const; + template <bool throw_if_denied, bool grant_option, typename... Args> + bool checkAccessImpl2(const AccessFlags & flags, const Args &... args) const; - template <bool grant_option> - void checkAccessImpl(const AccessFlags & flags) const; + template <bool throw_if_denied> + bool checkAdminOptionImpl(const UUID & role_id) const; - template <bool grant_option, typename... Args> - void checkAccessImpl(const AccessFlags & flags, const std::string_view & database, const Args &... 
args) const; + template <bool throw_if_denied> + bool checkAdminOptionImpl(const UUID & role_id, const String & role_name) const; - template <bool grant_option> - void checkAccessImpl(const AccessRightsElement & element) const; + template <bool throw_if_denied> + bool checkAdminOptionImpl(const UUID & role_id, const std::unordered_map<UUID, String> & names_of_roles) const; - template <bool grant_option> - void checkAccessImpl(const AccessRightsElements & elements) const; + template <bool throw_if_denied> + bool checkAdminOptionImpl(const std::vector<UUID> & role_ids) const; - template <bool grant_option, typename... Args> - void checkAccessImpl2(const AccessFlags & flags, const Args &... args) const; + template <bool throw_if_denied> + bool checkAdminOptionImpl(const std::vector<UUID> & role_ids, const Strings & names_of_roles) const; - template <typename Container, typename GetNameFunction> - bool checkAdminOptionImpl(bool throw_on_error, const Container & role_ids, const GetNameFunction & get_name_function) const; + template <bool throw_if_denied> + bool checkAdminOptionImpl(const std::vector<UUID> & role_ids, const std::unordered_map<UUID, String> & names_of_roles) const; + + template <bool throw_if_denied, typename Container, typename GetNameFunction> + bool checkAdminOptionImpl2(const Container & role_ids, const GetNameFunction & get_name_function) const; const AccessControlManager * manager = nullptr; const Params params; + bool is_full_access = false; mutable Poco::Logger * trace_log = nullptr; mutable UserPtr user; mutable String user_name; @@ -209,13 +214,10 @@ private: mutable ext::scope_guard subscription_for_roles_changes; mutable std::shared_ptr<const EnabledRolesInfo> roles_info; mutable std::shared_ptr<const AccessRights> access; + mutable std::shared_ptr<const AccessRights> access_with_implicit; mutable std::shared_ptr<const EnabledRowPolicies> enabled_row_policies; mutable std::shared_ptr<const EnabledQuota> enabled_quota; mutable std::shared_ptr<const EnabledSettings> enabled_settings; - mutable std::shared_ptr<const AccessRights> access_without_readonly; - mutable std::shared_ptr<const AccessRights> access_with_allow_ddl; - mutable std::shared_ptr<const AccessRights> access_with_allow_introspection; - mutable std::shared_ptr<const AccessRights> access_from_user_and_roles; mutable std::mutex mutex; }; diff --git a/src/AggregateFunctions/AggregateFunctionAvg.h b/src/AggregateFunctions/AggregateFunctionAvg.h index fca9df9dd98..d07ff5db2f2 100644 --- a/src/AggregateFunctions/AggregateFunctionAvg.h +++ b/src/AggregateFunctions/AggregateFunctionAvg.h @@ -33,7 +33,7 @@ struct AvgFraction /// Allow division by zero as sometimes we need to return NaN. /// Invoked only if either Numerator or Denominator is Decimal. 
- Float64 NO_SANITIZE_UNDEFINED divideIfAnyDecimal(UInt32 num_scale, UInt32 denom_scale) const + Float64 NO_SANITIZE_UNDEFINED divideIfAnyDecimal(UInt32 num_scale, UInt32 denom_scale [[maybe_unused]]) const { if constexpr (IsDecimalNumber && IsDecimalNumber) { diff --git a/src/CMakeLists.txt b/src/CMakeLists.txt index ac735cb3bc3..6021065f937 100644 --- a/src/CMakeLists.txt +++ b/src/CMakeLists.txt @@ -6,6 +6,18 @@ if (USE_CLANG_TIDY) set (CMAKE_CXX_CLANG_TIDY "${CLANG_TIDY_PATH}") endif () +if(COMPILER_PIPE) + set(MAX_COMPILER_MEMORY 2500) +else() + set(MAX_COMPILER_MEMORY 1500) +endif() +if(MAKE_STATIC_LIBRARIES) + set(MAX_LINKER_MEMORY 3500) +else() + set(MAX_LINKER_MEMORY 2500) +endif() +include(../cmake/limit_jobs.cmake) + set (CONFIG_VERSION ${CMAKE_CURRENT_BINARY_DIR}/Common/config_version.h) set (CONFIG_COMMON ${CMAKE_CURRENT_BINARY_DIR}/Common/config.h) diff --git a/src/Client/Connection.cpp b/src/Client/Connection.cpp index f7119195e97..8f4a64766cd 100644 --- a/src/Client/Connection.cpp +++ b/src/Client/Connection.cpp @@ -1,5 +1,6 @@ #include #include +#include #include #include #include diff --git a/src/Client/Connection.h b/src/Client/Connection.h index f4c25001f3e..30a74ec73aa 100644 --- a/src/Client/Connection.h +++ b/src/Client/Connection.h @@ -5,6 +5,7 @@ #include #include +#include #include #include @@ -17,7 +18,6 @@ #include -#include #include #include @@ -31,6 +31,7 @@ namespace DB class ClientInfo; class Pipe; +struct Settings; /// Struct which represents data we are going to send for external table. struct ExternalTableData diff --git a/src/Client/ConnectionPool.h b/src/Client/ConnectionPool.h index 736075a4cc1..2389cc6755d 100644 --- a/src/Client/ConnectionPool.h +++ b/src/Client/ConnectionPool.h @@ -1,9 +1,9 @@ #pragma once #include - #include #include +#include namespace DB { diff --git a/src/Client/ConnectionPoolWithFailover.cpp b/src/Client/ConnectionPoolWithFailover.cpp index 68f4bcd1b76..1ca61dc8059 100644 --- a/src/Client/ConnectionPoolWithFailover.cpp +++ b/src/Client/ConnectionPoolWithFailover.cpp @@ -4,6 +4,7 @@ #include #include +#include #include #include #include diff --git a/src/Common/CurrentMetrics.cpp b/src/Common/CurrentMetrics.cpp index c48e76e1d98..d3a4a41046e 100644 --- a/src/Common/CurrentMetrics.cpp +++ b/src/Common/CurrentMetrics.cpp @@ -56,6 +56,7 @@ M(LocalThreadActive, "Number of threads in local thread pools running a task.") \ M(DistributedFilesToInsert, "Number of pending files to process for asynchronous insertion into Distributed tables. Number of files for every shard is summed.") \ M(TablesToDropQueueSize, "Number of dropped tables, that are waiting for background data removal.") \ + M(MaxDDLEntryID, "Max processed DDL entry of DDLWorker.") \ namespace CurrentMetrics { diff --git a/src/Common/ErrorCodes.cpp b/src/Common/ErrorCodes.cpp index 384c29ed675..1e381808d16 100644 --- a/src/Common/ErrorCodes.cpp +++ b/src/Common/ErrorCodes.cpp @@ -528,6 +528,7 @@ M(559, INVALID_GRPC_QUERY_INFO) \ M(560, ZSTD_ENCODER_FAILED) \ M(561, ZSTD_DECODER_FAILED) \ + M(562, TLD_LIST_NOT_FOUND) \ \ M(999, KEEPER_EXCEPTION) \ M(1000, POCO_EXCEPTION) \ diff --git a/src/Common/Exception.cpp b/src/Common/Exception.cpp index dd78d0ec9fc..d9bbb170dcc 100644 --- a/src/Common/Exception.cpp +++ b/src/Common/Exception.cpp @@ -34,9 +34,9 @@ namespace ErrorCodes extern const int CANNOT_MREMAP; } - -Exception::Exception(const std::string & msg, int code) - : Poco::Exception(msg, code) +/// Aborts the process if error code is LOGICAL_ERROR. 
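+/// (The abort happens only in debug builds and builds with sanitizers, per the comment inside the body below; release builds keep running.)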
+/// Increments error codes statistics. +void handle_error_code([[maybe_unused]] const std::string & msg, int code) { // In debug builds and builds with sanitizers, treat LOGICAL_ERROR as an assertion failure. // Log the message before we fail. @@ -50,6 +50,18 @@ Exception::Exception(const std::string & msg, int code) ErrorCodes::increment(code); } +Exception::Exception(const std::string & msg, int code) + : Poco::Exception(msg, code) +{ + handle_error_code(msg, code); +} + +Exception::Exception(const std::string & msg, const Exception & nested, int code) + : Poco::Exception(msg, nested, code) +{ + handle_error_code(msg, code); +} + Exception::Exception(CreateFromPocoTag, const Poco::Exception & exc) : Poco::Exception(exc.displayText(), ErrorCodes::POCO_EXCEPTION) { diff --git a/src/Common/Exception.h b/src/Common/Exception.h index 0096c87d6e5..3da2e2fb0d0 100644 --- a/src/Common/Exception.h +++ b/src/Common/Exception.h @@ -25,6 +25,7 @@ class Exception : public Poco::Exception public: Exception() = default; Exception(const std::string & msg, int code); + Exception(const std::string & msg, const Exception & nested, int code); Exception(int code, const std::string & message) : Exception(message, code) diff --git a/src/Common/HashTable/HashTable.h b/src/Common/HashTable/HashTable.h index a569b1c15db..2d5580bf709 100644 --- a/src/Common/HashTable/HashTable.h +++ b/src/Common/HashTable/HashTable.h @@ -194,9 +194,6 @@ struct HashTableCell /// Do the hash table need to store the zero key separately (that is, can a zero key be inserted into the hash table). static constexpr bool need_zero_value_storage = true; - /// Whether the cell is deleted. - bool isDeleted() const { return false; } - /// Set the mapped value, if any (for HashMap), to the corresponding `value`. void setMapped(const value_type & /*value*/) {} @@ -230,6 +227,9 @@ struct HashTableGrower UInt8 size_degree = initial_size_degree; static constexpr auto initial_count = 1ULL << initial_size_degree; + /// If collision resolution chains are contiguous, we can implement erase operation by moving the elements. + static constexpr auto performs_linear_probing_with_single_step = true; + /// The size of the hash table in the cells. size_t bufSize() const { return 1ULL << size_degree; } @@ -277,6 +277,9 @@ template struct HashTableFixedGrower { static constexpr auto initial_count = 1ULL << key_bits; + + static constexpr auto performs_linear_probing_with_single_step = true; + size_t bufSize() const { return 1ULL << key_bits; } size_t place(size_t x) const { return x; } /// You could write __builtin_unreachable(), but the compiler does not optimize everything, and it turns out less efficiently. 
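The `performs_linear_probing_with_single_step` flag introduced above is the precondition for the tombstone-free `erase` added in the next hunks: when probing advances exactly one cell at a time, deletion can repair the collision chain by shifting elements back into the hole instead of leaving a "deleted" marker. Below is a minimal, self-contained sketch of that backward-shift technique; it is illustrative only (identity hash, fixed 16-cell power-of-two buffer, no zero-key or resize handling, table assumed never full), not ClickHouse's implementation:

```cpp
#include <array>
#include <cstdint>
#include <optional>

/// Toy linear-probing set demonstrating erase by backward shift (no tombstones).
struct ToySet
{
    static constexpr size_t capacity = 16;           /// power of two, so masking works
    static constexpr size_t mask = capacity - 1;
    std::array<std::optional<uint64_t>, capacity> buf{};

    static size_t place(uint64_t key) { return key & mask; }

    void insert(uint64_t key)                        /// assumes the table never fills up
    {
        size_t pos = place(key);
        while (buf[pos] && *buf[pos] != key)
            pos = (pos + 1) & mask;                  /// single-step linear probing
        buf[pos] = key;
    }

    bool has(uint64_t key) const
    {
        for (size_t pos = place(key); buf[pos]; pos = (pos + 1) & mask)
            if (*buf[pos] == key)
                return true;
        return false;
    }

    void erase(uint64_t key)
    {
        size_t hole = place(key);
        while (buf[hole] && *buf[hole] != key)
            hole = (hole + 1) & mask;
        if (!buf[hole])
            return;                                  /// key not present
        buf[hole].reset();

        /// Walk the rest of the collision chain; pull back every element whose
        /// natural position does not lie cyclically in (hole, pos], because the
        /// new hole would otherwise cut it off from its probe path.
        for (size_t pos = (hole + 1) & mask; buf[pos]; pos = (pos + 1) & mask)
        {
            size_t optimal = place(*buf[pos]);
            bool still_reachable = (hole < pos)
                ? (hole < optimal && optimal <= pos)
                : (optimal > hole || optimal <= pos);
            if (!still_reachable)
            {
                buf[hole] = *buf[pos];               /// move the element over the hole
                buf[pos].reset();
                hole = pos;                          /// the hole advances
            }
        }
    }
};
```

The real `erase` below performs the same walk, but recomputes each element's natural position via `grower.place(getHash(...))` and moves raw cells with `memcpy`; its two overlap conditions are the cyclic `still_reachable` test above, written out separately for the wrapped and non-wrapped cases.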
@@ -466,7 +469,7 @@ protected: */ size_t i = 0; for (; i < old_size; ++i) - if (!buf[i].isZero(*this) && !buf[i].isDeleted()) + if (!buf[i].isZero(*this)) reinsert(buf[i], buf[i].getHash(*this)); /** There is also a special case: * after transferring all the elements from the old halves you need to [ o x ] * process tail from the collision resolution chain immediately after it [ o x ] */ - for (; !buf[i].isZero(*this) && !buf[i].isDeleted(); ++i) + for (; !buf[i].isZero(*this); ++i) reinsert(buf[i], buf[i].getHash(*this)); #ifdef DBMS_HASH_MAP_DEBUG_RESIZES @@ -829,6 +832,7 @@ protected: */ --m_size; buf[place_value].setZero(); + inserted = false; throw; } @@ -954,6 +958,112 @@ public: return const_cast<std::decay_t<decltype(*this)> *>(this)->find(x, hash_value); } + std::enable_if_t<Grower::performs_linear_probing_with_single_step> + ALWAYS_INLINE erase(const Key & x) + { + /** Deletion from an open addressing hash table without tombstones + * + * https://en.wikipedia.org/wiki/Linear_probing + * https://en.wikipedia.org/wiki/Open_addressing + * An alternative that avoids recomputing the hash keeps the probe distance (the difference + * between the natural cell position and the actual one) in the cell: https://arxiv.org/ftp/arxiv/papers/0909/0909.2547.pdf + * + * Currently we use the algorithm that recomputes the hash on each step, from https://en.wikipedia.org/wiki/Open_addressing + */ + + if (Cell::isZero(x, *this)) + { + if (this->hasZero()) + { + --m_size; + this->clearHasZero(); + } + else + { + return; + } + } + + size_t hash_value = hash(x); + size_t erased_key_position = findCell(x, hash_value, grower.place(hash_value)); + + /// Key is not found + if (buf[erased_key_position].isZero(*this)) + { + return; + } + + /// The loop below is guaranteed to terminate because there is at least one empty position + assert(m_size < grower.bufSize()); + + size_t next_position = erased_key_position; + + /** + * Deleting an element can break the search for one of the following elements, because the + * cell at erased_key_position is now empty. We check next_element: consider the sequence + * (erased_key_position, next_element]; if the optimal_position of next_element falls into it, + * then removing erased_key_position does not break the search for next_element. + * If the optimal_position of the element does not fall into the sequence (erased_key_position, next_element], + * then deleting erased_key_position breaks the search for it, so we move next_element + * to erased_key_position. Now the empty place is at next_element, and we apply the same + * procedure to it. + * If an empty cell is encountered, there are no further elements whose search could be + * broken, so we can exit. + */ + + /// Walk to the right through the collision resolution chain and move elements to better positions + while (true) + { + next_position = grower.next(next_position); + + /// If there are no more elements in the chain + if (buf[next_position].isZero(*this)) + break; + + /// The optimal position of the element in the cell at next_position + size_t optimal_position = grower.place(buf[next_position].getHash(*this)); + + /// If the position of this element is already optimal, proceed to the next element. 
+ if (optimal_position == next_position) + continue; + + /// Cannot move this element because optimal position is after the freed place + /// The second condition is tricky - if the chain was overlapped before erased_key_position, + /// and the optimal position is actually before in collision resolution chain: + /// + /// [*xn***----------------***] + /// ^^-next elem ^ + /// | | + /// erased elem the optimal position of the next elem + /// + /// so, the next elem should be moved to position of erased elem + + /// The case of non overlapping part of chain + if (next_position > erased_key_position + && (optimal_position > erased_key_position) && (optimal_position < next_position)) + { + continue; + } + + /// The case of overlapping chain + if (next_position < erased_key_position + /// Cannot move this element because optimal position is after the freed place + && ((optimal_position > erased_key_position) || (optimal_position < next_position))) + { + continue; + } + + /// Move the element to the freed place + memcpy(static_cast(&buf[erased_key_position]), static_cast(&buf[next_position]), sizeof(Cell)); + /// Now we have another freed place + erased_key_position = next_position; + } + + buf[erased_key_position].setZero(); + --m_size; + } + bool ALWAYS_INLINE has(const Key & x) const { if (Cell::isZero(x, *this)) diff --git a/src/Common/HashTable/StringHashSet.h b/src/Common/HashTable/StringHashSet.h new file mode 100644 index 00000000000..8714a0e1fe4 --- /dev/null +++ b/src/Common/HashTable/StringHashSet.h @@ -0,0 +1,101 @@ +#pragma once + +#include +#include +#include + +template +struct StringHashSetCell : public HashTableCell +{ + using Base = HashTableCell; + using Base::Base; + + VoidMapped void_map; + VoidMapped & getMapped() { return void_map; } + const VoidMapped & getMapped() const { return void_map; } + + static constexpr bool need_zero_value_storage = false; +}; + +template <> +struct StringHashSetCell : public HashTableCell +{ + using Base = HashTableCell; + using Base::Base; + + VoidMapped void_map; + VoidMapped & getMapped() { return void_map; } + const VoidMapped & getMapped() const { return void_map; } + + static constexpr bool need_zero_value_storage = false; + + bool isZero(const HashTableNoState & state) const { return isZero(this->key, state); } + // Zero means unoccupied cells in hash table. Use key with last word = 0 as + // zero keys, because such keys are unrepresentable (no way to encode length). + static bool isZero(const StringKey16 & key_, const HashTableNoState &) + { return key_.high == 0; } + void setZero() { this->key.high = 0; } +}; + +template <> +struct StringHashSetCell : public HashTableCell +{ + using Base = HashTableCell; + using Base::Base; + + VoidMapped void_map; + VoidMapped & getMapped() { return void_map; } + const VoidMapped & getMapped() const { return void_map; } + + static constexpr bool need_zero_value_storage = false; + + bool isZero(const HashTableNoState & state) const { return isZero(this->key, state); } + // Zero means unoccupied cells in hash table. Use key with last word = 0 as + // zero keys, because such keys are unrepresentable (no way to encode length). 
+ static bool isZero(const StringKey24 & key_, const HashTableNoState &) + { return key_.c == 0; } + void setZero() { this->key.c = 0; } +}; + +template <> +struct StringHashSetCell : public HashSetCellWithSavedHash +{ + using Base = HashSetCellWithSavedHash; + using Base::Base; + + VoidMapped void_map; + VoidMapped & getMapped() { return void_map; } + const VoidMapped & getMapped() const { return void_map; } + + static constexpr bool need_zero_value_storage = false; +}; + +template +struct StringHashSetSubMaps +{ + using T0 = StringHashTableEmpty>; + using T1 = HashSetTable, StringHashTableHash, StringHashTableGrower<>, Allocator>; + using T2 = HashSetTable, StringHashTableHash, StringHashTableGrower<>, Allocator>; + using T3 = HashSetTable, StringHashTableHash, StringHashTableGrower<>, Allocator>; + using Ts = HashSetTable, StringHashTableHash, StringHashTableGrower<>, Allocator>; +}; + +template +class StringHashSet : public StringHashTable> +{ +public: + using Key = StringRef; + using Base = StringHashTable>; + using Self = StringHashSet; + using LookupResult = typename Base::LookupResult; + + using Base::Base; + + template + void ALWAYS_INLINE emplace(KeyHolder && key_holder, bool & inserted) + { + LookupResult it; + Base::emplace(key_holder, it, inserted); + } + +}; diff --git a/src/Common/HashTable/StringHashTable.h b/src/Common/HashTable/StringHashTable.h index 06389825e60..9f91de5585b 100644 --- a/src/Common/HashTable/StringHashTable.h +++ b/src/Common/HashTable/StringHashTable.h @@ -212,7 +212,7 @@ public: using LookupResult = StringHashTableLookupResult; using ConstLookupResult = StringHashTableLookupResult; - StringHashTable() {} + StringHashTable() = default; StringHashTable(size_t reserve_for_num_elements) : m1{reserve_for_num_elements / 4} @@ -222,8 +222,15 @@ public: { } - StringHashTable(StringHashTable && rhs) { *this = std::move(rhs); } - ~StringHashTable() {} + StringHashTable(StringHashTable && rhs) + : m1(std::move(rhs.m1)) + , m2(std::move(rhs.m2)) + , m3(std::move(rhs.m3)) + , ms(std::move(rhs.ms)) + { + } + + ~StringHashTable() = default; public: // Dispatch is written in a way that maximizes the performance: diff --git a/src/Common/SpaceSaving.h b/src/Common/SpaceSaving.h index cb6fee1ad91..185b4aa90ae 100644 --- a/src/Common/SpaceSaving.h +++ b/src/Common/SpaceSaving.h @@ -353,6 +353,7 @@ private: void destroyLastElement() { auto last_element = counter_list.back(); + counter_map.erase(last_element->key); arena.free(last_element->key); delete last_element; counter_list.pop_back(); diff --git a/src/Common/TLDListsHolder.cpp b/src/Common/TLDListsHolder.cpp new file mode 100644 index 00000000000..f0702f37e93 --- /dev/null +++ b/src/Common/TLDListsHolder.cpp @@ -0,0 +1,106 @@ +#include +#include +#include +#include +#include +#include + +namespace DB +{ + +namespace ErrorCodes +{ + extern const int TLD_LIST_NOT_FOUND; +} + +/// +/// TLDList +/// +TLDList::TLDList(size_t size) + : tld_container(size) + , pool(std::make_unique(10 << 20)) +{} +bool TLDList::insert(const StringRef & host) +{ + bool inserted; + tld_container.emplace(DB::ArenaKeyHolder{host, *pool}, inserted); + return inserted; +} +bool TLDList::has(const StringRef & host) const +{ + return tld_container.has(host); +} + +/// +/// TLDListsHolder +/// +TLDListsHolder & TLDListsHolder::getInstance() +{ + static TLDListsHolder instance; + return instance; +} +TLDListsHolder::TLDListsHolder() = default; + +void TLDListsHolder::parseConfig(const std::string & top_level_domains_path, const 
Poco::Util::AbstractConfiguration & config) +{ + Poco::Util::AbstractConfiguration::Keys config_keys; + config.keys("top_level_domains_lists", config_keys); + + Poco::Logger * log = &Poco::Logger::get("TLDListsHolder"); + + for (const auto & key : config_keys) + { + const std::string & path = top_level_domains_path + config.getString("top_level_domains_lists." + key); + LOG_TRACE(log, "{} loading from {}", key, path); + size_t hosts = parseAndAddTldList(key, path); + LOG_INFO(log, "{} was added ({} hosts)", key, hosts); + } +} + +size_t TLDListsHolder::parseAndAddTldList(const std::string & name, const std::string & path) +{ + std::unordered_set<std::string> tld_list_tmp; + + ReadBufferFromFile in(path); + while (!in.eof()) + { + char * newline = find_first_symbols<'\n'>(in.position(), in.buffer().end()); + if (newline >= in.buffer().end()) + break; + + std::string_view line(in.position(), newline - in.position()); + in.position() = newline + 1; + + /// Skip comments + if (line.size() > 2 && line[0] == '/' && line[1] == '/') + continue; + trim(line); + /// Skip empty lines + if (line.empty()) + continue; + tld_list_tmp.emplace(line); + } + + TLDList tld_list(tld_list_tmp.size()); + for (const auto & host : tld_list_tmp) + { + StringRef host_ref{host.data(), host.size()}; + tld_list.insert(host_ref); + } + + size_t tld_list_size = tld_list.size(); + std::lock_guard lock(tld_lists_map_mutex); + tld_lists_map.insert(std::make_pair(name, std::move(tld_list))); + return tld_list_size; +} + +const TLDList & TLDListsHolder::getTldList(const std::string & name) +{ + std::lock_guard lock(tld_lists_map_mutex); + auto it = tld_lists_map.find(name); + if (it == tld_lists_map.end()) + throw Exception(ErrorCodes::TLD_LIST_NOT_FOUND, "TLD list {} does not exist", name); + return it->second; +} + +} diff --git a/src/Common/TLDListsHolder.h b/src/Common/TLDListsHolder.h new file mode 100644 index 00000000000..3900f9c30d2 --- /dev/null +++ b/src/Common/TLDListsHolder.h @@ -0,0 +1,65 @@ +#pragma once + +#include +#include +#include +#include +#include +#include +#include +#include + +namespace DB +{ + +/// Custom TLD List +/// +/// Unlike tldLookup (which uses gperf) this one uses a plain StringHashSet. +class TLDList +{ +public: + using Container = StringHashSet<>; + + TLDList(size_t size); + + /// Return true if tld_container did not already contain such an element. + bool insert(const StringRef & host); + /// Check whether such a TLD is present. + bool has(const StringRef & host) const; + size_t size() const { return tld_container.size(); } + +private: + Container tld_container; + std::unique_ptr<Arena> pool; +}; + +class TLDListsHolder +{ +public: + using Map = std::unordered_map<std::string, TLDList>; + + static TLDListsHolder & getInstance(); + + /// Parse the "top_level_domains_lists" config section + /// and add each list found there. + void parseConfig(const std::string & top_level_domains_path, const Poco::Util::AbstractConfiguration & config); + + /// Parse a file and add it as a set to the list of TLD lists: + /// - lines starting with "//" are comments, + /// - empty lines are ignored. + /// + /// Example: https://publicsuffix.org/list/public_suffix_list.dat + /// + /// Returns the size of the list. 
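+ /// + /// For reference, the server config shape consumed by parseConfig() above looks like this (the list name and file name here are illustrative, not mandated): + /// <top_level_domains_lists> + ///     <public_suffix_list>public_suffix_list.dat</public_suffix_list> + /// </top_level_domains_lists> + /// Each key becomes the list name accepted by getTldList(); the value is a path resolved against top_level_domains_path.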
+ size_t parseAndAddTldList(const std::string & name, const std::string & path); + /// Throws TLD_LIST_NOT_FOUND if list does not exist + const TLDList & getTldList(const std::string & name); + +protected: + TLDListsHolder(); + + std::mutex tld_lists_map_mutex; + Map tld_lists_map; +}; + +} diff --git a/src/Common/XDBCBridgeHelper.h b/src/Common/XDBCBridgeHelper.h index ed1f63a2507..d7d3a6ba4cc 100644 --- a/src/Common/XDBCBridgeHelper.h +++ b/src/Common/XDBCBridgeHelper.h @@ -12,6 +12,7 @@ #include #include #include +#include #include #include diff --git a/src/Common/ZooKeeper/TestKeeperStorage.cpp b/src/Common/ZooKeeper/TestKeeperStorage.cpp index 6513c9c1050..e3e4edc23a7 100644 --- a/src/Common/ZooKeeper/TestKeeperStorage.cpp +++ b/src/Common/ZooKeeper/TestKeeperStorage.cpp @@ -427,7 +427,7 @@ struct TestKeeperStorageMultiRequest final : public TestKeeperStorageRequest for (const auto & sub_request : request.requests) { - auto sub_zk_request = dynamic_pointer_cast(sub_request); + auto sub_zk_request = std::dynamic_pointer_cast(sub_request); if (sub_zk_request->getOpNum() == Coordination::OpNum::Create) { concrete_requests.push_back(std::make_shared(sub_zk_request)); diff --git a/src/Common/tests/CMakeLists.txt b/src/Common/tests/CMakeLists.txt index 6a39c2f8553..cb36e2b97d2 100644 --- a/src/Common/tests/CMakeLists.txt +++ b/src/Common/tests/CMakeLists.txt @@ -10,9 +10,6 @@ target_link_libraries (sip_hash_perf PRIVATE clickhouse_common_io) add_executable (auto_array auto_array.cpp) target_link_libraries (auto_array PRIVATE clickhouse_common_io) -add_executable (hash_table hash_table.cpp) -target_link_libraries (hash_table PRIVATE clickhouse_common_io) - add_executable (small_table small_table.cpp) target_link_libraries (small_table PRIVATE clickhouse_common_io) diff --git a/src/Common/tests/gtest_hash_table.cpp b/src/Common/tests/gtest_hash_table.cpp new file mode 100644 index 00000000000..41255dcbba1 --- /dev/null +++ b/src/Common/tests/gtest_hash_table.cpp @@ -0,0 +1,319 @@ +#include +#include + +#include + +#include +#include + +#include + +#include + +/// To test dump functionality without using other hashes that can change +template +struct DummyHash +{ + size_t operator()(T key) const { return T(key); } +}; + +template +std::set convertToSet(const HashTable& table) +{ + std::set result; + + for (auto v: table) + result.emplace(v.getValue()); + + return result; +} + + +TEST(HashTable, Insert) +{ + using Cont = HashSet, HashTableGrower<1>>; + + Cont cont; + + cont.insert(1); + cont.insert(2); + + ASSERT_EQ(cont.size(), 2); +} + +TEST(HashTable, Emplace) +{ + using Cont = HashSet, HashTableGrower<1>>; + + Cont cont; + + Cont::LookupResult it; + bool inserted = false; + cont.emplace(1, it, inserted); + ASSERT_EQ(it->getKey(), 1); + ASSERT_EQ(inserted, true); + + cont.emplace(2, it, inserted); + ASSERT_EQ(it->getKey(), 2); + ASSERT_EQ(inserted, true); + + cont.emplace(1, it, inserted); + ASSERT_EQ(it->getKey(), 1); + ASSERT_EQ(inserted, false); +} + +TEST(HashTable, Lookup) +{ + using Cont = HashSet, HashTableGrower<1>>; + + Cont cont; + + cont.insert(1); + cont.insert(2); + + Cont::LookupResult it = cont.find(1); + ASSERT_TRUE(it != nullptr); + + it = cont.find(2); + ASSERT_TRUE(it != nullptr); + + it = cont.find(3); + ASSERT_TRUE(it == nullptr); +} + +TEST(HashTable, Iteration) +{ + using Cont = HashSet, HashTableGrower<1>>; + + Cont cont; + + cont.insert(1); + cont.insert(2); + cont.insert(3); + + std::set expected = {1, 2, 3}; + std::set actual = convertToSet(cont); + + 
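+ // With HashTableGrower<1> the buffer starts at only 2 cells (bufSize() == 1 << size_degree), so the three inserts above force growth; iteration is therefore exercised over a resized buffer as well.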
ASSERT_EQ(actual, expected); +} + +TEST(HashTable, Erase) +{ + { + /// Check zero element deletion + using Cont = HashSet, HashTableGrower<4>>; + Cont cont; + + cont.insert(0); + + ASSERT_TRUE(cont.find(0) != nullptr && cont.find(0)->getKey() == 0); + + cont.erase(0); + + ASSERT_TRUE(cont.find(0) == nullptr); + } + { + using Cont = HashSet, HashTableGrower<4>>; + Cont cont; + + /// [.(1)..............] erase of (1). + cont.insert(1); + + ASSERT_TRUE(cont.find(1) != nullptr && cont.find(1)->getKey() == 1); + + cont.erase(1); + + ASSERT_TRUE(cont.find(1) == nullptr); + } + { + using Cont = HashSet, HashTableGrower<4>>; + Cont cont; + + /// [.(1)(2)(3)............] erase of (1) does not break search for (2) (3). + cont.insert(1); + cont.insert(2); + cont.insert(3); + cont.erase(1); + + ASSERT_TRUE(cont.find(1) == nullptr); + ASSERT_TRUE(cont.find(2) != nullptr && cont.find(2)->getKey() == 2); + ASSERT_TRUE(cont.find(3) != nullptr && cont.find(3)->getKey() == 3); + + cont.erase(2); + cont.erase(3); + ASSERT_TRUE(cont.find(2) == nullptr); + ASSERT_TRUE(cont.find(3) == nullptr); + ASSERT_EQ(cont.size(), 0); + } + { + using Cont = HashSet, HashTableGrower<4>>; + Cont cont; + + /// [.(1)(17).............] erase of (1) breaks search for (17) because their natural position is 1. + cont.insert(1); + cont.insert(17); + cont.erase(1); + + ASSERT_TRUE(cont.find(1) == nullptr); + ASSERT_TRUE(cont.find(17) != nullptr && cont.find(17)->getKey() == 17); + } + { + using Cont = HashSet, HashTableGrower<4>>; + Cont cont; + + /// [.(1)(2)(3)(17)...........] erase of (2) breaks search for (17) because their natural position is 1. + + cont.insert(1); + cont.insert(2); + cont.insert(3); + cont.insert(17); + cont.erase(2); + + ASSERT_TRUE(cont.find(2) == nullptr); + ASSERT_TRUE(cont.find(1) != nullptr && cont.find(1)->getKey() == 1); + ASSERT_TRUE(cont.find(3) != nullptr && cont.find(3)->getKey() == 3); + ASSERT_TRUE(cont.find(17) != nullptr && cont.find(17)->getKey() == 17); + } + { + using Cont = HashSet, HashTableGrower<4>>; + Cont cont; + + /// [(16)(30)............(14)(15)] erase of (16) breaks search for (30) because their natural position is 14. + cont.insert(14); + cont.insert(15); + cont.insert(16); + cont.insert(30); + cont.erase(16); + + ASSERT_TRUE(cont.find(16) == nullptr); + ASSERT_TRUE(cont.find(14) != nullptr && cont.find(14)->getKey() == 14); + ASSERT_TRUE(cont.find(15) != nullptr && cont.find(15)->getKey() == 15); + ASSERT_TRUE(cont.find(30) != nullptr && cont.find(30)->getKey() == 30); + } + { + using Cont = HashSet, HashTableGrower<4>>; + Cont cont; + + /// [(16)(30)............(14)(15)] erase of (15) breaks search for (30) because their natural position is 14. 
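+ /// (With a 16-cell buffer, place(x) == x & 15: 16 sits at cell 0, and 30 probes 14, 15, 0 before settling at 1, so the chain for 30 passes through cell 15; erasing 15 must backward-shift 30, otherwise find(30) would stop at the new hole.)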
+ cont.insert(14); + cont.insert(15); + cont.insert(16); + cont.insert(30); + cont.erase(15); + + ASSERT_TRUE(cont.find(15) == nullptr); + ASSERT_TRUE(cont.find(14) != nullptr && cont.find(14)->getKey() == 14); + ASSERT_TRUE(cont.find(16) != nullptr && cont.find(16)->getKey() == 16); + ASSERT_TRUE(cont.find(30) != nullptr && cont.find(30)->getKey() == 30); + } + { + using Cont = HashSet, HashTableGrower<1>>; + Cont cont; + + for (size_t i = 0; i < 5000; ++i) + { + cont.insert(i); + } + + for (size_t i = 0; i < 2500; ++i) + { + cont.erase(i); + } + + for (size_t i = 5000; i < 10000; ++i) + { + cont.insert(i); + } + + for (size_t i = 5000; i < 10000; ++i) + { + cont.erase(i); + } + + for (size_t i = 2500; i < 5000; ++i) + { + cont.erase(i); + } + + ASSERT_EQ(cont.size(), 0); + } +} + +TEST(HashTable, SerializationDeserialization) +{ + { + /// Use dummy hash to make it reproducible if default hash implementation will be changed + using Cont = HashSet, HashTableGrower<1>>; + + Cont cont; + + cont.insert(1); + cont.insert(2); + cont.insert(3); + + DB::WriteBufferFromOwnString wb; + cont.writeText(wb); + + std::string expected = "3,1,2,3"; + + ASSERT_EQ(wb.str(), expected); + + DB::ReadBufferFromString rb(expected); + + Cont deserialized; + deserialized.readText(rb); + ASSERT_EQ(convertToSet(cont), convertToSet(deserialized)); + } + { + using Cont = HashSet, HashTableGrower<1>>; + + Cont cont; + + cont.insert(1); + cont.insert(2); + cont.insert(3); + + DB::WriteBufferFromOwnString wb; + cont.write(wb); + + DB::ReadBufferFromString rb(wb.str()); + + Cont deserialized; + deserialized.read(rb); + ASSERT_EQ(convertToSet(cont), convertToSet(deserialized)); + } + { + using Cont = HashSet, HashTableGrower<1>>; + Cont cont; + + DB::WriteBufferFromOwnString wb; + cont.writeText(wb); + + std::string expected = "0"; + ASSERT_EQ(wb.str(), expected); + + DB::ReadBufferFromString rb(expected); + + Cont deserialized; + deserialized.readText(rb); + ASSERT_EQ(convertToSet(cont), convertToSet(deserialized)); + } + { + using Cont = HashSet; + Cont cont; + + DB::WriteBufferFromOwnString wb; + cont.write(wb); + + std::string expected; + expected += static_cast(0); + + ASSERT_EQ(wb.str(), expected); + + DB::ReadBufferFromString rb(expected); + + Cont deserialized; + deserialized.read(rb); + ASSERT_EQ(convertToSet(cont), convertToSet(deserialized)); + } +} diff --git a/src/Common/tests/hash_table.cpp b/src/Common/tests/hash_table.cpp deleted file mode 100644 index ebc22c5b5e5..00000000000 --- a/src/Common/tests/hash_table.cpp +++ /dev/null @@ -1,50 +0,0 @@ -#include -#include - -#include - -#include -#include - - -int main(int, char **) -{ - { - using Cont = HashSet, HashTableGrower<1>>; - Cont cont; - - cont.insert(1); - cont.insert(2); - - Cont::LookupResult it; - bool inserted; - int key = 3; - cont.emplace(key, it, inserted); - std::cerr << inserted << ", " << key << std::endl; - - cont.emplace(key, it, inserted); - std::cerr << inserted << ", " << key << std::endl; - - for (auto x : cont) - std::cerr << x.getValue() << std::endl; - - DB::WriteBufferFromOwnString wb; - cont.writeText(wb); - - std::cerr << "dump: " << wb.str() << std::endl; - } - - { - using Cont = HashSet< - DB::UInt128, - DB::UInt128TrivialHash>; - Cont cont; - - DB::WriteBufferFromOwnString wb; - cont.write(wb); - - std::cerr << "dump: " << wb.str() << std::endl; - } - - return 0; -} diff --git a/src/Common/ya.make b/src/Common/ya.make index 5bbc13f01e9..e515741a272 100644 --- a/src/Common/ya.make +++ b/src/Common/ya.make @@ -68,6 +68,7 @@ SRCS( 
StringUtils/StringUtils.cpp StudentTTest.cpp SymbolIndex.cpp + TLDListsHolder.cpp TaskStatsInfoGetter.cpp TerminalSize.cpp ThreadFuzzer.cpp diff --git a/src/Core/Settings.h b/src/Core/Settings.h index 2cbe0c16cae..0b4aa65a37b 100644 --- a/src/Core/Settings.h +++ b/src/Core/Settings.h @@ -65,6 +65,7 @@ class IColumn; M(UInt64, distributed_connections_pool_size, DBMS_DEFAULT_DISTRIBUTED_CONNECTIONS_POOL_SIZE, "Maximum number of connections with one remote server in the pool.", 0) \ M(UInt64, connections_with_failover_max_tries, DBMS_CONNECTION_POOL_WITH_FAILOVER_DEFAULT_MAX_TRIES, "The maximum number of attempts to connect to replicas.", 0) \ M(UInt64, s3_min_upload_part_size, 512*1024*1024, "The minimum size of part to upload during multipart upload to S3.", 0) \ + M(UInt64, s3_max_single_part_upload_size, 64*1024*1024, "The maximum size of object to upload using single part upload to S3.", 0) \ M(UInt64, s3_max_redirects, 10, "Max number of S3 redirect hops allowed.", 0) \ M(Bool, extremes, false, "Calculate minimums and maximums of the result columns. They can be output in JSON-formats.", IMPORTANT) \ M(Bool, use_uncompressed_cache, true, "Whether to use the cache of uncompressed blocks.", 0) \ @@ -255,6 +256,7 @@ class IColumn; M(OverflowMode, sort_overflow_mode, OverflowMode::THROW, "What to do when the limit is exceeded.", 0) \ M(UInt64, max_bytes_before_external_sort, 0, "", 0) \ M(UInt64, max_bytes_before_remerge_sort, 1000000000, "In case of ORDER BY with LIMIT, when memory usage is higher than specified threshold, perform additional steps of merging blocks before final merge to keep just top LIMIT rows.", 0) \ + M(Float, remerge_sort_lowered_memory_bytes_ratio, 2., "If memory usage after remerge is not reduced by this ratio, remerge will be disabled.", 0) \ \ M(UInt64, max_result_rows, 0, "Limit on result size in rows. Also checked for intermediate data sent from remote servers.", 0) \ M(UInt64, max_result_bytes, 0, "Limit on result size in bytes (uncompressed). Also checked for intermediate data sent from remote servers.", 0) \ @@ -406,8 +408,6 @@ class IColumn; \ M(UInt64, max_memory_usage_for_all_queries, 0, "Obsolete. Will be removed after 2020-10-20", 0) \ M(UInt64, multiple_joins_rewriter_version, 0, "Obsolete setting, does nothing. Will be removed after 2021-03-31", 0) \ - M(Bool, experimental_use_processors, true, "Obsolete setting, does nothing. Will be removed after 2020-11-29.", 0) \ - M(Bool, force_optimize_skip_unused_shards_no_nested, false, "Obsolete setting, does nothing. Will be removed after 2020-12-01. Use force_optimize_skip_unused_shards_nesting instead.", 0) \ M(Bool, enable_debug_queries, false, "Enables debug queries, but is now obsolete", 0) \ M(Bool, allow_experimental_database_atomic, true, "Obsolete setting, does nothing. Will be removed after 2021-02-12", 0) \ M(UnionMode, union_default_mode, UnionMode::DISTINCT, "Set default Union Mode in SelectWithUnion query. Possible values: empty string, 'ALL', 'DISTINCT'. 
If empty, query without Union Mode will throw exception.", 0) diff --git a/src/DataStreams/RemoteBlockInputStream.cpp b/src/DataStreams/RemoteBlockInputStream.cpp index c7c5ce2d00a..a62f7fca0b7 100644 --- a/src/DataStreams/RemoteBlockInputStream.cpp +++ b/src/DataStreams/RemoteBlockInputStream.cpp @@ -6,27 +6,27 @@ namespace DB RemoteBlockInputStream::RemoteBlockInputStream( Connection & connection, - const String & query_, const Block & header_, const Context & context_, const Settings * settings, + const String & query_, const Block & header_, const Context & context_, const ThrottlerPtr & throttler, const Scalars & scalars_, const Tables & external_tables_, QueryProcessingStage::Enum stage_) - : query_executor(connection, query_, header_, context_, settings, throttler, scalars_, external_tables_, stage_) + : query_executor(connection, query_, header_, context_, throttler, scalars_, external_tables_, stage_) { init(); } RemoteBlockInputStream::RemoteBlockInputStream( std::vector && connections, - const String & query_, const Block & header_, const Context & context_, const Settings * settings, + const String & query_, const Block & header_, const Context & context_, const ThrottlerPtr & throttler, const Scalars & scalars_, const Tables & external_tables_, QueryProcessingStage::Enum stage_) - : query_executor(std::move(connections), query_, header_, context_, settings, throttler, scalars_, external_tables_, stage_) + : query_executor(std::move(connections), query_, header_, context_, throttler, scalars_, external_tables_, stage_) { init(); } RemoteBlockInputStream::RemoteBlockInputStream( const ConnectionPoolWithFailoverPtr & pool, - const String & query_, const Block & header_, const Context & context_, const Settings * settings, + const String & query_, const Block & header_, const Context & context_, const ThrottlerPtr & throttler, const Scalars & scalars_, const Tables & external_tables_, QueryProcessingStage::Enum stage_) - : query_executor(pool, query_, header_, context_, settings, throttler, scalars_, external_tables_, stage_) + : query_executor(pool, query_, header_, context_, throttler, scalars_, external_tables_, stage_) { init(); } diff --git a/src/DataStreams/RemoteBlockInputStream.h b/src/DataStreams/RemoteBlockInputStream.h index 628feb0ab80..5ef05ee99eb 100644 --- a/src/DataStreams/RemoteBlockInputStream.h +++ b/src/DataStreams/RemoteBlockInputStream.h @@ -6,7 +6,6 @@ #include #include -#include #include #include #include @@ -16,32 +15,31 @@ namespace DB { +class Context; + /** This class allows one to launch queries on remote replicas of one shard and get results */ class RemoteBlockInputStream : public IBlockInputStream { public: /// Takes already set connection. - /// If `settings` is nullptr, settings will be taken from context. RemoteBlockInputStream( Connection & connection, - const String & query_, const Block & header_, const Context & context_, const Settings * settings = nullptr, + const String & query_, const Block & header_, const Context & context_, const ThrottlerPtr & throttler = nullptr, const Scalars & scalars_ = Scalars(), const Tables & external_tables_ = Tables(), QueryProcessingStage::Enum stage_ = QueryProcessingStage::Complete); /// Accepts several connections already taken from pool. - /// If `settings` is nullptr, settings will be taken from context. 
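+ /// Settings are now always taken from `context_`; a caller that used to pass a non-null `settings` can apply them up front instead, e.g. `Context ctx(context); ctx.setSettings(*settings);` (illustrative, mirroring the `context.setSettings(*settings)` call the removed constructors performed).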
RemoteBlockInputStream( std::vector && connections, - const String & query_, const Block & header_, const Context & context_, const Settings * settings = nullptr, + const String & query_, const Block & header_, const Context & context_, const ThrottlerPtr & throttler = nullptr, const Scalars & scalars_ = Scalars(), const Tables & external_tables_ = Tables(), QueryProcessingStage::Enum stage_ = QueryProcessingStage::Complete); /// Takes a pool and gets one or several connections from it. - /// If `settings` is nullptr, settings will be taken from context. RemoteBlockInputStream( const ConnectionPoolWithFailoverPtr & pool, - const String & query_, const Block & header_, const Context & context_, const Settings * settings = nullptr, + const String & query_, const Block & header_, const Context & context_, const ThrottlerPtr & throttler = nullptr, const Scalars & scalars_ = Scalars(), const Tables & external_tables_ = Tables(), QueryProcessingStage::Enum stage_ = QueryProcessingStage::Complete); diff --git a/src/DataStreams/RemoteQueryExecutor.cpp b/src/DataStreams/RemoteQueryExecutor.cpp index 9abce0edef1..c38f42893af 100644 --- a/src/DataStreams/RemoteQueryExecutor.cpp +++ b/src/DataStreams/RemoteQueryExecutor.cpp @@ -8,7 +8,9 @@ #include #include #include +#include #include +#include namespace DB { @@ -20,14 +22,11 @@ namespace ErrorCodes RemoteQueryExecutor::RemoteQueryExecutor( Connection & connection, - const String & query_, const Block & header_, const Context & context_, const Settings * settings, + const String & query_, const Block & header_, const Context & context_, ThrottlerPtr throttler, const Scalars & scalars_, const Tables & external_tables_, QueryProcessingStage::Enum stage_) : header(header_), query(query_), context(context_) , scalars(scalars_), external_tables(external_tables_), stage(stage_) { - if (settings) - context.setSettings(*settings); - create_multiplexed_connections = [this, &connection, throttler]() { return std::make_unique(connection, context.getSettingsRef(), throttler); @@ -36,14 +35,11 @@ RemoteQueryExecutor::RemoteQueryExecutor( RemoteQueryExecutor::RemoteQueryExecutor( std::vector && connections, - const String & query_, const Block & header_, const Context & context_, const Settings * settings, + const String & query_, const Block & header_, const Context & context_, const ThrottlerPtr & throttler, const Scalars & scalars_, const Tables & external_tables_, QueryProcessingStage::Enum stage_) : header(header_), query(query_), context(context_) , scalars(scalars_), external_tables(external_tables_), stage(stage_) { - if (settings) - context.setSettings(*settings); - create_multiplexed_connections = [this, connections, throttler]() mutable { return std::make_unique( @@ -53,14 +49,11 @@ RemoteQueryExecutor::RemoteQueryExecutor( RemoteQueryExecutor::RemoteQueryExecutor( const ConnectionPoolWithFailoverPtr & pool, - const String & query_, const Block & header_, const Context & context_, const Settings * settings, + const String & query_, const Block & header_, const Context & context_, const ThrottlerPtr & throttler, const Scalars & scalars_, const Tables & external_tables_, QueryProcessingStage::Enum stage_) : header(header_), query(query_), context(context_) , scalars(scalars_), external_tables(external_tables_), stage(stage_) { - if (settings) - context.setSettings(*settings); - create_multiplexed_connections = [this, pool, throttler]() { const Settings & current_settings = context.getSettingsRef(); @@ -147,7 +140,7 @@ void RemoteQueryExecutor::sendQuery() 
multiplexed_connections = create_multiplexed_connections(); - const auto& settings = context.getSettingsRef(); + const auto & settings = context.getSettingsRef(); if (settings.skip_unavailable_shards && 0 == multiplexed_connections->size()) return; diff --git a/src/DataStreams/RemoteQueryExecutor.h b/src/DataStreams/RemoteQueryExecutor.h index 0db0e0218be..eb03472504d 100644 --- a/src/DataStreams/RemoteQueryExecutor.h +++ b/src/DataStreams/RemoteQueryExecutor.h @@ -1,12 +1,15 @@ #pragma once -#include #include #include +#include +#include namespace DB { +class Context; + class Throttler; using ThrottlerPtr = std::shared_ptr; @@ -21,26 +24,23 @@ class RemoteQueryExecutor { public: /// Takes already set connection. - /// If `settings` is nullptr, settings will be taken from context. RemoteQueryExecutor( Connection & connection, - const String & query_, const Block & header_, const Context & context_, const Settings * settings = nullptr, + const String & query_, const Block & header_, const Context & context_, ThrottlerPtr throttler_ = nullptr, const Scalars & scalars_ = Scalars(), const Tables & external_tables_ = Tables(), QueryProcessingStage::Enum stage_ = QueryProcessingStage::Complete); /// Accepts several connections already taken from pool. - /// If `settings` is nullptr, settings will be taken from context. RemoteQueryExecutor( std::vector && connections, - const String & query_, const Block & header_, const Context & context_, const Settings * settings = nullptr, + const String & query_, const Block & header_, const Context & context_, const ThrottlerPtr & throttler = nullptr, const Scalars & scalars_ = Scalars(), const Tables & external_tables_ = Tables(), QueryProcessingStage::Enum stage_ = QueryProcessingStage::Complete); /// Takes a pool and gets one or several connections from it. - /// If `settings` is nullptr, settings will be taken from context. 
RemoteQueryExecutor( const ConnectionPoolWithFailoverPtr & pool, - const String & query_, const Block & header_, const Context & context_, const Settings * settings = nullptr, + const String & query_, const Block & header_, const Context & context_, const ThrottlerPtr & throttler = nullptr, const Scalars & scalars_ = Scalars(), const Tables & external_tables_ = Tables(), QueryProcessingStage::Enum stage_ = QueryProcessingStage::Complete); @@ -93,7 +93,7 @@ private: const String query; String query_id = ""; - Context context; + const Context & context; ProgressCallback progress_callback; ProfileInfoCallback profile_info_callback; diff --git a/src/DataTypes/DataTypeAggregateFunction.cpp b/src/DataTypes/DataTypeAggregateFunction.cpp index c4a62e64feb..9104c12120f 100644 --- a/src/DataTypes/DataTypeAggregateFunction.cpp +++ b/src/DataTypes/DataTypeAggregateFunction.cpp @@ -243,7 +243,7 @@ void DataTypeAggregateFunction::deserializeTextJSON(IColumn & column, ReadBuffer void DataTypeAggregateFunction::serializeTextXML(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings &) const { - writeXMLString(serializeToString(function, column, row_num), ostr); + writeXMLStringForTextElement(serializeToString(function, column, row_num), ostr); } diff --git a/src/DataTypes/DataTypeCustomSimpleTextSerialization.cpp b/src/DataTypes/DataTypeCustomSimpleTextSerialization.cpp index 75d0194c524..5bb963de667 100644 --- a/src/DataTypes/DataTypeCustomSimpleTextSerialization.cpp +++ b/src/DataTypes/DataTypeCustomSimpleTextSerialization.cpp @@ -85,7 +85,7 @@ void DataTypeCustomSimpleTextSerialization::deserializeTextJSON(IColumn & column void DataTypeCustomSimpleTextSerialization::serializeTextXML(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const { - writeXMLString(serializeToString(*this, column, row_num, settings), ostr); + writeXMLStringForTextElement(serializeToString(*this, column, row_num, settings), ostr); } } diff --git a/src/DataTypes/DataTypeEnum.cpp b/src/DataTypes/DataTypeEnum.cpp index 53b309e1db7..650a1da6407 100644 --- a/src/DataTypes/DataTypeEnum.cpp +++ b/src/DataTypes/DataTypeEnum.cpp @@ -195,7 +195,7 @@ void DataTypeEnum::serializeTextJSON(const IColumn & column, size_t row_nu template void DataTypeEnum::serializeTextXML(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings &) const { - writeXMLString(getNameForValue(assert_cast(column).getData()[row_num]), ostr); + writeXMLStringForTextElement(getNameForValue(assert_cast(column).getData()[row_num]), ostr); } template diff --git a/src/DataTypes/DataTypeFixedString.cpp b/src/DataTypes/DataTypeFixedString.cpp index e5c192085a4..585c5709be7 100644 --- a/src/DataTypes/DataTypeFixedString.cpp +++ b/src/DataTypes/DataTypeFixedString.cpp @@ -198,7 +198,7 @@ void DataTypeFixedString::deserializeTextJSON(IColumn & column, ReadBuffer & ist void DataTypeFixedString::serializeTextXML(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings &) const { const char * pos = reinterpret_cast(&assert_cast(column).getChars()[n * row_num]); - writeXMLString(pos, pos + n, ostr); + writeXMLStringForTextElement(pos, pos + n, ostr); } diff --git a/src/DataTypes/DataTypeString.cpp b/src/DataTypes/DataTypeString.cpp index 141f896cfc2..c752d136642 100644 --- a/src/DataTypes/DataTypeString.cpp +++ b/src/DataTypes/DataTypeString.cpp @@ -295,7 +295,7 @@ void DataTypeString::deserializeTextJSON(IColumn & column, ReadBuffer & istr, co void 
DataTypeString::serializeTextXML(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings &) const { - writeXMLString(assert_cast(column).getDataAt(row_num), ostr); + writeXMLStringForTextElement(assert_cast(column).getDataAt(row_num), ostr); } diff --git a/src/Databases/DatabaseAtomic.cpp b/src/Databases/DatabaseAtomic.cpp index af8b751e787..f9cc7a4197b 100644 --- a/src/Databases/DatabaseAtomic.cpp +++ b/src/Databases/DatabaseAtomic.cpp @@ -35,8 +35,8 @@ public: }; -DatabaseAtomic::DatabaseAtomic(String name_, String metadata_path_, UUID uuid, Context & context_) - : DatabaseOrdinary(name_, std::move(metadata_path_), "store/", "DatabaseAtomic (" + name_ + ")", context_) +DatabaseAtomic::DatabaseAtomic(String name_, String metadata_path_, UUID uuid, const String & logger_name, const Context & context_) + : DatabaseOrdinary(name_, std::move(metadata_path_), "store/", logger_name, context_) , path_to_table_symlinks(global_context.getPath() + "data/" + escapeForFileName(name_) + "/") , path_to_metadata_symlink(global_context.getPath() + "metadata/" + escapeForFileName(name_)) , db_uuid(uuid) @@ -46,6 +46,11 @@ DatabaseAtomic::DatabaseAtomic(String name_, String metadata_path_, UUID uuid, C tryCreateMetadataSymlink(); } +DatabaseAtomic::DatabaseAtomic(String name_, String metadata_path_, UUID uuid, const Context & context_) + : DatabaseAtomic(name_, std::move(metadata_path_), uuid, "DatabaseAtomic (" + name_ + ")", context_) +{ +} + String DatabaseAtomic::getTableDataPath(const String & table_name) const { std::lock_guard lock(mutex); diff --git a/src/Databases/DatabaseAtomic.h b/src/Databases/DatabaseAtomic.h index 82408ff3ab3..1b1c0cd4353 100644 --- a/src/Databases/DatabaseAtomic.h +++ b/src/Databases/DatabaseAtomic.h @@ -20,8 +20,8 @@ namespace DB class DatabaseAtomic : public DatabaseOrdinary { public: - - DatabaseAtomic(String name_, String metadata_path_, UUID uuid, Context & context_); + DatabaseAtomic(String name_, String metadata_path_, UUID uuid, const String & logger_name, const Context & context_); + DatabaseAtomic(String name_, String metadata_path_, UUID uuid, const Context & context_); String getEngineName() const override { return "Atomic"; } UUID getUUID() const override { return db_uuid; } @@ -51,14 +51,14 @@ public: void loadStoredObjects(Context & context, bool has_force_restore_data_flag, bool force_attach) override; /// Atomic database cannot be detached if there is detached table which still in use - void assertCanBeDetached(bool cleanup); + void assertCanBeDetached(bool cleanup) override; UUID tryGetTableUUID(const String & table_name) const override; void tryCreateSymlink(const String & table_name, const String & actual_data_path, bool if_data_path_exist = false); void tryRemoveSymlink(const String & table_name); - void waitDetachedTableNotInUse(const UUID & uuid); + void waitDetachedTableNotInUse(const UUID & uuid) override; private: void commitAlterTable(const StorageID & table_id, const String & table_metadata_tmp_path, const String & table_metadata_path) override; diff --git a/src/Databases/DatabaseFactory.cpp b/src/Databases/DatabaseFactory.cpp index 2bb4c595c05..0ac80706bd8 100644 --- a/src/Databases/DatabaseFactory.cpp +++ b/src/Databases/DatabaseFactory.cpp @@ -120,27 +120,32 @@ DatabasePtr DatabaseFactory::getImpl(const ASTCreateQuery & create, const String const auto & [remote_host_name, remote_port] = parseAddress(host_name_and_port, 3306); auto mysql_pool = mysqlxx::Pool(mysql_database_name, remote_host_name, mysql_user_name, 
mysql_user_password, remote_port); - if (engine_name == "MaterializeMySQL") + if (engine_name == "MySQL") { - MySQLClient client(remote_host_name, remote_port, mysql_user_name, mysql_user_password); + auto mysql_database_settings = std::make_unique(); - auto materialize_mode_settings = std::make_unique(); + mysql_database_settings->loadFromQueryContext(context); + mysql_database_settings->loadFromQuery(*engine_define); /// higher priority - if (engine_define->settings) - materialize_mode_settings->loadFromQuery(*engine_define); - - return std::make_shared( - context, database_name, metadata_path, engine_define, mysql_database_name, std::move(mysql_pool), std::move(client) - , std::move(materialize_mode_settings)); + return std::make_shared( + context, database_name, metadata_path, engine_define, mysql_database_name, std::move(mysql_database_settings), std::move(mysql_pool)); } - auto mysql_database_settings = std::make_unique(); + MySQLClient client(remote_host_name, remote_port, mysql_user_name, mysql_user_password); - mysql_database_settings->loadFromQueryContext(context); - mysql_database_settings->loadFromQuery(*engine_define); /// higher priority + auto materialize_mode_settings = std::make_unique(); - return std::make_shared( - context, database_name, metadata_path, engine_define, mysql_database_name, std::move(mysql_database_settings), std::move(mysql_pool)); + if (engine_define->settings) + materialize_mode_settings->loadFromQuery(*engine_define); + + if (create.uuid == UUIDHelpers::Nil) + return std::make_shared>( + context, database_name, metadata_path, uuid, mysql_database_name, std::move(mysql_pool), std::move(client) + , std::move(materialize_mode_settings)); + else + return std::make_shared>( + context, database_name, metadata_path, uuid, mysql_database_name, std::move(mysql_pool), std::move(client) + , std::move(materialize_mode_settings)); } catch (...) 
{ diff --git a/src/Databases/DatabaseOnDisk.cpp b/src/Databases/DatabaseOnDisk.cpp index 1e6b4019c4b..4f172f0f8de 100644 --- a/src/Databases/DatabaseOnDisk.cpp +++ b/src/Databases/DatabaseOnDisk.cpp @@ -400,7 +400,7 @@ void DatabaseOnDisk::iterateMetadataFiles(const Context & context, const Iterati { auto process_tmp_drop_metadata_file = [&](const String & file_name) { - assert(getEngineName() != "Atomic"); + assert(getUUID() == UUIDHelpers::Nil); static const char * tmp_drop_ext = ".sql.tmp_drop"; const std::string object_name = file_name.substr(0, file_name.size() - strlen(tmp_drop_ext)); if (Poco::File(context.getPath() + getDataPath() + '/' + object_name).exists()) diff --git a/src/Databases/DatabasesCommon.cpp b/src/Databases/DatabasesCommon.cpp index c5df954c2da..29262318138 100644 --- a/src/Databases/DatabasesCommon.cpp +++ b/src/Databases/DatabasesCommon.cpp @@ -80,7 +80,7 @@ StoragePtr DatabaseWithOwnTablesBase::detachTableUnlocked(const String & table_n auto table_id = res->getStorageID(); if (table_id.hasUUID()) { - assert(database_name == DatabaseCatalog::TEMPORARY_DATABASE || getEngineName() == "Atomic"); + assert(database_name == DatabaseCatalog::TEMPORARY_DATABASE || getUUID() != UUIDHelpers::Nil); DatabaseCatalog::instance().removeUUIDMapping(table_id.uuid); } @@ -102,7 +102,7 @@ void DatabaseWithOwnTablesBase::attachTableUnlocked(const String & table_name, c if (table_id.hasUUID()) { - assert(database_name == DatabaseCatalog::TEMPORARY_DATABASE || getEngineName() == "Atomic"); + assert(database_name == DatabaseCatalog::TEMPORARY_DATABASE || getUUID() != UUIDHelpers::Nil); DatabaseCatalog::instance().addUUIDMapping(table_id.uuid, shared_from_this(), table); } @@ -131,7 +131,7 @@ void DatabaseWithOwnTablesBase::shutdown() kv.second->shutdown(); if (table_id.hasUUID()) { - assert(getDatabaseName() == DatabaseCatalog::TEMPORARY_DATABASE || getEngineName() == "Atomic"); + assert(getDatabaseName() == DatabaseCatalog::TEMPORARY_DATABASE || getUUID() != UUIDHelpers::Nil); DatabaseCatalog::instance().removeUUIDMapping(table_id.uuid); } } diff --git a/src/Databases/IDatabase.h b/src/Databases/IDatabase.h index fadec5fe7a9..9a0eb8d9969 100644 --- a/src/Databases/IDatabase.h +++ b/src/Databases/IDatabase.h @@ -334,6 +334,10 @@ public: /// All tables and dictionaries should be detached before detaching the database. virtual bool shouldBeEmptyOnDetach() const { return true; } + virtual void assertCanBeDetached(bool /*cleanup*/) {} + + virtual void waitDetachedTableNotInUse(const UUID & /*uuid*/) { assert(false); } + /// Ask all tables to complete the background threads they are using and delete all table objects. 
     virtual void shutdown() = 0;
diff --git a/src/Databases/MySQL/DatabaseMaterializeMySQL.cpp b/src/Databases/MySQL/DatabaseMaterializeMySQL.cpp
index b5231a23d7e..6a9f1e37f8e 100644
--- a/src/Databases/MySQL/DatabaseMaterializeMySQL.cpp
+++ b/src/Databases/MySQL/DatabaseMaterializeMySQL.cpp
@@ -8,6 +8,7 @@
 # include
 # include
+# include
 # include
 # include
 # include
@@ -22,21 +23,37 @@ namespace DB
 namespace ErrorCodes
 {
     extern const int NOT_IMPLEMENTED;
+    extern const int LOGICAL_ERROR;
 }

-DatabaseMaterializeMySQL::DatabaseMaterializeMySQL(
-    const Context & context, const String & database_name_, const String & metadata_path_, const IAST * database_engine_define_
-    , const String & mysql_database_name_, mysqlxx::Pool && pool_, MySQLClient && client_, std::unique_ptr<MaterializeMySQLSettings> settings_)
-    : IDatabase(database_name_), global_context(context.getGlobalContext()), engine_define(database_engine_define_->clone())
-    , nested_database(std::make_shared<DatabaseOrdinary>(database_name_, metadata_path_, context))
-    , settings(std::move(settings_)), log(&Poco::Logger::get("DatabaseMaterializeMySQL"))
+template<>
+DatabaseMaterializeMySQL<DatabaseOrdinary>::DatabaseMaterializeMySQL(
+    const Context & context, const String & database_name_, const String & metadata_path_, UUID /*uuid*/,
+    const String & mysql_database_name_, mysqlxx::Pool && pool_, MySQLClient && client_, std::unique_ptr<MaterializeMySQLSettings> settings_)
+    : DatabaseOrdinary(database_name_
+        , metadata_path_
+        , "data/" + escapeForFileName(database_name_) + "/"
+        , "DatabaseMaterializeMySQL<Ordinary> (" + database_name_ + ")", context
+        )
+    , settings(std::move(settings_))
     , materialize_thread(context, database_name_, mysql_database_name_, std::move(pool_), std::move(client_), settings.get())
 {
 }

-void DatabaseMaterializeMySQL::rethrowExceptionIfNeed() const
+template<>
+DatabaseMaterializeMySQL<DatabaseAtomic>::DatabaseMaterializeMySQL(
+    const Context & context, const String & database_name_, const String & metadata_path_, UUID uuid,
+    const String & mysql_database_name_, mysqlxx::Pool && pool_, MySQLClient && client_, std::unique_ptr<MaterializeMySQLSettings> settings_)
+    : DatabaseAtomic(database_name_, metadata_path_, uuid, "DatabaseMaterializeMySQL<Atomic> (" + database_name_ + ")", context)
+    , settings(std::move(settings_))
+    , materialize_thread(context, database_name_, mysql_database_name_, std::move(pool_), std::move(client_), settings.get())
 {
-    std::unique_lock<std::mutex> lock(mutex);
+}
+
+template<typename Base>
+void DatabaseMaterializeMySQL<Base>::rethrowExceptionIfNeed() const
+{
+    std::unique_lock<std::mutex> lock(Base::mutex);

     if (!settings->allows_query_when_mysql_lost && exception)
     {
@@ -46,129 +63,71 @@ void DatabaseMaterializeMySQL::rethrowExceptionIfNeed() const
         }
         catch (Exception & ex)
         {
+            /// This method can be called from multiple threads
+            /// and Exception can be modified concurrently by calling addMessage(...),
+            /// so we rethrow a copy.
throw Exception(ex); } } } -void DatabaseMaterializeMySQL::setException(const std::exception_ptr & exception_) +template +void DatabaseMaterializeMySQL::setException(const std::exception_ptr & exception_) { - std::unique_lock lock(mutex); + std::unique_lock lock(Base::mutex); exception = exception_; } -ASTPtr DatabaseMaterializeMySQL::getCreateDatabaseQuery() const -{ - const auto & create_query = std::make_shared(); - create_query->database = database_name; - create_query->set(create_query->storage, engine_define); - return create_query; -} - -void DatabaseMaterializeMySQL::loadStoredObjects(Context & context, bool has_force_restore_data_flag, bool force_attach) +template +void DatabaseMaterializeMySQL::loadStoredObjects(Context & context, bool has_force_restore_data_flag, bool force_attach) { + Base::loadStoredObjects(context, has_force_restore_data_flag, force_attach); try { - std::unique_lock lock(mutex); - nested_database->loadStoredObjects(context, has_force_restore_data_flag, force_attach); materialize_thread.startSynchronization(); + started_up = true; } catch (...) { - tryLogCurrentException(log, "Cannot load MySQL nested database stored objects."); + tryLogCurrentException(Base::log, "Cannot load MySQL nested database stored objects."); if (!force_attach) throw; } } -void DatabaseMaterializeMySQL::shutdown() +template +void DatabaseMaterializeMySQL::createTable(const Context & context, const String & name, const StoragePtr & table, const ASTPtr & query) { - materialize_thread.stopSynchronization(); - - auto iterator = nested_database->getTablesIterator(global_context, {}); - - /// We only shutdown the table, The tables is cleaned up when destructed database - for (; iterator->isValid(); iterator->next()) - iterator->table()->shutdown(); + assertCalledFromSyncThreadOrDrop("create table"); + Base::createTable(context, name, table, query); } -bool DatabaseMaterializeMySQL::empty() const +template +void DatabaseMaterializeMySQL::dropTable(const Context & context, const String & name, bool no_delay) { - return nested_database->empty(); + assertCalledFromSyncThreadOrDrop("drop table"); + Base::dropTable(context, name, no_delay); } -String DatabaseMaterializeMySQL::getDataPath() const +template +void DatabaseMaterializeMySQL::attachTable(const String & name, const StoragePtr & table, const String & relative_table_path) { - return nested_database->getDataPath(); + assertCalledFromSyncThreadOrDrop("attach table"); + Base::attachTable(name, table, relative_table_path); } -String DatabaseMaterializeMySQL::getMetadataPath() const +template +StoragePtr DatabaseMaterializeMySQL::detachTable(const String & name) { - return nested_database->getMetadataPath(); + assertCalledFromSyncThreadOrDrop("detach table"); + return Base::detachTable(name); } -String DatabaseMaterializeMySQL::getTableDataPath(const String & table_name) const +template +void DatabaseMaterializeMySQL::renameTable(const Context & context, const String & name, IDatabase & to_database, const String & to_name, bool exchange, bool dictionary) { - return nested_database->getTableDataPath(table_name); -} - -String DatabaseMaterializeMySQL::getTableDataPath(const ASTCreateQuery & query) const -{ - return nested_database->getTableDataPath(query); -} - -String DatabaseMaterializeMySQL::getObjectMetadataPath(const String & table_name) const -{ - return nested_database->getObjectMetadataPath(table_name); -} - -UUID DatabaseMaterializeMySQL::tryGetTableUUID(const String & table_name) const -{ - return 
nested_database->tryGetTableUUID(table_name); -} - -time_t DatabaseMaterializeMySQL::getObjectMetadataModificationTime(const String & name) const -{ - return nested_database->getObjectMetadataModificationTime(name); -} - -void DatabaseMaterializeMySQL::createTable(const Context & context, const String & name, const StoragePtr & table, const ASTPtr & query) -{ - if (!MaterializeMySQLSyncThread::isMySQLSyncThread()) - throw Exception("MaterializeMySQL database not support create table.", ErrorCodes::NOT_IMPLEMENTED); - - nested_database->createTable(context, name, table, query); -} - -void DatabaseMaterializeMySQL::dropTable(const Context & context, const String & name, bool no_delay) -{ - if (!MaterializeMySQLSyncThread::isMySQLSyncThread()) - throw Exception("MaterializeMySQL database not support drop table.", ErrorCodes::NOT_IMPLEMENTED); - - nested_database->dropTable(context, name, no_delay); -} - -void DatabaseMaterializeMySQL::attachTable(const String & name, const StoragePtr & table, const String & relative_table_path) -{ - if (!MaterializeMySQLSyncThread::isMySQLSyncThread()) - throw Exception("MaterializeMySQL database not support attach table.", ErrorCodes::NOT_IMPLEMENTED); - - nested_database->attachTable(name, table, relative_table_path); -} - -StoragePtr DatabaseMaterializeMySQL::detachTable(const String & name) -{ - if (!MaterializeMySQLSyncThread::isMySQLSyncThread()) - throw Exception("MaterializeMySQL database not support detach table.", ErrorCodes::NOT_IMPLEMENTED); - - return nested_database->detachTable(name); -} - -void DatabaseMaterializeMySQL::renameTable(const Context & context, const String & name, IDatabase & to_database, const String & to_name, bool exchange, bool dictionary) -{ - if (!MaterializeMySQLSyncThread::isMySQLSyncThread()) - throw Exception("MaterializeMySQL database not support rename table.", ErrorCodes::NOT_IMPLEMENTED); + assertCalledFromSyncThreadOrDrop("rename table"); if (exchange) throw Exception("MaterializeMySQL database not support exchange table.", ErrorCodes::NOT_IMPLEMENTED); @@ -176,57 +135,37 @@ void DatabaseMaterializeMySQL::renameTable(const Context & context, const String if (dictionary) throw Exception("MaterializeMySQL database not support rename dictionary.", ErrorCodes::NOT_IMPLEMENTED); - if (to_database.getDatabaseName() != getDatabaseName()) + if (to_database.getDatabaseName() != Base::getDatabaseName()) throw Exception("Cannot rename with other database for MaterializeMySQL database.", ErrorCodes::NOT_IMPLEMENTED); - nested_database->renameTable(context, name, *nested_database, to_name, exchange, dictionary); + Base::renameTable(context, name, *this, to_name, exchange, dictionary); } -void DatabaseMaterializeMySQL::alterTable(const Context & context, const StorageID & table_id, const StorageInMemoryMetadata & metadata) +template +void DatabaseMaterializeMySQL::alterTable(const Context & context, const StorageID & table_id, const StorageInMemoryMetadata & metadata) { - if (!MaterializeMySQLSyncThread::isMySQLSyncThread()) - throw Exception("MaterializeMySQL database not support alter table.", ErrorCodes::NOT_IMPLEMENTED); - - nested_database->alterTable(context, table_id, metadata); + assertCalledFromSyncThreadOrDrop("alter table"); + Base::alterTable(context, table_id, metadata); } -bool DatabaseMaterializeMySQL::shouldBeEmptyOnDetach() const +template +void DatabaseMaterializeMySQL::drop(const Context & context) { - return false; + /// Remove metadata info + Poco::File metadata(Base::getMetadataPath() + "/.metadata"); + + if 
(metadata.exists()) + metadata.remove(false); + + Base::drop(context); } -void DatabaseMaterializeMySQL::drop(const Context & context) -{ - if (nested_database->shouldBeEmptyOnDetach()) - { - for (auto iterator = nested_database->getTablesIterator(context, {}); iterator->isValid(); iterator->next()) - { - TableExclusiveLockHolder table_lock = iterator->table()->lockExclusively( - context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); - - nested_database->dropTable(context, iterator->name(), true); - } - - /// Remove metadata info - Poco::File metadata(getMetadataPath() + "/.metadata"); - - if (metadata.exists()) - metadata.remove(false); - } - - nested_database->drop(context); -} - -bool DatabaseMaterializeMySQL::isTableExist(const String & name, const Context & context) const -{ - return nested_database->isTableExist(name, context); -} - -StoragePtr DatabaseMaterializeMySQL::tryGetTable(const String & name, const Context & context) const +template +StoragePtr DatabaseMaterializeMySQL::tryGetTable(const String & name, const Context & context) const { if (!MaterializeMySQLSyncThread::isMySQLSyncThread()) { - StoragePtr nested_storage = nested_database->tryGetTable(name, context); + StoragePtr nested_storage = Base::tryGetTable(name, context); if (!nested_storage) return {}; @@ -234,20 +173,71 @@ StoragePtr DatabaseMaterializeMySQL::tryGetTable(const String & name, const Cont return std::make_shared(std::move(nested_storage), this); } - return nested_database->tryGetTable(name, context); + return Base::tryGetTable(name, context); } -DatabaseTablesIteratorPtr DatabaseMaterializeMySQL::getTablesIterator(const Context & context, const FilterByNameFunction & filter_by_table_name) +template +DatabaseTablesIteratorPtr DatabaseMaterializeMySQL::getTablesIterator(const Context & context, const DatabaseOnDisk::FilterByNameFunction & filter_by_table_name) { if (!MaterializeMySQLSyncThread::isMySQLSyncThread()) { - DatabaseTablesIteratorPtr iterator = nested_database->getTablesIterator(context, filter_by_table_name); + DatabaseTablesIteratorPtr iterator = Base::getTablesIterator(context, filter_by_table_name); return std::make_unique(std::move(iterator), this); } - return nested_database->getTablesIterator(context, filter_by_table_name); + return Base::getTablesIterator(context, filter_by_table_name); } +template +void DatabaseMaterializeMySQL::assertCalledFromSyncThreadOrDrop(const char * method) const +{ + if (!MaterializeMySQLSyncThread::isMySQLSyncThread() && started_up) + throw Exception(ErrorCodes::NOT_IMPLEMENTED, "MaterializeMySQL database not support {}", method); +} + +template +void DatabaseMaterializeMySQL::shutdownSynchronizationThread() +{ + materialize_thread.stopSynchronization(); + started_up = false; +} + +template class Helper, typename... Args> +auto castToMaterializeMySQLAndCallHelper(Database * database, Args && ... 
args) +{ + using Ordinary = DatabaseMaterializeMySQL; + using Atomic = DatabaseMaterializeMySQL; + using ToOrdinary = typename std::conditional_t, const Ordinary *, Ordinary *>; + using ToAtomic = typename std::conditional_t, const Atomic *, Atomic *>; + if (auto * database_materialize = typeid_cast(database)) + return (database_materialize->*Helper::v)(std::forward(args)...); + if (auto * database_materialize = typeid_cast(database)) + return (database_materialize->*Helper::v)(std::forward(args)...); + + throw Exception("LOGICAL_ERROR: cannot cast to DatabaseMaterializeMySQL, it is a bug.", ErrorCodes::LOGICAL_ERROR); +} + +template struct HelperSetException { static constexpr auto v = &T::setException; }; +void setSynchronizationThreadException(const DatabasePtr & materialize_mysql_db, const std::exception_ptr & exception) +{ + castToMaterializeMySQLAndCallHelper(materialize_mysql_db.get(), exception); +} + +template struct HelperStopSync { static constexpr auto v = &T::shutdownSynchronizationThread; }; +void stopDatabaseSynchronization(const DatabasePtr & materialize_mysql_db) +{ + castToMaterializeMySQLAndCallHelper(materialize_mysql_db.get()); +} + +template struct HelperRethrow { static constexpr auto v = &T::rethrowExceptionIfNeed; }; +void rethrowSyncExceptionIfNeed(const IDatabase * materialize_mysql_db) +{ + castToMaterializeMySQLAndCallHelper(materialize_mysql_db); +} + +template class DatabaseMaterializeMySQL; +template class DatabaseMaterializeMySQL; + } #endif diff --git a/src/Databases/MySQL/DatabaseMaterializeMySQL.h b/src/Databases/MySQL/DatabaseMaterializeMySQL.h index 799db65b481..e1229269a33 100644 --- a/src/Databases/MySQL/DatabaseMaterializeMySQL.h +++ b/src/Databases/MySQL/DatabaseMaterializeMySQL.h @@ -17,48 +17,34 @@ namespace DB * * All table structure and data will be written to the local file system */ -class DatabaseMaterializeMySQL : public IDatabase +template +class DatabaseMaterializeMySQL : public Base { public: + DatabaseMaterializeMySQL( - const Context & context, const String & database_name_, const String & metadata_path_, - const IAST * database_engine_define_, const String & mysql_database_name_, mysqlxx::Pool && pool_, + const Context & context, const String & database_name_, const String & metadata_path_, UUID uuid, + const String & mysql_database_name_, mysqlxx::Pool && pool_, MySQLClient && client_, std::unique_ptr settings_); void rethrowExceptionIfNeed() const; void setException(const std::exception_ptr & exception); protected: - const Context & global_context; - ASTPtr engine_define; - DatabasePtr nested_database; std::unique_ptr settings; - Poco::Logger * log; MaterializeMySQLSyncThread materialize_thread; std::exception_ptr exception; + std::atomic_bool started_up{false}; + public: String getEngineName() const override { return "MaterializeMySQL"; } - ASTPtr getCreateDatabaseQuery() const override; - void loadStoredObjects(Context & context, bool has_force_restore_data_flag, bool force_attach) override; - void shutdown() override; - - bool empty() const override; - - String getDataPath() const override; - - String getTableDataPath(const String & table_name) const override; - - String getTableDataPath(const ASTCreateQuery & query) const override; - - UUID tryGetTableUUID(const String & table_name) const override; - void createTable(const Context & context, const String & name, const StoragePtr & table, const ASTPtr & query) override; void dropTable(const Context & context, const String & name, bool no_delay) override; @@ -71,23 +57,22 @@ 
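// [Editor's sketch, not part of the patch] The angle-bracket contents of the
// castToMaterializeMySQLAndCallHelper() helper in the .cpp hunk above were lost in
// extraction; the dispatch pattern it implements is assumed to look like this: try both
// concrete instantiations of the class template and invoke the member function that the
// Helper trait points at, preserving const-ness of the incoming pointer.
template <typename Database, template <class> class Helper, typename... Args>
auto castToMaterializeMySQLAndCallHelper(Database * database, Args && ... args)
{
    using Ordinary = DatabaseMaterializeMySQL<DatabaseOrdinary>;
    using Atomic = DatabaseMaterializeMySQL<DatabaseAtomic>;
    using ToOrdinary = std::conditional_t<std::is_const_v<Database>, const Ordinary *, Ordinary *>;
    using ToAtomic = std::conditional_t<std::is_const_v<Database>, const Atomic *, Atomic *>;
    if (auto * db = typeid_cast<ToOrdinary>(database))
        return (db->*Helper<Ordinary>::v)(std::forward<Args>(args)...);
    if (auto * db = typeid_cast<ToAtomic>(database))
        return (db->*Helper<Atomic>::v)(std::forward<Args>(args)...);
    throw Exception("LOGICAL_ERROR: cannot cast to DatabaseMaterializeMySQL, it is a bug.", ErrorCodes::LOGICAL_ERROR);
}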
public: void alterTable(const Context & context, const StorageID & table_id, const StorageInMemoryMetadata & metadata) override; - time_t getObjectMetadataModificationTime(const String & name) const override; - - String getMetadataPath() const override; - - String getObjectMetadataPath(const String & table_name) const override; - - bool shouldBeEmptyOnDetach() const override; - void drop(const Context & context) override; - bool isTableExist(const String & name, const Context & context) const override; - StoragePtr tryGetTable(const String & name, const Context & context) const override; - DatabaseTablesIteratorPtr getTablesIterator(const Context & context, const FilterByNameFunction & filter_by_table_name) override; + DatabaseTablesIteratorPtr getTablesIterator(const Context & context, const DatabaseOnDisk::FilterByNameFunction & filter_by_table_name) override; + + void assertCalledFromSyncThreadOrDrop(const char * method) const; + + void shutdownSynchronizationThread(); }; + +void setSynchronizationThreadException(const DatabasePtr & materialize_mysql_db, const std::exception_ptr & exception); +void stopDatabaseSynchronization(const DatabasePtr & materialize_mysql_db); +void rethrowSyncExceptionIfNeed(const IDatabase * materialize_mysql_db); + } #endif diff --git a/src/Databases/MySQL/DatabaseMaterializeTablesIterator.h b/src/Databases/MySQL/DatabaseMaterializeTablesIterator.h index 86a5cbf8206..54031de40a2 100644 --- a/src/Databases/MySQL/DatabaseMaterializeTablesIterator.h +++ b/src/Databases/MySQL/DatabaseMaterializeTablesIterator.h @@ -2,7 +2,6 @@ #include #include -#include namespace DB { @@ -30,7 +29,7 @@ public: UUID uuid() const override { return nested_iterator->uuid(); } - DatabaseMaterializeTablesIterator(DatabaseTablesIteratorPtr nested_iterator_, DatabaseMaterializeMySQL * database_) + DatabaseMaterializeTablesIterator(DatabaseTablesIteratorPtr nested_iterator_, const IDatabase * database_) : nested_iterator(std::move(nested_iterator_)), database(database_) { } @@ -38,8 +37,7 @@ public: private: mutable std::vector tables; DatabaseTablesIteratorPtr nested_iterator; - DatabaseMaterializeMySQL * database; - + const IDatabase * database; }; } diff --git a/src/Databases/MySQL/MaterializeMySQLSyncThread.cpp b/src/Databases/MySQL/MaterializeMySQLSyncThread.cpp index bc7c9e4c554..1da033fa4b3 100644 --- a/src/Databases/MySQL/MaterializeMySQLSyncThread.cpp +++ b/src/Databases/MySQL/MaterializeMySQLSyncThread.cpp @@ -71,15 +71,6 @@ static BlockIO tryToExecuteQuery(const String & query_to_execute, Context & quer } } -static inline DatabaseMaterializeMySQL & getDatabase(const String & database_name) -{ - DatabasePtr database = DatabaseCatalog::instance().getDatabase(database_name); - - if (DatabaseMaterializeMySQL * database_materialize = typeid_cast(database.get())) - return *database_materialize; - - throw Exception("LOGICAL_ERROR: cannot cast to DatabaseMaterializeMySQL, it is a bug.", ErrorCodes::LOGICAL_ERROR); -} MaterializeMySQLSyncThread::~MaterializeMySQLSyncThread() { @@ -190,7 +181,8 @@ void MaterializeMySQLSyncThread::synchronization() { client.disconnect(); tryLogCurrentException(log); - getDatabase(database_name).setException(std::current_exception()); + auto db = DatabaseCatalog::instance().getDatabase(database_name); + setSynchronizationThreadException(db, std::current_exception()); } } @@ -343,7 +335,7 @@ std::optional MaterializeMySQLSyncThread::prepareSynchroniz opened_transaction = false; MaterializeMetadata metadata( - connection, 
getDatabase(database_name).getMetadataPath() + "/.metadata", mysql_database_name, opened_transaction); + connection, DatabaseCatalog::instance().getDatabase(database_name)->getMetadataPath() + "/.metadata", mysql_database_name, opened_transaction); if (!metadata.need_dumping_tables.empty()) { diff --git a/src/Dictionaries/HTTPDictionarySource.cpp b/src/Dictionaries/HTTPDictionarySource.cpp index 18a97f34486..67bd8462036 100644 --- a/src/Dictionaries/HTTPDictionarySource.cpp +++ b/src/Dictionaries/HTTPDictionarySource.cpp @@ -2,6 +2,7 @@ #include #include #include +#include #include #include #include diff --git a/src/Dictionaries/XDBCDictionarySource.cpp b/src/Dictionaries/XDBCDictionarySource.cpp index 832c30ed4b7..89df4b606fe 100644 --- a/src/Dictionaries/XDBCDictionarySource.cpp +++ b/src/Dictionaries/XDBCDictionarySource.cpp @@ -6,6 +6,7 @@ #include #include #include +#include #include #include #include diff --git a/src/Disks/S3/DiskS3.cpp b/src/Disks/S3/DiskS3.cpp index 507af58f9fa..4786c05f8b0 100644 --- a/src/Disks/S3/DiskS3.cpp +++ b/src/Disks/S3/DiskS3.cpp @@ -3,6 +3,7 @@ #include "Disks/DiskFactory.h" #include +#include #include #include #include @@ -16,6 +17,8 @@ #include #include #include +#include +#include #include #include @@ -63,64 +66,64 @@ void DiskS3::AwsS3KeyKeeper::addKey(const String & key) back().push_back(obj); } -namespace +String getRandomName() { - String getRandomName() + std::uniform_int_distribution distribution('a', 'z'); + String res(32, ' '); /// The number of bits of entropy should be not less than 128. + for (auto & c : res) + c = distribution(thread_local_rng); + return res; +} + +template +void throwIfError(Aws::Utils::Outcome && response) +{ + if (!response.IsSuccess()) { - std::uniform_int_distribution distribution('a', 'z'); - String res(32, ' '); /// The number of bits of entropy should be not less than 128. - for (auto & c : res) - c = distribution(thread_local_rng); - return res; + const auto & err = response.GetError(); + throw Exception(err.GetMessage(), static_cast(err.GetErrorType())); } +} - template - void throwIfError(Aws::Utils::Outcome && response) +/** + * S3 metadata file layout: + * Number of S3 objects, Total size of all S3 objects. + * Each S3 object represents path where object located in S3 and size of object. + */ +struct DiskS3::Metadata +{ + /// Metadata file version. + static constexpr UInt32 VERSION_ABSOLUTE_PATHS = 1; + static constexpr UInt32 VERSION_RELATIVE_PATHS = 2; + static constexpr UInt32 VERSION_READ_ONLY_FLAG = 3; + + using PathAndSize = std::pair; + + /// S3 root path. + const String & s3_root_path; + + /// Disk path. + const String & disk_path; + /// Relative path to metadata file on local FS. + String metadata_file_path; + /// Total size of all S3 objects. + size_t total_size; + /// S3 objects paths and their sizes. + std::vector s3_objects; + /// Number of references (hardlinks) to this metadata file. + UInt32 ref_count; + /// Flag indicates that file is read only. + bool read_only = false; + + /// Load metadata by path or create empty if `create` flag is set. 
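// [Editor's worked example, not part of the patch] What a metadata file produced by
// Metadata::save() below looks like on the local disk; the object keys and sizes are
// invented for illustration:
//
//     2
//     3	157286400
//     52428800	mgpyvnzrqilxduhtkbeyfosjwcaqnxur
//     52428800	qwcklfohrznmxaedtbspuivgyjhsdpel
//     52428800	tbhdnexlwgqariyfszvmkcoupjsmqfwi
//     1
//     0
//
// Line 1 is the format version (VERSION_RELATIVE_PATHS = 2); line 2 is
// "<object count>\t<total size in bytes>"; each object line is
// "<size>\t<key relative to s3_root_path>"; then ref_count (number of hard links)
// and the read_only flag.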
+ explicit Metadata(const String & s3_root_path_, const String & disk_path_, const String & metadata_file_path_, bool create = false) + : s3_root_path(s3_root_path_), disk_path(disk_path_), metadata_file_path(metadata_file_path_), total_size(0), s3_objects(0), ref_count(0) { - if (!response.IsSuccess()) + if (create) + return; + + try { - const auto & err = response.GetError(); - throw Exception(err.GetMessage(), static_cast(err.GetErrorType())); - } - } - - /** - * S3 metadata file layout: - * Number of S3 objects, Total size of all S3 objects. - * Each S3 object represents path where object located in S3 and size of object. - */ - struct Metadata - { - /// Metadata file version. - static constexpr UInt32 VERSION_ABSOLUTE_PATHS = 1; - static constexpr UInt32 VERSION_RELATIVE_PATHS = 2; - static constexpr UInt32 VERSION_READ_ONLY_FLAG = 3; - - using PathAndSize = std::pair; - - /// S3 root path. - const String & s3_root_path; - - /// Disk path. - const String & disk_path; - /// Relative path to metadata file on local FS. - String metadata_file_path; - /// Total size of all S3 objects. - size_t total_size; - /// S3 objects paths and their sizes. - std::vector s3_objects; - /// Number of references (hardlinks) to this metadata file. - UInt32 ref_count; - /// Flag indicates that file is read only. - bool read_only = false; - - /// Load metadata by path or create empty if `create` flag is set. - explicit Metadata(const String & s3_root_path_, const String & disk_path_, const String & metadata_file_path_, bool create = false) - : s3_root_path(s3_root_path_), disk_path(disk_path_), metadata_file_path(metadata_file_path_), total_size(0), s3_objects(0), ref_count(0) - { - if (create) - return; - ReadBufferFromFile buf(disk_path + metadata_file_path, 1024); /* reasonable buffer size for small file */ UInt32 version; @@ -129,7 +132,7 @@ namespace if (version < VERSION_ABSOLUTE_PATHS || version > VERSION_READ_ONLY_FLAG) throw Exception( "Unknown metadata file version. Path: " + disk_path + metadata_file_path - + " Version: " + std::to_string(version) + ", Maximum expected version: " + std::to_string(VERSION_READ_ONLY_FLAG), + + " Version: " + std::to_string(version) + ", Maximum expected version: " + std::to_string(VERSION_READ_ONLY_FLAG), ErrorCodes::UNKNOWN_FORMAT); assertChar('\n', buf); @@ -169,226 +172,244 @@ namespace assertChar('\n', buf); } } - - void addObject(const String & path, size_t size) + catch (Exception & e) { - total_size += size; - s3_objects.emplace_back(path, size); + if (e.code() == ErrorCodes::UNKNOWN_FORMAT) + throw; + + throw Exception("Failed to read metadata file", e, ErrorCodes::UNKNOWN_FORMAT); } + } - /// Fsync metadata file if 'sync' flag is set. - void save(bool sync = false) - { - WriteBufferFromFile buf(disk_path + metadata_file_path, 1024); - - writeIntText(VERSION_RELATIVE_PATHS, buf); - writeChar('\n', buf); - - writeIntText(s3_objects.size(), buf); - writeChar('\t', buf); - writeIntText(total_size, buf); - writeChar('\n', buf); - for (const auto & [s3_object_path, s3_object_size] : s3_objects) - { - writeIntText(s3_object_size, buf); - writeChar('\t', buf); - writeEscapedString(s3_object_path, buf); - writeChar('\n', buf); - } - - writeIntText(ref_count, buf); - writeChar('\n', buf); - - writeBoolText(read_only, buf); - writeChar('\n', buf); - - buf.finalize(); - if (sync) - buf.sync(); - } - }; - - /// Reads data from S3 using stored paths in metadata. 
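// [Editor's sketch, not part of the patch] The core of the seek logic in
// ReadIndirectBufferFromS3 below: a logical file is a concatenation of S3 objects, so an
// absolute offset must be mapped to (object index, offset within that object), which is
// what initialize() does. A self-contained equivalent (needs <vector>, <string>, <utility>):
static std::pair<size_t, size_t> locateObject(
    const std::vector<std::pair<std::string, size_t>> & objects, /// (key, size) as stored in metadata
    size_t absolute_offset)
{
    for (size_t i = 0; i < objects.size(); ++i)
    {
        if (absolute_offset < objects[i].second)
            return {i, absolute_offset}; /// offset falls inside object i
        absolute_offset -= objects[i].second; /// skip this object entirely
    }
    return {objects.size(), 0}; /// past the end of the logical file
}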
- class ReadIndirectBufferFromS3 final : public ReadBufferFromFileBase + void addObject(const String & path, size_t size) { - public: - ReadIndirectBufferFromS3( - std::shared_ptr client_ptr_, const String & bucket_, Metadata metadata_, size_t buf_size_) - : client_ptr(std::move(client_ptr_)), bucket(bucket_), metadata(std::move(metadata_)), buf_size(buf_size_) + total_size += size; + s3_objects.emplace_back(path, size); + } + + /// Fsync metadata file if 'sync' flag is set. + void save(bool sync = false) + { + WriteBufferFromFile buf(disk_path + metadata_file_path, 1024); + + writeIntText(VERSION_RELATIVE_PATHS, buf); + writeChar('\n', buf); + + writeIntText(s3_objects.size(), buf); + writeChar('\t', buf); + writeIntText(total_size, buf); + writeChar('\n', buf); + for (const auto & [s3_object_path, s3_object_size] : s3_objects) { + writeIntText(s3_object_size, buf); + writeChar('\t', buf); + writeEscapedString(s3_object_path, buf); + writeChar('\n', buf); } - off_t seek(off_t offset_, int whence) override + writeIntText(ref_count, buf); + writeChar('\n', buf); + + writeBoolText(read_only, buf); + writeChar('\n', buf); + + buf.finalize(); + if (sync) + buf.sync(); + } +}; + +DiskS3::Metadata DiskS3::readMeta(const String & path) const +{ + return Metadata(s3_root_path, metadata_path, path); +} + +DiskS3::Metadata DiskS3::createMeta(const String & path) const +{ + return Metadata(s3_root_path, metadata_path, path, true); +} + +/// Reads data from S3 using stored paths in metadata. +class ReadIndirectBufferFromS3 final : public ReadBufferFromFileBase +{ +public: + ReadIndirectBufferFromS3( + std::shared_ptr client_ptr_, const String & bucket_, DiskS3::Metadata metadata_, size_t buf_size_) + : client_ptr(std::move(client_ptr_)), bucket(bucket_), metadata(std::move(metadata_)), buf_size(buf_size_) + { + } + + off_t seek(off_t offset_, int whence) override + { + if (whence == SEEK_CUR) { - if (whence == SEEK_CUR) + /// If position within current working buffer - shift pos. + if (working_buffer.size() && size_t(getPosition() + offset_) < absolute_position) { - /// If position within current working buffer - shift pos. - if (working_buffer.size() && size_t(getPosition() + offset_) < absolute_position) - { - pos += offset_; - return getPosition(); - } - else - { - absolute_position += offset_; - } - } - else if (whence == SEEK_SET) - { - /// If position within current working buffer - shift pos. - if (working_buffer.size() && size_t(offset_) >= absolute_position - working_buffer.size() - && size_t(offset_) < absolute_position) - { - pos = working_buffer.end() - (absolute_position - offset_); - return getPosition(); - } - else - { - absolute_position = offset_; - } + pos += offset_; + return getPosition(); } else - throw Exception("Only SEEK_SET or SEEK_CUR modes are allowed.", ErrorCodes::CANNOT_SEEK_THROUGH_FILE); + { + absolute_position += offset_; + } + } + else if (whence == SEEK_SET) + { + /// If position within current working buffer - shift pos. 
+ if (working_buffer.size() && size_t(offset_) >= absolute_position - working_buffer.size() + && size_t(offset_) < absolute_position) + { + pos = working_buffer.end() - (absolute_position - offset_); + return getPosition(); + } + else + { + absolute_position = offset_; + } + } + else + throw Exception("Only SEEK_SET or SEEK_CUR modes are allowed.", ErrorCodes::CANNOT_SEEK_THROUGH_FILE); + current_buf = initialize(); + pos = working_buffer.end(); + + return absolute_position; + } + + off_t getPosition() override { return absolute_position - available(); } + + std::string getFileName() const override { return metadata.metadata_file_path; } + +private: + std::unique_ptr initialize() + { + size_t offset = absolute_position; + for (size_t i = 0; i < metadata.s3_objects.size(); ++i) + { + current_buf_idx = i; + const auto & [path, size] = metadata.s3_objects[i]; + if (size > offset) + { + auto buf = std::make_unique(client_ptr, bucket, metadata.s3_root_path + path, buf_size); + buf->seek(offset, SEEK_SET); + return buf; + } + offset -= size; + } + return nullptr; + } + + bool nextImpl() override + { + /// Find first available buffer that fits to given offset. + if (!current_buf) current_buf = initialize(); - pos = working_buffer.end(); - return absolute_position; - } - - off_t getPosition() override { return absolute_position - available(); } - - std::string getFileName() const override { return metadata.metadata_file_path; } - - private: - std::unique_ptr initialize() + /// If current buffer has remaining data - use it. + if (current_buf && current_buf->next()) { - size_t offset = absolute_position; - for (size_t i = 0; i < metadata.s3_objects.size(); ++i) - { - current_buf_idx = i; - const auto & [path, size] = metadata.s3_objects[i]; - if (size > offset) - { - auto buf = std::make_unique(client_ptr, bucket, metadata.s3_root_path + path, buf_size); - buf->seek(offset, SEEK_SET); - return buf; - } - offset -= size; - } - return nullptr; - } - - bool nextImpl() override - { - /// Find first available buffer that fits to given offset. - if (!current_buf) - current_buf = initialize(); - - /// If current buffer has remaining data - use it. - if (current_buf && current_buf->next()) - { - working_buffer = current_buf->buffer(); - absolute_position += working_buffer.size(); - return true; - } - - /// If there is no available buffers - nothing to read. - if (current_buf_idx + 1 >= metadata.s3_objects.size()) - return false; - - ++current_buf_idx; - const auto & path = metadata.s3_objects[current_buf_idx].first; - current_buf = std::make_unique(client_ptr, bucket, metadata.s3_root_path + path, buf_size); - current_buf->next(); working_buffer = current_buf->buffer(); absolute_position += working_buffer.size(); - return true; } - std::shared_ptr client_ptr; - const String & bucket; - Metadata metadata; - size_t buf_size; + /// If there is no available buffers - nothing to read. + if (current_buf_idx + 1 >= metadata.s3_objects.size()) + return false; - size_t absolute_position = 0; - size_t current_buf_idx = 0; - std::unique_ptr current_buf; - }; + ++current_buf_idx; + const auto & path = metadata.s3_objects[current_buf_idx].first; + current_buf = std::make_unique(client_ptr, bucket, metadata.s3_root_path + path, buf_size); + current_buf->next(); + working_buffer = current_buf->buffer(); + absolute_position += working_buffer.size(); - /// Stores data in S3 and adds the object key (S3 path) and object size to metadata file on local FS. 
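// [Editor's sketch, not part of the patch] The write path condensed: WriteIndirectBufferFromS3
// below streams data into a fresh, randomly named S3 object and only publishes it in
// finalize(), by appending (key, size) to the local metadata file. `uploadToS3` is a
// hypothetical stand-in for the real WriteBufferFromS3 machinery.
void appendObject(DiskS3::Metadata & metadata, const std::string & bytes)
{
    const std::string key = getRandomName(); /// 32 chars from ['a'..'z'], ~150 bits of entropy
    uploadToS3(metadata.s3_root_path + key, bytes); /// hypothetical helper: PUT to the bucket
    metadata.addObject(key, bytes.size()); /// record the object in the logical file
    metadata.save(); /// readers see the new data only after this point
}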
- class WriteIndirectBufferFromS3 final : public WriteBufferFromFileBase + return true; + } + + std::shared_ptr client_ptr; + const String & bucket; + DiskS3::Metadata metadata; + size_t buf_size; + + size_t absolute_position = 0; + size_t current_buf_idx = 0; + std::unique_ptr current_buf; +}; + +/// Stores data in S3 and adds the object key (S3 path) and object size to metadata file on local FS. +class WriteIndirectBufferFromS3 final : public WriteBufferFromFileBase +{ +public: + WriteIndirectBufferFromS3( + std::shared_ptr & client_ptr_, + const String & bucket_, + DiskS3::Metadata metadata_, + const String & s3_path_, + std::optional object_metadata_, + size_t min_upload_part_size, + size_t max_single_part_upload_size, + size_t buf_size_) + : WriteBufferFromFileBase(buf_size_, nullptr, 0) + , impl(WriteBufferFromS3(client_ptr_, bucket_, metadata_.s3_root_path + s3_path_, min_upload_part_size, max_single_part_upload_size,std::move(object_metadata_), buf_size_)) + , metadata(std::move(metadata_)) + , s3_path(s3_path_) { - public: - WriteIndirectBufferFromS3( - std::shared_ptr & client_ptr_, - const String & bucket_, - Metadata metadata_, - const String & s3_path_, - bool is_multipart, - size_t min_upload_part_size, - size_t buf_size_) - : WriteBufferFromFileBase(buf_size_, nullptr, 0) - , impl(WriteBufferFromS3(client_ptr_, bucket_, metadata_.s3_root_path + s3_path_, min_upload_part_size, is_multipart, buf_size_)) - , metadata(std::move(metadata_)) - , s3_path(s3_path_) + } + + ~WriteIndirectBufferFromS3() override + { + try { + finalize(); } - - ~WriteIndirectBufferFromS3() override + catch (...) { - try - { - finalize(); - } - catch (...) - { - tryLogCurrentException(__PRETTY_FUNCTION__); - } + tryLogCurrentException(__PRETTY_FUNCTION__); } + } - void finalize() override - { - if (finalized) - return; + void finalize() override + { + if (finalized) + return; - next(); - impl.finalize(); + next(); + impl.finalize(); - metadata.addObject(s3_path, count()); - metadata.save(); + metadata.addObject(s3_path, count()); + metadata.save(); - finalized = true; - } + finalized = true; + } - void sync() override - { - if (finalized) - metadata.save(true); - } + void sync() override + { + if (finalized) + metadata.save(true); + } - std::string getFileName() const override { return metadata.metadata_file_path; } + std::string getFileName() const override { return metadata.metadata_file_path; } - private: - void nextImpl() override - { - /// Transfer current working buffer to WriteBufferFromS3. - impl.swap(*this); +private: + void nextImpl() override + { + /// Transfer current working buffer to WriteBufferFromS3. + impl.swap(*this); - /// Write actual data to S3. - impl.next(); + /// Write actual data to S3. + impl.next(); - /// Return back working buffer. - impl.swap(*this); - } + /// Return back working buffer. 
+ impl.swap(*this); + } - WriteBufferFromS3 impl; - bool finalized = false; - Metadata metadata; - String s3_path; - }; -} + WriteBufferFromS3 impl; + bool finalized = false; + DiskS3::Metadata metadata; + String s3_path; +}; class DiskS3DirectoryIterator final : public IDiskDirectoryIterator @@ -521,8 +542,9 @@ DiskS3::DiskS3( String s3_root_path_, String metadata_path_, size_t min_upload_part_size_, - size_t min_multi_part_upload_size_, - size_t min_bytes_for_seek_) + size_t max_single_part_upload_size_, + size_t min_bytes_for_seek_, + bool send_metadata_) : IDisk(std::make_unique()) , name(std::move(name_)) , client(std::move(client_)) @@ -531,8 +553,9 @@ DiskS3::DiskS3( , s3_root_path(std::move(s3_root_path_)) , metadata_path(std::move(metadata_path_)) , min_upload_part_size(min_upload_part_size_) - , min_multi_part_upload_size(min_multi_part_upload_size_) + , max_single_part_upload_size(max_single_part_upload_size_) , min_bytes_for_seek(min_bytes_for_seek_) + , send_metadata(send_metadata_) { } @@ -560,7 +583,7 @@ bool DiskS3::isDirectory(const String & path) const size_t DiskS3::getFileSize(const String & path) const { - Metadata metadata(s3_root_path, metadata_path, path); + auto metadata = readMeta(path); return metadata.total_size; } @@ -613,8 +636,8 @@ void DiskS3::copyFile(const String & from_path, const String & to_path) if (exists(to_path)) remove(to_path); - Metadata from(s3_root_path, metadata_path, from_path); - Metadata to(s3_root_path, metadata_path, to_path, true); + auto from = readMeta(from_path); + auto to = createMeta(to_path); for (const auto & [path, size] : from.s3_objects) { @@ -633,7 +656,7 @@ void DiskS3::copyFile(const String & from_path, const String & to_path) std::unique_ptr DiskS3::readFile(const String & path, size_t buf_size, size_t, size_t, size_t) const { - Metadata metadata(s3_root_path, metadata_path, path); + auto metadata = readMeta(path); LOG_DEBUG(&Poco::Logger::get("DiskS3"), "Read from file by path: {}. Existing S3 objects: {}", backQuote(metadata_path + path), metadata.s3_objects.size()); @@ -642,40 +665,39 @@ std::unique_ptr DiskS3::readFile(const String & path, si return std::make_unique(std::move(reader), min_bytes_for_seek); } -std::unique_ptr DiskS3::writeFile(const String & path, size_t buf_size, WriteMode mode, size_t estimated_size, size_t) +std::unique_ptr DiskS3::writeFile(const String & path, size_t buf_size, WriteMode mode, size_t, size_t) { bool exist = exists(path); - if (exist) - { - Metadata metadata(s3_root_path, metadata_path, path); - if (metadata.read_only) - throw Exception("File is read-only: " + path, ErrorCodes::PATH_ACCESS_DENIED); - } + if (exist && readMeta(path).read_only) + throw Exception("File is read-only: " + path, ErrorCodes::PATH_ACCESS_DENIED); + /// Path to store new S3 object. auto s3_path = getRandomName(); - bool is_multipart = estimated_size >= min_multi_part_upload_size; + auto object_metadata = createObjectMetadata(path); if (!exist || mode == WriteMode::Rewrite) { /// If metadata file exists - remove and create new. if (exist) remove(path); - Metadata metadata(s3_root_path, metadata_path, path, true); + auto metadata = createMeta(path); /// Save empty metadata to disk to have ability to get file size while buffer is not finalized. metadata.save(); - LOG_DEBUG(&Poco::Logger::get("DiskS3"), "Write to file by path: {} New S3 path: {}", backQuote(metadata_path + path), s3_root_path + s3_path); + LOG_DEBUG(&Poco::Logger::get("DiskS3"), "Write to file by path: {}. 
New S3 path: {}", backQuote(metadata_path + path), s3_root_path + s3_path); - return std::make_unique(client, bucket, metadata, s3_path, is_multipart, min_upload_part_size, buf_size); + return std::make_unique( + client, bucket, metadata, s3_path, object_metadata, min_upload_part_size, max_single_part_upload_size, buf_size); } else { - Metadata metadata(s3_root_path, metadata_path, path); + auto metadata = readMeta(path); LOG_DEBUG(&Poco::Logger::get("DiskS3"), "Append to file by path: {}. New S3 path: {}. Existing S3 objects: {}.", backQuote(metadata_path + path), s3_root_path + s3_path, metadata.s3_objects.size()); - return std::make_unique(client, bucket, metadata, s3_path, is_multipart, min_upload_part_size, buf_size); + return std::make_unique( + client, bucket, metadata, s3_path, object_metadata, min_upload_part_size, max_single_part_upload_size, buf_size); } } @@ -684,9 +706,16 @@ void DiskS3::removeMeta(const String & path, AwsS3KeyKeeper & keys) LOG_DEBUG(&Poco::Logger::get("DiskS3"), "Remove file by path: {}", backQuote(metadata_path + path)); Poco::File file(metadata_path + path); - if (file.isFile()) + + if (!file.isFile()) { - Metadata metadata(s3_root_path, metadata_path, path); + file.remove(); + return; + } + + try + { + auto metadata = readMeta(path); /// If there is no references - delete content from S3. if (metadata.ref_count == 0) @@ -703,9 +732,22 @@ void DiskS3::removeMeta(const String & path, AwsS3KeyKeeper & keys) file.remove(); } } - else - file.remove(); + catch (const Exception & e) + { + /// If it's impossible to read meta - just remove it from FS. + if (e.code() == ErrorCodes::UNKNOWN_FORMAT) + { + LOG_WARNING( + &Poco::Logger::get("DiskS3"), + "Metadata file {} can't be read by reason: {}. Removing it forcibly.", + backQuote(path), + e.nested() ? e.nested()->message() : e.message()); + file.remove(); + } + else + throw; + } } void DiskS3::removeMetaRecursive(const String & path, AwsS3KeyKeeper & keys) @@ -799,7 +841,7 @@ Poco::Timestamp DiskS3::getLastModified(const String & path) void DiskS3::createHardLink(const String & src_path, const String & dst_path) { /// Increment number of references. - Metadata src(s3_root_path, metadata_path, src_path); + auto src = readMeta(src_path); ++src.ref_count; src.save(); @@ -810,7 +852,7 @@ void DiskS3::createHardLink(const String & src_path, const String & dst_path) void DiskS3::createFile(const String & path) { /// Create empty metadata file. - Metadata metadata(s3_root_path, metadata_path, path, true); + auto metadata = createMeta(path); metadata.save(); } @@ -818,7 +860,7 @@ void DiskS3::setReadOnly(const String & path) { /// We should store read only flag inside metadata file (instead of using FS flag), /// because we modify metadata file when create hard-links from it. 
-    Metadata metadata(s3_root_path, metadata_path, path);
+    auto metadata = readMeta(path);
     metadata.read_only = true;
     metadata.save();
 }
@@ -847,4 +889,12 @@ void DiskS3::shutdown()
     client->DisableRequestProcessing();
 }

+std::optional<DiskS3::ObjectMetadata> DiskS3::createObjectMetadata(const String & path) const
+{
+    if (send_metadata)
+        return (DiskS3::ObjectMetadata){{"path", path}};
+
+    return {};
+}
+
 }
diff --git a/src/Disks/S3/DiskS3.h b/src/Disks/S3/DiskS3.h
index fe8c47931b5..f62c603adda 100644
--- a/src/Disks/S3/DiskS3.h
+++ b/src/Disks/S3/DiskS3.h
@@ -19,9 +19,12 @@ namespace DB
 class DiskS3 : public IDisk
 {
 public:
+    using ObjectMetadata = std::map<String, String>;
+
     friend class DiskS3Reservation;

     class AwsS3KeyKeeper;
+    struct Metadata;

     DiskS3(
         String name_,
@@ -31,8 +34,9 @@ public:
         String s3_root_path_,
         String metadata_path_,
         size_t min_upload_part_size_,
-        size_t min_multi_part_upload_size_,
-        size_t min_bytes_for_seek_);
+        size_t max_single_part_upload_size_,
+        size_t min_bytes_for_seek_,
+        bool send_metadata_);

     const String & getName() const override { return name; }

@@ -116,6 +120,10 @@ private:
     void removeMeta(const String & path, AwsS3KeyKeeper & keys);
     void removeMetaRecursive(const String & path, AwsS3KeyKeeper & keys);
     void removeAws(const AwsS3KeyKeeper & keys);
+    std::optional<ObjectMetadata> createObjectMetadata(const String & path) const;
+
+    Metadata readMeta(const String & path) const;
+    Metadata createMeta(const String & path) const;

 private:
     const String name;
@@ -125,8 +133,9 @@ private:
     const String s3_root_path;
     const String metadata_path;
     size_t min_upload_part_size;
-    size_t min_multi_part_upload_size;
+    size_t max_single_part_upload_size;
     size_t min_bytes_for_seek;
+    bool send_metadata;

     UInt64 reserved_bytes = 0;
     UInt64 reservation_count = 0;
diff --git a/src/Disks/S3/registerDiskS3.cpp b/src/Disks/S3/registerDiskS3.cpp
index 809d6728189..fd658d95327 100644
--- a/src/Disks/S3/registerDiskS3.cpp
+++ b/src/Disks/S3/registerDiskS3.cpp
@@ -148,8 +148,9 @@ void registerDiskS3(DiskFactory & factory)
         uri.key,
         metadata_path,
         context.getSettingsRef().s3_min_upload_part_size,
-        config.getUInt64(config_prefix + ".min_multi_part_upload_size", 10 * 1024 * 1024),
-        config.getUInt64(config_prefix + ".min_bytes_for_seek", 1024 * 1024));
+        context.getSettingsRef().s3_max_single_part_upload_size,
+        config.getUInt64(config_prefix + ".min_bytes_for_seek", 1024 * 1024),
+        config.getBool(config_prefix + ".send_object_metadata", false));

     /// This code is used only to check access to the corresponding disk.
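// [Editor's sketch, not part of the patch] The hard-link semantics used by createHardLink()
// and removeMeta() in the DiskS3.cpp hunks above: links share the same S3 objects, and only
// the local metadata files carry a reference count, so S3 content may be deleted only when
// the last link disappears. The decrement branch below is an assumption inferred from the
// visible ref_count handling, not shown verbatim in this hunk.
void removeOneLink(DiskS3::Metadata & meta)
{
    if (meta.ref_count == 0)
    {
        /// Last reference: the S3 objects themselves are queued for deletion.
    }
    else
    {
        --meta.ref_count; /// other links still point at the same objects
        meta.save();
    }
    /// In both branches the local metadata file is then removed from the FS.
}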
if (!config.getBool(config_prefix + ".skip_access_check", false)) diff --git a/src/Formats/FormatFactory.h b/src/Formats/FormatFactory.h index 3b0811d579a..0fe6f19f0b7 100644 --- a/src/Formats/FormatFactory.h +++ b/src/Formats/FormatFactory.h @@ -16,7 +16,6 @@ namespace DB class Block; class Context; -struct FormatSettings; struct Settings; struct FormatFactorySettings; diff --git a/src/Functions/FunctionFactory.h b/src/Functions/FunctionFactory.h index 7872c192b41..7990e78daf8 100644 --- a/src/Functions/FunctionFactory.h +++ b/src/Functions/FunctionFactory.h @@ -2,7 +2,6 @@ #include #include -#include #include #include diff --git a/src/Functions/FunctionsComparison.h b/src/Functions/FunctionsComparison.h index 957c7e0ab3e..e674f8690ff 100644 --- a/src/Functions/FunctionsComparison.h +++ b/src/Functions/FunctionsComparison.h @@ -1216,10 +1216,7 @@ public: { return res; } - else if ((isColumnedAsDecimal(left_type) || isColumnedAsDecimal(right_type)) - // Comparing Date and DateTime64 requires implicit conversion, - // otherwise Date is treated as number. - && !(date_and_datetime && (isDate(left_type) || isDate(right_type)))) + else if (isColumnedAsDecimal(left_type) || isColumnedAsDecimal(right_type)) { // compare if (!allowDecimalComparison(left_type, right_type) && !date_and_datetime) diff --git a/src/Functions/URL/firstSignificantSubdomain.h b/src/Functions/URL/ExtractFirstSignificantSubdomain.h similarity index 77% rename from src/Functions/URL/firstSignificantSubdomain.h rename to src/Functions/URL/ExtractFirstSignificantSubdomain.h index 522e7905f69..c13b5f50156 100644 --- a/src/Functions/URL/firstSignificantSubdomain.h +++ b/src/Functions/URL/ExtractFirstSignificantSubdomain.h @@ -7,12 +7,27 @@ namespace DB { +struct FirstSignificantSubdomainDefaultLookup +{ + bool operator()(const char *src, size_t len) const + { + return tldLookup::isValid(src, len); + } +}; + template struct ExtractFirstSignificantSubdomain { static size_t getReserveLengthForElement() { return 10; } static void execute(const Pos data, const size_t size, Pos & res_data, size_t & res_size, Pos * out_domain_end = nullptr) + { + FirstSignificantSubdomainDefaultLookup loookup; + return execute(loookup, data, size, res_data, res_size, out_domain_end); + } + + template + static void execute(const Lookup & lookup, const Pos data, const size_t size, Pos & res_data, size_t & res_size, Pos * out_domain_end = nullptr) { res_data = data; res_size = 0; @@ -65,7 +80,7 @@ struct ExtractFirstSignificantSubdomain end_of_level_domain = end; } - if (tldLookup::isValid(last_3_periods[1] + 1, end_of_level_domain - last_3_periods[1] - 1) != nullptr) + if (lookup(last_3_periods[1] + 1, end_of_level_domain - last_3_periods[1] - 1)) { res_data += last_3_periods[2] + 1 - begin; res_size = last_3_periods[1] - last_3_periods[2] - 1; diff --git a/src/Functions/URL/FirstSignificantSubdomainCustomImpl.h b/src/Functions/URL/FirstSignificantSubdomainCustomImpl.h new file mode 100644 index 00000000000..244b32459c1 --- /dev/null +++ b/src/Functions/URL/FirstSignificantSubdomainCustomImpl.h @@ -0,0 +1,112 @@ +#pragma once + +#include +#include +#include +#include +#include +#include +#include + +namespace DB +{ + +namespace ErrorCodes +{ + extern const int ILLEGAL_COLUMN; + extern const int ILLEGAL_TYPE_OF_ARGUMENT; +} + +struct FirstSignificantSubdomainCustomtLookup +{ + const TLDList & tld_list; + FirstSignificantSubdomainCustomtLookup(const std::string & tld_list_name) + : tld_list(TLDListsHolder::getInstance().getTldList(tld_list_name)) + { + } + 
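// [Editor's note, illustration only] With this lookup object the generic
// ExtractFirstSignificantSubdomain::execute() consults a user-defined TLD list (loaded by
// TLDListsHolder from the server configuration) instead of the built-in gperf table, e.g.:
//
//     FirstSignificantSubdomainCustomtLookup lookup("public_suffix_list"); /// list name is made up
//     bool listed = lookup("co.uk", 5); /// true iff "co.uk" appears in that custom list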
+ bool operator()(const char *pos, size_t len) const + { + return tld_list.has(StringRef{pos, len}); + } +}; + +template +class FunctionCutToFirstSignificantSubdomainCustomImpl : public IFunction +{ +public: + static constexpr auto name = Name::name; + static FunctionPtr create(const Context &) { return std::make_shared(); } + + String getName() const override { return name; } + size_t getNumberOfArguments() const override { return 2; } + + DataTypePtr getReturnTypeImpl(const ColumnsWithTypeAndName & arguments) const override + { + if (!isString(arguments[0].type)) + throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT, + "Illegal type {} of first argument of function {}. Must be String.", + arguments[0].type->getName(), getName()); + if (!isString(arguments[1].type)) + throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT, + "Illegal type {} of second argument (TLD_list_name) of function {}. Must be String/FixedString.", + arguments[1].type->getName(), getName()); + const auto * column = arguments[1].column.get(); + if (!column || !checkAndGetColumnConstStringOrFixedString(column)) + throw Exception(ErrorCodes::ILLEGAL_COLUMN, + "The second argument of function {} should be a constant string with the name of the custom TLD", + getName()); + + return arguments[0].type; + } + + ColumnPtr executeImpl(const ColumnsWithTypeAndName & arguments, const DataTypePtr & /*result_type*/, size_t /*input_rows_count*/) const override + { + const ColumnConst * column_tld_list_name = checkAndGetColumnConstStringOrFixedString(arguments[1].column.get()); + FirstSignificantSubdomainCustomtLookup tld_lookup(column_tld_list_name->getValue()); + + /// FIXME: convertToFullColumnIfConst() is suboptimal + auto column = arguments[0].column->convertToFullColumnIfConst(); + if (const ColumnString * col = checkAndGetColumn(*column)) + { + auto col_res = ColumnString::create(); + vector(tld_lookup, col->getChars(), col->getOffsets(), col_res->getChars(), col_res->getOffsets()); + return col_res; + } + else + throw Exception( + "Illegal column " + arguments[0].column->getName() + " of argument of function " + getName(), + ErrorCodes::ILLEGAL_COLUMN); + } + + static void vector(FirstSignificantSubdomainCustomtLookup & tld_lookup, + const ColumnString::Chars & data, const ColumnString::Offsets & offsets, + ColumnString::Chars & res_data, ColumnString::Offsets & res_offsets) + { + size_t size = offsets.size(); + res_offsets.resize(size); + res_data.reserve(size * Extractor::getReserveLengthForElement()); + + size_t prev_offset = 0; + size_t res_offset = 0; + + /// Matched part. 
+ Pos start; + size_t length; + + for (size_t i = 0; i < size; ++i) + { + Extractor::execute(tld_lookup, reinterpret_cast(&data[prev_offset]), offsets[i] - prev_offset - 1, start, length); + + res_data.resize(res_data.size() + length + 1); + memcpySmallAllowReadWriteOverflow15(&res_data[res_offset], start, length); + res_offset += length + 1; + res_data[res_offset - 1] = 0; + + res_offsets[i] = res_offset; + prev_offset = offsets[i]; + } + } +}; + +} diff --git a/src/Functions/URL/cutToFirstSignificantSubdomain.cpp b/src/Functions/URL/cutToFirstSignificantSubdomain.cpp index 43d614a7036..82eb366dae6 100644 --- a/src/Functions/URL/cutToFirstSignificantSubdomain.cpp +++ b/src/Functions/URL/cutToFirstSignificantSubdomain.cpp @@ -1,6 +1,6 @@ #include #include -#include "firstSignificantSubdomain.h" +#include "ExtractFirstSignificantSubdomain.h" namespace DB diff --git a/src/Functions/URL/cutToFirstSignificantSubdomainCustom.cpp b/src/Functions/URL/cutToFirstSignificantSubdomainCustom.cpp new file mode 100644 index 00000000000..11fd27e317b --- /dev/null +++ b/src/Functions/URL/cutToFirstSignificantSubdomainCustom.cpp @@ -0,0 +1,43 @@ +#include +#include "ExtractFirstSignificantSubdomain.h" +#include "FirstSignificantSubdomainCustomImpl.h" + +namespace DB +{ + +template +struct CutToFirstSignificantSubdomainCustom +{ + static size_t getReserveLengthForElement() { return 15; } + + static void execute(FirstSignificantSubdomainCustomtLookup & tld_lookup, const Pos data, const size_t size, Pos & res_data, size_t & res_size) + { + res_data = data; + res_size = 0; + + Pos tmp_data; + size_t tmp_length; + Pos domain_end; + ExtractFirstSignificantSubdomain::execute(tld_lookup, data, size, tmp_data, tmp_length, &domain_end); + + if (tmp_length == 0) + return; + + res_data = tmp_data; + res_size = domain_end - tmp_data; + } +}; + +struct NameCutToFirstSignificantSubdomainCustom { static constexpr auto name = "cutToFirstSignificantSubdomainCustom"; }; +using FunctionCutToFirstSignificantSubdomainCustom = FunctionCutToFirstSignificantSubdomainCustomImpl, NameCutToFirstSignificantSubdomainCustom>; + +struct NameCutToFirstSignificantSubdomainCustomWithWWW { static constexpr auto name = "cutToFirstSignificantSubdomainCustomWithWWW"; }; +using FunctionCutToFirstSignificantSubdomainCustomWithWWW = FunctionCutToFirstSignificantSubdomainCustomImpl, NameCutToFirstSignificantSubdomainCustomWithWWW>; + +void registerFunctionCutToFirstSignificantSubdomainCustom(FunctionFactory & factory) +{ + factory.registerFunction(); + factory.registerFunction(); +} + +} diff --git a/src/Functions/URL/firstSignificantSubdomain.cpp b/src/Functions/URL/firstSignificantSubdomain.cpp index 7db18824375..87659940938 100644 --- a/src/Functions/URL/firstSignificantSubdomain.cpp +++ b/src/Functions/URL/firstSignificantSubdomain.cpp @@ -1,12 +1,13 @@ #include #include -#include "firstSignificantSubdomain.h" +#include "ExtractFirstSignificantSubdomain.h" namespace DB { struct NameFirstSignificantSubdomain { static constexpr auto name = "firstSignificantSubdomain"; }; + using FunctionFirstSignificantSubdomain = FunctionStringToString>, NameFirstSignificantSubdomain>; void registerFunctionFirstSignificantSubdomain(FunctionFactory & factory) diff --git a/src/Functions/URL/firstSignificantSubdomainCustom.cpp b/src/Functions/URL/firstSignificantSubdomainCustom.cpp new file mode 100644 index 00000000000..675b4a346de --- /dev/null +++ b/src/Functions/URL/firstSignificantSubdomainCustom.cpp @@ -0,0 +1,18 @@ +#include +#include 
"ExtractFirstSignificantSubdomain.h" +#include "FirstSignificantSubdomainCustomImpl.h" + + +namespace DB +{ + +struct NameFirstSignificantSubdomainCustom { static constexpr auto name = "firstSignificantSubdomainCustom"; }; + +using FunctionFirstSignificantSubdomainCustom = FunctionCutToFirstSignificantSubdomainCustomImpl, NameFirstSignificantSubdomainCustom>; + +void registerFunctionFirstSignificantSubdomainCustom(FunctionFactory & factory) +{ + factory.registerFunction(); +} + +} diff --git a/src/Functions/URL/registerFunctionsURL.cpp b/src/Functions/URL/registerFunctionsURL.cpp index f3906c2723e..91118074b7a 100644 --- a/src/Functions/URL/registerFunctionsURL.cpp +++ b/src/Functions/URL/registerFunctionsURL.cpp @@ -7,6 +7,7 @@ void registerFunctionProtocol(FunctionFactory & factory); void registerFunctionDomain(FunctionFactory & factory); void registerFunctionDomainWithoutWWW(FunctionFactory & factory); void registerFunctionFirstSignificantSubdomain(FunctionFactory & factory); +void registerFunctionFirstSignificantSubdomainCustom(FunctionFactory & factory); void registerFunctionTopLevelDomain(FunctionFactory & factory); void registerFunctionPort(FunctionFactory & factory); void registerFunctionPath(FunctionFactory & factory); @@ -20,6 +21,7 @@ void registerFunctionExtractURLParameterNames(FunctionFactory & factory); void registerFunctionURLHierarchy(FunctionFactory & factory); void registerFunctionURLPathHierarchy(FunctionFactory & factory); void registerFunctionCutToFirstSignificantSubdomain(FunctionFactory & factory); +void registerFunctionCutToFirstSignificantSubdomainCustom(FunctionFactory & factory); void registerFunctionCutWWW(FunctionFactory & factory); void registerFunctionCutQueryString(FunctionFactory & factory); void registerFunctionCutFragment(FunctionFactory & factory); @@ -34,6 +36,7 @@ void registerFunctionsURL(FunctionFactory & factory) registerFunctionDomain(factory); registerFunctionDomainWithoutWWW(factory); registerFunctionFirstSignificantSubdomain(factory); + registerFunctionFirstSignificantSubdomainCustom(factory); registerFunctionTopLevelDomain(factory); registerFunctionPort(factory); registerFunctionPath(factory); @@ -47,6 +50,7 @@ void registerFunctionsURL(FunctionFactory & factory) registerFunctionURLHierarchy(factory); registerFunctionURLPathHierarchy(factory); registerFunctionCutToFirstSignificantSubdomain(factory); + registerFunctionCutToFirstSignificantSubdomainCustom(factory); registerFunctionCutWWW(factory); registerFunctionCutQueryString(factory); registerFunctionCutFragment(factory); diff --git a/src/Functions/URL/tldLookup.h b/src/Functions/URL/tldLookup.h index 25857be3dd2..38c118b6bb1 100644 --- a/src/Functions/URL/tldLookup.h +++ b/src/Functions/URL/tldLookup.h @@ -1,5 +1,7 @@ #pragma once +#include + // Definition of the class generated by gperf, present on gperf/tldLookup.gperf class TopLevelDomainLookupHash { diff --git a/src/Functions/array/arrayAggregation.cpp b/src/Functions/array/arrayAggregation.cpp new file mode 100644 index 00000000000..eb4470cbcf2 --- /dev/null +++ b/src/Functions/array/arrayAggregation.cpp @@ -0,0 +1,311 @@ +#include +#include +#include +#include +#include +#include "FunctionArrayMapped.h" +#include + + +namespace DB +{ + +namespace ErrorCodes +{ + extern const int ILLEGAL_TYPE_OF_ARGUMENT; + extern const int ILLEGAL_COLUMN; +} + +enum class AggregateOperation +{ + min, + max, + sum, + average +}; + +/** + * During array aggregation we derive result type from operation. 
+ * For array min or array max we use array element as result type.
+ * For array average we use Float64.
+ * For array sum for decimal numbers we use Decimal128, for floating point numbers Float64,
+ * for unsigned integers UInt64, and for signed integers Int64.
+ */
+
+template
+struct ArrayAggregateResultImpl;
+
+template
+struct ArrayAggregateResultImpl
+{
+ using Result = ArrayElement;
+};
+
+template
+struct ArrayAggregateResultImpl
+{
+ using Result = ArrayElement;
+};
+
+template
+struct ArrayAggregateResultImpl
+{
+ using Result = Float64;
+};
+
+template
+struct ArrayAggregateResultImpl
+{
+ using Result = std::conditional_t<
+ IsDecimalNumber,
+ Decimal128,
+ std::conditional_t<
+ std::is_floating_point_v,
+ Float64,
+ std::conditional_t, Int64, UInt64>>>;
+};
+
+template
+using ArrayAggregateResult = typename ArrayAggregateResultImpl::Result;
+
+template
+struct ArrayAggregateImpl
+{
+ static bool needBoolean() { return false; }
+ static bool needExpression() { return false; }
+ static bool needOneArray() { return false; }
+
+ static DataTypePtr getReturnType(const DataTypePtr & expression_return, const DataTypePtr & /*array_element*/)
+ {
+ DataTypePtr result;
+
+ auto call = [&](const auto & types)
+ {
+ using Types = std::decay_t;
+ using DataType = typename Types::LeftType;
+
+ if constexpr (aggregate_operation == AggregateOperation::average)
+ {
+ result = std::make_shared();
+
+ return true;
+ }
+ else if constexpr (IsDataTypeNumber)
+ {
+ using NumberReturnType = ArrayAggregateResult;
+ result = std::make_shared>();
+
+ return true;
+ }
+ else if constexpr (IsDataTypeDecimal && !IsDataTypeDateOrDateTime)
+ {
+ using DecimalReturnType = ArrayAggregateResult;
+ UInt32 scale = getDecimalScale(*expression_return);
+ result = std::make_shared>(DecimalUtils::maxPrecision(), scale);
+
+ return true;
+ }
+
+ return false;
+ };
+
+ if (!callOnIndexAndDataType(expression_return->getTypeId(), call))
+ {
+ throw Exception(
+ "array aggregation function cannot be performed on type " + expression_return->getName(),
+ ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
+ }
+
+ return result;
+ }
+
+ template
+ static bool executeType(const ColumnPtr & mapped, const ColumnArray::Offsets & offsets, ColumnPtr & res_ptr)
+ {
+ using Result = ArrayAggregateResult;
+ using ColVecType = std::conditional_t, ColumnDecimal, ColumnVector>;
+ using ColVecResult = std::conditional_t, ColumnDecimal, ColumnVector>;
+
+ /// For average on a decimal array we return Float64 as result,
+ /// but to keep decimal precision we convert to Float64 only as the last step of the average computation
+ static constexpr bool use_decimal_for_average_aggregation
+ = aggregate_operation == AggregateOperation::average && IsDecimalNumber;
+
+ using AggregationType = std::conditional_t;
+
+
+ const ColVecType * column = checkAndGetColumn(&*mapped);
+
+ /// Constant case.
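To make the promotion rule in the comment above concrete, here is a minimal standalone sketch; the plain-integer aliases and the `ArraySumResult` name are illustrative stand-ins, and the Decimal128 branch is omitted:

```cpp
// Sketch of the sum result-type promotion: floats widen to Float64,
// signed integers to Int64, unsigned integers to UInt64.
#include <cstdint>
#include <type_traits>

using Int64 = int64_t;
using UInt64 = uint64_t;
using Float64 = double;

template <typename ArrayElement>
using ArraySumResult = std::conditional_t<
    std::is_floating_point_v<ArrayElement>,
    Float64,
    std::conditional_t<std::is_signed_v<ArrayElement>, Int64, UInt64>>;

static_assert(std::is_same_v<ArraySumResult<int8_t>, Int64>);    // signed widens to Int64
static_assert(std::is_same_v<ArraySumResult<uint32_t>, UInt64>); // unsigned widens to UInt64
static_assert(std::is_same_v<ArraySumResult<float>, Float64>);   // floats widen to Float64

int main() {}
```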
+ if (!column) + { + const ColumnConst * column_const = checkAndGetColumnConst(&*mapped); + + if (!column_const) + return false; + + const AggregationType x = column_const->template getValue(); // NOLINT + const typename ColVecType::Container & data + = checkAndGetColumn(&column_const->getDataColumn())->getData(); + + typename ColVecResult::MutablePtr res_column; + if constexpr (IsDecimalNumber) + { + res_column = ColVecResult::create(offsets.size(), data.getScale()); + } + else + res_column = ColVecResult::create(offsets.size()); + + typename ColVecResult::Container & res = res_column->getData(); + + size_t pos = 0; + for (size_t i = 0; i < offsets.size(); ++i) + { + if constexpr (aggregate_operation == AggregateOperation::sum) + { + size_t array_size = offsets[i] - pos; + /// Just multiply the value by array size. + res[i] = x * array_size; + } + else if constexpr (aggregate_operation == AggregateOperation::min || + aggregate_operation == AggregateOperation::max) + { + res[i] = x; + } + else if constexpr (aggregate_operation == AggregateOperation::average) + { + if constexpr (IsDecimalNumber) + { + res[i] = DecimalUtils::convertTo(x, data.getScale()); + } + else + { + res[i] = x; + } + } + + pos = offsets[i]; + } + + res_ptr = std::move(res_column); + return true; + } + + const typename ColVecType::Container & data = column->getData(); + + typename ColVecResult::MutablePtr res_column; + if constexpr (IsDecimalNumber) + res_column = ColVecResult::create(offsets.size(), data.getScale()); + else + res_column = ColVecResult::create(offsets.size()); + + typename ColVecResult::Container & res = res_column->getData(); + + size_t pos = 0; + for (size_t i = 0; i < offsets.size(); ++i) + { + AggregationType s = 0; + + /// Array is empty + if (offsets[i] == pos) + { + res[i] = s; + continue; + } + + size_t count = 1; + s = data[pos]; // NOLINT + ++pos; + + for (; pos < offsets[i]; ++pos) + { + auto element = data[pos]; + + if constexpr (aggregate_operation == AggregateOperation::sum || + aggregate_operation == AggregateOperation::average) + { + s += element; + } + else if constexpr (aggregate_operation == AggregateOperation::min) + { + if (element < s) + { + s = element; + } + } + else if constexpr (aggregate_operation == AggregateOperation::max) + { + if (element > s) + { + s = element; + } + } + + ++count; + } + + if constexpr (aggregate_operation == AggregateOperation::average) + { + s = s / count; + } + + if constexpr (use_decimal_for_average_aggregation) + { + res[i] = DecimalUtils::convertTo(s, data.getScale()); + } + else + { + res[i] = s; + } + } + + res_ptr = std::move(res_column); + return true; + } + + static ColumnPtr execute(const ColumnArray & array, ColumnPtr mapped) + { + const IColumn::Offsets & offsets = array.getOffsets(); + ColumnPtr res; + + if (executeType(mapped, offsets, res) || + executeType(mapped, offsets, res) || + executeType(mapped, offsets, res) || + executeType(mapped, offsets, res) || + executeType(mapped, offsets, res) || + executeType(mapped, offsets, res) || + executeType(mapped, offsets, res) || + executeType(mapped, offsets, res) || + executeType(mapped, offsets, res) || + executeType(mapped, offsets, res) || + executeType(mapped, offsets, res) || + executeType(mapped, offsets, res) || + executeType(mapped, offsets, res)) + return res; + else + throw Exception("Unexpected column for arraySum: " + mapped->getName(), ErrorCodes::ILLEGAL_COLUMN); + } +}; + +struct NameArrayMin { static constexpr auto name = "arrayMin"; }; +using FunctionArrayMin = 
FunctionArrayMapped, NameArrayMin>; + +struct NameArrayMax { static constexpr auto name = "arrayMax"; }; +using FunctionArrayMax = FunctionArrayMapped, NameArrayMax>; + +struct NameArraySum { static constexpr auto name = "arraySum"; }; +using FunctionArraySum = FunctionArrayMapped, NameArraySum>; + +struct NameArrayAverage { static constexpr auto name = "arrayAvg"; }; +using FunctionArrayAverage = FunctionArrayMapped, NameArrayAverage>; + +void registerFunctionArrayAggregation(FunctionFactory & factory) +{ + factory.registerFunction(); + factory.registerFunction(); + factory.registerFunction(); + factory.registerFunction(); +} + +} + diff --git a/src/Functions/array/arraySum.cpp b/src/Functions/array/arraySum.cpp deleted file mode 100644 index 1aedcb6ef92..00000000000 --- a/src/Functions/array/arraySum.cpp +++ /dev/null @@ -1,146 +0,0 @@ -#include -#include -#include -#include -#include "FunctionArrayMapped.h" -#include - - -namespace DB -{ - -namespace ErrorCodes -{ - extern const int ILLEGAL_TYPE_OF_ARGUMENT; - extern const int ILLEGAL_COLUMN; -} - -struct ArraySumImpl -{ - static bool needBoolean() { return false; } - static bool needExpression() { return false; } - static bool needOneArray() { return false; } - - static DataTypePtr getReturnType(const DataTypePtr & expression_return, const DataTypePtr & /*array_element*/) - { - WhichDataType which(expression_return); - - if (which.isNativeUInt()) - return std::make_shared(); - - if (which.isNativeInt()) - return std::make_shared(); - - if (which.isFloat()) - return std::make_shared(); - - if (which.isDecimal()) - { - UInt32 scale = getDecimalScale(*expression_return); - return std::make_shared>(DecimalUtils::maxPrecision(), scale); - } - - throw Exception("arraySum cannot add values of type " + expression_return->getName(), ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT); - } - - template - static bool executeType(const ColumnPtr & mapped, const ColumnArray::Offsets & offsets, ColumnPtr & res_ptr) - { - using ColVecType = std::conditional_t, ColumnDecimal, ColumnVector>; - using ColVecResult = std::conditional_t, ColumnDecimal, ColumnVector>; - - const ColVecType * column = checkAndGetColumn(&*mapped); - - /// Constant case. - if (!column) - { - const ColumnConst * column_const = checkAndGetColumnConst(&*mapped); - - if (!column_const) - return false; - - const Result x = column_const->template getValue(); // NOLINT - - typename ColVecResult::MutablePtr res_column; - if constexpr (IsDecimalNumber) - { - const typename ColVecType::Container & data = - checkAndGetColumn(&column_const->getDataColumn())->getData(); - res_column = ColVecResult::create(offsets.size(), data.getScale()); - } - else - res_column = ColVecResult::create(offsets.size()); - - typename ColVecResult::Container & res = res_column->getData(); - - size_t pos = 0; - for (size_t i = 0; i < offsets.size(); ++i) - { - /// Just multiply the value by array size. 
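The shortcut above ("just multiply the value by array size", kept in the new arrayAggregation.cpp as well) works because ClickHouse stores an array column as one flat data column plus cumulative end offsets. A minimal self-contained sketch, with plain vectors standing in for the column types:

```cpp
// Row i spans [offsets[i - 1], offsets[i]) in the flat data column, so when
// the mapped column is a constant x, the per-row sum is just x * row_size.
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

int main()
{
    uint64_t x = 7;                           // constant value of the mapped column
    std::vector<size_t> offsets = {3, 3, 8};  // cumulative ends: row sizes 3, 0, 5
    std::vector<uint64_t> res(offsets.size());

    size_t pos = 0;
    for (size_t i = 0; i < offsets.size(); ++i)
    {
        res[i] = x * (offsets[i] - pos);      // no loop over the elements needed
        pos = offsets[i];
    }
    assert(res[0] == 21 && res[1] == 0 && res[2] == 35);
}
```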
- res[i] = x * (offsets[i] - pos); - pos = offsets[i]; - } - - res_ptr = std::move(res_column); - return true; - } - - const typename ColVecType::Container & data = column->getData(); - - typename ColVecResult::MutablePtr res_column; - if constexpr (IsDecimalNumber) - res_column = ColVecResult::create(offsets.size(), data.getScale()); - else - res_column = ColVecResult::create(offsets.size()); - - typename ColVecResult::Container & res = res_column->getData(); - - size_t pos = 0; - for (size_t i = 0; i < offsets.size(); ++i) - { - Result s = 0; - for (; pos < offsets[i]; ++pos) - { - s += data[pos]; - } - res[i] = s; - } - - res_ptr = std::move(res_column); - return true; - } - - static ColumnPtr execute(const ColumnArray & array, ColumnPtr mapped) - { - const IColumn::Offsets & offsets = array.getOffsets(); - ColumnPtr res; - - if (executeType< UInt8 , UInt64>(mapped, offsets, res) || - executeType< UInt16, UInt64>(mapped, offsets, res) || - executeType< UInt32, UInt64>(mapped, offsets, res) || - executeType< UInt64, UInt64>(mapped, offsets, res) || - executeType< Int8 , Int64>(mapped, offsets, res) || - executeType< Int16, Int64>(mapped, offsets, res) || - executeType< Int32, Int64>(mapped, offsets, res) || - executeType< Int64, Int64>(mapped, offsets, res) || - executeType(mapped, offsets, res) || - executeType(mapped, offsets, res) || - executeType(mapped, offsets, res) || - executeType(mapped, offsets, res) || - executeType(mapped, offsets, res)) - return res; - else - throw Exception("Unexpected column for arraySum: " + mapped->getName(), ErrorCodes::ILLEGAL_COLUMN); - } -}; - -struct NameArraySum { static constexpr auto name = "arraySum"; }; -using FunctionArraySum = FunctionArrayMapped; - -void registerFunctionArraySum(FunctionFactory & factory) -{ - factory.registerFunction(); -} - -} - diff --git a/src/Functions/encodeXMLComponent.cpp b/src/Functions/encodeXMLComponent.cpp new file mode 100644 index 00000000000..d839a3a9264 --- /dev/null +++ b/src/Functions/encodeXMLComponent.cpp @@ -0,0 +1,144 @@ +#include +#include +#include +#include + + +namespace DB +{ +namespace ErrorCodes +{ + extern const int ILLEGAL_TYPE_OF_ARGUMENT; +} + +namespace +{ + struct EncodeXMLComponentName + { + static constexpr auto name = "encodeXMLComponent"; + }; + + class FunctionEncodeXMLComponentImpl + { + public: + static void vector( + const ColumnString::Chars & data, + const ColumnString::Offsets & offsets, + ColumnString::Chars & res_data, + ColumnString::Offsets & res_offsets) + { + /// 6 is the maximum size amplification (the maximum length of encoded entity: ") + res_data.resize(data.size() * 6); + size_t size = offsets.size(); + res_offsets.resize(size); + + size_t prev_offset = 0; + size_t res_offset = 0; + + for (size_t i = 0; i < size; ++i) + { + const char * src_data = reinterpret_cast(&data[prev_offset]); + size_t src_size = offsets[i] - prev_offset; + size_t dst_size = execute(src_data, src_size, reinterpret_cast(res_data.data() + res_offset)); + + res_offset += dst_size; + res_offsets[i] = res_offset; + prev_offset = offsets[i]; + } + + res_data.resize(res_offset); + } + + [[noreturn]] static void vectorFixed(const ColumnString::Chars &, size_t, ColumnString::Chars &) + { + throw Exception("Function encodeXML cannot work with FixedString argument", ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT); + } + + private: + static size_t execute(const char * src, size_t src_size, char * dst) + { + const char * src_prev_pos = src; + const char * src_curr_pos = src; + const char * src_end = src + src_size; + 
char * dst_pos = dst;
+
+ while (true)
+ {
+ src_curr_pos = find_first_symbols<'<', '&', '>', '"', '\''>(src_curr_pos, src_end);
+
+ if (src_curr_pos == src_end)
+ {
+ break;
+ }
+ else if (*src_curr_pos == '<')
+ {
+ size_t bytes_to_copy = src_curr_pos - src_prev_pos;
+ memcpySmallAllowReadWriteOverflow15(dst_pos, src_prev_pos, bytes_to_copy);
+ dst_pos += bytes_to_copy;
+ memcpy(dst_pos, "&lt;", 4);
+ dst_pos += 4;
+ src_prev_pos = src_curr_pos + 1;
+ ++src_curr_pos;
+ }
+ else if (*src_curr_pos == '&')
+ {
+ size_t bytes_to_copy = src_curr_pos - src_prev_pos;
+ memcpySmallAllowReadWriteOverflow15(dst_pos, src_prev_pos, bytes_to_copy);
+ dst_pos += bytes_to_copy;
+ memcpy(dst_pos, "&amp;", 5);
+ dst_pos += 5;
+ src_prev_pos = src_curr_pos + 1;
+ ++src_curr_pos;
+ }
+ else if (*src_curr_pos == '>')
+ {
+ size_t bytes_to_copy = src_curr_pos - src_prev_pos;
+ memcpySmallAllowReadWriteOverflow15(dst_pos, src_prev_pos, bytes_to_copy);
+ dst_pos += bytes_to_copy;
+ memcpy(dst_pos, "&gt;", 4);
+ dst_pos += 4;
+ src_prev_pos = src_curr_pos + 1;
+ ++src_curr_pos;
+ }
+ else if (*src_curr_pos == '"')
+ {
+ size_t bytes_to_copy = src_curr_pos - src_prev_pos;
+ memcpySmallAllowReadWriteOverflow15(dst_pos, src_prev_pos, bytes_to_copy);
+ dst_pos += bytes_to_copy;
+ memcpy(dst_pos, "&quot;", 6);
+ dst_pos += 6;
+ src_prev_pos = src_curr_pos + 1;
+ ++src_curr_pos;
+ }
+ else if (*src_curr_pos == '\'')
+ {
+ size_t bytes_to_copy = src_curr_pos - src_prev_pos;
+ memcpySmallAllowReadWriteOverflow15(dst_pos, src_prev_pos, bytes_to_copy);
+ dst_pos += bytes_to_copy;
+ memcpy(dst_pos, "&apos;", 6);
+ dst_pos += 6;
+ src_prev_pos = src_curr_pos + 1;
+ ++src_curr_pos;
+ }
+ }
+
+ if (src_prev_pos < src_curr_pos)
+ {
+ size_t bytes_to_copy = src_curr_pos - src_prev_pos;
+ memcpySmallAllowReadWriteOverflow15(dst_pos, src_prev_pos, bytes_to_copy);
+ dst_pos += bytes_to_copy;
+ }
+
+ return dst_pos - dst;
+ }
+ };
+
+ using FunctionEncodeXMLComponent = FunctionStringToString;
+
+}
+
+void registerFunctionEncodeXMLComponent(FunctionFactory & factory)
+{
+ factory.registerFunction();
+}
+}
diff --git a/src/Functions/registerFunctionsHigherOrder.cpp b/src/Functions/registerFunctionsHigherOrder.cpp
index 08938ef6534..d3621a03ecd 100644
--- a/src/Functions/registerFunctionsHigherOrder.cpp
+++ b/src/Functions/registerFunctionsHigherOrder.cpp
@@ -9,7 +9,7 @@ void registerFunctionArrayCount(FunctionFactory & factory);
void registerFunctionArrayExists(FunctionFactory & factory);
void registerFunctionArrayAll(FunctionFactory & factory);
void registerFunctionArrayCompact(FunctionFactory & factory);
-void registerFunctionArraySum(FunctionFactory & factory);
+void registerFunctionArrayAggregation(FunctionFactory & factory);
void registerFunctionArrayFirst(FunctionFactory & factory);
void registerFunctionArrayFirstIndex(FunctionFactory & factory);
void registerFunctionsArrayFill(FunctionFactory & factory);
@@ -27,7 +27,7 @@ void registerFunctionsHigherOrder(FunctionFactory & factory)
registerFunctionArrayExists(factory);
registerFunctionArrayAll(factory);
registerFunctionArrayCompact(factory);
- registerFunctionArraySum(factory);
+ registerFunctionArrayAggregation(factory);
registerFunctionArrayFirst(factory);
registerFunctionArrayFirstIndex(factory);
registerFunctionsArrayFill(factory);
diff --git a/src/Functions/registerFunctionsString.cpp b/src/Functions/registerFunctionsString.cpp
index 647f63fe910..426cc8f8d56 100644
--- a/src/Functions/registerFunctionsString.cpp
+++ b/src/Functions/registerFunctionsString.cpp
@@ -33,6 +33,7 @@ void 
registerFunctionRegexpQuoteMeta(FunctionFactory &); void registerFunctionNormalizeQuery(FunctionFactory &); void registerFunctionNormalizedQueryHash(FunctionFactory &); void registerFunctionCountMatches(FunctionFactory &); +void registerFunctionEncodeXMLComponent(FunctionFactory & factory); #if USE_BASE64 void registerFunctionBase64Encode(FunctionFactory &); @@ -68,6 +69,7 @@ void registerFunctionsString(FunctionFactory & factory) registerFunctionNormalizeQuery(factory); registerFunctionNormalizedQueryHash(factory); registerFunctionCountMatches(factory); + registerFunctionEncodeXMLComponent(factory); #if USE_BASE64 registerFunctionBase64Encode(factory); registerFunctionBase64Decode(factory); diff --git a/src/Functions/tcpPort.cpp b/src/Functions/tcpPort.cpp index 52acf0ade54..26991c900ab 100644 --- a/src/Functions/tcpPort.cpp +++ b/src/Functions/tcpPort.cpp @@ -1,5 +1,6 @@ #include #include +#include namespace DB diff --git a/src/Functions/ya.make b/src/Functions/ya.make index b94c7112da3..a544b8f3b53 100644 --- a/src/Functions/ya.make +++ b/src/Functions/ya.make @@ -80,6 +80,7 @@ SRCS( URL/cutQueryString.cpp URL/cutQueryStringAndFragment.cpp URL/cutToFirstSignificantSubdomain.cpp + URL/cutToFirstSignificantSubdomainCustom.cpp URL/cutURLParameter.cpp URL/cutWWW.cpp URL/decodeURLComponent.cpp @@ -89,6 +90,7 @@ SRCS( URL/extractURLParameterNames.cpp URL/extractURLParameters.cpp URL/firstSignificantSubdomain.cpp + URL/firstSignificantSubdomainCustom.cpp URL/fragment.cpp URL/netloc.cpp URL/path.cpp @@ -118,6 +120,7 @@ SRCS( appendTrailingCharIfAbsent.cpp array/array.cpp array/arrayAUC.cpp + array/arrayAggregation.cpp array/arrayAll.cpp array/arrayCompact.cpp array/arrayConcat.cpp @@ -153,7 +156,6 @@ SRCS( array/arraySlice.cpp array/arraySort.cpp array/arraySplit.cpp - array/arraySum.cpp array/arrayUniq.cpp array/arrayWithConstant.cpp array/arrayZip.cpp @@ -224,6 +226,7 @@ SRCS( dumpColumnStructure.cpp e.cpp empty.cpp + encodeXMLComponent.cpp encrypt.cpp endsWith.cpp equals.cpp diff --git a/src/IO/ConnectionTimeouts.h b/src/IO/ConnectionTimeouts.h index 9e87dee4fc3..e5efabee6e2 100644 --- a/src/IO/ConnectionTimeouts.h +++ b/src/IO/ConnectionTimeouts.h @@ -1,14 +1,13 @@ #pragma once #include -#include - -#include -#include namespace DB { +class Context; +struct Settings; + struct ConnectionTimeouts { Poco::Timespan connection_timeout; @@ -92,24 +91,10 @@ struct ConnectionTimeouts } /// Timeouts for the case when we have just single attempt to connect. - static ConnectionTimeouts getTCPTimeoutsWithoutFailover(const Settings & settings) - { - return ConnectionTimeouts(settings.connect_timeout, settings.send_timeout, settings.receive_timeout, settings.tcp_keep_alive_timeout); - } - + static ConnectionTimeouts getTCPTimeoutsWithoutFailover(const Settings & settings); /// Timeouts for the case when we will try many addresses in a loop. 
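Moving these bodies out of ConnectionTimeouts.h (into the new ConnectionTimeoutsContext.h below) lets the widely included header compile without pulling in Settings and Context. A minimal sketch of the forward-declaration pattern, with illustrative names:

```cpp
// --- cheap header, included everywhere: forward declaration only ---
struct Settings;  // no #include of the heavy definition
struct Timeouts
{
    int connect_timeout;
    static Timeouts fromSettings(const Settings & settings);  // declared, not defined
};

// --- narrow header, included only where Settings is already known ---
struct Settings { int connect_timeout = 10; };
inline Timeouts Timeouts::fromSettings(const Settings & settings)
{
    return Timeouts{settings.connect_timeout};
}

int main()
{
    return Timeouts::fromSettings(Settings{}).connect_timeout == 10 ? 0 : 1;
}
```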
- static ConnectionTimeouts getTCPTimeoutsWithFailover(const Settings & settings) - { - return ConnectionTimeouts(settings.connect_timeout_with_failover_ms, settings.send_timeout, settings.receive_timeout, settings.tcp_keep_alive_timeout, 0, settings.connect_timeout_with_failover_secure_ms); - } - - static ConnectionTimeouts getHTTPTimeouts(const Context & context) - { - const auto & settings = context.getSettingsRef(); - const auto & config = context.getConfigRef(); - Poco::Timespan http_keep_alive_timeout{config.getUInt("keep_alive_timeout", 10), 0}; - return ConnectionTimeouts(settings.http_connection_timeout, settings.http_send_timeout, settings.http_receive_timeout, settings.tcp_keep_alive_timeout, http_keep_alive_timeout); - } + static ConnectionTimeouts getTCPTimeoutsWithFailover(const Settings & settings); + static ConnectionTimeouts getHTTPTimeouts(const Context & context); }; } diff --git a/src/IO/ConnectionTimeoutsContext.h b/src/IO/ConnectionTimeoutsContext.h new file mode 100644 index 00000000000..ce19738f507 --- /dev/null +++ b/src/IO/ConnectionTimeoutsContext.h @@ -0,0 +1,30 @@ +#pragma once + +#include +#include +#include + +namespace DB +{ + +/// Timeouts for the case when we have just single attempt to connect. +inline ConnectionTimeouts ConnectionTimeouts::getTCPTimeoutsWithoutFailover(const Settings & settings) +{ + return ConnectionTimeouts(settings.connect_timeout, settings.send_timeout, settings.receive_timeout, settings.tcp_keep_alive_timeout); +} + +/// Timeouts for the case when we will try many addresses in a loop. +inline ConnectionTimeouts ConnectionTimeouts::getTCPTimeoutsWithFailover(const Settings & settings) +{ + return ConnectionTimeouts(settings.connect_timeout_with_failover_ms, settings.send_timeout, settings.receive_timeout, settings.tcp_keep_alive_timeout, 0, settings.connect_timeout_with_failover_secure_ms); +} + +inline ConnectionTimeouts ConnectionTimeouts::getHTTPTimeouts(const Context & context) +{ + const auto & settings = context.getSettingsRef(); + const auto & config = context.getConfigRef(); + Poco::Timespan http_keep_alive_timeout{config.getUInt("keep_alive_timeout", 10), 0}; + return ConnectionTimeouts(settings.http_connection_timeout, settings.http_send_timeout, settings.http_receive_timeout, settings.tcp_keep_alive_timeout, http_keep_alive_timeout); +} + +} diff --git a/src/IO/S3Common.cpp b/src/IO/S3Common.cpp index bc49c2641a0..06c51e058a0 100644 --- a/src/IO/S3Common.cpp +++ b/src/IO/S3Common.cpp @@ -19,6 +19,7 @@ # include # include # include +# include # include namespace @@ -367,8 +368,12 @@ namespace S3 throw Exception( "Bucket name length is out of bounds in virtual hosted style S3 URI: " + bucket + " (" + uri.toString() + ")", ErrorCodes::BAD_ARGUMENTS); - /// Remove leading '/' from path to extract key. - key = uri.getPath().substr(1); + if (!uri.getPath().empty()) + { + /// Remove leading '/' from path to extract key. 
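The new empty-path guard above matters because std::string::substr(1) throws std::out_of_range when the string is empty (pos > size()). A small hypothetical helper showing the guarded extraction:

```cpp
#include <iostream>
#include <string>

std::string extractKey(const std::string & path)
{
    std::string key;
    if (!path.empty())
        key = path.substr(1);  // drop the leading '/'
    return key;                // an empty key is rejected by the caller afterwards
}

int main()
{
    std::cout << extractKey("/folder/object") << '\n';  // prints "folder/object"
    std::cout << extractKey("").empty() << '\n';        // prints 1, no throw
}
```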
+ key = uri.getPath().substr(1); + } + if (key.empty() || key == "/") throw Exception("Key name is empty in virtual hosted style S3 URI: " + key + " (" + uri.toString() + ")", ErrorCodes::BAD_ARGUMENTS); boost::to_upper(name); diff --git a/src/IO/WriteBufferFromS3.cpp b/src/IO/WriteBufferFromS3.cpp index a32aa4acdc9..09aabb1b21d 100644 --- a/src/IO/WriteBufferFromS3.cpp +++ b/src/IO/WriteBufferFromS3.cpp @@ -5,12 +5,9 @@ # include # include -# include -# include # include -# include -# include # include +# include # include # include # include @@ -42,20 +39,19 @@ WriteBufferFromS3::WriteBufferFromS3( const String & bucket_, const String & key_, size_t minimum_upload_part_size_, - bool is_multipart_, + size_t max_single_part_upload_size_, + std::optional> object_metadata_, size_t buffer_size_) : BufferWithOwnMemory(buffer_size_, nullptr, 0) - , is_multipart(is_multipart_) , bucket(bucket_) , key(key_) + , object_metadata(std::move(object_metadata_)) , client_ptr(std::move(client_ptr_)) - , minimum_upload_part_size{minimum_upload_part_size_} - , temporary_buffer{std::make_unique()} - , last_part_size{0} -{ - if (is_multipart) - initiate(); -} + , minimum_upload_part_size(minimum_upload_part_size_) + , max_single_part_upload_size(max_single_part_upload_size_) + , temporary_buffer(Aws::MakeShared("temporary buffer")) + , last_part_size(0) +{ } void WriteBufferFromS3::nextImpl() { @@ -66,16 +62,17 @@ void WriteBufferFromS3::nextImpl() ProfileEvents::increment(ProfileEvents::S3WriteBytes, offset()); - if (is_multipart) - { - last_part_size += offset(); + last_part_size += offset(); - if (last_part_size > minimum_upload_part_size) - { - writePart(temporary_buffer->str()); - last_part_size = 0; - temporary_buffer->restart(); - } + /// Data size exceeds singlepart upload threshold, need to use multipart upload. + if (multipart_upload_id.empty() && last_part_size > max_single_part_upload_size) + createMultipartUpload(); + + if (!multipart_upload_id.empty() && last_part_size > minimum_upload_part_size) + { + writePart(); + last_part_size = 0; + temporary_buffer = Aws::MakeShared("temporary buffer"); } } @@ -86,17 +83,23 @@ void WriteBufferFromS3::finalize() void WriteBufferFromS3::finalizeImpl() { - if (!finalized) + if (finalized) + return; + + next(); + + if (multipart_upload_id.empty()) { - next(); - - if (is_multipart) - writePart(temporary_buffer->str()); - - complete(); - - finalized = true; + makeSinglepartUpload(); } + else + { + /// Write rest of the data as last part. + writePart(); + completeMultipartUpload(); + } + + finalized = true; } WriteBufferFromS3::~WriteBufferFromS3() @@ -111,27 +114,29 @@ WriteBufferFromS3::~WriteBufferFromS3() } } -void WriteBufferFromS3::initiate() +void WriteBufferFromS3::createMultipartUpload() { Aws::S3::Model::CreateMultipartUploadRequest req; req.SetBucket(bucket); req.SetKey(key); + if (object_metadata.has_value()) + req.SetMetadata(object_metadata.value()); auto outcome = client_ptr->CreateMultipartUpload(req); if (outcome.IsSuccess()) { - upload_id = outcome.GetResult().GetUploadId(); - LOG_DEBUG(log, "Multipart upload initiated. Upload id: {}", upload_id); + multipart_upload_id = outcome.GetResult().GetUploadId(); + LOG_DEBUG(log, "Multipart upload has created. 
Upload id: {}", multipart_upload_id); } else throw Exception(outcome.GetError().GetMessage(), ErrorCodes::S3_ERROR); } -void WriteBufferFromS3::writePart(const String & data) +void WriteBufferFromS3::writePart() { - if (data.empty()) + if (temporary_buffer->tellp() <= 0) return; if (part_tags.size() == S3_WARN_MAX_PARTS) @@ -145,91 +150,71 @@ void WriteBufferFromS3::writePart(const String & data) req.SetBucket(bucket); req.SetKey(key); req.SetPartNumber(part_tags.size() + 1); - req.SetUploadId(upload_id); - req.SetContentLength(data.size()); - req.SetBody(std::make_shared(data)); + req.SetUploadId(multipart_upload_id); + req.SetContentLength(temporary_buffer->tellp()); + req.SetBody(temporary_buffer); auto outcome = client_ptr->UploadPart(req); - LOG_TRACE(log, "Writing part. Bucket: {}, Key: {}, Upload_id: {}, Data size: {}", bucket, key, upload_id, data.size()); + LOG_TRACE(log, "Writing part. Bucket: {}, Key: {}, Upload_id: {}, Data size: {}", bucket, key, multipart_upload_id, temporary_buffer->tellp()); if (outcome.IsSuccess()) { auto etag = outcome.GetResult().GetETag(); part_tags.push_back(etag); - LOG_DEBUG(log, "Writing part finished. Total parts: {}, Upload_id: {}, Etag: {}", part_tags.size(), upload_id, etag); + LOG_DEBUG(log, "Writing part finished. Total parts: {}, Upload_id: {}, Etag: {}", part_tags.size(), multipart_upload_id, etag); } else throw Exception(outcome.GetError().GetMessage(), ErrorCodes::S3_ERROR); } - -void WriteBufferFromS3::complete() +void WriteBufferFromS3::completeMultipartUpload() { - if (is_multipart) + LOG_DEBUG(log, "Completing multipart upload. Bucket: {}, Key: {}, Upload_id: {}", bucket, key, multipart_upload_id); + + Aws::S3::Model::CompleteMultipartUploadRequest req; + req.SetBucket(bucket); + req.SetKey(key); + req.SetUploadId(multipart_upload_id); + + Aws::S3::Model::CompletedMultipartUpload multipart_upload; + for (size_t i = 0; i < part_tags.size(); ++i) { - if (part_tags.empty()) - { - LOG_DEBUG(log, "Multipart upload has no data. Aborting it. Bucket: {}, Key: {}, Upload_id: {}", bucket, key, upload_id); - - Aws::S3::Model::AbortMultipartUploadRequest req; - req.SetBucket(bucket); - req.SetKey(key); - req.SetUploadId(upload_id); - - auto outcome = client_ptr->AbortMultipartUpload(req); - - if (outcome.IsSuccess()) - LOG_DEBUG(log, "Aborting multipart upload completed. Upload_id: {}", upload_id); - else - throw Exception(outcome.GetError().GetMessage(), ErrorCodes::S3_ERROR); - - return; - } - - LOG_DEBUG(log, "Completing multipart upload. Bucket: {}, Key: {}, Upload_id: {}", bucket, key, upload_id); - - Aws::S3::Model::CompleteMultipartUploadRequest req; - req.SetBucket(bucket); - req.SetKey(key); - req.SetUploadId(upload_id); - - Aws::S3::Model::CompletedMultipartUpload multipart_upload; - for (size_t i = 0; i < part_tags.size(); ++i) - { - Aws::S3::Model::CompletedPart part; - multipart_upload.AddParts(part.WithETag(part_tags[i]).WithPartNumber(i + 1)); - } - - req.SetMultipartUpload(multipart_upload); - - auto outcome = client_ptr->CompleteMultipartUpload(req); - - if (outcome.IsSuccess()) - LOG_DEBUG(log, "Multipart upload completed. Upload_id: {}", upload_id); - else - throw Exception(outcome.GetError().GetMessage(), ErrorCodes::S3_ERROR); + Aws::S3::Model::CompletedPart part; + multipart_upload.AddParts(part.WithETag(part_tags[i]).WithPartNumber(i + 1)); } + + req.SetMultipartUpload(multipart_upload); + + auto outcome = client_ptr->CompleteMultipartUpload(req); + + if (outcome.IsSuccess()) + LOG_DEBUG(log, "Multipart upload has completed. 
Upload_id: {}", multipart_upload_id); else - { - LOG_DEBUG(log, "Making single part upload. Bucket: {}, Key: {}", bucket, key); + throw Exception(outcome.GetError().GetMessage(), ErrorCodes::S3_ERROR); +} - Aws::S3::Model::PutObjectRequest req; - req.SetBucket(bucket); - req.SetKey(key); +void WriteBufferFromS3::makeSinglepartUpload() +{ + if (temporary_buffer->tellp() <= 0) + return; - /// This could be improved using an adapter to WriteBuffer. - const std::shared_ptr input_data = Aws::MakeShared("temporary buffer", temporary_buffer->str()); - temporary_buffer = std::make_unique(); - req.SetBody(input_data); + LOG_DEBUG(log, "Making single part upload. Bucket: {}, Key: {}", bucket, key); - auto outcome = client_ptr->PutObject(req); + Aws::S3::Model::PutObjectRequest req; + req.SetBucket(bucket); + req.SetKey(key); + req.SetContentLength(temporary_buffer->tellp()); + req.SetBody(temporary_buffer); + if (object_metadata.has_value()) + req.SetMetadata(object_metadata.value()); - if (outcome.IsSuccess()) - LOG_DEBUG(log, "Single part upload has completed. Bucket: {}, Key: {}", bucket, key); - else - throw Exception(outcome.GetError().GetMessage(), ErrorCodes::S3_ERROR); - } + auto outcome = client_ptr->PutObject(req); + + if (outcome.IsSuccess()) + LOG_DEBUG(log, "Single part upload has completed. Bucket: {}, Key: {}", bucket, key); + else + throw Exception(outcome.GetError().GetMessage(), ErrorCodes::S3_ERROR); } } diff --git a/src/IO/WriteBufferFromS3.h b/src/IO/WriteBufferFromS3.h index 1a1e859d913..9e4b056603a 100644 --- a/src/IO/WriteBufferFromS3.h +++ b/src/IO/WriteBufferFromS3.h @@ -6,11 +6,13 @@ # include # include +# include # include + # include -# include # include -# include + +# include namespace Aws::S3 { @@ -19,23 +21,30 @@ class S3Client; namespace DB { -/* Perform S3 HTTP PUT request. + +/** + * Buffer to write a data to a S3 object with specified bucket and key. + * If data size written to the buffer is less than 'max_single_part_upload_size' write is performed using singlepart upload. + * In another case multipart upload is used: + * Data is divided on chunks with size greater than 'minimum_upload_part_size'. Last chunk can be less than this threshold. + * Each chunk is written as a part to S3. */ class WriteBufferFromS3 : public BufferWithOwnMemory { private: - bool is_multipart; - String bucket; String key; + std::optional> object_metadata; std::shared_ptr client_ptr; size_t minimum_upload_part_size; - std::unique_ptr temporary_buffer; + size_t max_single_part_upload_size; + /// Buffer to accumulate data. + std::shared_ptr temporary_buffer; size_t last_part_size; /// Upload in S3 is made in parts. /// We initiate upload, then upload each part and get ETag as a response, and then finish upload with listing all our parts. 
- String upload_id;
+ String multipart_upload_id;
std::vector part_tags;
Poco::Logger * log = &Poco::Logger::get("WriteBufferFromS3");
@@ -46,7 +55,8 @@ public:
const String & bucket_,
const String & key_,
size_t minimum_upload_part_size_,
- bool is_multipart,
+ size_t max_single_part_upload_size_,
+ std::optional> object_metadata_ = std::nullopt,
size_t buffer_size_ = DBMS_DEFAULT_BUFFER_SIZE);
void nextImpl() override;
@@ -59,9 +69,11 @@ public:
private:
bool finalized = false;
- void initiate();
- void writePart(const String & data);
- void complete();
+ void createMultipartUpload();
+ void writePart();
+ void completeMultipartUpload();
+
+ void makeSinglepartUpload();
void finalizeImpl();
};
diff --git a/src/IO/WriteHelpers.h b/src/IO/WriteHelpers.h
index 1997aede564..e5c6f5b4ab8 100644
--- a/src/IO/WriteHelpers.h
+++ b/src/IO/WriteHelpers.h
@@ -29,7 +29,12 @@
#include
#include
-#include
+/// There is no dragonbox in Arcadia
+#if !defined(ARCADIA_BUILD)
+# include
+#else
+# include
+#endif
#include
@@ -228,14 +233,22 @@ inline size_t writeFloatTextFastPath(T x, char * buffer)
if (DecomposedFloat64(x).is_inside_int64())
result = itoa(Int64(x), buffer) - buffer;
else
+#if !defined(ARCADIA_BUILD)
result = jkj::dragonbox::to_chars_n(x, buffer) - buffer;
+#else
+ result = d2s_buffered_n(x, buffer);
+#endif
}
else
{
if (DecomposedFloat32(x).is_inside_int32())
result = itoa(Int32(x), buffer) - buffer;
else
+#if !defined(ARCADIA_BUILD)
result = jkj::dragonbox::to_chars_n(x, buffer) - buffer;
+#else
+ result = f2s_buffered_n(x, buffer);
+#endif
}
if (result <= 0)
@@ -581,9 +594,65 @@ void writeCSVString(const StringRef & s, WriteBuffer & buf)
writeCSVString(s.data, s.data + s.size, buf);
}
+inline void writeXMLStringForTextElementOrAttributeValue(const char * begin, const char * end, WriteBuffer & buf)
+{
+ const char * pos = begin;
+ while (true)
+ {
+ const char * next_pos = find_first_symbols<'<', '&', '>', '"', '\''>(pos, end);
+
+ if (next_pos == end)
+ {
+ buf.write(pos, end - pos);
+ break;
+ }
+ else if (*next_pos == '<')
+ {
+ buf.write(pos, next_pos - pos);
+ ++next_pos;
+ writeCString("&lt;", buf);
+ }
+ else if (*next_pos == '&')
+ {
+ buf.write(pos, next_pos - pos);
+ ++next_pos;
+ writeCString("&amp;", buf);
+ }
+ else if (*next_pos == '>')
+ {
+ buf.write(pos, next_pos - pos);
+ ++next_pos;
+ writeCString("&gt;", buf);
+ }
+ else if (*next_pos == '"')
+ {
+ buf.write(pos, next_pos - pos);
+ ++next_pos;
+ writeCString("&quot;", buf);
+ }
+ else if (*next_pos == '\'')
+ {
+ buf.write(pos, next_pos - pos);
+ ++next_pos;
+ writeCString("&apos;", buf);
+ }
+
+ pos = next_pos;
+ }
+}
+
+inline void writeXMLStringForTextElementOrAttributeValue(const String & s, WriteBuffer & buf)
+{
+ writeXMLStringForTextElementOrAttributeValue(s.data(), s.data() + s.size(), buf);
+}
+
+inline void writeXMLStringForTextElementOrAttributeValue(const StringRef & s, WriteBuffer & buf)
+{
+ writeXMLStringForTextElementOrAttributeValue(s.data, s.data + s.size, buf);
+}
/// Writing a string to a text node in XML (not into an attribute - otherwise you need more escaping).
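The comment above draws the distinction that the renames below make explicit: a text element strictly needs only '<' and '&' escaped, while an attribute value also needs the quote characters. A standalone illustration of the lighter text-element level:

```cpp
#include <iostream>
#include <string>

// Hypothetical helper: escape only what a text node strictly requires.
std::string escapeXMLTextElement(const std::string & s)
{
    std::string out;
    for (char c : s)
    {
        if (c == '<')      out += "&lt;";
        else if (c == '&') out += "&amp;";
        else               out += c;  // quotes may stay as-is inside a text node
    }
    return out;
}

int main()
{
    std::cout << escapeXMLTextElement("a < b & \"c\"") << '\n';
    // prints: a &lt; b &amp; "c"
}
```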
-inline void writeXMLString(const char * begin, const char * end, WriteBuffer & buf) +inline void writeXMLStringForTextElement(const char * begin, const char * end, WriteBuffer & buf) { const char * pos = begin; while (true) @@ -613,14 +682,14 @@ inline void writeXMLString(const char * begin, const char * end, WriteBuffer & b } } -inline void writeXMLString(const String & s, WriteBuffer & buf) +inline void writeXMLStringForTextElement(const String & s, WriteBuffer & buf) { - writeXMLString(s.data(), s.data() + s.size(), buf); + writeXMLStringForTextElement(s.data(), s.data() + s.size(), buf); } -inline void writeXMLString(const StringRef & s, WriteBuffer & buf) +inline void writeXMLStringForTextElement(const StringRef & s, WriteBuffer & buf) { - writeXMLString(s.data, s.data + s.size, buf); + writeXMLStringForTextElement(s.data, s.data + s.size, buf); } template diff --git a/src/IO/readFloatText.h b/src/IO/readFloatText.h index a721642a185..e840e3a7b07 100644 --- a/src/IO/readFloatText.h +++ b/src/IO/readFloatText.h @@ -249,6 +249,19 @@ ReturnType readFloatTextPreciseImpl(T & x, ReadBuffer & buf) } +// credit: https://johnnylee-sde.github.io/Fast-numeric-string-to-int/ +static inline bool is_made_of_eight_digits_fast(uint64_t val) noexcept +{ + return (((val & 0xF0F0F0F0F0F0F0F0) | (((val + 0x0606060606060606) & 0xF0F0F0F0F0F0F0F0) >> 4)) == 0x3333333333333333); +} + +static inline bool is_made_of_eight_digits_fast(const char * chars) noexcept +{ + uint64_t val; + ::memcpy(&val, chars, 8); + return is_made_of_eight_digits_fast(val); +} + template static inline void readUIntTextUpToNSignificantDigits(T & x, ReadBuffer & buf) { @@ -266,9 +279,6 @@ static inline void readUIntTextUpToNSignificantDigits(T & x, ReadBuffer & buf) else return; } - - while (!buf.eof() && isNumericASCII(*buf.position())) - ++buf.position(); } else { @@ -283,10 +293,16 @@ static inline void readUIntTextUpToNSignificantDigits(T & x, ReadBuffer & buf) else return; } - - while (!buf.eof() && isNumericASCII(*buf.position())) - ++buf.position(); } + + while (!buf.eof() && (buf.position() + 8 <= buf.buffer().end()) && + is_made_of_eight_digits_fast(buf.position())) + { + buf.position() += 8; + } + + while (!buf.eof() && isNumericASCII(*buf.position())) + ++buf.position(); } @@ -319,7 +335,6 @@ ReturnType readFloatTextFastImpl(T & x, ReadBuffer & in) ++in.position(); } - auto count_after_sign = in.count(); constexpr int significant_digits = std::numeric_limits::digits10; diff --git a/src/Interpreters/Cluster.cpp b/src/Interpreters/Cluster.cpp index 218502e7f43..c9c56c96cbe 100644 --- a/src/Interpreters/Cluster.cpp +++ b/src/Interpreters/Cluster.cpp @@ -5,6 +5,7 @@ #include #include #include +#include #include #include #include diff --git a/src/Interpreters/Cluster.h b/src/Interpreters/Cluster.h index 4b6ee35efd5..c64d52724e5 100644 --- a/src/Interpreters/Cluster.h +++ b/src/Interpreters/Cluster.h @@ -1,13 +1,23 @@ #pragma once #include -#include #include #include #include +namespace Poco +{ + namespace Util + { + class AbstractConfiguration; + } +} + namespace DB { + +struct Settings; + namespace ErrorCodes { extern const int LOGICAL_ERROR; diff --git a/src/Interpreters/ClusterProxy/IStreamFactory.h b/src/Interpreters/ClusterProxy/IStreamFactory.h index c0b887e0489..3cf100cd85c 100644 --- a/src/Interpreters/ClusterProxy/IStreamFactory.h +++ b/src/Interpreters/ClusterProxy/IStreamFactory.h @@ -32,7 +32,7 @@ public: virtual void createForShard( const Cluster::ShardInfo & shard_info, const String & query, const ASTPtr & 
query_ast, - const Context & context, const ThrottlerPtr & throttler, + const std::shared_ptr & context_ptr, const ThrottlerPtr & throttler, const SelectQueryInfo & query_info, std::vector & res, Pipes & remote_pipes, diff --git a/src/Interpreters/ClusterProxy/SelectStreamFactory.cpp b/src/Interpreters/ClusterProxy/SelectStreamFactory.cpp index 56f306595ac..e2a7c5b55dc 100644 --- a/src/Interpreters/ClusterProxy/SelectStreamFactory.cpp +++ b/src/Interpreters/ClusterProxy/SelectStreamFactory.cpp @@ -7,6 +7,7 @@ #include #include #include +#include #include #include @@ -113,13 +114,15 @@ String formattedAST(const ASTPtr & ast) void SelectStreamFactory::createForShard( const Cluster::ShardInfo & shard_info, const String &, const ASTPtr & query_ast, - const Context & context, const ThrottlerPtr & throttler, + const std::shared_ptr & context_ptr, const ThrottlerPtr & throttler, const SelectQueryInfo &, std::vector & plans, Pipes & remote_pipes, Pipes & delayed_pipes, Poco::Logger * log) { + const auto & context = *context_ptr; + bool add_agg_info = processed_stage == QueryProcessingStage::WithMergeableState; bool add_totals = false; bool add_extremes = false; @@ -143,7 +146,7 @@ void SelectStreamFactory::createForShard( auto emplace_remote_stream = [&]() { auto remote_query_executor = std::make_shared( - shard_info.pool, modified_query, header, context, nullptr, throttler, scalars, external_tables, processed_stage); + shard_info.pool, modified_query, header, context, throttler, scalars, external_tables, processed_stage); remote_query_executor->setLogger(log); remote_query_executor->setPoolMode(PoolMode::GET_MANY); @@ -151,6 +154,7 @@ void SelectStreamFactory::createForShard( remote_query_executor->setMainTable(main_table); remote_pipes.emplace_back(createRemoteSourcePipe(remote_query_executor, add_agg_info, add_totals, add_extremes)); + remote_pipes.back().addInterpreterContext(context_ptr); }; const auto & settings = context.getSettingsRef(); @@ -242,7 +246,8 @@ void SelectStreamFactory::createForShard( /// Do it lazily to avoid connecting in the main thread. 
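Passing the Context as a shared pointer lets each remote and delayed pipe co-own it (the addInterpreterContext calls above), so the settings cannot be destroyed while a pipe still reads them. A minimal sketch of that ownership pattern, with illustrative types:

```cpp
#include <memory>
#include <vector>

struct QueryContext { int max_threads = 8; };

struct Pipe
{
    std::vector<std::shared_ptr<QueryContext>> holders;

    void addInterpreterContext(std::shared_ptr<QueryContext> ctx)
    {
        holders.push_back(std::move(ctx));  // the pipe now co-owns the context
    }
};

int main()
{
    Pipe pipe;
    {
        auto ctx = std::make_shared<QueryContext>();
        pipe.addInterpreterContext(ctx);
    }  // the local shared_ptr is gone; the pipe keeps the context alive
    return pipe.holders.front()->max_threads == 8 ? 0 : 1;
}
```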
auto lazily_create_stream = [ - pool = shard_info.pool, shard_num = shard_info.shard_num, modified_query, header = header, modified_query_ast, context, throttler, + pool = shard_info.pool, shard_num = shard_info.shard_num, modified_query, header = header, modified_query_ast, + &context, context_ptr, throttler, main_table = main_table, table_func_ptr = table_func_ptr, scalars = scalars, external_tables = external_tables, stage = processed_stage, local_delay, add_agg_info, add_totals, add_extremes]() -> Pipe @@ -288,13 +293,14 @@ void SelectStreamFactory::createForShard( connections.emplace_back(std::move(try_result.entry)); auto remote_query_executor = std::make_shared( - std::move(connections), modified_query, header, context, nullptr, throttler, scalars, external_tables, stage); + std::move(connections), modified_query, header, context, throttler, scalars, external_tables, stage); return createRemoteSourcePipe(remote_query_executor, add_agg_info, add_totals, add_extremes); } }; delayed_pipes.emplace_back(createDelayedPipe(header, lazily_create_stream, add_totals, add_extremes)); + delayed_pipes.back().addInterpreterContext(context_ptr); } else emplace_remote_stream(); diff --git a/src/Interpreters/ClusterProxy/SelectStreamFactory.h b/src/Interpreters/ClusterProxy/SelectStreamFactory.h index b51ac109a11..596e99b8a18 100644 --- a/src/Interpreters/ClusterProxy/SelectStreamFactory.h +++ b/src/Interpreters/ClusterProxy/SelectStreamFactory.h @@ -37,7 +37,7 @@ public: void createForShard( const Cluster::ShardInfo & shard_info, const String & query, const ASTPtr & query_ast, - const Context & context, const ThrottlerPtr & throttler, + const std::shared_ptr & context_ptr, const ThrottlerPtr & throttler, const SelectQueryInfo & query_info, std::vector & plans, Pipes & remote_pipes, diff --git a/src/Interpreters/ClusterProxy/executeQuery.cpp b/src/Interpreters/ClusterProxy/executeQuery.cpp index c79b17eac2a..59cbae67770 100644 --- a/src/Interpreters/ClusterProxy/executeQuery.cpp +++ b/src/Interpreters/ClusterProxy/executeQuery.cpp @@ -19,7 +19,7 @@ namespace DB namespace ClusterProxy { -Context updateSettingsForCluster(const Cluster & cluster, const Context & context, const Settings & settings, Poco::Logger * log) +std::shared_ptr updateSettingsForCluster(const Cluster & cluster, const Context & context, const Settings & settings, Poco::Logger * log) { Settings new_settings = settings; new_settings.queue_max_wait_ms = Cluster::saturate(new_settings.queue_max_wait_ms, settings.max_execution_time); @@ -78,9 +78,8 @@ Context updateSettingsForCluster(const Cluster & cluster, const Context & contex } } - Context new_context(context); - new_context.setSettings(new_settings); - + auto new_context = std::make_shared(context); + new_context->setSettings(new_settings); return new_context; } @@ -99,7 +98,7 @@ void executeQuery( const std::string query = queryToString(query_ast); - Context new_context = updateSettingsForCluster(*query_info.cluster, context, settings, log); + auto new_context = updateSettingsForCluster(*query_info.cluster, context, settings, log); ThrottlerPtr user_level_throttler; if (auto * process_list_element = context.getProcessListElement()) diff --git a/src/Interpreters/ClusterProxy/executeQuery.h b/src/Interpreters/ClusterProxy/executeQuery.h index 0b40c1412a1..8840b82d5b2 100644 --- a/src/Interpreters/ClusterProxy/executeQuery.h +++ b/src/Interpreters/ClusterProxy/executeQuery.h @@ -27,7 +27,7 @@ class IStreamFactory; /// - optimize_skip_unused_shards_nesting /// /// @return new Context 
with adjusted settings -Context updateSettingsForCluster(const Cluster & cluster, const Context & context, const Settings & settings, Poco::Logger * log = nullptr); +std::shared_ptr updateSettingsForCluster(const Cluster & cluster, const Context & context, const Settings & settings, Poco::Logger * log = nullptr); /// Execute a distributed query, creating a vector of BlockInputStreams, from which the result can be read. /// `stream_factory` object encapsulates the logic of creating streams for a different type of query diff --git a/src/Interpreters/DDLWorker.cpp b/src/Interpreters/DDLWorker.cpp index 5046667fb33..cce62b1a6c4 100644 --- a/src/Interpreters/DDLWorker.cpp +++ b/src/Interpreters/DDLWorker.cpp @@ -41,6 +41,11 @@ namespace fs = std::filesystem; +namespace CurrentMetrics +{ + extern const Metric MaxDDLEntryID; +} + namespace DB { @@ -312,6 +317,7 @@ DDLWorker::DDLWorker(int pool_size_, const std::string & zk_root_dir, Context & , pool_size(pool_size_) , worker_pool(pool_size_) { + CurrentMetrics::set(CurrentMetrics::MaxDDLEntryID, 0); last_tasks.reserve(pool_size); queue_dir = zk_root_dir; @@ -806,6 +812,22 @@ void DDLWorker::processTask(DDLTask & task) task.was_executed = true; } + { + DB::ReadBufferFromString in(task.entry_name); + DB::assertString("query-", in); + UInt64 id; + readText(id, in); + auto prev_id = max_id.load(std::memory_order_relaxed); + while (prev_id < id) + { + if (max_id.compare_exchange_weak(prev_id, id)) + { + CurrentMetrics::set(CurrentMetrics::MaxDDLEntryID, id); + break; + } + } + } + /// FIXME: if server fails right here, the task will be executed twice. We need WAL here. /// Delete active flag and create finish flag diff --git a/src/Interpreters/DDLWorker.h b/src/Interpreters/DDLWorker.h index 7dd9c38e9da..1c365a0d9ba 100644 --- a/src/Interpreters/DDLWorker.h +++ b/src/Interpreters/DDLWorker.h @@ -1,6 +1,7 @@ #pragma once #include +#include #include #include #include @@ -137,6 +138,8 @@ private: ThreadGroupStatusPtr thread_group; + std::atomic max_id = 0; + friend class DDLQueryStatusInputStream; friend struct DDLTask; }; diff --git a/src/Interpreters/DatabaseCatalog.cpp b/src/Interpreters/DatabaseCatalog.cpp index ea1984c96fb..813918314c6 100644 --- a/src/Interpreters/DatabaseCatalog.cpp +++ b/src/Interpreters/DatabaseCatalog.cpp @@ -4,7 +4,7 @@ #include #include #include -#include +#include #include #include #include @@ -16,6 +16,15 @@ #include #include +#if !defined(ARCADIA_BUILD) +# include "config_core.h" +#endif + +#if USE_MYSQL +# include +# include +#endif + #include @@ -217,6 +226,15 @@ DatabaseAndTable DatabaseCatalog::getTableImpl( exception->emplace("Table " + table_id.getNameForLogs() + " doesn't exist.", ErrorCodes::UNKNOWN_TABLE); return {}; } + +#if USE_MYSQL + /// It's definitely not the best place for this logic, but behaviour must be consistent with DatabaseMaterializeMySQL::tryGetTable(...) 
+ if (db_and_table.first->getEngineName() == "MaterializeMySQL") + { + if (!MaterializeMySQLSyncThread::isMySQLSyncThread()) + db_and_table.second = std::make_shared(std::move(db_and_table.second), db_and_table.first.get()); + } +#endif return db_and_table; } @@ -286,7 +304,6 @@ void DatabaseCatalog::attachDatabase(const String & database_name, const Databas assertDatabaseDoesntExistUnlocked(database_name); databases.emplace(database_name, database); UUID db_uuid = database->getUUID(); - assert((db_uuid != UUIDHelpers::Nil) ^ (dynamic_cast(database.get()) == nullptr)); if (db_uuid != UUIDHelpers::Nil) db_uuid_map.emplace(db_uuid, database); } @@ -313,9 +330,8 @@ DatabasePtr DatabaseCatalog::detachDatabase(const String & database_name, bool d if (!db->empty()) throw Exception("New table appeared in database being dropped or detached. Try again.", ErrorCodes::DATABASE_NOT_EMPTY); - auto * database_atomic = typeid_cast(db.get()); - if (!drop && database_atomic) - database_atomic->assertCanBeDetached(false); + if (!drop) + db->assertCanBeDetached(false); } catch (...) { diff --git a/src/Interpreters/ExpressionActions.h b/src/Interpreters/ExpressionActions.h index 2b1aa5e2456..5104f1e8e72 100644 --- a/src/Interpreters/ExpressionActions.h +++ b/src/Interpreters/ExpressionActions.h @@ -2,7 +2,6 @@ #include #include -#include #include #include diff --git a/src/Interpreters/ExpressionAnalyzer.cpp b/src/Interpreters/ExpressionAnalyzer.cpp index 802004b7ad6..2ae55b9d5bc 100644 --- a/src/Interpreters/ExpressionAnalyzer.cpp +++ b/src/Interpreters/ExpressionAnalyzer.cpp @@ -115,6 +115,11 @@ bool sanitizeBlock(Block & block, bool throw_if_cannot_create_column) return true; } +ExpressionAnalyzer::ExtractedSettings::ExtractedSettings(const Settings & settings_) + : use_index_for_in_with_subqueries(settings_.use_index_for_in_with_subqueries) + , size_limits_for_set(settings_.max_rows_in_set, settings_.max_bytes_in_set, settings_.set_overflow_mode) +{} + ExpressionAnalyzer::ExpressionAnalyzer( const ASTPtr & query_, diff --git a/src/Interpreters/ExpressionAnalyzer.h b/src/Interpreters/ExpressionAnalyzer.h index 9a2999b3667..b90c74abb62 100644 --- a/src/Interpreters/ExpressionAnalyzer.h +++ b/src/Interpreters/ExpressionAnalyzer.h @@ -1,6 +1,5 @@ #pragma once -#include #include #include #include @@ -16,6 +15,7 @@ namespace DB class Block; class Context; +struct Settings; struct ExpressionActionsChain; class ExpressionActions; @@ -85,10 +85,7 @@ private: const bool use_index_for_in_with_subqueries; const SizeLimits size_limits_for_set; - ExtractedSettings(const Settings & settings_) - : use_index_for_in_with_subqueries(settings_.use_index_for_in_with_subqueries), - size_limits_for_set(settings_.max_rows_in_set, settings_.max_bytes_in_set, settings_.set_overflow_mode) - {} + ExtractedSettings(const Settings & settings_); }; public: diff --git a/src/Interpreters/InterpreterCreateQuery.cpp b/src/Interpreters/InterpreterCreateQuery.cpp index ff58ebf7fc3..a7edd8dd5cd 100644 --- a/src/Interpreters/InterpreterCreateQuery.cpp +++ b/src/Interpreters/InterpreterCreateQuery.cpp @@ -7,6 +7,7 @@ #include #include #include +#include #include #include @@ -56,6 +57,7 @@ #include #include +#include namespace DB @@ -76,6 +78,8 @@ namespace ErrorCodes extern const int ILLEGAL_SYNTAX_FOR_DATA_TYPE; extern const int ILLEGAL_COLUMN; extern const int LOGICAL_ERROR; + extern const int PATH_ACCESS_DENIED; + extern const int NOT_IMPLEMENTED; } namespace fs = std::filesystem; @@ -143,7 +147,8 @@ BlockIO 
InterpreterCreateQuery::createDatabase(ASTCreateQuery & create) if (create.storage->engine->name == "Atomic") { if (create.attach && create.uuid == UUIDHelpers::Nil) - throw Exception("UUID must be specified for ATTACH", ErrorCodes::INCORRECT_QUERY); + throw Exception(ErrorCodes::INCORRECT_QUERY, "UUID must be specified for ATTACH. " + "If you want to attach existing database, use just ATTACH DATABASE {};", create.database); else if (create.uuid == UUIDHelpers::Nil) create.uuid = UUIDHelpers::generateV4(); @@ -152,6 +157,35 @@ BlockIO InterpreterCreateQuery::createDatabase(ASTCreateQuery & create) if (!create.attach && fs::exists(metadata_path)) throw Exception(ErrorCodes::DATABASE_ALREADY_EXISTS, "Metadata directory {} already exists", metadata_path.string()); } + else if (create.storage->engine->name == "MaterializeMySQL") + { + /// It creates nested database with Ordinary or Atomic engine depending on UUID in query and default engine setting. + /// Do nothing if it's an internal ATTACH on server startup or short-syntax ATTACH query from user, + /// because we got correct query from the metadata file in this case. + /// If we got query from user, then normalize it first. + bool attach_from_user = create.attach && !internal && !create.attach_short_syntax; + bool create_from_user = !create.attach; + + if (create_from_user) + { + const auto & default_engine = context.getSettingsRef().default_database_engine.value; + if (create.uuid == UUIDHelpers::Nil && default_engine == DefaultDatabaseEngine::Atomic) + create.uuid = UUIDHelpers::generateV4(); /// Will enable Atomic engine for nested database + } + else if (attach_from_user && create.uuid == UUIDHelpers::Nil) + { + /// Ambiguity is possible: should we attach nested database as Ordinary + /// or throw "UUID must be specified" for Atomic? So we suggest short syntax for Ordinary. + throw Exception("Use short attach syntax ('ATTACH DATABASE name;' without engine) to attach existing database " + "or specify UUID to attach new database with Atomic engine", ErrorCodes::INCORRECT_QUERY); + } + + /// Set metadata path according to nested engine + if (create.uuid == UUIDHelpers::Nil) + metadata_path = metadata_path / "metadata" / database_name_escaped; + else + metadata_path = metadata_path / "store" / DatabaseCatalog::getPathForUUID(create.uuid); + } else { bool is_on_cluster = context.getClientInfo().query_kind == ClientInfo::QueryKind::SECONDARY_QUERY; @@ -209,7 +243,8 @@ BlockIO InterpreterCreateQuery::createDatabase(ASTCreateQuery & create) if (need_write_metadata) { - fs::rename(metadata_file_tmp_path, metadata_file_path); + /// Prevents from overwriting metadata of detached database + renameNoReplace(metadata_file_tmp_path, metadata_file_path); renamed = true; } @@ -647,13 +682,27 @@ void InterpreterCreateQuery::assertOrSetUUID(ASTCreateQuery & create, const Data const auto * kind = create.is_dictionary ? "Dictionary" : "Table"; const auto * kind_upper = create.is_dictionary ? "DICTIONARY" : "TABLE"; - if (database->getEngineName() == "Atomic") + bool from_path = create.attach_from_path.has_value(); + + if (database->getUUID() != UUIDHelpers::Nil) { - if (create.attach && create.uuid == UUIDHelpers::Nil) + if (create.attach && !from_path && create.uuid == UUIDHelpers::Nil) + { throw Exception(ErrorCodes::INCORRECT_QUERY, - "UUID must be specified in ATTACH {} query for Atomic database engine", - kind_upper); - if (!create.attach && create.uuid == UUIDHelpers::Nil) + "Incorrect ATTACH {} query for Atomic database engine. 
" + "Use one of the following queries instead:\n" + "1. ATTACH {} {};\n" + "2. CREATE {} {} ;\n" + "3. ATTACH {} {} FROM '/path/to/data/'
;\n" + "4. ATTACH {} {} UUID ''
;", + kind_upper, + kind_upper, create.table, + kind_upper, create.table, + kind_upper, create.table, + kind_upper, create.table); + } + + if (create.uuid == UUIDHelpers::Nil) create.uuid = UUIDHelpers::generateV4(); } else @@ -696,7 +745,32 @@ BlockIO InterpreterCreateQuery::createTable(ASTCreateQuery & create) create.attach_short_syntax = true; create.if_not_exists = if_not_exists; } - /// TODO maybe assert table structure if create.attach_short_syntax is false? + + /// TODO throw exception if !create.attach_short_syntax && !create.attach_from_path && !internal + + if (create.attach_from_path) + { + fs::path data_path = fs::path(*create.attach_from_path).lexically_normal(); + fs::path user_files = fs::path(context.getUserFilesPath()).lexically_normal(); + if (data_path.is_relative()) + data_path = (user_files / data_path).lexically_normal(); + if (!startsWith(data_path, user_files)) + throw Exception(ErrorCodes::PATH_ACCESS_DENIED, + "Data directory {} must be inside {} to attach it", String(data_path), String(user_files)); + + fs::path root_path = fs::path(context.getPath()).lexically_normal(); + /// Data path must be relative to root_path + create.attach_from_path = fs::relative(data_path, root_path) / ""; + } + else if (create.attach && !create.attach_short_syntax) + { + auto * log = &Poco::Logger::get("InterpreterCreateQuery"); + LOG_WARNING(log, "ATTACH TABLE query with full table definition is not recommended: " + "use either ATTACH TABLE {}; to attach existing table " + "or CREATE TABLE {}
; to create new table " + "or ATTACH TABLE {} FROM '/path/to/data/'
; to create new table and attach data.", + create.table, create.table, create.table); + } if (!create.temporary && create.database.empty()) create.database = current_database; @@ -775,6 +849,18 @@ bool InterpreterCreateQuery::doCreateTable(ASTCreateQuery & create, return true; } + bool from_path = create.attach_from_path.has_value(); + String actual_data_path = data_path; + if (from_path) + { + if (data_path.empty()) + throw Exception(ErrorCodes::NOT_IMPLEMENTED, + "ATTACH ... FROM ... query is not supported for {} database engine", database->getEngineName()); + /// We will try to create Storage instance with provided data path + data_path = *create.attach_from_path; + create.attach_from_path = std::nullopt; + } + StoragePtr res; /// NOTE: CREATE query may be rewritten by Storage creator or table function if (create.as_table_function) @@ -786,7 +872,7 @@ bool InterpreterCreateQuery::doCreateTable(ASTCreateQuery & create, else { res = StorageFactory::instance().get(create, - database ? database->getTableDataPath(create) : "", + data_path, context, context.getGlobalContext(), properties.columns, @@ -794,8 +880,18 @@ bool InterpreterCreateQuery::doCreateTable(ASTCreateQuery & create, false); } + if (from_path && !res->storesDataOnDisk()) + throw Exception(ErrorCodes::NOT_IMPLEMENTED, + "ATTACH ... FROM ... query is not supported for {} table engine, " + "because such tables do not store any data on disk. Use CREATE instead.", res->getName()); + database->createTable(context, table_name, res, query_ptr); + /// Move table data to the proper place. Wo do not move data earlier to avoid situations + /// when data directory moved, but table has not been created due to some error. + if (from_path) + res->rename(actual_data_path, {create.database, create.table, create.uuid}); + /// We must call "startup" and "shutdown" while holding DDLGuard. 
/// Because otherwise method "shutdown" (from InterpreterDropQuery) can be called before startup /// (in case when table was created and instantly dropped before started up) diff --git a/src/Interpreters/InterpreterDropQuery.cpp b/src/Interpreters/InterpreterDropQuery.cpp index 5f7f70fdda4..00039297244 100644 --- a/src/Interpreters/InterpreterDropQuery.cpp +++ b/src/Interpreters/InterpreterDropQuery.cpp @@ -11,7 +11,14 @@ #include #include #include -#include + +#if !defined(ARCADIA_BUILD) +# include "config_core.h" +#endif + +#if USE_MYSQL +# include +#endif namespace DB @@ -66,10 +73,7 @@ void InterpreterDropQuery::waitForTableToBeActuallyDroppedOrDetached(const ASTDr if (query.kind == ASTDropQuery::Kind::Drop) DatabaseCatalog::instance().waitTableFinallyDropped(uuid_to_wait); else if (query.kind == ASTDropQuery::Kind::Detach) - { - if (auto * atomic = typeid_cast(db.get())) - atomic->waitDetachedTableNotInUse(uuid_to_wait); - } + db->waitDetachedTableNotInUse(uuid_to_wait); } BlockIO InterpreterDropQuery::executeToTable(const ASTDropQuery & query) @@ -122,7 +126,7 @@ BlockIO InterpreterDropQuery::executeToTableImpl(const ASTDropQuery & query, Dat table->checkTableCanBeDetached(); table->shutdown(); TableExclusiveLockHolder table_lock; - if (database->getEngineName() != "Atomic") + if (database->getUUID() == UUIDHelpers::Nil) table_lock = table->lockExclusively(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); /// Drop table from memory, don't touch data and metadata database->detachTable(table_id.table_name); @@ -145,7 +149,7 @@ BlockIO InterpreterDropQuery::executeToTableImpl(const ASTDropQuery & query, Dat table->shutdown(); TableExclusiveLockHolder table_lock; - if (database->getEngineName() != "Atomic") + if (database->getUUID() == UUIDHelpers::Nil) table_lock = table->lockExclusively(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); database->dropTable(context, table_id.table_name, query.no_delay); @@ -282,6 +286,11 @@ BlockIO InterpreterDropQuery::executeToDatabaseImpl(const ASTDropQuery & query, bool drop = query.kind == ASTDropQuery::Kind::Drop; context.checkAccess(AccessType::DROP_DATABASE, database_name); +#if USE_MYSQL + if (database->getEngineName() == "MaterializeMySQL") + stopDatabaseSynchronization(database); +#endif + if (database->shouldBeEmptyOnDetach()) { /// DETACH or DROP all tables and dictionaries inside database. 
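The `attach_from_path` hunk in `InterpreterCreateQuery.cpp` above boils down to a small path-containment check: normalize the user-supplied path, resolve relative paths against `user_files`, refuse anything that escapes it, then rebase the result onto the server root. The sketch below illustrates that logic only and is not the server code: it assumes a plain string-prefix test instead of ClickHouse's `startsWith` helper, `lexically_relative` instead of `fs::relative` (to keep the sketch free of filesystem access), and a `std::runtime_error` in place of the `PATH_ACCESS_DENIED` exception.

```cpp
#include <filesystem>
#include <iostream>
#include <stdexcept>
#include <string>

namespace fs = std::filesystem;

/// Normalize, resolve against user_files, check containment, rebase onto root.
fs::path resolveAttachPath(const std::string & attach_from, const fs::path & user_files, const fs::path & root)
{
    fs::path data_path = fs::path(attach_from).lexically_normal();
    if (data_path.is_relative())
        data_path = (user_files / data_path).lexically_normal();

    /// Simplified containment test; it is only meaningful because both sides
    /// were lexically normalized first (".." components are already gone).
    if (data_path.string().rfind(user_files.string(), 0) != 0)
        throw std::runtime_error("Data directory " + data_path.string()
                                 + " must be inside " + user_files.string() + " to attach it");

    /// The interpreter keeps the path relative to the server root, with a trailing slash.
    return data_path.lexically_relative(root) / "";
}

int main()
{
    const fs::path user_files = "/var/lib/clickhouse/user_files";
    const fs::path root = "/var/lib/clickhouse";

    std::cout << resolveAttachPath("my_table", user_files, root) << '\n'; // "user_files/my_table/"
    // resolveAttachPath("../../../etc", user_files, root) normalizes to /var/etc and throws.
}
```

Note the two-phase design this check feeds into: the table is created with the supplied data path first, and only after `createTable` succeeds is the directory renamed into place, so a failed attach never leaves the source directory half-moved.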
@@ -312,9 +321,8 @@ BlockIO InterpreterDropQuery::executeToDatabaseImpl(const ASTDropQuery & query, /// Protects from concurrent CREATE TABLE queries auto db_guard = DatabaseCatalog::instance().getExclusiveDDLGuardForDatabase(database_name); - auto * database_atomic = typeid_cast(database.get()); - if (!drop && database_atomic) - database_atomic->assertCanBeDetached(true); + if (!drop) + database->assertCanBeDetached(true); /// DETACH or DROP database itself DatabaseCatalog::instance().detachDatabase(database_name, drop, database->shouldBeEmptyOnDetach()); diff --git a/src/Interpreters/InterpreterInsertQuery.cpp b/src/Interpreters/InterpreterInsertQuery.cpp index 39381bf0241..e3fc67b432c 100644 --- a/src/Interpreters/InterpreterInsertQuery.cpp +++ b/src/Interpreters/InterpreterInsertQuery.cpp @@ -13,6 +13,7 @@ #include #include #include +#include #include #include #include diff --git a/src/Interpreters/InterpreterSelectQuery.cpp b/src/Interpreters/InterpreterSelectQuery.cpp index 446072602e0..4b786163bb1 100644 --- a/src/Interpreters/InterpreterSelectQuery.cpp +++ b/src/Interpreters/InterpreterSelectQuery.cpp @@ -1847,7 +1847,7 @@ void InterpreterSelectQuery::executeOrder(QueryPlan & query_plan, InputOrderInfo auto merge_sorting_step = std::make_unique( query_plan.getCurrentDataStream(), output_order_descr, settings.max_block_size, limit, - settings.max_bytes_before_remerge_sort, + settings.max_bytes_before_remerge_sort, settings.remerge_sort_lowered_memory_bytes_ratio, settings.max_bytes_before_external_sort, context->getTemporaryVolume(), settings.min_free_disk_space_for_temporary_data); diff --git a/src/Interpreters/MergeJoin.cpp b/src/Interpreters/MergeJoin.cpp index 665ec4d60f3..60357187270 100644 --- a/src/Interpreters/MergeJoin.cpp +++ b/src/Interpreters/MergeJoin.cpp @@ -516,7 +516,7 @@ void MergeJoin::mergeInMemoryRightBlocks() pipeline.init(std::move(source)); /// TODO: there should be no split keys by blocks for RIGHT|FULL JOIN - pipeline.addTransform(std::make_shared(pipeline.getHeader(), right_sort_description, max_rows_in_right_block, 0, 0, 0, nullptr, 0)); + pipeline.addTransform(std::make_shared(pipeline.getHeader(), right_sort_description, max_rows_in_right_block, 0, 0, 0, 0, nullptr, 0)); auto sorted_input = PipelineExecutingBlockInputStream(std::move(pipeline)); diff --git a/src/Interpreters/MySQL/InterpretersMySQLDDLQuery.cpp b/src/Interpreters/MySQL/InterpretersMySQLDDLQuery.cpp index 2f8a544103e..e5263f54696 100644 --- a/src/Interpreters/MySQL/InterpretersMySQLDDLQuery.cpp +++ b/src/Interpreters/MySQL/InterpretersMySQLDDLQuery.cpp @@ -180,7 +180,7 @@ static inline std::tupleas()) { /// column_name(int64 literal) - if (columns_name_set.contains(function->name) && function->arguments->children.size() == 1) + if (columns_name_set.count(function->name) && function->arguments->children.size() == 1) { const auto & prefix_limit = function->arguments->children[0]->as(); diff --git a/src/Interpreters/TreeRewriter.cpp b/src/Interpreters/TreeRewriter.cpp index ef64dcd6e65..c6e69f1c06c 100644 --- a/src/Interpreters/TreeRewriter.cpp +++ b/src/Interpreters/TreeRewriter.cpp @@ -125,12 +125,60 @@ struct CustomizeAggregateFunctionsSuffixData { auto properties = instance.tryGetProperties(func.name); if (properties && !properties->returns_default_when_only_null) - func.name = func.name + customized_func_suffix; + { + func.name += customized_func_suffix; + } + } + } +}; + +// Used to rewrite aggregate functions with -OrNull suffix in some cases, such as sumIfOrNull, which we should rewrite to
sumOrNullIf +struct CustomizeAggregateFunctionsMoveSuffixData +{ + using TypeToVisit = ASTFunction; + + const String & customized_func_suffix; + + String moveSuffixAhead(const String & name) const + { + auto prefix = name.substr(0, name.size() - customized_func_suffix.size()); + + auto prefix_size = prefix.size(); + + if (endsWith(prefix, "MergeState")) + return prefix.substr(0, prefix_size - 10) + customized_func_suffix + "MergeState"; + + if (endsWith(prefix, "Merge")) + return prefix.substr(0, prefix_size - 5) + customized_func_suffix + "Merge"; + + if (endsWith(prefix, "State")) + return prefix.substr(0, prefix_size - 5) + customized_func_suffix + "State"; + + if (endsWith(prefix, "If")) + return prefix.substr(0, prefix_size - 2) + customized_func_suffix + "If"; + + return name; + } + + void visit(ASTFunction & func, ASTPtr &) const + { + const auto & instance = AggregateFunctionFactory::instance(); + if (instance.isAggregateFunctionName(func.name)) + { + if (endsWith(func.name, customized_func_suffix)) + { + auto properties = instance.tryGetProperties(func.name); + if (properties && !properties->returns_default_when_only_null) + { + func.name = moveSuffixAhead(func.name); + } + } } } }; using CustomizeAggregateFunctionsOrNullVisitor = InDepthNodeVisitor, true>; +using CustomizeAggregateFunctionsMoveOrNullVisitor = InDepthNodeVisitor, true>; /// Translate qualified names such as db.table.column, table.column, table_alias.column to names' normal form. /// Expand asterisks and qualified asterisks with column names. @@ -554,7 +602,7 @@ void TreeRewriterResult::collectUsedColumns(const ASTPtr & query, bool is_select /// If we have no information about columns sizes, choose a column of minimum size of its data type. required.insert(ExpressionActions::getSmallestColumn(source_columns)); } - else if (is_select && metadata_snapshot) + else if (is_select && metadata_snapshot && !columns_context.has_array_join) { const auto & partition_desc = metadata_snapshot->getPartitionKey(); if (partition_desc.expression) @@ -794,6 +842,10 @@ void TreeRewriter::normalize(ASTPtr & query, Aliases & aliases, const Settings & CustomizeAggregateFunctionsOrNullVisitor(data_or_null).visit(query); } + /// Move -OrNull suffix ahead, this should execute after add -OrNull suffix + CustomizeAggregateFunctionsMoveOrNullVisitor::Data data_or_null{"OrNull"}; + CustomizeAggregateFunctionsMoveOrNullVisitor(data_or_null).visit(query); + /// Creates a dictionary `aliases`: alias -> ASTPtr QueryAliasesVisitor(aliases).visit(query); diff --git a/src/Interpreters/tests/CMakeLists.txt b/src/Interpreters/tests/CMakeLists.txt index 20aa73166fb..1bc9d7fbacb 100644 --- a/src/Interpreters/tests/CMakeLists.txt +++ b/src/Interpreters/tests/CMakeLists.txt @@ -29,6 +29,9 @@ target_link_libraries (string_hash_map PRIVATE dbms) add_executable (string_hash_map_aggregation string_hash_map.cpp) target_link_libraries (string_hash_map_aggregation PRIVATE dbms) +add_executable (string_hash_set string_hash_set.cpp) +target_link_libraries (string_hash_set PRIVATE dbms) + add_executable (two_level_hash_map two_level_hash_map.cpp) target_include_directories (two_level_hash_map SYSTEM BEFORE PRIVATE ${SPARSEHASH_INCLUDE_DIR}) target_link_libraries (two_level_hash_map PRIVATE dbms) diff --git a/src/Interpreters/tests/gtest_cycle_aliases.cpp b/src/Interpreters/tests/gtest_cycle_aliases.cpp index 593db93de3e..56e23c6a497 100644 --- a/src/Interpreters/tests/gtest_cycle_aliases.cpp +++ b/src/Interpreters/tests/gtest_cycle_aliases.cpp @@ -5,6 +5,7 @@ 
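The `moveSuffixAhead` rule above is easy to sanity-check in isolation. Below is a standalone reimplementation for illustration only; it assumes a hardcoded `OrNull` suffix and the four combinator names, and omits the `AggregateFunctionFactory` lookups that guard the real visitor:

```cpp
#include <iostream>
#include <string>

/// -OrNull should sit before combinators such as -If/-State/-Merge,
/// so a name like sumIfOrNull is normalized to sumOrNullIf.
std::string moveOrNullAhead(const std::string & name)
{
    const std::string suffix = "OrNull";
    auto endsWith = [](const std::string & s, const std::string & tail)
    {
        return s.size() >= tail.size()
            && s.compare(s.size() - tail.size(), tail.size(), tail) == 0;
    };

    if (!endsWith(name, suffix))
        return name;

    std::string prefix = name.substr(0, name.size() - suffix.size());
    for (const std::string & combinator : {"MergeState", "Merge", "State", "If"})
        if (endsWith(prefix, combinator))
            return prefix.substr(0, prefix.size() - combinator.size()) + suffix + combinator;

    return name; /// No recognized combinator: the suffix is already in the right place.
}

int main()
{
    std::cout << moveOrNullAhead("sumIfOrNull") << '\n';    // sumOrNullIf
    std::cout << moveOrNullAhead("avgStateOrNull") << '\n'; // avgOrNullState
    std::cout << moveOrNullAhead("countOrNull") << '\n';    // countOrNull (unchanged)
}
```

Checking `MergeState` before `Merge` and `State` matters, because both are suffixes of it; the visitor above tests the combinators in the same order.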
#include #include #include +#include using namespace DB; diff --git a/src/Interpreters/tests/string_hash_set.cpp b/src/Interpreters/tests/string_hash_set.cpp new file mode 100644 index 00000000000..d9d6453da34 --- /dev/null +++ b/src/Interpreters/tests/string_hash_set.cpp @@ -0,0 +1,83 @@ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/// NOTE: see string_hash_map.cpp for usage example + +template +void NO_INLINE bench(const std::vector & data, DB::Arena & pool, const char * name) +{ + std::cerr << "method " << name << std::endl; + for (auto t = 0ul; t < 7; ++t) + { + Stopwatch watch; + Set set; + typename Set::LookupResult it; + bool inserted; + + for (const auto & value : data) + { + if constexpr (std::is_same_v, Set>) + set.emplace(DB::ArenaKeyHolder{value, pool}, inserted); + else + set.emplace(DB::ArenaKeyHolder{value, pool}, it, inserted); + } + watch.stop(); + + std::cerr << "arena-memory " << pool.size() + set.getBufferSizeInBytes() << std::endl; + std::cerr << "single-run " << std::setprecision(3) + << watch.elapsedSeconds() << std::endl; + } +} + +int main(int argc, char ** argv) +{ + if (argc < 3) + { + std::cerr << "Usage: program n m\n"; + return 1; + } + + size_t n = std::stol(argv[1]); + size_t m = std::stol(argv[2]); + + DB::Arena pool(128 * 1024 * 1024); + std::vector data(n); + + std::cerr << "sizeof(Key) = " << sizeof(StringRef) << std::endl; + + { + Stopwatch watch; + DB::ReadBufferFromFileDescriptor in1(STDIN_FILENO); + DB::CompressedReadBuffer in2(in1); + + std::string tmp; + for (size_t i = 0; i < n && !in2.eof(); ++i) + { + DB::readStringBinary(tmp, in2); + data[i] = StringRef(pool.insert(tmp.data(), tmp.size()), tmp.size()); + } + + watch.stop(); + std::cerr << std::fixed << std::setprecision(2) << "Vector. Size: " << n << ", elapsed: " << watch.elapsedSeconds() << " (" + << n / watch.elapsedSeconds() << " elem/sec.)" << std::endl; + } + + if (!m || m == 1) + bench>(data, pool, "StringHashSet"); + if (!m || m == 2) + bench>(data, pool, "HashSetWithSavedHash"); + if (!m || m == 3) + bench>(data, pool, "HashSet"); + return 0; +} diff --git a/src/Parsers/ASTCreateQuery.cpp b/src/Parsers/ASTCreateQuery.cpp index a193433c988..03db54c6957 100644 --- a/src/Parsers/ASTCreateQuery.cpp +++ b/src/Parsers/ASTCreateQuery.cpp @@ -251,6 +251,12 @@ void ASTCreateQuery::formatQueryImpl(const FormatSettings & settings, FormatStat if (uuid != UUIDHelpers::Nil) settings.ostr << (settings.hilite ? hilite_keyword : "") << " UUID " << (settings.hilite ? hilite_none : "") << quoteString(toString(uuid)); + + assert(attach || !attach_from_path); + if (attach_from_path) + settings.ostr << (settings.hilite ? hilite_keyword : "") << " FROM " << (settings.hilite ? hilite_none : "") + << quoteString(*attach_from_path); + if (live_view_timeout) settings.ostr << (settings.hilite ? hilite_keyword : "") << " WITH TIMEOUT " << (settings.hilite ? hilite_none : "") << *live_view_timeout; diff --git a/src/Parsers/ASTCreateQuery.h b/src/Parsers/ASTCreateQuery.h index b47a1dbd5e2..7b2deb99698 100644 --- a/src/Parsers/ASTCreateQuery.h +++ b/src/Parsers/ASTCreateQuery.h @@ -79,6 +79,8 @@ public: std::optional live_view_timeout; /// For CREATE LIVE VIEW ... WITH TIMEOUT ... bool attach_short_syntax{false}; + std::optional attach_from_path = std::nullopt; + /** Get the text that identifies this element. */ String getID(char delim) const override { return (attach ? 
"AttachQuery" : "CreateQuery") + (delim + database) + delim + table; } diff --git a/src/Parsers/IParser.h b/src/Parsers/IParser.h index 05ceb8c900b..7dc31e4c1eb 100644 --- a/src/Parsers/IParser.h +++ b/src/Parsers/IParser.h @@ -5,7 +5,6 @@ #include #include -#include #include #include #include diff --git a/src/Parsers/New/CMakeLists.txt b/src/Parsers/New/CMakeLists.txt index 1c3fd8368be..360dd4d7488 100644 --- a/src/Parsers/New/CMakeLists.txt +++ b/src/Parsers/New/CMakeLists.txt @@ -58,7 +58,7 @@ check_cxx_compiler_flag("-Wshadow" HAS_SHADOW) target_compile_options (clickhouse_parsers_new PRIVATE - -Wno-c++20-compat + -Wno-c++2a-compat -Wno-deprecated-this-capture -Wno-documentation-html -Wno-documentation diff --git a/src/Parsers/ParserCreateQuery.cpp b/src/Parsers/ParserCreateQuery.cpp index 23b4c6f7932..5fe6a4eee80 100644 --- a/src/Parsers/ParserCreateQuery.cpp +++ b/src/Parsers/ParserCreateQuery.cpp @@ -359,6 +359,7 @@ bool ParserCreateTableQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expe ParserKeyword s_table("TABLE"); ParserKeyword s_if_not_exists("IF NOT EXISTS"); ParserCompoundIdentifier table_name_p(true); + ParserKeyword s_from("FROM"); ParserKeyword s_on("ON"); ParserKeyword s_as("AS"); ParserToken s_dot(TokenType::Dot); @@ -378,6 +379,7 @@ bool ParserCreateTableQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expe ASTPtr as_table; ASTPtr as_table_function; ASTPtr select; + ASTPtr from_path; String cluster_str; bool attach = false; @@ -405,6 +407,13 @@ bool ParserCreateTableQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expe if (!table_name_p.parse(pos, table, expected)) return false; + if (attach && s_from.ignore(pos, expected)) + { + ParserLiteral from_path_p; + if (!from_path_p.parse(pos, from_path, expected)) + return false; + } + if (s_on.ignore(pos, expected)) { if (!ASTQueryWithOnCluster::parse(pos, cluster_str, expected)) @@ -414,7 +423,8 @@ bool ParserCreateTableQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expe StorageID table_id = getTableIdentifier(table); // Shortcut for ATTACH a previously detached table - if (attach && (!pos.isValid() || pos.get().type == TokenType::Semicolon)) + bool short_attach = attach && !from_path; + if (short_attach && (!pos.isValid() || pos.get().type == TokenType::Semicolon)) { auto query = std::make_shared(); node = query; @@ -439,12 +449,22 @@ bool ParserCreateTableQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expe if (!s_rparen.ignore(pos, expected)) return false; - if (!storage_p.parse(pos, storage, expected) && !is_temporary) + auto storage_parse_result = storage_p.parse(pos, storage, expected); + + if (storage_parse_result && s_as.ignore(pos, expected)) + { + if (!select_p.parse(pos, select, expected)) + return false; + } + + if (!storage_parse_result && !is_temporary) { if (!s_as.ignore(pos, expected)) return false; if (!table_function_p.parse(pos, as_table_function, expected)) + { return false; + } } } else @@ -509,6 +529,9 @@ bool ParserCreateTableQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expe tryGetIdentifierNameInto(as_table, query->as_table); query->set(query->select, select); + if (from_path) + query->attach_from_path = from_path->as().value.get(); + return true; } diff --git a/src/Parsers/ParserSetQuery.cpp b/src/Parsers/ParserSetQuery.cpp index 328d01f2f81..aac3a191a10 100644 --- a/src/Parsers/ParserSetQuery.cpp +++ b/src/Parsers/ParserSetQuery.cpp @@ -6,6 +6,7 @@ #include #include +#include namespace DB diff --git a/src/Parsers/ParserSetQuery.h 
b/src/Parsers/ParserSetQuery.h index 59a6109ea48..0bc1cec3093 100644 --- a/src/Parsers/ParserSetQuery.h +++ b/src/Parsers/ParserSetQuery.h @@ -7,6 +7,8 @@ namespace DB { +struct SettingChange; + /** Query like this: * SET name1 = value1, name2 = value2, ... */ diff --git a/src/Processors/Formats/Impl/XMLRowOutputFormat.cpp b/src/Processors/Formats/Impl/XMLRowOutputFormat.cpp index a677d0de9a0..6fd63a18147 100644 --- a/src/Processors/Formats/Impl/XMLRowOutputFormat.cpp +++ b/src/Processors/Formats/Impl/XMLRowOutputFormat.cpp @@ -67,10 +67,10 @@ void XMLRowOutputFormat::writePrefix() writeCString("\t\t\t\n", *ostr); writeCString("\t\t\t\t", *ostr); - writeXMLString(field.name, *ostr); + writeXMLStringForTextElement(field.name, *ostr); writeCString("\n", *ostr); writeCString("\t\t\t\t", *ostr); - writeXMLString(field.type->getName(), *ostr); + writeXMLStringForTextElement(field.type->getName(), *ostr); writeCString("\n", *ostr); writeCString("\t\t\t\n", *ostr); diff --git a/src/Processors/QueryPlan/MergeSortingStep.cpp b/src/Processors/QueryPlan/MergeSortingStep.cpp index fb263cbcca1..b30286130b1 100644 --- a/src/Processors/QueryPlan/MergeSortingStep.cpp +++ b/src/Processors/QueryPlan/MergeSortingStep.cpp @@ -28,6 +28,7 @@ MergeSortingStep::MergeSortingStep( size_t max_merged_block_size_, UInt64 limit_, size_t max_bytes_before_remerge_, + double remerge_lowered_memory_bytes_ratio_, size_t max_bytes_before_external_sort_, VolumePtr tmp_volume_, size_t min_free_disk_space_) @@ -36,6 +37,7 @@ MergeSortingStep::MergeSortingStep( , max_merged_block_size(max_merged_block_size_) , limit(limit_) , max_bytes_before_remerge(max_bytes_before_remerge_) + , remerge_lowered_memory_bytes_ratio(remerge_lowered_memory_bytes_ratio_) , max_bytes_before_external_sort(max_bytes_before_external_sort_), tmp_volume(tmp_volume_) , min_free_disk_space(min_free_disk_space_) { @@ -64,6 +66,7 @@ void MergeSortingStep::transformPipeline(QueryPipeline & pipeline) return std::make_shared( header, description, max_merged_block_size, limit, max_bytes_before_remerge / pipeline.getNumStreams(), + remerge_lowered_memory_bytes_ratio, max_bytes_before_external_sort, tmp_volume, min_free_disk_space); diff --git a/src/Processors/QueryPlan/MergeSortingStep.h b/src/Processors/QueryPlan/MergeSortingStep.h index a54ea7ac365..a385a8a3e93 100644 --- a/src/Processors/QueryPlan/MergeSortingStep.h +++ b/src/Processors/QueryPlan/MergeSortingStep.h @@ -17,6 +17,7 @@ public: size_t max_merged_block_size_, UInt64 limit_, size_t max_bytes_before_remerge_, + double remerge_lowered_memory_bytes_ratio_, size_t max_bytes_before_external_sort_, VolumePtr tmp_volume_, size_t min_free_disk_space_); @@ -36,6 +37,7 @@ private: UInt64 limit; size_t max_bytes_before_remerge; + double remerge_lowered_memory_bytes_ratio; size_t max_bytes_before_external_sort; VolumePtr tmp_volume; size_t min_free_disk_space; diff --git a/src/Processors/Transforms/MergeSortingTransform.cpp b/src/Processors/Transforms/MergeSortingTransform.cpp index 9be139e7875..ce6d0ad1f6c 100644 --- a/src/Processors/Transforms/MergeSortingTransform.cpp +++ b/src/Processors/Transforms/MergeSortingTransform.cpp @@ -25,7 +25,6 @@ namespace ErrorCodes { extern const int NOT_ENOUGH_SPACE; } -class MergeSorter; class BufferingToFileTransform : public IAccumulatingTransform @@ -96,10 +95,12 @@ MergeSortingTransform::MergeSortingTransform( const SortDescription & description_, size_t max_merged_block_size_, UInt64 limit_, size_t max_bytes_before_remerge_, + double remerge_lowered_memory_bytes_ratio_, 
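/// Editor's note (not part of the original patch): this ratio gates re-merging.
/// In MergeSortingTransform::remerge() below, when new_sum_bytes_in_blocks * ratio
/// still exceeds sum_bytes_in_blocks, re-merging is judged not useful and disabled;
/// a ratio of 0 leaves re-merging enabled unconditionally.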
size_t max_bytes_before_external_sort_, VolumePtr tmp_volume_, size_t min_free_disk_space_) : SortingTransform(header, description_, max_merged_block_size_, limit_) , max_bytes_before_remerge(max_bytes_before_remerge_) + , remerge_lowered_memory_bytes_ratio(remerge_lowered_memory_bytes_ratio_) , max_bytes_before_external_sort(max_bytes_before_external_sort_), tmp_volume(tmp_volume_) , min_free_disk_space(min_free_disk_space_) {} @@ -268,9 +269,12 @@ void MergeSortingTransform::remerge() LOG_DEBUG(log, "Memory usage is lowered from {} to {}", ReadableSize(sum_bytes_in_blocks), ReadableSize(new_sum_bytes_in_blocks)); - /// If the memory consumption was not lowered enough - we will not perform remerge anymore. 2 is a guess. - if (new_sum_bytes_in_blocks * 2 > sum_bytes_in_blocks) + /// If the memory consumption was not lowered enough - we will not perform remerge anymore. + if (remerge_lowered_memory_bytes_ratio && (new_sum_bytes_in_blocks * remerge_lowered_memory_bytes_ratio > sum_bytes_in_blocks)) + { remerge_is_useful = false; + LOG_DEBUG(log, "Re-merging is not useful (memory usage was not lowered by remerge_sort_lowered_memory_bytes_ratio={})", remerge_lowered_memory_bytes_ratio); + } chunks = std::move(new_chunks); sum_rows_in_blocks = new_sum_rows_in_blocks; diff --git a/src/Processors/Transforms/MergeSortingTransform.h b/src/Processors/Transforms/MergeSortingTransform.h index d328cb818d5..c9ab804cdc8 100644 --- a/src/Processors/Transforms/MergeSortingTransform.h +++ b/src/Processors/Transforms/MergeSortingTransform.h @@ -22,6 +22,7 @@ public: const SortDescription & description_, size_t max_merged_block_size_, UInt64 limit_, size_t max_bytes_before_remerge_, + double remerge_lowered_memory_bytes_ratio_, size_t max_bytes_before_external_sort_, VolumePtr tmp_volume_, size_t min_free_disk_space_); @@ -36,6 +37,7 @@ protected: private: size_t max_bytes_before_remerge; + double remerge_lowered_memory_bytes_ratio; size_t max_bytes_before_external_sort; VolumePtr tmp_volume; size_t min_free_disk_space; diff --git a/src/Server/HTTPHandler.cpp b/src/Server/HTTPHandler.cpp index ed154ba65f2..472850950d4 100644 --- a/src/Server/HTTPHandler.cpp +++ b/src/Server/HTTPHandler.cpp @@ -36,6 +36,7 @@ #include #include #include +#include #include #include diff --git a/src/Server/InterserverIOHTTPHandler.cpp b/src/Server/InterserverIOHTTPHandler.cpp index f4385e8ebc4..973759bedd1 100644 --- a/src/Server/InterserverIOHTTPHandler.cpp +++ b/src/Server/InterserverIOHTTPHandler.cpp @@ -11,6 +11,7 @@ #include #include #include +#include #include "IServer.h" namespace DB diff --git a/src/Server/StaticRequestHandler.cpp b/src/Server/StaticRequestHandler.cpp index 7f63099c972..ad2c07ab0aa 100644 --- a/src/Server/StaticRequestHandler.cpp +++ b/src/Server/StaticRequestHandler.cpp @@ -10,6 +10,7 @@ #include #include #include +#include #include diff --git a/src/Server/TCPHandler.cpp b/src/Server/TCPHandler.cpp index e84c89bd165..6493302a807 100644 --- a/src/Server/TCPHandler.cpp +++ b/src/Server/TCPHandler.cpp @@ -83,7 +83,7 @@ void TCPHandler::runImpl() if (in->eof()) { - LOG_WARNING(log, "Client has not sent any data."); + LOG_INFO(log, "Client has not sent any data."); return; } @@ -102,7 +102,7 @@ void TCPHandler::runImpl() if (e.code() == ErrorCodes::ATTEMPT_TO_READ_AFTER_EOF) { - LOG_WARNING(log, "Client has gone away."); + LOG_INFO(log, "Client has gone away."); return; } diff --git a/src/Storages/Distributed/DirectoryMonitor.cpp b/src/Storages/Distributed/DirectoryMonitor.cpp index 47da0a10d9e..5d089eb9f80 
100644 --- a/src/Storages/Distributed/DirectoryMonitor.cpp +++ b/src/Storages/Distributed/DirectoryMonitor.cpp @@ -17,6 +17,7 @@ #include #include #include +#include #include #include diff --git a/src/Storages/Distributed/DistributedBlockOutputStream.cpp b/src/Storages/Distributed/DistributedBlockOutputStream.cpp index 2248c489679..d24967256a0 100644 --- a/src/Storages/Distributed/DistributedBlockOutputStream.cpp +++ b/src/Storages/Distributed/DistributedBlockOutputStream.cpp @@ -10,6 +10,7 @@ #include #include #include +#include #include #include #include diff --git a/src/Storages/Kafka/KafkaSettings.h b/src/Storages/Kafka/KafkaSettings.h index 53fbe8adc6b..1df10d16339 100644 --- a/src/Storages/Kafka/KafkaSettings.h +++ b/src/Storages/Kafka/KafkaSettings.h @@ -20,7 +20,7 @@ class ASTStorage; M(Milliseconds, kafka_poll_timeout_ms, 0, "Timeout for single poll from Kafka.", 0) \ /* default is min(max_block_size, kafka_max_block_size)*/ \ M(UInt64, kafka_poll_max_batch_size, 0, "Maximum amount of messages to be polled in a single Kafka poll.", 0) \ - /* default is = min_insert_block_size / kafka_num_consumers */ \ + /* default is = max_insert_block_size / kafka_num_consumers */ \ M(UInt64, kafka_max_block_size, 0, "Number of row collected by poll(s) for flushing data from Kafka.", 0) \ /* default is stream_flush_interval_ms */ \ M(Milliseconds, kafka_flush_interval_ms, 0, "Timeout for flushing data from Kafka.", 0) \ diff --git a/src/Storages/MergeTree/MergeTreeIOSettings.h b/src/Storages/MergeTree/MergeTreeIOSettings.h index db01f1ae9b7..41d9be07c70 100644 --- a/src/Storages/MergeTree/MergeTreeIOSettings.h +++ b/src/Storages/MergeTree/MergeTreeIOSettings.h @@ -1,6 +1,7 @@ #pragma once #include #include +#include namespace DB { @@ -19,13 +20,22 @@ struct MergeTreeWriterSettings { MergeTreeWriterSettings() = default; - MergeTreeWriterSettings(const Settings & global_settings, bool can_use_adaptive_granularity_, - size_t aio_threshold_, bool blocks_are_granules_size_ = false) - : min_compress_block_size(global_settings.min_compress_block_size) - , max_compress_block_size(global_settings.max_compress_block_size) + MergeTreeWriterSettings( + const Settings & global_settings, + const MergeTreeSettingsPtr & storage_settings, + bool can_use_adaptive_granularity_, + size_t aio_threshold_, + bool blocks_are_granules_size_ = false) + : min_compress_block_size( + storage_settings->min_compress_block_size ? storage_settings->min_compress_block_size : global_settings.min_compress_block_size) + , max_compress_block_size( + storage_settings->max_compress_block_size ? 
storage_settings->max_compress_block_size + : global_settings.max_compress_block_size) , aio_threshold(aio_threshold_) , can_use_adaptive_granularity(can_use_adaptive_granularity_) - , blocks_are_granules_size(blocks_are_granules_size_) {} + , blocks_are_granules_size(blocks_are_granules_size_) + { + } size_t min_compress_block_size; size_t max_compress_block_size; diff --git a/src/Storages/MergeTree/MergeTreeSettings.cpp b/src/Storages/MergeTree/MergeTreeSettings.cpp index 15ff62e0aa6..e77668e8900 100644 --- a/src/Storages/MergeTree/MergeTreeSettings.cpp +++ b/src/Storages/MergeTree/MergeTreeSettings.cpp @@ -4,6 +4,7 @@ #include #include #include +#include namespace DB diff --git a/src/Storages/MergeTree/MergeTreeSettings.h b/src/Storages/MergeTree/MergeTreeSettings.h index b2889af4f11..2f3931786a6 100644 --- a/src/Storages/MergeTree/MergeTreeSettings.h +++ b/src/Storages/MergeTree/MergeTreeSettings.h @@ -17,6 +17,8 @@ struct Settings; #define LIST_OF_MERGE_TREE_SETTINGS(M) \ + M(UInt64, min_compress_block_size, 0, "When granule is written, compress the data in buffer if the size of pending uncompressed data is larger or equal than the specified threshold. If this setting is not set, the corresponding global setting is used.", 0) \ + M(UInt64, max_compress_block_size, 0, "Compress the pending uncompressed data in buffer if its size is larger or equal than the specified threshold. Block of data will be compressed even if the current granule is not finished. If this setting is not set, the corresponding global setting is used.", 0) \ M(UInt64, index_granularity, 8192, "How many rows correspond to one primary key value.", 0) \ \ /** Data storing format settings. */ \ diff --git a/src/Storages/MergeTree/MergedBlockOutputStream.cpp b/src/Storages/MergeTree/MergedBlockOutputStream.cpp index 42d310da485..00a4c37c60d 100644 --- a/src/Storages/MergeTree/MergedBlockOutputStream.cpp +++ b/src/Storages/MergeTree/MergedBlockOutputStream.cpp @@ -48,6 +48,7 @@ MergedBlockOutputStream::MergedBlockOutputStream( { MergeTreeWriterSettings writer_settings( storage.global_context.getSettings(), + storage.getSettings(), data_part->index_granularity_info.is_adaptive, aio_threshold, blocks_are_granules_size); diff --git a/src/Storages/MergeTree/MergedColumnOnlyOutputStream.cpp b/src/Storages/MergeTree/MergedColumnOnlyOutputStream.cpp index 8ce4ea126ed..12b14d13656 100644 --- a/src/Storages/MergeTree/MergedColumnOnlyOutputStream.cpp +++ b/src/Storages/MergeTree/MergedColumnOnlyOutputStream.cpp @@ -22,8 +22,11 @@ MergedColumnOnlyOutputStream::MergedColumnOnlyOutputStream( , header(header_) { const auto & global_settings = data_part->storage.global_context.getSettings(); + const auto & storage_settings = data_part->storage.getSettings(); + MergeTreeWriterSettings writer_settings( global_settings, + storage_settings, index_granularity_info ? 
index_granularity_info->is_adaptive : data_part->storage.canUseAdaptiveGranularity(), global_settings.min_bytes_to_use_direct_io); diff --git a/src/Storages/RabbitMQ/StorageRabbitMQ.cpp b/src/Storages/RabbitMQ/StorageRabbitMQ.cpp index 1c3b1bbd99c..f41c4805d24 100644 --- a/src/Storages/RabbitMQ/StorageRabbitMQ.cpp +++ b/src/Storages/RabbitMQ/StorageRabbitMQ.cpp @@ -9,6 +9,7 @@ #include #include #include +#include #include #include #include diff --git a/src/Storages/RabbitMQ/StorageRabbitMQ.h b/src/Storages/RabbitMQ/StorageRabbitMQ.h index f4bb3215b55..a46da6072af 100644 --- a/src/Storages/RabbitMQ/StorageRabbitMQ.h +++ b/src/Storages/RabbitMQ/StorageRabbitMQ.h @@ -2,7 +2,6 @@ #include #include -#include #include #include #include @@ -19,6 +18,8 @@ namespace DB { +class Context; + using ChannelPtr = std::shared_ptr; class StorageRabbitMQ final: public ext::shared_ptr_helper, public IStorage diff --git a/src/Storages/StorageFile.cpp b/src/Storages/StorageFile.cpp index 13e8af42081..3ad9384ecfb 100644 --- a/src/Storages/StorageFile.cpp +++ b/src/Storages/StorageFile.cpp @@ -528,7 +528,10 @@ BlockOutputStreamPtr StorageFile::write( std::string path; if (!paths.empty()) + { path = paths[0]; + Poco::File(Poco::Path(path).makeParent()).createDirectories(); + } return std::make_shared(*this, metadata_snapshot, chooseCompressionMethod(path, compression_method), context, diff --git a/src/Storages/StorageMaterializeMySQL.cpp b/src/Storages/StorageMaterializeMySQL.cpp index 9be285adc69..721221e3fdc 100644 --- a/src/Storages/StorageMaterializeMySQL.cpp +++ b/src/Storages/StorageMaterializeMySQL.cpp @@ -21,12 +21,13 @@ #include #include +#include #include namespace DB { -StorageMaterializeMySQL::StorageMaterializeMySQL(const StoragePtr & nested_storage_, const DatabaseMaterializeMySQL * database_) +StorageMaterializeMySQL::StorageMaterializeMySQL(const StoragePtr & nested_storage_, const IDatabase * database_) : StorageProxy(nested_storage_->getStorageID()), nested_storage(nested_storage_), database(database_) { auto nested_memory_metadata = nested_storage->getInMemoryMetadata(); @@ -45,7 +46,7 @@ Pipe StorageMaterializeMySQL::read( unsigned int num_streams) { /// If the background synchronization thread has exception. - database->rethrowExceptionIfNeed(); + rethrowSyncExceptionIfNeed(database); NameSet column_names_set = NameSet(column_names.begin(), column_names.end()); auto lock = nested_storage->lockForShare(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); @@ -106,7 +107,7 @@ Pipe StorageMaterializeMySQL::read( NamesAndTypesList StorageMaterializeMySQL::getVirtuals() const { /// If the background synchronization thread has exception. 
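/// Editor's note (not part of the original patch): the change below replaces the
/// DatabaseMaterializeMySQL member call with the free function rethrowSyncExceptionIfNeed(),
/// matching the constructor change above from const DatabaseMaterializeMySQL * to
/// const IDatabase *, so the storage no longer depends on the concrete database type.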
- database->rethrowExceptionIfNeed(); + rethrowSyncExceptionIfNeed(database); return nested_storage->getVirtuals(); } diff --git a/src/Storages/StorageMaterializeMySQL.h b/src/Storages/StorageMaterializeMySQL.h index ff86c7abdc2..ea90c1ffc9e 100644 --- a/src/Storages/StorageMaterializeMySQL.h +++ b/src/Storages/StorageMaterializeMySQL.h @@ -5,7 +5,6 @@ #if USE_MYSQL #include -#include namespace DB { @@ -21,7 +20,7 @@ class StorageMaterializeMySQL final : public ext::shared_ptr_helperdrop(); } + +private: [[noreturn]] void throwNotAllowed() const { throw Exception("This method is not allowed for MaterializeMySQL", ErrorCodes::NOT_IMPLEMENTED); } StoragePtr nested_storage; - const DatabaseMaterializeMySQL * database; + const IDatabase * database; }; } diff --git a/src/Storages/StorageMemory.cpp b/src/Storages/StorageMemory.cpp index 3c45ef84dd6..93f00206e6b 100644 --- a/src/Storages/StorageMemory.cpp +++ b/src/Storages/StorageMemory.cpp @@ -23,7 +23,7 @@ namespace ErrorCodes class MemorySource : public SourceWithProgress { - using InitializerFunc = std::function &)>; + using InitializerFunc = std::function &)>; public: /// Blocks are stored in std::list which may be appended in another thread. /// We use pointer to the beginning of the list and its current size. @@ -32,17 +32,15 @@ public: MemorySource( Names column_names_, - BlocksList::const_iterator first_, - size_t num_blocks_, const StorageMemory & storage, const StorageMetadataPtr & metadata_snapshot, - std::shared_ptr data_, - InitializerFunc initializer_func_ = [](BlocksList::const_iterator &, size_t &, std::shared_ptr &) {}) + std::shared_ptr data_, + std::shared_ptr> parallel_execution_index_, + InitializerFunc initializer_func_ = {}) : SourceWithProgress(metadata_snapshot->getSampleBlockForColumns(column_names_, storage.getVirtuals(), storage.getStorageID())) , column_names(std::move(column_names_)) - , current_it(first_) - , num_blocks(num_blocks_) , data(data_) + , parallel_execution_index(parallel_execution_index_) , initializer_func(std::move(initializer_func_)) { } @@ -52,16 +50,20 @@ public: protected: Chunk generate() override { - if (!postponed_init_done) + if (initializer_func) { - initializer_func(current_it, num_blocks, data); - postponed_init_done = true; + initializer_func(data); + initializer_func = {}; } - if (current_block_idx == num_blocks) - return {}; + size_t current_index = getAndIncrementExecutionIndex(); - const Block & src = *current_it; + if (current_index >= data->size()) + { + return {}; + } + + const Block & src = (*data)[current_index]; Columns columns; columns.reserve(column_names.size()); @@ -69,20 +71,26 @@ protected: for (const auto & name : column_names) columns.push_back(src.getByName(name).column); - if (++current_block_idx < num_blocks) - ++current_it; - return Chunk(std::move(columns), src.rows()); } private: - const Names column_names; - BlocksList::const_iterator current_it; - size_t num_blocks; - size_t current_block_idx = 0; + size_t getAndIncrementExecutionIndex() + { + if (parallel_execution_index) + { + return (*parallel_execution_index)++; + } + else + { + return execution_index++; + } + } - std::shared_ptr data; - bool postponed_init_done = false; + const Names column_names; + size_t execution_index = 0; + std::shared_ptr data; + std::shared_ptr> parallel_execution_index; InitializerFunc initializer_func; }; @@ -107,7 +115,7 @@ public: metadata_snapshot->check(block, true); { std::lock_guard lock(storage.mutex); - auto new_data = std::make_unique(*(storage.data.get())); + auto 
new_data = std::make_unique(*(storage.data.get())); new_data->push_back(block); storage.data.set(std::move(new_data)); @@ -123,7 +131,7 @@ private: StorageMemory::StorageMemory(const StorageID & table_id_, ColumnsDescription columns_description_, ConstraintsDescription constraints_) - : IStorage(table_id_), data(std::make_unique()) + : IStorage(table_id_), data(std::make_unique()) { StorageInMemoryMetadata storage_metadata; storage_metadata.setColumns(std::move(columns_description_)); @@ -155,21 +163,17 @@ Pipe StorageMemory::read( return Pipe(std::make_shared( column_names, - data.get()->end(), - 0, *this, metadata_snapshot, - data.get(), - [this](BlocksList::const_iterator & current_it, size_t & num_blocks, std::shared_ptr & current_data) + nullptr /* data */, + nullptr /* parallel execution index */, + [this](std::shared_ptr & data_to_initialize) { - current_data = data.get(); - current_it = current_data->begin(); - num_blocks = current_data->size(); + data_to_initialize = data.get(); })); } auto current_data = data.get(); - size_t size = current_data->size(); if (num_streams > size) @@ -177,23 +181,11 @@ Pipe StorageMemory::read( Pipes pipes; - BlocksList::const_iterator it = current_data->begin(); + auto parallel_execution_index = std::make_shared>(0); - size_t offset = 0; for (size_t stream = 0; stream < num_streams; ++stream) { - size_t next_offset = (stream + 1) * size / num_streams; - size_t num_blocks = next_offset - offset; - - assert(num_blocks > 0); - - pipes.emplace_back(std::make_shared(column_names, it, num_blocks, *this, metadata_snapshot, current_data)); - - while (offset < next_offset) - { - ++it; - ++offset; - } + pipes.emplace_back(std::make_shared(column_names, *this, metadata_snapshot, current_data, parallel_execution_index)); } return Pipe::unitePipes(std::move(pipes)); @@ -208,7 +200,7 @@ BlockOutputStreamPtr StorageMemory::write(const ASTPtr & /*query*/, const Storag void StorageMemory::drop() { - data.set(std::make_unique()); + data.set(std::make_unique()); total_size_bytes.store(0, std::memory_order_relaxed); total_size_rows.store(0, std::memory_order_relaxed); } @@ -233,7 +225,7 @@ void StorageMemory::mutate(const MutationCommands & commands, const Context & co auto in = interpreter->execute(); in->readPrefix(); - BlocksList out; + Blocks out; Block block; while ((block = in->read())) { @@ -241,17 +233,17 @@ void StorageMemory::mutate(const MutationCommands & commands, const Context & co } in->readSuffix(); - std::unique_ptr new_data; + std::unique_ptr new_data; // all column affected if (interpreter->isAffectingAllColumns()) { - new_data = std::make_unique(out); + new_data = std::make_unique(out); } else { /// just some of the column affected, we need update it with new column - new_data = std::make_unique(*(data.get())); + new_data = std::make_unique(*(data.get())); auto data_it = new_data->begin(); auto out_it = out.begin(); @@ -284,7 +276,7 @@ void StorageMemory::mutate(const MutationCommands & commands, const Context & co void StorageMemory::truncate( const ASTPtr &, const StorageMetadataPtr &, const Context &, TableExclusiveLockHolder &) { - data.set(std::make_unique()); + data.set(std::make_unique()); total_size_bytes.store(0, std::memory_order_relaxed); total_size_rows.store(0, std::memory_order_relaxed); } diff --git a/src/Storages/StorageMemory.h b/src/Storages/StorageMemory.h index e3730ca104f..6453e6a53e2 100644 --- a/src/Storages/StorageMemory.h +++ b/src/Storages/StorageMemory.h @@ -91,7 +91,7 @@ public: private: /// MultiVersion data storage, so 
that we can copy the list of blocks to readers. - MultiVersion data; + MultiVersion data; mutable std::mutex mutex; diff --git a/src/Storages/StorageMerge.cpp b/src/Storages/StorageMerge.cpp index f15ef8a5784..97f4ccd0bba 100644 --- a/src/Storages/StorageMerge.cpp +++ b/src/Storages/StorageMerge.cpp @@ -430,7 +430,15 @@ StorageMerge::StorageListWithLocks StorageMerge::getSelectedTables( DatabaseTablesIteratorPtr StorageMerge::getDatabaseIterator(const Context & context) const { - checkStackSize(); + try + { + checkStackSize(); + } + catch (Exception & e) + { + e.addMessage("while getting table iterator of Merge table. Maybe caused by two Merge tables that will endlessly try to read each other's data"); + throw; + } auto database = DatabaseCatalog::instance().getDatabase(source_database); auto table_name_match = [this](const String & table_name_) { return table_name_regexp.match(table_name_); }; return database->getTablesIterator(context, table_name_match); diff --git a/src/Storages/StorageReplicatedMergeTree.cpp b/src/Storages/StorageReplicatedMergeTree.cpp index e2bf1592659..31b04664b17 100644 --- a/src/Storages/StorageReplicatedMergeTree.cpp +++ b/src/Storages/StorageReplicatedMergeTree.cpp @@ -42,6 +42,7 @@ #include #include #include +#include #include #include diff --git a/src/Storages/StorageS3.cpp b/src/Storages/StorageS3.cpp index fff44bb1f4c..0465cfdffba 100644 --- a/src/Storages/StorageS3.cpp +++ b/src/Storages/StorageS3.cpp @@ -141,17 +141,18 @@ namespace public: StorageS3BlockOutputStream( const String & format, - UInt64 min_upload_part_size, const Block & sample_block_, const Context & context, const CompressionMethod compression_method, const std::shared_ptr & client, const String & bucket, - const String & key) + const String & key, + size_t min_upload_part_size, + size_t max_single_part_upload_size) : sample_block(sample_block_) { write_buf = wrapWriteBufferWithCompressionMethod( - std::make_unique(client, bucket, key, min_upload_part_size, true), compression_method, 3); + std::make_unique(client, bucket, key, min_upload_part_size, max_single_part_upload_size), compression_method, 3); writer = FormatFactory::instance().getOutput(format, *write_buf, sample_block, context); } @@ -192,6 +193,7 @@ StorageS3::StorageS3( const StorageID & table_id_, const String & format_name_, UInt64 min_upload_part_size_, + UInt64 max_single_part_upload_size_, const ColumnsDescription & columns_, const ConstraintsDescription & constraints_, const Context & context_, @@ -201,6 +203,7 @@ StorageS3::StorageS3( , global_context(context_.getGlobalContext()) , format_name(format_name_) , min_upload_part_size(min_upload_part_size_) + , max_single_part_upload_size(max_single_part_upload_size_) , compression_method(compression_method_) , name(uri_.storage_name) { @@ -331,9 +334,15 @@ Pipe StorageS3::read( BlockOutputStreamPtr StorageS3::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & /*context*/) { return std::make_shared( - format_name, min_upload_part_size, metadata_snapshot->getSampleBlock(), - global_context, chooseCompressionMethod(uri.endpoint, compression_method), - client, uri.bucket, uri.key); + format_name, + metadata_snapshot->getSampleBlock(), + global_context, + chooseCompressionMethod(uri.endpoint, compression_method), + client, + uri.bucket, + uri.key, + min_upload_part_size, + max_single_part_upload_size); } void registerStorageS3Impl(const String & name, StorageFactory & factory) @@ -362,6 +371,7 @@ void registerStorageS3Impl(const String & 
name, StorageFactory & factory) } UInt64 min_upload_part_size = args.local_context.getSettingsRef().s3_min_upload_part_size; + UInt64 max_single_part_upload_size = args.local_context.getSettingsRef().s3_max_single_part_upload_size; String compression_method; String format_name; @@ -383,6 +393,7 @@ void registerStorageS3Impl(const String & name, StorageFactory & factory) args.table_id, format_name, min_upload_part_size, + max_single_part_upload_size, args.columns, args.constraints, args.context, diff --git a/src/Storages/StorageS3.h b/src/Storages/StorageS3.h index 96f0cf02e88..f436fb85c90 100644 --- a/src/Storages/StorageS3.h +++ b/src/Storages/StorageS3.h @@ -31,6 +31,7 @@ public: const StorageID & table_id_, const String & format_name_, UInt64 min_upload_part_size_, + UInt64 max_single_part_upload_size_, const ColumnsDescription & columns_, const ConstraintsDescription & constraints_, const Context & context_, @@ -59,7 +60,8 @@ private: const Context & global_context; String format_name; - UInt64 min_upload_part_size; + size_t min_upload_part_size; + size_t max_single_part_upload_size; String compression_method; std::shared_ptr client; String name; diff --git a/src/Storages/StorageURL.cpp b/src/Storages/StorageURL.cpp index 8dcd549f9c8..00903abee59 100644 --- a/src/Storages/StorageURL.cpp +++ b/src/Storages/StorageURL.cpp @@ -10,6 +10,8 @@ #include #include #include +#include +#include #include diff --git a/src/Storages/StorageURL.h b/src/Storages/StorageURL.h index 78d972c6e7e..21b2e3e27a1 100644 --- a/src/Storages/StorageURL.h +++ b/src/Storages/StorageURL.h @@ -4,12 +4,15 @@ #include #include #include -#include +#include #include namespace DB { + +struct ConnectionTimeouts; + /** * This class represents table engine for external urls. * It sends HTTP GET to server when select is called and diff --git a/src/Storages/StorageXDBC.cpp b/src/Storages/StorageXDBC.cpp index 3aca884d15a..f2f8cdb23f5 100644 --- a/src/Storages/StorageXDBC.cpp +++ b/src/Storages/StorageXDBC.cpp @@ -3,6 +3,7 @@ #include #include #include +#include #include #include #include diff --git a/src/Storages/System/StorageSystemDistributionQueue.cpp b/src/Storages/System/StorageSystemDistributionQueue.cpp index edba7c13b1c..db649e7e1ba 100644 --- a/src/Storages/System/StorageSystemDistributionQueue.cpp +++ b/src/Storages/System/StorageSystemDistributionQueue.cpp @@ -9,6 +9,7 @@ #include #include #include +#include #include namespace DB diff --git a/src/Storages/getStructureOfRemoteTable.cpp b/src/Storages/getStructureOfRemoteTable.cpp index a987e3d4e8a..de5f3924ca9 100644 --- a/src/Storages/getStructureOfRemoteTable.cpp +++ b/src/Storages/getStructureOfRemoteTable.cpp @@ -71,7 +71,7 @@ ColumnsDescription getStructureOfRemoteTableInShard( }; /// Execute remote query without restrictions (because it's not real user query, but part of implementation) - auto input = std::make_shared(shard_info.pool, query, sample_block, new_context); + auto input = std::make_shared(shard_info.pool, query, sample_block, *new_context); input->setPoolMode(PoolMode::GET_ONE); if (!table_func_ptr) input->setMainTable(table_id); diff --git a/src/Storages/tests/gtest_transform_query_for_external_database.cpp b/src/Storages/tests/gtest_transform_query_for_external_database.cpp index 48811c1c86a..835aebab900 100644 --- a/src/Storages/tests/gtest_transform_query_for_external_database.cpp +++ b/src/Storages/tests/gtest_transform_query_for_external_database.cpp @@ -7,6 +7,7 @@ #include #include #include +#include #include #include #include diff --git 
a/src/TableFunctions/ITableFunctionXDBC.cpp b/src/TableFunctions/ITableFunctionXDBC.cpp index 67d1257fe4c..e04a86b5abf 100644 --- a/src/TableFunctions/ITableFunctionXDBC.cpp +++ b/src/TableFunctions/ITableFunctionXDBC.cpp @@ -5,6 +5,7 @@ #include #include #include +#include #include #include #include diff --git a/src/TableFunctions/TableFunctionS3.cpp b/src/TableFunctions/TableFunctionS3.cpp index dfe1cf6e792..cc7877b204e 100644 --- a/src/TableFunctions/TableFunctionS3.cpp +++ b/src/TableFunctions/TableFunctionS3.cpp @@ -67,6 +67,7 @@ StoragePtr TableFunctionS3::executeImpl(const ASTPtr & /*ast_function*/, const C Poco::URI uri (filename); S3::URI s3_uri (uri); UInt64 min_upload_part_size = context.getSettingsRef().s3_min_upload_part_size; + UInt64 max_single_part_upload_size = context.getSettingsRef().s3_max_single_part_upload_size; StoragePtr storage = StorageS3::create( s3_uri, @@ -75,6 +76,7 @@ StoragePtr TableFunctionS3::executeImpl(const ASTPtr & /*ast_function*/, const C StorageID(getDatabaseName(), table_name), format, min_upload_part_size, + max_single_part_upload_size, getActualTableStructure(context), ConstraintsDescription{}, const_cast(context), diff --git a/tests/CMakeLists.txt b/tests/CMakeLists.txt index 3ef09e5658f..9e5a2e29dc9 100644 --- a/tests/CMakeLists.txt +++ b/tests/CMakeLists.txt @@ -15,6 +15,16 @@ install ( COMPONENT clickhouse PATTERN "CMakeLists.txt" EXCLUDE PATTERN ".gitignore" EXCLUDE + PATTERN "top_level_domains" EXCLUDE +) + +# Dereference symlink +get_filename_component(TOP_LEVEL_DOMAINS_ABS_DIR config/top_level_domains REALPATH) +install ( + DIRECTORY "${TOP_LEVEL_DOMAINS_ABS_DIR}" + DESTINATION ${CMAKE_INSTALL_DATAROOTDIR}/clickhouse-test/config + USE_SOURCE_PERMISSIONS + COMPONENT clickhouse ) install (FILES server-test.xml DESTINATION ${CLICKHOUSE_ETC_DIR}/clickhouse-server COMPONENT clickhouse) diff --git a/tests/config/config.d/top_level_domains_lists.xml b/tests/config/config.d/top_level_domains_lists.xml new file mode 100644 index 00000000000..7b5e6a5638a --- /dev/null +++ b/tests/config/config.d/top_level_domains_lists.xml @@ -0,0 +1,5 @@ + + + public_suffix_list.dat + + diff --git a/tests/config/config.d/top_level_domains_path.xml b/tests/config/config.d/top_level_domains_path.xml new file mode 100644 index 00000000000..0ab836e5818 --- /dev/null +++ b/tests/config/config.d/top_level_domains_path.xml @@ -0,0 +1,3 @@ + + /etc/clickhouse-server/top_level_domains/ + diff --git a/tests/config/install.sh b/tests/config/install.sh index 064a9bee8aa..416cc21893b 100755 --- a/tests/config/install.sh +++ b/tests/config/install.sh @@ -31,6 +31,8 @@ ln -sf $SRC_PATH/config.d/test_cluster_with_incorrect_pw.xml $DEST_SERVER_PATH/c ln -sf $SRC_PATH/config.d/test_keeper_port.xml $DEST_SERVER_PATH/config.d/ ln -sf $SRC_PATH/config.d/logging_no_rotate.xml $DEST_SERVER_PATH/config.d/ ln -sf $SRC_PATH/config.d/tcp_with_proxy.xml $DEST_SERVER_PATH/config.d/ +ln -sf $SRC_PATH/config.d/top_level_domains_lists.xml $DEST_SERVER_PATH/config.d/ +ln -sf $SRC_PATH/config.d/top_level_domains_path.xml $DEST_SERVER_PATH/config.d/ ln -sf $SRC_PATH/users.d/log_queries.xml $DEST_SERVER_PATH/users.d/ ln -sf $SRC_PATH/users.d/readonly.xml $DEST_SERVER_PATH/users.d/ ln -sf $SRC_PATH/users.d/access_management.xml $DEST_SERVER_PATH/users.d/ @@ -42,6 +44,8 @@ ln -sf $SRC_PATH/strings_dictionary.xml $DEST_SERVER_PATH/ ln -sf $SRC_PATH/decimals_dictionary.xml $DEST_SERVER_PATH/ ln -sf $SRC_PATH/executable_dictionary.xml $DEST_SERVER_PATH/ +ln -sf $SRC_PATH/top_level_domains 
$DEST_SERVER_PATH/ + ln -sf $SRC_PATH/server.key $DEST_SERVER_PATH/ ln -sf $SRC_PATH/server.crt $DEST_SERVER_PATH/ ln -sf $SRC_PATH/dhparam.pem $DEST_SERVER_PATH/ diff --git a/tests/config/top_level_domains b/tests/config/top_level_domains new file mode 120000 index 00000000000..7e12ab4ba2c --- /dev/null +++ b/tests/config/top_level_domains @@ -0,0 +1 @@ +../../docker/test/performance-comparison/config/top_level_domains \ No newline at end of file diff --git a/tests/integration/test_log_family_s3/configs/config.xml b/tests/integration/test_log_family_s3/configs/config.xml index 63b4d951eb7..5b9b5b5843a 100644 --- a/tests/integration/test_log_family_s3/configs/config.xml +++ b/tests/integration/test_log_family_s3/configs/config.xml @@ -8,18 +8,6 @@ 10 - - - - s3 - http://minio1:9001/root/data/ - minio - minio123 - - - - - 9000 127.0.0.1 diff --git a/tests/integration/test_log_family_s3/configs/minio.xml b/tests/integration/test_log_family_s3/configs/minio.xml index 6c9329a2bbc..7337be1ad94 100644 --- a/tests/integration/test_log_family_s3/configs/minio.xml +++ b/tests/integration/test_log_family_s3/configs/minio.xml @@ -2,12 +2,13 @@ - + s3 http://minio1:9001/root/data/ minio minio123 - + true + diff --git a/tests/integration/test_log_family_s3/test.py b/tests/integration/test_log_family_s3/test.py index 8b262bf6760..c23e7545b27 100644 --- a/tests/integration/test_log_family_s3/test.py +++ b/tests/integration/test_log_family_s3/test.py @@ -24,34 +24,37 @@ def cluster(): cluster.shutdown() +def assert_objects_count(cluster, objects_count, path='data/'): + minio = cluster.minio_client + s3_objects = list(minio.list_objects(cluster.minio_bucket, path)) + if objects_count != len(s3_objects): + for s3_object in s3_objects: + object_meta = minio.stat_object(cluster.minio_bucket, s3_object.object_name) + logging.info("Existing S3 object: %s", str(object_meta)) + assert objects_count == len(s3_objects) + + @pytest.mark.parametrize( "log_engine,files_overhead,files_overhead_per_insert", [("TinyLog", 1, 1), ("Log", 2, 1), ("StripeLog", 1, 2)]) def test_log_family_s3(cluster, log_engine, files_overhead, files_overhead_per_insert): node = cluster.instances["node"] - minio = cluster.minio_client - node.query("CREATE TABLE s3_test (id UInt64) Engine={}".format(log_engine)) + node.query("CREATE TABLE s3_test (id UInt64) ENGINE={} SETTINGS disk = 's3'".format(log_engine)) node.query("INSERT INTO s3_test SELECT number FROM numbers(5)") assert node.query("SELECT * FROM s3_test") == "0\n1\n2\n3\n4\n" - print(list(minio.list_objects(cluster.minio_bucket, 'data/')), file=sys.stderr) - assert len(list(minio.list_objects(cluster.minio_bucket, 'data/'))) == files_overhead_per_insert + files_overhead + assert_objects_count(cluster, files_overhead_per_insert + files_overhead) node.query("INSERT INTO s3_test SELECT number + 5 FROM numbers(3)") assert node.query("SELECT * FROM s3_test order by id") == "0\n1\n2\n3\n4\n5\n6\n7\n" - print(list(minio.list_objects(cluster.minio_bucket, 'data/')), file=sys.stderr) - assert len( - list(minio.list_objects(cluster.minio_bucket, 'data/'))) == files_overhead_per_insert * 2 + files_overhead + assert_objects_count(cluster, files_overhead_per_insert * 2 + files_overhead) node.query("INSERT INTO s3_test SELECT number + 8 FROM numbers(1)") assert node.query("SELECT * FROM s3_test order by id") == "0\n1\n2\n3\n4\n5\n6\n7\n8\n" - print(list(minio.list_objects(cluster.minio_bucket, 'data/')), file=sys.stderr) - assert len( - list(minio.list_objects(cluster.minio_bucket, 'data/'))) == 
files_overhead_per_insert * 3 + files_overhead + assert_objects_count(cluster, files_overhead_per_insert * 3 + files_overhead) node.query("TRUNCATE TABLE s3_test") - print(list(minio.list_objects(cluster.minio_bucket, 'data/')), file=sys.stderr) - assert len(list(minio.list_objects(cluster.minio_bucket, 'data/'))) == 0 + assert_objects_count(cluster, 0) node.query("DROP TABLE s3_test") diff --git a/tests/integration/test_materialize_mysql_database/configs/users.xml b/tests/integration/test_materialize_mysql_database/configs/users.xml index f6df1c30fc4..4c167c06d63 100644 --- a/tests/integration/test_materialize_mysql_database/configs/users.xml +++ b/tests/integration/test_materialize_mysql_database/configs/users.xml @@ -3,7 +3,7 @@ 1 - 1 + Ordinary diff --git a/tests/integration/test_materialize_mysql_database/configs/users_db_atomic.xml b/tests/integration/test_materialize_mysql_database/configs/users_db_atomic.xml new file mode 100644 index 00000000000..3add72ec554 --- /dev/null +++ b/tests/integration/test_materialize_mysql_database/configs/users_db_atomic.xml @@ -0,0 +1,19 @@ + + + + + 1 + Atomic + + + + + + + + ::/0 + + default + + + diff --git a/tests/integration/test_materialize_mysql_database/materialize_with_ddl.py b/tests/integration/test_materialize_mysql_database/materialize_with_ddl.py index 56b6d8b3a15..448e17de405 100644 --- a/tests/integration/test_materialize_mysql_database/materialize_with_ddl.py +++ b/tests/integration/test_materialize_mysql_database/materialize_with_ddl.py @@ -15,6 +15,7 @@ from multiprocessing.dummy import Pool def check_query(clickhouse_node, query, result_set, retry_count=60, interval_seconds=3): lastest_result = '' + for i in range(retry_count): try: lastest_result = clickhouse_node.query(query) @@ -35,6 +36,7 @@ def dml_with_materialize_mysql_database(clickhouse_node, mysql_node, service_nam clickhouse_node.query("DROP DATABASE IF EXISTS test_database") mysql_node.query("CREATE DATABASE test_database DEFAULT CHARACTER SET 'utf8'") # existed before the mapping was created + mysql_node.query("CREATE TABLE test_database.test_table_1 (" "`key` INT NOT NULL PRIMARY KEY, " "unsigned_tiny_int TINYINT UNSIGNED, tiny_int TINYINT, " @@ -51,9 +53,10 @@ def dml_with_materialize_mysql_database(clickhouse_node, mysql_node, service_nam "_date Date, _datetime DateTime, _timestamp TIMESTAMP, _bool BOOLEAN) ENGINE = InnoDB;") # it already has some data - mysql_node.query( - "INSERT INTO test_database.test_table_1 VALUES(1, 1, -1, 2, -2, 3, -3, 4, -4, 5, -5, 6, -6, 3.2, -3.2, 3.4, -3.4, 'varchar', 'char', " - "'2020-01-01', '2020-01-01 00:00:00', '2020-01-01 00:00:00', true);") + mysql_node.query(""" + INSERT INTO test_database.test_table_1 VALUES(1, 1, -1, 2, -2, 3, -3, 4, -4, 5, -5, 6, -6, 3.2, -3.2, 3.4, -3.4, 'varchar', 'char', + '2020-01-01', '2020-01-01 00:00:00', '2020-01-01 00:00:00', true); + """) clickhouse_node.query( "CREATE DATABASE test_database ENGINE = MaterializeMySQL('{}:3306', 'test_database', 'root', 'clickhouse')".format( @@ -65,9 +68,10 @@ def dml_with_materialize_mysql_database(clickhouse_node, mysql_node, service_nam "1\t1\t-1\t2\t-2\t3\t-3\t4\t-4\t5\t-5\t6\t-6\t3.2\t-3.2\t3.4\t-3.4\tvarchar\tchar\t2020-01-01\t" "2020-01-01 00:00:00\t2020-01-01 00:00:00\t1\n") - mysql_node.query( - "INSERT INTO test_database.test_table_1 VALUES(2, 1, -1, 2, -2, 3, -3, 4, -4, 5, -5, 6, -6, 3.2, -3.2, 3.4, -3.4, 'varchar', 'char', " - "'2020-01-01', '2020-01-01 00:00:00', '2020-01-01 00:00:00', false);") + mysql_node.query(""" + INSERT INTO 
test_database.test_table_1 VALUES(2, 1, -1, 2, -2, 3, -3, 4, -4, 5, -5, 6, -6, 3.2, -3.2, 3.4, -3.4, 'varchar', 'char', + '2020-01-01', '2020-01-01 00:00:00', '2020-01-01 00:00:00', false); + """) check_query(clickhouse_node, "SELECT * FROM test_database.test_table_1 ORDER BY key FORMAT TSV", "1\t1\t-1\t2\t-2\t3\t-3\t4\t-4\t5\t-5\t6\t-6\t3.2\t-3.2\t3.4\t-3.4\tvarchar\tchar\t2020-01-01\t" @@ -76,14 +80,16 @@ def dml_with_materialize_mysql_database(clickhouse_node, mysql_node, service_nam mysql_node.query("UPDATE test_database.test_table_1 SET unsigned_tiny_int = 2 WHERE `key` = 1") - check_query(clickhouse_node, "SELECT key, unsigned_tiny_int, tiny_int, unsigned_small_int," - " small_int, unsigned_medium_int, medium_int, unsigned_int, _int, unsigned_integer, _integer, " - " unsigned_bigint, _bigint, unsigned_float, _float, unsigned_double, _double, _varchar, _char, " - " _date, _datetime, /* exclude it, because ON UPDATE CURRENT_TIMESTAMP _timestamp, */ " - " _bool FROM test_database.test_table_1 ORDER BY key FORMAT TSV", - "1\t2\t-1\t2\t-2\t3\t-3\t4\t-4\t5\t-5\t6\t-6\t3.2\t-3.2\t3.4\t-3.4\tvarchar\tchar\t2020-01-01\t" - "2020-01-01 00:00:00\t1\n2\t1\t-1\t2\t-2\t3\t-3\t4\t-4\t5\t-5\t6\t-6\t3.2\t-3.2\t3.4\t-3.4\t" - "varchar\tchar\t2020-01-01\t2020-01-01 00:00:00\t0\n") + check_query(clickhouse_node, """ + SELECT key, unsigned_tiny_int, tiny_int, unsigned_small_int, + small_int, unsigned_medium_int, medium_int, unsigned_int, _int, unsigned_integer, _integer, + unsigned_bigint, _bigint, unsigned_float, _float, unsigned_double, _double, _varchar, _char, + _date, _datetime, /* exclude it, because ON UPDATE CURRENT_TIMESTAMP _timestamp, */ + _bool FROM test_database.test_table_1 ORDER BY key FORMAT TSV + """, + "1\t2\t-1\t2\t-2\t3\t-3\t4\t-4\t5\t-5\t6\t-6\t3.2\t-3.2\t3.4\t-3.4\tvarchar\tchar\t2020-01-01\t" + "2020-01-01 00:00:00\t1\n2\t1\t-1\t2\t-2\t3\t-3\t4\t-4\t5\t-5\t6\t-6\t3.2\t-3.2\t3.4\t-3.4\t" + "varchar\tchar\t2020-01-01\t2020-01-01 00:00:00\t0\n") # update primary key mysql_node.query("UPDATE test_database.test_table_1 SET `key` = 3 WHERE `unsigned_tiny_int` = 2") @@ -556,6 +562,12 @@ def err_sync_user_privs_with_materialize_mysql_database(clickhouse_node, mysql_n assert 'MySQL SYNC USER ACCESS ERR:' in str(exception.value) assert "priv_err_db" not in clickhouse_node.query("SHOW DATABASES") + mysql_node.query("GRANT SELECT ON priv_err_db.* TO 'test'@'%'") + time.sleep(3) + clickhouse_node.query("ATTACH DATABASE priv_err_db") + clickhouse_node.query("DROP DATABASE priv_err_db") + mysql_node.query("REVOKE SELECT ON priv_err_db.* FROM 'test'@'%'") + mysql_node.query("DROP DATABASE priv_err_db;") mysql_node.query("DROP USER 'test'@'%'") diff --git a/tests/integration/test_materialize_mysql_database/test.py b/tests/integration/test_materialize_mysql_database/test.py index ed37e3d7502..1b7aa041540 100644 --- a/tests/integration/test_materialize_mysql_database/test.py +++ b/tests/integration/test_materialize_mysql_database/test.py @@ -14,7 +14,10 @@ from . 
import materialize_with_ddl DOCKER_COMPOSE_PATH = get_docker_compose_path() cluster = ClickHouseCluster(__file__) -clickhouse_node = cluster.add_instance('node1', user_configs=["configs/users.xml"], with_mysql=False, stay_alive=True) + +node_db_ordinary = cluster.add_instance('node1', user_configs=["configs/users.xml"], with_mysql=False, stay_alive=True) +node_db_atomic = cluster.add_instance('node2', user_configs=["configs/users_db_atomic.xml"], with_mysql=False, stay_alive=True) + @pytest.fixture(scope="module") def started_cluster(): @@ -119,39 +122,30 @@ def started_mysql_8_0(): '--remove-orphans']) -def test_materialize_database_dml_with_mysql_5_7(started_cluster, started_mysql_5_7): +@pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) +def test_materialize_database_dml_with_mysql_5_7(started_cluster, started_mysql_5_7, clickhouse_node): materialize_with_ddl.dml_with_materialize_mysql_database(clickhouse_node, started_mysql_5_7, "mysql1") materialize_with_ddl.materialize_mysql_database_with_datetime_and_decimal(clickhouse_node, started_mysql_5_7, "mysql1") - -def test_materialize_database_dml_with_mysql_8_0(started_cluster, started_mysql_8_0): +@pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) +def test_materialize_database_dml_with_mysql_8_0(started_cluster, started_mysql_8_0, clickhouse_node): materialize_with_ddl.dml_with_materialize_mysql_database(clickhouse_node, started_mysql_8_0, "mysql8_0") materialize_with_ddl.materialize_mysql_database_with_datetime_and_decimal(clickhouse_node, started_mysql_8_0, "mysql8_0") +@pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) +def test_materialize_database_ddl_with_mysql_5_7(started_cluster, started_mysql_5_7, clickhouse_node): + materialize_with_ddl.drop_table_with_materialize_mysql_database(clickhouse_node, started_mysql_5_7, "mysql1") + materialize_with_ddl.create_table_with_materialize_mysql_database(clickhouse_node, started_mysql_5_7, "mysql1") + materialize_with_ddl.rename_table_with_materialize_mysql_database(clickhouse_node, started_mysql_5_7, "mysql1") + materialize_with_ddl.alter_add_column_with_materialize_mysql_database(clickhouse_node, started_mysql_5_7, "mysql1") + materialize_with_ddl.alter_drop_column_with_materialize_mysql_database(clickhouse_node, started_mysql_5_7, "mysql1") + # mysql 5.7 cannot support alter rename column + # materialize_with_ddl.alter_rename_column_with_materialize_mysql_database(clickhouse_node, started_mysql_5_7, "mysql1") + materialize_with_ddl.alter_rename_table_with_materialize_mysql_database(clickhouse_node, started_mysql_5_7, "mysql1") + materialize_with_ddl.alter_modify_column_with_materialize_mysql_database(clickhouse_node, started_mysql_5_7, "mysql1") - -def test_materialize_database_ddl_with_mysql_5_7(started_cluster, started_mysql_5_7): - try: - materialize_with_ddl.drop_table_with_materialize_mysql_database(clickhouse_node, started_mysql_5_7, "mysql1") - materialize_with_ddl.create_table_with_materialize_mysql_database(clickhouse_node, started_mysql_5_7, "mysql1") - materialize_with_ddl.rename_table_with_materialize_mysql_database(clickhouse_node, started_mysql_5_7, "mysql1") - materialize_with_ddl.alter_add_column_with_materialize_mysql_database(clickhouse_node, started_mysql_5_7, - "mysql1") - materialize_with_ddl.alter_drop_column_with_materialize_mysql_database(clickhouse_node, started_mysql_5_7, - "mysql1") - # mysql 5.7 cannot support alter rename column - # 
materialize_with_ddl.alter_rename_column_with_materialize_mysql_database(clickhouse_node, started_mysql_5_7, "mysql1") - materialize_with_ddl.alter_rename_table_with_materialize_mysql_database(clickhouse_node, started_mysql_5_7, - "mysql1") - materialize_with_ddl.alter_modify_column_with_materialize_mysql_database(clickhouse_node, started_mysql_5_7, - "mysql1") - except: - print((clickhouse_node.query( - "select '\n', thread_id, query_id, arrayStringConcat(arrayMap(x -> concat(demangle(addressToSymbol(x)), '\n ', addressToLine(x)), trace), '\n') AS sym from system.stack_trace format TSVRaw"))) - raise - - -def test_materialize_database_ddl_with_mysql_8_0(started_cluster, started_mysql_8_0): +@pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) +def test_materialize_database_ddl_with_mysql_8_0(started_cluster, started_mysql_8_0, clickhouse_node): materialize_with_ddl.drop_table_with_materialize_mysql_database(clickhouse_node, started_mysql_8_0, "mysql8_0") materialize_with_ddl.create_table_with_materialize_mysql_database(clickhouse_node, started_mysql_8_0, "mysql8_0") materialize_with_ddl.rename_table_with_materialize_mysql_database(clickhouse_node, started_mysql_8_0, "mysql8_0") @@ -166,61 +160,72 @@ def test_materialize_database_ddl_with_mysql_8_0(started_cluster, started_mysql_ materialize_with_ddl.alter_modify_column_with_materialize_mysql_database(clickhouse_node, started_mysql_8_0, "mysql8_0") - -def test_materialize_database_ddl_with_empty_transaction_5_7(started_cluster, started_mysql_5_7): +@pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) +def test_materialize_database_ddl_with_empty_transaction_5_7(started_cluster, started_mysql_5_7, clickhouse_node): materialize_with_ddl.query_event_with_empty_transaction(clickhouse_node, started_mysql_5_7, "mysql1") - -def test_materialize_database_ddl_with_empty_transaction_8_0(started_cluster, started_mysql_8_0): +@pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) +def test_materialize_database_ddl_with_empty_transaction_8_0(started_cluster, started_mysql_8_0, clickhouse_node): materialize_with_ddl.query_event_with_empty_transaction(clickhouse_node, started_mysql_8_0, "mysql8_0") -def test_select_without_columns_5_7(started_cluster, started_mysql_5_7): +@pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) +def test_select_without_columns_5_7(started_cluster, started_mysql_5_7, clickhouse_node): materialize_with_ddl.select_without_columns(clickhouse_node, started_mysql_5_7, "mysql1") -def test_select_without_columns_8_0(started_cluster, started_mysql_8_0): +@pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) +def test_select_without_columns_8_0(started_cluster, started_mysql_8_0, clickhouse_node): materialize_with_ddl.select_without_columns(clickhouse_node, started_mysql_8_0, "mysql8_0") -def test_insert_with_modify_binlog_checksum_5_7(started_cluster, started_mysql_5_7): +@pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) +def test_insert_with_modify_binlog_checksum_5_7(started_cluster, started_mysql_5_7, clickhouse_node): materialize_with_ddl.insert_with_modify_binlog_checksum(clickhouse_node, started_mysql_5_7, "mysql1") -def test_insert_with_modify_binlog_checksum_8_0(started_cluster, started_mysql_8_0): +@pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) +def test_insert_with_modify_binlog_checksum_8_0(started_cluster, 
started_mysql_8_0, clickhouse_node): materialize_with_ddl.insert_with_modify_binlog_checksum(clickhouse_node, started_mysql_8_0, "mysql8_0") -def test_materialize_database_err_sync_user_privs_5_7(started_cluster, started_mysql_5_7): +@pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) +def test_materialize_database_err_sync_user_privs_5_7(started_cluster, started_mysql_5_7, clickhouse_node): materialize_with_ddl.err_sync_user_privs_with_materialize_mysql_database(clickhouse_node, started_mysql_5_7, "mysql1") -def test_materialize_database_err_sync_user_privs_8_0(started_cluster, started_mysql_8_0): + +@pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) +def test_materialize_database_err_sync_user_privs_8_0(started_cluster, started_mysql_8_0, clickhouse_node): materialize_with_ddl.err_sync_user_privs_with_materialize_mysql_database(clickhouse_node, started_mysql_8_0, "mysql8_0") - -def test_network_partition_5_7(started_cluster, started_mysql_5_7): +@pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) +def test_network_partition_5_7(started_cluster, started_mysql_5_7, clickhouse_node): materialize_with_ddl.network_partition_test(clickhouse_node, started_mysql_5_7, "mysql1") -def test_network_partition_8_0(started_cluster, started_mysql_8_0): +@pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) +def test_network_partition_8_0(started_cluster, started_mysql_8_0, clickhouse_node): materialize_with_ddl.network_partition_test(clickhouse_node, started_mysql_8_0, "mysql8_0") - -def test_mysql_kill_sync_thread_restore_5_7(started_cluster, started_mysql_5_7): +@pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) +def test_mysql_kill_sync_thread_restore_5_7(started_cluster, started_mysql_5_7, clickhouse_node): materialize_with_ddl.mysql_kill_sync_thread_restore_test(clickhouse_node, started_mysql_5_7, "mysql1") -def test_mysql_kill_sync_thread_restore_8_0(started_cluster, started_mysql_8_0): +@pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) +def test_mysql_kill_sync_thread_restore_8_0(started_cluster, started_mysql_8_0, clickhouse_node): materialize_with_ddl.mysql_kill_sync_thread_restore_test(clickhouse_node, started_mysql_8_0, "mysql8_0") - -def test_mysql_killed_while_insert_5_7(started_cluster, started_mysql_5_7): +@pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) +def test_mysql_killed_while_insert_5_7(started_cluster, started_mysql_5_7, clickhouse_node): materialize_with_ddl.mysql_killed_while_insert(clickhouse_node, started_mysql_5_7, "mysql1") -def test_mysql_killed_while_insert_8_0(started_cluster, started_mysql_8_0): +@pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) +def test_mysql_killed_while_insert_8_0(started_cluster, started_mysql_8_0, clickhouse_node): materialize_with_ddl.mysql_killed_while_insert(clickhouse_node, started_mysql_8_0, "mysql8_0") - -def test_clickhouse_killed_while_insert_5_7(started_cluster, started_mysql_5_7): +@pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) +def test_clickhouse_killed_while_insert_5_7(started_cluster, started_mysql_5_7, clickhouse_node): materialize_with_ddl.clickhouse_killed_while_insert(clickhouse_node, started_mysql_5_7, "mysql1") -def test_clickhouse_killed_while_insert_8_0(started_cluster, started_mysql_8_0): +@pytest.mark.parametrize(('clickhouse_node'), 
[node_db_ordinary, node_db_atomic]) +def test_clickhouse_killed_while_insert_8_0(started_cluster, started_mysql_8_0, clickhouse_node): materialize_with_ddl.clickhouse_killed_while_insert(clickhouse_node, started_mysql_8_0, "mysql8_0") diff --git a/tests/integration/test_settings_profile/test.py b/tests/integration/test_settings_profile/test.py index 5345ef9a474..3ceef9f25cf 100644 --- a/tests/integration/test_settings_profile/test.py +++ b/tests/integration/test_settings_profile/test.py @@ -207,34 +207,26 @@ def test_show_profiles(): def test_allow_ddl(): - assert "Not enough privileges" in instance.query_and_get_error("CREATE TABLE tbl(a Int32) ENGINE=Log", user="robin") - assert "DDL queries are prohibited" in instance.query_and_get_error("CREATE TABLE tbl(a Int32) ENGINE=Log", - settings={"allow_ddl": 0}) - - assert "Not enough privileges" in instance.query_and_get_error("GRANT CREATE ON tbl TO robin", user="robin") - assert "DDL queries are prohibited" in instance.query_and_get_error("GRANT CREATE ON tbl TO robin", - settings={"allow_ddl": 0}) - + assert "it's necessary to have grant" in instance.query_and_get_error("CREATE TABLE tbl(a Int32) ENGINE=Log", user="robin") + assert "it's necessary to have grant" in instance.query_and_get_error("GRANT CREATE ON tbl TO robin", user="robin") + assert "DDL queries are prohibited" in instance.query_and_get_error("CREATE TABLE tbl(a Int32) ENGINE=Log", settings={"allow_ddl": 0}) + instance.query("GRANT CREATE ON tbl TO robin") instance.query("CREATE TABLE tbl(a Int32) ENGINE=Log", user="robin") instance.query("DROP TABLE tbl") def test_allow_introspection(): - assert "Introspection functions are disabled" in instance.query_and_get_error("SELECT demangle('a')") - assert "Not enough privileges" in instance.query_and_get_error("SELECT demangle('a')", user="robin") - assert "Not enough privileges" in instance.query_and_get_error("SELECT demangle('a')", user="robin", - settings={"allow_introspection_functions": 1}) - - assert "Introspection functions are disabled" in instance.query_and_get_error("GRANT demangle ON *.* TO robin") - assert "Not enough privileges" in instance.query_and_get_error("GRANT demangle ON *.* TO robin", user="robin") - assert "Not enough privileges" in instance.query_and_get_error("GRANT demangle ON *.* TO robin", user="robin", - settings={"allow_introspection_functions": 1}) - assert instance.query("SELECT demangle('a')", settings={"allow_introspection_functions": 1}) == "signed char\n" - instance.query("GRANT demangle ON *.* TO robin", settings={"allow_introspection_functions": 1}) + assert "Introspection functions are disabled" in instance.query_and_get_error("SELECT demangle('a')") + assert "it's necessary to have grant" in instance.query_and_get_error("SELECT demangle('a')", user="robin") + assert "it's necessary to have grant" in instance.query_and_get_error("SELECT demangle('a')", user="robin", settings={"allow_introspection_functions": 1}) + + instance.query("GRANT demangle ON *.* TO robin") assert "Introspection functions are disabled" in instance.query_and_get_error("SELECT demangle('a')", user="robin") + assert instance.query("SELECT demangle('a')", user="robin", settings={"allow_introspection_functions": 1}) == "signed char\n" + instance.query("ALTER USER robin SETTINGS allow_introspection_functions=1") assert instance.query("SELECT demangle('a')", user="robin") == "signed char\n" @@ -248,4 +240,4 @@ def test_allow_introspection(): assert "Introspection functions are disabled" in instance.query_and_get_error("SELECT 
demangle('a')", user="robin") instance.query("REVOKE demangle ON *.* FROM robin", settings={"allow_introspection_functions": 1}) - assert "Not enough privileges" in instance.query_and_get_error("SELECT demangle('a')", user="robin") + assert "it's necessary to have grant" in instance.query_and_get_error("SELECT demangle('a')", user="robin") diff --git a/tests/integration/test_storage_kafka/test.py b/tests/integration/test_storage_kafka/test.py index 5d943361414..07d2bcb60c0 100644 --- a/tests/integration/test_storage_kafka/test.py +++ b/tests/integration/test_storage_kafka/test.py @@ -9,6 +9,7 @@ import time import avro.schema from confluent_kafka.avro.cached_schema_registry_client import CachedSchemaRegistryClient from confluent_kafka.avro.serializer.message_serializer import MessageSerializer +from confluent_kafka import admin import kafka.errors import pytest @@ -1161,6 +1162,66 @@ def test_kafka_materialized_view(kafka_cluster): kafka_check_result(result, True) +@pytest.mark.timeout(180) +def test_librdkafka_snappy_regression(kafka_cluster): + """ + Regression for UB in snappy-c (that is used in librdkafka), + backport pr is [1]. + + [1]: https://github.com/ClickHouse-Extras/librdkafka/pull/3 + + Example of corruption: + + 2020.12.10 09:59:56.831507 [ 20 ] {} void DB::StorageKafka::threadFunc(size_t): Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected '"' before: 'foo"}': (while reading the value of key value): (at row 1) +, Stack trace (when copying this message, always include the lines below): + """ + + # create topic with snappy compression + admin_client = admin.AdminClient({'bootstrap.servers': 'localhost:9092'}) + topic_snappy = admin.NewTopic(topic='snappy_regression', num_partitions=1, replication_factor=1, config={ + 'compression.type': 'snappy', + }) + admin_client.create_topics(new_topics=[topic_snappy], validate_only=False) + + instance.query(''' + CREATE TABLE test.kafka (key UInt64, value String) + ENGINE = Kafka + SETTINGS kafka_broker_list = 'kafka1:19092', + kafka_topic_list = 'snappy_regression', + kafka_group_name = 'ch_snappy_regression', + kafka_format = 'JSONEachRow'; + ''') + + messages = [] + expected = [] + # To trigger this regression there should duplicated messages + # Orignal reproducer is: + # + # $ gcc --version |& fgrep gcc + # gcc (GCC) 10.2.0 + # $ yes foobarbaz | fold -w 80 | head -n10 >| in-… + # $ make clean && make CFLAGS='-Wall -g -O2 -ftree-loop-vectorize -DNDEBUG=1 -DSG=1 -fPIC' + # $ ./verify in + # final comparision of in failed at 20 of 100 + value = 'foobarbaz'*10 + number_of_messages = 50 + for i in range(number_of_messages): + messages.append(json.dumps({'key': i, 'value': value})) + expected.append(f'{i}\t{value}') + kafka_produce('snappy_regression', messages) + + expected = '\n'.join(expected) + + while True: + result = instance.query('SELECT * FROM test.kafka') + rows = len(result.strip('\n').split('\n')) + print(rows) + if rows == number_of_messages: + break + + assert TSV(result) == TSV(expected) + + instance.query('DROP TABLE test.kafka') @pytest.mark.timeout(180) def test_kafka_materialized_view_with_subquery(kafka_cluster): @@ -2295,6 +2356,11 @@ def test_premature_flush_on_eof(kafka_cluster): ORDER BY key; ''') + # messages created here will be consumed immedeately after MV creation + # reaching topic EOF. 
@@ -2295,6 +2356,11 @@ def test_premature_flush_on_eof(kafka_cluster):
             ORDER BY key;
     ''')
 
+    # messages created here will be consumed immediately after MV creation,
+    # reaching the topic EOF.
+    # But we should not flush immediately after reaching EOF, because the
+    # next poll can return more data; we should respect kafka_flush_interval_ms
+    # and try to form a bigger block.
     messages = [json.dumps({'key': j + 1, 'value': j + 1}) for j in range(1)]
     kafka_produce('premature_flush_on_eof', messages)
@@ -2313,12 +2379,18 @@
     # all subscriptions/assignments done during select, so it start sending data to test.destination
     # immediately after creation of MV
-    time.sleep(2)
+
+    time.sleep(1.5)  # this sleep is needed to ensure that the first poll finished, and at least one 'empty' poll happened.
+                     # Empty polls before the fix were leading to a premature flush.
+                     # TODO: wait for messages in log: "Polled batch of 1 messages", followed by "Stalled"
+
     # produce more messages after delay
     kafka_produce('premature_flush_on_eof', messages)
+
+    # data was not flushed yet (it will be flushed 7.5 sec after creating MV)
     assert int(instance.query("SELECT count() FROM test.destination")) == 0
 
-    time.sleep(6)
+
+    time.sleep(9)  # TODO: wait for messages in log: "Committed offset ..."
 
     # it should be single part, i.e. single insert
     result = instance.query('SELECT _part, count() FROM test.destination group by _part')
diff --git a/tests/integration/test_storage_s3/test.py b/tests/integration/test_storage_s3/test.py
index ca7206ca3b5..9e49c5ebf8f 100644
--- a/tests/integration/test_storage_s3/test.py
+++ b/tests/integration/test_storage_s3/test.py
@@ -306,7 +306,8 @@ def test_multipart_put(cluster, maybe_auth, positive):
         cluster.minio_redirect_host, cluster.minio_redirect_port, bucket, filename, maybe_auth, table_format)
 
     try:
-        run_query(instance, put_query, stdin=csv_data, settings={'s3_min_upload_part_size': min_part_size_bytes})
+        run_query(instance, put_query, stdin=csv_data, settings={'s3_min_upload_part_size': min_part_size_bytes,
+                                                                 's3_max_single_part_upload_size': 0})
     except helpers.client.QueryRuntimeException:
         if positive:
             raise
diff --git a/tests/performance/encodeXMLComponent.xml b/tests/performance/encodeXMLComponent.xml
new file mode 100644
index 00000000000..45241941ac3
--- /dev/null
+++ b/tests/performance/encodeXMLComponent.xml
@@ -0,0 +1,7 @@
+<test>
+    <preconditions>
+        <table_exists>test.hits</table_exists>
+    </preconditions>
+
+    <query>SELECT count() FROM test.hits WHERE NOT ignore(encodeXMLComponent(URL))</query>
+</test>
diff --git a/tests/performance/first_significant_subdomain.xml b/tests/performance/first_significant_subdomain.xml
deleted file mode 100644
index b8418401986..00000000000
--- a/tests/performance/first_significant_subdomain.xml
+++ /dev/null
@@ -1,14 +0,0 @@
-<test>
-
-
-
-    <preconditions>
-        <table_exists>test.hits</table_exists>
-    </preconditions>
-
-    <settings>
-        <max_threads>1</max_threads>
-    </settings>
-
-    <query>SELECT count() FROM test.hits WHERE NOT ignore(firstSignificantSubdomain(URL))</query>
-</test>
diff --git a/tests/performance/url_hits.xml b/tests/performance/url_hits.xml
index c8cf119a7d7..072fb5b94e7 100644
--- a/tests/performance/url_hits.xml
+++ b/tests/performance/url_hits.xml
@@ -1,11 +1,10 @@
-    hits_100m_single
+    test.hits
 
-
     func
@@ -32,6 +31,12 @@
-    SELECT count() FROM hits_100m_single WHERE NOT ignore({func}(URL))
+
+
+    SELECT count() FROM test.hits WHERE NOT ignore(firstSignificantSubdomain(URL)) SETTINGS max_threads=1
+    SELECT count() FROM test.hits WHERE NOT ignore(firstSignificantSubdomainCustom(URL, 'public_suffix_list')) SETTINGS max_threads=1
+
+    SELECT count() FROM test.hits WHERE NOT ignore(cutToFirstSignificantSubdomain(URL)) SETTINGS max_threads=1
+    SELECT count() FROM test.hits WHERE NOT ignore(cutToFirstSignificantSubdomainCustom(URL, 'public_suffix_list')) SETTINGS max_threads=1
diff --git
a/tests/queries/0_stateless/00453_top_k.reference b/tests/queries/0_stateless/00453_top_k.reference index 1a768b03965..14beb3273fa 100644 --- a/tests/queries/0_stateless/00453_top_k.reference +++ b/tests/queries/0_stateless/00453_top_k.reference @@ -1 +1,8 @@ [0,1,2,3,4,5,6,7,8,9] +0 [[],[[],[NULL],[NULL,'1'],[NULL,'1','2'],[NULL,'1','2','3'],[NULL,'1','2','3','4'],[NULL,'1','2','3','4','5']]] +1 [[[]],[[],[NULL],[NULL,'1'],[NULL,'1','2'],[NULL,'1','2','3'],[NULL,'1','2','3','4'],[NULL,'1','2','3','4','5'],[NULL,'1','2','3','4','5','6']]] +2 [[[],[NULL]],[[],[NULL],[NULL,'1'],[NULL,'1','2'],[NULL,'1','2','3'],[NULL,'1','2','3','4'],[NULL,'1','2','3','4','5'],[NULL,'1','2','3','4','5','6'],[NULL,'1','2','3','4','5','6','7']]] +3 [[[],[NULL],[NULL,'1']]] +4 [[[],[NULL],[NULL,'1'],[NULL,'1','2']]] +5 [[[],[NULL],[NULL,'1'],[NULL,'1','2'],[NULL,'1','2','3']]] +6 [[[],[NULL],[NULL,'1'],[NULL,'1','2'],[NULL,'1','2','3'],[NULL,'1','2','3','4']]] diff --git a/tests/queries/0_stateless/00453_top_k.sql b/tests/queries/0_stateless/00453_top_k.sql index 1f79a8c5393..fb3b88e29e4 100644 --- a/tests/queries/0_stateless/00453_top_k.sql +++ b/tests/queries/0_stateless/00453_top_k.sql @@ -1 +1,15 @@ -SELECT topK(10)(n) FROM (SELECT if(number % 100 < 10, number % 10, number) AS n FROM system.numbers LIMIT 100000); \ No newline at end of file +SELECT topK(10)(n) FROM (SELECT if(number % 100 < 10, number % 10, number) AS n FROM system.numbers LIMIT 100000); + +SELECT + k, + topK(v) +FROM +( + SELECT + number % 7 AS k, + arrayMap(x -> arrayMap(x -> if(x = 0, NULL, toString(x)), range(x)), range(intDiv(number, 1))) AS v + FROM system.numbers + LIMIT 10 +) +GROUP BY k +ORDER BY k ASC diff --git a/tests/queries/0_stateless/01056_create_table_as.reference b/tests/queries/0_stateless/01056_create_table_as.reference index e69de29bb2d..c6e14cb675e 100644 --- a/tests/queries/0_stateless/01056_create_table_as.reference +++ b/tests/queries/0_stateless/01056_create_table_as.reference @@ -0,0 +1 @@ +1 String diff --git a/tests/queries/0_stateless/01056_create_table_as.sql b/tests/queries/0_stateless/01056_create_table_as.sql index bf2a143fa5a..c27f30b61d5 100644 --- a/tests/queries/0_stateless/01056_create_table_as.sql +++ b/tests/queries/0_stateless/01056_create_table_as.sql @@ -49,3 +49,7 @@ DROP DICTIONARY dict; DROP TABLE test_01056_dict_data.dict_data; DROP DATABASE test_01056_dict_data; + +CREATE TABLE t1 (x String) ENGINE = Memory AS SELECT 1; +SELECT x, toTypeName(x) FROM t1; +DROP TABLE t1; diff --git a/tests/queries/0_stateless/01081_PartialSortingTransform_full_column.sql b/tests/queries/0_stateless/01081_PartialSortingTransform_full_column.sql index 146107d6f3d..768a20c8ca4 100644 --- a/tests/queries/0_stateless/01081_PartialSortingTransform_full_column.sql +++ b/tests/queries/0_stateless/01081_PartialSortingTransform_full_column.sql @@ -9,8 +9,6 @@ select 1 from remote('127.{1,2}', currentDatabase(), test_01081) lhs join system -- Code: 171. DB::Exception: Received from localhost:9000. DB::Exception: Received from 127.2:9000. DB::Exception: Block structure mismatch in function connect between PartialSortingTransform and LazyOutputFormat stream: different columns: -- _dummy Int Int32(size = 0), 1 UInt8 UInt8(size = 0) -- _dummy Int Int32(size = 0), 1 UInt8 Const(size = 0, UInt8(size = 1)). --- --- With experimental_use_processors=1 (default at the time of writing). 
insert into test_01081 select * from system.numbers limit 10; select 1 from remote('127.{1,2}', currentDatabase(), test_01081) lhs join system.one as rhs on rhs.dummy = 1 order by 1; diff --git a/tests/queries/0_stateless/01188_attach_table_from_path.reference b/tests/queries/0_stateless/01188_attach_table_from_path.reference new file mode 100644 index 00000000000..63660dc7361 --- /dev/null +++ b/tests/queries/0_stateless/01188_attach_table_from_path.reference @@ -0,0 +1,4 @@ +file 42 +file 42 +42 mt +42 mt diff --git a/tests/queries/0_stateless/01188_attach_table_from_path.sql b/tests/queries/0_stateless/01188_attach_table_from_path.sql new file mode 100644 index 00000000000..d72daa78f67 --- /dev/null +++ b/tests/queries/0_stateless/01188_attach_table_from_path.sql @@ -0,0 +1,25 @@ +drop table if exists test; +drop table if exists file; +drop table if exists mt; + +attach table test from 'some/path' (n UInt8) engine=Memory; -- { serverError 48 } +attach table test from '/etc/passwd' (s String) engine=File(TSVRaw); -- { serverError 481 } +attach table test from '../../../../../../../../../etc/passwd' (s String) engine=File(TSVRaw); -- { serverError 481 } + +insert into table function file('01188_attach/file/data.TSV', 'TSV', 's String, n UInt8') values ('file', 42); +attach table file from '01188_attach/file' (s String, n UInt8) engine=File(TSV); +select * from file; +detach table file; +attach table file; +select * from file; + +attach table mt from '01188_attach/file' (n UInt8, s String) engine=MergeTree order by n; +select * from mt; +insert into mt values (42, 'mt'); +select * from mt; +detach table mt; +attach table mt; +select * from mt; + +drop table file; +drop table mt; diff --git a/tests/queries/0_stateless/01290_max_execution_speed_distributed.sql b/tests/queries/0_stateless/01290_max_execution_speed_distributed.sql index 8282390ca90..b0f545838e6 100644 --- a/tests/queries/0_stateless/01290_max_execution_speed_distributed.sql +++ b/tests/queries/0_stateless/01290_max_execution_speed_distributed.sql @@ -1,4 +1,8 @@ -SET max_execution_speed = 1000000, timeout_before_checking_execution_speed = 0.001, max_block_size = 100; +SET max_execution_speed = 1000000; +SET timeout_before_checking_execution_speed = 0.001; +SET max_block_size = 100; + +SET log_queries=1; CREATE TEMPORARY TABLE times (t DateTime); @@ -10,4 +14,10 @@ SELECT max(t) - min(t) >= 1 FROM times; -- Check that the query was also throttled on "remote" servers. 
SYSTEM FLUSH LOGS; -SELECT DISTINCT query_duration_ms >= 500 FROM system.query_log WHERE event_date >= yesterday() AND query LIKE '%special query for 01290_max_execution_speed_distributed%' AND type = 2; +SELECT DISTINCT query_duration_ms >= 500 +FROM system.query_log +WHERE + event_date >= yesterday() AND + query LIKE '%special query for 01290_max_execution_speed_distributed%' AND + query NOT LIKE '%system.query_log%' AND + type = 2; diff --git a/tests/queries/0_stateless/01505_trivial_count_with_partition_predicate.reference b/tests/queries/0_stateless/01505_trivial_count_with_partition_predicate.reference index 9db37bb5f81..b8b8fae2830 100644 --- a/tests/queries/0_stateless/01505_trivial_count_with_partition_predicate.reference +++ b/tests/queries/0_stateless/01505_trivial_count_with_partition_predicate.reference @@ -7,4 +7,6 @@ 0 2 0 +4 +6 3 diff --git a/tests/queries/0_stateless/01505_trivial_count_with_partition_predicate.sql b/tests/queries/0_stateless/01505_trivial_count_with_partition_predicate.sql index 47ad0128130..ecf0b791a49 100644 --- a/tests/queries/0_stateless/01505_trivial_count_with_partition_predicate.sql +++ b/tests/queries/0_stateless/01505_trivial_count_with_partition_predicate.sql @@ -2,7 +2,7 @@ drop table if exists test1; drop table if exists test_tuple; drop table if exists test_two_args; -create table test1(p DateTime, k int) engine MergeTree partition by toDate(p) order by k; +create table test1(p DateTime, k int) engine MergeTree partition by toDate(p) order by k settings index_granularity = 1; insert into test1 values ('2020-09-01 00:01:02', 1), ('2020-09-01 20:01:03', 2), ('2020-09-02 00:01:03', 3); set max_rows_to_read = 1; @@ -22,7 +22,7 @@ select count() from test1 where toDate(p) > '2020-09-01'; -- non-optimized select count() from test1 where toDate(p) >= '2020-09-01' and p <= '2020-09-01 00:00:00'; -create table test_tuple(p DateTime, i int, j int) engine MergeTree partition by (toDate(p), i) order by j; +create table test_tuple(p DateTime, i int, j int) engine MergeTree partition by (toDate(p), i) order by j settings index_granularity = 1; insert into test_tuple values ('2020-09-01 00:01:02', 1, 2), ('2020-09-01 00:01:03', 2, 3), ('2020-09-02 00:01:03', 3, 4); @@ -34,8 +34,14 @@ select count() from test_tuple where toDate(p) > '2020-09-01' and i = 1; select count() from test_tuple where i > 1; -- optimized select count() from test_tuple where i < 1; +-- non-optimized +select count() from test_tuple array join [p,p] as c where toDate(p) = '2020-09-01'; -- { serverError 158; } +select count() from test_tuple array join [1,2] as c where toDate(p) = '2020-09-01' settings max_rows_to_read = 4; +-- non-optimized +select count() from test_tuple array join [1,2,3] as c where toDate(p) = '2020-09-01'; -- { serverError 158; } +select count() from test_tuple array join [1,2,3] as c where toDate(p) = '2020-09-01' settings max_rows_to_read = 6; -create table test_two_args(i int, j int, k int) engine MergeTree partition by i + j order by k; +create table test_two_args(i int, j int, k int) engine MergeTree partition by i + j order by k settings index_granularity = 1; insert into test_two_args values (1, 2, 3), (2, 1, 3), (0, 3, 4); diff --git a/tests/queries/0_stateless/01560_DateTime_and_DateTime64_comparision.reference b/tests/queries/0_stateless/01560_DateTime_and_DateTime64_comparision.reference deleted file mode 100644 index 088030bbc28..00000000000 --- a/tests/queries/0_stateless/01560_DateTime_and_DateTime64_comparision.reference +++ /dev/null @@ -1,3 +0,0 @@ 
--1 DateTime64(1, \'UTC\') < 1 1 1 <= 1 1 1 = 0 0 0 >= 0 0 0 > 0 0 0 != 1 1 1 -0 DateTime64(1, \'UTC\') < 0 0 0 <= 1 1 1 = 1 1 1 >= 1 1 1 > 0 0 0 != 0 0 0 -1 DateTime64(1, \'UTC\') < 0 0 0 <= 0 0 0 = 0 0 0 >= 1 1 1 > 1 1 1 != 1 1 1 diff --git a/tests/queries/0_stateless/01560_DateTime_and_DateTime64_comparision.sql b/tests/queries/0_stateless/01560_DateTime_and_DateTime64_comparision.sql deleted file mode 100644 index afee0ebadaa..00000000000 --- a/tests/queries/0_stateless/01560_DateTime_and_DateTime64_comparision.sql +++ /dev/null @@ -1,42 +0,0 @@ -SELECT - n, - toTypeName(dt64) AS dt64_typename, - - '<', - dt64 < dt, - toDateTime(dt64) < dt, - dt64 < toDateTime64(dt, 1, 'UTC'), - - '<=', - dt64 <= dt, - toDateTime(dt64) <= dt, - dt64 <= toDateTime64(dt, 1, 'UTC'), - - '=', - dt64 = dt, - toDateTime(dt64) = dt, - dt64 = toDateTime64(dt, 1, 'UTC'), - - '>=', - dt64 >= dt, - toDateTime(dt64) >= dt, - dt64 >= toDateTime64(dt, 1, 'UTC'), - - '>', - dt64 > dt, - toDateTime(dt64) > dt, - dt64 > toDateTime64(dt, 1, 'UTC'), - - '!=', - dt64 != dt, - toDateTime(dt64) != dt, - dt64 != toDateTime64(dt, 1, 'UTC') -FROM -( - SELECT - number - 1 as n, - toDateTime64(toStartOfInterval(now(), toIntervalSecond(1), 'UTC'), 1, 'UTC') + n AS dt64, - toStartOfInterval(now(), toIntervalSecond(1), 'UTC') AS dt - FROM system.numbers - LIMIT 3 -) diff --git a/tests/queries/0_stateless/01561_Date_and_DateTime64_comparision.reference b/tests/queries/0_stateless/01561_Date_and_DateTime64_comparision.reference deleted file mode 100644 index e5183ec6a8a..00000000000 --- a/tests/queries/0_stateless/01561_Date_and_DateTime64_comparision.reference +++ /dev/null @@ -1,3 +0,0 @@ --1 DateTime64(1, \'UTC\') < 1 1 1 <= 1 1 1 = 0 0 0 >= 0 0 0 > 0 0 0 != 1 1 1 -0 DateTime64(1, \'UTC\') < 0 0 0 <= 0 1 0 = 0 1 0 >= 1 1 1 > 1 0 1 != 1 0 1 -1 DateTime64(1, \'UTC\') < 0 0 0 <= 0 0 0 = 0 0 0 >= 1 1 1 > 1 1 1 != 1 1 1 diff --git a/tests/queries/0_stateless/01561_Date_and_DateTime64_comparision.sql b/tests/queries/0_stateless/01561_Date_and_DateTime64_comparision.sql deleted file mode 100644 index b780793a777..00000000000 --- a/tests/queries/0_stateless/01561_Date_and_DateTime64_comparision.sql +++ /dev/null @@ -1,43 +0,0 @@ -SELECT - n, - toTypeName(dt64) AS dt64_typename, - - '<', - dt64 < d, - toDate(dt64) < d, - dt64 < toDateTime64(d, 1, 'UTC'), - - '<=', - dt64 <= d, - toDate(dt64) <= d, - dt64 <= toDateTime64(d, 1, 'UTC'), - - '=', - dt64 = d, - toDate(dt64) = d, - dt64 = toDateTime64(d, 1, 'UTC'), - - '>=', - dt64 >= d, - toDate(dt64) >= d, - dt64 >= toDateTime64(d, 1, 'UTC'), - - '>', - dt64 > d, - toDate(dt64) > d, - dt64 > toDateTime64(d, 1, 'UTC'), - - '!=', - dt64 != d, - toDate(dt64) != d, - dt64 != toDateTime64(d, 1, 'UTC') -FROM -( - SELECT - number - 1 as n, - toDateTime64(toStartOfInterval(now(), toIntervalSecond(1), 'UTC'), 1, 'UTC') AS dt64, - toDate(now(), 'UTC') - n as d - FROM system.numbers - LIMIT 3 -) -FORMAT TabSeparated diff --git a/tests/queries/0_stateless/01562_agg_null_for_empty_ahead.reference b/tests/queries/0_stateless/01562_agg_null_for_empty_ahead.reference new file mode 100644 index 00000000000..a197ceee71f --- /dev/null +++ b/tests/queries/0_stateless/01562_agg_null_for_empty_ahead.reference @@ -0,0 +1,20 @@ +0 +0 +0 +0 +0 +1 +\N +\N +1 +\N +\N +0 +\N +0 +\N +1 +\N +\N +1 +\N diff --git a/tests/queries/0_stateless/01562_agg_null_for_empty_ahead.sql b/tests/queries/0_stateless/01562_agg_null_for_empty_ahead.sql new file mode 100644 index 00000000000..834204fedb9 --- /dev/null +++ 
b/tests/queries/0_stateless/01562_agg_null_for_empty_ahead.sql
@@ -0,0 +1,36 @@
+SELECT sumMerge(s) FROM (SELECT sumState(number) s FROM numbers(0));
+SELECT sumMerge(s) FROM (SELECT sumState(number) s FROM numbers(1));
+
+SELECT sumMerge(s) FROM (SELECT sumMergeState(n) s FROM (SELECT sumState(number) n FROM numbers(0)));
+SELECT sumMerge(s) FROM (SELECT sumMergeState(n) s FROM (SELECT sumState(number) n FROM numbers(1)));
+
+SELECT sumIf(1, 0);
+
+SELECT sumIf(1, 1);
+
+-- should return Null even if we don't set aggregate_functions_null_for_empty
+SELECT sumIfOrNull(1, 0);
+SELECT sumOrNullIf(1, 0);
+
+SELECT nullIf(1, 0);
+
+SELECT nullIf(1, 1);
+
+SET aggregate_functions_null_for_empty=1;
+
+SELECT sumMerge(s) FROM (SELECT sumState(number) s FROM numbers(0));
+SELECT sumMerge(s) FROM (SELECT sumState(number) s FROM numbers(1));
+
+SELECT sumMerge(s) FROM (SELECT sumMergeState(n) s FROM (SELECT sumState(number) n FROM numbers(0)));
+SELECT sumMerge(s) FROM (SELECT sumMergeState(n) s FROM (SELECT sumState(number) n FROM numbers(1)));
+
+SELECT sumIf(1, 0);
+
+SELECT sumIf(1, 1);
+
+SELECT sumIfOrNull(1, 0);
+SELECT sumOrNullIf(1, 0);
+
+SELECT nullIf(1, 0);
+
+SELECT nullIf(1, 1);
diff --git a/tests/queries/0_stateless/01600_encode_XML.reference b/tests/queries/0_stateless/01600_encode_XML.reference
new file mode 100644
index 00000000000..e917b6f7044
--- /dev/null
+++ b/tests/queries/0_stateless/01600_encode_XML.reference
@@ -0,0 +1,4 @@
+Hello, &quot;world&quot;!
+&lt;123&gt;
+&amp;clickhouse
+&apos;foo&apos;
diff --git a/tests/queries/0_stateless/01600_encode_XML.sql b/tests/queries/0_stateless/01600_encode_XML.sql
new file mode 100644
index 00000000000..af3e2ce85b4
--- /dev/null
+++ b/tests/queries/0_stateless/01600_encode_XML.sql
@@ -0,0 +1,4 @@
+SELECT encodeXMLComponent('Hello, "world"!');
+SELECT encodeXMLComponent('<123>');
+SELECT encodeXMLComponent('&clickhouse');
+SELECT encodeXMLComponent('\'foo\'');
\ No newline at end of file
diff --git a/tests/queries/0_stateless/01600_min_max_compress_block_size.reference b/tests/queries/0_stateless/01600_min_max_compress_block_size.reference
new file mode 100644
index 00000000000..83b33d238da
--- /dev/null
+++ b/tests/queries/0_stateless/01600_min_max_compress_block_size.reference
@@ -0,0 +1 @@
+1000
diff --git a/tests/queries/0_stateless/01600_min_max_compress_block_size.sql b/tests/queries/0_stateless/01600_min_max_compress_block_size.sql
new file mode 100644
index 00000000000..747f0b736ce
--- /dev/null
+++ b/tests/queries/0_stateless/01600_min_max_compress_block_size.sql
@@ -0,0 +1,9 @@
+DROP TABLE IF EXISTS ms;
+
+CREATE TABLE ms (n Int32) ENGINE = MergeTree() ORDER BY n SETTINGS min_compress_block_size = 1024, max_compress_block_size = 10240;
+
+INSERT INTO ms SELECT * FROM numbers(1000);
+
+SELECT COUNT(*) FROM ms;
+
+DROP TABLE ms;
diff --git a/tests/queries/0_stateless/01600_remerge_sort_lowered_memory_bytes_ratio.reference b/tests/queries/0_stateless/01600_remerge_sort_lowered_memory_bytes_ratio.reference
new file mode 100644
index 00000000000..e69de29bb2d
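The new test below exercises `remerge_sort_lowered_memory_bytes_ratio`. Judging from the log lines quoted in its comments, the intermediate re-merge optimization keeps going only while each pass lowers memory usage by at least that ratio; a rough sketch of that decision under this assumption (an illustration, not ClickHouse's implementation):

```python
def remerge_is_useful(memory_before: int, memory_after: int, lowered_ratio: float = 2.0) -> bool:
    # Re-merging pays off only if memory dropped by at least `lowered_ratio`.
    return memory_before >= memory_after * lowered_ratio

MiB = 2 ** 20
# 186.25 MiB -> 95.00 MiB is a ~1.96x drop: not enough for the default 2.0...
assert not remerge_is_useful(int(186.25 * MiB), 95 * MiB, lowered_ratio=2.0)
# ...but enough for 1.9, which is why the last query in the test succeeds.
assert remerge_is_useful(int(186.25 * MiB), 95 * MiB, lowered_ratio=1.9)
```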
diff --git a/tests/queries/0_stateless/01600_remerge_sort_lowered_memory_bytes_ratio.sql b/tests/queries/0_stateless/01600_remerge_sort_lowered_memory_bytes_ratio.sql
new file mode 100644
index 00000000000..b33b74c918d
--- /dev/null
+++ b/tests/queries/0_stateless/01600_remerge_sort_lowered_memory_bytes_ratio.sql
@@ -0,0 +1,29 @@
+-- Check the remerge_sort_lowered_memory_bytes_ratio setting
+
+set max_memory_usage='300Mi';
+-- enter remerge once limit*2 is reached
+set max_bytes_before_remerge_sort='10Mi';
+-- more blocks
+set max_block_size=40960;
+
+-- remerge_sort_lowered_memory_bytes_ratio defaults to 2, which is slightly not enough here:
+-- MergeSortingTransform: Re-merging intermediate ORDER BY data (20 blocks with 819200 rows) to save memory consumption
+-- MergeSortingTransform: Memory usage is lowered from 186.25 MiB to 95.00 MiB
+-- MergeSortingTransform: Re-merging is not useful (memory usage was not lowered by remerge_sort_lowered_memory_bytes_ratio=2.0)
+select number k, repeat(toString(number), 11) v1, repeat(toString(number), 12) v2 from numbers(toUInt64(3e6)) order by k limit 400e3 format Null; -- { serverError 241 }
+select number k, repeat(toString(number), 11) v1, repeat(toString(number), 12) v2 from numbers(toUInt64(3e6)) order by k limit 400e3 settings remerge_sort_lowered_memory_bytes_ratio=2. format Null; -- { serverError 241 }
+
+-- remerge_sort_lowered_memory_bytes_ratio=1.9 is good (we need at least 1.91/0.98=1.94):
+-- MergeSortingTransform: Re-merging intermediate ORDER BY data (20 blocks with 819200 rows) to save memory consumption
+-- MergeSortingTransform: Memory usage is lowered from 186.25 MiB to 95.00 MiB
+-- MergeSortingTransform: Re-merging intermediate ORDER BY data (20 blocks with 809600 rows) to save memory consumption
+-- MergeSortingTransform: Memory usage is lowered from 188.13 MiB to 95.00 MiB
+-- MergeSortingTransform: Re-merging intermediate ORDER BY data (20 blocks with 809600 rows) to save memory consumption
+-- MergeSortingTransform: Memory usage is lowered from 188.13 MiB to 95.00 MiB
+-- MergeSortingTransform: Re-merging intermediate ORDER BY data (20 blocks with 809600 rows) to save memory consumption
+-- MergeSortingTransform: Memory usage is lowered from 188.13 MiB to 95.00 MiB
+-- MergeSortingTransform: Re-merging intermediate ORDER BY data (20 blocks with 809600 rows) to save memory consumption
+-- MergeSortingTransform: Memory usage is lowered from 188.13 MiB to 95.00 MiB
+-- MergeSortingTransform: Re-merging intermediate ORDER BY data (20 blocks with 809600 rows) to save memory consumption
+-- MergeSortingTransform: Memory usage is lowered from 188.13 MiB to 95.00 MiB
+select number k, repeat(toString(number), 11) v1, repeat(toString(number), 12) v2 from numbers(toUInt64(3e6)) order by k limit 400e3 settings remerge_sort_lowered_memory_bytes_ratio=1.9 format Null;
diff --git a/tests/queries/0_stateless/01601_custom_tld.reference b/tests/queries/0_stateless/01601_custom_tld.reference
new file mode 100644
index 00000000000..98b99778396
--- /dev/null
+++ b/tests/queries/0_stateless/01601_custom_tld.reference
@@ -0,0 +1,11 @@
+no-tld
+
+foo.there-is-no-such-domain
+foo.there-is-no-such-domain
+foo
+generic
+kernel
+kernel.biz.ss
+difference
+biz.ss
+kernel.biz.ss
diff --git a/tests/queries/0_stateless/01601_custom_tld.sql b/tests/queries/0_stateless/01601_custom_tld.sql
new file mode 100644
index 00000000000..6d68299c07d
--- /dev/null
+++ b/tests/queries/0_stateless/01601_custom_tld.sql
@@ -0,0 +1,16 @@
+select 'no-tld';
+select cutToFirstSignificantSubdomainCustom('there-is-no-such-domain', 'public_suffix_list');
+-- even if there is no TLD, the 2nd level is used by default anyway
+-- FIXME: make this behavior optional (so that the TLD for the host is never changed, either empty or something real)
+select cutToFirstSignificantSubdomainCustom('foo.there-is-no-such-domain', 'public_suffix_list');
+select cutToFirstSignificantSubdomainCustom('bar.foo.there-is-no-such-domain', 'public_suffix_list');
+select firstSignificantSubdomainCustom('bar.foo.there-is-no-such-domain',
'public_suffix_list'); + +select 'generic'; +select firstSignificantSubdomainCustom('foo.kernel.biz.ss', 'public_suffix_list'); -- kernel.biz.ss +select cutToFirstSignificantSubdomainCustom('foo.kernel.biz.ss', 'public_suffix_list'); -- kernel.biz.ss + +select 'difference'; +-- biz.ss is not in the default TLD list, hence: +select cutToFirstSignificantSubdomain('foo.kernel.biz.ss'); -- biz.ss +select cutToFirstSignificantSubdomainCustom('foo.kernel.biz.ss', 'public_suffix_list'); -- kernel.biz.ss diff --git a/tests/queries/0_stateless/01602_array_aggregation.reference b/tests/queries/0_stateless/01602_array_aggregation.reference new file mode 100644 index 00000000000..19159dd6c26 --- /dev/null +++ b/tests/queries/0_stateless/01602_array_aggregation.reference @@ -0,0 +1,56 @@ +Array min 1 +Array max 6 +Array sum 21 +Array avg 3.5 +Table array int min +1 +0 +1 +Table array int max +6 +0 +3 +Table array int sum +21 +0 +6 +Table array int avg +3.5 +0 +2 +Table array decimal min +1.00000000 +0.00000000 +1.00000000 +Table array decimal max +6.00000000 +0.00000000 +3.00000000 +Table array decimal sum +21.00000000 +0.00000000 +6.00000000 +Table array decimal avg +3.5 +0 +2 +Types of aggregation result array min +Int8 Int16 Int32 Int64 +UInt8 UInt16 UInt32 UInt64 +Float32 Float64 +Decimal(9, 8) Decimal(18, 8) Decimal(38, 8) +Types of aggregation result array max +Int8 Int16 Int32 Int64 +UInt8 UInt16 UInt32 UInt64 +Float32 Float64 +Decimal(9, 8) Decimal(18, 8) Decimal(38, 8) +Types of aggregation result array summ +Int64 Int64 Int64 Int64 +UInt64 UInt64 UInt64 UInt64 +Float64 Float64 +Decimal(38, 8) Decimal(38, 8) Decimal(38, 8) +Types of aggregation result array avg +Float64 Float64 Float64 Float64 +Float64 Float64 Float64 Float64 +Float64 Float64 +Float64 Float64 Float64 diff --git a/tests/queries/0_stateless/01602_array_aggregation.sql b/tests/queries/0_stateless/01602_array_aggregation.sql new file mode 100644 index 00000000000..4036754af90 --- /dev/null +++ b/tests/queries/0_stateless/01602_array_aggregation.sql @@ -0,0 +1,56 @@ +SELECT 'Array min ', (arrayMin(array(1,2,3,4,5,6))); +SELECT 'Array max ', (arrayMax(array(1,2,3,4,5,6))); +SELECT 'Array sum ', (arraySum(array(1,2,3,4,5,6))); +SELECT 'Array avg ', (arrayAvg(array(1,2,3,4,5,6))); + +DROP TABLE IF EXISTS test_aggregation; +CREATE TABLE test_aggregation (x Array(Int)) ENGINE=TinyLog; + +INSERT INTO test_aggregation VALUES ([1,2,3,4,5,6]), ([]), ([1,2,3]); + +SELECT 'Table array int min'; +SELECT arrayMin(x) FROM test_aggregation; +SELECT 'Table array int max'; +SELECT arrayMax(x) FROM test_aggregation; +SELECT 'Table array int sum'; +SELECT arraySum(x) FROM test_aggregation; +SELECT 'Table array int avg'; +SELECT arrayAvg(x) FROM test_aggregation; + +DROP TABLE test_aggregation; + +CREATE TABLE test_aggregation (x Array(Decimal64(8))) ENGINE=TinyLog; + +INSERT INTO test_aggregation VALUES ([1,2,3,4,5,6]), ([]), ([1,2,3]); + +SELECT 'Table array decimal min'; +SELECT arrayMin(x) FROM test_aggregation; +SELECT 'Table array decimal max'; +SELECT arrayMax(x) FROM test_aggregation; +SELECT 'Table array decimal sum'; +SELECT arraySum(x) FROM test_aggregation; +SELECT 'Table array decimal avg'; +SELECT arrayAvg(x) FROM test_aggregation; + +DROP TABLE test_aggregation; + +SELECT 'Types of aggregation result array min'; +SELECT toTypeName(arrayMin([toInt8(0)])), toTypeName(arrayMin([toInt16(0)])), toTypeName(arrayMin([toInt32(0)])), toTypeName(arrayMin([toInt64(0)])); +SELECT toTypeName(arrayMin([toUInt8(0)])), 
toTypeName(arrayMin([toUInt16(0)])), toTypeName(arrayMin([toUInt32(0)])), toTypeName(arrayMin([toUInt64(0)])); +SELECT toTypeName(arrayMin([toFloat32(0)])), toTypeName(arrayMin([toFloat64(0)])); +SELECT toTypeName(arrayMin([toDecimal32(0, 8)])), toTypeName(arrayMin([toDecimal64(0, 8)])), toTypeName(arrayMin([toDecimal128(0, 8)])); +SELECT 'Types of aggregation result array max'; +SELECT toTypeName(arrayMax([toInt8(0)])), toTypeName(arrayMax([toInt16(0)])), toTypeName(arrayMax([toInt32(0)])), toTypeName(arrayMax([toInt64(0)])); +SELECT toTypeName(arrayMax([toUInt8(0)])), toTypeName(arrayMax([toUInt16(0)])), toTypeName(arrayMax([toUInt32(0)])), toTypeName(arrayMax([toUInt64(0)])); +SELECT toTypeName(arrayMax([toFloat32(0)])), toTypeName(arrayMax([toFloat64(0)])); +SELECT toTypeName(arrayMax([toDecimal32(0, 8)])), toTypeName(arrayMax([toDecimal64(0, 8)])), toTypeName(arrayMax([toDecimal128(0, 8)])); +SELECT 'Types of aggregation result array summ'; +SELECT toTypeName(arraySum([toInt8(0)])), toTypeName(arraySum([toInt16(0)])), toTypeName(arraySum([toInt32(0)])), toTypeName(arraySum([toInt64(0)])); +SELECT toTypeName(arraySum([toUInt8(0)])), toTypeName(arraySum([toUInt16(0)])), toTypeName(arraySum([toUInt32(0)])), toTypeName(arraySum([toUInt64(0)])); +SELECT toTypeName(arraySum([toFloat32(0)])), toTypeName(arraySum([toFloat64(0)])); +SELECT toTypeName(arraySum([toDecimal32(0, 8)])), toTypeName(arraySum([toDecimal64(0, 8)])), toTypeName(arraySum([toDecimal128(0, 8)])); +SELECT 'Types of aggregation result array avg'; +SELECT toTypeName(arrayAvg([toInt8(0)])), toTypeName(arrayAvg([toInt16(0)])), toTypeName(arrayAvg([toInt32(0)])), toTypeName(arrayAvg([toInt64(0)])); +SELECT toTypeName(arrayAvg([toUInt8(0)])), toTypeName(arrayAvg([toUInt16(0)])), toTypeName(arrayAvg([toUInt32(0)])), toTypeName(arrayAvg([toUInt64(0)])); +SELECT toTypeName(arrayAvg([toFloat32(0)])), toTypeName(arrayAvg([toFloat64(0)])); +SELECT toTypeName(arrayAvg([toDecimal32(0, 8)])), toTypeName(arrayAvg([toDecimal64(0, 8)])), toTypeName(arrayAvg([toDecimal128(0, 8)])); diff --git a/tests/testflows/rbac/tests/privileges/attach/attach_table.py b/tests/testflows/rbac/tests/privileges/attach/attach_table.py index b13762fd43c..604e5a119e8 100644 --- a/tests/testflows/rbac/tests/privileges/attach/attach_table.py +++ b/tests/testflows/rbac/tests/privileges/attach/attach_table.py @@ -53,7 +53,7 @@ def privilege_check(grant_target_name, user_name, node=None): with Then("I attempt to attach a table"): node.query(f"ATTACH TABLE {table_name} (x Int8) ENGINE = Memory", settings = [("user", user_name)], - exitcode=80, message="DB::Exception: UUID must be specified") + exitcode=80, message="DB::Exception: Incorrect ATTACH TABLE query") finally: with Finally("I drop the table"): diff --git a/utils/CMakeLists.txt b/utils/CMakeLists.txt index 322ad2630d1..a27a7e9dadc 100644 --- a/utils/CMakeLists.txt +++ b/utils/CMakeLists.txt @@ -2,6 +2,13 @@ if (USE_CLANG_TIDY) set (CMAKE_CXX_CLANG_TIDY "${CLANG_TIDY_PATH}") endif () +if(MAKE_STATIC_LIBRARIES) + set(MAX_LINKER_MEMORY 3500) +else() + set(MAX_LINKER_MEMORY 2500) +endif() +include(../cmake/limit_jobs.cmake) + # Utils used in package add_subdirectory (config-processor) add_subdirectory (report) diff --git a/utils/convert-month-partitioned-parts/main.cpp b/utils/convert-month-partitioned-parts/main.cpp index af8e221a10b..bce1e08077c 100644 --- a/utils/convert-month-partitioned-parts/main.cpp +++ b/utils/convert-month-partitioned-parts/main.cpp @@ -1,4 +1,5 @@ #include +#include #include #include 
#include diff --git a/utils/make_changelog.py b/utils/make_changelog.py deleted file mode 100755 index 4f703108d38..00000000000 --- a/utils/make_changelog.py +++ /dev/null @@ -1,507 +0,0 @@ -#!/usr/bin/env python3 -# Note: should work with python 2 and 3 - - -import requests -import json -import subprocess -import re -import os -import time -import logging -import codecs -import argparse - - -GITHUB_API_URL = 'https://api.github.com/' -SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__)) - - -def http_get_json(url, token, max_retries, retry_timeout): - - for t in range(max_retries): - - if token: - resp = requests.get(url, headers={"Authorization": "token {}".format(token)}) - else: - resp = requests.get(url) - - if resp.status_code != 200: - msg = "Request {} failed with code {}.\n{}\n".format(url, resp.status_code, resp.text) - - if resp.status_code == 403 or resp.status_code >= 500: - try: - if (resp.json()['message'].startswith('API rate limit exceeded') or resp.status_code >= 500) and t + 1 < max_retries: - logging.warning(msg) - time.sleep(retry_timeout) - continue - except Exception: - pass - - raise Exception(msg) - - return resp.json() - - -def github_api_get_json(query, token, max_retries, retry_timeout): - return http_get_json(GITHUB_API_URL + query, token, max_retries, retry_timeout) - - -def check_sha(sha): - if not (re.match('^[a-hA-H0-9]+$', sha) and len(sha) >= 7): - raise Exception("String " + sha + " doesn't look like git sha.") - - -def get_merge_base(first, second, project_root): - try: - command = "git merge-base {} {}".format(first, second) - text = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, cwd=project_root).stdout.read() - text = text.decode('utf-8', 'ignore') - sha = tuple(filter(len, text.split()))[0] - check_sha(sha) - return sha - except Exception: - logging.error('Cannot find merge base for %s and %s', first, second) - raise - -def rev_parse(rev, project_root): - try: - command = "git rev-parse {}".format(rev) - text = subprocess.check_output(command, shell=True, cwd=project_root) - text = text.decode('utf-8', 'ignore') - sha = tuple(filter(len, text.split()))[0] - check_sha(sha) - return sha - except Exception: - logging.error('Cannot find revision %s', rev) - raise - - -# Get list of commits from branch to base_sha. Update commits_info. -def get_commits_from_branch(repo, branch, base_sha, commits_info, max_pages, token, max_retries, retry_timeout): - - def get_commits_from_page(page): - query = 'repos/{}/commits?sha={}&page={}'.format(repo, branch, page) - resp = github_api_get_json(query, token, max_retries, retry_timeout) - for commit in resp: - sha = commit['sha'] - if sha not in commits_info: - commits_info[sha] = commit - - return [commit['sha'] for commit in resp] - - commits = [] - found_base_commit = False - - for page in range(max_pages): - page_commits = get_commits_from_page(page) - for commit in page_commits: - if commit == base_sha: - found_base_commit = True - break - commits.append(commit) - - if found_base_commit: - break - - if not found_base_commit: - raise Exception("Can't found base commit sha {} in branch {}. Checked {} commits on {} pages.\nCommits: {}" - .format(base_sha, branch, len(commits), max_pages, ' '.join(commits))) - return commits - - -# Get list of commits a specified commit is cherry-picked from. Can return an empty list. 
-def parse_original_commits_from_cherry_pick_message(commit_message): - prefix = '(cherry picked from commits' - pos = commit_message.find(prefix) - if pos == -1: - prefix = '(cherry picked from commit' - pos = commit_message.find(prefix) - if pos == -1: - return [] - pos += len(prefix) - endpos = commit_message.find(')', pos) - if endpos == -1: - return [] - lst = [x.strip() for x in commit_message[pos:endpos].split(',')] - lst = [x for x in lst if x] - return lst - - -# Use GitHub search api to check if commit from any pull request. Update pull_requests info. -def find_pull_request_for_commit(commit_info, pull_requests, token, max_retries, retry_timeout): - commits = [commit_info['sha']] + parse_original_commits_from_cherry_pick_message(commit_info['commit']['message']) - - # Special case for cherry-picked merge commits without -x option. Parse pr number from commit message and search it. - if commit_info['commit']['message'].startswith('Merge pull request'): - tokens = commit_info['commit']['message'][len('Merge pull request'):].split() - if len(tokens) > 0 and tokens[0].startswith('#'): - pr_number = tokens[0][1:] - if len(pr_number) > 0 and pr_number.isdigit(): - commits = [pr_number] - - query = 'search/issues?q={}+type:pr+repo:{}&sort=created&order=asc'.format(' '.join(commits), repo) - resp = github_api_get_json(query, token, max_retries, retry_timeout) - - found = False - for item in resp['items']: - if 'pull_request' in item: - found = True - number = item['number'] - if number not in pull_requests: - pull_requests[number] = { - 'title': item['title'], - 'description': item['body'], - 'user': item['user']['login'], - } - - return found - - -# Find pull requests from list of commits. If no pull request found, add commit to not_found_commits list. -def find_pull_requests(commits, commits_info, token, max_retries, retry_timeout): - not_found_commits = [] - pull_requests = {} - - for i, commit in enumerate(commits): - if (i + 1) % 10 == 0: - logging.info('Processed %d commits', i + 1) - if not find_pull_request_for_commit(commits_info[commit], pull_requests, token, max_retries, retry_timeout): - not_found_commits.append(commit) - - return not_found_commits, pull_requests - - -# Find pull requests by list of numbers -def find_pull_requests_by_num(pull_requests_nums, token, max_retries, retry_timeout): - pull_requests = {} - - for pr in pull_requests_nums: - item = github_api_get_json('repos/{}/pulls/{}'.format(repo, pr), token, max_retries, retry_timeout) - - number = item['number'] - if number not in pull_requests: - pull_requests[number] = { - 'title': item['title'], - 'description': item['body'], - 'user': item['user']['login'], - } - - return pull_requests - - -# Get users for all unknown commits and pull requests. -def get_users_info(pull_requests, commits_info, token, max_retries, retry_timeout): - - users = {} - - def update_user(user): - if user not in users: - query = 'users/{}'.format(user) - resp = github_api_get_json(query, token, max_retries, retry_timeout) - users[user] = resp - - for pull_request in list(pull_requests.values()): - update_user(pull_request['user']) - - for commit_info in list(commits_info.values()): - if 'committer' in commit_info and commit_info['committer'] is not None and 'login' in commit_info['committer']: - update_user(commit_info['committer']['login']) - else: - logging.warning('Not found author for commit %s.', commit_info['html_url']) - - return users - - -# List of unknown commits -> text description. 
-def process_unknown_commits(commits, commits_info, users): - - pattern = 'Commit: [{}]({})\nAuthor: {}\nMessage: {}' - - texts = [] - - for commit in commits: - info = commits_info[commit] - html_url = info['html_url'] - msg = info['commit']['message'] - - name = None - login = None - author = None - - if not info['author']: - author = 'Unknown' - else: - # GitHub login - if 'login' in info['author']: - login = info['author']['login'] - - # First, try get name from github user - try: - name = users[login]['name'] - except KeyError: - pass - else: - login = 'Unknown' - - # Then, try get name from commit - if not name: - try: - name = info['commit']['author']['name'] - except KeyError: - pass - - author = '[{}]({})'.format(name or login, info['author']['html_url']) - - texts.append(pattern.format(commit, html_url, author, msg)) - - text = 'Commits which are not from any pull request:\n\n' - return text + '\n\n'.join(texts) - -# This function mirrors the PR description checks in ClickhousePullRequestTrigger. -# Returns False if the PR should not be mentioned changelog. -def parse_one_pull_request(item): - description = item['description'] - lines = [line for line in [x.strip() for x in description.split('\n')] if line] if description else [] - lines = [re.sub(r'\s+', ' ', l) for l in lines] - - cat_pos = None - short_descr_pos = None - long_descr_pos = None - - if lines: - for i in range(len(lines) - 1): - if re.match(r'(?i).*category.*:$', lines[i]): - cat_pos = i - if re.match(r'(?i)^\**\s*(Short description|Change\s*log entry)', lines[i]): - short_descr_pos = i - if re.match(r'(?i)^\**\s*Detailed description', lines[i]): - long_descr_pos = i - - if cat_pos is None: - return False - cat = lines[cat_pos + 1] - cat = re.sub(r'^[-*\s]*', '', cat) - - # Filter out the PR categories that are not for changelog. - if re.match(r'(?i)doc|((non|in|not|un)[-\s]*significant)|(not[ ]*for[ ]*changelog)', cat): - return False - - short_descr = '' - if short_descr_pos: - short_descr_end = long_descr_pos or len(lines) - short_descr = lines[short_descr_pos + 1] - if short_descr_pos + 2 != short_descr_end: - short_descr += ' ...' - - # If we have nothing meaningful - if not re.match('\w', short_descr): - short_descr = item['title'] - - # TODO: Add detailed description somewhere - - item['entry'] = short_descr - item['category'] = cat - - return True - - -# List of pull requests -> text description. 
-def process_pull_requests(pull_requests, users, repo):
-    groups = {}
-
-    for id, item in list(pull_requests.items()):
-        if not parse_one_pull_request(item):
-            continue
-
-        pattern = "{} [#{}]({}) ({})"
-        link = 'https://github.com/{}/pull/{}'.format(repo, id)
-        author = 'author not found'
-        if item['user'] in users:
-            # TODO get user from any commit if no user name on github
-            user = users[item['user']]
-            author = '[{}]({})'.format(user['name'] or user['login'], user['html_url'])
-
-        cat = item['category']
-        if cat not in groups:
-            groups[cat] = []
-        groups[cat].append(pattern.format(item['entry'], id, link, author))
-
-    categories_preferred_order = ['Backward Incompatible Change', 'New Feature', 'Bug Fix', 'Improvement', 'Performance Improvement', 'Build/Testing/Packaging Improvement', 'Other']
-
-    def categories_sort_key(name):
-        if name in categories_preferred_order:
-            return str(categories_preferred_order.index(name)).zfill(3)
-        else:
-            return name.lower()
-
-    texts = []
-    for group, text in sorted(list(groups.items()), key = lambda kv: categories_sort_key(kv[0])):
-        items = ['* {}'.format(pr) for pr in text]
-        texts.append('### {}\n{}'.format(group if group else '[No category]', '\n'.join(items)))
-
-    return '\n\n'.join(texts)
-
-
-# Load inner state. For debug purposes.
-def load_state(state_file, base_sha, new_tag, prev_tag):
-
-    state = {}
-
-    if state_file:
-        try:
-            if os.path.exists(state_file):
-                logging.info('Reading state from %s', state_file)
-                with codecs.open(state_file, encoding='utf-8') as f:
-                    state = json.loads(f.read())
-            else:
-                logging.info('State file does not exist. Will create new one.')
-        except Exception as e:
-            logging.warning('Cannot load state from %s. Reason: %s', state_file, str(e))
-
-    if state:
-        if 'base_sha' not in state or 'new_tag' not in state or 'prev_tag' not in state:
-            logging.warning('Invalid state. Will create new one.')
-        elif state['base_sha'] == base_sha and state['new_tag'] == new_tag and state['prev_tag'] == prev_tag:
-            logging.info('State loaded.')
-        else:
-            logging.info('Loaded state has different tags or merge base sha. Will create new state.')
-            state = {}
-
-    return state
-
-
-# Save inner state. For debug purposes.
-def save_state(state_file, state):
-    with codecs.open(state_file, 'w', encoding='utf-8') as f:
-        f.write(json.dumps(state, indent=4, separators=(',', ': ')))
-
-
-def make_changelog(new_tag, prev_tag, pull_requests_nums, repo, repo_folder, state_file, token, max_retries, retry_timeout):
-
-    base_sha = None
-    if new_tag and prev_tag:
-        base_sha = get_merge_base(new_tag, prev_tag, repo_folder)
-        logging.info('Base sha: %s', base_sha)
-
-    # Step 1. Get commits from merge_base to new_tag HEAD.
-    # Result is a list of commits + map with commits info (author, message)
-    commits_info = {}
-    commits = []
-    is_commits_loaded = False
-
-    # Step 2. For each commit check if it is from any pull request (using github search api).
-    # Result is a list of unknown commits + map with pull request info (author, description).
-    unknown_commits = []
-    pull_requests = {}
-    is_pull_requests_loaded = False
-
-    # Step 3. Map users with their info (Name)
-    users = {}
-    is_users_loaded = False
-
-    # Step 4. Make changelog text from data above.
-
-    state = load_state(state_file, base_sha, new_tag, prev_tag)
-
-    if state:
-
-        if 'commits' in state and 'commits_info' in state:
-            logging.info('Loading commits from %s', state_file)
-            commits_info = state['commits_info']
-            commits = state['commits']
-            is_commits_loaded = True
-
-        if 'pull_requests' in state and 'unknown_commits' in state:
-            logging.info('Loading pull requests from %s', state_file)
-            unknown_commits = state['unknown_commits']
-            pull_requests = state['pull_requests']
-            is_pull_requests_loaded = True
-
-        if 'users' in state:
-            logging.info('Loading users requests from %s', state_file)
-            users = state['users']
-            is_users_loaded = True
-
-    if base_sha:
-        state['base_sha'] = base_sha
-        state['new_tag'] = new_tag
-        state['prev_tag'] = prev_tag
-
-        if not is_commits_loaded:
-            logging.info('Getting commits using github api.')
-            commits = get_commits_from_branch(repo, new_tag, base_sha, commits_info, 100, token, max_retries, retry_timeout)
-            state['commits'] = commits
-            state['commits_info'] = commits_info
-
-        logging.info('Found %d commits from %s to %s.\n', len(commits), new_tag, base_sha)
-        save_state(state_file, state)
-
-        if not is_pull_requests_loaded:
-            logging.info('Searching for pull requests using github api.')
-            unknown_commits, pull_requests = find_pull_requests(commits, commits_info, token, max_retries, retry_timeout)
-            state['unknown_commits'] = unknown_commits
-            state['pull_requests'] = pull_requests
-    else:
-        pull_requests = find_pull_requests_by_num(pull_requests_nums.split(','), token, max_retries, retry_timeout)
-
-    logging.info('Found %d pull requests and %d unknown commits.\n', len(pull_requests), len(unknown_commits))
-    save_state(state_file, state)
-
-    if not is_users_loaded:
-        logging.info('Getting users info using github api.')
-        users = get_users_info(pull_requests, commits_info, token, max_retries, retry_timeout)
-        state['users'] = users
-
-    logging.info('Found %d users.', len(users))
-    save_state(state_file, state)
-
-    changelog = '{}\n\n{}'.format(process_pull_requests(pull_requests, users, repo), process_unknown_commits(unknown_commits, commits_info, users))
-
-    # Substitute links to issues
-    changelog = re.sub(r'(?<!\[)#(\d+)', r'[#\1](https://github.com/{}/issues/\1)'.format(repo), changelog)
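The helpers above are removed by this commit. For readers who only want the gist of the cherry-pick handling, here is a minimal, self-contained sketch (not repository code) of the same trailer-parsing idea; the commit message is invented for illustration:

```python
import re

def parse_cherry_pick_trailer(message):
    # Handles both '(cherry picked from commit <sha>)' and the plural
    # '(cherry picked from commits <sha1>, <sha2>)' forms, like the
    # prefix search in the deleted helper above.
    match = re.search(r'\(cherry picked from commits? ([^)]*)\)', message)
    if not match:
        return []
    return [sha.strip() for sha in match.group(1).split(',') if sha.strip()]

# Hypothetical commit message, for illustration only.
message = 'Backport of a fix to 20.12\n\n(cherry picked from commit 1a2b3c4d)'
print(parse_cherry_pick_trailer(message))  # ['1a2b3c4d']
```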
diff --git a/website/benchmark/hardware/index.html b/website/benchmark/hardware/index.html
Results for Dell XPS laptop and Google Pixel phone are from Alexander Kuzmenkov.
Results for Android phones for "cold cache" are done without cache flushing, so they are not "cold" and cannot be compared.
Results for Digital Ocean are from Zimin Aleksey.
Results for 2x EPYC 7642 w/ 512 GB RAM (192 Cores) + 12X 1TB SSD (RAID6) are from Yiğit Konur and Metehan Çetinkaya of seo.do.
-Results for Raspberry Pi and Digital Ocean CPU-optimized are from Fritz Wijaya.
+Results for Raspberry Pi and Digital Ocean CPU-optimized are from Fritz Wijaya.
+Results for DigitalOcean (Storage-intensive VMs) + (CPU/GP) are from Yiğit Konur and Metehan Çetinkaya of seo.do.

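The new result files added below share one JSON shape: a list of machine entries with `system`, `system_full`, `cpu_vendor`, `cpu_model`, `time`, and `kind` fields, plus a `result` array holding three run times in seconds for each benchmark query, with `null` marking a failed run. As a minimal sketch (not repository code), assuming one of these files, such as `do_storage_optimized.json`, has been saved locally, it could be summarized like this:

```python
import json

# Illustrative local path; the file comes from the diff below.
with open('do_storage_optimized.json') as f:
    entries = json.load(f)

for entry in entries:
    # 'result' holds one [run1, run2, run3] triple per benchmark query;
    # failed runs are recorded as null (None after parsing), so filter them.
    best = [min(t for t in triple if t is not None)
            for triple in entry['result']
            if any(t is not None for t in triple)]
    print('{}: {} queries, {:.2f}s total best time'.format(
        entry['system'], len(best), sum(best)))
```

The `null` handling is not hypothetical: the 2020 MacBook Pro entry further down records several queries as `[null, null, null]`, and a naive `min()` over those triples would raise a TypeError.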
diff --git a/website/benchmark/hardware/results/do_storage_optimized.json b/website/benchmark/hardware/results/do_storage_optimized.json new file mode 100644 index 00000000000..6c6cee5423b --- /dev/null +++ b/website/benchmark/hardware/results/do_storage_optimized.json @@ -0,0 +1,380 @@ +[ + { + "system": "DigitalOcean Storage-opt 8", + "system_full": "DigitalOcean 8 CPUs, 64 GB RAM, 1.17 TB NVM SSD, Storage Intensive, Frankfurt, Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz", + "cpu_vendor": "Intel", + "cpu_model": "Xeon Gold 6140", + "time": "2020-12-13 00:00:00", + "kind": "cloud", + "result": + [ +[0.003, 0.002, 0.002], +[0.020, 0.014, 0.014], +[0.060, 0.048, 0.049], +[0.089, 0.064, 0.062], +[0.194, 0.177, 0.178], +[0.521, 0.502, 0.497], +[0.031, 0.026, 0.026], +[0.020, 0.018, 0.018], +[0.782, 0.738, 0.730], +[0.874, 0.830, 0.834], +[0.288, 0.265, 0.261], +[0.340, 0.316, 0.316], +[1.144, 1.016, 1.000], +[1.423, 1.359, 1.348], +[1.296, 1.234, 1.217], +[1.458, 1.393, 1.392], +[3.573, 3.374, 3.335], +[2.143, 2.065, 2.083], +[6.179, 6.153, 6.110], +[0.101, 0.069, 0.069], +[1.611, 1.383, 1.362], +[1.659, 1.409, 1.388], +[3.642, 3.124, 3.104], +[2.399, 1.690, 1.652], +[0.501, 0.434, 0.412], +[0.382, 0.345, 0.346], +[0.490, 0.425, 0.420], +[1.401, 1.192, 1.165], +[2.132, 1.956, 1.979], +[1.777, 1.775, 1.770], +[1.134, 1.033, 1.043], +[1.611, 1.475, 1.451], +[9.319, 8.830, 8.826], +[5.214, 5.071, 4.960], +[5.464, 5.079, 5.113], +[1.862, 1.810, 1.800], +[0.223, 0.173, 0.167], +[0.074, 0.058, 0.060], +[0.039, 0.037, 0.038], +[0.430, 0.354, 0.355], +[0.034, 0.015, 0.013], +[0.016, 0.010, 0.011], +[0.006, 0.006, 0.006] + ] + }, + { + "system": "DigitalOcean Storage-opt 16", + "system_full": "DigitalOcean 16 CPUs, 128 GB RAM, 2.34 TB NVM SSD, Storage Intensive, Frankfurt, Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz", + "cpu_vendor": "Intel", + "cpu_model": "Xeon Gold 6140", + "time": "2020-12-13 00:00:00", + "kind": "cloud", + "result": + [ +[0.002, 0.002, 0.002], +[0.017, 0.010, 0.010], +[0.039, 0.029, 0.027], +[0.055, 0.039, 0.038], +[0.125, 0.107, 0.112], +[0.302, 0.281, 0.271], +[0.021, 0.016, 0.016], +[0.012, 0.011, 0.011], +[0.418, 0.398, 0.397], +[0.473, 0.446, 0.442], +[0.172, 0.153, 0.155], +[0.202, 0.184, 0.185], +[0.605, 0.543, 0.538], +[0.753, 0.693, 0.682], +[0.687, 0.645, 0.635], +[0.708, 0.701, 0.693], +[1.703, 1.642, 1.655], +[1.037, 0.994, 0.997], +[3.563, 3.314, 3.364], +[0.085, 0.047, 0.039], +[0.857, 0.745, 0.742], +[0.900, 0.770, 0.752], +[1.931, 1.679, 1.672], +[1.537, 0.937, 0.914], +[0.272, 0.230, 0.219], +[0.212, 0.192, 0.187], +[0.271, 0.237, 0.225], +[0.782, 0.659, 0.661], +[1.169, 1.063, 1.068], +[1.193, 1.182, 1.180], +[0.625, 0.590, 0.572], +[0.936, 0.822, 0.800], +[4.796, 4.587, 4.589], +[3.061, 2.715, 2.700], +[2.819, 2.709, 2.708], +[1.078, 1.048, 1.064], +[0.210, 0.163, 0.150], +[0.073, 0.057, 0.058], +[0.037, 0.036, 0.035], +[0.387, 0.357, 0.324], +[0.030, 0.015, 0.017], +[0.014, 0.010, 0.015], +[0.012, 0.006, 0.005] + ] + }, + { + "system": "DigitalOcean Storage-opt 24", + "system_full": "DigitalOcean, 24 CPUs, 192 GB RAM, 3.52 TB NVM SSD, Storage Intensive, Frankfurt, Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz", + "cpu_vendor": "Intel", + "cpu_model": "Xeon Gold 6140", + "time": "2020-12-13 00:00:00", + "kind": "cloud", + "result": + [ +[0.002, 0.002, 0.001], +[0.017, 0.009, 0.009], +[0.033, 0.022, 0.022], +[0.046, 0.029, 0.028], +[0.114, 0.101, 0.113], +[0.228, 0.215, 0.209], +[0.018, 0.013, 0.013], +[0.012, 0.010, 0.010], +[0.316, 0.294, 0.294], +[0.350, 0.328, 0.330], 
+[0.142, 0.129, 0.124], +[0.157, 0.141, 0.138], +[0.452, 0.401, 0.403], +[0.550, 0.508, 0.503], +[0.883, 0.455, 0.451], +[0.539, 0.535, 0.533], +[1.278, 1.206, 1.207], +[0.771, 0.746, 0.736], +[2.776, 2.543, 2.571], +[0.084, 0.065, 0.029], +[0.666, 0.539, 0.586], +[0.703, 0.559, 0.566], +[1.404, 1.232, 1.222], +[1.243, 0.695, 0.701], +[0.215, 0.173, 0.164], +[0.162, 0.143, 0.140], +[0.207, 0.169, 0.169], +[0.606, 0.506, 0.505], +[0.849, 0.788, 0.774], +[1.085, 1.074, 1.072], +[0.426, 0.392, 0.394], +[0.718, 0.616, 0.600], +[3.579, 3.372, 3.428], +[2.161, 2.006, 1.980], +[2.491, 2.026, 1.991], +[0.839, 0.818, 0.817], +[0.220, 0.154, 0.163], +[0.074, 0.058, 0.063], +[0.037, 0.038, 0.038], +[0.391, 0.353, 0.329], +[0.034, 0.012, 0.014], +[0.014, 0.012, 0.011], +[0.005, 0.005, 0.005] + ] + }, + { + "system": "DigitalOcean Storage-opt 32", + "system_full": "DigitalOcean, 32 CPUs, 256 GB RAM, 4.56 TB NVM SSD, Storage Intensive, Frankfurt, Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz", + "cpu_vendor": "Intel", + "cpu_model": "Xeon Gold 6140", + "time": "2020-12-13 00:00:00", + "kind": "cloud", + "result": + [ +[0.002, 0.002, 0.001], +[0.016, 0.011, 0.010], +[0.029, 0.020, 0.019], +[0.044, 0.025, 0.025], +[0.103, 0.094, 0.091], +[0.202, 0.181, 0.182], +[0.021, 0.014, 0.012], +[0.012, 0.010, 0.010], +[0.259, 0.240, 0.237], +[0.291, 0.268, 0.267], +[0.128, 0.114, 0.113], +[0.137, 0.121, 0.120], +[0.351, 0.315, 0.315], +[0.435, 0.403, 0.401], +[0.389, 0.377, 0.370], +[0.473, 0.459, 0.450], +[1.076, 1.019, 1.034], +[0.673, 0.632, 0.653], +[1.804, 1.773, 1.779], +[0.055, 0.048, 0.030], +[0.555, 0.472, 0.462], +[0.575, 0.465, 0.478], +[1.245, 1.087, 1.073], +[1.206, 0.561, 0.572], +[0.178, 0.137, 0.137], +[0.133, 0.116, 0.116], +[0.168, 0.139, 0.133], +[0.533, 0.434, 0.440], +[0.727, 0.676, 0.668], +[1.037, 1.029, 1.029], +[0.343, 0.308, 0.308], +[0.503, 0.448, 0.449], +[3.128, 2.888, 2.937], +[1.772, 1.605, 1.586], +[1.653, 1.591, 1.606], +[0.754, 0.718, 0.703], +[0.198, 0.156, 0.143], +[0.074, 0.056, 0.057], +[0.040, 0.040, 0.035], +[0.395, 0.333, 0.320], +[0.031, 0.017, 0.014], +[0.013, 0.011, 0.017], +[0.008, 0.009, 0.006] + ] + }, + { + "system": "DigitalOcean Memory-opt 32", + "system_full": "DigitalOcean, 32 CPUs, 256 GB RAM, 4.8 TB SSD, Memory Intensive, Frankfurt, Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz", + "cpu_vendor": "Intel", + "cpu_model": "Xeon Gold 6140", + "time": "2020-12-13 00:00:00", + "kind": "cloud", + "result": + [ +[0.003, 0.002, 0.002], +[0.016, 0.009, 0.009], +[0.030, 0.019, 0.019], +[0.045, 0.026, 0.025], +[0.103, 0.091, 0.090], +[0.200, 0.180, 0.175], +[0.017, 0.012, 0.012], +[0.012, 0.011, 0.010], +[0.264, 0.237, 0.234], +[0.291, 0.270, 0.259], +[0.124, 0.111, 0.111], +[0.136, 0.118, 0.117], +[0.352, 0.316, 0.303], +[0.439, 0.394, 0.392], +[0.391, 0.370, 0.372], +[0.466, 0.450, 0.449], +[1.075, 1.014, 1.008], +[0.654, 0.628, 0.619], +[1.810, 1.801, 1.787], +[0.053, 0.032, 0.030], +[0.550, 0.443, 0.461], +[0.553, 0.461, 0.465], +[1.237, 1.064, 1.065], +[1.163, 0.589, 0.548], +[0.178, 0.132, 0.135], +[0.131, 0.114, 0.112], +[0.164, 0.133, 0.132], +[0.530, 0.442, 0.433], +[0.720, 0.664, 0.659], +[1.032, 1.024, 1.023], +[0.346, 0.311, 0.314], +[0.502, 0.439, 0.438], +[3.146, 2.869, 2.891], +[1.757, 1.615, 1.589], +[1.642, 1.571, 1.580], +[0.743, 0.711, 0.722], +[0.208, 0.151, 0.140], +[0.077, 0.057, 0.056], +[0.037, 0.037, 0.040], +[0.378, 0.310, 0.315], +[0.031, 0.014, 0.017], +[0.019, 0.010, 0.010], +[0.005, 0.005, 0.005] + ] + }, + { + "system": "DigitalOcean CPU-opt, 32", + 
"system_full": "DigitalOcean, 32 CPUs, 64 GB RAM, 3.52 TB SSD, CPU Intensive, Frankfurt, Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz", + "cpu_vendor": "Intel", + "cpu_model": "Xeon Gold 6140", + "time": "2020-12-13 00:00:00", + "kind": "cloud", + "result": + [ +[0.002, 0.002, 0.001], +[0.016, 0.009, 0.009], +[0.026, 0.018, 0.018], +[0.040, 0.025, 0.024], +[0.097, 0.083, 0.088], +[0.190, 0.167, 0.171], +[0.020, 0.011, 0.011], +[0.011, 0.009, 0.009], +[0.253, 0.228, 0.231], +[0.277, 0.257, 0.258], +[0.117, 0.104, 0.107], +[0.127, 0.111, 0.111], +[0.346, 0.313, 0.310], +[0.435, 0.396, 0.392], +[0.391, 0.366, 0.361], +[0.469, 0.458, 0.455], +[1.070, 1.026, 1.033], +[0.649, 0.626, 0.627], +[1.859, 1.808, 1.803], +[0.065, 0.032, 0.025], +[0.513, 0.402, 0.420], +[0.525, 0.418, 0.417], +[1.167, 1.004, 0.988], +[1.118, 0.531, 0.538], +[0.171, 0.126, 0.122], +[0.124, 0.108, 0.105], +[0.158, 0.126, 0.124], +[0.501, 0.407, 0.409], +[0.691, 0.624, 0.619], +[1.106, 1.095, 1.094], +[0.337, 0.301, 0.302], +[0.504, 0.447, 0.453], +[3.187, 2.934, 2.963], +[1.771, 1.583, 1.570], +[1.661, 1.584, 1.573], +[0.748, 0.733, 0.736], +[0.198, 0.152, 0.145], +[0.072, 0.055, 0.053], +[0.039, 0.035, 0.035], +[0.395, 0.332, 0.322], +[0.032, 0.012, 0.012], +[0.019, 0.014, 0.009], +[0.005, 0.005, 0.005] + ] + }, + { + "system": "DigitalOcean General 40", + "system_full": "DigitalOcean, 40 CPUs, 64 GB RAM, 1 TB SSD, General Purpose, San Francisco, Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz", + "cpu_vendor": "Intel", + "cpu_model": "Xeon Gold 6140", + "time": "2020-12-13 00:00:00", + "kind": "cloud", + "result": + [ +[0.003, 0.001, 0.001], +[0.019, 0.009, 0.009], +[0.045, 0.017, 0.017], +[0.103, 0.022, 0.021], +[0.131, 0.086, 0.088], +[0.210, 0.167, 0.163], +[0.018, 0.011, 0.011], +[0.014, 0.010, 0.010], +[0.246, 0.204, 0.204], +[0.297, 0.231, 0.238], +[0.138, 0.103, 0.102], +[0.148, 0.106, 0.107], +[0.352, 0.313, 0.309], +[0.453, 0.394, 0.392], +[0.392, 0.349, 0.349], +[0.452, 0.429, 0.429], +[1.060, 0.972, 0.966], +[0.668, 0.591, 0.606], +[1.721, 1.645, 1.640], +[0.102, 0.039, 0.024], +[0.973, 0.386, 0.381], +[1.105, 0.405, 0.388], +[1.996, 0.986, 0.969], +[2.381, 0.475, 0.463], +[0.294, 0.112, 0.111], +[0.167, 0.096, 0.099], +[0.296, 0.114, 0.114], +[0.961, 0.432, 0.424], +[0.840, 0.611, 0.610], +[1.100, 1.090, 1.088], +[0.374, 0.292, 0.283], +[0.691, 0.432, 0.429], +[3.111, 2.863, 2.922], +[1.857, 1.579, 1.553], +[1.762, 1.548, 1.542], +[0.573, 0.552, 0.542], +[0.221, 0.140, 0.144], +[0.079, 0.059, 0.058], +[0.038, 0.034, 0.035], +[0.405, 0.342, 0.354], +[0.039, 0.018, 0.013], +[0.015, 0.010, 0.010], +[0.005, 0.005, 0.005] + ] + } +] diff --git a/website/benchmark/hardware/results/core_i7_macbook_pro_2018.json b/website/benchmark/hardware/results/macbook_pro_core_i7_2018.json similarity index 100% rename from website/benchmark/hardware/results/core_i7_macbook_pro_2018.json rename to website/benchmark/hardware/results/macbook_pro_core_i7_2018.json diff --git a/website/benchmark/hardware/results/macbook_pro_core_i7_2020.json b/website/benchmark/hardware/results/macbook_pro_core_i7_2020.json new file mode 100644 index 00000000000..8250c5f3c7e --- /dev/null +++ b/website/benchmark/hardware/results/macbook_pro_core_i7_2020.json @@ -0,0 +1,54 @@ +[ + { + "system": "MacBook Pro 2020", + "system_full": "MacBook Pro 2020, 2.3 GHz Quad-Core Intel Core i7 (10th gen), 32 GiB RAM, 2TB SSD", + "time": "2020-06-25 00:00:00", + "kind": "laptop", + "result": + [ +[0.003, 0.002, 0.002], +[0.046, 0.025, 0.015], +[0.042, 0.042, 0.041], +[0.069, 
0.068, 0.069], +[0.164, 0.166, 0.165], +[0.559, 0.550, 0.552], +[0.026, 0.026, 0.026], +[0.016, 0.015, 0.016], +[0.891, 0.880, 0.879], +[1.029, 1.031, 1.022], +[0.355, 0.358, 0.369], +[0.415, 0.416, 0.417], +[1.406, 1.355, 1.349], +[1.775, 1.787, 1.788], +[1.615, 1.633, 1.619], +[1.532, 1.525, 1.482], +[4.284, 4.203, 4.180], +[2.825, 2.782, 2.756], +[8.516, 8.328, 8.408], +[0.097, 0.073, 0.074], +[1.541, 1.557, 1.557], +[1.945, 1.920, 1.911], +[4.410, 4.375, 4.353], +[null, null, null], +[0.613, 0.616, 0.621], +[0.535, 0.557, 0.541], +[0.622, 0.622, 0.623], +[1.536, 1.548, 1.535], +[2.436, 2.430, 2.435], +[2.551, 2.542, 2.589], +[1.470, 1.426, 1.430], +[2.377, 2.248, 2.227], +[15.628, null, null], +[6.155, 6.397, 6.368], +[6.439, 6.392, 6.412], +[2.643, 2.337, 2.316], +[0.163, 0.174, 0.155], +[0.060, 0.064, 0.063], +[0.057, 0.057, 0.053], +[0.331, 0.314, 0.314], +[0.016, 0.023, 0.020], +[0.014, 0.014, 0.013], +[0.005, 0.006, 0.007] + ] + } +] diff --git a/website/blog/en/2020/the-clickhouse-community.md b/website/blog/en/2020/the-clickhouse-community.md new file mode 100644 index 00000000000..7080fed6479 --- /dev/null +++ b/website/blog/en/2020/the-clickhouse-community.md @@ -0,0 +1,147 @@ +--- +title: 'The ClickHouse Community' +image: 'https://blog-images.clickhouse.tech/en/2020/the-clickhouse-community/clickhouse-community-history.png' +date: '2020-12-10' +author: '[Robert Hodges](https://github.com/hodgesrm)' +tags: ['community', 'open source', 'telegram', 'meetup'] +--- + +One of the great “features” of ClickHouse is a friendly and welcoming community. In this article we would like to outline how the ClickHouse community arose, what it is today, and how you can get involved. There is a role for everyone, from end users to contributors to corporate friends. Our goal is to make the community welcoming to every person who wants to join. + +But first, let’s review a bit of history, starting with how ClickHouse first developed at [Yandex](https://yandex.com/company/). + +## Origins at Yandex + +ClickHouse began as a solution for web analytics in [Yandex Metrica](https://metrica.yandex.com/about?). Metrica is a popular service for analyzing website traffic that is now #2 in the market behind Google Analytics. In 2008 [Alexey Milovidov](https://github.com/alexey-milovidov), an engineer on the Metrica team, was looking for a database that could create reports on metrics like number of page views per day, unique visitors, and bounce rate, without aggregating the data in advance. The idea was to provide a wide range of metric data and let users ask any question about them. + +This is a classic problem for data warehouses. However, Alexey could not find one that met Yandex requirements, specifically large datasets, linear scaling, high efficiency, and compatibility with SQL tools. In a nutshell: like MySQL but for analytic applications. So Alexey wrote one. It started as a prototype to do GROUP BY operations. + +The prototype evolved into a full solution with a name, ClickHouse, short for “Clickstream Data Warehouse”. Alexey added additional features including SQL support and the MergeTree engine. The SQL dialect was superficially similar to MySQL, [which was also used in Metrica](https://clickhouse.tech/blog/en/2016/evolution-of-data-structures-in-yandex-metrica/) but could not handle query workloads without complex pre-aggregation. By 2011 ClickHouse was in production for Metrica. + +Over the next 5 years Alexey and a growing team of developers extended ClickHouse to cover new use cases. 
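As an aside for readers new to ClickHouse, the ad-hoc reporting problem described above boils down to a GROUP BY over raw, non-pre-aggregated events. A minimal sketch, assuming a hypothetical `hits` table with `event_time` and `user_id` columns, and using the community Python driver listed later in this post:

```python
from clickhouse_driver import Client  # from mymarilyn/clickhouse-driver

client = Client('localhost')

# Page views and unique visitors per day, computed directly on raw events.
# Table and column names are hypothetical, for illustration only.
rows = client.execute('''
    SELECT
        toDate(event_time) AS day,
        count() AS page_views,
        uniq(user_id) AS unique_visitors
    FROM hits
    GROUP BY day
    ORDER BY day
''')
for day, views, uniques in rows:
    print(day, views, uniques)
```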
By 2016 ClickHouse was a core Metrica backend service. It was also becoming entrenched as a data warehouse within Yandex, extending to use cases like service monitoring, network flow logs, and event management. ClickHouse had evolved from the original one-person project to business-critical software with a full team of a dozen engineers led by Alexey. + +By 2016, ClickHouse had an 8-year history and was ready to become a major open source project. Here's a timeline that tracks major developments as a time series. + +![The history of the ClickHouse community](https://blog-images.clickhouse.tech/en/2020/the-clickhouse-community/clickhouse-community-history.png) + +## ClickHouse goes open source + +Yandex open sourced ClickHouse under an Apache 2.0 license in 2016. There were numerous reasons for this step. + +* Promote adoption within Yandex by making it easier for internal departments to get builds. +* Ensure that ClickHouse would continue to evolve by creating a community to nurture it. +* Motivate developers to contribute to and use ClickHouse due to the open source “cool” factor. +* Improve ClickHouse quality by making the code public. Nobody wants their name visible on bad code. ;-) +* Showcase Yandex innovation to a worldwide audience. + +Alexey and the development team moved ClickHouse code to a Github repo under the Yandex organization and began issuing community builds as well as accepting external contributions. They simultaneously began regular meetups to popularize ClickHouse and build a community around it. The result was a burst of adoption across multiple regions of the globe. + +ClickHouse quickly picked up steam in Eastern Europe. The first ClickHouse meetups started in 2016 and have grown to include 200 participants for in-person meetings and up to 400 for online meetings. ClickHouse is now widely used in start-ups in Russia as well as other Eastern European countries. Developers located in Eastern Europe continue to supply more contributions to ClickHouse than any other region. + +ClickHouse also started to gain recognition in the US and Western Europe. [CloudFlare](https://www.cloudflare.com/) published a widely read blog article about [their success using ClickHouse for DNS analytics](https://blog.cloudflare.com/how-cloudflare-analyzes-1m-dns-queries-per-second/). Alexander Zaitsev successfully migrated an ad tech analytics system from a commercial DBMS to a ClickHouse cluster. This success prompted him to found [Altinity](https://altinity.com) in 2017 with help from friends at [Percona](https://www.percona.com). US meetups started in the same year. With support from Altinity these have grown to over 100 attendees for online meetings. + +ClickHouse also took off in China. The first meetup in China took place in 2018 and attracted enormous interest. In-person meetups included over 400 participants. Online meetings have reached up to 1000 online viewers. + +In 2019 a further step occurred as ClickHouse moved out from under the Yandex Github organization into a separate [ClickHouse organization](https://github.com/ClickHouse). The new organization includes ClickHouse server code plus core ecosystem projects like the C++ and ODBC drivers. + +ClickHouse community events shifted online following worldwide disruptions due to COVID-19, but growth in usage continued. One interesting development has been the increasing number of startups using ClickHouse as a backend. Many of these are listed on the [ClickHouse Adopters](https://clickhouse.tech/docs/en/introduction/adopters/) page. Also, additional prominent companies like eBay, Uber, and Flipkart went public in 2020 with stories of successful ClickHouse usage.
+ +## The ClickHouse community today + +As of 2020 the ClickHouse community includes developers and users from virtually every region of the globe. Yandex engineers continue to supply a majority of pull requests to ClickHouse itself. Altinity follows in second place with contributions to ClickHouse core and ecosystem projects. There is also substantial in-house development on ClickHouse (e.g. on private forks) within Chinese internet providers. + +The real success, however, has been the huge number of commits to ClickHouse core from people in outside organizations. The following list shows the main outside contributors: + +* Azat Khuzhin +* Amos Bird +* Winter Zhang +* Denny Crane +* Danila Kutenin +* Hczhcz +* Marek Vavruša +* Guillaume Tassery +* Sundy Li +* Mikhail Shiryaev +* Nicolae Vartolomei +* Igor Hatarist +* Andrew Onyshchuk +* BohuTANG +* Yu Zhi Chang +* Kirill Shvakov +* Alexander Krasheninnikov +* Simon Podlipsky +* Silviu Caragea +* Flynn ucasFL +* [And over 550 more...](https://github.com/ClickHouse/ClickHouse/graphs/contributors) + +ClickHouse ecosystem projects are also growing rapidly. Here is a selected list of active Github projects that help enable ClickHouse applications, sorted by number of stars. + +* [sqlpad/sqlpad](https://github.com/sqlpad/sqlpad) — Web-based SQL editor that supports ClickHouse +* [mindsdb/mindsdb](https://github.com/mindsdb/mindsdb) — Predictive AI layer for databases with ClickHouse support +* [x-ream/sqli](https://github.com/x-ream/sqli) — ORM SQL interface +* [tricksterproxy/trickster](https://github.com/tricksterproxy/trickster) — HTTP reverse proxy cache and time series dashboard accelerator +* [ClickHouse/clickhouse-go](https://github.com/ClickHouse/clickhouse-go) — Golang driver for ClickHouse +* [gohouse/gorose](https://github.com/gohouse/gorose) — A mini database ORM for Golang +* [ClickHouse/clickhouse-jdbc](https://github.com/ClickHouse/clickhouse-jdbc) — JDBC driver for ClickHouse +* [brokercap/Bifrost](https://github.com/brokercap/Bifrost) — Middleware to sync MySQL binlog to ClickHouse +* [mymarilyn/clickhouse-driver](https://github.com/mymarilyn/clickhouse-driver) — ClickHouse Python driver with native interface support +* [Vertamedia/clickhouse-grafana](https://github.com/Vertamedia/clickhouse-grafana) — Grafana datasource for ClickHouse +* [smi2/phpClickHouse](https://github.com/smi2/phpClickHouse) — PHP ClickHouse client +* [Altinity/clickhouse-operator](https://github.com/Altinity/clickhouse-operator) — Kubernetes operator for ClickHouse +* [AlexAkulov/clickhouse-backup](https://github.com/AlexAkulov/clickhouse-backup) — ClickHouse backup and restore using cloud storage +* [And almost 1200 more...](https://github.com/search?o=desc&p=1&q=clickhouse&s=stars&type=Repositories) + +## Resources + +With the community's growth, numerous resources are available to users. At the center is the [ClickHouse org on Github](https://github.com/ClickHouse), which hosts [ClickHouse server code](https://github.com/ClickHouse/ClickHouse). ClickHouse server documentation is available at the [clickhouse.tech](https://clickhouse.tech/) website. It has [installation instructions](https://clickhouse.tech/docs/en/getting-started/install/) and links to ClickHouse community builds for major Linux distributions as well as Mac, FreeBSD, and Docker. + +In addition, ClickHouse users have a wide range of ways to engage with the community and get help on applications. These include chat applications as well as meetups. Here are some links to get started.
+ +* Yandex Meetups — Yandex has regular in-person and online international and Russian-language meetups. Video recordings and online broadcasts are available at the official [YouTube channel](https://www.youtube.com/c/ClickHouseDB/videos). Watch for announcements on the [clickhouse.tech](https://clickhouse.tech/) site and [Telegram](https://t.me/clickhouse_ru). +* [SF Bay Area ClickHouse Meetup](https://www.meetup.com/San-Francisco-Bay-Area-ClickHouse-Meetup/) — The largest US ClickHouse meetup, with meetings approximately every 2 months. +* Chinese meetups occur at regular intervals with different sponsors. Watch for announcements on clickhouse.tech. +* Telegram — By far the largest forum for ClickHouse. It is the best place to talk to ClickHouse devs. There are two groups. +* [ClickHouse не тормозит](https://t.me/clickhouse_ru) (“ClickHouse does not slow down”) — Russian language Telegram group with 4,629 members currently. +* [ClickHouse](https://t.me/clickhouse_en) — English language group with 1,286 members. +* [ClickHouse Community Slack Channel](http://clickhousedb.slack.com) — Public channel for Slack users. It currently has 551 members. +* [ClickHouse.com.cn](http://clickhouse.com.cn/) — Chinese language site for ClickHouse-related announcements and questions. +* [Conference Presentations](https://github.com/ClickHouse/clickhouse-presentations) — ClickHouse developers like to talk and do so whenever they can. Many recent presentations are stored in Github. Also, look for ClickHouse presentations at Linux Foundation conferences, Data Con LA, Percona Live, and many other venues where there are presentations about data. +* Technical webinars — Altinity has a large library of technical presentations on ClickHouse and related applications on the [Altinity Youtube channel](https://www.youtube.com/channel/UCE3Y2lDKl_ZfjaCrh62onYA/featured). + +If you know of additional resources, please bring them to our attention. + +## How you can contribute to ClickHouse + +We welcome users to join the ClickHouse community in every capacity. There are four main ways to participate. + +### Use ClickHouse and share your experiences + +Start with the documentation. Download ClickHouse and try it out. Join the chat channels. If you encounter bugs, [log issues](https://github.com/ClickHouse/ClickHouse/issues) so we can get them fixed. Also, it's easy to make contributions to the documentation if you have basic Github and markdown skills. Press the pencil icon on any page of the clickhouse.tech website to edit pages and automatically generate pull requests to merge your changes. + +If your company has deployed ClickHouse and is comfortable talking about it, please don't be shy. Add it to the [ClickHouse Adopters](https://clickhouse.tech/docs/en/introduction/adopters/) page so that others can learn from your experience. + +### Become a ClickHouse developer + +Write code to make ClickHouse better. Here are your choices. + +* ClickHouse server — Start with the [“For Beginners” documentation](https://clickhouse.tech/docs/en/development/developer-instruction/) to learn how to build ClickHouse and submit PRs. Check out the current ClickHouse issues if you are looking for work. PRs that follow the development standards will be merged faster. + +* Ecosystem projects — Most projects in the ClickHouse ecosystem accept PRs. Check with each project for specific practices. + +ClickHouse is also a great target for research problems.
Over the years, many dozens of university CS students have worked on ClickHouse features. Alexey Milovidov maintains an especially rich set of [project suggestions for students](https://github.com/ClickHouse/ClickHouse/issues/15065). Join Telegram and ask for help if you are interested. Both Yandex and Altinity also offer internships. + +### Write ClickHouse applications + +ClickHouse enables a host of new applications that depend on low-latency access to large datasets. If you write something interesting, blog about it and present at local meetups. Altinity has a program to highlight startups that are developing ClickHouse applications and to help with marketing as well as resources for development. Send email to [info@altinity.com](mailto:info@altinity.com) for more information. + +### Become a corporate sponsor + +The ClickHouse community has been assisted by many corporate users who have helped organize meetups, funded development, and guided the growth of ClickHouse. Contact community members directly at [clickhouse-feedback@yandex-team.ru](mailto:clickhouse-feedback@yandex-team.ru), [info@altinity.com](mailto:info@altinity.com), or via Telegram to find out more about how to chip in as a corporate sponsor. + +## Where we go from here + +ClickHouse has grown enormously from its origins as a basic prototype in 2008 to the popular SQL data warehouse users see today. Our community is the rock that will enable ClickHouse to become the default data warehouse worldwide. We are working together to create an inclusive environment where everyone feels welcome and has an opportunity to contribute. We welcome you to join! + +This article was written with kind assistance from Alexey Milovidov, Ivan Blinkov, and Alexander Zaitsev. + +_2020-12-11 [Robert Hodges](https://github.com/hodgesrm)_