mirror of https://github.com/ClickHouse/ClickHouse.git (synced 2024-11-29 11:02:08 +00:00)

Commit 99b220c3ed: Merge branch 'master' into evgsudarikova-DOCSUP-2039

CHANGELOG.md (152 changed lines):

@@ -1,3 +1,145 @@

## ClickHouse release 20.8

### ClickHouse release v20.8.2.3-stable, 2020-09-08

#### Backward Incompatible Change

* Now `OPTIMIZE FINAL` query doesn't recalculate TTL for parts that were added before TTL was created. Use `ALTER TABLE ... MATERIALIZE TTL` once to calculate them; after that `OPTIMIZE FINAL` will evaluate TTLs properly. This behavior never worked for replicated tables. [#14220](https://github.com/ClickHouse/ClickHouse/pull/14220) ([alesapin](https://github.com/alesapin)).
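
  A minimal illustration (the table name `events` is a placeholder):

  ```sql
  -- One-time recalculation for parts written before the TTL clause was added:
  ALTER TABLE events MATERIALIZE TTL;
  -- From now on, OPTIMIZE FINAL evaluates TTLs properly:
  OPTIMIZE TABLE events FINAL;
  ```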
* Extend `parallel_distributed_insert_select` setting, adding an option to run `INSERT` into local table. The setting changes type from `Bool` to `UInt64`, so the values `false` and `true` are no longer supported. If you have these values in server configuration, the server will not start. Please replace them with `0` and `1`, respectively. [#14060](https://github.com/ClickHouse/ClickHouse/pull/14060) ([Azat Khuzhin](https://github.com/azat)).
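
  The migration itself is mechanical (0 and 1 keep the old disabled/enabled meanings; the extra values that run `INSERT` into the local table are described in the setting's documentation):

  ```sql
  -- 'false'/'true' are no longer accepted for this setting; use numbers:
  SET parallel_distributed_insert_select = 0;  -- was 'false'
  SET parallel_distributed_insert_select = 1;  -- was 'true'
  ```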
* Remove support for the `ODBCDriver` input/output format. This was a deprecated format once used for communication with the ClickHouse ODBC driver, now long superseded by the `ODBCDriver2` format. Resolves [#13629](https://github.com/ClickHouse/ClickHouse/issues/13629). [#13847](https://github.com/ClickHouse/ClickHouse/pull/13847) ([hexiaoting](https://github.com/hexiaoting)).

#### New Feature

* ClickHouse can work as MySQL replica - it is implemented by `MaterializeMySQL` database engine. Implements [#4006](https://github.com/ClickHouse/ClickHouse/issues/4006). [#10851](https://github.com/ClickHouse/ClickHouse/pull/10851) ([Winter Zhang](https://github.com/zhang2014)).
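
  A minimal sketch (host, credentials and database names are placeholders; the `allow_experimental_database_materialize_mysql` guard is an assumption based on the feature being experimental):

  ```sql
  SET allow_experimental_database_materialize_mysql = 1;
  CREATE DATABASE mysql_mirror
  ENGINE = MaterializeMySQL('mysql-host:3306', 'source_db', 'user', 'password');
  -- Tables from source_db appear in mysql_mirror and follow the MySQL binlog.
  ```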
* Add the ability to specify `Default` compression codec for columns that correspond to settings specified in `config.xml`. Implements: [#9074](https://github.com/ClickHouse/ClickHouse/issues/9074). [#14049](https://github.com/ClickHouse/ClickHouse/pull/14049) ([alesapin](https://github.com/alesapin)).
* Support Kerberos authentication in Kafka, using `krb5` and `cyrus-sasl` libraries. [#12771](https://github.com/ClickHouse/ClickHouse/pull/12771) ([Ilya Golshtein](https://github.com/ilejn)).
* Add function `normalizeQuery` that replaces literals, sequences of literals and complex aliases with placeholders. Add function `normalizedQueryHash` that returns identical 64bit hash values for similar queries. It helps to analyze the query log. This closes [#11271](https://github.com/ClickHouse/ClickHouse/issues/11271). [#13816](https://github.com/ClickHouse/ClickHouse/pull/13816) ([alexey-milovidov](https://github.com/alexey-milovidov)).
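
  For example (the exact placeholder rendering is an assumption):

  ```sql
  SELECT normalizeQuery('SELECT 1, 2 FROM t WHERE id = 42');
  -- e.g. 'SELECT ?.. FROM t WHERE id = ?'
  SELECT normalizedQueryHash('SELECT 1') = normalizedQueryHash('SELECT 25');  -- 1: same query shape
  ```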
* Add `time_zones` table. [#13880](https://github.com/ClickHouse/ClickHouse/pull/13880) ([Bharat Nallan](https://github.com/bharatnc)).
* Add function `defaultValueOfTypeName` that returns the default value for a given type. [#13877](https://github.com/ClickHouse/ClickHouse/pull/13877) ([hcz](https://github.com/hczhcz)).
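
  For example:

  ```sql
  SELECT defaultValueOfTypeName('UInt64');          -- 0
  SELECT defaultValueOfTypeName('String');          -- ''
  SELECT defaultValueOfTypeName('Nullable(Int8)');  -- NULL
  ```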
* Add `countDigits(x)` function that counts the number of decimal digits in an integer or decimal column. Add `isDecimalOverflow(d, [p])` function that checks whether the value in a Decimal column is out of its (or the specified) precision. [#14151](https://github.com/ClickHouse/ClickHouse/pull/14151) ([Artem Zuikov](https://github.com/4ertus2)).
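
  A quick sketch (digit counts apply to the stored scaled value; treat exact outputs as an assumption):

  ```sql
  SELECT countDigits(toDecimal32(1.25, 2));            -- 3: the stored value 125 has three digits
  SELECT isDecimalOverflow(toDecimal32(12.34, 2), 3);  -- 1: 1234 does not fit into 3 digits
  ```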
* Add `quantileExactLow` and `quantileExactHigh` implementations with respective aliases for `medianExactLow` and `medianExactHigh`. [#13818](https://github.com/ClickHouse/ClickHouse/pull/13818) ([Bharat Nallan](https://github.com/bharatnc)).
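
  For an even-sized set, the two functions return the lower and upper of the two middle elements:

  ```sql
  SELECT quantileExactLow(0.5)(number)  FROM numbers(10);  -- 4
  SELECT quantileExactHigh(0.5)(number) FROM numbers(10);  -- 5
  ```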
* Added `date_trunc` function that truncates a date/time value to a specified date/time part. [#13888](https://github.com/ClickHouse/ClickHouse/pull/13888) ([Vladimir Golovchenko](https://github.com/vladimir-golovchenko)).
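
  For example:

  ```sql
  SELECT date_trunc('hour', toDateTime('2020-09-08 13:45:00'));   -- 2020-09-08 13:00:00
  SELECT date_trunc('month', toDateTime('2020-09-08 13:45:00'));  -- truncated to 2020-09-01
  ```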
* Add new optional section `<user_directories>` to the main config. [#13425](https://github.com/ClickHouse/ClickHouse/pull/13425) ([Vitaly Baranov](https://github.com/vitlibar)).
* Add `ALTER SAMPLE BY` statement that allows changing the table sampling clause. [#13280](https://github.com/ClickHouse/ClickHouse/pull/13280) ([Amos Bird](https://github.com/amosbird)).
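
  A sketch (assumes a MergeTree table whose primary key contains `intHash32(UserID)`, since the sampling expression must be part of the primary key):

  ```sql
  ALTER TABLE hits MODIFY SAMPLE BY intHash32(UserID);
  ```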
* Function `position` now supports optional `start_pos` argument. [#13237](https://github.com/ClickHouse/ClickHouse/pull/13237) ([vdimir](https://github.com/vdimir)).
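
  For example:

  ```sql
  SELECT position('Hello, world!', 'o');     -- 5: first occurrence
  SELECT position('Hello, world!', 'o', 6);  -- 9: search starts at position 6
  ```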

#### Bug Fix

* Fix visible data clobbering by progress bar in client in interactive mode. This fixes [#12562](https://github.com/ClickHouse/ClickHouse/issues/12562) and [#13369](https://github.com/ClickHouse/ClickHouse/issues/13369) and [#13584](https://github.com/ClickHouse/ClickHouse/issues/13584) and fixes [#12964](https://github.com/ClickHouse/ClickHouse/issues/12964). [#13691](https://github.com/ClickHouse/ClickHouse/pull/13691) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed incorrect sorting order for `LowCardinality` columns when sorting by multiple columns. This fixes [#13958](https://github.com/ClickHouse/ClickHouse/issues/13958). [#14223](https://github.com/ClickHouse/ClickHouse/pull/14223) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Check for array size overflow in `topK` aggregate function. Without this check the user may send a query with carefully crafted parameters that will lead to server crash. This closes [#14452](https://github.com/ClickHouse/ClickHouse/issues/14452). [#14467](https://github.com/ClickHouse/ClickHouse/pull/14467) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix a bug which could lead to wrong merge assignment if a table has partitions with a single part. [#14444](https://github.com/ClickHouse/ClickHouse/pull/14444) ([alesapin](https://github.com/alesapin)).
* Stop query execution if an exception happened in `PipelineExecutor` itself. This could prevent rare possible query hangs. Continuation of [#14334](https://github.com/ClickHouse/ClickHouse/issues/14334). [#14402](https://github.com/ClickHouse/ClickHouse/pull/14402) [#14334](https://github.com/ClickHouse/ClickHouse/pull/14334) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix crash during `ALTER` query for table which was created `AS table_function`. Fixes [#14212](https://github.com/ClickHouse/ClickHouse/issues/14212). [#14326](https://github.com/ClickHouse/ClickHouse/pull/14326) ([alesapin](https://github.com/alesapin)).
* Fix exception during ALTER LIVE VIEW query with REFRESH command. Live view is an experimental feature. [#14320](https://github.com/ClickHouse/ClickHouse/pull/14320) ([Bharat Nallan](https://github.com/bharatnc)).
* Fix QueryPlan lifetime (for EXPLAIN PIPELINE graph=1) for queries with nested interpreter. [#14315](https://github.com/ClickHouse/ClickHouse/pull/14315) ([Azat Khuzhin](https://github.com/azat)).
* Fix segfault in `clickhouse-odbc-bridge` during schema fetch from some external sources. This PR fixes https://github.com/ClickHouse/ClickHouse/issues/13861. [#14267](https://github.com/ClickHouse/ClickHouse/pull/14267) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix crash in mark inclusion search introduced in https://github.com/ClickHouse/ClickHouse/pull/12277. [#14225](https://github.com/ClickHouse/ClickHouse/pull/14225) ([Amos Bird](https://github.com/amosbird)).
* Fix creation of tables with named tuples. This fixes [#13027](https://github.com/ClickHouse/ClickHouse/issues/13027). [#14143](https://github.com/ClickHouse/ClickHouse/pull/14143) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix formatting of minimal negative decimal numbers. This fixes https://github.com/ClickHouse/ClickHouse/issues/14111. [#14119](https://github.com/ClickHouse/ClickHouse/pull/14119) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Fix `DistributedFilesToInsert` metric (zeroed when it should not). [#14095](https://github.com/ClickHouse/ClickHouse/pull/14095) ([Azat Khuzhin](https://github.com/azat)).
* Fix `pointInPolygon` with const 2d array as polygon. [#14079](https://github.com/ClickHouse/ClickHouse/pull/14079) ([Alexey Ilyukhov](https://github.com/livace)).
* Fixed wrong mount point in extra info for `Poco::Exception: no space left on device`. [#14050](https://github.com/ClickHouse/ClickHouse/pull/14050) ([tavplubix](https://github.com/tavplubix)).
* Fix GRANT ALL statement when executed on a non-global level. [#13987](https://github.com/ClickHouse/ClickHouse/pull/13987) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix the parser to reject `CREATE TABLE ... AS table_function()` combined with an explicit engine clause. [#13940](https://github.com/ClickHouse/ClickHouse/pull/13940) ([hcz](https://github.com/hczhcz)).
* Fix wrong results in select queries with `DISTINCT` keyword and subqueries with UNION ALL in case `optimize_duplicate_order_by_and_distinct` setting is enabled. [#13925](https://github.com/ClickHouse/ClickHouse/pull/13925) ([Artem Zuikov](https://github.com/4ertus2)).
* Fixed potential deadlock when renaming `Distributed` table. [#13922](https://github.com/ClickHouse/ClickHouse/pull/13922) ([tavplubix](https://github.com/tavplubix)).
* Fix incorrect sorting for `FixedString` columns when sorting by multiple columns. Fixes [#13182](https://github.com/ClickHouse/ClickHouse/issues/13182). [#13887](https://github.com/ClickHouse/ClickHouse/pull/13887) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix potentially imprecise result of `topK`/`topKWeighted` merge (with non-default parameters). [#13817](https://github.com/ClickHouse/ClickHouse/pull/13817) ([Azat Khuzhin](https://github.com/azat)).
* Fix failure when reading from a MergeTree table with an INDEX of type SET while comparing against NULL. This fixes [#13686](https://github.com/ClickHouse/ClickHouse/issues/13686). [#13793](https://github.com/ClickHouse/ClickHouse/pull/13793) ([Amos Bird](https://github.com/amosbird)).
* Fix `arrayJoin` capturing in lambda (LOGICAL_ERROR). [#13792](https://github.com/ClickHouse/ClickHouse/pull/13792) ([Azat Khuzhin](https://github.com/azat)).
* Add step overflow check in function `range`. [#13790](https://github.com/ClickHouse/ClickHouse/pull/13790) ([Azat Khuzhin](https://github.com/azat)).
* Fixed `Directory not empty` error when concurrently executing `DROP DATABASE` and `CREATE TABLE`. [#13756](https://github.com/ClickHouse/ClickHouse/pull/13756) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add range check for `h3KRing` function. This fixes [#13633](https://github.com/ClickHouse/ClickHouse/issues/13633). [#13752](https://github.com/ClickHouse/ClickHouse/pull/13752) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix race condition between DETACH and background merges. Parts may revive after detach. This is continuation of [#8602](https://github.com/ClickHouse/ClickHouse/issues/8602) that did not fix the issue but introduced a test that started to fail in very rare cases, demonstrating the issue. [#13746](https://github.com/ClickHouse/ClickHouse/pull/13746) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix logging Settings.Names/Values when log_queries_min_type > QUERY_START. [#13737](https://github.com/ClickHouse/ClickHouse/pull/13737) ([Azat Khuzhin](https://github.com/azat)).
* Fixes `/replicas_status` endpoint response status code when verbose=1. [#13722](https://github.com/ClickHouse/ClickHouse/pull/13722) ([javi santana](https://github.com/javisantana)).
* Fix incorrect message in `clickhouse-server.init` while checking user and group. [#13711](https://github.com/ClickHouse/ClickHouse/pull/13711) ([ylchou](https://github.com/ylchou)).
* Do not optimize any(arrayJoin()) -> arrayJoin() under `optimize_move_functions_out_of_any` setting. [#13681](https://github.com/ClickHouse/ClickHouse/pull/13681) ([Azat Khuzhin](https://github.com/azat)).
* Fix crash in JOIN with StorageMerge and `set enable_optimize_predicate_expression=1`. [#13679](https://github.com/ClickHouse/ClickHouse/pull/13679) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix typo in error message about `The value of 'number_of_free_entries_in_pool_to_lower_max_size_of_merge' setting`. [#13678](https://github.com/ClickHouse/ClickHouse/pull/13678) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Concurrent `ALTER ... REPLACE/MOVE PARTITION ...` queries might cause deadlock. It's fixed. [#13626](https://github.com/ClickHouse/ClickHouse/pull/13626) ([tavplubix](https://github.com/tavplubix)).
* Fixed the behaviour when a cache dictionary sometimes returned the default value instead of the value present in the source. [#13624](https://github.com/ClickHouse/ClickHouse/pull/13624) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix secondary indices corruption in compact parts. Compact parts are experimental feature. [#13538](https://github.com/ClickHouse/ClickHouse/pull/13538) ([Anton Popov](https://github.com/CurtizJ)).
* Fix premature `ON CLUSTER` timeouts for queries that must be executed on a single replica. Fixes [#6704](https://github.com/ClickHouse/ClickHouse/issues/6704), [#7228](https://github.com/ClickHouse/ClickHouse/issues/7228), [#13361](https://github.com/ClickHouse/ClickHouse/issues/13361), [#11884](https://github.com/ClickHouse/ClickHouse/issues/11884). [#13450](https://github.com/ClickHouse/ClickHouse/pull/13450) ([alesapin](https://github.com/alesapin)).
* Fix wrong code in function `netloc`. This fixes [#13335](https://github.com/ClickHouse/ClickHouse/issues/13335). [#13446](https://github.com/ClickHouse/ClickHouse/pull/13446) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix possible race in `StorageMemory`. [#13416](https://github.com/ClickHouse/ClickHouse/pull/13416) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix missing or excessive headers in `TSV/CSVWithNames` formats in HTTP protocol. This fixes [#12504](https://github.com/ClickHouse/ClickHouse/issues/12504). [#13343](https://github.com/ClickHouse/ClickHouse/pull/13343) ([Azat Khuzhin](https://github.com/azat)).
* Fix parsing row policies from users.xml when names of databases or tables contain dots. This fixes https://github.com/ClickHouse/ClickHouse/issues/5779, https://github.com/ClickHouse/ClickHouse/issues/12527. [#13199](https://github.com/ClickHouse/ClickHouse/pull/13199) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix access to `redis` dictionary after connection was dropped once. It may happen with `cache` and `direct` dictionary layouts. [#13082](https://github.com/ClickHouse/ClickHouse/pull/13082) ([Anton Popov](https://github.com/CurtizJ)).
* Removed wrong auth access check when using ClickHouseDictionarySource to query remote tables. [#12756](https://github.com/ClickHouse/ClickHouse/pull/12756) ([sundyli](https://github.com/sundy-li)).
* Properly distinguish subqueries in some cases for common subexpression elimination. https://github.com/ClickHouse/ClickHouse/issues/8333. [#8367](https://github.com/ClickHouse/ClickHouse/pull/8367) ([Amos Bird](https://github.com/amosbird)).

#### Improvement

* Disallows `CODEC` on `ALIAS` column type. Fixes [#13911](https://github.com/ClickHouse/ClickHouse/issues/13911). [#14263](https://github.com/ClickHouse/ClickHouse/pull/14263) ([Bharat Nallan](https://github.com/bharatnc)).
* When waiting for a dictionary update to complete, use the timeout specified by `query_wait_timeout_milliseconds` setting instead of a hard-coded value. [#14105](https://github.com/ClickHouse/ClickHouse/pull/14105) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Add setting `min_index_granularity_bytes` that protects against accidentally creating a table with very low `index_granularity_bytes` setting. [#14139](https://github.com/ClickHouse/ClickHouse/pull/14139) ([Bharat Nallan](https://github.com/bharatnc)).
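
  An illustration of the intent (hypothetical table; the assumption is that a `CREATE` with `index_granularity_bytes` below the configured minimum is rejected):

  ```sql
  CREATE TABLE t (x UInt64) ENGINE = MergeTree ORDER BY x
  SETTINGS index_granularity_bytes = 16, min_index_granularity_bytes = 1024;
  -- expected to fail: 16 bytes is below the 1024-byte minimum
  ```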
* Now it's possible to fetch partitions from clusters that use different ZooKeeper: `ALTER TABLE table_name FETCH PARTITION partition_expr FROM 'zk-name:/path-in-zookeeper'`. It's useful for shipping data to new clusters. [#14155](https://github.com/ClickHouse/ClickHouse/pull/14155) ([Amos Bird](https://github.com/amosbird)).
* Slightly better performance of Memory table if it was constructed from a huge number of very small blocks (that's unlikely). Author of the idea: [Mark Papadakis](https://github.com/markpapadakis). Closes [#14043](https://github.com/ClickHouse/ClickHouse/issues/14043). [#14056](https://github.com/ClickHouse/ClickHouse/pull/14056) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Conditional aggregate functions (for example: `avgIf`, `sumIf`, `maxIf`) now return `NULL` when no rows matched and nullable arguments are used. [#13964](https://github.com/ClickHouse/ClickHouse/pull/13964) ([Winter Zhang](https://github.com/zhang2014)).
* Increase the limit in the `-Resample` combinator to 1M. [#13947](https://github.com/ClickHouse/ClickHouse/pull/13947) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Corrected an error in the AvroConfluent format that caused the Kafka table engine to stop processing messages when an abnormally small, malformed message was received. [#13941](https://github.com/ClickHouse/ClickHouse/pull/13941) ([Gervasio Varela](https://github.com/gervarela)).
* Fix wrong error for long queries. It was possible to get a syntax error other than `Max query size exceeded` for a correct query. [#13928](https://github.com/ClickHouse/ClickHouse/pull/13928) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Better error message for null value of `TabSeparated` format. [#13906](https://github.com/ClickHouse/ClickHouse/pull/13906) ([jiang tao](https://github.com/tomjiang1987)).
* Function `arrayCompact` will compare NaNs bitwise if the type of array elements is Float32/Float64. In previous versions NaNs were always not equal if the type of array elements is Float32/Float64 and were always equal if the type is more complex, like Nullable(Float64). This closes [#13857](https://github.com/ClickHouse/ClickHouse/issues/13857). [#13868](https://github.com/ClickHouse/ClickHouse/pull/13868) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix data race in `lgamma` function. This race was caught only in `tsan`, no side effects really happened. [#13842](https://github.com/ClickHouse/ClickHouse/pull/13842) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Avoid too slow queries when arrays are manipulated as fields. Throw exception instead. [#13753](https://github.com/ClickHouse/ClickHouse/pull/13753) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added Redis requirepass authorization (for redis dictionary source). [#13688](https://github.com/ClickHouse/ClickHouse/pull/13688) ([Ivan Torgashov](https://github.com/it1804)).
* Add MergeTree Write-Ahead-Log (WAL) dump tool. WAL is an experimental feature. [#13640](https://github.com/ClickHouse/ClickHouse/pull/13640) ([BohuTANG](https://github.com/BohuTANG)).
* In previous versions `lcm` function may produce assertion violation in debug build if called with specifically crafted arguments. This fixes [#13368](https://github.com/ClickHouse/ClickHouse/issues/13368). [#13510](https://github.com/ClickHouse/ClickHouse/pull/13510) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Provide monotonicity for `toDate/toDateTime` functions in more cases. Monotonicity information is used for index analysis (more complex queries will be able to use the index). Now the input arguments are saturated more naturally and provide better monotonicity. [#13497](https://github.com/ClickHouse/ClickHouse/pull/13497) ([Amos Bird](https://github.com/amosbird)).
* Support compound identifiers for custom settings. Custom settings are an integration point of the ClickHouse codebase with other codebases (no benefits for ClickHouse itself). [#13496](https://github.com/ClickHouse/ClickHouse/pull/13496) ([Vitaly Baranov](https://github.com/vitlibar)).
* Move parts from DiskLocal to DiskS3 in parallel. `DiskS3` is an experimental feature. [#13459](https://github.com/ClickHouse/ClickHouse/pull/13459) ([Pavel Kovalenko](https://github.com/Jokser)).
* Enable mixed granularity parts by default. [#13449](https://github.com/ClickHouse/ClickHouse/pull/13449) ([alesapin](https://github.com/alesapin)).
* Proper remote host checking in S3 redirects (security-related thing). [#13404](https://github.com/ClickHouse/ClickHouse/pull/13404) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Add `QueryTimeMicroseconds`, `SelectQueryTimeMicroseconds` and `InsertQueryTimeMicroseconds` to system.events. [#13336](https://github.com/ClickHouse/ClickHouse/pull/13336) ([ianton-ru](https://github.com/ianton-ru)).
* Fix debug assertion when Decimal has too large negative exponent. Fixes [#13188](https://github.com/ClickHouse/ClickHouse/issues/13188). [#13228](https://github.com/ClickHouse/ClickHouse/pull/13228) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added a cache layer for DiskS3 (caches mark and index files on the local disk). `DiskS3` is an experimental feature. [#13076](https://github.com/ClickHouse/ClickHouse/pull/13076) ([Pavel Kovalenko](https://github.com/Jokser)).
* Fix readline so it dumps history to file now. [#13600](https://github.com/ClickHouse/ClickHouse/pull/13600) ([Amos Bird](https://github.com/amosbird)).
* Create `system` database with `Atomic` engine by default (a preparation to enable `Atomic` database engine by default everywhere). [#13680](https://github.com/ClickHouse/ClickHouse/pull/13680) ([tavplubix](https://github.com/tavplubix)).

#### Performance Improvement

* Slightly optimize very short queries with `LowCardinality`. [#14129](https://github.com/ClickHouse/ClickHouse/pull/14129) ([Anton Popov](https://github.com/CurtizJ)).
* Enable parallel INSERTs for table engines `Null`, `Memory`, `Distributed` and `Buffer` when the setting `max_insert_threads` is set. [#14120](https://github.com/ClickHouse/ClickHouse/pull/14120) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fail fast if the `max_rows_to_read` limit is exceeded on parts scan. The motivation behind this change is to skip ranges scan for all selected parts if it is clear that `max_rows_to_read` is already exceeded. The change is quite noticeable for queries over a big number of parts. [#13677](https://github.com/ClickHouse/ClickHouse/pull/13677) ([Roman Khavronenko](https://github.com/hagen1778)).
* Slightly improve performance of aggregation by UInt8/UInt16 keys. [#13099](https://github.com/ClickHouse/ClickHouse/pull/13099) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Optimize `has()`, `indexOf()` and `countEqual()` functions for `Array(LowCardinality(T))` and constant right arguments. [#12550](https://github.com/ClickHouse/ClickHouse/pull/12550) ([myrrc](https://github.com/myrrc)).
* When performing trivial `INSERT SELECT` queries, automatically set `max_threads` to 1 or `max_insert_threads`, and set `max_block_size` to `min_insert_block_size_rows`. Related to [#5907](https://github.com/ClickHouse/ClickHouse/issues/5907). [#12195](https://github.com/ClickHouse/ClickHouse/pull/12195) ([flynn](https://github.com/ucasFL)).

#### Experimental Feature

* Add types `Int128`, `Int256`, `UInt256` and related functions for them. Extend Decimals with Decimal256 (precision up to 76 digits). New types are under the setting `allow_experimental_bigint_types`. They work extremely slowly and badly, and the implementation is incomplete. Please don't use this feature. [#13097](https://github.com/ClickHouse/ClickHouse/pull/13097) ([Artem Zuikov](https://github.com/4ertus2)).
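
  A minimal sketch (the `toInt256` conversion function is an assumption based on the "related functions" mentioned above; behavior in this early version may vary):

  ```sql
  SET allow_experimental_bigint_types = 1;
  CREATE TABLE big (x Int256) ENGINE = Memory;
  INSERT INTO big SELECT toInt256(1000000000000000000) * 1000000000000000000;
  SELECT x, toTypeName(x) FROM big;  -- 10^36, 'Int256'
  ```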

#### Build/Testing/Packaging Improvement

* Added `clickhouse install` script, which is useful if you only have a single binary. [#13528](https://github.com/ClickHouse/ClickHouse/pull/13528) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Allow to run `clickhouse` binary without configuration. [#13515](https://github.com/ClickHouse/ClickHouse/pull/13515) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Enable check for typos in code with `codespell`. [#13513](https://github.com/ClickHouse/ClickHouse/pull/13513) [#13511](https://github.com/ClickHouse/ClickHouse/pull/13511) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Enable Shellcheck in CI as a linter of .sh tests. This closes [#13168](https://github.com/ClickHouse/ClickHouse/issues/13168). [#13530](https://github.com/ClickHouse/ClickHouse/pull/13530) [#13529](https://github.com/ClickHouse/ClickHouse/pull/13529) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add a CMake option to fail configuration instead of auto-reconfiguration, enabled by default. [#13687](https://github.com/ClickHouse/ClickHouse/pull/13687) ([Konstantin](https://github.com/podshumok)).
* Expose version of embedded tzdata via TZDATA_VERSION in system.build_options. [#13648](https://github.com/ClickHouse/ClickHouse/pull/13648) ([filimonov](https://github.com/filimonov)).
* Improve generation of system.time_zones table during build. Closes [#14209](https://github.com/ClickHouse/ClickHouse/issues/14209). [#14215](https://github.com/ClickHouse/ClickHouse/pull/14215) ([filimonov](https://github.com/filimonov)).
* Build ClickHouse with the freshest tzdata from the package repository. [#13623](https://github.com/ClickHouse/ClickHouse/pull/13623) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add the ability to write js-style comments in skip_list.json. [#14159](https://github.com/ClickHouse/ClickHouse/pull/14159) ([alesapin](https://github.com/alesapin)).
* Ensure that there is no copy-pasted GPL code. [#13514](https://github.com/ClickHouse/ClickHouse/pull/13514) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Switch test docker images to use the test-base parent. [#14167](https://github.com/ClickHouse/ClickHouse/pull/14167) ([Ilya Yatsishin](https://github.com/qoega)).
* Adding retry logic when bringing up docker-compose cluster; Increasing COMPOSE_HTTP_TIMEOUT. [#14112](https://github.com/ClickHouse/ClickHouse/pull/14112) ([vzakaznikov](https://github.com/vzakaznikov)).
* Enabled `system.text_log` in stress test to find more bugs. [#13855](https://github.com/ClickHouse/ClickHouse/pull/13855) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Testflows LDAP module: adding missing certificates and dhparam.pem for openldap4. [#13780](https://github.com/ClickHouse/ClickHouse/pull/13780) ([vzakaznikov](https://github.com/vzakaznikov)).
* ZooKeeper cannot work reliably in unit tests in CI infrastructure. Using unit tests for ZooKeeper interaction with real ZooKeeper was a bad idea from the start (unit tests are not supposed to verify complex distributed systems). We are already using integration tests for this purpose, and they are better suited. [#13745](https://github.com/ClickHouse/ClickHouse/pull/13745) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added docker image for style check. Added style check that all docker and docker compose files are located in docker directory. [#13724](https://github.com/ClickHouse/ClickHouse/pull/13724) ([Ilya Yatsishin](https://github.com/qoega)).
* Fix cassandra build on Mac OS. [#13708](https://github.com/ClickHouse/ClickHouse/pull/13708) ([Ilya Yatsishin](https://github.com/qoega)).
* Fix link error in shared build. [#13700](https://github.com/ClickHouse/ClickHouse/pull/13700) ([Amos Bird](https://github.com/amosbird)).
* Updating LDAP user authentication suite to check that it works with RBAC. [#13656](https://github.com/ClickHouse/ClickHouse/pull/13656) ([vzakaznikov](https://github.com/vzakaznikov)).
* Removed `-DENABLE_CURL_CLIENT` for `contrib/aws`. [#13628](https://github.com/ClickHouse/ClickHouse/pull/13628) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Increasing health-check timeouts for ClickHouse nodes and adding support to dump docker-compose logs if unhealthy containers are found. [#13612](https://github.com/ClickHouse/ClickHouse/pull/13612) ([vzakaznikov](https://github.com/vzakaznikov)).
* Make sure https://github.com/ClickHouse/ClickHouse/issues/10977 is invalid. [#13539](https://github.com/ClickHouse/ClickHouse/pull/13539) ([Amos Bird](https://github.com/amosbird)).
* Skip PRs from robot-clickhouse. [#13489](https://github.com/ClickHouse/ClickHouse/pull/13489) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Move Dockerfiles from integration tests to `docker/test` directory. docker_compose files are available in `runner` docker container. Docker images are built in CI and not in integration tests. [#13448](https://github.com/ClickHouse/ClickHouse/pull/13448) ([Ilya Yatsishin](https://github.com/qoega)).

## ClickHouse release 20.7

### ClickHouse release v20.7.2.30-stable, 2020-08-31

@@ -22,14 +164,14 @@

* Add setting `allow_non_metadata_alters` which restricts execution of `ALTER` queries that modify data on disk. Disabled by default. Closes [#11547](https://github.com/ClickHouse/ClickHouse/issues/11547). [#12635](https://github.com/ClickHouse/ClickHouse/pull/12635) ([alesapin](https://github.com/alesapin)).
* A function `formatRow` is added to support turning arbitrary expressions into a string via the given format. It's useful for manipulating SQL outputs and is quite versatile combined with the `columns` function. [#12574](https://github.com/ClickHouse/ClickHouse/pull/12574) ([Amos Bird](https://github.com/amosbird)).
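
  For example (one formatted row per input row):

  ```sql
  SELECT formatRow('CSV', number, 'good') FROM numbers(2);
  -- '0,"good"\n' and '1,"good"\n'
  ```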
* Add `FROM_UNIXTIME` function for compatibility with MySQL, related to [#12149](https://github.com/ClickHouse/ClickHouse/issues/12149). [#12484](https://github.com/ClickHouse/ClickHouse/pull/12484) ([flynn](https://github.com/ucasFL)).
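
  For example (the rendering depends on the server time zone):

  ```sql
  SELECT FROM_UNIXTIME(423543535);  -- e.g. 1983-06-04 10:58:55 in UTC
  ```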
* Allow Nullable types as keys in MergeTree tables if `allow_nullable_key` table setting is enabled. Closes [#5319](https://github.com/ClickHouse/ClickHouse/issues/5319). [#12433](https://github.com/ClickHouse/ClickHouse/pull/12433) ([Amos Bird](https://github.com/amosbird)).
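
  A minimal sketch:

  ```sql
  CREATE TABLE t (k Nullable(Int64), v String)
  ENGINE = MergeTree ORDER BY k
  SETTINGS allow_nullable_key = 1;
  ```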
* Integration with [COS](https://intl.cloud.tencent.com/product/cos). [#12386](https://github.com/ClickHouse/ClickHouse/pull/12386) ([fastio](https://github.com/fastio)).
* Add `mapAdd` and `mapSubtract` functions for adding/subtracting key-mapped values. [#11735](https://github.com/ClickHouse/ClickHouse/pull/11735) ([Ildus Kurbangaliev](https://github.com/ildus)).
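
  For example (the `toUInt8` cast pins the key type; exact type-promotion rules are an assumption):

  ```sql
  SELECT mapAdd(([toUInt8(1), 2], [10, 10]), ([toUInt8(1), 3], [5, 5]));
  -- ([1, 2, 3], [15, 10, 5]): values are summed per matching key
  ```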

#### Bug Fix

* Fix premature `ON CLUSTER` timeouts for queries that must be executed on a single replica. Fixes [#6704](https://github.com/ClickHouse/ClickHouse/issues/6704), [#7228](https://github.com/ClickHouse/ClickHouse/issues/7228), [#13361](https://github.com/ClickHouse/ClickHouse/issues/13361), [#11884](https://github.com/ClickHouse/ClickHouse/issues/11884). [#13450](https://github.com/ClickHouse/ClickHouse/pull/13450) ([alesapin](https://github.com/alesapin)).
* Fix crash in mark inclusion search introduced in [#12277](https://github.com/ClickHouse/ClickHouse/pull/12277). [#14225](https://github.com/ClickHouse/ClickHouse/pull/14225) ([Amos Bird](https://github.com/amosbird)).
* Fix race condition in external dictionaries with cache layout which can lead to a server crash. [#12566](https://github.com/ClickHouse/ClickHouse/pull/12566) ([alesapin](https://github.com/alesapin)).
* Fix visible data clobbering by progress bar in client in interactive mode. This fixes [#12562](https://github.com/ClickHouse/ClickHouse/issues/12562) and [#13369](https://github.com/ClickHouse/ClickHouse/issues/13369) and [#13584](https://github.com/ClickHouse/ClickHouse/issues/13584) and fixes [#12964](https://github.com/ClickHouse/ClickHouse/issues/12964). [#13691](https://github.com/ClickHouse/ClickHouse/pull/13691) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed incorrect sorting order for `LowCardinality` columns when `ORDER BY` with multiple columns is used. This fixes [#13958](https://github.com/ClickHouse/ClickHouse/issues/13958). [#14223](https://github.com/ClickHouse/ClickHouse/pull/14223) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).

@@ -71,7 +213,7 @@

* Fix function `if` with a nullable constexpr as cond that is not literal NULL. Fixes [#12463](https://github.com/ClickHouse/ClickHouse/issues/12463). [#13226](https://github.com/ClickHouse/ClickHouse/pull/13226) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix assert in `arrayElement` function when array elements are Nullable and the array subscript is also Nullable. This fixes [#12172](https://github.com/ClickHouse/ClickHouse/issues/12172). [#13224](https://github.com/ClickHouse/ClickHouse/pull/13224) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix DateTime64 conversion functions with constant argument. [#13205](https://github.com/ClickHouse/ClickHouse/pull/13205) ([Azat Khuzhin](https://github.com/azat)).
* Fix parsing row policies from users.xml when names of databases or tables contain dots. This fixes [#5779](https://github.com/ClickHouse/ClickHouse/issues/5779), [#12527](https://github.com/ClickHouse/ClickHouse/issues/12527). [#13199](https://github.com/ClickHouse/ClickHouse/pull/13199) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix access to `redis` dictionary after connection was dropped once. It may happen with `cache` and `direct` dictionary layouts. [#13082](https://github.com/ClickHouse/ClickHouse/pull/13082) ([Anton Popov](https://github.com/CurtizJ)).
* Fix wrong index analysis with functions. It could lead to some data parts being skipped when reading from `MergeTree` tables. Fixes [#13060](https://github.com/ClickHouse/ClickHouse/issues/13060). Fixes [#12406](https://github.com/ClickHouse/ClickHouse/issues/12406). [#13081](https://github.com/ClickHouse/ClickHouse/pull/13081) ([Anton Popov](https://github.com/CurtizJ)).
* Fix error `Cannot convert column because it is constant but values of constants are different in source and result` for remote queries which use deterministic functions in scope of query, but not deterministic between queries, like `now()`, `now64()`, `randConstant()`. Fixes [#11327](https://github.com/ClickHouse/ClickHouse/issues/11327). [#13075](https://github.com/ClickHouse/ClickHouse/pull/13075) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).

@@ -89,7 +231,7 @@

* Fixed bloom filter index with const expression. This fixes [#10572](https://github.com/ClickHouse/ClickHouse/issues/10572). [#12659](https://github.com/ClickHouse/ClickHouse/pull/12659) ([Winter Zhang](https://github.com/zhang2014)).
* Fix SIGSEGV in StorageKafka when broker is unavailable (and not only). [#12658](https://github.com/ClickHouse/ClickHouse/pull/12658) ([Azat Khuzhin](https://github.com/azat)).
* Add support for function `if` with `Array(UUID)` arguments. This fixes [#11066](https://github.com/ClickHouse/ClickHouse/issues/11066). [#12648](https://github.com/ClickHouse/ClickHouse/pull/12648) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* CREATE USER IF NOT EXISTS now doesn't throw exception if the user exists. This fixes [#12507](https://github.com/ClickHouse/ClickHouse/issues/12507). [#12646](https://github.com/ClickHouse/ClickHouse/pull/12646) ([Vitaly Baranov](https://github.com/vitlibar)).
* Exception `There is no supertype...` can be thrown during `ALTER ... UPDATE` in unexpected cases (e.g. when subtracting from UInt64 column). This fixes [#7306](https://github.com/ClickHouse/ClickHouse/issues/7306). This fixes [#4165](https://github.com/ClickHouse/ClickHouse/issues/4165). [#12633](https://github.com/ClickHouse/ClickHouse/pull/12633) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix possible `Pipeline stuck` error for queries with external sorting. Fixes [#12617](https://github.com/ClickHouse/ClickHouse/issues/12617). [#12618](https://github.com/ClickHouse/ClickHouse/pull/12618) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix error `Output of TreeExecutor is not sorted` for `OPTIMIZE DEDUPLICATE`. Fixes [#11572](https://github.com/ClickHouse/ClickHouse/issues/11572). [#12613](https://github.com/ClickHouse/ClickHouse/pull/12613) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).

@@ -123,7 +265,7 @@

* Fix assert in `parseDateTimeBestEffort`. This fixes [#12649](https://github.com/ClickHouse/ClickHouse/issues/12649). [#13227](https://github.com/ClickHouse/ClickHouse/pull/13227) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Minor optimization in Processors/PipelineExecutor: breaking out of a loop because it makes sense to do so. [#13058](https://github.com/ClickHouse/ClickHouse/pull/13058) ([Mark Papadakis](https://github.com/markpapadakis)).
* Support TRUNCATE table without TABLE keyword. [#12653](https://github.com/ClickHouse/ClickHouse/pull/12653) ([Winter Zhang](https://github.com/zhang2014)).
* Fix explain query format overwrite by default. This fixes [#12432](https://github.com/ClickHouse/ClickHouse/issues/12432). [#12541](https://github.com/ClickHouse/ClickHouse/pull/12541) ([BohuTANG](https://github.com/BohuTANG)).
* Allow to set JOIN kind and type in a more standard way: `LEFT SEMI JOIN` instead of `SEMI LEFT JOIN`. For now both are correct. [#12520](https://github.com/ClickHouse/ClickHouse/pull/12520) ([Artem Zuikov](https://github.com/4ertus2)).
* Change the default value for `multiple_joins_rewriter_version` to 2. It enables the new multiple joins rewriter that knows about column names. [#12469](https://github.com/ClickHouse/ClickHouse/pull/12469) ([Artem Zuikov](https://github.com/4ertus2)).
* Add several metrics for requests to S3 storages. [#12464](https://github.com/ClickHouse/ClickHouse/pull/12464) ([ianton-ru](https://github.com/ianton-ru)).

@@ -15,6 +15,7 @@ ClickHouse is an open-source column-oriented database management system that all

* [Contacts](https://clickhouse.tech/#contacts) can help to get your questions answered if there are any.
* You can also [fill this form](https://clickhouse.tech/#meet) to meet Yandex ClickHouse team in person.

## Upcoming Events

* [eBay migrating from Druid](https://us02web.zoom.us/webinar/register/tZMkfu6rpjItHtaQ1DXcgPWcSOnmM73HLGKL) on September 23, 2020.
* [ClickHouse for Edge Analytics](https://ones2020.sched.com/event/bWPs) on September 29, 2020.

```diff
@@ -1,6 +1,6 @@
 #pragma once
 
-#include <common/types.h>
+#include <common/extended_types.h>
 
 namespace common
 {
```

base/common/extended_types.h (new file, 108 lines):

```cpp
#pragma once

#include <type_traits>

#include <common/types.h>
#include <common/wide_integer.h>

using Int128 = __int128;

using wInt256 = wide::integer<256, signed>;
using wUInt256 = wide::integer<256, unsigned>;

static_assert(sizeof(wInt256) == 32);
static_assert(sizeof(wUInt256) == 32);

/// The standard library type traits, such as std::is_arithmetic, with one exception
/// (std::common_type), are "set in stone". Attempting to specialize them causes undefined behavior.
/// So instead of using the std type_traits, we use our own version which allows extension.
template <typename T>
struct is_signed
{
    static constexpr bool value = std::is_signed_v<T>;
};

template <> struct is_signed<Int128> { static constexpr bool value = true; };
template <> struct is_signed<wInt256> { static constexpr bool value = true; };

template <typename T>
inline constexpr bool is_signed_v = is_signed<T>::value;

template <typename T>
struct is_unsigned
{
    static constexpr bool value = std::is_unsigned_v<T>;
};

template <> struct is_unsigned<wUInt256> { static constexpr bool value = true; };

template <typename T>
inline constexpr bool is_unsigned_v = is_unsigned<T>::value;


/// TODO: is_integral includes char, char8_t and wchar_t.
template <typename T>
struct is_integer
{
    static constexpr bool value = std::is_integral_v<T>;
};

template <> struct is_integer<Int128> { static constexpr bool value = true; };
template <> struct is_integer<wInt256> { static constexpr bool value = true; };
template <> struct is_integer<wUInt256> { static constexpr bool value = true; };

template <typename T>
inline constexpr bool is_integer_v = is_integer<T>::value;


template <typename T>
struct is_arithmetic
{
    static constexpr bool value = std::is_arithmetic_v<T>;
};

template <> struct is_arithmetic<__int128> { static constexpr bool value = true; };

template <typename T>
inline constexpr bool is_arithmetic_v = is_arithmetic<T>::value;

template <typename T>
struct make_unsigned
{
    typedef std::make_unsigned_t<T> type;
};

template <> struct make_unsigned<Int128> { using type = unsigned __int128; };
template <> struct make_unsigned<wInt256> { using type = wUInt256; };
template <> struct make_unsigned<wUInt256> { using type = wUInt256; };

template <typename T> using make_unsigned_t = typename make_unsigned<T>::type;

template <typename T>
struct make_signed
{
    typedef std::make_signed_t<T> type;
};

template <> struct make_signed<wInt256> { using type = wInt256; };
template <> struct make_signed<wUInt256> { using type = wInt256; };

template <typename T> using make_signed_t = typename make_signed<T>::type;

template <typename T>
struct is_big_int
{
    static constexpr bool value = false;
};

template <> struct is_big_int<wInt256> { static constexpr bool value = true; };
template <> struct is_big_int<wUInt256> { static constexpr bool value = true; };

template <typename T>
inline constexpr bool is_big_int_v = is_big_int<T>::value;

template <typename To, typename From>
inline To bigint_cast(const From & x [[maybe_unused]])
{
    return static_cast<To>(x);
}
```

base/common/throwError.h (new file, 13 lines):

```cpp
#pragma once
#include <stdexcept>

/// Throw a DB::Exception-like exception before its definition.
/// DB::Exception is derived from Poco::Exception, which is derived from std::exception.
/// DB::Exception is generally caught as Poco::Exception. std::exception generally has other catch blocks and could lead to other outcomes.
/// DB::Exception is not defined yet. It would be better to throw Poco::Exception, but we do not want to include any big header here, even <string>.
/// So we throw some std::exception instead, in the hope that its catch block is the same as the DB::Exception one.
template <typename T>
inline void throwError(const T & err)
{
    throw std::runtime_error(err);
}
```

base/common/types.h:

```diff
@@ -1,12 +1,7 @@
 #pragma once
 
-#include <algorithm>
 #include <cstdint>
-#include <cstdlib>
 #include <string>
-#include <type_traits>
-
-#include <common/wide_integer.h>
 
 using Int8 = int8_t;
 using Int16 = int16_t;
@@ -23,112 +18,24 @@ using UInt16 = uint16_t;
 using UInt32 = uint32_t;
 using UInt64 = uint64_t;
 
-using Int128 = __int128;
-
-using wInt256 = std::wide_integer<256, signed>;
-using wUInt256 = std::wide_integer<256, unsigned>;
-
-static_assert(sizeof(wInt256) == 32);
-static_assert(sizeof(wUInt256) == 32);
-
 using String = std::string;
 
-/// The standard library type traits, such as std::is_arithmetic, with one exception
-/// (std::common_type), are "set in stone". Attempting to specialize them causes undefined behavior.
-/// So instead of using the std type_traits, we use our own version which allows extension.
-template <typename T>
-struct is_signed
-{
-    static constexpr bool value = std::is_signed_v<T>;
-};
-
-template <> struct is_signed<Int128> { static constexpr bool value = true; };
-template <> struct is_signed<wInt256> { static constexpr bool value = true; };
-
-template <typename T>
-inline constexpr bool is_signed_v = is_signed<T>::value;
-
-template <typename T>
-struct is_unsigned
-{
-    static constexpr bool value = std::is_unsigned_v<T>;
-};
-
-template <> struct is_unsigned<wUInt256> { static constexpr bool value = true; };
-
-template <typename T>
-inline constexpr bool is_unsigned_v = is_unsigned<T>::value;
-
-/// TODO: is_integral includes char, char8_t and wchar_t.
-template <typename T>
-struct is_integer
-{
-    static constexpr bool value = std::is_integral_v<T>;
-};
-
-template <> struct is_integer<Int128> { static constexpr bool value = true; };
-template <> struct is_integer<wInt256> { static constexpr bool value = true; };
-template <> struct is_integer<wUInt256> { static constexpr bool value = true; };
-
-template <typename T>
-inline constexpr bool is_integer_v = is_integer<T>::value;
-
-template <typename T>
-struct is_arithmetic
-{
-    static constexpr bool value = std::is_arithmetic_v<T>;
-};
-
-template <> struct is_arithmetic<__int128> { static constexpr bool value = true; };
-
-template <typename T>
-inline constexpr bool is_arithmetic_v = is_arithmetic<T>::value;
-
-template <typename T>
-struct make_unsigned
-{
-    typedef std::make_unsigned_t<T> type;
-};
-
-template <> struct make_unsigned<Int128> { using type = unsigned __int128; };
-template <> struct make_unsigned<wInt256> { using type = wUInt256; };
-template <> struct make_unsigned<wUInt256> { using type = wUInt256; };
-
-template <typename T> using make_unsigned_t = typename make_unsigned<T>::type;
-
-template <typename T>
-struct make_signed
-{
-    typedef std::make_signed_t<T> type;
-};
-
-template <> struct make_signed<wInt256> { using type = wInt256; };
-template <> struct make_signed<wUInt256> { using type = wInt256; };
-
-template <typename T> using make_signed_t = typename make_signed<T>::type;
-
-template <typename T>
-struct is_big_int
-{
-    static constexpr bool value = false;
-};
-
-template <> struct is_big_int<wInt256> { static constexpr bool value = true; };
-template <> struct is_big_int<wUInt256> { static constexpr bool value = true; };
-
-template <typename T>
-inline constexpr bool is_big_int_v = is_big_int<T>::value;
-
-template <typename T>
-inline std::string bigintToString(const T & x)
-{
-    return to_string(x);
-}
-
-template <typename To, typename From>
-inline To bigint_cast(const From & x [[maybe_unused]])
-{
-    return static_cast<To>(x);
-}
+namespace DB
+{
+
+using UInt8 = ::UInt8;
+using UInt16 = ::UInt16;
+using UInt32 = ::UInt32;
+using UInt64 = ::UInt64;
+
+using Int8 = ::Int8;
+using Int16 = ::Int16;
+using Int32 = ::Int32;
+using Int64 = ::Int64;
+
+using Float32 = float;
+using Float64 = double;
+
+using String = std::string;
+
+}
```
|
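The comment removed above captures a real constraint worth keeping in mind: specializing most `std::` traits is undefined behavior, so the codebase keeps its own extensible mirrors. A standalone sketch of that pattern, with `Int256Stub` invented for illustration (this is not the header's code, just the idiom it used):

```cpp
#include <type_traits>

// Project-local trait: defaults to the std answer...
template <typename T>
struct is_integer { static constexpr bool value = std::is_integral_v<T>; };

// ...but, unlike std::is_integral itself, may be legally specialized
// for user-defined types such as a wide integer.
struct Int256Stub {};  // hypothetical wide type for the example
template <> struct is_integer<Int256Stub> { static constexpr bool value = true; };

template <typename T>
inline constexpr bool is_integer_v = is_integer<T>::value;

static_assert(is_integer_v<int>);
static_assert(is_integer_v<Int256Stub>);
```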
@@ -22,79 +22,87 @@
  * without express or implied warranty.
  */
 
-#include <climits> // CHAR_BIT
-#include <cmath>
 #include <cstdint>
 #include <limits>
 #include <type_traits>
+#include <initializer_list>
 
+namespace wide
+{
+template <size_t Bits, typename Signed>
+class integer;
+}
+
 namespace std
 {
-template <size_t Bits, typename Signed>
-class wide_integer;
-
 template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-struct common_type<wide_integer<Bits, Signed>, wide_integer<Bits2, Signed2>>;
+struct common_type<wide::integer<Bits, Signed>, wide::integer<Bits2, Signed2>>;
 
 template <size_t Bits, typename Signed, typename Arithmetic>
-struct common_type<wide_integer<Bits, Signed>, Arithmetic>;
+struct common_type<wide::integer<Bits, Signed>, Arithmetic>;
 
 template <typename Arithmetic, size_t Bits, typename Signed>
-struct common_type<Arithmetic, wide_integer<Bits, Signed>>;
+struct common_type<Arithmetic, wide::integer<Bits, Signed>>;
 
+}
+
+namespace wide
+{
+
 template <size_t Bits, typename Signed>
-class wide_integer
+class integer
 {
 public:
     using base_type = uint8_t;
     using signed_base_type = int8_t;
 
     // ctors
-    wide_integer() = default;
+    integer() = default;
 
     template <typename T>
-    constexpr wide_integer(T rhs) noexcept;
+    constexpr integer(T rhs) noexcept;
     template <typename T>
-    constexpr wide_integer(std::initializer_list<T> il) noexcept;
+    constexpr integer(std::initializer_list<T> il) noexcept;
 
     // assignment
     template <size_t Bits2, typename Signed2>
-    constexpr wide_integer<Bits, Signed> & operator=(const wide_integer<Bits2, Signed2> & rhs) noexcept;
+    constexpr integer<Bits, Signed> & operator=(const integer<Bits2, Signed2> & rhs) noexcept;
 
     template <typename Arithmetic>
-    constexpr wide_integer<Bits, Signed> & operator=(Arithmetic rhs) noexcept;
+    constexpr integer<Bits, Signed> & operator=(Arithmetic rhs) noexcept;
 
     template <typename Arithmetic>
-    constexpr wide_integer<Bits, Signed> & operator*=(const Arithmetic & rhs);
+    constexpr integer<Bits, Signed> & operator*=(const Arithmetic & rhs);
 
     template <typename Arithmetic>
-    constexpr wide_integer<Bits, Signed> & operator/=(const Arithmetic & rhs);
+    constexpr integer<Bits, Signed> & operator/=(const Arithmetic & rhs);
 
     template <typename Arithmetic>
-    constexpr wide_integer<Bits, Signed> & operator+=(const Arithmetic & rhs) noexcept(is_same<Signed, unsigned>::value);
+    constexpr integer<Bits, Signed> & operator+=(const Arithmetic & rhs) noexcept(std::is_same_v<Signed, unsigned>);
 
     template <typename Arithmetic>
-    constexpr wide_integer<Bits, Signed> & operator-=(const Arithmetic & rhs) noexcept(is_same<Signed, unsigned>::value);
+    constexpr integer<Bits, Signed> & operator-=(const Arithmetic & rhs) noexcept(std::is_same_v<Signed, unsigned>);
 
     template <typename Integral>
-    constexpr wide_integer<Bits, Signed> & operator%=(const Integral & rhs);
+    constexpr integer<Bits, Signed> & operator%=(const Integral & rhs);
 
     template <typename Integral>
-    constexpr wide_integer<Bits, Signed> & operator&=(const Integral & rhs) noexcept;
+    constexpr integer<Bits, Signed> & operator&=(const Integral & rhs) noexcept;
 
     template <typename Integral>
-    constexpr wide_integer<Bits, Signed> & operator|=(const Integral & rhs) noexcept;
+    constexpr integer<Bits, Signed> & operator|=(const Integral & rhs) noexcept;
 
     template <typename Integral>
-    constexpr wide_integer<Bits, Signed> & operator^=(const Integral & rhs) noexcept;
+    constexpr integer<Bits, Signed> & operator^=(const Integral & rhs) noexcept;
 
-    constexpr wide_integer<Bits, Signed> & operator<<=(int n);
-    constexpr wide_integer<Bits, Signed> & operator>>=(int n) noexcept;
+    constexpr integer<Bits, Signed> & operator<<=(int n) noexcept;
+    constexpr integer<Bits, Signed> & operator>>=(int n) noexcept;
 
-    constexpr wide_integer<Bits, Signed> & operator++() noexcept(is_same<Signed, unsigned>::value);
-    constexpr wide_integer<Bits, Signed> operator++(int) noexcept(is_same<Signed, unsigned>::value);
-    constexpr wide_integer<Bits, Signed> & operator--() noexcept(is_same<Signed, unsigned>::value);
-    constexpr wide_integer<Bits, Signed> operator--(int) noexcept(is_same<Signed, unsigned>::value);
+    constexpr integer<Bits, Signed> & operator++() noexcept(std::is_same_v<Signed, unsigned>);
+    constexpr integer<Bits, Signed> operator++(int) noexcept(std::is_same_v<Signed, unsigned>);
+    constexpr integer<Bits, Signed> & operator--() noexcept(std::is_same_v<Signed, unsigned>);
+    constexpr integer<Bits, Signed> operator--(int) noexcept(std::is_same_v<Signed, unsigned>);
 
     // observers
 
@@ -114,10 +122,10 @@ public:
 
 private:
     template <size_t Bits2, typename Signed2>
-    friend class wide_integer;
+    friend class integer;
 
-    friend class numeric_limits<wide_integer<Bits, signed>>;
-    friend class numeric_limits<wide_integer<Bits, unsigned>>;
+    friend class std::numeric_limits<integer<Bits, signed>>;
+    friend class std::numeric_limits<integer<Bits, unsigned>>;
 
     base_type m_arr[_impl::arr_size];
 };
@@ -134,115 +142,117 @@ using __only_integer = typename std::enable_if<IntegralConcept<T>() && IntegralC
 
 // Unary operators
 template <size_t Bits, typename Signed>
-constexpr wide_integer<Bits, Signed> operator~(const wide_integer<Bits, Signed> & lhs) noexcept;
+constexpr integer<Bits, Signed> operator~(const integer<Bits, Signed> & lhs) noexcept;
 
 template <size_t Bits, typename Signed>
-constexpr wide_integer<Bits, Signed> operator-(const wide_integer<Bits, Signed> & lhs) noexcept(is_same<Signed, unsigned>::value);
+constexpr integer<Bits, Signed> operator-(const integer<Bits, Signed> & lhs) noexcept(std::is_same_v<Signed, unsigned>);
 
 template <size_t Bits, typename Signed>
-constexpr wide_integer<Bits, Signed> operator+(const wide_integer<Bits, Signed> & lhs) noexcept(is_same<Signed, unsigned>::value);
+constexpr integer<Bits, Signed> operator+(const integer<Bits, Signed> & lhs) noexcept(std::is_same_v<Signed, unsigned>);
 
 // Binary operators
 template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-std::common_type_t<wide_integer<Bits, Signed>, wide_integer<Bits2, Signed2>> constexpr
-operator*(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
+std::common_type_t<integer<Bits, Signed>, integer<Bits2, Signed2>> constexpr
+operator*(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
 template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
 std::common_type_t<Arithmetic, Arithmetic2> constexpr operator*(const Arithmetic & rhs, const Arithmetic2 & lhs);
 
 template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-std::common_type_t<wide_integer<Bits, Signed>, wide_integer<Bits2, Signed2>> constexpr
-operator/(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
+std::common_type_t<integer<Bits, Signed>, integer<Bits2, Signed2>> constexpr
+operator/(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
 template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
 std::common_type_t<Arithmetic, Arithmetic2> constexpr operator/(const Arithmetic & rhs, const Arithmetic2 & lhs);
 
 template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-std::common_type_t<wide_integer<Bits, Signed>, wide_integer<Bits2, Signed2>> constexpr
-operator+(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
+std::common_type_t<integer<Bits, Signed>, integer<Bits2, Signed2>> constexpr
+operator+(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
 template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
 std::common_type_t<Arithmetic, Arithmetic2> constexpr operator+(const Arithmetic & rhs, const Arithmetic2 & lhs);
 
 template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-std::common_type_t<wide_integer<Bits, Signed>, wide_integer<Bits2, Signed2>> constexpr
-operator-(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
+std::common_type_t<integer<Bits, Signed>, integer<Bits2, Signed2>> constexpr
+operator-(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
 template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
 std::common_type_t<Arithmetic, Arithmetic2> constexpr operator-(const Arithmetic & rhs, const Arithmetic2 & lhs);
 
 template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-std::common_type_t<wide_integer<Bits, Signed>, wide_integer<Bits2, Signed2>> constexpr
-operator%(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
+std::common_type_t<integer<Bits, Signed>, integer<Bits2, Signed2>> constexpr
+operator%(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
 template <typename Integral, typename Integral2, class = __only_integer<Integral, Integral2>>
 std::common_type_t<Integral, Integral2> constexpr operator%(const Integral & rhs, const Integral2 & lhs);
 
 template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-std::common_type_t<wide_integer<Bits, Signed>, wide_integer<Bits2, Signed2>> constexpr
-operator&(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
+std::common_type_t<integer<Bits, Signed>, integer<Bits2, Signed2>> constexpr
+operator&(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
 template <typename Integral, typename Integral2, class = __only_integer<Integral, Integral2>>
 std::common_type_t<Integral, Integral2> constexpr operator&(const Integral & rhs, const Integral2 & lhs);
 
 template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-std::common_type_t<wide_integer<Bits, Signed>, wide_integer<Bits2, Signed2>> constexpr
-operator|(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
+std::common_type_t<integer<Bits, Signed>, integer<Bits2, Signed2>> constexpr
+operator|(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
 template <typename Integral, typename Integral2, class = __only_integer<Integral, Integral2>>
 std::common_type_t<Integral, Integral2> constexpr operator|(const Integral & rhs, const Integral2 & lhs);
 
 template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-std::common_type_t<wide_integer<Bits, Signed>, wide_integer<Bits2, Signed2>> constexpr
-operator^(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
+std::common_type_t<integer<Bits, Signed>, integer<Bits2, Signed2>> constexpr
+operator^(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
 template <typename Integral, typename Integral2, class = __only_integer<Integral, Integral2>>
 std::common_type_t<Integral, Integral2> constexpr operator^(const Integral & rhs, const Integral2 & lhs);
 
 // TODO: Integral
 template <size_t Bits, typename Signed>
-constexpr wide_integer<Bits, Signed> operator<<(const wide_integer<Bits, Signed> & lhs, int n) noexcept;
+constexpr integer<Bits, Signed> operator<<(const integer<Bits, Signed> & lhs, int n) noexcept;
 template <size_t Bits, typename Signed>
-constexpr wide_integer<Bits, Signed> operator>>(const wide_integer<Bits, Signed> & lhs, int n) noexcept;
+constexpr integer<Bits, Signed> operator>>(const integer<Bits, Signed> & lhs, int n) noexcept;
 
 template <size_t Bits, typename Signed, typename Int, typename = std::enable_if_t<!std::is_same_v<Int, int>>>
-constexpr wide_integer<Bits, Signed> operator<<(const wide_integer<Bits, Signed> & lhs, Int n) noexcept
+constexpr integer<Bits, Signed> operator<<(const integer<Bits, Signed> & lhs, Int n) noexcept
 {
     return lhs << int(n);
 }
 template <size_t Bits, typename Signed, typename Int, typename = std::enable_if_t<!std::is_same_v<Int, int>>>
-constexpr wide_integer<Bits, Signed> operator>>(const wide_integer<Bits, Signed> & lhs, Int n) noexcept
+constexpr integer<Bits, Signed> operator>>(const integer<Bits, Signed> & lhs, Int n) noexcept
 {
     return lhs >> int(n);
 }
 
 template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-constexpr bool operator<(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
+constexpr bool operator<(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
 template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
 constexpr bool operator<(const Arithmetic & rhs, const Arithmetic2 & lhs);
 
 template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-constexpr bool operator>(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
+constexpr bool operator>(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
 template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
 constexpr bool operator>(const Arithmetic & rhs, const Arithmetic2 & lhs);
 
 template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-constexpr bool operator<=(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
+constexpr bool operator<=(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
 template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
 constexpr bool operator<=(const Arithmetic & rhs, const Arithmetic2 & lhs);
 
 template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-constexpr bool operator>=(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
+constexpr bool operator>=(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
 template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
 constexpr bool operator>=(const Arithmetic & rhs, const Arithmetic2 & lhs);
 
 template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-constexpr bool operator==(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
+constexpr bool operator==(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
 template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
 constexpr bool operator==(const Arithmetic & rhs, const Arithmetic2 & lhs);
 
 template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-constexpr bool operator!=(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
+constexpr bool operator!=(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
 template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
 constexpr bool operator!=(const Arithmetic & rhs, const Arithmetic2 & lhs);
 
-template <size_t Bits, typename Signed>
-std::string to_string(const wide_integer<Bits, Signed> & n);
+}
 
+namespace std
+{
 
 template <size_t Bits, typename Signed>
-struct hash<wide_integer<Bits, Signed>>;
+struct hash<wide::integer<Bits, Signed>>;
 
 }
 
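The rename to `wide::integer` keeps interoperability with built-in types through the `std::common_type` specializations declared above: mixed expressions promote the built-in operand to the wide type. A small usage sketch under the assumption that the header is included as elsewhere in this commit:

```cpp
#include <common/wide_integer.h>  // include path assumed from this tree

using Int256 = wide::integer<256, signed>;

// 2 * x and + 1 compile because the common_type specializations promote
// the int operands to the 256-bit type.
inline Int256 twice_plus_one(const Int256 & x)
{
    return 2 * x + 1;
}
```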
(One file's diff is suppressed here because it is too large.)
base/common/wide_integer_to_string.h (new file)
@@ -0,0 +1,35 @@
+#pragma once
+
+#include <string>
+
+#include "wide_integer.h"
+
+namespace wide
+{
+
+template <size_t Bits, typename Signed>
+inline std::string to_string(const integer<Bits, Signed> & n)
+{
+    std::string res;
+    if (integer<Bits, Signed>::_impl::operator_eq(n, 0U))
+        return "0";
+
+    integer<Bits, unsigned> t;
+    bool is_neg = integer<Bits, Signed>::_impl::is_negative(n);
+    if (is_neg)
+        t = integer<Bits, Signed>::_impl::operator_unary_minus(n);
+    else
+        t = n;
+
+    while (!integer<Bits, unsigned>::_impl::operator_eq(t, 0U))
+    {
+        res.insert(res.begin(), '0' + char(integer<Bits, unsigned>::_impl::operator_percent(t, 10U)));
+        t = integer<Bits, unsigned>::_impl::operator_slash(t, 10U);
+    }
+
+    if (is_neg)
+        res.insert(res.begin(), '-');
+    return res;
+}
+
+}
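The conversion above is the classic repeated divide-and-modulo by 10, applied to the unsigned magnitude and prefixed with `-` for negatives. A minimal sketch of calling it, assuming the include paths used in this tree:

```cpp
#include <iostream>

#include <common/wide_integer.h>
#include <common/wide_integer_to_string.h>  // paths assumed from this commit

int main()
{
    wide::integer<256, signed> x = 1;
    for (int i = 0; i < 40; ++i)
        x *= 10;  // 10^40 overflows even 128-bit integers, but fits in 256 bits

    std::cout << wide::to_string(x) << '\n';  // prints a 41-digit number
}
```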
@@ -1,9 +1,7 @@
 #pragma once
 
-#include <boost/noncopyable.hpp>
-
 #include <mysqlxx/Types.h>
 
 
 namespace mysqlxx
 {
@@ -22,6 +20,11 @@ class ResultBase
 public:
     ResultBase(MYSQL_RES * res_, Connection * conn_, const Query * query_);
 
+    ResultBase(const ResultBase &) = delete;
+    ResultBase & operator=(const ResultBase &) = delete;
+    ResultBase(ResultBase &&) = default;
+    ResultBase & operator=(ResultBase &&) = default;
+
     Connection * getConnection() { return conn; }
     MYSQL_FIELDS getFields() { return fields; }
     unsigned getNumFields() { return num_fields; }
@@ -254,7 +254,23 @@ template <> inline std::string Value::get<std::string >() cons
 template <> inline LocalDate Value::get<LocalDate >() const { return getDate(); }
 template <> inline LocalDateTime Value::get<LocalDateTime >() const { return getDateTime(); }
 
-template <typename T> inline T Value::get() const { return T(*this); }
+namespace details
+{
+// To avoid stack overflow when converting to type with no appropriate c-tor,
+// resulting in endless recursive calls from `Value::get<T>()` to `Value::operator T()` to `Value::get<T>()` to ...
+template <typename T, typename std::enable_if_t<std::is_constructible_v<T, Value>>>
+inline T contructFromValue(const Value & val)
+{
+    return T(val);
+}
+}
+
+template <typename T>
+inline T Value::get() const
+{
+    return details::contructFromValue<T>(*this);
+}
+
 
 inline std::ostream & operator<< (std::ostream & ostr, const Value & x)
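The recursion the new comment describes is easy to trip. A stripped-down illustration of the pitfall (simplified stand-in types, not the mysqlxx code itself):

```cpp
// Sketch of the problem the guarded helper above prevents.
struct Value
{
    template <typename T>
    T get() const { return T(*this); }        // (1) construct T from Value

    template <typename T>
    operator T() const { return get<T>(); }   // (2) conversion falls back to get()
};
// For a T with no constructor taking Value, T(*this) in (1) resolves through
// the conversion operator (2), which calls (1) again: endless recursion and a
// stack overflow at runtime. Routing construction through a helper constrained
// on std::is_constructible_v<T, Value> breaks that loop at compile time.
```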
@@ -1,9 +1,9 @@
 # This strings autochanged from release_lib.sh:
-SET(VERSION_REVISION 54439)
+SET(VERSION_REVISION 54440)
 SET(VERSION_MAJOR 20)
-SET(VERSION_MINOR 9)
+SET(VERSION_MINOR 10)
 SET(VERSION_PATCH 1)
-SET(VERSION_GITHASH 0586f0d555f7481b394afc55bbb29738cd573a1c)
+SET(VERSION_GITHASH 11a247d2f42010c1a17bf678c3e00a4bc89b23f8)
-SET(VERSION_DESCRIBE v20.9.1.1-prestable)
+SET(VERSION_DESCRIBE v20.10.1.1-prestable)
-SET(VERSION_STRING 20.9.1.1)
+SET(VERSION_STRING 20.10.1.1)
 # end of autochange
@@ -36,7 +36,15 @@ if (SANITIZE)
     endif ()
 
 elseif (SANITIZE STREQUAL "thread")
-    set (TSAN_FLAGS "-fsanitize=thread -fsanitize-blacklist=${CMAKE_SOURCE_DIR}/tests/tsan_suppressions.txt")
+    set (TSAN_FLAGS "-fsanitize=thread")
+    if (COMPILER_CLANG)
+        set (TSAN_FLAGS "${TSAN_FLAGS} -fsanitize-blacklist=${CMAKE_SOURCE_DIR}/tests/tsan_suppressions.txt")
+    else()
+        message (WARNING "TSAN suppressions was not passed to the compiler (since the compiler is not clang)")
+        message (WARNING "Use the following command to pass them manually:")
+        message (WARNING " export TSAN_OPTIONS=\"$TSAN_OPTIONS suppressions=${CMAKE_SOURCE_DIR}/tests/tsan_suppressions.txt\"")
+    endif()
+
 
     set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${SAN_FLAGS} ${TSAN_FLAGS}")
     set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${SAN_FLAGS} ${TSAN_FLAGS}")
@@ -23,7 +23,7 @@ option (WEVERYTHING "Enables -Weverything option with some exceptions. This is i
 # Control maximum size of stack frames. It can be important if the code is run in fibers with small stack size.
 # Only in release build because debug has too large stack frames.
 if ((NOT CMAKE_BUILD_TYPE_UC STREQUAL "DEBUG") AND (NOT SANITIZE))
-    add_warning(frame-larger-than=16384)
+    add_warning(frame-larger-than=32768)
 endif ()
 
 if (COMPILER_CLANG)
@@ -169,9 +169,16 @@ elseif (COMPILER_GCC)
     # Warn if vector operation is not implemented via SIMD capabilities of the architecture
     add_cxx_compile_options(-Wvector-operation-performance)
 
-    # XXX: gcc10 stuck with this option while compiling GatherUtils code
-    # (anyway there are builds with clang, that will warn)
     if (CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 10)
+        # XXX: gcc10 stuck with this option while compiling GatherUtils code
+        # (anyway there are builds with clang, that will warn)
         add_cxx_compile_options(-Wno-sequence-point)
+        # XXX: gcc10 false positive with this warning in MergeTreePartition.cpp
+        #     inlined from 'void writeHexByteLowercase(UInt8, void*)' at ../src/Common/hex.h:39:11,
+        #     inlined from 'DB::String DB::MergeTreePartition::getID(const DB::Block&) const' at ../src/Storages/MergeTree/MergeTreePartition.cpp:85:30:
+        # ../contrib/libc-headers/x86_64-linux-gnu/bits/string_fortified.h:34:33: error: writing 2 bytes into a region of size 0 [-Werror=stringop-overflow=]
+        #     34 | return __builtin___memcpy_chk (__dest, __src, __len, __bos0 (__dest));
+        # For some reason (bug in gcc?) macro 'GCC diagnostic ignored "-Wstringop-overflow"' doesn't help.
+        add_cxx_compile_options(-Wno-stringop-overflow)
     endif()
 endif ()
contrib/llvm (vendored submodule)
@@ -1 +1 @@
-Subproject commit 3d6c7e916760b395908f28a1c885c8334d4fa98b
+Subproject commit 8f24d507c1cfeec66d27f48fe74518fd278e2d25
debian/changelog
@@ -1,5 +1,5 @@
-clickhouse (20.9.1.1) unstable; urgency=low
+clickhouse (20.10.1.1) unstable; urgency=low
 
   * Modified source code
 
- -- clickhouse-release <clickhouse-release@yandex-team.ru>  Mon, 31 Aug 2020 23:07:38 +0300
+ -- clickhouse-release <clickhouse-release@yandex-team.ru>  Tue, 08 Sep 2020 17:04:39 +0300
@@ -1,7 +1,7 @@
 FROM ubuntu:18.04
 
 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=20.9.1.*
+ARG version=20.10.1.*
 
 RUN apt-get update \
     && apt-get install --yes --no-install-recommends \
@@ -32,8 +32,6 @@ RUN apt-get update \
         curl \
         gcc-9 \
         g++-9 \
-        gcc-10 \
-        g++-10 \
        llvm-${LLVM_VERSION} \
        clang-${LLVM_VERSION} \
        lld-${LLVM_VERSION} \
@@ -93,5 +91,16 @@ RUN wget -nv "https://developer.arm.com/-/media/Files/downloads/gnu-a/8.3-2019.0
 # Download toolchain for FreeBSD 11.3
 RUN wget -nv https://clickhouse-datasets.s3.yandex.net/toolchains/toolchains/freebsd-11.3-toolchain.tar.xz
 
+# NOTE: For some reason we have outdated version of gcc-10 in ubuntu 20.04 stable.
+# Current workaround is to use latest version proposed repo. Remove as soon as
+# gcc-10.2 appear in stable repo.
+RUN echo 'deb http://archive.ubuntu.com/ubuntu/ focal-proposed restricted main multiverse universe' > /etc/apt/sources.list.d/proposed-repositories.list
+
+RUN apt-get update \
+    && apt-get install gcc-10 g++-10 --yes
+
+RUN rm /etc/apt/sources.list.d/proposed-repositories.list && apt-get update
+
+
 COPY build.sh /
 CMD ["/bin/bash", "/build.sh"]
@@ -18,7 +18,7 @@ ccache --zero-stats ||:
 ln -s /usr/lib/x86_64-linux-gnu/libOpenCL.so.1.0.0 /usr/lib/libOpenCL.so ||:
 rm -f CMakeCache.txt
 cmake --debug-trycompile --verbose=1 -DCMAKE_VERBOSE_MAKEFILE=1 -LA -DCMAKE_BUILD_TYPE=$BUILD_TYPE -DSANITIZE=$SANITIZER $CMAKE_FLAGS ..
-ninja $NINJA_FLAGS clickhouse-bundle
+ninja -j $(($(nproc) / 2)) $NINJA_FLAGS clickhouse-bundle
 mv ./programs/clickhouse* /output
 mv ./src/unit_tests_dbms /output
 find . -name '*.so' -print -exec mv '{}' /output \;
@@ -42,8 +42,6 @@ RUN export CODENAME="$(lsb_release --codename --short | tr 'A-Z' 'a-z')" \
 # Libraries from OS are only needed to test the "unbundled" build (this is not used in production).
 RUN apt-get update \
     && apt-get install \
-        gcc-10 \
-        g++-10 \
         gcc-9 \
         g++-9 \
         clang-11 \
@@ -75,6 +73,16 @@ RUN apt-get update \
         pigz \
         --yes --no-install-recommends
 
+# NOTE: For some reason we have outdated version of gcc-10 in ubuntu 20.04 stable.
+# Current workaround is to use latest version proposed repo. Remove as soon as
+# gcc-10.2 appear in stable repo.
+RUN echo 'deb http://archive.ubuntu.com/ubuntu/ focal-proposed restricted main multiverse universe' > /etc/apt/sources.list.d/proposed-repositories.list
+
+RUN apt-get update \
+    && apt-get install gcc-10 g++-10 --yes --no-install-recommends
+
+RUN rm /etc/apt/sources.list.d/proposed-repositories.list && apt-get update
+
 # This symlink required by gcc to find lld compiler
 RUN ln -s /usr/bin/lld-${LLVM_VERSION} /usr/bin/ld.lld
 
@@ -93,7 +93,7 @@ def parse_env_variables(build_type, compiler, sanitizer, package_type, image_typ
 
     cxx = cc.replace('gcc', 'g++').replace('clang', 'clang++')
 
-    if image_type == "deb":
+    if image_type == "deb" or image_type == "unbundled":
         result.append("DEB_CC={}".format(cc))
         result.append("DEB_CXX={}".format(cxx))
     elif image_type == "binary":
@@ -1,7 +1,7 @@
 FROM ubuntu:20.04
 
 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=20.9.1.*
+ARG version=20.10.1.*
 ARG gosu_ver=1.10
 
 RUN apt-get update \
@@ -1,7 +1,7 @@
 FROM ubuntu:18.04
 
 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=20.9.1.*
+ARG version=20.10.1.*
 
 RUN apt-get update && \
     apt-get install -y apt-transport-https dirmngr && \
@@ -10,7 +10,7 @@ stage=${stage:-}
 
 # A variable to pass additional flags to CMake.
 # Here we explicitly default it to nothing so that bash doesn't complain about
 # it being undefined. Also read it as array so that we can pass an empty list
 # of additional variable to cmake properly, and it doesn't generate an extra
 # empty parameter.
 read -ra FASTTEST_CMAKE_FLAGS <<< "${FASTTEST_CMAKE_FLAGS:-}"
@@ -127,6 +127,7 @@ ln -s /usr/share/clickhouse-test/config/access_management.xml /etc/clickhouse-se
 ln -s /usr/share/clickhouse-test/config/ints_dictionary.xml /etc/clickhouse-server/
 ln -s /usr/share/clickhouse-test/config/strings_dictionary.xml /etc/clickhouse-server/
 ln -s /usr/share/clickhouse-test/config/decimals_dictionary.xml /etc/clickhouse-server/
+ln -s /usr/share/clickhouse-test/config/executable_dictionary.xml /etc/clickhouse-server/
 ln -s /usr/share/clickhouse-test/config/macros.xml /etc/clickhouse-server/config.d/
 ln -s /usr/share/clickhouse-test/config/disks.xml /etc/clickhouse-server/config.d/
 #ln -s /usr/share/clickhouse-test/config/secure_ports.xml /etc/clickhouse-server/config.d/
@@ -5,4 +5,4 @@ services:
         restart: always
         ports:
             - 6380:6379
-        command: redis-server --requirepass "clickhouse"
+        command: redis-server --requirepass "clickhouse" --databases 32
@@ -394,12 +394,24 @@ create table query_run_metrics_denorm engine File(TSV, 'analyze/query-run-metric
     order by test, query_index, metric_names, version, query_id
     ;
 
+-- Filter out tests that don't have an even number of runs, to avoid breaking
+-- the further calculations. This may happen if there was an error during the
+-- test runs, e.g. the server died. It will be reported in test errors, so we
+-- don't have to report it again.
+create view broken_queries as
+    select test, query_index
+    from query_runs
+    group by test, query_index
+    having count(*) % 2 != 0
+    ;
+
 -- This is for statistical processing with eqmed.sql
 create table query_run_metrics_for_stats engine File(
     TSV, -- do not add header -- will parse with grep
     'analyze/query-run-metrics-for-stats.tsv')
     as select test, query_index, 0 run, version, metric_values
     from query_run_metric_arrays
+    where (test, query_index) not in broken_queries
     order by test, query_index, run, version
     ;
 
@@ -915,13 +927,15 @@ done
 
 function report_metrics
 {
+    build_log_column_definitions
+
     rm -rf metrics ||:
     mkdir metrics
 
     clickhouse-local --query "
 create view right_async_metric_log as
     select * from file('right-async-metric-log.tsv', TSVWithNamesAndTypes,
-        'event_date Date, event_time DateTime, name String, value Float64')
+        '$(cat right-async-metric-log.tsv.columns)')
     ;
 
 -- Use the right log as time reference because it may have higher precision.
@@ -930,7 +944,7 @@ create table metrics engine File(TSV, 'metrics/metrics.tsv') as
     select name metric, r.event_time - min_time event_time, l.value as left, r.value as right
     from right_async_metric_log r
     asof join file('left-async-metric-log.tsv', TSVWithNamesAndTypes,
-        'event_date Date, event_time DateTime, name String, value Float64') l
+        '$(cat left-async-metric-log.tsv.columns)') l
     on l.name = r.name and r.event_time <= l.event_time
     order by metric, event_time
     ;
@@ -1,4 +1,4 @@
 <yandex>
     <http_port remove="remove"/>
     <mysql_port remove="remove"/>
     <interserver_http_port remove="remove"/>
@@ -22,4 +22,6 @@
     <uncompressed_cache_size>1000000000</uncompressed_cache_size>
 
     <asynchronous_metrics_update_period_s>10</asynchronous_metrics_update_period_s>
+
+    <remap_executable replace="replace">true</remap_executable>
 </yandex>
@@ -8,7 +8,7 @@ select
 from
 (
     -- quantiles of randomization distributions
-    select quantileExactForEach(0.999)(
+    select quantileExactForEach(0.99)(
         arrayMap(x, y -> abs(x - y), metrics_by_label[1], metrics_by_label[2]) as d
         ) threshold
     ---- uncomment to see what the distribution is really like
@@ -33,7 +33,7 @@ from
         -- strip the query away before the join -- it might be several kB long;
         (select metrics, run, version from table) no_query,
         -- duplicate input measurements into many virtual runs
-        numbers(1, 100000) nn
+        numbers(1, 10000) nn
         -- for each virtual run, randomly reorder measurements
         order by virtual_run, rand()
     ) virtual_runs
@@ -20,7 +20,7 @@ parser = argparse.ArgumentParser(description='Run performance test.')
 parser.add_argument('file', metavar='FILE', type=argparse.FileType('r', encoding='utf-8'), nargs=1, help='test description file')
 parser.add_argument('--host', nargs='*', default=['localhost'], help="Server hostname(s). Corresponds to '--port' options.")
 parser.add_argument('--port', nargs='*', default=[9000], help="Server port(s). Corresponds to '--host' options.")
-parser.add_argument('--runs', type=int, default=int(os.environ.get('CHPC_RUNS', 13)), help='Number of query runs per server. Defaults to CHPC_RUNS environment variable.')
+parser.add_argument('--runs', type=int, default=int(os.environ.get('CHPC_RUNS', 7)), help='Number of query runs per server. Defaults to CHPC_RUNS environment variable.')
 parser.add_argument('--long', action='store_true', help='Do not skip the tests tagged as long.')
 parser.add_argument('--print-queries', action='store_true', help='Print test queries and exit.')
 parser.add_argument('--print-settings', action='store_true', help='Print test settings and exit.')
@@ -372,7 +372,7 @@ if args.report == 'main':
         'New, s', # 1
         'Ratio of speedup (-) or slowdown (+)', # 2
         'Relative difference (new − old) / old', # 3
-        'p < 0.001 threshold', # 4
+        'p < 0.01 threshold', # 4
         # Failed # 5
         'Test', # 6
         '#', # 7
@@ -416,7 +416,7 @@ if args.report == 'main':
         'Old, s', #0
         'New, s', #1
         'Relative difference (new - old)/old', #2
-        'p < 0.001 threshold', #3
+        'p < 0.01 threshold', #3
         # Failed #4
         'Test', #5
         '#', #6
@@ -470,12 +470,13 @@ if args.report == 'main':
     text = tableStart('Test times')
     text += tableHeader(columns)
 
-    nominal_runs = 13 # FIXME pass this as an argument
+    nominal_runs = 7 # FIXME pass this as an argument
     total_runs = (nominal_runs + 1) * 2 # one prewarm run, two servers
+    allowed_average_run_time = allowed_single_run_time + 60 / total_runs; # some allowance for fill/create queries
     attrs = ['' for c in columns]
     for r in rows:
         anchor = f'{currentTableAnchor()}.{r[0]}'
-        if float(r[6]) > 1.5 * total_runs:
+        if float(r[6]) > allowed_average_run_time * total_runs:
             # FIXME should be 15s max -- investigate parallel_insert
             slow_average_tests += 1
             attrs[6] = f'style="background: {color_bad}"'
@@ -649,7 +650,7 @@ elif args.report == 'all-queries':
         'New, s', #3
         'Ratio of speedup (-) or slowdown (+)', #4
         'Relative difference (new − old) / old', #5
-        'p < 0.001 threshold', #6
+        'p < 0.01 threshold', #6
         'Test', #7
         '#', #8
         'Query', #9
@@ -24,6 +24,7 @@ ln -s /usr/share/clickhouse-test/config/access_management.xml /etc/clickhouse-se
 ln -s /usr/share/clickhouse-test/config/ints_dictionary.xml /etc/clickhouse-server/
 ln -s /usr/share/clickhouse-test/config/strings_dictionary.xml /etc/clickhouse-server/
 ln -s /usr/share/clickhouse-test/config/decimals_dictionary.xml /etc/clickhouse-server/
+ln -s /usr/share/clickhouse-test/config/executable_dictionary.xml /etc/clickhouse-server/
 ln -s /usr/share/clickhouse-test/config/macros.xml /etc/clickhouse-server/config.d/
 ln -s /usr/share/clickhouse-test/config/disks.xml /etc/clickhouse-server/config.d/
 ln -s /usr/share/clickhouse-test/config/secure_ports.xml /etc/clickhouse-server/config.d/
@@ -24,6 +24,7 @@ ln -s /usr/share/clickhouse-test/config/access_management.xml /etc/clickhouse-se
 ln -s /usr/share/clickhouse-test/config/ints_dictionary.xml /etc/clickhouse-server/
 ln -s /usr/share/clickhouse-test/config/strings_dictionary.xml /etc/clickhouse-server/
 ln -s /usr/share/clickhouse-test/config/decimals_dictionary.xml /etc/clickhouse-server/
+ln -s /usr/share/clickhouse-test/config/executable_dictionary.xml /etc/clickhouse-server/
 ln -s /usr/share/clickhouse-test/config/macros.xml /etc/clickhouse-server/config.d/
 ln -s /usr/share/clickhouse-test/config/disks.xml /etc/clickhouse-server/config.d/
 ln -s /usr/share/clickhouse-test/config/secure_ports.xml /etc/clickhouse-server/config.d/
@@ -57,6 +57,7 @@ ln -s /usr/share/clickhouse-test/config/access_management.xml /etc/clickhouse-se
 ln -s /usr/share/clickhouse-test/config/ints_dictionary.xml /etc/clickhouse-server/
 ln -s /usr/share/clickhouse-test/config/strings_dictionary.xml /etc/clickhouse-server/
 ln -s /usr/share/clickhouse-test/config/decimals_dictionary.xml /etc/clickhouse-server/
+ln -s /usr/share/clickhouse-test/config/executable_dictionary.xml /etc/clickhouse-server/
 ln -s /usr/share/clickhouse-test/config/macros.xml /etc/clickhouse-server/config.d/
 ln -s /usr/share/clickhouse-test/config/disks.xml /etc/clickhouse-server/config.d/
 ln -s /usr/share/clickhouse-test/config/secure_ports.xml /etc/clickhouse-server/config.d/
@@ -28,7 +28,7 @@ def get_options(i):
     options = ""
     if 0 < i:
         options += " --order=random"
-    if i == 1:
+    if i % 2 == 1:
         options += " --atomic-db-engine"
     return options
 
@@ -10,42 +10,51 @@ results of a `SELECT`, and to perform `INSERT`s into a file-backed table.
 
 The supported formats are:
 
 | Format | Input | Output |
-|-----------------------------------------------------------------|-------|--------|
+|-------------------------------------------------------------------------------------------|-------|--------|
 | [TabSeparated](#tabseparated) | ✔ | ✔ |
 | [TabSeparatedRaw](#tabseparatedraw) | ✔ | ✔ |
 | [TabSeparatedWithNames](#tabseparatedwithnames) | ✔ | ✔ |
 | [TabSeparatedWithNamesAndTypes](#tabseparatedwithnamesandtypes) | ✔ | ✔ |
 | [Template](#format-template) | ✔ | ✔ |
 | [TemplateIgnoreSpaces](#templateignorespaces) | ✔ | ✗ |
 | [CSV](#csv) | ✔ | ✔ |
 | [CSVWithNames](#csvwithnames) | ✔ | ✔ |
 | [CustomSeparated](#format-customseparated) | ✔ | ✔ |
 | [Values](#data-format-values) | ✔ | ✔ |
 | [Vertical](#vertical) | ✗ | ✔ |
 | [VerticalRaw](#verticalraw) | ✗ | ✔ |
 | [JSON](#json) | ✗ | ✔ |
+| [JSONString](#jsonstring) | ✗ | ✔ |
 | [JSONCompact](#jsoncompact) | ✗ | ✔ |
+| [JSONCompactString](#jsoncompactstring) | ✗ | ✔ |
 | [JSONEachRow](#jsoneachrow) | ✔ | ✔ |
+| [JSONEachRowWithProgress](#jsoneachrowwithprogress) | ✗ | ✔ |
+| [JSONStringEachRow](#jsonstringeachrow) | ✔ | ✔ |
+| [JSONStringEachRowWithProgress](#jsonstringeachrowwithprogress) | ✗ | ✔ |
+| [JSONCompactEachRow](#jsoncompacteachrow) | ✔ | ✔ |
+| [JSONCompactEachRowWithNamesAndTypes](#jsoncompacteachrowwithnamesandtypes) | ✔ | ✔ |
+| [JSONCompactStringEachRow](#jsoncompactstringeachrow) | ✔ | ✔ |
+| [JSONCompactStringEachRowWithNamesAndTypes](#jsoncompactstringeachrowwithnamesandtypes) | ✔ | ✔ |
 | [TSKV](#tskv) | ✔ | ✔ |
 | [Pretty](#pretty) | ✗ | ✔ |
 | [PrettyCompact](#prettycompact) | ✗ | ✔ |
 | [PrettyCompactMonoBlock](#prettycompactmonoblock) | ✗ | ✔ |
 | [PrettyNoEscapes](#prettynoescapes) | ✗ | ✔ |
 | [PrettySpace](#prettyspace) | ✗ | ✔ |
 | [Protobuf](#protobuf) | ✔ | ✔ |
 | [Avro](#data-format-avro) | ✔ | ✔ |
 | [AvroConfluent](#data-format-avro-confluent) | ✔ | ✗ |
 | [Parquet](#data-format-parquet) | ✔ | ✔ |
 | [Arrow](#data-format-arrow) | ✔ | ✔ |
 | [ArrowStream](#data-format-arrow-stream) | ✔ | ✔ |
 | [ORC](#data-format-orc) | ✔ | ✗ |
 | [RowBinary](#rowbinary) | ✔ | ✔ |
 | [RowBinaryWithNamesAndTypes](#rowbinarywithnamesandtypes) | ✔ | ✔ |
 | [Native](#native) | ✔ | ✔ |
 | [Null](#null) | ✗ | ✔ |
 | [XML](#xml) | ✗ | ✔ |
 | [CapnProto](#capnproto) | ✔ | ✗ |
 
 You can control some format processing parameters with the ClickHouse settings. For more information read the [Settings](../operations/settings/settings.md) section.
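As context for the table above: a format is selected per query with the `FORMAT` clause. A minimal sketch (the table name `my_table` is hypothetical; any output-capable format from the table works):

``` sql
-- Hypothetical table; emits one JSON array per row.
SELECT *
FROM my_table
FORMAT JSONCompactEachRow
```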
@@ -392,62 +401,41 @@ SELECT SearchPhrase, count() AS c FROM test.hits GROUP BY SearchPhrase WITH TOTA
-    "meta":
-    [
-        {
-            "name": "SearchPhrase",
-            "type": "String"
-        },
-        {
-            "name": "c",
-            "type": "UInt64"
-        }
-    ],
-
-    "data":
-    [
-        {
-            "SearchPhrase": "",
-            "c": "8267016"
-        },
-        {
-            "SearchPhrase": "bathroom interior design",
-            "c": "2166"
-        },
-        {
-            "SearchPhrase": "yandex",
-            "c": "1655"
-        },
-        {
-            "SearchPhrase": "spring 2014 fashion",
-            "c": "1549"
-        },
-        {
-            "SearchPhrase": "freeform photos",
-            "c": "1480"
-        }
-    ],
-
-    "totals":
-    {
-        "SearchPhrase": "",
-        "c": "8873898"
-    },
-
-    "extremes":
-    {
-        "min":
-        {
-            "SearchPhrase": "",
-            "c": "1480"
-        },
-        "max":
-        {
-            "SearchPhrase": "",
-            "c": "8267016"
-        }
-    },
-
-    "rows": 5,
-
-    "rows_before_limit_at_least": 141137
-}
+    "meta":
+    [
+        {
+            "name": "'hello'",
+            "type": "String"
+        },
+        {
+            "name": "multiply(42, number)",
+            "type": "UInt64"
+        },
+        {
+            "name": "range(5)",
+            "type": "Array(UInt8)"
+        }
+    ],
+
+    "data":
+    [
+        {
+            "'hello'": "hello",
+            "multiply(42, number)": "0",
+            "range(5)": [0,1,2,3,4]
+        },
+        {
+            "'hello'": "hello",
+            "multiply(42, number)": "42",
+            "range(5)": [0,1,2,3,4]
+        },
+        {
+            "'hello'": "hello",
+            "multiply(42, number)": "84",
+            "range(5)": [0,1,2,3,4]
+        }
+    ],
+
+    "rows": 3,
+
+    "rows_before_limit_at_least": 3
+}
 ```
 
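For reference, the new sample output above matches what a query of this shape would produce (a sketch inferred from the `meta` section; `numbers(3)` yields `number` = 0, 1, 2):

``` sql
SELECT 'hello', multiply(42, number), range(5)
FROM numbers(3)
FORMAT JSON
```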
@@ -468,63 +456,165 @@ ClickHouse supports [NULL](../sql-reference/syntax.md), which is displayed as `n
 
 See also the [JSONEachRow](#jsoneachrow) format.
 
+## JSONString {#jsonstring}
+
+Differs from JSON only in that data fields are output in strings, not in typed json values.
+
+Example:
+
+```json
+{
+    "meta":
+    [
+        {
+            "name": "'hello'",
+            "type": "String"
+        },
+        {
+            "name": "multiply(42, number)",
+            "type": "UInt64"
+        },
+        {
+            "name": "range(5)",
+            "type": "Array(UInt8)"
+        }
+    ],
+
+    "data":
+    [
+        {
+            "'hello'": "hello",
+            "multiply(42, number)": "0",
+            "range(5)": "[0,1,2,3,4]"
+        },
+        {
+            "'hello'": "hello",
+            "multiply(42, number)": "42",
+            "range(5)": "[0,1,2,3,4]"
+        },
+        {
+            "'hello'": "hello",
+            "multiply(42, number)": "84",
+            "range(5)": "[0,1,2,3,4]"
+        }
+    ],
+
+    "rows": 3,
+
+    "rows_before_limit_at_least": 3
+}
+```
+
 ## JSONCompact {#jsoncompact}
+## JSONCompactString {#jsoncompactstring}
 
 Differs from JSON only in that data rows are output in arrays, not in objects.
 
 Example:
 
 ``` json
+// JSONCompact
 {
     "meta":
     [
         {
-            "name": "SearchPhrase",
+            "name": "'hello'",
             "type": "String"
         },
         {
-            "name": "c",
+            "name": "multiply(42, number)",
             "type": "UInt64"
+        },
+        {
+            "name": "range(5)",
+            "type": "Array(UInt8)"
         }
     ],
 
     "data":
     [
-        ["", "8267016"],
-        ["bathroom interior design", "2166"],
-        ["yandex", "1655"],
-        ["fashion trends spring 2014", "1549"],
-        ["freeform photo", "1480"]
+        ["hello", "0", [0,1,2,3,4]],
+        ["hello", "42", [0,1,2,3,4]],
+        ["hello", "84", [0,1,2,3,4]]
     ],
 
-    "totals": ["","8873898"],
-
-    "extremes":
-    {
-        "min": ["","1480"],
-        "max": ["","8267016"]
-    },
-
-    "rows": 5,
+    "rows": 3,
 
-    "rows_before_limit_at_least": 141137
+    "rows_before_limit_at_least": 3
 }
 ```
 
-This format is only appropriate for outputting a query result, but not for parsing (retrieving data to insert in a table).
-
-See also the `JSONEachRow` format.
+```json
+// JSONCompactString
+{
+    "meta":
+    [
+        {
+            "name": "'hello'",
+            "type": "String"
+        },
+        {
+            "name": "multiply(42, number)",
+            "type": "UInt64"
+        },
+        {
+            "name": "range(5)",
+            "type": "Array(UInt8)"
+        }
+    ],
+
+    "data":
+    [
+        ["hello", "0", "[0,1,2,3,4]"],
+        ["hello", "42", "[0,1,2,3,4]"],
+        ["hello", "84", "[0,1,2,3,4]"]
+    ],
+
+    "rows": 3,
+
+    "rows_before_limit_at_least": 3
+}
+```
 
 ## JSONEachRow {#jsoneachrow}
+## JSONStringEachRow {#jsonstringeachrow}
+## JSONCompactEachRow {#jsoncompacteachrow}
+## JSONCompactStringEachRow {#jsoncompactstringeachrow}
 
-When using this format, ClickHouse outputs rows as separated, newline-delimited JSON objects, but the data as a whole is not valid JSON.
+When using these formats, ClickHouse outputs rows as separated, newline-delimited JSON values, but the data as a whole is not valid JSON.
 
 ``` json
-{"SearchPhrase":"curtain designs","count()":"1064"}
-{"SearchPhrase":"baku","count()":"1000"}
-{"SearchPhrase":"","count()":"8267016"}
+{"some_int":42,"some_str":"hello","some_tuple":[1,"a"]} // JSONEachRow
+[42,"hello",[1,"a"]] // JSONCompactEachRow
+["42","hello","(2,'a')"] // JSONCompactStringsEachRow
 ```
 
-When inserting the data, you should provide a separate JSON object for each row.
+When inserting the data, you should provide a separate JSON value for each row.
+
+## JSONEachRowWithProgress {#jsoneachrowwithprogress}
+## JSONStringEachRowWithProgress {#jsonstringeachrowwithprogress}
+
+Differs from JSONEachRow/JSONStringEachRow in that ClickHouse will also yield progress information as JSON objects.
+
+```json
+{"row":{"'hello'":"hello","multiply(42, number)":"0","range(5)":[0,1,2,3,4]}}
+{"row":{"'hello'":"hello","multiply(42, number)":"42","range(5)":[0,1,2,3,4]}}
+{"row":{"'hello'":"hello","multiply(42, number)":"84","range(5)":[0,1,2,3,4]}}
+{"progress":{"read_rows":"3","read_bytes":"24","written_rows":"0","written_bytes":"0","total_rows_to_read":"3"}}
+```
+
+## JSONCompactEachRowWithNamesAndTypes {#jsoncompacteachrowwithnamesandtypes}
+## JSONCompactStringEachRowWithNamesAndTypes {#jsoncompactstringeachrowwithnamesandtypes}
+
+Differs from JSONCompactEachRow/JSONCompactStringEachRow in that the column names and types are written as the first two rows.
+
+```json
+["'hello'", "multiply(42, number)", "range(5)"]
+["String", "UInt64", "Array(UInt8)"]
+["hello", "0", [0,1,2,3,4]]
+["hello", "42", [0,1,2,3,4]]
+["hello", "84", [0,1,2,3,4]]
+```
 
 ### Inserting Data {#inserting-data}
 
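As a sketch of the insert path described in the hunk above (the table `events` and its columns are hypothetical), each input row is one JSON value following the query:

``` sql
-- Hypothetical table; one JSON object per line follows the query.
INSERT INTO events FORMAT JSONEachRow
{"some_int": 42, "some_str": "hello"}
{"some_int": 43, "some_str": "world"}
```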
@@ -6,6 +6,7 @@ Columns:
 
 - `event_date` ([Date](../../sql-reference/data-types/date.md)) — Event date.
 - `event_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — Event time.
+- `event_time_microseconds` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — Event time with microseconds resolution.
 - `name` ([String](../../sql-reference/data-types/string.md)) — Metric name.
 - `value` ([Float64](../../sql-reference/data-types/float.md)) — Metric value.
 
@@ -16,18 +17,18 @@ SELECT * FROM system.asynchronous_metric_log LIMIT 10
 ```
 
 ``` text
-┌─event_date─┬──────────event_time─┬─name─────────────────────────────────────┬────value─┐
-│ 2020-06-22 │ 2020-06-22 06:57:30 │ jemalloc.arenas.all.pmuzzy                │        0 │
-│ 2020-06-22 │ 2020-06-22 06:57:30 │ jemalloc.arenas.all.pdirty                │     4214 │
-│ 2020-06-22 │ 2020-06-22 06:57:30 │ jemalloc.background_thread.run_intervals  │        0 │
-│ 2020-06-22 │ 2020-06-22 06:57:30 │ jemalloc.background_thread.num_runs       │        0 │
-│ 2020-06-22 │ 2020-06-22 06:57:30 │ jemalloc.retained                         │ 17657856 │
-│ 2020-06-22 │ 2020-06-22 06:57:30 │ jemalloc.mapped                           │ 71471104 │
-│ 2020-06-22 │ 2020-06-22 06:57:30 │ jemalloc.resident                         │ 61538304 │
-│ 2020-06-22 │ 2020-06-22 06:57:30 │ jemalloc.metadata                         │  6199264 │
-│ 2020-06-22 │ 2020-06-22 06:57:30 │ jemalloc.allocated                        │ 38074336 │
-│ 2020-06-22 │ 2020-06-22 06:57:30 │ jemalloc.epoch                            │        2 │
-└────────────┴─────────────────────┴──────────────────────────────────────────┴──────────┘
+┌─event_date─┬──────────event_time─┬────event_time_microseconds─┬─name─────────────────────────────────────┬─────value─┐
+│ 2020-09-05 │ 2020-09-05 15:56:30 │ 2020-09-05 15:56:30.025227 │ CPUFrequencyMHz_0                         │    2120.9 │
+│ 2020-09-05 │ 2020-09-05 15:56:30 │ 2020-09-05 15:56:30.025227 │ jemalloc.arenas.all.pmuzzy                │       743 │
+│ 2020-09-05 │ 2020-09-05 15:56:30 │ 2020-09-05 15:56:30.025227 │ jemalloc.arenas.all.pdirty                │     26288 │
+│ 2020-09-05 │ 2020-09-05 15:56:30 │ 2020-09-05 15:56:30.025227 │ jemalloc.background_thread.run_intervals  │         0 │
+│ 2020-09-05 │ 2020-09-05 15:56:30 │ 2020-09-05 15:56:30.025227 │ jemalloc.background_thread.num_runs       │         0 │
+│ 2020-09-05 │ 2020-09-05 15:56:30 │ 2020-09-05 15:56:30.025227 │ jemalloc.retained                         │  60694528 │
+│ 2020-09-05 │ 2020-09-05 15:56:30 │ 2020-09-05 15:56:30.025227 │ jemalloc.mapped                           │ 303161344 │
+│ 2020-09-05 │ 2020-09-05 15:56:30 │ 2020-09-05 15:56:30.025227 │ jemalloc.resident                         │ 260931584 │
+│ 2020-09-05 │ 2020-09-05 15:56:30 │ 2020-09-05 15:56:30.025227 │ jemalloc.metadata                         │  12079488 │
+│ 2020-09-05 │ 2020-09-05 15:56:30 │ 2020-09-05 15:56:30.025227 │ jemalloc.allocated                        │ 133756128 │
+└────────────┴─────────────────────┴────────────────────────────┴──────────────────────────────────────────┴───────────┘
 ```
 
 **See Also**
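A sketch of a follow-up query against this table (the metric name is taken from the sample output above; only columns documented in the hunk are used):

``` sql
SELECT event_time, value
FROM system.asynchronous_metric_log
WHERE name = 'CPUFrequencyMHz_0'
ORDER BY event_time DESC
LIMIT 5
```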
@@ -10,12 +10,16 @@ Columns:
 - `progress` (Float64) — The percentage of completed work from 0 to 1.
 - `num_parts` (UInt64) — The number of pieces to be merged.
 - `result_part_name` (String) — The name of the part that will be formed as the result of merging.
-- `is_mutation` (UInt8) - 1 if this process is a part mutation.
+- `is_mutation` (UInt8) — 1 if this process is a part mutation.
 - `total_size_bytes_compressed` (UInt64) — The total size of the compressed data in the merged chunks.
 - `total_size_marks` (UInt64) — The total number of marks in the merged parts.
 - `bytes_read_uncompressed` (UInt64) — Number of bytes read, uncompressed.
 - `rows_read` (UInt64) — Number of rows read.
 - `bytes_written_uncompressed` (UInt64) — Number of bytes written, uncompressed.
 - `rows_written` (UInt64) — Number of rows written.
+- `memory_usage` (UInt64) — Memory consumption of the merge process.
+- `thread_id` (UInt64) — Thread ID of the merge process.
+- `merge_type` — The type of current merge. Empty if it's a mutation.
+- `merge_algorithm` — The algorithm used in current merge. Empty if it's a mutation.
 
 [Original article](https://clickhouse.tech/docs/en/operations/system_tables/merges) <!--hide-->
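For reference, a sketch that watches running merges using only the columns documented above:

``` sql
SELECT result_part_name, num_parts, progress, memory_usage
FROM system.merges
WHERE is_mutation = 0
```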
@@ -23,28 +23,28 @@ SELECT * FROM system.metric_log LIMIT 1 FORMAT Vertical;
 ``` text
 Row 1:
 ──────
-event_date:                                            2020-02-18
-event_time:                                            2020-02-18 07:15:33
-milliseconds:                                          554
+event_date:                                            2020-09-05
+event_time:                                            2020-09-05 16:22:33
+event_time_microseconds:                               2020-09-05 16:22:33.196807
+milliseconds:                                          196
 ProfileEvent_Query:                                    0
 ProfileEvent_SelectQuery:                              0
 ProfileEvent_InsertQuery:                              0
-ProfileEvent_FileOpen:                                 0
-ProfileEvent_Seek:                                     0
-ProfileEvent_ReadBufferFromFileDescriptorRead:         1
-ProfileEvent_ReadBufferFromFileDescriptorReadFailed:   0
-ProfileEvent_ReadBufferFromFileDescriptorReadBytes:    0
-ProfileEvent_WriteBufferFromFileDescriptorWrite:       1
-ProfileEvent_WriteBufferFromFileDescriptorWriteFailed: 0
-ProfileEvent_WriteBufferFromFileDescriptorWriteBytes:  56
+ProfileEvent_FailedQuery:                              0
+ProfileEvent_FailedSelectQuery:                        0
 ...
-CurrentMetric_Query:                                   0
-CurrentMetric_Merge:                                   0
-CurrentMetric_PartMutation:                            0
-CurrentMetric_ReplicatedFetch:                         0
-CurrentMetric_ReplicatedSend:                          0
-CurrentMetric_ReplicatedChecks:                        0
 ...
+CurrentMetric_Revision:                                54439
+CurrentMetric_VersionInteger:                          20009001
+CurrentMetric_RWLockWaitingReaders:                    0
+CurrentMetric_RWLockWaitingWriters:                    0
+CurrentMetric_RWLockActiveReaders:                     0
+CurrentMetric_RWLockActiveWriters:                     0
+CurrentMetric_GlobalThread:                            74
+CurrentMetric_GlobalThreadActive:                      26
+CurrentMetric_LocalThread:                             0
+CurrentMetric_LocalThreadActive:                       0
+CurrentMetric_DistributedFilesToInsert:                0
 ```
 
 **See also**
@@ -82,8 +82,8 @@ res: /lib/x86_64-linux-gnu/libc-2.27.so
 
 - [Introspection Functions](../../sql-reference/functions/introspection.md) — Which introspection functions are available and how to use them.
 - [system.trace_log](../system-tables/trace_log.md) — Contains stack traces collected by the sampling query profiler.
-- [arrayMap](../../sql-reference/functions/higher-order-functions.md#higher_order_functions-array-map) — Description and usage example of the `arrayMap` function.
-- [arrayFilter](../../sql-reference/functions/higher-order-functions.md#higher_order_functions-array-filter) — Description and usage example of the `arrayFilter` function.
+- [arrayMap](../../sql-reference/functions/array-functions.md#array-map) — Description and usage example of the `arrayMap` function.
+- [arrayFilter](../../sql-reference/functions/array-functions.md#array-filter) — Description and usage example of the `arrayFilter` function.
 
 
 [Original article](https://clickhouse.tech/docs/en/operations/system-tables/stack_trace) <!--hide-->
@@ -38,7 +38,7 @@ clickhouse-benchmark [keys] < queries_file
 - `-d N`, `--delay=N` — Interval in seconds between intermediate reports (set 0 to disable reports). Default value: 1.
 - `-h WORD`, `--host=WORD` — Server host. Default value: `localhost`. For the [comparison mode](#clickhouse-benchmark-comparison-mode) you can use multiple `-h` keys.
 - `-p N`, `--port=N` — Server port. Default value: 9000. For the [comparison mode](#clickhouse-benchmark-comparison-mode) you can use multiple `-p` keys.
-- `-i N`, `--iterations=N` — Total number of queries. Default value: 0.
+- `-i N`, `--iterations=N` — Total number of queries. Default value: 0 (repeat forever).
 - `-r`, `--randomize` — Random order of queries execution if there is more then one input query.
 - `-s`, `--secure` — Using TLS connection.
 - `-t N`, `--timelimit=N` — Time limit in seconds. `clickhouse-benchmark` stops sending queries when the specified time limit is reached. Default value: 0 (time limit disabled).
@@ -7,7 +7,7 @@ toc_title: Tuple(T1, T2, ...)
 
 A tuple of elements, each having an individual [type](../../sql-reference/data-types/index.md#data_types).
 
-Tuples are used for temporary column grouping. Columns can be grouped when an IN expression is used in a query, and for specifying certain formal parameters of lambda functions. For more information, see the sections [IN operators](../../sql-reference/operators/in.md) and [Higher order functions](../../sql-reference/functions/higher-order-functions.md).
+Tuples are used for temporary column grouping. Columns can be grouped when an IN expression is used in a query, and for specifying certain formal parameters of lambda functions. For more information, see the sections [IN operators](../../sql-reference/operators/in.md) and [Higher order functions](../../sql-reference/functions/index.md#higher-order-functions).
 
 Tuples can be the result of a query. In this case, for text formats other than JSON, values are comma-separated in brackets. In JSON formats, tuples are output as arrays (in square brackets).
 
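A minimal sketch of a tuple produced by a query (`tuple()` and `toTypeName` are standard ClickHouse functions):

``` sql
SELECT tuple(1, 'a') AS x, toTypeName(x)
-- Returns (1,'a') with type Tuple(UInt8, String)
```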
@@ -1,5 +1,5 @@
 ---
-toc_priority: 35
+toc_priority: 34
 toc_title: Arithmetic
 ---
 
@@ -1,9 +1,9 @@
 ---
-toc_priority: 46
+toc_priority: 35
 toc_title: Arrays
 ---
 
-# Functions for Working with Arrays {#functions-for-working-with-arrays}
+# Array Functions {#functions-for-working-with-arrays}
 
 ## empty {#function-empty}
 
@@ -241,6 +241,12 @@ SELECT indexOf([1, 3, NULL, NULL], NULL)
 
 Elements set to `NULL` are handled as normal values.
 
+## arrayCount(\[func,\] arr1, …) {#array-count}
+
+Returns the number of elements in the arr array for which func returns something other than 0. If ‘func’ is not specified, it returns the number of non-zero elements in the array.
+
+Note that the `arrayCount` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.
+
 ## countEqual(arr, x) {#countequalarr-x}
 
 Returns the number of elements in the array equal to x. Equivalent to arrayCount (elem -\> elem = x, arr).
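Since the added `arrayCount` documentation carries no example, here is a sketch of both call forms it describes:

``` sql
SELECT
    arrayCount([0, 1, 2, 0]) AS non_zero,           -- 2: non-zero elements
    arrayCount(x -> x > 1, [0, 1, 2, 3]) AS gt_one  -- 2: elements greater than 1
```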
@@ -568,7 +574,7 @@ SELECT arraySort([1, nan, 2, NULL, 3, nan, -4, NULL, inf, -inf]);
 - `NaN` values are right before `NULL`.
 - `Inf` values are right before `NaN`.
 
-Note that `arraySort` is a [higher-order function](../../sql-reference/functions/higher-order-functions.md). You can pass a lambda function to it as the first argument. In this case, sorting order is determined by the result of the lambda function applied to the elements of the array.
+Note that `arraySort` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument. In this case, sorting order is determined by the result of the lambda function applied to the elements of the array.
 
 Let’s consider the following example:
 
@@ -668,7 +674,7 @@ SELECT arrayReverseSort([1, nan, 2, NULL, 3, nan, -4, NULL, inf, -inf]) as res;
 - `NaN` values are right before `NULL`.
 - `-Inf` values are right before `NaN`.
 
-Note that the `arrayReverseSort` is a [higher-order function](../../sql-reference/functions/higher-order-functions.md). You can pass a lambda function to it as the first argument. Example is shown below.
+Note that the `arrayReverseSort` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument. Example is shown below.
 
 ``` sql
 SELECT arrayReverseSort((x) -> -x, [1, 2, 3]) as res;
@@ -1120,7 +1126,205 @@ Result:
 ``` text
 ┌─arrayAUC([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1])─┐
 │                                          0.75 │
-└────────────────────────────────────────---──┘
+└───────────────────────────────────────────────┘
 ```
 
+## arrayMap(func, arr1, …) {#array-map}
+
+Returns an array obtained from the original application of the `func` function to each element in the `arr` array.
+
+Examples:
+
+``` sql
+SELECT arrayMap(x -> (x + 2), [1, 2, 3]) as res;
+```
+
+``` text
+┌─res─────┐
+│ [3,4,5] │
+└─────────┘
+```
+
+The following example shows how to create a tuple of elements from different arrays:
+
+``` sql
+SELECT arrayMap((x, y) -> (x, y), [1, 2, 3], [4, 5, 6]) AS res
+```
+
+``` text
+┌─res─────────────────┐
+│ [(1,4),(2,5),(3,6)] │
+└─────────────────────┘
+```
+
+Note that the `arrayMap` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.
+
+## arrayFilter(func, arr1, …) {#array-filter}
+
+Returns an array containing only the elements in `arr1` for which `func` returns something other than 0.
+
+Examples:
+
+``` sql
+SELECT arrayFilter(x -> x LIKE '%World%', ['Hello', 'abc World']) AS res
+```
+
+``` text
+┌─res───────────┐
+│ ['abc World'] │
+└───────────────┘
+```
+
+``` sql
+SELECT
+    arrayFilter(
+        (i, x) -> x LIKE '%World%',
+        arrayEnumerate(arr),
+        ['Hello', 'abc World'] AS arr)
+    AS res
+```
+
+``` text
+┌─res─┐
+│ [2] │
+└─────┘
+```
+
+Note that the `arrayFilter` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.
+
+## arrayFill(func, arr1, …) {#array-fill}
+
+Scan through `arr1` from the first element to the last element and replace `arr1[i]` by `arr1[i - 1]` if `func` returns 0. The first element of `arr1` will not be replaced.
+
+Examples:
+
+``` sql
+SELECT arrayFill(x -> not isNull(x), [1, null, 3, 11, 12, null, null, 5, 6, 14, null, null]) AS res
+```
+
+``` text
+┌─res──────────────────────────────┐
+│ [1,1,3,11,12,12,12,5,6,14,14,14] │
+└──────────────────────────────────┘
+```
+
+Note that the `arrayFill` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.
+
+## arrayReverseFill(func, arr1, …) {#array-reverse-fill}
+
+Scan through `arr1` from the last element to the first element and replace `arr1[i]` by `arr1[i + 1]` if `func` returns 0. The last element of `arr1` will not be replaced.
+
+Examples:
+
+``` sql
+SELECT arrayReverseFill(x -> not isNull(x), [1, null, 3, 11, 12, null, null, 5, 6, 14, null, null]) AS res
+```
+
+``` text
+┌─res────────────────────────────────┐
+│ [1,3,3,11,12,5,5,5,6,14,NULL,NULL] │
+└────────────────────────────────────┘
+```
+
+Note that the `arrayReverseFill` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.
+
+## arraySplit(func, arr1, …) {#array-split}
+
+Split `arr1` into multiple arrays. When `func` returns something other than 0, the array will be split on the left hand side of the element. The array will not be split before the first element.
+
+Examples:
+
+``` sql
+SELECT arraySplit((x, y) -> y, [1, 2, 3, 4, 5], [1, 0, 0, 1, 0]) AS res
+```
+
+``` text
+┌─res─────────────┐
+│ [[1,2,3],[4,5]] │
+└─────────────────┘
+```
+
+Note that the `arraySplit` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.
+
+## arrayReverseSplit(func, arr1, …) {#array-reverse-split}
+
+Split `arr1` into multiple arrays. When `func` returns something other than 0, the array will be split on the right hand side of the element. The array will not be split after the last element.
+
+Examples:
+
+``` sql
+SELECT arrayReverseSplit((x, y) -> y, [1, 2, 3, 4, 5], [1, 0, 0, 1, 0]) AS res
+```
+
+``` text
+┌─res───────────────┐
+│ [[1],[2,3,4],[5]] │
+└───────────────────┘
+```
+
+Note that the `arrayReverseSplit` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.
+
+## arrayExists(\[func,\] arr1, …) {#arrayexistsfunc-arr1}
+
+Returns 1 if there is at least one element in `arr` for which `func` returns something other than 0. Otherwise, it returns 0.
+
+Note that the `arrayExists` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.
+
+## arrayAll(\[func,\] arr1, …) {#arrayallfunc-arr1}
+
+Returns 1 if `func` returns something other than 0 for all the elements in `arr`. Otherwise, it returns 0.
+
+Note that the `arrayAll` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.
+
+## arrayFirst(func, arr1, …) {#array-first}
+
+Returns the first element in the `arr1` array for which `func` returns something other than 0.
+
+Note that the `arrayFirst` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.
+
+## arrayFirstIndex(func, arr1, …) {#array-first-index}
+
+Returns the index of the first element in the `arr1` array for which `func` returns something other than 0.
+
+Note that the `arrayFirstIndex` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.
+
+## arraySum(\[func,\] arr1, …) {#array-sum}
+
+Returns the sum of the `func` values. If the function is omitted, it just returns the sum of the array elements.
+
+Note that the `arraySum` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.
+
+## arrayCumSum(\[func,\] arr1, …) {#arraycumsumfunc-arr1}
+
+Returns an array of partial sums of elements in the source array (a running sum). If the `func` function is specified, then the values of the array elements are converted by this function before summing.
+
+Example:
+
+``` sql
+SELECT arrayCumSum([1, 1, 1, 1]) AS res
+```
+
+``` text
+┌─res──────────┐
+│ [1, 2, 3, 4] │
+└──────────────┘
+```
+
+Note that the `arrayCumSum` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.
+
+## arrayCumSumNonNegative(arr) {#arraycumsumnonnegativearr}
+
+Same as `arrayCumSum`, returns an array of partial sums of elements in the source array (a running sum). Differs from `arrayCumSum` in that when the returned value contains a value less than zero, that value is replaced with zero and the subsequent calculation is performed with zero parameters. For example:
+
+``` sql
+SELECT arrayCumSumNonNegative([1, 1, -4, 1]) AS res
+```
+
+``` text
+┌─res───────┐
+│ [1,2,0,1] │
+└───────────┘
+```
+
+Note that the `arrayCumSumNonNegative` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.
+
 [Original article](https://clickhouse.tech/docs/en/query_language/functions/array_functions/) <!--hide-->
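As a quick sketch of the optional-lambda forms documented in the hunk above (`arraySum` and `arrayExists`, with and without the `func` argument):

``` sql
SELECT
    arraySum([1, 2, 3]) AS plain_sum,               -- 6
    arraySum(x -> x * x, [1, 2, 3]) AS square_sum,  -- 14
    arrayExists(x -> x = 2, [1, 2, 3]) AS has_two   -- 1
```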
@@ -3,6 +3,9 @@ toc_priority: 58
 toc_title: External Dictionaries
 ---
 
+!!! attention "Attention"
+    `dict_name` parameter must be fully qualified for dictionaries created with DDL queries. Eg. `<database>.<dict_name>`.
+
 # Functions for Working with External Dictionaries {#ext_dict_functions}
 
 For information on connecting and configuring external dictionaries, see [External dictionaries](../../sql-reference/dictionaries/external-dictionaries/external-dicts.md).
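A sketch of the fully qualified form required by the attention note above (the dictionary `db.my_dict` and attribute `value` are hypothetical names):

``` sql
-- `db.my_dict` and attribute `value` are hypothetical.
SELECT dictGet('db.my_dict', 'value', toUInt64(1))
```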
@@ -1,262 +0,0 @@
----
-toc_priority: 57
-toc_title: Higher-Order
----
-
-# Higher-order Functions {#higher-order-functions}
-
-## `->` operator, lambda(params, expr) function {#operator-lambdaparams-expr-function}
-
-Allows describing a lambda function for passing to a higher-order function. The left side of the arrow has a formal parameter, which is any ID, or multiple formal parameters – any IDs in a tuple. The right side of the arrow has an expression that can use these formal parameters, as well as any table columns.
-
-Examples: `x -> 2 * x, str -> str != Referer.`
-
-Higher-order functions can only accept lambda functions as their functional argument.
-
-A lambda function that accepts multiple arguments can be passed to a higher-order function. In this case, the higher-order function is passed several arrays of identical length that these arguments will correspond to.
-
-For some functions, such as [arrayCount](#higher_order_functions-array-count) or [arraySum](#higher_order_functions-array-count), the first argument (the lambda function) can be omitted. In this case, identical mapping is assumed.
-
-A lambda function can’t be omitted for the following functions:
-
-- [arrayMap](#higher_order_functions-array-map)
-- [arrayFilter](#higher_order_functions-array-filter)
-- [arrayFill](#higher_order_functions-array-fill)
-- [arrayReverseFill](#higher_order_functions-array-reverse-fill)
-- [arraySplit](#higher_order_functions-array-split)
-- [arrayReverseSplit](#higher_order_functions-array-reverse-split)
-- [arrayFirst](#higher_order_functions-array-first)
-- [arrayFirstIndex](#higher_order_functions-array-first-index)
-
-### arrayMap(func, arr1, …) {#higher_order_functions-array-map}
-
-Returns an array obtained from the original application of the `func` function to each element in the `arr` array.
-
-Examples:
-
-``` sql
-SELECT arrayMap(x -> (x + 2), [1, 2, 3]) as res;
-```
-
-``` text
-┌─res─────┐
-│ [3,4,5] │
-└─────────┘
-```
-
-The following example shows how to create a tuple of elements from different arrays:
-
-``` sql
-SELECT arrayMap((x, y) -> (x, y), [1, 2, 3], [4, 5, 6]) AS res
-```
-
-``` text
-┌─res─────────────────┐
-│ [(1,4),(2,5),(3,6)] │
-└─────────────────────┘
-```
-
-Note that the first argument (lambda function) can’t be omitted in the `arrayMap` function.
-
-### arrayFilter(func, arr1, …) {#higher_order_functions-array-filter}
-
-Returns an array containing only the elements in `arr1` for which `func` returns something other than 0.
-
-Examples:
-
-``` sql
-SELECT arrayFilter(x -> x LIKE '%World%', ['Hello', 'abc World']) AS res
-```
-
-``` text
-┌─res───────────┐
-│ ['abc World'] │
-└───────────────┘
-```
-
-``` sql
-SELECT
-    arrayFilter(
-        (i, x) -> x LIKE '%World%',
-        arrayEnumerate(arr),
-        ['Hello', 'abc World'] AS arr)
-    AS res
-```
-
-``` text
-┌─res─┐
-│ [2] │
-└─────┘
-```
-
-Note that the first argument (lambda function) can’t be omitted in the `arrayFilter` function.
-
-### arrayFill(func, arr1, …) {#higher_order_functions-array-fill}
-
-Scan through `arr1` from the first element to the last element and replace `arr1[i]` by `arr1[i - 1]` if `func` returns 0. The first element of `arr1` will not be replaced.
-
-Examples:
-
-``` sql
-SELECT arrayFill(x -> not isNull(x), [1, null, 3, 11, 12, null, null, 5, 6, 14, null, null]) AS res
-```
-
-``` text
-┌─res──────────────────────────────┐
-│ [1,1,3,11,12,12,12,5,6,14,14,14] │
-└──────────────────────────────────┘
-```
-
-Note that the first argument (lambda function) can’t be omitted in the `arrayFill` function.
-
-### arrayReverseFill(func, arr1, …) {#higher_order_functions-array-reverse-fill}
-
-Scan through `arr1` from the last element to the first element and replace `arr1[i]` by `arr1[i + 1]` if `func` returns 0. The last element of `arr1` will not be replaced.
-
-Examples:
-
-``` sql
-SELECT arrayReverseFill(x -> not isNull(x), [1, null, 3, 11, 12, null, null, 5, 6, 14, null, null]) AS res
-```
-
-``` text
-┌─res────────────────────────────────┐
-│ [1,3,3,11,12,5,5,5,6,14,NULL,NULL] │
-└────────────────────────────────────┘
-```
-
-Note that the first argument (lambda function) can’t be omitted in the `arrayReverseFill` function.
-
-### arraySplit(func, arr1, …) {#higher_order_functions-array-split}
-
-Split `arr1` into multiple arrays. When `func` returns something other than 0, the array will be split on the left hand side of the element. The array will not be split before the first element.
-
-Examples:
-
-``` sql
-SELECT arraySplit((x, y) -> y, [1, 2, 3, 4, 5], [1, 0, 0, 1, 0]) AS res
-```
-
-``` text
-┌─res─────────────┐
-│ [[1,2,3],[4,5]] │
-└─────────────────┘
-```
-
-Note that the first argument (lambda function) can’t be omitted in the `arraySplit` function.
-
-### arrayReverseSplit(func, arr1, …) {#higher_order_functions-array-reverse-split}
-
-Split `arr1` into multiple arrays. When `func` returns something other than 0, the array will be split on the right hand side of the element. The array will not be split after the last element.
-
-Examples:
-
-``` sql
-SELECT arrayReverseSplit((x, y) -> y, [1, 2, 3, 4, 5], [1, 0, 0, 1, 0]) AS res
-```
-
-``` text
-┌─res───────────────┐
-│ [[1],[2,3,4],[5]] │
-└───────────────────┘
-```
-
-Note that the first argument (lambda function) can’t be omitted in the `arraySplit` function.
-
-### arrayCount(\[func,\] arr1, …) {#higher_order_functions-array-count}
-
-Returns the number of elements in the arr array for which func returns something other than 0. If ‘func’ is not specified, it returns the number of non-zero elements in the array.
-
-### arrayExists(\[func,\] arr1, …) {#arrayexistsfunc-arr1}
-
-Returns 1 if there is at least one element in ‘arr’ for which ‘func’ returns something other than 0. Otherwise, it returns 0.
-
-### arrayAll(\[func,\] arr1, …) {#arrayallfunc-arr1}
-
-Returns 1 if ‘func’ returns something other than 0 for all the elements in ‘arr’. Otherwise, it returns 0.
-
-### arraySum(\[func,\] arr1, …) {#higher-order-functions-array-sum}
-
-Returns the sum of the ‘func’ values. If the function is omitted, it just returns the sum of the array elements.
-
-### arrayFirst(func, arr1, …) {#higher_order_functions-array-first}
-
-Returns the first element in the ‘arr1’ array for which ‘func’ returns something other than 0.
-
-Note that the first argument (lambda function) can’t be omitted in the `arrayFirst` function.
-
-### arrayFirstIndex(func, arr1, …) {#higher_order_functions-array-first-index}
-
-Returns the index of the first element in the ‘arr1’ array for which ‘func’ returns something other than 0.
-
-Note that the first argument (lambda function) can’t be omitted in the `arrayFirstIndex` function.
-
-### arrayCumSum(\[func,\] arr1, …) {#arraycumsumfunc-arr1}
-
-Returns an array of partial sums of elements in the source array (a running sum). If the `func` function is specified, then the values of the array elements are converted by this function before summing.
-
-Example:
-
-``` sql
-SELECT arrayCumSum([1, 1, 1, 1]) AS res
-```
-
-``` text
-┌─res──────────┐
-│ [1, 2, 3, 4] │
-└──────────────┘
-```
-
-### arrayCumSumNonNegative(arr) {#arraycumsumnonnegativearr}
-
-Same as `arrayCumSum`, returns an array of partial sums of elements in the source array (a running sum). Different `arrayCumSum`, when then returned value contains a value less than zero, the value is replace with zero and the subsequent calculation is performed with zero parameters. For example:
-
-``` sql
-SELECT arrayCumSumNonNegative([1, 1, -4, 1]) AS res
-```
-
-``` text
-┌─res───────┐
-│ [1,2,0,1] │
-└───────────┘
-```
-
-### arraySort(\[func,\] arr1, …) {#arraysortfunc-arr1}
-
-Returns an array as result of sorting the elements of `arr1` in ascending order. If the `func` function is specified, sorting order is determined by the result of the function `func` applied to the elements of array (arrays)
-
-The [Schwartzian transform](https://en.wikipedia.org/wiki/Schwartzian_transform) is used to improve sorting efficiency.
-
-Example:
-
-``` sql
-SELECT arraySort((x, y) -> y, ['hello', 'world'], [2, 1]);
-```
-
-``` text
-┌─res────────────────┐
-│ ['world', 'hello'] │
-└────────────────────┘
-```
-
-For more information about the `arraySort` method, see the [Functions for Working With Arrays](../../sql-reference/functions/array-functions.md#array_functions-sort) section.
-
-### arrayReverseSort(\[func,\] arr1, …) {#arrayreversesortfunc-arr1}
-
-Returns an array as result of sorting the elements of `arr1` in descending order. If the `func` function is specified, sorting order is determined by the result of the function `func` applied to the elements of array (arrays).
-
-Example:
-
-``` sql
-SELECT arrayReverseSort((x, y) -> y, ['hello', 'world'], [2, 1]) as res;
-```
-
-``` text
-┌─res───────────────┐
-│ ['hello','world'] │
-└───────────────────┘
-```
-
-For more information about the `arrayReverseSort` method, see the [Functions for Working With Arrays](../../sql-reference/functions/array-functions.md#array_functions-reverse-sort) section.
-
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/higher_order_functions/) <!--hide-->
@@ -44,6 +44,21 @@ Functions have the following behaviors:
 
 Functions can’t change the values of their arguments – any changes are returned as the result. Thus, the result of calculating separate functions does not depend on the order in which the functions are written in the query.
 
+## Higher-order functions, `->` operator and lambda(params, expr) function {#higher-order-functions}
+
+Higher-order functions can only accept lambda functions as their functional argument. To pass a lambda function to a higher-order function, use the `->` operator. The left side of the arrow has a formal parameter, which is any ID, or multiple formal parameters – any IDs in a tuple. The right side of the arrow has an expression that can use these formal parameters, as well as any table columns.
+
+Examples:
+
+```
+x -> 2 * x
+str -> str != Referer
+```
+
+A lambda function that accepts multiple arguments can also be passed to a higher-order function. In this case, the higher-order function is passed several arrays of identical length that these arguments will correspond to.
+
+For some functions the first argument (the lambda function) can be omitted. In this case, identical mapping is assumed.
+
 ## Error Handling {#error-handling}
 
 Some functions might throw an exception if the data is invalid. In this case, the query is canceled and an error text is returned to the client. For distributed processing, when an exception occurs on one of the servers, the other servers also attempt to abort the query.
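A sketch tying the lambda syntax above to a higher-order call (`arrayMap` is documented in the array-functions hunk earlier):

``` sql
SELECT arrayMap(x -> 2 * x, [1, 2, 3]) AS res  -- [2,4,6]
```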
@@ -98,7 +98,7 @@ LIMIT 1
 \G
 ```
 
-The [arrayMap](../../sql-reference/functions/higher-order-functions.md#higher_order_functions-array-map) function allows to process each individual element of the `trace` array by the `addressToLine` function. The result of this processing you see in the `trace_source_code_lines` column of output.
+The [arrayMap](../../sql-reference/functions/array-functions.md#array-map) function allows to process each individual element of the `trace` array by the `addressToLine` function. The result of this processing you see in the `trace_source_code_lines` column of output.
 
 ``` text
 Row 1:
|
|||||||
\G
|
\G
|
||||||
```
|
```
|
||||||
|
|
||||||
The [arrayMap](../../sql-reference/functions/higher-order-functions.md#higher_order_functions-array-map) function allows to process each individual element of the `trace` array by the `addressToSymbols` function. The result of this processing you see in the `trace_symbols` column of output.
|
The [arrayMap](../../sql-reference/functions/array-functions.md#array-map) function allows to process each individual element of the `trace` array by the `addressToSymbols` function. The result of this processing you see in the `trace_symbols` column of output.
|
||||||
|
|
||||||
``` text
|
``` text
|
||||||
Row 1:
|
Row 1:
|
||||||
@ -281,7 +281,7 @@ LIMIT 1
|
|||||||
\G
|
\G
|
||||||
```
|
```
|
||||||
|
|
||||||
The [arrayMap](../../sql-reference/functions/higher-order-functions.md#higher_order_functions-array-map) function allows to process each individual element of the `trace` array by the `demangle` function. The result of this processing you see in the `trace_functions` column of output.
|
The [arrayMap](../../sql-reference/functions/array-functions.md#array-map) function allows to process each individual element of the `trace` array by the `demangle` function. The result of this processing you see in the `trace_functions` column of output.
|
||||||
|
|
||||||
``` text
|
``` text
|
||||||
Row 1:
|
Row 1:
|
||||||
|
@@ -515,6 +515,29 @@ SELECT
 └────────────────┴────────────┘
 ```
 
+## formatReadableQuantity(x) {#formatreadablequantityx}
+
+Accepts the number. Returns a rounded number with a suffix (thousand, million, billion, etc.) as a string.
+
+It is useful for reading big numbers by humans.
+
+Example:
+
+``` sql
+SELECT
+    arrayJoin([1024, 1234 * 1000, (4567 * 1000) * 1000, 98765432101234]) AS number,
+    formatReadableQuantity(number) AS number_for_humans
+```
+
+``` text
+┌─────────number─┬─number_for_humans─┐
+│           1024 │ 1.02 thousand     │
+│        1234000 │ 1.23 million      │
+│     4567000000 │ 4.57 billion      │
+│ 98765432101234 │ 98.77 trillion    │
+└────────────────┴───────────────────┘
+```
+
 ## least(a, b) {#leasta-b}
 
 Returns the smallest value from a and b.
@ -46,3 +46,25 @@ SELECT mapSubtract(([toUInt8(1), 2], [toInt32(1), 1]), ([toUInt8(1), 2], [toInt3
|
|||||||
│ ([1,2],[-1,0]) │ Tuple(Array(UInt8), Array(Int64)) │
|
│ ([1,2],[-1,0]) │ Tuple(Array(UInt8), Array(Int64)) │
|
||||||
└────────────────┴───────────────────────────────────┘
|
└────────────────┴───────────────────────────────────┘
|
||||||
````
|
````
|
||||||
|
|
||||||
## mapPopulateSeries {#function-mappopulateseries}

Syntax: `mapPopulateSeries(keys : Array(<IntegerType>), values : Array(<IntegerType>)[, max : <IntegerType>])`

Generates a map where the keys are a series of numbers, from the minimum to the maximum key (or the `max` argument, if it is specified) taken from the `keys` array with a step size of one, and the corresponding values are taken from the `values` array. If a value is not specified for a key, the resulting map uses the default value. For repeated keys, only the first value (in order of appearance) gets associated with the key.

The number of elements in `keys` and `values` must be the same for each row.

Returns a tuple of two arrays: the keys in sorted order, and the values for the corresponding keys.

``` sql
select mapPopulateSeries([1,2,4], [11,22,44], 5) as res, toTypeName(res) as type;
```

``` text
┌─res──────────────────────────┬─type──────────────────────────────┐
│ ([1,2,3,4,5],[11,22,0,44,0]) │ Tuple(Array(UInt8), Array(UInt8)) │
└──────────────────────────────┴───────────────────────────────────┘
```
@@ -1,20 +1,18 @@
---
toc_priority: 49
toc_title: Data Backup
---

# Data Backup {#data-backup}

While [replication](../engines/table-engines/mergetree-family/replication.md) provides protection from hardware failures, it does not protect against human errors: accidental deletion of data, deletion of the wrong table or a table on the wrong cluster, and software bugs that result in incorrect data processing or data corruption. In many cases mistakes like these will affect all replicas. ClickHouse has built-in safeguards to prevent some types of mistakes: for example, by default [you cannot just drop tables with a MergeTree-like engine containing more than 50 Gb of data](https://github.com/ClickHouse/ClickHouse/blob/v18.14.18-stable/programs/server/config.xml#L322-L330). However, these safeguards do not cover all possible cases and can be circumvented.

To effectively mitigate possible human errors, you should carefully prepare a strategy for backing up and restoring your data **in advance**.

Each company has different resources available and different business requirements, so there is no universal solution for ClickHouse backups and restores that fits every situation. What works for one gigabyte of data likely will not work for tens of petabytes. There is a variety of possible approaches with their own pros and cons, which will be discussed below. It is a good idea to use several approaches instead of just one to compensate for their various shortcomings.

!!! note "Note"
    Keep in mind that if you backed something up and never tried to restore it, chances are that the restore will not work properly when you actually need it (or at least it will take longer than the business can tolerate). So whatever backup approach you choose, make sure to automate the restore process as well, and practice it on a spare ClickHouse cluster regularly.

## Duplicating Source Data Somewhere Else {#duplicating-source-data-somewhere-else}
@@ -32,7 +30,7 @@ For smaller data volumes, a simple `INSERT INTO ... SELECT ...`

## Manipulations with Parts {#manipulations-with-parts}

ClickHouse allows using the `ALTER TABLE ... FREEZE PARTITION ...` query to create a local copy of table partitions. This is implemented using hard links to the `/var/lib/clickhouse/shadow/` folder, so it usually does not consume extra disk space for old data. The created copies of files are not handled by the ClickHouse server, so you can just leave them there: you will have a simple backup that does not require any additional external system, but it will still be prone to hardware issues. For this reason, it is better to copy them remotely to another location and then remove the local copies. Distributed filesystems and object stores are still a good option for this, but normal attached file servers with a large enough capacity might work as well (in this case the transfer will occur via the network filesystem or maybe [rsync](https://en.wikipedia.org/wiki/Rsync)).
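As a minimal sketch of this approach (the table name and partition value are hypothetical, not from the original page), freezing one partition could look like:

``` sql
ALTER TABLE hits FREEZE PARTITION 202009;
```

The resulting hard-link copy appears under `/var/lib/clickhouse/shadow/` and can then be shipped to remote storage with any file-copy tool.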
For more information about the queries related to partition manipulations, see the [ALTER documentation](../sql-reference/statements/alter.md#alter_manipulations-with-partitions).
@@ -1,3 +1,8 @@
---
toc_priority: 30
toc_title: MergeTree
---

# MergeTree {#table_engines-mergetree}

The `MergeTree` engine and the other engines of this family (`*MergeTree`) are the most functional ClickHouse table engines.
@@ -28,8 +33,8 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
    INDEX index_name1 expr1 TYPE type1(...) GRANULARITY value1,
    INDEX index_name2 expr2 TYPE type2(...) GRANULARITY value2
) ENGINE = MergeTree()
ORDER BY expr
[PARTITION BY expr]
[PRIMARY KEY expr]
[SAMPLE BY expr]
[TTL expr [DELETE|TO DISK 'xxx'|TO VOLUME 'xxx'], ...]
@ -38,27 +43,42 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
|
|||||||
|
|
||||||
Описание параметров смотрите в [описании запроса CREATE](../../../engines/table-engines/mergetree-family/mergetree.md).
|
Описание параметров смотрите в [описании запроса CREATE](../../../engines/table-engines/mergetree-family/mergetree.md).
|
||||||
|
|
||||||
!!! note "Note"
|
!!! note "Примечание"
|
||||||
`INDEX` — экспериментальная возможность, смотрите [Индексы пропуска данных](#table_engine-mergetree-data_skipping-indexes).
|
`INDEX` — экспериментальная возможность, смотрите [Индексы пропуска данных](#table_engine-mergetree-data_skipping-indexes).
|
||||||
|
|
||||||
### Секции запроса {#mergetree-query-clauses}
|
### Секции запроса {#mergetree-query-clauses}
|
||||||
|
|
||||||
- `ENGINE` — имя и параметры движка. `ENGINE = MergeTree()`. `MergeTree` не имеет параметров.
|
- `ENGINE` — имя и параметры движка. `ENGINE = MergeTree()`. `MergeTree` не имеет параметров.
|
||||||
|
|
||||||
- `PARTITION BY` — [ключ партиционирования](custom-partitioning-key.md). Для партиционирования по месяцам используйте выражение `toYYYYMM(date_column)`, где `date_column` — столбец с датой типа [Date](../../../engines/table-engines/mergetree-family/mergetree.md). В этом случае имена партиций имеют формат `"YYYYMM"`.
|
- `ORDER BY` — ключ сортировки.
|
||||||
|
|
||||||
|
Кортеж столбцов или произвольных выражений. Пример: `ORDER BY (CounterID, EventDate)`.
|
||||||
|
|
||||||
- `ORDER BY` — ключ сортировки. Кортеж столбцов или произвольных выражений. Пример: `ORDER BY (CounterID, EventDate)`.
|
ClickHouse использует ключ сортировки в качестве первичного ключа, если первичный ключ не задан в секции `PRIMARY KEY`.
|
||||||
|
|
||||||
- `PRIMARY KEY` — первичный ключ, если он [отличается от ключа сортировки](#pervichnyi-kliuch-otlichnyi-ot-kliucha-sortirovki). По умолчанию первичный ключ совпадает с ключом сортировки (который задаётся секцией `ORDER BY`.) Поэтому в большинстве случаев секцию `PRIMARY KEY` отдельно указывать не нужно.
|
Чтобы отключить сортировку, используйте синтаксис `ORDER BY tuple()`. Смотрите [выбор первичного ключа](#vybor-pervichnogo-kliucha).
|
||||||
|
|
||||||
- `SAMPLE BY` — выражение для сэмплирования. Если используется выражение для сэмплирования, то первичный ключ должен содержать его. Пример: `SAMPLE BY intHash32(UserID) ORDER BY (CounterID, EventDate, intHash32(UserID))`.
|
- `PARTITION BY` — [ключ партиционирования](custom-partitioning-key.md). Необязательный параметр.
|
||||||
|
|
||||||
- `TTL` — список правил, определяющих длительности хранения строк, а также задающих правила перемещения частей на определённые тома или диски. Выражение должно возвращать столбец `Date` или `DateTime`. Пример: `TTL date + INTERVAL 1 DAY`.
|
Для партиционирования по месяцам используйте выражение `toYYYYMM(date_column)`, где `date_column` — столбец с датой типа [Date](../../../engines/table-engines/mergetree-family/mergetree.md). В этом случае имена партиций имеют формат `"YYYYMM"`.
|
||||||
- Тип правила `DELETE|TO DISK 'xxx'|TO VOLUME 'xxx'` указывает действие, которое будет выполнено с частью, удаление строк (прореживание), перемещение (при выполнении условия для всех строк части) на определённый диск (`TO DISK 'xxx'`) или том (`TO VOLUME 'xxx'`).
|
|
||||||
- Поведение по умолчанию соответствует удалению строк (`DELETE`). В списке правил может быть указано только одно выражение с поведением `DELETE`.
|
|
||||||
- Дополнительные сведения смотрите в разделе [TTL для столбцов и таблиц](#table_engine-mergetree-ttl)
|
|
||||||
|
|
||||||
- `SETTINGS` — дополнительные параметры, регулирующие поведение `MergeTree`:
|
- `PRIMARY KEY` — первичный ключ, если он [отличается от ключа сортировки](#pervichnyi-kliuch-otlichnyi-ot-kliucha-sortirovki). Необязательный параметр.
|
||||||
|
|
||||||
|
По умолчанию первичный ключ совпадает с ключом сортировки (который задаётся секцией `ORDER BY`.) Поэтому в большинстве случаев секцию `PRIMARY KEY` отдельно указывать не нужно.
|
||||||
|
|
||||||
|
- `SAMPLE BY` — выражение для сэмплирования. Необязательный параметр.
|
||||||
|
|
||||||
|
Если используется выражение для сэмплирования, то первичный ключ должен содержать его. Пример: `SAMPLE BY intHash32(UserID) ORDER BY (CounterID, EventDate, intHash32(UserID))`.
|
||||||
|
|
||||||
|
- `TTL` — список правил, определяющих длительности хранения строк, а также задающих правила перемещения частей на определённые тома или диски. Необязательный параметр.
|
||||||
|
|
||||||
|
Выражение должно возвращать столбец `Date` или `DateTime`. Пример: `TTL date + INTERVAL 1 DAY`.
|
||||||
|
|
||||||
|
Тип правила `DELETE|TO DISK 'xxx'|TO VOLUME 'xxx'` указывает действие, которое будет выполнено с частью, удаление строк (прореживание), перемещение (при выполнении условия для всех строк части) на определённый диск (`TO DISK 'xxx'`) или том (`TO VOLUME 'xxx'`). Поведение по умолчанию соответствует удалению строк (`DELETE`). В списке правил может быть указано только одно выражение с поведением `DELETE`.
|
||||||
|
|
||||||
|
Дополнительные сведения смотрите в разделе [TTL для столбцов и таблиц](#table_engine-mergetree-ttl)
|
||||||
|
|
||||||
|
- `SETTINGS` — дополнительные параметры, регулирующие поведение `MergeTree` (необязательные):
|
||||||
|
|
||||||
- `index_granularity` — максимальное количество строк данных между засечками индекса. По умолчанию — 8192. Смотрите [Хранение данных](#mergetree-data-storage).
|
- `index_granularity` — максимальное количество строк данных между засечками индекса. По умолчанию — 8192. Смотрите [Хранение данных](#mergetree-data-storage).
|
||||||
- `index_granularity_bytes` — максимальный размер гранул данных в байтах. По умолчанию — 10Mb. Чтобы ограничить размер гранул только количеством строк, установите значение 0 (не рекомендовано). Смотрите [Хранение данных](#mergetree-data-storage).
|
- `index_granularity_bytes` — максимальный размер гранул данных в байтах. По умолчанию — 10Mb. Чтобы ограничить размер гранул только количеством строк, установите значение 0 (не рекомендовано). Смотрите [Хранение данных](#mergetree-data-storage).
|
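Putting these clauses together, a minimal illustrative sketch (the table and column names are hypothetical) might be:

``` sql
CREATE TABLE visits
(
    CounterID UInt32,
    EventDate Date,
    UserID UInt64
)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(EventDate)
ORDER BY (CounterID, EventDate, intHash32(UserID))
SAMPLE BY intHash32(UserID)
SETTINGS index_granularity = 8192;
```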
@@ -180,6 +200,14 @@ ClickHouse does not require a unique primary

A long primary key will negatively affect insert performance and memory consumption, but extra columns in the primary key do not affect the performance of ClickHouse during `SELECT` queries.

You can create a table without a primary key using the `ORDER BY tuple()` syntax. In this case ClickHouse stores data in the order of insertion. If you want to preserve the data order when inserting data with `INSERT ... SELECT` queries, set [max_insert_threads = 1](../../../operations/settings/settings.md#settings-max-insert-threads).

To select the data in the original order, use [single-threaded](../../../operations/settings/settings.md#settings-max_threads) `SELECT` queries.
### Primary Key That Differs from the Sorting Key {#pervichnyi-kliuch-otlichnyi-ot-kliucha-sortirovki}

It is possible to define a primary key (an expression whose values are written to the index file for
@@ -28,6 +28,8 @@ ClickHouse can accept (`INSERT`) and return (`SELECT`
| [PrettySpace](#prettyspace)                               | ✗ | ✔ |
| [Protobuf](#protobuf)                                     | ✔ | ✔ |
| [Parquet](#data-format-parquet)                           | ✔ | ✔ |
| [Arrow](#data-format-arrow)                               | ✔ | ✔ |
| [ArrowStream](#data-format-arrow-stream)                  | ✔ | ✔ |
| [ORC](#data-format-orc)                                   | ✔ | ✗ |
| [RowBinary](#rowbinary)                                   | ✔ | ✔ |
| [RowBinaryWithNamesAndTypes](#rowbinarywithnamesandtypes) | ✔ | ✔ |
@ -947,6 +949,12 @@ ClickHouse пишет и читает сообщения `Protocol Buffers` в
|
|||||||
|
|
||||||
## Avro {#data-format-avro}
|
## Avro {#data-format-avro}
|
||||||
|
|
||||||
|
[Apache Avro](https://avro.apache.org/) — это ориентированный на строки фреймворк для сериализации данных. Разработан в рамках проекта Apache Hadoop.
|
||||||
|
|
||||||
|
В ClickHouse формат Avro поддерживает чтение и запись [файлов данных Avro](https://avro.apache.org/docs/current/spec.html#Object+Container+Files).
|
||||||
|
|
||||||
|
[Логические типы Avro](https://avro.apache.org/docs/current/spec.html#Logical+Types)
|
||||||
|
|
||||||
## AvroConfluent {#data-format-avro-confluent}
|
## AvroConfluent {#data-format-avro-confluent}
|
||||||
|
|
||||||
Для формата `AvroConfluent` ClickHouse поддерживает декодирование сообщений `Avro` с одним объектом. Такие сообщения используются с [Kafka] (http://kafka.apache.org/) и реестром схем [Confluent](https://docs.confluent.io/current/schema-registry/index.html).
|
Для формата `AvroConfluent` ClickHouse поддерживает декодирование сообщений `Avro` с одним объектом. Такие сообщения используются с [Kafka] (http://kafka.apache.org/) и реестром схем [Confluent](https://docs.confluent.io/current/schema-registry/index.html).
|
||||||
@ -996,7 +1004,7 @@ SELECT * FROM topic1_stream;
|
|||||||
|
|
||||||
## Parquet {#data-format-parquet}
|
## Parquet {#data-format-parquet}
|
||||||
|
|
||||||
[Apache Parquet](http://parquet.apache.org/) — формат поколоночного хранения данных, который распространён в экосистеме Hadoop. Для формата `Parquet` ClickHouse поддерживает операции чтения и записи.
|
[Apache Parquet](https://parquet.apache.org/) — формат поколоночного хранения данных, который распространён в экосистеме Hadoop. Для формата `Parquet` ClickHouse поддерживает операции чтения и записи.
|
||||||
|
|
||||||
### Соответствие типов данных {#sootvetstvie-tipov-dannykh}
|
### Соответствие типов данных {#sootvetstvie-tipov-dannykh}
|
||||||
|
|
||||||
@ -1042,6 +1050,16 @@ $ clickhouse-client --query="SELECT * FROM {some_table} FORMAT Parquet" > {some_
|
|||||||
|
|
||||||
Для обмена данными с экосистемой Hadoop можно использовать движки таблиц [HDFS](../engines/table-engines/integrations/hdfs.md).
|
Для обмена данными с экосистемой Hadoop можно использовать движки таблиц [HDFS](../engines/table-engines/integrations/hdfs.md).
|
||||||
|
|
||||||
|
## Arrow {data-format-arrow}
|
||||||
|
|
||||||
|
[Apache Arrow](https://arrow.apache.org/) поставляется с двумя встроенными поколоночнами форматами хранения. ClickHouse поддерживает операции чтения и записи для этих форматов.
|
||||||
|
|
||||||
|
`Arrow` — это Apache Arrow's "file mode" формат. Он предназначен для произвольного доступа в памяти.
|
||||||
|
|
||||||
|
## ArrowStream {data-format-arrow-stream}
|
||||||
|
|
||||||
|
`ArrowStream` — это Apache Arrow's "stream mode" формат. Он предназначен для обработки потоков в памяти.
|
||||||
|
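As a quick illustration (the table name `some_table` is a placeholder), data can be returned in either format with the standard `FORMAT` clause:

``` sql
SELECT * FROM some_table FORMAT Arrow;
SELECT * FROM some_table FORMAT ArrowStream;
```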
## ORC {#data-format-orc}

[Apache ORC](https://orc.apache.org/) is a column-oriented data format widespread in the Hadoop ecosystem. You can only insert data in this format into ClickHouse.
docs/ru/interfaces/third-party/gui.md

@@ -93,6 +93,10 @@

[clickhouse-plantuml](https://pypi.org/project/clickhouse-plantuml/) is a script that generates [PlantUML](https://plantuml.com/) diagrams of table schemas.

### xeus-clickhouse {#xeus-clickhouse}

[xeus-clickhouse](https://github.com/wangfenjin/xeus-clickhouse) is a Jupyter kernel for ClickHouse that supports querying ClickHouse data using SQL in Jupyter.

## Commercial {#kommercheskie}

### DataGrip {#datagrip}
@@ -7,10 +7,38 @@ toc_title: System tables

## Introduction {#system-tables-introduction}

System tables contain information about:

- Server state, processes, and environment.
- Internal server processes.

System tables:

- Are located in the `system` database.
- Are available only for reading data.
- Cannot be dropped or altered, but can be detached.

The `metric_log`, `query_log`, `query_thread_log`, and `trace_log` system tables store their data in the filesystem. The other system tables store their data in RAM. The ClickHouse server creates such system tables at startup.
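As a small illustration of reading a system table (the query itself is an example, not from the original page), you can list a few tables of the `system` database:

``` sql
SELECT name, engine
FROM system.tables
WHERE database = 'system'
LIMIT 5;
```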
### Sources of System Metrics

To collect system metrics, the ClickHouse server uses:

- The `CAP_NET_ADMIN` capability.
- [procfs](https://ru.wikipedia.org/wiki/Procfs) (Linux only).

**procfs**

If the ClickHouse server does not have the `CAP_NET_ADMIN` capability, it tries to fall back to `ProcfsMetricsProvider`. `ProcfsMetricsProvider` allows collecting per-query system metrics (for CPU and I/O).

If procfs is supported and enabled on the system, the ClickHouse server collects the following metrics:

- `OSCPUVirtualTimeMicroseconds`
- `OSCPUWaitMicroseconds`
- `OSIOWaitMicroseconds`
- `OSReadChars`
- `OSWriteChars`
- `OSReadBytes`
- `OSWriteBytes`

[Original article](https://clickhouse.tech/docs/ru/operations/system-tables/) <!--hide-->
@@ -82,7 +82,7 @@ res: /lib/x86_64-linux-gnu/libc-2.27.so

- [Introspection functions](../../sql-reference/functions/introspection.md): what introspection functions are and how to use them.
- [system.trace_log](../../operations/system-tables/trace_log.md#system_tables-trace_log): contains stack traces collected by the sampling query profiler.
- [arrayMap](../../sql-reference/functions/array-functions.md#array-map): description and usage example of the `arrayMap` function.
- [arrayFilter](../../sql-reference/functions/array-functions.md#array-filter): description and usage example of the `arrayFilter` function.

[Original article](https://clickhouse.tech/docs/ru/operations/system_tables/stack_trace) <!--hide-->
@@ -9,6 +9,7 @@ The following aggregate functions are supported:
- [`min`](../../sql-reference/aggregate-functions/reference/min.md#agg_function-min)
- [`max`](../../sql-reference/aggregate-functions/reference/max.md#agg_function-max)
- [`sum`](../../sql-reference/aggregate-functions/reference/sum.md#agg_function-sum)
- [`sumWithOverflow`](../../sql-reference/aggregate-functions/reference/sumwithoverflow.md#sumwithoverflowx)
- [`groupBitAnd`](../../sql-reference/aggregate-functions/reference/groupbitand.md#groupbitand)
- [`groupBitOr`](../../sql-reference/aggregate-functions/reference/groupbitor.md#groupbitor)
- [`groupBitXor`](../../sql-reference/aggregate-functions/reference/groupbitxor.md#groupbitxor)
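For context, a hedged sketch of declaring a column of this data type (the table and column names are hypothetical):

``` sql
CREATE TABLE simple_totals
(
    id UInt64,
    total SimpleAggregateFunction(sum, UInt64)
)
ENGINE = AggregatingMergeTree()
ORDER BY id;
```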
@@ -7,7 +7,7 @@ toc_title: Tuple(T1, T2, ...)

A tuple of elements of any [type](index.md#data_types). The tuple elements can have the same or different types.

Tuples are used for temporary grouping of columns. Columns can be grouped when an IN expression is used in a query, and for specifying multiple formal parameters of lambda functions. For more details, see [IN operators](../../sql-reference/data-types/tuple.md) and [Higher-order functions](../../sql-reference/functions/index.md#higher-order-functions).

Tuples can be the result of a query. In this case, in text formats other than JSON, the values are comma-separated inside parentheses. In JSON formats, tuples are output as arrays (in square brackets).
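A quick illustrative query (not from the original page) that produces a tuple and inspects its type:

``` sql
SELECT tuple(1, 'a') AS x, toTypeName(x); -- (1,'a'), Tuple(UInt8, String)
```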
@@ -1,4 +1,4 @@
# Arrays {#functions-for-working-with-arrays}

## empty {#function-empty}

@@ -186,6 +186,13 @@ SELECT indexOf([1, 3, NULL, NULL], NULL)

Elements set to `NULL` are handled as normal values.

## arrayCount(\[func,\] arr1, …) {#array-count}

Returns the number of elements of the `arr` array for which the function `func` returns something other than 0. If `func` is not specified, it returns the number of non-zero elements of the array.

The `arrayCount` function is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions): you can pass a lambda function to it as the first argument.
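For instance (an illustrative query, not from the original page), counting the odd numbers in an array:

``` sql
SELECT arrayCount(x -> x % 2, [1, 2, 3, 4, 5]) AS res; -- res = 3
```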
## countEqual(arr, x) {#countequalarr-x}

Returns the number of elements of the array that are equal to x. Equivalent to arrayCount(elem -\> elem = x, arr).
@@ -513,7 +520,7 @@ SELECT arraySort([1, nan, 2, NULL, 3, nan, -4, NULL, inf, -inf]);
- `NaN` values come before `NULL`.
- `Inf` values come before `NaN`.

The `arraySort` function is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions): you can pass a lambda function to it as the first argument. In this case, the sorting order is determined by the result of applying the lambda function to the elements of the array.

Consider the following example:

@@ -613,7 +620,7 @@ SELECT arrayReverseSort([1, nan, 2, NULL, 3, nan, -4, NULL, inf, -inf]) as res;
- `NaN` values come before `NULL`.
- `-Inf` values come before `NaN`.

The `arrayReverseSort` function is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions): you can pass a lambda function to it as the first argument. For example:

``` sql
SELECT arrayReverseSort((x) -> -x, [1, 2, 3]) as res;

@@ -1036,6 +1043,116 @@ SELECT arrayZip(['a', 'b', 'c'], [5, 2, 1])
└──────────────────────────────────────┘
```
|
## arrayMap(func, arr1, …) {#array-map}
|
||||||
|
|
||||||
|
Возвращает массив, полученный на основе результатов применения функции `func` к каждому элементу массива `arr`.
|
||||||
|
|
||||||
|
Примеры:
|
||||||
|
|
||||||
|
``` sql
|
||||||
|
SELECT arrayMap(x -> (x + 2), [1, 2, 3]) as res;
|
||||||
|
```
|
||||||
|
|
||||||
|
``` text
|
||||||
|
┌─res─────┐
|
||||||
|
│ [3,4,5] │
|
||||||
|
└─────────┘
|
||||||
|
```
|
||||||
|
|
||||||
|
Следующий пример показывает, как создать кортежи из элементов разных массивов:
|
||||||
|
|
||||||
|
``` sql
|
||||||
|
SELECT arrayMap((x, y) -> (x, y), [1, 2, 3], [4, 5, 6]) AS res
|
||||||
|
```
|
||||||
|
|
||||||
|
``` text
|
||||||
|
┌─res─────────────────┐
|
||||||
|
│ [(1,4),(2,5),(3,6)] │
|
||||||
|
└─────────────────────┘
|
||||||
|
```
|
||||||
|
|
||||||
|
Функция `arrayMap` является [функцией высшего порядка](../../sql-reference/functions/index.md#higher-order-functions) — в качестве первого аргумента ей нужно передать лямбда-функцию, и этот аргумент не может быть опущен.
|
||||||
|
|
||||||
|
## arrayFilter(func, arr1, …) {#array-filter}
|
||||||
|
|
||||||
|
Возвращает массив, содержащий только те элементы массива `arr1`, для которых функция `func` возвращает не 0.
|
||||||
|
|
||||||
|
Примеры:
|
||||||
|
|
||||||
|
``` sql
|
||||||
|
SELECT arrayFilter(x -> x LIKE '%World%', ['Hello', 'abc World']) AS res
|
||||||
|
```
|
||||||
|
|
||||||
|
``` text
|
||||||
|
┌─res───────────┐
|
||||||
|
│ ['abc World'] │
|
||||||
|
└───────────────┘
|
||||||
|
```
|
||||||
|
|
||||||
|
``` sql
|
||||||
|
SELECT
|
||||||
|
arrayFilter(
|
||||||
|
(i, x) -> x LIKE '%World%',
|
||||||
|
arrayEnumerate(arr),
|
||||||
|
['Hello', 'abc World'] AS arr)
|
||||||
|
AS res
|
||||||
|
```
|
||||||
|
|
||||||
|
``` text
|
||||||
|
┌─res─┐
|
||||||
|
│ [2] │
|
||||||
|
└─────┘
|
||||||
|
```
|
||||||
|
|
||||||
|
Функция `arrayFilter` является [функцией высшего порядка](../../sql-reference/functions/index.md#higher-order-functions) — в качестве первого аргумента ей нужно передать лямбда-функцию, и этот аргумент не может быть опущен.
|
||||||
|
|
||||||
|
## arrayExists(\[func,\] arr1, …) {#arrayexistsfunc-arr1}
|
||||||
|
|
||||||
|
Возвращает 1, если существует хотя бы один элемент массива `arr`, для которого функция func возвращает не 0. Иначе возвращает 0.
|
||||||
|
|
||||||
|
Функция `arrayExists` является [функцией высшего порядка](../../sql-reference/functions/index.md#higher-order-functions) - в качестве первого аргумента ей можно передать лямбда-функцию.
|
||||||
|
|
||||||
|
## arrayAll(\[func,\] arr1, …) {#arrayallfunc-arr1}
|
||||||
|
|
||||||
|
Возвращает 1, если для всех элементов массива `arr`, функция `func` возвращает не 0. Иначе возвращает 0.
|
||||||
|
|
||||||
|
Функция `arrayAll` является [функцией высшего порядка](../../sql-reference/functions/index.md#higher-order-functions) - в качестве первого аргумента ей можно передать лямбда-функцию.
|
||||||
|
|
||||||
|
## arrayFirst(func, arr1, …) {#array-first}
|
||||||
|
|
||||||
|
Возвращает первый элемент массива `arr1`, для которого функция func возвращает не 0.
|
||||||
|
|
||||||
|
Функция `arrayFirst` является [функцией высшего порядка](../../sql-reference/functions/index.md#higher-order-functions) — в качестве первого аргумента ей нужно передать лямбда-функцию, и этот аргумент не может быть опущен.
|
||||||
|
|
||||||
|
## arrayFirstIndex(func, arr1, …) {#array-first-index}
|
||||||
|
|
||||||
|
Возвращает индекс первого элемента массива `arr1`, для которого функция func возвращает не 0.
|
||||||
|
|
||||||
|
Функция `arrayFirstIndex` является [функцией высшего порядка](../../sql-reference/functions/index.md#higher-order-functions) — в качестве первого аргумента ей нужно передать лямбда-функцию, и этот аргумент не может быть опущен.
|
||||||
|
|
||||||
|
## arraySum(\[func,\] arr1, …) {#array-sum}
|
||||||
|
|
||||||
|
Возвращает сумму значений функции `func`. Если функция не указана - просто возвращает сумму элементов массива.
|
||||||
|
|
||||||
|
Функция `arraySum` является [функцией высшего порядка](../../sql-reference/functions/index.md#higher-order-functions) - в качестве первого аргумента ей можно передать лямбда-функцию.
|
||||||
|
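A brief illustration (not from the original page): summing squares with a lambda, and a plain sum without one:

``` sql
SELECT
    arraySum(x -> x * x, [2, 3]) AS squares, -- 13
    arraySum([1, 2, 3]) AS plain;            -- 6
```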
## arrayCumSum(\[func,\] arr1, …) {#arraycumsumfunc-arr1}

Returns an array of the partial sums of the elements of the source array (a running sum). If the function `func` is specified, the values of the array elements are converted by this function before summing.

The `arrayCumSum` function is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions): you can pass a lambda function to it as the first argument.

Example:

``` sql
SELECT arrayCumSum([1, 1, 1, 1]) AS res
```

``` text
┌─res──────────┐
│ [1, 2, 3, 4] │
└──────────────┘
```

## arrayAUC {#arrayauc}

Calculates the area under the curve.
@@ -1,167 +0,0 @@
@@ -38,6 +38,20 @@

Functions cannot change the values of their arguments: any changes are returned as the result. Thus, the result of calculating separate functions does not depend on the order in which the functions are written in the query.

## Higher-order functions, the `->` operator and the lambda(params, expr) function {#higher-order-functions}

Higher-order functions can accept only lambda functions as their functional argument. To pass a lambda function to a higher-order function, use the `->` operator. On the left side of the arrow there is a formal parameter, which is an arbitrary identifier, or several formal parameters, which are arbitrary identifiers in a tuple. On the right side of the arrow there is an expression that can use these formal parameters, as well as any table columns.

Examples:

```
x -> 2 * x
str -> str != Referer
```

A lambda function that accepts multiple arguments can be passed to a higher-order function. In this case, the higher-order function is passed several arrays of identical length that these arguments will correspond to.

For some functions the first argument (the lambda function) can be omitted. In this case, identical mapping is assumed.
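To make the multi-array case concrete, here is a small illustrative query (not from the original page) that adds two arrays element-wise:

``` sql
SELECT arrayMap((x, y) -> x + y, [1, 2, 3], [10, 20, 30]) AS res; -- [11,22,33]
```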
## Error Handling {#obrabotka-oshibok}

Some functions can throw exceptions in case of erroneous data. In this case, the query is canceled and an error text is returned to the client. For distributed processing, when an exception occurs on one of the servers, the other servers also attempt to abort the query.
@@ -93,7 +93,7 @@ LIMIT 1
\G
```

The [arrayMap](../../sql-reference/functions/array-functions.md#array-map) function allows processing each individual element of the `trace` array with the `addressToLine` function. You can see the result of this processing in the `trace_source_code_lines` column of the output.

``` text
Row 1:
@@ -179,7 +179,7 @@ LIMIT 1
\G
```

The [arrayMap](../../sql-reference/functions/array-functions.md#array-map) function allows processing each individual element of the `trace` array with the `addressToSymbols` function. You can see the result of this processing in the `trace_symbols` column of the output.

``` text
Row 1:
@@ -276,7 +276,7 @@ LIMIT 1
\G
```

The [arrayMap](../../sql-reference/functions/array-functions.md#array-map) function allows processing each individual element of the `trace` array with the `demangle` function.

``` text
Row 1:
@@ -508,6 +508,29 @@ SELECT
└────────────────┴────────────┘
```

## formatReadableQuantity(x) {#formatreadablequantityx}

Accepts a number. Returns a rounded number with a suffix (thousand, million, billion, etc.) as a string.

It makes big numbers easier for a human to read.

Example:

``` sql
SELECT
    arrayJoin([1024, 1234 * 1000, (4567 * 1000) * 1000, 98765432101234]) AS number,
    formatReadableQuantity(number) AS number_for_humans
```

``` text
┌─────────number─┬─number_for_humans─┐
│           1024 │ 1.02 thousand     │
│        1234000 │ 1.23 million      │
│     4567000000 │ 4.57 billion      │
│ 98765432101234 │ 98.77 trillion    │
└────────────────┴───────────────────┘
```
## least(a, b) {#leasta-b}

Returns the smallest value of a and b.
@@ -55,4 +55,50 @@ FROM numbers(3)
└────────────┴────────────┴──────────────┴────────────────┴─────────────────┴──────────────────────┘
```

# Random functions for working with strings {#random-functions-for-working-with-strings}

## randomString {#random-string}

## randomFixedString {#random-fixed-string}

## randomPrintableASCII {#random-printable-ascii}

## randomStringUTF8 {#random-string-utf8}

## fuzzBits {#fuzzbits}

**Syntax**

``` sql
fuzzBits([s], [prob])
```

Inverts each bit of `s` with probability `prob`.

**Parameters**

- `s`: `String` or `FixedString`
- `prob`: constant `Float32/64`

**Returned value**

A randomly fuzzed string of the same type as `s`.

**Example**

Query:

``` sql
SELECT fuzzBits(materialize('abacaba'), 0.1)
FROM numbers(3)
```

Result:

``` text
┌─fuzzBits(materialize('abacaba'), 0.1)─┐
│ abaaaja                               │
│ a*cjab+                               │
│ aeca2A                                │
└───────────────────────────────────────┘
```

[Original article](https://clickhouse.tech/docs/ru/query_language/functions/random_functions/) <!--hide-->
@@ -5,13 +5,15 @@ toc_title: View

# CREATE VIEW {#create-view}

Creates a view. There are two types of views: normal and materialized (MATERIALIZED).

## Normal Views {#normal}

``` sql
CREATE [OR REPLACE] VIEW [IF NOT EXISTS] [db.]table_name [ON CLUSTER] AS SELECT ...
```

Normal views don't store any data, they just perform a read from another table on each access. In other words, a normal view is nothing more than a saved query. When reading from a view, this saved query is used as a subquery in the [FROM](../../../sql-reference/statements/select/from.md) clause.

For example, suppose you have created a view:
@@ -31,15 +33,24 @@ SELECT a, b, c FROM view
SELECT a, b, c FROM (SELECT ...)
```

## Materialized Views {#materialized}

``` sql
CREATE MATERIALIZED VIEW [IF NOT EXISTS] [db.]table_name [ON CLUSTER] [TO[db.]name] [ENGINE = engine] [POPULATE] AS SELECT ...
```

Materialized (MATERIALIZED) views store data transformed by the corresponding [SELECT](../../../sql-reference/statements/select/index.md) query.

When creating a materialized view without `TO [db].[table]`, you must specify `ENGINE`, the table engine for storing the data.

When creating a materialized view with `TO [db].[table]`, you must not use `POPULATE`.

A materialized view works as follows: when data is inserted into the table specified in its SELECT, the block of inserted data is transformed by this SELECT query, and the result is inserted into the view.

!!! important "Important"
    Materialized views in ClickHouse are more like `after insert` triggers. If the materialized view query contains aggregation, it is applied only to the batch of freshly inserted data. Any changes to the existing data of the source table (such as an update, a delete, dropping a partition, etc.) do not change the materialized view.

If `POPULATE` is specified, the existing table data is inserted into the view when creating it, as if making a `CREATE TABLE ... AS SELECT ...` query. Otherwise, the view will contain only the data inserted into the table after the view is created. Using `POPULATE` is not recommended, since data inserted into the table during the view creation will not be inserted into it.
|
||||||
Запрос `SELECT` может содержать `DISTINCT`, `GROUP BY`, `ORDER BY`, `LIMIT`… Следует иметь ввиду, что соответствующие преобразования будут выполняться независимо, на каждый блок вставляемых данных. Например, при наличии `GROUP BY`, данные будут агрегироваться при вставке, но только в рамках одной пачки вставляемых данных. Далее, данные не будут доагрегированы. Исключение - использование ENGINE, производящего агрегацию данных самостоятельно, например, `SummingMergeTree`.
|
Запрос `SELECT` может содержать `DISTINCT`, `GROUP BY`, `ORDER BY`, `LIMIT`… Следует иметь ввиду, что соответствующие преобразования будут выполняться независимо, на каждый блок вставляемых данных. Например, при наличии `GROUP BY`, данные будут агрегироваться при вставке, но только в рамках одной пачки вставляемых данных. Далее, данные не будут доагрегированы. Исключение - использование ENGINE, производящего агрегацию данных самостоятельно, например, `SummingMergeTree`.
|
||||||
|
|
||||||
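
A minimal sketch of this per-block behavior, with hypothetical names that are not from this document; it assumes a `SummingMergeTree` target so that blocks aggregated separately at insert time are summed up later during background merges:

``` sql
-- Hypothetical names, for illustration only.
CREATE TABLE hits (dt Date, user_id UInt64) ENGINE = MergeTree() ORDER BY dt;

CREATE MATERIALIZED VIEW hits_per_day
ENGINE = SummingMergeTree() ORDER BY dt
AS SELECT dt, count() AS visits
FROM hits
GROUP BY dt;

-- Each INSERT into `hits` is aggregated per inserted block; rows with the
-- same `dt` coming from different blocks are summed by the engine on merges.
INSERT INTO hits VALUES ('2020-09-01', 1), ('2020-09-01', 2);
```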
@ -50,4 +61,4 @@ SELECT a, b, c FROM (SELECT ...)
There is no separate query for deleting views. To delete a view, use `DROP TABLE`.

[Original article](https://clickhouse.tech/docs/ru/sql-reference/statements/create/view)
<!--hide-->

@ -5,18 +5,35 @@ toc_title: DROP
# DROP {#drop}

Deletes an existing object.
If `IF EXISTS` is specified, no error is returned if the object does not exist.

## DROP DATABASE {#drop-database}

``` sql
DROP DATABASE [IF EXISTS] db [ON CLUSTER cluster]
```

Deletes all tables inside the db database, then deletes the db database itself.

## DROP TABLE {#drop-table}

``` sql
DROP [TEMPORARY] TABLE [IF EXISTS] [db.]name [ON CLUSTER cluster]
```

Deletes the table.
If `IF EXISTS` is specified, no error is returned if the table does not exist or the database does not exist.

## DROP DICTIONARY {#drop-dictionary}

``` sql
DROP DICTIONARY [IF EXISTS] [db.]name
```

Deletes the dictionary.

## DROP USER {#drop-user-statement}

@ -41,6 +58,7 @@ DROP USER [IF EXISTS] name [,...] [ON CLUSTER cluster_name]
DROP ROLE [IF EXISTS] name [,...] [ON CLUSTER cluster_name]
```

## DROP ROW POLICY {#drop-row-policy-statement}

Deletes the row policy.
@ -80,5 +98,13 @@ DROP [SETTINGS] PROFILE [IF EXISTS] name [,...] [ON CLUSTER cluster_name]
```

## DROP VIEW {#drop-view}

``` sql
DROP VIEW [IF EXISTS] [db.]name [ON CLUSTER cluster]
```

Deletes the view. Views can also be deleted with `DROP TABLE`, but `DROP VIEW` additionally checks that `[db.]name` is in fact a view.

[Original article](https://clickhouse.tech/docs/ru/sql-reference/statements/drop/) <!--hide-->
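
A short sketch of the distinction, with hypothetical names; both statements can remove a view, but `DROP VIEW` refuses to remove an ordinary table:

``` sql
-- Hypothetical names, for illustration only.
CREATE VIEW db.v AS SELECT 1;
DROP VIEW db.v;     -- OK: db.v is a view

CREATE TABLE db.t (x UInt8) ENGINE = Memory;
DROP VIEW db.t;     -- fails: db.t is not a view
DROP TABLE db.t;    -- OK
```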
@ -1,6 +1,4 @@
---
toc_priority: 12
toc_title: "\u6559\u7A0B"
---
@ -9,27 +7,27 @@ toc_title: "\u6559\u7A0B"
## What to expect from this tutorial? {#what-to-expect-from-this-tutorial}

By going through this tutorial, you'll learn how to set up a simple ClickHouse cluster. It will be small, but fault-tolerant and scalable. Then we will use one of the example datasets to fill it with data and execute some demo queries.

## Single node setup {#single-node-setup}

To postpone the complexity of a distributed environment, we'll start with deploying ClickHouse on a single server or virtual machine. ClickHouse is usually installed from [deb](install.md#install-from-deb-packages) or [rpm](install.md#from-rpm-packages) packages, but there are [alternatives](install.md#from-docker-image) for the operating systems that do not support them.

For example, if you have chosen the `deb` packages, execute:

``` bash
{% include 'install/deb.sh' %}
```

The installed software includes these packages:

- the `clickhouse-client` package, which contains the [clickhouse-client](../interfaces/cli.md) application, an interactive ClickHouse console client.
- the `clickhouse-common` package, which contains the ClickHouse executable.
- the `clickhouse-server` package, which contains the ClickHouse configuration files needed to run it as a server.

Server configuration files are located in `/etc/clickhouse-server/`. Before going further, please notice the `<path>` element in `config.xml`. It determines the location for data storage, so it should be placed on a volume with large disk capacity; the default value is `/var/lib/clickhouse/`. If you want to adjust the configuration, it's not convenient to edit the `config.xml` file directly, considering it might get rewritten on future package updates. The recommended way to override configuration elements is to create [files in the configuration directory](../operations/configuration-files.md) which serve as "patches" to config.xml.

As you might have noticed, `clickhouse-server` is not launched automatically after package installation. It won't be automatically restarted after updates, either. The way you start the server depends on your init system; usually, it is:

``` bash
sudo service clickhouse-server start
@ -41,13 +39,13 @@ sudo service clickhouse-server start
sudo /etc/init.d/clickhouse-server start
```

The default location for server logs is `/var/log/clickhouse-server/`. The server is ready to handle client connections once it logs the `Ready for connections` message.

Once `clickhouse-server` is up and running, we can use `clickhouse-client` to connect to the server and run some test queries, like `SELECT 'Hello, world!';`.

<details markdown="1">

<summary>Quick tips for clickhouse-client</summary>

Interactive mode:

@ -15,7 +15,7 @@ toc_title: "\u5E94\u7528CatBoost\u6A21\u578B"
1. [Create a table](#create-table).
2. [Insert the data into the table](#insert-data-to-table).
3. [Integrate CatBoost into ClickHouse](#integrate-catboost-into-clickhouse) (optional step; see the SQL sketch after this list for a preview of step 4).
4. [Run the model inference from SQL](#run-model-inference).

For details on training CatBoost models, see [Training and applying models](https://catboost.ai/docs/features/training.html#training).
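
As a preview of step 4, a hedged sketch of what the inference looks like; it assumes a configured model named `amazon` and the column set of the `amazon_train` table used in this guide:

``` sql
-- Assumes a model named 'amazon' is configured; not a verbatim excerpt.
SELECT
    modelEvaluate('amazon',
        RESOURCE, MGR_ID, ROLE_ROLLUP_1, ROLE_ROLLUP_2,
        ROLE_DEPTNAME, ROLE_TITLE, ROLE_FAMILY_DESC,
        ROLE_FAMILY, ROLE_CODE) > 0 AS prediction,
    ACTION AS target
FROM amazon_train
LIMIT 10
```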
@ -119,12 +119,12 @@ FROM amazon_train
+-------+
```

## 3. Integrate CatBoost into ClickHouse {#integrate-catboost-into-clickhouse}

!!! note "Note"
    **Optional step.** The Docker image contains everything needed to run CatBoost and ClickHouse.

To integrate CatBoost into ClickHouse:

**1.** Build the evaluation library.

@ -1,13 +1,6 @@
# remote, remoteSecure {#remote-remotesecure}

Allows you to access remote servers without creating a `Distributed` table.

Signatures:
@ -18,10 +11,10 @@ remoteSecure('addresses_expr', db, table[, 'user'[, 'password']])
remoteSecure('addresses_expr', db.table[, 'user'[, 'password']])
```

`addresses_expr` – an expression that generates the addresses of remote servers. It may be just one server address. A server address is `host:port`, or just `host`. `host` can be specified as a server name, or as an IPv4 or IPv6 address. An IPv6 address is specified in square brackets. `port` is the TCP port on the remote server. If the port is omitted, `tcp_port` from the server's config file is used (9000 by default).

!!! important "Important"
    The port is required for an IPv6 address.

Example:
@ -34,7 +27,7 @@ localhost
[2a02:6b8:0:1111::11]:9000
```

Multiple addresses can be comma-separated. In this case, ClickHouse will use distributed processing, so it will send the query to all specified addresses (like shards with different data).

Example:
@ -56,7 +49,7 @@ example01-{01..02}-1
If you have multiple pairs of curly brackets, it generates the direct product of the corresponding sets.

Addresses and parts of addresses in curly brackets can be separated by the pipe symbol (\|). In this case, the corresponding sets of addresses are interpreted as replicas, and the query will be sent to the first healthy replica. The replicas are iterated in the order currently set by the [load\_balancing](../../operations/settings/settings.md) setting.

Example:
@ -66,20 +59,20 @@ example01-{01..02}-{1|2}
This example specifies two shards, each of which has two replicas.

The number of addresses generated is limited by a constant; currently this is 1000 addresses.

Using the `remote` table function is less optimal than creating a `Distributed` table, because in this case the server connection is re-established for every request. Also, if host names are set, the names are resolved, and errors are not counted when working with various replicas. When processing a large number of queries, always create the `Distributed` table ahead of time and don't use the `remote` table function.

The `remote` table function can be useful in the following cases (see the sketch after this list):

- Accessing a specific server for data comparison, debugging, and testing.
- Queries between various ClickHouse clusters for research purposes.
- Infrequent distributed requests that are made manually.
- Distributed requests where the set of servers is re-defined each time.
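
A hedged sketch of such ad-hoc use, with hypothetical host names and table:

``` sql
-- Hypothetical hosts and table, for illustration only.
-- Query a single remote server without a Distributed table:
SELECT count() FROM remote('example01-01-1', default.hits, 'default', '');

-- Two shards with two replicas each; per shard, the first healthy replica
-- is picked in the order given by the load_balancing setting:
SELECT count() FROM remote('example01-{01..02}-{1|2}', default.hits);
```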
If the user is not specified, `default` is used.
If the password is not specified, an empty password is used.

`remoteSecure` - same as `remote`, but over a secured connection. The default port is [tcp\_port\_secure](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-tcp_port_secure) from the config file, or 9440.

[Original article](https://clickhouse.tech/docs/en/query_language/table_functions/remote/) <!--hide-->

@ -16,6 +16,7 @@ option (ENABLE_CLICKHOUSE_COMPRESSOR "Enable clickhouse-compressor" ${ENABLE_CLI
option (ENABLE_CLICKHOUSE_COPIER "Enable clickhouse-copier" ${ENABLE_CLICKHOUSE_ALL})
option (ENABLE_CLICKHOUSE_FORMAT "Enable clickhouse-format" ${ENABLE_CLICKHOUSE_ALL})
option (ENABLE_CLICKHOUSE_OBFUSCATOR "Enable clickhouse-obfuscator" ${ENABLE_CLICKHOUSE_ALL})
option (ENABLE_CLICKHOUSE_GIT_IMPORT "Enable clickhouse-git-import" ${ENABLE_CLICKHOUSE_ALL})
option (ENABLE_CLICKHOUSE_ODBC_BRIDGE "Enable clickhouse-odbc-bridge" ${ENABLE_CLICKHOUSE_ALL})

if (CLICKHOUSE_SPLIT_BINARY)
@ -91,21 +92,22 @@ add_subdirectory (copier)
add_subdirectory (format)
add_subdirectory (obfuscator)
add_subdirectory (install)
add_subdirectory (git-import)

if (ENABLE_CLICKHOUSE_ODBC_BRIDGE)
    add_subdirectory (odbc-bridge)
endif ()

if (CLICKHOUSE_ONE_SHARED)
    add_library(clickhouse-lib SHARED ${CLICKHOUSE_SERVER_SOURCES} ${CLICKHOUSE_CLIENT_SOURCES} ${CLICKHOUSE_LOCAL_SOURCES} ${CLICKHOUSE_BENCHMARK_SOURCES} ${CLICKHOUSE_COPIER_SOURCES} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_SOURCES} ${CLICKHOUSE_COMPRESSOR_SOURCES} ${CLICKHOUSE_FORMAT_SOURCES} ${CLICKHOUSE_OBFUSCATOR_SOURCES} ${CLICKHOUSE_GIT_IMPORT_SOURCES} ${CLICKHOUSE_ODBC_BRIDGE_SOURCES})
    target_link_libraries(clickhouse-lib ${CLICKHOUSE_SERVER_LINK} ${CLICKHOUSE_CLIENT_LINK} ${CLICKHOUSE_LOCAL_LINK} ${CLICKHOUSE_BENCHMARK_LINK} ${CLICKHOUSE_COPIER_LINK} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_LINK} ${CLICKHOUSE_COMPRESSOR_LINK} ${CLICKHOUSE_FORMAT_LINK} ${CLICKHOUSE_OBFUSCATOR_LINK} ${CLICKHOUSE_GIT_IMPORT_LINK} ${CLICKHOUSE_ODBC_BRIDGE_LINK})
    target_include_directories(clickhouse-lib ${CLICKHOUSE_SERVER_INCLUDE} ${CLICKHOUSE_CLIENT_INCLUDE} ${CLICKHOUSE_LOCAL_INCLUDE} ${CLICKHOUSE_BENCHMARK_INCLUDE} ${CLICKHOUSE_COPIER_INCLUDE} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_INCLUDE} ${CLICKHOUSE_COMPRESSOR_INCLUDE} ${CLICKHOUSE_FORMAT_INCLUDE} ${CLICKHOUSE_OBFUSCATOR_INCLUDE} ${CLICKHOUSE_GIT_IMPORT_INCLUDE} ${CLICKHOUSE_ODBC_BRIDGE_INCLUDE})
    set_target_properties(clickhouse-lib PROPERTIES SOVERSION ${VERSION_MAJOR}.${VERSION_MINOR} VERSION ${VERSION_SO} OUTPUT_NAME clickhouse DEBUG_POSTFIX "")
    install (TARGETS clickhouse-lib LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR} COMPONENT clickhouse)
endif()

if (CLICKHOUSE_SPLIT_BINARY)
    set (CLICKHOUSE_ALL_TARGETS clickhouse-server clickhouse-client clickhouse-local clickhouse-benchmark clickhouse-extract-from-config clickhouse-compressor clickhouse-format clickhouse-obfuscator clickhouse-git-import clickhouse-copier)

    if (ENABLE_CLICKHOUSE_ODBC_BRIDGE)
        list (APPEND CLICKHOUSE_ALL_TARGETS clickhouse-odbc-bridge)
@ -149,6 +151,9 @@ else ()
    if (ENABLE_CLICKHOUSE_OBFUSCATOR)
        clickhouse_target_link_split_lib(clickhouse obfuscator)
    endif ()
    if (ENABLE_CLICKHOUSE_GIT_IMPORT)
        clickhouse_target_link_split_lib(clickhouse git-import)
    endif ()
    if (ENABLE_CLICKHOUSE_INSTALL)
        clickhouse_target_link_split_lib(clickhouse install)
    endif ()
@ -199,6 +204,11 @@ else ()
        install (FILES ${CMAKE_CURRENT_BINARY_DIR}/clickhouse-obfuscator DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse)
        list(APPEND CLICKHOUSE_BUNDLE clickhouse-obfuscator)
    endif ()
    if (ENABLE_CLICKHOUSE_GIT_IMPORT)
        add_custom_target (clickhouse-git-import ALL COMMAND ${CMAKE_COMMAND} -E create_symlink clickhouse clickhouse-git-import DEPENDS clickhouse)
        install (FILES ${CMAKE_CURRENT_BINARY_DIR}/clickhouse-git-import DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse)
        list(APPEND CLICKHOUSE_BUNDLE clickhouse-git-import)
    endif ()
    if(ENABLE_CLICKHOUSE_ODBC_BRIDGE)
        list(APPEND CLICKHOUSE_BUNDLE clickhouse-odbc-bridge)
    endif()
@ -866,6 +866,8 @@ private:
        // will exit. The ping() would be the best match here, but it's
        // private, probably for a good reason that the protocol doesn't allow
        // pings at any possible moment.
        // Don't forget to reset the default database which might have changed.
        connection->setDefaultDatabase("");
        connection->forceConnected(connection_parameters.timeouts);

        if (text.size() > 4 * 1024)
@ -900,74 +902,151 @@ private:
        return processMultiQuery(text);
    }

    bool processMultiQuery(const String & all_queries_text)
    {
        const bool test_mode = config().has("testmode");

        { /// disable logs if expects errors
            TestHint test_hint(test_mode, all_queries_text);
            if (test_hint.clientError() || test_hint.serverError())
                processTextAsSingleQuery("SET send_logs_level = 'none'");
        }

        /// Several queries separated by ';'.
        /// INSERT data is ended by the end of line, not ';'.
        /// An exception is VALUES format where we also support semicolon in
        /// addition to end of line.

        const char * this_query_begin = all_queries_text.data();
        const char * all_queries_end = all_queries_text.data() + all_queries_text.size();

        while (this_query_begin < all_queries_end)
        {
            // Use the token iterator to skip any whitespace, semicolons and
            // comments at the beginning of the query. An example from regression
            // tests:
            //      insert into table t values ('invalid'); -- { serverError 469 }
            //      select 1
            // Here the test hint comment gets parsed as a part of second query.
            // We parse the `INSERT VALUES` up to the semicolon, and the rest
            // looks like a two-line query:
            //      -- { serverError 469 }
            //      select 1
            // and we expect it to fail with error 469, but this hint is actually
            // for the previous query. Test hints should go after the query, so
            // we can fix this by skipping leading comments. Token iterator skips
            // comments and whitespace by itself, so we only have to check for
            // semicolons.
            // The code block is to limit visibility of `tokens` because we have
            // another such variable further down the code, and get warnings for
            // that.
            {
                Tokens tokens(this_query_begin, all_queries_end);
                IParser::Pos token_iterator(tokens,
                    context.getSettingsRef().max_parser_depth);
                while (token_iterator->type == TokenType::Semicolon
                        && token_iterator.isValid())
                {
                    ++token_iterator;
                }
                this_query_begin = token_iterator->begin;
                if (this_query_begin >= all_queries_end)
                {
                    break;
                }
            }

            // Try to parse the query.
            const char * this_query_end = this_query_begin;
            try
            {
                parsed_query = parseQuery(this_query_end, all_queries_end, true);
            }
            catch (Exception & e)
            {
                if (!test_mode)
                    throw;

                /// Try to find a test hint for the syntax error
                const char * end_of_line = find_first_symbols<'\n'>(this_query_begin, all_queries_end);
                TestHint hint(true, String(this_query_end, end_of_line - this_query_end));
                if (hint.serverError()) /// Syntax errors are considered as client errors
                    throw;
                if (hint.clientError() != e.code())
                {
                    if (hint.clientError())
                        e.addMessage("\nExpected client error: " + std::to_string(hint.clientError()));
                    throw;
                }

                /// It's an expected syntax error, skip the line
                this_query_begin = end_of_line;
                continue;
            }

            if (!parsed_query)
            {
                if (ignore_error)
                {
                    Tokens tokens(this_query_begin, all_queries_end);
                    IParser::Pos token_iterator(tokens, context.getSettingsRef().max_parser_depth);
                    while (token_iterator->type != TokenType::Semicolon && token_iterator.isValid())
                        ++token_iterator;
                    this_query_begin = token_iterator->end;

                    continue;
                }
                return true;
            }

            // INSERT queries may have the inserted data in the query text
            // that follows the query itself, e.g. "insert into t format CSV 1;2".
            // They need special handling. First of all, here we find where the
            // inserted data ends. In multi-query mode, it is delimited by a
            // newline.
            // The VALUES format needs even more handling -- we also allow the
            // data to be delimited by semicolon. This case is handled later by
            // the format parser itself.
            auto * insert_ast = parsed_query->as<ASTInsertQuery>();
            if (insert_ast && insert_ast->data)
            {
                this_query_end = find_first_symbols<'\n'>(insert_ast->data, all_queries_end);
                insert_ast->end = this_query_end;
                query_to_send = all_queries_text.substr(
                    this_query_begin - all_queries_text.data(),
                    insert_ast->data - this_query_begin);
            }
            else
            {
                query_to_send = all_queries_text.substr(
                    this_query_begin - all_queries_text.data(),
                    this_query_end - this_query_begin);
            }

            // full_query is the query + inline INSERT data.
            full_query = all_queries_text.substr(
                this_query_begin - all_queries_text.data(),
                this_query_end - this_query_begin);

            // Look for the hint in the text of query + insert data, if any.
            // e.g. insert into t format CSV 'a' -- { serverError 123 }.
            TestHint test_hint(test_mode, full_query);
            expected_client_error = test_hint.clientError();
            expected_server_error = test_hint.serverError();

            try
            {
                processParsedSingleQuery();

                if (insert_ast && insert_ast->data)
                {
                    // For VALUES format: use the end of inline data as reported
                    // by the format parser (it is saved in sendData()). This
                    // allows us to handle queries like:
                    //   insert into t values (1); select 1
                    //, where the inline data is delimited by semicolon and not
                    // by a newline.
                    this_query_end = parsed_query->as<ASTInsertQuery>()->end;
                }
            }
            catch (...)
@ -975,7 +1054,7 @@ private:
                last_exception_received_from_server = std::make_unique<Exception>(getCurrentExceptionMessage(true), getCurrentExceptionCode());
                actual_client_error = last_exception_received_from_server->code();
                if (!ignore_error && (!actual_client_error || actual_client_error != expected_client_error))
                    std::cerr << "Error on processing query: " << full_query << std::endl << last_exception_received_from_server->message();
                received_exception_from_server = true;
            }
@ -989,6 +1068,8 @@ private:
                else
                    return false;
            }

            this_query_begin = this_query_end;
        }

        return true;
@ -1103,7 +1184,9 @@ private:
        {
            last_exception_received_from_server = std::make_unique<Exception>(getCurrentExceptionMessage(true), getCurrentExceptionCode());
            received_exception_from_server = true;
            fmt::print(stderr, "Error on processing query '{}': {}\n",
                ast_to_process->formatForErrorMessage(),
                last_exception_received_from_server->message());
        }

        if (!connection->isConnected())
@ -1411,7 +1494,7 @@ private:
    void sendData(Block & sample, const ColumnsDescription & columns_description)
    {
        /// If INSERT data must be sent.
        auto * parsed_insert_query = parsed_query->as<ASTInsertQuery>();
        if (!parsed_insert_query)
            return;
@ -1420,6 +1503,9 @@ private:
            /// Send data contained in the query.
            ReadBufferFromMemory data_in(parsed_insert_query->data, parsed_insert_query->end - parsed_insert_query->data);
            sendDataFrom(data_in, sample, columns_description);
            // Remember where the data ended. We use this info later to determine
            // where the next query begins.
            parsed_insert_query->end = data_in.buffer().begin() + data_in.count();
        }
        else if (!is_interactive)
        {
@ -12,5 +12,6 @@
#cmakedefine01 ENABLE_CLICKHOUSE_COMPRESSOR
#cmakedefine01 ENABLE_CLICKHOUSE_FORMAT
#cmakedefine01 ENABLE_CLICKHOUSE_OBFUSCATOR
#cmakedefine01 ENABLE_CLICKHOUSE_GIT_IMPORT
#cmakedefine01 ENABLE_CLICKHOUSE_INSTALL
#cmakedefine01 ENABLE_CLICKHOUSE_ODBC_BRIDGE

programs/git-import/CMakeLists.txt (new file) @ -0,0 +1,10 @@
set (CLICKHOUSE_GIT_IMPORT_SOURCES git-import.cpp)

set (CLICKHOUSE_GIT_IMPORT_LINK
    PRIVATE
        boost::program_options
        dbms
)

clickhouse_program_add(git-import)
programs/git-import/clickhouse-git-import.cpp (new file) @ -0,0 +1,2 @@
int mainEntryClickHouseGitImport(int argc, char ** argv);
int main(int argc_, char ** argv_) { return mainEntryClickHouseGitImport(argc_, argv_); }

programs/git-import/git-import.cpp (new file, 1235 lines; diff suppressed because it is too large)
@ -205,6 +205,7 @@ int mainEntryClickHouseInstall(int argc, char ** argv)
        "clickhouse-benchmark",
        "clickhouse-copier",
        "clickhouse-obfuscator",
        "clickhouse-git-import",
        "clickhouse-compressor",
        "clickhouse-format",
        "clickhouse-extract-from-config"
@ -46,6 +46,9 @@ int mainEntryClickHouseClusterCopier(int argc, char ** argv);
#if ENABLE_CLICKHOUSE_OBFUSCATOR
int mainEntryClickHouseObfuscator(int argc, char ** argv);
#endif
#if ENABLE_CLICKHOUSE_GIT_IMPORT
int mainEntryClickHouseGitImport(int argc, char ** argv);
#endif
#if ENABLE_CLICKHOUSE_INSTALL
int mainEntryClickHouseInstall(int argc, char ** argv);
int mainEntryClickHouseStart(int argc, char ** argv);
@ -91,6 +94,9 @@ std::pair<const char *, MainFunc> clickhouse_applications[] =
#if ENABLE_CLICKHOUSE_OBFUSCATOR
    {"obfuscator", mainEntryClickHouseObfuscator},
#endif
#if ENABLE_CLICKHOUSE_GIT_IMPORT
    {"git-import", mainEntryClickHouseGitImport},
#endif
#if ENABLE_CLICKHOUSE_INSTALL
    {"install", mainEntryClickHouseInstall},
    {"start", mainEntryClickHouseStart},
@ -15,6 +15,7 @@ namespace DB
namespace ErrorCodes
{
    extern const int NUMBER_OF_COLUMNS_DOESNT_MATCH;
    extern const int UNKNOWN_TYPE;
}
@ -86,6 +87,8 @@ namespace
            case ValueType::vtUUID:
                assert_cast<ColumnUInt128 &>(column).insert(parse<UUID>(value.convert<std::string>()));
                break;
            default:
                throw Exception("Unsupported value type", ErrorCodes::UNKNOWN_TYPE);
        }
    }
@ -13,6 +13,11 @@
namespace DB
{

namespace ErrorCodes
{
    extern const int UNKNOWN_TYPE;
}

namespace
{
    using ValueType = ExternalResultDescription::ValueType;
@ -79,6 +84,9 @@ namespace
                return Poco::Dynamic::Var(std::to_string(LocalDateTime(time_t(field.get<UInt64>())))).convert<String>();
            case ValueType::vtUUID:
                return Poco::Dynamic::Var(UUID(field.get<UInt128>()).toUnderType().toHexString()).convert<std::string>();
            default:
                throw Exception("Unsupported value type", ErrorCodes::UNKNOWN_TYPE);
        }
        __builtin_unreachable();
    }
@ -32,6 +32,7 @@
#include <Common/getExecutablePath.h>
#include <Common/ThreadProfileEvents.h>
#include <Common/ThreadStatus.h>
#include <Common/remapExecutable.h>
#include <IO/HTTPCommon.h>
#include <IO/UseSSL.h>
#include <Interpreters/AsynchronousMetrics.h>
@ -305,6 +306,13 @@ int Server::main(const std::vector<std::string> & /*args*/)
    /// After full config loaded
    {
        if (config().getBool("remap_executable", false))
        {
            LOG_DEBUG(log, "Will remap executable in memory.");
            remapExecutable();
            LOG_DEBUG(log, "The code in memory has been successfully remapped.");
        }

        if (config().getBool("mlock_executable", false))
        {
            if (hasLinuxCapability(CAP_IPC_LOCK))
programs/server/config.d/access_control.xml (new file) @ -0,0 +1,13 @@
<yandex>
    <!-- Sources to read users, roles, access rights, profiles of settings, quotas. -->
    <user_directories replace="replace">
        <users_xml>
            <!-- Path to configuration file with predefined users. -->
            <path>users.xml</path>
        </users_xml>
        <local_directory>
            <!-- Path to folder where users created by SQL commands are stored. -->
            <path>access/</path>
        </local_directory>
    </user_directories>
</yandex>
@ -212,8 +212,17 @@
    <!-- Directory with user provided files that are accessible by 'file' table function. -->
    <user_files_path>/var/lib/clickhouse/user_files/</user_files_path>

    <!-- Sources to read users, roles, access rights, profiles of settings, quotas. -->
    <user_directories>
        <users_xml>
            <!-- Path to configuration file with predefined users. -->
            <path>users.xml</path>
        </users_xml>
        <local_directory>
            <!-- Path to folder where users created by SQL commands are stored. -->
            <path>/var/lib/clickhouse/access/</path>
        </local_directory>
    </user_directories>
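
For context, a hedged SQL sketch of what the `local_directory` section enables: access entities created through SQL are persisted under the configured path (the user name and password below are hypothetical):

``` sql
-- Hypothetical example; with local_directory configured, these entities
-- are stored as files under /var/lib/clickhouse/access/.
CREATE USER alice IDENTIFIED WITH sha256_password BY 'secret';
GRANT SELECT ON default.* TO alice;
```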

    <!-- External user directories (LDAP). -->
    <ldap_servers>
@ -256,9 +265,6 @@
    -->
    </ldap_servers>

    <!-- Default profile of settings. -->
    <default_profile>default</default_profile>
@ -296,6 +302,9 @@
    -->
    <mlock_executable>true</mlock_executable>

    <!-- Reallocate memory for machine code ("text") using huge pages. Highly experimental. -->
    <remap_executable>false</remap_executable>

    <!-- Configuration of clusters that could be used in Distributed tables.
         https://clickhouse.tech/docs/en/operations/table_engines/distributed/
      -->
@ -181,6 +181,15 @@ void AccessControlManager::addUsersConfigStorage(
    const String & preprocessed_dir_,
    const zkutil::GetZooKeeper & get_zookeeper_function_)
{
    auto storages = getStoragesPtr();
    for (const auto & storage : *storages)
    {
        if (auto users_config_storage = typeid_cast<std::shared_ptr<UsersConfigAccessStorage>>(storage))
        {
            if (users_config_storage->getStoragePath() == users_config_path_)
                return;
        }
    }
    auto check_setting_name_function = [this](const std::string_view & setting_name) { checkSettingNameIsAllowed(setting_name); };
    auto new_storage = std::make_shared<UsersConfigAccessStorage>(storage_name_, check_setting_name_function);
    new_storage->load(users_config_path_, include_from_path_, preprocessed_dir_, get_zookeeper_function_);
@ -210,17 +219,36 @@ void AccessControlManager::startPeriodicReloadingUsersConfigs()
void AccessControlManager::addDiskStorage(const String & directory_, bool readonly_)
{
    addDiskStorage(DiskAccessStorage::STORAGE_TYPE, directory_, readonly_);
}

void AccessControlManager::addDiskStorage(const String & storage_name_, const String & directory_, bool readonly_)
{
    auto storages = getStoragesPtr();
    for (const auto & storage : *storages)
    {
        if (auto disk_storage = typeid_cast<std::shared_ptr<DiskAccessStorage>>(storage))
        {
            if (disk_storage->isStoragePathEqual(directory_))
            {
                if (readonly_)
                    disk_storage->setReadOnly(readonly_);
                return;
            }
        }
    }
    addStorage(std::make_shared<DiskAccessStorage>(storage_name_, directory_, readonly_));
}


void AccessControlManager::addMemoryStorage(const String & storage_name_)
{
    auto storages = getStoragesPtr();
    for (const auto & storage : *storages)
    {
        if (auto memory_storage = typeid_cast<std::shared_ptr<MemoryAccessStorage>>(storage))
            return;
    }
    addStorage(std::make_shared<MemoryAccessStorage>(storage_name_));
}

@ -1,7 +1,7 @@
#pragma once

#include <Access/AccessType.h>
#include <common/types.h>
#include <Common/Exception.h>
#include <ext/range.h>
#include <ext/push_back.h>
@ -1,6 +1,6 @@
#pragma once

#include <common/types.h>
#include <Access/AccessRightsElement.h>
#include <memory>
#include <vector>
@ -1,13 +1,17 @@
#pragma once

#include <common/types.h>
#include <boost/algorithm/string/case_conv.hpp>
#include <boost/algorithm/string/replace.hpp>
#include <array>
#include <vector>


namespace DB
{

using Strings = std::vector<String>;

/// Represents an access type which can be granted on databases, tables, columns, etc.
enum class AccessType
{
@ -1,6 +1,6 @@
#pragma once

#include <common/types.h>
#include <Poco/Net/IPAddress.h>
#include <memory>
#include <vector>
@ -11,6 +11,9 @@
namespace DB
{

using Strings = std::vector<String>;

/// Represents lists of hosts a user is allowed to connect to the server from.
class AllowedClientHosts
{
|
@ -1,6 +1,6 @@
|
|||||||
#pragma once
|
#pragma once
|
||||||
|
|
||||||
#include <Core/Types.h>
|
#include <common/types.h>
|
||||||
#include <Common/Exception.h>
|
#include <Common/Exception.h>
|
||||||
#include <Common/OpenSSLHelpers.h>
|
#include <Common/OpenSSLHelpers.h>
|
||||||
#include <Poco/SHA1Engine.h>
|
#include <Poco/SHA1Engine.h>
|
||||||
|
@ -218,6 +218,16 @@ namespace
}


/// Converts a path to an absolute path and appends a separator to it.
String makeDirectoryPathCanonical(const String & directory_path)
{
    auto canonical_directory_path = std::filesystem::weakly_canonical(directory_path);
    if (canonical_directory_path.has_filename())
        canonical_directory_path += std::filesystem::path::preferred_separator;
    return canonical_directory_path;
}


/// Calculates the path to a file named <id>.sql for saving an access entity.
String getEntityFilePath(const String & directory_path, const UUID & id)
{
@ -298,22 +308,17 @@ DiskAccessStorage::DiskAccessStorage(const String & directory_path_, bool readon
{
}


DiskAccessStorage::DiskAccessStorage(const String & storage_name_, const String & directory_path_, bool readonly_)
    : IAccessStorage(storage_name_)
{
    directory_path = makeDirectoryPathCanonical(directory_path_);
    readonly = readonly_;

    std::error_code create_dir_error_code;
    std::filesystem::create_directories(directory_path, create_dir_error_code);

    if (!std::filesystem::exists(directory_path) || !std::filesystem::is_directory(directory_path) || create_dir_error_code)
        throw Exception("Couldn't create directory " + directory_path + " reason: '" + create_dir_error_code.message() + "'", ErrorCodes::DIRECTORY_DOESNT_EXIST);

    bool should_rebuild_lists = std::filesystem::exists(getNeedRebuildListsMarkFilePath(directory_path));
    if (!should_rebuild_lists)
@ -337,6 +342,12 @@ DiskAccessStorage::~DiskAccessStorage()
}


bool DiskAccessStorage::isStoragePathEqual(const String & directory_path_) const
{
    return getStoragePath() == makeDirectoryPathCanonical(directory_path_);
}


void DiskAccessStorage::clear()
{
    entries_by_id.clear();
@ -426,33 +437,41 @@ bool DiskAccessStorage::writeLists()
void DiskAccessStorage::scheduleWriteLists(EntityType type)
{
    if (failed_to_write_lists)
        return; /// We don't try to write list files after the first fail.
                /// The next restart of the server will invoke rebuilding of the list files.

    types_of_lists_to_write.insert(type);

    if (lists_writing_thread_is_waiting)
        return; /// If the lists' writing thread is still waiting we can update `types_of_lists_to_write` easily,
                /// without restarting that thread.

    if (lists_writing_thread.joinable())
        lists_writing_thread.join();

    /// Create the 'need_rebuild_lists.mark' file.
    /// This file will be used later to find out if writing lists is successful or not.
    std::ofstream{getNeedRebuildListsMarkFilePath(directory_path)};

    lists_writing_thread = ThreadFromGlobalPool{&DiskAccessStorage::listsWritingThreadFunc, this};
    lists_writing_thread_is_waiting = true;
}


void DiskAccessStorage::listsWritingThreadFunc()
{
    std::unique_lock lock{mutex};

    {
        /// It's better not to write the lists files too often, that's why we need
        /// the following timeout.
        const auto timeout = std::chrono::minutes(1);
        SCOPE_EXIT({ lists_writing_thread_is_waiting = false; });
        if (lists_writing_thread_should_exit.wait_for(lock, timeout) != std::cv_status::timeout)
            return; /// The destructor requires us to exit.
    }

    writeLists();
}

@ -466,21 +485,6 @@ void DiskAccessStorage::stopListsWritingThread()
}


/// Reads and parses all the "<id>.sql" files from a specified directory
/// and then saves the files "users.list", "roles.list", etc. to the same directory.
bool DiskAccessStorage::rebuildLists()
@ -18,7 +18,11 @@ public:
    ~DiskAccessStorage() override;

    const char * getStorageType() const override { return STORAGE_TYPE; }

    String getStoragePath() const override { return directory_path; }
    bool isStoragePathEqual(const String & directory_path_) const;

    void setReadOnly(bool readonly_) { readonly = readonly_; }
    bool isStorageReadOnly() const override { return readonly; }

private:
@ -42,9 +46,8 @@ private:
    void scheduleWriteLists(EntityType type);
    bool rebuildLists();

    void listsWritingThreadFunc();
    void stopListsWritingThread();

    void insertNoLock(const UUID & id, const AccessEntityPtr & new_entity, bool replace_if_exists, Notifications & notifications);
    void removeNoLock(const UUID & id, Notifications & notifications);
@@ -67,14 +70,14 @@ private:
     void prepareNotifications(const UUID & id, const Entry & entry, bool remove, Notifications & notifications) const;
 
     String directory_path;
-    bool readonly;
+    std::atomic<bool> readonly;
     std::unordered_map<UUID, Entry> entries_by_id;
     std::unordered_map<std::string_view, Entry *> entries_by_name_and_type[static_cast<size_t>(EntityType::MAX)];
     boost::container::flat_set<EntityType> types_of_lists_to_write;
     bool failed_to_write_lists = false; /// Whether writing of the list files has been failed since the recent restart of the server.
     ThreadFromGlobalPool lists_writing_thread; /// List files are written in a separate thread.
     std::condition_variable lists_writing_thread_should_exit; /// Signals `lists_writing_thread` to exit.
-    std::atomic<bool> lists_writing_thread_exited = false;
+    bool lists_writing_thread_is_waiting = false;
     mutable std::list<OnChangedHandler> handlers_by_type[static_cast<size_t>(EntityType::MAX)];
     mutable std::mutex mutex;
 };
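The `readonly` flag becomes `std::atomic<bool>` in the same change that adds the unlocked `setReadOnly` setter above, which is consistent with the flag being written and read concurrently without the mutex. A minimal illustration of why the atomic matters follows; the class is hypothetical, not the actual `DiskAccessStorage`.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

/// Hypothetical: one thread flips the flag while another polls it, with no
/// mutex on either path. With a plain bool this would be a data race (UB);
/// std::atomic<bool> makes both operations well-defined.
class Storage
{
public:
    void setReadOnly(bool readonly_) { readonly = readonly_; } /// Atomic store.
    bool isStorageReadOnly() const { return readonly; }        /// Atomic load.

private:
    std::atomic<bool> readonly{false};
};

int main()
{
    Storage storage;
    std::thread writer([&] { storage.setReadOnly(true); });
    while (!storage.isStorageReadOnly())
        ; /// Busy-wait is fine for a demo; real code would block or check once.
    writer.join();
    assert(storage.isStorageReadOnly());
}
```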
@@ -1,7 +1,7 @@
 #pragma once
 
 #include <Access/RowPolicy.h>
-#include <Core/Types.h>
+#include <common/types.h>
 #include <Core/UUID.h>
 #include <boost/smart_ptr/atomic_shared_ptr.hpp>
 #include <unordered_map>
@@ -1,6 +1,6 @@
 #pragma once
 
-#include <Core/Types.h>
+#include <common/types.h>
 #include <Core/UUID.h>
 #include <Access/SettingsConstraints.h>
 #include <Access/SettingsProfileElement.h>
@@ -1,7 +1,7 @@
 #pragma once
 
 #include <Access/LDAPParams.h>
-#include <Core/Types.h>
+#include <common/types.h>
 
 #include <map>
 #include <memory>
@@ -1,6 +1,6 @@
 #pragma once
 
-#include <Core/Types.h>
+#include <common/types.h>
 #include <Common/typeid_cast.h>
 #include <Common/quoteString.h>
 #include <boost/algorithm/string.hpp>
@@ -1,7 +1,7 @@
 #pragma once
 
 #include <Access/IAccessEntity.h>
-#include <Core/Types.h>
+#include <common/types.h>
 #include <Core/UUID.h>
 #include <ext/scope_guard.h>
 #include <functional>
@@ -5,7 +5,7 @@
 #endif
 
 #include <Access/LDAPParams.h>
-#include <Core/Types.h>
+#include <common/types.h>
 
 #if USE_LDAP
 # include <ldap.h>
@@ -1,6 +1,6 @@
 #pragma once
 
-#include <Core/Types.h>
+#include <common/types.h>
 
 #include <chrono>
 
@@ -2,7 +2,7 @@
 
 #include <Access/EnabledSettings.h>
 #include <Core/UUID.h>
-#include <Core/Types.h>
+#include <common/types.h>
 #include <ext/scope_guard.h>
 #include <map>
 #include <unordered_map>
@@ -12,6 +12,9 @@ namespace ErrorCodes
     extern const int ILLEGAL_TYPE_OF_ARGUMENT;
 }
 
+namespace
+{
+
 class AggregateFunctionCombinatorArray final : public IAggregateFunctionCombinator
 {
 public:
@@ -45,6 +48,8 @@ public:
     }
 };
 
+}
+
 void registerAggregateFunctionCombinatorArray(AggregateFunctionCombinatorFactory & factory)
 {
     factory.registerCombinator(std::make_shared<AggregateFunctionCombinatorArray>());
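This pair of hunks (and the two analogous pairs below) wraps each combinator class in an anonymous namespace: the class gets internal linkage, so it cannot collide with a same-named symbol in another translation unit, while the `register...` function keeps external linkage as the only entry point. A generic sketch of the pattern with illustrative names (not ClickHouse's real interfaces):

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <vector>

struct ICombinator
{
    virtual ~ICombinator() = default;
    virtual std::string getName() const = 0;
};

struct Factory
{
    void registerCombinator(std::shared_ptr<ICombinator> combinator)
    {
        combinators.push_back(std::move(combinator));
    }
    std::vector<std::shared_ptr<ICombinator>> combinators;
};

namespace /// Internal linkage: invisible outside this translation unit.
{

class CombinatorArray final : public ICombinator
{
public:
    std::string getName() const override { return "Array"; }
};

}

void registerCombinatorArray(Factory & factory) /// The only externally visible symbol.
{
    factory.registerCombinator(std::make_shared<CombinatorArray>());
}

int main()
{
    Factory factory;
    registerCombinatorArray(factory);
    std::cout << factory.combinators.front()->getName() << '\n'; /// Prints "Array".
}
```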
@@ -6,12 +6,14 @@
 
 namespace DB
 {
 
 namespace ErrorCodes
 {
     extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
 }
 
+namespace
+{
 class AggregateFunctionCombinatorDistinct final : public IAggregateFunctionCombinator
 {
 public:
@@ -56,6 +58,8 @@ public:
     }
 };
 
+}
+
 void registerAggregateFunctionCombinatorDistinct(AggregateFunctionCombinatorFactory & factory)
 {
     factory.registerCombinator(std::make_shared<AggregateFunctionCombinatorDistinct>());
@@ -12,6 +12,9 @@ namespace ErrorCodes
     extern const int ILLEGAL_TYPE_OF_ARGUMENT;
 }
 
+namespace
+{
+
 class AggregateFunctionCombinatorForEach final : public IAggregateFunctionCombinator
 {
 public:
@@ -42,6 +45,8 @@ public:
     }
 };
 
+}
+
 void registerAggregateFunctionCombinatorForEach(AggregateFunctionCombinatorFactory & factory)
 {
     factory.registerCombinator(std::make_shared<AggregateFunctionCombinatorForEach>());
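All three registrations feed the same `AggregateFunctionCombinatorFactory`. Conceptually, ClickHouse resolves a name such as `sumForEach` by peeling a registered combinator suffix off the function name and transforming the underlying aggregate; the toy lookup below sketches just that suffix-peeling step, with simplified types that are not the real factory API.

```cpp
#include <iostream>
#include <optional>
#include <string>
#include <utility>
#include <vector>

struct Combinator
{
    std::string suffix; /// e.g. "Array", "Distinct", "ForEach"
};

/// Toy lookup: split "sumForEach" into the base name "sum" and the
/// combinator suffix "ForEach", if any registered suffix matches.
std::optional<std::pair<std::string, std::string>>
splitBySuffix(const std::string & name, const std::vector<Combinator> & combinators)
{
    for (const auto & combinator : combinators)
    {
        const auto & suffix = combinator.suffix;
        if (name.size() > suffix.size()
            && name.compare(name.size() - suffix.size(), suffix.size(), suffix) == 0)
            return std::pair{name.substr(0, name.size() - suffix.size()), suffix};
    }
    return std::nullopt;
}

int main()
{
    std::vector<Combinator> combinators{{"Array"}, {"Distinct"}, {"ForEach"}};
    if (auto split = splitBySuffix("sumForEach", combinators))
        std::cout << "base: " << split->first << ", combinator: " << split->second << '\n';
}
```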