mirror of https://github.com/ClickHouse/ClickHouse.git
synced 2024-11-21 15:12:02 +00:00

commit ed42437219
Merge remote-tracking branch 'upstream/master' into HEAD

.gitattributes (vendored): 2 changes
@ -1,2 +1,4 @@
 contrib/* linguist-vendored
 *.h linguist-language=C++
+# to avoid frequent conflicts
+tests/queries/0_stateless/arcadia_skip_list.txt text merge=union
CHANGELOG.md: 156 changes
@ -1,3 +1,159 @@

## ClickHouse release 21.3

### ClickHouse release v21.3, 2021-03-12

#### Backward Incompatible Change

* It is no longer allowed to create MergeTree tables in the old syntax with a table TTL, because it was simply ignored. Attaching old tables is still possible. [#20282](https://github.com/ClickHouse/ClickHouse/pull/20282) ([alesapin](https://github.com/alesapin)).
* All case-insensitive function names are now rewritten to their canonical representations. This is needed for projection query routing (an upcoming feature). [#20174](https://github.com/ClickHouse/ClickHouse/pull/20174) ([Amos Bird](https://github.com/amosbird)).
* Fix creation of `TTL` in cases when its expression is a function and is the same as the `ORDER BY` key. It is now allowed to set custom aggregation for primary key columns in `TTL` with `GROUP BY`. Backward incompatible: for primary key columns that are not in `GROUP BY` and are not set explicitly, the function `any` is now applied instead of `max` when the TTL expires (see the sketch after this list). Also, if you use TTL with `WHERE` or `GROUP BY`, you may see exceptions during merges while performing a rolling update. [#15450](https://github.com/ClickHouse/ClickHouse/pull/15450) ([Anton Popov](https://github.com/CurtizJ)).
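
A minimal sketch of the `TTL ... GROUP BY` behavior described above; the table and column names are hypothetical:

```sql
-- When the TTL fires, expired rows are aggregated by k1 (a prefix of the
-- primary key). 'x' uses the explicit aggregation sum(x); 'k2', a primary
-- key column not in GROUP BY and not set explicitly, now gets 'any'
-- instead of the previous 'max'.
CREATE TABLE t_ttl_rollup
(
    d  DateTime,
    k1 Int32,
    k2 Int32,
    x  Int32
)
ENGINE = MergeTree
ORDER BY (k1, k2)
TTL d + INTERVAL 1 MONTH GROUP BY k1 SET x = sum(x);
```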

#### New Feature

* Add file engine settings: `engine_file_empty_if_not_exists` and `engine_file_truncate_on_insert`. [#20620](https://github.com/ClickHouse/ClickHouse/pull/20620) ([M0r64n](https://github.com/M0r64n)).
* Add aggregate function `deltaSum` for summing the differences between consecutive rows (usage sketch after this list). [#20057](https://github.com/ClickHouse/ClickHouse/pull/20057) ([Russ Frank](https://github.com/rf)).
* New `event_time_microseconds` column in the `system.part_log` table. [#20027](https://github.com/ClickHouse/ClickHouse/pull/20027) ([Bharat Nallan](https://github.com/bharatnc)).
* Added `timezoneOffset(datetime)` function, which returns the offset from UTC in seconds (see the example after this list). This closes [#19850](https://github.com/ClickHouse/ClickHouse/issues/19850). [#19962](https://github.com/ClickHouse/ClickHouse/pull/19962) ([keenwolf](https://github.com/keen-wolf)).
* Add setting `insert_shard_id` to support inserting data into a specific shard of a distributed table (see the example after this list). [#19961](https://github.com/ClickHouse/ClickHouse/pull/19961) ([flynn](https://github.com/ucasFL)).
* Function `reinterpretAs` updated to support big integers. Fixes [#19691](https://github.com/ClickHouse/ClickHouse/issues/19691). [#19858](https://github.com/ClickHouse/ClickHouse/pull/19858) ([Maksim Kita](https://github.com/kitaisreal)).
* Added Server Side Encryption Customer Keys (the `x-amz-server-side-encryption-customer-(key/md5)` header) support in the S3 client. See [the link](https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html). Closes [#19428](https://github.com/ClickHouse/ClickHouse/issues/19428). [#19748](https://github.com/ClickHouse/ClickHouse/pull/19748) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Added `implicit_key` option for the `executable` dictionary source. It allows omitting the key for every record if records come in the same order as the input keys. Implements [#14527](https://github.com/ClickHouse/ClickHouse/issues/14527). [#19677](https://github.com/ClickHouse/ClickHouse/pull/19677) ([Maksim Kita](https://github.com/kitaisreal)).
* Add quota types `query_selects` and `query_inserts`. [#19603](https://github.com/ClickHouse/ClickHouse/pull/19603) ([JackyWoo](https://github.com/JackyWoo)).
* Add function `extractTextFromHTML`. [#19600](https://github.com/ClickHouse/ClickHouse/pull/19600) ([zlx19950903](https://github.com/zlx19950903)), ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Tables with a `MergeTree*` engine now have two new table-level settings for query concurrency control. The setting `max_concurrent_queries` limits the number of concurrently executed queries related to the table. The setting `min_marks_to_honor_max_concurrent_queries` applies the previous setting only if the query reads at least this number of marks. [#19544](https://github.com/ClickHouse/ClickHouse/pull/19544) ([Amos Bird](https://github.com/amosbird)).
* Added `file` function to read a file from the user_files directory as a String. This is different from the `file` table function. This implements [#18851](https://github.com/ClickHouse/ClickHouse/issues/18851). [#19204](https://github.com/ClickHouse/ClickHouse/pull/19204) ([keenwolf](https://github.com/keen-wolf)).
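
A few usage sketches for the new features above; the table name and sample values are hypothetical:

```sql
-- deltaSum: sums the positive differences between consecutive rows;
-- negative differences are ignored.
SELECT deltaSum(arrayJoin([1, 2, 3]));                    -- 2
SELECT deltaSum(arrayJoin([1, 2, 3, 0, 3, 4, 2, 3]));    -- 7

-- timezoneOffset: offset from UTC in seconds for the value's timezone
-- (Europe/Moscow is UTC+3, so 10800).
SELECT timezoneOffset(toDateTime('2021-03-12 00:00:00', 'Europe/Moscow'));

-- insert_shard_id: route an INSERT through a Distributed table to a
-- single shard; 'dist_table' is assumed to be a Distributed table.
INSERT INTO dist_table SETTINGS insert_shard_id = 1 VALUES (1, 'a');
```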

#### Experimental feature

* Add experimental `Replicated` database engine. It replicates DDL queries across multiple hosts. [#16193](https://github.com/ClickHouse/ClickHouse/pull/16193) ([tavplubix](https://github.com/tavplubix)).
* Introduce experimental support for window functions, enabled with `allow_experimental_window_functions = 1` (see the sketch after this list). This is a preliminary, alpha-quality implementation that is not suitable for production use and will change in backward-incompatible ways in future releases. Please see [the documentation](https://github.com/ClickHouse/ClickHouse/blob/master/docs/en/sql-reference/window-functions/index.md#experimental-window-functions) for the list of supported features. [#20337](https://github.com/ClickHouse/ClickHouse/pull/20337) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Add the ability to backup/restore metadata files for DiskS3. [#18377](https://github.com/ClickHouse/ClickHouse/pull/18377) ([Pavel Kovalenko](https://github.com/Jokser)).
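
A minimal window-function sketch under the experimental setting named above; the `events` table and its columns are hypothetical:

```sql
SET allow_experimental_window_functions = 1;

-- Running total of 'amount' per 'user_id', ordered by event time.
SELECT
    user_id,
    ts,
    sum(amount) OVER (PARTITION BY user_id ORDER BY ts) AS running_total
FROM events
ORDER BY user_id, ts;
```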

#### Performance Improvement

* Hedged requests for remote queries. When the setting `use_hedged_requests` is enabled (off by default), it allows establishing many connections with different replicas for a query. A new connection is opened if the existing connection(s) with replica(s) were not established within `hedged_connection_timeout` or no data was received within `receive_data_timeout`. The query uses the first connection that sends a non-empty progress packet (or a data packet, if `allow_changing_replica_until_first_data_packet`); other connections are cancelled. Queries with `max_parallel_replicas > 1` are supported. [#19291](https://github.com/ClickHouse/ClickHouse/pull/19291) ([Kruglov Pavel](https://github.com/Avogar)). This significantly reduces tail latencies on very large clusters.
* Added support for `PREWHERE` (and enabled the corresponding optimization) when tables have row-level security expressions specified. [#19576](https://github.com/ClickHouse/ClickHouse/pull/19576) ([Denis Glazachev](https://github.com/traceon)).
* The setting `distributed_aggregation_memory_efficient` is enabled by default. It lowers memory usage and improves performance of distributed queries. [#20599](https://github.com/ClickHouse/ClickHouse/pull/20599) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Improve performance of `GROUP BY` with multiple fixed-size keys. [#20472](https://github.com/ClickHouse/ClickHouse/pull/20472) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Improve performance of aggregate functions by more strict aliasing. [#19946](https://github.com/ClickHouse/ClickHouse/pull/19946) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Speed up reading from `Memory` tables in extreme cases (when the reading speed is on the order of 50 GB/sec) by simplifying the pipeline and, consequently, reducing lock contention in pipeline scheduling. [#20468](https://github.com/ClickHouse/ClickHouse/pull/20468) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Partially reimplement the HTTP server to make fewer copies of incoming and outgoing data. It gives up to 1.5x performance improvement on inserting long records over HTTP. [#19516](https://github.com/ClickHouse/ClickHouse/pull/19516) ([Ivan](https://github.com/abyss7)).
* Add `compress` setting for `Memory` tables. If it is enabled, the table will use less RAM. On some machines and datasets it can also work faster on SELECT, but this is not always the case (see the sketch after this list). This closes [#20093](https://github.com/ClickHouse/ClickHouse/issues/20093). Note: there are reasons why Memory tables can work slower than MergeTree: (1) lack of compression, (2) static size of blocks, (3) lack of indices and prewhere... [#20168](https://github.com/ClickHouse/ClickHouse/pull/20168) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Slightly better code in aggregation. [#20978](https://github.com/ClickHouse/ClickHouse/pull/20978) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add back `intDiv`/`modulo` specializations for better performance. This fixes [#21293](https://github.com/ClickHouse/ClickHouse/issues/21293). The regression was introduced in https://github.com/ClickHouse/ClickHouse/pull/18145. [#21307](https://github.com/ClickHouse/ClickHouse/pull/21307) ([Amos Bird](https://github.com/amosbird)).
* Do not squash blocks too much on INSERT SELECT when inserting into a Memory table. In previous versions an inefficient data representation was created in the Memory table after INSERT SELECT. This closes [#13052](https://github.com/ClickHouse/ClickHouse/issues/13052). [#20169](https://github.com/ClickHouse/ClickHouse/pull/20169) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix at least one case where the DataType parser could have exponential complexity (found by fuzzer). This closes [#20096](https://github.com/ClickHouse/ClickHouse/issues/20096). [#20132](https://github.com/ClickHouse/ClickHouse/pull/20132) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Parallelize SELECT with FINAL for a single part with level > 0 when the `do_not_merge_across_partitions_select_final` setting is 1. [#19375](https://github.com/ClickHouse/ClickHouse/pull/19375) ([Kruglov Pavel](https://github.com/Avogar)).
* Fill only the requested columns when querying `system.parts` and `system.parts_columns`. Closes [#19570](https://github.com/ClickHouse/ClickHouse/issues/19570). [#21035](https://github.com/ClickHouse/ClickHouse/pull/21035) ([Anmol Arora](https://github.com/anmolarora)).
* Perform algebraic optimizations of arithmetic expressions inside the `avg` aggregate function. Closes [#20092](https://github.com/ClickHouse/ClickHouse/issues/20092). [#20183](https://github.com/ClickHouse/ClickHouse/pull/20183) ([flynn](https://github.com/ucasFL)).
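
A minimal sketch of the `compress` setting for `Memory` tables described above, assuming the setting is given in the CREATE statement; the table name is hypothetical:

```sql
-- Store in-memory blocks compressed: trades some CPU for less RAM.
CREATE TABLE hot_cache
(
    id UInt64,
    payload String
)
ENGINE = Memory
SETTINGS compress = 1;
```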

#### Improvement

* Case-insensitive compression methods for table functions. Also fixed the LZMA compression method, which was checked in upper case. [#21416](https://github.com/ClickHouse/ClickHouse/pull/21416) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Add two settings to delay or throw an error during insertion when there are too many inactive parts. This is useful when the server fails to clean up parts quickly enough. [#20178](https://github.com/ClickHouse/ClickHouse/pull/20178) ([Amos Bird](https://github.com/amosbird)).
* Provide better compatibility for MySQL clients: 1. MySQL JDBC, 2. mycli. [#21367](https://github.com/ClickHouse/ClickHouse/pull/21367) ([Amos Bird](https://github.com/amosbird)).
* Forbid dropping a column if it is referenced by a materialized view. Closes [#21164](https://github.com/ClickHouse/ClickHouse/issues/21164). [#21303](https://github.com/ClickHouse/ClickHouse/pull/21303) ([flynn](https://github.com/ucasFL)).
* The MySQL dictionary source will now retry unexpected connection failures ("Lost connection to MySQL server during query") which sometimes happen on SSL/TLS connections. [#21237](https://github.com/ClickHouse/ClickHouse/pull/21237) ([Alexander Kazakov](https://github.com/Akazz)).
* Usability improvement: more consistent `DateTime64` parsing: recognize the case when a unix timestamp with subsecond resolution is specified as a scaled integer (like `1111111111222` instead of `1111111111.222`); see the example after this list. This closes [#13194](https://github.com/ClickHouse/ClickHouse/issues/13194). [#21053](https://github.com/ClickHouse/ClickHouse/pull/21053) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Only merge sorted blocks on the initiator with `distributed_group_by_no_merge`. [#20882](https://github.com/ClickHouse/ClickHouse/pull/20882) ([Azat Khuzhin](https://github.com/azat)).
* When loading the config for a MySQL source, ClickHouse will now randomize the list of replicas with the same priority to ensure round-robin logic of picking the MySQL endpoint. This closes [#20629](https://github.com/ClickHouse/ClickHouse/issues/20629). [#20632](https://github.com/ClickHouse/ClickHouse/pull/20632) ([Alexander Kazakov](https://github.com/Akazz)).
* Function `reinterpretAs(x, Type)` renamed to `reinterpret(x, Type)`. [#20611](https://github.com/ClickHouse/ClickHouse/pull/20611) ([Maksim Kita](https://github.com/kitaisreal)).
* Support vhost for the RabbitMQ engine [#20576](https://github.com/ClickHouse/ClickHouse/issues/20576). [#20596](https://github.com/ClickHouse/ClickHouse/pull/20596) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Improved serialization for data types combined of Arrays and Tuples. Improved matching of enum data types to the protobuf enum type. Fixed serialization of the `Map` data type. Omitted values are now set by default. [#20506](https://github.com/ClickHouse/ClickHouse/pull/20506) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fixed a race between execution of distributed DDL tasks and cleanup of the DDL queue. Now a DDL task cannot be removed from ZooKeeper if there are active workers. Fixes [#20016](https://github.com/ClickHouse/ClickHouse/issues/20016). [#20448](https://github.com/ClickHouse/ClickHouse/pull/20448) ([tavplubix](https://github.com/tavplubix)).
* Make FQDN and other DNS-related functions work correctly in alpine images. [#20336](https://github.com/ClickHouse/ClickHouse/pull/20336) ([filimonov](https://github.com/filimonov)).
* Do not allow early constant folding of explicitly forbidden functions. [#20303](https://github.com/ClickHouse/ClickHouse/pull/20303) ([Azat Khuzhin](https://github.com/azat)).
* Implicit conversion from integer to `Decimal` could succeed even if the integer value did not fit into the `Decimal` type. Now it throws `ARGUMENT_OUT_OF_BOUND`. [#20232](https://github.com/ClickHouse/ClickHouse/pull/20232) ([tavplubix](https://github.com/tavplubix)).
* Lockless `SYSTEM FLUSH DISTRIBUTED`. [#20215](https://github.com/ClickHouse/ClickHouse/pull/20215) ([Azat Khuzhin](https://github.com/azat)).
* Normalize `count(constant)` and `sum(1)` to `count()`. This is needed for projection query routing. [#20175](https://github.com/ClickHouse/ClickHouse/pull/20175) ([Amos Bird](https://github.com/amosbird)).
* Support all native integer types in bitmap functions. [#20171](https://github.com/ClickHouse/ClickHouse/pull/20171) ([Amos Bird](https://github.com/amosbird)).
* Updated `CacheDictionary`, `ComplexCacheDictionary`, `SSDCacheDictionary`, `SSDComplexKeyDictionary` to use `LRUHashMap` as the underlying index. [#20164](https://github.com/ClickHouse/ClickHouse/pull/20164) ([Maksim Kita](https://github.com/kitaisreal)).
* The setting `access_management` is now configurable on startup by providing `CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT`; it defaults to disabled (`0`), which was the prior value. [#20139](https://github.com/ClickHouse/ClickHouse/pull/20139) ([Marquitos](https://github.com/sonirico)).
* Fix `toDateTime64(toDate()/toDateTime())` for `DateTime64`: implement `DateTime64` clamping to match `DateTime` behaviour. [#20131](https://github.com/ClickHouse/ClickHouse/pull/20131) ([Azat Khuzhin](https://github.com/azat)).
* Quota improvements: `SHOW TABLES` is now counted as one query in the quota calculations, not two. `SYSTEM` queries now consume quota. Fixed the calculation of an interval's end in quota consumption. [#20106](https://github.com/ClickHouse/ClickHouse/pull/20106) ([Vitaly Baranov](https://github.com/vitlibar)).
* Support `path IN (set)` expressions for the `system.zookeeper` table (see the example after this list). [#20105](https://github.com/ClickHouse/ClickHouse/pull/20105) ([小路](https://github.com/nicelulu)).
* Show full details of `MaterializeMySQL` tables in `system.tables`. [#20051](https://github.com/ClickHouse/ClickHouse/pull/20051) ([Stig Bakken](https://github.com/stigsb)).
* Fix a data race in the executable dictionary that was possible only on misuse (when the script returns data ignoring its input). [#20045](https://github.com/ClickHouse/ClickHouse/pull/20045) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* The value of the MYSQL_OPT_RECONNECT option can now be controlled by the `opt_reconnect` parameter in the config section of a MySQL replica. [#19998](https://github.com/ClickHouse/ClickHouse/pull/19998) ([Alexander Kazakov](https://github.com/Akazz)).
* If the user calls the `JSONExtract` function with the `Float32` type requested, allow an inaccurate conversion to the result type. For example, the number `0.1` in JSON is double precision and is not representable in Float32, but the user still wants to get it. Previous versions returned 0 for a non-Nullable type and NULL for a Nullable type to indicate that the conversion is imprecise. The logic was 100% correct, but it was surprising to users and led to questions. This closes [#13962](https://github.com/ClickHouse/ClickHouse/issues/13962). [#19960](https://github.com/ClickHouse/ClickHouse/pull/19960) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add conversion of the block structure for INSERT into Distributed tables if it does not match. [#19947](https://github.com/ClickHouse/ClickHouse/pull/19947) ([Azat Khuzhin](https://github.com/azat)).
* Improvement for the `system.distributed_ddl_queue` table: initialize MaxDDLEntryID to the last value after restarting. Before this PR, MaxDDLEntryID would remain zero until a new DDLTask was processed. [#19924](https://github.com/ClickHouse/ClickHouse/pull/19924) ([Amos Bird](https://github.com/amosbird)).
* Show `MaterializeMySQL` tables in `system.parts`. [#19770](https://github.com/ClickHouse/ClickHouse/pull/19770) ([Stig Bakken](https://github.com/stigsb)).
* Add a separate config directive for the `Buffer` profile. [#19721](https://github.com/ClickHouse/ClickHouse/pull/19721) ([Azat Khuzhin](https://github.com/azat)).
* Move conditions that are not related to JOIN to the WHERE clause. [#18720](https://github.com/ClickHouse/ClickHouse/issues/18720). [#19685](https://github.com/ClickHouse/ClickHouse/pull/19685) ([hexiaoting](https://github.com/hexiaoting)).
* Add the ability to throttle INSERT into Distributed based on the amount of pending bytes for async send (`bytes_to_delay_insert`/`max_delay_to_insert` and `bytes_to_throw_insert` settings for the `Distributed` engine have been added). [#19673](https://github.com/ClickHouse/ClickHouse/pull/19673) ([Azat Khuzhin](https://github.com/azat)).
* Fix some rare cases where write errors could be ignored in destructors. [#19451](https://github.com/ClickHouse/ClickHouse/pull/19451) ([Azat Khuzhin](https://github.com/azat)).
* Print inline frames in stack traces for fatal errors. [#19317](https://github.com/ClickHouse/ClickHouse/pull/19317) ([Ivan](https://github.com/abyss7)).
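
Two quick illustrations of the improvements above; the ZooKeeper paths are hypothetical and the equality check is a sketch of the intended behavior:

```sql
-- DateTime64 parsing: a scaled-integer unix timestamp should now be
-- recognized as equivalent to the decimal form.
SELECT toDateTime64('1111111111222', 3) = toDateTime64('1111111111.222', 3);

-- system.zookeeper: 'path IN (set)' fetches children of several nodes
-- in one query instead of one query per path.
SELECT name, path
FROM system.zookeeper
WHERE path IN ('/clickhouse', '/clickhouse/task_queue');
```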

#### Bug Fix

* Fix redundant reconnects to ZooKeeper and the possibility of two active sessions for a single clickhouse server. Both problems were introduced in #14678. [#21264](https://github.com/ClickHouse/ClickHouse/pull/21264) ([alesapin](https://github.com/alesapin)).
* Fix error `Bad cast from type ... to DB::ColumnLowCardinality` while inserting into a table with a `LowCardinality` column from the `Values` format. Fixes #21140. [#21357](https://github.com/ClickHouse/ClickHouse/pull/21357) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a deadlock in `ALTER DELETE` mutations for non-replicated MergeTree table engines when the predicate contains the table itself. Fixes [#20558](https://github.com/ClickHouse/ClickHouse/issues/20558). [#21477](https://github.com/ClickHouse/ClickHouse/pull/21477) ([alesapin](https://github.com/alesapin)).
* Fix SIGSEGV for distributed queries on failures. [#21434](https://github.com/ClickHouse/ClickHouse/pull/21434) ([Azat Khuzhin](https://github.com/azat)).
* Now `ALTER MODIFY COLUMN` queries will correctly affect changes in the partition key, skip indices, TTLs, and so on. Fixes [#13675](https://github.com/ClickHouse/ClickHouse/issues/13675). [#21334](https://github.com/ClickHouse/ClickHouse/pull/21334) ([alesapin](https://github.com/alesapin)).
* Fix a bug with `join_use_nulls` and joining `TOTALS` from subqueries. This closes [#19362](https://github.com/ClickHouse/ClickHouse/issues/19362) and [#21137](https://github.com/ClickHouse/ClickHouse/issues/21137). [#21248](https://github.com/ClickHouse/ClickHouse/pull/21248) ([vdimir](https://github.com/vdimir)).
* Fix crash in `EXPLAIN` for queries with `UNION`. Fixes [#20876](https://github.com/ClickHouse/ClickHouse/issues/20876), [#21170](https://github.com/ClickHouse/ClickHouse/issues/21170). [#21246](https://github.com/ClickHouse/ClickHouse/pull/21246) ([flynn](https://github.com/ucasFL)).
* Mutations are now allowed only for table engines that support them (the MergeTree family, Memory, MaterializedView). Other engines will report a clearer error. Fixes [#21168](https://github.com/ClickHouse/ClickHouse/issues/21168). [#21183](https://github.com/ClickHouse/ClickHouse/pull/21183) ([alesapin](https://github.com/alesapin)).
* Fixes [#21112](https://github.com/ClickHouse/ClickHouse/issues/21112). Fixed a bug that could cause duplicates with an insert query (if one of the callbacks came a little too late). [#21138](https://github.com/ClickHouse/ClickHouse/pull/21138) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix `input_format_null_as_default` to take effect when types are nullable. This fixes [#21116](https://github.com/ClickHouse/ClickHouse/issues/21116). [#21121](https://github.com/ClickHouse/ClickHouse/pull/21121) ([Amos Bird](https://github.com/amosbird)).
* Fix a bug related to casting `Tuple` to `Map`. Closes [#21029](https://github.com/ClickHouse/ClickHouse/issues/21029). [#21120](https://github.com/ClickHouse/ClickHouse/pull/21120) ([hexiaoting](https://github.com/hexiaoting)).
* Fix a metadata leak when a `Replicated*MergeTree` table with a custom (non-default) ZooKeeper cluster is dropped. [#21119](https://github.com/ClickHouse/ClickHouse/pull/21119) ([fastio](https://github.com/fastio)).
* Fix a type mismatch issue when using `LowCardinality` keys in `joinGet`. This fixes [#21114](https://github.com/ClickHouse/ClickHouse/issues/21114). [#21117](https://github.com/ClickHouse/ClickHouse/pull/21117) ([Amos Bird](https://github.com/amosbird)).
* Fix `default_replica_path` and `default_replica_name` values being ignored for `Replicated*MergeTree` engines when the engine needs to specify other parameters. [#21060](https://github.com/ClickHouse/ClickHouse/pull/21060) ([mxzlxy](https://github.com/mxzlxy)).
* An out-of-bound memory access was possible when formatting a specifically crafted out-of-range value of type `DateTime64`. This closes [#20494](https://github.com/ClickHouse/ClickHouse/issues/20494). This closes [#20543](https://github.com/ClickHouse/ClickHouse/issues/20543). [#21023](https://github.com/ClickHouse/ClickHouse/pull/21023) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Block parallel insertions into storage join. [#21009](https://github.com/ClickHouse/ClickHouse/pull/21009) ([vdimir](https://github.com/vdimir)).
* Fixed behavior where `ALTER MODIFY COLUMN` created a mutation that would knowingly fail. [#21007](https://github.com/ClickHouse/ClickHouse/pull/21007) ([Anton Popov](https://github.com/CurtizJ)).
* Closes [#9969](https://github.com/ClickHouse/ClickHouse/issues/9969). Fixed a Brotli HTTP compression error, which was reproduced for large data sizes with a slightly complicated structure and JSON output format. Update Brotli to the latest version to include the fix for "rare access to uninitialized data in ring-buffer". [#20991](https://github.com/ClickHouse/ClickHouse/pull/20991) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix 'Empty task was returned from async task queue' on query cancellation. [#20881](https://github.com/ClickHouse/ClickHouse/pull/20881) ([Azat Khuzhin](https://github.com/azat)).
* The `USE database;` query did not work when using a MySQL 5.7 client to connect to the ClickHouse server; it is fixed. Fixes [#18926](https://github.com/ClickHouse/ClickHouse/issues/18926). [#20878](https://github.com/ClickHouse/ClickHouse/pull/20878) ([tavplubix](https://github.com/tavplubix)).
* Fix usage of the `-Distinct` combinator with the `-State` combinator in aggregate functions. [#20866](https://github.com/ClickHouse/ClickHouse/pull/20866) ([Anton Popov](https://github.com/CurtizJ)).
* Fix subqueries with UNION DISTINCT and a LIMIT clause. Closes [#20597](https://github.com/ClickHouse/ClickHouse/issues/20597). [#20610](https://github.com/ClickHouse/ClickHouse/pull/20610) ([flynn](https://github.com/ucasFL)).
* Fixed inconsistent behavior of dictionaries in queries that look for absent keys in the dictionary. [#20578](https://github.com/ClickHouse/ClickHouse/pull/20578) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix the number of threads for scalar subqueries and subqueries for index (after [#19007](https://github.com/ClickHouse/ClickHouse/issues/19007) a single thread was always used). Fixes [#20457](https://github.com/ClickHouse/ClickHouse/issues/20457), [#20512](https://github.com/ClickHouse/ClickHouse/issues/20512). [#20550](https://github.com/ClickHouse/ClickHouse/pull/20550) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a crash that could happen if an unknown packet was received from a remote query (introduced in [#17868](https://github.com/ClickHouse/ClickHouse/issues/17868)). [#20547](https://github.com/ClickHouse/ClickHouse/pull/20547) ([Azat Khuzhin](https://github.com/azat)).
* Add proper checks while parsing directory names for async INSERT (fixes a SIGSEGV). [#20498](https://github.com/ClickHouse/ClickHouse/pull/20498) ([Azat Khuzhin](https://github.com/azat)).
* Fix function `transform` not working properly for floating-point keys. Closes [#20460](https://github.com/ClickHouse/ClickHouse/issues/20460). [#20479](https://github.com/ClickHouse/ClickHouse/pull/20479) ([flynn](https://github.com/ucasFL)).
* Fix an infinite loop when propagating WITH aliases to subqueries. This fixes [#20388](https://github.com/ClickHouse/ClickHouse/issues/20388). [#20476](https://github.com/ClickHouse/ClickHouse/pull/20476) ([Amos Bird](https://github.com/amosbird)).
* Fix abnormal server termination when an HTTP client goes away. [#20464](https://github.com/ClickHouse/ClickHouse/pull/20464) ([Azat Khuzhin](https://github.com/azat)).
* Fix `LOGICAL_ERROR` for `join_use_nulls=1` when the JOIN contains a const from SELECT. [#20461](https://github.com/ClickHouse/ClickHouse/pull/20461) ([Azat Khuzhin](https://github.com/azat)).
* Check if the table function `view` is used in an expression list and throw an error. This fixes [#20342](https://github.com/ClickHouse/ClickHouse/issues/20342). [#20350](https://github.com/ClickHouse/ClickHouse/pull/20350) ([Amos Bird](https://github.com/amosbird)).
* Avoid an invalid dereference in the RANGE_HASHED() dictionary. [#20345](https://github.com/ClickHouse/ClickHouse/pull/20345) ([Azat Khuzhin](https://github.com/azat)).
* Fix a null dereference with `join_use_nulls=1`. [#20344](https://github.com/ClickHouse/ClickHouse/pull/20344) ([Azat Khuzhin](https://github.com/azat)).
* Fix an incorrect result of binary operations between two constant decimals of different scale. Fixes [#20283](https://github.com/ClickHouse/ClickHouse/issues/20283). [#20339](https://github.com/ClickHouse/ClickHouse/pull/20339) ([Maksim Kita](https://github.com/kitaisreal)).
* Fix too frequent retries of failed background tasks for the `ReplicatedMergeTree` table engine family. This could lead to overly verbose logging and increased CPU load. Fixes [#20203](https://github.com/ClickHouse/ClickHouse/issues/20203). [#20335](https://github.com/ClickHouse/ClickHouse/pull/20335) ([alesapin](https://github.com/alesapin)).
* Restrict `DROP` or `RENAME` of the version column of `*CollapsingMergeTree` and `ReplacingMergeTree` table engines. [#20300](https://github.com/ClickHouse/ClickHouse/pull/20300) ([alesapin](https://github.com/alesapin)).
* Fixed the behavior where, in the case of broken JSON, we tried to read the whole file into memory, which led to an exception from the allocator. Fixes [#19719](https://github.com/ClickHouse/ClickHouse/issues/19719). [#20286](https://github.com/ClickHouse/ClickHouse/pull/20286) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix an exception during vertical merge for `MergeTree` table engine families that do not allow vertical merges. Fixes [#20259](https://github.com/ClickHouse/ClickHouse/issues/20259). [#20279](https://github.com/ClickHouse/ClickHouse/pull/20279) ([alesapin](https://github.com/alesapin)).
* Fix a rare server crash on config reload during shutdown. Fixes [#19689](https://github.com/ClickHouse/ClickHouse/issues/19689). [#20224](https://github.com/ClickHouse/ClickHouse/pull/20224) ([alesapin](https://github.com/alesapin)).
* Fix CTEs when used in INSERT SELECT. This fixes [#20187](https://github.com/ClickHouse/ClickHouse/issues/20187), fixes [#20195](https://github.com/ClickHouse/ClickHouse/issues/20195). [#20211](https://github.com/ClickHouse/ClickHouse/pull/20211) ([Amos Bird](https://github.com/amosbird)).
* Fixes [#19314](https://github.com/ClickHouse/ClickHouse/issues/19314). [#20156](https://github.com/ClickHouse/ClickHouse/pull/20156) ([Ivan](https://github.com/abyss7)).
* Fix the `toMinute` function to handle special timezones correctly. [#20149](https://github.com/ClickHouse/ClickHouse/pull/20149) ([keenwolf](https://github.com/keen-wolf)).
* Fix a server crash after a query with an `if` function whose then/else branches result in a `Tuple` type containing an `Array` or another complex type. Fixes [#18356](https://github.com/ClickHouse/ClickHouse/issues/18356). [#20133](https://github.com/ClickHouse/ClickHouse/pull/20133) ([alesapin](https://github.com/alesapin)).
* The `MongoDB` table engine now establishes a connection only when it is going to read data. `ATTACH TABLE` will no longer try to connect. [#20110](https://github.com/ClickHouse/ClickHouse/pull/20110) ([Vitaly Baranov](https://github.com/vitlibar)).
* Bugfix in StorageJoin. [#20079](https://github.com/ClickHouse/ClickHouse/pull/20079) ([vdimir](https://github.com/vdimir)).
* Fix the case when, while calculating the modulo of division of a negative number by a small divisor, the resulting data type was not large enough to accommodate the negative result. This closes [#20052](https://github.com/ClickHouse/ClickHouse/issues/20052). [#20067](https://github.com/ClickHouse/ClickHouse/pull/20067) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* MaterializeMySQL: fix replication for statements that update several tables. [#20066](https://github.com/ClickHouse/ClickHouse/pull/20066) ([Håvard Kvålen](https://github.com/havardk)).
* Prevent "Connection refused" in docker during initialization script execution. [#20012](https://github.com/ClickHouse/ClickHouse/pull/20012) ([filimonov](https://github.com/filimonov)).
* `EmbeddedRocksDB` is an experimental storage. Fix an issue with lack of proper type checking. Simplified code. This closes [#19967](https://github.com/ClickHouse/ClickHouse/issues/19967). [#19972](https://github.com/ClickHouse/ClickHouse/pull/19972) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix a segfault in the function `fromModifiedJulianDay` when the argument type is `Nullable(T)` for any integral type other than Int32. [#19959](https://github.com/ClickHouse/ClickHouse/pull/19959) ([PHO](https://github.com/depressed-pho)).
* BloomFilter index crash fix. Fixes [#19757](https://github.com/ClickHouse/ClickHouse/issues/19757). [#19884](https://github.com/ClickHouse/ClickHouse/pull/19884) ([Maksim Kita](https://github.com/kitaisreal)).
* A deadlock was possible if `system.text_log` was enabled. This fixes [#19874](https://github.com/ClickHouse/ClickHouse/issues/19874). [#19875](https://github.com/ClickHouse/ClickHouse/pull/19875) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix starting the server with tables having default expressions containing `dictGet()`. Allow getting the return type of `dictGet()` without loading the dictionary. [#19805](https://github.com/ClickHouse/ClickHouse/pull/19805) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix a clickhouse-client abort exception while executing only `select`. [#19790](https://github.com/ClickHouse/ClickHouse/pull/19790) ([taiyang-li](https://github.com/taiyang-li)).
* Fix a bug where moving pieces to the destination table could fail when launching multiple clickhouse-copiers. [#19743](https://github.com/ClickHouse/ClickHouse/pull/19743) ([madianjun](https://github.com/mdianjun)).
* A background thread executing `ON CLUSTER` queries could hang waiting for a dropped replicated table to do something; it is fixed. [#19684](https://github.com/ClickHouse/ClickHouse/pull/19684) ([yiguolei](https://github.com/yiguolei)).

#### Build/Testing/Packaging Improvement

* Allow building ClickHouse with AVX-2 enabled globally. It gives slight performance benefits on modern CPUs. Not recommended for production and will not be supported as an official build for now. [#20180](https://github.com/ClickHouse/ClickHouse/pull/20180) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix some of the issues found by Coverity. See [#19964](https://github.com/ClickHouse/ClickHouse/issues/19964). [#20010](https://github.com/ClickHouse/ClickHouse/pull/20010) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Allow starting up with a modified binary under gdb. In previous versions, if you set a breakpoint in gdb before start, the server would refuse to start up due to a failed integrity check. [#21258](https://github.com/ClickHouse/ClickHouse/pull/21258) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add a test for different compression methods in Kafka. [#21111](https://github.com/ClickHouse/ClickHouse/pull/21111) ([filimonov](https://github.com/filimonov)).
* Fixed a port clash from the test_storage_kerberized_hdfs test. [#19974](https://github.com/ClickHouse/ClickHouse/pull/19974) ([Ilya Yatsishin](https://github.com/qoega)).
* Print `stdout` and `stderr` to the log when failing to start docker in integration tests. Before this PR there was a very short error message in this case, which did not help to investigate problems. [#20631](https://github.com/ClickHouse/ClickHouse/pull/20631) ([Vitaly Baranov](https://github.com/vitlibar)).


## ClickHouse release 21.2

### ClickHouse release v21.2.2.8-stable, 2021-02-07
@ -169,7 +169,7 @@ endif ()
 set (CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -rdynamic")

 if (OS_LINUX)
-    find_program (OBJCOPY_PATH NAMES "llvm-objcopy" "llvm-objcopy-11" "llvm-objcopy-10" "llvm-objcopy-9" "llvm-objcopy-8" "objcopy")
+    find_program (OBJCOPY_PATH NAMES "llvm-objcopy" "llvm-objcopy-12" "llvm-objcopy-11" "llvm-objcopy-10" "llvm-objcopy-9" "llvm-objcopy-8" "objcopy")
     if (OBJCOPY_PATH)
         message(STATUS "Using objcopy: ${OBJCOPY_PATH}.")

@ -331,7 +331,7 @@ if (COMPILER_CLANG)
     endif ()

     # Always prefer llvm tools when using clang. For instance, we cannot use GNU ar when llvm LTO is enabled
-    find_program (LLVM_AR_PATH NAMES "llvm-ar" "llvm-ar-11" "llvm-ar-10" "llvm-ar-9" "llvm-ar-8")
+    find_program (LLVM_AR_PATH NAMES "llvm-ar" "llvm-ar-12" "llvm-ar-11" "llvm-ar-10" "llvm-ar-9" "llvm-ar-8")

     if (LLVM_AR_PATH)
         message(STATUS "Using llvm-ar: ${LLVM_AR_PATH}.")

@ -340,7 +340,7 @@ if (COMPILER_CLANG)
         message(WARNING "Cannot find llvm-ar. System ar will be used instead. It does not work with ThinLTO.")
     endif ()

-    find_program (LLVM_RANLIB_PATH NAMES "llvm-ranlib" "llvm-ranlib-11" "llvm-ranlib-10" "llvm-ranlib-9" "llvm-ranlib-8")
+    find_program (LLVM_RANLIB_PATH NAMES "llvm-ranlib" "llvm-ranlib-12" "llvm-ranlib-11" "llvm-ranlib-10" "llvm-ranlib-9" "llvm-ranlib-8")

     if (LLVM_RANLIB_PATH)
         message(STATUS "Using llvm-ranlib: ${LLVM_RANLIB_PATH}.")
@ -76,6 +76,16 @@
 #    endif
 #endif

+#if !defined(UNDEFINED_BEHAVIOR_SANITIZER)
+#    if defined(__has_feature)
+#        if __has_feature(undefined_behavior_sanitizer)
+#            define UNDEFINED_BEHAVIOR_SANITIZER 1
+#        endif
+#    elif defined(__UNDEFINED_BEHAVIOR_SANITIZER__)
+#        define UNDEFINED_BEHAVIOR_SANITIZER 1
+#    endif
+#endif
+
 #if defined(ADDRESS_SANITIZER)
 #    define BOOST_USE_ASAN 1
 #    define BOOST_USE_UCONTEXT 1
@ -249,15 +249,15 @@ struct integer<Bits, Signed>::_impl
             return;
         }

-        const T alpha = t / max_int;
+        const T alpha = t / static_cast<T>(max_int);

-        if (alpha <= max_int)
+        if (alpha <= static_cast<T>(max_int))
             self = static_cast<uint64_t>(alpha);
         else // max(double) / 2^64 will surely contain less than 52 precision bits, so speed up computations.
             set_multiplier<double>(self, alpha);

         self *= max_int;
-        self += static_cast<uint64_t>(t - alpha * max_int); // += b_i
+        self += static_cast<uint64_t>(t - alpha * static_cast<T>(max_int)); // += b_i
     }

     constexpr static void wide_integer_from_bultin(integer<Bits, Signed>& self, double rhs) noexcept {

@ -275,7 +275,7 @@ struct integer<Bits, Signed>::_impl
             "On your system long double has less than 64 precision bits,"
             "which may result in UB when initializing double from int64_t");

-        if ((rhs > 0 && rhs < max_int) || (rhs < 0 && rhs > min_int))
+        if ((rhs > 0 && rhs < static_cast<long double>(max_int)) || (rhs < 0 && rhs > static_cast<long double>(min_int)))
         {
             self = static_cast<int64_t>(rhs);
             return;
@ -174,9 +174,11 @@ Pool::Entry Pool::tryGet()
         /// Fixme: There is a race condition here b/c we do not synchronize with Pool::Entry's copy-assignment operator
         if (connection_ptr->ref_count == 0)
         {
-            Entry res(connection_ptr, this);
-            if (res.tryForceConnected())  /// Tries to reestablish connection as well
-                return res;
+            {
+                Entry res(connection_ptr, this);
+                if (res.tryForceConnected())  /// Tries to reestablish connection as well
+                    return res;
+            }
+
+            logger.debug("(%s): Idle connection to MySQL server cannot be recovered, dropping it.", getDescription());
@ -4,5 +4,5 @@
 add_library(readpassphrase readpassphrase.c)

 set_target_properties(readpassphrase PROPERTIES LINKER_LANGUAGE C)
-target_compile_options(readpassphrase PRIVATE -Wno-unused-result -Wno-reserved-id-macro)
+target_compile_options(readpassphrase PRIVATE -Wno-unused-result -Wno-reserved-id-macro -Wno-disabled-macro-expansion)
 target_include_directories(readpassphrase PUBLIC .)
@ -94,7 +94,7 @@ restart:
 	if (input != STDIN_FILENO && tcgetattr(input, &oterm) == 0) {
 		memcpy(&term, &oterm, sizeof(term));
 		if (!(flags & RPP_ECHO_ON))
-			term.c_lflag &= ~(ECHO | ECHONL);
+			term.c_lflag &= ~((unsigned int) (ECHO | ECHONL));
 #ifdef VSTATUS
 		if (term.c_cc[VSTATUS] != _POSIX_VDISABLE)
 			term.c_cc[VSTATUS] = _POSIX_VDISABLE;
@ -5,8 +5,8 @@ if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/krb5/README")
     set (ENABLE_KRB5 0)
 endif ()

-if (NOT CMAKE_SYSTEM_NAME MATCHES "Linux")
-    message (WARNING "krb5 disabled in non-Linux environments")
+if (NOT CMAKE_SYSTEM_NAME MATCHES "Linux" AND NOT (CMAKE_SYSTEM_NAME MATCHES "Darwin" AND NOT CMAKE_CROSSCOMPILING))
+    message (WARNING "krb5 disabled in non-Linux and non-native-Darwin environments")
     set (ENABLE_KRB5 0)
 endif ()
@ -75,8 +75,13 @@ if (OS_LINUX AND NOT LINKER_NAME)
 endif ()

 if (LINKER_NAME)
-    set (CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -fuse-ld=${LINKER_NAME}")
-    set (CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -fuse-ld=${LINKER_NAME}")
+    if (COMPILER_CLANG AND (CMAKE_CXX_COMPILER_VERSION VERSION_GREATER 12.0.0 OR CMAKE_CXX_COMPILER_VERSION VERSION_EQUAL 12.0.0))
+        set (CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} --ld-path=${LINKER_NAME}")
+        set (CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} --ld-path=${LINKER_NAME}")
+    else ()
+        set (CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -fuse-ld=${LINKER_NAME}")
+        set (CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -fuse-ld=${LINKER_NAME}")
+    endif ()

     message(STATUS "Using custom linker by name: ${LINKER_NAME}")
 endif ()
contrib/CMakeLists.txt (vendored): 1 change
@ -32,6 +32,7 @@ endif()

 set_property(DIRECTORY PROPERTY EXCLUDE_FROM_ALL 1)

+add_subdirectory (abseil-cpp-cmake)
 add_subdirectory (antlr4-runtime-cmake)
 add_subdirectory (boost-cmake)
 add_subdirectory (cctz-cmake)
contrib/NuRaft (vendored): 2 changes
@ -1 +1 @@
-Subproject commit 9a0d78de4b90546368d954b6434f0e9a823e8d80
+Subproject commit 3d3683e77753cfe015a05fae95ddf418e19f59e1
contrib/abseil-cpp-cmake/CMakeLists.txt (new file): 18 lines
@ -0,0 +1,18 @@
set(ABSL_ROOT_DIR "${ClickHouse_SOURCE_DIR}/contrib/abseil-cpp")
if(NOT EXISTS "${ABSL_ROOT_DIR}/CMakeLists.txt")
  message(FATAL_ERROR " submodule third_party/abseil-cpp is missing. To fix try run: \n git submodule update --init --recursive")
endif()
add_subdirectory("${ABSL_ROOT_DIR}" "${ClickHouse_BINARY_DIR}/contrib/abseil-cpp")

add_library(abseil_swiss_tables INTERFACE)

target_link_libraries(abseil_swiss_tables INTERFACE
  absl::flat_hash_map
  absl::flat_hash_set
)

get_target_property(FLAT_HASH_MAP_INCLUDE_DIR absl::flat_hash_map INTERFACE_INCLUDE_DIRECTORIES)
target_include_directories (abseil_swiss_tables SYSTEM BEFORE INTERFACE ${FLAT_HASH_MAP_INCLUDE_DIR})

get_target_property(FLAT_HASH_SET_INCLUDE_DIR absl::flat_hash_set INTERFACE_INCLUDE_DIRECTORIES)
target_include_directories (abseil_swiss_tables SYSTEM BEFORE INTERFACE ${FLAT_HASH_SET_INCLUDE_DIR})
contrib/boringssl (vendored): 2 changes
@ -1 +1 @@
-Subproject commit 8b2bf912ba04823cfe9e7e8f5bb60cb7f6252449
+Subproject commit fd9ce1a0406f571507068b9555d0b545b8a18332
contrib/cassandra (vendored): 2 changes
@ -1 +1 @@
-Subproject commit b446d7eb68e6962f431e2b3771313bfe9a2bbd93
+Subproject commit c097fb5c7e63cc430016d9a8b240d8e63fbefa52
contrib/googletest (vendored): 2 changes
@ -1 +1 @@
-Subproject commit 356f2d264a485db2fcc50ec1c672e0d37b6cb39b
+Subproject commit e7e591764baba0a0c3c9ad0014430e7a27331d16
@ -39,11 +39,6 @@ set(_gRPC_SSL_LIBRARIES ${OPENSSL_LIBRARIES})

 # Use abseil-cpp from ClickHouse contrib, not from gRPC third_party.
 set(gRPC_ABSL_PROVIDER "clickhouse" CACHE STRING "" FORCE)
-set(ABSL_ROOT_DIR "${ClickHouse_SOURCE_DIR}/contrib/abseil-cpp")
-if(NOT EXISTS "${ABSL_ROOT_DIR}/CMakeLists.txt")
-  message(FATAL_ERROR " grpc: submodule third_party/abseil-cpp is missing. To fix try run: \n git submodule update --init --recursive")
-endif()
-add_subdirectory("${ABSL_ROOT_DIR}" "${ClickHouse_BINARY_DIR}/contrib/abseil-cpp")

 # Choose to build static or shared library for c-ares.
 if (MAKE_STATIC_LIBRARIES)
@ -474,13 +474,6 @@ add_custom_command(
     WORKING_DIRECTORY "${KRB5_SOURCE_DIR}/util/et"
 )

-add_custom_target(
-    CREATE_COMPILE_ET ALL
-    DEPENDS ${KRB5_SOURCE_DIR}/util/et/compile_et
-    COMMENT "creating compile_et"
-    VERBATIM
-)
-
 file(GLOB_RECURSE ET_FILES
     "${KRB5_SOURCE_DIR}/*.et"
 )

@ -531,7 +524,7 @@ add_custom_command(


 add_custom_target(
-    ERROR_MAP_H ALL
+    ERROR_MAP_H
     DEPENDS ${KRB5_SOURCE_DIR}/lib/gssapi/krb5/error_map.h
     COMMENT "generating error_map.h"
     VERBATIM

@ -544,14 +537,14 @@ add_custom_command(
 )

 add_custom_target(
-    ERRMAP_H ALL
+    ERRMAP_H
     DEPENDS ${KRB5_SOURCE_DIR}/lib/gssapi/generic/errmap.h
     COMMENT "generating errmap.h"
     VERBATIM
 )

 add_custom_target(
-    KRB_5_H ALL
+    KRB_5_H
     DEPENDS ${CMAKE_CURRENT_BINARY_DIR}/include/krb5/krb5.h
     COMMENT "generating krb5.h"
     VERBATIM

@ -564,15 +557,19 @@ add_dependencies(
     ERRMAP_H
     ERROR_MAP_H
     KRB_5_H
 )

 preprocess_et(processed_et_files ${ET_FILES})

 add_custom_command(
     OUTPUT ${KRB5_SOURCE_DIR}/lib/gssapi/generic/errmap.h
     COMMAND perl -w -I../../../util ../../../util/gen.pl bimap errmap.h NAME=mecherrmap LEFT=OM_uint32 RIGHT=struct\ mecherror LEFTPRINT=print_OM_uint32 RIGHTPRINT=mecherror_print LEFTCMP=cmp_OM_uint32 RIGHTCMP=mecherror_cmp
     WORKING_DIRECTORY "${KRB5_SOURCE_DIR}/lib/gssapi/generic"
 )
+if(CMAKE_SYSTEM_NAME MATCHES "Darwin")
+    add_custom_command(
+        OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/include_private/kcmrpc.h ${CMAKE_CURRENT_BINARY_DIR}/include_private/kcmrpc.c
+        COMMAND mig -header kcmrpc.h -user kcmrpc.c -sheader /dev/null -server /dev/null -I${KRB5_SOURCE_DIR}/lib/krb5/ccache ${KRB5_SOURCE_DIR}/lib/krb5/ccache/kcmrpc.defs
+        WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}/include_private"
+    )
+
+    list(APPEND ALL_SRCS ${CMAKE_CURRENT_BINARY_DIR}/include_private/kcmrpc.c)
+endif()

 target_sources(${KRB5_LIBRARY} PRIVATE
     ${ALL_SRCS}

@ -604,6 +601,25 @@ file(COPY ${KRB5_SOURCE_DIR}/util/et/com_err.h
     DESTINATION ${CMAKE_CURRENT_BINARY_DIR}/include/
 )

+file(COPY ${CMAKE_CURRENT_SOURCE_DIR}/osconf.h
+    DESTINATION ${CMAKE_CURRENT_BINARY_DIR}/include_private/
+)
+
+file(COPY ${CMAKE_CURRENT_SOURCE_DIR}/profile.h
+    DESTINATION ${CMAKE_CURRENT_BINARY_DIR}/include_private/
+)
+
+string(TOLOWER "${CMAKE_SYSTEM_NAME}" _system_name)
+
+file(COPY ${CMAKE_CURRENT_SOURCE_DIR}/autoconf_${_system_name}.h
+    DESTINATION ${CMAKE_CURRENT_BINARY_DIR}/include_private/
+)
+
+file(RENAME
+    ${CMAKE_CURRENT_BINARY_DIR}/include_private/autoconf_${_system_name}.h
+    ${CMAKE_CURRENT_BINARY_DIR}/include_private/autoconf.h
+)

 file(MAKE_DIRECTORY
     ${CMAKE_CURRENT_BINARY_DIR}/include/krb5
 )

@ -633,7 +649,7 @@ target_include_directories(${KRB5_LIBRARY} PUBLIC
 )

 target_include_directories(${KRB5_LIBRARY} PRIVATE
-    ${CMAKE_CURRENT_SOURCE_DIR} #for autoconf.h
+    ${CMAKE_CURRENT_BINARY_DIR}/include_private # For autoconf.h and other generated headers.
     ${KRB5_SOURCE_DIR}
     ${KRB5_SOURCE_DIR}/include
     ${KRB5_SOURCE_DIR}/lib/gssapi/mechglue
contrib/krb5-cmake/autoconf_darwin.h (new file): 764 lines
@ -0,0 +1,764 @@
/* include/autoconf.h. Generated from autoconf.h.in by configure. */
/* include/autoconf.h.in. Generated from configure.in by autoheader. */


#ifndef KRB5_AUTOCONF_H
#define KRB5_AUTOCONF_H


/* Define if AES-NI support is enabled */
/* #undef AESNI */

/* Define if socket can't be bound to 0.0.0.0 */
/* #undef BROKEN_STREAMS_SOCKETS */

/* Define if va_list objects can be simply copied by assignment. */
/* #undef CAN_COPY_VA_LIST */

/* Define to reduce code size even if it means more cpu usage */
/* #undef CONFIG_SMALL */

/* Define if __attribute__((constructor)) works */
#define CONSTRUCTOR_ATTR_WORKS 1

/* Define to default ccache name */
#define DEFCCNAME "FILE:/tmp/krb5cc_%{uid}"

/* Define to default client keytab name */
#define DEFCKTNAME "FILE:/etc/krb5/user/%{euid}/client.keytab"

/* Define to default keytab name */
#define DEFKTNAME "FILE:/etc/krb5.keytab"

/* Define if library initialization should be delayed until first use */
#define DELAY_INITIALIZER 1

/* Define if __attribute__((destructor)) works */
#define DESTRUCTOR_ATTR_WORKS 1

/* Define to disable PKINIT plugin support */
#define DISABLE_PKINIT 1

/* Define if LDAP KDB support within the Kerberos library (mainly ASN.1 code)
   should be enabled. */
/* #undef ENABLE_LDAP */

/* Define if translation functions should be used. */
/* #undef ENABLE_NLS */

/* Define if thread support enabled */
#define ENABLE_THREADS 1

/* Define as return type of endrpcent */
#define ENDRPCENT_TYPE void

/* Define if Fortuna PRNG is selected */
#define FORTUNA 1

/* Define to the type of elements in the array set by `getgroups'. Usually
   this is either `int' or `gid_t'. */
#define GETGROUPS_T gid_t

/* Define if gethostbyname_r returns int rather than struct hostent * */
/* #undef GETHOSTBYNAME_R_RETURNS_INT */

/* Type of getpeername second argument. */
#define GETPEERNAME_ARG3_TYPE GETSOCKNAME_ARG3_TYPE

/* Define if getpwnam_r exists but takes only 4 arguments (e.g., POSIX draft 6
   implementations like some Solaris releases). */
/* #undef GETPWNAM_R_4_ARGS */

/* Define if getpwnam_r returns an int */
#define GETPWNAM_R_RETURNS_INT 1

/* Define if getpwuid_r exists but takes only 4 arguments (e.g., POSIX draft 6
   implementations like some Solaris releases). */
/* #undef GETPWUID_R_4_ARGS */

/* Define if getservbyname_r returns int rather than struct servent * */
/* #undef GETSERVBYNAME_R_RETURNS_INT */

/* Type of pointer target for argument 3 to getsockname */
#define GETSOCKNAME_ARG3_TYPE socklen_t

/* Define if gmtime_r returns int instead of struct tm pointer, as on old
   HP-UX systems. */
/* #undef GMTIME_R_RETURNS_INT */

/* Define if va_copy macro or function is available. */
#define HAS_VA_COPY 1

/* Define to 1 if you have the `access' function. */
#define HAVE_ACCESS 1

/* Define to 1 if you have the <alloca.h> header file. */
#define HAVE_ALLOCA_H 1

/* Define to 1 if you have the <arpa/inet.h> header file. */
#define HAVE_ARPA_INET_H 1

/* Define to 1 if you have the `bswap16' function. */
/* #undef HAVE_BSWAP16 */

/* Define to 1 if you have the `bswap64' function. */
/* #undef HAVE_BSWAP64 */

/* Define to 1 if bswap_16 is available via byteswap.h */
/* #undef HAVE_BSWAP_16 */

/* Define to 1 if bswap_64 is available via byteswap.h */
/* #undef HAVE_BSWAP_64 */

/* Define if bt_rseq is available, for recursive btree traversal. */
#define HAVE_BT_RSEQ 1

/* Define to 1 if you have the <byteswap.h> header file. */
/* #undef HAVE_BYTESWAP_H */

/* Define to 1 if you have the `chmod' function. */
#define HAVE_CHMOD 1

/* Define if cmocka library is available. */
/* #undef HAVE_CMOCKA */

/* Define to 1 if you have the `compile' function. */
/* #undef HAVE_COMPILE */

/* Define if com_err has compatible gettext support */
#define HAVE_COM_ERR_INTL 1

/* Define to 1 if you have the <cpuid.h> header file. */
/* #undef HAVE_CPUID_H */

/* Define to 1 if you have the `daemon' function. */
#define HAVE_DAEMON 1

/* Define to 1 if you have the declaration of `strerror_r', and to 0 if you
   don't. */
#define HAVE_DECL_STRERROR_R 1

/* Define to 1 if you have the <dirent.h> header file, and it defines `DIR'.
   */
#define HAVE_DIRENT_H 1

/* Define to 1 if you have the <dlfcn.h> header file. */
#define HAVE_DLFCN_H 1

/* Define to 1 if you have the `dn_skipname' function. */
#define HAVE_DN_SKIPNAME 1

/* Define to 1 if you have the <endian.h> header file. */
/* #undef HAVE_ENDIAN_H */

/* Define to 1 if you have the <errno.h> header file. */
#define HAVE_ERRNO_H 1

/* Define to 1 if you have the `fchmod' function. */
#define HAVE_FCHMOD 1

/* Define to 1 if you have the <fcntl.h> header file. */
#define HAVE_FCNTL_H 1

/* Define to 1 if you have the `flock' function. */
#define HAVE_FLOCK 1

/* Define to 1 if you have the `fnmatch' function. */
#define HAVE_FNMATCH 1

/* Define to 1 if you have the <fnmatch.h> header file. */
#define HAVE_FNMATCH_H 1

/* Define if you have the getaddrinfo function */
#define HAVE_GETADDRINFO 1

/* Define to 1 if you have the `getcwd' function. */
#define HAVE_GETCWD 1

/* Define to 1 if you have the `getenv' function. */
#define HAVE_GETENV 1

/* Define to 1 if you have the `geteuid' function. */
#define HAVE_GETEUID 1

/* Define if gethostbyname_r exists and its return type is known */
/* #undef HAVE_GETHOSTBYNAME_R */

/* Define to 1 if you have the `getnameinfo' function. */
#define HAVE_GETNAMEINFO 1

/* Define if system getopt should be used. */
#define HAVE_GETOPT 1

/* Define if system getopt_long should be used. */
#define HAVE_GETOPT_LONG 1

/* Define if getpwnam_r is available and useful. */
#define HAVE_GETPWNAM_R 1

/* Define if getpwuid_r is available and useful. */
#define HAVE_GETPWUID_R 1

/* Define if getservbyname_r exists and its return type is known */
/* #undef HAVE_GETSERVBYNAME_R */

/* Have the gettimeofday function */
#define HAVE_GETTIMEOFDAY 1

/* Define to 1 if you have the `getusershell' function. */
#define HAVE_GETUSERSHELL 1

/* Define to 1 if you have the `gmtime_r' function. */
#define HAVE_GMTIME_R 1

/* Define to 1 if you have the <ifaddrs.h> header file. */
#define HAVE_IFADDRS_H 1

/* Define to 1 if you have the `inet_ntop' function. */
#define HAVE_INET_NTOP 1

/* Define to 1 if you have the `inet_pton' function. */
#define HAVE_INET_PTON 1

/* Define to 1 if the system has the type `int16_t'. */
#define HAVE_INT16_T 1

/* Define to 1 if the system has the type `int32_t'. */
#define HAVE_INT32_T 1

/* Define to 1 if the system has the type `int8_t'. */
#define HAVE_INT8_T 1

/* Define to 1 if you have the <inttypes.h> header file. */
#define HAVE_INTTYPES_H 1

/* Define to 1 if you have the <keyutils.h> header file. */
/* #undef HAVE_KEYUTILS_H */

/* Define to 1 if you have the <lber.h> header file. */
/* #undef HAVE_LBER_H */

/* Define to 1 if you have the <ldap.h> header file. */
/* #undef HAVE_LDAP_H */

/* Define to 1 if you have the `crypto' library (-lcrypto). */
#define HAVE_LIBCRYPTO 1

/* Define if building with libedit. */
/* #undef HAVE_LIBEDIT */

/* Define to 1 if you have the `nsl' library (-lnsl). */
/* #undef HAVE_LIBNSL */

/* Define to 1 if you have the `resolv' library (-lresolv). */
#define HAVE_LIBRESOLV 1

/* Define to 1 if you have the `socket' library (-lsocket). */
/* #undef HAVE_LIBSOCKET */

/* Define if the util library is available */
#define HAVE_LIBUTIL 1

/* Define to 1 if you have the <limits.h> header file. */
#define HAVE_LIMITS_H 1

/* Define to 1 if you have the `localtime_r' function. */
#define HAVE_LOCALTIME_R 1

/* Define to 1 if you have the <machine/byte_order.h> header file. */
#define HAVE_MACHINE_BYTE_ORDER_H 1

/* Define to 1 if you have the <machine/endian.h> header file. */
#define HAVE_MACHINE_ENDIAN_H 1

/* Define to 1 if you have the <memory.h> header file. */
#define HAVE_MEMORY_H 1

/* Define to 1 if you have the `mkstemp' function. */
#define HAVE_MKSTEMP 1

/* Define to 1 if you have the <ndir.h> header file, and it defines `DIR'. */
/* #undef HAVE_NDIR_H */

/* Define to 1 if you have the <netdb.h> header file. */
#define HAVE_NETDB_H 1

/* Define if netdb.h declares h_errno */
#define HAVE_NETDB_H_H_ERRNO 1

/* Define to 1 if you have the <netinet/in.h> header file. */
#define HAVE_NETINET_IN_H 1

/* Define to 1 if you have the `ns_initparse' function. */
#define HAVE_NS_INITPARSE 1

/* Define to 1 if you have the `ns_name_uncompress' function. */
#define HAVE_NS_NAME_UNCOMPRESS 1

/* Define if OpenSSL supports cms. */
#define HAVE_OPENSSL_CMS 1

/* Define to 1 if you have the <paths.h> header file. */
#define HAVE_PATHS_H 1

/* Define if persistent keyrings are supported */
/* #undef HAVE_PERSISTENT_KEYRING */

/* Define to 1 if you have the <poll.h> header file. */
#define HAVE_POLL_H 1

/* Define if #pragma weak references work */
/* #undef HAVE_PRAGMA_WEAK_REF */

/* Define if you have POSIX threads libraries and header files. */
#define HAVE_PTHREAD 1

/* Define to 1 if you have the `pthread_once' function. */
#define HAVE_PTHREAD_ONCE 1

/* Have PTHREAD_PRIO_INHERIT. */
#define HAVE_PTHREAD_PRIO_INHERIT 1

/* Define to 1 if you have the `pthread_rwlock_init' function. */
#define HAVE_PTHREAD_RWLOCK_INIT 1

/* Define if pthread_rwlock_init is provided in the thread library. */
#define HAVE_PTHREAD_RWLOCK_INIT_IN_THREAD_LIB 1

/* Define to 1 if you have the <pwd.h> header file. */
|
||||
#define HAVE_PWD_H 1
|
||||
|
||||
/* Define if building with GNU Readline. */
|
||||
/* #undef HAVE_READLINE */
|
||||
|
||||
/* Define if regcomp exists and functions */
|
||||
#define HAVE_REGCOMP 1
|
||||
|
||||
/* Define to 1 if you have the `regexec' function. */
|
||||
#define HAVE_REGEXEC 1
|
||||
|
||||
/* Define to 1 if you have the <regexpr.h> header file. */
|
||||
/* #undef HAVE_REGEXPR_H */
|
||||
|
||||
/* Define to 1 if you have the <regex.h> header file. */
|
||||
#define HAVE_REGEX_H 1
|
||||
|
||||
/* Define to 1 if you have the `res_nclose' function. */
|
||||
#define HAVE_RES_NCLOSE 1
|
||||
|
||||
/* Define to 1 if you have the `res_ndestroy' function. */
|
||||
#define HAVE_RES_NDESTROY 1
|
||||
|
||||
/* Define to 1 if you have the `res_ninit' function. */
|
||||
#define HAVE_RES_NINIT 1
|
||||
|
||||
/* Define to 1 if you have the `res_nsearch' function. */
|
||||
#define HAVE_RES_NSEARCH 1
|
||||
|
||||
/* Define to 1 if you have the `res_search' function */
|
||||
#define HAVE_RES_SEARCH 1
|
||||
|
||||
/* Define to 1 if you have the `re_comp' function. */
|
||||
/* #undef HAVE_RE_COMP */
|
||||
|
||||
/* Define to 1 if you have the `re_exec' function. */
|
||||
/* #undef HAVE_RE_EXEC */
|
||||
|
||||
/* Define to 1 if you have the <sasl/sasl.h> header file. */
|
||||
/* #undef HAVE_SASL_SASL_H */
|
||||
|
||||
/* Define if struct sockaddr contains sa_len */
|
||||
#define HAVE_SA_LEN 1
|
||||
|
||||
/* Define to 1 if you have the `setegid' function. */
|
||||
#define HAVE_SETEGID 1
|
||||
|
||||
/* Define to 1 if you have the `setenv' function. */
|
||||
#define HAVE_SETENV 1
|
||||
|
||||
/* Define to 1 if you have the `seteuid' function. */
|
||||
#define HAVE_SETEUID 1
|
||||
|
||||
/* Define if setluid provided in OSF/1 security library */
|
||||
/* #undef HAVE_SETLUID */
|
||||
|
||||
/* Define to 1 if you have the `setregid' function. */
|
||||
#define HAVE_SETREGID 1
|
||||
|
||||
/* Define to 1 if you have the `setresgid' function. */
|
||||
/* #undef HAVE_SETRESGID */
|
||||
|
||||
/* Define to 1 if you have the `setresuid' function. */
|
||||
/* #undef HAVE_SETRESUID */
|
||||
|
||||
/* Define to 1 if you have the `setreuid' function. */
|
||||
#define HAVE_SETREUID 1
|
||||
|
||||
/* Define to 1 if you have the `setsid' function. */
|
||||
#define HAVE_SETSID 1
|
||||
|
||||
/* Define to 1 if you have the `setvbuf' function. */
|
||||
#define HAVE_SETVBUF 1
|
||||
|
||||
/* Define if there is a socklen_t type. If not, probably use size_t */
|
||||
#define HAVE_SOCKLEN_T 1
|
||||
|
||||
/* Define to 1 if you have the `srand' function. */
|
||||
#define HAVE_SRAND 1
|
||||
|
||||
/* Define to 1 if you have the `srand48' function. */
|
||||
#define HAVE_SRAND48 1
|
||||
|
||||
/* Define to 1 if you have the `srandom' function. */
|
||||
#define HAVE_SRANDOM 1
|
||||
|
||||
/* Define to 1 if the system has the type `ssize_t'. */
|
||||
#define HAVE_SSIZE_T 1
|
||||
|
||||
/* Define to 1 if you have the `stat' function. */
|
||||
#define HAVE_STAT 1
|
||||
|
||||
/* Define to 1 if you have the <stddef.h> header file. */
|
||||
#define HAVE_STDDEF_H 1
|
||||
|
||||
/* Define to 1 if you have the <stdint.h> header file. */
|
||||
#define HAVE_STDINT_H 1
|
||||
|
||||
/* Define to 1 if you have the <stdlib.h> header file. */
|
||||
#define HAVE_STDLIB_H 1
|
||||
|
||||
/* Define to 1 if you have the `step' function. */
|
||||
/* #undef HAVE_STEP */
|
||||
|
||||
/* Define to 1 if you have the `strchr' function. */
|
||||
#define HAVE_STRCHR 1
|
||||
|
||||
/* Define to 1 if you have the `strdup' function. */
|
||||
#define HAVE_STRDUP 1
|
||||
|
||||
/* Define to 1 if you have the `strerror' function. */
|
||||
#define HAVE_STRERROR 1
|
||||
|
||||
/* Define to 1 if you have the `strerror_r' function. */
|
||||
#define HAVE_STRERROR_R 1
|
||||
|
||||
/* Define to 1 if you have the <strings.h> header file. */
|
||||
#define HAVE_STRINGS_H 1
|
||||
|
||||
/* Define to 1 if you have the <string.h> header file. */
|
||||
#define HAVE_STRING_H 1
|
||||
|
||||
/* Define to 1 if you have the `strlcpy' function. */
|
||||
#define HAVE_STRLCPY 1
|
||||
|
||||
/* Define to 1 if you have the `strptime' function. */
|
||||
#define HAVE_STRPTIME 1
|
||||
|
||||
/* Define to 1 if the system has the type `struct cmsghdr'. */
|
||||
#define HAVE_STRUCT_CMSGHDR 1
|
||||
|
||||
/* Define if there is a struct if_laddrconf. */
|
||||
/* #undef HAVE_STRUCT_IF_LADDRCONF */
|
||||
|
||||
/* Define to 1 if the system has the type `struct in6_pktinfo'. */
|
||||
#define HAVE_STRUCT_IN6_PKTINFO 1
|
||||
|
||||
/* Define to 1 if the system has the type `struct in_pktinfo'. */
|
||||
#define HAVE_STRUCT_IN_PKTINFO 1
|
||||
|
||||
/* Define if there is a struct lifconf. */
|
||||
/* #undef HAVE_STRUCT_LIFCONF */
|
||||
|
||||
/* Define to 1 if the system has the type `struct rt_msghdr'. */
|
||||
#define HAVE_STRUCT_RT_MSGHDR 1
|
||||
|
||||
/* Define to 1 if the system has the type `struct sockaddr_storage'. */
|
||||
#define HAVE_STRUCT_SOCKADDR_STORAGE 1
|
||||
|
||||
/* Define to 1 if `st_mtimensec' is a member of `struct stat'. */
|
||||
/* #undef HAVE_STRUCT_STAT_ST_MTIMENSEC */
|
||||
|
||||
/* Define to 1 if `st_mtimespec.tv_nsec' is a member of `struct stat'. */
|
||||
#define HAVE_STRUCT_STAT_ST_MTIMESPEC_TV_NSEC 1
|
||||
|
||||
/* Define to 1 if `st_mtim.tv_nsec' is a member of `struct stat'. */
|
||||
/* #undef HAVE_STRUCT_STAT_ST_MTIM_TV_NSEC */
|
||||
|
||||
/* Define to 1 if you have the <sys/bswap.h> header file. */
|
||||
/* #undef HAVE_SYS_BSWAP_H */
|
||||
|
||||
/* Define to 1 if you have the <sys/dir.h> header file, and it defines `DIR'.
|
||||
*/
|
||||
/* #undef HAVE_SYS_DIR_H */
|
||||
|
||||
/* Define if sys_errlist in libc */
|
||||
#define HAVE_SYS_ERRLIST 1
|
||||
|
||||
/* Define to 1 if you have the <sys/file.h> header file. */
|
||||
#define HAVE_SYS_FILE_H 1
|
||||
|
||||
/* Define to 1 if you have the <sys/filio.h> header file. */
|
||||
#define HAVE_SYS_FILIO_H 1
|
||||
|
||||
/* Define to 1 if you have the <sys/ndir.h> header file, and it defines `DIR'.
|
||||
*/
|
||||
/* #undef HAVE_SYS_NDIR_H */
|
||||
|
||||
/* Define to 1 if you have the <sys/param.h> header file. */
|
||||
#define HAVE_SYS_PARAM_H 1
|
||||
|
||||
/* Define to 1 if you have the <sys/select.h> header file. */
|
||||
#define HAVE_SYS_SELECT_H 1
|
||||
|
||||
/* Define to 1 if you have the <sys/socket.h> header file. */
|
||||
#define HAVE_SYS_SOCKET_H 1
|
||||
|
||||
/* Define to 1 if you have the <sys/sockio.h> header file. */
|
||||
#define HAVE_SYS_SOCKIO_H 1
|
||||
|
||||
/* Define to 1 if you have the <sys/stat.h> header file. */
|
||||
#define HAVE_SYS_STAT_H 1
|
||||
|
||||
/* Define to 1 if you have the <sys/time.h> header file. */
|
||||
#define HAVE_SYS_TIME_H 1
|
||||
|
||||
/* Define to 1 if you have the <sys/types.h> header file. */
|
||||
#define HAVE_SYS_TYPES_H 1
|
||||
|
||||
/* Define to 1 if you have the <sys/uio.h> header file. */
|
||||
#define HAVE_SYS_UIO_H 1
|
||||
|
||||
/* Define if tcl.h found */
|
||||
/* #undef HAVE_TCL_H */
|
||||
|
||||
/* Define if tcl/tcl.h found */
|
||||
/* #undef HAVE_TCL_TCL_H */
|
||||
|
||||
/* Define to 1 if you have the `timegm' function. */
|
||||
#define HAVE_TIMEGM 1
|
||||
|
||||
/* Define to 1 if you have the <time.h> header file. */
|
||||
#define HAVE_TIME_H 1
|
||||
|
||||
/* Define to 1 if you have the <unistd.h> header file. */
|
||||
#define HAVE_UNISTD_H 1
|
||||
|
||||
/* Define to 1 if you have the `unsetenv' function. */
|
||||
#define HAVE_UNSETENV 1
|
||||
|
||||
/* Define to 1 if the system has the type `u_char'. */
|
||||
#define HAVE_U_CHAR 1
|
||||
|
||||
/* Define to 1 if the system has the type `u_int'. */
|
||||
#define HAVE_U_INT 1
|
||||
|
||||
/* Define to 1 if the system has the type `u_int16_t'. */
|
||||
#define HAVE_U_INT16_T 1
|
||||
|
||||
/* Define to 1 if the system has the type `u_int32_t'. */
|
||||
#define HAVE_U_INT32_T 1
|
||||
|
||||
/* Define to 1 if the system has the type `u_int8_t'. */
|
||||
#define HAVE_U_INT8_T 1
|
||||
|
||||
/* Define to 1 if the system has the type `u_long'. */
|
||||
#define HAVE_U_LONG 1
|
||||
|
||||
/* Define to 1 if you have the `vasprintf' function. */
|
||||
#define HAVE_VASPRINTF 1
|
||||
|
||||
/* Define to 1 if you have the `vsnprintf' function. */
|
||||
#define HAVE_VSNPRINTF 1
|
||||
|
||||
/* Define to 1 if you have the `vsprintf' function. */
|
||||
#define HAVE_VSPRINTF 1
|
||||
|
||||
/* Define to 1 if the system has the type `__int128_t'. */
|
||||
#define HAVE___INT128_T 1
|
||||
|
||||
/* Define to 1 if the system has the type `__uint128_t'. */
|
||||
#define HAVE___UINT128_T 1
|
||||
|
||||
/* Define if errno.h declares perror */
|
||||
/* #undef HDR_HAS_PERROR */
|
||||
|
||||
/* May need to be defined to enable IPv6 support, for example on IRIX */
|
||||
/* #undef INET6 */
|
||||
|
||||
/* Define if MIT Project Athena default configuration should be used */
|
||||
/* #undef KRB5_ATHENA_COMPAT */
|
||||
|
||||
/* Define for DNS support of locating realms and KDCs */
|
||||
#undef KRB5_DNS_LOOKUP
|
||||
|
||||
/* Define to enable DNS lookups of Kerberos realm names */
|
||||
/* #undef KRB5_DNS_LOOKUP_REALM */
|
||||
|
||||
/* Define if the KDC should return only vague error codes to clients */
|
||||
/* #undef KRBCONF_VAGUE_ERRORS */
|
||||
|
||||
/* define if the system header files are missing prototype for daemon() */
|
||||
#define NEED_DAEMON_PROTO 1
|
||||
|
||||
/* Define if in6addr_any is not defined in libc */
|
||||
#define NEED_INSIXADDR_ANY 1
|
||||
|
||||
/* define if the system header files are missing prototype for
|
||||
ss_execute_command() */
|
||||
/* #undef NEED_SS_EXECUTE_COMMAND_PROTO */
|
||||
|
||||
/* define if the system header files are missing prototype for strptime() */
|
||||
/* #undef NEED_STRPTIME_PROTO */
|
||||
|
||||
/* define if the system header files are missing prototype for swab() */
|
||||
/* #undef NEED_SWAB_PROTO */
|
||||
|
||||
/* Define if need to declare sys_errlist */
|
||||
/* #undef NEED_SYS_ERRLIST */
|
||||
|
||||
/* define if the system header files are missing prototype for vasprintf() */
|
||||
/* #undef NEED_VASPRINTF_PROTO */
|
||||
|
||||
/* Define if the KDC should use no lookaside cache */
|
||||
/* #undef NOCACHE */
|
||||
|
||||
/* Define if references to pthread routines should be non-weak. */
|
||||
/* #undef NO_WEAK_PTHREADS */
|
||||
|
||||
/* Define if lex produes code with yylineno */
|
||||
/* #undef NO_YYLINENO */
|
||||
|
||||
/* Define to the address where bug reports for this package should be sent. */
|
||||
#define PACKAGE_BUGREPORT "krb5-bugs@mit.edu"
|
||||
|
||||
/* Define to the full name of this package. */
|
||||
#define PACKAGE_NAME "Kerberos 5"
|
||||
|
||||
/* Define to the full name and version of this package. */
|
||||
#define PACKAGE_STRING "Kerberos 5 1.17.1"
|
||||
|
||||
/* Define to the one symbol short name of this package. */
|
||||
#define PACKAGE_TARNAME "krb5"
|
||||
|
||||
/* Define to the home page for this package. */
|
||||
#define PACKAGE_URL ""
|
||||
|
||||
/* Define to the version of this package. */
|
||||
#define PACKAGE_VERSION "1.17.1"
|
||||
|
||||
/* Define if setjmp indicates POSIX interface */
|
||||
#define POSIX_SETJMP 1
|
||||
|
||||
/* Define if POSIX signal handling is used */
|
||||
#define POSIX_SIGNALS 1
|
||||
|
||||
/* Define if POSIX signal handlers are used */
|
||||
#define POSIX_SIGTYPE 1
|
||||
|
||||
/* Define if termios.h exists and tcsetattr exists */
|
||||
#define POSIX_TERMIOS 1
|
||||
|
||||
/* Define to necessary symbol if this constant uses a non-standard name on
|
||||
your system. */
|
||||
/* #undef PTHREAD_CREATE_JOINABLE */
|
||||
|
||||
/* Define as the return type of signal handlers (`int' or `void'). */
|
||||
#define RETSIGTYPE void
|
||||
|
||||
/* Define as return type of setrpcent */
|
||||
#define SETRPCENT_TYPE void
|
||||
|
||||
/* The size of `size_t', as computed by sizeof. */
|
||||
#define SIZEOF_SIZE_T 8
|
||||
|
||||
/* The size of `time_t', as computed by sizeof. */
|
||||
#define SIZEOF_TIME_T 8
|
||||
|
||||
/* Define to use OpenSSL for SPAKE preauth */
|
||||
#define SPAKE_OPENSSL 1
|
||||
|
||||
/* Define for static plugin linkage */
|
||||
/* #undef STATIC_PLUGINS */
|
||||
|
||||
/* Define to 1 if you have the ANSI C header files. */
|
||||
#define STDC_HEADERS 1
|
||||
|
||||
/* Define to 1 if strerror_r returns char *. */
|
||||
/* #undef STRERROR_R_CHAR_P */
|
||||
|
||||
/* Define if sys_errlist is defined in errno.h */
|
||||
#define SYS_ERRLIST_DECLARED 1
|
||||
|
||||
/* Define to 1 if you can safely include both <sys/time.h> and <time.h>. */
|
||||
#define TIME_WITH_SYS_TIME 1
|
||||
|
||||
/* Define if no TLS implementation is selected */
|
||||
/* #undef TLS_IMPL_NONE */
|
||||
|
||||
/* Define if TLS implementation is OpenSSL */
|
||||
#define TLS_IMPL_OPENSSL 1
|
||||
|
||||
/* Define if you have dirent.h functionality */
|
||||
#define USE_DIRENT_H 1
|
||||
|
||||
/* Define if dlopen should be used */
|
||||
#define USE_DLOPEN 1
|
||||
|
||||
/* Define if the keyring ccache should be enabled */
|
||||
/* #undef USE_KEYRING_CCACHE */
|
||||
|
||||
/* Define if link-time options for library finalization will be used */
|
||||
/* #undef USE_LINKER_FINI_OPTION */
|
||||
|
||||
/* Define if link-time options for library initialization will be used */
|
||||
/* #undef USE_LINKER_INIT_OPTION */
|
||||
|
||||
/* Define if sigprocmask should be used */
|
||||
#define USE_SIGPROCMASK 1
|
||||
|
||||
/* Define if wait takes int as a argument */
|
||||
#define WAIT_USES_INT 1
|
||||
|
||||
/* Define to 1 if `lex' declares `yytext' as a `char *' by default, not a
|
||||
`char[]'. */
|
||||
#define YYTEXT_POINTER 1
|
||||
|
||||
/* Define to enable extensions in glibc */
|
||||
#define _GNU_SOURCE 1
|
||||
|
||||
/* Define to enable C11 extensions */
|
||||
#define __STDC_WANT_LIB_EXT1__ 1
|
||||
|
||||
/* Define to empty if `const' does not conform to ANSI C. */
|
||||
/* #undef const */
|
||||
|
||||
/* Define to `int' if <sys/types.h> doesn't define. */
|
||||
/* #undef gid_t */
|
||||
|
||||
/* Define to `__inline__' or `__inline' if that's what the C compiler
|
||||
calls it, or to nothing if 'inline' is not supported under any name. */
|
||||
#ifndef __cplusplus
|
||||
/* #undef inline */
|
||||
#endif
|
||||
|
||||
/* Define krb5_sigtype to type of signal handler */
|
||||
#define krb5_sigtype void
|
||||
|
||||
/* Define to `int' if <sys/types.h> does not define. */
|
||||
/* #undef mode_t */
|
||||
|
||||
/* Define to `long int' if <sys/types.h> does not define. */
|
||||
/* #undef off_t */
|
||||
|
||||
/* Define to `long' if <sys/types.h> does not define. */
|
||||
/* #undef time_t */
|
||||
|
||||
/* Define to `int' if <sys/types.h> doesn't define. */
|
||||
/* #undef uid_t */
|
||||
|
||||
|
||||
#if defined(__GNUC__) && !defined(inline)
|
||||
/* Silence gcc pedantic warnings about ANSI C. */
|
||||
# define inline __inline__
|
||||
#endif
|
||||
#endif /* KRB5_AUTOCONF_H */
|
2
contrib/mariadb-connector-c
vendored
@ -1 +1 @@
Subproject commit 21f451d4d3157ffed31ec60a8b76c407190e66bd
Subproject commit f4476ee7311b35b593750f6ae2cbdb62a4006374

2
contrib/poco
vendored
@ -1 +1 @@
Subproject commit fbaaba4a02e29987b8c584747a496c79528f125f
Subproject commit c55b91f394efa9c238c33957682501681ef9b716
@ -151,6 +151,7 @@ function clone_submodules
cd "$FASTTEST_SOURCE"

SUBMODULES_TO_UPDATE=(
    contrib/abseil-cpp
    contrib/antlr4-runtime
    contrib/boost
    contrib/zlib-ng

@ -2,5 +2,6 @@
FROM yandex/clickhouse-binary-builder

COPY run.sh /run.sh
COPY process_split_build_smoke_test_result.py /

CMD /run.sh

61
docker/test/split_build_smoke_test/process_split_build_smoke_test_result.py
Executable file
@ -0,0 +1,61 @@
#!/usr/bin/env python3

import os
import logging
import argparse
import csv

RESULT_LOG_NAME = "run.log"

def process_result(result_folder):

    status = "success"
    description = 'Server started and responded'
    summary = [("Smoke test", "OK")]
    with open(os.path.join(result_folder, RESULT_LOG_NAME), 'r') as run_log:
        lines = run_log.read().split('\n')
        if not lines or lines[0].strip() != 'OK':
            status = "failure"
            logging.info("Lines is not ok: %s", str('\n'.join(lines)))
            summary = [("Smoke test", "FAIL")]
            description = 'Server failed to respond, see result in logs'

    result_logs = []
    server_log_path = os.path.join(result_folder, "clickhouse-server.log")
    stderr_log_path = os.path.join(result_folder, "stderr.log")
    client_stderr_log_path = os.path.join(result_folder, "clientstderr.log")

    if os.path.exists(server_log_path):
        result_logs.append(server_log_path)

    if os.path.exists(stderr_log_path):
        result_logs.append(stderr_log_path)

    if os.path.exists(client_stderr_log_path):
        result_logs.append(client_stderr_log_path)

    return status, description, summary, result_logs


def write_results(results_file, status_file, results, status):
    with open(results_file, 'w') as f:
        out = csv.writer(f, delimiter='\t')
        out.writerows(results)
    with open(status_file, 'w') as f:
        out = csv.writer(f, delimiter='\t')
        out.writerow(status)


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s')
    parser = argparse.ArgumentParser(description="ClickHouse script for parsing results of split build smoke test")
    parser.add_argument("--in-results-dir", default='/test_output/')
    parser.add_argument("--out-results-file", default='/test_output/test_results.tsv')
    parser.add_argument("--out-status-file", default='/test_output/check_status.tsv')
    args = parser.parse_args()

    state, description, test_results, logs = process_result(args.in_results_dir)
    logging.info("Result parsed")
    status = (state, description)
    write_results(args.out_results_file, args.out_status_file, test_results, status)
    logging.info("Result written")
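For orientation, a minimal usage sketch of the parser above (assuming it is importable as a module; the TSV contents in the comments are derived from the code, not captured output):

# Hypothetical manual run of the smoke-test parser with its default paths.
from process_split_build_smoke_test_result import process_result, write_results

state, description, summary, logs = process_result("/test_output/")
write_results("/test_output/test_results.tsv", "/test_output/check_status.tsv",
              summary, (state, description))
# On a healthy run, test_results.tsv holds "Smoke test\tOK" and
# check_status.tsv holds "success\tServer started and responded".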
@ -5,16 +5,18 @@ set -x
install_and_run_server() {
    mkdir /unpacked
    tar -xzf /package_folder/shared_build.tgz -C /unpacked --strip 1
    LD_LIBRARY_PATH=/unpacked /unpacked/clickhouse-server --config /unpacked/config/config.xml >/var/log/clickhouse-server/stderr.log 2>&1 &
    LD_LIBRARY_PATH=/unpacked /unpacked/clickhouse-server --config /unpacked/config/config.xml >/test_output/stderr.log 2>&1 &
}

run_client() {
    for i in {1..100}; do
        sleep 1
        LD_LIBRARY_PATH=/unpacked /unpacked/clickhouse-client --query "select 'OK'" 2>/var/log/clickhouse-server/clientstderr.log && break
        LD_LIBRARY_PATH=/unpacked /unpacked/clickhouse-client --query "select 'OK'" > /test_output/run.log 2> /test_output/clientstderr.log && break
        [[ $i == 100 ]] && echo 'FAIL'
    done
}

install_and_run_server
run_client
mv /var/log/clickhouse-server/clickhouse-server.log /test_output/clickhouse-server.log
/process_split_build_smoke_test_result.py || echo -e "failure\tCannot parse results" > /test_output/check_status.tsv

@ -1,7 +1,7 @@
# docker build -t yandex/clickhouse-sqlancer-test .
FROM ubuntu:20.04

RUN apt-get update --yes && env DEBIAN_FRONTEND=noninteractive apt-get install wget unzip git openjdk-14-jdk maven --yes --no-install-recommends
RUN apt-get update --yes && env DEBIAN_FRONTEND=noninteractive apt-get install wget unzip git openjdk-14-jdk maven python3 --yes --no-install-recommends

RUN wget https://github.com/sqlancer/sqlancer/archive/master.zip -O /sqlancer.zip
RUN mkdir /sqlancer && \
@ -10,4 +10,5 @@ RUN mkdir /sqlancer && \
RUN cd /sqlancer/sqlancer-master && mvn package -DskipTests

COPY run.sh /
COPY process_sqlancer_result.py /
CMD ["/bin/bash", "/run.sh"]

74
docker/test/sqlancer/process_sqlancer_result.py
Executable file
@ -0,0 +1,74 @@
#!/usr/bin/env python3

import os
import logging
import argparse
import csv


def process_result(result_folder):
    status = "success"
    summary = []
    paths = []
    tests = ["TLPWhere", "TLPGroupBy", "TLPHaving", "TLPWhereGroupBy", "TLPDistinct", "TLPAggregate"]

    for test in tests:
        err_path = '{}/{}.err'.format(result_folder, test)
        out_path = '{}/{}.out'.format(result_folder, test)
        if not os.path.exists(err_path):
            logging.info("No output err on path %s", err_path)
            summary.append((test, "SKIPPED"))
        elif not os.path.exists(out_path):
            logging.info("No output log on path %s", out_path)
        else:
            paths.append(err_path)
            paths.append(out_path)
            with open(err_path, 'r') as f:
                if 'AssertionError' in f.read():
                    summary.append((test, "FAIL"))
                else:
                    summary.append((test, "OK"))

    logs_path = '{}/logs.tar.gz'.format(result_folder)
    if not os.path.exists(logs_path):
        logging.info("No logs tar on path %s", logs_path)
    else:
        paths.append(logs_path)
    stdout_path = '{}/stdout.log'.format(result_folder)
    if not os.path.exists(stdout_path):
        logging.info("No stdout log on path %s", stdout_path)
    else:
        paths.append(stdout_path)
    stderr_path = '{}/stderr.log'.format(result_folder)
    if not os.path.exists(stderr_path):
        logging.info("No stderr log on path %s", stderr_path)
    else:
        paths.append(stderr_path)

    description = "SQLancer test run. See report"

    return status, description, summary, paths


def write_results(results_file, status_file, results, status):
    with open(results_file, 'w') as f:
        out = csv.writer(f, delimiter='\t')
        out.writerows(results)
    with open(status_file, 'w') as f:
        out = csv.writer(f, delimiter='\t')
        out.writerow(status)


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s')
    parser = argparse.ArgumentParser(description="ClickHouse script for parsing results of sqlancer test")
    parser.add_argument("--in-results-dir", default='/test_output/')
    parser.add_argument("--out-results-file", default='/test_output/test_results.tsv')
    parser.add_argument("--out-status-file", default='/test_output/check_status.tsv')
    args = parser.parse_args()

    state, description, test_results, logs = process_result(args.in_results_dir)
    logging.info("Result parsed")
    status = (state, description)
    write_results(args.out_results_file, args.out_status_file, test_results, status)
    logging.info("Result written")
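A quick illustrative check of the parser above (a sketch assuming process_result is in scope, e.g. imported from the script): against an empty result folder, every TLP test is reported as SKIPPED because no .err files exist:

# Hypothetical smoke check of process_result() on an empty folder.
import tempfile

with tempfile.TemporaryDirectory() as d:
    status, description, summary, paths = process_result(d)
assert status == "success" and paths == []
assert summary == [(t, "SKIPPED") for t in ["TLPWhere", "TLPGroupBy", "TLPHaving",
                                            "TLPWhereGroupBy", "TLPDistinct", "TLPAggregate"]]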
@ -29,4 +29,5 @@ tail -n 1000 /var/log/clickhouse-server/stderr.log > /test_output/stderr.log
tail -n 1000 /var/log/clickhouse-server/stdout.log > /test_output/stdout.log
tail -n 1000 /var/log/clickhouse-server/clickhouse-server.log > /test_output/clickhouse-server.log

/process_sqlancer_result.py || echo -e "failure\tCannot parse results" > /test_output/check_status.tsv
ls /test_output

@ -65,3 +65,11 @@ if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]
fi

clickhouse-test --testname --shard --zookeeper --no-stateless --hung-check --print-time "$SKIP_LIST_OPT" "${ADDITIONAL_OPTIONS[@]}" "$SKIP_TESTS_OPTION" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee test_output/test_result.txt

./process_functional_tests_result.py || echo -e "failure\tCannot parse results" > /test_output/check_status.tsv

pigz < /var/log/clickhouse-server/clickhouse-server.log > /test_output/clickhouse-server.log.gz ||:
mv /var/log/clickhouse-server/stderr.log /test_output/ ||:
if [[ -n "$WITH_COVERAGE" ]] && [[ "$WITH_COVERAGE" -eq 1 ]]; then
    tar -chf /test_output/clickhouse_coverage.tar.gz /profraw ||:
fi

@ -46,4 +46,5 @@ ENV NUM_TRIES=1
ENV MAX_RUN_TIME=0

COPY run.sh /
COPY process_functional_tests_result.py /
CMD ["/bin/bash", "/run.sh"]

118
docker/test/stateless/process_functional_tests_result.py
Executable file
@ -0,0 +1,118 @@
#!/usr/bin/env python3

import os
import logging
import argparse
import csv

OK_SIGN = "[ OK "
FAIL_SING = "[ FAIL "
TIMEOUT_SING = "[ Timeout! "
UNKNOWN_SIGN = "[ UNKNOWN "
SKIPPED_SIGN = "[ SKIPPED "
HUNG_SIGN = "Found hung queries in processlist"

def process_test_log(log_path):
    total = 0
    skipped = 0
    unknown = 0
    failed = 0
    success = 0
    hung = False
    test_results = []
    with open(log_path, 'r') as test_file:
        for line in test_file:
            line = line.strip()
            if HUNG_SIGN in line:
                hung = True
            if any(sign in line for sign in (OK_SIGN, FAIL_SING, UNKNOWN_SIGN, SKIPPED_SIGN)):
                test_name = line.split(' ')[2].split(':')[0]

                test_time = ''
                try:
                    time_token = line.split(']')[1].strip().split()[0]
                    float(time_token)
                    test_time = time_token
                except:
                    pass

                total += 1
                if TIMEOUT_SING in line:
                    failed += 1
                    test_results.append((test_name, "Timeout", test_time))
                elif FAIL_SING in line:
                    failed += 1
                    test_results.append((test_name, "FAIL", test_time))
                elif UNKNOWN_SIGN in line:
                    unknown += 1
                    test_results.append((test_name, "FAIL", test_time))
                elif SKIPPED_SIGN in line:
                    skipped += 1
                    test_results.append((test_name, "SKIPPED", test_time))
                else:
                    success += int(OK_SIGN in line)
                    test_results.append((test_name, "OK", test_time))
    return total, skipped, unknown, failed, success, hung, test_results

def process_result(result_path):
    test_results = []
    state = "success"
    description = ""
    files = os.listdir(result_path)
    if files:
        logging.info("Find files in result folder %s", ','.join(files))
        result_path = os.path.join(result_path, 'test_result.txt')
    else:
        result_path = None
        description = "No output log"
        state = "error"

    if result_path and os.path.exists(result_path):
        total, skipped, unknown, failed, success, hung, test_results = process_test_log(result_path)
        is_flacky_check = 1 < int(os.environ.get('NUM_TRIES', 1))
        # If no tests were run (success == 0) it indicates an error (e.g. server did not start or crashed immediately)
        # But it's Ok for "flaky checks" - they can contain just one test for check which is marked as skipped.
        if failed != 0 or unknown != 0 or (success == 0 and (not is_flacky_check)):
            state = "failure"

        if hung:
            description = "Some queries hung, "
            state = "failure"
        else:
            description = ""

        description += "fail: {}, passed: {}".format(failed, success)
        if skipped != 0:
            description += ", skipped: {}".format(skipped)
        if unknown != 0:
            description += ", unknown: {}".format(unknown)
    else:
        state = "failure"
        description = "Output log doesn't exist"
        test_results = []

    return state, description, test_results


def write_results(results_file, status_file, results, status):
    with open(results_file, 'w') as f:
        out = csv.writer(f, delimiter='\t')
        out.writerows(results)
    with open(status_file, 'w') as f:
        out = csv.writer(f, delimiter='\t')
        out.writerow(status)


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s')
    parser = argparse.ArgumentParser(description="ClickHouse script for parsing results of functional tests")
    parser.add_argument("--in-results-dir", default='/test_output/')
    parser.add_argument("--out-results-file", default='/test_output/test_results.tsv')
    parser.add_argument("--out-status-file", default='/test_output/check_status.tsv')
    args = parser.parse_args()

    state, description, test_results = process_result(args.in_results_dir)
    logging.info("Result parsed")
    status = (state, description)
    write_results(args.out_results_file, args.out_status_file, test_results, status)
    logging.info("Result written")
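To make the summary string above concrete, a small illustrative rerun of the same assembly logic with made-up counters:

# Illustrative only: the counters are invented, the logic mirrors process_result().
failed, success, skipped, unknown = 2, 100, 3, 0
description = "fail: {}, passed: {}".format(failed, success)
if skipped != 0:
    description += ", skipped: {}".format(skipped)
if unknown != 0:
    description += ", unknown: {}".format(unknown)
print(description)  # -> fail: 2, passed: 100, skipped: 3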
@ -72,5 +72,12 @@ export -f run_tests

timeout "$MAX_RUN_TIME" bash -c run_tests ||:

./process_functional_tests_result.py || echo -e "failure\tCannot parse results" > /test_output/check_status.tsv

pigz < /var/log/clickhouse-server/clickhouse-server.log > /test_output/clickhouse-server.log.gz ||:
mv /var/log/clickhouse-server/stderr.log /test_output/ ||:
if [[ -n "$WITH_COVERAGE" ]] && [[ "$WITH_COVERAGE" -eq 1 ]]; then
    tar -chf /test_output/clickhouse_coverage.tar.gz /profraw ||:
fi
tar -chf /test_output/text_log_dump.tar /var/lib/clickhouse/data/system/text_log ||:
tar -chf /test_output/query_log_dump.tar /var/lib/clickhouse/data/system/query_log ||:

@ -53,10 +53,14 @@ handle SIGBUS stop print
handle SIGABRT stop print
continue
thread apply all backtrace
continue
detach
quit
" > script.gdb

gdb -batch -command script.gdb -p "$(cat /var/run/clickhouse-server/clickhouse-server.pid)" &
# FIXME Hung check may work incorrectly because of attached gdb
# 1. False positives are possible
# 2. We cannot attach another gdb to get stacktraces if some queries hung
gdb -batch -command script.gdb -p "$(cat /var/run/clickhouse-server/clickhouse-server.pid)" >> /test_output/gdb.log &
}

configure
@ -78,11 +82,55 @@ clickhouse-client --query "RENAME TABLE datasets.hits_v1 TO test.hits"
clickhouse-client --query "RENAME TABLE datasets.visits_v1 TO test.visits"
clickhouse-client --query "SHOW TABLES FROM test"

./stress --hung-check --output-folder test_output --skip-func-tests "$SKIP_TESTS_OPTION" && echo "OK" > /test_output/script_exit_code.txt || echo "FAIL" > /test_output/script_exit_code.txt
./stress --hung-check --output-folder test_output --skip-func-tests "$SKIP_TESTS_OPTION" \
    && echo -e 'Test script exit code\tOK' >> /test_output/test_results.tsv \
    || echo -e 'Test script failed\tFAIL' >> /test_output/test_results.tsv

stop
# TODO remove me when persistent snapshots will be ready
rm -fr /var/lib/clickhouse/coordination ||:
start

clickhouse-client --query "SELECT 'Server successfuly started'" > /test_output/alive_check.txt || echo 'Server failed to start' > /test_output/alive_check.txt
clickhouse-client --query "SELECT 'Server successfully started', 'OK'" >> /test_output/test_results.tsv \
    || echo -e 'Server failed to start\tFAIL' >> /test_output/test_results.tsv

[ -f /var/log/clickhouse-server/clickhouse-server.log ] || echo -e "Server log does not exist\tFAIL"
[ -f /var/log/clickhouse-server/stderr.log ] || echo -e "Stderr log does not exist\tFAIL"

# Print Fatal log messages to stdout
zgrep -Fa " <Fatal> " /var/log/clickhouse-server/clickhouse-server.log

# Grep logs for sanitizer asserts, crashes and other critical errors

# Sanitizer asserts
zgrep -Fa "==================" /var/log/clickhouse-server/stderr.log >> /test_output/tmp
zgrep -Fa "WARNING" /var/log/clickhouse-server/stderr.log >> /test_output/tmp
zgrep -Fav "ASan doesn't fully support makecontext/swapcontext functions" /test_output/tmp > /dev/null \
    && echo -e 'Sanitizer assert (in stderr.log)\tFAIL' >> /test_output/test_results.tsv \
    || echo -e 'No sanitizer asserts\tOK' >> /test_output/test_results.tsv
rm -f /test_output/tmp

# Logical errors
zgrep -Fa "Code: 49, e.displayText() = DB::Exception:" /var/log/clickhouse-server/clickhouse-server.log > /dev/null \
    && echo -e 'Logical error thrown (see clickhouse-server.log)\tFAIL' >> /test_output/test_results.tsv \
    || echo -e 'No logical errors\tOK' >> /test_output/test_results.tsv

# Crash
zgrep -Fa "########################################" /var/log/clickhouse-server/clickhouse-server.log > /dev/null \
    && echo -e 'Killed by signal (in clickhouse-server.log)\tFAIL' >> /test_output/test_results.tsv \
    || echo -e 'Not crashed\tOK' >> /test_output/test_results.tsv

# It also checks for OOM or crash without stacktrace (printed by watchdog)
zgrep -Fa " <Fatal> " /var/log/clickhouse-server/clickhouse-server.log > /dev/null \
    && echo -e 'Fatal message in clickhouse-server.log\tFAIL' >> /test_output/test_results.tsv \
    || echo -e 'No fatal messages in clickhouse-server.log\tOK' >> /test_output/test_results.tsv

zgrep -Fa "########################################" /test_output/* > /dev/null \
    && echo -e 'Killed by signal (output files)\tFAIL' >> /test_output/test_results.tsv

# Put logs into /test_output/
pigz < /var/log/clickhouse-server/clickhouse-server.log > /test_output/clickhouse-server.log.gz
tar -chf /test_output/coordination.tar /var/lib/clickhouse/coordination ||:
mv /var/log/clickhouse-server/stderr.log /test_output/

# Write check result into check_status.tsv
clickhouse-local --structure "test String, res String" -q "SELECT 'failure', test FROM table WHERE res != 'OK' order by (lower(test) like '%hung%') LIMIT 1" < /test_output/test_results.tsv > /test_output/check_status.tsv
[ -s /test_output/check_status.tsv ] || echo -e "success\tNo errors found" > /test_output/check_status.tsv
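For readability, a rough Python equivalent (an illustrative sketch, not part of the image) of the clickhouse-local query above that derives check_status.tsv from test_results.tsv:

# Sketch: pick the first non-OK row, preferring rows whose test name does not
# mention "hung" (they sort first), otherwise report overall success.
import csv

with open("/test_output/test_results.tsv") as f:
    bad = [row for row in csv.reader(f, delimiter='\t') if len(row) >= 2 and row[1] != 'OK']
bad.sort(key=lambda row: 'hung' in row[0].lower())  # False sorts before True
status = ("failure", bad[0][0]) if bad else ("success", "No errors found")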
@ -58,6 +58,27 @@ def run_func_test(cmd, output_prefix, num_processes, skip_tests_option, global_t
        time.sleep(0.5)
    return pipes

def prepare_for_hung_check():
    # FIXME this function should not exist, but...

    # We attach gdb to clickhouse-server before running tests
    # to print stacktraces of all crashes even if clickhouse cannot print it for some reason.
    # However, it obstruct checking for hung queries.
    logging.info("Will terminate gdb (if any)")
    call("kill -TERM $(pidof gdb)", shell=True, stderr=STDOUT)

    # Some tests execute SYSTEM STOP MERGES or similar queries.
    # It may cause some ALTERs to hang.
    # Possibly we should fix tests and forbid to use such queries without specifying table.
    call("clickhouse client -q 'SYSTEM START MERGES'", shell=True, stderr=STDOUT)
    call("clickhouse client -q 'SYSTEM START DISTRIBUTED SENDS'", shell=True, stderr=STDOUT)
    call("clickhouse client -q 'SYSTEM START TTL MERGES'", shell=True, stderr=STDOUT)
    call("clickhouse client -q 'SYSTEM START MOVES'", shell=True, stderr=STDOUT)
    call("clickhouse client -q 'SYSTEM START FETCHES'", shell=True, stderr=STDOUT)
    call("clickhouse client -q 'SYSTEM START REPLICATED SENDS'", shell=True, stderr=STDOUT)
    call("clickhouse client -q 'SYSTEM START REPLICATION QUEUES'", shell=True, stderr=STDOUT)

    time.sleep(30)

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s')
@ -88,11 +109,14 @@ if __name__ == "__main__":

    logging.info("All processes finished")
    if args.hung_check:
        prepare_for_hung_check()
        logging.info("Checking if some queries hung")
        cmd = "{} {} {}".format(args.test_cmd, "--hung-check", "00001_select_1")
        res = call(cmd, shell=True, stderr=STDOUT)
        hung_check_status = "No queries hung\tOK\n"
        if res != 0:
            logging.info("Hung check failed with exit code {}".format(res))
            sys.exit(1)
            hung_check_status = "Hung check failed\tFAIL\n"
        open(os.path.join(args.output_folder, "test_results.tsv"), 'w+').write(hung_check_status)

    logging.info("Stress test finished")
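As a side note on prepare_for_hung_check above, the repeated SYSTEM START calls could be collapsed into a loop; a sketch, reusing the same call/STDOUT helpers the script already imports:

# Equivalent sketch of the SYSTEM START sequence above.
for subsystem in ("MERGES", "DISTRIBUTED SENDS", "TTL MERGES", "MOVES",
                  "FETCHES", "REPLICATED SENDS", "REPLICATION QUEUES"):
    call("clickhouse client -q 'SYSTEM START {}'".format(subsystem), shell=True, stderr=STDOUT)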
@ -10,14 +10,6 @@ RUN apt-get update && env DEBIAN_FRONTEND=noninteractive apt-get install --yes \
    yamllint \
    && pip3 install codespell


# For |& syntax
SHELL ["bash", "-c"]

CMD cd /ClickHouse/utils/check-style && \
    ./check-style -n |& tee /test_output/style_output.txt && \
    ./check-typos |& tee /test_output/typos_output.txt && \
    ./check-whitespaces -n |& tee /test_output/whitespaces_output.txt && \
    ./check-duplicate-includes.sh |& tee /test_output/duplicate_output.txt && \
    ./shellcheck-run.sh |& tee /test_output/shellcheck_output.txt && \
    true
COPY run.sh /
COPY process_style_check_result.py /
CMD ["/bin/bash", "/run.sh"]

96
docker/test/style/process_style_check_result.py
Executable file
@ -0,0 +1,96 @@
#!/usr/bin/env python3

import os
import logging
import argparse
import csv


def process_result(result_folder):
    status = "success"
    description = ""
    test_results = []

    style_log_path = '{}/style_output.txt'.format(result_folder)
    if not os.path.exists(style_log_path):
        logging.info("No style check log on path %s", style_log_path)
        return "exception", "No style check log", []
    elif os.stat(style_log_path).st_size != 0:
        description += "Style check failed. "
        test_results.append(("Style check", "FAIL"))
        status = "failure"  # Disabled for now
    else:
        test_results.append(("Style check", "OK"))

    typos_log_path = '{}/typos_output.txt'.format(result_folder)
    if not os.path.exists(typos_log_path):
        logging.info("No typos check log on path %s", typos_log_path)
        return "exception", "No typos check log", []
    elif os.stat(typos_log_path).st_size != 0:
        description += "Typos check failed. "
        test_results.append(("Typos check", "FAIL"))
        status = "failure"
    else:
        test_results.append(("Typos check", "OK"))

    whitespaces_log_path = '{}/whitespaces_output.txt'.format(result_folder)
    if not os.path.exists(whitespaces_log_path):
        logging.info("No whitespaces check log on path %s", whitespaces_log_path)
        return "exception", "No whitespaces check log", []
    elif os.stat(whitespaces_log_path).st_size != 0:
        description += "Whitespaces check failed. "
        test_results.append(("Whitespaces check", "FAIL"))
        status = "failure"
    else:
        test_results.append(("Whitespaces check", "OK"))

    duplicate_log_path = '{}/duplicate_output.txt'.format(result_folder)
    if not os.path.exists(duplicate_log_path):
        logging.info("No header duplicates check log on path %s", duplicate_log_path)
        return "exception", "No header duplicates check log", []
    elif os.stat(duplicate_log_path).st_size != 0:
        description += " Header duplicates check failed. "
        test_results.append(("Header duplicates check", "FAIL"))
        status = "failure"
    else:
        test_results.append(("Header duplicates check", "OK"))

    shellcheck_log_path = '{}/shellcheck_output.txt'.format(result_folder)
    if not os.path.exists(shellcheck_log_path):
        logging.info("No shellcheck log on path %s", shellcheck_log_path)
        return "exception", "No shellcheck log", []
    elif os.stat(shellcheck_log_path).st_size != 0:
        description += " Shellcheck failed. "
        test_results.append(("Shellcheck", "FAIL"))
        status = "failure"
    else:
        test_results.append(("Shellcheck", "OK"))

    if not description:
        description += "Style check success"

    return status, description, test_results


def write_results(results_file, status_file, results, status):
    with open(results_file, 'w') as f:
        out = csv.writer(f, delimiter='\t')
        out.writerows(results)
    with open(status_file, 'w') as f:
        out = csv.writer(f, delimiter='\t')
        out.writerow(status)


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s')
    parser = argparse.ArgumentParser(description="ClickHouse script for parsing results of style check")
    parser.add_argument("--in-results-dir", default='/test_output/')
    parser.add_argument("--out-results-file", default='/test_output/test_results.tsv')
    parser.add_argument("--out-status-file", default='/test_output/check_status.tsv')
    args = parser.parse_args()

    state, description, test_results = process_result(args.in_results_dir)
    logging.info("Result parsed")
    status = (state, description)
    write_results(args.out_results_file, args.out_status_file, test_results, status)
    logging.info("Result written")
9
docker/test/style/run.sh
Executable file
@ -0,0 +1,9 @@
#!/bin/bash

cd /ClickHouse/utils/check-style || echo -e "failure\tRepo not found" > /test_output/check_status.tsv
./check-style -n |& tee /test_output/style_output.txt
./check-typos |& tee /test_output/typos_output.txt
./check-whitespaces -n |& tee /test_output/whitespaces_output.txt
./check-duplicate-includes.sh |& tee /test_output/duplicate_output.txt
./shellcheck-run.sh |& tee /test_output/shellcheck_output.txt
/process_style_check_result.py || echo -e "failure\tCannot parse results" > /test_output/check_status.tsv

@ -61,6 +61,7 @@ RUN set -eux; \

COPY modprobe.sh /usr/local/bin/modprobe
COPY dockerd-entrypoint.sh /usr/local/bin/
COPY process_testflows_result.py /usr/local/bin/

RUN set -x \
    && addgroup --system dockremap \
@ -72,5 +73,5 @@ RUN set -x \
VOLUME /var/lib/docker
EXPOSE 2375
ENTRYPOINT ["dockerd-entrypoint.sh"]
CMD ["sh", "-c", "python3 regression.py --no-color -o classic --local --clickhouse-binary-path ${CLICKHOUSE_TESTS_SERVER_BIN_PATH} --log test.log ${TESTFLOWS_OPTS}; cat test.log | tfs report results --format json > results.json"]
CMD ["sh", "-c", "python3 regression.py --no-color -o classic --local --clickhouse-binary-path ${CLICKHOUSE_TESTS_SERVER_BIN_PATH} --log test.log ${TESTFLOWS_OPTS}; cat test.log | tfs report results --format json > results.json; /usr/local/bin/process_testflows_result.py || echo -e 'failure\tCannot parse results' > check_status.tsv"]

67
docker/test/testflows/runner/process_testflows_result.py
Executable file
@ -0,0 +1,67 @@
#!/usr/bin/env python3

import os
import logging
import argparse
import csv
import json


def process_result(result_folder):
    json_path = os.path.join(result_folder, "results.json")
    if not os.path.exists(json_path):
        return "success", "No testflows in branch", None, []

    test_binary_log = os.path.join(result_folder, "test.log")
    with open(json_path) as source:
        results = json.loads(source.read())

    total_tests = 0
    total_ok = 0
    total_fail = 0
    total_other = 0
    test_results = []
    for test in results["tests"]:
        test_name = test['test']['test_name']
        test_result = test['result']['result_type'].upper()
        test_time = str(test['result']['message_rtime'])
        total_tests += 1
        if test_result == "OK":
            total_ok += 1
        elif test_result == "FAIL" or test_result == "ERROR":
            total_fail += 1
        else:
            total_other += 1

        test_results.append((test_name, test_result, test_time))
    if total_fail != 0:
        status = "failure"
    else:
        status = "success"

    description = "failed: {}, passed: {}, other: {}".format(total_fail, total_ok, total_other)
    return status, description, test_results, [json_path, test_binary_log]


def write_results(results_file, status_file, results, status):
    with open(results_file, 'w') as f:
        out = csv.writer(f, delimiter='\t')
        out.writerows(results)
    with open(status_file, 'w') as f:
        out = csv.writer(f, delimiter='\t')
        out.writerow(status)

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s')
    parser = argparse.ArgumentParser(description="ClickHouse script for parsing results of Testflows tests")
    parser.add_argument("--in-results-dir", default='./')
    parser.add_argument("--out-results-file", default='./test_results.tsv')
    parser.add_argument("--out-status-file", default='./check_status.tsv')
    args = parser.parse_args()

    state, description, test_results, logs = process_result(args.in_results_dir)
    logging.info("Result parsed")
    status = (state, description)
    write_results(args.out_results_file, args.out_status_file, test_results, status)
    logging.info("Result written")
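For reference, a minimal illustrative shape of the results.json consumed above — only the key names come from the parser itself, the values are made up:

# Hypothetical input for process_result(); real TestFlows reports carry many more fields.
results = {
    "tests": [
        {"test": {"test_name": "/clickhouse/ldap/authentication"},
         "result": {"result_type": "OK", "message_rtime": 0.42}},
    ]
}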
@ -5,6 +5,6 @@ ENV TZ=Europe/Moscow
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get install gdb

CMD service zookeeper start && sleep 7 && /usr/share/zookeeper/bin/zkCli.sh -server localhost:2181 -create create /clickhouse_test ''; \
    gdb -q -ex 'set print inferior-events off' -ex 'set confirm off' -ex 'set print thread-events off' -ex run -ex bt -ex quit --args ./unit_tests_dbms | tee test_output/test_result.txt

COPY run.sh /
COPY process_unit_tests_result.py /
CMD ["/bin/bash", "/run.sh"]

96
docker/test/unit/process_unit_tests_result.py
Executable file
@ -0,0 +1,96 @@
#!/usr/bin/env python3

import os
import logging
import argparse
import csv

OK_SIGN = 'OK ]'
FAILED_SIGN = 'FAILED ]'
SEGFAULT = 'Segmentation fault'
SIGNAL = 'received signal SIG'
PASSED = 'PASSED'

def get_test_name(line):
    elements = reversed(line.split(' '))
    for element in elements:
        if '(' not in element and ')' not in element:
            return element
    raise Exception("No test name in line '{}'".format(line))

def process_result(result_folder):
    summary = []
    total_counter = 0
    failed_counter = 0
    result_log_path = '{}/test_result.txt'.format(result_folder)
    if not os.path.exists(result_log_path):
        logging.info("No output log on path %s", result_log_path)
        return "exception", "No output log", []

    status = "success"
    description = ""
    passed = False
    with open(result_log_path, 'r') as test_result:
        for line in test_result:
            if OK_SIGN in line:
                logging.info("Found ok line: '%s'", line)
                test_name = get_test_name(line.strip())
                logging.info("Test name: '%s'", test_name)
                summary.append((test_name, "OK"))
                total_counter += 1
            elif FAILED_SIGN in line and 'listed below' not in line and 'ms)' in line:
                logging.info("Found fail line: '%s'", line)
                test_name = get_test_name(line.strip())
                logging.info("Test name: '%s'", test_name)
                summary.append((test_name, "FAIL"))
                total_counter += 1
                failed_counter += 1
            elif SEGFAULT in line:
                logging.info("Found segfault line: '%s'", line)
                status = "failure"
                description += "Segmentation fault. "
                break
            elif SIGNAL in line:
                logging.info("Received signal line: '%s'", line)
                status = "failure"
                description += "Exit on signal. "
                break
            elif PASSED in line:
                logging.info("PASSED record found: '%s'", line)
                passed = True

    if not passed:
        status = "failure"
        description += "PASSED record not found. "

    if failed_counter != 0:
        status = "failure"

    if not description:
        description += "fail: {}, passed: {}".format(failed_counter, total_counter - failed_counter)

    return status, description, summary


def write_results(results_file, status_file, results, status):
    with open(results_file, 'w') as f:
        out = csv.writer(f, delimiter='\t')
        out.writerows(results)
    with open(status_file, 'w') as f:
        out = csv.writer(f, delimiter='\t')
        out.writerow(status)

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s')
    parser = argparse.ArgumentParser(description="ClickHouse script for parsing results of unit tests")
    parser.add_argument("--in-results-dir", default='/test_output/')
    parser.add_argument("--out-results-file", default='/test_output/test_results.tsv')
    parser.add_argument("--out-status-file", default='/test_output/check_status.tsv')
    args = parser.parse_args()

    state, description, test_results = process_result(args.in_results_dir)
    logging.info("Result parsed")
    status = (state, description)
    write_results(args.out_results_file, args.out_status_file, test_results, status)
    logging.info("Result written")
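A small illustrative check of get_test_name above (the log line is made up in gtest style, with invented names and timings):

# The helper walks tokens right to left and returns the first one without parentheses.
assert get_test_name("[       OK ] DateLUTTest.TimeZone (12 ms)") == "DateLUTTest.TimeZone"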
7
docker/test/unit/run.sh
Normal file
@ -0,0 +1,7 @@
#!/bin/bash

set -x

service zookeeper start && sleep 7 && /usr/share/zookeeper/bin/zkCli.sh -server localhost:2181 -create create /clickhouse_test '';
gdb -q -ex 'set print inferior-events off' -ex 'set confirm off' -ex 'set print thread-events off' -ex run -ex bt -ex quit --args ./unit_tests_dbms | tee test_output/test_result.txt
./process_unit_tests_result.py || echo -e "failure\tCannot parse results" > /test_output/check_status.tsv

@ -58,6 +58,6 @@ Result:

Follow up with any text to clarify the example.

## See Also {#see-also}
**See Also**

- [link](#)
@ -14,12 +14,12 @@ More text (Optional).

**Arguments** (Optional)

- `x` — Description. [Type name](relative/path/to/type/dscr.md#type).
- `y` — Description. [Type name](relative/path/to/type/dscr.md#type).
- `x` — Description. Optional (only for optional arguments). Possible values: <values list>. Default value: <value>. [Type name](relative/path/to/type/dscr.md#type).
- `y` — Description. Optional (only for optional arguments). Possible values: <values list>. Default value: <value>. [Type name](relative/path/to/type/dscr.md#type).

**Parameters** (Optional, only for parametric aggregate functions)

- `z` — Description. [Type name](relative/path/to/type/dscr.md#type).
- `z` — Description. Optional (only for optional parameters). Possible values: <values list>. Default value: <value>. [Type name](relative/path/to/type/dscr.md#type).

**Returned value(s)**

@ -8,14 +8,14 @@ Possible value: ...

Default value: ...

Settings: (Optional)
**Settings** (Optional)

If the section contains several settings, list them here. Specify possible values and default values:

- setting_1 — Description.
- setting_2 — Description.

**Example:**
**Example**

```xml
<server_setting_name>

@ -1,14 +1,14 @@
# Statement name (for example, SHOW USER)
# Statement name (for example, SHOW USER) {#statement-name-in-lower-case}

Brief description of what the statement does.

Syntax:
**Syntax**

```sql
Syntax of the statement.
```

## Other necessary sections of the description (Optional)
## Other necessary sections of the description (Optional) {#anchor}

Examples of descriptions with a complicated structure:

@ -17,7 +17,7 @@ Examples of descriptions with a complicated structure:
- https://clickhouse.tech/docs/en/sql-reference/statements/select/join/


## See Also (Optional)
**See Also** (Optional)

Links to related topics as a list.
@ -29,6 +29,17 @@ toc_title: Cloud

- Cross-AZ scaling for performance and high availability
- Built-in monitoring and SQL query editor

## Alibaba Cloud {#alibaba-cloud}

Alibaba Cloud Managed Service for ClickHouse ([China site](https://www.aliyun.com/product/clickhouse); it will become available on the international site in May 2021) provides the following key features:

- Highly reliable cloud disk storage engine based on the Alibaba Cloud Apsara distributed system
- Capacity expansion on demand, without manual data migration
- Support for single-node, single-replica, multi-node, and multi-replica architectures, as well as hot and cold data tiering
- Support for access allow-lists, one-key recovery, multi-layer network security protection, and cloud disk encryption
- Seamless integration with cloud log systems, databases, and data application tools
- Built-in monitoring and database management platform
- Professional technical support and service from database experts

## Tencent Cloud {#tencent-cloud}

[Tencent Managed Service for ClickHouse](https://cloud.tencent.com/product/cdwch) provides the following key features:
@ -170,7 +170,7 @@ $ ./release

Normally all tools of the ClickHouse bundle, such as `clickhouse-server`, `clickhouse-client`, etc., are linked into a single static executable, `clickhouse`. This executable must be re-linked on every change, which might be slow. Two common ways to improve linking time are to use the `lld` linker and to use the 'split' build configuration, which builds a separate binary for every tool and further splits the code into several shared libraries. To enable these tweaks, pass the following flags to `cmake`:

```
-DCMAKE_C_FLAGS="-fuse-ld=lld" -DCMAKE_CXX_FLAGS="-fuse-ld=lld" -DUSE_STATIC_LIBRARIES=0 -DSPLIT_SHARED_LIBRARIES=1 -DCLICKHOUSE_SPLIT_BINARY=1
-DCMAKE_C_FLAGS="--ld-path=lld" -DCMAKE_CXX_FLAGS="--ld-path=lld" -DUSE_STATIC_LIBRARIES=0 -DSPLIT_SHARED_LIBRARIES=1 -DCLICKHOUSE_SPLIT_BINARY=1
```

## You Don’t Have to Build ClickHouse {#you-dont-have-to-build-clickhouse}
@ -701,6 +701,32 @@ The `default` storage policy implies using only one volume, which consists of one disk.

The number of threads performing background moves of data parts can be changed by the [background_move_pool_size](../../../operations/settings/settings.md#background_move_pool_size) setting.

### Details {#details}

In the case of `MergeTree` tables, data reaches disk in different ways:

- As a result of an insert (`INSERT` query).
- During background merges and [mutations](../../../sql-reference/statements/alter/index.md#alter-mutations).
- When downloading from another replica.
- As a result of partition freezing [ALTER TABLE … FREEZE PARTITION](../../../sql-reference/statements/alter/partition.md#alter_freeze-partition).

In all these cases except for mutations and partition freezing, a part is stored on a volume and a disk according to the given storage policy:

1. The first volume (in the order of definition) that has enough disk space for storing a part (`unreserved_space > current_part_size`) and allows storing parts of a given size (`max_data_part_size_bytes > current_part_size`) is chosen.
2. Within this volume, the disk following the one used for storing the previous chunk of data is chosen, provided it has more free space than the part size (`unreserved_space - keep_free_space_bytes > current_part_size`).

Under the hood, mutations and partition freezing make use of [hard links](https://en.wikipedia.org/wiki/Hard_link). Hard links between different disks are not supported, therefore in such cases the resulting parts are stored on the same disks as the initial ones.

In the background, parts are moved between volumes on the basis of the amount of free space (`move_factor` parameter) according to the order in which the volumes are declared in the configuration file.
Data is never transferred from the last volume into the first one. Background moves can be monitored using the system tables [system.part_log](../../../operations/system-tables/part_log.md#system_tables-part-log) (field `type = MOVE_PART`) and [system.parts](../../../operations/system-tables/parts.md#system_tables-parts) (fields `path` and `disk`). Detailed information can also be found in server logs.
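For example, a minimal monitoring sketch (the table name `my_table` is hypothetical; the columns used here exist in `system.part_log`):

```sql
SELECT event_time, part_name, path_on_disk
FROM system.part_log
WHERE type = 'MOVE_PART' AND table = 'my_table'
ORDER BY event_time DESC
LIMIT 10;
```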
A user can force moving a part or a partition from one volume to another using the query [ALTER TABLE … MOVE PART\|PARTITION … TO VOLUME\|DISK …](../../../sql-reference/statements/alter/partition.md#alter_move-partition); all the restrictions for background operations are taken into account. The query initiates a move on its own and does not wait for background operations to be completed. The user will get an error message if not enough free space is available or if any of the required conditions are not met.
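For instance (the table, partition, and volume names are purely illustrative):

```sql
ALTER TABLE my_table MOVE PARTITION '2021-03-01' TO VOLUME 'cold';
```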
Moving data does not interfere with data replication. Therefore, different storage policies can be specified for the same table on different replicas.

After the completion of background merges and mutations, old parts are removed only after a certain amount of time (`old_parts_lifetime`).
During this time, they are not moved to other volumes or disks. Therefore, until the parts are finally removed, they are still taken into account for evaluation of the occupied disk space.

## Using S3 for Data Storage {#table_engine-mergetree-s3}

Table engines of the `MergeTree` family can store data in [S3](https://aws.amazon.com/s3/) using a disk with type `s3`.
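A minimal sketch of a table that uses such a disk, assuming a storage policy named `s3_main` has been defined in the server configuration (a policy of that name appears in the configuration example later in this diff):

```sql
CREATE TABLE s3_table
(
    id UInt64,
    created DateTime,
    value String
)
ENGINE = MergeTree()
ORDER BY id
SETTINGS storage_policy = 's3_main';
```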
@ -793,30 +819,4 @@ S3 disk can be configured as `main` or `cold` storage:

With the `cold` option, data can be moved to S3 when free local disk space drops below `move_factor * disk_size`, or by a TTL move rule.
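As an illustration of the TTL route (reusing the hypothetical `s3_table` above and assuming its storage policy contains a volume named `external`):

```sql
ALTER TABLE s3_table
    MODIFY TTL created + INTERVAL 30 DAY TO VOLUME 'external';
```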
### Details {#details}

In the case of `MergeTree` tables, data reaches disk in different ways:

- As a result of an insert (`INSERT` query).
- During background merges and [mutations](../../../sql-reference/statements/alter/index.md#alter-mutations).
- When downloading from another replica.
- As a result of partition freezing [ALTER TABLE … FREEZE PARTITION](../../../sql-reference/statements/alter/partition.md#alter_freeze-partition).

In all these cases except for mutations and partition freezing, a part is stored on a volume and a disk according to the given storage policy:

1. The first volume (in the order of definition) that has enough disk space for storing a part (`unreserved_space > current_part_size`) and allows storing parts of a given size (`max_data_part_size_bytes > current_part_size`) is chosen.
2. Within this volume, the disk following the one used for storing the previous chunk of data is chosen, provided it has more free space than the part size (`unreserved_space - keep_free_space_bytes > current_part_size`).

Under the hood, mutations and partition freezing make use of [hard links](https://en.wikipedia.org/wiki/Hard_link). Hard links between different disks are not supported, therefore in such cases the resulting parts are stored on the same disks as the initial ones.

In the background, parts are moved between volumes on the basis of the amount of free space (`move_factor` parameter) according to the order in which the volumes are declared in the configuration file.
Data is never transferred from the last volume into the first one. Background moves can be monitored using the system tables [system.part_log](../../../operations/system-tables/part_log.md#system_tables-part-log) (field `type = MOVE_PART`) and [system.parts](../../../operations/system-tables/parts.md#system_tables-parts) (fields `path` and `disk`). Detailed information can also be found in server logs.

A user can force moving a part or a partition from one volume to another using the query [ALTER TABLE … MOVE PART\|PARTITION … TO VOLUME\|DISK …](../../../sql-reference/statements/alter/partition.md#alter_move-partition); all the restrictions for background operations are taken into account. The query initiates a move on its own and does not wait for background operations to be completed. The user will get an error message if not enough free space is available or if any of the required conditions are not met.

Moving data does not interfere with data replication. Therefore, different storage policies can be specified for the same table on different replicas.

After the completion of background merges and mutations, old parts are removed only after a certain amount of time (`old_parts_lifetime`).
During this time, they are not moved to other volumes or disks. Therefore, until the parts are finally removed, they are still taken into account for evaluation of the occupied disk space.

[Original article](https://clickhouse.tech/docs/ru/operations/table_engines/mergetree/) <!--hide-->
@ -73,19 +73,18 @@ Clusters are set like this:

``` xml
<remote_servers>
    <logs>
        <!-- Inter-server per-cluster secret for Distributed queries
             default: no secret (no authentication will be performed)

             If set, then Distributed queries will be validated on shards, so at least:
             - such cluster should exist on the shard,
             - such cluster should have the same secret.

             And also (and which is more important), the initial_user will
             be used as current user for the query.
        -->
        <!-- <secret></secret> -->
        <shard>
            <!-- Inter-server per-cluster secret for Distributed queries
                 default: no secret (no authentication will be performed)

                 If set, then Distributed queries will be validated on shards, so at least:
                 - such cluster should exist on the shard,
                 - such cluster should have the same secret.

                 And also (and which is more important), the initial_user will
                 be used as current user for the query.
            -->
            <!-- <secret></secret> -->

            <!-- Optional. Shard weight when writing data. Default: 1. -->
            <weight>1</weight>
            <!-- Optional. Whether to write data to just one of the replicas. Default: false (write data to all replicas). -->
@ -1,4 +1,4 @@

# LDAP {#external-authenticators-ldap}

LDAP server can be used to authenticate ClickHouse users. There are two different approaches for doing this:
@ -87,14 +87,13 @@ Note, that user `my_user` refers to `my_ldap_server`. This LDAP server must be configured.

When SQL-driven [Access Control and Account Management](../access-rights.md#access-control) is enabled in ClickHouse, users that are authenticated by LDAP servers can also be created using the [CREATE USER](../../sql-reference/statements/create/user.md#create-user-statement) statement.

```sql
CREATE USER my_user IDENTIFIED WITH ldap_server BY 'my_ldap_server'
CREATE USER my_user IDENTIFIED WITH ldap SERVER 'my_ldap_server'
```
## LDAP External User Directory {#ldap-external-user-directory}

In addition to the locally defined users, a remote LDAP server can be used as a source of user definitions. In order to achieve this, specify the previously defined LDAP server name (see [LDAP Server Definition](#ldap-server-definition)) in the `ldap` section inside the `users_directories` section of the `config.xml` file.
In addition to the locally defined users, a remote LDAP server can be used as a source of user definitions. In order to achieve this, specify the previously defined LDAP server name (see [LDAP Server Definition](#ldap-server-definition)) in an `ldap` section inside the `users_directories` section of the `config.xml` file.

At each login attempt, ClickHouse tries to find the user definition locally and authenticate it as usual. If the user is not defined, ClickHouse assumes it exists in the external LDAP directory and tries to "bind" to the specified DN at the LDAP server using the provided credentials. If successful, the user is considered existing and authenticated. The user is assigned roles from the list specified in the `roles` section. Additionally, an LDAP "search" can be performed, and the results can be transformed, treated as role names, and assigned to the user, if the `role_mapping` section is also configured. All this implies that SQL-driven [Access Control and Account Management](../access-rights.md#access-control) is enabled and roles are created using the [CREATE ROLE](../../sql-reference/statements/create/role.md#create-role-statement) statement.
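For example, the roles referenced in the `roles` section or produced by `role_mapping` must be created in advance; a hypothetical sketch (role and database names are illustrative):

```sql
CREATE ROLE ldap_reader;
GRANT SELECT ON my_database.* TO ldap_reader;
```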
@ -153,4 +152,3 @@ Parameters:

- `prefix` — prefix that is expected to be in front of each string in the original list of strings returned by the LDAP search. The prefix will be removed from the original strings, and the resulting strings will be treated as local role names. Empty by default.
@ -48,5 +48,6 @@ SELECT * FROM system.settings WHERE changed AND name='load_balancing'

- [Settings](../../operations/settings/index.md#session-settings-intro)
- [Permissions for Queries](../../operations/settings/permissions-for-queries.md#settings_readonly)
- [Constraints on Settings](../../operations/settings/constraints-on-settings.md)
- [SHOW SETTINGS](../../sql-reference/statements/show.md#show-settings) statement

[Original article](https://clickhouse.tech/docs/en/operations/system_tables/settings) <!--hide-->
@ -52,4 +52,5 @@ trace: [371912858,371912789,371798468,371799717,371801313,3717

size: 5244400
```

[Original article](https://clickhouse.tech/docs/en/operations/system_tables/trace_log) <!--hide-->
[Original article](https://clickhouse.tech/docs/en/operations/system-tables/trace_log) <!--hide-->
@ -449,13 +449,13 @@ Result:

└─────────────────────┴────────────────────────────────────────────┘
```

**See also**
**See Also**

- [toStartOfInterval](#tostartofintervaltime-or-data-interval-x-unit-time-zone)

## date\_add {#date_add}

Adds specified date/time interval to the provided date.
Adds the time interval or date interval to the provided date or date with time.

**Syntax**
@ -468,22 +468,36 @@ Aliases: `dateAdd`, `DATE_ADD`.

**Arguments**

- `unit` — The type of interval to add. [String](../../sql-reference/data-types/string.md).

    Supported values: second, minute, hour, day, week, month, quarter, year.
- `value` - Value in specified unit - [Int](../../sql-reference/data-types/int-uint.md)
- `date` — [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md).

- `unit` — The type of interval to add. [String](../../sql-reference/data-types/string.md).
    Possible values:

    - `second`
    - `minute`
    - `hour`
    - `day`
    - `week`
    - `month`
    - `quarter`
    - `year`

- `value` — Value of interval to add. [Int](../../sql-reference/data-types/int-uint.md).
- `date` — The date or date with time to which `value` is added. [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md).

**Returned value**

Returns Date or DateTime with `value` expressed in `unit` added to `date`.
Date or date with time obtained by adding `value`, expressed in `unit`, to `date`.

Type: [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md).
**Example**

Query:

```sql
select date_add(YEAR, 3, toDate('2018-01-01'));
SELECT date_add(YEAR, 3, toDate('2018-01-01'));
```

Result:

```text
┌─plus(toDate('2018-01-01'), toIntervalYear(3))─┐
│ 2021-01-01                                    │
@ -492,7 +506,7 @@ select date_add(YEAR, 3, toDate('2018-01-01'));

## date\_diff {#date_diff}

Returns the difference between two Date or DateTime values.
Returns the difference between two dates or dates with time values.

**Syntax**
@ -500,25 +514,33 @@ Returns the difference between two Date or DateTime values.

date_diff('unit', startdate, enddate, [timezone])
```

Aliases: `dateDiff`, `DATE_DIFF`.

**Arguments**

- `unit` — The type of interval for result [String](../../sql-reference/data-types/string.md).

    Supported values: second, minute, hour, day, week, month, quarter, year.
- `unit` — The type of interval for result. [String](../../sql-reference/data-types/string.md).
    Possible values:

    - `second`
    - `minute`
    - `hour`
    - `day`
    - `week`
    - `month`
    - `quarter`
    - `year`

- `startdate` — The first time value to subtract (the subtrahend). [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md).

- `enddate` — The second time value to subtract from (the minuend). [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md).

- `timezone` — Optional parameter. If specified, it is applied to both `startdate` and `enddate`. If not specified, timezones of `startdate` and `enddate` are used. If they are not the same, the result is unspecified.
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) (optional). If specified, it is applied to both `startdate` and `enddate`. If not specified, timezones of `startdate` and `enddate` are used. If they are not the same, the result is unspecified. [String](../../sql-reference/data-types/string.md).

**Returned value**

Difference between `enddate` and `startdate` expressed in `unit`.

Type: `int`.
Type: [Int](../../sql-reference/data-types/int-uint.md).

**Example**
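An illustrative query (the values are chosen so that the difference is 25 hours):

```sql
SELECT dateDiff('hour', toDateTime('2018-01-01 22:00:00'), toDateTime('2018-01-02 23:00:00'));
```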
@ -561,13 +583,13 @@ Aliases: `dateSub`, `DATE_SUB`.

    - `month`
    - `quarter`
    - `year`

- `value` — Value of interval to subtract. [Int](../../sql-reference/data-types/int-uint.md).
- `date` — The date or date with time from which `value` is subtracted. [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md).

**Returned value**

Returns the date or date with time obtained by subtracting `value`, expressed in `unit`, from `date`.
Date or date with time obtained by subtracting `value`, expressed in `unit`, from `date`.

Type: [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md).
@ -601,22 +623,36 @@ Aliases: `timeStampAdd`, `TIMESTAMP_ADD`.

**Arguments**

- `date` — Date or Date with time - [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md).
- `value` - Value in specified unit - [Int](../../sql-reference/data-types/int-uint.md)
- `date` — Date or date with time. [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md).
- `value` — Value of interval to add. [Int](../../sql-reference/data-types/int-uint.md).
- `unit` — The type of interval to add. [String](../../sql-reference/data-types/string.md).

    Supported values: second, minute, hour, day, week, month, quarter, year.
    Possible values:

    - `second`
    - `minute`
    - `hour`
    - `day`
    - `week`
    - `month`
    - `quarter`
    - `year`

**Returned value**

Returns Date or DateTime with the specified `value` expressed in `unit` added to `date`.
Date or date with time with the specified `value` expressed in `unit` added to `date`.

Type: [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md).
**Example**

Query:

```sql
select timestamp_add(toDate('2018-01-01'), INTERVAL 3 MONTH);
```

Result:

```text
┌─plus(toDate('2018-01-01'), toIntervalMonth(3))─┐
│ 2018-04-01                                     │
@ -625,7 +661,7 @@ select timestamp_add(toDate('2018-01-01'), INTERVAL 3 MONTH);

## timestamp\_sub {#timestamp_sub}

Returns the difference between two dates in the specified unit.
Subtracts the time interval from the provided date or date with time.

**Syntax**
@ -637,22 +673,37 @@ Aliases: `timeStampSub`, `TIMESTAMP_SUB`.

**Arguments**

- `unit` — The type of interval to add. [String](../../sql-reference/data-types/string.md).
- `unit` — The type of interval to subtract. [String](../../sql-reference/data-types/string.md).

    Supported values: second, minute, hour, day, week, month, quarter, year.
    Possible values:

    - `second`
    - `minute`
    - `hour`
    - `day`
    - `week`
    - `month`
    - `quarter`
    - `year`

- `value` - Value in specified unit - [Int](../../sql-reference/data-types/int-uint.md).
- `date`- [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md).
- `value` — Value of interval to subtract. [Int](../../sql-reference/data-types/int-uint.md).
- `date` — Date or date with time. [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md).

**Returned value**

Difference between `date` and the specified `value` expressed in `unit`.
Date or date with time obtained by subtracting `value`, expressed in `unit`, from `date`.

Type: [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md).

**Example**

Query:

```sql
select timestamp_sub(MONTH, 5, toDateTime('2018-12-18 01:02:03'));
```

Result:

```text
┌─minus(toDateTime('2018-12-18 01:02:03'), toIntervalMonth(5))─┐
│                                           2018-07-18 01:02:03 │
@ -415,7 +415,7 @@ Result:

## sign(x) {#signx}

The `sign` function can extract the sign of a real number.
Returns the sign of a real number.

**Syntax**
@ -433,9 +433,9 @@ sign(x)

- 0 for `x = 0`
- 1 for `x > 0`

**Example**
**Examples**

Query:
Sign for the zero value:

``` sql
SELECT sign(0);
@ -449,7 +449,7 @@ Result:

└─────────┘
```

Query:
Sign for the positive value:

``` sql
SELECT sign(1);
@ -463,7 +463,7 @@ Result:

└─────────┘
```

Query:
Sign for the negative value:

``` sql
SELECT sign(-1);
@ -12,10 +12,10 @@ Syntax:

``` sql
ALTER USER [IF EXISTS] name1 [ON CLUSTER cluster_name1] [RENAME TO new_name1]
    [, name2 [ON CLUSTER cluster_name2] [RENAME TO new_name2] ...]
    [IDENTIFIED [WITH {PLAINTEXT_PASSWORD|SHA256_PASSWORD|DOUBLE_SHA1_PASSWORD}] BY {'password'|'hash'}]
    [[ADD|DROP] HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] | ANY | NONE]
    [NOT IDENTIFIED | IDENTIFIED {[WITH {no_password | plaintext_password | sha256_password | sha256_hash | double_sha1_password | double_sha1_hash}] BY {'password' | 'hash'}} | {WITH ldap SERVER 'server_name'} | {WITH kerberos [REALM 'realm']}]
    [[ADD | DROP] HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] | ANY | NONE]
    [DEFAULT ROLE role [,...] | ALL | ALL EXCEPT role [,...] ]
    [SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY|WRITABLE] | PROFILE 'profile_name'] [,...]
    [SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY | WRITABLE] | PROFILE 'profile_name'] [,...]
```

To use `ALTER USER` you must have the [ALTER USER](../../../sql-reference/statements/grant.md#grant-access-management) privilege.
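For example, to set a different default role set for an existing user (the user and role names are illustrative):

```sql
ALTER USER john DEFAULT ROLE role1, role2;
```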
@ -12,10 +12,10 @@ Syntax:

``` sql
CREATE USER [IF NOT EXISTS | OR REPLACE] name1 [ON CLUSTER cluster_name1]
    [, name2 [ON CLUSTER cluster_name2] ...]
    [IDENTIFIED [WITH {NO_PASSWORD|PLAINTEXT_PASSWORD|SHA256_PASSWORD|SHA256_HASH|DOUBLE_SHA1_PASSWORD|DOUBLE_SHA1_HASH|LDAP_SERVER}] BY {'password'|'hash'}]
    [NOT IDENTIFIED | IDENTIFIED {[WITH {no_password | plaintext_password | sha256_password | sha256_hash | double_sha1_password | double_sha1_hash}] BY {'password' | 'hash'}} | {WITH ldap SERVER 'server_name'} | {WITH kerberos [REALM 'realm']}]
    [HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] | ANY | NONE]
    [DEFAULT ROLE role [,...]]
    [SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY|WRITABLE] | PROFILE 'profile_name'] [,...]
    [SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY | WRITABLE] | PROFILE 'profile_name'] [,...]
```

`ON CLUSTER` clause allows creating users on a cluster, see [Distributed DDL](../../../sql-reference/distributed-ddl.md).
@ -30,7 +30,8 @@ There are multiple ways of user identification:

- `IDENTIFIED WITH sha256_hash BY 'hash'`
- `IDENTIFIED WITH double_sha1_password BY 'qwerty'`
- `IDENTIFIED WITH double_sha1_hash BY 'hash'`
- `IDENTIFIED WITH ldap_server BY 'server'`
- `IDENTIFIED WITH ldap SERVER 'server_name'`
- `IDENTIFIED WITH kerberos` or `IDENTIFIED WITH kerberos REALM 'realm'`
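For example (the user name is illustrative):

```sql
CREATE USER mira IDENTIFIED WITH sha256_password BY 'qwerty';
```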
## User Host {#user-host}

@ -428,4 +428,69 @@ errors_count: 0

estimated_recovery_time: 0
```

[Original article](https://clickhouse.tech/docs/en/query_language/show/) <!--hide-->
## SHOW SETTINGS {#show-settings}

Returns a list of system settings and their values. Selects data from the [system.settings](../../operations/system-tables/settings.md) table.

**Syntax**

```sql
SHOW [CHANGED] SETTINGS LIKE|ILIKE <name>
```

**Clauses**

`LIKE|ILIKE` allows specifying a matching pattern for the setting name. It can contain globs such as `%` or `_`. The `LIKE` clause is case-sensitive, `ILIKE` — case-insensitive.

When the `CHANGED` clause is used, the query returns only settings changed from their default values.

**Examples**

Query with the `LIKE` clause:

```sql
SHOW SETTINGS LIKE 'send_timeout';
```

Result:

```text
┌─name─────────┬─type────┬─value─┐
│ send_timeout │ Seconds │ 300   │
└──────────────┴─────────┴───────┘
```

Query with the `ILIKE` clause:

```sql
SHOW SETTINGS ILIKE '%CONNECT_timeout%'
```

Result:

```text
┌─name────────────────────────────────────┬─type─────────┬─value─┐
│ connect_timeout                         │ Seconds      │ 10    │
│ connect_timeout_with_failover_ms        │ Milliseconds │ 50    │
│ connect_timeout_with_failover_secure_ms │ Milliseconds │ 100   │
└─────────────────────────────────────────┴──────────────┴───────┘
```

Query with the `CHANGED` clause:

```sql
SHOW CHANGED SETTINGS ILIKE '%MEMORY%'
```

Result:

```text
┌─name─────────────┬─type───┬─value───────┐
│ max_memory_usage │ UInt64 │ 10000000000 │
└──────────────────┴────────┴─────────────┘
```

**See Also**

- [system.settings](../../operations/system-tables/settings.md) table

[Original article](https://clickhouse.tech/docs/en/sql-reference/statements/show/) <!--hide-->
@ -20,4 +20,16 @@ toc_title: "\u30AF\u30E9\u30A6\u30C9"

- 暗号化と分離
- 自動メンテナンス

## Alibaba Cloud {#alibaba-cloud}

ClickHouseのためのAlibaba Cloudの管理サービス [中国サイト](https://www.aliyun.com/product/clickhouse) (2021年5月に国際サイトで利用可能になります) 次の主な機能を提供します:

- Alibaba Cloud Apsara分散システムをベースにした信頼性の高いクラウドディスクストレージエンジン
- 手動でのデータ移行を必要とせずに、オン・デマンドで容量を拡張
- シングル・ノード、シングル・レプリカ、マルチ・ノード、マルチ・レプリカ・アーキテクチャをサポートし、ホット・データとコールド・データの階層化をサポート
- アクセスホワイトリスト、OneKey Recovery、マルチレイヤーネットワークセキュリティ保護、クラウドディスク暗号化をサポート
- クラウドログシステム、データベース、およびデータアプリケーションツールとのシームレスな統合
- 組み込み型の監視およびデータベース管理プラットフォーム
- プロフェッショナルデータベースエキスパートによるテクニカル・サポートとサービス

{## [元の記事](https://clickhouse.tech/docs/en/commercial/cloud/) ##}
@ -1,6 +1,6 @@

---
toc_priority: 1
toc_title: "\u041f\u043e\u0441\u0442\u0430\u0432\u0449\u0438\u043a\u0438\u0020\u043e\u0431\u043b\u0430\u0447\u043d\u044b\u0445\u0020\u0443\u0441\u043b\u0443\u0433\u0020\u0043\u006c\u0069\u0063\u006b\u0048\u006f\u0075\u0073\u0065"
toc_title: "Поставщики облачных услуг ClickHouse"
---

# Поставщики облачных услуг ClickHouse {#clickhouse-cloud-service-providers}

@ -1,9 +1,7 @@

---
toc_folder_title: "\u041A\u043E\u043C\u043C\u0435\u0440\u0447\u0435\u0441\u043A\u0438\
\u0435 \u0443\u0441\u043B\u0443\u0433\u0438"
toc_folder_title: "Коммерческие услуги"
toc_priority: 70
toc_title: "\u041A\u043E\u043C\u043C\u0435\u0440\u0447\u0435\u0441\u043A\u0438\u0435\
\ \u0443\u0441\u043B\u0443\u0433\u0438"
toc_title: "Коммерческие услуги"
---

# Коммерческие услуги {#clickhouse-commercial-services}

@ -1,6 +1,6 @@

---
toc_priority: 62
toc_title: "\u041e\u0431\u0437\u043e\u0440\u0020\u0430\u0440\u0445\u0438\u0442\u0435\u043a\u0442\u0443\u0440\u044b\u0020\u0043\u006c\u0069\u0063\u006b\u0048\u006f\u0075\u0073\u0065"
toc_title: "Обзор архитектуры ClickHouse"
---

# Обзор архитектуры ClickHouse {#overview-of-clickhouse-architecture}

@ -1,6 +1,6 @@

---
toc_priority: 71
toc_title: "\u041d\u0430\u0432\u0438\u0433\u0430\u0446\u0438\u044f\u0020\u043f\u043e\u0020\u043a\u043e\u0434\u0443\u0020\u0043\u006c\u0069\u0063\u006b\u0048\u006f\u0075\u0073\u0065"
toc_title: "Навигация по коду ClickHouse"
---

@ -1,6 +1,6 @@

---
toc_priority: 70
toc_title: "\u0418\u0441\u043f\u043e\u043b\u044c\u0437\u0443\u0435\u043c\u044b\u0435\u0020\u0441\u0442\u043e\u0440\u043e\u043d\u043d\u0438\u0435\u0020\u0431\u0438\u0431\u043b\u0438\u043e\u0442\u0435\u043a\u0438"
toc_title: "Используемые сторонние библиотеки"
---

@ -1,6 +1,6 @@

---
toc_priority: 61
toc_title: "\u0418\u043d\u0441\u0442\u0440\u0443\u043a\u0446\u0438\u044f\u0020\u0434\u043b\u044f\u0020\u0440\u0430\u0437\u0440\u0430\u0431\u043e\u0442\u0447\u0438\u043a\u043e\u0432"
toc_title: "Инструкция для разработчиков"
---

# Инструкция для разработчиков

@ -1,6 +1,6 @@

---
toc_priority: 68
toc_title: "\u041a\u0430\u043a\u0020\u043f\u0438\u0441\u0430\u0442\u044c\u0020\u043a\u043e\u0434\u0020\u043d\u0430\u0020\u0043\u002b\u002b"
toc_title: "Как писать код на C++"
---

@ -1,7 +1,7 @@

---
toc_folder_title: "\u0414\u0432\u0438\u0436\u043a\u0438\u0020\u0431\u0430\u0437\u0020\u0434\u0430\u043d\u043d\u044b\u0445"
toc_folder_title: "Движки баз данных"
toc_priority: 27
toc_title: "\u0412\u0432\u0435\u0434\u0435\u043d\u0438\u0435"
toc_title: "Введение"
---

# Движки баз данных {#dvizhki-baz-dannykh}

@ -1,5 +1,5 @@

---
toc_folder_title: "\u0045\u006e\u0067\u0069\u006e\u0065\u0073"
toc_folder_title: "Engines"
toc_hidden: true
toc_priority: 25
toc_title: hidden

@ -1,7 +1,7 @@

---
toc_folder_title: "\u0414\u0432\u0438\u0436\u043a\u0438\u0020\u0442\u0430\u0431\u043b\u0438\u0446"
toc_folder_title: "Движки таблиц"
toc_priority: 26
toc_title: "\u0412\u0432\u0435\u0434\u0435\u043d\u0438\u0435"
toc_title: "Введение"
---

@ -1,5 +1,5 @@

---
toc_folder_title: "\u0414\u0432\u0438\u0436\u043a\u0438\u0020\u0442\u0430\u0431\u043b\u0438\u0446\u0020\u0434\u043b\u044f\u0020\u0438\u043d\u0442\u0435\u0433\u0440\u0430\u0446\u0438\u0438"
toc_folder_title: "Движки таблиц для интеграции"
toc_priority: 30
---

@ -1,6 +1,6 @@

---
toc_folder_title: "\u0421\u0435\u043c\u0435\u0439\u0441\u0442\u0432\u043e\u0020\u004c\u006f\u0067"
toc_title: "\u0412\u0432\u0435\u0434\u0435\u043d\u0438\u0435"
toc_folder_title: "Семейство Log"
toc_title: "Введение"
toc_priority: 29
---

@ -1,6 +1,6 @@

---
toc_priority: 32
toc_title: "\u041f\u0440\u043e\u0438\u0437\u0432\u043e\u043b\u044c\u043d\u044b\u0439\u0020\u043a\u043b\u044e\u0447\u0020\u043f\u0430\u0440\u0442\u0438\u0446\u0438\u043e\u043d\u0438\u0440\u043e\u0432\u0430\u043d\u0438\u044f"
toc_title: "Произвольный ключ партиционирования"
---

@ -1,5 +1,5 @@

---
toc_folder_title: MergeTree Family
toc_priority: 28
toc_title: "\u0412\u0432\u0435\u0434\u0435\u043d\u0438\u0435"
toc_title: "Введение"
---
@ -56,13 +56,13 @@ ORDER BY expr

ClickHouse использует ключ сортировки в качестве первичного ключа, если первичный ключ не задан в секции `PRIMARY KEY`.

Чтобы отключить сортировку, используйте синтаксис `ORDER BY tuple()`. Смотрите [выбор первичного ключа](#vybor-pervichnogo-kliucha).
Чтобы отключить сортировку, используйте синтаксис `ORDER BY tuple()`. Смотрите [выбор первичного ключа](#primary-keys-and-indexes-in-queries).

- `PARTITION BY` — [ключ партиционирования](custom-partitioning-key.md). Необязательный параметр.

    Для партиционирования по месяцам используйте выражение `toYYYYMM(date_column)`, где `date_column` — столбец с датой типа [Date](../../../engines/table-engines/mergetree-family/mergetree.md). В этом случае имена партиций имеют формат `"YYYYMM"`.

- `PRIMARY KEY` — первичный ключ, если он [отличается от ключа сортировки](#pervichnyi-kliuch-otlichnyi-ot-kliucha-sortirovki). Необязательный параметр.
- `PRIMARY KEY` — первичный ключ, если он [отличается от ключа сортировки](#choosing-a-primary-key-that-differs-from-the-sorting-key). Необязательный параметр.

    По умолчанию первичный ключ совпадает с ключом сортировки (который задаётся секцией `ORDER BY`.) Поэтому в большинстве случаев секцию `PRIMARY KEY` отдельно указывать не нужно.

@ -188,7 +188,7 @@ ClickHouse не требует уникального первичного кл

При сортировке с использованием выражения `ORDER BY` для значений `NULL` всегда работает принцип [NULLS_LAST](../../../sql-reference/statements/select/order-by.md#sorting-of-special-values).

### Выбор первичного ключа {#vybor-pervichnogo-kliucha}
### Выбор первичного ключа {#selecting-the-primary-key}

Количество столбцов в первичном ключе не ограничено явным образом. В зависимости от структуры данных в первичный ключ можно включать больше или меньше столбцов. Это может:

@ -217,7 +217,7 @@ ClickHouse не требует уникального первичного кл

### Первичный ключ, отличный от ключа сортировки {#pervichnyi-kliuch-otlichnyi-ot-kliucha-sortirovki}
### Первичный ключ, отличный от ключа сортировки {#choosing-a-primary-key-that-differs-from-the-sorting-key}

Существует возможность задать первичный ключ (выражение, значения которого будут записаны в индексный файл для каждой засечки), отличный от ключа сортировки (выражение, по которому будут упорядочены строки в кусках

@ -236,7 +236,7 @@ ClickHouse не требует уникального первичного кл

[ALTER ключа сортировки](../../../engines/table-engines/mergetree-family/mergetree.md) — лёгкая операция, так как при одновременном добавлении нового столбца в таблицу и ключ сортировки не нужно изменять данные кусков (они остаются упорядоченными и по новому выражению ключа).

### Использование индексов и партиций в запросах {#ispolzovanie-indeksov-i-partitsii-v-zaprosakh}
### Использование индексов и партиций в запросах {#use-of-indexes-and-partitions-in-queries}

Для запросов `SELECT` ClickHouse анализирует возможность использования индекса. Индекс может использоваться, если в секции `WHERE/PREWHERE`, в качестве одного из элементов конъюнкции, или целиком, есть выражение, представляющее операции сравнения на равенства, неравенства, а также `IN` или `LIKE` с фиксированным префиксом, над столбцами или выражениями, входящими в первичный ключ или ключ партиционирования, либо над некоторыми частично монотонными функциями от этих столбцов, а также логические связки над такими выражениями.

@ -270,7 +270,7 @@ SELECT count() FROM table WHERE CounterID = 34 OR URL LIKE '%upyachka%'

Ключ партиционирования по месяцам обеспечивает чтение только тех блоков данных, которые содержат даты из нужного диапазона. При этом блок данных может содержать данные за многие даты (до целого месяца). В пределах одного блока данные упорядочены по первичному ключу, который может не содержать дату в качестве первого столбца. В связи с этим, при использовании запроса с указанием условия только на дату, но не на префикс первичного ключа, будет читаться данных больше, чем за одну дату.

### Использование индекса для частично-монотонных первичных ключей {#ispolzovanie-indeksa-dlia-chastichno-monotonnykh-pervichnykh-kliuchei}
### Использование индекса для частично-монотонных первичных ключей {#use-of-index-for-partially-monotonic-primary-keys}

Рассмотрим, например, дни месяца. Они образуют последовательность [монотонную](https://ru.wikipedia.org/wiki/Монотонная_последовательность) в течение одного месяца, но не монотонную на более длительных периодах. Это частично-монотонная последовательность. Если пользователь создаёт таблицу с частично-монотонным первичным ключом, ClickHouse как обычно создаёт разреженный индекс. Когда пользователь выбирает данные из такого рода таблиц, ClickHouse анализирует условия запроса. Если пользователь хочет получить данные между двумя метками индекса, и обе эти метки находятся внутри одного месяца, ClickHouse может использовать индекс в данном конкретном случае, поскольку он может рассчитать расстояние между параметрами запроса и индексными метками.

@ -312,7 +312,7 @@ SELECT count() FROM table WHERE s < 'z'

SELECT count() FROM table WHERE u64 * i32 == 10 AND u64 * length(s) >= 1234
```

#### Доступные индексы {#dostupnye-indeksy}
#### Доступные индексы {#available-types-of-indices}

- `minmax` — Хранит минимум и максимум выражения (если выражение - `tuple`, то для каждого элемента `tuple`), используя их для пропуска блоков аналогично первичному ключу.

@ -375,7 +375,7 @@ INDEX b (u64 * length(str), i32 + f64 * 100, date, str) TYPE set(100) GRANULARIT

- `s != 1`
- `NOT startsWith(s, 'test')`

## Конкурентный доступ к данным {#konkurentnyi-dostup-k-dannym}
## Конкурентный доступ к данным {#concurrent-data-access}

Для конкурентного доступа к таблице используется мультиверсионность. То есть, при одновременном чтении и обновлении таблицы, данные будут читаться из набора кусочков, актуального на момент запроса. Длинных блокировок нет. Вставки никак не мешают чтениям.

@ -531,13 +531,13 @@ TTL d + INTERVAL 1 MONTH GROUP BY k1, k2 SET x = max(x), y = min(y);

## Хранение данных таблицы на нескольких блочных устройствах {#table_engine-mergetree-multiple-volumes}

### Введение {#vvedenie}
### Введение {#introduction}

Движки таблиц семейства `MergeTree` могут хранить данные на нескольких блочных устройствах. Это может оказаться полезным, например, при неявном разделении данных одной таблицы на «горячие» и «холодные». Наиболее свежая часть занимает малый объём и запрашивается регулярно, а большой хвост исторических данных запрашивается редко. При наличии в системе нескольких дисков, «горячая» часть данных может быть размещена на быстрых дисках (например, на NVMe SSD или в памяти), а холодная на более медленных (например, HDD).

Минимальной перемещаемой единицей для `MergeTree` является кусок данных (data part). Данные одного куска могут находится только на одном диске. Куски могут перемещаться между дисками в фоне, согласно пользовательским настройкам, а также с помощью запросов [ALTER](../../../engines/table-engines/mergetree-family/mergetree.md#alter_move-partition).

### Термины {#terminy}
### Термины {#terms}

- Диск — примонтированное в файловой системе блочное устройство.
- Диск по умолчанию — диск, на котором находится путь, указанный в конфигурационной настройке сервера [path](../../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-path).

@ -689,7 +689,7 @@ SETTINGS storage_policy = 'moving_from_ssd_to_hdd'

Количество потоков для фоновых перемещений кусков между дисками можно изменить с помощью настройки [background_move_pool_size](../../../operations/settings/settings.md#background_move_pool_size)

### Особенности работы {#osobennosti-raboty}
### Особенности работы {#details}

В таблицах `MergeTree` данные попадают на диск несколькими способами:
@ -712,4 +712,99 @@ SETTINGS storage_policy = 'moving_from_ssd_to_hdd'

После выполнения фоновых слияний или мутаций старые куски не удаляются сразу, а через некоторое время (табличная настройка `old_parts_lifetime`). Также они не перемещаются на другие тома или диски, поэтому до момента удаления они продолжают учитываться при подсчёте занятого дискового пространства.

## Использование сервиса S3 для хранения данных {#table_engine-mergetree-s3}

Таблицы семейства `MergeTree` могут хранить данные в сервисе [S3](https://aws.amazon.com/s3/) при использовании диска типа `s3`.

Конфигурация:

``` xml
<storage_configuration>
    ...
    <disks>
        <s3>
            <type>s3</type>
            <endpoint>https://storage.yandexcloud.net/my-bucket/root-path/</endpoint>
            <access_key_id>your_access_key_id</access_key_id>
            <secret_access_key>your_secret_access_key</secret_access_key>
            <proxy>
                <uri>http://proxy1</uri>
                <uri>http://proxy2</uri>
            </proxy>
            <connect_timeout_ms>10000</connect_timeout_ms>
            <request_timeout_ms>5000</request_timeout_ms>
            <max_connections>100</max_connections>
            <retry_attempts>10</retry_attempts>
            <min_bytes_for_seek>1000</min_bytes_for_seek>
            <metadata_path>/var/lib/clickhouse/disks/s3/</metadata_path>
            <cache_enabled>true</cache_enabled>
            <cache_path>/var/lib/clickhouse/disks/s3/cache/</cache_path>
            <skip_access_check>false</skip_access_check>
        </s3>
    </disks>
    ...
</storage_configuration>
```

Обязательные параметры:

- `endpoint` — URL точки приема запроса на стороне S3 в [форматах](https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html) `path` или `virtual hosted`. URL точки должен содержать бакет и путь к корневой директории на сервере, где хранятся данные.
- `access_key_id` — id ключа доступа к S3.
- `secret_access_key` — секретный ключ доступа к S3.

Необязательные параметры:

- `use_environment_credentials` — признак, нужно ли считывать учетные данные AWS из переменных окружения `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` и `AWS_SESSION_TOKEN`, если они есть. Значение по умолчанию: `false`.
- `proxy` — конфигурация прокси-сервера для конечной точки S3. Каждый элемент `uri` внутри блока `proxy` должен содержать URL прокси-сервера.
- `connect_timeout_ms` — таймаут подключения к сокету в миллисекундах. Значение по умолчанию: 10 секунд.
- `request_timeout_ms` — таймаут выполнения запроса в миллисекундах. Значение по умолчанию: 5 секунд.
- `max_connections` — размер пула соединений S3. Значение по умолчанию: `100`.
- `retry_attempts` — число попыток выполнения запроса в случае возникновения ошибки. Значение по умолчанию: `10`.
- `min_bytes_for_seek` — минимальное количество байтов, которые используются для операций поиска вместо последовательного чтения. Значение по умолчанию: 1 МБайт.
- `metadata_path` — путь к локальному файловому хранилищу для хранения файлов с метаданными для S3. Значение по умолчанию: `/var/lib/clickhouse/disks/<disk_name>/`.
- `cache_enabled` — признак, разрешено ли хранение кэша засечек и файлов индекса в локальной файловой системе. Значение по умолчанию: `true`.
- `cache_path` — путь в локальной файловой системе, где будут храниться кэш засечек и файлы индекса. Значение по умолчанию: `/var/lib/clickhouse/disks/<disk_name>/cache/`.
- `skip_access_check` — признак, выполнять ли проверку доступов при запуске диска. Если установлено значение `true`, то проверка не выполняется. Значение по умолчанию: `false`.

Диск S3 может быть сконфигурирован как `main` или `cold`:

``` xml
<storage_configuration>
    ...
    <disks>
        <s3>
            <type>s3</type>
            <endpoint>https://storage.yandexcloud.net/my-bucket/root-path/</endpoint>
            <access_key_id>your_access_key_id</access_key_id>
            <secret_access_key>your_secret_access_key</secret_access_key>
        </s3>
    </disks>
    <policies>
        <s3_main>
            <volumes>
                <main>
                    <disk>s3</disk>
                </main>
            </volumes>
        </s3_main>
        <s3_cold>
            <volumes>
                <main>
                    <disk>default</disk>
                </main>
                <external>
                    <disk>s3</disk>
                </external>
            </volumes>
            <move_factor>0.2</move_factor>
        </s3_cold>
    </policies>
    ...
</storage_configuration>
```

Если диск сконфигурирован как `cold`, данные будут переноситься в S3 при срабатывании правил TTL или когда свободное место на локальном диске станет меньше порогового значения, которое определяется как `move_factor * disk_size`.

[Оригинальная статья](https://clickhouse.tech/docs/ru/engines/table-engines/mergetree-family/mergetree/) <!--hide-->
@ -1,6 +1,6 @@

---
toc_priority: 31
toc_title: "\u0420\u0435\u043f\u043b\u0438\u043a\u0430\u0446\u0438\u044f\u0020\u0434\u0430\u043d\u043d\u044b\u0445"
toc_title: "Репликация данных"
---

# Репликация данных {#table_engines-replication}

@ -1,6 +1,6 @@

---
toc_priority: 45
toc_title: "\u0412\u043d\u0435\u0448\u043d\u0438\u0435\u0020\u0434\u0430\u043d\u043d\u044b\u0435\u0020\u0434\u043b\u044f\u0020\u043e\u0431\u0440\u0430\u0431\u043e\u0442\u043a\u0438\u0020\u0437\u0430\u043f\u0440\u043e\u0441\u0430"
toc_title: "Внешние данные для обработки запроса"
---

# Внешние данные для обработки запроса {#vneshnie-dannye-dlia-obrabotki-zaprosa}

@ -1,5 +1,5 @@

---
toc_folder_title: "\u0421\u043f\u0435\u0446\u0438\u0430\u043b\u044c\u043d\u044b\u0435\u0020\u0434\u0432\u0438\u0436\u043a\u0438\u0020\u0442\u0430\u0431\u043b\u0438\u0446"
toc_folder_title: "Специальные движки таблиц"
toc_priority: 31
---

@ -1,6 +1,5 @@

---
title: "What does \u201C\u043D\u0435 \u0442\u043E\u0440\u043C\u043E\u0437\u0438\u0442\
\u201D mean?"
title: "What does “не тормозит” mean?"
toc_hidden: true
toc_priority: 11
---

@ -1,6 +1,6 @@

---
toc_priority: 18
toc_title: "\u0422\u0435\u0440\u0430\u0431\u0430\u0439\u0442\u0020\u043b\u043e\u0433\u043e\u0432\u0020\u043a\u043b\u0438\u043a\u043e\u0432\u0020\u043e\u0442\u0020\u0043\u0072\u0069\u0074\u0065\u006f"
toc_title: "Терабайт логов кликов от Criteo"
---

# Терабайт логов кликов от Criteo {#terabait-logov-klikov-ot-criteo}

@ -1,7 +1,7 @@

---
toc_folder_title: "\u0422\u0435\u0441\u0442\u043e\u0432\u044b\u0435\u0020\u043c\u0430\u0441\u0441\u0438\u0432\u044b\u0020\u0434\u0430\u043d\u043d\u044b\u0445"
toc_folder_title: "Тестовые массивы данных"
toc_priority: 14
toc_title: "\u0412\u0432\u0435\u0434\u0435\u043d\u0438\u0435"
toc_title: "Введение"
---

# Тестовые массивы данных {#testovye-massivy-dannykh}

@ -1,6 +1,6 @@

---
toc_priority: 15
toc_title: "\u0410\u043d\u043e\u043d\u0438\u043c\u0438\u0437\u0438\u0440\u043e\u0432\u0430\u043d\u043d\u044b\u0435\u0020\u0434\u0430\u043d\u043d\u044b\u0435\u0020\u042f\u043d\u0434\u0435\u043a\u0441\u002e\u041c\u0435\u0442\u0440\u0438\u043a\u0438"
toc_title: "Анонимизированные данные Яндекс.Метрики"
---

# Анонимизированные данные Яндекс.Метрики {#anonimizirovannye-dannye-iandeks-metriki}

@ -1,6 +1,6 @@

---
toc_priority: 20
toc_title: "\u0414\u0430\u043d\u043d\u044b\u0435\u0020\u043e\u0020\u0442\u0430\u043a\u0441\u0438\u0020\u0432\u0020\u041d\u044c\u044e\u002d\u0419\u043e\u0440\u043a\u0435"
toc_title: "Данные о такси в Нью-Йорке"
---

# Данные о такси в Нью-Йорке {#dannye-o-taksi-v-niu-iorke}

@ -1,5 +1,5 @@

---
toc_folder_title: "\u041d\u0430\u0447\u0430\u043b\u043e\u0020\u0440\u0430\u0431\u043e\u0442\u044b"
toc_folder_title: "Начало работы"
toc_hidden: true
toc_priority: 8
toc_title: hidden
@ -1,6 +1,6 @@

---
toc_priority: 11
toc_title: "\u0423\u0441\u0442\u0430\u043d\u043e\u0432\u043a\u0430"
toc_title: "Установка"
---

# Установка {#ustanovka}

@ -1,6 +1,6 @@

---
toc_priority: 41
toc_title: "\u041f\u0440\u0438\u043c\u0435\u043d\u0435\u043d\u0438\u0435\u0020\u043c\u043e\u0434\u0435\u043b\u0438\u0020\u0043\u0061\u0074\u0042\u006f\u006f\u0073\u0074\u0020\u0432\u0020\u0043\u006c\u0069\u0063\u006b\u0048\u006f\u0075\u0073\u0435"
toc_title: "Применение модели CatBoost в ClickHouse"
---

# Применение модели CatBoost в ClickHouse {#applying-catboost-model-in-clickhouse}

@ -1,7 +1,7 @@

---
toc_folder_title: "\u0420\u0443\u043A\u043E\u0432\u043E\u0434\u0441\u0442\u0432\u0430"
toc_folder_title: "Руководства"
toc_priority: 38
toc_title: "\u041E\u0431\u0437\u043E\u0440"
toc_title: "Обзор"
---

# Руководства {#rukovodstva}

@ -1,6 +1,6 @@

---
toc_priority: 0
toc_title: "\u041E\u0431\u0437\u043E\u0440"
toc_title: "Обзор"
---

# Что такое ClickHouse {#what-is-clickhouse}

@ -1,6 +1,6 @@

---
toc_priority: 17
toc_title: "\u041a\u043b\u0438\u0435\u043d\u0442\u0020\u043a\u043e\u043c\u0430\u043d\u0434\u043d\u043e\u0439\u0020\u0441\u0442\u0440\u043e\u043a\u0438"
toc_title: "Клиент командной строки"
---

# Клиент командной строки {#klient-komandnoi-stroki}

@ -1,6 +1,6 @@

---
toc_priority: 24
toc_title: "\u0043\u002b\u002b\u0020\u043a\u043b\u0438\u0435\u043d\u0442\u0441\u043a\u0430\u044f\u0020\u0431\u0438\u0431\u043b\u0438\u043e\u0442\u0435\u043a\u0430"
toc_title: "C++ клиентская библиотека"
---

# C++ клиентская библиотека {#c-klientskaia-biblioteka}

@ -1,6 +1,6 @@

---
toc_priority: 21
toc_title: "\u0424\u043e\u0440\u043c\u0430\u0442\u044b\u0020\u0432\u0445\u043e\u0434\u043d\u044b\u0445\u0020\u0438\u0020\u0432\u044b\u0445\u043e\u0434\u043d\u044b\u0445\u0020\u0434\u0430\u043d\u043d\u044b\u0445"
toc_title: "Форматы входных и выходных данных"
---

# Форматы входных и выходных данных {#formats}

@ -1,6 +1,6 @@

---
toc_priority: 19
toc_title: "\u0048\u0054\u0054\u0050\u002d\u0438\u043d\u0442\u0435\u0440\u0444\u0435\u0439\u0441"
toc_title: "HTTP-интерфейс"
---

# HTTP-интерфейс {#http-interface}

@ -1,7 +1,7 @@

---
toc_folder_title: "\u0418\u043D\u0442\u0435\u0440\u0444\u0435\u0439\u0441\u044B"
toc_folder_title: "Интерфейсы"
toc_priority: 14
toc_title: "\u0412\u0432\u0435\u0434\u0435\u043D\u0438\u0435"
toc_title: "Введение"
---

# Интерфейсы {#interfaces}

@ -1,6 +1,6 @@

---
toc_priority: 22
toc_title: "\u004a\u0044\u0042\u0043\u002d\u0434\u0440\u0430\u0439\u0432\u0435\u0440"
toc_title: "JDBC-драйвер"
---

# JDBC-драйвер {#jdbc-draiver}

@ -1,6 +1,6 @@

---
toc_priority: 20
toc_title: "\u004d\u0079\u0053\u0051\u004c\u002d\u0438\u043d\u0442\u0435\u0440\u0444\u0435\u0439\u0441"
toc_title: "MySQL-интерфейс"
---

# MySQL-интерфейс {#mysql-interface}

@ -1,6 +1,6 @@

---
toc_priority: 23
toc_title: "\u004f\u0044\u0042\u0043\u002d\u0434\u0440\u0430\u0439\u0432\u0435\u0440"
toc_title: "ODBC-драйвер"
---

@ -1,6 +1,6 @@

---
toc_priority: 18
toc_title: "\u0420\u043e\u0434\u043d\u043e\u0439\u0020\u0438\u043d\u0442\u0435\u0440\u0444\u0435\u0439\u0441\u0020\u0028\u0054\u0043\u0050\u0029"
toc_title: "Родной интерфейс (TCP)"
---

# Родной интерфейс (TCP) {#rodnoi-interfeis-tcp}

@ -1,6 +1,6 @@

---
toc_priority: 26
toc_title: "\u041a\u043b\u0438\u0435\u043d\u0442\u0441\u043a\u0438\u0435\u0020\u0431\u0438\u0431\u043b\u0438\u043e\u0442\u0435\u043a\u0438\u0020\u043e\u0442\u0020\u0441\u0442\u043e\u0440\u043e\u043d\u043d\u0438\u0445\u0020\u0440\u0430\u0437\u0440\u0430\u0431\u043e\u0442\u0447\u0438\u043a\u043e\u0432"
toc_title: "Клиентские библиотеки от сторонних разработчиков"
---

# Клиентские библиотеки от сторонних разработчиков {#klientskie-biblioteki-ot-storonnikh-razrabotchikov}
2	docs/ru/interfaces/third-party/gui.md	vendored
@ -1,6 +1,6 @@

---
toc_priority: 28
toc_title: "\u0412\u0438\u0437\u0443\u0430\u043b\u044c\u043d\u044b\u0435\u0020\u0438\u043d\u0442\u0435\u0440\u0444\u0435\u0439\u0441\u044b\u0020\u043e\u0442\u0020\u0441\u0442\u043e\u0440\u043e\u043d\u043d\u0438\u0445\u0020\u0440\u0430\u0437\u0440\u0430\u0431\u043e\u0442\u0447\u0438\u043a\u043e\u0432"
toc_title: "Визуальные интерфейсы от сторонних разработчиков"
---
2	docs/ru/interfaces/third-party/index.md	vendored
@ -1,5 +1,5 @@

---
toc_folder_title: "\u0421\u0442\u043e\u0440\u043e\u043d\u043d\u0438\u0435\u0020\u0438\u043d\u0442\u0435\u0440\u0444\u0435\u0439\u0441\u044b"
toc_folder_title: "Сторонние интерфейсы"
toc_priority: 24
---
Some files were not shown because too many files have changed in this diff.