Merge branch 'master' of github.com:yandex/ClickHouse into json_each_row
commit c5b0b6ba29

.gitignore (vendored), 6 lines changed:

@@ -13,12 +13,14 @@
/build_*
/build-*
/docs/build
/docs/publish
/docs/edit
/docs/tools/venv/
/docs/en/development/build/
/docs/ru/development/build/
/docs/en/single.md
/docs/ru/single.md
/docs/zh/single.md
/docs/ja/single.md
/docs/fa/single.md

# callgrind files
callgrind.out.*
.gitmodules (vendored), 4 lines changed:

@@ -140,3 +140,7 @@
[submodule "contrib/ryu"]
	path = contrib/ryu
	url = https://github.com/ClickHouse-Extras/ryu.git
[submodule "contrib/avro"]
	path = contrib/avro
	url = https://github.com/ClickHouse-Extras/avro.git
	ignore = untracked
CHANGELOG.md, 299 lines changed:

@@ -1,3 +1,302 @@
## ClickHouse release v20.1
### ClickHouse release v20.1.2.4, 2020-01-22
### Backward Incompatible Change
* Make the setting `merge_tree_uniform_read_distribution` obsolete. The server still recognizes this setting but it has no effect. [#8308](https://github.com/ClickHouse/ClickHouse/pull/8308) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Changed return type of the function `greatCircleDistance` to `Float32` because now the result of calculation is `Float32`. [#7993](https://github.com/ClickHouse/ClickHouse/pull/7993) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Now it's expected that query parameters are represented in "escaped" format. For example, to pass the string `a<tab>b` you have to write `a\tb` or `a\<tab>b`, and respectively `a%5Ctb` or `a%5C%09b` in a URL. This is needed to add the possibility to pass NULL as `\N`. This fixes [#7488](https://github.com/ClickHouse/ClickHouse/issues/7488). [#8517](https://github.com/ClickHouse/ClickHouse/pull/8517) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Enable the `use_minimalistic_part_header_in_zookeeper` setting for `ReplicatedMergeTree` by default. This will significantly reduce the amount of data stored in ZooKeeper. This setting is supported since version 19.1 and we have already used it in production in multiple services without any issues for more than half a year. Disable this setting if you have a chance to downgrade to versions older than 19.1. [#6850](https://github.com/ClickHouse/ClickHouse/pull/6850) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Data skipping indices are production ready and enabled by default. The settings `allow_experimental_data_skipping_indices`, `allow_experimental_cross_to_join_conversion` and `allow_experimental_multiple_joins_emulation` are now obsolete and do nothing. [#7974](https://github.com/ClickHouse/ClickHouse/pull/7974) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Add new `ANY JOIN` logic for `StorageJoin` consistent with the `JOIN` operation. To upgrade without changes in behaviour, you need to add `SETTINGS any_join_distinct_right_table_keys = 1` to Engine Join tables' metadata, or recreate these tables after the upgrade (see the sketch after this list). [#8400](https://github.com/ClickHouse/ClickHouse/pull/8400) ([Artem Zuikov](https://github.com/4ertus2))
* Require server to be restarted to apply the changes in logging configuration. This is a temporary workaround to avoid the bug where the server logs to a deleted log file (see [#8696](https://github.com/ClickHouse/ClickHouse/issues/8696)). [#8707](https://github.com/ClickHouse/ClickHouse/pull/8707) ([Alexander Kuzmenkov](https://github.com/akuzm))
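To make the `StorageJoin` upgrade note concrete, here is a minimal sketch of pinning the old `ANY JOIN` semantics in a Join-engine table's metadata; the table and column names are illustrative:

```sql
-- Keep pre-20.1 ANY JOIN semantics for an Engine = Join table by storing
-- the compatibility setting in the table metadata.
CREATE TABLE join_state
(
    k UInt64,
    v String
)
ENGINE = Join(ANY, LEFT, k)
SETTINGS any_join_distinct_right_table_keys = 1;
```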
### New Feature
* Added information about part paths to `system.merges`. [#8043](https://github.com/ClickHouse/ClickHouse/pull/8043) ([Vladimir Chebotarev](https://github.com/excitoon))
* Add ability to execute `SYSTEM RELOAD DICTIONARY` query in `ON CLUSTER` mode. [#8288](https://github.com/ClickHouse/ClickHouse/pull/8288) ([Guillaume Tassery](https://github.com/YiuRULE))
* Add ability to execute `CREATE DICTIONARY` queries in `ON CLUSTER` mode. [#8163](https://github.com/ClickHouse/ClickHouse/pull/8163) ([alesapin](https://github.com/alesapin))
* Now user's profile in `users.xml` can inherit multiple profiles. [#8343](https://github.com/ClickHouse/ClickHouse/pull/8343) ([Mikhail f. Shiryaev](https://github.com/Felixoid))
* Added `system.stack_trace` table that allows looking at stack traces of all server threads. This is useful for developers to introspect server state. This fixes [#7576](https://github.com/ClickHouse/ClickHouse/issues/7576). [#8344](https://github.com/ClickHouse/ClickHouse/pull/8344) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Add `DateTime64` datatype with configurable sub-second precision (see the example after this list). [#7170](https://github.com/ClickHouse/ClickHouse/pull/7170) ([Vasily Nemkov](https://github.com/Enmk))
* Add table function `clusterAllReplicas` which allows querying all the nodes in the cluster. [#8493](https://github.com/ClickHouse/ClickHouse/pull/8493) ([kiran sunkari](https://github.com/kiransunkari))
* Add aggregate function `categoricalInformationValue` which calculates the information value of a discrete feature. [#8117](https://github.com/ClickHouse/ClickHouse/pull/8117) ([hcz](https://github.com/hczhcz))
* Speed up parsing of data files in `CSV`, `TSV` and `JSONEachRow` format by doing it in parallel. [#7780](https://github.com/ClickHouse/ClickHouse/pull/7780) ([Alexander Kuzmenkov](https://github.com/akuzm))
* Add function `bankerRound` which performs banker's rounding. [#8112](https://github.com/ClickHouse/ClickHouse/pull/8112) ([hcz](https://github.com/hczhcz))
* Support more languages in embedded dictionary for region names: 'ru', 'en', 'ua', 'uk', 'by', 'kz', 'tr', 'de', 'uz', 'lv', 'lt', 'et', 'pt', 'he', 'vi'. [#8189](https://github.com/ClickHouse/ClickHouse/pull/8189) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Improvements in consistency of `ANY JOIN` logic. Now `t1 ANY LEFT JOIN t2` equals `t2 ANY RIGHT JOIN t1`. [#7665](https://github.com/ClickHouse/ClickHouse/pull/7665) ([Artem Zuikov](https://github.com/4ertus2))
* Add setting `any_join_distinct_right_table_keys` which enables old behaviour for `ANY INNER JOIN`. [#7665](https://github.com/ClickHouse/ClickHouse/pull/7665) ([Artem Zuikov](https://github.com/4ertus2))
* Add new `SEMI` and `ANTI JOIN`. The old `ANY INNER JOIN` behaviour is now available as `SEMI LEFT JOIN` (see the example after this list). [#7665](https://github.com/ClickHouse/ClickHouse/pull/7665) ([Artem Zuikov](https://github.com/4ertus2))
* Added `Distributed` format for the `File` engine and `file` table function which allows reading from `.bin` files generated by asynchronous inserts into a `Distributed` table. [#8535](https://github.com/ClickHouse/ClickHouse/pull/8535) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Add an optional reset-column argument for `runningAccumulate` which allows resetting aggregation results for each new key value. [#8326](https://github.com/ClickHouse/ClickHouse/pull/8326) ([Sergey Kononenko](https://github.com/kononencheg))
* Add ability to use ClickHouse as a Prometheus endpoint. [#7900](https://github.com/ClickHouse/ClickHouse/pull/7900) ([vdimir](https://github.com/Vdimir))
* Add section `<remote_url_allow_hosts>` in `config.xml` which restricts allowed hosts for remote table engines and table functions `URL`, `S3`, `HDFS`. [#7154](https://github.com/ClickHouse/ClickHouse/pull/7154) ([Mikhail Korotov](https://github.com/millb))
* Added function `greatCircleAngle` which calculates the distance on a sphere in degrees. [#8105](https://github.com/ClickHouse/ClickHouse/pull/8105) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Changed Earth radius to be consistent with H3 library. [#8105](https://github.com/ClickHouse/ClickHouse/pull/8105) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Added `JSONCompactEachRow` and `JSONCompactEachRowWithNamesAndTypes` formats for input and output. [#7841](https://github.com/ClickHouse/ClickHouse/pull/7841) ([Mikhail Korotov](https://github.com/millb))
* Added a feature for file-related table engines and table functions (`File`, `S3`, `URL`, `HDFS`) which allows reading and writing `gzip` files based on an additional engine parameter or the file extension. [#7840](https://github.com/ClickHouse/ClickHouse/pull/7840) ([Andrey Bodrov](https://github.com/apbodrov))
* Added the `randomASCII(length)` function, generating a string with a random set of [ASCII](https://en.wikipedia.org/wiki/ASCII#Printable_characters) printable characters. [#8401](https://github.com/ClickHouse/ClickHouse/pull/8401) ([BayoNet](https://github.com/BayoNet))
* Added function `JSONExtractArrayRaw` which returns an array of unparsed JSON array elements from a `JSON` string. [#8081](https://github.com/ClickHouse/ClickHouse/pull/8081) ([Oleg Matrokhin](https://github.com/errx))
* Add `arrayZip` function which combines multiple arrays of equal length into one array of tuples (illustrated after this list). [#8149](https://github.com/ClickHouse/ClickHouse/pull/8149) ([Winter Zhang](https://github.com/zhang2014))
* Add ability to move data between disks according to configured `TTL`-expressions for the `*MergeTree` table engines family (see the sketch after this list). [#8140](https://github.com/ClickHouse/ClickHouse/pull/8140) ([Vladimir Chebotarev](https://github.com/excitoon))
* Added new aggregate function `avgWeighted` which calculates the weighted arithmetic mean (illustrated after this list). [#7898](https://github.com/ClickHouse/ClickHouse/pull/7898) ([Andrey Bodrov](https://github.com/apbodrov))
* Now parallel parsing is enabled by default for `TSV`, `TSKV`, `CSV` and `JSONEachRow` formats. [#7894](https://github.com/ClickHouse/ClickHouse/pull/7894) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov))
* Add several geo functions from `H3` library: `h3GetResolution`, `h3EdgeAngle`, `h3EdgeLength`, `h3IsValid` and `h3kRing`. [#8034](https://github.com/ClickHouse/ClickHouse/pull/8034) ([Konstantin Malanchev](https://github.com/hombit))
* Added support for brotli (`br`) compression in file-related storages and table functions. This fixes [#8156](https://github.com/ClickHouse/ClickHouse/issues/8156). [#8526](https://github.com/ClickHouse/ClickHouse/pull/8526) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Add `groupBit*` functions for the `SimpleAggregateFunction` type. [#8485](https://github.com/ClickHouse/ClickHouse/pull/8485) ([Guillaume Tassery](https://github.com/YiuRULE))
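A quick sketch of the new `DateTime64` type mentioned above; the table and column names are illustrative:

```sql
CREATE TABLE events
(
    ts DateTime64(3),  -- three decimal digits of sub-second precision
    msg String
)
ENGINE = MergeTree
ORDER BY ts;

INSERT INTO events VALUES ('2020-01-22 10:20:30.123', 'hello');

SELECT toDateTime64('2020-01-22 10:20:30.123', 3) AS t;
```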
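The new join strictness keywords, sketched under the assumption that `t1` and `t2` share a key column `id`:

```sql
-- SEMI LEFT JOIN: left rows that have at least one match on the right
-- (the old ANY INNER JOIN behaviour).
SELECT t1.id FROM t1 SEMI LEFT JOIN t2 ON t1.id = t2.id;

-- ANTI LEFT JOIN: left rows that have no match on the right.
SELECT t1.id FROM t1 ANTI LEFT JOIN t2 ON t1.id = t2.id;
```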
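Two of the new functions illustrated with arbitrary values (the `values` table function is used here purely for brevity):

```sql
SELECT arrayZip([1, 2, 3], ['a', 'b', 'c']);
-- [(1,'a'), (2,'b'), (3,'c')]

SELECT avgWeighted(x, w)
FROM values('x Float64, w Float64', (4, 1), (1, 0), (10, 2));
-- (4*1 + 1*0 + 10*2) / (1 + 0 + 2) = 8
```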
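And a sketch of a TTL-driven move between storage tiers; the `tiered` policy and its `cold` volume are assumptions about your storage configuration:

```sql
CREATE TABLE hits
(
    d Date,
    payload String
)
ENGINE = MergeTree
ORDER BY d
TTL d + INTERVAL 1 MONTH TO VOLUME 'cold'  -- parts older than a month move to 'cold'
SETTINGS storage_policy = 'tiered';        -- 'tiered' must define a 'cold' volume
```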
### Bug Fix
* Fix rename of tables with `Distributed` engine. Fixes issue [#7868](https://github.com/ClickHouse/ClickHouse/issues/7868). [#8306](https://github.com/ClickHouse/ClickHouse/pull/8306) ([tavplubix](https://github.com/tavplubix))
* Now dictionaries support `EXPRESSION` for attributes as an arbitrary string in a non-ClickHouse SQL dialect. [#8098](https://github.com/ClickHouse/ClickHouse/pull/8098) ([alesapin](https://github.com/alesapin))
* Fix broken `INSERT SELECT FROM mysql(...)` query. This fixes [#8070](https://github.com/ClickHouse/ClickHouse/issues/8070) and [#7960](https://github.com/ClickHouse/ClickHouse/issues/7960). [#8234](https://github.com/ClickHouse/ClickHouse/pull/8234) ([tavplubix](https://github.com/tavplubix))
* Fix error "Mismatch column sizes" when inserting default `Tuple` from `JSONEachRow`. This fixes [#5653](https://github.com/ClickHouse/ClickHouse/issues/5653). [#8606](https://github.com/ClickHouse/ClickHouse/pull/8606) ([tavplubix](https://github.com/tavplubix))
* Now an exception will be thrown in case of using `WITH TIES` alongside `LIMIT BY`. Also add ability to use `TOP` with `LIMIT BY`. This fixes [#7472](https://github.com/ClickHouse/ClickHouse/issues/7472). [#7637](https://github.com/ClickHouse/ClickHouse/pull/7637) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov))
* Fix unintended dependency on a fresh glibc version in the `clickhouse-odbc-bridge` binary. [#8046](https://github.com/ClickHouse/ClickHouse/pull/8046) ([Amos Bird](https://github.com/amosbird))
* Fix bug in the check function of the `*MergeTree` engines family. Now it doesn't fail when the last granule and the last (non-final) mark contain an equal number of rows. [#8047](https://github.com/ClickHouse/ClickHouse/pull/8047) ([alesapin](https://github.com/alesapin))
* Fix insert into `Enum*` columns after an `ALTER` query, when the underlying numeric type is equal to the type specified in the table. This fixes [#7836](https://github.com/ClickHouse/ClickHouse/issues/7836). [#7908](https://github.com/ClickHouse/ClickHouse/pull/7908) ([Anton Popov](https://github.com/CurtizJ))
* Allowed non-constant negative "size" argument for function `substring`. It was not allowed by mistake. This fixes [#4832](https://github.com/ClickHouse/ClickHouse/issues/4832). [#7703](https://github.com/ClickHouse/ClickHouse/pull/7703) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix parsing bug when a wrong number of arguments is passed to the `(O|J)DBC` table engine. [#7709](https://github.com/ClickHouse/ClickHouse/pull/7709) ([alesapin](https://github.com/alesapin))
* Use the command name of the running clickhouse process when sending logs to syslog. In previous versions, an empty string was used instead of the command name. [#8460](https://github.com/ClickHouse/ClickHouse/pull/8460) ([Michael Nacharov](https://github.com/mnach))
* Fix check of allowed hosts for `localhost`. This PR fixes the solution provided in [#8241](https://github.com/ClickHouse/ClickHouse/pull/8241). [#8342](https://github.com/ClickHouse/ClickHouse/pull/8342) ([Vitaly Baranov](https://github.com/vitlibar))
* Fix rare crash in `argMin` and `argMax` functions for long string arguments, when result is used in `runningAccumulate` function. This fixes [#8325](https://github.com/ClickHouse/ClickHouse/issues/8325) [#8341](https://github.com/ClickHouse/ClickHouse/pull/8341) ([dinosaur](https://github.com/769344359))
* Fix memory overcommit for tables with `Buffer` engine. [#8345](https://github.com/ClickHouse/ClickHouse/pull/8345) ([Azat Khuzhin](https://github.com/azat))
* Fixed potential bug in functions that can take `NULL` as one of the arguments and return non-NULL. [#8196](https://github.com/ClickHouse/ClickHouse/pull/8196) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Better metrics calculations in thread pool for background processes for `MergeTree` table engines. [#8194](https://github.com/ClickHouse/ClickHouse/pull/8194) ([Vladimir Chebotarev](https://github.com/excitoon))
* Fix function `IN` inside `WHERE` statement when row-level table filter is present. Fixes [#6687](https://github.com/ClickHouse/ClickHouse/issues/6687) [#8357](https://github.com/ClickHouse/ClickHouse/pull/8357) ([Ivan](https://github.com/abyss7))
* Now an exception is thrown if an integer setting value is not parsed completely. [#7678](https://github.com/ClickHouse/ClickHouse/pull/7678) ([Mikhail Korotov](https://github.com/millb))
* Fix exception when an aggregate function is used in a query to a distributed table with more than two local shards. [#8164](https://github.com/ClickHouse/ClickHouse/pull/8164) ([小路](https://github.com/nicelulu))
* Now the bloom filter can handle zero-length arrays and doesn't perform redundant calculations. [#8242](https://github.com/ClickHouse/ClickHouse/pull/8242) ([achimbab](https://github.com/achimbab))
* Fixed checking if a client host is allowed by matching the client host to `host_regexp` specified in `users.xml`. [#8241](https://github.com/ClickHouse/ClickHouse/pull/8241) ([Vitaly Baranov](https://github.com/vitlibar))
* Relax the ambiguous column check that leads to false positives in multiple `JOIN ON` sections. [#8385](https://github.com/ClickHouse/ClickHouse/pull/8385) ([Artem Zuikov](https://github.com/4ertus2))
* Fixed possible server crash (`std::terminate`) when the server cannot send or write data in `JSON` or `XML` format with values of `String` data type (that require `UTF-8` validation) or when compressing result data with Brotli algorithm or in some other rare cases. This fixes [#7603](https://github.com/ClickHouse/ClickHouse/issues/7603) [#8384](https://github.com/ClickHouse/ClickHouse/pull/8384) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix race condition in `StorageDistributedDirectoryMonitor` found by CI. This fixes [#8364](https://github.com/ClickHouse/ClickHouse/issues/8364). [#8383](https://github.com/ClickHouse/ClickHouse/pull/8383) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Now background merges in `*MergeTree` table engines family preserve storage policy volume order more accurately. [#8549](https://github.com/ClickHouse/ClickHouse/pull/8549) ([Vladimir Chebotarev](https://github.com/excitoon))
* Now table engine `Kafka` works properly with `Native` format. This fixes [#6731](https://github.com/ClickHouse/ClickHouse/issues/6731) [#7337](https://github.com/ClickHouse/ClickHouse/issues/7337) [#8003](https://github.com/ClickHouse/ClickHouse/issues/8003). [#8016](https://github.com/ClickHouse/ClickHouse/pull/8016) ([filimonov](https://github.com/filimonov))
* Fixed formats with headers (like `CSVWithNames`) which were throwing an exception about EOF for the `Kafka` table engine. [#8016](https://github.com/ClickHouse/ClickHouse/pull/8016) ([filimonov](https://github.com/filimonov))
* Fixed a bug with making a set from a subquery in the right part of an `IN` section. This fixes [#5767](https://github.com/ClickHouse/ClickHouse/issues/5767) and [#2542](https://github.com/ClickHouse/ClickHouse/issues/2542). [#7755](https://github.com/ClickHouse/ClickHouse/pull/7755) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov))
* Fix possible crash while reading from storage `File`. [#7756](https://github.com/ClickHouse/ClickHouse/pull/7756) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Fixed reading of the files in `Parquet` format containing columns of type `list`. [#8334](https://github.com/ClickHouse/ClickHouse/pull/8334) ([maxulan](https://github.com/maxulan))
* Fix error `Not found column` for distributed queries with `PREWHERE` condition dependent on sampling key if `max_parallel_replicas > 1`. [#7913](https://github.com/ClickHouse/ClickHouse/pull/7913) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Fix error `Not found column` if query used `PREWHERE` dependent on table's alias and the result set was empty because of primary key condition. [#7911](https://github.com/ClickHouse/ClickHouse/pull/7911) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Fixed return type for functions `rand` and `randConstant` in case of `Nullable` argument. Now functions always return `UInt32` and never `Nullable(UInt32)`. [#8204](https://github.com/ClickHouse/ClickHouse/pull/8204) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Disabled predicate push-down for `WITH FILL` expression. This fixes [#7784](https://github.com/ClickHouse/ClickHouse/issues/7784). [#7789](https://github.com/ClickHouse/ClickHouse/pull/7789) ([Winter Zhang](https://github.com/zhang2014))
* Fixed incorrect `count()` result for `SummingMergeTree` when `FINAL` section is used. [#3280](https://github.com/ClickHouse/ClickHouse/issues/3280) [#7786](https://github.com/ClickHouse/ClickHouse/pull/7786) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov))
* Fix possible incorrect result for constant functions from remote servers. It happened for queries with functions like `version()`, `uptime()`, etc. which return different constant values for different servers. This fixes [#7666](https://github.com/ClickHouse/ClickHouse/issues/7666). [#7689](https://github.com/ClickHouse/ClickHouse/pull/7689) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Fix complicated bug in push-down predicate optimization which led to wrong results. This fixes a lot of issues with push-down predicate optimization. [#8503](https://github.com/ClickHouse/ClickHouse/pull/8503) ([Winter Zhang](https://github.com/zhang2014))
* Fix crash in `CREATE TABLE .. AS dictionary` query. [#8508](https://github.com/ClickHouse/ClickHouse/pull/8508) ([Azat Khuzhin](https://github.com/azat))
* Several improvements to the ClickHouse grammar in the `.g4` file. [#8294](https://github.com/ClickHouse/ClickHouse/pull/8294) ([taiyang-li](https://github.com/taiyang-li))
* Fix bug that leads to crashes in `JOIN`s with tables with engine `Join`. This fixes [#7556](https://github.com/ClickHouse/ClickHouse/issues/7556) [#8254](https://github.com/ClickHouse/ClickHouse/issues/8254) [#7915](https://github.com/ClickHouse/ClickHouse/issues/7915) [#8100](https://github.com/ClickHouse/ClickHouse/issues/8100). [#8298](https://github.com/ClickHouse/ClickHouse/pull/8298) ([Artem Zuikov](https://github.com/4ertus2))
* Fix redundant dictionaries reload on `CREATE DATABASE`. [#7916](https://github.com/ClickHouse/ClickHouse/pull/7916) ([Azat Khuzhin](https://github.com/azat))
* Limit the maximum number of streams for reads from `StorageFile` and `StorageHDFS`. Fixes [#7650](https://github.com/ClickHouse/ClickHouse/issues/7650). [#7981](https://github.com/ClickHouse/ClickHouse/pull/7981) ([alesapin](https://github.com/alesapin))
* Fix bug in `ALTER ... MODIFY ... CODEC` query, when the user specifies both a default expression and a codec. Fixes [#8593](https://github.com/ClickHouse/ClickHouse/issues/8593). [#8614](https://github.com/ClickHouse/ClickHouse/pull/8614) ([alesapin](https://github.com/alesapin))
* Fix error in background merge of columns with `SimpleAggregateFunction(LowCardinality)` type. [#8613](https://github.com/ClickHouse/ClickHouse/pull/8613) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Fixed type check in function `toDateTime64`. [#8375](https://github.com/ClickHouse/ClickHouse/pull/8375) ([Vasily Nemkov](https://github.com/Enmk))
* Now the server does not crash on `LEFT` or `FULL JOIN` with a `Join` engine and an unsupported `join_use_nulls` setting. [#8479](https://github.com/ClickHouse/ClickHouse/pull/8479) ([Artem Zuikov](https://github.com/4ertus2))
* Now `DROP DICTIONARY IF EXISTS db.dict` query doesn't throw exception if `db` doesn't exist. [#8185](https://github.com/ClickHouse/ClickHouse/pull/8185) ([Vitaly Baranov](https://github.com/vitlibar))
* Fix possible crashes in table functions (`file`, `mysql`, `remote`) caused by usage of reference to removed `IStorage` object. Fix incorrect parsing of columns specified at insertion into table function. [#7762](https://github.com/ClickHouse/ClickHouse/pull/7762) ([tavplubix](https://github.com/tavplubix))
* Ensure the network is up before starting `clickhouse-server`. This fixes [#7507](https://github.com/ClickHouse/ClickHouse/issues/7507). [#8570](https://github.com/ClickHouse/ClickHouse/pull/8570) ([Zhichang Yu](https://github.com/yuzhichang))
* Fix timeouts handling for secure connections, so queries don't hang indefinitely. This fixes [#8126](https://github.com/ClickHouse/ClickHouse/issues/8126). [#8128](https://github.com/ClickHouse/ClickHouse/pull/8128) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix `clickhouse-copier`'s redundant contention between concurrent workers. [#7816](https://github.com/ClickHouse/ClickHouse/pull/7816) ([Ding Xiang Fei](https://github.com/dingxiangfei2009))
* Now mutations don't skip attached parts, even if their mutation version is larger than the current mutation version. [#7812](https://github.com/ClickHouse/ClickHouse/pull/7812) ([Zhichang Yu](https://github.com/yuzhichang)) [#8250](https://github.com/ClickHouse/ClickHouse/pull/8250) ([alesapin](https://github.com/alesapin))
* Ignore redundant copies of `*MergeTree` data parts after move to another disk and server restart. [#7810](https://github.com/ClickHouse/ClickHouse/pull/7810) ([Vladimir Chebotarev](https://github.com/excitoon))
* Fix crash in `FULL JOIN` with `LowCardinality` in `JOIN` key. [#8252](https://github.com/ClickHouse/ClickHouse/pull/8252) ([Artem Zuikov](https://github.com/4ertus2))
* Forbid using a column name more than once in an insert query like `INSERT INTO tbl (x, y, x)` (see the example after this list). This fixes [#5465](https://github.com/ClickHouse/ClickHouse/issues/5465), [#7681](https://github.com/ClickHouse/ClickHouse/issues/7681). [#7685](https://github.com/ClickHouse/ClickHouse/pull/7685) ([alesapin](https://github.com/alesapin))
* Added a fallback for detecting the number of physical CPU cores for unknown CPUs (using the number of logical CPU cores). This fixes [#5239](https://github.com/ClickHouse/ClickHouse/issues/5239). [#7726](https://github.com/ClickHouse/ClickHouse/pull/7726) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix `There's no column` error for materialized and alias columns. [#8210](https://github.com/ClickHouse/ClickHouse/pull/8210) ([Artem Zuikov](https://github.com/4ertus2))
* Fixed server crash when an `EXISTS` query was used without a `TABLE` or `DICTIONARY` qualifier, just like `EXISTS t`. This fixes [#8172](https://github.com/ClickHouse/ClickHouse/issues/8172). This bug was introduced in version 19.17. [#8213](https://github.com/ClickHouse/ClickHouse/pull/8213) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix rare bug with error `"Sizes of columns doesn't match"` that might appear when using `SimpleAggregateFunction` column. [#7790](https://github.com/ClickHouse/ClickHouse/pull/7790) ([Boris Granveaud](https://github.com/bgranvea))
* Fix bug where user with empty `allow_databases` got access to all databases (and same for `allow_dictionaries`). [#7793](https://github.com/ClickHouse/ClickHouse/pull/7793) ([DeifyTheGod](https://github.com/DeifyTheGod))
* Fix client crash when the server has already disconnected from the client. [#8071](https://github.com/ClickHouse/ClickHouse/pull/8071) ([Azat Khuzhin](https://github.com/azat))
* Fix `ORDER BY` behaviour in case of sorting by a primary key prefix and a non-primary-key suffix. [#7759](https://github.com/ClickHouse/ClickHouse/pull/7759) ([Anton Popov](https://github.com/CurtizJ))
* Check if a qualified column is present in the table. This fixes [#6836](https://github.com/ClickHouse/ClickHouse/issues/6836). [#7758](https://github.com/ClickHouse/ClickHouse/pull/7758) ([Artem Zuikov](https://github.com/4ertus2))
* Fixed behavior of `ALTER MOVE` run immediately after a merge finishes, which moved a superpart of the specified part. Fixes [#8103](https://github.com/ClickHouse/ClickHouse/issues/8103). [#8104](https://github.com/ClickHouse/ClickHouse/pull/8104) ([Vladimir Chebotarev](https://github.com/excitoon))
* Fix possible server crash while using `UNION` with different number of columns. Fixes [#7279](https://github.com/ClickHouse/ClickHouse/issues/7279). [#7929](https://github.com/ClickHouse/ClickHouse/pull/7929) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Fix size of result substring for function `substr` with negative size. [#8589](https://github.com/ClickHouse/ClickHouse/pull/8589) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Now the server does not execute part mutations in `MergeTree` if there are not enough free threads in the background pool. [#8588](https://github.com/ClickHouse/ClickHouse/pull/8588) ([tavplubix](https://github.com/tavplubix))
* Fix a minor typo on formatting `UNION ALL` AST. [#7999](https://github.com/ClickHouse/ClickHouse/pull/7999) ([litao91](https://github.com/litao91))
* Fixed incorrect bloom filter results for negative numbers. This fixes [#8317](https://github.com/ClickHouse/ClickHouse/issues/8317). [#8566](https://github.com/ClickHouse/ClickHouse/pull/8566) ([Winter Zhang](https://github.com/zhang2014))
* Fixed potential buffer overflow in decompression. A malicious user could pass fabricated compressed data that caused a read past the buffer. This issue was found by Eldar Zaitov from the Yandex information security team. [#8404](https://github.com/ClickHouse/ClickHouse/pull/8404) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix incorrect result because of integer overflow in `arrayIntersect`. [#7777](https://github.com/ClickHouse/ClickHouse/pull/7777) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Now `OPTIMIZE TABLE` query will not wait for offline replicas to perform the operation. [#8314](https://github.com/ClickHouse/ClickHouse/pull/8314) ([javi santana](https://github.com/javisantana))
* Fixed `ALTER TTL` parser for `Replicated*MergeTree` tables. [#8318](https://github.com/ClickHouse/ClickHouse/pull/8318) ([Vladimir Chebotarev](https://github.com/excitoon))
* Fix communication between server and client, so the server reads temporary tables info after a query failure. [#8084](https://github.com/ClickHouse/ClickHouse/pull/8084) ([Azat Khuzhin](https://github.com/azat))
* Fix `bitmapAnd` function error when intersecting an aggregated bitmap and a scalar bitmap. [#8082](https://github.com/ClickHouse/ClickHouse/pull/8082) ([Yue Huang](https://github.com/moon03432))
* Refine the definition of `ZXid` according to the ZooKeeper Programmer's Guide which fixes bug in `clickhouse-cluster-copier`. [#8088](https://github.com/ClickHouse/ClickHouse/pull/8088) ([Ding Xiang Fei](https://github.com/dingxiangfei2009))
* `odbc` table function now respects `external_table_functions_use_nulls` setting. [#7506](https://github.com/ClickHouse/ClickHouse/pull/7506) ([Vasily Nemkov](https://github.com/Enmk))
* Fixed a bug that led to a rare data race. [#8143](https://github.com/ClickHouse/ClickHouse/pull/8143) ([Alexander Kazakov](https://github.com/Akazz))
* Now `SYSTEM RELOAD DICTIONARY` reloads a dictionary completely, ignoring `update_field`. This fixes [#7440](https://github.com/ClickHouse/ClickHouse/issues/7440). [#8037](https://github.com/ClickHouse/ClickHouse/pull/8037) ([Vitaly Baranov](https://github.com/vitlibar))
* Add ability to check if a dictionary exists in a `CREATE` query. [#8032](https://github.com/ClickHouse/ClickHouse/pull/8032) ([alesapin](https://github.com/alesapin))
* Fix `Float*` parsing in `Values` format. This fixes [#7817](https://github.com/ClickHouse/ClickHouse/issues/7817). [#7870](https://github.com/ClickHouse/ClickHouse/pull/7870) ([tavplubix](https://github.com/tavplubix))
* Fix crash when we cannot reserve space in some background operations of `*MergeTree` table engines family. [#7873](https://github.com/ClickHouse/ClickHouse/pull/7873) ([Vladimir Chebotarev](https://github.com/excitoon))
* Fix crash of merge operation when table contains `SimpleAggregateFunction(LowCardinality)` column. This fixes [#8515](https://github.com/ClickHouse/ClickHouse/issues/8515). [#8522](https://github.com/ClickHouse/ClickHouse/pull/8522) ([Azat Khuzhin](https://github.com/azat))
* Restore support of all ICU locales and add the ability to apply collations for constant expressions. Also add language name to `system.collations` table. [#8051](https://github.com/ClickHouse/ClickHouse/pull/8051) ([alesapin](https://github.com/alesapin))
* Fix bug where external dictionaries with zero minimal lifetime (`LIFETIME(MIN 0 MAX N)`, `LIFETIME(N)`) are not updated in the background. [#7983](https://github.com/ClickHouse/ClickHouse/pull/7983) ([alesapin](https://github.com/alesapin))
* Fix crash when an external dictionary with a ClickHouse source has a subquery in its query. [#8351](https://github.com/ClickHouse/ClickHouse/pull/8351) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Fix incorrect parsing of the file extension in a table with engine `URL`. This fixes [#8157](https://github.com/ClickHouse/ClickHouse/issues/8157). [#8419](https://github.com/ClickHouse/ClickHouse/pull/8419) ([Andrey Bodrov](https://github.com/apbodrov))
* Fix `CHECK TABLE` query for `*MergeTree` tables without key. Fixes [#7543](https://github.com/ClickHouse/ClickHouse/issues/7543). [#7979](https://github.com/ClickHouse/ClickHouse/pull/7979) ([alesapin](https://github.com/alesapin))
* Fixed conversion of `Float64` to MySQL type. [#8079](https://github.com/ClickHouse/ClickHouse/pull/8079) ([Yuriy Baranov](https://github.com/yurriy))
* Now if a table was not completely dropped because of a server crash, the server will try to restore and load it. [#8176](https://github.com/ClickHouse/ClickHouse/pull/8176) ([tavplubix](https://github.com/tavplubix))
* Fixed crash in table function `file` while inserting into a file that doesn't exist. Now in this case the file will be created and the insert processed. [#8177](https://github.com/ClickHouse/ClickHouse/pull/8177) ([Olga Khvostikova](https://github.com/stavrolia))
* Fix rare deadlock which could happen when `trace_log` is enabled. [#7838](https://github.com/ClickHouse/ClickHouse/pull/7838) ([filimonov](https://github.com/filimonov))
* Add ability to work with different types besides `Date` in a `RangeHashed` external dictionary created from a DDL query. Fixes [#7899](https://github.com/ClickHouse/ClickHouse/issues/7899). [#8275](https://github.com/ClickHouse/ClickHouse/pull/8275) ([alesapin](https://github.com/alesapin))
* Fix crash when `now64()` is called with the result of another function. [#8270](https://github.com/ClickHouse/ClickHouse/pull/8270) ([Vasily Nemkov](https://github.com/Enmk))
* Fixed bug with detecting the client IP for connections through the MySQL wire protocol. [#7743](https://github.com/ClickHouse/ClickHouse/pull/7743) ([Dmitry Muzyka](https://github.com/dmitriy-myz))
* Fix empty array handling in `arraySplit` function. This fixes [#7708](https://github.com/ClickHouse/ClickHouse/issues/7708). [#7747](https://github.com/ClickHouse/ClickHouse/pull/7747) ([hcz](https://github.com/hczhcz))
* Fixed the issue when `pid-file` of another running `clickhouse-server` may be deleted. [#8487](https://github.com/ClickHouse/ClickHouse/pull/8487) ([Weiqing Xu](https://github.com/weiqxu))
* Fix dictionary reload if it has `invalidate_query`, which stopped updates after some exception on previous update attempts. [#8029](https://github.com/ClickHouse/ClickHouse/pull/8029) ([alesapin](https://github.com/alesapin))
* Fixed error in function `arrayReduce` that may lead to a "double free" and an error in the aggregate function combinator `Resample` that may lead to a memory leak. Added aggregate function `aggThrow`. This function can be used for testing purposes. [#8446](https://github.com/ClickHouse/ClickHouse/pull/8446) ([alexey-milovidov](https://github.com/alexey-milovidov))
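The duplicate-column fix, shown directly on a hypothetical table `tbl`:

```sql
-- Now throws an exception; previously the repeated column was silently accepted.
INSERT INTO tbl (x, y, x) VALUES (1, 2, 3);
```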
### Improvement
* Improved logging when working with `S3` table engine. [#8251](https://github.com/ClickHouse/ClickHouse/pull/8251) ([Grigory Pervakov](https://github.com/GrigoryPervakov))
* Print a help message when `clickhouse-local` is called with no arguments. This fixes [#5335](https://github.com/ClickHouse/ClickHouse/issues/5335). [#8230](https://github.com/ClickHouse/ClickHouse/pull/8230) ([Andrey Nagorny](https://github.com/Melancholic))
* Add setting `mutations_sync` which allows waiting for `ALTER UPDATE/DELETE` queries synchronously (see the sketch after this list). [#8237](https://github.com/ClickHouse/ClickHouse/pull/8237) ([alesapin](https://github.com/alesapin))
* Allow setting up a relative `user_files_path` in `config.xml` (in a way similar to `format_schema_path`). [#7632](https://github.com/ClickHouse/ClickHouse/pull/7632) ([hcz](https://github.com/hczhcz))
* Add exception for illegal types for conversion functions with `-OrZero` postfix. [#7880](https://github.com/ClickHouse/ClickHouse/pull/7880) ([Andrey Konyaev](https://github.com/akonyaev90))
* Simplify the format of the header of data sent to a shard in a distributed query. [#8044](https://github.com/ClickHouse/ClickHouse/pull/8044) ([Vitaly Baranov](https://github.com/vitlibar))
* `Live View` table engine refactoring. [#8519](https://github.com/ClickHouse/ClickHouse/pull/8519) ([vzakaznikov](https://github.com/vzakaznikov))
* Add additional checks for external dictionaries created from DDL-queries. [#8127](https://github.com/ClickHouse/ClickHouse/pull/8127) ([alesapin](https://github.com/alesapin))
* Fix error `Column ... already exists` while using `FINAL` and `SAMPLE` together, e.g. `select count() from table final sample 1/2`. Fixes [#5186](https://github.com/ClickHouse/ClickHouse/issues/5186). [#7907](https://github.com/ClickHouse/ClickHouse/pull/7907) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Now the first argument of the `joinGet` function can be a table identifier (see the sketch after this list). [#7707](https://github.com/ClickHouse/ClickHouse/pull/7707) ([Amos Bird](https://github.com/amosbird))
* Allow using `MaterializedView` with subqueries above `Kafka` tables. [#8197](https://github.com/ClickHouse/ClickHouse/pull/8197) ([filimonov](https://github.com/filimonov))
* Now background moves between disks run in a separate thread pool. [#7670](https://github.com/ClickHouse/ClickHouse/pull/7670) ([Vladimir Chebotarev](https://github.com/excitoon))
* `SYSTEM RELOAD DICTIONARY` now executes synchronously. [#8240](https://github.com/ClickHouse/ClickHouse/pull/8240) ([Vitaly Baranov](https://github.com/vitlibar))
* Stack traces now display physical addresses (offsets in object file) instead of virtual memory addresses (where the object file was loaded). That allows the use of `addr2line` when binary is position independent and ASLR is active. This fixes [#8360](https://github.com/ClickHouse/ClickHouse/issues/8360). [#8387](https://github.com/ClickHouse/ClickHouse/pull/8387) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Support new syntax for row-level security filters: `<table name='table_name'>…</table>`. Fixes [#5779](https://github.com/ClickHouse/ClickHouse/issues/5779). [#8381](https://github.com/ClickHouse/ClickHouse/pull/8381) ([Ivan](https://github.com/abyss7))
* Now `cityHash` function can work with `Decimal` and `UUID` types. Fixes [#5184](https://github.com/ClickHouse/ClickHouse/issues/5184). [#7693](https://github.com/ClickHouse/ClickHouse/pull/7693) ([Mikhail Korotov](https://github.com/millb))
* Removed fixed index granularity (it was 1024) from system logs because it's obsolete after implementation of adaptive granularity. [#7698](https://github.com/ClickHouse/ClickHouse/pull/7698) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Enabled MySQL compatibility server when ClickHouse is compiled without SSL. [#7852](https://github.com/ClickHouse/ClickHouse/pull/7852) ([Yuriy Baranov](https://github.com/yurriy))
* Now the server checksums distributed batches, which gives more verbose errors in case of corrupted data in a batch. [#7914](https://github.com/ClickHouse/ClickHouse/pull/7914) ([Azat Khuzhin](https://github.com/azat))
* Support `DROP DATABASE`, `DETACH TABLE`, `DROP TABLE` and `ATTACH TABLE` for `MySQL` database engine. [#8202](https://github.com/ClickHouse/ClickHouse/pull/8202) ([Winter Zhang](https://github.com/zhang2014))
* Add authentication in S3 table function and table engine. [#7623](https://github.com/ClickHouse/ClickHouse/pull/7623) ([Vladimir Chebotarev](https://github.com/excitoon))
* Added a check for extra parts of `MergeTree` on different disks, in order not to miss data parts on undefined disks. [#8118](https://github.com/ClickHouse/ClickHouse/pull/8118) ([Vladimir Chebotarev](https://github.com/excitoon))
* Enable SSL support for Mac client and server. [#8297](https://github.com/ClickHouse/ClickHouse/pull/8297) ([Ivan](https://github.com/abyss7))
* Now ClickHouse can work as MySQL federated server (see https://dev.mysql.com/doc/refman/5.7/en/federated-create-server.html). [#7717](https://github.com/ClickHouse/ClickHouse/pull/7717) ([Maxim Fedotov](https://github.com/MaxFedotov))
* `clickhouse-client` now only enables `bracketed-paste` when multiquery is on and multiline is off. This fixes [#7757](https://github.com/ClickHouse/ClickHouse/issues/7757). [#7761](https://github.com/ClickHouse/ClickHouse/pull/7761) ([Amos Bird](https://github.com/amosbird))
* Support `Array(Decimal)` in `if` function. [#7721](https://github.com/ClickHouse/ClickHouse/pull/7721) ([Artem Zuikov](https://github.com/4ertus2))
* Support Decimals in `arrayDifference`, `arrayCumSum` and `arrayCumSumNegative` functions. [#7724](https://github.com/ClickHouse/ClickHouse/pull/7724) ([Artem Zuikov](https://github.com/4ertus2))
* Added `lifetime` column to `system.dictionaries` table. [#6820](https://github.com/ClickHouse/ClickHouse/issues/6820) [#7727](https://github.com/ClickHouse/ClickHouse/pull/7727) ([kekekekule](https://github.com/kekekekule))
* Improved check for existing parts on different disks for `*MergeTree` table engines. Addresses [#7660](https://github.com/ClickHouse/ClickHouse/issues/7660). [#8440](https://github.com/ClickHouse/ClickHouse/pull/8440) ([Vladimir Chebotarev](https://github.com/excitoon))
* Integration with the `AWS SDK` for `S3` interactions which allows using all S3 features out of the box. [#8011](https://github.com/ClickHouse/ClickHouse/pull/8011) ([Pavel Kovalenko](https://github.com/Jokser))
* Added support for subqueries in `Live View` tables. [#7792](https://github.com/ClickHouse/ClickHouse/pull/7792) ([vzakaznikov](https://github.com/vzakaznikov))
* The check requiring a `Date` or `DateTime` column in `TTL` expressions was removed. [#7920](https://github.com/ClickHouse/ClickHouse/pull/7920) ([Vladimir Chebotarev](https://github.com/excitoon))
* Information about disk was added to `system.detached_parts` table. [#7833](https://github.com/ClickHouse/ClickHouse/pull/7833) ([Vladimir Chebotarev](https://github.com/excitoon))
* Now settings `max_(table|partition)_size_to_drop` can be changed without a restart. [#7779](https://github.com/ClickHouse/ClickHouse/pull/7779) ([Grigory Pervakov](https://github.com/GrigoryPervakov))
* Slightly better usability of error messages. Ask user not to remove the lines below `Stack trace:`. [#7897](https://github.com/ClickHouse/ClickHouse/pull/7897) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Better reading of messages from the `Kafka` engine in various formats after [#7935](https://github.com/ClickHouse/ClickHouse/issues/7935). [#8035](https://github.com/ClickHouse/ClickHouse/pull/8035) ([Ivan](https://github.com/abyss7))
* Better compatibility with MySQL clients which don't support `sha2_password` auth plugin. [#8036](https://github.com/ClickHouse/ClickHouse/pull/8036) ([Yuriy Baranov](https://github.com/yurriy))
* Support more column types in MySQL compatibility server. [#7975](https://github.com/ClickHouse/ClickHouse/pull/7975) ([Yuriy Baranov](https://github.com/yurriy))
* Implement `ORDER BY` optimization for `Merge`, `Buffer` and `MaterializedView` storages with underlying `MergeTree` tables. [#8130](https://github.com/ClickHouse/ClickHouse/pull/8130) ([Anton Popov](https://github.com/CurtizJ))
* Now we always use POSIX implementation of `getrandom` to have better compatibility with old kernels (< 3.17). [#7940](https://github.com/ClickHouse/ClickHouse/pull/7940) ([Amos Bird](https://github.com/amosbird))
* Better check for valid destination in a move TTL rule. [#8410](https://github.com/ClickHouse/ClickHouse/pull/8410) ([Vladimir Chebotarev](https://github.com/excitoon))
* Better checks for broken insert batches for `Distributed` table engine. [#7933](https://github.com/ClickHouse/ClickHouse/pull/7933) ([Azat Khuzhin](https://github.com/azat))
* Add a column with an array of the part names that mutations must process in the future to the `system.mutations` table. [#8179](https://github.com/ClickHouse/ClickHouse/pull/8179) ([alesapin](https://github.com/alesapin))
* Parallel merge sort optimization for processors. [#8552](https://github.com/ClickHouse/ClickHouse/pull/8552) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* The setting `mark_cache_min_lifetime` is now obsolete and does nothing. In previous versions, the mark cache could grow in memory larger than `mark_cache_size` to accommodate data within `mark_cache_min_lifetime` seconds. That was leading to confusion and higher memory usage than expected, which is especially bad on memory-constrained systems. If you see performance degradation after installing this release, you should increase `mark_cache_size`. [#8484](https://github.com/ClickHouse/ClickHouse/pull/8484) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Preparation to use `tid` everywhere. This is needed for [#7477](https://github.com/ClickHouse/ClickHouse/issues/7477). [#8276](https://github.com/ClickHouse/ClickHouse/pull/8276) ([alexey-milovidov](https://github.com/alexey-milovidov))
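A sketch of the new `mutations_sync` setting mentioned in this list; the table and predicate are illustrative:

```sql
-- 1 = wait for the mutation on the current server, 2 = wait on all replicas.
SET mutations_sync = 1;
ALTER TABLE t UPDATE v = v + 1 WHERE k = 42;  -- returns only once the mutation is done
```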
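And the `joinGet` change, sketched against the `join_state` table from the earlier example:

```sql
-- Previously the first argument had to be a string literal:
-- joinGet('db.join_state', 'v', toUInt64(1)).
SELECT joinGet(join_state, 'v', toUInt64(1));
```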
### Performance Improvement
* Performance optimizations in processors pipeline. [#7988](https://github.com/ClickHouse/ClickHouse/pull/7988) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Non-blocking updates of expired keys in cache dictionaries (with permission to read old ones). [#8303](https://github.com/ClickHouse/ClickHouse/pull/8303) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov))
* Compile ClickHouse without `-fno-omit-frame-pointer` globally to spare one more register. [#8097](https://github.com/ClickHouse/ClickHouse/pull/8097) ([Amos Bird](https://github.com/amosbird))
* Speedup `greatCircleDistance` function and add performance tests for it. [#7307](https://github.com/ClickHouse/ClickHouse/pull/7307) ([Olga Khvostikova](https://github.com/stavrolia))
* Improved performance of function `roundDown`. [#8465](https://github.com/ClickHouse/ClickHouse/pull/8465) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Improved performance of `max`, `min`, `argMin`, `argMax` for `DateTime64` data type. [#8199](https://github.com/ClickHouse/ClickHouse/pull/8199) ([Vasily Nemkov](https://github.com/Enmk))
* Improved performance of sorting without a limit or with a big limit, and of external sorting. [#8545](https://github.com/ClickHouse/ClickHouse/pull/8545) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Improved performance of formatting floating point numbers up to 6 times. [#8542](https://github.com/ClickHouse/ClickHouse/pull/8542) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Improved performance of `modulo` function. [#7750](https://github.com/ClickHouse/ClickHouse/pull/7750) ([Amos Bird](https://github.com/amosbird))
* Optimized `ORDER BY` and merging with a single-column key. [#8335](https://github.com/ClickHouse/ClickHouse/pull/8335) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Better implementation for `arrayReduce`, `-Array` and `-State` combinators. [#7710](https://github.com/ClickHouse/ClickHouse/pull/7710) ([Amos Bird](https://github.com/amosbird))
* Now `PREWHERE` should be optimized to be at least as efficient as `WHERE`. [#7769](https://github.com/ClickHouse/ClickHouse/pull/7769) ([Amos Bird](https://github.com/amosbird))
* Improve the way `round` and `roundBankers` handle negative numbers. [#8229](https://github.com/ClickHouse/ClickHouse/pull/8229) ([hcz](https://github.com/hczhcz))
* Improved decoding performance of `DoubleDelta` and `Gorilla` codecs by roughly 30-40%. This fixes [#7082](https://github.com/ClickHouse/ClickHouse/issues/7082). [#8019](https://github.com/ClickHouse/ClickHouse/pull/8019) ([Vasily Nemkov](https://github.com/Enmk))
* Improved performance of `base64` related functions. [#8444](https://github.com/ClickHouse/ClickHouse/pull/8444) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Added a function `geoDistance`. It is similar to `greatCircleDistance` but uses an approximation to the WGS-84 ellipsoid model. The performance of the two functions is nearly the same (compared after this list). [#8086](https://github.com/ClickHouse/ClickHouse/pull/8086) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Faster `min` and `max` aggregation functions for `Decimal` data type. [#8144](https://github.com/ClickHouse/ClickHouse/pull/8144) ([Artem Zuikov](https://github.com/4ertus2))
* Vectorized processing of `arrayReduce`. [#7608](https://github.com/ClickHouse/ClickHouse/pull/7608) ([Amos Bird](https://github.com/amosbird))
* `if` chains are now optimized as `multiIf`. [#8355](https://github.com/ClickHouse/ClickHouse/pull/8355) ([kamalov-ruslan](https://github.com/kamalov-ruslan))
* Fix performance regression of `Kafka` table engine introduced in 19.15. This fixes [#7261](https://github.com/ClickHouse/ClickHouse/issues/7261). [#7935](https://github.com/ClickHouse/ClickHouse/pull/7935) ([filimonov](https://github.com/filimonov))
* Removed "pie" code generation that `gcc` from Debian packages occasionally brings by default. [#8483](https://github.com/ClickHouse/ClickHouse/pull/8483) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Parallel parsing of data formats. [#6553](https://github.com/ClickHouse/ClickHouse/pull/6553) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov))
* Enable optimized parser of `Values` with expressions by default (`input_format_values_deduce_templates_of_expressions=1`). [#8231](https://github.com/ClickHouse/ClickHouse/pull/8231) ([tavplubix](https://github.com/tavplubix))
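A quick comparison of the two distance functions mentioned above; coordinates are lon/lat in degrees and the sample values (London to Paris) are arbitrary:

```sql
SELECT
    greatCircleDistance(-0.1278, 51.5074, 2.3522, 48.8566) AS sphere_m,   -- spherical model
    geoDistance(-0.1278, 51.5074, 2.3522, 48.8566)         AS ellipsoid_m; -- WGS-84 approximation
```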
### Build/Testing/Packaging Improvement
* Build fixes for `ARM` and in minimal mode. [#8304](https://github.com/ClickHouse/ClickHouse/pull/8304) ([proller](https://github.com/proller))
* Add coverage file flush for `clickhouse-server` when `std::atexit` is not called. Also slightly improved logging in stateless tests with coverage. [#8267](https://github.com/ClickHouse/ClickHouse/pull/8267) ([alesapin](https://github.com/alesapin))
* Update LLVM library in contrib. Avoid using LLVM from OS packages. [#8258](https://github.com/ClickHouse/ClickHouse/pull/8258) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Make bundled `curl` build fully quiet. [#8232](https://github.com/ClickHouse/ClickHouse/pull/8232) [#8203](https://github.com/ClickHouse/ClickHouse/pull/8203) ([Pavel Kovalenko](https://github.com/Jokser))
* Fix some `MemorySanitizer` warnings. [#8235](https://github.com/ClickHouse/ClickHouse/pull/8235) ([Alexander Kuzmenkov](https://github.com/akuzm))
* Use `add_warning` and `no_warning` macros in `CMakeLists.txt`. [#8604](https://github.com/ClickHouse/ClickHouse/pull/8604) ([Ivan](https://github.com/abyss7))
* Add support of MinIO S3-compatible object storage (https://min.io/) for better integration tests. [#7863](https://github.com/ClickHouse/ClickHouse/pull/7863) [#7875](https://github.com/ClickHouse/ClickHouse/pull/7875) ([Pavel Kovalenko](https://github.com/Jokser))
* Imported `libc` headers to contrib. It allows making builds more consistent across various systems (only for `x86_64-linux-gnu`). [#5773](https://github.com/ClickHouse/ClickHouse/pull/5773) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Remove `-fPIC` from some libraries. [#8464](https://github.com/ClickHouse/ClickHouse/pull/8464) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Clean `CMakeLists.txt` for curl. See https://github.com/ClickHouse/ClickHouse/pull/8011#issuecomment-569478910 [#8459](https://github.com/ClickHouse/ClickHouse/pull/8459) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Silence warnings in the `CapNProto` library. [#8220](https://github.com/ClickHouse/ClickHouse/pull/8220) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Add performance tests for short string optimized hash tables. [#7679](https://github.com/ClickHouse/ClickHouse/pull/7679) ([Amos Bird](https://github.com/amosbird))
* Now ClickHouse will build on `AArch64` even if `MADV_FREE` is not available. This fixes [#8027](https://github.com/ClickHouse/ClickHouse/issues/8027). [#8243](https://github.com/ClickHouse/ClickHouse/pull/8243) ([Amos Bird](https://github.com/amosbird))
* Update `zlib-ng` to fix memory sanitizer problems. [#7182](https://github.com/ClickHouse/ClickHouse/pull/7182) [#8206](https://github.com/ClickHouse/ClickHouse/pull/8206) ([Alexander Kuzmenkov](https://github.com/akuzm))
* Enable internal MySQL library on non-Linux system, because usage of OS packages is very fragile and usually doesn't work at all. This fixes [#5765](https://github.com/ClickHouse/ClickHouse/issues/5765). [#8426](https://github.com/ClickHouse/ClickHouse/pull/8426) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed build on some systems after enabling `libc++`. This supersedes [#8374](https://github.com/ClickHouse/ClickHouse/issues/8374). [#8380](https://github.com/ClickHouse/ClickHouse/pull/8380) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Make `Field` methods more type-safe to find more errors. [#7386](https://github.com/ClickHouse/ClickHouse/pull/7386) [#8209](https://github.com/ClickHouse/ClickHouse/pull/8209) ([Alexander Kuzmenkov](https://github.com/akuzm))
* Added missing files to the `libc-headers` submodule. [#8507](https://github.com/ClickHouse/ClickHouse/pull/8507) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix wrong `JSON` quoting in performance test output. [#8497](https://github.com/ClickHouse/ClickHouse/pull/8497) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Now stack trace is displayed for `std::exception` and `Poco::Exception`. In previous versions it was available only for `DB::Exception`. This improves diagnostics. [#8501](https://github.com/ClickHouse/ClickHouse/pull/8501) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Porting `clock_gettime` and `clock_nanosleep` for fresh glibc versions. [#8054](https://github.com/ClickHouse/ClickHouse/pull/8054) ([Amos Bird](https://github.com/amosbird))
* Enable `part_log` in example config for developers. [#8609](https://github.com/ClickHouse/ClickHouse/pull/8609) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix async nature of reload in `01036_no_superfluous_dict_reload_on_create_database*`. [#8111](https://github.com/ClickHouse/ClickHouse/pull/8111) ([Azat Khuzhin](https://github.com/azat))
* Fixed codec performance tests. [#8615](https://github.com/ClickHouse/ClickHouse/pull/8615) ([Vasily Nemkov](https://github.com/Enmk))
* Add install scripts for `.tgz` build and documentation for them. [#8612](https://github.com/ClickHouse/ClickHouse/pull/8612) [#8591](https://github.com/ClickHouse/ClickHouse/pull/8591) ([alesapin](https://github.com/alesapin))
* Removed the old `ZSTD` test (it was created in 2016 to reproduce a bug that pre-1.0 versions of ZSTD had). This fixes [#8618](https://github.com/ClickHouse/ClickHouse/issues/8618). [#8619](https://github.com/ClickHouse/ClickHouse/pull/8619) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed build on Mac OS Catalina. [#8600](https://github.com/ClickHouse/ClickHouse/pull/8600) ([meo](https://github.com/meob))
* Increased number of rows in codec performance tests to make results noticeable. [#8574](https://github.com/ClickHouse/ClickHouse/pull/8574) ([Vasily Nemkov](https://github.com/Enmk))
* In debug builds, treat `LOGICAL_ERROR` exceptions as assertion failures, so that they are easier to notice. [#8475](https://github.com/ClickHouse/ClickHouse/pull/8475) ([Alexander Kuzmenkov](https://github.com/akuzm))
* Make formats-related performance test more deterministic. [#8477](https://github.com/ClickHouse/ClickHouse/pull/8477) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Update `lz4` to fix a MemorySanitizer failure. [#8181](https://github.com/ClickHouse/ClickHouse/pull/8181) ([Alexander Kuzmenkov](https://github.com/akuzm))
* Suppress a known MemorySanitizer false positive in exception handling. [#8182](https://github.com/ClickHouse/ClickHouse/pull/8182) ([Alexander Kuzmenkov](https://github.com/akuzm))
|
||||
* Update `gcc` and `g++` to version 9 in `build/docker/build.sh` [#7766](https://github.com/ClickHouse/ClickHouse/pull/7766) ([TLightSky](https://github.com/tlightsky))
|
||||
* Add performance test case to test that `PREWHERE` is worse than `WHERE`. [#7768](https://github.com/ClickHouse/ClickHouse/pull/7768) ([Amos Bird](https://github.com/amosbird))
|
||||
* Progress towards fixing one flacky test. [#8621](https://github.com/ClickHouse/ClickHouse/pull/8621) ([alexey-milovidov](https://github.com/alexey-milovidov))
|
||||
* Avoid MemorySanitizer report for data from `libunwind`. [#8539](https://github.com/ClickHouse/ClickHouse/pull/8539) ([alexey-milovidov](https://github.com/alexey-milovidov))
|
||||
* Updated `libc++` to the latest version. [#8324](https://github.com/ClickHouse/ClickHouse/pull/8324) ([alexey-milovidov](https://github.com/alexey-milovidov))
|
||||
* Build ICU library from sources. This fixes [#6460](https://github.com/ClickHouse/ClickHouse/issues/6460). [#8219](https://github.com/ClickHouse/ClickHouse/pull/8219) ([alexey-milovidov](https://github.com/alexey-milovidov))
|
||||
* Switched from `libressl` to `openssl`. ClickHouse should support TLS 1.3 and SNI after this change. This fixes [#8171](https://github.com/ClickHouse/ClickHouse/issues/8171). [#8218](https://github.com/ClickHouse/ClickHouse/pull/8218) ([alexey-milovidov](https://github.com/alexey-milovidov))
|
||||
* Fixed UBSan report when using `chacha20_poly1305` from SSL (happens on connect to https://yandex.ru/). [#8214](https://github.com/ClickHouse/ClickHouse/pull/8214) ([alexey-milovidov](https://github.com/alexey-milovidov))
|
||||
* Fix mode of default password file for `.deb` linux distros. [#8075](https://github.com/ClickHouse/ClickHouse/pull/8075) ([proller](https://github.com/proller))
|
||||
* Improved expression for getting `clickhouse-server` PID in `clickhouse-test`. [#8063](https://github.com/ClickHouse/ClickHouse/pull/8063) ([Alexander Kazakov](https://github.com/Akazz))
|
||||
* Updated contrib/googletest to v1.10.0. [#8587](https://github.com/ClickHouse/ClickHouse/pull/8587) ([Alexander Burmak](https://github.com/Alex-Burmak))
|
||||
* Fixed ThreadSaninitizer report in `base64` library. Also updated this library to the latest version, but it doesn't matter. This fixes [#8397](https://github.com/ClickHouse/ClickHouse/issues/8397). [#8403](https://github.com/ClickHouse/ClickHouse/pull/8403) ([alexey-milovidov](https://github.com/alexey-milovidov))
|
||||
* Fix `00600_replace_running_query` for processors. [#8272](https://github.com/ClickHouse/ClickHouse/pull/8272) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
|
||||
* Remove support for `tcmalloc` to make `CMakeLists.txt` simpler. [#8310](https://github.com/ClickHouse/ClickHouse/pull/8310) ([alexey-milovidov](https://github.com/alexey-milovidov))
|
||||
* Release gcc builds now use `libc++` instead of `libstdc++`. Recently `libc++` was used only with clang. This will improve consistency of build configurations and portability. [#8311](https://github.com/ClickHouse/ClickHouse/pull/8311) ([alexey-milovidov](https://github.com/alexey-milovidov))
|
||||
* Enable ICU library for build with MemorySanitizer. [#8222](https://github.com/ClickHouse/ClickHouse/pull/8222) ([alexey-milovidov](https://github.com/alexey-milovidov))
|
||||
* Suppress warnings from `CapNProto` library. [#8224](https://github.com/ClickHouse/ClickHouse/pull/8224) ([alexey-milovidov](https://github.com/alexey-milovidov))
|
||||
* Removed special cases of code for `tcmalloc`, because it's no longer supported. [#8225](https://github.com/ClickHouse/ClickHouse/pull/8225) ([alexey-milovidov](https://github.com/alexey-milovidov))
|
||||
* In CI coverage task, kill the server gracefully to allow it to save the coverage report. This fixes incomplete coverage reports we've been seeing lately. [#8142](https://github.com/ClickHouse/ClickHouse/pull/8142) ([alesapin](https://github.com/alesapin))
|
||||
* Performance tests for all codecs against `Float64` and `UInt64` values. [#8349](https://github.com/ClickHouse/ClickHouse/pull/8349) ([Vasily Nemkov](https://github.com/Enmk))
|
||||
* `termcap` is very much deprecated and lead to various problems (f.g. missing "up" cap and echoing `^J` instead of multi line) . Favor `terminfo` or bundled `ncurses`. [#7737](https://github.com/ClickHouse/ClickHouse/pull/7737) ([Amos Bird](https://github.com/amosbird))
|
||||
* Fix `test_storage_s3` integration test. [#7734](https://github.com/ClickHouse/ClickHouse/pull/7734) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
|
||||
* Support `StorageFile(<format>, null)` to insert a block into the given format file without actually writing it to disk. This is required for performance tests (see the sketch after this list). [#8455](https://github.com/ClickHouse/ClickHouse/pull/8455) ([Amos Bird](https://github.com/amosbird))
* Added the argument `--print-time` to functional tests, which prints execution time per test. [#8001](https://github.com/ClickHouse/ClickHouse/pull/8001) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Added asserts to `KeyCondition` while evaluating RPN. This fixes a warning from gcc-9. [#8279](https://github.com/ClickHouse/ClickHouse/pull/8279) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Dump cmake options in CI builds. [#8273](https://github.com/ClickHouse/ClickHouse/pull/8273) ([Alexander Kuzmenkov](https://github.com/akuzm))
* Don't generate debug info for some fat libraries. [#8271](https://github.com/ClickHouse/ClickHouse/pull/8271) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Make `log_to_console.xml` always log to stderr, regardless of whether the session is interactive or not. [#8395](https://github.com/ClickHouse/ClickHouse/pull/8395) ([Alexander Kuzmenkov](https://github.com/akuzm))
* Removed some unused features from the `clickhouse-performance-test` tool. [#8555](https://github.com/ClickHouse/ClickHouse/pull/8555) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Now we will also search for `lld-X` matching the corresponding `clang-X` version. [#8092](https://github.com/ClickHouse/ClickHouse/pull/8092) ([alesapin](https://github.com/alesapin))
* Parquet build improvement. [#8421](https://github.com/ClickHouse/ClickHouse/pull/8421) ([maxulan](https://github.com/maxulan))
* More GCC warnings. [#8221](https://github.com/ClickHouse/ClickHouse/pull/8221) ([kreuzerkrieg](https://github.com/kreuzerkrieg))
* The package for Arch Linux now allows running the ClickHouse server, and not only the client. [#8534](https://github.com/ClickHouse/ClickHouse/pull/8534) ([Vladimir Chebotarev](https://github.com/excitoon))
* Fix the test with processors. Tiny performance fixes. [#7672](https://github.com/ClickHouse/ClickHouse/pull/7672) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Update contrib/protobuf. [#8256](https://github.com/ClickHouse/ClickHouse/pull/8256) ([Matwey V. Kornilov](https://github.com/matwey))
* In preparation for switching to C++20, as a new year celebration. "May the C++ force be with ClickHouse." [#8447](https://github.com/ClickHouse/ClickHouse/pull/8447) ([Amos Bird](https://github.com/amosbird))

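A minimal sketch of the `File(<format>, null)` engine from the entry above, as it might appear in a performance test. The argument form follows the entry's `StorageFile(<format>, null)` wording, and the table and column names are invented for illustration:

```sql
-- Hypothetical example: rows are serialized to TSV but then discarded
-- instead of being written to disk, so only the formatting cost is measured.
CREATE TABLE format_sink (n UInt64, s String) ENGINE = File(TSV, null);
INSERT INTO format_sink SELECT number, toString(number) FROM numbers(1000000);
```
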
### Experimental Feature
* Added the experimental setting `min_bytes_to_use_mmap_io`. It allows reading big files without copying data from kernel space to userspace. The setting is disabled by default. The recommended threshold is about 64 MB, because mmap/munmap is slow (see the sketch after this list). [#8520](https://github.com/ClickHouse/ClickHouse/pull/8520) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Reworked quotas as a part of the access control system. Added a new table `system.quotas`, new functions `currentQuota`, `currentQuotaKey`, and new SQL syntax `CREATE QUOTA`, `ALTER QUOTA`, `DROP QUOTA`, `SHOW QUOTA` (see the sketch after this list). [#7257](https://github.com/ClickHouse/ClickHouse/pull/7257) ([Vitaly Baranov](https://github.com/vitlibar))
* Allow skipping unknown settings with warnings instead of throwing exceptions. [#7653](https://github.com/ClickHouse/ClickHouse/pull/7653) ([Vitaly Baranov](https://github.com/vitlibar))
* Reworked row policies as a part of the access control system. Added a new table `system.row_policies`, a new function `currentRowPolicies()`, and new SQL syntax `CREATE POLICY`, `ALTER POLICY`, `DROP POLICY`, `SHOW CREATE POLICY`, `SHOW POLICIES` (see the sketch after this list). [#7808](https://github.com/ClickHouse/ClickHouse/pull/7808) ([Vitaly Baranov](https://github.com/vitlibar))
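A hedged sketch of the `min_bytes_to_use_mmap_io` setting above. The table name is hypothetical, and the value is simply the roughly 64 MB threshold recommended in the entry:

```sql
-- Opt in per session; reads larger than the threshold may then go through mmap.
SET min_bytes_to_use_mmap_io = 67108864; -- about 64 MB
SELECT count() FROM big_table;           -- big_table is a hypothetical large table
```
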
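Hedged sketches of the quota and row-policy syntax named above. The object, table, and condition names are illustrative, and the exact grammar evolved in later releases, so treat these as approximations rather than the definitive syntax:

```sql
-- Limit the current user to 100 queries per hour, then inspect usage.
CREATE QUOTA q_dev FOR INTERVAL 1 hour MAX queries = 100 TO CURRENT_USER;
SHOW QUOTA;
```

```sql
-- Restrict SELECTs on a hypothetical table to one region's rows.
CREATE POLICY eu_only ON db.visits FOR SELECT USING region = 'EU' TO ALL;
SHOW POLICIES;
```
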
### Security Fix
* Fixed the possibility of reading the directory structure in tables with the `File` table engine. This fixes [#8536](https://github.com/ClickHouse/ClickHouse/issues/8536). [#8537](https://github.com/ClickHouse/ClickHouse/pull/8537) ([alexey-milovidov](https://github.com/alexey-milovidov))

## ClickHouse release v19.17

### ClickHouse release v19.17.6.36, 2019-12-27
@ -352,6 +352,7 @@ include (cmake/find/simdjson.cmake)
include (cmake/find/rapidjson.cmake)
include (cmake/find/fastops.cmake)
include (cmake/find/orc.cmake)
include (cmake/find/avro.cmake)

find_contrib_lib(cityhash)
find_contrib_lib(farmhash)
@ -12,19 +12,19 @@ If you want to contribute to documentation, read the [Contributing to ClickHouse

## Legal Info

In order for us (YANDEX LLC) to accept patches and other contributions from you, you will have to adopt our Yandex Contributor License Agreement (the "**CLA**"). The current version of the CLA you may find here:
In order for us (YANDEX LLC) to accept patches and other contributions from you, you may adopt our Yandex Contributor License Agreement (the "**CLA**"). The current version of the CLA you may find here:
1) https://yandex.ru/legal/cla/?lang=en (in English) and
2) https://yandex.ru/legal/cla/?lang=ru (in Russian).

By adopting the CLA, you state the following:

* You obviously wish and are willingly licensing your contributions to us for our open source projects under the terms of the CLA,
* You has read the terms and conditions of the CLA and agree with them in full,
* You have read the terms and conditions of the CLA and agree with them in full,
* You are legally able to provide and license your contributions as stated,
* We may use your contributions for our open source projects and for any other our project too,
* We rely on your assurances concerning the rights of third parties in relation to your contributes.
* We rely on your assurances concerning the rights of third parties in relation to your contributions.

If you agree with these principles, please read and adopt our CLA. By providing us your contributions, you hereby declare that you has already read and adopt our CLA, and we may freely merge your contributions with our corresponding open source project and use it in further in accordance with terms and conditions of the CLA.
If you agree with these principles, please read and adopt our CLA. By providing us your contributions, you hereby declare that you have already read and adopt our CLA, and we may freely merge your contributions with our corresponding open source project and use it in further in accordance with terms and conditions of the CLA.

If you have already adopted the terms and conditions of the CLA, you are able to provide your contributions. When you submit your pull request, please add the following information into it:

@ -37,4 +37,7 @@ Replace the bracketed text as follows:

It is enough to provide us such notification once.

If you don't agree with the CLA, you still can open a pull request to provide your contributions.
As an alternative, you can provide DCO instead of CLA. You can find the text of DCO here: https://developercertificate.org/
It is enough to read and copy it verbatim to your pull request.

If you don't agree with the CLA and don't want to provide DCO, you still can open a pull request to provide your contributions.
28
cmake/find/avro.cmake
Normal file
@ -0,0 +1,28 @@
option (ENABLE_AVRO "Enable Avro" ${ENABLE_LIBRARIES})

if (ENABLE_AVRO)

option (USE_INTERNAL_AVRO_LIBRARY "Set to FALSE to use system avro library instead of bundled" ${NOT_UNBUNDLED})

if(NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/avro/lang/c++/CMakeLists.txt")
    if(USE_INTERNAL_AVRO_LIBRARY)
        message(WARNING "submodule contrib/avro is missing. to fix try run: \n git submodule update --init --recursive")
    endif()
    set(MISSING_INTERNAL_AVRO_LIBRARY 1)
    set(USE_INTERNAL_AVRO_LIBRARY 0)
endif()

if (NOT USE_INTERNAL_AVRO_LIBRARY)
elseif(NOT MISSING_INTERNAL_AVRO_LIBRARY)
    include(cmake/find/snappy.cmake)
    set(AVROCPP_INCLUDE_DIR "${ClickHouse_SOURCE_DIR}/contrib/avro/lang/c++/include")
    set(AVROCPP_LIBRARY avrocpp)
endif ()

if (AVROCPP_LIBRARY AND AVROCPP_INCLUDE_DIR)
    set(USE_AVRO 1)
endif()

endif()

message (STATUS "Using avro=${USE_AVRO}: ${AVROCPP_INCLUDE_DIR} : ${AVROCPP_LIBRARY}")
@ -31,6 +31,7 @@ if (NOT Boost_SYSTEM_LIBRARY AND NOT MISSING_INTERNAL_BOOST_LIBRARY)
set (Boost_SYSTEM_LIBRARY boost_system_internal)
set (Boost_PROGRAM_OPTIONS_LIBRARY boost_program_options_internal)
set (Boost_FILESYSTEM_LIBRARY boost_filesystem_internal ${Boost_SYSTEM_LIBRARY})
set (Boost_IOSTREAMS_LIBRARY boost_iostreams_internal)
set (Boost_REGEX_LIBRARY boost_regex_internal)

set (Boost_INCLUDE_DIRS)

@ -48,4 +49,4 @@ if (NOT Boost_SYSTEM_LIBRARY AND NOT MISSING_INTERNAL_BOOST_LIBRARY)
list (APPEND Boost_INCLUDE_DIRS "${ClickHouse_SOURCE_DIR}/contrib/boost")
endif ()

message (STATUS "Using Boost: ${Boost_INCLUDE_DIRS} : ${Boost_PROGRAM_OPTIONS_LIBRARY},${Boost_SYSTEM_LIBRARY},${Boost_FILESYSTEM_LIBRARY},${Boost_REGEX_LIBRARY}")
message (STATUS "Using Boost: ${Boost_INCLUDE_DIRS} : ${Boost_PROGRAM_OPTIONS_LIBRARY},${Boost_SYSTEM_LIBRARY},${Boost_FILESYSTEM_LIBRARY},${Boost_IOSTREAMS_LIBRARY},${Boost_REGEX_LIBRARY}")
@ -14,6 +14,7 @@ if (NOT ENABLE_LIBRARIES)
set (ENABLE_POCO_REDIS ${ENABLE_LIBRARIES} CACHE BOOL "")
set (ENABLE_POCO_ODBC ${ENABLE_LIBRARIES} CACHE BOOL "")
set (ENABLE_POCO_SQL ${ENABLE_LIBRARIES} CACHE BOOL "")
set (ENABLE_POCO_JSON ${ENABLE_LIBRARIES} CACHE BOOL "")
endif ()

set (POCO_COMPONENTS Net XML SQL Data)
@ -34,6 +35,9 @@ if (NOT DEFINED ENABLE_POCO_ODBC OR ENABLE_POCO_ODBC)
list (APPEND POCO_COMPONENTS DataODBC)
list (APPEND POCO_COMPONENTS SQLODBC)
endif ()
if (NOT DEFINED ENABLE_POCO_JSON OR ENABLE_POCO_JSON)
list (APPEND POCO_COMPONENTS JSON)
endif ()

if (NOT USE_INTERNAL_POCO_LIBRARY)
find_package (Poco COMPONENTS ${POCO_COMPONENTS})
@ -112,6 +116,11 @@ elseif (NOT MISSING_INTERNAL_POCO_LIBRARY)
endif ()
endif ()

if (NOT DEFINED ENABLE_POCO_JSON OR ENABLE_POCO_JSON)
set (Poco_JSON_LIBRARY PocoJSON)
set (Poco_JSON_INCLUDE_DIR "${ClickHouse_SOURCE_DIR}/contrib/poco/JSON/include/")
endif ()

if (OPENSSL_FOUND AND (NOT DEFINED ENABLE_POCO_NETSSL OR ENABLE_POCO_NETSSL))
set (Poco_NetSSL_LIBRARY PocoNetSSL ${OPENSSL_LIBRARIES})
set (Poco_Crypto_LIBRARY PocoCrypto ${OPENSSL_LIBRARIES})
@ -145,8 +154,11 @@ endif ()
if (Poco_SQLODBC_LIBRARY AND ODBC_FOUND)
set (USE_POCO_SQLODBC 1)
endif ()
if (Poco_JSON_LIBRARY)
set (USE_POCO_JSON 1)
endif ()

message(STATUS "Using Poco: ${Poco_INCLUDE_DIRS} : ${Poco_Foundation_LIBRARY},${Poco_Util_LIBRARY},${Poco_Net_LIBRARY},${Poco_NetSSL_LIBRARY},${Poco_Crypto_LIBRARY},${Poco_XML_LIBRARY},${Poco_Data_LIBRARY},${Poco_DataODBC_LIBRARY},${Poco_SQL_LIBRARY},${Poco_SQLODBC_LIBRARY},${Poco_MongoDB_LIBRARY},${Poco_Redis_LIBRARY}; MongoDB=${USE_POCO_MONGODB}, Redis=${USE_POCO_REDIS}, DataODBC=${USE_POCO_DATAODBC}, NetSSL=${USE_POCO_NETSSL}")
message(STATUS "Using Poco: ${Poco_INCLUDE_DIRS} : ${Poco_Foundation_LIBRARY},${Poco_Util_LIBRARY},${Poco_Net_LIBRARY},${Poco_NetSSL_LIBRARY},${Poco_Crypto_LIBRARY},${Poco_XML_LIBRARY},${Poco_Data_LIBRARY},${Poco_DataODBC_LIBRARY},${Poco_SQL_LIBRARY},${Poco_SQLODBC_LIBRARY},${Poco_MongoDB_LIBRARY},${Poco_Redis_LIBRARY},${Poco_JSON_LIBRARY}; MongoDB=${USE_POCO_MONGODB}, Redis=${USE_POCO_REDIS}, DataODBC=${USE_POCO_DATAODBC}, NetSSL=${USE_POCO_NETSSL}, JSON=${USE_POCO_JSON}")

# How to make suitable poco:
# use branch:
@ -53,6 +53,7 @@ if (SANITIZE)
set (USE_CAPNP 0 CACHE BOOL "")
set (USE_INTERNAL_ORC_LIBRARY 0 CACHE BOOL "")
set (USE_ORC 0 CACHE BOOL "")
set (USE_AVRO 0 CACHE BOOL "")
set (ENABLE_SSL 0 CACHE BOOL "")

elseif (SANITIZE STREQUAL "thread")
34
contrib/CMakeLists.txt
vendored
@ -146,6 +146,20 @@ if (ENABLE_ICU AND USE_INTERNAL_ICU_LIBRARY)
add_subdirectory (icu-cmake)
endif ()

if(USE_INTERNAL_SNAPPY_LIBRARY)
set(SNAPPY_BUILD_TESTS 0 CACHE INTERNAL "")
if (NOT MAKE_STATIC_LIBRARIES)
set(BUILD_SHARED_LIBS 1) # TODO: set at root dir
endif()

add_subdirectory(snappy)

set (SNAPPY_INCLUDE_DIR "${ClickHouse_SOURCE_DIR}/contrib/snappy")
if(SANITIZE STREQUAL "undefined")
target_compile_options(${SNAPPY_LIBRARY} PRIVATE -fno-sanitize=undefined)
endif()
endif()

if (USE_INTERNAL_PARQUET_LIBRARY)
if (USE_INTERNAL_PARQUET_LIBRARY_NATIVE_CMAKE)
# We don't use Arrow's CMake files because they use too many dependencies and download some libs at compile time
@ -189,20 +203,6 @@ if (USE_INTERNAL_PARQUET_LIBRARY_NATIVE_CMAKE)
endif()

else()
if(USE_INTERNAL_SNAPPY_LIBRARY)
set(SNAPPY_BUILD_TESTS 0 CACHE INTERNAL "")
if (NOT MAKE_STATIC_LIBRARIES)
set(BUILD_SHARED_LIBS 1) # TODO: set at root dir
endif()

add_subdirectory(snappy)

set (SNAPPY_INCLUDE_DIR "${ClickHouse_SOURCE_DIR}/contrib/snappy")
if(SANITIZE STREQUAL "undefined")
target_compile_options(${SNAPPY_LIBRARY} PRIVATE -fno-sanitize=undefined)
endif()
endif()

add_subdirectory(arrow-cmake)

# The library is large - avoid bloat.
@ -212,6 +212,10 @@ else()
endif()
endif()

if (USE_INTERNAL_AVRO_LIBRARY)
add_subdirectory(avro-cmake)
endif()

if (USE_INTERNAL_POCO_LIBRARY)
set (POCO_VERBOSE_MESSAGES 0 CACHE INTERNAL "")
set (save_CMAKE_CXX_FLAGS ${CMAKE_CXX_FLAGS})
@ -302,7 +306,7 @@ if (USE_INTERNAL_AWS_S3_LIBRARY)
# The library is large - avoid bloat.
target_compile_options (aws_s3 PRIVATE -g0)
target_compile_options (aws_s3_checksums PRIVATE -g0)
target_compile_options (libcurl PRIVATE -g0)
target_compile_options (curl PRIVATE -g0)
endif ()

if (USE_BASE64)
1
contrib/avro
vendored
Submodule
@ -0,0 +1 @@
Subproject commit 5b2752041c8d2f75eb5c1dbec8b4c25fc0e24d12
70
contrib/avro-cmake/CMakeLists.txt
Normal file
@ -0,0 +1,70 @@
set(AVROCPP_ROOT_DIR ${CMAKE_SOURCE_DIR}/contrib/avro/lang/c++)
set(AVROCPP_INCLUDE_DIR ${AVROCPP_ROOT_DIR}/api)
set(AVROCPP_SOURCE_DIR ${AVROCPP_ROOT_DIR}/impl)

set (CMAKE_CXX_STANDARD 17)

if (EXISTS ${AVROCPP_ROOT_DIR}/../../share/VERSION.txt)
    file(READ "${AVROCPP_ROOT_DIR}/../../share/VERSION.txt"
         AVRO_VERSION)
endif()

string(REPLACE "\n" "" AVRO_VERSION ${AVRO_VERSION})
set (AVRO_VERSION_MAJOR ${AVRO_VERSION})
set (AVRO_VERSION_MINOR "0")

set (AVROCPP_SOURCE_FILES
    ${AVROCPP_SOURCE_DIR}/Compiler.cc
    ${AVROCPP_SOURCE_DIR}/Node.cc
    ${AVROCPP_SOURCE_DIR}/LogicalType.cc
    ${AVROCPP_SOURCE_DIR}/NodeImpl.cc
    ${AVROCPP_SOURCE_DIR}/ResolverSchema.cc
    ${AVROCPP_SOURCE_DIR}/Schema.cc
    ${AVROCPP_SOURCE_DIR}/Types.cc
    ${AVROCPP_SOURCE_DIR}/ValidSchema.cc
    ${AVROCPP_SOURCE_DIR}/Zigzag.cc
    ${AVROCPP_SOURCE_DIR}/BinaryEncoder.cc
    ${AVROCPP_SOURCE_DIR}/BinaryDecoder.cc
    ${AVROCPP_SOURCE_DIR}/Stream.cc
    ${AVROCPP_SOURCE_DIR}/FileStream.cc
    ${AVROCPP_SOURCE_DIR}/Generic.cc
    ${AVROCPP_SOURCE_DIR}/GenericDatum.cc
    ${AVROCPP_SOURCE_DIR}/DataFile.cc
    ${AVROCPP_SOURCE_DIR}/parsing/Symbol.cc
    ${AVROCPP_SOURCE_DIR}/parsing/ValidatingCodec.cc
    ${AVROCPP_SOURCE_DIR}/parsing/JsonCodec.cc
    ${AVROCPP_SOURCE_DIR}/parsing/ResolvingDecoder.cc
    ${AVROCPP_SOURCE_DIR}/json/JsonIO.cc
    ${AVROCPP_SOURCE_DIR}/json/JsonDom.cc
    ${AVROCPP_SOURCE_DIR}/Resolver.cc
    ${AVROCPP_SOURCE_DIR}/Validator.cc
)

add_library (avrocpp ${AVROCPP_SOURCE_FILES})
set_target_properties (avrocpp PROPERTIES VERSION ${AVRO_VERSION_MAJOR}.${AVRO_VERSION_MINOR})

target_include_directories(avrocpp SYSTEM PUBLIC ${AVROCPP_INCLUDE_DIR})

target_include_directories(avrocpp SYSTEM PUBLIC ${Boost_INCLUDE_DIRS})
target_link_libraries (avrocpp ${Boost_IOSTREAMS_LIBRARY})

if (SNAPPY_INCLUDE_DIR AND SNAPPY_LIBRARY)
    target_compile_definitions (avrocpp PUBLIC SNAPPY_CODEC_AVAILABLE)
    target_include_directories (avrocpp PRIVATE ${SNAPPY_INCLUDE_DIR})
    target_link_libraries (avrocpp ${SNAPPY_LIBRARY})
endif ()

if (COMPILER_GCC)
    set (SUPPRESS_WARNINGS -Wno-non-virtual-dtor)
elseif (COMPILER_CLANG)
    set (SUPPRESS_WARNINGS -Wno-non-virtual-dtor)
endif ()

target_compile_options(avrocpp PRIVATE ${SUPPRESS_WARNINGS})

# create a symlink to include headers with <avro/...>
ADD_CUSTOM_TARGET(avro_symlink_headers ALL
    COMMAND ${CMAKE_COMMAND} -E make_directory ${AVROCPP_ROOT_DIR}/include
    COMMAND ${CMAKE_COMMAND} -E create_symlink ${AVROCPP_ROOT_DIR}/api ${AVROCPP_ROOT_DIR}/include/avro
)
add_dependencies(avrocpp avro_symlink_headers)
@ -102,4 +102,4 @@ if (OPENSSL_FOUND)
target_link_libraries(aws_s3 PRIVATE ${OPENSSL_LIBRARIES})
endif()

target_link_libraries(aws_s3 PRIVATE aws_s3_checksums libcurl)
target_link_libraries(aws_s3 PRIVATE aws_s3_checksums curl)
2
contrib/boost
vendored
@ -1 +1 @@
Subproject commit 830e51edb59c4f37a8638138581e1e56c29ac44f
Subproject commit 86be2aef20bee2356b744e5569eed6eaded85dbe
@ -37,3 +37,8 @@ target_link_libraries(boost_filesystem_internal PRIVATE boost_system_internal)
if (USE_INTERNAL_PARQUET_LIBRARY)
add_boost_lib(regex)
endif()

if (USE_INTERNAL_AVRO_LIBRARY)
add_boost_lib(iostreams)
target_link_libraries(boost_iostreams_internal PUBLIC ${ZLIB_LIBRARIES})
endif()
@ -1,61 +0,0 @@
include(CheckCSourceCompiles)

option(CURL_HIDDEN_SYMBOLS "Set to ON to hide libcurl internal symbols (=hide all symbols that aren't officially external)." ON)
mark_as_advanced(CURL_HIDDEN_SYMBOLS)

if(CURL_HIDDEN_SYMBOLS)
set(SUPPORTS_SYMBOL_HIDING FALSE)

if(CMAKE_C_COMPILER_ID MATCHES "Clang")
set(SUPPORTS_SYMBOL_HIDING TRUE)
set(_SYMBOL_EXTERN "__attribute__ ((__visibility__ (\"default\")))")
set(_CFLAG_SYMBOLS_HIDE "-fvisibility=hidden")
elseif(CMAKE_COMPILER_IS_GNUCC)
if(NOT CMAKE_VERSION VERSION_LESS 2.8.10)
set(GCC_VERSION ${CMAKE_C_COMPILER_VERSION})
else()
execute_process(COMMAND ${CMAKE_C_COMPILER} -dumpversion
OUTPUT_VARIABLE GCC_VERSION)
endif()
if(NOT GCC_VERSION VERSION_LESS 3.4)
# note: this is considered buggy prior to 4.0 but the autotools don't care, so let's ignore that fact
set(SUPPORTS_SYMBOL_HIDING TRUE)
set(_SYMBOL_EXTERN "__attribute__ ((__visibility__ (\"default\")))")
set(_CFLAG_SYMBOLS_HIDE "-fvisibility=hidden")
endif()
elseif(CMAKE_C_COMPILER_ID MATCHES "SunPro" AND NOT CMAKE_C_COMPILER_VERSION VERSION_LESS 8.0)
set(SUPPORTS_SYMBOL_HIDING TRUE)
set(_SYMBOL_EXTERN "__global")
set(_CFLAG_SYMBOLS_HIDE "-xldscope=hidden")
elseif(CMAKE_C_COMPILER_ID MATCHES "Intel" AND NOT CMAKE_C_COMPILER_VERSION VERSION_LESS 9.0)
# note: this should probably just check for version 9.1.045 but I'm not 100% sure
#       so let's do it the same way autotools do.
set(SUPPORTS_SYMBOL_HIDING TRUE)
set(_SYMBOL_EXTERN "__attribute__ ((__visibility__ (\"default\")))")
set(_CFLAG_SYMBOLS_HIDE "-fvisibility=hidden")
check_c_source_compiles("#include <stdio.h>
int main (void) { printf(\"icc fvisibility bug test\"); return 0; }" _no_bug)
if(NOT _no_bug)
set(SUPPORTS_SYMBOL_HIDING FALSE)
set(_SYMBOL_EXTERN "")
set(_CFLAG_SYMBOLS_HIDE "")
endif()
elseif(MSVC)
set(SUPPORTS_SYMBOL_HIDING TRUE)
endif()

set(HIDES_CURL_PRIVATE_SYMBOLS ${SUPPORTS_SYMBOL_HIDING})
elseif(MSVC)
if(NOT CMAKE_VERSION VERSION_LESS 3.7)
set(CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS TRUE) #present since 3.4.3 but broken
set(HIDES_CURL_PRIVATE_SYMBOLS FALSE)
else()
message(WARNING "Hiding private symbols regardless CURL_HIDDEN_SYMBOLS being disabled.")
set(HIDES_CURL_PRIVATE_SYMBOLS TRUE)
endif()
else()
set(HIDES_CURL_PRIVATE_SYMBOLS FALSE)
endif()

set(CURL_CFLAG_SYMBOLS_HIDE ${_CFLAG_SYMBOLS_HIDE})
set(CURL_EXTERN_SYMBOL ${_SYMBOL_EXTERN})
@ -1,617 +0,0 @@
/***************************************************************************
 *                                  _   _ ____  _
 *  Project                     ___| | | |  _ \| |
 *                             / __| | | | |_) | |
 *                            | (__| |_| |  _ <| |___
 *                             \___|\___/|_| \_\_____|
 *
 * Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
 *
 * This software is licensed as described in the file COPYING, which
 * you should have received as part of this distribution. The terms
 * are also available at https://curl.haxx.se/docs/copyright.html.
 *
 * You may opt to use, copy, modify, merge, publish, distribute and/or sell
 * copies of the Software, and permit persons to whom the Software is
 * furnished to do so, under the terms of the COPYING file.
 *
 * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
 * KIND, either express or implied.
 *
 ***************************************************************************/
#ifdef TIME_WITH_SYS_TIME
/* Time with sys/time test */

#include <sys/types.h>
#include <sys/time.h>
#include <time.h>

int
main ()
{
if ((struct tm *) 0)
return 0;
;
return 0;
}

#endif

#ifdef HAVE_FCNTL_O_NONBLOCK

/* headers for FCNTL_O_NONBLOCK test */
#include <sys/types.h>
#include <unistd.h>
#include <fcntl.h>
/* */
#if defined(sun) || defined(__sun__) || \
defined(__SUNPRO_C) || defined(__SUNPRO_CC)
# if defined(__SVR4) || defined(__srv4__)
# define PLATFORM_SOLARIS
# else
# define PLATFORM_SUNOS4
# endif
#endif
#if (defined(_AIX) || defined(__xlC__)) && !defined(_AIX41)
# define PLATFORM_AIX_V3
#endif
/* */
#if defined(PLATFORM_SUNOS4) || defined(PLATFORM_AIX_V3) || defined(__BEOS__)
#error "O_NONBLOCK does not work on this platform"
#endif

int
main ()
{
/* O_NONBLOCK source test */
int flags = 0;
if(0 != fcntl(0, F_SETFL, flags | O_NONBLOCK))
return 1;
return 0;
}
#endif

/* tests for gethostbyaddr_r or gethostbyname_r */
#if defined(HAVE_GETHOSTBYADDR_R_5_REENTRANT) || \
defined(HAVE_GETHOSTBYADDR_R_7_REENTRANT) || \
defined(HAVE_GETHOSTBYADDR_R_8_REENTRANT) || \
defined(HAVE_GETHOSTBYNAME_R_3_REENTRANT) || \
defined(HAVE_GETHOSTBYNAME_R_5_REENTRANT) || \
defined(HAVE_GETHOSTBYNAME_R_6_REENTRANT)
# define _REENTRANT
/* no idea whether _REENTRANT is always set, just invent a new flag */
# define TEST_GETHOSTBYFOO_REENTRANT
#endif
#if defined(HAVE_GETHOSTBYADDR_R_5) || \
defined(HAVE_GETHOSTBYADDR_R_7) || \
defined(HAVE_GETHOSTBYADDR_R_8) || \
defined(HAVE_GETHOSTBYNAME_R_3) || \
defined(HAVE_GETHOSTBYNAME_R_5) || \
defined(HAVE_GETHOSTBYNAME_R_6) || \
defined(TEST_GETHOSTBYFOO_REENTRANT)
#include <sys/types.h>
#include <netdb.h>
int main(void)
{
char *address = "example.com";
int length = 0;
int type = 0;
struct hostent h;
int rc = 0;
#if defined(HAVE_GETHOSTBYADDR_R_5) || \
defined(HAVE_GETHOSTBYADDR_R_5_REENTRANT) || \
\
defined(HAVE_GETHOSTBYNAME_R_3) || \
defined(HAVE_GETHOSTBYNAME_R_3_REENTRANT)
struct hostent_data hdata;
#elif defined(HAVE_GETHOSTBYADDR_R_7) || \
defined(HAVE_GETHOSTBYADDR_R_7_REENTRANT) || \
defined(HAVE_GETHOSTBYADDR_R_8) || \
defined(HAVE_GETHOSTBYADDR_R_8_REENTRANT) || \
\
defined(HAVE_GETHOSTBYNAME_R_5) || \
defined(HAVE_GETHOSTBYNAME_R_5_REENTRANT) || \
defined(HAVE_GETHOSTBYNAME_R_6) || \
defined(HAVE_GETHOSTBYNAME_R_6_REENTRANT)
char buffer[8192];
int h_errnop;
struct hostent *hp;
#endif

#ifndef gethostbyaddr_r
(void)gethostbyaddr_r;
#endif

#if defined(HAVE_GETHOSTBYADDR_R_5) || \
defined(HAVE_GETHOSTBYADDR_R_5_REENTRANT)
rc = gethostbyaddr_r(address, length, type, &h, &hdata);
(void)rc;
#elif defined(HAVE_GETHOSTBYADDR_R_7) || \
defined(HAVE_GETHOSTBYADDR_R_7_REENTRANT)
hp = gethostbyaddr_r(address, length, type, &h, buffer, 8192, &h_errnop);
(void)hp;
#elif defined(HAVE_GETHOSTBYADDR_R_8) || \
defined(HAVE_GETHOSTBYADDR_R_8_REENTRANT)
rc = gethostbyaddr_r(address, length, type, &h, buffer, 8192, &hp, &h_errnop);
(void)rc;
#endif

#if defined(HAVE_GETHOSTBYNAME_R_3) || \
defined(HAVE_GETHOSTBYNAME_R_3_REENTRANT)
rc = gethostbyname_r(address, &h, &hdata);
#elif defined(HAVE_GETHOSTBYNAME_R_5) || \
defined(HAVE_GETHOSTBYNAME_R_5_REENTRANT)
rc = gethostbyname_r(address, &h, buffer, 8192, &h_errnop);
(void)hp; /* not used for test */
#elif defined(HAVE_GETHOSTBYNAME_R_6) || \
defined(HAVE_GETHOSTBYNAME_R_6_REENTRANT)
rc = gethostbyname_r(address, &h, buffer, 8192, &hp, &h_errnop);
#endif

(void)length;
(void)type;
(void)rc;
return 0;
}
#endif

#ifdef HAVE_SOCKLEN_T
#ifdef _WIN32
#include <ws2tcpip.h>
#else
#include <sys/types.h>
#include <sys/socket.h>
#endif
int
main ()
{
if ((socklen_t *) 0)
return 0;
if (sizeof (socklen_t))
return 0;
;
return 0;
}
#endif
#ifdef HAVE_IN_ADDR_T
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>

int
main ()
{
if ((in_addr_t *) 0)
return 0;
if (sizeof (in_addr_t))
return 0;
;
return 0;
}
#endif

#ifdef HAVE_BOOL_T
#ifdef HAVE_SYS_TYPES_H
#include <sys/types.h>
#endif
#ifdef HAVE_STDBOOL_H
#include <stdbool.h>
#endif
int
main ()
{
if (sizeof (bool *) )
return 0;
;
return 0;
}
#endif

#ifdef STDC_HEADERS
#include <stdlib.h>
#include <stdarg.h>
#include <string.h>
#include <float.h>
int main() { return 0; }
#endif
#ifdef RETSIGTYPE_TEST
#include <sys/types.h>
#include <signal.h>
#ifdef signal
# undef signal
#endif
#ifdef __cplusplus
extern "C" void (*signal (int, void (*)(int)))(int);
#else
void (*signal ()) ();
#endif

int
main ()
{
return 0;
}
#endif
#ifdef HAVE_INET_NTOA_R_DECL
#include <arpa/inet.h>

typedef void (*func_type)();

int main()
{
#ifndef inet_ntoa_r
func_type func;
func = (func_type)inet_ntoa_r;
(void)func;
#endif
return 0;
}
#endif
#ifdef HAVE_INET_NTOA_R_DECL_REENTRANT
#define _REENTRANT
#include <arpa/inet.h>

typedef void (*func_type)();

int main()
{
#ifndef inet_ntoa_r
func_type func;
func = (func_type)&inet_ntoa_r;
(void)func;
#endif
return 0;
}
#endif
#ifdef HAVE_GETADDRINFO
#include <netdb.h>
#include <sys/types.h>
#include <sys/socket.h>

int main(void) {
struct addrinfo hints, *ai;
int error;

memset(&hints, 0, sizeof(hints));
hints.ai_family = AF_UNSPEC;
hints.ai_socktype = SOCK_STREAM;
#ifndef getaddrinfo
(void)getaddrinfo;
#endif
error = getaddrinfo("127.0.0.1", "8080", &hints, &ai);
if (error) {
return 1;
}
return 0;
}
#endif
#ifdef HAVE_FILE_OFFSET_BITS
#ifdef _FILE_OFFSET_BITS
#undef _FILE_OFFSET_BITS
#endif
#define _FILE_OFFSET_BITS 64
#include <sys/types.h>
/* Check that off_t can represent 2**63 - 1 correctly.
We can't simply define LARGE_OFF_T to be 9223372036854775807,
since some C++ compilers masquerading as C compilers
incorrectly reject 9223372036854775807. */
#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
&& LARGE_OFF_T % 2147483647 == 1)
? 1 : -1];
int main () { ; return 0; }
#endif
#ifdef HAVE_IOCTLSOCKET
/* includes start */
#ifdef HAVE_WINDOWS_H
# ifndef WIN32_LEAN_AND_MEAN
# define WIN32_LEAN_AND_MEAN
# endif
# include <windows.h>
# ifdef HAVE_WINSOCK2_H
# include <winsock2.h>
# else
# ifdef HAVE_WINSOCK_H
# include <winsock.h>
# endif
# endif
#endif

int
main ()
{

/* ioctlsocket source code */
int socket;
unsigned long flags = ioctlsocket(socket, FIONBIO, &flags);

;
return 0;
}

#endif
#ifdef HAVE_IOCTLSOCKET_CAMEL
/* includes start */
#ifdef HAVE_WINDOWS_H
# ifndef WIN32_LEAN_AND_MEAN
# define WIN32_LEAN_AND_MEAN
# endif
# include <windows.h>
# ifdef HAVE_WINSOCK2_H
# include <winsock2.h>
# else
# ifdef HAVE_WINSOCK_H
# include <winsock.h>
# endif
# endif
#endif

int
main ()
{

/* IoctlSocket source code */
if(0 != IoctlSocket(0, 0, 0))
return 1;
;
return 0;
}
#endif
#ifdef HAVE_IOCTLSOCKET_CAMEL_FIONBIO
/* includes start */
#ifdef HAVE_WINDOWS_H
# ifndef WIN32_LEAN_AND_MEAN
# define WIN32_LEAN_AND_MEAN
# endif
# include <windows.h>
# ifdef HAVE_WINSOCK2_H
# include <winsock2.h>
# else
# ifdef HAVE_WINSOCK_H
# include <winsock.h>
# endif
# endif
#endif

int
main ()
{

/* IoctlSocket source code */
long flags = 0;
if(0 != ioctlsocket(0, FIONBIO, &flags))
return 1;
;
return 0;
}
#endif
#ifdef HAVE_IOCTLSOCKET_FIONBIO
/* includes start */
#ifdef HAVE_WINDOWS_H
# ifndef WIN32_LEAN_AND_MEAN
# define WIN32_LEAN_AND_MEAN
# endif
# include <windows.h>
# ifdef HAVE_WINSOCK2_H
# include <winsock2.h>
# else
# ifdef HAVE_WINSOCK_H
# include <winsock.h>
# endif
# endif
#endif

int
main ()
{

int flags = 0;
if(0 != ioctlsocket(0, FIONBIO, &flags))
return 1;

;
return 0;
}
#endif
#ifdef HAVE_IOCTL_FIONBIO
/* headers for FIONBIO test */
/* includes start */
#ifdef HAVE_SYS_TYPES_H
# include <sys/types.h>
#endif
#ifdef HAVE_UNISTD_H
# include <unistd.h>
#endif
#ifdef HAVE_SYS_SOCKET_H
# include <sys/socket.h>
#endif
#ifdef HAVE_SYS_IOCTL_H
# include <sys/ioctl.h>
#endif
#ifdef HAVE_STROPTS_H
# include <stropts.h>
#endif

int
main ()
{

int flags = 0;
if(0 != ioctl(0, FIONBIO, &flags))
return 1;

;
return 0;
}
#endif
#ifdef HAVE_IOCTL_SIOCGIFADDR
/* headers for FIONBIO test */
/* includes start */
#ifdef HAVE_SYS_TYPES_H
# include <sys/types.h>
#endif
#ifdef HAVE_UNISTD_H
# include <unistd.h>
#endif
#ifdef HAVE_SYS_SOCKET_H
# include <sys/socket.h>
#endif
#ifdef HAVE_SYS_IOCTL_H
# include <sys/ioctl.h>
#endif
#ifdef HAVE_STROPTS_H
# include <stropts.h>
#endif
#include <net/if.h>

int
main ()
{
struct ifreq ifr;
if(0 != ioctl(0, SIOCGIFADDR, &ifr))
return 1;

;
return 0;
}
#endif
#ifdef HAVE_SETSOCKOPT_SO_NONBLOCK
/* includes start */
#ifdef HAVE_WINDOWS_H
# ifndef WIN32_LEAN_AND_MEAN
# define WIN32_LEAN_AND_MEAN
# endif
# include <windows.h>
# ifdef HAVE_WINSOCK2_H
# include <winsock2.h>
# else
# ifdef HAVE_WINSOCK_H
# include <winsock.h>
# endif
# endif
#endif
/* includes start */
#ifdef HAVE_SYS_TYPES_H
# include <sys/types.h>
#endif
#ifdef HAVE_SYS_SOCKET_H
# include <sys/socket.h>
#endif
/* includes end */

int
main ()
{
if(0 != setsockopt(0, SOL_SOCKET, SO_NONBLOCK, 0, 0))
return 1;
;
return 0;
}
#endif
#ifdef HAVE_GLIBC_STRERROR_R
#include <string.h>
#include <errno.h>

void check(char c) {}

int
main () {
char buffer[1024];
/* This will not compile if strerror_r does not return a char* */
check(strerror_r(EACCES, buffer, sizeof(buffer))[0]);
return 0;
}
#endif
#ifdef HAVE_POSIX_STRERROR_R
#include <string.h>
#include <errno.h>

/* float, because a pointer can't be implicitly cast to float */
void check(float f) {}

int
main () {
char buffer[1024];
/* This will not compile if strerror_r does not return an int */
check(strerror_r(EACCES, buffer, sizeof(buffer)));
return 0;
}
#endif
#ifdef HAVE_FSETXATTR_6
#include <sys/xattr.h> /* header from libc, not from libattr */
int
main() {
fsetxattr(0, 0, 0, 0, 0, 0);
return 0;
}
#endif
#ifdef HAVE_FSETXATTR_5
#include <sys/xattr.h> /* header from libc, not from libattr */
int
main() {
fsetxattr(0, 0, 0, 0, 0);
return 0;
}
#endif
#ifdef HAVE_CLOCK_GETTIME_MONOTONIC
#include <time.h>
int
main() {
struct timespec ts = {0, 0};
clock_gettime(CLOCK_MONOTONIC, &ts);
return 0;
}
#endif
#ifdef HAVE_BUILTIN_AVAILABLE
int
main() {
if(__builtin_available(macOS 10.12, *)) {}
return 0;
}
#endif
#ifdef HAVE_VARIADIC_MACROS_C99
#define c99_vmacro3(first, ...) fun3(first, __VA_ARGS__)
#define c99_vmacro2(first, ...) fun2(first, __VA_ARGS__)

int fun3(int arg1, int arg2, int arg3);
int fun2(int arg1, int arg2);

int fun3(int arg1, int arg2, int arg3) {
return arg1 + arg2 + arg3;
}
int fun2(int arg1, int arg2) {
return arg1 + arg2;
}

int
main() {
int res3 = c99_vmacro3(1, 2, 3);
int res2 = c99_vmacro2(1, 2);
(void)res3;
(void)res2;
return 0;
}
#endif
#ifdef HAVE_VARIADIC_MACROS_GCC
#define gcc_vmacro3(first, args...) fun3(first, args)
#define gcc_vmacro2(first, args...) fun2(first, args)

int fun3(int arg1, int arg2, int arg3);
int fun2(int arg1, int arg2);

int fun3(int arg1, int arg2, int arg3) {
return arg1 + arg2 + arg3;
}
int fun2(int arg1, int arg2) {
return arg1 + arg2;
}

int
main() {
int res3 = gcc_vmacro3(1, 2, 3);
int res2 = gcc_vmacro2(1, 2);
(void)res3;
(void)res2;
return 0;
}
#endif
@ -1,84 +0,0 @@
#File defines convenience macros for available feature testing

# This macro checks if the symbol exists in the library and if it
# does, it prepends library to the list.  It is intended to be called
# multiple times with a sequence of possibly dependent libraries in
# order of least-to-most-dependent.  Some libraries depend on others
# to link correctly.
macro(check_library_exists_concat LIBRARY SYMBOL VARIABLE)
check_library_exists("${LIBRARY};${CURL_LIBS}" ${SYMBOL} "${CMAKE_LIBRARY_PATH}"
${VARIABLE})
if(${VARIABLE})
set(CURL_LIBS ${LIBRARY} ${CURL_LIBS})
endif()
endmacro()

# Check if header file exists and add it to the list.
# This macro is intended to be called multiple times with a sequence of
# possibly dependent header files.  Some headers depend on others to be
# compiled correctly.
macro(check_include_file_concat FILE VARIABLE)
check_include_files("${CURL_INCLUDES};${FILE}" ${VARIABLE})
if(${VARIABLE})
set(CURL_INCLUDES ${CURL_INCLUDES} ${FILE})
set(CURL_TEST_DEFINES "${CURL_TEST_DEFINES} -D${VARIABLE}")
endif()
endmacro()

# For other curl specific tests, use this macro.
macro(curl_internal_test CURL_TEST)
if(NOT DEFINED "${CURL_TEST}")
set(MACRO_CHECK_FUNCTION_DEFINITIONS
"-D${CURL_TEST} ${CURL_TEST_DEFINES} ${CMAKE_REQUIRED_FLAGS}")
if(CMAKE_REQUIRED_LIBRARIES)
set(CURL_TEST_ADD_LIBRARIES
"-DLINK_LIBRARIES:STRING=${CMAKE_REQUIRED_LIBRARIES}")
endif()

try_compile(${CURL_TEST}
${CMAKE_BINARY_DIR}
${CMAKE_CURRENT_SOURCE_DIR}/CMake/CurlTests.c
CMAKE_FLAGS -DCOMPILE_DEFINITIONS:STRING=${MACRO_CHECK_FUNCTION_DEFINITIONS}
"${CURL_TEST_ADD_LIBRARIES}"
OUTPUT_VARIABLE OUTPUT)
if(${CURL_TEST})
set(${CURL_TEST} 1 CACHE INTERNAL "Curl test ${FUNCTION}")
file(APPEND ${CMAKE_BINARY_DIR}${CMAKE_FILES_DIRECTORY}/CMakeOutput.log
"Performing Curl Test ${CURL_TEST} passed with the following output:\n"
"${OUTPUT}\n")
else()
set(${CURL_TEST} "" CACHE INTERNAL "Curl test ${FUNCTION}")
file(APPEND ${CMAKE_BINARY_DIR}${CMAKE_FILES_DIRECTORY}/CMakeError.log
"Performing Curl Test ${CURL_TEST} failed with the following output:\n"
"${OUTPUT}\n")
endif()
endif()
endmacro()

macro(curl_nroff_check)
find_program(NROFF NAMES gnroff nroff)
if(NROFF)
# Need a way to write to stdin, this will do
file(WRITE "${CMAKE_CURRENT_BINARY_DIR}/nroff-input.txt" "test")
# Tests for a valid nroff option to generate a manpage
foreach(_MANOPT "-man" "-mandoc")
execute_process(COMMAND "${NROFF}" ${_MANOPT}
OUTPUT_VARIABLE NROFF_MANOPT_OUTPUT
INPUT_FILE "${CMAKE_CURRENT_BINARY_DIR}/nroff-input.txt"
ERROR_QUIET)
# Save the option if it was valid
if(NROFF_MANOPT_OUTPUT)
set(NROFF_MANOPT ${_MANOPT})
set(NROFF_USEFUL ON)
break()
endif()
endforeach()
# No need for the temporary file
file(REMOVE "${CMAKE_CURRENT_BINARY_DIR}/nroff-input.txt")
if(NOT NROFF_USEFUL)
message(WARNING "Found no *nroff option to get plaintext from man pages")
endif()
else()
message(WARNING "Found no *nroff program")
endif()
endmacro()
@ -1,260 +0,0 @@
include(CheckCSourceCompiles)
# The begin of the sources (macros and includes)
set(_source_epilogue "#undef inline")

macro(add_header_include check header)
if(${check})
set(_source_epilogue "${_source_epilogue}\n#include <${header}>")
endif()
endmacro()

set(signature_call_conv)
if(HAVE_WINDOWS_H)
add_header_include(HAVE_WINSOCK2_H "winsock2.h")
add_header_include(HAVE_WINDOWS_H "windows.h")
add_header_include(HAVE_WINSOCK_H "winsock.h")
set(_source_epilogue
"${_source_epilogue}\n#ifndef WIN32_LEAN_AND_MEAN\n#define WIN32_LEAN_AND_MEAN\n#endif")
set(signature_call_conv "PASCAL")
if(HAVE_LIBWS2_32)
set(CMAKE_REQUIRED_LIBRARIES ws2_32)
endif()
else()
add_header_include(HAVE_SYS_TYPES_H "sys/types.h")
add_header_include(HAVE_SYS_SOCKET_H "sys/socket.h")
endif()

set(CMAKE_TRY_COMPILE_TARGET_TYPE STATIC_LIBRARY)

check_c_source_compiles("${_source_epilogue}
int main(void) {
recv(0, 0, 0, 0);
return 0;
}" curl_cv_recv)
if(curl_cv_recv)
if(NOT DEFINED curl_cv_func_recv_args OR "${curl_cv_func_recv_args}" STREQUAL "unknown")
foreach(recv_retv "int" "ssize_t" )
foreach(recv_arg1 "SOCKET" "int" )
foreach(recv_arg2 "char *" "void *" )
foreach(recv_arg3 "int" "size_t" "socklen_t" "unsigned int")
foreach(recv_arg4 "int" "unsigned int")
if(NOT curl_cv_func_recv_done)
unset(curl_cv_func_recv_test CACHE)
check_c_source_compiles("
${_source_epilogue}
extern ${recv_retv} ${signature_call_conv}
recv(${recv_arg1}, ${recv_arg2}, ${recv_arg3}, ${recv_arg4});
int main(void) {
${recv_arg1} s=0;
${recv_arg2} buf=0;
${recv_arg3} len=0;
${recv_arg4} flags=0;
${recv_retv} res = recv(s, buf, len, flags);
(void) res;
return 0;
}"
curl_cv_func_recv_test)
if(curl_cv_func_recv_test)
set(curl_cv_func_recv_args
"${recv_arg1},${recv_arg2},${recv_arg3},${recv_arg4},${recv_retv}")
set(RECV_TYPE_ARG1 "${recv_arg1}")
set(RECV_TYPE_ARG2 "${recv_arg2}")
set(RECV_TYPE_ARG3 "${recv_arg3}")
set(RECV_TYPE_ARG4 "${recv_arg4}")
set(RECV_TYPE_RETV "${recv_retv}")
set(HAVE_RECV 1)
set(curl_cv_func_recv_done 1)
endif()
endif()
endforeach()
endforeach()
endforeach()
endforeach()
endforeach()
else()
string(REGEX REPLACE "^([^,]*),[^,]*,[^,]*,[^,]*,[^,]*$" "\\1" RECV_TYPE_ARG1 "${curl_cv_func_recv_args}")
string(REGEX REPLACE "^[^,]*,([^,]*),[^,]*,[^,]*,[^,]*$" "\\1" RECV_TYPE_ARG2 "${curl_cv_func_recv_args}")
string(REGEX REPLACE "^[^,]*,[^,]*,([^,]*),[^,]*,[^,]*$" "\\1" RECV_TYPE_ARG3 "${curl_cv_func_recv_args}")
string(REGEX REPLACE "^[^,]*,[^,]*,[^,]*,([^,]*),[^,]*$" "\\1" RECV_TYPE_ARG4 "${curl_cv_func_recv_args}")
string(REGEX REPLACE "^[^,]*,[^,]*,[^,]*,[^,]*,([^,]*)$" "\\1" RECV_TYPE_RETV "${curl_cv_func_recv_args}")
endif()

if("${curl_cv_func_recv_args}" STREQUAL "unknown")
message(FATAL_ERROR "Cannot find proper types to use for recv args")
endif()
else()
message(FATAL_ERROR "Unable to link function recv")
endif()
set(curl_cv_func_recv_args "${curl_cv_func_recv_args}" CACHE INTERNAL "Arguments for recv")
set(HAVE_RECV 1)

check_c_source_compiles("${_source_epilogue}
int main(void) {
send(0, 0, 0, 0);
return 0;
}" curl_cv_send)
if(curl_cv_send)
if(NOT DEFINED curl_cv_func_send_args OR "${curl_cv_func_send_args}" STREQUAL "unknown")
foreach(send_retv "int" "ssize_t" )
foreach(send_arg1 "SOCKET" "int" "ssize_t" )
foreach(send_arg2 "const char *" "const void *" "void *" "char *")
foreach(send_arg3 "int" "size_t" "socklen_t" "unsigned int")
foreach(send_arg4 "int" "unsigned int")
if(NOT curl_cv_func_send_done)
unset(curl_cv_func_send_test CACHE)
check_c_source_compiles("
${_source_epilogue}
extern ${send_retv} ${signature_call_conv}
send(${send_arg1}, ${send_arg2}, ${send_arg3}, ${send_arg4});
int main(void) {
${send_arg1} s=0;
${send_arg2} buf=0;
${send_arg3} len=0;
${send_arg4} flags=0;
${send_retv} res = send(s, buf, len, flags);
(void) res;
return 0;
}"
curl_cv_func_send_test)
if(curl_cv_func_send_test)
string(REGEX REPLACE "(const) .*" "\\1" send_qual_arg2 "${send_arg2}")
string(REGEX REPLACE "const (.*)" "\\1" send_arg2 "${send_arg2}")
set(curl_cv_func_send_args
"${send_arg1},${send_arg2},${send_arg3},${send_arg4},${send_retv},${send_qual_arg2}")
set(SEND_TYPE_ARG1 "${send_arg1}")
set(SEND_TYPE_ARG2 "${send_arg2}")
set(SEND_TYPE_ARG3 "${send_arg3}")
set(SEND_TYPE_ARG4 "${send_arg4}")
set(SEND_TYPE_RETV "${send_retv}")
set(HAVE_SEND 1)
set(curl_cv_func_send_done 1)
endif()
endif()
endforeach()
endforeach()
endforeach()
endforeach()
endforeach()
else()
string(REGEX REPLACE "^([^,]*),[^,]*,[^,]*,[^,]*,[^,]*,[^,]*$" "\\1" SEND_TYPE_ARG1 "${curl_cv_func_send_args}")
string(REGEX REPLACE "^[^,]*,([^,]*),[^,]*,[^,]*,[^,]*,[^,]*$" "\\1" SEND_TYPE_ARG2 "${curl_cv_func_send_args}")
string(REGEX REPLACE "^[^,]*,[^,]*,([^,]*),[^,]*,[^,]*,[^,]*$" "\\1" SEND_TYPE_ARG3 "${curl_cv_func_send_args}")
string(REGEX REPLACE "^[^,]*,[^,]*,[^,]*,([^,]*),[^,]*,[^,]*$" "\\1" SEND_TYPE_ARG4 "${curl_cv_func_send_args}")
string(REGEX REPLACE "^[^,]*,[^,]*,[^,]*,[^,]*,([^,]*),[^,]*$" "\\1" SEND_TYPE_RETV "${curl_cv_func_send_args}")
string(REGEX REPLACE "^[^,]*,[^,]*,[^,]*,[^,]*,[^,]*,([^,]*)$" "\\1" SEND_QUAL_ARG2 "${curl_cv_func_send_args}")
endif()

if("${curl_cv_func_send_args}" STREQUAL "unknown")
message(FATAL_ERROR "Cannot find proper types to use for send args")
endif()
set(SEND_QUAL_ARG2 "const")
else()
message(FATAL_ERROR "Unable to link function send")
endif()
set(curl_cv_func_send_args "${curl_cv_func_send_args}" CACHE INTERNAL "Arguments for send")
set(HAVE_SEND 1)

check_c_source_compiles("${_source_epilogue}
int main(void) {
int flag = MSG_NOSIGNAL;
(void)flag;
return 0;
}" HAVE_MSG_NOSIGNAL)

if(NOT HAVE_WINDOWS_H)
add_header_include(HAVE_SYS_TIME_H "sys/time.h")
add_header_include(TIME_WITH_SYS_TIME "time.h")
add_header_include(HAVE_TIME_H "time.h")
endif()
check_c_source_compiles("${_source_epilogue}
int main(void) {
struct timeval ts;
ts.tv_sec  = 0;
ts.tv_usec = 0;
(void)ts;
return 0;
}" HAVE_STRUCT_TIMEVAL)

set(HAVE_SIG_ATOMIC_T 1)
set(CMAKE_REQUIRED_FLAGS)
if(HAVE_SIGNAL_H)
set(CMAKE_REQUIRED_FLAGS "-DHAVE_SIGNAL_H")
set(CMAKE_EXTRA_INCLUDE_FILES "signal.h")
endif()
check_type_size("sig_atomic_t" SIZEOF_SIG_ATOMIC_T)
if(HAVE_SIZEOF_SIG_ATOMIC_T)
check_c_source_compiles("
#ifdef HAVE_SIGNAL_H
#  include <signal.h>
#endif
int main(void) {
static volatile sig_atomic_t dummy = 0;
(void)dummy;
return 0;
}" HAVE_SIG_ATOMIC_T_NOT_VOLATILE)
if(NOT HAVE_SIG_ATOMIC_T_NOT_VOLATILE)
set(HAVE_SIG_ATOMIC_T_VOLATILE 1)
endif()
endif()

if(HAVE_WINDOWS_H)
set(CMAKE_EXTRA_INCLUDE_FILES winsock2.h)
else()
set(CMAKE_EXTRA_INCLUDE_FILES)
if(HAVE_SYS_SOCKET_H)
set(CMAKE_EXTRA_INCLUDE_FILES sys/socket.h)
endif()
endif()

check_type_size("struct sockaddr_storage" SIZEOF_STRUCT_SOCKADDR_STORAGE)
if(HAVE_SIZEOF_STRUCT_SOCKADDR_STORAGE)
set(HAVE_STRUCT_SOCKADDR_STORAGE 1)
endif()

unset(CMAKE_TRY_COMPILE_TARGET_TYPE)

if(NOT DEFINED CMAKE_TOOLCHAIN_FILE)
# if not cross-compilation...
include(CheckCSourceRuns)
set(CMAKE_REQUIRED_FLAGS "")
if(HAVE_SYS_POLL_H)
set(CMAKE_REQUIRED_FLAGS "-DHAVE_SYS_POLL_H")
elseif(HAVE_POLL_H)
set(CMAKE_REQUIRED_FLAGS "-DHAVE_POLL_H")
endif()
check_c_source_runs("
#include <stdlib.h>
#include <sys/time.h>

#ifdef HAVE_SYS_POLL_H
#  include <sys/poll.h>
#elif  HAVE_POLL_H
#  include <poll.h>
#endif

int main(void)
{
if(0 != poll(0, 0, 10)) {
return 1; /* fail */
}
else {
/* detect the 10.12 poll() breakage */
struct timeval before, after;
int rc;
size_t us;

gettimeofday(&before, NULL);
rc = poll(NULL, 0, 500);
gettimeofday(&after, NULL);

us = (after.tv_sec - before.tv_sec) * 1000000 +
(after.tv_usec - before.tv_usec);

if(us < 400000) {
return 1;
}
}
return 0;
}" HAVE_POLL_FINE)
endif()
@ -1,28 +0,0 @@
#***************************************************************************
#                                  _   _ ____  _
#  Project                     ___| | | |  _ \| |
#                             / __| | | | |_) | |
#                            | (__| |_| |  _ <| |___
#                             \___|\___/|_| \_\_____|
#
# Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
# are also available at https://curl.haxx.se/docs/copyright.html.
#
# You may opt to use, copy, modify, merge, publish, distribute and/or sell
# copies of the Software, and permit persons to whom the Software is
# furnished to do so, under the terms of the COPYING file.
#
# This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
# KIND, either express or implied.
#
###########################################################################

CMake files under this directory were reused from the curl project.
Here are links to the original source files:
https://github.com/curl/curl/blob/master/CMake/CurlSymbolHiding.cmake
https://github.com/curl/curl/blob/master/CMake/CurlTests.c
https://github.com/curl/curl/blob/master/CMake/Macros.cmake
https://github.com/curl/curl/blob/master/CMake/OtherTests.cmake
File diff suppressed because it is too large
@ -1,605 +1,148 @@
#***************************************************************************
#                                  _   _ ____  _
#  Project                     ___| | | | _ \| |
#                             / __| | | | |_) | |
#                            | (__| |_| |  _ <| |___
#                             \___|\___/|_| \_\_____|
#
# Copyright (C) 1998 - 2019, Daniel Stenberg, <daniel@haxx.se>, et al.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
# are also available at https://curl.haxx.se/docs/copyright.html.
#
# You may opt to use, copy, modify, merge, publish, distribute and/or sell
# copies of the Software, and permit persons to whom the Software is
# furnished to do so, under the terms of the COPYING file.
#
# This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
# KIND, either express or implied.
#
###########################################################################

# NOTE:
# This file is a shrunken and reworked version of the original curl CMakeLists.txt
# Original file link https://github.com/curl/curl/blob/3b8bbbbd1609c638a3d3d0acb148a33dedb67be3/CMakeLists.txt
# If you need to update the curl build you can find the patch file in this directory
# and apply it to a fresh original CMakeLists.txt file.
cmake_minimum_required(VERSION 3.0 FATAL_ERROR)

SET(CURL_SOURCE_DIR ${ClickHouse_SOURCE_DIR}/contrib/curl)
SET(CURL_LIBRARY_DIR ${CURL_SOURCE_DIR}/lib)
set(CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/CMake;${CMAKE_MODULE_PATH}")
# Disable status messages when performing checks
set(CMAKE_REQUIRED_QUIET TRUE)

include(Macros)
include(CMakeDependentOption)
include(CheckCCompilerFlag)

file(READ ${CURL_SOURCE_DIR}/include/curl/curlver.h CURL_VERSION_H_CONTENTS)
string(REGEX MATCH "#define LIBCURL_VERSION \"[^\"]*"
  CURL_VERSION ${CURL_VERSION_H_CONTENTS})
string(REGEX REPLACE "[^\"]+\"" "" CURL_VERSION ${CURL_VERSION})
string(REGEX MATCH "#define LIBCURL_VERSION_NUM 0x[0-9a-fA-F]+"
  CURL_VERSION_NUM ${CURL_VERSION_H_CONTENTS})
string(REGEX REPLACE "[^0]+0x" "" CURL_VERSION_NUM ${CURL_VERSION_NUM})

message(STATUS "Use curl version=[${CURL_VERSION}]")
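For reference, the two regex pairs above pick the version string and the hexadecimal version number out of lines in include/curl/curlver.h shaped like the following (the values here are illustrative, not necessarily those of the pinned submodule):

/* Excerpt of curlver.h as matched by the REGEX MATCH/REPLACE calls above: */
#define LIBCURL_VERSION "7.67.0"
#define LIBCURL_VERSION_NUM 0x074300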

set(OPERATING_SYSTEM "${CMAKE_SYSTEM_NAME}")
set(OS "\"${CMAKE_SYSTEM_NAME}\"")

option(PICKY_COMPILER "Enable picky compiler options" ON)
option(ENABLE_THREADED_RESOLVER "Set to ON to enable threaded DNS lookup" ON)

if(CMAKE_COMPILER_IS_GNUCC OR CMAKE_COMPILER_IS_CLANG)
  if(PICKY_COMPILER)
    foreach(_CCOPT -pedantic -Wall -W -Wpointer-arith -Wwrite-strings -Wunused -Wshadow -Winline -Wnested-externs -Wmissing-declarations -Wmissing-prototypes -Wno-long-long -Wfloat-equal -Wno-multichar -Wsign-compare -Wundef -Wno-format-nonliteral -Wendif-labels -Wstrict-prototypes -Wdeclaration-after-statement -Wstrict-aliasing=3 -Wcast-align -Wtype-limits -Wold-style-declaration -Wmissing-parameter-type -Wempty-body -Wclobbered -Wignored-qualifiers -Wconversion -Wno-sign-conversion -Wvla -Wdouble-promotion -Wno-system-headers -Wno-pedantic-ms-format)
      # surprisingly, CHECK_C_COMPILER_FLAG needs a new variable to store each new
      # test result in.
      check_c_compiler_flag(${_CCOPT} OPT${_CCOPT})
      if(OPT${_CCOPT})
        set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${_CCOPT}")
      endif()
    endforeach()
  endif()
endif()

# initialize CURL_LIBS
set(CURL_LIBS "")

include(CurlSymbolHiding)

# HTTP only
set(CURL_DISABLE_FTP ON)
set(CURL_DISABLE_LDAP ON)
set(CURL_DISABLE_LDAPS ON)
set(CURL_DISABLE_TELNET ON)
set(CURL_DISABLE_DICT ON)
set(CURL_DISABLE_FILE ON)
set(CURL_DISABLE_TFTP ON)
set(CURL_DISABLE_RTSP ON)
set(CURL_DISABLE_POP3 ON)
set(CURL_DISABLE_IMAP ON)
set(CURL_DISABLE_SMTP ON)
set(CURL_DISABLE_GOPHER ON)

option(CURL_DISABLE_COOKIES "to disable cookies support" OFF)
mark_as_advanced(CURL_DISABLE_COOKIES)

option(CURL_DISABLE_CRYPTO_AUTH "to disable cryptographic authentication" OFF)
mark_as_advanced(CURL_DISABLE_CRYPTO_AUTH)

option(CURL_DISABLE_VERBOSE_STRINGS "to disable verbose strings" OFF)
mark_as_advanced(CURL_DISABLE_VERBOSE_STRINGS)

option(ENABLE_IPV6 "Define if you want to enable IPv6 support" ON)
mark_as_advanced(ENABLE_IPV6)

if(ENABLE_IPV6 AND NOT WIN32)
  include(CheckStructHasMember)
  check_struct_has_member("struct sockaddr_in6" sin6_addr "netinet/in.h"
    HAVE_SOCKADDR_IN6_SIN6_ADDR)
  check_struct_has_member("struct sockaddr_in6" sin6_scope_id "netinet/in.h"
    HAVE_SOCKADDR_IN6_SIN6_SCOPE_ID)
  if(NOT HAVE_SOCKADDR_IN6_SIN6_ADDR)
    message(WARNING "struct sockaddr_in6 not available, disabling IPv6 support")
    # Force the feature off as this name is used as guard macro...
    set(ENABLE_IPV6 OFF
        CACHE BOOL "Define if you want to enable IPv6 support" FORCE)
  endif()
endif()

# We need ansi c-flags, especially on HP
set(CMAKE_C_FLAGS "${CMAKE_ANSI_CFLAGS} ${CMAKE_C_FLAGS}")
set(CMAKE_REQUIRED_FLAGS ${CMAKE_ANSI_CFLAGS})

# Include all the necessary files for macros
include(CheckFunctionExists)
include(CheckIncludeFile)
include(CheckIncludeFiles)
include(CheckLibraryExists)
include(CheckSymbolExists)
include(CheckTypeSize)
include(CheckCSourceCompiles)

if(ENABLE_THREADED_RESOLVER)
  find_package(Threads REQUIRED)
  set(USE_THREADS_POSIX ${CMAKE_USE_PTHREADS_INIT})
  set(HAVE_PTHREAD_H ${CMAKE_USE_PTHREADS_INIT})
  set(CURL_LIBS ${CURL_LIBS} ${CMAKE_THREAD_LIBS_INIT})
endif()

# Check for all needed libraries

# We don't want any plugin loading at runtime. It is harmful.
#check_library_exists_concat("${CMAKE_DL_LIBS}" dlopen HAVE_LIBDL)

# This is unneeded.
#check_library_exists_concat("socket" connect HAVE_LIBSOCKET)

set (NOT_NEED_LIBNSL 1)
set (gethostname HAVE_GETHOSTNAME 1)

# From cmake/find/ssl.cmake
if (OPENSSL_FOUND)
  set(SSL_ENABLED ON)
  set(USE_OPENSSL ON)

  list(APPEND CURL_LIBS ${OPENSSL_LIBRARIES})
  check_include_file("openssl/crypto.h" HAVE_OPENSSL_CRYPTO_H)
  check_include_file("openssl/err.h" HAVE_OPENSSL_ERR_H)
  check_include_file("openssl/pem.h" HAVE_OPENSSL_PEM_H)
  check_include_file("openssl/rsa.h" HAVE_OPENSSL_RSA_H)
  check_include_file("openssl/ssl.h" HAVE_OPENSSL_SSL_H)
  check_include_file("openssl/x509.h" HAVE_OPENSSL_X509_H)
  check_include_file("openssl/rand.h" HAVE_OPENSSL_RAND_H)
  check_symbol_exists(RAND_status "${CURL_INCLUDES}" HAVE_RAND_STATUS)
  check_symbol_exists(RAND_screen "${CURL_INCLUDES}" HAVE_RAND_SCREEN)
  check_symbol_exists(RAND_egd "${CURL_INCLUDES}" HAVE_RAND_EGD)
endif()

# Check for idn
# No, we don't need that.
# check_library_exists_concat("idn2" idn2_lookup_ul HAVE_LIBIDN2)

# Check for symbol dlopen (same as HAVE_LIBDL)
# We don't want any plugin loading at runtime. It is harmful.
# check_library_exists("${CURL_LIBS}" dlopen "" HAVE_DLOPEN)

# From /cmake/find/zlib.cmake
if (ZLIB_FOUND)
  set(HAVE_ZLIB_H ON)
  set(HAVE_LIBZ ON)
  set(USE_ZLIB ON)

  list(APPEND CURL_LIBS ${ZLIB_LIBRARIES})
endif()

option(ENABLE_UNIX_SOCKETS "Define if you want Unix domain sockets support" OFF)
if(ENABLE_UNIX_SOCKETS)
  include(CheckStructHasMember)
  check_struct_has_member("struct sockaddr_un" sun_path "sys/un.h" USE_UNIX_SOCKETS)
else()
  unset(USE_UNIX_SOCKETS CACHE)
endif()

# CA handling
# Explicitly set to most common case
if (OPENSSL_FOUND)
  set(CURL_CA_BUNDLE "/etc/ssl/certs/ca-certificates.crt")
  set(CURL_CA_BUNDLE_SET TRUE CACHE BOOL "Path to the CA bundle has been set")
  set(CURL_CA_PATH "/etc/ssl/certs")
  set(CURL_CA_PATH_SET TRUE CACHE BOOL "Path to the CA bundle has been set")
endif()

check_include_file_concat("stdio.h" HAVE_STDIO_H)
check_include_file_concat("inttypes.h" HAVE_INTTYPES_H)
check_include_file_concat("sys/filio.h" HAVE_SYS_FILIO_H)
check_include_file_concat("sys/ioctl.h" HAVE_SYS_IOCTL_H)
check_include_file_concat("sys/param.h" HAVE_SYS_PARAM_H)
check_include_file_concat("sys/poll.h" HAVE_SYS_POLL_H)
check_include_file_concat("sys/resource.h" HAVE_SYS_RESOURCE_H)
check_include_file_concat("sys/select.h" HAVE_SYS_SELECT_H)
check_include_file_concat("sys/socket.h" HAVE_SYS_SOCKET_H)
check_include_file_concat("sys/sockio.h" HAVE_SYS_SOCKIO_H)
check_include_file_concat("sys/stat.h" HAVE_SYS_STAT_H)
check_include_file_concat("sys/time.h" HAVE_SYS_TIME_H)
check_include_file_concat("sys/types.h" HAVE_SYS_TYPES_H)
check_include_file_concat("sys/uio.h" HAVE_SYS_UIO_H)
check_include_file_concat("sys/un.h" HAVE_SYS_UN_H)
check_include_file_concat("sys/utime.h" HAVE_SYS_UTIME_H)
check_include_file_concat("sys/xattr.h" HAVE_SYS_XATTR_H)
check_include_file_concat("alloca.h" HAVE_ALLOCA_H)
check_include_file_concat("arpa/inet.h" HAVE_ARPA_INET_H)
#check_include_file_concat("arpa/tftp.h" HAVE_ARPA_TFTP_H)
check_include_file_concat("assert.h" HAVE_ASSERT_H)
check_include_file_concat("crypto.h" HAVE_CRYPTO_H)
check_include_file_concat("des.h" HAVE_DES_H)
check_include_file_concat("err.h" HAVE_ERR_H)
check_include_file_concat("errno.h" HAVE_ERRNO_H)
check_include_file_concat("fcntl.h" HAVE_FCNTL_H)
#check_include_file_concat("idn2.h" HAVE_IDN2_H)
check_include_file_concat("ifaddrs.h" HAVE_IFADDRS_H)
check_include_file_concat("io.h" HAVE_IO_H)
check_include_file_concat("krb.h" HAVE_KRB_H)
check_include_file_concat("libgen.h" HAVE_LIBGEN_H)
check_include_file_concat("locale.h" HAVE_LOCALE_H)
check_include_file_concat("net/if.h" HAVE_NET_IF_H)
check_include_file_concat("netdb.h" HAVE_NETDB_H)
check_include_file_concat("netinet/in.h" HAVE_NETINET_IN_H)
check_include_file_concat("netinet/tcp.h" HAVE_NETINET_TCP_H)

check_include_file_concat("pem.h" HAVE_PEM_H)
check_include_file_concat("poll.h" HAVE_POLL_H)
check_include_file_concat("pwd.h" HAVE_PWD_H)
check_include_file_concat("rsa.h" HAVE_RSA_H)
check_include_file_concat("setjmp.h" HAVE_SETJMP_H)
check_include_file_concat("sgtty.h" HAVE_SGTTY_H)
check_include_file_concat("signal.h" HAVE_SIGNAL_H)
check_include_file_concat("ssl.h" HAVE_SSL_H)
check_include_file_concat("stdbool.h" HAVE_STDBOOL_H)
check_include_file_concat("stdint.h" HAVE_STDINT_H)
check_include_file_concat("stdio.h" HAVE_STDIO_H)
check_include_file_concat("stdlib.h" HAVE_STDLIB_H)
check_include_file_concat("string.h" HAVE_STRING_H)
check_include_file_concat("strings.h" HAVE_STRINGS_H)
check_include_file_concat("stropts.h" HAVE_STROPTS_H)
check_include_file_concat("termio.h" HAVE_TERMIO_H)
check_include_file_concat("termios.h" HAVE_TERMIOS_H)
check_include_file_concat("time.h" HAVE_TIME_H)
check_include_file_concat("unistd.h" HAVE_UNISTD_H)
check_include_file_concat("utime.h" HAVE_UTIME_H)
check_include_file_concat("x509.h" HAVE_X509_H)

check_include_file_concat("process.h" HAVE_PROCESS_H)
check_include_file_concat("stddef.h" HAVE_STDDEF_H)
#check_include_file_concat("dlfcn.h" HAVE_DLFCN_H)
check_include_file_concat("malloc.h" HAVE_MALLOC_H)
check_include_file_concat("memory.h" HAVE_MEMORY_H)
check_include_file_concat("netinet/if_ether.h" HAVE_NETINET_IF_ETHER_H)
check_include_file_concat("stdint.h" HAVE_STDINT_H)
check_include_file_concat("sockio.h" HAVE_SOCKIO_H)
check_include_file_concat("sys/utsname.h" HAVE_SYS_UTSNAME_H)

check_type_size(size_t SIZEOF_SIZE_T)
check_type_size(ssize_t SIZEOF_SSIZE_T)
check_type_size("long long" SIZEOF_LONG_LONG)
check_type_size("long" SIZEOF_LONG)
check_type_size("short" SIZEOF_SHORT)
check_type_size("int" SIZEOF_INT)
check_type_size("__int64" SIZEOF___INT64)
check_type_size("long double" SIZEOF_LONG_DOUBLE)
check_type_size("time_t" SIZEOF_TIME_T)

set(HAVE_LONGLONG 1)
set(HAVE_LL 1)

set(RANDOM_FILE /dev/urandom)

check_symbol_exists(basename "${CURL_INCLUDES}" HAVE_BASENAME)
check_symbol_exists(socket "${CURL_INCLUDES}" HAVE_SOCKET)
check_symbol_exists(select "${CURL_INCLUDES}" HAVE_SELECT)
check_symbol_exists(poll "${CURL_INCLUDES}" HAVE_POLL)
check_symbol_exists(strdup "${CURL_INCLUDES}" HAVE_STRDUP)
check_symbol_exists(strstr "${CURL_INCLUDES}" HAVE_STRSTR)
check_symbol_exists(strtok_r "${CURL_INCLUDES}" HAVE_STRTOK_R)
check_symbol_exists(strftime "${CURL_INCLUDES}" HAVE_STRFTIME)
check_symbol_exists(uname "${CURL_INCLUDES}" HAVE_UNAME)
check_symbol_exists(strcasecmp "${CURL_INCLUDES}" HAVE_STRCASECMP)
#check_symbol_exists(stricmp "${CURL_INCLUDES}" HAVE_STRICMP)
#check_symbol_exists(strcmpi "${CURL_INCLUDES}" HAVE_STRCMPI)
#check_symbol_exists(strncmpi "${CURL_INCLUDES}" HAVE_STRNCMPI)
check_symbol_exists(alarm "${CURL_INCLUDES}" HAVE_ALARM)
#check_symbol_exists(gethostbyaddr "${CURL_INCLUDES}" HAVE_GETHOSTBYADDR)
check_symbol_exists(gethostbyaddr_r "${CURL_INCLUDES}" HAVE_GETHOSTBYADDR_R)
check_symbol_exists(gettimeofday "${CURL_INCLUDES}" HAVE_GETTIMEOFDAY)
check_symbol_exists(inet_addr "${CURL_INCLUDES}" HAVE_INET_ADDR)
#check_symbol_exists(inet_ntoa "${CURL_INCLUDES}" HAVE_INET_NTOA)
check_symbol_exists(inet_ntoa_r "${CURL_INCLUDES}" HAVE_INET_NTOA_R)
check_symbol_exists(tcsetattr "${CURL_INCLUDES}" HAVE_TCSETATTR)
check_symbol_exists(tcgetattr "${CURL_INCLUDES}" HAVE_TCGETATTR)
check_symbol_exists(perror "${CURL_INCLUDES}" HAVE_PERROR)
check_symbol_exists(closesocket "${CURL_INCLUDES}" HAVE_CLOSESOCKET)
check_symbol_exists(setvbuf "${CURL_INCLUDES}" HAVE_SETVBUF)
check_symbol_exists(sigsetjmp "${CURL_INCLUDES}" HAVE_SIGSETJMP)
check_symbol_exists(getpass_r "${CURL_INCLUDES}" HAVE_GETPASS_R)
#check_symbol_exists(strlcat "${CURL_INCLUDES}" HAVE_STRLCAT)
#check_symbol_exists(getpwuid "${CURL_INCLUDES}" HAVE_GETPWUID)
check_symbol_exists(getpwuid_r "${CURL_INCLUDES}" HAVE_GETPWUID_R)
check_symbol_exists(geteuid "${CURL_INCLUDES}" HAVE_GETEUID)
check_symbol_exists(usleep "${CURL_INCLUDES}" HAVE_USLEEP)
check_symbol_exists(utime "${CURL_INCLUDES}" HAVE_UTIME)
check_symbol_exists(gmtime_r "${CURL_INCLUDES}" HAVE_GMTIME_R)
check_symbol_exists(localtime_r "${CURL_INCLUDES}" HAVE_LOCALTIME_R)

#check_symbol_exists(gethostbyname "${CURL_INCLUDES}" HAVE_GETHOSTBYNAME)
check_symbol_exists(gethostbyname_r "${CURL_INCLUDES}" HAVE_GETHOSTBYNAME_R)

check_symbol_exists(signal "${CURL_INCLUDES}" HAVE_SIGNAL_FUNC)
check_symbol_exists(SIGALRM "${CURL_INCLUDES}" HAVE_SIGNAL_MACRO)
set(HAVE_SIGNAL 1)
check_symbol_exists(uname "${CURL_INCLUDES}" HAVE_UNAME)
check_symbol_exists(strtoll "${CURL_INCLUDES}" HAVE_STRTOLL)
#check_symbol_exists(_strtoi64 "${CURL_INCLUDES}" HAVE__STRTOI64)
check_symbol_exists(strerror_r "${CURL_INCLUDES}" HAVE_STRERROR_R)
check_symbol_exists(siginterrupt "${CURL_INCLUDES}" HAVE_SIGINTERRUPT)
check_symbol_exists(perror "${CURL_INCLUDES}" HAVE_PERROR)
check_symbol_exists(fork "${CURL_INCLUDES}" HAVE_FORK)
check_symbol_exists(getaddrinfo "${CURL_INCLUDES}" HAVE_GETADDRINFO)
check_symbol_exists(freeaddrinfo "${CURL_INCLUDES}" HAVE_FREEADDRINFO)
check_symbol_exists(freeifaddrs "${CURL_INCLUDES}" HAVE_FREEIFADDRS)
check_symbol_exists(pipe "${CURL_INCLUDES}" HAVE_PIPE)
check_symbol_exists(ftruncate "${CURL_INCLUDES}" HAVE_FTRUNCATE)
check_symbol_exists(getprotobyname "${CURL_INCLUDES}" HAVE_GETPROTOBYNAME)
check_symbol_exists(getpeername "${CURL_INCLUDES}" HAVE_GETPEERNAME)
check_symbol_exists(getsockname "${CURL_INCLUDES}" HAVE_GETSOCKNAME)
check_symbol_exists(if_nametoindex "${CURL_INCLUDES}" HAVE_IF_NAMETOINDEX)
check_symbol_exists(getrlimit "${CURL_INCLUDES}" HAVE_GETRLIMIT)
check_symbol_exists(setlocale "${CURL_INCLUDES}" HAVE_SETLOCALE)
check_symbol_exists(setmode "${CURL_INCLUDES}" HAVE_SETMODE)
check_symbol_exists(setrlimit "${CURL_INCLUDES}" HAVE_SETRLIMIT)
check_symbol_exists(fcntl "${CURL_INCLUDES}" HAVE_FCNTL)
check_symbol_exists(ioctl "${CURL_INCLUDES}" HAVE_IOCTL)
check_symbol_exists(setsockopt "${CURL_INCLUDES}" HAVE_SETSOCKOPT)
check_function_exists(mach_absolute_time HAVE_MACH_ABSOLUTE_TIME)

check_symbol_exists(fsetxattr "${CURL_INCLUDES}" HAVE_FSETXATTR)
if(HAVE_FSETXATTR)
  foreach(CURL_TEST HAVE_FSETXATTR_5 HAVE_FSETXATTR_6)
    curl_internal_test(${CURL_TEST})
  endforeach()
endif()

# sigaction and sigsetjmp are special. Use special mechanism for
# detecting those, but only if previous attempt failed.
if(HAVE_SIGNAL_H)
  check_symbol_exists(sigaction "signal.h" HAVE_SIGACTION)
endif()

if(NOT HAVE_SIGSETJMP)
  if(HAVE_SETJMP_H)
    check_symbol_exists(sigsetjmp "setjmp.h" HAVE_MACRO_SIGSETJMP)
    if(HAVE_MACRO_SIGSETJMP)
      set(HAVE_SIGSETJMP 1)
    endif()
  endif()
endif()

# If there is no stricmp(), do not allow LDAP to parse URLs
if(NOT HAVE_STRICMP)
  set(HAVE_LDAP_URL_PARSE 1)
endif()

# Do curl specific tests
foreach(CURL_TEST
    HAVE_FCNTL_O_NONBLOCK
    HAVE_IOCTLSOCKET
    HAVE_IOCTLSOCKET_CAMEL
    HAVE_IOCTLSOCKET_CAMEL_FIONBIO
    HAVE_IOCTLSOCKET_FIONBIO
    HAVE_IOCTL_FIONBIO
    HAVE_IOCTL_SIOCGIFADDR
    HAVE_SETSOCKOPT_SO_NONBLOCK
    HAVE_SOCKADDR_IN6_SIN6_SCOPE_ID
    TIME_WITH_SYS_TIME
    HAVE_O_NONBLOCK
    HAVE_GETHOSTBYADDR_R_5
    HAVE_GETHOSTBYADDR_R_7
    HAVE_GETHOSTBYADDR_R_8
    HAVE_GETHOSTBYADDR_R_5_REENTRANT
    HAVE_GETHOSTBYADDR_R_7_REENTRANT
    HAVE_GETHOSTBYADDR_R_8_REENTRANT
    HAVE_GETHOSTBYNAME_R_3
    HAVE_GETHOSTBYNAME_R_5
    HAVE_GETHOSTBYNAME_R_6
    HAVE_GETHOSTBYNAME_R_3_REENTRANT
    HAVE_GETHOSTBYNAME_R_5_REENTRANT
    HAVE_GETHOSTBYNAME_R_6_REENTRANT
    HAVE_IN_ADDR_T
    HAVE_BOOL_T
    STDC_HEADERS
    RETSIGTYPE_TEST
    HAVE_INET_NTOA_R_DECL
    HAVE_INET_NTOA_R_DECL_REENTRANT
    HAVE_GETADDRINFO
    HAVE_FILE_OFFSET_BITS
    HAVE_VARIADIC_MACROS_C99
    HAVE_VARIADIC_MACROS_GCC
    )
  curl_internal_test(${CURL_TEST})
endforeach()

if(HAVE_FILE_OFFSET_BITS)
  set(_FILE_OFFSET_BITS 64)
  set(CMAKE_REQUIRED_FLAGS "-D_FILE_OFFSET_BITS=64")
endif()
check_type_size("off_t" SIZEOF_OFF_T)

# include this header to get the type
set(CMAKE_REQUIRED_INCLUDES "${CURL_SOURCE_DIR}/include")
set(CMAKE_EXTRA_INCLUDE_FILES "curl/system.h")
check_type_size("curl_off_t" SIZEOF_CURL_OFF_T)
set(CMAKE_EXTRA_INCLUDE_FILES "")

foreach(CURL_TEST
    HAVE_GLIBC_STRERROR_R
    HAVE_POSIX_STRERROR_R
    )
  curl_internal_test(${CURL_TEST})
endforeach()

# Check for reentrant
foreach(CURL_TEST
    HAVE_GETHOSTBYADDR_R_5
    HAVE_GETHOSTBYADDR_R_7
    HAVE_GETHOSTBYADDR_R_8
    HAVE_GETHOSTBYNAME_R_3
    HAVE_GETHOSTBYNAME_R_5
    HAVE_GETHOSTBYNAME_R_6
    HAVE_INET_NTOA_R_DECL_REENTRANT)
  if(NOT ${CURL_TEST})
    if(${CURL_TEST}_REENTRANT)
      set(NEED_REENTRANT 1)
    endif()
  endif()
endforeach()
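The _REENTRANT variants exist because gethostbyname_r() and gethostbyaddr_r() historically shipped with incompatible prototypes, and on some platforms only appear once _REENTRANT is defined; the argument-count suffix records which shape the platform has. For orientation, the commonly seen shapes are sketched below; the guard macro names are hypothetical and the exact signatures vary by platform:

#include <netdb.h>

#if defined(AIX_STYLE)            /* HAVE_GETHOSTBYNAME_R_3 */
int gethostbyname_r(const char *, struct hostent *, struct hostent_data *);
#elif defined(SOLARIS_STYLE)      /* HAVE_GETHOSTBYNAME_R_5 */
struct hostent *gethostbyname_r(const char *, struct hostent *, char *,
                                int, int *);
#else                             /* HAVE_GETHOSTBYNAME_R_6 (glibc) */
int gethostbyname_r(const char *, struct hostent *, char *, size_t,
                    struct hostent **, int *);
#endif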

if(NEED_REENTRANT)
  foreach(CURL_TEST
      HAVE_GETHOSTBYADDR_R_5
      HAVE_GETHOSTBYADDR_R_7
      HAVE_GETHOSTBYADDR_R_8
      HAVE_GETHOSTBYNAME_R_3
      HAVE_GETHOSTBYNAME_R_5
      HAVE_GETHOSTBYNAME_R_6)
    set(${CURL_TEST} 0)
    if(${CURL_TEST}_REENTRANT)
      set(${CURL_TEST} 1)
    endif()
  endforeach()
endif()

if(HAVE_INET_NTOA_R_DECL_REENTRANT)
  set(HAVE_INET_NTOA_R_DECL 1)
  set(NEED_REENTRANT 1)
endif()

# Check clock_gettime(CLOCK_MONOTONIC, x) support
curl_internal_test(HAVE_CLOCK_GETTIME_MONOTONIC)

# Check compiler support of __builtin_available()
curl_internal_test(HAVE_BUILTIN_AVAILABLE)

# Some other minor tests

if(NOT HAVE_IN_ADDR_T)
  set(in_addr_t "unsigned long")
endif()

# Check for nonblocking
set(HAVE_DISABLED_NONBLOCKING 1)
if(HAVE_FIONBIO OR
   HAVE_IOCTLSOCKET OR
   HAVE_IOCTLSOCKET_CASE OR
   HAVE_O_NONBLOCK)
  set(HAVE_DISABLED_NONBLOCKING)
endif()

set(CURL_PULL_SYS_TYPES_H ${HAVE_SYS_TYPES_H})
set(CURL_PULL_SYS_SOCKET_H ${HAVE_SYS_SOCKET_H})
set(CURL_PULL_SYS_POLL_H ${HAVE_SYS_POLL_H})
set(CURL_PULL_STDINT_H ${HAVE_STDINT_H})
set(CURL_PULL_INTTYPES_H ${HAVE_INTTYPES_H})

include(CMake/OtherTests.cmake)

SET(LIB_VAUTH_CFILES
    "${CURL_LIBRARY_DIR}/vauth/vauth.c" "${CURL_LIBRARY_DIR}/vauth/cleartext.c" "${CURL_LIBRARY_DIR}/vauth/cram.c"
    "${CURL_LIBRARY_DIR}/vauth/digest.c" "${CURL_LIBRARY_DIR}/vauth/digest_sspi.c" "${CURL_LIBRARY_DIR}/vauth/krb5_gssapi.c"
    "${CURL_LIBRARY_DIR}/vauth/krb5_sspi.c" "${CURL_LIBRARY_DIR}/vauth/ntlm.c" "${CURL_LIBRARY_DIR}/vauth/ntlm_sspi.c" "${CURL_LIBRARY_DIR}/vauth/oauth2.c"
    "${CURL_LIBRARY_DIR}/vauth/spnego_gssapi.c" "${CURL_LIBRARY_DIR}/vauth/spnego_sspi.c")

SET(LIB_VAUTH_HFILES "${CURL_LIBRARY_DIR}/vauth/vauth.h" "${CURL_LIBRARY_DIR}/vauth/digest.h" "${CURL_LIBRARY_DIR}/vauth/ntlm.h")

SET(LIB_VTLS_CFILES "${CURL_LIBRARY_DIR}/vtls/openssl.c" "${CURL_LIBRARY_DIR}/vtls/gtls.c" "${CURL_LIBRARY_DIR}/vtls/vtls.c" "${CURL_LIBRARY_DIR}/vtls/nss.c"
    "${CURL_LIBRARY_DIR}/vtls/polarssl.c" "${CURL_LIBRARY_DIR}/vtls/polarssl_threadlock.c"
    "${CURL_LIBRARY_DIR}/vtls/wolfssl.c" "${CURL_LIBRARY_DIR}/vtls/schannel.c" "${CURL_LIBRARY_DIR}/vtls/schannel_verify.c"
    "${CURL_LIBRARY_DIR}/vtls/sectransp.c" "${CURL_LIBRARY_DIR}/vtls/gskit.c" "${CURL_LIBRARY_DIR}/vtls/mbedtls.c" "${CURL_LIBRARY_DIR}/vtls/mesalink.c"
    "${CURL_LIBRARY_DIR}/vtls/bearssl.c")

SET(LIB_VTLS_HFILES "${CURL_LIBRARY_DIR}/vtls/openssl.h" "${CURL_LIBRARY_DIR}/vtls/vtls.h" "${CURL_LIBRARY_DIR}/vtls/gtls.h"
    "${CURL_LIBRARY_DIR}/vtls/nssg.h" "${CURL_LIBRARY_DIR}/vtls/polarssl.h" "${CURL_LIBRARY_DIR}/vtls/polarssl_threadlock.h"
    "${CURL_LIBRARY_DIR}/vtls/wolfssl.h" "${CURL_LIBRARY_DIR}/vtls/schannel.h" "${CURL_LIBRARY_DIR}/vtls/sectransp.h" "${CURL_LIBRARY_DIR}/vtls/gskit.h"
    "${CURL_LIBRARY_DIR}/vtls/mbedtls.h" "${CURL_LIBRARY_DIR}/vtls/mesalink.h" "${CURL_LIBRARY_DIR}/vtls/bearssl.h")

SET(LIB_VQUIC_CFILES "${CURL_LIBRARY_DIR}/vquic/ngtcp2.c" "${CURL_LIBRARY_DIR}/vquic/quiche.c")

SET(LIB_VQUIC_HFILES "${CURL_LIBRARY_DIR}/vquic/ngtcp2.h" "${CURL_LIBRARY_DIR}/vquic/quiche.h")

SET(LIB_VSSH_CFILES "${CURL_LIBRARY_DIR}/vssh/libssh2.c" "${CURL_LIBRARY_DIR}/vssh/libssh.c")

SET(LIB_VSSH_HFILES "${CURL_LIBRARY_DIR}/vssh/ssh.h")

SET(LIB_CFILES "${CURL_LIBRARY_DIR}/file.c"
    "${CURL_LIBRARY_DIR}/timeval.c" "${CURL_LIBRARY_DIR}/base64.c" "${CURL_LIBRARY_DIR}/hostip.c" "${CURL_LIBRARY_DIR}/progress.c" "${CURL_LIBRARY_DIR}/formdata.c"
    "${CURL_LIBRARY_DIR}/cookie.c" "${CURL_LIBRARY_DIR}/http.c" "${CURL_LIBRARY_DIR}/sendf.c" "${CURL_LIBRARY_DIR}/url.c" "${CURL_LIBRARY_DIR}/dict.c" "${CURL_LIBRARY_DIR}/if2ip.c" "${CURL_LIBRARY_DIR}/speedcheck.c"
    "${CURL_LIBRARY_DIR}/ldap.c" "${CURL_LIBRARY_DIR}/version.c" "${CURL_LIBRARY_DIR}/getenv.c" "${CURL_LIBRARY_DIR}/escape.c" "${CURL_LIBRARY_DIR}/mprintf.c" "${CURL_LIBRARY_DIR}/telnet.c" "${CURL_LIBRARY_DIR}/netrc.c"
    "${CURL_LIBRARY_DIR}/getinfo.c" "${CURL_LIBRARY_DIR}/transfer.c" "${CURL_LIBRARY_DIR}/strcase.c" "${CURL_LIBRARY_DIR}/easy.c" "${CURL_LIBRARY_DIR}/security.c" "${CURL_LIBRARY_DIR}/curl_fnmatch.c"
    "${CURL_LIBRARY_DIR}/fileinfo.c" "${CURL_LIBRARY_DIR}/wildcard.c" "${CURL_LIBRARY_DIR}/krb5.c" "${CURL_LIBRARY_DIR}/memdebug.c" "${CURL_LIBRARY_DIR}/http_chunks.c"
    "${CURL_LIBRARY_DIR}/strtok.c" "${CURL_LIBRARY_DIR}/connect.c" "${CURL_LIBRARY_DIR}/llist.c" "${CURL_LIBRARY_DIR}/hash.c" "${CURL_LIBRARY_DIR}/multi.c" "${CURL_LIBRARY_DIR}/content_encoding.c" "${CURL_LIBRARY_DIR}/share.c"
    "${CURL_LIBRARY_DIR}/http_digest.c" "${CURL_LIBRARY_DIR}/md4.c" "${CURL_LIBRARY_DIR}/md5.c" "${CURL_LIBRARY_DIR}/http_negotiate.c" "${CURL_LIBRARY_DIR}/inet_pton.c" "${CURL_LIBRARY_DIR}/strtoofft.c"
    "${CURL_LIBRARY_DIR}/strerror.c" "${CURL_LIBRARY_DIR}/amigaos.c" "${CURL_LIBRARY_DIR}/hostasyn.c" "${CURL_LIBRARY_DIR}/hostip4.c" "${CURL_LIBRARY_DIR}/hostip6.c" "${CURL_LIBRARY_DIR}/hostsyn.c"
    "${CURL_LIBRARY_DIR}/inet_ntop.c" "${CURL_LIBRARY_DIR}/parsedate.c" "${CURL_LIBRARY_DIR}/select.c" "${CURL_LIBRARY_DIR}/splay.c" "${CURL_LIBRARY_DIR}/strdup.c" "${CURL_LIBRARY_DIR}/socks.c"
    "${CURL_LIBRARY_DIR}/curl_addrinfo.c" "${CURL_LIBRARY_DIR}/socks_gssapi.c" "${CURL_LIBRARY_DIR}/socks_sspi.c"
    "${CURL_LIBRARY_DIR}/curl_sspi.c" "${CURL_LIBRARY_DIR}/slist.c" "${CURL_LIBRARY_DIR}/nonblock.c" "${CURL_LIBRARY_DIR}/curl_memrchr.c" "${CURL_LIBRARY_DIR}/imap.c" "${CURL_LIBRARY_DIR}/pop3.c" "${CURL_LIBRARY_DIR}/smtp.c"
    "${CURL_LIBRARY_DIR}/pingpong.c" "${CURL_LIBRARY_DIR}/rtsp.c" "${CURL_LIBRARY_DIR}/curl_threads.c" "${CURL_LIBRARY_DIR}/warnless.c" "${CURL_LIBRARY_DIR}/hmac.c" "${CURL_LIBRARY_DIR}/curl_rtmp.c"
    "${CURL_LIBRARY_DIR}/openldap.c" "${CURL_LIBRARY_DIR}/curl_gethostname.c" "${CURL_LIBRARY_DIR}/gopher.c" "${CURL_LIBRARY_DIR}/idn_win32.c"
    "${CURL_LIBRARY_DIR}/http_proxy.c" "${CURL_LIBRARY_DIR}/non-ascii.c" "${CURL_LIBRARY_DIR}/asyn-ares.c" "${CURL_LIBRARY_DIR}/asyn-thread.c" "${CURL_LIBRARY_DIR}/curl_gssapi.c"
    "${CURL_LIBRARY_DIR}/http_ntlm.c" "${CURL_LIBRARY_DIR}/curl_ntlm_wb.c" "${CURL_LIBRARY_DIR}/curl_ntlm_core.c" "${CURL_LIBRARY_DIR}/curl_sasl.c" "${CURL_LIBRARY_DIR}/rand.c"
    "${CURL_LIBRARY_DIR}/curl_multibyte.c" "${CURL_LIBRARY_DIR}/hostcheck.c" "${CURL_LIBRARY_DIR}/conncache.c" "${CURL_LIBRARY_DIR}/dotdot.c"
    "${CURL_LIBRARY_DIR}/x509asn1.c" "${CURL_LIBRARY_DIR}/http2.c" "${CURL_LIBRARY_DIR}/smb.c" "${CURL_LIBRARY_DIR}/curl_endian.c" "${CURL_LIBRARY_DIR}/curl_des.c" "${CURL_LIBRARY_DIR}/system_win32.c"
    "${CURL_LIBRARY_DIR}/mime.c" "${CURL_LIBRARY_DIR}/sha256.c" "${CURL_LIBRARY_DIR}/setopt.c" "${CURL_LIBRARY_DIR}/curl_path.c" "${CURL_LIBRARY_DIR}/curl_ctype.c" "${CURL_LIBRARY_DIR}/curl_range.c" "${CURL_LIBRARY_DIR}/psl.c"
    "${CURL_LIBRARY_DIR}/doh.c" "${CURL_LIBRARY_DIR}/urlapi.c" "${CURL_LIBRARY_DIR}/curl_get_line.c" "${CURL_LIBRARY_DIR}/altsvc.c" "${CURL_LIBRARY_DIR}/socketpair.c")

SET(LIB_HFILES "${CURL_LIBRARY_DIR}/arpa_telnet.h" "${CURL_LIBRARY_DIR}/netrc.h" "${CURL_LIBRARY_DIR}/file.h" "${CURL_LIBRARY_DIR}/timeval.h" "${CURL_LIBRARY_DIR}/hostip.h" "${CURL_LIBRARY_DIR}/progress.h"
    "${CURL_LIBRARY_DIR}/formdata.h" "${CURL_LIBRARY_DIR}/cookie.h" "${CURL_LIBRARY_DIR}/http.h" "${CURL_LIBRARY_DIR}/sendf.h" "${CURL_LIBRARY_DIR}/url.h" "${CURL_LIBRARY_DIR}/dict.h" "${CURL_LIBRARY_DIR}/if2ip.h"
    "${CURL_LIBRARY_DIR}/speedcheck.h" "${CURL_LIBRARY_DIR}/urldata.h" "${CURL_LIBRARY_DIR}/curl_ldap.h" "${CURL_LIBRARY_DIR}/escape.h" "${CURL_LIBRARY_DIR}/telnet.h" "${CURL_LIBRARY_DIR}/getinfo.h"
    "${CURL_LIBRARY_DIR}/strcase.h" "${CURL_LIBRARY_DIR}/curl_sec.h" "${CURL_LIBRARY_DIR}/memdebug.h" "${CURL_LIBRARY_DIR}/http_chunks.h" "${CURL_LIBRARY_DIR}/curl_fnmatch.h"
    "${CURL_LIBRARY_DIR}/wildcard.h" "${CURL_LIBRARY_DIR}/fileinfo.h" "${CURL_LIBRARY_DIR}/strtok.h" "${CURL_LIBRARY_DIR}/connect.h" "${CURL_LIBRARY_DIR}/llist.h"
    "${CURL_LIBRARY_DIR}/hash.h" "${CURL_LIBRARY_DIR}/content_encoding.h" "${CURL_LIBRARY_DIR}/share.h" "${CURL_LIBRARY_DIR}/curl_md4.h" "${CURL_LIBRARY_DIR}/curl_md5.h" "${CURL_LIBRARY_DIR}/http_digest.h"
    "${CURL_LIBRARY_DIR}/http_negotiate.h" "${CURL_LIBRARY_DIR}/inet_pton.h" "${CURL_LIBRARY_DIR}/amigaos.h" "${CURL_LIBRARY_DIR}/strtoofft.h" "${CURL_LIBRARY_DIR}/strerror.h"
    "${CURL_LIBRARY_DIR}/inet_ntop.h" "${CURL_LIBRARY_DIR}/curlx.h" "${CURL_LIBRARY_DIR}/curl_memory.h" "${CURL_LIBRARY_DIR}/curl_setup.h" "${CURL_LIBRARY_DIR}/transfer.h" "${CURL_LIBRARY_DIR}/select.h"
    "${CURL_LIBRARY_DIR}/easyif.h" "${CURL_LIBRARY_DIR}/multiif.h" "${CURL_LIBRARY_DIR}/parsedate.h" "${CURL_LIBRARY_DIR}/sockaddr.h" "${CURL_LIBRARY_DIR}/splay.h" "${CURL_LIBRARY_DIR}/strdup.h"
    "${CURL_LIBRARY_DIR}/socks.h" "${CURL_LIBRARY_DIR}/curl_base64.h" "${CURL_LIBRARY_DIR}/curl_addrinfo.h" "${CURL_LIBRARY_DIR}/curl_sspi.h"
    "${CURL_LIBRARY_DIR}/slist.h" "${CURL_LIBRARY_DIR}/nonblock.h" "${CURL_LIBRARY_DIR}/curl_memrchr.h" "${CURL_LIBRARY_DIR}/imap.h" "${CURL_LIBRARY_DIR}/pop3.h" "${CURL_LIBRARY_DIR}/smtp.h" "${CURL_LIBRARY_DIR}/pingpong.h"
    "${CURL_LIBRARY_DIR}/rtsp.h" "${CURL_LIBRARY_DIR}/curl_threads.h" "${CURL_LIBRARY_DIR}/warnless.h" "${CURL_LIBRARY_DIR}/curl_hmac.h" "${CURL_LIBRARY_DIR}/curl_rtmp.h"
    "${CURL_LIBRARY_DIR}/curl_gethostname.h" "${CURL_LIBRARY_DIR}/gopher.h" "${CURL_LIBRARY_DIR}/http_proxy.h" "${CURL_LIBRARY_DIR}/non-ascii.h" "${CURL_LIBRARY_DIR}/asyn.h"
    "${CURL_LIBRARY_DIR}/http_ntlm.h" "${CURL_LIBRARY_DIR}/curl_gssapi.h" "${CURL_LIBRARY_DIR}/curl_ntlm_wb.h" "${CURL_LIBRARY_DIR}/curl_ntlm_core.h"
    "${CURL_LIBRARY_DIR}/curl_sasl.h" "${CURL_LIBRARY_DIR}/curl_multibyte.h" "${CURL_LIBRARY_DIR}/hostcheck.h" "${CURL_LIBRARY_DIR}/conncache.h"
    "${CURL_LIBRARY_DIR}/multihandle.h" "${CURL_LIBRARY_DIR}/setup-vms.h" "${CURL_LIBRARY_DIR}/dotdot.h"
    "${CURL_LIBRARY_DIR}/x509asn1.h" "${CURL_LIBRARY_DIR}/http2.h" "${CURL_LIBRARY_DIR}/sigpipe.h" "${CURL_LIBRARY_DIR}/smb.h" "${CURL_LIBRARY_DIR}/curl_endian.h" "${CURL_LIBRARY_DIR}/curl_des.h"
    "${CURL_LIBRARY_DIR}/curl_printf.h" "${CURL_LIBRARY_DIR}/system_win32.h" "${CURL_LIBRARY_DIR}/rand.h" "${CURL_LIBRARY_DIR}/mime.h" "${CURL_LIBRARY_DIR}/curl_sha256.h" "${CURL_LIBRARY_DIR}/setopt.h"
    "${CURL_LIBRARY_DIR}/curl_path.h" "${CURL_LIBRARY_DIR}/curl_ctype.h" "${CURL_LIBRARY_DIR}/curl_range.h" "${CURL_LIBRARY_DIR}/psl.h" "${CURL_LIBRARY_DIR}/doh.h" "${CURL_LIBRARY_DIR}/urlapi-int.h"
    "${CURL_LIBRARY_DIR}/curl_get_line.h" "${CURL_LIBRARY_DIR}/altsvc.h" "${CURL_LIBRARY_DIR}/quic.h" "${CURL_LIBRARY_DIR}/socketpair.h")

SET(LIB_RCFILES "${CURL_LIBRARY_DIR}/libcurl.rc")

SET(CSOURCES ${LIB_CFILES} ${LIB_VAUTH_CFILES} ${LIB_VTLS_CFILES}
    ${LIB_VQUIC_CFILES} ${LIB_VSSH_CFILES})
SET(HHEADERS ${LIB_HFILES} ${LIB_VAUTH_HFILES} ${LIB_VTLS_HFILES}
    ${LIB_VQUIC_HFILES} ${LIB_VSSH_HFILES})

configure_file(${CURL_SOURCE_DIR}/lib/curl_config.h.cmake
    ${CMAKE_CURRENT_BINARY_DIR}/curl/curl_config.h)

list(APPEND HHEADERS
    ${CMAKE_CURRENT_BINARY_DIR}/curl/curl_config.h
    )

add_library(libcurl ${HHEADERS} ${CSOURCES})

if(NOT BUILD_SHARED_LIBS)
  set_target_properties(libcurl PROPERTIES INTERFACE_COMPILE_DEFINITIONS CURL_STATICLIB)
endif()

if(HIDES_CURL_PRIVATE_SYMBOLS)
  set_property(TARGET libcurl APPEND PROPERTY COMPILE_DEFINITIONS "CURL_HIDDEN_SYMBOLS")
  set_property(TARGET libcurl APPEND PROPERTY COMPILE_FLAGS ${CURL_CFLAG_SYMBOLS_HIDE})
endif()

if(OPENSSL_FOUND)
  target_include_directories(libcurl PUBLIC ${OPENSSL_INCLUDE_DIR})
  message("-- Including openssl ${OPENSSL_INCLUDE_DIR} to curl")
endif()

if(ZLIB_FOUND)
  target_include_directories(libcurl PUBLIC ${ZLIB_INCLUDE_DIRS})
  message("-- Including zlib ${ZLIB_INCLUDE_DIRS} to curl")
endif()

target_compile_definitions(libcurl PUBLIC -DHAVE_CONFIG_H)
target_compile_definitions(libcurl PUBLIC -DBUILDING_LIBCURL)
target_include_directories(libcurl PUBLIC "${CURL_SOURCE_DIR}/include" "${CURL_LIBRARY_DIR}" "${CMAKE_CURRENT_BINARY_DIR}/curl")

target_link_libraries(libcurl ${CURL_LIBS})
set (CURL_DIR ${ClickHouse_SOURCE_DIR}/contrib/curl)

set (SRCS
    ${CURL_DIR}/lib/file.c
    ${CURL_DIR}/lib/timeval.c
    ${CURL_DIR}/lib/base64.c
    ${CURL_DIR}/lib/hostip.c
    ${CURL_DIR}/lib/progress.c
    ${CURL_DIR}/lib/formdata.c
    ${CURL_DIR}/lib/cookie.c
    ${CURL_DIR}/lib/http.c
    ${CURL_DIR}/lib/sendf.c
    ${CURL_DIR}/lib/url.c
    ${CURL_DIR}/lib/dict.c
    ${CURL_DIR}/lib/if2ip.c
    ${CURL_DIR}/lib/speedcheck.c
    ${CURL_DIR}/lib/ldap.c
    ${CURL_DIR}/lib/version.c
    ${CURL_DIR}/lib/getenv.c
    ${CURL_DIR}/lib/escape.c
    ${CURL_DIR}/lib/mprintf.c
    ${CURL_DIR}/lib/telnet.c
    ${CURL_DIR}/lib/netrc.c
    ${CURL_DIR}/lib/getinfo.c
    ${CURL_DIR}/lib/transfer.c
    ${CURL_DIR}/lib/strcase.c
    ${CURL_DIR}/lib/easy.c
    ${CURL_DIR}/lib/security.c
    ${CURL_DIR}/lib/curl_fnmatch.c
    ${CURL_DIR}/lib/fileinfo.c
    ${CURL_DIR}/lib/wildcard.c
    ${CURL_DIR}/lib/krb5.c
    ${CURL_DIR}/lib/memdebug.c
    ${CURL_DIR}/lib/http_chunks.c
    ${CURL_DIR}/lib/strtok.c
    ${CURL_DIR}/lib/connect.c
    ${CURL_DIR}/lib/llist.c
    ${CURL_DIR}/lib/hash.c
    ${CURL_DIR}/lib/multi.c
    ${CURL_DIR}/lib/content_encoding.c
    ${CURL_DIR}/lib/share.c
    ${CURL_DIR}/lib/http_digest.c
    ${CURL_DIR}/lib/md4.c
    ${CURL_DIR}/lib/md5.c
    ${CURL_DIR}/lib/http_negotiate.c
    ${CURL_DIR}/lib/inet_pton.c
    ${CURL_DIR}/lib/strtoofft.c
    ${CURL_DIR}/lib/strerror.c
    ${CURL_DIR}/lib/amigaos.c
    ${CURL_DIR}/lib/hostasyn.c
    ${CURL_DIR}/lib/hostip4.c
    ${CURL_DIR}/lib/hostip6.c
    ${CURL_DIR}/lib/hostsyn.c
    ${CURL_DIR}/lib/inet_ntop.c
    ${CURL_DIR}/lib/parsedate.c
    ${CURL_DIR}/lib/select.c
    ${CURL_DIR}/lib/splay.c
    ${CURL_DIR}/lib/strdup.c
    ${CURL_DIR}/lib/socks.c
    ${CURL_DIR}/lib/curl_addrinfo.c
    ${CURL_DIR}/lib/socks_gssapi.c
    ${CURL_DIR}/lib/socks_sspi.c
    ${CURL_DIR}/lib/curl_sspi.c
    ${CURL_DIR}/lib/slist.c
    ${CURL_DIR}/lib/nonblock.c
    ${CURL_DIR}/lib/curl_memrchr.c
    ${CURL_DIR}/lib/imap.c
    ${CURL_DIR}/lib/pop3.c
    ${CURL_DIR}/lib/smtp.c
    ${CURL_DIR}/lib/pingpong.c
    ${CURL_DIR}/lib/rtsp.c
    ${CURL_DIR}/lib/curl_threads.c
    ${CURL_DIR}/lib/warnless.c
    ${CURL_DIR}/lib/hmac.c
    ${CURL_DIR}/lib/curl_rtmp.c
    ${CURL_DIR}/lib/openldap.c
    ${CURL_DIR}/lib/curl_gethostname.c
    ${CURL_DIR}/lib/gopher.c
    ${CURL_DIR}/lib/idn_win32.c
    ${CURL_DIR}/lib/http_proxy.c
    ${CURL_DIR}/lib/non-ascii.c
    ${CURL_DIR}/lib/asyn-thread.c
    ${CURL_DIR}/lib/curl_gssapi.c
    ${CURL_DIR}/lib/http_ntlm.c
    ${CURL_DIR}/lib/curl_ntlm_wb.c
    ${CURL_DIR}/lib/curl_ntlm_core.c
    ${CURL_DIR}/lib/curl_sasl.c
    ${CURL_DIR}/lib/rand.c
    ${CURL_DIR}/lib/curl_multibyte.c
    ${CURL_DIR}/lib/hostcheck.c
    ${CURL_DIR}/lib/conncache.c
    ${CURL_DIR}/lib/dotdot.c
    ${CURL_DIR}/lib/x509asn1.c
    ${CURL_DIR}/lib/http2.c
    ${CURL_DIR}/lib/smb.c
    ${CURL_DIR}/lib/curl_endian.c
    ${CURL_DIR}/lib/curl_des.c
    ${CURL_DIR}/lib/system_win32.c
    ${CURL_DIR}/lib/mime.c
    ${CURL_DIR}/lib/sha256.c
    ${CURL_DIR}/lib/setopt.c
    ${CURL_DIR}/lib/curl_path.c
    ${CURL_DIR}/lib/curl_ctype.c
    ${CURL_DIR}/lib/curl_range.c
    ${CURL_DIR}/lib/psl.c
    ${CURL_DIR}/lib/doh.c
    ${CURL_DIR}/lib/urlapi.c
    ${CURL_DIR}/lib/curl_get_line.c
    ${CURL_DIR}/lib/altsvc.c
    ${CURL_DIR}/lib/socketpair.c
    ${CURL_DIR}/lib/vauth/vauth.c
    ${CURL_DIR}/lib/vauth/cleartext.c
    ${CURL_DIR}/lib/vauth/cram.c
    ${CURL_DIR}/lib/vauth/digest.c
    ${CURL_DIR}/lib/vauth/digest_sspi.c
    ${CURL_DIR}/lib/vauth/krb5_gssapi.c
    ${CURL_DIR}/lib/vauth/krb5_sspi.c
    ${CURL_DIR}/lib/vauth/ntlm.c
    ${CURL_DIR}/lib/vauth/ntlm_sspi.c
    ${CURL_DIR}/lib/vauth/oauth2.c
    ${CURL_DIR}/lib/vauth/spnego_gssapi.c
    ${CURL_DIR}/lib/vauth/spnego_sspi.c
    ${CURL_DIR}/lib/vtls/openssl.c
    ${CURL_DIR}/lib/vtls/gtls.c
    ${CURL_DIR}/lib/vtls/vtls.c
    ${CURL_DIR}/lib/vtls/nss.c
    ${CURL_DIR}/lib/vtls/polarssl.c
    ${CURL_DIR}/lib/vtls/polarssl_threadlock.c
    ${CURL_DIR}/lib/vtls/wolfssl.c
    ${CURL_DIR}/lib/vtls/schannel.c
    ${CURL_DIR}/lib/vtls/schannel_verify.c
    ${CURL_DIR}/lib/vtls/sectransp.c
    ${CURL_DIR}/lib/vtls/gskit.c
    ${CURL_DIR}/lib/vtls/mbedtls.c
    ${CURL_DIR}/lib/vtls/mesalink.c
    ${CURL_DIR}/lib/vtls/bearssl.c
    ${CURL_DIR}/lib/vquic/ngtcp2.c
    ${CURL_DIR}/lib/vquic/quiche.c
    ${CURL_DIR}/lib/vssh/libssh2.c
    ${CURL_DIR}/lib/vssh/libssh.c
)

add_library (curl ${SRCS})

target_compile_definitions(curl PRIVATE HAVE_CONFIG_H BUILDING_LIBCURL CURL_HIDDEN_SYMBOLS libcurl_EXPORTS)
target_include_directories(curl PUBLIC ${CURL_DIR}/include ${CURL_DIR}/lib .)

target_compile_definitions(curl PRIVATE OS="${CMAKE_SYSTEM_NAME}")
38
contrib/curl-cmake/curl_config.h
Normal file
@ -0,0 +1,38 @@
#define CURL_DISABLE_FTP
#define CURL_DISABLE_TFTP
#define CURL_DISABLE_LDAP
#define CURL_EXTERN_SYMBOL __attribute__ ((__visibility__ ("default")))

#define SIZEOF_SHORT 2
#define SIZEOF_INT 4
#define SIZEOF_LONG 8
#define SIZEOF_CURL_OFF_T 8
#define SIZEOF_SIZE_T 8

#define HAVE_FCNTL_O_NONBLOCK
#define HAVE_LONGLONG
#define HAVE_POLL_FINE
#define HAVE_SOCKET
#define HAVE_STRUCT_TIMEVAL

#define HAVE_RECV
#define RECV_TYPE_ARG1 int
#define RECV_TYPE_ARG2 void*
#define RECV_TYPE_ARG3 size_t
#define RECV_TYPE_ARG4 int
#define RECV_TYPE_RETV ssize_t

#define HAVE_SEND
#define SEND_TYPE_ARG1 int
#define SEND_TYPE_ARG2 void*
#define SEND_QUAL_ARG2 const
#define SEND_TYPE_ARG3 size_t
#define SEND_TYPE_ARG4 int
#define SEND_TYPE_RETV ssize_t

#define HAVE_ARPA_INET_H
#define HAVE_ERRNO_H
#define HAVE_FCNTL_H
#define HAVE_NETDB_H
#define HAVE_SYS_STAT_H
#define HAVE_UNISTD_H
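The RECV_TYPE_*/SEND_TYPE_* blocks record the exact parameter and return types of the platform's recv() and send(), so portable wrappers can cast through them regardless of whether a platform takes int or SOCKET, char * or void *. A sketch of the kind of wrapper these macros feed follows; curl's real one is the sread()/swrite() pair in its curl_setup_once.h, so treat this exact definition as an assumption:

#include <sys/socket.h>

/* Compose the probed types into one portable recv wrapper (sketch). */
#define sread(fd, buf, len)                                            \
    (RECV_TYPE_RETV)recv((RECV_TYPE_ARG1)(fd), (RECV_TYPE_ARG2)(buf),  \
                         (RECV_TYPE_ARG3)(len), (RECV_TYPE_ARG4)(0))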

@ -504,6 +504,10 @@ if (USE_POCO_NETSSL)
    dbms_target_link_libraries (PRIVATE ${Poco_NetSSL_LIBRARY} ${Poco_Crypto_LIBRARY})
endif()

if (USE_POCO_JSON)
    dbms_target_link_libraries (PRIVATE ${Poco_JSON_LIBRARY})
endif()

dbms_target_link_libraries (PRIVATE ${Poco_Foundation_LIBRARY})

if (USE_ICU)
@ -522,6 +526,11 @@ if (USE_PARQUET)
    endif ()
endif ()

if (USE_AVRO)
    dbms_target_link_libraries(PRIVATE ${AVROCPP_LIBRARY})
    dbms_target_include_directories (SYSTEM BEFORE PRIVATE ${AVROCPP_INCLUDE_DIR})
endif ()

if (OPENSSL_CRYPTO_LIBRARY)
    dbms_target_link_libraries (PRIVATE ${OPENSSL_CRYPTO_LIBRARY})
    target_link_libraries (clickhouse_common_io PRIVATE ${OPENSSL_CRYPTO_LIBRARY})
22
dbms/benchmark/clickhouse/benchmark-chyt.sh
Executable file
@ -0,0 +1,22 @@
#!/usr/bin/env bash

QUERIES_FILE="queries.sql"
TABLE=$1
TRIES=3

cat "$QUERIES_FILE" | sed "s|{table}|\"${TABLE}\"|g" | while read query; do

    echo -n "["
    for i in $(seq 1 $TRIES); do
        while true; do
            RES=$(command time -f %e -o /dev/stdout curl -sS -G --data-urlencode "query=$query" --data "default_format=Null&max_memory_usage=100000000000&max_memory_usage_for_all_queries=100000000000&max_concurrent_queries_for_user=100&database=*$YT_CLIQUE_ID" --location-trusted -H "Authorization: OAuth $YT_TOKEN" "$YT_PROXY.yt.yandex.net/query" 2>/dev/null);
            if [[ $? == 0 ]]; then
                [[ $RES =~ 'fail|Exception' ]] || break;
            fi
        done

        [[ "$?" == "0" ]] && echo -n "${RES}" || echo -n "null"
        [[ "$i" != $TRIES ]] && echo -n ", "
    done
    echo "],"
done
19
dbms/benchmark/clickhouse/benchmark-yql.sh
Executable file
@ -0,0 +1,19 @@
#!/usr/bin/env bash

QUERIES_FILE="queries.sql"
TABLE=$1
TRIES=3

cat "$QUERIES_FILE" | sed "s|{table}|\"${TABLE}\"|g" | while read query; do

    echo -n "["
    for i in $(seq 1 $TRIES); do
        while true; do
            RES=$(command time -f %e -o time ./yql --clickhouse --syntax-version 1 -f empty <<< "USE chyt.hume; PRAGMA max_memory_usage = 100000000000; PRAGMA max_memory_usage_for_all_queries = 100000000000; $query" >/dev/null 2>&1 && cat time) && break;
        done

        [[ "$?" == "0" ]] && echo -n "${RES}" || echo -n "null"
        [[ "$i" != $TRIES ]] && echo -n ", "
    done
    echo "],"
done
@ -34,10 +34,10 @@ SELECT WatchID, ClientIP, count() AS c, sum(Refresh), avg(ResolutionWidth) FROM
SELECT URL, count() AS c FROM {table} GROUP BY URL ORDER BY c DESC LIMIT 10;
SELECT 1, URL, count() AS c FROM {table} GROUP BY 1, URL ORDER BY c DESC LIMIT 10;
SELECT ClientIP AS x, x - 1, x - 2, x - 3, count() AS c FROM {table} GROUP BY x, x - 1, x - 2, x - 3 ORDER BY c DESC LIMIT 10;
SELECT URL, count() AS PageViews FROM {table} WHERE CounterID = 62 AND EventDate >= toDate('2013-07-01') AND EventDate <= toDate('2013-07-31') AND NOT DontCountHits AND NOT Refresh AND notEmpty(URL) GROUP BY URL ORDER BY PageViews DESC LIMIT 10;
SELECT Title, count() AS PageViews FROM {table} WHERE CounterID = 62 AND EventDate >= toDate('2013-07-01') AND EventDate <= toDate('2013-07-31') AND NOT DontCountHits AND NOT Refresh AND notEmpty(Title) GROUP BY Title ORDER BY PageViews DESC LIMIT 10;
SELECT URL, count() AS PageViews FROM {table} WHERE CounterID = 62 AND EventDate >= toDate('2013-07-01') AND EventDate <= toDate('2013-07-31') AND NOT Refresh AND IsLink AND NOT IsDownload GROUP BY URL ORDER BY PageViews DESC LIMIT 1000;
SELECT TraficSourceID, SearchEngineID, AdvEngineID, ((SearchEngineID = 0 AND AdvEngineID = 0) ? Referer : '') AS Src, URL AS Dst, count() AS PageViews FROM {table} WHERE CounterID = 62 AND EventDate >= toDate('2013-07-01') AND EventDate <= toDate('2013-07-31') AND NOT Refresh GROUP BY TraficSourceID, SearchEngineID, AdvEngineID, Src, Dst ORDER BY PageViews DESC LIMIT 1000;
SELECT URLHash, EventDate, count() AS PageViews FROM {table} WHERE CounterID = 62 AND EventDate >= toDate('2013-07-01') AND EventDate <= toDate('2013-07-31') AND NOT Refresh AND TraficSourceID IN (-1, 6) AND RefererHash = halfMD5('http://example.ru/') GROUP BY URLHash, EventDate ORDER BY PageViews DESC LIMIT 100;
SELECT WindowClientWidth, WindowClientHeight, count() AS PageViews FROM {table} WHERE CounterID = 62 AND EventDate >= toDate('2013-07-01') AND EventDate <= toDate('2013-07-31') AND NOT Refresh AND NOT DontCountHits AND URLHash = halfMD5('http://example.ru/') GROUP BY WindowClientWidth, WindowClientHeight ORDER BY PageViews DESC LIMIT 10000;
SELECT toStartOfMinute(EventTime) AS Minute, count() AS PageViews FROM {table} WHERE CounterID = 62 AND EventDate >= toDate('2013-07-01') AND EventDate <= toDate('2013-07-02') AND NOT Refresh AND NOT DontCountHits GROUP BY Minute ORDER BY Minute;
SELECT URL, count() AS PageViews FROM {table} WHERE CounterID = 62 AND EventDate >= '2013-07-01' AND EventDate <= '2013-07-31' AND NOT DontCountHits AND NOT Refresh AND notEmpty(URL) GROUP BY URL ORDER BY PageViews DESC LIMIT 10;
SELECT Title, count() AS PageViews FROM {table} WHERE CounterID = 62 AND EventDate >= '2013-07-01' AND EventDate <= '2013-07-31' AND NOT DontCountHits AND NOT Refresh AND notEmpty(Title) GROUP BY Title ORDER BY PageViews DESC LIMIT 10;
SELECT URL, count() AS PageViews FROM {table} WHERE CounterID = 62 AND EventDate >= '2013-07-01' AND EventDate <= '2013-07-31' AND NOT Refresh AND IsLink AND NOT IsDownload GROUP BY URL ORDER BY PageViews DESC LIMIT 1000;
SELECT TraficSourceID, SearchEngineID, AdvEngineID, ((SearchEngineID = 0 AND AdvEngineID = 0) ? Referer : '') AS Src, URL AS Dst, count() AS PageViews FROM {table} WHERE CounterID = 62 AND EventDate >= '2013-07-01' AND EventDate <= '2013-07-31' AND NOT Refresh GROUP BY TraficSourceID, SearchEngineID, AdvEngineID, Src, Dst ORDER BY PageViews DESC LIMIT 1000;
SELECT URLHash, EventDate, count() AS PageViews FROM {table} WHERE CounterID = 62 AND EventDate >= '2013-07-01' AND EventDate <= '2013-07-31' AND NOT Refresh AND TraficSourceID IN (-1, 6) AND RefererHash = halfMD5('http://example.ru/') GROUP BY URLHash, EventDate ORDER BY PageViews DESC LIMIT 100;
SELECT WindowClientWidth, WindowClientHeight, count() AS PageViews FROM {table} WHERE CounterID = 62 AND EventDate >= '2013-07-01' AND EventDate <= '2013-07-31' AND NOT Refresh AND NOT DontCountHits AND URLHash = halfMD5('http://example.ru/') GROUP BY WindowClientWidth, WindowClientHeight ORDER BY PageViews DESC LIMIT 10000;
SELECT toStartOfMinute(EventTime) AS Minute, count() AS PageViews FROM {table} WHERE CounterID = 62 AND EventDate >= '2013-07-01' AND EventDate <= '2013-07-02' AND NOT Refresh AND NOT DontCountHits GROUP BY Minute ORDER BY Minute;
@ -513,6 +513,7 @@ private:
            if (input.empty())
                break;

            has_vertical_output_suffix = false;
            if (input.ends_with("\\G"))
            {
                input.resize(input.size() - 2);
@ -65,6 +65,8 @@ void Suggest::loadImpl(Connection & connection, const ConnectionTimeouts & timeo
        " UNION ALL "
        "SELECT name FROM system.data_type_families"
        " UNION ALL "
        "SELECT name FROM system.merge_tree_settings"
        " UNION ALL "
        "SELECT name FROM system.settings"
        " UNION ALL "
        "SELECT cluster FROM system.clusters"
@ -111,7 +111,7 @@ void LocalServer::tryInitPath()

    /// In case of empty path set paths to helpful directories
    std::string cd = Poco::Path::current();
    context->setTemporaryPath(cd + "tmp");
    context->setTemporaryStorage(cd + "tmp");
    context->setFlagsPath(cd + "flags");
    context->setUserFilesPath(""); // user's files are everywhere
}
@ -17,6 +17,7 @@
#include <Common/setThreadName.h>
#include <Common/config.h>
#include <Common/SettingsChanges.h>
#include <Disks/DiskSpaceMonitor.h>
#include <Compression/CompressedReadBuffer.h>
#include <Compression/CompressedWriteBuffer.h>
#include <IO/ReadBufferFromIStream.h>
@ -351,7 +352,8 @@ void HTTPHandler::processQuery(

    if (buffer_until_eof)
    {
        std::string tmp_path_template = context.getTemporaryPath() + "http_buffers/";
        const std::string tmp_path(context.getTemporaryVolume()->getNextDisk()->getPath());
        const std::string tmp_path_template(tmp_path + "http_buffers/");

        auto create_tmp_disk_buffer = [tmp_path_template] (const WriteBufferPtr &)
        {
@ -590,7 +592,11 @@ void HTTPHandler::processQuery(
    customizeContext(context);

    executeQuery(*in, *used_output.out_maybe_delayed_and_compressed, /* allow_into_outfile = */ false, context,
        [&response] (const String & content_type) { response.setContentType(content_type); },
        [&response] (const String & content_type, const String & format)
        {
            response.setContentType(content_type);
            response.add("X-ClickHouse-Format", format);
        },
        [&response] (const String & current_query_id) { response.add("X-ClickHouse-Query-Id", current_query_id); });

    if (used_output.hasDelayed())
@ -610,6 +616,8 @@ void HTTPHandler::trySendExceptionToClient(const std::string & s, int exception_
{
    try
    {
        response.set("X-ClickHouse-Exception-Code", toString<int>(exception_code));

        /// If HTTP method is POST and Keep-Alive is turned on, we should read the whole request body
        /// to avoid reading part of the current request body in the next request.
        if (request.getMethod() == Poco::Net::HTTPRequest::HTTP_POST
@ -282,7 +282,8 @@ void MySQLHandler::comQuery(ReadBuffer & payload)
    else
    {
        bool with_output = false;
        std::function<void(const String &)> set_content_type = [&with_output](const String &) -> void {
        std::function<void(const String &, const String &)> set_content_type_and_format = [&with_output](const String &, const String &) -> void
        {
            with_output = true;
        };

@ -305,7 +306,7 @@ void MySQLHandler::comQuery(ReadBuffer & payload)
        ReadBufferFromString replacement(replacement_query);

        Context query_context = connection_context;
        executeQuery(should_replace ? replacement : payload, *out, true, query_context, set_content_type, nullptr);
        executeQuery(should_replace ? replacement : payload, *out, true, query_context, set_content_type_and_format, {});

        if (!with_output)
            packet_sender->sendPacket(OK_Packet(0x00, client_capability_flags, 0, 0, 0), true);
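Both handlers above move from a single-argument content-type callback to a two-argument one, so the output format name travels together with the content type (the HTTP handler uses it for the X-ClickHouse-Format header). A self-contained sketch of the widened callback contract as a caller sees it; the String alias stand-in and the example format name are assumptions for illustration:

#include <functional>
#include <string>
using String = std::string;  // stand-in for ClickHouse's String alias

int main()
{
    // The query executor now reports content type and output format together.
    std::function<void(const String &, const String &)> set_content_type_and_format =
        [](const String & content_type, const String & format)
    {
        // An HTTP handler would set Content-Type and X-ClickHouse-Format here.
        (void)content_type;
        (void)format;
    };
    set_content_type_and_format("text/tab-separated-values", "TabSeparated");
}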

@ -77,6 +77,31 @@ namespace CurrentMetrics
    extern const Metric VersionInteger;
}

namespace
{

void setupTmpPath(Logger * log, const std::string & path)
{
    LOG_DEBUG(log, "Setting up " << path << " to store temporary data in it");

    Poco::File(path).createDirectories();

    /// Clearing old temporary files.
    Poco::DirectoryIterator dir_end;
    for (Poco::DirectoryIterator it(path); it != dir_end; ++it)
    {
        if (it->isFile() && startsWith(it.name(), "tmp"))
        {
            LOG_DEBUG(log, "Removing old temporary file " << it->path());
            it->remove();
        }
        else
            LOG_DEBUG(log, "Skipped file in temporary path " << it->path());
    }
}

}

namespace DB
{
@ -331,22 +356,14 @@ int Server::main(const std::vector<std::string> & /*args*/)
    DateLUT::instance();
    LOG_TRACE(log, "Initialized DateLUT with time zone '" << DateLUT::instance().getTimeZone() << "'.");

    /// Directory with temporary data for processing of heavy queries.

    /// Storage with temporary data for processing of heavy queries.
    {
        std::string tmp_path = config().getString("tmp_path", path + "tmp/");
        global_context->setTemporaryPath(tmp_path);
        Poco::File(tmp_path).createDirectories();

        /// Clearing old temporary files.
        Poco::DirectoryIterator dir_end;
        for (Poco::DirectoryIterator it(tmp_path); it != dir_end; ++it)
        {
            if (it->isFile() && startsWith(it.name(), "tmp"))
            {
                LOG_DEBUG(log, "Removing old temporary file " << it->path());
                it->remove();
            }
        }
        std::string tmp_policy = config().getString("tmp_policy", "");
        const VolumePtr & volume = global_context->setTemporaryStorage(tmp_path, tmp_policy);
        for (const DiskPtr & disk : volume->disks)
            setupTmpPath(log, disk->getPath());
    }

    /** Directory with 'flags': files indicating temporary settings for the server set by system administrator.
@ -864,7 +881,11 @@ int Server::main(const std::vector<std::string> & /*args*/)
        for (auto & server : servers)
            server->start();

        setTextLog(global_context->getTextLog());
        {
            String level_str = config().getString("text_log.level", "");
            int level = level_str.empty() ? INT_MAX : Poco::Logger::parseLevel(level_str);
            setTextLog(global_context->getTextLog(), level);
        }
        buildLoggers(config(), logger());

        main_config_reloader->start();
@ -591,11 +591,9 @@ void TCPHandler::processOrdinaryQueryWithProcessors(size_t num_threads)
|
||||
}
|
||||
});
|
||||
|
||||
/// Wait in case of exception. Delete pipeline to release memory.
|
||||
/// Wait in case of exception happened outside of pool.
|
||||
SCOPE_EXIT(
|
||||
/// Clear queue in case if somebody is waiting lazy_format to push.
|
||||
lazy_format->finish();
|
||||
lazy_format->clearQueue();
|
||||
|
||||
try
|
||||
{
|
||||
@ -604,72 +602,58 @@ void TCPHandler::processOrdinaryQueryWithProcessors(size_t num_threads)
|
||||
catch (...)
|
||||
{
|
||||
/// If exception was thrown during pipeline execution, skip it while processing other exception.
|
||||
tryLogCurrentException(log);
|
||||
}
|
||||
|
||||
/// pipeline = QueryPipeline()
|
||||
);
|
||||
|
||||
while (true)
|
||||
while (!lazy_format->isFinished() && !exception)
|
||||
{
|
||||
Block block;
|
||||
|
||||
while (true)
|
||||
if (isQueryCancelled())
|
||||
{
|
||||
if (isQueryCancelled())
|
||||
{
|
||||
/// A packet was received requesting to stop execution of the request.
|
||||
executor->cancel();
|
||||
|
||||
break;
|
||||
}
|
||||
else
|
||||
{
|
||||
if (after_send_progress.elapsed() / 1000 >= query_context->getSettingsRef().interactive_delay)
|
||||
{
|
||||
/// Some time passed and there is a progress.
|
||||
after_send_progress.restart();
|
||||
sendProgress();
|
||||
}
|
||||
|
||||
sendLogs();
|
||||
|
||||
if ((block = lazy_format->getBlock(query_context->getSettingsRef().interactive_delay / 1000)))
|
||||
break;
|
||||
|
||||
if (lazy_format->isFinished())
|
||||
break;
|
||||
|
||||
if (exception)
|
||||
{
|
||||
pool.wait();
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/** If data has run out, we will send the profiling data and total values to
|
||||
* the last zero block to be able to use
|
||||
* this information in the suffix output of stream.
|
||||
* If the request was interrupted, then `sendTotals` and other methods could not be called,
|
||||
* because we have not read all the data yet,
|
||||
* and there could be ongoing calculations in other threads at the same time.
|
||||
*/
|
||||
if (!block && !isQueryCancelled())
|
||||
{
|
||||
pool.wait();
|
||||
pipeline.finalize();
|
||||
|
||||
sendTotals(lazy_format->getTotals());
|
||||
sendExtremes(lazy_format->getExtremes());
|
||||
sendProfileInfo(lazy_format->getProfileInfo());
|
||||
sendProgress();
|
||||
sendLogs();
|
||||
}
|
||||
|
||||
sendData(block);
|
||||
if (!block)
|
||||
/// A packet was received requesting to stop execution of the request.
|
||||
executor->cancel();
|
||||
break;
|
||||
}
|
||||
|
||||
if (after_send_progress.elapsed() / 1000 >= query_context->getSettingsRef().interactive_delay)
|
||||
{
|
||||
/// Some time passed and there is a progress.
|
||||
after_send_progress.restart();
|
||||
sendProgress();
|
||||
}
|
||||
|
||||
sendLogs();
|
||||
|
||||
if (auto block = lazy_format->getBlock(query_context->getSettingsRef().interactive_delay / 1000))
|
||||
{
|
||||
if (!state.io.null_format)
|
||||
sendData(block);
|
||||
}
|
||||
}
|
||||
|
||||
/// Finish lazy_format before waiting. Otherwise some thread may write into it, and waiting will lock.
|
||||
lazy_format->finish();
|
||||
pool.wait();
|
||||
|
||||
/** If data has run out, we will send the profiling data and total values to
|
||||
* the last zero block to be able to use
|
||||
* this information in the suffix output of stream.
|
||||
* If the request was interrupted, then `sendTotals` and other methods could not be called,
|
||||
* because we have not read all the data yet,
|
||||
* and there could be ongoing calculations in other threads at the same time.
|
||||
*/
|
||||
if (!isQueryCancelled())
|
||||
{
|
||||
pipeline.finalize();
|
||||
|
||||
sendTotals(lazy_format->getTotals());
|
||||
sendExtremes(lazy_format->getExtremes());
|
||||
sendProfileInfo(lazy_format->getProfileInfo());
|
||||
sendProgress();
|
||||
sendLogs();
|
||||
}
|
||||
|
||||
sendData({});
|
||||
}
|
||||
|
||||
state.io.onFinish();
|
||||
|
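The rewritten loop above polls lazy_format with a timeout instead of blocking, so progress, logs and cancellation can be serviced between data blocks. A self-contained sketch of that pull pattern; the FinishableQueue type and all names in it are invented for illustration, not ClickHouse code:

#include <chrono>
#include <condition_variable>
#include <mutex>
#include <optional>
#include <queue>
#include <thread>
#include <iostream>

template <typename T>
class FinishableQueue
{
public:
    void push(T v) { { std::lock_guard lock(m); q.push(std::move(v)); } cv.notify_one(); }
    void finish() { { std::lock_guard lock(m); finished = true; } cv.notify_all(); }

    /// Returns nothing on timeout, or when finished and drained.
    std::optional<T> pop(std::chrono::milliseconds timeout)
    {
        std::unique_lock lock(m);
        cv.wait_for(lock, timeout, [this] { return finished || !q.empty(); });
        if (q.empty())
            return std::nullopt;
        T v = std::move(q.front()); q.pop();
        return v;
    }

    bool isFinished() { std::lock_guard lock(m); return finished && q.empty(); }

private:
    std::mutex m;
    std::condition_variable cv;
    std::queue<T> q;
    bool finished = false;
};

int main()
{
    FinishableQueue<int> blocks;
    std::thread producer([&] { for (int i = 0; i < 3; ++i) blocks.push(i); blocks.finish(); });

    while (!blocks.isFinished())
    {
        /// A real server would check cancellation and send progress/log packets here.
        if (auto block = blocks.pop(std::chrono::milliseconds(100)))
            std::cout << "got block " << *block << '\n';
    }
    producer.join();
}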
@ -3,27 +3,27 @@

     NOTE: User and query level settings are set up in "users.xml" file.
-->
<yandex>
    <!-- The list of hosts allowed to use in URL-related storage engines and table functions.
         If this section is not present in configuration, all hosts are allowed.
    -->
    <remote_url_allow_hosts>
        <!-- Host should be specified exactly as in URL. The name is checked before DNS resolution.
             Example: "yandex.ru", "yandex.ru." and "www.yandex.ru" are different hosts.
             If a port is explicitly specified in the URL, the host:port is checked as a whole.
             If a host is specified here without a port, any port for this host is allowed.
             "yandex.ru" -> "yandex.ru:443", "yandex.ru:80" etc. is allowed, but "yandex.ru:80" -> only "yandex.ru:80" is allowed.
             If the host is specified as an IP address, it is checked as specified in the URL. Example: "[2a02:6b8:a::a]".
             If there are redirects and support for redirects is enabled, every redirect (the Location field) is checked.
        -->

        <!-- A regular expression can be specified. The RE2 engine is used for regexps.
             Regexps are not aligned: don't forget to add ^ and $. Also don't forget to escape the dot (.) metacharacter
             (forgetting to do so is a common source of error).
        -->
    </remote_url_allow_hosts>

    <logger>
        <!-- Possible levels: https://github.com/pocoproject/poco/blob/develop/Foundation/include/Poco/Logger.h#L105 -->
        <!-- Possible levels: https://github.com/pocoproject/poco/blob/poco-1.9.4-release/Foundation/include/Poco/Logger.h#L105 -->
        <level>trace</level>
        <log>/var/log/clickhouse-server/clickhouse-server.log</log>
        <errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
@ -133,6 +133,17 @@
    <!-- Path to temporary data for processing hard queries. -->
    <tmp_path>/var/lib/clickhouse/tmp/</tmp_path>

    <!-- Policy from the <storage_configuration> for the temporary files.
         If not set, <tmp_path> is used, otherwise <tmp_path> is ignored.

         Notes:
         - move_factor is ignored
         - keep_free_space_bytes is ignored
         - max_data_part_size_bytes is ignored
         - you must have exactly one volume in that policy
    -->
    <!-- <tmp_policy>tmp</tmp_policy> -->

    <!-- Directory with user provided files that are accessible by 'file' table function. -->
    <user_files_path>/var/lib/clickhouse/user_files/</user_files_path>

@ -344,6 +355,11 @@
             toStartOfHour(event_time)
        -->
        <partition_by>toYYYYMM(event_date)</partition_by>

        <!-- Instead of partition_by, you can provide a full engine expression (starting with ENGINE = ) with parameters.
             Example: <engine>ENGINE = MergeTree PARTITION BY toYYYYMM(event_date) ORDER BY (event_date, event_time) SETTINGS index_granularity = 1024</engine>
        -->

        <!-- Interval of flushing data. -->
        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
    </query_log>
@ -378,10 +394,12 @@

    <!-- Uncomment to write the text log into a table.
         The text log contains all information from the usual server log but stores it in a structured and efficient way.
         The level of the messages that go to the table can be limited (<level>); if not specified, all messages will go to the table.
    <text_log>
        <database>system</database>
        <table>text_log</table>
        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
        <level></level>
    </text_log>
    -->
@ -309,7 +309,7 @@ protected:
    /// Uses a DFA based approach in order to better handle patterns without
    /// time assertions.
    ///
    /// NOTE: This implementation relies on the assumption that the pattern are *small*.
    /// NOTE: This implementation relies on the assumption that the pattern is *small*.
    ///
    /// This algorithm performs in O(mn) (with m the number of DFA states and n the number
    /// of events) with a memory consumption and memory allocations in O(m). It means that
@ -50,16 +50,21 @@
 *
 * P.S. This is also required because tcmalloc cannot allocate a chunk of
 * memory greater than 16 GB.
 *
 * P.P.S. Note that the MMAP_THRESHOLD symbol is intentionally made weak. This allows
 * overriding it at link time when using ClickHouse as a library in
 * third-party applications which may already use their own allocator doing mmaps
 * in the implementation of alloc/realloc.
 */
#ifdef NDEBUG
    static constexpr size_t MMAP_THRESHOLD = 64 * (1ULL << 20);
    __attribute__((__weak__)) extern const size_t MMAP_THRESHOLD = 64 * (1ULL << 20);
#else
    /**
      * In debug builds, use a small mmap threshold to reproduce more memory
      * stomping bugs. Along with ASLR it will hopefully detect more issues than
      * ASan. The program may fail due to the limit on the number of memory mappings.
      */
    static constexpr size_t MMAP_THRESHOLD = 4096;
    __attribute__((__weak__)) extern const size_t MMAP_THRESHOLD = 4096;
#endif

static constexpr size_t MMAP_MIN_ALIGNMENT = 4096;
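Because the threshold is now a weak symbol rather than a constexpr constant, an application that links ClickHouse as a library can supply its own strong definition. A minimal sketch, assuming the embedding binary is built with GCC or Clang; the 256 MiB value is purely illustrative:

#include <cstddef>

/// A strong definition in the embedding application wins over the weak one
/// in the ClickHouse allocator at link time.
extern const size_t MMAP_THRESHOLD = 256 * (1ULL << 20);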
@ -478,6 +478,7 @@ namespace ErrorCodes
    extern const int FILE_ALREADY_EXISTS = 504;
    extern const int CANNOT_DELETE_DIRECTORY = 505;
    extern const int UNEXPECTED_ERROR_CODE = 506;
    extern const int UNABLE_TO_SKIP_UNUSED_SHARDS = 507;

    extern const int KEEPER_EXCEPTION = 999;
    extern const int POCO_EXCEPTION = 1000;
@ -195,7 +195,7 @@ std::string getCurrentExceptionMessage(bool with_stacktrace, bool check_embedded
            << ", e.displayText() = " << e.displayText()
            << (with_stacktrace ? getExceptionStackTraceString(e) : "")
            << (with_extra_info ? getExtraExceptionInfo(e) : "")
            << " (version " << VERSION_STRING << VERSION_OFFICIAL;
            << " (version " << VERSION_STRING << VERSION_OFFICIAL << ")";
    }
    catch (...) {}
}
@ -184,7 +184,7 @@ template <> constexpr bool isDecimalField<DecimalField<Decimal128>>() { return t
class FieldVisitorAccurateEquals : public StaticVisitor<bool>
{
public:
    bool operator() (const UInt64 & l, const Null & r) const { return cantCompare(l, r); }
    bool operator() (const UInt64 &, const Null &) const { return false; }
    bool operator() (const UInt64 & l, const UInt64 & r) const { return l == r; }
    bool operator() (const UInt64 & l, const UInt128 & r) const { return cantCompare(l, r); }
    bool operator() (const UInt64 & l, const Int64 & r) const { return accurate::equalsOp(l, r); }
@ -194,7 +194,7 @@ public:
    bool operator() (const UInt64 & l, const Tuple & r) const { return cantCompare(l, r); }
    bool operator() (const UInt64 & l, const AggregateFunctionStateData & r) const { return cantCompare(l, r); }

    bool operator() (const Int64 & l, const Null & r) const { return cantCompare(l, r); }
    bool operator() (const Int64 &, const Null &) const { return false; }
    bool operator() (const Int64 & l, const UInt64 & r) const { return accurate::equalsOp(l, r); }
    bool operator() (const Int64 & l, const UInt128 & r) const { return cantCompare(l, r); }
    bool operator() (const Int64 & l, const Int64 & r) const { return l == r; }
@ -204,7 +204,7 @@ public:
    bool operator() (const Int64 & l, const Tuple & r) const { return cantCompare(l, r); }
    bool operator() (const Int64 & l, const AggregateFunctionStateData & r) const { return cantCompare(l, r); }

    bool operator() (const Float64 & l, const Null & r) const { return cantCompare(l, r); }
    bool operator() (const Float64 &, const Null &) const { return false; }
    bool operator() (const Float64 & l, const UInt64 & r) const { return accurate::equalsOp(l, r); }
    bool operator() (const Float64 & l, const UInt128 & r) const { return cantCompare(l, r); }
    bool operator() (const Float64 & l, const Int64 & r) const { return accurate::equalsOp(l, r); }
@ -227,6 +227,8 @@ public:
            return l == r;
        if constexpr (std::is_same_v<T, UInt128>)
            return stringToUUID(l) == r;
        if constexpr (std::is_same_v<T, Null>)
            return false;
        return cantCompare(l, r);
    }

@ -237,6 +239,8 @@ public:
            return l == r;
        if constexpr (std::is_same_v<T, String>)
            return l == stringToUUID(r);
        if constexpr (std::is_same_v<T, Null>)
            return false;
        return cantCompare(l, r);
    }

@ -245,6 +249,8 @@ public:
    {
        if constexpr (std::is_same_v<T, Array>)
            return l == r;
        if constexpr (std::is_same_v<T, Null>)
            return false;
        return cantCompare(l, r);
    }

@ -253,6 +259,8 @@ public:
    {
        if constexpr (std::is_same_v<T, Tuple>)
            return l == r;
        if constexpr (std::is_same_v<T, Null>)
            return false;
        return cantCompare(l, r);
    }

@ -263,6 +271,8 @@ public:
            return l == r;
        if constexpr (std::is_same_v<U, Int64> || std::is_same_v<U, UInt64>)
            return l == DecimalField<Decimal128>(r, 0);
        if constexpr (std::is_same_v<U, Null>)
            return false;
        return cantCompare(l, r);
    }

@ -289,11 +299,10 @@ private:
    }
};


class FieldVisitorAccurateLess : public StaticVisitor<bool>
{
public:
    bool operator() (const UInt64 & l, const Null & r) const { return cantCompare(l, r); }
    bool operator() (const UInt64 &, const Null &) const { return false; }
    bool operator() (const UInt64 & l, const UInt64 & r) const { return l < r; }
    bool operator() (const UInt64 & l, const UInt128 & r) const { return cantCompare(l, r); }
    bool operator() (const UInt64 & l, const Int64 & r) const { return accurate::lessOp(l, r); }
@ -303,7 +312,7 @@ public:
    bool operator() (const UInt64 & l, const Tuple & r) const { return cantCompare(l, r); }
    bool operator() (const UInt64 & l, const AggregateFunctionStateData & r) const { return cantCompare(l, r); }

    bool operator() (const Int64 & l, const Null & r) const { return cantCompare(l, r); }
    bool operator() (const Int64 &, const Null &) const { return false; }
    bool operator() (const Int64 & l, const UInt64 & r) const { return accurate::lessOp(l, r); }
    bool operator() (const Int64 & l, const UInt128 & r) const { return cantCompare(l, r); }
    bool operator() (const Int64 & l, const Int64 & r) const { return l < r; }
@ -313,7 +322,7 @@ public:
    bool operator() (const Int64 & l, const Tuple & r) const { return cantCompare(l, r); }
    bool operator() (const Int64 & l, const AggregateFunctionStateData & r) const { return cantCompare(l, r); }

    bool operator() (const Float64 & l, const Null & r) const { return cantCompare(l, r); }
    bool operator() (const Float64 &, const Null &) const { return false; }
    bool operator() (const Float64 & l, const UInt64 & r) const { return accurate::lessOp(l, r); }
    bool operator() (const Float64 & l, const UInt128 & r) const { return cantCompare(l, r); }
    bool operator() (const Float64 & l, const Int64 & r) const { return accurate::lessOp(l, r); }
@ -336,6 +345,8 @@ public:
            return l < r;
        if constexpr (std::is_same_v<T, UInt128>)
            return stringToUUID(l) < r;
        if constexpr (std::is_same_v<T, Null>)
            return false;
        return cantCompare(l, r);
    }

@ -346,6 +357,8 @@ public:
            return l < r;
        if constexpr (std::is_same_v<T, String>)
            return l < stringToUUID(r);
        if constexpr (std::is_same_v<T, Null>)
            return false;
        return cantCompare(l, r);
    }

@ -354,6 +367,8 @@ public:
    {
        if constexpr (std::is_same_v<T, Array>)
            return l < r;
        if constexpr (std::is_same_v<T, Null>)
            return false;
        return cantCompare(l, r);
    }

@ -362,6 +377,8 @@ public:
    {
        if constexpr (std::is_same_v<T, Tuple>)
            return l < r;
        if constexpr (std::is_same_v<T, Null>)
            return false;
        return cantCompare(l, r);
    }

@ -370,8 +387,10 @@ public:
    {
        if constexpr (isDecimalField<U>())
            return l < r;
        else if constexpr (std::is_same_v<U, Int64> || std::is_same_v<U, UInt64>)
        if constexpr (std::is_same_v<U, Int64> || std::is_same_v<U, UInt64>)
            return l < DecimalField<Decimal128>(r, 0);
        if constexpr (std::is_same_v<U, Null>)
            return false;
        return cantCompare(l, r);
    }
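The pattern in both visitors is the same: any comparison that involves Null now yields false instead of throwing. A self-contained sketch of that policy over a variant-based field type; the types here are invented stand-ins, not the ClickHouse Field:

#include <cstdint>
#include <iostream>
#include <string>
#include <variant>

struct Null {};
using Field = std::variant<Null, uint64_t, int64_t, double, std::string>;

struct AccurateEquals
{
    /// Null never compares equal to anything, including another Null.
    bool operator()(const Null &, const Null &) const { return false; }
    template <typename T> bool operator()(const Null &, const T &) const { return false; }
    template <typename T> bool operator()(const T &, const Null &) const { return false; }
    /// Same-type comparison.
    template <typename T> bool operator()(const T & l, const T & r) const { return l == r; }
    /// Mixed types: conservatively not equal in this sketch (the real code
    /// performs accurate cross-type numeric comparisons).
    template <typename T, typename U> bool operator()(const T &, const U &) const { return false; }
};

int main()
{
    Field a = uint64_t{42};
    Field b = Null{};
    std::cout << std::visit(AccurateEquals{}, a, b) << '\n';  // prints 0
}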
@ -141,7 +141,15 @@ QueryProfilerBase<ProfilerImpl>::QueryProfilerBase(const Int32 thread_id, const
    sev._sigev_un._tid = thread_id;
#endif
    if (timer_create(clock_type, &sev, &timer_id))
    {
        /// In Google Cloud Run, the function "timer_create" is implemented incorrectly as of 2020-01-25.
        /// https://mybranch.dev/posts/clickhouse-on-cloud-run/
        if (errno == 0)
            throw Exception("Failed to create thread timer. The function 'timer_create' returned non-zero but didn't set errno. This is a bug in your OS.",
                ErrorCodes::CANNOT_CREATE_TIMER);

        throwFromErrno("Failed to create thread timer", ErrorCodes::CANNOT_CREATE_TIMER);
    }

    /// Randomize the offset as a uniform random value from 0 to period - 1.
    /// This allows sampling short queries even if the timer period is large.
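The defensive pattern above (treating errno == 0 after a failing call as an environment bug) can be reproduced in isolation. A minimal Linux sketch against the POSIX timer API; the error messages and the CLOCK_MONOTONIC choice are illustrative:

#include <signal.h>
#include <time.h>
#include <cerrno>
#include <cstring>
#include <stdexcept>
#include <string>

void createTimer(timer_t & timer_id)
{
    sigevent sev{};
    sev.sigev_notify = SIGEV_SIGNAL;
    sev.sigev_signo = SIGALRM;

    errno = 0;
    if (timer_create(CLOCK_MONOTONIC, &sev, &timer_id))
    {
        /// Distinguish a broken environment (non-zero return, errno untouched)
        /// from an ordinary failure that sets errno.
        if (errno == 0)
            throw std::runtime_error("timer_create returned non-zero but did not set errno");
        throw std::runtime_error(std::string("timer_create failed: ") + strerror(errno));
    }
}

int main()
{
    timer_t timer_id;
    createTimer(timer_id);
}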
@ -1,12 +1,13 @@
#include <re2/re2.h>
#include <Common/RemoteHostFilter.h>
#include <Poco/URI.h>
#include <Formats/FormatFactory.h>
#include <Poco/Util/AbstractConfiguration.h>
#include <Formats/FormatFactory.h>
#include <Common/RemoteHostFilter.h>
#include <Common/StringUtils/StringUtils.h>
#include <Common/Exception.h>
#include <IO/WriteHelpers.h>


namespace DB
{
namespace ErrorCodes
@ -1,17 +1,19 @@
#pragma once

#include <string>
#include <vector>
#include <unordered_set>
#include <Poco/URI.h>
#include <Poco/Util/AbstractConfiguration.h>


namespace Poco { class URI; }
namespace Poco { namespace Util { class AbstractConfiguration; } }

namespace DB
{
class RemoteHostFilter
{
/**
  * This class checks if url is allowed.
  * This class checks if a URL is allowed.
  * If primary_hosts and regexp_hosts are empty, all URLs are allowed.
  */
public:
@ -25,6 +27,7 @@ private:
    std::unordered_set<std::string> primary_hosts;    /// Allowed primary (<host>) URLs from config.xml
    std::vector<std::string> regexp_hosts;            /// Allowed regexp (<host_regexp>) URLs from config.xml

    bool checkForDirectEntry(const std::string & str) const;    /// Checks if the primary_hosts and regexp_hosts contain str. If primary_hosts and regexp_hosts are empty, returns true.
    /// Checks if the primary_hosts and regexp_hosts contain str. If primary_hosts and regexp_hosts are empty, returns true.
    bool checkForDirectEntry(const std::string & str) const;
};
}
@ -265,7 +265,19 @@ static void toStringEveryLineImpl(const StackTrace::Frames & frames, size_t offs
    uintptr_t virtual_offset = object ? uintptr_t(object->address_begin) : 0;
    const void * physical_addr = reinterpret_cast<const void *>(uintptr_t(virtual_addr) - virtual_offset);

    out << i << ". " << physical_addr << " ";
    out << i << ". ";

    if (object)
    {
        if (std::filesystem::exists(object->name))
        {
            auto dwarf_it = dwarfs.try_emplace(object->name, *object->elf).first;

            DB::Dwarf::LocationInfo location;
            if (dwarf_it->second.findAddress(uintptr_t(physical_addr), location, DB::Dwarf::LocationInfoMode::FAST))
                out << location.file.toString() << ":" << location.line << ": ";
        }
    }

    auto symbol = symbol_index.findSymbol(virtual_addr);
    if (symbol)
@ -276,22 +288,8 @@ static void toStringEveryLineImpl(const StackTrace::Frames & frames, size_t offs
    else
        out << "?";

    out << " ";

    if (object)
    {
        if (std::filesystem::exists(object->name))
        {
            auto dwarf_it = dwarfs.try_emplace(object->name, *object->elf).first;

            DB::Dwarf::LocationInfo location;
            if (dwarf_it->second.findAddress(uintptr_t(physical_addr), location, DB::Dwarf::LocationInfoMode::FAST))
                out << location.file.toString() << ":" << location.line;
        }
        out << " in " << object->name;
    }
    else
        out << "?";
    out << " @ " << physical_addr;
    out << " in " << (object ? object->name : "?");

    callback(out.str());
    out.str({});
@ -151,10 +151,18 @@ public:
        func = std::forward<Function>(func),
        args = std::make_tuple(std::forward<Args>(args)...)]
    {
        try
        {
            /// Thread status holds a raw pointer to the query context, thus it must always be destroyed
            /// before sending the signal that permits joining this thread.
            DB::ThreadStatus thread_status;
            std::apply(func, args);
        }
        catch (...)
        {
            state->set();
            throw;
        }
        state->set();
    });
}
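The point of the try/catch shape above is ordering: thread_status must be destroyed before state->set() releases the joiner. A self-contained sketch of that guarantee; Event and ThreadLocalState are invented stand-ins for the real primitives:

#include <condition_variable>
#include <mutex>
#include <thread>
#include <iostream>

struct Event
{
    std::mutex m;
    std::condition_variable cv;
    bool fired = false;
    void set()  { { std::lock_guard lock(m); fired = true; } cv.notify_all(); }
    void wait() { std::unique_lock lock(m); cv.wait(lock, [this] { return fired; }); }
};

struct ThreadLocalState
{
    /// Imagine this holds raw pointers into the caller's context.
    ~ThreadLocalState() { std::cout << "state destroyed\n"; }
};

void run(Event & done)
{
    try
    {
        ThreadLocalState state;   /// destroyed at the end of this block...
        /// ... do the actual work here ...
    }
    catch (...)
    {
        done.set();               /// ...and only then is the joiner released
        throw;
    }
    done.set();
}

int main()
{
    Event done;
    std::thread worker(run, std::ref(done));
    done.wait();                  /// guaranteed: ThreadLocalState is already gone
    std::cout << "joiner released\n";
    worker.join();
}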
@ -203,6 +203,9 @@ protected:

    /// Set to non-nullptr only if we have enough capabilities.
    std::unique_ptr<TaskStatsInfoGetter> taskstats_getter;

private:
    void setupState(const ThreadGroupStatusPtr & thread_group_);
};

}
@ -23,7 +23,14 @@ namespace DB
static thread_local void * stack_address = nullptr;
static thread_local size_t max_stack_size = 0;

void checkStackSize()
/** This works fine when interpreters are instantiated by ClickHouse code in properly prepared threads,
  * but there are cases when ClickHouse runs as a library inside another application.
  * If the application uses user-space lightweight threads with manually allocated stacks,
  * the current implementation is not reasonable, as it has no way to properly check the remaining
  * stack size without knowing the details of how stacks are allocated.
  * We mark this function as a weak symbol so it can be replaced in other ClickHouse-based products.
  */
__attribute__((__weak__)) void checkStackSize()
{
    using namespace DB;
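As with MMAP_THRESHOLD above, the weak attribute means an embedding product can replace the function wholesale at link time. A hypothetical override for an application that runs interpreters on fibers with custom stacks; the exact declaration must match the weak symbol's, which is an assumption here:

/// Hypothetical embedding application: a strong definition of the weak symbol
/// disables the default stack probing, which would be wrong for manually
/// allocated fiber stacks.
void checkStackSize()
{
    /// Rely on the fiber runtime's own guard pages instead of probing.
}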
@ -147,6 +147,14 @@ using BlocksList = std::list<Block>;
using BlocksPtr = std::shared_ptr<Blocks>;
using BlocksPtrs = std::shared_ptr<std::vector<BlocksPtr>>;

/// Extends block with extra data in derived classes
struct ExtraBlock
{
    Block block;
};

using ExtraBlockPtr = std::shared_ptr<ExtraBlock>;

/// Compare number of columns, data types, column types, column names, and values of constant columns.
bool blocksHaveEqualStructure(const Block & lhs, const Block & rhs);
@ -52,6 +52,8 @@ struct Settings : public SettingsCollection<Settings>
    M(SettingUInt64, max_insert_block_size, DEFAULT_INSERT_BLOCK_SIZE, "The maximum block size for insertion, if we control the creation of blocks for insertion.", 0) \
    M(SettingUInt64, min_insert_block_size_rows, DEFAULT_INSERT_BLOCK_SIZE, "Squash blocks passed to INSERT query to specified size in rows, if blocks are not big enough.", 0) \
    M(SettingUInt64, min_insert_block_size_bytes, (DEFAULT_INSERT_BLOCK_SIZE * 256), "Squash blocks passed to INSERT query to specified size in bytes, if blocks are not big enough.", 0) \
    M(SettingUInt64, max_joined_block_size_rows, DEFAULT_BLOCK_SIZE, "Maximum block size for JOIN result (if join algorithm supports it). 0 means unlimited.", 0) \
    M(SettingUInt64, max_insert_threads, 0, "The maximum number of threads to execute the INSERT SELECT query. By default, it is determined automatically.", 0) \
    M(SettingMaxThreads, max_threads, 0, "The maximum number of threads to execute the request. By default, it is determined automatically.", 0) \
    M(SettingMaxThreads, max_alter_threads, 0, "The maximum number of threads to execute the ALTER requests. By default, it is determined automatically.", 0) \
    M(SettingUInt64, max_read_buffer_size, DBMS_DEFAULT_BUFFER_SIZE, "The maximum size of the buffer to read from the filesystem.", 0) \
@ -110,6 +112,7 @@ struct Settings : public SettingsCollection<Settings>
    \
    M(SettingBool, distributed_group_by_no_merge, false, "Do not merge aggregation states from different servers for distributed query processing - in case it is for certain that there are different keys on different shards.", 0) \
    M(SettingBool, optimize_skip_unused_shards, false, "Assumes that data is distributed by sharding_key. Optimization to skip unused shards if SELECT query filters by sharding_key.", 0) \
    M(SettingUInt64, force_optimize_skip_unused_shards, 0, "Throw an exception if unused shards cannot be skipped (1 - throw only if the table has the sharding key, 2 - always throw).", 0) \
    \
    M(SettingBool, input_format_parallel_parsing, true, "Enable parallel parsing for some data formats.", 0) \
    M(SettingUInt64, min_chunk_bytes_for_parallel_parsing, (1024 * 1024), "The minimum chunk size in bytes, which each thread will parse in parallel.", 0) \
@ -186,6 +189,7 @@ struct Settings : public SettingsCollection<Settings>
    M(SettingBool, input_format_values_interpret_expressions, true, "For Values format: if the field could not be parsed by streaming parser, run SQL parser and try to interpret it as SQL expression.", 0) \
    M(SettingBool, input_format_values_deduce_templates_of_expressions, true, "For Values format: if the field could not be parsed by streaming parser, run SQL parser, deduce template of the SQL expression, try to parse all rows using template and then interpret expression for all rows.", 0) \
    M(SettingBool, input_format_values_accurate_types_of_literals, true, "For Values format: when parsing and interpreting expressions using template, check actual type of literal to avoid possible overflow and precision issues.", 0) \
    M(SettingString, input_format_avro_schema_registry_url, "", "For AvroConfluent format: Confluent Schema Registry URL.", 0) \
    \
    M(SettingBool, output_format_json_quote_64bit_integers, true, "Controls quoting of 64-bit integers in JSON output format.", 0) \
    \
@ -197,6 +201,8 @@ struct Settings : public SettingsCollection<Settings>
    M(SettingUInt64, output_format_pretty_max_column_pad_width, 250, "Maximum width to pad all values in a column in Pretty formats.", 0) \
    M(SettingBool, output_format_pretty_color, true, "Use ANSI escape sequences to paint colors in Pretty formats", 0) \
    M(SettingUInt64, output_format_parquet_row_group_size, 1000000, "Row group size in rows.", 0) \
    M(SettingString, output_format_avro_codec, "", "Compression codec used for output. Possible values: 'null', 'deflate', 'snappy'.", 0) \
    M(SettingUInt64, output_format_avro_sync_interval, 16 * 1024, "Sync interval in bytes.", 0) \
    \
    M(SettingBool, use_client_time_zone, false, "Use client timezone for interpreting DateTime string values, instead of adopting server timezone.", 0) \
    \
@ -312,7 +318,6 @@ struct Settings : public SettingsCollection<Settings>
    M(SettingBool, partial_merge_join_optimizations, false, "Enable optimizations in partial merge join", 0) \
    M(SettingUInt64, default_max_bytes_in_join, 100000000, "Maximum size of right-side table if limit is required but max_bytes_in_join is not set.", 0) \
    M(SettingUInt64, partial_merge_join_rows_in_right_blocks, 10000, "Split right-hand joining data in blocks of specified size. It's a portion of data indexed by min-max values and possibly unloaded on disk.", 0) \
    M(SettingUInt64, partial_merge_join_rows_in_left_blocks, 10000, "Group left-hand joining data in bigger blocks. Setting it to a bigger value increases JOIN performance and memory usage.", 0) \
    \
    M(SettingUInt64, max_rows_to_transfer, 0, "Maximum size (in rows) of the transmitted external table obtained when the GLOBAL IN/JOIN section is executed.", 0) \
    M(SettingUInt64, max_bytes_to_transfer, 0, "Maximum size (in uncompressed bytes) of the transmitted external table obtained when the GLOBAL IN/JOIN section is executed.", 0) \
@ -62,7 +62,7 @@ void SettingNumber<Type>::set(const Field & x)
template <typename Type>
void SettingNumber<Type>::set(const String & x)
{
    set(completeParse<Type>(x));
    set(parseWithSizeSuffix<Type>(x));
}

template <>
@ -31,7 +31,6 @@ enum class TypeIndex
    Float64,
    Date,
    DateTime,
    DateTime32 = DateTime,
    DateTime64,
    String,
    FixedString,
@ -158,8 +157,6 @@ using Decimal32 = Decimal<Int32>;
using Decimal64 = Decimal<Int64>;
using Decimal128 = Decimal<Int128>;

// TODO (nemkov): consider making a strong typedef
//using DateTime32 = time_t;
using DateTime64 = Decimal64;

template <> struct TypeName<Decimal32> { static const char * get() { return "Decimal32"; } };
@ -10,5 +10,6 @@
#cmakedefine01 USE_POCO_DATAODBC
#cmakedefine01 USE_POCO_MONGODB
#cmakedefine01 USE_POCO_REDIS
#cmakedefine01 USE_POCO_JSON
#cmakedefine01 USE_INTERNAL_LLVM_LIBRARY
#cmakedefine01 USE_SSL
@ -5,6 +5,7 @@
#include <Common/quoteString.h>
#include <Parsers/IAST.h>


namespace DB
{

@ -64,18 +65,8 @@ ConvertingBlockInputStream::ConvertingBlockInputStream(
                throw Exception("Cannot find column " + backQuote(res_elem.name) + " in source stream",
                    ErrorCodes::THERE_IS_NO_COLUMN);
            break;

        case MatchColumnsMode::NameOrDefault:
            if (input_header.has(res_elem.name))
                conversion[result_col_num] = input_header.getPositionByName(res_elem.name);
            else
                conversion[result_col_num] = USE_DEFAULT;
            break;
    }

    if (conversion[result_col_num] == USE_DEFAULT)
        continue;

    const auto & src_elem = input_header.getByPosition(conversion[result_col_num]);

    /// Check constants.
@ -109,17 +100,8 @@ Block ConvertingBlockInputStream::readImpl()
    Block res = header.cloneEmpty();
    for (size_t res_pos = 0, size = conversion.size(); res_pos < size; ++res_pos)
    {
        auto & res_elem = res.getByPosition(res_pos);

        if (conversion[res_pos] == USE_DEFAULT)
        {
            // Create a column with default values
            auto column_with_defaults = res_elem.type->createColumn()->cloneResized(src.rows());
            res_elem.column = std::move(column_with_defaults);
            continue;
        }

        const auto & src_elem = src.getByPosition(conversion[res_pos]);
        auto & res_elem = res.getByPosition(res_pos);

        ColumnPtr converted = castColumnWithDiagnostic(src_elem, res_elem, context);

@ -132,4 +114,3 @@ Block ConvertingBlockInputStream::readImpl()
}

}

@ -28,9 +28,7 @@ public:
    /// Require the same number of columns in source and result. Match columns by their positions, regardless of names.
    Position,
    /// Find columns in source by their names. Allow excessive columns in source.
    Name,
    /// Find columns in source by their names if present, else use the default. Allow excessive columns in source.
    NameOrDefault
    Name
};

ConvertingBlockInputStream(
@ -50,7 +48,6 @@ private:

    /// How to construct the result block. Position in the source block, where to get each column.
    using Conversion = std::vector<size_t>;
    const size_t USE_DEFAULT = static_cast<size_t>(-1);
    Conversion conversion;
};
@ -17,6 +17,8 @@ namespace DB
namespace ErrorCodes
{
    extern const int TOO_SLOW;
    extern const int LOGICAL_ERROR;
    extern const int TIMEOUT_EXCEEDED;
}

static void limitProgressingSpeed(size_t total_progress_size, size_t max_speed_in_seconds, UInt64 total_elapsed_microseconds)
@ -88,4 +90,29 @@ void ExecutionSpeedLimits::throttle(
    }
}

static bool handleOverflowMode(OverflowMode mode, const String & message, int code)
{
    switch (mode)
    {
        case OverflowMode::THROW:
            throw Exception(message, code);
        case OverflowMode::BREAK:
            return false;
        default:
            throw Exception("Logical error: unknown overflow mode", ErrorCodes::LOGICAL_ERROR);
    }
}

bool ExecutionSpeedLimits::checkTimeLimit(UInt64 elapsed_ns, OverflowMode overflow_mode)
{
    if (max_execution_time != 0
        && elapsed_ns > static_cast<UInt64>(max_execution_time.totalMicroseconds()) * 1000)
        return handleOverflowMode(overflow_mode,
            "Timeout exceeded: elapsed " + toString(static_cast<double>(elapsed_ns) / 1000000000ULL)
            + " seconds, maximum: " + toString(max_execution_time.totalMicroseconds() / 1000000.0),
            ErrorCodes::TIMEOUT_EXCEEDED);

    return true;
}

}

@ -2,6 +2,7 @@

#include <Poco/Timespan.h>
#include <Core/Types.h>
#include <DataStreams/SizeLimits.h>

namespace DB
{
@ -23,6 +24,8 @@ public:

    /// Pause execution if speed limits were exceeded.
    void throttle(size_t read_rows, size_t read_bytes, size_t total_rows_to_read, UInt64 total_elapsed_microseconds);

    bool checkTimeLimit(UInt64 elapsed_ns, OverflowMode overflow_mode);
};

}
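The extracted helper makes the overflow-mode semantics reusable: BREAK turns a timeout into a clean stop, THROW turns it into an exception. A self-contained sketch of the same pattern, using standard library types instead of the ClickHouse classes:

#include <chrono>
#include <cstdio>
#include <stdexcept>

enum class OverflowMode { THROW, BREAK };

static bool checkTimeLimit(std::chrono::nanoseconds elapsed,
                           std::chrono::nanoseconds max_execution_time,
                           OverflowMode mode)
{
    if (max_execution_time.count() != 0 && elapsed > max_execution_time)
    {
        if (mode == OverflowMode::THROW)
            throw std::runtime_error("Timeout exceeded");
        return false;   /// OverflowMode::BREAK: ask the caller to stop
    }
    return true;
}

int main()
{
    using namespace std::chrono;
    auto start = steady_clock::now();
    while (checkTimeLimit(duration_cast<nanoseconds>(steady_clock::now() - start),
                          milliseconds(10), OverflowMode::BREAK))
        ;   /// simulated work loop; exits cleanly after ~10 ms
    std::puts("stopped by time limit");
}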
@ -44,4 +44,29 @@ Block ExpressionBlockInputStream::readImpl()
    return res;
}

Block InflatingExpressionBlockInputStream::readImpl()
{
    if (!initialized)
    {
        if (expression->resultIsAlwaysEmpty())
            return {};

        initialized = true;
    }

    Block res;
    if (likely(!not_processed))
    {
        res = children.back()->read();
        if (res)
            expression->execute(res, not_processed, action_number);
    }
    else
    {
        res = std::move(not_processed->block);
        expression->execute(res, not_processed, action_number);
    }
    return res;
}

}

@ -15,10 +15,9 @@ class ExpressionActions;
 */
class ExpressionBlockInputStream : public IBlockInputStream
{
private:
public:
    using ExpressionActionsPtr = std::shared_ptr<ExpressionActions>;

public:
    ExpressionBlockInputStream(const BlockInputStreamPtr & input, const ExpressionActionsPtr & expression_);

    String getName() const override;
@ -26,12 +25,29 @@ public:
    Block getHeader() const override;

protected:
    bool initialized = false;
    ExpressionActionsPtr expression;

    Block readImpl() override;

private:
    ExpressionActionsPtr expression;
    Block cached_header;
    bool initialized = false;
};

/// An ExpressionBlockInputStream that can generate many output blocks for a single input block.
class InflatingExpressionBlockInputStream : public ExpressionBlockInputStream
{
public:
    InflatingExpressionBlockInputStream(const BlockInputStreamPtr & input, const ExpressionActionsPtr & expression_)
        : ExpressionBlockInputStream(input, expression_)
    {}

protected:
    Block readImpl() override;

private:
    ExtraBlockPtr not_processed;
    size_t action_number = 0;
};

}
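The "inflating" idea is a general one: an operator that may produce several output chunks per input chunk keeps the unfinished remainder and re-feeds it on the next call, exactly like not_processed above. A self-contained sketch with invented types:

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <optional>
#include <vector>

using Chunk = std::vector<int>;

class InflatingSource
{
public:
    explicit InflatingSource(std::vector<Chunk> input_) : input(std::move(input_)) {}

    std::optional<Chunk> read()
    {
        if (!leftover.empty())
            return takeUpTo2(leftover);          /// continue the unfinished chunk
        if (pos == input.size())
            return std::nullopt;                 /// exhausted
        leftover = input[pos++];
        return takeUpTo2(leftover);
    }

private:
    static Chunk takeUpTo2(Chunk & from)
    {
        Chunk out(from.begin(), from.begin() + std::min<size_t>(2, from.size()));
        from.erase(from.begin(), from.begin() + out.size());
        return out;
    }

    std::vector<Chunk> input;
    size_t pos = 0;
    Chunk leftover;                              /// analogue of not_processed
};

int main()
{
    InflatingSource src({{1, 2, 3, 4, 5}});
    while (auto chunk = src.read())
        std::cout << chunk->size() << ' ';       /// prints: 2 2 1
}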
@ -203,30 +203,9 @@ void IBlockInputStream::updateExtremes(Block & block)
}


static bool handleOverflowMode(OverflowMode mode, const String & message, int code)
{
    switch (mode)
    {
        case OverflowMode::THROW:
            throw Exception(message, code);
        case OverflowMode::BREAK:
            return false;
        default:
            throw Exception("Logical error: unknown overflow mode", ErrorCodes::LOGICAL_ERROR);
    }
}


bool IBlockInputStream::checkTimeLimit()
{
    if (limits.speed_limits.max_execution_time != 0
        && info.total_stopwatch.elapsed() > static_cast<UInt64>(limits.speed_limits.max_execution_time.totalMicroseconds()) * 1000)
        return handleOverflowMode(limits.timeout_overflow_mode,
            "Timeout exceeded: elapsed " + toString(info.total_stopwatch.elapsedSeconds())
            + " seconds, maximum: " + toString(limits.speed_limits.max_execution_time.totalMicroseconds() / 1000000.0),
            ErrorCodes::TIMEOUT_EXCEEDED);

    return true;
    return limits.speed_limits.checkTimeLimit(info.total_stopwatch.elapsed(), limits.timeout_overflow_mode);
}



@ -12,5 +12,6 @@ class IBlockOutputStream;
using BlockInputStreamPtr = std::shared_ptr<IBlockInputStream>;
using BlockInputStreams = std::vector<BlockInputStreamPtr>;
using BlockOutputStreamPtr = std::shared_ptr<IBlockOutputStream>;
using BlockOutputStreams = std::vector<BlockOutputStreamPtr>;

}
@ -7,6 +7,7 @@
#include <IO/WriteBufferFromFile.h>
#include <Compression/CompressedWriteBuffer.h>
#include <Interpreters/sortBlock.h>
#include <Disks/DiskSpaceMonitor.h>


namespace ProfileEvents
@ -21,10 +22,10 @@ namespace DB
MergeSortingBlockInputStream::MergeSortingBlockInputStream(
    const BlockInputStreamPtr & input, SortDescription & description_,
    size_t max_merged_block_size_, UInt64 limit_, size_t max_bytes_before_remerge_,
    size_t max_bytes_before_external_sort_, const std::string & tmp_path_, size_t min_free_disk_space_)
    size_t max_bytes_before_external_sort_, VolumePtr tmp_volume_, size_t min_free_disk_space_)
    : description(description_), max_merged_block_size(max_merged_block_size_), limit(limit_),
    max_bytes_before_remerge(max_bytes_before_remerge_),
    max_bytes_before_external_sort(max_bytes_before_external_sort_), tmp_path(tmp_path_),
    max_bytes_before_external_sort(max_bytes_before_external_sort_), tmp_volume(tmp_volume_),
    min_free_disk_space(min_free_disk_space_)
{
    children.push_back(input);
@ -78,10 +79,14 @@ Block MergeSortingBlockInputStream::readImpl()
     */
    if (max_bytes_before_external_sort && sum_bytes_in_blocks > max_bytes_before_external_sort)
    {
        if (!enoughSpaceInDirectory(tmp_path, sum_bytes_in_blocks + min_free_disk_space))
            throw Exception("Not enough space for external sort in " + tmp_path, ErrorCodes::NOT_ENOUGH_SPACE);
        size_t size = sum_bytes_in_blocks + min_free_disk_space;
        auto reservation = tmp_volume->reserve(size);
        if (!reservation)
            throw Exception("Not enough space for external sort in temporary storage", ErrorCodes::NOT_ENOUGH_SPACE);

        const std::string tmp_path(reservation->getDisk()->getPath());
        temporary_files.emplace_back(createTemporaryFile(tmp_path));

        const std::string & path = temporary_files.back()->path();
        MergeSortingBlocksBlockInputStream block_in(blocks, description, max_merged_block_size, limit);

@ -18,6 +18,9 @@ namespace DB

struct TemporaryFileStream;

class Volume;
using VolumePtr = std::shared_ptr<Volume>;

namespace ErrorCodes
{
    extern const int NOT_ENOUGH_SPACE;
@ -77,7 +80,7 @@ public:
    MergeSortingBlockInputStream(const BlockInputStreamPtr & input, SortDescription & description_,
        size_t max_merged_block_size_, UInt64 limit_,
        size_t max_bytes_before_remerge_,
        size_t max_bytes_before_external_sort_, const std::string & tmp_path_,
        size_t max_bytes_before_external_sort_, VolumePtr tmp_volume_,
        size_t min_free_disk_space_);

    String getName() const override { return "MergeSorting"; }
@ -97,7 +100,7 @@ private:

    size_t max_bytes_before_remerge;
    size_t max_bytes_before_external_sort;
    const std::string tmp_path;
    VolumePtr tmp_volume;
    size_t min_free_disk_space;

    Logger * log = &Logger::get("MergeSortingBlockInputStream");
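The change replaces a fixed directory with a reserve-before-spill pattern: ask the volume for the bytes you are about to write, and only on success derive the temporary path from the chosen disk. A self-contained sketch of that pattern with illustrative types (not the ClickHouse Volume/Reservation):

#include <cstdint>
#include <iostream>
#include <memory>
#include <string>
#include <vector>

struct Disk { std::string path; uint64_t free_bytes; };

struct Reservation
{
    Disk * disk;
    uint64_t bytes;
    Reservation(Disk * disk_, uint64_t bytes_) : disk(disk_), bytes(bytes_) { disk->free_bytes -= bytes; }
    ~Reservation() { disk->free_bytes += bytes; }   /// space is returned with the reservation
    Reservation(const Reservation &) = delete;
    Reservation & operator=(const Reservation &) = delete;
};

std::unique_ptr<Reservation> reserve(std::vector<Disk> & disks, uint64_t bytes)
{
    for (auto & disk : disks)
        if (disk.free_bytes >= bytes)
            return std::make_unique<Reservation>(&disk, bytes);
    return nullptr;   /// no disk has enough space
}

int main()
{
    std::vector<Disk> disks{{"/mnt/fast/", 1ULL << 20}, {"/mnt/big/", 1ULL << 30}};
    auto reservation = reserve(disks, 100ULL << 20);
    if (!reservation)
    {
        std::cout << "not enough space\n";
        return 1;
    }
    std::cout << "spill to " << reservation->disk->path << '\n';   /// /mnt/big/
}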
@ -21,9 +21,19 @@ class NullAndDoCopyBlockInputStream : public IBlockInputStream
{
public:
    NullAndDoCopyBlockInputStream(const BlockInputStreamPtr & input_, BlockOutputStreamPtr output_)
        : input(input_), output(output_)
    {
        children.push_back(input_);
        input_streams.push_back(input_);
        output_streams.push_back(output_);

        for (auto & input_stream : input_streams)
            children.push_back(input_stream);
    }

    NullAndDoCopyBlockInputStream(const BlockInputStreams & input_, BlockOutputStreams & output_)
        : input_streams(input_), output_streams(output_)
    {
        for (auto & input_stream : input_)
            children.push_back(input_stream);
    }

    /// Suppress readPrefix and readSuffix, because they are called by copyData.
@ -39,13 +49,20 @@ public:
protected:
    Block readImpl() override
    {
        copyData(*input, *output);
        /// We do not use the cancel flag here.
        /// If the query was cancelled, it will be processed by the child streams.
        /// Part of the data will be processed.

        if (input_streams.size() == 1 && output_streams.size() == 1)
            copyData(*input_streams.at(0), *output_streams.at(0));
        else
            copyData(input_streams, output_streams);
        return Block();
    }

private:
    BlockInputStreamPtr input;
    BlockOutputStreamPtr output;
    BlockInputStreams input_streams;
    BlockOutputStreams output_streams;
};

}
@ -56,9 +56,24 @@ PushingToViewsBlockOutputStream::PushingToViewsBlockOutputStream(
    StoragePtr inner_table = materialized_view->getTargetTable();
    auto inner_table_id = inner_table->getStorageID();
    query = materialized_view->getInnerQuery();

    std::unique_ptr<ASTInsertQuery> insert = std::make_unique<ASTInsertQuery>();
    insert->database = inner_table_id.database_name;
    insert->table = inner_table_id.table_name;

    /// Get the list of columns we get from the select query.
    auto header = InterpreterSelectQuery(query, *views_context, SelectQueryOptions().analyze())
        .getSampleBlock();

    /// Insert only columns returned by the select.
    auto list = std::make_shared<ASTExpressionList>();
    for (auto & column : header)
        /// But skip columns which the storage doesn't have.
        if (inner_table->hasColumn(column.name))
            list->children.emplace_back(std::make_shared<ASTIdentifier>(column.name));

    insert->columns = std::move(list);

    ASTPtr insert_query_ptr(insert.release());
    InterpreterInsertQuery interpreter(insert_query_ptr, *views_context);
    BlockIO io = interpreter.execute();
@ -230,7 +245,7 @@ void PushingToViewsBlockOutputStream::process(const Block & block, size_t view_n
        /// and two-level aggregation is triggered).
        in = std::make_shared<SquashingBlockInputStream>(
            in, context.getSettingsRef().min_insert_block_size_rows, context.getSettingsRef().min_insert_block_size_bytes);
        in = std::make_shared<ConvertingBlockInputStream>(context, in, view.out->getHeader(), ConvertingBlockInputStream::MatchColumnsMode::NameOrDefault);
        in = std::make_shared<ConvertingBlockInputStream>(context, in, view.out->getHeader(), ConvertingBlockInputStream::MatchColumnsMode::Name);
    }
    else
        in = std::make_shared<OneBlockInputStream>(block);
@ -70,7 +70,7 @@ bool TTLBlockInputStream::isTTLExpired(time_t ttl)
Block TTLBlockInputStream::readImpl()
{
    /// Skip all data if the table TTL is expired for this part
    if (storage.hasTableTTL() && isTTLExpired(old_ttl_infos.table_ttl.max))
    if (storage.hasRowsTTL() && isTTLExpired(old_ttl_infos.table_ttl.max))
    {
        rows_removed = data_part->rows_count;
        return {};
@ -80,7 +80,7 @@ Block TTLBlockInputStream::readImpl()
    if (!block)
        return block;

    if (storage.hasTableTTL() && (force || isTTLExpired(old_ttl_infos.table_ttl.min)))
    if (storage.hasRowsTTL() && (force || isTTLExpired(old_ttl_infos.table_ttl.min)))
        removeRowsWithExpiredTableTTL(block);

    removeValuesWithExpiredColumnTTL(block);
@ -106,10 +106,10 @@ void TTLBlockInputStream::readSuffixImpl()

void TTLBlockInputStream::removeRowsWithExpiredTableTTL(Block & block)
{
    storage.ttl_table_entry.expression->execute(block);
    storage.rows_ttl_entry.expression->execute(block);

    const IColumn * ttl_column =
        block.getByName(storage.ttl_table_entry.result_column).column.get();
        block.getByName(storage.rows_ttl_entry.result_column).column.get();

    const auto & column_names = header.getNames();
    MutableColumns result_columns;
@ -1,6 +1,10 @@
#include <thread>
#include <DataStreams/IBlockInputStream.h>
#include <DataStreams/IBlockOutputStream.h>
#include <DataStreams/copyData.h>
#include <DataStreams/ParallelInputsProcessor.h>
#include <Common/ConcurrentBoundedQueue.h>
#include <Common/ThreadPool.h>


namespace DB
@ -51,6 +55,79 @@ void copyDataImpl(IBlockInputStream & from, IBlockOutputStream & to, TCancelCall

inline void doNothing(const Block &) {}

namespace
{


struct ParallelInsertsHandler
{
    using CancellationHook = std::function<void()>;

    explicit ParallelInsertsHandler(BlockOutputStreams & output_streams, CancellationHook cancellation_hook_, size_t num_threads)
        : outputs(output_streams.size()), cancellation_hook(std::move(cancellation_hook_))
    {
        exceptions.resize(num_threads);

        for (auto & output : output_streams)
            outputs.push(output.get());
    }

    void onBlock(Block & block, size_t /*thread_num*/)
    {
        IBlockOutputStream * out = nullptr;

        outputs.pop(out);
        out->write(block);
        outputs.push(out);
    }

    void onFinishThread(size_t /*thread_num*/) {}
    void onFinish() {}

    void onException(std::exception_ptr & exception, size_t thread_num)
    {
        exceptions[thread_num] = exception;
        cancellation_hook();
    }

    void rethrowFirstException()
    {
        for (auto & exception : exceptions)
            if (exception)
                std::rethrow_exception(exception);
    }

    ConcurrentBoundedQueue<IBlockOutputStream *> outputs;
    std::vector<std::exception_ptr> exceptions;
    CancellationHook cancellation_hook;
};

}

static void copyDataImpl(BlockInputStreams & inputs, BlockOutputStreams & outputs)
{
    for (auto & output : outputs)
        output->writePrefix();

    using Processor = ParallelInputsProcessor<ParallelInsertsHandler>;
    Processor * processor_ptr = nullptr;

    ParallelInsertsHandler handler(outputs, [&processor_ptr]() { processor_ptr->cancel(false); }, inputs.size());
    ParallelInputsProcessor<ParallelInsertsHandler> processor(inputs, nullptr, inputs.size(), handler);
    processor_ptr = &processor;

    processor.process();
    processor.wait();
    handler.rethrowFirstException();

    /// readPrefix is called in ParallelInputsProcessor.
    for (auto & input : inputs)
        input->readSuffix();

    for (auto & output : outputs)
        output->writeSuffix();
}

void copyData(IBlockInputStream & from, IBlockOutputStream & to, std::atomic<bool> * is_cancelled)
{
    auto is_cancelled_pred = [is_cancelled] ()
@ -61,6 +138,10 @@ void copyData(IBlockInputStream & from, IBlockOutputStream & to, std::atomic<boo
    copyDataImpl(from, to, is_cancelled_pred, doNothing);
}

void copyData(BlockInputStreams & inputs, BlockOutputStreams & outputs)
{
    copyDataImpl(inputs, outputs);
}

void copyData(IBlockInputStream & from, IBlockOutputStream & to, const std::function<bool()> & is_cancelled)
{
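The handoff above guarantees that each output stream is written by one thread at a time: a writer pops a free output from the queue, uses it exclusively, and returns it. A self-contained sketch of that round-robin handoff with a simple blocking queue (all types here are invented for illustration):

#include <atomic>
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

template <typename T>
class BlockingQueue
{
public:
    void push(T v) { { std::lock_guard l(m); q.push(std::move(v)); } cv.notify_one(); }
    T pop()
    {
        std::unique_lock l(m);
        cv.wait(l, [this] { return !q.empty(); });
        T v = std::move(q.front()); q.pop();
        return v;
    }
private:
    std::mutex m;
    std::condition_variable cv;
    std::queue<T> q;
};

struct Output { std::atomic<int> rows{0}; };

int main()
{
    std::vector<Output> outputs(2);
    BlockingQueue<Output *> free_outputs;
    for (auto & o : outputs) free_outputs.push(&o);

    auto worker = [&](int blocks)
    {
        for (int i = 0; i < blocks; ++i)
        {
            Output * out = free_outputs.pop();   /// exclusive ownership while writing
            out->rows.fetch_add(1);
            free_outputs.push(out);
        }
    };

    std::thread t1(worker, 100), t2(worker, 100), t3(worker, 100);
    t1.join(); t2.join(); t3.join();
    std::printf("%d %d\n", outputs[0].rows.load(), outputs[1].rows.load());   /// sums to 300
}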
@ -16,6 +16,8 @@ class Block;
 */
void copyData(IBlockInputStream & from, IBlockOutputStream & to, std::atomic<bool> * is_cancelled = nullptr);

void copyData(BlockInputStreams & inputs, BlockOutputStreams & outputs);

void copyData(IBlockInputStream & from, IBlockOutputStream & to, const std::function<bool()> & is_cancelled);

void copyData(IBlockInputStream & from, IBlockOutputStream & to, const std::function<bool()> & is_cancelled,
@ -13,7 +13,7 @@ namespace DB

namespace ErrorCodes
{
    extern const int LOGICAL_ERROR;
    extern const int BAD_ARGUMENTS;
}


@ -70,7 +70,7 @@ public:
    {
        const auto it = value_to_name_map.find(value);
        if (it == std::end(value_to_name_map))
            throw Exception{"Unexpected value " + toString(value) + " for type " + getName(), ErrorCodes::LOGICAL_ERROR};
            throw Exception{"Unexpected value " + toString(value) + " for type " + getName(), ErrorCodes::BAD_ARGUMENTS};

        return it->second;
    }
@ -79,7 +79,7 @@ public:
    {
        const auto it = name_to_value_map.find(field_name);
        if (!it)
            throw Exception{"Unknown element '" + field_name.toString() + "' for type " + getName(), ErrorCodes::LOGICAL_ERROR};
            throw Exception{"Unknown element '" + field_name.toString() + "' for type " + getName(), ErrorCodes::BAD_ARGUMENTS};

        return it->getMapped();
    }
@ -23,6 +23,8 @@ namespace DB

class Context;

/** Create dictionary according to its layout.
  */
class DictionaryFactory : private boost::noncopyable
{
public:
@ -111,6 +111,12 @@ Volume::Volume(
            << " < " << formatReadableSizeWithBinarySuffix(MIN_PART_SIZE) << ")");
}

DiskPtr Volume::getNextDisk()
{
    size_t start_from = last_used.fetch_add(1u, std::memory_order_relaxed);
    size_t index = start_from % disks.size();
    return disks[index];
}

ReservationPtr Volume::reserve(UInt64 expected_size)
{

@ -67,6 +67,13 @@ public:
        const String & config_prefix,
        const DiskSelector & disk_selector);

    /// Next disk (round-robin)
    ///
    /// - Used with the policy for temporary data
    /// - Ignores all limitations
    /// - Shares the last-access counter with reserve()
    DiskPtr getNextDisk();

    /// Uses round-robin to choose a disk for the reservation.
    /// Returns a valid reservation or nullptr if there is no space left on any disk.
    ReservationPtr reserve(UInt64 bytes) override;
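getNextDisk() distributes successive calls across disks without a mutex: a relaxed fetch_add hands each caller a distinct counter value, and the exact interleaving under contention is deliberately unspecified. A self-contained sketch of the same lock-free round-robin (illustrative types, not the ClickHouse Volume):

#include <atomic>
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

struct Disk { std::string path; };

class Volume
{
public:
    explicit Volume(std::vector<Disk> disks_) : disks(std::move(disks_)) {}

    const Disk & getNextDisk()
    {
        /// Relaxed ordering is enough: we only need distinct counter values,
        /// not synchronization with other memory operations.
        size_t start_from = last_used.fetch_add(1, std::memory_order_relaxed);
        return disks[start_from % disks.size()];
    }

private:
    std::vector<Disk> disks;
    std::atomic<size_t> last_used{0};
};

int main()
{
    Volume volume({{"/mnt/disk1/"}, {"/mnt/disk2/"}});
    for (int i = 0; i < 4; ++i)
        std::cout << volume.getNextDisk().path << '\n';   /// alternates disk1/disk2
}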
@ -68,6 +68,7 @@ static FormatSettings getInputFormatSetting(const Settings & settings, const Con
    format_settings.custom.row_before_delimiter = settings.format_custom_row_before_delimiter;
    format_settings.custom.row_after_delimiter = settings.format_custom_row_after_delimiter;
    format_settings.custom.row_between_delimiter = settings.format_custom_row_between_delimiter;
    format_settings.avro.schema_registry_url = settings.input_format_avro_schema_registry_url;

    return format_settings;
}
@ -99,6 +100,8 @@ static FormatSettings getOutputFormatSetting(const Settings & settings, const Co
    format_settings.custom.row_before_delimiter = settings.format_custom_row_before_delimiter;
    format_settings.custom.row_after_delimiter = settings.format_custom_row_after_delimiter;
    format_settings.custom.row_between_delimiter = settings.format_custom_row_between_delimiter;
    format_settings.avro.output_codec = settings.output_format_avro_codec;
    format_settings.avro.output_sync_interval = settings.output_format_avro_sync_interval;

    return format_settings;
}
@ -325,6 +328,8 @@ FormatFactory::FormatFactory()
    registerInputFormatProcessorORC(*this);
    registerInputFormatProcessorParquet(*this);
    registerOutputFormatProcessorParquet(*this);
    registerInputFormatProcessorAvro(*this);
    registerOutputFormatProcessorAvro(*this);
    registerInputFormatProcessorTemplate(*this);
    registerOutputFormatProcessorTemplate(*this);


@ -166,6 +166,8 @@ void registerInputFormatProcessorORC(FormatFactory & factory);
void registerOutputFormatProcessorParquet(FormatFactory & factory);
void registerInputFormatProcessorProtobuf(FormatFactory & factory);
void registerOutputFormatProcessorProtobuf(FormatFactory & factory);
void registerInputFormatProcessorAvro(FormatFactory & factory);
void registerOutputFormatProcessorAvro(FormatFactory & factory);
void registerInputFormatProcessorTemplate(FormatFactory & factory);
void registerOutputFormatProcessorTemplate(FormatFactory & factory);


@ -110,6 +110,16 @@ struct FormatSettings
    };

    Custom custom;

    struct Avro
    {
        String schema_registry_url;
        String output_codec;
        UInt64 output_sync_interval = 16 * 1024;
    };

    Avro avro;

};

}
@ -2,6 +2,7 @@

// .h autogenerated by cmake!

#cmakedefine01 USE_AVRO
#cmakedefine01 USE_CAPNP
#cmakedefine01 USE_SNAPPY
#cmakedefine01 USE_PARQUET
@ -746,6 +746,23 @@ inline void readBinary(Decimal128 & x, ReadBuffer & buf) { readPODBinary(x, buf)
inline void readBinary(LocalDate & x, ReadBuffer & buf) { readPODBinary(x, buf); }


template <typename T>
inline std::enable_if_t<is_arithmetic_v<T> && (sizeof(T) <= 8), void>
readBinaryBigEndian(T & x, ReadBuffer & buf)    /// Assuming a little-endian architecture.
{
    readPODBinary(x, buf);

    if constexpr (sizeof(x) == 1)
        return;
    else if constexpr (sizeof(x) == 2)
        x = __builtin_bswap16(x);
    else if constexpr (sizeof(x) == 4)
        x = __builtin_bswap32(x);
    else if constexpr (sizeof(x) == 8)
        x = __builtin_bswap64(x);
}

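A standalone illustration of what readBinaryBigEndian does on a little-endian host: a raw read followed by the matching byte swap. The wire bytes below are invented; the builtin is the same one the template dispatches to:

#include <cassert>
#include <cstdint>
#include <cstring>

int main()
{
    const unsigned char wire[4] = {0x00, 0x00, 0x01, 0x02};  /// big-endian 258
    uint32_t x;
    std::memcpy(&x, wire, sizeof(x));   /// raw little-endian interpretation
    x = __builtin_bswap32(x);           /// swap to recover the big-endian value
    assert(x == 258);
}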
/// Generic methods to read value in text tab-separated format.
template <typename T>
inline std::enable_if_t<is_integral_v<T>, void>
@ -955,28 +972,78 @@ inline T parse(const char * data, size_t size)
    return res;
}

/// Read something from text format, but expect complete parse of given text
/// For example: 723145 -- ok, 213MB -- not ok
template <typename T>
inline T completeParse(const char * data, size_t size)
inline std::enable_if_t<!is_integral_v<T>, void>
readTextWithSizeSuffix(T & x, ReadBuffer & buf) { readText(x, buf); }

template <typename T>
inline std::enable_if_t<is_integral_v<T>, void>
readTextWithSizeSuffix(T & x, ReadBuffer & buf)
{
    readIntText(x, buf);
    if (buf.eof())
        return;

    /// Updates x depending on the suffix
    auto finish = [&buf, &x] (UInt64 base, int power_of_two) mutable
    {
        ++buf.position();
        if (buf.eof())
        {
            x *= base;    /// For decimal suffixes, such as k, M, G, etc.
        }
        else if (*buf.position() == 'i')
        {
            x = (x << power_of_two);    /// For binary suffixes, such as ki, Mi, Gi, etc.
            ++buf.position();
        }
        return;
    };

    switch (*buf.position())
    {
        case 'k': [[fallthrough]];
        case 'K':
            finish(1000, 10);
            break;
        case 'M':
            finish(1000000, 20);
            break;
        case 'G':
            finish(1000000000, 30);
            break;
        case 'T':
            finish(1000000000000ULL, 40);
            break;
        default:
            return;
    }
    return;
}

/// Read something from text format and try to parse the suffix.
/// If the suffix is not valid, gives an error.
/// For example: 723145 -- ok, 213MB -- not ok, but 213Mi -- ok
template <typename T>
inline T parseWithSizeSuffix(const char * data, size_t size)
{
    T res;
    ReadBufferFromMemory buf(data, size);
    readText(res, buf);
    readTextWithSizeSuffix(res, buf);
    assertEOF(buf);
    return res;
}

template <typename T>
inline T completeParse(const String & s)
inline T parseWithSizeSuffix(const String & s)
{
    return completeParse<T>(s.data(), s.size());
    return parseWithSizeSuffix<T>(s.data(), s.size());
}

template <typename T>
inline T completeParse(const char * data)
inline T parseWithSizeSuffix(const char * data)
{
    return completeParse<T>(data, strlen(data));
    return parseWithSizeSuffix<T>(data, strlen(data));
}

template <typename T>
@ -330,7 +330,7 @@ void ActionsMatcher::visit(const ASTFunction & node, const ASTPtr & ast, Data &
    /// Let's find the type of the first argument (then getActionsImpl will be called again and will not affect anything).
    visit(node.arguments->children.at(0), data);

    if ((prepared_set = makeSet(node, data, data.no_subqueries)))
    if (!data.no_makeset && (prepared_set = makeSet(node, data, data.no_subqueries)))
    {
        /// Transform tuple or subquery into a set.
    }

@ -72,6 +72,7 @@ public:
    PreparedSets & prepared_sets;
    SubqueriesForSets & subqueries_for_sets;
    bool no_subqueries;
    bool no_makeset;
    bool only_consts;
    bool no_storage_or_local;
    size_t visit_depth;
@ -80,7 +81,7 @@ public:
    Data(const Context & context_, SizeLimits set_size_limit_, size_t subquery_depth_,
        const NamesAndTypesList & source_columns_, const ExpressionActionsPtr & actions,
        PreparedSets & prepared_sets_, SubqueriesForSets & subqueries_for_sets_,
        bool no_subqueries_, bool only_consts_, bool no_storage_or_local_)
        bool no_subqueries_, bool no_makeset_, bool only_consts_, bool no_storage_or_local_)
        : context(context_),
        set_size_limit(set_size_limit_),
        subquery_depth(subquery_depth_),
@ -88,6 +89,7 @@ public:
        prepared_sets(prepared_sets_),
        subqueries_for_sets(subqueries_for_sets_),
        no_subqueries(no_subqueries_),
        no_makeset(no_makeset_),
        only_consts(only_consts_),
        no_storage_or_local(no_storage_or_local_),
        visit_depth(0),
@@ -28,6 +28,7 @@
#include <common/config_common.h>
#include <AggregateFunctions/AggregateFunctionArray.h>
#include <AggregateFunctions/AggregateFunctionState.h>
#include <Disks/DiskSpaceMonitor.h>


namespace ProfileEvents
@@ -681,22 +682,25 @@ bool Aggregator::executeOnBlock(Columns columns, UInt64 num_rows, AggregatedData
    && current_memory_usage > static_cast<Int64>(params.max_bytes_before_external_group_by)
    && worth_convert_to_two_level)
{
    if (!enoughSpaceInDirectory(params.tmp_path, current_memory_usage + params.min_free_disk_space))
        throw Exception("Not enough space for external aggregation in " + params.tmp_path, ErrorCodes::NOT_ENOUGH_SPACE);
    size_t size = current_memory_usage + params.min_free_disk_space;
    auto reservation = params.tmp_volume->reserve(size);
    if (!reservation)
        throw Exception("Not enough space for external aggregation in temporary storage", ErrorCodes::NOT_ENOUGH_SPACE);

    writeToTemporaryFile(result);
    const std::string tmp_path(reservation->getDisk()->getPath());
    writeToTemporaryFile(result, tmp_path);
}

return true;
}


void Aggregator::writeToTemporaryFile(AggregatedDataVariants & data_variants)
void Aggregator::writeToTemporaryFile(AggregatedDataVariants & data_variants, const String & tmp_path)
{
    Stopwatch watch;
    size_t rows = data_variants.size();

    auto file = createTemporaryFile(params.tmp_path);
    auto file = createTemporaryFile(tmp_path);
    const std::string & path = file->path();
    WriteBufferFromFile file_buf(path);
    CompressedWriteBuffer compressed_buf(file_buf);
@@ -753,6 +757,10 @@ void Aggregator::writeToTemporaryFile(AggregatedDataVariants & data_variants)
        << (uncompressed_bytes / elapsed_seconds / 1048576.0) << " MiB/sec. uncompressed, "
        << (compressed_bytes / elapsed_seconds / 1048576.0) << " MiB/sec. compressed)");
}
void Aggregator::writeToTemporaryFile(AggregatedDataVariants & data_variants)
{
    return writeToTemporaryFile(data_variants, params.tmp_volume->getNextDisk()->getPath());
}
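The spill path now reserves space on a temporary volume up front instead of merely checking free space in one directory. A minimal standalone sketch of that reserve-then-spill pattern, with simplified stand-ins for ClickHouse's Disk/Volume/Reservation classes (all names here are illustrative assumptions, not the real API):

#include <optional>
#include <stdexcept>
#include <string>
#include <vector>

/// Simplified stand-ins, not the ClickHouse classes.
struct Disk { std::string path; size_t free_bytes; };
struct Reservation { Disk * disk; size_t bytes; };

struct Volume
{
    std::vector<Disk> disks;
    size_t next = 0;

    /// Round-robin over disks; succeed on the first one with enough space.
    std::optional<Reservation> reserve(size_t bytes)
    {
        for (size_t i = 0; i < disks.size(); ++i)
        {
            Disk & d = disks[(next + i) % disks.size()];
            if (d.free_bytes >= bytes)
            {
                d.free_bytes -= bytes;
                next = (next + i + 1) % disks.size();
                return Reservation{&d, bytes};
            }
        }
        return std::nullopt;
    }
};

void spill(Volume & tmp_volume, size_t memory_usage, size_t min_free_disk_space)
{
    auto reservation = tmp_volume.reserve(memory_usage + min_free_disk_space);
    if (!reservation)
        throw std::runtime_error("Not enough space for external aggregation in temporary storage");
    /// ... writeToTemporaryFile(result, reservation->disk->path) would go here.
}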


template <typename Method>

@@ -46,6 +46,8 @@ namespace ErrorCodes

class IBlockOutputStream;

class Volume;
using VolumePtr = std::shared_ptr<Volume>;

/** Different data structures that can be used for aggregation
  * For efficiency, the aggregation data itself is put into the pool.
@@ -860,7 +862,7 @@ public:
    /// Return empty result when aggregating without keys on empty set.
    bool empty_result_for_aggregation_by_empty_set;

    const std::string tmp_path;
    VolumePtr tmp_volume;

    /// Settings is used to determine cache size. No threads are created.
    size_t max_threads;
@@ -873,7 +875,7 @@ public:
    size_t group_by_two_level_threshold_, size_t group_by_two_level_threshold_bytes_,
    size_t max_bytes_before_external_group_by_,
    bool empty_result_for_aggregation_by_empty_set_,
    const std::string & tmp_path_, size_t max_threads_,
    VolumePtr tmp_volume_, size_t max_threads_,
    size_t min_free_disk_space_)
    : src_header(src_header_),
    keys(keys_), aggregates(aggregates_), keys_size(keys.size()), aggregates_size(aggregates.size()),
@@ -881,7 +883,7 @@ public:
    group_by_two_level_threshold(group_by_two_level_threshold_), group_by_two_level_threshold_bytes(group_by_two_level_threshold_bytes_),
    max_bytes_before_external_group_by(max_bytes_before_external_group_by_),
    empty_result_for_aggregation_by_empty_set(empty_result_for_aggregation_by_empty_set_),
    tmp_path(tmp_path_), max_threads(max_threads_),
    tmp_volume(tmp_volume_), max_threads(max_threads_),
    min_free_disk_space(min_free_disk_space_)
    {
    }
@@ -889,7 +891,7 @@ public:
    /// Only parameters that matter during merge.
    Params(const Block & intermediate_header_,
        const ColumnNumbers & keys_, const AggregateDescriptions & aggregates_, bool overflow_row_, size_t max_threads_)
        : Params(Block(), keys_, aggregates_, overflow_row_, 0, OverflowMode::THROW, 0, 0, 0, false, "", max_threads_, 0)
        : Params(Block(), keys_, aggregates_, overflow_row_, 0, OverflowMode::THROW, 0, 0, 0, false, nullptr, max_threads_, 0)
    {
        intermediate_header = intermediate_header_;
    }
@@ -955,6 +957,7 @@ public:
    void setCancellationHook(const CancellationHook cancellation_hook);

    /// For external aggregation.
    void writeToTemporaryFile(AggregatedDataVariants & data_variants, const String & tmp_path);
    void writeToTemporaryFile(AggregatedDataVariants & data_variants);

    bool hasTemporaryFiles() const { return !temporary_files.empty(); }
@@ -19,14 +19,15 @@ namespace ErrorCodes
    extern const int PARAMETER_OUT_OF_BOUND;
}

AnalyzedJoin::AnalyzedJoin(const Settings & settings, const String & tmp_path_)
AnalyzedJoin::AnalyzedJoin(const Settings & settings, VolumePtr tmp_volume_)
    : size_limits(SizeLimits{settings.max_rows_in_join, settings.max_bytes_in_join, settings.join_overflow_mode})
    , default_max_bytes(settings.default_max_bytes_in_join)
    , join_use_nulls(settings.join_use_nulls)
    , max_joined_block_rows(settings.max_joined_block_size_rows)
    , partial_merge_join(settings.partial_merge_join)
    , partial_merge_join_optimizations(settings.partial_merge_join_optimizations)
    , partial_merge_join_rows_in_right_blocks(settings.partial_merge_join_rows_in_right_blocks)
    , tmp_path(tmp_path_)
    , tmp_volume(tmp_volume_)
{}

void AnalyzedJoin::addUsingKey(const ASTPtr & ast)

@@ -21,6 +21,9 @@ class Block;

struct Settings;

class Volume;
using VolumePtr = std::shared_ptr<Volume>;

class AnalyzedJoin
{
    /** Query of the form `SELECT expr(x) AS k FROM t1 ANY LEFT JOIN (SELECT expr(x) AS k FROM t2) USING k`
@@ -40,6 +43,7 @@ class AnalyzedJoin
    const SizeLimits size_limits;
    const size_t default_max_bytes;
    const bool join_use_nulls;
    const size_t max_joined_block_rows = 0;
    const bool partial_merge_join = false;
    const bool partial_merge_join_optimizations = false;
    const size_t partial_merge_join_rows_in_right_blocks = 0;
@@ -61,10 +65,10 @@ class AnalyzedJoin
    /// Original name -> name. Only renamed columns.
    std::unordered_map<String, String> renames;

    String tmp_path;
    VolumePtr tmp_volume;

public:
    AnalyzedJoin(const Settings &, const String & tmp_path);
    AnalyzedJoin(const Settings &, VolumePtr tmp_volume);

    /// for StorageJoin
    AnalyzedJoin(SizeLimits limits, bool use_nulls, ASTTableJoin::Kind kind, ASTTableJoin::Strictness strictness,
@@ -81,11 +85,12 @@ public:
    ASTTableJoin::Kind kind() const { return table_join.kind; }
    ASTTableJoin::Strictness strictness() const { return table_join.strictness; }
    const SizeLimits & sizeLimits() const { return size_limits; }
    const String & getTemporaryPath() const { return tmp_path; }
    VolumePtr getTemporaryVolume() { return tmp_volume; }

    bool forceNullableRight() const { return join_use_nulls && isLeftOrFull(table_join.kind); }
    bool forceNullableLeft() const { return join_use_nulls && isRightOrFull(table_join.kind); }
    size_t defaultMaxBytes() const { return default_max_bytes; }
    size_t maxJoinedBlockRows() const { return max_joined_block_rows; }
    size_t maxRowsInRightBlock() const { return partial_merge_join_rows_in_right_blocks; }
    bool enablePartialMergeJoinOptimizations() const { return partial_merge_join_optimizations; }

@@ -22,6 +22,7 @@
#include <Storages/MergeTree/MergeList.h>
#include <Storages/MergeTree/MergeTreeSettings.h>
#include <Storages/CompressionCodecSelector.h>
#include <Disks/DiskLocal.h>
#include <TableFunctions/TableFunctionFactory.h>
#include <Interpreters/ActionLocksManager.h>
#include <Core/Settings.h>
@@ -95,6 +96,7 @@ namespace ErrorCodes
    extern const int SCALAR_ALREADY_EXISTS;
    extern const int UNKNOWN_SCALAR;
    extern const int NOT_ENOUGH_PRIVILEGES;
    extern const int UNKNOWN_POLICY;
}


@@ -123,12 +125,14 @@ struct ContextShared
    String interserver_scheme; /// http or https

    String path; /// Path to the data directory, with a slash at the end.
    String tmp_path; /// The path to the temporary files that occur when processing the request.
    String flags_path; /// Path to the directory with some control flags for server maintenance.
    String user_files_path; /// Path to the directory with user provided files, usable by 'file' table function.
    String dictionaries_lib_path; /// Path to the directory with user provided binaries and libraries for external dictionaries.
    ConfigurationPtr config; /// Global configuration settings.

    String tmp_path; /// Path to the temporary files that occur when processing the request.
    mutable VolumePtr tmp_volume; /// Volume for the temporary files that occur when processing the request.

    Databases databases; /// List of databases and tables in them.
    mutable std::optional<EmbeddedDictionaries> embedded_dictionaries; /// Metrica's dictionaries. Have lazy initialization.
    mutable std::optional<ExternalDictionariesLoader> external_dictionaries_loader;
@@ -151,9 +155,9 @@ struct ContextShared
    std::unique_ptr<DDLWorker> ddl_worker; /// Process ddl commands from zk.
    /// Rules for selecting the compression settings, depending on the size of the part.
    mutable std::unique_ptr<CompressionCodecSelector> compression_codec_selector;
    /// Storage disk chooser
    /// Storage disk chooser for MergeTree engines
    mutable std::unique_ptr<DiskSelector> merge_tree_disk_selector;
    /// Storage policy chooser
    /// Storage policy chooser for MergeTree engines
    mutable std::unique_ptr<StoragePolicySelector> merge_tree_storage_policy_selector;

    std::optional<MergeTreeSettings> merge_tree_settings; /// Settings of MergeTree* engines.
@@ -527,12 +531,6 @@ String Context::getPath() const
    return shared->path;
}

String Context::getTemporaryPath() const
{
    auto lock = getLock();
    return shared->tmp_path;
}

String Context::getFlagsPath() const
{
    auto lock = getLock();
@@ -551,13 +549,19 @@ String Context::getDictionariesLibPath() const
    return shared->dictionaries_lib_path;
}

VolumePtr Context::getTemporaryVolume() const
{
    auto lock = getLock();
    return shared->tmp_volume;
}

void Context::setPath(const String & path)
{
    auto lock = getLock();

    shared->path = path;

    if (shared->tmp_path.empty())
    if (shared->tmp_path.empty() && !shared->tmp_volume)
        shared->tmp_path = shared->path + "tmp/";

    if (shared->flags_path.empty())
@@ -570,10 +574,31 @@ void Context::setPath(const String & path)
    shared->dictionaries_lib_path = shared->path + "dictionaries_lib/";
}

void Context::setTemporaryPath(const String & path)
VolumePtr Context::setTemporaryStorage(const String & path, const String & policy_name)
{
    auto lock = getLock();
    shared->tmp_path = path;

    if (policy_name.empty())
    {
        shared->tmp_path = path;
        if (!shared->tmp_path.ends_with('/'))
            shared->tmp_path += '/';

        auto disk = std::make_shared<DiskLocal>("_tmp_default", shared->tmp_path, 0);
        shared->tmp_volume = std::make_shared<Volume>("_tmp_default", std::vector<DiskPtr>{disk}, 0);
    }
    else
    {
        StoragePolicyPtr tmp_policy = getStoragePolicySelector()[policy_name];
        if (tmp_policy->getVolumes().size() != 1)
            throw Exception("Policy " + policy_name + " is used for temporary files; such a policy should have exactly one volume", ErrorCodes::NO_ELEMENTS_IN_CONFIG);
        shared->tmp_volume = tmp_policy->getVolume(0);
    }

    if (!shared->tmp_volume->disks.size())
        throw Exception("No disks in the volume for temporary files", ErrorCodes::NO_ELEMENTS_IN_CONFIG);

    return shared->tmp_volume;
}
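A hedged usage sketch of the new API above (the path and policy name are illustrative; in policy mode the named storage policy must contain exactly one volume, and either mode must end up with at least one disk):

/// Path mode: wraps the directory in a single-disk "_tmp_default" volume.
VolumePtr tmp_volume = context.setTemporaryStorage("/var/lib/clickhouse/tmp/");

/// Policy mode (hypothetical policy name "tmp_files_policy"): reuses the
/// policy's only volume; throws NO_ELEMENTS_IN_CONFIG otherwise.
VolumePtr tmp_volume2 = context.setTemporaryStorage("", "tmp_files_policy");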

void Context::setFlagsPath(const String & path)

@@ -91,6 +91,9 @@ class StoragePolicySelector;
class IOutputFormat;
using OutputFormatPtr = std::shared_ptr<IOutputFormat>;

class Volume;
using VolumePtr = std::shared_ptr<Volume>;

#if USE_EMBEDDED_COMPILER

class CompiledExpressionCache;
@@ -195,17 +198,19 @@ public:
    ~Context();

    String getPath() const;
    String getTemporaryPath() const;
    String getFlagsPath() const;
    String getUserFilesPath() const;
    String getDictionariesLibPath() const;

    VolumePtr getTemporaryVolume() const;

    void setPath(const String & path);
    void setTemporaryPath(const String & path);
    void setFlagsPath(const String & path);
    void setUserFilesPath(const String & path);
    void setDictionariesLibPath(const String & path);

    VolumePtr setTemporaryStorage(const String & path, const String & policy_name = "");

    using ConfigurationPtr = Poco::AutoPtr<Poco::Util::AbstractConfiguration>;

    /// Global application configuration settings.

@@ -346,7 +346,7 @@ void ExpressionAction::prepare(Block & sample_block, const Settings & settings,
}


void ExpressionAction::execute(Block & block, bool dry_run) const
void ExpressionAction::execute(Block & block, bool dry_run, ExtraBlockPtr & not_processed) const
{
    size_t input_rows_count = block.rows();

@@ -477,7 +477,7 @@ void ExpressionAction::execute(Block & block, bool dry_run) const

    case JOIN:
    {
        join->joinBlock(block);
        join->joinBlock(block, not_processed);
        break;
    }

@@ -762,6 +762,21 @@ void ExpressionActions::execute(Block & block, bool dry_run) const
    }
}

/// @warning A tricky method that allows continuing ONLY ONE action, because of the one-to-many ALL JOIN logic.
void ExpressionActions::execute(Block & block, ExtraBlockPtr & not_processed, size_t & start_action) const
{
    size_t i = start_action;
    start_action = 0;
    for (; i < actions.size(); ++i)
    {
        actions[i].execute(block, false, not_processed);
        checkLimits(block);

        if (not_processed)
            start_action = i;
    }
}
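A standalone sketch of that continuation contract, with toy Block/Action types (illustrative stand-ins, not ClickHouse's): the caller drains the leftover by re-entering execute(), and start_action remembers which action produced it.

#include <cstddef>
#include <functional>
#include <iostream>
#include <memory>
#include <vector>

struct Block { int rows = 0; };                       /// toy stand-in
using ExtraBlockPtr = std::shared_ptr<Block>;
using Action = std::function<void(Block &, ExtraBlockPtr &)>;

void execute(const std::vector<Action> & actions, Block & block,
             ExtraBlockPtr & not_processed, size_t & start_action)
{
    size_t i = start_action;
    start_action = 0;
    for (; i < actions.size(); ++i)
    {
        actions[i](block, not_processed);
        if (not_processed)
            start_action = i; /// resume from this action on the next call
    }
}

int main()
{
    /// A join-like action that can emit at most 3 rows per call.
    std::vector<Action> actions{[](Block & b, ExtraBlockPtr & rest)
    {
        if (b.rows > 3)
        {
            rest = std::make_shared<Block>(Block{b.rows - 3});
            b.rows = 3;
        }
    }};

    Block block{8};
    ExtraBlockPtr rest;
    size_t start = 0;
    while (true)
    {
        execute(actions, block, rest, start);
        std::cout << "emitted " << block.rows << " rows\n"; /// 3, 3, 2
        if (!rest)
            break;
        block = *rest;
        rest.reset();
    }
}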

bool ExpressionActions::hasTotalsInJoin() const
{
    for (const auto & action : actions)

@@ -138,8 +138,16 @@ private:
    friend class ExpressionActions;

    void prepare(Block & sample_block, const Settings & settings, NameSet & names_not_for_constant_folding);
    void execute(Block & block, bool dry_run) const;
    void executeOnTotals(Block & block) const;

    /// Executes the action on a block (modifies it). The block can be split in case of JOIN; then a not_processed block is created.
    void execute(Block & block, bool dry_run, ExtraBlockPtr & not_processed) const;

    void execute(Block & block, bool dry_run) const
    {
        ExtraBlockPtr extra;
        execute(block, dry_run, extra);
    }
};

@@ -221,6 +229,9 @@ public:
    /// Execute the expression on the block. The block must contain all the columns returned by getRequiredColumns.
    void execute(Block & block, bool dry_run = false) const;

    /// Execute the expression on the block with continuation.
    void execute(Block & block, ExtraBlockPtr & not_processed, size_t & start_action) const;

    /// Check if joined subquery has totals.
    bool hasTotalsInJoin() const;

@@ -122,7 +122,7 @@ void ExpressionAnalyzer::analyzeAggregation()
    ASTPtr array_join_expression_list = select_query->array_join_expression_list(is_array_join_left);
    if (array_join_expression_list)
    {
        getRootActions(array_join_expression_list, true, temp_actions);
        getRootActionsNoMakeSet(array_join_expression_list, true, temp_actions, false);
        addMultipleArrayJoinAction(temp_actions, is_array_join_left);

        array_join_columns.clear();
@@ -134,7 +134,7 @@ void ExpressionAnalyzer::analyzeAggregation()
    const ASTTablesInSelectQueryElement * join = select_query->join();
    if (join)
    {
        getRootActions(analyzedJoin().leftKeysList(), true, temp_actions);
        getRootActionsNoMakeSet(analyzedJoin().leftKeysList(), true, temp_actions, false);
        addJoinAction(temp_actions);
    }
}
@@ -155,7 +155,7 @@ void ExpressionAnalyzer::analyzeAggregation()
    for (ssize_t i = 0; i < ssize_t(group_asts.size()); ++i)
    {
        ssize_t size = group_asts.size();
        getRootActions(group_asts[i], true, temp_actions);
        getRootActionsNoMakeSet(group_asts[i], true, temp_actions, false);

        const auto & column_name = group_asts[i]->getColumnName();
        const auto & block = temp_actions->getSampleBlock();
@@ -340,7 +340,18 @@ void ExpressionAnalyzer::getRootActions(const ASTPtr & ast, bool no_subqueries,
    LogAST log;
    ActionsVisitor::Data visitor_data(context, settings.size_limits_for_set, subquery_depth,
        sourceColumns(), actions, prepared_sets, subqueries_for_sets,
        no_subqueries, only_consts, !isRemoteStorage());
        no_subqueries, false, only_consts, !isRemoteStorage());
    ActionsVisitor(visitor_data, log.stream()).visit(ast);
    visitor_data.updateActions(actions);
}


void ExpressionAnalyzer::getRootActionsNoMakeSet(const ASTPtr & ast, bool no_subqueries, ExpressionActionsPtr & actions, bool only_consts)
{
    LogAST log;
    ActionsVisitor::Data visitor_data(context, settings.size_limits_for_set, subquery_depth,
        sourceColumns(), actions, prepared_sets, subqueries_for_sets,
        no_subqueries, true, only_consts, !isRemoteStorage());
    ActionsVisitor(visitor_data, log.stream()).visit(ast);
    visitor_data.updateActions(actions);
}
@@ -134,6 +134,12 @@ protected:

    void getRootActions(const ASTPtr & ast, bool no_subqueries, ExpressionActionsPtr & actions, bool only_consts = false);

    /** Similar to getRootActions but do not make sets when analyzing IN functions. It's used in
      * analyzeAggregation which happens earlier than analyzing PREWHERE and WHERE. If we did, the
      * prepared sets would not be applicable for MergeTree index optimization.
      */
    void getRootActionsNoMakeSet(const ASTPtr & ast, bool no_subqueries, ExpressionActionsPtr & actions, bool only_consts = false);

    /** Add aggregation keys to aggregation_keys, aggregate functions to aggregate_descriptions,
      * Create a set of columns aggregated_columns resulting after the aggregation, if any,
      * or after all the actions that are normally performed before aggregation.

@@ -11,6 +11,7 @@ namespace DB
{

class Block;
struct ExtraBlock;

class IJoin
{
@@ -23,7 +24,7 @@ public:

    /// Join the block with data from left hand of JOIN to the right hand data (that was previously built by calls to addJoinedBlock).
    /// Could be called from different threads in parallel.
    virtual void joinBlock(Block & block) = 0;
    virtual void joinBlock(Block & block, std::shared_ptr<ExtraBlock> & not_processed) = 0;

    virtual bool hasTotals() const = 0;
    virtual void setTotals(const Block & block) = 0;
@@ -96,63 +96,95 @@ Block InterpreterInsertQuery::getSampleBlock(const ASTInsertQuery & query, const

BlockIO InterpreterInsertQuery::execute()
{
    const Settings & settings = context.getSettingsRef();

    const auto & query = query_ptr->as<ASTInsertQuery &>();
    checkAccess(query);

    BlockIO res;
    StoragePtr table = getTable(query);

    auto table_lock = table->lockStructureForShare(true, context.getInitialQueryId());

    /// We create a pipeline of several streams, into which we will write data.
    BlockOutputStreamPtr out;

    /// NOTE: we explicitly ignore bound materialized views when inserting into Kafka Storage.
    /// Otherwise we'll get duplicates when MV reads same rows again from Kafka.
    if (table->noPushingToViews() && !no_destination)
        out = table->write(query_ptr, context);
    else
        out = std::make_shared<PushingToViewsBlockOutputStream>(table, context, query_ptr, no_destination);

    /// Do not squash blocks if it is a sync INSERT into Distributed, since it leads to double buffering on the client and server side.
    /// Client-side buffering might cause excessive timeouts (especially in case of big blocks).
    if (!(context.getSettingsRef().insert_distributed_sync && table->isRemote()) && !no_squash)
    {
        out = std::make_shared<SquashingBlockOutputStream>(
            out, out->getHeader(), context.getSettingsRef().min_insert_block_size_rows, context.getSettingsRef().min_insert_block_size_bytes);
    }
    auto query_sample_block = getSampleBlock(query, table);

    /// Actually we don't know structure of input blocks from query/table,
    /// because some clients break insertion protocol (columns != header)
    out = std::make_shared<AddingDefaultBlockOutputStream>(
        out, query_sample_block, out->getHeader(), table->getColumns().getDefaults(), context);

    if (const auto & constraints = table->getConstraints(); !constraints.empty())
        out = std::make_shared<CheckConstraintsBlockOutputStream>(query.table,
            out, query_sample_block, table->getConstraints(), context);

    auto out_wrapper = std::make_shared<CountingBlockOutputStream>(out);
    out_wrapper->setProcessListElement(context.getProcessListElement());
    out = std::move(out_wrapper);

    BlockIO res;

    /// What type of query: INSERT or INSERT SELECT?
    BlockInputStreams in_streams;
    size_t out_streams_size = 1;
    if (query.select)
    {
        /// Passing 1 as subquery_depth will disable limiting size of intermediate result.
        InterpreterSelectWithUnionQuery interpreter_select{query.select, context, SelectQueryOptions(QueryProcessingStage::Complete, 1)};

        /// BlockIO may hold StoragePtrs to temporary tables
        res = interpreter_select.execute();
        res.out = nullptr;
        if (table->supportsParallelInsert() && settings.max_insert_threads > 0)
        {
            in_streams = interpreter_select.executeWithMultipleStreams(res.pipeline);
            out_streams_size = std::min(size_t(settings.max_insert_threads), in_streams.size());
        }
        else
        {
            res = interpreter_select.execute();
            in_streams.emplace_back(res.in);
            res.in = nullptr;
            res.out = nullptr;
        }
    }

    res.in = std::make_shared<ConvertingBlockInputStream>(context, res.in, out->getHeader(), ConvertingBlockInputStream::MatchColumnsMode::Position);
    res.in = std::make_shared<NullAndDoCopyBlockInputStream>(res.in, out);
    BlockOutputStreams out_streams;
    auto query_sample_block = getSampleBlock(query, table);

    for (size_t i = 0; i < out_streams_size; i++)
    {
        /// We create a pipeline of several streams, into which we will write data.
        BlockOutputStreamPtr out;

        /// NOTE: we explicitly ignore bound materialized views when inserting into Kafka Storage.
        /// Otherwise we'll get duplicates when MV reads same rows again from Kafka.
        if (table->noPushingToViews() && !no_destination)
            out = table->write(query_ptr, context);
        else
            out = std::make_shared<PushingToViewsBlockOutputStream>(table, context, query_ptr, no_destination);

        /// Do not squash blocks if it is a sync INSERT into Distributed, since it leads to double buffering on the client and server side.
        /// Client-side buffering might cause excessive timeouts (especially in case of big blocks).
        if (!(context.getSettingsRef().insert_distributed_sync && table->isRemote()) && !no_squash)
        {
            out = std::make_shared<SquashingBlockOutputStream>(
                out, out->getHeader(), context.getSettingsRef().min_insert_block_size_rows, context.getSettingsRef().min_insert_block_size_bytes);
        }

        /// Actually we don't know structure of input blocks from query/table,
        /// because some clients break insertion protocol (columns != header)
        out = std::make_shared<AddingDefaultBlockOutputStream>(
            out, query_sample_block, out->getHeader(), table->getColumns().getDefaults(), context);

        if (const auto & constraints = table->getConstraints(); !constraints.empty())
            out = std::make_shared<CheckConstraintsBlockOutputStream>(query.table,
                out, query_sample_block, table->getConstraints(), context);

        auto out_wrapper = std::make_shared<CountingBlockOutputStream>(out);
        out_wrapper->setProcessListElement(context.getProcessListElement());
        out = std::move(out_wrapper);
        out_streams.emplace_back(std::move(out));
    }

    /// What type of query: INSERT or INSERT SELECT?
    if (query.select)
    {
        for (auto & in_stream : in_streams)
        {
            in_stream = std::make_shared<ConvertingBlockInputStream>(
                context, in_stream, out_streams.at(0)->getHeader(), ConvertingBlockInputStream::MatchColumnsMode::Position);
        }

        Block in_header = in_streams.at(0)->getHeader();
        if (in_streams.size() > 1)
        {
            for (size_t i = 1; i < in_streams.size(); ++i)
                assertBlocksHaveEqualStructure(in_streams[i]->getHeader(), in_header, "INSERT SELECT");
        }

        res.in = std::make_shared<NullAndDoCopyBlockInputStream>(in_streams, out_streams);

        if (!allow_materialized)
        {
            Block in_header = res.in->getHeader();
            for (const auto & column : table->getColumns())
                if (column.default_desc.kind == ColumnDefaultKind::Materialized && in_header.has(column.name))
                    throw Exception("Cannot insert column " + column.name + ", because it is a MATERIALIZED column.", ErrorCodes::ILLEGAL_COLUMN);
@@ -160,12 +192,12 @@ BlockIO InterpreterInsertQuery::execute()
    }
    else if (query.data && !query.has_tail) /// can execute without additional data
    {
        // res.out = std::move(out_streams.at(0));
        res.in = std::make_shared<InputStreamFromASTInsertQuery>(query_ptr, nullptr, query_sample_block, context, nullptr);
        res.in = std::make_shared<NullAndDoCopyBlockInputStream>(res.in, out);
        res.in = std::make_shared<NullAndDoCopyBlockInputStream>(res.in, out_streams.at(0));
    }
    else
        res.out = std::move(out);

        res.out = std::move(out_streams.at(0));
    res.pipeline.addStorageHolder(table);

    return res;
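The fan-out rule above is simple: ask the SELECT for multiple streams only when the target table supports parallel insert, then build min(max_insert_threads, number of input streams) independent output chains. A trivial standalone illustration of the sizing (values are made up):

#include <algorithm>
#include <cstddef>
#include <iostream>

int main()
{
    size_t max_insert_threads = 4; /// assumed value of the setting
    size_t in_streams = 16;        /// streams returned by executeWithMultipleStreams()
    size_t out_streams_size = std::min(max_insert_threads, in_streams);
    std::cout << out_streams_size << " parallel insert chains\n"; /// prints 4
}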

@@ -26,7 +26,6 @@
#include <DataStreams/ConvertingBlockInputStream.h>
#include <DataStreams/ReverseBlockInputStream.h>
#include <DataStreams/FillingBlockInputStream.h>
#include <DataStreams/SquashingBlockInputStream.h>

#include <Parsers/ASTFunction.h>
#include <Parsers/ASTIdentifier.h>
@@ -75,6 +74,7 @@
#include <Processors/Sources/SourceFromInputStream.h>
#include <Processors/Transforms/FilterTransform.h>
#include <Processors/Transforms/ExpressionTransform.h>
#include <Processors/Transforms/InflatingExpressionTransform.h>
#include <Processors/Transforms/AggregatingTransform.h>
#include <Processors/Transforms/MergingAggregatedTransform.h>
#include <Processors/Transforms/MergingAggregatedMemoryEfficientTransform.h>
@@ -1104,7 +1104,12 @@ void InterpreterSelectQuery::executeImpl(TPipeline & pipeline, const BlockInputS
    pipeline.addSimpleTransform([&](const Block & header, QueryPipeline::StreamType type)
    {
        bool on_totals = type == QueryPipeline::StreamType::Totals;
        return std::make_shared<ExpressionTransform>(header, expressions.before_join, on_totals, default_totals);
        std::shared_ptr<IProcessor> ret;
        if (settings.partial_merge_join)
            ret = std::make_shared<InflatingExpressionTransform>(header, expressions.before_join, on_totals, default_totals);
        else
            ret = std::make_shared<ExpressionTransform>(header, expressions.before_join, on_totals, default_totals);
        return ret;
    });
}
else
@@ -1112,14 +1117,7 @@ void InterpreterSelectQuery::executeImpl(TPipeline & pipeline, const BlockInputS
    header_before_join = pipeline.firstStream()->getHeader();
    /// Applies to all sources except stream_with_non_joined_data.
    for (auto & stream : pipeline.streams)
        stream = std::make_shared<ExpressionBlockInputStream>(stream, expressions.before_join);

    if (isMergeJoin(expressions.before_join->getTableJoinAlgo()) && settings.partial_merge_join_optimizations)
    {
        if (size_t rows_in_block = settings.partial_merge_join_rows_in_left_blocks)
            for (auto & stream : pipeline.streams)
                stream = std::make_shared<SquashingBlockInputStream>(stream, rows_in_block, 0, true);
    }
        stream = std::make_shared<InflatingExpressionBlockInputStream>(stream, expressions.before_join);
}

if (JoinPtr join = expressions.before_join->getTableJoinAlgo())
@@ -1873,7 +1871,7 @@ void InterpreterSelectQuery::executeAggregation(Pipeline & pipeline, const Expre
    allow_to_use_two_level_group_by ? settings.group_by_two_level_threshold : SettingUInt64(0),
    allow_to_use_two_level_group_by ? settings.group_by_two_level_threshold_bytes : SettingUInt64(0),
    settings.max_bytes_before_external_group_by, settings.empty_result_for_aggregation_by_empty_set,
    context->getTemporaryPath(), settings.max_threads, settings.min_free_disk_space_for_temporary_data);
    context->getTemporaryVolume(), settings.max_threads, settings.min_free_disk_space_for_temporary_data);

    /// If there are several sources, then we perform parallel aggregation
    if (pipeline.streams.size() > 1)
@@ -1939,7 +1937,7 @@ void InterpreterSelectQuery::executeAggregation(QueryPipeline & pipeline, const
    allow_to_use_two_level_group_by ? settings.group_by_two_level_threshold : SettingUInt64(0),
    allow_to_use_two_level_group_by ? settings.group_by_two_level_threshold_bytes : SettingUInt64(0),
    settings.max_bytes_before_external_group_by, settings.empty_result_for_aggregation_by_empty_set,
    context->getTemporaryPath(), settings.max_threads, settings.min_free_disk_space_for_temporary_data);
    context->getTemporaryVolume(), settings.max_threads, settings.min_free_disk_space_for_temporary_data);

    auto transform_params = std::make_shared<AggregatingTransformParams>(params, final);

@@ -2165,7 +2163,7 @@ void InterpreterSelectQuery::executeRollupOrCube(Pipeline & pipeline, Modificato
    false, settings.max_rows_to_group_by, settings.group_by_overflow_mode,
    SettingUInt64(0), SettingUInt64(0),
    settings.max_bytes_before_external_group_by, settings.empty_result_for_aggregation_by_empty_set,
    context->getTemporaryPath(), settings.max_threads, settings.min_free_disk_space_for_temporary_data);
    context->getTemporaryVolume(), settings.max_threads, settings.min_free_disk_space_for_temporary_data);

    if (modificator == Modificator::ROLLUP)
        pipeline.firstStream() = std::make_shared<RollupBlockInputStream>(pipeline.firstStream(), params);
@@ -2194,7 +2192,7 @@ void InterpreterSelectQuery::executeRollupOrCube(QueryPipeline & pipeline, Modif
    false, settings.max_rows_to_group_by, settings.group_by_overflow_mode,
    SettingUInt64(0), SettingUInt64(0),
    settings.max_bytes_before_external_group_by, settings.empty_result_for_aggregation_by_empty_set,
    context->getTemporaryPath(), settings.max_threads, settings.min_free_disk_space_for_temporary_data);
    context->getTemporaryVolume(), settings.max_threads, settings.min_free_disk_space_for_temporary_data);

    auto transform_params = std::make_shared<AggregatingTransformParams>(params, true);

@@ -2278,7 +2276,7 @@ void InterpreterSelectQuery::executeOrder(Pipeline & pipeline, InputSortingInfoP
    sorting_stream, output_order_descr, settings.max_block_size, limit,
    settings.max_bytes_before_remerge_sort,
    settings.max_bytes_before_external_sort / pipeline.streams.size(),
    context->getTemporaryPath(), settings.min_free_disk_space_for_temporary_data);
    context->getTemporaryVolume(), settings.min_free_disk_space_for_temporary_data);

    stream = merging_stream;
});
@@ -2360,7 +2358,8 @@ void InterpreterSelectQuery::executeOrder(QueryPipeline & pipeline, InputSorting
    return std::make_shared<MergeSortingTransform>(
        header, output_order_descr, settings.max_block_size, limit,
        settings.max_bytes_before_remerge_sort / pipeline.getNumStreams(),
        settings.max_bytes_before_external_sort, context->getTemporaryPath(), settings.min_free_disk_space_for_temporary_data);
        settings.max_bytes_before_external_sort, context->getTemporaryVolume(),
        settings.min_free_disk_space_for_temporary_data);
});

/// If there are several streams, we merge them into one

@@ -1091,7 +1091,7 @@ void Join::joinGet(Block & block, const String & column_name) const
}


void Join::joinBlock(Block & block)
void Join::joinBlock(Block & block, ExtraBlockPtr &)
{
    std::shared_lock lock(data->rwlock);

@@ -158,7 +158,7 @@ public:
    /** Join data from the map (that was previously built by calls to addJoinedBlock) to the block with data from "left" table.
      * Could be called from different threads in parallel.
      */
    void joinBlock(Block & block) override;
    void joinBlock(Block & block, ExtraBlockPtr & not_processed) override;

    /// Infer the return type for joinGet function
    DataTypePtr joinGetReturnType(const String & column_name) const;

@@ -13,6 +13,7 @@
#include <DataStreams/OneBlockInputStream.h>
#include <DataStreams/TemporaryFileStream.h>
#include <DataStreams/ConcatBlockInputStream.h>
#include <Disks/DiskSpaceMonitor.h>

namespace DB
{
@@ -294,36 +295,56 @@ void joinEqualsAnyLeft(const Block & right_block, const Block & right_columns_to
    copyRightRange(right_block, right_columns_to_add, right_columns, range.right_start, range.left_length);
}

void joinEquals(const Block & left_block, const Block & right_block, const Block & right_columns_to_add,
    MutableColumns & left_columns, MutableColumns & right_columns, const Range & range, bool is_all)
template <bool is_all>
bool joinEquals(const Block & left_block, const Block & right_block, const Block & right_columns_to_add,
    MutableColumns & left_columns, MutableColumns & right_columns, Range & range, size_t max_rows [[maybe_unused]])
{
    size_t left_rows_to_add = range.left_length;
    size_t right_rows_to_add = is_all ? range.right_length : 1;
    bool one_more = true;

    size_t row_position = range.right_start;
    for (size_t right_row = 0; right_row < right_rows_to_add; ++right_row, ++row_position)
    if constexpr (is_all)
    {
        copyLeftRange(left_block, left_columns, range.left_start, left_rows_to_add);
        copyRightRange(right_block, right_columns_to_add, right_columns, row_position, left_rows_to_add);
        size_t range_rows = range.left_length * range.right_length;
        if (range_rows > max_rows)
        {
            /// We need progress. So we join at least one right row.
            range.right_length = max_rows / range.left_length;
            if (!range.right_length)
                range.right_length = 1;
            one_more = false;
        }

        size_t left_rows_to_add = range.left_length;
        size_t row_position = range.right_start;
        for (size_t right_row = 0; right_row < range.right_length; ++right_row, ++row_position)
        {
            copyLeftRange(left_block, left_columns, range.left_start, left_rows_to_add);
            copyRightRange(right_block, right_columns_to_add, right_columns, row_position, left_rows_to_add);
        }
    }
    else
    {
        size_t left_rows_to_add = range.left_length;
        copyLeftRange(left_block, left_columns, range.left_start, left_rows_to_add);
        copyRightRange(right_block, right_columns_to_add, right_columns, range.right_start, left_rows_to_add);
    }

    return one_more;
}
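The is_all branch above caps the size of one equal-range join by a row budget: if left_length * right_length would exceed the budget, only max_rows / left_length right rows (at least one, to guarantee progress) are joined now, and the function reports that the range is unfinished. A standalone sketch of that clamp, with simplified types (illustrative, not the MergeJoin internals):

#include <cassert>
#include <cstddef>
#include <limits>

struct Range { size_t left_length; size_t right_length; }; /// toy stand-in

/// 0 means "no limit", mirroring maxRangeRows() further down in the diff.
static size_t budget(size_t current_rows, size_t max_rows)
{
    if (!max_rows)
        return std::numeric_limits<size_t>::max();
    return current_rows >= max_rows ? 0 : max_rows - current_rows;
}

/// Returns false when the range was truncated and a continuation is needed.
static bool clampRange(Range & range, size_t max_rows)
{
    if (range.left_length * range.right_length <= max_rows)
        return true;
    range.right_length = max_rows / range.left_length;
    if (!range.right_length)
        range.right_length = 1; /// always make progress
    return false;
}

int main()
{
    Range r{10, 8};                             /// 80 candidate result rows
    bool done = clampRange(r, budget(70, 100)); /// only 30 rows of budget left
    assert(!done && r.right_length == 3);       /// 10 * 3 = 30 rows now, rest later
}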

void appendNulls(MutableColumns & right_columns, size_t rows_to_add)
{
    for (auto & column : right_columns)
        column->insertManyDefaults(rows_to_add);
}

template <bool copy_left>
void joinInequalsLeft(const Block & left_block, MutableColumns & left_columns, MutableColumns & right_columns,
    size_t start, size_t end, bool copy_left)
    size_t start, size_t end)
{
    if (end <= start)
        return;

    size_t rows_to_add = end - start;
    if (copy_left)
    if constexpr (copy_left)
        copyLeftRange(left_block, left_columns, start, rows_to_add);
    appendNulls(right_columns, rows_to_add);

    /// append nulls
    for (auto & column : right_columns)
        column->insertManyDefaults(rows_to_add);
}

Blocks blocksListToBlocks(const BlocksList & in_blocks)
@@ -386,6 +407,8 @@ void MiniLSM::insert(const BlocksList & blocks)
    if (blocks.empty())
        return;

    const std::string path(volume->getNextDisk()->getPath());

    SortedFiles sorted_blocks;
    if (blocks.size() > 1)
    {
@@ -414,6 +437,7 @@ void MiniLSM::merge(std::function<void(const Block &)> callback)
    BlockInputStreams inputs = makeSortedInputStreams(sorted_files, sample_block);
    MergingSortedBlockInputStream sorted_stream(inputs, sort_description, rows_in_block);

    const std::string path(volume->getNextDisk()->getPath());
    SortedFiles out;
    flushStreamToFiles(path, sample_block, sorted_stream, out, callback);

@@ -427,10 +451,11 @@ MergeJoin::MergeJoin(std::shared_ptr<AnalyzedJoin> table_join_, const Block & ri
    , size_limits(table_join->sizeLimits())
    , right_sample_block(right_sample_block_)
    , nullable_right_side(table_join->forceNullableRight())
    , is_all(table_join->strictness() == ASTTableJoin::Strictness::All)
    , is_all_join(table_join->strictness() == ASTTableJoin::Strictness::All)
    , is_inner(isInner(table_join->kind()))
    , is_left(isLeft(table_join->kind()))
    , skip_not_intersected(table_join->enablePartialMergeJoinOptimizations())
    , max_joined_block_rows(table_join->maxJoinedBlockRows())
    , max_rows_in_right_block(table_join->maxRowsInRightBlock())
{
    if (!isLeft(table_join->kind()) && !isInner(table_join->kind()))
@@ -463,7 +488,7 @@ MergeJoin::MergeJoin(std::shared_ptr<AnalyzedJoin> table_join_, const Block & ri
    makeSortAndMerge(table_join->keyNamesLeft(), left_sort_description, left_merge_description);
    makeSortAndMerge(table_join->keyNamesRight(), right_sort_description, right_merge_description);

    lsm = std::make_unique<MiniLSM>(table_join->getTemporaryPath(), right_sample_block, right_sort_description, max_rows_in_right_block);
    lsm = std::make_unique<MiniLSM>(table_join->getTemporaryVolume(), right_sample_block, right_sort_description, max_rows_in_right_block);
}

void MergeJoin::setTotals(const Block & totals_block)
@@ -567,7 +592,7 @@ bool MergeJoin::addJoinedBlock(const Block & src_block)
    return saveRightBlock(std::move(block));
}

void MergeJoin::joinBlock(Block & block)
void MergeJoin::joinBlock(Block & block, ExtraBlockPtr & not_processed)
{
    JoinCommon::checkTypesOfKeys(block, table_join->keyNamesLeft(), right_table_keys, table_join->keyNamesRight());
    materializeBlockInplace(block);
@@ -575,13 +600,23 @@ void MergeJoin::joinBlock(Block & block)

    sortBlock(block, left_sort_description);
    if (is_in_memory)
        joinSortedBlock<true>(block);
    {
        if (is_all_join)
            joinSortedBlock<true, true>(block, not_processed);
        else
            joinSortedBlock<true, false>(block, not_processed);
    }
    else
        joinSortedBlock<false>(block);
    {
        if (is_all_join)
            joinSortedBlock<false, true>(block, not_processed);
        else
            joinSortedBlock<false, false>(block, not_processed);
    }
}
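joinBlock() above folds two runtime flags into compile-time template parameters at a single call boundary, so the hot loops are compiled four times without branches inside. A minimal runnable sketch of the pattern (toy function, not the MergeJoin internals):

#include <iostream>

template <bool in_memory, bool is_all>
void joinSortedBlockImpl() /// illustrative stand-in
{
    std::cout << "in_memory=" << in_memory << " is_all=" << is_all << '\n';
}

void dispatch(bool in_memory, bool is_all)
{
    if (in_memory)
        is_all ? joinSortedBlockImpl<true, true>() : joinSortedBlockImpl<true, false>();
    else
        is_all ? joinSortedBlockImpl<false, true>() : joinSortedBlockImpl<false, false>();
}

int main()
{
    dispatch(true, false); /// prints: in_memory=1 is_all=0
}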

template <bool in_memory>
void MergeJoin::joinSortedBlock(Block & block)
template <bool in_memory, bool is_all>
void MergeJoin::joinSortedBlock(Block & block, ExtraBlockPtr & not_processed)
{
    std::shared_lock lock(rwlock);

@@ -590,11 +625,22 @@ void MergeJoin::joinSortedBlock(Block & block)
    MutableColumns right_columns = makeMutableColumns(right_columns_to_add, rows_to_reserve);
    MergeJoinCursor left_cursor(block, left_merge_description);
    size_t left_key_tail = 0;
    size_t skip_right = 0;
    size_t right_blocks_count = rightBlocksCount<in_memory>();

    size_t starting_right_block = 0;
    if (not_processed)
    {
        auto & continuation = static_cast<NotProcessed &>(*not_processed);
        left_cursor.nextN(continuation.left_position);
        skip_right = continuation.right_position;
        starting_right_block = continuation.right_block;
        not_processed.reset();
    }

    if (is_left)
    {
        for (size_t i = 0; i < right_blocks_count; ++i)
        for (size_t i = starting_right_block; i < right_blocks_count; ++i)
        {
            if (left_cursor.atEnd())
                break;
@@ -610,11 +656,16 @@ void MergeJoin::joinSortedBlock(Block & block)

            std::shared_ptr<Block> right_block = loadRightBlock<in_memory>(i);

            leftJoin(left_cursor, block, *right_block, left_columns, right_columns, left_key_tail);
            if (!leftJoin<is_all>(left_cursor, block, *right_block, left_columns, right_columns, left_key_tail, skip_right))
            {
                not_processed = extraBlock<is_all>(block, std::move(left_columns), std::move(right_columns),
                    left_cursor.position(), skip_right, i);
                return;
            }
        }

        left_cursor.nextN(left_key_tail);
        joinInequalsLeft(block, left_columns, right_columns, left_cursor.position(), left_cursor.end(), is_all);
        joinInequalsLeft<is_all>(block, left_columns, right_columns, left_cursor.position(), left_cursor.end());
        //left_cursor.nextN(left_cursor.end() - left_cursor.position());

        changeLeftColumns(block, std::move(left_columns));
@@ -622,7 +673,7 @@ void MergeJoin::joinSortedBlock(Block & block)
    }
    else if (is_inner)
    {
        for (size_t i = 0; i < right_blocks_count; ++i)
        for (size_t i = starting_right_block; i < right_blocks_count; ++i)
        {
            if (left_cursor.atEnd())
                break;
@@ -638,7 +689,12 @@ void MergeJoin::joinSortedBlock(Block & block)

            std::shared_ptr<Block> right_block = loadRightBlock<in_memory>(i);

            innerJoin(left_cursor, block, *right_block, left_columns, right_columns, left_key_tail);
            if (!innerJoin<is_all>(left_cursor, block, *right_block, left_columns, right_columns, left_key_tail, skip_right))
            {
                not_processed = extraBlock<is_all>(block, std::move(left_columns), std::move(right_columns),
                    left_cursor.position(), skip_right, i);
                return;
            }
        }

        left_cursor.nextN(left_key_tail);
@@ -647,12 +703,30 @@ void MergeJoin::joinSortedBlock(Block & block)
    }
}

void MergeJoin::leftJoin(MergeJoinCursor & left_cursor, const Block & left_block, const Block & right_block,
    MutableColumns & left_columns, MutableColumns & right_columns, size_t & left_key_tail)
static size_t maxRangeRows(size_t current_rows, size_t max_rows)
{
    if (!max_rows)
        return std::numeric_limits<size_t>::max();
    if (current_rows >= max_rows)
        return 0;
    return max_rows - current_rows;
}

template <bool is_all>
bool MergeJoin::leftJoin(MergeJoinCursor & left_cursor, const Block & left_block, const Block & right_block,
    MutableColumns & left_columns, MutableColumns & right_columns, size_t & left_key_tail,
    size_t & skip_right [[maybe_unused]])
{
    MergeJoinCursor right_cursor(right_block, right_merge_description);
    left_cursor.setCompareNullability(right_cursor);

    /// Set right cursor position in first continuation right block
    if constexpr (is_all)
    {
        right_cursor.nextN(skip_right);
        skip_right = 0;
    }

    while (!left_cursor.atEnd() && !right_cursor.atEnd())
    {
        /// A non-zero left_key_tail means there was an equal range for the last left key in the previous leftJoin() call.
@@ -662,56 +736,97 @@ void MergeJoin::leftJoin(MergeJoinCursor & left_cursor, const Block & left_block

        Range range = left_cursor.getNextEqualRange(right_cursor);

        joinInequalsLeft(left_block, left_columns, right_columns, left_unequal_position, range.left_start, is_all);
        joinInequalsLeft<is_all>(left_block, left_columns, right_columns, left_unequal_position, range.left_start);

        if (range.empty())
            break;

        if (is_all)
            joinEquals(left_block, right_block, right_columns_to_add, left_columns, right_columns, range, is_all);
        if constexpr (is_all)
        {
            size_t max_rows = maxRangeRows(left_columns[0]->size(), max_joined_block_rows);

            if (!joinEquals<true>(left_block, right_block, right_columns_to_add, left_columns, right_columns, range, max_rows))
            {
                right_cursor.nextN(range.right_length);
                skip_right = right_cursor.position();
                return false;
            }
        }
        else
            joinEqualsAnyLeft(right_block, right_columns_to_add, right_columns, range);

        right_cursor.nextN(range.right_length);

        /// Do not run over the last left keys for ALL JOIN (because of possible duplicates in the next right block)
        if (is_all && right_cursor.atEnd())
        if constexpr (is_all)
        {
            left_key_tail = range.left_length;
            break;
            if (right_cursor.atEnd())
            {
                left_key_tail = range.left_length;
                break;
            }
        }
        left_cursor.nextN(range.left_length);
    }

    return true;
}

void MergeJoin::innerJoin(MergeJoinCursor & left_cursor, const Block & left_block, const Block & right_block,
    MutableColumns & left_columns, MutableColumns & right_columns, size_t & left_key_tail)
template <bool is_all>
bool MergeJoin::innerJoin(MergeJoinCursor & left_cursor, const Block & left_block, const Block & right_block,
    MutableColumns & left_columns, MutableColumns & right_columns, size_t & left_key_tail,
    size_t & skip_right [[maybe_unused]])
{
    MergeJoinCursor right_cursor(right_block, right_merge_description);
    left_cursor.setCompareNullability(right_cursor);

    /// Set right cursor position in first continuation right block
    if constexpr (is_all)
    {
        right_cursor.nextN(skip_right);
        skip_right = 0;
    }

    while (!left_cursor.atEnd() && !right_cursor.atEnd())
    {
        Range range = left_cursor.getNextEqualRange(right_cursor);
        if (range.empty())
            break;

        joinEquals(left_block, right_block, right_columns_to_add, left_columns, right_columns, range, is_all);
        if constexpr (is_all)
        {
            size_t max_rows = maxRangeRows(left_columns[0]->size(), max_joined_block_rows);

            if (!joinEquals<true>(left_block, right_block, right_columns_to_add, left_columns, right_columns, range, max_rows))
            {
                right_cursor.nextN(range.right_length);
                skip_right = right_cursor.position();
                return false;
            }
        }
        else
            joinEquals<false>(left_block, right_block, right_columns_to_add, left_columns, right_columns, range, 0);

        right_cursor.nextN(range.right_length);

        /// Do not run over the last left keys for ALL JOIN (because of possible duplicates in the next right block)
        if (is_all && right_cursor.atEnd())
        if constexpr (is_all)
        {
            left_key_tail = range.left_length;
            break;
            if (right_cursor.atEnd())
            {
                left_key_tail = range.left_length;
                break;
            }
        }
        left_cursor.nextN(range.left_length);
    }

    return true;
}

void MergeJoin::changeLeftColumns(Block & block, MutableColumns && columns)
{
    if (is_left && !is_all)
    if (is_left && !is_all_join)
        return;
    block.setColumns(std::move(columns));
}
@@ -725,6 +840,27 @@ void MergeJoin::addRightColumns(Block & block, MutableColumns && right_columns)
    }
}

/// Split the block into processed (result) and not processed parts. The not-processed block will be joined next time.
template <bool is_all>
ExtraBlockPtr MergeJoin::extraBlock(Block & processed, MutableColumns && left_columns, MutableColumns && right_columns,
    size_t left_position [[maybe_unused]], size_t right_position [[maybe_unused]],
    size_t right_block_number [[maybe_unused]])
{
    ExtraBlockPtr not_processed;

    if constexpr (is_all)
    {
        not_processed = std::make_shared<NotProcessed>(
            NotProcessed{{processed.cloneEmpty()}, left_position, right_position, right_block_number});
        not_processed->block.swap(processed);

        changeLeftColumns(processed, std::move(left_columns));
        addRightColumns(processed, std::move(right_columns));
    }

    return not_processed;
}
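The NotProcessed continuation above records three coordinates: the position in the left block, the position inside the current right block, and the index of that right block. That is enough for joinSortedBlock() to resume a partially emitted result. A standalone sketch of the resume pattern over a list of right blocks (toy types, not the MergeJoin internals):

#include <cstddef>
#include <iostream>
#include <optional>
#include <vector>

struct NotProcessed { size_t right_position; size_t right_block; }; /// toy stand-in

/// Emit at most max_rows rows per call; return a resume point when cut short.
std::optional<NotProcessed> scanRightBlocks(
    const std::vector<std::vector<int>> & right_blocks, size_t max_rows,
    NotProcessed start = {0, 0})
{
    size_t emitted = 0;
    for (size_t b = start.right_block; b < right_blocks.size(); ++b)
    {
        size_t r = (b == start.right_block) ? start.right_position : 0;
        for (; r < right_blocks[b].size(); ++r)
        {
            if (emitted == max_rows)
                return NotProcessed{r, b}; /// caller re-enters from here
            ++emitted;                     /// "join" one row
        }
    }
    return std::nullopt; /// fully processed
}

int main()
{
    std::vector<std::vector<int>> blocks{{1, 2, 3}, {4, 5}};
    std::optional<NotProcessed> cont{NotProcessed{0, 0}};
    while (cont)
    {
        cont = scanRightBlocks(blocks, 2, *cont);
        std::cout << "emitted a chunk of at most 2 rows\n"; /// printed 3 times
    }
}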
|
||||
template <bool in_memory>
|
||||
size_t MergeJoin::rightBlocksCount()
|
||||
{
|
||||
|
@ -17,20 +17,23 @@ class AnalyzedJoin;
|
||||
class MergeJoinCursor;
|
||||
struct MergeJoinEqualRange;
|
||||
|
||||
class Volume;
|
||||
using VolumePtr = std::shared_ptr<Volume>;
|
||||
|
||||
struct MiniLSM
|
||||
{
|
||||
using SortedFiles = std::vector<std::unique_ptr<TemporaryFile>>;
|
||||
|
||||
const String & path;
|
||||
VolumePtr volume;
|
||||
const Block & sample_block;
|
||||
const SortDescription & sort_description;
|
||||
const size_t rows_in_block;
|
||||
const size_t max_size;
|
||||
std::vector<SortedFiles> sorted_files;
|
||||
|
||||
MiniLSM(const String & path_, const Block & sample_block_, const SortDescription & description,
|
||||
MiniLSM(VolumePtr volume_, const Block & sample_block_, const SortDescription & description,
|
||||
size_t rows_in_block_, size_t max_size_ = 16)
|
||||
: path(path_)
|
||||
: volume(volume_)
|
||||
, sample_block(sample_block_)
|
||||
, sort_description(description)
|
||||
, rows_in_block(rows_in_block_)
|
||||
@ -48,13 +51,20 @@ public:
|
||||
MergeJoin(std::shared_ptr<AnalyzedJoin> table_join_, const Block & right_sample_block);
|
||||
|
||||
bool addJoinedBlock(const Block & block) override;
|
||||
void joinBlock(Block &) override;
|
||||
void joinBlock(Block &, ExtraBlockPtr & not_processed) override;
|
||||
void joinTotals(Block &) const override;
|
||||
void setTotals(const Block &) override;
|
||||
bool hasTotals() const override { return totals; }
|
||||
size_t getTotalRowCount() const override { return right_blocks_row_count; }
|
||||
|
||||
private:
|
||||
struct NotProcessed : public ExtraBlock
|
||||
{
|
||||
size_t left_position;
|
||||
size_t right_position;
|
||||
size_t right_block;
|
||||
};
|
||||
|
||||
/// There're two size limits for right-hand table: max_rows_in_join, max_bytes_in_join.
|
||||
/// max_bytes is prefered. If it isn't set we approximate it as (max_rows * bytes/row).
|
||||
struct BlockByteWeight
|
||||
@ -85,28 +95,35 @@ private:
|
||||
size_t right_blocks_bytes = 0;
|
||||
bool is_in_memory = true;
|
||||
const bool nullable_right_side;
|
||||
const bool is_all;
|
||||
const bool is_all_join;
|
||||
const bool is_inner;
|
||||
const bool is_left;
|
||||
const bool skip_not_intersected;
|
||||
const size_t max_joined_block_rows;
|
||||
const size_t max_rows_in_right_block;
|
||||
|
||||
void changeLeftColumns(Block & block, MutableColumns && columns);
|
||||
void addRightColumns(Block & block, MutableColumns && columns);
|
||||
|
||||
template <bool is_all>
|
||||
ExtraBlockPtr extraBlock(Block & processed, MutableColumns && left_columns, MutableColumns && right_columns,
|
||||
size_t left_position, size_t right_position, size_t right_block_number);
|
||||
|
||||
void mergeRightBlocks();
|
||||
|
||||
template <bool in_memory>
|
||||
size_t rightBlocksCount();
|
||||
template <bool in_memory>
|
||||
void joinSortedBlock(Block & block);
|
||||
template <bool in_memory, bool is_all>
|
||||
void joinSortedBlock(Block & block, ExtraBlockPtr & not_processed);
|
||||
template <bool in_memory>
|
||||
std::shared_ptr<Block> loadRightBlock(size_t pos);
|
||||
|
||||
void leftJoin(MergeJoinCursor & left_cursor, const Block & left_block, const Block & right_block,
|
||||
MutableColumns & left_columns, MutableColumns & right_columns, size_t & left_key_tail);
|
||||
void innerJoin(MergeJoinCursor & left_cursor, const Block & left_block, const Block & right_block,
|
||||
MutableColumns & left_columns, MutableColumns & right_columns, size_t & left_key_tail);
|
||||
template <bool is_all>
|
||||
bool leftJoin(MergeJoinCursor & left_cursor, const Block & left_block, const Block & right_block,
|
||||
MutableColumns & left_columns, MutableColumns & right_columns, size_t & left_key_tail, size_t & skip_right);
|
||||
template <bool is_all>
|
||||
bool innerJoin(MergeJoinCursor & left_cursor, const Block & left_block, const Block & right_block,
|
||||
MutableColumns & left_columns, MutableColumns & right_columns, size_t & left_key_tail, size_t & skip_right);
|
||||
|
||||
bool saveRightBlock(Block && block);
|
||||
void flushRightBlocks();

@@ -816,7 +816,7 @@ SyntaxAnalyzerResultPtr SyntaxAnalyzer::analyze(
     SyntaxAnalyzerResult result;
     result.storage = storage;
     result.source_columns = source_columns_;
-    result.analyzed_join = std::make_shared<AnalyzedJoin>(settings, context.getTemporaryPath()); /// TODO: move to select_query logic
+    result.analyzed_join = std::make_shared<AnalyzedJoin>(settings, context.getTemporaryVolume()); /// TODO: move to select_query logic

     if (storage)
         collectSourceColumns(storage->getColumns(), result.source_columns, (select_query != nullptr));
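
[Editor's note on getTemporaryPath() -> getTemporaryVolume(), not part of the diff: a volume is a set of disks rather than one fixed directory, so code now asks the volume for a disk before building a path. The fragment below is a hedged sketch only; reserve() and getDisk()->getPath() follow the volume interface as I understand it at this revision and should be treated as assumptions.]

    // Hedged sketch: obtaining a location for a temporary file via a volume.
    auto tmp_volume = context.getTemporaryVolume();
    auto reservation = tmp_volume->reserve(estimated_bytes);       // pick a disk with enough free space (assumed API)
    String tmp_path = reservation->getDisk()->getPath() + "tmp/";  // instead of a single context.getTemporaryPath()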

@@ -12,6 +12,11 @@
 namespace DB
 {

+namespace ErrorCodes
+{
+    extern const int BAD_ARGUMENTS;
+}
+
 namespace
 {

@@ -31,8 +36,19 @@ std::shared_ptr<TSystemLog> createSystemLog(

     String database = config.getString(config_prefix + ".database", default_database_name);
     String table = config.getString(config_prefix + ".table", default_table_name);
-    String partition_by = config.getString(config_prefix + ".partition_by", "toYYYYMM(event_date)");
-    String engine = "ENGINE = MergeTree PARTITION BY (" + partition_by + ") ORDER BY (event_date, event_time)";
+
+    String engine;
+    if (config.has(config_prefix + ".engine"))
+    {
+        if (config.has(config_prefix + ".partition_by"))
+            throw Exception("If 'engine' is specified for system table, PARTITION BY parameters should be specified directly inside 'engine' and 'partition_by' setting doesn't make sense", ErrorCodes::BAD_ARGUMENTS);
+        engine = config.getString(config_prefix + ".engine");
+    }
+    else
+    {
+        String partition_by = config.getString(config_prefix + ".partition_by", "toYYYYMM(event_date)");
+        engine = "ENGINE = MergeTree PARTITION BY (" + partition_by + ") ORDER BY (event_date, event_time) SETTINGS index_granularity = 1024";
+    }

     size_t flush_interval_milliseconds = config.getUInt64(config_prefix + ".flush_interval_milliseconds", DEFAULT_SYSTEM_LOG_FLUSH_INTERVAL_MILLISECONDS);
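
[Editor's illustration, not part of the commit: the engine-selection branching above reduces to the following self-contained function, with std::map standing in for the Poco configuration object and the hypothetical name chooseEngine.]

    #include <map>
    #include <stdexcept>
    #include <string>

    // Mirror of createSystemLog()'s engine selection; illustrative only.
    std::string chooseEngine(const std::map<std::string, std::string> & config)
    {
        auto it = config.find("engine");
        if (it != config.end())
        {
            if (config.count("partition_by"))
                throw std::invalid_argument("'engine' and 'partition_by' are mutually exclusive");
            return it->second;  // used verbatim; PARTITION BY must be written inside it
        }

        auto pb = config.find("partition_by");
        std::string partition_by = (pb != config.end()) ? pb->second : "toYYYYMM(event_date)";
        return "ENGINE = MergeTree PARTITION BY (" + partition_by
            + ") ORDER BY (event_date, event_time) SETTINGS index_granularity = 1024";
    }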

@@ -1,4 +1,5 @@
 #include <Common/ThreadStatus.h>
+
 #include <Common/CurrentThread.h>
 #include <Common/ThreadProfileEvents.h>
 #include <Common/Exception.h>

@@ -7,11 +8,11 @@
 #include <Interpreters/QueryThreadLog.h>
 #include <Interpreters/ProcessList.h>

-#if defined(__linux__)
-#include <sys/time.h>
-#include <sys/resource.h>
+#if defined(OS_LINUX)
+#    include <Common/hasLinuxCapability.h>

-#include <Common/hasLinuxCapability.h>
+#    include <sys/time.h>
+#    include <sys/resource.h>
 #endif

@@ -37,10 +38,13 @@ void ThreadStatus::attachQueryContext(Context & query_context_)
     if (thread_group)
     {
         std::lock_guard lock(thread_group->mutex);
+
         thread_group->query_context = query_context;
         if (!thread_group->global_context)
             thread_group->global_context = global_context;
     }
+
+    initQueryProfiler();
 }

@@ -50,40 +54,11 @@ void CurrentThread::defaultThreadDeleter()
     current_thread->detachQuery(true, true);
 }

-void ThreadStatus::initializeQuery()
+void ThreadStatus::setupState(const ThreadGroupStatusPtr & thread_group_)
 {
     assertState({ThreadState::DetachedFromQuery}, __PRETTY_FUNCTION__);

-    thread_group = std::make_shared<ThreadGroupStatus>();
-
-    performance_counters.setParent(&thread_group->performance_counters);
-    memory_tracker.setParent(&thread_group->memory_tracker);
-    thread_group->memory_tracker.setDescription("(for query)");
-
-    thread_group->thread_numbers.emplace_back(thread_number);
-    thread_group->os_thread_ids.emplace_back(os_thread_id);
-    thread_group->master_thread_number = thread_number;
-    thread_group->master_thread_os_id = os_thread_id;
-
-    initPerformanceCounters();
-    thread_state = ThreadState::AttachedToQuery;
-}
-
-void ThreadStatus::attachQuery(const ThreadGroupStatusPtr & thread_group_, bool check_detached)
-{
-    if (thread_state == ThreadState::AttachedToQuery)
-    {
-        if (check_detached)
-            throw Exception("Can't attach query to the thread, it is already attached", ErrorCodes::LOGICAL_ERROR);
-        return;
-    }
-
-    assertState({ThreadState::DetachedFromQuery}, __PRETTY_FUNCTION__);
-
-    if (!thread_group_)
-        throw Exception("Attempt to attach to nullptr thread group", ErrorCodes::LOGICAL_ERROR);
-
-    /// Attach current thread to thread group and copy useful information from it
+    /// Attach or init current thread to thread group and copy useful information from it
     thread_group = thread_group_;

     performance_counters.setParent(&thread_group->performance_counters);

@@ -92,22 +67,23 @@ void ThreadStatus::attachQuery(const ThreadGroupStatusPtr & thread_group_, bool
     {
         std::lock_guard lock(thread_group->mutex);

-        /// NOTE: thread may be attached multiple times if it is reused from a thread pool.
-        thread_group->thread_numbers.emplace_back(thread_number);
-        thread_group->os_thread_ids.emplace_back(os_thread_id);
-
         logs_queue_ptr = thread_group->logs_queue_ptr;
         query_context = thread_group->query_context;

         if (!global_context)
             global_context = thread_group->global_context;
+
+        /// NOTE: A thread may be attached multiple times if it is reused from a thread pool.
+        thread_group->thread_numbers.emplace_back(thread_number);
+        thread_group->os_thread_ids.emplace_back(os_thread_id);
     }

     if (query_context)
     {
         query_id = query_context->getCurrentQueryId();
-        initQueryProfiler();

-#if defined(__linux__)
+#if defined(OS_LINUX)
         /// Set "nice" value if required.
         Int32 new_os_thread_priority = query_context->getSettingsRef().os_thread_priority;
         if (new_os_thread_priority && hasLinuxCapability(CAP_SYS_NICE))

@@ -123,11 +99,35 @@ void ThreadStatus::attachQuery(const ThreadGroupStatusPtr & thread_group_, bool
     }

     initPerformanceCounters();
+    initQueryProfiler();
+
     thread_state = ThreadState::AttachedToQuery;
 }

+void ThreadStatus::initializeQuery()
+{
+    setupState(std::make_shared<ThreadGroupStatus>());
+
+    /// No need to lock on mutex here
+    thread_group->memory_tracker.setDescription("(for query)");
+    thread_group->master_thread_number = thread_number;
+    thread_group->master_thread_os_id = os_thread_id;
+}
+
+void ThreadStatus::attachQuery(const ThreadGroupStatusPtr & thread_group_, bool check_detached)
+{
+    if (thread_state == ThreadState::AttachedToQuery)
+    {
+        if (check_detached)
+            throw Exception("Can't attach query to the thread, it is already attached", ErrorCodes::LOGICAL_ERROR);
+        return;
+    }
+
+    if (!thread_group_)
+        throw Exception("Attempt to attach to nullptr thread group", ErrorCodes::LOGICAL_ERROR);
+
+    setupState(thread_group_);
+}
+
 void ThreadStatus::finalizePerformanceCounters()
 {
     if (performance_counters_finalized)
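
[Editor's note on the refactoring above, not part of the diff: initializeQuery() is now a thin wrapper that creates a fresh ThreadGroupStatus and delegates to setupState(), while attachQuery() only validates its argument and delegates to the same setupState(). A hedged C++ sketch of the two call paths; the CurrentThread wrapper names are my assumption, everything else follows the diff.]

    // Master thread of a query: creates the thread group itself.
    DB::CurrentThread::initializeQuery();       // -> ThreadStatus::initializeQuery()
                                                //    -> setupState(std::make_shared<ThreadGroupStatus>())

    // Worker thread from a pool: joins an existing group.
    DB::CurrentThread::attachTo(thread_group);  // -> ThreadStatus::attachQuery(thread_group, true)
                                                //    -> setupState(thread_group)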

@@ -160,15 +160,23 @@ void ThreadStatus::initQueryProfiler()

     const auto & settings = query_context->getSettingsRef();

-    if (settings.query_profiler_real_time_period_ns > 0)
-        query_profiler_real = std::make_unique<QueryProfilerReal>(
-            /* thread_id */ os_thread_id,
-            /* period */ static_cast<UInt32>(settings.query_profiler_real_time_period_ns));
+    try
+    {
+        if (settings.query_profiler_real_time_period_ns > 0)
+            query_profiler_real = std::make_unique<QueryProfilerReal>(
+                /* thread_id */ os_thread_id,
+                /* period */ static_cast<UInt32>(settings.query_profiler_real_time_period_ns));

-    if (settings.query_profiler_cpu_time_period_ns > 0)
-        query_profiler_cpu = std::make_unique<QueryProfilerCpu>(
-            /* thread_id */ os_thread_id,
-            /* period */ static_cast<UInt32>(settings.query_profiler_cpu_time_period_ns));
+        if (settings.query_profiler_cpu_time_period_ns > 0)
+            query_profiler_cpu = std::make_unique<QueryProfilerCpu>(
+                /* thread_id */ os_thread_id,
+                /* period */ static_cast<UInt32>(settings.query_profiler_cpu_time_period_ns));
+    }
+    catch (...)
+    {
+        /// QueryProfiler is optional.
+        tryLogCurrentException("ThreadStatus", "Cannot initialize QueryProfiler");
+    }
 }

 void ThreadStatus::finalizeQueryProfiler()
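
[Editor's note, not part of the diff: the try/catch above makes the profiler best-effort; failing to create the timers (e.g. timer_create refused in a restricted environment) must not fail the query. A generic, self-contained C++ sketch of the same pattern, with stand-in names:]

    #include <iostream>
    #include <memory>

    struct Profiler { Profiler() { /* may throw, e.g. EPERM from timer_create */ } };

    // Optional subsystem: log and continue instead of propagating the exception.
    std::unique_ptr<Profiler> tryMakeProfiler() noexcept
    {
        try
        {
            return std::make_unique<Profiler>();
        }
        catch (...)
        {
            std::cerr << "Cannot initialize QueryProfiler\n";  // stands in for tryLogCurrentException()
            return nullptr;
        }
    }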

@@ -575,14 +575,17 @@ BlockIO executeQuery(
     BlockIO streams;
     std::tie(ast, streams) = executeQueryImpl(query.data(), query.data() + query.size(), context,
         internal, stage, !may_have_embedded_data, nullptr, allow_processors);
-    if (streams.in)
+
+    if (const auto * ast_query_with_output = dynamic_cast<const ASTQueryWithOutput *>(ast.get()))
     {
-        const auto * ast_query_with_output = dynamic_cast<const ASTQueryWithOutput *>(ast.get());
-        String format_name = ast_query_with_output && (ast_query_with_output->format != nullptr)
-            ? getIdentifierName(ast_query_with_output->format) : context.getDefaultFormat();
+        String format_name = ast_query_with_output->format
+            ? getIdentifierName(ast_query_with_output->format)
+            : context.getDefaultFormat();
+
+        if (format_name == "Null")
+            streams.null_format = true;
     }

     return streams;
 }
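
[Editor's illustration of the resolution order the new branch implements, with plain std::string stand-ins rather than ClickHouse types:]

    #include <string>

    // `ast_format` plays the role of ast_query_with_output->format;
    // a null pointer means the query had no FORMAT clause.
    std::string resolveOutputFormat(const std::string * ast_format, const std::string & default_format)
    {
        return ast_format ? *ast_format : default_format;
    }

    // resolveOutputFormat(nullptr, "TabSeparated") -> "TabSeparated"
    // A "Null" result sets streams.null_format above, i.e. the query runs but its output is discarded.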

@@ -592,7 +595,7 @@ void executeQuery(
     WriteBuffer & ostr,
     bool allow_into_outfile,
     Context & context,
-    std::function<void(const String &)> set_content_type,
+    std::function<void(const String &, const String &)> set_content_type_and_format,
     std::function<void(const String &)> set_query_id)
 {
     PODArray<char> parse_buf;

@@ -682,8 +685,8 @@ void executeQuery(
                 out->onProgress(progress);
             });

-        if (set_content_type)
-            set_content_type(out->getContentType());
+        if (set_content_type_and_format)
+            set_content_type_and_format(out->getContentType(), format_name);

         if (set_query_id)
             set_query_id(context.getClientInfo().current_query_id);

@@ -744,8 +747,8 @@ void executeQuery(
                 out->onProgress(progress);
             });

-        if (set_content_type)
-            set_content_type(out->getContentType());
+        if (set_content_type_and_format)
+            set_content_type_and_format(out->getContentType(), format_name);

         if (set_query_id)
             set_query_id(context.getClientInfo().current_query_id);

@@ -19,7 +19,7 @@ void executeQuery(
     WriteBuffer & ostr,                 /// Where to write query output to.
     bool allow_into_outfile,            /// If true and the query contains INTO OUTFILE section, redirect output to that file.
     Context & context,                  /// DB, tables, data types, storage engines, functions, aggregate functions...
-    std::function<void(const String &)> set_content_type,  /// If non-empty callback is passed, it will be called with the Content-Type of the result.
+    std::function<void(const String &, const String &)> set_content_type_and_format,  /// If non-empty callback is passed, it will be called with the Content-Type and the Format of the result.
     std::function<void(const String &)> set_query_id       /// If non-empty callback is passed, it will be called with the query id.
 );
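
[Editor's note, not part of the diff: a hedged example of a call site adapted to the combined callback. Only the signature above is from this commit; `in`, `out`, `context` and the HTTP `response` object, including the X-ClickHouse-* header names, are illustrative assumptions.]

    executeQuery(in, out, /* allow_into_outfile = */ false, context,
        [&response](const String & content_type, const String & format)
        {
            response.setContentType(content_type);        // e.g. "application/json; charset=UTF-8"
            response.set("X-ClickHouse-Format", format);  // assumed header name, for illustration
        },
        [&response](const String & query_id)
        {
            response.set("X-ClickHouse-Query-Id", query_id);
        });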

Some files were not shown because too many files have changed in this diff.