Merge remote-tracking branch 'upstream/master' into order-by-efficient

This commit is contained in:
CurtizJ 2019-08-14 01:41:38 +03:00
commit 94bca8315d
764 changed files with 7791 additions and 11825 deletions

View File

@ -1,4 +1,191 @@
## ClickHouse release 19.9.4.1, 2019-07-05
## ClickHouse release 19.11.5.28, 2019-08-05
### Bug Fix
* Fixed the possibility of hanging queries when server is overloaded. [#6301](https://github.com/yandex/ClickHouse/pull/6301) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix FPE in yandexConsistentHash function. This fixes [#6304](https://github.com/yandex/ClickHouse/issues/6304). [#6126](https://github.com/yandex/ClickHouse/pull/6126) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed bug in conversion of `LowCardinality` types in `AggregateFunctionFactory`. This fixes [#6257](https://github.com/yandex/ClickHouse/issues/6257). [#6281](https://github.com/yandex/ClickHouse/pull/6281) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Fix parsing of `bool` settings from `true` and `false` strings in configuration files. [#6278](https://github.com/yandex/ClickHouse/pull/6278) ([alesapin](https://github.com/alesapin))
* Fix rare bug with incompatible stream headers in queries to `Distributed` table over `MergeTree` table when part of `WHERE` moves to `PREWHERE`. [#6236](https://github.com/yandex/ClickHouse/pull/6236) ([alesapin](https://github.com/alesapin))
* Fixed overflow in integer division of signed type to unsigned type. This fixes [#6214](https://github.com/yandex/ClickHouse/issues/6214). [#6233](https://github.com/yandex/ClickHouse/pull/6233) ([alexey-milovidov](https://github.com/alexey-milovidov))
### Backward Incompatible Change
* `Kafka` is still broken in this version.
## ClickHouse release 19.11.4.24, 2019-08-01
### Bug Fix
* Fix bug with writing secondary indices marks with adaptive granularity. [#6126](https://github.com/yandex/ClickHouse/pull/6126) ([alesapin](https://github.com/alesapin))
* Fix `WITH ROLLUP` and `WITH CUBE` modifiers of `GROUP BY` with two-level aggregation. [#6225](https://github.com/yandex/ClickHouse/pull/6225) ([Anton Popov](https://github.com/CurtizJ))
* Fixed hang in `JSONExtractRaw` function. Fixed [#6195](https://github.com/yandex/ClickHouse/issues/6195) [#6198](https://github.com/yandex/ClickHouse/pull/6198) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix segfault in ExternalLoader::reloadOutdated(). [#6082](https://github.com/yandex/ClickHouse/pull/6082) ([Vitaly Baranov](https://github.com/vitlibar))
* Fixed the case when the server may close listening sockets but not shut down, continuing to serve remaining queries. You may end up with two running clickhouse-server processes. Sometimes, the server may return an error `bad_function_call` for remaining queries. [#6231](https://github.com/yandex/ClickHouse/pull/6231) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed a useless and incorrect condition on the update field for initial loading of external dictionaries via ODBC, MySQL, ClickHouse and HTTP. This fixes [#6069](https://github.com/yandex/ClickHouse/issues/6069) [#6083](https://github.com/yandex/ClickHouse/pull/6083) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed an irrelevant exception when casting a `LowCardinality(Nullable)` column to a non-Nullable column in case it doesn't contain Nulls (e.g. in a query like `SELECT CAST(CAST('Hello' AS LowCardinality(Nullable(String))) AS String)`). [#6094](https://github.com/yandex/ClickHouse/issues/6094) [#6119](https://github.com/yandex/ClickHouse/pull/6119) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Fix non-deterministic result of the `uniq` aggregate function in extremely rare cases. The bug was present in all ClickHouse versions. [#6058](https://github.com/yandex/ClickHouse/pull/6058) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed a segfault when a slightly too high CIDR value is passed to the function `IPv6CIDRToRange`. [#6068](https://github.com/yandex/ClickHouse/pull/6068) ([Guillaume Tassery](https://github.com/YiuRULE))
* Fixed a small memory leak when the server throws many exceptions from many different contexts. [#6144](https://github.com/yandex/ClickHouse/pull/6144) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix the situation when the consumer got paused before subscription and was not resumed afterwards. [#6075](https://github.com/yandex/ClickHouse/pull/6075) ([Ivan](https://github.com/abyss7)) Note that Kafka is broken in this version.
* Clear the Kafka data buffer from a previous read operation that completed with an error. [#6026](https://github.com/yandex/ClickHouse/pull/6026) ([Nikolay](https://github.com/bopohaa)) Note that Kafka is broken in this version.
* Since `StorageMergeTree::background_task_handle` is initialized in `startup()`, `MergeTreeBlockOutputStream::write()` may try to use it before initialization. Just check whether it is initialized. [#6080](https://github.com/yandex/ClickHouse/pull/6080) ([Ivan](https://github.com/abyss7))
### Build/Testing/Packaging Improvement
* Added official `rpm` packages. [#5740](https://github.com/yandex/ClickHouse/pull/5740) ([proller](https://github.com/proller)) ([alesapin](https://github.com/alesapin))
* Add the ability to build `.rpm` and `.tgz` packages with the `packager` script. [#5769](https://github.com/yandex/ClickHouse/pull/5769) ([alesapin](https://github.com/alesapin))
* Fixes for "Arcadia" build system. [#6223](https://github.com/yandex/ClickHouse/pull/6223) ([proller](https://github.com/proller))
### Backward Incompatible Change
* `Kafka` is broken in this version.
## ClickHouse release 19.11.3.11, 2019-07-18
### New Feature
* Added support for prepared statements. [#5331](https://github.com/yandex/ClickHouse/pull/5331/) ([Alexander](https://github.com/sanych73)) [#5630](https://github.com/yandex/ClickHouse/pull/5630) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Added `DoubleDelta` and `Gorilla` column codecs. [#5600](https://github.com/yandex/ClickHouse/pull/5600) ([Vasily Nemkov](https://github.com/Enmk))
* Added the `os_thread_priority` setting that allows controlling the "nice" value of query processing threads, which is used by the OS to adjust dynamic scheduling priority. It requires the `CAP_SYS_NICE` capability to work. This implements [#5858](https://github.com/yandex/ClickHouse/issues/5858) [#5909](https://github.com/yandex/ClickHouse/pull/5909) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Implement `_topic`, `_offset`, `_key` columns for the Kafka engine. [#5382](https://github.com/yandex/ClickHouse/pull/5382) ([Ivan](https://github.com/abyss7)) Note that Kafka is broken in this version.
* Add aggregate function combinator `-Resample` [#5590](https://github.com/yandex/ClickHouse/pull/5590) ([hcz](https://github.com/hczhcz))
* Aggregate functions `groupArrayMovingSum(win_size)(x)` and `groupArrayMovingAvg(win_size)(x)`, which calculate moving sum/avg with or without window-size limitation. [#5595](https://github.com/yandex/ClickHouse/pull/5595) ([inv2004](https://github.com/inv2004))
* Added `flatten` as a synonym for `arrayFlatten`. [#5764](https://github.com/yandex/ClickHouse/pull/5764) ([hcz](https://github.com/hczhcz))
* Integrated the H3 function `geoToH3` from Uber. Illustrative queries for several of the features in this list are sketched below. [#4724](https://github.com/yandex/ClickHouse/pull/4724) ([Remen Ivan](https://github.com/BHYCHIK)) [#5805](https://github.com/yandex/ClickHouse/pull/5805) ([alexey-milovidov](https://github.com/alexey-milovidov))
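The queries below sketch several of the new functions and the `os_thread_priority` setting. They are illustrative only and not part of the release notes; the argument order of `geoToH3` (longitude, latitude, resolution) and the combinator syntax are assumptions based on the feature descriptions.

```sql
SET os_thread_priority = 10;  -- lower scheduling priority of this query's threads (needs CAP_SYS_NICE)

-- -Resample combinator: one count per bucket of the key range [0, 100) with step 10.
SELECT countResample(0, 100, 10)(number, number) FROM numbers(100);

-- Moving sum over a window of 3 values.
SELECT groupArrayMovingSum(3)(n) FROM (SELECT number AS n FROM numbers(6));

-- flatten is a synonym for arrayFlatten.
SELECT flatten([[1, 2], [3, 4]]);  -- [1, 2, 3, 4]

-- geoToH3(lon, lat, resolution), assuming this argument order.
SELECT geoToH3(37.62, 55.75, 15);

-- Kafka virtual columns (kafka_queue is a hypothetical Kafka table; Kafka is broken in this release).
-- SELECT _topic, _offset, _key FROM kafka_queue;
```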
### Bug Fix
* Implement a DNS cache with asynchronous update. A separate thread resolves all hosts and updates the DNS cache periodically (setting `dns_cache_update_period`). It should help when the IP addresses of hosts change frequently. [#5857](https://github.com/yandex/ClickHouse/pull/5857) ([Anton Popov](https://github.com/CurtizJ))
* Fix segfault in the `Delta` codec which affects columns with values smaller than 32 bits. The bug led to random memory corruption. [#5786](https://github.com/yandex/ClickHouse/pull/5786) ([alesapin](https://github.com/alesapin))
* Fix segfault in TTL merge with non-physical columns in block. [#5819](https://github.com/yandex/ClickHouse/pull/5819) ([Anton Popov](https://github.com/CurtizJ))
* Fix a rare bug in checking parts with a `LowCardinality` column. Previously `checkDataPart` always failed for parts with a `LowCardinality` column. [#5832](https://github.com/yandex/ClickHouse/pull/5832) ([alesapin](https://github.com/alesapin))
* Avoid hanging connections when the server thread pool is full. This is important for connections from the `remote` table function, or connections to a shard without replicas when there is a long connection timeout. This fixes [#5878](https://github.com/yandex/ClickHouse/issues/5878) [#5881](https://github.com/yandex/ClickHouse/pull/5881) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Support for constant arguments to `evalMLModel` function. This fixes [#5817](https://github.com/yandex/ClickHouse/issues/5817) [#5820](https://github.com/yandex/ClickHouse/pull/5820) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed the issue when ClickHouse determines default time zone as `UCT` instead of `UTC`. This fixes [#5804](https://github.com/yandex/ClickHouse/issues/5804). [#5828](https://github.com/yandex/ClickHouse/pull/5828) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed buffer underflow in `visitParamExtractRaw`. This fixes [#5901](https://github.com/yandex/ClickHouse/issues/5901) [#5902](https://github.com/yandex/ClickHouse/pull/5902) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Distributed `DROP/ALTER/TRUNCATE/OPTIMIZE ON CLUSTER` queries are now executed directly on the leader replica. [#5757](https://github.com/yandex/ClickHouse/pull/5757) ([alesapin](https://github.com/alesapin))
* Fix `coalesce` for `ColumnConst` with `ColumnNullable` + related changes. [#5755](https://github.com/yandex/ClickHouse/pull/5755) ([Artem Zuikov](https://github.com/4ertus2))
* Fix the `ReadBufferFromKafkaConsumer` so that it keeps reading new messages after `commit()` even if it was stalled before [#5852](https://github.com/yandex/ClickHouse/pull/5852) ([Ivan](https://github.com/abyss7))
* Fix `FULL` and `RIGHT` JOIN results when joining on `Nullable` keys in right table. [#5859](https://github.com/yandex/ClickHouse/pull/5859) ([Artem Zuikov](https://github.com/4ertus2))
* Possible fix of infinite sleeping of low-priority queries. [#5842](https://github.com/yandex/ClickHouse/pull/5842) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix a race condition which caused some queries not to appear in query_log immediately after a `SYSTEM FLUSH LOGS` query (a sketch of verifying this follows the list). [#5456](https://github.com/yandex/ClickHouse/issues/5456) [#5685](https://github.com/yandex/ClickHouse/pull/5685) ([Anton Popov](https://github.com/CurtizJ))
* Fixed a `heap-use-after-free` ASan warning in ClusterCopier caused by a watch that tried to use an already removed copier object. [#5871](https://github.com/yandex/ClickHouse/pull/5871) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Fixed wrong `StringRef` pointer returned by some implementations of `IColumn::deserializeAndInsertFromArena`. This bug affected only unit-tests. [#5973](https://github.com/yandex/ClickHouse/pull/5973) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Prevent source and intermediate array join columns from masking columns with the same name. [#5941](https://github.com/yandex/ClickHouse/pull/5941) ([Artem Zuikov](https://github.com/4ertus2))
* Fix INSERT and SELECT queries to the MySQL engine with MySQL-style identifier quoting. [#5704](https://github.com/yandex/ClickHouse/pull/5704) ([Winter Zhang](https://github.com/zhang2014))
* Now the `CHECK TABLE` query can work with the MergeTree engine family. It returns a check status and message, if any, for each part (or file in the case of simpler engines). Also, fix a bug in the fetch of a broken part. [#5865](https://github.com/yandex/ClickHouse/pull/5865) ([alesapin](https://github.com/alesapin))
* Fix SPLIT_SHARED_LIBRARIES runtime [#5793](https://github.com/yandex/ClickHouse/pull/5793) ([Danila Kutenin](https://github.com/danlark1))
* Fixed time zone initialization when `/etc/localtime` is a relative symlink like `../usr/share/zoneinfo/Europe/Moscow` [#5922](https://github.com/yandex/ClickHouse/pull/5922) ([alexey-milovidov](https://github.com/alexey-milovidov))
* clickhouse-copier: Fix use-after-free on shutdown [#5752](https://github.com/yandex/ClickHouse/pull/5752) ([proller](https://github.com/proller))
* Updated `simdjson`. Fixed the issue that some invalid JSONs with zero bytes parsed successfully. [#5938](https://github.com/yandex/ClickHouse/pull/5938) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix shutdown of SystemLogs [#5802](https://github.com/yandex/ClickHouse/pull/5802) ([Anton Popov](https://github.com/CurtizJ))
* Fix hanging when the condition in `invalidate_query` depends on a dictionary. [#6011](https://github.com/yandex/ClickHouse/pull/6011) ([Vitaly Baranov](https://github.com/vitlibar))
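A minimal sketch of verifying the `SYSTEM FLUSH LOGS` fix above; the columns used are standard `system.query_log` fields, though the exact column set varies by version.

```sql
-- After the fix, an explicitly flushed query should be visible in query_log right away.
SYSTEM FLUSH LOGS;
SELECT query, query_duration_ms
FROM system.query_log
WHERE event_date = today()
ORDER BY event_time DESC
LIMIT 10;
```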
### Improvement
* Allow unresolvable addresses in cluster configuration. They will be considered unavailable, and resolution will be retried at every connection attempt. This is especially useful for Kubernetes. This fixes [#5714](https://github.com/yandex/ClickHouse/issues/5714) [#5924](https://github.com/yandex/ClickHouse/pull/5924) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Close idle TCP connections (with one hour timeout by default). This is especially important for large clusters with multiple distributed tables on every server, because every server can possibly keep a connection pool to every other server, and after peak query concurrency, connections will stall. This fixes [#5879](https://github.com/yandex/ClickHouse/issues/5879) [#5880](https://github.com/yandex/ClickHouse/pull/5880) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Better quality of the `topK` function. Changed the SpaceSaving set behavior to remove the last element if the new element has a bigger weight. [#5833](https://github.com/yandex/ClickHouse/issues/5833) [#5850](https://github.com/yandex/ClickHouse/pull/5850) ([Guillaume Tassery](https://github.com/YiuRULE))
* URL functions that work with domains can now handle incomplete URLs without a scheme. [#5725](https://github.com/yandex/ClickHouse/pull/5725) ([alesapin](https://github.com/alesapin))
* Checksums added to the `system.parts_columns` table. [#5874](https://github.com/yandex/ClickHouse/pull/5874) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov))
* Added the `Enum` data type as a synonym for `Enum8` or `Enum16`. [#5886](https://github.com/yandex/ClickHouse/pull/5886) ([dimarub2000](https://github.com/dimarub2000))
* Full bit transpose variant for `T64` codec. Could lead to better compression with `zstd`. [#5742](https://github.com/yandex/ClickHouse/pull/5742) ([Artem Zuikov](https://github.com/4ertus2))
* Conditions on the `startsWith` function can now use the primary key (see the sketch after this list). This fixes [#5310](https://github.com/yandex/ClickHouse/issues/5310) and [#5882](https://github.com/yandex/ClickHouse/issues/5882) [#5919](https://github.com/yandex/ClickHouse/pull/5919) ([dimarub2000](https://github.com/dimarub2000))
* Allow to use `clickhouse-copier` with cross-replication cluster topology by permitting empty database name. [#5745](https://github.com/yandex/ClickHouse/pull/5745) ([nvartolomei](https://github.com/nvartolomei))
* Use `UTC` as default timezone on a system without `tzdata` (e.g. bare Docker container). Before this patch, error message `Could not determine local time zone` was printed and server or client refused to start. [#5827](https://github.com/yandex/ClickHouse/pull/5827) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Restored support for a floating point argument in the function `quantileTiming` for backward compatibility. [#5911](https://github.com/yandex/ClickHouse/pull/5911) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Show which table is missing a column in error messages. [#5768](https://github.com/yandex/ClickHouse/pull/5768) ([Ivan](https://github.com/abyss7))
* Disallow running queries with the same query_id by different users. [#5430](https://github.com/yandex/ClickHouse/pull/5430) ([proller](https://github.com/proller))
* More robust code for sending metrics to Graphite. It will work even during long `RENAME TABLE` operations. [#5875](https://github.com/yandex/ClickHouse/pull/5875) ([alexey-milovidov](https://github.com/alexey-milovidov))
* More informative error messages will be displayed when ThreadPool cannot schedule a task for execution. This fixes [#5305](https://github.com/yandex/ClickHouse/issues/5305) [#5801](https://github.com/yandex/ClickHouse/pull/5801) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Inverted the `ngramSearch` result to be more intuitive. [#5807](https://github.com/yandex/ClickHouse/pull/5807) ([Danila Kutenin](https://github.com/danlark1))
* Add user parsing in HDFS engine builder [#5946](https://github.com/yandex/ClickHouse/pull/5946) ([akonyaev90](https://github.com/akonyaev90))
* Update the default value of the `max_ast_elements` parameter. [#5933](https://github.com/yandex/ClickHouse/pull/5933) ([Artem Konovalov](https://github.com/izebit))
* Added a notion of obsolete settings. The obsolete setting `allow_experimental_low_cardinality_type` can be used with no effect. [0f15c01c6802f7ce1a1494c12c846be8c98944cd](https://github.com/yandex/ClickHouse/commit/0f15c01c6802f7ce1a1494c12c846be8c98944cd) ([Alexey Milovidov](https://github.com/alexey-milovidov))
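Two small sketches for the improvements above; the feature behavior is as described, but these exact statements and table names are illustrative.

```sql
-- `Enum` without a width is accepted as a synonym for Enum8/Enum16.
CREATE TABLE colors (c Enum('red' = 1, 'green' = 2)) ENGINE = Memory;

-- With the table ordered by `url`, a startsWith condition can now be served
-- by the primary key instead of a full scan.
CREATE TABLE hits (url String, cnt UInt64) ENGINE = MergeTree ORDER BY url;
SELECT count() FROM hits WHERE startsWith(url, 'https://');
```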
### Performance Improvement
* Increased the number of streams for SELECT from a Merge table, for more uniform distribution of threads. Added the setting `max_streams_multiplier_for_merge_tables` (a minimal example follows). This fixes [#5797](https://github.com/yandex/ClickHouse/issues/5797) [#5915](https://github.com/yandex/ClickHouse/pull/5915) ([alexey-milovidov](https://github.com/alexey-milovidov))
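A minimal sketch of the new setting; the table name is hypothetical and the chosen value is arbitrary.

```sql
-- Read the underlying tables of a Merge table with more parallel streams.
SET max_streams_multiplier_for_merge_tables = 5;
SELECT count() FROM merge_db.all_events;  -- hypothetical Merge-engine table
```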
### Build/Testing/Packaging Improvement
* Add a backward compatibility test for client-server interaction with different versions of ClickHouse. [#5868](https://github.com/yandex/ClickHouse/pull/5868) ([alesapin](https://github.com/alesapin))
* Test coverage information in every commit and pull request. [#5896](https://github.com/yandex/ClickHouse/pull/5896) ([alesapin](https://github.com/alesapin))
* Cooperate with address sanitizer to support our custom allocators (`Arena` and `ArenaWithFreeLists`) for better debugging of "use-after-free" errors. [#5728](https://github.com/yandex/ClickHouse/pull/5728) ([akuzm](https://github.com/akuzm))
* Switch to [LLVM libunwind implementation](https://github.com/llvm-mirror/libunwind) for C++ exception handling and for stack traces printing [#4828](https://github.com/yandex/ClickHouse/pull/4828) ([Nikita Lapkov](https://github.com/laplab))
* Add two more warnings from -Weverything [#5923](https://github.com/yandex/ClickHouse/pull/5923) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Allow to build ClickHouse with Memory Sanitizer. [#3949](https://github.com/yandex/ClickHouse/pull/3949) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed ubsan report about `bitTest` function in fuzz test. [#5943](https://github.com/yandex/ClickHouse/pull/5943) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Docker: added possibility to init a ClickHouse instance which requires authentication. [#5727](https://github.com/yandex/ClickHouse/pull/5727) ([Korviakov Andrey](https://github.com/shurshun))
* Update librdkafka to version 1.1.0 [#5872](https://github.com/yandex/ClickHouse/pull/5872) ([Ivan](https://github.com/abyss7))
* Add global timeout for integration tests and disable some of them in tests code. [#5741](https://github.com/yandex/ClickHouse/pull/5741) ([alesapin](https://github.com/alesapin))
* Fix some ThreadSanitizer failures. [#5854](https://github.com/yandex/ClickHouse/pull/5854) ([akuzm](https://github.com/akuzm))
* The `--no-undefined` option forces the linker to check all external names for existence while linking. It's very useful to track real dependencies between libraries in the split build mode. [#5855](https://github.com/yandex/ClickHouse/pull/5855) ([Ivan](https://github.com/abyss7))
* Added performance test for [#5797](https://github.com/yandex/ClickHouse/issues/5797) [#5914](https://github.com/yandex/ClickHouse/pull/5914) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed compatibility with gcc-7. [#5840](https://github.com/yandex/ClickHouse/pull/5840) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Added support for gcc-9. This fixes [#5717](https://github.com/yandex/ClickHouse/issues/5717) [#5774](https://github.com/yandex/ClickHouse/pull/5774) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed error when libunwind can be linked incorrectly. [#5948](https://github.com/yandex/ClickHouse/pull/5948) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed a few warnings found by PVS-Studio. [#5921](https://github.com/yandex/ClickHouse/pull/5921) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Added initial support for `clang-tidy` static analyzer. [#5806](https://github.com/yandex/ClickHouse/pull/5806) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Convert BSD/Linux endian macros (`be64toh` and `htobe64`) to the Mac OS X equivalents. [#5785](https://github.com/yandex/ClickHouse/pull/5785) ([Fu Chen](https://github.com/fredchenbj))
* Improved integration tests guide. [#5796](https://github.com/yandex/ClickHouse/pull/5796) ([Vladimir Chebotarev](https://github.com/excitoon))
* Fix build on macOS with gcc 9. [#5822](https://github.com/yandex/ClickHouse/pull/5822) ([filimonov](https://github.com/filimonov))
* Fix a hard-to-spot typo: aggreAGte -> aggregate. [#5753](https://github.com/yandex/ClickHouse/pull/5753) ([akuzm](https://github.com/akuzm))
* Fix freebsd build [#5760](https://github.com/yandex/ClickHouse/pull/5760) ([proller](https://github.com/proller))
* Add link to experimental YouTube channel to website [#5845](https://github.com/yandex/ClickHouse/pull/5845) ([Ivan Blinkov](https://github.com/blinkov))
* CMake: add option for coverage flags: WITH_COVERAGE [#5776](https://github.com/yandex/ClickHouse/pull/5776) ([proller](https://github.com/proller))
* Fix initial size of some inline PODArray's. [#5787](https://github.com/yandex/ClickHouse/pull/5787) ([akuzm](https://github.com/akuzm))
* clickhouse-server.postinst: fix OS detection for CentOS 6. [#5788](https://github.com/yandex/ClickHouse/pull/5788) ([proller](https://github.com/proller))
* Added Arch Linux package generation. [#5719](https://github.com/yandex/ClickHouse/pull/5719) ([Vladimir Chebotarev](https://github.com/excitoon))
* Split Common/config.h by libs (dbms) [#5715](https://github.com/yandex/ClickHouse/pull/5715) ([proller](https://github.com/proller))
* Fixes for "Arcadia" build platform [#5795](https://github.com/yandex/ClickHouse/pull/5795) ([proller](https://github.com/proller))
* Fixes for unconventional build (gcc9, no submodules) [#5792](https://github.com/yandex/ClickHouse/pull/5792) ([proller](https://github.com/proller))
* Require explicit type in unalignedStore because it was proven to be bug-prone [#5791](https://github.com/yandex/ClickHouse/pull/5791) ([akuzm](https://github.com/akuzm))
* Fix macOS build. [#5830](https://github.com/yandex/ClickHouse/pull/5830) ([filimonov](https://github.com/filimonov))
* Performance test concerning the new JIT feature with bigger dataset, as requested here [#5263](https://github.com/yandex/ClickHouse/issues/5263) [#5887](https://github.com/yandex/ClickHouse/pull/5887) ([Guillaume Tassery](https://github.com/YiuRULE))
* Run stateful tests in stress test [12693e568722f11e19859742f56428455501fd2a](https://github.com/yandex/ClickHouse/commit/12693e568722f11e19859742f56428455501fd2a) ([alesapin](https://github.com/alesapin))
### Backward Incompatible Change
* `Kafka` is broken in this version.
* Enable `adaptive_index_granularity` = 10MB by default for new `MergeTree` tables. If you created new MergeTree tables on version 19.11+, downgrading to versions prior to 19.6 will be impossible. [#5628](https://github.com/yandex/ClickHouse/pull/5628) ([alesapin](https://github.com/alesapin))
* Removed obsolete undocumented embedded dictionaries that were used by Yandex.Metrica. The functions `OSIn`, `SEIn`, `OSToRoot`, `SEToRoot`, `OSHierarchy`, `SEHierarchy` are no longer available. If you are using these functions, write an email to clickhouse-feedback@yandex-team.com. Note: at the last moment we decided to keep these functions for a while. [#5780](https://github.com/yandex/ClickHouse/pull/5780) ([alexey-milovidov](https://github.com/alexey-milovidov))
## ClickHouse release 19.10.1.5, 2019-07-12
### New Feature
* Add new column codec: `T64`. Made for (U)IntX/EnumX/Date(Time)/DecimalX columns. It should be good for columns with constant or small-range values. The codec itself allows enlarging or shrinking the data type without re-compression. [#5557](https://github.com/yandex/ClickHouse/pull/5557) ([Artem Zuikov](https://github.com/4ertus2))
* Add a database engine `MySQL` that allows viewing all the tables in a remote MySQL server. [#5599](https://github.com/yandex/ClickHouse/pull/5599) ([Winter Zhang](https://github.com/zhang2014))
* `bitmapContains` implementation. It's 2x faster than `bitmapHasAny` if the second bitmap contains one element. [#5535](https://github.com/yandex/ClickHouse/pull/5535) ([Zhichang Yu](https://github.com/yuzhichang))
* Support for `crc32` function (with behaviour exactly as in MySQL or PHP). Do not use it if you need a hash function. [#5661](https://github.com/yandex/ClickHouse/pull/5661) ([Remen Ivan](https://github.com/BHYCHIK))
* Implemented `SYSTEM START/STOP DISTRIBUTED SENDS` queries to control asynchronous inserts into `Distributed` tables. Sketches for several of these features follow this list. [#4935](https://github.com/yandex/ClickHouse/pull/4935) ([Winter Zhang](https://github.com/zhang2014))
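Illustrative sketches for the features above. Connection parameters and table names are placeholders, and spelling the checksum function as `CRC32` assumes the usual case-insensitive registration of MySQL-compatible functions.

```sql
-- T64 codec on an integer column.
CREATE TABLE t (x UInt32 CODEC(T64, LZ4)) ENGINE = MergeTree ORDER BY x;

-- Browse all tables of a remote MySQL server.
CREATE DATABASE mysql_db ENGINE = MySQL('host:3306', 'database', 'user', 'password');

SELECT bitmapContains(bitmapBuild([1, 2, 3]), toUInt32(2));  -- 1; faster than bitmapHasAny for a single element
SELECT CRC32('ClickHouse');                                  -- MySQL/PHP-compatible checksum

-- Pause and resume asynchronous inserts into a Distributed table (hypothetical name).
SYSTEM STOP DISTRIBUTED SENDS dist_db.events_dist;
SYSTEM START DISTRIBUTED SENDS dist_db.events_dist;
```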
### Bug Fix
* Ignore query execution limits and the max parts size limit for merges while executing mutations. [#5659](https://github.com/yandex/ClickHouse/pull/5659) ([Anton Popov](https://github.com/CurtizJ))
* Fix bug which may lead to deduplication of normal blocks (extremely rare) and insertion of duplicate blocks (more often). [#5549](https://github.com/yandex/ClickHouse/pull/5549) ([alesapin](https://github.com/alesapin))
* Fix the function `arrayEnumerateUniqRanked` for arguments with empty arrays. [#5559](https://github.com/yandex/ClickHouse/pull/5559) ([proller](https://github.com/proller))
* Don't subscribe to Kafka topics without intent to poll any messages. [#5698](https://github.com/yandex/ClickHouse/pull/5698) ([Ivan](https://github.com/abyss7))
* Make the setting `join_use_nulls` have no effect for types that cannot be inside Nullable. [#5700](https://github.com/yandex/ClickHouse/pull/5700) ([Olga Khvostikova](https://github.com/stavrolia))
* Fixed `Incorrect size of index granularity` errors [#5720](https://github.com/yandex/ClickHouse/pull/5720) ([coraxster](https://github.com/coraxster))
* Fix Float to Decimal convert overflow [#5607](https://github.com/yandex/ClickHouse/pull/5607) ([coraxster](https://github.com/coraxster))
* Flush buffer when `WriteBufferFromHDFS`'s destructor is called. This fixes writing into `HDFS`. [#5684](https://github.com/yandex/ClickHouse/pull/5684) ([Xindong Peng](https://github.com/eejoin))
### Improvement
* Treat empty cells in `CSV` as default values when the setting `input_format_defaults_for_omitted_fields` is enabled. [#5625](https://github.com/yandex/ClickHouse/pull/5625) ([akuzm](https://github.com/akuzm))
* Non-blocking loading of external dictionaries. [#5567](https://github.com/yandex/ClickHouse/pull/5567) ([Vitaly Baranov](https://github.com/vitlibar))
* Network timeouts can be dynamically changed for already established connections according to the settings. [#4558](https://github.com/yandex/ClickHouse/pull/4558) ([Konstantin Podshumok](https://github.com/podshumok))
* Using "public_suffix_list" for functions `firstSignificantSubdomain`, `cutToFirstSignificantSubdomain`. It's using a perfect hash table generated by `gperf` with a list generated from the file: [https://publicsuffix.org/list/public_suffix_list.dat](https://publicsuffix.org/list/public_suffix_list.dat). (for example, now we recognize the domain `ac.uk` as non-significant). [#5030](https://github.com/yandex/ClickHouse/pull/5030) ([Guillaume Tassery](https://github.com/YiuRULE))
* Adopted `IPv6` data type in system tables; unified client info columns in `system.processes` and `system.query_log` [#5640](https://github.com/yandex/ClickHouse/pull/5640) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Use sessions for connections with the MySQL compatibility protocol. #5476 [#5646](https://github.com/yandex/ClickHouse/pull/5646) ([Yuriy Baranov](https://github.com/yurriy))
* Support more `ALTER` queries `ON CLUSTER`. [#5593](https://github.com/yandex/ClickHouse/pull/5593) [#5613](https://github.com/yandex/ClickHouse/pull/5613) ([sundyli](https://github.com/sundy-li))
* Support `<logger>` section in `clickhouse-local` config file. [#5540](https://github.com/yandex/ClickHouse/pull/5540) ([proller](https://github.com/proller))
* Allow running queries with the `remote` table function in `clickhouse-local` (as shown below). [#5627](https://github.com/yandex/ClickHouse/pull/5627) ([proller](https://github.com/proller))
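Two sketches for the improvements above; the URL and the remote host/table are made up.

```sql
-- `ac.uk` is on the public suffix list, so the significant part is `example`.
SELECT firstSignificantSubdomain('https://news.example.ac.uk/page');   -- 'example'
SELECT cutToFirstSignificantSubdomain('https://news.example.ac.uk/');  -- 'example.ac.uk'

-- Inside clickhouse-local, a remote server can now be queried directly.
SELECT count() FROM remote('server01:9000', default.hits);
```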
### Performance Improvement
* Add the possibility to write the final mark at the end of MergeTree columns. It allows avoiding useless reads for keys that are out of the table's data range. It is enabled only if adaptive index granularity is in use. [#5624](https://github.com/yandex/ClickHouse/pull/5624) ([alesapin](https://github.com/alesapin))
* Improved performance of MergeTree tables on very slow filesystems by reducing number of `stat` syscalls. [#5648](https://github.com/yandex/ClickHouse/pull/5648) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed performance degradation in reading from MergeTree tables that was introduced in version 19.6. Fixes #5631. [#5633](https://github.com/yandex/ClickHouse/pull/5633) ([alexey-milovidov](https://github.com/alexey-milovidov))
### Build/Testing/Packaging Improvement
* Implemented `TestKeeper` as an implementation of the ZooKeeper interface, used for testing. [#5643](https://github.com/yandex/ClickHouse/pull/5643) ([alexey-milovidov](https://github.com/alexey-milovidov)) ([levushkin aleksej](https://github.com/alexey-milovidov))
* From now on, `.sql` tests can be run isolated by server, in parallel, with a random database. This allows running them faster, adding new tests with custom server configurations, and being sure that different tests don't affect each other. [#5554](https://github.com/yandex/ClickHouse/pull/5554) ([Ivan](https://github.com/abyss7))
* Remove `<name>` and `<metrics>` from performance tests [#5672](https://github.com/yandex/ClickHouse/pull/5672) ([Olga Khvostikova](https://github.com/stavrolia))
* Fixed "select_format" performance test for `Pretty` formats [#5642](https://github.com/yandex/ClickHouse/pull/5642) ([alexey-milovidov](https://github.com/alexey-milovidov))
## ClickHouse release 19.9.3.31, 2019-07-05
### Bug Fix
* Fix segfault in Delta codec which affects columns with values less than 32 bits size. The bug led to random memory corruption. [#5786](https://github.com/yandex/ClickHouse/pull/5786) ([alesapin](https://github.com/alesapin))
@ -10,7 +197,7 @@
* Fixed a race condition which caused some queries not to appear in query_log instantly after a SYSTEM FLUSH LOGS query. [#5685](https://github.com/yandex/ClickHouse/pull/5685) ([Anton Popov](https://github.com/CurtizJ))
* Added missing support for constant arguments to `evalMLModel` function. [#5820](https://github.com/yandex/ClickHouse/pull/5820) ([alexey-milovidov](https://github.com/alexey-milovidov))
## ClickHouse release 19.7.6.1, 2019-07-05
## ClickHouse release 19.7.5.29, 2019-07-05
### Bug Fix
* Fix performance regression in some queries with JOIN. [#5192](https://github.com/yandex/ClickHouse/pull/5192) ([Winter Zhang](https://github.com/zhang2014))
@ -98,7 +285,7 @@
* Batched version of RowRefList for ALL JOINS. [#5267](https://github.com/yandex/ClickHouse/pull/5267) ([Artem Zuikov](https://github.com/4ertus2))
* clickhouse-server: more informative listen error messages. [#5268](https://github.com/yandex/ClickHouse/pull/5268) ([proller](https://github.com/proller))
* Support dictionaries in clickhouse-copier for functions in `<sharding_key>` [#5270](https://github.com/yandex/ClickHouse/pull/5270) ([proller](https://github.com/proller))
* Add new setting `kafka_commit_every_batch` to regulate Kafka committing policy. It allows setting the commit mode: after every batch of messages is handled, or after the whole block is written to the storage. It's a trade-off between losing some messages or reading them twice in some extreme situations (sketched below). [#5308](https://github.com/yandex/ClickHouse/pull/5308) ([Ivan](https://github.com/abyss7))
* Make `windowFunnel` support other Unsigned Integer Types. [#5320](https://github.com/yandex/ClickHouse/pull/5320) ([sundyli](https://github.com/sundy-li))
* Allow shadowing the virtual column `_table` in the Merge engine. [#5325](https://github.com/yandex/ClickHouse/pull/5325) ([Ivan](https://github.com/abyss7))
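A sketch of the Kafka commit mode and `windowFunnel` items above; broker, topic, and table names are placeholders, and the funnel conditions are illustrative.

```sql
-- Kafka engine with the new per-batch commit mode.
CREATE TABLE queue (payload String) ENGINE = Kafka
SETTINGS kafka_broker_list = 'broker:9092',
         kafka_topic_list = 'events',
         kafka_group_name = 'ch_group',
         kafka_format = 'JSONEachRow',
         kafka_commit_every_batch = 1;

-- windowFunnel now also accepts wider unsigned integer types for the timestamp.
SELECT user_id, windowFunnel(3600)(ts, event = 'view', event = 'buy')
FROM events
GROUP BY user_id;
```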
@ -130,15 +317,15 @@ It allows to set commit mode: after every batch of messages is handled, or after
* Fixed FPU clobbering in simdjson library that lead to wrong calculation of `uniqHLL` and `uniqCombined` aggregate function and math functions such as `log`. [#5354](https://github.com/yandex/ClickHouse/pull/5354) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed handling mixed const/nonconst cases in JSON functions. [#5435](https://github.com/yandex/ClickHouse/pull/5435) ([Vitaly Baranov](https://github.com/vitlibar))
* Fix `retention` function. Now all conditions that are satisfied in a row of data are added to the data state. [#5119](https://github.com/yandex/ClickHouse/pull/5119) ([小路](https://github.com/nicelulu))
* Fix result type for `quantileExact` with Decimals. [#5304](https://github.com/yandex/ClickHouse/pull/5304) ([Artem Zuikov](https://github.com/4ertus2))
### Documentation
* Translate documentation for `CollapsingMergeTree` to Chinese. [#5168](https://github.com/yandex/ClickHouse/pull/5168) ([张风啸](https://github.com/AlexZFX))
* Translate some documentation about table engines to Chinese.
[#5134](https://github.com/yandex/ClickHouse/pull/5134)
[#5328](https://github.com/yandex/ClickHouse/pull/5328)
([never lee](https://github.com/neverlee))
### Build/Testing/Packaging Improvements
* Fix some sanitizer reports that show probable use-after-free. [#5139](https://github.com/yandex/ClickHouse/pull/5139) [#5143](https://github.com/yandex/ClickHouse/pull/5143) [#5393](https://github.com/yandex/ClickHouse/pull/5393) ([Ivan](https://github.com/abyss7))
@ -220,7 +407,7 @@ Kutenin](https://github.com/danlark1))
([张风啸](https://github.com/AlexZFX)),
[#5068](https://github.com/yandex/ClickHouse/pull/5068) ([never
lee](https://github.com/neverlee))
### Build/Testing/Packaging Improvements
* Print UTF-8 characters properly in `clickhouse-test`.
[#5084](https://github.com/yandex/ClickHouse/pull/5084)
@ -242,7 +429,7 @@ lee](https://github.com/neverlee))
### Bug Fixes
* Fixed IN condition pushdown for queries from table functions `mysql` and `odbc` and corresponding table engines. This fixes #3540 and #2384. [#5313](https://github.com/yandex/ClickHouse/pull/5313) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix deadlock in Zookeeper. [#5297](https://github.com/yandex/ClickHouse/pull/5297) ([github1youlc](https://github.com/github1youlc))
* Allow quoted decimals in CSV. [#5284](https://github.com/yandex/ClickHouse/pull/5284) ([Artem Zuikov](https://github.com/4ertus2))
* Disallow conversion from float Inf/NaN into Decimals (throw exception). [#5282](https://github.com/yandex/ClickHouse/pull/5282) ([Artem Zuikov](https://github.com/4ertus2))
* Fix data race in rename query. [#5247](https://github.com/yandex/ClickHouse/pull/5247) ([Winter Zhang](https://github.com/zhang2014))
* Temporarily disable LFAlloc. Usage of LFAlloc might lead to a lot of MAP_FAILED errors when allocating UncompressedCache and, as a result, to query crashes on highly loaded servers. [cfdba93](https://github.com/yandex/ClickHouse/commit/cfdba938ce22f16efeec504f7f90206a515b1280) ([Danila Kutenin](https://github.com/danlark1))

File diff suppressed because it is too large

View File

@ -34,14 +34,16 @@ else()
endif()
if (COMPILER_GCC)
# Require at least gcc 7
if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 7 AND NOT CMAKE_VERSION VERSION_LESS 2.8.9)
message (FATAL_ERROR "GCC version must be at least 7. For example, if GCC 7 is available under gcc-7, g++-7 names, do the following: export CC=gcc-7 CXX=g++-7; rm -rf CMakeCache.txt CMakeFiles; and re run cmake or ./release.")
# Require minimum version of gcc
set (GCC_MINIMUM_VERSION 8)
if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS ${GCC_MINIMUM_VERSION} AND NOT CMAKE_VERSION VERSION_LESS 2.8.9)
message (FATAL_ERROR "GCC version must be at least ${GCC_MINIMUM_VERSION}. For example, if GCC ${GCC_MINIMUM_VERSION} is available under gcc-${GCC_MINIMUM_VERSION}, g++-${GCC_MINIMUM_VERSION} names, do the following: export CC=gcc-${GCC_MINIMUM_VERSION} CXX=g++-${GCC_MINIMUM_VERSION}; rm -rf CMakeCache.txt CMakeFiles; and re run cmake or ./release.")
endif ()
elseif (CMAKE_CXX_COMPILER_ID STREQUAL "Clang")
# Require at least clang 6
if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 6)
message (FATAL_ERROR "Clang version must be at least 6.")
# Require minimum version of clang
set (CLANG_MINIMUM_VERSION 7)
if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS ${CLANG_MINIMUM_VERSION})
message (FATAL_ERROR "Clang version must be at least ${CLANG_MINIMUM_VERSION}.")
endif ()
else ()
message (WARNING "You are using an unsupported compiler. Compilation has only been tested with Clang 6+ and GCC 7+.")
@ -262,7 +264,9 @@ if (USE_STATIC_LIBRARIES AND HAVE_NO_PIE)
set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${FLAG_NO_PIE}")
endif ()
if (NOT SANITIZE AND NOT SPLIT_SHARED_LIBRARIES)
# TODO: only make this extra-checks in CI builds, since a lot of contrib libs won't link -
# CI works around this problem by explicitly adding GLIBC_COMPATIBILITY flag.
if (NOT SANITIZE AND YANDEX_OFFICIAL_BUILD)
set (CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -Wl,--no-undefined")
set (CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -Wl,--no-undefined")
endif ()
@ -326,7 +330,7 @@ if (OS_LINUX AND NOT UNBUNDLED AND (GLIBC_COMPATIBILITY OR USE_INTERNAL_UNWIND_L
if (USE_INTERNAL_LIBCXX_LIBRARY)
set (LIBCXX_LIBS "${ClickHouse_BINARY_DIR}/contrib/libcxx-cmake/libcxx_static${${CMAKE_POSTFIX_VARIABLE}}.a ${ClickHouse_BINARY_DIR}/contrib/libcxxabi-cmake/libcxxabi_static${${CMAKE_POSTFIX_VARIABLE}}.a")
else ()
set (LIBCXX_LIBS "-lc++ -lc++abi")
set (LIBCXX_LIBS "-lc++ -lc++abi -lc++fs")
endif ()
set (DEFAULT_LIBS "${DEFAULT_LIBS} -Wl,-Bstatic ${LIBCXX_LIBS} ${EXCEPTION_HANDLING_LIBRARY} ${BUILTINS_LIB_PATH} -Wl,-Bdynamic")
@ -435,10 +439,10 @@ message (STATUS "Building for: ${CMAKE_SYSTEM} ${CMAKE_SYSTEM_PROCESSOR} ${CMAKE
include(GNUInstallDirs)
include (cmake/find_contrib_lib.cmake)
include (cmake/lib_name.cmake)
find_contrib_lib(double-conversion) # Must be before parquet
include (cmake/find_ssl.cmake)
include (cmake/lib_name.cmake)
include (cmake/find_icu.cmake)
include (cmake/find_boost.cmake)
include (cmake/find_zlib.cmake)

View File

@ -13,7 +13,7 @@ ClickHouse is an open-source column-oriented database management system that all
* You can also [fill this form](https://forms.yandex.com/surveys/meet-yandex-clickhouse-team/) to meet Yandex ClickHouse team in person.
## Upcoming Events
* [ClickHouse Meetup in Mountain View](https://www.eventbrite.com/e/meetup-clickhouse-in-the-south-bay-registration-65935505873) on August 13.
* [ClickHouse Meetup in Moscow](https://yandex.ru/promo/clickhouse/moscow-2019) on September 5.
* [ClickHouse Meetup in Hong Kong](https://www.meetup.com/Hong-Kong-Machine-Learning-Meetup/events/263580542/) on October 17.
* [ClickHouse Meetup in Shenzhen](https://www.huodongxing.com/event/3483759917300) on October 20.
* [ClickHouse Meetup in Shanghai](https://www.huodongxing.com/event/4483760336000) on October 27.

View File

@ -1,4 +1,6 @@
option (ENABLE_FASTOPS "Enable fast vectorized mathematical functions library by Michael Parakhin" ${NOT_UNBUNDLED})
if (NOT ARCH_ARM AND NOT OS_FREEBSD)
option (ENABLE_FASTOPS "Enable fast vectorized mathematical functions library by Mikhail Parakhin" ${NOT_UNBUNDLED})
endif ()
if (ENABLE_FASTOPS)
if(NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/fastops/fastops/fastops.h")
@ -12,4 +14,4 @@ else ()
set(USE_FASTOPS 0)
endif ()
message (STATUS "Using fastops")
message (STATUS "Using fastops=${USE_FASTOPS}: ${FASTOPS_INCLUDE_DIR} : ${FASTOPS_LIBRARY}")

2
contrib/fastops vendored

@ -1 +1 @@
Subproject commit d2c85c5d6549cfd648a7f31ef7b14341881ff8ae
Subproject commit 88752a5e03cf34639a4a37a4b41d8b463fffd2b5

View File

@ -3,9 +3,8 @@ set(LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/fastops)
set(SRCS "")
if(HAVE_AVX)
set (SRCS ${SRCS} ${LIBRARY_DIR}/fastops/avx/ops_avx.cpp ${LIBRARY_DIR}/fastops/core/FastIntrinsics.cpp)
set (SRCS ${SRCS} ${LIBRARY_DIR}/fastops/avx/ops_avx.cpp)
set_source_files_properties(${LIBRARY_DIR}/fastops/avx/ops_avx.cpp PROPERTIES COMPILE_FLAGS "-mavx -DNO_AVX2")
set_source_files_properties(${LIBRARY_DIR}/fastops/core/FastIntrinsics.cpp PROPERTIES COMPILE_FLAGS "-mavx -DNO_AVX2")
endif()
if(HAVE_AVX2)

2
contrib/poco vendored

@ -1 +1 @@
Subproject commit ea2516be366a73a02a82b499ed4a7db1d40037e0
Subproject commit 7a2d304c21549427460428c9039009ef4bbfd899

View File

@ -48,7 +48,7 @@ if (CMAKE_CXX_COMPILER_ID STREQUAL "Clang")
set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wshadow -Wshadow-uncaptured-local -Wextra-semi -Wcomma -Winconsistent-missing-destructor-override -Wunused-exception-parameter -Wcovered-switch-default -Wold-style-cast -Wrange-loop-analysis -Wunused-member-function -Wunreachable-code -Wunreachable-code-return -Wnewline-eof -Wembedded-directive -Wgnu-case-range -Wunused-macros -Wconditional-uninitialized -Wdeprecated -Wundef -Wreserved-id-macro -Wredundant-parens -Wzero-as-null-pointer-constant")
if (WEVERYTHING)
set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Weverything -Wno-c++98-compat -Wno-c++98-compat-pedantic -Wno-padded -Wno-switch-enum -Wno-shadow-field-in-constructor -Wno-deprecated-dynamic-exception-spec -Wno-float-equal -Wno-weak-vtables -Wno-shift-sign-overflow -Wno-sign-conversion -Wno-conversion -Wno-exit-time-destructors -Wno-undefined-func-template -Wno-documentation-unknown-command -Wno-missing-variable-declarations -Wno-unused-template -Wno-global-constructors -Wno-c99-extensions -Wno-missing-prototypes -Wno-weak-template-vtables -Wno-zero-length-array -Wno-gnu-anonymous-struct -Wno-nested-anon-types -Wno-double-promotion -Wno-disabled-macro-expansion -Wno-vla-extension -Wno-vla -Wno-packed")
set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Weverything -Wno-c++98-compat -Wno-c++98-compat-pedantic -Wno-padded -Wno-switch-enum -Wno-deprecated-dynamic-exception-spec -Wno-float-equal -Wno-weak-vtables -Wno-shift-sign-overflow -Wno-sign-conversion -Wno-conversion -Wno-exit-time-destructors -Wno-undefined-func-template -Wno-documentation-unknown-command -Wno-missing-variable-declarations -Wno-unused-template -Wno-global-constructors -Wno-c99-extensions -Wno-missing-prototypes -Wno-weak-template-vtables -Wno-zero-length-array -Wno-gnu-anonymous-struct -Wno-nested-anon-types -Wno-double-promotion -Wno-disabled-macro-expansion -Wno-vla-extension -Wno-vla -Wno-packed")
# TODO Enable conversion, sign-conversion, double-promotion warnings.
endif ()
@ -71,7 +71,9 @@ if (CMAKE_CXX_COMPILER_ID STREQUAL "Clang")
set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-ctad-maybe-unsupported")
endif ()
endif ()
endif ()
elseif (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wshadow")
endif()
if (USE_DEBUG_HELPERS)
set (INCLUDE_DEBUG_HELPERS "-include ${ClickHouse_SOURCE_DIR}/libs/libcommon/include/common/iostream_debug_helpers.h")

View File

@ -711,7 +711,7 @@ private:
if (ignore_error)
{
Tokens tokens(begin, end);
TokenIterator token_iterator(tokens);
IParser::Pos token_iterator(tokens);
while (token_iterator->type != TokenType::Semicolon && token_iterator.isValid())
++token_iterator;
begin = token_iterator->end;

View File

@ -123,7 +123,7 @@ enum class TaskState
struct TaskStateWithOwner
{
TaskStateWithOwner() = default;
TaskStateWithOwner(TaskState state, const String & owner) : state(state), owner(owner) {}
TaskStateWithOwner(TaskState state_, const String & owner_) : state(state_), owner(owner_) {}
TaskState state{TaskState::Unknown};
String owner;
@ -2100,9 +2100,9 @@ void ClusterCopierApp::initialize(Poco::Util::Application & self)
// process_id is '<hostname>#<start_timestamp>_<pid>'
time_t timestamp = Poco::Timestamp().epochTime();
auto pid = Poco::Process::id();
auto curr_pid = Poco::Process::id();
process_id = std::to_string(DateLUT::instance().toNumYYYYMMDDhhmmss(timestamp)) + "_" + std::to_string(pid);
process_id = std::to_string(DateLUT::instance().toNumYYYYMMDDhhmmss(timestamp)) + "_" + std::to_string(curr_pid);
host_id = escapeForFileName(getFQDNOrHostName()) + '#' + process_id;
process_path = Poco::Path(base_dir + "/clickhouse-copier_" + process_id).absolute().toString();
Poco::File(process_path).createDirectories();

View File

@ -176,7 +176,7 @@ private:
const UInt64 seed;
public:
UnsignedIntegerModel(UInt64 seed) : seed(seed) {}
UnsignedIntegerModel(UInt64 seed_) : seed(seed_) {}
void train(const IColumn &) override {}
void finalize() override {}
@ -212,7 +212,7 @@ private:
const UInt64 seed;
public:
SignedIntegerModel(UInt64 seed) : seed(seed) {}
SignedIntegerModel(UInt64 seed_) : seed(seed_) {}
void train(const IColumn &) override {}
void finalize() override {}
@ -256,7 +256,7 @@ private:
Float res_prev_value = 0;
public:
FloatModel(UInt64 seed) : seed(seed) {}
FloatModel(UInt64 seed_) : seed(seed_) {}
void train(const IColumn &) override {}
void finalize() override {}
@ -348,7 +348,7 @@ private:
const UInt64 seed;
public:
FixedStringModel(UInt64 seed) : seed(seed) {}
FixedStringModel(UInt64 seed_) : seed(seed_) {}
void train(const IColumn &) override {}
void finalize() override {}
@ -385,7 +385,7 @@ private:
const DateLUTImpl & date_lut;
public:
DateTimeModel(UInt64 seed) : seed(seed), date_lut(DateLUT::instance()) {}
DateTimeModel(UInt64 seed_) : seed(seed_), date_lut(DateLUT::instance()) {}
void train(const IColumn &) override {}
void finalize() override {}
@ -533,8 +533,8 @@ private:
}
public:
MarkovModel(MarkovModelParameters params)
: params(std::move(params)), code_points(params.order, BEGIN) {}
MarkovModel(MarkovModelParameters params_)
: params(std::move(params_)), code_points(params.order, BEGIN) {}
void consume(const char * data, size_t size)
{
@ -745,7 +745,7 @@ private:
MarkovModel markov_model;
public:
StringModel(UInt64 seed, MarkovModelParameters params) : seed(seed), markov_model(std::move(params)) {}
StringModel(UInt64 seed_, MarkovModelParameters params_) : seed(seed_), markov_model(std::move(params_)) {}
void train(const IColumn & column) override
{
@ -797,7 +797,7 @@ private:
ModelPtr nested_model;
public:
ArrayModel(ModelPtr nested_model) : nested_model(std::move(nested_model)) {}
ArrayModel(ModelPtr nested_model_) : nested_model(std::move(nested_model_)) {}
void train(const IColumn & column) override
{
@ -830,7 +830,7 @@ private:
ModelPtr nested_model;
public:
NullableModel(ModelPtr nested_model) : nested_model(std::move(nested_model)) {}
NullableModel(ModelPtr nested_model_) : nested_model(std::move(nested_model_)) {}
void train(const IColumn & column) override
{

View File

@ -5,7 +5,6 @@
#include <DataStreams/copyData.h>
#include <DataTypes/DataTypeFactory.h>
#include "ODBCBlockInputStream.h"
#include <Formats/BinaryRowInputStream.h>
#include <Formats/FormatFactory.h>
#include <IO/WriteBufferFromHTTPServerResponse.h>
#include <IO/WriteHelpers.h>

View File

@ -18,12 +18,12 @@ namespace ErrorCodes
ODBCBlockInputStream::ODBCBlockInputStream(
Poco::Data::Session && session, const std::string & query_str, const Block & sample_block, const UInt64 max_block_size)
: session{session}
Poco::Data::Session && session_, const std::string & query_str, const Block & sample_block, const UInt64 max_block_size_)
: session{session_}
, statement{(this->session << query_str, Poco::Data::Keywords::now)}
, result{statement}
, iterator{result.begin()}
, max_block_size{max_block_size}
, max_block_size{max_block_size_}
, log(&Logger::get("ODBCBlockInputStream"))
{
if (sample_block.columns() != result.columnCount())
@ -43,46 +43,46 @@ namespace
{
switch (type)
{
case ValueType::UInt8:
case ValueType::vtUInt8:
static_cast<ColumnUInt8 &>(column).insertValue(value.convert<UInt64>());
break;
case ValueType::UInt16:
case ValueType::vtUInt16:
static_cast<ColumnUInt16 &>(column).insertValue(value.convert<UInt64>());
break;
case ValueType::UInt32:
case ValueType::vtUInt32:
static_cast<ColumnUInt32 &>(column).insertValue(value.convert<UInt64>());
break;
case ValueType::UInt64:
case ValueType::vtUInt64:
static_cast<ColumnUInt64 &>(column).insertValue(value.convert<UInt64>());
break;
case ValueType::Int8:
case ValueType::vtInt8:
static_cast<ColumnInt8 &>(column).insertValue(value.convert<Int64>());
break;
case ValueType::Int16:
case ValueType::vtInt16:
static_cast<ColumnInt16 &>(column).insertValue(value.convert<Int64>());
break;
case ValueType::Int32:
case ValueType::vtInt32:
static_cast<ColumnInt32 &>(column).insertValue(value.convert<Int64>());
break;
case ValueType::Int64:
case ValueType::vtInt64:
static_cast<ColumnInt64 &>(column).insertValue(value.convert<Int64>());
break;
case ValueType::Float32:
case ValueType::vtFloat32:
static_cast<ColumnFloat32 &>(column).insertValue(value.convert<Float64>());
break;
case ValueType::Float64:
case ValueType::vtFloat64:
static_cast<ColumnFloat64 &>(column).insertValue(value.convert<Float64>());
break;
case ValueType::String:
case ValueType::vtString:
static_cast<ColumnString &>(column).insert(value.convert<String>());
break;
case ValueType::Date:
case ValueType::vtDate:
static_cast<ColumnUInt16 &>(column).insertValue(UInt16{LocalDate{value.convert<String>()}.getDayNum()});
break;
case ValueType::DateTime:
case ValueType::vtDateTime:
static_cast<ColumnUInt32 &>(column).insertValue(time_t{LocalDateTime{value.convert<String>()}});
break;
case ValueType::UUID:
case ValueType::vtUUID:
static_cast<ColumnUInt128 &>(column).insert(parse<UUID>(value.convert<std::string>()));
break;
}

View File

@ -16,7 +16,7 @@ class ODBCBlockInputStream final : public IBlockInputStream
{
public:
ODBCBlockInputStream(
Poco::Data::Session && session, const std::string & query_str, const Block & sample_block, const UInt64 max_block_size);
Poco::Data::Session && session_, const std::string & query_str, const Block & sample_block, const UInt64 max_block_size_);
String getName() const override { return "ODBC"; }

View File

@ -324,7 +324,6 @@ try
using po::value;
using Strings = DB::Strings;
po::options_description desc("Allowed options");
desc.add_options()
("help", "produce help message")

View File

@ -16,8 +16,8 @@ namespace DB
{
MetricsTransmitter::MetricsTransmitter(
const Poco::Util::AbstractConfiguration & config, const std::string & config_name, const AsynchronousMetrics & async_metrics)
: async_metrics(async_metrics), config_name(config_name)
const Poco::Util::AbstractConfiguration & config, const std::string & config_name_, const AsynchronousMetrics & async_metrics_)
: async_metrics(async_metrics_), config_name(config_name_)
{
interval_seconds = config.getInt(config_name + ".interval", 60);
send_events = config.getBool(config_name + ".events", true);

View File

@ -32,7 +32,7 @@ class AsynchronousMetrics;
class MetricsTransmitter
{
public:
MetricsTransmitter(const Poco::Util::AbstractConfiguration & config, const std::string & config_name, const AsynchronousMetrics & async_metrics);
MetricsTransmitter(const Poco::Util::AbstractConfiguration & config, const std::string & config_name_, const AsynchronousMetrics & async_metrics_);
~MetricsTransmitter();
private:

View File

@ -37,14 +37,14 @@ namespace ErrorCodes
extern const int OPENSSL_ERROR;
}
MySQLHandler::MySQLHandler(IServer & server_, const Poco::Net::StreamSocket & socket_, RSA & public_key, RSA & private_key, bool ssl_enabled, size_t connection_id)
MySQLHandler::MySQLHandler(IServer & server_, const Poco::Net::StreamSocket & socket_, RSA & public_key_, RSA & private_key_, bool ssl_enabled, size_t connection_id_)
: Poco::Net::TCPServerConnection(socket_)
, server(server_)
, log(&Poco::Logger::get("MySQLHandler"))
, connection_context(server.context())
, connection_id(connection_id)
, public_key(public_key)
, private_key(private_key)
, connection_id(connection_id_)
, public_key(public_key_)
, private_key(private_key_)
{
server_capability_flags = CLIENT_PROTOCOL_41 | CLIENT_SECURE_CONNECTION | CLIENT_PLUGIN_AUTH | CLIENT_PLUGIN_AUTH_LENENC_CLIENT_DATA | CLIENT_CONNECT_WITH_DB | CLIENT_DEPRECATE_EOF;
if (ssl_enabled)
@ -77,7 +77,7 @@ void MySQLHandler::run()
if (!connection_context.mysql.max_packet_size)
connection_context.mysql.max_packet_size = MAX_PACKET_LENGTH;
LOG_DEBUG(log, "Capabilities: " << handshake_response.capability_flags
/* LOG_TRACE(log, "Capabilities: " << handshake_response.capability_flags
<< "\nmax_packet_size: "
<< handshake_response.max_packet_size
<< "\ncharacter_set: "
@ -91,7 +91,7 @@ void MySQLHandler::run()
<< "\ndatabase: "
<< handshake_response.database
<< "\nauth_plugin_name: "
<< handshake_response.auth_plugin_name);
<< handshake_response.auth_plugin_name);*/
client_capability_flags = handshake_response.capability_flags;
if (!(client_capability_flags & CLIENT_PROTOCOL_41))
@ -299,7 +299,7 @@ void MySQLHandler::authenticate(const HandshakeResponse & handshake_response, co
}
password.resize(plaintext_size);
for (int i = 0; i < plaintext_size; i++)
for (int i = 0; i < plaintext_size; ++i)
{
password[i] = plaintext[i] ^ static_cast<unsigned char>(scramble[i % scramble.size()]);
}

View File

@ -14,7 +14,7 @@ namespace DB
class MySQLHandler : public Poco::Net::TCPServerConnection
{
public:
MySQLHandler(IServer & server_, const Poco::Net::StreamSocket & socket_, RSA & public_key, RSA & private_key, bool ssl_enabled, size_t connection_id);
MySQLHandler(IServer & server_, const Poco::Net::StreamSocket & socket_, RSA & public_key_, RSA & private_key_, bool ssl_enabled, size_t connection_id_);
void run() final;

View File

@ -29,6 +29,7 @@
#include <Common/getFQDNOrHostName.h>
#include <Common/getMultipleKeysFromConfig.h>
#include <Common/getNumberOfPhysicalCPUCores.h>
#include <Common/getExecutablePath.h>
#include <Common/TaskStatsInfoGetter.h>
#include <Common/ThreadStatus.h>
#include <IO/HTTPCommon.h>
@ -156,19 +157,19 @@ std::string Server::getDefaultCorePath() const
return getCanonicalPath(config().getString("path", DBMS_DEFAULT_PATH)) + "cores";
}
void Server::defineOptions(Poco::Util::OptionSet & _options)
void Server::defineOptions(Poco::Util::OptionSet & options)
{
_options.addOption(
options.addOption(
Poco::Util::Option("help", "h", "show help and exit")
.required(false)
.repeatable(false)
.binding("help"));
_options.addOption(
options.addOption(
Poco::Util::Option("version", "V", "show version and exit")
.required(false)
.repeatable(false)
.binding("version"));
BaseDaemon::defineOptions(_options);
BaseDaemon::defineOptions(options);
}
int Server::main(const std::vector<std::string> & /*args*/)
@ -212,6 +213,10 @@ int Server::main(const std::vector<std::string> & /*args*/)
const auto memory_amount = getMemoryAmount();
#if defined(__linux__)
std::string executable_path = getExecutablePath();
if (executable_path.empty())
executable_path = "/usr/bin/clickhouse"; /// It is used for information messages.
/// After full config loaded
{
if (config().getBool("mlock_executable", false))
@ -228,7 +233,7 @@ int Server::main(const std::vector<std::string> & /*args*/)
{
LOG_INFO(log, "It looks like the process has no CAP_IPC_LOCK capability, binary mlock will be disabled."
" It could happen due to incorrect ClickHouse package installation."
" You could resolve the problem manually with 'sudo setcap cap_ipc_lock=+ep /usr/bin/clickhouse'."
" You could resolve the problem manually with 'sudo setcap cap_ipc_lock=+ep " << executable_path << "'."
" Note that it will not work on 'nosuid' mounted filesystems.");
}
}
@ -273,17 +278,13 @@ int Server::main(const std::vector<std::string> & /*args*/)
* table engines could use Context on destroy.
*/
LOG_INFO(log, "Shutting down storages.");
if (text_log)
text_log->shutdown();
global_context->shutdown();
LOG_DEBUG(log, "Shutted down storages.");
/** Explicitly destroy Context. It is more convenient than in destructor of Server, because logger is still available.
* At this moment, no one could own shared part of Context.
*/
text_log.reset();
global_context.reset();
LOG_DEBUG(log, "Destroyed global context.");
});
@ -413,7 +414,7 @@ int Server::main(const std::vector<std::string> & /*args*/)
main_config_zk_changed_event,
[&](ConfigurationPtr config)
{
setTextLog(text_log);
setTextLog(global_context->getTextLog());
buildLoggers(*config, logger());
global_context->setClustersConfig(config);
global_context->setMacros(std::make_unique<Macros>(*config, "macros"));
@ -500,14 +501,11 @@ int Server::main(const std::vector<std::string> & /*args*/)
LOG_INFO(log, "Loading metadata from " + path);
/// Create text_log instance
text_log = createSystemLog<TextLog>(*global_context, "system", "text_log", global_context->getConfigRef(), "text_log");
try
{
loadMetadataSystem(*global_context);
/// After attaching system databases we can initialize system log.
global_context->initializeSystemLogs(text_log);
global_context->initializeSystemLogs();
/// After the system database is created, attach virtual system tables (in addition to query_log and part_log)
attachSystemTablesServer(*global_context->getDatabase("system"), has_zookeeper);
/// Then, load remaining databases
@ -554,7 +552,7 @@ int Server::main(const std::vector<std::string> & /*args*/)
{
LOG_INFO(log, "It looks like the process has no CAP_NET_ADMIN capability, 'taskstats' performance statistics will be disabled."
" It could happen due to incorrect ClickHouse package installation."
" You could resolve the problem manually with 'sudo setcap cap_net_admin=+ep /usr/bin/clickhouse'."
" You could resolve the problem manually with 'sudo setcap cap_net_admin=+ep " << executable_path << "'."
" Note that it will not work on 'nosuid' mounted filesystems."
" It also doesn't work if you run clickhouse-server inside network namespace as it happens in some containers.");
}
@ -563,7 +561,7 @@ int Server::main(const std::vector<std::string> & /*args*/)
{
LOG_INFO(log, "It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_nice' will have no effect."
" It could happen due to incorrect ClickHouse package installation."
" You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'."
" You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep " << executable_path << "'."
" Note that it will not work on 'nosuid' mounted filesystems.");
}
#else
@ -703,6 +701,7 @@ int Server::main(const std::vector<std::string> & /*args*/)
LOG_INFO(log, "Listening https://" + address.toString());
#else
UNUSED(port);
throw Exception{"HTTPS protocol is disabled because Poco library was built without NetSSL support.",
ErrorCodes::SUPPORT_IS_DISABLED};
#endif
@ -739,6 +738,7 @@ int Server::main(const std::vector<std::string> & /*args*/)
new Poco::Net::TCPServerParams));
LOG_INFO(log, "Listening for connections with secure native protocol (tcp_secure): " + address.toString());
#else
UNUSED(port);
throw Exception{"SSL support for TCP protocol is disabled because Poco library was built without NetSSL support.",
ErrorCodes::SUPPORT_IS_DISABLED};
#endif
@ -775,6 +775,7 @@ int Server::main(const std::vector<std::string> & /*args*/)
LOG_INFO(log, "Listening for secure replica communication (interserver) https://" + address.toString());
#else
UNUSED(port);
throw Exception{"SSL support for TCP protocol is disabled because Poco library was built without NetSSL support.",
ErrorCodes::SUPPORT_IS_DISABLED};
#endif
@ -795,6 +796,7 @@ int Server::main(const std::vector<std::string> & /*args*/)
LOG_INFO(log, "Listening for MySQL compatibility protocol: " + address.toString());
#else
UNUSED(port);
throw Exception{"SSL support for MySQL protocol is disabled because Poco library was built without NetSSL support.",
ErrorCodes::SUPPORT_IS_DISABLED};
#endif

View File

@ -57,7 +57,6 @@ protected:
private:
std::unique_ptr<Context> global_context;
std::shared_ptr<TextLog> text_log;
};
}

View File

@ -35,8 +35,8 @@ private:
const DataTypePtr & type_val;
public:
AggregateFunctionArgMinMax(const DataTypePtr & type_res, const DataTypePtr & type_val)
: IAggregateFunctionDataHelper<Data, AggregateFunctionArgMinMax<Data, AllocatesMemoryInArena>>({type_res, type_val}, {}),
AggregateFunctionArgMinMax(const DataTypePtr & type_res_, const DataTypePtr & type_val_)
: IAggregateFunctionDataHelper<Data, AggregateFunctionArgMinMax<Data, AllocatesMemoryInArena>>({type_res_, type_val_}, {}),
type_res(this->argument_types[0]), type_val(this->argument_types[1])
{
if (!type_val->isComparable())

View File

@ -62,12 +62,6 @@ public:
static_cast<ColumnUInt64 &>(to).getData().push_back(data(place).count);
}
/// May be used for optimization.
void addDelta(AggregateDataPtr place, UInt64 x) const
{
data(place).count += x;
}
const char * getHeaderFilePath() const override { return __FILE__; }
};

View File

@ -253,8 +253,8 @@ class GroupArrayGeneralListImpl final
UInt64 max_elems;
public:
GroupArrayGeneralListImpl(const DataTypePtr & data_type, UInt64 max_elems_ = std::numeric_limits<UInt64>::max())
: IAggregateFunctionDataHelper<GroupArrayGeneralListData<Node>, GroupArrayGeneralListImpl<Node, limit_num_elems>>({data_type}, {})
GroupArrayGeneralListImpl(const DataTypePtr & data_type_, UInt64 max_elems_ = std::numeric_limits<UInt64>::max())
: IAggregateFunctionDataHelper<GroupArrayGeneralListData<Node>, GroupArrayGeneralListImpl<Node, limit_num_elems>>({data_type_}, {})
, data_type(this->argument_types[0]), max_elems(max_elems_) {}
String getName() const override { return "groupArray"; }

View File

@ -1,5 +1,6 @@
#pragma once
#include <algorithm>
#include <roaring/roaring.h>
#include <IO/ReadHelpers.h>
#include <IO/WriteHelpers.h>
@ -454,6 +455,44 @@ public:
return count;
}
/**
* Return a new set containing the values in the half-open range [range_start, range_end).
*/
UInt64 rb_range(UInt32 range_start, UInt32 range_end, RoaringBitmapWithSmallSet& r1) const
{
UInt64 count = 0;
if (range_start >= range_end)
return count;
if (isSmall())
{
std::vector<T> ans;
for (const auto & x : small)
{
T val = x.getValue();
if (static_cast<UInt32>(val) >= range_start && static_cast<UInt32>(val) < range_end)
{
r1.add(val);
count++;
}
}
}
else
{
roaring_uint32_iterator_t iterator;
roaring_init_iterator(rb, &iterator);
roaring_move_uint32_iterator_equalorlarger(&iterator, range_start);
while (iterator.has_value)
{
if (static_cast<UInt32>(iterator.current_value) >= range_end)
break;
r1.add(iterator.current_value);
roaring_advance_uint32_iterator(&iterator);
count++;
}
}
return count;
}
private:
/// To read and write the DB Buffer directly, migrate code from CRoaring
void db_roaring_bitmap_add_many(DB::ReadBuffer & dbBuf, roaring_bitmap_t * r, size_t n_args)

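A short usage sketch for the new `rb_range` helper; the template arguments and the `add()` call are assumed from the rest of this header:

```cpp
RoaringBitmapWithSmallSet<UInt32, 32> src;
RoaringBitmapWithSmallSet<UInt32, 32> subset;
for (UInt32 v : {1u, 5u, 10u, 42u})
    src.add(v);
/// Copies every stored value in [5, 42) into `subset` and returns the count.
UInt64 copied = src.rb_range(5, 42, subset); /// copied == 2 (values 5 and 10)
```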
View File

@ -164,8 +164,8 @@ class AggregateFunctionGroupUniqArrayGeneric
}
public:
AggregateFunctionGroupUniqArrayGeneric(const DataTypePtr & input_data_type, UInt64 max_elems_ = std::numeric_limits<UInt64>::max())
: IAggregateFunctionDataHelper<AggregateFunctionGroupUniqArrayGenericData, AggregateFunctionGroupUniqArrayGeneric<is_plain_column, Tlimit_num_elem>>({input_data_type}, {})
AggregateFunctionGroupUniqArrayGeneric(const DataTypePtr & input_data_type_, UInt64 max_elems_ = std::numeric_limits<UInt64>::max())
: IAggregateFunctionDataHelper<AggregateFunctionGroupUniqArrayGenericData, AggregateFunctionGroupUniqArrayGeneric<is_plain_column, Tlimit_num_elem>>({input_data_type_}, {})
, input_data_type(this->argument_types[0])
, max_elems(max_elems_) {}

View File

@ -304,9 +304,9 @@ private:
const UInt32 max_bins;
public:
AggregateFunctionHistogram(UInt32 max_bins, const DataTypes & arguments, const Array & params)
AggregateFunctionHistogram(UInt32 max_bins_, const DataTypes & arguments, const Array & params)
: IAggregateFunctionDataHelper<AggregateFunctionHistogramData, AggregateFunctionHistogram<T>>(arguments, params)
, max_bins(max_bins)
, max_bins(max_bins_)
{
}

View File

@ -104,21 +104,21 @@ void registerAggregateFunctionMLMethod(AggregateFunctionFactory & factory)
}
LinearModelData::LinearModelData(
Float64 learning_rate,
Float64 l2_reg_coef,
UInt64 param_num,
UInt64 batch_capacity,
std::shared_ptr<DB::IGradientComputer> gradient_computer,
std::shared_ptr<DB::IWeightsUpdater> weights_updater)
: learning_rate(learning_rate)
, l2_reg_coef(l2_reg_coef)
, batch_capacity(batch_capacity)
Float64 learning_rate_,
Float64 l2_reg_coef_,
UInt64 param_num_,
UInt64 batch_capacity_,
std::shared_ptr<DB::IGradientComputer> gradient_computer_,
std::shared_ptr<DB::IWeightsUpdater> weights_updater_)
: learning_rate(learning_rate_)
, l2_reg_coef(l2_reg_coef_)
, batch_capacity(batch_capacity_)
, batch_size(0)
, gradient_computer(std::move(gradient_computer))
, weights_updater(std::move(weights_updater))
, gradient_computer(std::move(gradient_computer_))
, weights_updater(std::move(weights_updater_))
{
weights.resize(param_num, Float64{0.0});
gradient_batch.resize(param_num + 1, Float64{0.0});
weights.resize(param_num_, Float64{0.0});
gradient_batch.resize(param_num_ + 1, Float64{0.0});
}
void LinearModelData::update_state()

View File

@ -248,12 +248,12 @@ public:
LinearModelData() {}
LinearModelData(
Float64 learning_rate,
Float64 l2_reg_coef,
UInt64 param_num,
UInt64 batch_capacity,
std::shared_ptr<IGradientComputer> gradient_computer,
std::shared_ptr<IWeightsUpdater> weights_updater);
Float64 learning_rate_,
Float64 l2_reg_coef_,
UInt64 param_num_,
UInt64 batch_capacity_,
std::shared_ptr<IGradientComputer> gradient_computer_,
std::shared_ptr<IWeightsUpdater> weights_updater_);
void add(const IColumn ** columns, size_t row_num);
@ -304,21 +304,21 @@ public:
String getName() const override { return Name::name; }
explicit AggregateFunctionMLMethod(
UInt32 param_num,
std::unique_ptr<IGradientComputer> gradient_computer,
std::string weights_updater_name,
Float64 learning_rate,
Float64 l2_reg_coef,
UInt64 batch_size,
UInt32 param_num_,
std::unique_ptr<IGradientComputer> gradient_computer_,
std::string weights_updater_name_,
Float64 learning_rate_,
Float64 l2_reg_coef_,
UInt64 batch_size_,
const DataTypes & arguments_types,
const Array & params)
: IAggregateFunctionDataHelper<Data, AggregateFunctionMLMethod<Data, Name>>(arguments_types, params)
, param_num(param_num)
, learning_rate(learning_rate)
, l2_reg_coef(l2_reg_coef)
, batch_size(batch_size)
, gradient_computer(std::move(gradient_computer))
, weights_updater_name(std::move(weights_updater_name))
, param_num(param_num_)
, learning_rate(learning_rate_)
, l2_reg_coef(l2_reg_coef_)
, batch_size(batch_size_)
, gradient_computer(std::move(gradient_computer_))
, weights_updater_name(std::move(weights_updater_name_))
{
}

View File

@ -679,8 +679,8 @@ private:
DataTypePtr & type;
public:
AggregateFunctionsSingleValue(const DataTypePtr & type)
: IAggregateFunctionDataHelper<Data, AggregateFunctionsSingleValue<Data, AllocatesMemoryInArena>>({type}, {})
AggregateFunctionsSingleValue(const DataTypePtr & type_)
: IAggregateFunctionDataHelper<Data, AggregateFunctionsSingleValue<Data, AllocatesMemoryInArena>>({type_}, {})
, type(this->argument_types[0])
{
if (StringRef(Data::name()) == StringRef("min")

View File

@ -76,8 +76,8 @@ private:
DataTypePtr & argument_type;
public:
AggregateFunctionQuantile(const DataTypePtr & argument_type, const Array & params)
: IAggregateFunctionDataHelper<Data, AggregateFunctionQuantile<Value, Data, Name, has_second_arg, FloatReturnType, returns_many>>({argument_type}, params)
AggregateFunctionQuantile(const DataTypePtr & argument_type_, const Array & params)
: IAggregateFunctionDataHelper<Data, AggregateFunctionQuantile<Value, Data, Name, has_second_arg, FloatReturnType, returns_many>>({argument_type_}, params)
, levels(params, returns_many), level(levels.levels[0]), argument_type(this->argument_types[0])
{
if (!returns_many && levels.size() > 1)

View File

@ -33,18 +33,18 @@ private:
public:
AggregateFunctionResample(
AggregateFunctionPtr nested_function,
Key begin,
Key end,
size_t step,
AggregateFunctionPtr nested_function_,
Key begin_,
Key end_,
size_t step_,
const DataTypes & arguments,
const Array & params)
: IAggregateFunctionHelper<AggregateFunctionResample<Key>>{arguments, params}
, nested_function{nested_function}
, nested_function{nested_function_}
, last_col{arguments.size() - 1}
, begin{begin}
, end{end}
, step{step}
, begin{begin_}
, end{end_}
, step{step_}
, total{0}
, aod{nested_function->alignOfData()}
, sod{(nested_function->sizeOfData() + aod - 1) / aod * aod}

View File

@ -142,9 +142,9 @@ template <typename T, typename Data, typename Derived>
class AggregateFunctionSequenceBase : public IAggregateFunctionDataHelper<Data, Derived>
{
public:
AggregateFunctionSequenceBase(const DataTypes & arguments, const Array & params, const String & pattern)
AggregateFunctionSequenceBase(const DataTypes & arguments, const Array & params, const String & pattern_)
: IAggregateFunctionDataHelper<Data, Derived>(arguments, params)
, pattern(pattern)
, pattern(pattern_)
{
arg_count = arguments.size();
parsePattern();
@ -199,7 +199,7 @@ private:
std::uint64_t extra;
PatternAction() = default;
PatternAction(const PatternActionType type, const std::uint64_t extra = 0) : type{type}, extra{extra} {}
PatternAction(const PatternActionType type_, const std::uint64_t extra_ = 0) : type{type_}, extra{extra_} {}
};
using PatternActions = PODArrayWithStackMemory<PatternAction, 64>;
@ -520,8 +520,8 @@ private:
struct DFAState
{
DFAState(bool has_kleene = false)
: has_kleene{has_kleene}, event{0}, transition{DFATransition::None}
DFAState(bool has_kleene_ = false)
: has_kleene{has_kleene_}, event{0}, transition{DFATransition::None}
{}
/// .-------.
@ -554,8 +554,8 @@ template <typename T, typename Data>
class AggregateFunctionSequenceMatch final : public AggregateFunctionSequenceBase<T, Data, AggregateFunctionSequenceMatch<T, Data>>
{
public:
AggregateFunctionSequenceMatch(const DataTypes & arguments, const Array & params, const String & pattern)
: AggregateFunctionSequenceBase<T, Data, AggregateFunctionSequenceMatch<T, Data>>(arguments, params, pattern) {}
AggregateFunctionSequenceMatch(const DataTypes & arguments, const Array & params, const String & pattern_)
: AggregateFunctionSequenceBase<T, Data, AggregateFunctionSequenceMatch<T, Data>>(arguments, params, pattern_) {}
using AggregateFunctionSequenceBase<T, Data, AggregateFunctionSequenceMatch<T, Data>>::AggregateFunctionSequenceBase;
@ -582,8 +582,8 @@ template <typename T, typename Data>
class AggregateFunctionSequenceCount final : public AggregateFunctionSequenceBase<T, Data, AggregateFunctionSequenceCount<T, Data>>
{
public:
AggregateFunctionSequenceCount(const DataTypes & arguments, const Array & params, const String & pattern)
: AggregateFunctionSequenceBase<T, Data, AggregateFunctionSequenceCount<T, Data>>(arguments, params, pattern) {}
AggregateFunctionSequenceCount(const DataTypes & arguments, const Array & params, const String & pattern_)
: AggregateFunctionSequenceBase<T, Data, AggregateFunctionSequenceCount<T, Data>>(arguments, params, pattern_) {}
using AggregateFunctionSequenceBase<T, Data, AggregateFunctionSequenceCount<T, Data>>::AggregateFunctionSequenceBase;

View File

@ -23,9 +23,9 @@ private:
Array params;
public:
AggregateFunctionState(AggregateFunctionPtr nested, const DataTypes & arguments, const Array & params)
: IAggregateFunctionHelper<AggregateFunctionState>(arguments, params)
, nested_func(nested), arguments(arguments), params(params) {}
AggregateFunctionState(AggregateFunctionPtr nested_, const DataTypes & arguments_, const Array & params_)
: IAggregateFunctionHelper<AggregateFunctionState>(arguments_, params_)
, nested_func(nested_), arguments(arguments_), params(params_) {}
String getName() const override
{

View File

@ -62,10 +62,10 @@ private:
public:
AggregateFunctionSumMapBase(
const DataTypePtr & keys_type, const DataTypes & values_types,
const DataTypePtr & keys_type_, const DataTypes & values_types_,
const DataTypes & argument_types_, const Array & params_)
: IAggregateFunctionDataHelper<AggregateFunctionSumMapData<NearestFieldType<T>>, Derived>(argument_types_, params_)
, keys_type(keys_type), values_types(values_types) {}
, keys_type(keys_type_), values_types(values_types_) {}
String getName() const override { return "sumMap"; }
@ -295,9 +295,9 @@ private:
public:
AggregateFunctionSumMapFiltered(
const DataTypePtr & keys_type, const DataTypes & values_types, const Array & keys_to_keep_,
const DataTypePtr & keys_type_, const DataTypes & values_types_, const Array & keys_to_keep_,
const DataTypes & argument_types_, const Array & params_)
: Base{keys_type, values_types, argument_types_, params_}
: Base{keys_type_, values_types_, argument_types_, params_}
{
keys_to_keep.reserve(keys_to_keep_.size());
for (const Field & f : keys_to_keep_)

View File

@ -44,9 +44,9 @@ protected:
UInt64 reserved;
public:
AggregateFunctionTopK(UInt64 threshold, UInt64 load_factor, const DataTypes & argument_types_, const Array & params)
AggregateFunctionTopK(UInt64 threshold_, UInt64 load_factor, const DataTypes & argument_types_, const Array & params)
: IAggregateFunctionDataHelper<AggregateFunctionTopKData<T>, AggregateFunctionTopK<T, is_weighted>>(argument_types_, params)
, threshold(threshold), reserved(load_factor * threshold) {}
, threshold(threshold_), reserved(load_factor * threshold) {}
String getName() const override { return is_weighted ? "topKWeighted" : "topK"; }
@ -139,9 +139,9 @@ private:
public:
AggregateFunctionTopKGeneric(
UInt64 threshold, UInt64 load_factor, const DataTypePtr & input_data_type, const Array & params)
: IAggregateFunctionDataHelper<AggregateFunctionTopKGenericData, AggregateFunctionTopKGeneric<is_plain_column, is_weighted>>({input_data_type}, params)
, threshold(threshold), reserved(load_factor * threshold), input_data_type(this->argument_types[0]) {}
UInt64 threshold_, UInt64 load_factor, const DataTypePtr & input_data_type_, const Array & params)
: IAggregateFunctionDataHelper<AggregateFunctionTopKGenericData, AggregateFunctionTopKGeneric<is_plain_column, is_weighted>>({input_data_type_}, params)
, threshold(threshold_), reserved(load_factor * threshold), input_data_type(this->argument_types[0]) {}
String getName() const override { return is_weighted ? "topKWeighted" : "topK"; }

View File

@ -136,9 +136,9 @@ private:
UInt8 threshold;
public:
AggregateFunctionUniqUpTo(UInt8 threshold, const DataTypes & argument_types_, const Array & params_)
AggregateFunctionUniqUpTo(UInt8 threshold_, const DataTypes & argument_types_, const Array & params_)
: IAggregateFunctionDataHelper<AggregateFunctionUniqUpToData<T>, AggregateFunctionUniqUpTo<T>>(argument_types_, params_)
, threshold(threshold)
, threshold(threshold_)
{
}
@ -196,9 +196,9 @@ private:
UInt8 threshold;
public:
AggregateFunctionUniqUpToVariadic(const DataTypes & arguments, const Array & params, UInt8 threshold)
AggregateFunctionUniqUpToVariadic(const DataTypes & arguments, const Array & params, UInt8 threshold_)
: IAggregateFunctionDataHelper<AggregateFunctionUniqUpToData<UInt64>, AggregateFunctionUniqUpToVariadic<is_exact, argument_is_tuple>>(arguments, params)
, threshold(threshold)
, threshold(threshold_)
{
if (argument_is_tuple)
num_args = typeid_cast<const DataTypeTuple &>(*arguments[0]).getElements().size();

View File

@ -128,6 +128,15 @@ public:
using AddFunc = void (*)(const IAggregateFunction *, AggregateDataPtr, const IColumn **, size_t, Arena *);
virtual AddFunc getAddressOfAddFunction() const = 0;
/** Contains a loop with calls to the "add" function. You can collect arguments into the "places" array
* and do a single call to "addBatch" for devirtualization and inlining.
*/
virtual void addBatch(size_t batch_size, AggregateDataPtr * places, size_t place_offset, const IColumn ** columns, Arena * arena) const = 0;
/** The same for single place.
*/
virtual void addBatchSinglePlace(size_t batch_size, AggregateDataPtr place, const IColumn ** columns, Arena * arena) const = 0;
/** This is used for runtime code generation to determine, which header files to include in generated source.
* Always implement it as
* const char * getHeaderFilePath() const override { return __FILE__; }
@ -156,7 +165,20 @@ private:
public:
IAggregateFunctionHelper(const DataTypes & argument_types_, const Array & parameters_)
: IAggregateFunction(argument_types_, parameters_) {}
AddFunc getAddressOfAddFunction() const override { return &addFree; }
void addBatch(size_t batch_size, AggregateDataPtr * places, size_t place_offset, const IColumn ** columns, Arena * arena) const override
{
for (size_t i = 0; i < batch_size; ++i)
static_cast<const Derived *>(this)->add(places[i] + place_offset, columns, i, arena);
}
void addBatchSinglePlace(size_t batch_size, AggregateDataPtr place, const IColumn ** columns, Arena * arena) const override
{
for (size_t i = 0; i < batch_size; ++i)
static_cast<const Derived *>(this)->add(place, columns, i, arena);
}
};

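The intended call pattern for the new batch interface is one virtual call per block instead of one per row, so the `add()` calls inside the helper's loop can be inlined. A sketch (the wrapper function is illustrative; the types come from this header):

```cpp
void aggregateBlock(const IAggregateFunction & func,
                    AggregateDataPtr * places, size_t place_offset,
                    const IColumn ** columns, size_t rows, Arena * arena)
{
    /// Single virtual dispatch for the whole block; the concrete add()
    /// is devirtualized inside the helper's loop.
    func.addBatch(rows, places, place_offset, columns, arena);
}
```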
View File

@ -50,9 +50,9 @@ class QuantileTDigest
Centroid() = default;
explicit Centroid(Value mean, Count count)
: mean(mean)
, count(count)
explicit Centroid(Value mean_, Count count_)
: mean(mean_)
, count(count_)
{}
Centroid & operator+=(const Centroid & other)

View File

@ -53,8 +53,8 @@ class ReservoirSamplerDeterministic
}
public:
ReservoirSamplerDeterministic(const size_t sample_count = DEFAULT_SAMPLE_COUNT)
: sample_count{sample_count}
ReservoirSamplerDeterministic(const size_t sample_count_ = DEFAULT_SAMPLE_COUNT)
: sample_count{sample_count_}
{
}

View File

@ -13,8 +13,8 @@ namespace ErrorCodes
extern const int SIZES_OF_COLUMNS_DOESNT_MATCH;
}
ColumnConst::ColumnConst(const ColumnPtr & data_, size_t s)
: data(data_), s(s)
ColumnConst::ColumnConst(const ColumnPtr & data_, size_t s_)
: data(data_), s(s_)
{
/// Squash Const of Const.
while (const ColumnConst * const_data = typeid_cast<const ColumnConst *>(data.get()))

View File

@ -26,7 +26,7 @@ private:
WrappedPtr data;
size_t s;
ColumnConst(const ColumnPtr & data, size_t s);
ColumnConst(const ColumnPtr & data, size_t s_);
ColumnConst(const ColumnConst & src) = default;
public:

View File

@ -26,10 +26,12 @@ namespace ErrorCodes
template <typename T>
int ColumnDecimal<T>::compareAt(size_t n, size_t m, const IColumn & rhs_, int) const
{
auto other = static_cast<const Self &>(rhs_);
auto & other = static_cast<const Self &>(rhs_);
const T & a = data[n];
const T & b = other.data[m];
if (scale == other.scale)
return a > b ? 1 : (a < b ? -1 : 0);
return decimalLess<T>(b, a, other.scale, scale) ? 1 : (decimalLess<T>(a, b, scale, other.scale) ? -1 : 0);
}

View File

@ -13,8 +13,8 @@ namespace ErrorCodes
extern const int LOGICAL_ERROR;
}
ColumnFunction::ColumnFunction(size_t size, FunctionBasePtr function, const ColumnsWithTypeAndName & columns_to_capture)
: size_(size), function(function)
ColumnFunction::ColumnFunction(size_t size, FunctionBasePtr function_, const ColumnsWithTypeAndName & columns_to_capture)
: size_(size), function(function_)
{
appendArguments(columns_to_capture);
}

View File

@ -20,7 +20,7 @@ class ColumnFunction final : public COWHelper<IColumn, ColumnFunction>
private:
friend class COWHelper<IColumn, ColumnFunction>;
ColumnFunction(size_t size, FunctionBasePtr function, const ColumnsWithTypeAndName & columns_to_capture);
ColumnFunction(size_t size, FunctionBasePtr function_, const ColumnsWithTypeAndName & columns_to_capture);
public:
const char * getFamilyName() const override { return "Function"; }

View File

@ -360,12 +360,12 @@ bool ColumnLowCardinality::containsNull() const
ColumnLowCardinality::Index::Index() : positions(ColumnUInt8::create()), size_of_type(sizeof(UInt8)) {}
ColumnLowCardinality::Index::Index(MutableColumnPtr && positions) : positions(std::move(positions))
ColumnLowCardinality::Index::Index(MutableColumnPtr && positions_) : positions(std::move(positions_))
{
updateSizeOfType();
}
ColumnLowCardinality::Index::Index(ColumnPtr positions) : positions(std::move(positions))
ColumnLowCardinality::Index::Index(ColumnPtr positions_) : positions(std::move(positions_))
{
updateSizeOfType();
}

View File

@ -201,8 +201,8 @@ public:
public:
Index();
Index(const Index & other) = default;
explicit Index(MutableColumnPtr && positions);
explicit Index(ColumnPtr positions);
explicit Index(MutableColumnPtr && positions_);
explicit Index(ColumnPtr positions_);
const ColumnPtr & getPositions() const { return positions; }
WrappedPtr & getPositionsPtr() { return positions; }

View File

@ -257,8 +257,8 @@ struct ColumnTuple::Less
TupleColumns columns;
int nan_direction_hint;
Less(const TupleColumns & columns, int nan_direction_hint_)
: columns(columns), nan_direction_hint(nan_direction_hint_)
Less(const TupleColumns & columns_, int nan_direction_hint_)
: columns(columns_), nan_direction_hint(nan_direction_hint_)
{
}

View File

@ -186,10 +186,10 @@ ColumnUnique<ColumnType>::ColumnUnique(const IDataType & type)
}
template <typename ColumnType>
ColumnUnique<ColumnType>::ColumnUnique(MutableColumnPtr && holder, bool is_nullable)
ColumnUnique<ColumnType>::ColumnUnique(MutableColumnPtr && holder, bool is_nullable_)
: column_holder(std::move(holder))
, is_nullable(is_nullable)
, index(numSpecialValues(is_nullable), 0)
, is_nullable(is_nullable_)
, index(numSpecialValues(is_nullable_), 0)
{
if (column_holder->size() < numSpecialValues())
throw Exception("Too small holder column for ColumnUnique.", ErrorCodes::ILLEGAL_COLUMN);

View File

@ -235,8 +235,8 @@ template <typename IndexType, typename ColumnType>
class ReverseIndex
{
public:
explicit ReverseIndex(UInt64 num_prefix_rows_to_skip, UInt64 base_index)
: num_prefix_rows_to_skip(num_prefix_rows_to_skip), base_index(base_index), saved_hash_ptr(nullptr) {}
explicit ReverseIndex(UInt64 num_prefix_rows_to_skip_, UInt64 base_index_)
: num_prefix_rows_to_skip(num_prefix_rows_to_skip_), base_index(base_index_), saved_hash_ptr(nullptr) {}
void setColumn(ColumnType * column_);

View File

@ -265,13 +265,12 @@ using Allocator = AllocatorWithHint<clear_memory, AllocatorHints::DefaultHint, M
#endif
/** Allocator with optimization to place small memory ranges in automatic memory.
* TODO alignment
*/
template <typename Base, size_t N = 64>
template <typename Base, size_t N = 64, size_t Alignment = 1>
class AllocatorWithStackMemory : private Base
{
private:
char stack_memory[N];
alignas(Alignment) char stack_memory[N];
public:
/// Do not use boost::noncopyable to avoid the warning about direct base
@ -291,7 +290,7 @@ public:
return stack_memory;
}
return Base::alloc(size);
return Base::alloc(size, Alignment);
}
void free(void * buf, size_t size)
@ -308,10 +307,10 @@ public:
/// Already was big enough to not fit in stack_memory.
if (old_size > N)
return Base::realloc(buf, old_size, new_size);
return Base::realloc(buf, old_size, new_size, Alignment);
/// Was in stack memory, but now will not fit there.
void * new_buf = Base::alloc(new_size);
void * new_buf = Base::alloc(new_size, Alignment);
memcpy(new_buf, buf, old_size);
return new_buf;
}

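A minimal sketch of the new `Alignment` parameter, assuming the allocator types from this header; sizes are illustrative:

```cpp
using SmallAlloc = AllocatorWithStackMemory<Allocator<false>, 64, alignof(UInt64)>;

SmallAlloc a;
void * p = a.alloc(48);   /// fits into the alignas(8) stack buffer
void * q = a.alloc(4096); /// falls through to Base::alloc(size, Alignment)
a.free(q, 4096);
a.free(p, 48);
```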
View File

@ -243,11 +243,11 @@ struct HashMethodSingleLowCardinalityColumn : public SingleColumnMethod
throw Exception("Cache wasn't created for HashMethodSingleLowCardinalityColumn",
ErrorCodes::LOGICAL_ERROR);
LowCardinalityDictionaryCache * cache;
LowCardinalityDictionaryCache * lcd_cache;
if constexpr (use_cache)
{
cache = typeid_cast<LowCardinalityDictionaryCache *>(context.get());
if (!cache)
lcd_cache = typeid_cast<LowCardinalityDictionaryCache *>(context.get());
if (!lcd_cache)
{
const auto & cached_val = *context;
throw Exception("Invalid type for HashMethodSingleLowCardinalityColumn cache: "
@ -267,7 +267,7 @@ struct HashMethodSingleLowCardinalityColumn : public SingleColumnMethod
{
dictionary_key = {column->getDictionary().getHash(), dict->size()};
if constexpr (use_cache)
cached_values = cache->get(dictionary_key);
cached_values = lcd_cache->get(dictionary_key);
}
if (cached_values)
@ -288,7 +288,7 @@ struct HashMethodSingleLowCardinalityColumn : public SingleColumnMethod
cached_values->saved_hash = saved_hash;
cached_values->dictionary_holder = dictionary_holder;
cache->set(dictionary_key, cached_values);
lcd_cache->set(dictionary_key, cached_values);
}
}
}
@ -470,8 +470,8 @@ struct HashMethodKeysFixed
Sizes key_sizes;
size_t keys_size;
HashMethodKeysFixed(const ColumnRawPtrs & key_columns, const Sizes & key_sizes, const HashMethodContextPtr &)
: Base(key_columns), key_sizes(std::move(key_sizes)), keys_size(key_columns.size())
HashMethodKeysFixed(const ColumnRawPtrs & key_columns, const Sizes & key_sizes_, const HashMethodContextPtr &)
: Base(key_columns), key_sizes(std::move(key_sizes_)), keys_size(key_columns.size())
{
if constexpr (has_low_cardinality)
{
@ -525,8 +525,8 @@ struct HashMethodSerialized
ColumnRawPtrs key_columns;
size_t keys_size;
HashMethodSerialized(const ColumnRawPtrs & key_columns, const Sizes & /*key_sizes*/, const HashMethodContextPtr &)
: key_columns(key_columns), keys_size(key_columns.size()) {}
HashMethodSerialized(const ColumnRawPtrs & key_columns_, const Sizes & /*key_sizes*/, const HashMethodContextPtr &)
: key_columns(key_columns_), keys_size(key_columns_.size()) {}
protected:
friend class columns_hashing_impl::HashMethodBase<Self, Value, Mapped, false>;
@ -550,8 +550,8 @@ struct HashMethodHashed
ColumnRawPtrs key_columns;
HashMethodHashed(ColumnRawPtrs key_columns, const Sizes &, const HashMethodContextPtr &)
: key_columns(std::move(key_columns)) {}
HashMethodHashed(ColumnRawPtrs key_columns_, const Sizes &, const HashMethodContextPtr &)
: key_columns(std::move(key_columns_)) {}
ALWAYS_INLINE Key getKey(size_t row, Arena &) const { return hash128(row, key_columns.size(), key_columns); }

View File

@ -56,8 +56,8 @@ class EmplaceResultImpl
bool inserted;
public:
EmplaceResultImpl(Mapped & value, Mapped & cached_value, bool inserted)
: value(value), cached_value(cached_value), inserted(inserted) {}
EmplaceResultImpl(Mapped & value_, Mapped & cached_value_, bool inserted_)
: value(value_), cached_value(cached_value_), inserted(inserted_) {}
bool isInserted() const { return inserted; }
auto & getMapped() const { return value; }
@ -75,7 +75,7 @@ class EmplaceResultImpl<void>
bool inserted;
public:
explicit EmplaceResultImpl(bool inserted) : inserted(inserted) {}
explicit EmplaceResultImpl(bool inserted_) : inserted(inserted_) {}
bool isInserted() const { return inserted; }
};
@ -86,7 +86,7 @@ class FindResultImpl
bool found;
public:
FindResultImpl(Mapped * value, bool found) : value(value), found(found) {}
FindResultImpl(Mapped * value_, bool found_) : value(value_), found(found_) {}
bool isFound() const { return found; }
Mapped & getMapped() const { return *value; }
};
@ -97,7 +97,7 @@ class FindResultImpl<void>
bool found;
public:
explicit FindResultImpl(bool found) : found(found) {}
explicit FindResultImpl(bool found_) : found(found_) {}
bool isFound() const { return found; }
};

View File

@ -67,13 +67,13 @@ public:
int fd = ::open(path.c_str(), O_RDWR | O_CREAT, 0666);
if (-1 == fd)
DB::throwFromErrno("Cannot open file " + path, DB::ErrorCodes::CANNOT_OPEN_FILE);
DB::throwFromErrnoWithPath("Cannot open file " + path, path, DB::ErrorCodes::CANNOT_OPEN_FILE);
try
{
int flock_ret = flock(fd, LOCK_EX);
if (-1 == flock_ret)
DB::throwFromErrno("Cannot lock file " + path, DB::ErrorCodes::CANNOT_OPEN_FILE);
DB::throwFromErrnoWithPath("Cannot lock file " + path, path, DB::ErrorCodes::CANNOT_OPEN_FILE);
if (!file_doesnt_exists)
{
@ -141,7 +141,7 @@ public:
int fd = ::open(path.c_str(), O_RDWR | O_CREAT, 0666);
if (-1 == fd)
DB::throwFromErrno("Cannot open file " + path, DB::ErrorCodes::CANNOT_OPEN_FILE);
DB::throwFromErrnoWithPath("Cannot open file " + path, path, DB::ErrorCodes::CANNOT_OPEN_FILE);
try
{

View File

@ -59,15 +59,15 @@ namespace CurrentMetrics
std::atomic<Value> * what;
Value amount;
Increment(std::atomic<Value> * what, Value amount)
: what(what), amount(amount)
Increment(std::atomic<Value> * what_, Value amount_)
: what(what_), amount(amount_)
{
*what += amount;
}
public:
Increment(Metric metric, Value amount = 1)
: Increment(&values[metric], amount) {}
Increment(Metric metric, Value amount_ = 1)
: Increment(&values[metric], amount_) {}
~Increment()
{

View File

@ -26,6 +26,11 @@ void CurrentThread::updatePerformanceCounters()
current_thread->updatePerformanceCounters();
}
bool CurrentThread::isInitialized()
{
return current_thread;
}
ThreadStatus & CurrentThread::get()
{
if (unlikely(!current_thread))

View File

@ -33,6 +33,9 @@ class InternalTextLogsQueue;
class CurrentThread
{
public:
/// Return true in case of successful initialization
static bool isInitialized();
/// Handler to current thread
static ThreadStatus & get();

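`isInitialized()` lets callers probe the thread-local state before `get()`, which throws when the thread was never attached. A minimal sketch:

```cpp
/// Safe on any thread: only touch ThreadStatus when it actually exists.
if (CurrentThread::isInitialized())
    CurrentThread::get().updatePerformanceCounters();
```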
View File

@ -707,7 +707,7 @@ void Dwarf::LineNumberVM::init()
lineRange_ = read<uint8_t>(header);
opcodeBase_ = read<uint8_t>(header);
SAFE_CHECK(opcodeBase_ != 0, "invalid opcode base");
standardOpcodeLengths_ = reinterpret_cast<const uint8_t *>(header.data());
standardOpcodeLengths_ = reinterpret_cast<const uint8_t *>(header.data()); //-V506
header.remove_prefix(opcodeBase_ - 1);
// We don't want to use heap, so we don't keep an unbounded amount of state.

View File

@ -55,8 +55,8 @@ Elf::Elf(const std::string & path)
}
Elf::Section::Section(const ElfShdr & header, const Elf & elf)
: header(header), elf(elf)
Elf::Section::Section(const ElfShdr & header_, const Elf & elf_)
: header(header_), elf(elf_)
{
}

View File

@ -35,7 +35,7 @@ public:
const char * end() const;
size_t size() const;
Section(const ElfShdr & header, const Elf & elf);
Section(const ElfShdr & header_, const Elf & elf_);
private:
const Elf & elf;

View File

@ -127,7 +127,7 @@ namespace ErrorCodes
extern const int INCORRECT_DATA = 117;
extern const int ENGINE_REQUIRED = 119;
extern const int CANNOT_INSERT_VALUE_OF_DIFFERENT_SIZE_INTO_TUPLE = 120;
extern const int UNKNOWN_SET_DATA_VARIANT = 121;
extern const int UNSUPPORTED_JOIN_KEYS = 121;
extern const int INCOMPATIBLE_COLUMNS = 122;
extern const int UNKNOWN_TYPE_OF_AST_NODE = 123;
extern const int INCORRECT_ELEMENT_OF_SET = 124;
@ -442,6 +442,7 @@ namespace ErrorCodes
extern const int CANNOT_PARSE_DWARF = 465;
extern const int INSECURE_PATH = 466;
extern const int CANNOT_PARSE_BOOL = 467;
extern const int CANNOT_PTHREAD_ATTR = 468;
extern const int KEEPER_EXCEPTION = 999;
extern const int POCO_EXCEPTION = 1000;

View File

@ -9,6 +9,9 @@
#include <IO/ReadBufferFromString.h>
#include <common/demangle.h>
#include <Common/config_version.h>
#include <Common/formatReadable.h>
#include <Storages/MergeTree/DiskSpaceMonitor.h>
#include <filesystem>
namespace DB
{
@ -52,6 +55,11 @@ void throwFromErrno(const std::string & s, int code, int e)
throw ErrnoException(s + ", " + errnoToString(code, e), code, e);
}
void throwFromErrnoWithPath(const std::string & s, const std::string & path, int code, int the_errno)
{
throw ErrnoException(s + ", " + errnoToString(code, the_errno), code, the_errno, path);
}
void tryLogCurrentException(const char * log_name, const std::string & start_of_message)
{
tryLogCurrentException(&Logger::get(log_name), start_of_message);
@ -68,7 +76,52 @@ void tryLogCurrentException(Poco::Logger * logger, const std::string & start_of_
}
}
std::string getCurrentExceptionMessage(bool with_stacktrace, bool check_embedded_stacktrace)
void getNoSpaceLeftInfoMessage(std::filesystem::path path, std::string & msg)
{
path = std::filesystem::absolute(path);
/// It's possible to get ENOSPC for a non-existent file (e.g. if there are no free inodes and creat() fails)
/// So try to get info for existent parent directory.
while (!std::filesystem::exists(path) && path.has_relative_path())
path = path.parent_path();
auto fs = DiskSpaceMonitor::getStatVFS(path);
msg += "\nTotal space: " + formatReadableSizeWithBinarySuffix(fs.f_blocks * fs.f_bsize)
+ "\nAvailable space: " + formatReadableSizeWithBinarySuffix(fs.f_bavail * fs.f_bsize)
+ "\nTotal inodes: " + formatReadableQuantity(fs.f_files)
+ "\nAvailable inodes: " + formatReadableQuantity(fs.f_favail);
auto mount_point = DiskSpaceMonitor::getMountPoint(path).string();
msg += "\nMount point: " + mount_point;
#if defined(__linux__)
msg += "\nFilesystem: " + DiskSpaceMonitor::getFilesystemName(mount_point);
#endif
}
std::string getExtraExceptionInfo(const std::exception & e)
{
String msg;
try
{
if (auto file_exception = dynamic_cast<const Poco::FileException *>(&e))
{
if (file_exception->code() == ENOSPC)
getNoSpaceLeftInfoMessage(file_exception->message(), msg);
}
else if (auto errno_exception = dynamic_cast<const DB::ErrnoException *>(&e))
{
if (errno_exception->getErrno() == ENOSPC && errno_exception->getPath())
getNoSpaceLeftInfoMessage(errno_exception->getPath().value(), msg);
}
}
catch (...)
{
msg += "\nCannot print extra info: " + getCurrentExceptionMessage(false, false, false);
}
return msg;
}
std::string getCurrentExceptionMessage(bool with_stacktrace, bool check_embedded_stacktrace /*= false*/, bool with_extra_info /*= true*/)
{
std::stringstream stream;
@ -78,7 +131,9 @@ std::string getCurrentExceptionMessage(bool with_stacktrace, bool check_embedded
}
catch (const Exception & e)
{
stream << getExceptionMessage(e, with_stacktrace, check_embedded_stacktrace) << " (version " << VERSION_STRING << VERSION_OFFICIAL << ")";
stream << getExceptionMessage(e, with_stacktrace, check_embedded_stacktrace)
<< (with_extra_info ? getExtraExceptionInfo(e) : "")
<< " (version " << VERSION_STRING << VERSION_OFFICIAL << ")";
}
catch (const Poco::Exception & e)
{
@ -86,7 +141,8 @@ std::string getCurrentExceptionMessage(bool with_stacktrace, bool check_embedded
{
stream << "Poco::Exception. Code: " << ErrorCodes::POCO_EXCEPTION << ", e.code() = " << e.code()
<< ", e.displayText() = " << e.displayText()
<< " (version " << VERSION_STRING << VERSION_OFFICIAL << ")";
<< (with_extra_info ? getExtraExceptionInfo(e) : "")
<< " (version " << VERSION_STRING << VERSION_OFFICIAL;
}
catch (...) {}
}
@ -100,7 +156,9 @@ std::string getCurrentExceptionMessage(bool with_stacktrace, bool check_embedded
if (status)
name += " (demangling status: " + toString(status) + ")";
stream << "std::exception. Code: " << ErrorCodes::STD_EXCEPTION << ", type: " << name << ", e.what() = " << e.what() << ", version = " << VERSION_STRING << VERSION_OFFICIAL;
stream << "std::exception. Code: " << ErrorCodes::STD_EXCEPTION << ", type: " << name << ", e.what() = " << e.what()
<< (with_extra_info ? getExtraExceptionInfo(e) : "")
<< ", version = " << VERSION_STRING << VERSION_OFFICIAL;
}
catch (...) {}
}

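Since `with_extra_info` defaults to true, an ENOSPC failure now carries filesystem diagnostics into the rendered message. A hedged sketch (`writeBufferToDisk` and `log` are placeholders, not part of the patch):

```cpp
try
{
    writeBufferToDisk(); /// hypothetical operation that may fail with ENOSPC
}
catch (...)
{
    /// The message now also reports total/available space, inode counts
    /// and the mount point for the path recorded in the exception.
    LOG_ERROR(log, getCurrentExceptionMessage(/* with_stacktrace = */ true));
}
```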
View File

@ -52,16 +52,18 @@ private:
class ErrnoException : public Exception
{
public:
ErrnoException(const std::string & msg, int code, int saved_errno_)
: Exception(msg, code), saved_errno(saved_errno_) {}
ErrnoException(const std::string & msg, int code, int saved_errno_, const std::optional<std::string> & path_ = {})
: Exception(msg, code), saved_errno(saved_errno_), path(path_) {}
ErrnoException * clone() const override { return new ErrnoException(*this); }
void rethrow() const override { throw *this; }
int getErrno() const { return saved_errno; }
const std::optional<std::string> getPath() const { return path; }
private:
int saved_errno;
std::optional<std::string> path;
const char * name() const throw() override { return "DB::ErrnoException"; }
const char * className() const throw() override { return "DB::ErrnoException"; }
@ -73,6 +75,8 @@ using Exceptions = std::vector<std::exception_ptr>;
std::string errnoToString(int code, int the_errno = errno);
[[noreturn]] void throwFromErrno(const std::string & s, int code, int the_errno = errno);
[[noreturn]] void throwFromErrnoWithPath(const std::string & s, const std::string & path, int code,
int the_errno = errno);
/** Try to write an exception to the log (and forget about it).
@ -87,7 +91,8 @@ void tryLogCurrentException(Poco::Logger * logger, const std::string & start_of_
* check_embedded_stacktrace - if DB::Exception has embedded stacktrace then
* only this stack trace will be printed.
*/
std::string getCurrentExceptionMessage(bool with_stacktrace, bool check_embedded_stacktrace = false);
std::string getCurrentExceptionMessage(bool with_stacktrace, bool check_embedded_stacktrace = false,
bool with_extra_info = true);
/// Returns error code from ErrorCodes
int getCurrentExceptionCode();

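The path-aware overload is what feeds the ENOSPC diagnostics above; callers mirror the existing `throwFromErrno` pattern, as in this excerpt-style sketch:

```cpp
int fd = ::open(path.c_str(), O_RDWR | O_CREAT, 0666);
if (-1 == fd)
    /// The path is stored inside ErrnoException so that ENOSPC handling
    /// can query free space for the right filesystem.
    DB::throwFromErrnoWithPath("Cannot open file " + path, path,
                               DB::ErrorCodes::CANNOT_OPEN_FILE);
```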
View File

@ -167,7 +167,7 @@ String FieldVisitorToString::operator() (const Tuple & x_def) const
}
FieldVisitorHash::FieldVisitorHash(SipHash & hash) : hash(hash) {}
FieldVisitorHash::FieldVisitorHash(SipHash & hash_) : hash(hash_) {}
void FieldVisitorHash::operator() (const Null &) const
{

View File

@ -222,7 +222,7 @@ class FieldVisitorHash : public StaticVisitor<>
private:
SipHash & hash;
public:
FieldVisitorHash(SipHash & hash);
FieldVisitorHash(SipHash & hash_);
void operator() (const Null & x) const;
void operator() (const UInt64 & x) const;

View File

@ -3,6 +3,8 @@
#include <Core/Types.h>
#include <Common/UInt128.h>
#include <type_traits>
/** Hash functions that are better than the trivial function std::hash.
*
@ -57,8 +59,6 @@ inline DB::UInt64 intHashCRC32(DB::UInt64 x)
}
template <typename T> struct DefaultHash;
template <typename T>
inline size_t DefaultHash64(T key)
{
@ -72,28 +72,18 @@ inline size_t DefaultHash64(T key)
return intHash64(u.out);
}
#define DEFINE_HASH(T) \
template <> struct DefaultHash<T>\
{\
size_t operator() (T key) const\
{\
return DefaultHash64<T>(key);\
}\
template <typename T, typename Enable = void>
struct DefaultHash;
template <typename T>
struct DefaultHash<T, std::enable_if_t<std::is_arithmetic_v<T>>>
{
size_t operator() (T key) const
{
return DefaultHash64<T>(key);
}
};
DEFINE_HASH(DB::UInt8)
DEFINE_HASH(DB::UInt16)
DEFINE_HASH(DB::UInt32)
DEFINE_HASH(DB::UInt64)
DEFINE_HASH(DB::Int8)
DEFINE_HASH(DB::Int16)
DEFINE_HASH(DB::Int32)
DEFINE_HASH(DB::Int64)
DEFINE_HASH(DB::Float32)
DEFINE_HASH(DB::Float64)
#undef DEFINE_HASH
template <typename T> struct HashCRC32;

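After this change, every arithmetic key type resolves to the same SFINAE specialization instead of a macro-generated one; an illustrative check:

```cpp
DefaultHash<DB::UInt64> hash_u64;
DefaultHash<DB::Float64> hash_f64;
size_t h1 = hash_u64(12345); /// routed through DefaultHash64 -> intHash64
size_t h2 = hash_f64(1.5);   /// compiles without a dedicated DEFINE_HASH entry
```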
View File

@ -31,9 +31,9 @@ class MemoryTracker
const char * description = nullptr;
public:
MemoryTracker(VariableContext level = VariableContext::Thread) : level(level) {}
MemoryTracker(Int64 limit_, VariableContext level = VariableContext::Thread) : limit(limit_), level(level) {}
MemoryTracker(MemoryTracker * parent_, VariableContext level = VariableContext::Thread) : parent(parent_), level(level) {}
MemoryTracker(VariableContext level_ = VariableContext::Thread) : level(level_) {}
MemoryTracker(Int64 limit_, VariableContext level_ = VariableContext::Thread) : limit(limit_), level(level_) {}
MemoryTracker(MemoryTracker * parent_, VariableContext level_ = VariableContext::Thread) : parent(parent_), level(level_) {}
~MemoryTracker();

View File

@ -636,6 +636,6 @@ using PaddedPODArray = PODArray<T, initial_bytes, TAllocator, 15, 16>;
template <typename T, size_t inline_bytes,
size_t rounded_bytes = integerRoundUp(inline_bytes, sizeof(T))>
using PODArrayWithStackMemory = PODArray<T, rounded_bytes,
AllocatorWithStackMemory<Allocator<false>, rounded_bytes>>;
AllocatorWithStackMemory<Allocator<false>, rounded_bytes, alignof(T)>>;
}

View File

@ -191,10 +191,10 @@ Counters global_counters(global_counters_array);
const Event Counters::num_counters = END;
Counters::Counters(VariableContext level, Counters * parent)
Counters::Counters(VariableContext level_, Counters * parent_)
: counters_holder(new Counter[num_counters] {}),
parent(parent),
level(level)
parent(parent_),
level(level_)
{
counters = counters_holder.get();
}

View File

@ -33,7 +33,7 @@ namespace ProfileEvents
VariableContext level = VariableContext::Thread;
/// By default, any instance have to increment global counters
Counters(VariableContext level = VariableContext::Thread, Counters * parent = &global_counters);
Counters(VariableContext level_ = VariableContext::Thread, Counters * parent_ = &global_counters);
/// Global level static initializer
Counters(Counter * allocated_counters)

View File

@ -127,9 +127,9 @@ namespace ErrorCodes
}
template <typename ProfilerImpl>
QueryProfilerBase<ProfilerImpl>::QueryProfilerBase(const Int32 thread_id, const int clock_type, UInt32 period, const int pause_signal)
QueryProfilerBase<ProfilerImpl>::QueryProfilerBase(const Int32 thread_id, const int clock_type, UInt32 period, const int pause_signal_)
: log(&Logger::get("QueryProfiler"))
, pause_signal(pause_signal)
, pause_signal(pause_signal_)
{
#if USE_INTERNAL_UNWIND_LIBRARY
/// Sanity check.

View File

@ -35,7 +35,7 @@ template <typename ProfilerImpl>
class QueryProfilerBase
{
public:
QueryProfilerBase(const Int32 thread_id, const int clock_type, UInt32 period, const int pause_signal);
QueryProfilerBase(const Int32 thread_id, const int clock_type, UInt32 period, const int pause_signal_);
~QueryProfilerBase();
private:

View File

@ -161,9 +161,9 @@ RWLockImpl::LockHolderImpl::~LockHolderImpl()
}
RWLockImpl::LockHolderImpl::LockHolderImpl(RWLock && parent, RWLockImpl::GroupsContainer::iterator it_group,
RWLockImpl::ClientsContainer::iterator it_client)
: parent{std::move(parent)}, it_group{it_group}, it_client{it_client},
RWLockImpl::LockHolderImpl::LockHolderImpl(RWLock && parent_, RWLockImpl::GroupsContainer::iterator it_group_,
RWLockImpl::ClientsContainer::iterator it_client_)
: parent{std::move(parent_)}, it_group{it_group_}, it_client{it_client_},
active_client_increment{(*it_client == RWLockImpl::Read) ? CurrentMetrics::RWLockActiveReaders
: CurrentMetrics::RWLockActiveWriters}
{}

View File

@ -68,7 +68,7 @@ private:
std::condition_variable cv; /// all clients of the group wait group condvar
explicit Group(Type type) : type{type} {}
explicit Group(Type type_) : type{type_} {}
};
mutable std::mutex mutex;

View File

@ -34,13 +34,13 @@ namespace ErrorCodes
extern const int CANNOT_CREATE_CHILD_PROCESS;
}
ShellCommand::ShellCommand(pid_t pid, int in_fd, int out_fd, int err_fd, bool terminate_in_destructor_)
: pid(pid)
ShellCommand::ShellCommand(pid_t pid_, int in_fd_, int out_fd_, int err_fd_, bool terminate_in_destructor_)
: pid(pid_)
, terminate_in_destructor(terminate_in_destructor_)
, log(&Poco::Logger::get("ShellCommand"))
, in(in_fd)
, out(out_fd)
, err(err_fd) {}
, in(in_fd_)
, out(out_fd_)
, err(err_fd_) {}
ShellCommand::~ShellCommand()
{

View File

@ -32,7 +32,7 @@ private:
Poco::Logger * log;
ShellCommand(pid_t pid, int in_fd, int out_fd, int err_fd, bool terminate_in_destructor_);
ShellCommand(pid_t pid_, int in_fd_, int out_fd_, int err_fd_, bool terminate_in_destructor_);
static std::unique_ptr<ShellCommand> executeImpl(const char * filename, char * const argv[], bool pipe_stdin_only, bool terminate_in_destructor);

View File

@ -113,7 +113,8 @@ public:
}
TKey key;
size_t slot, hash;
size_t slot;
size_t hash;
UInt64 count;
UInt64 error;
};
@ -147,15 +148,13 @@ public:
void insert(const TKey & key, UInt64 increment = 1, UInt64 error = 0)
{
// Increase weight of a key that already exists
// It uses the hash table both for value mapping and as a presence test (c_i != 0)
auto hash = counter_map.hash(key);
auto it = counter_map.find(key, hash);
if (it != counter_map.end())
auto counter = findCounter(key, hash);
if (counter)
{
auto c = it->getSecond();
c->count += increment;
c->error += error;
percolate(c);
counter->count += increment;
counter->error += error;
percolate(counter);
return;
}
// Key doesn't exist, but can fit in the top K
@ -177,6 +176,7 @@ public:
push(new Counter(arena.emplace(key), increment, error, hash));
return;
}
const size_t alpha_mask = alpha_map.size() - 1;
auto & alpha = alpha_map[hash & alpha_mask];
if (alpha + increment < min->count)
@ -187,22 +187,9 @@ public:
// Erase the current minimum element
alpha_map[min->hash & alpha_mask] = min->count;
it = counter_map.find(min->key, min->hash);
destroyLastElement();
// Replace minimum with newly inserted element
if (it != counter_map.end())
{
arena.free(min->key);
min->hash = hash;
min->key = arena.emplace(key);
min->count = alpha + increment;
min->error = alpha + error;
percolate(min);
it->getSecond() = min;
it->getFirstMutable() = min->key;
counter_map.reinsert(it, hash);
}
push(new Counter(arena.emplace(key), alpha + increment, alpha + error, hash));
}
/*
@ -242,17 +229,35 @@ public:
// The list is sorted in descending order, we have to scan in reverse
for (auto counter : boost::adaptors::reverse(rhs.counter_list))
{
if (counter_map.find(counter->key) != counter_map.end())
size_t hash = counter_map.hash(counter->key);
if (auto current = findCounter(counter->key, hash))
{
// Subtract m2 previously added; the result is guaranteed to be non-negative
insert(counter->key, counter->count - m2, counter->error - m2);
current->count += (counter->count - m2);
current->error += (counter->error - m2);
}
else
{
// Counters not monitored in S1
insert(counter->key, counter->count + m1, counter->error + m1);
counter_list.push_back(new Counter(arena.emplace(counter->key), counter->count + m1, counter->error + m1, hash));
}
}
std::sort(counter_list.begin(), counter_list.end(), [](Counter * l, Counter * r) { return *l > *r; });
if (counter_list.size() > m_capacity)
{
for (size_t i = m_capacity; i < counter_list.size(); ++i)
{
arena.free(counter_list[i]->key);
delete counter_list[i];
}
counter_list.resize(m_capacity);
}
for (size_t i = 0; i < counter_list.size(); ++i)
counter_list[i]->slot = i;
rebuildCounterMap();
}
std::vector<Counter> topK(size_t k) const
@ -336,7 +341,10 @@ private:
void destroyElements()
{
for (auto counter : counter_list)
{
arena.free(counter->key);
delete counter;
}
counter_map.clear();
counter_list.clear();
@ -346,19 +354,40 @@ private:
void destroyLastElement()
{
auto last_element = counter_list.back();
auto cell = counter_map.find(last_element->key, last_element->hash);
cell->setZero();
counter_map.reinsert(cell, last_element->hash);
counter_list.pop_back();
arena.free(last_element->key);
delete last_element;
counter_list.pop_back();
++removed_keys;
if (removed_keys * 2 > counter_map.size())
rebuildCounterMap();
}
HashMap<TKey, Counter *, Hash, Grower, Allocator> counter_map;
Counter * findCounter(const TKey & key, size_t hash)
{
auto it = counter_map.find(key, hash);
if (it == counter_map.end())
return nullptr;
return it->getSecond();
}
void rebuildCounterMap()
{
removed_keys = 0;
counter_map.clear();
for (auto counter : counter_list)
counter_map[counter->key] = counter;
}
using CounterMap = HashMap<TKey, Counter *, Hash, Grower, Allocator>;
CounterMap counter_map;
std::vector<Counter *> counter_list;
std::vector<UInt64> alpha_map;
SpaceSavingArena<TKey> arena;
size_t m_capacity;
size_t removed_keys = 0;
};
}

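A toy usage sketch of the reworked merge, assuming the capacity constructor and `topK()` from this header (`process` is a placeholder):

```cpp
SpaceSaving<UInt64> s1(/* capacity = */ 10);
SpaceSaving<UInt64> s2(10);
for (UInt64 v : {1, 1, 2, 3}) /// toy input streams
    s1.insert(v);
for (UInt64 v : {1, 4, 4, 4})
    s2.insert(v);
s1.merge(s2); /// still at most 10 counters afterwards
for (const auto & counter : s1.topK(3))
    process(counter.key, counter.count, counter.error); /// count overestimates by at most error
```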
View File

@ -253,7 +253,7 @@ static std::string toStringImpl(const StackTrace::Frames & frames, size_t offset
{
const void * addr = frames[i];
out << "#" << i << " " << addr << " ";
out << i << ". " << addr << " ";
auto symbol = symbol_index.findSymbol(addr);
if (symbol)
{

View File

@ -51,7 +51,7 @@ StatusFile::StatusFile(const std::string & path_)
fd = ::open(path.c_str(), O_WRONLY | O_CREAT, 0666);
if (-1 == fd)
throwFromErrno("Cannot open file " + path, ErrorCodes::CANNOT_OPEN_FILE);
throwFromErrnoWithPath("Cannot open file " + path, path, ErrorCodes::CANNOT_OPEN_FILE);
try
{
@ -61,14 +61,14 @@ StatusFile::StatusFile(const std::string & path_)
if (errno == EWOULDBLOCK)
throw Exception("Cannot lock file " + path + ". Another server instance in same directory is already running.", ErrorCodes::CANNOT_OPEN_FILE);
else
throwFromErrno("Cannot lock file " + path, ErrorCodes::CANNOT_OPEN_FILE);
throwFromErrnoWithPath("Cannot lock file " + path, path, ErrorCodes::CANNOT_OPEN_FILE);
}
if (0 != ftruncate(fd, 0))
throwFromErrno("Cannot ftruncate " + path, ErrorCodes::CANNOT_TRUNCATE_FILE);
throwFromErrnoWithPath("Cannot ftruncate " + path, path, ErrorCodes::CANNOT_TRUNCATE_FILE);
if (0 != lseek(fd, 0, SEEK_SET))
throwFromErrno("Cannot lseek " + path, ErrorCodes::CANNOT_SEEK_THROUGH_FILE);
throwFromErrnoWithPath("Cannot lseek " + path, path, ErrorCodes::CANNOT_SEEK_THROUGH_FILE);
/// Write information about current server instance to the file.
{

View File

@ -86,7 +86,7 @@ public:
operator bool() const { return parent != nullptr; }
Lock(AtomicStopwatch * parent) : parent(parent) {}
Lock(AtomicStopwatch * parent_) : parent(parent_) {}
Lock(Lock &&) = default;

View File

@ -75,8 +75,8 @@ private:
#endif
public:
StringSearcher(const char * const needle_, const size_t needle_size)
: needle{reinterpret_cast<const UInt8 *>(needle_)}, needle_size{needle_size}
StringSearcher(const char * const needle_, const size_t needle_size_)
: needle{reinterpret_cast<const UInt8 *>(needle_)}, needle_size{needle_size_}
{
if (0 == needle_size)
return;
@ -714,8 +714,8 @@ struct LibCASCIICaseSensitiveStringSearcher
{
const char * const needle;
LibCASCIICaseSensitiveStringSearcher(const char * const needle, const size_t /* needle_size */)
: needle(needle) {}
LibCASCIICaseSensitiveStringSearcher(const char * const needle_, const size_t /* needle_size */)
: needle(needle_) {}
const UInt8 * search(const UInt8 * haystack, const UInt8 * const haystack_end) const
{
@ -735,8 +735,8 @@ struct LibCASCIICaseInsensitiveStringSearcher
{
const char * const needle;
LibCASCIICaseInsensitiveStringSearcher(const char * const needle, const size_t /* needle_size */)
: needle(needle) {}
LibCASCIICaseInsensitiveStringSearcher(const char * const needle_, const size_t /* needle_size */)
: needle(needle_) {}
const UInt8 * search(const UInt8 * haystack, const UInt8 * const haystack_end) const
{

View File

@ -22,14 +22,14 @@ namespace CurrentMetrics
template <typename Thread>
ThreadPoolImpl<Thread>::ThreadPoolImpl(size_t max_threads)
: ThreadPoolImpl(max_threads, max_threads, max_threads)
ThreadPoolImpl<Thread>::ThreadPoolImpl(size_t max_threads_)
: ThreadPoolImpl(max_threads_, max_threads_, max_threads_)
{
}
template <typename Thread>
ThreadPoolImpl<Thread>::ThreadPoolImpl(size_t max_threads, size_t max_free_threads, size_t queue_size)
: max_threads(max_threads), max_free_threads(max_free_threads), queue_size(queue_size)
ThreadPoolImpl<Thread>::ThreadPoolImpl(size_t max_threads_, size_t max_free_threads_, size_t queue_size_)
: max_threads(max_threads_), max_free_threads(max_free_threads_), queue_size(queue_size_)
{
}

View File

@ -31,10 +31,10 @@ public:
using Job = std::function<void()>;
/// Size is constant. Up to num_threads are created on demand and then run until shutdown.
explicit ThreadPoolImpl(size_t max_threads);
explicit ThreadPoolImpl(size_t max_threads_);
/// queue_size - maximum number of running plus scheduled jobs. It can be greater than max_threads. Zero means unlimited.
ThreadPoolImpl(size_t max_threads, size_t max_free_threads, size_t queue_size);
ThreadPoolImpl(size_t max_threads_, size_t max_free_threads_, size_t queue_size_);
/// Add new job. Locks until number of scheduled jobs is less than maximum or exception in one of threads was thrown.
/// If an exception in some thread was thrown, method silently returns, and exception will be rethrown only on call to 'wait' function.
@ -81,8 +81,8 @@ private:
Job job;
int priority;
JobWithPriority(Job job, int priority)
: job(job), priority(priority) {}
JobWithPriority(Job job_, int priority_)
: job(job_), priority(priority_) {}
bool operator< (const JobWithPriority & rhs) const
{

View File

@ -36,12 +36,12 @@ namespace ErrorCodes
class Throttler
{
public:
Throttler(size_t max_speed_, const std::shared_ptr<Throttler> & parent = nullptr)
: max_speed(max_speed_), limit_exceeded_exception_message(""), parent(parent) {}
Throttler(size_t max_speed_, const std::shared_ptr<Throttler> & parent_ = nullptr)
: max_speed(max_speed_), limit_exceeded_exception_message(""), parent(parent_) {}
Throttler(size_t max_speed_, size_t limit_, const char * limit_exceeded_exception_message_,
const std::shared_ptr<Throttler> & parent = nullptr)
: max_speed(max_speed_), limit(limit_), limit_exceeded_exception_message(limit_exceeded_exception_message_), parent(parent) {}
const std::shared_ptr<Throttler> & parent_ = nullptr)
: max_speed(max_speed_), limit(limit_), limit_exceeded_exception_message(limit_exceeded_exception_message_), parent(parent_) {}
void add(const size_t amount)
{

View File

@ -28,9 +28,9 @@ namespace ErrorCodes
extern const int CANNOT_FCNTL;
}
TraceCollector::TraceCollector(std::shared_ptr<TraceLog> & trace_log)
TraceCollector::TraceCollector(std::shared_ptr<TraceLog> & trace_log_)
: log(&Poco::Logger::get("TraceCollector"))
, trace_log(trace_log)
, trace_log(trace_log_)
{
if (trace_log == nullptr)
throw Exception("Invalid trace log pointer passed", ErrorCodes::NULL_POINTER_DEREFERENCE);

View File

@ -24,7 +24,7 @@ private:
static void notifyToStop();
public:
TraceCollector(std::shared_ptr<TraceLog> & trace_log);
TraceCollector(std::shared_ptr<TraceLog> & trace_log_);
~TraceCollector();
};

View File

@ -28,7 +28,7 @@ struct UInt128
UInt64 high;
UInt128() = default;
explicit UInt128(const UInt64 low, const UInt64 high) : low(low), high(high) {}
explicit UInt128(const UInt64 low_, const UInt64 high_) : low(low_), high(high_) {}
explicit UInt128(const UInt64 rhs) : low(rhs), high() {}
auto tuple() const { return std::tie(high, low); }
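The tuple() helper above enables lexicographic (high, low) comparison; a self-contained sketch of the same idiom:

    #include <cstdint>
    #include <tuple>

    struct U128Sketch
    {
        uint64_t low = 0;
        uint64_t high = 0;
        /// Compare the high word first, then the low word, via std::tie.
        auto tuple() const { return std::tie(high, low); }
        bool operator< (const U128Sketch & rhs) const { return tuple() < rhs.tuple(); }
    };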

View File

@ -331,11 +331,11 @@ public:
* If you specify it small enough, the fallback algorithm will be used,
* since it is considered that it's useless to waste time initializing the hash table.
*/
VolnitskyBase(const char * const needle, const size_t needle_size, size_t haystack_size_hint = 0)
: needle{reinterpret_cast<const UInt8 *>(needle)}
, needle_size{needle_size}
VolnitskyBase(const char * const needle_, const size_t needle_size_, size_t haystack_size_hint = 0)
: needle{reinterpret_cast<const UInt8 *>(needle_)}
, needle_size{needle_size_}
, fallback{VolnitskyTraits::isFallbackNeedle(needle_size, haystack_size_hint)}
, fallback_searcher{needle, needle_size}
, fallback_searcher{needle_, needle_size}
{
if (fallback)
return;
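An illustrative (hypothetical) version of the fallback decision the comment describes — the real VolnitskyTraits::isFallbackNeedle thresholds are not shown in this diff:

    #include <cstddef>

    /// Hypothetical thresholds for illustration only: skip hash-table setup
    /// when the needle is degenerate or the haystack hint says it won't pay off.
    static bool isFallbackNeedleSketch(size_t needle_size, size_t haystack_size_hint)
    {
        return needle_size < 2
            || needle_size > 64
            || (haystack_size_hint != 0 && haystack_size_hint < 1000);
    }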

View File

@ -23,8 +23,8 @@ namespace ProfileEvents
namespace Coordination
{
Exception::Exception(const std::string & msg, const int32_t code, int)
: DB::Exception(msg, DB::ErrorCodes::KEEPER_EXCEPTION), code(code)
Exception::Exception(const std::string & msg, const int32_t code_, int)
: DB::Exception(msg, DB::ErrorCodes::KEEPER_EXCEPTION), code(code_)
{
if (Coordination::isUserError(code))
ProfileEvents::increment(ProfileEvents::ZooKeeperUserExceptions);
@ -34,18 +34,18 @@ Exception::Exception(const std::string & msg, const int32_t code, int)
ProfileEvents::increment(ProfileEvents::ZooKeeperOtherExceptions);
}
Exception::Exception(const std::string & msg, const int32_t code)
: Exception(msg + " (" + errorMessage(code) + ")", code, 0)
Exception::Exception(const std::string & msg, const int32_t code_)
: Exception(msg + " (" + errorMessage(code_) + ")", code_, 0)
{
}
Exception::Exception(const int32_t code)
: Exception(errorMessage(code), code, 0)
Exception::Exception(const int32_t code_)
: Exception(errorMessage(code_), code_, 0)
{
}
Exception::Exception(const int32_t code, const std::string & path)
: Exception(std::string{errorMessage(code)} + ", path: " + path, code, 0)
Exception::Exception(const int32_t code_, const std::string & path)
: Exception(std::string{errorMessage(code_)} + ", path: " + path, code_, 0)
{
}

View File

@ -301,12 +301,12 @@ class Exception : public DB::Exception
{
private:
/// Delegate constructor, used to minimize repetition; last parameter used for overload resolution.
Exception(const std::string & msg, const int32_t code, int);
Exception(const std::string & msg, const int32_t code_, int);
public:
explicit Exception(const int32_t code);
Exception(const std::string & msg, const int32_t code);
Exception(const int32_t code, const std::string & path);
explicit Exception(const int32_t code_);
Exception(const std::string & msg, const int32_t code_);
Exception(const int32_t code_, const std::string & path);
Exception(const Exception & exc);
const char * name() const throw() override { return "Coordination::Exception"; }
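The dummy trailing int parameter noted in the comment is a common C++ disambiguation idiom; a minimal self-contained sketch, independent of the ClickHouse classes:

    #include <cstdint>
    #include <string>

    class ExcSketch
    {
    private:
        /// Private delegate; the unused int only distinguishes this overload
        /// from the public (msg, code) constructor below.
        ExcSketch(const std::string & msg_, int32_t code_, int) : msg(msg_), code(code_) {}

    public:
        ExcSketch(const std::string & msg_, int32_t code_)
            : ExcSketch(msg_ + " (code " + std::to_string(code_) + ")", code_, 0) {}

        explicit ExcSketch(int32_t code_) : ExcSketch("error", code_) {}

        std::string msg;
        int32_t code;
    };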

View File

@ -418,8 +418,8 @@ ResponsePtr TestKeeperCheckRequest::createResponse() const { return std::make_sh
ResponsePtr TestKeeperMultiRequest::createResponse() const { return std::make_shared<MultiResponse>(); }
TestKeeper::TestKeeper(const String & root_path_, Poco::Timespan operation_timeout)
: root_path(root_path_), operation_timeout(operation_timeout)
TestKeeper::TestKeeper(const String & root_path_, Poco::Timespan operation_timeout_)
: root_path(root_path_), operation_timeout(operation_timeout_)
{
container.emplace("/", Node());

View File

@ -33,7 +33,7 @@ using TestKeeperRequestPtr = std::shared_ptr<TestKeeperRequest>;
class TestKeeper : public IKeeper
{
public:
TestKeeper(const String & root_path, Poco::Timespan operation_timeout);
TestKeeper(const String & root_path_, Poco::Timespan operation_timeout_);
~TestKeeper() override;
bool isExpired() const override { return expired; }

View File

@ -106,10 +106,10 @@ void ZooKeeper::init(const std::string & implementation, const std::string & hos
throw KeeperException("Zookeeper root doesn't exist. You should create root node " + chroot + " before start.", Coordination::ZNONODE);
}
ZooKeeper::ZooKeeper(const std::string & hosts, const std::string & identity, int32_t session_timeout_ms,
int32_t operation_timeout_ms, const std::string & chroot, const std::string & implementation)
ZooKeeper::ZooKeeper(const std::string & hosts_, const std::string & identity_, int32_t session_timeout_ms_,
int32_t operation_timeout_ms_, const std::string & chroot_, const std::string & implementation)
{
init(implementation, hosts, identity, session_timeout_ms, operation_timeout_ms, chroot);
init(implementation, hosts_, identity_, session_timeout_ms_, operation_timeout_ms_, chroot_);
}
struct ZooKeeperArgs
@ -891,9 +891,9 @@ size_t KeeperMultiException::getFailedOpIndex(int32_t exception_code, const Coor
}
KeeperMultiException::KeeperMultiException(int32_t exception_code, const Coordination::Requests & requests, const Coordination::Responses & responses)
KeeperMultiException::KeeperMultiException(int32_t exception_code, const Coordination::Requests & requests_, const Coordination::Responses & responses_)
: KeeperException("Transaction failed", exception_code),
requests(requests), responses(responses), failed_op_index(getFailedOpIndex(exception_code, responses))
requests(requests_), responses(responses_), failed_op_index(getFailedOpIndex(exception_code, responses))
{
addMessage("Op #" + std::to_string(failed_op_index) + ", path: " + getPathForFirstFailedOp());
}
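getFailedOpIndex is not shown in this hunk; presumably it scans the responses for the first non-zero error code, along these lines (a hypothetical sketch):

    #include <cstddef>
    #include <cstdint>
    #include <stdexcept>
    #include <vector>

    /// Hypothetical: return the index of the first response carrying an error.
    size_t getFailedOpIndexSketch(const std::vector<int32_t> & response_errors)
    {
        for (size_t i = 0; i < response_errors.size(); ++i)
            if (response_errors[i] != 0)
                return i;
        throw std::logic_error("No failed op in a failed multi transaction");
    }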

View File

@ -52,10 +52,10 @@ class ZooKeeper
public:
using Ptr = std::shared_ptr<ZooKeeper>;
ZooKeeper(const std::string & hosts, const std::string & identity = "",
int32_t session_timeout_ms = DEFAULT_SESSION_TIMEOUT,
int32_t operation_timeout_ms = DEFAULT_OPERATION_TIMEOUT,
const std::string & chroot = "",
ZooKeeper(const std::string & hosts_, const std::string & identity_ = "",
int32_t session_timeout_ms_ = DEFAULT_SESSION_TIMEOUT,
int32_t operation_timeout_ms_ = DEFAULT_OPERATION_TIMEOUT,
const std::string & chroot_ = "",
const std::string & implementation = "zookeeper");
/** Config of the form:

View File

@ -758,17 +758,17 @@ struct ZooKeeperMultiResponse final : MultiResponse, ZooKeeperResponse
{
ZooKeeper::OpNum op_num;
bool done;
int32_t error;
int32_t error_;
Coordination::read(op_num, in);
Coordination::read(done, in);
Coordination::read(error, in);
Coordination::read(error_, in);
if (!done)
throw Exception("Too many results received for multi transaction", ZMARSHALLINGERROR);
if (op_num != -1)
throw Exception("Unexpected op_num received at the end of results for multi transaction", ZMARSHALLINGERROR);
if (error != -1)
if (error_ != -1)
throw Exception("Unexpected error value received at the end of results for multi transaction", ZMARSHALLINGERROR);
}
}
@ -821,12 +821,12 @@ ZooKeeper::ZooKeeper(
const String & root_path_,
const String & auth_scheme,
const String & auth_data,
Poco::Timespan session_timeout,
Poco::Timespan session_timeout_,
Poco::Timespan connection_timeout,
Poco::Timespan operation_timeout)
Poco::Timespan operation_timeout_)
: root_path(root_path_),
session_timeout(session_timeout),
operation_timeout(std::min(operation_timeout, session_timeout))
session_timeout(session_timeout_),
operation_timeout(std::min(operation_timeout_, session_timeout_))
{
if (!root_path.empty())
{

View File

@ -108,9 +108,9 @@ public:
const String & root_path,
const String & auth_scheme,
const String & auth_data,
Poco::Timespan session_timeout,
Poco::Timespan session_timeout_,
Poco::Timespan connection_timeout,
Poco::Timespan operation_timeout);
Poco::Timespan operation_timeout_);
~ZooKeeper() override;

View File

@ -0,0 +1,62 @@
#include <Common/checkStackSize.h>
#include <Common/Exception.h>
#include <ext/scope_guard.h>
#include <pthread.h>
#include <cstdint>
#include <sstream>
namespace DB
{
namespace ErrorCodes
{
extern const int CANNOT_PTHREAD_ATTR;
extern const int LOGICAL_ERROR;
extern const int TOO_DEEP_RECURSION;
}
}
static thread_local void * stack_address = nullptr;
static thread_local size_t max_stack_size = 0;
void checkStackSize()
{
using namespace DB;
if (!stack_address)
{
pthread_attr_t attr;
if (0 != pthread_getattr_np(pthread_self(), &attr))
throwFromErrno("Cannot pthread_getattr_np", ErrorCodes::CANNOT_PTHREAD_ATTR);
SCOPE_EXIT({ pthread_attr_destroy(&attr); });
if (0 != pthread_attr_getstack(&attr, &stack_address, &max_stack_size))
throwFromErrno("Cannot pthread_getattr_np", ErrorCodes::CANNOT_PTHREAD_ATTR);
}
const void * frame_address = __builtin_frame_address(0);
uintptr_t int_frame_address = reinterpret_cast<uintptr_t>(frame_address);
uintptr_t int_stack_address = reinterpret_cast<uintptr_t>(stack_address);
/// We assume that stack grows towards lower addresses. And that it starts to grow from the end of a chunk of memory of max_stack_size.
if (int_frame_address > int_stack_address + max_stack_size)
throw Exception("Logical error: frame address is greater than stack begin address", ErrorCodes::LOGICAL_ERROR);
size_t stack_size = int_stack_address + max_stack_size - int_frame_address;
/// Just check if we have already eaten more than half of the stack size. It's a bit of an overkill (half of the stack size is wasted).
/// It's safe to assume that overflow in multiplying by two cannot occur.
if (stack_size * 2 > max_stack_size)
{
std::stringstream message;
message << "Stack size too large"
<< ". Stack address: " << stack_address
<< ", frame address: " << frame_address
<< ", stack size: " << stack_size
<< ", maximum stack size: " << max_stack_size;
throw Exception(message.str(), ErrorCodes::TOO_DEEP_RECURSION);
}
}

View File

@ -0,0 +1,7 @@
#pragma once
/** If the used stack size is large and close to the maximum, throw an exception.
* You can call this function in "heavy" functions that may be called recursively
* to prevent possible stack overflows.
*/
void checkStackSize();
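A usage sketch matching the comment above — call the check at the top of a recursive routine so deep recursion raises a clean TOO_DEEP_RECURSION exception instead of overflowing the stack (the node type here is illustrative, not from ClickHouse):

    #include <Common/checkStackSize.h>
    #include <cstddef>
    #include <vector>

    /// Illustrative node type (hypothetical).
    struct SomeNode { std::vector<SomeNode> children; };

    /// The check runs before each level of recursion, so pathological inputs
    /// fail with an exception rather than a stack overflow.
    static size_t countNodes(const SomeNode & node)
    {
        checkStackSize();
        size_t res = 1;
        for (const auto & child : node.children)
            res += countNodes(child);
        return res;
    }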

View File

@ -26,16 +26,19 @@ void createHardLink(const String & source_path, const String & destination_path)
struct stat destination_descr;
if (0 != lstat(source_path.c_str(), &source_descr))
throwFromErrno("Cannot stat " + source_path, ErrorCodes::CANNOT_STAT);
throwFromErrnoWithPath("Cannot stat " + source_path, source_path, ErrorCodes::CANNOT_STAT);
if (0 != lstat(destination_path.c_str(), &destination_descr))
throwFromErrno("Cannot stat " + destination_path, ErrorCodes::CANNOT_STAT);
throwFromErrnoWithPath("Cannot stat " + destination_path, destination_path, ErrorCodes::CANNOT_STAT);
if (source_descr.st_ino != destination_descr.st_ino)
throwFromErrno("Destination file " + destination_path + " is already exist and have different inode.", ErrorCodes::CANNOT_LINK, link_errno);
throwFromErrnoWithPath(
"Destination file " + destination_path + " is already exist and have different inode.",
destination_path, ErrorCodes::CANNOT_LINK, link_errno);
}
else
throwFromErrno("Cannot link " + source_path + " to " + destination_path, ErrorCodes::CANNOT_LINK);
throwFromErrnoWithPath("Cannot link " + source_path + " to " + destination_path, destination_path,
ErrorCodes::CANNOT_LINK);
}
}
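For reference, a standalone sketch of the idempotent hard-link pattern the function above implements: if link() fails with EEXIST but both paths already share an inode, the desired state is already in place.

    #include <cerrno>
    #include <stdexcept>
    #include <string>
    #include <sys/stat.h>
    #include <unistd.h>

    void createHardLinkSketch(const std::string & source, const std::string & destination)
    {
        if (0 == link(source.c_str(), destination.c_str()))
            return;

        if (errno != EEXIST)
            throw std::runtime_error("Cannot link " + source + " to " + destination);

        /// EEXIST: the link is fine if both names already point to the same inode.
        struct stat source_descr;
        struct stat destination_descr;
        if (0 != lstat(source.c_str(), &source_descr))
            throw std::runtime_error("Cannot stat " + source);
        if (0 != lstat(destination.c_str(), &destination_descr))
            throw std::runtime_error("Cannot stat " + destination);
        if (source_descr.st_ino != destination_descr.st_ino)
            throw std::runtime_error("Destination file " + destination + " already exists with a different inode");
    }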

Some files were not shown because too many files have changed in this diff.