commit a51c293ec6
Author: Yuriy
Date: 2019-05-16 14:19:05 +03:00

merged 'origin/master' into mysql

481 changed files with 12320 additions and 3935 deletions

.gitmodules

@@ -79,3 +79,6 @@
[submodule "contrib/hyperscan"]
path = contrib/hyperscan
url = https://github.com/ClickHouse-Extras/hyperscan.git
[submodule "contrib/simdjson"]
path = contrib/simdjson
url = https://github.com/lemire/simdjson.git


@@ -1,3 +1,8 @@
## ClickHouse release 19.5.3.8, 2019-04-18
### Bug Fixes
* Fixed type of setting `max_partitions_per_insert_block` from boolean to UInt64. [#5028](https://github.com/yandex/ClickHouse/pull/5028) ([Mohammad Hossein Sekhavat](https://github.com/mhsekhavat))
## ClickHouse release 19.5.2.6, 2019-04-15
### New Features
@@ -24,6 +29,7 @@
* Fill `system.graphite_retentions` from a table config of `*GraphiteMergeTree` engine tables. [#4584](https://github.com/yandex/ClickHouse/pull/4584) ([Mikhail f. Shiryaev](https://github.com/Felixoid))
* Rename `trigramDistance` function to `ngramDistance` and add more functions with `CaseInsensitive` and `UTF`. [#4602](https://github.com/yandex/ClickHouse/pull/4602) ([Danila Kutenin](https://github.com/danlark1))
* Improved data skipping indices calculation. [#4640](https://github.com/yandex/ClickHouse/pull/4640) ([Nikita Vasilev](https://github.com/nikvas0))
* Keep ordinary, `DEFAULT`, `MATERIALIZED` and `ALIAS` columns in a single list (fixes issue [#2867](https://github.com/yandex/ClickHouse/issues/2867)). [#4707](https://github.com/yandex/ClickHouse/pull/4707) ([Alex Zatelepin](https://github.com/ztlpn))
### Bug Fixes
@@ -34,7 +40,6 @@
* Deadlock may happen while executing `DROP DATABASE dictionary` query. [#4701](https://github.com/yandex/ClickHouse/pull/4701) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix undefined behavior in `median` and `quantile` functions. [#4702](https://github.com/yandex/ClickHouse/pull/4702) ([hcz](https://github.com/hczhcz))
* Fix compression level detection when `network_compression_method` in lowercase. Broken in v19.1. [#4706](https://github.com/yandex/ClickHouse/pull/4706) ([proller](https://github.com/proller))
* Keep ordinary, `DEFAULT`, `MATERIALIZED` and `ALIAS` columns in a single list (fixes issue [#2867](https://github.com/yandex/ClickHouse/issues/2867)). [#4707](https://github.com/yandex/ClickHouse/pull/4707) ([Alex Zatelepin](https://github.com/ztlpn))
* Fixed the `<timezone>UTC</timezone>` setting being ignored (fixes issue [#4658](https://github.com/yandex/ClickHouse/issues/4658)). [#4718](https://github.com/yandex/ClickHouse/pull/4718) ([proller](https://github.com/proller))
* Fix `histogram` function behaviour with `Distributed` tables. [#4741](https://github.com/yandex/ClickHouse/pull/4741) ([olegkv](https://github.com/olegkv))
* Fixed tsan report `destroy of a locked mutex`. [#4742](https://github.com/yandex/ClickHouse/pull/4742) ([alexey-milovidov](https://github.com/alexey-milovidov))
@@ -63,7 +68,7 @@
* Fix rare bug when setting `min_bytes_to_use_direct_io` is greater than zero, which occurs when a thread has to seek backward in a column file. [#4897](https://github.com/yandex/ClickHouse/pull/4897) ([alesapin](https://github.com/alesapin))
* Fix wrong argument types for aggregate functions with `LowCardinality` arguments (fixes issue [#4919](https://github.com/yandex/ClickHouse/issues/4919)). [#4922](https://github.com/yandex/ClickHouse/pull/4922) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Fix wrong name qualification in `GLOBAL JOIN`. [#4969](https://github.com/yandex/ClickHouse/pull/4969) ([Artem Zuikov](https://github.com/4ertus2))
* Function `toISOWeek` result for year 1970. [#4988](https://github.com/yandex/ClickHouse/pull/4988) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix function `toISOWeek` result for year 1970. [#4988](https://github.com/yandex/ClickHouse/pull/4988) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix `DROP`, `TRUNCATE` and `OPTIMIZE` query duplication when executed `ON CLUSTER` for the `ReplicatedMergeTree*` table family. [#4991](https://github.com/yandex/ClickHouse/pull/4991) ([alesapin](https://github.com/alesapin))
### Backward Incompatible Change
@@ -93,6 +98,47 @@
* Disable usage of `mremap` when compiled with Thread Sanitizer. Surprisingly enough, TSan does not intercept `mremap` (though it does intercept `mmap`, `munmap`) that leads to false positives. Fixed TSan report in stateful tests. [#4859](https://github.com/yandex/ClickHouse/pull/4859) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Add test checking using format schema via HTTP interface. [#4864](https://github.com/yandex/ClickHouse/pull/4864) ([Vitaly Baranov](https://github.com/vitlibar))
## ClickHouse release 19.4.4.33, 2019-04-17
### Bug Fixes
* Avoid `std::terminate` in case of memory allocation failure. Now `std::bad_alloc` exception is thrown as expected. [#4665](https://github.com/yandex/ClickHouse/pull/4665) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed capnproto reading from buffer. Sometimes files weren't loaded successfully over HTTP. [#4674](https://github.com/yandex/ClickHouse/pull/4674) ([Vladislav](https://github.com/smirnov-vs))
* Fix error `Unknown log entry type: 0` after `OPTIMIZE TABLE FINAL` query. [#4683](https://github.com/yandex/ClickHouse/pull/4683) ([Amos Bird](https://github.com/amosbird))
* Wrong arguments to `hasAny` or `hasAll` functions may lead to segfault. [#4698](https://github.com/yandex/ClickHouse/pull/4698) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Deadlock may happen while executing `DROP DATABASE dictionary` query. [#4701](https://github.com/yandex/ClickHouse/pull/4701) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix undefined behavior in `median` and `quantile` functions. [#4702](https://github.com/yandex/ClickHouse/pull/4702) ([hcz](https://github.com/hczhcz))
* Fix compression level detection when `network_compression_method` in lowercase. Broken in v19.1. [#4706](https://github.com/yandex/ClickHouse/pull/4706) ([proller](https://github.com/proller))
* Fixed the `<timezone>UTC</timezone>` setting being ignored (fixes issue [#4658](https://github.com/yandex/ClickHouse/issues/4658)). [#4718](https://github.com/yandex/ClickHouse/pull/4718) ([proller](https://github.com/proller))
* Fix `histogram` function behaviour with `Distributed` tables. [#4741](https://github.com/yandex/ClickHouse/pull/4741) ([olegkv](https://github.com/olegkv))
* Fixed tsan report `destroy of a locked mutex`. [#4742](https://github.com/yandex/ClickHouse/pull/4742) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed TSan report on shutdown due to race condition in system logs usage. Fixed potential use-after-free on shutdown when part_log is enabled. [#4758](https://github.com/yandex/ClickHouse/pull/4758) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix recheck parts in `ReplicatedMergeTreeAlterThread` in case of error. [#4772](https://github.com/yandex/ClickHouse/pull/4772) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Arithmetic operations on intermediate aggregate function states were not working for constant arguments (such as subquery results). [#4776](https://github.com/yandex/ClickHouse/pull/4776) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Always backquote column names in metadata. Otherwise it's impossible to create a table with a column named `index` (server won't restart due to malformed `ATTACH` query in metadata). [#4782](https://github.com/yandex/ClickHouse/pull/4782) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix crash in `ALTER ... MODIFY ORDER BY` on `Distributed` table. [#4790](https://github.com/yandex/ClickHouse/pull/4790) ([TCeason](https://github.com/TCeason))
* Fix segfault in `JOIN ON` with enabled `enable_optimize_predicate_expression`. [#4794](https://github.com/yandex/ClickHouse/pull/4794) ([Winter Zhang](https://github.com/zhang2014))
* Fix bug with adding an extraneous row after consuming a protobuf message from Kafka. [#4808](https://github.com/yandex/ClickHouse/pull/4808) ([Vitaly Baranov](https://github.com/vitlibar))
* Fix segmentation fault in `clickhouse-copier`. [#4835](https://github.com/yandex/ClickHouse/pull/4835) ([proller](https://github.com/proller))
* Fixed race condition in `SELECT` from `system.tables` if the table is renamed or altered concurrently. [#4836](https://github.com/yandex/ClickHouse/pull/4836) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed data race when fetching data part that is already obsolete. [#4839](https://github.com/yandex/ClickHouse/pull/4839) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed rare data race that can happen during `RENAME` table of MergeTree family. [#4844](https://github.com/yandex/ClickHouse/pull/4844) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed segmentation fault in function `arrayIntersect`. Segmentation fault could happen if function was called with mixed constant and ordinary arguments. [#4847](https://github.com/yandex/ClickHouse/pull/4847) ([Lixiang Qian](https://github.com/fancyqlx))
* Fixed reading from `Array(LowCardinality)` column in rare case when column contained a long sequence of empty arrays. [#4850](https://github.com/yandex/ClickHouse/pull/4850) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Fix `No message received` exception while fetching parts between replicas. [#4856](https://github.com/yandex/ClickHouse/pull/4856) ([alesapin](https://github.com/alesapin))
* Fixed `arrayIntersect` function wrong result in case of several repeated values in single array. [#4871](https://github.com/yandex/ClickHouse/pull/4871) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Fix a race condition during concurrent `ALTER COLUMN` queries that could lead to a server crash (fixes issue [#3421](https://github.com/yandex/ClickHouse/issues/3421)). [#4592](https://github.com/yandex/ClickHouse/pull/4592) ([Alex Zatelepin](https://github.com/ztlpn))
* Fix parameter deduction in `ALTER MODIFY` of column `CODEC` when column type is not specified. [#4883](https://github.com/yandex/ClickHouse/pull/4883) ([alesapin](https://github.com/alesapin))
* Functions `cutQueryStringAndFragment()` and `queryStringAndFragment()` now work correctly when the `URL` contains a fragment and no query string. [#4894](https://github.com/yandex/ClickHouse/pull/4894) ([Vitaly Baranov](https://github.com/vitlibar))
* Fix rare bug when setting `min_bytes_to_use_direct_io` is greater than zero, which occurs when a thread has to seek backward in a column file. [#4897](https://github.com/yandex/ClickHouse/pull/4897) ([alesapin](https://github.com/alesapin))
* Fix wrong argument types for aggregate functions with `LowCardinality` arguments (fixes issue [#4919](https://github.com/yandex/ClickHouse/issues/4919)). [#4922](https://github.com/yandex/ClickHouse/pull/4922) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Fix function `toISOWeek` result for year 1970. [#4988](https://github.com/yandex/ClickHouse/pull/4988) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix `DROP`, `TRUNCATE` and `OPTIMIZE` query duplication when executed `ON CLUSTER` for the `ReplicatedMergeTree*` table family. [#4991](https://github.com/yandex/ClickHouse/pull/4991) ([alesapin](https://github.com/alesapin))
### Improvements
* Keep ordinary, `DEFAULT`, `MATERIALIZED` and `ALIAS` columns in a single list (fixes issue [#2867](https://github.com/yandex/ClickHouse/issues/2867)). [#4707](https://github.com/yandex/ClickHouse/pull/4707) ([Alex Zatelepin](https://github.com/ztlpn))
## ClickHouse release 19.4.3.11, 2019-04-02
### Bug Fixes


@@ -1,3 +1,164 @@
## ClickHouse release 19.5.3.8, 2019-04-18
### Bug Fixes
* Fixed the type of the `max_partitions_per_insert_block` setting from boolean to UInt64. [#5028](https://github.com/yandex/ClickHouse/pull/5028) ([Mohammad Hossein Sekhavat](https://github.com/mhsekhavat))
## ClickHouse release 19.5.2.6, 2019-04-15
### New Features
* Added functions for working with multiple regular expressions via the [Hyperscan](https://github.com/intel/hyperscan) library (`multiMatchAny`, `multiMatchAnyIndex`, `multiFuzzyMatchAny`, `multiFuzzyMatchAnyIndex`). [#4780](https://github.com/yandex/ClickHouse/pull/4780), [#4841](https://github.com/yandex/ClickHouse/pull/4841) ([Danila Kutenin](https://github.com/danlark1))
* Added the `multiSearchFirstPosition` function. [#4780](https://github.com/yandex/ClickHouse/pull/4780) ([Danila Kutenin](https://github.com/danlark1))
* Implemented row-level access restrictions for tables. [#4792](https://github.com/yandex/ClickHouse/pull/4792) ([Ivan](https://github.com/abyss7))
* Added a new type of data skipping index based on Bloom filters (used by the `equal`, `in` and `like` functions). [#4499](https://github.com/yandex/ClickHouse/pull/4499) ([Nikita Vasilev](https://github.com/nikvas0))
* Added `ASOF JOIN`, which allows joining rows on the closest known value. [#4774](https://github.com/yandex/ClickHouse/pull/4774) [#4867](https://github.com/yandex/ClickHouse/pull/4867) [#4863](https://github.com/yandex/ClickHouse/pull/4863) [#4875](https://github.com/yandex/ClickHouse/pull/4875) ([Martijn Bakker](https://github.com/Gladdy), [Artem Zuikov](https://github.com/4ertus2))
* `COMMA JOIN` queries are now rewritten to `CROSS JOIN`, and then to `INNER JOIN` where possible. [#4661](https://github.com/yandex/ClickHouse/pull/4661) ([Artem Zuikov](https://github.com/4ertus2))
### Improvements
* The `topK` and `topKWeighted` functions now support an arbitrary `loadFactor` (fixes issue [#4252](https://github.com/yandex/ClickHouse/issues/4252)). [#4634](https://github.com/yandex/ClickHouse/pull/4634) ([Kirill Danshin](https://github.com/kirillDanshin))
* Allowed using the `parallel_replicas_count > 1` setting for tables without sampling (previously it was simply ignored). [#4637](https://github.com/yandex/ClickHouse/pull/4637) ([Alexey Elymanov](https://github.com/digitalist))
* Supported the `CREATE OR REPLACE VIEW` query, which allows creating a `VIEW` or changing its query in a single statement. [#4654](https://github.com/yandex/ClickHouse/pull/4654) ([Boris Granveaud](https://github.com/bgranvea))
* The `Buffer` table engine now supports `PREWHERE`. [#4671](https://github.com/yandex/ClickHouse/pull/4671) ([Yangkuan Liu](https://github.com/LiuYangkuan))
* Replicated tables can now start in `readonly` mode even when ZooKeeper is unavailable. [#4691](https://github.com/yandex/ClickHouse/pull/4691) ([alesapin](https://github.com/alesapin))
* Fixed progress bar flicker in clickhouse-client. The issue was most noticeable when using `FORMAT Null` with streaming queries. [#4811](https://github.com/yandex/ClickHouse/pull/4811) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Added the ability to disable functions that use the `hyperscan` library on a per-user basis, to limit potentially uncontrolled resource consumption. [#4816](https://github.com/yandex/ClickHouse/pull/4816) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Added the version number to the log of all exceptions. [#4824](https://github.com/yandex/ClickHouse/pull/4824) ([proller](https://github.com/proller))
* Added restrictions on string length and the number of parameters in the `multiMatch` functions; they now accept strings that fit into an `unsigned int`. [#4834](https://github.com/yandex/ClickHouse/pull/4834) ([Danila Kutenin](https://github.com/danlark1))
* Improved memory usage and error handling in Hyperscan. [#4866](https://github.com/yandex/ClickHouse/pull/4866) ([Danila Kutenin](https://github.com/danlark1))
* The `system.graphite_retentions` system table is now filled from the configuration of `*GraphiteMergeTree` family tables. [#4584](https://github.com/yandex/ClickHouse/pull/4584) ([Mikhail f. Shiryaev](https://github.com/Felixoid))
* Renamed the `trigramDistance` function to `ngramDistance`; added more functions with `CaseInsensitive` and `UTF` variants. [#4602](https://github.com/yandex/ClickHouse/pull/4602) ([Danila Kutenin](https://github.com/danlark1))
* Improved data skipping indices calculation. [#4640](https://github.com/yandex/ClickHouse/pull/4640) ([Nikita Vasilev](https://github.com/nikvas0))
* Ordinary, `DEFAULT`, `MATERIALIZED` and `ALIAS` columns are now kept in a single list (fixes issue [#2867](https://github.com/yandex/ClickHouse/issues/2867)). [#4707](https://github.com/yandex/ClickHouse/pull/4707) ([Alex Zatelepin](https://github.com/ztlpn))
### Bug Fixes
* An `std::bad_alloc` exception is now thrown instead of calling `std::terminate` when memory allocation fails. [#4665](https://github.com/yandex/ClickHouse/pull/4665) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed capnproto reading from buffer; sometimes files were not loaded successfully over HTTP. [#4674](https://github.com/yandex/ClickHouse/pull/4674) ([Vladislav](https://github.com/smirnov-vs))
* Fixed the error `Unknown log entry type: 0` after an `OPTIMIZE TABLE FINAL` query. [#4683](https://github.com/yandex/ClickHouse/pull/4683) ([Amos Bird](https://github.com/amosbird))
* Passing wrong arguments to the `hasAny` or `hasAll` functions could cause a segmentation fault. [#4698](https://github.com/yandex/ClickHouse/pull/4698) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed a deadlock that could occur during a `DROP DATABASE dictionary` query. [#4701](https://github.com/yandex/ClickHouse/pull/4701) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed undefined behavior in the `median` and `quantile` functions. [#4702](https://github.com/yandex/ClickHouse/pull/4702) ([hcz](https://github.com/hczhcz))
* Fixed compression level detection when `network_compression_method` is specified in lowercase. Broken in v19.1. [#4706](https://github.com/yandex/ClickHouse/pull/4706) ([proller](https://github.com/proller))
* The `<timezone>UTC</timezone>` setting is no longer ignored (fixes issue [#4658](https://github.com/yandex/ClickHouse/issues/4658)). [#4718](https://github.com/yandex/ClickHouse/pull/4718) ([proller](https://github.com/proller))
* Fixed `histogram` function behaviour with `Distributed` tables. [#4741](https://github.com/yandex/ClickHouse/pull/4741) ([olegkv](https://github.com/olegkv))
* Fixed the TSan report `destroy of a locked mutex`. [#4742](https://github.com/yandex/ClickHouse/pull/4742) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed a TSan report on server shutdown caused by a race condition in system logs usage. Also fixed a potential use-after-free on shutdown when `part_log` is enabled. [#4758](https://github.com/yandex/ClickHouse/pull/4758) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed rechecking of parts in `ReplicatedMergeTreeAlterThread` in case of errors. [#4772](https://github.com/yandex/ClickHouse/pull/4772) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Fixed arithmetic operations on intermediate aggregate function states for constant arguments (such as subquery results). [#4776](https://github.com/yandex/ClickHouse/pull/4776) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Column names are now always backquoted in metadata files. Otherwise it was impossible to create a table with a column named `index`. [#4782](https://github.com/yandex/ClickHouse/pull/4782) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed a crash in `ALTER ... MODIFY ORDER BY` on a `Distributed` table. [#4790](https://github.com/yandex/ClickHouse/pull/4790) ([TCeason](https://github.com/TCeason))
* Fixed a segmentation fault in queries with `JOIN ON` and the `enable_optimize_predicate_expression` setting enabled. [#4794](https://github.com/yandex/ClickHouse/pull/4794) ([Winter Zhang](https://github.com/zhang2014))
* Fixed adding an extraneous row after consuming a protobuf message from a table with the `Kafka` engine. [#4808](https://github.com/yandex/ClickHouse/pull/4808) ([Vitaly Baranov](https://github.com/vitlibar))
* Fixed a crash in `JOIN ON` with a non-nullable and a nullable column. Also fixed the behaviour when `NULLs` appear among the right-hand keys in `ANY JOIN` + `join_use_nulls`. [#4815](https://github.com/yandex/ClickHouse/pull/4815) ([Artem Zuikov](https://github.com/4ertus2))
* Fixed a segmentation fault in `clickhouse-copier`. [#4835](https://github.com/yandex/ClickHouse/pull/4835) ([proller](https://github.com/proller))
* Fixed a race condition in `SELECT` from `system.tables` if the table is renamed or altered concurrently. [#4836](https://github.com/yandex/ClickHouse/pull/4836) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed a data race when fetching a data part that is already obsolete. [#4839](https://github.com/yandex/ClickHouse/pull/4839) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed a rare data race that could happen during `RENAME` of a MergeTree-family table. [#4844](https://github.com/yandex/ClickHouse/pull/4844) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed a segmentation fault in the `arrayIntersect` function that could happen when it was called with mixed constant and ordinary arguments. [#4847](https://github.com/yandex/ClickHouse/pull/4847) ([Lixiang Qian](https://github.com/fancyqlx))
* Fixed a rare bug when reading from an `Array(LowCardinality)` column containing a long sequence of empty arrays. [#4850](https://github.com/yandex/ClickHouse/pull/4850) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Fixed a crash in queries with `FULL/RIGHT JOIN` when joining on a nullable and a non-nullable column. [#4855](https://github.com/yandex/ClickHouse/pull/4855) ([Artem Zuikov](https://github.com/4ertus2))
* Fixed the `No message received` exception while fetching parts between replicas. [#4856](https://github.com/yandex/ClickHouse/pull/4856) ([alesapin](https://github.com/alesapin))
* Fixed a bug in the `arrayIntersect` function that produced wrong results when an array contained several repeated values. [#4871](https://github.com/yandex/ClickHouse/pull/4871) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Fixed a race condition during concurrent `ALTER COLUMN` queries that could lead to a server crash (fixes issue [#3421](https://github.com/yandex/ClickHouse/issues/3421)). [#4592](https://github.com/yandex/ClickHouse/pull/4592) ([Alex Zatelepin](https://github.com/ztlpn))
* Fixed an incorrect result in `FULL/RIGHT JOIN` queries with a constant column. [#4723](https://github.com/yandex/ClickHouse/pull/4723) ([Artem Zuikov](https://github.com/4ertus2))
* Fixed duplicates in `GLOBAL JOIN` with an asterisk. [#4705](https://github.com/yandex/ClickHouse/pull/4705) ([Artem Zuikov](https://github.com/4ertus2))
* Fixed codec parameter deduction in `ALTER MODIFY` queries when the column type is not specified. [#4883](https://github.com/yandex/ClickHouse/pull/4883) ([alesapin](https://github.com/alesapin))
* The `cutQueryStringAndFragment()` and `queryStringAndFragment()` functions now work correctly when the `URL` contains a fragment and no query string. [#4894](https://github.com/yandex/ClickHouse/pull/4894) ([Vitaly Baranov](https://github.com/vitlibar))
* Fixed a rare bug when the `min_bytes_to_use_direct_io` setting is greater than zero, which occurred when a thread had to seek backward in a column file. [#4897](https://github.com/yandex/ClickHouse/pull/4897) ([alesapin](https://github.com/alesapin))
* Fixed wrong argument types for aggregate functions with `LowCardinality` arguments (fixes issue [#4919](https://github.com/yandex/ClickHouse/issues/4919)). [#4922](https://github.com/yandex/ClickHouse/pull/4922) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Fixed wrong name qualification in `GLOBAL JOIN`. [#4969](https://github.com/yandex/ClickHouse/pull/4969) ([Artem Zuikov](https://github.com/4ertus2))
* Fixed the `toISOWeek` function result for year 1970. [#4988](https://github.com/yandex/ClickHouse/pull/4988) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed duplication of `DROP`, `TRUNCATE` and `OPTIMIZE` queries when executed `ON CLUSTER` for the `ReplicatedMergeTree*` table family. [#4991](https://github.com/yandex/ClickHouse/pull/4991) ([alesapin](https://github.com/alesapin))
### Backward Incompatible Changes
* Renamed the setting `insert_sample_with_metadata` to `input_format_defaults_for_omitted_fields`. [#4771](https://github.com/yandex/ClickHouse/pull/4771) ([Artem Zuikov](https://github.com/4ertus2))
* Added the setting `max_partitions_per_insert_block` (with default value 100). If an inserted block contains a larger number of partitions, an exception is thrown. Set it to 0 to remove the limit (not recommended). [#4845](https://github.com/yandex/ClickHouse/pull/4845) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Renamed the multi-search functions (`multiPosition` to `multiSearchAllPositions`, `multiSearch` to `multiSearchAny`, `firstMatch` to `multiSearchFirstIndex`). [#4780](https://github.com/yandex/ClickHouse/pull/4780) ([Danila Kutenin](https://github.com/danlark1))
### Performance Improvements
* Optimized the Volnitsky search algorithm by inlining, giving about a 5-10% search performance improvement for queries that search for many needles or many similar bigrams. [#4862](https://github.com/yandex/ClickHouse/pull/4862) ([Danila Kutenin](https://github.com/danlark1))
* Fixed a performance drop when the `use_uncompressed_cache` setting is greater than zero for queries whose data fits entirely in the cache. [#4913](https://github.com/yandex/ClickHouse/pull/4913) ([alesapin](https://github.com/alesapin))
### Build/Testing/Packaging Improvements
* Hardened debug builds: more granular memory mappings and ASLR; added memory protection for the mark cache and the index. This allows finding more memory-stomping bugs that ASan and TSan cannot detect. [#4632](https://github.com/yandex/ClickHouse/pull/4632) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Added the CMake options `ENABLE_PROTOBUF`, `ENABLE_PARQUET` and `ENABLE_BROTLI`, which allow disabling the corresponding components. [#4669](https://github.com/yandex/ClickHouse/pull/4669) ([Silviu Caragea](https://github.com/silviucpp))
* If queries hang during a test run, the list of queries and stack traces of all threads are now printed. [#4675](https://github.com/yandex/ClickHouse/pull/4675) ([alesapin](https://github.com/alesapin))
* Added retries on `Connection loss` errors in `clickhouse-test`. [#4682](https://github.com/yandex/ClickHouse/pull/4682) ([alesapin](https://github.com/alesapin))
* Added the ability to build for FreeBSD to the `packager` script. [#4712](https://github.com/yandex/ClickHouse/pull/4712) [#4748](https://github.com/yandex/ClickHouse/pull/4748) ([alesapin](https://github.com/alesapin))
* The installer now offers to set a password for the `'default'` user. [#4725](https://github.com/yandex/ClickHouse/pull/4725) ([proller](https://github.com/proller))
* Silenced warnings from the `rdkafka` library during the build. [#4740](https://github.com/yandex/ClickHouse/pull/4740) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Added the ability to build without SSL support. [#4750](https://github.com/yandex/ClickHouse/pull/4750) ([proller](https://github.com/proller))
* Added the ability to run the clickhouse-server Docker image under any user. [#4753](https://github.com/yandex/ClickHouse/pull/4753) ([Mikhail f. Shiryaev](https://github.com/Felixoid))
* Upgraded Boost to 1.69. [#4793](https://github.com/yandex/ClickHouse/pull/4793) ([proller](https://github.com/proller))
* Disabled usage of `mremap` when building with Thread Sanitizer, which led to false positives. Fixed TSan reports in stateful tests. [#4859](https://github.com/yandex/ClickHouse/pull/4859) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Added a test that checks using format schemas via the HTTP interface. [#4864](https://github.com/yandex/ClickHouse/pull/4864) ([Vitaly Baranov](https://github.com/vitlibar))
## ClickHouse release 19.4.4.33, 2019-04-17
### Bug Fixes
* An `std::bad_alloc` exception is now thrown instead of calling `std::terminate` when memory allocation fails. [#4665](https://github.com/yandex/ClickHouse/pull/4665) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed capnproto reading from buffer; sometimes files were not loaded successfully over HTTP. [#4674](https://github.com/yandex/ClickHouse/pull/4674) ([Vladislav](https://github.com/smirnov-vs))
* Fixed the error `Unknown log entry type: 0` after an `OPTIMIZE TABLE FINAL` query. [#4683](https://github.com/yandex/ClickHouse/pull/4683) ([Amos Bird](https://github.com/amosbird))
* Passing wrong arguments to the `hasAny` or `hasAll` functions could cause a segmentation fault. [#4698](https://github.com/yandex/ClickHouse/pull/4698) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed a deadlock that could occur during a `DROP DATABASE dictionary` query. [#4701](https://github.com/yandex/ClickHouse/pull/4701) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed undefined behavior in the `median` and `quantile` functions. [#4702](https://github.com/yandex/ClickHouse/pull/4702) ([hcz](https://github.com/hczhcz))
* Fixed compression level detection when `network_compression_method` is specified in lowercase. Broken in v19.1. [#4706](https://github.com/yandex/ClickHouse/pull/4706) ([proller](https://github.com/proller))
* The `<timezone>UTC</timezone>` setting is no longer ignored (fixes issue [#4658](https://github.com/yandex/ClickHouse/issues/4658)). [#4718](https://github.com/yandex/ClickHouse/pull/4718) ([proller](https://github.com/proller))
* Fixed `histogram` function behaviour with `Distributed` tables. [#4741](https://github.com/yandex/ClickHouse/pull/4741) ([olegkv](https://github.com/olegkv))
* Fixed the TSan report `destroy of a locked mutex`. [#4742](https://github.com/yandex/ClickHouse/pull/4742) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed a TSan report on server shutdown caused by a race condition in system logs usage. Also fixed a potential use-after-free on shutdown when `part_log` is enabled. [#4758](https://github.com/yandex/ClickHouse/pull/4758) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed rechecking of parts in `ReplicatedMergeTreeAlterThread` in case of errors. [#4772](https://github.com/yandex/ClickHouse/pull/4772) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Fixed arithmetic operations on intermediate aggregate function states for constant arguments (such as subquery results). [#4776](https://github.com/yandex/ClickHouse/pull/4776) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Column names are now always backquoted in metadata files. Otherwise it was impossible to create a table with a column named `index`. [#4782](https://github.com/yandex/ClickHouse/pull/4782) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed a crash in `ALTER ... MODIFY ORDER BY` on a `Distributed` table. [#4790](https://github.com/yandex/ClickHouse/pull/4790) ([TCeason](https://github.com/TCeason))
* Fixed a segmentation fault in queries with `JOIN ON` and the `enable_optimize_predicate_expression` setting enabled. [#4794](https://github.com/yandex/ClickHouse/pull/4794) ([Winter Zhang](https://github.com/zhang2014))
* Fixed adding an extraneous row after consuming a protobuf message from a table with the `Kafka` engine. [#4808](https://github.com/yandex/ClickHouse/pull/4808) ([Vitaly Baranov](https://github.com/vitlibar))
* Fixed a segmentation fault in `clickhouse-copier`. [#4835](https://github.com/yandex/ClickHouse/pull/4835) ([proller](https://github.com/proller))
* Fixed a race condition in `SELECT` from `system.tables` if the table is renamed or altered concurrently. [#4836](https://github.com/yandex/ClickHouse/pull/4836) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed a data race when fetching a data part that is already obsolete. [#4839](https://github.com/yandex/ClickHouse/pull/4839) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed a rare data race that could happen during `RENAME` of a MergeTree-family table. [#4844](https://github.com/yandex/ClickHouse/pull/4844) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed a segmentation fault in the `arrayIntersect` function that could happen when it was called with mixed constant and ordinary arguments. [#4847](https://github.com/yandex/ClickHouse/pull/4847) ([Lixiang Qian](https://github.com/fancyqlx))
* Fixed a rare bug when reading from an `Array(LowCardinality)` column containing a long sequence of empty arrays. [#4850](https://github.com/yandex/ClickHouse/pull/4850) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Fixed the `No message received` exception while fetching parts between replicas. [#4856](https://github.com/yandex/ClickHouse/pull/4856) ([alesapin](https://github.com/alesapin))
* Fixed a bug in the `arrayIntersect` function that produced wrong results when an array contained several repeated values. [#4871](https://github.com/yandex/ClickHouse/pull/4871) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Fixed a race condition during concurrent `ALTER COLUMN` queries that could lead to a server crash (fixes issue [#3421](https://github.com/yandex/ClickHouse/issues/3421)). [#4592](https://github.com/yandex/ClickHouse/pull/4592) ([Alex Zatelepin](https://github.com/ztlpn))
* Fixed codec parameter deduction in `ALTER MODIFY` queries when the column type is not specified. [#4883](https://github.com/yandex/ClickHouse/pull/4883) ([alesapin](https://github.com/alesapin))
* The `cutQueryStringAndFragment()` and `queryStringAndFragment()` functions now work correctly when the `URL` contains a fragment and no query string. [#4894](https://github.com/yandex/ClickHouse/pull/4894) ([Vitaly Baranov](https://github.com/vitlibar))
* Fixed a rare bug when the `min_bytes_to_use_direct_io` setting is greater than zero, which occurred when a thread had to seek backward in a column file. [#4897](https://github.com/yandex/ClickHouse/pull/4897) ([alesapin](https://github.com/alesapin))
* Fixed wrong argument types for aggregate functions with `LowCardinality` arguments (fixes issue [#4919](https://github.com/yandex/ClickHouse/issues/4919)). [#4922](https://github.com/yandex/ClickHouse/pull/4922) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Fixed the `toISOWeek` function result for year 1970. [#4988](https://github.com/yandex/ClickHouse/pull/4988) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed duplication of `DROP`, `TRUNCATE` and `OPTIMIZE` queries when executed `ON CLUSTER` for the `ReplicatedMergeTree*` table family. [#4991](https://github.com/yandex/ClickHouse/pull/4991) ([alesapin](https://github.com/alesapin))
### Improvements
* Ordinary, `DEFAULT`, `MATERIALIZED` and `ALIAS` columns are now kept in a single list (fixes issue [#2867](https://github.com/yandex/ClickHouse/issues/2867)). [#4707](https://github.com/yandex/ClickHouse/pull/4707) ([Alex Zatelepin](https://github.com/ztlpn))
## ClickHouse release 19.4.3.11, 2019-04-02
### Bug Fixes
* Fixed a crash in queries with `FULL/RIGHT JOIN` when joining on a nullable and a non-nullable column. [#4855](https://github.com/yandex/ClickHouse/pull/4855) ([Artem Zuikov](https://github.com/4ertus2))
* Fixed a segmentation fault in `clickhouse-copier`. [#4835](https://github.com/yandex/ClickHouse/pull/4835) ([proller](https://github.com/proller))
### Build/Testing/Packaging Improvements
* Added the ability to run the clickhouse-server Docker image under any user. [#4753](https://github.com/yandex/ClickHouse/pull/4753) ([Mikhail f. Shiryaev](https://github.com/Felixoid))
## ClickHouse release 19.4.2.7, 2019-03-30
### Bug Fixes
* Fixed a rare bug when reading from an `Array(LowCardinality)` column containing a long sequence of empty arrays. [#4850](https://github.com/yandex/ClickHouse/pull/4850) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
## ClickHouse release 19.4.1.3, 2019-03-19
### Bug Fixes
* Fixed the behaviour of remote queries that contained both `LIMIT BY` and `LIMIT`. Previously, `LIMIT` could be executed before `LIMIT BY` for such queries, leading to over-filtering. [#4708](https://github.com/yandex/ClickHouse/pull/4708) ([Constantin S. Pan](https://github.com/kvap))
## ClickHouse release 19.4.0.49, 2019-03-09
### New Features


@@ -1,6 +1,15 @@
project(ClickHouse)
cmake_minimum_required(VERSION 3.3)
cmake_policy(SET CMP0023 NEW)
foreach(policy
    CMP0023
    CMP0074 # CMake 3.12
  )
  if(POLICY ${policy})
    cmake_policy(SET ${policy} NEW)
  endif()
endforeach()
set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} "${CMAKE_CURRENT_SOURCE_DIR}/cmake/Modules/")
set(CMAKE_EXPORT_COMPILE_COMMANDS 1) # Write compile_commands.json
set(CMAKE_LINK_DEPENDS_NO_SHARED 1) # Do not relink all depended targets on .so
@@ -59,7 +68,7 @@ if (NOT MAKE_STATIC_LIBRARIES)
endif ()
if (SPLIT_SHARED_LIBRARIES)
set (LINK_MODE SHARED)
set(BUILD_SHARED_LIBS 1 CACHE INTERNAL "")
endif ()
if (USE_STATIC_LIBRARIES)
@@ -301,6 +310,7 @@ include (cmake/find_rt.cmake)
include (cmake/find_execinfo.cmake)
include (cmake/find_readline_edit.cmake)
include (cmake/find_re2.cmake)
include (cmake/find_libgsasl.cmake)
include (cmake/find_rdkafka.cmake)
include (cmake/find_capnp.cmake)
include (cmake/find_llvm.cmake)
@@ -308,7 +318,6 @@ include (cmake/find_cpuid.cmake) # Freebsd, bundled
if (NOT USE_CPUID)
include (cmake/find_cpuinfo.cmake) # Debian
endif()
include (cmake/find_libgsasl.cmake)
include (cmake/find_libxml2.cmake)
include (cmake/find_brotli.cmake)
include (cmake/find_protobuf.cmake)
@@ -318,6 +327,7 @@ include (cmake/find_consistent-hashing.cmake)
include (cmake/find_base64.cmake)
include (cmake/find_hyperscan.cmake)
include (cmake/find_lfalloc.cmake)
include (cmake/find_simdjson.cmake)
find_contrib_lib(cityhash)
find_contrib_lib(farmhash)
find_contrib_lib(metrohash)


@@ -12,8 +12,8 @@ ClickHouse is an open-source column-oriented database management system that all
* You can also [fill this form](https://forms.yandex.com/surveys/meet-yandex-clickhouse-team/) to meet Yandex ClickHouse team in person.
## Upcoming Events
* [ClickHouse Community Meetup in Limassol](https://www.facebook.com/events/386638262181785/) on May 7.
* ClickHouse at [Percona Live 2019](https://www.percona.com/live/19/other-open-source-databases-track) in Austin on May 28-30.
* [ClickHouse Community Meetup in San Francisco](https://www.meetup.com/San-Francisco-Bay-Area-ClickHouse-Meetup/events/261110652/) on June 4.
* [ClickHouse Community Meetup in Beijing](https://www.huodongxing.com/event/2483759276200) on June 8.
* [ClickHouse Community Meetup in Shenzhen](https://www.huodongxing.com/event/3483759917300) on October 20.
* [ClickHouse Community Meetup in Shanghai](https://www.huodongxing.com/event/4483760336000) on October 27.


@@ -1,88 +1,147 @@
# This file is copied from contrib/poco/cmake/FindODBC.cmake to allow building without submodules
# Distributed under the OSI-approved BSD 3-Clause License. See accompanying
# file Copyright.txt or https://cmake.org/licensing for details.
#.rst:
# FindODBC
# --------
#
# Find the ODBC driver manager includes and library.
# Find ODBC Runtime
#
# ODBC is an open standard for connecting to different databases in a
# semi-vendor-independent fashion. First you install the ODBC driver
# manager. Then you need a driver for each separate database you want
# to connect to (unless a generic one works). VTK includes neither
# the driver manager nor the vendor-specific drivers: you have to find
# those yourself.
# This will define the following variables::
#
# This module defines
# ODBC_INCLUDE_DIRECTORIES, where to find sql.h
# ODBC_LIBRARIES, the libraries to link against to use ODBC
# ODBC_FOUND. If false, you cannot build anything that requires ODBC.
# ODBC_FOUND - True if the system has the libraries
# ODBC_INCLUDE_DIRS - where to find the headers
# ODBC_LIBRARIES - where to find the libraries
# ODBC_DEFINITIONS - compile definitions
#
# Hints:
# Set ``ODBC_ROOT_DIR`` to the root directory of an installation.
#
include(FindPackageHandleStandardArgs)
option (ENABLE_ODBC "Enable ODBC" ${OS_LINUX})
if (OS_LINUX)
option (USE_INTERNAL_ODBC_LIBRARY "Set to FALSE to use system odbc library instead of bundled" ${NOT_UNBUNDLED})
else ()
option (USE_INTERNAL_ODBC_LIBRARY "Set to FALSE to use system odbc library instead of bundled" OFF)
endif ()
find_package(PkgConfig QUIET)
pkg_check_modules(PC_ODBC QUIET odbc)
if (USE_INTERNAL_ODBC_LIBRARY AND NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/unixodbc/README")
message (WARNING "submodule contrib/unixodbc is missing. to fix try run: \n git submodule update --init --recursive")
set (USE_INTERNAL_ODBC_LIBRARY 0)
endif ()
if(WIN32)
get_filename_component(kit_dir "[HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows Kits\\Installed Roots;KitsRoot]" REALPATH)
get_filename_component(kit81_dir "[HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows Kits\\Installed Roots;KitsRoot81]" REALPATH)
endif()
if (ENABLE_ODBC)
if (USE_INTERNAL_ODBC_LIBRARY)
set (ODBC_LIBRARIES unixodbc)
set (ODBC_INCLUDE_DIRECTORIES ${CMAKE_SOURCE_DIR}/contrib/unixodbc/include)
set (ODBC_FOUND 1)
set (USE_ODBC 1)
else ()
find_path(ODBC_INCLUDE_DIRECTORIES
NAMES sql.h
HINTS
/usr/include
/usr/include/iodbc
/usr/include/odbc
/usr/local/include
/usr/local/include/iodbc
/usr/local/include/odbc
/usr/local/iodbc/include
/usr/local/odbc/include
"C:/Program Files/ODBC/include"
"C:/Program Files/Microsoft SDKs/Windows/v7.0/include"
"C:/Program Files/Microsoft SDKs/Windows/v6.0a/include"
"C:/ODBC/include"
DOC "Specify the directory containing sql.h."
)
find_path(ODBC_INCLUDE_DIR
NAMES sql.h
HINTS
${ODBC_ROOT_DIR}/include
${ODBC_ROOT_INCLUDE_DIRS}
PATHS
${PC_ODBC_INCLUDE_DIRS}
/usr/include
/usr/local/include
/usr/local/odbc/include
/usr/local/iodbc/include
"C:/Program Files/ODBC/include"
"C:/Program Files/Microsoft SDKs/Windows/v7.0/include"
"C:/Program Files/Microsoft SDKs/Windows/v6.0a/include"
"C:/ODBC/include"
"${kit_dir}/Include/um"
"${kit81_dir}/Include/um"
PATH_SUFFIXES
odbc
iodbc
DOC "Specify the directory containing sql.h."
)
find_library(ODBC_LIBRARIES
NAMES iodbc odbc iodbcinst odbcinst odbc32
HINTS
/usr/lib
/usr/lib/iodbc
/usr/lib/odbc
/usr/local/lib
/usr/local/lib/iodbc
/usr/local/lib/odbc
/usr/local/iodbc/lib
/usr/local/odbc/lib
"C:/Program Files/ODBC/lib"
"C:/ODBC/lib/debug"
"C:/Program Files (x86)/Microsoft SDKs/Windows/v7.0A/Lib"
DOC "Specify the ODBC driver manager library here."
)
if(NOT ODBC_INCLUDE_DIR AND WIN32)
set(ODBC_INCLUDE_DIR "")
else()
set(REQUIRED_INCLUDE_DIR ODBC_INCLUDE_DIR)
endif()
# MinGW find usually fails
if(MINGW)
set(ODBC_INCLUDE_DIRECTORIES ".")
set(ODBC_LIBRARIES odbc32)
endif()
if(WIN32 AND CMAKE_SIZEOF_VOID_P EQUAL 8)
set(WIN_ARCH x64)
elseif(WIN32 AND CMAKE_SIZEOF_VOID_P EQUAL 4)
set(WIN_ARCH x86)
endif()
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(ODBC
DEFAULT_MSG
ODBC_INCLUDE_DIRECTORIES
ODBC_LIBRARIES)
find_library(ODBC_LIBRARY
NAMES unixodbc iodbc odbc odbc32
HINTS
${ODBC_ROOT_DIR}/lib
${ODBC_ROOT_LIBRARY_DIRS}
PATHS
${PC_ODBC_LIBRARY_DIRS}
/usr/lib
/usr/local/lib
/usr/local/odbc/lib
/usr/local/iodbc/lib
"C:/Program Files/ODBC/lib"
"C:/ODBC/lib/debug"
"C:/Program Files (x86)/Microsoft SDKs/Windows/v7.0A/Lib"
"${kit81_dir}/Lib/winv6.3/um"
"${kit_dir}/Lib/win8/um"
PATH_SUFFIXES
odbc
${WIN_ARCH}
DOC "Specify the ODBC driver manager library here."
)
mark_as_advanced(ODBC_FOUND ODBC_LIBRARIES ODBC_INCLUDE_DIRECTORIES)
endif ()
endif ()
if(NOT ODBC_LIBRARY AND WIN32)
# List names of ODBC libraries on Windows
set(ODBC_LIBRARY odbc32.lib)
endif()
message (STATUS "Using odbc: ${ODBC_INCLUDE_DIRECTORIES} : ${ODBC_LIBRARIES}")
# List additional libraries required to use ODBC library
if(WIN32 AND MSVC OR CMAKE_CXX_COMPILER_ID MATCHES "Intel")
set(_odbc_required_libs_names odbccp32;ws2_32)
endif()
foreach(_lib_name IN LISTS _odbc_required_libs_names)
find_library(_lib_path
NAMES ${_lib_name}
HINTS
${ODBC_ROOT_DIR}/lib
${ODBC_ROOT_LIBRARY_DIRS}
PATHS
${PC_ODBC_LIBRARY_DIRS}
/usr/lib
/usr/local/lib
/usr/local/odbc/lib
/usr/local/iodbc/lib
"C:/Program Files/ODBC/lib"
"C:/ODBC/lib/debug"
"C:/Program Files (x86)/Microsoft SDKs/Windows/v7.0A/Lib"
PATH_SUFFIXES
odbc
)
if (_lib_path)
list(APPEND _odbc_required_libs_paths ${_lib_path})
endif()
unset(_lib_path CACHE)
endforeach()
unset(_odbc_lib_paths)
unset(_odbc_required_libs_names)
find_package_handle_standard_args(ODBC
FOUND_VAR ODBC_FOUND
REQUIRED_VARS
ODBC_LIBRARY
${REQUIRED_INCLUDE_DIR}
VERSION_VAR ODBC_VERSION
)
if(ODBC_FOUND)
set(ODBC_LIBRARIES ${ODBC_LIBRARY} ${_odbc_required_libs_paths})
set(ODBC_INCLUDE_DIRS ${ODBC_INCLUDE_DIR})
set(ODBC_DEFINITIONS ${PC_ODBC_CFLAGS_OTHER})
endif()
if(ODBC_FOUND AND NOT TARGET ODBC::ODBC)
add_library(ODBC::ODBC UNKNOWN IMPORTED)
set_target_properties(ODBC::ODBC PROPERTIES
IMPORTED_LOCATION "${ODBC_LIBRARY}"
INTERFACE_LINK_LIBRARIES "${_odbc_required_libs_paths}"
INTERFACE_COMPILE_OPTIONS "${PC_ODBC_CFLAGS_OTHER}"
INTERFACE_INCLUDE_DIRECTORIES "${ODBC_INCLUDE_DIR}"
)
endif()
mark_as_advanced(ODBC_LIBRARY ODBC_INCLUDE_DIR)
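
Besides the classic `ODBC_LIBRARIES`/`ODBC_INCLUDE_DIRS` variables, the rewritten module exports an imported `ODBC::ODBC` target, so a consumer can link it directly. A minimal sketch, assuming a hypothetical `my_tool` target (not part of this commit):

```cmake
# Hypothetical consumer of the rewritten FindODBC module.
# find_package() locates the driver manager via the hints above
# (ODBC_ROOT_DIR, pkg-config, standard install paths).
find_package(ODBC REQUIRED)

add_executable(my_tool main.cpp)

# ODBC::ODBC carries the include directories, compile options and the
# extra Windows libraries (odbccp32, ws2_32) as INTERFACE properties,
# so no separate target_include_directories() call is needed.
target_link_libraries(my_tool PRIVATE ODBC::ODBC)
```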


@@ -203,12 +203,12 @@ endforeach()
if(Poco_DataODBC_LIBRARY)
list(APPEND Poco_DataODBC_LIBRARY ${ODBC_LIBRARIES} ${LTDL_LIBRARY})
list(APPEND Poco_INCLUDE_DIRS ${ODBC_INCLUDE_DIRECTORIES})
list(APPEND Poco_INCLUDE_DIRS ${ODBC_INCLUDE_DIRS})
endif()
if(Poco_SQLODBC_LIBRARY)
list(APPEND Poco_SQLODBC_LIBRARY ${ODBC_LIBRARIES} ${LTDL_LIBRARY})
list(APPEND Poco_INCLUDE_DIRS ${ODBC_INCLUDE_DIRECTORIES})
list(APPEND Poco_INCLUDE_DIRS ${ODBC_INCLUDE_DIRS})
endif()
if(Poco_NetSSL_LIBRARY)


@@ -1,9 +1,12 @@
option (USE_INTERNAL_BOOST_LIBRARY "Set to FALSE to use system boost library instead of bundled" ${NOT_UNBUNDLED})
# Test random file existing in all package variants
if (USE_INTERNAL_BOOST_LIBRARY AND NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/boost/libs/system/src/error_code.cpp")
message (WARNING "submodules in contrib/boost is missing. to fix try run: \n git submodule update --init --recursive")
set (USE_INTERNAL_BOOST_LIBRARY 0)
if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/boost/libs/system/src/error_code.cpp")
if(USE_INTERNAL_BOOST_LIBRARY)
message(WARNING "submodules in contrib/boost is missing. to fix try run: \n git submodule update --init --recursive")
endif()
set (USE_INTERNAL_BOOST_LIBRARY 0)
set (MISSING_INTERNAL_BOOST_LIBRARY 1)
endif ()
if (NOT USE_INTERNAL_BOOST_LIBRARY)
@@ -21,10 +24,9 @@ if (NOT USE_INTERNAL_BOOST_LIBRARY)
set (Boost_INCLUDE_DIRS "")
set (Boost_SYSTEM_LIBRARY "")
endif ()
endif ()
if (NOT Boost_SYSTEM_LIBRARY)
if (NOT Boost_SYSTEM_LIBRARY AND NOT MISSING_INTERNAL_BOOST_LIBRARY)
set (USE_INTERNAL_BOOST_LIBRARY 1)
set (Boost_SYSTEM_LIBRARY boost_system_internal)
set (Boost_PROGRAM_OPTIONS_LIBRARY boost_program_options_internal)
@@ -44,7 +46,6 @@ if (NOT Boost_SYSTEM_LIBRARY)
# For packaged version:
list (APPEND Boost_INCLUDE_DIRS "${ClickHouse_SOURCE_DIR}/contrib/boost")
endif ()
message (STATUS "Using Boost: ${Boost_INCLUDE_DIRS} : ${Boost_PROGRAM_OPTIONS_LIBRARY},${Boost_SYSTEM_LIBRARY},${Boost_FILESYSTEM_LIBRARY},${Boost_REGEX_LIBRARY}")


@@ -1,8 +1,7 @@
find_program (CCACHE_FOUND ccache)
if (CCACHE_FOUND AND NOT CMAKE_CXX_COMPILER_LAUNCHER MATCHES "ccache" AND NOT CMAKE_CXX_COMPILER MATCHES "ccache")
execute_process(COMMAND ${CCACHE_FOUND} "-V" OUTPUT_VARIABLE CCACHE_VERSION)
string(REGEX REPLACE "ccache version ([0-9\\.]+).*" "\\1" CCACHE_VERSION ${CCACHE_VERSION} )
string(REGEX REPLACE "ccache version ([0-9\\.]+).*" "\\1" CCACHE_VERSION ${CCACHE_VERSION})
if (CCACHE_VERSION VERSION_GREATER "3.2.0" OR NOT CMAKE_CXX_COMPILER_ID STREQUAL "Clang")
#message(STATUS "Using ${CCACHE_FOUND} ${CCACHE_VERSION}")


@@ -1,6 +1,9 @@
option(ENABLE_ICU "Enable ICU" ON)
if(ENABLE_ICU)
if (APPLE)
set(ICU_ROOT "/usr/local/opt/icu4c" CACHE STRING "")
endif()
find_package(ICU COMPONENTS i18n uc data) # TODO: remove Modules/FindICU.cmake after cmake 3.7
#set (ICU_LIBRARIES ${ICU_I18N_LIBRARY} ${ICU_UC_LIBRARY} ${ICU_DATA_LIBRARY} CACHE STRING "")
if(ICU_FOUND)


@@ -1,4 +1,4 @@
if (NOT SANITIZE AND NOT ARCH_ARM AND NOT ARCH_32 AND NOT ARCH_PPC64LE AND NOT OS_FREEBSD)
if (NOT SANITIZE AND NOT ARCH_ARM AND NOT ARCH_32 AND NOT ARCH_PPC64LE AND NOT OS_FREEBSD AND NOT APPLE)
option (ENABLE_LFALLOC "Set to FALSE to disable usage of the bundled LFAlloc allocator" ${NOT_UNBUNDLED})
endif ()


@@ -22,4 +22,8 @@ elseif (NOT MISSING_INTERNAL_LIBGSASL_LIBRARY AND NOT APPLE AND NOT ARCH_32)
set (LIBGSASL_LIBRARY libgsasl)
endif ()
message (STATUS "Using libgsasl: ${LIBGSASL_INCLUDE_DIR} : ${LIBGSASL_LIBRARY}")
if(LIBGSASL_LIBRARY AND LIBGSASL_INCLUDE_DIR)
set (USE_LIBGSASL 1)
endif()
message (STATUS "Using libgsasl=${USE_LIBGSASL}: ${LIBGSASL_INCLUDE_DIR} : ${LIBGSASL_LIBRARY}")


@@ -1,93 +1,34 @@
# This file is copied from contrib/poco/cmake/FindODBC.cmake to allow building without submodules
#
# Find the ODBC driver manager includes and library.
#
# ODBC is an open standard for connecting to different databases in a
# semi-vendor-independent fashion. First you install the ODBC driver
# manager. Then you need a driver for each separate database you want
# to connect to (unless a generic one works). VTK includes neither
# the driver manager nor the vendor-specific drivers: you have to find
# those yourself.
#
# This module defines
# ODBC_INCLUDE_DIRECTORIES, where to find sql.h
# ODBC_LIBRARIES, the libraries to link against to use ODBC
# ODBC_FOUND. If false, you cannot build anything that requires ODBC.
option (ENABLE_ODBC "Enable ODBC" ${OS_LINUX})
if (OS_LINUX)
option (USE_INTERNAL_ODBC_LIBRARY "Set to FALSE to use system odbc library instead of bundled" ${NOT_UNBUNDLED})
else ()
option (USE_INTERNAL_ODBC_LIBRARY "Set to FALSE to use system odbc library instead of bundled" OFF)
endif ()
if (USE_INTERNAL_ODBC_LIBRARY AND NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/unixodbc/README")
message (WARNING "submodule contrib/unixodbc is missing. to fix try run: \n git submodule update --init --recursive")
set (USE_INTERNAL_ODBC_LIBRARY 0)
endif ()
set (ODBC_INCLUDE_DIRECTORIES ) # Include directories will be either used automatically by target_include_directories or set later.
if (ENABLE_ODBC)
if (USE_INTERNAL_ODBC_LIBRARY)
set (ODBC_LIBRARIES unixodbc)
set (ODBC_FOUND 1)
set (USE_ODBC 1)
if(ENABLE_ODBC)
if (OS_LINUX)
option(USE_INTERNAL_ODBC_LIBRARY "Set to FALSE to use system odbc library instead of bundled" ${NOT_UNBUNDLED})
else ()
find_path(ODBC_INCLUDE_DIRECTORIES
NAMES sql.h
HINTS
/usr/include
/usr/include/iodbc
/usr/include/odbc
/usr/local/include
/usr/local/include/iodbc
/usr/local/include/odbc
/usr/local/iodbc/include
/usr/local/odbc/include
"C:/Program Files/ODBC/include"
"C:/Program Files/Microsoft SDKs/Windows/v7.0/include"
"C:/Program Files/Microsoft SDKs/Windows/v6.0a/include"
"C:/ODBC/include"
DOC "Specify the directory containing sql.h."
)
option(USE_INTERNAL_ODBC_LIBRARY "Set to FALSE to use system odbc library instead of bundled" OFF)
endif()
find_library(ODBC_LIBRARIES
NAMES iodbc odbc iodbcinst odbcinst odbc32
HINTS
/usr/lib
/usr/lib/iodbc
/usr/lib/odbc
/usr/local/lib
/usr/local/lib/iodbc
/usr/local/lib/odbc
/usr/local/iodbc/lib
/usr/local/odbc/lib
"C:/Program Files/ODBC/lib"
"C:/ODBC/lib/debug"
"C:/Program Files (x86)/Microsoft SDKs/Windows/v7.0A/Lib"
DOC "Specify the ODBC driver manager library here."
)
if(USE_INTERNAL_ODBC_LIBRARY AND NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/unixodbc/README")
message(WARNING "submodule contrib/unixodbc is missing. to fix try run: \n git submodule update --init --recursive")
set(USE_INTERNAL_ODBC_LIBRARY 0)
set(MISSING_INTERNAL_ODBC_LIBRARY 1)
endif()
# MinGW find usually fails
if (MINGW)
set(ODBC_INCLUDE_DIRECTORIES ".")
set(ODBC_LIBRARIES odbc32)
endif ()
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(ODBC
DEFAULT_MSG
ODBC_INCLUDE_DIRECTORIES
ODBC_LIBRARIES)
if (USE_STATIC_LIBRARIES)
list(APPEND ODBC_LIBRARIES ${LTDL_LIBRARY})
endif ()
mark_as_advanced(ODBC_FOUND ODBC_LIBRARIES ODBC_INCLUDE_DIRECTORIES)
set(ODBC_INCLUDE_DIRS ) # Include directories will be either used automatically by target_include_directories or set later.
if(USE_INTERNAL_ODBC_LIBRARY AND NOT MISSING_INTERNAL_ODBC_LIBRARY)
set(ODBC_LIBRARY unixodbc)
set(ODBC_LIBRARIES ${ODBC_LIBRARY})
set(ODBC_INCLUDE_DIRS "${ClickHouse_SOURCE_DIR}/contrib/unixodbc/include")
set(ODBC_FOUND 1)
else()
find_package(ODBC)
endif ()
endif ()
message (STATUS "Using odbc=${ODBC_FOUND}: ${ODBC_INCLUDE_DIRECTORIES} : ${ODBC_LIBRARIES}")
if(ODBC_FOUND)
set(USE_ODBC 1)
set(ODBC_INCLUDE_DIRECTORIES ${ODBC_INCLUDE_DIRS}) # for old poco
set(ODBC_INCLUDE_DIR ${ODBC_INCLUDE_DIRS}) # for old poco
endif()
message(STATUS "Using odbc=${USE_ODBC}: ${ODBC_INCLUDE_DIRS} : ${ODBC_LIBRARIES}")
endif()


@@ -76,7 +76,7 @@ elseif (NOT MISSING_INTERNAL_POCO_LIBRARY)
set (Poco_SQLODBC_INCLUDE_DIR
"${ClickHouse_SOURCE_DIR}/contrib/poco/SQL/ODBC/include/"
"${ClickHouse_SOURCE_DIR}/contrib/poco/Data/ODBC/include/"
${ODBC_INCLUDE_DIRECTORIES}
${ODBC_INCLUDE_DIRS}
)
set (Poco_SQLODBC_LIBRARY PocoSQLODBC ${ODBC_LIBRARIES} ${LTDL_LIBRARY})
endif ()
@@ -88,7 +88,7 @@ elseif (NOT MISSING_INTERNAL_POCO_LIBRARY)
set (USE_POCO_DATAODBC 1)
set (Poco_DataODBC_INCLUDE_DIR
"${ClickHouse_SOURCE_DIR}/contrib/poco/Data/ODBC/include/"
${ODBC_INCLUDE_DIRECTORIES}
${ODBC_INCLUDE_DIRS}
)
set (Poco_DataODBC_LIBRARY PocoDataODBC ${ODBC_LIBRARIES} ${LTDL_LIBRARY})
endif ()


@@ -10,7 +10,7 @@ endif ()
if (ENABLE_RDKAFKA)
if (OS_LINUX AND NOT ARCH_ARM)
if (OS_LINUX AND NOT ARCH_ARM AND USE_LIBGSASL)
option (USE_INTERNAL_RDKAFKA_LIBRARY "Set to FALSE to use system librdkafka instead of the bundled" ${NOT_UNBUNDLED})
endif ()


@@ -1,5 +1,13 @@
option (USE_INTERNAL_RE2_LIBRARY "Set to FALSE to use system re2 library instead of bundled [slower]" ${NOT_UNBUNDLED})
if(NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/re2/CMakeLists.txt")
if(USE_INTERNAL_RE2_LIBRARY)
message(WARNING "submodule contrib/re2 is missing. to fix try run: \n git submodule update --init --recursive")
endif()
set(USE_INTERNAL_RE2_LIBRARY 0)
set(MISSING_INTERNAL_RE2_LIBRARY 1)
endif()
if (NOT USE_INTERNAL_RE2_LIBRARY)
find_library (RE2_LIBRARY re2)
find_path (RE2_INCLUDE_DIR NAMES re2/re2.h PATHS ${RE2_INCLUDE_PATHS})

cmake/find_simdjson.cmake

@@ -0,0 +1,14 @@
if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/simdjson/include/simdjson/jsonparser.h")
message (WARNING "submodule contrib/simdjson is missing. to fix try run: \n git submodule update --init --recursive")
return()
endif ()
if (NOT HAVE_AVX2)
message (WARNING "submodule contrib/simdjson requires AVX2 support")
return()
endif ()
option (USE_SIMDJSON "Use simdjson" ON)
set (SIMDJSON_LIBRARY "simdjson")
message(STATUS "Using simdjson=${USE_SIMDJSON}: ${SIMDJSON_LIBRARY}")


@@ -60,6 +60,72 @@ if(OPENSSL_FOUND)
set(USE_SSL 1)
endif()
# used by new poco
# Part of /usr/share/cmake-*/Modules/FindOpenSSL.cmake, with all "EXISTS" checks removed
if(OPENSSL_FOUND AND NOT USE_INTERNAL_SSL_LIBRARY)
if(NOT TARGET OpenSSL::Crypto AND
(OPENSSL_CRYPTO_LIBRARY OR
LIB_EAY_LIBRARY_DEBUG OR
LIB_EAY_LIBRARY_RELEASE)
)
add_library(OpenSSL::Crypto UNKNOWN IMPORTED)
set_target_properties(OpenSSL::Crypto PROPERTIES
INTERFACE_INCLUDE_DIRECTORIES "${OPENSSL_INCLUDE_DIR}")
if(OPENSSL_CRYPTO_LIBRARY)
set_target_properties(OpenSSL::Crypto PROPERTIES
IMPORTED_LINK_INTERFACE_LANGUAGES "C"
IMPORTED_LOCATION "${OPENSSL_CRYPTO_LIBRARY}")
endif()
if(LIB_EAY_LIBRARY_RELEASE)
set_property(TARGET OpenSSL::Crypto APPEND PROPERTY
IMPORTED_CONFIGURATIONS RELEASE)
set_target_properties(OpenSSL::Crypto PROPERTIES
IMPORTED_LINK_INTERFACE_LANGUAGES_RELEASE "C"
IMPORTED_LOCATION_RELEASE "${LIB_EAY_LIBRARY_RELEASE}")
endif()
if(LIB_EAY_LIBRARY_DEBUG)
set_property(TARGET OpenSSL::Crypto APPEND PROPERTY
IMPORTED_CONFIGURATIONS DEBUG)
set_target_properties(OpenSSL::Crypto PROPERTIES
IMPORTED_LINK_INTERFACE_LANGUAGES_DEBUG "C"
IMPORTED_LOCATION_DEBUG "${LIB_EAY_LIBRARY_DEBUG}")
endif()
endif()
if(NOT TARGET OpenSSL::SSL AND
(OPENSSL_SSL_LIBRARY OR
SSL_EAY_LIBRARY_DEBUG OR
SSL_EAY_LIBRARY_RELEASE)
)
add_library(OpenSSL::SSL UNKNOWN IMPORTED)
set_target_properties(OpenSSL::SSL PROPERTIES
INTERFACE_INCLUDE_DIRECTORIES "${OPENSSL_INCLUDE_DIR}")
if(OPENSSL_SSL_LIBRARY)
set_target_properties(OpenSSL::SSL PROPERTIES
IMPORTED_LINK_INTERFACE_LANGUAGES "C"
IMPORTED_LOCATION "${OPENSSL_SSL_LIBRARY}")
endif()
if(SSL_EAY_LIBRARY_RELEASE)
set_property(TARGET OpenSSL::SSL APPEND PROPERTY
IMPORTED_CONFIGURATIONS RELEASE)
set_target_properties(OpenSSL::SSL PROPERTIES
IMPORTED_LINK_INTERFACE_LANGUAGES_RELEASE "C"
IMPORTED_LOCATION_RELEASE "${SSL_EAY_LIBRARY_RELEASE}")
endif()
if(SSL_EAY_LIBRARY_DEBUG)
set_property(TARGET OpenSSL::SSL APPEND PROPERTY
IMPORTED_CONFIGURATIONS DEBUG)
set_target_properties(OpenSSL::SSL PROPERTIES
IMPORTED_LINK_INTERFACE_LANGUAGES_DEBUG "C"
IMPORTED_LOCATION_DEBUG "${SSL_EAY_LIBRARY_DEBUG}")
endif()
if(TARGET OpenSSL::Crypto)
set_target_properties(OpenSSL::SSL PROPERTIES
INTERFACE_LINK_LIBRARIES OpenSSL::Crypto)
endif()
endif()
endif()
endif ()
message (STATUS "Using ssl=${USE_SSL}: ${OPENSSL_INCLUDE_DIR} : ${OPENSSL_LIBRARIES}")

View File

@ -2,20 +2,28 @@ if (NOT OS_FREEBSD AND NOT ARCH_32)
option (USE_INTERNAL_ZLIB_LIBRARY "Set to FALSE to use system zlib library instead of bundled" ${NOT_UNBUNDLED})
endif ()
if (NOT MSVC)
set (INTERNAL_ZLIB_NAME "zlib-ng" CACHE INTERNAL "")
else ()
set (INTERNAL_ZLIB_NAME "zlib" CACHE INTERNAL "")
if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/${INTERNAL_ZLIB_NAME}")
message (WARNING "Will use standard zlib, please clone manually:\n git clone https://github.com/madler/zlib.git ${ClickHouse_SOURCE_DIR}/contrib/${INTERNAL_ZLIB_NAME}")
endif ()
endif ()
if(NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/${INTERNAL_ZLIB_NAME}/zlib.h")
if(USE_INTERNAL_ZLIB_LIBRARY)
message(WARNING "submodule contrib/${INTERNAL_ZLIB_NAME} is missing. to fix try run: \n git submodule update --init --recursive")
endif()
set(USE_INTERNAL_ZLIB_LIBRARY 0)
set(MISSING_INTERNAL_ZLIB_LIBRARY 1)
endif()
if (NOT USE_INTERNAL_ZLIB_LIBRARY)
find_package (ZLIB)
endif ()
if (NOT ZLIB_FOUND)
if (NOT MSVC)
set (INTERNAL_ZLIB_NAME "zlib-ng" CACHE INTERNAL "")
else ()
set (INTERNAL_ZLIB_NAME "zlib" CACHE INTERNAL "")
if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/${INTERNAL_ZLIB_NAME}")
message (WARNING "Will use standard zlib, please clone manually:\n git clone https://github.com/madler/zlib.git ${ClickHouse_SOURCE_DIR}/contrib/${INTERNAL_ZLIB_NAME}")
endif ()
endif ()
if (NOT ZLIB_FOUND AND NOT MISSING_INTERNAL_ZLIB_LIBRARY)
set (USE_INTERNAL_ZLIB_LIBRARY 1)
set (ZLIB_INCLUDE_DIR "${ClickHouse_SOURCE_DIR}/contrib/${INTERNAL_ZLIB_NAME}" "${ClickHouse_BINARY_DIR}/contrib/${INTERNAL_ZLIB_NAME}" CACHE INTERNAL "") # generated zconf.h
set (ZLIB_INCLUDE_DIRS ${ZLIB_INCLUDE_DIR}) # for poco

View File

@ -1,9 +1,12 @@
option (USE_INTERNAL_ZSTD_LIBRARY "Set to FALSE to use system zstd library instead of bundled" ${NOT_UNBUNDLED})
if (USE_INTERNAL_ZSTD_LIBRARY AND NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/zstd/lib/zstd.h")
message (WARNING "submodule contrib/zstd is missing. to fix try run: \n git submodule update --init --recursive")
set (USE_INTERNAL_ZSTD_LIBRARY 0)
endif ()
if(NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/zstd/lib/zstd.h")
if(USE_INTERNAL_ZSTD_LIBRARY)
message(WARNING "submodule contrib/zstd is missing. to fix try run: \n git submodule update --init --recursive")
endif()
set(USE_INTERNAL_ZSTD_LIBRARY 0)
set(MISSING_INTERNAL_ZSTD_LIBRARY 1)
endif()
if (NOT USE_INTERNAL_ZSTD_LIBRARY)
find_library (ZSTD_LIBRARY zstd)
@ -11,7 +14,7 @@ if (NOT USE_INTERNAL_ZSTD_LIBRARY)
endif ()
if (ZSTD_LIBRARY AND ZSTD_INCLUDE_DIR)
else ()
elseif (NOT MISSING_INTERNAL_ZSTD_LIBRARY)
set (USE_INTERNAL_ZSTD_LIBRARY 1)
set (ZSTD_LIBRARY zstd)
set (ZSTD_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/contrib/zstd/lib)

View File

@ -120,6 +120,9 @@ if (USE_INTERNAL_SSL_LIBRARY)
target_include_directories(${OPENSSL_CRYPTO_LIBRARY} SYSTEM PUBLIC ${OPENSSL_INCLUDE_DIR})
target_include_directories(${OPENSSL_SSL_LIBRARY} SYSTEM PUBLIC ${OPENSSL_INCLUDE_DIR})
set (POCO_SKIP_OPENSSL_FIND 1)
add_library(OpenSSL::Crypto ALIAS ${OPENSSL_CRYPTO_LIBRARY})
add_library(OpenSSL::SSL ALIAS ${OPENSSL_SSL_LIBRARY})
endif ()
if (ENABLE_MYSQL AND USE_INTERNAL_MYSQL_LIBRARY)
@ -144,6 +147,7 @@ endif()
if (ENABLE_ODBC AND USE_INTERNAL_ODBC_LIBRARY)
add_subdirectory (unixodbc-cmake)
add_library(ODBC::ODBC ALIAS ${ODBC_LIBRARIES})
endif ()
if (USE_INTERNAL_CAPNP_LIBRARY)
@ -223,7 +227,7 @@ if (USE_INTERNAL_POCO_LIBRARY)
set (ENABLE_TESTS 0)
set (POCO_ENABLE_TESTS 0)
set (CMAKE_DISABLE_FIND_PACKAGE_ZLIB 1)
if (MSVC)
if (MSVC OR NOT USE_POCO_DATAODBC)
set (ENABLE_DATA_ODBC 0 CACHE INTERNAL "") # TODO (build fail)
endif ()
add_subdirectory (poco)
@ -309,3 +313,7 @@ endif()
if (USE_INTERNAL_HYPERSCAN_LIBRARY)
add_subdirectory (hyperscan)
endif()
if (USE_SIMDJSON)
add_subdirectory (simdjson-cmake)
endif()

View File

@ -41,7 +41,7 @@ set( thriftcpp_threads_SOURCES
${LIBRARY_DIR}/src/thrift/concurrency/Monitor.cpp
${LIBRARY_DIR}/src/thrift/concurrency/Mutex.cpp
)
add_library(${THRIFT_LIBRARY} ${LINK_MODE} ${thriftcpp_SOURCES} ${thriftcpp_threads_SOURCES})
add_library(${THRIFT_LIBRARY} ${thriftcpp_SOURCES} ${thriftcpp_threads_SOURCES})
set_target_properties(${THRIFT_LIBRARY} PROPERTIES CXX_STANDARD 14) # REMOVE after https://github.com/apache/thrift/pull/1641
target_include_directories(${THRIFT_LIBRARY} SYSTEM PUBLIC ${ClickHouse_SOURCE_DIR}/contrib/thrift/lib/cpp/src PRIVATE ${Boost_INCLUDE_DIRS})
@ -149,7 +149,7 @@ if (ARROW_WITH_ZSTD)
endif()
add_library(${ARROW_LIBRARY} ${LINK_MODE} ${ARROW_SRCS})
add_library(${ARROW_LIBRARY} ${ARROW_SRCS})
target_include_directories(${ARROW_LIBRARY} SYSTEM PUBLIC ${ClickHouse_SOURCE_DIR}/contrib/arrow/cpp/src PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/cpp/src ${Boost_INCLUDE_DIRS})
target_link_libraries(${ARROW_LIBRARY} PRIVATE ${DOUBLE_CONVERSION_LIBRARIES} Threads::Threads)
if (ARROW_WITH_LZ4)
@ -195,7 +195,7 @@ list(APPEND PARQUET_SRCS
${CMAKE_CURRENT_SOURCE_DIR}/cpp/src/parquet/parquet_constants.cpp
${CMAKE_CURRENT_SOURCE_DIR}/cpp/src/parquet/parquet_types.cpp
)
add_library(${PARQUET_LIBRARY} ${LINK_MODE} ${PARQUET_SRCS})
add_library(${PARQUET_LIBRARY} ${PARQUET_SRCS})
target_include_directories(${PARQUET_LIBRARY} SYSTEM PUBLIC ${ClickHouse_SOURCE_DIR}/contrib/arrow/cpp/src ${CMAKE_CURRENT_SOURCE_DIR}/cpp/src)
include(${ClickHouse_SOURCE_DIR}/contrib/thrift/build/cmake/ConfigureChecks.cmake) # makes config.h
target_link_libraries(${PARQUET_LIBRARY} PUBLIC ${ARROW_LIBRARY} PRIVATE ${THRIFT_LIBRARY} ${Boost_REGEX_LIBRARY})

View File

@ -24,7 +24,7 @@ endif ()
configure_file(config.h.in ${CMAKE_CURRENT_BINARY_DIR}/config.h)
add_library(base64 ${LINK_MODE}
add_library(base64
${LIBRARY_DIR}/lib/lib.c
${LIBRARY_DIR}/lib/codec_choose.c
${LIBRARY_DIR}/lib/arch/avx/codec.c

View File

@ -20,7 +20,7 @@ endif()
macro(add_boost_lib lib_name)
add_headers_and_sources(boost_${lib_name} ${LIBRARY_DIR}/libs/${lib_name}/src)
add_library(boost_${lib_name}_internal ${LINK_MODE} ${boost_${lib_name}_sources})
add_library(boost_${lib_name}_internal ${boost_${lib_name}_sources})
target_include_directories(boost_${lib_name}_internal SYSTEM BEFORE PUBLIC ${Boost_INCLUDE_DIRS})
target_compile_definitions(boost_${lib_name}_internal PUBLIC BOOST_SYSTEM_NO_DEPRECATED)
endmacro()

View File

@ -28,6 +28,6 @@ set(SRCS
${BROTLI_SOURCE_DIR}/common/transform.c
)
add_library(brotli ${LINK_MODE} ${SRCS})
add_library(brotli ${SRCS})
target_include_directories(brotli PUBLIC ${BROTLI_SOURCE_DIR}/include)

View File

@ -1,6 +1,6 @@
SET(LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/cctz)
add_library(cctz ${LINK_MODE}
add_library(cctz
${LIBRARY_DIR}/src/civil_time_detail.cc
${LIBRARY_DIR}/src/time_zone_fixed.cc
${LIBRARY_DIR}/src/time_zone_format.cc

View File

@ -23,7 +23,7 @@ set(SRCS
${CPPKAFKA_DIR}/src/consumer.cpp
)
add_library(cppkafka ${LINK_MODE} ${SRCS})
add_library(cppkafka ${SRCS})
target_link_libraries(cppkafka PRIVATE ${RDKAFKA_LIBRARY})
target_include_directories(cppkafka PRIVATE ${CPPKAFKA_DIR}/include/cppkafka)

View File

@ -358,8 +358,15 @@ static char* AllocWithMMap(uintptr_t sz, EMMapMode mode) {
}
#if defined(USE_LFALLOC_RANDOM_HINT)
static thread_local std::mt19937_64 generator(std::random_device{}());
std::uniform_int_distribution<intptr_t> distr(areaStart, areaFinish / 2);
char* largeBlock = (char*)mmap(reinterpret_cast<void*>(distr(generator)), sz, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANON, -1, 0);
std::uniform_int_distribution<intptr_t> distr(areaStart, areaFinish - sz - 1);
char* largeBlock;
static constexpr size_t MaxAttempts = 100;
size_t attempt = 0;
do
{
largeBlock = (char*)mmap(reinterpret_cast<void*>(distr(generator)), sz, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANON, -1, 0);
++attempt;
} while (uintptr_t(((char*)largeBlock - ALLOC_START) + sz) >= areaFinish && attempt < MaxAttempts && munmap(largeBlock, sz) == 0);
#else
char* largeBlock = (char*)mmap(0, sz, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANON, -1, 0);
#endif
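A note on the hunk above: mmap() treats its address argument as a hint only, so the new code draws a random hint inside the allocation area and unmaps-and-retries (up to MaxAttempts = 100 times) whenever the kernel places the block outside it. A self-contained sketch of the same pattern, with the area bounds passed in explicitly; nothing below is LFAlloc API:

#include <sys/mman.h>
#include <cstddef>
#include <cstdint>
#include <random>

static void * mmap_in_area(std::size_t sz, std::uintptr_t area_start, std::uintptr_t area_finish)
{
    static thread_local std::mt19937_64 generator(std::random_device{}());
    std::uniform_int_distribution<std::uintptr_t> distr(area_start, area_finish - sz - 1);
    for (int attempt = 0; attempt < 100; ++attempt)
    {
        void * block = mmap(reinterpret_cast<void *>(distr(generator)), sz,
                            PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANON, -1, 0);
        if (block == MAP_FAILED)
            return nullptr;                     // out of memory or bad arguments
        auto addr = reinterpret_cast<std::uintptr_t>(block);
        if (addr >= area_start && addr + sz < area_finish)
            return block;                       // the hint was honoured closely enough
        munmap(block, sz);                      // landed outside the area: retry
    }
    return nullptr;                             // give up, as the loop above does after MaxAttempts
}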

View File

@ -1,4 +1,4 @@
add_library (btrie
add_library(btrie
src/btrie.c
include/btrie.h
)

View File

@ -183,7 +183,7 @@ set(SRCS
)
# target
add_library(hdfs3 STATIC ${SRCS} ${PROTO_SOURCES} ${PROTO_HEADERS})
add_library(hdfs3 ${SRCS} ${PROTO_SOURCES} ${PROTO_HEADERS})
if (USE_INTERNAL_PROTOBUF_LIBRARY)
add_dependencies(hdfs3 protoc)

View File

@ -33,6 +33,7 @@ set(SRCS
${RDKAFKA_SOURCE_DIR}/rdkafka_roundrobin_assignor.c
${RDKAFKA_SOURCE_DIR}/rdkafka_sasl.c
${RDKAFKA_SOURCE_DIR}/rdkafka_sasl_plain.c
${RDKAFKA_SOURCE_DIR}/rdkafka_sasl_scram.c
${RDKAFKA_SOURCE_DIR}/rdkafka_subscription.c
${RDKAFKA_SOURCE_DIR}/rdkafka_timer.c
${RDKAFKA_SOURCE_DIR}/rdkafka_topic.c
@ -54,11 +55,11 @@ set(SRCS
${RDKAFKA_SOURCE_DIR}/rdgz.c
)
add_library(rdkafka ${LINK_MODE} ${SRCS})
add_library(rdkafka ${SRCS})
target_include_directories(rdkafka SYSTEM PUBLIC include)
target_include_directories(rdkafka SYSTEM PUBLIC ${RDKAFKA_SOURCE_DIR}) # Because weird logic with "include_next" is used.
target_include_directories(rdkafka SYSTEM PRIVATE ${ZSTD_INCLUDE_DIR}/common) # Because wrong path to "zstd_errors.h" is used.
target_link_libraries(rdkafka PUBLIC ${ZLIB_LIBRARIES} ${ZSTD_LIBRARY} ${LZ4_LIBRARY})
target_link_libraries(rdkafka PUBLIC ${ZLIB_LIBRARIES} ${ZSTD_LIBRARY} ${LZ4_LIBRARY} ${LIBGSASL_LIBRARY})
if(OPENSSL_SSL_LIBRARY AND OPENSSL_CRYPTO_LIBRARY)
target_link_libraries(rdkafka PUBLIC ${OPENSSL_SSL_LIBRARY} ${OPENSSL_CRYPTO_LIBRARY})
endif()

View File

@ -12,7 +12,7 @@
#define ENABLE_SHAREDPTR_DEBUG 0
#define ENABLE_LZ4_EXT 1
#define ENABLE_SSL 1
//#define ENABLE_SASL 1
#define ENABLE_SASL 1
#define MKL_APP_NAME "librdkafka"
#define MKL_APP_DESC_ONELINE "The Apache Kafka C/C++ library"
// distro
@ -62,7 +62,7 @@
// libssl
#define WITH_SSL 1
// WITH_SASL_SCRAM
//#define WITH_SASL_SCRAM 1
#define WITH_SASL_SCRAM 1
// crc32chw
#if !defined(__PPC__)
#define WITH_CRC32C_HW 1

View File

@ -1,62 +1,56 @@
set(LIBXML2_SOURCE_DIR ${CMAKE_SOURCE_DIR}/contrib/libxml2)
set(LIBXML2_BINARY_DIR ${CMAKE_BINARY_DIR}/contrib/libxml2)
set(SRCS
${LIBXML2_SOURCE_DIR}/parser.c
${LIBXML2_SOURCE_DIR}/HTMLparser.c
${LIBXML2_SOURCE_DIR}/buf.c
${LIBXML2_SOURCE_DIR}/xzlib.c
${LIBXML2_SOURCE_DIR}/xmlregexp.c
${LIBXML2_SOURCE_DIR}/entities.c
${LIBXML2_SOURCE_DIR}/rngparser.c
${LIBXML2_SOURCE_DIR}/encoding.c
${LIBXML2_SOURCE_DIR}/legacy.c
${LIBXML2_SOURCE_DIR}/error.c
${LIBXML2_SOURCE_DIR}/debugXML.c
${LIBXML2_SOURCE_DIR}/xpointer.c
${LIBXML2_SOURCE_DIR}/DOCBparser.c
${LIBXML2_SOURCE_DIR}/xmlcatalog.c
${LIBXML2_SOURCE_DIR}/c14n.c
${LIBXML2_SOURCE_DIR}/xmlreader.c
${LIBXML2_SOURCE_DIR}/xmlstring.c
${LIBXML2_SOURCE_DIR}/dict.c
${LIBXML2_SOURCE_DIR}/xpath.c
${LIBXML2_SOURCE_DIR}/tree.c
${LIBXML2_SOURCE_DIR}/trionan.c
${LIBXML2_SOURCE_DIR}/pattern.c
${LIBXML2_SOURCE_DIR}/globals.c
${LIBXML2_SOURCE_DIR}/xmllint.c
${LIBXML2_SOURCE_DIR}/chvalid.c
${LIBXML2_SOURCE_DIR}/relaxng.c
${LIBXML2_SOURCE_DIR}/list.c
${LIBXML2_SOURCE_DIR}/xinclude.c
${LIBXML2_SOURCE_DIR}/xmlIO.c
${LIBXML2_SOURCE_DIR}/triostr.c
${LIBXML2_SOURCE_DIR}/hash.c
${LIBXML2_SOURCE_DIR}/xmlsave.c
${LIBXML2_SOURCE_DIR}/HTMLtree.c
${LIBXML2_SOURCE_DIR}/SAX.c
${LIBXML2_SOURCE_DIR}/xmlschemas.c
${LIBXML2_SOURCE_DIR}/SAX2.c
${LIBXML2_SOURCE_DIR}/threads.c
${LIBXML2_SOURCE_DIR}/runsuite.c
${LIBXML2_SOURCE_DIR}/catalog.c
${LIBXML2_SOURCE_DIR}/uri.c
${LIBXML2_SOURCE_DIR}/xmlmodule.c
${LIBXML2_SOURCE_DIR}/xlink.c
${LIBXML2_SOURCE_DIR}/entities.c
${LIBXML2_SOURCE_DIR}/encoding.c
${LIBXML2_SOURCE_DIR}/error.c
${LIBXML2_SOURCE_DIR}/parserInternals.c
${LIBXML2_SOURCE_DIR}/xmlwriter.c
${LIBXML2_SOURCE_DIR}/xmlunicode.c
${LIBXML2_SOURCE_DIR}/runxmlconf.c
${LIBXML2_SOURCE_DIR}/parser.c
${LIBXML2_SOURCE_DIR}/tree.c
${LIBXML2_SOURCE_DIR}/hash.c
${LIBXML2_SOURCE_DIR}/list.c
${LIBXML2_SOURCE_DIR}/xmlIO.c
${LIBXML2_SOURCE_DIR}/xmlmemory.c
${LIBXML2_SOURCE_DIR}/nanoftp.c
${LIBXML2_SOURCE_DIR}/xmlschemastypes.c
${LIBXML2_SOURCE_DIR}/uri.c
${LIBXML2_SOURCE_DIR}/valid.c
${LIBXML2_SOURCE_DIR}/xlink.c
${LIBXML2_SOURCE_DIR}/HTMLparser.c
${LIBXML2_SOURCE_DIR}/HTMLtree.c
${LIBXML2_SOURCE_DIR}/debugXML.c
${LIBXML2_SOURCE_DIR}/xpath.c
${LIBXML2_SOURCE_DIR}/xpointer.c
${LIBXML2_SOURCE_DIR}/xinclude.c
${LIBXML2_SOURCE_DIR}/nanohttp.c
${LIBXML2_SOURCE_DIR}/nanoftp.c
${LIBXML2_SOURCE_DIR}/DOCBparser.c
${LIBXML2_SOURCE_DIR}/catalog.c
${LIBXML2_SOURCE_DIR}/globals.c
${LIBXML2_SOURCE_DIR}/threads.c
${LIBXML2_SOURCE_DIR}/c14n.c
${LIBXML2_SOURCE_DIR}/xmlstring.c
${LIBXML2_SOURCE_DIR}/buf.c
${LIBXML2_SOURCE_DIR}/xmlregexp.c
${LIBXML2_SOURCE_DIR}/xmlschemas.c
${LIBXML2_SOURCE_DIR}/xmlschemastypes.c
${LIBXML2_SOURCE_DIR}/xmlunicode.c
${LIBXML2_SOURCE_DIR}/triostr.c
#${LIBXML2_SOURCE_DIR}/trio.c
${LIBXML2_SOURCE_DIR}/xmlreader.c
${LIBXML2_SOURCE_DIR}/relaxng.c
${LIBXML2_SOURCE_DIR}/dict.c
${LIBXML2_SOURCE_DIR}/SAX2.c
${LIBXML2_SOURCE_DIR}/xmlwriter.c
${LIBXML2_SOURCE_DIR}/legacy.c
${LIBXML2_SOURCE_DIR}/chvalid.c
${LIBXML2_SOURCE_DIR}/pattern.c
${LIBXML2_SOURCE_DIR}/xmlsave.c
${LIBXML2_SOURCE_DIR}/xmlmodule.c
${LIBXML2_SOURCE_DIR}/schematron.c
${LIBXML2_SOURCE_DIR}/xzlib.c
)
add_library(libxml2 STATIC ${SRCS})
add_library(libxml2 ${SRCS})
target_link_libraries(libxml2 ${ZLIB_LIBRARIES})

2
contrib/lz4 vendored

@ -1 +1 @@
Subproject commit 780aac520b69d6369f4e3995624c37e56d75498d
Subproject commit 7a4e3b1fac5cd9d4ec7c8d0091329ba107ec2131

View File

@ -2,7 +2,7 @@ set(MARIADB_CLIENT_SOURCE_DIR ${CMAKE_SOURCE_DIR}/contrib/mariadb-connector-c)
set(MARIADB_CLIENT_BINARY_DIR ${CMAKE_BINARY_DIR}/contrib/mariadb-connector-c)
set(SRCS
${MARIADB_CLIENT_SOURCE_DIR}/libmariadb/bmove_upp.c
#${MARIADB_CLIENT_SOURCE_DIR}/libmariadb/bmove_upp.c
${MARIADB_CLIENT_SOURCE_DIR}/libmariadb/get_password.c
${MARIADB_CLIENT_SOURCE_DIR}/libmariadb/ma_alloc.c
${MARIADB_CLIENT_SOURCE_DIR}/libmariadb/ma_array.c
@ -58,10 +58,10 @@ if(OPENSSL_LIBRARIES)
list(APPEND SRCS ${MARIADB_CLIENT_SOURCE_DIR}/libmariadb/secure/openssl.c)
endif()
add_library(mysqlclient STATIC ${SRCS})
add_library(mysqlclient ${SRCS})
if(OPENSSL_LIBRARIES)
target_link_libraries(mysqlclient ${OPENSSL_LIBRARIES})
target_link_libraries(mysqlclient PRIVATE ${OPENSSL_LIBRARIES})
target_compile_definitions(mysqlclient PRIVATE -D HAVE_OPENSSL -D HAVE_TLS)
endif()

View File

@ -10,7 +10,7 @@ foreach (src ${RE2_SOURCES_})
list(APPEND RE2_ST_SOURCES ${RE2_SOURCE_DIR}/${src})
endforeach ()
add_library (re2_st ${RE2_ST_SOURCES})
add_library(re2_st ${RE2_ST_SOURCES})
target_compile_definitions (re2_st PRIVATE NDEBUG NO_THREADS re2=re2_st)
target_include_directories (re2_st PRIVATE .)
target_include_directories (re2_st SYSTEM PUBLIC ${CMAKE_CURRENT_BINARY_DIR} ${RE2_SOURCE_DIR})

1
contrib/simdjson vendored Submodule

@ -0,0 +1 @@
Subproject commit 681cd3369860f4eada49a387cbff93030f759c95

View File

@ -0,0 +1,18 @@
if (NOT HAVE_AVX2)
message (FATAL_ERROR "No AVX2 support")
endif ()
set(SIMDJSON_INCLUDE_DIR "${ClickHouse_SOURCE_DIR}/contrib/simdjson/include")
set(SIMDJSON_SRC_DIR "${SIMDJSON_INCLUDE_DIR}/../src")
set(SIMDJSON_SRC
${SIMDJSON_SRC_DIR}/jsonioutil.cpp
${SIMDJSON_SRC_DIR}/jsonminifier.cpp
${SIMDJSON_SRC_DIR}/jsonparser.cpp
${SIMDJSON_SRC_DIR}/stage1_find_marks.cpp
${SIMDJSON_SRC_DIR}/stage2_build_tape.cpp
${SIMDJSON_SRC_DIR}/parsedjson.cpp
${SIMDJSON_SRC_DIR}/parsedjsoniterator.cpp
)
add_library(${SIMDJSON_LIBRARY} ${SIMDJSON_SRC})
target_include_directories(${SIMDJSON_LIBRARY} PUBLIC "${SIMDJSON_INCLUDE_DIR}")
target_compile_options(${SIMDJSON_LIBRARY} PRIVATE -mavx2 -mbmi -mbmi2 -mpclmul)

View File

@ -23,7 +23,7 @@ ${ODBC_SOURCE_DIR}/libltdl/loaders/preopen.c
${CMAKE_CURRENT_SOURCE_DIR}/linux_x86_64/libltdl/libltdlcS.c
)
add_library(ltdl ${LINK_MODE} ${SRCS})
add_library(ltdl ${SRCS})
target_include_directories(ltdl PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/linux_x86_64/libltdl)
target_include_directories(ltdl PUBLIC ${ODBC_SOURCE_DIR}/libltdl)
@ -273,15 +273,15 @@ ${ODBC_SOURCE_DIR}/lst/lstSetFreeFunc.c
${ODBC_SOURCE_DIR}/lst/_lstVisible.c
)
add_library(unixodbc ${LINK_MODE} ${SRCS})
add_library(unixodbc ${SRCS})
target_link_libraries(unixodbc ltdl)
target_link_libraries(unixodbc PRIVATE ltdl)
# SYSTEM_FILE_PATH was changed to /etc
target_include_directories(unixodbc PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/linux_x86_64/private)
target_include_directories(unixodbc PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/linux_x86_64)
target_include_directories(unixodbc PUBLIC ${ODBC_SOURCE_DIR}/include)
target_include_directories(unixodbc SYSTEM PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/linux_x86_64/private)
target_include_directories(unixodbc SYSTEM PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/linux_x86_64)
target_include_directories(unixodbc SYSTEM PUBLIC ${ODBC_SOURCE_DIR}/include)
target_compile_definitions(unixodbc PRIVATE -DHAVE_CONFIG_H)

View File

@ -125,6 +125,6 @@ IF (ZSTD_LEGACY_SUPPORT)
${LIBRARY_LEGACY_DIR}/zstd_v07.h)
ENDIF (ZSTD_LEGACY_SUPPORT)
ADD_LIBRARY(zstd ${LINK_MODE} ${Sources} ${Headers})
ADD_LIBRARY(zstd ${Sources} ${Headers})
target_include_directories (zstd PUBLIC ${LIBRARY_DIR})

View File

@ -134,7 +134,7 @@ list (APPEND dbms_headers src/TableFunctions/ITableFunction.h src/TableFunctio
list (APPEND dbms_sources src/Dictionaries/DictionaryFactory.cpp src/Dictionaries/DictionarySourceFactory.cpp src/Dictionaries/DictionaryStructure.cpp)
list (APPEND dbms_headers src/Dictionaries/DictionaryFactory.h src/Dictionaries/DictionarySourceFactory.h src/Dictionaries/DictionaryStructure.h)
add_library(clickhouse_common_io ${LINK_MODE} ${clickhouse_common_io_headers} ${clickhouse_common_io_sources})
add_library(clickhouse_common_io ${clickhouse_common_io_headers} ${clickhouse_common_io_sources})
if (OS_FREEBSD)
target_compile_definitions (clickhouse_common_io PUBLIC CLOCK_MONOTONIC_COARSE=CLOCK_MONOTONIC_FAST)
@ -189,8 +189,17 @@ target_link_libraries (clickhouse_common_io
${Poco_Net_LIBRARY}
${Poco_Util_LIBRARY}
${Poco_Foundation_LIBRARY}
${RE2_LIBRARY}
${RE2_ST_LIBRARY}
)
if(RE2_LIBRARY)
target_link_libraries(clickhouse_common_io PUBLIC ${RE2_LIBRARY})
endif()
if(RE2_ST_LIBRARY)
target_link_libraries(clickhouse_common_io PUBLIC ${RE2_ST_LIBRARY})
endif()
target_link_libraries(clickhouse_common_io
PUBLIC
${CITYHASH_LIBRARIES}
PRIVATE
${ZLIB_LIBRARIES}
@ -208,7 +217,9 @@ target_link_libraries (clickhouse_common_io
)
target_include_directories(clickhouse_common_io SYSTEM BEFORE PUBLIC ${RE2_INCLUDE_DIR})
if(RE2_INCLUDE_DIR)
target_include_directories(clickhouse_common_io SYSTEM BEFORE PUBLIC ${RE2_INCLUDE_DIR})
endif()
if (USE_LFALLOC)
target_include_directories (clickhouse_common_io SYSTEM BEFORE PUBLIC ${LFALLOC_INCLUDE_DIR})
@ -257,8 +268,8 @@ if (USE_POCO_SQLODBC)
target_link_libraries (clickhouse_common_io PRIVATE ${Poco_SQL_LIBRARY})
target_link_libraries (dbms PRIVATE ${Poco_SQLODBC_LIBRARY} ${Poco_SQL_LIBRARY})
if (NOT USE_INTERNAL_POCO_LIBRARY)
target_include_directories (clickhouse_common_io SYSTEM PRIVATE ${ODBC_INCLUDE_DIRECTORIES} ${Poco_SQL_INCLUDE_DIR})
target_include_directories (dbms SYSTEM PRIVATE ${ODBC_INCLUDE_DIRECTORIES} ${Poco_SQLODBC_INCLUDE_DIR} PUBLIC ${Poco_SQL_INCLUDE_DIR})
target_include_directories (clickhouse_common_io SYSTEM PRIVATE ${ODBC_INCLUDE_DIRS} ${Poco_SQL_INCLUDE_DIR})
target_include_directories (dbms SYSTEM PRIVATE ${ODBC_INCLUDE_DIRS} ${Poco_SQLODBC_INCLUDE_DIR} SYSTEM PUBLIC ${Poco_SQL_INCLUDE_DIR})
endif()
endif()
@ -271,7 +282,7 @@ if (USE_POCO_DATAODBC)
target_link_libraries (clickhouse_common_io PRIVATE ${Poco_Data_LIBRARY})
target_link_libraries (dbms PRIVATE ${Poco_DataODBC_LIBRARY})
if (NOT USE_INTERNAL_POCO_LIBRARY)
target_include_directories (dbms SYSTEM PRIVATE ${ODBC_INCLUDE_DIRECTORIES} ${Poco_DataODBC_INCLUDE_DIR})
target_include_directories (dbms SYSTEM PRIVATE ${ODBC_INCLUDE_DIRS} ${Poco_DataODBC_INCLUDE_DIR})
endif()
endif()

View File

@ -1,11 +1,11 @@
# These strings are autochanged from release_lib.sh:
set(VERSION_REVISION 54419)
set(VERSION_REVISION 54420)
set(VERSION_MAJOR 19)
set(VERSION_MINOR 7)
set(VERSION_MINOR 8)
set(VERSION_PATCH 1)
set(VERSION_GITHASH b0b369b30f04a5026d1da5c7d3fd5998d6de1fe4)
set(VERSION_DESCRIBE v19.7.1.1-testing)
set(VERSION_STRING 19.7.1.1)
set(VERSION_GITHASH a76e504f45ff4a74e8c492bd269f022352d5f6d9)
set(VERSION_DESCRIBE v19.8.1.1-testing)
set(VERSION_STRING 19.8.1.1)
# end of autochange
set(VERSION_EXTRA "" CACHE STRING "")

View File

@ -44,7 +44,7 @@ macro(clickhouse_program_add_library name)
set(CLICKHOUSE_${name_uc}_INCLUDE ${CLICKHOUSE_${name_uc}_INCLUDE} PARENT_SCOPE)
if(NOT CLICKHOUSE_ONE_SHARED)
add_library(clickhouse-${name}-lib ${LINK_MODE} ${CLICKHOUSE_${name_uc}_SOURCES})
add_library(clickhouse-${name}-lib ${CLICKHOUSE_${name_uc}_SOURCES})
set(_link ${CLICKHOUSE_${name_uc}_LINK}) # can't use ${} in if()
if(_link)
@ -209,11 +209,9 @@ else ()
install (FILES ${CMAKE_CURRENT_BINARY_DIR}/clickhouse-obfuscator DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse)
list(APPEND CLICKHOUSE_BUNDLE clickhouse-obfuscator)
endif ()
if (ENABLE_CLICKHOUSE_ODBC_BRIDGE)
# just to be able to run integration tests
add_custom_target (clickhouse-odbc-bridge-copy ALL COMMAND ${CMAKE_COMMAND} -E create_symlink ${CMAKE_CURRENT_BINARY_DIR}/odbc-bridge/clickhouse-odbc-bridge clickhouse-odbc-bridge DEPENDS clickhouse-odbc-bridge)
endif ()
if(ENABLE_CLICKHOUSE_ODBC_BRIDGE)
list(APPEND CLICKHOUSE_BUNDLE clickhouse-odbc-bridge)
endif()
# Install always because the debian package wants these files:
add_custom_target (clickhouse-clang ALL COMMAND ${CMAKE_COMMAND} -E create_symlink clickhouse clickhouse-clang DEPENDS clickhouse)

View File

@ -451,14 +451,14 @@ int mainEntryClickHouseBenchmark(int argc, char ** argv)
("password", value<std::string>()->default_value(""), "")
("database", value<std::string>()->default_value("default"), "")
("stacktrace", "print stack traces of exceptions")
#define DECLARE_SETTING(TYPE, NAME, DEFAULT, DESCRIPTION) (#NAME, boost::program_options::value<std::string> (), DESCRIPTION)
APPLY_FOR_SETTINGS(DECLARE_SETTING)
#undef DECLARE_SETTING
;
Settings settings;
settings.addProgramOptions(desc);
boost::program_options::variables_map options;
boost::program_options::store(boost::program_options::parse_command_line(argc, argv, desc), options);
boost::program_options::notify(options);
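/// Note on the change above: the DECLARE_SETTING macro block is replaced by
/// Settings::addProgramOptions(desc), which registers one boost::program_options
/// option per known setting; the newly added notify(options) call is what fires
/// the per-option notifiers that (by assumption, matching the removed
/// EXTRACT_SETTING block below) copy parsed values into `settings`. Sketch:
///
///     Settings settings;
///     settings.addProgramOptions(desc);          // after desc.add_options()(...)
///     boost::program_options::variables_map options;
///     boost::program_options::store(
///         boost::program_options::parse_command_line(argc, argv, desc), options);
///     boost::program_options::notify(options);   // `settings` is filled here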
if (options.count("help"))
{
@ -469,15 +469,6 @@ int mainEntryClickHouseBenchmark(int argc, char ** argv)
print_stacktrace = options.count("stacktrace");
/// Extract `settings` and `limits` from received `options`
Settings settings;
#define EXTRACT_SETTING(TYPE, NAME, DEFAULT, DESCRIPTION) \
if (options.count(#NAME)) \
settings.set(#NAME, options[#NAME].as<std::string>());
APPLY_FOR_SETTINGS(EXTRACT_SETTING)
#undef EXTRACT_SETTING
UseSSL use_ssl;
Benchmark benchmark(

View File

@ -2,7 +2,7 @@ add_definitions(-Wno-error -Wno-unused-parameter -Wno-non-virtual-dtor -U_LIBCPP
link_directories(${LLVM_LIBRARY_DIRS})
add_library(clickhouse-compiler-lib ${LINK_MODE}
add_library(clickhouse-compiler-lib
driver.cpp
cc1_main.cpp
cc1as_main.cpp

View File

@ -1,8 +1,9 @@
add_definitions(-Wno-error -Wno-unused-parameter -Wno-non-virtual-dtor -U_LIBCPP_DEBUG)
link_directories(${LLVM_LIBRARY_DIRS})
add_library(clickhouse-compiler-lib ${LINK_MODE}
add_library(clickhouse-compiler-lib
driver.cpp
cc1_main.cpp
cc1as_main.cpp

View File

@ -2,7 +2,7 @@ add_definitions(-Wno-error -Wno-unused-parameter -Wno-non-virtual-dtor -U_LIBCPP
link_directories(${LLVM_LIBRARY_DIRS})
add_library(clickhouse-compiler-lib ${LINK_MODE}
add_library(clickhouse-compiler-lib
driver.cpp
cc1_main.cpp
cc1gen_reproducer_main.cpp

View File

@ -2,7 +2,7 @@ add_definitions(-Wno-error -Wno-unused-parameter -Wno-non-virtual-dtor -U_LIBCPP
link_directories(${LLVM_LIBRARY_DIRS})
add_library(clickhouse-compiler-lib ${LINK_MODE}
add_library(clickhouse-compiler-lib
driver.cpp
cc1_main.cpp
cc1as_main.cpp

View File

@ -217,11 +217,12 @@ private:
context.setApplicationType(Context::ApplicationType::CLIENT);
/// Settings and limits can be specified in the config file, but settings passed on the command line have higher priority.
#define EXTRACT_SETTING(TYPE, NAME, DEFAULT, DESCRIPTION) \
if (config().has(#NAME) && !context.getSettingsRef().NAME.changed) \
context.setSetting(#NAME, config().getString(#NAME));
APPLY_FOR_SETTINGS(EXTRACT_SETTING)
#undef EXTRACT_SETTING
for (auto && setting : context.getSettingsRef())
{
const String & name = setting.getName().toString();
if (config().has(name) && !setting.isChanged())
setting.setValue(config().getString(name));
}
/// Set path for format schema files
if (config().has("format_schema_path"))
@ -479,6 +480,17 @@ private:
}
else
{
/// This is intended for testing purposes.
if (config().getBool("always_load_suggestion_data", false))
{
#if USE_READLINE
SCOPE_EXIT({ Suggest::instance().finalize(); });
Suggest::instance().load(connection_parameters, config().getInt("suggestion_limit"));
#else
throw Exception("Command line suggestions cannot work without readline", ErrorCodes::BAD_ARGUMENTS);
#endif
}
query_id = config().getString("query_id", "");
nonInteractive();
@ -805,8 +817,7 @@ private:
{
if (!old_settings)
old_settings.emplace(context.getSettingsRef());
for (const auto & change : settings_ast.as<ASTSetQuery>()->changes)
context.setSetting(change.name, change.value);
context.applySettingsChanges(settings_ast.as<ASTSetQuery>()->changes);
};
const auto * insert = parsed_query->as<ASTInsertQuery>();
if (insert && insert->settings_ast)
@ -836,7 +847,7 @@ private:
if (change.name == "profile")
current_profile = change.value.safeGet<String>();
else
context.setSetting(change.name, change.value);
context.applySettingChange(change);
}
}
@ -1604,8 +1615,6 @@ public:
min_description_length = std::min(min_description_length, line_length - 2);
}
#define DECLARE_SETTING(TYPE, NAME, DEFAULT, DESCRIPTION) (#NAME, po::value<std::string>(), DESCRIPTION)
/// Main commandline options related to client functionality and all parameters from Settings.
po::options_description main_description("Main options", line_length, min_description_length);
main_description.add_options()
@ -1629,6 +1638,7 @@ public:
("database,d", po::value<std::string>(), "database")
("pager", po::value<std::string>(), "pager")
("disable_suggestion,A", "Disable loading suggestion data. Note that suggestion data is loaded asynchronously through a second connection to ClickHouse server. Also it is reasonable to disable suggestion if you want to paste a query with TAB characters. Shorthand option -A is for those who get used to mysql client.")
("always_load_suggestion_data", "Load suggestion data even if clickhouse-client is run in non-interactive mode. Used for testing.")
("suggestion_limit", po::value<int>()->default_value(10000),
"Suggestion limit for how many databases, tables and columns to fetch.")
("multiline,m", "multiline")
@ -1647,9 +1657,9 @@ public:
("compression", po::value<bool>(), "enable or disable compression")
("log-level", po::value<std::string>(), "client log level")
("server_logs_file", po::value<std::string>(), "put server logs into specified file")
APPLY_FOR_SETTINGS(DECLARE_SETTING)
;
#undef DECLARE_SETTING
context.getSettingsRef().addProgramOptions(main_description);
/// Commandline options related to external tables.
po::options_description external_description("External tables options");
@ -1665,6 +1675,8 @@ public:
common_arguments.size(), common_arguments.data()).options(main_description).run();
po::variables_map options;
po::store(parsed, options);
po::notify(options);
if (options.count("version") || options.count("V"))
{
showClientVersion();
@ -1715,15 +1727,14 @@ public:
}
}
/// Extract settings from the options.
#define EXTRACT_SETTING(TYPE, NAME, DEFAULT, DESCRIPTION) \
if (options.count(#NAME)) \
{ \
context.setSetting(#NAME, options[#NAME].as<std::string>()); \
config().setString(#NAME, options[#NAME].as<std::string>()); \
/// Copy settings-related program options to config.
/// TODO: Is this code necessary?
for (const auto & setting : context.getSettingsRef())
{
const String name = setting.getName().toString();
if (options.count(name))
config().setString(name, options[name].as<std::string>());
}
APPLY_FOR_SETTINGS(EXTRACT_SETTING)
#undef EXTRACT_SETTING
if (options.count("config-file") && options.count("config"))
throw Exception("Two or more configuration files referenced in arguments", ErrorCodes::BAD_ARGUMENTS);
@ -1782,6 +1793,13 @@ public:
server_logs_file = options["server_logs_file"].as<std::string>();
if (options.count("disable_suggestion"))
config().setBool("disable_suggestion", true);
if (options.count("always_load_suggestion_data"))
{
if (options.count("disable_suggestion"))
throw Exception("Command line parameters disable_suggestion (-A) and always_load_suggestion_data cannot be specified simultaneously",
ErrorCodes::BAD_ARGUMENTS);
config().setBool("always_load_suggestion_data", true);
}
if (options.count("suggestion_limit"))
config().setInt("suggestion_limit", options["suggestion_limit"].as<int>());
}

View File

@ -1,5 +1,5 @@
set(CLICKHOUSE_COPIER_SOURCES ${CMAKE_CURRENT_SOURCE_DIR}/ClusterCopier.cpp)
set(CLICKHOUSE_COPIER_LINK PRIVATE clickhouse_functions clickhouse_table_functions clickhouse_aggregate_functions PUBLIC daemon)
set(CLICKHOUSE_COPIER_LINK PRIVATE clickhouse_functions clickhouse_table_functions clickhouse_aggregate_functions clickhouse_dictionaries PUBLIC daemon)
set(CLICKHOUSE_COPIER_INCLUDE SYSTEM PRIVATE ${PCG_RANDOM_INCLUDE_DIR})
clickhouse_program_add(copier)

View File

@ -63,6 +63,7 @@
#include <AggregateFunctions/registerAggregateFunctions.h>
#include <Storages/registerStorages.h>
#include <Storages/StorageDistributed.h>
#include <Dictionaries/registerDictionaries.h>
#include <Databases/DatabaseMemory.h>
#include <Common/StatusFile.h>
@ -2169,6 +2170,7 @@ void ClusterCopierApp::mainImpl()
registerAggregateFunctions();
registerTableFunctions();
registerStorages();
registerDictionaries();
static const std::string default_database = "_local";
context->addDatabase(default_database, std::make_shared<DatabaseMemory>(default_database));

View File

@ -69,11 +69,7 @@ void LocalServer::initialize(Poco::Util::Application & self)
void LocalServer::applyCmdSettings()
{
#define EXTRACT_SETTING(TYPE, NAME, DEFAULT, DESCRIPTION) \
if (cmd_settings.NAME.changed) \
context->getSettingsRef().NAME = cmd_settings.NAME;
APPLY_FOR_SETTINGS(EXTRACT_SETTING)
#undef EXTRACT_SETTING
context->getSettingsRef().copyChangesFrom(cmd_settings);
}
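/// Judging by the macro loop it replaces, copyChangesFrom is assumed to copy
/// exactly those settings whose `changed` flag is set. Sketch (the setting name
/// is illustrative):
///
///     Settings cmd_settings;
///     cmd_settings.set("max_threads", "8");      // marks max_threads as changed
///     context->getSettingsRef().copyChangesFrom(cmd_settings);
///     /// Only max_threads is copied; all untouched settings keep their values.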
/// If path is specified and not empty, will try to setup server environment and load existing metadata
@ -414,7 +410,6 @@ void LocalServer::init(int argc, char ** argv)
min_description_length = std::min(min_description_length, line_length - 2);
}
#define DECLARE_SETTING(TYPE, NAME, DEFAULT, DESCRIPTION) (#NAME, po::value<std::string> (), DESCRIPTION)
po::options_description description("Main options", line_length, min_description_length);
description.add_options()
("help", "produce help message")
@ -435,13 +430,15 @@ void LocalServer::init(int argc, char ** argv)
("verbose", "print query and other debugging info")
("ignore-error", "do not stop processing if a query failed")
("version,V", "print version information and exit")
APPLY_FOR_SETTINGS(DECLARE_SETTING);
#undef DECLARE_SETTING
;
cmd_settings.addProgramOptions(description);
/// Parse main commandline options.
po::parsed_options parsed = po::command_line_parser(argc, argv).options(description).run();
po::variables_map options;
po::store(parsed, options);
po::notify(options);
if (options.count("version") || options.count("V"))
{
@ -457,13 +454,6 @@ void LocalServer::init(int argc, char ** argv)
exit(0);
}
/// Extract settings and limits from the options.
#define EXTRACT_SETTING(TYPE, NAME, DEFAULT, DESCRIPTION) \
if (options.count(#NAME)) \
cmd_settings.set(#NAME, options[#NAME].as<std::string>());
APPLY_FOR_SETTINGS(EXTRACT_SETTING)
#undef EXTRACT_SETTING
/// Save received data into the internal config.
if (options.count("config-file"))
config().setString("config-file", options["config-file"].as<std::string>());

View File

@ -912,8 +912,8 @@ public:
size_t columns = header.columns();
models.reserve(columns);
for (size_t i = 0; i < columns; ++i)
models.emplace_back(factory.get(*header.getByPosition(i).type, hash(seed, i), markov_model_params));
for (const auto & elem : header)
models.emplace_back(factory.get(*elem.type, hash(seed, elem.name), markov_model_params));
}
void train(const Columns & columns)
@ -954,7 +954,7 @@ try
("structure,S", po::value<std::string>(), "structure of the initial table (list of column and type names)")
("input-format", po::value<std::string>(), "input format of the initial table data")
("output-format", po::value<std::string>(), "default output format")
("seed", po::value<std::string>(), "seed (arbitrary string), must be random string with at least 10 bytes length")
("seed", po::value<std::string>(), "seed (arbitrary string), must be random string with at least 10 bytes length; note that a seed for each column is derived from this seed and a column name: you can obfuscate data for different tables and as long as you use identical seed and identical column names, the data for corresponding non-text columns for different tables will be transformed in the same way, so the data for different tables can be JOINed after obfuscation")
("limit", po::value<UInt64>(), "if specified - stop after generating that number of rows")
("silent", po::value<bool>()->default_value(false), "don't print information messages to stderr")
("order", po::value<UInt64>()->default_value(5), "order of markov model to generate strings")

View File

@ -16,18 +16,20 @@ set(CLICKHOUSE_ODBC_BRIDGE_INCLUDE PUBLIC ${ClickHouse_SOURCE_DIR}/libs/libdaemo
if (USE_POCO_SQLODBC)
set(CLICKHOUSE_ODBC_BRIDGE_LINK ${CLICKHOUSE_ODBC_BRIDGE_LINK} PRIVATE ${Poco_SQLODBC_LIBRARY})
set(CLICKHOUSE_ODBC_BRIDGE_INCLUDE ${CLICKHOUSE_ODBC_BRIDGE_INCLUDE} SYSTEM PRIVATE ${ODBC_INCLUDE_DIRECTORIES} ${Poco_SQLODBC_INCLUDE_DIR})
set(CLICKHOUSE_ODBC_BRIDGE_INCLUDE ${CLICKHOUSE_ODBC_BRIDGE_INCLUDE} SYSTEM PRIVATE ${ODBC_INCLUDE_DIRS} ${Poco_SQLODBC_INCLUDE_DIR})
endif ()
if (Poco_SQL_FOUND)
set(CLICKHOUSE_ODBC_BRIDGE_LINK ${CLICKHOUSE_ODBC_BRIDGE_LINK} PRIVATE ${Poco_SQL_LIBRARY})
set(CLICKHOUSE_ODBC_BRIDGE_INCLUDE ${CLICKHOUSE_ODBC_BRIDGE_INCLUDE} SYSTEM PRIVATE ${Poco_SQL_INCLUDE_DIR})
endif ()
if (USE_POCO_DATAODBC)
set(CLICKHOUSE_ODBC_BRIDGE_LINK ${CLICKHOUSE_ODBC_BRIDGE_LINK} PRIVATE ${Poco_DataODBC_LIBRARY})
set(CLICKHOUSE_ODBC_BRIDGE_INCLUDE ${CLICKHOUSE_ODBC_BRIDGE_INCLUDE} SYSTEM PRIVATE ${ODBC_INCLUDE_DIRECTORIES} ${Poco_DataODBC_INCLUDE_DIR})
set(CLICKHOUSE_ODBC_BRIDGE_INCLUDE ${CLICKHOUSE_ODBC_BRIDGE_INCLUDE} SYSTEM PRIVATE ${ODBC_INCLUDE_DIRS} ${Poco_DataODBC_INCLUDE_DIR})
endif()
if (Poco_Data_FOUND)
set(CLICKHOUSE_ODBC_BRIDGE_LINK ${CLICKHOUSE_ODBC_BRIDGE_LINK} PRIVATE ${Poco_Data_LIBRARY})
set(CLICKHOUSE_ODBC_BRIDGE_INCLUDE ${CLICKHOUSE_ODBC_BRIDGE_INCLUDE} SYSTEM PRIVATE ${Poco_Data_INCLUDE_DIR})
endif ()
clickhouse_program_add_library(odbc-bridge)
@ -37,12 +39,13 @@ clickhouse_program_add_library(odbc-bridge)
# For this reason, we are disabling the -rdynamic linker flag. But we do it in a strange way:
SET(CMAKE_SHARED_LIBRARY_LINK_CXX_FLAGS "")
add_executable (clickhouse-odbc-bridge odbc-bridge.cpp)
add_executable(clickhouse-odbc-bridge odbc-bridge.cpp)
set_target_properties(clickhouse-odbc-bridge PROPERTIES RUNTIME_OUTPUT_DIRECTORY ..)
clickhouse_program_link_split_binary(odbc-bridge)
install (TARGETS clickhouse-odbc-bridge RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse)
install(TARGETS clickhouse-odbc-bridge RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse)
if (ENABLE_TESTS)
add_subdirectory (tests)
endif ()
if(ENABLE_TESTS)
add_subdirectory(tests)
endif()

View File

@ -21,7 +21,7 @@ void extractSettings(
const XMLConfigurationPtr & config,
const std::string & key,
const Strings & settings_list,
std::map<std::string, std::string> & settings_to_apply)
SettingsChanges & settings_to_apply)
{
for (const std::string & setup : settings_list)
{
@ -32,7 +32,7 @@ void extractSettings(
if (value.empty())
value = "true";
settings_to_apply[setup] = value;
settings_to_apply.emplace_back(SettingChange{setup, value});
}
}
@ -70,7 +70,7 @@ void PerformanceTestInfo::applySettings(XMLConfigurationPtr config)
{
if (config->has("settings"))
{
std::map<std::string, std::string> settings_to_apply;
SettingsChanges settings_to_apply;
Strings config_settings;
config->keys("settings", config_settings);
@ -96,19 +96,7 @@ void PerformanceTestInfo::applySettings(XMLConfigurationPtr config)
}
extractSettings(config, "settings", config_settings, settings_to_apply);
/// This macro goes through all settings in the Settings.h
/// and, if found any settings in test's xml configuration
/// with the same name, sets its value to settings
std::map<std::string, std::string>::iterator it;
#define EXTRACT_SETTING(TYPE, NAME, DEFAULT, DESCRIPTION) \
it = settings_to_apply.find(#NAME); \
if (it != settings_to_apply.end()) \
settings.set(#NAME, settings_to_apply[#NAME]);
APPLY_FOR_SETTINGS(EXTRACT_SETTING)
#undef EXTRACT_SETTING
settings.applyChanges(settings_to_apply);
if (settings_contain("average_rows_speed_precision"))
TestStats::avg_rows_speed_precision =

View File

@ -61,6 +61,7 @@ public:
const std::string & default_database_,
const std::string & user_,
const std::string & password_,
const Settings & cmd_settings,
const bool lite_output_,
const std::string & profiles_file_,
Strings && input_files_,
@ -87,6 +88,7 @@ public:
, input_files(input_files_)
, log(&Poco::Logger::get("PerformanceTestSuite"))
{
global_context.getSettingsRef().copyChangesFrom(cmd_settings);
if (input_files.size() < 1)
throw Exception("No tests were specified", ErrorCodes::BAD_ARGUMENTS);
}
@ -110,10 +112,6 @@ public:
return 0;
}
void setContextSetting(const String & name, const std::string & value)
{
global_context.setSetting(name, value);
}
private:
Connection connection;
@ -326,7 +324,6 @@ try
using Strings = DB::Strings;
#define DECLARE_SETTING(TYPE, NAME, DEFAULT, DESCRIPTION) (#NAME, po::value<std::string>(), DESCRIPTION)
po::options_description desc("Allowed options");
desc.add_options()
("help", "produce help message")
@ -348,9 +345,10 @@ try
("input-files", value<Strings>()->multitoken(), "Input .xml files")
("query-indexes", value<std::vector<size_t>>()->multitoken(), "Input query indexes")
("recursive,r", "Recurse in directories to find all xml's")
APPLY_FOR_SETTINGS(DECLARE_SETTING);
#undef DECLARE_SETTING
;
DB::Settings cmd_settings;
cmd_settings.addProgramOptions(desc);
po::options_description cmdline_options;
cmdline_options.add(desc);
@ -397,6 +395,7 @@ try
options["database"].as<std::string>(),
options["user"].as<std::string>(),
options["password"].as<std::string>(),
cmd_settings,
options.count("lite") > 0,
options["profiles-file"].as<std::string>(),
std::move(input_files),
@ -408,15 +407,6 @@ try
std::move(skip_names_regexp),
queries_with_indexes,
timeouts);
/// Extract settings from the options.
#define EXTRACT_SETTING(TYPE, NAME, DEFAULT, DESCRIPTION) \
if (options.count(#NAME)) \
{ \
performance_test_suite.setContextSetting(#NAME, options[#NAME].as<std::string>()); \
}
APPLY_FOR_SETTINGS(EXTRACT_SETTING)
#undef EXTRACT_SETTING
return performance_test_suite.run();
}
catch (...)

View File

@ -1,4 +1,5 @@
#include "TestStats.h"
#include <algorithm>
namespace DB
{
@ -92,11 +93,10 @@ void TestStats::update_average_speed(
avg_speed_value /= number_of_info_batches;
if (avg_speed_first == 0)
{
avg_speed_first = avg_speed_value;
}
if (std::abs(avg_speed_value - avg_speed_first) >= precision)
auto [min, max] = std::minmax(avg_speed_value, avg_speed_first);
if (1 - min / max >= precision)
{
avg_speed_first = avg_speed_value;
avg_speed_watch.restart();
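The convergence check above is now relative rather than absolute: 1 - min/max is the fractional gap between the current and first average speeds, and with the precision raised to 0.005 in the next hunk the stopwatch restarts only when the averages drift apart by 0.5% or more. Checking the arithmetic on made-up numbers:

// Made-up values: first average 1000 rows/s, current average 996 rows/s.
auto [min, max] = std::minmax(996.0, 1000.0);
double relative_gap = 1 - min / max;    // 1 - 996/1000 = 0.004
bool restart = relative_gap >= 0.005;   // false: within 0.5%, keep timing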

View File

@ -40,11 +40,11 @@ struct TestStats
double avg_rows_speed_value = 0;
double avg_rows_speed_first = 0;
static inline double avg_rows_speed_precision = 0.001;
static inline double avg_rows_speed_precision = 0.005;
double avg_bytes_speed_value = 0;
double avg_bytes_speed_first = 0;
static inline double avg_bytes_speed_precision = 0.001;
static inline double avg_bytes_speed_precision = 0.005;
size_t number_of_rows_speed_info_batches = 0;
size_t number_of_bytes_speed_info_batches = 0;

View File

@ -497,8 +497,7 @@ void HTTPHandler::processQuery(
settings.readonly = 2;
}
auto readonly_before_query = settings.readonly;
SettingsChanges settings_changes;
for (auto it = params.begin(); it != params.end(); ++it)
{
if (it->first == "database")
@ -515,21 +514,13 @@ void HTTPHandler::processQuery(
else
{
/// All other query parameters are treated as settings.
String value;
/// Setting is skipped if value wasn't changed.
if (!settings.tryGet(it->first, value) || it->second != value)
{
if (readonly_before_query == 1)
throw Exception("Cannot override setting (" + it->first + ") in readonly mode", ErrorCodes::READONLY);
if (readonly_before_query && it->first == "readonly")
throw Exception("Setting 'readonly' cannot be overrided in readonly mode", ErrorCodes::READONLY);
context.setSetting(it->first, it->second);
}
settings_changes.push_back({it->first, it->second});
}
}
context.checkSettingsConstraints(settings_changes);
context.applySettingsChanges(settings_changes);
/// HTTP response compression is turned on only if the client signalled that they support it
/// (using Accept-Encoding header) and 'enable_http_compression' setting is turned on.
used_output.out->setCompression(client_supports_http_compression && settings.enable_http_compression);

View File

@ -85,7 +85,7 @@ void ReplicasStatusHandler::handleRequest(Poco::Net::HTTPServerRequest & request
if (!response.sent())
{
/// We have not sent anything yet and we don't even know if we need to compress response.
response.send() << getCurrentExceptionMessage(false) << std::endl;
}
}

View File

@ -80,6 +80,7 @@ namespace ErrorCodes
extern const int SYSTEM_ERROR;
extern const int FAILED_TO_GETPWUID;
extern const int MISMATCHING_USERS_FOR_PROCESS_AND_DATA;
extern const int NETWORK_ERROR;
}
@ -588,12 +589,12 @@ int Server::main(const std::vector<std::string> & /*args*/)
return socket_address;
};
auto socket_bind_listen = [&](auto & socket, const std::string & host, UInt16 port, bool secure = 0)
auto socket_bind_listen = [&](auto & socket, const std::string & host, UInt16 port, [[maybe_unused]] bool secure = 0)
{
auto address = make_socket_address(host, port);
#if !defined(POCO_CLICKHOUSE_PATCH) || POCO_VERSION <= 0x02000000 // TODO: fill correct version
#if !defined(POCO_CLICKHOUSE_PATCH) || POCO_VERSION < 0x01090100
if (secure)
/// Bug in old poco, listen() after bind() with reusePort param will fail because have no implementation in SecureServerSocketImpl
/// Bug in old (<1.9.1) poco, listen() after bind() with reusePort param will fail because have no implementation in SecureServerSocketImpl
/// https://github.com/pocoproject/poco/pull/2257
socket.bind(address, /* reuseAddress = */ true);
else
@ -612,13 +613,15 @@ int Server::main(const std::vector<std::string> & /*args*/)
for (const auto & listen_host : listen_hosts)
{
/// For testing purposes, the user may omit tcp_port, http_port, or https_port in the configuration file.
uint16_t listen_port = 0;
try
{
/// HTTP
if (config().has("http_port"))
{
Poco::Net::ServerSocket socket;
auto address = socket_bind_listen(socket, listen_host, config().getInt("http_port"));
listen_port = config().getInt("http_port");
auto address = socket_bind_listen(socket, listen_host, listen_port);
socket.setReceiveTimeout(settings.http_receive_timeout);
socket.setSendTimeout(settings.http_send_timeout);
servers.emplace_back(std::make_unique<Poco::Net::HTTPServer>(
@ -635,7 +638,8 @@ int Server::main(const std::vector<std::string> & /*args*/)
{
#if USE_POCO_NETSSL
Poco::Net::SecureServerSocket socket;
auto address = socket_bind_listen(socket, listen_host, config().getInt("https_port"), /* secure = */ true);
listen_port = config().getInt("https_port");
auto address = socket_bind_listen(socket, listen_host, listen_port, /* secure = */ true);
socket.setReceiveTimeout(settings.http_receive_timeout);
socket.setSendTimeout(settings.http_send_timeout);
servers.emplace_back(std::make_unique<Poco::Net::HTTPServer>(
@ -655,7 +659,8 @@ int Server::main(const std::vector<std::string> & /*args*/)
if (config().has("tcp_port"))
{
Poco::Net::ServerSocket socket;
auto address = socket_bind_listen(socket, listen_host, config().getInt("tcp_port"));
listen_port = config().getInt("tcp_port");
auto address = socket_bind_listen(socket, listen_host, listen_port);
socket.setReceiveTimeout(settings.receive_timeout);
socket.setSendTimeout(settings.send_timeout);
servers.emplace_back(std::make_unique<Poco::Net::TCPServer>(
@ -672,7 +677,8 @@ int Server::main(const std::vector<std::string> & /*args*/)
{
#if USE_POCO_NETSSL
Poco::Net::SecureServerSocket socket;
auto address = socket_bind_listen(socket, listen_host, config().getInt("tcp_port_secure"), /* secure = */ true);
listen_port = config().getInt("tcp_port_secure");
auto address = socket_bind_listen(socket, listen_host, listen_port, /* secure = */ true);
socket.setReceiveTimeout(settings.receive_timeout);
socket.setSendTimeout(settings.send_timeout);
servers.emplace_back(std::make_unique<Poco::Net::TCPServer>(
@ -695,7 +701,8 @@ int Server::main(const std::vector<std::string> & /*args*/)
if (config().has("interserver_http_port"))
{
Poco::Net::ServerSocket socket;
auto address = socket_bind_listen(socket, listen_host, config().getInt("interserver_http_port"));
listen_port = config().getInt("interserver_http_port");
auto address = socket_bind_listen(socket, listen_host, listen_port);
socket.setReceiveTimeout(settings.http_receive_timeout);
socket.setSendTimeout(settings.http_send_timeout);
servers.emplace_back(std::make_unique<Poco::Net::HTTPServer>(
@ -711,7 +718,8 @@ int Server::main(const std::vector<std::string> & /*args*/)
{
#if USE_POCO_NETSSL
Poco::Net::SecureServerSocket socket;
auto address = socket_bind_listen(socket, listen_host, config().getInt("interserver_https_port"), /* secure = */ true);
listen_port = config().getInt("interserver_https_port");
auto address = socket_bind_listen(socket, listen_host, listen_port, /* secure = */ true);
socket.setReceiveTimeout(settings.http_receive_timeout);
socket.setSendTimeout(settings.http_send_timeout);
servers.emplace_back(std::make_unique<Poco::Net::HTTPServer>(
@ -742,16 +750,17 @@ int Server::main(const std::vector<std::string> & /*args*/)
LOG_INFO(log, "Listening mysql: " + address.toString());
}
}
catch (const Poco::Net::NetException & e)
catch (const Poco::Exception & e)
{
std::string message = "Listen [" + listen_host + "]:" + std::to_string(listen_port) + " failed: " + std::to_string(e.code()) + ": " + e.what() + ": " + e.message();
if (listen_try)
LOG_ERROR(log, "Listen [" << listen_host << "]: " << e.code() << ": " << e.what() << ": " << e.message()
LOG_ERROR(log, message
<< " If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to "
"specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration "
"file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> ."
" Example for disabled IPv4: <listen_host>::</listen_host>");
else
throw;
throw Exception{message, ErrorCodes::NETWORK_ERROR};
}
}

View File

@ -370,8 +370,8 @@ void TCPHandler::processInsertQuery(const Settings & global_settings)
if (client_revision >= DBMS_MIN_REVISION_WITH_COLUMN_DEFAULTS_METADATA)
{
const auto & db_and_table = query_context->getInsertionTable();
if (auto * columns = ColumnsDescription::loadFromContext(*query_context, db_and_table.first, db_and_table.second))
sendTableColumns(*columns);
if (query_context->getSettingsRef().input_format_defaults_for_omitted_fields)
sendTableColumns(query_context->getTable(db_and_table.first, db_and_table.second)->getColumns());
}
/// Send block to the client - table structure.

View File

@ -9,55 +9,97 @@
namespace DB
{
namespace ErrorCodes
{
extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
extern const int BAD_ARGUMENTS;
}
namespace
{
/// Substitute return type for Date and DateTime
class AggregateFunctionGroupUniqArrayDate : public AggregateFunctionGroupUniqArray<DataTypeDate::FieldType>
template <typename has_limit>
class AggregateFunctionGroupUniqArrayDate : public AggregateFunctionGroupUniqArray<DataTypeDate::FieldType, has_limit>
{
public:
AggregateFunctionGroupUniqArrayDate(const DataTypePtr & argument_type) : AggregateFunctionGroupUniqArray<DataTypeDate::FieldType>(argument_type) {}
AggregateFunctionGroupUniqArrayDate(const DataTypePtr & argument_type, UInt64 max_elems_ = std::numeric_limits<UInt64>::max()) : AggregateFunctionGroupUniqArray<DataTypeDate::FieldType, has_limit>(argument_type, max_elems_) {}
DataTypePtr getReturnType() const override { return std::make_shared<DataTypeArray>(std::make_shared<DataTypeDate>()); }
};
class AggregateFunctionGroupUniqArrayDateTime : public AggregateFunctionGroupUniqArray<DataTypeDateTime::FieldType>
template <typename has_limit>
class AggregateFunctionGroupUniqArrayDateTime : public AggregateFunctionGroupUniqArray<DataTypeDateTime::FieldType, has_limit>
{
public:
AggregateFunctionGroupUniqArrayDateTime(const DataTypePtr & argument_type) : AggregateFunctionGroupUniqArray<DataTypeDateTime::FieldType>(argument_type) {}
AggregateFunctionGroupUniqArrayDateTime(const DataTypePtr & argument_type, UInt64 max_elems_ = std::numeric_limits<UInt64>::max()) : AggregateFunctionGroupUniqArray<DataTypeDateTime::FieldType, has_limit>(argument_type, max_elems_) {}
DataTypePtr getReturnType() const override { return std::make_shared<DataTypeArray>(std::make_shared<DataTypeDateTime>()); }
};
static IAggregateFunction * createWithExtraTypes(const DataTypePtr & argument_type)
template <typename has_limit, typename ... TArgs>
static IAggregateFunction * createWithExtraTypes(const DataTypePtr & argument_type, TArgs && ... args)
{
WhichDataType which(argument_type);
if (which.idx == TypeIndex::Date) return new AggregateFunctionGroupUniqArrayDate(argument_type);
else if (which.idx == TypeIndex::DateTime) return new AggregateFunctionGroupUniqArrayDateTime(argument_type);
if (which.idx == TypeIndex::Date) return new AggregateFunctionGroupUniqArrayDate<has_limit>(argument_type, std::forward<TArgs>(args)...);
else if (which.idx == TypeIndex::DateTime) return new AggregateFunctionGroupUniqArrayDateTime<has_limit>(argument_type, std::forward<TArgs>(args)...);
else
{
/// Check that we can use plain version of AggreagteFunctionGroupUniqArrayGeneric
if (argument_type->isValueUnambiguouslyRepresentedInContiguousMemoryRegion())
return new AggreagteFunctionGroupUniqArrayGeneric<true>(argument_type);
return new AggreagteFunctionGroupUniqArrayGeneric<true, has_limit>(argument_type, std::forward<TArgs>(args)...);
else
return new AggreagteFunctionGroupUniqArrayGeneric<false>(argument_type);
return new AggreagteFunctionGroupUniqArrayGeneric<false, has_limit>(argument_type, std::forward<TArgs>(args)...);
}
}
template <typename has_limit, typename ... TArgs>
inline AggregateFunctionPtr createAggregateFunctionGroupUniqArrayImpl(const std::string & name, const DataTypePtr & argument_type, TArgs ... args)
{
AggregateFunctionPtr res(createWithNumericType<AggregateFunctionGroupUniqArray, has_limit, const DataTypePtr &, TArgs...>(*argument_type, argument_type, std::forward<TArgs>(args)...));
if (!res)
res = AggregateFunctionPtr(createWithExtraTypes<has_limit>(argument_type, std::forward<TArgs>(args)...));
if (!res)
throw Exception("Illegal type " + argument_type->getName() +
" of argument for aggregate function " + name, ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
return res;
}
AggregateFunctionPtr createAggregateFunctionGroupUniqArray(const std::string & name, const DataTypes & argument_types, const Array & parameters)
{
assertNoParameters(name, parameters);
assertUnary(name, argument_types);
AggregateFunctionPtr res(createWithNumericType<AggregateFunctionGroupUniqArray>(*argument_types[0], argument_types[0]));
bool limit_size = false;
UInt64 max_elems = std::numeric_limits<UInt64>::max();
if (!res)
res = AggregateFunctionPtr(createWithExtraTypes(argument_types[0]));
if (parameters.empty())
{
// no limit
}
else if (parameters.size() == 1)
{
auto type = parameters[0].getType();
if (type != Field::Types::Int64 && type != Field::Types::UInt64)
throw Exception("Parameter for aggregate function " + name + " should be positive number", ErrorCodes::BAD_ARGUMENTS);
if (!res)
throw Exception("Illegal type " + argument_types[0]->getName() +
" of argument for aggregate function " + name, ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
if ((type == Field::Types::Int64 && parameters[0].get<Int64>() < 0) ||
(type == Field::Types::UInt64 && parameters[0].get<UInt64>() == 0))
throw Exception("Parameter for aggregate function " + name + " should be positive number", ErrorCodes::BAD_ARGUMENTS);
return res;
limit_size = true;
max_elems = parameters[0].get<UInt64>();
}
else
throw Exception("Incorrect number of parameters for aggregate function " + name + ", should be 0 or 1",
ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);
if (!limit_size)
return createAggregateFunctionGroupUniqArrayImpl<std::false_type>(name, argument_types[0]);
else
return createAggregateFunctionGroupUniqArrayImpl<std::true_type>(name, argument_types[0], max_elems);
}
}
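Taken together, the factory above gives groupUniqArray an optional positive parameter N: add() stops inserting once the per-group set holds N elements, and merge() (in the header below) tops the set up from the other partial state only until the cap is hit. A hedged sketch of the capped merge using a plain container; the real code iterates a ClickHouse HashSet:

#include <cstddef>
#include <set>

// Copy elements from the right-hand state until the cap is reached;
// anything beyond max_elems is deliberately dropped.
template <typename T>
void merge_bounded(std::set<T> & cur_set, const std::set<T> & rhs_set, std::size_t max_elems)
{
    for (const auto & rhs_elem : rhs_set)
    {
        if (cur_set.size() >= max_elems)
            return;
        cur_set.insert(rhs_elem);
    }
}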

View File

@ -36,16 +36,21 @@ struct AggregateFunctionGroupUniqArrayData
/// Puts all values to the hash set. Returns an array of unique values. Implemented for numeric types.
template <typename T>
template <typename T, typename Tlimit_num_elem>
class AggregateFunctionGroupUniqArray
: public IAggregateFunctionDataHelper<AggregateFunctionGroupUniqArrayData<T>, AggregateFunctionGroupUniqArray<T>>
: public IAggregateFunctionDataHelper<AggregateFunctionGroupUniqArrayData<T>, AggregateFunctionGroupUniqArray<T, Tlimit_num_elem>>
{
static constexpr bool limit_num_elems = Tlimit_num_elem::value;
UInt64 max_elems;
private:
using State = AggregateFunctionGroupUniqArrayData<T>;
public:
AggregateFunctionGroupUniqArray(const DataTypePtr & argument_type)
: IAggregateFunctionDataHelper<AggregateFunctionGroupUniqArrayData<T>, AggregateFunctionGroupUniqArray<T>>({argument_type}, {}) {}
AggregateFunctionGroupUniqArray(const DataTypePtr & argument_type, UInt64 max_elems_ = std::numeric_limits<UInt64>::max())
: IAggregateFunctionDataHelper<AggregateFunctionGroupUniqArrayData<T>,
AggregateFunctionGroupUniqArray<T, Tlimit_num_elem>>({argument_type}, {}),
max_elems(max_elems_) {}
String getName() const override { return "groupUniqArray"; }
@ -56,12 +61,27 @@ public:
void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena *) const override
{
if (limit_num_elems && this->data(place).value.size() >= max_elems)
return;
this->data(place).value.insert(static_cast<const ColumnVector<T> &>(*columns[0]).getData()[row_num]);
}
void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
{
this->data(place).value.merge(this->data(rhs).value);
if (!limit_num_elems)
this->data(place).value.merge(this->data(rhs).value);
else
{
auto & cur_set = this->data(place).value;
auto & rhs_set = this->data(rhs).value;
for (auto & rhs_elem : rhs_set)
{
if (cur_set.size() >= max_elems)
return;
cur_set.insert(rhs_elem.getValue());
}
}
}
void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
@ -111,25 +131,43 @@ struct AggreagteFunctionGroupUniqArrayGenericData
Set value;
};
/// Helper functions for deserialization and insertion for the class AggreagteFunctionGroupUniqArrayGeneric
template <bool is_plain_column>
static StringRef getSerializationImpl(const IColumn & column, size_t row_num, Arena & arena);
template <bool is_plain_column>
static void deserializeAndInsertImpl(StringRef str, IColumn & data_to);
/** The template parameter with value true should be used for columns that store their elements contiguously in memory.
* For such columns groupUniqArray() can be implemented more efficiently (especially for small numeric arrays).
*/
template <bool is_plain_column = false>
template <bool is_plain_column = false, typename Tlimit_num_elem = std::false_type>
class AggreagteFunctionGroupUniqArrayGeneric
: public IAggregateFunctionDataHelper<AggreagteFunctionGroupUniqArrayGenericData, AggreagteFunctionGroupUniqArrayGeneric<is_plain_column>>
: public IAggregateFunctionDataHelper<AggreagteFunctionGroupUniqArrayGenericData, AggreagteFunctionGroupUniqArrayGeneric<is_plain_column, Tlimit_num_elem>>
{
DataTypePtr & input_data_type;
static constexpr bool limit_num_elems = Tlimit_num_elem::value;
UInt64 max_elems;
using State = AggreagteFunctionGroupUniqArrayGenericData;
static StringRef getSerialization(const IColumn & column, size_t row_num, Arena & arena);
static StringRef getSerialization(const IColumn & column, size_t row_num, Arena & arena)
{
return getSerializationImpl<is_plain_column>(column, row_num, arena);
}
static void deserializeAndInsert(StringRef str, IColumn & data_to);
static void deserializeAndInsert(StringRef str, IColumn & data_to)
{
return deserializeAndInsertImpl<is_plain_column>(str, data_to);
}
public:
AggreagteFunctionGroupUniqArrayGeneric(const DataTypePtr & input_data_type)
: IAggregateFunctionDataHelper<AggreagteFunctionGroupUniqArrayGenericData, AggreagteFunctionGroupUniqArrayGeneric<is_plain_column>>({input_data_type}, {})
, input_data_type(this->argument_types[0]) {}
AggreagteFunctionGroupUniqArrayGeneric(const DataTypePtr & input_data_type, UInt64 max_elems_ = std::numeric_limits<UInt64>::max())
: IAggregateFunctionDataHelper<AggreagteFunctionGroupUniqArrayGenericData, AggreagteFunctionGroupUniqArrayGeneric<is_plain_column, Tlimit_num_elem>>({input_data_type}, {})
, input_data_type(this->argument_types[0])
, max_elems(max_elems_) {}
String getName() const override { return "groupUniqArray"; }
@ -174,7 +212,10 @@ public:
bool inserted;
State::Set::iterator it;
if (limit_num_elems && set.size() >= max_elems)
return;
StringRef str_serialized = getSerialization(*columns[0], row_num, *arena);
set.emplace(str_serialized, it, inserted);
if constexpr (!is_plain_column)
@ -198,6 +239,8 @@ public:
State::Set::iterator it;
for (auto & rhs_elem : rhs_set)
{
if (limit_num_elems && cur_set.size() >= max_elems)
return;
cur_set.emplace(rhs_elem.getValue(), it, inserted);
if (inserted)
{
@ -229,31 +272,30 @@ public:
template <>
inline StringRef AggreagteFunctionGroupUniqArrayGeneric<false>::getSerialization(const IColumn & column, size_t row_num, Arena & arena)
inline StringRef getSerializationImpl<false>(const IColumn & column, size_t row_num, Arena & arena)
{
const char * begin = nullptr;
return column.serializeValueIntoArena(row_num, arena, begin);
}
template <>
inline StringRef AggreagteFunctionGroupUniqArrayGeneric<true>::getSerialization(const IColumn & column, size_t row_num, Arena &)
inline StringRef getSerializationImpl<true>(const IColumn & column, size_t row_num, Arena &)
{
return column.getDataAt(row_num);
}
template <>
inline void AggreagteFunctionGroupUniqArrayGeneric<false>::deserializeAndInsert(StringRef str, IColumn & data_to)
inline void deserializeAndInsertImpl<false>(StringRef str, IColumn & data_to)
{
data_to.deserializeAndInsertFromArena(str.data);
}
template <>
inline void AggreagteFunctionGroupUniqArrayGeneric<true>::deserializeAndInsert(StringRef str, IColumn & data_to)
inline void deserializeAndInsertImpl<true>(StringRef str, IColumn & data_to)
{
data_to.insertData(str.data, str.size);
}
#undef AGGREGATE_FUNCTION_GROUP_ARRAY_UNIQ_MAX_SIZE
}
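A side note on why getSerializationImpl and deserializeAndInsertImpl became free functions in the change above: once the class gained the second template parameter Tlimit_num_elem, the old member specializations would require partial specialization of a member function, which C++ forbids, so delegating to a fully specialized free function template sidesteps the restriction. A minimal model with illustrative names:
#include <iostream>
#include <type_traits>
template <bool is_plain>
void serializeImplSketch();
template <> void serializeImplSketch<true>()  { std::cout << "plain column\n"; }
template <> void serializeImplSketch<false>() { std::cout << "generic column\n"; }
template <bool is_plain, typename TLimit> /// two parameters, as in the class above
struct CollectorSketch
{
    /// A member specialization for is_plain alone would be ill-formed here;
    /// delegating to the fully specialized free function keeps the dispatch legal.
    static void serialize() { serializeImplSketch<is_plain>(); }
};
int main()
{
    CollectorSketch<true, std::true_type>::serialize();   /// prints "plain column"
    CollectorSketch<false, std::false_type>::serialize(); /// prints "generic column"
}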

View File

@ -0,0 +1,474 @@
#include "AggregateFunctionMLMethod.h"
#include <IO/ReadHelpers.h>
#include <IO/WriteHelpers.h>
#include <Interpreters/castColumn.h>
#include <Common/FieldVisitors.h>
#include <Common/typeid_cast.h>
#include "AggregateFunctionFactory.h"
#include "FactoryHelpers.h"
#include "Helpers.h"
namespace DB
{
namespace
{
using FuncLinearRegression = AggregateFunctionMLMethod<LinearModelData, NameLinearRegression>;
using FuncLogisticRegression = AggregateFunctionMLMethod<LinearModelData, NameLogisticRegression>;
template <class Method>
AggregateFunctionPtr
createAggregateFunctionMLMethod(const std::string & name, const DataTypes & argument_types, const Array & parameters)
{
if (parameters.size() > 4)
throw Exception(
"Aggregate function " + name
+ " requires at most four parameters: learning_rate, l2_regularization_coef, mini-batch size and weights_updater "
"method",
ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);
if (argument_types.size() < 2)
throw Exception(
"Aggregate function " + name + " requires at least two arguments: target and model's parameters",
ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);
for (size_t i = 0; i < argument_types.size(); ++i)
{
if (!isNumber(argument_types[i]))
throw Exception(
"Argument " + std::to_string(i) + " of type " + argument_types[i]->getName()
+ " must be numeric for aggregate function " + name,
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
}
/// These default parameters were picked because they performed well in some tests,
/// though tuning is still required to achieve better results.
auto learning_rate = Float64(0.01);
auto l2_reg_coef = Float64(0.01);
UInt32 batch_size = 1;
std::shared_ptr<IWeightsUpdater> weights_updater = std::make_shared<StochasticGradientDescent>();
std::shared_ptr<IGradientComputer> gradient_computer;
if (!parameters.empty())
{
learning_rate = applyVisitor(FieldVisitorConvertToNumber<Float64>(), parameters[0]);
}
if (parameters.size() > 1)
{
l2_reg_coef = applyVisitor(FieldVisitorConvertToNumber<Float64>(), parameters[1]);
}
if (parameters.size() > 2)
{
batch_size = applyVisitor(FieldVisitorConvertToNumber<UInt32>(), parameters[2]);
}
if (parameters.size() > 3)
{
if (applyVisitor(FieldVisitorToString(), parameters[3]) == "\'SGD\'")
{
weights_updater = std::make_shared<StochasticGradientDescent>();
}
else if (applyVisitor(FieldVisitorToString(), parameters[3]) == "\'Momentum\'")
{
weights_updater = std::make_shared<Momentum>();
}
else if (applyVisitor(FieldVisitorToString(), parameters[3]) == "\'Nesterov\'")
{
weights_updater = std::make_shared<Nesterov>();
}
else
{
throw Exception("Invalid parameter for weights updater", ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
}
}
if (std::is_same<Method, FuncLinearRegression>::value)
{
gradient_computer = std::make_shared<LinearRegression>();
}
else if (std::is_same<Method, FuncLogisticRegression>::value)
{
gradient_computer = std::make_shared<LogisticRegression>();
}
else
{
throw Exception("Such gradient computer is not implemented yet", ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
}
return std::make_shared<Method>(
argument_types.size() - 1,
gradient_computer,
weights_updater,
learning_rate,
l2_reg_coef,
batch_size,
argument_types,
parameters);
}
}
void registerAggregateFunctionMLMethod(AggregateFunctionFactory & factory)
{
factory.registerFunction("LinearRegression", createAggregateFunctionMLMethod<FuncLinearRegression>);
factory.registerFunction("LogisticRegression", createAggregateFunctionMLMethod<FuncLogisticRegression>);
}
LinearModelData::LinearModelData(
Float64 learning_rate,
Float64 l2_reg_coef,
UInt32 param_num,
UInt32 batch_capacity,
std::shared_ptr<DB::IGradientComputer> gradient_computer,
std::shared_ptr<DB::IWeightsUpdater> weights_updater)
: learning_rate(learning_rate)
, l2_reg_coef(l2_reg_coef)
, batch_capacity(batch_capacity)
, batch_size(0)
, gradient_computer(std::move(gradient_computer))
, weights_updater(std::move(weights_updater))
{
weights.resize(param_num, Float64{0.0});
gradient_batch.resize(param_num + 1, Float64{0.0});
}
void LinearModelData::update_state()
{
if (batch_size == 0)
return;
weights_updater->update(batch_size, weights, bias, gradient_batch);
batch_size = 0;
++iter_num;
gradient_batch.assign(gradient_batch.size(), Float64{0.0});
}
void LinearModelData::predict(
ColumnVector<Float64>::Container & container, Block & block, const ColumnNumbers & arguments, const Context & context) const
{
gradient_computer->predict(container, block, arguments, weights, bias, context);
}
void LinearModelData::read(ReadBuffer & buf)
{
readBinary(bias, buf);
readBinary(weights, buf);
readBinary(iter_num, buf);
readBinary(gradient_batch, buf);
readBinary(batch_size, buf);
weights_updater->read(buf);
}
void LinearModelData::write(WriteBuffer & buf) const
{
writeBinary(bias, buf);
writeBinary(weights, buf);
writeBinary(iter_num, buf);
writeBinary(gradient_batch, buf);
writeBinary(batch_size, buf);
weights_updater->write(buf);
}
void LinearModelData::merge(const DB::LinearModelData & rhs)
{
if (iter_num == 0 && rhs.iter_num == 0)
return;
update_state();
/// can't update rhs state because it's constant
Float64 frac = (static_cast<Float64>(iter_num) * iter_num) / (iter_num * iter_num + rhs.iter_num * rhs.iter_num);
for (size_t i = 0; i < weights.size(); ++i)
{
weights[i] = weights[i] * frac + rhs.weights[i] * (1 - frac);
}
bias = bias * frac + rhs.bias * (1 - frac);
iter_num += rhs.iter_num;
weights_updater->merge(*rhs.weights_updater, frac, 1 - frac);
}
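For intuition about the weighting above, a worked instance (the iteration counts are illustrative): a model trained for more iterations dominates the merge quadratically.
#include <cstdio>
int main()
{
    double iter_num = 3, rhs_iter_num = 1; /// illustrative iteration counts
    double frac = (iter_num * iter_num) / (iter_num * iter_num + rhs_iter_num * rhs_iter_num);
    std::printf("frac = %.2f\n", frac); /// 0.90: the local model contributes 90% of each merged weight
}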
void LinearModelData::add(const IColumn ** columns, size_t row_num)
{
/// first column stores target; features start from (columns + 1)
const auto target = (*columns[0])[row_num].get<Float64>();
/// Here we pass columns + 1, since the first column corresponds to the target value and the rest to features
weights_updater->add_to_batch(
gradient_batch, *gradient_computer, weights, bias, learning_rate, l2_reg_coef, target, columns + 1, row_num);
++batch_size;
if (batch_size == batch_capacity)
{
update_state();
}
}
void Nesterov::read(ReadBuffer & buf)
{
readBinary(accumulated_gradient, buf);
}
void Nesterov::write(WriteBuffer & buf) const
{
writeBinary(accumulated_gradient, buf);
}
void Nesterov::merge(const IWeightsUpdater & rhs, Float64 frac, Float64 rhs_frac)
{
auto & nesterov_rhs = static_cast<const Nesterov &>(rhs);
for (size_t i = 0; i < accumulated_gradient.size(); ++i)
{
accumulated_gradient[i] = accumulated_gradient[i] * frac + nesterov_rhs.accumulated_gradient[i] * rhs_frac;
}
}
void Nesterov::update(UInt32 batch_size, std::vector<Float64> & weights, Float64 & bias, const std::vector<Float64> & batch_gradient)
{
if (accumulated_gradient.empty())
{
accumulated_gradient.resize(batch_gradient.size(), Float64{0.0});
}
for (size_t i = 0; i < batch_gradient.size(); ++i)
{
accumulated_gradient[i] = accumulated_gradient[i] * alpha_ + batch_gradient[i] / batch_size;
}
for (size_t i = 0; i < weights.size(); ++i)
{
weights[i] += accumulated_gradient[i];
}
bias += accumulated_gradient[weights.size()];
}
void Nesterov::add_to_batch(
std::vector<Float64> & batch_gradient,
IGradientComputer & gradient_computer,
const std::vector<Float64> & weights,
Float64 bias,
Float64 learning_rate,
Float64 l2_reg_coef,
Float64 target,
const IColumn ** columns,
size_t row_num)
{
if (accumulated_gradient.empty())
{
accumulated_gradient.resize(batch_gradient.size(), Float64{0.0});
}
std::vector<Float64> shifted_weights(weights.size());
for (size_t i = 0; i != shifted_weights.size(); ++i)
{
shifted_weights[i] = weights[i] + accumulated_gradient[i] * alpha_;
}
auto shifted_bias = bias + accumulated_gradient[weights.size()] * alpha_;
gradient_computer.compute(batch_gradient, shifted_weights, shifted_bias, learning_rate, l2_reg_coef, target, columns, row_num);
}
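The distinguishing step of Nesterov above is that the gradient is evaluated at the look-ahead point w + alpha * v rather than at the current weights w. A one-scalar sketch with illustrative values:
#include <cstdio>
int main()
{
    double w = 1.0;     /// current weight
    double v = 0.2;     /// accumulated gradient (velocity)
    double alpha = 0.1; /// momentum coefficient, as in alpha_ above
    double lookahead = w + alpha * v;
    std::printf("gradient is computed at %.2f rather than %.2f\n", lookahead, w);
}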
void Momentum::read(ReadBuffer & buf)
{
readBinary(accumulated_gradient, buf);
}
void Momentum::write(WriteBuffer & buf) const
{
writeBinary(accumulated_gradient, buf);
}
void Momentum::merge(const IWeightsUpdater & rhs, Float64 frac, Float64 rhs_frac)
{
auto & momentum_rhs = static_cast<const Momentum &>(rhs);
for (size_t i = 0; i < accumulated_gradient.size(); ++i)
{
accumulated_gradient[i] = accumulated_gradient[i] * frac + momentum_rhs.accumulated_gradient[i] * rhs_frac;
}
}
void Momentum::update(UInt32 batch_size, std::vector<Float64> & weights, Float64 & bias, const std::vector<Float64> & batch_gradient)
{
/// batch_size is already checked to be greater than 0
if (accumulated_gradient.empty())
{
accumulated_gradient.resize(batch_gradient.size(), Float64{0.0});
}
for (size_t i = 0; i < batch_gradient.size(); ++i)
{
accumulated_gradient[i] = accumulated_gradient[i] * alpha_ + batch_gradient[i] / batch_size;
}
for (size_t i = 0; i < weights.size(); ++i)
{
weights[i] += accumulated_gradient[i];
}
bias += accumulated_gradient[weights.size()];
}
void StochasticGradientDescent::update(
UInt32 batch_size, std::vector<Float64> & weights, Float64 & bias, const std::vector<Float64> & batch_gradient)
{
/// batch_size is already checked to be greater than 0
for (size_t i = 0; i < weights.size(); ++i)
{
weights[i] += batch_gradient[i] / batch_size;
}
bias += batch_gradient[weights.size()] / batch_size;
}
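The SGD update above divides the accumulated batch gradient by batch_size, so each step applies the mean gradient of the mini-batch. A scalar sketch with illustrative values:
#include <cstdio>
int main()
{
    double batch_gradient = 0.6; /// sum of three per-sample gradient contributions
    unsigned batch_size = 3;
    double w = 1.0;
    w += batch_gradient / batch_size; /// apply the mean gradient of the mini-batch
    std::printf("w = %.2f\n", w);     /// prints 1.20
}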
void IWeightsUpdater::add_to_batch(
std::vector<Float64> & batch_gradient,
IGradientComputer & gradient_computer,
const std::vector<Float64> & weights,
Float64 bias,
Float64 learning_rate,
Float64 l2_reg_coef,
Float64 target,
const IColumn ** columns,
size_t row_num)
{
gradient_computer.compute(batch_gradient, weights, bias, learning_rate, l2_reg_coef, target, columns, row_num);
}
void LogisticRegression::predict(
ColumnVector<Float64>::Container & container,
Block & block,
const ColumnNumbers & arguments,
const std::vector<Float64> & weights,
Float64 bias,
const Context & context) const
{
size_t rows_num = block.rows();
std::vector<Float64> results(rows_num, bias);
for (size_t i = 1; i < arguments.size(); ++i)
{
const ColumnWithTypeAndName & cur_col = block.getByPosition(arguments[i]);
if (!isNumber(cur_col.type))
{
throw Exception("Prediction arguments must have numeric type", ErrorCodes::BAD_ARGUMENTS);
}
/// If column type is already Float64 then castColumn simply returns it
auto features_col_ptr = castColumn(cur_col, std::make_shared<DataTypeFloat64>(), context);
auto features_column = typeid_cast<const ColumnFloat64 *>(features_col_ptr.get());
if (!features_column)
{
throw Exception("Unexpectedly cannot dynamically cast features column " + std::to_string(i), ErrorCodes::LOGICAL_ERROR);
}
for (size_t row_num = 0; row_num != rows_num; ++row_num)
{
results[row_num] += weights[i - 1] * features_column->getElement(row_num);
}
}
container.reserve(rows_num);
for (size_t row_num = 0; row_num != rows_num; ++row_num)
{
container.emplace_back(1 / (1 + exp(-results[row_num])));
}
}
void LogisticRegression::compute(
std::vector<Float64> & batch_gradient,
const std::vector<Float64> & weights,
Float64 bias,
Float64 learning_rate,
Float64 l2_reg_coef,
Float64 target,
const IColumn ** columns,
size_t row_num)
{
Float64 derivative = bias;
for (size_t i = 0; i < weights.size(); ++i)
{
auto value = (*columns[i])[row_num].get<Float64>();
derivative += weights[i] * value;
}
derivative *= target;
derivative = exp(derivative);
batch_gradient[weights.size()] += learning_rate * target / (derivative + 1);
for (size_t i = 0; i < weights.size(); ++i)
{
auto value = (*columns[i])[row_num].get<Float64>();
batch_gradient[i] += learning_rate * target * value / (derivative + 1) - 2 * l2_reg_coef * weights[i];
}
}
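The accumulation above is the negative gradient of log-loss for targets in {-1, +1}: with f = w.x + b, dL/db = -y / (1 + exp(y * f)), so adding learning_rate * target / (derivative + 1) performs gradient descent on log-loss. A one-sample check, ignoring the L2 term (all values illustrative):
#include <cmath>
#include <cstdio>
int main()
{
    double w = 0.5, b = 0.0, x = 2.0, y = 1.0, lr = 0.1; /// illustrative values
    double margin = std::exp(y * (w * x + b)); /// the "derivative" variable above
    double grad_b = lr * y / (margin + 1);     /// bias component of the batch gradient
    double grad_w = lr * y * x / (margin + 1); /// weight component, before the L2 term
    std::printf("grad_b = %.4f, grad_w = %.4f\n", grad_b, grad_w);
}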
void LinearRegression::predict(
ColumnVector<Float64>::Container & container,
Block & block,
const ColumnNumbers & arguments,
const std::vector<Float64> & weights,
Float64 bias,
const Context & context) const
{
if (weights.size() + 1 != arguments.size())
{
throw Exception("In predict function number of arguments differs from the size of weights vector", ErrorCodes::LOGICAL_ERROR);
}
size_t rows_num = block.rows();
std::vector<Float64> results(rows_num, bias);
for (size_t i = 1; i < arguments.size(); ++i)
{
const ColumnWithTypeAndName & cur_col = block.getByPosition(arguments[i]);
if (!isNumber(cur_col.type))
{
throw Exception("Prediction arguments must have numeric type", ErrorCodes::BAD_ARGUMENTS);
}
/// If column type is already Float64 then castColumn simply returns it
auto features_col_ptr = castColumn(cur_col, std::make_shared<DataTypeFloat64>(), context);
auto features_column = typeid_cast<const ColumnFloat64 *>(features_col_ptr.get());
if (!features_column)
{
throw Exception("Unexpectedly cannot dynamically cast features column " + std::to_string(i), ErrorCodes::LOGICAL_ERROR);
}
for (size_t row_num = 0; row_num != rows_num; ++row_num)
{
results[row_num] += weights[i - 1] * features_column->getElement(row_num);
}
}
container.reserve(rows_num);
for (size_t row_num = 0; row_num != rows_num; ++row_num)
{
container.emplace_back(results[row_num]);
}
}
void LinearRegression::compute(
std::vector<Float64> & batch_gradient,
const std::vector<Float64> & weights,
Float64 bias,
Float64 learning_rate,
Float64 l2_reg_coef,
Float64 target,
const IColumn ** columns,
size_t row_num)
{
Float64 derivative = (target - bias);
for (size_t i = 0; i < weights.size(); ++i)
{
auto value = (*columns[i])[row_num].get<Float64>();
derivative -= weights[i] * value;
}
derivative *= (2 * learning_rate);
batch_gradient[weights.size()] += derivative;
for (size_t i = 0; i < weights.size(); ++i)
{
auto value = (*columns[i])[row_num].get<Float64>();
batch_gradient[i] += derivative * value - 2 * l2_reg_coef * weights[i];
}
}
}

View File

@ -0,0 +1,330 @@
#pragma once
#include <Columns/ColumnVector.h>
#include <Columns/ColumnsCommon.h>
#include <Columns/ColumnsNumber.h>
#include <DataTypes/DataTypesNumber.h>
#include "IAggregateFunction.h"
namespace DB
{
namespace ErrorCodes
{
extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
extern const int BAD_ARGUMENTS;
}
/**
GradientComputer class computes the gradient according to its loss function
*/
class IGradientComputer
{
public:
IGradientComputer() {}
virtual ~IGradientComputer() = default;
/// Adds the gradient computed at the new point (weights, bias) to batch_gradient
virtual void compute(
std::vector<Float64> & batch_gradient,
const std::vector<Float64> & weights,
Float64 bias,
Float64 learning_rate,
Float64 l2_reg_coef,
Float64 target,
const IColumn ** columns,
size_t row_num)
= 0;
virtual void predict(
ColumnVector<Float64>::Container & container,
Block & block,
const ColumnNumbers & arguments,
const std::vector<Float64> & weights,
Float64 bias,
const Context & context) const = 0;
};
class LinearRegression : public IGradientComputer
{
public:
LinearRegression() {}
void compute(
std::vector<Float64> & batch_gradient,
const std::vector<Float64> & weights,
Float64 bias,
Float64 learning_rate,
Float64 l2_reg_coef,
Float64 target,
const IColumn ** columns,
size_t row_num) override;
void predict(
ColumnVector<Float64>::Container & container,
Block & block,
const ColumnNumbers & arguments,
const std::vector<Float64> & weights,
Float64 bias,
const Context & context) const override;
};
class LogisticRegression : public IGradientComputer
{
public:
LogisticRegression() {}
void compute(
std::vector<Float64> & batch_gradient,
const std::vector<Float64> & weights,
Float64 bias,
Float64 learning_rate,
Float64 l2_reg_coef,
Float64 target,
const IColumn ** columns,
size_t row_num) override;
void predict(
ColumnVector<Float64>::Container & container,
Block & block,
const ColumnNumbers & arguments,
const std::vector<Float64> & weights,
Float64 bias,
const Context & context) const override;
};
/**
* IWeightsUpdater defines how the current weights are updated
* and invokes the GradientComputer on each iteration
*/
class IWeightsUpdater
{
public:
virtual ~IWeightsUpdater() = default;
/// Calls GradientComputer to update current mini-batch
virtual void add_to_batch(
std::vector<Float64> & batch_gradient,
IGradientComputer & gradient_computer,
const std::vector<Float64> & weights,
Float64 bias,
Float64 learning_rate,
Float64 l2_reg_coef,
Float64 target,
const IColumn ** columns,
size_t row_num);
/// Updates current weights according to the gradient from the last mini-batch
virtual void update(UInt32 batch_size, std::vector<Float64> & weights, Float64 & bias, const std::vector<Float64> & gradient) = 0;
/// Used during the merge of two states
virtual void merge(const IWeightsUpdater &, Float64, Float64) {}
/// Used for serialization when necessary
virtual void write(WriteBuffer &) const {}
/// Used for serialization when necessary
virtual void read(ReadBuffer &) {}
};
class StochasticGradientDescent : public IWeightsUpdater
{
public:
void update(UInt32 batch_size, std::vector<Float64> & weights, Float64 & bias, const std::vector<Float64> & batch_gradient) override;
};
class Momentum : public IWeightsUpdater
{
public:
Momentum() {}
Momentum(Float64 alpha) : alpha_(alpha) {}
void update(UInt32 batch_size, std::vector<Float64> & weights, Float64 & bias, const std::vector<Float64> & batch_gradient) override;
virtual void merge(const IWeightsUpdater & rhs, Float64 frac, Float64 rhs_frac) override;
void write(WriteBuffer & buf) const override;
void read(ReadBuffer & buf) override;
private:
Float64 alpha_{0.1};
std::vector<Float64> accumulated_gradient;
};
class Nesterov : public IWeightsUpdater
{
public:
Nesterov() {}
Nesterov(Float64 alpha) : alpha_(alpha) {}
void add_to_batch(
std::vector<Float64> & batch_gradient,
IGradientComputer & gradient_computer,
const std::vector<Float64> & weights,
Float64 bias,
Float64 learning_rate,
Float64 l2_reg_coef,
Float64 target,
const IColumn ** columns,
size_t row_num) override;
void update(UInt32 batch_size, std::vector<Float64> & weights, Float64 & bias, const std::vector<Float64> & batch_gradient) override;
virtual void merge(const IWeightsUpdater & rhs, Float64 frac, Float64 rhs_frac) override;
void write(WriteBuffer & buf) const override;
void read(ReadBuffer & buf) override;
private:
Float64 alpha_{0.1};
std::vector<Float64> accumulated_gradient;
};
/**
* LinearModelData manages the current state of learning
*/
class LinearModelData
{
public:
LinearModelData() {}
LinearModelData(
Float64 learning_rate,
Float64 l2_reg_coef,
UInt32 param_num,
UInt32 batch_capacity,
std::shared_ptr<IGradientComputer> gradient_computer,
std::shared_ptr<IWeightsUpdater> weights_updater);
void add(const IColumn ** columns, size_t row_num);
void merge(const LinearModelData & rhs);
void write(WriteBuffer & buf) const;
void read(ReadBuffer & buf);
void
predict(ColumnVector<Float64>::Container & container, Block & block, const ColumnNumbers & arguments, const Context & context) const;
private:
std::vector<Float64> weights;
Float64 bias{0.0};
Float64 learning_rate;
Float64 l2_reg_coef;
UInt32 batch_capacity;
UInt32 iter_num = 0;
std::vector<Float64> gradient_batch;
UInt32 batch_size;
std::shared_ptr<IGradientComputer> gradient_computer;
std::shared_ptr<IWeightsUpdater> weights_updater;
/**
* The function is called when we want to flush current batch and update our weights
*/
void update_state();
};
template <
/// Implemented Machine Learning method
typename Data,
/// Name of the method
typename Name>
class AggregateFunctionMLMethod final : public IAggregateFunctionDataHelper<Data, AggregateFunctionMLMethod<Data, Name>>
{
public:
String getName() const override { return Name::name; }
explicit AggregateFunctionMLMethod(
UInt32 param_num,
std::shared_ptr<IGradientComputer> gradient_computer,
std::shared_ptr<IWeightsUpdater> weights_updater,
Float64 learning_rate,
Float64 l2_reg_coef,
UInt32 batch_size,
const DataTypes & arguments_types,
const Array & params)
: IAggregateFunctionDataHelper<Data, AggregateFunctionMLMethod<Data, Name>>(arguments_types, params)
, param_num(param_num)
, learning_rate(learning_rate)
, l2_reg_coef(l2_reg_coef)
, batch_size(batch_size)
, gradient_computer(std::move(gradient_computer))
, weights_updater(std::move(weights_updater))
{
}
DataTypePtr getReturnType() const override { return std::make_shared<DataTypeNumber<Float64>>(); }
void create(AggregateDataPtr place) const override
{
new (place) Data(learning_rate, l2_reg_coef, param_num, batch_size, gradient_computer, weights_updater);
}
void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena *) const override
{
this->data(place).add(columns, row_num);
}
void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override { this->data(place).merge(this->data(rhs)); }
void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override { this->data(place).write(buf); }
void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override { this->data(place).read(buf); }
void predictValues(
ConstAggregateDataPtr place, IColumn & to, Block & block, const ColumnNumbers & arguments, const Context & context) const override
{
if (arguments.size() != param_num + 1)
throw Exception(
"Predict got incorrect number of arguments. Got: " + std::to_string(arguments.size())
+ ". Required: " + std::to_string(param_num + 1),
ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);
auto & column = dynamic_cast<ColumnVector<Float64> &>(to);
this->data(place).predict(column.getData(), block, arguments, context);
}
void insertResultInto(ConstAggregateDataPtr place, IColumn & to) const override
{
std::ignore = place;
std::ignore = to;
throw std::runtime_error("not implemented");
}
const char * getHeaderFilePath() const override { return __FILE__; }
private:
UInt32 param_num;
Float64 learning_rate;
Float64 l2_reg_coef;
UInt32 batch_size;
std::shared_ptr<IGradientComputer> gradient_computer;
std::shared_ptr<IWeightsUpdater> weights_updater;
};
struct NameLinearRegression
{
static constexpr auto name = "LinearRegression";
};
struct NameLogisticRegression
{
static constexpr auto name = "LogisticRegression";
};
}

View File

@ -104,7 +104,6 @@ public:
if (event)
{
this->data(place).add(i);
break;
}
}
}

View File

@ -0,0 +1,30 @@
#include "AggregateFunctionTSGroupSum.h"
#include "AggregateFunctionFactory.h"
#include "FactoryHelpers.h"
#include "Helpers.h"
namespace DB
{
namespace
{
template <bool rate>
AggregateFunctionPtr createAggregateFunctionTSgroupSum(const std::string & name, const DataTypes & arguments, const Array & params)
{
assertNoParameters(name, params);
if (arguments.size() < 3)
throw Exception("Not enough event arguments for aggregate function " + name, ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);
return std::make_shared<AggregateFunctionTSgroupSum<rate>>(arguments);
}
}
void registerAggregateFunctionTSgroupSum(AggregateFunctionFactory & factory)
{
factory.registerFunction("TSgroupSum", createAggregateFunctionTSgroupSum<false>, AggregateFunctionFactory::CaseInsensitive);
factory.registerFunction("TSgroupRateSum", createAggregateFunctionTSgroupSum<true>, AggregateFunctionFactory::CaseInsensitive);
}
}

View File

@ -0,0 +1,287 @@
#pragma once
#include <bitset>
#include <iostream>
#include <map>
#include <queue>
#include <sstream>
#include <unordered_set>
#include <utility>
#include <Columns/ColumnArray.h>
#include <Columns/ColumnTuple.h>
#include <Columns/ColumnsNumber.h>
#include <DataTypes/DataTypeArray.h>
#include <DataTypes/DataTypeTuple.h>
#include <DataTypes/DataTypesNumber.h>
#include <IO/ReadHelpers.h>
#include <IO/WriteHelpers.h>
#include <Common/ArenaAllocator.h>
#include <ext/range.h>
#include "IAggregateFunction.h"
namespace DB
{
namespace ErrorCodes
{
extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
extern const int TOO_MANY_ARGUMENTS_FOR_FUNCTION;
}
template <bool rate>
struct AggregateFunctionTSgroupSumData
{
using DataPoint = std::pair<Int64, Float64>;
struct Points
{
using Dps = std::queue<DataPoint>;
Dps dps;
void add(Int64 t, Float64 v)
{
dps.push(std::make_pair(t, v));
if (dps.size() > 2)
dps.pop();
}
Float64 getval(Int64 t)
{
Int64 t1, t2;
Float64 v1, v2;
if (rate)
{
if (dps.size() < 2)
return 0;
t1 = dps.back().first;
t2 = dps.front().first;
v1 = dps.back().second;
v2 = dps.front().second;
return (v1 - v2) / Float64(t1 - t2);
}
else
{
if (dps.size() == 1 && t == dps.front().first)
return dps.front().second;
t1 = dps.back().first;
t2 = dps.front().first;
v1 = dps.back().second;
v2 = dps.front().second;
return v2 + ((v1 - v2) * Float64(t - t2)) / Float64(t1 - t2);
}
}
};
static constexpr size_t bytes_on_stack = 128;
typedef std::map<UInt64, Points> Series;
typedef PODArray<DataPoint, bytes_on_stack, AllocatorWithStackMemory<Allocator<false>, bytes_on_stack>> AggSeries;
Series ss;
AggSeries result;
void add(UInt64 uid, Int64 t, Float64 v)
{ // assumes timestamps arrive in ascending order
typename Series::iterator it_ss;
if (ss.count(uid) == 0)
{ // the time series does not exist yet, insert a new one
Points tmp;
tmp.add(t, v);
ss.emplace(uid, tmp);
it_ss = ss.find(uid);
}
else
{
it_ss = ss.find(uid);
it_ss->second.add(t, v);
}
if (result.size() > 0 && t < result.back().first)
throw Exception{"TSgroupSum or TSgroupRateSum must order by timestamp asc!!!", ErrorCodes::LOGICAL_ERROR};
if (result.size() > 0 && t == result.back().first)
{
//do not add new point
if (rate)
result.back().second += it_ss->second.getval(t);
else
result.back().second += v;
}
else
{
if (rate)
result.emplace_back(std::make_pair(t, it_ss->second.getval(t)));
else
result.emplace_back(std::make_pair(t, v));
}
size_t i = result.size() - 1;
// scan backwards for the first index whose timestamp is greater than the earliest timestamp retained for this series
// (note: i is unsigned, so the bound check must come before the element access to avoid wraparound)
while (i > 0 && result[i].first > it_ss->second.dps.front().first)
i--;
if (result[i].first <= it_ss->second.dps.front().first)
i++;
while (i < result.size() - 1)
{
result[i].second += it_ss->second.getval(result[i].first);
i++;
}
}
void merge(const AggregateFunctionTSgroupSumData & other)
{
// if the time series overlap, aggregate the two series by interpolation;
AggSeries tmp;
tmp.reserve(other.result.size() + result.size());
size_t i = 0, j = 0;
Int64 t1, t2;
Float64 v1, v2;
while (i < result.size() && j < other.result.size())
{
if (result[i].first < other.result[j].first)
{
if (j == 0)
{
tmp.emplace_back(result[i]);
}
else
{
t1 = other.result[j].first;
t2 = other.result[j - 1].first;
v1 = other.result[j].second;
v2 = other.result[j - 1].second;
Float64 value = result[i].second + v2 + (v1 - v2) * (Float64(result[i].first - t2)) / Float64(t1 - t2);
tmp.emplace_back(std::make_pair(result[i].first, value));
}
i++;
}
else if (result[i].first > other.result[j].first)
{
if (i == 0)
{
tmp.emplace_back(other.result[j]);
}
else
{
t1 = result[i].first;
t2 = result[i - 1].first;
v1 = result[i].second;
v2 = result[i - 1].second;
Float64 value = other.result[j].second + v2 + (v1 - v2) * (Float64(other.result[j].first - t2)) / Float64(t1 - t2);
tmp.emplace_back(std::make_pair(other.result[j].first, value));
}
j++;
}
else
{
tmp.emplace_back(std::make_pair(result[i].first, result[i].second + other.result[j].second));
i++;
j++;
}
}
while (i < result.size())
{
tmp.emplace_back(result[i]);
i++;
}
while (j < other.result.size())
{
tmp.push_back(other.result[j]);
j++;
}
swap(result, tmp);
}
void serialize(WriteBuffer & buf) const
{
size_t size = result.size();
writeVarUInt(size, buf);
buf.write(reinterpret_cast<const char *>(result.data()), size * sizeof(result[0]));
}
void deserialize(ReadBuffer & buf)
{
size_t size = 0;
readVarUInt(size, buf);
result.resize(size);
buf.read(reinterpret_cast<char *>(result.data()), size * sizeof(result[0]));
}
};
template <bool rate>
class AggregateFunctionTSgroupSum final
: public IAggregateFunctionDataHelper<AggregateFunctionTSgroupSumData<rate>, AggregateFunctionTSgroupSum<rate>>
{
private:
public:
String getName() const override { return rate ? "TSgroupRateSum" : "TSgroupSum"; }
AggregateFunctionTSgroupSum(const DataTypes & arguments)
: IAggregateFunctionDataHelper<AggregateFunctionTSgroupSumData<rate>, AggregateFunctionTSgroupSum<rate>>(arguments, {})
{
if (!WhichDataType(arguments[0].get()).isUInt64())
throw Exception{"Illegal type " + arguments[0].get()->getName() + " of argument 1 of aggregate function " + getName()
+ ", must be UInt64",
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT};
if (!WhichDataType(arguments[1].get()).isInt64())
throw Exception{"Illegal type " + arguments[1].get()->getName() + " of argument 2 of aggregate function " + getName()
+ ", must be Int64",
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT};
if (!WhichDataType(arguments[2].get()).isFloat64())
throw Exception{"Illegal type " + arguments[2].get()->getName() + " of argument 3 of aggregate function " + getName()
+ ", must be Float64",
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT};
}
DataTypePtr getReturnType() const override
{
auto datatypes = std::vector<DataTypePtr>();
datatypes.push_back(std::make_shared<DataTypeInt64>());
datatypes.push_back(std::make_shared<DataTypeFloat64>());
return std::make_shared<DataTypeArray>(std::make_shared<DataTypeTuple>(datatypes));
}
void add(AggregateDataPtr place, const IColumn ** columns, const size_t row_num, Arena *) const override
{
auto uid = static_cast<const ColumnVector<UInt64> *>(columns[0])->getData()[row_num];
auto ts = static_cast<const ColumnVector<Int64> *>(columns[1])->getData()[row_num];
auto val = static_cast<const ColumnVector<Float64> *>(columns[2])->getData()[row_num];
if (uid && ts && val)
{
this->data(place).add(uid, ts, val);
}
}
void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override { this->data(place).merge(this->data(rhs)); }
void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override { this->data(place).serialize(buf); }
void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override { this->data(place).deserialize(buf); }
void insertResultInto(ConstAggregateDataPtr place, IColumn & to) const override
{
const auto & value = this->data(place).result;
size_t size = value.size();
ColumnArray & arr_to = static_cast<ColumnArray &>(to);
ColumnArray::Offsets & offsets_to = arr_to.getOffsets();
size_t old_size = offsets_to.back();
offsets_to.push_back(offsets_to.back() + size);
if (size)
{
typename ColumnInt64::Container & ts_to
= static_cast<ColumnInt64 &>(static_cast<ColumnTuple &>(arr_to.getData()).getColumn(0)).getData();
typename ColumnFloat64::Container & val_to
= static_cast<ColumnFloat64 &>(static_cast<ColumnTuple &>(arr_to.getData()).getColumn(1)).getData();
ts_to.reserve(old_size + size);
val_to.reserve(old_size + size);
size_t i = 0;
while (i < this->data(place).result.size())
{
ts_to.push_back(this->data(place).result[i].first);
val_to.push_back(this->data(place).result[i].second);
i++;
}
}
}
bool allocatesMemoryInArena() const override { return true; }
const char * getHeaderFilePath() const override { return __FILE__; }
};
}
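The merge-by-interpolation above, and getval() before it, rely on plain linear interpolation between the two retained points of a series. A minimal standalone sketch (names illustrative):
#include <cstdio>
/// Estimate the value at time t from points (t1, v1) and (t2, v2), where t2 < t1.
double interpolateSketch(double t, double t1, double v1, double t2, double v2)
{
    return v2 + (v1 - v2) * (t - t2) / (t1 - t2);
}
int main()
{
    /// Series points (10, 100) and (20, 200); the interpolated value at t = 15:
    std::printf("%.1f\n", interpolateSketch(15, 20, 200, 10, 100)); /// prints 150.0
}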

View File

@ -19,7 +19,7 @@ list(REMOVE_ITEM clickhouse_aggregate_functions_headers
FactoryHelpers.h
)
add_library(clickhouse_aggregate_functions ${LINK_MODE} ${clickhouse_aggregate_functions_sources})
add_library(clickhouse_aggregate_functions ${clickhouse_aggregate_functions_sources})
target_link_libraries(clickhouse_aggregate_functions PRIVATE dbms PUBLIC ${CITYHASH_LIBRARIES})
target_include_directories (clickhouse_aggregate_functions BEFORE PRIVATE ${COMMON_INCLUDE_DIR})

View File

@ -7,6 +7,8 @@
#include <Core/Types.h>
#include <Core/Field.h>
#include <Core/ColumnNumbers.h>
#include <Core/Block.h>
#include <Common/Exception.h>
@ -92,6 +94,13 @@ public:
/// Inserts results into a column.
virtual void insertResultInto(ConstAggregateDataPtr place, IColumn & to) const = 0;
/// This function is used for machine learning methods
virtual void predictValues(ConstAggregateDataPtr /* place */, IColumn & /*to*/,
Block & /*block*/, const ColumnNumbers & /*arguments*/, const Context & /*context*/) const
{
throw Exception("Method predictValues is not supported for " + getName(), ErrorCodes::NOT_IMPLEMENTED);
}
/** Returns true for aggregate functions of type -State.
* They are executed as other aggregate functions, but not finalized (return an aggregation state that can be combined with another).
*/
@ -149,7 +158,6 @@ protected:
static const Data & data(ConstAggregateDataPtr place) { return *reinterpret_cast<const Data*>(place); }
public:
IAggregateFunctionDataHelper(const DataTypes & argument_types_, const Array & parameters_)
: IAggregateFunctionHelper<Derived>(argument_types_, parameters_) {}

View File

@ -136,7 +136,7 @@ class QuantileTDigest
{
if (unmerged > 0)
{
RadixSort<RadixSortTraits>::execute(summary.data(), summary.size());
RadixSort<RadixSortTraits>::executeLSD(summary.data(), summary.size());
if (summary.size() > 3)
{

View File

@ -28,6 +28,7 @@ void registerAggregateFunctionTopK(AggregateFunctionFactory &);
void registerAggregateFunctionsBitwise(AggregateFunctionFactory &);
void registerAggregateFunctionsBitmap(AggregateFunctionFactory &);
void registerAggregateFunctionsMaxIntersections(AggregateFunctionFactory &);
void registerAggregateFunctionMLMethod(AggregateFunctionFactory &);
void registerAggregateFunctionEntropy(AggregateFunctionFactory &);
void registerAggregateFunctionLeastSqr(AggregateFunctionFactory &);
@ -40,7 +41,7 @@ void registerAggregateFunctionCombinatorNull(AggregateFunctionCombinatorFactory
void registerAggregateFunctionHistogram(AggregateFunctionFactory & factory);
void registerAggregateFunctionRetention(AggregateFunctionFactory & factory);
void registerAggregateFunctionTSgroupSum(AggregateFunctionFactory & factory);
void registerAggregateFunctions()
{
{
@ -69,6 +70,8 @@ void registerAggregateFunctions()
registerAggregateFunctionsMaxIntersections(factory);
registerAggregateFunctionHistogram(factory);
registerAggregateFunctionRetention(factory);
registerAggregateFunctionTSgroupSum(factory);
registerAggregateFunctionMLMethod(factory);
registerAggregateFunctionEntropy(factory);
registerAggregateFunctionLeastSqr(factory);
}

View File

@ -1,7 +1,7 @@
# TODO: make separate lib datastream, block, ...
#include(${ClickHouse_SOURCE_DIR}/cmake/dbms_glob_sources.cmake)
#add_headers_and_sources(clickhouse_client .)
#add_library(clickhouse_client ${LINK_MODE} ${clickhouse_client_headers} ${clickhouse_client_sources})
#add_library(clickhouse_client ${clickhouse_client_headers} ${clickhouse_client_sources})
#target_link_libraries (clickhouse_client clickhouse_common_io ${Poco_Net_LIBRARY})
#target_include_directories (clickhouse_client PRIVATE ${DBMS_INCLUDE_DIR})

View File

@ -401,7 +401,7 @@ void Connection::sendQuery(
if (settings)
settings->serialize(*out);
else
writeStringBinary("", *out);
writeStringBinary("" /* empty string is a marker of the end of settings */, *out);
writeVarUInt(stage, *out);
writeVarUInt(static_cast<bool>(compression), *out);

View File

@ -10,6 +10,7 @@
#include <Common/typeid_cast.h>
#include <Common/Arena.h>
#include <AggregateFunctions/AggregateFunctionMLMethod.h>
namespace DB
{
@ -18,6 +19,7 @@ namespace ErrorCodes
{
extern const int PARAMETER_OUT_OF_BOUND;
extern const int SIZES_OF_COLUMNS_DOESNT_MATCH;
extern const int ILLEGAL_TYPE_OF_ARGUMENT;
}
@ -33,6 +35,25 @@ void ColumnAggregateFunction::addArena(ArenaPtr arena_)
arenas.push_back(arena_);
}
/// This function is used in convertToValues() and predictValues()
/// and is defined here to avoid repetition
bool ColumnAggregateFunction::tryFinalizeAggregateFunction(MutableColumnPtr *res_) const
{
if (const AggregateFunctionState *function_state = typeid_cast<const AggregateFunctionState *>(func.get()))
{
auto res = createView();
res->set(function_state->getNestedFunction());
res->data.assign(data.begin(), data.end());
*res_ = std::move(res);
return true;
}
MutableColumnPtr res = func->getReturnType()->createColumn();
res->reserve(data.size());
*res_ = std::move(res);
return false;
}
MutableColumnPtr ColumnAggregateFunction::convertToValues() const
{
/** If the aggregate function returns an unfinalized/unfinished state,
@ -65,23 +86,46 @@ MutableColumnPtr ColumnAggregateFunction::convertToValues() const
* AggregateFunction(quantileTiming(0.5), UInt64)
* into UInt16 - already finished result of `quantileTiming`.
*/
if (const AggregateFunctionState * function_state = typeid_cast<const AggregateFunctionState *>(func.get()))
/** The conversion logic is shared by convertToValues and predictValues
* in the corresponding part of both functions
*/
MutableColumnPtr res;
if (tryFinalizeAggregateFunction(&res))
{
auto res = createView();
res->set(function_state->getNestedFunction());
res->data.assign(data.begin(), data.end());
return res;
}
MutableColumnPtr res = func->getReturnType()->createColumn();
res->reserve(data.size());
for (auto val : data)
func->insertResultInto(val, *res);
return res;
}
MutableColumnPtr ColumnAggregateFunction::predictValues(Block & block, const ColumnNumbers & arguments, const Context & context) const
{
MutableColumnPtr res;
tryFinalizeAggregateFunction(&res);
auto ML_function = func.get();
if (ML_function)
{
size_t row_num = 0;
for (auto val : data)
{
ML_function->predictValues(val, *res, block, arguments, context);
++row_num;
}
}
else
{
throw Exception("Illegal aggregate function is passed",
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
}
return res;
}
void ColumnAggregateFunction::ensureOwnership()
{

View File

@ -10,6 +10,7 @@
#include <IO/WriteBuffer.h>
#include <IO/WriteHelpers.h>
#include <Functions/FunctionHelpers.h>
namespace DB
{
@ -117,6 +118,9 @@ public:
std::string getName() const override { return "AggregateFunction(" + func->getName() + ")"; }
const char * getFamilyName() const override { return "AggregateFunction"; }
bool tryFinalizeAggregateFunction(MutableColumnPtr* res_) const;
MutableColumnPtr predictValues(Block & block, const ColumnNumbers & arguments, const Context & context) const;
size_t size() const override
{
return getData().size();

View File

@ -7,6 +7,7 @@
#include <Common/Arena.h>
#include <Common/SipHash.h>
#include <Common/NaNUtils.h>
#include <Common/RadixSort.h>
#include <IO/WriteBuffer.h>
#include <IO/WriteHelpers.h>
#include <Columns/ColumnsCommon.h>
@ -18,7 +19,6 @@
#include <emmintrin.h>
#endif
namespace DB
{
@ -68,19 +68,41 @@ struct ColumnVector<T>::greater
bool operator()(size_t lhs, size_t rhs) const { return CompareHelper<T>::greater(parent.data[lhs], parent.data[rhs], nan_direction_hint); }
};
namespace
{
template <typename T>
struct ValueWithIndex
{
T value;
UInt32 index;
};
template <typename T>
struct RadixSortTraits : RadixSortNumTraits<T>
{
using Element = ValueWithIndex<T>;
static T & extractKey(Element & elem) { return elem.value; }
};
}
template <typename T>
void ColumnVector<T>::getPermutation(bool reverse, size_t limit, int nan_direction_hint, IColumn::Permutation & res) const
{
size_t s = data.size();
res.resize(s);
for (size_t i = 0; i < s; ++i)
res[i] = i;
if (s == 0)
return;
if (limit >= s)
limit = 0;
if (limit)
{
for (size_t i = 0; i < s; ++i)
res[i] = i;
if (reverse)
std::partial_sort(res.begin(), res.begin() + limit, res.end(), greater(*this, nan_direction_hint));
else
@ -88,6 +110,71 @@ void ColumnVector<T>::getPermutation(bool reverse, size_t limit, int nan_directi
}
else
{
/// A case for radix sort
if constexpr (std::is_arithmetic_v<T> && !std::is_same_v<T, UInt128>)
{
/// Thresholds on size. Lower threshold is arbitrary. Upper threshold is chosen by the type for histogram counters.
if (s >= 256 && s <= std::numeric_limits<UInt32>::max())
{
PaddedPODArray<ValueWithIndex<T>> pairs(s);
for (UInt32 i = 0; i < s; ++i)
pairs[i] = {data[i], i};
RadixSort<RadixSortTraits<T>>::executeLSD(pairs.data(), s);
/// Radix sort treats all NaNs as greater than all numbers.
/// If the user needs the opposite, we must move them accordingly.
size_t nans_to_move = 0;
if (std::is_floating_point_v<T> && nan_direction_hint < 0)
{
for (ssize_t i = s - 1; i >= 0; --i)
{
if (isNaN(pairs[i].value))
++nans_to_move;
else
break;
}
}
if (reverse)
{
if (nans_to_move)
{
for (size_t i = 0; i < s - nans_to_move; ++i)
res[i] = pairs[s - nans_to_move - 1 - i].index;
for (size_t i = s - nans_to_move; i < s; ++i)
res[i] = pairs[s - 1 - (i - (s - nans_to_move))].index;
}
else
{
for (size_t i = 0; i < s; ++i)
res[s - 1 - i] = pairs[i].index;
}
}
else
{
if (nans_to_move)
{
for (size_t i = 0; i < nans_to_move; ++i)
res[i] = pairs[i + s - nans_to_move].index;
for (size_t i = nans_to_move; i < s; ++i)
res[i] = pairs[i - nans_to_move].index;
}
else
{
for (size_t i = 0; i < s; ++i)
res[i] = pairs[i].index;
}
}
return;
}
}
/// Default sorting algorithm.
for (size_t i = 0; i < s; ++i)
res[i] = i;
if (reverse)
pdqsort(res.begin(), res.end(), greater(*this, nan_direction_hint));
else
@ -95,6 +182,7 @@ void ColumnVector<T>::getPermutation(bool reverse, size_t limit, int nan_directi
}
}
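To make the (value, index) pair trick above concrete, here is a minimal model that uses std::sort as a stand-in for RadixSort<...>::executeLSD (the assumption being that both produce ascending key order) and then reads the permutation out of the indices:
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>
struct ValueWithIndexSketch
{
    float value;
    uint32_t index;
};
int main()
{
    std::vector<float> data{3.5f, 1.0f, 2.5f};
    std::vector<ValueWithIndexSketch> pairs(data.size());
    for (uint32_t i = 0; i < data.size(); ++i)
        pairs[i] = {data[i], i};
    /// Stand-in for RadixSort<RadixSortTraits<T>>::executeLSD(pairs.data(), s)
    std::sort(pairs.begin(), pairs.end(), [](auto a, auto b) { return a.value < b.value; });
    for (const auto & p : pairs)
        std::printf("%u ", p.index); /// permutation: 1 2 0
    std::printf("\n");
}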
template <typename T>
const char * ColumnVector<T>::getFamilyName() const
{

View File

@ -2,7 +2,7 @@ include(${ClickHouse_SOURCE_DIR}/cmake/dbms_glob_sources.cmake)
add_headers_and_sources(clickhouse_common_config .)
add_library(clickhouse_common_config ${LINK_MODE} ${clickhouse_common_config_headers} ${clickhouse_common_config_sources})
add_library(clickhouse_common_config ${clickhouse_common_config_headers} ${clickhouse_common_config_sources})
target_link_libraries(clickhouse_common_config PUBLIC common PRIVATE clickhouse_common_zookeeper string_utils PUBLIC ${Poco_XML_LIBRARY} ${Poco_Util_LIBRARY} Threads::Threads)
target_include_directories(clickhouse_common_config PUBLIC ${DBMS_INCLUDE_DIR})

View File

@ -571,41 +571,41 @@ ConfigProcessor::LoadedConfig ConfigProcessor::loadConfigWithZooKeeperIncludes(
void ConfigProcessor::savePreprocessedConfig(const LoadedConfig & loaded_config, std::string preprocessed_dir)
{
if (preprocessed_path.empty())
try
{
auto new_path = loaded_config.config_path;
if (new_path.substr(0, main_config_path.size()) == main_config_path)
new_path.replace(0, main_config_path.size(), "");
std::replace(new_path.begin(), new_path.end(), '/', '_');
if (preprocessed_dir.empty())
if (preprocessed_path.empty())
{
if (!loaded_config.configuration->has("path"))
auto new_path = loaded_config.config_path;
if (new_path.substr(0, main_config_path.size()) == main_config_path)
new_path.replace(0, main_config_path.size(), "");
std::replace(new_path.begin(), new_path.end(), '/', '_');
if (preprocessed_dir.empty())
{
// Will use current directory
auto parent_path = Poco::Path(loaded_config.config_path).makeParent();
preprocessed_dir = parent_path.toString();
Poco::Path poco_new_path(new_path);
poco_new_path.setBaseName(poco_new_path.getBaseName() + PREPROCESSED_SUFFIX);
new_path = poco_new_path.toString();
if (!loaded_config.configuration->has("path"))
{
// Will use current directory
auto parent_path = Poco::Path(loaded_config.config_path).makeParent();
preprocessed_dir = parent_path.toString();
Poco::Path poco_new_path(new_path);
poco_new_path.setBaseName(poco_new_path.getBaseName() + PREPROCESSED_SUFFIX);
new_path = poco_new_path.toString();
}
else
{
preprocessed_dir = loaded_config.configuration->getString("path") + "/preprocessed_configs/";
}
}
else
{
preprocessed_dir = loaded_config.configuration->getString("path") + "/preprocessed_configs/";
preprocessed_dir += "/preprocessed_configs/";
}
}
else
{
preprocessed_dir += "/preprocessed_configs/";
}
preprocessed_path = preprocessed_dir + new_path;
auto preprocessed_path_parent = Poco::Path(preprocessed_path).makeParent();
if (!preprocessed_path_parent.toString().empty())
Poco::File(preprocessed_path_parent).createDirectories();
}
try
{
preprocessed_path = preprocessed_dir + new_path;
auto preprocessed_path_parent = Poco::Path(preprocessed_path).makeParent();
if (!preprocessed_path_parent.toString().empty())
Poco::File(preprocessed_path_parent).createDirectories();
}
DOMWriter().writeNode(preprocessed_path, loaded_config.preprocessed_xml);
}
catch (Poco::Exception & e)

318
dbms/src/Common/CpuId.h Normal file
View File

@ -0,0 +1,318 @@
#pragma once
#include <Core/Types.h>
#if defined(__x86_64__) || defined(__i386__)
#include <cpuid.h>
#endif
#include <cstring>
namespace DB
{
namespace Cpu
{
#if defined(__x86_64__) || defined(__i386__)
inline UInt64 _xgetbv(UInt32 xcr) noexcept
{
UInt32 eax;
UInt32 edx;
__asm__ volatile(
"xgetbv"
: "=a"(eax), "=d"(edx)
: "c"(xcr));
return (static_cast<UInt64>(edx) << 32) | eax;
}
#endif
inline bool cpuid(UInt32 op, UInt32 sub_op, UInt32 * res) noexcept
{
#if defined(__x86_64__) || defined(__i386__)
__cpuid_count(op, sub_op, res[0], res[1], res[2], res[3]);
return true;
#else
(void)op;
(void)sub_op;
memset(res, 0, 4 * sizeof(*res));
return false;
#endif
}
inline bool cpuid(UInt32 op, UInt32 * res) noexcept
{
#if defined(__x86_64__) || defined(__i386__)
__cpuid(op, res[0], res[1], res[2], res[3]);
return true;
#else
(void)op;
memset(res, 0, 4 * sizeof(*res));
return false;
#endif
}
#define CPU_ID_ENUMERATE(OP) \
OP(SSE) \
OP(SSE2) \
OP(SSE3) \
OP(SSSE3) \
OP(SSE41) \
OP(SSE42) \
OP(F16C) \
OP(POPCNT) \
OP(BMI1) \
OP(BMI2) \
OP(PCLMUL) \
OP(AES) \
OP(AVX) \
OP(FMA) \
OP(AVX2) \
OP(AVX512F) \
OP(AVX512DQ) \
OP(AVX512IFMA) \
OP(AVX512PF) \
OP(AVX512ER) \
OP(AVX512CD) \
OP(AVX512BW) \
OP(AVX512VL) \
OP(AVX512VBMI) \
OP(PREFETCHWT1) \
OP(SHA) \
OP(ADX) \
OP(RDRAND) \
OP(RDSEED) \
OP(PCOMMIT) \
OP(RDTSCP) \
OP(CLFLUSHOPT) \
OP(CLWB) \
OP(XSAVE) \
OP(OSXSAVE)
union CpuInfo
{
UInt32 info[4];
struct
{
UInt32 eax;
UInt32 ebx;
UInt32 ecx;
UInt32 edx;
};
inline CpuInfo(UInt32 op) noexcept { cpuid(op, info); }
inline CpuInfo(UInt32 op, UInt32 sub_op) noexcept { cpuid(op, sub_op, info); }
};
#define DEF_NAME(X) inline bool have##X() noexcept;
CPU_ID_ENUMERATE(DEF_NAME)
#undef DEF_NAME
bool haveRDTSCP() noexcept
{
return (CpuInfo(0x80000001).edx >> 27) & 1u;
}
bool haveSSE() noexcept
{
return (CpuInfo(0x1).edx >> 25) & 1u;
}
bool haveSSE2() noexcept
{
return (CpuInfo(0x1).edx >> 26) & 1u;
}
bool haveSSE3() noexcept
{
return CpuInfo(0x1).ecx & 1u;
}
bool havePCLMUL() noexcept
{
return (CpuInfo(0x1).ecx >> 1) & 1u;
}
bool haveSSSE3() noexcept
{
return (CpuInfo(0x1).ecx >> 9) & 1u;
}
bool haveSSE41() noexcept
{
return (CpuInfo(0x1).ecx >> 19) & 1u;
}
bool haveSSE42() noexcept
{
return (CpuInfo(0x1).ecx >> 20) & 1u;
}
bool haveF16C() noexcept
{
return (CpuInfo(0x1).ecx >> 29) & 1u;
}
bool havePOPCNT() noexcept
{
return (CpuInfo(0x1).ecx >> 23) & 1u;
}
bool haveAES() noexcept
{
return (CpuInfo(0x1).ecx >> 25) & 1u;
}
bool haveXSAVE() noexcept
{
return (CpuInfo(0x1).ecx >> 26) & 1u;
}
bool haveOSXSAVE() noexcept
{
return (CpuInfo(0x1).ecx >> 27) & 1u;
}
bool haveAVX() noexcept
{
#if defined(__x86_64__) || defined(__i386__)
// http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf
// https://bugs.chromium.org/p/chromium/issues/detail?id=375968
return haveOSXSAVE() // implies haveXSAVE()
&& (_xgetbv(0) & 6u) == 6u // XMM state and YMM state are enabled by OS
&& ((CpuInfo(0x1).ecx >> 28) & 1u); // AVX bit
#else
return false;
#endif
}
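A note on the check above: bits 1 and 2 of XCR0 (mask 6u) indicate that the OS saves XMM and YMM state on context switches; the AVX CPUID bit alone is not sufficient. A tiny mask sketch, with an illustrative XCR0 value:
#include <cstdint>
#include <cstdio>
int main()
{
    uint64_t xcr0 = 0x7; /// illustrative XCR0 value: x87 | SSE | AVX state enabled
    bool ymm_enabled = (xcr0 & 6u) == 6u; /// bits 1 (XMM) and 2 (YMM) both set
    std::printf("YMM state enabled by OS: %d\n", int(ymm_enabled));
}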
bool haveFMA() noexcept
{
return haveAVX() && ((CpuInfo(0x1).ecx >> 12) & 1u);
}
bool haveAVX2() noexcept
{
return haveAVX() && ((CpuInfo(0x7, 0).ebx >> 5) & 1u);
}
bool haveBMI1() noexcept
{
return (CpuInfo(0x7, 0).ebx >> 3) & 1u;
}
bool haveBMI2() noexcept
{
return (CpuInfo(0x7, 0).ebx >> 8) & 1u;
}
bool haveAVX512F() noexcept
{
#if defined(__x86_64__) || defined(__i386__)
// https://software.intel.com/en-us/articles/how-to-detect-knl-instruction-support
return haveOSXSAVE() // implies haveXSAVE()
&& (_xgetbv(0) & 6u) == 6u // XMM state and YMM state are enabled by OS
&& ((_xgetbv(0) >> 5) & 7u) == 7u // ZMM state is enabled by OS
&& CpuInfo(0x0).eax >= 0x7 // leaf 7 is present
&& ((CpuInfo(0x7).ebx >> 16) & 1u); // AVX512F bit
#else
return false;
#endif
}
bool haveAVX512DQ() noexcept
{
return haveAVX512F() && ((CpuInfo(0x7, 0).ebx >> 17) & 1u);
}
bool haveRDSEED() noexcept
{
return CpuInfo(0x0).eax >= 0x7 && ((CpuInfo(0x7, 0).ebx >> 18) & 1u);
}
bool haveADX() noexcept
{
return CpuInfo(0x0).eax >= 0x7 && ((CpuInfo(0x7, 0).ebx >> 19) & 1u);
}
bool haveAVX512IFMA() noexcept
{
return haveAVX512F() && ((CpuInfo(0x7, 0).ebx >> 21) & 1u);
}
bool havePCOMMIT() noexcept
{
return CpuInfo(0x0).eax >= 0x7 && ((CpuInfo(0x7, 0).ebx >> 22) & 1u);
}
bool haveCLFLUSHOPT() noexcept
{
return CpuInfo(0x0).eax >= 0x7 && ((CpuInfo(0x7, 0).ebx >> 23) & 1u);
}
bool haveCLWB() noexcept
{
return CpuInfo(0x0).eax >= 0x7 && ((CpuInfo(0x7, 0).ebx >> 24) & 1u);
}
bool haveAVX512PF() noexcept
{
return haveAVX512F() && ((CpuInfo(0x7, 0).ebx >> 26) & 1u);
}
bool haveAVX512ER() noexcept
{
return haveAVX512F() && ((CpuInfo(0x7, 0).ebx >> 27) & 1u);
}
bool haveAVX512CD() noexcept
{
return haveAVX512F() && ((CpuInfo(0x7, 0).ebx >> 28) & 1u);
}
bool haveSHA() noexcept
{
return CpuInfo(0x0).eax >= 0x7 && ((CpuInfo(0x7, 0).ebx >> 29) & 1u);
}
bool haveAVX512BW() noexcept
{
return haveAVX512F() && ((CpuInfo(0x7, 0).ebx >> 30) & 1u);
}
bool haveAVX512VL() noexcept
{
return haveAVX512F() && ((CpuInfo(0x7, 0).ebx >> 31) & 1u);
}
bool havePREFETCHWT1() noexcept
{
return CpuInfo(0x0).eax >= 0x7 && ((CpuInfo(0x7, 0).ecx >> 0) & 1u);
}
bool haveAVX512VBMI() noexcept
{
return haveAVX512F() && ((CpuInfo(0x7, 0).ecx >> 1) & 1u);
}
bool haveRDRAND() noexcept
{
return CpuInfo(0x0).eax >= 0x7 && ((CpuInfo(0x1).ecx >> 30) & 1u);
}
struct CpuFlagsCache
{
#define DEF_NAME(X) static inline bool have_##X = have##X();
CPU_ID_ENUMERATE(DEF_NAME)
#undef DEF_NAME
};
}
}
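A hedged usage sketch for the header above: the flags in CpuFlagsCache are evaluated once at static initialization, so runtime dispatch reduces to a plain boolean test. The include path is assumed from the in-tree location of the file:
#include <cstdio>
#include <Common/CpuId.h> /// assumed in-tree include path for the header above
int main()
{
    using namespace DB::Cpu;
    std::printf("SSE4.2: %d  AVX2: %d  AVX512F: %d\n",
        int(CpuFlagsCache::have_SSE42),
        int(CpuFlagsCache::have_AVX2),
        int(CpuFlagsCache::have_AVX512F));
}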

View File

@ -426,6 +426,7 @@ namespace ErrorCodes
extern const int BROTLI_WRITE_FAILED = 449;
extern const int BAD_TTL_EXPRESSION = 450;
extern const int BAD_TTL_FILE = 451;
extern const int SETTING_CONSTRAINT_VIOLATION = 452;
extern const int KEEPER_EXCEPTION = 999;
extern const int POCO_EXCEPTION = 1000;

View File

@ -1,7 +1,6 @@
#include <Common/Exception.h>
#include <Common/OptimizedRegularExpression.h>
#define MIN_LENGTH_FOR_STRSTR 3
#define MAX_SUBPATTERNS 5
@ -211,20 +210,18 @@ void OptimizedRegularExpressionImpl<thread_safe>::analyze(
{
if (!has_alternative_on_depth_0)
{
/** We choose the non-alternative substring of the maximum length, among the prefixes,
* or a non-alternative substring of maximum length.
*/
/// We choose the non-alternative substring of the maximum length for first search.
/// Tuning for typical usage domain
auto tuning_strings_condition = [](const std::string & str)
{
return str != "://" && str != "http://" && str != "www" && str != "Windows ";
};
size_t max_length = 0;
Substrings::const_iterator candidate_it = trivial_substrings.begin();
for (Substrings::const_iterator it = trivial_substrings.begin(); it != trivial_substrings.end(); ++it)
{
if (((it->second == 0 && candidate_it->second != 0)
|| ((it->second == 0) == (candidate_it->second == 0) && it->first.size() > max_length))
/// Tuning for typical usage domain
&& (it->first.size() > strlen("://") || strncmp(it->first.data(), "://", strlen("://")))
&& (it->first.size() > strlen("http://") || strncmp(it->first.data(), "http", strlen("http")))
&& (it->first.size() > strlen("www.") || strncmp(it->first.data(), "www", strlen("www")))
&& (it->first.size() > strlen("Windows ") || strncmp(it->first.data(), "Windows ", strlen("Windows "))))
if (it->first.size() > max_length && tuning_strings_condition(it->first))
{
max_length = it->first.size();
candidate_it = it;

View File

@ -37,8 +37,8 @@ class RWLockImpl::LockHolderImpl
RWLock parent;
GroupsContainer::iterator it_group;
ClientsContainer::iterator it_client;
ThreadToHolder::iterator it_thread;
QueryIdToHolder::iterator it_query;
ThreadToHolder::key_type thread_id;
QueryIdToHolder::key_type query_id;
CurrentMetrics::Increment active_client_increment;
LockHolderImpl(RWLock && parent, GroupsContainer::iterator it_group, ClientsContainer::iterator it_client);
@ -122,24 +122,16 @@ RWLockImpl::LockHolder RWLockImpl::getLock(RWLockImpl::Type type, const String &
LockHolder res(new LockHolderImpl(shared_from_this(), it_group, it_client));
/// Wait a notification until we will be the only in the group.
it_group->cv.wait(lock, [&] () { return it_group == queue.begin(); });
/// Insert myself (weak_ptr to the holder) to threads set to implement recursive lock
it_thread = thread_to_holder.emplace(this_thread_id, res).first;
res->it_thread = it_thread;
thread_to_holder.emplace(this_thread_id, res);
res->thread_id = this_thread_id;
if (query_id != RWLockImpl::NO_QUERY)
it_query = query_id_to_holder.emplace(query_id, res).first;
res->it_query = it_query;
/// We are first, we should not wait anything
/// If we are not the first client in the group, a notification could be already sent
if (it_group == queue.begin())
{
finalize_metrics();
return res;
}
/// Wait for a notification
it_group->cv.wait(lock, [&] () { return it_group == queue.begin(); });
query_id_to_holder.emplace(query_id, res);
res->query_id = query_id;
finalize_metrics();
return res;
@ -151,10 +143,8 @@ RWLockImpl::LockHolderImpl::~LockHolderImpl()
std::unique_lock lock(parent->mutex);
/// Remove weak_ptrs to the holder, since there are no owners of the current lock
parent->thread_to_holder.erase(it_thread);
if (it_query != parent->query_id_to_holder.end())
parent->query_id_to_holder.erase(it_query);
parent->thread_to_holder.erase(thread_id);
parent->query_id_to_holder.erase(query_id);
/// Removes myself from client list of our group
it_group->clients.erase(it_client);

View File

@ -1,6 +1,7 @@
#pragma once
#include <Core/Types.h>
#include <boost/core/noncopyable.hpp>
#include <list>
#include <vector>
#include <mutex>

View File

@ -1,9 +1,12 @@
#pragma once
#include <string.h>
#if !defined(__APPLE__) && !defined(__FreeBSD__)
#include <malloc.h>
#endif
#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <cstdint>
#include <type_traits>
@ -64,15 +67,15 @@ struct RadixSortFloatTransform
};
template <typename _Element, typename _Key = _Element>
template <typename TElement>
struct RadixSortFloatTraits
{
using Element = _Element; /// The type of the element. It can be a structure with a key and some other payload. Or just a key.
using Key = _Key; /// The key to sort.
using Element = TElement; /// The type of the element. It can be a structure with a key and some other payload. Or just a key.
using Key = Element; /// The key to sort by.
using CountType = uint32_t; /// Type for calculating histograms. In the case of a known small number of elements, it can be less than size_t.
/// The type to which the key is transformed to do bit operations. This UInt is the same size as the key.
using KeyBits = std::conditional_t<sizeof(_Key) == 8, uint64_t, uint32_t>;
using KeyBits = std::conditional_t<sizeof(Key) == 8, uint64_t, uint32_t>;
static constexpr size_t PART_SIZE_BITS = 8; /// The size of the key piece, in bits, processed in one pass (one reshuffle of the array).
@ -85,12 +88,13 @@ struct RadixSortFloatTraits
using Allocator = RadixSortMallocAllocator;
/// The function to get the key from an array element.
static Key & extractKey(Element & elem)
static Key & extractKey(Element & elem) { return elem; }
/// Used when a fallback to comparison-based sorting is needed.
/// TODO: Correct handling of NaNs, NULLs, etc
static bool less(Key x, Key y)
{
if constexpr (std::is_same_v<Element, Key>)
return elem;
else
return *reinterpret_cast<Key *>(&elem);
return x < y;
}
};
@ -105,6 +109,29 @@ struct RadixSortIdentityTransform
};
template <typename TElement>
struct RadixSortUIntTraits
{
using Element = TElement;
using Key = Element;
using CountType = uint32_t;
using KeyBits = Key;
static constexpr size_t PART_SIZE_BITS = 8;
using Transform = RadixSortIdentityTransform<KeyBits>;
using Allocator = RadixSortMallocAllocator;
static Key & extractKey(Element & elem) { return elem; }
static bool less(Key x, Key y)
{
return x < y;
}
};
template <typename KeyBits>
struct RadixSortSignedTransform
{
@ -115,53 +142,37 @@ struct RadixSortSignedTransform
};
template <typename _Element, typename _Key = _Element>
struct RadixSortUIntTraits
{
using Element = _Element;
using Key = _Key;
using CountType = uint32_t;
using KeyBits = _Key;
static constexpr size_t PART_SIZE_BITS = 8;
using Transform = RadixSortIdentityTransform<KeyBits>;
using Allocator = RadixSortMallocAllocator;
/// The function to get the key from an array element.
static Key & extractKey(Element & elem)
{
if constexpr (std::is_same_v<Element, Key>)
return elem;
else
return *reinterpret_cast<Key *>(&elem);
}
};
template <typename _Element, typename _Key = _Element>
template <typename TElement>
struct RadixSortIntTraits
{
using Element = _Element;
using Key = _Key;
using Element = TElement;
using Key = Element;
using CountType = uint32_t;
using KeyBits = std::make_unsigned_t<_Key>;
using KeyBits = std::make_unsigned_t<Key>;
static constexpr size_t PART_SIZE_BITS = 8;
using Transform = RadixSortSignedTransform<KeyBits>;
using Allocator = RadixSortMallocAllocator;
/// The function to get the key from an array element.
static Key & extractKey(Element & elem)
static Key & extractKey(Element & elem) { return elem; }
static bool less(Key x, Key y)
{
if constexpr (std::is_same_v<Element, Key>)
return elem;
else
return *reinterpret_cast<Key *>(&elem);
return x < y;
}
};
template <typename T>
using RadixSortNumTraits =
std::conditional_t<std::is_integral_v<T>,
std::conditional_t<std::is_unsigned_v<T>,
RadixSortUIntTraits<T>,
RadixSortIntTraits<T>>,
RadixSortFloatTraits<T>>;
template <typename Traits>
struct RadixSort
{
@ -171,6 +182,9 @@ private:
using CountType = typename Traits::CountType;
using KeyBits = typename Traits::KeyBits;
// Use insertion sort if the size of the array is less than or equal to this threshold
static constexpr size_t INSERTION_SORT_THRESHOLD = 64;
static constexpr size_t HISTOGRAM_SIZE = 1 << Traits::PART_SIZE_BITS;
static constexpr size_t PART_BITMASK = HISTOGRAM_SIZE - 1;
static constexpr size_t KEY_BITS = sizeof(Key) * 8;
@ -187,8 +201,113 @@ private:
static KeyBits keyToBits(Key x) { return ext::bit_cast<KeyBits>(x); }
static Key bitsToKey(KeyBits x) { return ext::bit_cast<Key>(x); }
static void insertionSortInternal(Element *arr, size_t size)
{
Element * end = arr + size;
for (Element * i = arr + 1; i < end; ++i)
{
if (Traits::less(Traits::extractKey(*i), Traits::extractKey(*(i - 1))))
{
Element * j;
Element tmp = *i;
*i = *(i - 1);
for (j = i - 1; j > arr && Traits::less(Traits::extractKey(tmp), Traits::extractKey(*(j - 1))); --j)
*j = *(j - 1);
*j = tmp;
}
}
}
/* Main MSD radix sort subroutine.
* Puts elements into buckets based on the PASS-th digit, then recursively calls insertion sort or itself on the buckets.
*/
template <size_t PASS>
static inline void radixSortMSDInternal(Element * arr, size_t size, size_t limit)
{
Element * last_list[HISTOGRAM_SIZE + 1];
Element ** last = last_list + 1;
size_t count[HISTOGRAM_SIZE] = {0};
for (Element * i = arr; i < arr + size; ++i)
++count[getPart(PASS, *i)];
last_list[0] = last_list[1] = arr;
size_t buckets_for_recursion = HISTOGRAM_SIZE;
Element * finish = arr + size;
for (size_t i = 1; i < HISTOGRAM_SIZE; ++i)
{
last[i] = last[i - 1] + count[i - 1];
if (last[i] >= arr + limit)
{
buckets_for_recursion = i;
finish = last[i];
}
}
/* At this point, we have the following variables:
* count[i] is the size of i-th bucket
* last[i] is a pointer to the beginning of i-th bucket, last[-1] == last[0]
* buckets_for_recursion is the number of buckets that should be sorted, the last of them only partially
* finish is a pointer to the end of the first buckets_for_recursion buckets
*/
// Scatter array elements to buckets until the first buckets_for_recursion buckets are full
for (size_t i = 0; i < buckets_for_recursion; ++i)
{
Element * end = last[i - 1] + count[i];
if (end == finish)
{
last[i] = end;
break;
}
while (last[i] != end)
{
Element swapper = *last[i];
KeyBits tag = getPart(PASS, swapper);
if (tag != i)
{
do
{
std::swap(swapper, *last[tag]++);
} while ((tag = getPart(PASS, swapper)) != i);
*last[i] = swapper;
}
++last[i];
}
}
if constexpr (PASS > 0)
{
// Recursively sort buckets, except the last one
for (size_t i = 0; i < buckets_for_recursion - 1; ++i)
{
Element * start = last[i - 1];
size_t subsize = last[i] - last[i - 1];
radixSortMSDInternalHelper<PASS - 1>(start, subsize, subsize);
}
// Sort last necessary bucket with limit
Element * start = last[buckets_for_recursion - 2];
size_t subsize = last[buckets_for_recursion - 1] - last[buckets_for_recursion - 2];
size_t sublimit = limit - (last[buckets_for_recursion - 1] - arr);
radixSortMSDInternalHelper<PASS - 1>(start, subsize, sublimit);
}
}
// A helper to choose sorting algorithm based on array length
template <size_t PASS>
static inline void radixSortMSDInternalHelper(Element * arr, size_t size, size_t limit)
{
if (size <= INSERTION_SORT_THRESHOLD)
insertionSortInternal(arr, size);
else
radixSortMSDInternal<PASS>(arr, size, limit);
}
public:
static void execute(Element * arr, size_t size)
/// Least significant digit radix sort (stable)
static void executeLSD(Element * arr, size_t size)
{
/// If the array is smaller than 256, then it is better to use another algorithm.
@ -209,8 +328,8 @@ public:
if (!Traits::Transform::transform_is_simple)
Traits::extractKey(arr[i]) = bitsToKey(Traits::Transform::forward(keyToBits(Traits::extractKey(arr[i]))));
for (size_t j = 0; j < NUM_PASSES; ++j)
++histograms[j * HISTOGRAM_SIZE + getPart(j, keyToBits(Traits::extractKey(arr[i])))];
for (size_t pass = 0; pass < NUM_PASSES; ++pass)
++histograms[pass * HISTOGRAM_SIZE + getPart(pass, keyToBits(Traits::extractKey(arr[i])))];
}
{
@ -219,31 +338,31 @@ public:
for (size_t i = 0; i < HISTOGRAM_SIZE; ++i)
{
for (size_t j = 0; j < NUM_PASSES; ++j)
for (size_t pass = 0; pass < NUM_PASSES; ++pass)
{
size_t tmp = histograms[j * HISTOGRAM_SIZE + i] + sums[j];
histograms[j * HISTOGRAM_SIZE + i] = sums[j] - 1;
sums[j] = tmp;
size_t tmp = histograms[pass * HISTOGRAM_SIZE + i] + sums[pass];
histograms[pass * HISTOGRAM_SIZE + i] = sums[pass] - 1;
sums[pass] = tmp;
}
}
}
/// Move the elements starting from the least significant piece of the key, doing one pass per piece.
for (size_t j = 0; j < NUM_PASSES; ++j)
for (size_t pass = 0; pass < NUM_PASSES; ++pass)
{
Element * writer = j % 2 ? arr : swap_buffer;
Element * reader = j % 2 ? swap_buffer : arr;
Element * writer = pass % 2 ? arr : swap_buffer;
Element * reader = pass % 2 ? swap_buffer : arr;
for (size_t i = 0; i < size; ++i)
{
size_t pos = getPart(j, keyToBits(Traits::extractKey(reader[i])));
size_t pos = getPart(pass, keyToBits(Traits::extractKey(reader[i])));
/// Place the element at the next free position.
auto & dest = writer[++histograms[j * HISTOGRAM_SIZE + pos]];
auto & dest = writer[++histograms[pass * HISTOGRAM_SIZE + pos]];
dest = reader[i];
/// On the last pass, we do the reverse transformation.
if (!Traits::Transform::transform_is_simple && j == NUM_PASSES - 1)
if (!Traits::Transform::transform_is_simple && pass == NUM_PASSES - 1)
Traits::extractKey(dest) = bitsToKey(Traits::Transform::backward(keyToBits(Traits::extractKey(reader[i]))));
}
}
@ -255,40 +374,53 @@ public:
allocator.deallocate(swap_buffer, size * sizeof(Element));
}
/* Most significant digit radix sort
* Usually slower than LSD and is not stable, but allows partial sorting
*
* Based on https://github.com/voutcn/kxsort, license:
* The MIT License
* Copyright (c) 2016 Dinghua Li <voutcn@gmail.com>
*
* Permission is hereby granted, free of charge, to any person obtaining
* a copy of this software and associated documentation files (the
* "Software"), to deal in the Software without restriction, including
* without limitation the rights to use, copy, modify, merge, publish,
* distribute, sublicense, and/or sell copies of the Software, and to
* permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice shall be
* included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
static void executeMSD(Element * arr, size_t size, size_t limit)
{
limit = std::min(limit, size);
radixSortMSDInternalHelper<NUM_PASSES - 1>(arr, size, limit);
}
};
/// Helper functions for numeric types.
/// Use RadixSort with custom traits for complex types instead.
template <typename T>
std::enable_if_t<std::is_unsigned_v<T> && std::is_integral_v<T>, void>
radixSort(T * arr, size_t size)
void radixSortLSD(T *arr, size_t size)
{
return RadixSort<RadixSortUIntTraits<T>>::execute(arr, size);
RadixSort<RadixSortNumTraits<T>>::executeLSD(arr, size);
}
template <typename T>
std::enable_if_t<std::is_signed_v<T> && std::is_integral_v<T>, void>
radixSort(T * arr, size_t size)
void radixSortMSD(T *arr, size_t size, size_t limit)
{
return RadixSort<RadixSortIntTraits<T>>::execute(arr, size);
}
template <typename T>
std::enable_if_t<std::is_floating_point_v<T>, void>
radixSort(T * arr, size_t size)
{
return RadixSort<RadixSortFloatTraits<T>>::execute(arr, size);
}
template <typename _Element, typename _Key>
std::enable_if_t<std::is_integral_v<_Key>, void>
radixSort(_Element * arr, size_t size)
{
return RadixSort<RadixSortUIntTraits<_Element, _Key>>::execute(arr, size);
}
template <typename _Element, typename _Key>
std::enable_if_t<std::is_floating_point_v<_Key>, void>
radixSort(_Element * arr, size_t size)
{
return RadixSort<RadixSortFloatTraits<_Element, _Key>>::execute(arr, size);
RadixSort<RadixSortNumTraits<T>>::executeMSD(arr, size, limit);
}
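
A usage sketch of the two entry points (the in-tree include path is assumed): `radixSortLSD` does a stable full sort, while `radixSortMSD` with a `limit`, as the recursion above suggests, only guarantees that the first `limit` positions end up holding the smallest elements in order:

#include <Common/RadixSort.h>
#include <cstdint>
#include <iostream>

int main()
{
    uint32_t a[]{5, 3, 9, 1, 7, 2, 8, 6};
    radixSortLSD(a, 8);     /// stable full sort: 1 2 3 5 6 7 8 9
    uint32_t b[]{5, 3, 9, 1, 7, 2, 8, 6};
    radixSortMSD(b, 8, 3);  /// partial sort: the first 3 positions hold 1 2 3
    for (uint32_t x : a)
        std::cout << x << ' ';
    std::cout << '\n';
    for (int i = 0; i < 3; ++i)
        std::cout << b[i] << ' ';
    std::cout << '\n';
}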

View File

@ -0,0 +1,17 @@
#pragma once
#include <Core/Field.h>
namespace DB
{
struct SettingChange
{
String name;
Field value;
};
using SettingsChanges = std::vector<SettingChange>;
}
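
`SettingsChanges` is deliberately just a flat list of name/value pairs; consumers iterate it and dispatch by name. A minimal sketch with standard types standing in for `String`/`Field`, and a hypothetical `applyChange` helper in place of the real `Settings::set`:

#include <iostream>
#include <string>
#include <vector>

// Stand-ins for DB::String / DB::Field from the header above.
struct SettingChange
{
    std::string name;
    std::string value;
};
using SettingsChanges = std::vector<SettingChange>;

// Hypothetical application helper: real code would call Settings::set(name, value).
void applyChange(const SettingChange & change)
{
    std::cout << "SET " << change.name << " = " << change.value << '\n';
}

int main()
{
    SettingsChanges changes{{"max_threads", "8"}, {"readonly", "2"}};
    for (const auto & change : changes)
        applyChange(change);
}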

View File

@ -1,6 +1,7 @@
#pragma once
#include <Common/UTF8Helpers.h>
#include <Core/Defines.h>
#include <ext/range.h>
#include <Poco/UTF8Encoding.h>
#include <Poco/Unicode.h>

View File

@ -5,5 +5,5 @@ include(${ClickHouse_SOURCE_DIR}/cmake/dbms_glob_sources.cmake)
add_headers_and_sources(clickhouse_common_stringutils .)
add_library(string_utils ${LINK_MODE} ${clickhouse_common_stringutils_headers} ${clickhouse_common_stringutils_sources})
add_library(string_utils ${clickhouse_common_stringutils_headers} ${clickhouse_common_stringutils_sources})
target_include_directories (string_utils PRIVATE ${DBMS_INCLUDE_DIR})

View File

@ -2,7 +2,7 @@ include(${ClickHouse_SOURCE_DIR}/cmake/dbms_glob_sources.cmake)
add_headers_and_sources(clickhouse_common_zookeeper .)
add_library(clickhouse_common_zookeeper ${LINK_MODE} ${clickhouse_common_zookeeper_headers} ${clickhouse_common_zookeeper_sources})
add_library(clickhouse_common_zookeeper ${clickhouse_common_zookeeper_headers} ${clickhouse_common_zookeeper_sources})
target_link_libraries (clickhouse_common_zookeeper PUBLIC clickhouse_common_io common PRIVATE string_utils PUBLIC ${Poco_Util_LIBRARY} Threads::Threads)
target_include_directories(clickhouse_common_zookeeper PUBLIC ${DBMS_INCLUDE_DIR})

View File

@ -25,6 +25,7 @@
#cmakedefine01 USE_BROTLI
#cmakedefine01 USE_SSL
#cmakedefine01 USE_HYPERSCAN
#cmakedefine01 USE_SIMDJSON
#cmakedefine01 USE_LFALLOC
#cmakedefine01 USE_LFALLOC_RANDOM_HINT

View File

@ -16,7 +16,7 @@ void NO_INLINE sort1(Key * data, size_t size)
void NO_INLINE sort2(Key * data, size_t size)
{
radixSort(data, size);
radixSortLSD(data, size);
}
void NO_INLINE sort3(Key * data, size_t size)

View File

@ -1,14 +1,18 @@
include(${ClickHouse_SOURCE_DIR}/cmake/dbms_glob_sources.cmake)
add_headers_and_sources(clickhouse_compression .)
add_library(clickhouse_compression ${LINK_MODE} ${clickhouse_compression_headers} ${clickhouse_compression_sources})
target_link_libraries(clickhouse_compression PRIVATE clickhouse_parsers clickhouse_common_io ${ZSTD_LIBRARY} ${LZ4_LIBRARY} ${CITYHASH_LIBRARIES})
add_library(clickhouse_compression ${clickhouse_compression_headers} ${clickhouse_compression_sources})
target_link_libraries(clickhouse_compression PRIVATE clickhouse_parsers clickhouse_common_io ${LZ4_LIBRARY} ${CITYHASH_LIBRARIES})
if(ZSTD_LIBRARY)
target_link_libraries(clickhouse_compression PRIVATE ${ZSTD_LIBRARY})
endif()
target_include_directories(clickhouse_compression PUBLIC ${DBMS_INCLUDE_DIR})
target_include_directories(clickhouse_compression SYSTEM PUBLIC ${PCG_RANDOM_INCLUDE_DIR})
if (NOT USE_INTERNAL_LZ4_LIBRARY)
target_include_directories(clickhouse_compression SYSTEM BEFORE PRIVATE ${LZ4_INCLUDE_DIR})
endif ()
if (NOT USE_INTERNAL_ZSTD_LIBRARY)
if (NOT USE_INTERNAL_ZSTD_LIBRARY AND ZSTD_INCLUDE_DIR)
target_include_directories(clickhouse_compression SYSTEM BEFORE PRIVATE ${ZSTD_INCLUDE_DIR})
endif ()

View File

@ -22,7 +22,6 @@
#include <arm_neon.h>
#endif
namespace LZ4
{
@ -41,6 +40,10 @@ inline void copy8(UInt8 * dst, const UInt8 * src)
inline void wildCopy8(UInt8 * dst, const UInt8 * src, UInt8 * dst_end)
{
/// Unrolling with clang causes a >10% performance degradation.
#if defined(__clang__)
#pragma nounroll
#endif
do
{
copy8(dst, src);
@ -204,7 +207,6 @@ inline void copyOverlap8Shuffle(UInt8 * op, const UInt8 *& match, const size_t o
#endif
template <> void inline copy<8>(UInt8 * dst, const UInt8 * src) { copy8(dst, src); }
template <> void inline wildCopy<8>(UInt8 * dst, const UInt8 * src, UInt8 * dst_end) { wildCopy8(dst, src, dst_end); }
template <> void inline copyOverlap<8, false>(UInt8 * op, const UInt8 *& match, const size_t offset) { copyOverlap8(op, match, offset); }
@ -221,10 +223,12 @@ inline void copy16(UInt8 * dst, const UInt8 * src)
#endif
}
inline void wildCopy16(UInt8 * dst, const UInt8 * src, UInt8 * dst_end)
{
/// Unrolling with clang causes a >10% performance degradation.
#if defined(__clang__)
#pragma nounroll
#endif
do
{
copy16(dst, src);
@ -342,8 +346,73 @@ template <> void inline copyOverlap<16, false>(UInt8 * op, const UInt8 *& match,
template <> void inline copyOverlap<16, true>(UInt8 * op, const UInt8 *& match, const size_t offset) { copyOverlap16Shuffle(op, match, offset); }
/// See also https://stackoverflow.com/a/30669632
inline void copy32(UInt8 * dst, const UInt8 * src)
{
/// There was an AVX version here, but mixing it with SSE instructions caused a big slowdown.
#if defined(__SSE2__)
_mm_storeu_si128(reinterpret_cast<__m128i *>(dst),
_mm_loadu_si128(reinterpret_cast<const __m128i *>(src)));
_mm_storeu_si128(reinterpret_cast<__m128i *>(dst + 16),
_mm_loadu_si128(reinterpret_cast<const __m128i *>(src + 16)));
#else
memcpy(dst, src, 16);
memcpy(dst + 16, src + 16, 16);
#endif
}
inline void wildCopy32(UInt8 * dst, const UInt8 * src, UInt8 * dst_end)
{
/// Unrolling with clang causes a >10% performance degradation.
#if defined(__clang__)
#pragma nounroll
#endif
do
{
copy32(dst, src);
dst += 32;
src += 32;
} while (dst < dst_end);
}
inline void copyOverlap32(UInt8 * op, const UInt8 *& match, const size_t offset)
{
/// 4 % n.
static constexpr int shift1[]
= { 0, 1, 2, 1, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4 };
/// 8 % n - 4 % n
static constexpr int shift2[]
= { 0, 0, 0, 1, 0, -1, -2, -3, -4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4 };
/// 16 % n - 8 % n
static constexpr int shift3[]
= { 0, 0, 0, -1, 0, -2, 2, 1, 8, -1, -2, -3, -4, -5, -6, -7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8 };
/// 32 % n - 16 % n
static constexpr int shift4[]
= { 0, 0, 0, 1, 0, 1, -2, 2, 0, -2, -4, 5, 4, 3, 2, 1, 0, -1, -2, -3, -4, -5, -6, -7, -8, -9,-10,-11,-12,-13,-14,-15 };
op[0] = match[0];
op[1] = match[1];
op[2] = match[2];
op[3] = match[3];
match += shift1[offset];
memcpy(op + 4, match, 4);
match += shift2[offset];
memcpy(op + 8, match, 8);
match += shift3[offset];
memcpy(op + 16, match, 16);
match += shift4[offset];
}
template <> void inline copy<32>(UInt8 * dst, const UInt8 * src) { copy32(dst, src); }
template <> void inline wildCopy<32>(UInt8 * dst, const UInt8 * src, UInt8 * dst_end) { wildCopy32(dst, src, dst_end); }
template <> void inline copyOverlap<32, false>(UInt8 * op, const UInt8 *& match, const size_t offset) { copyOverlap32(op, match, offset); }
/// See also https://stackoverflow.com/a/30669632
template <size_t copy_amount, bool use_shuffle>
void NO_INLINE decompressImpl(
@ -355,6 +424,10 @@ void NO_INLINE decompressImpl(
UInt8 * op = reinterpret_cast<UInt8 *>(dest);
UInt8 * const output_end = op + dest_size;
/// Unrolling with clang causes a >10% performance degradation.
#if defined(__clang__)
#pragma nounroll
#endif
while (1)
{
size_t length;
@ -464,6 +537,7 @@ void decompress(
if (source_size == 0 || dest_size == 0)
return;
/// Don't run timer if the block is too small.
if (dest_size >= 32768)
{
@ -472,13 +546,14 @@ void decompress(
/// Run the selected method and measure time.
Stopwatch watch;
if (best_variant == 0)
decompressImpl<16, true>(source, dest, dest_size);
if (best_variant == 1)
decompressImpl<16, false>(source, dest, dest_size);
if (best_variant == 2)
decompressImpl<8, true>(source, dest, dest_size);
if (best_variant == 3)
decompressImpl<32, false>(source, dest, dest_size);
watch.stop();
@ -531,7 +606,6 @@ void statistics(
const UInt8 * ip = reinterpret_cast<const UInt8 *>(source);
UInt8 * op = reinterpret_cast<UInt8 *>(dest);
UInt8 * const output_end = op + dest_size;
while (1)
{
size_t length;
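
The `shift1`..`shift4` tables above exist because a match offset smaller than the copy width makes source and destination overlap, and the decompressor must reproduce exact byte-by-byte replication semantics while still using wide copies. A reference byte loop showing the semantics the shift tables emulate:

#include <cstddef>
#include <cstdio>

/// Reference semantics that copyOverlap8/16/32 must reproduce: with an offset
/// smaller than the copy width, bytes written earlier in the same copy are re-read.
static void overlapCopyBytewise(unsigned char * op, const unsigned char * match, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        op[i] = match[i];
}

int main()
{
    unsigned char buf[16] = {'a', 'b'};
    /// offset = 2: replicate the two-byte pattern "ab" across the next 10 bytes.
    overlapCopyBytewise(buf + 2, buf, 10);
    buf[12] = '\0';
    std::puts(reinterpret_cast<char *>(buf)); /// prints "abababababab"
}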

View File

@ -34,7 +34,7 @@ namespace LZ4
* that is allowed to read/write.
* This value is a slight overestimate.
*/
static constexpr size_t ADDITIONAL_BYTES_AT_END_OF_BUFFER = 32;
static constexpr size_t ADDITIONAL_BYTES_AT_END_OF_BUFFER = 64;
/** When decompressing uniform sequence of blocks (for example, blocks from one file),
@ -88,7 +88,7 @@ struct PerformanceStatistics
};
/// Number of different algorithms to select from.
static constexpr size_t NUM_ELEMENTS = 3;
static constexpr size_t NUM_ELEMENTS = 4;
/// Cold invocations may be affected by additional memory latencies. Don't take first invocations into account.
static constexpr double NUM_INVOCATIONS_TO_THROW_OFF = 2;
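
Raising `NUM_ELEMENTS` from 3 to 4 registers the new 32-byte-copy decompress variant with the runtime selector, which times variants and keeps preferring the historically fastest one. A much-simplified sketch of that measure-then-select idea (the real `PerformanceStatistics` keeps running estimates and, per the constant above, ignores cold first invocations):

#include <chrono>
#include <cstdio>
#include <functional>
#include <vector>

int main()
{
    using Clock = std::chrono::steady_clock;

    // Stand-ins for the four decompressImpl instantiations selected at runtime.
    std::vector<std::function<void()>> variants(4, [] { /* decompress a block */ });
    std::vector<double> seconds(variants.size());

    // Measure each variant once, then keep using the fastest one.
    for (size_t i = 0; i < variants.size(); ++i)
    {
        auto start = Clock::now();
        variants[i]();
        seconds[i] = std::chrono::duration<double>(Clock::now() - start).count();
    }

    size_t best = 0;
    for (size_t i = 1; i < variants.size(); ++i)
        if (seconds[i] < seconds[best])
            best = i;

    variants[best](); // subsequent blocks run the historically fastest variant
    std::printf("selected variant: %zu of %zu\n", best, variants.size());
}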

View File

@ -123,3 +123,7 @@
#else
#define OPTIMIZE(x)
#endif
/// This number is only used for distributed protocol version compatibility.
/// It could be any magic number.
#define DBMS_DISTRIBUTED_SENDS_MAGIC_NUMBER 0xCAFECABE

View File

@ -1,12 +1,10 @@
#include "Settings.h"
#include <Poco/Util/AbstractConfiguration.h>
#include <Core/Field.h>
#include <IO/ReadHelpers.h>
#include <IO/WriteHelpers.h>
#include <Columns/ColumnArray.h>
#include <Common/typeid_cast.h>
#include <string.h>
#include <boost/program_options/options_description.hpp>
namespace DB
@ -14,94 +12,13 @@ namespace DB
namespace ErrorCodes
{
extern const int UNKNOWN_SETTING;
extern const int THERE_IS_NO_PROFILE;
extern const int NO_ELEMENTS_IN_CONFIG;
}
/// Set the setting by name.
void Settings::set(const String & name, const Field & value)
{
#define TRY_SET(TYPE, NAME, DEFAULT, DESCRIPTION) \
else if (name == #NAME) NAME.set(value);
IMPLEMENT_SETTINGS_COLLECTION(Settings, LIST_OF_SETTINGS)
if (false) {}
APPLY_FOR_SETTINGS(TRY_SET)
else
throw Exception("Unknown setting " + name, ErrorCodes::UNKNOWN_SETTING);
#undef TRY_SET
}
/// Set the setting by name. Read the binary-serialized value from the buffer (for interserver interaction).
void Settings::set(const String & name, ReadBuffer & buf)
{
#define TRY_SET(TYPE, NAME, DEFAULT, DESCRIPTION) \
else if (name == #NAME) NAME.set(buf);
if (false) {}
APPLY_FOR_SETTINGS(TRY_SET)
else
throw Exception("Unknown setting " + name, ErrorCodes::UNKNOWN_SETTING);
#undef TRY_SET
}
/// Skip the binary-serialized value from the buffer.
void Settings::ignore(const String & name, ReadBuffer & buf)
{
#define TRY_IGNORE(TYPE, NAME, DEFAULT, DESCRIPTION) \
else if (name == #NAME) decltype(NAME)(DEFAULT).set(buf);
if (false) {}
APPLY_FOR_SETTINGS(TRY_IGNORE)
else
throw Exception("Unknown setting " + name, ErrorCodes::UNKNOWN_SETTING);
#undef TRY_IGNORE
}
/** Set the setting by name. Read the value in text form from a string (for example, from a config, or from a URL parameter).
*/
void Settings::set(const String & name, const String & value)
{
#define TRY_SET(TYPE, NAME, DEFAULT, DESCRIPTION) \
else if (name == #NAME) NAME.set(value);
if (false) {}
APPLY_FOR_SETTINGS(TRY_SET)
else
throw Exception("Unknown setting " + name, ErrorCodes::UNKNOWN_SETTING);
#undef TRY_SET
}
String Settings::get(const String & name) const
{
#define GET(TYPE, NAME, DEFAULT, DESCRIPTION) \
else if (name == #NAME) return NAME.toString();
if (false) {}
APPLY_FOR_SETTINGS(GET)
else
throw Exception("Unknown setting " + name, ErrorCodes::UNKNOWN_SETTING);
#undef GET
}
bool Settings::tryGet(const String & name, String & value) const
{
#define TRY_GET(TYPE, NAME, DEFAULT, DESCRIPTION) \
else if (name == #NAME) { value = NAME.toString(); return true; }
if (false) {}
APPLY_FOR_SETTINGS(TRY_GET)
else
return false;
#undef TRY_GET
}
/** Set the settings from the profile (in the server configuration, many settings can be listed in one profile).
* The profile can also be set using the `set` functions, like the `profile` setting.
@ -118,6 +35,8 @@ void Settings::setProfile(const String & profile_name, const Poco::Util::Abstrac
for (const std::string & key : config_keys)
{
if (key == "constraints")
continue;
if (key == "profile") /// Inheritance of one profile from another.
setProfile(config.getString(elem + "." + key), config);
else
@ -139,47 +58,6 @@ void Settings::loadSettingsFromConfig(const String & path, const Poco::Util::Abs
}
}
/// Read the settings from the buffer. They are written as a sequence of name-value pairs, terminated by an empty `name`.
/// If the `check_readonly` flag is set, `readonly` is set in the settings, and yet some changes have occurred, throw an exception.
void Settings::deserialize(ReadBuffer & buf)
{
auto before_readonly = readonly;
while (true)
{
String name;
readBinary(name, buf);
/// An empty string is the marker for the end of the settings.
if (name.empty())
break;
/// If readonly = 2, then you can change the settings, except for the readonly setting.
if (before_readonly == 0 || (before_readonly == 2 && name != "readonly"))
set(name, buf);
else
ignore(name, buf);
}
}
/// Record the changed settings to the buffer. (For example, to send to a remote server.)
void Settings::serialize(WriteBuffer & buf) const
{
#define WRITE(TYPE, NAME, DEFAULT, DESCRIPTION) \
if (NAME.changed) \
{ \
writeStringBinary(#NAME, buf); \
NAME.write(buf); \
}
APPLY_FOR_SETTINGS(WRITE)
/// An empty string is a marker for the end of the settings.
writeStringBinary("", buf);
#undef WRITE
}
void Settings::dumpToArrayColumns(IColumn * column_names_, IColumn * column_values_, bool changed_only)
{
/// Convert pointers and perform a simple check
@ -188,17 +66,20 @@ void Settings::dumpToArrayColumns(IColumn * column_names_, IColumn * column_valu
size_t size = 0;
#define ADD_SETTING(TYPE, NAME, DEFAULT, DESCRIPTION) \
if (!changed_only || NAME.changed) \
{ \
if (column_names) \
column_names->getData().insertData(#NAME, strlen(#NAME)); \
if (column_values) \
column_values->getData().insert(NAME.toString()); \
++size; \
for (const auto & setting : *this)
{
if (!changed_only || setting.isChanged())
{
if (column_names)
{
StringRef name = setting.getName();
column_names->getData().insertData(name.data, name.size);
}
if (column_values)
column_values->getData().insert(setting.getValueAsString());
++size;
}
}
APPLY_FOR_SETTINGS(ADD_SETTING)
#undef ADD_SETTING
if (column_names)
{
@ -216,4 +97,16 @@ void Settings::dumpToArrayColumns(IColumn * column_names_, IColumn * column_valu
}
}
void Settings::addProgramOptions(boost::program_options::options_description & options)
{
for (size_t index = 0; index != Settings::size(); ++index)
{
auto on_program_option
= boost::function1<void, const std::string &>([this, index](const std::string & value) { set(index, value); });
options.add(boost::shared_ptr<boost::program_options::option_description>(new boost::program_options::option_description(
Settings::getName(index).data,
boost::program_options::value<std::string>()->composing()->notifier(on_program_option),
Settings::getDescription(index).data)));
}
}
}
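
The removed `serialize`/`deserialize` pair documents the wire format now handled by the settings-collection machinery: changed settings are written as consecutive name/value pairs, and an empty name terminates the stream. A minimal sketch of that framing with plain length-prefixed strings (the real code uses `writeStringBinary`/`readBinary` and varint-encoded values):

#include <cstdint>
#include <iostream>
#include <map>
#include <sstream>
#include <string>

static void writeStr(std::ostream & out, const std::string & s)
{
    uint32_t n = static_cast<uint32_t>(s.size());
    out.write(reinterpret_cast<const char *>(&n), sizeof(n));
    out.write(s.data(), n);
}

static std::string readStr(std::istream & in)
{
    uint32_t n = 0;
    in.read(reinterpret_cast<char *>(&n), sizeof(n));
    std::string s(n, '\0');
    in.read(s.data(), n);
    return s;
}

int main()
{
    std::stringstream buf;
    std::map<std::string, std::string> changed{{"max_threads", "8"}, {"readonly", "2"}};

    for (const auto & [name, value] : changed)
    {
        writeStr(buf, name);
        writeStr(buf, value);
    }
    writeStr(buf, ""); /// an empty name marks the end of the settings

    while (true)
    {
        std::string name = readStr(buf);
        if (name.empty())
            break;
        std::cout << name << " = " << readStr(buf) << '\n';
    }
}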

View File

@ -12,15 +12,24 @@ namespace Poco
}
}
namespace boost
{
namespace program_options
{
class options_description;
}
}
namespace DB
{
class IColumn;
class Field;
/** Settings of query execution.
*/
struct Settings
struct Settings : public SettingsCollection<Settings>
{
/// So that initialization from an empty initializer list is "value initialization", not "aggregate initialization", in C++14.
/// http://en.cppreference.com/w/cpp/language/aggregate_initialization
@ -33,7 +42,7 @@ struct Settings
* but we are not going to do it, because settings is used everywhere as static struct fields.
*/
#define APPLY_FOR_SETTINGS(M) \
#define LIST_OF_SETTINGS(M) \
M(SettingUInt64, min_compress_block_size, 65536, "The actual size of the block to compress, if the uncompressed data less than max_compress_block_size is no less than this value and no less than the volume of data for one mark.") \
M(SettingUInt64, max_compress_block_size, 1048576, "The maximum size of blocks of uncompressed data before compressing for writing to a table.") \
M(SettingUInt64, max_block_size, DEFAULT_BLOCK_SIZE, "Maximum block size for reading") \
@ -41,6 +50,7 @@ struct Settings
M(SettingUInt64, min_insert_block_size_rows, DEFAULT_INSERT_BLOCK_SIZE, "Squash blocks passed to INSERT query to specified size in rows, if blocks are not big enough.") \
M(SettingUInt64, min_insert_block_size_bytes, (DEFAULT_INSERT_BLOCK_SIZE * 256), "Squash blocks passed to INSERT query to specified size in bytes, if blocks are not big enough.") \
M(SettingMaxThreads, max_threads, 0, "The maximum number of threads to execute the request. By default, it is determined automatically.") \
M(SettingMaxThreads, max_alter_threads, 0, "The maximum number of threads to execute the ALTER requests. By default, it is determined automatically.") \
M(SettingUInt64, max_read_buffer_size, DBMS_DEFAULT_BUFFER_SIZE, "The maximum size of the buffer to read from the filesystem.") \
M(SettingUInt64, max_distributed_connections, 1024, "The maximum number of connections for distributed processing of one query (should be greater than max_threads).") \
M(SettingUInt64, max_query_size, 262144, "Which part of the query can be read into RAM for parsing (the remaining data for INSERT, if any, is read later)") \
@ -153,7 +163,8 @@ struct Settings
\
M(SettingBool, add_http_cors_header, false, "Add an HTTP CORS header.") \
\
M(SettingBool, input_format_skip_unknown_fields, false, "Skip columns with unknown names from input data (it works for JSONEachRow and TSKV formats).") \
M(SettingBool, input_format_skip_unknown_fields, false, "Skip columns with unknown names from input data (it works for JSONEachRow, CSVWithNames, TSVWithNames and TSKV formats).") \
M(SettingBool, input_format_with_names_use_header, false, "For TSVWithNames and CSVWithNames input formats this controls whether format parser is to assume that column data appear in the input exactly as they are specified in the header.") \
M(SettingBool, input_format_import_nested_json, false, "Map nested JSON data to nested tables (it works for JSONEachRow format).") \
M(SettingBool, input_format_defaults_for_omitted_fields, false, "For input data calculate default expressions for omitted fields (it works for JSONEachRow format).") \
\
@ -195,6 +206,7 @@ struct Settings
M(SettingUInt64, insert_distributed_timeout, 0, "Timeout for insert query into distributed. Setting is used only with insert_distributed_sync enabled. Zero value means no timeout.") \
M(SettingInt64, distributed_ddl_task_timeout, 180, "Timeout for DDL query responses from all hosts in cluster. Negative value means infinite.") \
M(SettingMilliseconds, stream_flush_interval_ms, 7500, "Timeout for flushing data from streaming storages.") \
M(SettingMilliseconds, stream_poll_timeout_ms, 500, "Timeout for polling data from streaming storages.") \
M(SettingString, format_schema, "", "Schema identifier (used by schema-based formats)") \
M(SettingBool, insert_allow_materialized_columns, 0, "If the setting is enabled, allow materialized columns in INSERT.") \
M(SettingSeconds, http_connection_timeout, DEFAULT_HTTP_READ_BUFFER_CONNECTION_TIMEOUT, "HTTP connection timeout.") \
@ -216,25 +228,25 @@ struct Settings
\
M(SettingUInt64, max_rows_to_read, 0, "Limit on read rows from the most 'deep' sources. That is, only in the deepest subquery. When reading from a remote server, it is only checked on a remote server.") \
M(SettingUInt64, max_bytes_to_read, 0, "Limit on read bytes (after decompression) from the most 'deep' sources. That is, only in the deepest subquery. When reading from a remote server, it is only checked on a remote server.") \
M(SettingOverflowMode<false>, read_overflow_mode, OverflowMode::THROW, "What to do when the limit is exceeded.") \
M(SettingOverflowMode, read_overflow_mode, OverflowMode::THROW, "What to do when the limit is exceeded.") \
\
M(SettingUInt64, max_rows_to_group_by, 0, "") \
M(SettingOverflowMode<true>, group_by_overflow_mode, OverflowMode::THROW, "What to do when the limit is exceeded.") \
M(SettingOverflowModeGroupBy, group_by_overflow_mode, OverflowMode::THROW, "What to do when the limit is exceeded.") \
M(SettingUInt64, max_bytes_before_external_group_by, 0, "") \
\
M(SettingUInt64, max_rows_to_sort, 0, "") \
M(SettingUInt64, max_bytes_to_sort, 0, "") \
M(SettingOverflowMode<false>, sort_overflow_mode, OverflowMode::THROW, "What to do when the limit is exceeded.") \
M(SettingOverflowMode, sort_overflow_mode, OverflowMode::THROW, "What to do when the limit is exceeded.") \
M(SettingUInt64, max_bytes_before_external_sort, 0, "") \
M(SettingUInt64, max_bytes_before_remerge_sort, 1000000000, "In case of ORDER BY with LIMIT, when memory usage is higher than specified threshold, perform additional steps of merging blocks before final merge to keep just top LIMIT rows.") \
\
M(SettingUInt64, max_result_rows, 0, "Limit on result size in rows. Also checked for intermediate data sent from remote servers.") \
M(SettingUInt64, max_result_bytes, 0, "Limit on result size in bytes (uncompressed). Also checked for intermediate data sent from remote servers.") \
M(SettingOverflowMode<false>, result_overflow_mode, OverflowMode::THROW, "What to do when the limit is exceeded.") \
M(SettingOverflowMode, result_overflow_mode, OverflowMode::THROW, "What to do when the limit is exceeded.") \
\
/* TODO: Check also when merging and finalizing aggregate functions. */ \
M(SettingSeconds, max_execution_time, 0, "") \
M(SettingOverflowMode<false>, timeout_overflow_mode, OverflowMode::THROW, "What to do when the limit is exceeded.") \
M(SettingOverflowMode, timeout_overflow_mode, OverflowMode::THROW, "What to do when the limit is exceeded.") \
\
M(SettingUInt64, min_execution_speed, 0, "Minimum number of execution rows per second.") \
M(SettingUInt64, max_execution_speed, 0, "Maximum number of execution rows per second.") \
@ -256,20 +268,20 @@ struct Settings
\
M(SettingUInt64, max_rows_in_set, 0, "Maximum size of the set (in number of elements) resulting from the execution of the IN section.") \
M(SettingUInt64, max_bytes_in_set, 0, "Maximum size of the set (in bytes in memory) resulting from the execution of the IN section.") \
M(SettingOverflowMode<false>, set_overflow_mode, OverflowMode::THROW, "What to do when the limit is exceeded.") \
M(SettingOverflowMode, set_overflow_mode, OverflowMode::THROW, "What to do when the limit is exceeded.") \
\
M(SettingUInt64, max_rows_in_join, 0, "Maximum size of the hash table for JOIN (in number of rows).") \
M(SettingUInt64, max_bytes_in_join, 0, "Maximum size of the hash table for JOIN (in number of bytes in memory).") \
M(SettingOverflowMode<false>, join_overflow_mode, OverflowMode::THROW, "What to do when the limit is exceeded.") \
M(SettingOverflowMode, join_overflow_mode, OverflowMode::THROW, "What to do when the limit is exceeded.") \
M(SettingBool, join_any_take_last_row, false, "When disabled (default) ANY JOIN will take the first found row for a key. When enabled, it will take the last row seen if there are multiple rows for the same key.") \
\
M(SettingUInt64, max_rows_to_transfer, 0, "Maximum size (in rows) of the transmitted external table obtained when the GLOBAL IN/JOIN section is executed.") \
M(SettingUInt64, max_bytes_to_transfer, 0, "Maximum size (in uncompressed bytes) of the transmitted external table obtained when the GLOBAL IN/JOIN section is executed.") \
M(SettingOverflowMode<false>, transfer_overflow_mode, OverflowMode::THROW, "What to do when the limit is exceeded.") \
M(SettingOverflowMode, transfer_overflow_mode, OverflowMode::THROW, "What to do when the limit is exceeded.") \
\
M(SettingUInt64, max_rows_in_distinct, 0, "Maximum number of elements during execution of DISTINCT.") \
M(SettingUInt64, max_bytes_in_distinct, 0, "Maximum total size of state (in uncompressed bytes) in memory for the execution of DISTINCT.") \
M(SettingOverflowMode<false>, distinct_overflow_mode, OverflowMode::THROW, "What to do when the limit is exceeded.") \
M(SettingOverflowMode, distinct_overflow_mode, OverflowMode::THROW, "What to do when the limit is exceeded.") \
\
M(SettingUInt64, max_memory_usage, 0, "Maximum memory usage for processing of single query. Zero means unlimited.") \
M(SettingUInt64, max_memory_usage_for_user, 0, "Maximum memory usage for processing all concurrently running queries for the user. Zero means unlimited.") \
@ -314,48 +326,22 @@ struct Settings
\
M(SettingUInt64, max_partitions_per_insert_block, 100, "Limit maximum number of partitions in single INSERTed block. Zero means unlimited. Throw exception if the block contains too many partitions. This setting is a safety threshold, because using large number of partitions is a common misconception.") \
#define DECLARE(TYPE, NAME, DEFAULT, DESCRIPTION) \
TYPE NAME {DEFAULT};
APPLY_FOR_SETTINGS(DECLARE)
#undef DECLARE
/// Set setting by name.
void set(const String & name, const Field & value);
/// Set setting by name. Read value, serialized in binary form from buffer (for inter-server communication).
void set(const String & name, ReadBuffer & buf);
/// Skip value, serialized in binary form in buffer.
void ignore(const String & name, ReadBuffer & buf);
/// Set setting by name. Read value in text form from string (for example, from configuration file or from URL parameter).
void set(const String & name, const String & value);
/// Get setting by name. Converts value to String.
String get(const String & name) const;
bool tryGet(const String & name, String & value) const;
DECLARE_SETTINGS_COLLECTION(LIST_OF_SETTINGS)
/** Set multiple settings from "profile" (in server configuration file (users.xml), profiles contain groups of multiple settings).
* The profile can also be set using the `set` functions, like the profile setting.
*/
* The profile can also be set using the `set` functions, like the profile setting.
*/
void setProfile(const String & profile_name, const Poco::Util::AbstractConfiguration & config);
/// Load settings from configuration file, at "path" prefix in configuration.
void loadSettingsFromConfig(const String & path, const Poco::Util::AbstractConfiguration & config);
/// Read settings from a buffer. They are serialized as a list of consecutive name-value pairs, terminated by an empty name.
/// If readonly=1 is set, the settings that are read are ignored.
void deserialize(ReadBuffer & buf);
/// Write changed settings to a buffer. (For example, to be sent to a remote server.)
void serialize(WriteBuffer & buf) const;
/// Dumps the settings to two columns of type Array(String)
void dumpToArrayColumns(IColumn * column_names, IColumn * column_values, bool changed_only = true);
/// Adds program options to set the settings from a command line.
/// (Don't forget to call notify() on the `variables_map` after parsing it!)
void addProgramOptions(boost::program_options::options_description & options);
};
}
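
`LIST_OF_SETTINGS(M)` is the same X-macro trick the old code used: one list, many expansions. `DECLARE_SETTINGS_COLLECTION` generates the members, and the removed `Settings.cpp` functions dispatched `set(name, ...)` from the same list. A minimal sketch of the declare-plus-dispatch idiom with two invented settings:

#include <iostream>
#include <stdexcept>
#include <string>

// Two invented settings; the real LIST_OF_SETTINGS has hundreds.
#define EXAMPLE_SETTINGS(M) \
    M(int,  max_threads, 0,     "maximum number of threads") \
    M(bool, readonly,    false, "disallow modifications")

struct ExampleSettings
{
    // Expansion 1: one member with its default per setting.
#define DECLARE(TYPE, NAME, DEFAULT, DESCRIPTION) TYPE NAME{DEFAULT};
    EXAMPLE_SETTINGS(DECLARE)
#undef DECLARE

    // Expansion 2: dispatch set-by-name over the same list.
    void set(const std::string & name, int value)
    {
#define TRY_SET(TYPE, NAME, DEFAULT, DESCRIPTION) \
        else if (name == #NAME) NAME = static_cast<TYPE>(value);
        if (false) {}
        EXAMPLE_SETTINGS(TRY_SET)
        else throw std::runtime_error("Unknown setting " + name);
#undef TRY_SET
    }
};

int main()
{
    ExampleSettings s;
    s.set("max_threads", 8);
    std::cout << s.max_threads << ' ' << s.readonly << '\n'; /// 8 0
}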

View File

@ -25,55 +25,97 @@ namespace ErrorCodes
extern const int UNKNOWN_LOG_LEVEL;
extern const int SIZE_OF_FIXED_STRING_DOESNT_MATCH;
extern const int BAD_ARGUMENTS;
extern const int UNKNOWN_SETTING;
}
template <typename IntType>
String SettingInt<IntType>::toString() const
template <typename Type>
String SettingNumber<Type>::toString() const
{
return DB::toString(value);
}
template <typename IntType>
void SettingInt<IntType>::set(IntType x)
template <typename Type>
Field SettingNumber<Type>::toField() const
{
return value;
}
template <typename Type>
void SettingNumber<Type>::set(Type x)
{
value = x;
changed = true;
}
template <typename IntType>
void SettingInt<IntType>::set(const Field & x)
template <typename Type>
void SettingNumber<Type>::set(const Field & x)
{
set(applyVisitor(FieldVisitorConvertToNumber<IntType>(), x));
if (x.getType() == Field::Types::String)
set(get<const String &>(x));
else
set(applyVisitor(FieldVisitorConvertToNumber<Type>(), x));
}
template <typename IntType>
void SettingInt<IntType>::set(const String & x)
template <typename Type>
void SettingNumber<Type>::set(const String & x)
{
set(parse<IntType>(x));
set(parse<Type>(x));
}
template <typename IntType>
void SettingInt<IntType>::set(ReadBuffer & buf)
template <typename Type>
void SettingNumber<Type>::serialize(WriteBuffer & buf) const
{
IntType x = 0;
readVarT(x, buf);
set(x);
if constexpr (std::is_integral_v<Type> && std::is_unsigned_v<Type>)
writeVarUInt(static_cast<UInt64>(value), buf);
else if constexpr (std::is_integral_v<Type> && std::is_signed_v<Type>)
writeVarInt(static_cast<Int64>(value), buf);
else
{
static_assert(std::is_floating_point_v<Type>);
writeBinary(toString(), buf);
}
}
template <typename IntType>
void SettingInt<IntType>::write(WriteBuffer & buf) const
template <typename Type>
void SettingNumber<Type>::deserialize(ReadBuffer & buf)
{
writeVarT(value, buf);
if constexpr (std::is_integral_v<Type> && std::is_unsigned_v<Type>)
{
UInt64 x;
readVarUInt(x, buf);
set(static_cast<Type>(x));
}
else if constexpr (std::is_integral_v<Type> && std::is_signed_v<Type>)
{
Int64 x;
readVarInt(x, buf);
set(static_cast<Type>(x));
}
else
{
static_assert(std::is_floating_point_v<Type>);
String x;
readBinary(x, buf);
set(x);
}
}
template struct SettingInt<UInt64>;
template struct SettingInt<Int64>;
template struct SettingNumber<UInt64>;
template struct SettingNumber<Int64>;
template struct SettingNumber<float>;
template struct SettingNumber<bool>;
String SettingMaxThreads::toString() const
{
/// Instead of the `auto` value, we output the actual value to make it easier to see.
return DB::toString(value);
return is_auto ? ("auto(" + DB::toString(value) + ")") : DB::toString(value);
}
Field SettingMaxThreads::toField() const
{
return is_auto ? 0 : value;
}
void SettingMaxThreads::set(UInt64 x)
@ -86,31 +128,31 @@ void SettingMaxThreads::set(UInt64 x)
void SettingMaxThreads::set(const Field & x)
{
if (x.getType() == Field::Types::String)
set(safeGet<const String &>(x));
set(get<const String &>(x));
else
set(safeGet<UInt64>(x));
}
void SettingMaxThreads::set(const String & x)
{
if (x == "auto")
if (startsWith(x, "auto"))
setAuto();
else
set(parse<UInt64>(x));
}
void SettingMaxThreads::set(ReadBuffer & buf)
void SettingMaxThreads::serialize(WriteBuffer & buf) const
{
writeVarUInt(is_auto ? 0 : value, buf);
}
void SettingMaxThreads::deserialize(ReadBuffer & buf)
{
UInt64 x = 0;
readVarUInt(x, buf);
set(x);
}
void SettingMaxThreads::write(WriteBuffer & buf) const
{
writeVarUInt(is_auto ? 0 : value, buf);
}
void SettingMaxThreads::setAuto()
{
value = getAutoValue();
@ -119,390 +161,67 @@ void SettingMaxThreads::setAuto()
UInt64 SettingMaxThreads::getAutoValue() const
{
static auto res = getAutoValueImpl();
static auto res = getNumberOfPhysicalCPUCores();
return res;
}
/// Executed only once, from a single thread.
UInt64 SettingMaxThreads::getAutoValueImpl() const
template <SettingTimespanIO io_unit>
String SettingTimespan<io_unit>::toString() const
{
return getNumberOfPhysicalCPUCores();
return DB::toString(value.totalMicroseconds() / microseconds_per_io_unit);
}
String SettingSeconds::toString() const
template <SettingTimespanIO io_unit>
Field SettingTimespan<io_unit>::toField() const
{
return DB::toString(totalSeconds());
return value.totalMicroseconds() / microseconds_per_io_unit;
}
void SettingSeconds::set(const Poco::Timespan & x)
template <SettingTimespanIO io_unit>
void SettingTimespan<io_unit>::set(const Poco::Timespan & x)
{
value = x;
changed = true;
}
void SettingSeconds::set(UInt64 x)
template <SettingTimespanIO io_unit>
void SettingTimespan<io_unit>::set(UInt64 x)
{
set(Poco::Timespan(x, 0));
set(Poco::Timespan(x * microseconds_per_io_unit));
}
void SettingSeconds::set(const Field & x)
template <SettingTimespanIO io_unit>
void SettingTimespan<io_unit>::set(const Field & x)
{
set(safeGet<UInt64>(x));
if (x.getType() == Field::Types::String)
set(get<const String &>(x));
else
set(safeGet<UInt64>(x));
}
void SettingSeconds::set(const String & x)
template <SettingTimespanIO io_unit>
void SettingTimespan<io_unit>::set(const String & x)
{
set(parse<UInt64>(x));
}
void SettingSeconds::set(ReadBuffer & buf)
template <SettingTimespanIO io_unit>
void SettingTimespan<io_unit>::serialize(WriteBuffer & buf) const
{
writeVarUInt(value.totalMicroseconds() / microseconds_per_io_unit, buf);
}
template <SettingTimespanIO io_unit>
void SettingTimespan<io_unit>::deserialize(ReadBuffer & buf)
{
UInt64 x = 0;
readVarUInt(x, buf);
set(x);
}
void SettingSeconds::write(WriteBuffer & buf) const
{
writeVarUInt(value.totalSeconds(), buf);
}
String SettingMilliseconds::toString() const
{
return DB::toString(totalMilliseconds());
}
void SettingMilliseconds::set(const Poco::Timespan & x)
{
value = x;
changed = true;
}
void SettingMilliseconds::set(UInt64 x)
{
set(Poco::Timespan(x * 1000));
}
void SettingMilliseconds::set(const Field & x)
{
set(safeGet<UInt64>(x));
}
void SettingMilliseconds::set(const String & x)
{
set(parse<UInt64>(x));
}
void SettingMilliseconds::set(ReadBuffer & buf)
{
UInt64 x = 0;
readVarUInt(x, buf);
set(x);
}
void SettingMilliseconds::write(WriteBuffer & buf) const
{
writeVarUInt(value.totalMilliseconds(), buf);
}
String SettingFloat::toString() const
{
return DB::toString(value);
}
void SettingFloat::set(float x)
{
value = x;
changed = true;
}
void SettingFloat::set(const Field & x)
{
set(applyVisitor(FieldVisitorConvertToNumber<float>(), x));
}
void SettingFloat::set(const String & x)
{
set(parse<float>(x));
}
void SettingFloat::set(ReadBuffer & buf)
{
String x;
readBinary(x, buf);
set(x);
}
void SettingFloat::write(WriteBuffer & buf) const
{
writeBinary(toString(), buf);
}
LoadBalancing SettingLoadBalancing::getLoadBalancing(const String & s)
{
if (s == "random") return LoadBalancing::RANDOM;
if (s == "nearest_hostname") return LoadBalancing::NEAREST_HOSTNAME;
if (s == "in_order") return LoadBalancing::IN_ORDER;
if (s == "first_or_random") return LoadBalancing::FIRST_OR_RANDOM;
throw Exception("Unknown load balancing mode: '" + s + "', must be one of 'random', 'nearest_hostname', 'in_order', 'first_or_random'",
ErrorCodes::UNKNOWN_LOAD_BALANCING);
}
String SettingLoadBalancing::toString() const
{
const char * strings[] = {"random", "nearest_hostname", "in_order", "first_or_random"};
if (value < LoadBalancing::RANDOM || value > LoadBalancing::FIRST_OR_RANDOM)
throw Exception("Unknown load balancing mode", ErrorCodes::UNKNOWN_LOAD_BALANCING);
return strings[static_cast<size_t>(value)];
}
void SettingLoadBalancing::set(LoadBalancing x)
{
value = x;
changed = true;
}
void SettingLoadBalancing::set(const Field & x)
{
set(safeGet<const String &>(x));
}
void SettingLoadBalancing::set(const String & x)
{
set(getLoadBalancing(x));
}
void SettingLoadBalancing::set(ReadBuffer & buf)
{
String x;
readBinary(x, buf);
set(x);
}
void SettingLoadBalancing::write(WriteBuffer & buf) const
{
writeBinary(toString(), buf);
}
JoinStrictness SettingJoinStrictness::getJoinStrictness(const String & s)
{
if (s == "") return JoinStrictness::Unspecified;
if (s == "ALL") return JoinStrictness::ALL;
if (s == "ANY") return JoinStrictness::ANY;
throw Exception("Unknown join strictness mode: '" + s + "', must be one of '', 'ALL', 'ANY'",
ErrorCodes::UNKNOWN_JOIN_STRICTNESS);
}
String SettingJoinStrictness::toString() const
{
const char * strings[] = {"", "ALL", "ANY"};
if (value < JoinStrictness::Unspecified || value > JoinStrictness::ANY)
throw Exception("Unknown join strictness mode", ErrorCodes::UNKNOWN_JOIN_STRICTNESS);
return strings[static_cast<size_t>(value)];
}
void SettingJoinStrictness::set(JoinStrictness x)
{
value = x;
changed = true;
}
void SettingJoinStrictness::set(const Field & x)
{
set(safeGet<const String &>(x));
}
void SettingJoinStrictness::set(const String & x)
{
set(getJoinStrictness(x));
}
void SettingJoinStrictness::set(ReadBuffer & buf)
{
String x;
readBinary(x, buf);
set(x);
}
void SettingJoinStrictness::write(WriteBuffer & buf) const
{
writeBinary(toString(), buf);
}
TotalsMode SettingTotalsMode::getTotalsMode(const String & s)
{
if (s == "before_having") return TotalsMode::BEFORE_HAVING;
if (s == "after_having_exclusive") return TotalsMode::AFTER_HAVING_EXCLUSIVE;
if (s == "after_having_inclusive") return TotalsMode::AFTER_HAVING_INCLUSIVE;
if (s == "after_having_auto") return TotalsMode::AFTER_HAVING_AUTO;
throw Exception("Unknown totals mode: '" + s + "', must be one of 'before_having', 'after_having_exclusive', 'after_having_inclusive', 'after_having_auto'", ErrorCodes::UNKNOWN_TOTALS_MODE);
}
String SettingTotalsMode::toString() const
{
switch (value)
{
case TotalsMode::BEFORE_HAVING: return "before_having";
case TotalsMode::AFTER_HAVING_EXCLUSIVE: return "after_having_exclusive";
case TotalsMode::AFTER_HAVING_INCLUSIVE: return "after_having_inclusive";
case TotalsMode::AFTER_HAVING_AUTO: return "after_having_auto";
}
__builtin_unreachable();
}
void SettingTotalsMode::set(TotalsMode x)
{
value = x;
changed = true;
}
void SettingTotalsMode::set(const Field & x)
{
set(safeGet<const String &>(x));
}
void SettingTotalsMode::set(const String & x)
{
set(getTotalsMode(x));
}
void SettingTotalsMode::set(ReadBuffer & buf)
{
String x;
readBinary(x, buf);
set(x);
}
void SettingTotalsMode::write(WriteBuffer & buf) const
{
writeBinary(toString(), buf);
}
template <bool enable_mode_any>
OverflowMode SettingOverflowMode<enable_mode_any>::getOverflowModeForGroupBy(const String & s)
{
if (s == "throw") return OverflowMode::THROW;
if (s == "break") return OverflowMode::BREAK;
if (s == "any") return OverflowMode::ANY;
throw Exception("Unknown overflow mode: '" + s + "', must be one of 'throw', 'break', 'any'", ErrorCodes::UNKNOWN_OVERFLOW_MODE);
}
template <bool enable_mode_any>
OverflowMode SettingOverflowMode<enable_mode_any>::getOverflowMode(const String & s)
{
OverflowMode mode = getOverflowModeForGroupBy(s);
if (mode == OverflowMode::ANY && !enable_mode_any)
throw Exception("Illegal overflow mode: 'any' is only for 'group_by_overflow_mode'", ErrorCodes::ILLEGAL_OVERFLOW_MODE);
return mode;
}
template <bool enable_mode_any>
String SettingOverflowMode<enable_mode_any>::toString() const
{
const char * strings[] = { "throw", "break", "any" };
if (value < OverflowMode::THROW || value > OverflowMode::ANY)
throw Exception("Unknown overflow mode", ErrorCodes::UNKNOWN_OVERFLOW_MODE);
return strings[static_cast<size_t>(value)];
}
template <bool enable_mode_any>
void SettingOverflowMode<enable_mode_any>::set(OverflowMode x)
{
value = x;
changed = true;
}
template <bool enable_mode_any>
void SettingOverflowMode<enable_mode_any>::set(const Field & x)
{
set(safeGet<const String &>(x));
}
template <bool enable_mode_any>
void SettingOverflowMode<enable_mode_any>::set(const String & x)
{
set(getOverflowMode(x));
}
template <bool enable_mode_any>
void SettingOverflowMode<enable_mode_any>::set(ReadBuffer & buf)
{
String x;
readBinary(x, buf);
set(x);
}
template <bool enable_mode_any>
void SettingOverflowMode<enable_mode_any>::write(WriteBuffer & buf) const
{
writeBinary(toString(), buf);
}
template struct SettingOverflowMode<false>;
template struct SettingOverflowMode<true>;
DistributedProductMode SettingDistributedProductMode::getDistributedProductMode(const String & s)
{
if (s == "deny") return DistributedProductMode::DENY;
if (s == "local") return DistributedProductMode::LOCAL;
if (s == "global") return DistributedProductMode::GLOBAL;
if (s == "allow") return DistributedProductMode::ALLOW;
throw Exception("Unknown distributed product mode: '" + s + "', must be one of 'deny', 'local', 'global', 'allow'",
ErrorCodes::UNKNOWN_DISTRIBUTED_PRODUCT_MODE);
}
String SettingDistributedProductMode::toString() const
{
const char * strings[] = {"deny", "local", "global", "allow"};
if (value < DistributedProductMode::DENY || value > DistributedProductMode::ALLOW)
throw Exception("Unknown distributed product mode", ErrorCodes::UNKNOWN_DISTRIBUTED_PRODUCT_MODE);
return strings[static_cast<size_t>(value)];
}
void SettingDistributedProductMode::set(DistributedProductMode x)
{
value = x;
changed = true;
}
void SettingDistributedProductMode::set(const Field & x)
{
set(safeGet<const String &>(x));
}
void SettingDistributedProductMode::set(const String & x)
{
set(getDistributedProductMode(x));
}
void SettingDistributedProductMode::set(ReadBuffer & buf)
{
String x;
readBinary(x, buf);
set(x);
}
void SettingDistributedProductMode::write(WriteBuffer & buf) const
{
writeBinary(toString(), buf);
}
template struct SettingTimespan<SettingTimespanIO::SECOND>;
template struct SettingTimespan<SettingTimespanIO::MILLISECOND>;
String SettingString::toString() const
@ -510,6 +229,11 @@ String SettingString::toString() const
return value;
}
Field SettingString::toField() const
{
return value;
}
void SettingString::set(const String & x)
{
value = x;
@ -521,24 +245,29 @@ void SettingString::set(const Field & x)
set(safeGet<const String &>(x));
}
void SettingString::set(ReadBuffer & buf)
{
String x;
readBinary(x, buf);
set(x);
}
void SettingString::write(WriteBuffer & buf) const
void SettingString::serialize(WriteBuffer & buf) const
{
writeBinary(value, buf);
}
void SettingString::deserialize(ReadBuffer & buf)
{
String s;
readBinary(s, buf);
set(s);
}
String SettingChar::toString() const
{
return String(1, value);
}
Field SettingChar::toField() const
{
return toString();
}
void SettingChar::set(char x)
{
value = x;
@ -559,108 +288,154 @@ void SettingChar::set(const Field & x)
set(s);
}
void SettingChar::set(ReadBuffer & buf)
void SettingChar::serialize(WriteBuffer & buf) const
{
writeBinary(toString(), buf);
}
void SettingChar::deserialize(ReadBuffer & buf)
{
String s;
readBinary(s, buf);
set(s);
}
void SettingChar::write(WriteBuffer & buf) const
template <typename EnumType, typename Tag>
void SettingEnum<EnumType, Tag>::serialize(WriteBuffer & buf) const
{
writeBinary(toString(), buf);
}
SettingDateTimeInputFormat::Value SettingDateTimeInputFormat::getValue(const String & s)
template <typename EnumType, typename Tag>
void SettingEnum<EnumType, Tag>::deserialize(ReadBuffer & buf)
{
if (s == "basic") return Value::Basic;
if (s == "best_effort") return Value::BestEffort;
throw Exception("Unknown DateTime input format: '" + s + "', must be one of 'basic', 'best_effort'", ErrorCodes::BAD_ARGUMENTS);
String s;
readBinary(s, buf);
set(s);
}
String SettingDateTimeInputFormat::toString() const
#define IMPLEMENT_SETTING_ENUM(ENUM_NAME, LIST_OF_NAMES_MACRO, ERROR_CODE_FOR_UNEXPECTED_NAME) \
IMPLEMENT_SETTING_ENUM_WITH_TAG(ENUM_NAME, void, LIST_OF_NAMES_MACRO, ERROR_CODE_FOR_UNEXPECTED_NAME)
#define IMPLEMENT_SETTING_ENUM_WITH_TAG(ENUM_NAME, TAG, LIST_OF_NAMES_MACRO, ERROR_CODE_FOR_UNEXPECTED_NAME) \
template <> \
String SettingEnum<ENUM_NAME, TAG>::toString() const \
{ \
using EnumType = ENUM_NAME; \
using UnderlyingType = std::underlying_type<EnumType>::type; \
switch (static_cast<UnderlyingType>(value)) \
{ \
LIST_OF_NAMES_MACRO(IMPLEMENT_SETTING_ENUM_TO_STRING_HELPER_) \
} \
throw Exception("Unknown " #ENUM_NAME, ERROR_CODE_FOR_UNEXPECTED_NAME); \
} \
\
template <> \
void SettingEnum<ENUM_NAME, TAG>::set(const String & s) \
{ \
using EnumType = ENUM_NAME; \
LIST_OF_NAMES_MACRO(IMPLEMENT_SETTING_ENUM_FROM_STRING_HELPER_) \
\
String all_io_names; \
LIST_OF_NAMES_MACRO(IMPLEMENT_SETTING_ENUM_CONCAT_NAMES_HELPER_) \
throw Exception("Unknown " #ENUM_NAME " : '" + s + "', must be one of " + all_io_names, \
ERROR_CODE_FOR_UNEXPECTED_NAME); \
} \
\
template struct SettingEnum<ENUM_NAME, TAG>;
#define IMPLEMENT_SETTING_ENUM_TO_STRING_HELPER_(NAME, IO_NAME) \
case static_cast<UnderlyingType>(EnumType::NAME): return IO_NAME;
#define IMPLEMENT_SETTING_ENUM_FROM_STRING_HELPER_(NAME, IO_NAME) \
if (s == IO_NAME) \
{ \
set(EnumType::NAME); \
return; \
}
#define IMPLEMENT_SETTING_ENUM_CONCAT_NAMES_HELPER_(NAME, IO_NAME) \
if (!all_io_names.empty()) \
all_io_names += ", "; \
all_io_names += String("'") + IO_NAME + "'";
#define LOAD_BALANCING_LIST_OF_NAMES(M) \
M(RANDOM, "random") \
M(NEAREST_HOSTNAME, "nearest_hostname") \
M(IN_ORDER, "in_order") \
M(FIRST_OR_RANDOM, "first_or_random")
IMPLEMENT_SETTING_ENUM(LoadBalancing, LOAD_BALANCING_LIST_OF_NAMES, ErrorCodes::UNKNOWN_LOAD_BALANCING)
#define JOIN_STRICTNESS_LIST_OF_NAMES(M) \
M(Unspecified, "") \
M(ALL, "ALL") \
M(ANY, "ANY")
IMPLEMENT_SETTING_ENUM(JoinStrictness, JOIN_STRICTNESS_LIST_OF_NAMES, ErrorCodes::UNKNOWN_JOIN_STRICTNESS)
#define TOTALS_MODE_LIST_OF_NAMES(M) \
M(BEFORE_HAVING, "before_having") \
M(AFTER_HAVING_EXCLUSIVE, "after_having_exclusive") \
M(AFTER_HAVING_INCLUSIVE, "after_having_inclusive") \
M(AFTER_HAVING_AUTO, "after_having_auto")
IMPLEMENT_SETTING_ENUM(TotalsMode, TOTALS_MODE_LIST_OF_NAMES, ErrorCodes::UNKNOWN_TOTALS_MODE)
#define OVERFLOW_MODE_LIST_OF_NAMES(M) \
M(THROW, "throw") \
M(BREAK, "break")
IMPLEMENT_SETTING_ENUM(OverflowMode, OVERFLOW_MODE_LIST_OF_NAMES, ErrorCodes::UNKNOWN_OVERFLOW_MODE)
#define OVERFLOW_MODE_LIST_OF_NAMES_WITH_ANY(M) \
M(THROW, "throw") \
M(BREAK, "break") \
M(ANY, "any")
IMPLEMENT_SETTING_ENUM_WITH_TAG(OverflowMode, SettingOverflowModeGroupByTag, OVERFLOW_MODE_LIST_OF_NAMES_WITH_ANY, ErrorCodes::UNKNOWN_OVERFLOW_MODE)
#define DISTRIBUTED_PRODUCT_MODE_LIST_OF_NAMES(M) \
M(DENY, "deny") \
M(LOCAL, "local") \
M(GLOBAL, "global") \
M(ALLOW, "allow")
IMPLEMENT_SETTING_ENUM(DistributedProductMode, DISTRIBUTED_PRODUCT_MODE_LIST_OF_NAMES, ErrorCodes::UNKNOWN_DISTRIBUTED_PRODUCT_MODE)
#define DATE_TIME_INPUT_FORMAT_LIST_OF_NAMES(M) \
M(Basic, "basic") \
M(BestEffort, "best_effort")
IMPLEMENT_SETTING_ENUM(FormatSettings::DateTimeInputFormat, DATE_TIME_INPUT_FORMAT_LIST_OF_NAMES, ErrorCodes::BAD_ARGUMENTS)
#define LOGS_LEVEL_LIST_OF_NAMES(M) \
M(none, "none") \
M(error, "error") \
M(warning, "warning") \
M(information, "information") \
M(debug, "debug") \
M(trace, "trace")
IMPLEMENT_SETTING_ENUM(LogsLevel, LOGS_LEVEL_LIST_OF_NAMES, ErrorCodes::BAD_ARGUMENTS)
namespace details
{
String SettingsCollectionUtils::deserializeName(ReadBuffer & buf)
{
String name;
readBinary(name, buf);
return name;
}
void SettingsCollectionUtils::serializeName(const StringRef & name, WriteBuffer & buf) { writeBinary(name, buf); }
void SettingsCollectionUtils::throwNameNotFound(const StringRef & name)
{
throw Exception("Unknown setting " + name.toString(), ErrorCodes::UNKNOWN_SETTING);
}
}
SettingDateTimeInputFormat::Value SettingDateTimeInputFormat::getValue(const String & s)
{
if (s == "basic") return Value::Basic;
if (s == "best_effort") return Value::BestEffort;
throw Exception("Unknown DateTime input format: '" + s + "', must be one of 'basic', 'best_effort'", ErrorCodes::BAD_ARGUMENTS);
}
String SettingDateTimeInputFormat::toString() const
{
const char * strings[] = {"basic", "best_effort"};
if (value < Value::Basic || value > Value::BestEffort)
throw Exception("Unknown DateTime input format", ErrorCodes::BAD_ARGUMENTS);
return strings[static_cast<size_t>(value)];
}
void SettingDateTimeInputFormat::set(Value x)
{
value = x;
changed = true;
}
void SettingDateTimeInputFormat::set(const Field & x)
{
set(safeGet<const String &>(x));
}
void SettingDateTimeInputFormat::set(const String & x)
{
set(getValue(x));
}
void SettingDateTimeInputFormat::set(ReadBuffer & buf)
{
String x;
readBinary(x, buf);
set(x);
}
void SettingDateTimeInputFormat::write(WriteBuffer & buf) const
{
writeBinary(toString(), buf);
}
SettingLogsLevel::Value SettingLogsLevel::getValue(const String & s)
{
if (s == "none") return Value::none;
if (s == "error") return Value::error;
if (s == "warning") return Value::warning;
if (s == "information") return Value::information;
if (s == "debug") return Value::debug;
if (s == "trace") return Value::trace;
throw Exception("Unknown logs level: '" + s + "', must be one of: none, error, warning, information, debug, trace", ErrorCodes::BAD_ARGUMENTS);
}
String SettingLogsLevel::toString() const
{
const char * strings[] = {"none", "error", "warning", "information", "debug", "trace"};
return strings[static_cast<size_t>(value)];
}
void SettingLogsLevel::set(Value x)
{
value = x;
changed = true;
}
void SettingLogsLevel::set(const Field & x)
{
set(safeGet<const String &>(x));
}
void SettingLogsLevel::set(const String & x)
{
set(getValue(x));
}
void SettingLogsLevel::set(ReadBuffer & buf)
{
String x;
readBinary(x, buf);
set(x);
}
void SettingLogsLevel::write(WriteBuffer & buf) const
{
writeBinary(toString(), buf);
}
}
View File

@ -3,8 +3,11 @@
#include <Poco/Timespan.h>
#include <DataStreams/SizeLimits.h>
#include <Formats/FormatSettings.h>
#include <Compression/CompressionInfo.h>
#include <common/StringRef.h>
#include <Common/SettingsChanges.h>
#include <Core/Types.h>
#include <ext/singleton.h>
#include <unordered_map>
namespace DB
@ -22,24 +25,24 @@ class WriteBuffer;
* and the remote server will use its default value.
*/
template <typename IntType>
struct SettingInt
template <typename Type>
struct SettingNumber
{
IntType value;
Type value;
bool changed = false;
SettingInt(IntType x = 0) : value(x) {}
SettingNumber(Type x = 0) : value(x) {}
operator IntType() const { return value; }
SettingInt & operator= (IntType x) { set(x); return *this; }
operator Type() const { return value; }
SettingNumber & operator= (Type x) { set(x); return *this; }
/// Serialize to a text string.
String toString() const;
/// Serialize to binary stream suitable for transfer over network.
void write(WriteBuffer & buf) const;
/// Converts to a field.
Field toField() const;
void set(IntType x);
void set(Type x);
/// Read from SQL literal.
void set(const Field & x);
@ -47,13 +50,17 @@ struct SettingInt
/// Read from text string.
void set(const String & x);
/// Serialize to binary stream suitable for transfer over network.
void serialize(WriteBuffer & buf) const;
/// Read from binary stream.
void set(ReadBuffer & buf);
void deserialize(ReadBuffer & buf);
};
using SettingUInt64 = SettingInt<UInt64>;
using SettingInt64 = SettingInt<Int64>;
using SettingBool = SettingUInt64;
using SettingUInt64 = SettingNumber<UInt64>;
using SettingInt64 = SettingNumber<Int64>;
using SettingFloat = SettingNumber<float>;
using SettingBool = SettingNumber<bool>;
/** Unlike SettingUInt64, supports the value of 'auto' - the number of processor cores without taking into account SMT.
@ -72,248 +79,53 @@ struct SettingMaxThreads
SettingMaxThreads & operator= (UInt64 x) { set(x); return *this; }
String toString() const;
Field toField() const;
void set(UInt64 x);
void set(const Field & x);
void set(const String & x);
void set(ReadBuffer & buf);
void write(WriteBuffer & buf) const;
void serialize(WriteBuffer & buf) const;
void deserialize(ReadBuffer & buf);
void setAuto();
UInt64 getAutoValue() const;
/// Executed once for all time. Executed from one thread.
UInt64 getAutoValueImpl() const;
};
struct SettingSeconds
enum class SettingTimespanIO { MILLISECOND, SECOND };
template <SettingTimespanIO io_unit>
struct SettingTimespan
{
Poco::Timespan value;
bool changed = false;
SettingSeconds(UInt64 seconds = 0) : value(seconds, 0) {}
SettingTimespan(UInt64 x = 0) : value(x * microseconds_per_io_unit) {}
operator Poco::Timespan() const { return value; }
SettingSeconds & operator= (const Poco::Timespan & x) { set(x); return *this; }
SettingTimespan & operator= (const Poco::Timespan & x) { set(x); return *this; }
Poco::Timespan::TimeDiff totalSeconds() const { return value.totalSeconds(); }
String toString() const;
void set(const Poco::Timespan & x);
void set(UInt64 x);
void set(const Field & x);
void set(const String & x);
void set(ReadBuffer & buf);
void write(WriteBuffer & buf) const;
};
struct SettingMilliseconds
{
Poco::Timespan value;
bool changed = false;
SettingMilliseconds(UInt64 milliseconds = 0) : value(milliseconds * 1000) {}
operator Poco::Timespan() const { return value; }
SettingMilliseconds & operator= (const Poco::Timespan & x) { set(x); return *this; }
Poco::Timespan::TimeDiff totalMilliseconds() const { return value.totalMilliseconds(); }
String toString() const;
Field toField() const;
void set(const Poco::Timespan & x);
void set(UInt64 x);
void set(const Field & x);
void set(const String & x);
void set(ReadBuffer & buf);
void write(WriteBuffer & buf) const;
void serialize(WriteBuffer & buf) const;
void deserialize(ReadBuffer & buf);
static constexpr UInt64 microseconds_per_io_unit = (io_unit == SettingTimespanIO::MILLISECOND) ? 1000 : 1000000;
};
struct SettingFloat
{
float value;
bool changed = false;
SettingFloat(float x = 0) : value(x) {}
operator float() const { return value; }
SettingFloat & operator= (float x) { set(x); return *this; }
String toString() const;
void set(float x);
void set(const Field & x);
void set(const String & x);
void set(ReadBuffer & buf);
void write(WriteBuffer & buf) const;
};
/// TODO: X macro
enum class LoadBalancing
{
/// among replicas with a minimum number of errors selected randomly
RANDOM = 0,
/// a replica is selected among the replicas with the minimum number of errors
/// with the minimum number of distinguished characters in the replica name and local hostname
NEAREST_HOSTNAME,
/// replicas are walked through strictly in order; the number of errors does not matter
IN_ORDER,
/// if the first replica has a higher number of errors,
/// pick a random one from replicas with minimum number of errors
FIRST_OR_RANDOM,
};
struct SettingLoadBalancing
{
LoadBalancing value;
bool changed = false;
SettingLoadBalancing(LoadBalancing x) : value(x) {}
operator LoadBalancing() const { return value; }
SettingLoadBalancing & operator= (LoadBalancing x) { set(x); return *this; }
static LoadBalancing getLoadBalancing(const String & s);
String toString() const;
void set(LoadBalancing x);
void set(const Field & x);
void set(const String & x);
void set(ReadBuffer & buf);
void write(WriteBuffer & buf) const;
};
enum class JoinStrictness
{
Unspecified = 0, /// Query JOIN without strictness will throw Exception.
ALL, /// Query JOIN without strictness -> ALL JOIN ...
ANY, /// Query JOIN without strictness -> ANY JOIN ...
};
struct SettingJoinStrictness
{
JoinStrictness value;
bool changed = false;
SettingJoinStrictness(JoinStrictness x) : value(x) {}
operator JoinStrictness() const { return value; }
SettingJoinStrictness & operator= (JoinStrictness x) { set(x); return *this; }
static JoinStrictness getJoinStrictness(const String & s);
String toString() const;
void set(JoinStrictness x);
void set(const Field & x);
void set(const String & x);
void set(ReadBuffer & buf);
void write(WriteBuffer & buf) const;
};
/// Which rows should be included in TOTALS.
enum class TotalsMode
{
BEFORE_HAVING = 0, /// Count HAVING for all read rows;
/// including those not in max_rows_to_group_by
/// and those that have not passed HAVING after grouping.
AFTER_HAVING_INCLUSIVE = 1, /// Count on all rows except those that have not passed HAVING;
/// that is, to include in TOTALS all the rows that did not pass max_rows_to_group_by.
AFTER_HAVING_EXCLUSIVE = 2, /// Include only the rows that passed both max_rows_to_group_by and HAVING.
AFTER_HAVING_AUTO = 3, /// Automatically select between INCLUSIVE and EXCLUSIVE,
};
struct SettingTotalsMode
{
TotalsMode value;
bool changed = false;
SettingTotalsMode(TotalsMode x) : value(x) {}
operator TotalsMode() const { return value; }
SettingTotalsMode & operator= (TotalsMode x) { set(x); return *this; }
static TotalsMode getTotalsMode(const String & s);
String toString() const;
void set(TotalsMode x);
void set(const Field & x);
void set(const String & x);
void set(ReadBuffer & buf);
void write(WriteBuffer & buf) const;
};
template <bool enable_mode_any>
struct SettingOverflowMode
{
OverflowMode value;
bool changed = false;
SettingOverflowMode(OverflowMode x = OverflowMode::THROW) : value(x) {}
operator OverflowMode() const { return value; }
SettingOverflowMode & operator= (OverflowMode x) { set(x); return *this; }
static OverflowMode getOverflowModeForGroupBy(const String & s);
static OverflowMode getOverflowMode(const String & s);
String toString() const;
void set(OverflowMode x);
void set(const Field & x);
void set(const String & x);
void set(ReadBuffer & buf);
void write(WriteBuffer & buf) const;
};
/// The setting for executing distributed subqueries inside IN or JOIN sections.
enum class DistributedProductMode
{
DENY = 0, /// Disable
LOCAL, /// Convert to local query
GLOBAL, /// Convert to global query
ALLOW /// Enable
};
struct SettingDistributedProductMode
{
DistributedProductMode value;
bool changed = false;
SettingDistributedProductMode(DistributedProductMode x) : value(x) {}
operator DistributedProductMode() const { return value; }
SettingDistributedProductMode & operator= (DistributedProductMode x) { set(x); return *this; }
static DistributedProductMode getDistributedProductMode(const String & s);
String toString() const;
void set(DistributedProductMode x);
void set(const Field & x);
void set(const String & x);
void set(ReadBuffer & buf);
void write(WriteBuffer & buf) const;
};
using SettingSeconds = SettingTimespan<SettingTimespanIO::SECOND>;
using SettingMilliseconds = SettingTimespan<SettingTimespanIO::MILLISECOND>;
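/// Worked example of the io_unit arithmetic above (a sketch; `timeout` and `delay`
/// are illustrative names):
///
///     SettingSeconds timeout = 5;       // stores 5 * 1'000'000 = 5'000'000 microseconds
///     SettingMilliseconds delay = 250;  // stores 250 * 1'000 = 250'000 microseconds
///     timeout.totalSeconds();           // 5, since Poco::Timespan counts microseconds internally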
struct SettingString
@ -327,12 +139,13 @@ struct SettingString
SettingString & operator= (const String & x) { set(x); return *this; }
String toString() const;
Field toField() const;
void set(const String & x);
void set(const Field & x);
void set(ReadBuffer & buf);
void write(WriteBuffer & buf) const;
void serialize(WriteBuffer & buf) const;
void deserialize(ReadBuffer & buf);
};
@ -348,40 +161,102 @@ public:
SettingChar & operator= (char x) { set(x); return *this; }
String toString() const;
Field toField() const;
void set(char x);
void set(const String & x);
void set(const Field & x);
void set(ReadBuffer & buf);
void write(WriteBuffer & buf) const;
void serialize(WriteBuffer & buf) const;
void deserialize(ReadBuffer & buf);
};
struct SettingDateTimeInputFormat
/// Template class to define enum-based settings.
template <typename EnumType, typename Tag = void>
struct SettingEnum
{
using Value = FormatSettings::DateTimeInputFormat;
Value value;
EnumType value;
bool changed = false;
SettingDateTimeInputFormat(Value x) : value(x) {}
SettingEnum(EnumType x) : value(x) {}
operator Value() const { return value; }
SettingDateTimeInputFormat & operator= (Value x) { set(x); return *this; }
static Value getValue(const String & s);
operator EnumType() const { return value; }
SettingEnum & operator= (EnumType x) { set(x); return *this; }
String toString() const;
Field toField() const { return toString(); }
void set(Value x);
void set(const Field & x);
void set(EnumType x) { value = x; changed = true; }
void set(const Field & x) { set(safeGet<const String &>(x)); }
void set(const String & x);
void set(ReadBuffer & buf);
void write(WriteBuffer & buf) const;
void serialize(WriteBuffer & buf) const;
void deserialize(ReadBuffer & buf);
};
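/// Usage note (a sketch, not part of this commit): the Tag parameter exists solely to
/// create distinct instantiations for the same EnumType when they must accept different
/// name sets, e.g. hypothetically
///
///     struct MyGroupByTag;
///     using SettingMyOverflowGroupBy = SettingEnum<OverflowMode, MyGroupByTag>;
///
/// toString()/set() for each (EnumType, Tag) pair are generated in the .cpp by
/// IMPLEMENT_SETTING_ENUM_WITH_TAG; SettingOverflowModeGroupBy below is the real example.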
enum class LoadBalancing
{
/// among replicas with a minimum number of errors selected randomly
RANDOM = 0,
/// a replica is selected among the replicas with the minimum number of errors
/// with the minimum number of distinguished characters in the replica name and local hostname
NEAREST_HOSTNAME,
/// replicas are walked through strictly in order; the number of errors does not matter
IN_ORDER,
/// if the first replica has a higher number of errors,
/// pick a random one from replicas with minimum number of errors
FIRST_OR_RANDOM,
};
using SettingLoadBalancing = SettingEnum<LoadBalancing>;
enum class JoinStrictness
{
Unspecified = 0, /// Query JOIN without strictness will throw Exception.
ALL, /// Query JOIN without strictness -> ALL JOIN ...
ANY, /// Query JOIN without strictness -> ANY JOIN ...
};
using SettingJoinStrictness = SettingEnum<JoinStrictness>;
/// Which rows should be included in TOTALS.
enum class TotalsMode
{
BEFORE_HAVING = 0, /// Count HAVING for all read rows;
/// including those not in max_rows_to_group_by
/// and those that have not passed HAVING after grouping.
AFTER_HAVING_INCLUSIVE = 1, /// Count on all rows except those that have not passed HAVING;
/// that is, to include in TOTALS all the rows that did not pass max_rows_to_group_by.
AFTER_HAVING_EXCLUSIVE = 2, /// Include only the rows that passed both max_rows_to_group_by and HAVING.
AFTER_HAVING_AUTO = 3, /// Automatically select between INCLUSIVE and EXCLUSIVE,
};
using SettingTotalsMode = SettingEnum<TotalsMode>;
/// The setting keeps an OverflowMode which cannot be OverflowMode::ANY.
using SettingOverflowMode = SettingEnum<OverflowMode>;
struct SettingOverflowModeGroupByTag;
/// The setting keeps an OverflowMode which can be OverflowMode::ANY.
using SettingOverflowModeGroupBy = SettingEnum<OverflowMode, SettingOverflowModeGroupByTag>;
/// The setting for executing distributed subqueries inside IN or JOIN sections.
enum class DistributedProductMode
{
DENY = 0, /// Disable
LOCAL, /// Convert to local query
GLOBAL, /// Convert to global query
ALLOW /// Enable
};
using SettingDistributedProductMode = SettingEnum<DistributedProductMode>;
using SettingDateTimeInputFormat = SettingEnum<FormatSettings::DateTimeInputFormat>;
enum class LogsLevel
{
none = 0, /// Disable
@ -391,29 +266,392 @@ enum class LogsLevel
debug,
trace,
};
using SettingLogsLevel = SettingEnum<LogsLevel>;
class SettingLogsLevel
{
public:
using Value = LogsLevel;
Value value;
bool changed = false;
SettingLogsLevel(Value x) : value(x) {}
operator Value() const { return value; }
SettingLogsLevel & operator= (Value x) { set(x); return *this; }
static Value getValue(const String & s);
String toString() const;
void set(Value x);
void set(const Field & x);
void set(const String & x);
void set(ReadBuffer & buf);
void write(WriteBuffer & buf) const;
};
namespace details
{
struct SettingsCollectionUtils
{
static void serializeName(const StringRef & name, WriteBuffer & buf);
static String deserializeName(ReadBuffer & buf);
static void throwNameNotFound(const StringRef & name);
};
}
/** Template class to define collections of settings.
* Example of usage:
*
* mysettings.h:
* struct MySettings : public SettingsCollection<MySettings>
* {
* # define APPLY_FOR_MYSETTINGS(M) \
* M(SettingUInt64, a, 100, "Description of a") \
* M(SettingFloat, f, 3.11, "Description of f") \
* M(SettingString, s, "default", "Description of s")
*
* DECLARE_SETTINGS_COLLECTION(APPLY_FOR_MYSETTINGS)
* };
*
* mysettings.cpp:
* IMPLEMENT_SETTINGS_COLLECTION(MySettings, APPLY_FOR_MYSETTINGS)
*/
template <class Derived>
class SettingsCollection
{
private:
Derived & castToDerived() { return *static_cast<Derived *>(this); }
const Derived & castToDerived() const { return *static_cast<const Derived *>(this); }
using GetStringFunction = String (*)(const Derived &);
using GetFieldFunction = Field (*)(const Derived &);
using SetStringFunction = void (*)(Derived &, const String &);
using SetFieldFunction = void (*)(Derived &, const Field &);
using SerializeFunction = void (*)(const Derived &, WriteBuffer & buf);
using DeserializeFunction = void (*)(Derived &, ReadBuffer & buf);
using CastValueWithoutApplyingFunction = Field (*)(const Field &);
struct MemberInfo
{
size_t offset_of_changed;
StringRef name;
StringRef description;
GetStringFunction get_string;
GetFieldFunction get_field;
SetStringFunction set_string;
SetFieldFunction set_field;
SerializeFunction serialize;
DeserializeFunction deserialize;
CastValueWithoutApplyingFunction cast_value_without_applying;
bool isChanged(const Derived & collection) const { return *reinterpret_cast<const bool*>(reinterpret_cast<const UInt8*>(&collection) + offset_of_changed); }
};
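/// How offset_of_changed works (sketch): IMPLEMENT_SETTINGS_COLLECTION_ADD_MEMBER_INFO_HELPER_
/// at the bottom of this file stores offsetof(Derived, NAME.changed), so isChanged() can
/// locate a member's `changed` flag with plain pointer arithmetic instead of a per-member
/// virtual or function pointer:
///
///     // for a hypothetical member `SettingUInt64 a;` of Derived:
///     // offset_of_changed == offsetof(Derived, a.changed)
///     const auto * base = reinterpret_cast<const UInt8 *>(&collection);
///     bool a_changed = *reinterpret_cast<const bool *>(base + offset_of_changed);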
class MemberInfos
{
public:
static const MemberInfos & instance()
{
static const MemberInfos single_instance;
return single_instance;
}
size_t size() const { return infos.size(); }
const MemberInfo & operator[](size_t index) const { return infos[index]; }
const MemberInfo * begin() const { return infos.data(); }
const MemberInfo * end() const { return infos.data() + infos.size(); }
size_t findIndex(const StringRef & name) const
{
auto it = by_name_map.find(name);
if (it == by_name_map.end())
return static_cast<size_t>(-1); // npos
return it->second;
}
size_t findIndexStrict(const StringRef & name) const
{
auto it = by_name_map.find(name);
if (it == by_name_map.end())
details::SettingsCollectionUtils::throwNameNotFound(name);
return it->second;
}
const MemberInfo * find(const StringRef & name) const
{
auto it = by_name_map.find(name);
if (it == by_name_map.end())
return end();
else
return &infos[it->second];
}
const MemberInfo * findStrict(const StringRef & name) const { return &infos[findIndexStrict(name)]; }
private:
MemberInfos();
void add(MemberInfo && member)
{
size_t index = infos.size();
infos.emplace_back(member);
by_name_map.emplace(infos.back().name, index);
}
std::vector<MemberInfo> infos;
std::unordered_map<StringRef, size_t> by_name_map;
};
static const MemberInfos & members() { return MemberInfos::instance(); }
public:
class const_iterator;
/// Provides read-only access to a setting.
class const_reference
{
public:
const_reference(const Derived & collection_, const MemberInfo & member_) : collection(&collection_), member(&member_) {}
const_reference(const const_reference & src) = default;
const StringRef & getName() const { return member->name; }
const StringRef & getDescription() const { return member->description; }
bool isChanged() const { return member->isChanged(*collection); }
Field getValue() const { return member->get_field(*collection); }
String getValueAsString() const { return member->get_string(*collection); }
protected:
friend class SettingsCollection<Derived>::const_iterator;
const_reference() : collection(nullptr), member(nullptr) {}
const_reference & operator=(const const_reference &) = default;
const Derived * collection;
const MemberInfo * member;
};
/// Provides access to a setting.
class reference : public const_reference
{
public:
reference(Derived & collection_, const MemberInfo & member_) : const_reference(collection_, member_) {}
reference(const const_reference & src) : const_reference(src) {}
void setValue(const Field & value) { this->member->set_field(*const_cast<Derived *>(this->collection), value); }
void setValue(const String & value) { this->member->set_string(*const_cast<Derived *>(this->collection), value); }
};
/// Iterator for iterating through all the settings.
class const_iterator
{
public:
const_iterator(const Derived & collection_, const MemberInfo * member_) : ref(const_cast<Derived &>(collection_), *member_) {}
const_iterator() = default;
const_iterator(const const_iterator & src) = default;
const_iterator & operator =(const const_iterator & src) = default;
const const_reference & operator *() const { return ref; }
const const_reference * operator ->() const { return &ref; }
const_iterator & operator ++() { ++ref.member; return *this; }
const_iterator operator ++(int) { const_iterator tmp = *this; ++*this; return tmp; }
bool operator ==(const const_iterator & rhs) const { return ref.member == rhs.ref.member && ref.collection == rhs.ref.collection; }
bool operator !=(const const_iterator & rhs) const { return !(*this == rhs); }
protected:
mutable reference ref;
};
class iterator : public const_iterator
{
public:
iterator(Derived & collection_, const MemberInfo * member_) : const_iterator(collection_, member_) {}
iterator() = default;
iterator(const const_iterator & src) : const_iterator(src) {}
iterator & operator =(const const_iterator & src) { const_iterator::operator =(src); return *this; }
reference & operator *() const { return this->ref; }
reference * operator ->() const { return &this->ref; }
iterator & operator ++() { const_iterator::operator ++(); return *this; }
iterator operator ++(int) { iterator tmp = *this; ++*this; return tmp; }
};
/// Returns the number of settings.
static size_t size() { return members().size(); }
/// Returns name of a setting by its index (0..size()-1).
static StringRef getName(size_t index) { return members()[index].name; }
/// Returns description of a setting.
static StringRef getDescription(size_t index) { return members()[index].description; }
static StringRef getDescription(const String & name) { return members().findStrict(name)->description; }
/// Searches a setting by its name; returns `npos` if not found.
static size_t findIndex(const String & name) { return members().findIndex(name); }
static constexpr size_t npos = static_cast<size_t>(-1);
/// Searches a setting by its name; throws an exception if not found.
static size_t findIndexStrict(const String & name) { return members().findIndexStrict(name); }
/// Casts a value to a type according to a specified setting without actually changing the settings.
/// E.g. for SettingInt64 it casts Field to Field::Types::Int64.
static Field castValueWithoutApplying(size_t index, const Field & value) { return members()[index].cast_value_without_applying(value); }
static Field castValueWithoutApplying(const String & name, const Field & value) { return members().findStrict(name)->cast_value_without_applying(value); }
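/// Sketch of the semantics (see NAME##_castValueWithoutApplying below): a temporary TYPE
/// is constructed with the default, set(value) is applied to the temporary, and its
/// toField() is returned, so the collection itself is never touched. E.g., for the
/// hypothetical `a` (a SettingUInt64) from the class comment above:
///
///     Field normalized = MySettings::castValueWithoutApplying("a", value);
///     // `normalized` now holds `value` converted the way SettingUInt64 stores it,
///     // i.e. as Field::Types::UInt64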
iterator begin() { return iterator(castToDerived(), members().begin()); }
const_iterator begin() const { return const_iterator(castToDerived(), members().begin()); }
iterator end() { return iterator(castToDerived(), members().end()); }
const_iterator end() const { return const_iterator(castToDerived(), members().end()); }
/// Returns a proxy object for accessing a setting. Throws an exception if there is no setting with such a name.
reference operator[](size_t index) { return reference(castToDerived(), members()[index]); }
reference operator[](const String & name) { return reference(castToDerived(), *(members().findStrict(name))); }
const_reference operator[](size_t index) const { return const_reference(castToDerived(), members()[index]); }
const_reference operator[](const String & name) const { return const_reference(castToDerived(), *(members().findStrict(name))); }
/// Searches a setting by its name; returns end() if not found.
iterator find(const String & name) { return iterator(castToDerived(), members().find(name)); }
const_iterator find(const String & name) const { return const_iterator(castToDerived(), members().find(name)); }
/// Searches a setting by its name; throws an exception if not found.
iterator findStrict(const String & name) { return iterator(castToDerived(), members().findStrict(name)); }
const_iterator findStrict(const String & name) const { return const_iterator(castToDerived(), members().findStrict(name)); }
/// Sets a setting's value.
void set(size_t index, const Field & value) { (*this)[index].setValue(value); }
void set(const String & name, const Field & value) { (*this)[name].setValue(value); }
/// Sets a setting's value. Reads the value in text form from a string (for example, from a configuration file or from a URL parameter).
void set(size_t index, const String & value) { (*this)[index].setValue(value); }
void set(const String & name, const String & value) { (*this)[name].setValue(value); }
/// Returns value of a setting.
Field get(size_t index) const { return (*this)[index].getValue(); }
Field get(const String & name) const { return (*this)[name].getValue(); }
/// Returns value of a setting converted to string.
String getAsString(size_t index) const { return (*this)[index].getValueAsString(); }
String getAsString(const String & name) const { return (*this)[name].getValueAsString(); }
/// Returns value of a setting; returns false if there is no setting with the specified name.
bool tryGet(const String & name, Field & value) const
{
auto it = find(name);
if (it == end())
return false;
value = it->getValue();
return true;
}
/// Returns value of a setting converted to string; returns false if there is no setting with the specified name.
bool tryGet(const String & name, String & value) const
{
auto it = find(name);
if (it == end())
return false;
value = it->getValueAsString();
return true;
}
/// Compares two collections of settings.
bool operator ==(const Derived & rhs) const
{
for (const auto & member : members())
{
bool left_changed = member.isChanged(castToDerived());
bool right_changed = member.isChanged(rhs);
if (left_changed || right_changed)
{
if (left_changed != right_changed)
return false;
if (member.get_field(castToDerived()) != member.get_field(rhs))
return false;
}
}
return true;
}
bool operator !=(const Derived & rhs) const
{
return !(*this == rhs);
}
/// Gathers all changed values (e.g. for applying them later to another collection of settings).
SettingsChanges changes() const
{
SettingsChanges found_changes;
for (const auto & member : members())
{
if (member.isChanged(castToDerived()))
found_changes.emplace_back(member.name.toString(), member.get_field(castToDerived()));
}
return found_changes;
}
/// Applies changes to the settings.
void applyChange(const SettingChange & change)
{
set(change.name, change.value);
}
void applyChanges(const SettingsChanges & changes)
{
for (const SettingChange & change : changes)
applyChange(change);
}
void copyChangesFrom(const Derived & src)
{
for (const auto & member : members())
if (member.isChanged(src))
member.set_field(castToDerived(), member.get_field(src));
}
void copyChangesTo(Derived & dest) const
{
dest.copyChangesFrom(castToDerived());
}
/// Writes the settings to a buffer (e.g. to be sent to a remote server).
/// Only changed settings are written. They are written as a list of contiguous name-value pairs,
/// finished with an empty name.
void serialize(WriteBuffer & buf) const
{
for (const auto & member : members())
{
if (member.isChanged(castToDerived()))
{
details::SettingsCollectionUtils::serializeName(member.name, buf);
member.serialize(castToDerived(), buf);
}
}
details::SettingsCollectionUtils::serializeName(StringRef{} /* empty string is a marker of the end of settings */, buf);
}
/// Reads the settings from buffer.
void deserialize(ReadBuffer & buf)
{
const auto & the_members = members();
while (true)
{
String name = details::SettingsCollectionUtils::deserializeName(buf);
if (name.empty() /* empty string is a marker of the end of settings */)
break;
the_members.findStrict(name)->deserialize(castToDerived(), buf);
}
}
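/// Wire-format sketch for serialize()/deserialize() above. Assuming a collection where
/// only the hypothetical setting `a` changed, the buffer contains roughly
///
///     <name "a"> <binary value of a> <empty name>   // the empty name is the end marker
///
/// where names are written via SettingsCollectionUtils::serializeName (writeBinary of a
/// string) and each value via the member's own serialize(); deserialize() loops reading
/// names until it meets the empty one and dispatches to findStrict(name)->deserialize.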
};
#define DECLARE_SETTINGS_COLLECTION(LIST_OF_SETTINGS_MACRO) \
LIST_OF_SETTINGS_MACRO(DECLARE_SETTINGS_COLLECTION_DECLARE_VARIABLES_HELPER_)
#define IMPLEMENT_SETTINGS_COLLECTION(DERIVED_CLASS_NAME, LIST_OF_SETTINGS_MACRO) \
template<> \
SettingsCollection<DERIVED_CLASS_NAME>::MemberInfos::MemberInfos() \
{ \
using Derived = DERIVED_CLASS_NAME; \
struct Functions \
{ \
LIST_OF_SETTINGS_MACRO(IMPLEMENT_SETTINGS_COLLECTION_DEFINE_FUNCTIONS_HELPER_) \
}; \
LIST_OF_SETTINGS_MACRO(IMPLEMENT_SETTINGS_COLLECTION_ADD_MEMBER_INFO_HELPER_) \
}
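/// What IMPLEMENT_SETTINGS_COLLECTION generates per entry, sketched for the hypothetical
/// M(SettingUInt64, a, 100, "Description of a") from the class comment: the local
/// `struct Functions` gains static a_getString / a_getField / a_setString / a_setField /
/// a_serialize / a_deserialize / a_castValueWithoutApplying wrappers (helper below), and
/// the MemberInfos constructor add()s one MemberInfo wiring those function pointers
/// together with offsetof(Derived, a.changed) and the "a" / description string literals.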
#define DECLARE_SETTINGS_COLLECTION_DECLARE_VARIABLES_HELPER_(TYPE, NAME, DEFAULT, DESCRIPTION) \
TYPE NAME {DEFAULT};
#define IMPLEMENT_SETTINGS_COLLECTION_DEFINE_FUNCTIONS_HELPER_(TYPE, NAME, DEFAULT, DESCRIPTION) \
static String NAME##_getString(const Derived & collection) { return collection.NAME.toString(); } \
static Field NAME##_getField(const Derived & collection) { return collection.NAME.toField(); } \
static void NAME##_setString(Derived & collection, const String & value) { collection.NAME.set(value); } \
static void NAME##_setField(Derived & collection, const Field & value) { collection.NAME.set(value); } \
static void NAME##_serialize(const Derived & collection, WriteBuffer & buf) { collection.NAME.serialize(buf); } \
static void NAME##_deserialize(Derived & collection, ReadBuffer & buf) { collection.NAME.deserialize(buf); } \
static Field NAME##_castValueWithoutApplying(const Field & value) { TYPE temp{DEFAULT}; temp.set(value); return temp.toField(); }
#define IMPLEMENT_SETTINGS_COLLECTION_ADD_MEMBER_INFO_HELPER_(TYPE, NAME, DEFAULT, DESCRIPTION) \
static_assert(std::is_same_v<decltype(std::declval<Derived>().NAME.changed), bool>); \
add({offsetof(Derived, NAME.changed), \
StringRef(#NAME, strlen(#NAME)), StringRef(#DESCRIPTION, strlen(#DESCRIPTION)), \
&Functions::NAME##_getString, &Functions::NAME##_getField, \
&Functions::NAME##_setString, &Functions::NAME##_setField, \
&Functions::NAME##_serialize, &Functions::NAME##_deserialize, \
&Functions::NAME##_castValueWithoutApplying });
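/** End-to-end sketch of the machinery above, reusing the hypothetical MySettings from
  * the SettingsCollection class comment (illustrative only):
  *
  *     MySettings settings;
  *     settings.set("a", 42);                        // findStrict + set_field
  *     for (const auto & s : settings)
  *         ... s.getName() / s.getValueAsString() ...
  *     SettingsChanges changes = settings.changes(); // reports only `a`, the changed member
  *     settings.serialize(out);                      // "a", its value, then an empty name
  *
  * where `out` is some WriteBuffer.
  */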
}
Some files were not shown because too many files have changed in this diff