Mirror of https://github.com/ClickHouse/ClickHouse.git, synced 2024-11-21 23:21:59 +00:00

Commit 84faa9af26: Merge branch 'master' into shared-context-lifetime

CHANGELOG.md (106 lines changed)
@@ -1,11 +1,68 @@

## ClickHouse release v20.3

### ClickHouse release v20.3.7.46, 2020-04-17

#### Bug Fix

* Fix `Logical error: CROSS JOIN has expressions` error for queries with a mix of comma and named joins. [#10311](https://github.com/ClickHouse/ClickHouse/pull/10311) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix queries with `max_bytes_before_external_group_by`. [#10302](https://github.com/ClickHouse/ClickHouse/pull/10302) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix move-to-prewhere optimization in presence of `arrayJoin` functions (in certain cases). This fixes [#10092](https://github.com/ClickHouse/ClickHouse/issues/10092). [#10195](https://github.com/ClickHouse/ClickHouse/pull/10195) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add the ability to relax the restriction on non-deterministic functions usage in mutations with the `allow_nondeterministic_mutations` setting. [#10186](https://github.com/ClickHouse/ClickHouse/pull/10186) ([filimonov](https://github.com/filimonov)).
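
A minimal sketch of how the new setting is used; the table and column names here are hypothetical:

``` sql
-- By default, mutations on replicated tables reject non-deterministic
-- functions such as now() or rand(); the setting relaxes that check.
SET allow_nondeterministic_mutations = 1;
ALTER TABLE hits_replicated UPDATE updated_at = now() WHERE 1;
```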

### ClickHouse release v20.3.6.40, 2020-04-16

#### New Feature

* Added function `isConstant`. This function checks whether its argument is a constant expression and returns 1 or 0. It is intended for development, debugging and demonstration purposes. [#10198](https://github.com/ClickHouse/ClickHouse/pull/10198) ([alexey-milovidov](https://github.com/alexey-milovidov)).
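
A quick illustration of the new function; per the description above, a constant expression yields 1 and a column reference yields 0:

``` sql
SELECT
    isConstant(1 + 2) AS on_literal,  -- 1: folds to a constant
    isConstant(number) AS on_column   -- 0: depends on the row
FROM system.numbers
LIMIT 1;
```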

#### Bug Fix

* Fix error `Pipeline stuck` with `max_rows_to_group_by` and `group_by_overflow_mode = 'break'`. [#10279](https://github.com/ClickHouse/ClickHouse/pull/10279) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix rare possible exception `Cannot drain connections: cancel first`. [#10239](https://github.com/ClickHouse/ClickHouse/pull/10239) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed bug where ClickHouse would throw the `Unknown function lambda.` error message when a user tries to run `ALTER UPDATE/DELETE` on tables with `ENGINE = Replicated*`. The check for non-deterministic functions now handles lambda expressions correctly. [#10237](https://github.com/ClickHouse/ClickHouse/pull/10237) ([Alexander Kazakov](https://github.com/Akazz)).
* Fixed the `generateRandom` function for the `Date` type. This fixes [#9973](https://github.com/ClickHouse/ClickHouse/issues/9973). Fix an edge case when dates with year 2106 are inserted into MergeTree tables with old-style partitioning but the partitions are named with year 1970. [#10218](https://github.com/ClickHouse/ClickHouse/pull/10218) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Convert types if the table definition of a View does not correspond to the SELECT query. This fixes [#10180](https://github.com/ClickHouse/ClickHouse/issues/10180) and [#10022](https://github.com/ClickHouse/ClickHouse/issues/10022). [#10217](https://github.com/ClickHouse/ClickHouse/pull/10217) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `parseDateTimeBestEffort` for RFC-2822 strings when the day of week is Tuesday or Thursday (see the example after this list). This fixes [#10082](https://github.com/ClickHouse/ClickHouse/issues/10082). [#10214](https://github.com/ClickHouse/ClickHouse/pull/10214) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix column names of constants inside `JOIN` that may clash with names of constants outside of `JOIN`. [#10207](https://github.com/ClickHouse/ClickHouse/pull/10207) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix possible infinite query execution when the query should actually stop at `LIMIT` while reading from an infinite source like `system.numbers` or `system.zeros`. [#10206](https://github.com/ClickHouse/ClickHouse/pull/10206) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix using the current database for access checking when the database isn't specified. [#10192](https://github.com/ClickHouse/ClickHouse/pull/10192) ([Vitaly Baranov](https://github.com/vitlibar)).
* Convert blocks if the structure does not match on `INSERT` into a `Distributed` table. [#10135](https://github.com/ClickHouse/ClickHouse/pull/10135) ([Azat Khuzhin](https://github.com/azat)).
* Fix possible incorrect result for extremes in the processors pipeline. [#10131](https://github.com/ClickHouse/ClickHouse/pull/10131) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix some kinds of alters with compact parts. [#10130](https://github.com/ClickHouse/ClickHouse/pull/10130) ([Anton Popov](https://github.com/CurtizJ)).
* Fix incorrect `index_granularity_bytes` check while creating a new replica. Fixes [#10098](https://github.com/ClickHouse/ClickHouse/issues/10098). [#10121](https://github.com/ClickHouse/ClickHouse/pull/10121) ([alesapin](https://github.com/alesapin)).
* Fix `SIGSEGV` on `INSERT` into a `Distributed` table when its structure differs from the underlying tables. [#10105](https://github.com/ClickHouse/ClickHouse/pull/10105) ([Azat Khuzhin](https://github.com/azat)).
* Fix possible loss of rows for queries with `JOIN` and `UNION ALL`. Fixes [#9826](https://github.com/ClickHouse/ClickHouse/issues/9826), [#10113](https://github.com/ClickHouse/ClickHouse/issues/10113). [#10099](https://github.com/ClickHouse/ClickHouse/pull/10099) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed replicated tables startup when updating from an old ClickHouse version where the `/table/replicas/replica_name/metadata` node doesn't exist. Fixes [#10037](https://github.com/ClickHouse/ClickHouse/issues/10037). [#10095](https://github.com/ClickHouse/ClickHouse/pull/10095) ([alesapin](https://github.com/alesapin)).
* Add argument checks and support identifier arguments for the MySQL database engine. [#10077](https://github.com/ClickHouse/ClickHouse/pull/10077) ([Winter Zhang](https://github.com/zhang2014)).
* Fix bug in the ClickHouse dictionary source when reading from a localhost ClickHouse server. The bug could lead to memory corruption if the types in the dictionary and the source are not compatible. [#10071](https://github.com/ClickHouse/ClickHouse/pull/10071) ([alesapin](https://github.com/alesapin)).
* Fix bug in the `CHECK TABLE` query when the table contains skip indices. [#10068](https://github.com/ClickHouse/ClickHouse/pull/10068) ([alesapin](https://github.com/alesapin)).
* Fix error `Cannot clone block with columns because block has 0 columns ... While executing GroupingAggregatedTransform`. It happened when the `distributed_aggregation_memory_efficient` setting was enabled and a distributed query read aggregated data with different levels from different shards (mixed single-level and two-level aggregation). [#10063](https://github.com/ClickHouse/ClickHouse/pull/10063) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a segmentation fault that could occur in `GROUP BY` over string keys containing trailing zero bytes ([#8636](https://github.com/ClickHouse/ClickHouse/issues/8636), [#8925](https://github.com/ClickHouse/ClickHouse/issues/8925)). [#10025](https://github.com/ClickHouse/ClickHouse/pull/10025) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Fix parallel distributed `INSERT SELECT` for a remote table. This PR fixes the solution provided in [#9759](https://github.com/ClickHouse/ClickHouse/pull/9759). [#9999](https://github.com/ClickHouse/ClickHouse/pull/9999) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix the number of threads used for remote query execution (a performance regression since 20.3). This happened when a query from a `Distributed` table was executed simultaneously on local and remote shards. Fixes [#9965](https://github.com/ClickHouse/ClickHouse/issues/9965). [#9971](https://github.com/ClickHouse/ClickHouse/pull/9971) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a bug in which the necessary tables weren't retrieved at one of the processing stages of queries to some databases. Fixes [#9699](https://github.com/ClickHouse/ClickHouse/issues/9699). [#9949](https://github.com/ClickHouse/ClickHouse/pull/9949) ([achulkov2](https://github.com/achulkov2)).
* Fix `Not found column in block` error when `JOIN` appears with `TOTALS`. Fixes [#9839](https://github.com/ClickHouse/ClickHouse/issues/9839). [#9939](https://github.com/ClickHouse/ClickHouse/pull/9939) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix a bug with `ON CLUSTER` DDL queries freezing on server startup. [#9927](https://github.com/ClickHouse/ClickHouse/pull/9927) ([Gagan Arneja](https://github.com/garneja)).
* Fix parsing of multiple hosts set in the `CREATE USER` command, e.g. `CREATE USER user6 HOST NAME REGEXP 'lo.?*host', NAME REGEXP 'lo*host'`. [#9924](https://github.com/ClickHouse/ClickHouse/pull/9924) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix `TRUNCATE` for the Join table engine ([#9917](https://github.com/ClickHouse/ClickHouse/issues/9917)). [#9920](https://github.com/ClickHouse/ClickHouse/pull/9920) ([Amos Bird](https://github.com/amosbird)).
* Fix `scalar doesn't exist` error in `ALTER` queries ([#9878](https://github.com/ClickHouse/ClickHouse/issues/9878)). [#9904](https://github.com/ClickHouse/ClickHouse/pull/9904) ([Amos Bird](https://github.com/amosbird)).
* Fix race condition between drop and optimize in `ReplicatedMergeTree`. [#9901](https://github.com/ClickHouse/ClickHouse/pull/9901) ([alesapin](https://github.com/alesapin)).
* Fix error with qualified names in `distributed_product_mode = 'local'`. Fixes [#4756](https://github.com/ClickHouse/ClickHouse/issues/4756). [#9891](https://github.com/ClickHouse/ClickHouse/pull/9891) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix calculating grants for introspection functions from the `allow_introspection_functions` setting. [#9840](https://github.com/ClickHouse/ClickHouse/pull/9840) ([Vitaly Baranov](https://github.com/vitlibar)).
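
The `parseDateTimeBestEffort` fix above can be exercised with an RFC-2822 string whose day of week is Tuesday (a sketch; 14 Apr 2020 is a Tuesday):

``` sql
-- Previously this failed for Tue/Thu day-of-week abbreviations.
SELECT parseDateTimeBestEffort('Tue, 14 Apr 2020 10:37:00 +0200');
```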

#### Build/Testing/Packaging Improvement

* Fix integration test `test_settings_constraints`. [#9962](https://github.com/ClickHouse/ClickHouse/pull/9962) ([Vitaly Baranov](https://github.com/vitlibar)).
* Removed dependency on `clock_getres`. [#9833](https://github.com/ClickHouse/ClickHouse/pull/9833) ([alexey-milovidov](https://github.com/alexey-milovidov)).

### ClickHouse release v20.3.5.21, 2020-03-27

#### Bug Fix

* Fix `Different expressions with the same alias` error when a query has `PREWHERE` and `WHERE` on a distributed table and `SET distributed_product_mode = 'local'`. [#9871](https://github.com/ClickHouse/ClickHouse/pull/9871) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix excessive memory consumption in mutations for tables with a composite primary key. This fixes [#9850](https://github.com/ClickHouse/ClickHouse/issues/9850). [#9860](https://github.com/ClickHouse/ClickHouse/pull/9860) ([alesapin](https://github.com/alesapin)).
* For `INSERT` queries, a shard now clamps the settings received from the initiator to the shard's constraints instead of throwing an exception. This allows sending `INSERT` queries to a shard with different constraints. This change improves fix [#9447](https://github.com/ClickHouse/ClickHouse/issues/9447). [#9852](https://github.com/ClickHouse/ClickHouse/pull/9852) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix `COMMA to CROSS JOIN rewriter is not enabled or cannot rewrite query` error for subqueries with COMMA JOIN outside of table lists, i.e. in `WHERE` (see the sketch after this list). Fixes [#9782](https://github.com/ClickHouse/ClickHouse/issues/9782). [#9830](https://github.com/ClickHouse/ClickHouse/pull/9830) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix possible exception `Got 0 in totals chunk, expected 1` on the client. It happened for queries with `JOIN` when the right joined table had zero rows. Example: `select * from system.one t1 join system.one t2 on t1.dummy = t2.dummy limit 0 FORMAT TabSeparated;`. Fixes [#9777](https://github.com/ClickHouse/ClickHouse/issues/9777). [#9823](https://github.com/ClickHouse/ClickHouse/pull/9823) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix `SIGSEGV` with `optimize_skip_unused_shards` when the type cannot be converted. [#9804](https://github.com/ClickHouse/ClickHouse/pull/9804) ([Azat Khuzhin](https://github.com/azat)).
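
A hypothetical query of the shape that used to trigger the COMMA-JOIN rewriter error above (the table names are purely illustrative):

``` sql
-- The comma join lives inside a subquery in WHERE,
-- not in the top-level table list.
SELECT count()
FROM events
WHERE user_id IN (SELECT u.id FROM users AS u, sessions AS s WHERE u.id = s.user_id);
```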

@@ -273,6 +330,55 @@

## ClickHouse release v20.1

### ClickHouse release v20.1.10.70, 2020-04-17

#### Bug Fix

* Fix rare possible exception `Cannot drain connections: cancel first`. [#10239](https://github.com/ClickHouse/ClickHouse/pull/10239) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed bug where ClickHouse would throw the `Unknown function lambda.` error message when a user tries to run `ALTER UPDATE/DELETE` on tables with `ENGINE = Replicated*`. The check for non-deterministic functions now handles lambda expressions correctly. [#10237](https://github.com/ClickHouse/ClickHouse/pull/10237) ([Alexander Kazakov](https://github.com/Akazz)).
* Fix `parseDateTimeBestEffort` for RFC-2822 strings when the day of week is Tuesday or Thursday. This fixes [#10082](https://github.com/ClickHouse/ClickHouse/issues/10082). [#10214](https://github.com/ClickHouse/ClickHouse/pull/10214) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix column names of constants inside `JOIN` that may clash with names of constants outside of `JOIN`. [#10207](https://github.com/ClickHouse/ClickHouse/pull/10207) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix possible infinite query execution when the query should actually stop at `LIMIT` while reading from an infinite source like `system.numbers` or `system.zeros`. [#10206](https://github.com/ClickHouse/ClickHouse/pull/10206) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix move-to-prewhere optimization in presence of `arrayJoin` functions (in certain cases). This fixes [#10092](https://github.com/ClickHouse/ClickHouse/issues/10092). [#10195](https://github.com/ClickHouse/ClickHouse/pull/10195) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add the ability to relax the restriction on non-deterministic functions usage in mutations with the `allow_nondeterministic_mutations` setting. [#10186](https://github.com/ClickHouse/ClickHouse/pull/10186) ([filimonov](https://github.com/filimonov)).
* Convert blocks if the structure does not match on `INSERT` into a table with the `Distributed` engine. [#10135](https://github.com/ClickHouse/ClickHouse/pull/10135) ([Azat Khuzhin](https://github.com/azat)).
* Fix `SIGSEGV` on `INSERT` into a `Distributed` table when its structure differs from the underlying tables. [#10105](https://github.com/ClickHouse/ClickHouse/pull/10105) ([Azat Khuzhin](https://github.com/azat)).
* Fix possible loss of rows for queries with `JOIN` and `UNION ALL`. Fixes [#9826](https://github.com/ClickHouse/ClickHouse/issues/9826), [#10113](https://github.com/ClickHouse/ClickHouse/issues/10113). [#10099](https://github.com/ClickHouse/ClickHouse/pull/10099) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Add argument checks and support identifier arguments for the MySQL database engine. [#10077](https://github.com/ClickHouse/ClickHouse/pull/10077) ([Winter Zhang](https://github.com/zhang2014)).
* Fix bug in the ClickHouse dictionary source when reading from a localhost ClickHouse server. The bug could lead to memory corruption if the types in the dictionary and the source are not compatible. [#10071](https://github.com/ClickHouse/ClickHouse/pull/10071) ([alesapin](https://github.com/alesapin)).
* Fix error `Cannot clone block with columns because block has 0 columns ... While executing GroupingAggregatedTransform`. It happened when the `distributed_aggregation_memory_efficient` setting was enabled and a distributed query read aggregated data with different levels from different shards (mixed single-level and two-level aggregation). [#10063](https://github.com/ClickHouse/ClickHouse/pull/10063) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a segmentation fault that could occur in `GROUP BY` over string keys containing trailing zero bytes ([#8636](https://github.com/ClickHouse/ClickHouse/issues/8636), [#8925](https://github.com/ClickHouse/ClickHouse/issues/8925)). [#10025](https://github.com/ClickHouse/ClickHouse/pull/10025) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Fix a bug in which the necessary tables weren't retrieved at one of the processing stages of queries to some databases. Fixes [#9699](https://github.com/ClickHouse/ClickHouse/issues/9699). [#9949](https://github.com/ClickHouse/ClickHouse/pull/9949) ([achulkov2](https://github.com/achulkov2)).
* Fix `Not found column in block` error when `JOIN` appears with `TOTALS`. Fixes [#9839](https://github.com/ClickHouse/ClickHouse/issues/9839). [#9939](https://github.com/ClickHouse/ClickHouse/pull/9939) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix a bug with `ON CLUSTER` DDL queries freezing on server startup. [#9927](https://github.com/ClickHouse/ClickHouse/pull/9927) ([Gagan Arneja](https://github.com/garneja)).
* Fix `TRUNCATE` for the Join table engine ([#9917](https://github.com/ClickHouse/ClickHouse/issues/9917)). [#9920](https://github.com/ClickHouse/ClickHouse/pull/9920) ([Amos Bird](https://github.com/amosbird)).
* Fix `scalar doesn't exist` error in `ALTER` queries ([#9878](https://github.com/ClickHouse/ClickHouse/issues/9878)). [#9904](https://github.com/ClickHouse/ClickHouse/pull/9904) ([Amos Bird](https://github.com/amosbird)).
* Fix race condition between drop and optimize in `ReplicatedMergeTree`. [#9901](https://github.com/ClickHouse/ClickHouse/pull/9901) ([alesapin](https://github.com/alesapin)).
* Fixed `DeleteOnDestroy` logic in `ATTACH PART` which could lead to automatic removal of the attached part; added a few tests. [#9410](https://github.com/ClickHouse/ClickHouse/pull/9410) ([Vladimir Chebotarev](https://github.com/excitoon)).

#### Build/Testing/Packaging Improvement

* Fix unit test `collapsing_sorted_stream`. [#9367](https://github.com/ClickHouse/ClickHouse/pull/9367) ([Deleted user](https://github.com/ghost)).

### ClickHouse release v20.1.9.54, 2020-03-28

#### Bug Fix

* Fix `Different expressions with the same alias` error when a query has `PREWHERE` and `WHERE` on a distributed table and `SET distributed_product_mode = 'local'`. [#9871](https://github.com/ClickHouse/ClickHouse/pull/9871) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix excessive memory consumption in mutations for tables with a composite primary key. This fixes [#9850](https://github.com/ClickHouse/ClickHouse/issues/9850). [#9860](https://github.com/ClickHouse/ClickHouse/pull/9860) ([alesapin](https://github.com/alesapin)).
* For `INSERT` queries, a shard now clamps the settings received from the initiator to the shard's constraints instead of throwing an exception. This allows sending `INSERT` queries to a shard with different constraints. This change improves fix [#9447](https://github.com/ClickHouse/ClickHouse/issues/9447). [#9852](https://github.com/ClickHouse/ClickHouse/pull/9852) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix possible exception `Got 0 in totals chunk, expected 1` on the client. It happened for queries with `JOIN` when the right joined table had zero rows. Example: `select * from system.one t1 join system.one t2 on t1.dummy = t2.dummy limit 0 FORMAT TabSeparated;`. Fixes [#9777](https://github.com/ClickHouse/ClickHouse/issues/9777). [#9823](https://github.com/ClickHouse/ClickHouse/pull/9823) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix `SIGSEGV` with `optimize_skip_unused_shards` when the type cannot be converted. [#9804](https://github.com/ClickHouse/ClickHouse/pull/9804) ([Azat Khuzhin](https://github.com/azat)).
* Fixed a few cases when the timezone of the function argument wasn't used properly. [#9574](https://github.com/ClickHouse/ClickHouse/pull/9574) ([Vasily Nemkov](https://github.com/Enmk)).

#### Improvement

* Remove the `ORDER BY` stage from mutations because we read from a single ordered part in a single thread. Also add a check that the rows in a mutation are ordered by sorting key and that this order is not violated. [#9886](https://github.com/ClickHouse/ClickHouse/pull/9886) ([alesapin](https://github.com/alesapin)).

#### Build/Testing/Packaging Improvement

* Clean up duplicated linker flags. Make sure the linker won't look up an unexpected symbol. [#9433](https://github.com/ClickHouse/ClickHouse/pull/9433) ([Amos Bird](https://github.com/amosbird)).

### ClickHouse release v20.1.8.41, 2020-03-20

#### Bug Fix
@@ -3,8 +3,10 @@ if (USE_CLANG_TIDY)
endif ()

add_subdirectory (common)
add_subdirectory (loggers)
add_subdirectory (daemon)
add_subdirectory (loggers)
add_subdirectory (pcg-random)
add_subdirectory (widechar_width)

if (USE_MYSQL)
    add_subdirectory (mysqlxx)
@@ -11,6 +11,10 @@ using Int16 = int16_t;
using Int32 = int32_t;
using Int64 = int64_t;

#if __cplusplus <= 201703L
using char8_t = unsigned char;
#endif

using UInt8 = char8_t;
using UInt16 = uint16_t;
using UInt32 = uint32_t;
@@ -1,12 +1,47 @@
LIBRARY()

ADDINCL(
    GLOBAL clickhouse/base
    contrib/libs/cctz/include
)

CFLAGS (GLOBAL -DARCADIA_BUILD)

IF (OS_DARWIN)
    CFLAGS (GLOBAL -DOS_DARWIN)
ELSEIF (OS_FREEBSD)
    CFLAGS (GLOBAL -DOS_FREEBSD)
ELSEIF (OS_LINUX)
    CFLAGS (GLOBAL -DOS_LINUX)
ENDIF ()

PEERDIR(
    contrib/libs/cctz/src
    contrib/libs/cxxsupp/libcxx-filesystem
    contrib/libs/poco/Net
    contrib/libs/poco/Util
    contrib/restricted/boost
    contrib/restricted/cityhash-1.0.2
)

SRCS(
    argsToConfig.cpp
    coverage.cpp
    DateLUT.cpp
    DateLUTImpl.cpp
    demangle.cpp
    getFQDNOrHostName.cpp
    getMemoryAmount.cpp
    getThreadId.cpp
    JSON.cpp
    LineReader.cpp
    mremap.cpp
    phdr_cache.cpp
    preciseExp10.c
    setTerminalEcho.cpp
    shift10.cpp
    sleep.cpp
    terminalColors.cpp
)

END()
@@ -50,11 +50,13 @@
#include <Common/getMultipleKeysFromConfig.h>
#include <Common/ClickHouseRevision.h>
#include <Common/Config/ConfigProcessor.h>
#include <Common/config_version.h>

#ifdef __APPLE__
// ucontext is not available without _XOPEN_SOURCE
#define _XOPEN_SOURCE 700
#if !defined(ARCADIA_BUILD)
#    include <Common/config_version.h>
#endif

#if defined(OS_DARWIN)
#    define _XOPEN_SOURCE 700 // ucontext is not available without _XOPEN_SOURCE
#endif
#include <ucontext.h>

@@ -410,7 +412,7 @@ std::string BaseDaemon::getDefaultCorePath() const

void BaseDaemon::closeFDs()
{
#if defined(__FreeBSD__) || (defined(__APPLE__) && defined(__MACH__))
#if defined(OS_FREEBSD) || defined(OS_DARWIN)
    Poco::File proc_path{"/dev/fd"};
#else
    Poco::File proc_path{"/proc/self/fd"};

@@ -430,7 +432,7 @@ void BaseDaemon::closeFDs()
    else
    {
        int max_fd = -1;
#ifdef _SC_OPEN_MAX
#if defined(_SC_OPEN_MAX)
        max_fd = sysconf(_SC_OPEN_MAX);
        if (max_fd == -1)
#endif

@@ -448,7 +450,7 @@ namespace
/// the maximum is 1000, and chromium uses 300 for its tab processes. Ignore
/// whatever errors that occur, because it's just a debugging aid and we don't
/// care if it breaks.
#if defined(__linux__) && !defined(NDEBUG)
#if defined(OS_LINUX) && !defined(NDEBUG)
void debugIncreaseOOMScore()
{
    const std::string new_score = "555";
base/daemon/ya.make (new file, 14 lines)
@@ -0,0 +1,14 @@
LIBRARY()

NO_COMPILER_WARNINGS()

PEERDIR(
    clickhouse/src/Common
)

SRCS(
    BaseDaemon.cpp
    GraphiteWriter.cpp
)

END()
base/loggers/ya.make (new file, 15 lines)
@@ -0,0 +1,15 @@
LIBRARY()

PEERDIR(
    clickhouse/src/Common
)

SRCS(
    ExtendedLogChannel.cpp
    Loggers.cpp
    OwnFormattingChannel.cpp
    OwnPatternFormatter.cpp
    OwnSplitChannel.cpp
)

END()
base/pcg-random/CMakeLists.txt (new file, 2 lines)
@@ -0,0 +1,2 @@
add_library(pcg_random INTERFACE)
target_include_directories(pcg_random INTERFACE .)
@@ -292,7 +292,7 @@ inline itype rotl(itype value, bitcount_t rot)
{
    constexpr bitcount_t bits = sizeof(itype) * 8;
    constexpr bitcount_t mask = bits - 1;
#if PCG_USE_ZEROCHECK_ROTATE_IDIOM
#if defined(PCG_USE_ZEROCHECK_ROTATE_IDIOM)
    return rot ? (value << rot) | (value >> (bits - rot)) : value;
#else
    return (value << rot) | (value >> ((- rot) & mask));

@@ -304,7 +304,7 @@ inline itype rotr(itype value, bitcount_t rot)
{
    constexpr bitcount_t bits = sizeof(itype) * 8;
    constexpr bitcount_t mask = bits - 1;
#if PCG_USE_ZEROCHECK_ROTATE_IDIOM
#if defined(PCG_USE_ZEROCHECK_ROTATE_IDIOM)
    return rot ? (value >> rot) | (value << (bits - rot)) : value;
#else
    return (value >> rot) | (value << ((- rot) & mask));

@@ -318,7 +318,7 @@ inline itype rotr(itype value, bitcount_t rot)
 *
 * These overloads will be preferred over the general template code above.
 */
#if PCG_USE_INLINE_ASM && __GNUC__ && (__x86_64__ || __i386__)
#if defined(PCG_USE_INLINE_ASM) && __GNUC__ && (__x86_64__ || __i386__)

inline uint8_t rotr(uint8_t value, bitcount_t rot)
{

@@ -600,7 +600,7 @@ std::ostream& operator<<(std::ostream& out, printable_typename<T>) {
#ifdef __GNUC__
    int status;
    char* pretty_name =
        abi::__cxa_demangle(implementation_typename, NULL, NULL, &status);
        abi::__cxa_demangle(implementation_typename, nullptr, nullptr, &status);
    if (status == 0)
        out << pretty_name;
    free(static_cast<void*>(pretty_name));
base/pcg-random/ya.make (new file, 5 lines)
@@ -0,0 +1,5 @@
LIBRARY()

ADDINCL (GLOBAL clickhouse/base/pcg-random)

END()
base/widechar_width/ya.make (new file, 9 lines)
@@ -0,0 +1,9 @@
LIBRARY()

ADDINCL(GLOBAL clickhouse/base/widechar_width)

SRCS(
    widechar_width.cpp
)

END()
@@ -1,3 +1,7 @@
RECURSE(
    common
    daemon
    loggers
    pcg-random
    widechar_width
)
@@ -2,4 +2,3 @@ set(DIVIDE_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/contrib/libdivide)
set(DBMS_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/src ${ClickHouse_BINARY_DIR}/src)
set(DOUBLE_CONVERSION_CONTRIB_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/contrib/double-conversion)
set(METROHASH_CONTRIB_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/contrib/libmetrohash/src)
set(PCG_RANDOM_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/contrib/libpcg-random/include)
contrib/CMakeLists.txt (vendored, 1 line changed)
@@ -333,6 +333,5 @@ add_subdirectory(grpc-cmake)

add_subdirectory(replxx-cmake)
add_subdirectory(FastMemcpy)
add_subdirectory(widecharwidth)
add_subdirectory(consistent-hashing)
add_subdirectory(consistent-hashing-sumbur)
@@ -1,52 +0,0 @@
# PCG Random Number Generation, C++ Edition

[PCG-Random website]: http://www.pcg-random.org

This code provides an implementation of the PCG family of random number
generators, which are fast, statistically excellent, and offer a number of
useful features.

Full details can be found at the [PCG-Random website]. This version
of the code provides many family members -- if you just want one
simple generator, you may prefer the minimal C version of the library.

There are two kinds of generator, normal generators and extended generators.
Extended generators provide *k* dimensional equidistribution and can perform
party tricks, but generally speaking most people only need the normal
generators.

There are two ways to access the generators, using a convenience typedef
or by using the underlying templates directly (similar to C++11's `std::mt19937` typedef vs its `std::mersenne_twister_engine` template). For most users, the convenience typedef is what you want, and probably you're fine with `pcg32` for 32-bit numbers. If you want 64-bit numbers, either use `pcg64` (or, if you're on a 32-bit system, making 64 bits from two calls to `pcg32_k2` may be faster).

## Documentation and Examples

Visit the [PCG-Random website] for information on how to use this library, or look
at the sample code in the `sample` directory -- hopefully it should be fairly
self explanatory.

## Building

The code is written in C++11, as an include-only library (i.e., there is
nothing you need to build). There are some provided demo programs and tests
however. On a Unix-style system (e.g., Linux, Mac OS X) you should be able
to just type

    make

to build the demo programs.

## Testing

Run

    make test

## Directory Structure

The directories are arranged as follows:

* `include` -- contains `pcg_random.hpp` and supporting include files
* `test-high` -- test code for the high-level API where the functions have
  shorter, less scary-looking names.
* `sample` -- sample code, some similar to the code in `test-high` but more
  human readable, some other examples too
@@ -2,7 +2,7 @@
set -ex
set -o pipefail
trap "exit" INT TERM
trap "kill $(jobs -pr) ||:" EXIT
trap 'kill $(jobs -pr) ||:' EXIT

stage=${stage:-}
script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"

@@ -18,22 +18,22 @@ function configure
    sed -i 's/<tcp_port>9000/<tcp_port>9002/g' right/config/config.xml

    # Start a temporary server to rename the tables
    while killall clickhouse; do echo . ; sleep 1 ; done
    while killall clickhouse-server; do echo . ; sleep 1 ; done
    echo all killed

    set -m # Spawn temporary in its own process groups
    left/clickhouse server --config-file=left/config/config.xml -- --path db0 &> setup-server-log.log &
    left/clickhouse-server --config-file=left/config/config.xml -- --path db0 &> setup-server-log.log &
    left_pid=$!
    kill -0 $left_pid
    disown $left_pid
    set +m
    while ! left/clickhouse client --port 9001 --query "select 1" ; do kill -0 $left_pid ; echo . ; sleep 1 ; done
    while ! clickhouse-client --port 9001 --query "select 1" ; do kill -0 $left_pid ; echo . ; sleep 1 ; done
    echo server for setup started

    left/clickhouse client --port 9001 --query "create database test" ||:
    left/clickhouse client --port 9001 --query "rename table datasets.hits_v1 to test.hits" ||:
    clickhouse-client --port 9001 --query "create database test" ||:
    clickhouse-client --port 9001 --query "rename table datasets.hits_v1 to test.hits" ||:

    while killall clickhouse; do echo . ; sleep 1 ; done
    while killall clickhouse-server; do echo . ; sleep 1 ; done
    echo all killed

    # Remove logs etc, because they will be updated, and sharing them between

@@ -42,43 +42,50 @@ function configure
    rm db0/metadata/system/* -rf ||:

    # Make copies of the original db for both servers. Use hardlinks instead
    # of copying. Be careful to remove preprocessed configs or it can lead to
    # weird effects.
    # of copying. Be careful to remove preprocessed configs and system tables, or
    # it can lead to weird effects.
    rm -r left/db ||:
    rm -r right/db ||:
    rm -r db0/preprocessed_configs ||:
    rm -r db/{data,metadata}/system ||:
    cp -al db0/ left/db/
    cp -al db0/ right/db/
}

function restart
{
    while killall clickhouse; do echo . ; sleep 1 ; done
    while killall clickhouse-server; do echo . ; sleep 1 ; done
    echo all killed

    set -m # Spawn servers in their own process groups

    left/clickhouse server --config-file=left/config/config.xml -- --path left/db &>> left-server-log.log &
    left/clickhouse-server --config-file=left/config/config.xml -- --path left/db &>> left-server-log.log &
    left_pid=$!
    kill -0 $left_pid
    disown $left_pid

    right/clickhouse server --config-file=right/config/config.xml -- --path right/db &>> right-server-log.log &
    right/clickhouse-server --config-file=right/config/config.xml -- --path right/db &>> right-server-log.log &
    right_pid=$!
    kill -0 $right_pid
    disown $right_pid

    set +m

    while ! left/clickhouse client --port 9001 --query "select 1" ; do kill -0 $left_pid ; echo . ; sleep 1 ; done
    while ! clickhouse-client --port 9001 --query "select 1" ; do kill -0 $left_pid ; echo . ; sleep 1 ; done
    echo left ok
    while ! right/clickhouse client --port 9002 --query "select 1" ; do kill -0 $right_pid ; echo . ; sleep 1 ; done
    while ! clickhouse-client --port 9002 --query "select 1" ; do kill -0 $right_pid ; echo . ; sleep 1 ; done
    echo right ok

    left/clickhouse client --port 9001 --query "select * from system.tables where database != 'system'"
    left/clickhouse client --port 9001 --query "select * from system.build_options"
    right/clickhouse client --port 9002 --query "select * from system.tables where database != 'system'"
    right/clickhouse client --port 9002 --query "select * from system.build_options"
    clickhouse-client --port 9001 --query "select * from system.tables where database != 'system'"
    clickhouse-client --port 9001 --query "select * from system.build_options"
    clickhouse-client --port 9002 --query "select * from system.tables where database != 'system'"
    clickhouse-client --port 9002 --query "select * from system.build_options"

    # Check again that both servers we started are running -- this is important
    # for running locally, when there might be some other servers started and we
    # will connect to them instead.
    kill -0 $left_pid
    kill -0 $right_pid
}

function run_tests

@@ -129,7 +136,7 @@ function run_tests
    # FIXME remove some broken long tests
    for test_name in {IPv4,IPv6,modulo,parse_engine_file,number_formatting_formats,select_format,arithmetic,cryptographic_hashes,logical_functions_{medium,small}}
    do
        printf "$test_name\tMarked as broken (see compare.sh)\n" >> skipped-tests.tsv
        printf "%s\tMarked as broken (see compare.sh)\n" "$test_name">> skipped-tests.tsv
        rm "$test_prefix/$test_name.xml" ||:
    done
    test_files=$(ls "$test_prefix"/*.xml)

@@ -140,9 +147,9 @@ function run_tests
    for test in $test_files
    do
        # Check that both servers are alive, to fail faster if they die.
        left/clickhouse client --port 9001 --query "select 1 format Null" \
        clickhouse-client --port 9001 --query "select 1 format Null" \
            || { echo $test_name >> left-server-died.log ; restart ; continue ; }
        right/clickhouse client --port 9002 --query "select 1 format Null" \
        clickhouse-client --port 9002 --query "select 1 format Null" \
            || { echo $test_name >> right-server-died.log ; restart ; continue ; }

        test_name=$(basename "$test" ".xml")

@@ -160,7 +167,7 @@ function run_tests
        skipped=$(grep ^skipped "$test_name-raw.tsv" | cut -f2-)
        if [ "$skipped" != "" ]
        then
            printf "$test_name""\t""$skipped""\n" >> skipped-tests.tsv
            printf "%s\t%s\n" "$test_name" "$skipped">> skipped-tests.tsv
        fi
    done

@@ -172,24 +179,24 @@ function run_tests
function get_profiles
{
    # Collect the profiles
    left/clickhouse client --port 9001 --query "set query_profiler_cpu_time_period_ns = 0"
    left/clickhouse client --port 9001 --query "set query_profiler_real_time_period_ns = 0"
    right/clickhouse client --port 9001 --query "set query_profiler_cpu_time_period_ns = 0"
    right/clickhouse client --port 9001 --query "set query_profiler_real_time_period_ns = 0"
    left/clickhouse client --port 9001 --query "system flush logs"
    right/clickhouse client --port 9002 --query "system flush logs"
    clickhouse-client --port 9001 --query "set query_profiler_cpu_time_period_ns = 0"
    clickhouse-client --port 9001 --query "set query_profiler_real_time_period_ns = 0"
    clickhouse-client --port 9001 --query "set query_profiler_cpu_time_period_ns = 0"
    clickhouse-client --port 9001 --query "set query_profiler_real_time_period_ns = 0"
    clickhouse-client --port 9001 --query "system flush logs"
    clickhouse-client --port 9002 --query "system flush logs"

    left/clickhouse client --port 9001 --query "select * from system.query_log where type = 2 format TSVWithNamesAndTypes" > left-query-log.tsv ||: &
    left/clickhouse client --port 9001 --query "select * from system.query_thread_log format TSVWithNamesAndTypes" > left-query-thread-log.tsv ||: &
    left/clickhouse client --port 9001 --query "select * from system.trace_log format TSVWithNamesAndTypes" > left-trace-log.tsv ||: &
    left/clickhouse client --port 9001 --query "select arrayJoin(trace) addr, concat(splitByChar('/', addressToLine(addr))[-1], '#', demangle(addressToSymbol(addr)) ) name from system.trace_log group by addr format TSVWithNamesAndTypes" > left-addresses.tsv ||: &
    left/clickhouse client --port 9001 --query "select * from system.metric_log format TSVWithNamesAndTypes" > left-metric-log.tsv ||: &
    clickhouse-client --port 9001 --query "select * from system.query_log where type = 2 format TSVWithNamesAndTypes" > left-query-log.tsv ||: &
    clickhouse-client --port 9001 --query "select * from system.query_thread_log format TSVWithNamesAndTypes" > left-query-thread-log.tsv ||: &
    clickhouse-client --port 9001 --query "select * from system.trace_log format TSVWithNamesAndTypes" > left-trace-log.tsv ||: &
    clickhouse-client --port 9001 --query "select arrayJoin(trace) addr, concat(splitByChar('/', addressToLine(addr))[-1], '#', demangle(addressToSymbol(addr)) ) name from system.trace_log group by addr format TSVWithNamesAndTypes" > left-addresses.tsv ||: &
    clickhouse-client --port 9001 --query "select * from system.metric_log format TSVWithNamesAndTypes" > left-metric-log.tsv ||: &

    right/clickhouse client --port 9002 --query "select * from system.query_log where type = 2 format TSVWithNamesAndTypes" > right-query-log.tsv ||: &
    right/clickhouse client --port 9002 --query "select * from system.query_thread_log format TSVWithNamesAndTypes" > right-query-thread-log.tsv ||: &
    right/clickhouse client --port 9002 --query "select * from system.trace_log format TSVWithNamesAndTypes" > right-trace-log.tsv ||: &
    right/clickhouse client --port 9002 --query "select arrayJoin(trace) addr, concat(splitByChar('/', addressToLine(addr))[-1], '#', demangle(addressToSymbol(addr)) ) name from system.trace_log group by addr format TSVWithNamesAndTypes" > right-addresses.tsv ||: &
    right/clickhouse client --port 9002 --query "select * from system.metric_log format TSVWithNamesAndTypes" > right-metric-log.tsv ||: &
    clickhouse-client --port 9002 --query "select * from system.query_log where type = 2 format TSVWithNamesAndTypes" > right-query-log.tsv ||: &
    clickhouse-client --port 9002 --query "select * from system.query_thread_log format TSVWithNamesAndTypes" > right-query-thread-log.tsv ||: &
    clickhouse-client --port 9002 --query "select * from system.trace_log format TSVWithNamesAndTypes" > right-trace-log.tsv ||: &
    clickhouse-client --port 9002 --query "select arrayJoin(trace) addr, concat(splitByChar('/', addressToLine(addr))[-1], '#', demangle(addressToSymbol(addr)) ) name from system.trace_log group by addr format TSVWithNamesAndTypes" > right-addresses.tsv ||: &
    clickhouse-client --port 9002 --query "select * from system.metric_log format TSVWithNamesAndTypes" > right-metric-log.tsv ||: &

    wait
}

@@ -197,9 +204,9 @@ function get_profiles
# Build and analyze randomization distribution for all queries.
function analyze_queries
{
    find . -maxdepth 1 -name "*-queries.tsv" -print | \
        xargs -n1 -I% basename % -queries.tsv | \
        parallel --verbose right/clickhouse local --file "{}-queries.tsv" \
    find . -maxdepth 1 -name "*-queries.tsv" -print0 | \
        xargs -0 -n1 -I% basename % -queries.tsv | \
        parallel --verbose clickhouse-local --file "{}-queries.tsv" \
            --structure "\"query text, run int, version UInt32, time float\"" \
            --query "\"$(cat "$script_dir/eqmed.sql")\"" \
            ">" {}-report.tsv

@@ -221,7 +228,7 @@ done

rm ./*.{rep,svg} test-times.tsv test-dump.tsv unstable.tsv unstable-query-ids.tsv unstable-query-metrics.tsv changed-perf.tsv unstable-tests.tsv unstable-queries.tsv bad-tests.tsv slow-on-client.tsv all-queries.tsv ||:

right/clickhouse local --query "
clickhouse-local --query "
create table queries engine File(TSVWithNamesAndTypes, 'queries.rep')
    as select
        -- FIXME Comparison mode doesn't make sense for queries that complete

@@ -230,12 +237,14 @@ create table queries engine File(TSVWithNamesAndTypes, 'queries.rep')
        -- but the right way to do this is not yet clear.
        left + right < 0.05 as short,

        not short and abs(diff) < 0.10 and rd[3] > 0.10 as unstable,

        -- Do not consider changed the queries with 5% RD below 5% -- e.g., we're
        -- likely to observe a difference > 5% in less than 5% cases.
        -- Not sure it is correct, but empirically it filters out a lot of noise.
        not short and abs(diff) > 0.15 and abs(diff) > rd[3] and rd[1] > 0.05 as changed,
        -- Difference > 15% and > rd(99%) -- changed. We can't filter out flaky
        -- queries by rd(5%), because it can be zero when the difference is smaller
        -- than a typical distribution width. The difference is still real though.
        not short and abs(diff) > 0.15 and abs(diff) > rd[4] as changed,

        -- Not changed but rd(99%) > 10% -- unstable.
        not short and not changed and rd[4] > 0.10 as unstable,

        left, right, diff, rd,
        replaceAll(_file, '-report.tsv', '') test,
        query

@@ -293,7 +302,7 @@ create table all_tests_tsv engine File(TSV, 'all-queries.tsv') as

for version in {right,left}
do
right/clickhouse local --query "
clickhouse-local --query "
create view queries as
    select * from file('queries.rep', TSVWithNamesAndTypes,
        'short int, unstable int, changed int, left float, right float,

@@ -411,6 +420,10 @@ unset IFS
grep -H -m2 -i '\(Exception\|Error\):[^:]' ./*-err.log | sed 's/:/\t/' > run-errors.tsv ||:
}

# Check that local and client are in PATH
clickhouse-local --version > /dev/null
clickhouse-client --version > /dev/null

case "$stage" in
"")
;&
@@ -2,7 +2,7 @@
set -ex
set -o pipefail
trap "exit" INT TERM
trap "kill $(jobs -pr) ||:" EXIT
trap 'kill $(jobs -pr) ||:' EXIT

mkdir db0 ||:
@@ -87,9 +87,6 @@ git -C ch diff --name-only "$SHA_TO_TEST" "$(git -C ch merge-base "$SHA_TO_TEST"
# Set python output encoding so that we can print queries with Russian letters.
export PYTHONIOENCODING=utf-8

# Use a default number of runs if not told otherwise
export CHPC_RUNS=${CHPC_RUNS:-7}

# By default, use the main comparison script from the tested package, so that we
# can change it in PRs.
script_path="right/scripts"

@@ -101,14 +98,15 @@ fi
# Even if we have some errors, try our best to save the logs.
set +e

# Older version use 'kill 0', so put the script into a separate process group
# FIXME remove set +m in April 2020
set +m
# Use clickhouse-client and clickhouse-local from the right server.
PATH="$(readlink -f right/)":"$PATH"
export PATH

# Start the main comparison script.
{ \
    time ../download.sh "$REF_PR" "$REF_SHA" "$PR_TO_TEST" "$SHA_TO_TEST" && \
    time stage=configure "$script_path"/compare.sh ; \
} 2>&1 | ts "$(printf '%%Y-%%m-%%d %%H:%%M:%%S\t')" | tee compare.log
set -m

# Stop the servers to free memory. Normally they are restarted before getting
# the profile info, so they shouldn't use much, but if the comparison script
@@ -25,7 +25,7 @@ parser = argparse.ArgumentParser(description='Run performance test.')
parser.add_argument('file', metavar='FILE', type=argparse.FileType('r', encoding='utf-8'), nargs=1, help='test description file')
parser.add_argument('--host', nargs='*', default=['localhost'], help="Server hostname(s). Corresponds to '--port' options.")
parser.add_argument('--port', nargs='*', default=[9000], help="Server port(s). Corresponds to '--host' options.")
parser.add_argument('--runs', type=int, default=int(os.environ.get('CHPC_RUNS', 7)), help='Number of query runs per server. Defaults to CHPC_RUNS environment variable.')
parser.add_argument('--runs', type=int, default=int(os.environ.get('CHPC_RUNS', 11)), help='Number of query runs per server. Defaults to CHPC_RUNS environment variable.')
parser.add_argument('--no-long', type=bool, default=True, help='Skip the tests tagged as long.')
args = parser.parse_args()

@@ -140,9 +140,16 @@ report_stage_end('substitute2')
for q in test_queries:
    # Prewarm: run once on both servers. Helps to bring the data into memory,
    # precompile the queries, etc.
    for conn_index, c in enumerate(connections):
        res = c.execute(q, query_id = 'prewarm {} {}'.format(0, q))
        print('prewarm\t' + tsv_escape(q) + '\t' + str(conn_index) + '\t' + str(c.last_query.elapsed))
    try:
        for conn_index, c in enumerate(connections):
            res = c.execute(q, query_id = 'prewarm {} {}'.format(0, q))
            print('prewarm\t' + tsv_escape(q) + '\t' + str(conn_index) + '\t' + str(c.last_query.elapsed))
    except:
        # If prewarm fails for some query -- skip it, and try to test the others.
        # This might happen if the new test introduces some function that the
        # old server doesn't support. Still, report it as an error.
        print(traceback.format_exc(), file=sys.stderr)
        continue

    # Now, perform measured runs.
    # Track the time spent by the client to process this query, so that we can notice
@@ -1,5 +1,5 @@
---
toc_folder_title: Разработка
toc_folder_title: Development
toc_hidden: true
toc_priority: 58
toc_title: hidden
@@ -78,48 +78,6 @@ See the difference?

For example, the query “count the number of records for each advertising platform” requires reading one “advertising platform ID” column, which takes up 1 byte uncompressed. If most of the traffic was not from advertising platforms, you can expect at least 10-fold compression of this column. When using a quick compression algorithm, data decompression is possible at a speed of at least several gigabytes of uncompressed data per second. In other words, this query can be processed at a speed of approximately several billion rows per second on a single server. This speed is actually achieved in practice.

<details markdown="1">

<summary>Example</summary>

``` bash
$ clickhouse-client
ClickHouse client version 0.0.52053.
Connecting to localhost:9000.
Connected to ClickHouse server version 0.0.52053.
```

``` sql
SELECT CounterID, count() FROM hits GROUP BY CounterID ORDER BY count() DESC LIMIT 20
```

``` text
┌─CounterID─┬──count()─┐
│    114208 │ 56057344 │
│    115080 │ 51619590 │
│      3228 │ 44658301 │
│     38230 │ 42045932 │
│    145263 │ 42042158 │
│     91244 │ 38297270 │
│    154139 │ 26647572 │
│    150748 │ 24112755 │
│    242232 │ 21302571 │
│    338158 │ 13507087 │
│     62180 │ 12229491 │
│     82264 │ 12187441 │
│    232261 │ 12148031 │
│    146272 │ 11438516 │
│    168777 │ 11403636 │
│   4120072 │ 11227824 │
│  10938808 │ 10519739 │
│     74088 │  9047015 │
│    115079 │  8837972 │
│    337234 │  8205961 │
└───────────┴──────────┘
```

</details>

### CPU {#cpu}

Since executing a query requires processing a large number of rows, it helps to dispatch all operations for entire vectors instead of for separate rows, or to implement the query engine so that there is almost no dispatching cost. If you don’t do this, with any half-decent disk subsystem, the query interpreter inevitably stalls the CPU. It makes sense to both store data in columns and process it, when possible, by columns.
@@ -179,7 +179,7 @@ CREATE TABLE codec_example
ENGINE = MergeTree()
```

#### Common Purpose Codecs {#create-query-common-purpose-codecs}
#### General Purpose Codecs {#create-query-general-purpose-codecs}

Codecs:
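
For context, a sketch of how a general-purpose codec is attached to a column; the table and column names here are hypothetical:

``` sql
CREATE TABLE codec_sketch
(
    ts DateTime CODEC(ZSTD(1)),   -- general-purpose codec with a level argument
    value Float64 CODEC(LZ4)      -- fast general-purpose codec
)
ENGINE = MergeTree()
ORDER BY ts;
```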
@@ -80,48 +80,6 @@ See the difference?

For example, the query “count the number of records for each advertising platform” requires reading one “advertising platform ID” column, which takes up 1 byte uncompressed. If most of the traffic did not come from advertising platforms, you can expect at least 10-fold compression of this column. When using a fast compression algorithm, data decompression is possible at a speed of at least several gigabytes of uncompressed data per second. In other words, this query can be processed at a speed of approximately several billion rows per second on a single server. This speed is actually achieved in practice.

<details markdown="1">

<summary>Example</summary>

``` bash
$ clickhouse-client
ClickHouse client version 0.0.52053.
Connecting to localhost:9000.
Connected to ClickHouse server version 0.0.52053.
```

``` sql
SELECT CounterID, count() FROM hits GROUP BY CounterID ORDER BY count() DESC LIMIT 20
```

``` text
┌─CounterID─┬──count()─┐
│    114208 │ 56057344 │
│    115080 │ 51619590 │
│      3228 │ 44658301 │
│     38230 │ 42045932 │
│    145263 │ 42042158 │
│     91244 │ 38297270 │
│    154139 │ 26647572 │
│    150748 │ 24112755 │
│    242232 │ 21302571 │
│    338158 │ 13507087 │
│     62180 │ 12229491 │
│     82264 │ 12187441 │
│    232261 │ 12148031 │
│    146272 │ 11438516 │
│    168777 │ 11403636 │
│   4120072 │ 11227824 │
│  10938808 │ 10519739 │
│     74088 │  9047015 │
│    115079 │  8837972 │
│    337234 │  8205961 │
└───────────┴──────────┘
```

</details>

### CPU {#cpu}

Since executing a query requires processing a large number of rows, it helps to dispatch all operations for entire vectors instead of for separate rows, or to implement the query engine so that there is almost no dispatching cost. If you don't do this, with any half-decent disk subsystem, the query interpreter inevitably stalls the CPU. It makes sense to store data in columns and process it, when possible, by columns.
@@ -79,54 +79,6 @@ ClickHouse is a column-oriented database management system (DBMS) for

For example, the query “count the number of records for each advertising platform” requires reading the “advertising platform ID” column, which takes up 1 byte uncompressed. If most of the traffic was not from advertising platforms, you can expect at least 10-fold compression of this column. When using a quick compression algorithm, data decompression is possible at a speed of at least several gigabytes per second. In other words, this query can process approximately several billion rows per second on a single server. This speed is actually achieved in practice.

<details markdown="1">

<summary>Example</summary>

    $ clickhouse-client
    ClickHouse client version 0.0.52053.
    Connecting to localhost:9000.
    Connected to ClickHouse server version 0.0.52053.

    :) SELECT CounterID, count() FROM hits GROUP BY CounterID ORDER BY count() DESC LIMIT 20

    SELECT
        CounterID,
        count()
    FROM hits
    GROUP BY CounterID
    ORDER BY count() DESC
    LIMIT 20

    ┌─CounterID─┬──count()─┐
    │    114208 │ 56057344 │
    │    115080 │ 51619590 │
    │      3228 │ 44658301 │
    │     38230 │ 42045932 │
    │    145263 │ 42042158 │
    │     91244 │ 38297270 │
    │    154139 │ 26647572 │
    │    150748 │ 24112755 │
    │    242232 │ 21302571 │
    │    338158 │ 13507087 │
    │     62180 │ 12229491 │
    │     82264 │ 12187441 │
    │    232261 │ 12148031 │
    │    146272 │ 11438516 │
    │    168777 │ 11403636 │
    │   4120072 │ 11227824 │
    │  10938808 │ 10519739 │
    │     74088 │  9047015 │
    │    115079 │  8837972 │
    │    337234 │  8205961 │
    └───────────┴──────────┘

    20 rows in set. Elapsed: 0.153 sec. Processed 1.00 billion rows, 4.00 GB (6.53 billion rows/s., 26.10 GB/s.)

    :)

</details>

### CPU {#cpu}

Since executing a query requires processing a large number of rows, it helps to dispatch all operations for entire vectors instead of for separate rows, or to implement the query engine so that there is almost no dispatching cost. If you don't do this, with any half-decent disk subsystem, the query interpreter inevitably stalls the CPU. It makes sense to both store data in columns and process it, when possible, by columns.
@ -80,48 +80,6 @@ Vous voyez la différence?
|
||||
|
||||
Par exemple, la requête “count the number of records for each advertising platform” nécessite la lecture d'un “advertising platform ID” colonne, qui prend 1 octet non compressé. Si la majeure partie du trafic ne provenait pas de plates-formes publicitaires, vous pouvez vous attendre à une compression d'au moins 10 fois de cette colonne. Lors de l'utilisation d'un algorithme de compression rapide, la décompression des données est possible à une vitesse d'au moins plusieurs gigaoctets de données non compressées par seconde. En d'autres termes, cette requête ne peut être traitée qu'à une vitesse d'environ plusieurs milliards de lignes par seconde sur un seul serveur. Cette vitesse est effectivement atteinte dans la pratique.
|
||||
|
||||
<details markdown="1">

<summary>Example</summary>

``` bash
$ clickhouse-client
ClickHouse client version 0.0.52053.
Connecting to localhost:9000.
Connected to ClickHouse server version 0.0.52053.
```

``` sql
SELECT CounterID, count() FROM hits GROUP BY CounterID ORDER BY count() DESC LIMIT 20
```

``` text
┌─CounterID─┬──count()─┐
│    114208 │ 56057344 │
│    115080 │ 51619590 │
│      3228 │ 44658301 │
│     38230 │ 42045932 │
│    145263 │ 42042158 │
│     91244 │ 38297270 │
│    154139 │ 26647572 │
│    150748 │ 24112755 │
│    242232 │ 21302571 │
│    338158 │ 13507087 │
│     62180 │ 12229491 │
│     82264 │ 12187441 │
│    232261 │ 12148031 │
│    146272 │ 11438516 │
│    168777 │ 11403636 │
│   4120072 │ 11227824 │
│  10938808 │ 10519739 │
│     74088 │  9047015 │
│    115079 │  8837972 │
│    337234 │  8205961 │
└───────────┴──────────┘
```

</details>
### CPU {#cpu}

Since executing a query requires processing a large number of rows, it helps to dispatch all operations for whole vectors instead of for separate rows, or to implement the query engine so that there is almost no dispatch cost. If you do not do this, with any half-decent disk subsystem, the query interpreter inevitably stalls the CPU. It makes sense to store data in columns and to process it, where possible, by columns.

@ -82,48 +82,6 @@ OLAPシナリオは、他の一般的なシナリオ(OLTPやKey-Valueアクセ

For example, the query "count the number of records for each advertising platform" requires reading one "advertising platform ID" column, which takes 1 byte of space uncompressed. If most of the traffic did not come from advertising platforms, you can expect at least 10x compression of this column. With a fast compression algorithm, data can be decompressed at a speed of at least several gigabytes of uncompressed data per second. In other words, this query can be processed at a speed of roughly several billion rows per second on a single server. This speed really is achieved in practice.
<details markdown="1">

<summary>Example</summary>

``` bash
$ clickhouse-client
ClickHouse client version 0.0.52053.
Connecting to localhost:9000.
Connected to ClickHouse server version 0.0.52053.
```

``` sql
SELECT CounterID, count() FROM hits GROUP BY CounterID ORDER BY count() DESC LIMIT 20
```

``` text
┌─CounterID─┬──count()─┐
│    114208 │ 56057344 │
│    115080 │ 51619590 │
│      3228 │ 44658301 │
│     38230 │ 42045932 │
│    145263 │ 42042158 │
│     91244 │ 38297270 │
│    154139 │ 26647572 │
│    150748 │ 24112755 │
│    242232 │ 21302571 │
│    338158 │ 13507087 │
│     62180 │ 12229491 │
│     82264 │ 12187441 │
│    232261 │ 12148031 │
│    146272 │ 11438516 │
│    168777 │ 11403636 │
│   4120072 │ 11227824 │
│  10938808 │ 10519739 │
│     74088 │  9047015 │
│    115079 │  8837972 │
│    337234 │  8205961 │
└───────────┴──────────┘
```

</details>
### CPU {#cpu}

Since executing a query requires processing a large number of rows, it is efficient to dispatch all operations for whole vectors instead of for individual rows, or to implement the query engine so that there is almost no dispatch cost. Without this, with any half-decent disk subsystem, the query interpreter inevitably stalls the CPU.
@ -82,48 +82,6 @@ ClickHouse - столбцовая система управления базам

For example, the query "count the number of records for each advertising system" requires reading one "advertising system ID" column, which takes 1 byte uncompressed. If most of the visits did not come from advertising systems, you can count on at least 10x compression of this column. When using a fast compression algorithm, data can be decompressed at a rate of more than several gigabytes of uncompressed data per second. In other words, such a query can run at a speed of about several billion rows per second on a single server. In practice, this speed really is achieved.
<details markdown="1">

<summary>Example</summary>

``` bash
$ clickhouse-client
ClickHouse client version 0.0.52053.
Connecting to localhost:9000.
Connected to ClickHouse server version 0.0.52053.
```

``` sql
SELECT CounterID, count() FROM hits GROUP BY CounterID ORDER BY count() DESC LIMIT 20
```

``` text
┌─CounterID─┬──count()─┐
│    114208 │ 56057344 │
│    115080 │ 51619590 │
│      3228 │ 44658301 │
│     38230 │ 42045932 │
│    145263 │ 42042158 │
│     91244 │ 38297270 │
│    154139 │ 26647572 │
│    150748 │ 24112755 │
│    242232 │ 21302571 │
│    338158 │ 13507087 │
│     62180 │ 12229491 │
│     82264 │ 12187441 │
│    232261 │ 12148031 │
│    146272 │ 11438516 │
│    168777 │ 11403636 │
│   4120072 │ 11227824 │
│  10938808 │ 10519739 │
│     74088 │  9047015 │
│    115079 │  8837972 │
│    337234 │  8205961 │
└───────────┴──────────┘
```

</details>
### In terms of computation {#po-vychisleniiam}

Since a query has to process a fairly large number of rows, it pays off to dispatch all operations for whole vectors rather than for individual rows, or to implement the query execution engine so that dispatch overhead is close to zero. If you do not do this, then with any halfway decent disk subsystem the query interpreter inevitably becomes CPU-bound.
@ -15,9 +15,7 @@

The "normalized z-Order curve" task may eventually be useful for BK and Metrica, since it allows mixing OrderID and PageID and avoiding data duplication.
The task also introduces a way of indexing by inverting a function of several arguments over an interval, which makes sense for further development.

Initially done by [Андрей Чулков](https://github.com/achulkov2), HSE; now being (not quite) finished by [Ольга Хвостикова](https://github.com/stavrolia), but the deadlines have slipped somewhat because of task 25.9. Let's hope for the best.

Upd. Someone else will be finishing it. The priority is not high.
[Андрей Чулков](https://github.com/achulkov2), HSE.

### 1.2. Wait-free database catalog. {#wait-free-katalog-baz-dannykh}

@ -869,8 +867,6 @@ Upd. Нас заставляют переписать эту библиотек

### 10.14. Support for all types in the transform function. {#podderzhka-vsekh-tipov-v-funktsii-transform}

Ольга Хвостикова took the task. Upd. Status unknown.

### 10.15. Using dictionaries as a specialized layout for Join. {#ispolzovanie-slovarei-kak-spetsializirovannogo-layout-dlia-join}

### 10.16. Dictionaries on a local SSD. {#slovari-na-lokalnom-ssd}

@ -1414,8 +1410,6 @@ N.Vartolomei.

### 22.3. Protection against absurd codecs specified by the user. {#zashchita-ot-absurdno-zadannykh-polzovatelem-kodekov}

In the queue; most likely [Ольга Хвостикова](https://github.com/stavrolia).

### 22.4. Fixing the remaining deadlocks in table RWLocks. {#ispravlenie-ostavshikhsia-deadlocks-v-tablichnykh-rwlock-akh}

Александр Казаков. Needed for Yandex.Metrica and Datalens. The task has been dragging along, and fixes in neighboring places have made it less pressing.
@ -111,6 +111,7 @@ def build_for_lang(lang, args):
'codehilite',
'nl2br',
'sane_lists',
'pymdownx.details',
'pymdownx.magiclink',
'pymdownx.superfences',
'extra',

@ -35,4 +35,4 @@ soupsieve==2.0
termcolor==1.1.0
tornado==5.1.1
Unidecode==1.1.1
urllib3==1.25.8
urllib3==1.25.9

@ -9,4 +9,4 @@ python-slugify==4.0.0
PyYAML==5.3.1
requests==2.23.0
text-unidecode==1.3
urllib3==1.25.8
urllib3==1.25.9
@ -78,48 +78,6 @@ Farkı görüyor musunuz?

For example, the query "count the number of records for each advertising platform" requires reading one "advertising platform ID" column, which takes 1 byte uncompressed. If most of the traffic did not come from advertising platforms, you can expect at least 10x compression of this column. When using a fast compression algorithm, data decompression is possible at a speed of at least several gigabytes of uncompressed data per second. In other words, this query can be processed at a speed of roughly several billion rows per second on a single server. This speed is actually achieved in practice.
<details markdown="1">

<summary>Example</summary>

``` bash
$ clickhouse-client
ClickHouse client version 0.0.52053.
Connecting to localhost:9000.
Connected to ClickHouse server version 0.0.52053.
```

``` sql
SELECT CounterID, count() FROM hits GROUP BY CounterID ORDER BY count() DESC LIMIT 20
```

``` text
┌─CounterID─┬──count()─┐
│    114208 │ 56057344 │
│    115080 │ 51619590 │
│      3228 │ 44658301 │
│     38230 │ 42045932 │
│    145263 │ 42042158 │
│     91244 │ 38297270 │
│    154139 │ 26647572 │
│    150748 │ 24112755 │
│    242232 │ 21302571 │
│    338158 │ 13507087 │
│     62180 │ 12229491 │
│     82264 │ 12187441 │
│    232261 │ 12148031 │
│    146272 │ 11438516 │
│    168777 │ 11403636 │
│   4120072 │ 11227824 │
│  10938808 │ 10519739 │
│     74088 │  9047015 │
│    115079 │  8837972 │
│    337234 │  8205961 │
└───────────┴──────────┘
```

</details>
### CPU {#cpu}

Since executing a query requires processing a large number of rows, it helps to dispatch all operations for whole vectors instead of for separate rows, or to implement the query engine so that there is almost no dispatch cost. If you do not do this, with any half-decent disk subsystem, the query interpreter inevitably stalls the CPU. It makes sense both to store data in columns and to process it, where possible, by columns.
@ -1,8 +1,3 @@
---
machine_translated: true
machine_translated_rev: b111334d6614a02564cf32f379679e9ff970d9b1
---

# What Is ClickHouse? {#shi-yao-shi-clickhouse}

ClickHouse is a column-oriented database management system (DBMS) for online analytical processing (OLAP).

@ -81,54 +76,6 @@ ClickHouse是一个用于联机分析(OLAP)的列式数据库管理系统(DBMS)

For example, the query "count the number of records for each advertising platform" requires reading one "advertising platform ID" column, which takes 1 byte to store uncompressed. If most of the traffic did not come from advertising platforms, you can expect at least 10x compression of this column. With a fast compression algorithm, decompression runs at a speed of at least a gigabyte of uncompressed data per second. In other words, this query can be processed at a speed of roughly several billion rows per second on a single server. This is in fact the speed achieved by the current implementation.
<details markdown="1">

<summary>Example</summary>

``` bash
$ clickhouse-client
ClickHouse client version 0.0.52053.
Connecting to localhost:9000.
Connected to ClickHouse server version 0.0.52053.
```

``` sql
SELECT CounterID, count() FROM hits GROUP BY CounterID ORDER BY count() DESC LIMIT 20
```

``` text
┌─CounterID─┬──count()─┐
│    114208 │ 56057344 │
│    115080 │ 51619590 │
│      3228 │ 44658301 │
│     38230 │ 42045932 │
│    145263 │ 42042158 │
│     91244 │ 38297270 │
│    154139 │ 26647572 │
│    150748 │ 24112755 │
│    242232 │ 21302571 │
│    338158 │ 13507087 │
│     62180 │ 12229491 │
│     82264 │ 12187441 │
│    232261 │ 12148031 │
│    146272 │ 11438516 │
│    168777 │ 11403636 │
│   4120072 │ 11227824 │
│  10938808 │ 10519739 │
│     74088 │  9047015 │
│    115079 │  8837972 │
│    337234 │  8205961 │
└───────────┴──────────┘

20 rows in set. Elapsed: 0.153 sec. Processed 1.00 billion rows, 4.00 GB (6.53 billion rows/s., 26.10 GB/s.)
```

</details>
### CPU {#cpu}

Since executing a query requires processing a large number of rows, performing every operation on whole vectors instead of on individual rows is far more efficient, and it also helps implement a query engine with almost no dispatch cost. If you do not do this, with any half-decent disk subsystem, the query interpreter inevitably leaves the CPU waiting. So it makes sense both to store data by columns and to process it by columns.
@ -1,6 +1,5 @@
set(CLICKHOUSE_BENCHMARK_SOURCES ${CMAKE_CURRENT_SOURCE_DIR}/Benchmark.cpp)
set(CLICKHOUSE_BENCHMARK_LINK PRIVATE dbms clickhouse_aggregate_functions clickhouse_common_config ${Boost_PROGRAM_OPTIONS_LIBRARY})
set(CLICKHOUSE_BENCHMARK_INCLUDE SYSTEM PRIVATE ${PCG_RANDOM_INCLUDE_DIR})

clickhouse_program_add(benchmark)
@ -685,7 +685,7 @@ private:
if (ignore_error)
{
Tokens tokens(begin, end);
IParser::Pos token_iterator(tokens);
IParser::Pos token_iterator(tokens, context.getSettingsRef().max_parser_depth);
while (token_iterator->type != TokenType::Semicolon && token_iterator.isValid())
++token_iterator;
begin = token_iterator->end;

@ -959,10 +959,15 @@ private:
ParserQuery parser(end, true);
ASTPtr res;

const auto & settings = context.getSettingsRef();
size_t max_length = 0;
if (!allow_multi_statements)
max_length = settings.max_query_size;

if (is_interactive || ignore_error)
{
String message;
res = tryParseQuery(parser, pos, end, message, true, "", allow_multi_statements, 0);
res = tryParseQuery(parser, pos, end, message, true, "", allow_multi_statements, max_length, settings.max_parser_depth);

if (!res)
{

@ -971,7 +976,7 @@ private:
}
}
else
res = parseQueryAndMovePosition(parser, pos, end, "", allow_multi_statements, 0);
res = parseQueryAndMovePosition(parser, pos, end, "", allow_multi_statements, max_length, settings.max_parser_depth);

if (is_interactive)
{
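These hunks thread two limits through the client's parsing path: max_query_size bounds how many bytes a single statement may occupy, and max_parser_depth bounds recursion in the parser. A sketch of the depth-limiting idea (illustrative only, not the real IParser machinery):

``` cpp
#include <cstddef>
#include <stdexcept>

// Illustrative guard: each nested construct entered by a recursive-descent
// parser increments the depth; exceeding the limit throws instead of
// overflowing the stack on adversarial inputs like "((((((...))))))".
class ParseDepthGuard
{
public:
    ParseDepthGuard(size_t & depth_, size_t max_depth)
        : depth(depth_)
    {
        if (++depth > max_depth)
            throw std::runtime_error("Maximum parse depth exceeded");
    }
    ~ParseDepthGuard() { --depth; }

private:
    size_t & depth;
};
```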
@ -14,6 +14,7 @@
#include <Parsers/ExpressionElementParsers.h>
#include <Compression/CompressionFactory.h>
#include <Common/TerminalSize.h>
#include <Core/Defines.h>

namespace DB

@ -123,7 +124,7 @@ int mainEntryClickHouseCompressor(int argc, char ** argv)
DB::ParserCodec codec_parser;

std::string codecs_line = boost::algorithm::join(codecs, ",");
auto ast = DB::parseQuery(codec_parser, "(" + codecs_line + ")", 0);
auto ast = DB::parseQuery(codec_parser, "(" + codecs_line + ")", 0, DBMS_DEFAULT_MAX_PARSER_DEPTH);
codec = DB::CompressionCodecFactory::instance().get(ast, nullptr);
}
else
@ -12,6 +12,6 @@ set(CLICKHOUSE_COPIER_LINK PRIVATE
clickhouse_dictionaries
string_utils ${Poco_XML_LIBRARY} PUBLIC daemon)

set(CLICKHOUSE_COPIER_INCLUDE SYSTEM PRIVATE ${PCG_RANDOM_INCLUDE_DIR} ${CMAKE_CURRENT_SOURCE_DIR})
set(CLICKHOUSE_COPIER_INCLUDE SYSTEM PRIVATE ${CMAKE_CURRENT_SOURCE_DIR})

clickhouse_program_add(copier)
@ -1197,7 +1197,9 @@ TaskStatus ClusterCopier::processPartitionPieceTaskImpl(
query += " LIMIT " + limit;

ParserQuery p_query(query.data() + query.size());
return parseQuery(p_query, query, 0);

const auto & settings = context.getSettingsRef();
return parseQuery(p_query, query, settings.max_query_size, settings.max_parser_depth);
};

/// Load balancing

@ -1409,7 +1411,8 @@ TaskStatus ClusterCopier::processPartitionPieceTaskImpl(
query += "INSERT INTO " + getQuotedTable(split_table_for_current_piece) + " VALUES ";

ParserQuery p_query(query.data() + query.size());
query_insert_ast = parseQuery(p_query, query, 0);
const auto & settings = context.getSettingsRef();
query_insert_ast = parseQuery(p_query, query, settings.max_query_size, settings.max_parser_depth);

LOG_DEBUG(log, "Executing INSERT query: " << query);
}

@ -1634,7 +1637,8 @@ ASTPtr ClusterCopier::getCreateTableForPullShard(const ConnectionTimeouts & time
&task_cluster->settings_pull);

ParserCreateQuery parser_create_query;
return parseQuery(parser_create_query, create_query_pull_str, 0);
const auto & settings = context.getSettingsRef();
return parseQuery(parser_create_query, create_query_pull_str, settings.max_query_size, settings.max_parser_depth);
}

/// If it is implicitly asked to create split Distributed table for certain piece on current shard, we will do it.

@ -1712,7 +1716,8 @@ std::set<String> ClusterCopier::getShardPartitions(const ConnectionTimeouts & ti
}

ParserQuery parser_query(query.data() + query.size());
ASTPtr query_ast = parseQuery(parser_query, query, 0);
const auto & settings = context.getSettingsRef();
ASTPtr query_ast = parseQuery(parser_query, query, settings.max_query_size, settings.max_parser_depth);

LOG_DEBUG(log, "Computing destination partition set, executing query: " << query);

@ -1759,7 +1764,8 @@ bool ClusterCopier::checkShardHasPartition(const ConnectionTimeouts & timeouts,
<< partition_quoted_name << " existence, executing query: " << query);

ParserQuery parser_query(query.data() + query.size());
ASTPtr query_ast = parseQuery(parser_query, query, 0);
const auto & settings = context.getSettingsRef();
ASTPtr query_ast = parseQuery(parser_query, query, settings.max_query_size, settings.max_parser_depth);

Context local_context = context;
local_context.setSettings(task_cluster->settings_pull);

@ -1793,7 +1799,8 @@ bool ClusterCopier::checkPresentPartitionPiecesOnCurrentShard(const ConnectionTi
<< "existence, executing query: " << query);

ParserQuery parser_query(query.data() + query.size());
ASTPtr query_ast = parseQuery(parser_query, query, 0);
const auto & settings = context.getSettingsRef();
ASTPtr query_ast = parseQuery(parser_query, query, settings.max_query_size, settings.max_parser_depth);

Context local_context = context;
local_context.setSettings(task_cluster->settings_pull);

@ -1826,7 +1833,8 @@ UInt64 ClusterCopier::executeQueryOnCluster(
if (query_ast_ == nullptr)
{
ParserQuery p_query(query.data() + query.size());
query_ast = parseQuery(p_query, query, 0);
const auto & settings = context.getSettingsRef();
query_ast = parseQuery(p_query, query, settings.max_query_size, settings.max_parser_depth);
}
else
query_ast = query_ast_;
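Every hunk in this file makes the same substitution: the literal 0 arguments (meaning "no limit") are replaced by values read from the query context. The recurring idiom, pulled out for clarity (a hypothetical helper; the real code inlines it at each call site):

``` cpp
// Hypothetical helper showing the pattern repeated across ClusterCopier:
// derive both parser limits from the context settings instead of passing 0.
ASTPtr parseWithLimits(IParser & parser, const String & query, const Context & context)
{
    const auto & settings = context.getSettingsRef();
    return parseQuery(parser, query, settings.max_query_size, settings.max_parser_depth);
}
```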
@ -4,6 +4,9 @@
#include "Internals.h"
#include "ClusterPartition.h"

#include <Core/Defines.h>

namespace DB
{
namespace ErrorCodes

@ -260,9 +263,10 @@ inline TaskTable::TaskTable(TaskCluster & parent, const Poco::Util::AbstractConf
+ "." + escapeForFileName(table_push.second);

engine_push_str = config.getString(table_prefix + "engine");

{
ParserStorage parser_storage;
engine_push_ast = parseQuery(parser_storage, engine_push_str, 0);
engine_push_ast = parseQuery(parser_storage, engine_push_str, 0, DBMS_DEFAULT_MAX_PARSER_DEPTH);
engine_push_partition_key_ast = extractPartitionKey(engine_push_ast);
primary_key_comma_separated = createCommaSeparatedStringFrom(extractPrimaryKeyColumnNames(engine_push_ast))
engine_push_zk_path = extractReplicatedTableZookeeperPath(engine_push_ast);

@ -273,7 +277,7 @@ inline TaskTable::TaskTable(TaskCluster & parent, const Poco::Util::AbstractConf
auxiliary_engine_split_asts.reserve(number_of_splits);
{
ParserExpressionWithOptionalAlias parser_expression(false);
sharding_key_ast = parseQuery(parser_expression, sharding_key_str, 0);
sharding_key_ast = parseQuery(parser_expression, sharding_key_str, 0, DBMS_DEFAULT_MAX_PARSER_DEPTH);
main_engine_split_ast = createASTStorageDistributed(cluster_push_name, table_push.first, table_push.second,
sharding_key_ast);

@ -291,7 +295,7 @@ inline TaskTable::TaskTable(TaskCluster & parent, const Poco::Util::AbstractConf
if (!where_condition_str.empty())
{
ParserExpressionWithOptionalAlias parser_expression(false);
where_condition_ast = parseQuery(parser_expression, where_condition_str, 0);
where_condition_ast = parseQuery(parser_expression, where_condition_str, 0, DBMS_DEFAULT_MAX_PARSER_DEPTH);

// Will use canonical expression form
where_condition_str = queryToString(where_condition_ast);
@ -53,7 +53,7 @@ int mainEntryClickHouseFormat(int argc, char ** argv)
const char * end = pos + query.size();

ParserQuery parser(end);
ASTPtr res = parseQuery(parser, pos, end, "query", 0);
ASTPtr res = parseQuery(parser, pos, end, "query", 0, DBMS_DEFAULT_MAX_PARSER_DEPTH);

if (!quiet)
{
@ -268,8 +268,10 @@ void LocalServer::processQueries()
String initial_create_query = getInitialCreateTableQuery();
String queries_str = initial_create_query + config().getRawString("query");

const auto & settings = context->getSettingsRef();

std::vector<String> queries;
auto parse_res = splitMultipartQuery(queries_str, queries);
auto parse_res = splitMultipartQuery(queries_str, queries, settings.max_query_size, settings.max_parser_depth);

if (!parse_res.second)
throw Exception("Cannot parse and execute the following part of query: " + String(parse_res.first), ErrorCodes::SYNTAX_ERROR);
@ -120,12 +120,14 @@ void ODBCColumnsInfoHandler::handleRequest(Poco::Net::HTTPServerRequest & reques

SCOPE_EXIT(SQLFreeStmt(hstmt, SQL_DROP));

const auto & context_settings = context->getSettingsRef();

/// TODO Why not do SQLColumns instead?
std::string name = schema_name.empty() ? table_name : schema_name + "." + table_name;
std::stringstream ss;
std::string input = "SELECT * FROM " + name + " WHERE 1 = 0";
ParserQueryWithOutput parser;
ASTPtr select = parseQuery(parser, input.data(), input.data() + input.size(), "", 0);
ASTPtr select = parseQuery(parser, input.data(), input.data() + input.size(), "", context_settings.max_query_size, context_settings.max_parser_depth);

IAST::FormatSettings settings(ss, true);
settings.always_quote_identifiers = true;
@ -15,7 +15,6 @@
#include <common/getFQDNOrHostName.h>
#include <Common/CurrentThread.h>
#include <Common/setThreadName.h>
#include <Common/config.h>
#include <Common/SettingsChanges.h>
#include <Disks/DiskSpaceMonitor.h>
#include <Compression/CompressedReadBuffer.h>

@ -36,6 +35,11 @@
#include <Common/typeid_cast.h>
#include <Poco/Net/HTTPStream.h>

#if !defined(ARCADIA_BUILD)
# include <Common/config.h>
#endif

namespace DB
{
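The include reshuffles that follow all apply one convention: generated configuration headers exist only outside Arcadia builds, so their includes are guarded, and an include nested inside a conditional gets an indented `#`. The pattern in isolation (file names as used throughout the diff):

``` cpp
#if !defined(ARCADIA_BUILD)
#    include <Common/config.h>                 // generated header, absent in Arcadia builds
#endif

#if USE_POCO_NETSSL
#    include <Poco/Net/SecureStreamSocket.h>   // indented '#' marks the nesting level
#endif
```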
@ -1,10 +1,8 @@
#include <Common/config.h>

#include "MySQLHandler.h"

#include <limits>
#include <ext/scope_guard.h>
#include <Columns/ColumnVector.h>
#include <Common/config_version.h>
#include <Common/NetException.h>
#include <Common/OpenSSLHelpers.h>
#include <Core/MySQLProtocol.h>

@ -18,11 +16,15 @@
#include <boost/algorithm/string/replace.hpp>
#include <regex>

#if !defined(ARCADIA_BUILD)
# include <Common/config_version.h>
#endif

#if USE_POCO_NETSSL
#include <Poco/Net/SecureStreamSocket.h>
#include <Poco/Net/SSLManager.h>
#include <Poco/Crypto/CipherFactory.h>
#include <Poco/Crypto/RSAKey.h>
# include <Poco/Crypto/CipherFactory.h>
# include <Poco/Crypto/RSAKey.h>
# include <Poco/Net/SSLManager.h>
# include <Poco/Net/SecureStreamSocket.h>
#endif

namespace DB
@ -1,13 +1,17 @@
#pragma once
#include <Common/config.h>

#include <Poco/Net/TCPServerConnection.h>
#include <common/getFQDNOrHostName.h>
#include <Common/CurrentMetrics.h>
#include <Core/MySQLProtocol.h>
#include "IServer.h"

#if !defined(ARCADIA_BUILD)
# include <Common/config.h>
#endif

#if USE_POCO_NETSSL
#include <Poco/Net/SecureStreamSocket.h>
# include <Poco/Net/SecureStreamSocket.h>
#endif

namespace CurrentMetrics
@ -1,11 +1,15 @@
#pragma once

#include <Common/config.h>
#include <Poco/Net/TCPServerConnectionFactory.h>
#include <atomic>
#include "IServer.h"

#if !defined(ARCADIA_BUILD)
# include <Common/config.h>
#endif

#if USE_SSL
#include <openssl/rsa.h>
# include <openssl/rsa.h>
#endif

namespace DB
@ -15,7 +15,6 @@
#include <ext/scope_guard.h>
#include <common/logger_useful.h>
#include <common/phdr_cache.h>
#include <common/config_common.h>
#include <common/ErrorHandlers.h>
#include <common/getMemoryAmount.h>
#include <common/coverage.h>

@ -26,7 +25,6 @@
#include <Common/StringUtils/StringUtils.h>
#include <Common/ZooKeeper/ZooKeeper.h>
#include <Common/ZooKeeper/ZooKeeperNodeCache.h>
#include "config_core.h"
#include <common/getFQDNOrHostName.h>
#include <Common/getMultipleKeysFromConfig.h>
#include <Common/getNumberOfPhysicalCPUCores.h>

@ -59,19 +57,24 @@
#include "MetricsTransmitter.h"
#include <Common/StatusFile.h>
#include "TCPHandlerFactory.h"
#include "Common/config_version.h"
#include <Common/SensitiveDataMasker.h>
#include <Common/ThreadFuzzer.h>
#include "MySQLHandlerFactory.h"

#if !defined(ARCADIA_BUILD)
# include <common/config_common.h>
# include "config_core.h"
# include "Common/config_version.h"
#endif

#if defined(OS_LINUX)
#include <Common/hasLinuxCapability.h>
#include <sys/mman.h>
# include <sys/mman.h>
# include <Common/hasLinuxCapability.h>
#endif

#if USE_POCO_NETSSL
#include <Poco/Net/Context.h>
#include <Poco/Net/SecureServerSocket.h>
# include <Poco/Net/Context.h>
# include <Poco/Net/SecureServerSocket.h>
#endif

namespace CurrentMetrics

@ -251,7 +254,7 @@ int Server::main(const std::vector<std::string> & /*args*/)

const auto memory_amount = getMemoryAmount();

#if defined(__linux__)
#if defined(OS_LINUX)
std::string executable_path = getExecutablePath();
if (executable_path.empty())
executable_path = "/usr/bin/clickhouse"; /// It is used for information messages.

@ -636,7 +639,7 @@ int Server::main(const std::vector<std::string> & /*args*/)
dns_cache_updater = std::make_unique<DNSCacheUpdater>(*global_context, config().getInt("dns_cache_update_period", 15));
}

#if defined(__linux__)
#if defined(OS_LINUX)
if (!TaskStatsInfoGetter::checkPermissions())
{
LOG_INFO(log, "It looks like the process has no CAP_NET_ADMIN capability, 'taskstats' performance statistics will be disabled."
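Note also the `#if defined(__linux__)` to `#if defined(OS_LINUX)` changes in this file: the check moves from the compiler-provided macro to a project-defined one (presumably set centrally by the build system), keeping platform detection in one place. For instance:

``` cpp
#if defined(OS_LINUX)      // project-owned macro instead of the compiler's __linux__
#    include <sys/mman.h>
#endif
```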
@ -6,7 +6,6 @@
#include <Common/Stopwatch.h>
#include <Common/NetException.h>
#include <Common/setThreadName.h>
#include <Common/config_version.h>
#include <IO/Progress.h>
#include <Compression/CompressedReadBuffer.h>
#include <Compression/CompressedWriteBuffer.h>

@ -33,6 +32,10 @@

#include "TCPHandler.h"

#if !defined(ARCADIA_BUILD)
# include <Common/config_version.h>
#endif

namespace DB
{
programs/server/ya.make Normal file

@ -0,0 +1,30 @@
PROGRAM(clickhouse-server)

PEERDIR(
clickhouse/base/common
clickhouse/base/daemon
clickhouse/base/loggers
clickhouse/src
contrib/libs/poco/NetSSL_OpenSSL
)

SRCS(
clickhouse-server.cpp

HTTPHandler.cpp
HTTPHandlerFactory.cpp
InterserverIOHTTPHandler.cpp
MetricsTransmitter.cpp
MySQLHandler.cpp
MySQLHandlerFactory.cpp
NotFoundHandler.cpp
PingRequestHandler.cpp
PrometheusMetricsWriter.cpp
PrometheusRequestHandler.cpp
ReplicasStatusHandler.cpp
RootRequestHandler.cpp
Server.cpp
TCPHandler.cpp
)

END()

programs/ya.make Normal file

@ -0,0 +1,3 @@
RECURSE(
server
)
@ -253,7 +253,7 @@ private:
}
else
{
if (nodes.contains(keyword))
if (nodes.count(keyword))
throw Exception(keyword + " declared twice", ErrorCodes::LOGICAL_ERROR);
node = std::make_unique<Node>(keyword, node_type);
nodes[node->keyword] = node.get();

@ -279,7 +279,7 @@ private:
{
auto parent_node = std::make_unique<Node>(parent_keyword);
it_parent = nodes.emplace(parent_node->keyword, parent_node.get()).first;
assert(!owned_nodes.contains(parent_node->keyword));
assert(!owned_nodes.count(parent_node->keyword));
std::string_view parent_keyword_as_string_view = parent_node->keyword;
owned_nodes[parent_keyword_as_string_view] = std::move(parent_node);
}

@ -299,9 +299,9 @@ private:

#undef MAKE_ACCESS_FLAGS_TO_KEYWORD_TREE_NODE

if (!owned_nodes.contains("NONE"))
if (!owned_nodes.count("NONE"))
throw Exception("'NONE' not declared", ErrorCodes::LOGICAL_ERROR);
if (!owned_nodes.contains("ALL"))
if (!owned_nodes.count("ALL"))
throw Exception("'ALL' not declared", ErrorCodes::LOGICAL_ERROR);

flags_to_keyword_tree = std::move(owned_nodes["ALL"]);
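The `contains()` to `count()` rewrites here and in the following files are mechanical: `contains` is C++20, while `count` is the pre-C++20 spelling of the same membership test (presumably required by one of the supported toolchains). For containers with unique keys the two are equivalent in boolean context:

``` cpp
#include <string>
#include <unordered_map>

int main()
{
    std::unordered_map<std::string, int> nodes{{"ALL", 1}};

    // C++20:       nodes.contains("ALL")
    // Pre-C++20:   nodes.count("ALL") returns 0 or 1 for unique-key
    //              containers, so it works directly as a boolean.
    return nodes.count("ALL") ? 0 : 1;
}
```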
@ -147,9 +147,9 @@ void ContextAccess::setUser(const UserPtr & user_) const
current_roles.reserve(params.current_roles.size());
for (const auto & id : params.current_roles)
{
if (user->granted_roles.contains(id))
if (user->granted_roles.count(id))
current_roles.push_back(id);
if (user->granted_roles_with_admin_option.contains(id))
if (user->granted_roles_with_admin_option.count(id))
current_roles_with_admin_option.push_back(id);
}
}

@ -358,7 +358,7 @@ void ContextAccess::checkAdminOption(const UUID & role_id) const
return;

auto roles_with_admin_option_loaded = roles_with_admin_option.load();
if (roles_with_admin_option_loaded && roles_with_admin_option_loaded->contains(role_id))
if (roles_with_admin_option_loaded && roles_with_admin_option_loaded->count(role_id))
return;

std::optional<String> role_name = manager->readName(role_id);
@ -32,6 +32,7 @@
#include <Interpreters/InterpreterShowCreateAccessEntityQuery.h>
#include <Interpreters/InterpreterShowGrantsQuery.h>
#include <Common/quoteString.h>
#include <Core/Defines.h>
#include <boost/range/adaptor/map.hpp>
#include <boost/range/algorithm/copy.hpp>
#include <boost/range/algorithm_ext/push_back.hpp>

@ -93,7 +94,7 @@ namespace
const char * end = begin + file_contents.size();
while (pos < end)
{
queries.emplace_back(parseQueryAndMovePosition(parser, pos, end, "", true, 0));
queries.emplace_back(parseQueryAndMovePosition(parser, pos, end, "", true, 0, DBMS_DEFAULT_MAX_PARSER_DEPTH));
while (isWhitespaceASCII(*pos) || *pos == ';')
++pos;
}

@ -560,7 +561,7 @@ std::vector<UUID> DiskAccessStorage::findAllImpl(std::type_index type) const
bool DiskAccessStorage::existsImpl(const UUID & id) const
{
std::lock_guard lock{mutex};
return id_to_entry_map.contains(id);
return id_to_entry_map.count(id);
}

@ -709,7 +710,7 @@ void DiskAccessStorage::updateNoLock(const UUID & id, const UpdateFunc & update_
if (name_changed)
{
const auto & name_to_id_map = name_to_id_maps.at(type);
if (name_to_id_map.contains(new_name))
if (name_to_id_map.count(new_name))
throwNameCollisionCannotRename(type, String{old_name}, new_name);
scheduleWriteLists(type);
}
@ -253,44 +253,44 @@ void ExtendedRoleSet::add(const boost::container::flat_set<UUID> & ids_)

bool ExtendedRoleSet::match(const UUID & id) const
{
return (all || ids.contains(id)) && !except_ids.contains(id);
return (all || ids.count(id)) && !except_ids.count(id);
}

bool ExtendedRoleSet::match(const UUID & user_id, const std::vector<UUID> & enabled_roles) const
{
if (!all && !ids.contains(user_id))
if (!all && !ids.count(user_id))
{
bool found_enabled_role = std::any_of(
enabled_roles.begin(), enabled_roles.end(), [this](const UUID & enabled_role) { return ids.contains(enabled_role); });
enabled_roles.begin(), enabled_roles.end(), [this](const UUID & enabled_role) { return ids.count(enabled_role); });
if (!found_enabled_role)
return false;
}

if (except_ids.contains(user_id))
if (except_ids.count(user_id))
return false;

bool in_except_list = std::any_of(
enabled_roles.begin(), enabled_roles.end(), [this](const UUID & enabled_role) { return except_ids.contains(enabled_role); });
enabled_roles.begin(), enabled_roles.end(), [this](const UUID & enabled_role) { return except_ids.count(enabled_role); });
return !in_except_list;
}

bool ExtendedRoleSet::match(const UUID & user_id, const boost::container::flat_set<UUID> & enabled_roles) const
{
if (!all && !ids.contains(user_id))
if (!all && !ids.count(user_id))
{
bool found_enabled_role = std::any_of(
enabled_roles.begin(), enabled_roles.end(), [this](const UUID & enabled_role) { return ids.contains(enabled_role); });
enabled_roles.begin(), enabled_roles.end(), [this](const UUID & enabled_role) { return ids.count(enabled_role); });
if (!found_enabled_role)
return false;
}

if (except_ids.contains(user_id))
if (except_ids.count(user_id))
return false;

bool in_except_list = std::any_of(
enabled_roles.begin(), enabled_roles.end(), [this](const UUID & enabled_role) { return except_ids.contains(enabled_role); });
enabled_roles.begin(), enabled_roles.end(), [this](const UUID & enabled_role) { return except_ids.count(enabled_role); });
return !in_except_list;
}
@ -255,16 +255,18 @@ void QuotaCache::quotaRemoved(const UUID & quota_id)
void QuotaCache::chooseQuotaToConsume()
{
/// `mutex` is already locked.
std::erase_if(
enabled_quotas,
[&](const std::pair<EnabledQuota::Params, std::weak_ptr<EnabledQuota>> & pr)

for (auto i = enabled_quotas.begin(), e = enabled_quotas.end(); i != e;)
{
auto elem = i->second.lock();
if (!elem)
i = enabled_quotas.erase(i);
else
{
auto elem = pr.second.lock();
if (!elem)
return true; // remove from the `enabled_quotas` list.
chooseQuotaToConsumeFor(*elem);
return false; // keep in the `enabled_quotas` list.
});
++i;
}
}
}

void QuotaCache::chooseQuotaToConsumeFor(EnabledQuota & enabled)
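The same rewrite repeats in RoleCache, RowPolicyCache and SettingsProfilesCache below: C++20's `std::erase_if` over a map is replaced with the classic erase-while-iterating loop (again, presumably for pre-C++20 toolchains). The idiom in isolation, assuming a node-based map whose `erase` returns the next iterator:

``` cpp
#include <map>
#include <memory>

// Drop expired weak_ptr entries and visit the live ones: the pre-C++20
// equivalent of std::erase_if with a side effect on surviving elements.
template <typename Key, typename T, typename Visitor>
void pruneExpiredAndVisit(std::map<Key, std::weak_ptr<T>> & entries, Visitor visit)
{
    for (auto it = entries.begin(), end = entries.end(); it != end;)
    {
        if (auto locked = it->second.lock())
        {
            visit(*locked);
            ++it;
        }
        else
            it = entries.erase(it);   // erase() returns the iterator that follows
    }
}
```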
@ -103,16 +103,17 @@ void RoleCache::collectRolesInfo()
{
/// `mutex` is already locked.

std::erase_if(
enabled_roles,
[&](const std::pair<EnabledRoles::Params, std::weak_ptr<EnabledRoles>> & pr)
for (auto i = enabled_roles.begin(), e = enabled_roles.end(); i != e;)
{
auto elem = i->second.lock();
if (!elem)
i = enabled_roles.erase(i);
else
{
auto elem = pr.second.lock();
if (!elem)
return true; // remove from the `enabled_roles` map.
collectRolesInfoFor(*elem);
return false; // keep in the `enabled_roles` map.
});
++i;
}
}
}
@ -8,6 +8,7 @@
#include <Common/quoteString.h>
#include <ext/range.h>
#include <boost/smart_ptr/make_shared.hpp>
#include <Core/Defines.h>

namespace DB

@ -77,7 +78,7 @@ void RowPolicyCache::PolicyInfo::setPolicy(const RowPolicyPtr & policy_)
try
{
ParserExpression parser;
parsed_conditions[type] = parseQuery(parser, condition, 0);
parsed_conditions[type] = parseQuery(parser, condition, 0, DBMS_DEFAULT_MAX_PARSER_DEPTH);
}
catch (...)
{

@ -178,16 +179,17 @@ void RowPolicyCache::rowPolicyRemoved(const UUID & policy_id)
void RowPolicyCache::mixConditions()
{
/// `mutex` is already locked.
std::erase_if(
enabled_row_policies,
[&](const std::pair<EnabledRowPolicies::Params, std::weak_ptr<EnabledRowPolicies>> & pr)
for (auto i = enabled_row_policies.begin(), e = enabled_row_policies.end(); i != e;)
{
auto elem = i->second.lock();
if (!elem)
i = enabled_row_policies.erase(i);
else
{
auto elem = pr.second.lock();
if (!elem)
return true; // remove from the `enabled_row_policies` map.
mixConditionsFor(*elem);
return false; // keep in the `enabled_row_policies` map.
});
++i;
}
}
}
@ -104,16 +104,17 @@ void SettingsProfilesCache::setDefaultProfileName(const String & default_profile
void SettingsProfilesCache::mergeSettingsAndConstraints()
{
/// `mutex` is already locked.
std::erase_if(
enabled_settings,
[&](const std::pair<EnabledSettings::Params, std::weak_ptr<EnabledSettings>> & pr)
for (auto i = enabled_settings.begin(), e = enabled_settings.end(); i != e;)
{
auto enabled = i->second.lock();
if (!enabled)
i = enabled_settings.erase(i);
else
{
auto enabled = pr.second.lock();
if (!enabled)
return true; // remove from the `enabled_settings` list.
mergeSettingsAndConstraintsFor(*enabled);
return false; // keep in the `enabled_settings` list.
});
++i;
}
}
}

@ -161,7 +162,7 @@ void SettingsProfilesCache::substituteProfiles(SettingsProfileElements & element

auto parent_profile_id = *element.parent_profile;
element.parent_profile.reset();
if (already_substituted.contains(parent_profile_id))
if (already_substituted.count(parent_profile_id))
continue;

already_substituted.insert(parent_profile_id);
src/Access/ya.make Normal file

@ -0,0 +1,40 @@
LIBRARY()

PEERDIR(
clickhouse/src/Common
)

SRCS(
AccessControlManager.cpp
AccessRights.cpp
AccessRightsElement.cpp
AllowedClientHosts.cpp
Authentication.cpp
ContextAccess.cpp
DiskAccessStorage.cpp
EnabledQuota.cpp
EnabledRoles.cpp
EnabledRolesInfo.cpp
EnabledRowPolicies.cpp
EnabledSettings.cpp
ExtendedRoleSet.cpp
IAccessEntity.cpp
IAccessStorage.cpp
MemoryAccessStorage.cpp
MultipleAccessStorage.cpp
Quota.cpp
QuotaCache.cpp
QuotaUsageInfo.cpp
Role.cpp
RoleCache.cpp
RowPolicy.cpp
RowPolicyCache.cpp
SettingsConstraints.cpp
SettingsProfile.cpp
SettingsProfileElement.cpp
SettingsProfilesCache.cpp
User.cpp
UsersConfigAccessStorage.cpp
)

END()
@ -2,6 +2,7 @@
#include <Parsers/ExpressionListParsers.h>
#include <Parsers/parseQuery.h>
#include <Common/typeid_cast.h>
#include <Core/Defines.h>

namespace DB

@ -65,7 +66,7 @@ void getAggregateFunctionNameAndParametersArray(
ParserExpressionList params_parser(false);
ASTPtr args_ast = parseQuery(params_parser,
parameters_str.data(), parameters_str.data() + parameters_str.size(),
"parameters of aggregate function in " + error_context, 0);
"parameters of aggregate function in " + error_context, 0, DBMS_DEFAULT_MAX_PARSER_DEPTH);

if (args_ast->children.empty())
throw Exception("Incorrect list of parameters to aggregate function "
@ -32,7 +32,9 @@ void registerAggregateFunctions()
registerAggregateFunctionUniqUpTo(factory);
registerAggregateFunctionTopK(factory);
registerAggregateFunctionsBitwise(factory);
#if !defined(ARCADIA_BUILD)
registerAggregateFunctionsBitmap(factory);
#endif
registerAggregateFunctionsMaxIntersections(factory);
registerAggregateFunctionHistogram(factory);
registerAggregateFunctionRetention(factory);
src/AggregateFunctions/ya.make Normal file

@ -0,0 +1,54 @@
LIBRARY()

PEERDIR(
clickhouse/src/Common
)

SRCS(
AggregateFunctionAggThrow.cpp
AggregateFunctionArray.cpp
AggregateFunctionAvg.cpp
AggregateFunctionAvgWeighted.cpp
AggregateFunctionBitwise.cpp
AggregateFunctionBoundingRatio.cpp
AggregateFunctionCategoricalInformationValue.cpp
AggregateFunctionCombinatorFactory.cpp
AggregateFunctionCount.cpp
AggregateFunctionEntropy.cpp
AggregateFunctionFactory.cpp
AggregateFunctionForEach.cpp
AggregateFunctionGroupArray.cpp
AggregateFunctionGroupArrayInsertAt.cpp
AggregateFunctionGroupArrayMoving.cpp
AggregateFunctionGroupUniqArray.cpp
AggregateFunctionHistogram.cpp
AggregateFunctionIf.cpp
AggregateFunctionMaxIntersections.cpp
AggregateFunctionMerge.cpp
AggregateFunctionMinMaxAny.cpp
AggregateFunctionMLMethod.cpp
AggregateFunctionNull.cpp
AggregateFunctionOrFill.cpp
AggregateFunctionQuantile.cpp
AggregateFunctionResample.cpp
AggregateFunctionRetention.cpp
AggregateFunctionSequenceMatch.cpp
AggregateFunctionSimpleLinearRegression.cpp
AggregateFunctionState.cpp
AggregateFunctionStatistics.cpp
AggregateFunctionStatisticsSimple.cpp
AggregateFunctionSum.cpp
AggregateFunctionSumMap.cpp
AggregateFunctionTimeSeriesGroupSum.cpp
AggregateFunctionTopK.cpp
AggregateFunctionUniq.cpp
AggregateFunctionUniqCombined.cpp
AggregateFunctionUniqUpTo.cpp
AggregateFunctionWindowFunnel.cpp
parseAggregateFunctionParameters.cpp
registerAggregateFunctions.cpp
UniqCombinedBiasData.cpp
UniqVariadicHash.cpp
)

END()
@ -398,6 +398,7 @@ endif()
target_link_libraries(clickhouse_common_io
PUBLIC
${CITYHASH_LIBRARIES}
pcg_random
PRIVATE
${Poco_XML_LIBRARY}
${ZLIB_LIBRARIES}

@ -453,9 +454,6 @@ dbms_target_link_libraries (
target_include_directories(clickhouse_common_io PUBLIC ${CMAKE_CURRENT_BINARY_DIR}/Core/include) # uses some includes from core
dbms_target_include_directories(PUBLIC ${CMAKE_CURRENT_BINARY_DIR}/Core/include)

target_include_directories(clickhouse_common_io SYSTEM PUBLIC ${PCG_RANDOM_INCLUDE_DIR})
dbms_target_include_directories(SYSTEM PUBLIC ${PCG_RANDOM_INCLUDE_DIR})

dbms_target_include_directories(SYSTEM BEFORE PUBLIC ${PDQSORT_INCLUDE_DIR})

if (NOT USE_INTERNAL_LZ4_LIBRARY AND LZ4_INCLUDE_DIR)
@ -19,16 +19,19 @@
#include <Common/CurrentMetrics.h>
#include <Common/DNSResolver.h>
#include <Common/StringUtils/StringUtils.h>
#include <Common/config_version.h>
#include <Interpreters/ClientInfo.h>
#include <Compression/CompressionFactory.h>
#include <Processors/Pipe.h>
#include <Processors/ISink.h>
#include <Processors/Executors/PipelineExecutor.h>

#include <Common/config.h>
#if !defined(ARCADIA_BUILD)
# include <Common/config_version.h>
# include <Common/config.h>
#endif

#if USE_POCO_NETSSL
#include <Poco/Net/SecureStreamSocket.h>
# include <Poco/Net/SecureStreamSocket.h>
#endif

namespace CurrentMetrics
src/Client/ya.make Normal file

@ -0,0 +1,15 @@
LIBRARY()

PEERDIR(
clickhouse/src/Common
contrib/libs/poco/NetSSL_OpenSSL
)

SRCS(
Connection.cpp
ConnectionPoolWithFailover.cpp
MultiplexedConnections.cpp
TimeoutSetter.cpp
)

END()
@ -1,17 +1,19 @@
#include <Columns/Collator.h>

#include "config_core.h"
#if !defined(ARCADIA_BUILD)
# include "config_core.h"
#endif

#if USE_ICU
#include <unicode/ucol.h>
#include <unicode/unistr.h>
#include <unicode/locid.h>
#include <unicode/ucnv.h>
# include <unicode/locid.h>
# include <unicode/ucnv.h>
# include <unicode/ucol.h>
# include <unicode/unistr.h>
#else
#ifdef __clang__
#pragma clang diagnostic ignored "-Wunused-private-field"
#pragma clang diagnostic ignored "-Wmissing-noreturn"
#endif
# if defined(__clang__)
#    pragma clang diagnostic ignored "-Wunused-private-field"
#    pragma clang diagnostic ignored "-Wmissing-noreturn"
# endif
#endif

#include <Common/Exception.h>
src/Columns/ya.make Normal file

@ -0,0 +1,34 @@
LIBRARY()

ADDINCL(
contrib/libs/icu/common
contrib/libs/icu/i18n
contrib/libs/pdqsort
)

PEERDIR(
clickhouse/src/Common
contrib/libs/icu
contrib/libs/pdqsort
)

SRCS(
Collator.cpp
ColumnAggregateFunction.cpp
ColumnArray.cpp
ColumnConst.cpp
ColumnDecimal.cpp
ColumnFixedString.cpp
ColumnFunction.cpp
ColumnLowCardinality.cpp
ColumnNullable.cpp
ColumnsCommon.cpp
ColumnString.cpp
ColumnTuple.cpp
ColumnVector.cpp
FilterDescription.cpp
getLeastSuperColumn.cpp
IColumn.cpp
)

END()
@ -1,5 +1,8 @@
#include <Common/ClickHouseRevision.h>
#include <Common/config_version.h>

#if !defined(ARCADIA_BUILD)
# include <Common/config_version.h>
#endif

namespace ClickHouseRevision
{
@ -8,11 +8,14 @@
#include <IO/Operators.h>
#include <IO/ReadBufferFromString.h>
#include <common/demangle.h>
#include <Common/config_version.h>
#include <Common/formatReadable.h>
#include <Common/filesystemHelpers.h>
#include <filesystem>

#if !defined(ARCADIA_BUILD)
# include <Common/config_version.h>
#endif

namespace DB
{
@ -1,8 +1,11 @@
#pragma once
#include <Common/config.h>
#if USE_SSL

#include <Core/Types.h>
#if !defined(ARCADIA_BUILD)
# include <Common/config.h>
#endif

#if USE_SSL
# include <Core/Types.h>

namespace DB
@ -5,12 +5,16 @@
#include <memory>
#include <optional>
#include <Common/StringSearcher.h>
#include <Common/config.h>
#include <re2/re2.h>

#if !defined(ARCADIA_BUILD)
# include <Common/config.h>
#endif

#if USE_RE2_ST
#include <re2_st/re2.h>
# include <re2_st/re2.h>
#else
#define re2_st re2
# define re2_st re2
#endif
@ -6,7 +6,6 @@
#include <Common/TraceCollector.h>
#include <Common/thread_local_rng.h>
#include <common/StringRef.h>
#include <common/config_common.h>
#include <common/logger_useful.h>
#include <common/phdr_cache.h>
@ -1,11 +1,14 @@
#pragma once

#include <Core/Types.h>
#include <Common/config.h>
#include <common/config_common.h>
#include <signal.h>
#include <time.h>

#if !defined(ARCADIA_BUILD)
# include <Common/config.h>
# include <common/config_common.h>
#endif

namespace Poco
{
@ -154,23 +154,17 @@ RWLockImpl::getLock(RWLockImpl::Type type, const String & query_id, const std::c
writers_queue.emplace_back(type); /// SM1: may throw (nothing to roll back)
}
else if (readers_queue.empty() ||
(rdlock_owner == readers_queue.begin() && !writers_queue.empty()))
(rdlock_owner == readers_queue.begin() && readers_queue.size() == 1 && !writers_queue.empty()))
{
readers_queue.emplace_back(type); /// SM1: may throw (nothing to roll back)
}
GroupsContainer::iterator it_group =
(type == Type::Write) ? std::prev(writers_queue.end()) : std::prev(readers_queue.end());

/// Lock is free to acquire
if (rdlock_owner == readers_queue.end() && wrlock_owner == writers_queue.end())
{
if (type == Type::Read)
{
rdlock_owner = it_group; /// SM2: nothrow
}
else
{
wrlock_owner = it_group; /// SM2: nothrow
}
(type == Read ? rdlock_owner : wrlock_owner) = it_group; /// SM2: nothrow
}
else
{

@ -192,17 +186,10 @@ RWLockImpl::getLock(RWLockImpl::Type type, const String & query_id, const std::c
/// Step 3a. Check if we must handle timeout and exit
if (!wait_result) /// Wait timed out!
{
/// Rollback(SM1): nothrow
if (it_group->requests == 0)
{
/// Roll back SM1
if (type == Read)
{
readers_queue.erase(it_group); /// Rollback(SM1): nothrow
}
else
{
writers_queue.erase(it_group); /// Rollback(SM1): nothrow
}
(type == Read ? readers_queue : writers_queue).erase(it_group);
}

return nullptr;

@ -224,7 +211,7 @@ RWLockImpl::getLock(RWLockImpl::Type type, const String & query_id, const std::c
/// Methods std::list<>::emplace_back() and std::unordered_map<>::emplace() provide strong exception safety
/// We only need to roll back the changes to these objects: owner_queries and the readers/writers queue
if (it_group->requests == 0)
eraseGroup(it_group); /// Rollback(SM1): nothrow
dropOwnerGroupAndPassOwnership(it_group); /// Rollback(SM1): nothrow

throw;
}

@ -272,11 +259,11 @@ void RWLockImpl::unlock(GroupsContainer::iterator group_it, const String & query

/// If we are the last remaining referrer, remove this QNode and notify the next one
if (--group_it->requests == 0) /// SM: nothrow
eraseGroup(group_it);
dropOwnerGroupAndPassOwnership(group_it);
}

void RWLockImpl::eraseGroup(GroupsContainer::iterator group_it) noexcept
void RWLockImpl::dropOwnerGroupAndPassOwnership(GroupsContainer::iterator group_it) noexcept
{
rdlock_owner = readers_queue.end();
wrlock_owner = writers_queue.end();
@ -19,9 +19,15 @@ class RWLockImpl;
using RWLock = std::shared_ptr<RWLockImpl>;

/// Implements shared lock with FIFO service
/// Implements Readers-Writers locking algorithm that serves requests in "Phase Fair" order.
/// (Phase Fair RWLock as suggested in https://www.cs.unc.edu/~anderson/papers/rtsj10-for-web.pdf)
/// Can be acquired recursively (for the same query) in Read mode
/// It is used for synchronizing access to various objects on query level (i.e. Storages).
///
/// In general, ClickHouse processes queries by multiple threads of execution in parallel.
/// As opposed to the standard OS synchronization primitives (mutexes), this implementation allows
/// unlock() to be called by a thread other than the one, that called lock().
/// It is also possible to acquire RWLock in Read mode without waiting (FastPath) by multiple threads,
/// that execute the same query (share the same query_id).
///
/// NOTE: it is important to allow acquiring the same lock in Read mode without waiting if it is already
/// acquired by another thread of the same query. Otherwise the following deadlock is possible:

@ -82,6 +88,6 @@ private:
private:
RWLockImpl() = default;
void unlock(GroupsContainer::iterator group_it, const String & query_id) noexcept;
void eraseGroup(GroupsContainer::iterator group_it) noexcept;
void dropOwnerGroupAndPassOwnership(GroupsContainer::iterator group_it) noexcept;
};
}
@ -3,7 +3,6 @@
#include <Common/Dwarf.h>
#include <Common/Elf.h>
#include <Common/SymbolIndex.h>
#include <Common/config.h>
#include <Common/MemorySanitizer.h>
#include <common/SimpleCache.h>
#include <common/demangle.h>

@ -14,8 +13,12 @@
#include <sstream>
#include <unordered_map>

#if !defined(ARCADIA_BUILD)
# include <Common/config.h>
#endif

#if USE_UNWIND
# include <libunwind.h>
#endif

std::string signalToErrorMessage(int sig, const siginfo_t & info, const ucontext_t & context)
@ -13,10 +13,13 @@
#include <Poco/URI.h>
#include <Poco/Util/AbstractConfiguration.h>
#include <Common/ShellCommand.h>
#include <Common/config.h>
#include <common/logger_useful.h>
#include <ext/range.h>

#if !defined(ARCADIA_BUILD)
# include <Common/config.h>
#endif

namespace DB
{
namespace ErrorCodes
@ -1,11 +1,16 @@
#include <Common/getNumberOfPhysicalCPUCores.h>
#include <thread>

#include <Common/config.h>
#if !defined(ARCADIA_BUILD)
# include <Common/config.h>
#else
# include <libcpuid/libcpuid.h>
#endif

#if USE_CPUID
# include <libcpuid/libcpuid.h>
#elif USE_CPUINFO
# include <cpuinfo.h>
#endif
@ -1,4 +1,3 @@
|
||||
#include <common/config_common.h>
|
||||
#include <common/memory.h>
|
||||
#include <Common/MemoryTracker.h>
|
||||
|
||||
@ -14,14 +13,13 @@
|
||||
/// Replace default new/delete with memory tracking versions.
|
||||
/// @sa https://en.cppreference.com/w/cpp/memory/new/operator_new
|
||||
/// https://en.cppreference.com/w/cpp/memory/new/operator_delete
|
||||
#if !UNBUNDLED
|
||||
|
||||
namespace Memory
|
||||
{
|
||||
|
||||
inline ALWAYS_INLINE void trackMemory(std::size_t size)
|
||||
{
|
||||
#if USE_JEMALLOC
|
||||
#if USE_JEMALLOC && JEMALLOC_VERSION_MAJOR >= 5
|
||||
/// The nallocx() function allocates no memory, but it performs the same size computation as the mallocx() function
|
||||
/// @note je_mallocx() != je_malloc(). It's expected they don't differ much in allocation logic.
|
||||
if (likely(size != 0))
|
||||
@@ -49,18 +47,18 @@ inline ALWAYS_INLINE void untrackMemory(void * ptr [[maybe_unused]], std::size_t
 {
     try
     {
-#if USE_JEMALLOC
+#if USE_JEMALLOC && JEMALLOC_VERSION_MAJOR >= 5
         /// @note It's also possible to use je_malloc_usable_size() here.
         if (likely(ptr != nullptr))
             CurrentMemoryTracker::free(sallocx(ptr, 0));
 #else
         if (size)
             CurrentMemoryTracker::free(size);
-# ifdef _GNU_SOURCE
+# if defined(_GNU_SOURCE)
         /// It's inaccurate resource free for sanitizers. malloc_usable_size() result is greater or equal to allocated size.
         else
             CurrentMemoryTracker::free(malloc_usable_size(ptr));
 # endif
 #endif
     }
     catch (...)
@@ -130,26 +128,3 @@ void operator delete[](void * ptr, std::size_t size) noexcept
     Memory::untrackMemory(ptr, size);
     Memory::deleteSized(ptr, size);
 }
-
-#else
-
-/// new
-
-void * operator new(std::size_t size) { return Memory::newImpl(size); }
-void * operator new[](std::size_t size) { return Memory::newImpl(size); }
-
-void * operator new(std::size_t size, const std::nothrow_t &) noexcept { return Memory::newNoExept(size); }
-void * operator new[](std::size_t size, const std::nothrow_t &) noexcept { return Memory::newNoExept(size); }
-
-/// delete
-
-void operator delete(void * ptr) noexcept { Memory::deleteImpl(ptr); }
-void operator delete[](void * ptr) noexcept { Memory::deleteImpl(ptr); }
-
-void operator delete(void * ptr, const std::nothrow_t &) noexcept { Memory::deleteImpl(ptr); }
-void operator delete[](void * ptr, const std::nothrow_t &) noexcept { Memory::deleteImpl(ptr); }
-
-void operator delete(void * ptr, std::size_t size) noexcept { Memory::deleteSized(ptr, size); }
-void operator delete[](void * ptr, std::size_t size) noexcept { Memory::deleteSized(ptr, size); }
-
-#endif
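The hunks above narrow the jemalloc-specific accounting to jemalloc >= 5 and drop the untracked `#else` operators. As a rough, self-contained sketch of the underlying technique (a global counter stands in for CurrentMemoryTracker; names are hypothetical, not ClickHouse's implementation):

```cpp
#include <atomic>
#include <cstdio>
#include <cstdlib>
#include <new>

/// Stand-in for CurrentMemoryTracker (illustrative assumption; the real
/// tracker is per-thread and propagates to parent trackers).
static std::atomic<std::size_t> allocated{0};

static void * trackedAlloc(std::size_t size)
{
    allocated.fetch_add(size, std::memory_order_relaxed);  /// track before allocating
    if (void * ptr = std::malloc(size))
        return ptr;
    throw std::bad_alloc{};
}

void * operator new(std::size_t size) { return trackedAlloc(size); }
void * operator new[](std::size_t size) { return trackedAlloc(size); }

/// Sized deletes can untrack exactly, without querying the allocator.
void operator delete(void * ptr, std::size_t size) noexcept
{
    allocated.fetch_sub(size, std::memory_order_relaxed);
    std::free(ptr);
}
void operator delete[](void * ptr, std::size_t size) noexcept
{
    allocated.fetch_sub(size, std::memory_order_relaxed);
    std::free(ptr);
}

/// Unsized fallbacks: without something like jemalloc's sallocx() or
/// malloc_usable_size(), the exact size is unknown here — mirroring the
/// #else branch in the hunk above.
void operator delete(void * ptr) noexcept { std::free(ptr); }
void operator delete[](void * ptr) noexcept { std::free(ptr); }

int main()
{
    auto * p = new int(42);
    std::printf("tracked: %zu bytes\n", allocated.load());
    delete p;  /// the compiler may invoke the sized overload here
}
```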
src/Common/ya.make (new file, 109 lines)
@@ -0,0 +1,109 @@
LIBRARY()

ADDINCL (
    GLOBAL clickhouse/src
    contrib/libs/libcpuid
    contrib/libs/libunwind/include
    GLOBAL contrib/restricted/ryu
)

PEERDIR(
    clickhouse/base/common
    clickhouse/base/pcg-random
    clickhouse/base/widechar_width
    contrib/libs/libcpuid/libcpuid
    contrib/libs/openssl
    contrib/libs/re2
    contrib/restricted/ryu
)

# TODO: stub for config_version.h
CFLAGS (GLOBAL -DDBMS_NAME=\"ClickHouse\")
CFLAGS (GLOBAL -DDBMS_VERSION_MAJOR=0)
CFLAGS (GLOBAL -DDBMS_VERSION_MINOR=0)
CFLAGS (GLOBAL -DDBMS_VERSION_PATCH=0)
CFLAGS (GLOBAL -DVERSION_FULL=\"ClickHouse\")
CFLAGS (GLOBAL -DVERSION_INTEGER=0)
CFLAGS (GLOBAL -DVERSION_NAME=\"ClickHouse\")
CFLAGS (GLOBAL -DVERSION_OFFICIAL=\"\\\(arcadia\\\)\")
CFLAGS (GLOBAL -DVERSION_REVISION=0)
CFLAGS (GLOBAL -DVERSION_STRING=\"Unknown\")

SRCS(
    ActionLock.cpp
    AlignedBuffer.cpp
    checkStackSize.cpp
    ClickHouseRevision.cpp
    Config/AbstractConfigurationComparison.cpp
    Config/ConfigProcessor.cpp
    Config/configReadClient.cpp
    Config/ConfigReloader.cpp
    createHardLink.cpp
    CurrentMetrics.cpp
    CurrentThread.cpp
    DNSResolver.cpp
    Dwarf.cpp
    Elf.cpp
    ErrorCodes.cpp
    escapeForFileName.cpp
    Exception.cpp
    ExternalLoaderStatus.cpp
    FieldVisitors.cpp
    FileChecker.cpp
    filesystemHelpers.cpp
    formatIPv6.cpp
    formatReadable.cpp
    getExecutablePath.cpp
    getMultipleKeysFromConfig.cpp
    getNumberOfPhysicalCPUCores.cpp
    hasLinuxCapability.cpp
    hex.cpp
    IntervalKind.cpp
    IPv6ToBinary.cpp
    isLocalAddress.cpp
    Macros.cpp
    malloc.cpp
    MemoryTracker.cpp
    new_delete.cpp
    OptimizedRegularExpression.cpp
    parseAddress.cpp
    parseGlobs.cpp
    parseRemoteDescription.cpp
    PipeFDs.cpp
    PODArray.cpp
    ProfileEvents.cpp
    QueryProfiler.cpp
    quoteString.cpp
    randomSeed.cpp
    RemoteHostFilter.cpp
    RWLock.cpp
    SensitiveDataMasker.cpp
    setThreadName.cpp
    SharedLibrary.cpp
    ShellCommand.cpp
    StackTrace.cpp
    StatusFile.cpp
    StatusInfo.cpp
    Stopwatch.cpp
    StringUtils/StringUtils.cpp
    StudentTTest.cpp
    SymbolIndex.cpp
    TaskStatsInfoGetter.cpp
    TerminalSize.cpp
    thread_local_rng.cpp
    ThreadFuzzer.cpp
    ThreadPool.cpp
    ThreadStatus.cpp
    TraceCollector.cpp
    UTF8Helpers.cpp
    WeakHash.cpp
    ZooKeeper/IKeeper.cpp
    ZooKeeper/Lock.cpp
    ZooKeeper/TestKeeper.cpp
    ZooKeeper/ZooKeeper.cpp
    ZooKeeper/ZooKeeperHolder.cpp
    ZooKeeper/ZooKeeperImpl.cpp
    ZooKeeper/ZooKeeperNodeCache.cpp
)

END()
@@ -462,7 +462,7 @@ CompressionCodecPtr makeCodec(const std::string & codec_string, const DataTypePt
 {
     const std::string codec_statement = "(" + codec_string + ")";
     Tokens tokens(codec_statement.begin().base(), codec_statement.end().base());
-    IParser::Pos token_iterator(tokens);
+    IParser::Pos token_iterator(tokens, 0);
 
     Expected expected;
     ASTPtr codec_ast;
src/Compression/ya.make (new file, 33 lines)
@@ -0,0 +1,33 @@
LIBRARY()

ADDINCL(
    contrib/libs/lz4
    contrib/libs/zstd
)

PEERDIR(
    clickhouse/src/Common
    contrib/libs/lz4
    contrib/libs/zstd
)

SRCS(
    CachedCompressedReadBuffer.cpp
    CompressedReadBuffer.cpp
    CompressedReadBufferBase.cpp
    CompressedReadBufferFromFile.cpp
    CompressedWriteBuffer.cpp
    CompressionCodecDelta.cpp
    CompressionCodecDoubleDelta.cpp
    CompressionCodecGorilla.cpp
    CompressionCodecLZ4.cpp
    CompressionCodecMultiple.cpp
    CompressionCodecNone.cpp
    CompressionCodecT64.cpp
    CompressionCodecZSTD.cpp
    CompressionFactory.cpp
    ICompressionCodec.cpp
    LZ4_decompress_faster.cpp
)

END()
@@ -95,3 +95,6 @@
 /// Actually, there may be multiple acquisitions of different locks for a given table within one query.
 /// Check with IStorage class for the list of possible locks
 #define DBMS_DEFAULT_LOCK_ACQUIRE_TIMEOUT_SEC 120
+
+/// Default limit on recursion depth of recursive descent parser.
+#define DBMS_DEFAULT_MAX_PARSER_DEPTH 1000
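The new macro gives the recursive descent parser a default recursion budget, so a maliciously nested query fails with an error instead of overflowing the stack. A minimal self-contained sketch of the technique on a toy grammar (nested parentheses; not ClickHouse's parser):

```cpp
#include <cstdint>
#include <stdexcept>
#include <string>

constexpr uint32_t DEFAULT_MAX_PARSER_DEPTH = 1000;  /// mirrors DBMS_DEFAULT_MAX_PARSER_DEPTH

/// Parses nested parentheses, e.g. "((()))", failing once nesting exceeds
/// max_depth. This is how a depth cap turns a stack overflow into an error.
struct ParenParser
{
    const char * pos;
    uint32_t max_depth;

    void parseGroup(uint32_t depth)
    {
        if (max_depth && depth > max_depth)
            throw std::runtime_error("Maximum parse depth exceeded");
        if (*pos == '(')
        {
            ++pos;
            parseGroup(depth + 1);
            if (*pos != ')')
                throw std::runtime_error("Expected ')'");
            ++pos;
        }
    }
};

int main()
{
    std::string query(2000, '(');
    query += std::string(2000, ')');
    ParenParser parser{query.c_str(), DEFAULT_MAX_PARSER_DEPTH};
    try { parser.parseGroup(1); }
    catch (const std::exception &) { /* "Maximum parse depth exceeded" */ }
}
```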
@@ -157,7 +157,9 @@ private:
 template <> struct NearestFieldTypeImpl<char> { using Type = std::conditional_t<is_signed_v<char>, Int64, UInt64>; };
 template <> struct NearestFieldTypeImpl<signed char> { using Type = Int64; };
 template <> struct NearestFieldTypeImpl<unsigned char> { using Type = UInt64; };
+#if __cplusplus > 201703L
+template <> struct NearestFieldTypeImpl<char8_t> { using Type = UInt64; };
+#endif
 
 template <> struct NearestFieldTypeImpl<UInt16> { using Type = UInt64; };
 template <> struct NearestFieldTypeImpl<UInt32> { using Type = UInt64; };
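For readers unfamiliar with the trait being extended: it maps each narrow type onto the widest Field type that can store it, and `char8_t` exists only from C++20 on, hence the language-version guard. A compilable miniature of the same pattern (names are illustrative, not ClickHouse's):

```cpp
#include <cstdint>
#include <type_traits>

/// Sketch of the trait pattern above: map every narrow integer type to the
/// widest type that can hold it.
template <typename T> struct NearestFieldTypeSketch;
template <> struct NearestFieldTypeSketch<unsigned char> { using Type = uint64_t; };
#if __cplusplus > 201703L
/// char8_t only exists from C++20 on, hence the language-version guard.
template <> struct NearestFieldTypeSketch<char8_t> { using Type = uint64_t; };
#endif

static_assert(std::is_same_v<NearestFieldTypeSketch<unsigned char>::Type, uint64_t>);

int main() {}
```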
@@ -22,10 +22,14 @@
 #include <Poco/Net/StreamSocket.h>
 #include <Poco/RandomStream.h>
 #include <Poco/SHA1Engine.h>
-#include "config_core.h"
 
+#if !defined(ARCADIA_BUILD)
+# include "config_core.h"
+#endif
+
 #if USE_SSL
-#include <openssl/pem.h>
-#include <openssl/rsa.h>
+# include <openssl/pem.h>
+# include <openssl/rsa.h>
 #endif
 
 /// Implementation of MySQL wire protocol.
@@ -137,8 +137,12 @@ NamesAndTypesList NamesAndTypesList::filter(const Names & names) const
 
 NamesAndTypesList NamesAndTypesList::addTypes(const Names & names) const
 {
-    /// NOTE It's better to make a map in `IStorage` than to create it here every time again.
-    ::google::dense_hash_map<StringRef, const DataTypePtr *, StringRefHash> types;
+    /// NOTE: It's better to make a map in `IStorage` than to create it here every time again.
+#if !defined(ARCADIA_BUILD)
+    google::dense_hash_map<StringRef, const DataTypePtr *, StringRefHash> types;
+#else
+    google::sparsehash::dense_hash_map<StringRef, const DataTypePtr *, StringRefHash> types;
+#endif
     types.set_empty_key(StringRef());
 
     for (const NameAndTypePair & column : *this)
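Both branches select a `dense_hash_map`, which, unlike `std::unordered_map`, reserves a designated key value to mark empty buckets — that is what the `set_empty_key(StringRef())` call satisfies. A small standalone example (assumes the sparsehash library is installed; otherwise the idea transfers unchanged):

```cpp
#include <cstdio>
#include <string>

#include <sparsehash/dense_hash_map>  /// requires the sparsehash library

int main()
{
    google::dense_hash_map<std::string, int> types;
    /// dense_hash_map reserves one key value to mark empty buckets; it must be
    /// set before any insertion — hence types.set_empty_key(StringRef()) above.
    types.set_empty_key(std::string());
    types["UInt64"] = 1;
    std::printf("%d\n", types["UInt64"]);
}
```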
@@ -404,7 +404,7 @@ struct Settings : public SettingsCollection<Settings>
     M(SettingBool, use_compact_format_in_distributed_parts_names, false, "Changes format of directories names for distributed table insert parts.", 0) \
     M(SettingUInt64, multiple_joins_rewriter_version, 1, "1 or 2. Second rewriter version knows about table columns and keep not clashed names as is.", 0) \
     M(SettingBool, validate_polygons, true, "Throw exception if polygon is invalid in function pointInPolygon (e.g. self-tangent, self-intersecting). If the setting is false, the function will accept invalid polygons but may silently return wrong result.", 0) \
-    M(SettingUInt64, max_parser_depth, 1000, "Maximum parser depth.", 0) \
+    M(SettingUInt64, max_parser_depth, DBMS_DEFAULT_MAX_PARSER_DEPTH, "Maximum parser depth (recursion depth of recursive descent parser).", 0) \
     M(SettingSeconds, temporary_live_view_timeout, DEFAULT_TEMPORARY_LIVE_VIEW_TIMEOUT_SEC, "Timeout after which temporary live view is deleted.", 0) \
     M(SettingBool, transform_null_in, false, "If enabled, NULL values will be matched with 'IN' operator as if they are considered equal.", 0) \
     M(SettingBool, allow_nondeterministic_mutations, false, "Allow non-deterministic functions in ALTER UPDATE/ALTER DELETE statements", 0) \
src/Core/ya.make (new file, 24 lines)
@@ -0,0 +1,24 @@
LIBRARY()

PEERDIR(
    clickhouse/src/Common
    contrib/libs/sparsehash
    contrib/restricted/boost/libs
)

SRCS(
    BackgroundSchedulePool.cpp
    Block.cpp
    BlockInfo.cpp
    ColumnWithTypeAndName.cpp
    ExternalResultDescription.cpp
    ExternalTable.cpp
    Field.cpp
    iostream_debug_helpers.cpp
    MySQLProtocol.cpp
    NamesAndTypes.cpp
    Settings.cpp
    SettingsCollection.cpp
)

END()
@@ -33,7 +33,7 @@ try
     std::string input = "SELECT number, number / 3, number * number";
 
     ParserSelectQuery parser;
-    ASTPtr ast = parseQuery(parser, input.data(), input.data() + input.size(), "", 0);
+    ASTPtr ast = parseQuery(parser, input.data(), input.data() + input.size(), "", 0, 0);
 
     SharedContextHolder shared_context = Context::createShared();
     Context context = Context::createGlobal(shared_context.get());
@@ -35,7 +35,7 @@ try
     std::string input = "SELECT number, number % 3 == 1";
 
     ParserSelectQuery parser;
-    ASTPtr ast = parseQuery(parser, input.data(), input.data() + input.size(), "", 0);
+    ASTPtr ast = parseQuery(parser, input.data(), input.data() + input.size(), "", 0, 0);
 
     formatAST(*ast, std::cerr);
     std::cerr << std::endl;
src/DataStreams/ya.make (new file, 71 lines)
@@ -0,0 +1,71 @@
LIBRARY()

PEERDIR(
    clickhouse/src/Common
)

NO_COMPILER_WARNINGS()

SRCS(
    AddingDefaultBlockOutputStream.cpp
    AddingDefaultsBlockInputStream.cpp
    AggregatingBlockInputStream.cpp
    AggregatingSortedBlockInputStream.cpp
    AsynchronousBlockInputStream.cpp
    BlockIO.cpp
    BlockStreamProfileInfo.cpp
    CheckConstraintsBlockOutputStream.cpp
    CheckSortedBlockInputStream.cpp
    CollapsingFinalBlockInputStream.cpp
    CollapsingSortedBlockInputStream.cpp
    ColumnGathererStream.cpp
    ConvertingBlockInputStream.cpp
    copyData.cpp
    CountingBlockOutputStream.cpp
    CreatingSetsBlockInputStream.cpp
    CubeBlockInputStream.cpp
    DistinctBlockInputStream.cpp
    DistinctSortedBlockInputStream.cpp
    ExecutionSpeedLimits.cpp
    ExpressionBlockInputStream.cpp
    FillingBlockInputStream.cpp
    FilterBlockInputStream.cpp
    FilterColumnsBlockInputStream.cpp
    finalizeBlock.cpp
    FinishSortingBlockInputStream.cpp
    GraphiteRollupSortedBlockInputStream.cpp
    IBlockInputStream.cpp
    InputStreamFromASTInsertQuery.cpp
    InternalTextLogsRowOutputStream.cpp
    LimitBlockInputStream.cpp
    LimitByBlockInputStream.cpp
    materializeBlock.cpp
    MaterializingBlockInputStream.cpp
    MergeSortingBlockInputStream.cpp
    MergingAggregatedBlockInputStream.cpp
    MergingAggregatedMemoryEfficientBlockInputStream.cpp
    MergingSortedBlockInputStream.cpp
    narrowBlockInputStreams.cpp
    NativeBlockInputStream.cpp
    NativeBlockOutputStream.cpp
    ParallelAggregatingBlockInputStream.cpp
    ParallelParsingBlockInputStream.cpp
    PartialSortingBlockInputStream.cpp
    processConstants.cpp
    PushingToViewsBlockOutputStream.cpp
    RemoteBlockInputStream.cpp
    RemoteBlockOutputStream.cpp
    ReplacingSortedBlockInputStream.cpp
    ReverseBlockInputStream.cpp
    RollupBlockInputStream.cpp
    SizeLimits.cpp
    SquashingBlockInputStream.cpp
    SquashingBlockOutputStream.cpp
    SquashingTransform.cpp
    SummingSortedBlockInputStream.cpp
    TotalsHavingBlockInputStream.cpp
    TTLBlockInputStream.cpp
    VersionedCollapsingSortedBlockInputStream.cpp
)

END()
Some files were not shown because too many files have changed in this diff.