Mirror of https://github.com/ClickHouse/ClickHouse.git (synced 2024-11-30 19:42:00 +00:00)

Comparing 258 commits: 266f980016...a0bef3dfb9
.clang-tidy
@@ -10,7 +10,7 @@ HeaderFilterRegex: '^.*/(base|src|programs|utils)/.*(h|hpp)$'
Checks: [
    '*',

    '-abseil-*',
    '-abseil-string-find-str-contains', # disabled to avoid a misleading suggestion (obsolete absl::StrContains() instead of C++23 std::string::contains())

    '-altera-*',

@@ -28,13 +28,10 @@ Checks: [
    '-bugprone-multi-level-implicit-pointer-conversion',
    '-bugprone-narrowing-conversions',
    '-bugprone-not-null-terminated-result',
    '-bugprone-reserved-identifier', # useful but too slow, TODO retry when https://reviews.llvm.org/rG1c282052624f9d0bd273bde0b47b30c96699c6c7 is merged
    '-bugprone-unchecked-optional-access',
    '-bugprone-crtp-constructor-accessibility',

    '-cert-dcl16-c',
    '-cert-dcl37-c',
    '-cert-dcl51-cpp',
    '-cert-err58-cpp',
    '-cert-msc32-c',
    '-cert-msc51-cpp',
CHANGELOG.md (144 changed lines)
@@ -26,7 +26,7 @@
* When retrieving data directly from a dictionary using Dictionary storage, dictionary table function, or direct SELECT from the dictionary itself, it is now enough to have `SELECT` permission or `dictGet` permission for the dictionary. This aligns with previous attempts to prevent ACL bypasses: https://github.com/ClickHouse/ClickHouse/pull/57362 and https://github.com/ClickHouse/ClickHouse/pull/65359. It also makes the latter one backward compatible. [#72051](https://github.com/ClickHouse/ClickHouse/pull/72051) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
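As a rough illustration of the dictionary-permission entry above (the dictionary, attribute, and user names here are hypothetical, not taken from the PR):

```sql
-- Either of these privileges now suffices for direct reads from a dictionary:
GRANT dictGet ON default.my_dict TO reader;

SELECT dictGet('default.my_dict', 'value', toUInt64(1));
SELECT * FROM dictionary('default.my_dict');
```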
#### Experimental feature
* Implement `allow_feature_tier` as a global switch to disable all experimental / beta features. [#71841](https://github.com/ClickHouse/ClickHouse/pull/71841) [#71145](https://github.com/ClickHouse/ClickHouse/pull/71145) ([Raúl Marín](https://github.com/Algunenano)).
* Fix possible error `No such file or directory` due to unescaped special symbols in files for JSON subcolumns. [#71182](https://github.com/ClickHouse/ClickHouse/pull/71182) ([Pavel Kruglov](https://github.com/Avogar)).
* Support alter from String to JSON. This PR also changes the serialization of JSON and Dynamic types to the new version V2. The old version V1 can still be used by enabling the setting `merge_tree_use_v1_object_and_dynamic_serialization` (useful during an upgrade, to be able to roll back the version without issues). [#70442](https://github.com/ClickHouse/ClickHouse/pull/70442) ([Pavel Kruglov](https://github.com/Avogar)).
* Implement simple CAST from Map/Tuple/Object to the new JSON type through serialization/deserialization from a JSON string. [#71320](https://github.com/ClickHouse/ClickHouse/pull/71320) ([Pavel Kruglov](https://github.com/Avogar)).
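A minimal sketch of the two JSON entries above, assuming the experimental JSON type is enabled via `allow_experimental_json_type` (the table and column names are made up):

```sql
SET allow_experimental_json_type = 1;

-- ALTER a String column to JSON (the serialization-version setting mentioned
-- above only matters for rollback scenarios):
CREATE TABLE events (payload String) ENGINE = MergeTree ORDER BY tuple();
ALTER TABLE events MODIFY COLUMN payload JSON;

-- CAST from Map to JSON goes through a JSON-string round trip:
SELECT CAST(map('a', 1, 'b', 2) AS JSON);
```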
@@ -34,74 +34,140 @@
* Forbid Dynamic/Variant types in min/max functions to avoid confusion. [#71761](https://github.com/ClickHouse/ClickHouse/pull/71761) ([Pavel Kruglov](https://github.com/Avogar)).

#### New Feature
* A new data type, `BFloat16`, represents 16-bit floating point numbers with 8-bit exponent, sign, and 7-bit mantissa. This closes [#44206](https://github.com/ClickHouse/ClickHouse/issues/44206). This closes [#49937](https://github.com/ClickHouse/ClickHouse/issues/49937). [#64712](https://github.com/ClickHouse/ClickHouse/pull/64712) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Add `CHECK GRANT` query to check whether the current user/role has been granted the specific privilege and whether the corresponding table/column exists in the memory. [#68885](https://github.com/ClickHouse/ClickHouse/pull/68885) ([Unalian](https://github.com/Unalian)).
* Added SQL syntax to describe workload and resource management. https://clickhouse.com/docs/en/operations/workload-scheduling. [#69187](https://github.com/ClickHouse/ClickHouse/pull/69187) ([Sergei Trifonov](https://github.com/serxa)).
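A hedged sketch of the three features above; the experimental-setting name for `BFloat16` and the resource/disk/workload names are assumptions based on the linked PRs and docs, not text from this changelog:

```sql
-- BFloat16 (experimental in this release):
SET allow_experimental_bfloat16_type = 1;
SELECT CAST(3.14159 AS BFloat16) AS x, toTypeName(x);

-- CHECK GRANT returns 1 if the current user/role has the privilege:
CHECK GRANT SELECT ON default.hits;

-- Workload and resource management SQL:
CREATE RESOURCE network_read (READ DISK s3);
CREATE RESOURCE network_write (WRITE DISK s3);
CREATE WORKLOAD all;
CREATE WORKLOAD production IN all SETTINGS weight = 3;
```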
* Add `iceberg[S3;HDFS;Azure]Cluster`, `deltaLakeCluster`, `hudiCluster` table functions. [#72045](https://github.com/ClickHouse/ClickHouse/pull/72045) ([Mikhail Artemenko](https://github.com/Michicosun)).
* Add ability to set user/password in http_handlers (for `dynamic_query_handler`/`predefined_query_handler`). [#70725](https://github.com/ClickHouse/ClickHouse/pull/70725) ([Azat Khuzhin](https://github.com/azat)).
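A sketch of how the new cluster table functions might be called; the cluster name, URL, and credential arguments are placeholders, and the exact signatures should be checked against the PR:

```sql
-- Reads are fanned out over the nodes of the named cluster:
SELECT count()
FROM icebergS3Cluster('my_cluster',
    'https://bucket.s3.amazonaws.com/warehouse/table/', 'KEY', 'SECRET');
```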
* Add support for staleness clause in the ORDER BY WITH FILL operator. [#71151](https://github.com/ClickHouse/ClickHouse/pull/71151) ([Mikhail Artemenko](https://github.com/Michicosun)).
* Allow each authentication method to have its own expiration date, remove from user entity. [#70090](https://github.com/ClickHouse/ClickHouse/pull/70090) ([Arthur Passos](https://github.com/arthurpassos)).
* Added new functions `parseDateTime64`, `parseDateTime64OrNull` and `parseDateTime64OrZero`. Compared to the existing function `parseDateTime` (and variants), they return a value of type `DateTime64` instead of `DateTime`. [#71581](https://github.com/ClickHouse/ClickHouse/pull/71581) ([kevinyhzou](https://github.com/KevinyhZou)).
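To illustrate the staleness clause and the `parseDateTime64` family (a sketch; the format string follows the MySQL-style syntax of `parseDateTime` and is an assumption here):

```sql
-- WITH FILL stops filling once the gap from the previous original row
-- exceeds the STALENESS value:
SELECT number AS n
FROM numbers(20)
WHERE n % 7 = 0
ORDER BY n WITH FILL STALENESS 3;

-- Returns DateTime64 (with fractional seconds) instead of DateTime:
SELECT parseDateTime64OrNull('2024-11-30 19:42:00.123', '%Y-%m-%d %H:%i:%s.%f');
```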
#### Performance Improvement
* Optimized memory usage for values of index granularity if granularity is constant for the part. Added an ability to always select constant granularity for a part (setting `use_const_adaptive_granularity`), which helps to ensure that it is always optimized in memory. It helps in large workloads (trillions of rows in shared storage) to avoid constantly growing memory usage by metadata (values of index granularity) of data parts. [#71786](https://github.com/ClickHouse/ClickHouse/pull/71786) ([Anton Popov](https://github.com/CurtizJ)).
* Now we don't copy input block columns for `join_algorithm = 'parallel_hash'` when distributing them between threads for parallel processing. [#67782](https://github.com/ClickHouse/ClickHouse/pull/67782) ([Nikita Taranov](https://github.com/nickitat)).
* Optimized `Replacing` merge algorithm for non-intersecting parts. [#70977](https://github.com/ClickHouse/ClickHouse/pull/70977) ([Anton Popov](https://github.com/CurtizJ)).
* Do not list detached parts from readonly and write-once disks for metrics and `system.detached_parts`. [#71086](https://github.com/ClickHouse/ClickHouse/pull/71086) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Do not calculate heavy asynchronous metrics by default. The feature was introduced in [#40332](https://github.com/ClickHouse/ClickHouse/issues/40332), but it isn't good to have a heavy background job that is needed for only a single customer. [#71087](https://github.com/ClickHouse/ClickHouse/pull/71087) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* For the `plain_rewritable` disks: do not call the object storage API when listing directories, as this may be cost-inefficient. Instead, store the list of filenames in memory. The trade-offs are increased initial load time and the memory required to store filenames. [#70823](https://github.com/ClickHouse/ClickHouse/pull/70823) ([Julia Kartseva](https://github.com/jkartseva)).
* Improve the performance and accuracy of the `system.query_metric_log` collection interval by reducing the critical region. [#71473](https://github.com/ClickHouse/ClickHouse/pull/71473) ([Pablo Marcos](https://github.com/pamarcos)).
* Read-in-order optimization via generating virtual rows, so that less data is read during merge sort; especially useful when multiple parts exist. [#62125](https://github.com/ClickHouse/ClickHouse/pull/62125) ([Shichao Jin](https://github.com/jsc0218)).
* Added server setting `async_load_system_database` that allows the server to start with a not fully loaded system database. This helps to start ClickHouse faster if there are many system tables. [#69847](https://github.com/ClickHouse/ClickHouse/pull/69847) ([Sergei Trifonov](https://github.com/serxa)).
* Add a `--threads` parameter to `clickhouse-compressor`, which allows compressing data in parallel. [#70860](https://github.com/ClickHouse/ClickHouse/pull/70860) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Added a setting `prewarm_mark_cache` which enables loading of marks to the mark cache on inserts, merges, fetches of parts, and on startup of the table. [#71053](https://github.com/ClickHouse/ClickHouse/pull/71053) ([Anton Popov](https://github.com/CurtizJ)).
* Shrink-to-fit the `index_granularity` array in memory to reduce the memory footprint of the MergeTree table engine family. [#71595](https://github.com/ClickHouse/ClickHouse/pull/71595) ([alesapin](https://github.com/alesapin)).
* Turn off the filesystem cache setting `boundary_alignment` for non-disk reads, which improves performance of reading from standalone remote files with caching. [#71827](https://github.com/ClickHouse/ClickHouse/pull/71827) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Queries like `SELECT * FROM table LIMIT ...` used to load part indexes even though they were not used. [#71866](https://github.com/ClickHouse/ClickHouse/pull/71866) ([Alexander Gololobov](https://github.com/davenger)).
* Enable `parallel_replicas_local_plan` by default. Building a full-fledged local plan on the query initiator improves parallel replicas performance with less resource consumption and provides opportunities to apply more query optimizations. [#70171](https://github.com/ClickHouse/ClickHouse/pull/70171) ([Igor Nikonov](https://github.com/devcrafter)).
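Several of the entries above are per-table MergeTree settings; a hypothetical table definition enabling two of them might look like this (a sketch only, assuming both are valid MergeTree-level settings as the PRs describe):

```sql
CREATE TABLE t
(
    key UInt64,
    value String
)
ENGINE = MergeTree
ORDER BY key
SETTINGS use_const_adaptive_granularity = 1, prewarm_mark_cache = 1;
```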
#### Improvement
* Allow using clickhouse with a file argument, as in `ch queries.sql`. [#71589](https://github.com/ClickHouse/ClickHouse/pull/71589) ([Raúl Marín](https://github.com/Algunenano)).
* The `Vertical` format (which is also activated when you end your query with `\G`) gets the features of Pretty formats, such as highlighting of thousands groups in numbers and printing a readable number tip. [#71630](https://github.com/ClickHouse/ClickHouse/pull/71630) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Push external user roles from the query originator to other nodes in the cluster. Helpful when only the originator has access to the external authenticator (like LDAP). [#70332](https://github.com/ClickHouse/ClickHouse/pull/70332) ([Andrey Zvonov](https://github.com/zvonand)).
* Added aliases `anyRespectNulls`, `firstValueRespectNulls`, and `anyValueRespectNulls` for aggregation function `any`. Also added aliases `anyLastRespectNulls` and `lastValueRespectNulls` for aggregation function `anyLast`. This allows using more natural camel-case-only syntax rather than mixed camel-case/underscore syntax, for example: `SELECT anyLastRespectNullsStateIf` instead of `anyLast_respect_nullsStateIf`. [#71403](https://github.com/ClickHouse/ClickHouse/pull/71403) ([Peter Nguyen](https://github.com/petern48)).
* Added the configuration parameter `date_time_utc`, enabling JSON log formatting to support UTC date-time in RFC 3339/ISO 8601 format. [#71560](https://github.com/ClickHouse/ClickHouse/pull/71560) ([Ali](https://github.com/xogoodnow)).
* Added a new header type for S3 endpoints for user authentication (`access_header`). This allows setting an access header with the lowest priority, which will be overwritten by `access_key_id` from any other source (for example, a table schema or a named collection). [#71011](https://github.com/ClickHouse/ClickHouse/pull/71011) ([MikhailBurdukov](https://github.com/MikhailBurdukov)).
* Higher-order functions with constant arrays and constant captured arguments will return constants. [#58400](https://github.com/ClickHouse/ClickHouse/pull/58400) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Query plan step names (`EXPLAIN PLAN json=1`) and pipeline processor names (`EXPLAIN PIPELINE compact=0,graph=1`) now have a unique id as a suffix. This allows matching processors' profiler output and OpenTelemetry traces with explain output. [#63518](https://github.com/ClickHouse/ClickHouse/pull/63518) ([qhsong](https://github.com/qhsong)).
* Added an option to check if the object exists after writing it to Azure Blob Storage, controlled by the setting `check_objects_after_upload`. [#64847](https://github.com/ClickHouse/ClickHouse/pull/64847) ([Smita Kulkarni](https://github.com/SmitaRKulkarni)).
* Use the `Atomic` database by default in `clickhouse-local`. Address items 1 and 5 from [#50647](https://github.com/ClickHouse/ClickHouse/issues/50647). Closes [#44817](https://github.com/ClickHouse/ClickHouse/issues/44817). [#68024](https://github.com/ClickHouse/ClickHouse/pull/68024) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Exceptions now break the HTTP protocol in order to alert the client about an error. [#68800](https://github.com/ClickHouse/ClickHouse/pull/68800) ([Sema Checherinda](https://github.com/CheSema)).
* Report hosts running distributed DDL queries by creating a replica_dir and marking replicas active in DDLWorker. [#69658](https://github.com/ClickHouse/ClickHouse/pull/69658) ([tuanpach](https://github.com/tuanpach)).
* Wait only for active replicas in database `ON CLUSTER` queries if `distributed_ddl_output_mode` is set to `*_only_active`. [#69660](https://github.com/ClickHouse/ClickHouse/pull/69660) ([tuanpach](https://github.com/tuanpach)).
* Better error handling and cancellation of `ON CLUSTER` backups and restores: if a backup or restore fails or is cancelled on one host, it is cancelled on the other hosts automatically; no spurious errors are produced when some hosts fail while others continue their work; disabling of concurrency now works better (fixes issues with `test_disallow_concurrency`); backups and restores are now much more resistant to ZooKeeper disconnects. [#70027](https://github.com/ClickHouse/ClickHouse/pull/70027) ([Vitaly Baranov](https://github.com/vitlibar)).
* Support `ALTER TABLE ... MODIFY/RESET SETTING ...` for certain settings in storage S3Queue. [#70811](https://github.com/ClickHouse/ClickHouse/pull/70811) ([Kseniia Sumarokova](https://github.com/kssenii)).
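A small sketch of the `...RespectNulls` aliases and the upgraded `Vertical` format from the entries above (the sample data is made up):

```sql
-- anyRespectNulls keeps NULLs instead of skipping them:
SELECT anyRespectNulls(x), anyLastRespectNulls(x)
FROM values('x Nullable(UInt8)', NULL, 1, 2, NULL);

-- Vertical output now highlights thousands groups and prints a number tip:
SELECT 1234567890 AS big FORMAT Vertical;
```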
* Added the ability to reload client certificates in the same way as the procedure for reloading server certificates. [#70997](https://github.com/ClickHouse/ClickHouse/pull/70997) ([Roman Antonov](https://github.com/Romeo58rus)).
* Refactored the internal structure of files which work with DataLake storages. [#71012](https://github.com/ClickHouse/ClickHouse/pull/71012) ([Daniil Ivanik](https://github.com/divanik)).
* Make the client history size configurable and increase its default size. [#71014](https://github.com/ClickHouse/ClickHouse/pull/71014) ([Jiří Kozlovský](https://github.com/jirislav)).
* Boolean type support for the Parquet native reader. [#71055](https://github.com/ClickHouse/ClickHouse/pull/71055) ([Arthur Passos](https://github.com/arthurpassos)).
* Retry more errors when interacting with S3, such as "Malformed message". [#71088](https://github.com/ClickHouse/ClickHouse/pull/71088) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Lower the log level for some messages about S3. [#71090](https://github.com/ClickHouse/ClickHouse/pull/71090) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Support writing HDFS files with spaces. [#71105](https://github.com/ClickHouse/ClickHouse/pull/71105) ([exmy](https://github.com/exmy)).
* Added settings limiting the number of replicated tables, dictionaries and views. [#71179](https://github.com/ClickHouse/ClickHouse/pull/71179) ([Kirill](https://github.com/kirillgarbar)).
* Use `AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE` instead of `AWS_CONTAINER_AUTHORIZATION_TOKEN` if the former is available. Fixes [#71074](https://github.com/ClickHouse/ClickHouse/issues/71074). [#71269](https://github.com/ClickHouse/ClickHouse/pull/71269) ([Konstantin Bogdanov](https://github.com/thevar1able)).
* Remove the metadata_version ZooKeeper node creation from the ReplicatedMergeTree restarting thread. The only scenario where we need to create this node is when the user updated from a version earlier than 20.4 straight to one later than 24.10. ClickHouse does not support upgrades that span more than a year, so we should throw an exception and ask the user to update gradually, instead of creating the node. [#71385](https://github.com/ClickHouse/ClickHouse/pull/71385) ([Miсhael Stetsyuk](https://github.com/mstetsyuk)).
* Add per-host dashboards `Overview (host)` and `Cloud overview (host)` to the advanced dashboard. [#71422](https://github.com/ClickHouse/ClickHouse/pull/71422) ([alesapin](https://github.com/alesapin)).
* The methods `removeObject` and `removeObjects` are not idempotent. When retries happen due to network errors, the result could be `object not found` because it had been deleted in previous attempts. [#71529](https://github.com/ClickHouse/ClickHouse/pull/71529) ([Sema Checherinda](https://github.com/CheSema)).
* `clickhouse-local` uses implicit SELECT by default, which allows using it as a calculator. Improve the syntax highlighting for the implicit SELECT mode. [#71620](https://github.com/ClickHouse/ClickHouse/pull/71620) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* The command line applications will highlight syntax even for multi-statements. [#71622](https://github.com/ClickHouse/ClickHouse/pull/71622) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Command-line applications will return non-zero exit codes on errors. In previous versions, the `disks` application returned zero on errors, and other applications returned zero for errors 256 (`PARTITION_ALREADY_EXISTS`) and 512 (`SET_NON_GRANTED_ROLE`). [#71623](https://github.com/ClickHouse/ClickHouse/pull/71623) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* When a user/group was given as a numeric ID, `clickhouse su` failed. This patch fixes it to accept `UID:GID` as well. [#71626](https://github.com/ClickHouse/ClickHouse/pull/71626) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Allow disabling the memory buffer increase for the filesystem cache via the setting `filesystem_cache_prefer_bigger_buffer_size`. [#71640](https://github.com/ClickHouse/ClickHouse/pull/71640) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Add a separate setting `background_download_max_file_segment_size` for the maximum file segment size of background downloads in the filesystem cache. [#71648](https://github.com/ClickHouse/ClickHouse/pull/71648) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Changes the default value of `enable_http_compression` from 0 to 1. Closes [#71591](https://github.com/ClickHouse/ClickHouse/issues/71591). [#71774](https://github.com/ClickHouse/ClickHouse/pull/71774) ([Peter Nguyen](https://github.com/petern48)).
* Slightly better JSON type parsing: if the current block for the JSON path contains values of several types, try to choose the best type by trying types in a special best-effort order. [#71785](https://github.com/ClickHouse/ClickHouse/pull/71785) ([Pavel Kruglov](https://github.com/Avogar)).
* Previously, reading from `system.asynchronous_metrics` would wait for a concurrent update to finish. This can take a long time if the system is under heavy load. With this change, the previously collected values can always be read. [#71798](https://github.com/ClickHouse/ClickHouse/pull/71798) ([Alexander Gololobov](https://github.com/davenger)).
* The setting `allow_reorder_prewhere_conditions` is on by default with old compatibility settings. [#71867](https://github.com/ClickHouse/ClickHouse/pull/71867) ([Raúl Marín](https://github.com/Algunenano)).
* S3Queue and AzureQueue: set `polling_max_timeout_ms` to 10 minutes and `polling_backoff_ms` to 30 seconds. [#71817](https://github.com/ClickHouse/ClickHouse/pull/71817) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Update `HostResolver` three times in a `history` period. [#71863](https://github.com/ClickHouse/ClickHouse/pull/71863) ([Sema Checherinda](https://github.com/CheSema)).
* On the advanced dashboard HTML page, added a dropdown selector for the dashboard from the `system.dashboards` table. [#72081](https://github.com/ClickHouse/ClickHouse/pull/72081) ([Sergei Trifonov](https://github.com/serxa)).
* Check if the default database is present after authorization. Fixes [#71097](https://github.com/ClickHouse/ClickHouse/issues/71097). [#71140](https://github.com/ClickHouse/ClickHouse/pull/71140) ([Konstantin Bogdanov](https://github.com/thevar1able)).
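For instance, the implicit-SELECT entry above means a bare expression is now a valid `clickhouse-local` query (illustrative):

```sql
-- clickhouse-local "1 + 2" is now equivalent to:
SELECT 1 + 2;
```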
#### Bug Fix (user-visible misbehavior in an official stable release)
* The parts deduplicated during `ATTACH PART` query don't get stuck with the `attaching_` prefix anymore. [#65636](https://github.com/ClickHouse/ClickHouse/pull/65636) ([Kirill](https://github.com/kirillgarbar)).
* Fix `DateTime64` losing precision in the `IN` function. [#67230](https://github.com/ClickHouse/ClickHouse/pull/67230) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
* Fix possible logical error when using functions with `IGNORE/RESPECT NULLS` in `ORDER BY ... WITH FILL`, close [#57609](https://github.com/ClickHouse/ClickHouse/issues/57609). [#68234](https://github.com/ClickHouse/ClickHouse/pull/68234) ([Vladimir Cherkasov](https://github.com/vdimir)).
* Fixed rare logical errors in asynchronous inserts with format `Native` when the memory limit was reached. [#68965](https://github.com/ClickHouse/ClickHouse/pull/68965) ([Anton Popov](https://github.com/CurtizJ)).
* Fix COMMENT in CREATE TABLE for EPHEMERAL column. [#70458](https://github.com/ClickHouse/ClickHouse/pull/70458) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Fix logical error in JSONExtract with LowCardinality(Nullable). [#70549](https://github.com/ClickHouse/ClickHouse/pull/70549) ([Pavel Kruglov](https://github.com/Avogar)).
* Allow `SYSTEM DROP REPLICA` by ZooKeeper path when there is another replica with the same path. [#70642](https://github.com/ClickHouse/ClickHouse/pull/70642) ([MikhailBurdukov](https://github.com/MikhailBurdukov)).
* Fix a crash and a leak in AggregateFunctionGroupArraySorted. [#70820](https://github.com/ClickHouse/ClickHouse/pull/70820) ([Michael Kolupaev](https://github.com/al13n321)).
* Add ability to override Content-Type by user headers in the URL engine. [#70859](https://github.com/ClickHouse/ClickHouse/pull/70859) ([Artem Iurin](https://github.com/ortyomka)).
* Fix logical error in `StorageS3Queue` "Cannot create a persistent node in /processed since it already exists". [#70984](https://github.com/ClickHouse/ClickHouse/pull/70984) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fixed named sessions not being closed and hanging on forever under certain circumstances. [#70998](https://github.com/ClickHouse/ClickHouse/pull/70998) ([Márcio Martins](https://github.com/marcio-absmartly)).
* Fix a bug where the projection rebuild option of lightweight delete didn't consider the `_row_exists` column. [#71089](https://github.com/ClickHouse/ClickHouse/pull/71089) ([Shichao Jin](https://github.com/jsc0218)).
* Fix `AT_* is out of range` problem when running on Oracle Linux UEK 6.10. [#71109](https://github.com/ClickHouse/ClickHouse/pull/71109) ([Örjan Fors](https://github.com/op)).
* Fix wrong value in `system.query_metric_log` due to an unexpected race condition. [#71124](https://github.com/ClickHouse/ClickHouse/pull/71124) ([Pablo Marcos](https://github.com/pamarcos)).
* Fix mismatched aggregate function name of `quantileExactWeightedInterpolated`. The bug was introduced in https://github.com/ClickHouse/ClickHouse/pull/69619. [#71168](https://github.com/ClickHouse/ClickHouse/pull/71168) ([李扬](https://github.com/taiyang-li)).
* Fix `bad_weak_ptr` exception with Dynamic in functions comparison. [#71183](https://github.com/ClickHouse/ClickHouse/pull/71183) ([Pavel Kruglov](https://github.com/Avogar)).
* Check that a 7z file being read is on a local machine. [#71184](https://github.com/ClickHouse/ClickHouse/pull/71184) ([Daniil Ivanik](https://github.com/divanik)).
* Fix ignoring format settings in Native format via HTTP and Async Inserts. [#71193](https://github.com/ClickHouse/ClickHouse/pull/71193) ([Pavel Kruglov](https://github.com/Avogar)).
* SELECT queries run with setting `use_query_cache = 1` are no longer rejected if the name of a system table appears as a literal, e.g. `SELECT * FROM users WHERE name = 'system.metrics' SETTINGS use_query_cache = true;` now works. [#71254](https://github.com/ClickHouse/ClickHouse/pull/71254) ([Robert Schulze](https://github.com/rschu1ze)).
* Fix a memory usage increase when `enable_filesystem_cache = 1` but the disk in the storage configuration had no cache configuration. [#71261](https://github.com/ClickHouse/ClickHouse/pull/71261) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix possible `Cannot read all data` errors during deserialization of LowCardinality dictionary from a Dynamic column. [#71299](https://github.com/ClickHouse/ClickHouse/pull/71299) ([Pavel Kruglov](https://github.com/Avogar)).
* Fix incomplete cleanup of parallel output format in the client. [#71304](https://github.com/ClickHouse/ClickHouse/pull/71304) ([Raúl Marín](https://github.com/Algunenano)).
* Added missing unescaping in named collections; without the fix, clickhouse-server couldn't start. [#71308](https://github.com/ClickHouse/ClickHouse/pull/71308) ([MikhailBurdukov](https://github.com/MikhailBurdukov)).
* Fix async inserts with empty blocks via native protocol. [#71312](https://github.com/ClickHouse/ClickHouse/pull/71312) ([Anton Popov](https://github.com/CurtizJ)).
* Fix inconsistent AST formatting when granting wrong wildcard grants [#71309](https://github.com/ClickHouse/ClickHouse/issues/71309). [#71332](https://github.com/ClickHouse/ClickHouse/pull/71332) ([pufit](https://github.com/pufit)).
* Add try/catch to data parts destructors to avoid `std::terminate`. [#71364](https://github.com/ClickHouse/ClickHouse/pull/71364) ([alesapin](https://github.com/alesapin)).
* Check suspicious and experimental types in JSON type hints. [#71369](https://github.com/ClickHouse/ClickHouse/pull/71369) ([Pavel Kruglov](https://github.com/Avogar)).
* Start the memory worker thread on non-Linux OSes too (fixes [#71051](https://github.com/ClickHouse/ClickHouse/issues/71051)). [#71384](https://github.com/ClickHouse/ClickHouse/pull/71384) ([Alexandre Snarskii](https://github.com/snar)).
* Fix error `Invalid number of rows in Chunk` with the Variant column. [#71388](https://github.com/ClickHouse/ClickHouse/pull/71388) ([Pavel Kruglov](https://github.com/Avogar)).
* Fix error `column "attgenerated" does not exist` for older PostgreSQL versions, fix [#60651](https://github.com/ClickHouse/ClickHouse/issues/60651). [#71396](https://github.com/ClickHouse/ClickHouse/pull/71396) ([0xMihalich](https://github.com/0xMihalich)).
* To avoid spamming the server logs, failing authentication attempts are now logged at level `DEBUG` instead of `ERROR`. [#71405](https://github.com/ClickHouse/ClickHouse/pull/71405) ([Robert Schulze](https://github.com/rschu1ze)).
* Fix crash in `mongodb` table function when passing wrong arguments (e.g. `NULL`). [#71426](https://github.com/ClickHouse/ClickHouse/pull/71426) ([Vladimir Cherkasov](https://github.com/vdimir)).
* Fix crash with `optimize_rewrite_array_exists_to_has`. [#71432](https://github.com/ClickHouse/ClickHouse/pull/71432) ([Raúl Marín](https://github.com/Algunenano)).
* Fixed the usage of setting `max_insert_delayed_streams_for_parallel_write` in inserts. Previously it worked incorrectly, which could lead to high memory usage in inserts which write data into several partitions. [#71474](https://github.com/ClickHouse/ClickHouse/pull/71474) ([Anton Popov](https://github.com/CurtizJ)).
* Fix possible error `Argument for function must be constant` (old analyzer) in case when arrayJoin can apparently appear in `WHERE` condition. Regression after https://github.com/ClickHouse/ClickHouse/pull/65414. [#71476](https://github.com/ClickHouse/ClickHouse/pull/71476) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Prevent crash in SortCursor with 0 columns (old analyzer). [#71494](https://github.com/ClickHouse/ClickHouse/pull/71494) ([Raúl Marín](https://github.com/Algunenano)).
* Fix `Date32` out of range caused by uninitialized ORC data. For more details, refer to https://github.com/apache/incubator-gluten/issues/7823. [#71500](https://github.com/ClickHouse/ClickHouse/pull/71500) ([李扬](https://github.com/taiyang-li)).
* Fix counting column size in wide part for Dynamic and JSON types. [#71526](https://github.com/ClickHouse/ClickHouse/pull/71526) ([Pavel Kruglov](https://github.com/Avogar)).
* Analyzer fix when a query inside a materialized view uses IN with a CTE. Closes [#65598](https://github.com/ClickHouse/ClickHouse/issues/65598). [#71538](https://github.com/ClickHouse/ClickHouse/pull/71538) ([Maksim Kita](https://github.com/kitaisreal)).
* Avoid crash when using a UDF in a constraint. [#71541](https://github.com/ClickHouse/ClickHouse/pull/71541) ([Raúl Marín](https://github.com/Algunenano)).
* Return 0 or the default char instead of throwing an error in bitShift functions in case of out-of-bounds shifts. [#71580](https://github.com/ClickHouse/ClickHouse/pull/71580) ([Pablo Marcos](https://github.com/pamarcos)).
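A quick illustration of the bitShift fix just above: out-of-bounds shift amounts now yield 0 (or the default character) instead of throwing:

```sql
SELECT bitShiftLeft(toUInt8(1), 100), bitShiftRight(toUInt64(1024), 99);
```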
* Fix server crashes while using a materialized view with certain engines. [#71593](https://github.com/ClickHouse/ClickHouse/pull/71593) ([Pervakov Grigorii](https://github.com/GrigoryPervakov)).
* Array join with a nested data structure, which contains an alias to a constant array, was leading to a null pointer dereference. This closes [#71677](https://github.com/ClickHouse/ClickHouse/issues/71677). [#71678](https://github.com/ClickHouse/ClickHouse/pull/71678) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix LOGICAL_ERROR when doing ALTER with an empty tuple. This fixes [#71647](https://github.com/ClickHouse/ClickHouse/issues/71647). [#71679](https://github.com/ClickHouse/ClickHouse/pull/71679) ([Amos Bird](https://github.com/amosbird)).
* Don't transform constant set in predicates over partition columns in case of NOT IN operator. [#71695](https://github.com/ClickHouse/ClickHouse/pull/71695) ([Eduard Karacharov](https://github.com/korowa)).
* Clarify the Docker init script's failure log message. [#71734](https://github.com/ClickHouse/ClickHouse/pull/71734) ([Андрей](https://github.com/andreineustroev)).
* Fix CAST from LowCardinality(Nullable) to Dynamic. Previously it could lead to the error `Bad cast from type DB::ColumnVector<int> to DB::ColumnNullable`. [#71742](https://github.com/ClickHouse/ClickHouse/pull/71742) ([Pavel Kruglov](https://github.com/Avogar)).
* Fix exception for `toDayOfWeek` in a WHERE condition with a primary key of DateTime64 type. [#71849](https://github.com/ClickHouse/ClickHouse/pull/71849) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Fixed filling of defaults after parsing into sparse columns. [#71854](https://github.com/ClickHouse/ClickHouse/pull/71854) ([Anton Popov](https://github.com/CurtizJ)).
* Fix GROUPING function error when the input is an ALIAS on a distributed table, close [#68602](https://github.com/ClickHouse/ClickHouse/issues/68602). [#71855](https://github.com/ClickHouse/ClickHouse/pull/71855) ([Vladimir Cherkasov](https://github.com/vdimir)).
* Fix possible crash when using `allow_experimental_join_condition`, close [#71693](https://github.com/ClickHouse/ClickHouse/issues/71693). [#71857](https://github.com/ClickHouse/ClickHouse/pull/71857) ([Vladimir Cherkasov](https://github.com/vdimir)).
* Fixed SELECT statements that use the `WITH TIES` clause which might not return enough rows. [#71886](https://github.com/ClickHouse/ClickHouse/pull/71886) ([wxybear](https://github.com/wxybear)).
* Fix the TOO_LARGE_ARRAY_SIZE exception caused when a column of `arrayWithConstant` evaluation is mistakenly judged to cross the array size limit. [#71894](https://github.com/ClickHouse/ClickHouse/pull/71894) ([Udi](https://github.com/udiz)).
* `clickhouse-benchmark` reported wrong metrics for queries taking longer than one second. [#71898](https://github.com/ClickHouse/ClickHouse/pull/71898) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix data race between the progress indicator and the progress table in clickhouse-client. This issue is visible when FROM INFILE is used. Intercept keystrokes during INSERT queries to toggle progress table display. [#71901](https://github.com/ClickHouse/ClickHouse/pull/71901) ([Julia Kartseva](https://github.com/jkartseva)).
* Use auxiliary keepers for cluster autodiscovery. [#71911](https://github.com/ClickHouse/ClickHouse/pull/71911) ([Anton Ivashkin](https://github.com/ianton-ru)).
* Fix `rows_processed` column in `system.s3/azure_queue_log` broken in 24.6. Closes [#69975](https://github.com/ClickHouse/ClickHouse/issues/69975). [#71946](https://github.com/ClickHouse/ClickHouse/pull/71946) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fixed a case when `s3`/`s3Cluster` functions could return an incomplete result or throw an exception. It involved using a glob pattern in the S3 URI (like `pattern/*`) while an empty object existed with the key `pattern/` (such objects are automatically created by the S3 Console). Also, the default value for the setting `s3_skip_empty_files` changed from `false` to `true`. [#71947](https://github.com/ClickHouse/ClickHouse/pull/71947) ([Nikita Taranov](https://github.com/nickitat)).
* Fix a crash in clickhouse-client syntax highlighting. Closes [#71864](https://github.com/ClickHouse/ClickHouse/issues/71864). [#71949](https://github.com/ClickHouse/ClickHouse/pull/71949) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix `Illegal type` error for `MergeTree` tables with binary monotonic function in `ORDER BY` when the first argument is constant. Fixes [#71941](https://github.com/ClickHouse/ClickHouse/issues/71941). [#71966](https://github.com/ClickHouse/ClickHouse/pull/71966) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Allow only SELECT queries in EXPLAIN AST used inside subquery. Other types of queries lead to the logical error `Bad cast from type DB::ASTCreateQuery to DB::ASTSelectWithUnionQuery` or `Inconsistent AST formatting`. [#71982](https://github.com/ClickHouse/ClickHouse/pull/71982) ([Pavel Kruglov](https://github.com/Avogar)).
* When inserting a record via `clickhouse-client`, the client reads column descriptions from the server, but there was a bug that wrote the descriptions in the wrong order; the correct order is [statistics, ttl, settings]. [#71991](https://github.com/ClickHouse/ClickHouse/pull/71991) ([Han Fei](https://github.com/hanfei1991)).
* Fix formatting of `MOVE PARTITION ... TO TABLE ...` alter commands when `format_alter_commands_with_parentheses` is enabled. [#72080](https://github.com/ClickHouse/ClickHouse/pull/72080) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Fixes RIGHT / FULL joins in queries with parallel replicas. Now RIGHT joins can be executed with parallel replicas (right table reading is distributed); FULL joins can't be parallelized among nodes and are executed locally. [#71162](https://github.com/ClickHouse/ClickHouse/pull/71162) ([Igor Nikonov](https://github.com/devcrafter)).
* Fix the issue where ClickHouse in Docker containers printed "get_mempolicy: Operation not permitted" into stderr due to restricted syscalls. [#70900](https://github.com/ClickHouse/ClickHouse/pull/70900) ([filimonov](https://github.com/filimonov)).
* Fix the metadata_version record in ZooKeeper in the restarting thread rather than in the attach thread. [#70297](https://github.com/ClickHouse/ClickHouse/pull/70297) ([Miсhael Stetsyuk](https://github.com/mstetsyuk)).
* This is a fix for "zero-copy" replication, which is unsupported and will be removed entirely. Don't delete a blob when there are nodes using it in ReplicatedMergeTree with zero-copy replication. [#71186](https://github.com/ClickHouse/ClickHouse/pull/71186) ([Antonio Andelic](https://github.com/antonio2368)).
* This is a fix for "zero-copy" replication, which is unsupported and will be removed entirely. Acquire the zero-copy shared lock before moving a part to a zero-copy disk to prevent possible data loss if Keeper is unavailable. [#71845](https://github.com/ClickHouse/ClickHouse/pull/71845) ([Aleksei Filatov](https://github.com/aalexfvk)).
### <a id="2410"></a> ClickHouse release 24.10, 2024-10-31
SECURITY.md
@@ -14,6 +14,7 @@ The following versions of ClickHouse server are currently supported with security fixes:

 | Version | Supported |
 |:-|:-|
+| 24.11 | ✔️ |
 | 24.10 | ✔️ |
 | 24.9 | ✔️ |
 | 24.8 | ✔️ |
base/poco/CMakeLists.txt
@@ -3,11 +3,6 @@ add_subdirectory (Data)
 add_subdirectory (Data/ODBC)
 add_subdirectory (Foundation)
 add_subdirectory (JSON)
-
-if (USE_MONGODB)
-    add_subdirectory(MongoDB)
-endif()
-
 add_subdirectory (Net)
 add_subdirectory (NetSSL_OpenSSL)
 add_subdirectory (Redis)
Deleted file: base/poco/MongoDB/CMakeLists.txt
@@ -1,16 +0,0 @@
file (GLOB SRCS src/*.cpp)

add_library (_poco_mongodb ${SRCS})
add_library (Poco::MongoDB ALIAS _poco_mongodb)

# TODO: remove these warning exclusions
target_compile_options (_poco_mongodb
    PRIVATE
        -Wno-old-style-cast
        -Wno-unused-parameter
        -Wno-zero-as-null-pointer-constant
)

target_include_directories (_poco_mongodb SYSTEM PUBLIC "include")
target_link_libraries (_poco_mongodb PUBLIC Poco::Net)
Deleted file: base/poco/MongoDB/include/Poco/MongoDB/Array.h
@@ -1,142 +0,0 @@
//
// Array.h
//
// Library: MongoDB
// Package: MongoDB
// Module: Array
//
// Definition of the Array class.
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#ifndef MongoDB_Array_INCLUDED
#define MongoDB_Array_INCLUDED


#include "Poco/MongoDB/Document.h"
#include "Poco/MongoDB/MongoDB.h"
#include "Poco/NumberFormatter.h"


namespace Poco
{
namespace MongoDB
{


class MongoDB_API Array : public Document
/// This class represents a BSON Array.
{
public:
    using Ptr = SharedPtr<Array>;

    Array();
    /// Creates an empty Array.

    virtual ~Array();
    /// Destroys the Array.

    // Document template functions available for backward compatibility
    using Document::add;
    using Document::get;

    template <typename T>
    Document & add(T value)
    /// Creates an element with the name from the current pos and value and
    /// adds it to the array document.
    ///
    /// The active document is returned to allow chaining of the add methods.
    {
        return Document::add<T>(Poco::NumberFormatter::format(size()), value);
    }

    Document & add(const char * value)
    /// Creates an element with a name from the current pos and value and
    /// adds it to the array document.
    ///
    /// The active document is returned to allow chaining of the add methods.
    {
        return Document::add(Poco::NumberFormatter::format(size()), value);
    }

    template <typename T>
    T get(std::size_t pos) const
    /// Returns the element at the given index and tries to convert
    /// it to the template type. If the element is not found, a
    /// Poco::NotFoundException will be thrown. If the element cannot be
    /// converted a BadCastException will be thrown.
    {
        return Document::get<T>(Poco::NumberFormatter::format(pos));
    }

    template <typename T>
    T get(std::size_t pos, const T & deflt) const
    /// Returns the element at the given index and tries to convert
    /// it to the template type. If the element is not found, or
    /// has the wrong type, the deflt argument will be returned.
    {
        return Document::get<T>(Poco::NumberFormatter::format(pos), deflt);
    }

    Element::Ptr get(std::size_t pos) const;
    /// Returns the element at the given index.
    /// An empty element will be returned if the element is not found.

    template <typename T>
    bool isType(std::size_t pos) const
    /// Returns true if the type of the element equals the TypeId of ElementTrait,
    /// otherwise false.
    {
        return Document::isType<T>(Poco::NumberFormatter::format(pos));
    }

    std::string toString(int indent = 0) const;
    /// Returns a string representation of the Array.

private:
    friend void BSONReader::read<Array::Ptr>(Array::Ptr & to);
};


// BSON Embedded Array
// spec: document
template <>
struct ElementTraits<Array::Ptr>
{
    enum
    {
        TypeId = 0x04
    };

    static std::string toString(const Array::Ptr & value, int indent = 0)
    {
        //TODO:
        return value.isNull() ? "null" : value->toString(indent);
    }
};


template <>
inline void BSONReader::read<Array::Ptr>(Array::Ptr & to)
{
    to->read(_reader);
}


template <>
inline void BSONWriter::write<Array::Ptr>(Array::Ptr & from)
{
    from->write(_writer);
}


}
} // namespace Poco::MongoDB


#endif // MongoDB_Array_INCLUDED
@ -1,88 +0,0 @@
|
||||
//
|
||||
// BSONReader.h
|
||||
//
|
||||
// Library: MongoDB
|
||||
// Package: MongoDB
|
||||
// Module: BSONReader
|
||||
//
|
||||
// Definition of the BSONReader class.
|
||||
//
|
||||
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
|
||||
// and Contributors.
|
||||
//
|
||||
// SPDX-License-Identifier: BSL-1.0
|
||||
//
|
||||
|
||||
|
||||
#ifndef MongoDB_BSONReader_INCLUDED
|
||||
#define MongoDB_BSONReader_INCLUDED
|
||||
|
||||
|
||||
#include "Poco/BinaryReader.h"
|
||||
#include "Poco/MongoDB/MongoDB.h"
|
||||
|
||||
|
||||
namespace Poco
|
||||
{
|
||||
namespace MongoDB
|
||||
{
|
||||
|
||||
|
||||
class MongoDB_API BSONReader
|
||||
/// Class for reading BSON using a Poco::BinaryReader
|
||||
{
|
||||
public:
|
||||
BSONReader(const Poco::BinaryReader & reader) : _reader(reader)
|
||||
/// Creates the BSONReader using the given BinaryWriter.
|
||||
{
|
||||
}
|
||||
|
||||
virtual ~BSONReader()
|
||||
/// Destroys the BSONReader.
|
||||
{
|
||||
}
|
||||
|
||||
template <typename T>
|
||||
void read(T & t)
|
||||
/// Reads the value from the reader. The default implementation uses the >> operator to
|
||||
/// the given argument. Special types can write their own version.
|
||||
{
|
||||
_reader >> t;
|
||||
}
|
||||
|
||||
std::string readCString();
|
||||
/// Reads a cstring from the reader.
|
||||
/// A cstring is a string terminated with a 0x00.
|
||||
|
||||
private:
|
||||
Poco::BinaryReader _reader;
|
||||
};
|
||||
|
||||
|
||||
//
|
||||
// inlines
|
||||
//
|
||||
inline std::string BSONReader::readCString()
|
||||
{
|
||||
std::string val;
|
||||
while (_reader.good())
|
||||
{
|
||||
char c;
|
||||
_reader >> c;
|
||||
if (_reader.good())
|
||||
{
|
||||
if (c == 0x00)
|
||||
return val;
|
||||
else
|
||||
val += c;
|
||||
}
|
||||
}
|
||||
return val;
|
||||
}
|
||||
|
||||
|
||||
}
|
||||
} // namespace Poco::MongoDB
|
||||
|
||||
|
||||
#endif // MongoDB_BSONReader_INCLUDED
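
// A short sketch of the cstring framing handled by readCString() above:
// everything up to, but not including, the terminating 0x00 byte becomes
// the returned string. The stream contents are illustrative.

#include <sstream>
#include "Poco/BinaryReader.h"
#include "Poco/MongoDB/BSONReader.h"

std::string readOneName()
{
    std::istringstream in(std::string("name\x00rest", 9));
    Poco::BinaryReader binary(in);
    Poco::MongoDB::BSONReader reader(binary);
    return reader.readCString();   // "name"; the reader stops after the 0x00
}
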
@@ -1,76 +0,0 @@
//
// BSONWriter.h
//
// Library: MongoDB
// Package: MongoDB
// Module: BSONWriter
//
// Definition of the BSONWriter class.
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#ifndef MongoDB_BSONWriter_INCLUDED
#define MongoDB_BSONWriter_INCLUDED


#include "Poco/BinaryWriter.h"
#include "Poco/MongoDB/MongoDB.h"


namespace Poco
{
namespace MongoDB
{


class MongoDB_API BSONWriter
/// Class for writing BSON using a Poco::BinaryWriter.
{
public:
    BSONWriter(const Poco::BinaryWriter & writer) : _writer(writer)
    /// Creates the BSONWriter.
    {
    }

    virtual ~BSONWriter()
    /// Destroys the BSONWriter.
    {
    }

    template <typename T>
    void write(T & t)
    /// Writes the value to the writer. The default implementation uses
    /// the << operator. Special types can provide their own version.
    {
        _writer << t;
    }

    void writeCString(const std::string & value);
    /// Writes a cstring to the writer. A cstring is a string
    /// terminated by a null character.

private:
    Poco::BinaryWriter _writer;
};


//
// inlines
//
inline void BSONWriter::writeCString(const std::string & value)
{
    _writer.writeRaw(value);
    _writer << (unsigned char)0x00;
}


}
} // namespace Poco::MongoDB


#endif // MongoDB_BSONWriter_INCLUDED
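
// The matching sketch for writeCString() above: the raw bytes are written
// followed by a single 0x00 terminator, exactly the framing that
// BSONReader::readCString() consumes. The stream is illustrative.

#include <sstream>
#include "Poco/BinaryWriter.h"
#include "Poco/MongoDB/BSONWriter.h"

std::string writeOneName()
{
    std::ostringstream out;
    Poco::BinaryWriter binary(out);
    Poco::MongoDB::BSONWriter writer(binary);
    writer.writeCString("name");   // emits 'n','a','m','e',0x00
    return out.str();              // 5 bytes
}
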
@@ -1,158 +0,0 @@
//
// Binary.h
//
// Library: MongoDB
// Package: MongoDB
// Module: Binary
//
// Definition of the Binary class.
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#ifndef MongoDB_Binary_INCLUDED
#define MongoDB_Binary_INCLUDED


#include <sstream>
#include "Poco/Base64Encoder.h"
#include "Poco/Buffer.h"
#include "Poco/MemoryStream.h"
#include "Poco/MongoDB/Element.h"
#include "Poco/MongoDB/MongoDB.h"
#include "Poco/StreamCopier.h"
#include "Poco/UUID.h"


namespace Poco
{
namespace MongoDB
{


class MongoDB_API Binary
/// Implements BSON Binary.
///
/// A Binary stores its data in a Poco::Buffer<unsigned char>.
{
public:
    using Ptr = SharedPtr<Binary>;

    Binary();
    /// Creates an empty Binary with subtype 0.

    Binary(Poco::Int32 size, unsigned char subtype);
    /// Creates a Binary with a buffer of the given size and the given subtype.

    Binary(const UUID & uuid);
    /// Creates a Binary containing a UUID.

    Binary(const std::string & data, unsigned char subtype = 0);
    /// Creates a Binary with the contents of the given string and the given subtype.

    Binary(const void * data, Poco::Int32 size, unsigned char subtype = 0);
    /// Creates a Binary with the contents of the given buffer and the given subtype.

    virtual ~Binary();
    /// Destroys the Binary.

    Buffer<unsigned char> & buffer();
    /// Returns a reference to the internal buffer.

    unsigned char subtype() const;
    /// Returns the subtype.

    void subtype(unsigned char type);
    /// Sets the subtype.

    std::string toString(int indent = 0) const;
    /// Returns the contents of the Binary as a Base64-encoded string.

    std::string toRawString() const;
    /// Returns the raw content of the Binary as a string.

    UUID uuid() const;
    /// Returns the UUID when the binary subtype is 0x04.
    /// Otherwise, throws a Poco::BadCastException.

private:
    Buffer<unsigned char> _buffer;
    unsigned char _subtype;
};


//
// inlines
//
inline unsigned char Binary::subtype() const
{
    return _subtype;
}


inline void Binary::subtype(unsigned char type)
{
    _subtype = type;
}


inline Buffer<unsigned char> & Binary::buffer()
{
    return _buffer;
}


inline std::string Binary::toRawString() const
{
    return std::string(reinterpret_cast<const char *>(_buffer.begin()), _buffer.size());
}


// BSON Binary
// spec: binary
template <>
struct ElementTraits<Binary::Ptr>
{
    enum
    {
        TypeId = 0x05
    };

    static std::string toString(const Binary::Ptr & value, int indent = 0) { return value.isNull() ? "" : value->toString(); }
};


template <>
inline void BSONReader::read<Binary::Ptr>(Binary::Ptr & to)
{
    Poco::Int32 size;
    _reader >> size;

    to->buffer().resize(size);

    unsigned char subtype;
    _reader >> subtype;
    to->subtype(subtype);

    _reader.readRaw((char *)to->buffer().begin(), size);
}


template <>
inline void BSONWriter::write<Binary::Ptr>(Binary::Ptr & from)
{
    _writer << (Poco::Int32)from->buffer().size();
    _writer << from->subtype();
    _writer.writeRaw((char *)from->buffer().begin(), from->buffer().size());
}


}
} // namespace Poco::MongoDB


#endif // MongoDB_Binary_INCLUDED
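
// A small sketch of the UUID round trip documented above: the UUID
// constructor stores the bytes with subtype 0x04, and uuid() hands them
// back; any other subtype raises a Poco::BadCastException.

#include "Poco/MongoDB/Binary.h"
#include "Poco/UUIDGenerator.h"

bool uuidRoundTrip()
{
    Poco::UUID id = Poco::UUIDGenerator::defaultGenerator().createRandom();
    Poco::MongoDB::Binary::Ptr bin = new Poco::MongoDB::Binary(id);
    return bin->subtype() == 0x04 && bin->uuid() == id;
}
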
@@ -1,191 +0,0 @@
//
// Connection.h
//
// Library: MongoDB
// Package: MongoDB
// Module: Connection
//
// Definition of the Connection class.
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#ifndef MongoDB_Connection_INCLUDED
#define MongoDB_Connection_INCLUDED


#include "Poco/MongoDB/OpMsgMessage.h"
#include "Poco/MongoDB/RequestMessage.h"
#include "Poco/MongoDB/ResponseMessage.h"
#include "Poco/Mutex.h"
#include "Poco/Net/SocketAddress.h"
#include "Poco/Net/StreamSocket.h"


namespace Poco
{
namespace MongoDB
{


class MongoDB_API Connection
/// Represents a connection to a MongoDB server
/// using the MongoDB wire protocol.
///
/// See https://docs.mongodb.com/manual/reference/mongodb-wire-protocol/
/// for more information on the wire protocol.
{
public:
    using Ptr = Poco::SharedPtr<Connection>;

    class MongoDB_API SocketFactory
    {
    public:
        SocketFactory();
        /// Creates the SocketFactory.

        virtual ~SocketFactory();
        /// Destroys the SocketFactory.

        virtual Poco::Net::StreamSocket createSocket(const std::string & host, int port, Poco::Timespan connectTimeout, bool secure);
        /// Creates a Poco::Net::StreamSocket (if secure is false), or a
        /// Poco::Net::SecureStreamSocket (if secure is true) connected to the
        /// given host and port number.
        ///
        /// The default implementation will throw a Poco::NotImplementedException
        /// if secure is true.
    };

    Connection();
    /// Creates an unconnected Connection.
    ///
    /// Use this when you want to connect later on.

    Connection(const std::string & hostAndPort);
    /// Creates a Connection connected to the given MongoDB instance at host:port.
    ///
    /// The host and port must be separated with a colon.

    Connection(const std::string & uri, SocketFactory & socketFactory);
    /// Creates a Connection connected to the given MongoDB instance at the
    /// given URI.
    ///
    /// See the corresponding connect() method for more information.

    Connection(const std::string & host, int port);
    /// Creates a Connection connected to the given MongoDB instance at host and port.

    Connection(const Poco::Net::SocketAddress & addrs);
    /// Creates a Connection connected to the given MongoDB instance at the given address.

    Connection(const Poco::Net::StreamSocket & socket);
    /// Creates a Connection connected to the given MongoDB instance using the given socket,
    /// which must already be connected.

    virtual ~Connection();
    /// Destroys the Connection.

    Poco::Net::SocketAddress address() const;
    /// Returns the address of the MongoDB server.

    const std::string & uri() const;
    /// Returns the URI on which the connection was made.

    void connect(const std::string & hostAndPort);
    /// Connects to the given MongoDB server.
    ///
    /// The host and port must be separated with a colon.

    void connect(const std::string & uri, SocketFactory & socketFactory);
    /// Connects to the given MongoDB instance at the given URI.
    ///
    /// The URI must be in standard MongoDB connection string URI format:
    ///
    ///     mongodb://<user>:<password>@hostname.com:<port>/database-name?options
    ///
    /// The following options are supported:
    ///
    ///   - ssl: If ssl=true is specified, a custom SocketFactory subclass creating
    ///     a SecureStreamSocket must be supplied.
    ///   - connectTimeoutMS: Socket connection timeout in milliseconds.
    ///   - socketTimeoutMS: Socket send/receive timeout in milliseconds.
    ///   - authMechanism: Authentication mechanism. Only "SCRAM-SHA-1" (default)
    ///     and "MONGODB-CR" are supported.
    ///
    /// Unknown options are silently ignored.
    ///
    /// Will also attempt to authenticate using the specified credentials,
    /// using Database::authenticate().
    ///
    /// Throws a Poco::NoPermissionException if authentication fails.

    void connect(const std::string & host, int port);
    /// Connects to the given MongoDB server.

    void connect(const Poco::Net::SocketAddress & addrs);
    /// Connects to the given MongoDB server.

    void connect(const Poco::Net::StreamSocket & socket);
    /// Connects using an already connected socket.

    void disconnect();
    /// Disconnects from the MongoDB server.

    void sendRequest(RequestMessage & request);
    /// Sends a request to the MongoDB server.
    ///
    /// Used for one-way requests without a response.

    void sendRequest(RequestMessage & request, ResponseMessage & response);
    /// Sends a request to the MongoDB server and receives the response.
    ///
    /// Use this when a response is expected: only a "query" or "getmore"
    /// request will return a response.

    void sendRequest(OpMsgMessage & request, OpMsgMessage & response);
    /// Sends a request to the MongoDB server and receives the response
    /// using the newer wire protocol with OP_MSG.

    void sendRequest(OpMsgMessage & request);
    /// Sends an unacknowledged request to the MongoDB server using the newer
    /// wire protocol with OP_MSG.
    /// No response is sent by the server.

    void readResponse(OpMsgMessage & response);
    /// Reads additional response data when the previous message's moreToCome flag
    /// indicates that the server will send more data.
    /// NOTE: See comments in OpMsgCursor code.


protected:
    void connect();

private:
    Poco::Net::SocketAddress _address;
    Poco::Net::StreamSocket _socket;
    std::string _uri;
};


//
// inlines
//
inline Net::SocketAddress Connection::address() const
{
    return _address;
}


inline const std::string & Connection::uri() const
{
    return _uri;
}


}
} // namespace Poco::MongoDB


#endif // MongoDB_Connection_INCLUDED
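
// A connection sketch for the URI form above. The host, credentials and
// options are placeholders; the default SocketFactory is sufficient as long
// as ssl=true is not requested.

#include "Poco/MongoDB/Connection.h"

void openAndClose()
{
    Poco::MongoDB::Connection::SocketFactory factory;
    Poco::MongoDB::Connection connection(
        "mongodb://user:secret@localhost:27017/testdb?connectTimeoutMS=2000", factory);
    // ... issue requests here ...
    connection.disconnect();
}
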
@@ -1,80 +0,0 @@
//
// Cursor.h
//
// Library: MongoDB
// Package: MongoDB
// Module: Cursor
//
// Definition of the Cursor class.
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#ifndef MongoDB_Cursor_INCLUDED
#define MongoDB_Cursor_INCLUDED


#include "Poco/MongoDB/Connection.h"
#include "Poco/MongoDB/MongoDB.h"
#include "Poco/MongoDB/QueryRequest.h"
#include "Poco/MongoDB/ResponseMessage.h"


namespace Poco
{
namespace MongoDB
{


class MongoDB_API Cursor : public Document
/// Cursor is a helper class for querying multiple documents.
{
public:
    Cursor(const std::string & dbname, const std::string & collectionName, QueryRequest::Flags flags = QueryRequest::QUERY_DEFAULT);
    /// Creates a Cursor for the given database and collection, using the specified flags.

    Cursor(const std::string & fullCollectionName, QueryRequest::Flags flags = QueryRequest::QUERY_DEFAULT);
    /// Creates a Cursor for the given database and collection ("database.collection"), using the specified flags.

    Cursor(const Document & aggregationResponse);
    /// Creates a Cursor for the given aggregation query response.

    virtual ~Cursor();
    /// Destroys the Cursor.

    ResponseMessage & next(Connection & connection);
    /// Tries to get the next documents. As long as the ResponseMessage has a
    /// cursor ID, next() can be called to retrieve the next batch of documents.
    ///
    /// The cursor must be killed (see kill()) when not all documents are needed.

    QueryRequest & query();
    /// Returns the associated query.

    void kill(Connection & connection);
    /// Kills the cursor and resets it so that it can be reused.

private:
    QueryRequest _query;
    ResponseMessage _response;
};


//
// inlines
//
inline QueryRequest & Cursor::query()
{
    return _query;
}


}
} // namespace Poco::MongoDB


#endif // MongoDB_Cursor_INCLUDED
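
// A sketch of the next()/kill() contract above: keep calling next() while
// the response still carries a live cursor ID, and kill() when stopping
// early. Assumes ResponseMessage exposes cursorID() and documents(), as in
// the Poco sources; database and collection names are placeholders.

#include "Poco/MongoDB/Connection.h"
#include "Poco/MongoDB/Cursor.h"

void drain(Poco::MongoDB::Connection & connection)
{
    Poco::MongoDB::Cursor cursor("testdb", "col");
    while (true)
    {
        Poco::MongoDB::ResponseMessage & response = cursor.next(connection);
        // ... consume response.documents() here ...
        if (response.cursorID() == 0)
            break;   // the server has exhausted the cursor
    }
}
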
@@ -1,233 +0,0 @@
//
// Database.h
//
// Library: MongoDB
// Package: MongoDB
// Module: Database
//
// Definition of the Database class.
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#ifndef MongoDB_Database_INCLUDED
#define MongoDB_Database_INCLUDED


#include "Poco/MongoDB/Connection.h"
#include "Poco/MongoDB/DeleteRequest.h"
#include "Poco/MongoDB/Document.h"
#include "Poco/MongoDB/InsertRequest.h"
#include "Poco/MongoDB/MongoDB.h"
#include "Poco/MongoDB/QueryRequest.h"
#include "Poco/MongoDB/UpdateRequest.h"

#include "Poco/MongoDB/OpMsgCursor.h"
#include "Poco/MongoDB/OpMsgMessage.h"

namespace Poco
{
namespace MongoDB
{


class MongoDB_API Database
/// Database is a helper class for creating requests. MongoDB works with
/// collection names and uses the part before the first dot as the name of
/// the database.
{
public:
    explicit Database(const std::string & name);
    /// Creates a Database for the database with the given name.

    virtual ~Database();
    /// Destroys the Database.

    const std::string & name() const;
    /// Returns the database name.

    bool authenticate(
        Connection & connection,
        const std::string & username,
        const std::string & password,
        const std::string & method = AUTH_SCRAM_SHA1);
    /// Authenticates against the database using the given connection,
    /// username and password, as well as authentication method.
    ///
    /// "MONGODB-CR" (default prior to MongoDB 3.0) and
    /// "SCRAM-SHA-1" (default starting in 3.0) are the only supported
    /// authentication methods.
    ///
    /// Returns true if authentication was successful, otherwise false.
    ///
    /// May throw a Poco::ProtocolException if authentication fails for a reason other than
    /// invalid credentials.

    Document::Ptr queryBuildInfo(Connection & connection) const;
    /// Queries server build info (all wire protocols).

    Document::Ptr queryServerHello(Connection & connection, bool old = false) const;
    /// Queries the hello response from the server (all wire protocols).

    Int64 count(Connection & connection, const std::string & collectionName) const;
    /// Sends a count request for the given collection to MongoDB. (old wire protocol)
    ///
    /// If the command fails, -1 is returned.

    Poco::SharedPtr<Poco::MongoDB::QueryRequest> createCommand() const;
    /// Creates a QueryRequest for a command. (old wire protocol)

    Poco::SharedPtr<Poco::MongoDB::QueryRequest> createCountRequest(const std::string & collectionName) const;
    /// Creates a QueryRequest to count the given collection.
    /// The collection name must not contain the database name. (old wire protocol)

    Poco::SharedPtr<Poco::MongoDB::DeleteRequest> createDeleteRequest(const std::string & collectionName) const;
    /// Creates a DeleteRequest to delete documents in the given collection.
    /// The collection name must not contain the database name. (old wire protocol)

    Poco::SharedPtr<Poco::MongoDB::InsertRequest> createInsertRequest(const std::string & collectionName) const;
    /// Creates an InsertRequest to insert new documents in the given collection.
    /// The collection name must not contain the database name. (old wire protocol)

    Poco::SharedPtr<Poco::MongoDB::QueryRequest> createQueryRequest(const std::string & collectionName) const;
    /// Creates a QueryRequest. (old wire protocol)
    /// The collection name must not contain the database name.

    Poco::SharedPtr<Poco::MongoDB::UpdateRequest> createUpdateRequest(const std::string & collectionName) const;
    /// Creates an UpdateRequest. (old wire protocol)
    /// The collection name must not contain the database name.

    Poco::SharedPtr<Poco::MongoDB::OpMsgMessage> createOpMsgMessage(const std::string & collectionName) const;
    /// Creates an OpMsgMessage. (new wire protocol)

    Poco::SharedPtr<Poco::MongoDB::OpMsgMessage> createOpMsgMessage() const;
    /// Creates an OpMsgMessage for database commands that do not require a collection as an argument. (new wire protocol)

    Poco::SharedPtr<Poco::MongoDB::OpMsgCursor> createOpMsgCursor(const std::string & collectionName) const;
    /// Creates an OpMsgCursor. (new wire protocol)

    Poco::MongoDB::Document::Ptr ensureIndex(
        Connection & connection,
        const std::string & collection,
        const std::string & indexName,
        Poco::MongoDB::Document::Ptr keys,
        bool unique = false,
        bool background = false,
        int version = 0,
        int ttl = 0);
    /// Creates an index. The document returned is the result of a getLastError call.
    /// For more info look at the ensureIndex information on the MongoDB website. (old wire protocol)

    Document::Ptr getLastErrorDoc(Connection & connection) const;
    /// Sends the getLastError command to the database and returns the error document.
    /// (old wire protocol)

    std::string getLastError(Connection & connection) const;
    /// Sends the getLastError command to the database and returns the err element
    /// from the error document. When err is null, an empty string is returned.
    /// (old wire protocol)

    static const std::string AUTH_MONGODB_CR;
    /// Default authentication mechanism prior to MongoDB 3.0.

    static const std::string AUTH_SCRAM_SHA1;
    /// Default authentication mechanism for MongoDB 3.0.

    enum WireVersion
    /// Wire version as reported by the hello command.
    /// See details in the MongoDB specifications repository on GitHub.
    /// @see queryServerHello
    {
        VER_26 = 1,
        VER_26_2 = 2,
        VER_30 = 3,
        VER_32 = 4,
        VER_34 = 5,
        VER_36 = 6, ///< First wire version that supports OP_MSG
        VER_40 = 7,
        VER_42 = 8,
        VER_44 = 9,
        VER_50 = 13,
        VER_51 = 14, ///< First wire version that supports only OP_MSG
        VER_52 = 15,
        VER_53 = 16,
        VER_60 = 17
    };

protected:
    bool authCR(Connection & connection, const std::string & username, const std::string & password);
    bool authSCRAM(Connection & connection, const std::string & username, const std::string & password);

private:
    std::string _dbname;
};


//
// inlines
//
inline const std::string & Database::name() const
{
    return _dbname;
}


inline Poco::SharedPtr<Poco::MongoDB::QueryRequest> Database::createCommand() const
{
    Poco::SharedPtr<Poco::MongoDB::QueryRequest> cmd = createQueryRequest("$cmd");
    cmd->setNumberToReturn(1);
    return cmd;
}


inline Poco::SharedPtr<Poco::MongoDB::DeleteRequest> Database::createDeleteRequest(const std::string & collectionName) const
{
    return new Poco::MongoDB::DeleteRequest(_dbname + '.' + collectionName);
}


inline Poco::SharedPtr<Poco::MongoDB::InsertRequest> Database::createInsertRequest(const std::string & collectionName) const
{
    return new Poco::MongoDB::InsertRequest(_dbname + '.' + collectionName);
}


inline Poco::SharedPtr<Poco::MongoDB::QueryRequest> Database::createQueryRequest(const std::string & collectionName) const
{
    return new Poco::MongoDB::QueryRequest(_dbname + '.' + collectionName);
}


inline Poco::SharedPtr<Poco::MongoDB::UpdateRequest> Database::createUpdateRequest(const std::string & collectionName) const
{
    return new Poco::MongoDB::UpdateRequest(_dbname + '.' + collectionName);
}

// -- New wire protocol commands

inline Poco::SharedPtr<Poco::MongoDB::OpMsgMessage> Database::createOpMsgMessage(const std::string & collectionName) const
{
    return new Poco::MongoDB::OpMsgMessage(_dbname, collectionName);
}

inline Poco::SharedPtr<Poco::MongoDB::OpMsgMessage> Database::createOpMsgMessage() const
{
    // Collection name for database commands is not needed.
    return createOpMsgMessage("");
}

inline Poco::SharedPtr<Poco::MongoDB::OpMsgCursor> Database::createOpMsgCursor(const std::string & collectionName) const
{
    return new Poco::MongoDB::OpMsgCursor(_dbname, collectionName);
}


}
} // namespace Poco::MongoDB


#endif // MongoDB_Database_INCLUDED
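
// A sketch combining the helpers above: authenticate with the default
// SCRAM-SHA-1 mechanism, then issue an old-wire-protocol count().
// Credentials, database and collection names are placeholders.

#include <iostream>
#include "Poco/MongoDB/Connection.h"
#include "Poco/MongoDB/Database.h"

void countDocuments(Poco::MongoDB::Connection & connection)
{
    Poco::MongoDB::Database db("testdb");
    if (!db.authenticate(connection, "user", "secret"))
        return;                                    // invalid credentials
    Poco::Int64 n = db.count(connection, "col");   // -1 if the command fails
    std::cout << "documents: " << n << '\n';
}
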
@@ -1,116 +0,0 @@
//
// DeleteRequest.h
//
// Library: MongoDB
// Package: MongoDB
// Module: DeleteRequest
//
// Definition of the DeleteRequest class.
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#ifndef MongoDB_DeleteRequest_INCLUDED
#define MongoDB_DeleteRequest_INCLUDED


#include "Poco/MongoDB/Document.h"
#include "Poco/MongoDB/MongoDB.h"
#include "Poco/MongoDB/RequestMessage.h"


namespace Poco
{
namespace MongoDB
{


class MongoDB_API DeleteRequest : public RequestMessage
/// A DeleteRequest is used to delete one or more documents from a database.
///
/// Specific flags for this request:
///   - DELETE_DEFAULT: default delete operation
///   - DELETE_SINGLE_REMOVE: delete only the first document
{
public:
    enum Flags
    {
        DELETE_DEFAULT = 0,
        /// Default

        DELETE_SINGLE_REMOVE = 1
        /// Delete only the first document.
    };

    DeleteRequest(const std::string & collectionName, Flags flags = DELETE_DEFAULT);
    /// Creates a DeleteRequest for the given collection using the given flags.
    ///
    /// The full collection name is the concatenation of the database
    /// name with the collection name, using a "." for the concatenation. For example,
    /// for the database "foo" and the collection "bar", the full collection name is
    /// "foo.bar".

    DeleteRequest(const std::string & collectionName, bool justOne);
    /// Creates a DeleteRequest for the given collection.
    ///
    /// The full collection name is the concatenation of the database
    /// name with the collection name, using a "." for the concatenation. For example,
    /// for the database "foo" and the collection "bar", the full collection name is
    /// "foo.bar".
    ///
    /// If justOne is true, only the first matching document will
    /// be removed (the same as using flag DELETE_SINGLE_REMOVE).

    virtual ~DeleteRequest();
    /// Destructor

    Flags flags() const;
    /// Returns the flags.

    void flags(Flags flag);
    /// Sets the flags.

    Document & selector();
    /// Returns the selector document.

protected:
    void buildRequest(BinaryWriter & writer);
    /// Writes the OP_DELETE request to the writer.

private:
    Flags _flags;
    std::string _fullCollectionName;
    Document _selector;
};


//
// inlines
//
inline DeleteRequest::Flags DeleteRequest::flags() const
{
    return _flags;
}


inline void DeleteRequest::flags(DeleteRequest::Flags flags)
{
    _flags = flags;
}


inline Document & DeleteRequest::selector()
{
    return _selector;
}


}
} // namespace Poco::MongoDB


#endif // MongoDB_DeleteRequest_INCLUDED
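
// A sketch of a single-document delete on the old wire protocol: the
// selector document picks the matches and DELETE_SINGLE_REMOVE stops after
// the first one. Names are placeholders.

#include "Poco/MongoDB/Connection.h"
#include "Poco/MongoDB/Database.h"
#include "Poco/MongoDB/DeleteRequest.h"

void deleteOne(Poco::MongoDB::Connection & connection)
{
    Poco::MongoDB::Database db("testdb");
    Poco::SharedPtr<Poco::MongoDB::DeleteRequest> request = db.createDeleteRequest("col");
    request->flags(Poco::MongoDB::DeleteRequest::DELETE_SINGLE_REMOVE);
    request->selector().add("name", "obsolete");
    connection.sendRequest(*request);   // one-way: no response to read
}
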
@@ -1,296 +0,0 @@
//
// Document.h
//
// Library: MongoDB
// Package: MongoDB
// Module: Document
//
// Definition of the Document class.
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#ifndef MongoDB_Document_INCLUDED
#define MongoDB_Document_INCLUDED


#include <algorithm>
#include <cstdlib>
#include "Poco/BinaryReader.h"
#include "Poco/BinaryWriter.h"
#include "Poco/MongoDB/Element.h"
#include "Poco/MongoDB/MongoDB.h"


namespace Poco
{
namespace MongoDB
{

class Array;

class ElementFindByName
{
public:
    ElementFindByName(const std::string & name) : _name(name) { }

    bool operator()(const Element::Ptr & element) { return !element.isNull() && element->name() == _name; }

private:
    std::string _name;
};


class MongoDB_API Document
/// Represents a MongoDB (BSON) document.
{
public:
    using Ptr = SharedPtr<Document>;
    using Vector = std::vector<Document::Ptr>;

    Document();
    /// Creates an empty Document.

    virtual ~Document();
    /// Destroys the Document.

    Document & addElement(Element::Ptr element);
    /// Adds an element to the document.
    ///
    /// The active document is returned to allow chaining of the add methods.

    template <typename T>
    Document & add(const std::string & name, T value)
    /// Creates an element with the given name and value and
    /// adds it to the document.
    ///
    /// The active document is returned to allow chaining of the add methods.
    {
        return addElement(new ConcreteElement<T>(name, value));
    }

    Document & add(const std::string & name, const char * value)
    /// Creates an element with the given name and value and
    /// adds it to the document.
    ///
    /// The active document is returned to allow chaining of the add methods.
    {
        return addElement(new ConcreteElement<std::string>(name, std::string(value)));
    }

    Document & addNewDocument(const std::string & name);
    /// Creates a new document and adds it to this document.
    /// Unlike the other add methods, this method returns
    /// a reference to the new document.

    Array & addNewArray(const std::string & name);
    /// Creates a new array and adds it to this document.
    /// Returns a reference to the new array.

    void clear();
    /// Removes all elements from the document.

    void elementNames(std::vector<std::string> & keys) const;
    /// Puts all element names into the given std::vector.

    bool empty() const;
    /// Returns true if the document doesn't contain any elements.

    bool exists(const std::string & name) const;
    /// Returns true if the document has an element with the given name.

    template <typename T>
    T get(const std::string & name) const
    /// Returns the element with the given name and tries to convert
    /// it to the template type. When the element is not found, a
    /// NotFoundException will be thrown. When the element can't be
    /// converted, a BadCastException will be thrown.
    {
        Element::Ptr element = get(name);
        if (element.isNull())
        {
            throw NotFoundException(name);
        }
        else
        {
            if (ElementTraits<T>::TypeId == element->type())
            {
                ConcreteElement<T> * concrete = dynamic_cast<ConcreteElement<T> *>(element.get());
                if (concrete != 0)
                {
                    return concrete->value();
                }
            }
            throw BadCastException("Invalid type mismatch!");
        }
    }

    template <typename T>
    T get(const std::string & name, const T & def) const
    /// Returns the element with the given name and tries to convert
    /// it to the template type. When the element is not found, or
    /// has the wrong type, the def argument will be returned.
    {
        Element::Ptr element = get(name);
        if (element.isNull())
        {
            return def;
        }

        if (ElementTraits<T>::TypeId == element->type())
        {
            ConcreteElement<T> * concrete = dynamic_cast<ConcreteElement<T> *>(element.get());
            if (concrete != 0)
            {
                return concrete->value();
            }
        }

        return def;
    }

    Element::Ptr get(const std::string & name) const;
    /// Returns the element with the given name.
    /// An empty element will be returned when the element is not found.

    Int64 getInteger(const std::string & name) const;
    /// Returns an integer. Useful when MongoDB returns Int32, Int64
    /// or double for a number (count for example). This method will always
    /// return an Int64. When the element is not found, a
    /// Poco::NotFoundException will be thrown.

    bool remove(const std::string & name);
    /// Removes an element from the document.

    template <typename T>
    bool isType(const std::string & name) const
    /// Returns true when the type of the element equals the TypeId of ElementTraits<T>.
    {
        Element::Ptr element = get(name);
        if (element.isNull())
        {
            return false;
        }

        return ElementTraits<T>::TypeId == element->type();
    }

    void read(BinaryReader & reader);
    /// Reads a document from the reader.

    std::size_t size() const;
    /// Returns the number of elements in the document.

    virtual std::string toString(int indent = 0) const;
    /// Returns a string representation of the document.

    void write(BinaryWriter & writer);
    /// Writes the document to the writer.

protected:
    ElementSet _elements;
};


//
// inlines
//
inline Document & Document::addElement(Element::Ptr element)
{
    _elements.push_back(element);
    return *this;
}


inline Document & Document::addNewDocument(const std::string & name)
{
    Document::Ptr newDoc = new Document();
    add(name, newDoc);
    return *newDoc;
}


inline void Document::clear()
{
    _elements.clear();
}


inline bool Document::empty() const
{
    return _elements.empty();
}


inline void Document::elementNames(std::vector<std::string> & keys) const
{
    for (ElementSet::const_iterator it = _elements.begin(); it != _elements.end(); ++it)
    {
        keys.push_back((*it)->name());
    }
}


inline bool Document::exists(const std::string & name) const
{
    return std::find_if(_elements.begin(), _elements.end(), ElementFindByName(name)) != _elements.end();
}


inline bool Document::remove(const std::string & name)
{
    auto it = std::find_if(_elements.begin(), _elements.end(), ElementFindByName(name));
    if (it == _elements.end())
        return false;

    _elements.erase(it);
    return true;
}


inline std::size_t Document::size() const
{
    return _elements.size();
}


// BSON Embedded Document
// spec: document
template <>
struct ElementTraits<Document::Ptr>
{
    enum
    {
        TypeId = 0x03
    };

    static std::string toString(const Document::Ptr & value, int indent = 0)
    {
        return value.isNull() ? "null" : value->toString(indent);
    }
};


template <>
inline void BSONReader::read<Document::Ptr>(Document::Ptr & to)
{
    to->read(_reader);
}


template <>
inline void BSONWriter::write<Document::Ptr>(Document::Ptr & from)
{
    from->write(_writer);
}


}
} // namespace Poco::MongoDB


#endif // MongoDB_Document_INCLUDED
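
// A short sketch of the typed accessors above: get<T>(name) throws on a
// missing name or a type mismatch, while the two-argument form falls back
// to the supplied default. Keys and values are illustrative.

#include "Poco/MongoDB/Document.h"

bool inspect()
{
    Poco::MongoDB::Document doc;
    doc.add("count", Poco::Int32(42)).add("name", "example");   // chained adds

    Poco::Int32 count = doc.get<Poco::Int32>("count");          // 42
    double ratio = doc.get<double>("count", 0.0);               // 0.0: element is Int32, not double
    return count == 42 && ratio == 0.0 && doc.isType<std::string>("name");
}
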
@@ -1,393 +0,0 @@
//
// Element.h
//
// Library: MongoDB
// Package: MongoDB
// Module: Element
//
// Definition of the Element class.
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#ifndef MongoDB_Element_INCLUDED
#define MongoDB_Element_INCLUDED


#include <iomanip>
#include <list>
#include <sstream>
#include <string>
#include "Poco/BinaryReader.h"
#include "Poco/BinaryWriter.h"
#include "Poco/DateTimeFormatter.h"
#include "Poco/MongoDB/BSONReader.h"
#include "Poco/MongoDB/BSONWriter.h"
#include "Poco/MongoDB/MongoDB.h"
#include "Poco/Nullable.h"
#include "Poco/NumberFormatter.h"
#include "Poco/SharedPtr.h"
#include "Poco/Timestamp.h"
#include "Poco/UTF8String.h"


namespace Poco
{
namespace MongoDB
{


class MongoDB_API Element
/// Represents an Element of a Document or an Array.
{
public:
    using Ptr = Poco::SharedPtr<Element>;

    explicit Element(const std::string & name);
    /// Creates the Element with the given name.

    virtual ~Element();
    /// Destructor

    const std::string & name() const;
    /// Returns the name of the element.

    virtual std::string toString(int indent = 0) const = 0;
    /// Returns a string representation of the element.

    virtual int type() const = 0;
    /// Returns the MongoDB type of the element.

private:
    virtual void read(BinaryReader & reader) = 0;
    virtual void write(BinaryWriter & writer) = 0;

    friend class Document;
    std::string _name;
};


//
// inlines
//
inline const std::string & Element::name() const
{
    return _name;
}


using ElementSet = std::list<Element::Ptr>;


template <typename T>
struct ElementTraits
{
};


// BSON Floating point
// spec: double
template <>
struct ElementTraits<double>
{
    enum
    {
        TypeId = 0x01
    };

    static std::string toString(const double & value, int indent = 0) { return Poco::NumberFormatter::format(value); }
};


// BSON UTF-8 string
// spec: int32 (byte*) "\x00"
// int32 is the number of bytes in byte* + 1 (for the trailing "\x00")
template <>
struct ElementTraits<std::string>
{
    enum
    {
        TypeId = 0x02
    };

    static std::string toString(const std::string & value, int indent = 0)
    {
        std::ostringstream oss;

        oss << '"';

        for (std::string::const_iterator it = value.begin(); it != value.end(); ++it)
        {
            switch (*it)
            {
                case '"':
                    oss << "\\\"";
                    break;
                case '\\':
                    oss << "\\\\";
                    break;
                case '\b':
                    oss << "\\b";
                    break;
                case '\f':
                    oss << "\\f";
                    break;
                case '\n':
                    oss << "\\n";
                    break;
                case '\r':
                    oss << "\\r";
                    break;
                case '\t':
                    oss << "\\t";
                    break;
                default: {
                    if (*it > 0 && *it <= 0x1F)
                    {
                        oss << "\\u" << std::hex << std::uppercase << std::setfill('0') << std::setw(4) << static_cast<int>(*it);
                    }
                    else
                    {
                        oss << *it;
                    }
                    break;
                }
            }
        }
        oss << '"';
        return oss.str();
    }
};


template <>
inline void BSONReader::read<std::string>(std::string & to)
{
    Poco::Int32 size;
    _reader >> size;
    _reader.readRaw(size, to);
    to.erase(to.end() - 1); // remove terminating 0
}


template <>
inline void BSONWriter::write<std::string>(std::string & from)
{
    _writer << (Poco::Int32)(from.length() + 1);
    writeCString(from);
}


// BSON bool
// spec: "\x00" "\x01"
template <>
struct ElementTraits<bool>
{
    enum
    {
        TypeId = 0x08
    };

    static std::string toString(const bool & value, int indent = 0) { return value ? "true" : "false"; }
};


template <>
inline void BSONReader::read<bool>(bool & to)
{
    unsigned char b;
    _reader >> b;
    to = b != 0;
}


template <>
inline void BSONWriter::write<bool>(bool & from)
{
    unsigned char b = from ? 0x01 : 0x00;
    _writer << b;
}


// BSON 32-bit integer
// spec: int32
template <>
struct ElementTraits<Int32>
{
    enum
    {
        TypeId = 0x10
    };

    static std::string toString(const Int32 & value, int indent = 0) { return Poco::NumberFormatter::format(value); }
};


// BSON UTC datetime
// spec: int64
template <>
struct ElementTraits<Timestamp>
{
    enum
    {
        TypeId = 0x09
    };

    static std::string toString(const Timestamp & value, int indent = 0)
    {
        std::string result;
        result.append(1, '"');
        result.append(DateTimeFormatter::format(value, "%Y-%m-%dT%H:%M:%s%z"));
        result.append(1, '"');
        return result;
    }
};


template <>
inline void BSONReader::read<Timestamp>(Timestamp & to)
{
    Poco::Int64 value;
    _reader >> value;
    to = Timestamp::fromEpochTime(static_cast<std::time_t>(value / 1000));
    to += (value % 1000 * 1000);
}


template <>
inline void BSONWriter::write<Timestamp>(Timestamp & from)
{
    _writer << (from.epochMicroseconds() / 1000);
}


using NullValue = Nullable<unsigned char>;


// BSON Null Value
// spec:
template <>
struct ElementTraits<NullValue>
{
    enum
    {
        TypeId = 0x0A
    };

    static std::string toString(const NullValue & value, int indent = 0) { return "null"; }
};


template <>
inline void BSONReader::read<NullValue>(NullValue & to)
{
}


template <>
inline void BSONWriter::write<NullValue>(NullValue & from)
{
}


struct BSONTimestamp
{
    Poco::Timestamp ts;
    Poco::Int32 inc;
};


// BSON Timestamp
// spec: int64
template <>
struct ElementTraits<BSONTimestamp>
{
    enum
    {
        TypeId = 0x11
    };

    static std::string toString(const BSONTimestamp & value, int indent = 0)
    {
        std::string result;
        result.append(1, '"');
        result.append(DateTimeFormatter::format(value.ts, "%Y-%m-%dT%H:%M:%s%z"));
        result.append(1, ' ');
        result.append(NumberFormatter::format(value.inc));
        result.append(1, '"');
        return result;
    }
};


template <>
inline void BSONReader::read<BSONTimestamp>(BSONTimestamp & to)
{
    Poco::Int64 value;
    _reader >> value;
    to.inc = value & 0xffffffff;
    value >>= 32;
    to.ts = Timestamp::fromEpochTime(static_cast<std::time_t>(value));
}


template <>
inline void BSONWriter::write<BSONTimestamp>(BSONTimestamp & from)
{
    Poco::Int64 value = from.ts.epochMicroseconds() / 1000;
    value <<= 32;
    value += from.inc;
    _writer << value;
}


// BSON 64-bit integer
// spec: int64
template <>
struct ElementTraits<Int64>
{
    enum
    {
        TypeId = 0x12
    };

    static std::string toString(const Int64 & value, int indent = 0) { return NumberFormatter::format(value); }
};


template <typename T>
class ConcreteElement : public Element
{
public:
    ConcreteElement(const std::string & name, const T & init) : Element(name), _value(init) { }

    virtual ~ConcreteElement() { }

    T value() const { return _value; }

    std::string toString(int indent = 0) const { return ElementTraits<T>::toString(_value, indent); }

    int type() const { return ElementTraits<T>::TypeId; }

    void read(BinaryReader & reader) { BSONReader(reader).read(_value); }

    void write(BinaryWriter & writer) { BSONWriter(writer).write(_value); }

private:
    T _value;
};


}
} // namespace Poco::MongoDB


#endif // MongoDB_Element_INCLUDED
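
// A sketch of the BSONTimestamp packing used by the reader/writer
// specializations above: the high 32 bits of the int64 carry the time
// value, the low 32 bits the increment counter. This mirrors
// write<BSONTimestamp>; the helper name is illustrative.

#include "Poco/MongoDB/Element.h"

Poco::Int64 packTimestamp(const Poco::MongoDB::BSONTimestamp & ts)
{
    Poco::Int64 value = ts.ts.epochMicroseconds() / 1000;   // as in write<BSONTimestamp>
    value <<= 32;                                           // time value in the high half
    value += ts.inc;                                        // increment in the low half
    return value;
}
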
@@ -1,92 +0,0 @@
//
// GetMoreRequest.h
//
// Library: MongoDB
// Package: MongoDB
// Module: GetMoreRequest
//
// Definition of the GetMoreRequest class.
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#ifndef MongoDB_GetMoreRequest_INCLUDED
#define MongoDB_GetMoreRequest_INCLUDED


#include "Poco/MongoDB/MongoDB.h"
#include "Poco/MongoDB/RequestMessage.h"


namespace Poco
{
namespace MongoDB
{


class MongoDB_API GetMoreRequest : public RequestMessage
/// A GetMoreRequest is used to query the database for more documents in a collection
/// after a query request is sent (OP_GETMORE).
{
public:
    GetMoreRequest(const std::string & collectionName, Int64 cursorID);
    /// Creates a GetMoreRequest for the given collection and cursor.
    ///
    /// The full collection name is the concatenation of the database
    /// name with the collection name, using a "." for the concatenation. For example,
    /// for the database "foo" and the collection "bar", the full collection name is
    /// "foo.bar". The cursorID has been returned by the response to the query request.
    /// By default the numberToReturn is set to 100.

    virtual ~GetMoreRequest();
    /// Destroys the GetMoreRequest.

    Int32 getNumberToReturn() const;
    /// Returns the limit of returned documents.

    void setNumberToReturn(Int32 n);
    /// Sets the limit of returned documents.

    Int64 cursorID() const;
    /// Returns the cursor ID.

protected:
    void buildRequest(BinaryWriter & writer);

private:
    std::string _fullCollectionName;
    Int32 _numberToReturn;
    Int64 _cursorID;
};


//
// inlines
//
inline Int32 GetMoreRequest::getNumberToReturn() const
{
    return _numberToReturn;
}


inline void GetMoreRequest::setNumberToReturn(Int32 n)
{
    _numberToReturn = n;
}


inline Int64 GetMoreRequest::cursorID() const
{
    return _cursorID;
}


}
} // namespace Poco::MongoDB


#endif // MongoDB_GetMoreRequest_INCLUDED
@@ -1,100 +0,0 @@
//
// InsertRequest.h
//
// Library: MongoDB
// Package: MongoDB
// Module: InsertRequest
//
// Definition of the InsertRequest class.
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#ifndef MongoDB_InsertRequest_INCLUDED
#define MongoDB_InsertRequest_INCLUDED


#include "Poco/MongoDB/Document.h"
#include "Poco/MongoDB/MongoDB.h"
#include "Poco/MongoDB/RequestMessage.h"


namespace Poco
{
namespace MongoDB
{


class MongoDB_API InsertRequest : public RequestMessage
/// A request for inserting one or more documents into the database
/// (OP_INSERT).
{
public:
    enum Flags
    {
        INSERT_DEFAULT = 0,
        /// If specified, perform a normal insert operation.

        INSERT_CONTINUE_ON_ERROR = 1
        /// If set, the database will not stop processing a bulk insert if one
        /// fails (e.g. due to duplicate IDs). This makes bulk insert behave similarly
        /// to a series of single inserts, except lastError will be set if any insert
        /// fails, not just the last one. If multiple errors occur, only the most
        /// recent will be reported.
    };

    InsertRequest(const std::string & collectionName, Flags flags = INSERT_DEFAULT);
    /// Creates an InsertRequest.
    ///
    /// The full collection name is the concatenation of the database
    /// name with the collection name, using a "." for the concatenation. For example,
    /// for the database "foo" and the collection "bar", the full collection name is
    /// "foo.bar".

    virtual ~InsertRequest();
    /// Destroys the InsertRequest.

    Document & addNewDocument();
    /// Adds a new document for insertion. A reference to the empty document is
    /// returned. InsertRequest is the owner of the Document and will free it
    /// on destruction.

    Document::Vector & documents();
    /// Returns the documents to insert into the database.

protected:
    void buildRequest(BinaryWriter & writer);

private:
    Int32 _flags;
    std::string _fullCollectionName;
    Document::Vector _documents;
};


//
// inlines
//
inline Document & InsertRequest::addNewDocument()
{
    Document::Ptr doc = new Document();
    _documents.push_back(doc);
    return *doc;
}


inline Document::Vector & InsertRequest::documents()
{
    return _documents;
}


}
} // namespace Poco::MongoDB


#endif // MongoDB_InsertRequest_INCLUDED
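
// A sketch of a two-document OP_INSERT built with the helpers above: each
// addNewDocument() call returns a fresh Document owned by the request,
// ready to be filled in place. Names are placeholders.

#include "Poco/MongoDB/Connection.h"
#include "Poco/MongoDB/Database.h"
#include "Poco/MongoDB/InsertRequest.h"

void insertTwo(Poco::MongoDB::Connection & connection)
{
    Poco::MongoDB::Database db("testdb");
    Poco::SharedPtr<Poco::MongoDB::InsertRequest> request = db.createInsertRequest("col");
    request->addNewDocument().add("name", "first");
    request->addNewDocument().add("name", "second");
    connection.sendRequest(*request);   // old protocol: no acknowledgement
}
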
@@ -1,108 +0,0 @@
//
// JavaScriptCode.h
//
// Library: MongoDB
// Package: MongoDB
// Module: JavaScriptCode
//
// Definition of the JavaScriptCode class.
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#ifndef MongoDB_JavaScriptCode_INCLUDED
#define MongoDB_JavaScriptCode_INCLUDED


#include "Poco/MongoDB/BSONReader.h"
#include "Poco/MongoDB/BSONWriter.h"
#include "Poco/MongoDB/Element.h"
#include "Poco/MongoDB/MongoDB.h"
#include "Poco/SharedPtr.h"


namespace Poco
{
namespace MongoDB
{


class MongoDB_API JavaScriptCode
/// Represents the JavaScript type in BSON.
{
public:
    using Ptr = SharedPtr<JavaScriptCode>;

    JavaScriptCode();
    /// Creates an empty JavaScriptCode object.

    virtual ~JavaScriptCode();
    /// Destroys the JavaScriptCode.

    void setCode(const std::string & code);
    /// Sets the JavaScript code.

    std::string getCode() const;
    /// Returns the JavaScript code.

private:
    std::string _code;
};


//
// inlines
//
inline void JavaScriptCode::setCode(const std::string & code)
{
    _code = code;
}


inline std::string JavaScriptCode::getCode() const
{
    return _code;
}


// BSON JavaScript code
// spec: string
template <>
struct ElementTraits<JavaScriptCode::Ptr>
{
    enum
    {
        TypeId = 0x0D
    };

    static std::string toString(const JavaScriptCode::Ptr & value, int indent = 0) { return value.isNull() ? "" : value->getCode(); }
};


template <>
inline void BSONReader::read<JavaScriptCode::Ptr>(JavaScriptCode::Ptr & to)
{
    std::string code;
    BSONReader(_reader).read(code);
    to = new JavaScriptCode();
    to->setCode(code);
}


template <>
inline void BSONWriter::write<JavaScriptCode::Ptr>(JavaScriptCode::Ptr & from)
{
    std::string code = from->getCode();
    BSONWriter(_writer).write(code);
}


}
} // namespace Poco::MongoDB


#endif // MongoDB_JavaScriptCode_INCLUDED
@@ -1,65 +0,0 @@
//
// KillCursorsRequest.h
//
// Library: MongoDB
// Package: MongoDB
// Module: KillCursorsRequest
//
// Definition of the KillCursorsRequest class.
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#ifndef MongoDB_KillCursorsRequest_INCLUDED
#define MongoDB_KillCursorsRequest_INCLUDED


#include "Poco/MongoDB/MongoDB.h"
#include "Poco/MongoDB/RequestMessage.h"


namespace Poco
{
namespace MongoDB
{


class MongoDB_API KillCursorsRequest : public RequestMessage
/// Class for creating an OP_KILL_CURSORS client request. This
/// request is used to kill still-open cursors returned by
/// query requests.
{
public:
    KillCursorsRequest();
    /// Creates a KillCursorsRequest.

    virtual ~KillCursorsRequest();
    /// Destroys the KillCursorsRequest.

    std::vector<Int64> & cursors();
    /// Returns the internal list of cursors.

protected:
    void buildRequest(BinaryWriter & writer);
    std::vector<Int64> _cursors;
};


//
// inlines
//
inline std::vector<Int64> & KillCursorsRequest::cursors()
{
    return _cursors;
}


}
} // namespace Poco::MongoDB


#endif // MongoDB_KillCursorsRequest_INCLUDED
@@ -1,76 +0,0 @@
//
// Message.h
//
// Library: MongoDB
// Package: MongoDB
// Module: Message
//
// Definition of the Message class.
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#ifndef MongoDB_Message_INCLUDED
#define MongoDB_Message_INCLUDED


#include <sstream>
#include "Poco/BinaryReader.h"
#include "Poco/BinaryWriter.h"
#include "Poco/MongoDB/MessageHeader.h"
#include "Poco/MongoDB/MongoDB.h"
#include "Poco/Net/Socket.h"


namespace Poco
{
namespace MongoDB
{


class MongoDB_API Message
/// Base class for all messages sent to or received from the MongoDB server.
{
public:
    explicit Message(MessageHeader::OpCode opcode);
    /// Creates a Message using the given OpCode.

    virtual ~Message();
    /// Destructor

    MessageHeader & header();
    /// Returns the message header.

protected:
    MessageHeader _header;

    void messageLength(Poco::Int32 length);
    /// Sets the message length in the message header.
};


//
// inlines
//
inline MessageHeader & Message::header()
{
    return _header;
}


inline void Message::messageLength(Poco::Int32 length)
{
    poco_assert(length > 0);
    _header.setMessageLength(length);
}


}
} // namespace Poco::MongoDB


#endif // MongoDB_Message_INCLUDED

@@ -1,140 +0,0 @@
//
// MessageHeader.h
//
// Library: MongoDB
// Package: MongoDB
// Module: MessageHeader
//
// Definition of the MessageHeader class.
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#ifndef MongoDB_MessageHeader_INCLUDED
#define MongoDB_MessageHeader_INCLUDED


#include "Poco/BinaryReader.h"
#include "Poco/BinaryWriter.h"
#include "Poco/MongoDB/MongoDB.h"


namespace Poco
{
namespace MongoDB
{


class Message; // Required to disambiguate friend declaration in MessageHeader.


class MongoDB_API MessageHeader
/// Represents the message header which is always prepended to a
/// MongoDB request or response message.
{
public:
    static const unsigned int MSG_HEADER_SIZE = 16;

    enum OpCode
    {
        // Opcodes deprecated in MongoDB 5.0
        OP_REPLY = 1,
        OP_UPDATE = 2001,
        OP_INSERT = 2002,
        OP_QUERY = 2004,
        OP_GET_MORE = 2005,
        OP_DELETE = 2006,
        OP_KILL_CURSORS = 2007,

        // Opcodes supported in MongoDB 5.1 and later
        OP_COMPRESSED = 2012,
        OP_MSG = 2013
    };

    explicit MessageHeader(OpCode);
    /// Creates the MessageHeader using the given OpCode.

    virtual ~MessageHeader();
    /// Destroys the MessageHeader.

    void read(BinaryReader & reader);
    /// Reads the header using the given BinaryReader.

    void write(BinaryWriter & writer);
    /// Writes the header using the given BinaryWriter.

    Int32 getMessageLength() const;
    /// Returns the message length.

    OpCode opCode() const;
    /// Returns the OpCode.

    Int32 getRequestID() const;
    /// Returns the request ID of the current message.

    void setRequestID(Int32 id);
    /// Sets the request ID of the current message.

    Int32 responseTo() const;
    /// Returns the request ID from the original request.

private:
    void setMessageLength(Int32 length);
    /// Sets the message length.

    Int32 _messageLength;
    Int32 _requestID;
    Int32 _responseTo;
    OpCode _opCode;

    friend class Message;
};


//
// inlines
//
inline MessageHeader::OpCode MessageHeader::opCode() const
{
    return _opCode;
}


inline Int32 MessageHeader::getMessageLength() const
{
    return _messageLength;
}


inline void MessageHeader::setMessageLength(Int32 length)
{
    poco_assert(length >= 0);
    _messageLength = MSG_HEADER_SIZE + length;
}


inline void MessageHeader::setRequestID(Int32 id)
{
    _requestID = id;
}


inline Int32 MessageHeader::getRequestID() const
{
    return _requestID;
}


inline Int32 MessageHeader::responseTo() const
{
    return _responseTo;
}


}
} // namespace Poco::MongoDB


#endif // MongoDB_MessageHeader_INCLUDED
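
A sketch of the framing arithmetic (not from the original file; istr is an assumed socket input stream): MSG_HEADER_SIZE is the fixed 16-byte wire header, so the stored message length is header plus body.

// Sketch: reading a wire header and deriving the body size.
Poco::BinaryReader reader(istr, Poco::BinaryReader::LITTLE_ENDIAN_BYTE_ORDER);
Poco::MongoDB::MessageHeader header(Poco::MongoDB::MessageHeader::OP_MSG);
header.read(reader); // fills messageLength, requestID, responseTo, opCode
const Poco::Int32 bodySize =
    header.getMessageLength() - Poco::MongoDB::MessageHeader::MSG_HEADER_SIZE;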

@@ -1,64 +0,0 @@
//
// MongoDB.h
//
// Library: MongoDB
// Package: MongoDB
// Module: MongoDB
//
// Basic definitions for the Poco MongoDB library.
// This file must be the first file included by every other MongoDB
// header file.
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#ifndef MongoDBMongoDB_INCLUDED
#define MongoDBMongoDB_INCLUDED


#include "Poco/Foundation.h"


//
// The following block is the standard way of creating macros which make exporting
// from a DLL simpler. All files within this DLL are compiled with the MongoDB_EXPORTS
// symbol defined on the command line. This symbol should not be defined on any project
// that uses this DLL. This way any other project whose source files include this file sees
// MongoDB_API functions as being imported from a DLL, whereas this DLL sees symbols
// defined with this macro as being exported.
//


#if defined(_WIN32) && defined(POCO_DLL)
#    if defined(MongoDB_EXPORTS)
#        define MongoDB_API __declspec(dllexport)
#    else
#        define MongoDB_API __declspec(dllimport)
#    endif
#endif


#if !defined(MongoDB_API)
#    if !defined(POCO_NO_GCC_API_ATTRIBUTE) && defined(__GNUC__) && (__GNUC__ >= 4)
#        define MongoDB_API __attribute__((visibility("default")))
#    else
#        define MongoDB_API
#    endif
#endif


//
// Automatically link MongoDB library.
//
#if defined(_MSC_VER)
#    if !defined(POCO_NO_AUTOMATIC_LIBS) && !defined(MongoDB_EXPORTS)
#        pragma comment(lib, "PocoMongoDB" POCO_LIB_SUFFIX)
#    endif
#endif


#endif // MongoDBMongoDB_INCLUDED
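
Illustrative only (the compiler invocations below are examples, not from the original file):

// When building the library itself:   g++ -DMongoDB_EXPORTS -DPOCO_DLL ...
//   -> MongoDB_API expands to __declspec(dllexport) on Windows.
// When building client code:          g++ -DPOCO_DLL ...
//   -> MongoDB_API expands to __declspec(dllimport), so a declaration such as
class MongoDB_API Connection; // resolves against symbols exported by the DLL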

@@ -1,151 +0,0 @@
//
// ObjectId.h
//
// Library: MongoDB
// Package: MongoDB
// Module: ObjectId
//
// Definition of the ObjectId class.
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#ifndef MongoDB_ObjectId_INCLUDED
#define MongoDB_ObjectId_INCLUDED


#include "Poco/MongoDB/Element.h"
#include "Poco/MongoDB/MongoDB.h"
#include "Poco/Timestamp.h"


namespace Poco
{
namespace MongoDB
{


class MongoDB_API ObjectId
/// ObjectId is a 12-byte BSON type, constructed using:
///
/// - a 4-byte timestamp,
/// - a 3-byte machine identifier,
/// - a 2-byte process id, and
/// - a 3-byte counter, starting with a random value.
///
/// In MongoDB, documents stored in a collection require a unique _id field that acts
/// as a primary key. Because ObjectIds are small, most likely unique, and fast to generate,
/// MongoDB uses ObjectIds as the default value for the _id field if the _id field is not
/// specified; i.e., the mongod adds the _id field and generates a unique ObjectId to assign
/// as its value.
{
public:
    using Ptr = SharedPtr<ObjectId>;

    explicit ObjectId(const std::string & id);
    /// Creates an ObjectId from a string.
    ///
    /// The string must contain a hexadecimal representation
    /// of an object ID. This means a string of 24 characters.

    ObjectId(const ObjectId & copy);
    /// Creates an ObjectId by copying another one.

    virtual ~ObjectId();
    /// Destroys the ObjectId.

    Timestamp timestamp() const;
    /// Returns the timestamp which is stored in the first four bytes of the id.

    std::string toString(const std::string & fmt = "%02x") const;
    /// Returns the id in string format. The fmt parameter
    /// specifies the formatting used for individual members
    /// of the ID char array.

private:
    ObjectId();

    static int fromHex(char c);
    static char fromHex(const char * c);

    unsigned char _id[12];

    friend class BSONWriter;
    friend class BSONReader;
    friend class Document;
};


//
// inlines
//
inline Timestamp ObjectId::timestamp() const
{
    int time;
    char * T = (char *)&time;
    T[0] = _id[3];
    T[1] = _id[2];
    T[2] = _id[1];
    T[3] = _id[0];
    return Timestamp::fromEpochTime((time_t)time);
}


inline int ObjectId::fromHex(char c)
{
    if ('0' <= c && c <= '9')
        return c - '0';
    if ('a' <= c && c <= 'f')
        return c - 'a' + 10;
    if ('A' <= c && c <= 'F')
        return c - 'A' + 10;
    return 0xff;
}


inline char ObjectId::fromHex(const char * c)
{
    return (char)((fromHex(c[0]) << 4) | fromHex(c[1]));
}


// BSON ObjectId
// spec: ObjectId
template <>
struct ElementTraits<ObjectId::Ptr>
{
    enum
    {
        TypeId = 0x07
    };

    static std::string toString(const ObjectId::Ptr & id, int indent = 0, const std::string & fmt = "%02x")
    {
        return id->toString(fmt);
    }
};


template <>
inline void BSONReader::read<ObjectId::Ptr>(ObjectId::Ptr & to)
{
    _reader.readRaw((char *)to->_id, 12);
}


template <>
inline void BSONWriter::write<ObjectId::Ptr>(ObjectId::Ptr & from)
{
    _writer.writeRaw((char *)from->_id, 12);
}


}
} // namespace Poco::MongoDB


#endif // MongoDB_ObjectId_INCLUDED
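
A usage sketch (the hex value is an arbitrary example, not from the sources):

// Sketch: constructing an ObjectId from its 24-character hex form.
Poco::MongoDB::ObjectId::Ptr id =
    new Poco::MongoDB::ObjectId("507f1f77bcf86cd799439011"); // example value
Poco::Timestamp created = id->timestamp(); // epoch seconds from the first 4 bytes
std::string hex = id->toString();          // back to 24 hex characters ("%02x" per byte)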

@@ -1,96 +0,0 @@
//
// OpMsgCursor.h
//
// Library: MongoDB
// Package: MongoDB
// Module: OpMsgCursor
//
// Definition of the OpMsgCursor class.
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#ifndef MongoDB_OpMsgCursor_INCLUDED
#define MongoDB_OpMsgCursor_INCLUDED


#include "Poco/MongoDB/Connection.h"
#include "Poco/MongoDB/MongoDB.h"
#include "Poco/MongoDB/OpMsgMessage.h"


namespace Poco
{
namespace MongoDB
{


class MongoDB_API OpMsgCursor : public Document
/// OpMsgCursor is a helper class for querying multiple documents using OpMsgMessage.
{
public:
    OpMsgCursor(const std::string & dbname, const std::string & collectionName);
    /// Creates an OpMsgCursor for the given database and collection.

    virtual ~OpMsgCursor();
    /// Destroys the OpMsgCursor.

    void setEmptyFirstBatch(bool empty);
    /// Requests an empty first batch. This returns an error response faster,
    /// with little processing on the server.

    bool emptyFirstBatch() const;

    void setBatchSize(Int32 batchSize);
    /// Sets a non-default batch size.

    Int32 batchSize() const;
    /// Returns the current batch size (zero or a negative number indicates the default batch size).

    Int64 cursorID() const;

    OpMsgMessage & next(Connection & connection);
    /// Tries to get the next documents. As long as the response message has a
    /// cursor ID, next() can be called to retrieve the next batch of documents.
    ///
    /// The cursor must be killed (see kill()) when not all documents are needed.

    OpMsgMessage & query();
    /// Returns the associated query.

    void kill(Connection & connection);
    /// Kills the cursor and resets it so that it can be reused.

private:
    OpMsgMessage _query;
    OpMsgMessage _response;

    bool _emptyFirstBatch{false};
    Int32 _batchSize{-1};
    /// Batch size used in the cursor. Zero or a negative value means that the default shall be used.

    Int64 _cursorID{0};
};


//
// inlines
//
inline OpMsgMessage & OpMsgCursor::query()
{
    return _query;
}

inline Int64 OpMsgCursor::cursorID() const
{
    return _cursorID;
}


}
} // namespace Poco::MongoDB


#endif // MongoDB_OpMsgCursor_INCLUDED
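
A drain-loop sketch (the address and collection names are assumptions for illustration):

// Sketch: iterating all batches of a find via OP_MSG.
Poco::MongoDB::Connection conn("localhost:27017");
Poco::MongoDB::OpMsgCursor cursor("db", "coll");
cursor.query().setCommandName(Poco::MongoDB::OpMsgMessage::CMD_FIND);
for (;;)
{
    Poco::MongoDB::OpMsgMessage & reply = cursor.next(conn);
    for (auto & doc : reply.documents())
    {
        // ... process doc (a Document::Ptr) ...
    }
    if (cursor.cursorID() == 0)
        break; // server has exhausted the cursor; call kill() instead when bailing out early
}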

@@ -1,163 +0,0 @@
//
// OpMsgMessage.h
//
// Library: MongoDB
// Package: MongoDB
// Module: OpMsgMessage
//
// Definition of the OpMsgMessage class.
//
// Copyright (c) 2022, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#ifndef MongoDB_OpMsgMessage_INCLUDED
#define MongoDB_OpMsgMessage_INCLUDED


#include "Poco/MongoDB/Document.h"
#include "Poco/MongoDB/Message.h"
#include "Poco/MongoDB/MongoDB.h"

#include <string>

namespace Poco
{
namespace MongoDB
{


class MongoDB_API OpMsgMessage : public Message
/// This class represents an OP_MSG request/response message, used to send
/// commands to and receive replies from a MongoDB server.
{
public:
    // Constants for the most often used MongoDB commands that can be sent using OP_MSG.
    // For the complete list see: https://www.mongodb.com/docs/manual/reference/command/

    // Query and write
    static const std::string CMD_INSERT;
    static const std::string CMD_DELETE;
    static const std::string CMD_UPDATE;
    static const std::string CMD_FIND;
    static const std::string CMD_FIND_AND_MODIFY;
    static const std::string CMD_GET_MORE;

    // Aggregation
    static const std::string CMD_AGGREGATE;
    static const std::string CMD_COUNT;
    static const std::string CMD_DISTINCT;
    static const std::string CMD_MAP_REDUCE;

    // Replication and administration
    static const std::string CMD_HELLO;
    static const std::string CMD_REPL_SET_GET_STATUS;
    static const std::string CMD_REPL_SET_GET_CONFIG;

    static const std::string CMD_CREATE;
    static const std::string CMD_CREATE_INDEXES;
    static const std::string CMD_DROP;
    static const std::string CMD_DROP_DATABASE;
    static const std::string CMD_KILL_CURSORS;
    static const std::string CMD_LIST_DATABASES;
    static const std::string CMD_LIST_INDEXES;

    // Diagnostic
    static const std::string CMD_BUILD_INFO;
    static const std::string CMD_COLL_STATS;
    static const std::string CMD_DB_STATS;
    static const std::string CMD_HOST_INFO;


    enum Flags : UInt32
    {
        MSG_FLAGS_DEFAULT = 0,

        MSG_CHECKSUM_PRESENT = (1 << 0),

        MSG_MORE_TO_COME = (1 << 1),
        /// Sender will send another message and is not prepared for overlapping messages.

        MSG_EXHAUST_ALLOWED = (1 << 16)
        /// Client is prepared for multiple replies (using the moreToCome bit) to this request.
    };

    OpMsgMessage();
    /// Creates an OpMsgMessage for a response.

    OpMsgMessage(const std::string & databaseName, const std::string & collectionName, UInt32 flags = MSG_FLAGS_DEFAULT);
    /// Creates an OpMsgMessage for requests.

    virtual ~OpMsgMessage();

    const std::string & databaseName() const;

    const std::string & collectionName() const;

    void setCommandName(const std::string & command);
    /// Sets the command name and clears the command document.

    void setCursor(Poco::Int64 cursorID, Poco::Int32 batchSize = -1);
    /// Sets the command "getMore" for the cursor ID, with the given batch size (if it is not negative).

    const std::string & commandName() const;
    /// Returns the current command name.

    void setAcknowledgedRequest(bool ack);
    /// Set to false to create a request that does not return a response.
    /// This has an effect only for commands that write or delete documents.
    /// Default is true (the request returns an acknowledging response).

    bool acknowledgedRequest() const;

    UInt32 flags() const;

    Document & body();
    /// Access to the body document.
    /// Additional query arguments shall be added after setting the command name.

    const Document & body() const;

    Document::Vector & documents();
    /// Documents prepared for the request or retrieved in the response.

    const Document::Vector & documents() const;
    /// Documents prepared for the request or retrieved in the response.

    bool responseOk() const;
    /// Reads the "ok" status from the response message.

    void clear();
    /// Clears the message.

    void send(std::ostream & ostr);
    /// Writes the request to the stream.

    void read(std::istream & istr);
    /// Reads the response from the stream.

private:
    enum PayloadType : UInt8
    {
        PAYLOAD_TYPE_0 = 0,
        PAYLOAD_TYPE_1 = 1
    };

    std::string _databaseName;
    std::string _collectionName;
    UInt32 _flags{MSG_FLAGS_DEFAULT};
    std::string _commandName;
    bool _acknowledged{true};

    Document _body;
    Document::Vector _documents;
};


}
} // namespace Poco::MongoDB


#endif // MongoDB_OpMsgMessage_INCLUDED
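
A request/response sketch (filterDoc and the address are assumptions for illustration):

// Sketch: sending a "find" command and checking the reply.
Poco::MongoDB::Connection conn("localhost:27017");
Poco::MongoDB::OpMsgMessage request("db", "coll");
request.setCommandName(Poco::MongoDB::OpMsgMessage::CMD_FIND);
request.body().add("filter", filterDoc); // filterDoc: a Document::Ptr built by the caller
Poco::MongoDB::OpMsgMessage response;
conn.sendRequest(request, response);
if (!response.responseOk())
    throw Poco::IOException("find failed");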

@@ -1,123 +0,0 @@
//
// PoolableConnectionFactory.h
//
// Library: MongoDB
// Package: MongoDB
// Module: PoolableConnectionFactory
//
// Definition of the PoolableConnectionFactory class.
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#ifndef MongoDB_PoolableConnectionFactory_INCLUDED
#define MongoDB_PoolableConnectionFactory_INCLUDED


#include "Poco/MongoDB/Connection.h"
#include "Poco/ObjectPool.h"


namespace Poco
{


template <>
class PoolableObjectFactory<MongoDB::Connection, MongoDB::Connection::Ptr>
/// PoolableObjectFactory specialisation for Connection. New connections
/// are created with the given address or URI.
///
/// If a Connection::SocketFactory is given, it must live for the entire
/// lifetime of the PoolableObjectFactory.
{
public:
    PoolableObjectFactory(Net::SocketAddress & address) : _address(address), _pSocketFactory(0) { }

    PoolableObjectFactory(const std::string & address) : _address(address), _pSocketFactory(0) { }

    PoolableObjectFactory(const std::string & uri, MongoDB::Connection::SocketFactory & socketFactory)
        : _uri(uri), _pSocketFactory(&socketFactory)
    {
    }

    MongoDB::Connection::Ptr createObject()
    {
        if (_pSocketFactory)
            return new MongoDB::Connection(_uri, *_pSocketFactory);
        else
            return new MongoDB::Connection(_address);
    }

    bool validateObject(MongoDB::Connection::Ptr pObject) { return true; }

    void activateObject(MongoDB::Connection::Ptr pObject) { }

    void deactivateObject(MongoDB::Connection::Ptr pObject) { }

    void destroyObject(MongoDB::Connection::Ptr pObject) { }

private:
    Net::SocketAddress _address;
    std::string _uri;
    MongoDB::Connection::SocketFactory * _pSocketFactory;
};


namespace MongoDB
{


class PooledConnection
/// Helper class for borrowing and returning a connection automatically from a pool.
{
public:
    PooledConnection(Poco::ObjectPool<Connection, Connection::Ptr> & pool) : _pool(pool) { _connection = _pool.borrowObject(); }

    virtual ~PooledConnection()
    {
        try
        {
            if (_connection)
            {
                _pool.returnObject(_connection);
            }
        }
        catch (...)
        {
            poco_unexpected();
        }
    }

    operator Connection::Ptr() { return _connection; }

#if defined(POCO_ENABLE_CPP11)
    // Disable copy to prevent unwanted release of resources: C++11 way
    PooledConnection(const PooledConnection &) = delete;
    PooledConnection & operator=(const PooledConnection &) = delete;

    // Enable move semantics
    PooledConnection(PooledConnection && other) = default;
    PooledConnection & operator=(PooledConnection &&) = default;
#endif

private:
#if !defined(POCO_ENABLE_CPP11)
    // Disable copy to prevent unwanted release of resources: pre C++11 way
    PooledConnection(const PooledConnection &);
    PooledConnection & operator=(const PooledConnection &);
#endif

    Poco::ObjectPool<Connection, Connection::Ptr> & _pool;
    Connection::Ptr _connection;
};


}
} // namespace Poco::MongoDB


#endif // MongoDB_PoolableConnectionFactory_INCLUDED
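
A pooling sketch (URI and pool sizes are assumptions; Poco::ObjectPool takes a factory, capacity, and peak capacity):

// Sketch: borrowing a connection via the specialisation above.
Poco::MongoDB::Connection::SocketFactory sf;
Poco::PoolableObjectFactory<Poco::MongoDB::Connection, Poco::MongoDB::Connection::Ptr>
    factory("mongodb://localhost:27017/", sf);
Poco::ObjectPool<Poco::MongoDB::Connection, Poco::MongoDB::Connection::Ptr>
    pool(factory, 10, 15);
{
    Poco::MongoDB::PooledConnection pooled(pool);  // borrows from the pool
    Poco::MongoDB::Connection::Ptr conn = pooled;  // implicit conversion
    // ... use conn ...
}                                                  // destructor returns it to the pool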

@@ -1,190 +0,0 @@
//
// QueryRequest.h
//
// Library: MongoDB
// Package: MongoDB
// Module: QueryRequest
//
// Definition of the QueryRequest class.
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#ifndef MongoDB_QueryRequest_INCLUDED
#define MongoDB_QueryRequest_INCLUDED


#include "Poco/MongoDB/Document.h"
#include "Poco/MongoDB/MongoDB.h"
#include "Poco/MongoDB/RequestMessage.h"


namespace Poco
{
namespace MongoDB
{


class MongoDB_API QueryRequest : public RequestMessage
/// A request to query documents in a MongoDB database
/// using an OP_QUERY request.
{
public:
    enum Flags
    {
        QUERY_DEFAULT = 0,
        /// Do not set any flags.

        QUERY_TAILABLE_CURSOR = 2,
        /// Tailable means the cursor is not closed when the last data is retrieved.
        /// Rather, the cursor marks the final object's position.
        /// You can resume using the cursor later, from where it was located,
        /// if more data were received. Like any "latent cursor", the cursor may
        /// become invalid at some point (CursorNotFound) – for example if the final
        /// object it references were deleted.

        QUERY_SLAVE_OK = 4,
        /// Allow query of a replica slave. Normally these return an error except
        /// for namespace "local".

        // QUERY_OPLOG_REPLAY = 8 (internal replication use only - drivers should not implement)

        QUERY_NO_CURSOR_TIMEOUT = 16,
        /// The server normally times out idle cursors after an inactivity period
        /// (10 minutes) to prevent excess memory use. Set this option to prevent that.

        QUERY_AWAIT_DATA = 32,
        /// Use with QUERY_TAILABLE_CURSOR. If we are at the end of the data, block for
        /// a while rather than returning no data. After a timeout period, we do
        /// return as normal.

        QUERY_EXHAUST = 64,
        /// Stream the data down full blast in multiple "more" packages, on the
        /// assumption that the client will fully read all data queried.
        /// Faster when you are pulling a lot of data and know you want to pull
        /// it all down.
        /// Note: the client must read all the data unless it closes the connection.

        QUERY_PARTIAL = 128
        /// Get partial results from a mongos if some shards are down
        /// (instead of throwing an error).
    };

    QueryRequest(const std::string & collectionName, Flags flags = QUERY_DEFAULT);
    /// Creates a QueryRequest.
    ///
    /// The full collection name is the concatenation of the database
    /// name with the collection name, using a "." for the concatenation. For example,
    /// for the database "foo" and the collection "bar", the full collection name is
    /// "foo.bar".

    virtual ~QueryRequest();
    /// Destroys the QueryRequest.

    Flags getFlags() const;
    /// Returns the flags.

    void setFlags(Flags flags);
    /// Sets the flags.

    std::string fullCollectionName() const;
    /// Returns the <db>.<collection> used for this query.

    Int32 getNumberToSkip() const;
    /// Returns the number of documents to skip.

    void setNumberToSkip(Int32 n);
    /// Sets the number of documents to skip.

    Int32 getNumberToReturn() const;
    /// Returns the number of documents to return.

    void setNumberToReturn(Int32 n);
    /// Sets the number of documents to return (limit).

    Document & selector();
    /// Returns the selector document.

    Document & returnFieldSelector();
    /// Returns the field selector document.

protected:
    void buildRequest(BinaryWriter & writer);

private:
    Flags _flags;
    std::string _fullCollectionName;
    Int32 _numberToSkip;
    Int32 _numberToReturn;
    Document _selector;
    Document _returnFieldSelector;
};


//
// inlines
//
inline QueryRequest::Flags QueryRequest::getFlags() const
{
    return _flags;
}


inline void QueryRequest::setFlags(QueryRequest::Flags flags)
{
    _flags = flags;
}


inline std::string QueryRequest::fullCollectionName() const
{
    return _fullCollectionName;
}


inline Document & QueryRequest::selector()
{
    return _selector;
}


inline Document & QueryRequest::returnFieldSelector()
{
    return _returnFieldSelector;
}


inline Int32 QueryRequest::getNumberToSkip() const
{
    return _numberToSkip;
}


inline void QueryRequest::setNumberToSkip(Int32 n)
{
    _numberToSkip = n;
}


inline Int32 QueryRequest::getNumberToReturn() const
{
    return _numberToReturn;
}


inline void QueryRequest::setNumberToReturn(Int32 n)
{
    _numberToReturn = n;
}


}
} // namespace Poco::MongoDB


#endif // MongoDB_QueryRequest_INCLUDED
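
A legacy OP_QUERY sketch (address and field values are assumptions for illustration):

// Sketch: one OP_QUERY round trip using the request above.
Poco::MongoDB::Connection conn("localhost:27017");
Poco::MongoDB::QueryRequest request("db.coll"); // <database>.<collection>
request.selector().add("status", std::string("active"));
request.setNumberToReturn(10); // limit
Poco::MongoDB::ResponseMessage response;
conn.sendRequest(request, response);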

@@ -1,135 +0,0 @@
//
// RegularExpression.h
//
// Library: MongoDB
// Package: MongoDB
// Module: RegularExpression
//
// Definition of the RegularExpression class.
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#ifndef MongoDB_RegularExpression_INCLUDED
#define MongoDB_RegularExpression_INCLUDED


#include "Poco/MongoDB/Element.h"
#include "Poco/MongoDB/MongoDB.h"
#include "Poco/RegularExpression.h"


namespace Poco
{
namespace MongoDB
{


class MongoDB_API RegularExpression
/// Represents a regular expression in BSON format.
{
public:
    using Ptr = SharedPtr<RegularExpression>;

    RegularExpression();
    /// Creates an empty RegularExpression.

    RegularExpression(const std::string & pattern, const std::string & options);
    /// Creates a RegularExpression using the given pattern and options.

    virtual ~RegularExpression();
    /// Destroys the RegularExpression.

    SharedPtr<Poco::RegularExpression> createRE() const;
    /// Tries to create a Poco::RegularExpression from the MongoDB regular expression.

    std::string getOptions() const;
    /// Returns the options string.

    void setOptions(const std::string & options);
    /// Sets the options string.

    std::string getPattern() const;
    /// Returns the pattern.

    void setPattern(const std::string & pattern);
    /// Sets the pattern.

private:
    std::string _pattern;
    std::string _options;
};


//
// inlines
//
inline std::string RegularExpression::getPattern() const
{
    return _pattern;
}


inline void RegularExpression::setPattern(const std::string & pattern)
{
    _pattern = pattern;
}


inline std::string RegularExpression::getOptions() const
{
    return _options;
}


inline void RegularExpression::setOptions(const std::string & options)
{
    _options = options;
}


// BSON Regex
// spec: cstring cstring
template <>
struct ElementTraits<RegularExpression::Ptr>
{
    enum
    {
        TypeId = 0x0B
    };

    static std::string toString(const RegularExpression::Ptr & value, int indent = 0)
    {
        //TODO
        return "RE: not implemented yet";
    }
};


template <>
inline void BSONReader::read<RegularExpression::Ptr>(RegularExpression::Ptr & to)
{
    std::string pattern = readCString();
    std::string options = readCString();

    to = new RegularExpression(pattern, options);
}


template <>
inline void BSONWriter::write<RegularExpression::Ptr>(RegularExpression::Ptr & from)
{
    writeCString(from->getPattern());
    writeCString(from->getOptions());
}


}
} // namespace Poco::MongoDB


#endif // MongoDB_RegularExpression_INCLUDED
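
A matching sketch (pattern and field name are arbitrary examples):

// Sketch: adding a BSON regex element (type 0x0B) to a query selector.
Poco::MongoDB::RegularExpression::Ptr re =
    new Poco::MongoDB::RegularExpression("^foo", "i"); // pattern, options
Poco::MongoDB::Document selector;
selector.add("name", re); // serialised as two cstrings: pattern, then options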

@@ -1,61 +0,0 @@
//
// ReplicaSet.h
//
// Library: MongoDB
// Package: MongoDB
// Module: ReplicaSet
//
// Definition of the ReplicaSet class.
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#ifndef MongoDB_ReplicaSet_INCLUDED
#define MongoDB_ReplicaSet_INCLUDED


#include <vector>
#include "Poco/MongoDB/Connection.h"
#include "Poco/Net/SocketAddress.h"


namespace Poco
{
namespace MongoDB
{


class MongoDB_API ReplicaSet
/// Class for working with a MongoDB replica set.
{
public:
    explicit ReplicaSet(const std::vector<Net::SocketAddress> & addresses);
    /// Creates the ReplicaSet using the given server addresses.

    virtual ~ReplicaSet();
    /// Destroys the ReplicaSet.

    Connection::Ptr findMaster();
    /// Tries to find the master MongoDB instance from the addresses
    /// passed to the constructor.
    ///
    /// Returns the Connection to the master, or null if no master
    /// instance was found.

protected:
    Connection::Ptr isMaster(const Net::SocketAddress & host);

private:
    std::vector<Net::SocketAddress> _addresses;
};


}
} // namespace Poco::MongoDB


#endif // MongoDB_ReplicaSet_INCLUDED
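
A sketch of locating the primary (host names are assumptions for illustration):

// Sketch: finding the primary of a replica set.
std::vector<Poco::Net::SocketAddress> members;
members.push_back(Poco::Net::SocketAddress("node1:27017"));
members.push_back(Poco::Net::SocketAddress("node2:27017"));
Poco::MongoDB::ReplicaSet rs(members);
Poco::MongoDB::Connection::Ptr primary = rs.findMaster();
if (primary.isNull())
    throw Poco::IOException("no primary found");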

@@ -1,54 +0,0 @@
//
// RequestMessage.h
//
// Library: MongoDB
// Package: MongoDB
// Module: RequestMessage
//
// Definition of the RequestMessage class.
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#ifndef MongoDB_RequestMessage_INCLUDED
#define MongoDB_RequestMessage_INCLUDED


#include <ostream>
#include "Poco/MongoDB/Message.h"
#include "Poco/MongoDB/MongoDB.h"


namespace Poco
{
namespace MongoDB
{


class MongoDB_API RequestMessage : public Message
/// Base class for a request sent to the MongoDB server.
{
public:
    explicit RequestMessage(MessageHeader::OpCode opcode);
    /// Creates a RequestMessage using the given opcode.

    virtual ~RequestMessage();
    /// Destroys the RequestMessage.

    void send(std::ostream & ostr);
    /// Writes the request to the stream.

protected:
    virtual void buildRequest(BinaryWriter & ss) = 0;
};


}
} // namespace Poco::MongoDB


#endif // MongoDB_RequestMessage_INCLUDED

@@ -1,114 +0,0 @@
//
// ResponseMessage.h
//
// Library: MongoDB
// Package: MongoDB
// Module: ResponseMessage
//
// Definition of the ResponseMessage class.
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#ifndef MongoDB_ResponseMessage_INCLUDED
#define MongoDB_ResponseMessage_INCLUDED


#include <cstdlib>
#include <istream>
#include "Poco/MongoDB/Document.h"
#include "Poco/MongoDB/Message.h"
#include "Poco/MongoDB/MongoDB.h"


namespace Poco
{
namespace MongoDB
{


class MongoDB_API ResponseMessage : public Message
/// This class represents a response (OP_REPLY) from MongoDB.
{
public:
    ResponseMessage();
    /// Creates an empty ResponseMessage.

    ResponseMessage(const Int64 & cursorID);
    /// Creates a ResponseMessage for an existing cursor ID.

    virtual ~ResponseMessage();
    /// Destroys the ResponseMessage.

    Int64 cursorID() const;
    /// Returns the cursor ID.

    void clear();
    /// Clears the response.

    std::size_t count() const;
    /// Returns the number of documents in the response.

    Document::Vector & documents();
    /// Returns a vector containing the received documents.

    bool empty() const;
    /// Returns true if the response does not contain any documents.

    bool hasDocuments() const;
    /// Returns true if there is at least one document in the response.

    void read(std::istream & istr);
    /// Reads the response from the stream.

private:
    Int32 _responseFlags;
    Int64 _cursorID;
    Int32 _startingFrom;
    Int32 _numberReturned;
    Document::Vector _documents;
};


//
// inlines
//
inline std::size_t ResponseMessage::count() const
{
    return _documents.size();
}


inline bool ResponseMessage::empty() const
{
    return _documents.size() == 0;
}


inline Int64 ResponseMessage::cursorID() const
{
    return _cursorID;
}


inline Document::Vector & ResponseMessage::documents()
{
    return _documents;
}


inline bool ResponseMessage::hasDocuments() const
{
    return _documents.size() > 0;
}


}
} // namespace Poco::MongoDB


#endif // MongoDB_ResponseMessage_INCLUDED

@@ -1,117 +0,0 @@
//
// UpdateRequest.h
//
// Library: MongoDB
// Package: MongoDB
// Module: UpdateRequest
//
// Definition of the UpdateRequest class.
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#ifndef MongoDB_UpdateRequest_INCLUDED
#define MongoDB_UpdateRequest_INCLUDED


#include "Poco/MongoDB/Document.h"
#include "Poco/MongoDB/MongoDB.h"
#include "Poco/MongoDB/RequestMessage.h"


namespace Poco
{
namespace MongoDB
{


class UpdateRequest : public RequestMessage
/// This request is used to update a document in a database
/// using the OP_UPDATE client request.
{
public:
    enum Flags
    {
        UPDATE_DEFAULT = 0,
        /// Do not set any flags.

        UPDATE_UPSERT = 1,
        /// If set, the database will insert the supplied object into the
        /// collection if no matching document is found.

        UPDATE_MULTIUPDATE = 2
        /// If set, updates all documents that meet the query criteria.
        /// Otherwise only the first matching document is updated.
    };

    UpdateRequest(const std::string & collectionName, Flags flags = UPDATE_DEFAULT);
    /// Creates the UpdateRequest.
    ///
    /// The full collection name is the concatenation of the database
    /// name with the collection name, using a "." for the concatenation. For example,
    /// for the database "foo" and the collection "bar", the full collection name is
    /// "foo.bar".

    virtual ~UpdateRequest();
    /// Destroys the UpdateRequest.

    Document & selector();
    /// Returns the selector document.

    Document & update();
    /// Returns the document to update.

    Flags flags() const;
    /// Returns the flags.

    void flags(Flags flags);
    /// Sets the flags.

protected:
    void buildRequest(BinaryWriter & writer);

private:
    Flags _flags;
    std::string _fullCollectionName;
    Document _selector;
    Document _update;
};


//
// inlines
//
inline UpdateRequest::Flags UpdateRequest::flags() const
{
    return _flags;
}


inline void UpdateRequest::flags(UpdateRequest::Flags flags)
{
    _flags = flags;
}


inline Document & UpdateRequest::selector()
{
    return _selector;
}


inline Document & UpdateRequest::update()
{
    return _update;
}


}
} // namespace Poco::MongoDB


#endif // MongoDB_UpdateRequest_INCLUDED
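
An update sketch (conn and setDoc are assumptions for illustration):

// Sketch: an OP_UPDATE with upsert semantics.
Poco::MongoDB::UpdateRequest request("db.coll", Poco::MongoDB::UpdateRequest::UPDATE_UPSERT);
request.selector().add("_id", std::string("user-42"));
request.update().add("$set", setDoc); // setDoc: a Document::Ptr built by the caller
conn.sendRequest(request);            // fire-and-forget; OP_UPDATE has no reply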

@@ -1,75 +0,0 @@
//
// Array.cpp
//
// Library: MongoDB
// Package: MongoDB
// Module: Array
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#include "Poco/MongoDB/Array.h"
#include <sstream>


namespace Poco {
namespace MongoDB {


Array::Array():
    Document()
{
}


Array::~Array()
{
}


Element::Ptr Array::get(std::size_t pos) const
{
    std::string name = Poco::NumberFormatter::format(pos);
    return Document::get(name);
}


std::string Array::toString(int indent) const
{
    std::ostringstream oss;

    oss << "[";

    if (indent > 0) oss << std::endl;

    for (ElementSet::const_iterator it = _elements.begin(); it != _elements.end(); ++it)
    {
        if (it != _elements.begin())
        {
            oss << ",";
            if (indent > 0) oss << std::endl;
        }

        for (int i = 0; i < indent; ++i) oss << ' ';

        oss << (*it)->toString(indent > 0 ? indent + 2 : 0);
    }

    if (indent > 0)
    {
        oss << std::endl;
        if (indent >= 2) indent -= 2;
        for (int i = 0; i < indent; ++i) oss << ' ';
    }

    oss << "]";

    return oss.str();
}


} } // namespace Poco::MongoDB
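
A usage sketch reflecting the implementation above: array elements are stored under their decimal index as the key, which is why Array::get simply formats the position with NumberFormatter.

// Sketch: BSON arrays keyed by "0", "1", ... (element values are examples).
Poco::MongoDB::Array::Ptr arr = new Poco::MongoDB::Array();
arr->add("0", std::string("first"));
arr->add("1", std::string("second"));
Poco::MongoDB::Element::Ptr first = arr->get(0); // looks up the key "0"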

@@ -1,89 +0,0 @@
//
// Binary.cpp
//
// Library: MongoDB
// Package: MongoDB
// Module: Binary
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#include "Poco/MongoDB/Binary.h"


namespace Poco {
namespace MongoDB {


Binary::Binary():
    _buffer(0),
    _subtype(0)
{
}


Binary::Binary(Poco::Int32 size, unsigned char subtype):
    _buffer(size),
    _subtype(subtype)
{
}


Binary::Binary(const UUID& uuid):
    _buffer(128 / 8),
    _subtype(0x04)
{
    unsigned char szUUID[16];
    uuid.copyTo((char*) szUUID);
    _buffer.assign(szUUID, 16);
}


Binary::Binary(const std::string& data, unsigned char subtype):
    _buffer(reinterpret_cast<const unsigned char*>(data.data()), data.size()),
    _subtype(subtype)
{
}


Binary::Binary(const void* data, Poco::Int32 size, unsigned char subtype):
    _buffer(reinterpret_cast<const unsigned char*>(data), size),
    _subtype(subtype)
{
}


Binary::~Binary()
{
}


std::string Binary::toString(int indent) const
{
    std::ostringstream oss;
    Base64Encoder encoder(oss);
    MemoryInputStream mis((const char*) _buffer.begin(), _buffer.size());
    StreamCopier::copyStream(mis, encoder);
    encoder.close();
    return oss.str();
}


UUID Binary::uuid() const
{
    if ((_subtype == 0x04 || _subtype == 0x03) && _buffer.size() == 16)
    {
        UUID uuid;
        uuid.copyFrom((const char*) _buffer.begin());
        return uuid;
    }
    throw BadCastException("Invalid subtype: " + std::to_string(_subtype) + ", size: " + std::to_string(_buffer.size()));
}


} } // namespace Poco::MongoDB
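
A round-trip sketch for the UUID constructor and accessor above:

// Sketch: storing a UUID as BSON binary subtype 0x04 and reading it back.
Poco::UUID uuid = Poco::UUIDGenerator::defaultGenerator().createRandom();
Poco::MongoDB::Binary::Ptr bin = new Poco::MongoDB::Binary(uuid);
Poco::UUID roundTrip = bin->uuid(); // throws BadCastException for non-UUID subtypes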

@@ -1,348 +0,0 @@
//
// Connection.cpp
//
// Library: MongoDB
// Package: MongoDB
// Module: Connection
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#include "Poco/Net/SocketStream.h"
#include "Poco/MongoDB/Connection.h"
#include "Poco/MongoDB/Database.h"
#include "Poco/URI.h"
#include "Poco/Format.h"
#include "Poco/NumberParser.h"
#include "Poco/Exception.h"


namespace Poco {
namespace MongoDB {


Connection::SocketFactory::SocketFactory()
{
}


Connection::SocketFactory::~SocketFactory()
{
}


Poco::Net::StreamSocket Connection::SocketFactory::createSocket(const std::string& host, int port, Poco::Timespan connectTimeout, bool secure)
{
    if (!secure)
    {
        Poco::Net::SocketAddress addr(host, port);
        Poco::Net::StreamSocket socket;
        if (connectTimeout > 0)
            socket.connect(addr, connectTimeout);
        else
            socket.connect(addr);
        return socket;
    }
    else throw Poco::NotImplementedException("Default SocketFactory implementation does not support SecureStreamSocket");
}


Connection::Connection():
    _address(),
    _socket()
{
}


Connection::Connection(const std::string& hostAndPort):
    _address(hostAndPort),
    _socket()
{
    connect();
}


Connection::Connection(const std::string& uri, SocketFactory& socketFactory):
    _address(),
    _socket()
{
    connect(uri, socketFactory);
}


Connection::Connection(const std::string& host, int port):
    _address(host, port),
    _socket()
{
    connect();
}


Connection::Connection(const Poco::Net::SocketAddress& addrs):
    _address(addrs),
    _socket()
{
    connect();
}


Connection::Connection(const Poco::Net::StreamSocket& socket):
    _address(socket.peerAddress()),
    _socket(socket)
{
}


Connection::~Connection()
{
    try
    {
        disconnect();
    }
    catch (...)
    {
    }
}


void Connection::connect()
{
    _socket.connect(_address);
}


void Connection::connect(const std::string& hostAndPort)
{
    _address = Poco::Net::SocketAddress(hostAndPort);
    connect();
}


void Connection::connect(const std::string& host, int port)
{
    _address = Poco::Net::SocketAddress(host, port);
    connect();
}


void Connection::connect(const Poco::Net::SocketAddress& addrs)
{
    _address = addrs;
    connect();
}


void Connection::connect(const Poco::Net::StreamSocket& socket)
{
    _address = socket.peerAddress();
    _socket = socket;
}


void Connection::connect(const std::string& uri, SocketFactory& socketFactory)
{
    std::vector<std::string> strAddresses;
    std::string newURI;

    if (uri.find(',') != std::string::npos)
    {
        size_t pos;
        size_t head = 0;
        if ((pos = uri.find("@")) != std::string::npos)
        {
            head = pos + 1;
        }
        else if ((pos = uri.find("://")) != std::string::npos)
        {
            head = pos + 3;
        }

        std::string tempstr;
        std::string::const_iterator it = uri.begin();
        it += head;
        size_t tail = head;
        for (; it != uri.end() && *it != '?' && *it != '/'; ++it)
        {
            tempstr += *it;
            tail++;
        }

        it = tempstr.begin();
        std::string token;
        for (; it != tempstr.end(); ++it)
        {
            if (*it == ',')
            {
                newURI = uri.substr(0, head) + token + uri.substr(tail, uri.length());
                strAddresses.push_back(newURI);
                token = "";
            }
            else
            {
                token += *it;
            }
        }
        newURI = uri.substr(0, head) + token + uri.substr(tail, uri.length());
        strAddresses.push_back(newURI);
    }
    else
    {
        strAddresses.push_back(uri);
    }

    newURI = strAddresses.front();
    Poco::URI theURI(newURI);
    if (theURI.getScheme() != "mongodb") throw Poco::UnknownURISchemeException(uri);

    std::string userInfo = theURI.getUserInfo();
    std::string databaseName = theURI.getPath();
    if (!databaseName.empty() && databaseName[0] == '/') databaseName.erase(0, 1);
    if (databaseName.empty()) databaseName = "admin";

    bool ssl = false;
    Poco::Timespan connectTimeout;
    Poco::Timespan socketTimeout;
    std::string authMechanism = Database::AUTH_SCRAM_SHA1;
    std::string readPreference = "primary";

    Poco::URI::QueryParameters params = theURI.getQueryParameters();
    for (Poco::URI::QueryParameters::const_iterator it = params.begin(); it != params.end(); ++it)
    {
        if (it->first == "ssl")
        {
            ssl = (it->second == "true");
        }
        else if (it->first == "connectTimeoutMS")
        {
            connectTimeout = static_cast<Poco::Timespan::TimeDiff>(1000) * Poco::NumberParser::parse(it->second);
        }
        else if (it->first == "socketTimeoutMS")
        {
            socketTimeout = static_cast<Poco::Timespan::TimeDiff>(1000) * Poco::NumberParser::parse(it->second);
        }
        else if (it->first == "authMechanism")
        {
            authMechanism = it->second;
        }
        else if (it->first == "readPreference")
        {
            readPreference = it->second;
        }
    }

    for (std::vector<std::string>::const_iterator it = strAddresses.cbegin(); it != strAddresses.cend(); ++it)
    {
        newURI = *it;
        theURI = Poco::URI(newURI);

        std::string host = theURI.getHost();
        Poco::UInt16 port = theURI.getPort();
        if (port == 0) port = 27017;

        connect(socketFactory.createSocket(host, port, connectTimeout, ssl));
        _uri = newURI;
        if (socketTimeout > 0)
        {
            _socket.setSendTimeout(socketTimeout);
            _socket.setReceiveTimeout(socketTimeout);
        }
        if (strAddresses.size() > 1)
        {
            Poco::MongoDB::QueryRequest request("admin.$cmd");
            request.setNumberToReturn(1);
            request.selector().add("isMaster", 1);
            Poco::MongoDB::ResponseMessage response;

            sendRequest(request, response);
            _uri = newURI;
            if (!response.documents().empty())
            {
                Poco::MongoDB::Document::Ptr doc = response.documents()[0];
                if (doc->get<bool>("ismaster") && readPreference == "primary")
                {
                    break;
                }
                else if (!doc->get<bool>("ismaster") && readPreference == "secondary")
                {
                    break;
                }
                else if (it + 1 == strAddresses.cend())
                {
                    throw Poco::URISyntaxException(uri);
                }
            }
        }
    }
    if (!userInfo.empty())
    {
        std::string username;
        std::string password;
        std::string::size_type pos = userInfo.find(':');
        if (pos != std::string::npos)
        {
            username.assign(userInfo, 0, pos++);
            password.assign(userInfo, pos, userInfo.size() - pos);
        }
        else username = userInfo;

        Database database(databaseName);

        if (!database.authenticate(*this, username, password, authMechanism))
            throw Poco::NoPermissionException(Poco::format("Access to MongoDB database %s denied for user %s", databaseName, username));
    }
}


void Connection::disconnect()
{
    _socket.close();
}


void Connection::sendRequest(RequestMessage& request)
{
    Poco::Net::SocketOutputStream sos(_socket);
    request.send(sos);
}


void Connection::sendRequest(RequestMessage& request, ResponseMessage& response)
{
    sendRequest(request);

    Poco::Net::SocketInputStream sis(_socket);
    response.read(sis);
}


void Connection::sendRequest(OpMsgMessage& request, OpMsgMessage& response)
{
    Poco::Net::SocketOutputStream sos(_socket);
    request.send(sos);

    response.clear();
    readResponse(response);
}


void Connection::sendRequest(OpMsgMessage& request)
{
    request.setAcknowledgedRequest(false);
    Poco::Net::SocketOutputStream sos(_socket);
    request.send(sos);
}


void Connection::readResponse(OpMsgMessage& response)
{
    Poco::Net::SocketInputStream sis(_socket);
    response.read(sis);
}


} } // namespace Poco::MongoDB
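
A sketch of the URI form accepted by Connection::connect above (host names and credentials are examples):

// Sketch: multi-host URI with the query parameters parsed by connect().
Poco::MongoDB::Connection::SocketFactory sf;
Poco::MongoDB::Connection conn;
conn.connect(
    "mongodb://user:secret@host1:27017,host2:27017/admin"
    "?connectTimeoutMS=2000&socketTimeoutMS=5000&readPreference=primary",
    sf);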

@@ -1,83 +0,0 @@
//
// Cursor.cpp
//
// Library: MongoDB
// Package: MongoDB
// Module: Cursor
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#include "Poco/MongoDB/Cursor.h"
#include "Poco/MongoDB/GetMoreRequest.h"
#include "Poco/MongoDB/KillCursorsRequest.h"


namespace Poco {
namespace MongoDB {


Cursor::Cursor(const std::string& db, const std::string& collection, QueryRequest::Flags flags):
    _query(db + '.' + collection, flags)
{
}


Cursor::Cursor(const std::string& fullCollectionName, QueryRequest::Flags flags):
    _query(fullCollectionName, flags)
{
}


Cursor::Cursor(const Document& aggregationResponse):
    _query(aggregationResponse.get<Poco::MongoDB::Document::Ptr>("cursor")->get<std::string>("ns")),
    _response(aggregationResponse.get<Poco::MongoDB::Document::Ptr>("cursor")->get<Int64>("id"))
{
}


Cursor::~Cursor()
{
    try
    {
        poco_assert_dbg(!_response.cursorID());
    }
    catch (...)
    {
    }
}


ResponseMessage& Cursor::next(Connection& connection)
{
    if (_response.cursorID() == 0)
    {
        connection.sendRequest(_query, _response);
    }
    else
    {
        Poco::MongoDB::GetMoreRequest getMore(_query.fullCollectionName(), _response.cursorID());
        getMore.setNumberToReturn(_query.getNumberToReturn());
        _response.clear();
        connection.sendRequest(getMore, _response);
    }
    return _response;
}


void Cursor::kill(Connection& connection)
{
    if (_response.cursorID() != 0)
    {
        KillCursorsRequest killRequest;
        killRequest.cursors().push_back(_response.cursorID());
        connection.sendRequest(killRequest);
    }
    _response.clear();
}


} } // namespace Poco::MongoDB
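
A drain-loop sketch for the legacy wire protocol (conn and the selector values are assumptions):

// Sketch: iterating a legacy OP_QUERY/OP_GET_MORE cursor.
Poco::MongoDB::Cursor cursor("db", "coll");
cursor.query().selector().add("status", std::string("active"));
for (;;)
{
    Poco::MongoDB::ResponseMessage & response = cursor.next(conn);
    // ... consume response.documents() ...
    if (response.cursorID() == 0)
        break; // otherwise call cursor.kill(conn) when stopping early
}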

@@ -1,482 +0,0 @@
//
// Database.cpp
//
// Library: MongoDB
// Package: MongoDB
// Module: Database
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#include "Poco/MongoDB/Database.h"
#include "Poco/MongoDB/Binary.h"
#include "Poco/MD5Engine.h"
#include "Poco/SHA1Engine.h"
#include "Poco/PBKDF2Engine.h"
#include "Poco/HMACEngine.h"
#include "Poco/Base64Decoder.h"
#include "Poco/Base64Encoder.h"
#include "Poco/MemoryStream.h"
#include "Poco/StreamCopier.h"
#include "Poco/Exception.h"
#include "Poco/RandomStream.h"
#include "Poco/Random.h"
#include "Poco/Format.h"
#include "Poco/NumberParser.h"
#include <sstream>
#include <map>


namespace Poco {
namespace MongoDB {


const std::string Database::AUTH_MONGODB_CR("MONGODB-CR");
const std::string Database::AUTH_SCRAM_SHA1("SCRAM-SHA-1");


namespace
{
    std::map<std::string, std::string> parseKeyValueList(const std::string& str)
    {
        std::map<std::string, std::string> kvm;
        std::string::const_iterator it = str.begin();
        std::string::const_iterator end = str.end();
        while (it != end)
        {
            std::string k;
            std::string v;
            while (it != end && *it != '=') k += *it++;
            if (it != end) ++it;
            while (it != end && *it != ',') v += *it++;
            if (it != end) ++it;
            kvm[k] = v;
        }
        return kvm;
    }

    std::string decodeBase64(const std::string& base64)
    {
        Poco::MemoryInputStream istr(base64.data(), base64.size());
        Poco::Base64Decoder decoder(istr);
        std::string result;
        Poco::StreamCopier::copyToString(decoder, result);
        return result;
    }

    std::string encodeBase64(const std::string& data)
    {
        std::ostringstream ostr;
        Poco::Base64Encoder encoder(ostr);
        encoder.rdbuf()->setLineLength(0);
        encoder << data;
        encoder.close();
        return ostr.str();
    }

    std::string digestToBinaryString(Poco::DigestEngine& engine)
    {
        Poco::DigestEngine::Digest d = engine.digest();
        return std::string(reinterpret_cast<const char*>(&d[0]), d.size());
    }

    std::string digestToHexString(Poco::DigestEngine& engine)
    {
        Poco::DigestEngine::Digest d = engine.digest();
        return Poco::DigestEngine::digestToHex(d);
    }

    std::string digestToBase64(Poco::DigestEngine& engine)
    {
        return encodeBase64(digestToBinaryString(engine));
    }

    std::string hashCredentials(const std::string& username, const std::string& password)
    {
        Poco::MD5Engine md5;
        md5.update(username);
        md5.update(std::string(":mongo:"));
        md5.update(password);
        return digestToHexString(md5);
    }

    std::string createNonce()
    {
        Poco::MD5Engine md5;
        Poco::RandomInputStream randomStream;
        Poco::Random random;
        for (int i = 0; i < 4; i++)
        {
            md5.update(randomStream.get());
            md5.update(random.nextChar());
        }
        return digestToHexString(md5);
    }
}


Database::Database(const std::string& db):
    _dbname(db)
{
}


Database::~Database()
{
}


bool Database::authenticate(Connection& connection, const std::string& username, const std::string& password, const std::string& method)
{
    if (username.empty()) throw Poco::InvalidArgumentException("empty username");
    if (password.empty()) throw Poco::InvalidArgumentException("empty password");

    if (method == AUTH_MONGODB_CR)
        return authCR(connection, username, password);
    else if (method == AUTH_SCRAM_SHA1)
        return authSCRAM(connection, username, password);
    else
        throw Poco::InvalidArgumentException("authentication method", method);
}


bool Database::authCR(Connection& connection, const std::string& username, const std::string& password)
{
    std::string nonce;
    Poco::SharedPtr<QueryRequest> pCommand = createCommand();
    pCommand->selector().add<Poco::Int32>("getnonce", 1);

    ResponseMessage response;
    connection.sendRequest(*pCommand, response);
    if (response.documents().size() > 0)
    {
        Document::Ptr pDoc = response.documents()[0];
        if (pDoc->getInteger("ok") != 1) return false;
        nonce = pDoc->get<std::string>("nonce", "");
        if (nonce.empty()) throw Poco::ProtocolException("no nonce received");
|
||||
}
|
||||
else throw Poco::ProtocolException("empty response for getnonce");
|
||||
|
||||
std::string credsDigest = hashCredentials(username, password);
|
||||
|
||||
Poco::MD5Engine md5;
|
||||
md5.update(nonce);
|
||||
md5.update(username);
|
||||
md5.update(credsDigest);
|
||||
std::string key = digestToHexString(md5);
|
||||
|
||||
pCommand = createCommand();
|
||||
pCommand->selector()
|
||||
.add<Poco::Int32>("authenticate", 1)
|
||||
.add<std::string>("user", username)
|
||||
.add<std::string>("nonce", nonce)
|
||||
.add<std::string>("key", key);
|
||||
|
||||
connection.sendRequest(*pCommand, response);
|
||||
if (response.documents().size() > 0)
|
||||
{
|
||||
Document::Ptr pDoc = response.documents()[0];
|
||||
return pDoc->getInteger("ok") == 1;
|
||||
}
|
||||
else throw Poco::ProtocolException("empty response for authenticate");
|
||||
}
|
||||
|
||||
|
||||
bool Database::authSCRAM(Connection& connection, const std::string& username, const std::string& password)
|
||||
{
|
||||
std::string clientNonce(createNonce());
|
||||
std::string clientFirstMsg = Poco::format("n=%s,r=%s", username, clientNonce);
|
||||
|
||||
Poco::SharedPtr<QueryRequest> pCommand = createCommand();
|
||||
pCommand->selector()
|
||||
.add<Poco::Int32>("saslStart", 1)
|
||||
.add<std::string>("mechanism", AUTH_SCRAM_SHA1)
|
||||
.add<Binary::Ptr>("payload", new Binary(Poco::format("n,,%s", clientFirstMsg)));
|
||||
|
||||
ResponseMessage response;
|
||||
connection.sendRequest(*pCommand, response);
|
||||
|
||||
Int32 conversationId = 0;
|
||||
std::string serverFirstMsg;
|
||||
|
||||
if (response.documents().size() > 0)
|
||||
{
|
||||
Document::Ptr pDoc = response.documents()[0];
|
||||
if (pDoc->getInteger("ok") == 1)
|
||||
{
|
||||
Binary::Ptr pPayload = pDoc->get<Binary::Ptr>("payload");
|
||||
serverFirstMsg = pPayload->toRawString();
|
||||
conversationId = pDoc->get<Int32>("conversationId");
|
||||
}
|
||||
else
|
||||
{
|
||||
if (pDoc->exists("errmsg"))
|
||||
{
|
||||
const Poco::MongoDB::Element::Ptr value = pDoc->get("errmsg");
|
||||
auto message = static_cast<const Poco::MongoDB::ConcreteElement<std::string> &>(*value).value();
|
||||
throw Poco::RuntimeException(message);
|
||||
}
|
||||
else
|
||||
return false;
|
||||
}
|
||||
}
|
||||
else throw Poco::ProtocolException("empty response for saslStart");
|
||||
|
||||
std::map<std::string, std::string> kvm = parseKeyValueList(serverFirstMsg);
|
||||
const std::string serverNonce = kvm["r"];
|
||||
const std::string salt = decodeBase64(kvm["s"]);
|
||||
const unsigned iterations = Poco::NumberParser::parseUnsigned(kvm["i"]);
|
||||
const Poco::UInt32 dkLen = 20;
|
||||
|
||||
std::string hashedPassword = hashCredentials(username, password);
|
||||
|
||||
Poco::PBKDF2Engine<Poco::HMACEngine<Poco::SHA1Engine> > pbkdf2(salt, iterations, dkLen);
|
||||
pbkdf2.update(hashedPassword);
|
||||
std::string saltedPassword = digestToBinaryString(pbkdf2);
|
||||
|
||||
std::string clientFinalNoProof = Poco::format("c=biws,r=%s", serverNonce);
|
||||
std::string authMessage = Poco::format("%s,%s,%s", clientFirstMsg, serverFirstMsg, clientFinalNoProof);
|
||||
|
||||
Poco::HMACEngine<Poco::SHA1Engine> hmacKey(saltedPassword);
|
||||
hmacKey.update(std::string("Client Key"));
|
||||
std::string clientKey = digestToBinaryString(hmacKey);
|
||||
|
||||
Poco::SHA1Engine sha1;
|
||||
sha1.update(clientKey);
|
||||
std::string storedKey = digestToBinaryString(sha1);
|
||||
|
||||
Poco::HMACEngine<Poco::SHA1Engine> hmacSig(storedKey);
|
||||
hmacSig.update(authMessage);
|
||||
std::string clientSignature = digestToBinaryString(hmacSig);
|
||||
|
||||
std::string clientProof(clientKey);
|
||||
for (std::size_t i = 0; i < clientProof.size(); i++)
|
||||
{
|
||||
clientProof[i] ^= clientSignature[i];
|
||||
}
|
||||
|
||||
std::string clientFinal = Poco::format("%s,p=%s", clientFinalNoProof, encodeBase64(clientProof));
|
||||
|
||||
pCommand = createCommand();
|
||||
pCommand->selector()
|
||||
.add<Poco::Int32>("saslContinue", 1)
|
||||
.add<Poco::Int32>("conversationId", conversationId)
|
||||
.add<Binary::Ptr>("payload", new Binary(clientFinal));
|
||||
|
||||
std::string serverSecondMsg;
|
||||
connection.sendRequest(*pCommand, response);
|
||||
if (response.documents().size() > 0)
|
||||
{
|
||||
Document::Ptr pDoc = response.documents()[0];
|
||||
if (pDoc->getInteger("ok") == 1)
|
||||
{
|
||||
Binary::Ptr pPayload = pDoc->get<Binary::Ptr>("payload");
|
||||
serverSecondMsg = pPayload->toRawString();
|
||||
}
|
||||
else
|
||||
{
|
||||
if (pDoc->exists("errmsg"))
|
||||
{
|
||||
const Poco::MongoDB::Element::Ptr value = pDoc->get("errmsg");
|
||||
auto message = static_cast<const Poco::MongoDB::ConcreteElement<std::string> &>(*value).value();
|
||||
throw Poco::RuntimeException(message);
|
||||
}
|
||||
else
|
||||
return false;
|
||||
}
|
||||
}
|
||||
else throw Poco::ProtocolException("empty response for saslContinue");
|
||||
|
||||
Poco::HMACEngine<Poco::SHA1Engine> hmacSKey(saltedPassword);
|
||||
hmacSKey.update(std::string("Server Key"));
|
||||
std::string serverKey = digestToBinaryString(hmacSKey);
|
||||
|
||||
Poco::HMACEngine<Poco::SHA1Engine> hmacSSig(serverKey);
|
||||
hmacSSig.update(authMessage);
|
||||
std::string serverSignature = digestToBase64(hmacSSig);
|
||||
|
||||
kvm = parseKeyValueList(serverSecondMsg);
|
||||
std::string serverSignatureReceived = kvm["v"];
|
||||
|
||||
if (serverSignature != serverSignatureReceived)
|
||||
throw Poco::ProtocolException("server signature verification failed");
|
||||
|
||||
pCommand = createCommand();
|
||||
pCommand->selector()
|
||||
.add<Poco::Int32>("saslContinue", 1)
|
||||
.add<Poco::Int32>("conversationId", conversationId)
|
||||
.add<Binary::Ptr>("payload", new Binary);
|
||||
|
||||
connection.sendRequest(*pCommand, response);
|
||||
if (response.documents().size() > 0)
|
||||
{
|
||||
Document::Ptr pDoc = response.documents()[0];
|
||||
if (pDoc->getInteger("ok") == 1)
|
||||
{
|
||||
return true;
|
||||
}
|
||||
else
|
||||
{
|
||||
if (pDoc->exists("errmsg"))
|
||||
{
|
||||
const Poco::MongoDB::Element::Ptr value = pDoc->get("errmsg");
|
||||
auto message = static_cast<const Poco::MongoDB::ConcreteElement<std::string> &>(*value).value();
|
||||
throw Poco::RuntimeException(message);
|
||||
}
|
||||
else
|
||||
return false;
|
||||
}
|
||||
}
|
||||
else throw Poco::ProtocolException("empty response for saslContinue");
|
||||
}
|
||||
|
||||
|
||||
Document::Ptr Database::queryBuildInfo(Connection& connection) const
|
||||
{
|
||||
// build info can be issued on "config" system database
|
||||
Poco::SharedPtr<Poco::MongoDB::QueryRequest> request = createCommand();
|
||||
request->selector().add("buildInfo", 1);
|
||||
|
||||
Poco::MongoDB::ResponseMessage response;
|
||||
connection.sendRequest(*request, response);
|
||||
|
||||
Document::Ptr buildInfo;
|
||||
if ( response.documents().size() > 0 )
|
||||
{
|
||||
buildInfo = response.documents()[0];
|
||||
}
|
||||
else
|
||||
{
|
||||
throw Poco::ProtocolException("Didn't get a response from the buildinfo command");
|
||||
}
|
||||
return buildInfo;
|
||||
}
|
||||
|
||||
|
||||
Document::Ptr Database::queryServerHello(Connection& connection, bool old) const
|
||||
{
|
||||
// hello can be issued on "config" system database
|
||||
Poco::SharedPtr<Poco::MongoDB::QueryRequest> request = createCommand();
|
||||
|
||||
// 'hello' command was previously called 'isMaster'
|
||||
std::string command_name;
|
||||
if (old)
|
||||
command_name = "isMaster";
|
||||
else
|
||||
command_name = "hello";
|
||||
|
||||
request->selector().add(command_name, 1);
|
||||
|
||||
Poco::MongoDB::ResponseMessage response;
|
||||
connection.sendRequest(*request, response);
|
||||
|
||||
Document::Ptr hello;
|
||||
if ( response.documents().size() > 0 )
|
||||
{
|
||||
hello = response.documents()[0];
|
||||
}
|
||||
else
|
||||
{
|
||||
throw Poco::ProtocolException("Didn't get a response from the hello command");
|
||||
}
|
||||
return hello;
|
||||
}
|
||||
|
||||
|
||||
Int64 Database::count(Connection& connection, const std::string& collectionName) const
|
||||
{
|
||||
Poco::SharedPtr<Poco::MongoDB::QueryRequest> countRequest = createCountRequest(collectionName);
|
||||
|
||||
Poco::MongoDB::ResponseMessage response;
|
||||
connection.sendRequest(*countRequest, response);
|
||||
|
||||
if (response.documents().size() > 0)
|
||||
{
|
||||
Poco::MongoDB::Document::Ptr doc = response.documents()[0];
|
||||
return doc->getInteger("n");
|
||||
}
|
||||
|
||||
return -1;
|
||||
}
|
||||
|
||||
|
||||
Poco::MongoDB::Document::Ptr Database::ensureIndex(Connection& connection, const std::string& collection, const std::string& indexName, Poco::MongoDB::Document::Ptr keys, bool unique, bool background, int version, int ttl)
|
||||
{
|
||||
Poco::MongoDB::Document::Ptr index = new Poco::MongoDB::Document();
|
||||
index->add("ns", _dbname + "." + collection);
|
||||
index->add("name", indexName);
|
||||
index->add("key", keys);
|
||||
|
||||
if (version > 0)
|
||||
{
|
||||
index->add("version", version);
|
||||
}
|
||||
|
||||
if (unique)
|
||||
{
|
||||
index->add("unique", true);
|
||||
}
|
||||
|
||||
if (background)
|
||||
{
|
||||
index->add("background", true);
|
||||
}
|
||||
|
||||
if (ttl > 0)
|
||||
{
|
||||
index->add("expireAfterSeconds", ttl);
|
||||
}
|
||||
|
||||
Poco::SharedPtr<Poco::MongoDB::InsertRequest> insertRequest = createInsertRequest("system.indexes");
|
||||
insertRequest->documents().push_back(index);
|
||||
connection.sendRequest(*insertRequest);
|
||||
|
||||
return getLastErrorDoc(connection);
|
||||
}
|
||||
|
||||
|
||||
Document::Ptr Database::getLastErrorDoc(Connection& connection) const
|
||||
{
|
||||
Document::Ptr errorDoc;
|
||||
|
||||
Poco::SharedPtr<Poco::MongoDB::QueryRequest> request = createCommand();
|
||||
request->setNumberToReturn(1);
|
||||
request->selector().add("getLastError", 1);
|
||||
|
||||
Poco::MongoDB::ResponseMessage response;
|
||||
connection.sendRequest(*request, response);
|
||||
|
||||
if (response.documents().size() > 0)
|
||||
{
|
||||
errorDoc = response.documents()[0];
|
||||
}
|
||||
|
||||
return errorDoc;
|
||||
}
|
||||
|
||||
|
||||
std::string Database::getLastError(Connection& connection) const
|
||||
{
|
||||
Document::Ptr errorDoc = getLastErrorDoc(connection);
|
||||
if (!errorDoc.isNull() && errorDoc->isType<std::string>("err"))
|
||||
{
|
||||
return errorDoc->get<std::string>("err");
|
||||
}
|
||||
|
||||
return "";
|
||||
}
|
||||
|
||||
|
||||
Poco::SharedPtr<Poco::MongoDB::QueryRequest> Database::createCountRequest(const std::string& collectionName) const
|
||||
{
|
||||
Poco::SharedPtr<Poco::MongoDB::QueryRequest> request = createCommand();
|
||||
request->setNumberToReturn(1);
|
||||
request->selector().add("count", collectionName);
|
||||
return request;
|
||||
}
|
||||
|
||||
|
||||
} } // namespace Poco::MongoDB
|
@@ -1,54 +0,0 @@
//
// DeleteRequest.cpp
//
// Library: MongoDB
// Package: MongoDB
// Module: DeleteRequest
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#include "Poco/MongoDB/DeleteRequest.h"


namespace Poco {
namespace MongoDB {


DeleteRequest::DeleteRequest(const std::string& collectionName, DeleteRequest::Flags flags):
    RequestMessage(MessageHeader::OP_DELETE),
    _flags(flags),
    _fullCollectionName(collectionName),
    _selector()
{
}


DeleteRequest::DeleteRequest(const std::string& collectionName, bool justOne):
    RequestMessage(MessageHeader::OP_DELETE),
    _flags(justOne ? DELETE_SINGLE_REMOVE : DELETE_DEFAULT),
    _fullCollectionName(collectionName),
    _selector()
{
}


DeleteRequest::~DeleteRequest()
{
}


void DeleteRequest::buildRequest(BinaryWriter& writer)
{
    writer << 0; // 0 - reserved for future use
    BSONWriter(writer).writeCString(_fullCollectionName);
    writer << _flags;
    _selector.write(writer);
}


} } // namespace Poco::MongoDB
@@ -1,227 +0,0 @@
//
// Document.cpp
//
// Library: MongoDB
// Package: MongoDB
// Module: Document
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#include "Poco/MongoDB/Document.h"
#include "Poco/MongoDB/Binary.h"
#include "Poco/MongoDB/ObjectId.h"
#include "Poco/MongoDB/Array.h"
#include "Poco/MongoDB/RegularExpression.h"
#include "Poco/MongoDB/JavaScriptCode.h"
#include <sstream>


namespace Poco {
namespace MongoDB {


Document::Document()
{
}


Document::~Document()
{
}


Array& Document::addNewArray(const std::string& name)
{
    Array::Ptr newArray = new Array();
    add(name, newArray);
    return *newArray;
}


Element::Ptr Document::get(const std::string& name) const
{
    Element::Ptr element;

    ElementSet::const_iterator it = std::find_if(_elements.begin(), _elements.end(), ElementFindByName(name));
    if (it != _elements.end())
    {
        return *it;
    }

    return element;
}


Int64 Document::getInteger(const std::string& name) const
{
    Element::Ptr element = get(name);
    if (element.isNull()) throw Poco::NotFoundException(name);

    if (ElementTraits<double>::TypeId == element->type())
    {
        ConcreteElement<double>* concrete = dynamic_cast<ConcreteElement<double>*>(element.get());
        if (concrete) return static_cast<Int64>(concrete->value());
    }
    else if (ElementTraits<Int32>::TypeId == element->type())
    {
        ConcreteElement<Int32>* concrete = dynamic_cast<ConcreteElement<Int32>*>(element.get());
        if (concrete) return concrete->value();
    }
    else if (ElementTraits<Int64>::TypeId == element->type())
    {
        ConcreteElement<Int64>* concrete = dynamic_cast<ConcreteElement<Int64>*>(element.get());
        if (concrete) return concrete->value();
    }
    throw Poco::BadCastException("Invalid type mismatch!");
}


void Document::read(BinaryReader& reader)
{
    int size;
    reader >> size;

    unsigned char type;
    reader >> type;

    while (type != '\0')
    {
        Element::Ptr element;

        std::string name = BSONReader(reader).readCString();

        switch (type)
        {
        case ElementTraits<double>::TypeId:
            element = new ConcreteElement<double>(name, 0);
            break;
        case ElementTraits<Int32>::TypeId:
            element = new ConcreteElement<Int32>(name, 0);
            break;
        case ElementTraits<std::string>::TypeId:
            element = new ConcreteElement<std::string>(name, "");
            break;
        case ElementTraits<Document::Ptr>::TypeId:
            element = new ConcreteElement<Document::Ptr>(name, new Document);
            break;
        case ElementTraits<Array::Ptr>::TypeId:
            element = new ConcreteElement<Array::Ptr>(name, new Array);
            break;
        case ElementTraits<Binary::Ptr>::TypeId:
            element = new ConcreteElement<Binary::Ptr>(name, new Binary);
            break;
        case ElementTraits<ObjectId::Ptr>::TypeId:
            element = new ConcreteElement<ObjectId::Ptr>(name, new ObjectId);
            break;
        case ElementTraits<bool>::TypeId:
            element = new ConcreteElement<bool>(name, false);
            break;
        case ElementTraits<Poco::Timestamp>::TypeId:
            element = new ConcreteElement<Poco::Timestamp>(name, Poco::Timestamp());
            break;
        case ElementTraits<BSONTimestamp>::TypeId:
            element = new ConcreteElement<BSONTimestamp>(name, BSONTimestamp());
            break;
        case ElementTraits<NullValue>::TypeId:
            element = new ConcreteElement<NullValue>(name, NullValue(0));
            break;
        case ElementTraits<RegularExpression::Ptr>::TypeId:
            element = new ConcreteElement<RegularExpression::Ptr>(name, new RegularExpression());
            break;
        case ElementTraits<JavaScriptCode::Ptr>::TypeId:
            element = new ConcreteElement<JavaScriptCode::Ptr>(name, new JavaScriptCode());
            break;
        case ElementTraits<Int64>::TypeId:
            element = new ConcreteElement<Int64>(name, 0);
            break;
        default:
        {
            std::stringstream ss;
            ss << "Element " << name << " contains an unsupported type 0x" << std::hex << (int) type;
            throw Poco::NotImplementedException(ss.str());
        }
        //TODO: x0F -> JavaScript code with scope
        //      xFF -> Min Key
        //      x7F -> Max Key
        }

        element->read(reader);
        _elements.push_back(element);

        reader >> type;
    }
}


std::string Document::toString(int indent) const
{
    std::ostringstream oss;

    oss << '{';

    if (indent > 0) oss << std::endl;


    for (ElementSet::const_iterator it = _elements.begin(); it != _elements.end(); ++it)
    {
        if (it != _elements.begin())
        {
            oss << ',';
            if (indent > 0) oss << std::endl;
        }

        for (int i = 0; i < indent; ++i) oss << ' ';

        oss << '"' << (*it)->name() << '"';
        oss << (indent > 0 ? " : " : ":");

        oss << (*it)->toString(indent > 0 ? indent + 2 : 0);
    }

    if (indent > 0)
    {
        oss << std::endl;
        if (indent >= 2) indent -= 2;

        for (int i = 0; i < indent; ++i) oss << ' ';
    }

    oss << '}';

    return oss.str();
}


void Document::write(BinaryWriter& writer)
{
    if (_elements.empty())
    {
        writer << 5;
    }
    else
    {
        std::stringstream sstream;
        Poco::BinaryWriter tempWriter(sstream, BinaryWriter::LITTLE_ENDIAN_BYTE_ORDER);
        for (ElementSet::iterator it = _elements.begin(); it != _elements.end(); ++it)
        {
            tempWriter << static_cast<unsigned char>((*it)->type());
            BSONWriter(tempWriter).writeCString((*it)->name());
            Element::Ptr element = *it;
            element->write(tempWriter);
        }
        tempWriter.flush();

        Poco::Int32 len = static_cast<Poco::Int32>(5 + sstream.tellp()); /* 5 = sizeof(len) + 0-byte */
        writer << len;
        writer.writeRaw(sstream.str());
    }
    writer << '\0';
}


} } // namespace Poco::MongoDB
@@ -1,32 +0,0 @@
//
// Element.cpp
//
// Library: MongoDB
// Package: MongoDB
// Module: Element
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#include "Poco/MongoDB/Element.h"


namespace Poco {
namespace MongoDB {


Element::Element(const std::string& name) : _name(name)
{
}


Element::~Element()
{
}


} } // namespace Poco::MongoDB
@@ -1,46 +0,0 @@
//
// GetMoreRequest.cpp
//
// Library: MongoDB
// Package: MongoDB
// Module: GetMoreRequest
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#include "Poco/MongoDB/GetMoreRequest.h"
#include "Poco/MongoDB/Element.h"


namespace Poco {
namespace MongoDB {


GetMoreRequest::GetMoreRequest(const std::string& collectionName, Int64 cursorID):
    RequestMessage(MessageHeader::OP_GET_MORE),
    _fullCollectionName(collectionName),
    _numberToReturn(100),
    _cursorID(cursorID)
{
}


GetMoreRequest::~GetMoreRequest()
{
}


void GetMoreRequest::buildRequest(BinaryWriter& writer)
{
    writer << 0; // 0 - reserved for future use
    BSONWriter(writer).writeCString(_fullCollectionName);
    writer << _numberToReturn;
    writer << _cursorID;
}


} } // namespace Poco::MongoDB
@@ -1,49 +0,0 @@
//
// InsertRequest.cpp
//
// Library: MongoDB
// Package: MongoDB
// Module: InsertRequest
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#include "Poco/MongoDB/InsertRequest.h"


namespace Poco {
namespace MongoDB {


InsertRequest::InsertRequest(const std::string& collectionName, Flags flags):
    RequestMessage(MessageHeader::OP_INSERT),
    _flags(flags),
    _fullCollectionName(collectionName)
{
}


InsertRequest::~InsertRequest()
{
}


void InsertRequest::buildRequest(BinaryWriter& writer)
{
    poco_assert (!_documents.empty());

    writer << _flags;
    BSONWriter bsonWriter(writer);
    bsonWriter.writeCString(_fullCollectionName);
    for (Document::Vector::iterator it = _documents.begin(); it != _documents.end(); ++it)
    {
        bsonWriter.write(*it);
    }
}


} } // namespace Poco::MongoDB
@@ -1,33 +0,0 @@
//
// JavaScriptCode.cpp
//
// Library: MongoDB
// Package: MongoDB
// Module: JavaScriptCode
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#include "Poco/MongoDB/JavaScriptCode.h"


namespace Poco {
namespace MongoDB {


JavaScriptCode::JavaScriptCode()
{

}


JavaScriptCode::~JavaScriptCode()
{
}


} } // namespace Poco::MongoDB
@@ -1,44 +0,0 @@
//
// KillCursorsRequest.cpp
//
// Library: MongoDB
// Package: MongoDB
// Module: KillCursorsRequest
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#include "Poco/MongoDB/KillCursorsRequest.h"


namespace Poco {
namespace MongoDB {


KillCursorsRequest::KillCursorsRequest():
    RequestMessage(MessageHeader::OP_KILL_CURSORS)
{
}


KillCursorsRequest::~KillCursorsRequest()
{
}


void KillCursorsRequest::buildRequest(BinaryWriter& writer)
{
    writer << 0; // 0 - reserved for future use
    writer << static_cast<Poco::UInt64>(_cursors.size());
    for (std::vector<Int64>::iterator it = _cursors.begin(); it != _cursors.end(); ++it)
    {
        writer << *it;
    }
}


} } // namespace Poco::MongoDB
@@ -1,33 +0,0 @@
//
// Message.cpp
//
// Library: MongoDB
// Package: MongoDB
// Module: Message
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#include "Poco/MongoDB/Message.h"


namespace Poco {
namespace MongoDB {


Message::Message(MessageHeader::OpCode opcode):
    _header(opcode)
{
}


Message::~Message()
{
}


} } // namespace Poco::MongoDB
@@ -1,63 +0,0 @@
//
// MessageHeader.cpp
//
// Library: MongoDB
// Package: MongoDB
// Module: MessageHeader
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#include "Poco/MongoDB/Message.h"
#include "Poco/Exception.h"


namespace Poco {
namespace MongoDB {


MessageHeader::MessageHeader(OpCode opCode):
    _messageLength(0),
    _requestID(0),
    _responseTo(0),
    _opCode(opCode)
{
}


MessageHeader::~MessageHeader()
{
}


void MessageHeader::read(BinaryReader& reader)
{
    reader >> _messageLength;
    reader >> _requestID;
    reader >> _responseTo;

    Int32 opCode;
    reader >> opCode;
    _opCode = static_cast<OpCode>(opCode);

    if (!reader.good())
    {
        throw IOException("Failed to read from socket");
    }
}


void MessageHeader::write(BinaryWriter& writer)
{
    writer << _messageLength;
    writer << _requestID;
    writer << _responseTo;
    writer << static_cast<Int32>(_opCode);
}


} } // namespace Poco::MongoDB
@@ -1,66 +0,0 @@
//
// ObjectId.cpp
//
// Library: MongoDB
// Package: MongoDB
// Module: ObjectId
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#include "Poco/MongoDB/ObjectId.h"
#include "Poco/Format.h"
#include <cstring>


namespace Poco {
namespace MongoDB {


ObjectId::ObjectId()
{
    std::memset(_id, 0, sizeof(_id));
}


ObjectId::ObjectId(const std::string& id)
{
    poco_assert_dbg(id.size() == 24);

    const char* p = id.c_str();
    for (std::size_t i = 0; i < 12; ++i)
    {
        _id[i] = fromHex(p);
        p += 2;
    }
}


ObjectId::ObjectId(const ObjectId& copy)
{
    std::memcpy(_id, copy._id, sizeof(_id));
}


ObjectId::~ObjectId()
{
}


std::string ObjectId::toString(const std::string& fmt) const
{
    std::string s;

    for (int i = 0; i < 12; ++i)
    {
        s += Poco::format(fmt, (unsigned int) _id[i]);
    }
    return s;
}


} } // namespace Poco::MongoDB
@@ -1,187 +0,0 @@
//
// OpMsgCursor.cpp
//
// Library: MongoDB
// Package: MongoDB
// Module: OpMsgCursor
//
// Copyright (c) 2022, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#include "Poco/MongoDB/OpMsgCursor.h"
#include "Poco/MongoDB/Array.h"

//
// NOTE:
//
// MongoDB specification indicates that the flag MSG_EXHAUST_ALLOWED shall be
// used in the request when the receiver is ready to receive multiple messages
// without sending additional requests in between. Sender (MongoDB) indicates
// that more messages follow with flag MSG_MORE_TO_COME.
//
// It seems that this does not work properly. MSG_MORE_TO_COME is set and reading
// next messages sometimes works, however often the data is missing in response
// or the message header contains wrong message length and reading blocks.
// Opcode in the header is correct.
//
// Using MSG_EXHAUST_ALLOWED is therefore currently disabled.
//
// It seems that related JIRA ticket is:
//
// https://jira.mongodb.org/browse/SERVER-57297
//
// https://github.com/mongodb/specifications/blob/master/source/message/OP_MSG.rst
//

#define MONGODB_EXHAUST_ALLOWED_WORKS false

namespace Poco {
namespace MongoDB {


[[ maybe_unused ]] static const std::string keyCursor {"cursor"};
[[ maybe_unused ]] static const std::string keyFirstBatch {"firstBatch"};
[[ maybe_unused ]] static const std::string keyNextBatch {"nextBatch"};

static Poco::Int64 cursorIdFromResponse(const MongoDB::Document& doc);


OpMsgCursor::OpMsgCursor(const std::string& db, const std::string& collection):
#if MONGODB_EXHAUST_ALLOWED_WORKS
    _query(db, collection, OpMsgMessage::MSG_EXHAUST_ALLOWED)
#else
    _query(db, collection)
#endif
{
}

OpMsgCursor::~OpMsgCursor()
{
    try
    {
        poco_assert_dbg(_cursorID == 0);
    }
    catch (...)
    {
    }
}


void OpMsgCursor::setEmptyFirstBatch(bool empty)
{
    _emptyFirstBatch = empty;
}


bool OpMsgCursor::emptyFirstBatch() const
{
    return _emptyFirstBatch;
}


void OpMsgCursor::setBatchSize(Int32 batchSize)
{
    _batchSize = batchSize;
}


Int32 OpMsgCursor::batchSize() const
{
    return _batchSize;
}


OpMsgMessage& OpMsgCursor::next(Connection& connection)
{
    if (_cursorID == 0)
    {
        _response.clear();

        if (_emptyFirstBatch || _batchSize > 0)
        {
            Int32 bsize = _emptyFirstBatch ? 0 : _batchSize;
            if (_query.commandName() == OpMsgMessage::CMD_FIND)
            {
                _query.body().add("batchSize", bsize);
            }
            else if (_query.commandName() == OpMsgMessage::CMD_AGGREGATE)
            {
                auto& cursorDoc = _query.body().addNewDocument("cursor");
                cursorDoc.add("batchSize", bsize);
            }
        }

        connection.sendRequest(_query, _response);

        const auto& rdoc = _response.body();
        _cursorID = cursorIdFromResponse(rdoc);
    }
    else
    {
#if MONGODB_EXHAUST_ALLOWED_WORKS
        std::cout << "Response flags: " << _response.flags() << std::endl;
        if (_response.flags() & OpMsgMessage::MSG_MORE_TO_COME)
        {
            std::cout << "More to come. Reading more response: " << std::endl;
            _response.clear();
            connection.readResponse(_response);
        }
        else
#endif
        {
            _response.clear();
            _query.setCursor(_cursorID, _batchSize);
            connection.sendRequest(_query, _response);
        }
    }

    const auto& rdoc = _response.body();
    _cursorID = cursorIdFromResponse(rdoc);

    return _response;
}


void OpMsgCursor::kill(Connection& connection)
{
    _response.clear();
    if (_cursorID != 0)
    {
        _query.setCommandName(OpMsgMessage::CMD_KILL_CURSORS);

        MongoDB::Array::Ptr cursors = new MongoDB::Array();
        cursors->add<Poco::Int64>(_cursorID);
        _query.body().add("cursors", cursors);

        connection.sendRequest(_query, _response);

        const auto killed = _response.body().get<MongoDB::Array::Ptr>("cursorsKilled", nullptr);
        if (!killed || killed->size() != 1 || killed->get<Poco::Int64>(0, -1) != _cursorID)
        {
            throw Poco::ProtocolException("Cursor not killed as expected: " + std::to_string(_cursorID));
        }

        _cursorID = 0;
        _query.clear();
        _response.clear();
    }
}


Poco::Int64 cursorIdFromResponse(const MongoDB::Document& doc)
{
    Poco::Int64 id {0};
    auto cursorDoc = doc.get<Document::Ptr>(keyCursor, nullptr);
    if(cursorDoc)
    {
        id = cursorDoc->get<Poco::Int64>("id", 0);
    }
    return id;
}


} } // Namespace Poco::MongoDB
@@ -1,412 +0,0 @@
//
// OpMsgMessage.cpp
//
// Library: MongoDB
// Package: MongoDB
// Module: OpMsgMessage
//
// Copyright (c) 2022, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//

#include "Poco/MongoDB/OpMsgMessage.h"
#include "Poco/MongoDB/MessageHeader.h"
#include "Poco/MongoDB/Array.h"
#include "Poco/StreamCopier.h"
#include "Poco/Logger.h"

#define POCO_MONGODB_DUMP false

namespace Poco {
namespace MongoDB {

// Query and write
const std::string OpMsgMessage::CMD_INSERT { "insert" };
const std::string OpMsgMessage::CMD_DELETE { "delete" };
const std::string OpMsgMessage::CMD_UPDATE { "update" };
const std::string OpMsgMessage::CMD_FIND { "find" };
const std::string OpMsgMessage::CMD_FIND_AND_MODIFY { "findAndModify" };
const std::string OpMsgMessage::CMD_GET_MORE { "getMore" };

// Aggregation
const std::string OpMsgMessage::CMD_AGGREGATE { "aggregate" };
const std::string OpMsgMessage::CMD_COUNT { "count" };
const std::string OpMsgMessage::CMD_DISTINCT { "distinct" };
const std::string OpMsgMessage::CMD_MAP_REDUCE { "mapReduce" };

// Replication and administration
const std::string OpMsgMessage::CMD_HELLO { "hello" };
const std::string OpMsgMessage::CMD_REPL_SET_GET_STATUS { "replSetGetStatus" };
const std::string OpMsgMessage::CMD_REPL_SET_GET_CONFIG { "replSetGetConfig" };

const std::string OpMsgMessage::CMD_CREATE { "create" };
const std::string OpMsgMessage::CMD_CREATE_INDEXES { "createIndexes" };
const std::string OpMsgMessage::CMD_DROP { "drop" };
const std::string OpMsgMessage::CMD_DROP_DATABASE { "dropDatabase" };
const std::string OpMsgMessage::CMD_KILL_CURSORS { "killCursors" };
const std::string OpMsgMessage::CMD_LIST_DATABASES { "listDatabases" };
const std::string OpMsgMessage::CMD_LIST_INDEXES { "listIndexes" };

// Diagnostic
const std::string OpMsgMessage::CMD_BUILD_INFO { "buildInfo" };
const std::string OpMsgMessage::CMD_COLL_STATS { "collStats" };
const std::string OpMsgMessage::CMD_DB_STATS { "dbStats" };
const std::string OpMsgMessage::CMD_HOST_INFO { "hostInfo" };


static const std::string& commandIdentifier(const std::string& command);
    /// Commands have different names for the payload that is sent in a separate section


static const std::string keyCursor {"cursor"};
static const std::string keyFirstBatch {"firstBatch"};
static const std::string keyNextBatch {"nextBatch"};


OpMsgMessage::OpMsgMessage() :
    Message(MessageHeader::OP_MSG)
{
}


OpMsgMessage::OpMsgMessage(const std::string& databaseName, const std::string& collectionName, UInt32 flags) :
    Message(MessageHeader::OP_MSG),
    _databaseName(databaseName),
    _collectionName(collectionName),
    _flags(flags)
{
}


OpMsgMessage::~OpMsgMessage()
{
}

const std::string& OpMsgMessage::databaseName() const
{
    return _databaseName;
}


const std::string& OpMsgMessage::collectionName() const
{
    return _collectionName;
}


void OpMsgMessage::setCommandName(const std::string& command)
{
    _commandName = command;
    _body.clear();

    // IMPORTANT: Command name must be first
    if (_collectionName.empty())
    {
        // Collection is not specified. It is assumed that this particular command does
        // not need it.
        _body.add(_commandName, Int32(1));
    }
    else
    {
        _body.add(_commandName, _collectionName);
    }
    _body.add("$db", _databaseName);
}


void OpMsgMessage::setCursor(Poco::Int64 cursorID, Poco::Int32 batchSize)
{
    _commandName = OpMsgMessage::CMD_GET_MORE;
    _body.clear();

    // IMPORTANT: Command name must be first
    _body.add(_commandName, cursorID);
    _body.add("$db", _databaseName);
    _body.add("collection", _collectionName);
    if (batchSize > 0)
    {
        _body.add("batchSize", batchSize);
    }
}


const std::string& OpMsgMessage::commandName() const
{
    return _commandName;
}


void OpMsgMessage::setAcknowledgedRequest(bool ack)
{
    const auto& id = commandIdentifier(_commandName);
    if (id.empty())
        return;

    _acknowledged = ack;

    auto writeConcern = _body.get<Document::Ptr>("writeConcern", nullptr);
    if (writeConcern)
        writeConcern->remove("w");

    if (ack)
    {
        _flags = _flags & (~MSG_MORE_TO_COME);
    }
    else
    {
        _flags = _flags | MSG_MORE_TO_COME;
        if (!writeConcern)
            _body.addNewDocument("writeConcern").add("w", 0);
        else
            writeConcern->add("w", 0);
    }

}


bool OpMsgMessage::acknowledgedRequest() const
{
    return _acknowledged;
}


UInt32 OpMsgMessage::flags() const
{
    return _flags;
}


Document& OpMsgMessage::body()
{
    return _body;
}


const Document& OpMsgMessage::body() const
{
    return _body;
}


Document::Vector& OpMsgMessage::documents()
{
    return _documents;
}


const Document::Vector& OpMsgMessage::documents() const
{
    return _documents;
}


bool OpMsgMessage::responseOk() const
{
    Poco::Int64 ok {false};
    if (_body.exists("ok"))
    {
        ok = _body.getInteger("ok");
    }
    return (ok != 0);
}


void OpMsgMessage::clear()
{
    _flags = MSG_FLAGS_DEFAULT;
    _commandName.clear();
    _body.clear();
    _documents.clear();
}


void OpMsgMessage::send(std::ostream& ostr)
{
    BinaryWriter socketWriter(ostr, BinaryWriter::LITTLE_ENDIAN_BYTE_ORDER);

    // Serialise the body
    std::stringstream ss;
    BinaryWriter writer(ss, BinaryWriter::LITTLE_ENDIAN_BYTE_ORDER);
    writer << _flags;

    writer << PAYLOAD_TYPE_0;
    _body.write(writer);

    if (!_documents.empty())
    {
        // Serialise attached documents

        std::stringstream ssdoc;
        BinaryWriter wdoc(ssdoc, BinaryWriter::LITTLE_ENDIAN_BYTE_ORDER);
        for (auto& doc: _documents)
        {
            doc->write(wdoc);
        }
        wdoc.flush();

        const std::string& identifier = commandIdentifier(_commandName);
        const Poco::Int32 size = static_cast<Poco::Int32>(sizeof(size) + identifier.size() + 1 + ssdoc.tellp());
        writer << PAYLOAD_TYPE_1;
        writer << size;
        writer.writeCString(identifier.c_str());
        StreamCopier::copyStream(ssdoc, ss);
    }
    writer.flush();

#if POCO_MONGODB_DUMP
    const std::string section = ss.str();
    std::string dump;
    Logger::formatDump(dump, section.data(), section.length());
    std::cout << dump << std::endl;
#endif

    messageLength(static_cast<Poco::Int32>(ss.tellp()));

    _header.write(socketWriter);
    StreamCopier::copyStream(ss, ostr);

    ostr.flush();
}


void OpMsgMessage::read(std::istream& istr)
{
    std::string message;
    {
        BinaryReader reader(istr, BinaryReader::LITTLE_ENDIAN_BYTE_ORDER);
        _header.read(reader);

        poco_assert_dbg(_header.opCode() == _header.OP_MSG);

        const std::streamsize remainingSize {_header.getMessageLength() - _header.MSG_HEADER_SIZE };
        message.reserve(remainingSize);

#if POCO_MONGODB_DUMP
        std::cout
            << "Message hdr: " << _header.getMessageLength() << " " << remainingSize << " "
            << _header.opCode() << " " << _header.getRequestID() << " " << _header.responseTo()
            << std::endl;
#endif

        reader.readRaw(remainingSize, message);

#if POCO_MONGODB_DUMP
        std::string dump;
        Logger::formatDump(dump, message.data(), message.length());
        std::cout << dump << std::endl;
#endif
    }
    // Read complete message and then interpret it.

    std::istringstream msgss(message);
    BinaryReader reader(msgss, BinaryReader::LITTLE_ENDIAN_BYTE_ORDER);

    Poco::UInt8 payloadType {0xFF};

    reader >> _flags;
    reader >> payloadType;
    poco_assert_dbg(payloadType == PAYLOAD_TYPE_0);

    _body.read(reader);

    // Read next sections from the buffer
    while (msgss.good())
    {
        // NOTE: Not tested yet with database, because it returns everything in the body.
        // Does MongoDB ever return documents as Payload type 1?
        reader >> payloadType;
        if (!msgss.good())
        {
            break;
        }
        poco_assert_dbg(payloadType == PAYLOAD_TYPE_1);
#if POCO_MONGODB_DUMP
        std::cout << "section payload: " << payloadType << std::endl;
#endif

        Poco::Int32 sectionSize {0};
        reader >> sectionSize;
        poco_assert_dbg(sectionSize > 0);

#if POCO_MONGODB_DUMP
        std::cout << "section size: " << sectionSize << std::endl;
#endif
        std::streamoff offset = sectionSize - sizeof(sectionSize);
        std::streampos endOfSection = msgss.tellg() + offset;

        std::string identifier;
        reader.readCString(identifier);
#if POCO_MONGODB_DUMP
        std::cout << "section identifier: " << identifier << std::endl;
#endif

        // Loop to read documents from this section.
        while (msgss.tellg() < endOfSection)
        {
#if POCO_MONGODB_DUMP
            std::cout << "section doc: " << msgss.tellg() << " " << endOfSection << std::endl;
#endif
            Document::Ptr doc = new Document();
            doc->read(reader);
            _documents.push_back(doc);
            if (msgss.tellg() < 0)
            {
                break;
            }
        }
    }

    // Extract documents from the cursor batch if they are there.
    MongoDB::Array::Ptr batch;
    auto curDoc = _body.get<MongoDB::Document::Ptr>(keyCursor, nullptr);
    if (curDoc)
    {
        batch = curDoc->get<MongoDB::Array::Ptr>(keyFirstBatch, nullptr);
        if (!batch)
        {
            batch = curDoc->get<MongoDB::Array::Ptr>(keyNextBatch, nullptr);
        }
    }
    if (batch)
    {
        for(std::size_t i = 0; i < batch->size(); i++)
        {
            const auto& d = batch->get<MongoDB::Document::Ptr>(i, nullptr);
            if (d)
            {
                _documents.push_back(d);
            }
        }
    }

}

const std::string& commandIdentifier(const std::string& command)
{
    // Names of identifiers for commands that send bulk documents in the request
    // The identifier is set in the section type 1.
    static std::map<std::string, std::string> identifiers {
        { OpMsgMessage::CMD_INSERT, "documents" },
        { OpMsgMessage::CMD_DELETE, "deletes" },
        { OpMsgMessage::CMD_UPDATE, "updates" },

        // Not sure if create index can send document section
        { OpMsgMessage::CMD_CREATE_INDEXES, "indexes" }
    };

    const auto i = identifiers.find(command);
    if (i != identifiers.end())
    {
        return i->second;
    }

    // This likely means that documents are incorrectly set for a command
    // that does not send list of documents in section type 1.
    static const std::string emptyIdentifier;
    return emptyIdentifier;
}


} } // namespace Poco::MongoDB
@@ -1,54 +0,0 @@
//
// QueryRequest.cpp
//
// Library: MongoDB
// Package: MongoDB
// Module: QueryRequest
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#include "Poco/MongoDB/QueryRequest.h"


namespace Poco {
namespace MongoDB {


QueryRequest::QueryRequest(const std::string& collectionName, QueryRequest::Flags flags):
    RequestMessage(MessageHeader::OP_QUERY),
    _flags(flags),
    _fullCollectionName(collectionName),
    _numberToSkip(0),
    _numberToReturn(100),
    _selector(),
    _returnFieldSelector()
{
}


QueryRequest::~QueryRequest()
{
}


void QueryRequest::buildRequest(BinaryWriter& writer)
{
    writer << _flags;
    BSONWriter(writer).writeCString(_fullCollectionName);
    writer << _numberToSkip;
    writer << _numberToReturn;
    _selector.write(writer);

    if (!_returnFieldSelector.empty())
    {
        _returnFieldSelector.write(writer);
    }
}


} } // namespace Poco::MongoDB
@@ -1,71 +0,0 @@
//
// RegularExpression.cpp
//
// Library: MongoDB
// Package: MongoDB
// Module: RegularExpression
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#include "Poco/MongoDB/RegularExpression.h"
#include <sstream>


namespace Poco {
namespace MongoDB {


RegularExpression::RegularExpression()
{
}


RegularExpression::RegularExpression(const std::string& pattern, const std::string& options):
    _pattern(pattern),
    _options(options)
{
}


RegularExpression::~RegularExpression()
{
}


SharedPtr<Poco::RegularExpression> RegularExpression::createRE() const
{
    int options = 0;
    for (std::string::const_iterator optIt = _options.begin(); optIt != _options.end(); ++optIt)
    {
        switch (*optIt)
        {
        case 'i': // Case Insensitive
            options |= Poco::RegularExpression::RE_CASELESS;
            break;
        case 'm': // Multiline matching
            options |= Poco::RegularExpression::RE_MULTILINE;
            break;
        case 'x': // Verbose mode
            //No equivalent in Poco
            break;
        case 'l': // \w \W Locale dependent
            //No equivalent in Poco
            break;
        case 's': // Dotall mode
            options |= Poco::RegularExpression::RE_DOTALL;
            break;
        case 'u': // \w \W Unicode
            //No equivalent in Poco
            break;
        }
    }
    return new Poco::RegularExpression(_pattern, options);
}


} } // namespace Poco::MongoDB
@@ -1,89 +0,0 @@
//
// ReplicaSet.cpp
//
// Library: MongoDB
// Package: MongoDB
// Module: ReplicaSet
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#include "Poco/MongoDB/ReplicaSet.h"
#include "Poco/MongoDB/QueryRequest.h"
#include "Poco/MongoDB/ResponseMessage.h"


namespace Poco {
namespace MongoDB {


ReplicaSet::ReplicaSet(const std::vector<Net::SocketAddress> &addresses):
    _addresses(addresses)
{
}


ReplicaSet::~ReplicaSet()
{
}


Connection::Ptr ReplicaSet::findMaster()
{
    Connection::Ptr master;

    for (std::vector<Net::SocketAddress>::iterator it = _addresses.begin(); it != _addresses.end(); ++it)
    {
        master = isMaster(*it);
        if (!master.isNull())
        {
            break;
        }
    }

    return master;
}


Connection::Ptr ReplicaSet::isMaster(const Net::SocketAddress& address)
{
    Connection::Ptr conn = new Connection();

    try
    {
        conn->connect(address);

        QueryRequest request("admin.$cmd");
        request.setNumberToReturn(1);
        request.selector().add("isMaster", 1);

        ResponseMessage response;
        conn->sendRequest(request, response);

        if (response.documents().size() > 0)
        {
            Document::Ptr doc = response.documents()[0];
            if (doc->get<bool>("ismaster"))
            {
                return conn;
            }
            else if (doc->exists("primary"))
            {
                return isMaster(Net::SocketAddress(doc->get<std::string>("primary")));
            }
        }
    }
    catch (...)
    {
        conn = 0;
    }

    return 0;
}


} } // namespace Poco::MongoDB
@@ -1,51 +0,0 @@
//
// RequestMessage.cpp
//
// Library: MongoDB
// Package: MongoDB
// Module: RequestMessage
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#include "Poco/MongoDB/RequestMessage.h"
#include "Poco/Net/SocketStream.h"
#include "Poco/StreamCopier.h"


namespace Poco {
namespace MongoDB {


RequestMessage::RequestMessage(MessageHeader::OpCode opcode):
    Message(opcode)
{
}


RequestMessage::~RequestMessage()
{
}


void RequestMessage::send(std::ostream& ostr)
{
    std::stringstream ss;
    BinaryWriter requestWriter(ss, BinaryWriter::LITTLE_ENDIAN_BYTE_ORDER);
    buildRequest(requestWriter);
    requestWriter.flush();

    messageLength(static_cast<Poco::Int32>(ss.tellp()));

    BinaryWriter socketWriter(ostr, BinaryWriter::LITTLE_ENDIAN_BYTE_ORDER);
    _header.write(socketWriter);
    StreamCopier::copyStream(ss, ostr);
    ostr.flush();
}


} } // namespace Poco::MongoDB
@@ -1,80 +0,0 @@
//
// ResponseMessage.cpp
//
// Library: MongoDB
// Package: MongoDB
// Module: ResponseMessage
//
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
// and Contributors.
//
// SPDX-License-Identifier: BSL-1.0
//


#include "Poco/MongoDB/ResponseMessage.h"
#include "Poco/Net/SocketStream.h"


namespace Poco {
namespace MongoDB {


ResponseMessage::ResponseMessage():
    Message(MessageHeader::OP_REPLY),
    _responseFlags(0),
    _cursorID(0),
    _startingFrom(0),
    _numberReturned(0)
{
}


ResponseMessage::ResponseMessage(const Int64& cursorID):
    Message(MessageHeader::OP_REPLY),
    _responseFlags(0),
    _cursorID(cursorID),
    _startingFrom(0),
    _numberReturned(0)
{
}


ResponseMessage::~ResponseMessage()
{
}


void ResponseMessage::clear()
{
    _responseFlags = 0;
    _startingFrom = 0;
    _cursorID = 0;
    _numberReturned = 0;
    _documents.clear();
}


void ResponseMessage::read(std::istream& istr)
{
    clear();

    BinaryReader reader(istr, BinaryReader::LITTLE_ENDIAN_BYTE_ORDER);

    _header.read(reader);

    reader >> _responseFlags;
    reader >> _cursorID;
    reader >> _startingFrom;
    reader >> _numberReturned;

    for (int i = 0; i < _numberReturned; ++i)
    {
        Document::Ptr doc = new Document();
        doc->read(reader);
        _documents.push_back(doc);
    }
}


} } // namespace Poco::MongoDB
@ -1,47 +0,0 @@
|
||||
//
|
||||
// UpdateRequest.cpp
|
||||
//
|
||||
// Library: MongoDB
|
||||
// Package: MongoDB
|
||||
// Module: UpdateRequest
|
||||
//
|
||||
// Copyright (c) 2012, Applied Informatics Software Engineering GmbH.
|
||||
// and Contributors.
|
||||
//
|
||||
// SPDX-License-Identifier: BSL-1.0
|
||||
//
|
||||
|
||||
|
||||
#include "Poco/MongoDB/UpdateRequest.h"
|
||||
|
||||
|
||||
namespace Poco {
|
||||
namespace MongoDB {
|
||||
|
||||
|
||||
UpdateRequest::UpdateRequest(const std::string& collectionName, UpdateRequest::Flags flags):
|
||||
RequestMessage(MessageHeader::OP_UPDATE),
|
||||
_flags(flags),
|
||||
_fullCollectionName(collectionName),
|
||||
_selector(),
|
||||
_update()
|
||||
{
|
||||
}
|
||||
|
||||
|
||||
UpdateRequest::~UpdateRequest()
|
||||
{
|
||||
}
|
||||
|
||||
|
||||
void UpdateRequest::buildRequest(BinaryWriter& writer)
|
||||
{
|
||||
writer << 0; // 0 - reserved for future use
|
||||
BSONWriter(writer).writeCString(_fullCollectionName);
|
||||
writer << _flags;
|
||||
_selector.write(writer);
|
||||
_update.write(writer);
|
||||
}
|
||||
|
||||
|
||||
} } // namespace Poco::MongoDB
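
Note (illustrative, not part of the diff): UpdateRequest::buildRequest writes the OP_UPDATE body in a fixed order. A hedged Python sketch of the same layout; the selector/update arguments are assumed to be pre-encoded BSON blobs:

import struct

def build_update_body(collection: str, flags: int,
                      selector_bson: bytes, update_bson: bytes) -> bytes:
    # A reserved int32 zero, the NUL-terminated full collection name,
    # the flags int32, then the selector and update documents back to back.
    return (struct.pack("<i", 0)
            + collection.encode("utf-8") + b"\x00"
            + struct.pack("<i", flags)
            + selector_bson + update_bson)
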
80  ci/docker/binary-builder/Dockerfile  Normal file
@@ -0,0 +1,80 @@
# docker build -t clickhouse/binary-builder .
ARG FROM_TAG=latest
FROM clickhouse/fasttest:$FROM_TAG
ENV CC=clang-${LLVM_VERSION}
ENV CXX=clang++-${LLVM_VERSION}

# If cctools is updated, first build it in the CI, then update it here in a separate commit
COPY --from=clickhouse/cctools:d9e3596e706b /cctools /cctools

# Rust toolchain and libraries
ENV RUSTUP_HOME=/rust/rustup
ENV CARGO_HOME=/rust/cargo
ENV PATH="/rust/cargo/bin:${PATH}"
RUN curl https://sh.rustup.rs -sSf | bash -s -- -y && \
    chmod 777 -R /rust && \
    rustup toolchain install nightly-2024-04-01 && \
    rustup default nightly-2024-04-01 && \
    rustup toolchain remove stable && \
    rustup component add rust-src && \
    rustup target add x86_64-unknown-linux-gnu && \
    rustup target add aarch64-unknown-linux-gnu && \
    rustup target add x86_64-apple-darwin && \
    rustup target add x86_64-unknown-freebsd && \
    rustup target add aarch64-apple-darwin && \
    rustup target add powerpc64le-unknown-linux-gnu && \
    rustup target add x86_64-unknown-linux-musl && \
    rustup target add aarch64-unknown-linux-musl && \
    rustup target add riscv64gc-unknown-linux-gnu

# A cross-linker for RISC-V 64 (we need it because LLVM's LLD does not work):
RUN apt-get update \
    && apt-get install software-properties-common --yes --no-install-recommends --verbose-versions

RUN add-apt-repository ppa:ubuntu-toolchain-r/test --yes \
    && apt-get update \
    && apt-get install --yes \
        binutils-riscv64-linux-gnu \
        build-essential \
        python3-boto3 \
        yasm \
        zstd \
        zip \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/*

# Download toolchain and SDK for Darwin
RUN curl -sL -O https://github.com/phracker/MacOSX-SDKs/releases/download/11.3/MacOSX11.0.sdk.tar.xz

# Download and install mold 2.0 for s390x build
RUN curl -Lo /tmp/mold.tar.gz "https://github.com/rui314/mold/releases/download/v2.0.0/mold-2.0.0-x86_64-linux.tar.gz" \
    && mkdir /tmp/mold \
    && tar -xzf /tmp/mold.tar.gz -C /tmp/mold \
    && cp -r /tmp/mold/mold*/* /usr \
    && rm -rf /tmp/mold \
    && rm /tmp/mold.tar.gz

# Architecture of the image when BuildKit/buildx is used
ARG TARGETARCH
ARG NFPM_VERSION=2.20.0

RUN arch=${TARGETARCH:-amd64} \
    && curl -Lo /tmp/nfpm.deb "https://github.com/goreleaser/nfpm/releases/download/v${NFPM_VERSION}/nfpm_${arch}.deb" \
    && dpkg -i /tmp/nfpm.deb \
    && rm /tmp/nfpm.deb

ARG GO_VERSION=1.19.10
# We needed Go for clickhouse-diagnostics (it is not used anymore)
RUN arch=${TARGETARCH:-amd64} \
    && curl -Lo /tmp/go.tgz "https://go.dev/dl/go${GO_VERSION}.linux-${arch}.tar.gz" \
    && tar -xzf /tmp/go.tgz -C /usr/local/ \
    && rm /tmp/go.tgz

ENV PATH="$PATH:/usr/local/go/bin"
ENV GOPATH=/workdir/go
ENV GOCACHE=/workdir/

ARG CLANG_TIDY_SHA1=c191254ea00d47ade11d7170ef82fe038c213774
RUN curl -Lo /usr/bin/clang-tidy-cache \
    "https://raw.githubusercontent.com/matus-chochlik/ctcache/$CLANG_TIDY_SHA1/clang-tidy-cache" \
    && chmod +x /usr/bin/clang-tidy-cache
5  ci/docker/compatibility/centos/Dockerfile  Normal file
@@ -0,0 +1,5 @@
# docker build -t clickhouse/test-old-centos .
FROM centos:5

CMD /bin/sh -c "/clickhouse server --config /config/config.xml > /var/log/clickhouse-server/stderr.log 2>&1 & \
    sleep 5 && /clickhouse client --query \"select 'OK'\" 2> /var/log/clickhouse-server/clientstderr.log || echo 'FAIL'"
5  ci/docker/compatibility/ubuntu/Dockerfile  Normal file
@@ -0,0 +1,5 @@
# docker build -t clickhouse/test-old-ubuntu .
FROM ubuntu:12.04

CMD /bin/sh -c "/clickhouse server --config /config/config.xml > /var/log/clickhouse-server/stderr.log 2>&1 & \
    sleep 5 && /clickhouse client --query \"select 'OK'\" 2> /var/log/clickhouse-server/clientstderr.log || echo 'FAIL'"
@@ -102,5 +102,14 @@ RUN groupadd --system --gid 1000 clickhouse \
    && useradd --system --gid 1000 --uid 1000 -m clickhouse \
    && mkdir -p /.cache/sccache && chmod 777 /.cache/sccache

# TODO: move nfpm into a docker image that will do the packaging
ARG TARGETARCH
ARG NFPM_VERSION=2.20.0
RUN arch=${TARGETARCH:-amd64} \
    && curl -Lo /tmp/nfpm.deb "https://github.com/goreleaser/nfpm/releases/download/v${NFPM_VERSION}/nfpm_${arch}.deb" \
    && dpkg -i /tmp/nfpm.deb \
    && rm /tmp/nfpm.deb

ENV PYTHONPATH="/wd"
ENV PYTHONUNBUFFERED=1
@@ -11,7 +11,8 @@ ARG odbc_driver_url="https://github.com/ClickHouse/clickhouse-odbc/releases/down
RUN mkdir /etc/clickhouse-server /etc/clickhouse-keeper /etc/clickhouse-client && chmod 777 /etc/clickhouse-* \
    && mkdir -p /var/lib/clickhouse /var/log/clickhouse-server && chmod 777 /var/log/clickhouse-server /var/lib/clickhouse

-RUN addgroup --gid 1001 clickhouse && adduser --uid 1001 --gid 1001 --disabled-password clickhouse
+RUN addgroup --gid 1000 clickhouse && adduser --uid 1000 --gid 1000 --disabled-password clickhouse
+RUN addgroup --gid 1001 clickhouse2 && adduser --uid 1001 --gid 1001 --disabled-password clickhouse2

# moreutils - provides ts for FT
# expect, bzip2 - required by FT
@@ -58,6 +59,7 @@ RUN apt-get update -y \
    curl \
    wget \
    xz-utils \
    ripgrep \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/*

@@ -114,4 +116,5 @@ RUN curl -L --no-verbose -O 'https://archive.apache.org/dist/hadoop/common/hadoo
RUN npm install -g azurite@3.30.0 \
    && npm install -g tslib && npm install -g node

ENV PYTHONPATH=".:./ci"
USER clickhouse
@@ -4,3 +4,4 @@ requests==2.32.3
pandas==1.5.3
scipy==1.12.0
pyarrow==18.0.0
+grpcio==1.47.0
@@ -6,6 +6,7 @@ RUN apt-get update && env DEBIAN_FRONTEND=noninteractive apt-get install --yes \
    libxml2-utils \
    python3-pip \
    locales \
    ripgrep \
    git \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/*
@@ -1,14 +1,22 @@
import argparse
import os

from praktika.result import Result
from praktika.settings import Settings
from praktika.utils import MetaClasses, Shell, Utils

+from ci.jobs.scripts.clickhouse_version import CHVersion
from ci.workflows.defs import CIFiles, ToolSet
from ci.workflows.pull_request import S3_BUILDS_BUCKET


class JobStages(metaclass=MetaClasses.WithIter):
    CHECKOUT_SUBMODULES = "checkout"
    CMAKE = "cmake"
+    UNSHALLOW = "unshallow"
    BUILD = "build"
+    PACKAGE = "package"
+    UNIT = "unit"


def parse_args():
@@ -32,15 +40,22 @@ CMAKE_CMD = """cmake --debug-trycompile -DCMAKE_VERBOSE_MAKEFILE=1 -LA \
-DENABLE_UTILS=0 -DCMAKE_FIND_PACKAGE_NO_PACKAGE_REGISTRY=ON -DCMAKE_INSTALL_PREFIX=/usr \
-DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_INSTALL_LOCALSTATEDIR=/var -DCMAKE_SKIP_INSTALL_ALL_DEPENDENCY=ON \
{AUX_DEFS} \
--DCMAKE_C_COMPILER=clang-18 -DCMAKE_CXX_COMPILER=clang++-18 \
--DCOMPILER_CACHE={CACHE_TYPE} \
--DENABLE_BUILD_PROFILING=1 {DIR}"""
+-DCMAKE_C_COMPILER={COMPILER} -DCMAKE_CXX_COMPILER={COMPILER_CPP} \
+-DCOMPILER_CACHE={CACHE_TYPE} -DENABLE_BUILD_PROFILING=1 {DIR}"""

# release: cmake --debug-trycompile -DCMAKE_VERBOSE_MAKEFILE=1 -LA -DCMAKE_BUILD_TYPE=None -DSANITIZE= -DENABLE_CHECK_HEAVY_BUILDS=1 -DENABLE_CLICKHOUSE_SELF_EXTRACTING=1 -DENABLE_TESTS=0 -DENABLE_UTILS=0 -DCMAKE_FIND_PACKAGE_NO_PACKAGE_REGISTRY=ON -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_INSTALL_LOCALSTATEDIR=/var -DCMAKE_SKIP_INSTALL_ALL_DEPENDENCY=ON -DSPLIT_DEBUG_SYMBOLS=ON -DBUILD_STANDALONE_KEEPER=1 -DCMAKE_C_COMPILER=clang-18 -DCMAKE_CXX_COMPILER=clang++-18 -DCOMPILER_CACHE=sccache -DENABLE_BUILD_PROFILING=1 ..
# binary release: cmake --debug-trycompile -DCMAKE_VERBOSE_MAKEFILE=1 -LA -DCMAKE_BUILD_TYPE=None -DSANITIZE= -DENABLE_CHECK_HEAVY_BUILDS=1 -DENABLE_CLICKHOUSE_SELF_EXTRACTING=1 -DCMAKE_C_COMPILER=clang-18 -DCMAKE_CXX_COMPILER=clang++-18 -DCOMPILER_CACHE=sccache -DENABLE_BUILD_PROFILING=1 ..
# release coverage: cmake --debug-trycompile -DCMAKE_VERBOSE_MAKEFILE=1 -LA -DCMAKE_BUILD_TYPE=None -DSANITIZE= -DENABLE_CHECK_HEAVY_BUILDS=1 -DENABLE_CLICKHOUSE_SELF_EXTRACTING=1 -DENABLE_TESTS=0 -DENABLE_UTILS=0 -DCMAKE_FIND_PACKAGE_NO_PACKAGE_REGISTRY=ON -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_INSTALL_LOCALSTATEDIR=/var -DCMAKE_SKIP_INSTALL_ALL_DEPENDENCY=ON -DCMAKE_C_COMPILER=clang-18 -DCMAKE_CXX_COMPILER=clang++-18 -DSANITIZE_COVERAGE=1 -DBUILD_STANDALONE_KEEPER=0 -DCOMPILER_CACHE=sccache -DENABLE_BUILD_PROFILING=1 ..


def main():

    args = parse_args()

    # # for sccache
    # os.environ["SCCACHE_BUCKET"] = S3_BUILDS_BUCKET
    # os.environ["SCCACHE_S3_KEY_PREFIX"] = "ccache/sccache"
    # TODO: check with SCCACHE_LOG=debug SCCACHE_NO_DAEMON=1

    stop_watch = Utils.Stopwatch()

    stages = list(JobStages)
@@ -62,40 +77,83 @@ def main():

    BUILD_TYPE = "RelWithDebInfo"
    SANITIZER = ""
-    AUX_DEFS = " -DENABLE_TESTS=0 "
+    AUX_DEFS = " -DENABLE_TESTS=1 "
+    cmake_cmd = None

    if "debug" in build_type:
        print("Build type set: debug")
        BUILD_TYPE = "Debug"
-        AUX_DEFS = " -DENABLE_TESTS=1 "
+        AUX_DEFS = " -DENABLE_TESTS=0 "
+        package_type = "debug"
    elif "release" in build_type:
        print("Build type set: release")
        AUX_DEFS = (
            " -DENABLE_TESTS=0 -DSPLIT_DEBUG_SYMBOLS=ON -DBUILD_STANDALONE_KEEPER=1 "
        )
+        package_type = "release"
    elif "asan" in build_type:
        print("Sanitizer set: address")
        SANITIZER = "address"
+        package_type = "asan"
    elif "tsan" in build_type:
        print("Sanitizer set: thread")
        SANITIZER = "thread"
+        package_type = "tsan"
    elif "msan" in build_type:
        print("Sanitizer set: memory")
        SANITIZER = "memory"
+        package_type = "msan"
    elif "ubsan" in build_type:
        print("Sanitizer set: undefined")
        SANITIZER = "undefined"
+        package_type = "ubsan"
+    elif "binary" in build_type:
+        package_type = "binary"
+        cmake_cmd = f"cmake --debug-trycompile -DCMAKE_VERBOSE_MAKEFILE=1 -LA -DCMAKE_BUILD_TYPE=None -DSANITIZE= -DENABLE_CHECK_HEAVY_BUILDS=1 -DENABLE_CLICKHOUSE_SELF_EXTRACTING=1 -DCMAKE_C_COMPILER={ToolSet.COMPILER_C} -DCMAKE_CXX_COMPILER={ToolSet.COMPILER_CPP} -DCOMPILER_CACHE=sccache -DENABLE_BUILD_PROFILING=1 {Utils.cwd()}"
    else:
        assert False

-    cmake_cmd = CMAKE_CMD.format(
-        BUILD_TYPE=BUILD_TYPE,
-        CACHE_TYPE=CACHE_TYPE,
-        SANITIZER=SANITIZER,
-        AUX_DEFS=AUX_DEFS,
-        DIR=Utils.cwd(),
-    )
+    if not cmake_cmd:
+        cmake_cmd = CMAKE_CMD.format(
+            BUILD_TYPE=BUILD_TYPE,
+            CACHE_TYPE=CACHE_TYPE,
+            SANITIZER=SANITIZER,
+            AUX_DEFS=AUX_DEFS,
+            DIR=Utils.cwd(),
+            COMPILER=ToolSet.COMPILER_C,
+            COMPILER_CPP=ToolSet.COMPILER_CPP,
+        )

    build_dir = f"{Settings.TEMP_DIR}/build"

    res = True
    results = []
+    version = ""

+    if res and JobStages.UNSHALLOW in stages:
+        results.append(
+            Result.from_commands_run(
+                name="Repo Unshallow",
+                command="git rev-parse --is-shallow-repository | grep -q true && git fetch --depth 10000 --no-tags --filter=tree:0 origin $(git rev-parse --abbrev-ref HEAD)",
+                with_log=True,
+            )
+        )
+        res = results[-1].is_ok()
+        if res:
+            try:
+                version = CHVersion.get_version()
+                assert version
+                print(f"Got version from repo [{version}]")
+            except Exception as e:
+                results[-1].set_failed().set_info(
+                    f"Failed to get version from repo, ex [{e}]"
+                )
+                res = False

    if res and JobStages.CHECKOUT_SUBMODULES in stages:
        Shell.check(f"rm -rf {build_dir} && mkdir -p {build_dir}")
        results.append(
-            Result.create_from_command_execution(
+            Result.from_commands_run(
                name="Checkout Submodules",
                command=f"git submodule sync --recursive && git submodule init && git submodule update --depth 1 --recursive --jobs {min([Utils.cpu_count(), 20])}",
            )
@@ -104,7 +162,7 @@ def main():

    if res and JobStages.CMAKE in stages:
        results.append(
-            Result.create_from_command_execution(
+            Result.from_commands_run(
                name="Cmake configuration",
                command=cmake_cmd,
                workdir=build_dir,
@@ -116,7 +174,7 @@ def main():
    if res and JobStages.BUILD in stages:
        Shell.check("sccache --show-stats")
        results.append(
-            Result.create_from_command_execution(
+            Result.from_commands_run(
                name="Build ClickHouse",
                command="ninja clickhouse-bundle clickhouse-odbc-bridge clickhouse-library-bridge",
                workdir=build_dir,
@@ -125,6 +183,44 @@ def main():
        )
        Shell.check("sccache --show-stats")
        Shell.check(f"ls -l {build_dir}/programs/")
        Shell.check(f"pwd")
        Shell.check(f"find {build_dir} -name unit_tests_dbms")
        Shell.check(f"find . -name unit_tests_dbms")
        res = results[-1].is_ok()

+    if res and JobStages.PACKAGE in stages and "binary" not in build_type:
+        assert package_type
+        if "amd" in build_type:
+            deb_arch = "amd64"
+        else:
+            deb_arch = "arm64"
+
+        output_dir = "/tmp/praktika/output/"
+        assert Shell.check(f"rm -f {output_dir}/*.deb")
+
+        results.append(
+            Result.from_commands_run(
+                name="Build Packages",
+                command=[
+                    f"DESTDIR={build_dir}/root ninja programs/install",
+                    f"ln -sf {build_dir}/root {Utils.cwd()}/packages/root",
+                    f"cd {Utils.cwd()}/packages/ && OUTPUT_DIR={output_dir} BUILD_TYPE={package_type} VERSION_STRING={version} DEB_ARCH={deb_arch} ./build --deb",
+                ],
+                workdir=build_dir,
+                with_log=True,
+            )
+        )
+        res = results[-1].is_ok()
+
+    if res and JobStages.UNIT in stages and (SANITIZER or "binary" in build_type):
+        # TODO: parallel execution
+        results.append(
+            Result.from_gtest_run(
+                name="Unit Tests",
+                unit_tests_path=CIFiles.UNIT_TESTS_BIN,
+                with_log=False,
+            )
+        )
+        res = results[-1].is_ok()

    Result.create_from(results=results, stopwatch=stop_watch).complete_job()
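
Note (illustrative only, values assumed rather than taken from the diff): with a build_type such as "amd_release", the dispatch above leaves cmake_cmd unset, so the template is rendered roughly as:

cmake_cmd = CMAKE_CMD.format(
    BUILD_TYPE="RelWithDebInfo",
    CACHE_TYPE="sccache",              # assumed cache type
    SANITIZER="",
    AUX_DEFS=" -DENABLE_TESTS=0 -DSPLIT_DEBUG_SYMBOLS=ON -DBUILD_STANDALONE_KEEPER=1 ",
    DIR="/path/to/ClickHouse",         # Utils.cwd() at runtime
    COMPILER="clang-18",               # assumed value of ToolSet.COMPILER_C
    COMPILER_CPP="clang++-18",         # assumed value of ToolSet.COMPILER_CPP
)
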
@@ -1,3 +1,4 @@
+import argparse
import math
import multiprocessing
import os
@@ -245,8 +246,18 @@ def check_file_names(files):
    return ""


+def parse_args():
+    parser = argparse.ArgumentParser(description="ClickHouse Style Check Job")
+    # parser.add_argument("--param", help="Optional job start stage", default=None)
+    parser.add_argument("--test", help="Optional test name pattern", default="")
+    return parser.parse_args()
+
+
if __name__ == "__main__":
    results = []
+    args = parse_args()
+    testpattern = args.test

    stop_watch = Utils.Stopwatch()

    all_files = Utils.traverse_paths(
@@ -296,87 +307,111 @@ if __name__ == "__main__":
        )
    )

-    results.append(
-        run_check_concurrent(
-            check_name="Whitespace Check",
-            check_function=check_whitespaces,
-            files=cpp_files,
-        )
-    )
+    testname = "Whitespace Check"
+    if testpattern.lower() in testname.lower():
+        results.append(
+            run_check_concurrent(
+                check_name=testname,
+                check_function=check_whitespaces,
+                files=cpp_files,
+            )
+        )
-    results.append(
-        run_check_concurrent(
-            check_name="YamlLint Check",
-            check_function=check_yamllint,
-            files=yaml_workflow_files,
-        )
-    )
+    testname = "YamlLint Check"
+    if testpattern.lower() in testname.lower():
+        results.append(
+            run_check_concurrent(
+                check_name=testname,
+                check_function=check_yamllint,
+                files=yaml_workflow_files,
+            )
+        )
-    results.append(
-        run_check_concurrent(
-            check_name="XmlLint Check",
-            check_function=check_xmllint,
-            files=xml_files,
-        )
-    )
+    testname = "XmlLint Check"
+    if testpattern.lower() in testname.lower():
+        results.append(
+            run_check_concurrent(
+                check_name=testname,
+                check_function=check_xmllint,
+                files=xml_files,
+            )
+        )
-    results.append(
-        run_check_concurrent(
-            check_name="Functional Tests scripts smoke check",
-            check_function=check_functional_test_cases,
-            files=functional_test_files,
-        )
-    )
+    testname = "Functional Tests scripts smoke check"
+    if testpattern.lower() in testname.lower():
+        results.append(
+            run_check_concurrent(
+                check_name=testname,
+                check_function=check_functional_test_cases,
+                files=functional_test_files,
+            )
+        )
-    results.append(
-        Result.create_from_command_execution(
-            name="Check Tests Numbers",
-            command=check_gaps_in_tests_numbers,
-            command_args=[functional_test_files],
-        )
-    )
+    testname = "Check Tests Numbers"
+    if testpattern.lower() in testname.lower():
+        results.append(
+            Result.from_commands_run(
+                name=testname,
+                command=check_gaps_in_tests_numbers,
+                command_args=[functional_test_files],
+            )
+        )
-    results.append(
-        Result.create_from_command_execution(
-            name="Check Broken Symlinks",
-            command=check_broken_links,
-            command_kwargs={
-                "path": "./",
-                "exclude_paths": ["contrib/", "metadata/", "programs/server/data"],
-            },
-        )
-    )
+    testname = "Check Broken Symlinks"
+    if testpattern.lower() in testname.lower():
+        results.append(
+            Result.from_commands_run(
+                name=testname,
+                command=check_broken_links,
+                command_kwargs={
+                    "path": "./",
+                    "exclude_paths": ["contrib/", "metadata/", "programs/server/data"],
+                },
+            )
+        )
-    results.append(
-        Result.create_from_command_execution(
-            name="Check CPP code",
-            command=check_cpp_code,
-        )
-    )
+    testname = "Check CPP code"
+    if testpattern.lower() in testname.lower():
+        results.append(
+            Result.from_commands_run(
+                name=testname,
+                command=check_cpp_code,
+            )
+        )
-    results.append(
-        Result.create_from_command_execution(
-            name="Check Submodules",
-            command=check_repo_submodules,
-        )
-    )
+    testname = "Check Submodules"
+    if testpattern.lower() in testname.lower():
+        results.append(
+            Result.from_commands_run(
+                name=testname,
+                command=check_repo_submodules,
+            )
+        )
-    results.append(
-        Result.create_from_command_execution(
-            name="Check File Names",
-            command=check_file_names,
-            command_args=[all_files],
-        )
-    )
+    testname = "Check File Names"
+    if testpattern.lower() in testname.lower():
+        results.append(
+            Result.from_commands_run(
+                name=testname,
+                command=check_file_names,
+                command_args=[all_files],
+            )
+        )
-    results.append(
-        Result.create_from_command_execution(
-            name="Check Many Different Things",
-            command=check_other,
-        )
-    )
+    testname = "Check Many Different Things"
+    if testpattern.lower() in testname.lower():
+        results.append(
+            Result.from_commands_run(
+                name=testname,
+                command=check_other,
+            )
+        )
-    results.append(
-        Result.create_from_command_execution(
-            name="Check Codespell",
-            command=check_codespell,
-        )
-    )
+    testname = "Check Codespell"
+    if testpattern.lower() in testname.lower():
+        results.append(
+            Result.from_commands_run(
+                name=testname,
+                command=check_codespell,
+            )
+        )
-    results.append(
-        Result.create_from_command_execution(
-            name="Check Aspell",
-            command=check_aspell,
-        )
-    )
+    testname = "Check Aspell"
+    if testpattern.lower() in testname.lower():
+        results.append(
+            Result.from_commands_run(
+                name=testname,
+                command=check_aspell,
+            )
+        )

    Result.create_from(results=results, stopwatch=stop_watch).complete_job()
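
Note (illustrative sketch, not part of the diff): the new --test option filters checks by name. An empty pattern selects every check; otherwise selection is a case-insensitive substring match, mirroring the condition used throughout the hunk above:

def selected(testname: str, testpattern: str) -> bool:
    return testpattern.lower() in testname.lower()

assert selected("Check Codespell", "")              # default: run all checks
assert selected("Check Codespell", "codespell")
assert not selected("Whitespace Check", "codespell")
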
@@ -6,6 +6,7 @@ from praktika.utils import MetaClasses, Shell, Utils

from ci.jobs.scripts.clickhouse_proc import ClickHouseProc
from ci.jobs.scripts.functional_tests_results import FTResultsProcessor
+from ci.workflows.defs import ToolSet


def clone_submodules():
@@ -132,7 +133,7 @@ def main():
    if res and JobStages.CHECKOUT_SUBMODULES in stages:
        Shell.check(f"rm -rf {build_dir} && mkdir -p {build_dir}")
        results.append(
-            Result.create_from_command_execution(
+            Result.from_commands_run(
                name="Checkout Submodules",
                command=clone_submodules,
            )
@@ -141,10 +142,12 @@ def main():

    if res and JobStages.CMAKE in stages:
        results.append(
-            Result.create_from_command_execution(
+            Result.from_commands_run(
                name="Cmake configuration",
-                command=f"cmake {current_directory} -DCMAKE_CXX_COMPILER=clang++-18 -DCMAKE_C_COMPILER=clang-18 \
-                -DCMAKE_TOOLCHAIN_FILE={current_directory}/cmake/linux/toolchain-x86_64-musl.cmake -DENABLE_LIBRARIES=0 \
+                command=f"cmake {current_directory} -DCMAKE_CXX_COMPILER={ToolSet.COMPILER_CPP} \
+                -DCMAKE_C_COMPILER={ToolSet.COMPILER_C} \
+                -DCMAKE_TOOLCHAIN_FILE={current_directory}/cmake/linux/toolchain-x86_64-musl.cmake \
+                -DENABLE_LIBRARIES=0 \
                -DENABLE_TESTS=0 -DENABLE_UTILS=0 -DENABLE_THINLTO=0 -DENABLE_NURAFT=1 -DENABLE_SIMDJSON=1 \
                -DENABLE_JEMALLOC=1 -DENABLE_LIBURING=1 -DENABLE_YAML_CPP=1 -DCOMPILER_CACHE=sccache",
                workdir=build_dir,
@@ -156,7 +159,7 @@ def main():
    if res and JobStages.BUILD in stages:
        Shell.check("sccache --show-stats")
        results.append(
-            Result.create_from_command_execution(
+            Result.from_commands_run(
                name="Build ClickHouse",
                command="ninja clickhouse-bundle clickhouse-stripped",
                workdir=build_dir,
@@ -176,7 +179,7 @@ def main():
                "clickhouse-test --help",
            ]
        results.append(
-            Result.create_from_command_execution(
+            Result.from_commands_run(
                name="Check and Compress binary",
                command=commands,
                workdir=build_dir,
@@ -195,7 +198,7 @@ def main():
                update_path_ch_config,
            ]
        results.append(
-            Result.create_from_command_execution(
+            Result.from_commands_run(
                name="Install ClickHouse Config",
                command=commands,
                with_log=True,
@@ -110,7 +110,7 @@ def main():
            f"clickhouse-server --version",
        ]
        results.append(
-            Result.create_from_command_execution(
+            Result.from_commands_run(
                name="Install ClickHouse", command=commands, with_log=True
            )
        )
@@ -131,6 +131,10 @@ def main():
        )
        res = res and CH.start()
        res = res and CH.wait_ready()
+        # TODO: Use --database-replicated optionally
+        res = res and Shell.check(
+            f"./ci/jobs/scripts/functional_tests/setup_ch_cluster.sh"
+        )
        if res:
            print("ch started")
        logs_to_attach += [
@@ -150,6 +154,10 @@ def main():
        stop_watch_ = Utils.Stopwatch()
        step_name = "Tests"
        print(step_name)

+        # TODO: fix tests dependent on this and remove:
+        os.environ["CLICKHOUSE_TMP"] = "tests/queries/1_stateful"
+
        # assert Shell.check("clickhouse-client -q \"insert into system.zookeeper (name, path, value) values ('auxiliary_zookeeper2', '/test/chroot/', '')\"", verbose=True)
        run_test(
            no_parallel=no_parallel,
@@ -1,5 +1,4 @@
import argparse
import os
import time
from pathlib import Path

@@ -101,6 +100,7 @@ def main():
            f"ln -sf {ch_path}/clickhouse {ch_path}/clickhouse-client",
            f"ln -sf {ch_path}/clickhouse {ch_path}/clickhouse-compressor",
            f"ln -sf {ch_path}/clickhouse {ch_path}/clickhouse-local",
+            f"ln -sf {ch_path}/clickhouse {ch_path}/clickhouse-disks",
            f"rm -rf {Settings.TEMP_DIR}/etc/ && mkdir -p {Settings.TEMP_DIR}/etc/clickhouse-client {Settings.TEMP_DIR}/etc/clickhouse-server",
            f"cp programs/server/config.xml programs/server/users.xml {Settings.TEMP_DIR}/etc/clickhouse-server/",
            # TODO: find a way to work with Azure secret so it's ok for local tests as well, for now keep azure disabled
@@ -114,9 +114,10 @@ def main():
            f"for file in /tmp/praktika/etc/clickhouse-server/*.xml; do [ -f $file ] && echo Change config $file && sed -i 's|>/var/log|>{Settings.TEMP_DIR}/var/log|g; s|>/etc/|>{Settings.TEMP_DIR}/etc/|g' $(readlink -f $file); done",
            f"for file in /tmp/praktika/etc/clickhouse-server/config.d/*.xml; do [ -f $file ] && echo Change config $file && sed -i 's|<path>local_disk|<path>{Settings.TEMP_DIR}/local_disk|g' $(readlink -f $file); done",
            f"clickhouse-server --version",
+            f"chmod +x /tmp/praktika/input/clickhouse-odbc-bridge",
        ]
        results.append(
-            Result.create_from_command_execution(
+            Result.from_commands_run(
                name="Install ClickHouse", command=commands, with_log=True
            )
        )
@@ -138,6 +139,7 @@ def main():
        res = res and Shell.check(
            "aws s3 ls s3://test --endpoint-url http://localhost:11111/", verbose=True
        )
+        res = res and CH.log_cluster_config()
        res = res and CH.start()
        res = res and CH.wait_ready()
        if res:
@@ -170,6 +172,7 @@ def main():
            batch_total=total_batches,
            test=args.test,
        )
+        CH.log_cluster_stop_replication()
        results.append(FTResultsProcessor(wd=Settings.OUTPUT_DIR).run())
        results[-1].set_timing(stopwatch=stop_watch_)
        res = results[-1].is_ok()
@@ -15,7 +15,7 @@
LC_ALL="en_US.UTF-8"
ROOT_PATH="."
EXCLUDE='build/|integration/|widechar_width/|glibc-compatibility/|poco/|memcpy/|consistent-hashing|benchmark|tests/.*.cpp|utils/keeper-bench/example.yaml'
-EXCLUDE_DOCS='Settings\.cpp|FormatFactorySettingsDeclaration\.h'
+EXCLUDE_DOCS='Settings\.cpp|FormatFactorySettings\.h'

# From [1]:
# But since array_to_string_internal() in array.c still loops over array
@@ -85,6 +85,8 @@ EXTERN_TYPES_EXCLUDES=(
    CurrentMetrics::add
    CurrentMetrics::sub
    CurrentMetrics::get
+    CurrentMetrics::getDocumentation
+    CurrentMetrics::getName
    CurrentMetrics::set
    CurrentMetrics::end
    CurrentMetrics::Increment
@@ -66,6 +66,24 @@ class ClickHouseProc:
        print(f"Started setup_minio.sh asynchronously with PID {process.pid}")
        return True

+    def log_cluster_config(self):
+        return Shell.check(
+            f"./ci/jobs/scripts/functional_tests/setup_log_cluster.sh --config-logs-export-cluster /tmp/praktika/etc/clickhouse-server/config.d/system_logs_export.yaml",
+            verbose=True,
+        )
+
+    def log_cluster_setup_replication(self):
+        return Shell.check(
+            f"./ci/jobs/scripts/functional_tests/setup_log_cluster.sh --setup-logs-replication",
+            verbose=True,
+        )
+
+    def log_cluster_stop_replication(self):
+        return Shell.check(
+            f"./ci/jobs/scripts/functional_tests/setup_log_cluster.sh --stop-log-replication",
+            verbose=True,
+        )
+
    def start(self):
        print("Starting ClickHouse server")
        Shell.check(f"rm {self.pid_file}")
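
Note (hypothetical driver code, not part of the diff; constructor arguments and surrounding setup omitted): the intended call order for the new helpers is to configure the export cluster before the server starts, set up replication once it is ready, and detach the watchers when the run finishes:

proc = ClickHouseProc()                      # assumed construction
ok = proc.log_cluster_config()               # writes system_logs_export.yaml
ok = ok and proc.start() and proc.wait_ready()
ok = ok and proc.log_cluster_setup_replication()
try:
    pass                                     # ... run tests ...
finally:
    proc.log_cluster_stop_replication()      # drops *_sender/*_watcher tables
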
38  ci/jobs/scripts/clickhouse_version.py  Normal file
@@ -0,0 +1,38 @@
from pathlib import Path

from praktika.utils import Shell


class CHVersion:
    FILE_WITH_VERSION_PATH = "./cmake/autogenerated_versions.txt"

    @classmethod
    def _get_tweak(cls):
        tag = Shell.get_output("git describe --tags --abbrev=0")
        assert tag.startswith("v24")
        num = Shell.get_output(f"git rev-list --count {tag}..HEAD")
        return int(num)

    @classmethod
    def get_version(cls):
        versions = {}
        for line in (
            Path(cls.FILE_WITH_VERSION_PATH).read_text(encoding="utf-8").splitlines()
        ):
            line = line.strip()
            if not line.startswith("SET("):
                continue

            name, value = line[4:-1].split(maxsplit=1)
            name = name.removeprefix("VERSION_").lower()
            try:
                value = int(value)
            except ValueError:
                pass
            versions[name] = value

        version_sha = versions["githash"]
        tweak = int(
            Shell.get_output(f"git rev-list --count {version_sha}..HEAD", verbose=True)
        )
        return f"{versions['major']}.{versions['minor']}.{versions['patch']}.{tweak}"
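
Note (worked example with assumed file contents, not part of the diff): if cmake/autogenerated_versions.txt contains SET(VERSION_MAJOR 24), SET(VERSION_MINOR 11), SET(VERSION_PATCH 1) and SET(VERSION_GITHASH <sha>), and 123 commits exist between that hash and HEAD, get_version() yields "24.11.1.123" — the tweak component is the commit distance from the recorded hash. The parsing of a single line works like this:

line = "SET(VERSION_MAJOR 24)"
name, value = line[4:-1].split(maxsplit=1)   # -> ("VERSION_MAJOR", "24")
assert name.removeprefix("VERSION_").lower() == "major"
assert int(value) == 24
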
118  ci/jobs/scripts/functional_tests/setup_ch_cluster.sh  Executable file
@@ -0,0 +1,118 @@
#!/bin/bash

set -e -x

clickhouse-client --query "SHOW DATABASES"
clickhouse-client --query "CREATE DATABASE datasets"
clickhouse-client < ./tests/docker_scripts/create.sql
clickhouse-client --query "SHOW TABLES FROM datasets"

USE_DATABASE_REPLICATED=0

while [[ "$#" -gt 0 ]]; do
    case $1 in
        --database-replicated)
            echo "Setup cluster for testing with Database Replicated"
            USE_DATABASE_REPLICATED=1
            ;;
        *)
            echo "Unknown option: $1"
            exit 1
            ;;
    esac
    shift
done

if [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]; then
    clickhouse-client --query "CREATE DATABASE test ON CLUSTER 'test_cluster_database_replicated'
        ENGINE=Replicated('/test/clickhouse/db/test', '{shard}', '{replica}')"

    clickhouse-client --query "CREATE TABLE test.hits AS datasets.hits_v1"
    clickhouse-client --query "CREATE TABLE test.visits AS datasets.visits_v1"

    clickhouse-client --max_memory_usage 10G --query "INSERT INTO test.hits SELECT * FROM datasets.hits_v1"
    clickhouse-client --max_memory_usage 10G --query "INSERT INTO test.visits SELECT * FROM datasets.visits_v1"

    clickhouse-client --query "DROP TABLE datasets.hits_v1"
    clickhouse-client --query "DROP TABLE datasets.visits_v1"
else
    clickhouse-client --query "CREATE DATABASE test"
    clickhouse-client --query "SHOW TABLES FROM test"
    if [[ -n "$USE_S3_STORAGE_FOR_MERGE_TREE" ]] && [[ "$USE_S3_STORAGE_FOR_MERGE_TREE" -eq 1 ]]; then
        clickhouse-client --query "CREATE TABLE test.hits (WatchID UInt64, JavaEnable UInt8, Title String, GoodEvent Int16,
            EventTime DateTime, EventDate Date, CounterID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RegionID UInt32,
            UserID UInt64, CounterClass Int8, OS UInt8, UserAgent UInt8, URL String, Referer String, URLDomain String,
            RefererDomain String, Refresh UInt8, IsRobot UInt8, RefererCategories Array(UInt16), URLCategories Array(UInt16),
            URLRegions Array(UInt32), RefererRegions Array(UInt32), ResolutionWidth UInt16, ResolutionHeight UInt16, ResolutionDepth UInt8,
            FlashMajor UInt8, FlashMinor UInt8, FlashMinor2 String, NetMajor UInt8, NetMinor UInt8, UserAgentMajor UInt16,
            UserAgentMinor FixedString(2), CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, MobilePhone UInt8,
            MobilePhoneModel String, Params String, IPNetworkID UInt32, TraficSourceID Int8, SearchEngineID UInt16,
            SearchPhrase String, AdvEngineID UInt8, IsArtifical UInt8, WindowClientWidth UInt16, WindowClientHeight UInt16,
            ClientTimeZone Int16, ClientEventTime DateTime, SilverlightVersion1 UInt8, SilverlightVersion2 UInt8, SilverlightVersion3 UInt32,
            SilverlightVersion4 UInt16, PageCharset String, CodeVersion UInt32, IsLink UInt8, IsDownload UInt8, IsNotBounce UInt8,
            FUniqID UInt64, HID UInt32, IsOldCounter UInt8, IsEvent UInt8, IsParameter UInt8, DontCountHits UInt8, WithHash UInt8,
            HitColor FixedString(1), UTCEventTime DateTime, Age UInt8, Sex UInt8, Income UInt8, Interests UInt16, Robotness UInt8,
            GeneralInterests Array(UInt16), RemoteIP UInt32, RemoteIP6 FixedString(16), WindowName Int32, OpenerName Int32,
            HistoryLength Int16, BrowserLanguage FixedString(2), BrowserCountry FixedString(2), SocialNetwork String, SocialAction String,
            HTTPError UInt16, SendTiming Int32, DNSTiming Int32, ConnectTiming Int32, ResponseStartTiming Int32, ResponseEndTiming Int32,
            FetchTiming Int32, RedirectTiming Int32, DOMInteractiveTiming Int32, DOMContentLoadedTiming Int32, DOMCompleteTiming Int32,
            LoadEventStartTiming Int32, LoadEventEndTiming Int32, NSToDOMContentLoadedTiming Int32, FirstPaintTiming Int32,
            RedirectCount Int8, SocialSourceNetworkID UInt8, SocialSourcePage String, ParamPrice Int64, ParamOrderID String,
            ParamCurrency FixedString(3), ParamCurrencyID UInt16, GoalsReached Array(UInt32), OpenstatServiceName String,
            OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String,
            UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, RefererHash UInt64,
            URLHash UInt64, CLID UInt32, YCLID UInt64, ShareService String, ShareURL String, ShareTitle String,
            ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64),
            IslandID FixedString(16), RequestNum UInt32, RequestTry UInt8) ENGINE = MergeTree() PARTITION BY toYYYYMM(EventDate)
            ORDER BY (CounterID, EventDate, intHash32(UserID)) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192, storage_policy='s3_cache'"
        clickhouse-client --query "CREATE TABLE test.visits (CounterID UInt32, StartDate Date, Sign Int8, IsNew UInt8,
            VisitID UInt64, UserID UInt64, StartTime DateTime, Duration UInt32, UTCStartTime DateTime, PageViews Int32,
            Hits Int32, IsBounce UInt8, Referer String, StartURL String, RefererDomain String, StartURLDomain String,
            EndURL String, LinkURL String, IsDownload UInt8, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String,
            AdvEngineID UInt8, PlaceID Int32, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32),
            RefererRegions Array(UInt32), IsYandex UInt8, GoalReachesDepth Int32, GoalReachesURL Int32, GoalReachesAny Int32,
            SocialSourceNetworkID UInt8, SocialSourcePage String, MobilePhoneModel String, ClientEventTime DateTime, RegionID UInt32,
            ClientIP UInt32, ClientIP6 FixedString(16), RemoteIP UInt32, RemoteIP6 FixedString(16), IPNetworkID UInt32,
            SilverlightVersion3 UInt32, CodeVersion UInt32, ResolutionWidth UInt16, ResolutionHeight UInt16, UserAgentMajor UInt16,
            UserAgentMinor UInt16, WindowClientWidth UInt16, WindowClientHeight UInt16, SilverlightVersion2 UInt8, SilverlightVersion4 UInt16,
            FlashVersion3 UInt16, FlashVersion4 UInt16, ClientTimeZone Int16, OS UInt8, UserAgent UInt8, ResolutionDepth UInt8,
            FlashMajor UInt8, FlashMinor UInt8, NetMajor UInt8, NetMinor UInt8, MobilePhone UInt8, SilverlightVersion1 UInt8,
            Age UInt8, Sex UInt8, Income UInt8, JavaEnable UInt8, CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8,
            BrowserLanguage UInt16, BrowserCountry UInt16, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16),
            Params Array(String), Goals Nested(ID UInt32, Serial UInt32, EventTime DateTime, Price Int64, OrderID String, CurrencyID UInt32),
            WatchIDs Array(UInt64), ParamSumPrice Int64, ParamCurrency FixedString(3), ParamCurrencyID UInt16, ClickLogID UInt64,
            ClickEventID Int32, ClickGoodEvent Int32, ClickEventTime DateTime, ClickPriorityID Int32, ClickPhraseID Int32, ClickPageID Int32,
            ClickPlaceID Int32, ClickTypeID Int32, ClickResourceID Int32, ClickCost UInt32, ClickClientIP UInt32, ClickDomainID UInt32,
            ClickURL String, ClickAttempt UInt8, ClickOrderID UInt32, ClickBannerID UInt32, ClickMarketCategoryID UInt32, ClickMarketPP UInt32,
            ClickMarketCategoryName String, ClickMarketPPName String, ClickAWAPSCampaignName String, ClickPageName String, ClickTargetType UInt16,
            ClickTargetPhraseID UInt64, ClickContextType UInt8, ClickSelectType Int8, ClickOptions String, ClickGroupBannerID Int32,
            OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String,
            UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, FirstVisit DateTime,
            PredLastVisit Date, LastVisit Date, TotalVisits UInt32, TraficSource Nested(ID Int8, SearchEngineID UInt16, AdvEngineID UInt8,
            PlaceID UInt16, SocialSourceNetworkID UInt8, Domain String, SearchPhrase String, SocialSourcePage String), Attendance FixedString(16),
            CLID UInt32, YCLID UInt64, NormalizedRefererHash UInt64, SearchPhraseHash UInt64, RefererDomainHash UInt64, NormalizedStartURLHash UInt64,
            StartURLDomainHash UInt64, NormalizedEndURLHash UInt64, TopLevelDomain UInt64, URLScheme UInt64, OpenstatServiceNameHash UInt64,
            OpenstatCampaignIDHash UInt64, OpenstatAdIDHash UInt64, OpenstatSourceIDHash UInt64, UTMSourceHash UInt64, UTMMediumHash UInt64,
            UTMCampaignHash UInt64, UTMContentHash UInt64, UTMTermHash UInt64, FromHash UInt64, WebVisorEnabled UInt8, WebVisorActivity UInt32,
            ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64),
            Market Nested(Type UInt8, GoalID UInt32, OrderID String, OrderPrice Int64, PP UInt32, DirectPlaceID UInt32, DirectOrderID UInt32,
            DirectBannerID UInt32, GoodID String, GoodName String, GoodQuantity Int32, GoodPrice Int64), IslandID FixedString(16))
            ENGINE = CollapsingMergeTree(Sign) PARTITION BY toYYYYMM(StartDate) ORDER BY (CounterID, StartDate, intHash32(UserID), VisitID)
            SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192, storage_policy='s3_cache'"

        clickhouse-client --max_memory_usage 10G --query "INSERT INTO test.hits SELECT * FROM datasets.hits_v1 SETTINGS enable_filesystem_cache_on_write_operations=0, max_insert_threads=16"
        clickhouse-client --max_memory_usage 10G --query "INSERT INTO test.visits SELECT * FROM datasets.visits_v1 SETTINGS enable_filesystem_cache_on_write_operations=0, max_insert_threads=16"
        clickhouse-client --query "DROP TABLE datasets.visits_v1 SYNC"
        clickhouse-client --query "DROP TABLE datasets.hits_v1 SYNC"
    else
        clickhouse-client --query "RENAME TABLE datasets.hits_v1 TO test.hits"
        clickhouse-client --query "RENAME TABLE datasets.visits_v1 TO test.visits"
    fi
    clickhouse-client --query "CREATE TABLE test.hits_s3 (WatchID UInt64, JavaEnable UInt8, Title String, GoodEvent Int16, EventTime DateTime, EventDate Date, CounterID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RegionID UInt32, UserID UInt64, CounterClass Int8, OS UInt8, UserAgent UInt8, URL String, Referer String, URLDomain String, RefererDomain String, Refresh UInt8, IsRobot UInt8, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), ResolutionWidth UInt16, ResolutionHeight UInt16, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, FlashMinor2 String, NetMajor UInt8, NetMinor UInt8, UserAgentMajor UInt16, UserAgentMinor FixedString(2), CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, MobilePhone UInt8, MobilePhoneModel String, Params String, IPNetworkID UInt32, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, IsArtifical UInt8, WindowClientWidth UInt16, WindowClientHeight UInt16, ClientTimeZone Int16, ClientEventTime DateTime, SilverlightVersion1 UInt8, SilverlightVersion2 UInt8, SilverlightVersion3 UInt32, SilverlightVersion4 UInt16, PageCharset String, CodeVersion UInt32, IsLink UInt8, IsDownload UInt8, IsNotBounce UInt8, FUniqID UInt64, HID UInt32, IsOldCounter UInt8, IsEvent UInt8, IsParameter UInt8, DontCountHits UInt8, WithHash UInt8, HitColor FixedString(1), UTCEventTime DateTime, Age UInt8, Sex UInt8, Income UInt8, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), RemoteIP UInt32, RemoteIP6 FixedString(16), WindowName Int32, OpenerName Int32, HistoryLength Int16, BrowserLanguage FixedString(2), BrowserCountry FixedString(2), SocialNetwork String, SocialAction String, HTTPError UInt16, SendTiming Int32, DNSTiming Int32, ConnectTiming Int32, ResponseStartTiming Int32, ResponseEndTiming Int32, FetchTiming Int32, RedirectTiming Int32, DOMInteractiveTiming Int32, DOMContentLoadedTiming Int32, DOMCompleteTiming Int32, LoadEventStartTiming Int32, LoadEventEndTiming Int32, NSToDOMContentLoadedTiming Int32, FirstPaintTiming Int32, RedirectCount Int8, SocialSourceNetworkID UInt8, SocialSourcePage String, ParamPrice Int64, ParamOrderID String, ParamCurrency FixedString(3), ParamCurrencyID UInt16, GoalsReached Array(UInt32), OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, RefererHash UInt64, URLHash UInt64, CLID UInt32, YCLID UInt64, ShareService String, ShareURL String, ShareTitle String, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), IslandID FixedString(16), RequestNum UInt32, RequestTry UInt8) ENGINE = MergeTree() PARTITION BY toYYYYMM(EventDate) ORDER BY (CounterID, EventDate, intHash32(UserID)) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192, storage_policy='s3_cache'"
    # AWS S3 is very inefficient, so increase memory even further:
    clickhouse-client --max_memory_usage 30G --max_memory_usage_for_user 30G --query "INSERT INTO test.hits_s3 SELECT * FROM test.hits SETTINGS enable_filesystem_cache_on_write_operations=0, max_insert_threads=16"
fi

clickhouse-client --query "SHOW TABLES FROM test"
clickhouse-client --query "SELECT count() FROM test.hits"
clickhouse-client --query "SELECT count() FROM test.visits"
261  ci/jobs/scripts/functional_tests/setup_log_cluster.sh  Executable file
@@ -0,0 +1,261 @@
#!/bin/bash

set -e
# This script sets up export of system log tables to a remote server.
# Remote tables are created if they do not exist, and augmented with extra columns,
# and their names will contain a hash of the table structure,
# which allows exporting tables from servers of different versions.

# The config file contains KEY=VALUE pairs with any necessary parameters like:
# CLICKHOUSE_CI_LOGS_HOST - remote host
# CLICKHOUSE_CI_LOGS_USER - user name
# CLICKHOUSE_CI_LOGS_PASSWORD - password for the user
CLICKHOUSE_CI_LOGS_CREDENTIALS=${CLICKHOUSE_CI_LOGS_CREDENTIALS:-/tmp/export-logs-config.sh}
CLICKHOUSE_CI_LOGS_USER=${CLICKHOUSE_CI_LOGS_USER:-ci}

# Pre-configured destination cluster, where to export the data
CLICKHOUSE_CI_LOGS_CLUSTER=${CLICKHOUSE_CI_LOGS_CLUSTER:-system_logs_export}

EXTRA_COLUMNS=${EXTRA_COLUMNS:-"pull_request_number UInt32, commit_sha String, check_start_time DateTime('UTC'), check_name LowCardinality(String), instance_type LowCardinality(String), instance_id String, INDEX ix_pr (pull_request_number) TYPE set(100), INDEX ix_commit (commit_sha) TYPE set(100), INDEX ix_check_time (check_start_time) TYPE minmax, "}
EXTRA_COLUMNS_EXPRESSION=${EXTRA_COLUMNS_EXPRESSION:-"CAST(0 AS UInt32) AS pull_request_number, '' AS commit_sha, now() AS check_start_time, toLowCardinality('') AS check_name, toLowCardinality('') AS instance_type, '' AS instance_id"}
EXTRA_ORDER_BY_COLUMNS=${EXTRA_ORDER_BY_COLUMNS:-"check_name"}

# trace_log needs more columns for symbolization
EXTRA_COLUMNS_TRACE_LOG="${EXTRA_COLUMNS} symbols Array(LowCardinality(String)), lines Array(LowCardinality(String)), "
EXTRA_COLUMNS_EXPRESSION_TRACE_LOG="${EXTRA_COLUMNS_EXPRESSION}, arrayMap(x -> demangle(addressToSymbol(x)), trace)::Array(LowCardinality(String)) AS symbols, arrayMap(x -> addressToLine(x), trace)::Array(LowCardinality(String)) AS lines"

# coverage_log needs more columns for symbolization, but only symbol names (the line numbers are too heavy to calculate)
EXTRA_COLUMNS_COVERAGE_LOG="${EXTRA_COLUMNS} symbols Array(LowCardinality(String)), "
EXTRA_COLUMNS_EXPRESSION_COVERAGE_LOG="${EXTRA_COLUMNS_EXPRESSION}, arrayDistinct(arrayMap(x -> demangle(addressToSymbol(x)), coverage))::Array(LowCardinality(String)) AS symbols"


function __set_connection_args
{
    # It's impossible to use a generic $CONNECTION_ARGS string: it's unsafe from a word-splitting perspective.
    # That's why we have to stick to the generated option array.
    CONNECTION_ARGS=(
        --receive_timeout=45 --send_timeout=45 --secure
        --user "${CLICKHOUSE_CI_LOGS_USER}" --host "${CLICKHOUSE_CI_LOGS_HOST}"
        --password "${CLICKHOUSE_CI_LOGS_PASSWORD}"
    )
}

function __shadow_credentials
{
    # The function completely screws the output; it shouldn't be used in normal functions, only in ()
    # The only way to substitute the env as plain text is using perl 's/\Qsomething\E/another/'
    exec &> >(perl -pe '
        s(\Q$ENV{CLICKHOUSE_CI_LOGS_HOST}\E)[CLICKHOUSE_CI_LOGS_HOST]g;
        s(\Q$ENV{CLICKHOUSE_CI_LOGS_USER}\E)[CLICKHOUSE_CI_LOGS_USER]g;
        s(\Q$ENV{CLICKHOUSE_CI_LOGS_PASSWORD}\E)[CLICKHOUSE_CI_LOGS_PASSWORD]g;
    ')
}

function check_logs_credentials
(
    # The function connects with the given credentials; if it is unable to execute even the simplest query, it returns an exit code.

    # First check if all necessary parameters are set
    set +x
    for parameter in CLICKHOUSE_CI_LOGS_HOST CLICKHOUSE_CI_LOGS_USER CLICKHOUSE_CI_LOGS_PASSWORD; do
        export -p | grep -q "$parameter" || {
            echo "Credentials parameter $parameter is unset"
            return 1
        }
    done

    __shadow_credentials
    __set_connection_args
    local code
    # Catch both success and error to not fail on `set -e`
    clickhouse-client "${CONNECTION_ARGS[@]}" -q 'SELECT 1 FORMAT Null' && return 0 || code=$?
    if [ "$code" != 0 ]; then
        echo 'Failed to connect to CI Logs cluster'
        return $code
    fi
)

function config_logs_export_cluster
(
    # The function is launched in a separate shell instance to not expose the
    # exported values from CLICKHOUSE_CI_LOGS_CREDENTIALS
    set +x
    if ! [ -r "${CLICKHOUSE_CI_LOGS_CREDENTIALS}" ]; then
        echo "File $CLICKHOUSE_CI_LOGS_CREDENTIALS does not exist, do not setup"
        return
    fi
    set -a
    # shellcheck disable=SC1090
    source "${CLICKHOUSE_CI_LOGS_CREDENTIALS}"
    set +a
    __shadow_credentials
    echo "Checking if the credentials work"
    check_logs_credentials || return 0
    cluster_config="${1:-/etc/clickhouse-server/config.d/system_logs_export.yaml}"
    mkdir -p "$(dirname "$cluster_config")"
    echo "remote_servers:
    ${CLICKHOUSE_CI_LOGS_CLUSTER}:
        shard:
            replica:
                secure: 1
                user: '${CLICKHOUSE_CI_LOGS_USER}'
                host: '${CLICKHOUSE_CI_LOGS_HOST}'
                port: 9440
                password: '${CLICKHOUSE_CI_LOGS_PASSWORD}'
" > "$cluster_config"
    echo "Cluster ${CLICKHOUSE_CI_LOGS_CLUSTER} is configured in ${cluster_config}"
)

function setup_logs_replication
(
    # The function is launched in a separate shell instance to not expose the
    # exported values from CLICKHOUSE_CI_LOGS_CREDENTIALS
    set +x
    # disable output
    if ! [ -r "${CLICKHOUSE_CI_LOGS_CREDENTIALS}" ]; then
        echo "File $CLICKHOUSE_CI_LOGS_CREDENTIALS does not exist, do not setup"
        return 0
    fi
    set -a
    # shellcheck disable=SC1090
    source "${CLICKHOUSE_CI_LOGS_CREDENTIALS}"
    set +a
    __shadow_credentials
    echo "Checking if the credentials work"
    check_logs_credentials || return 0
    __set_connection_args

    echo "My hostname is ${HOSTNAME}"

    echo 'Create all configured system logs'
    clickhouse-client --query "SYSTEM FLUSH LOGS"

    debug_or_sanitizer_build=$(clickhouse-client -q "WITH ((SELECT value FROM system.build_options WHERE name='BUILD_TYPE') AS build, (SELECT value FROM system.build_options WHERE name='CXX_FLAGS') as flags) SELECT build='Debug' OR flags LIKE '%fsanitize%'")
    echo "Build is debug or sanitizer: $debug_or_sanitizer_build"

    # We will pre-create a table system.coverage_log.
    # It is normally created by clickhouse-test rather than the server,
    # so we will create it in advance to make it be picked up by the next commands:

    clickhouse-client --query "
        CREATE TABLE IF NOT EXISTS system.coverage_log
        (
            time DateTime COMMENT 'The time of test run',
            test_name String COMMENT 'The name of the test',
            coverage Array(UInt64) COMMENT 'An array of addresses of the code (a subset of addresses instrumented for coverage) that were encountered during the test run'
        ) ENGINE = MergeTree ORDER BY test_name COMMENT 'Contains information about per-test coverage from the CI, but used only for exporting to the CI cluster'
    "

    # For each system log table:
    echo 'Create %_log tables'
    clickhouse-client --query "SHOW TABLES FROM system LIKE '%\\_log'" | while read -r table
    do
        if [[ "$table" = "trace_log" ]]
        then
            EXTRA_COLUMNS_FOR_TABLE="${EXTRA_COLUMNS_TRACE_LOG}"
            # Do not try to resolve stack traces in case of debug/sanitizers
            # build, since it is too slow (flushing of trace_log can take ~1min
            # with such MV attached)
            if [[ "$debug_or_sanitizer_build" = 1 ]]
            then
                EXTRA_COLUMNS_EXPRESSION_FOR_TABLE="${EXTRA_COLUMNS_EXPRESSION}"
            else
                EXTRA_COLUMNS_EXPRESSION_FOR_TABLE="${EXTRA_COLUMNS_EXPRESSION_TRACE_LOG}"
            fi
        elif [[ "$table" = "coverage_log" ]]
        then
            EXTRA_COLUMNS_FOR_TABLE="${EXTRA_COLUMNS_COVERAGE_LOG}"
            EXTRA_COLUMNS_EXPRESSION_FOR_TABLE="${EXTRA_COLUMNS_EXPRESSION_COVERAGE_LOG}"
        else
            EXTRA_COLUMNS_FOR_TABLE="${EXTRA_COLUMNS}"
            EXTRA_COLUMNS_EXPRESSION_FOR_TABLE="${EXTRA_COLUMNS_EXPRESSION}"
        fi

        # Calculate the hash of the table structure. Note: 9 is the version of the extra columns - increment it if the extra columns are changed:
        hash=$(clickhouse-client --query "
            SELECT sipHash64(9, groupArray((name, type)))
            FROM (SELECT name, type FROM system.columns
                WHERE database = 'system' AND table = '$table'
                ORDER BY position)
            ")

        # Create the destination table with adapted name and structure:
        statement=$(clickhouse-client --format TSVRaw --query "SHOW CREATE TABLE system.${table}" | sed -r -e '
            s/^\($/('"$EXTRA_COLUMNS_FOR_TABLE"'/;
            s/^ORDER BY (([^\(].+?)|\((.+?)\))$/ORDER BY ('"$EXTRA_ORDER_BY_COLUMNS"', \2\3)/;
            s/^CREATE TABLE system\.\w+_log$/CREATE TABLE IF NOT EXISTS '"$table"'_'"$hash"'/;
            /^TTL /d
            ')

        echo -e "Creating remote destination table ${table}_${hash} with statement:" >&2

        echo "::group::${table}"
        # this is the only way a big "$statement" can be printed without causing an EAGAIN error
        # cat: write error: Resource temporarily unavailable
        statement_print="${statement}"
        if [ "${#statement_print}" -gt 4000 ]; then
            statement_print="${statement::1999}\n…\n${statement:${#statement}-1999}"
        fi
        echo -e "$statement_print"
        echo "::endgroup::"

        echo "$statement" | clickhouse-client --database_replicated_initial_query_timeout_sec=10 \
            --distributed_ddl_task_timeout=30 --distributed_ddl_output_mode=throw_only_active \
            "${CONNECTION_ARGS[@]}" || continue

        echo "Creating table system.${table}_sender" >&2

        # Create the Distributed table and the materialized view that watches the original table:
        clickhouse-client --query "
            CREATE TABLE system.${table}_sender
            ENGINE = Distributed(${CLICKHOUSE_CI_LOGS_CLUSTER}, default, ${table}_${hash})
            SETTINGS flush_on_detach=0
            EMPTY AS
            SELECT ${EXTRA_COLUMNS_EXPRESSION_FOR_TABLE}, *
            FROM system.${table}
        " || continue

        echo "Creating materialized view system.${table}_watcher" >&2

        clickhouse-client --query "
            CREATE MATERIALIZED VIEW system.${table}_watcher TO system.${table}_sender AS
            SELECT ${EXTRA_COLUMNS_EXPRESSION_FOR_TABLE}, *
            FROM system.${table}
        " || continue
    done
)

function stop_logs_replication
{
    echo "Detach all logs replication"
    clickhouse-client --query "select database||'.'||table from system.tables where database = 'system' and (table like '%_sender' or table like '%_watcher')" | {
        tee /dev/stderr
    } | {
        timeout --preserve-status --signal TERM --kill-after 5m 15m xargs -n1 -r -i clickhouse-client --query "drop table {}"
    }
}


while [[ "$#" -gt 0 ]]; do
    case $1 in
        --stop-log-replication)
            echo "Stopping log replication..."
            stop_logs_replication
            ;;
        --setup-logs-replication)
            echo "Setting up log replication..."
            setup_logs_replication
            ;;
        --config-logs-export-cluster)
            echo "Configuring logs export for the cluster..."
            config_logs_export_cluster "$2"
            shift
            ;;
        *)
            echo "Unknown option: $1"
            echo "Usage: $0 [--stop-log-replication | --setup-logs-replication | --config-logs-export-cluster <path>]"
            exit 1
            ;;
    esac
    shift
done
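
Note (illustrative sketch, not part of the diff; assumes clickhouse-client on PATH): the structure hash above is what makes the export version-tolerant — two servers whose table has identical (name, type) columns map to the same <table>_<hash> destination. The same computation in Python:

import subprocess

def structure_hash(table: str) -> str:
    # Mirrors the sipHash64 query from setup_logs_replication above.
    query = f"""
        SELECT sipHash64(9, groupArray((name, type)))
        FROM (SELECT name, type FROM system.columns
              WHERE database = 'system' AND table = '{table}'
              ORDER BY position)
    """
    return subprocess.check_output(
        ["clickhouse-client", "--query", query], text=True
    ).strip()
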
1444  ci/jobs/scripts/performance_compare.sh  Executable file
File diff suppressed because it is too large
200  ci/jobs/scripts/performance_test.sh  Executable file
@@ -0,0 +1,200 @@
#!/bin/bash

set -e +x

CHPC_CHECK_START_TIMESTAMP="$(date +%s)"
export CHPC_CHECK_START_TIMESTAMP

S3_URL=${S3_URL:="https://clickhouse-builds.s3.amazonaws.com"}
BUILD_NAME=${BUILD_NAME:-package_release}
export S3_URL BUILD_NAME
SCRIPT_DIR="$(dirname "$(readlink -f "$0")")"

# Sometimes AWS responds with a DNS error, and it's impossible to retry that with
# the current curl version's options.
function curl_with_retry
{
    for _ in 1 2 3 4 5 6 7 8 9 10; do
        if curl --fail --head "$1"
        then
            return 0
        else
            sleep 1
        fi
    done
    return 1
}
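For comparison, a minimal Python sketch of the same retry policy (the helper name and the urllib-based probe are illustrative, not part of the script):

# Hypothetical Python equivalent of curl_with_retry: up to 10 HEAD probes, one second apart.
import time
import urllib.request

def head_with_retry(url, attempts=10):
    for _ in range(attempts):
        try:
            request = urllib.request.Request(url, method="HEAD")
            urllib.request.urlopen(request, timeout=10)
            return True  # a 2xx/3xx response means the package exists
        except Exception:
            time.sleep(1)  # DNS errors and HTTP failures alike: wait and retry
    return False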

# Use the packaged repository to find the revision we will compare to.
function find_reference_sha
{
    git -C right/ch log -1 origin/master
    git -C right/ch log -1 pr
    # Go back from the revision to be tested, trying to find the closest published
    # testing release. The PR branch may be either pull/*/head, which is the
    # author's branch, or pull/*/merge, which is the head merged with some master
    # automatically by GitHub. We will use the merge base with master as a reference
    # for testing (or some older commit). A caveat is that if we're testing
    # master, the merge base is the tested commit itself, so we have to step back
    # once.
    start_ref=$(git -C right/ch merge-base origin/master pr)
    if [ "$PR_TO_TEST" == "0" ]
    then
        start_ref=$start_ref~
    fi

    # Loop back to find a commit that actually has a published perf test package.
    while :
    do
        # FIXME the original idea was to compare to the closest testing tag, which
        # is a version that is verified to work correctly. However, we're having
        # some test stability issues now, and the testing release hasn't been able
        # to roll out for more than a week because of that. Temporarily switch to
        # using just the closest master, so that we can go on.
        #ref_tag=$(git -C ch describe --match='v*-testing' --abbrev=0 --first-parent "$start_ref")
        ref_tag="$start_ref"

        echo Reference tag is "$ref_tag"
        # We use annotated tags, which have their own SHAs, so we have to further
        # dereference the tag to get the commit it points to, hence the '~0' thing.
        REF_SHA=$(git -C right/ch rev-parse "$ref_tag~0")

        # FIXME sometimes we have testing tags on commits without published builds.
        # Normally these are documentation commits. Loop to skip them.
        # Historically there were various paths for the performance test package;
        # test all of them.
        unset found
        declare -a urls_to_try=(
            "$S3_URL/PRs/0/$REF_SHA/$BUILD_NAME/performance.tar.zst"
            "$S3_URL/0/$REF_SHA/$BUILD_NAME/performance.tar.zst"
            "$S3_URL/0/$REF_SHA/$BUILD_NAME/performance.tgz"
        )
        for path in "${urls_to_try[@]}"
        do
            if curl_with_retry "$path"
            then
                found="$path"
                break
            fi
        done
        if [ -n "$found" ] ; then break; fi

        start_ref="$REF_SHA~"
    done

    REF_PR=0
}

#chown nobody workspace output
#chgrp nogroup workspace output
#chmod 777 workspace output

#[ ! -e "/artifacts/performance.tar.zst" ] && echo "ERROR: performance.tar.zst not found" && exit 1
#mkdir -p right
#tar -xf "/artifacts/performance.tar.zst" -C right --no-same-owner --strip-components=1 --zstd --extract --verbose

## Find reference revision if not specified explicitly
#if [ "$REF_SHA" == "" ]; then find_reference_sha; fi
#if [ "$REF_SHA" == "" ]; then echo Reference SHA is not specified ; exit 1 ; fi
#if [ "$REF_PR" == "" ]; then echo Reference PR is not specified ; exit 1 ; fi

# Show what we're testing
#(
#    git -C right/ch log -1 --decorate "$REF_SHA" ||:
#) | tee left-commit.txt
#
#(
#    git -C right/ch log -1 --decorate "$SHA_TO_TEST" ||:
#    echo
#    echo Real tested commit is:
#    git -C right/ch log -1 --decorate "pr"
#) | tee right-commit.txt

#if [ "$PR_TO_TEST" != "0" ]
#then
#    # If the PR only changes the tests and nothing else, prepare a list of these
#    # tests for use by compare.sh. Compare to the merge base, because master might be
#    # far in the future and have unrelated test changes.
#    base=$(git -C right/ch merge-base pr origin/master)
#    git -C right/ch diff --name-only "$base" pr -- . | tee all-changed-files.txt
#    git -C right/ch diff --name-only --diff-filter=d "$base" pr -- tests/performance/*.xml | tee changed-test-definitions.txt
#    git -C right/ch diff --name-only "$base" pr -- :!tests/performance/*.xml :!docker/test/performance-comparison | tee other-changed-files.txt
#fi

# prepare config for the right server
export PATH="/tmp/praktika/input:$PATH"
rm -rf /tmp/praktika/right/config && mkdir -p /tmp/praktika/right/config
cp -r ./tests/config /tmp/praktika/right/config
cp ./programs/server/config.xml /tmp/praktika/right/config/
cd /tmp/praktika/input
chmod +x clickhouse
ln -sf clickhouse clickhouse-local
ln -sf clickhouse clickhouse-client
#for file in /tmp/praktika/right/config/config.d/*.xml; do [ -f $file ] && echo Change config $file && sed -i 's|>/var/log|>/tmp/praktika/right/var/log|g; s|>/etc/|>/tmp/praktika/right/etc/|g' $(readlink -f $file); done
cd -


# prepare config for the left server
left_sha=$(sed -n 's/SET(VERSION_GITHASH \(.*\))/\1/p' cmake/autogenerated_versions.txt)
version_major=$(sed -n 's/SET(VERSION_MAJOR \(.*\))/\1/p' cmake/autogenerated_versions.txt)
version_minor=$(sed -n 's/SET(VERSION_MINOR \(.*\))/\1/p' cmake/autogenerated_versions.txt)
rm -rf /tmp/praktika/left/config && mkdir -p /tmp/praktika/left/config
#git checkout left_sha
#rm -rf /tmp/praktika/left && mkdir -p /tmp/praktika/left
#cp -r ./tests/config /tmp/praktika/left/config
#git checkout -
cd /tmp/praktika/left
[ ! -f clickhouse ] && wget -nv https://clickhouse-builds.s3.us-east-1.amazonaws.com/$version_major.$version_minor/020d843058ae211c43285852e5f4f0e0e9cc1eb6/package_aarch64/clickhouse
chmod +x clickhouse
ln -sf clickhouse clickhouse-local
ln -sf clickhouse clickhouse-client
ln -sf clickhouse clickhouse-server
cd -


# Set the Python output encoding so that we can print queries with non-ASCII letters.
export PYTHONIOENCODING=utf-8

script_path="tests/performance/scripts/"

## Even if we have some errors, try our best to save the logs.
#set +e

# Use clickhouse-client and clickhouse-local from the right server.


export REF_PR
export REF_SHA

# Try to collect some core dumps.
# At least we remove the ulimit and then try to pack some common file names into the output.
ulimit -c unlimited
cat /proc/sys/kernel/core_pattern

# Start the main comparison script.
{
    # time $SCRIPT_DIR/download.sh "$REF_PR" "$REF_SHA" "$PR_TO_TEST" "$SHA_TO_TEST" && \
    time stage=configure ./ci/jobs/scripts/performance_compare.sh ; \
} 2>&1 | ts "$(printf '%%Y-%%m-%%d %%H:%%M:%%S\t')" | tee -a compare.log

# Stop the servers to free memory. Normally they are restarted before getting
# the profile info, so they shouldn't use much, but if the comparison script
# fails in the middle, this might not be the case.
for _ in {1..30}
do
    killall clickhouse || break
    sleep 1
done

dmesg -T > dmesg.log

ls -lath

7z a '-x!*/tmp' /output/output.7z ./*.{log,tsv,html,txt,rep,svg,columns} \
    {right,left}/{performance,scripts} {{right,left}/db,db0}/preprocessed_configs \
    report analyze benchmark metrics \
    ./*.core.dmp ./*.core

# If the files aren't the same, copy them
cmp --silent compare.log /output/compare.log || \
    cp compare.log /output
@ -8,12 +8,12 @@ from praktika.yaml_generator import YamlGenerator


def create_parser():
    parser = argparse.ArgumentParser(prog="python3 -m praktika")
    parser = argparse.ArgumentParser(prog="praktika")

    subparsers = parser.add_subparsers(dest="command", help="Available subcommands")

    run_parser = subparsers.add_parser("run", help="Job Runner")
    run_parser.add_argument("--job", help="Job Name", type=str, required=True)
    run_parser.add_argument("job", help="Job Name", type=str)
    run_parser.add_argument(
        "--workflow",
        help="Workflow Name (required if job name is not unique per config)",
@ -75,7 +75,8 @@ def create_parser():
    return parser


if __name__ == "__main__":
def main():
    sys.path.append(".")
    parser = create_parser()
    args = parser.parse_args()

@ -120,3 +121,7 @@ if __name__ == "__main__":
    else:
        parser.print_help()
        sys.exit(1)


if __name__ == "__main__":
    main()
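A minimal, self-contained reproduction (assuming only the argparse setup shown in this hunk) of why the job name can now be passed positionally:

import argparse

parser = argparse.ArgumentParser(prog="praktika")
subparsers = parser.add_subparsers(dest="command")
run_parser = subparsers.add_parser("run")
run_parser.add_argument("job", type=str)           # positional, replaces --job
run_parser.add_argument("--workflow", default="")

args = parser.parse_args(["run", "Fast test", "--workflow", "PR"])
assert args.command == "run" and args.job == "Fast test"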
@ -179,7 +179,7 @@ class _Environment(MetaClasses.Serializable):
            if bucket in path:
                path = path.replace(bucket, endpoint)
                break
        REPORT_URL = f"https://{path}/{Path(settings.HTML_PAGE_FILE).name}?PR={self.PR_NUMBER}&sha={'latest' if latest else self.SHA}&name_0={urllib.parse.quote(self.WORKFLOW_NAME, safe='')}&name_1={urllib.parse.quote(self.JOB_NAME, safe='')}"
        REPORT_URL = f"https://{path}/{Path(settings.HTML_PAGE_FILE).name}?PR={self.PR_NUMBER}&sha={'latest' if latest else self.SHA}&name_0={urllib.parse.quote(self.WORKFLOW_NAME, safe='')}"
        return REPORT_URL

    def is_local_run(self):
@ -1,3 +1,4 @@
import copy
from dataclasses import dataclass


@ -24,6 +25,14 @@ class Artifact:
        def is_s3_artifact(self):
            return self.type == Artifact.Type.S3

        def parametrize(self, names):
            res = []
            for name in names:
                obj = copy.deepcopy(self)
                obj.name = name
                res.append(obj)
            return res

    @classmethod
    def define_artifact(cls, name, type, path):
        return cls.Config(name=name, type=type, path=path)
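A short usage sketch for the new parametrize() helper (the artifact names and path here are made up; the API is as shown in the hunk above):

from praktika import Artifact

# One artifact config fanned out into several independently named deep copies.
binaries = Artifact.Config(
    name="...",  # placeholder; parametrize() overwrites it on each copy
    type=Artifact.Type.S3,
    path="/tmp/praktika/build/programs/clickhouse",
).parametrize(names=["CH_AMD_RELEASE", "CH_ARM_RELEASE"])

assert [artifact.name for artifact in binaries] == ["CH_AMD_RELEASE", "CH_ARM_RELEASE"]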
@ -137,14 +137,14 @@ class HtmlRunnerHooks:
        summary_result.start_time = Utils.timestamp()

        assert _ResultS3.copy_result_to_s3_with_version(summary_result, version=0)
        page_url = env.get_report_url(settings=Settings)
        page_url = env.get_report_url(settings=Settings, latest=True)
        print(f"CI Status page url [{page_url}]")

        res1 = GH.post_commit_status(
            name=_workflow.name,
            status=Result.Status.PENDING,
            description="",
            url=env.get_report_url(settings=Settings, latest=True),
            url=page_url,
        )
        res2 = GH.post_pr_comment(
            comment_body=f"Workflow [[{_workflow.name}]({page_url})], commit [{_Environment.get().SHA[:8]}]",
@ -529,7 +529,7 @@

        const columnSymbols = {
            name: '🗂️',
            status: '🧾',
            status: '✅',
            start_time: '🕒',
            duration: '⏳',
            info: '📝',
@ -601,7 +601,7 @@
                td.classList.add('time-column');
                td.textContent = value ? formatDuration(value) : '';
            } else if (column === 'info') {
                td.textContent = value.includes('\n') ? '↵' : (value || '');
                td.textContent = value && value.includes('\n') ? '↵' : (value || '');
                td.classList.add('info-column');
            }

@ -68,9 +68,7 @@ def _update_workflow_with_native_jobs(workflow):
        print(f"Enable native job [{_docker_build_job.name}] for [{workflow.name}]")
        aux_job = copy.deepcopy(_docker_build_job)
        if workflow.enable_cache:
            print(
                f"Add automatic digest config for [{aux_job.name}] job since cache is enabled"
            )
            print(f"Add automatic digest config for [{aux_job.name}] job")
            docker_digest_config = Job.CacheDigestConfig()
            for docker_config in workflow.dockers:
                docker_digest_config.include_paths.append(docker_config.path)

@ -144,7 +144,7 @@ def _config_workflow(workflow: Workflow.Config, job_name):
            f"git diff-index HEAD -- {Settings.WORKFLOW_PATH_PREFIX}"
        )
        info = ""
        status = Result.Status.SUCCESS
        status = Result.Status.FAILED
        if exit_code != 0:
            info = f"workspace has uncommitted files unexpectedly [{output}]"
            status = Result.Status.ERROR
@ -154,10 +154,14 @@ def _config_workflow(workflow: Workflow.Config, job_name):
        exit_code, output, err = Shell.get_res_stdout_stderr(
            f"git diff-index HEAD -- {Settings.WORKFLOW_PATH_PREFIX}"
        )
        if exit_code != 0:
            info = f"workspace has outdated workflows [{output}] - regenerate with [python -m praktika --generate]"
            status = Result.Status.ERROR
        if output:
            info = f"workflows are outdated: [{output}]"
            status = Result.Status.FAILED
            print("ERROR: ", info)
        elif exit_code == 0 and not err:
            status = Result.Status.SUCCESS
        else:
            print(f"ERROR: exit code [{exit_code}], err [{err}]")

        return (
            Result(
@ -310,7 +314,7 @@ def _finish_workflow(workflow, job_name):
    print(env.get_needs_statuses())

    print("Check Workflow results")
    _ResultS3.copy_result_from_s3(
    version = _ResultS3.copy_result_from_s3_with_version(
        Result.file_name_static(workflow.name),
    )
    workflow_result = Result.from_fs(workflow.name)
@ -333,7 +337,7 @@ def _finish_workflow(workflow, job_name):
            # dump the workflow result after the update - to have an updated result in post
            workflow_result.dump()
            # add the error into env - it should appear in the report
            env.add_info(ResultInfo.NOT_FINALIZED + f" [{result.name}]")
            env.add_info(f"{result.name}: {ResultInfo.NOT_FINALIZED}")
            update_final_report = True
            job = workflow.get_job(result.name)
            if not job or not job.allow_merge_on_failure:
@ -358,9 +362,7 @@ def _finish_workflow(workflow, job_name):
            env.add_info(ResultInfo.GH_STATUS_ERROR)

    if update_final_report:
        _ResultS3.copy_result_to_s3(
            workflow_result,
        )
        _ResultS3.copy_result_to_s3_with_version(workflow_result, version + 1)

    Result.from_fs(job_name).set_status(Result.Status.SUCCESS)

@ -1,5 +1,6 @@
import dataclasses
import datetime
import json
import sys
from pathlib import Path
from typing import Any, Dict, List, Optional, Union
@ -80,12 +81,19 @@ class Result(MetaClasses.Serializable):
            infos += info
        if results and not status:
            for result in results:
                if result.status not in (Result.Status.SUCCESS, Result.Status.FAILED):
                if result.status not in (
                    Result.Status.SUCCESS,
                    Result.Status.FAILED,
                    Result.Status.ERROR,
                ):
                    Utils.raise_with_error(
                        f"Unexpected result status [{result.status}] for Result.create_from call"
                    )
                if result.status != Result.Status.SUCCESS:
                    result_status = Result.Status.FAILED
                if result.status == Result.Status.ERROR:
                    result_status = Result.Status.ERROR
                    break
        if results:
            for result in results:
                if result.info and with_info_from_results:
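A standalone sketch (not praktika code; the status strings are my own) of the aggregation rule this hunk implements: any non-success result makes the summary failed, and an error takes precedence and stops the scan:

def aggregate_status(statuses):
    # Mirrors the loop above: "error" beats "failure", which beats "success".
    summary = "success"
    for status in statuses:
        assert status in ("success", "failure", "error"), f"Unexpected result status [{status}]"
        if status != "success":
            summary = "failure"
        if status == "error":
            summary = "error"
            break
    return summary

assert aggregate_status(["success", "failure", "success"]) == "failure"
assert aggregate_status(["failure", "error", "failure"]) == "error"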
@ -121,6 +129,9 @@ class Result(MetaClasses.Serializable):
    def set_success(self) -> "Result":
        return self.set_status(Result.Status.SUCCESS)

    def set_failed(self) -> "Result":
        return self.set_status(Result.Status.FAILED)

    def set_results(self, results: List["Result"]) -> "Result":
        self.results = results
        self.dump()
@ -163,17 +174,14 @@ class Result(MetaClasses.Serializable):
        return Result(**obj)

    def update_duration(self):
        if not self.duration and self.start_time:
        if self.duration:
            return self
        if self.start_time:
            self.duration = datetime.datetime.utcnow().timestamp() - self.start_time
        else:
            if not self.duration:
                print(
                    f"NOTE: duration is set for job [{self.name}] Result - do not update by CI"
                )
            else:
                print(
                    f"NOTE: start_time is not set for job [{self.name}] Result - do not update duration"
                )
            print(
                f"NOTE: start_time is not set for job [{self.name}] Result - do not update duration"
            )
        return self

    def set_timing(self, stopwatch: Utils.Stopwatch):
@ -247,7 +255,21 @@ class Result(MetaClasses.Serializable):
        )

    @classmethod
    def create_from_command_execution(
    def from_gtest_run(cls, name, unit_tests_path, with_log=False):
        Shell.check(f"rm {ResultTranslator.GTEST_RESULT_FILE}")
        result = Result.from_commands_run(
            name=name,
            command=[
                f"{unit_tests_path} --gtest_output='json:{ResultTranslator.GTEST_RESULT_FILE}'"
            ],
            with_log=with_log,
        )
        status, results, info = ResultTranslator.from_gtest()
        result.set_status(status).set_results(results).set_info(info)
        return result

    @classmethod
    def from_commands_run(
        cls,
        name,
        command,
@ -504,10 +526,11 @@ class _ResultS3:
    #     return True

    @classmethod
    def upload_result_files_to_s3(cls, result):
    def upload_result_files_to_s3(cls, result, s3_subprefix=""):
        s3_subprefix = "/".join([s3_subprefix, Utils.normalize_string(result.name)])
        if result.results:
            for result_ in result.results:
                cls.upload_result_files_to_s3(result_)
                cls.upload_result_files_to_s3(result_, s3_subprefix=s3_subprefix)
        for file in result.files:
            if not Path(file).is_file():
                print(f"ERROR: Invalid file [{file}] in [{result.name}] - skip upload")
@ -526,7 +549,7 @@ class _ResultS3:
                    file,
                    upload_to_s3=True,
                    text=is_text,
                    s3_subprefix=Utils.normalize_string(result.name),
                    s3_subprefix=s3_subprefix,
                )
                result.links.append(file_link)
        if result.files:
@ -569,3 +592,138 @@ class _ResultS3:
            return new_status
        else:
            return None


class ResultTranslator:
    GTEST_RESULT_FILE = "/tmp/praktika/gtest.json"

    @classmethod
    def from_gtest(cls):
        """The JSON is described by the following proto3 schema:
        (It's wrong, but that's a copy/paste from
        https://google.github.io/googletest/advanced.html#generating-a-json-report)

        syntax = "proto3";

        package googletest;

        import "google/protobuf/timestamp.proto";
        import "google/protobuf/duration.proto";

        message UnitTest {
            int32 tests = 1;
            int32 failures = 2;
            int32 disabled = 3;
            int32 errors = 4;
            google.protobuf.Timestamp timestamp = 5;
            google.protobuf.Duration time = 6;
            string name = 7;
            repeated TestCase testsuites = 8;
        }

        message TestCase {
            string name = 1;
            int32 tests = 2;
            int32 failures = 3;
            int32 disabled = 4;
            int32 errors = 5;
            google.protobuf.Duration time = 6;
            repeated TestInfo testsuite = 7;
        }

        message TestInfo {
            string name = 1;
            string file = 6;
            int32 line = 7;
            enum Status {
                RUN = 0;
                NOTRUN = 1;
            }
            Status status = 2;
            google.protobuf.Duration time = 3;
            string classname = 4;
            message Failure {
                string failures = 1;
                string type = 2;
            }
            repeated Failure failures = 5;
        }"""

        test_results = []  # type: List[Result]

        if not Path(cls.GTEST_RESULT_FILE).exists():
            print(f"ERROR: No test result file [{cls.GTEST_RESULT_FILE}]")
            return (
                Result.Status.ERROR,
                test_results,
                f"No test result file [{cls.GTEST_RESULT_FILE}]",
            )

        with open(cls.GTEST_RESULT_FILE, "r", encoding="utf-8") as j:
            report = json.load(j)

        total_counter = report["tests"]
        failed_counter = report["failures"]
        error_counter = report["errors"]

        description = ""
        SEGFAULT = "Segmentation fault. "
        SIGNAL = "Exit on signal. "
        for suite in report["testsuites"]:
            suite_name = suite["name"]
            for test_case in suite["testsuite"]:
                case_name = test_case["name"]
                test_time = float(test_case["time"][:-1])
                raw_logs = None
                if "failures" in test_case:
                    raw_logs = ""
                    for failure in test_case["failures"]:
                        raw_logs += failure[Result.Status.FAILED]
                    if (
                        "Segmentation fault" in raw_logs  # type: ignore
                        and SEGFAULT not in description
                    ):
                        description += SEGFAULT
                    if (
                        "received signal SIG" in raw_logs  # type: ignore
                        and SIGNAL not in description
                    ):
                        description += SIGNAL
                if test_case["status"] == "NOTRUN":
                    test_status = "SKIPPED"
                elif raw_logs is None:
                    test_status = Result.Status.SUCCESS
                else:
                    test_status = Result.Status.FAILED

                test_results.append(
                    Result(
                        f"{suite_name}.{case_name}",
                        test_status,
                        duration=test_time,
                        info=raw_logs,
                    )
                )

        check_status = Result.Status.SUCCESS
        tests_status = Result.Status.SUCCESS
        tests_time = float(report["time"][:-1])
        if failed_counter:
            check_status = Result.Status.FAILED
            tests_status = Result.Status.FAILED
        if error_counter:
            check_status = Result.Status.ERROR
            tests_status = Result.Status.ERROR
        test_results.append(Result(report["name"], tests_status, duration=tests_time))

        if not description:
            description += (
                f"fail: {failed_counter + error_counter}, "
                f"passed: {total_counter - failed_counter - error_counter}"
            )

        return (
            check_status,
            test_results,
            description,
        )
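For reference, a minimal hand-written example (my own, following the schema quoted in the docstring above, not actual gtest output) of the JSON shape that from_gtest() consumes:

# gtest writes a file like this when run with --gtest_output='json:<path>'.
example_report = {
    "name": "AllTests",
    "tests": 2,
    "failures": 1,
    "errors": 0,
    "time": "1.5s",  # the trailing 's' is stripped by float(...[:-1])
    "testsuites": [
        {
            "name": "HashTableTest",
            "testsuite": [
                {"name": "Insert", "status": "RUN", "time": "0.4s"},
                {
                    "name": "Resize",
                    "status": "RUN",
                    "time": "1.1s",
                    "failures": [{"failure": "expected 42, got 0", "type": ""}],
                },
            ],
        }
    ],
}
# The translator would report HashTableTest.Insert as success, HashTableTest.Resize
# as failed, and produce the description "fail: 1, passed: 1".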
@ -1,3 +1,5 @@
import glob
import json
import os
import re
import sys
@ -58,6 +60,7 @@ class Runner:
            workflow_config.digest_dockers[docker.name] = Digest().calc_docker_digest(
                docker, workflow.dockers
            )

        workflow_config.dump()

        Result.generate_pending(job.name).dump()
@ -81,6 +84,7 @@ class Runner:
        print("Read GH Environment")
        env = _Environment.from_env()
        env.JOB_NAME = job.name
        os.environ["JOB_NAME"] = job.name
        env.dump()
        print(env)

@ -119,8 +123,21 @@ class Runner:
            else:
                prefixes = [env.get_s3_prefix()] * len(required_artifacts)
            for artifact, prefix in zip(required_artifacts, prefixes):
                s3_path = f"{Settings.S3_ARTIFACT_PATH}/{prefix}/{Utils.normalize_string(artifact._provided_by)}/{Path(artifact.path).name}"
                assert S3.copy_file_from_s3(s3_path=s3_path, local_path=Settings.INPUT_DIR)
                recursive = False
                include_pattern = ""
                if "*" in artifact.path:
                    s3_path = f"{Settings.S3_ARTIFACT_PATH}/{prefix}/{Utils.normalize_string(artifact._provided_by)}/"
                    recursive = True
                    include_pattern = Path(artifact.path).name
                    assert "*" in include_pattern
                else:
                    s3_path = f"{Settings.S3_ARTIFACT_PATH}/{prefix}/{Utils.normalize_string(artifact._provided_by)}/{Path(artifact.path).name}"
                assert S3.copy_file_from_s3(
                    s3_path=s3_path,
                    local_path=Settings.INPUT_DIR,
                    recursive=recursive,
                    include_pattern=include_pattern,
                )

        return 0

@ -130,6 +147,14 @@ class Runner:
        env.JOB_NAME = job.name
        env.dump()

        # workaround for old clickhouse jobs
        try:
            os.environ["DOCKER_TAG"] = json.dumps(
                RunConfig.from_fs(workflow.name).digest_dockers
            )
        except Exception as e:
            print(f"WARNING: Failed to set DOCKER_TAG, ex [{e}]")

        if param:
            if not isinstance(param, str):
                Utils.raise_with_error(
@ -182,13 +207,15 @@ class Runner:
                    ResultInfo.TIMEOUT
                )
            elif result.is_running():
                info = f"ERROR: Job terminated with an error, exit code [{exit_code}] - set status to [{Result.Status.ERROR}]"
                info = f"ERROR: Job killed, exit code [{exit_code}] - set status to [{Result.Status.ERROR}]"
                print(info)
                result.set_status(Result.Status.ERROR).set_info(info)
                result.set_files([Settings.RUN_LOG])
            else:
                info = f"ERROR: Invalid status [{result.status}] for exit code [{exit_code}] - switch to [{Result.Status.ERROR}]"
                print(info)
                result.set_status(Result.Status.ERROR).set_info(info)
                result.set_files([Settings.RUN_LOG])
            result.dump()

        return exit_code
@ -240,8 +267,6 @@ class Runner:
            print(info)
            result.set_info(info).set_status(Result.Status.ERROR).dump()

        if not result.is_ok():
            result.set_files(files=[Settings.RUN_LOG])
        result.update_duration().dump()

        if run_exit_code == 0:
@ -262,10 +287,11 @@ class Runner:
                            f"ls -l {artifact.path}", verbose=True
                        ), f"Artifact {artifact.path} not found"
                        s3_path = f"{Settings.S3_ARTIFACT_PATH}/{env.get_s3_prefix()}/{Utils.normalize_string(env.JOB_NAME)}"
                        link = S3.copy_file_to_s3(
                            s3_path=s3_path, local_path=artifact.path
                        )
                        result.set_link(link)
                        for file_path in glob.glob(artifact.path):
                            link = S3.copy_file_to_s3(
                                s3_path=s3_path, local_path=file_path
                            )
                            result.set_link(link)
                    except Exception as e:
                        error = (
                            f"ERROR: Failed to upload artifact [{artifact}], ex [{e}]"
@ -2,6 +2,7 @@ import dataclasses
import json
from pathlib import Path
from typing import Dict
from urllib.parse import quote

from praktika._environment import _Environment
from praktika.settings import Settings
@ -55,7 +56,7 @@ class S3:
        bucket = s3_path.split("/")[0]
        endpoint = Settings.S3_BUCKET_TO_HTTP_ENDPOINT[bucket]
        assert endpoint
        return f"https://{s3_full_path}".replace(bucket, endpoint)
        return quote(f"https://{s3_full_path}".replace(bucket, endpoint), safe=":/?&=")

    @classmethod
    def put(cls, s3_path, local_path, text=False, metadata=None, if_none_matched=False):
@ -117,15 +118,21 @@ class S3:
        return res

    @classmethod
    def copy_file_from_s3(cls, s3_path, local_path):
    def copy_file_from_s3(
        cls, s3_path, local_path, recursive=False, include_pattern=""
    ):
        assert Path(s3_path), f"Invalid S3 Path [{s3_path}]"
        if Path(local_path).is_dir():
            local_path = Path(local_path) / Path(s3_path).name
            pass
        else:
            assert Path(
                local_path
            ).parent.is_dir(), f"Parent path for [{local_path}] does not exist"
        cmd = f"aws s3 cp s3://{s3_path} {local_path}"
        if recursive:
            cmd += " --recursive"
        if include_pattern:
            cmd += f" --include {include_pattern}"
        res = cls.run_command_with_retries(cmd)
        return res
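A minimal sketch of the command string this change builds (the function is illustrative, not the praktika implementation):

def build_s3_cp_command(s3_path, local_path, recursive=False, include_pattern=""):
    command = f"aws s3 cp s3://{s3_path} {local_path}"
    if recursive:
        command += " --recursive"
    if include_pattern:
        # Note: the aws cli includes everything by default, so --include is
        # normally paired with a preceding --exclude "*" to be selective.
        command += f" --include {include_pattern}"
    return command

assert build_s3_cp_command(
    "bucket/prefix/", "/tmp/praktika/input", recursive=True, include_pattern="*.deb"
) == "aws s3 cp s3://bucket/prefix/ /tmp/praktika/input --recursive --include *.deb"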
@ -227,8 +227,8 @@ class Shell:
            proc = subprocess.Popen(
                command,
                shell=True,
                stderr=subprocess.STDOUT,
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
                stdin=subprocess.PIPE if stdin_str else None,
                universal_newlines=True,
                start_new_session=True,  # Start a new process group for signal handling
@ -248,11 +248,24 @@ class Shell:
                proc.stdin.write(stdin_str)
                proc.stdin.close()

            # Process output in real-time
            if proc.stdout:
                for line in proc.stdout:
            # Process both stdout and stderr in real-time
            def stream_output(stream, output_fp):
                for line in iter(stream.readline, ""):
                    sys.stdout.write(line)
                    log_fp.write(line)
                    output_fp.write(line)

            stdout_thread = Thread(
                target=stream_output, args=(proc.stdout, log_fp)
            )
            stderr_thread = Thread(
                target=stream_output, args=(proc.stderr, log_fp)
            )

            stdout_thread.start()
            stderr_thread.start()

            stdout_thread.join()
            stderr_thread.join()

            proc.wait()  # Wait for the process to finish
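A self-contained sketch of the pattern this hunk adopts: once stderr gets its own pipe, each stream needs a dedicated reader thread, otherwise a full pipe buffer can deadlock the child process (the names here are mine, not praktika's):

import subprocess
import sys
from threading import Thread

def run_streaming(command):
    proc = subprocess.Popen(
        command, shell=True, universal_newlines=True,
        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
    )

    def pump(stream):
        # Drain the pipe line by line so the child never blocks on a full buffer.
        for line in iter(stream.readline, ""):
            sys.stdout.write(line)

    threads = [Thread(target=pump, args=(s,)) for s in (proc.stdout, proc.stderr)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return proc.wait()

exit_code = run_streaming("echo out; echo err 1>&2")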
@ -105,9 +105,9 @@ jobs:
          . /tmp/praktika_setup_env.sh
          set -o pipefail
          if command -v ts &> /dev/null; then
            python3 -m praktika run --job '''{JOB_NAME}''' --workflow "{WORKFLOW_NAME}" --ci |& ts '[%Y-%m-%d %H:%M:%S]' | tee /tmp/praktika/praktika_run.log
            python3 -m praktika run '''{JOB_NAME}''' --workflow "{WORKFLOW_NAME}" --ci |& ts '[%Y-%m-%d %H:%M:%S]' | tee /tmp/praktika/praktika_run.log
          else
            python3 -m praktika run --job '''{JOB_NAME}''' --workflow "{WORKFLOW_NAME}" --ci |& tee /tmp/praktika/praktika_run.log
            python3 -m praktika run '''{JOB_NAME}''' --workflow "{WORKFLOW_NAME}" --ci |& tee /tmp/praktika/praktika_run.log
          fi
          {UPLOADS_GITHUB}\
"""
@ -1,244 +0,0 @@
from praktika import Docker, Secret

S3_BUCKET_NAME = "clickhouse-builds"
S3_BUCKET_HTTP_ENDPOINT = "clickhouse-builds.s3.amazonaws.com"


class RunnerLabels:
    CI_SERVICES = "ci_services"
    CI_SERVICES_EBS = "ci_services_ebs"
    BUILDER_AMD = "builder"
    BUILDER_ARM = "builder-aarch64"
    FUNC_TESTER_AMD = "func-tester"
    FUNC_TESTER_ARM = "func-tester-aarch64"


BASE_BRANCH = "master"

azure_secret = Secret.Config(
    name="azure_connection_string",
    type=Secret.Type.AWS_SSM_VAR,
)

SECRETS = [
    Secret.Config(
        name="dockerhub_robot_password",
        type=Secret.Type.AWS_SSM_VAR,
    ),
    azure_secret,
    # Secret.Config(
    #     name="woolenwolf_gh_app.clickhouse-app-id",
    #     type=Secret.Type.AWS_SSM_SECRET,
    # ),
    # Secret.Config(
    #     name="woolenwolf_gh_app.clickhouse-app-key",
    #     type=Secret.Type.AWS_SSM_SECRET,
    # ),
]

DOCKERS = [
    # Docker.Config(
    #     name="clickhouse/binary-builder",
    #     path="./ci/docker/packager/binary-builder",
    #     platforms=Docker.Platforms.arm_amd,
    #     depends_on=[],
    # ),
    # Docker.Config(
    #     name="clickhouse/cctools",
    #     path="./ci/docker/packager/cctools",
    #     platforms=Docker.Platforms.arm_amd,
    #     depends_on=[],
    # ),
    # Docker.Config(
    #     name="clickhouse/test-old-centos",
    #     path="./ci/docker/test/compatibility/centos",
    #     platforms=Docker.Platforms.arm_amd,
    #     depends_on=[],
    # ),
    # Docker.Config(
    #     name="clickhouse/test-old-ubuntu",
    #     path="./ci/docker/test/compatibility/ubuntu",
    #     platforms=Docker.Platforms.arm_amd,
    #     depends_on=[],
    # ),
    # Docker.Config(
    #     name="clickhouse/test-util",
    #     path="./ci/docker/test/util",
    #     platforms=Docker.Platforms.arm_amd,
    #     depends_on=[],
    # ),
    # Docker.Config(
    #     name="clickhouse/integration-test",
    #     path="./ci/docker/test/integration/base",
    #     platforms=Docker.Platforms.arm_amd,
    #     depends_on=["clickhouse/test-base"],
    # ),
    # Docker.Config(
    #     name="clickhouse/fuzzer",
    #     path="./ci/docker/test/fuzzer",
    #     platforms=Docker.Platforms.arm_amd,
    #     depends_on=["clickhouse/test-base"],
    # ),
    # Docker.Config(
    #     name="clickhouse/performance-comparison",
    #     path="./ci/docker/test/performance-comparison",
    #     platforms=Docker.Platforms.arm_amd,
    #     depends_on=[],
    # ),
    Docker.Config(
        name="clickhouse/fasttest",
        path="./ci/docker/fasttest",
        platforms=Docker.Platforms.arm_amd,
        depends_on=[],
    ),
    # Docker.Config(
    #     name="clickhouse/test-base",
    #     path="./ci/docker/test/base",
    #     platforms=Docker.Platforms.arm_amd,
    #     depends_on=["clickhouse/test-util"],
    # ),
    # Docker.Config(
    #     name="clickhouse/clickbench",
    #     path="./ci/docker/test/clickbench",
    #     platforms=Docker.Platforms.arm_amd,
    #     depends_on=["clickhouse/test-base"],
    # ),
    # Docker.Config(
    #     name="clickhouse/keeper-jepsen-test",
    #     path="./ci/docker/test/keeper-jepsen",
    #     platforms=Docker.Platforms.arm_amd,
    #     depends_on=["clickhouse/test-base"],
    # ),
    # Docker.Config(
    #     name="clickhouse/server-jepsen-test",
    #     path="./ci/docker/test/server-jepsen",
    #     platforms=Docker.Platforms.arm_amd,
    #     depends_on=["clickhouse/test-base"],
    # ),
    # Docker.Config(
    #     name="clickhouse/sqllogic-test",
    #     path="./ci/docker/test/sqllogic",
    #     platforms=Docker.Platforms.arm_amd,
    #     depends_on=["clickhouse/test-base"],
    # ),
    # Docker.Config(
    #     name="clickhouse/sqltest",
    #     path="./ci/docker/test/sqltest",
    #     platforms=Docker.Platforms.arm_amd,
    #     depends_on=["clickhouse/test-base"],
    # ),
    Docker.Config(
        name="clickhouse/stateless-test",
        path="./ci/docker/stateless-test",
        platforms=Docker.Platforms.arm_amd,
        depends_on=[],
    ),
    Docker.Config(
        name="clickhouse/stateful-test",
        path="./ci/docker/stateful-test",
        platforms=Docker.Platforms.arm_amd,
        depends_on=["clickhouse/stateless-test"],
    ),
    # Docker.Config(
    #     name="clickhouse/stress-test",
    #     path="./ci/docker/test/stress",
    #     platforms=Docker.Platforms.arm_amd,
    #     depends_on=["clickhouse/stateful-test"],
    # ),
    # Docker.Config(
    #     name="clickhouse/unit-test",
    #     path="./ci/docker/test/unit",
    #     platforms=Docker.Platforms.arm_amd,
    #     depends_on=["clickhouse/test-base"],
    # ),
    # Docker.Config(
    #     name="clickhouse/integration-tests-runner",
    #     path="./ci/docker/test/integration/runner",
    #     platforms=Docker.Platforms.arm_amd,
    #     depends_on=["clickhouse/test-base"],
    # ),
    Docker.Config(
        name="clickhouse/style-test",
        path="./ci/docker/style-test",
        platforms=Docker.Platforms.arm_amd,
        depends_on=[],
    ),
    # Docker.Config(
    #     name="clickhouse/docs-builder",
    #     path="./ci/docker/docs/builder",
    #     platforms=Docker.Platforms.arm_amd,
    #     depends_on=["clickhouse/test-base"],
    # ),
]

# TODO:
#   "docker/test/integration/s3_proxy": {
#       "name": "clickhouse/s3-proxy",
#       "dependent": []
#   },
#   "docker/test/integration/resolver": {
#       "name": "clickhouse/python-bottle",
#       "dependent": []
#   },
#   "docker/test/integration/helper_container": {
#       "name": "clickhouse/integration-helper",
#       "dependent": []
#   },
#   "docker/test/integration/mysql_golang_client": {
#       "name": "clickhouse/mysql-golang-client",
#       "dependent": []
#   },
#   "docker/test/integration/dotnet_client": {
#       "name": "clickhouse/dotnet-client",
#       "dependent": []
#   },
#   "docker/test/integration/mysql_java_client": {
#       "name": "clickhouse/mysql-java-client",
#       "dependent": []
#   },
#   "docker/test/integration/mysql_js_client": {
#       "name": "clickhouse/mysql-js-client",
#       "dependent": []
#   },
#   "docker/test/integration/mysql_php_client": {
#       "name": "clickhouse/mysql-php-client",
#       "dependent": []
#   },
#   "docker/test/integration/postgresql_java_client": {
#       "name": "clickhouse/postgresql-java-client",
#       "dependent": []
#   },
#   "docker/test/integration/kerberos_kdc": {
#       "only_amd64": true,
#       "name": "clickhouse/kerberos-kdc",
#       "dependent": []
#   },
#   "docker/test/integration/kerberized_hadoop": {
#       "only_amd64": true,
#       "name": "clickhouse/kerberized-hadoop",
#       "dependent": []
#   },
#   "docker/test/sqlancer": {
#       "name": "clickhouse/sqlancer-test",
#       "dependent": []
#   },
#   "docker/test/install/deb": {
#       "name": "clickhouse/install-deb-test",
#       "dependent": []
#   },
#   "docker/test/install/rpm": {
#       "name": "clickhouse/install-rpm-test",
#       "dependent": []
#   },
#   "docker/test/integration/nginx_dav": {
#       "name": "clickhouse/nginx-dav",
#       "dependent": []
#   }


class JobNames:
    STYLE_CHECK = "Style Check"
    FAST_TEST = "Fast test"
    BUILD = "Build"
    STATELESS = "Stateless tests"
    STATEFUL = "Stateful tests"
@ -1,14 +1,13 @@
from ci.settings.definitions import (
    S3_BUCKET_HTTP_ENDPOINT,
    S3_BUCKET_NAME,
    RunnerLabels,
)
# aux settings:
S3_BUCKET_NAME = "clickhouse-builds"
S3_BUCKET_HTTP_ENDPOINT = "clickhouse-builds.s3.amazonaws.com"

# praktika settings:
MAIN_BRANCH = "master"

S3_ARTIFACT_PATH = f"{S3_BUCKET_NAME}/artifacts"
CI_CONFIG_RUNS_ON = [RunnerLabels.CI_SERVICES]
DOCKER_BUILD_RUNS_ON = [RunnerLabels.CI_SERVICES_EBS]
CI_CONFIG_RUNS_ON = ["ci_services"]
DOCKER_BUILD_RUNS_ON = ["ci_services_ebs"]
CACHE_S3_PATH = f"{S3_BUCKET_NAME}/ci_ch_cache"
HTML_S3_PATH = f"{S3_BUCKET_NAME}/reports"
S3_BUCKET_TO_HTTP_ENDPOINT = {S3_BUCKET_NAME: S3_BUCKET_HTTP_ENDPOINT}
17  ci/setup.py  (new file)
@ -0,0 +1,17 @@
from setuptools import find_packages, setup

setup(
    name="praktika",
    version="0.1",
    packages=find_packages(),
    url="https://github.com/ClickHouse/praktika",
    license="Apache 2.0",
    author="Max Kainov",
    author_email="max.kainov@clickhouse.com",
    description="CI Infrastructure Toolbox",
    entry_points={
        "console_scripts": [
            "praktika=praktika.__main__:main",
        ]
    },
)
610
ci/workflows/defs.py
Normal file
610
ci/workflows/defs.py
Normal file
@ -0,0 +1,610 @@
|
||||
from praktika import Artifact, Docker, Job, Secret
|
||||
from praktika.settings import Settings
|
||||
|
||||
|
||||
class RunnerLabels:
|
||||
CI_SERVICES = "ci_services"
|
||||
CI_SERVICES_EBS = "ci_services_ebs"
|
||||
BUILDER_AMD = "builder"
|
||||
BUILDER_ARM = "builder-aarch64"
|
||||
FUNC_TESTER_AMD = "func-tester"
|
||||
FUNC_TESTER_ARM = "func-tester-aarch64"
|
||||
STYLE_CHECK_AMD = "style-checker"
|
||||
STYLE_CHECK_ARM = "style-checker-aarch64"
|
||||
CI_SERVICES = "ci_services"
|
||||
|
||||
|
||||
class CIFiles:
|
||||
UNIT_TESTS_RESULTS = "/tmp/praktika/output/unit_tests_result.json"
|
||||
UNIT_TESTS_BIN = "/tmp/praktika/build/src/unit_tests_dbms"
|
||||
|
||||
|
||||
BASE_BRANCH = "master"
|
||||
|
||||
azure_secret = Secret.Config(
|
||||
name="azure_connection_string",
|
||||
type=Secret.Type.AWS_SSM_VAR,
|
||||
)
|
||||
|
||||
SECRETS = [
|
||||
Secret.Config(
|
||||
name="dockerhub_robot_password",
|
||||
type=Secret.Type.AWS_SSM_VAR,
|
||||
),
|
||||
azure_secret,
|
||||
# Secret.Config(
|
||||
# name="woolenwolf_gh_app.clickhouse-app-id",
|
||||
# type=Secret.Type.AWS_SSM_SECRET,
|
||||
# ),
|
||||
# Secret.Config(
|
||||
# name="woolenwolf_gh_app.clickhouse-app-key",
|
||||
# type=Secret.Type.AWS_SSM_SECRET,
|
||||
# ),
|
||||
]
|
||||
|
||||
DOCKERS = [
|
||||
Docker.Config(
|
||||
name="clickhouse/binary-builder",
|
||||
path="./ci/docker/binary-builder",
|
||||
platforms=Docker.Platforms.arm_amd,
|
||||
depends_on=["clickhouse/fasttest"],
|
||||
),
|
||||
# Docker.Config(
|
||||
# name="clickhouse/cctools",
|
||||
# path="./ci/docker/packager/cctools",
|
||||
# platforms=Docker.Platforms.arm_amd,
|
||||
# depends_on=[],
|
||||
# ),
|
||||
Docker.Config(
|
||||
name="clickhouse/test-old-centos",
|
||||
path="./ci/docker/compatibility/centos",
|
||||
platforms=Docker.Platforms.arm_amd,
|
||||
depends_on=[],
|
||||
),
|
||||
Docker.Config(
|
||||
name="clickhouse/test-old-ubuntu",
|
||||
path="./ci/docker/compatibility/ubuntu",
|
||||
platforms=Docker.Platforms.arm_amd,
|
||||
depends_on=[],
|
||||
),
|
||||
# Docker.Config(
|
||||
# name="clickhouse/test-util",
|
||||
# path="./ci/docker/test/util",
|
||||
# platforms=Docker.Platforms.arm_amd,
|
||||
# depends_on=[],
|
||||
# ),
|
||||
# Docker.Config(
|
||||
# name="clickhouse/integration-test",
|
||||
# path="./ci/docker/test/integration/base",
|
||||
# platforms=Docker.Platforms.arm_amd,
|
||||
# depends_on=["clickhouse/test-base"],
|
||||
# ),
|
||||
# Docker.Config(
|
||||
# name="clickhouse/fuzzer",
|
||||
# path="./ci/docker/test/fuzzer",
|
||||
# platforms=Docker.Platforms.arm_amd,
|
||||
# depends_on=["clickhouse/test-base"],
|
||||
# ),
|
||||
# Docker.Config(
|
||||
# name="clickhouse/performance-comparison",
|
||||
# path="./ci/docker/test/performance-comparison",
|
||||
# platforms=Docker.Platforms.arm_amd,
|
||||
# depends_on=[],
|
||||
# ),
|
||||
Docker.Config(
|
||||
name="clickhouse/fasttest",
|
||||
path="./ci/docker/fasttest",
|
||||
platforms=Docker.Platforms.arm_amd,
|
||||
depends_on=[],
|
||||
),
|
||||
# Docker.Config(
|
||||
# name="clickhouse/test-base",
|
||||
# path="./ci/docker/test/base",
|
||||
# platforms=Docker.Platforms.arm_amd,
|
||||
# depends_on=["clickhouse/test-util"],
|
||||
# ),
|
||||
# Docker.Config(
|
||||
# name="clickhouse/clickbench",
|
||||
# path="./ci/docker/test/clickbench",
|
||||
# platforms=Docker.Platforms.arm_amd,
|
||||
# depends_on=["clickhouse/test-base"],
|
||||
# ),
|
||||
# Docker.Config(
|
||||
# name="clickhouse/keeper-jepsen-test",
|
||||
# path="./ci/docker/test/keeper-jepsen",
|
||||
# platforms=Docker.Platforms.arm_amd,
|
||||
# depends_on=["clickhouse/test-base"],
|
||||
# ),
|
||||
# Docker.Config(
|
||||
# name="clickhouse/server-jepsen-test",
|
||||
# path="./ci/docker/test/server-jepsen",
|
||||
# platforms=Docker.Platforms.arm_amd,
|
||||
# depends_on=["clickhouse/test-base"],
|
||||
# ),
|
||||
# Docker.Config(
|
||||
# name="clickhouse/sqllogic-test",
|
||||
# path="./ci/docker/test/sqllogic",
|
||||
# platforms=Docker.Platforms.arm_amd,
|
||||
# depends_on=["clickhouse/test-base"],
|
||||
# ),
|
||||
# Docker.Config(
|
||||
# name="clickhouse/sqltest",
|
||||
# path="./ci/docker/test/sqltest",
|
||||
# platforms=Docker.Platforms.arm_amd,
|
||||
# depends_on=["clickhouse/test-base"],
|
||||
# ),
|
||||
Docker.Config(
|
||||
name="clickhouse/stateless-test",
|
||||
path="./ci/docker/stateless-test",
|
||||
platforms=Docker.Platforms.arm_amd,
|
||||
depends_on=[],
|
||||
),
|
||||
Docker.Config(
|
||||
name="clickhouse/stateful-test",
|
||||
path="./ci/docker/stateful-test",
|
||||
platforms=Docker.Platforms.arm_amd,
|
||||
depends_on=["clickhouse/stateless-test"],
|
||||
),
|
||||
# Docker.Config(
|
||||
# name="clickhouse/stress-test",
|
||||
# path="./ci/docker/test/stress",
|
||||
# platforms=Docker.Platforms.arm_amd,
|
||||
# depends_on=["clickhouse/stateful-test"],
|
||||
# ),
|
||||
# Docker.Config(
|
||||
# name="clickhouse/unit-test",
|
||||
# path="./ci/docker/test/unit",
|
||||
# platforms=Docker.Platforms.arm_amd,
|
||||
# depends_on=["clickhouse/test-base"],
|
||||
# ),
|
||||
# Docker.Config(
|
||||
# name="clickhouse/integration-tests-runner",
|
||||
# path="./ci/docker/test/integration/runner",
|
||||
# platforms=Docker.Platforms.arm_amd,
|
||||
# depends_on=["clickhouse/test-base"],
|
||||
# ),
|
||||
Docker.Config(
|
||||
name="clickhouse/style-test",
|
||||
path="./ci/docker/style-test",
|
||||
platforms=Docker.Platforms.arm_amd,
|
||||
depends_on=[],
|
||||
),
|
||||
# Docker.Config(
|
||||
# name="clickhouse/docs-builder",
|
||||
# path="./ci/docker/docs/builder",
|
||||
# platforms=Docker.Platforms.arm_amd,
|
||||
# depends_on=["clickhouse/test-base"],
|
||||
# ),
|
||||
]
|
||||
|
||||
# TODO:
|
||||
# "docker/test/integration/s3_proxy": {
|
||||
# "name": "clickhouse/s3-proxy",
|
||||
# "dependent": []
|
||||
# },
|
||||
# "docker/test/integration/resolver": {
|
||||
# "name": "clickhouse/python-bottle",
|
||||
# "dependent": []
|
||||
# },
|
||||
# "docker/test/integration/helper_container": {
|
||||
# "name": "clickhouse/integration-helper",
|
||||
# "dependent": []
|
||||
# },
|
||||
# "docker/test/integration/mysql_golang_client": {
|
||||
# "name": "clickhouse/mysql-golang-client",
|
||||
# "dependent": []
|
||||
# },
|
||||
# "docker/test/integration/dotnet_client": {
|
||||
# "name": "clickhouse/dotnet-client",
|
||||
# "dependent": []
|
||||
# },
|
||||
# "docker/test/integration/mysql_java_client": {
|
||||
# "name": "clickhouse/mysql-java-client",
|
||||
# "dependent": []
|
||||
# },
|
||||
# "docker/test/integration/mysql_js_client": {
|
||||
# "name": "clickhouse/mysql-js-client",
|
||||
# "dependent": []
|
||||
# },
|
||||
# "docker/test/integration/mysql_php_client": {
|
||||
# "name": "clickhouse/mysql-php-client",
|
||||
# "dependent": []
|
||||
# },
|
||||
# "docker/test/integration/postgresql_java_client": {
|
||||
# "name": "clickhouse/postgresql-java-client",
|
||||
# "dependent": []
|
||||
# },
|
||||
# "docker/test/integration/kerberos_kdc": {
|
||||
# "only_amd64": true,
|
||||
# "name": "clickhouse/kerberos-kdc",
|
||||
# "dependent": []
|
||||
# },
|
||||
# "docker/test/integration/kerberized_hadoop": {
|
||||
# "only_amd64": true,
|
||||
# "name": "clickhouse/kerberized-hadoop",
|
||||
# "dependent": []
|
||||
# },
|
||||
# "docker/test/sqlancer": {
|
||||
# "name": "clickhouse/sqlancer-test",
|
||||
# "dependent": []
|
||||
# },
|
||||
# "docker/test/install/deb": {
|
||||
# "name": "clickhouse/install-deb-test",
|
||||
# "dependent": []
|
||||
# },
|
||||
# "docker/test/install/rpm": {
|
||||
# "name": "clickhouse/install-rpm-test",
|
||||
# "dependent": []
|
||||
# },
|
||||
# "docker/test/integration/nginx_dav": {
|
||||
# "name": "clickhouse/nginx-dav",
|
||||
# "dependent": []
|
||||
# }
|
||||
|
||||
|
||||
class JobNames:
|
||||
STYLE_CHECK = "Style Check"
|
||||
FAST_TEST = "Fast test"
|
||||
BUILD = "Build"
|
||||
STATELESS = "Stateless tests"
|
||||
STATEFUL = "Stateful tests"
|
||||
STRESS = "Stress tests"
|
||||
PERFORMANCE = "Performance tests"
|
||||
COMPATIBILITY = "Compatibility check"
|
||||
|
||||
|
||||
class ToolSet:
|
||||
COMPILER_C = "clang-19"
|
||||
COMPILER_CPP = "clang++-19"
|
||||
|
||||
|
||||
class ArtifactNames:
|
||||
CH_AMD_DEBUG = "CH_AMD_DEBUG"
|
||||
CH_AMD_RELEASE = "CH_AMD_RELEASE"
|
||||
CH_AMD_ASAN = "CH_AMD_ASAN"
|
||||
CH_AMD_TSAN = "CH_AMD_TSAN"
|
||||
CH_AMD_MSAN = "CH_AMD_MSAN"
|
||||
CH_AMD_UBSAN = "CH_AMD_UBSAN"
|
||||
CH_AMD_BINARY = "CH_AMD_BINARY"
|
||||
CH_ARM_RELEASE = "CH_ARM_RELEASE"
|
||||
CH_ARM_ASAN = "CH_ARM_ASAN"
|
||||
|
||||
CH_ODBC_B_AMD_DEBUG = "CH_ODBC_B_AMD_DEBUG"
|
||||
CH_ODBC_B_AMD_RELEASE = "CH_ODBC_B_AMD_RELEASE"
|
||||
CH_ODBC_B_AMD_ASAN = "CH_ODBC_B_AMD_ASAN"
|
||||
CH_ODBC_B_AMD_TSAN = "CH_ODBC_B_AMD_TSAN"
|
||||
CH_ODBC_B_AMD_MSAN = "CH_ODBC_B_AMD_MSAN"
|
||||
CH_ODBC_B_AMD_UBSAN = "CH_ODBC_B_AMD_UBSAN"
|
||||
CH_ODBC_B_ARM_RELEASE = "CH_ODBC_B_ARM_RELEASE"
|
||||
CH_ODBC_B_ARM_ASAN = "CH_ODBC_B_ARM_ASAN"
|
||||
|
||||
UNITTEST_AMD_ASAN = "UNITTEST_AMD_ASAN"
|
||||
UNITTEST_AMD_TSAN = "UNITTEST_AMD_TSAN"
|
||||
UNITTEST_AMD_MSAN = "UNITTEST_AMD_MSAN"
|
||||
UNITTEST_AMD_UBSAN = "UNITTEST_AMD_UBSAN"
|
||||
UNITTEST_AMD_BINARY = "UNITTEST_AMD_BINARY"
|
||||
|
||||
DEB_AMD_DEBUG = "DEB_AMD_DEBUG"
|
||||
DEB_AMD_RELEASE = "DEB_AMD_RELEASE"
|
||||
DEB_AMD_ASAN = "DEB_AMD_ASAN"
|
||||
DEB_AMD_TSAN = "DEB_AMD_TSAN"
|
||||
DEB_AMD_MSAM = "DEB_AMD_MSAM"
|
||||
DEB_AMD_UBSAN = "DEB_AMD_UBSAN"
|
||||
DEB_ARM_RELEASE = "DEB_ARM_RELEASE"
|
||||
DEB_ARM_ASAN = "DEB_ARM_ASAN"
|
||||
|
||||
|
||||
ARTIFACTS = [
|
||||
*Artifact.Config(
|
||||
name="...",
|
||||
type=Artifact.Type.S3,
|
||||
path=f"{Settings.TEMP_DIR}/build/programs/clickhouse",
|
||||
).parametrize(
|
||||
names=[
|
||||
ArtifactNames.CH_AMD_DEBUG,
|
||||
ArtifactNames.CH_AMD_RELEASE,
|
||||
ArtifactNames.CH_AMD_ASAN,
|
||||
ArtifactNames.CH_AMD_TSAN,
|
||||
ArtifactNames.CH_AMD_MSAN,
|
||||
ArtifactNames.CH_AMD_UBSAN,
|
||||
ArtifactNames.CH_AMD_BINARY,
|
||||
ArtifactNames.CH_ARM_RELEASE,
|
||||
ArtifactNames.CH_ARM_ASAN,
|
||||
]
|
||||
),
|
||||
*Artifact.Config(
|
||||
name="...",
|
||||
type=Artifact.Type.S3,
|
||||
path=f"{Settings.TEMP_DIR}/build/programs/clickhouse-odbc-bridge",
|
||||
).parametrize(
|
||||
names=[
|
||||
ArtifactNames.CH_ODBC_B_AMD_DEBUG,
|
||||
ArtifactNames.CH_ODBC_B_AMD_ASAN,
|
||||
ArtifactNames.CH_ODBC_B_AMD_TSAN,
|
||||
ArtifactNames.CH_ODBC_B_AMD_MSAN,
|
||||
ArtifactNames.CH_ODBC_B_AMD_UBSAN,
|
||||
ArtifactNames.CH_ODBC_B_AMD_RELEASE,
|
||||
ArtifactNames.CH_ODBC_B_ARM_RELEASE,
|
||||
ArtifactNames.CH_ODBC_B_ARM_ASAN,
|
||||
]
|
||||
),
|
||||
# *Artifact.Config(
|
||||
# name="...",
|
||||
# type=Artifact.Type.S3,
|
||||
# path=f"{Settings.TEMP_DIR}/build/src/unit_tests_dbms",
|
||||
# ).parametrize(
|
||||
# names=[
|
||||
# ArtifactNames.UNITTEST_AMD_BINARY,
|
||||
# ArtifactNames.UNITTEST_AMD_ASAN,
|
||||
# ArtifactNames.UNITTEST_AMD_TSAN,
|
||||
# ArtifactNames.UNITTEST_AMD_MSAN,
|
||||
# ArtifactNames.UNITTEST_AMD_UBSAN,
|
||||
# ]
|
||||
# ),
|
||||
*Artifact.Config(
|
||||
name="*",
|
||||
type=Artifact.Type.S3,
|
||||
path=f"{Settings.TEMP_DIR}/output/*.deb",
|
||||
).parametrize(
|
||||
names=[
|
||||
ArtifactNames.DEB_AMD_DEBUG,
|
||||
ArtifactNames.DEB_AMD_ASAN,
|
||||
ArtifactNames.DEB_AMD_TSAN,
|
||||
ArtifactNames.DEB_AMD_MSAM,
|
||||
ArtifactNames.DEB_AMD_UBSAN,
|
||||
]
|
||||
),
|
||||
Artifact.Config(
|
||||
name=ArtifactNames.DEB_AMD_RELEASE,
|
||||
type=Artifact.Type.S3,
|
||||
path=f"{Settings.TEMP_DIR}/output/*.deb",
|
||||
),
|
||||
Artifact.Config(
|
||||
name=ArtifactNames.DEB_ARM_RELEASE,
|
||||
type=Artifact.Type.S3,
|
||||
path=f"{Settings.TEMP_DIR}/output/*.deb",
|
||||
),
|
||||
Artifact.Config(
|
||||
name=ArtifactNames.DEB_ARM_ASAN,
|
||||
type=Artifact.Type.S3,
|
||||
path=f"{Settings.TEMP_DIR}/output/*.deb",
|
||||
),
|
||||
]
|
||||
|
||||
|
||||
class Jobs:
|
||||
style_check_job = Job.Config(
|
||||
name=JobNames.STYLE_CHECK,
|
||||
runs_on=[RunnerLabels.CI_SERVICES],
|
||||
command="python3 ./ci/jobs/check_style.py",
|
||||
run_in_docker="clickhouse/style-test",
|
||||
)
|
||||
|
||||
fast_test_job = Job.Config(
|
||||
name=JobNames.FAST_TEST,
|
||||
runs_on=[RunnerLabels.BUILDER_AMD],
|
||||
command="python3 ./ci/jobs/fast_test.py",
|
||||
run_in_docker="clickhouse/fasttest",
|
||||
digest_config=Job.CacheDigestConfig(
|
||||
include_paths=[
|
||||
"./ci/jobs/fast_test.py",
|
||||
"./tests/queries/0_stateless/",
|
||||
"./src",
|
||||
],
|
||||
),
|
||||
)
|
||||
|
||||
build_jobs = Job.Config(
|
||||
name=JobNames.BUILD,
|
||||
runs_on=["...from params..."],
|
||||
requires=[],
|
||||
command="python3 ./ci/jobs/build_clickhouse.py --build-type {PARAMETER}",
|
||||
run_in_docker="clickhouse/binary-builder",
|
||||
timeout=3600 * 2,
|
||||
digest_config=Job.CacheDigestConfig(
|
||||
include_paths=[
|
||||
"./src",
|
||||
"./contrib/",
|
||||
"./CMakeLists.txt",
|
||||
"./PreLoad.cmake",
|
||||
"./cmake",
|
||||
"./base",
|
||||
"./programs",
|
||||
"./docker/packager/packager",
|
||||
"./rust",
|
||||
"./tests/ci/version_helper.py",
|
||||
"./ci/jobs/build_clickhouse.py",
|
||||
],
|
||||
),
|
||||
).parametrize(
|
||||
parameter=[
|
||||
"amd_debug",
|
||||
"amd_release",
|
||||
"amd_asan",
|
||||
"amd_tsan",
|
||||
"amd_msan",
|
||||
"amd_ubsan",
|
||||
"amd_binary",
|
||||
"arm_release",
|
||||
"arm_asan",
|
||||
],
|
||||
provides=[
|
||||
[
|
||||
ArtifactNames.CH_AMD_DEBUG,
|
||||
ArtifactNames.DEB_AMD_DEBUG,
|
||||
ArtifactNames.CH_ODBC_B_AMD_DEBUG,
|
||||
],
|
||||
[
|
||||
ArtifactNames.CH_AMD_RELEASE,
|
||||
ArtifactNames.DEB_AMD_RELEASE,
|
||||
ArtifactNames.CH_ODBC_B_AMD_RELEASE,
|
||||
],
|
||||
[
|
||||
ArtifactNames.CH_AMD_ASAN,
|
||||
ArtifactNames.DEB_AMD_ASAN,
|
||||
ArtifactNames.CH_ODBC_B_AMD_ASAN,
|
||||
# ArtifactNames.UNITTEST_AMD_ASAN,
|
||||
],
|
||||
[
|
||||
ArtifactNames.CH_AMD_TSAN,
|
||||
ArtifactNames.DEB_AMD_TSAN,
|
||||
ArtifactNames.CH_ODBC_B_AMD_TSAN,
|
||||
# ArtifactNames.UNITTEST_AMD_TSAN,
|
||||
],
|
||||
[
|
||||
ArtifactNames.CH_AMD_MSAN,
|
||||
ArtifactNames.DEB_AMD_MSAM,
|
||||
ArtifactNames.CH_ODBC_B_AMD_MSAN,
|
||||
# ArtifactNames.UNITTEST_AMD_MSAN,
|
||||
],
|
||||
[
|
||||
ArtifactNames.CH_AMD_UBSAN,
|
||||
ArtifactNames.DEB_AMD_UBSAN,
|
||||
ArtifactNames.CH_ODBC_B_AMD_UBSAN,
|
||||
# ArtifactNames.UNITTEST_AMD_UBSAN,
|
||||
],
|
||||
[
|
||||
ArtifactNames.CH_AMD_BINARY,
|
||||
# ArtifactNames.UNITTEST_AMD_BINARY,
|
||||
],
|
||||
[
|
||||
ArtifactNames.CH_ARM_RELEASE,
|
||||
ArtifactNames.DEB_ARM_RELEASE,
|
||||
ArtifactNames.CH_ODBC_B_ARM_RELEASE,
|
||||
],
|
||||
[
|
||||
ArtifactNames.CH_ARM_ASAN,
|
||||
ArtifactNames.DEB_ARM_ASAN,
|
||||
ArtifactNames.CH_ODBC_B_ARM_ASAN,
|
||||
],
|
||||
],
|
||||
runs_on=[
|
||||
[RunnerLabels.BUILDER_AMD],
|
||||
[RunnerLabels.BUILDER_AMD],
|
||||
[RunnerLabels.BUILDER_AMD],
|
||||
[RunnerLabels.BUILDER_AMD],
|
||||
[RunnerLabels.BUILDER_AMD],
|
||||
[RunnerLabels.BUILDER_AMD],
|
||||
[RunnerLabels.BUILDER_AMD],
|
||||
[RunnerLabels.BUILDER_ARM],
|
||||
[RunnerLabels.BUILDER_ARM],
|
||||
],
|
||||
)
|
||||
|
||||
stateless_tests_jobs = Job.Config(
|
||||
name=JobNames.STATELESS,
|
||||
runs_on=[RunnerLabels.BUILDER_AMD],
|
||||
command="python3 ./ci/jobs/functional_stateless_tests.py --test-options {PARAMETER}",
|
||||
# many tests expect to see "/var/lib/clickhouse" in various output lines - add mount for now, consider creating this dir in docker file
|
||||
run_in_docker="clickhouse/stateless-test+--security-opt seccomp=unconfined",
|
||||
digest_config=Job.CacheDigestConfig(
|
||||
include_paths=[
|
||||
"./ci/jobs/functional_stateless_tests.py",
|
||||
],
|
||||
),
|
||||
).parametrize(
|
||||
parameter=[
|
||||
"amd_debug,parallel",
|
||||
"amd_debug,non-parallel",
|
||||
"amd_release,parallel",
|
||||
"amd_release,non-parallel",
|
||||
"arm_asan,parallel",
|
||||
"arm_asan,non-parallel",
|
||||
],
|
||||
runs_on=[
|
||||
[RunnerLabels.BUILDER_AMD],
|
||||
[RunnerLabels.FUNC_TESTER_AMD],
|
||||
[RunnerLabels.BUILDER_AMD],
|
||||
[RunnerLabels.FUNC_TESTER_AMD],
|
||||
[RunnerLabels.BUILDER_ARM],
|
||||
[RunnerLabels.FUNC_TESTER_ARM],
|
||||
],
|
||||
requires=[
|
||||
[ArtifactNames.CH_AMD_DEBUG, ArtifactNames.CH_ODBC_B_AMD_DEBUG],
|
||||
[ArtifactNames.CH_AMD_DEBUG, ArtifactNames.CH_ODBC_B_AMD_DEBUG],
|
||||
[ArtifactNames.CH_AMD_RELEASE, ArtifactNames.CH_ODBC_B_AMD_RELEASE],
|
||||
[ArtifactNames.CH_AMD_RELEASE, ArtifactNames.CH_ODBC_B_AMD_RELEASE],
|
||||
[ArtifactNames.CH_ARM_ASAN, ArtifactNames.CH_ODBC_B_ARM_ASAN],
|
||||
[ArtifactNames.CH_ARM_ASAN, ArtifactNames.CH_ODBC_B_ARM_ASAN],
|
||||
],
|
||||
)
|
||||
|
||||
stateful_tests_jobs = Job.Config(
|
||||
name=JobNames.STATEFUL,
|
||||
runs_on=[RunnerLabels.BUILDER_AMD],
|
||||
command="python3 ./ci/jobs/functional_stateful_tests.py --test-options {PARAMETER}",
|
||||
# many tests expect to see "/var/lib/clickhouse"
|
||||
# some tests expect to see "/var/log/clickhouse"
|
||||
run_in_docker="clickhouse/stateless-test+--security-opt seccomp=unconfined",
|
||||
digest_config=Job.CacheDigestConfig(
|
||||
include_paths=[
|
||||
"./ci/jobs/functional_stateful_tests.py",
|
||||
],
|
||||
),
|
||||
).parametrize(
|
||||
parameter=[
|
||||
"amd_release,parallel",
|
||||
],
|
||||
runs_on=[
|
||||
[RunnerLabels.BUILDER_AMD],
|
||||
],
|
||||
requires=[
|
||||
[ArtifactNames.CH_AMD_DEBUG],
|
||||
],
|
||||
)
|
||||
|
# TODO: refactor job to be aligned with praktika style (remove wrappers, run in docker)
stress_test_jobs = Job.Config(
    name=JobNames.STRESS,
    runs_on=[RunnerLabels.BUILDER_ARM],
    command="python3 ./tests/ci/stress_check.py {PARAMETER}",
    digest_config=Job.CacheDigestConfig(
        include_paths=[
            "./ci/jobs/functional_stateful_tests.py",
        ],
    ),
).parametrize(
    parameter=[
        "arm_release",
    ],
    runs_on=[
        [RunnerLabels.FUNC_TESTER_ARM],
    ],
    requires=[
        [ArtifactNames.DEB_ARM_RELEASE],
    ],
)

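`CacheDigestConfig` keys a job's result on the contents of its `include_paths`: when nothing listed there changes between runs, the cached result can be reused and the job skipped. A rough sketch of how such a digest might be computed (hypothetical helper; the real implementation presumably also folds in the job definition itself):

```python
import hashlib
from pathlib import Path


def digest_paths(include_paths: list[str]) -> str:
    """Hash the contents of all files under include_paths into one cache key."""
    h = hashlib.sha256()
    for root in include_paths:
        p = Path(root)
        files = sorted(p.rglob("*")) if p.is_dir() else [p]
        for f in files:
            if f.is_file():
                h.update(str(f).encode())  # include the path, not just the bytes
                h.update(f.read_bytes())
    return h.hexdigest()[:12]
```
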
performance_test_job = Job.Config(
    name=JobNames.PERFORMANCE,
    runs_on=[RunnerLabels.FUNC_TESTER_ARM],
    command="./ci/jobs/scripts/performance_test.sh",
    run_in_docker="clickhouse/stateless-test",
    requires=[ArtifactNames.CH_ARM_RELEASE],
    # digest_config=Job.CacheDigestConfig(
    #     include_paths=[
    #         "./ci/jobs/fast_test.py",
    #         "./tests/queries/0_stateless/",
    #         "./src",
    #     ],
    # ),
)

compatibility_test_jobs = Job.Config(
    name=JobNames.COMPATIBILITY,
    runs_on=["#from param"],
    command="python3 ./tests/ci/compatibility_check.py --check-name {PARAMETER}",
    digest_config=Job.CacheDigestConfig(
        include_paths=[
            "./tests/ci/compatibility_check.py",
            "./docker/test/compatibility",
        ],
    ),
).parametrize(
    parameter=["amd_release", "arm_release"],
    runs_on=[
        [RunnerLabels.STYLE_CHECK_AMD],
        [RunnerLabels.STYLE_CHECK_ARM],
    ],
    requires=[[ArtifactNames.DEB_AMD_RELEASE], [ArtifactNames.DEB_ARM_RELEASE]],
)

@@ -1,176 +1,24 @@
-from praktika import Artifact, Job, Workflow
-from praktika.settings import Settings
+from praktika import Workflow
 
-from ci.settings.definitions import (
-    BASE_BRANCH,
-    DOCKERS,
-    SECRETS,
-    JobNames,
-    RunnerLabels,
-)
+from ci.workflows.defs import ARTIFACTS, BASE_BRANCH, DOCKERS, SECRETS, Jobs
-
-
-class ArtifactNames:
-    CH_AMD_DEBUG = "CH_AMD_DEBUG"
-    CH_AMD_RELEASE = "CH_AMD_RELEASE"
-    CH_ARM_RELEASE = "CH_ARM_RELEASE"
-    CH_ARM_ASAN = "CH_ARM_ASAN"
-
-
-style_check_job = Job.Config(
-    name=JobNames.STYLE_CHECK,
-    runs_on=[RunnerLabels.CI_SERVICES],
-    command="python3 ./ci/jobs/check_style.py",
-    run_in_docker="clickhouse/style-test",
-)
-
-fast_test_job = Job.Config(
-    name=JobNames.FAST_TEST,
-    runs_on=[RunnerLabels.BUILDER_AMD],
-    command="python3 ./ci/jobs/fast_test.py",
-    run_in_docker="clickhouse/fasttest",
-    digest_config=Job.CacheDigestConfig(
-        include_paths=[
-            "./ci/jobs/fast_test.py",
-            "./tests/queries/0_stateless/",
-            "./src",
-        ],
-    ),
-)
-
-build_jobs = Job.Config(
-    name=JobNames.BUILD,
-    runs_on=["...from params..."],
-    requires=[JobNames.FAST_TEST],
-    command="python3 ./ci/jobs/build_clickhouse.py --build-type {PARAMETER}",
-    run_in_docker="clickhouse/fasttest",
-    timeout=3600 * 2,
-    digest_config=Job.CacheDigestConfig(
-        include_paths=[
-            "./src",
-            "./contrib/",
-            "./CMakeLists.txt",
-            "./PreLoad.cmake",
-            "./cmake",
-            "./base",
-            "./programs",
-            "./docker/packager/packager",
-            "./rust",
-            "./tests/ci/version_helper.py",
-            "./ci/jobs/build_clickhouse.py",
-        ],
-    ),
-).parametrize(
-    parameter=["amd_debug", "amd_release", "arm_release", "arm_asan"],
-    provides=[
-        [ArtifactNames.CH_AMD_DEBUG],
-        [ArtifactNames.CH_AMD_RELEASE],
-        [ArtifactNames.CH_ARM_RELEASE],
-        [ArtifactNames.CH_ARM_ASAN],
-    ],
-    runs_on=[
-        [RunnerLabels.BUILDER_AMD],
-        [RunnerLabels.BUILDER_AMD],
-        [RunnerLabels.BUILDER_ARM],
-        [RunnerLabels.BUILDER_ARM],
-    ],
-)
-
-stateless_tests_jobs = Job.Config(
-    name=JobNames.STATELESS,
-    runs_on=[RunnerLabels.BUILDER_AMD],
-    command="python3 ./ci/jobs/functional_stateless_tests.py --test-options {PARAMETER}",
-    # many tests expect to see "/var/lib/clickhouse" in various output lines - add mount for now, consider creating this dir in docker file
-    run_in_docker="clickhouse/stateless-test+--security-opt seccomp=unconfined",
-    digest_config=Job.CacheDigestConfig(
-        include_paths=[
-            "./ci/jobs/functional_stateless_tests.py",
-        ],
-    ),
-).parametrize(
-    parameter=[
-        "amd_debug,parallel",
-        "amd_debug,non-parallel",
-        "amd_release,parallel",
-        "amd_release,non-parallel",
-        "arm_asan,parallel",
-        "arm_asan,non-parallel",
-    ],
-    runs_on=[
-        [RunnerLabels.BUILDER_AMD],
-        [RunnerLabels.FUNC_TESTER_AMD],
-        [RunnerLabels.BUILDER_AMD],
-        [RunnerLabels.FUNC_TESTER_AMD],
-        [RunnerLabels.BUILDER_ARM],
-        [RunnerLabels.FUNC_TESTER_ARM],
-    ],
-    requires=[
-        [ArtifactNames.CH_AMD_DEBUG],
-        [ArtifactNames.CH_AMD_DEBUG],
-        [ArtifactNames.CH_AMD_RELEASE],
-        [ArtifactNames.CH_AMD_RELEASE],
-        [ArtifactNames.CH_ARM_ASAN],
-        [ArtifactNames.CH_ARM_ASAN],
-    ],
-)
-
-stateful_tests_jobs = Job.Config(
-    name=JobNames.STATEFUL,
-    runs_on=[RunnerLabels.BUILDER_AMD],
-    command="python3 ./ci/jobs/functional_stateful_tests.py --test-options {PARAMETER}",
-    # many tests expect to see "/var/lib/clickhouse"
-    # some tests expect to see "/var/log/clickhouse"
-    run_in_docker="clickhouse/stateless-test+--security-opt seccomp=unconfined",
-    digest_config=Job.CacheDigestConfig(
-        include_paths=[
-            "./ci/jobs/functional_stateful_tests.py",
-        ],
-    ),
-).parametrize(
-    parameter=[
-        "amd_debug,parallel",
-    ],
-    runs_on=[
-        [RunnerLabels.BUILDER_AMD],
-    ],
-    requires=[
-        [ArtifactNames.CH_AMD_DEBUG],
-    ],
-)
-
 S3_BUILDS_BUCKET = "clickhouse-builds"
 
 workflow = Workflow.Config(
     name="PR",
     event=Workflow.Event.PULL_REQUEST,
     base_branches=[BASE_BRANCH],
     jobs=[
-        style_check_job,
-        fast_test_job,
-        *build_jobs,
-        *stateless_tests_jobs,
-        *stateful_tests_jobs,
-    ],
-    artifacts=[
-        Artifact.Config(
-            name=ArtifactNames.CH_AMD_DEBUG,
-            type=Artifact.Type.S3,
-            path=f"{Settings.TEMP_DIR}/build/programs/clickhouse",
-        ),
-        Artifact.Config(
-            name=ArtifactNames.CH_AMD_RELEASE,
-            type=Artifact.Type.S3,
-            path=f"{Settings.TEMP_DIR}/build/programs/clickhouse",
-        ),
-        Artifact.Config(
-            name=ArtifactNames.CH_ARM_RELEASE,
-            type=Artifact.Type.S3,
-            path=f"{Settings.TEMP_DIR}/build/programs/clickhouse",
-        ),
-        Artifact.Config(
-            name=ArtifactNames.CH_ARM_ASAN,
-            type=Artifact.Type.S3,
-            path=f"{Settings.TEMP_DIR}/build/programs/clickhouse",
-        ),
+        Jobs.style_check_job,
+        Jobs.fast_test_job,
+        *Jobs.build_jobs,
+        *Jobs.stateless_tests_jobs,
+        *Jobs.stateful_tests_jobs,
+        *Jobs.stress_test_jobs,
+        Jobs.performance_test_job,
+        *Jobs.compatibility_test_jobs,
     ],
+    artifacts=ARTIFACTS,
     dockers=DOCKERS,
     secrets=SECRETS,
     enable_cache=True,
@@ -181,13 +29,3 @@ workflow = Workflow.Config(
 WORKFLOWS = [
     workflow,
 ]
-
-
-# if __name__ == "__main__":
-#     # local job test inside praktika environment
-#     from praktika.runner import Runner
-#     from praktika.digest import Digest
-#
-#     print(Digest().calc_job_digest(amd_debug_build_job))
-#
-#     Runner().run(workflow, fast_test_job, docker="fasttest", local_run=True)

contrib/googletest vendored (2 changes)
@@ -1 +1 @@
-Subproject commit a7f443b80b105f940225332ed3c31f2790092f47
+Subproject commit 35d0c365609296fa4730d62057c487e3cfa030ff

@@ -18,7 +18,6 @@ add_library(_protobuf-mutator
 target_include_directories(_protobuf-mutator BEFORE PUBLIC "${LIBRARY_DIR}/src")
 # ... which includes <port/protobuf.h>
 target_include_directories(_protobuf-mutator BEFORE PUBLIC "${LIBRARY_DIR}")
-target_include_directories(_protobuf-mutator BEFORE PUBLIC "${ClickHouse_SOURCE_DIR}/contrib/protobuf/src")
 
 target_link_libraries(_protobuf-mutator ch_contrib::protobuf)
 
@@ -38,7 +38,7 @@ RUN arch=${TARGETARCH:-amd64} \
 # lts / testing / prestable / etc
 ARG REPO_CHANNEL="stable"
 ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
-ARG VERSION="24.10.3.21"
+ARG VERSION="24.11.1.2557"
 ARG PACKAGES="clickhouse-keeper"
 ARG DIRECT_DOWNLOAD_URLS=""
 
@@ -35,7 +35,7 @@ RUN arch=${TARGETARCH:-amd64} \
 # lts / testing / prestable / etc
 ARG REPO_CHANNEL="stable"
 ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
-ARG VERSION="24.10.3.21"
+ARG VERSION="24.11.1.2557"
 ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"
 ARG DIRECT_DOWNLOAD_URLS=""
 
@@ -28,7 +28,7 @@ RUN sed -i "s|http://archive.ubuntu.com|${apt_archive}|g" /etc/apt/sources.list
 
 ARG REPO_CHANNEL="stable"
 ARG REPOSITORY="deb [signed-by=/usr/share/keyrings/clickhouse-keyring.gpg] https://packages.clickhouse.com/deb ${REPO_CHANNEL} main"
-ARG VERSION="24.10.3.21"
+ARG VERSION="24.11.1.2557"
 ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"
 
 #docker-official-library:off
@@ -4,7 +4,7 @@ FROM ubuntu:22.04
 # ARG for quick switch to a given ubuntu mirror
 ARG apt_archive="http://archive.ubuntu.com"
 RUN sed -i "s|http://archive.ubuntu.com|$apt_archive|g" /etc/apt/sources.list
-ARG LLVM_APT_VERSION="1:19.1.4~*"
+ARG LLVM_APT_VERSION="1:19.1.4"
 
 ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=19
 
@@ -29,7 +29,7 @@ RUN apt-get update \
     && echo "deb https://apt.llvm.org/${CODENAME}/ llvm-toolchain-${CODENAME}-${LLVM_VERSION} main" >> \
         /etc/apt/sources.list \
     && apt-get update \
-    && apt-get install --yes --no-install-recommends --verbose-versions llvm-${LLVM_VERSION}>=${LLVM_APT_VERSION} \
+    && apt-get satisfy --yes --no-install-recommends "llvm-${LLVM_VERSION} (>= ${LLVM_APT_VERSION})" \
     && apt-get clean \
     && rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/*
 
docs/changelogs/v24.11.1.2557-stable.md (new file, 376 lines)
File diff suppressed because one or more lines are too long
@@ -6,7 +6,7 @@ sidebar_position: 10
 
 # Atomic
 
-It supports non-blocking [DROP TABLE](#drop-detach-table) and [RENAME TABLE](#rename-table) queries and atomic [EXCHANGE TABLES](#exchange-tables) queries. `Atomic` database engine is used by default.
+It supports non-blocking [DROP TABLE](#drop-detach-table) and [RENAME TABLE](#rename-table) queries and atomic [EXCHANGE TABLES](#exchange-tables) queries. `Atomic` database engine is used by default. Note that on ClickHouse Cloud, the `Replicated` database engine is used by default.
 
 ## Creating a Database {#creating-a-database}
 
@@ -11,11 +11,6 @@ MongoDB engine is read-only table engine which allows to read data from remote [
 Only MongoDB v3.6+ servers are supported.
 [Seed list(`mongodb+srv`)](https://www.mongodb.com/docs/manual/reference/glossary/#std-term-seed-list) is not yet supported.
 
-:::note
-If you're facing troubles, please report the issue, and try to use [the legacy implementation](../../../operations/server-configuration-parameters/settings.md#use_legacy_mongodb_integration).
-Keep in mind that it is deprecated, and will be removed in next releases.
-:::
-
 ## Creating a Table {#creating-a-table}
 
 ``` sql
Some files were not shown because too many files have changed in this diff.