Compare commits

...

464 Commits

Author SHA1 Message Date
AlexNsf
e4b2493855
Merge c2096575ba into 2c29b3d98c 2024-09-19 20:22:29 +08:00
Mikhail f. Shiryaev
2c29b3d98c
Merge pull request #69750 from ClickHouse/light-env
Make env_helper importable from any module
2024-09-19 10:50:11 +00:00
Nikita Taranov
61524aabb6
Merge pull request #69744 from ClickHouse/fix_pr_protocol
Fix parallel replicas protocol after #68424
2024-09-19 09:21:45 +00:00
Kseniia Sumarokova
39c95f6c73
Merge pull request #62730 from takakawa/bugfix/publication_name_error
[bugfix] MaterializedPostgreSQL Cannot  attach table  when pg dbname  contains "-", need doubleQuoting
2024-09-19 08:54:45 +00:00
Robert Schulze
dfdc25acc9
Merge pull request #69741 from rschu1ze/bump-pg
Bump libpq to v16.4
2024-09-19 08:48:32 +00:00
Nikolay Degterinsky
3efe136635
Merge pull request #69736 from evillique/fix-ttl
Fix METADATA_MISMATCH due to TTL with WHERE
2024-09-19 08:38:55 +00:00
Nikolay Degterinsky
efc0cec707
Merge pull request #69751 from evillique/keepermap-parameters
Save CREATE QUERY with KeeperMap engine with evaluated parameters
2024-09-19 08:38:10 +00:00
Robert Schulze
396abf7636
Merge pull request #69717 from gabrielmcg44/add-array-unscaled
Allow `arrayAUC` without scaling
2024-09-19 08:37:18 +00:00
Kseniia Sumarokova
f8fb4fb120
Merge pull request #69742 from ClickHouse/fix-s3-queue-ttl-sec
s3queue: fix tracked_files_ttl_sec
2024-09-19 08:35:08 +00:00
vdimir
1c6165f6ee
Merge pull request #69203 from Avogar/json-dynamic-hash
Fix uniq and GROUP BY for JSON/Dynamic types
2024-09-19 08:24:03 +00:00
Robert Schulze
71dd3d5cf6
Merge pull request #69746 from rschu1ze/sparse-pg
CI: Include PostgreSQL in sparse checkout script
2024-09-19 08:23:41 +00:00
Mikhail f. Shiryaev
cb503ec2ec
Make env_helper importable from any module 2024-09-19 10:22:05 +02:00
Yakov Olkhovskiy
c0c83236b6
Merge pull request #69570 from alexkats/fix-azure
Mask azure connection string sensitive info
2024-09-19 05:40:47 +00:00
Nikolay Degterinsky
14823f789b Save CREATE QUERY with KeeperMap engine with evaluated parameters 2024-09-19 00:56:59 +00:00
Yarik Briukhovetskyi
3eb5bc1a0f
Merge pull request #68963 from yariks5s/hive_partitioning_filtration
Filtering for hive partitioning
2024-09-18 22:16:58 +00:00
Robert Schulze
bb6db8926e
Some fixups 2024-09-18 20:48:36 +00:00
Nikolay Degterinsky
19353a74db Merge remote-tracking branch 'upstream/master' into fix-ttl 2024-09-18 20:01:49 +00:00
Kruglov Pavel
228ac44a92
Fix asan issue 2024-09-18 21:27:38 +02:00
Robert Schulze
d2de15871c
Include postgres in sparse checkout script 2024-09-18 19:15:13 +00:00
Robert Schulze
b94a7167a8
Merge pull request #69580 from rschu1ze/bump-libpqxx
Bump libpqxx to v7.7.5
2024-09-18 18:56:12 +00:00
robot-clickhouse
0fdd04254d Automatic style fix 2024-09-18 18:36:34 +00:00
Alex Katsman
b88cd79959 Mask azure connection string sensitive info 2024-09-18 18:32:22 +00:00
Nikita Taranov
818aac02c6 fix 2024-09-18 19:29:00 +01:00
Konstantin Bogdanov
64e58baba1
Merge pull request #69682 from ClickHouse/more-asserts-for-hashjoin
Try fix asserts failure in `HashJoin`
2024-09-18 18:20:27 +00:00
Gabriel Mendes
7f0b7a9158
add tests to cover all possible flows 2024-09-18 15:17:54 -03:00
Gabriel Mendes
006d14445e
remove stdout files 2024-09-18 14:59:58 -03:00
Kseniia Sumarokova
b5de8e622d
Merge branch 'master' into bugfix/publication_name_error 2024-09-18 19:58:20 +02:00
max-vostrikov
a3fe155579
Merge pull request #69737 from ClickHouse/test_printf
added some edge cases for printf tests
2024-09-18 17:49:57 +00:00
kssenii
373927d6a5 Fix tracked_files_ttl_sec 2024-09-18 19:25:18 +02:00
Gabriel Mendes
02fcd90a66
address some pr comments 2024-09-18 14:13:22 -03:00
Robert Schulze
e818b65dc0
Bump libpq to v16.4 2024-09-18 17:11:40 +00:00
Antonio Andelic
a997cfad2b
Merge pull request #68108 from ClickHouse/keeper-some-improvement2
Keeper improvements package
2024-09-18 16:35:57 +00:00
maxvostrikov
f4b4b3cc35 added some edge cases for printf tests
added some edge cases for printf tests
2024-09-18 17:22:36 +02:00
Nikolay Degterinsky
3315e87e1a Fix METADATA_MISMATCH due to TTL with WHERE 2024-09-18 15:15:52 +00:00
Gabriel Mendes
e0fc95c894
remove trailing spaces 2024-09-18 11:12:30 -03:00
Gabriel Mendes
b940171252
fix tests 2024-09-18 11:04:46 -03:00
Konstantin Bogdanov
cb24849396
Move assert 2024-09-18 15:24:48 +02:00
Gabriel Mendes
e3b207d217
fmt 2024-09-18 09:03:29 -03:00
Gabriel Mendes
4be8a0feba
fmt 2024-09-18 08:58:14 -03:00
Gabriel Mendes
4c72fb0e32
remove unnecessary file 2024-09-18 08:56:13 -03:00
Gabriel Mendes
8f350a7ec9
remove separate function 2024-09-18 08:52:58 -03:00
Kseniia Sumarokova
7fd2207626
Merge pull request #68504 from ClickHouse/miscellaneous-3
Miscellaneous
2024-09-18 11:21:26 +00:00
Antonio Andelic
4f73c677ac Merge branch 'master' into keeper-some-improvement2 2024-09-18 13:19:24 +02:00
Kseniia Sumarokova
69f45acfd7
Merge pull request #69672 from ClickHouse/s3queue-refactor-2
S3Queue small refactoring
2024-09-18 10:47:14 +00:00
Yarik Briukhovetskyi
4c78206d0a
Merge pull request #69718 from arruw/patch-1
Improve QuantileDD docs
2024-09-18 10:34:46 +00:00
Mikhail Artemenko
429e8ada79
Merge pull request #69690 from ClickHouse/remove_recursive_small_fixes
Remove recursive small fixes
2024-09-18 10:23:27 +00:00
mmav
06b49d18d9
Update quantileddsketch.md
Update function syntax
2024-09-18 10:45:10 +01:00
Kseniia Sumarokova
a17a8febf7
Merge pull request #69714 from tbragin/patch-15
Update README.md - Meetups
2024-09-18 09:20:05 +00:00
Robert Schulze
55529ec5a2
Merge pull request #69674 from rschu1ze/bump-pg
Bump libpq from v14.3 to v15.8
2024-09-18 09:13:13 +00:00
Yarik Briukhovetskyi
143d9f0201
Merge branch 'ClickHouse:master' into hive_partitioning_filtration 2024-09-18 11:11:04 +02:00
Antonio Andelic
3106653852 Fix watches 2024-09-18 10:47:40 +02:00
Yakov Olkhovskiy
82dbb3bb32
Merge pull request #69615 from ClickHouse/refactor-secret-finder
Unification of FunctionSecretArgumentsFinder
2024-09-18 08:17:52 +00:00
Gabriel Mendes
2218ebebbf
initial commit, tested function 2024-09-18 05:15:57 -03:00
Tanya Bragin
1bcdde3e62
Update README.md - Meetups 2024-09-17 19:48:48 -07:00
Alexey Katsman
2cef99c311
Merge pull request #69576 from bigo-sg/arrayzip_allow_empty
Allow empty arguments for arrayZip/arrayZipUnaligned
2024-09-17 21:25:29 +00:00
Robert Schulze
cd7a1a9288
Merge pull request #69684 from rschu1ze/disallow-alter-table-add-inv-idx
Prohibit `ALTER TABLE ... ADD INDEX ... TYPE` inverted if setting = 0
2024-09-17 21:18:02 +00:00
Alexander Gololobov
6597a8ed04
Merge pull request #69596 from ClickHouse/fix_dedup_in_parallel_replicas_announcement
Optimize complexity of part deduplication in parallel replicas announcement
2024-09-17 20:48:12 +00:00
Kseniia Sumarokova
3b901f49e5
Merge pull request #69673 from ClickHouse/update-assert
Update assert
2024-09-17 20:39:07 +00:00
Raúl Marín
958c3effae
Merge pull request #69705 from ClickHouse/revert-69376-marco-vb/setting-stop-insert-on-full-disk
Revert "Add user-level settings min_free_diskspace_bytes_to_throw_insert and min_free_diskspace_ratio_to_throw_insert"
2024-09-17 19:49:35 +00:00
Raúl Marín
474499d240
Revert "Add user-level settings min_free_diskspace_bytes_to_throw_insert and min_free_diskspace_ratio_to_throw_insert" 2024-09-17 21:48:19 +02:00
jsc0218
839f06035f
Merge pull request #69376 from marco-vb/marco-vb/setting-stop-insert-on-full-disk
Add user-level settings min_free_diskspace_bytes_to_throw_insert and min_free_diskspace_ratio_to_throw_insert
2024-09-17 18:46:43 +00:00
Vitaly Baranov
4f88ccb6a8
Merge pull request #69201 from NikBarykin/allow_arguments_in_custom_database_engines
Allow custom settings in database engine
2024-09-17 18:26:57 +00:00
Kruglov Pavel
a226567bc2
Merge pull request #69560 from Avogar/fix-prewhere-reorder
Keep original order of conditions during move to prewhere
2024-09-17 18:22:19 +00:00
Vitaly Baranov
fcda762a27
Merge pull request #69346 from vitlibar/restore-access-dependencies
Improve restoring of access entities' dependencies
2024-09-17 18:21:55 +00:00
Mikhail Artemenko
9c185374e4 fix level sorting 2024-09-17 18:14:47 +00:00
Mikhail Artemenko
13e82d6439 fix double visit of uncommitted changes 2024-09-17 17:45:04 +00:00
Mikhail f. Shiryaev
fdee35cccc
Merge pull request #69557 from ClickHouse/integration-prepull-kill-runner
Kill runner when integration tests fail to pre-pull
2024-09-17 17:27:20 +00:00
Konstantin Bogdanov
b08e727aef
Count allocated bytes from scratch after rerange 2024-09-17 19:02:10 +02:00
Miсhael Stetsyuk
9eba103c5e
Merge pull request #69670 from ClickHouse/sync-executeToDatabaseImpl-with-private-fork
sync changes to `InterpreterDropQuery::executeToDatabaseImpl` from the private fork
2024-09-17 16:54:15 +00:00
Yarik Briukhovetskyi
f52cdfb795
Merge branch 'ClickHouse:master' into hive_partitioning_filtration 2024-09-17 18:50:43 +02:00
Konstantin Bogdanov
a210f98819
Lint 2024-09-17 18:28:27 +02:00
kssenii
e574c49e25 Fix 2024-09-17 18:19:05 +02:00
Konstantin Bogdanov
7c5d55c6b2
Lint 2024-09-17 18:10:51 +02:00
Robert Schulze
665f362601
Prohibit ALTER TABLE ... ADD INDEX ... TYPE inverted if setting = 0 2024-09-17 16:10:03 +00:00
Konstantin Bogdanov
80259659ff
More asserts 2024-09-17 18:03:19 +02:00
Alexander Gololobov
574a26c63b Use adjacent_find to check adjacent parts 2024-09-17 17:56:44 +02:00
Alexander Gololobov
3674c97ebb Fix for using part after std::move from it 2024-09-17 17:49:02 +02:00
vdimir
8508b1ba37
Merge pull request #67966 from ClickHouse/vdimir/datetime64_constant_to_ast_f
Add test cases to 03217_datetime64_constant_to_ast
2024-09-17 14:56:32 +00:00
Alexander Gololobov
190d3f04c9 More optimal check for intrsecting parts in DefaultCoordinator init 2024-09-17 16:54:49 +02:00
Alexander Gololobov
aba7de5091 Verify that there are no intersecting parts in the resulting all_parts_to_read 2024-09-17 16:53:32 +02:00
Kseniia Sumarokova
d11abd634a
Update max_replication_slots 2024-09-17 16:37:08 +02:00
Antonio Andelic
8db3dddb3d Fix watches count and list request 2024-09-17 16:15:55 +02:00
Nikita Taranov
ffaf97a390
Merge pull request #68424 from ClickHouse/adaptive_parallel_replicas
Adaptive mark_segment_size for parallel replicas
2024-09-17 13:52:42 +00:00
Yarik Briukhovetskyi
3a7c68a052
Update src/Storages/VirtualColumnUtils.cpp
Co-authored-by: Kruglov Pavel <48961922+Avogar@users.noreply.github.com>
2024-09-17 15:39:26 +02:00
Antonio Andelic
452fde78c7
Merge pull request #69582 from ClickHouse/keeper-better-ssl-support
Support more advanced SSL options for Keeper internal communication
2024-09-17 13:32:18 +00:00
Kseniia Sumarokova
51fa9ebf8a
Merge pull request #68520 from ClickHouse/fix-bad-exception-messages
Fix bad exception messages
2024-09-17 13:29:53 +00:00
kssenii
e30ebfa23e Add mode validation 2024-09-17 15:24:02 +02:00
Kruglov Pavel
b21be2bc54
Merge pull request #68591 from bigo-sg/orc_dict_encode
Add settings `output_format_orc_dictionary_key_size_threshold` to allow user to enable dict encoding for string column in ORC output format
2024-09-17 13:22:44 +00:00
Alexander Gololobov
14736d95c5
Merge pull request #69606 from ClickHouse/check_time_limit_in_index_analysis
Check time limits while analyzing indexes
2024-09-17 13:14:08 +00:00
Yarik Briukhovetskyi
e8d50aa97f
review 2024-09-17 15:02:33 +02:00
NikBarykin
4b69d8e2ca Fix CE 2024-09-17 15:52:20 +03:00
robot-clickhouse
5ce8604869 Automatic style fix 2024-09-17 12:37:31 +00:00
Robert Schulze
813bcd896f
Bump to v18.8 2024-09-17 12:30:12 +00:00
kssenii
3a05282bce Update assert 2024-09-17 14:26:31 +02:00
Yakov Olkhovskiy
fd0c7a1c18 Merge branch 'master' into refactor-secret-finder 2024-09-17 12:16:19 +00:00
kssenii
88b22094c8 Update test 2024-09-17 14:11:17 +02:00
kssenii
3cb8160240 Merge remote-tracking branch 'origin' into bugfix/publication_name_error 2024-09-17 14:05:06 +02:00
Kseniia Sumarokova
4704fb8a3b
Merge branch 'master' into miscellaneous-3 2024-09-17 13:32:01 +02:00
Kseniia Sumarokova
0369aaea87
Merge pull request #69092 from 1on/master
Ability to limit columns for tables in MaterializedPostgreSQL
2024-09-17 11:08:05 +00:00
Kseniia Sumarokova
3f663f8e09
Merge pull request #68821 from joelynch/joelynch/disk-encrypted-missing-method
Fix zero copy bug with encrypted disk and UNFREEZE
2024-09-17 11:05:14 +00:00
Vitaly Baranov
f768717be8 Fix test. 2024-09-17 13:05:02 +02:00
Kseniia Sumarokova
64106e7b3c
Merge pull request #68696 from ucasfl/minor
Use proper ErrorCodes, replace NETWORK_ERROR by HDFS_ERROR
2024-09-17 10:57:30 +00:00
Kseniia Sumarokova
79223045c9
Merge pull request #68592 from Sergey2Gnezdilov/patch-1
Update nats.md
2024-09-17 10:56:37 +00:00
Vitaly Baranov
983b061b58 Corrections after review. 2024-09-17 12:56:10 +02:00
kssenii
3a299f382d Refactor 2024-09-17 12:52:45 +02:00
Vitaly Baranov
f8f72ccb00 Add test. 2024-09-17 12:10:31 +02:00
Vitaly Baranov
1ccd461c97 Fix restoring access entities dependant on existing ones. 2024-09-17 12:10:31 +02:00
vdimir
de308acfad
Merge pull request #69328 from ClickHouse/vdimir/integration-tests-randomize-settings
Randomize integration tests settings
2024-09-17 10:05:17 +00:00
Yarik Briukhovetskyi
cb92aaf968
fix 03232_file_path_normalizing 2024-09-17 11:26:13 +02:00
Michael Stetsyuk
5aaff37b36 sync changes to InterpreterDropQuery::executeToDatabaseImpl from the private fork 2024-09-17 09:16:52 +00:00
Robert Schulze
386d54cedf
Merge pull request #69564 from rschu1ze/untrash-libpq
Replace libpq code dump by postgresql fork and bump to v14.3
2024-09-17 09:15:38 +00:00
Yarik Briukhovetskyi
2c1c1a93c6
Merge pull request #69626 from yariks5s/small_tests_fix
Reorder some tests (follow-up to #69514)
2024-09-17 09:09:06 +00:00
Antonio Andelic
9f932fb453 Merge branch 'master' into keeper-better-ssl-support 2024-09-17 10:52:35 +02:00
Antonio Andelic
f3654b8fc8 Merge branch 'master' into keeper-some-improvement2 2024-09-17 10:35:38 +02:00
Antonio Andelic
676b6238d0 Update comments 2024-09-17 10:30:39 +02:00
Antonio Andelic
e876997ebb Merge branch 'master' into keeper-some-improvement2 2024-09-17 10:28:02 +02:00
Nikolai Kochetov
e7eaa01bb3
Merge pull request #69298 from ClickHouse/array-join-step-refactoring
Refactor ArrayJoin step.
2024-09-17 08:26:09 +00:00
Antonio Andelic
52dc9a54a7
Merge pull request #69627 from ClickHouse/keeper-fix-multi-no-auth
Fix Keeper multi request preprocessing with NOAUTH
2024-09-17 06:48:07 +00:00
Konstantin Bogdanov
b4a6d41b52
Merge pull request #69655 from ClickHouse/try-fix-02447_drop_database_replica
Try fix `02447_drop_database_replica`
2024-09-17 01:00:26 +00:00
Konstantin Bogdanov
a329150eef
Merge pull request #69601 from ClickHouse/fix-tsan-writebufferfromhttpserverresponse
Try to fix data race in `WriteBufferFromHTTPServerResponse`
2024-09-17 00:23:17 +00:00
pufit
7b94dc1813
Merge pull request #65277 from arthurpassos/multi_auth_methods
Multi auth methods
2024-09-16 23:53:33 +00:00
Konstantin Bogdanov
8c7c37de1d
temp-commit 2024-09-16 23:41:51 +02:00
Konstantin Bogdanov
6a26c5cf8e
Fix 2024-09-16 20:54:02 +02:00
Yarik Briukhovetskyi
0cdec0acf1
fix logical error 2024-09-16 19:13:30 +02:00
Yakov Olkhovskiy
19e2197582
fix 2024-09-16 10:38:28 -04:00
Yarik Briukhovetskyi
04f23332c3
fix filter issue 2024-09-16 15:59:22 +02:00
Yakov Olkhovskiy
d223c4547f
fix after master merge 2024-09-16 08:35:05 -04:00
Yakov Olkhovskiy
58993d3f3b
Merge branch 'master' into refactor-secret-finder 2024-09-16 08:33:16 -04:00
Yarik Briukhovetskyi
dc02b168a0
fix references + remove index 2024-09-16 13:53:47 +02:00
Alexander Gololobov
8507d209c0
Merge branch 'master' into check_time_limit_in_index_analysis 2024-09-16 13:36:51 +02:00
Alexander Gololobov
f5b9d5ad34 Test for checking time limit in index analysis 2024-09-16 13:34:40 +02:00
Alexander Gololobov
4af369fbc4 Failpoint for testing slow index analysis 2024-09-16 13:34:01 +02:00
Antonio Andelic
8cdcc431fe Fix 2024-09-16 13:30:17 +02:00
Antonio Andelic
f401eccc64 Fix Keeper multi request preprocessing with NOAUTH 2024-09-16 12:48:01 +02:00
Yarik Briukhovetskyi
6863dc7647
init 2024-09-16 12:48:01 +02:00
vdimir
1963e971f3
fix 2024-09-16 09:57:20 +00:00
vdimir
056c7af356
comment 2024-09-16 09:14:48 +00:00
vdimir
cf9200f1d0
Randomize integration tests settings 2024-09-16 09:14:44 +00:00
Antonio Andelic
187a717872 Merge branch 'master' into keeper-better-ssl-support 2024-09-16 09:17:30 +02:00
Arthur Passos
5f464a6d74 trigger ic 2024-09-15 07:09:49 -03:00
李扬
4412946532
Merge branch 'master' into orc_dict_encode 2024-09-15 17:25:20 +08:00
marco-vb
03737ddcab Reduced disk size on test for faster execution. 2024-09-14 22:24:17 +00:00
marco-vb
038f56cb5e Only make checks to stop inserts if settings are being used. 2024-09-14 21:04:12 +00:00
Nikita Taranov
63577507c9 fix build 2024-09-14 21:43:27 +01:00
Nikita Taranov
9eb78773a6 Merge branch 'master' into adaptive_parallel_replicas 2024-09-14 19:31:02 +01:00
Yakov Olkhovskiy
6f63a7b213 fix tidy 2024-09-14 16:46:48 +00:00
Yakov Olkhovskiy
56cfa74a14 fix 2024-09-14 13:32:52 +00:00
Yakov Olkhovskiy
dbb1d043fe unification of FunctionSecretArgumentsFinder 2024-09-14 05:46:08 +00:00
Yarik Briukhovetskyi
7d5203f8a7
add resize for partitioning_columns 2024-09-13 21:38:48 +02:00
Yarik Briukhovetskyi
0d1d750437
fix crash 2024-09-13 20:43:51 +02:00
Yarik Briukhovetskyi
ad31d86a15
move the block inserting 2024-09-13 19:58:19 +02:00
marco-vb
56f3030b17 Black formatting python test. 2024-09-13 17:32:33 +00:00
Yarik Briukhovetskyi
991279e5c6
revert 2024-09-13 19:23:00 +02:00
Alexander Gololobov
31ddfc6f5f Check time limit while analyzing indexes 2024-09-13 19:19:21 +02:00
Marco Vilas Boas
ddf2e07fd0
Merge branch 'ClickHouse:master' into marco-vb/setting-stop-insert-on-full-disk 2024-09-13 18:17:44 +01:00
marco-vb
5cc12ca9ee Added integration testing for newly implemented settings. 2024-09-13 17:16:16 +00:00
Alexander Gololobov
e13247b67e Fix clang-18 build 2024-09-13 16:50:43 +02:00
Yarik Briukhovetskyi
c184aae686
review 2024-09-13 16:40:01 +02:00
Yarik Briukhovetskyi
14a6b0422b
disable optimize_count_from_files 2024-09-13 16:33:17 +02:00
Alexander Gololobov
2650a20628 Make dedup logic O(n*log(n)) instead of O(n^2) 2024-09-13 16:21:17 +02:00
Antonio Andelic
9a31fc385d Fixes 2024-09-13 15:58:17 +02:00
marco-vb
ddc506a677 Corrected implementation for check of new settings and fix lint of settings change history. 2024-09-13 13:48:42 +00:00
avogar
2812953a8a Try to fix tests 2024-09-13 13:37:42 +00:00
Antonio Andelic
492461271b Merge branch 'master' into keeper-better-ssl-support 2024-09-13 14:44:12 +02:00
Antonio Andelic
3c47f3df4b Support more advanced SSL options for Keeper internal communication 2024-09-13 14:23:01 +02:00
Arthur Passos
54e75fd4dc fix black 2024-09-13 09:04:14 -03:00
Arthur Passos
ae6e236acf update recently introduced test 2024-09-13 08:59:59 -03:00
李扬
11c7cdabf8
Merge branch 'ClickHouse:master' into orc_dict_encode 2024-09-13 18:26:20 +08:00
李扬
71553022e0
fix 03230_array_zip_unaligned 2024-09-13 18:16:13 +08:00
李扬
53e1975833
fix 01045_array_zip 2024-09-13 18:15:47 +08:00
Marco Vilas Boas
8299b31d47
Merge branch 'master' into marco-vb/setting-stop-insert-on-full-disk 2024-09-13 10:44:04 +01:00
Robert Schulze
97c8d2897c
Bump to v14.13 2024-09-13 09:36:19 +00:00
Arthur Passos
2c165096cd Merge branch 'master' into multi_auth_methods 2024-09-13 05:18:11 -03:00
李扬
11d2963497
fix style 2024-09-13 11:56:47 +08:00
taiyang-li
f9335a2fd5 update uts 2024-09-13 10:50:50 +08:00
taiyang-li
8a89d7b2b9 allow empty inputs for arrayZip or arrayZipUnaligned 2024-09-13 10:46:38 +08:00
Robert Schulze
aab0d3dd9e
Bump to 7.7.5 2024-09-12 19:42:32 +00:00
Robert Schulze
5a34b9f24e
Bump to 7.6.1 2024-09-12 19:14:41 +00:00
Robert Schulze
a0a4858e00
Scratch build of libpqxx at 7.5.3 + patches 2024-09-12 18:55:35 +00:00
avogar
9c1f4f4545 Remove bad files 2024-09-12 17:21:28 +00:00
avogar
2e82e06330 Update tests 2024-09-12 16:59:25 +00:00
Robert Schulze
4963ab603c
Switch Postgres to 2f7bae2f92, adjust build description, delete libqp
Based on the code state of July 2021 which Kseniia copied over here:
https://github.com/ClickHouse/libpq/pull/5 (found out the hard way)
2024-09-12 15:53:26 +00:00
Nikita Taranov
7b2810bea2 Merge branch 'master' into adaptive_parallel_replicas 2024-09-12 16:51:15 +01:00
Robert Schulze
e2bfce66dd
Add postgres as a submodule 2024-09-12 15:20:10 +00:00
Robert Schulze
0bb3967d14
Remove obsolete target_include_directories (/config does not exist) 2024-09-12 15:20:10 +00:00
avogar
401a3d0931 Add test 2024-09-12 15:10:29 +00:00
1on
51d770fa7a Ability to limit columns for tables in MaterializedPostgreSQL 2024-09-12 18:10:19 +03:00
avogar
beffb92411 Keep original order of conditions during move to prewhere 2024-09-12 14:52:09 +00:00
Robert Schulze
877002f689
3% more aesthetic build description 2024-09-12 14:42:17 +00:00
Nikita Taranov
16f93ea1b3 revive separate protocol versioning for PRs 2024-09-12 15:40:51 +01:00
Nikita Taranov
1e3bc6d359 log mark_segment_size on initiator 2024-09-12 15:15:57 +01:00
Robert Schulze
bde54b96f7
Move ENABLE_LIBPQXX in a central place 2024-09-12 14:03:22 +00:00
Robert Schulze
4a9b376e2a
Fix typo 2024-09-12 14:01:26 +00:00
marco-vb
562c23eac6 Add new settings to settings change history. 2024-09-12 13:28:49 +00:00
Mikhail f. Shiryaev
8d5babf65f
Kill the runner process if integration tests fail to pre-pull 2024-09-12 15:26:21 +02:00
Mikhail f. Shiryaev
99ede620be
Add kill_ci_runner to ci_utils, will allow restarts 2024-09-12 15:24:25 +02:00
Joe Lynch
92351a67e8
Merge branch 'master' into joelynch/disk-encrypted-missing-method 2024-09-12 11:58:41 +02:00
Marco Vilas Boas
f292767778
Merge branch 'master' into marco-vb/setting-stop-insert-on-full-disk 2024-09-12 10:56:32 +01:00
marco-vb
7d36f3b764 Implemented checks for new settings. 2024-09-12 09:53:56 +00:00
marco-vb
21bd47f09e Add settings min_free_disk_bytes_to_throw_insert and min_free_disk_ratio_to_throw_insert and update documentation. 2024-09-12 09:45:43 +00:00
Nikita Taranov
fc83c1c7a2 use final task size in segment size calculation 2024-09-11 20:20:18 +01:00
Joe Lynch
f378047f30
Properly clean up 2024-09-11 16:02:36 +02:00
Yarik Briukhovetskyi
e8cec05d08
shellcheck 2024-09-11 13:52:20 +02:00
Yarik Briukhovetskyi
2876a4e714
add retries 2024-09-11 13:32:12 +02:00
李扬
0de3b1dacb
Merge branch 'ClickHouse:master' into orc_dict_encode 2024-09-11 12:08:06 +08:00
Nikita Taranov
8d5d7dd83a fix wording 2024-09-10 17:18:27 +01:00
Joe Lynch
35df5ff28e
Add test for SYSTEM UNFREEZE with zero_copy 2024-09-10 17:52:02 +02:00
Nikita Taranov
61ebcdc2ed fix 2024-09-10 12:07:44 +01:00
Nikita Taranov
1df897db27 Merge branch 'master' into adaptive_parallel_replicas 2024-09-10 12:03:46 +01:00
Nikita Taranov
8cdc10cf65 fix settings changes 2024-09-09 18:11:03 +01:00
avogar
4ece895b41 Merge branch 'master' of github.com:ClickHouse/ClickHouse into json-dynamic-hash 2024-09-09 10:54:18 +00:00
Antonio Andelic
65019c4b9b Merge branch 'master' into keeper-some-improvement2 2024-09-07 20:59:04 +02:00
Antonio Andelic
190339c4e6 Fix snapshot sync 2024-09-07 17:34:59 +02:00
Antonio Andelic
5a86371b02 Merge branch 'master' into keeper-some-improvement2 2024-09-07 11:32:44 +02:00
Igor Nikonov
f5d49f8e10
Merge branch 'master' into adaptive_parallel_replicas 2024-09-06 23:08:30 +02:00
Yarik Briukhovetskyi
a903e1a726
remove logging + fixing bug 2024-09-06 20:24:18 +02:00
Nikolai Kochetov
ee304c7fc3 Fix tidy 2024-09-06 16:02:47 +00:00
Antonio Andelic
03c7f3817b Correct lock order 2024-09-06 15:41:04 +02:00
Nikolai Kochetov
fdbf8e71ab Stable explain 2024-09-06 10:30:09 +00:00
Antonio Andelic
f44eaa808d Merge branch 'master' into keeper-some-improvement2 2024-09-06 09:35:56 +02:00
Antonio Andelic
e388f6f99b Remove useless log 2024-09-06 09:35:02 +02:00
Kseniia Sumarokova
92507d9938
Update nats.md 2024-09-05 17:10:12 +02:00
Kseniia Sumarokova
6170c15c90
Merge branch 'master' into minor 2024-09-05 17:08:30 +02:00
Nikolai Kochetov
d23145fd19
Update emptyArrayToSingle.h 2024-09-05 16:59:14 +02:00
Nikolai Kochetov
fb8999a885 Remove commented code. 2024-09-05 14:44:51 +00:00
Nikolai Kochetov
03ac70f988 Fising build. 2024-09-05 14:41:06 +00:00
Nikolai Kochetov
5f5acd3c44 Refactor ArrayJoin step. 2024-09-05 14:34:30 +00:00
Yarik Briukhovetskyi
2fa6be55ff
tests fix 2024-09-04 17:02:01 +02:00
Antonio Andelic
a3e233a537 Fix watch 2024-09-04 15:19:56 +02:00
Yarik Briukhovetskyi
8896d1b78b
try to fix tests 2024-09-04 14:46:29 +02:00
avogar
f495a4f431 Merge branch 'master' of github.com:ClickHouse/ClickHouse into json-dynamic-hash 2024-09-04 11:21:21 +00:00
Antonio Andelic
955412888c Merge branch 'master' into keeper-some-improvement2 2024-09-04 11:30:29 +02:00
Antonio Andelic
9633563fbd Fix 2024-09-04 11:30:05 +02:00
Arthur Passos
d9a4964cd9 Merge branch 'master' into multi_auth_methods 2024-09-03 15:01:59 -03:00
avogar
a44b3d0268 Fix sorted typed paths 2024-09-03 17:31:07 +00:00
NikBarykin
83854cf293 Make method of DatabaseFactory 2024-09-03 19:13:05 +03:00
NikBarykin
e874c6e1de Fix typo 2024-09-03 18:58:39 +03:00
avogar
f1377b0b4a Fix uniq and GROUP BY for JSON/Dynamic types 2024-09-03 14:10:28 +00:00
Yarik Briukhovetskyi
f688b903db
empty commit 2024-09-03 15:58:22 +02:00
Yarik Briukhovetskyi
21f9669836
empty commit 2024-09-03 15:41:43 +02:00
Yarik Briukhovetskyi
1a386ae4d5
Merge branch 'ClickHouse:master' into hive_partitioning_filtration 2024-09-03 15:35:31 +02:00
Yarik Briukhovetskyi
24f4e87f8b
revert debugging in tests 2024-09-03 15:20:22 +02:00
NikBarykin
03ccf05d14 Allow custom settings in database engine 2024-09-03 16:14:15 +03:00
Antonio Andelic
79fc8d67ad More fixes 2024-09-02 15:46:04 +02:00
Arthur Passos
0bf95655aa add new line 2024-09-02 10:45:37 -03:00
Arthur Passos
15a67f10dc add cleanup code to existing tests so flaky tests pass.. 2024-09-02 09:59:28 -03:00
Antonio Andelic
596ba574e3 Merge branch 'master' into keeper-some-improvement2 2024-09-02 09:31:02 +02:00
Antonio Andelic
e968984d17 More changes 2024-09-02 08:25:17 +02:00
Arthur Passos
a22f9fd91f Merge branch 'master' into multi_auth_methods 2024-08-30 16:59:40 -03:00
Arthur Passos
7c766e7458 split tests again and randomize user name to be able to parallel 2024-08-30 11:41:44 -03:00
Yarik Briukhovetskyi
620640a042
just to test 2024-08-30 12:58:21 +02:00
Yarik Briukhovetskyi
ec469a117d
testing 2024-08-30 00:56:35 +02:00
Yarik Briukhovetskyi
7a879980d8
try to fix tests 2024-08-29 18:25:11 +02:00
Arthur Passos
f9b845486a Merge branch 'master' into multi_auth_methods 2024-08-29 13:12:19 -03:00
Yarik Briukhovetskyi
2adc61c215
add flush logs 2024-08-29 16:39:22 +02:00
Arthur Passos
7000f214ab add head -n 1 to make sure only on occurrence of error code is grepped 2024-08-29 09:47:18 -03:00
Yarik Briukhovetskyi
afc4d08aad
add no-fasttest tag 2024-08-29 13:31:05 +02:00
李扬
3d04f3d33a
Merge branch 'ClickHouse:master' into orc_dict_encode 2024-08-29 10:16:06 +08:00
yariks5s
edc5d8dd92 fix path 2024-08-28 23:15:01 +00:00
yariks5s
d6b2a9d534 CLICKHOUSE_LOCAL -> CLIENT 2024-08-28 22:32:44 +00:00
Arthur Passos
cb6d142947 remove non clustered tests to make it faster :D 2024-08-28 18:09:30 -03:00
Arthur Passos
a690539935 fix test 2024-08-28 18:03:27 -03:00
yariks5s
dc97bd6b92 review + testing the code 2024-08-28 17:22:47 +00:00
Arthur Passos
b2795f06dc add extra line 2024-08-28 14:02:09 -03:00
Arthur Passos
7879915493 merge test fils now that they are faster 2024-08-28 14:01:24 -03:00
Arthur Passos
313b6b533f further optimize test by using http client instead of clickhouse-client 2024-08-28 13:54:31 -03:00
Arthur Passos
9f7f6c6f93 use curl client for create/drop/alter queries to make test run faster 2024-08-28 12:54:22 -03:00
李扬
553c309477
Merge branch 'master' into orc_dict_encode 2024-08-28 21:00:18 +08:00
taiyang-li
ae582120ae change as request 2024-08-28 20:56:33 +08:00
Arthur Passos
b29e00b838 add space 2024-08-28 09:17:04 -03:00
Arthur Passos
ae8d90f6b8 no replicated database 2024-08-28 09:05:25 -03:00
Arthur Passos
41a4a97ca3 no parallel 2024-08-27 19:57:59 -03:00
Arthur Passos
50b3d3172c add no fast test 2024-08-27 16:48:10 -03:00
Arthur Passos
c298b20ba9 extract some tests into sql as an attempt to make them run faster for flaky check.. 2024-08-27 16:17:35 -03:00
Arthur Passos
0b29aef1a0 remove extra dot from ex message 2024-08-27 15:25:41 -03:00
Yarik Briukhovetskyi
60c6eb2610
trying to fix the test 2024-08-27 19:42:47 +02:00
Yarik Briukhovetskyi
9133505952
fix the test 2024-08-27 19:16:05 +02:00
Yarik Briukhovetskyi
2741bf00e4 chmod +x 2024-08-27 16:53:14 +00:00
Yarik Briukhovetskyi
4eca00a666
fix style 2024-08-27 18:10:41 +02:00
Arthur Passos
a0ab22e031 style 2024-08-27 13:01:10 -03:00
Arthur Passos
6f806124a3 Merge branch 'master' into multi_auth_methods 2024-08-27 12:24:53 -03:00
Arthur Passos
a65d175a81 change parsing logic a bit 2024-08-27 12:23:42 -03:00
Yarik Briukhovetskyi
c6804122cb
fix shell 2024-08-27 16:52:29 +02:00
Yarik Briukhovetskyi
189cbe25fe
init 2024-08-27 16:28:18 +02:00
taiyang-li
aa4688a982 fix style 2024-08-27 12:25:22 +08:00
taiyang-li
7aaa0289e1 revert files 2024-08-26 14:58:57 +08:00
taiyang-li
d6df83d561 add uts about orc string encode 2024-08-26 14:57:51 +08:00
taiyang-li
1011f8ef9c add uts about orc string encode 2024-08-26 14:45:41 +08:00
Arthur Passos
cb7ef910e2
Update 03174_multiple_authentication_methods.sh 2024-08-23 13:50:38 -03:00
joelynch
1c6976d7a5
Fix zero copy bug with encrypted disk and UNFREEZE
When running UNFREEZE with encrypted disk, zookeeper
metadata would be erroneously removed here
src/Storages/StorageReplicatedMergeTree.cpp#L10418.
2024-08-23 17:04:39 +02:00
Arthur Passos
43e9a7ba4b Merge branch 'master' into multi_auth_methods 2024-08-22 10:57:23 -03:00
Arthur Passos
62ce2999ae minor grammar 2024-08-22 10:00:20 -03:00
Arthur Passos
91cceccb4b add setting docs 2024-08-22 09:48:53 -03:00
Arthur Passos
13d5b029a4 make max_number_of_authentication_methods=0 unlimited 2024-08-22 09:41:10 -03:00
flynn
ca40da5c03 Proper ErrorCodes 2024-08-22 09:03:02 +00:00
taiyang-li
b0a0988c5b change as request 2024-08-22 10:46:44 +08:00
Arthur Passos
d7d40db036 make sure all alters except id related are allowed even if state of user is invalid afterwards 2024-08-21 19:07:54 -03:00
Arthur Passos
4fd19c3ad9 do not allow reset auth to new to be used along with add identified clauses 2024-08-20 17:16:12 -03:00
Arthur Passos
f0223aedde add on cluster tests 2024-08-20 16:28:12 -03:00
Arthur Passos
1980959c8b exception message 2024-08-20 14:27:53 -03:00
Arthur Passos
27ee4dd611 throw syntax error instead of bad arguments in case of add not identified 2024-08-20 14:17:05 -03:00
Sergey (Finn) Gnezdilov
21e64f2aa9
Update nats.md
"kafka_handle_error_mode" fixed on "nats_handle_error_mode"
2024-08-20 11:33:13 +03:00
taiyang-li
03ab625265 enable string dict encoding in orc output format 2024-08-20 15:47:26 +08:00
taiyang-li
dbd4ee44ed enable dict encoding in orc writer 2024-08-20 14:09:14 +08:00
Arthur Passos
623c507e5f fix in the right place.. 2024-08-19 16:42:48 -03:00
Arthur Passos
01f5337f69 fix serialization to fix on cluster commands 2024-08-19 15:13:39 -03:00
Nikita Taranov
d4a3a033b0
Merge branch 'master' into adaptive_parallel_replicas 2024-08-19 12:48:39 +02:00
Alexey Milovidov
165d08f088 Fix bad exception messages 2024-08-19 05:54:37 +02:00
Alexey Milovidov
e0dbc53b58 Merge branch 'master' into miscellaneous-3 2024-08-19 01:18:48 +02:00
Alexey Milovidov
f97551e2ad Fix tests 2024-08-18 22:17:16 +02:00
Nikita Taranov
c252b3c8b0 fix build 2024-08-18 18:29:48 +01:00
Nikita Taranov
30229a3bfd better 2024-08-18 17:44:16 +01:00
Nikita Taranov
8a0f41da7a
Merge branch 'master' into adaptive_parallel_replicas 2024-08-18 17:55:29 +02:00
Nikita Taranov
628a4300ba fix 2024-08-18 16:53:00 +01:00
Alexey Milovidov
f88b5988c1 Update test 2024-08-18 09:44:39 +02:00
Alexey Milovidov
4bb2f7b3f6 Miscellaneous 2024-08-18 09:09:58 +02:00
Alexey Milovidov
95edca513c Fix tests 2024-08-18 05:43:01 +02:00
Alexey Milovidov
5004e4d2cc Miscellaneous 2024-08-18 03:27:42 +02:00
Arthur Passos
accade2390 Merge branch 'master' into multi_auth_methods 2024-08-17 10:26:11 -03:00
Nikita Taranov
e7fc89ba26 add bw-compatibility test 2024-08-16 23:23:03 +01:00
Arthur Passos
770804ffdc change default setting value to 100 2024-08-16 08:57:11 -03:00
vdimir
49ce2c7619
Merge branch 'master' into vdimir/datetime64_constant_to_ast_f 2024-08-15 21:41:56 +02:00
Nikita Taranov
80d985a690 add setting change 2024-08-15 19:11:43 +01:00
Nikita Taranov
891f9c5358 fix typo 2024-08-15 18:44:37 +01:00
Nikita Taranov
cb0335446e impl 2024-08-15 18:34:06 +01:00
Arthur Passos
70e7c4e63d fix black with script 2024-08-15 12:21:06 -03:00
Arthur Passos
12e6645058 fix black 2024-08-15 12:19:36 -03:00
Arthur Passos
714a4d871c add integ tests for defa7ult value 2024-08-15 12:18:09 -03:00
Arthur Passos
72f1695014 add integ tests for new setting 2024-08-15 12:03:30 -03:00
Arthur Passos
17c1cef52b add server setting 2024-08-15 10:30:33 -03:00
Arthur Passos
3247f3ad08 make sure reset authentication methods can only be used on alter queries 2024-08-14 12:44:43 -03:00
Arthur Passos
026fa0a7fd fix leading id method without WITH and with type specified being allowed 2024-08-14 12:39:27 -03:00
vdimir
64e10b2dda
Merge branch 'master' into vdimir/datetime64_constant_to_ast_f 2024-08-13 17:00:51 +02:00
Arthur Passos
e8a40d9d52 Merge branch 'master' into multi_auth_methods 2024-08-13 10:30:01 -03:00
Arthur Passos
9abc001296 fix trailing comma issue 2024-08-13 10:17:05 -03:00
Antonio Andelic
c61fc591c4 Use functions instead of classes 2024-08-13 11:33:17 +02:00
Antonio Andelic
dcbc590302 Merge branch 'master' into keeper-some-improvement2 2024-08-13 09:01:10 +02:00
Arthur Passos
4c6aca2eed add docs about reset auth method 2024-08-12 15:58:10 -03:00
Arthur Passos
64d50d6e5b Merge branch 'master' into multi_auth_methods 2024-08-12 15:42:37 -03:00
Antonio Andelic
b6c3619543 Whitespace 2024-08-09 15:41:11 +02:00
Antonio Andelic
b2172af817 Merge branch 'master' into keeper-some-improvement2 2024-08-09 14:50:52 +02:00
vdimir
ef40cc3bae
Merge branch 'master' into vdimir/datetime64_constant_to_ast_f 2024-08-08 12:12:22 +02:00
vdimir
f5c07b8938
Add test cases to 03217_datetime64_constant_to_ast 2024-08-07 09:43:13 +00:00
Antonio Andelic
5ea4844d69 Merge branch 'master' into keeper-some-improvement2 2024-08-07 11:26:33 +02:00
Arthur Passos
fa6564dbe3 Merge branch 'master' into multi_auth_methods 2024-07-30 10:32:13 -03:00
Arthur Passos
ae72bd57f2 try to fix docs? is it broken at all 2024-07-29 09:50:54 -03:00
Arthur Passos
c433d9cfdb retrigger ci 2024-07-28 12:33:15 -03:00
Arthur Passos
f5ee7aaf26 update some integ tests 2024-07-28 10:04:38 -03:00
Arthur Passos
352b502559 update logout session 2024-07-26 14:47:37 -03:00
Arthur Passos
546cca1251 small comment update 2024-07-26 14:42:58 -03:00
Arthur Passos
456613e7fa minor doc change 2024-07-26 14:40:50 -03:00
Arthur Passos
5a45563f1b add note about downgrading 2024-07-26 14:36:58 -03:00
Arthur Passos
a21529f66a style 2024-07-26 11:05:05 -03:00
Arthur Passos
921947e368 fix tests and add new one 2024-07-26 10:45:25 -03:00
Arthur Passos
93cbd4bf9a update test 2024-07-25 17:44:38 -03:00
Arthur Passos
77d46aad05 change auth_params type from string to array<string> 2024-07-25 14:24:26 -03:00
Arthur Passos
0404a8e800 make auth_type a vector of int8_t and auth_params a json array 2024-07-25 14:14:31 -03:00
Arthur Passos
8eda32600f remove todo 2024-07-25 09:13:09 -03:00
Arthur Passos
f3f9d5f4de add notes about no_password behavior 2024-07-25 09:07:13 -03:00
Arthur Passos
0204de640f fix test that was recently introduced 2024-07-23 09:32:59 -03:00
Arthur Passos
0b151bbe8f fix conflict 2024-07-22 14:31:57 -03:00
Antonio Andelic
48e7057200 Merge branch 'master' into keeper-some-improvement2 2024-07-22 16:51:20 +02:00
Arthur Passos
f2c22408da
Merge branch 'master' into multi_auth_methods 2024-07-18 17:13:50 -03:00
Arthur Passos
006f20858a remove dupl inc 2024-07-15 17:01:02 -03:00
Arthur Passos
cc02ebca75 Merge branch 'master' into multi_auth_methods 2024-07-15 10:55:40 -03:00
Arthur Passos
3ab6760412 update docs 2024-07-12 15:21:29 -03:00
Arthur Passos
ca0b821aaf fix a few ut 2024-07-12 14:26:24 -03:00
Arthur Passos
2e9b7e8334 Removed stale comment and added log_info about skipped auth methods 2024-07-12 10:20:59 -03:00
Arthur Passos
b428327c6e trigger ci 2024-07-12 09:20:36 -03:00
Arthur Passos
8e0e2cec28 fix hilite test 2024-07-11 18:06:40 -03:00
Arthur Passos
300d4ae593 add comment back and missing file 2024-07-11 17:23:37 -03:00
Arthur Passos
ee62318348 rename test and add a few more 2024-07-11 17:21:33 -03:00
Arthur Passos
29e0d5c1e3 use comma separated instead of multiple add id add id 2024-07-11 17:04:28 -03:00
Arthur Passos
91e8ef6776 do not allow no_pwd to co-exist with other auth methods 2024-07-10 09:11:02 -03:00
Antonio Andelic
5a96290cce Merge branch 'master' into keeper-some-improvement2 2024-07-10 12:45:43 +02:00
Arthur Passos
acc2249288 do not allow no_password to co-exist with other auth methods 2024-07-09 22:05:04 -03:00
Arthur Passos
e36776551e remove unused extern 2024-07-09 15:58:57 -03:00
Arthur Passos
cd0145113f allow other auth methods to be used if no_password is setup but not allowed 2024-07-09 15:05:24 -03:00
Arthur Passos
5a2b0ea1cc simplify formatting of astauth 2024-07-05 16:08:10 -03:00
Arthur Passos
cc13783acd Merge branch 'master' into multi_auth_methods 2024-07-02 09:11:20 -03:00
Antonio Andelic
7e22af06f1 Merge branch 'master' into keeper-some-improvement2 2024-07-02 09:01:48 +02:00
Arthur Passos
6f020901a8 add a few comments 2024-06-29 18:39:01 -03:00
Arthur Passos
306d55f636 add some docs 2024-06-29 18:29:05 -03:00
Arthur Passos
27c9bb9b10 add missing reference qualifier 2024-06-28 17:06:06 -03:00
Arthur Passos
1cd253e05f fix integ test 2024-06-28 12:48:38 -03:00
Arthur Passos
e9360221b7 trigger ci 2024-06-28 08:40:32 -03:00
Arthur Passos
eb8a18304f rename somem stuff 2024-06-27 17:08:14 -03:00
Arthur Passos
e111958762 remmovev optional from auth data in session object 2024-06-27 14:33:28 -03:00
Arthur Passos
43a9194739 remove optional from auth_result 2024-06-27 14:21:29 -03:00
Arthur Passos
50da0cb732 remove definition of logical_error from auth.cpp 2024-06-27 12:14:08 -03:00
Arthur Passos
a948334b8b minor 2024-06-27 11:53:42 -03:00
Arthur Passos
b219b99380 small refactor 2024-06-27 11:50:16 -03:00
Arthur Passos
21de3e2961 try to fix black 2024-06-27 10:05:37 -03:00
Arthur Passos
f69a19d01d try to fix black 2024-06-27 09:20:12 -03:00
Arthur Passos
ec0d426a6f try to fix black 2024-06-27 08:52:20 -03:00
Arthur Passos
678700f137 update some other tests 2024-06-26 22:26:46 -03:00
Arthur Passos
95f584c5d0 black 2024-06-26 17:25:46 -03:00
Arthur Passos
c0e1095e66 initial fix for session_log, shall be refactored 2024-06-26 17:06:38 -03:00
Arthur Passos
938004c090 update yet another test 2024-06-26 16:40:40 -03:00
Arthur Passos
a77a0b5eb0 black 2024-06-26 13:59:57 -03:00
Arthur Passos
3e77101b16 update some other tests 2024-06-26 13:29:13 -03:00
Arthur Passos
341071402c fix wrong expected error code & add test 2024-06-26 11:00:13 -03:00
Arthur Passos
f15551b47b black 2024-06-26 09:38:55 -03:00
Arthur Passos
c2dd3bb5d2 Merge branch 'master' into multi_auth_methods 2024-06-26 09:03:58 -03:00
Arthur Passos
b93c21a041 add no-parallel to grant_and_Revoke.sql 2024-06-26 08:58:44 -03:00
Arthur Passos
4d800a8487 update test_disk_access_storage 2024-06-26 08:54:58 -03:00
Arthur Passos
11e537cb28 update some more tests 2024-06-26 08:51:30 -03:00
Arthur Passos
35214a34ee update some tests 2024-06-25 22:39:30 -03:00
Arthur Passos
57afc5f035 serialize no_password 2024-06-25 20:02:19 -03:00
Arthur Passos
907a54e9f6 add no parallel 2024-06-24 21:21:03 -03:00
Arthur Passos
a1928bd299 no-fast test 2024-06-24 19:39:26 -03:00
Arthur Passos
c7aed3c98c Revert "dont test ssh at all, it wont work if openssl is not built"
This reverts commit 4d5676f455.
2024-06-24 19:37:46 -03:00
Arthur Passos
96cb9f13dd remove comments 2024-06-24 16:51:34 -03:00
Arthur Passos
4d5676f455 dont test ssh at all, it wont work if openssl is not built 2024-06-24 16:50:23 -03:00
Arthur Passos
5bdd49f36c do not gen at runtime, use pre-built ones 2024-06-24 16:43:19 -03:00
Arthur Passos
d4da0a0a21 use plaintext_password instead of sha256 because of cicd 2024-06-24 16:30:08 -03:00
Arthur Passos
fe0d3b3e27 initial tests 2024-06-24 15:48:46 -03:00
Arthur Passos
9d19001945 Merge branch 'master' into multi_auth_methods 2024-06-22 10:52:47 -03:00
Arthur Passos
f55d15d9b9 Merge branch 'master' into multi_auth_methods 2024-06-21 15:39:02 -03:00
Arthur Passos
237abda2eb add some ut 2024-06-21 15:35:50 -03:00
Arthur Passos
08c9cc18d6 fix astcreateuserquery clone 2024-06-21 12:05:58 -03:00
Arthur Passos
55da169fe7 fix wrong condition 2024-06-21 10:30:12 -03:00
Arthur Passos
a1211a0f5a simplify syntax 2024-06-20 16:51:52 -03:00
Arthur Passos
179d54505a make progress, seems functional 2024-06-20 15:07:16 -03:00
Arthur Passos
1514dcbb34 ptal 2024-06-19 11:57:38 -03:00
Antonio Andelic
ac78184fe7 Merge branch 'tracing-try-2' into keeper-some-improvement2 2024-06-18 11:04:00 +02:00
Antonio Andelic
1777ff37c0 Merge branch 'master' into keeper-some-improvement2 2024-06-18 11:03:38 +02:00
Antonio Andelic
7dca59da56 Revert "Merge branch 'use-thread-from-global-pool-in-poco-threadpool' into keeper-some-improvement"
This reverts commit 737d7484c5, reversing
changes made to b3a742304e.
2024-06-17 09:03:49 +02:00
Arthur Passos
6ac24fcf54 fix test build 2024-06-14 18:08:01 -03:00
Arthur Passos
70e4933221 fix no pwd authentication 2024-06-14 14:39:55 -03:00
Arthur Passos
c1250ccb35 append default constructed auth method upon alter without auth data 2024-06-14 10:58:57 -03:00
Arthur Passos
98e5ea5206 style fix 2024-06-14 10:02:40 -03:00
Arthur Passos
b22776d3a8 throw exception upon auth 2024-06-14 09:34:10 -03:00
Arthur Passos
d4c1faad3b testing the waters 2024-06-13 11:24:00 -03:00
Antonio Andelic
0fa45c3954 More parallel storage 2024-06-11 16:39:35 +02:00
Antonio Andelic
c802d7d58a Writing improvements 2024-06-11 14:35:26 +02:00
Antonio Andelic
5ab06caffc Merge branch 'keeper-parallel-storage' into keeper-some-improvement2 2024-06-11 10:18:27 +02:00
Antonio Andelic
737d7484c5 Merge branch 'use-thread-from-global-pool-in-poco-threadpool' into keeper-some-improvement 2024-06-11 09:46:58 +02:00
Antonio Andelic
b3a742304e Merge branch 'master' into keeper-some-improvement 2024-06-11 09:46:41 +02:00
kssenii
6514d72fea Move servers pool back 2024-06-10 18:53:51 +02:00
kssenii
c3d4b429d9 Fix merge 2024-06-10 15:39:54 +02:00
kssenii
7ff848c2c8 Merge remote-tracking branch 'origin/master' into use-thread-from-global-pool-in-poco-threadpool 2024-06-10 15:20:03 +02:00
kssenii
a11ba3f437 Fix shutdown 2024-06-10 15:19:03 +02:00
kssenii
6604d94271 Ping CI: skip fast test to see all stateless runs 2024-06-07 17:11:49 +02:00
kssenii
e30fa1da4d Fix ThreadStatus 2024-06-07 15:03:13 +02:00
kssenii
7ea3345e0d Use ThreadFromGlobalPool in Poco::ThreadPool 2024-06-06 17:25:15 +02:00
kssenii
1e97d73bd0 Squashed commit of the following:
commit 27fe0439fa
Merge: bfb1c4c793 bb469e0d45
Author: Antonio Andelic <antonio@clickhouse.com>
Date:   Thu Jun 6 14:36:02 2024 +0200

    Merge branch 'master' into fix-global-trace-collector

commit bfb1c4c793
Author: Antonio Andelic <antonio@clickhouse.com>
Date:   Thu Jun 6 11:29:42 2024 +0200

    better

commit fcee260b25
Author: Antonio Andelic <antonio2368@users.noreply.github.com>
Date:   Thu Jun 6 11:22:48 2024 +0200

    Update src/Interpreters/TraceCollector.h

    Co-authored-by: alesapin <alesapin@clickhouse.com>

commit 1d3cf17053
Author: Antonio Andelic <antonio@clickhouse.com>
Date:   Thu Jun 6 11:11:08 2024 +0200

    Fix global trace collector
2024-06-06 17:13:37 +02:00
Antonio Andelic
f0e9703384 Some small improvements 2024-06-06 09:45:07 +02:00
Antonio Andelic
514941627b Merge branch 'master' into keeper-parallel-storage 2024-06-05 15:31:57 +02:00
AlexNsf
c2096575ba Remove unused imports 2024-05-22 18:07:39 +03:00
AlexNsf
3d9e5cc1cf Backup disks init 2024-05-22 16:41:37 +03:00
Antonio Andelic
acc08c65d9 Add stopwatch 2024-05-22 11:56:45 +02:00
Antonio Andelic
f1e4403f98 Merge branch 'master' into keeper-parallel-storage 2024-05-22 11:39:57 +02:00
Antonio Andelic
b1d53f0472 Merge branch 'master' into keeper-parallel-storage 2024-04-29 15:13:19 +02:00
kssenii
5ffa2c9ca1 Add a test 2024-04-25 13:37:24 +02:00
gao chuan
5a6fe87b7c [bugfix]alter postgresql subscription error 2024-04-17 23:43:36 +08:00
Antonio Andelic
bc3cfb008e Merge branch 'master' into keeper-parallel-storage 2024-03-25 13:14:57 +01:00
Antonio Andelic
9791a2ea40 Merge branch 'keeper-batch-flushes' into keeper-parallel-storage 2023-09-08 16:26:12 +00:00
Antonio Andelic
9fb9d16737 Merge branch 'keeper-batch-flushes' into keeper-parallel-storage 2023-09-06 13:30:05 +00:00
Antonio Andelic
6be1d0724a More mutex 2023-09-06 13:04:08 +00:00
Antonio Andelic
9238520490 Merge branch 'master' into keeper-parallel-storage 2023-09-06 10:57:33 +00:00
Antonio Andelic
dd1bb579df Better 2023-09-05 12:05:37 +00:00
Antonio Andelic
57943798b7 Merge branch 'master' into keeper-parallel-storage 2023-09-05 08:46:38 +00:00
Antonio Andelic
b43c3d75a2 Initial implementation 2023-09-04 14:49:49 +00:00
337 changed files with 11286 additions and 5503 deletions

.gitmodules vendored

@@ -170,9 +170,6 @@
[submodule "contrib/fast_float"]
path = contrib/fast_float
url = https://github.com/fastfloat/fast_float
[submodule "contrib/libpq"]
path = contrib/libpq
url = https://github.com/ClickHouse/libpq
[submodule "contrib/NuRaft"]
path = contrib/NuRaft
url = https://github.com/ClickHouse/NuRaft
@@ -369,3 +366,6 @@
[submodule "contrib/numactl"]
path = contrib/numactl
url = https://github.com/ClickHouse/numactl.git
[submodule "contrib/postgres"]
path = contrib/postgres
url = https://github.com/ClickHouse/postgres.git


@@ -40,17 +40,8 @@ Every month we get together with the community (users, contributors, customers,
Keep an eye out for upcoming meetups and events around the world. Somewhere else you want us to be? Please feel free to reach out to tyler `<at>` clickhouse `<dot>` com. You can also peruse [ClickHouse Events](https://clickhouse.com/company/news-events) for a list of all upcoming trainings, meetups, speaking engagements, etc.
The following upcoming meetups are featuring creator of ClickHouse & CTO, Alexey Milovidov:
Upcoming meetups
* [Raleigh Meetup (Deutsche Bank)](https://www.meetup.com/triangletechtalks/events/302723486/) - September 9
* [New York Meetup (Rokt)](https://www.meetup.com/clickhouse-new-york-user-group/events/302575342) - September 10
* [Chicago Meetup (Jump Capital)](https://lu.ma/43tvmrfw) - September 12
Other upcoming meetups
* [Toronto Meetup (Shopify)](https://www.meetup.com/clickhouse-toronto-user-group/events/301490855/) - September 10
* [Austin Meetup](https://www.meetup.com/clickhouse-austin-user-group/events/302558689/) - September 17
* [London Meetup](https://www.meetup.com/clickhouse-london-user-group/events/302977267) - September 17
* [Bangalore Meetup](https://www.meetup.com/clickhouse-bangalore-user-group/events/303208274/) - September 18
* [Tel Aviv Meetup](https://www.meetup.com/clickhouse-meetup-israel/events/303095121) - September 22
* [Jakarta Meetup](https://www.meetup.com/clickhouse-indonesia-user-group/events/303191359/) - October 1
@@ -62,13 +53,20 @@ Other upcoming meetups
* [Dubai Meetup](https://www.meetup.com/clickhouse-dubai-meetup-group/events/303096989/) - November 21
* [Paris Meetup](https://www.meetup.com/clickhouse-france-user-group/events/303096434) - November 26
Recently completed events
Recently completed meetups
* [ClickHouse Guangzhou User Group Meetup](https://mp.weixin.qq.com/s/GSvo-7xUoVzCsuUvlLTpCw) - August 25
* [Seattle Meetup (Statsig)](https://www.meetup.com/clickhouse-seattle-user-group/events/302518075/) - August 27
* [Melbourne Meetup](https://www.meetup.com/clickhouse-australia-user-group/events/302732666/) - August 27
* [Sydney Meetup](https://www.meetup.com/clickhouse-australia-user-group/events/302862966/) - September 5
* [Zurich Meetup](https://www.meetup.com/clickhouse-switzerland-meetup-group/events/302267429/) - September 5
* [San Francisco Meetup (Cloudflare)](https://www.meetup.com/clickhouse-silicon-valley-meetup-group/events/302540575) - September 5
* [Raleigh Meetup (Deutsche Bank)](https://www.meetup.com/triangletechtalks/events/302723486/) - September 9
* [New York Meetup (Rokt)](https://www.meetup.com/clickhouse-new-york-user-group/events/302575342) - September 10
* [Toronto Meetup (Shopify)](https://www.meetup.com/clickhouse-toronto-user-group/events/301490855/) - September 10
* [Chicago Meetup (Jump Capital)](https://lu.ma/43tvmrfw) - September 12
* [London Meetup](https://www.meetup.com/clickhouse-london-user-group/events/302977267) - September 17
* [Austin Meetup](https://www.meetup.com/clickhouse-austin-user-group/events/302558689/) - September 17
## Recent Recordings
* **Recent Meetup Videos**: [Meetup Playlist](https://www.youtube.com/playlist?list=PL0Z2YDlm0b3iNDUzpY1S3L_iV4nARda_U) Whenever possible recordings of the ClickHouse Community Meetups are edited and presented as individual talks. Current featuring "Modern SQL in 2023", "Fast, Concurrent, and Consistent Asynchronous INSERTS in ClickHouse", and "Full-Text Indices: Design and Experiments"


@@ -188,8 +188,9 @@ namespace Crypto
pFile = fopen(keyFile.c_str(), "r");
if (pFile)
{
pem_password_cb * pCB = pass.empty() ? (pem_password_cb *)0 : &passCB;
void * pPassword = pass.empty() ? (void *)0 : (void *)pass.c_str();
pem_password_cb * pCB = &passCB;
static constexpr char * no_password = "";
void * pPassword = pass.empty() ? (void *)no_password : (void *)pass.c_str();
if (readFunc(pFile, &pKey, pCB, pPassword))
{
fclose(pFile);
@@ -225,6 +226,13 @@
error:
if (pFile)
fclose(pFile);
if (*ppKey)
{
if constexpr (std::is_same_v<K, EVP_PKEY>)
EVP_PKEY_free(*ppKey);
else
EC_KEY_free(*ppKey);
}
throw OpenSSLException("EVPKey::loadKey(string)");
}
@@ -286,6 +294,13 @@
error:
if (pBIO)
BIO_free(pBIO);
if (*ppKey)
{
if constexpr (std::is_same_v<K, EVP_PKEY>)
EVP_PKEY_free(*ppKey);
else
EC_KEY_free(*ppKey);
}
throw OpenSSLException("EVPKey::loadKey(stream)");
}
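The hunks above add cleanup on the error paths of EVPKey::loadKey: when a key object has already been allocated by the time loading fails, it is released with the type-appropriate free function before the exception is thrown, so the failure no longer leaks it. Below is a minimal, self-contained sketch of that pattern; EvpKey, EcKey, evpKeyFree, ecKeyFree and finishLoad are hypothetical stand-ins for OpenSSL's EVP_PKEY/EC_KEY API, not the actual Poco code.

#include <stdexcept>
#include <type_traits>

// Hypothetical stand-ins for OpenSSL's EVP_PKEY / EC_KEY and their free functions.
struct EvpKey {};
struct EcKey {};
inline void evpKeyFree(EvpKey * key) { delete key; }
inline void ecKeyFree(EcKey * key) { delete key; }

// Sketch of the error-path cleanup shown in the diff: if a key object was
// already allocated when loading fails, release it with the free function
// matching its type before throwing, so the error path does not leak it.
template <typename K>
void finishLoad(K ** ppKey, bool loadSucceeded)
{
    if (loadSucceeded)
        return;
    if (*ppKey)
    {
        if constexpr (std::is_same_v<K, EvpKey>)
            evpKeyFree(*ppKey);
        else
            ecKeyFree(*ppKey);
        *ppKey = nullptr;
    }
    throw std::runtime_error("EVPKey::loadKey failed");
}

int main()
{
    EvpKey * key = new EvpKey;
    try
    {
        finishLoad(&key, /*loadSucceeded=*/false); // frees 'key', then throws
    }
    catch (const std::runtime_error &)
    {
        // key has been freed and nulled by finishLoad
    }
    return 0;
}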


@@ -248,6 +248,9 @@ namespace Net
SSL_CTX * sslContext() const;
/// Returns the underlying OpenSSL SSL Context object.
SSL_CTX * takeSslContext();
/// Takes ownership of the underlying OpenSSL SSL Context object.
Usage usage() const;
/// Returns whether the context is for use by a client or by a server
/// and whether TLSv1 is required.
@@ -401,6 +404,13 @@
return _pSSLContext;
}
inline SSL_CTX * Context::takeSslContext()
{
auto * result = _pSSLContext;
_pSSLContext = nullptr;
return result;
}
inline bool Context::extendedCertificateVerificationEnabled() const
{


@@ -106,6 +106,11 @@ Context::Context(
Context::~Context()
{
if (_pSSLContext == nullptr)
{
return;
}
try
{
SSL_CTX_free(_pSSLContext);
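The takeSslContext() accessor and the nullptr guard added to Context::~Context() above together implement a simple ownership transfer: once a caller takes the raw SSL_CTX, the Context no longer frees it in its destructor. The following is a minimal sketch of that pattern, assuming a hypothetical SslCtx type with sslCtxNew/sslCtxFree standing in for OpenSSL's SSL_CTX API.

#include <cassert>

// Hypothetical stand-ins for OpenSSL's SSL_CTX and its create/free functions.
struct SslCtx {};
SslCtx * sslCtxNew() { return new SslCtx; }
void sslCtxFree(SslCtx * ctx) { delete ctx; }

class Context
{
public:
    Context() : _pSSLContext(sslCtxNew()) {}

    ~Context()
    {
        if (_pSSLContext == nullptr)
            return;              // ownership was transferred; nothing to free
        sslCtxFree(_pSSLContext);
    }

    SslCtx * takeSslContext()
    {
        auto * result = _pSSLContext;
        _pSSLContext = nullptr;  // the caller now owns the context
        return result;
    }

private:
    SslCtx * _pSSLContext;
};

int main()
{
    SslCtx * raw = nullptr;
    {
        Context ctx;
        raw = ctx.takeSslContext(); // ctx's destructor will not free it now
    }
    assert(raw != nullptr);
    sslCtxFree(raw);                // the caller is responsible for freeing it
    return 0;
}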


@@ -145,8 +145,13 @@ add_contrib (isa-l-cmake isa-l)
add_contrib (libhdfs3-cmake libhdfs3) # requires: google-protobuf, krb5, isa-l
add_contrib (hive-metastore-cmake hive-metastore) # requires: thrift, avro, arrow, libhdfs3
add_contrib (cppkafka-cmake cppkafka)
add_contrib (libpqxx-cmake libpqxx)
add_contrib (libpq-cmake libpq)
option(ENABLE_LIBPQXX "Enable PostgreSQL" ${ENABLE_LIBRARIES})
if (ENABLE_LIBPQXX)
add_contrib (postgres-cmake postgres)
add_contrib (libpqxx-cmake libpqxx)
endif()
add_contrib (rocksdb-cmake rocksdb) # requires: jemalloc, snappy, zlib, lz4, zstd, liburing
add_contrib (nuraft-cmake NuRaft)
add_contrib (fast_float-cmake fast_float)

contrib/libpq vendored

@@ -1 +0,0 @@
Subproject commit 2446f2c85650b56df9d4ebc4c2ea7f4b01beee57


@@ -1,78 +0,0 @@
if (NOT ENABLE_LIBPQXX)
return()
endif()
set(LIBPQ_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/libpq")
set(SRCS
"${LIBPQ_SOURCE_DIR}/fe-auth.c"
"${LIBPQ_SOURCE_DIR}/fe-auth-scram.c"
"${LIBPQ_SOURCE_DIR}/fe-connect.c"
"${LIBPQ_SOURCE_DIR}/fe-exec.c"
"${LIBPQ_SOURCE_DIR}/fe-lobj.c"
"${LIBPQ_SOURCE_DIR}/fe-misc.c"
"${LIBPQ_SOURCE_DIR}/fe-print.c"
"${LIBPQ_SOURCE_DIR}/fe-trace.c"
"${LIBPQ_SOURCE_DIR}/fe-protocol3.c"
"${LIBPQ_SOURCE_DIR}/fe-secure.c"
"${LIBPQ_SOURCE_DIR}/fe-secure-common.c"
"${LIBPQ_SOURCE_DIR}/fe-secure-openssl.c"
"${LIBPQ_SOURCE_DIR}/legacy-pqsignal.c"
"${LIBPQ_SOURCE_DIR}/libpq-events.c"
"${LIBPQ_SOURCE_DIR}/pqexpbuffer.c"
"${LIBPQ_SOURCE_DIR}/common/scram-common.c"
"${LIBPQ_SOURCE_DIR}/common/sha2.c"
"${LIBPQ_SOURCE_DIR}/common/sha1.c"
"${LIBPQ_SOURCE_DIR}/common/md5.c"
"${LIBPQ_SOURCE_DIR}/common/md5_common.c"
"${LIBPQ_SOURCE_DIR}/common/hmac_openssl.c"
"${LIBPQ_SOURCE_DIR}/common/cryptohash.c"
"${LIBPQ_SOURCE_DIR}/common/saslprep.c"
"${LIBPQ_SOURCE_DIR}/common/unicode_norm.c"
"${LIBPQ_SOURCE_DIR}/common/ip.c"
"${LIBPQ_SOURCE_DIR}/common/jsonapi.c"
"${LIBPQ_SOURCE_DIR}/common/wchar.c"
"${LIBPQ_SOURCE_DIR}/common/base64.c"
"${LIBPQ_SOURCE_DIR}/common/link-canary.c"
"${LIBPQ_SOURCE_DIR}/common/fe_memutils.c"
"${LIBPQ_SOURCE_DIR}/common/string.c"
"${LIBPQ_SOURCE_DIR}/common/pg_get_line.c"
"${LIBPQ_SOURCE_DIR}/common/stringinfo.c"
"${LIBPQ_SOURCE_DIR}/common/psprintf.c"
"${LIBPQ_SOURCE_DIR}/common/encnames.c"
"${LIBPQ_SOURCE_DIR}/common/logging.c"
"${LIBPQ_SOURCE_DIR}/port/snprintf.c"
"${LIBPQ_SOURCE_DIR}/port/strlcpy.c"
"${LIBPQ_SOURCE_DIR}/port/strerror.c"
"${LIBPQ_SOURCE_DIR}/port/inet_net_ntop.c"
"${LIBPQ_SOURCE_DIR}/port/getpeereid.c"
"${LIBPQ_SOURCE_DIR}/port/chklocale.c"
"${LIBPQ_SOURCE_DIR}/port/noblock.c"
"${LIBPQ_SOURCE_DIR}/port/pg_strong_random.c"
"${LIBPQ_SOURCE_DIR}/port/pgstrcasecmp.c"
"${LIBPQ_SOURCE_DIR}/port/thread.c"
"${LIBPQ_SOURCE_DIR}/port/path.c"
)
add_library(_libpq ${SRCS})
add_definitions(-DHAVE_BIO_METH_NEW)
add_definitions(-DHAVE_HMAC_CTX_NEW)
add_definitions(-DHAVE_HMAC_CTX_FREE)
target_include_directories (_libpq SYSTEM PUBLIC ${LIBPQ_SOURCE_DIR})
target_include_directories (_libpq SYSTEM PUBLIC "${LIBPQ_SOURCE_DIR}/include")
target_include_directories (_libpq SYSTEM PRIVATE "${LIBPQ_SOURCE_DIR}/configs")
# NOTE: this is a dirty hack to avoid and instead pg_config.h should be shipped
# for different OS'es like for jemalloc, not one generic for all OS'es like
# now.
if (OS_DARWIN OR OS_FREEBSD OR USE_MUSL)
target_compile_definitions(_libpq PRIVATE -DSTRERROR_R_INT=1)
endif()
target_link_libraries (_libpq PRIVATE OpenSSL::SSL)
add_library(ch_contrib::libpq ALIAS _libpq)

contrib/libpqxx vendored

@@ -1 +1 @@
Subproject commit c995193a3a14d71f4711f1f421f65a1a1db64640
Subproject commit 41e4c331564167cca97ad6eccbd5b8879c2ca044


@@ -1,16 +1,9 @@
option(ENABLE_LIBPQXX "Enalbe libpqxx" ${ENABLE_LIBRARIES})
if (NOT ENABLE_LIBPQXX)
message(STATUS "Not using libpqxx")
return()
endif()
set (LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/libpqxx")
set (SRCS
"${LIBRARY_DIR}/src/strconv.cxx"
"${LIBRARY_DIR}/src/array.cxx"
"${LIBRARY_DIR}/src/binarystring.cxx"
"${LIBRARY_DIR}/src/blob.cxx"
"${LIBRARY_DIR}/src/connection.cxx"
"${LIBRARY_DIR}/src/cursor.cxx"
"${LIBRARY_DIR}/src/encodings.cxx"
@@ -19,59 +12,25 @@ set (SRCS
"${LIBRARY_DIR}/src/field.cxx"
"${LIBRARY_DIR}/src/largeobject.cxx"
"${LIBRARY_DIR}/src/notification.cxx"
"${LIBRARY_DIR}/src/params.cxx"
"${LIBRARY_DIR}/src/pipeline.cxx"
"${LIBRARY_DIR}/src/result.cxx"
"${LIBRARY_DIR}/src/robusttransaction.cxx"
"${LIBRARY_DIR}/src/row.cxx"
"${LIBRARY_DIR}/src/sql_cursor.cxx"
"${LIBRARY_DIR}/src/strconv.cxx"
"${LIBRARY_DIR}/src/stream_from.cxx"
"${LIBRARY_DIR}/src/stream_to.cxx"
"${LIBRARY_DIR}/src/subtransaction.cxx"
"${LIBRARY_DIR}/src/time.cxx"
"${LIBRARY_DIR}/src/transaction.cxx"
"${LIBRARY_DIR}/src/transaction_base.cxx"
"${LIBRARY_DIR}/src/row.cxx"
"${LIBRARY_DIR}/src/params.cxx"
"${LIBRARY_DIR}/src/util.cxx"
"${LIBRARY_DIR}/src/version.cxx"
"${LIBRARY_DIR}/src/wait.cxx"
)
# Need to explicitly include each header file, because in the directory include/pqxx there are also files
# like just 'array'. So if including the whole directory with `target_include_directories`, it will make
# conflicts with all includes of <array>.
set (HDRS
"${LIBRARY_DIR}/include/pqxx/array.hxx"
"${LIBRARY_DIR}/include/pqxx/params.hxx"
"${LIBRARY_DIR}/include/pqxx/binarystring.hxx"
"${LIBRARY_DIR}/include/pqxx/composite.hxx"
"${LIBRARY_DIR}/include/pqxx/connection.hxx"
"${LIBRARY_DIR}/include/pqxx/cursor.hxx"
"${LIBRARY_DIR}/include/pqxx/dbtransaction.hxx"
"${LIBRARY_DIR}/include/pqxx/errorhandler.hxx"
"${LIBRARY_DIR}/include/pqxx/except.hxx"
"${LIBRARY_DIR}/include/pqxx/field.hxx"
"${LIBRARY_DIR}/include/pqxx/isolation.hxx"
"${LIBRARY_DIR}/include/pqxx/largeobject.hxx"
"${LIBRARY_DIR}/include/pqxx/nontransaction.hxx"
"${LIBRARY_DIR}/include/pqxx/notification.hxx"
"${LIBRARY_DIR}/include/pqxx/pipeline.hxx"
"${LIBRARY_DIR}/include/pqxx/prepared_statement.hxx"
"${LIBRARY_DIR}/include/pqxx/result.hxx"
"${LIBRARY_DIR}/include/pqxx/robusttransaction.hxx"
"${LIBRARY_DIR}/include/pqxx/row.hxx"
"${LIBRARY_DIR}/include/pqxx/separated_list.hxx"
"${LIBRARY_DIR}/include/pqxx/strconv.hxx"
"${LIBRARY_DIR}/include/pqxx/stream_from.hxx"
"${LIBRARY_DIR}/include/pqxx/stream_to.hxx"
"${LIBRARY_DIR}/include/pqxx/subtransaction.hxx"
"${LIBRARY_DIR}/include/pqxx/transaction.hxx"
"${LIBRARY_DIR}/include/pqxx/transaction_base.hxx"
"${LIBRARY_DIR}/include/pqxx/types.hxx"
"${LIBRARY_DIR}/include/pqxx/util.hxx"
"${LIBRARY_DIR}/include/pqxx/version.hxx"
"${LIBRARY_DIR}/include/pqxx/zview.hxx"
)
add_library(_libpqxx ${SRCS} ${HDRS})
add_library(_libpqxx ${SRCS})
target_link_libraries(_libpqxx PUBLIC ch_contrib::libpq)
target_include_directories (_libpqxx SYSTEM BEFORE PUBLIC "${LIBRARY_DIR}/include")

contrib/postgres vendored Submodule

@ -0,0 +1 @@
Subproject commit 2e51f82e27f4be389cc239d1b8784bbf2f01d33a

View File

@ -0,0 +1,81 @@
# Build description for libpq which is part of the PostgreSQL sources
set(POSTGRES_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/postgres")
set(LIBPQ_SOURCE_DIR "${POSTGRES_SOURCE_DIR}/src/interfaces/libpq")
set(LIBPQ_CMAKE_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/postgres-cmake")
set(SRCS
"${LIBPQ_SOURCE_DIR}/fe-auth.c"
"${LIBPQ_SOURCE_DIR}/fe-auth-scram.c"
"${LIBPQ_SOURCE_DIR}/fe-connect.c"
"${LIBPQ_SOURCE_DIR}/fe-exec.c"
"${LIBPQ_SOURCE_DIR}/fe-lobj.c"
"${LIBPQ_SOURCE_DIR}/fe-misc.c"
"${LIBPQ_SOURCE_DIR}/fe-print.c"
"${LIBPQ_SOURCE_DIR}/fe-trace.c"
"${LIBPQ_SOURCE_DIR}/fe-protocol3.c"
"${LIBPQ_SOURCE_DIR}/fe-secure.c"
"${LIBPQ_SOURCE_DIR}/fe-secure-common.c"
"${LIBPQ_SOURCE_DIR}/fe-secure-openssl.c"
"${LIBPQ_SOURCE_DIR}/legacy-pqsignal.c"
"${LIBPQ_SOURCE_DIR}/libpq-events.c"
"${LIBPQ_SOURCE_DIR}/pqexpbuffer.c"
"${POSTGRES_SOURCE_DIR}/src/common/scram-common.c"
"${POSTGRES_SOURCE_DIR}/src/common/sha2.c"
"${POSTGRES_SOURCE_DIR}/src/common/sha1.c"
"${POSTGRES_SOURCE_DIR}/src/common/md5.c"
"${POSTGRES_SOURCE_DIR}/src/common/md5_common.c"
"${POSTGRES_SOURCE_DIR}/src/common/hmac_openssl.c"
"${POSTGRES_SOURCE_DIR}/src/common/cryptohash.c"
"${POSTGRES_SOURCE_DIR}/src/common/saslprep.c"
"${POSTGRES_SOURCE_DIR}/src/common/unicode_norm.c"
"${POSTGRES_SOURCE_DIR}/src/common/ip.c"
"${POSTGRES_SOURCE_DIR}/src/common/jsonapi.c"
"${POSTGRES_SOURCE_DIR}/src/common/wchar.c"
"${POSTGRES_SOURCE_DIR}/src/common/base64.c"
"${POSTGRES_SOURCE_DIR}/src/common/link-canary.c"
"${POSTGRES_SOURCE_DIR}/src/common/fe_memutils.c"
"${POSTGRES_SOURCE_DIR}/src/common/string.c"
"${POSTGRES_SOURCE_DIR}/src/common/pg_get_line.c"
"${POSTGRES_SOURCE_DIR}/src/common/pg_prng.c"
"${POSTGRES_SOURCE_DIR}/src/common/stringinfo.c"
"${POSTGRES_SOURCE_DIR}/src/common/psprintf.c"
"${POSTGRES_SOURCE_DIR}/src/common/encnames.c"
"${POSTGRES_SOURCE_DIR}/src/common/logging.c"
"${POSTGRES_SOURCE_DIR}/src/port/snprintf.c"
"${POSTGRES_SOURCE_DIR}/src/port/strlcat.c"
"${POSTGRES_SOURCE_DIR}/src/port/strlcpy.c"
"${POSTGRES_SOURCE_DIR}/src/port/strerror.c"
"${POSTGRES_SOURCE_DIR}/src/port/inet_net_ntop.c"
"${POSTGRES_SOURCE_DIR}/src/port/getpeereid.c"
"${POSTGRES_SOURCE_DIR}/src/port/chklocale.c"
"${POSTGRES_SOURCE_DIR}/src/port/noblock.c"
"${POSTGRES_SOURCE_DIR}/src/port/pg_strong_random.c"
"${POSTGRES_SOURCE_DIR}/src/port/pgstrcasecmp.c"
"${POSTGRES_SOURCE_DIR}/src/port/pg_bitutils.c"
"${POSTGRES_SOURCE_DIR}/src/port/thread.c"
"${POSTGRES_SOURCE_DIR}/src/port/path.c"
)
add_library(_libpq ${SRCS})
add_definitions(-DHAVE_BIO_METH_NEW)
add_definitions(-DHAVE_HMAC_CTX_NEW)
add_definitions(-DHAVE_HMAC_CTX_FREE)
target_include_directories (_libpq SYSTEM PUBLIC ${LIBPQ_SOURCE_DIR})
target_include_directories (_libpq SYSTEM PUBLIC "${POSTGRES_SOURCE_DIR}/src/include")
target_include_directories (_libpq SYSTEM PUBLIC "${LIBPQ_CMAKE_SOURCE_DIR}") # pre-generated headers
# NOTE: this is a dirty hack; instead, pg_config.h should be shipped
# for different OS'es like for jemalloc, not one generic header for all
# OS'es like now.
if (OS_DARWIN OR OS_FREEBSD OR USE_MUSL)
target_compile_definitions(_libpq PRIVATE -DSTRERROR_R_INT=1)
endif()
target_link_libraries (_libpq PRIVATE OpenSSL::SSL)
add_library(ch_contrib::libpq ALIAS _libpq)

View File

@ -0,0 +1,471 @@
/*-------------------------------------------------------------------------
*
* nodetags.h
* Generated node infrastructure code
*
* Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* NOTES
* ******************************
* *** DO NOT EDIT THIS FILE! ***
* ******************************
*
* It has been GENERATED by src/backend/nodes/gen_node_support.pl
*
*-------------------------------------------------------------------------
*/
T_List = 1,
T_Alias = 2,
T_RangeVar = 3,
T_TableFunc = 4,
T_IntoClause = 5,
T_Var = 6,
T_Const = 7,
T_Param = 8,
T_Aggref = 9,
T_GroupingFunc = 10,
T_WindowFunc = 11,
T_SubscriptingRef = 12,
T_FuncExpr = 13,
T_NamedArgExpr = 14,
T_OpExpr = 15,
T_DistinctExpr = 16,
T_NullIfExpr = 17,
T_ScalarArrayOpExpr = 18,
T_BoolExpr = 19,
T_SubLink = 20,
T_SubPlan = 21,
T_AlternativeSubPlan = 22,
T_FieldSelect = 23,
T_FieldStore = 24,
T_RelabelType = 25,
T_CoerceViaIO = 26,
T_ArrayCoerceExpr = 27,
T_ConvertRowtypeExpr = 28,
T_CollateExpr = 29,
T_CaseExpr = 30,
T_CaseWhen = 31,
T_CaseTestExpr = 32,
T_ArrayExpr = 33,
T_RowExpr = 34,
T_RowCompareExpr = 35,
T_CoalesceExpr = 36,
T_MinMaxExpr = 37,
T_SQLValueFunction = 38,
T_XmlExpr = 39,
T_JsonFormat = 40,
T_JsonReturning = 41,
T_JsonValueExpr = 42,
T_JsonConstructorExpr = 43,
T_JsonIsPredicate = 44,
T_NullTest = 45,
T_BooleanTest = 46,
T_CoerceToDomain = 47,
T_CoerceToDomainValue = 48,
T_SetToDefault = 49,
T_CurrentOfExpr = 50,
T_NextValueExpr = 51,
T_InferenceElem = 52,
T_TargetEntry = 53,
T_RangeTblRef = 54,
T_JoinExpr = 55,
T_FromExpr = 56,
T_OnConflictExpr = 57,
T_Query = 58,
T_TypeName = 59,
T_ColumnRef = 60,
T_ParamRef = 61,
T_A_Expr = 62,
T_A_Const = 63,
T_TypeCast = 64,
T_CollateClause = 65,
T_RoleSpec = 66,
T_FuncCall = 67,
T_A_Star = 68,
T_A_Indices = 69,
T_A_Indirection = 70,
T_A_ArrayExpr = 71,
T_ResTarget = 72,
T_MultiAssignRef = 73,
T_SortBy = 74,
T_WindowDef = 75,
T_RangeSubselect = 76,
T_RangeFunction = 77,
T_RangeTableFunc = 78,
T_RangeTableFuncCol = 79,
T_RangeTableSample = 80,
T_ColumnDef = 81,
T_TableLikeClause = 82,
T_IndexElem = 83,
T_DefElem = 84,
T_LockingClause = 85,
T_XmlSerialize = 86,
T_PartitionElem = 87,
T_PartitionSpec = 88,
T_PartitionBoundSpec = 89,
T_PartitionRangeDatum = 90,
T_PartitionCmd = 91,
T_RangeTblEntry = 92,
T_RTEPermissionInfo = 93,
T_RangeTblFunction = 94,
T_TableSampleClause = 95,
T_WithCheckOption = 96,
T_SortGroupClause = 97,
T_GroupingSet = 98,
T_WindowClause = 99,
T_RowMarkClause = 100,
T_WithClause = 101,
T_InferClause = 102,
T_OnConflictClause = 103,
T_CTESearchClause = 104,
T_CTECycleClause = 105,
T_CommonTableExpr = 106,
T_MergeWhenClause = 107,
T_MergeAction = 108,
T_TriggerTransition = 109,
T_JsonOutput = 110,
T_JsonKeyValue = 111,
T_JsonObjectConstructor = 112,
T_JsonArrayConstructor = 113,
T_JsonArrayQueryConstructor = 114,
T_JsonAggConstructor = 115,
T_JsonObjectAgg = 116,
T_JsonArrayAgg = 117,
T_RawStmt = 118,
T_InsertStmt = 119,
T_DeleteStmt = 120,
T_UpdateStmt = 121,
T_MergeStmt = 122,
T_SelectStmt = 123,
T_SetOperationStmt = 124,
T_ReturnStmt = 125,
T_PLAssignStmt = 126,
T_CreateSchemaStmt = 127,
T_AlterTableStmt = 128,
T_ReplicaIdentityStmt = 129,
T_AlterTableCmd = 130,
T_AlterCollationStmt = 131,
T_AlterDomainStmt = 132,
T_GrantStmt = 133,
T_ObjectWithArgs = 134,
T_AccessPriv = 135,
T_GrantRoleStmt = 136,
T_AlterDefaultPrivilegesStmt = 137,
T_CopyStmt = 138,
T_VariableSetStmt = 139,
T_VariableShowStmt = 140,
T_CreateStmt = 141,
T_Constraint = 142,
T_CreateTableSpaceStmt = 143,
T_DropTableSpaceStmt = 144,
T_AlterTableSpaceOptionsStmt = 145,
T_AlterTableMoveAllStmt = 146,
T_CreateExtensionStmt = 147,
T_AlterExtensionStmt = 148,
T_AlterExtensionContentsStmt = 149,
T_CreateFdwStmt = 150,
T_AlterFdwStmt = 151,
T_CreateForeignServerStmt = 152,
T_AlterForeignServerStmt = 153,
T_CreateForeignTableStmt = 154,
T_CreateUserMappingStmt = 155,
T_AlterUserMappingStmt = 156,
T_DropUserMappingStmt = 157,
T_ImportForeignSchemaStmt = 158,
T_CreatePolicyStmt = 159,
T_AlterPolicyStmt = 160,
T_CreateAmStmt = 161,
T_CreateTrigStmt = 162,
T_CreateEventTrigStmt = 163,
T_AlterEventTrigStmt = 164,
T_CreatePLangStmt = 165,
T_CreateRoleStmt = 166,
T_AlterRoleStmt = 167,
T_AlterRoleSetStmt = 168,
T_DropRoleStmt = 169,
T_CreateSeqStmt = 170,
T_AlterSeqStmt = 171,
T_DefineStmt = 172,
T_CreateDomainStmt = 173,
T_CreateOpClassStmt = 174,
T_CreateOpClassItem = 175,
T_CreateOpFamilyStmt = 176,
T_AlterOpFamilyStmt = 177,
T_DropStmt = 178,
T_TruncateStmt = 179,
T_CommentStmt = 180,
T_SecLabelStmt = 181,
T_DeclareCursorStmt = 182,
T_ClosePortalStmt = 183,
T_FetchStmt = 184,
T_IndexStmt = 185,
T_CreateStatsStmt = 186,
T_StatsElem = 187,
T_AlterStatsStmt = 188,
T_CreateFunctionStmt = 189,
T_FunctionParameter = 190,
T_AlterFunctionStmt = 191,
T_DoStmt = 192,
T_InlineCodeBlock = 193,
T_CallStmt = 194,
T_CallContext = 195,
T_RenameStmt = 196,
T_AlterObjectDependsStmt = 197,
T_AlterObjectSchemaStmt = 198,
T_AlterOwnerStmt = 199,
T_AlterOperatorStmt = 200,
T_AlterTypeStmt = 201,
T_RuleStmt = 202,
T_NotifyStmt = 203,
T_ListenStmt = 204,
T_UnlistenStmt = 205,
T_TransactionStmt = 206,
T_CompositeTypeStmt = 207,
T_CreateEnumStmt = 208,
T_CreateRangeStmt = 209,
T_AlterEnumStmt = 210,
T_ViewStmt = 211,
T_LoadStmt = 212,
T_CreatedbStmt = 213,
T_AlterDatabaseStmt = 214,
T_AlterDatabaseRefreshCollStmt = 215,
T_AlterDatabaseSetStmt = 216,
T_DropdbStmt = 217,
T_AlterSystemStmt = 218,
T_ClusterStmt = 219,
T_VacuumStmt = 220,
T_VacuumRelation = 221,
T_ExplainStmt = 222,
T_CreateTableAsStmt = 223,
T_RefreshMatViewStmt = 224,
T_CheckPointStmt = 225,
T_DiscardStmt = 226,
T_LockStmt = 227,
T_ConstraintsSetStmt = 228,
T_ReindexStmt = 229,
T_CreateConversionStmt = 230,
T_CreateCastStmt = 231,
T_CreateTransformStmt = 232,
T_PrepareStmt = 233,
T_ExecuteStmt = 234,
T_DeallocateStmt = 235,
T_DropOwnedStmt = 236,
T_ReassignOwnedStmt = 237,
T_AlterTSDictionaryStmt = 238,
T_AlterTSConfigurationStmt = 239,
T_PublicationTable = 240,
T_PublicationObjSpec = 241,
T_CreatePublicationStmt = 242,
T_AlterPublicationStmt = 243,
T_CreateSubscriptionStmt = 244,
T_AlterSubscriptionStmt = 245,
T_DropSubscriptionStmt = 246,
T_PlannerGlobal = 247,
T_PlannerInfo = 248,
T_RelOptInfo = 249,
T_IndexOptInfo = 250,
T_ForeignKeyOptInfo = 251,
T_StatisticExtInfo = 252,
T_JoinDomain = 253,
T_EquivalenceClass = 254,
T_EquivalenceMember = 255,
T_PathKey = 256,
T_PathTarget = 257,
T_ParamPathInfo = 258,
T_Path = 259,
T_IndexPath = 260,
T_IndexClause = 261,
T_BitmapHeapPath = 262,
T_BitmapAndPath = 263,
T_BitmapOrPath = 264,
T_TidPath = 265,
T_TidRangePath = 266,
T_SubqueryScanPath = 267,
T_ForeignPath = 268,
T_CustomPath = 269,
T_AppendPath = 270,
T_MergeAppendPath = 271,
T_GroupResultPath = 272,
T_MaterialPath = 273,
T_MemoizePath = 274,
T_UniquePath = 275,
T_GatherPath = 276,
T_GatherMergePath = 277,
T_NestPath = 278,
T_MergePath = 279,
T_HashPath = 280,
T_ProjectionPath = 281,
T_ProjectSetPath = 282,
T_SortPath = 283,
T_IncrementalSortPath = 284,
T_GroupPath = 285,
T_UpperUniquePath = 286,
T_AggPath = 287,
T_GroupingSetData = 288,
T_RollupData = 289,
T_GroupingSetsPath = 290,
T_MinMaxAggPath = 291,
T_WindowAggPath = 292,
T_SetOpPath = 293,
T_RecursiveUnionPath = 294,
T_LockRowsPath = 295,
T_ModifyTablePath = 296,
T_LimitPath = 297,
T_RestrictInfo = 298,
T_PlaceHolderVar = 299,
T_SpecialJoinInfo = 300,
T_OuterJoinClauseInfo = 301,
T_AppendRelInfo = 302,
T_RowIdentityVarInfo = 303,
T_PlaceHolderInfo = 304,
T_MinMaxAggInfo = 305,
T_PlannerParamItem = 306,
T_AggInfo = 307,
T_AggTransInfo = 308,
T_PlannedStmt = 309,
T_Result = 310,
T_ProjectSet = 311,
T_ModifyTable = 312,
T_Append = 313,
T_MergeAppend = 314,
T_RecursiveUnion = 315,
T_BitmapAnd = 316,
T_BitmapOr = 317,
T_SeqScan = 318,
T_SampleScan = 319,
T_IndexScan = 320,
T_IndexOnlyScan = 321,
T_BitmapIndexScan = 322,
T_BitmapHeapScan = 323,
T_TidScan = 324,
T_TidRangeScan = 325,
T_SubqueryScan = 326,
T_FunctionScan = 327,
T_ValuesScan = 328,
T_TableFuncScan = 329,
T_CteScan = 330,
T_NamedTuplestoreScan = 331,
T_WorkTableScan = 332,
T_ForeignScan = 333,
T_CustomScan = 334,
T_NestLoop = 335,
T_NestLoopParam = 336,
T_MergeJoin = 337,
T_HashJoin = 338,
T_Material = 339,
T_Memoize = 340,
T_Sort = 341,
T_IncrementalSort = 342,
T_Group = 343,
T_Agg = 344,
T_WindowAgg = 345,
T_Unique = 346,
T_Gather = 347,
T_GatherMerge = 348,
T_Hash = 349,
T_SetOp = 350,
T_LockRows = 351,
T_Limit = 352,
T_PlanRowMark = 353,
T_PartitionPruneInfo = 354,
T_PartitionedRelPruneInfo = 355,
T_PartitionPruneStepOp = 356,
T_PartitionPruneStepCombine = 357,
T_PlanInvalItem = 358,
T_ExprState = 359,
T_IndexInfo = 360,
T_ExprContext = 361,
T_ReturnSetInfo = 362,
T_ProjectionInfo = 363,
T_JunkFilter = 364,
T_OnConflictSetState = 365,
T_MergeActionState = 366,
T_ResultRelInfo = 367,
T_EState = 368,
T_WindowFuncExprState = 369,
T_SetExprState = 370,
T_SubPlanState = 371,
T_DomainConstraintState = 372,
T_ResultState = 373,
T_ProjectSetState = 374,
T_ModifyTableState = 375,
T_AppendState = 376,
T_MergeAppendState = 377,
T_RecursiveUnionState = 378,
T_BitmapAndState = 379,
T_BitmapOrState = 380,
T_ScanState = 381,
T_SeqScanState = 382,
T_SampleScanState = 383,
T_IndexScanState = 384,
T_IndexOnlyScanState = 385,
T_BitmapIndexScanState = 386,
T_BitmapHeapScanState = 387,
T_TidScanState = 388,
T_TidRangeScanState = 389,
T_SubqueryScanState = 390,
T_FunctionScanState = 391,
T_ValuesScanState = 392,
T_TableFuncScanState = 393,
T_CteScanState = 394,
T_NamedTuplestoreScanState = 395,
T_WorkTableScanState = 396,
T_ForeignScanState = 397,
T_CustomScanState = 398,
T_JoinState = 399,
T_NestLoopState = 400,
T_MergeJoinState = 401,
T_HashJoinState = 402,
T_MaterialState = 403,
T_MemoizeState = 404,
T_SortState = 405,
T_IncrementalSortState = 406,
T_GroupState = 407,
T_AggState = 408,
T_WindowAggState = 409,
T_UniqueState = 410,
T_GatherState = 411,
T_GatherMergeState = 412,
T_HashState = 413,
T_SetOpState = 414,
T_LockRowsState = 415,
T_LimitState = 416,
T_IndexAmRoutine = 417,
T_TableAmRoutine = 418,
T_TsmRoutine = 419,
T_EventTriggerData = 420,
T_TriggerData = 421,
T_TupleTableSlot = 422,
T_FdwRoutine = 423,
T_Bitmapset = 424,
T_ExtensibleNode = 425,
T_ErrorSaveContext = 426,
T_IdentifySystemCmd = 427,
T_BaseBackupCmd = 428,
T_CreateReplicationSlotCmd = 429,
T_DropReplicationSlotCmd = 430,
T_StartReplicationCmd = 431,
T_ReadReplicationSlotCmd = 432,
T_TimeLineHistoryCmd = 433,
T_SupportRequestSimplify = 434,
T_SupportRequestSelectivity = 435,
T_SupportRequestCost = 436,
T_SupportRequestRows = 437,
T_SupportRequestIndexCondition = 438,
T_SupportRequestWFuncMonotonic = 439,
T_SupportRequestOptimizeWindowClause = 440,
T_Integer = 441,
T_Float = 442,
T_Boolean = 443,
T_String = 444,
T_BitString = 445,
T_ForeignKeyCacheInfo = 446,
T_IntList = 447,
T_OidList = 448,
T_XidList = 449,
T_AllocSetContext = 450,
T_GenerationContext = 451,
T_SlabContext = 452,
T_TIDBitmap = 453,
T_WindowObjectData = 454,

View File

@ -0,0 +1,803 @@
/* src/include/pg_config.h. Generated from pg_config.h.in by configure. */
/* src/include/pg_config.h.in. Generated from configure.in by autoheader. */
/* Define if building universal (internal helper macro) */
/* #undef AC_APPLE_UNIVERSAL_BUILD */
/* The normal alignment of `double', in bytes. */
#define ALIGNOF_DOUBLE 4
/* The normal alignment of `int', in bytes. */
#define ALIGNOF_INT 4
/* The normal alignment of `long', in bytes. */
#define ALIGNOF_LONG 4
/* The normal alignment of `long long int', in bytes. */
#define ALIGNOF_LONG_LONG_INT 4
/* The normal alignment of `short', in bytes. */
#define ALIGNOF_SHORT 2
/* Size of a disk block --- this also limits the size of a tuple. You can set
it bigger if you need bigger tuples (although TOAST should reduce the need
to have large tuples, since fields can be spread across multiple tuples).
BLCKSZ must be a power of 2. The maximum possible value of BLCKSZ is
currently 2^15 (32768). This is determined by the 15-bit widths of the
lp_off and lp_len fields in ItemIdData (see include/storage/itemid.h).
Changing BLCKSZ requires an initdb. */
#define BLCKSZ 8192
/* Define to the default TCP port number on which the server listens and to
which clients will try to connect. This can be overridden at run-time, but
it's convenient if your clients have the right default compiled in.
(--with-pgport=PORTNUM) */
#define DEF_PGPORT 5432
/* Define to the default TCP port number as a string constant. */
#define DEF_PGPORT_STR "5432"
/* Define to the file name extension of dynamically-loadable modules. */
#define DLSUFFIX ".so"
/* Define to build with GSSAPI support. (--with-gssapi) */
//#define ENABLE_GSS 0
/* Define to 1 if you want National Language Support. (--enable-nls) */
/* #undef ENABLE_NLS */
/* Define to 1 to build client libraries as thread-safe code.
(--enable-thread-safety) */
#define ENABLE_THREAD_SAFETY 1
/* Define to nothing if C supports flexible array members, and to 1 if it does
not. That way, with a declaration like `struct s { int n; double
d[FLEXIBLE_ARRAY_MEMBER]; };', the struct hack can be used with pre-C99
compilers. When computing the size of such an object, don't use 'sizeof
(struct s)' as it overestimates the size. Use 'offsetof (struct s, d)'
instead. Don't use 'offsetof (struct s, d[0])', as this doesn't work with
MSVC and with C++ compilers. */
#define FLEXIBLE_ARRAY_MEMBER /**/
/* float4 values are passed by value if 'true', by reference if 'false' */
#define FLOAT4PASSBYVAL true
/* float8, int8, and related values are passed by value if 'true', by
reference if 'false' */
#define FLOAT8PASSBYVAL false
/* Define to 1 if you have the `append_history' function. */
/* #undef HAVE_APPEND_HISTORY */
/* Define to 1 if you want to use atomics if available. */
#define HAVE_ATOMICS 1
/* Define to 1 if you have the <atomic.h> header file. */
/* #undef HAVE_ATOMIC_H */
/* Define to 1 if you have the `cbrt' function. */
#define HAVE_CBRT 1
/* Define to 1 if you have the `class' function. */
/* #undef HAVE_CLASS */
/* Define to 1 if you have the <crtdefs.h> header file. */
/* #undef HAVE_CRTDEFS_H */
/* Define to 1 if you have the `crypt' function. */
#define HAVE_CRYPT 1
/* Define to 1 if you have the <crypt.h> header file. */
#define HAVE_CRYPT_H 1
/* Define to 1 if you have the declaration of `fdatasync', and to 0 if you
don't. */
#define HAVE_DECL_FDATASYNC 1
/* Define to 1 if you have the declaration of `F_FULLFSYNC', and to 0 if you
don't. */
#define HAVE_DECL_F_FULLFSYNC 0
/* Define to 1 if you have the declaration of `posix_fadvise', and to 0 if you
don't. */
#define HAVE_DECL_POSIX_FADVISE 1
/* Define to 1 if you have the declaration of `snprintf', and to 0 if you
don't. */
#define HAVE_DECL_SNPRINTF 1
/* Define to 1 if you have the declaration of `strlcat', and to 0 if you
don't. */
#if OS_DARWIN
#define HAVE_DECL_STRLCAT 1
#endif
/* Define to 1 if you have the declaration of `strlcpy', and to 0 if you
don't. */
#if OS_DARWIN
#define HAVE_DECL_STRLCPY 1
#endif
/* Define to 1 if you have the declaration of `sys_siglist', and to 0 if you
don't. */
#define HAVE_DECL_SYS_SIGLIST 1
/* Define to 1 if you have the declaration of `vsnprintf', and to 0 if you
don't. */
#define HAVE_DECL_VSNPRINTF 1
/* Define to 1 if you have the <dld.h> header file. */
/* #undef HAVE_DLD_H */
/* Define to 1 if you have the <editline/history.h> header file. */
/* #undef HAVE_EDITLINE_HISTORY_H */
/* Define to 1 if you have the <editline/readline.h> header file. */
#define HAVE_EDITLINE_READLINE_H 1
/* Define to 1 if you have the `fpclass' function. */
/* #undef HAVE_FPCLASS */
/* Define to 1 if you have the `fp_class' function. */
/* #undef HAVE_FP_CLASS */
/* Define to 1 if you have the `fp_class_d' function. */
/* #undef HAVE_FP_CLASS_D */
/* Define to 1 if you have the <fp_class.h> header file. */
/* #undef HAVE_FP_CLASS_H */
/* Define to 1 if fseeko (and presumably ftello) exists and is declared. */
#define HAVE_FSEEKO 1
/* Define to 1 if you have __atomic_compare_exchange_n(int *, int *, int). */
/* #undef HAVE_GCC__ATOMIC_INT32_CAS */
/* Define to 1 if you have __atomic_compare_exchange_n(int64 *, int *, int64).
*/
/* #undef HAVE_GCC__ATOMIC_INT64_CAS */
/* Define to 1 if you have __sync_lock_test_and_set(char *) and friends. */
#define HAVE_GCC__SYNC_CHAR_TAS 1
/* Define to 1 if you have __sync_compare_and_swap(int *, int, int). */
/* #undef HAVE_GCC__SYNC_INT32_CAS */
/* Define to 1 if you have __sync_lock_test_and_set(int *) and friends. */
#define HAVE_GCC__SYNC_INT32_TAS 1
/* Define to 1 if you have __sync_compare_and_swap(int64 *, int64, int64). */
/* #undef HAVE_GCC__SYNC_INT64_CAS */
/* Define to 1 if you have the `getifaddrs' function. */
#define HAVE_GETIFADDRS 1
/* Define to 1 if you have the `getopt' function. */
#define HAVE_GETOPT 1
/* Define to 1 if you have the <getopt.h> header file. */
#define HAVE_GETOPT_H 1
/* Define to 1 if you have the `getopt_long' function. */
#define HAVE_GETOPT_LONG 1
/* Define to 1 if you have the `getpeereid' function. */
/* #undef HAVE_GETPEEREID */
/* Define to 1 if you have the `getpeerucred' function. */
/* #undef HAVE_GETPEERUCRED */
/* Define to 1 if you have the <gssapi_ext.h> header file. */
/* #undef HAVE_GSSAPI_EXT_H */
/* Define to 1 if you have the <gssapi/gssapi_ext.h> header file. */
/* #undef HAVE_GSSAPI_GSSAPI_EXT_H */
/* Define to 1 if you have the <gssapi/gssapi.h> header file. */
//#define HAVE_GSSAPI_GSSAPI_H 0
/* Define to 1 if you have the <gssapi.h> header file. */
/* #undef HAVE_GSSAPI_H */
/* Define to 1 if you have the <history.h> header file. */
/* #undef HAVE_HISTORY_H */
/* Define to 1 if you have the `history_truncate_file' function. */
#define HAVE_HISTORY_TRUNCATE_FILE 1
/* Define to 1 if you have the <ieeefp.h> header file. */
/* #undef HAVE_IEEEFP_H */
/* Define to 1 if you have the <ifaddrs.h> header file. */
#define HAVE_IFADDRS_H 1
/* Define to 1 if you have the `inet_aton' function. */
#define HAVE_INET_ATON 1
/* Define to 1 if you have the `inet_pton' function. */
#define HAVE_INET_PTON 1
/* Define to 1 if the system has the type `int64'. */
/* #undef HAVE_INT64 */
/* Define to 1 if the system has the type `int8'. */
/* #undef HAVE_INT8 */
/* Define to 1 if the system has the type `intptr_t'. */
#define HAVE_INTPTR_T 1
/* Define to 1 if you have the <inttypes.h> header file. */
#define HAVE_INTTYPES_H 1
/* Define to 1 if you have the global variable 'int opterr'. */
#define HAVE_INT_OPTERR 1
/* Define to 1 if you have the global variable 'int optreset'. */
/* #undef HAVE_INT_OPTRESET */
/* Define to 1 if you have the global variable 'int timezone'. */
#define HAVE_INT_TIMEZONE 1
/* Define to 1 if you have isinf(). */
#define HAVE_ISINF 1
/* Define to 1 if you have the <langinfo.h> header file. */
#define HAVE_LANGINFO_H 1
/* Define to 1 if you have the `crypto' library (-lcrypto). */
#define HAVE_LIBCRYPTO 1
/* Define to 1 if you have the `ldap' library (-lldap). */
//#define HAVE_LIBLDAP 0
/* Define to 1 if you have the `m' library (-lm). */
#define HAVE_LIBM 1
/* Define to 1 if you have the `pam' library (-lpam). */
#define HAVE_LIBPAM 1
/* Define if you have a function readline library */
#define HAVE_LIBREADLINE 1
/* Define to 1 if you have the `selinux' library (-lselinux). */
/* #undef HAVE_LIBSELINUX */
/* Define to 1 if you have the `ssl' library (-lssl). */
#define HAVE_LIBSSL 0
/* Define to 1 if you have the `wldap32' library (-lwldap32). */
/* #undef HAVE_LIBWLDAP32 */
/* Define to 1 if you have the `xml2' library (-lxml2). */
#define HAVE_LIBXML2 1
/* Define to 1 if you have the `xslt' library (-lxslt). */
#define HAVE_LIBXSLT 1
/* Define to 1 if you have the `z' library (-lz). */
#define HAVE_LIBZ 1
/* Define to 1 if you have the `zstd' library (-lzstd). */
/* #undef HAVE_LIBZSTD */
/* Define to 1 if constants of type 'long long int' should have the suffix LL.
*/
#define HAVE_LL_CONSTANTS 1
/* Define to 1 if the system has the type `locale_t'. */
#define HAVE_LOCALE_T 1
/* Define to 1 if `long int' works and is 64 bits. */
/* #undef HAVE_LONG_INT_64 */
/* Define to 1 if the system has the type `long long int'. */
#define HAVE_LONG_LONG_INT 1
/* Define to 1 if `long long int' works and is 64 bits. */
#define HAVE_LONG_LONG_INT_64 1
/* Define to 1 if you have the <mbarrier.h> header file. */
/* #undef HAVE_MBARRIER_H */
/* Define to 1 if you have the `mbstowcs_l' function. */
/* #undef HAVE_MBSTOWCS_L */
/* Define to 1 if you have the `memmove' function. */
#define HAVE_MEMMOVE 1
/* Define to 1 if you have the <memory.h> header file. */
#define HAVE_MEMORY_H 1
/* Define to 1 if you have the `mkdtemp' function. */
#define HAVE_MKDTEMP 1
/* Define to 1 if you have the <net/if.h> header file. */
#define HAVE_NET_IF_H 1
/* Define to 1 if you have the <ossp/uuid.h> header file. */
/* #undef HAVE_OSSP_UUID_H */
/* Define to 1 if you have the <pam/pam_appl.h> header file. */
/* #undef HAVE_PAM_PAM_APPL_H */
/* Define to 1 if you have the `posix_fadvise' function. */
#define HAVE_POSIX_FADVISE 1
/* Define to 1 if you have the declaration of `preadv', and to 0 if you don't. */
/* #undef HAVE_DECL_PREADV */
/* Define to 1 if you have the declaration of `pwritev', and to 0 if you don't. */
/* #define HAVE_DECL_PWRITEV */
/* Define to 1 if you have the `X509_get_signature_info' function. */
/* #undef HAVE_X509_GET_SIGNATURE_INFO */
/* Define to 1 if you have the POSIX signal interface. */
#define HAVE_POSIX_SIGNALS 1
/* Define to 1 if the assembler supports PPC's LWARX mutex hint bit. */
/* #undef HAVE_PPC_LWARX_MUTEX_HINT */
/* Define to 1 if you have the `pthread_is_threaded_np' function. */
/* #undef HAVE_PTHREAD_IS_THREADED_NP */
/* Define to 1 if you have the <pwd.h> header file. */
#define HAVE_PWD_H 1
/* Define to 1 if you have the <readline.h> header file. */
/* #undef HAVE_READLINE_H */
/* Define to 1 if you have the <readline/history.h> header file. */
#define HAVE_READLINE_HISTORY_H 1
/* Define to 1 if you have the <readline/readline.h> header file. */
/* #undef HAVE_READLINE_READLINE_H */
/* Define to 1 if you have the `rint' function. */
#define HAVE_RINT 1
/* Define to 1 if you have the `rl_completion_matches' function. */
#define HAVE_RL_COMPLETION_MATCHES 1
/* Define to 1 if you have the `rl_filename_completion_function' function. */
#define HAVE_RL_FILENAME_COMPLETION_FUNCTION 1
/* Define to 1 if you have the `rl_reset_screen_size' function. */
/* #undef HAVE_RL_RESET_SCREEN_SIZE */
/* Define to 1 if you have the `rl_variable_bind' function. */
#define HAVE_RL_VARIABLE_BIND 1
/* Define to 1 if you have the <security/pam_appl.h> header file. */
#define HAVE_SECURITY_PAM_APPL_H 1
/* Define to 1 if you have the `setproctitle' function. */
/* #undef HAVE_SETPROCTITLE */
/* Define to 1 if the system has the type `socklen_t'. */
#define HAVE_SOCKLEN_T 1
/* Define to 1 if you have the `sigprocmask' function. */
#define HAVE_SIGPROCMASK 1
/* Define to 1 if you have sigsetjmp(). */
#define HAVE_SIGSETJMP 1
/* Define to 1 if the system has the type `sig_atomic_t'. */
#define HAVE_SIG_ATOMIC_T 1
/* Define to 1 if you have the `snprintf' function. */
#define HAVE_SNPRINTF 1
/* Define to 1 if you have spinlocks. */
#define HAVE_SPINLOCKS 1
/* Define to 1 if you have the `SSL_CTX_set_cert_cb' function. */
#define HAVE_SSL_CTX_SET_CERT_CB 1
/* Define to 1 if you have the `SSL_CTX_set_num_tickets' function. */
/* #define HAVE_SSL_CTX_SET_NUM_TICKETS */
/* Define to 1 if you have the `SSL_get_current_compression' function. */
#define HAVE_SSL_GET_CURRENT_COMPRESSION 0
/* Define to 1 if you have the <stdint.h> header file. */
#define HAVE_STDINT_H 1
/* Define to 1 if you have the <stdlib.h> header file. */
#define HAVE_STDLIB_H 1
/* Define to 1 if you have the `strerror' function. */
#define HAVE_STRERROR 1
/* Define to 1 if you have the `strerror_r' function. */
#define HAVE_STRERROR_R 1
/* Define to 1 if you have the <strings.h> header file. */
//#define HAVE_STRINGS_H 1
/* Define to 1 if you have the <string.h> header file. */
#define HAVE_STRING_H 1
/* Define to 1 if you have the `strlcat' function. */
/* #undef HAVE_STRLCAT */
/* Define to 1 if you have the `strlcpy' function. */
/* #undef HAVE_STRLCPY */
#if (!OS_DARWIN)
#define HAVE_STRCHRNUL 1
#endif
/* Define to 1 if the system has the type `struct option'. */
#define HAVE_STRUCT_OPTION 1
/* Define to 1 if `sa_len' is a member of `struct sockaddr'. */
/* #undef HAVE_STRUCT_SOCKADDR_SA_LEN */
/* Define to 1 if `tm_zone' is a member of `struct tm'. */
#define HAVE_STRUCT_TM_TM_ZONE 1
/* Define to 1 if you have the `sync_file_range' function. */
/* #undef HAVE_SYNC_FILE_RANGE */
/* Define to 1 if you have the syslog interface. */
#define HAVE_SYSLOG 1
/* Define to 1 if you have the <sys/ioctl.h> header file. */
#define HAVE_SYS_IOCTL_H 1
/* Define to 1 if you have the <sys/personality.h> header file. */
/* #undef HAVE_SYS_PERSONALITY_H */
/* Define to 1 if you have the <sys/poll.h> header file. */
#define HAVE_SYS_POLL_H 1
/* Define to 1 if you have the <sys/signalfd.h> header file. */
/* #undef HAVE_SYS_SIGNALFD_H */
/* Define to 1 if you have the <sys/socket.h> header file. */
#define HAVE_SYS_SOCKET_H 1
/* Define to 1 if you have the <sys/stat.h> header file. */
#define HAVE_SYS_STAT_H 1
/* Define to 1 if you have the <sys/time.h> header file. */
#define HAVE_SYS_TIME_H 1
/* Define to 1 if you have the <sys/types.h> header file. */
#define HAVE_SYS_TYPES_H 1
/* Define to 1 if you have the <sys/ucred.h> header file. */
#if (OS_DARWIN || OS_FREEBSD)
#define HAVE_SYS_UCRED_H 1
#endif
/* Define to 1 if you have the <sys/un.h> header file. */
#define _GNU_SOURCE 1 /* Needed for glibc struct ucred */
/* Define to 1 if you have the <termios.h> header file. */
#define HAVE_TERMIOS_H 1
/* Define to 1 if your `struct tm' has `tm_zone'. Deprecated, use
`HAVE_STRUCT_TM_TM_ZONE' instead. */
#define HAVE_TM_ZONE 1
/* Define to 1 if you have the `towlower' function. */
#define HAVE_TOWLOWER 1
/* Define to 1 if you have the external array `tzname'. */
#define HAVE_TZNAME 1
/* Define to 1 if you have the <ucred.h> header file. */
/* #undef HAVE_UCRED_H */
/* Define to 1 if the system has the type `uint64'. */
/* #undef HAVE_UINT64 */
/* Define to 1 if the system has the type `uint8'. */
/* #undef HAVE_UINT8 */
/* Define to 1 if the system has the type `uintptr_t'. */
#define HAVE_UINTPTR_T 1
/* Define to 1 if the system has the type `union semun'. */
/* #undef HAVE_UNION_SEMUN */
/* Define to 1 if you have the <unistd.h> header file. */
#define HAVE_UNISTD_H 1
/* Define to 1 if you have unix sockets. */
#define HAVE_UNIX_SOCKETS 1
/* Define to 1 if the system has the type `unsigned long long int'. */
#define HAVE_UNSIGNED_LONG_LONG_INT 1
/* Define to 1 if you have the `utime' function. */
#define HAVE_UTIME 1
/* Define to 1 if you have the `utimes' function. */
#define HAVE_UTIMES 1
/* Define to 1 if you have the <utime.h> header file. */
#define HAVE_UTIME_H 1
/* Define to 1 if you have BSD UUID support. */
/* #undef HAVE_UUID_BSD */
/* Define to 1 if you have E2FS UUID support. */
/* #undef HAVE_UUID_E2FS */
/* Define to 1 if you have the <uuid.h> header file. */
#define HAVE_UUID_H 1
/* Define to 1 if you have OSSP UUID support. */
#define HAVE_UUID_OSSP 1
/* Define to 1 if you have the <uuid/uuid.h> header file. */
/* #undef HAVE_UUID_UUID_H */
/* Define to 1 if your compiler knows the visibility("hidden") attribute. */
/* #undef HAVE_VISIBILITY_ATTRIBUTE */
/* Define to 1 if you have the `vsnprintf' function. */
#define HAVE_VSNPRINTF 1
/* Define to 1 if you have the <wchar.h> header file. */
#define HAVE_WCHAR_H 1
/* Define to 1 if you have the `wcstombs' function. */
#define HAVE_WCSTOMBS 1
/* Define to 1 if you have the `wcstombs_l' function. */
/* #undef HAVE_WCSTOMBS_L */
/* Define to 1 if your compiler understands __builtin_bswap32. */
/* #undef HAVE__BUILTIN_BSWAP32 */
/* Define to 1 if your compiler understands __builtin_constant_p. */
#define HAVE__BUILTIN_CONSTANT_P 1
/* Define to 1 if your compiler understands __builtin_frame_address. */
/* #undef HAVE__BUILTIN_FRAME_ADDRESS */
/* Define to 1 if your compiler understands __builtin_types_compatible_p. */
#define HAVE__BUILTIN_TYPES_COMPATIBLE_P 1
/* Define to 1 if your compiler understands __builtin_unreachable. */
/* #undef HAVE__BUILTIN_UNREACHABLE */
/* Define to 1 if you have __cpuid. */
/* #undef HAVE__CPUID */
/* Define to 1 if you have __get_cpuid. */
/* #undef HAVE__GET_CPUID */
/* Define to 1 if your compiler understands _Static_assert. */
/* #undef HAVE__STATIC_ASSERT */
/* Define to 1 if your compiler understands __VA_ARGS__ in macros. */
#define HAVE__VA_ARGS 1
/* Define to the appropriate snprintf length modifier for 64-bit ints. */
#define INT64_MODIFIER "ll"
/* Define to 1 if `locale_t' requires <xlocale.h>. */
/* #undef LOCALE_T_IN_XLOCALE */
/* Define as the maximum alignment requirement of any C data type. */
#define MAXIMUM_ALIGNOF 4
/* Define bytes to use libc memset(). */
#define MEMSET_LOOP_LIMIT 1024
/* Define to the address where bug reports for this package should be sent. */
#define PACKAGE_BUGREPORT "pgsql-bugs@postgresql.org"
/* Define to the full name of this package. */
#define PACKAGE_NAME "PostgreSQL"
/* Define to the full name and version of this package. */
#define PACKAGE_STRING "PostgreSQL 9.5.4"
/* Define to the one symbol short name of this package. */
#define PACKAGE_TARNAME "postgresql"
/* Define to the home page for this package. */
#define PACKAGE_URL ""
/* Define to the version of this package. */
#define PACKAGE_VERSION "9.5.4"
/* Define to the name of a signed 128-bit integer type. */
/* #undef PG_INT128_TYPE */
/* Define to the name of a signed 64-bit integer type. */
#define PG_INT64_TYPE long long int
/* Define to the name of the default PostgreSQL service principal in Kerberos
(GSSAPI). (--with-krb-srvnam=NAME) */
#define PG_KRB_SRVNAM "postgres"
/* PostgreSQL major version as a string */
#define PG_MAJORVERSION "9.5"
/* Define to gnu_printf if compiler supports it, else printf. */
#define PG_PRINTF_ATTRIBUTE printf
/* Define to 1 if "static inline" works without unwanted warnings from
compilations where static inline functions are defined but not called. */
#define PG_USE_INLINE 1
/* PostgreSQL version as a string */
#define PG_VERSION "9.5.4"
/* PostgreSQL version as a number */
#define PG_VERSION_NUM 90504
/* A string containing the version number, platform, and C compiler */
#define PG_VERSION_STR "PostgreSQL 9.5.4 on i686-pc-linux-gnu, compiled by gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-55), 32-bit"
/* Define to 1 to allow profiling output to be saved separately for each
process. */
/* #undef PROFILE_PID_DIR */
/* RELSEG_SIZE is the maximum number of blocks allowed in one disk file. Thus,
the maximum size of a single file is RELSEG_SIZE * BLCKSZ; relations bigger
than that are divided into multiple files. RELSEG_SIZE * BLCKSZ must be
less than your OS' limit on file size. This is often 2 GB or 4GB in a
32-bit operating system, unless you have large file support enabled. By
default, we make the limit 1 GB to avoid any possible integer-overflow
problems within the OS. A limit smaller than necessary only means we divide
a large relation into more chunks than necessary, so it seems best to err
in the direction of a small limit. A power-of-2 value is recommended to
save a few cycles in md.c, but is not absolutely required. Changing
RELSEG_SIZE requires an initdb. */
#define RELSEG_SIZE 131072
/* The size of `long', as computed by sizeof. */
#define SIZEOF_LONG 4
/* The size of `off_t', as computed by sizeof. */
#define SIZEOF_OFF_T 8
/* The size of `size_t', as computed by sizeof. */
#define SIZEOF_SIZE_T 4
/* The size of `void *', as computed by sizeof. */
#define SIZEOF_VOID_P 4
/* Define to 1 if you have the ANSI C header files. */
#define STDC_HEADERS 1
/* Define to 1 if strerror_r() returns a int. */
/* #undef STRERROR_R_INT */
/* Define to 1 if your <sys/time.h> declares `struct tm'. */
/* #undef TM_IN_SYS_TIME */
/* Define to 1 to build with assertion checks. (--enable-cassert) */
/* #undef USE_ASSERT_CHECKING */
/* Define to 1 to build with Bonjour support. (--with-bonjour) */
/* #undef USE_BONJOUR */
/* Define to 1 if you want float4 values to be passed by value.
(--enable-float4-byval) */
#define USE_FLOAT4_BYVAL 1
/* Define to 1 if you want float8, int8, etc values to be passed by value.
(--enable-float8-byval) */
/* #undef USE_FLOAT8_BYVAL */
/* Define to 1 if you want 64-bit integer timestamp and interval support.
(--enable-integer-datetimes) */
#define USE_INTEGER_DATETIMES 1
/* Define to 1 to build with LDAP support. (--with-ldap) */
//#define USE_LDAP 0
/* Define to 1 to build with XML support. (--with-libxml) */
#define USE_LIBXML 1
/* Define to 1 to use XSLT support when building contrib/xml2.
(--with-libxslt) */
#define USE_LIBXSLT 1
/* Define to select named POSIX semaphores. */
/* #undef USE_NAMED_POSIX_SEMAPHORES */
/* Define to build with OpenSSL support. (--with-openssl) */
#define USE_OPENSSL 0
#define USE_OPENSSL_RANDOM 0
#define FRONTEND 1
/* Define to 1 to build with PAM support. (--with-pam) */
#define USE_PAM 1
/* Use replacement snprintf() functions. */
/* #undef USE_REPL_SNPRINTF */
/* Define to 1 to use Intel SSE 4.2 CRC instructions with a runtime check. */
#define USE_SLICING_BY_8_CRC32C 1
/* Define to 1 use Intel SSE 4.2 CRC instructions. */
/* #undef USE_SSE42_CRC32C */
/* Define to 1 to use Intel SSSE 4.2 CRC instructions with a runtime check. */
/* #undef USE_SSE42_CRC32C_WITH_RUNTIME_CHECK */
/* Define to select SysV-style semaphores. */
#define USE_SYSV_SEMAPHORES 1
/* Define to select SysV-style shared memory. */
#define USE_SYSV_SHARED_MEMORY 1
/* Define to select unnamed POSIX semaphores. */
/* #undef USE_UNNAMED_POSIX_SEMAPHORES */
/* Define to select Win32-style semaphores. */
/* #undef USE_WIN32_SEMAPHORES */
/* Define to select Win32-style shared memory. */
/* #undef USE_WIN32_SHARED_MEMORY */
/* Define to 1 to build with ZSTD support. (--with-zstd) */
/* #undef USE_ZSTD */
/* Define to 1 if `wcstombs_l' requires <xlocale.h>. */
/* #undef WCSTOMBS_L_IN_XLOCALE */
/* Define WORDS_BIGENDIAN to 1 if your processor stores words with the most
significant byte first (like Motorola and SPARC, unlike Intel). */
#if defined AC_APPLE_UNIVERSAL_BUILD
# if defined __BIG_ENDIAN__
# define WORDS_BIGENDIAN 1
# endif
#else
# ifndef WORDS_BIGENDIAN
/* # undef WORDS_BIGENDIAN */
# endif
#endif
/* Size of a WAL file block. This need have no particular relation to BLCKSZ.
XLOG_BLCKSZ must be a power of 2, and if your system supports O_DIRECT I/O,
XLOG_BLCKSZ must be a multiple of the alignment requirement for direct-I/O
buffers, else direct I/O may fail. Changing XLOG_BLCKSZ requires an initdb.
*/
#define XLOG_BLCKSZ 8192
/* XLOG_SEG_SIZE is the size of a single WAL file. This must be a power of 2
and larger than XLOG_BLCKSZ (preferably, a great deal larger than
XLOG_BLCKSZ). Changing XLOG_SEG_SIZE requires an initdb. */
#define XLOG_SEG_SIZE (16 * 1024 * 1024)
/* Number of bits in a file offset, on hosts where this is settable. */
#define _FILE_OFFSET_BITS 64
/* Define to 1 to make fseeko visible on some hosts (e.g. glibc 2.2). */
/* #undef _LARGEFILE_SOURCE */
/* Define for large files, on AIX-style hosts. */
/* #undef _LARGE_FILES */
/* Define to `__inline__' or `__inline' if that's what the C compiler
calls it, or to nothing if 'inline' is not supported under any name. */
#ifndef __cplusplus
/* #undef inline */
#endif
/* Define to the type of a signed integer type wide enough to hold a pointer,
if such a type exists, and if the system does not define it. */
/* #undef intptr_t */
/* Define to empty if the C compiler does not understand signed types. */
/* #undef signed */
/* Define to the type of an unsigned integer type wide enough to hold a
pointer, if such a type exists, and if the system does not define it. */
/* #undef uintptr_t */

View File

@ -0,0 +1,7 @@
/*
 * src/include/pg_config_ext.h.in. This is generated manually, not by
 * autoheader, since we want to limit which symbols get defined here.
 */
/* Define to the name of a signed 64-bit integer type. */
#define PG_INT64_TYPE long long int

View File

@ -0,0 +1,34 @@
#if defined(OS_DARWIN)
/* src/include/port/darwin.h */
#define __darwin__ 1
#if HAVE_DECL_F_FULLFSYNC /* not present before macOS 10.3 */
#define HAVE_FSYNC_WRITETHROUGH
#endif
#else
/* src/include/port/linux.h */
/*
* As of July 2007, all known versions of the Linux kernel will sometimes
* return EIDRM for a shmctl() operation when EINVAL is correct (it happens
* when the low-order 15 bits of the supplied shm ID match the slot number
* assigned to a newer shmem segment). We deal with this by assuming that
* EIDRM means EINVAL in PGSharedMemoryIsInUse(). This is reasonably safe
* since in fact Linux has no excuse for ever returning EIDRM; it doesn't
* track removed segments in a way that would allow distinguishing them from
* private ones. But someday that code might get upgraded, and we'd have
* to have a kernel version test here.
*/
#define HAVE_LINUX_EIDRM_BUG
/*
* Set the default wal_sync_method to fdatasync. With recent Linux versions,
* xlogdefs.h's normal rules will prefer open_datasync, which (a) doesn't
* perform better and (b) causes outright failures on ext4 data=journal
* filesystems, because those don't support O_DIRECT.
*/
#define PLATFORM_DEFAULT_SYNC_METHOD SYNC_METHOD_FDATASYNC
#endif

View File

@ -0,0 +1,12 @@
#define PGBINDIR "/bin"
#define PGSHAREDIR "/share"
#define SYSCONFDIR "/etc"
#define INCLUDEDIR "/include"
#define PKGINCLUDEDIR "/include"
#define INCLUDEDIRSERVER "/include/server"
#define LIBDIR "/lib"
#define PKGLIBDIR "/lib"
#define LOCALEDIR "/share/locale"
#define DOCDIR "/doc"
#define HTMLDIR "/doc"
#define MANDIR "/man"

View File

@ -14,5 +14,6 @@ git config submodule."contrib/icu".update '!../sparse-checkout/update-icu.sh'
git config submodule."contrib/boost".update '!../sparse-checkout/update-boost.sh'
git config submodule."contrib/aws-s2n-tls".update '!../sparse-checkout/update-aws-s2n-tls.sh'
git config submodule."contrib/protobuf".update '!../sparse-checkout/update-protobuf.sh'
git config submodule."contrib/postgres".update '!../sparse-checkout/update-postgres.sh'
git config submodule."contrib/libxml2".update '!../sparse-checkout/update-libxml2.sh'
git config submodule."contrib/brotli".update '!../sparse-checkout/update-brotli.sh'

View File

@ -0,0 +1,16 @@
#!/bin/sh
echo "Using sparse checkout for postgres"
FILES_TO_CHECKOUT=$(git rev-parse --git-dir)/info/sparse-checkout
echo '!/*' > $FILES_TO_CHECKOUT
echo '/src/interfaces/libpq/*' >> $FILES_TO_CHECKOUT
echo '!/src/interfaces/libpq/*/*' >> $FILES_TO_CHECKOUT
echo '/src/common/*' >> $FILES_TO_CHECKOUT
echo '!/src/port/*/*' >> $FILES_TO_CHECKOUT
echo '/src/port/*' >> $FILES_TO_CHECKOUT
echo '/src/include/*' >> $FILES_TO_CHECKOUT
git config core.sparsecheckout true
git checkout $1
git read-tree -mu HEAD

View File

@ -155,6 +155,12 @@ Replication of [**TOAST**](https://www.postgresql.org/docs/9.5/storage-toast.htm
Sets a comma-separated list of PostgreSQL database tables, which will be replicated via [MaterializedPostgreSQL](../../engines/database-engines/materialized-postgresql.md) database engine.
Each table can have a subset of replicated columns in brackets. If the subset of columns is omitted, all columns of the table will be replicated.
``` sql
materialized_postgresql_tables_list = 'table1(co1, col2),table2,table3(co3, col5, col7)'
```
Default value: empty list, which means the whole PostgreSQL database will be replicated.
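For context, a minimal sketch of how this setting is usually supplied when creating a `MaterializedPostgreSQL` database; the host, credentials, and table names below are placeholders:
``` sql
-- Hypothetical connection parameters; only the SETTINGS clause is the point here.
CREATE DATABASE pg_mirror
ENGINE = MaterializedPostgreSQL('postgres-host:5432', 'source_db', 'pg_user', 'pg_password')
SETTINGS materialized_postgresql_tables_list = 'table1(col1, col2),table2';
```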
### `materialized_postgresql_schema` {#materialized-postgresql-schema}

View File

@ -112,7 +112,7 @@ Example:
```
The NATS server configuration can be added using the ClickHouse config file.
More specifically you can add Redis password for NATS engine:
``` xml
<nats>
@ -167,7 +167,7 @@ If you want to change the target table by using `ALTER`, we recommend disabling
- `_subject` - NATS message subject. Data type: `String`.
Additional virtual columns when `kafka_handle_error_mode='stream'`:
Additional virtual columns when `nats_handle_error_mode='stream'`:
- `_raw_message` - Raw message that couldn't be parsed successfully. Data type: `Nullable(String)`.
- `_error` - Exception message happened during failed parsing. Data type: `Nullable(String)`.

View File

@ -3150,3 +3150,15 @@ Default value: "default"
**See Also**
- [Workload Scheduling](/docs/en/operations/workload-scheduling.md)
## max_authentication_methods_per_user {#max_authentication_methods_per_user}
The maximum number of authentication methods a user can be created with or altered to.
Changing this setting does not affect existing users. Authentication-related CREATE and ALTER queries fail if they exceed the limit specified by this setting, while CREATE and ALTER queries that do not change authentication methods still succeed.
Type: UInt64
Default value: 100
Zero means unlimited.
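For illustration, assuming the limit has been lowered to 2 in the server configuration, a query that specifies three authentication methods would be rejected; the user name and passwords below are placeholders:
``` sql
-- Assumes max_authentication_methods_per_user = 2; three methods exceed that limit,
-- so this query is expected to fail with an error.
CREATE USER limited_user IDENTIFIED WITH
    plaintext_password BY 'p1',
    sha256_password BY 'p2',
    bcrypt_password BY 'p3';
```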

View File

@ -9,7 +9,7 @@ Computes an approximate [quantile](https://en.wikipedia.org/wiki/Quantile) of a
**Syntax**
``` sql
quantileDDsketch[relative_accuracy, (level)](expr)
quantileDD(relative_accuracy, [level])(expr)
```
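For example, a call using the updated syntax might look like the sketch below; the table and column names are placeholders:
``` sql
-- 0.01 is the relative accuracy, 0.75 the requested quantile level.
SELECT quantileDD(0.01, 0.75)(response_time) FROM request_log;
```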
**Arguments**

View File

@ -2088,13 +2088,14 @@ Calculate AUC (Area Under the Curve, which is a concept in machine learning, see
**Syntax**
``` sql
arrayAUC(arr_scores, arr_labels)
arrayAUC(arr_scores, arr_labels[, scale])
```
**Arguments**
- `arr_scores` — scores that the prediction model gives.
- `arr_labels` — labels of samples, usually 1 for a positive sample and 0 for a negative sample.
- `scale` — Optional. Whether to return the normalized area. Default value: true. [Bool] See the example below.
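A minimal sketch of both modes, using literal arrays so it is self-contained:
``` sql
-- With scale = true (default) the normalized AUC is returned;
-- with scale = false the raw (unscaled) area is returned.
SELECT
    arrayAUC([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1]) AS auc_scaled,
    arrayAUC([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1], false) AS auc_unscaled;
```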
**Returned value**

View File

@ -12,9 +12,10 @@ Syntax:
``` sql
ALTER USER [IF EXISTS] name1 [ON CLUSTER cluster_name1] [RENAME TO new_name1]
[, name2 [ON CLUSTER cluster_name2] [RENAME TO new_name2] ...]
[NOT IDENTIFIED | IDENTIFIED {[WITH {no_password | plaintext_password | sha256_password | sha256_hash | double_sha1_password | double_sha1_hash}] BY {'password' | 'hash'}} | {WITH ldap SERVER 'server_name'} | {WITH kerberos [REALM 'realm']} | {WITH ssl_certificate CN 'common_name' | SAN 'TYPE:subject_alt_name'}]
[NOT IDENTIFIED | IDENTIFIED | ADD IDENTIFIED {[WITH {no_password | plaintext_password | sha256_password | sha256_hash | double_sha1_password | double_sha1_hash}] BY {'password' | 'hash'}} | {WITH ldap SERVER 'server_name'} | {WITH kerberos [REALM 'realm']} | {WITH ssl_certificate CN 'common_name' | SAN 'TYPE:subject_alt_name'}]
[[ADD | DROP] HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] | ANY | NONE]
[VALID UNTIL datetime]
[RESET AUTHENTICATION METHODS TO NEW]
[DEFAULT ROLE role [,...] | ALL | ALL EXCEPT role [,...] ]
[GRANTEES {user | role | ANY | NONE} [,...] [EXCEPT {user | role} [,...]]]
[SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY | WRITABLE] | PROFILE 'profile_name'] [,...]
@ -62,3 +63,31 @@ Allows the user with `john` account to grant his privileges to the user with `ja
``` sql
ALTER USER john GRANTEES jack;
```
Adds new authentication methods to the user while keeping the existing ones:
``` sql
ALTER USER user1 ADD IDENTIFIED WITH plaintext_password by '1', bcrypt_password by '2', plaintext_password by '3'
```
Notes:
1. Older versions of ClickHouse might not support the syntax of multiple authentication methods. Therefore, if the ClickHouse server contains such users and is downgraded to a version that does not support it, such users become unusable and some user-related operations will break. To downgrade gracefully, set every user to a single authentication method before downgrading. Alternatively, if the server was downgraded without following this procedure, the faulty users should be dropped.
2. `no_password` cannot coexist with other authentication methods for security reasons.
Because of that, it is not possible to `ADD` a `no_password` authentication method. The query below will throw an error:
``` sql
ALTER USER user1 ADD IDENTIFIED WITH no_password
```
If you want to drop authentication methods for a user and rely on `no_password`, you must use the replacing form shown below.
Resets authentication methods and adds the ones specified in the query (the effect of a leading IDENTIFIED without the ADD keyword):
``` sql
ALTER USER user1 IDENTIFIED WITH plaintext_password by '1', bcrypt_password by '2', plaintext_password by '3'
```
Resets authentication methods and keeps only the most recently added one:
``` sql
ALTER USER user1 RESET AUTHENTICATION METHODS TO NEW
```

View File

@ -15,6 +15,7 @@ CREATE USER [IF NOT EXISTS | OR REPLACE] name1 [ON CLUSTER cluster_name1]
[NOT IDENTIFIED | IDENTIFIED {[WITH {no_password | plaintext_password | sha256_password | sha256_hash | double_sha1_password | double_sha1_hash}] BY {'password' | 'hash'}} | {WITH ldap SERVER 'server_name'} | {WITH kerberos [REALM 'realm']} | {WITH ssl_certificate CN 'common_name' | SAN 'TYPE:subject_alt_name'} | {WITH ssh_key BY KEY 'public_key' TYPE 'ssh-rsa|...'} | {WITH http SERVER 'server_name' [SCHEME 'Basic']}]
[HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] | ANY | NONE]
[VALID UNTIL datetime]
[RESET AUTHENTICATION METHODS TO NEW]
[IN access_storage_type]
[DEFAULT ROLE role [,...]]
[DEFAULT DATABASE database | NONE]
@ -144,6 +145,17 @@ In ClickHouse Cloud, by default, passwords must meet the following complexity re
The available password types are: `plaintext_password`, `sha256_password`, `double_sha1_password`.
7. Multiple authentication methods can be specified:
```sql
CREATE USER user1 IDENTIFIED WITH plaintext_password by '1', bcrypt_password by '2', plaintext_password by '3'
```
Notes:
1. Older versions of ClickHouse might not support the syntax of multiple authentication methods. Therefore, if the ClickHouse server contains such users and is downgraded to a version that does not support it, such users become unusable and some user-related operations will break. To downgrade gracefully, set every user to a single authentication method before downgrading. Alternatively, if the server was downgraded without following this procedure, the faulty users should be dropped.
2. `no_password` cannot coexist with other authentication methods for security reasons. Therefore, you can only specify
`no_password` if it is the only authentication method in the query.
## User Host
User host is a host from which a connection to ClickHouse server could be established. The host can be specified in the `HOST` query section in the following ways:

View File

@ -29,6 +29,7 @@ namespace DB
namespace ErrorCodes
{
extern const int CANNOT_RESTORE_TABLE;
extern const int ACCESS_ENTITY_ALREADY_EXISTS;
extern const int LOGICAL_ERROR;
}
@ -175,9 +176,46 @@ namespace
return res;
}
std::unordered_map<UUID, UUID> resolveDependencies(const std::unordered_map<UUID, std::pair<String, AccessEntityType>> & dependencies, const AccessControl & access_control, bool allow_unresolved_dependencies)
/// Checks if new entities (which we're going to restore) already exist,
/// and either skips them or throws an exception depending on the restore settings.
void checkExistingEntities(std::vector<std::pair<UUID, AccessEntityPtr>> & entities,
std::unordered_map<UUID, UUID> & old_to_new_id,
const AccessControl & access_control,
RestoreAccessCreationMode creation_mode)
{
if (creation_mode == RestoreAccessCreationMode::kReplace)
return;
auto should_skip = [&](const std::pair<UUID, AccessEntityPtr> & id_and_entity)
{
const auto & id = id_and_entity.first;
const auto & entity = *id_and_entity.second;
auto existing_id = access_control.find(entity.getType(), entity.getName());
if (!existing_id)
{
return false;
}
else if (creation_mode == RestoreAccessCreationMode::kCreateIfNotExists)
{
old_to_new_id[id] = *existing_id;
return true;
}
else
{
throw Exception(ErrorCodes::ACCESS_ENTITY_ALREADY_EXISTS, "Cannot restore {} because it already exists", entity.formatTypeWithName());
}
};
std::erase_if(entities, should_skip);
}
/// If new entities (which we're going to restore) depend on other entities which are not going to be restored or not present in the backup
/// then we should try to replace those dependencies with already existing entities.
void resolveDependencies(const std::unordered_map<UUID, std::pair<String, AccessEntityType>> & dependencies,
std::unordered_map<UUID, UUID> & old_to_new_ids,
const AccessControl & access_control,
bool allow_unresolved_dependencies)
{
std::unordered_map<UUID, UUID> old_to_new_ids;
for (const auto & [id, name_and_type] : dependencies)
{
std::optional<UUID> new_id;
@ -188,9 +226,9 @@ namespace
if (new_id)
old_to_new_ids.emplace(id, *new_id);
}
return old_to_new_ids;
}
/// Generates random IDs for the new entities.
void generateRandomIDs(std::vector<std::pair<UUID, AccessEntityPtr>> & entities, std::unordered_map<UUID, UUID> & old_to_new_ids)
{
Poco::UUIDGenerator generator;
@ -203,27 +241,12 @@ namespace
}
}
void replaceDependencies(std::vector<std::pair<UUID, AccessEntityPtr>> & entities, const std::unordered_map<UUID, UUID> & old_to_new_ids)
/// Updates dependencies of the new entities using a specified map.
void replaceDependencies(std::vector<std::pair<UUID, AccessEntityPtr>> & entities,
const std::unordered_map<UUID, UUID> & old_to_new_ids)
{
for (auto & entity : entities | boost::adaptors::map_values)
{
bool need_replace = false;
for (const auto & dependency : entity->findDependencies())
{
if (old_to_new_ids.contains(dependency))
{
need_replace = true;
break;
}
}
if (!need_replace)
continue;
auto new_entity = entity->clone();
new_entity->replaceDependencies(old_to_new_ids);
entity = new_entity;
}
IAccessEntity::replaceDependencies(entity, old_to_new_ids);
}
AccessRightsElements getRequiredAccessToRestore(const std::vector<std::pair<UUID, AccessEntityPtr>> & entities)
@ -314,7 +337,9 @@ std::pair<String, BackupEntryPtr> makeBackupEntryForAccess(
AccessRestorerFromBackup::AccessRestorerFromBackup(
const BackupPtr & backup_, const RestoreSettings & restore_settings_)
: backup(backup_), allow_unresolved_access_dependencies(restore_settings_.allow_unresolved_access_dependencies)
: backup(backup_)
, creation_mode(restore_settings_.create_access)
, allow_unresolved_dependencies(restore_settings_.allow_unresolved_access_dependencies)
{
}
@ -362,7 +387,9 @@ std::vector<std::pair<UUID, AccessEntityPtr>> AccessRestorerFromBackup::getAcces
{
auto new_entities = entities;
auto old_to_new_ids = resolveDependencies(dependencies, access_control, allow_unresolved_access_dependencies);
std::unordered_map<UUID, UUID> old_to_new_ids;
checkExistingEntities(new_entities, old_to_new_ids, access_control, creation_mode);
resolveDependencies(dependencies, old_to_new_ids, access_control, allow_unresolved_dependencies);
generateRandomIDs(new_entities, old_to_new_ids);
replaceDependencies(new_entities, old_to_new_ids);

View File

@ -17,6 +17,7 @@ using BackupPtr = std::shared_ptr<const IBackup>;
class IBackupEntry;
using BackupEntryPtr = std::shared_ptr<const IBackupEntry>;
struct RestoreSettings;
enum class RestoreAccessCreationMode : uint8_t;
/// Makes a backup of access entities of a specified type.
@ -45,7 +46,8 @@ public:
private:
BackupPtr backup;
bool allow_unresolved_access_dependencies = false;
RestoreAccessCreationMode creation_mode;
bool allow_unresolved_dependencies = false;
std::vector<std::pair<UUID, AccessEntityPtr>> entities;
std::unordered_map<UUID, std::pair<String, AccessEntityType>> dependencies;
std::unordered_set<String> data_paths;

View File

@ -544,9 +544,9 @@ scope_guard AccessControl::subscribeForChanges(const std::vector<UUID> & ids, co
return changes_notifier->subscribeForChanges(ids, handler);
}
bool AccessControl::insertImpl(const UUID & id, const AccessEntityPtr & entity, bool replace_if_exists, bool throw_if_exists)
bool AccessControl::insertImpl(const UUID & id, const AccessEntityPtr & entity, bool replace_if_exists, bool throw_if_exists, UUID * conflicting_id)
{
if (MultipleAccessStorage::insertImpl(id, entity, replace_if_exists, throw_if_exists))
if (MultipleAccessStorage::insertImpl(id, entity, replace_if_exists, throw_if_exists, conflicting_id))
{
changes_notifier->sendNotifications();
return true;

View File

@ -243,7 +243,7 @@ private:
class CustomSettingsPrefixes;
class PasswordComplexityRules;
bool insertImpl(const UUID & id, const AccessEntityPtr & entity, bool replace_if_exists, bool throw_if_exists) override;
bool insertImpl(const UUID & id, const AccessEntityPtr & entity, bool replace_if_exists, bool throw_if_exists, UUID * conflicting_id) override;
bool removeImpl(const UUID & id, bool throw_if_not_exists) override;
bool updateImpl(const UUID & id, const UpdateFunc & update_func, bool throw_if_not_exists) override;

View File

@ -82,7 +82,7 @@ AccessEntityPtr deserializeAccessEntityImpl(const String & definition)
if (res)
throw Exception(ErrorCodes::INCORRECT_ACCESS_ENTITY_DEFINITION, "Two access entities attached in the same file");
res = user = std::make_unique<User>();
InterpreterCreateUserQuery::updateUserFromQuery(*user, *create_user_query, /* allow_no_password = */ true, /* allow_plaintext_password = */ true);
InterpreterCreateUserQuery::updateUserFromQuery(*user, *create_user_query, /* allow_no_password = */ true, /* allow_plaintext_password = */ true, /* max_number_of_authentication_methods = zero is unlimited*/ 0);
}
else if (auto * create_role_query = query->as<ASTCreateRoleQuery>())
{

View File

@ -14,11 +14,6 @@
namespace DB
{
namespace ErrorCodes
{
extern const int NOT_IMPLEMENTED;
extern const int SUPPORT_IS_DISABLED;
}
namespace
{
@ -84,12 +79,140 @@ namespace
return false;
}
#endif
}
bool checkKerberosAuthentication(
const GSSAcceptorContext * gss_acceptor_context,
const AuthenticationData & authentication_method,
const ExternalAuthenticators & external_authenticators)
{
return authentication_method.getType() == AuthenticationType::KERBEROS
&& external_authenticators.checkKerberosCredentials(authentication_method.getKerberosRealm(), *gss_acceptor_context);
}
bool checkMySQLAuthentication(
const MySQLNative41Credentials * mysql_credentials,
const AuthenticationData & authentication_method)
{
switch (authentication_method.getType())
{
case AuthenticationType::PLAINTEXT_PASSWORD:
return checkPasswordPlainTextMySQL(
mysql_credentials->getScramble(),
mysql_credentials->getScrambledPassword(),
authentication_method.getPasswordHashBinary());
case AuthenticationType::DOUBLE_SHA1_PASSWORD:
return checkPasswordDoubleSHA1MySQL(
mysql_credentials->getScramble(),
mysql_credentials->getScrambledPassword(),
authentication_method.getPasswordHashBinary());
default:
return false;
}
}
bool checkBasicAuthentication(
const BasicCredentials * basic_credentials,
const AuthenticationData & authentication_method,
const ExternalAuthenticators & external_authenticators,
SettingsChanges & settings)
{
switch (authentication_method.getType())
{
case AuthenticationType::NO_PASSWORD:
{
return true; // N.B. even if the password is not empty!
}
case AuthenticationType::PLAINTEXT_PASSWORD:
{
return checkPasswordPlainText(basic_credentials->getPassword(), authentication_method.getPasswordHashBinary());
}
case AuthenticationType::SHA256_PASSWORD:
{
return checkPasswordSHA256(
basic_credentials->getPassword(), authentication_method.getPasswordHashBinary(), authentication_method.getSalt());
}
case AuthenticationType::DOUBLE_SHA1_PASSWORD:
{
return checkPasswordDoubleSHA1(basic_credentials->getPassword(), authentication_method.getPasswordHashBinary());
}
case AuthenticationType::LDAP:
{
return external_authenticators.checkLDAPCredentials(authentication_method.getLDAPServerName(), *basic_credentials);
}
case AuthenticationType::BCRYPT_PASSWORD:
{
return checkPasswordBcrypt(basic_credentials->getPassword(), authentication_method.getPasswordHashBinary());
}
case AuthenticationType::HTTP:
{
if (authentication_method.getHTTPAuthenticationScheme() == HTTPAuthenticationScheme::BASIC)
{
return external_authenticators.checkHTTPBasicCredentials(
authentication_method.getHTTPAuthenticationServerName(), *basic_credentials, settings);
}
break;
}
default:
break;
}
return false;
}
bool checkSSLCertificateAuthentication(
const SSLCertificateCredentials * ssl_certificate_credentials,
const AuthenticationData & authentication_method)
{
if (AuthenticationType::SSL_CERTIFICATE != authentication_method.getType())
{
return false;
}
for (SSLCertificateSubjects::Type type : {SSLCertificateSubjects::Type::CN, SSLCertificateSubjects::Type::SAN})
{
for (const auto & subject : authentication_method.getSSLCertificateSubjects().at(type))
{
if (ssl_certificate_credentials->getSSLCertificateSubjects().at(type).contains(subject))
return true;
// Wildcard support (1 only)
if (subject.contains('*'))
{
auto prefix = std::string_view(subject).substr(0, subject.find('*'));
auto suffix = std::string_view(subject).substr(subject.find('*') + 1);
auto slashes = std::count(subject.begin(), subject.end(), '/');
for (const auto & certificate_subject : ssl_certificate_credentials->getSSLCertificateSubjects().at(type))
{
bool matches_wildcard = certificate_subject.starts_with(prefix) && certificate_subject.ends_with(suffix);
// '*' must not represent a '/' in URI, so check if the number of '/' are equal
bool matches_slashes = slashes == count(certificate_subject.begin(), certificate_subject.end(), '/');
if (matches_wildcard && matches_slashes)
return true;
}
}
}
}
return false;
}
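
The wildcard branch above matches a configured certificate subject by splitting it at the single `*` into a prefix and a suffix, and additionally requires that the candidate subject contain the same number of `/` characters, so the wildcard can never stand in for a path separator. A self-contained C++20 sketch of that matching rule (simplified, outside ClickHouse's `SSLCertificateSubjects` types):

```cpp
#include <algorithm>
#include <cassert>
#include <string_view>

// Returns true if `candidate` matches `pattern`, where `pattern` may contain a single '*'
// that stands for any characters, provided the number of '/' separators stays the same
// (mirroring the rule in the diff above).
bool matchesSubjectPattern(std::string_view pattern, std::string_view candidate)
{
    auto star = pattern.find('*');
    if (star == std::string_view::npos)
        return pattern == candidate;

    std::string_view prefix = pattern.substr(0, star);
    std::string_view suffix = pattern.substr(star + 1);

    bool matches_wildcard = candidate.size() >= prefix.size() + suffix.size()
        && candidate.starts_with(prefix) && candidate.ends_with(suffix);

    // '*' must not represent a '/', so both strings must have the same number of '/'.
    bool matches_slashes =
        std::count(pattern.begin(), pattern.end(), '/') == std::count(candidate.begin(), candidate.end(), '/');

    return matches_wildcard && matches_slashes;
}

int main()
{
    assert(matchesSubjectPattern("URI:spiffe://foo/*/bar", "URI:spiffe://foo/baz/bar"));
    assert(!matchesSubjectPattern("URI:spiffe://foo/*/bar", "URI:spiffe://foo/a/b/bar")); // extra '/'
}
```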
#if USE_SSH
bool checkSshAuthentication(
const SshCredentials * ssh_credentials,
const AuthenticationData & authentication_method)
{
return AuthenticationType::SSH_KEY == authentication_method.getType()
&& checkSshSignature(authentication_method.getSSHKeys(), ssh_credentials->getSignature(), ssh_credentials->getOriginal());
}
#endif
}
bool Authentication::areCredentialsValid(
const Credentials & credentials,
const AuthenticationData & auth_data,
const AuthenticationData & authentication_method,
const ExternalAuthenticators & external_authenticators,
SettingsChanges & settings)
{
@ -98,225 +221,35 @@ bool Authentication::areCredentialsValid(
if (const auto * gss_acceptor_context = typeid_cast<const GSSAcceptorContext *>(&credentials))
{
switch (auth_data.getType())
{
case AuthenticationType::NO_PASSWORD:
case AuthenticationType::PLAINTEXT_PASSWORD:
case AuthenticationType::SHA256_PASSWORD:
case AuthenticationType::DOUBLE_SHA1_PASSWORD:
case AuthenticationType::BCRYPT_PASSWORD:
case AuthenticationType::LDAP:
case AuthenticationType::HTTP:
throw Authentication::Require<BasicCredentials>("ClickHouse Basic Authentication");
case AuthenticationType::JWT:
throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "JWT is available only in ClickHouse Cloud");
case AuthenticationType::KERBEROS:
return external_authenticators.checkKerberosCredentials(auth_data.getKerberosRealm(), *gss_acceptor_context);
case AuthenticationType::SSL_CERTIFICATE:
throw Authentication::Require<BasicCredentials>("ClickHouse X.509 Authentication");
case AuthenticationType::SSH_KEY:
#if USE_SSH
throw Authentication::Require<SshCredentials>("SSH Keys Authentication");
#else
throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "SSH is disabled, because ClickHouse is built without libssh");
#endif
case AuthenticationType::MAX:
break;
}
return checkKerberosAuthentication(gss_acceptor_context, authentication_method, external_authenticators);
}
if (const auto * mysql_credentials = typeid_cast<const MySQLNative41Credentials *>(&credentials))
{
switch (auth_data.getType())
{
case AuthenticationType::NO_PASSWORD:
return true; // N.B. even if the password is not empty!
case AuthenticationType::PLAINTEXT_PASSWORD:
return checkPasswordPlainTextMySQL(mysql_credentials->getScramble(), mysql_credentials->getScrambledPassword(), auth_data.getPasswordHashBinary());
case AuthenticationType::DOUBLE_SHA1_PASSWORD:
return checkPasswordDoubleSHA1MySQL(mysql_credentials->getScramble(), mysql_credentials->getScrambledPassword(), auth_data.getPasswordHashBinary());
case AuthenticationType::SHA256_PASSWORD:
case AuthenticationType::BCRYPT_PASSWORD:
case AuthenticationType::LDAP:
case AuthenticationType::KERBEROS:
case AuthenticationType::HTTP:
throw Authentication::Require<BasicCredentials>("ClickHouse Basic Authentication");
case AuthenticationType::SSL_CERTIFICATE:
throw Authentication::Require<BasicCredentials>("ClickHouse X.509 Authentication");
case AuthenticationType::JWT:
throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "JWT is available only in ClickHouse Cloud");
case AuthenticationType::SSH_KEY:
#if USE_SSH
throw Authentication::Require<SshCredentials>("SSH Keys Authentication");
#else
throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "SSH is disabled, because ClickHouse is built without libssh");
#endif
case AuthenticationType::MAX:
break;
}
return checkMySQLAuthentication(mysql_credentials, authentication_method);
}
if (const auto * basic_credentials = typeid_cast<const BasicCredentials *>(&credentials))
{
switch (auth_data.getType())
{
case AuthenticationType::NO_PASSWORD:
return true; // N.B. even if the password is not empty!
case AuthenticationType::PLAINTEXT_PASSWORD:
return checkPasswordPlainText(basic_credentials->getPassword(), auth_data.getPasswordHashBinary());
case AuthenticationType::SHA256_PASSWORD:
return checkPasswordSHA256(basic_credentials->getPassword(), auth_data.getPasswordHashBinary(), auth_data.getSalt());
case AuthenticationType::DOUBLE_SHA1_PASSWORD:
return checkPasswordDoubleSHA1(basic_credentials->getPassword(), auth_data.getPasswordHashBinary());
case AuthenticationType::LDAP:
return external_authenticators.checkLDAPCredentials(auth_data.getLDAPServerName(), *basic_credentials);
case AuthenticationType::KERBEROS:
throw Authentication::Require<GSSAcceptorContext>(auth_data.getKerberosRealm());
case AuthenticationType::SSL_CERTIFICATE:
throw Authentication::Require<BasicCredentials>("ClickHouse X.509 Authentication");
case AuthenticationType::SSH_KEY:
#if USE_SSH
throw Authentication::Require<SshCredentials>("SSH Keys Authentication");
#else
throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "SSH is disabled, because ClickHouse is built without libssh");
#endif
case AuthenticationType::JWT:
throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "JWT is available only in ClickHouse Cloud");
case AuthenticationType::BCRYPT_PASSWORD:
return checkPasswordBcrypt(basic_credentials->getPassword(), auth_data.getPasswordHashBinary());
case AuthenticationType::HTTP:
switch (auth_data.getHTTPAuthenticationScheme())
{
case HTTPAuthenticationScheme::BASIC:
return external_authenticators.checkHTTPBasicCredentials(
auth_data.getHTTPAuthenticationServerName(), *basic_credentials, settings);
}
case AuthenticationType::MAX:
break;
}
return checkBasicAuthentication(basic_credentials, authentication_method, external_authenticators, settings);
}
if (const auto * ssl_certificate_credentials = typeid_cast<const SSLCertificateCredentials *>(&credentials))
{
switch (auth_data.getType())
{
case AuthenticationType::NO_PASSWORD:
case AuthenticationType::PLAINTEXT_PASSWORD:
case AuthenticationType::SHA256_PASSWORD:
case AuthenticationType::DOUBLE_SHA1_PASSWORD:
case AuthenticationType::BCRYPT_PASSWORD:
case AuthenticationType::LDAP:
case AuthenticationType::HTTP:
throw Authentication::Require<BasicCredentials>("ClickHouse Basic Authentication");
case AuthenticationType::JWT:
throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "JWT is available only in ClickHouse Cloud");
case AuthenticationType::KERBEROS:
throw Authentication::Require<GSSAcceptorContext>(auth_data.getKerberosRealm());
case AuthenticationType::SSL_CERTIFICATE:
{
for (SSLCertificateSubjects::Type type : {SSLCertificateSubjects::Type::CN, SSLCertificateSubjects::Type::SAN})
{
for (const auto & subject : auth_data.getSSLCertificateSubjects().at(type))
{
if (ssl_certificate_credentials->getSSLCertificateSubjects().at(type).contains(subject))
return true;
// Wildcard support (1 only)
if (subject.contains('*'))
{
auto prefix = std::string_view(subject).substr(0, subject.find('*'));
auto suffix = std::string_view(subject).substr(subject.find('*') + 1);
auto slashes = std::count(subject.begin(), subject.end(), '/');
for (const auto & certificate_subject : ssl_certificate_credentials->getSSLCertificateSubjects().at(type))
{
bool matches_wildcard = certificate_subject.starts_with(prefix) && certificate_subject.ends_with(suffix);
// '*' must not represent a '/' in URI, so check if the number of '/' are equal
bool matches_slashes = slashes == count(certificate_subject.begin(), certificate_subject.end(), '/');
if (matches_wildcard && matches_slashes)
return true;
}
}
}
}
return false;
}
case AuthenticationType::SSH_KEY:
#if USE_SSH
throw Authentication::Require<SshCredentials>("SSH Keys Authentication");
#else
throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "SSH is disabled, because ClickHouse is built without libssh");
#endif
case AuthenticationType::MAX:
break;
}
return checkSSLCertificateAuthentication(ssl_certificate_credentials, authentication_method);
}
#if USE_SSH
if (const auto * ssh_credentials = typeid_cast<const SshCredentials *>(&credentials))
{
switch (auth_data.getType())
{
case AuthenticationType::NO_PASSWORD:
case AuthenticationType::PLAINTEXT_PASSWORD:
case AuthenticationType::SHA256_PASSWORD:
case AuthenticationType::DOUBLE_SHA1_PASSWORD:
case AuthenticationType::BCRYPT_PASSWORD:
case AuthenticationType::LDAP:
case AuthenticationType::HTTP:
throw Authentication::Require<BasicCredentials>("ClickHouse Basic Authentication");
case AuthenticationType::JWT:
throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "JWT is available only in ClickHouse Cloud");
case AuthenticationType::KERBEROS:
throw Authentication::Require<GSSAcceptorContext>(auth_data.getKerberosRealm());
case AuthenticationType::SSL_CERTIFICATE:
throw Authentication::Require<SSLCertificateCredentials>("ClickHouse X.509 Authentication");
case AuthenticationType::SSH_KEY:
return checkSshSignature(auth_data.getSSHKeys(), ssh_credentials->getSignature(), ssh_credentials->getOriginal());
case AuthenticationType::MAX:
break;
}
return checkSshAuthentication(ssh_credentials, authentication_method);
}
#endif
if ([[maybe_unused]] const auto * always_allow_credentials = typeid_cast<const AlwaysAllowCredentials *>(&credentials))
return true;
throw Exception(ErrorCodes::NOT_IMPLEMENTED, "areCredentialsValid(): authentication type {} not supported", toString(auth_data.getType()));
return false;
}
}
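
The refactor above replaces the per-credential-type `switch` blocks with small `check*Authentication` helpers, so `areCredentialsValid` only dispatches on the concrete `Credentials` subclass while each helper owns the per-`AuthenticationType` logic. A rough stand-alone model of that shape (hypothetical toy types, not the real ClickHouse classes):

```cpp
#include <iostream>
#include <string>

enum class AuthType { NoPassword, PlaintextPassword, SshKey };

struct Credentials { virtual ~Credentials() = default; };
struct BasicCredentials : Credentials { std::string password; };
struct SshCredentials : Credentials { std::string signature; };

struct AuthenticationMethod { AuthType type; std::string secret; };

// One small helper per credential kind, in the spirit of checkBasicAuthentication / checkSshAuthentication.
bool checkBasic(const BasicCredentials & c, const AuthenticationMethod & m)
{
    switch (m.type)
    {
        case AuthType::NoPassword: return true;
        case AuthType::PlaintextPassword: return c.password == m.secret;
        default: return false;
    }
}

bool checkSsh(const SshCredentials & c, const AuthenticationMethod & m)
{
    return m.type == AuthType::SshKey && c.signature == m.secret; // stand-in for a real signature check
}

// The dispatcher only looks at the concrete credentials type.
bool areCredentialsValid(const Credentials & credentials, const AuthenticationMethod & method)
{
    if (const auto * basic = dynamic_cast<const BasicCredentials *>(&credentials))
        return checkBasic(*basic, method);
    if (const auto * ssh = dynamic_cast<const SshCredentials *>(&credentials))
        return checkSsh(*ssh, method);
    return false;
}

int main()
{
    BasicCredentials c;
    c.password = "secret";
    std::cout << areCredentialsValid(c, {AuthType::PlaintextPassword, "secret"}) << '\n'; // 1
}
```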

View File

@ -24,7 +24,7 @@ struct Authentication
/// returned by the authentication server
static bool areCredentialsValid(
const Credentials & credentials,
const AuthenticationData & auth_data,
const AuthenticationData & authentication_method,
const ExternalAuthenticators & external_authenticators,
SettingsChanges & settings);

View File

@ -375,7 +375,8 @@ std::shared_ptr<ASTAuthenticationData> AuthenticationData::toAST() const
break;
}
case AuthenticationType::NO_PASSWORD: [[fallthrough]];
case AuthenticationType::NO_PASSWORD:
break;
case AuthenticationType::MAX:
throw Exception(ErrorCodes::LOGICAL_ERROR, "AST: Unexpected authentication type {}", toString(auth_type));
}

View File

@ -1,8 +1,6 @@
#include <Access/DiskAccessStorage.h>
#include <Access/AccessEntityIO.h>
#include <Access/AccessChangesNotifier.h>
#include <Backups/RestorerFromBackup.h>
#include <Backups/RestoreSettings.h>
#include <IO/WriteHelpers.h>
#include <IO/ReadHelpers.h>
#include <IO/ReadBufferFromFile.h>
@ -418,7 +416,7 @@ void DiskAccessStorage::setAllInMemory(const std::vector<std::pair<UUID, AccessE
/// Insert or update entities.
for (const auto & [id, entity] : entities_without_conflicts)
insertNoLock(id, entity, /* replace_if_exists = */ true, /* throw_if_exists = */ false, /* write_on_disk= */ false);
insertNoLock(id, entity, /* replace_if_exists = */ true, /* throw_if_exists = */ false, /* conflicting_id = */ nullptr, /* write_on_disk= */ false);
}
void DiskAccessStorage::removeAllExceptInMemory(const boost::container::flat_set<UUID> & ids_to_keep)
@ -507,14 +505,14 @@ std::optional<std::pair<String, AccessEntityType>> DiskAccessStorage::readNameWi
}
bool DiskAccessStorage::insertImpl(const UUID & id, const AccessEntityPtr & new_entity, bool replace_if_exists, bool throw_if_exists)
bool DiskAccessStorage::insertImpl(const UUID & id, const AccessEntityPtr & new_entity, bool replace_if_exists, bool throw_if_exists, UUID * conflicting_id)
{
std::lock_guard lock{mutex};
return insertNoLock(id, new_entity, replace_if_exists, throw_if_exists, /* write_on_disk = */ true);
return insertNoLock(id, new_entity, replace_if_exists, throw_if_exists, conflicting_id, /* write_on_disk = */ true);
}
bool DiskAccessStorage::insertNoLock(const UUID & id, const AccessEntityPtr & new_entity, bool replace_if_exists, bool throw_if_exists, bool write_on_disk)
bool DiskAccessStorage::insertNoLock(const UUID & id, const AccessEntityPtr & new_entity, bool replace_if_exists, bool throw_if_exists, UUID * conflicting_id, bool write_on_disk)
{
const String & name = new_entity->getName();
AccessEntityType type = new_entity->getType();
@ -533,9 +531,15 @@ bool DiskAccessStorage::insertNoLock(const UUID & id, const AccessEntityPtr & ne
if (name_collision && !replace_if_exists)
{
if (throw_if_exists)
{
throwNameCollisionCannotInsert(type, name);
}
else
{
if (conflicting_id)
*conflicting_id = id_by_name;
return false;
}
}
auto it_by_id = entries_by_id.find(id);
@ -548,7 +552,11 @@ bool DiskAccessStorage::insertNoLock(const UUID & id, const AccessEntityPtr & ne
throwIDCollisionCannotInsert(id, type, name, existing_entry.type, existing_entry.name);
}
else
{
if (conflicting_id)
*conflicting_id = id;
return false;
}
}
if (write_on_disk)
@ -727,25 +735,4 @@ void DiskAccessStorage::deleteAccessEntityOnDisk(const UUID & id) const
throw Exception(ErrorCodes::FILE_DOESNT_EXIST, "Couldn't delete {}", file_path);
}
void DiskAccessStorage::restoreFromBackup(RestorerFromBackup & restorer)
{
if (!isRestoreAllowed())
throwRestoreNotAllowed();
auto entities = restorer.getAccessEntitiesToRestore();
if (entities.empty())
return;
auto create_access = restorer.getRestoreSettings().create_access;
bool replace_if_exists = (create_access == RestoreAccessCreationMode::kReplace);
bool throw_if_exists = (create_access == RestoreAccessCreationMode::kCreate);
restorer.addDataRestoreTask([this, my_entities = std::move(entities), replace_if_exists, throw_if_exists]
{
for (const auto & [id, entity] : my_entities)
insert(id, entity, replace_if_exists, throw_if_exists);
});
}
}
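
The storage changes above thread an optional `UUID * conflicting_id` out-parameter through `insert`/`insertImpl`/`insertNoLock`, so a non-throwing insert that fails can report which existing entry it collided with. A minimal sketch of the same convention over a plain map (IDs modeled as integers):

```cpp
#include <iostream>
#include <string>
#include <unordered_map>

using Id = int;

struct Store
{
    std::unordered_map<std::string, Id> id_by_name;

    // Returns false on a name collision; if `conflicting_id` is non-null,
    // reports which existing entry caused the conflict (as in the diff above).
    bool insert(Id id, const std::string & name, bool replace_if_exists, Id * conflicting_id = nullptr)
    {
        auto it = id_by_name.find(name);
        if (it != id_by_name.end() && !replace_if_exists)
        {
            if (conflicting_id)
                *conflicting_id = it->second;
            return false;
        }
        id_by_name[name] = id;
        return true;
    }
};

int main()
{
    Store store;
    store.insert(1, "admin", false);

    Id existing = -1;
    if (!store.insert(2, "admin", false, &existing))
        std::cout << "conflict with id " << existing << '\n'; // conflict with id 1
}
```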

View File

@ -34,14 +34,13 @@ public:
bool exists(const UUID & id) const override;
bool isBackupAllowed() const override { return backup_allowed; }
void restoreFromBackup(RestorerFromBackup & restorer) override;
private:
std::optional<UUID> findImpl(AccessEntityType type, const String & name) const override;
std::vector<UUID> findAllImpl(AccessEntityType type) const override;
AccessEntityPtr readImpl(const UUID & id, bool throw_if_not_exists) const override;
std::optional<std::pair<String, AccessEntityType>> readNameWithTypeImpl(const UUID & id, bool throw_if_not_exists) const override;
bool insertImpl(const UUID & id, const AccessEntityPtr & entity, bool replace_if_exists, bool throw_if_exists) override;
bool insertImpl(const UUID & id, const AccessEntityPtr & entity, bool replace_if_exists, bool throw_if_exists, UUID * conflicting_id) override;
bool removeImpl(const UUID & id, bool throw_if_not_exists) override;
bool updateImpl(const UUID & id, const UpdateFunc & update_func, bool throw_if_not_exists) override;
@ -55,7 +54,7 @@ private:
void listsWritingThreadFunc() TSA_NO_THREAD_SAFETY_ANALYSIS;
void stopListsWritingThread();
bool insertNoLock(const UUID & id, const AccessEntityPtr & new_entity, bool replace_if_exists, bool throw_if_exists, bool write_on_disk) TSA_REQUIRES(mutex);
bool insertNoLock(const UUID & id, const AccessEntityPtr & new_entity, bool replace_if_exists, bool throw_if_exists, UUID * conflicting_id, bool write_on_disk) TSA_REQUIRES(mutex);
bool updateNoLock(const UUID & id, const UpdateFunc & update_func, bool throw_if_not_exists, bool write_on_disk) TSA_REQUIRES(mutex);
bool removeNoLock(const UUID & id, bool throw_if_not_exists, bool write_on_disk) TSA_REQUIRES(mutex);

View File

@ -9,4 +9,28 @@ bool IAccessEntity::equal(const IAccessEntity & other) const
return (name == other.name) && (getType() == other.getType());
}
void IAccessEntity::replaceDependencies(std::shared_ptr<const IAccessEntity> & entity, const std::unordered_map<UUID, UUID> & old_to_new_ids)
{
if (old_to_new_ids.empty())
return;
bool need_replace_dependencies = false;
auto dependencies = entity->findDependencies();
for (const auto & dependency : dependencies)
{
if (old_to_new_ids.contains(dependency))
{
need_replace_dependencies = true;
break;
}
}
if (!need_replace_dependencies)
return;
auto new_entity = entity->clone();
new_entity->replaceDependencies(old_to_new_ids);
entity = new_entity;
}
}
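
The new static `IAccessEntity::replaceDependencies` above clones the shared, immutable entity only when one of its dependencies actually appears in the map, leaving untouched `shared_ptr<const IAccessEntity>` instances shared. A small stand-alone model of that copy-on-write step (hypothetical `Entity` type, C++20 for `contains`):

```cpp
#include <iostream>
#include <memory>
#include <unordered_map>
#include <vector>

struct Entity
{
    std::vector<int> dependencies;
};

// Clone-and-replace only if something would actually change.
void replaceDependencies(std::shared_ptr<const Entity> & entity, const std::unordered_map<int, int> & old_to_new_ids)
{
    if (old_to_new_ids.empty())
        return;

    bool need_replace = false;
    for (int dep : entity->dependencies)
    {
        if (old_to_new_ids.contains(dep))
        {
            need_replace = true;
            break;
        }
    }

    if (!need_replace)
        return;

    auto new_entity = std::make_shared<Entity>(*entity); // clone
    for (int & dep : new_entity->dependencies)
        if (auto it = old_to_new_ids.find(dep); it != old_to_new_ids.end())
            dep = it->second;
    entity = new_entity; // only the caller's pointer changes
}

int main()
{
    auto e = std::make_shared<const Entity>(Entity{{1, 2, 3}});
    auto original = e;
    replaceDependencies(e, {{2, 42}});
    std::cout << (e == original) << ' ' << e->dependencies[1] << '\n'; // 0 42
}
```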

View File

@ -50,7 +50,8 @@ struct IAccessEntity
virtual std::vector<UUID> findDependencies() const { return {}; }
/// Replaces dependencies according to a specified map.
virtual void replaceDependencies(const std::unordered_map<UUID, UUID> & /* old_to_new_ids */) {}
void replaceDependencies(const std::unordered_map<UUID, UUID> & old_to_new_ids) { doReplaceDependencies(old_to_new_ids); }
static void replaceDependencies(std::shared_ptr<const IAccessEntity> & entity, const std::unordered_map<UUID, UUID> & old_to_new_ids);
/// Whether this access entity should be written to a backup.
virtual bool isBackupAllowed() const { return false; }
@ -66,6 +67,8 @@ protected:
{
return std::make_shared<EntityClassT>(typeid_cast<const EntityClassT &>(*this));
}
virtual void doReplaceDependencies(const std::unordered_map<UUID, UUID> & /* old_to_new_ids */) {}
};
using AccessEntityPtr = std::shared_ptr<const IAccessEntity>;
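
The header change above turns `replaceDependencies` into a public non-virtual wrapper around a protected virtual `doReplaceDependencies`, the non-virtual interface pattern that the entity classes (`Quota`, `Role`, `RowPolicy`, `SettingsProfile`, `User`) then override further down in this diff. A generic sketch of that shape:

```cpp
#include <iostream>
#include <unordered_map>

struct Base
{
    // Public, non-virtual entry point; derived classes customise behaviour
    // through the protected virtual hook (non-virtual interface pattern).
    void replaceDependencies(const std::unordered_map<int, int> & old_to_new_ids)
    {
        doReplaceDependencies(old_to_new_ids);
    }

    virtual ~Base() = default;

protected:
    virtual void doReplaceDependencies(const std::unordered_map<int, int> & /* old_to_new_ids */) {}
};

struct Derived : Base
{
    int role_id = 1;

protected:
    void doReplaceDependencies(const std::unordered_map<int, int> & old_to_new_ids) override
    {
        if (auto it = old_to_new_ids.find(role_id); it != old_to_new_ids.end())
            role_id = it->second;
    }
};

int main()
{
    Derived d;
    d.replaceDependencies({{1, 7}});
    std::cout << d.role_id << '\n'; // 7
}
```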

View File

@ -4,6 +4,8 @@
#include <Access/User.h>
#include <Access/AccessBackup.h>
#include <Backups/BackupEntriesCollector.h>
#include <Backups/RestorerFromBackup.h>
#include <Backups/RestoreSettings.h>
#include <Common/Exception.h>
#include <Common/quoteString.h>
#include <Common/callOnce.h>
@ -14,10 +16,11 @@
#include <base/FnTraits.h>
#include <boost/algorithm/string/join.hpp>
#include <boost/algorithm/string/replace.hpp>
#include <boost/range/adaptor/map.hpp>
#include <boost/range/adaptor/reversed.hpp>
#include <boost/range/algorithm/copy.hpp>
#include <boost/range/algorithm_ext/erase.hpp>
namespace DB
{
namespace ErrorCodes
@ -30,7 +33,6 @@ namespace ErrorCodes
extern const int IP_ADDRESS_NOT_ALLOWED;
extern const int LOGICAL_ERROR;
extern const int NOT_IMPLEMENTED;
extern const int AUTHENTICATION_FAILED;
}
@ -179,20 +181,20 @@ UUID IAccessStorage::insert(const AccessEntityPtr & entity)
return *insert(entity, /* replace_if_exists = */ false, /* throw_if_exists = */ true);
}
std::optional<UUID> IAccessStorage::insert(const AccessEntityPtr & entity, bool replace_if_exists, bool throw_if_exists)
std::optional<UUID> IAccessStorage::insert(const AccessEntityPtr & entity, bool replace_if_exists, bool throw_if_exists, UUID * conflicting_id)
{
auto id = generateRandomID();
if (insert(id, entity, replace_if_exists, throw_if_exists))
if (insert(id, entity, replace_if_exists, throw_if_exists, conflicting_id))
return id;
return std::nullopt;
}
bool IAccessStorage::insert(const DB::UUID & id, const DB::AccessEntityPtr & entity, bool replace_if_exists, bool throw_if_exists)
bool IAccessStorage::insert(const DB::UUID & id, const DB::AccessEntityPtr & entity, bool replace_if_exists, bool throw_if_exists, UUID * conflicting_id)
{
return insertImpl(id, entity, replace_if_exists, throw_if_exists);
return insertImpl(id, entity, replace_if_exists, throw_if_exists, conflicting_id);
}
@ -286,7 +288,7 @@ std::vector<UUID> IAccessStorage::insertOrReplace(const std::vector<AccessEntity
}
bool IAccessStorage::insertImpl(const UUID &, const AccessEntityPtr & entity, bool, bool)
bool IAccessStorage::insertImpl(const UUID &, const AccessEntityPtr & entity, bool, bool, UUID *)
{
if (isReadOnly())
throwReadonlyCannotInsert(entity->getType(), entity->getName());
@ -525,15 +527,32 @@ std::optional<AuthResult> IAccessStorage::authenticateImpl(
if (!isAddressAllowed(*user, address))
throwAddressNotAllowed(address);
auto auth_type = user->auth_data.getType();
if (((auth_type == AuthenticationType::NO_PASSWORD) && !allow_no_password) ||
((auth_type == AuthenticationType::PLAINTEXT_PASSWORD) && !allow_plaintext_password))
throwAuthenticationTypeNotAllowed(auth_type);
bool skipped_not_allowed_authentication_methods = false;
if (!areCredentialsValid(*user, credentials, external_authenticators, auth_result.settings))
throwInvalidCredentials();
for (const auto & auth_method : user->authentication_methods)
{
auto auth_type = auth_method.getType();
if (((auth_type == AuthenticationType::NO_PASSWORD) && !allow_no_password) ||
((auth_type == AuthenticationType::PLAINTEXT_PASSWORD) && !allow_plaintext_password))
{
skipped_not_allowed_authentication_methods = true;
continue;
}
return auth_result;
if (areCredentialsValid(user->getName(), user->valid_until, auth_method, credentials, external_authenticators, auth_result.settings))
{
auth_result.authentication_data = auth_method;
return auth_result;
}
}
if (skipped_not_allowed_authentication_methods)
{
LOG_INFO(log, "Skipped the check for not allowed authentication methods, "
"check allow_no_password and allow_plaintext_password settings in the server configuration");
}
throwInvalidCredentials();
}
}
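
`authenticateImpl` above now loops over all of a user's authentication methods: methods whose type is disabled by `allow_no_password` / `allow_plaintext_password` are skipped (with a log message instead of the old hard error), the first method that validates wins, and the credentials are rejected only after every method has failed. A condensed stand-alone model of that loop (toy types; the real check also verifies the user name and `valid_until`):

```cpp
#include <functional>
#include <iostream>
#include <optional>
#include <vector>

enum class AuthType { NoPassword, PlaintextPassword, Sha256Password };

struct Method
{
    AuthType type;
    std::function<bool()> check; // stand-in for the real credential check
};

std::optional<AuthType> authenticate(const std::vector<Method> & methods,
                                     bool allow_no_password, bool allow_plaintext_password)
{
    bool skipped_not_allowed = false;

    for (const auto & method : methods)
    {
        if ((method.type == AuthType::NoPassword && !allow_no_password)
            || (method.type == AuthType::PlaintextPassword && !allow_plaintext_password))
        {
            skipped_not_allowed = true; // previously this was a hard error; now the method is just skipped
            continue;
        }

        if (method.check())
            return method.type; // first successful method wins
    }

    if (skipped_not_allowed)
        std::cout << "some methods were skipped because of server settings\n";

    return std::nullopt; // the caller throws "invalid credentials" here
}

int main()
{
    std::vector<Method> methods{
        {AuthType::PlaintextPassword, [] { return true; }},
        {AuthType::Sha256Password, [] { return false; }},
    };
    auto ok = authenticate(methods, /* allow_no_password = */ false, /* allow_plaintext_password = */ false);
    std::cout << ok.has_value() << '\n'; // 0: the only matching method was skipped
}
```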
@ -543,9 +562,10 @@ std::optional<AuthResult> IAccessStorage::authenticateImpl(
return std::nullopt;
}
bool IAccessStorage::areCredentialsValid(
const User & user,
const std::string & user_name,
time_t valid_until,
const AuthenticationData & authentication_method,
const Credentials & credentials,
const ExternalAuthenticators & external_authenticators,
SettingsChanges & settings) const
@ -553,21 +573,20 @@ bool IAccessStorage::areCredentialsValid(
if (!credentials.isReady())
return false;
if (credentials.getUserName() != user.getName())
if (credentials.getUserName() != user_name)
return false;
if (user.valid_until)
if (valid_until)
{
const time_t now = std::chrono::system_clock::to_time_t(std::chrono::system_clock::now());
if (now > user.valid_until)
if (now > valid_until)
return false;
}
return Authentication::areCredentialsValid(credentials, user.auth_data, external_authenticators, settings);
return Authentication::areCredentialsValid(credentials, authentication_method, external_authenticators, settings);
}
bool IAccessStorage::isAddressAllowed(const User & user, const Poco::Net::IPAddress & address) const
{
return user.allowed_client_hosts.contains(address);
@ -595,12 +614,51 @@ void IAccessStorage::backup(BackupEntriesCollector & backup_entries_collector, c
}
void IAccessStorage::restoreFromBackup(RestorerFromBackup &)
void IAccessStorage::restoreFromBackup(RestorerFromBackup & restorer)
{
if (!isRestoreAllowed())
throwRestoreNotAllowed();
throw Exception(ErrorCodes::NOT_IMPLEMENTED, "restoreFromBackup() is not implemented in {}", getStorageType());
if (isReplicated() && !acquireReplicatedRestore(restorer))
return;
auto entities = restorer.getAccessEntitiesToRestore();
if (entities.empty())
return;
auto create_access = restorer.getRestoreSettings().create_access;
bool replace_if_exists = (create_access == RestoreAccessCreationMode::kReplace);
bool throw_if_exists = (create_access == RestoreAccessCreationMode::kCreate);
restorer.addDataRestoreTask([this, entities_to_restore = std::move(entities), replace_if_exists, throw_if_exists] mutable
{
std::unordered_map<UUID, UUID> new_to_existing_ids;
for (auto & [id, entity] : entities_to_restore)
{
UUID existing_entity_id;
if (!insert(id, entity, replace_if_exists, throw_if_exists, &existing_entity_id))
{
/// Couldn't insert `entity` because there is an existing entity with the same name.
new_to_existing_ids[id] = existing_entity_id;
}
}
if (!new_to_existing_ids.empty())
{
/// If new entities restored from backup have dependencies on other entities from backup which were not restored because they existed,
/// then we should correct those dependencies.
auto update_func = [&](const AccessEntityPtr & entity) -> AccessEntityPtr
{
auto res = entity;
IAccessEntity::replaceDependencies(res, new_to_existing_ids);
return res;
};
std::vector<UUID> ids;
ids.reserve(entities_to_restore.size());
boost::copy(entities_to_restore | boost::adaptors::map_keys, std::back_inserter(ids));
tryUpdate(ids, update_func);
}
});
}
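
The default `restoreFromBackup` above works in two phases: insert every entity from the backup while collecting `new_to_existing_ids` for the ones that already exist under another UUID, then update the restored entities so their references point at the surviving IDs. A toy end-to-end model of those two phases (IDs as integers, the entity name as the collision key):

```cpp
#include <iostream>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

// A toy entity: a name plus ids of other entities it refers to.
struct Entity
{
    std::string name;
    std::vector<int> references;
};

int main()
{
    // Existing storage: "admins" already exists under id 10.
    std::unordered_map<std::string, int> existing{{"admins", 10}};

    // Entities coming from the backup, keyed by the ids they had inside the backup.
    std::vector<std::pair<int, Entity>> to_restore{
        {1, {"admins", {}}},  // will conflict with the existing id 10
        {2, {"alice", {1}}},  // refers to "admins" by its backup id
    };

    // Phase 1: insert; remember which backup ids collided with existing ones.
    std::unordered_map<int, int> new_to_existing_ids;
    for (const auto & [id, entity] : to_restore)
    {
        auto it = existing.find(entity.name);
        if (it != existing.end())
            new_to_existing_ids[id] = it->second; // couldn't insert, remember the conflict
        else
            existing[entity.name] = id;
    }

    // Phase 2: rewrite references of the restored entities to the surviving ids.
    for (auto & entry : to_restore)
        for (int & ref : entry.second.references)
            if (auto it = new_to_existing_ids.find(ref); it != new_to_existing_ids.end())
                ref = it->second;

    std::cout << to_restore[1].second.references[0] << '\n'; // 10: "alice" now points at the pre-existing "admins"
}
```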
@ -747,14 +805,6 @@ void IAccessStorage::throwAddressNotAllowed(const Poco::Net::IPAddress & address
throw Exception(ErrorCodes::IP_ADDRESS_NOT_ALLOWED, "Connections from {} are not allowed", address.toString());
}
void IAccessStorage::throwAuthenticationTypeNotAllowed(AuthenticationType auth_type)
{
throw Exception(
ErrorCodes::AUTHENTICATION_FAILED,
"Authentication type {} is not allowed, check the setting allow_{} in the server configuration",
toString(auth_type), AuthenticationTypeInfo::get(auth_type).name);
}
void IAccessStorage::throwInvalidCredentials()
{
throw Exception(ErrorCodes::WRONG_PASSWORD, "Invalid credentials");

View File

@ -1,6 +1,7 @@
#pragma once
#include <Access/IAccessEntity.h>
#include <Access/AuthenticationData.h>
#include <Core/Types.h>
#include <Core/UUID.h>
#include <Parsers/IParser.h>
@ -34,6 +35,7 @@ struct AuthResult
UUID user_id;
/// Session settings received from authentication server (if any)
SettingsChanges settings{};
AuthenticationData authentication_data {};
};
/// Contains entities, i.e. instances of classes derived from IAccessEntity.
@ -62,6 +64,9 @@ public:
/// Returns true if this entity is readonly.
virtual bool isReadOnly(const UUID &) const { return isReadOnly(); }
/// Returns true if this storage is replicated.
virtual bool isReplicated() const { return false; }
/// Starts periodic reloading and updating of entities in this storage.
virtual void startPeriodicReloading() {}
@ -151,8 +156,8 @@ public:
/// Inserts an entity to the storage. Returns ID of a new entry in the storage.
/// Throws an exception if the specified name already exists.
UUID insert(const AccessEntityPtr & entity);
std::optional<UUID> insert(const AccessEntityPtr & entity, bool replace_if_exists, bool throw_if_exists);
bool insert(const UUID & id, const AccessEntityPtr & entity, bool replace_if_exists, bool throw_if_exists);
std::optional<UUID> insert(const AccessEntityPtr & entity, bool replace_if_exists, bool throw_if_exists, UUID * conflicting_id = nullptr);
bool insert(const UUID & id, const AccessEntityPtr & entity, bool replace_if_exists, bool throw_if_exists, UUID * conflicting_id = nullptr);
std::vector<UUID> insert(const std::vector<AccessEntityPtr> & multiple_entities, bool replace_if_exists = false, bool throw_if_exists = true);
std::vector<UUID> insert(const std::vector<AccessEntityPtr> & multiple_entities, const std::vector<UUID> & ids, bool replace_if_exists = false, bool throw_if_exists = true);
@ -216,7 +221,7 @@ protected:
virtual std::vector<UUID> findAllImpl(AccessEntityType type) const = 0;
virtual AccessEntityPtr readImpl(const UUID & id, bool throw_if_not_exists) const = 0;
virtual std::optional<std::pair<String, AccessEntityType>> readNameWithTypeImpl(const UUID & id, bool throw_if_not_exists) const;
virtual bool insertImpl(const UUID & id, const AccessEntityPtr & entity, bool replace_if_exists, bool throw_if_exists);
virtual bool insertImpl(const UUID & id, const AccessEntityPtr & entity, bool replace_if_exists, bool throw_if_exists, UUID * conflicting_id);
virtual bool removeImpl(const UUID & id, bool throw_if_not_exists);
virtual bool updateImpl(const UUID & id, const UpdateFunc & update_func, bool throw_if_not_exists);
virtual std::optional<AuthResult> authenticateImpl(
@ -227,7 +232,9 @@ protected:
bool allow_no_password,
bool allow_plaintext_password) const;
virtual bool areCredentialsValid(
const User & user,
const std::string & user_name,
time_t valid_until,
const AuthenticationData & authentication_method,
const Credentials & credentials,
const ExternalAuthenticators & external_authenticators,
SettingsChanges & settings) const;
@ -236,6 +243,7 @@ protected:
LoggerPtr getLogger() const;
static String formatEntityTypeWithName(AccessEntityType type, const String & name) { return AccessEntityTypeInfo::get(type).formatEntityNameWithType(name); }
static void clearConflictsInEntitiesList(std::vector<std::pair<UUID, AccessEntityPtr>> & entities, LoggerPtr log_);
virtual bool acquireReplicatedRestore(RestorerFromBackup &) const { return false; }
[[noreturn]] void throwNotFound(const UUID & id) const;
[[noreturn]] void throwNotFound(AccessEntityType type, const String & name) const;
[[noreturn]] static void throwBadCast(const UUID & id, AccessEntityType type, const String & name, AccessEntityType required_type);
@ -248,7 +256,6 @@ protected:
[[noreturn]] void throwReadonlyCannotRemove(AccessEntityType type, const String & name) const;
[[noreturn]] static void throwAddressNotAllowed(const Poco::Net::IPAddress & address);
[[noreturn]] static void throwInvalidCredentials();
[[noreturn]] static void throwAuthenticationTypeNotAllowed(AuthenticationType auth_type);
[[noreturn]] void throwBackupNotAllowed() const;
[[noreturn]] void throwRestoreNotAllowed() const;

View File

@ -468,8 +468,8 @@ std::optional<AuthResult> LDAPAccessStorage::authenticateImpl(
// User does not exist, so we create one, and will add it if authentication is successful.
new_user = std::make_shared<User>();
new_user->setName(credentials.getUserName());
new_user->auth_data = AuthenticationData(AuthenticationType::LDAP);
new_user->auth_data.setLDAPServerName(ldap_server_name);
new_user->authentication_methods.emplace_back(AuthenticationType::LDAP);
new_user->authentication_methods.back().setLDAPServerName(ldap_server_name);
user = new_user;
}
@ -504,7 +504,7 @@ std::optional<AuthResult> LDAPAccessStorage::authenticateImpl(
}
if (id)
return AuthResult{ .user_id = *id };
return AuthResult{ .user_id = *id, .authentication_data = AuthenticationData(AuthenticationType::LDAP) };
return std::nullopt;
}

View File

@ -1,7 +1,5 @@
#include <Access/MemoryAccessStorage.h>
#include <Access/AccessChangesNotifier.h>
#include <Backups/RestorerFromBackup.h>
#include <Backups/RestoreSettings.h>
#include <base/scope_guard.h>
#include <boost/container/flat_set.hpp>
#include <boost/range/adaptor/map.hpp>
@ -63,14 +61,14 @@ AccessEntityPtr MemoryAccessStorage::readImpl(const UUID & id, bool throw_if_not
}
bool MemoryAccessStorage::insertImpl(const UUID & id, const AccessEntityPtr & new_entity, bool replace_if_exists, bool throw_if_exists)
bool MemoryAccessStorage::insertImpl(const UUID & id, const AccessEntityPtr & new_entity, bool replace_if_exists, bool throw_if_exists, UUID * conflicting_id)
{
std::lock_guard lock{mutex};
return insertNoLock(id, new_entity, replace_if_exists, throw_if_exists);
return insertNoLock(id, new_entity, replace_if_exists, throw_if_exists, conflicting_id);
}
bool MemoryAccessStorage::insertNoLock(const UUID & id, const AccessEntityPtr & new_entity, bool replace_if_exists, bool throw_if_exists)
bool MemoryAccessStorage::insertNoLock(const UUID & id, const AccessEntityPtr & new_entity, bool replace_if_exists, bool throw_if_exists, UUID * conflicting_id)
{
const String & name = new_entity->getName();
AccessEntityType type = new_entity->getType();
@ -86,9 +84,15 @@ bool MemoryAccessStorage::insertNoLock(const UUID & id, const AccessEntityPtr &
if (name_collision && !replace_if_exists)
{
if (throw_if_exists)
{
throwNameCollisionCannotInsert(type, name);
}
else
{
if (conflicting_id)
*conflicting_id = id_by_name;
return false;
}
}
auto it_by_id = entries_by_id.find(id);
@ -97,9 +101,15 @@ bool MemoryAccessStorage::insertNoLock(const UUID & id, const AccessEntityPtr &
{
const auto & existing_entry = it_by_id->second;
if (throw_if_exists)
{
throwIDCollisionCannotInsert(id, type, name, existing_entry.entity->getType(), existing_entry.entity->getName());
}
else
{
if (conflicting_id)
*conflicting_id = id;
return false;
}
}
/// Remove collisions if necessary.
@ -270,28 +280,7 @@ void MemoryAccessStorage::setAll(const std::vector<std::pair<UUID, AccessEntityP
/// Insert or update entities.
for (const auto & [id, entity] : entities_without_conflicts)
insertNoLock(id, entity, /* replace_if_exists = */ true, /* throw_if_exists = */ false);
}
void MemoryAccessStorage::restoreFromBackup(RestorerFromBackup & restorer)
{
if (!isRestoreAllowed())
throwRestoreNotAllowed();
auto entities = restorer.getAccessEntitiesToRestore();
if (entities.empty())
return;
auto create_access = restorer.getRestoreSettings().create_access;
bool replace_if_exists = (create_access == RestoreAccessCreationMode::kReplace);
bool throw_if_exists = (create_access == RestoreAccessCreationMode::kCreate);
restorer.addDataRestoreTask([this, my_entities = std::move(entities), replace_if_exists, throw_if_exists]
{
for (const auto & [id, entity] : my_entities)
insert(id, entity, replace_if_exists, throw_if_exists);
});
insertNoLock(id, entity, /* replace_if_exists = */ true, /* throw_if_exists = */ false, /* conflicting_id = */ nullptr);
}
}

View File

@ -34,17 +34,16 @@ public:
bool exists(const UUID & id) const override;
bool isBackupAllowed() const override { return backup_allowed; }
void restoreFromBackup(RestorerFromBackup & restorer) override;
private:
std::optional<UUID> findImpl(AccessEntityType type, const String & name) const override;
std::vector<UUID> findAllImpl(AccessEntityType type) const override;
AccessEntityPtr readImpl(const UUID & id, bool throw_if_not_exists) const override;
bool insertImpl(const UUID & id, const AccessEntityPtr & entity, bool replace_if_exists, bool throw_if_exists) override;
bool insertImpl(const UUID & id, const AccessEntityPtr & entity, bool replace_if_exists, bool throw_if_exists, UUID * conflicting_id) override;
bool removeImpl(const UUID & id, bool throw_if_not_exists) override;
bool updateImpl(const UUID & id, const UpdateFunc & update_func, bool throw_if_not_exists) override;
bool insertNoLock(const UUID & id, const AccessEntityPtr & entity, bool replace_if_exists, bool throw_if_exists);
bool insertNoLock(const UUID & id, const AccessEntityPtr & entity, bool replace_if_exists, bool throw_if_exists, UUID * conflicting_id);
bool removeNoLock(const UUID & id, bool throw_if_not_exists);
bool updateNoLock(const UUID & id, const UpdateFunc & update_func, bool throw_if_not_exists);

View File

@ -353,7 +353,7 @@ void MultipleAccessStorage::reload(ReloadMode reload_mode)
}
bool MultipleAccessStorage::insertImpl(const UUID & id, const AccessEntityPtr & entity, bool replace_if_exists, bool throw_if_exists)
bool MultipleAccessStorage::insertImpl(const UUID & id, const AccessEntityPtr & entity, bool replace_if_exists, bool throw_if_exists, UUID * conflicting_id)
{
std::shared_ptr<IAccessStorage> storage_for_insertion;
@ -376,7 +376,7 @@ bool MultipleAccessStorage::insertImpl(const UUID & id, const AccessEntityPtr &
getStorageName());
}
if (storage_for_insertion->insert(id, entity, replace_if_exists, throw_if_exists))
if (storage_for_insertion->insert(id, entity, replace_if_exists, throw_if_exists, conflicting_id))
{
std::lock_guard lock{mutex};
ids_cache.set(id, storage_for_insertion);

View File

@ -67,7 +67,7 @@ protected:
std::vector<UUID> findAllImpl(AccessEntityType type) const override;
AccessEntityPtr readImpl(const UUID & id, bool throw_if_not_exists) const override;
std::optional<std::pair<String, AccessEntityType>> readNameWithTypeImpl(const UUID & id, bool throw_if_not_exists) const override;
bool insertImpl(const UUID & id, const AccessEntityPtr & entity, bool replace_if_exists, bool throw_if_exists) override;
bool insertImpl(const UUID & id, const AccessEntityPtr & entity, bool replace_if_exists, bool throw_if_exists, UUID * conflicting_id) override;
bool removeImpl(const UUID & id, bool throw_if_not_exists) override;
bool updateImpl(const UUID & id, const UpdateFunc & update_func, bool throw_if_not_exists) override;
std::optional<AuthResult> authenticateImpl(const Credentials & credentials, const Poco::Net::IPAddress & address, const ExternalAuthenticators & external_authenticators, bool throw_if_user_not_exists, bool allow_no_password, bool allow_plaintext_password) const override;

View File

@ -24,7 +24,7 @@ std::vector<UUID> Quota::findDependencies() const
return to_roles.findDependencies();
}
void Quota::replaceDependencies(const std::unordered_map<UUID, UUID> & old_to_new_ids)
void Quota::doReplaceDependencies(const std::unordered_map<UUID, UUID> & old_to_new_ids)
{
to_roles.replaceDependencies(old_to_new_ids);
}

View File

@ -47,7 +47,7 @@ struct Quota : public IAccessEntity
AccessEntityType getType() const override { return TYPE; }
std::vector<UUID> findDependencies() const override;
void replaceDependencies(const std::unordered_map<UUID, UUID> & old_to_new_ids) override;
void doReplaceDependencies(const std::unordered_map<UUID, UUID> & old_to_new_ids) override;
bool isBackupAllowed() const override { return true; }
};

View File

@ -5,10 +5,9 @@
#include <Access/AccessChangesNotifier.h>
#include <Access/AccessBackup.h>
#include <Backups/BackupEntriesCollector.h>
#include <Backups/RestorerFromBackup.h>
#include <Backups/RestoreSettings.h>
#include <Backups/IBackupCoordination.h>
#include <Backups/IRestoreCoordination.h>
#include <Backups/RestorerFromBackup.h>
#include <IO/ReadHelpers.h>
#include <Interpreters/Context.h>
#include <Common/ZooKeeper/KeeperException.h>
@ -120,7 +119,7 @@ static void retryOnZooKeeperUserError(size_t attempts, Func && function)
}
}
bool ReplicatedAccessStorage::insertImpl(const UUID & id, const AccessEntityPtr & new_entity, bool replace_if_exists, bool throw_if_exists)
bool ReplicatedAccessStorage::insertImpl(const UUID & id, const AccessEntityPtr & new_entity, bool replace_if_exists, bool throw_if_exists, UUID * conflicting_id)
{
const AccessEntityTypeInfo type_info = AccessEntityTypeInfo::get(new_entity->getType());
const String & name = new_entity->getName();
@ -128,7 +127,7 @@ bool ReplicatedAccessStorage::insertImpl(const UUID & id, const AccessEntityPtr
auto zookeeper = getZooKeeper();
bool ok = false;
retryOnZooKeeperUserError(10, [&]{ ok = insertZooKeeper(zookeeper, id, new_entity, replace_if_exists, throw_if_exists); });
retryOnZooKeeperUserError(10, [&]{ ok = insertZooKeeper(zookeeper, id, new_entity, replace_if_exists, throw_if_exists, conflicting_id); });
if (!ok)
return false;
@ -143,7 +142,8 @@ bool ReplicatedAccessStorage::insertZooKeeper(
const UUID & id,
const AccessEntityPtr & new_entity,
bool replace_if_exists,
bool throw_if_exists)
bool throw_if_exists,
UUID * conflicting_id)
{
const String & name = new_entity->getName();
const AccessEntityType type = new_entity->getType();
@ -167,27 +167,52 @@ bool ReplicatedAccessStorage::insertZooKeeper(
if (res == Coordination::Error::ZNODEEXISTS)
{
if (!throw_if_exists && !replace_if_exists)
return false; /// Couldn't insert a new entity.
if (throw_if_exists)
if (!replace_if_exists)
{
if (responses[0]->error == Coordination::Error::ZNODEEXISTS)
{
/// To fail with a nice error message, we need info about what already exists.
/// This itself could fail if the conflicting uuid disappears in the meantime.
/// If that happens, then we'll just retry from the start.
String existing_entity_definition = zookeeper->get(entity_path);
/// Couldn't insert the new entity because there is an existing entity with such UUID.
if (throw_if_exists)
{
/// To fail with a nice error message, we need info about what already exists.
/// This itself can fail if the conflicting uuid disappears in the meantime.
/// If that happens, then retryOnZooKeeperUserError() will just retry the operation from the start.
String existing_entity_definition = zookeeper->get(entity_path);
AccessEntityPtr existing_entity = deserializeAccessEntity(existing_entity_definition, entity_path);
AccessEntityType existing_type = existing_entity->getType();
String existing_name = existing_entity->getName();
throwIDCollisionCannotInsert(id, type, name, existing_type, existing_name);
AccessEntityPtr existing_entity = deserializeAccessEntity(existing_entity_definition, entity_path);
AccessEntityType existing_type = existing_entity->getType();
String existing_name = existing_entity->getName();
throwIDCollisionCannotInsert(id, type, name, existing_type, existing_name);
}
else
{
if (conflicting_id)
*conflicting_id = id;
return false;
}
}
else if (responses[1]->error == Coordination::Error::ZNODEEXISTS)
{
/// Couldn't insert the new entity because there is an existing entity with the same name.
if (throw_if_exists)
{
throwNameCollisionCannotInsert(type, name);
}
else
{
if (conflicting_id)
{
/// Get UUID of the existing entry with the same name.
/// This itself can fail if the conflicting name disappears in the meantime.
/// If that happens, then retryOnZooKeeperUserError() will just retry the operation from the start.
*conflicting_id = parseUUID(zookeeper->get(name_path));
}
return false;
}
}
else
{
/// Couldn't insert the new entity because there is an existing entity with such name.
throwNameCollisionCannotInsert(type, name);
zkutil::KeeperMultiException::check(res, ops, responses);
}
}
@ -693,28 +718,10 @@ void ReplicatedAccessStorage::backup(BackupEntriesCollector & backup_entries_col
}
void ReplicatedAccessStorage::restoreFromBackup(RestorerFromBackup & restorer)
bool ReplicatedAccessStorage::acquireReplicatedRestore(RestorerFromBackup & restorer) const
{
if (!isRestoreAllowed())
throwRestoreNotAllowed();
auto restore_coordination = restorer.getRestoreCoordination();
if (!restore_coordination->acquireReplicatedAccessStorage(zookeeper_path))
return;
auto entities = restorer.getAccessEntitiesToRestore();
if (entities.empty())
return;
auto create_access = restorer.getRestoreSettings().create_access;
bool replace_if_exists = (create_access == RestoreAccessCreationMode::kReplace);
bool throw_if_exists = (create_access == RestoreAccessCreationMode::kCreate);
restorer.addDataRestoreTask([this, my_entities = std::move(entities), replace_if_exists, throw_if_exists]
{
for (const auto & [id, entity] : my_entities)
insert(id, entity, replace_if_exists, throw_if_exists);
});
return restore_coordination->acquireReplicatedAccessStorage(zookeeper_path);
}
}

View File

@ -26,6 +26,7 @@ public:
void shutdown() override;
const char * getStorageType() const override { return STORAGE_TYPE; }
bool isReplicated() const override { return true; }
void startPeriodicReloading() override { startWatchingThread(); }
void stopPeriodicReloading() override { stopWatchingThread(); }
@ -35,7 +36,6 @@ public:
bool isBackupAllowed() const override { return backup_allowed; }
void backup(BackupEntriesCollector & backup_entries_collector, const String & data_path_in_backup, AccessEntityType type) const override;
void restoreFromBackup(RestorerFromBackup & restorer) override;
private:
String zookeeper_path;
@ -48,11 +48,11 @@ private:
std::unique_ptr<ThreadFromGlobalPool> watching_thread;
std::shared_ptr<ConcurrentBoundedQueue<UUID>> watched_queue;
bool insertImpl(const UUID & id, const AccessEntityPtr & new_entity, bool replace_if_exists, bool throw_if_exists) override;
bool insertImpl(const UUID & id, const AccessEntityPtr & new_entity, bool replace_if_exists, bool throw_if_exists, UUID * conflicting_id) override;
bool removeImpl(const UUID & id, bool throw_if_not_exists) override;
bool updateImpl(const UUID & id, const UpdateFunc & update_func, bool throw_if_not_exists) override;
bool insertZooKeeper(const zkutil::ZooKeeperPtr & zookeeper, const UUID & id, const AccessEntityPtr & entity, bool replace_if_exists, bool throw_if_exists);
bool insertZooKeeper(const zkutil::ZooKeeperPtr & zookeeper, const UUID & id, const AccessEntityPtr & entity, bool replace_if_exists, bool throw_if_exists, UUID * conflicting_id);
bool removeZooKeeper(const zkutil::ZooKeeperPtr & zookeeper, const UUID & id, bool throw_if_not_exists);
bool updateZooKeeper(const zkutil::ZooKeeperPtr & zookeeper, const UUID & id, const UpdateFunc & update_func, bool throw_if_not_exists);
@ -80,6 +80,7 @@ private:
std::optional<UUID> findImpl(AccessEntityType type, const String & name) const override;
std::vector<UUID> findAllImpl(AccessEntityType type) const override;
AccessEntityPtr readImpl(const UUID & id, bool throw_if_not_exists) const override;
bool acquireReplicatedRestore(RestorerFromBackup & restorer) const override;
mutable std::mutex mutex;
MemoryAccessStorage memory_storage TSA_GUARDED_BY(mutex);

View File

@ -21,7 +21,7 @@ std::vector<UUID> Role::findDependencies() const
return res;
}
void Role::replaceDependencies(const std::unordered_map<UUID, UUID> & old_to_new_ids)
void Role::doReplaceDependencies(const std::unordered_map<UUID, UUID> & old_to_new_ids)
{
granted_roles.replaceDependencies(old_to_new_ids);
settings.replaceDependencies(old_to_new_ids);

View File

@ -21,7 +21,7 @@ struct Role : public IAccessEntity
AccessEntityType getType() const override { return TYPE; }
std::vector<UUID> findDependencies() const override;
void replaceDependencies(const std::unordered_map<UUID, UUID> & old_to_new_ids) override;
void doReplaceDependencies(const std::unordered_map<UUID, UUID> & old_to_new_ids) override;
bool isBackupAllowed() const override { return settings.isBackupAllowed(); }
};

View File

@ -63,7 +63,7 @@ std::vector<UUID> RowPolicy::findDependencies() const
return to_roles.findDependencies();
}
void RowPolicy::replaceDependencies(const std::unordered_map<UUID, UUID> & old_to_new_ids)
void RowPolicy::doReplaceDependencies(const std::unordered_map<UUID, UUID> & old_to_new_ids)
{
to_roles.replaceDependencies(old_to_new_ids);
}

View File

@ -50,7 +50,7 @@ struct RowPolicy : public IAccessEntity
AccessEntityType getType() const override { return TYPE; }
std::vector<UUID> findDependencies() const override;
void replaceDependencies(const std::unordered_map<UUID, UUID> & old_to_new_ids) override;
void doReplaceDependencies(const std::unordered_map<UUID, UUID> & old_to_new_ids) override;
bool isBackupAllowed() const override { return true; }
/// Which roles or users should use this row policy.

View File

@ -21,7 +21,7 @@ std::vector<UUID> SettingsProfile::findDependencies() const
return res;
}
void SettingsProfile::replaceDependencies(const std::unordered_map<UUID, UUID> & old_to_new_ids)
void SettingsProfile::doReplaceDependencies(const std::unordered_map<UUID, UUID> & old_to_new_ids)
{
elements.replaceDependencies(old_to_new_ids);
to_roles.replaceDependencies(old_to_new_ids);

View File

@ -22,7 +22,7 @@ struct SettingsProfile : public IAccessEntity
AccessEntityType getType() const override { return TYPE; }
std::vector<UUID> findDependencies() const override;
void replaceDependencies(const std::unordered_map<UUID, UUID> & old_to_new_ids) override;
void doReplaceDependencies(const std::unordered_map<UUID, UUID> & old_to_new_ids) override;
bool isBackupAllowed() const override { return elements.isBackupAllowed(); }
};

View File

@ -16,7 +16,8 @@ bool User::equal(const IAccessEntity & other) const
if (!IAccessEntity::equal(other))
return false;
const auto & other_user = typeid_cast<const User &>(other);
return (auth_data == other_user.auth_data) && (allowed_client_hosts == other_user.allowed_client_hosts)
return (authentication_methods == other_user.authentication_methods)
&& (allowed_client_hosts == other_user.allowed_client_hosts)
&& (access == other_user.access) && (granted_roles == other_user.granted_roles) && (default_roles == other_user.default_roles)
&& (settings == other_user.settings) && (grantees == other_user.grantees) && (default_database == other_user.default_database)
&& (valid_until == other_user.valid_until);
@ -48,7 +49,7 @@ std::vector<UUID> User::findDependencies() const
return res;
}
void User::replaceDependencies(const std::unordered_map<UUID, UUID> & old_to_new_ids)
void User::doReplaceDependencies(const std::unordered_map<UUID, UUID> & old_to_new_ids)
{
default_roles.replaceDependencies(old_to_new_ids);
granted_roles.replaceDependencies(old_to_new_ids);

View File

@ -15,7 +15,7 @@ namespace DB
*/
struct User : public IAccessEntity
{
AuthenticationData auth_data;
std::vector<AuthenticationData> authentication_methods;
AllowedClientHosts allowed_client_hosts = AllowedClientHosts::AnyHostTag{};
AccessRights access;
GrantedRoles granted_roles;
@ -32,7 +32,7 @@ struct User : public IAccessEntity
void setName(const String & name_) override;
std::vector<UUID> findDependencies() const override;
void replaceDependencies(const std::unordered_map<UUID, UUID> & old_to_new_ids) override;
void doReplaceDependencies(const std::unordered_map<UUID, UUID> & old_to_new_ids) override;
bool isBackupAllowed() const override { return settings.isBackupAllowed(); }
};

View File

@ -155,18 +155,18 @@ namespace
if (has_password_plaintext)
{
user->auth_data = AuthenticationData{AuthenticationType::PLAINTEXT_PASSWORD};
user->auth_data.setPassword(config.getString(user_config + ".password"));
user->authentication_methods.emplace_back(AuthenticationType::PLAINTEXT_PASSWORD);
user->authentication_methods.back().setPassword(config.getString(user_config + ".password"));
}
else if (has_password_sha256_hex)
{
user->auth_data = AuthenticationData{AuthenticationType::SHA256_PASSWORD};
user->auth_data.setPasswordHashHex(config.getString(user_config + ".password_sha256_hex"));
user->authentication_methods.emplace_back(AuthenticationType::SHA256_PASSWORD);
user->authentication_methods.back().setPasswordHashHex(config.getString(user_config + ".password_sha256_hex"));
}
else if (has_password_double_sha1_hex)
{
user->auth_data = AuthenticationData{AuthenticationType::DOUBLE_SHA1_PASSWORD};
user->auth_data.setPasswordHashHex(config.getString(user_config + ".password_double_sha1_hex"));
user->authentication_methods.emplace_back(AuthenticationType::DOUBLE_SHA1_PASSWORD);
user->authentication_methods.back().setPasswordHashHex(config.getString(user_config + ".password_double_sha1_hex"));
}
else if (has_ldap)
{
@ -178,19 +178,19 @@ namespace
if (ldap_server_name.empty())
throw Exception(ErrorCodes::BAD_ARGUMENTS, "LDAP server name cannot be empty for user {}.", user_name);
user->auth_data = AuthenticationData{AuthenticationType::LDAP};
user->auth_data.setLDAPServerName(ldap_server_name);
user->authentication_methods.emplace_back(AuthenticationType::LDAP);
user->authentication_methods.back().setLDAPServerName(ldap_server_name);
}
else if (has_kerberos)
{
const auto realm = config.getString(user_config + ".kerberos.realm", "");
user->auth_data = AuthenticationData{AuthenticationType::KERBEROS};
user->auth_data.setKerberosRealm(realm);
user->authentication_methods.emplace_back(AuthenticationType::KERBEROS);
user->authentication_methods.back().setKerberosRealm(realm);
}
else if (has_certificates)
{
user->auth_data = AuthenticationData{AuthenticationType::SSL_CERTIFICATE};
user->authentication_methods.emplace_back(AuthenticationType::SSL_CERTIFICATE);
/// Fill list of allowed certificates.
Poco::Util::AbstractConfiguration::Keys keys;
@ -200,14 +200,14 @@ namespace
if (key.starts_with("common_name"))
{
String value = config.getString(certificates_config + "." + key);
user->auth_data.addSSLCertificateSubject(SSLCertificateSubjects::Type::CN, std::move(value));
user->authentication_methods.back().addSSLCertificateSubject(SSLCertificateSubjects::Type::CN, std::move(value));
}
else if (key.starts_with("subject_alt_name"))
{
String value = config.getString(certificates_config + "." + key);
if (value.empty())
throw Exception(ErrorCodes::BAD_ARGUMENTS, "Expected ssl_certificates.subject_alt_name to not be empty");
user->auth_data.addSSLCertificateSubject(SSLCertificateSubjects::Type::SAN, std::move(value));
user->authentication_methods.back().addSSLCertificateSubject(SSLCertificateSubjects::Type::SAN, std::move(value));
}
else
throw Exception(ErrorCodes::BAD_ARGUMENTS, "Unknown certificate pattern type: {}", key);
@ -216,7 +216,7 @@ namespace
else if (has_ssh_keys)
{
#if USE_SSH
user->auth_data = AuthenticationData{AuthenticationType::SSH_KEY};
user->authentication_methods.emplace_back(AuthenticationType::SSH_KEY);
Poco::Util::AbstractConfiguration::Keys entries;
config.keys(ssh_keys_config, entries);
@ -253,26 +253,33 @@ namespace
else
throw Exception(ErrorCodes::BAD_ARGUMENTS, "Unknown ssh_key entry pattern type: {}", entry);
}
user->auth_data.setSSHKeys(std::move(keys));
user->authentication_methods.back().setSSHKeys(std::move(keys));
#else
throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "SSH is disabled, because ClickHouse is built without libssh");
#endif
}
else if (has_http_auth)
{
user->auth_data = AuthenticationData{AuthenticationType::HTTP};
user->auth_data.setHTTPAuthenticationServerName(config.getString(http_auth_config + ".server"));
user->authentication_methods.emplace_back(AuthenticationType::HTTP);
user->authentication_methods.back().setHTTPAuthenticationServerName(config.getString(http_auth_config + ".server"));
auto scheme = config.getString(http_auth_config + ".scheme");
user->auth_data.setHTTPAuthenticationScheme(parseHTTPAuthenticationScheme(scheme));
user->authentication_methods.back().setHTTPAuthenticationScheme(parseHTTPAuthenticationScheme(scheme));
}
else
{
user->authentication_methods.emplace_back();
}
auto auth_type = user->auth_data.getType();
if (((auth_type == AuthenticationType::NO_PASSWORD) && !allow_no_password) ||
((auth_type == AuthenticationType::PLAINTEXT_PASSWORD) && !allow_plaintext_password))
for (const auto & authentication_method : user->authentication_methods)
{
throw Exception(ErrorCodes::BAD_ARGUMENTS,
"Authentication type {} is not allowed, check the setting allow_{} in the server configuration",
toString(auth_type), AuthenticationTypeInfo::get(auth_type).name);
auto auth_type = authentication_method.getType();
if (((auth_type == AuthenticationType::NO_PASSWORD) && !allow_no_password) ||
((auth_type == AuthenticationType::PLAINTEXT_PASSWORD) && !allow_plaintext_password))
{
throw Exception(ErrorCodes::BAD_ARGUMENTS,
"Authentication type {} is not allowed, check the setting allow_{} in the server configuration",
toString(auth_type), AuthenticationTypeInfo::get(auth_type).name);
}
}
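
The loop above now validates every configured method instead of a single auth_data entry. A minimal standalone sketch of that per-method check, with illustrative enum values and settings standing in for the ClickHouse types:

// Standalone sketch of the per-method validation loop above.
// AuthenticationType values and the two flags are illustrative stand-ins.
#include <iostream>
#include <stdexcept>
#include <vector>

enum class AuthenticationType { NO_PASSWORD, PLAINTEXT_PASSWORD, SHA256_PASSWORD, LDAP };

void validate(const std::vector<AuthenticationType> & methods, bool allow_no_password, bool allow_plaintext_password)
{
    for (auto type : methods)
    {
        /// Every method is checked, not just the first one.
        if ((type == AuthenticationType::NO_PASSWORD && !allow_no_password)
            || (type == AuthenticationType::PLAINTEXT_PASSWORD && !allow_plaintext_password))
            throw std::runtime_error("Authentication type is not allowed by the server configuration");
    }
}

int main()
{
    try
    {
        validate({AuthenticationType::SHA256_PASSWORD, AuthenticationType::PLAINTEXT_PASSWORD},
                 /*allow_no_password=*/ false, /*allow_plaintext_password=*/ false);
    }
    catch (const std::exception & e)
    {
        std::cout << e.what() << '\n'; /// the second method trips the check
    }
}
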
const auto profile_name_config = user_config + ".profile";

View File

@ -3,370 +3,89 @@
#include <Parsers/FunctionSecretArgumentsFinder.h>
#include <Analyzer/ConstantNode.h>
#include <Analyzer/FunctionNode.h>
#include <Analyzer/IQueryTreeNode.h>
#include <Analyzer/IdentifierNode.h>
#include <Analyzer/ListNode.h>
#include <Common/KnownObjectNames.h>
#include <Core/QualifiedTableName.h>
#include <boost/algorithm/string/predicate.hpp>
namespace DB
{
class FunctionTreeNode : public AbstractFunction
{
public:
class ArgumentTreeNode : public Argument
{
public:
explicit ArgumentTreeNode(const IQueryTreeNode * argument_) : argument(argument_) {}
std::unique_ptr<AbstractFunction> getFunction() const override
{
if (const auto * f = argument->as<FunctionNode>())
return std::make_unique<FunctionTreeNode>(*f);
return nullptr;
}
bool isIdentifier() const override { return argument->as<IdentifierNode>(); }
bool tryGetString(String * res, bool allow_identifier) const override
{
if (const auto * literal = argument->as<ConstantNode>())
{
if (literal->getValue().getType() != Field::Types::String)
return false;
if (res)
*res = literal->getValue().safeGet<String>();
return true;
}
if (allow_identifier)
{
if (const auto * id = argument->as<IdentifierNode>())
{
if (res)
*res = id->getIdentifier().getFullName();
return true;
}
}
return false;
}
private:
const IQueryTreeNode * argument = nullptr;
};
class ArgumentsTreeNode : public Arguments
{
public:
explicit ArgumentsTreeNode(const QueryTreeNodes * arguments_) : arguments(arguments_) {}
size_t size() const override { return arguments ? arguments->size() : 0; }
std::unique_ptr<Argument> at(size_t n) const override { return std::make_unique<ArgumentTreeNode>(arguments->at(n).get()); }
private:
const QueryTreeNodes * arguments = nullptr;
};
explicit FunctionTreeNode(const FunctionNode & function_) : function(&function_)
{
if (const auto & nodes = function->getArguments().getNodes(); !nodes.empty())
arguments = std::make_unique<ArgumentsTreeNode>(&nodes);
}
String name() const override { return function->getFunctionName(); }
private:
const FunctionNode * function = nullptr;
};
/// Finds arguments of a specified function which should not be displayed for most users for security reasons.
/// That involves passwords and secret keys.
class FunctionSecretArgumentsFinderTreeNode
class FunctionSecretArgumentsFinderTreeNode : public FunctionSecretArgumentsFinder
{
public:
explicit FunctionSecretArgumentsFinderTreeNode(const FunctionNode & function_) : function(function_), arguments(function.getArguments())
explicit FunctionSecretArgumentsFinderTreeNode(const FunctionNode & function_)
: FunctionSecretArgumentsFinder(std::make_unique<FunctionTreeNode>(function_))
{
if (arguments.getNodes().empty())
if (!function->hasArguments())
return;
findFunctionSecretArguments();
findOrdinaryFunctionSecretArguments();
}
struct Result
{
/// Result constructed by default means no arguments will be hidden.
size_t start = static_cast<size_t>(-1);
size_t count = 0; /// Mostly it's either 0 or 1. There are only a few cases where `count` can be greater than 1 (e.g. see `encrypt`).
/// In all known cases secret arguments are consecutive
bool are_named = false; /// Arguments like `password = 'password'` are considered as named arguments.
/// E.g. "headers" in `url('..', headers('foo' = '[HIDDEN]'))`
std::vector<std::string> nested_maps;
bool hasSecrets() const
{
return count != 0 || !nested_maps.empty();
}
};
FunctionSecretArgumentsFinder::Result getResult() const { return result; }
private:
const FunctionNode & function;
const ListNode & arguments;
FunctionSecretArgumentsFinder::Result result;
void markSecretArgument(size_t index, bool argument_is_named = false)
{
if (index >= arguments.getNodes().size())
return;
if (!result.count)
{
result.start = index;
result.are_named = argument_is_named;
}
chassert(index >= result.start); /// We always check arguments consecutively
result.count = index + 1 - result.start;
if (!argument_is_named)
result.are_named = false;
}
void findFunctionSecretArguments()
{
const auto & name = function.getFunctionName();
if ((name == "mysql") || (name == "postgresql") || (name == "mongodb"))
{
/// mysql('host:port', 'database', 'table', 'user', 'password', ...)
/// postgresql('host:port', 'database', 'table', 'user', 'password', ...)
/// mongodb('host:port', 'database', 'collection', 'user', 'password', ...)
findMySQLFunctionSecretArguments();
}
else if ((name == "s3") || (name == "cosn") || (name == "oss") ||
(name == "deltaLake") || (name == "hudi") || (name == "iceberg"))
{
/// s3('url', 'aws_access_key_id', 'aws_secret_access_key', ...)
findS3FunctionSecretArguments(/* is_cluster_function= */ false);
}
else if (name == "s3Cluster")
{
/// s3Cluster('cluster_name', 'url', 'aws_access_key_id', 'aws_secret_access_key', ...)
findS3FunctionSecretArguments(/* is_cluster_function= */ true);
}
else if ((name == "remote") || (name == "remoteSecure"))
{
/// remote('addresses_expr', 'db', 'table', 'user', 'password', ...)
findRemoteFunctionSecretArguments();
}
else if ((name == "encrypt") || (name == "decrypt") ||
(name == "aes_encrypt_mysql") || (name == "aes_decrypt_mysql") ||
(name == "tryDecrypt"))
{
/// encrypt('mode', 'plaintext', 'key' [, iv, aad])
findEncryptionFunctionSecretArguments();
}
else if (name == "url")
{
findURLSecretArguments();
}
}
void findMySQLFunctionSecretArguments()
{
if (isNamedCollectionName(0))
{
/// mysql(named_collection, ..., password = 'password', ...)
findSecretNamedArgument("password", 1);
}
else
{
/// mysql('host:port', 'database', 'table', 'user', 'password', ...)
markSecretArgument(4);
}
}
/// Returns the number of arguments excluding "headers" and "extra_credentials" (which should
/// always be at the end). Marks "headers" as secret, if found.
size_t excludeS3OrURLNestedMaps()
{
const auto & nodes = arguments.getNodes();
size_t count = nodes.size();
while (count > 0)
{
const FunctionNode * f = nodes.at(count - 1)->as<FunctionNode>();
if (!f)
break;
if (f->getFunctionName() == "headers")
result.nested_maps.push_back(f->getFunctionName());
else if (f->getFunctionName() != "extra_credentials")
break;
count -= 1;
}
return count;
}
void findS3FunctionSecretArguments(bool is_cluster_function)
{
/// s3Cluster('cluster_name', 'url', ...) has 'url' as its second argument.
size_t url_arg_idx = is_cluster_function ? 1 : 0;
if (!is_cluster_function && isNamedCollectionName(0))
{
/// s3(named_collection, ..., secret_access_key = 'secret_access_key', ...)
findSecretNamedArgument("secret_access_key", 1);
return;
}
/// We should check other arguments first because we don't need to do any replacement in case of
/// s3('url', NOSIGN, 'format' [, 'compression'] [, extra_credentials(..)] [, headers(..)])
/// s3('url', 'format', 'structure' [, 'compression'] [, extra_credentials(..)] [, headers(..)])
size_t count = excludeS3OrURLNestedMaps();
if ((url_arg_idx + 3 <= count) && (count <= url_arg_idx + 4))
{
String second_arg;
if (tryGetStringFromArgument(url_arg_idx + 1, &second_arg))
{
if (boost::iequals(second_arg, "NOSIGN"))
return; /// The argument after 'url' is "NOSIGN".
if (second_arg == "auto" || KnownFormatNames::instance().exists(second_arg))
return; /// The argument after 'url' is a format: s3('url', 'format', ...)
}
}
/// We're going to replace 'aws_secret_access_key' with '[HIDDEN]' for the following signatures:
/// s3('url', 'aws_access_key_id', 'aws_secret_access_key', ...)
/// s3Cluster('cluster_name', 'url', 'aws_access_key_id', 'aws_secret_access_key', 'format', 'compression')
if (url_arg_idx + 2 < count)
markSecretArgument(url_arg_idx + 2);
}
void findURLSecretArguments()
{
if (!isNamedCollectionName(0))
excludeS3OrURLNestedMaps();
}
bool tryGetStringFromArgument(size_t arg_idx, String * res, bool allow_identifier = true) const
{
if (arg_idx >= arguments.getNodes().size())
return false;
return tryGetStringFromArgument(arguments.getNodes()[arg_idx], res, allow_identifier);
}
static bool tryGetStringFromArgument(const QueryTreeNodePtr argument, String * res, bool allow_identifier = true)
{
if (const auto * literal = argument->as<ConstantNode>())
{
if (literal->getValue().getType() != Field::Types::String)
return false;
if (res)
*res = literal->getValue().safeGet<String>();
return true;
}
if (allow_identifier)
{
if (const auto * id = argument->as<IdentifierNode>())
{
if (res)
*res = id->getIdentifier().getFullName();
return true;
}
}
return false;
}
void findRemoteFunctionSecretArguments()
{
if (isNamedCollectionName(0))
{
/// remote(named_collection, ..., password = 'password', ...)
findSecretNamedArgument("password", 1);
return;
}
/// We're going to replace 'password' with '[HIDDEN]' for the following signatures:
/// remote('addresses_expr', db.table, 'user' [, 'password'] [, sharding_key])
/// remote('addresses_expr', 'db', 'table', 'user' [, 'password'] [, sharding_key])
/// remote('addresses_expr', table_function(), 'user' [, 'password'] [, sharding_key])
/// But we should check the number of arguments first because we don't need to do any replacements in case of
/// remote('addresses_expr', db.table)
if (arguments.getNodes().size() < 3)
return;
size_t arg_num = 1;
/// Skip 1 or 2 arguments with table_function() or db.table or 'db', 'table'.
const auto * table_function = arguments.getNodes()[arg_num]->as<FunctionNode>();
if (table_function && KnownTableFunctionNames::instance().exists(table_function->getFunctionName()))
{
++arg_num;
}
else
{
std::optional<String> database;
std::optional<QualifiedTableName> qualified_table_name;
if (!tryGetDatabaseNameOrQualifiedTableName(arg_num, database, qualified_table_name))
{
/// We couldn't evaluate the argument so we don't know whether it is 'db.table' or just 'db'.
/// Hence we can't figure out whether we should skip one argument 'user' or two arguments 'table', 'user'
/// before the argument 'password'. So it's safer to wipe two arguments just in case.
/// The last argument can be also a `sharding_key`, so we need to check that argument is a literal string
/// before wiping it (because the `password` argument is always a literal string).
if (tryGetStringFromArgument(arg_num + 2, nullptr, /* allow_identifier= */ false))
{
/// Wipe either `password` or `user`.
markSecretArgument(arg_num + 2);
}
if (tryGetStringFromArgument(arg_num + 3, nullptr, /* allow_identifier= */ false))
{
/// Wipe either `password` or `sharding_key`.
markSecretArgument(arg_num + 3);
}
return;
}
/// Skip the current argument (which is either a database name or a qualified table name).
++arg_num;
if (database)
{
/// Skip the 'table' argument if the previous argument was a database name.
++arg_num;
}
}
/// Skip username.
++arg_num;
/// Do our replacement:
/// remote('addresses_expr', db.table, 'user', 'password', ...) -> remote('addresses_expr', db.table, 'user', '[HIDDEN]', ...)
/// The last argument can be also a `sharding_key`, so we need to check that argument is a literal string
/// before wiping it (because the `password` argument is always a literal string).
bool can_be_password = tryGetStringFromArgument(arg_num, nullptr, /* allow_identifier= */ false);
if (can_be_password)
markSecretArgument(arg_num);
}
/// Tries to get either a database name or a qualified table name from an argument.
/// Empty string is also allowed (it means the default database).
/// The function is used by findRemoteFunctionSecretArguments() to determine how many arguments to skip before a password.
bool tryGetDatabaseNameOrQualifiedTableName(
size_t arg_idx,
std::optional<String> & res_database,
std::optional<QualifiedTableName> & res_qualified_table_name) const
{
res_database.reset();
res_qualified_table_name.reset();
String str;
if (!tryGetStringFromArgument(arg_idx, &str, /* allow_identifier= */ true))
return false;
if (str.empty())
{
res_database = "";
return true;
}
auto qualified_table_name = QualifiedTableName::tryParseFromString(str);
if (!qualified_table_name)
return false;
if (qualified_table_name->database.empty())
res_database = std::move(qualified_table_name->table);
else
res_qualified_table_name = std::move(qualified_table_name);
return true;
}
void findEncryptionFunctionSecretArguments()
{
if (arguments.getNodes().empty())
return;
/// We replace all arguments after 'mode' with '[HIDDEN]':
/// encrypt('mode', 'plaintext', 'key' [, iv, aad]) -> encrypt('mode', '[HIDDEN]')
result.start = 1;
result.count = arguments.getNodes().size() - 1;
}
/// Whether the specified argument can be the name of a named collection.
bool isNamedCollectionName(size_t arg_idx) const
{
if (arguments.getNodes().size() <= arg_idx)
return false;
const auto * identifier = arguments.getNodes()[arg_idx]->as<IdentifierNode>();
return identifier != nullptr;
}
/// Looks for a secret argument with the specified name, i.e. an argument passed in the `key = value` form whose key matches.
void findSecretNamedArgument(const std::string_view & key, size_t start = 0)
{
for (size_t i = start; i < arguments.getNodes().size(); ++i)
{
const auto & argument = arguments.getNodes()[i];
const auto * equals_func = argument->as<FunctionNode>();
if (!equals_func || (equals_func->getFunctionName() != "equals"))
continue;
const auto * expr_list = equals_func->getArguments().as<ListNode>();
if (!expr_list)
continue;
const auto & equal_args = expr_list->getNodes();
if (equal_args.size() != 2)
continue;
String found_key;
if (!tryGetStringFromArgument(equal_args[0], &found_key))
continue;
if (found_key == key)
markSecretArgument(i, /* argument_is_named= */ true);
}
}
};
}
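
For reference, the marking logic above records secrets as a single contiguous [start, start + count) range rather than an arbitrary set of indexes. A minimal standalone sketch of that bookkeeping, with an illustrative argument list instead of the ClickHouse query-tree classes:

// Minimal sketch of the consecutive-range bookkeeping used above.
// The Result fields mirror the finder's result; the demo argument list is illustrative.
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

struct Result
{
    size_t start = static_cast<size_t>(-1);
    size_t count = 0;       /// secret arguments are assumed to be consecutive
    bool are_named = false; /// e.g. password = '...'
};

void markSecretArgument(Result & result, size_t index, bool argument_is_named = false)
{
    if (!result.count)
    {
        result.start = index;
        result.are_named = argument_is_named;
    }
    result.count = index + 1 - result.start;
    if (!argument_is_named)
        result.are_named = false;
}

int main()
{
    /// mysql('host:port', 'database', 'table', 'user', 'password')
    std::vector<std::string> args = {"'host:port'", "'db'", "'table'", "'user'", "'secret'"};
    Result result;
    markSecretArgument(result, 4); /// the password position for the positional signature

    for (size_t i = 0; i != args.size(); ++i)
        std::cout << (i >= result.start && i < result.start + result.count ? "'[HIDDEN]'" : args[i])
                  << (i + 1 == args.size() ? "\n" : ", ");
}
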

View File

@ -2564,8 +2564,8 @@ void checkFunctionNodeHasEmptyNullsAction(FunctionNode const & node)
if (node.getNullsAction() != NullsAction::EMPTY)
throw Exception(
ErrorCodes::SYNTAX_ERROR,
"Function with name '{}' cannot use {} NULLS",
node.getFunctionName(),
"Function with name {} cannot use {} NULLS",
backQuote(node.getFunctionName()),
node.getNullsAction() == NullsAction::IGNORE_NULLS ? "IGNORE" : "RESPECT");
}
}
@ -3228,16 +3228,16 @@ ProjectionNames QueryAnalyzer::resolveFunction(QueryTreeNodePtr & node, Identifi
auto hints = NamePrompter<2>::getHints(function_name, possible_function_names);
throw Exception(ErrorCodes::UNKNOWN_FUNCTION,
"Function with name '{}' does not exist. In scope {}{}",
function_name,
"Function with name {} does not exist. In scope {}{}",
backQuote(function_name),
scope.scope_node->formatASTForErrorMessage(),
getHintsErrorMessageSuffix(hints));
}
if (!function_lambda_arguments_indexes.empty())
throw Exception(ErrorCodes::UNSUPPORTED_METHOD,
"Aggregate function '{}' does not support lambda arguments",
function_name);
"Aggregate function {} does not support lambda arguments",
backQuote(function_name));
auto action = function_node_ptr->getNullsAction();
std::string aggregate_function_name = rewriteAggregateFunctionNameIfNeeded(function_name, action, scope.context);
@ -3679,10 +3679,10 @@ ProjectionNames QueryAnalyzer::resolveExpressionNode(
auto hints = IdentifierResolver::collectIdentifierTypoHints(unresolved_identifier, valid_identifiers);
throw Exception(ErrorCodes::UNKNOWN_IDENTIFIER, "Unknown {}{} identifier '{}' in scope {}{}",
throw Exception(ErrorCodes::UNKNOWN_IDENTIFIER, "Unknown {}{} identifier {} in scope {}{}",
toStringLowercase(IdentifierLookupContext::EXPRESSION),
message_clarification,
unresolved_identifier.getFullName(),
backQuote(unresolved_identifier.getFullName()),
scope.scope_node->formatASTForErrorMessage(),
getHintsErrorMessageSuffix(hints));
}

View File

@ -1875,11 +1875,11 @@ void ClientBase::processParsedSingleQuery(const String & full_query, const Strin
if (const auto * create_user_query = parsed_query->as<ASTCreateUserQuery>())
{
if (!create_user_query->attach && create_user_query->auth_data)
if (!create_user_query->attach && !create_user_query->authentication_methods.empty())
{
if (const auto * auth_data = create_user_query->auth_data->as<ASTAuthenticationData>())
for (const auto & authentication_method : create_user_query->authentication_methods)
{
auto password = auth_data->getPassword();
auto password = authentication_method->getPassword();
if (password)
client_context->getAccessControl().checkPasswordComplexityRules(*password);

View File

@ -455,6 +455,9 @@ void Connection::sendAddendum()
writeStringBinary(proto_recv_chunked, *out);
}
if (server_revision >= DBMS_MIN_REVISION_WITH_VERSIONED_PARALLEL_REPLICAS_PROTOCOL)
writeVarUInt(DBMS_PARALLEL_REPLICAS_PROTOCOL_VERSION, *out);
out->next();
}
@ -525,6 +528,8 @@ void Connection::receiveHello(const Poco::Timespan & handshake_timeout)
readVarUInt(server_version_major, *in);
readVarUInt(server_version_minor, *in);
readVarUInt(server_revision, *in);
if (server_revision >= DBMS_MIN_REVISION_WITH_VERSIONED_PARALLEL_REPLICAS_PROTOCOL)
readVarUInt(server_parallel_replicas_protocol_version, *in);
if (server_revision >= DBMS_MIN_REVISION_WITH_SERVER_TIMEZONE)
readStringBinary(server_timezone, *in);
if (server_revision >= DBMS_MIN_REVISION_WITH_SERVER_DISPLAY_NAME)
@ -959,7 +964,7 @@ void Connection::sendReadTaskResponse(const String & response)
void Connection::sendMergeTreeReadTaskResponse(const ParallelReadResponse & response)
{
writeVarUInt(Protocol::Client::MergeTreeReadTaskResponse, *out);
response.serialize(*out);
response.serialize(*out, server_parallel_replicas_protocol_version);
out->finishChunk();
out->next();
}
@ -1413,7 +1418,7 @@ ParallelReadRequest Connection::receiveParallelReadRequest() const
InitialAllRangesAnnouncement Connection::receiveInitialParallelReadAnnouncement() const
{
return InitialAllRangesAnnouncement::deserialize(*in);
return InitialAllRangesAnnouncement::deserialize(*in, server_parallel_replicas_protocol_version);
}
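
The hunks above add a separate parallel-replicas protocol version that is only exchanged when the peer's revision is new enough. A minimal sketch of this revision-gated negotiation pattern; the threshold constant and the min() choice are illustrative assumptions, not the actual protocol values:

// Sketch of the version-gated handshake pattern used above, with illustrative
// constants and plain values instead of ClickHouse's ReadBuffer/WriteBuffer.
#include <algorithm>
#include <cstdint>
#include <iostream>

constexpr uint64_t MIN_REVISION_WITH_VERSIONED_PROTOCOL = 54470; // illustrative threshold
constexpr uint64_t MY_PROTOCOL_VERSION = 4;                      // illustrative version

struct Peer
{
    uint64_t revision = 0;
    uint64_t protocol_version = 0; // stays 0 if the peer predates the field
};

// Only exchange the extra field when the peer's revision is new enough,
// so old peers that never send or expect it keep working.
uint64_t negotiate(const Peer & peer)
{
    if (peer.revision < MIN_REVISION_WITH_VERSIONED_PROTOCOL)
        return 0; // fall back to the legacy, unversioned wire format
    return std::min(MY_PROTOCOL_VERSION, peer.protocol_version);
}

int main()
{
    std::cout << negotiate({54480, 3}) << '\n'; // both sides versioned -> use min = 3
    std::cout << negotiate({54400, 0}) << '\n'; // old peer -> 0 (legacy format)
}
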

View File

@ -210,6 +210,7 @@ private:
UInt64 server_version_minor = 0;
UInt64 server_version_patch = 0;
UInt64 server_revision = 0;
UInt64 server_parallel_replicas_protocol_version = 0;
String server_timezone;
String server_display_name;

View File

@ -816,6 +816,22 @@ void ColumnDynamic::updateHashWithValue(size_t n, SipHash & hash) const
return;
}
/// If it's not null we update hash with the type name and the actual value.
/// If the value in this row is stored in the shared variant, deserialize its type and value
/// and update the hash with them.
if (discr == getSharedVariantDiscriminator())
{
auto value = getSharedVariant().getDataAt(variant_col.offsetAt(n));
ReadBufferFromMemory buf(value.data, value.size);
auto type = decodeDataType(buf);
hash.update(type->getName());
auto tmp_column = type->createColumn();
type->getDefaultSerialization()->deserializeBinary(*tmp_column, buf, getFormatSettings());
tmp_column->updateHashWithValue(0, hash);
return;
}
hash.update(variant_info.variant_names[discr]);
variant_col.getVariantByGlobalDiscriminator(discr).updateHashWithValue(variant_col.offsetAt(n), hash);
}
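
The new branch above exists so that a value hashes identically whether it lives in its own variant column or as an encoded (type, value) blob in the shared variant. A tiny sketch of that invariant with a simplified hasher and encoding; both are stand-ins, not the ClickHouse serialization:

// Sketch of the invariant behind the branch above: a value must produce the same
// hash whether it is stored natively or as an encoded (type, value) blob.
#include <cstdint>
#include <functional>
#include <iostream>
#include <string>

struct Hasher
{
    uint64_t state = 0;
    void update(const std::string & s)
    {
        state ^= std::hash<std::string>{}(s) + 0x9e3779b97f4a7c15ULL + (state << 6) + (state >> 2);
    }
};

uint64_t hashNative(const std::string & type_name, const std::string & value)
{
    Hasher h;
    h.update(type_name); // hash the type name first...
    h.update(value);     // ...then the value itself
    return h.state;
}

uint64_t hashFromSharedBlob(const std::string & blob)
{
    // Pretend the blob is "type\0value": decode it, then hash exactly like hashNative.
    auto sep = blob.find('\0');
    return hashNative(blob.substr(0, sep), blob.substr(sep + 1));
}

int main()
{
    std::string type = "UInt64", value = "42";
    std::cout << (hashNative(type, value) == hashFromSharedBlob(type + '\0' + value)) << '\n'; // prints 1
}
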

View File

@ -47,15 +47,21 @@ ColumnObject::ColumnObject(
, statistics(statistics_)
{
typed_paths.reserve(typed_paths_.size());
sorted_typed_paths.reserve(typed_paths_.size());
for (auto & [path, column] : typed_paths_)
typed_paths[path] = std::move(column);
{
auto it = typed_paths.emplace(path, std::move(column)).first;
sorted_typed_paths.push_back(it->first);
}
std::sort(sorted_typed_paths.begin(), sorted_typed_paths.end());
dynamic_paths.reserve(dynamic_paths_.size());
dynamic_paths_ptrs.reserve(dynamic_paths_.size());
for (auto & [path, column] : dynamic_paths_)
{
dynamic_paths[path] = std::move(column);
dynamic_paths_ptrs[path] = assert_cast<ColumnDynamic *>(dynamic_paths[path].get());
auto it = dynamic_paths.emplace(path, std::move(column)).first;
dynamic_paths_ptrs[path] = assert_cast<ColumnDynamic *>(it->second.get());
sorted_dynamic_paths.insert(it->first);
}
}
@ -64,13 +70,17 @@ ColumnObject::ColumnObject(
: max_dynamic_paths(max_dynamic_paths_), global_max_dynamic_paths(max_dynamic_paths_), max_dynamic_types(max_dynamic_types_)
{
typed_paths.reserve(typed_paths_.size());
sorted_typed_paths.reserve(typed_paths_.size());
for (auto & [path, column] : typed_paths_)
{
if (!column->empty())
throw Exception(ErrorCodes::LOGICAL_ERROR, "Unexpected non-empty typed path column in ColumnObject constructor");
typed_paths[path] = std::move(column);
auto it = typed_paths.emplace(path, std::move(column)).first;
sorted_typed_paths.push_back(it->first);
}
std::sort(sorted_typed_paths.begin(), sorted_typed_paths.end());
MutableColumns paths_and_values;
paths_and_values.emplace_back(ColumnString::create());
paths_and_values.emplace_back(ColumnString::create());
@ -129,13 +139,8 @@ std::string ColumnObject::getName() const
ss << "Object(";
ss << "max_dynamic_paths=" << global_max_dynamic_paths;
ss << ", max_dynamic_types=" << max_dynamic_types;
std::vector<String> sorted_typed_paths;
sorted_typed_paths.reserve(typed_paths.size());
for (const auto & [path, column] : typed_paths)
sorted_typed_paths.push_back(path);
std::sort(sorted_typed_paths.begin(), sorted_typed_paths.end());
for (const auto & path : sorted_typed_paths)
ss << ", " << path << " " << typed_paths.at(path)->getName();
ss << ", " << path << " " << typed_paths.find(path)->second->getName();
ss << ")";
return ss.str();
}
@ -260,6 +265,7 @@ ColumnDynamic * ColumnObject::tryToAddNewDynamicPath(std::string_view path)
new_dynamic_column->insertManyDefaults(size());
auto it = dynamic_paths.emplace(path, std::move(new_dynamic_column)).first;
auto it_ptr = dynamic_paths_ptrs.emplace(path, assert_cast<ColumnDynamic *>(it->second.get())).first;
sorted_dynamic_paths.insert(it->first);
return it_ptr->second;
}
@ -288,8 +294,9 @@ void ColumnObject::setDynamicPaths(const std::vector<String> & paths)
auto new_dynamic_column = ColumnDynamic::create(max_dynamic_types);
if (size)
new_dynamic_column->insertManyDefaults(size);
dynamic_paths[path] = std::move(new_dynamic_column);
dynamic_paths_ptrs[path] = assert_cast<ColumnDynamic *>(dynamic_paths[path].get());
auto it = dynamic_paths.emplace(path, std::move(new_dynamic_column)).first;
dynamic_paths_ptrs[path] = assert_cast<ColumnDynamic *>(it->second.get());
sorted_dynamic_paths.insert(it->first);
}
}
@ -658,39 +665,61 @@ void ColumnObject::popBack(size_t n)
StringRef ColumnObject::serializeValueIntoArena(size_t n, Arena & arena, const char *& begin) const
{
StringRef res(begin, 0);
// Serialize all paths and values in binary format.
/// First serialize values from typed paths in sorted order. They are the same for all instances of this column.
for (auto path : sorted_typed_paths)
{
auto data_ref = typed_paths.find(path)->second->serializeValueIntoArena(n, arena, begin);
res.data = data_ref.data - res.size;
res.size += data_ref.size;
}
/// Second, serialize paths and values in binary format from dynamic paths and shared data, in sorted-by-path order.
/// Calculate total number of paths to serialize and write it.
const auto & shared_data_offsets = getSharedDataOffsets();
size_t offset = shared_data_offsets[static_cast<ssize_t>(n) - 1];
size_t end = shared_data_offsets[static_cast<ssize_t>(n)];
size_t num_paths = typed_paths.size() + dynamic_paths.size() + (end - offset);
size_t num_paths = (end - offset);
/// Don't serialize Nulls from dynamic paths.
for (const auto & [_, column] : dynamic_paths)
num_paths += !column->isNullAt(n);
char * pos = arena.allocContinue(sizeof(size_t), begin);
memcpy(pos, &num_paths, sizeof(size_t));
res.data = pos - res.size;
res.size += sizeof(size_t);
/// Serialize paths and values from typed paths.
for (const auto & [path, column] : typed_paths)
{
size_t path_size = path.size();
pos = arena.allocContinue(sizeof(size_t) + path_size, begin);
memcpy(pos, &path_size, sizeof(size_t));
memcpy(pos + sizeof(size_t), path.data(), path_size);
auto data_ref = column->serializeValueIntoArena(n, arena, begin);
res.data = data_ref.data - res.size - sizeof(size_t) - path_size;
res.size += data_ref.size + sizeof(size_t) + path_size;
}
/// Serialize paths and values from dynamic paths.
for (const auto & [path, column] : dynamic_paths)
{
WriteBufferFromOwnString buf;
getDynamicSerialization()->serializeBinary(*column, n, buf, getFormatSettings());
serializePathAndValueIntoArena(arena, begin, path, buf.str(), res);
}
/// Serialize paths and values from shared data.
auto dynamic_paths_it = sorted_dynamic_paths.begin();
auto [shared_data_paths, shared_data_values] = getSharedDataPathsAndValues();
for (size_t i = offset; i != end; ++i)
serializePathAndValueIntoArena(arena, begin, shared_data_paths->getDataAt(i), shared_data_values->getDataAt(i), res);
{
auto path = shared_data_paths->getDataAt(i).toView();
/// Paths in shared data are sorted. Serialize all paths from dynamic paths that go before this path in sorted order.
while (dynamic_paths_it != sorted_dynamic_paths.end() && *dynamic_paths_it < path)
{
const auto * dynamic_column = dynamic_paths_ptrs.find(*dynamic_paths_it)->second;
/// Don't serialize Nulls.
if (!dynamic_column->isNullAt(n))
{
WriteBufferFromOwnString buf;
getDynamicSerialization()->serializeBinary(*dynamic_column, n, buf, getFormatSettings());
serializePathAndValueIntoArena(arena, begin, StringRef(*dynamic_paths_it), buf.str(), res);
}
++dynamic_paths_it;
}
serializePathAndValueIntoArena(arena, begin, StringRef(path), shared_data_values->getDataAt(i), res);
}
/// Serialize all remaining paths in dynamic paths.
for (; dynamic_paths_it != sorted_dynamic_paths.end(); ++dynamic_paths_it)
{
const auto * dynamic_column = dynamic_paths_ptrs.find(*dynamic_paths_it)->second;
if (!dynamic_column->isNullAt(n))
{
WriteBufferFromOwnString buf;
getDynamicSerialization()->serializeBinary(*dynamic_column, n, buf, getFormatSettings());
serializePathAndValueIntoArena(arena, begin, StringRef(*dynamic_paths_it), buf.str(), res);
}
}
return res;
}
@ -711,70 +740,49 @@ void ColumnObject::serializePathAndValueIntoArena(DB::Arena & arena, const char
const char * ColumnObject::deserializeAndInsertFromArena(const char * pos)
{
size_t current_size = size();
/// Deserialize paths and values and insert them into typed paths, dynamic paths or shared data.
/// Serialized paths could be unsorted, so we will have to sort all paths that will be inserted into shared data.
std::vector<std::pair<std::string_view, std::string_view>> paths_and_values_for_shared_data;
/// First deserialize typed paths. They come first.
for (auto path : sorted_typed_paths)
pos = typed_paths.find(path)->second->deserializeAndInsertFromArena(pos);
/// Second deserialize all other paths and values and insert them into dynamic paths or shared data.
auto num_paths = unalignedLoad<size_t>(pos);
pos += sizeof(size_t);
const auto [shared_data_paths, shared_data_values] = getSharedDataPathsAndValues();
for (size_t i = 0; i != num_paths; ++i)
{
auto path_size = unalignedLoad<size_t>(pos);
pos += sizeof(size_t);
std::string_view path(pos, path_size);
pos += path_size;
/// Check if it's a typed path. In this case we should use
/// deserializeAndInsertFromArena of corresponding column.
if (auto typed_it = typed_paths.find(path); typed_it != typed_paths.end())
/// Deserialize binary value and try to insert it into dynamic paths or shared data.
auto value_size = unalignedLoad<size_t>(pos);
pos += sizeof(size_t);
std::string_view value(pos, value_size);
pos += value_size;
/// Check if we have this path in dynamic paths.
if (auto dynamic_it = dynamic_paths.find(path); dynamic_it != dynamic_paths.end())
{
pos = typed_it->second->deserializeAndInsertFromArena(pos);
ReadBufferFromMemory buf(value.data(), value.size());
getDynamicSerialization()->deserializeBinary(*dynamic_it->second, buf, getFormatSettings());
}
/// If it's not a typed path, deserialize binary value and try to insert it
/// to dynamic paths or shared data.
/// Try to add a new dynamic path.
else if (auto * dynamic_path_column = tryToAddNewDynamicPath(path))
{
ReadBufferFromMemory buf(value.data(), value.size());
getDynamicSerialization()->deserializeBinary(*dynamic_path_column, buf, getFormatSettings());
}
/// Limit on dynamic paths is reached, add this path to shared data.
/// Serialized paths are sorted, so we can insert right away.
else
{
auto value_size = unalignedLoad<size_t>(pos);
pos += sizeof(size_t);
std::string_view value(pos, value_size);
pos += value_size;
/// Check if we have this path in dynamic paths.
if (auto dynamic_it = dynamic_paths.find(path); dynamic_it != dynamic_paths.end())
{
ReadBufferFromMemory buf(value.data(), value.size());
getDynamicSerialization()->deserializeBinary(*dynamic_it->second, buf, getFormatSettings());
}
/// Try to add a new dynamic path.
else if (auto * dynamic_path_column = tryToAddNewDynamicPath(path))
{
ReadBufferFromMemory buf(value.data(), value.size());
getDynamicSerialization()->deserializeBinary(*dynamic_path_column, buf, getFormatSettings());
}
/// Limit on dynamic paths is reached, add this path to shared data later.
else
{
paths_and_values_for_shared_data.emplace_back(path, value);
}
shared_data_paths->insertData(path.data(), path.size());
shared_data_values->insertData(value.data(), value.size());
}
}
/// Sort and insert all paths from paths_and_values_for_shared_data into shared data.
std::sort(paths_and_values_for_shared_data.begin(), paths_and_values_for_shared_data.end());
const auto [shared_data_paths, shared_data_values] = getSharedDataPathsAndValues();
for (const auto & [path, value] : paths_and_values_for_shared_data)
{
shared_data_paths->insertData(path.data(), path.size());
shared_data_values->insertData(value.data(), value.size());
}
getSharedDataOffsets().push_back(shared_data_paths->size());
/// Insert default value in all remaining typed and dynamic paths.
for (auto & [_, column] : typed_paths)
{
if (column->size() == current_size)
column->insertDefault();
}
/// Insert default value in all remaining dynamic paths.
for (auto & [_, column] : dynamic_paths_ptrs)
{
if (column->size() == current_size)
@ -786,6 +794,11 @@ const char * ColumnObject::deserializeAndInsertFromArena(const char * pos)
const char * ColumnObject::skipSerializedInArena(const char * pos) const
{
/// First, skip all values of typed paths.
for (auto path : sorted_typed_paths)
pos = typed_paths.find(path)->second->skipSerializedInArena(pos);
/// Second, skip all other paths and values.
auto num_paths = unalignedLoad<size_t>(pos);
pos += sizeof(size_t);
for (size_t i = 0; i != num_paths; ++i)
@ -794,15 +807,8 @@ const char * ColumnObject::skipSerializedInArena(const char * pos) const
pos += sizeof(size_t);
std::string_view path(pos, path_size);
pos += path_size;
if (auto typed_it = typed_paths.find(path); typed_it != typed_paths.end())
{
pos = typed_it->second->skipSerializedInArena(pos);
}
else
{
auto value_size = unalignedLoad<size_t>(pos);
pos += sizeof(size_t) + value_size;
}
auto value_size = unalignedLoad<size_t>(pos);
pos += sizeof(size_t) + value_size;
}
return pos;
@ -810,11 +816,51 @@ const char * ColumnObject::skipSerializedInArena(const char * pos) const
void ColumnObject::updateHashWithValue(size_t n, SipHash & hash) const
{
for (const auto & [_, column] : typed_paths)
column->updateHashWithValue(n, hash);
for (const auto & [_, column] : dynamic_paths_ptrs)
column->updateHashWithValue(n, hash);
shared_data->updateHashWithValue(n, hash);
for (auto path : sorted_typed_paths)
typed_paths.find(path)->second->updateHashWithValue(n, hash);
/// The hash of the object in a row should not depend on how its paths are stored (in dynamic paths or in shared data)
/// and should be the same for equal objects. To ensure this, we update the hash with each path and its value (if not null)
/// in sorted-by-path order across both dynamic paths and shared data.
const auto [shared_data_paths, shared_data_values] = getSharedDataPathsAndValues();
const auto & shared_data_offsets = getSharedDataOffsets();
size_t start = shared_data_offsets[static_cast<ssize_t>(n) - 1];
size_t end = shared_data_offsets[static_cast<ssize_t>(n)];
auto dynamic_paths_it = sorted_dynamic_paths.begin();
for (size_t i = start; i != end; ++i)
{
auto path = shared_data_paths->getDataAt(i).toView();
/// Paths in shared data are sorted. Update hash with all paths from dynamic paths that go before this path in sorted order.
while (dynamic_paths_it != sorted_dynamic_paths.end() && *dynamic_paths_it < path)
{
const auto * dynamic_column = dynamic_paths_ptrs.find(*dynamic_paths_it)->second;
if (!dynamic_column->isNullAt(n))
{
hash.update(*dynamic_paths_it);
dynamic_column->updateHashWithValue(n, hash);
}
++dynamic_paths_it;
}
/// Deserialize value in temporary column to get its hash.
auto value = shared_data_values->getDataAt(i);
ReadBufferFromMemory buf(value.data, value.size);
auto tmp_column = ColumnDynamic::create();
getDynamicSerialization()->deserializeBinary(*tmp_column, buf, getFormatSettings());
hash.update(path);
tmp_column->updateHashWithValue(0, hash);
}
/// Iterate over all remaining paths in dynamic paths.
for (; dynamic_paths_it != sorted_dynamic_paths.end(); ++dynamic_paths_it)
{
const auto * dynamic_column = dynamic_paths_ptrs.find(*dynamic_paths_it)->second;
if (!dynamic_column->isNullAt(n))
{
hash.update(*dynamic_paths_it);
dynamic_column->updateHashWithValue(n, hash);
}
}
}
WeakHash32 ColumnObject::getWeakHash32() const
@ -1310,6 +1356,7 @@ void ColumnObject::takeDynamicStructureFromSourceColumns(const DB::Columns & sou
/// Reset current state.
dynamic_paths.clear();
dynamic_paths_ptrs.clear();
sorted_dynamic_paths.clear();
max_dynamic_paths = global_max_dynamic_paths;
Statistics new_statistics(Statistics::Source::MERGE);
@ -1328,8 +1375,9 @@ void ColumnObject::takeDynamicStructureFromSourceColumns(const DB::Columns & sou
{
if (dynamic_paths.size() < max_dynamic_paths)
{
dynamic_paths.emplace(path, ColumnDynamic::create(max_dynamic_types));
dynamic_paths_ptrs.emplace(path, assert_cast<ColumnDynamic *>(dynamic_paths.find(path)->second.get()));
auto it = dynamic_paths.emplace(path, ColumnDynamic::create(max_dynamic_types)).first;
dynamic_paths_ptrs.emplace(path, assert_cast<ColumnDynamic *>(it->second.get()));
sorted_dynamic_paths.insert(it->first);
}
/// Add all remaining paths into shared data statistics until we reach its max size.
else if (new_statistics.shared_data_paths_statistics.size() < Statistics::MAX_SHARED_DATA_STATISTICS_SIZE)
@ -1343,8 +1391,9 @@ void ColumnObject::takeDynamicStructureFromSourceColumns(const DB::Columns & sou
{
for (const auto & [path, _] : path_to_total_number_of_non_null_values)
{
dynamic_paths[path] = ColumnDynamic::create(max_dynamic_types);
dynamic_paths_ptrs[path] = assert_cast<ColumnDynamic *>(dynamic_paths[path].get());
auto it = dynamic_paths.emplace(path, ColumnDynamic::create(max_dynamic_types)).first;
dynamic_paths_ptrs[path] = assert_cast<ColumnDynamic *>(it->second.get());
sorted_dynamic_paths.insert(it->first);
}
}
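
Both serializeValueIntoArena and updateHashWithValue above walk the dynamic paths and the shared-data paths with two cursors, so the output is ordered by path no matter where a given path happens to be stored. A minimal sketch of that merge, using plain standard containers and std::nullopt as a stand-in for NULL values:

// Minimal sketch of the sorted merge used above: dynamic paths (a sorted set/map)
// and shared-data paths (already sorted per row) are walked with two cursors so
// the emitted (path, value) pairs are in global path order.
#include <iostream>
#include <map>
#include <optional>
#include <string>
#include <utility>
#include <vector>

int main()
{
    /// path -> value for dynamic paths in one row; nullopt means NULL (skipped).
    std::map<std::string, std::optional<std::string>> dynamic_paths =
        {{"a.b", "1"}, {"a.z", std::nullopt}, {"m", "2"}};
    /// (path, value) pairs stored in shared data for the same row, sorted by path.
    std::vector<std::pair<std::string, std::string>> shared_data = {{"a.c", "3"}, {"z", "4"}};

    auto dyn_it = dynamic_paths.begin();
    auto emit = [](const std::string & path, const std::string & value)
    { std::cout << path << " = " << value << '\n'; };

    for (const auto & [path, value] : shared_data)
    {
        /// Flush all dynamic paths that sort before the next shared-data path.
        for (; dyn_it != dynamic_paths.end() && dyn_it->first < path; ++dyn_it)
            if (dyn_it->second) /// don't emit NULLs
                emit(dyn_it->first, *dyn_it->second);
        emit(path, value);
    }
    /// Emit whatever dynamic paths remain after the last shared-data path.
    for (; dyn_it != dynamic_paths.end(); ++dyn_it)
        if (dyn_it->second)
            emit(dyn_it->first, *dyn_it->second);
}
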

View File

@ -238,10 +238,15 @@ private:
/// Map path -> column for paths with explicitly specified types.
/// This set of paths is constant and cannot be changed.
PathToColumnMap typed_paths;
/// Sorted list of typed paths. Used to avoid sorting paths every time in some methods.
std::vector<std::string_view> sorted_typed_paths;
/// Map path -> column for dynamically added paths. All columns
/// here are Dynamic columns. This set of paths can be extended
/// during inserts into the column.
PathToColumnMap dynamic_paths;
/// Sorted list of dynamic paths. Used to avoid sorting paths every time in some methods.
std::set<std::string_view> sorted_dynamic_paths;
/// Store and use pointers to ColumnDynamic to avoid virtual calls.
/// With hundreds of dynamic paths these virtual calls are noticeable.
PathToDynamicColumnPtrMap dynamic_paths_ptrs;
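
The sorted containers above hold std::string_view entries that point into the keys of the path maps. A small sketch of this pattern; it assumes the maps are node-based (as std::unordered_map is), so key addresses stay stable while new paths are inserted:

// Sketch of the "sorted view over map keys" pattern above. References to keys of a
// node-based map are not invalidated by insertion, so the string_views stay valid.
#include <iostream>
#include <set>
#include <string>
#include <string_view>
#include <unordered_map>

int main()
{
    std::unordered_map<std::string, int> paths;  // stand-in for path -> column
    std::set<std::string_view> sorted_paths;     // views into the map's keys

    for (std::string_view p : {"b.c", "a", "b.a"})
    {
        auto it = paths.emplace(std::string(p), 0).first; // insert, keep the iterator...
        sorted_paths.insert(it->first);                   // ...and index the stable key
    }

    for (auto p : sorted_paths)
        std::cout << p << '\n'; // a, b.a, b.c -- iteration is in path order, no re-sorting
}
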

View File

@ -64,6 +64,7 @@ static struct InitFiu
REGULAR(lazy_pipe_fds_fail_close) \
PAUSEABLE(infinite_sleep) \
PAUSEABLE(stop_moving_part_before_swap_with_active) \
REGULAR(slowdown_index_analysis) \
namespace FailPoints

View File

@ -376,6 +376,7 @@ The server successfully detected this situation and will download merged part fr
M(ParallelReplicasReadAssignedMarks, "Sum across all replicas of how many of scheduled marks were assigned by consistent hash") \
M(ParallelReplicasReadUnassignedMarks, "Sum across all replicas of how many unassigned marks were scheduled") \
M(ParallelReplicasReadAssignedForStealingMarks, "Sum across all replicas of how many of scheduled marks were assigned for stealing by consistent hash") \
M(ParallelReplicasReadMarks, "How many marks were read by the given replica") \
\
M(ParallelReplicasStealingByHashMicroseconds, "Time spent collecting segments meant for stealing by hash") \
M(ParallelReplicasProcessingPartsMicroseconds, "Time spent processing data parts") \
@ -529,6 +530,7 @@ The server successfully detected this situation and will download merged part fr
M(CachedReadBufferReadFromCacheMicroseconds, "Time reading from filesystem cache") \
M(CachedReadBufferReadFromSourceBytes, "Bytes read from filesystem cache source (from remote fs, etc)") \
M(CachedReadBufferReadFromCacheBytes, "Bytes read from filesystem cache") \
M(CachedReadBufferPredownloadedBytes, "Bytes read from filesystem cache source. Cache segments are read from left to right as a whole, so we may need to predownload a part of the segment that is irrelevant to the current task just to reach the needed data") \
M(CachedReadBufferCacheWriteBytes, "Bytes written from source (remote fs, etc) to filesystem cache") \
M(CachedReadBufferCacheWriteMicroseconds, "Time spent writing data into filesystem cache") \
M(CachedReadBufferCreateBufferMicroseconds, "Prepare buffer time") \

View File

@ -181,12 +181,6 @@ void SetACLRequest::addRootPath(const String & root_path) { Coordination::addRoo
void GetACLRequest::addRootPath(const String & root_path) { Coordination::addRootPath(path, root_path); }
void SyncRequest::addRootPath(const String & root_path) { Coordination::addRootPath(path, root_path); }
void MultiRequest::addRootPath(const String & root_path)
{
for (auto & request : requests)
request->addRootPath(root_path);
}
void CreateResponse::removeRootPath(const String & root_path) { Coordination::removeRootPath(path_created, root_path); }
void WatchResponse::removeRootPath(const String & root_path) { Coordination::removeRootPath(path, root_path); }

View File

@ -408,11 +408,17 @@ struct ReconfigResponse : virtual Response
size_t bytesSize() const override { return value.size() + sizeof(stat); }
};
template <typename T>
struct MultiRequest : virtual Request
{
Requests requests;
std::vector<T> requests;
void addRootPath(const String & root_path) override
{
for (auto & request : requests)
request->addRootPath(root_path);
}
void addRootPath(const String & root_path) override;
String getPath() const override { return {}; }
size_t bytesSize() const override

View File

@ -184,7 +184,7 @@ struct TestKeeperReconfigRequest final : ReconfigRequest, TestKeeperRequest
std::pair<ResponsePtr, Undo> process(TestKeeper::Container & container, int64_t zxid) const override;
};
struct TestKeeperMultiRequest final : MultiRequest, TestKeeperRequest
struct TestKeeperMultiRequest final : MultiRequest<RequestPtr>, TestKeeperRequest
{
explicit TestKeeperMultiRequest(const Requests & generic_requests)
: TestKeeperMultiRequest(std::span(generic_requests))

View File

@ -18,14 +18,16 @@ using namespace DB;
void ZooKeeperResponse::write(WriteBuffer & out) const
{
/// Excessive copy to calculate length.
WriteBufferFromOwnString buf;
Coordination::write(xid, buf);
Coordination::write(zxid, buf);
Coordination::write(error, buf);
auto response_size = Coordination::size(xid) + Coordination::size(zxid) + Coordination::size(error);
if (error == Error::ZOK)
writeImpl(buf);
Coordination::write(buf.str(), out);
response_size += sizeImpl();
Coordination::write(static_cast<int32_t>(response_size), out);
Coordination::write(xid, out);
Coordination::write(zxid, out);
Coordination::write(error, out);
if (error == Error::ZOK)
writeImpl(out);
}
std::string ZooKeeperRequest::toString(bool short_format) const
@ -41,12 +43,12 @@ std::string ZooKeeperRequest::toString(bool short_format) const
void ZooKeeperRequest::write(WriteBuffer & out) const
{
/// Excessive copy to calculate length.
WriteBufferFromOwnString buf;
Coordination::write(xid, buf);
Coordination::write(getOpNum(), buf);
writeImpl(buf);
Coordination::write(buf.str(), out);
auto request_size = Coordination::size(xid) + Coordination::size(getOpNum()) + sizeImpl();
Coordination::write(static_cast<int32_t>(request_size), out);
Coordination::write(xid, out);
Coordination::write(getOpNum(), out);
writeImpl(out);
}
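
The rewritten write() methods above compute the frame length via size()/sizeImpl() instead of serializing into a temporary WriteBufferFromOwnString first. A minimal sketch of this precomputed length-prefixed framing; the helpers and host byte order here are simplifications, not the actual ZooKeeper wire format:

// Sketch of the length-prefixed framing change above: the payload length is computed
// up front and the fields are then written straight to the output buffer.
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

static size_t size(int32_t) { return sizeof(int32_t); }
static size_t size(int64_t) { return sizeof(int64_t); }
static size_t size(const std::string & s) { return sizeof(int32_t) + s.size(); } // length + bytes

static void write(int32_t v, std::vector<char> & out)
{
    out.insert(out.end(), reinterpret_cast<char *>(&v), reinterpret_cast<char *>(&v) + sizeof(v)); // host byte order for simplicity
}
static void write(int64_t v, std::vector<char> & out)
{
    out.insert(out.end(), reinterpret_cast<char *>(&v), reinterpret_cast<char *>(&v) + sizeof(v));
}
static void write(const std::string & s, std::vector<char> & out)
{
    write(static_cast<int32_t>(s.size()), out);
    out.insert(out.end(), s.begin(), s.end());
}

int main()
{
    int32_t xid = 7;
    int64_t zxid = 42;
    std::string payload = "/clickhouse/path";

    /// Compute the frame length without serializing anything yet.
    auto frame_size = static_cast<int32_t>(size(xid) + size(zxid) + size(payload));

    std::vector<char> out;
    write(frame_size, out); /// 4-byte length prefix
    write(xid, out);
    write(zxid, out);
    write(payload, out);

    std::cout << out.size() << " bytes written (4-byte prefix + " << frame_size << ")\n";
}
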
void ZooKeeperSyncRequest::writeImpl(WriteBuffer & out) const
@ -54,6 +56,11 @@ void ZooKeeperSyncRequest::writeImpl(WriteBuffer & out) const
Coordination::write(path, out);
}
size_t ZooKeeperSyncRequest::sizeImpl() const
{
return Coordination::size(path);
}
void ZooKeeperSyncRequest::readImpl(ReadBuffer & in)
{
Coordination::read(path, in);
@ -74,6 +81,11 @@ void ZooKeeperSyncResponse::writeImpl(WriteBuffer & out) const
Coordination::write(path, out);
}
size_t ZooKeeperSyncResponse::sizeImpl() const
{
return Coordination::size(path);
}
void ZooKeeperReconfigRequest::writeImpl(WriteBuffer & out) const
{
Coordination::write(joining, out);
@ -82,6 +94,11 @@ void ZooKeeperReconfigRequest::writeImpl(WriteBuffer & out) const
Coordination::write(version, out);
}
size_t ZooKeeperReconfigRequest::sizeImpl() const
{
return Coordination::size(joining) + Coordination::size(leaving) + Coordination::size(new_members) + Coordination::size(version);
}
void ZooKeeperReconfigRequest::readImpl(ReadBuffer & in)
{
Coordination::read(joining, in);
@ -109,6 +126,11 @@ void ZooKeeperReconfigResponse::writeImpl(WriteBuffer & out) const
Coordination::write(stat, out);
}
size_t ZooKeeperReconfigResponse::sizeImpl() const
{
return Coordination::size(value) + Coordination::size(stat);
}
void ZooKeeperWatchResponse::readImpl(ReadBuffer & in)
{
Coordination::read(type, in);
@ -123,6 +145,11 @@ void ZooKeeperWatchResponse::writeImpl(WriteBuffer & out) const
Coordination::write(path, out);
}
size_t ZooKeeperWatchResponse::sizeImpl() const
{
return Coordination::size(type) + Coordination::size(state) + Coordination::size(path);
}
void ZooKeeperWatchResponse::write(WriteBuffer & out) const
{
if (error == Error::ZOK)
@ -137,6 +164,11 @@ void ZooKeeperAuthRequest::writeImpl(WriteBuffer & out) const
Coordination::write(data, out);
}
size_t ZooKeeperAuthRequest::sizeImpl() const
{
return Coordination::size(type) + Coordination::size(scheme) + Coordination::size(data);
}
void ZooKeeperAuthRequest::readImpl(ReadBuffer & in)
{
Coordination::read(type, in);
@ -175,6 +207,12 @@ void ZooKeeperCreateRequest::writeImpl(WriteBuffer & out) const
Coordination::write(flags, out);
}
size_t ZooKeeperCreateRequest::sizeImpl() const
{
int32_t flags = 0;
return Coordination::size(path) + Coordination::size(data) + Coordination::size(acls) + Coordination::size(flags);
}
void ZooKeeperCreateRequest::readImpl(ReadBuffer & in)
{
Coordination::read(path, in);
@ -211,12 +249,22 @@ void ZooKeeperCreateResponse::writeImpl(WriteBuffer & out) const
Coordination::write(path_created, out);
}
size_t ZooKeeperCreateResponse::sizeImpl() const
{
return Coordination::size(path_created);
}
void ZooKeeperRemoveRequest::writeImpl(WriteBuffer & out) const
{
Coordination::write(path, out);
Coordination::write(version, out);
}
size_t ZooKeeperRemoveRequest::sizeImpl() const
{
return Coordination::size(path) + Coordination::size(version);
}
std::string ZooKeeperRemoveRequest::toStringImpl(bool /*short_format*/) const
{
return fmt::format(
@ -244,6 +292,11 @@ void ZooKeeperRemoveRecursiveRequest::readImpl(ReadBuffer & in)
Coordination::read(remove_nodes_limit, in);
}
size_t ZooKeeperRemoveRecursiveRequest::sizeImpl() const
{
return Coordination::size(path) + Coordination::size(remove_nodes_limit);
}
std::string ZooKeeperRemoveRecursiveRequest::toStringImpl(bool /*short_format*/) const
{
return fmt::format(
@ -259,6 +312,11 @@ void ZooKeeperExistsRequest::writeImpl(WriteBuffer & out) const
Coordination::write(has_watch, out);
}
size_t ZooKeeperExistsRequest::sizeImpl() const
{
return Coordination::size(path) + Coordination::size(has_watch);
}
void ZooKeeperExistsRequest::readImpl(ReadBuffer & in)
{
Coordination::read(path, in);
@ -280,12 +338,22 @@ void ZooKeeperExistsResponse::writeImpl(WriteBuffer & out) const
Coordination::write(stat, out);
}
size_t ZooKeeperExistsResponse::sizeImpl() const
{
return Coordination::size(stat);
}
void ZooKeeperGetRequest::writeImpl(WriteBuffer & out) const
{
Coordination::write(path, out);
Coordination::write(has_watch, out);
}
size_t ZooKeeperGetRequest::sizeImpl() const
{
return Coordination::size(path) + Coordination::size(has_watch);
}
void ZooKeeperGetRequest::readImpl(ReadBuffer & in)
{
Coordination::read(path, in);
@ -309,6 +377,11 @@ void ZooKeeperGetResponse::writeImpl(WriteBuffer & out) const
Coordination::write(stat, out);
}
size_t ZooKeeperGetResponse::sizeImpl() const
{
return Coordination::size(data) + Coordination::size(stat);
}
void ZooKeeperSetRequest::writeImpl(WriteBuffer & out) const
{
Coordination::write(path, out);
@ -316,6 +389,11 @@ void ZooKeeperSetRequest::writeImpl(WriteBuffer & out) const
Coordination::write(version, out);
}
size_t ZooKeeperSetRequest::sizeImpl() const
{
return Coordination::size(path) + Coordination::size(data) + Coordination::size(version);
}
void ZooKeeperSetRequest::readImpl(ReadBuffer & in)
{
Coordination::read(path, in);
@ -342,12 +420,22 @@ void ZooKeeperSetResponse::writeImpl(WriteBuffer & out) const
Coordination::write(stat, out);
}
size_t ZooKeeperSetResponse::sizeImpl() const
{
return Coordination::size(stat);
}
void ZooKeeperListRequest::writeImpl(WriteBuffer & out) const
{
Coordination::write(path, out);
Coordination::write(has_watch, out);
}
size_t ZooKeeperListRequest::sizeImpl() const
{
return Coordination::size(path) + Coordination::size(has_watch);
}
void ZooKeeperListRequest::readImpl(ReadBuffer & in)
{
Coordination::read(path, in);
@ -366,6 +454,11 @@ void ZooKeeperFilteredListRequest::writeImpl(WriteBuffer & out) const
Coordination::write(static_cast<uint8_t>(list_request_type), out);
}
size_t ZooKeeperFilteredListRequest::sizeImpl() const
{
return Coordination::size(path) + Coordination::size(has_watch) + Coordination::size(static_cast<uint8_t>(list_request_type));
}
void ZooKeeperFilteredListRequest::readImpl(ReadBuffer & in)
{
Coordination::read(path, in);
@ -397,6 +490,11 @@ void ZooKeeperListResponse::writeImpl(WriteBuffer & out) const
Coordination::write(stat, out);
}
size_t ZooKeeperListResponse::sizeImpl() const
{
return Coordination::size(names) + Coordination::size(stat);
}
void ZooKeeperSimpleListResponse::readImpl(ReadBuffer & in)
{
Coordination::read(names, in);
@ -407,6 +505,11 @@ void ZooKeeperSimpleListResponse::writeImpl(WriteBuffer & out) const
Coordination::write(names, out);
}
size_t ZooKeeperSimpleListResponse::sizeImpl() const
{
return Coordination::size(names);
}
void ZooKeeperSetACLRequest::writeImpl(WriteBuffer & out) const
{
Coordination::write(path, out);
@ -414,6 +517,11 @@ void ZooKeeperSetACLRequest::writeImpl(WriteBuffer & out) const
Coordination::write(version, out);
}
size_t ZooKeeperSetACLRequest::sizeImpl() const
{
return Coordination::size(path) + Coordination::size(acls) + Coordination::size(version);
}
void ZooKeeperSetACLRequest::readImpl(ReadBuffer & in)
{
Coordination::read(path, in);
@ -431,6 +539,11 @@ void ZooKeeperSetACLResponse::writeImpl(WriteBuffer & out) const
Coordination::write(stat, out);
}
size_t ZooKeeperSetACLResponse::sizeImpl() const
{
return Coordination::size(stat);
}
void ZooKeeperSetACLResponse::readImpl(ReadBuffer & in)
{
Coordination::read(stat, in);
@ -446,6 +559,11 @@ void ZooKeeperGetACLRequest::writeImpl(WriteBuffer & out) const
Coordination::write(path, out);
}
size_t ZooKeeperGetACLRequest::sizeImpl() const
{
return Coordination::size(path);
}
std::string ZooKeeperGetACLRequest::toStringImpl(bool /*short_format*/) const
{
return fmt::format("path = {}", path);
@ -457,6 +575,11 @@ void ZooKeeperGetACLResponse::writeImpl(WriteBuffer & out) const
Coordination::write(stat, out);
}
size_t ZooKeeperGetACLResponse::sizeImpl() const
{
return Coordination::size(acl) + Coordination::size(stat);
}
void ZooKeeperGetACLResponse::readImpl(ReadBuffer & in)
{
Coordination::read(acl, in);
@ -469,6 +592,11 @@ void ZooKeeperCheckRequest::writeImpl(WriteBuffer & out) const
Coordination::write(version, out);
}
size_t ZooKeeperCheckRequest::sizeImpl() const
{
return Coordination::size(path) + Coordination::size(version);
}
void ZooKeeperCheckRequest::readImpl(ReadBuffer & in)
{
Coordination::read(path, in);
@ -494,6 +622,11 @@ void ZooKeeperErrorResponse::writeImpl(WriteBuffer & out) const
Coordination::write(error, out);
}
size_t ZooKeeperErrorResponse::sizeImpl() const
{
return Coordination::size(error);
}
void ZooKeeperMultiRequest::checkOperationType(OperationType type)
{
chassert(!operation_type.has_value() || *operation_type == type);
@ -596,6 +729,27 @@ void ZooKeeperMultiRequest::writeImpl(WriteBuffer & out) const
Coordination::write(error, out);
}
size_t ZooKeeperMultiRequest::sizeImpl() const
{
size_t total_size = 0;
for (const auto & request : requests)
{
const auto & zk_request = dynamic_cast<const ZooKeeperRequest &>(*request);
bool done = false;
int32_t error = -1;
total_size
+= Coordination::size(zk_request.getOpNum()) + Coordination::size(done) + Coordination::size(error) + zk_request.sizeImpl();
}
OpNum op_num = OpNum::Error;
bool done = true;
int32_t error = -1;
return total_size + Coordination::size(op_num) + Coordination::size(done) + Coordination::size(error);
}
void ZooKeeperMultiRequest::readImpl(ReadBuffer & in)
{
while (true)
@ -729,31 +883,54 @@ void ZooKeeperMultiResponse::writeImpl(WriteBuffer & out) const
}
}
ZooKeeperResponsePtr ZooKeeperHeartbeatRequest::makeResponse() const { return setTime(std::make_shared<ZooKeeperHeartbeatResponse>()); }
ZooKeeperResponsePtr ZooKeeperSyncRequest::makeResponse() const { return setTime(std::make_shared<ZooKeeperSyncResponse>()); }
ZooKeeperResponsePtr ZooKeeperAuthRequest::makeResponse() const { return setTime(std::make_shared<ZooKeeperAuthResponse>()); }
ZooKeeperResponsePtr ZooKeeperRemoveRequest::makeResponse() const { return setTime(std::make_shared<ZooKeeperRemoveResponse>()); }
ZooKeeperResponsePtr ZooKeeperRemoveRecursiveRequest::makeResponse() const { return setTime(std::make_shared<ZooKeeperRemoveRecursiveResponse>()); }
ZooKeeperResponsePtr ZooKeeperExistsRequest::makeResponse() const { return setTime(std::make_shared<ZooKeeperExistsResponse>()); }
ZooKeeperResponsePtr ZooKeeperGetRequest::makeResponse() const { return setTime(std::make_shared<ZooKeeperGetResponse>()); }
ZooKeeperResponsePtr ZooKeeperSetRequest::makeResponse() const { return setTime(std::make_shared<ZooKeeperSetResponse>()); }
ZooKeeperResponsePtr ZooKeeperReconfigRequest::makeResponse() const { return setTime(std::make_shared<ZooKeeperReconfigResponse>()); }
ZooKeeperResponsePtr ZooKeeperListRequest::makeResponse() const { return setTime(std::make_shared<ZooKeeperListResponse>()); }
ZooKeeperResponsePtr ZooKeeperSimpleListRequest::makeResponse() const { return setTime(std::make_shared<ZooKeeperSimpleListResponse>()); }
size_t ZooKeeperMultiResponse::sizeImpl() const
{
size_t total_size = 0;
for (const auto & response : responses)
{
const ZooKeeperResponse & zk_response = dynamic_cast<const ZooKeeperResponse &>(*response);
OpNum op_num = zk_response.getOpNum();
bool done = false;
Error op_error = zk_response.error;
total_size += Coordination::size(op_num) + Coordination::size(done) + Coordination::size(op_error);
if (op_error == Error::ZOK || op_num == OpNum::Error)
total_size += zk_response.sizeImpl();
}
/// Footer.
OpNum op_num = OpNum::Error;
bool done = true;
int32_t error_read = - 1;
return total_size + Coordination::size(op_num) + Coordination::size(done) + Coordination::size(error_read);
}
ZooKeeperResponsePtr ZooKeeperHeartbeatRequest::makeResponse() const { return std::make_shared<ZooKeeperHeartbeatResponse>(); }
ZooKeeperResponsePtr ZooKeeperSyncRequest::makeResponse() const { return std::make_shared<ZooKeeperSyncResponse>(); }
ZooKeeperResponsePtr ZooKeeperAuthRequest::makeResponse() const { return std::make_shared<ZooKeeperAuthResponse>(); }
ZooKeeperResponsePtr ZooKeeperRemoveRequest::makeResponse() const { return std::make_shared<ZooKeeperRemoveResponse>(); }
ZooKeeperResponsePtr ZooKeeperRemoveRecursiveRequest::makeResponse() const { return std::make_shared<ZooKeeperRemoveRecursiveResponse>(); }
ZooKeeperResponsePtr ZooKeeperExistsRequest::makeResponse() const { return std::make_shared<ZooKeeperExistsResponse>(); }
ZooKeeperResponsePtr ZooKeeperGetRequest::makeResponse() const { return std::make_shared<ZooKeeperGetResponse>(); }
ZooKeeperResponsePtr ZooKeeperSetRequest::makeResponse() const { return std::make_shared<ZooKeeperSetResponse>(); }
ZooKeeperResponsePtr ZooKeeperReconfigRequest::makeResponse() const { return std::make_shared<ZooKeeperReconfigResponse>(); }
ZooKeeperResponsePtr ZooKeeperListRequest::makeResponse() const { return std::make_shared<ZooKeeperListResponse>(); }
ZooKeeperResponsePtr ZooKeeperSimpleListRequest::makeResponse() const { return std::make_shared<ZooKeeperSimpleListResponse>(); }
ZooKeeperResponsePtr ZooKeeperCreateRequest::makeResponse() const
{
if (not_exists)
return setTime(std::make_shared<ZooKeeperCreateIfNotExistsResponse>());
return setTime(std::make_shared<ZooKeeperCreateResponse>());
return std::make_shared<ZooKeeperCreateIfNotExistsResponse>();
return std::make_shared<ZooKeeperCreateResponse>();
}
ZooKeeperResponsePtr ZooKeeperCheckRequest::makeResponse() const
{
if (not_exists)
return setTime(std::make_shared<ZooKeeperCheckNotExistsResponse>());
return std::make_shared<ZooKeeperCheckNotExistsResponse>();
return setTime(std::make_shared<ZooKeeperCheckResponse>());
return std::make_shared<ZooKeeperCheckResponse>();
}
ZooKeeperResponsePtr ZooKeeperMultiRequest::makeResponse() const
@ -764,11 +941,12 @@ ZooKeeperResponsePtr ZooKeeperMultiRequest::makeResponse() const
else
response = std::make_shared<ZooKeeperMultiReadResponse>(requests);
return setTime(std::move(response));
return std::move(response);
}
ZooKeeperResponsePtr ZooKeeperCloseRequest::makeResponse() const { return setTime(std::make_shared<ZooKeeperCloseResponse>()); }
ZooKeeperResponsePtr ZooKeeperSetACLRequest::makeResponse() const { return setTime(std::make_shared<ZooKeeperSetACLResponse>()); }
ZooKeeperResponsePtr ZooKeeperGetACLRequest::makeResponse() const { return setTime(std::make_shared<ZooKeeperGetACLResponse>()); }
ZooKeeperResponsePtr ZooKeeperCloseRequest::makeResponse() const { return std::make_shared<ZooKeeperCloseResponse>(); }
ZooKeeperResponsePtr ZooKeeperSetACLRequest::makeResponse() const { return std::make_shared<ZooKeeperSetACLResponse>(); }
ZooKeeperResponsePtr ZooKeeperGetACLRequest::makeResponse() const { return std::make_shared<ZooKeeperGetACLResponse>(); }
void ZooKeeperSessionIDRequest::writeImpl(WriteBuffer & out) const
{
@ -777,6 +955,11 @@ void ZooKeeperSessionIDRequest::writeImpl(WriteBuffer & out) const
Coordination::write(server_id, out);
}
size_t ZooKeeperSessionIDRequest::sizeImpl() const
{
return Coordination::size(internal_id) + Coordination::size(session_timeout_ms) + Coordination::size(server_id);
}
void ZooKeeperSessionIDRequest::readImpl(ReadBuffer & in)
{
Coordination::read(internal_id, in);
@ -803,6 +986,11 @@ void ZooKeeperSessionIDResponse::writeImpl(WriteBuffer & out) const
Coordination::write(server_id, out);
}
size_t ZooKeeperSessionIDResponse::sizeImpl() const
{
return Coordination::size(internal_id) + Coordination::size(session_id) + Coordination::size(server_id);
}
void ZooKeeperRequest::createLogElements(LogElements & elems) const
{
@ -960,40 +1148,6 @@ std::shared_ptr<ZooKeeperRequest> ZooKeeperRequest::read(ReadBuffer & in)
return request;
}
ZooKeeperRequest::~ZooKeeperRequest()
{
if (!request_created_time_ns)
return;
UInt64 elapsed_ns = clock_gettime_ns() - request_created_time_ns;
constexpr UInt64 max_request_time_ns = 1000000000ULL; /// 1 sec
if (max_request_time_ns < elapsed_ns)
{
LOG_TEST(getLogger(__PRETTY_FUNCTION__), "Processing of request xid={} took {} ms", xid, elapsed_ns / 1000000UL);
}
}
ZooKeeperResponsePtr ZooKeeperRequest::setTime(ZooKeeperResponsePtr response) const
{
if (request_created_time_ns)
{
response->response_created_time_ns = clock_gettime_ns();
}
return response;
}
ZooKeeperResponse::~ZooKeeperResponse()
{
if (!response_created_time_ns)
return;
UInt64 elapsed_ns = clock_gettime_ns() - response_created_time_ns;
constexpr UInt64 max_request_time_ns = 1000000000ULL; /// 1 sec
if (max_request_time_ns < elapsed_ns)
{
LOG_TEST(getLogger(__PRETTY_FUNCTION__), "Processing of response xid={} took {} ms", xid, elapsed_ns / 1000000UL);
}
}
ZooKeeperRequestPtr ZooKeeperRequestFactory::get(OpNum op_num) const
{
auto it = op_num_to_request.find(op_num);
@ -1015,7 +1169,6 @@ void registerZooKeeperRequest(ZooKeeperRequestFactory & factory)
factory.registerRequest(num, []
{
auto res = std::make_shared<RequestT>();
res->request_created_time_ns = clock_gettime_ns();
if constexpr (num == OpNum::MultiRead)
res->operation_type = ZooKeeperMultiRequest::OperationType::Read;

View File

@ -7,13 +7,11 @@
#include <boost/noncopyable.hpp>
#include <IO/ReadBuffer.h>
#include <IO/WriteBuffer.h>
#include <unordered_map>
#include <vector>
#include <memory>
#include <cstdint>
#include <optional>
#include <functional>
#include <span>
namespace Coordination
@ -25,13 +23,11 @@ struct ZooKeeperResponse : virtual Response
{
XID xid = 0;
UInt64 response_created_time_ns = 0;
ZooKeeperResponse() = default;
ZooKeeperResponse(const ZooKeeperResponse &) = default;
~ZooKeeperResponse() override;
virtual void readImpl(ReadBuffer &) = 0;
virtual void writeImpl(WriteBuffer &) const = 0;
virtual size_t sizeImpl() const = 0;
virtual void write(WriteBuffer & out) const;
virtual OpNum getOpNum() const = 0;
virtual void fillLogElements(LogElements & elems, size_t idx) const;
@ -51,13 +47,11 @@ struct ZooKeeperRequest : virtual Request
bool restored_from_zookeeper_log = false;
UInt64 request_created_time_ns = 0;
UInt64 thread_id = 0;
String query_id;
ZooKeeperRequest() = default;
ZooKeeperRequest(const ZooKeeperRequest &) = default;
~ZooKeeperRequest() override;
virtual OpNum getOpNum() const = 0;
@ -66,6 +60,7 @@ struct ZooKeeperRequest : virtual Request
std::string toString(bool short_format = false) const;
virtual void writeImpl(WriteBuffer &) const = 0;
virtual size_t sizeImpl() const = 0;
virtual void readImpl(ReadBuffer &) = 0;
virtual std::string toStringImpl(bool /*short_format*/) const { return ""; }
@ -73,7 +68,6 @@ struct ZooKeeperRequest : virtual Request
static std::shared_ptr<ZooKeeperRequest> read(ReadBuffer & in);
virtual ZooKeeperResponsePtr makeResponse() const = 0;
ZooKeeperResponsePtr setTime(ZooKeeperResponsePtr response) const;
virtual bool isReadRequest() const = 0;
virtual void createLogElements(LogElements & elems) const;
@ -86,6 +80,7 @@ struct ZooKeeperHeartbeatRequest final : ZooKeeperRequest
String getPath() const override { return {}; }
OpNum getOpNum() const override { return OpNum::Heartbeat; }
void writeImpl(WriteBuffer &) const override {}
size_t sizeImpl() const override { return 0; }
void readImpl(ReadBuffer &) override {}
ZooKeeperResponsePtr makeResponse() const override;
bool isReadRequest() const override { return false; }
@ -97,6 +92,7 @@ struct ZooKeeperSyncRequest final : ZooKeeperRequest
String getPath() const override { return path; }
OpNum getOpNum() const override { return OpNum::Sync; }
void writeImpl(WriteBuffer & out) const override;
size_t sizeImpl() const override;
void readImpl(ReadBuffer & in) override;
std::string toStringImpl(bool short_format) const override;
ZooKeeperResponsePtr makeResponse() const override;
@ -109,6 +105,7 @@ struct ZooKeeperSyncResponse final : SyncResponse, ZooKeeperResponse
{
void readImpl(ReadBuffer & in) override;
void writeImpl(WriteBuffer & out) const override;
size_t sizeImpl() const override;
OpNum getOpNum() const override { return OpNum::Sync; }
};
@ -122,6 +119,7 @@ struct ZooKeeperReconfigRequest final : ZooKeeperRequest
String getPath() const override { return keeper_config_path; }
OpNum getOpNum() const override { return OpNum::Reconfig; }
void writeImpl(WriteBuffer & out) const override;
size_t sizeImpl() const override;
void readImpl(ReadBuffer & in) override;
std::string toStringImpl(bool short_format) const override;
ZooKeeperResponsePtr makeResponse() const override;
@ -138,6 +136,7 @@ struct ZooKeeperReconfigResponse final : ReconfigResponse, ZooKeeperResponse
{
void readImpl(ReadBuffer & in) override;
void writeImpl(WriteBuffer & out) const override;
size_t sizeImpl() const override;
OpNum getOpNum() const override { return OpNum::Reconfig; }
};
@ -145,6 +144,7 @@ struct ZooKeeperHeartbeatResponse final : ZooKeeperResponse
{
void readImpl(ReadBuffer &) override {}
void writeImpl(WriteBuffer &) const override {}
size_t sizeImpl() const override { return 0; }
OpNum getOpNum() const override { return OpNum::Heartbeat; }
};
@ -153,6 +153,7 @@ struct ZooKeeperWatchResponse final : WatchResponse, ZooKeeperResponse
void readImpl(ReadBuffer & in) override;
void writeImpl(WriteBuffer & out) const override;
size_t sizeImpl() const override;
void write(WriteBuffer & out) const override;
@ -175,6 +176,7 @@ struct ZooKeeperAuthRequest final : ZooKeeperRequest
String getPath() const override { return {}; }
OpNum getOpNum() const override { return OpNum::Auth; }
void writeImpl(WriteBuffer & out) const override;
size_t sizeImpl() const override;
void readImpl(ReadBuffer & in) override;
std::string toStringImpl(bool short_format) const override;
@ -189,6 +191,7 @@ struct ZooKeeperAuthResponse final : ZooKeeperResponse
{
void readImpl(ReadBuffer &) override {}
void writeImpl(WriteBuffer &) const override {}
size_t sizeImpl() const override { return 0; }
OpNum getOpNum() const override { return OpNum::Auth; }
@ -200,6 +203,7 @@ struct ZooKeeperCloseRequest final : ZooKeeperRequest
String getPath() const override { return {}; }
OpNum getOpNum() const override { return OpNum::Close; }
void writeImpl(WriteBuffer &) const override {}
size_t sizeImpl() const override { return 0; }
void readImpl(ReadBuffer &) override {}
ZooKeeperResponsePtr makeResponse() const override;
@ -214,6 +218,7 @@ struct ZooKeeperCloseResponse final : ZooKeeperResponse
}
void writeImpl(WriteBuffer &) const override {}
size_t sizeImpl() const override { return 0; }
OpNum getOpNum() const override { return OpNum::Close; }
};
@ -228,6 +233,7 @@ struct ZooKeeperCreateRequest final : public CreateRequest, ZooKeeperRequest
OpNum getOpNum() const override { return not_exists ? OpNum::CreateIfNotExists : OpNum::Create; }
void writeImpl(WriteBuffer & out) const override;
size_t sizeImpl() const override;
void readImpl(ReadBuffer & in) override;
std::string toStringImpl(bool short_format) const override;
@ -244,6 +250,7 @@ struct ZooKeeperCreateResponse : CreateResponse, ZooKeeperResponse
void readImpl(ReadBuffer & in) override;
void writeImpl(WriteBuffer & out) const override;
size_t sizeImpl() const override;
OpNum getOpNum() const override { return OpNum::Create; }
@ -265,6 +272,7 @@ struct ZooKeeperRemoveRequest final : RemoveRequest, ZooKeeperRequest
OpNum getOpNum() const override { return OpNum::Remove; }
void writeImpl(WriteBuffer & out) const override;
size_t sizeImpl() const override;
void readImpl(ReadBuffer & in) override;
std::string toStringImpl(bool short_format) const override;
@ -280,6 +288,7 @@ struct ZooKeeperRemoveResponse final : RemoveResponse, ZooKeeperResponse
{
void readImpl(ReadBuffer &) override {}
void writeImpl(WriteBuffer &) const override {}
size_t sizeImpl() const override { return 0; }
OpNum getOpNum() const override { return OpNum::Remove; }
size_t bytesSize() const override { return RemoveResponse::bytesSize() + sizeof(xid) + sizeof(zxid); }
@ -293,6 +302,7 @@ struct ZooKeeperRemoveRecursiveRequest final : RemoveRecursiveRequest, ZooKeeper
OpNum getOpNum() const override { return OpNum::RemoveRecursive; }
void writeImpl(WriteBuffer & out) const override;
void readImpl(ReadBuffer & in) override;
size_t sizeImpl() const override;
std::string toStringImpl(bool short_format) const override;
ZooKeeperResponsePtr makeResponse() const override;
@ -305,6 +315,7 @@ struct ZooKeeperRemoveRecursiveResponse : RemoveRecursiveResponse, ZooKeeperResp
{
void readImpl(ReadBuffer &) override {}
void writeImpl(WriteBuffer &) const override {}
size_t sizeImpl() const override { return 0; }
OpNum getOpNum() const override { return OpNum::RemoveRecursive; }
size_t bytesSize() const override { return RemoveRecursiveResponse::bytesSize() + sizeof(xid) + sizeof(zxid); }
@ -317,6 +328,7 @@ struct ZooKeeperExistsRequest final : ExistsRequest, ZooKeeperRequest
OpNum getOpNum() const override { return OpNum::Exists; }
void writeImpl(WriteBuffer & out) const override;
size_t sizeImpl() const override;
void readImpl(ReadBuffer & in) override;
std::string toStringImpl(bool short_format) const override;
@ -330,6 +342,7 @@ struct ZooKeeperExistsResponse final : ExistsResponse, ZooKeeperResponse
{
void readImpl(ReadBuffer & in) override;
void writeImpl(WriteBuffer & out) const override;
size_t sizeImpl() const override;
OpNum getOpNum() const override { return OpNum::Exists; }
size_t bytesSize() const override { return ExistsResponse::bytesSize() + sizeof(xid) + sizeof(zxid); }
@ -344,6 +357,7 @@ struct ZooKeeperGetRequest final : GetRequest, ZooKeeperRequest
OpNum getOpNum() const override { return OpNum::Get; }
void writeImpl(WriteBuffer & out) const override;
size_t sizeImpl() const override;
void readImpl(ReadBuffer & in) override;
std::string toStringImpl(bool short_format) const override;
@ -357,6 +371,7 @@ struct ZooKeeperGetResponse final : GetResponse, ZooKeeperResponse
{
void readImpl(ReadBuffer & in) override;
void writeImpl(WriteBuffer & out) const override;
size_t sizeImpl() const override;
OpNum getOpNum() const override { return OpNum::Get; }
size_t bytesSize() const override { return GetResponse::bytesSize() + sizeof(xid) + sizeof(zxid); }
@ -371,6 +386,7 @@ struct ZooKeeperSetRequest final : SetRequest, ZooKeeperRequest
OpNum getOpNum() const override { return OpNum::Set; }
void writeImpl(WriteBuffer & out) const override;
size_t sizeImpl() const override;
void readImpl(ReadBuffer & in) override;
std::string toStringImpl(bool short_format) const override;
ZooKeeperResponsePtr makeResponse() const override;
@ -385,6 +401,7 @@ struct ZooKeeperSetResponse final : SetResponse, ZooKeeperResponse
{
void readImpl(ReadBuffer & in) override;
void writeImpl(WriteBuffer & out) const override;
size_t sizeImpl() const override;
OpNum getOpNum() const override { return OpNum::Set; }
size_t bytesSize() const override { return SetResponse::bytesSize() + sizeof(xid) + sizeof(zxid); }
@ -399,6 +416,7 @@ struct ZooKeeperListRequest : ListRequest, ZooKeeperRequest
OpNum getOpNum() const override { return OpNum::List; }
void writeImpl(WriteBuffer & out) const override;
size_t sizeImpl() const override;
void readImpl(ReadBuffer & in) override;
std::string toStringImpl(bool short_format) const override;
ZooKeeperResponsePtr makeResponse() const override;
@ -419,6 +437,7 @@ struct ZooKeeperFilteredListRequest final : ZooKeeperListRequest
OpNum getOpNum() const override { return OpNum::FilteredList; }
void writeImpl(WriteBuffer & out) const override;
size_t sizeImpl() const override;
void readImpl(ReadBuffer & in) override;
std::string toStringImpl(bool short_format) const override;
@ -429,6 +448,7 @@ struct ZooKeeperListResponse : ListResponse, ZooKeeperResponse
{
void readImpl(ReadBuffer & in) override;
void writeImpl(WriteBuffer & out) const override;
size_t sizeImpl() const override;
OpNum getOpNum() const override { return OpNum::List; }
size_t bytesSize() const override { return ListResponse::bytesSize() + sizeof(xid) + sizeof(zxid); }
@ -440,6 +460,7 @@ struct ZooKeeperSimpleListResponse final : ZooKeeperListResponse
{
void readImpl(ReadBuffer & in) override;
void writeImpl(WriteBuffer & out) const override;
size_t sizeImpl() const override;
OpNum getOpNum() const override { return OpNum::SimpleList; }
size_t bytesSize() const override { return ZooKeeperListResponse::bytesSize() - sizeof(stat); }
@ -452,6 +473,7 @@ struct ZooKeeperCheckRequest : CheckRequest, ZooKeeperRequest
OpNum getOpNum() const override { return not_exists ? OpNum::CheckNotExists : OpNum::Check; }
void writeImpl(WriteBuffer & out) const override;
size_t sizeImpl() const override;
void readImpl(ReadBuffer & in) override;
std::string toStringImpl(bool short_format) const override;
@ -467,6 +489,7 @@ struct ZooKeeperCheckResponse : CheckResponse, ZooKeeperResponse
{
void readImpl(ReadBuffer &) override {}
void writeImpl(WriteBuffer &) const override {}
size_t sizeImpl() const override { return 0; }
OpNum getOpNum() const override { return OpNum::Check; }
size_t bytesSize() const override { return CheckResponse::bytesSize() + sizeof(xid) + sizeof(zxid); }
@ -483,6 +506,7 @@ struct ZooKeeperErrorResponse final : ErrorResponse, ZooKeeperResponse
{
void readImpl(ReadBuffer & in) override;
void writeImpl(WriteBuffer & out) const override;
size_t sizeImpl() const override;
OpNum getOpNum() const override { return OpNum::Error; }
@ -493,6 +517,7 @@ struct ZooKeeperSetACLRequest final : SetACLRequest, ZooKeeperRequest
{
OpNum getOpNum() const override { return OpNum::SetACL; }
void writeImpl(WriteBuffer & out) const override;
size_t sizeImpl() const override;
void readImpl(ReadBuffer & in) override;
std::string toStringImpl(bool short_format) const override;
ZooKeeperResponsePtr makeResponse() const override;
@ -505,6 +530,7 @@ struct ZooKeeperSetACLResponse final : SetACLResponse, ZooKeeperResponse
{
void readImpl(ReadBuffer & in) override;
void writeImpl(WriteBuffer & out) const override;
size_t sizeImpl() const override;
OpNum getOpNum() const override { return OpNum::SetACL; }
size_t bytesSize() const override { return SetACLResponse::bytesSize() + sizeof(xid) + sizeof(zxid); }
@ -514,6 +540,7 @@ struct ZooKeeperGetACLRequest final : GetACLRequest, ZooKeeperRequest
{
OpNum getOpNum() const override { return OpNum::GetACL; }
void writeImpl(WriteBuffer & out) const override;
size_t sizeImpl() const override;
void readImpl(ReadBuffer & in) override;
std::string toStringImpl(bool short_format) const override;
ZooKeeperResponsePtr makeResponse() const override;
@ -526,12 +553,13 @@ struct ZooKeeperGetACLResponse final : GetACLResponse, ZooKeeperResponse
{
void readImpl(ReadBuffer & in) override;
void writeImpl(WriteBuffer & out) const override;
size_t sizeImpl() const override;
OpNum getOpNum() const override { return OpNum::GetACL; }
size_t bytesSize() const override { return GetACLResponse::bytesSize() + sizeof(xid) + sizeof(zxid); }
};
struct ZooKeeperMultiRequest final : MultiRequest, ZooKeeperRequest
struct ZooKeeperMultiRequest final : MultiRequest<ZooKeeperRequestPtr>, ZooKeeperRequest
{
OpNum getOpNum() const override;
ZooKeeperMultiRequest() = default;
@ -540,6 +568,7 @@ struct ZooKeeperMultiRequest final : MultiRequest, ZooKeeperRequest
ZooKeeperMultiRequest(std::span<const Coordination::RequestPtr> generic_requests, const ACLs & default_acls);
void writeImpl(WriteBuffer & out) const override;
size_t sizeImpl() const override;
void readImpl(ReadBuffer & in) override;
std::string toStringImpl(bool short_format) const override;
@ -563,12 +592,14 @@ private:
struct ZooKeeperMultiResponse : MultiResponse, ZooKeeperResponse
{
explicit ZooKeeperMultiResponse(const Requests & requests)
ZooKeeperMultiResponse() = default;
explicit ZooKeeperMultiResponse(const std::vector<ZooKeeperRequestPtr> & requests)
{
responses.reserve(requests.size());
for (const auto & request : requests)
responses.emplace_back(dynamic_cast<const ZooKeeperRequest &>(*request).makeResponse());
responses.emplace_back(request->makeResponse());
}
explicit ZooKeeperMultiResponse(const Responses & responses_)
@ -579,6 +610,7 @@ struct ZooKeeperMultiResponse : MultiResponse, ZooKeeperResponse
void readImpl(ReadBuffer & in) override;
void writeImpl(WriteBuffer & out) const override;
size_t sizeImpl() const override;
size_t bytesSize() const override { return MultiResponse::bytesSize() + sizeof(xid) + sizeof(zxid); }
@ -609,6 +641,7 @@ struct ZooKeeperSessionIDRequest final : ZooKeeperRequest
Coordination::OpNum getOpNum() const override { return OpNum::SessionID; }
String getPath() const override { return {}; }
void writeImpl(WriteBuffer & out) const override;
size_t sizeImpl() const override;
void readImpl(ReadBuffer & in) override;
Coordination::ZooKeeperResponsePtr makeResponse() const override;
@ -627,6 +660,7 @@ struct ZooKeeperSessionIDResponse final : ZooKeeperResponse
void readImpl(ReadBuffer & in) override;
void writeImpl(WriteBuffer & out) const override;
size_t sizeImpl() const override;
Coordination::OpNum getOpNum() const override { return OpNum::SessionID; }
};

View File

@ -42,6 +42,32 @@ void write(const Error & x, WriteBuffer & out)
write(static_cast<int32_t>(x), out);
}
size_t size(OpNum x)
{
return size(static_cast<int32_t>(x));
}
size_t size(const std::string & s)
{
return size(static_cast<int32_t>(s.size())) + s.size();
}
size_t size(const ACL & acl)
{
return size(acl.permissions) + size(acl.scheme) + size(acl.id);
}
size_t size(const Stat & stat)
{
return size(stat.czxid) + size(stat.mzxid) + size(stat.ctime) + size(stat.mtime) + size(stat.version) + size(stat.cversion)
+ size(stat.aversion) + size(stat.ephemeralOwner) + size(stat.dataLength) + size(stat.numChildren) + size(stat.pzxid);
}
size_t size(const Error & x)
{
return size(static_cast<int32_t>(x));
}
void read(OpNum & x, ReadBuffer & in)
{
int32_t raw_op_num;

View File

@ -43,6 +43,36 @@ void write(const std::vector<T> & arr, WriteBuffer & out)
write(elem, out);
}
template <typename T>
requires is_arithmetic_v<T>
size_t size(T x)
{
return sizeof(x);
}
size_t size(OpNum x);
size_t size(const std::string & s);
size_t size(const ACL & acl);
size_t size(const Stat & stat);
size_t size(const Error & x);
template <size_t N>
size_t size(const std::array<char, N>)
{
return size(static_cast<int32_t>(N)) + N;
}
template <typename T>
size_t size(const std::vector<T> & arr)
{
size_t total_size = size(static_cast<int32_t>(arr.size()));
for (const auto & elem : arr)
total_size += size(elem);
return total_size;
}
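These size() overloads mirror the write() overloads, so a request's newly added sizeImpl() can precompute its serialized length without writing anything. A minimal sketch of how they are meant to compose; the payload struct, its fields and the include path are hypothetical, used only for illustration and are not part of this changeset:
#include <cstdint>
#include <string>
#include <Common/ZooKeeper/ZooKeeperIO.h>
/// Hypothetical payload: sizeImpl() must stay field-for-field in sync with writeImpl().
struct ExamplePayload
{
    std::string path;
    int32_t version = -1;
    void writeImpl(DB::WriteBuffer & out) const
    {
        Coordination::write(path, out);
        Coordination::write(version, out);
    }
    size_t sizeImpl() const
    {
        return Coordination::size(path) + Coordination::size(version);
    }
};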
template <typename T>
requires is_arithmetic_v<T>
void read(T & x, ReadBuffer & in)

View File

@ -1,4 +1,4 @@
clickhouse_add_executable(integer_hash_tables_and_hashes integer_hash_tables_and_hashes.cpp)
clickhouse_add_executable(integer_hash_tables_and_hashes integer_hash_tables_and_hashes.cpp orc_string_dictionary.cpp)
target_link_libraries (integer_hash_tables_and_hashes PRIVATE
ch_contrib::gbenchmark_all
dbms
@ -7,3 +7,8 @@ target_link_libraries (integer_hash_tables_and_hashes PRIVATE
ch_contrib::wyhash
ch_contrib::farmhash
ch_contrib::xxHash)
clickhouse_add_executable(orc_string_dictionary orc_string_dictionary.cpp)
target_link_libraries (orc_string_dictionary PRIVATE
ch_contrib::gbenchmark_all
dbms)

View File

@ -0,0 +1,311 @@
#include <algorithm>
#include <cstdint>
#include <cstdlib>
#include <cstring>
#include <map>
#include <memory>
#include <string>
#include <unordered_map>
#include <vector>
#include <base/defines.h>
#include <benchmark/benchmark.h>
class OldSortedStringDictionary
{
public:
struct DictEntry
{
DictEntry(const char * str, size_t len) : data(str), length(len) { }
const char * data;
size_t length;
};
OldSortedStringDictionary() : totalLength(0) { }
// insert a new string into dictionary, return its insertion order
size_t insert(const char * str, size_t len);
// reorder input index buffer from insertion order to dictionary order
void reorder(std::vector<int64_t> & idxBuffer) const;
// get dict entries in insertion order
void getEntriesInInsertionOrder(std::vector<const DictEntry *> &) const;
size_t size() const;
// return total length of strings in the dictionary
uint64_t length() const;
void clear();
// store indexes of insertion order in the dictionary for not-null rows
std::vector<int64_t> idxInDictBuffer;
private:
struct LessThan
{
bool operator()(const DictEntry & left, const DictEntry & right) const
{
int ret = memcmp(left.data, right.data, std::min(left.length, right.length));
if (ret != 0)
{
return ret < 0;
}
return left.length < right.length;
}
};
std::map<DictEntry, size_t, LessThan> dict;
std::vector<std::vector<char>> data;
uint64_t totalLength;
};
// insert a new string into dictionary, return its insertion order
size_t OldSortedStringDictionary::insert(const char * str, size_t len)
{
auto ret = dict.insert({DictEntry(str, len), dict.size()});
if (ret.second)
{
// make a copy to internal storage
data.push_back(std::vector<char>(len));
memcpy(data.back().data(), str, len);
// update dictionary entry to link pointer to internal storage
DictEntry * entry = const_cast<DictEntry *>(&(ret.first->first));
entry->data = data.back().data();
totalLength += len;
}
return ret.first->second;
}
/**
* Reorder input index buffer from insertion order to dictionary order.
*
* We require this function because string values are buffered by indexes
* in their insertion order. Only once the entire dictionary is complete can
* we obtain their sorted indexes, since the ORC specification demands that
* the dictionary be ordered. Therefore this function transforms the indexes
* from insertion order to dictionary value order for the final output.
*/
void OldSortedStringDictionary::reorder(std::vector<int64_t> & idxBuffer) const
{
// iterate the dictionary to get mapping from insertion order to value order
std::vector<size_t> mapping(dict.size());
size_t dictIdx = 0;
for (auto it = dict.cbegin(); it != dict.cend(); ++it)
{
mapping[it->second] = dictIdx++;
}
// do the transformation
for (size_t i = 0; i != idxBuffer.size(); ++i)
{
idxBuffer[i] = static_cast<int64_t>(mapping[static_cast<size_t>(idxBuffer[i])]);
}
}
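A worked example of the reorder() contract above, with illustrative values (the helper below is a sketch, is never called by the benchmark, and is not part of this changeset):
[[maybe_unused]] static void exampleReorderUsage()
{
    OldSortedStringDictionary dict;
    std::vector<int64_t> idx;
    idx.push_back(static_cast<int64_t>(dict.insert("banana", 6))); /// insertion index 0
    idx.push_back(static_cast<int64_t>(dict.insert("apple", 5)));  /// insertion index 1
    idx.push_back(static_cast<int64_t>(dict.insert("banana", 6))); /// already present, still 0
    /// idx == {0, 1, 0}; the sorted dictionary order is apple -> 0, banana -> 1.
    dict.reorder(idx);
    /// idx == {1, 0, 1}: every buffered index now refers to the sorted position.
}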
// get dict entries in insertion order
void OldSortedStringDictionary::getEntriesInInsertionOrder(std::vector<const DictEntry *> & entries) const
{
entries.resize(dict.size());
for (auto it = dict.cbegin(); it != dict.cend(); ++it)
{
entries[it->second] = &(it->first);
}
}
// return count of entries
size_t OldSortedStringDictionary::size() const
{
return dict.size();
}
// return total length of strings in the dictionary
uint64_t OldSortedStringDictionary::length() const
{
return totalLength;
}
void OldSortedStringDictionary::clear()
{
totalLength = 0;
data.clear();
dict.clear();
}
/**
* Implementation of increasing sorted string dictionary
*/
class NewSortedStringDictionary
{
public:
struct DictEntry
{
DictEntry(const char * str, size_t len) : data(str), length(len) { }
const char * data;
size_t length;
};
struct DictEntryWithIndex
{
DictEntryWithIndex(const char * str, size_t len, size_t index_) : entry(str, len), index(index_) { }
DictEntry entry;
size_t index;
};
NewSortedStringDictionary() : totalLength_(0) { }
// insert a new string into dictionary, return its insertion order
size_t insert(const char * str, size_t len);
// reorder input index buffer from insertion order to dictionary order
void reorder(std::vector<int64_t> & idxBuffer) const;
// get dict entries in insertion order
void getEntriesInInsertionOrder(std::vector<const DictEntry *> &) const;
// return count of entries
size_t size() const;
// return total length of strings in the dictionary
uint64_t length() const;
void clear();
// store indexes of insertion order in the dictionary for not-null rows
std::vector<int64_t> idxInDictBuffer;
private:
struct LessThan
{
bool operator()(const DictEntryWithIndex & l, const DictEntryWithIndex & r)
{
const auto & left = l.entry;
const auto & right = r.entry;
int ret = memcmp(left.data, right.data, std::min(left.length, right.length));
if (ret != 0)
{
return ret < 0;
}
return left.length < right.length;
}
};
mutable std::vector<DictEntryWithIndex> flatDict_;
std::unordered_map<std::string, size_t> keyToIndex;
uint64_t totalLength_;
};
// insert a new string into dictionary, return its insertion order
size_t NewSortedStringDictionary::insert(const char * str, size_t len)
{
size_t index = flatDict_.size();
auto ret = keyToIndex.emplace(std::string(str, len), index);
if (ret.second)
{
flatDict_.emplace_back(ret.first->first.data(), ret.first->first.size(), index);
totalLength_ += len;
}
return ret.first->second;
}
/**
* Reorder input index buffer from insertion order to dictionary order.
*
* We require this function because string values are buffered by indexes
* in their insertion order. Only once the entire dictionary is complete can
* we obtain their sorted indexes, since the ORC specification demands that
* the dictionary be ordered. Therefore this function transforms the indexes
* from insertion order to dictionary value order for the final output.
*/
void NewSortedStringDictionary::reorder(std::vector<int64_t> & idxBuffer) const
{
// iterate the dictionary to get mapping from insertion order to value order
std::vector<size_t> mapping(flatDict_.size());
for (size_t i = 0; i < flatDict_.size(); ++i)
{
mapping[flatDict_[i].index] = i;
}
// do the transformation
for (size_t i = 0; i != idxBuffer.size(); ++i)
{
idxBuffer[i] = static_cast<int64_t>(mapping[static_cast<size_t>(idxBuffer[i])]);
}
}
// get dict entries in insertion order
void NewSortedStringDictionary::getEntriesInInsertionOrder(std::vector<const DictEntry *> & entries) const
{
std::sort(
flatDict_.begin(),
flatDict_.end(),
[](const DictEntryWithIndex & left, const DictEntryWithIndex & right) { return left.index < right.index; });
entries.resize(flatDict_.size());
for (size_t i = 0; i < flatDict_.size(); ++i)
{
entries[i] = &(flatDict_[i].entry);
}
}
// return count of entries
size_t NewSortedStringDictionary::size() const
{
return flatDict_.size();
}
// return total length of strings in the dictionary
uint64_t NewSortedStringDictionary::length() const
{
return totalLength_;
}
void NewSortedStringDictionary::clear()
{
totalLength_ = 0;
keyToIndex.clear();
flatDict_.clear();
}
template <size_t cardinality>
static std::vector<std::string> mockStrings()
{
std::vector<std::string> res(1000000);
for (auto & s : res)
{
s = "test string dictionary " + std::to_string(rand() % cardinality);
}
return res;
}
template <typename DictionaryImpl>
static NO_INLINE std::unique_ptr<DictionaryImpl> createAndWriteStringDictionary(const std::vector<std::string> & strs)
{
auto dict = std::make_unique<DictionaryImpl>();
for (const auto & str : strs)
{
auto index = dict->insert(str.data(), str.size());
dict->idxInDictBuffer.push_back(index);
}
dict->reorder(dict->idxInDictBuffer);
return dict;
}
template <typename DictionaryImpl, size_t cardinality>
static void BM_writeStringDictionary(benchmark::State & state)
{
auto strs = mockStrings<cardinality>();
for (auto _ : state)
{
auto dict = createAndWriteStringDictionary<DictionaryImpl>(strs);
benchmark::DoNotOptimize(dict);
}
}
BENCHMARK_TEMPLATE(BM_writeStringDictionary, OldSortedStringDictionary, 10);
BENCHMARK_TEMPLATE(BM_writeStringDictionary, NewSortedStringDictionary, 10);
BENCHMARK_TEMPLATE(BM_writeStringDictionary, OldSortedStringDictionary, 100);
BENCHMARK_TEMPLATE(BM_writeStringDictionary, NewSortedStringDictionary, 100);
BENCHMARK_TEMPLATE(BM_writeStringDictionary, OldSortedStringDictionary, 1000);
BENCHMARK_TEMPLATE(BM_writeStringDictionary, NewSortedStringDictionary, 1000);
BENCHMARK_TEMPLATE(BM_writeStringDictionary, OldSortedStringDictionary, 10000);
BENCHMARK_TEMPLATE(BM_writeStringDictionary, NewSortedStringDictionary, 10000);
BENCHMARK_TEMPLATE(BM_writeStringDictionary, OldSortedStringDictionary, 100000);
BENCHMARK_TEMPLATE(BM_writeStringDictionary, NewSortedStringDictionary, 100000);

View File

@ -45,6 +45,7 @@ uint64_t ACLMap::convertACLs(const Coordination::ACLs & acls)
if (acls.empty())
return 0;
std::lock_guard lock(map_mutex);
if (acl_to_num.contains(acls))
return acl_to_num[acls];
@ -62,6 +63,7 @@ Coordination::ACLs ACLMap::convertNumber(uint64_t acls_id) const
if (acls_id == 0)
return Coordination::ACLs{};
std::lock_guard lock(map_mutex);
if (!num_to_acl.contains(acls_id))
throw Exception(ErrorCodes::LOGICAL_ERROR, "Unknown ACL id {}. It's a bug", acls_id);
@ -70,6 +72,7 @@ Coordination::ACLs ACLMap::convertNumber(uint64_t acls_id) const
void ACLMap::addMapping(uint64_t acls_id, const Coordination::ACLs & acls)
{
std::lock_guard lock(map_mutex);
num_to_acl[acls_id] = acls;
acl_to_num[acls] = acls_id;
max_acl_id = std::max(acls_id + 1, max_acl_id); /// max_acl_id points to the next free slot
@ -77,11 +80,13 @@ void ACLMap::addMapping(uint64_t acls_id, const Coordination::ACLs & acls)
void ACLMap::addUsage(uint64_t acl_id)
{
std::lock_guard lock(map_mutex);
usage_counter[acl_id]++;
}
void ACLMap::removeUsage(uint64_t acl_id)
{
std::lock_guard lock(map_mutex);
if (!usage_counter.contains(acl_id))
return;

View File

@ -32,6 +32,8 @@ private:
NumToACLMap num_to_acl;
UsageCounter usage_counter;
uint64_t max_acl_id{1};
mutable std::mutex map_mutex;
public:
/// Convert ACL to number. If it's a new ACL, add it to the map

View File

@ -301,11 +301,13 @@ String MonitorCommand::run()
print(ret, "server_state", keeper_info.getRole());
print(ret, "znode_count", state_machine.getNodesCount());
print(ret, "watch_count", state_machine.getTotalWatchesCount());
print(ret, "ephemerals_count", state_machine.getTotalEphemeralNodesCount());
print(ret, "approximate_data_size", state_machine.getApproximateDataSize());
print(ret, "key_arena_size", state_machine.getKeyArenaSize());
const auto & storage_stats = state_machine.getStorageStats();
print(ret, "znode_count", storage_stats.nodes_count.load(std::memory_order_relaxed));
print(ret, "watch_count", storage_stats.total_watches_count.load(std::memory_order_relaxed));
print(ret, "ephemerals_count", storage_stats.total_emphemeral_nodes_count.load(std::memory_order_relaxed));
print(ret, "approximate_data_size", storage_stats.approximate_data_size.load(std::memory_order_relaxed));
print(ret, "key_arena_size", 0);
print(ret, "latest_snapshot_size", state_machine.getLatestSnapshotSize());
#if defined(OS_LINUX) || defined(OS_DARWIN)
@ -387,6 +389,7 @@ String ServerStatCommand::run()
auto & stats = keeper_dispatcher.getKeeperConnectionStats();
Keeper4LWInfo keeper_info = keeper_dispatcher.getKeeper4LWInfo();
const auto & storage_stats = keeper_dispatcher.getStateMachine().getStorageStats();
write("ClickHouse Keeper version", String(VERSION_DESCRIBE) + "-" + VERSION_GITHASH);
@ -398,9 +401,9 @@ String ServerStatCommand::run()
write("Sent", toString(stats.getPacketsSent()));
write("Connections", toString(keeper_info.alive_connections_count));
write("Outstanding", toString(keeper_info.outstanding_requests_count));
write("Zxid", formatZxid(keeper_info.last_zxid));
write("Zxid", formatZxid(storage_stats.last_zxid.load(std::memory_order_relaxed)));
write("Mode", keeper_info.getRole());
write("Node count", toString(keeper_info.total_nodes_count));
write("Node count", toString(storage_stats.nodes_count.load(std::memory_order_relaxed)));
return buf.str();
}
@ -416,6 +419,7 @@ String StatCommand::run()
auto & stats = keeper_dispatcher.getKeeperConnectionStats();
Keeper4LWInfo keeper_info = keeper_dispatcher.getKeeper4LWInfo();
const auto & storage_stats = keeper_dispatcher.getStateMachine().getStorageStats();
write("ClickHouse Keeper version", String(VERSION_DESCRIBE) + "-" + VERSION_GITHASH);
@ -431,9 +435,9 @@ String StatCommand::run()
write("Sent", toString(stats.getPacketsSent()));
write("Connections", toString(keeper_info.alive_connections_count));
write("Outstanding", toString(keeper_info.outstanding_requests_count));
write("Zxid", formatZxid(keeper_info.last_zxid));
write("Zxid", formatZxid(storage_stats.last_zxid.load(std::memory_order_relaxed)));
write("Mode", keeper_info.getRole());
write("Node count", toString(keeper_info.total_nodes_count));
write("Node count", toString(storage_stats.nodes_count.load(std::memory_order_relaxed)));
return buf.str();
}
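Both commands now read the counters straight from the storage stats as relaxed atomic loads; relaxed ordering is enough here because each counter is displayed independently and no ordering between them is required. A self-contained sketch of the pattern (the struct below only mirrors the fields used above; it is illustrative, not the real Stats type):
#include <atomic>
#include <cstdint>
/// Illustrative stand-in for the storage stats consumed by the 4LW commands.
struct ExampleStorageStats
{
    std::atomic<uint64_t> nodes_count{0};
    std::atomic<int64_t> last_zxid{0};
};
/// Monitoring read path: independent relaxed loads, no locking required.
uint64_t currentNodeCount(const ExampleStorageStats & stats)
{
    return stats.nodes_count.load(std::memory_order_relaxed);
}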

View File

@ -1,7 +1,5 @@
#pragma once
#include <string>
#include <base/types.h>
#include <Common/Exception.h>
@ -30,9 +28,6 @@ struct Keeper4LWInfo
uint64_t follower_count;
uint64_t synced_follower_count;
uint64_t total_nodes_count;
int64_t last_zxid;
String getRole() const
{
if (is_standalone)

View File

@ -38,15 +38,16 @@ void updateKeeperInformation(KeeperDispatcher & keeper_dispatcher, AsynchronousM
is_follower = static_cast<size_t>(keeper_info.is_follower);
is_exceeding_mem_soft_limit = static_cast<size_t>(keeper_info.is_exceeding_mem_soft_limit);
zxid = keeper_info.last_zxid;
const auto & state_machine = keeper_dispatcher.getStateMachine();
znode_count = state_machine.getNodesCount();
watch_count = state_machine.getTotalWatchesCount();
ephemerals_count = state_machine.getTotalEphemeralNodesCount();
approximate_data_size = state_machine.getApproximateDataSize();
key_arena_size = state_machine.getKeyArenaSize();
session_with_watches = state_machine.getSessionsWithWatchesCount();
paths_watched = state_machine.getWatchedPathsCount();
const auto & storage_stats = state_machine.getStorageStats();
zxid = storage_stats.last_zxid.load(std::memory_order_relaxed);
znode_count = storage_stats.nodes_count.load(std::memory_order_relaxed);
watch_count = storage_stats.total_watches_count.load(std::memory_order_relaxed);
ephemerals_count = storage_stats.total_emphemeral_nodes_count.load(std::memory_order_relaxed);
approximate_data_size = storage_stats.approximate_data_size.load(std::memory_order_relaxed);
key_arena_size = 0;
session_with_watches = storage_stats.sessions_with_watches_count.load(std::memory_order_relaxed);
paths_watched = storage_stats.watched_paths_count.load(std::memory_order_relaxed);
# if defined(__linux__) || defined(__APPLE__)
open_file_descriptor_count = getCurrentProcessFDCount();

View File

@ -305,7 +305,7 @@ void KeeperDispatcher::requestThread()
if (has_read_request)
{
if (server->isLeaderAlive())
server->putLocalReadRequest(request);
server->putLocalReadRequest({request});
else
addErrorResponses({request}, Coordination::Error::ZCONNECTIONLOSS);
}

View File

@ -28,6 +28,16 @@
#include <Common/getMultipleKeysFromConfig.h>
#include <Common/getNumberOfPhysicalCPUCores.h>
#if USE_SSL
# include <Server/CertificateReloader.h>
# include <openssl/ssl.h>
# include <Poco/Crypto/EVPPKey.h>
# include <Poco/Net/Context.h>
# include <Poco/Net/SSLManager.h>
# include <Poco/Net/Utility.h>
# include <Poco/StringTokenizer.h>
#endif
#include <chrono>
#include <mutex>
#include <string>
@ -48,6 +58,7 @@ namespace ErrorCodes
extern const int SUPPORT_IS_DISABLED;
extern const int LOGICAL_ERROR;
extern const int INVALID_CONFIG_PARAMETER;
extern const int BAD_ARGUMENTS;
}
using namespace std::chrono_literals;
@ -56,6 +67,16 @@ namespace
{
#if USE_SSL
int callSetCertificate(SSL * ssl, void * arg)
{
if (!arg)
return -1;
const CertificateReloader::Data * data = reinterpret_cast<CertificateReloader::Data *>(arg);
return setCertificateCallback(ssl, data, getLogger("SSLContext"));
}
void setSSLParams(nuraft::asio_service::options & asio_opts)
{
const Poco::Util::LayeredConfiguration & config = Poco::Util::Application::instance().config();
@ -69,18 +90,55 @@ void setSSLParams(nuraft::asio_service::options & asio_opts)
if (!config.has(private_key_file_property))
throw Exception(ErrorCodes::NO_ELEMENTS_IN_CONFIG, "Server private key file is not set.");
asio_opts.enable_ssl_ = true;
asio_opts.server_cert_file_ = config.getString(certificate_file_property);
asio_opts.server_key_file_ = config.getString(private_key_file_property);
Poco::Net::Context::Params params;
params.certificateFile = config.getString(certificate_file_property);
if (params.certificateFile.empty())
throw Exception(ErrorCodes::BAD_ARGUMENTS, "Server certificate file in config '{}' is empty", certificate_file_property);
params.privateKeyFile = config.getString(private_key_file_property);
if (params.privateKeyFile.empty())
throw Exception(ErrorCodes::BAD_ARGUMENTS, "Server key file in config '{}' is empty", private_key_file_property);
auto pass_phrase = config.getString("openSSL.server.privateKeyPassphraseHandler.options.password", "");
auto certificate_data = std::make_shared<CertificateReloader::Data>(params.certificateFile, params.privateKeyFile, pass_phrase);
if (config.has(root_ca_file_property))
asio_opts.root_cert_file_ = config.getString(root_ca_file_property);
params.caLocation = config.getString(root_ca_file_property);
if (config.getBool("openSSL.server.loadDefaultCAFile", false))
asio_opts.load_default_ca_file_ = true;
params.loadDefaultCAs = config.getBool("openSSL.server.loadDefaultCAFile", false);
params.verificationMode = Poco::Net::Utility::convertVerificationMode(config.getString("openSSL.server.verificationMode", "none"));
if (config.getString("openSSL.server.verificationMode", "none") == "none")
asio_opts.skip_verification_ = true;
std::string disabled_protocols_list = config.getString("openSSL.server.disableProtocols", "");
Poco::StringTokenizer dp_tok(disabled_protocols_list, ";,", Poco::StringTokenizer::TOK_TRIM | Poco::StringTokenizer::TOK_IGNORE_EMPTY);
int disabled_protocols = 0;
for (const auto & token : dp_tok)
{
if (token == "sslv2")
disabled_protocols |= Poco::Net::Context::PROTO_SSLV2;
else if (token == "sslv3")
disabled_protocols |= Poco::Net::Context::PROTO_SSLV3;
else if (token == "tlsv1")
disabled_protocols |= Poco::Net::Context::PROTO_TLSV1;
else if (token == "tlsv1_1")
disabled_protocols |= Poco::Net::Context::PROTO_TLSV1_1;
else if (token == "tlsv1_2")
disabled_protocols |= Poco::Net::Context::PROTO_TLSV1_2;
}
asio_opts.ssl_context_provider_server_ = [params, certificate_data, disabled_protocols]
{
Poco::Net::Context context(Poco::Net::Context::Usage::TLSV1_2_SERVER_USE, params);
context.disableProtocols(disabled_protocols);
SSL_CTX * ssl_ctx = context.takeSslContext();
SSL_CTX_set_cert_cb(ssl_ctx, callSetCertificate, reinterpret_cast<void *>(certificate_data.get()));
return ssl_ctx;
};
asio_opts.ssl_context_provider_client_ = [ctx_params = std::move(params)]
{
Poco::Net::Context context(Poco::Net::Context::Usage::TLSV1_2_CLIENT_USE, ctx_params);
return context.takeSslContext();
};
}
#endif
@ -1149,8 +1207,6 @@ Keeper4LWInfo KeeperServer::getPartiallyFilled4LWInfo() const
result.synced_follower_count = getSyncedFollowerCount();
}
result.is_exceeding_mem_soft_limit = isExceedingMemorySoftLimit();
result.total_nodes_count = getKeeperStateMachine()->getNodesCount();
result.last_zxid = getKeeperStateMachine()->getLastProcessedZxid();
return result;
}

View File

@ -78,20 +78,20 @@ namespace
writeBinary(false, out);
/// Serialize stat
writeBinary(node.czxid, out);
writeBinary(node.mzxid, out);
writeBinary(node.ctime(), out);
writeBinary(node.mtime, out);
writeBinary(node.version, out);
writeBinary(node.cversion, out);
writeBinary(node.aversion, out);
writeBinary(node.ephemeralOwner(), out);
writeBinary(node.stats.czxid, out);
writeBinary(node.stats.mzxid, out);
writeBinary(node.stats.ctime(), out);
writeBinary(node.stats.mtime, out);
writeBinary(node.stats.version, out);
writeBinary(node.stats.cversion, out);
writeBinary(node.stats.aversion, out);
writeBinary(node.stats.ephemeralOwner(), out);
if (version < SnapshotVersion::V6)
writeBinary(static_cast<int32_t>(node.getData().size()), out);
writeBinary(node.numChildren(), out);
writeBinary(node.pzxid, out);
writeBinary(static_cast<int32_t>(node.stats.data_size), out);
writeBinary(node.stats.numChildren(), out);
writeBinary(node.stats.pzxid, out);
writeBinary(node.seqNum(), out);
writeBinary(node.stats.seqNum(), out);
if (version >= SnapshotVersion::V4 && version <= SnapshotVersion::V5)
writeBinary(node.sizeInBytes(), out);
@ -100,11 +100,11 @@ namespace
template<typename Node>
void readNode(Node & node, ReadBuffer & in, SnapshotVersion version, ACLMap & acl_map)
{
readVarUInt(node.data_size, in);
if (node.data_size != 0)
readVarUInt(node.stats.data_size, in);
if (node.stats.data_size != 0)
{
node.data = std::unique_ptr<char[]>(new char[node.data_size]);
in.readStrict(node.data.get(), node.data_size);
node.data = std::unique_ptr<char[]>(new char[node.stats.data_size]);
in.readStrict(node.data.get(), node.stats.data_size);
}
if (version >= SnapshotVersion::V1)
@ -141,19 +141,19 @@ namespace
}
/// Deserialize stat
readBinary(node.czxid, in);
readBinary(node.mzxid, in);
readBinary(node.stats.czxid, in);
readBinary(node.stats.mzxid, in);
int64_t ctime;
readBinary(ctime, in);
node.setCtime(ctime);
readBinary(node.mtime, in);
readBinary(node.version, in);
readBinary(node.cversion, in);
readBinary(node.aversion, in);
node.stats.setCtime(ctime);
readBinary(node.stats.mtime, in);
readBinary(node.stats.version, in);
readBinary(node.stats.cversion, in);
readBinary(node.stats.aversion, in);
int64_t ephemeral_owner = 0;
readBinary(ephemeral_owner, in);
if (ephemeral_owner != 0)
node.setEphemeralOwner(ephemeral_owner);
node.stats.setEphemeralOwner(ephemeral_owner);
if (version < SnapshotVersion::V6)
{
@ -163,14 +163,14 @@ namespace
int32_t num_children = 0;
readBinary(num_children, in);
if (ephemeral_owner == 0)
node.setNumChildren(num_children);
node.stats.setNumChildren(num_children);
readBinary(node.pzxid, in);
readBinary(node.stats.pzxid, in);
int32_t seq_num = 0;
readBinary(seq_num, in);
if (ephemeral_owner == 0)
node.setSeqNum(seq_num);
node.stats.setSeqNum(seq_num);
if (version >= SnapshotVersion::V4 && version <= SnapshotVersion::V5)
{
@ -256,7 +256,7 @@ void KeeperStorageSnapshot<Storage>::serialize(const KeeperStorageSnapshot<Stora
/// Benign race condition possible while taking snapshot: NuRaft decides to create a snapshot at some log id,
/// and only after some time do we lock the storage and enable snapshot mode. So snapshot_container_size can be
/// slightly bigger than required.
if (node.mzxid > snapshot.zxid)
if (node.stats.mzxid > snapshot.zxid)
break;
writeBinary(path, out);
writeNode(node, snapshot.version, out);
@ -306,7 +306,7 @@ void KeeperStorageSnapshot<Storage>::serialize(const KeeperStorageSnapshot<Stora
}
template<typename Storage>
void KeeperStorageSnapshot<Storage>::deserialize(SnapshotDeserializationResult<Storage> & deserialization_result, ReadBuffer & in, KeeperContextPtr keeper_context)
void KeeperStorageSnapshot<Storage>::deserialize(SnapshotDeserializationResult<Storage> & deserialization_result, ReadBuffer & in, KeeperContextPtr keeper_context) TSA_NO_THREAD_SAFETY_ANALYSIS
{
uint8_t version;
readBinary(version, in);
@ -435,13 +435,13 @@ void KeeperStorageSnapshot<Storage>::deserialize(SnapshotDeserializationResult<S
}
}
auto ephemeral_owner = node.ephemeralOwner();
auto ephemeral_owner = node.stats.ephemeralOwner();
if constexpr (!use_rocksdb)
if (!node.isEphemeral() && node.numChildren() > 0)
node.getChildren().reserve(node.numChildren());
if (!node.stats.isEphemeral() && node.stats.numChildren() > 0)
node.getChildren().reserve(node.stats.numChildren());
if (ephemeral_owner != 0)
storage.ephemerals[node.ephemeralOwner()].insert(std::string{path});
storage.committed_ephemerals[node.stats.ephemeralOwner()].insert(std::string{path});
if (recalculate_digest)
storage.nodes_digest += node.getDigest(path);
@ -467,16 +467,25 @@ void KeeperStorageSnapshot<Storage>::deserialize(SnapshotDeserializationResult<S
{
if (itr.key != "/")
{
if (itr.value.numChildren() != static_cast<int32_t>(itr.value.getChildren().size()))
if (itr.value.stats.numChildren() != static_cast<int32_t>(itr.value.getChildren().size()))
{
#ifdef NDEBUG
/// TODO (alesapin) remove this, it should be always CORRUPTED_DATA.
LOG_ERROR(getLogger("KeeperSnapshotManager"), "Children counter in stat.numChildren {}"
" is different from actual children size {} for node {}", itr.value.numChildren(), itr.value.getChildren().size(), itr.key);
LOG_ERROR(
getLogger("KeeperSnapshotManager"),
"Children counter in stat.numChildren {}"
" is different from actual children size {} for node {}",
itr.value.stats.numChildren(),
itr.value.getChildren().size(),
itr.key);
#else
throw Exception(ErrorCodes::LOGICAL_ERROR, "Children counter in stat.numChildren {}"
" is different from actual children size {} for node {}",
itr.value.numChildren(), itr.value.getChildren().size(), itr.key);
throw Exception(
ErrorCodes::LOGICAL_ERROR,
"Children counter in stat.numChildren {}"
" is different from actual children size {} for node {}",
itr.value.stats.numChildren(),
itr.value.getChildren().size(),
itr.key);
#endif
}
}
@ -511,7 +520,7 @@ void KeeperStorageSnapshot<Storage>::deserialize(SnapshotDeserializationResult<S
session_auth_counter++;
}
if (!ids.empty())
storage.session_and_auth[active_session_id] = ids;
storage.committed_session_and_auth[active_session_id] = ids;
}
current_session_size++;
}
@ -527,6 +536,8 @@ void KeeperStorageSnapshot<Storage>::deserialize(SnapshotDeserializationResult<S
buffer->pos(0);
deserialization_result.cluster_config = ClusterConfig::deserialize(*buffer);
}
storage.updateStats();
}
template<typename Storage>
@ -544,7 +555,7 @@ KeeperStorageSnapshot<Storage>::KeeperStorageSnapshot(Storage * storage_, uint64
begin = storage->getSnapshotIteratorBegin();
session_and_timeout = storage->getActiveSessions();
acl_map = storage->acl_map.getMapping();
session_and_auth = storage->session_and_auth;
session_and_auth = storage->committed_session_and_auth;
}
template<typename Storage>
@ -563,7 +574,7 @@ KeeperStorageSnapshot<Storage>::KeeperStorageSnapshot(
begin = storage->getSnapshotIteratorBegin();
session_and_timeout = storage->getActiveSessions();
acl_map = storage->acl_map.getMapping();
session_and_auth = storage->session_and_auth;
session_and_auth = storage->committed_session_and_auth;
}
template<typename Storage>

View File

@ -36,6 +36,11 @@ namespace ProfileEvents
extern const Event KeeperStorageLockWaitMicroseconds;
}
namespace CurrentMetrics
{
extern const Metric KeeperAliveConnections;
}
namespace DB
{
@ -56,6 +61,7 @@ IKeeperStateMachine::IKeeperStateMachine(
, snapshots_queue(snapshots_queue_)
, min_request_size_to_cache(keeper_context_->getCoordinationSettings()->min_request_size_for_cache)
, log(getLogger("KeeperStateMachine"))
, read_pool(CurrentMetrics::KeeperAliveConnections, CurrentMetrics::KeeperAliveConnections, CurrentMetrics::KeeperAliveConnections, 100, 10000, 10000)
, superdigest(superdigest_)
, keeper_context(keeper_context_)
, snapshot_manager_s3(snapshot_manager_s3_)
@ -175,18 +181,20 @@ void assertDigest(
}
}
struct TSA_SCOPED_LOCKABLE LockGuardWithStats final
template <bool shared = false>
struct LockGuardWithStats final
{
std::unique_lock<std::mutex> lock;
explicit LockGuardWithStats(std::mutex & mutex) TSA_ACQUIRE(mutex)
using LockType = std::conditional_t<shared, std::shared_lock<SharedMutex>, std::unique_lock<SharedMutex>>;
LockType lock;
explicit LockGuardWithStats(SharedMutex & mutex)
{
Stopwatch watch;
std::unique_lock l(mutex);
LockType l(mutex);
ProfileEvents::increment(ProfileEvents::KeeperStorageLockWaitMicroseconds, watch.elapsedMicroseconds());
lock = std::move(l);
}
~LockGuardWithStats() TSA_RELEASE() = default;
~LockGuardWithStats() = default;
};
}
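A standalone sketch of the new guard's shape: it is templated on whether it takes a shared or an exclusive lock, and it measures how long acquisition took (in the code above that time feeds the KeeperStorageLockWaitMicroseconds profile event; the names below are otherwise illustrative and not part of this changeset):
#include <chrono>
#include <mutex>
#include <shared_mutex>
#include <type_traits>
template <typename Mutex, bool shared = false>
struct TimedLockGuard
{
    using LockType = std::conditional_t<shared, std::shared_lock<Mutex>, std::unique_lock<Mutex>>;
    LockType lock;
    std::chrono::microseconds wait{0}; /// lock-wait time, e.g. for a profile counter
    explicit TimedLockGuard(Mutex & mutex)
    {
        const auto start = std::chrono::steady_clock::now();
        LockType acquired(mutex);
        wait = std::chrono::duration_cast<std::chrono::microseconds>(std::chrono::steady_clock::now() - start);
        lock = std::move(acquired);
    }
};
/// Usage, e.g.: TimedLockGuard<std::shared_mutex, /*shared=*/true> guard(some_mutex);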
@ -312,13 +320,12 @@ bool KeeperStateMachine<Storage>::preprocess(const KeeperStorageBase::RequestFor
if (op_num == Coordination::OpNum::SessionID || op_num == Coordination::OpNum::Reconfig)
return true;
LockGuardWithStats lock(storage_and_responses_lock);
if (storage->isFinalized())
return false;
try
{
LockGuardWithStats<true> lock(storage_mutex);
storage->preprocessRequest(
request_for_session.request,
request_for_session.session_id,
@ -335,7 +342,12 @@ bool KeeperStateMachine<Storage>::preprocess(const KeeperStorageBase::RequestFor
}
if (keeper_context->digestEnabled() && request_for_session.digest)
assertDigest(*request_for_session.digest, storage->getNodesDigest(false), *request_for_session.request, request_for_session.log_idx, false);
assertDigest(
*request_for_session.digest,
storage->getNodesDigest(false, /*lock_transaction_mutex=*/true),
*request_for_session.request,
request_for_session.log_idx,
false);
return true;
}
@ -343,7 +355,7 @@ bool KeeperStateMachine<Storage>::preprocess(const KeeperStorageBase::RequestFor
template<typename Storage>
void KeeperStateMachine<Storage>::reconfigure(const KeeperStorageBase::RequestForSession& request_for_session)
{
LockGuardWithStats lock(storage_and_responses_lock);
LockGuardWithStats lock(storage_mutex);
KeeperStorageBase::ResponseForSession response = processReconfiguration(request_for_session);
if (!responses_queue.push(response))
{
@ -461,7 +473,7 @@ nuraft::ptr<nuraft::buffer> KeeperStateMachine<Storage>::commit(const uint64_t l
response_for_session.response = response;
response_for_session.request = request_for_session->request;
LockGuardWithStats lock(storage_and_responses_lock);
LockGuardWithStats lock(storage_mutex);
session_id = storage->getSessionID(session_id_request.session_timeout_ms);
LOG_DEBUG(log, "Session ID response {} with timeout {}", session_id, session_id_request.session_timeout_ms);
response->session_id = session_id;
@ -472,24 +484,31 @@ nuraft::ptr<nuraft::buffer> KeeperStateMachine<Storage>::commit(const uint64_t l
if (op_num == Coordination::OpNum::Close)
{
std::lock_guard lock(request_cache_mutex);
std::lock_guard cache_lock(request_cache_mutex);
parsed_request_cache.erase(request_for_session->session_id);
}
LockGuardWithStats lock(storage_and_responses_lock);
KeeperStorageBase::ResponsesForSessions responses_for_sessions
= storage->processRequest(request_for_session->request, request_for_session->session_id, request_for_session->zxid);
for (auto & response_for_session : responses_for_sessions)
{
if (response_for_session.response->xid != Coordination::WATCH_XID)
response_for_session.request = request_for_session->request;
LockGuardWithStats<true> lock(storage_mutex);
std::lock_guard response_lock(process_and_responses_lock);
KeeperStorageBase::ResponsesForSessions responses_for_sessions
= storage->processRequest(request_for_session->request, request_for_session->session_id, request_for_session->zxid);
for (auto & response_for_session : responses_for_sessions)
{
if (response_for_session.response->xid != Coordination::WATCH_XID)
response_for_session.request = request_for_session->request;
try_push(response_for_session);
try_push(response_for_session);
}
}
if (keeper_context->digestEnabled() && request_for_session->digest)
assertDigest(*request_for_session->digest, storage->getNodesDigest(true), *request_for_session->request, request_for_session->log_idx, true);
assertDigest(
*request_for_session->digest,
storage->getNodesDigest(true, /*lock_transaction_mutex=*/true),
*request_for_session->request,
request_for_session->log_idx,
true);
}
ProfileEvents::increment(ProfileEvents::KeeperCommits);
@ -534,8 +553,6 @@ bool KeeperStateMachine<Storage>::apply_snapshot(nuraft::snapshot & s)
}
{ /// deserialize and apply snapshot to storage
LockGuardWithStats lock(storage_and_responses_lock);
SnapshotDeserializationResult<Storage> snapshot_deserialization_result;
if (latest_snapshot_ptr)
snapshot_deserialization_result = snapshot_manager.deserializeSnapshotFromBuffer(latest_snapshot_ptr);
@ -543,6 +560,7 @@ bool KeeperStateMachine<Storage>::apply_snapshot(nuraft::snapshot & s)
snapshot_deserialization_result
= snapshot_manager.deserializeSnapshotFromBuffer(snapshot_manager.deserializeSnapshotBufferFromDisk(s.get_last_log_idx()));
LockGuardWithStats storage_lock(storage_mutex);
/// maybe some logs were preprocessed with log idx larger than the snapshot idx
/// we have to apply them to the new storage
storage->applyUncommittedState(*snapshot_deserialization_result.storage, snapshot_deserialization_result.snapshot_meta->get_last_log_idx());
@ -587,16 +605,7 @@ void KeeperStateMachine<Storage>::rollbackRequest(const KeeperStorageBase::Reque
if (request_for_session.request->getOpNum() == Coordination::OpNum::SessionID)
return;
LockGuardWithStats lock(storage_and_responses_lock);
storage->rollbackRequest(request_for_session.zxid, allow_missing);
}
template<typename Storage>
void KeeperStateMachine<Storage>::rollbackRequestNoLock(const KeeperStorageBase::RequestForSession & request_for_session, bool allow_missing)
{
if (request_for_session.request->getOpNum() == Coordination::OpNum::SessionID)
return;
LockGuardWithStats lock(storage_mutex);
storage->rollbackRequest(request_for_session.zxid, allow_missing);
}
@ -616,7 +625,7 @@ void KeeperStateMachine<Storage>::create_snapshot(nuraft::snapshot & s, nuraft::
auto snapshot_meta_copy = nuraft::snapshot::deserialize(*snp_buf);
CreateSnapshotTask snapshot_task;
{ /// lock storage for a short period time to turn on "snapshot mode". After that we can read consistent storage state without locking.
LockGuardWithStats lock(storage_and_responses_lock);
LockGuardWithStats lock(storage_mutex);
snapshot_task.snapshot = std::make_shared<KeeperStorageSnapshot<Storage>>(storage.get(), snapshot_meta_copy, getClusterConfig());
}
@ -681,7 +690,7 @@ void KeeperStateMachine<Storage>::create_snapshot(nuraft::snapshot & s, nuraft::
}
{
/// Destroy snapshot with lock
LockGuardWithStats lock(storage_and_responses_lock);
LockGuardWithStats lock(storage_mutex);
LOG_TRACE(log, "Clearing garbage after snapshot");
/// Turn off "snapshot mode" and clear the outdated part of the storage state
storage->clearGarbageAfterSnapshot();
@ -824,10 +833,10 @@ template<typename Storage>
void KeeperStateMachine<Storage>::processReadRequest(const KeeperStorageBase::RequestForSession & request_for_session)
{
/// Pure local request, just process it with storage
LockGuardWithStats lock(storage_and_responses_lock);
LockGuardWithStats<true> storage_lock(storage_mutex);
std::lock_guard response_lock(process_and_responses_lock);
auto responses = storage->processRequest(
request_for_session.request, request_for_session.session_id, std::nullopt, true /*check_acl*/, true /*is_local*/);
for (auto & response_for_session : responses)
{
if (response_for_session.response->xid != Coordination::WATCH_XID)
@ -840,112 +849,116 @@ void KeeperStateMachine<Storage>::processReadRequest(const KeeperStorageBase::Re
template<typename Storage>
void KeeperStateMachine<Storage>::shutdownStorage()
{
LockGuardWithStats lock(storage_and_responses_lock);
LockGuardWithStats lock(storage_mutex);
storage->finalize();
}
template<typename Storage>
std::vector<int64_t> KeeperStateMachine<Storage>::getDeadSessions()
{
LockGuardWithStats lock(storage_and_responses_lock);
LockGuardWithStats lock(storage_mutex);
return storage->getDeadSessions();
}
template<typename Storage>
int64_t KeeperStateMachine<Storage>::getNextZxid() const
{
LockGuardWithStats lock(storage_and_responses_lock);
return storage->getNextZXID();
}
template<typename Storage>
KeeperStorageBase::Digest KeeperStateMachine<Storage>::getNodesDigest() const
{
LockGuardWithStats lock(storage_and_responses_lock);
return storage->getNodesDigest(false);
LockGuardWithStats lock(storage_mutex);
return storage->getNodesDigest(false, /*lock_transaction_mutex=*/true);
}
template<typename Storage>
uint64_t KeeperStateMachine<Storage>::getLastProcessedZxid() const
{
LockGuardWithStats lock(storage_and_responses_lock);
return storage->getZXID();
}
template<typename Storage>
const KeeperStorageBase::Stats & KeeperStateMachine<Storage>::getStorageStats() const TSA_NO_THREAD_SAFETY_ANALYSIS
{
return storage->getStorageStats();
}
template<typename Storage>
uint64_t KeeperStateMachine<Storage>::getNodesCount() const
{
LockGuardWithStats lock(storage_and_responses_lock);
LockGuardWithStats lock(storage_mutex);
return storage->getNodesCount();
}
template<typename Storage>
uint64_t KeeperStateMachine<Storage>::getTotalWatchesCount() const
{
LockGuardWithStats lock(storage_and_responses_lock);
LockGuardWithStats lock(storage_mutex);
return storage->getTotalWatchesCount();
}
template<typename Storage>
uint64_t KeeperStateMachine<Storage>::getWatchedPathsCount() const
{
LockGuardWithStats lock(storage_and_responses_lock);
LockGuardWithStats lock(storage_mutex);
return storage->getWatchedPathsCount();
}
template<typename Storage>
uint64_t KeeperStateMachine<Storage>::getSessionsWithWatchesCount() const
{
LockGuardWithStats lock(storage_and_responses_lock);
LockGuardWithStats lock(storage_mutex);
return storage->getSessionsWithWatchesCount();
}
template<typename Storage>
uint64_t KeeperStateMachine<Storage>::getTotalEphemeralNodesCount() const
{
LockGuardWithStats lock(storage_and_responses_lock);
LockGuardWithStats lock(storage_mutex);
return storage->getTotalEphemeralNodesCount();
}
template<typename Storage>
uint64_t KeeperStateMachine<Storage>::getSessionWithEphemeralNodesCount() const
{
LockGuardWithStats lock(storage_and_responses_lock);
LockGuardWithStats lock(storage_mutex);
return storage->getSessionWithEphemeralNodesCount();
}
template<typename Storage>
void KeeperStateMachine<Storage>::dumpWatches(WriteBufferFromOwnString & buf) const
{
LockGuardWithStats lock(storage_and_responses_lock);
LockGuardWithStats lock(storage_mutex);
storage->dumpWatches(buf);
}
template<typename Storage>
void KeeperStateMachine<Storage>::dumpWatchesByPath(WriteBufferFromOwnString & buf) const
{
LockGuardWithStats lock(storage_and_responses_lock);
LockGuardWithStats lock(storage_mutex);
storage->dumpWatchesByPath(buf);
}
template<typename Storage>
void KeeperStateMachine<Storage>::dumpSessionsAndEphemerals(WriteBufferFromOwnString & buf) const
{
LockGuardWithStats lock(storage_and_responses_lock);
LockGuardWithStats lock(storage_mutex);
storage->dumpSessionsAndEphemerals(buf);
}
template<typename Storage>
uint64_t KeeperStateMachine<Storage>::getApproximateDataSize() const
{
LockGuardWithStats lock(storage_and_responses_lock);
LockGuardWithStats lock(storage_mutex);
return storage->getApproximateDataSize();
}
template<typename Storage>
uint64_t KeeperStateMachine<Storage>::getKeyArenaSize() const
{
LockGuardWithStats lock(storage_and_responses_lock);
LockGuardWithStats lock(storage_mutex);
return storage->getArenaDataSize();
}
@ -988,7 +1001,7 @@ ClusterConfigPtr IKeeperStateMachine::getClusterConfig() const
template<typename Storage>
void KeeperStateMachine<Storage>::recalculateStorageStats()
{
LockGuardWithStats lock(storage_and_responses_lock);
LockGuardWithStats lock(storage_mutex);
LOG_INFO(log, "Recalculating storage stats");
storage->recalculateStats();
LOG_INFO(log, "Done recalculating storage stats");


@ -85,6 +85,8 @@ public:
/// Introspection functions for 4lw commands
virtual uint64_t getLastProcessedZxid() const = 0;
virtual const KeeperStorageBase::Stats & getStorageStats() const = 0;
virtual uint64_t getNodesCount() const = 0;
virtual uint64_t getTotalWatchesCount() const = 0;
virtual uint64_t getWatchedPathsCount() const = 0;
@ -124,12 +126,16 @@ protected:
/// Mutex for snapshots
mutable std::mutex snapshots_lock;
/// Lock for storage and responses_queue. It's important to process requests
/// Lock for the storage
/// Storage works in a thread-safe way ONLY for preprocessing/processing.
/// In any other case, a unique storage lock needs to be taken.
mutable SharedMutex storage_mutex;
/// Lock for processing and responses_queue. It's important to process requests
/// and push them to the responses queue while holding this lock. Otherwise
/// we can get strange cases when, for example, a client sends a read request with
/// a watch and then receives the watch response before the response
/// for the request itself.
mutable std::mutex storage_and_responses_lock;
mutable std::mutex process_and_responses_lock;
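To make the intended division of labor between the two locks concrete, here is a minimal sketch under assumed, simplified names (ResponseSketch, processAndRespond, and so on are illustrative, not the real KeeperStateMachine API): request processing and response queueing happen under process_and_responses_lock, while out-of-band introspection takes storage_mutex.

```cpp
#include <cstdint>
#include <mutex>
#include <shared_mutex>
#include <utility>
#include <vector>

/// Simplified stand-in for the real response type.
struct ResponseSketch { int64_t session_id = 0; };

template <typename Storage>
struct StateMachineSketch
{
    Storage & storage;
    mutable std::shared_mutex storage_mutex;       /// unique lock for non-request access
    std::mutex process_and_responses_lock;         /// keeps "process" + "push responses" atomic
    std::vector<ResponseSketch> responses_queue;   /// stand-in for the real ResponsesQueue

    template <typename Request>
    void processAndRespond(const Request & request, int64_t session_id)
    {
        std::lock_guard lock(process_and_responses_lock);
        for (auto & response : storage.processRequest(request, session_id))
            responses_queue.push_back(std::move(response)); /// pushed under the same lock, so a
                                                            /// watch response cannot overtake the
                                                            /// response of the request that set it
    }

    uint64_t nodesCount() const
    {
        std::unique_lock lock(storage_mutex);      /// introspection path takes the storage lock
        return storage.getNodesCount();
    }
};
```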
std::unordered_map<int64_t, std::unordered_map<Coordination::XID, std::shared_ptr<KeeperStorageBase::RequestForSession>>> parsed_request_cache;
uint64_t min_request_size_to_cache{0};
@ -146,6 +152,7 @@ protected:
mutable std::mutex cluster_config_lock;
ClusterConfigPtr cluster_config;
ThreadPool read_pool;
/// Special part of ACL system -- superdigest specified in server config.
const std::string superdigest;
@ -153,10 +160,8 @@ protected:
KeeperSnapshotManagerS3 * snapshot_manager_s3;
virtual KeeperStorageBase::ResponseForSession processReconfiguration(
const KeeperStorageBase::RequestForSession& request_for_session)
TSA_REQUIRES(storage_and_responses_lock) = 0;
virtual KeeperStorageBase::ResponseForSession processReconfiguration(const KeeperStorageBase::RequestForSession & request_for_session)
= 0;
};
/// ClickHouse Keeper state machine. Wrapper for KeeperStorage.
@ -189,10 +194,6 @@ public:
// (can happen in case of exception during preprocessing)
void rollbackRequest(const KeeperStorageBase::RequestForSession & request_for_session, bool allow_missing) override;
void rollbackRequestNoLock(
const KeeperStorageBase::RequestForSession & request_for_session,
bool allow_missing) TSA_NO_THREAD_SAFETY_ANALYSIS;
/// Apply preliminarily saved (save_logical_snp_obj) snapshot to our state.
bool apply_snapshot(nuraft::snapshot & s) override;
@ -205,7 +206,7 @@ public:
// This should be used only for tests or keeper-data-dumper because it violates
// TSA -- we can't acquire the lock outside of this class or return a storage under lock
// in a reasonable way.
Storage & getStorageUnsafe() TSA_NO_THREAD_SAFETY_ANALYSIS
Storage & getStorageUnsafe()
{
return *storage;
}
@ -224,6 +225,8 @@ public:
/// Introspection functions for 4lw commands
uint64_t getLastProcessedZxid() const override;
const KeeperStorageBase::Stats & getStorageStats() const override;
uint64_t getNodesCount() const override;
uint64_t getTotalWatchesCount() const override;
uint64_t getWatchedPathsCount() const override;
@ -245,12 +248,12 @@ public:
private:
/// Main state machine logic
std::unique_ptr<Storage> storage; //TSA_PT_GUARDED_BY(storage_and_responses_lock);
std::unique_ptr<Storage> storage;
/// Save/Load and Serialize/Deserialize logic for snapshots.
KeeperSnapshotManager<Storage> snapshot_manager;
KeeperStorageBase::ResponseForSession processReconfiguration(const KeeperStorageBase::RequestForSession & request_for_session)
TSA_REQUIRES(storage_and_responses_lock) override;
KeeperStorageBase::ResponseForSession processReconfiguration(const KeeperStorageBase::RequestForSession & request_for_session) override;
};
}

File diff suppressed because it is too large


@ -1,10 +1,16 @@
#pragma once
#include <unordered_map>
#include <unordered_set>
#include <vector>
#include <Coordination/ACLMap.h>
#include <Coordination/SessionExpiryQueue.h>
#include <Coordination/SnapshotableHashTable.h>
#include "Common/StringHashForHeterogeneousLookup.h"
#include <Common/SharedMutex.h>
#include <Common/Concepts.h>
#include <base/defines.h>
#include <absl/container/flat_hash_set.h>
@ -23,14 +29,11 @@ using ResponseCallback = std::function<void(const Coordination::ZooKeeperRespons
using ChildrenSet = absl::flat_hash_set<StringRef, StringRefHash>;
using SessionAndTimeout = std::unordered_map<int64_t, int64_t>;
/// KeeperRocksNodeInfo is used in RocksDB keeper.
/// It is serialized directly as POD to RocksDB.
struct KeeperRocksNodeInfo
struct NodeStats
{
int64_t czxid{0};
int64_t mzxid{0};
int64_t pzxid{0};
uint64_t acl_id = 0; /// 0 -- no ACL by default
int64_t mtime{0};
@ -38,225 +41,9 @@ struct KeeperRocksNodeInfo
int32_t cversion{0};
int32_t aversion{0};
int32_t seq_num = 0;
mutable UInt64 digest = 0; /// cached digest for this node
/// as ctime can't be negative because it stores the timestamp when the
/// node was created, we can use the MSB for a bool
struct
{
bool is_ephemeral : 1;
int64_t ctime : 63;
} is_ephemeral_and_ctime{false, 0};
/// ephemeral nodes cannot have children so a node can set either
/// ephemeral_owner OR seq_num + num_children
union
{
int64_t ephemeral_owner;
struct
{
int32_t seq_num;
int32_t num_children;
} children_info;
} ephemeral_or_children_data{0};
bool isEphemeral() const
{
return is_ephemeral_and_ctime.is_ephemeral;
}
int64_t ephemeralOwner() const
{
if (isEphemeral())
return ephemeral_or_children_data.ephemeral_owner;
return 0;
}
void setEphemeralOwner(int64_t ephemeral_owner)
{
is_ephemeral_and_ctime.is_ephemeral = ephemeral_owner != 0;
ephemeral_or_children_data.ephemeral_owner = ephemeral_owner;
}
int32_t numChildren() const
{
if (isEphemeral())
return 0;
return ephemeral_or_children_data.children_info.num_children;
}
void setNumChildren(int32_t num_children)
{
ephemeral_or_children_data.children_info.num_children = num_children;
}
/// dummy interface for test
void addChild(StringRef) {}
auto getChildren() const
{
return std::vector<int>(numChildren());
}
void increaseNumChildren()
{
chassert(!isEphemeral());
++ephemeral_or_children_data.children_info.num_children;
}
void decreaseNumChildren()
{
chassert(!isEphemeral());
--ephemeral_or_children_data.children_info.num_children;
}
int32_t seqNum() const
{
if (isEphemeral())
return 0;
return ephemeral_or_children_data.children_info.seq_num;
}
void setSeqNum(int32_t seq_num_)
{
ephemeral_or_children_data.children_info.seq_num = seq_num_;
}
void increaseSeqNum()
{
chassert(!isEphemeral());
++ephemeral_or_children_data.children_info.seq_num;
}
int64_t ctime() const
{
return is_ephemeral_and_ctime.ctime;
}
void setCtime(uint64_t ctime)
{
is_ephemeral_and_ctime.ctime = ctime;
}
uint32_t data_size{0};
void copyStats(const Coordination::Stat & stat);
};
/// KeeperRocksNode is the memory structure used by RocksDB
struct KeeperRocksNode : public KeeperRocksNodeInfo
{
#if USE_ROCKSDB
friend struct RocksDBContainer<KeeperRocksNode>;
#endif
using Meta = KeeperRocksNodeInfo;
uint64_t size_bytes = 0; // only for compatibility, should be deprecated
uint64_t sizeInBytes() const { return data_size + sizeof(KeeperRocksNodeInfo); }
void setData(String new_data)
{
data_size = static_cast<uint32_t>(new_data.size());
if (data_size != 0)
{
data = std::unique_ptr<char[]>(new char[new_data.size()]);
memcpy(data.get(), new_data.data(), data_size);
}
}
void shallowCopy(const KeeperRocksNode & other)
{
czxid = other.czxid;
mzxid = other.mzxid;
pzxid = other.pzxid;
acl_id = other.acl_id; /// 0 -- no ACL by default
mtime = other.mtime;
is_ephemeral_and_ctime = other.is_ephemeral_and_ctime;
ephemeral_or_children_data = other.ephemeral_or_children_data;
data_size = other.data_size;
if (data_size != 0)
{
data = std::unique_ptr<char[]>(new char[data_size]);
memcpy(data.get(), other.data.get(), data_size);
}
version = other.version;
cversion = other.cversion;
aversion = other.aversion;
/// cached_digest = other.cached_digest;
}
void invalidateDigestCache() const;
UInt64 getDigest(std::string_view path) const;
String getEncodedString();
void decodeFromString(const String & buffer_str);
void recalculateSize() {}
std::string_view getData() const noexcept { return {data.get(), data_size}; }
void setResponseStat(Coordination::Stat & response_stat) const
{
response_stat.czxid = czxid;
response_stat.mzxid = mzxid;
response_stat.ctime = ctime();
response_stat.mtime = mtime;
response_stat.version = version;
response_stat.cversion = cversion;
response_stat.aversion = aversion;
response_stat.ephemeralOwner = ephemeralOwner();
response_stat.dataLength = static_cast<int32_t>(data_size);
response_stat.numChildren = numChildren();
response_stat.pzxid = pzxid;
}
void reset()
{
serialized = false;
}
bool empty() const
{
return data_size == 0 && mzxid == 0;
}
std::unique_ptr<char[]> data{nullptr};
uint32_t data_size{0};
private:
bool serialized = false;
};
/// KeeperMemNode should have as minimal size as possible to reduce memory footprint
/// of stored nodes
/// New fields should be added to the struct only if it's really necessary
struct KeeperMemNode
{
int64_t czxid{0};
int64_t mzxid{0};
int64_t pzxid{0};
uint64_t acl_id = 0; /// 0 -- no ACL by default
int64_t mtime{0};
std::unique_ptr<char[]> data{nullptr};
uint32_t data_size{0};
int32_t version{0};
int32_t cversion{0};
int32_t aversion{0};
mutable uint64_t cached_digest = 0;
KeeperMemNode() = default;
KeeperMemNode & operator=(const KeeperMemNode & other);
KeeperMemNode(const KeeperMemNode & other);
KeeperMemNode & operator=(KeeperMemNode && other) noexcept;
KeeperMemNode(KeeperMemNode && other) noexcept;
bool empty() const;
bool isEphemeral() const
{
@ -287,6 +74,7 @@ struct KeeperMemNode
void setNumChildren(int32_t num_children)
{
is_ephemeral_and_ctime.is_ephemeral = false;
ephemeral_or_children_data.children_info.num_children = num_children;
}
@ -331,34 +119,6 @@ struct KeeperMemNode
is_ephemeral_and_ctime.ctime = ctime;
}
void copyStats(const Coordination::Stat & stat);
void setResponseStat(Coordination::Stat & response_stat) const;
/// Object memory size
uint64_t sizeInBytes() const;
void setData(const String & new_data);
std::string_view getData() const noexcept { return {data.get(), data_size}; }
void addChild(StringRef child_path);
void removeChild(StringRef child_path);
const auto & getChildren() const noexcept { return children; }
auto & getChildren() { return children; }
// Invalidate the calculated digest so it's recalculated again on the next
// getDigest call
void invalidateDigestCache() const;
// get the calculated digest of the node
UInt64 getDigest(std::string_view path) const;
// copy only necessary information for preprocessing and digest calculation
// (e.g. we don't need to copy list of children)
void shallowCopy(const KeeperMemNode & other);
private:
/// as ctime can't be negative because it stores the timestamp when the
/// node was created, we can use the MSB for a bool
@ -379,7 +139,132 @@ private:
int32_t num_children;
} children_info;
} ephemeral_or_children_data{0};
};
/// KeeperRocksNodeInfo is used in RocksDB keeper.
/// It is serialized directly as POD to RocksDB.
struct KeeperRocksNodeInfo
{
NodeStats stats;
uint64_t acl_id = 0; /// 0 -- no ACL by default
/// dummy interface for test
void addChild(StringRef) {}
auto getChildren() const
{
return std::vector<int>(stats.numChildren());
}
void copyStats(const Coordination::Stat & stat);
};
/// KeeperRocksNode is the memory structure used by RocksDB
struct KeeperRocksNode : public KeeperRocksNodeInfo
{
#if USE_ROCKSDB
friend struct RocksDBContainer<KeeperRocksNode>;
#endif
using Meta = KeeperRocksNodeInfo;
uint64_t size_bytes = 0; // only for compatibility, should be deprecated
uint64_t sizeInBytes() const { return stats.data_size + sizeof(KeeperRocksNodeInfo); }
void setData(String new_data)
{
stats.data_size = static_cast<uint32_t>(new_data.size());
if (stats.data_size != 0)
{
data = std::unique_ptr<char[]>(new char[new_data.size()]);
memcpy(data.get(), new_data.data(), stats.data_size);
}
}
void shallowCopy(const KeeperRocksNode & other)
{
stats = other.stats;
acl_id = other.acl_id;
if (stats.data_size != 0)
{
data = std::unique_ptr<char[]>(new char[stats.data_size]);
memcpy(data.get(), other.data.get(), stats.data_size);
}
/// cached_digest = other.cached_digest;
}
void invalidateDigestCache() const;
UInt64 getDigest(std::string_view path) const;
String getEncodedString();
void decodeFromString(const String & buffer_str);
void recalculateSize() {}
std::string_view getData() const noexcept { return {data.get(), stats.data_size}; }
void setResponseStat(Coordination::Stat & response_stat) const;
void reset()
{
serialized = false;
}
bool empty() const
{
return stats.data_size == 0 && stats.mzxid == 0;
}
std::unique_ptr<char[]> data{nullptr};
mutable UInt64 cached_digest = 0; /// cached digest for this node
private:
bool serialized = false;
};
/// KeeperMemNode should have as minimal size as possible to reduce memory footprint
/// of stored nodes
/// New fields should be added to the struct only if it's really necessary
struct KeeperMemNode
{
NodeStats stats;
std::unique_ptr<char[]> data{nullptr};
mutable uint64_t cached_digest = 0;
uint64_t acl_id = 0; /// 0 -- no ACL by default
KeeperMemNode() = default;
KeeperMemNode & operator=(const KeeperMemNode & other);
KeeperMemNode(const KeeperMemNode & other);
KeeperMemNode & operator=(KeeperMemNode && other) noexcept;
KeeperMemNode(KeeperMemNode && other) noexcept;
bool empty() const;
void copyStats(const Coordination::Stat & stat);
void setResponseStat(Coordination::Stat & response_stat) const;
/// Object memory size
uint64_t sizeInBytes() const;
void setData(const String & new_data);
std::string_view getData() const noexcept { return {data.get(), stats.data_size}; }
void addChild(StringRef child_path);
void removeChild(StringRef child_path);
const auto & getChildren() const noexcept { return children; }
auto & getChildren() { return children; }
// Invalidate the calculated digest so it's recalculated again on the next
// getDigest call
void invalidateDigestCache() const;
// get the calculated digest of the node
UInt64 getDigest(std::string_view path) const;
// copy only necessary information for preprocessing and digest calculation
// (e.g. we don't need to copy list of children)
void shallowCopy(const KeeperMemNode & other);
private:
ChildrenSet children{};
};
@ -430,18 +315,187 @@ public:
};
using Ephemerals = std::unordered_map<int64_t, std::unordered_set<std::string>>;
using SessionAndWatcher = std::unordered_map<int64_t, std::unordered_set<std::string>>;
struct WatchInfo
{
std::string_view path;
bool is_list_watch;
bool operator==(const WatchInfo &) const = default;
};
struct WatchInfoHash
{
auto operator()(WatchInfo info) const
{
SipHash hash;
hash.update(info.path);
hash.update(info.is_list_watch);
return hash.get64();
}
};
using SessionAndWatcher = std::unordered_map<int64_t, std::unordered_set<WatchInfo, WatchInfoHash>>;
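A small illustration of why watchers are now keyed by (path, is_list_watch) rather than by path alone: one session may hold both a data watch and a list watch on the same node, and they have to be tracked and fired independently. Names are hypothetical and std::hash stands in for SipHash.

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <string>
#include <unordered_map>
#include <unordered_set>

struct WatchKey
{
    std::string path;
    bool is_list_watch;
    bool operator==(const WatchKey &) const = default;
};

struct WatchKeyHash
{
    size_t operator()(const WatchKey & key) const
    {
        /// crude hash combine, enough for the illustration
        return std::hash<std::string>{}(key.path) ^ (key.is_list_watch ? 0x9e3779b97f4a7c15ULL : 0);
    }
};

using SessionWatchesSketch = std::unordered_map<int64_t, std::unordered_set<WatchKey, WatchKeyHash>>;

void addBothWatches(SessionWatchesSketch & watches, int64_t session_id, const std::string & path)
{
    watches[session_id].insert({path, /*is_list_watch=*/false}); /// e.g. set by a getData watch
    watches[session_id].insert({path, /*is_list_watch=*/true});  /// e.g. set by a getChildren watch
}
```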
using SessionIDs = std::unordered_set<int64_t>;
/// Just vector of SHA1 from user:password
using AuthIDs = std::vector<AuthID>;
using SessionAndAuth = std::unordered_map<int64_t, AuthIDs>;
using Watches = std::unordered_map<String /* path, relative of root_path */, SessionIDs>;
using Watches = std::unordered_map<
String /* path, relative of root_path */,
SessionIDs,
StringHashForHeterogeneousLookup,
StringHashForHeterogeneousLookup::transparent_key_equal>;
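A hedged sketch (illustrative types, C++20) of what the transparent hash and key-equal buy here: find() and similar lookups can take a std::string_view path slice without first materializing a temporary std::string key.

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <string>
#include <string_view>
#include <unordered_map>
#include <unordered_set>

struct TransparentStringHash
{
    using is_transparent = void; /// enables heterogeneous overloads of find()/contains()
    size_t operator()(std::string_view s) const noexcept { return std::hash<std::string_view>{}(s); }
};

using SessionIDsSketch = std::unordered_set<int64_t>;
using WatchesSketch = std::unordered_map<std::string, SessionIDsSketch, TransparentStringHash, std::equal_to<>>;

bool hasWatch(const WatchesSketch & watches, std::string_view path)
{
    return watches.find(path) != watches.end(); /// no std::string temporary is created
}
```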
// Applying ZooKeeper request to storage consists of two steps:
// - preprocessing which, instead of applying the changes directly to storage,
// generates deltas with those changes, denoted with the request ZXID
// - processing which applies deltas with the correct ZXID to the storage
//
// Delta objects allow us two things:
// - fetch the latest, uncommitted state of an object by getting the committed
// state of that same object from the storage and applying the deltas
// in the same order as they are defined
// - quickly commit the changes to the storage
struct CreateNodeDelta
{
Coordination::Stat stat;
Coordination::ACLs acls;
String data;
};
struct RemoveNodeDelta
{
int32_t version{-1};
NodeStats stat;
Coordination::ACLs acls;
String data;
};
struct UpdateNodeStatDelta
{
template <is_any_of<KeeperMemNode, KeeperRocksNode> Node>
explicit UpdateNodeStatDelta(const Node & node)
: old_stats(node.stats)
, new_stats(node.stats)
{}
NodeStats old_stats;
NodeStats new_stats;
int32_t version{-1};
};
struct UpdateNodeDataDelta
{
std::string old_data;
std::string new_data;
int32_t version{-1};
};
struct SetACLDelta
{
Coordination::ACLs old_acls;
Coordination::ACLs new_acls;
int32_t version{-1};
};
struct ErrorDelta
{
Coordination::Error error;
};
struct FailedMultiDelta
{
std::vector<Coordination::Error> error_codes;
Coordination::Error global_error{Coordination::Error::ZOK};
};
// Denotes end of a subrequest in multi request
struct SubDeltaEnd
{
};
struct AddAuthDelta
{
int64_t session_id;
std::shared_ptr<AuthID> auth_id;
};
struct CloseSessionDelta
{
int64_t session_id;
};
using Operation = std::variant<
CreateNodeDelta,
RemoveNodeDelta,
UpdateNodeStatDelta,
UpdateNodeDataDelta,
SetACLDelta,
AddAuthDelta,
ErrorDelta,
SubDeltaEnd,
FailedMultiDelta,
CloseSessionDelta>;
struct Delta
{
Delta(String path_, int64_t zxid_, Operation operation_) : path(std::move(path_)), zxid(zxid_), operation(std::move(operation_)) { }
Delta(int64_t zxid_, Coordination::Error error) : Delta("", zxid_, ErrorDelta{error}) { }
Delta(int64_t zxid_, Operation subdelta) : Delta("", zxid_, subdelta) { }
String path;
int64_t zxid;
Operation operation;
};
using DeltaIterator = std::list<KeeperStorageBase::Delta>::const_iterator;
struct DeltaRange
{
DeltaIterator begin_it;
DeltaIterator end_it;
auto begin() const
{
return begin_it;
}
auto end() const
{
return end_it;
}
bool empty() const
{
return begin_it == end_it;
}
const auto & front() const
{
return *begin_it;
}
};
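An illustrative-only sketch of how the pieces above fit together, using simplified stand-in types rather than the real KeeperStorage API: preprocessing appends Delta objects tagged with a ZXID, and processing later commits a DeltaRange (a begin/end window over the delta list) by applying each operation in order.

```cpp
#include <cstdint>
#include <list>
#include <string>
#include <variant>

struct SetDataDeltaSketch { std::string path; std::string new_data; };
struct RemoveNodeDeltaSketch { std::string path; };
using OperationSketch = std::variant<SetDataDeltaSketch, RemoveNodeDeltaSketch>;

struct DeltaSketch { int64_t zxid; OperationSketch operation; };
using DeltaListSketch = std::list<DeltaSketch>;

struct DeltaRangeSketch
{
    DeltaListSketch::const_iterator begin_it;
    DeltaListSketch::const_iterator end_it;
};

/// Take the front run of deltas that belong to `zxid` (what processing would commit).
DeltaRangeSketch takeRangeForZxid(const DeltaListSketch & deltas, int64_t zxid)
{
    auto it = deltas.begin();
    while (it != deltas.end() && it->zxid == zxid)
        ++it;
    return {deltas.begin(), it};
}

/// Apply each concrete delta of the range in order (roughly what commit(DeltaRange) would do).
template <typename ApplyFn>
void commitRange(const DeltaRangeSketch & range, ApplyFn && apply)
{
    for (auto it = range.begin_it; it != range.end_it; ++it)
        std::visit(apply, it->operation);
}
```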
struct Stats
{
std::atomic<uint64_t> nodes_count = 0;
std::atomic<uint64_t> approximate_data_size = 0;
std::atomic<uint64_t> total_watches_count = 0;
std::atomic<uint64_t> watched_paths_count = 0;
std::atomic<uint64_t> sessions_with_watches_count = 0;
std::atomic<uint64_t> session_with_ephemeral_nodes_count = 0;
std::atomic<uint64_t> total_emphemeral_nodes_count = 0;
std::atomic<int64_t> last_zxid = 0;
};
Stats stats;
static bool checkDigest(const Digest & first, const Digest & second);
};
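The Stats counters above are atomics because introspection (for example getStorageStats() in the state machine, earlier in this diff) reads them without taking the storage lock. A minimal access-pattern sketch with assumed names:

```cpp
#include <atomic>
#include <cstdint>

struct StatsSketch
{
    std::atomic<uint64_t> nodes_count{0};
    std::atomic<int64_t> last_zxid{0};
};

/// Writer side: updated while committing a transaction.
void onCommit(StatsSketch & stats, int64_t zxid, uint64_t added_nodes)
{
    stats.nodes_count.fetch_add(added_nodes, std::memory_order_relaxed);
    stats.last_zxid.store(zxid, std::memory_order_relaxed);
}

/// Reader side: e.g. a monitoring/4lw command, no storage lock required.
uint64_t reportNodesCount(const StatsSketch & stats)
{
    return stats.nodes_count.load(std::memory_order_relaxed);
}
```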
/// Keeper state machine almost equal to the ZooKeeper's state machine.
/// Implements all logic of operations, data changes, sessions allocation.
/// In-memory and not thread safe.
@ -472,159 +526,73 @@ public:
int64_t session_id_counter{1};
SessionAndAuth session_and_auth;
mutable SharedMutex auth_mutex;
SessionAndAuth committed_session_and_auth;
mutable SharedMutex storage_mutex;
/// Main hashtable with nodes. Contain all information about data.
/// All other structures except session_and_timeout can be restored from
/// container.
Container container;
// Applying ZooKeeper request to storage consists of two steps:
// - preprocessing which, instead of applying the changes directly to storage,
// generates deltas with those changes, denoted with the request ZXID
// - processing which applies deltas with the correct ZXID to the storage
//
// Delta objects allow us two things:
// - fetch the latest, uncommitted state of an object by getting the committed
// state of that same object from the storage and applying the deltas
// in the same order as they are defined
// - quickly commit the changes to the storage
struct CreateNodeDelta
{
Coordination::Stat stat;
Coordination::ACLs acls;
String data;
};
struct RemoveNodeDelta
{
int32_t version{-1};
int64_t ephemeral_owner{0};
};
struct UpdateNodeDelta
{
std::function<void(Node &)> update_fn;
int32_t version{-1};
};
struct SetACLDelta
{
Coordination::ACLs acls;
int32_t version{-1};
};
struct ErrorDelta
{
Coordination::Error error;
};
struct FailedMultiDelta
{
std::vector<Coordination::Error> error_codes;
};
// Denotes end of a subrequest in multi request
struct SubDeltaEnd
{
};
struct AddAuthDelta
{
int64_t session_id;
AuthID auth_id;
};
struct CloseSessionDelta
{
int64_t session_id;
};
using Operation = std::
variant<CreateNodeDelta, RemoveNodeDelta, UpdateNodeDelta, SetACLDelta, AddAuthDelta, ErrorDelta, SubDeltaEnd, FailedMultiDelta, CloseSessionDelta>;
struct Delta
{
Delta(String path_, int64_t zxid_, Operation operation_) : path(std::move(path_)), zxid(zxid_), operation(std::move(operation_)) { }
Delta(int64_t zxid_, Coordination::Error error) : Delta("", zxid_, ErrorDelta{error}) { }
Delta(int64_t zxid_, Operation subdelta) : Delta("", zxid_, subdelta) { }
String path;
int64_t zxid;
Operation operation;
};
struct UncommittedState
{
explicit UncommittedState(KeeperStorage & storage_) : storage(storage_) { }
void addDelta(Delta new_delta);
void addDeltas(std::vector<Delta> new_deltas);
void commit(int64_t commit_zxid);
void addDeltas(std::list<Delta> new_deltas);
void cleanup(int64_t commit_zxid);
void rollback(int64_t rollback_zxid);
void rollback(std::list<Delta> rollback_deltas);
std::shared_ptr<Node> getNode(StringRef path) const;
std::shared_ptr<Node> getNode(StringRef path, bool should_lock_storage = true) const;
const Node * getActualNodeView(StringRef path, const Node & storage_node) const;
Coordination::ACLs getACLs(StringRef path) const;
void applyDeltas(const std::list<Delta> & new_deltas);
void applyDelta(const Delta & delta);
void rollbackDelta(const Delta & delta);
bool hasACL(int64_t session_id, bool is_local, std::function<bool(const AuthID &)> predicate) const;
void forEachAuthInSession(int64_t session_id, std::function<void(const AuthID &)> func) const;
std::shared_ptr<Node> tryGetNodeFromStorage(StringRef path) const;
std::shared_ptr<Node> tryGetNodeFromStorage(StringRef path, bool should_lock_storage = true) const;
std::unordered_map<int64_t, std::list<const AuthID *>> session_and_auth;
std::unordered_set<int64_t> closed_sessions;
using ZxidToNodes = std::map<int64_t, std::unordered_set<std::string_view>>;
struct UncommittedNode
{
std::shared_ptr<Node> node{nullptr};
Coordination::ACLs acls{};
int64_t zxid{0};
};
std::optional<Coordination::ACLs> acls{};
std::unordered_set<uint64_t> applied_zxids{};
struct Hash
{
auto operator()(const std::string_view view) const
{
SipHash hash;
hash.update(view);
return hash.get64();
}
using is_transparent = void; // required to make find() work with different type than key_type
};
struct Equal
{
auto operator()(const std::string_view a,
const std::string_view b) const
{
return a == b;
}
using is_transparent = void; // required to make find() work with different type than key_type
void materializeACL(const ACLMap & current_acl_map);
};
struct PathCmp
{
using is_transparent = std::true_type;
auto operator()(const std::string_view a,
const std::string_view b) const
{
return a.size() < b.size() || (a.size() == b.size() && a < b);
size_t level_a = std::count(a.begin(), a.end(), '/');
size_t level_b = std::count(b.begin(), b.end(), '/');
return level_a < level_b || (level_a == level_b && a < b);
}
using is_transparent = void; // required to make find() work with different type than key_type
};
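The comparator change above (compare by number of path components first, then lexicographically) is what the "Recursive Remove Level Sorting" test later in this diff exercises: a parent now always sorts before its descendants, regardless of how long sibling names are. A self-contained illustration with assumed names:

```cpp
#include <algorithm>
#include <cstddef>
#include <map>
#include <string>
#include <string_view>

struct LevelThenLexCmp
{
    using is_transparent = void;
    bool operator()(std::string_view a, std::string_view b) const
    {
        size_t level_a = std::count(a.begin(), a.end(), '/');
        size_t level_b = std::count(b.begin(), b.end(), '/');
        return level_a < level_b || (level_a == level_b && a < b);
    }
};

/// Iteration visits "/A" and "/a" (one component) before "/A/B" and "/a/bbbbbb" (two components),
/// whereas a purely size-based comparison would put "/a/bbbbbb" after "/A/CCCCCCCCCCCC".
using UncommittedNodesSketch = std::map<std::string, int, LevelThenLexCmp>;
```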
mutable std::map<std::string, UncommittedNode, PathCmp> nodes;
std::unordered_map<std::string, std::list<const Delta *>, Hash, Equal> deltas_for_path;
Ephemerals ephemerals;
std::list<Delta> deltas;
std::unordered_map<int64_t, std::list<std::pair<int64_t, std::shared_ptr<AuthID>>>> session_and_auth;
mutable std::map<std::string, UncommittedNode, PathCmp> nodes;
mutable ZxidToNodes zxid_to_nodes;
mutable std::mutex deltas_mutex;
std::list<Delta> deltas TSA_GUARDED_BY(deltas_mutex);
KeeperStorage<Container> & storage;
};
@ -634,7 +602,7 @@ public:
// with zxid > last_zxid
void applyUncommittedState(KeeperStorage & other, int64_t last_log_idx);
Coordination::Error commit(int64_t zxid);
Coordination::Error commit(DeltaRange deltas);
// Create node in the storage
// Returns false if it failed to create the node, true otherwise
@ -652,12 +620,11 @@ public:
bool checkACL(StringRef path, int32_t permissions, int64_t session_id, bool is_local);
void unregisterEphemeralPath(int64_t session_id, const std::string & path);
std::mutex ephemeral_mutex;
/// Mapping session_id -> set of ephemeral nodes paths
Ephemerals ephemerals;
/// Mapping session_id -> set of watched nodes paths
SessionAndWatcher sessions_and_watchers;
Ephemerals committed_ephemerals;
size_t committed_ephemeral_nodes{0};
/// Expiration queue for session, allows to get dead sessions at some point of time
SessionExpiryQueue session_expiry_queue;
/// All active sessions with timeout
@ -666,8 +633,10 @@ public:
/// ACLMap for more compact ACLs storage inside nodes.
ACLMap acl_map;
mutable std::mutex transaction_mutex;
/// Global id of all requests applied to storage
int64_t zxid{0};
int64_t zxid TSA_GUARDED_BY(transaction_mutex) = 0;
// older Keeper node (pre V5 snapshots) can create snapshots and receive logs from newer Keeper nodes
// this can lead to some inconsistencies, e.g. from snapshot it will use log_idx as zxid
@ -684,11 +653,16 @@ public:
int64_t log_idx = 0;
};
std::deque<TransactionInfo> uncommitted_transactions;
std::list<TransactionInfo> uncommitted_transactions TSA_GUARDED_BY(transaction_mutex);
uint64_t nodes_digest{0};
uint64_t nodes_digest = 0;
bool finalized{false};
std::atomic<bool> finalized{false};
/// Mapping session_id -> set of watched nodes paths
SessionAndWatcher sessions_and_watchers;
size_t total_watches_count = 0;
/// Currently active watches (node_path -> subscribed sessions)
Watches watches;
@ -697,45 +671,30 @@ public:
void clearDeadWatches(int64_t session_id);
/// Get current committed zxid
int64_t getZXID() const { return zxid; }
int64_t getZXID() const;
int64_t getNextZXID() const
{
if (uncommitted_transactions.empty())
return zxid + 1;
int64_t getNextZXID() const;
int64_t getNextZXIDLocked() const TSA_REQUIRES(transaction_mutex);
return uncommitted_transactions.back().zxid + 1;
}
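A minimal sketch (assumed member names) of the getNextZXID()/getNextZXIDLocked() split introduced here: the public method acquires transaction_mutex and delegates to the *Locked variant, which callers already holding the mutex can use directly.

```cpp
#include <cstdint>
#include <deque>
#include <mutex>

struct ZxidSourceSketch
{
    mutable std::mutex transaction_mutex;
    int64_t zxid = 0;                      /// guarded by transaction_mutex
    std::deque<int64_t> uncommitted_zxids; /// guarded by transaction_mutex

    int64_t getNextZXID() const
    {
        std::lock_guard lock(transaction_mutex);
        return getNextZXIDLocked();
    }

    int64_t getNextZXIDLocked() const /// caller must already hold transaction_mutex
    {
        if (uncommitted_zxids.empty())
            return zxid + 1;
        return uncommitted_zxids.back() + 1;
    }
};
```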
Digest getNodesDigest(bool committed) const;
Digest getNodesDigest(bool committed, bool lock_transaction_mutex) const;
KeeperContextPtr keeper_context;
const String superdigest;
bool initialized{false};
std::atomic<bool> initialized{false};
KeeperStorage(int64_t tick_time_ms, const String & superdigest_, const KeeperContextPtr & keeper_context_, bool initialize_system_nodes = true);
void initializeSystemNodes();
void initializeSystemNodes() TSA_NO_THREAD_SAFETY_ANALYSIS;
/// Allocate new session id with the specified timeouts
int64_t getSessionID(int64_t session_timeout_ms)
{
auto result = session_id_counter++;
session_and_timeout.emplace(result, session_timeout_ms);
session_expiry_queue.addNewSessionOrUpdate(result, session_timeout_ms);
return result;
}
int64_t getSessionID(int64_t session_timeout_ms);
/// Add session id. Used when restoring KeeperStorage from snapshot.
void addSessionID(int64_t session_id, int64_t session_timeout_ms)
{
session_and_timeout.emplace(session_id, session_timeout_ms);
session_expiry_queue.addNewSessionOrUpdate(session_id, session_timeout_ms);
}
void addSessionID(int64_t session_id, int64_t session_timeout_ms) TSA_NO_THREAD_SAFETY_ANALYSIS;
UInt64 calculateNodesDigest(UInt64 current_digest, const std::vector<Delta> & new_deltas) const;
UInt64 calculateNodesDigest(UInt64 current_digest, const std::list<Delta> & new_deltas) const;
/// Process user request and return response.
/// check_acl = false only when converting data from ZooKeeper.
@ -762,42 +721,39 @@ public:
/// Set of methods for creating snapshots
/// Turn on snapshot mode, so data inside Container is not deleted, but replaced with new version.
void enableSnapshotMode(size_t up_to_version)
{
container.enableSnapshotMode(up_to_version);
}
void enableSnapshotMode(size_t up_to_version);
/// Turn off snapshot mode.
void disableSnapshotMode()
{
container.disableSnapshotMode();
}
void disableSnapshotMode();
Container::const_iterator getSnapshotIteratorBegin() const { return container.begin(); }
Container::const_iterator getSnapshotIteratorBegin() const;
/// Clear outdated data from internal container.
void clearGarbageAfterSnapshot() { container.clearOutdatedNodes(); }
void clearGarbageAfterSnapshot();
/// Get all active sessions
const SessionAndTimeout & getActiveSessions() const { return session_and_timeout; }
SessionAndTimeout getActiveSessions() const;
/// Get all dead sessions
std::vector<int64_t> getDeadSessions() const { return session_expiry_queue.getExpiredSessions(); }
std::vector<int64_t> getDeadSessions() const;
void updateStats();
const Stats & getStorageStats() const;
/// Introspection functions mostly used in 4-letter commands
uint64_t getNodesCount() const { return container.size(); }
uint64_t getNodesCount() const;
uint64_t getApproximateDataSize() const { return container.getApproximateDataSize(); }
uint64_t getApproximateDataSize() const;
uint64_t getArenaDataSize() const { return container.keyArenaSize(); }
uint64_t getArenaDataSize() const;
uint64_t getTotalWatchesCount() const;
uint64_t getWatchedPathsCount() const { return watches.size() + list_watches.size(); }
uint64_t getWatchedPathsCount() const;
uint64_t getSessionsWithWatchesCount() const;
uint64_t getSessionWithEphemeralNodesCount() const { return ephemerals.size(); }
uint64_t getSessionWithEphemeralNodesCount() const;
uint64_t getTotalEphemeralNodesCount() const;
void dumpWatches(WriteBufferFromOwnString & buf) const;


@ -155,11 +155,11 @@ public:
ReadBufferFromOwnString buffer(iter->value().ToStringView());
typename Node::Meta & meta = new_pair->value;
readPODBinary(meta, buffer);
readVarUInt(new_pair->value.data_size, buffer);
if (new_pair->value.data_size)
readVarUInt(new_pair->value.stats.data_size, buffer);
if (new_pair->value.stats.data_size)
{
new_pair->value.data = std::unique_ptr<char[]>(new char[new_pair->value.data_size]);
buffer.readStrict(new_pair->value.data.get(), new_pair->value.data_size);
new_pair->value.data = std::unique_ptr<char[]>(new char[new_pair->value.stats.data_size]);
buffer.readStrict(new_pair->value.data.get(), new_pair->value.stats.data_size);
}
pair = new_pair;
}
@ -211,7 +211,7 @@ public:
}
}
std::vector<std::pair<std::string, Node>> getChildren(const std::string & key_)
std::vector<std::pair<std::string, Node>> getChildren(const std::string & key_, bool read_data = false)
{
rocksdb::ReadOptions read_options;
read_options.total_order_seek = true;
@ -232,6 +232,15 @@ public:
typename Node::Meta & meta = node;
/// Data is only read below when read_data is set
readPODBinary(meta, buffer);
if (read_data)
{
readVarUInt(meta.stats.data_size, buffer);
if (meta.stats.data_size)
{
node.data = std::unique_ptr<char[]>(new char[meta.stats.data_size]);
buffer.readStrict(node.data.get(), meta.stats.data_size);
}
}
std::string real_key(iter->key().data() + len, iter->key().size() - len);
// std::cout << "real key: " << real_key << std::endl;
result.emplace_back(std::move(real_key), std::move(node));
@ -268,11 +277,11 @@ public:
typename Node::Meta & meta = kv->value;
readPODBinary(meta, buffer);
/// TODO: Sometimes we don't need to load data.
readVarUInt(kv->value.data_size, buffer);
if (kv->value.data_size)
readVarUInt(kv->value.stats.data_size, buffer);
if (kv->value.stats.data_size)
{
kv->value.data = std::unique_ptr<char[]>(new char[kv->value.data_size]);
buffer.readStrict(kv->value.data.get(), kv->value.data_size);
kv->value.data = std::unique_ptr<char[]>(new char[kv->value.stats.data_size]);
buffer.readStrict(kv->value.data.get(), kv->value.stats.data_size);
}
return const_iterator(kv);
}
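The reads above imply a simple record layout for each stored node value: a POD meta block, then the data size, then the raw bytes, with the data part skipped unless it is actually needed. A hedged round-trip sketch with stand-in types (fixed-width size here; the real code uses varint read/write helpers):

```cpp
#include <cstdint>
#include <cstring>
#include <memory>
#include <string>
#include <string_view>

struct NodeMetaSketch { int64_t czxid = 0; int64_t mzxid = 0; uint32_t data_size = 0; };

std::string encodeNode(NodeMetaSketch meta, std::string_view data)
{
    meta.data_size = static_cast<uint32_t>(data.size());
    std::string out(reinterpret_cast<const char *>(&meta), sizeof(meta));          /// POD meta block
    out.append(reinterpret_cast<const char *>(&meta.data_size), sizeof(meta.data_size));
    out.append(data);                                                               /// payload
    return out;
}

NodeMetaSketch decodeNode(std::string_view buf, bool read_data, std::unique_ptr<char[]> & data_out)
{
    NodeMetaSketch meta;
    std::memcpy(&meta, buf.data(), sizeof(meta));
    uint32_t size = 0;
    std::memcpy(&size, buf.data() + sizeof(meta), sizeof(size));
    meta.data_size = size;
    if (read_data && size != 0) /// mirrors the new read_data flag in getChildren()
    {
        data_out = std::make_unique<char[]>(size);
        std::memcpy(data_out.get(), buf.data() + sizeof(meta) + sizeof(size), size);
    }
    return meta;
}
```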
@ -281,7 +290,7 @@ public:
{
auto it = find(key);
chassert(it != end());
return MockNode(it->value.numChildren(), it->value.getData());
return MockNode(it->value.stats.numChildren(), it->value.getData());
}
const_iterator updateValue(StringRef key_, ValueUpdater updater)


@ -93,7 +93,7 @@ void deserializeACLMap(Storage & storage, ReadBuffer & in)
}
template<typename Storage>
int64_t deserializeStorageData(Storage & storage, ReadBuffer & in, LoggerPtr log)
int64_t deserializeStorageData(Storage & storage, ReadBuffer & in, LoggerPtr log) TSA_NO_THREAD_SAFETY_ANALYSIS
{
int64_t max_zxid = 0;
std::string path;
@ -108,33 +108,33 @@ int64_t deserializeStorageData(Storage & storage, ReadBuffer & in, LoggerPtr log
Coordination::read(node.acl_id, in);
/// Deserialize stat
Coordination::read(node.czxid, in);
Coordination::read(node.mzxid, in);
Coordination::read(node.stats.czxid, in);
Coordination::read(node.stats.mzxid, in);
/// For some reason the ZXID specified in the filename can be smaller
/// than the actual zxid from nodes. In this case we will use the zxid from nodes.
max_zxid = std::max(max_zxid, node.mzxid);
max_zxid = std::max(max_zxid, node.stats.mzxid);
int64_t ctime;
Coordination::read(ctime, in);
node.setCtime(ctime);
Coordination::read(node.mtime, in);
Coordination::read(node.version, in);
Coordination::read(node.cversion, in);
Coordination::read(node.aversion, in);
node.stats.setCtime(ctime);
Coordination::read(node.stats.mtime, in);
Coordination::read(node.stats.version, in);
Coordination::read(node.stats.cversion, in);
Coordination::read(node.stats.aversion, in);
int64_t ephemeral_owner;
Coordination::read(ephemeral_owner, in);
if (ephemeral_owner != 0)
node.setEphemeralOwner(ephemeral_owner);
Coordination::read(node.pzxid, in);
node.stats.setEphemeralOwner(ephemeral_owner);
Coordination::read(node.stats.pzxid, in);
if (!path.empty())
{
if (ephemeral_owner == 0)
node.setSeqNum(node.cversion);
node.stats.setSeqNum(node.stats.cversion);
storage.container.insertOrReplace(path, node);
if (ephemeral_owner != 0)
storage.ephemerals[ephemeral_owner].insert(path);
storage.committed_ephemerals[ephemeral_owner].insert(path);
storage.acl_map.addUsage(node.acl_id);
}
@ -149,7 +149,13 @@ int64_t deserializeStorageData(Storage & storage, ReadBuffer & in, LoggerPtr log
if (itr.key != "/")
{
auto parent_path = parentNodePath(itr.key);
storage.container.updateValue(parent_path, [my_path = itr.key] (typename Storage::Node & value) { value.addChild(getBaseNodeName(my_path)); value.increaseNumChildren(); });
storage.container.updateValue(
parent_path,
[my_path = itr.key](typename Storage::Node & value)
{
value.addChild(getBaseNodeName(my_path));
value.stats.increaseNumChildren();
});
}
}
@ -157,7 +163,7 @@ int64_t deserializeStorageData(Storage & storage, ReadBuffer & in, LoggerPtr log
}
template<typename Storage>
void deserializeKeeperStorageFromSnapshot(Storage & storage, const std::string & snapshot_path, LoggerPtr log)
void deserializeKeeperStorageFromSnapshot(Storage & storage, const std::string & snapshot_path, LoggerPtr log) TSA_NO_THREAD_SAFETY_ANALYSIS
{
LOG_INFO(log, "Deserializing storage snapshot {}", snapshot_path);
int64_t zxid = getZxidFromName(snapshot_path);
@ -487,7 +493,7 @@ bool hasErrorsInMultiRequest(Coordination::ZooKeeperRequestPtr request)
}
template<typename Storage>
bool deserializeTxn(Storage & storage, ReadBuffer & in, LoggerPtr /*log*/)
bool deserializeTxn(Storage & storage, ReadBuffer & in, LoggerPtr /*log*/) TSA_NO_THREAD_SAFETY_ANALYSIS
{
int64_t checksum;
Coordination::read(checksum, in);
@ -568,7 +574,7 @@ void deserializeLogAndApplyToStorage(Storage & storage, const std::string & log_
}
template<typename Storage>
void deserializeLogsAndApplyToStorage(Storage & storage, const std::string & path, LoggerPtr log)
void deserializeLogsAndApplyToStorage(Storage & storage, const std::string & path, LoggerPtr log) TSA_NO_THREAD_SAFETY_ANALYSIS
{
std::map<int64_t, std::string> existing_logs;
for (const auto & p : fs::directory_iterator(path))


@ -1,6 +1,7 @@
#include <chrono>
#include <gtest/gtest.h>
#include "base/defines.h"
#include "config.h"
#if USE_NURAFT
@ -1540,7 +1541,7 @@ void addNode(Storage & storage, const std::string & path, const std::string & da
using Node = typename Storage::Node;
Node node{};
node.setData(data);
node.setEphemeralOwner(ephemeral_owner);
node.stats.setEphemeralOwner(ephemeral_owner);
storage.container.insertOrReplace(path, node);
auto child_it = storage.container.find(path);
auto child_path = DB::getBaseNodeName(child_it->key);
@ -1549,7 +1550,7 @@ void addNode(Storage & storage, const std::string & path, const std::string & da
[&](auto & parent)
{
parent.addChild(child_path);
parent.increaseNumChildren();
parent.stats.increaseNumChildren();
});
}
@ -1570,9 +1571,9 @@ TYPED_TEST(CoordinationTest, TestStorageSnapshotSimple)
addNode(storage, "/hello1", "world", 1);
addNode(storage, "/hello2", "somedata", 3);
storage.session_id_counter = 5;
storage.zxid = 2;
storage.ephemerals[3] = {"/hello2"};
storage.ephemerals[1] = {"/hello1"};
TSA_SUPPRESS_WARNING_FOR_WRITE(storage.zxid) = 2;
storage.committed_ephemerals[3] = {"/hello2"};
storage.committed_ephemerals[1] = {"/hello1"};
storage.getSessionID(130);
storage.getSessionID(130);
@ -1601,10 +1602,10 @@ TYPED_TEST(CoordinationTest, TestStorageSnapshotSimple)
EXPECT_EQ(restored_storage->container.getValue("/hello1").getData(), "world");
EXPECT_EQ(restored_storage->container.getValue("/hello2").getData(), "somedata");
EXPECT_EQ(restored_storage->session_id_counter, 7);
EXPECT_EQ(restored_storage->zxid, 2);
EXPECT_EQ(restored_storage->ephemerals.size(), 2);
EXPECT_EQ(restored_storage->ephemerals[3].size(), 1);
EXPECT_EQ(restored_storage->ephemerals[1].size(), 1);
EXPECT_EQ(restored_storage->getZXID(), 2);
EXPECT_EQ(restored_storage->committed_ephemerals.size(), 2);
EXPECT_EQ(restored_storage->committed_ephemerals[3].size(), 1);
EXPECT_EQ(restored_storage->committed_ephemerals[1].size(), 1);
EXPECT_EQ(restored_storage->session_and_timeout.size(), 2);
}
@ -2027,7 +2028,7 @@ TYPED_TEST(CoordinationTest, TestEphemeralNodeRemove)
state_machine->commit(1, entry_c->get_buf());
const auto & storage = state_machine->getStorageUnsafe();
EXPECT_EQ(storage.ephemerals.size(), 1);
EXPECT_EQ(storage.committed_ephemerals.size(), 1);
std::shared_ptr<ZooKeeperRemoveRequest> request_d = std::make_shared<ZooKeeperRemoveRequest>();
request_d->path = "/hello";
/// Delete from other session
@ -2035,7 +2036,7 @@ TYPED_TEST(CoordinationTest, TestEphemeralNodeRemove)
state_machine->pre_commit(2, entry_d->get_buf());
state_machine->commit(2, entry_d->get_buf());
EXPECT_EQ(storage.ephemerals.size(), 0);
EXPECT_EQ(storage.committed_ephemerals.size(), 0);
}
@ -2280,6 +2281,62 @@ TYPED_TEST(CoordinationTest, TestPreprocessWhenCloseSessionIsPrecommitted)
}
}
TYPED_TEST(CoordinationTest, TestMultiRequestWithNoAuth)
{
using namespace Coordination;
using namespace DB;
ChangelogDirTest snapshots("./snapshots");
this->setSnapshotDirectory("./snapshots");
using Storage = typename TestFixture::Storage;
ChangelogDirTest rocks("./rocksdb");
this->setRocksDBDirectory("./rocksdb");
ResponsesQueue queue(std::numeric_limits<size_t>::max());
SnapshotsQueue snapshots_queue{1};
int64_t session_without_auth = 1;
int64_t session_with_auth = 2;
size_t term = 0;
auto state_machine = std::make_shared<KeeperStateMachine<Storage>>(queue, snapshots_queue, this->keeper_context, nullptr);
state_machine->init();
auto & storage = state_machine->getStorageUnsafe();
auto auth_req = std::make_shared<ZooKeeperAuthRequest>();
auth_req->scheme = "digest";
auth_req->data = "test_user:test_password";
// Add auth data to the session
auto auth_entry = getLogEntryFromZKRequest(term, session_with_auth, state_machine->getNextZxid(), auth_req);
state_machine->pre_commit(1, auth_entry->get_buf());
state_machine->commit(1, auth_entry->get_buf());
std::string node_with_acl = "/node_with_acl";
{
auto create_req = std::make_shared<ZooKeeperCreateRequest>();
create_req->path = node_with_acl;
create_req->data = "notmodified";
create_req->acls = {{.permissions = ACL::Read, .scheme = "auth", .id = ""}};
auto create_entry = getLogEntryFromZKRequest(term, session_with_auth, state_machine->getNextZxid(), create_req);
state_machine->pre_commit(3, create_entry->get_buf());
state_machine->commit(3, create_entry->get_buf());
ASSERT_TRUE(storage.container.contains(node_with_acl));
}
Requests ops;
ops.push_back(zkutil::makeSetRequest(node_with_acl, "modified", -1));
ops.push_back(zkutil::makeCheckRequest("/nonexistentnode", -1));
auto multi_req = std::make_shared<ZooKeeperMultiRequest>(ops, ACLs{});
auto multi_entry = getLogEntryFromZKRequest(term, session_without_auth, state_machine->getNextZxid(), multi_req);
state_machine->pre_commit(4, multi_entry->get_buf());
state_machine->commit(4, multi_entry->get_buf());
auto node_it = storage.container.find(node_with_acl);
ASSERT_FALSE(node_it == storage.container.end());
ASSERT_TRUE(node_it->value.getData() == "notmodified");
}
TYPED_TEST(CoordinationTest, TestSetACLWithAuthSchemeForAclWhenAuthIsPrecommitted)
{
using namespace Coordination;
@ -2534,9 +2591,9 @@ TYPED_TEST(CoordinationTest, TestStorageSnapshotDifferentCompressions)
addNode(storage, "/hello1", "world", 1);
addNode(storage, "/hello2", "somedata", 3);
storage.session_id_counter = 5;
storage.zxid = 2;
storage.ephemerals[3] = {"/hello2"};
storage.ephemerals[1] = {"/hello1"};
TSA_SUPPRESS_WARNING_FOR_WRITE(storage.zxid) = 2;
storage.committed_ephemerals[3] = {"/hello2"};
storage.committed_ephemerals[1] = {"/hello1"};
storage.getSessionID(130);
storage.getSessionID(130);
@ -2561,10 +2618,10 @@ TYPED_TEST(CoordinationTest, TestStorageSnapshotDifferentCompressions)
EXPECT_EQ(restored_storage->container.getValue("/hello1").getData(), "world");
EXPECT_EQ(restored_storage->container.getValue("/hello2").getData(), "somedata");
EXPECT_EQ(restored_storage->session_id_counter, 7);
EXPECT_EQ(restored_storage->zxid, 2);
EXPECT_EQ(restored_storage->ephemerals.size(), 2);
EXPECT_EQ(restored_storage->ephemerals[3].size(), 1);
EXPECT_EQ(restored_storage->ephemerals[1].size(), 1);
EXPECT_EQ(restored_storage->getZXID(), 2);
EXPECT_EQ(restored_storage->committed_ephemerals.size(), 2);
EXPECT_EQ(restored_storage->committed_ephemerals[3].size(), 1);
EXPECT_EQ(restored_storage->committed_ephemerals[1].size(), 1);
EXPECT_EQ(restored_storage->session_and_timeout.size(), 2);
}
@ -2749,13 +2806,13 @@ TYPED_TEST(CoordinationTest, TestStorageSnapshotEqual)
storage.session_id_counter = 5;
storage.ephemerals[3] = {"/hello"};
storage.ephemerals[1] = {"/hello/somepath"};
storage.committed_ephemerals[3] = {"/hello"};
storage.committed_ephemerals[1] = {"/hello/somepath"};
for (size_t j = 0; j < 3333; ++j)
storage.getSessionID(130 * j);
DB::KeeperStorageSnapshot<Storage> snapshot(&storage, storage.zxid);
DB::KeeperStorageSnapshot<Storage> snapshot(&storage, storage.getZXID());
auto buf = manager.serializeSnapshotToBuffer(snapshot);
@ -3259,7 +3316,7 @@ TYPED_TEST(CoordinationTest, TestCheckNotExistsRequest)
create_path("/test_node");
auto node_it = storage.container.find("/test_node");
ASSERT_NE(node_it, storage.container.end());
auto node_version = node_it->value.version;
auto node_version = node_it->value.stats.version;
{
SCOPED_TRACE("CheckNotExists returns ZNODEEXISTS");
@ -3510,12 +3567,12 @@ TYPED_TEST(CoordinationTest, TestRemoveRecursiveRequest)
{
SCOPED_TRACE("Recursive Remove Ephemeral");
create("/T7", zkutil::CreateMode::Ephemeral);
ASSERT_EQ(storage.ephemerals.size(), 1);
ASSERT_EQ(storage.committed_ephemerals.size(), 1);
auto responses = remove_recursive("/T7", 100);
ASSERT_EQ(responses.size(), 1);
ASSERT_EQ(responses[0].response->error, Coordination::Error::ZOK);
ASSERT_EQ(storage.ephemerals.size(), 0);
ASSERT_EQ(storage.committed_ephemerals.size(), 0);
ASSERT_FALSE(exists("/T7"));
}
@ -3525,12 +3582,12 @@ TYPED_TEST(CoordinationTest, TestRemoveRecursiveRequest)
create("/T8/A", zkutil::CreateMode::Persistent);
create("/T8/B", zkutil::CreateMode::Ephemeral);
create("/T8/A/C", zkutil::CreateMode::Ephemeral);
ASSERT_EQ(storage.ephemerals.size(), 1);
ASSERT_EQ(storage.committed_ephemerals.size(), 1);
auto responses = remove_recursive("/T8", 4);
ASSERT_EQ(responses.size(), 1);
ASSERT_EQ(responses[0].response->error, Coordination::Error::ZOK);
ASSERT_EQ(storage.ephemerals.size(), 0);
ASSERT_EQ(storage.committed_ephemerals.size(), 0);
ASSERT_FALSE(exists("/T8"));
ASSERT_FALSE(exists("/T8/A"));
ASSERT_FALSE(exists("/T8/B"));
@ -3682,6 +3739,72 @@ TYPED_TEST(CoordinationTest, TestRemoveRecursiveInMultiRequest)
ASSERT_FALSE(exists("/A/B"));
ASSERT_FALSE(exists("/A/B/D"));
}
{
SCOPED_TRACE("Recursive Remove For Subtree With Updated Node");
int create_zxid = ++zxid;
auto ops = prepare_create_tree();
/// First create nodes
const auto create_request = std::make_shared<ZooKeeperMultiRequest>(ops, ACLs{});
storage.preprocessRequest(create_request, 1, 0, create_zxid);
auto create_responses = storage.processRequest(create_request, 1, create_zxid);
ASSERT_EQ(create_responses.size(), 1);
ASSERT_TRUE(is_multi_ok(create_responses[0].response));
/// Small limit
int remove_zxid = ++zxid;
ops = {
zkutil::makeSetRequest("/A/B", "", -1),
zkutil::makeRemoveRecursiveRequest("/A", 3),
};
auto remove_request = std::make_shared<ZooKeeperMultiRequest>(ops, ACLs{});
storage.preprocessRequest(remove_request, 1, 0, remove_zxid);
auto remove_responses = storage.processRequest(remove_request, 1, remove_zxid);
ASSERT_EQ(remove_responses.size(), 1);
ASSERT_FALSE(is_multi_ok(remove_responses[0].response));
/// Big limit
remove_zxid = ++zxid;
ops[1] = zkutil::makeRemoveRecursiveRequest("/A", 4);
remove_request = std::make_shared<ZooKeeperMultiRequest>(ops, ACLs{});
storage.preprocessRequest(remove_request, 1, 0, remove_zxid);
remove_responses = storage.processRequest(remove_request, 1, remove_zxid);
ASSERT_EQ(remove_responses.size(), 1);
ASSERT_TRUE(is_multi_ok(remove_responses[0].response));
ASSERT_FALSE(exists("/A"));
ASSERT_FALSE(exists("/A/C"));
ASSERT_FALSE(exists("/A/B"));
ASSERT_FALSE(exists("/A/B/D"));
}
{
SCOPED_TRACE("[BUG] Recursive Remove Level Sorting");
int new_zxid = ++zxid;
Coordination::Requests ops = {
zkutil::makeCreateRequest("/a", "", zkutil::CreateMode::Persistent),
zkutil::makeCreateRequest("/a/bbbbbb", "", zkutil::CreateMode::Persistent),
zkutil::makeCreateRequest("/A", "", zkutil::CreateMode::Persistent),
zkutil::makeCreateRequest("/A/B", "", zkutil::CreateMode::Persistent),
zkutil::makeCreateRequest("/A/CCCCCCCCCCCC", "", zkutil::CreateMode::Persistent),
zkutil::makeRemoveRecursiveRequest("/A", 3),
};
auto remove_request = std::make_shared<ZooKeeperMultiRequest>(ops, ACLs{});
storage.preprocessRequest(remove_request, 1, 0, new_zxid);
auto remove_responses = storage.processRequest(remove_request, 1, new_zxid);
ASSERT_EQ(remove_responses.size(), 1);
ASSERT_TRUE(is_multi_ok(remove_responses[0].response));
ASSERT_TRUE(exists("/a"));
ASSERT_TRUE(exists("/a/bbbbbb"));
ASSERT_FALSE(exists("/A"));
ASSERT_FALSE(exists("/A/B"));
ASSERT_FALSE(exists("/A/CCCCCCCCCCCC"));
}
}
TYPED_TEST(CoordinationTest, TestRemoveRecursiveWatches)
@ -3767,14 +3890,26 @@ TYPED_TEST(CoordinationTest, TestRemoveRecursiveWatches)
auto responses = storage.processRequest(remove_request, 1, new_zxid);
ASSERT_EQ(responses.size(), 7);
/// request response is last
ASSERT_EQ(dynamic_cast<Coordination::ZooKeeperWatchResponse *>(responses.back().response.get()), nullptr);
for (size_t i = 0; i < 7; ++i)
std::unordered_map<std::string, std::vector<Coordination::Event>> expected_watch_responses
{
{"/A/B/D", {Coordination::Event::DELETED}},
{"/A/B", {Coordination::Event::CHILD, Coordination::Event::DELETED}},
{"/A/C", {Coordination::Event::DELETED}},
{"/A", {Coordination::Event::CHILD, Coordination::Event::DELETED}},
};
std::unordered_map<std::string, std::vector<Coordination::Event>> actual_watch_responses;
for (size_t i = 0; i < 6; ++i)
{
ASSERT_EQ(responses[i].response->error, Coordination::Error::ZOK);
if (const auto * watch_response = dynamic_cast<Coordination::ZooKeeperWatchResponse *>(responses[i].response.get()))
ASSERT_EQ(watch_response->type, Coordination::Event::DELETED);
const auto & watch_response = dynamic_cast<Coordination::ZooKeeperWatchResponse &>(*responses[i].response);
actual_watch_responses[watch_response.path].push_back(static_cast<Coordination::Event>(watch_response.type));
}
ASSERT_EQ(expected_watch_responses, actual_watch_responses);
ASSERT_EQ(storage.watches.size(), 0);
ASSERT_EQ(storage.list_watches.size(), 0);


@ -151,6 +151,15 @@ Names NamesAndTypesList::getNames() const
return res;
}
NameSet NamesAndTypesList::getNameSet() const
{
NameSet res;
res.reserve(size());
for (const NameAndTypePair & column : *this)
res.insert(column.name);
return res;
}
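A short usage sketch at a hypothetical call site: getNameSet() gives O(1) membership checks where repeatedly scanning getNames() would be quadratic.

```cpp
#include <Core/NamesAndTypesList.h>

bool hasAllColumns(const DB::NamesAndTypesList & columns, const DB::Names & required)
{
    const DB::NameSet names = columns.getNameSet();
    for (const auto & name : required)
        if (!names.contains(name))
            return false;
    return true;
}
```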
DataTypes NamesAndTypesList::getTypes() const
{
DataTypes res;


@ -100,6 +100,7 @@ public:
void getDifference(const NamesAndTypesList & rhs, NamesAndTypesList & deleted, NamesAndTypesList & added) const;
Names getNames() const;
NameSet getNameSet() const;
DataTypes getTypes() const;
/// Remove columns which names are not in the `names`.


@ -890,16 +890,19 @@ public:
Messaging::MessageTransport & mt,
const Poco::Net::SocketAddress & address)
{
AuthenticationType user_auth_type;
try
{
user_auth_type = session.getAuthenticationTypeOrLogInFailure(user_name);
if (type_to_method.find(user_auth_type) != type_to_method.end())
const auto user_authentication_types = session.getAuthenticationTypesOrLogInFailure(user_name);
for (auto user_authentication_type : user_authentication_types)
{
type_to_method[user_auth_type]->authenticate(user_name, session, mt, address);
mt.send(Messaging::AuthenticationOk(), true);
LOG_DEBUG(log, "Authentication for user {} was successful.", user_name);
return;
if (type_to_method.find(user_authentication_type) != type_to_method.end())
{
type_to_method[user_authentication_type]->authenticate(user_name, session, mt, address);
mt.send(Messaging::AuthenticationOk(), true);
LOG_DEBUG(log, "Authentication for user {} was successful.", user_name);
return;
}
}
}
catch (const Exception&)
@ -913,7 +916,7 @@ public:
mt.send(Messaging::ErrorOrNoticeResponse(Messaging::ErrorOrNoticeResponse::ERROR, "0A000", "Authentication method is not supported"),
true);
throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Authentication method is not supported: {}", user_auth_type);
throw Exception(ErrorCodes::NOT_IMPLEMENTED, "None of the authentication methods registered for the user are supported");
}
};
}

View File

@ -33,7 +33,9 @@ static constexpr auto DBMS_MIN_REVISION_WITH_AGGREGATE_FUNCTIONS_VERSIONING = 54
static constexpr auto DBMS_CLUSTER_PROCESSING_PROTOCOL_VERSION = 1;
static constexpr auto DBMS_PARALLEL_REPLICAS_PROTOCOL_VERSION = 3;
static constexpr auto DBMS_MIN_SUPPORTED_PARALLEL_REPLICAS_PROTOCOL_VERSION = 3;
static constexpr auto DBMS_PARALLEL_REPLICAS_MIN_VERSION_WITH_MARK_SEGMENT_SIZE_FIELD = 4;
static constexpr auto DBMS_PARALLEL_REPLICAS_PROTOCOL_VERSION = 4;
static constexpr auto DBMS_MIN_REVISION_WITH_PARALLEL_REPLICAS = 54453;
static constexpr auto DBMS_MERGE_TREE_PART_INFO_VERSION = 1;
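A hedged sketch of how a field added at protocol version 4 is typically gated on the negotiated version so that version-3 peers keep parsing the stream unchanged. The byte-sink writer below is purely illustrative, not the actual parallel-replicas announcement code.

```cpp
#include <cstdint>
#include <vector>

static constexpr uint64_t MIN_VERSION_WITH_MARK_SEGMENT_SIZE = 4; /// mirrors the new constant above

/// Append a little-endian u64 to a byte sink (stand-in for the real varint writer).
void writeU64(std::vector<uint8_t> & out, uint64_t value)
{
    for (int i = 0; i < 8; ++i)
        out.push_back(static_cast<uint8_t>(value >> (8 * i)));
}

void serializeAnnouncement(std::vector<uint8_t> & out, uint64_t negotiated_version, uint64_t mark_segment_size)
{
    writeU64(out, negotiated_version);
    if (negotiated_version >= MIN_VERSION_WITH_MARK_SEGMENT_SIZE)
        writeU64(out, mark_segment_size); /// older peers never see the new field
}
```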
@ -86,6 +88,8 @@ static constexpr auto DBMS_MIN_REVISION_WITH_ROWS_BEFORE_AGGREGATION = 54469;
/// Packets size header
static constexpr auto DBMS_MIN_PROTOCOL_VERSION_WITH_CHUNKED_PACKETS = 54470;
static constexpr auto DBMS_MIN_REVISION_WITH_VERSIONED_PARALLEL_REPLICAS_PROTOCOL = 54471;
/// Version of ClickHouse TCP protocol.
///
/// Should be incremented manually on protocol changes.
@ -93,6 +97,6 @@ static constexpr auto DBMS_MIN_PROTOCOL_VERSION_WITH_CHUNKED_PACKETS = 54470;
/// NOTE: DBMS_TCP_PROTOCOL_VERSION has nothing common with VERSION_REVISION,
/// later is just a number for server version (one number instead of commit SHA)
/// for simplicity (sometimes it may be more convenient in some use cases).
static constexpr auto DBMS_TCP_PROTOCOL_VERSION = 54470;
static constexpr auto DBMS_TCP_PROTOCOL_VERSION = 54471;
}


@ -119,6 +119,7 @@ namespace DB
M(UInt64, max_part_num_to_warn, 100000lu, "If the number of parts is greater than this value, the server will create a warning that will be displayed to the user.", 0) \
M(UInt64, max_table_num_to_throw, 0lu, "If number of tables is greater than this value, server will throw an exception. 0 means no limitation. View, remote tables, dictionary, system tables are not counted. Only count table in Atomic/Ordinary/Replicated/Lazy database engine.", 0) \
M(UInt64, max_database_num_to_throw, 0lu, "If number of databases is greater than this value, server will throw an exception. 0 means no limitation.", 0) \
M(UInt64, max_authentication_methods_per_user, 100, "The maximum number of authentication methods a user can be created with or altered to have. Changing this setting does not affect existing users. Zero means unlimited", 0) \
M(UInt64, concurrent_threads_soft_limit_num, 0, "Sets how many concurrent thread can be allocated before applying CPU pressure. Zero means unlimited.", 0) \
M(UInt64, concurrent_threads_soft_limit_ratio_to_cores, 0, "Same as concurrent_threads_soft_limit_num, but with ratio to cores.", 0) \
\


@ -946,7 +946,7 @@ class IColumn;
M(Bool, parallel_replicas_for_non_replicated_merge_tree, false, "If true, ClickHouse will use parallel replicas algorithm also for non-replicated MergeTree tables", 0) \
M(UInt64, parallel_replicas_min_number_of_rows_per_replica, 0, "Limit the number of replicas used in a query to (estimated rows to read / min_number_of_rows_per_replica). The max is still limited by 'max_parallel_replicas'", 0) \
M(Bool, parallel_replicas_prefer_local_join, true, "If true, and JOIN can be executed with parallel replicas algorithm, and all storages of right JOIN part are *MergeTree, local JOIN will be used instead of GLOBAL JOIN.", 0) \
M(UInt64, parallel_replicas_mark_segment_size, 128, "Parts virtually divided into segments to be distributed between replicas for parallel reading. This setting controls the size of these segments. Not recommended to change until you're absolutely sure in what you're doing", 0) \
M(UInt64, parallel_replicas_mark_segment_size, 0, "Parts virtually divided into segments to be distributed between replicas for parallel reading. This setting controls the size of these segments. Not recommended to change until you're absolutely sure in what you're doing. Value should be in range [128; 16384]", 0) \
M(Bool, allow_archive_path_syntax, true, "File/S3 engines/table function will parse paths with '::' as '<archive> :: <file>' if archive has correct extension", 0) \
M(Bool, parallel_replicas_local_plan, false, "Build local plan for local replica", 0) \
\
@ -972,7 +972,6 @@ class IColumn;
\
M(Bool, allow_experimental_database_materialized_mysql, false, "Allow to create database with Engine=MaterializedMySQL(...).", 0) \
M(Bool, allow_experimental_database_materialized_postgresql, false, "Allow to create database with Engine=MaterializedPostgreSQL(...).", 0) \
\
/** Experimental feature for moving data between shards. */ \
M(Bool, allow_experimental_query_deduplication, false, "Experimental data deduplication for SELECT queries based on part UUIDs", 0) \
@ -1272,6 +1271,7 @@ class IColumn;
M(Bool, output_format_orc_string_as_string, true, "Use ORC String type instead of Binary for String columns", 0) \
M(ORCCompression, output_format_orc_compression_method, "zstd", "Compression method for ORC output format. Supported codecs: lz4, snappy, zlib, zstd, none (uncompressed)", 0) \
M(UInt64, output_format_orc_row_index_stride, 10'000, "Target row index stride in ORC output format", 0) \
M(Double, output_format_orc_dictionary_key_size_threshold, 0.0, "For a string column in ORC output format, if the number of distinct values is greater than this fraction of the total number of non-null rows, turn off dictionary encoding. Otherwise dictionary encoding is enabled", 0) \
\
M(CapnProtoEnumComparingMode, format_capn_proto_enum_comparising_mode, FormatSettings::CapnProtoEnumComparingMode::BY_VALUES, "How to map ClickHouse Enum and CapnProto Enum", 0) \
\


@ -71,6 +71,7 @@ static std::initializer_list<std::pair<ClickHouseVersion, SettingsChangesHistory
},
{"24.9",
{
{"output_format_orc_dictionary_key_size_threshold", 0.0, 0.0, "For a string column in ORC output format, if the number of distinct values is greater than this fraction of the total number of non-null rows, turn off dictionary encoding. Otherwise dictionary encoding is enabled"},
{"input_format_json_empty_as_default", false, false, "Added new setting to allow to treat empty fields in JSON input as default values."},
{"input_format_try_infer_variants", false, false, "Try to infer Variant type in text formats when there is more than one possible type for column/array elements"},
{"join_output_by_rowlist_perkey_rows_threshold", 0, 5, "The lower limit of per-key average rows in the right table to determine whether to output by row list in hash join."},
@ -78,6 +79,7 @@ static std::initializer_list<std::pair<ClickHouseVersion, SettingsChangesHistory
{"allow_materialized_view_with_bad_select", true, true, "Support (but not enable yet) stricter validation in CREATE MATERIALIZED VIEW"},
{"output_format_always_quote_identifiers", false, false, "New setting."},
{"output_format_identifier_quoting_style", "Backticks", "Backticks", "New setting."},
{"parallel_replicas_mark_segment_size", 128, 0, "Value for this setting now determined automatically"},
{"database_replicated_allow_replicated_engine_arguments", 1, 0, "Don't allow explicit arguments by default"},
{"database_replicated_allow_explicit_uuid", 0, 0, "Added a new setting to disallow explicitly specifying table UUID"},
{"parallel_replicas_local_plan", false, false, "Use local plan for local replica in a query with parallel replicas"},

Some files were not shown because too many files have changed in this diff