Use LDAP hostname with regular DNS lookup to check if LDAP is online.
Before that, we used the IP address that was extracted via the docker
client (not via DNS lookup), and it could happen that LDAP was reachable
via the IP, thus passing the online check, but not via DNS lookup, which
led to test failures (e.g. LDAP not reachable from the ClickHouse
instance).
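A minimal sketch of such a check, assuming a plain TCP probe of the
default LDAP port is enough for "online"; the helper name and signature
are illustrative, not the exact code in the test framework:

    import socket

    def ldap_is_online(hostname: str, port: int = 389, timeout: float = 5.0) -> bool:
        """Probe LDAP by hostname so the check goes through DNS resolution,
        the same way the ClickHouse instance will reach it."""
        try:
            # getaddrinfo performs the DNS lookup; if the name does not
            # resolve, the check fails even when the container IP itself
            # is reachable.
            for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
                hostname, port, type=socket.SOCK_STREAM
            ):
                with socket.socket(family, socktype, proto) as sock:
                    sock.settimeout(timeout)
                    sock.connect(sockaddr)
                    return True
        except OSError:
            return False
        return False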
Sometimes the requests library detects the encoding incorrectly, and
because this test compares binary data, it fails.
Here is an example of a successful attempt:
2023-10-30 07:32:37 [ 654 ] DEBUG : http://172.16.1.2:8123 "GET /?query=SELECT+%2A+FROM+test.simple+FORMAT+Protobuf+SETTINGS+format_schema%3D%27simple%3AKeyValuePair%27 HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2023-10-30 07:32:37 [ 654 ] DEBUG : Encoding detection: utf_8 will be used as a fallback match (api.py:480, from_bytes)
2023-10-30 07:32:37 [ 654 ] DEBUG : Encoding detection: Found utf_8 as plausible (best-candidate) for content. With 0 alternatives. (api.py:487, from_bytes)
And here is a failed one [1]:
2023-10-29 18:12:56 [ 525 ] DEBUG : http://172.16.9.2:8123 "GET /?query=SELECT+%2A+FROM+test.simple+FORMAT+Protobuf+SETTINGS+format_schema%3D%27message_tmp%3AMessageTmp%27 HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2023-10-29 18:12:56 [ 525 ] DEBUG : Encoding detection: Found utf_16_be as plausible (best-candidate) for content. With 1 alternatives. (api.py:487, from_bytes)
E AssertionError: assert '܈Ē͡扣܈Ȓͤ敦' == '\x07\x08\x01\x12\x03abc\x07\x08\x02\x12\x03def'
E - abcdef
E + ܈Ē͡扣܈Ȓͤ敦
[1]: https://s3.amazonaws.com/clickhouse-test-reports/56030/c7f392500e93863638c9ca9bd56c93b3193091f3/integration_tests__release__[3_4].html
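A minimal sketch of the workaround, assuming the test fetches the result
over HTTP with requests: compare response.content (raw bytes) instead of
response.text, which is what triggers the charset detection shown above.
The URL and query are illustrative only:

    import requests

    resp = requests.get(
        "http://172.16.1.2:8123",
        params={"query": "SELECT * FROM test.simple FORMAT Protobuf"},
    )

    expected = b"\x07\x08\x01\x12\x03abc\x07\x08\x02\x12\x03def"

    # resp.text runs encoding detection (utf_16_be was picked in the failed
    # run above), while resp.content is the untouched byte payload, so the
    # comparison stays byte-for-byte.
    assert resp.content == expected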
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
* initial impl
* fix env ut
* move ut directory
* make sure no null proxy resolver is returned by ProxyConfigurationResolverProvider
* minor adjustment
* add a few tests, still incomplete
* add proxy support for url table function
* use proxy for select from url as well
* remove optional from return type, just returns empty config
* fix style
* style
* black
* oh boy
* rm in progress file
* god pls don't let me kill anyone
* ...
* add use_aws guards
* remove hard coded s3 proxy resolver
* add concurrency-mt-unsafe
* aa
* black
* add logging back
* revert change
* improve code a bit
* helper functions and separate tests
* for some reason, this env test is not working..
* formatting
* :)
* clangtidy
* lint
* revert some stupid things
* small test adjustments
* simplify tests
* rename test
* remove extra line
* freaking style change
* simplify a bit
* fix segfault & remove an extra call
* tightly couple proxy provider with context..
* remove useless include
* rename config prefix parameter
* simplify provider a bit
* organize provider a bit
* add a few comments
* comment out proxy env tests
* fix nullptr in unit tests
* make sure old storage proxy config is properly covered without global context instance
* move a few functions from class to anonymous namespace
* fix no fallback for specific storage conf
* change API to accept http method instead of bool
* implement http/https distinction in listresolver, any still not implemented
* implement http/https distinction in remote resolver
* progress on code, improve tests and add url function working test
* use protocol instead of method for http and https
* small fix
* few more adjustments
* fix style
* black
* move enum to proxyconfiguration
* wip
* fix build
* fix ut
* delete atomicroundrobin class
* remove stale include
* add some tests.. need to spend some more time on the design..
* change design a bit
* progress
* use existing context for tests
* rename aux function and fix ut
* ..
* rename test
* try to simplify tests a bit
* simplify tests a bit more
* attempt to fix tests, accept more than one remote resolver
* use proper log id
* try waiting for resolver
* proper wait logic
* black
* empty
* address a few comments
* refactor tests
* remove old tests
* black
* use RAII to set/unset env
* black
* clang tidy
* fix env proxy not respecting any
* use log trace
* fix wrong logic in getRemoteResolver
* fix wrong logic in getRemoteResolver
* fix test
* remove unwanted code
* remove ClientConfigurationperRequest and auxiliary classes
* remove unwanted code
* remove adapter test
* few adjustments and add test for s3 storage conf with new proxy settings
* black
* use chassert for context
* Add getenv comment
Since there can be some leftovers:
2023.07.24 07:08:25.238066 [ 140 ] {} <Error> Application: Code: 219. DB::Exception: Cannot drop: filesystem error: in remove: Directory not empty ["/var/lib/clickhouse/data/system/"]. Probably database contain some detached tables or metadata leftovers from Ordinary engine. If you want to remove all data anyway, try to attach database back and drop it again with enabled force_remove_data_recursively_on_drop setting: Exception while trying to convert database system from Ordinary to Atomic. It may be in some intermediate state. You can finish conversion manually by moving the rest tables from system to .tmp_convert.system.9396432095832455195 (using RENAME TABLE) and executing DROP DATABASE system and RENAME DATABASE .tmp_convert.system.9396432095832455195 TO system. (DATABASE_NOT_EMPTY), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000e68af57 in /usr/bin/clickhouse
1. ? @ 0x000000000cab443c in /usr/bin/clickhouse
2. DB::DatabaseOnDisk::drop(std::shared_ptr<DB::Context const>) @ 0x000000001328d617 in /usr/bin/clickhouse
3. DB::DatabaseCatalog::detachDatabase(std::shared_ptr<DB::Context const>, String const&, bool, bool) @ 0x0000000013524a6c in /usr/bin/clickhouse
4. DB::InterpreterDropQuery::executeToDatabaseImpl(DB::ASTDropQuery const&, std::shared_ptr<DB::IDatabase>&, std::vector<StrongTypedef<wide::integer<128ul, unsigned int>, DB::UUIDTag>, std::allocator<StrongTypedef<wide::integer<128ul, unsigned int>, DB::UUIDTag>>>&) @ 0x0000000013bc05e4 in /usr/bin/clickhouse
5. DB::InterpreterDropQuery::executeToDatabase(DB::ASTDropQuery const&) @ 0x0000000013bbc6b8 in /usr/bin/clickhouse
6. DB::InterpreterDropQuery::execute() @ 0x0000000013bbba22 in /usr/bin/clickhouse
7. ? @ 0x00000000140b13a5 in /usr/bin/clickhouse
8. DB::executeQuery(String const&, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x00000000140ad20e in /usr/bin/clickhouse
9. ? @ 0x00000000140d2ef0 in /usr/bin/clickhouse
10. DB::maybeConvertSystemDatabase(std::shared_ptr<DB::Context>) @ 0x00000000140d0aaf in /usr/bin/clickhouse
11. DB::Server::main(std::vector<String, std::allocator<String>> const&) @ 0x000000000e724e55 in /usr/bin/clickhouse
12. Poco::Util::Application::run() @ 0x0000000017ead086 in /usr/bin/clickhouse
13. DB::Server::run() @ 0x000000000e714a5d in /usr/bin/clickhouse
14. Poco::Util::ServerApplication::run(int, char**) @ 0x0000000017ec07b9 in /usr/bin/clickhouse
15. mainEntryClickHouseServer(int, char**) @ 0x000000000e711a26 in /usr/bin/clickhouse
16. main @ 0x0000000008cf13cf in /usr/bin/clickhouse
17. __libc_start_main @ 0x0000000000021b97 in /lib/x86_64-linux-gnu/libc-2.27.so
18. _start @ 0x00000000080705ae in /usr/bin/clickhouse
(version 23.7.1.2012)
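A hedged sketch of the manual remediation the exception itself suggests
(re-attach the database, then drop it again with
force_remove_data_recursively_on_drop enabled); the helper below assumes
clickhouse-client is reachable and is illustrative only:

    import subprocess

    def drop_leftover_database(name: str) -> None:
        # Re-attach the half-converted database from its on-disk metadata.
        subprocess.check_call(
            ["clickhouse-client", "--query", f"ATTACH DATABASE {name}"]
        )
        # Drop it again with the setting the error message recommends;
        # query-level settings can be passed to clickhouse-client as options.
        subprocess.check_call(
            [
                "clickhouse-client",
                "--force_remove_data_recursively_on_drop=1",
                "--query",
                f"DROP DATABASE {name}",
            ]
        )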
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
Sometimes you may get:
> raise subprocess.CalledProcessError(exit_code, cmd)
E subprocess.CalledProcessError: Command '['iptables', '--wait', '-D', 'DOCKER-USER', '-p', 'tcp', '-s', '172.16.2.3', '-d', '172.16.2.2', '-j', 'DROP']' returned non-zero exit status 137.
And only sometimes you may get the reason:
OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown
So this means that the container for iptables does not exist anymore,
and the reason is the timeout, plus the fact that container_exit_timeout
was equal to container_expire_timeout and both were 120.
From the docker logs:
time="2023-07-16T15:46:52.513673446Z" level=debug msg="form data: {\"AttachStderr\":false,\"AttachStdin\":false,\"AttachStdout\":false,\"Cmd\":[\"sleep\",\"120\"],\"HostConfig\":{\"AutoRemove\":true,\"NetworkMode\":\"host\"},\"Image\":\"clickhouse/integration-helper:latest\",\"NetworkDisabled\":false,\"OpenStdin\":false,\"StdinOnce\":false,\"Tty\":false}"
time="2023-07-16T15:48:57.611857183Z" level=debug msg="form data: {\"AttachStderr\":false,\"AttachStdin\":false,\"AttachStdout\":false,\"Cmd\":[\"sleep\",\"120\"],\"HostConfig\":{\"AutoRemove\":true,\"NetworkMode\":\"host\"},\"Image\":\"clickhouse/integration-helper:latest\",\"NetworkDisabled\":false,\"OpenStdin\":false,\"StdinOnce\":false,\"Tty\":false}"
And then pytest will try to execute the iptables command:
time="2023-07-16T15:50:57.698705244Z" level=debug msg="starting exec command 860920ab2aa07e8d285050f200ac92423a3cf8ec3fb2f57683541e62cf6bc20e in container 66d6c96671b5e987345290ddd260727d96b99789b512d40f333f6263f42fd2f1"
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
* Use fingerprints instead of key IDs to find keys in encrypted disks.
Always use little endian in the headers of encryption files.
* Add tests.
* Fix copying binary files to test containers.
* Fix ownership for copied files in test containers.
* Add comments after review.
---------
Co-authored-by: Nikita Mikhaylov <mikhaylovnikitka@gmail.com>
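A minimal illustration of the endianness point from the first bullet
above; the fingerprint derivation (truncated SHA-256 of the key) and the
8-byte field are assumptions for the sketch, not ClickHouse's actual
encrypted-disk header layout:

    import hashlib
    import struct

    def key_fingerprint(key: bytes) -> int:
        # Illustration only: derive a 64-bit fingerprint from the key material.
        return int.from_bytes(hashlib.sha256(key).digest()[:8], "little")

    # Pack the fingerprint into the file header explicitly as little endian,
    # so the header bytes are identical on little- and big-endian machines.
    header = struct.pack("<Q", key_fingerprint(b"example key"))
    assert len(header) == 8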
The log level will be substituted from "test" to "trace" in case of the
tag is not "latest", the assumption behind this I guess is that it
should not try to use "test" log level for older versions.
But, it could have per-PR image in case of changes in the Dockerfile, so
it is better to check for self.with_installed_binary, since actually any
parameters except this will use new clickhouse binary anyway.
CI: https://s3.amazonaws.com/clickhouse-test-reports/48596/a1272e8536265929255fdf5020836f057859e425/integration_tests__tsan__[1/6].html
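A hedged sketch of the check described above; the with_installed_binary
flag comes from the text, the helper itself is illustrative:

    def pick_log_level(with_installed_binary: bool) -> str:
        # Downgrade "test" -> "trace" only when an old installed binary is
        # used; a per-PR image can carry a non-"latest" tag while still
        # running the new clickhouse binary, so the tag alone is misleading.
        return "trace" if with_installed_binary else "test"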
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
Before, they were ignored because there was first a check for the
sanitizer marker (==================), but it was done against
clickhouse-server.log, while sanitizers write to stderr.log.
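A minimal sketch of the corrected check, assuming the report is found by
its ================== banner; the file layout follows the text, the
helper itself is illustrative:

    from pathlib import Path

    SANITIZER_MARK = "=================="  # banner printed around sanitizer reports

    def has_sanitizer_report(instance_dir: str) -> bool:
        # Sanitizers write their reports to stderr, so look in stderr.log
        # rather than clickhouse-server.log, where the banner never appears.
        stderr_log = Path(instance_dir) / "stderr.log"
        if not stderr_log.exists():
            return False
        return SANITIZER_MARK in stderr_log.read_text(errors="replace")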
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
There is logic regarding which keeper binary to use to start the keeper
cluster in an integration test.
There are 2 options:
(1) a standalone keeper binary (expected binary name clickhouse-keeper)
(2) the clickhouse binary with keeper inside
Fixed:
- option (1) didn't work since docker_compose_keeper.yaml didn't create
the target clickhouse-keeper at all
- if clickhouse-keeper existed, option (1) was taken, but
clickhouse-keeper could be just a link to the clickhouse binary (the
link is always created during build if the cmake option
BUILD_STANDALONE_KEEPER is OFF)
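A hedged sketch of the selection logic after the fix; paths and the
symlink test are illustrative, not the exact code in the cluster setup:

    import os

    def choose_keeper_command(binary_dir: str) -> list[str]:
        """Sketch only: prefer a real standalone clickhouse-keeper (option 1);
        if it is missing or is just a symlink to the clickhouse binary (always
        created when BUILD_STANDALONE_KEEPER is OFF), fall back to running
        `clickhouse keeper` (option 2)."""
        standalone = os.path.join(binary_dir, "clickhouse-keeper")
        if os.path.exists(standalone) and not os.path.islink(standalone):
            return [standalone]
        return [os.path.join(binary_dir, "clickhouse"), "keeper"]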