constant_column_search was excluded when ARM perf tests were set up in
PR 37497. Back then we didn't have PR 41033 and regexp evaluation on ARM
was slow. Let's check again.
The test is a stress test which queries all system tables in parallel.
System table SYSTEM.MODELS is now populated using the library-bridge.
This made the test fail with error:
2022-09-12 23:51:34 Code: 410. DB::Exception: Received from localhost:9000. DB::Exception: BridgeHelper: clickhouse-library-bridge is not responding. (EXTERNAL_SERVER_IS_NOT_RESPONDING)
2022-09-12 23:51:34 (query: SELECT * FROM system.models LIMIT 10000 FORMAT Null)
Looking at the logs, the server tried to start the library-bridge when
querying SYSTEM.MODELS but multiple handshake attempts failed.
So (most likely) the infrastructure is slow or (unlikely) there are
settings in CI (e.g. a firewall) preventing communication with the
bridge. Because 1. other system tables are also excluded (zookeeper,
merge_tree_metadata_cache), 2. SYSTEM.MODELS is covered by integration
test "test_catboost_evaluate", and 3. the test runs fine locally, I am
excluding SYSTEM.MODELS from this test.
- This commit restores statements "SYSTEM RELOAD MODEL(S)" which provide
a mechanism to update a model explicitly. It also avoids potentially
unnecessary reloads of a model from disk after its initial load.
To keep the complexity low, the semantics of "SYSTEM RELOAD MODEL(S)"
was changed from eager to lazy. Both statements previously reloaded
the specified/all models immediately, whereas now they only trigger an
unload and the first call to catboostEvaluate() does the actual load
(see the example after this list).
- Monitoring view SYSTEM.MODELS is also restored but with some obsolete
fields removed. The view was not documented in the past and for now it
remains undocumented. The commit is thus not considered a breaking
change to ClickHouse's public interface.
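A minimal sketch of the new lazy behavior, assuming the statement takes the
same model path that is later passed to catboostEvaluate() (path, table and
feature names are placeholders):
```
-- Only marks the model as unloaded; nothing is read from disk here.
SYSTEM RELOAD MODEL '/path/to/model.bin';
-- The first evaluation after the reload statement loads the model from disk.
SELECT catboostEvaluate('/path/to/model.bin', feat_1, feat_2) FROM features_table;
```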
This has been implemented by simply restarting the HTTP servers when the
http_handlers directive in the configuration XML has been changed.
But for this, the handlers interface had to be changed to accept the
configuration separately, since the configuration contained in the
server is the configuration with which the server had been started.
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
Co-authored-by: Antonio Andelic
* Remove strange code
* Even more code removal
* Fix style
* Remove even more code
* Simplify code by making it slower
* Attempt to do something
* Attempt to do something
* Well do something with this horrible trash
* Add a test
Before, the test failed if clickhouse-server timed out while obtaining the
stack trace for the first thread in the list. Fix this by taking the first
row with a non-empty trace.
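A hedged sketch of the idea against system.stack_trace (the exact query used
by the test may differ):
```
-- Take the first thread that actually produced a trace instead of the first
-- thread in the list, so a timeout for a single thread no longer fails the test.
SELECT thread_id
FROM system.stack_trace
WHERE length(trace) > 0
LIMIT 1;
```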
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
CI found one issue [1].
Here is the stack trace for invalid read:
<details>
<summary>stack trace</summary>
```
0: DB::TemporaryFileLazySource::TemporaryFileLazySource(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Block const&) [inlined] std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::__is_long(this="") const at string:1445:22
1: DB::TemporaryFileLazySource::TemporaryFileLazySource(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Block const&) [inlined] std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::basic_string(this="", __str="") at string:1927
2: DB::TemporaryFileLazySource::TemporaryFileLazySource(this=0x00007f3aec105f58, path_="", header_=0x00007f38ffd93b40) at TemporaryFileLazySource.cpp:11
3: DB::SortedBlocksWriter::streamFromFile(std::__1::unique_ptr<Poco::TemporaryFile, std::__1::default_delete<Poco::TemporaryFile> > const&) const [inlined] DB::TemporaryFileLazySource* std::__1::construct_at<DB::TemporaryFileLazySource, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Block, DB::TemporaryFileLazySource*>(__args=0x00007f38ffd91560) at construct_at.h:38:50
4: DB::SortedBlocksWriter::streamFromFile(std::__1::unique_ptr<Poco::TemporaryFile, std::__1::default_delete<Poco::TemporaryFile> > const&) const [inlined] void std::__1::allocator_traits<std::__1::allocator<DB::TemporaryFileLazySource> >::construct<DB::TemporaryFileLazySource, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Block, void, void>(__args=0x00007f38ffd91560) at allocator_traits.h:298
5: DB::SortedBlocksWriter::streamFromFile(std::__1::unique_ptr<Poco::TemporaryFile, std::__1::default_delete<Poco::TemporaryFile> > const&) const [inlined] std::__1::__shared_ptr_emplace<DB::TemporaryFileLazySource, std::__1::allocator<DB::TemporaryFileLazySource> >::__shared_ptr_emplace<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Block>(this=0x00007f3aec105f40, __args=0x00007f38ffd91560) at shared_ptr.h:293
6: DB::SortedBlocksWriter::streamFromFile(std::__1::unique_ptr<Poco::TemporaryFile, std::__1::default_delete<Poco::TemporaryFile> > const&) const [inlined] std::__1::shared_ptr<DB::TemporaryFileLazySource> std::__1::allocate_shared<DB::TemporaryFileLazySource, std::__1::allocator<DB::TemporaryFileLazySource>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Block, void>(__args=<unavailable>, __args=<unavailable>) at shared_ptr.h:954
7: DB::SortedBlocksWriter::streamFromFile(std::__1::unique_ptr<Poco::TemporaryFile, std::__1::default_delete<Poco::TemporaryFile> > const&) const [inlined] std::__1::shared_ptr<DB::TemporaryFileLazySource> std::__1::make_shared<DB::TemporaryFileLazySource, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Block, void>(__args=<unavailable>, __args=<unavailable>) at shared_ptr.h:963
8: DB::SortedBlocksWriter::streamFromFile(this=<unavailable>, file=<unavailable>) const at SortedBlocksWriter.cpp:238
9: DB::SortedBlocksWriter::premerge(this=<unavailable>) at SortedBlocksWriter.cpp:209:32
```
</details>
[1]: https://s3.amazonaws.com/clickhouse-test-reports/41046/adea92f847373d1fcfd733d8979c63024f9b80bf/stress_test__asan_.html
So the problem here is that there was an empty unique_ptr<> reference to
the temporary file, caused by an empty block accepted by
SortedBlocksWriter::insert(). But insert() is not the problem; the
problem is premerge(), which steals blocks from insert() and does not
check that they contain any rows. This check does exist in
SortedBlocksWriter::flush(), and in that case the temporary file is not
created.
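For context, a hedged sketch of the kind of query that exercises
SortedBlocksWriter (table names and the spill threshold are made up):
```
-- partial_merge JOINs spill sorted blocks to temporary files through
-- SortedBlocksWriter once the in-memory limit is exceeded.
SET join_algorithm = 'partial_merge', max_bytes_in_join = 10000000;
SELECT count()
FROM left_table AS l
INNER JOIN right_table AS r USING (key);
```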
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
There is logic regarding which Keeper binary to use to start a Keeper
cluster in an integration test.
There are 2 options:
(1) the standalone keeper binary (expected binary name: clickhouse-keeper)
(2) the clickhouse binary with Keeper inside
Fixed:
- option (1) didn't work since docker_compose_keeper.yaml didn't create
the target clickhouse-keeper at all
- if clickhouse-keeper existed, option (1) was taken, but
clickhouse-keeper could be just a link to the clickhouse binary (the
link is always created during build if the cmake option
BUILD_STANDALONE_KEEPER is OFF)
- The deleted function modelEvaluate() was superseded by
catboostEvaluate().
- Also delete the external model repository, as modelEvaluate() was its
last user. Additionally, remove the system view SYSTEM.MODELS for
inspecting the repository.
- SYSTEM RELOAD MODELS is also obsolete. HOWEVER, it was retained and
made a no-op instead of being deleted.
Why?
The reason is that RBAC in distributed setups works by storing
privileges (granted and revoked) as plain SQL statements in Keeper.
Nodes read these statements at startup and parse them. If a privilege
for SYSTEM RELOAD MODELS exists but the parser no longer recognizes it,
nodes would fail to come up.
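For illustration, the kind of statement that RBAC may already have persisted
in Keeper and that must keep parsing after the upgrade (the user name is made up):
```
-- Stored as plain SQL; the parser must still accept the privilege even though
-- executing the statement is now a no-op.
GRANT SYSTEM RELOAD MODELS ON *.* TO some_user;
```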
Considered but rejected alternatives:
- Ignore SYSTEM RELOAD MODELS when parsing RBAC privileges and return
an error for regular SYSTEM RELOAD MODELS SQL. This is a special case
of the no-op behavior and too brittle.
- Remove SYSTEM RELOAD MODELS manually from Keeper via command-line
manipulation of Keeper nodes or via SQL by dropping the privileges.
Needs user intervention during upgrade.
This commit moves the catboost model evaluation out of the server
process into the library-bridge binary. This serves two goals: On the
one hand, crashes / memory corruptions of the catboost library no longer
affect the server. On the other hand, we can forbid loading dynamic
libraries in the server (catboost was the last consumer of this
functionality), thus improving security.
SQL syntax:
SELECT
catboostEvaluate('/path/to/model.bin', FEAT_1, ..., FEAT_N) > 0 AS prediction,
ACTION AS target
FROM amazon_train
LIMIT 10
Required configuration:
<catboost_lib_path>/path/to/libcatboostmodel.so</catboost_lib_path>
*** Implementation Details ***
The internal protocol between the server and the library-bridge is
simple:
- HTTP GET on path "/extdict_ping":
A ping, used during the handshake to check if the library-bridge runs.
- HTTP POST on path "extdict_request"
  (1) Send a "catboost_GetTreeCount" request from the server to the
  bridge, containing a library path (e.g. /home/user/libcatboost.so) and
  a model path (e.g. /home/user/model.bin). First, this unloads the
  catboost library handler associated with the model path (if it was
  loaded), then loads the catboost library handler associated with the
  model path, then executes GetTreeCount() on the library handler and
  finally sends the result back to the server. Step (1) is called once
  by the server from FunctionCatBoostEvaluate::getReturnTypeImpl(). The
  library handler is unloaded in the beginning because it contains
  state which may no longer be valid if the user runs
  catboostEvaluate("/path/to/model.bin", ...) more than once and if
  "model.bin" was updated in between (see the example after this list).
  (2) Send a "catboost_Evaluate" request from the server to the bridge,
  containing the model path and the features to run the inference on.
  Step (2) is called multiple times (once per chunk) by the server from
  FunctionCatBoostEvaluate::executeImpl(). The library handler for the
  given model path is expected to be already loaded by step (1).
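To illustrate why step (1) unloads the existing handler first, consider two
evaluations with a model file that changes in between (table and feature
names are placeholders):
```
-- First query: the bridge loads /home/user/model.bin and caches the handler.
SELECT catboostEvaluate('/home/user/model.bin', f1, f2) FROM t;
-- model.bin is then replaced on disk ...
-- Second query: step (1) drops the cached handler, so the new file is loaded.
SELECT catboostEvaluate('/home/user/model.bin', f1, f2) FROM t;
```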
Fixes #27870
99% of all tests in the fasttest profile run in one sec or less. The
excluded tests take 10 sec or more (the slowest being
02271_replace_partition_many_tables with 30 sec).
Estimated savings: 7 min
majority_insert_quorum is defined as (number_of_replicas / 2) + 1. An
insert will be successful only if a majority of the replicas have
applied it. If both insert_quorum and majority_insert_quorum are
specified, the maximum of the two will be used.
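A hedged worked example, assuming the setting names from this description:
with 5 replicas, the majority quorum is 5/2 + 1 = 3; with insert_quorum = 4
also set, the larger value (4) would apply.
```
-- Table and values are placeholders; the exact form of the
-- majority_insert_quorum setting is an assumption.
SET insert_quorum = 4;
SET majority_insert_quorum = 1;
INSERT INTO replicated_table VALUES (1, 'example');
```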