- changed config.xml/yaml files used by CH's own internal tests, which are
(hopefully) not sensitive to whether mark_cache_size is set or not
- further occurrences exist but changing them seems a bad idea (e.g.
because they are in customer-provided data)
I played around with my local config.xml file. The minimal working example is this:
<?xml version="1.0"?>
<clickhouse>
    <mark_cache_size>5368709120</mark_cache_size>
    <listen_host>localhost</listen_host>
    <tcp_port>9000</tcp_port>
    <users_config>users.xml</users_config>
    <logger><console>true</console></logger>
</clickhouse>
Not specifying mark_cache_size prevented the server from starting up:
2022.05.18 12:15:06.549078 [ 8728320 ] {} <Error> Application: Not found: mark_cache_size
Looking at ClickHouse's ca. 100 server configuration options and
sub-options, mark_cache_size does NOT seem special enough to require
explicit configuration; instead, the behavior appears to be unintended
and simply caused by the lack of a default value.
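For illustration, a minimal sketch (not ClickHouse code, all names are made up) of the difference between treating the key as mandatory and falling back to a built-in default when it is absent:

    #include <cstdint>
    #include <iostream>
    #include <map>
    #include <stdexcept>
    #include <string>

    using Config = std::map<std::string, uint64_t>;

    // Current behavior: the key is mandatory, a missing value aborts startup.
    uint64_t getRequired(const Config & config, const std::string & key)
    {
        auto it = config.find(key);
        if (it == config.end())
            throw std::runtime_error("Not found: " + key);
        return it->second;
    }

    // Intended behavior: fall back to a built-in default when the key is absent.
    uint64_t getWithDefault(const Config & config, const std::string & key, uint64_t default_value)
    {
        auto it = config.find(key);
        return it != config.end() ? it->second : default_value;
    }

    int main()
    {
        Config config; // mark_cache_size not specified
        std::cout << getWithDefault(config, "mark_cache_size", 5368709120ULL) << '\n'; // 5368709120
        try
        {
            getRequired(config, "mark_cache_size");
        }
        catch (const std::exception & e)
        {
            std::cout << e.what() << '\n'; // Not found: mark_cache_size
        }
    }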
If you execute 'SYSTEM RELOAD CONFIG' via, e.g., the TCP protocol, then a
reload on port change will wait endlessly for the connection from which
this query was issued, and you will see the following messages in the
logs:
2022.04.28 03:34:57.552513 [ 37101 ] {b41d855c-4dbf-470a-a144-c6ae5a1abda8} <Debug> executeQuery: (from 127.0.0.1:11774) system reload config
...
2022.04.28 03:34:57.710640 [ 37101 ] {b41d855c-4dbf-470a-a144-c6ae5a1abda8} <Information> Application: Stopped listening for http://127.0.0.1:18123
2022.04.28 03:34:57.798774 [ 37101 ] {b41d855c-4dbf-470a-a144-c6ae5a1abda8} <Information> Application: Stopped listening for native protocol (tcp): 127.0.0.1:19000
...
2022.04.28 03:34:57.901375 [ 37101 ] {b41d855c-4dbf-470a-a144-c6ae5a1abda8} <Debug> Application: Server finished: http://127.0.0.1:18123
2022.04.28 03:34:57.901455 [ 37101 ] {b41d855c-4dbf-470a-a144-c6ae5a1abda8} <Trace> Application: Waiting server to finish: native protocol (tcp): 127.0.0.1:19000
2022.04.28 03:34:58.001717 [ 37101 ] {b41d855c-4dbf-470a-a144-c6ae5a1abda8} <Trace> Application: Waiting server to finish: native protocol (tcp): 127.0.0.1:19000
2022.04.28 03:34:58.101881 [ 37101 ] {b41d855c-4dbf-470a-a144-c6ae5a1abda8} <Trace> Application: Waiting server to finish: native protocol (tcp): 127.0.0.1:19000
...
2022.04.28 03:35:01.707951 [ 37101 ] {b41d855c-4dbf-470a-a144-c6ae5a1abda8} <Trace> Application: Waiting server to finish: native protocol (tcp): 127.0.0.1:19000
But waiting for the current connection will never end.
So instead of waiting directly from the query context (SYSTEM RELOAD
CONFIG), do this in the background (actually not even in the background,
but check on server reload and on exit).
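A minimal standalone sketch (not ClickHouse code) of why waiting from the query's own connection can never finish, and how deferring the wait to the background avoids it:

    #include <atomic>
    #include <chrono>
    #include <iostream>
    #include <thread>

    struct Server
    {
        std::atomic<int> active_connections{1}; // the connection that sent SYSTEM RELOAD CONFIG

        // Blocks until every connection served by this listener is closed.
        void waitForFinish() const
        {
            while (active_connections.load() > 0)
                std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }
    };

    int main()
    {
        Server old_tcp_server;

        // BAD: calling old_tcp_server.waitForFinish() here, on the thread that handles
        // the SYSTEM RELOAD CONFIG connection, spins forever: the connection can only be
        // closed after the query returns, and the query cannot return until the wait ends.

        // Fix sketch: remember the stopped listener and wait for it elsewhere
        // (on the next reload / on exit), so the query can finish first.
        std::thread background_waiter([&]
        {
            old_tcp_server.waitForFinish();
            std::cout << "old listener finished\n";
        });

        // The query returns and its connection is closed:
        old_tcp_server.active_connections = 0;

        background_waiter.join();
    }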
v0: just don't wait for the servers
v2: fix use-after-free by removing the dependency on the server in handlers
v3: wait for servers in the background to avoid use-after-free of the context
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
In case you have different roles for the same user on multiple clusters,
an ON CLUSTER query can be used to overcome some limitations.
Consider the following example:
- cluster_with_data, dev_user (readonly=2)
- stage_cluster, dev_user (readonly=0)
So if you execute the following query from stage_cluster, it will be
executed successfully, since ON CLUSTER queries have a different system
profile:
DROP DATABASE default ON CLUSTER cluster_with_data
This is not 100% safe, but it is at least something.
Note that right now only the ON CLUSTER query itself is supported, while
separate clusters are not (i.e. GRANT CLUSTER some_cluster_name TO
default), since right now grants are bound to database+.
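A hypothetical sketch (not ClickHouse code, names and layout are made up) of the idea behind the check: access rights kept as a bit mask, with a dedicated CLUSTER bit that an ON CLUSTER query must carry in addition to the rights required by the query itself:

    #include <cstdint>
    #include <iostream>

    enum AccessFlags : uint64_t
    {
        DROP_DATABASE = 1ULL << 0,
        CLUSTER       = 1ULL << 1, // required for any ON CLUSTER query
    };

    bool canExecuteOnCluster(uint64_t granted, uint64_t required)
    {
        // ON CLUSTER queries need the CLUSTER bit on top of the query's own rights.
        const uint64_t needed = required | CLUSTER;
        return (granted & needed) == needed;
    }

    int main()
    {
        uint64_t dev_user = DROP_DATABASE; // no CLUSTER grant
        std::cout << std::boolalpha;
        std::cout << canExecuteOnCluster(dev_user, DROP_DATABASE) << '\n'; // false: rejected

        dev_user |= CLUSTER; // after granting CLUSTER to the user
        std::cout << canExecuteOnCluster(dev_user, DROP_DATABASE) << '\n'; // true: allowed
    }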
v2: on_cluster_queries_require_cluster_grant
v3: fix test and process flags as bit mask
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
Official docs:
Some headers from C library were deprecated in C++ and are no longer
welcome in C++ codebases. Some have no effect in C++. For more details
refer to the C++ 14 Standard [depr.c.headers] section. This check
replaces C standard library headers with their C++ alternatives and
removes redundant ones.
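For reference, a small example of the rewrite this check performs: a deprecated C header is replaced by its C++ counterpart, whose names live in namespace std:

    // Before the check: #include <string.h> and #include <stdlib.h>
    // After the check, the C++ counterparts are used instead:
    #include <cstdlib>
    #include <cstring>
    #include <iostream>

    int main()
    {
        const char * s = "clickhouse";
        std::cout << std::strlen(s) << '\n'; // strlen is now taken from <cstring> via namespace std
    }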