Usually, functions are validated before index analysis; however, this is
not the case for parallel replicas, so additional checks are required
before interpreting function arguments.
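To illustrate the shape of the extra validation, here is a minimal sketch
(not the actual patch; it is written against ClickHouse's AST types and the
structure of operatorFromAST() is simplified): check that the node really is
a well-formed function with an arguments list before dereferencing anything.
```
/// Illustrative sketch, not the actual fix: validate the node before
/// interpreting its arguments instead of assuming a well-formed function.
bool operatorFromAST(ASTPtr & node)
{
    auto * func = node->as<ASTFunction>();
    if (!func || !func->arguments)
        return false; /// not a function, or it has no argument list

    auto & args = func->arguments->children;
    if (func->name == "not" && args.size() != 1)
        return false; /// reject a malformed "not" before reading args[0]

    /// ... interpret the (now validated) arguments ...
    return true;
}
```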
<details>
<summary>stack trace</summary>
```
==172==WARNING: MemorySanitizer: use-of-uninitialized-value
0 0x55cf82aedd6c in DB::ASTFunction* typeid_cast<DB::ASTFunction*, DB::IAST>(DB::IAST*) build_docker/./src/Common/typeid_cast.h:42:73
1 0x55cf82aedd6c in DB::TypePromotion<DB::IAST>::CastHelper<DB::ASTFunction, false, false>::value(DB::IAST*) build_docker/./src/Common/TypePromotion.h:38:43
2 0x55cf82aedd6c in std::__1::invoke_result<decltype(&CastHelper<DB::ASTFunction, false>::value), DB::TypePromotion<DB::IAST>::CastHelper<DB::ASTFunction, false, std::is_reference_v<DB::ASTFunction>>, DB::IAST*>::type DB::TypePromotion<DB::IAST>::as<DB::ASTFunction>() build_docker/./src/>
3 0x55cf82aedd6c in DB::MergeTreeIndexConditionSet::operatorFromAST(std::__1::shared_ptr<DB::IAST>&) build_docker/./src/Storages/MergeTree/MergeTreeIndexSet.cpp:602:25
4 0x55cf82ae5bc1 in DB::MergeTreeIndexConditionSet::traverseAST(std::__1::shared_ptr<DB::IAST>&) const build_docker/./src/Storages/MergeTree/MergeTreeIndexSet.cpp:547:9
5 0x55cf82ae5de6 in DB::MergeTreeIndexConditionSet::traverseAST(std::__1::shared_ptr<DB::IAST>&) const build_docker/./src/Storages/MergeTree/MergeTreeIndexSet.cpp:552:13
6 0x55cf82ae06db in DB::MergeTreeIndexConditionSet::MergeTreeIndexConditionSet() build_docker/.>
...
12 0x55cf82aef09c in DB::MergeTreeIndexSet::createIndexCondition() const build_docker/./src/Storages/MergeTree/MergeTreeIndexSet.cpp:703:12
13 0x55cf84951cd4 in DB::buildIndexes()
14 0x55cf84955ed3 in DB::ReadFromMergeTree::selectRangesToReadImpl()
15 0x55cf8494caef in DB::ReadFromMergeTree::selectRangesToRead()
16 0x55cf82a409a9 in DB::MergeTreeDataSelectExecutor::estimateNumMarksToRead()
17 0x55cf827f728d in DB::MergeTreeData::canUseParallelReplicasBasedOnPKAnalysis()
18 0x55cf827f627e in DB::MergeTreeData::getQueryProcessingStage()
19 0x55cf7f2f4969 in DB::InterpreterSelectQuery::getSampleBlockImpl() build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:937:31
20 0x55cf7f2daa00 in DB::InterpreterSelectQuery::InterpreterSelectQuery()
24 0x55cf7f520b98 in DB::InterpreterSelectWithUnionQuery::buildCurrentChildInterpreter()
25 0x55cf7f51b6cd in DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery()
27 0x55cf7f1d4ea9 in DB::InterpreterFactory::get(std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&) build_docker/./src/Interpreters/InterpreterFactory.cpp:162:16
28 0x55cf8012e485 in DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) build_docker/./src/Interpreters/executeQuery.cpp:1032:31
29 0x55cf80121bc1 in DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) build_docker/./src/Interpreters/executeQuery.cpp:1229:30
30 0x55cf8389295f in DB::TCPHandler::runImpl() build_docker/./src/Server/TCPHandler.cpp:424:24
31 0x55cf838d7dfb in DB::TCPHandler::run() build_docker/./src/Server/TCPHandler.cpp:2050:9
```
</details>
CI: https://s3.amazonaws.com/clickhouse-test-reports/53214/d99b10c340909ed4ee2e6edf0921e8a2f8561b0d/fuzzer_astfuzzermsan/report.html
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
Cgroups allow changing the amount of memory available to a process
while it runs. The previous logic calculated the amount of available
memory only once, at server startup. As a result, memory thresholds set
via cgroups were not picked up when the settings changed. We now always
incorporate the current limits during re-configuration.
Note 1: getMemoryAmount() opens/reads a file, which is potentially
expensive. This should be fine, though, since it happens only when
the server configuration changes.
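For illustration, a rough sketch of the idea (not ClickHouse's actual
implementation): re-read the current cgroup limit on every call, so a limit
changed at runtime is picked up on the next re-configuration. The paths are
the standard cgroup v2/v1 locations; the function names are assumptions.
```
/// Rough sketch, not the actual implementation: prefer the current cgroup
/// limit over the host total, re-reading it on every call so that limits
/// changed at runtime are picked up during re-configuration.
#include <algorithm>
#include <cstdint>
#include <fstream>
#include <optional>
#include <string>
#include <unistd.h>

std::optional<uint64_t> readCGroupMemoryLimit()
{
    /// cgroups v2; v1 exposes memory/memory.limit_in_bytes instead.
    std::ifstream in("/sys/fs/cgroup/memory.max");
    std::string value;
    if (!(in >> value) || value == "max")
        return std::nullopt; /// no limit configured
    return std::stoull(value);
}

uint64_t getMemoryAmount()
{
    uint64_t host_total = uint64_t(sysconf(_SC_PHYS_PAGES)) * uint64_t(sysconf(_SC_PAGE_SIZE));
    if (auto limit = readCGroupMemoryLimit())
        return std::min(host_total, *limit);
    return host_total;
}
```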
Note 2: A better approach would be to treat cgroup limit changes as
another trigger for ClickHouse server re-configuration (which
currently happens only when the config files change). I shied away
from that for now because, when the cgroup limit is lowered, there is
no guarantee that ClickHouse can shrink its memory usage accordingly
in time (AFAIK, it does so only lazily, by denying new allocations).
As a result, the OOM killer would kill the server. The same can
happen with this PR, but at a lower implementation complexity.
Sometimes you can have tons of data there, i.e. a few TiBs, and sending
it on server shutdown does not look sane (maybe there is a bug and
you need to update/restart to fix the flushing).
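As a hypothetical sketch of what such an escape hatch could look like (the
setting and helper names below are illustrative, not actual ClickHouse
identifiers): guard the final flush in the shutdown path behind an opt-out.
```
#include <iostream>

/// Hypothetical sketch; flush_on_shutdown and flushPendingBlocks() are
/// illustrative names, not actual ClickHouse identifiers.
struct DistributedShutdown
{
    bool flush_on_shutdown = true; /// assumed opt-out setting

    void flushPendingBlocks() { std::cout << "sending queued data\n"; }

    void shutdown()
    {
        /// Skip the potentially multi-TiB send when the user opted out;
        /// the queued data then stays on disk until the next start.
        if (flush_on_shutdown)
            flushPendingBlocks();
    }
};
```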
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>