After #36425 there was a lot of confusion and problems with configuring pools: the messages were confusing, and settings had to be adjusted in several places.
See some examples in #44251, #43351, #47900, #46515.
The commit includes the following changes:
1) Introduced a unified mechanism for reading pool sizes from the configuration file(s). Previously, pool sizes were read in Context.cpp with fallbacks to profiles, whereas the main_config_reloader in Server.cpp read them directly without fallbacks.
2) Corrected the data type for background_merges_mutations_concurrency_ratio. It should be float instead of int.
3) Refactored the default values for settings. Previously, they were defined in multiple places throughout the codebase, but they are now defined in one place (or two, to be exact: Settings.h and ServerSettings.h).
4) Improved documentation, including the correct message in system.settings.
Additionally make the code more conform with #46550.
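After these changes the effective values and the corrected descriptions can be checked in one place. A minimal sketch (the exact set of background settings depends on the version):
```
-- Inspect the background pool settings together with their descriptions.
SELECT name, value, description
FROM system.settings
WHERE name LIKE 'background%'
```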
There are lots of thread pools, and a simple local-vs-global distinction
is not enough anymore; it is good to know which pool in particular uses
threads.
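For example, per-pool thread usage can then be inspected via system.metrics; a sketch (the exact metric names vary between versions):
```
-- Show current thread counts per pool.
SELECT metric, value
FROM system.metrics
WHERE metric LIKE '%Thread%'
```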
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
CI reports [1]:
2023.03.14 00:29:07.031349 [ 166170 ] {110f8654-7d7d-4b47-b6b0-3ce83414a80f} <Error> ReadWriteBufferFromHTTP: HTTP request to `http://127.0.0.1:9018/columns_info?use_connection_pooling=1&version=1&connection_string=DSN%3D%7BClickHouse%20DSN%20%28ANSI%29%7D&schema=test_15&table=t&external_table_functions_use_nulls=1` failed at try 1/1 with bytes read: 0/unknown. Error: DB::HTTPException: Received error from remote server /columns_info?use_connection_pooling=1&version=1&connection_string=DSN%3D%7BClickHouse%20DSN%20%28ANSI%29%7D&schema=test_15&table=t&external_table_functions_use_nulls=1. HTTP status code: 500 Internal Server Error, body: Error getting columns from ODBC 'Code: 49. DB::Exception: Columns definition was not returned. (LOGICAL_ERROR) (version 23.2.4.12 (official build))'
[1]: https://s3.amazonaws.com/clickhouse-test-reports/47541/3d247b8635da44bccfdeb5fcd53be7130b8d0a32/upgrade_check__msan_.html
Here the problem is that system.columns has a cached value for the
total number of tables to iterate over, and so it can skip something.
But anyway, this should not be a LOGICAL_ERROR, since the ODBC bridge
does two queries:
- to system.tables and
- to system.columns
And if the table is removed between these two queries, then there
will be no columns.
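Schematically, the two lookups the bridge performs look like this (a sketch, using the database/table from the log above):
```
-- 1. Check that the table exists.
SELECT name
FROM system.tables
WHERE database = 'test_15' AND name = 't'

-- 2. Fetch its columns. If the table was dropped between the two
--    queries, this returns no rows, hence no columns definition.
SELECT name, type
FROM system.columns
WHERE database = 'test_15' AND table = 't'
```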
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
We have an issue when using an external dictionary. Occasionally the library bridge is called with extDict_libClone and fails with Unknown library method 'extDict_libClone'. It looks like this is because at some point `else if (method == "extDict_libNew")` was changed to `if (lib_new)` with no handling for extDict_libClone inside this new if/else statement, so extDict_libClone is reported as an unknown method.
So there is a two-line fix to handle extDict_libClone properly.
Error logs that we have:
```
2022.12.16 14:17:44.285088 [ 393573 ] {} <Error> ExternalDictionaries: Could not update cache dictionary 'dict.vhash_s', next update is scheduled at 2022-12-16 14:18:00: Code: 86. DB::Exception: Received error from remote server /extdict_request?version=1&dictionary_id=be2b2cd1-ba57-4658-8d1b-35ef40ab005b&method=extDict_libClone&from_dictionary_id=c3537142-eaa9-4deb-9b65-47eb8ea1dee6. HTTP status code: 500 Internal Server Error, body: Unknown library method 'extDict_libClone'
2022.12.16 14:17:44.387049 [ 399133 ] {} <Error> ExternalDictionaries: Could not update cache dictionary 'dict.vhash_s', next update is scheduled at 2022-12-16 14:17:51: Code: 86. DB::Exception: Received error from remote server /extdict_request?version=1&dictionary_id=0df866ac-6c94-4974-a76c-3940522091b9&method=extDict_libClone&from_dictionary_id=c3537142-eaa9-4deb-9b65-47eb8ea1dee6. HTTP status code: 500 Internal Server Error, body: Unknown library method 'extDict_libClone'
2022.12.16 14:17:44.488468 [ 397769 ] {} <Error> ExternalDictionaries: Could not update cache dictionary 'dict.vhash_s', next update is scheduled at 2022-12-16 14:19:38: Code: 86. DB::Exception: Received error from remote server /extdict_request?version=1&dictionary_id=2d8af321-b669-4526-982b-42c0fabf0e8d&method=extDict_libClone&from_dictionary_id=c3537142-eaa9-4deb-9b65-47eb8ea1dee6. HTTP status code: 500 Internal Server Error, body: Unknown library method 'extDict_libClone'
2022.12.16 14:17:44.489935 [ 398226 ] {datamarts_v_dwh_node0032-241534:0x552da2_1_11} <Error> executeQuery: Code: 510. DB::Exception: Update failed for dictionary 'dict.vhash_s': Code: 510. DB::Exception: Update failed for dictionary dict.vhash_s : Code: 86. DB::Exception: Received error from remote server /extdict_request?version=1&dictionary_id=be2b2cd1-ba57-4658-8d1b-35ef40ab005b&method=extDict_libClone&from_dictionary_id=c3537142-eaa9-4deb-9b65-47eb8ea1dee6. HTTP status code: 500 Internal Server Error, body: Unknown library method 'extDict_libClone'
```
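For context, the failing extDict_libClone calls above are triggered by ordinary lookups that force a cache dictionary update, e.g. (a hypothetical sketch; the attribute name and key are invented):
```
-- Any lookup that misses the cache goes through the library bridge.
SELECT dictGet('dict.vhash_s', 'value', toUInt64(42))
```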
Introduce a new `--connection` option; by default, the value of
`--host` is used as the connection name.
When using `--connection`, you can still specify `--host` explicitly,
and the host from the connection will be overwritten.
Follow-up for: #45715
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
The `test_cluster_copier` test is very frequently flaky; here is an
example of copier failures on CI [1]:
AssertionError: Instance: s0_1_0 (172.16.29.9). Info: {'ID': '5d68dcb46fdb4b0c54b7c7ba1ddde83b8f34d483bbb32abcb0c52b966444ce82', 'Running': False, 'ExitCode': 85, 'ProcessConfig': {'tty': False, 'entrypoint': '/usr/bin/clickhouse', 'arguments': ['copier', '--config', '/etc/clickhouse-server/config-copier.xml', '--task-path', '/clickhouse-copier/task_simple_4DFWYTDD49', '--task-file', '/task0_description.xml', '--task-upload-force', 'true', '--base-dir', '/var/log/clickhouse-server/copier', '--copy-fault-probability', '0.2', '--experimental-use-sample-offset', '1'], 'privileged': False, 'user': '0'}, 'OpenStdin': False, 'OpenStderr': True, 'OpenStdout': True, 'CanRemove': False, 'ContainerID': 'f356df6694b3cc09ee9830c623681626f8e8d999677c188b9fe911aa702784ca', 'DetachKeys': '', 'Pid': 84332}
assert 85 == 0
But let's look at what the error actually is. Since exit codes are truncated to a single byte, code 85 could correspond to several errors; apparently it is UNFINISHED:
```
SELECT
    name,
    code
FROM system.errors
WHERE ((code % 256) = 85) AND (NOT remote)
SETTINGS system_events_show_zero_values = 1

┌─name─────────────────────────────┬─code─┐
│ FORMAT_IS_NOT_SUITABLE_FOR_INPUT │   85 │
│ UNFINISHED                       │  341 │
│ NO_SUCH_ERROR_CODE               │  597 │
└──────────────────────────────────┴──────┘
```
Let's verify:
$ grep -r UNFINISHED ./test_cluster_copier/_instances_0/s0_1_0/logs/copier/clickhouse-copier_*
./test_cluster_copier/_instances_0/s0_1_0/logs/copier/clickhouse-copier_20230206220846_368/log.log:2023.02.06 22:09:19.015251 [ 368 ] {} <Error> : virtual int DB::ClusterCopierApp::main(const std::vector<std::string> &): Code: 341. DB::Exception: Too many tries to process table cluster1.default.hits. Abort remaining execution. (UNFINISHED), Stack trace (when copying this message, always include the lines below):
And apparently it is due to a query error with fault injection:
2023.02.06 22:09:15.654724 [ 368 ] {} <Error> Application: An error occurred while processing partition 0: Code: 62. DB::Exception: Syntax error (Query): failed at position 168 ('Native'): Native. Expected one of: token, Dot, OR, AND, BETWEEN, NOT BETWEEN, LIKE, ILIKE, NOT LIKE, NOT ILIKE, IN, NOT IN, GLOBAL IN, GLOBAL NOT IN, MOD, DIV, IS NULL, IS NOT NULL, alias, AS, Comma, OFFSET, WITH TIES, BY, LIMIT, SETTINGS, UNION, EXCEPT, INTERSECT, INTO OUTFILE, FORMAT, end of query. (SYNTAX_ERROR), Stack trace (when copying this message, always include the lines below):
Example (note the missing space before FORMAT):
select x from x limit 1FORMAT Native
Syntax error: failed at position 32 ('Native'):
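With the missing space added, the same query parses fine:
```
select x from x limit 1 FORMAT Native
```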
So fixing this should fix test_cluster_copier flakiness.
[1]: https://s3.amazonaws.com/clickhouse-test-reports/46045/bd4170e03c6af583a51d12d2c39fa775dcb9997b/integration_tests__release__[4/4].html
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
Sorry for the clickbaity title. This is about the static method
ConnectionTimeouts::getHTTPTimeouts(). It was declared in header
IO/ConnectionTimeouts.h but defined in header
IO/ConnectionTimeoutsContext.h (!). This is weird and caused issues with
linking on s390x (##45520). There was an attempt to fix some
inconsistencies (#45848), but at first neither @Algunenano nor I really
understood why the definition is in a header.
Turns out that ConnectionTimeoutsContext.h is only #include'd from
source files which are part of the normal server build BUT NOT part of
the keeper standalone build (which must be enabled via CMake
-DBUILD_STANDALONE_KEEPER=1). This dependency was not documented and as
a result, some misguided workarounds were introduced earlier, e.g.
0341c6c54b
The deeper cause was that getHTTPTimeouts() is passed a "Context". This
class is part of the "dbms" library which is deliberately not linked by
the standalone build of clickhouse-keeper. The context is only used to
read the settings and the "Settings" class is part of the
clickhouse_common library which is linked by clickhouse-keeper already.
To resolve this mess, this PR
- creates source file IO/ConnectionTimeouts.cpp and moves all
ConnectionTimeouts definitions into it, including getHTTPTimeouts().
- breaks the wrong dependency by passing "Settings" instead of "Context"
into getHTTPTimeouts().
- resolves the previous hacks
This is somewhat analogous to .netrc [1].
[1]: https://www.gnu.org/software/inetutils/manual/html_node/The-_002enetrc-file.html
The following options can be overridden on a per-hostname/connection
basis:
- hostname
- port
- secure
- user
- password
- database
- history_file
Also note that you can have multiple configurations for one hostname,
which can be useful to distinguish readonly from non-readonly
connections, for example.
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
* save format string for NetException
* format exceptions
* format exceptions 2
* format exceptions 3
* format exceptions 4
* format exceptions 5
* format exceptions 6
* fix
* format exceptions 7
* format exceptions 8
* Update MergeTreeIndexGin.cpp
* Update AggregateFunctionMap.cpp
* Update AggregateFunctionMap.cpp
* fix
Implementation:
* Added ProfileEvents::ServerStartupTime.
* Recorded time from start of main till listening to sockets.
Testing:
* Added a test 02532_profileevents_server_startup_time.sql
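The recorded value can then be read back from system.events, which is roughly what the test checks; a sketch (using the event name introduced above):
```
-- Time from the start of main() until the server listens to sockets.
SELECT event, value
FROM system.events
WHERE event = 'ServerStartupTime'
```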
* Align Benchmark::Benchmark()
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
* Add --max-consecutive-errors for clickhouse-benchmark
Unlike --continue_on_errors, it will not keep the benchmark running
forever if the server is unavailable.
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
Co-authored-by: Nikita Taranov <nikita.taranov@clickhouse.com>
After switching to replxx you don't need the following anymore:
- RL_PROMPT_START_IGNORE (\001)
- RL_PROMPT_END_IGNORE (\002)
And those symbols also break the client prompt (the text length is
calculated incorrectly: "\x01" is not interpreted as an escape sequence
but just as a bunch of bytes, which results in extra whitespace after
the prompt).
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
* master: (331 commits)
Revert "Fix endian issue in integer hex string conversion"
Update sparse-primary-indexes.md to correct typo
to MaterializeMySQL_support_drop_mulit_table-change ParserIdentifier
to MaterializeMySQL_support_drop_mulit_table-clean up unnecessary code
Make insertRangeFrom() more exception safe (#43338)
Update docs/en/sql-reference/statements/create/view.md
Update docs/en/sql-reference/statements/alter/projection.md
Analyzer table functions untuple fix
Revert "Add table_uuid to system.parts"
Update 00176_bson_parallel_parsing.sh
Update test.py
doc fix
bump lib for diag
Update kafka.md
check ast limits for create_parser_fuzzer
Disable compressed marks and index in stress tests
add a test with weird strings
Update .reference
Add information about written rows in progress indicator
Avoid possible DROP hung due to attached web disk
...