Commit Graph

77 Commits

Author SHA1 Message Date
Raúl Marín
de855ca917 Reduce header dependencies 2024-03-19 17:04:29 +01:00
Alexey Milovidov
cbf5443585 Remove old code 2024-03-04 00:11:55 +01:00
Igor Nikonov
b85a68790a Cleanup: connection pool priority -> config priority
- the old names created confusion for a reader between the config priority and the balancing priority
2024-02-16 14:39:41 +00:00
Azat Khuzhin
7fb31fe160 Remove ability to disable generic clickhouse components
Components like client/server/... are very generic, and there is no point in
disabling them, since doing so does not reduce the amount of compiled code
much anyway (just a few modules for entrypoints; everything else is already
included in the clickhouse binary), and in the end they are just symlinks to
the clickhouse binary.

But there are a few that require extra libraries, like the ODBC bridge or the
keeper components (there is also a standalone keeper binary compiled with
musl); those have been kept.

Also add descriptions for some utils and change the exit code to 0 for
--help.

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2024-02-12 11:10:00 +01:00
Alexey Milovidov
6bb181ce55 Looking at strange code 2023-12-23 13:06:34 +01:00
Raúl Marín
b269f87f4c Better text_log with ErrnoException 2023-12-15 19:27:56 +01:00
Alexey Milovidov
d56cbda185 Add metrics for the number of queued jobs, which is useful for the IO thread pool 2023-11-18 19:07:59 +01:00
Azat Khuzhin
8782873e4f Fix overrides via connections_credentials when root directives exist
Previously the following did not work: it always used user `dev`, even with
`clickhouse-client --connection prod`:

  ```yaml
  user: dev

  connections_credentials:
    prod:
      name: prod
      user: prod
  ```

The problem was that before it was not possible to distinguish options set
via the command line from options set via the configuration file.

I've split these two actions and embedded a call to
parseConnectionsCredentials() in between.

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-10-27 20:03:50 +02:00
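
A minimal sketch of the idea behind the commit above, assuming a simplified option model (the `mergeOptions` helper and the flat key/value maps are hypothetical, not the actual clickhouse-client code): the connection profile from `connections_credentials` is applied on top of the root config values, and explicit command-line options are applied last, so they still win.

```cpp
#include <iostream>
#include <map>
#include <string>

// Hypothetical sketch: options are merged in three layers. The connection
// profile overrides the root config values, and command-line options override
// both, which is the ordering the commit above establishes.
using Options = std::map<std::string, std::string>;

Options mergeOptions(const Options & root_config,
                     const Options & connection_profile,
                     const Options & command_line)
{
    Options result = root_config;
    for (const auto & [key, value] : connection_profile)  // parseConnectionsCredentials() step
        result[key] = value;
    for (const auto & [key, value] : command_line)         // explicit CLI flags win
        result[key] = value;
    return result;
}

int main()
{
    Options root_config = {{"user", "dev"}};
    Options prod_profile = {{"user", "prod"}};   // from connections_credentials.prod
    Options command_line = {};                   // nothing passed besides --connection prod

    Options effective = mergeOptions(root_config, prod_profile, command_line);
    std::cout << "effective user: " << effective["user"] << "\n";  // prints "prod"
}
```
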
Alexey Gerasimchuck
3140958132 Added client_info validation 2023-08-22 03:52:57 +00:00
Alexey Milovidov
38a391ef57 Fix tidy 2023-08-12 16:34:35 +02:00
Alexey Milovidov
45655928d1 clickhouse-benchmark: connect in parallel 2023-08-10 23:39:06 +02:00
Alexey Milovidov
98ae9be734
Revert "Added tests for ClickHouse apps help and fixed help issues" 2023-04-21 01:54:34 +03:00
Yatsishin Ilya
b5b65d2149 Merge remote-tracking branch 'origin' into clickhouse-help 2023-04-11 11:24:48 +00:00
Azat Khuzhin
f38a7aeabe ThreadPool metrics introspection
There are lots of thread pools, and a simple local-vs-global distinction is no
longer enough; it is good to know which pool in particular is using threads.

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-03-29 10:46:59 +02:00
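
A rough illustration of per-pool introspection, in standard C++ only (the `NamedPool` class is made up and is not the ClickHouse ThreadPool API): each pool carries its own counter, so metrics can report thread usage per pool instead of one global number.

```cpp
#include <atomic>
#include <functional>
#include <iostream>
#include <string>
#include <thread>
#include <vector>

// Hypothetical sketch: every pool has a name and its own counter of active
// threads, which is what makes per-pool introspection possible.
class NamedPool
{
public:
    explicit NamedPool(std::string name_) : name(std::move(name_)) {}

    void schedule(std::function<void()> job)
    {
        workers.emplace_back([this, job = std::move(job)]
        {
            ++active_threads;
            job();
            --active_threads;
        });
    }

    ~NamedPool()
    {
        for (auto & t : workers)
            t.join();
    }

    std::string name;
    std::atomic<int> active_threads{0};

private:
    std::vector<std::thread> workers;
};

int main()
{
    NamedPool io_pool("IOThreadPool");
    io_pool.schedule([] { /* pretend to do some IO */ });
    std::cout << io_pool.name << " active threads: " << io_pool.active_threads << "\n";
}
```
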
Yatsishin Ilya
80857638c1 Merge remote-tracking branch 'origin/master' into clickhouse-help 2023-02-22 13:58:17 +00:00
Robert Schulze
84b9ff450f
Fix terribly broken, fragile and potentially cyclic linking
Sorry for the clickbaity title. This is about the static method
ConnectionTimeouts::getHTTPTimeouts(). It was declared in header
IO/ConnectionTimeouts.h and defined in header
IO/ConnectionTimeoutsContext.h (!). This is weird and caused issues with
linking on s390x (#45520). There was an attempt to fix some
inconsistencies (#45848), but at first neither @Algunenano nor I really
understood why the definition was in a header.

Turns out that ConnectionTimeoutsContext.h is only #include'd from
source files which are part of the normal server build BUT NOT part of
the keeper standalone build (which must be enabled via CMake
-DBUILD_STANDALONE_KEEPER=1). This dependency was not documented and as
a result, some misguided workarounds were introduced earlier, e.g.
0341c6c54b

The deeper cause was that getHTTPTimeouts() is passed a "Context". This
class is part of the "dbms" library, which is deliberately not linked by
the standalone build of clickhouse-keeper. The context is only used to
read the settings, and the "Settings" class is part of the
clickhouse_common library, which clickhouse-keeper links already.

To resolve this mess, this PR

- creates source file IO/ConnectionTimeouts.cpp and moves all
  ConnectionTimeouts definitions into it, including getHTTPTimeouts().

- breaks the wrong dependency by passing "Settings" instead of "Context"
  into getHTTPTimeouts().

- resolves the previous hacks
2023-02-05 20:49:34 +00:00
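
The dependency-breaking pattern described above, sketched with made-up types (the `*Sketch` names are placeholders; the real ConnectionTimeouts, Settings and Context classes are far richer): declare in the header, define in a .cpp, and accept only the narrow Settings type so callers no longer need the heavyweight Context library.

```cpp
#include <iostream>

// --- ConnectionTimeoutsSketch.h: declaration only, no heavy includes ---
struct SettingsSketch             // stands in for the real Settings class
{
    int connect_timeout_sec = 10;
    int receive_timeout_sec = 300;
};

struct ConnectionTimeoutsSketch
{
    int connect_timeout_sec;
    int receive_timeout_sec;

    // Takes Settings, not Context, so translation units that cannot link the
    // dbms library (e.g. a standalone keeper-style build) can still call it.
    static ConnectionTimeoutsSketch getHTTPTimeouts(const SettingsSketch & settings);
};

// --- ConnectionTimeoutsSketch.cpp: the single out-of-line definition ---
ConnectionTimeoutsSketch ConnectionTimeoutsSketch::getHTTPTimeouts(const SettingsSketch & settings)
{
    return {settings.connect_timeout_sec, settings.receive_timeout_sec};
}

int main()
{
    SettingsSketch settings;
    auto timeouts = ConnectionTimeoutsSketch::getHTTPTimeouts(settings);
    std::cout << "connect timeout: " << timeouts.connect_timeout_sec << "s\n";
}
```
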
Yatsishin Ilya
f4cfd8a2d9 Merge remote-tracking branch 'origin' into clickhouse-help 2023-02-03 20:08:23 +00:00
Nikita Mikhaylov
33877b5e00
Parallel replicas. Part [2] (#43772) 2023-02-03 14:34:18 +01:00
Yatsishin Ilya
98edb9f06b Update help for clickhouse tools and add test 2023-01-31 12:19:37 +00:00
Alexander Tokmakov
3f6594f4c6 forbid old ctor of Exception 2023-01-23 22:18:05 +01:00
Robert Schulze
27f5aad49e
What happens if I remove 156 lines of code? 2023-01-03 18:51:16 +00:00
Alexey Milovidov
d4864c7d38 Better command line argument name in clickhouse-benchmark 2022-12-24 20:40:23 +01:00
Azat Khuzhin
a8faf196c4
Add --max-consecutive-errors for clickhouse-benchmark (#43344)
* Align Benchmark::Benchmark()

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>

* Add --max-consecutive-errors for clickhouse-benchmark

Unlike --continue_on_errors, it will not keep the benchmark running forever
if the server is unavailable.

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
Co-authored-by: Nikita Taranov <nikita.taranov@clickhouse.com>
2022-12-07 14:51:16 +01:00
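
A schematic of the consecutive-error logic in generic C++ (not the actual Benchmark.cpp; `runOneQuery` is a hypothetical stand-in): the counter resets on every success, so only an unbroken run of failures stops the benchmark, whereas --continue_on_errors alone would keep it running indefinitely.

```cpp
#include <cstdlib>
#include <iostream>

// Hypothetical stand-in for issuing one benchmark query; fails randomly here.
static bool runOneQuery()
{
    return std::rand() % 4 != 0;  // ~75% success rate for the demo
}

int main()
{
    const int max_consecutive_errors = 5;  // mirrors the --max-consecutive-errors idea
    int consecutive_errors = 0;

    for (int i = 0; i < 1000; ++i)
    {
        if (runOneQuery())
        {
            consecutive_errors = 0;         // any success resets the streak
        }
        else if (++consecutive_errors >= max_consecutive_errors)
        {
            std::cerr << "Too many consecutive errors, stopping benchmark\n";
            return EXIT_FAILURE;
        }
    }
    std::cout << "Benchmark finished\n";
}
```
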
Alexey Milovidov
35cce03125 Remove dlopen 2022-09-17 03:02:34 +02:00
Robert Schulze
e8b3f56733
Limit suppression to a specific warning 2022-08-21 18:24:17 +00:00
Alexey Milovidov
74e1f4dc61 Fix clang-tidy 2022-08-20 17:09:20 +02:00
Alexey Milovidov
e774d28c43 Fix style 2022-08-14 04:16:48 +02:00
Alexey Milovidov
53ce2986de Display server-side time in clickhouse-benchmark by default 2022-08-14 03:33:42 +02:00
Yakov Olkhovskiy
2e34b384c1 update tcp protocol, add quota_key 2022-08-03 15:44:08 -04:00
Robert Schulze
1b81bb49b4
Enable clang-tidy modernize-deprecated-headers & hicpp-deprecated-headers
Official docs:

  Some headers from C library were deprecated in C++ and are no longer
  welcome in C++ codebases. Some have no effect in C++. For more details
  refer to the C++ 14 Standard [depr.c.headers] section. This check
  replaces C standard library headers with their C++ alternatives and
  removes redundant ones.
2022-05-09 08:23:33 +02:00
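
For illustration, the kind of rewrite these clang-tidy checks suggest: replacing deprecated C headers with their C++ counterparts and using the std::-qualified names.

```cpp
// Before: #include <stdio.h> and #include <string.h>
// After, as modernize-deprecated-headers would suggest:
#include <cstdio>
#include <cstring>

int main()
{
    const char * greeting = "hello";
    std::printf("%zu\n", std::strlen(greeting));  // std::-qualified C library calls
}
```
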
Alexey Milovidov
dc914c635c
Merge pull request #36497 from tonickkozlov/tonickkozlov/benchmark/allow-auth-env
Benchmark can read auth from environment variables
2022-04-30 08:55:15 +03:00
Anton Kozlov
5cc78febde [benchmark] Allow auth environment variables 2022-04-29 11:30:59 +00:00
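
A hedged sketch of reading credentials from the environment with a command-line fallback, in plain C++; the variable names CLICKHOUSE_USER/CLICKHOUSE_PASSWORD and the exact precedence used by clickhouse-benchmark are assumptions here, not taken from the commit.

```cpp
#include <cstdlib>
#include <iostream>
#include <string>

// Assumed precedence: an explicit command-line value wins, otherwise the
// environment variable is used, otherwise a default.
static std::string resolve(const char * cli_value, const char * env_name, const char * fallback)
{
    if (cli_value && *cli_value)
        return cli_value;
    if (const char * env_value = std::getenv(env_name))
        return env_value;
    return fallback;
}

int main(int argc, char ** argv)
{
    const char * cli_user = argc > 1 ? argv[1] : nullptr;
    std::string user = resolve(cli_user, "CLICKHOUSE_USER", "default");       // hypothetical env var name
    std::string password = resolve(nullptr, "CLICKHOUSE_PASSWORD", "");       // hypothetical env var name
    std::cout << "connecting as " << user << "\n";
    (void)password;
}
```
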
Tian Xinhui
164647cc05
Update programs/benchmark/Benchmark.cpp
Co-authored-by: Vladimir C <vdimir@clickhouse.com>
2022-04-22 18:31:07 +08:00
xinhuitian
f261291fa6 fix benchmark json report info 2022-04-21 14:10:29 +08:00
Azat Khuzhin
a871036361
Fix parallel_reading_from_replicas with clickhouse-benchmark (#34751)
* Use INITIAL_QUERY for clickhouse-benchmark

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>

* Fix parallel_reading_from_replicas with clickhouse-benchmark

Before, it produced the following error:

    $ clickhouse-benchmark --stacktrace -i1 --query "select * from remote('127.1', default.data_mt) limit 10" --allow_experimental_parallel_reading_from_replicas=1 --max_parallel_replicas=3
    Loaded 1 queries.
    Logical error: 'Coordinator for parallel reading from replicas is not initialized'.
    Aborted (core dumped)

This happens because it uses the same code path, i.e. RemoteQueryExecutor ->
MultiplexedConnections, which enables the coordinator if it was requested
via settings; but this should be done only for non-initial queries, i.e.
when one server opens a connection to another server.

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>

* Fix 02226_parallel_reading_from_replicas_benchmark for older shellcheck

shellcheck 0.8 does not complain, while the CI uses shellcheck 0.7.0, which
does complain [1]:

    In 02226_parallel_reading_from_replicas_benchmark.sh line 17:
        --allow_experimental_parallel_reading_from_replicas=1
        ^-- SC2191: The = here is literal. To assign by index, use ( [index]=value ) with no spaces. To keep as literal, quote it.

    Did you mean:
        "--allow_experimental_parallel_reading_from_replicas=1"

  [1]: https://s3.amazonaws.com/clickhouse-test-reports/34751/d883af711822faf294c876b017cbf745b1cda1b3/style_check__actions_/shellcheck_output.txt

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2022-03-08 16:42:29 +01:00
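
The gist of the fix above in a generic form (the enum, `Coordinator` struct and `maybeCreateCoordinator` function are stand-ins, not the real RemoteQueryExecutor code): the parallel-replicas coordinator should be set up only for secondary queries, i.e. queries one server forwards to another, never for the initial query that clickhouse-benchmark sends as a client.

```cpp
#include <iostream>
#include <memory>

// Hypothetical stand-ins for the query kinds mentioned in the commit message.
enum class QueryKind { Initial, Secondary };

struct Coordinator { /* would coordinate reading from replicas */ };

// Create the coordinator only when the setting asks for it AND this is a
// secondary (server-to-server) query; an initial client query must not do it.
std::shared_ptr<Coordinator> maybeCreateCoordinator(bool parallel_replicas_enabled, QueryKind kind)
{
    if (parallel_replicas_enabled && kind == QueryKind::Secondary)
        return std::make_shared<Coordinator>();
    return nullptr;
}

int main()
{
    auto from_benchmark = maybeCreateCoordinator(true, QueryKind::Initial);
    auto from_remote    = maybeCreateCoordinator(true, QueryKind::Secondary);
    std::cout << "initial query gets coordinator: " << (from_benchmark != nullptr) << "\n";  // 0
    std::cout << "secondary query gets coordinator: " << (from_remote != nullptr) << "\n";   // 1
}
```
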
Alexey Milovidov
7067fafd76 Reimplement #33054 2021-12-29 22:09:07 +03:00
Nikolai Kochetov
fd14faeae2 Remove DataStreams folder. 2021-10-15 23:18:20 +03:00
Nikolai Kochetov
340b53ef85 Remove some more streams. 2021-10-08 17:03:54 +03:00
alexey-milovidov
ec949e4c66
Merge pull request #26607 from azat/bench-round-robin
Add round-robin support for clickhouse-benchmark
2021-07-24 20:17:35 +03:00
alexey-milovidov
a2ccfed2e2
Update Benchmark.cpp 2021-07-24 19:14:57 +03:00
alexey-milovidov
ea48b2a810
Update Benchmark.cpp 2021-07-24 19:10:37 +03:00
alexey-milovidov
adb92c1b70
Merge pull request #26656 from azat/bench-hang-on-EMFILE-fix
Avoid hanging clickhouse-benchmark if connection fails (i.e. on EMFILE)
2021-07-24 19:02:46 +03:00
vdimir
e4f3b9e7f4
Log exception message in void thread in clickhouse-benchmark 2021-07-23 17:41:32 +03:00
vdimir
d1106b325e
Lock mutex before access to std::cerr in clickhouse-benchmark 2021-07-23 17:35:22 +03:00
Azat Khuzhin
3f0dd40c69 Avoid hanging clickhouse-benchmark if connection fails (i.e. on EMFILE) 2021-07-22 10:08:49 +03:00
Azat Khuzhin
1860808969 Add round-robin support for clickhouse-benchmark 2021-07-20 23:00:04 +03:00
Alexey Milovidov
7a993404b4 Whitespace 2021-07-02 02:30:18 +03:00
Azat Khuzhin
18e8f0eb5e Add ability to push down LIMIT for distributed queries
This way the remote nodes will not need to send all the rows, which decreases
network I/O and also makes queries with optimize_aggregation_in_order=1/LIMIT X
and without ORDER BY faster, since the initiator will not need to read all the
rows, only the first X (but note that for this your data needs to be sharded
correctly, or you may get inaccurate results).

Note that having lots of processing stages increases the complexity of the
interpreter (which is already not that clean and simple right now).

Still, using a separate QueryProcessingStage looks pretty natural.

Another option is to use WithMergeableStateAfterAggregation always, but in
that case it would not be possible to disable only this optimization, e.g.
if some issue with it turns up.

v2: fix OFFSET
v3: convert 01814_distributed_push_down_limit test to .sh and add retries
v4: add test with OFFSET
v5: add new query stage into the bash completion
v6/tests: use LIMIT O,L syntax over LIMIT L OFFSET O since the latter is broken in the ANTLR parser
          https://clickhouse-test-reports.s3.yandex.net/23027/a18a06399b7aeacba7c50b5d1e981ada5df19745/functional_stateless_tests_(antlr_debug).html#fail1
v7/tests: set use_hedged_requests to 0, to avoid excessive log entries on retries
          https://clickhouse-test-reports.s3.yandex.net/23027/a18a06399b7aeacba7c50b5d1e981ada5df19745/functional_stateless_tests_flaky_check_(address).html#fail1
2021-06-09 02:29:50 +03:00
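
A toy model of the pushdown described above, in plain C++ with nothing ClickHouse-specific (shards are just vectors, `shardQueryWithPushedLimit` is hypothetical): with the limit pushed to each shard, every shard ships at most X rows and the initiator merges and truncates again, instead of receiving every row.

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

// Each "shard" returns at most `limit` rows instead of its whole result set.
std::vector<int> shardQueryWithPushedLimit(const std::vector<int> & shard_rows, size_t limit)
{
    size_t n = std::min(limit, shard_rows.size());
    return std::vector<int>(shard_rows.begin(), shard_rows.begin() + n);
}

int main()
{
    std::vector<std::vector<int>> shards = {{1, 2, 3, 4, 5}, {6, 7, 8, 9}, {10, 11}};
    const size_t limit = 3;  // models LIMIT 3 without ORDER BY

    std::vector<int> merged;
    size_t rows_sent_over_network = 0;
    for (const auto & shard : shards)
    {
        auto partial = shardQueryWithPushedLimit(shard, limit);  // pushed-down LIMIT
        rows_sent_over_network += partial.size();
        merged.insert(merged.end(), partial.begin(), partial.end());
    }
    merged.resize(std::min(limit, merged.size()));  // initiator applies the final LIMIT

    std::cout << "rows over network: " << rows_sent_over_network   // 8 instead of 11
              << ", rows returned: " << merged.size() << "\n";     // 3
}
```
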
tavplubix
e9ff0b6d70
Merge pull request #23657 from kssenii/poco-file-to-std-fs
Poco::File to std::filesystem
2021-05-31 23:17:02 +03:00
Nikolai Kochetov
afc1fe7f3d Make ContextPtr const by default. 2021-05-31 17:49:02 +03:00