Commit Graph

15716 Commits

Author SHA1 Message Date
vdimir
b8b64b1d15
Bugfix check requires either functional _or_ stateless test 2022-03-14 13:09:53 +00:00
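A minimal Python sketch of the gating idea in the entry above; since this series also adds an integration-test bugfix check, the sketch assumes the two required test kinds are stateless (functional) and integration tests. The paths and helper name are assumptions, not the actual ClickHouse CI code:

    # Hypothetical gate, not the real CI implementation: the bugfix check only
    # makes sense when the PR adds at least one test of either kind.
    def bugfix_check_has_required_tests(changed_files):
        stateless = [f for f in changed_files if f.startswith("tests/queries/0_stateless/")]
        integration = [f for f in changed_files if f.startswith("tests/integration/")]
        return bool(stateless) or bool(integration)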
vdimir
f2050b062e
Remove dummy tests ok on master 2022-03-11 14:42:19 +00:00
vdimir
983e8c8bdf
Remove tests failed on master 2022-03-11 14:37:00 +00:00
vdimir
23bee2c843
Do not invert result of integration bugfix check in ci-runner.py 2022-03-11 14:32:24 +00:00
vdimir
e5c63266c2
Integration test bugfix check 2022-03-11 14:14:16 +00:00
vdimir
e757837ec0
Different status colors in report for bugfix validation 2022-03-11 13:36:29 +00:00
vdimir
f7bcb796ce
add dummy integration tests 2022-03-11 12:50:34 +00:00
vdimir
6614d6eaaf
bugfix validate integration test 2022-03-11 12:50:33 +00:00
vdimir
b8c7e4657f
invert check in validate bugfix 2022-03-10 16:43:44 +00:00
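A hedged Python sketch of the inversion idea behind the entry above: in bugfix-validation mode the new tests are run against the previous release, where they are expected to fail while the bug is still present. The function and status strings are illustrative, not the actual ci-runner.py logic:

    # Illustrative only, not the real ci-runner.py code: a bugfix is considered
    # validated when the newly added tests FAIL on the previous release.
    def validate_bugfix_status(tests_failed_on_previous_release):
        return "success" if tests_failed_on_previous_release else "failure"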
vdimir
355a0d6fef
upd test for bugfix validate 2022-03-10 16:43:43 +00:00
vdimir
000a31df3b
no-random-settings for bugfix validate 2022-03-10 16:43:43 +00:00
vdimir
fcb7e9ed36
fix CLICKHOUSE_CLIENT_DOWNLOAD_URL in download_previous_release.py 2022-03-10 16:43:42 +00:00
vdimir
4947d6db13
Use download_previous_release for bugfix validation
Should be merged with https://github.com/ClickHouse/ClickHouse/pull/27928
2022-03-10 16:43:41 +00:00
vdimir
196645679f
Add bugfix validate check to ci_config 2022-03-10 16:43:41 +00:00
vdimir
73b69805a8
add testcase to check bugfix validation 2022-03-10 16:43:40 +00:00
vdimir
da179c607e
Add bugfix validate check 2022-03-10 16:43:40 +00:00
Kruglov Pavel
a506120646
Fix bug in schema inference in s3 table function (#35176) 2022-03-10 15:16:07 +01:00
Vladimir C
84af08b1a1
Merge pull request #35116 from bigo-sg/snappy_bug 2022-03-10 11:47:37 +01:00
Kseniia Sumarokova
e6ee891c9c
Merge pull request #34957 from bigo-sg/hive_random_access_file_cache
Optimization for the first read of a random access read buffer in Hive
2022-03-10 11:36:22 +01:00
lgbo-ustc
fdd423a3da fixed code style 2022-03-10 12:13:19 +08:00
lgbo-ustc
e4883f31b7 update tests
1. fixed code style in src/IO/tests/gtest_hadoop_snappy_decoder.cpp
2. enabled tests in 01060_avro.sh
2022-03-10 09:46:43 +08:00
Vladimir C
ce266b5a3e
Merge pull request #35146 from amosbird/fixpartitionprunerin 2022-03-09 13:23:45 +01:00
Amos Bird
a19224bc9b
Fix partition pruner: non-monotonic function IN 2022-03-09 15:48:42 +08:00
lgbo-ustc
95d8f28aa0 update test.py 2022-03-09 15:42:57 +08:00
lgbo-ustc
46c4b3a69f retry on exception 2022-03-09 11:03:05 +08:00
Azat Khuzhin
4843e210c3 Support view() for parallel_distributed_insert_select
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2022-03-08 22:05:57 +03:00
Azat Khuzhin
a871036361
Fix parallel_reading_from_replicas with clickhouse-benchmark (#34751)
* Use INITIAL_QUERY for clickhouse-benchmark

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>

* Fix parallel_reading_from_replicas with clickhouse-benchmark

Before, it produced the following error:

    $ clickhouse-benchmark --stacktrace -i1 --query "select * from remote('127.1', default.data_mt) limit 10" --allow_experimental_parallel_reading_from_replicas=1 --max_parallel_replicas=3
    Loaded 1 queries.
    Logical error: 'Coordinator for parallel reading from replicas is not initialized'.
    Aborted (core dumped)

Since it uses the same code, i.e. RemoteQueryExecutor ->
MultiplexedConnections, which enables the coordinator whenever it is
requested via settings, although that should be done only for non-initial
queries, i.e. when one server sends a connection to another server
(a hedged sketch of this condition follows this entry).

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>

* Fix 02226_parallel_reading_from_replicas_benchmark for older shellcheck

Shellcheck 0.8 does not complain, while the CI runs shellcheck 0.7.0,
which does complain [1]:

    In 02226_parallel_reading_from_replicas_benchmark.sh line 17:
        --allow_experimental_parallel_reading_from_replicas=1
        ^-- SC2191: The = here is literal. To assign by index, use ( [index]=value ) with no spaces. To keep as literal, quote it.

    Did you mean:
        "--allow_experimental_parallel_reading_from_replicas=1"

  [1]: https://s3.amazonaws.com/clickhouse-test-reports/34751/d883af711822faf294c876b017cbf745b1cda1b3/style_check__actions_/shellcheck_output.txt

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2022-03-08 16:42:29 +01:00
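A hedged Python paraphrase of the condition described in the entry above; the real code is C++ in RemoteQueryExecutor/MultiplexedConnections, and the names below are illustrative:

    # Illustrative paraphrase, not the actual C++ code: the parallel-replicas
    # coordinator should only be expected for secondary (non-initial) queries,
    # i.e. connections one server makes to another. Marking clickhouse-benchmark
    # queries as INITIAL_QUERY keeps this condition false for them.
    def expects_parallel_replicas_coordinator(settings, query_kind):
        return (
            settings.get("allow_experimental_parallel_reading_from_replicas", 0) != 0
            and query_kind != "INITIAL_QUERY"
        )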
Azat Khuzhin
c4b6342853
Improvements for parallel_distributed_insert_select (and related) (#34728)
* Add a warning if parallel_distributed_insert_select was ignored

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>

* Respect max_distributed_depth for parallel_distributed_insert_select

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>

* Print a warning for non-applied parallel_distributed_insert_select only for the initial query

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>

* Remove Cluster::getHashOfAddresses()

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>

* Forbid parallel_distributed_insert_select for remote()/cluster() with different addresses

Before, it used the empty cluster name (getClusterName()), which is not
correct; compare all addresses instead.

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>

* Fix max_distributed_depth check

max_distributed_depth=1 must mean not more than one distributed query,
not two, since max_distributed_depth=0 means no limit and
distributed_depth is 0 for the first query (a small sketch of this check
follows this entry).

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>

* Fix INSERT INTO remote()/cluster() with parallel_distributed_insert_select

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>

* Add a test for parallel_distributed_insert_select with cluster()/remote()

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>

* Return <remote> instead of empty cluster name in Distributed engine

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>

* Make user with and without sharding_key in remote()/cluster() identical

Before, with sharding_key the user was "default", while without it the
user was empty.

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2022-03-08 15:24:39 +01:00
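A small Python sketch of the corrected max_distributed_depth check described above; the function and exception are illustrative, not the actual ClickHouse implementation:

    # Illustrative sketch, not the actual C++ code. max_distributed_depth = 0
    # means "no limit"; distributed_depth is 0 for the first query, so
    # max_distributed_depth = 1 allows only that first distributed query and
    # rejects the next level.
    def check_distributed_depth(distributed_depth, max_distributed_depth):
        if max_distributed_depth and distributed_depth >= max_distributed_depth:
            raise RuntimeError("Maximum distributed depth exceeded")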
Vladimir C
95dd88d3de
Merge pull request #35058 from vdimir/fix-aarch64 2022-03-08 12:16:23 +01:00
lgbo-ustc
d98ef45a50 update tests 2022-03-08 18:22:53 +08:00
lgbo-ustc
256e92ffee Merge remote-tracking branch 'ck/master' into hive_random_access_file_cache 2022-03-08 14:14:40 +08:00
lgbo-ustc
148109e929 update tests 2022-03-08 09:36:02 +08:00
alexey-milovidov
df1a031851
Merge pull request #35046 from vdimir/issue-35044
Fix trim function
2022-03-08 01:50:02 +03:00
Nikolai Kochetov
8f77b2b778
Merge pull request #34889 from ClickHouse/finally-enable-s3-async-writes-again
Update DiskS3.cpp
2022-03-07 21:31:44 +01:00
Kseniia Sumarokova
28b9ec01c0
Merge pull request #34945 from bigo-sg/hive_bug_fixed
Unexpected result when using `in` in a Hive query
2022-03-07 17:13:11 +01:00
alesapin
aae13ed912 Suppress move partition long for storage S3 2022-03-07 15:18:57 +01:00
mergify[bot]
88052e2d7c
Merge branch 'master' into finally-enable-s3-async-writes-again 2022-03-07 12:55:52 +00:00
vdimir
20478e9088
add testcase to 02100_replaceRegexpAll_bug 2022-03-07 11:18:12 +00:00
vdimir
202ac18e76
Skip 01086_odbc_roundtrip for aarch, disable force_tests 2022-03-07 11:04:37 +00:00
alesapin
527df53c1e
Merge pull request #35088 from ClickHouse/push-artifactory-improvement
Put downloaded artifacts to a temporary path
2022-03-07 11:39:21 +01:00
lgbo-ustc
0f40a5a52d update tests 2022-03-07 17:31:27 +08:00
lgbo-ustc
f322674577 update tests 2022-03-07 17:22:55 +08:00
lgbo-ustc
a016ce3576 update codes 2022-03-07 12:15:20 +08:00
lgbo-ustc
4507cc58aa update codes 2022-03-07 12:05:07 +08:00
Mikhail f. Shiryaev
223ec3d0b6
Put downloaded artifacts to a temporary path 2022-03-06 14:07:47 +01:00
mergify[bot]
275ce197c7
Merge branch 'master' into fix-tests 2022-03-05 23:26:36 +00:00
alexey-milovidov
f9b7df6ba1
Merge pull request #35050 from CurtizJ/fix-async-inserts-system-table
Fix reading from `system.asynchronous_inserts` table
2022-03-06 02:25:53 +03:00
avogar
722e0ea214 Fix clickhouse-test 2022-03-05 16:46:14 +00:00
avogar
abd4a32f83 Merge branch 'master' of github.com:ClickHouse/ClickHouse into fix-tests 2022-03-05 16:45:39 +00:00
avogar
8a12a4c214 Try to fix failed tests 2022-03-05 16:17:08 +00:00