tavplubix
6e0bdaf46d
Merge pull request #14535 from zhang2014/fix/datetime
...
ISSUES-4006 support decimal data type for MaterializedMySQL
2020-09-19 14:05:32 +03:00
alexey-milovidov
c1402d62db
Merge pull request #14892 from vzakaznikov/fix_test_distributed_over_live_view2
...
Fixing tests/integration/test_distributed_over_live_view/test.py
2020-09-19 13:59:05 +03:00
alexey-milovidov
65517da62b
Merge pull request #14495 from nikitamikhaylov/update-permutation-bugfix-3
...
updatePermutation with Nullable
2020-09-19 13:53:55 +03:00
alexey-milovidov
0d26228599
Merge pull request #15002 from 4ertus2/bugs
...
Fix crash in RIGHT or FULL JOIN switch
2020-09-19 13:45:10 +03:00
alexey-milovidov
1fcebce926
Merge pull request #15000 from Jokser/disable-ttl-move-on-insert
...
Option to disable TTL move on data part insert
2020-09-19 13:45:02 +03:00
alexey-milovidov
3463d97f8c
Merge pull request #14973 from amosbird/bm2
...
Ignore key constraints when doing mutations.
2020-09-19 13:44:30 +03:00
alexey-milovidov
988b20a32c
Merge pull request #14684 from azat/parallel-distributed_ddl
...
Allow parallel execution of distributed DDL
2020-09-18 22:18:17 +03:00
Pavel Kovalenko
1fc3aa3ea8
Fixed test_disabled_ttl_move_on_insert test
2020-09-18 22:10:49 +03:00
Pavel Kovalenko
77be35a2b8
Fixed test_disabled_ttl_move_on_insert test
2020-09-18 21:59:56 +03:00
Amos Bird
d842cb704f
Allow mutations to work with key constraints.
2020-09-19 02:40:02 +08:00
Pavel Kovalenko
da04a130ed
Add option to disable ttl move on data part insert - minor fixes.
2020-09-18 20:45:30 +03:00
tavplubix
1762535ffc
Merge pull request #14797 from Vxider/add_tablefunction_null
...
Add table function null
2020-09-18 20:34:35 +03:00
Artem Zuikov
28afbafa08
fix crash in RIGHT or FULL JOIN switch
2020-09-18 19:25:20 +03:00
Pavel Kovalenko
0da19ab46d
Add option to disable ttl move on data part insert
2020-09-18 18:30:00 +03:00
alesapin
e38b537017
Merge pull request #11684 from ClickHouse/manual-write-duplicate-parts-to-replicas
...
Don't ignore duplicate parts written to replicas
2020-09-18 16:50:00 +03:00
Nikolai Kochetov
e5dfc38bfe
Skip 01455_shard_leaf_max_rows_bytes_to_read for arcadia.
2020-09-18 16:13:04 +03:00
Nikolai Kochetov
c7aff19937
Merge pull request #14221 from hagen1778/settings-leaf-limits
...
[settings]: introduce new query complexity settings for leaf-nodes
2020-09-18 14:05:10 +03:00
Artem Zuikov
0520b05001
Speedup wide integers ( #14859 )
2020-09-18 12:51:44 +03:00
alesapin
748fb74de2
Fix build type for integration tests
2020-09-18 10:02:55 +03:00
Vxider
fb31544d4a
add blank line to the end of file
2020-09-18 09:39:15 +08:00
Vxider
848664c4af
rewrite performance test to functional test
2020-09-18 09:34:51 +08:00
alexey-milovidov
76a3cc2dae
Merge pull request #14937 from filimonov/finalizeAggregation_statefullness
...
Fix enable_optimize_predicate_expression for finalizeAggregation
2020-09-18 01:24:02 +03:00
alexey-milovidov
cf5db5e4dc
Merge pull request #14888 from azat/client-imporove-INSERT-error-message
...
Improve error message for INSERT via clickhouse-client
2020-09-18 01:13:22 +03:00
alesapin
dc677b93fb
Comments and fix test
2020-09-17 22:30:17 +03:00
Alexander Tokmakov
06ff6d2eda
better 01193_metadata_loading
2020-09-17 19:28:44 +03:00
zhang2014
64032e22a2
Merge branch 'master' into fix/datetime
2020-09-18 00:28:09 +08:00
Alexander Kuzmenkov
652163c07c
Merge pull request #14883 from ClickHouse/aku/ignore-thresholds
...
Adjust ignore thresholds for unstable perf tests
2020-09-17 18:14:53 +03:00
Alexander Kuzmenkov
5539c6ecaa
Merge pull request #14928 from ClickHouse/aku/special-builds
...
Move non-essential builds to special
2020-09-17 17:19:32 +03:00
Vitaly Baranov
3356d75b23
Merge pull request #13156 from azat/cluster-secure
...
Secure inter-cluster query execution (with initial_user as current query user) [v3]
2020-09-17 17:11:00 +03:00
alexey-milovidov
496df5b3e9
Merge pull request #14678 from sundy-li/patch-2
...
dynamic zookeeper config when session expired
2020-09-17 17:05:22 +03:00
Mikhail Filimonov
22bd22702e
Fix enable_optimize_predicate_expression for finalizeAggregation
...
finalizeAggregation was wrongly marked as stateful, which prevented conditions from being pushed down.
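A minimal sketch of the kind of query that benefits (the table `metrics_agg` and its `sum_state` column are hypothetical, assumed to be `(id UInt64, sum_state AggregateFunction(sum, UInt64))`): once finalizeAggregation is not treated as stateful, the outer filter can be pushed into the subquery.
```
SET enable_optimize_predicate_expression = 1;

-- The outer WHERE id = 42 can now be pushed down into the subquery,
-- because finalizeAggregation no longer blocks predicate pushdown.
SELECT *
FROM
(
    SELECT
        id,
        finalizeAggregation(sum_state) AS total
    FROM metrics_agg
)
WHERE id = 42;
```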
2020-09-17 15:59:14 +02:00
alexey-milovidov
2413caa7d5
Merge pull request #14889 from ClickHouse/extract-all-groups-empty-match
...
Fix error in "extractAllGroups" function
2020-09-17 16:02:30 +03:00
Alexander Kuzmenkov
946d364b10
Move non-essential builds to special
...
Special builds have lower CI priority and start later. If some tests
fail, the special builds won't start at all, so we'll save some CI time.
2020-09-17 14:41:14 +03:00
alesapin
40b2f203b6
Merge branch 'master' into manual-write-duplicate-parts-to-replicas
2020-09-17 13:21:00 +03:00
alesapin
50c55eb2d2
Merge pull request #14846 from ClickHouse/add_clang_11
...
Move to clang-11 in CI builds
2020-09-17 13:14:00 +03:00
alexey-milovidov
2886b38c03
Merge branch 'master' into fix_test_distributed_over_live_view2
2020-09-17 13:06:39 +03:00
Alexander Kuzmenkov
36538ce08f
Don't account for short queries; we'll deal with them separately.
...
New query:
```
WITH ceil(max(q[3]), 1) AS h
SELECT concat('sed -i s\'/^<test.*$/<test max_ignored_relative_change="', toString(h), '">/g\' tests/performance/', test, '.xml') AS s
FROM
(
    SELECT
        test,
        query_index,
        count(*),
        min(event_time),
        max(event_time) AS t,
        arrayMap(x -> floor(x, 3), quantiles(0, 0.5, 0.95, 1)(stat_threshold)) AS q,
        median(stat_threshold) AS m
    FROM perftest.query_metrics
    WHERE (metric = 'client_time') AND (abs(diff) < 0.05) AND (old > 0.2)
    GROUP BY
        test,
        query_index,
        query_display_name
    HAVING (t > '2020-09-01 00:00:00') AND (m > 0.1)
    ORDER BY test DESC
)
GROUP BY test
ORDER BY h DESC
FORMAT PrettySpace
```
2020-09-17 13:00:51 +03:00
roman
b41421cb1c
[settings]: introduce new query complexity settings for leaf-nodes
...
The new setting should allow controlling query complexity on leaf nodes,
excluding the final merging stage on the root node. For example, a distributed
query that reads 1k rows from each of 5 shards will breach `max_rows_to_read=5000`,
while effectively every shard reads only 1k rows. With the setting `max_rows_to_read_leaf=1500`
this limit won't be reached and the query will succeed, since every shard reads
no more than ~1k rows.
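A minimal sketch under the same assumptions (five shards, ~1k rows each; the cluster and table names are hypothetical):
```
-- The leaf limit is checked per shard, so ~1k rows per shard stays under 1500.
SELECT count()
FROM cluster('five_shards', default.hits)
SETTINGS max_rows_to_read_leaf = 1500;

-- The plain limit is accumulated on the initiator across all shards,
-- so the same query can breach max_rows_to_read = 5000.
SELECT count()
FROM cluster('five_shards', default.hits)
SETTINGS max_rows_to_read = 5000;
```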
2020-09-17 10:37:05 +01:00
alesapin
f104c382f8
Merge pull request #14887 from azat/StorageFile-write-to-fd
...
Fix SIGSEGV for an attempt to INSERT into StorageFile(fd)
2020-09-17 10:25:02 +03:00
alesapin
4348dca960
Update ci_config.json
2020-09-17 10:07:58 +03:00
alesapin
73544a3781
Merge pull request #14845 from ClickHouse/fix_alias_array
...
Fix recursive column defaults
2020-09-17 10:02:39 +03:00
sundy-li
544b2cb20d
add configChanged method for zookeeper
...
fix logic error && skip reload testkeeper
2020-09-17 13:33:45 +08:00
Azat Khuzhin
13088d9bef
Fix 00900_parquet_load (update exception message on INSERT failures)
2020-09-17 08:05:56 +03:00
Vitaliy Zakaznikov
bf9feb6865
Removing usage of time.sleep in tests/integration/test_distributed_over_live_view/test.py
2020-09-16 22:07:58 -04:00
Azat Khuzhin
138e953429
Fix SIGSEGV for an attempt to INSERT into StorageFile(fd)
2020-09-17 01:26:34 +03:00
Alexey Milovidov
c37b55c3b1
Fix error in "extractAllGroups" function
2020-09-17 00:19:58 +03:00
Azat Khuzhin
7d046b24e6
Improve error message for INSERT via clickhouse-client
...
With '\n...' after the query [1], clickhouse-client prefers the data from the
INSERT over the data from stdin and produces a very tricky message:
Code: 27. DB::Exception: Cannot parse input: expected '\n' before: ' ': (at row 1)
For TSV this is OK, but for RowBinary:
Code: 33. DB::Exception: Cannot read all data. Bytes read: 1. Bytes expected: 4.
So improve the error message by adding the source of the data for the INSERT.
[1]: clickhouse-client -q "INSERT INTO data FORMAT TSV\n " <<<2
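A minimal sketch of the two cases the message now distinguishes (same hypothetical table `data` as in [1]):
```
-- Inline data after FORMAT: the client takes the rows from the query text itself.
INSERT INTO data FORMAT TSV
2

-- No inline data: the client reads the rows from stdin instead.
INSERT INTO data FORMAT TSV
```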
2020-09-17 00:16:51 +03:00
alexey-milovidov
84b210f93e
Merge pull request #14864 from bharatnc/ncb/format-integration-tests
...
Format and clean up imports from all *.py integration test files
2020-09-16 20:40:57 +03:00
Alexander Kuzmenkov
0f8aec59a3
Adjust ignore thresholds for unstable perf tests
...
Based on historical data.
```
SELECT
    test,
    ceil(max(q[3]), 1) AS h
FROM
(
    SELECT
        test,
        query_index,
        count(*),
        min(event_time),
        max(event_time) AS t,
        arrayMap(x -> floor(x, 3), quantiles(0, 0.5, 0.95, 1)(stat_threshold)) AS q,
        median(stat_threshold) AS m
    FROM perftest.query_metrics
    WHERE (metric = 'client_time') AND (abs(diff) < 0.05)
    GROUP BY
        test,
        query_index,
        query_display_name
    HAVING (t > '2020-09-01 00:00:00') AND (m > 0.1)
    ORDER BY m DESC
)
GROUP BY test
ORDER BY h DESC
FORMAT TSV
cryptographic_hashes 1.3
collations 0.8
joins_in_memory_pmj 0.8
joins_in_memory 0.7
merge_tree_simple_select 0.7
set_index 0.7
decimal_casts 0.7
website 0.6
logical_functions_medium 0.5
count 0.5
merge_tree_many_partitions 0.5
decimal_aggregates 0.5
codecs_int_insert 0.5
column_column_comparison 0.5
insert_parallel 0.4
parse_engine_file 0.4
read_in_order_many_parts 0.4
logical_functions_small 0.4
parallel_insert 0.3
parallel_index 0.3
push_down_limit 0.3
jit_large_requests 0.3
select_format 0.3
arithmetic 0.3
merge_tree_huge_pk 0.3
materialized_view_parallel_insert 0.3
columns_hashing 0.3
if_array_string 0.3
random_string 0.2
random_printable_ascii 0.2
set 0.2
empty_string_serialization 0.2
```
To apply:
```
sed 's/^\(.*\) \(.*\)$/sed -i "s\/^<test.*$\/<test max_ignored_relative_change="'"'"\2">\/g" tests\/performance\/\1.xml/g' ../bad.tsv | bash
```
2020-09-16 18:27:51 +03:00
tavplubix
faa5190f11
Update arcadia_skip_list.txt
2020-09-16 18:17:16 +03:00