Commit Graph

68996 Commits

Author SHA1 Message Date
alexey-milovidov
418af36372
Merge pull request #26757 from Felixoid/fix-18923
Disable watchdog in docker by default
2021-07-24 05:52:16 +03:00
alexey-milovidov
dab9cfb9c9
Merge pull request #26713 from ClickHouse/remove-more-and--more-streams
Remove more streams.
2021-07-24 02:24:10 +03:00
Alexey Milovidov
0a4e26e682 Experiment with sharing file descriptors 2021-07-24 01:50:14 +03:00
BoloniniD
07c57edbfc Merge branch 'master' of github.com:ClickHouse/ClickHouse into pipe_reading 2021-07-23 23:39:38 +03:00
Vitaly Baranov
249ccd879e SET PROFILE applies constraints too. 2021-07-23 23:28:55 +03:00
Vitaly Baranov
db97921b5b Changing default roles affects new sessions only. 2021-07-23 23:23:14 +03:00
Vitaly Baranov
c68c74634d
Merge pull request #26707 from vitlibar/fix-set-role
Fix SET ROLE
2021-07-23 23:16:15 +03:00
Mikhail f. Shiryaev
64c35b2511
Disable watchdog in docker by default 2021-07-23 21:42:33 +02:00
Nikolai Kochetov
9c92f43359 Update storages. 2021-07-23 22:33:59 +03:00
alexey-milovidov
c4f997337d
Merge pull request #26744 from vzakaznikov/fix_testflows_test_window_functions_error_window_function_in_join
Update error message in tests/testflows/window_functions/tests/errors.py
2021-07-23 19:42:15 +03:00
Vitaliy Zakaznikov
1b5f697480 Enabling RBAC TestFlows tests and crossing out new fails. 2021-07-23 12:09:18 -04:00
Caspian
421e59b9f5 rm whitespace 2021-07-23 23:12:30 +08:00
Vitaliy Zakaznikov
1dfa347e20 Update error message in tests/testflows/window_functions/tests/errors.py 2021-07-23 10:59:05 -04:00
vdimir
e4f3b9e7f4
Log exception message in void thread in clickhouse-benchmark 2021-07-23 17:41:32 +03:00
vdimir
d1106b325e
Lock mutex before access to std::cerr in clickhouse-benchmark 2021-07-23 17:35:22 +03:00
Nikolai Kochetov
2dc5c89b66 Update Storage::write 2021-07-23 17:25:35 +03:00
Caspian
e724e972fe remove unnecessary Exception 2021-07-23 21:37:55 +08:00
Maksim Kita
73ab70af1e
Merge pull request #26738 from excitoon/patch-9
Fixed wrong error message in `S3Common`
2021-07-23 16:21:24 +03:00
Maksim Kita
f6f8bea689
Merge pull request #26671 from kitaisreal/mysql-dictionaries-support-for-custom-query
Lexer introduce heredoc
2021-07-23 16:20:54 +03:00
Maksim Kita
6e2d992dce
Merge pull request #26719 from kitaisreal/compile-aggregate-functions-profile-events-fix
Compile aggregate functions profile events fix
2021-07-23 16:17:57 +03:00
Vladimir Chebotarev
eb2defb098
Fixed wrong error message in S3Common. 2021-07-23 15:36:19 +03:00
Vitaly Baranov
67d4da224a
Merge pull request #26384 from Cas-pian/grant_by_replace
add grant by replace support
2021-07-23 14:40:47 +03:00
Raúl Marín
383c982715 CH local: Treat localhost:port as a remote database 2021-07-23 13:16:35 +02:00
Nikolai Kochetov
52cc98e9c7
Update MergeJoin.cpp 2021-07-23 13:55:28 +03:00
Nikolai Kochetov
80e0e24448 Fix unit test and style. 2021-07-23 12:29:53 +03:00
Nikolai Kochetov
d03bcebc8e Remove debug logging. 2021-07-23 12:05:42 +03:00
romanzhukov
d624e22b2a DOCSUP-11551: Add ru contrib info. 2021-07-23 11:48:59 +03:00
Vladimir
817ed354ff
Merge pull request #26673 from vdimir/fix_test_materialize_mysql_database 2021-07-23 11:26:19 +03:00
romanzhukov
4fc1613577 DOCSUP-11551: Add ru contrib info. 2021-07-23 11:25:46 +03:00
Maksim Kita
e961de3ea0 Fixed build 2021-07-23 11:15:29 +03:00
Maksim Kita
7e08748c5b Updated tests 2021-07-23 11:13:51 +03:00
vdimir
dccc379d39
Fix use after free in AsyncDrain connection from S3Cluster 2021-07-23 10:40:03 +03:00
Vitaly Baranov
19d5a6ab2f
Merge pull request #26714 from vitlibar/new-function-current-profiles
New functions currentProfiles(), enabledProfiles(), defaultProfiles().
2021-07-23 09:10:29 +03:00
Maksim Kita
42201d3e30 Fixed code review issues 2021-07-23 01:03:44 +03:00
Azat Khuzhin
00e2083421 Fix event_time_microseconds for REMOVE_PART in system.part_log 2021-07-23 00:59:08 +03:00
Maksim Kita
1fea19846b Compile aggregate functions profile events fix 2021-07-23 00:43:31 +03:00
Maksim Kita
b5980f312a
Merge pull request #26718 from kitaisreal/setting-min-count-to-compile-aggregate-expression-fix
Setting min_count_to_compile_aggregate_expression fix
2021-07-23 00:22:49 +03:00
Maksim Kita
46fd046f11 Setting min_count_to_compile_aggregate_expression fix 2021-07-23 00:22:04 +03:00
Maksim Kita
222915c275 Fixed code review issues 2021-07-23 00:07:22 +03:00
Alexander Kuzmenkov
9465e5d191
Merge pull request #26701 from ClickHouse/aku/window-debug
more debug checks for window functions
2021-07-22 23:09:29 +03:00
Alexey
4a57267c5b Sections about Globs moved to the very end, after the examples 2021-07-22 19:30:53 +00:00
Vitaly Baranov
7afcc65060 Add new functions currentProfiles(), enabledProfiles(), defaultProfiles(). 2021-07-22 22:20:53 +03:00
lehasm
8e8ef98fa4
Update docs/ru/operations/settings/merge-tree-settings.md
Co-authored-by: Anna <42538400+adevyatova@users.noreply.github.com>
2021-07-22 21:32:46 +03:00
Nikolai Kochetov
3c17a62686
Merge pull request #26590 from ClickHouse/remove-some-more-streams
Remove some streams.
2021-07-22 21:28:50 +03:00
romanzhukov
492af9332c DOCSUP-11551: Add ru contrib info. 2021-07-22 20:53:30 +03:00
Nicolae Vartolomei
f35e6eee19 Avoid deleting old parts from FS on shutdown for replicated engine
This was introduced in https://github.com/ClickHouse/ClickHouse/pull/8602.
The idea was to avoid data re-appearing in ClickHouse after DROP/DETACH
PARTITION. This problem was only present in the MergeTree engine, and I don't
understand why we need to do the same in ReplicatedMergeTree.

For ReplicatedMergeTree the source of truth is stored in ZK; deleting
things from the filesystem only introduces inconsistencies, and this is the
main source of errors like "No active replica has part X or covering
part".

The resulting problem is fixed by
https://github.com/ClickHouse/ClickHouse/pull/25820, but in my opinion
it would be better to avoid introducing the ZK/FS inconsistency in the
first place.
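
A rough way to see this kind of divergence on a live replica (assuming the `test` table and the `/clickhouse/test` ZooKeeper path used in the reproduction below) is to compare the parts present on the filesystem with the parts registered in ZK:

```
-- sketch only: filesystem view vs. ZK metadata for one replica
SELECT name FROM system.parts WHERE table = 'test' AND active;

SELECT name
FROM system.zookeeper
WHERE path = '/clickhouse/test/replicas/one/parts';
```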

When does this inconsistency appear? Often the sequence is like this:

0. Write 2 parts to ZK [all_0_0_0, all_1_1_0]
1. A merge gets scheduled
2. A new part replaces the old parts [new: all_0_1_1, old: all_0_0_0, all_1_1_0]
3. The replica gets shut down and the old parts are removed from the filesystem
4. The replica comes back online; metadata about all parts is still stored in ZK for this replica
5. After the cleanup thread runs, the other replicas will have only [all_0_1_1] in
   ZK
6. The user triggers a DROP_RANGE after a while (the drop range is for all_0_1_9999*)
7. Each replica deletes from ZK only [all_0_1_1]. The replica that got
   restarted uses its in-memory state to choose the nodes to delete from ZK.
8. Restart the replica again. It will now think that there are 2 parts
   that it lost and needs to fetch them [all_0_0_0, all_1_1_0].

`clearOldPartsAndRemoveFromZK`, which is triggered from the cleanup thread,
runs the cleanup sequence correctly: it first removes entries from ZK and
then from the filesystem. I don't see much benefit in triggering it on
shutdown and would rather have it called from a single place only.

---

This is a very, very rare edge case, but it proves that the current
"fix" (https://github.com/ClickHouse/ClickHouse/pull/25820) isn't
complete.

```
create table test(
    v UInt64
)
engine=ReplicatedMergeTree('/clickhouse/test', 'one')
order by v
settings old_parts_lifetime = 30;

create table test2(
    v UInt64
)
engine=ReplicatedMergeTree('/clickhouse/test', 'two')
order by v
settings old_parts_lifetime = 30;

create table test3(
    v UInt64
)
engine=ReplicatedMergeTree('/clickhouse/test', 'three')
order by v
settings old_parts_lifetime = 30;

insert into table test values (1), (2), (3);
insert into table test values (4);

optimize table test final;

detach table test;
detach table test2;

alter table test3 drop partition tuple();

attach table test;
attach table test2;
```

```
(CONNECTED [localhost:9181]) /> ls /clickhouse/test/replicas/one/parts
all_0_0_0
all_1_1_0
(CONNECTED [localhost:9181]) /> ls /clickhouse/test/replicas/two/parts
all_0_0_0
all_1_1_0
(CONNECTED [localhost:9181]) /> ls /clickhouse/test/replicas/three/parts
```

```
detach table test;
attach table test;
```

`test` will now figure out that the parts exist only in ZK and will issue `GET_PART`
after first removing the parts from ZK.

`test2` will receive fetches for unknown parts and will trigger part checks itself.
Because `test` no longer has the parts in ZK, `test2` will mark them as LostForever.
It will also not insert empty parts, because the partition is empty.

`test` is left with `GET_PART` entries in the queue and is stuck.

```
SELECT
    table,
    type,
    replica_name,
    new_part_name,
    last_exception
FROM system.replication_queue

Query id: 74c5aa00-048d-4bc1-a2ea-6f69501c11a0

Row 1:
──────
table:          test
type:           GET_PART
replica_name:   one
new_part_name:  all_0_0_0
last_exception: Code: 234. DB::Exception: No active replica has part all_0_0_0 or covering part. (NO_REPLICA_HAS_PART) (version 21.9.1.1)

Row 2:
──────
table:          test
type:           GET_PART
replica_name:   one
new_part_name:  all_1_1_0
last_exception: Code: 234. DB::Exception: No active replica has part all_1_1_0 or covering part. (NO_REPLICA_HAS_PART) (version 21.9.1.1)
```
2021-07-22 17:48:16 +01:00
Alexander Kuzmenkov
86b7701834
Merge pull request #26706 from ClickHouse/aku/server-exit-code
record server exit code in fuzzer
2021-07-22 19:42:12 +03:00
ryzuo
4d36d54c81 Update the implementation of nth_value
Make nth_value return nullable values for out-of-frame rows, in the same
fashion as lagInFrame does.
2021-07-23 00:34:08 +08:00
Raúl Marín
cb50fd9521 01946_test_wrong_host_name_access: Clear DNS at the end
Leaves a cleaner env and avoids future errors in the logs
2021-07-22 18:21:17 +02:00
Nikolai Kochetov
f56a45155f Merge branch 'master' into remove-more-and--more-streams 2021-07-22 19:10:39 +03:00