Commit Graph

30856 Commits

Author SHA1 Message Date
chertus
f2028e901d review related changes 2019-10-24 16:04:50 +03:00
Ivan
c250db4922
Update Docker Image for Binary Packager (#7474) 2019-10-24 15:56:30 +03:00
chertus
12cd21f3c3 Merge branch 'master' into ast 2019-10-24 15:20:09 +03:00
rainbowsysu
08788e3443 fixed the given examples of nullable in zh doc 2019-10-24 20:07:10 +08:00
Artem Zuikov
39b64dff87
Merge pull request #7454 from 4ertus2/refactoring
Refactoring: extract non aliases logic out of QueryNormalizer
2019-10-24 14:34:53 +03:00
Vladimir Chebotarev
64f158ff28 Fixed message for ALTER MOVE PART. 2019-10-24 13:56:32 +03:00
Alexander Kuzmenkov
29052b6a37
Merge pull request #7377 from azat/INSERT-Distributed-MATERIALIZED-cols
* Fix INSERT into Distributed non-local node with MATERIALIZED columns

The previous patch e527def18a ("Fix INSERT
into Distributed() table with MATERIALIZED column") fixes it only for
the case when the node is local, i.e. a direct insert.

This patch addresses the problem when the node is not local
(`is_local == false`) by erasing materialized columns on INSERT into
Distributed.

This patch fixes two cases, depending on the `insert_distributed_sync`
setting (a reproduction sketch follows this entry):

- `insert_distributed_sync=0`

    ```
    Not found column value in block. There are only columns: date. Stack trace:

    2. 0x7ffff7be92e0 DB::Exception::Exception() dbms/src/Common/Exception.h:27
    3. 0x7fffec5d6cf6 DB::Block::getByName(...) dbms/src/Core/Block.cpp:187
    4. 0x7fffec2fe067 DB::NativeBlockInputStream::readImpl() dbms/src/DataStreams/NativeBlockInputStream.cpp:159
    5. 0x7fffec2d223f DB::IBlockInputStream::read() dbms/src/DataStreams/IBlockInputStream.cpp:61
    6. 0x7ffff7c6d40d DB::TCPHandler::receiveData() dbms/programs/server/TCPHandler.cpp:971
    7. 0x7ffff7c6cc1d DB::TCPHandler::receivePacket() dbms/programs/server/TCPHandler.cpp:855
    8. 0x7ffff7c6a1ef DB::TCPHandler::readDataNext(unsigned long const&, int const&) dbms/programs/server/TCPHandler.cpp:406
    9. 0x7ffff7c6a41b DB::TCPHandler::readData(DB::Settings const&) dbms/programs/server/TCPHandler.cpp:437
    10. 0x7ffff7c6a5d9 DB::TCPHandler::processInsertQuery(DB::Settings const&) dbms/programs/server/TCPHandler.cpp:464
    11. 0x7ffff7c687b5 DB::TCPHandler::runImpl() dbms/programs/server/TCPHandler.cpp:257
    ```

- `insert_distributed_sync=1`

    ```
    2019.10.18 13:23:22.114578 [ 44 ] {a78f669f-0b08-4337-abf8-d31e958f6d12} <Error> executeQuery: Code: 171, e.displayText() = DB::Exception: Block structure mismatch in RemoteBlockOutputStream stream: different number of columns:
    date Date UInt16(size = 1), value Date UInt16(size = 1)
    date Date UInt16(size = 0): Insertion status:
    Wrote 1 blocks and 0 rows on shard 0 replica 0, 127.0.0.1:59000 (average 0 ms per block)
    Wrote 0 blocks and 0 rows on shard 1 replica 0, 127.0.0.2:59000 (average 2 ms per block)
     (version 19.16.1.1) (from [::1]:3624) (in query: INSERT INTO distributed_00952 VALUES ), Stack trace:

    2. 0x7ffff7be92e0 DB::Exception::Exception() dbms/src/Common/Exception.h:27
    3. 0x7fffec5da4e9 DB::checkBlockStructure<void>(...)::{...}::operator()(...) const dbms/src/Core/Block.cpp:460
    4. 0x7fffec5da671 void DB::checkBlockStructure<void>(...) dbms/src/Core/Block.cpp:467
    5. 0x7fffec5d8d58 DB::assertBlocksHaveEqualStructure(...) dbms/src/Core/Block.cpp:515
    6. 0x7fffec326630 DB::RemoteBlockOutputStream::write(DB::Block const&) dbms/src/DataStreams/RemoteBlockOutputStream.cpp:68
    7. 0x7fffe98bd154 DB::DistributedBlockOutputStream::runWritingJob(DB::DistributedBlockOutputStream::JobReplica&, DB::Block const&)::{lambda()#1}::operator()() const dbms/src/Storages/Distributed/DistributedBlockOutputStream.cpp:280
    <snip>
    ```

Fixes: #7365
Fixes: #5429
Refs: #6891

* Cover INSERT into Distributed with MATERIALIZED columns and !is_local node

I guess that adding a new cluster to server-test.xml is not required,
but it does no harm.

* Update DistributedBlockOutputStream.cpp
2019-10-24 12:35:09 +03:00
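The commit message above quotes the failing stack traces but not the schema that triggers them. Below is a minimal reproduction sketch, not the exact test added in #7377: `distributed_00952` is reused from the quoted log, while `local_00952`, the cluster name `test_cluster`, and the MATERIALIZED expression are assumptions for illustration. The cluster is assumed to contain at least one shard on a non-local address (e.g. 127.0.0.2), so that `is_local == false` when the block is forwarded.

```
-- Shard-local table with a MATERIALIZED column (schema assumed for illustration).
CREATE TABLE local_00952
(
    date Date,
    value Date MATERIALIZED date + 1
)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(date)
ORDER BY date;

-- Distributed table over it; CREATE TABLE ... AS copies the structure,
-- including the MATERIALIZED column. test_cluster is assumed to have a
-- non-local shard (e.g. 127.0.0.2), i.e. is_local == false.
CREATE TABLE distributed_00952 AS local_00952
ENGINE = Distributed(test_cluster, currentDatabase(), local_00952);

-- Only the ordinary column is supplied; value is MATERIALIZED.
-- Before the fix, both modes quoted above failed.
SET insert_distributed_sync = 0;  -- async: "Not found column value in block" on the receiver
INSERT INTO distributed_00952 VALUES ('2018-08-01');

SET insert_distributed_sync = 1;  -- sync: "Block structure mismatch in RemoteBlockOutputStream"
INSERT INTO distributed_00952 VALUES ('2018-08-01');
```

After the fix, materialized columns are erased from the block before it is sent to non-local shards, so both INSERTs go through and the receiving shard computes `value` itself, as MATERIALIZED columns normally are.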
Alexander Kuzmenkov
050de71ef4
Update DistributedBlockOutputStream.cpp 2019-10-24 12:33:45 +03:00
alesapin
a6199b7e69 Merge with master 2019-10-24 12:33:40 +03:00
alesapin
7edd80c9b7 Add test for existing dictionary 2019-10-24 12:25:28 +03:00
Vladimir Chebotarev
255da8f5e0 Fixed style. 2019-10-24 12:11:06 +03:00
Vladimir Chebotarev
3debdc2119 Added integration tests for ALTER MOVE PARTITION and fixed minor things. 2019-10-24 11:52:33 +03:00
Mikhail f. Shiryaev
3d2eab7535 Add PARTITION ID to OPTIMIZE documentation 2019-10-24 09:49:58 +02:00
Vladimir Chebotarev
e8e5cefc35 Fixed integration test for #7414. 2019-10-24 08:59:33 +03:00
Ivan Blinkov
f8a401bbf7
Labeler seems to require additional permissions from PR authors (#7466)
* Delete labeler.yml

* Delete labeler.keywords.yml

* Delete labeler.yml
2019-10-24 08:56:53 +08:00
chertus
9818eada69 rename: merge_max_block_size 2019-10-24 02:18:21 +03:00
Artem Zuikov
18a72fa91a
Merge pull request #7392 from amosbird/scalar
better scalar query
2019-10-23 22:54:30 +03:00
BayoNet
b6d6fea9e5 Merge branch 'master' of github.com:ClickHouse/ClickHouse 2019-10-23 22:51:35 +03:00
Azat Khuzhin
80cf86f100 Cover INSERT into Distributed with MATERIALIZED columns and !is_local node
I guess that adding a new cluster to server-test.xml is not required,
but it does no harm.
2019-10-23 21:55:08 +03:00
Azat Khuzhin
ab9d9f8997 Fix INSERT into Distributed non-local node with MATERIALIZED columns
The previous patch e527def18a ("Fix INSERT
into Distributed() table with MATERIALIZED column") fixes it only for
the case when the node is local, i.e. a direct insert.

This patch addresses the problem when the node is not local
(`is_local == false`) by erasing materialized columns on INSERT into
Distributed.

This patch fixes two cases, depending on the `insert_distributed_sync`
setting:

- `insert_distributed_sync=0`

    ```
    Not found column value in block. There are only columns: date. Stack trace:

    2. 0x7ffff7be92e0 DB::Exception::Exception() dbms/src/Common/Exception.h:27
    3. 0x7fffec5d6cf6 DB::Block::getByName(...) dbms/src/Core/Block.cpp:187
    4. 0x7fffec2fe067 DB::NativeBlockInputStream::readImpl() dbms/src/DataStreams/NativeBlockInputStream.cpp:159
    5. 0x7fffec2d223f DB::IBlockInputStream::read() dbms/src/DataStreams/IBlockInputStream.cpp:61
    6. 0x7ffff7c6d40d DB::TCPHandler::receiveData() dbms/programs/server/TCPHandler.cpp:971
    7. 0x7ffff7c6cc1d DB::TCPHandler::receivePacket() dbms/programs/server/TCPHandler.cpp:855
    8. 0x7ffff7c6a1ef DB::TCPHandler::readDataNext(unsigned long const&, int const&) dbms/programs/server/TCPHandler.cpp:406
    9. 0x7ffff7c6a41b DB::TCPHandler::readData(DB::Settings const&) dbms/programs/server/TCPHandler.cpp:437
    10. 0x7ffff7c6a5d9 DB::TCPHandler::processInsertQuery(DB::Settings const&) dbms/programs/server/TCPHandler.cpp:464
    11. 0x7ffff7c687b5 DB::TCPHandler::runImpl() dbms/programs/server/TCPHandler.cpp:257
    ```

- `insert_distributed_sync=1`

    ```
    2019.10.18 13:23:22.114578 [ 44 ] {a78f669f-0b08-4337-abf8-d31e958f6d12} <Error> executeQuery: Code: 171, e.displayText() = DB::Exception: Block structure mismatch in RemoteBlockOutputStream stream: different number of columns:
    date Date UInt16(size = 1), value Date UInt16(size = 1)
    date Date UInt16(size = 0): Insertion status:
    Wrote 1 blocks and 0 rows on shard 0 replica 0, 127.0.0.1:59000 (average 0 ms per block)
    Wrote 0 blocks and 0 rows on shard 1 replica 0, 127.0.0.2:59000 (average 2 ms per block)
     (version 19.16.1.1) (from [::1]:3624) (in query: INSERT INTO distributed_00952 VALUES ), Stack trace:

    2. 0x7ffff7be92e0 DB::Exception::Exception() dbms/src/Common/Exception.h:27
    3. 0x7fffec5da4e9 DB::checkBlockStructure<void>(...)::{...}::operator()(...) const dbms/src/Core/Block.cpp:460
    4. 0x7fffec5da671 void DB::checkBlockStructure<void>(...) dbms/src/Core/Block.cpp:467
    5. 0x7fffec5d8d58 DB::assertBlocksHaveEqualStructure(...) dbms/src/Core/Block.cpp:515
    6. 0x7fffec326630 DB::RemoteBlockOutputStream::write(DB::Block const&) dbms/src/DataStreams/RemoteBlockOutputStream.cpp:68
    7. 0x7fffe98bd154 DB::DistributedBlockOutputStream::runWritingJob(DB::DistributedBlockOutputStream::JobReplica&, DB::Block const&)::{lambda()#1}::operator()() const dbms/src/Storages/Distributed/DistributedBlockOutputStream.cpp:280
    <snip>
    ```

Fixes: #7365
Fixes: #5429
Refs: #6891
2019-10-23 21:54:27 +03:00
Konstantin Podshumok
d1a19d26e8
Remove hardcoded paths in unwind target
In most cases they match the defaults now, but they are too hard to override when one needs to (e.g. for alternative builds)
2019-10-23 20:33:40 +03:00
Artem Zuikov
bb1c1d0ed9
Merge pull request #7431 from arenadata/master
fix aggregation (avg and quantiles) over empty decimal columns.
2019-10-23 20:28:35 +03:00
Ivan
32ca372b9d
Revert "Update Dockerfile for binary packager (#7456)" (#7458)
This reverts commit fa05a5860f.
2019-10-23 18:54:18 +03:00
Koblikov Mihail
6dc497f7bc fix typo in ontime.md (#7285)
* fix typo in ontime.md

* Update docs/en/getting_started/example_datasets/ontime.md

Co-Authored-By: Ivan Blinkov <github@blinkov.ru>

* Update docs/en/getting_started/example_datasets/ontime.md

Co-Authored-By: Ivan Blinkov <github@blinkov.ru>

* Update docs/en/getting_started/example_datasets/ontime.md

Co-Authored-By: Ivan Blinkov <github@blinkov.ru>

* Update ontime.md
2019-10-23 18:32:42 +03:00
Ivan
fa05a5860f
Update Dockerfile for binary packager (#7456) 2019-10-23 17:41:17 +03:00
chertus
20093fa065 extract more logic out of QueryNormalizer 2019-10-23 16:59:03 +03:00
Amos Bird
295864e6e0
better scalar query 2019-10-23 21:37:54 +08:00
alesapin
c3519ff376 Better check of dictionary lifetime for updates 2019-10-23 16:02:40 +03:00
Vladimir Chebotarev
7546c315a6 Added integration test for #7414. 2019-10-23 14:25:51 +03:00
BayoNet
04a6c6ac4d
Docs links fix (#7448)
* Typo fix.

* Links fix.

* Fixed links in docs.

* More fixes.

* Link fixes.

* Fixed links.
2019-10-23 13:51:06 +03:00
Alexander Kuzmenkov
5ac672bfdf
Merge pull request #7273 from azat/uniqCombined-fix-docs
[RFC] Drop note about "estimation error for large sets will be large"
2019-10-23 13:49:04 +03:00
BayoNet
081e9d9554 Fixed links. 2019-10-23 12:59:57 +03:00
Alexander Kuzmenkov
1a609c27bd
Merge pull request #6243 from ClickHouse/aku/hashtables
Introduce String Hash Map to speed up aggregation over short string keys.
2019-10-23 12:52:50 +03:00
alesapin
6a0246f58e Fix if 2019-10-23 12:40:09 +03:00
alesapin
0abb2e538b Remove strange logic 2019-10-23 12:36:20 +03:00
BayoNet
326276a4a8 Merge branch 'master' of github.com:ClickHouse/ClickHouse 2019-10-23 12:30:34 +03:00
alesapin
fb349757ba Fix ubsan error 2019-10-23 12:27:34 +03:00
Andrey Konyaev
c6a3ba29c2
Merge pull request #4 from arenadata/ADQM-61
up precision for avg result to max of type
2019-10-23 11:26:44 +03:00
akonyaev
7426542b8b up precision for avg result to max of type 2019-10-23 11:22:51 +03:00
alexey-milovidov
5c73a75843
Merge pull request #7439 from volfco/master
Fixed spelling error in error message
2019-10-23 07:33:56 +03:00
alexey-milovidov
4b9e9cea56
Merge pull request #7389 from azat/count_distinct_implementation_uniqCombined64
docs: enumerate uniqCombined64 for count_distinct_implementation
2019-10-23 07:31:14 +03:00
alexey-milovidov
ffc2e4e149
Merge pull request #7396 from nvartolomei/nv/mv-extra-columns
Test materialized view pushing extra columns
2019-10-23 07:30:53 +03:00
alexey-milovidov
bd31ed3a20
Merge pull request #7418 from amosbird/addglob
Better add_globs
2019-10-23 07:28:13 +03:00
alexey-milovidov
44d2e1d27c
Merge pull request #7419 from amosbird/dump2
Resolve DUMP overload resolution ambiguity.
2019-10-23 07:19:38 +03:00
alexey-milovidov
f8d408e8f4
Merge pull request #7423 from excitoon/patch-1
Fixed erroneous warning `max_data_part_size is too low`
2019-10-23 07:17:33 +03:00
Nikolai Kochetov
9abab40512 Added more comments. 2019-10-23 06:45:43 +03:00
Colum
a413770a97 Fixed spelling error in error message 2019-10-22 10:02:51 -07:00
alesapin
c12014ca15 Fix shared build 2019-10-22 19:47:11 +03:00
alesapin
dfa9b0c149 Remove complex logic with lazy load 2019-10-22 19:26:15 +03:00
Nikolai Kochetov
e22b43c669
Merge pull request #7436 from amosbird/scalarperf
add perf test for subqueries with large scalars
2019-10-22 19:06:20 +03:00