* Fix INSERT into Distributed non local node with MATERIALIZED columns
The previous patch e527def18a ("Fix INSERT
into Distributed() table with MATERIALIZED column") fixed this only for
the case when the node is local, i.e. a direct insert.
This patch addresses the problem when the node is not local
(`is_local == false`) by erasing materialized columns on INSERT into
Distributed.
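For illustration, a minimal repro sketch in SQL (the cluster name and
engine parameters are assumptions; only `distributed_00952` and the
`date`/`value` columns appear in the traces below):

```sql
CREATE TABLE local_00952 (
    date Date,
    value Date MATERIALIZED toDate('2000-01-01')
) ENGINE = MergeTree ORDER BY date;

CREATE TABLE distributed_00952 AS local_00952
ENGINE = Distributed(test_cluster_two_shards, currentDatabase(), local_00952);

-- Before the fix the MATERIALIZED column `value` made the block structure
-- inconsistent between the initiator and the remote shards, producing the
-- errors below:
INSERT INTO distributed_00952 (date) VALUES ('2018-08-08');
```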
The patch fixes two cases, depending on the `insert_distributed_sync`
setting:
- `insert_distributed_sync=0`
```
Not found column value in block. There are only columns: date. Stack trace:
2. 0x7ffff7be92e0 DB::Exception::Exception() dbms/src/Common/Exception.h:27
3. 0x7fffec5d6cf6 DB::Block::getByName(...) dbms/src/Core/Block.cpp:187
4. 0x7fffec2fe067 DB::NativeBlockInputStream::readImpl() dbms/src/DataStreams/NativeBlockInputStream.cpp:159
5. 0x7fffec2d223f DB::IBlockInputStream::read() dbms/src/DataStreams/IBlockInputStream.cpp:61
6. 0x7ffff7c6d40d DB::TCPHandler::receiveData() dbms/programs/server/TCPHandler.cpp:971
7. 0x7ffff7c6cc1d DB::TCPHandler::receivePacket() dbms/programs/server/TCPHandler.cpp:855
8. 0x7ffff7c6a1ef DB::TCPHandler::readDataNext(unsigned long const&, int const&) dbms/programs/server/TCPHandler.cpp:406
9. 0x7ffff7c6a41b DB::TCPHandler::readData(DB::Settings const&) dbms/programs/server/TCPHandler.cpp:437
10. 0x7ffff7c6a5d9 DB::TCPHandler::processInsertQuery(DB::Settings const&) dbms/programs/server/TCPHandler.cpp:464
11. 0x7ffff7c687b5 DB::TCPHandler::runImpl() dbms/programs/server/TCPHandler.cpp:257
```
- `insert_distributed_sync=1`
```
2019.10.18 13:23:22.114578 [ 44 ] {a78f669f-0b08-4337-abf8-d31e958f6d12} <Error> executeQuery: Code: 171, e.displayText() = DB::Exception: Block structure mismatch in RemoteBlockOutputStream stream: different number of columns:
date Date UInt16(size = 1), value Date UInt16(size = 1)
date Date UInt16(size = 0): Insertion status:
Wrote 1 blocks and 0 rows on shard 0 replica 0, 127.0.0.1:59000 (average 0 ms per block)
Wrote 0 blocks and 0 rows on shard 1 replica 0, 127.0.0.2:59000 (average 2 ms per block)
(version 19.16.1.1) (from [::1]:3624) (in query: INSERT INTO distributed_00952 VALUES ), Stack trace:
2. 0x7ffff7be92e0 DB::Exception::Exception() dbms/src/Common/Exception.h:27
3. 0x7fffec5da4e9 DB::checkBlockStructure<void>(...)::{...}::operator()(...) const dbms/src/Core/Block.cpp:460
4. 0x7fffec5da671 void DB::checkBlockStructure<void>(...) dbms/src/Core/Block.cpp:467
5. 0x7fffec5d8d58 DB::assertBlocksHaveEqualStructure(...) dbms/src/Core/Block.cpp:515
6. 0x7fffec326630 DB::RemoteBlockOutputStream::write(DB::Block const&) dbms/src/DataStreams/RemoteBlockOutputStream.cpp:68
7. 0x7fffe98bd154 DB::DistributedBlockOutputStream::runWritingJob(DB::DistributedBlockOutputStream::JobReplica&, DB::Block const&)::{lambda()#1}::operator()() const dbms/src/Storages/Distributed/DistributedBlockOutputStream.cpp:280
<snip>
```
Fixes: #7365
Fixes: #5429
Refs: #6891
* Cover INSERT into Distributed with MATERIALIZED columns and !is_local node
I guess that adding a new cluster to server-test.xml is not required,
but it won't harm.
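For reference, the added cluster entry presumably looks roughly like
this (the cluster name is an assumption; the hosts and port match the
traces above):

```xml
<remote_servers>
    <test_cluster_two_shards>
        <shard>
            <replica><host>127.0.0.1</host><port>59000</port></replica>
        </shard>
        <shard>
            <replica><host>127.0.0.2</host><port>59000</port></replica>
        </shard>
    </test_cluster_two_shards>
</remote_servers>
```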
* Update DistributedBlockOutputStream.cpp
* Do not insert values into MV when inserting directly to Kafka
* Add method noPushingToViews() to IStorage interface
This decouples InterpreterInsertQuery from StorageKafka.
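A sketch of the resulting behaviour in SQL (table names and Kafka
settings are made up):

```sql
CREATE TABLE kafka_queue (key UInt64)
ENGINE = Kafka
SETTINGS kafka_broker_list = 'localhost:9092',
         kafka_topic_list = 'topic',
         kafka_group_name = 'group',
         kafka_format = 'JSONEachRow';

CREATE MATERIALIZED VIEW kafka_consumer
ENGINE = MergeTree ORDER BY key
AS SELECT key FROM kafka_queue;

-- With this change the produced row goes to the Kafka broker only;
-- it is no longer also pushed into kafka_consumer.
INSERT INTO kafka_queue VALUES (1);
```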
This test exists to prevent unintended changes to existing behaviour.
Although this behaviour might not be ideal, it can be exploited for 0-downtime changes to materialized views, as sketched below:
Step 1: Add new column to source table.
Step 2: Create new view reading source column.
Step 3: Swap views using `RENAME TABLE`.
Step 4: Add new column to destination table as well.
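In SQL the procedure looks roughly like this (all names are
hypothetical). Note that between steps 3 and 4 the view selects a
column that the destination table does not have yet; the behaviour the
test pins down is that inserts keep working during that window:

```sql
ALTER TABLE src ADD COLUMN extra UInt64 DEFAULT 0;  -- Step 1: extend source

CREATE MATERIALIZED VIEW mv_new TO dst              -- Step 2: new view reads
AS SELECT key, extra FROM src;                      --         the new column

RENAME TABLE mv TO mv_old, mv_new TO mv;            -- Step 3: swap views

ALTER TABLE dst ADD COLUMN extra UInt64 DEFAULT 0;  -- Step 4: extend destination
```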