From 454e6635adfdd270c0c566a305179b14ef83ccf8 Mon Sep 17 00:00:00 2001 From: Alexey Date: Thu, 1 Apr 2021 20:33:54 +0000 Subject: [PATCH 001/108] non-printable characters removed --- docs/en/sql-reference/functions/bitmap-functions.md | 2 +- docs/en/sql-reference/functions/hash-functions.md | 2 +- docs/ja/sql-reference/functions/bitmap-functions.md | 2 +- docs/ja/sql-reference/functions/hash-functions.md | 2 +- docs/ru/sql-reference/functions/bitmap-functions.md | 2 +- docs/ru/sql-reference/functions/hash-functions.md | 2 +- 6 files changed, 6 insertions(+), 6 deletions(-) diff --git a/docs/en/sql-reference/functions/bitmap-functions.md b/docs/en/sql-reference/functions/bitmap-functions.md index 7ec400949e9..4875532605e 100644 --- a/docs/en/sql-reference/functions/bitmap-functions.md +++ b/docs/en/sql-reference/functions/bitmap-functions.md @@ -33,7 +33,7 @@ SELECT bitmapBuild([1, 2, 3, 4, 5]) AS res, toTypeName(res); ``` text ┌─res─┬─toTypeName(bitmapBuild([1, 2, 3, 4, 5]))─────┐ -│  │ AggregateFunction(groupBitmap, UInt8) │ +│ │ AggregateFunction(groupBitmap, UInt8) │ └─────┴──────────────────────────────────────────────┘ ``` diff --git a/docs/en/sql-reference/functions/hash-functions.md b/docs/en/sql-reference/functions/hash-functions.md index 945ede4927f..92bd47e58bd 100644 --- a/docs/en/sql-reference/functions/hash-functions.md +++ b/docs/en/sql-reference/functions/hash-functions.md @@ -440,7 +440,7 @@ SELECT murmurHash3_128('example_string') AS MurmurHash3, toTypeName(MurmurHash3) ``` text ┌─MurmurHash3──────┬─type────────────┐ -│ 6�1�4"S5KT�~~q │ FixedString(16) │ +│ 6�1�4"S5KT�~~q | FixedString(16) │ └──────────────────┴─────────────────┘ ``` diff --git a/docs/ja/sql-reference/functions/bitmap-functions.md b/docs/ja/sql-reference/functions/bitmap-functions.md index cc57e762610..de3ce938444 100644 --- a/docs/ja/sql-reference/functions/bitmap-functions.md +++ b/docs/ja/sql-reference/functions/bitmap-functions.md @@ -35,7 +35,7 @@ SELECT bitmapBuild([1, 2, 3, 4, 5]) AS res, toTypeName(res) ``` text ┌─res─┬─toTypeName(bitmapBuild([1, 2, 3, 4, 5]))─────┐ -│  │ AggregateFunction(groupBitmap, UInt8) │ +│ │ AggregateFunction(groupBitmap, UInt8) │ └─────┴──────────────────────────────────────────────┘ ``` diff --git a/docs/ja/sql-reference/functions/hash-functions.md b/docs/ja/sql-reference/functions/hash-functions.md index d48e6846bb4..3de3e40d0eb 100644 --- a/docs/ja/sql-reference/functions/hash-functions.md +++ b/docs/ja/sql-reference/functions/hash-functions.md @@ -439,7 +439,7 @@ SELECT murmurHash3_128('example_string') AS MurmurHash3, toTypeName(MurmurHash3) ``` text ┌─MurmurHash3──────┬─type────────────┐ -│ 6�1�4"S5KT�~~q │ FixedString(16) │ +│ 6�1�4"S5KT�~~q │ FixedString(16) │ └──────────────────┴─────────────────┘ ``` diff --git a/docs/ru/sql-reference/functions/bitmap-functions.md b/docs/ru/sql-reference/functions/bitmap-functions.md index ddae2f3eb40..3da729664d0 100644 --- a/docs/ru/sql-reference/functions/bitmap-functions.md +++ b/docs/ru/sql-reference/functions/bitmap-functions.md @@ -25,7 +25,7 @@ SELECT bitmapBuild([1, 2, 3, 4, 5]) AS res, toTypeName(res); ``` text ┌─res─┬─toTypeName(bitmapBuild([1, 2, 3, 4, 5]))─────┐ -│  │ AggregateFunction(groupBitmap, UInt8) │ +│ │ AggregateFunction(groupBitmap, UInt8) │ └─────┴──────────────────────────────────────────────┘ ``` diff --git a/docs/ru/sql-reference/functions/hash-functions.md b/docs/ru/sql-reference/functions/hash-functions.md index 6797f530346..29669cc7e5d 100644 --- a/docs/ru/sql-reference/functions/hash-functions.md 
+++ b/docs/ru/sql-reference/functions/hash-functions.md @@ -442,7 +442,7 @@ SELECT murmurHash3_128('example_string') AS MurmurHash3, toTypeName(MurmurHash3) ``` text ┌─MurmurHash3──────┬─type────────────┐ -│ 6�1�4"S5KT�~~q │ FixedString(16) │ +│ 6�1�4"S5KT�~~q │ FixedString(16) │ └──────────────────┴─────────────────┘ ``` From f6345abba18725c361b98c1c2984c2737d719f0f Mon Sep 17 00:00:00 2001 From: Alexey Date: Thu, 1 Apr 2021 21:01:04 +0000 Subject: [PATCH 002/108] Added leading space to lines starting with "---" They are interpreted as meta information for yaml TOC and mess up building documentation process (with single page the second time without clean) --- docs/en/engines/table-engines/integrations/postgresql.md | 8 ++++---- docs/en/sql-reference/table-functions/postgresql.md | 8 ++++---- docs/ru/engines/table-engines/integrations/postgresql.md | 8 ++++---- docs/ru/sql-reference/table-functions/postgresql.md | 8 ++++---- 4 files changed, 16 insertions(+), 16 deletions(-) diff --git a/docs/en/engines/table-engines/integrations/postgresql.md b/docs/en/engines/table-engines/integrations/postgresql.md index 8326038407f..1267caf027a 100644 --- a/docs/en/engines/table-engines/integrations/postgresql.md +++ b/docs/en/engines/table-engines/integrations/postgresql.md @@ -68,10 +68,10 @@ postgres=# insert into test (int_id, str, "float") VALUES (1,'test',2); INSERT 0 1 postgresql> select * from test; - int_id | int_nullable | float | str | float_nullable ---------+--------------+-------+------+---------------- - 1 | | 2 | test | -(1 row) + int_id | int_nullable | float | str | float_nullable + --------+--------------+-------+------+---------------- + 1 | | 2 | test | + (1 row) ``` Table in ClickHouse, retrieving data from the PostgreSQL table created above: diff --git a/docs/en/sql-reference/table-functions/postgresql.md b/docs/en/sql-reference/table-functions/postgresql.md index ad5d8a29904..a8935a6dd79 100644 --- a/docs/en/sql-reference/table-functions/postgresql.md +++ b/docs/en/sql-reference/table-functions/postgresql.md @@ -64,10 +64,10 @@ postgres=# insert into test (int_id, str, "float") VALUES (1,'test',2); INSERT 0 1 postgresql> select * from test; - int_id | int_nullable | float | str | float_nullable ---------+--------------+-------+------+---------------- - 1 | | 2 | test | -(1 row) + int_id | int_nullable | float | str | float_nullable + --------+--------------+-------+------+---------------- + 1 | | 2 | test | + (1 row) ``` Selecting data from ClickHouse: diff --git a/docs/ru/engines/table-engines/integrations/postgresql.md b/docs/ru/engines/table-engines/integrations/postgresql.md index 1fc7a307d94..de51fde9209 100644 --- a/docs/ru/engines/table-engines/integrations/postgresql.md +++ b/docs/ru/engines/table-engines/integrations/postgresql.md @@ -68,10 +68,10 @@ postgres=# insert into test (int_id, str, "float") VALUES (1,'test',2); INSERT 0 1 postgresql> select * from test; - int_id | int_nullable | float | str | float_nullable ---------+--------------+-------+------+---------------- - 1 | | 2 | test | -(1 row) + int_id | int_nullable | float | str | float_nullable + --------+--------------+-------+------+---------------- + 1 | | 2 | test | + (1 row) ``` Таблица в ClickHouse, получение данных из PostgreSQL таблицы созданной выше: diff --git a/docs/ru/sql-reference/table-functions/postgresql.md b/docs/ru/sql-reference/table-functions/postgresql.md index a8ed23db8ed..be18c360fbc 100644 --- a/docs/ru/sql-reference/table-functions/postgresql.md +++ 
b/docs/ru/sql-reference/table-functions/postgresql.md @@ -62,10 +62,10 @@ postgres=# insert into test (int_id, str, "float") VALUES (1,'test',2); INSERT 0 1 postgresql> select * from test; - int_id | int_nullable | float | str | float_nullable ---------+--------------+-------+------+---------------- - 1 | | 2 | test | -(1 row) + int_id | int_nullable | float | str | float_nullable + --------+--------------+-------+------+---------------- + 1 | | 2 | test | + (1 row) ``` Получение данных в ClickHouse: From 0ef827ded7372d67146793d6b4f6217897a36cec Mon Sep 17 00:00:00 2001 From: Alexey Date: Thu, 8 Apr 2021 19:32:27 +0000 Subject: [PATCH 003/108] added indent in tables --- docs/en/engines/table-engines/integrations/postgresql.md | 8 ++++---- docs/en/sql-reference/table-functions/postgresql.md | 6 +++--- docs/ru/engines/table-engines/integrations/postgresql.md | 8 ++++---- docs/ru/sql-reference/table-functions/postgresql.md | 8 ++++---- 4 files changed, 15 insertions(+), 15 deletions(-) diff --git a/docs/en/engines/table-engines/integrations/postgresql.md b/docs/en/engines/table-engines/integrations/postgresql.md index ad5bebb3dea..4474b764d2e 100644 --- a/docs/en/engines/table-engines/integrations/postgresql.md +++ b/docs/en/engines/table-engines/integrations/postgresql.md @@ -94,10 +94,10 @@ postgres=# INSERT INTO test (int_id, str, "float") VALUES (1,'test',2); INSERT 0 1 postgresql> SELECT * FROM test; - int_id | int_nullable | float | str | float_nullable ---------+--------------+-------+------+---------------- - 1 | | 2 | test | -(1 row) + int_id | int_nullable | float | str | float_nullable + --------+--------------+-------+------+---------------- + 1 | | 2 | test | + (1 row) ``` Table in ClickHouse, retrieving data from the PostgreSQL table created above: diff --git a/docs/en/sql-reference/table-functions/postgresql.md b/docs/en/sql-reference/table-functions/postgresql.md index bfb5fdf9be6..3eab572ac12 100644 --- a/docs/en/sql-reference/table-functions/postgresql.md +++ b/docs/en/sql-reference/table-functions/postgresql.md @@ -65,9 +65,9 @@ postgres=# INSERT INTO test (int_id, str, "float") VALUES (1,'test',2); INSERT 0 1 postgresql> SELECT * FROM test; - int_id | int_nullable | float | str | float_nullable ---------+--------------+-------+------+---------------- - 1 | | 2 | test | + int_id | int_nullable | float | str | float_nullable + --------+--------------+-------+------+---------------- + 1 | | 2 | test | (1 row) ``` diff --git a/docs/ru/engines/table-engines/integrations/postgresql.md b/docs/ru/engines/table-engines/integrations/postgresql.md index 8964b1dbf02..3c7975556bf 100644 --- a/docs/ru/engines/table-engines/integrations/postgresql.md +++ b/docs/ru/engines/table-engines/integrations/postgresql.md @@ -94,10 +94,10 @@ postgres=# INSERT INTO test (int_id, str, "float") VALUES (1,'test',2); INSERT 0 1 postgresql> SELECT * FROM test; - int_id | int_nullable | float | str | float_nullable ---------+--------------+-------+------+---------------- - 1 | | 2 | test | -(1 row) + int_id | int_nullable | float | str | float_nullable + --------+--------------+-------+------+---------------- + 1 | | 2 | test | + (1 row) ``` Таблица в ClickHouse, получение данных из PostgreSQL таблицы, созданной выше: diff --git a/docs/ru/sql-reference/table-functions/postgresql.md b/docs/ru/sql-reference/table-functions/postgresql.md index 66637276726..2d8afe28f1e 100644 --- a/docs/ru/sql-reference/table-functions/postgresql.md +++ b/docs/ru/sql-reference/table-functions/postgresql.md @@ -65,10 +65,10 
@@ postgres=# INSERT INTO test (int_id, str, "float") VALUES (1,'test',2); INSERT 0 1 postgresql> SELECT * FROM test; - int_id | int_nullable | float | str | float_nullable ---------+--------------+-------+------+---------------- - 1 | | 2 | test | -(1 row) + int_id | int_nullable | float | str | float_nullable + --------+--------------+-------+------+---------------- + 1 | | 2 | test | + (1 row) ``` Получение данных в ClickHouse: From 92c495af760354e7f1f1efd88428480d9ab20582 Mon Sep 17 00:00:00 2001 From: Alexey Milovidov Date: Sun, 11 Apr 2021 22:07:23 +0300 Subject: [PATCH 004/108] Simplify debian packages --- debian/clickhouse-server.config | 16 ---------------- debian/clickhouse-server.postinst | 8 ++++---- debian/clickhouse-server.preinst | 8 -------- debian/clickhouse-server.prerm | 6 ------ debian/clickhouse-server.templates | 3 --- debian/clickhouse.limits | 2 -- debian/watch | 2 +- programs/install/Install.cpp | 1 - 8 files changed, 5 insertions(+), 41 deletions(-) delete mode 100644 debian/clickhouse-server.config delete mode 100644 debian/clickhouse-server.preinst delete mode 100644 debian/clickhouse-server.prerm delete mode 100644 debian/clickhouse-server.templates delete mode 100644 debian/clickhouse.limits diff --git a/debian/clickhouse-server.config b/debian/clickhouse-server.config deleted file mode 100644 index 636ff7f4da7..00000000000 --- a/debian/clickhouse-server.config +++ /dev/null @@ -1,16 +0,0 @@ -#!/bin/sh -e - -test -f /usr/share/debconf/confmodule && . /usr/share/debconf/confmodule - -db_fget clickhouse-server/default-password seen || true -password_seen="$RET" - -if [ "$1" = "reconfigure" ]; then - password_seen=false -fi - -if [ "$password_seen" != "true" ]; then - db_input high clickhouse-server/default-password || true - db_go || true -fi -db_go || true diff --git a/debian/clickhouse-server.postinst b/debian/clickhouse-server.postinst index dc876f45954..419c13e3daf 100644 --- a/debian/clickhouse-server.postinst +++ b/debian/clickhouse-server.postinst @@ -23,11 +23,13 @@ if [ ! -f "/etc/debian_version" ]; then fi if [ "$1" = configure ] || [ -n "$not_deb_os" ]; then + + ${CLICKHOUSE_GENERIC_PROGRAM} install --user "${CLICKHOUSE_USER}" --group "${CLICKHOUSE_GROUP}" --pid-path "${CLICKHOUSE_PIDDIR}" --config-path "${CLICKHOUSE_CONFDIR}" --binary-path "${CLICKHOUSE_BINDIR}" --log-path "${CLICKHOUSE_LOGDIR}" --data-path "${CLICKHOUSE_DATADIR}" + if [ -x "/bin/systemctl" ] && [ -f /etc/systemd/system/clickhouse-server.service ] && [ -d /run/systemd/system ]; then # if old rc.d service present - remove it if [ -x "/etc/init.d/clickhouse-server" ] && [ -x "/usr/sbin/update-rc.d" ]; then /usr/sbin/update-rc.d clickhouse-server remove - echo "ClickHouse init script has migrated to systemd. Please manually stop old server and restart the service: sudo killall clickhouse-server && sleep 5 && sudo service clickhouse-server restart" fi /bin/systemctl daemon-reload @@ -38,10 +40,8 @@ if [ "$1" = configure ] || [ -n "$not_deb_os" ]; then if [ -x "/usr/sbin/update-rc.d" ]; then /usr/sbin/update-rc.d clickhouse-server defaults 19 19 >/dev/null || exit $? 
else - echo # TODO [ "$OS" = "rhel" ] || [ "$OS" = "centos" ] || [ "$OS" = "fedora" ] + echo # Other OS fi fi fi - - ${CLICKHOUSE_GENERIC_PROGRAM} install --user "${CLICKHOUSE_USER}" --group "${CLICKHOUSE_GROUP}" --pid-path "${CLICKHOUSE_PIDDIR}" --config-path "${CLICKHOUSE_CONFDIR}" --binary-path "${CLICKHOUSE_BINDIR}" --log-path "${CLICKHOUSE_LOGDIR}" --data-path "${CLICKHOUSE_DATADIR}" fi diff --git a/debian/clickhouse-server.preinst b/debian/clickhouse-server.preinst deleted file mode 100644 index 3529aefa7da..00000000000 --- a/debian/clickhouse-server.preinst +++ /dev/null @@ -1,8 +0,0 @@ -#!/bin/sh - -if [ "$1" = "upgrade" ]; then - # Return etc/cron.d/clickhouse-server to original state - service clickhouse-server disable_cron ||: -fi - -#DEBHELPER# diff --git a/debian/clickhouse-server.prerm b/debian/clickhouse-server.prerm deleted file mode 100644 index 02e855a7125..00000000000 --- a/debian/clickhouse-server.prerm +++ /dev/null @@ -1,6 +0,0 @@ -#!/bin/sh - -if [ "$1" = "upgrade" ] || [ "$1" = "remove" ]; then - # Return etc/cron.d/clickhouse-server to original state - service clickhouse-server disable_cron ||: -fi diff --git a/debian/clickhouse-server.templates b/debian/clickhouse-server.templates deleted file mode 100644 index dd55824e15c..00000000000 --- a/debian/clickhouse-server.templates +++ /dev/null @@ -1,3 +0,0 @@ -Template: clickhouse-server/default-password -Type: password -Description: Enter password for default user: diff --git a/debian/clickhouse.limits b/debian/clickhouse.limits deleted file mode 100644 index aca44082c4e..00000000000 --- a/debian/clickhouse.limits +++ /dev/null @@ -1,2 +0,0 @@ -clickhouse soft nofile 262144 -clickhouse hard nofile 262144 diff --git a/debian/watch b/debian/watch index 7ad4cedf713..ed3cab97ade 100644 --- a/debian/watch +++ b/debian/watch @@ -1,6 +1,6 @@ version=4 opts="filenamemangle=s%(?:.*?)?v?(\d[\d.]*)-stable\.tar\.gz%clickhouse-$1.tar.gz%" \ - https://github.com/yandex/clickhouse/tags \ + https://github.com/ClickHouse/ClickHouse/tags \ (?:.*?/)?v?(\d[\d.]*)-stable\.tar\.gz debian uupdate diff --git a/programs/install/Install.cpp b/programs/install/Install.cpp index ef72624e7ab..c40495d702a 100644 --- a/programs/install/Install.cpp +++ b/programs/install/Install.cpp @@ -641,7 +641,6 @@ int mainEntryClickHouseInstall(int argc, char ** argv) " This is optional. Taskstats accounting will be disabled." 
" To enable taskstats accounting you may add the required capability later manually.\"", "/tmp/test_setcap.sh", fs::canonical(main_bin_path).string()); - fmt::print(" {}\n", command); executeScript(command); #endif From 5a5590580757550253a99fa50a51c6bb0af2c065 Mon Sep 17 00:00:00 2001 From: Alexey Milovidov Date: Sun, 11 Apr 2021 23:04:50 +0300 Subject: [PATCH 005/108] Simplify debian packages --- debian/clickhouse-common-static.install | 1 - debian/rules | 3 --- 2 files changed, 4 deletions(-) diff --git a/debian/clickhouse-common-static.install b/debian/clickhouse-common-static.install index bd65f17ad42..087a6dbba8f 100644 --- a/debian/clickhouse-common-static.install +++ b/debian/clickhouse-common-static.install @@ -3,4 +3,3 @@ usr/bin/clickhouse-odbc-bridge usr/bin/clickhouse-library-bridge usr/bin/clickhouse-extract-from-config usr/share/bash-completion/completions -etc/security/limits.d/clickhouse.conf diff --git a/debian/rules b/debian/rules index 8eb47e95389..73d1f3d3b34 100755 --- a/debian/rules +++ b/debian/rules @@ -113,9 +113,6 @@ override_dh_install: ln -sf clickhouse-server.docs debian/clickhouse-client.docs ln -sf clickhouse-server.docs debian/clickhouse-common-static.docs - mkdir -p $(DESTDIR)/etc/security/limits.d - cp debian/clickhouse.limits $(DESTDIR)/etc/security/limits.d/clickhouse.conf - # systemd compatibility mkdir -p $(DESTDIR)/etc/systemd/system/ cp debian/clickhouse-server.service $(DESTDIR)/etc/systemd/system/ From 61816ad0767d0eb85816af49150ee77a066aa6dd Mon Sep 17 00:00:00 2001 From: Alexey Milovidov Date: Tue, 13 Apr 2021 00:56:16 +0300 Subject: [PATCH 006/108] Add a test for #2719 --- .../01812_basic_auth_http_server.reference | 0 .../01812_basic_auth_http_server.sh | 18 ++++++++++++++++++ 2 files changed, 18 insertions(+) create mode 100644 tests/queries/0_stateless/01812_basic_auth_http_server.reference create mode 100755 tests/queries/0_stateless/01812_basic_auth_http_server.sh diff --git a/tests/queries/0_stateless/01812_basic_auth_http_server.reference b/tests/queries/0_stateless/01812_basic_auth_http_server.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01812_basic_auth_http_server.sh b/tests/queries/0_stateless/01812_basic_auth_http_server.sh new file mode 100755 index 00000000000..9b328911e39 --- /dev/null +++ b/tests/queries/0_stateless/01812_basic_auth_http_server.sh @@ -0,0 +1,18 @@ +#!/usr/bin/env bash + +# In very old (e.g. 1.1.54385) versions of ClickHouse there was a bug in Poco HTTP library: +# Basic HTTP authentication headers was not parsed if the size of URL is exactly 4077 + something bytes. +# So, the user may get authentication error if valid credentials are passed. +# This is a minor issue because it does not have security implications (at worse the user will be not allowed to access). + +CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. "$CUR_DIR"/../shell_config.sh + +# In this test we do the opposite: passing the invalid credentials while server is accepting default user without a password. +# And if the bug exists, they will be ignored (treat as empty credentials) and query succeed. 
+ +for i in {3950..4100}; do ${CLICKHOUSE_CURL} --user default:12345 "${CLICKHOUSE_URL}&query=SELECT+1"$(perl -e "print '+'x$i") | grep -v -F 'password' && echo 'Fail'; done + +# You can check that the bug exists in old version by running the old server in Docker: +# docker run --network host -it --rm yandex/clickhouse-server:1.1.54385 From 8a88009c6c7cdc5d39e644547e7bf906a354aae4 Mon Sep 17 00:00:00 2001 From: Alexey Milovidov Date: Tue, 13 Apr 2021 15:39:13 +0300 Subject: [PATCH 007/108] Remove non-essential parts from Suggest --- programs/client/Suggest.cpp | 19 +++---------------- programs/client/Suggest.h | 2 +- 2 files changed, 4 insertions(+), 17 deletions(-) diff --git a/programs/client/Suggest.cpp b/programs/client/Suggest.cpp index dfa7048349e..8d4c0fdbd5a 100644 --- a/programs/client/Suggest.cpp +++ b/programs/client/Suggest.cpp @@ -108,14 +108,6 @@ void Suggest::loadImpl(Connection & connection, const ConnectionTimeouts & timeo " UNION ALL " "SELECT cluster FROM system.clusters" " UNION ALL " - "SELECT name FROM system.errors" - " UNION ALL " - "SELECT event FROM system.events" - " UNION ALL " - "SELECT metric FROM system.asynchronous_metrics" - " UNION ALL " - "SELECT metric FROM system.metrics" - " UNION ALL " "SELECT macro FROM system.macros" " UNION ALL " "SELECT policy_name FROM system.storage_policies" @@ -139,17 +131,12 @@ void Suggest::loadImpl(Connection & connection, const ConnectionTimeouts & timeo query << ") WHERE notEmpty(res)"; - Settings settings; - /// To show all rows from: - /// - system.errors - /// - system.events - settings.system_events_show_zero_values = true; - fetch(connection, timeouts, query.str(), settings); + fetch(connection, timeouts, query.str()); } -void Suggest::fetch(Connection & connection, const ConnectionTimeouts & timeouts, const std::string & query, Settings & settings) +void Suggest::fetch(Connection & connection, const ConnectionTimeouts & timeouts, const std::string & query) { - connection.sendQuery(timeouts, query, "" /* query_id */, QueryProcessingStage::Complete, &settings); + connection.sendQuery(timeouts, query, "" /* query_id */, QueryProcessingStage::Complete); while (true) { diff --git a/programs/client/Suggest.h b/programs/client/Suggest.h index 0049bc08ebf..03332088cbe 100644 --- a/programs/client/Suggest.h +++ b/programs/client/Suggest.h @@ -33,7 +33,7 @@ public: private: void loadImpl(Connection & connection, const ConnectionTimeouts & timeouts, size_t suggestion_limit); - void fetch(Connection & connection, const ConnectionTimeouts & timeouts, const std::string & query, Settings & settings); + void fetch(Connection & connection, const ConnectionTimeouts & timeouts, const std::string & query); void fillWordsFromBlock(const Block & block); /// Words are fetched asynchronously. 
From fdf3cf378f277667b1a2b9b93ada9e3379a76e88 Mon Sep 17 00:00:00 2001 From: Alexander Tokmakov Date: Tue, 13 Apr 2021 15:49:40 +0300 Subject: [PATCH 008/108] fix testkeeper multi response --- src/Common/ZooKeeper/TestKeeper.cpp | 34 +++++++++++++------ .../01152_cross_replication.reference | 12 +++++++ .../0_stateless/01152_cross_replication.sql | 30 ++++++++++++++++ 3 files changed, 65 insertions(+), 11 deletions(-) create mode 100644 tests/queries/0_stateless/01152_cross_replication.reference create mode 100644 tests/queries/0_stateless/01152_cross_replication.sql diff --git a/src/Common/ZooKeeper/TestKeeper.cpp b/src/Common/ZooKeeper/TestKeeper.cpp index 5951164f58f..36c875fe325 100644 --- a/src/Common/ZooKeeper/TestKeeper.cpp +++ b/src/Common/ZooKeeper/TestKeeper.cpp @@ -421,26 +421,38 @@ std::pair TestKeeperMultiRequest::process(TestKeeper::Contain try { - for (const auto & request : requests) + auto request_it = requests.begin(); + response.error = Error::ZOK; + while (request_it != requests.end()) { - const TestKeeperRequest & concrete_request = dynamic_cast(*request); + const TestKeeperRequest & concrete_request = dynamic_cast(**request_it); + ++request_it; auto [ cur_response, undo_action ] = concrete_request.process(container, zxid); response.responses.emplace_back(cur_response); if (cur_response->error != Error::ZOK) { response.error = cur_response->error; - - for (auto it = undo_actions.rbegin(); it != undo_actions.rend(); ++it) - if (*it) - (*it)(); - - return { std::make_shared(response), {} }; + break; + } + + undo_actions.emplace_back(std::move(undo_action)); + } + + if (response.error != Error::ZOK) + { + for (auto it = undo_actions.rbegin(); it != undo_actions.rend(); ++it) + if (*it) + (*it)(); + + while (request_it != requests.end()) + { + const TestKeeperRequest & concrete_request = dynamic_cast(**request_it); + ++request_it; + response.responses.emplace_back(concrete_request.createResponse()); + response.responses.back()->error = Error::ZRUNTIMEINCONSISTENCY; } - else - undo_actions.emplace_back(std::move(undo_action)); } - response.error = Error::ZOK; return { std::make_shared(response), {} }; } catch (...) 
diff --git a/tests/queries/0_stateless/01152_cross_replication.reference b/tests/queries/0_stateless/01152_cross_replication.reference new file mode 100644 index 00000000000..f409f3e65fa --- /dev/null +++ b/tests/queries/0_stateless/01152_cross_replication.reference @@ -0,0 +1,12 @@ +localhost 9000 0 0 0 +localhost 9000 0 0 0 +demo_loan_01568 +demo_loan_01568 +CREATE TABLE shard_0.demo_loan_01568\n(\n `id` Int64 COMMENT \'id\',\n `date_stat` Date COMMENT \'date of stat\',\n `customer_no` String COMMENT \'customer no\',\n `loan_principal` Float64 COMMENT \'loan principal\'\n)\nENGINE = ReplacingMergeTree\nPARTITION BY toYYYYMM(date_stat)\nORDER BY id\nSETTINGS index_granularity = 8192 +CREATE TABLE shard_1.demo_loan_01568\n(\n `id` Int64 COMMENT \'id\',\n `date_stat` Date COMMENT \'date of stat\',\n `customer_no` String COMMENT \'customer no\',\n `loan_principal` Float64 COMMENT \'loan principal\'\n)\nENGINE = ReplacingMergeTree\nPARTITION BY toYYYYMM(date_stat)\nORDER BY id\nSETTINGS index_granularity = 8192 +1 2021-04-13 qwerty 3.14159 +2 2021-04-14 asdfgh 2.71828 +1 2021-04-13 qwerty 3.14159 +2 2021-04-14 asdfgh 2.71828 +2 2021-04-14 asdfgh 2.71828 +1 2021-04-13 qwerty 3.14159 diff --git a/tests/queries/0_stateless/01152_cross_replication.sql b/tests/queries/0_stateless/01152_cross_replication.sql new file mode 100644 index 00000000000..4137554ca85 --- /dev/null +++ b/tests/queries/0_stateless/01152_cross_replication.sql @@ -0,0 +1,30 @@ +DROP DATABASE IF EXISTS shard_0; +DROP DATABASE IF EXISTS shard_1; +DROP TABLE IF EXISTS demo_loan_01568_dist; + +CREATE DATABASE shard_0; +CREATE DATABASE shard_1; + +SET distributed_ddl_output_mode='none'; +CREATE TABLE demo_loan_01568 ON CLUSTER test_cluster_two_shards_different_databases ( `id` Int64 COMMENT 'id', `date_stat` Date COMMENT 'date of stat', `customer_no` String COMMENT 'customer no', `loan_principal` Float64 COMMENT 'loan principal' ) ENGINE=ReplacingMergeTree() ORDER BY id PARTITION BY toYYYYMM(date_stat); -- { serverError 371 } +SET distributed_ddl_output_mode='throw'; +CREATE TABLE shard_0.demo_loan_01568 ON CLUSTER test_cluster_two_shards_different_databases ( `id` Int64 COMMENT 'id', `date_stat` Date COMMENT 'date of stat', `customer_no` String COMMENT 'customer no', `loan_principal` Float64 COMMENT 'loan principal' ) ENGINE=ReplacingMergeTree() ORDER BY id PARTITION BY toYYYYMM(date_stat); +CREATE TABLE shard_1.demo_loan_01568 ON CLUSTER test_cluster_two_shards_different_databases ( `id` Int64 COMMENT 'id', `date_stat` Date COMMENT 'date of stat', `customer_no` String COMMENT 'customer no', `loan_principal` Float64 COMMENT 'loan principal' ) ENGINE=ReplacingMergeTree() ORDER BY id PARTITION BY toYYYYMM(date_stat); + +SHOW TABLES FROM shard_0; +SHOW TABLES FROM shard_1; +SHOW CREATE TABLE shard_0.demo_loan_01568; +SHOW CREATE TABLE shard_1.demo_loan_01568; + +CREATE TABLE demo_loan_01568_dist AS shard_0.demo_loan_01568 ENGINE=Distributed('test_cluster_two_shards_different_databases', '', 'demo_loan_01568', id % 2); +INSERT INTO demo_loan_01568_dist VALUES (1, '2021-04-13', 'qwerty', 3.14159), (2, '2021-04-14', 'asdfgh', 2.71828); +SELECT * FROM demo_loan_01568_dist ORDER BY id; +SYSTEM FLUSH DISTRIBUTED demo_loan_01568_dist; +SELECT * FROM demo_loan_01568_dist ORDER BY id; + +SELECT * FROM shard_0.demo_loan_01568; +SELECT * FROM shard_1.demo_loan_01568; + +DROP DATABASE shard_0; +DROP DATABASE shard_1; +DROP TABLE demo_loan_01568_dist; From 83a78a5aa7d8aa38d195ca5925a1ae869c2f5b1e Mon Sep 17 00:00:00 2001 From: Alexey Milovidov 
Date: Tue, 13 Apr 2021 16:56:39 +0300 Subject: [PATCH 009/108] Fix unfinished code in GatherUtils #20272 --- src/Functions/GatherUtils/Algorithms.h | 8 +++--- src/Functions/array/arrayIndex.h | 9 +++---- src/Functions/array/hasAllAny.h | 37 ++++---------------------- 3 files changed, 13 insertions(+), 41 deletions(-) diff --git a/src/Functions/GatherUtils/Algorithms.h b/src/Functions/GatherUtils/Algorithms.h index e174261d76e..1a962089d0c 100644 --- a/src/Functions/GatherUtils/Algorithms.h +++ b/src/Functions/GatherUtils/Algorithms.h @@ -82,7 +82,7 @@ inline ALWAYS_INLINE void writeSlice(const GenericArraySlice & slice, GenericArr sink.current_offset += slice.size; } else - throw Exception("Function writeSlice expect same column types for GenericArraySlice and GenericArraySink.", + throw Exception("Function writeSlice expects same column types for GenericArraySlice and GenericArraySink.", ErrorCodes::LOGICAL_ERROR); } @@ -162,7 +162,7 @@ inline ALWAYS_INLINE void writeSlice(const GenericValueSlice & slice, GenericArr ++sink.current_offset; } else - throw Exception("Function writeSlice expect same column types for GenericValueSlice and GenericArraySink.", + throw Exception("Function writeSlice expects same column types for GenericValueSlice and GenericArraySink.", ErrorCodes::LOGICAL_ERROR); } @@ -609,7 +609,7 @@ bool sliceHas(const GenericArraySlice & first, const GenericArraySlice & second) { /// Generic arrays should have the same type in order to use column.compareAt(...) if (!first.elements->structureEquals(*second.elements)) - return false; + throw Exception("Function sliceHas expects same column types for slices.", ErrorCodes::LOGICAL_ERROR); auto impl = sliceHasImpl; return impl(first, second, nullptr, nullptr); @@ -670,7 +670,7 @@ void NO_INLINE arrayAllAny(FirstSource && first, SecondSource && second, ColumnU auto & data = result.getData(); for (auto row : ext::range(0, size)) { - data[row] = static_cast(sliceHas(first.getWhole(), second.getWhole()) ? 1 : 0); + data[row] = static_cast(sliceHas(first.getWhole(), second.getWhole())); first.next(); second.next(); } diff --git a/src/Functions/array/arrayIndex.h b/src/Functions/array/arrayIndex.h index fb9496e634f..5e31695f7e2 100644 --- a/src/Functions/array/arrayIndex.h +++ b/src/Functions/array/arrayIndex.h @@ -373,11 +373,10 @@ public: ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT); if (!arguments[1]->onlyNull() && !allowArguments(array_type->getNestedType(), arguments[1])) - throw Exception("Types of array and 2nd argument of function \"" - + getName() + "\" must be identical up to nullability, cardinality, " - "numeric types, or Enum and numeric type. Passed: " - + arguments[0]->getName() + " and " + arguments[1]->getName() + ".", - ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT); + throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT, + "Types of array and 2nd argument of function `{}` must be identical up to nullability, cardinality, " + "numeric types, or Enum and numeric type. 
Passed: {} and {}.", + getName(), arguments[0]->getName(), arguments[1]->getName()); return std::make_shared>(); } diff --git a/src/Functions/array/hasAllAny.h b/src/Functions/array/hasAllAny.h index b35c5996652..1ad1df14020 100644 --- a/src/Functions/array/hasAllAny.h +++ b/src/Functions/array/hasAllAny.h @@ -13,6 +13,7 @@ #include #include #include +#include namespace DB @@ -51,41 +52,13 @@ public: ColumnPtr executeImpl(const ColumnsWithTypeAndName & arguments, const DataTypePtr &, size_t input_rows_count) const override { - size_t rows = input_rows_count; size_t num_args = arguments.size(); - DataTypePtr common_type = nullptr; - auto commonType = [&common_type, &arguments]() - { - if (common_type == nullptr) - { - DataTypes data_types; - data_types.reserve(arguments.size()); - for (const auto & argument : arguments) - data_types.push_back(argument.type); - - common_type = getLeastSupertype(data_types); - } - - return common_type; - }; + DataTypePtr common_type = getLeastSupertype(ext::map(arguments, [](auto & arg) { return arg.type; })); Columns preprocessed_columns(num_args); - for (size_t i = 0; i < num_args; ++i) - { - const auto & argument = arguments[i]; - ColumnPtr preprocessed_column = argument.column; - - const auto * argument_type = typeid_cast(argument.type.get()); - const auto & nested_type = argument_type->getNestedType(); - - /// Converts Array(Nothing) or Array(Nullable(Nothing) to common type. Example: hasAll([Null, 1], [Null]) -> 1 - if (typeid_cast(removeNullable(nested_type).get())) - preprocessed_column = castColumn(argument, commonType()); - - preprocessed_columns[i] = std::move(preprocessed_column); - } + preprocessed_columns[i] = castColumn(arguments[i], common_type); std::vector> sources; @@ -100,12 +73,12 @@ public: } if (const auto * argument_column_array = typeid_cast(argument_column.get())) - sources.emplace_back(GatherUtils::createArraySource(*argument_column_array, is_const, rows)); + sources.emplace_back(GatherUtils::createArraySource(*argument_column_array, is_const, input_rows_count)); else throw Exception{"Arguments for function " + getName() + " must be arrays.", ErrorCodes::LOGICAL_ERROR}; } - auto result_column = ColumnUInt8::create(rows); + auto result_column = ColumnUInt8::create(input_rows_count); auto * result_column_ptr = typeid_cast(result_column.get()); GatherUtils::sliceHas(*sources[0], *sources[1], search_type, *result_column_ptr); From 094e7032cb8a3fe5055a9ec3f7ea0747ced5d0ee Mon Sep 17 00:00:00 2001 From: Alexander Kuzmenkov Date: Tue, 13 Apr 2021 16:57:31 +0300 Subject: [PATCH 010/108] blog article about code review --- website/blog/en/2021/code-review.md | 78 +++++++++++++++++++++++++++++ 1 file changed, 78 insertions(+) create mode 100644 website/blog/en/2021/code-review.md diff --git a/website/blog/en/2021/code-review.md b/website/blog/en/2021/code-review.md new file mode 100644 index 00000000000..37af333de02 --- /dev/null +++ b/website/blog/en/2021/code-review.md @@ -0,0 +1,78 @@ +# Code review in ClickHouse +# Understanding Why Your Program Works +# Effective Code Review +# Explaining Why Your Program Works +# The Tests Are Passing, Why Would I Read The Diff Again? + +Code review is one of the few software development techniques that is consistently found to reduce the incidence of defects. Why is it effective? This article offers some wild conjecture on this topic, complete with practical advice on getting the most out of your code review. 
+ + +## Understanding Why Your Program Works + +As software developers, we routinely have to reason about the behaviour of software. For example, to fix a bug, we start with a test case that exhibits the behavior in question, and then read the source code to see how this behavior arises. Often we find ourselves unable to understand anything, having to resort to forensic techniques such as using a debugger or interrogating the author of the code. This situation is far from ideal. After all, if we have trouble understanding our software, how can we be sure it works at all? No surprise that it doesn't. + +The correct understanding is also important when modifying and extending software. A programmer must always have a precise mental model on what is going on in the program, how exactly it maps to the domain, and so on. If there are flaws in this model, the code they write won't match the domain and won't solve the problem correctly. Wrong understanding directly causes bugs. + +How can we make our software easier to understand? It is often said that to see if you really understand something, you have to try explaining it to somebody. For example, as a science student taking an exam, you might be expected to give an explanation to some well-known observed effect, deriving it from the basic laws of this domain. In a similar way, if we are modeling some problem in software, we can start from domain knowledge and general programming knowledge, and build an argument as to why our model is applicable to the problem, why it is correct, has optimal performance and so on. This explanation takes the form of code comments, or, at a higher level, design documents. + +If you have a habit of thoroughly commenting your code, you might have noticed that writing the comments is often much harder than writing the code itself. It also has an unpleasant side effect -- at times, while writing a comment, it becomes increasingly clear to you that the code is incomprehensible and takes forever to explain, or maybe is downright wrong, and you have to rewrite it. This is exactly the major positive effect of writing the comments. It helps you find bugs and make the code more understandable, and you wouldn't have noticed these problems unless you tried to explain the code. + +Understanding why your program works is inseparable from understanding why it fails, so it's no surprise that there is a similar process for the latter, called "rubber duck debugging". To debug a particularly nasty bug, you start explaining the program logic step by step to an imaginary partner or even to an inanimate object such as a yellow rubber duck. This process is often very effective, much in excess of what one would expect given the limited conversational abilities of rubber ducks. The underlying mechanism is probably the same as with comments &emdash; you start to understand your program better by just trying to explain it, and this lets you find bugs. + +When working in a team, you even have a luxury of explaining your code to another developer who works on the same project. It's probably more entertaining than talking to a duck. More importantly, they are going to maintain the code you wrote, so better make sure that _they_ can understand it as well. A good formal occasion for explaining how your code works is the code review process. Let's see how you can get the most out of it, in terms of making your code understandable. 
+ +## Reviewing Others Code + +Code review is often framed as a gatekeeping process, where each contribution is vetted by maintainers to ensure that it is in line with project direction, has acceptable quality, meets the coding guidelines and so on. This perspective might seem natural when dealing with external contributions, but makes less sense if you apply it to internal ones. After all, our fellow maintainers have perfect understanding of project goals and guidelines, probably they are more talented and experienced than us, and can be trusted to produce the best solution possible. How can an additional review be helpful? + +A less-obvious, but very important, part of reviewing the code is just seeing whether it can be understood by another person. It is helpful regardless of the administrative roles and programming proficiency of the parties. What should you do as a reviewer if ease of understanding is your main priority? + +You probably don't need to be concerned with trivia such as code style. There are automated tools for that. You might find some bugs, but this is probably a side effect. Your main task is making sense of the code. + +Start with checking the high-level description of the problem that the pull request is trying to solve. Read the description of the bug it fixes, or the docs for the feature it adds. For bigger features, there is normally a design document that describes the overall implementation without getting too deep into the code details. After you understand the problem, start reading the code. Does it make sense to you? You shouldn't try too hard to understand it. Imagine that you are tired and under time pressure. If you feel you have to make a lot of effort to understand the code, ask the author for clarifications. As you talk, you might discover that the code is not correct, or it may be rewritten in a more straightforward way, or it needs more comments. + +After you get the answers, don't forget to update the code and the comments to reflect them. Don't just stop after getting it explained to you personally. If you had a question as a reviewer, chances are that other people will also have this question later, but there might be nobody around to ask. They will have to resort to `git blame` and re-reading the entire pull request or several of them. Code archaeology is sometimes fun, but it's the last thing you want to do when you are investigating an urgent bug. All the answers should be on the surface. + +Working with the author, you should ensure that the code is mostly obvious to anyone with basic domain and programming knowledge, and all non-obvious parts are clearly explained. + +### Preparing Your Code For Review + +As an author, you can also do some things to make your code easier to understand for the reviewer. + +First of all, if you are implementing a major feature, it probably needs a round of desing review before you even start writing code. Skipping a desing review and jumping right into the code review can be a major source of frustration, because it might turn out that even the problem you are solving was formulated incorrectly, and all your work has to be thrown away. Of course, this is not prevented completely by desing review, either. Programming is an iterative, exploratory activity, and in complex cases you only begin to grasp the problem after implementing a first solution, which you then realize is incorrect and has to be thrown away. 
+ +When preparing your code for review, your major objective is to make your problem and its solution clear to the reviewer. A good tool for this is code comments. Any sizable piece of logic should have an introductory comment describing its general purpose and outlining the implementation. This description can reference similar features, explain the difference to them, explain how it interfaces with other subsystems. A good place to put this general description is a function that serves as a main entry point for the feature, or other form of its public interface, or the most significant class, or the file containing the implementation, and so on. + +Drilling down to each block of code, you should be able to explain what it does, why it does that, why this way and not another. If there are several ways of doing the thing, why did you choose this one? Of course, for some code these things follow from the more general comments and don't have to be restated. The mechanics of data manipulation should be apparent from the code itself. If you find yourself explaining a particular feature of the language, it's probably best not to use it. + +Pay special attention to making the data structures apparent in the code, and their meaning and invariants well commented. The choice of data structures ultimately determines which algorithms you can apply, and sets the limits of performance, which is another reason why we should care about it as ClickHouse developers. + +When explaining the code, it is important to give your reader enough context, so that they can understand you without a deep investigation of the surrounding systems and obscure test cases. Give pointers to all the things that might be relevant to the task. If you know some corner cases which your code has to handle, describe them in enough detail so that they can be reproduced. If there is a relevant standard or a design document, reference it, or even quote it inline. If you're relying on some invariant in other system, mention it. It is good practice to add programmatic checks that mirror your comments, when it is easy to do so. Your comment about an invariant should be accompanied by an assertion, and an important scenario should be reproduced by a test case. + +Don't worry about being too verbose. There is often not enough comments, but almost never too much of them. + +## Common Concerns about Code Comments + +It is common to hear objections to the idea of commenting the code, so let's discuss a couple of usual ones. + +### Self-documenting Code + +You can often see a perplexing idea that the source code can somehow be "self-documenting", or that the comments are a "code smell", and their presence indicates that the code is badly written. I have trouble imagining how this belief can be compatible with any experience in maintaining sufficiently complex and large software, over the years, in collaboration with others. The code and the comments describe different parts of the solution. The code describes the data structures and their transformations, but it cannot convey meaning. The names in the code serve as pointers that map the data and its transformations to the domain concepts, but they are schematic and lack nuance. It is not so difficult to write code that makes it easy to understand what's going on in terms of data manipulation. What it takes is mostly moderation, that is, stopping yourself from being too clever. For most code, it is easy to see what it does, but why? Why this way and not that way? Why is it correct? 
Why this fast path here helps? Why did you choose this data layout? How is this invariant guaranteed? Why do you need this data now and not later? And so on. This might be not so evident for a developer who is working alone on a short-lived project, because they have all the necessary context in their head. But when they have to work with other people (or even with themselves from past and future), or in an unfamiliar area, the importance of non-code, higher-level context becomes painfully clear. The idea that we should, or even can, somehow encode comments such as [this one](https://github.com/postgres/postgres/blob/55dc86eca70b1dc18a79c141b3567efed910329d/src/backend/optimizer/path/indxpath.c#L2226) into names or control flow is just absurd. + +### Obsolete Comments + +The comments can't be checked by the compiler or the tests, so there is no automated way to make sure that they are up to date with the rest of the comments and the code. The possibility of comments gradually getting incorrect is sometimes used as an argument against having any comments at all. + +This problem is not exclusive to the comments -- the code also can and does become obsolete. Simple cases such as dead code can be detected by static analysis or studying the test coverage of code. More complex cases can only be found by proofreading, such as maintaining an invariant that is not important anymore, or preparing some data that is not needed. + +While an obsolete comment can lead to a mistake, the same applies, perhaps more strongly, to the lack of comments. When you need some higher-level knowledge about the code, but it is not written down, you are forced to perform an entire investigation from first principles to understand what's going on, and this is error-prone. Even an obsolete comment likely gives a better starting point than nothing. Moreover, in a code base that makes an active use of the comments, they tend to be mostly correct. This is because the developers rely on comments, read and write them, pay attention to them during code review. The comments are routinely changed along with changing the code, and the outdated comments are soon noticed and fixed. This does require some habit. A lone comment in a vast desert of impenetrable self-documenting code is not going to fare well. + + +## Conclusion + +Code review makes your software better, and a significant part of this probably comes from trying to understand what your software actually does. By paying attention specifically to this aspect of code review, you can make it even more efficient. You'll have less bugs, and your code will be easier to maintain -- and what else could we ask for as software developers? + + +_2021-04-13 [Alexander Kuzmenkov](https://github.com/akuzm)_ + +_P.S. 
This text contains the personal opinions of the author, and is not an authoritative manual for ClickHouse maintainers._ From 96d0d05380cb709d6bc80a95281c1ad956b1df8f Mon Sep 17 00:00:00 2001 From: alexey-milovidov Date: Tue, 13 Apr 2021 19:06:21 +0300 Subject: [PATCH 011/108] Update code-review.md --- website/blog/en/2021/code-review.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/en/2021/code-review.md b/website/blog/en/2021/code-review.md index 37af333de02..63a99c13e04 100644 --- a/website/blog/en/2021/code-review.md +++ b/website/blog/en/2021/code-review.md @@ -39,7 +39,7 @@ Working with the author, you should ensure that the code is mostly obvious to an As an author, you can also do some things to make your code easier to understand for the reviewer. -First of all, if you are implementing a major feature, it probably needs a round of desing review before you even start writing code. Skipping a desing review and jumping right into the code review can be a major source of frustration, because it might turn out that even the problem you are solving was formulated incorrectly, and all your work has to be thrown away. Of course, this is not prevented completely by desing review, either. Programming is an iterative, exploratory activity, and in complex cases you only begin to grasp the problem after implementing a first solution, which you then realize is incorrect and has to be thrown away. +First of all, if you are implementing a major feature, it probably needs a round of design review before you even start writing code. Skipping a design review and jumping right into the code review can be a major source of frustration, because it might turn out that even the problem you are solving was formulated incorrectly, and all your work has to be thrown away. Of course, this is not prevented completely by design review, either. Programming is an iterative, exploratory activity, and in complex cases you only begin to grasp the problem after implementing a first solution, which you then realize is incorrect and has to be thrown away. When preparing your code for review, your major objective is to make your problem and its solution clear to the reviewer. A good tool for this is code comments. Any sizable piece of logic should have an introductory comment describing its general purpose and outlining the implementation. This description can reference similar features, explain the difference to them, explain how it interfaces with other subsystems. A good place to put this general description is a function that serves as a main entry point for the feature, or other form of its public interface, or the most significant class, or the file containing the implementation, and so on. 
From c01756014e32dcbe9164ff2b8181096144455e6a Mon Sep 17 00:00:00 2001 From: Alexey Milovidov Date: Tue, 13 Apr 2021 20:08:49 +0300 Subject: [PATCH 012/108] More generic implementation of `has` --- src/Functions/array/arrayIndex.h | 108 ++++++------------------------- 1 file changed, 19 insertions(+), 89 deletions(-) diff --git a/src/Functions/array/arrayIndex.h b/src/Functions/array/arrayIndex.h index 5e31695f7e2..a48cfb2edc5 100644 --- a/src/Functions/array/arrayIndex.h +++ b/src/Functions/array/arrayIndex.h @@ -5,6 +5,7 @@ #include #include #include +#include #include #include #include @@ -13,9 +14,9 @@ #include #include #include -#include "Columns/ColumnLowCardinality.h" -#include "DataTypes/DataTypeLowCardinality.h" -#include "Interpreters/castColumn.h" +#include +#include +#include namespace DB @@ -493,86 +494,10 @@ private: inline void moveResult() { result_column = std::move(result); } }; - static inline bool allowNested(const DataTypePtr & left, const DataTypePtr & right) - { - return ((isNativeNumber(left) || isEnum(left)) && isNativeNumber(right)) || left->equals(*right); - } - static inline bool allowArguments(const DataTypePtr & array_inner_type, const DataTypePtr & arg) { - if (allowNested(array_inner_type, arg)) - return true; - - /// Nullable - - const bool array_is_nullable = array_inner_type->isNullable(); - const bool arg_is_nullable = arg->isNullable(); - - const DataTypePtr arg_or_arg_nullable_nested = arg_is_nullable - ? checkAndGetDataType(arg.get())->getNestedType() - : arg; - - if (array_is_nullable) // comparing Array(Nullable(T)) elem and U - { - const DataTypePtr array_nullable_nested = - checkAndGetDataType(array_inner_type.get())->getNestedType(); - - // We also allow Nullable(T) and LC(U) if the Nullable(T) and U are allowed, - // the LC(U) will be converted to U. - return allowNested( - array_nullable_nested, - recursiveRemoveLowCardinality(arg_or_arg_nullable_nested)); - } - else if (arg_is_nullable) // cannot compare Array(T) elem (namely, T) and Nullable(T) - return false; - - /// LowCardinality - - const auto * const array_lc_ptr = checkAndGetDataType(array_inner_type.get()); - const auto * const arg_lc_ptr = checkAndGetDataType(arg.get()); - - const DataTypePtr array_lc_inner_type = recursiveRemoveLowCardinality(array_inner_type); - const DataTypePtr arg_lc_inner_type = recursiveRemoveLowCardinality(arg); - - const bool array_is_lc = nullptr != array_lc_ptr; - const bool arg_is_lc = nullptr != arg_lc_ptr; - - const bool array_lc_inner_type_is_nullable = array_is_lc && array_lc_inner_type->isNullable(); - const bool arg_lc_inner_type_is_nullable = arg_is_lc && arg_lc_inner_type->isNullable(); - - if (array_is_lc) // comparing LC(T) and U - { - const DataTypePtr array_lc_nested_or_lc_nullable_nested = array_lc_inner_type_is_nullable - ? checkAndGetDataType(array_lc_inner_type.get())->getNestedType() - : array_lc_inner_type; - - if (arg_is_lc) // comparing LC(T) and LC(U) - { - const DataTypePtr arg_lc_nested_or_lc_nullable_nested = arg_lc_inner_type_is_nullable - ? 
checkAndGetDataType(arg_lc_inner_type.get())->getNestedType() - : arg_lc_inner_type; - - return allowNested( - array_lc_nested_or_lc_nullable_nested, - arg_lc_nested_or_lc_nullable_nested); - } - else if (arg_is_nullable) // Comparing LC(T) and Nullable(U) - { - if (!array_lc_inner_type_is_nullable) - return false; // Can't compare Array(LC(U)) elem and Nullable(T); - - return allowNested( - array_lc_nested_or_lc_nullable_nested, - arg_or_arg_nullable_nested); - } - else // Comparing LC(T) and U (U neither Nullable nor LC) - return allowNested(array_lc_nested_or_lc_nullable_nested, arg); - } - - if (arg_is_lc) // Allow T and LC(U) if U and T are allowed (the low cardinality column will be converted). - return allowNested(array_inner_type, arg_lc_inner_type); - - return false; + return ((isNativeNumber(array_inner_type) || isEnum(array_inner_type)) && isNativeNumber(arg)) + || getLeastSupertype({array_inner_type, arg}); } #define INTEGRAL_TPL_PACK UInt8, UInt16, UInt32, UInt64, Int8, Int16, Int32, Int64, Float32, Float64 @@ -1043,33 +968,38 @@ private: if (!col) return nullptr; - const IColumn & col_nested = col->getData(); + DataTypePtr array_elements_type = assert_cast(*arguments[0].type).getNestedType(); + const DataTypePtr & index_type = arguments[1].type; + + DataTypePtr common_type = getLeastSupertype({array_elements_type, index_type}); + + ColumnPtr col_nested = castColumn({ col->getDataPtr(), array_elements_type, "" }, common_type); const ColumnPtr right_ptr = arguments[1].column->convertToFullColumnIfLowCardinality(); - const IColumn & item_arg = *right_ptr.get(); + ColumnPtr item_arg = castColumn({ right_ptr, removeLowCardinality(index_type), "" }, common_type); auto col_res = ResultColumnType::create(); auto [null_map_data, null_map_item] = getNullMaps(arguments); - if (item_arg.onlyNull()) + if (item_arg->onlyNull()) Impl::Null::process( col->getOffsets(), col_res->getData(), null_map_data); - else if (isColumnConst(item_arg)) + else if (isColumnConst(*item_arg)) Impl::Main::vector( - col_nested, + *col_nested, col->getOffsets(), - typeid_cast(item_arg).getDataColumn(), + typeid_cast(*item_arg).getDataColumn(), col_res->getData(), /// TODO This is wrong. 
null_map_data, nullptr); else Impl::Main::vector( - col_nested, + *col_nested, col->getOffsets(), - item_arg, + *item_arg, col_res->getData(), null_map_data, null_map_item); From a54985458d8bd542c1a91544bebeb0f6d68765e6 Mon Sep 17 00:00:00 2001 From: Alexey Milovidov Date: Tue, 13 Apr 2021 20:10:27 +0300 Subject: [PATCH 013/108] Add a test --- tests/queries/0_stateless/01812_has_generic.reference | 3 +++ tests/queries/0_stateless/01812_has_generic.sql | 3 +++ 2 files changed, 6 insertions(+) create mode 100644 tests/queries/0_stateless/01812_has_generic.reference create mode 100644 tests/queries/0_stateless/01812_has_generic.sql diff --git a/tests/queries/0_stateless/01812_has_generic.reference b/tests/queries/0_stateless/01812_has_generic.reference new file mode 100644 index 00000000000..e8183f05f5d --- /dev/null +++ b/tests/queries/0_stateless/01812_has_generic.reference @@ -0,0 +1,3 @@ +1 +1 +1 diff --git a/tests/queries/0_stateless/01812_has_generic.sql b/tests/queries/0_stateless/01812_has_generic.sql new file mode 100644 index 00000000000..9ab5b655102 --- /dev/null +++ b/tests/queries/0_stateless/01812_has_generic.sql @@ -0,0 +1,3 @@ +SELECT has([(1, 2), (3, 4)], (toUInt16(3), 4)); +SELECT hasAny([(1, 2), (3, 4)], [(toUInt16(3), 4)]); +SELECT hasAll([(1, 2), (3, 4)], [(toNullable(1), toUInt64(2)), (toUInt16(3), 4)]); From 919e96dbbe6a307ffb9d733ebe040f24945a13cd Mon Sep 17 00:00:00 2001 From: Alexey Milovidov Date: Tue, 13 Apr 2021 20:12:40 +0300 Subject: [PATCH 014/108] Fix test --- .../0_stateless/00918_has_unsufficient_type_check.reference | 1 - tests/queries/0_stateless/00918_has_unsufficient_type_check.sql | 2 +- 2 files changed, 1 insertion(+), 2 deletions(-) diff --git a/tests/queries/0_stateless/00918_has_unsufficient_type_check.reference b/tests/queries/0_stateless/00918_has_unsufficient_type_check.reference index 7938dcdde86..b261da18d51 100644 --- a/tests/queries/0_stateless/00918_has_unsufficient_type_check.reference +++ b/tests/queries/0_stateless/00918_has_unsufficient_type_check.reference @@ -1,3 +1,2 @@ -0 1 0 diff --git a/tests/queries/0_stateless/00918_has_unsufficient_type_check.sql b/tests/queries/0_stateless/00918_has_unsufficient_type_check.sql index f76fd446a8e..c40419e4d56 100644 --- a/tests/queries/0_stateless/00918_has_unsufficient_type_check.sql +++ b/tests/queries/0_stateless/00918_has_unsufficient_type_check.sql @@ -1,3 +1,3 @@ -SELECT hasAny([['Hello, world']], [[[]]]); +SELECT hasAny([['Hello, world']], [[[]]]); -- { serverError 386 } SELECT hasAny([['Hello, world']], [['Hello', 'world'], ['Hello, world']]); SELECT hasAll([['Hello, world']], [['Hello', 'world'], ['Hello, world']]); From 924eb6921719ecd80e7c35d764699bd681ff7073 Mon Sep 17 00:00:00 2001 From: Alexey Milovidov Date: Tue, 13 Apr 2021 20:14:29 +0300 Subject: [PATCH 015/108] Fix test --- tests/queries/0_stateless/00555_hasSubstr.reference | 2 -- tests/queries/0_stateless/00555_hasSubstr.sql | 4 ++-- 2 files changed, 2 insertions(+), 4 deletions(-) diff --git a/tests/queries/0_stateless/00555_hasSubstr.reference b/tests/queries/0_stateless/00555_hasSubstr.reference index 1051fa28d6c..de97d19c932 100644 --- a/tests/queries/0_stateless/00555_hasSubstr.reference +++ b/tests/queries/0_stateless/00555_hasSubstr.reference @@ -20,8 +20,6 @@ 0 1 - -0 -0 1 1 0 diff --git a/tests/queries/0_stateless/00555_hasSubstr.sql b/tests/queries/0_stateless/00555_hasSubstr.sql index 04c70e4a43b..5f90a69c546 100644 --- a/tests/queries/0_stateless/00555_hasSubstr.sql +++ 
b/tests/queries/0_stateless/00555_hasSubstr.sql @@ -25,8 +25,8 @@ select hasSubstr(['a', 'b'], ['a', 'c']); select hasSubstr(['a', 'c', 'b'], ['a', 'c']); select '-'; -select hasSubstr([1], ['a']); -select hasSubstr([[1, 2], [3, 4]], ['a', 'c']); +select hasSubstr([1], ['a']); -- { serverError 386 } +select hasSubstr([[1, 2], [3, 4]], ['a', 'c']); -- { serverError 386 } select hasSubstr([[1, 2], [3, 4], [5, 8]], [[3, 4]]); select hasSubstr([[1, 2], [3, 4], [5, 8]], [[3, 4], [5, 8]]); select hasSubstr([[1, 2], [3, 4], [5, 8]], [[1, 2], [5, 8]]); From 63a272a5333f667eafad8d08d95785d654cde7f5 Mon Sep 17 00:00:00 2001 From: Alexey Milovidov Date: Tue, 13 Apr 2021 20:15:18 +0300 Subject: [PATCH 016/108] Fix test --- tests/queries/0_stateless/00555_hasAll_hasAny.reference | 4 ---- tests/queries/0_stateless/00555_hasAll_hasAny.sql | 8 ++++---- 2 files changed, 4 insertions(+), 8 deletions(-) diff --git a/tests/queries/0_stateless/00555_hasAll_hasAny.reference b/tests/queries/0_stateless/00555_hasAll_hasAny.reference index b33700bfa02..5608f7b970e 100644 --- a/tests/queries/0_stateless/00555_hasAll_hasAny.reference +++ b/tests/queries/0_stateless/00555_hasAll_hasAny.reference @@ -34,10 +34,6 @@ 1 0 - -0 -0 -0 -0 - 0 1 diff --git a/tests/queries/0_stateless/00555_hasAll_hasAny.sql b/tests/queries/0_stateless/00555_hasAll_hasAny.sql index 9df356dce2e..c8a6c3cecbd 100644 --- a/tests/queries/0_stateless/00555_hasAll_hasAny.sql +++ b/tests/queries/0_stateless/00555_hasAll_hasAny.sql @@ -39,10 +39,10 @@ select hasAny(['a', 'b'], ['a', 'c']); select hasAll(['a', 'b'], ['a', 'c']); select '-'; -select hasAny([1], ['a']); -select hasAll([1], ['a']); -select hasAll([[1, 2], [3, 4]], ['a', 'c']); -select hasAny([[1, 2], [3, 4]], ['a', 'c']); +select hasAny([1], ['a']); -- { serverError 386 } +select hasAll([1], ['a']); -- { serverError 386 } +select hasAll([[1, 2], [3, 4]], ['a', 'c']); -- { serverError 386 } +select hasAny([[1, 2], [3, 4]], ['a', 'c']); -- { serverError 386 } select '-'; select hasAll([[1, 2], [3, 4]], [[1, 2], [3, 5]]); From 58ef54dd6313396b4e7a4e43fdeaffd732e1efa0 Mon Sep 17 00:00:00 2001 From: Alexey Milovidov Date: Tue, 13 Apr 2021 20:21:27 +0300 Subject: [PATCH 017/108] Fix test --- .../01676_clickhouse_client_autocomplete.sh | 10 ---------- 1 file changed, 10 deletions(-) diff --git a/tests/queries/0_stateless/01676_clickhouse_client_autocomplete.sh b/tests/queries/0_stateless/01676_clickhouse_client_autocomplete.sh index 08e07044841..21f415b59ce 100755 --- a/tests/queries/0_stateless/01676_clickhouse_client_autocomplete.sh +++ b/tests/queries/0_stateless/01676_clickhouse_client_autocomplete.sh @@ -69,16 +69,6 @@ compwords_positive=( max_concurrent_queries_for_all_users # system.clusters test_shard_localhost - # system.errors, also it is very rare to cover system_events_show_zero_values - CONDITIONAL_TREE_PARENT_NOT_FOUND - # system.events, also it is very rare to cover system_events_show_zero_values - WriteBufferFromFileDescriptorWriteFailed - # system.asynchronous_metrics, also this metric has zero value - # - # NOTE: that there is no ability to complete metrics like - # jemalloc.background_thread.num_runs, due to "." 
is used as a word breaker - # (and this cannot be changed -- db.table) - ReplicasMaxAbsoluteDelay # system.metrics PartsPreCommitted # system.macros From a29334de7c30adddc83304195af65870a50978f0 Mon Sep 17 00:00:00 2001 From: Alexey Milovidov Date: Tue, 13 Apr 2021 20:23:14 +0300 Subject: [PATCH 018/108] Fix test --- tests/queries/0_stateless/01812_basic_auth_http_server.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tests/queries/0_stateless/01812_basic_auth_http_server.sh b/tests/queries/0_stateless/01812_basic_auth_http_server.sh index 9b328911e39..b2caa32ef1a 100755 --- a/tests/queries/0_stateless/01812_basic_auth_http_server.sh +++ b/tests/queries/0_stateless/01812_basic_auth_http_server.sh @@ -12,7 +12,7 @@ CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) # In this test we do the opposite: passing the invalid credentials while server is accepting default user without a password. # And if the bug exists, they will be ignored (treat as empty credentials) and query succeed. -for i in {3950..4100}; do ${CLICKHOUSE_CURL} --user default:12345 "${CLICKHOUSE_URL}&query=SELECT+1"$(perl -e "print '+'x$i") | grep -v -F 'password' && echo 'Fail'; done +for i in {3950..4100}; do ${CLICKHOUSE_CURL} --user default:12345 "${CLICKHOUSE_URL}&query=SELECT+1"$(perl -e "print '+'x$i") | grep -v -F 'password' && echo 'Fail' ||:; done # You can check that the bug exists in old version by running the old server in Docker: # docker run --network host -it --rm yandex/clickhouse-server:1.1.54385 From 2450b1e7ca4b08a431f88c06c07370aac14dfa19 Mon Sep 17 00:00:00 2001 From: Alexey Milovidov Date: Tue, 13 Apr 2021 20:34:11 +0300 Subject: [PATCH 019/108] Add hilight for usability --- programs/install/Install.cpp | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/programs/install/Install.cpp b/programs/install/Install.cpp index ef72624e7ab..3cab7a0ce96 100644 --- a/programs/install/Install.cpp +++ b/programs/install/Install.cpp @@ -71,6 +71,9 @@ namespace ErrorCodes } +/// ANSI escape sequence for intense color in terminal. +#define HILITE "\033[1m" +#define END_HILITE "\033[0m" using namespace DB; namespace po = boost::program_options; @@ -563,12 +566,12 @@ int mainEntryClickHouseInstall(int argc, char ** argv) if (has_password_for_default_user) { - fmt::print("Password for default user is already specified. To remind or reset, see {} and {}.\n", + fmt::print(HILITE "Password for default user is already specified. To remind or reset, see {} and {}." END_HILITE, users_config_file.string(), users_d.string()); } else if (!is_interactive) { - fmt::print("Password for default user is empty string. See {} and {} to change it.\n", + fmt::print(HILITE "Password for default user is empty string. See {} and {} to change it." END_HILITE, users_config_file.string(), users_d.string()); } else From 4a69d6f231d97e25f17a7277a05ca16ad278ac29 Mon Sep 17 00:00:00 2001 From: Alexey Milovidov Date: Tue, 13 Apr 2021 20:34:11 +0300 Subject: [PATCH 020/108] Add hilight for usability --- programs/install/Install.cpp | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/programs/install/Install.cpp b/programs/install/Install.cpp index c40495d702a..3e887b0575c 100644 --- a/programs/install/Install.cpp +++ b/programs/install/Install.cpp @@ -71,6 +71,9 @@ namespace ErrorCodes } +/// ANSI escape sequence for intense color in terminal. 
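// Annotation, not part of the patch: HILITE / END_HILITE defined just below are plain ANSI
// SGR escape sequences -- "\033[1m" turns on bold/intense rendering and "\033[0m" resets all
// attributes -- so a call from this patch such as
//     fmt::print(HILITE "Password for default user is empty string. See {} and {} to change it." END_HILITE,
//                users_config_file.string(), users_d.string());
// prints the hint highlighted on ANSI-capable terminals.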
+#define HILITE "\033[1m" +#define END_HILITE "\033[0m" using namespace DB; namespace po = boost::program_options; @@ -563,12 +566,12 @@ int mainEntryClickHouseInstall(int argc, char ** argv) if (has_password_for_default_user) { - fmt::print("Password for default user is already specified. To remind or reset, see {} and {}.\n", + fmt::print(HILITE "Password for default user is already specified. To remind or reset, see {} and {}." END_HILITE, users_config_file.string(), users_d.string()); } else if (!is_interactive) { - fmt::print("Password for default user is empty string. See {} and {} to change it.\n", + fmt::print(HILITE "Password for default user is empty string. See {} and {} to change it." END_HILITE, users_config_file.string(), users_d.string()); } else From a767e174a2a62ab36164004733b7ea84b23ae5f7 Mon Sep 17 00:00:00 2001 From: Alexey Milovidov Date: Tue, 13 Apr 2021 20:37:36 +0300 Subject: [PATCH 021/108] Maybe better (experiment) --- programs/install/Install.cpp | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/programs/install/Install.cpp b/programs/install/Install.cpp index 3e887b0575c..c6c21d43739 100644 --- a/programs/install/Install.cpp +++ b/programs/install/Install.cpp @@ -560,9 +560,8 @@ int mainEntryClickHouseInstall(int argc, char ** argv) /// Set up password for default user. - bool stdin_is_a_tty = isatty(STDIN_FILENO); bool stdout_is_a_tty = isatty(STDOUT_FILENO); - bool is_interactive = stdin_is_a_tty && stdout_is_a_tty; + bool is_interactive = stdout_is_a_tty; if (has_password_for_default_user) { From 8cb0bb4ca82ac57ee6b886b7a8b63c46968077e0 Mon Sep 17 00:00:00 2001 From: Azat Khuzhin Date: Sat, 20 Feb 2021 16:46:55 +0300 Subject: [PATCH 022/108] Fix SIGSEGV by waiting servers thread pool It is easy to reproduce with shutdown_wait_unfinished=0: ================================================================= ==13442==ERROR: AddressSanitizer: heap-use-after-free on address 0x611000210f30 at pc 0x00000a8e55a0 bp 0x7fff2b83e270 sp 0x7fff2b83e268 WRITE of size 8 at 0x611000210f30 thread T2 (TCPHandler) 0 0xa8e559f in long std::__1::__cxx_atomic_fetch_add(std::__1::__cxx_atomic_base_impl*, long, std::__1::memory_order) obj-x86_64-linux-gnu/../contrib/libcxx/include/atomic:1050:12 1 0xa8e559f in std::__1::__atomic_base::fetch_add(long, std::__1::memory_order) obj-x86_64-linux-gnu/../contrib/libcxx/include/atomic:1719:17 2 0xa8e559f in MemoryTracker::alloc(long) obj-x86_64-linux-gnu/../src/Common/MemoryTracker.cpp:146:35 3 0xa8e510c in MemoryTracker::alloc(long) obj-x86_64-linux-gnu/../src/Common/MemoryTracker.cpp 4 0xa90b474 in DB::ThreadStatus::~ThreadStatus() obj-x86_64-linux-gnu/../src/Common/ThreadStatus.cpp:92:28 5 0x1f90ee83 in DB::TCPHandler::runImpl() obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:450:1 6 0x1f92dcac in DB::TCPHandler::run() obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:1492:9 7 0x25bdc2fe in Poco::Net::TCPServerConnection::start() obj-x86_64-linux-gnu/../contrib/poco/Net/src/TCPServerConnection.cpp:43:3 8 0x25bdce1b in Poco::Net::TCPServerDispatcher::run() obj-x86_64-linux-gnu/../contrib/poco/Net/src/TCPServerDispatcher.cpp:113:19 9 0x25e9c784 in Poco::PooledThread::run() obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/ThreadPool.cpp:199:14 10 0x25e96cd6 in Poco::ThreadImpl::runnableEntry(void*) obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/Thread_POSIX.cpp:345:27 11 0x7ffff7f723e8 in start_thread (/usr/lib/libpthread.so.0+0x93e8) 12 0x7ffff7ea0292 in clone (/usr/lib/libc.so.6+0x100292) 0x611000210f30 is 
located 112 bytes inside of 216-byte region [0x611000210ec0,0x611000210f98) freed by thread T0 here: 0 0xa845d02 in operator delete(void*, unsigned long) (/src/ch/tmp/upstream/clickhouse-asan+0xa845d02) 1 0x1d38328c in void std::__1::__libcpp_operator_delete(void*, unsigned long) obj-x86_64-linux-gnu/../contrib/libcxx/include/new:245:3 2 0x1d38328c in void std::__1::__do_deallocate_handle_size<>(void*, unsigned long) obj-x86_64-linux-gnu/../contrib/libcxx/include/new:271:10 3 0x1d38328c in std::__1::__libcpp_deallocate(void*, unsigned long, unsigned long) obj-x86_64-linux-gnu/../contrib/libcxx/include/new:285:14 4 0x1d38328c in std::__1::allocator, std::__1::allocator >, DB::ProcessListForUser>, void*> >::deallocate(std::__1::__hash_node, std::__1::allocator >, DB::ProcessListForUser>, void*>*, unsigned long) obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:849:13 5 0x1d38328c in std::__1::allocator_traits, std::__1::allocator >, DB::ProcessListForUser>, void*> > >::deallocate(std::__1::allocator, std::__1::allocator >, DB::ProcessListForUser>, void*> >&, std::__1::__hash_node, std::__1::allocator >, DB::ProcessListForUser>, void*>*, unsigned long) obj-x86_64-linux-gnu/../contrib/libcxx/include/__memory/allocator_traits.h:476:14 6 0x1d38328c in std::__1::__hash_table, std::__1::allocator >, DB::ProcessListForUser>, std::__1::__unordered_map_hasher, std::__1::allocator >, std::__1::__hash_value_type, std::__1::allocator >, DB::ProcessListForUser>, std::__1::hash, std::__1::allocator > >, std::__1::equal_to, std::__1::allocator > >, true>, std::__1::__unordered_map_equal, std::__1::allocator >, std::__1::__hash_value_type, std::__1::allocator >, DB::ProcessListForUser>, std::__1::equal_to, std::__1::allocator > >, std::__1::hash, std::__1::allocator > >, true>, std::__1::allocator, std::__1::allocator >, DB::ProcessListForUser> > >::__deallocate_node(std::__1::__hash_node_base, std::__1::allocator >, DB::ProcessListForUser>, void*>*>*) obj-x86_64-linux-gnu/../contrib/libcxx/include/__hash_table:1581:9 7 0x1d38328c in std::__1::__hash_table, std::__1::allocator >, DB::ProcessListForUser>, std::__1::__unordered_map_hasher, std::__1::allocator >, std::__1::__hash_value_type, std::__1::allocator >, DB::ProcessListForUser>, std::__1::hash, std::__1::allocator > >, std::__1::equal_to, std::__1::allocator > >, true>, std::__1::__unordered_map_equal, std::__1::allocator >, std::__1::__hash_value_type, std::__1::allocator >, DB::ProcessListForUser>, std::__1::equal_to, std::__1::allocator > >, std::__1::hash, std::__1::allocator > >, true>, std::__1::allocator, std::__1::allocator >, DB::ProcessListForUser> > >::~__hash_table() obj-x86_64-linux-gnu/../contrib/libcxx/include/__hash_table:1519:5 8 0x1d38328c in std::__1::unordered_map, std::__1::allocator >, DB::ProcessListForUser, std::__1::hash, std::__1::allocator > >, std::__1::equal_to, std::__1::allocator > >, std::__1::allocator, std::__1::allocator > const, DB::ProcessListForUser> > >::~unordered_map() obj-x86_64-linux-gnu/../contrib/libcxx/include/unordered_map:1044:5 9 0x1d38328c in DB::ProcessList::~ProcessList() obj-x86_64-linux-gnu/../src/Interpreters/ProcessList.h:263:7 10 0x1d38169c in DB::ContextShared::~ContextShared() obj-x86_64-linux-gnu/../src/Interpreters/Context.cpp:417:5 11 0x1d32f3e5 in std::__1::default_delete::operator()(DB::ContextShared*) const obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:1397:5 12 0x1d32f3e5 in std::__1::unique_ptr >::reset(DB::ContextShared*) 
obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:1658:7 13 0x1d32f3e5 in DB::SharedContextHolder::reset() obj-x86_64-linux-gnu/../src/Interpreters/Context.cpp:485:44 14 0xa8863d4 in DB::Server::main(std::__1::vector, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > > const&)::$_5::operator()() const obj-x86_64-linux-gnu/../programs/server/Server.cpp:880:5 15 0xa8863d4 in ext::basic_scope_guard, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > > const&)::$_5>::invoke() obj-x86_64-linux-gnu/../base/common/../ext/scope_guard.h:97:9 16 0xa8863d4 in ext::basic_scope_guard, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > > const&)::$_5>::~basic_scope_guard() obj-x86_64-linux-gnu/../base/common/../ext/scope_guard.h:47:28 17 0xa86d889 in DB::Server::main(std::__1::vector, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > > const&) obj-x86_64-linux-gnu/../programs/server/Server.cpp:1379:1 18 0x25c0c8b5 in Poco::Util::Application::run() obj-x86_64-linux-gnu/../contrib/poco/Util/src/Application.cpp:334:8 19 0xa85070d in DB::Server::run() obj-x86_64-linux-gnu/../programs/server/Server.cpp:340:25 20 0x25c49eb7 in Poco::Util::ServerApplication::run(int, char**) obj-x86_64-linux-gnu/../contrib/poco/Util/src/ServerApplication.cpp:611:9 21 0xa84cd11 in mainEntryClickHouseServer(int, char**) obj-x86_64-linux-gnu/../programs/server/Server.cpp:132:20 22 0xa848c3a in main obj-x86_64-linux-gnu/../programs/main.cpp:368:12 23 0x7ffff7dc8151 in __libc_start_main (/usr/lib/libc.so.6+0x28151) previously allocated by thread T2 (TCPHandler) here: 0 0xa84509d in operator new(unsigned long) (/src/ch/tmp/upstream/clickhouse-asan+0xa84509d) 1 0x1e2a7aa6 in void* std::__1::__libcpp_operator_new(unsigned long) obj-x86_64-linux-gnu/../contrib/libcxx/include/new:235:10 2 0x1e2a7aa6 in std::__1::__libcpp_allocate(unsigned long, unsigned long) obj-x86_64-linux-gnu/../contrib/libcxx/include/new:261:10 3 0x1e2a7aa6 in std::__1::allocator, std::__1::allocator >, DB::ProcessListForUser>, void*> >::allocate(unsigned long) obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:840:38 4 0x1e2a7aa6 in std::__1::allocator_traits, std::__1::allocator >, DB::ProcessListForUser>, void*> > >::allocate(std::__1::allocator, std::__1::allocator >, DB::ProcessListForUser>, void*> >&, unsigned long) obj-x86_64-linux-gnu/../contrib/libcxx/include/__memory/allocator_traits.h:468:21 5 0x1e2a7aa6 in std::__1::unique_ptr, std::__1::allocator >, DB::ProcessListForUser>, void*>, std::__1::__hash_node_destructor, std::__1::allocator >, DB::ProcessListForUser>, void*> > > > std::__1::__hash_table, std::__1::allocator >, DB::ProcessListForUser>, std::__1::__unordered_map_hasher, std::__1::allocator >, std::__1::__hash_value_type, std::__1::allocator >, DB::ProcessListForUser>, std::__1::hash, std::__1::allocator > >, std::__1::equal_to, std::__1::allocator > >, true>, std::__1::__unordered_map_equal, std::__1::allocator >, std::__1::__hash_value_type, std::__1::allocator >, DB::ProcessListForUser>, std::__1::equal_to, std::__1::allocator > >, std::__1::hash, std::__1::allocator > >, true>, std::__1::allocator, std::__1::allocator >, DB::ProcessListForUser> > >::__construct_node_hash, std::__1::allocator > const&>, std::__1::tuple<> >(unsigned long, std::__1::piecewise_construct_t const&, std::__1::tuple, std::__1::allocator > const&>&&, std::__1::tuple<>&&) obj-x86_64-linux-gnu/../contrib/libcxx/include/__hash_table:2472:23 6 0x1e2a7aa6 in std::__1::pair, std::__1::allocator 
>, DB::ProcessListForUser>, void*>*>, bool> std::__1::__hash_table, std::__1::allocator >, DB::ProcessListForUser>, std::__1::__unordered_map_hasher, std::__1::allocator >, std::__1::__hash_value_type, std::__1::allocator >, DB::ProcessListForUser>, std::__1::hash, std::__1::allocator > >, std::__1::equal_to, std::__1::allocator > >, true>, std::__1::__unordered_map_equal, std::__1::allocator >, std::__1::__hash_value_type, std::__1::allocator >, DB::ProcessListForUser>, std::__1::equal_to, std::__1::allocator > >, std::__1::hash, std::__1::allocator > >, true>, std::__1::allocator, std::__1::allocator >, DB::ProcessListForUser> > >::__emplace_unique_key_args, std::__1::allocator >, std::__1::piecewise_construct_t const&, std::__1::tuple, std::__1::allocator > const&>, std::__1::tuple<> >(std::__1::basic_string, std::__1::allocator > const&, std::__1::piecewise_construct_t const&, std::__1::tuple, std::__1::allocator > const&>&&, std::__1::tuple<>&&) obj-x86_64-linux-gnu/../contrib/libcxx/include/__hash_table:2093:29 7 0x1e29c13d in std::__1::unordered_map, std::__1::allocator >, DB::ProcessListForUser, std::__1::hash, std::__1::allocator > >, std::__1::equal_to, std::__1::allocator > >, std::__1::allocator, std::__1::allocator > const, DB::ProcessListForUser> > >::operator[](std::__1::basic_string, std::__1::allocator > const&) obj-x86_64-linux-gnu/../contrib/libcxx/include/unordered_map:1740:21 8 0x1e29c13d in DB::ProcessList::insert(std::__1::basic_string, std::__1::allocator > const&, DB::IAST const*, DB::Context&) obj-x86_64-linux-gnu/../src/Interpreters/ProcessList.cpp:183:50 9 0x1e5a3a58 in DB::executeQueryImpl(char const*, char const*, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool, DB::ReadBuffer*) obj-x86_64-linux-gnu/../src/Interpreters/executeQuery.cpp:486:59 10 0x1e5a153e in DB::executeQuery(std::__1::basic_string, std::__1::allocator > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool) obj-x86_64-linux-gnu/../src/Interpreters/executeQuery.cpp:904:30 11 0x1f909bdc in DB::TCPHandler::runImpl() obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:289:24 12 0x1f92dcac in DB::TCPHandler::run() obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:1492:9 13 0x25bdc2fe in Poco::Net::TCPServerConnection::start() obj-x86_64-linux-gnu/../contrib/poco/Net/src/TCPServerConnection.cpp:43:3 14 0x25bdce1b in Poco::Net::TCPServerDispatcher::run() obj-x86_64-linux-gnu/../contrib/poco/Net/src/TCPServerDispatcher.cpp:113:19 15 0x25e9c784 in Poco::PooledThread::run() obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/ThreadPool.cpp:199:14 16 0x25e96cd6 in Poco::ThreadImpl::runnableEntry(void*) obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/Thread_POSIX.cpp:345:27 17 0x7ffff7f723e8 in start_thread (/usr/lib/libpthread.so.0+0x93e8) Thread T2 (TCPHandler) created by T0 here: 0 0xa7ffe0a in pthread_create (/src/ch/tmp/upstream/clickhouse-asan+0xa7ffe0a) 1 0x25e9606f in Poco::ThreadImpl::startImpl(Poco::SharedPtr >) obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/Thread_POSIX.cpp:202:6 2 0x25e98eea in Poco::Thread::start(Poco::Runnable&) obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/Thread.cpp:128:2 3 0x25e9cd28 in Poco::PooledThread::start() obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/ThreadPool.cpp:85:10 4 0x25e9cd28 in Poco::ThreadPool::ThreadPool(int, int, int, int) obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/ThreadPool.cpp:252:12 5 0xa865aff in DB::Server::main(std::__1::vector, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > 
> const&) obj-x86_64-linux-gnu/../programs/server/Server.cpp:831:22 6 0x25c0c8b5 in Poco::Util::Application::run() obj-x86_64-linux-gnu/../contrib/poco/Util/src/Application.cpp:334:8 7 0xa85070d in DB::Server::run() obj-x86_64-linux-gnu/../programs/server/Server.cpp:340:25 8 0x25c49eb7 in Poco::Util::ServerApplication::run(int, char**) obj-x86_64-linux-gnu/../contrib/poco/Util/src/ServerApplication.cpp:611:9 9 0xa84cd11 in mainEntryClickHouseServer(int, char**) obj-x86_64-linux-gnu/../programs/server/Server.cpp:132:20 10 0xa848c3a in main obj-x86_64-linux-gnu/../programs/main.cpp:368:12 11 0x7ffff7dc8151 in __libc_start_main (/usr/lib/libc.so.6+0x28151) SUMMARY: AddressSanitizer: heap-use-after-free obj-x86_64-linux-gnu/../contrib/libcxx/include/atomic:1050:12 in long std::__1::__cxx_atomic_fetch_add(std::__1::__cxx_atomic_base_impl*, long, std::__1::memory_order) Shadow bytes around the buggy address: 0x0c228003a190: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 0x0c228003a1a0: fd fd fd fd fd fd fa fa fa fa fa fa fa fa fa fa 0x0c228003a1b0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 0x0c228003a1c0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fa fa 0x0c228003a1d0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd =>0x0c228003a1e0: fd fd fd fd fd fd[fd]fd fd fd fd fd fd fd fd fd 0x0c228003a1f0: fd fd fd fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c228003a200: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 0x0c228003a210: fd fd fd fd fd fd fd fd fd fa fa fa fa fa fa fa 0x0c228003a220: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd 0x0c228003a230: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd Shadow byte legend (one shadow byte represents 8 application bytes): Addressable: 00 Partially addressable: 01 02 03 04 05 06 07 Heap left redzone: fa Freed heap region: fd Stack left redzone: f1 Stack mid redzone: f2 Stack right redzone: f3 Stack after return: f5 Stack use after scope: f8 Global redzone: f9 Global init order: f6 Poisoned by user: f7 Container overflow: fc Array cookie: ac Intra object redzone: bb ASan internal: fe Left alloca redzone: ca Right alloca redzone: cb Shadow gap: cc ==13442==ABORTING 2021.02.20 16:39:50.861426 [ 13443 ] {} BaseDaemon: Received signal -3 2021.02.20 16:39:50.861668 [ 14989 ] {} BaseDaemon: ######################################## 2021.02.20 16:39:50.861749 [ 14989 ] {} BaseDaemon: (version 21.3.1.6073 (official build), build id: AC8A516D2F60B8505FA128074527EC2C86198E64) (from thread 13874) (no query) Received signal Unknown signal (-3) 2021.02.20 16:39:50.861810 [ 14989 ] {} BaseDaemon: Sanitizer trap. 2021.02.20 16:39:50.861880 [ 14989 ] {} BaseDaemon: Stack trace: 0xa8e94a7 0xad25b1b 0xa831a16 0xa819444 0xa81aefe 0xa81bb4b 0xa8e55a0 0xa8e510d 0xa90b475 0x1f90ee84 0x1f92dcad 0x25bdc2ff 0x25bdce1c 0x25e9c785 0x25e96cd7 0x7ffff7f723e9 0x7ffff7ea0293 2021.02.20 16:39:50.903643 [ 14989 ] {} BaseDaemon: 0.1. inlined from ./obj-x86_64-linux-gnu/../src/Common/StackTrace.cpp:298: StackTrace::tryCapture() 2021.02.20 16:39:50.903708 [ 14989 ] {} BaseDaemon: 0. ../src/Common/StackTrace.cpp:259: StackTrace::StackTrace() @ 0xa8e94a7 in /src/ch/tmp/upstream/clickhouse-asan 2021.02.20 16:39:51.041733 [ 14989 ] {} BaseDaemon: 1.1. inlined from ./obj-x86_64-linux-gnu/../src/Common/CurrentThread.h:78: DB::CurrentThread::getQueryId() 2021.02.20 16:39:51.041768 [ 14989 ] {} BaseDaemon: 1. 
../base/daemon/BaseDaemon.cpp:381: sanitizerDeathCallback() @ 0xad25b1b in /src/ch/tmp/upstream/clickhouse-asan 2021.02.20 16:39:52.551623 [ 13442 ] {} Application: shutting down 2021.02.20 16:39:52.551696 [ 13442 ] {} Application: Uninitializing subsystem: Logging Subsystem 2021.02.20 16:39:52.551792 [ 13443 ] {} BaseDaemon: Received signal -2 2021.02.20 16:39:52.551831 [ 13443 ] {} BaseDaemon: Stop SignalListener thread --- programs/server/Server.cpp | 3 +++ 1 file changed, 3 insertions(+) diff --git a/programs/server/Server.cpp b/programs/server/Server.cpp index 8a96612721d..cb53687df4d 100644 --- a/programs/server/Server.cpp +++ b/programs/server/Server.cpp @@ -951,6 +951,9 @@ int Server::main(const std::vector & /*args*/) global_context->shutdownKeeperStorageDispatcher(); } + /// Wait server pool to avoid use-after-free of destroyed context in the handlers + server_pool.joinAll(); + /** Explicitly destroy Context. It is more convenient than in destructor of Server, because logger is still available. * At this moment, no one could own shared part of Context. */ From 3056d2c5d1bfb5ad4c8627d45307c01f76d6eaca Mon Sep 17 00:00:00 2001 From: Azat Khuzhin Date: Mon, 1 Mar 2021 08:10:13 +0300 Subject: [PATCH 023/108] Fix data-race during server shutdown the context Found with 01737_clickhouse_server_wait_server_pool: ================== WARNING: ThreadSanitizer: data race (pid=13248) Write of size 1 at 0x7b9000003a38 by main thread: 0 std::__1::__optional_destruct_base::reset() obj-x86_64-linux-gnu/../contrib/libcxx/include/optional:246:24 (clickhouse-tsan+0x11e3043e) 1 DB::ContextShared::shutdown() obj-x86_64-linux-gnu/../src/Interpreters/Context.cpp:441:21 (clickhouse-tsan+0x11e3043e) 2 DB::Context::shutdown() obj-x86_64-linux-gnu/../src/Interpreters/Context.cpp:2249:13 (clickhouse-tsan+0x11e28c17) 3 DB::Server::main(std::__1::vector, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > > const&)::$_5::operator()() const obj-x86_64-linux-gnu/../programs/server/Server.cpp:892:5 (clickhouse-tsan+0x8ab1a32) 4 ext::basic_scope_guard, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > > const&)::$_5>::invoke() obj-x86_64-linux-gnu/../base/common/../ext/scope_guard.h:97:9 (clickhouse-tsan+0x8ab1a32) 5 ext::basic_scope_guard, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > > const&)::$_5>::~basic_scope_guard() obj-x86_64-linux-gnu/../base/common/../ext/scope_guard.h:47:28 (clickhouse-tsan+0x8ab1a32) 6 DB::Server::main(std::__1::vector, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > > const&) obj-x86_64-linux-gnu/../programs/server/Server.cpp:1395:1 (clickhouse-tsan+0x8aacb39) 7 Poco::Util::Application::run() obj-x86_64-linux-gnu/../contrib/poco/Util/src/Application.cpp:334:8 (clickhouse-tsan+0x15b3446b) 8 DB::Server::run() obj-x86_64-linux-gnu/../programs/server/Server.cpp:342:25 (clickhouse-tsan+0x8aa04be) 9 Poco::Util::ServerApplication::run(int, char**) obj-x86_64-linux-gnu/../contrib/poco/Util/src/ServerApplication.cpp:611:9 (clickhouse-tsan+0x15b50883) 10 mainEntryClickHouseServer(int, char**) obj-x86_64-linux-gnu/../programs/server/Server.cpp:134:20 (clickhouse-tsan+0x8a9f08e) 11 main obj-x86_64-linux-gnu/../programs/main.cpp:368:12 (clickhouse-tsan+0x8a9d5f9) Previous read of size 1 at 0x7b9000003a38 by thread T2 (mutexes: write M2504): 0 std::__1::__optional_storage_base::has_value() const obj-x86_64-linux-gnu/../contrib/libcxx/include/optional:295:22 (clickhouse-tsan+0x11e25348) 1 std::__1::optional::operator bool() 
const obj-x86_64-linux-gnu/../contrib/libcxx/include/optional:938:64 (clickhouse-tsan+0x11e25348) 2 DB::Context::getQueryLog() obj-x86_64-linux-gnu/../src/Interpreters/Context.cpp:1875:10 (clickhouse-tsan+0x11e25348) 3 DB::executeQueryImpl(char const*, char const*, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool, DB::ReadBuffer*) obj-x86_64-linux-gnu/../src/Interpreters/executeQuery.cpp:657:50 (clickhouse-tsan+0x12653751) 4 DB::executeQuery(std::__1::basic_string, std::__1::allocator > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool) obj-x86_64-linux-gnu/../src/Interpreters/executeQuery.cpp:904:30 (clickhouse-tsan+0x12651308) 5 DB::TCPHandler::runImpl() obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:289:24 (clickhouse-tsan+0x12f04b45) 6 DB::TCPHandler::run() obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:1500:9 (clickhouse-tsan+0x12f13907) 7 Poco::Net::TCPServerConnection::start() obj-x86_64-linux-gnu/../contrib/poco/Net/src/TCPServerConnection.cpp:43:3 (clickhouse-tsan+0x15b1f722) 8 Poco::Net::TCPServerDispatcher::run() obj-x86_64-linux-gnu/../contrib/poco/Net/src/TCPServerDispatcher.cpp:113:19 (clickhouse-tsan+0x15b1fe4e) 9 Poco::PooledThread::run() obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/ThreadPool.cpp:199:14 (clickhouse-tsan+0x15c86fe1) 10 Poco::(anonymous namespace)::RunnableHolder::run() obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/Thread.cpp:55:11 (clickhouse-tsan+0x15c8557f) 11 Poco::ThreadImpl::runnableEntry(void*) obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/Thread_POSIX.cpp:345:27 (clickhouse-tsan+0x15c83d87) Location is heap block of size 7296 at 0x7b9000002000 allocated by main thread: 0 operator new(unsigned long) (clickhouse-tsan+0x8a9aae7) 1 std::__1::__unique_if::__unique_single std::__1::make_unique() obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:2068:28 (clickhouse-tsan+0x11e15c2c) 2 DB::Context::createShared() obj-x86_64-linux-gnu/../src/Interpreters/Context.cpp:503:32 (clickhouse-tsan+0x11e15c2c) 3 DB::Server::main(std::__1::vector, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > > const&) obj-x86_64-linux-gnu/../programs/server/Server.cpp:426:27 (clickhouse-tsan+0x8aa19ee) 4 Poco::Util::Application::run() obj-x86_64-linux-gnu/../contrib/poco/Util/src/Application.cpp:334:8 (clickhouse-tsan+0x15b3446b) 5 DB::Server::run() obj-x86_64-linux-gnu/../programs/server/Server.cpp:342:25 (clickhouse-tsan+0x8aa04be) 6 Poco::Util::ServerApplication::run(int, char**) obj-x86_64-linux-gnu/../contrib/poco/Util/src/ServerApplication.cpp:611:9 (clickhouse-tsan+0x15b50883) 7 mainEntryClickHouseServer(int, char**) obj-x86_64-linux-gnu/../programs/server/Server.cpp:134:20 (clickhouse-tsan+0x8a9f08e) 8 main obj-x86_64-linux-gnu/../programs/main.cpp:368:12 (clickhouse-tsan+0x8a9d5f9) Mutex M2504 (0x7b9000002008) created at: 0 pthread_mutex_init (clickhouse-tsan+0x8a0d37d) 1 std::__1::__libcpp_recursive_mutex_init(pthread_mutex_t*) obj-x86_64-linux-gnu/../contrib/libcxx/include/__threading_support:370:10 (clickhouse-tsan+0x17cc4d93) 2 std::__1::recursive_mutex::recursive_mutex() obj-x86_64-linux-gnu/../contrib/libcxx/src/mutex.cpp:56:14 (clickhouse-tsan+0x17cc4d93) 3 DB::ContextShared::ContextShared() obj-x86_64-linux-gnu/../src/Interpreters/Context.cpp:394:5 (clickhouse-tsan+0x11e40bc3) 4 std::__1::__unique_if::__unique_single std::__1::make_unique() obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:2068:32 (clickhouse-tsan+0x11e15c37) 5 DB::Context::createShared() 
obj-x86_64-linux-gnu/../src/Interpreters/Context.cpp:503:32 (clickhouse-tsan+0x11e15c37) 6 DB::Server::main(std::__1::vector, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > > const&) obj-x86_64-linux-gnu/../programs/server/Server.cpp:426:27 (clickhouse-tsan+0x8aa19ee) 7 Poco::Util::Application::run() obj-x86_64-linux-gnu/../contrib/poco/Util/src/Application.cpp:334:8 (clickhouse-tsan+0x15b3446b) 8 DB::Server::run() obj-x86_64-linux-gnu/../programs/server/Server.cpp:342:25 (clickhouse-tsan+0x8aa04be) 9 Poco::Util::ServerApplication::run(int, char**) obj-x86_64-linux-gnu/../contrib/poco/Util/src/ServerApplication.cpp:611:9 (clickhouse-tsan+0x15b50883) 10 mainEntryClickHouseServer(int, char**) obj-x86_64-linux-gnu/../programs/server/Server.cpp:134:20 (clickhouse-tsan+0x8a9f08e) 11 main obj-x86_64-linux-gnu/../programs/main.cpp:368:12 (clickhouse-tsan+0x8a9d5f9) Thread T2 'TCPHandler' (tid=13643, running) created by main thread at: 0 pthread_create (clickhouse-tsan+0x8a0bf0b) 1 Poco::ThreadImpl::startImpl(Poco::SharedPtr >) obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/Thread_POSIX.cpp:202:6 (clickhouse-tsan+0x15c83827) 2 Poco::Thread::start(Poco::Runnable&) obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/Thread.cpp:128:2 (clickhouse-tsan+0x15c84f6c) 3 Poco::PooledThread::start() obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/ThreadPool.cpp:85:10 (clickhouse-tsan+0x15c873e2) 4 Poco::ThreadPool::ThreadPool(int, int, int, int) obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/ThreadPool.cpp:252:12 (clickhouse-tsan+0x15c873e2) 5 DB::Server::main(std::__1::vector, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > > const&) obj-x86_64-linux-gnu/../programs/server/Server.cpp:843:22 (clickhouse-tsan+0x8aa8e5f) 6 Poco::Util::Application::run() obj-x86_64-linux-gnu/../contrib/poco/Util/src/Application.cpp:334:8 (clickhouse-tsan+0x15b3446b) 7 DB::Server::run() obj-x86_64-linux-gnu/../programs/server/Server.cpp:342:25 (clickhouse-tsan+0x8aa04be) 8 Poco::Util::ServerApplication::run(int, char**) obj-x86_64-linux-gnu/../contrib/poco/Util/src/ServerApplication.cpp:611:9 (clickhouse-tsan+0x15b50883) 9 mainEntryClickHouseServer(int, char**) obj-x86_64-linux-gnu/../programs/server/Server.cpp:134:20 (clickhouse-tsan+0x8a9f08e) 10 main obj-x86_64-linux-gnu/../programs/main.cpp:368:12 (clickhouse-tsan+0x8a9d5f9) SUMMARY: ThreadSanitizer: data race obj-x86_64-linux-gnu/../contrib/libcxx/include/optional:246:24 in std::__1::__optional_destruct_base::reset() v2: fix deadlock by calling SystemLogs::shutdown w/o Context lock --- src/Interpreters/Context.cpp | 2 ++ 1 file changed, 2 insertions(+) diff --git a/src/Interpreters/Context.cpp b/src/Interpreters/Context.cpp index 187edf8843f..ab327089333 100644 --- a/src/Interpreters/Context.cpp +++ b/src/Interpreters/Context.cpp @@ -442,6 +442,8 @@ struct ContextSharedPart DatabaseCatalog::shutdown(); + auto lock = std::lock_guard(mutex); + /// Preemptive destruction is important, because these objects may have a refcount to ContextShared (cyclic reference). /// TODO: Get rid of this. 
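The data race closed by the one-line lock above is easier to see in isolation. The following is a minimal, hypothetical sketch of the same pattern — SharedState, QueryLog and hasQueryLog are illustrative stand-ins, not ClickHouse's real classes: one thread checks an std::optional member under a mutex (as Context::getQueryLog does in the TSan report) while the shutdown path resets that member, and the fix is to take the same mutex before the reset.

``` cpp
#include <mutex>
#include <optional>
#include <thread>

/// Hypothetical stand-ins: SharedState plays the role of ContextSharedPart,
/// query_log the role of the members guarded by the Context mutex.
struct QueryLog {};

struct SharedState
{
    mutable std::mutex mutex;
    std::optional<QueryLog> query_log{QueryLog{}};

    /// Reader path (analogous to Context::getQueryLog): checks the optional under the lock.
    bool hasQueryLog() const
    {
        std::lock_guard lock(mutex);
        return query_log.has_value();
    }

    /// Shutdown path (analogous to ContextSharedPart::shutdown). Without the lock_guard,
    /// resetting the optional races with hasQueryLog() running in a handler thread --
    /// the read/write pair TSan reported above.
    void shutdown()
    {
        std::lock_guard lock(mutex);   /// the line this patch adds
        query_log.reset();
    }
};

int main()
{
    SharedState state;
    std::thread reader([&] { for (int i = 0; i < 100000; ++i) state.hasQueryLog(); });
    state.shutdown();
    reader.join();
}
```

Note that in the actual patch the guard is taken only after the system logs have already been shut down without it (the "v2" note in the commit message), which avoids the deadlock mentioned there.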
From a61ae26729c34e31e4ad02c96105e81a1a5db6a7 Mon Sep 17 00:00:00 2001 From: Azat Khuzhin Date: Tue, 2 Mar 2021 23:41:29 +0300 Subject: [PATCH 024/108] Fix lock-order-inversion during system.*_log shutting down As TSan reports [1]: WARNING: ThreadSanitizer: lock-order-inversion (potential deadlock) (pid=24429) Cycle in lock order graph: M152695175523663992 (0x000000000000) => M2505 (0x7b9000002008) => M152695175523663992 Mutex M2505 acquired here while holding mutex M152695175523663992 in thread T7: 0 pthread_mutex_lock (clickhouse-tsan+0x8a301b6) 1 std::__1::__libcpp_recursive_mutex_lock(pthread_mutex_t*) obj-x86_64-linux-gnu/../contrib/libcxx/include/__threading_support:385:10 (clickhouse-tsan+0x17cd6e89) 2 std::__1::recursive_mutex::lock() obj-x86_64-linux-gnu/../contrib/libcxx/src/mutex.cpp:71:14 (clickhouse-tsan+0x17cd6e89) 3 std::__1::unique_lock::unique_lock(std::__1::recursive_mutex&) obj-x86_64-linux-gnu/../contrib/libcxx/include/__mutex_base:119:61 (clickhouse-tsan+0x11e32a9f) 4 DB::Context::getLock() const obj-x86_64-linux-gnu/../src/Interpreters/Context.cpp:517:12 (clickhouse-tsan+0x11e32a9f) 5 DB::Context::getSchedulePool() const obj-x86_64-linux-gnu/../src/Interpreters/Context.cpp:1517:17 (clickhouse-tsan+0x11e32a9f) 6 DB::IBackgroundJobExecutor::start() obj-x86_64-linux-gnu/../src/Storages/MergeTree/BackgroundJobsExecutor.cpp:158:42 (clickhouse-tsan+0x12bde50a) 7 DB::StorageMergeTree::startup() obj-x86_64-linux-gnu/../src/Storages/StorageMergeTree.cpp:112:29 (clickhouse-tsan+0x129e9e1e) 8 DB::DatabaseOrdinary::startupTables(ThreadPoolImpl&)::$_2::operator()(std::__1::shared_ptr const&) const obj-x86_64-linux-gnu/../src/Databases/DatabaseOrdinary.cpp:230:16 (clickhouse-tsan+0x11d6fa2a) 9 DB::DatabaseOrdinary::startupTables(ThreadPoolImpl&)::$_3::operator()() const obj-x86_64-linux-gnu/../src/Databases/DatabaseOrdinary.cpp:238:56 (clickhouse-tsan+0x11d6fa2a) 10 decltype(std::__1::forward&)::$_3&>(fp)()) std::__1::__invoke&)::$_3&>(DB::DatabaseOrdinary::startupTables(ThreadPoolImpl&)::$_3&) obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676:1 (clickhouse-tsan+0x11d6fa2a) 11 void std::__1::__invoke_void_return_wrapper::__call&)::$_3&>(DB::DatabaseOrdinary::startupTables(ThreadPoolImpl&)::$_3&) obj-x86_64-linux-gnu/../contrib/libcxx/include/__functional_base:348:9 (clickhouse-tsan+0x11d6fa2a) 12 std::__1::__function::__default_alloc_func&)::$_3, void ()>::operator()() obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:1608:12 (clickhouse-tsan+0x11d6fa2a) 13 void std::__1::__function::__policy_invoker::__call_impl&)::$_3, void ()> >(std::__1::__function::__policy_storage const*) obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2089:16 (clickhouse-tsan+0x11d6fa2a) 14 std::__1::__function::__policy_func::operator()() const obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2221:16 (clickhouse-tsan+0x8b39350) 15 std::__1::function::operator()() const obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2560:12 (clickhouse-tsan+0x8b39350) 16 ThreadPoolImpl::worker(std::__1::__list_iterator) obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:247:17 (clickhouse-tsan+0x8b39350) 17 void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()::operator()() const obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:124:73 (clickhouse-tsan+0x8b3c070) 18 decltype(std::__1::forward(fp)(std::__1::forward::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&>(fp0)...)) 
std::__1::__invoke_constexpr::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&...) obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3682:1 (clickhouse-tsan+0x8b3c070) 19 decltype(auto) std::__1::__apply_tuple_impl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&, std::__1::tuple<>&>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&, std::__1::__tuple_indices&...>) obj-x86_64-linux-gnu/../contrib/libcxx/include/tuple:1415:1 (clickhouse-tsan+0x8b3c070) 20 decltype(auto) std::__1::apply::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&, std::__1::tuple<>&>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&) obj-x86_64-linux-gnu/../contrib/libcxx/include/tuple:1424:1 (clickhouse-tsan+0x8b3c070) 21 ThreadFromGlobalPool::ThreadFromGlobalPool::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&&...)::'lambda'()::operator()() obj-x86_64-linux-gnu/../src/Common/ThreadPool.h:178:13 (clickhouse-tsan+0x8b3c070) 22 decltype(std::__1::forward(fp)(std::__1::forward::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>(fp0)...)) std::__1::__invoke::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&&...)::'lambda'()&>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&&...) obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676:1 (clickhouse-tsan+0x8b3bfd1) 23 void std::__1::__invoke_void_return_wrapper::__call::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&&...)::'lambda'()&>(void&&...) 
obj-x86_64-linux-gnu/../contrib/libcxx/include/__functional_base:348:9 (clickhouse-tsan+0x8b3bfd1) 24 std::__1::__function::__default_alloc_func::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&&...)::'lambda'(), void ()>::operator()() obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:1608:12 (clickhouse-tsan+0x8b3bfd1) 25 void std::__1::__function::__policy_invoker::__call_impl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&&...)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2089:16 (clickhouse-tsan+0x8b3bfd1) 26 std::__1::__function::__policy_func::operator()() const obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2221:16 (clickhouse-tsan+0x8b36c75) 27 std::__1::function::operator()() const obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2560:12 (clickhouse-tsan+0x8b36c75) 28 ThreadPoolImpl::worker(std::__1::__list_iterator) obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:247:17 (clickhouse-tsan+0x8b36c75) 29 void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()::operator()() const obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:124:73 (clickhouse-tsan+0x8b3a918) 30 decltype(std::__1::forward(fp)(std::__1::forward::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>(fp0)...)) std::__1::__invoke::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&&...) 
obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676:1 (clickhouse-tsan+0x8b3a918) 31 void std::__1::__thread_execute >, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>(std::__1::tuple::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>&, std::__1::__tuple_indices<>) obj-x86_64-linux-gnu/../contrib/libcxx/include/thread:280:5 (clickhouse-tsan+0x8b3a918) 32 void* std::__1::__thread_proxy >, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()> >(void*) obj-x86_64-linux-gnu/../contrib/libcxx/include/thread:291:5 (clickhouse-tsan+0x8b3a918) Hint: use TSAN_OPTIONS=second_deadlock_stack=1 to get more informative warning message Mutex M152695175523663992 acquired here while holding mutex M2505 in main thread: 0 pthread_mutex_lock (clickhouse-tsan+0x8a301b6) 1 std::__1::__libcpp_mutex_lock(pthread_mutex_t*) obj-x86_64-linux-gnu/../contrib/libcxx/include/__threading_support:405:10 (clickhouse-tsan+0x17cd6cf9) 2 std::__1::mutex::lock() obj-x86_64-linux-gnu/../contrib/libcxx/src/mutex.cpp:33:14 (clickhouse-tsan+0x17cd6cf9) 3 std::__1::lock_guard::lock_guard(std::__1::mutex&) obj-x86_64-linux-gnu/../contrib/libcxx/include/__mutex_base:91:27 (clickhouse-tsan+0x12bdee4b) 4 DB::IBackgroundJobExecutor::finish() obj-x86_64-linux-gnu/../src/Storages/MergeTree/BackgroundJobsExecutor.cpp:167:21 (clickhouse-tsan+0x12bdee4b) 5 DB::IBackgroundJobExecutor::~IBackgroundJobExecutor() obj-x86_64-linux-gnu/../src/Storages/MergeTree/BackgroundJobsExecutor.cpp:185:5 (clickhouse-tsan+0x12bdee4b) 6 DB::StorageMergeTree::~StorageMergeTree() obj-x86_64-linux-gnu/../src/Storages/StorageMergeTree.cpp:174:1 (clickhouse-tsan+0x129ea118) 7 std::__1::default_delete::operator()(DB::StorageMergeTree*) const obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:1397:5 (clickhouse-tsan+0x12e4433b) 8 std::__1::__shared_ptr_pointer::__shared_ptr_default_delete, std::__1::allocator >::__on_zero_shared() obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:2565:5 (clickhouse-tsan+0x12e4433b) 9 std::__1::__shared_count::__release_shared() obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:2475:9 (clickhouse-tsan+0x125b355a) 10 std::__1::__shared_weak_count::__release_shared() obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:2517:27 (clickhouse-tsan+0x125b355a) 11 std::__1::shared_ptr::~shared_ptr() obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:3212:19 (clickhouse-tsan+0x125b355a) 12 DB::SystemLog::~SystemLog() obj-x86_64-linux-gnu/../src/Interpreters/SystemLog.h:118:7 (clickhouse-tsan+0x125b355a) 13 std::__1::allocator::destroy(DB::AsynchronousMetricLog*) obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:891:15 (clickhouse-tsan+0x125aff68) 14 void std::__1::allocator_traits >::__destroy(std::__1::integral_constant, std::__1::allocator&, DB::AsynchronousMetricLog*) obj-x86_64-linux-gnu/../contrib/libcxx/include/__memory/allocator_traits.h:539:21 (clickhouse-tsan+0x125aff68) 15 void std::__1::allocator_traits >::destroy(std::__1::allocator&, DB::AsynchronousMetricLog*) obj-x86_64-linux-gnu/../contrib/libcxx/include/__memory/allocator_traits.h:487:14 (clickhouse-tsan+0x125aff68) 16 std::__1::__shared_ptr_emplace >::__on_zero_shared() obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:2611:9 (clickhouse-tsan+0x125aff68) 17 std::__1::__shared_count::__release_shared() obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:2475:9 (clickhouse-tsan+0x1258e74f) 18 
std::__1::__shared_weak_count::__release_shared() obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:2517:27 (clickhouse-tsan+0x1258e74f) 19 std::__1::shared_ptr::~shared_ptr() obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:3212:19 (clickhouse-tsan+0x1258e74f) 20 DB::SystemLogs::~SystemLogs() obj-x86_64-linux-gnu/../src/Interpreters/SystemLog.cpp:155:1 (clickhouse-tsan+0x1258e74f) 21 std::__1::__optional_destruct_base::reset() obj-x86_64-linux-gnu/../contrib/libcxx/include/optional:245:21 (clickhouse-tsan+0x11e41085) 22 DB::ContextShared::shutdown() obj-x86_64-linux-gnu/../src/Interpreters/Context.cpp:443:21 (clickhouse-tsan+0x11e41085) 23 DB::Context::shutdown() obj-x86_64-linux-gnu/../src/Interpreters/Context.cpp:2251:13 (clickhouse-tsan+0x11e39867) 24 DB::Server::main(std::__1::vector, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > > const&)::$_5::operator()() const obj-x86_64-linux-gnu/../programs/server/Server.cpp:892:5 (clickhouse-tsan+0x8ab8732) 25 ext::basic_scope_guard, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > > const&)::$_5>::invoke() obj-x86_64-linux-gnu/../base/common/../ext/scope_guard.h:97:9 (clickhouse-tsan+0x8ab8732) 26 ext::basic_scope_guard, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > > const&)::$_5>::~basic_scope_guard() obj-x86_64-linux-gnu/../base/common/../ext/scope_guard.h:47:28 (clickhouse-tsan+0x8ab8732) 27 DB::Server::main(std::__1::vector, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > > const&) obj-x86_64-linux-gnu/../programs/server/Server.cpp:1395:1 (clickhouse-tsan+0x8ab3839) 28 Poco::Util::Application::run() obj-x86_64-linux-gnu/../contrib/poco/Util/src/Application.cpp:334:8 (clickhouse-tsan+0x15b464ab) 29 DB::Server::run() obj-x86_64-linux-gnu/../programs/server/Server.cpp:342:25 (clickhouse-tsan+0x8aa71be) 30 Poco::Util::ServerApplication::run(int, char**) obj-x86_64-linux-gnu/../contrib/poco/Util/src/ServerApplication.cpp:611:9 (clickhouse-tsan+0x15b628c3) 31 mainEntryClickHouseServer(int, char**) obj-x86_64-linux-gnu/../programs/server/Server.cpp:134:20 (clickhouse-tsan+0x8aa5d8e) 32 main obj-x86_64-linux-gnu/../programs/main.cpp:368:12 (clickhouse-tsan+0x8aa42f9) [1]: https://clickhouse-test-reports.s3.yandex.net/21318/38be9ff43ac4c46ce6e803fc125d910bde1d4c71/functional_stateful_tests_(thread).html#fail1 --- src/Interpreters/SystemLog.h | 2 ++ 1 file changed, 2 insertions(+) diff --git a/src/Interpreters/SystemLog.h b/src/Interpreters/SystemLog.h index 2f7bdd4a22f..aa01ca3517b 100644 --- a/src/Interpreters/SystemLog.h +++ b/src/Interpreters/SystemLog.h @@ -152,6 +152,8 @@ public: void shutdown() override { stopFlushThread(); + if (table) + table->shutdown(); } String getName() override From d59bdfd45d24a9400b02571712318a2ee3ce47ba Mon Sep 17 00:00:00 2001 From: Azat Khuzhin Date: Wed, 3 Mar 2021 08:08:10 +0300 Subject: [PATCH 025/108] Fix one more lock-order-inversion TSan report [1]: WARNING: ThreadSanitizer: lock-order-inversion (potential deadlock) (pid=11314) Cycle in lock order graph: M183938897938677368 (0x000000000000) => M2505 (0x7b9000002008) => M183938897938677368 Mutex M2505 acquired here while holding mutex M183938897938677368 in thread T6: 0 pthread_mutex_lock (clickhouse-tsan+0x8a327b6) 1 std::__1::__libcpp_recursive_mutex_lock(pthread_mutex_t*) obj-x86_64-linux-gnu/../contrib/libcxx/include/__threading_support:385:10 (clickhouse-tsan+0x17cdb689) 2 std::__1::recursive_mutex::lock() 
obj-x86_64-linux-gnu/../contrib/libcxx/src/mutex.cpp:71:14 (clickhouse-tsan+0x17cdb689) 3 std::__1::unique_lock::unique_lock(std::__1::recursive_mutex&) obj-x86_64-linux-gnu/../contrib/libcxx/include/__mutex_base:119:61 (clickhouse-tsan+0x11e3506f) 4 DB::Context::getLock() const obj-x86_64-linux-gnu/../src/Interpreters/Context.cpp:517:12 (clickhouse-tsan+0x11e3506f) 5 DB::Context::getSchedulePool() const obj-x86_64-linux-gnu/../src/Interpreters/Context.cpp:1517:17 (clickhouse-tsan+0x11e3506f) 6 DB::IBackgroundJobExecutor::start() obj-x86_64-linux-gnu/../src/Storages/MergeTree/BackgroundJobsExecutor.cpp:158:42 (clickhouse-tsan+0x12be1cda) 7 DB::StorageMergeTree::startup() obj-x86_64-linux-gnu/../src/Storages/StorageMergeTree.cpp:112:29 (clickhouse-tsan+0x129ed46e) 8 DB::DatabaseOrdinary::startupTables(ThreadPoolImpl&)::$_2::operator()(std::__1::shared_ptr const&) const obj-x86_64-linux-gnu/../src/Databases/DatabaseOrdinary.cpp:230:16 (clickhouse-tsan+0x11d71fba) 9 DB::DatabaseOrdinary::startupTables(ThreadPoolImpl&)::$_3::operator()() const obj-x86_64-linux-gnu/../src/Databases/DatabaseOrdinary.cpp:238:56 (clickhouse-tsan+0x11d71fba) 10 decltype(std::__1::forward&)::$_3&>(fp)()) std::__1::__invoke&)::$_3&>(DB::DatabaseOrdinary::startupTables(ThreadPoolImpl&)::$_3&) obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676:1 (clickhouse-tsan+0x11d71fba) 11 void std::__1::__invoke_void_return_wrapper::__call&)::$_3&>(DB::DatabaseOrdinary::startupTables(ThreadPoolImpl&)::$_3&) obj-x86_64-linux-gnu/../contrib/libcxx/include/__functional_base:348:9 (clickhouse-tsan+0x11d71fba) 12 std::__1::__function::__default_alloc_func&)::$_3, void ()>::operator()() obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:1608:12 (clickhouse-tsan+0x11d71fba) 13 void std::__1::__function::__policy_invoker::__call_impl&)::$_3, void ()> >(std::__1::__function::__policy_storage const*) obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2089:16 (clickhouse-tsan+0x11d71fba) 14 std::__1::__function::__policy_func::operator()() const obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2221:16 (clickhouse-tsan+0x8b3b8e0) 15 std::__1::function::operator()() const obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2560:12 (clickhouse-tsan+0x8b3b8e0) 16 ThreadPoolImpl::worker(std::__1::__list_iterator) obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:247:17 (clickhouse-tsan+0x8b3b8e0) 17 void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()::operator()() const obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:124:73 (clickhouse-tsan+0x8b3e600) 18 decltype(std::__1::forward(fp)(std::__1::forward::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&>(fp0)...)) std::__1::__invoke_constexpr::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&...) 
obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3682:1 (clickhouse-tsan+0x8b3e600) 19 decltype(auto) std::__1::__apply_tuple_impl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&, std::__1::tuple<>&>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&, std::__1::__tuple_indices&...>) obj-x86_64-linux-gnu/../contrib/libcxx/include/tuple:1415:1 (clickhouse-tsan+0x8b3e600) 20 decltype(auto) std::__1::apply::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&, std::__1::tuple<>&>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&) obj-x86_64-linux-gnu/../contrib/libcxx/include/tuple:1424:1 (clickhouse-tsan+0x8b3e600) 21 ThreadFromGlobalPool::ThreadFromGlobalPool::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&&...)::'lambda'()::operator()() obj-x86_64-linux-gnu/../src/Common/ThreadPool.h:178:13 (clickhouse-tsan+0x8b3e600) 22 decltype(std::__1::forward(fp)(std::__1::forward::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>(fp0)...)) std::__1::__invoke::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&&...)::'lambda'()&>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&&...) obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676:1 (clickhouse-tsan+0x8b3e561) 23 void std::__1::__invoke_void_return_wrapper::__call::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&&...)::'lambda'()&>(void&&...) 
obj-x86_64-linux-gnu/../contrib/libcxx/include/__functional_base:348:9 (clickhouse-tsan+0x8b3e561) 24 std::__1::__function::__default_alloc_func::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&&...)::'lambda'(), void ()>::operator()() obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:1608:12 (clickhouse-tsan+0x8b3e561) 25 void std::__1::__function::__policy_invoker::__call_impl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&&...)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2089:16 (clickhouse-tsan+0x8b3e561) 26 std::__1::__function::__policy_func::operator()() const obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2221:16 (clickhouse-tsan+0x8b39205) 27 std::__1::function::operator()() const obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2560:12 (clickhouse-tsan+0x8b39205) 28 ThreadPoolImpl::worker(std::__1::__list_iterator) obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:247:17 (clickhouse-tsan+0x8b39205) 29 void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()::operator()() const obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:124:73 (clickhouse-tsan+0x8b3cea8) 30 decltype(std::__1::forward(fp)(std::__1::forward::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>(fp0)...)) std::__1::__invoke::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&&...) 
obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676:1 (clickhouse-tsan+0x8b3cea8) 31 void std::__1::__thread_execute >, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>(std::__1::tuple::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>&, std::__1::__tuple_indices<>) obj-x86_64-linux-gnu/../contrib/libcxx/include/thread:280:5 (clickhouse-tsan+0x8b3cea8) 32 void* std::__1::__thread_proxy >, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()> >(void*) obj-x86_64-linux-gnu/../contrib/libcxx/include/thread:291:5 (clickhouse-tsan+0x8b3cea8) Mutex M183938897938677368 previously acquired by the same thread here: 0 pthread_mutex_lock (clickhouse-tsan+0x8a327b6) 1 std::__1::__libcpp_mutex_lock(pthread_mutex_t*) obj-x86_64-linux-gnu/../contrib/libcxx/include/__threading_support:405:10 (clickhouse-tsan+0x17cdb4f9) 2 std::__1::mutex::lock() obj-x86_64-linux-gnu/../contrib/libcxx/src/mutex.cpp:33:14 (clickhouse-tsan+0x17cdb4f9) 3 std::__1::lock_guard::lock_guard(std::__1::mutex&) obj-x86_64-linux-gnu/../contrib/libcxx/include/__mutex_base:91:27 (clickhouse-tsan+0x12be1ca9) 4 DB::IBackgroundJobExecutor::start() obj-x86_64-linux-gnu/../src/Storages/MergeTree/BackgroundJobsExecutor.cpp:155:21 (clickhouse-tsan+0x12be1ca9) 5 DB::StorageMergeTree::startup() obj-x86_64-linux-gnu/../src/Storages/StorageMergeTree.cpp:112:29 (clickhouse-tsan+0x129ed46e) 6 DB::DatabaseOrdinary::startupTables(ThreadPoolImpl&)::$_2::operator()(std::__1::shared_ptr const&) const obj-x86_64-linux-gnu/../src/Databases/DatabaseOrdinary.cpp:230:16 (clickhouse-tsan+0x11d71fba) 7 DB::DatabaseOrdinary::startupTables(ThreadPoolImpl&)::$_3::operator()() const obj-x86_64-linux-gnu/../src/Databases/DatabaseOrdinary.cpp:238:56 (clickhouse-tsan+0x11d71fba) 8 decltype(std::__1::forward&)::$_3&>(fp)()) std::__1::__invoke&)::$_3&>(DB::DatabaseOrdinary::startupTables(ThreadPoolImpl&)::$_3&) obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676:1 (clickhouse-tsan+0x11d71fba) 9 void std::__1::__invoke_void_return_wrapper::__call&)::$_3&>(DB::DatabaseOrdinary::startupTables(ThreadPoolImpl&)::$_3&) obj-x86_64-linux-gnu/../contrib/libcxx/include/__functional_base:348:9 (clickhouse-tsan+0x11d71fba) 10 std::__1::__function::__default_alloc_func&)::$_3, void ()>::operator()() obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:1608:12 (clickhouse-tsan+0x11d71fba) 11 void std::__1::__function::__policy_invoker::__call_impl&)::$_3, void ()> >(std::__1::__function::__policy_storage const*) obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2089:16 (clickhouse-tsan+0x11d71fba) 12 std::__1::__function::__policy_func::operator()() const obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2221:16 (clickhouse-tsan+0x8b3b8e0) 13 std::__1::function::operator()() const obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2560:12 (clickhouse-tsan+0x8b3b8e0) 14 ThreadPoolImpl::worker(std::__1::__list_iterator) obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:247:17 (clickhouse-tsan+0x8b3b8e0) 15 void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()::operator()() const obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:124:73 (clickhouse-tsan+0x8b3e600) 16 decltype(std::__1::forward(fp)(std::__1::forward::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&>(fp0)...)) std::__1::__invoke_constexpr::scheduleImpl(std::__1::function, int, 
std::__1::optional)::'lambda1'()&>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&...) obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3682:1 (clickhouse-tsan+0x8b3e600) 17 decltype(auto) std::__1::__apply_tuple_impl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&, std::__1::tuple<>&>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&, std::__1::__tuple_indices&...>) obj-x86_64-linux-gnu/../contrib/libcxx/include/tuple:1415:1 (clickhouse-tsan+0x8b3e600) 18 decltype(auto) std::__1::apply::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&, std::__1::tuple<>&>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&) obj-x86_64-linux-gnu/../contrib/libcxx/include/tuple:1424:1 (clickhouse-tsan+0x8b3e600) 19 ThreadFromGlobalPool::ThreadFromGlobalPool::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&&...)::'lambda'()::operator()() obj-x86_64-linux-gnu/../src/Common/ThreadPool.h:178:13 (clickhouse-tsan+0x8b3e600) 20 decltype(std::__1::forward(fp)(std::__1::forward::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>(fp0)...)) std::__1::__invoke::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&&...)::'lambda'()&>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&&...) obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676:1 (clickhouse-tsan+0x8b3e561) 21 void std::__1::__invoke_void_return_wrapper::__call::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&&...)::'lambda'()&>(void&&...) 
obj-x86_64-linux-gnu/../contrib/libcxx/include/__functional_base:348:9 (clickhouse-tsan+0x8b3e561) 22 std::__1::__function::__default_alloc_func::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&&...)::'lambda'(), void ()>::operator()() obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:1608:12 (clickhouse-tsan+0x8b3e561) 23 void std::__1::__function::__policy_invoker::__call_impl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&&...)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2089:16 (clickhouse-tsan+0x8b3e561) 24 std::__1::__function::__policy_func::operator()() const obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2221:16 (clickhouse-tsan+0x8b39205) 25 std::__1::function::operator()() const obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2560:12 (clickhouse-tsan+0x8b39205) 26 ThreadPoolImpl::worker(std::__1::__list_iterator) obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:247:17 (clickhouse-tsan+0x8b39205) 27 void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()::operator()() const obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:124:73 (clickhouse-tsan+0x8b3cea8) 28 decltype(std::__1::forward(fp)(std::__1::forward::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>(fp0)...)) std::__1::__invoke::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()&&...) 
obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676:1 (clickhouse-tsan+0x8b3cea8) 29 void std::__1::__thread_execute >, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>(std::__1::tuple::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()>&, std::__1::__tuple_indices<>) obj-x86_64-linux-gnu/../contrib/libcxx/include/thread:280:5 (clickhouse-tsan+0x8b3cea8) 30 void* std::__1::__thread_proxy >, void ThreadPoolImpl::scheduleImpl(std::__1::function, int, std::__1::optional)::'lambda1'()> >(void*) obj-x86_64-linux-gnu/../contrib/libcxx/include/thread:291:5 (clickhouse-tsan+0x8b3cea8) Mutex M183938897938677368 acquired here while holding mutex M2505 in main thread: 0 pthread_mutex_lock (clickhouse-tsan+0x8a327b6) 1 std::__1::__libcpp_mutex_lock(pthread_mutex_t*) obj-x86_64-linux-gnu/../contrib/libcxx/include/__threading_support:405:10 (clickhouse-tsan+0x17cdb4f9) 2 std::__1::mutex::lock() obj-x86_64-linux-gnu/../contrib/libcxx/src/mutex.cpp:33:14 (clickhouse-tsan+0x17cdb4f9) 3 std::__1::lock_guard::lock_guard(std::__1::mutex&) obj-x86_64-linux-gnu/../contrib/libcxx/include/__mutex_base:91:27 (clickhouse-tsan+0x12be261b) 4 DB::IBackgroundJobExecutor::finish() obj-x86_64-linux-gnu/../src/Storages/MergeTree/BackgroundJobsExecutor.cpp:167:21 (clickhouse-tsan+0x12be261b) 5 DB::IBackgroundJobExecutor::~IBackgroundJobExecutor() obj-x86_64-linux-gnu/../src/Storages/MergeTree/BackgroundJobsExecutor.cpp:185:5 (clickhouse-tsan+0x12be261b) 6 DB::StorageMergeTree::~StorageMergeTree() obj-x86_64-linux-gnu/../src/Storages/StorageMergeTree.cpp:174:1 (clickhouse-tsan+0x129ed768) 7 std::__1::default_delete::operator()(DB::StorageMergeTree*) const obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:1397:5 (clickhouse-tsan+0x12e48b0b) 8 std::__1::__shared_ptr_pointer::__shared_ptr_default_delete, std::__1::allocator >::__on_zero_shared() obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:2565:5 (clickhouse-tsan+0x12e48b0b) 9 std::__1::__shared_count::__release_shared() obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:2475:9 (clickhouse-tsan+0x125b53ea) 10 std::__1::__shared_weak_count::__release_shared() obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:2517:27 (clickhouse-tsan+0x125b53ea) 11 std::__1::shared_ptr::~shared_ptr() obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:3212:19 (clickhouse-tsan+0x125b53ea) 12 DB::SystemLog::~SystemLog() obj-x86_64-linux-gnu/../src/Interpreters/SystemLog.h:118:7 (clickhouse-tsan+0x125b53ea) 13 std::__1::allocator::destroy(DB::AsynchronousMetricLog*) obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:891:15 (clickhouse-tsan+0x125b1dd8) 14 void std::__1::allocator_traits >::__destroy(std::__1::integral_constant, std::__1::allocator&, DB::AsynchronousMetricLog*) obj-x86_64-linux-gnu/../contrib/libcxx/include/__memory/allocator_traits.h:539:21 (clickhouse-tsan+0x125b1dd8) 15 void std::__1::allocator_traits >::destroy(std::__1::allocator&, DB::AsynchronousMetricLog*) obj-x86_64-linux-gnu/../contrib/libcxx/include/__memory/allocator_traits.h:487:14 (clickhouse-tsan+0x125b1dd8) 16 std::__1::__shared_ptr_emplace >::__on_zero_shared() obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:2611:9 (clickhouse-tsan+0x125b1dd8) 17 std::__1::__shared_count::__release_shared() obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:2475:9 (clickhouse-tsan+0x125904ff) 18 std::__1::__shared_weak_count::__release_shared() obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:2517:27 
(clickhouse-tsan+0x125904ff) 19 std::__1::shared_ptr::~shared_ptr() obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:3212:19 (clickhouse-tsan+0x125904ff) 20 DB::SystemLogs::~SystemLogs() obj-x86_64-linux-gnu/../src/Interpreters/SystemLog.cpp:155:1 (clickhouse-tsan+0x125904ff) 21 std::__1::__optional_destruct_base::reset() obj-x86_64-linux-gnu/../contrib/libcxx/include/optional:245:21 (clickhouse-tsan+0x11e43655) 22 DB::ContextShared::shutdown() obj-x86_64-linux-gnu/../src/Interpreters/Context.cpp:443:21 (clickhouse-tsan+0x11e43655) 23 DB::Context::shutdown() obj-x86_64-linux-gnu/../src/Interpreters/Context.cpp:2251:13 (clickhouse-tsan+0x11e3be37) 24 DB::Server::main(std::__1::vector, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > > const&)::$_5::operator()() const obj-x86_64-linux-gnu/../programs/server/Server.cpp:892:5 (clickhouse-tsan+0x8abacc2) 25 ext::basic_scope_guard, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > > const&)::$_5>::invoke() obj-x86_64-linux-gnu/../base/common/../ext/scope_guard.h:97:9 (clickhouse-tsan+0x8abacc2) 26 ext::basic_scope_guard, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > > const&)::$_5>::~basic_scope_guard() obj-x86_64-linux-gnu/../base/common/../ext/scope_guard.h:47:28 (clickhouse-tsan+0x8abacc2) 27 DB::Server::main(std::__1::vector, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > > const&) obj-x86_64-linux-gnu/../programs/server/Server.cpp:1395:1 (clickhouse-tsan+0x8ab5cba) 28 Poco::Util::Application::run() obj-x86_64-linux-gnu/../contrib/poco/Util/src/Application.cpp:334:8 (clickhouse-tsan+0x15b4ac7b) 29 DB::Server::run() obj-x86_64-linux-gnu/../programs/server/Server.cpp:342:25 (clickhouse-tsan+0x8aa97be) 30 Poco::Util::ServerApplication::run(int, char**) obj-x86_64-linux-gnu/../contrib/poco/Util/src/ServerApplication.cpp:611:9 (clickhouse-tsan+0x15b67093) 31 mainEntryClickHouseServer(int, char**) obj-x86_64-linux-gnu/../programs/server/Server.cpp:134:20 (clickhouse-tsan+0x8aa838e) 32 main obj-x86_64-linux-gnu/../programs/main.cpp:368:12 (clickhouse-tsan+0x8aa68f9) Mutex M2505 previously acquired by the same thread here: 0 pthread_mutex_lock (clickhouse-tsan+0x8a327b6) 1 std::__1::__libcpp_recursive_mutex_lock(pthread_mutex_t*) obj-x86_64-linux-gnu/../contrib/libcxx/include/__threading_support:385:10 (clickhouse-tsan+0x17cdb689) 2 std::__1::recursive_mutex::lock() obj-x86_64-linux-gnu/../contrib/libcxx/src/mutex.cpp:71:14 (clickhouse-tsan+0x17cdb689) 3 std::__1::lock_guard::lock_guard(std::__1::recursive_mutex&) obj-x86_64-linux-gnu/../contrib/libcxx/include/__mutex_base:91:27 (clickhouse-tsan+0x11e4363f) 4 DB::ContextShared::shutdown() obj-x86_64-linux-gnu/../src/Interpreters/Context.cpp:438:21 (clickhouse-tsan+0x11e4363f) 5 DB::Context::shutdown() obj-x86_64-linux-gnu/../src/Interpreters/Context.cpp:2251:13 (clickhouse-tsan+0x11e3be37) 6 DB::Server::main(std::__1::vector, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > > const&)::$_5::operator()() const obj-x86_64-linux-gnu/../programs/server/Server.cpp:892:5 (clickhouse-tsan+0x8abacc2) 7 ext::basic_scope_guard, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > > const&)::$_5>::invoke() obj-x86_64-linux-gnu/../base/common/../ext/scope_guard.h:97:9 (clickhouse-tsan+0x8abacc2) 8 ext::basic_scope_guard, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > > const&)::$_5>::~basic_scope_guard() obj-x86_64-linux-gnu/../base/common/../ext/scope_guard.h:47:28 
(clickhouse-tsan+0x8abacc2) 9 DB::Server::main(std::__1::vector, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > > const&) obj-x86_64-linux-gnu/../programs/server/Server.cpp:1395:1 (clickhouse-tsan+0x8ab5cba) 10 Poco::Util::Application::run() obj-x86_64-linux-gnu/../contrib/poco/Util/src/Application.cpp:334:8 (clickhouse-tsan+0x15b4ac7b) 11 DB::Server::run() obj-x86_64-linux-gnu/../programs/server/Server.cpp:342:25 (clickhouse-tsan+0x8aa97be) 12 Poco::Util::ServerApplication::run(int, char**) obj-x86_64-linux-gnu/../contrib/poco/Util/src/ServerApplication.cpp:611:9 (clickhouse-tsan+0x15b67093) 13 mainEntryClickHouseServer(int, char**) obj-x86_64-linux-gnu/../programs/server/Server.cpp:134:20 (clickhouse-tsan+0x8aa838e) 14 main obj-x86_64-linux-gnu/../programs/main.cpp:368:12 (clickhouse-tsan+0x8aa68f9) [1]: https://clickhouse-test-reports.s3.yandex.net/21318/f3b1ad0f5d1024275674e1beac24251ae97c8453/functional_stateful_tests_(thread).html#fail1 v2: Convert ContextSharedPart::system_logs to std::unique_ptr (to avoid copy ctor) v3: Fix readability-identifier-naming,-warnings-as-errors for system_logs_ v4: fix conflicts --- src/Interpreters/Context.cpp | 44 ++++++++++++++++++++---------------- 1 file changed, 25 insertions(+), 19 deletions(-) diff --git a/src/Interpreters/Context.cpp b/src/Interpreters/Context.cpp index ab327089333..6eaefa0978e 100644 --- a/src/Interpreters/Context.cpp +++ b/src/Interpreters/Context.cpp @@ -373,7 +373,7 @@ struct ContextSharedPart std::atomic_size_t max_partition_size_to_drop = 50000000000lu; /// Protects MergeTree partitions from accidental DROP (50GB by default) String format_schema_path; /// Path to a directory that contains schema files used by input formats. ActionLocksManagerPtr action_locks_manager; /// Set of storages' action lockers - std::optional system_logs; /// Used to log queries and operations on parts + std::unique_ptr system_logs; /// Used to log queries and operations on parts std::optional storage_s3_settings; /// Settings of S3 storage RemoteHostFilter remote_host_filter; /// Allowed URL from config.xml @@ -442,26 +442,32 @@ struct ContextSharedPart DatabaseCatalog::shutdown(); - auto lock = std::lock_guard(mutex); + std::unique_ptr delete_system_logs; + { + auto lock = std::lock_guard(mutex); - /// Preemptive destruction is important, because these objects may have a refcount to ContextShared (cyclic reference). - /// TODO: Get rid of this. + /// Preemptive destruction is important, because these objects may have a refcount to ContextShared (cyclic reference). + /// TODO: Get rid of this. 
- system_logs.reset(); - embedded_dictionaries.reset(); - external_dictionaries_loader.reset(); - models_repository_guard.reset(); - external_models_loader.reset(); - buffer_flush_schedule_pool.reset(); - schedule_pool.reset(); - distributed_schedule_pool.reset(); - message_broker_schedule_pool.reset(); - ddl_worker.reset(); + delete_system_logs = std::move(system_logs); + embedded_dictionaries.reset(); + external_dictionaries_loader.reset(); + models_repository_guard.reset(); + external_models_loader.reset(); + buffer_flush_schedule_pool.reset(); + schedule_pool.reset(); + distributed_schedule_pool.reset(); + message_broker_schedule_pool.reset(); + ddl_worker.reset(); - /// Stop trace collector if any - trace_collector.reset(); - /// Stop zookeeper connection - zookeeper.reset(); + /// Stop trace collector if any + trace_collector.reset(); + /// Stop zookeeper connection + zookeeper.reset(); + } + + /// Can be removed w/o context lock + delete_system_logs.reset(); } bool hasTraceCollector() const @@ -1912,7 +1918,7 @@ void Context::setCluster(const String & cluster_name, const std::shared_ptrsystem_logs.emplace(getGlobalContext(), getConfigRef()); + shared->system_logs = std::make_unique(getGlobalContext(), getConfigRef()); } void Context::initializeTraceCollector() From c849686b16f92840cab2b1099ebf87961ad22a90 Mon Sep 17 00:00:00 2001 From: Azat Khuzhin Date: Tue, 13 Apr 2021 08:20:59 +0300 Subject: [PATCH 026/108] Fix current connections count with shutdown_wait_unfinished=0 --- programs/server/Server.cpp | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/programs/server/Server.cpp b/programs/server/Server.cpp index cb53687df4d..e3b4316079c 100644 --- a/programs/server/Server.cpp +++ b/programs/server/Server.cpp @@ -173,18 +173,24 @@ int waitServersToFinish(std::vector & servers, size_t const int sleep_one_ms = 100; int sleep_current_ms = 0; int current_connections = 0; - while (sleep_current_ms < sleep_max_ms) + for (;;) { current_connections = 0; + for (auto & server : servers) { server.stop(); current_connections += server.currentConnections(); } + if (!current_connections) break; + sleep_current_ms += sleep_one_ms; - std::this_thread::sleep_for(std::chrono::milliseconds(sleep_one_ms)); + if (sleep_current_ms < sleep_max_ms) + std::this_thread::sleep_for(std::chrono::milliseconds(sleep_one_ms)); + else + break; } return current_connections; } From 3afa94612a17d7a9501a37429c51a377d392ced5 Mon Sep 17 00:00:00 2001 From: Azat Khuzhin Date: Sun, 28 Feb 2021 22:31:27 +0300 Subject: [PATCH 027/108] Add a test to ensure that server will wait the server thread pools v2: add into skip_list v3: print server log on error v4: increase sleep time to trigger some issues under TSAN v5: avoid ports overlaps v6: avoid endless loops to print server log on failure --- ...se_server_wait_server_pool_long.config.xml | 35 ++++++++ ...use_server_wait_server_pool_long.reference | 0 ...clickhouse_server_wait_server_pool_long.sh | 83 +++++++++++++++++++ tests/queries/shell_config.sh | 7 ++ tests/queries/skip_list.json | 1 + 5 files changed, 126 insertions(+) create mode 100644 tests/queries/0_stateless/01737_clickhouse_server_wait_server_pool_long.config.xml create mode 100644 tests/queries/0_stateless/01737_clickhouse_server_wait_server_pool_long.reference create mode 100755 tests/queries/0_stateless/01737_clickhouse_server_wait_server_pool_long.sh diff --git a/tests/queries/0_stateless/01737_clickhouse_server_wait_server_pool_long.config.xml 
b/tests/queries/0_stateless/01737_clickhouse_server_wait_server_pool_long.config.xml new file mode 100644 index 00000000000..2d0a480a375 --- /dev/null +++ b/tests/queries/0_stateless/01737_clickhouse_server_wait_server_pool_long.config.xml @@ -0,0 +1,35 @@ + + + + trace + true + + + 9000 + + ./ + + 0 + + + + + + + ::/0 + + + default + default + 1 + + + + + + + + + + + diff --git a/tests/queries/0_stateless/01737_clickhouse_server_wait_server_pool_long.reference b/tests/queries/0_stateless/01737_clickhouse_server_wait_server_pool_long.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01737_clickhouse_server_wait_server_pool_long.sh b/tests/queries/0_stateless/01737_clickhouse_server_wait_server_pool_long.sh new file mode 100755 index 00000000000..a4fd7529ab2 --- /dev/null +++ b/tests/queries/0_stateless/01737_clickhouse_server_wait_server_pool_long.sh @@ -0,0 +1,83 @@ +#!/usr/bin/env bash + +CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. "$CUR_DIR"/../shell_config.sh + +server_opts=( + "--config-file=$CUR_DIR/$(basename "${BASH_SOURCE[0]}" .sh).config.xml" + "--" + # to avoid multiple listen sockets (complexity for port discovering) + "--listen_host=127.1" + # we will discover the real port later. + "--tcp_port=0" + "--shutdown_wait_unfinished=0" +) +CLICKHOUSE_WATCHDOG_ENABLE=0 $CLICKHOUSE_SERVER_BINARY "${server_opts[@]}" >& clickhouse-server.log & +server_pid=$! + +trap cleanup EXIT +function cleanup() +{ + kill -9 $server_pid + kill -9 $client_pid + + echo "Test failed. Server log:" + cat clickhouse-server.log + rm -f clickhouse-server.log + + exit 1 +} + +server_port= +i=0 retries=300 +# wait until server will start to listen (max 30 seconds) +while [[ -z $server_port ]] && [[ $i -lt $retries ]]; do + server_port=$(lsof -n -a -P -i tcp -s tcp:LISTEN -p $server_pid 2>/dev/null | awk -F'[ :]' '/LISTEN/ { print $(NF-1) }') + ((++i)) + sleep 0.1 +done +if [[ -z $server_port ]]; then + echo "Cannot wait for LISTEN socket" >&2 + exit 1 +fi + +# wait for the server to start accepting tcp connections (max 30 seconds) +i=0 retries=300 +while ! $CLICKHOUSE_CLIENT_BINARY --host 127.1 --port "$server_port" --format Null -q 'select 1' 2>/dev/null && [[ $i -lt $retries ]]; do + sleep 0.1 +done +if ! $CLICKHOUSE_CLIENT_BINARY --host 127.1 --port "$server_port" --format Null -q 'select 1'; then + echo "Cannot wait until server will start accepting connections on " >&2 + exit 1 +fi + +query_id="$CLICKHOUSE_DATABASE-$SECONDS" +$CLICKHOUSE_CLIENT_BINARY --query_id "$query_id" --host 127.1 --port "$server_port" --format Null -q 'select sleepEachRow(1) from numbers(10)' 2>/dev/null & +client_pid=$! + +# wait until the query will appear in processlist (max 10 second) +# (it is enough to trigger the problem) +i=0 retries=1000 +while [[ $($CLICKHOUSE_CLIENT_BINARY --host 127.1 --port "$server_port" -q "select count() from system.processes where query_id = '$query_id'") != "1" ]] && [[ $i -lt $retries ]]; do + sleep 0.01 +done +if [[ $($CLICKHOUSE_CLIENT_BINARY --host 127.1 --port "$server_port" -q "select count() from system.processes where query_id = '$query_id'") != "1" ]]; then + echo "Cannot wait until the query will start" >&2 + exit 1 +fi + +# send TERM and save the error code to ensure that it is 0 (EXIT_SUCCESS) +kill $server_pid +wait $server_pid +return_code=$? 
+ +wait $client_pid + +trap '' EXIT +if [ $return_code != 0 ]; then + cat clickhouse-server.log +fi +rm -f clickhouse-server.log + +exit $return_code diff --git a/tests/queries/shell_config.sh b/tests/queries/shell_config.sh index 5b942a95d02..ff8de80c0cc 100644 --- a/tests/queries/shell_config.sh +++ b/tests/queries/shell_config.sh @@ -23,14 +23,21 @@ export CLICKHOUSE_TEST_ZOOKEEPER_PREFIX="${CLICKHOUSE_TEST_NAME}_${CLICKHOUSE_DA [ -v CLICKHOUSE_LOG_COMMENT ] && CLICKHOUSE_BENCHMARK_OPT0+=" --log_comment='${CLICKHOUSE_LOG_COMMENT}' " export CLICKHOUSE_BINARY=${CLICKHOUSE_BINARY:="clickhouse"} +# client [ -x "$CLICKHOUSE_BINARY-client" ] && CLICKHOUSE_CLIENT_BINARY=${CLICKHOUSE_CLIENT_BINARY:=$CLICKHOUSE_BINARY-client} [ -x "$CLICKHOUSE_BINARY" ] && CLICKHOUSE_CLIENT_BINARY=${CLICKHOUSE_CLIENT_BINARY:=$CLICKHOUSE_BINARY client} export CLICKHOUSE_CLIENT_BINARY=${CLICKHOUSE_CLIENT_BINARY:=$CLICKHOUSE_BINARY-client} export CLICKHOUSE_CLIENT_OPT="${CLICKHOUSE_CLIENT_OPT0:-} ${CLICKHOUSE_CLIENT_OPT:-}" export CLICKHOUSE_CLIENT=${CLICKHOUSE_CLIENT:="$CLICKHOUSE_CLIENT_BINARY ${CLICKHOUSE_CLIENT_OPT:-}"} +# local [ -x "${CLICKHOUSE_BINARY}-local" ] && CLICKHOUSE_LOCAL=${CLICKHOUSE_LOCAL:="${CLICKHOUSE_BINARY}-local"} [ -x "${CLICKHOUSE_BINARY}" ] && CLICKHOUSE_LOCAL=${CLICKHOUSE_LOCAL:="${CLICKHOUSE_BINARY} local"} export CLICKHOUSE_LOCAL=${CLICKHOUSE_LOCAL:="${CLICKHOUSE_BINARY}-local"} +# server +[ -x "${CLICKHOUSE_BINARY}-server" ] && CLICKHOUSE_SERVER_BINARY=${CLICKHOUSE_SERVER_BINARY:="${CLICKHOUSE_BINARY}-server"} +[ -x "${CLICKHOUSE_BINARY}" ] && CLICKHOUSE_SERVER_BINARY=${CLICKHOUSE_SERVER_BINARY:="${CLICKHOUSE_BINARY} server"} +export CLICKHOUSE_SERVER_BINARY=${CLICKHOUSE_SERVER_BINARY:="${CLICKHOUSE_BINARY}-server"} +# others export CLICKHOUSE_OBFUSCATOR=${CLICKHOUSE_OBFUSCATOR:="${CLICKHOUSE_BINARY}-obfuscator"} export CLICKHOUSE_COMPRESSOR=${CLICKHOUSE_COMPRESSOR:="${CLICKHOUSE_BINARY}-compressor"} export CLICKHOUSE_BENCHMARK=${CLICKHOUSE_BENCHMARK:="${CLICKHOUSE_BINARY}-benchmark ${CLICKHOUSE_BENCHMARK_OPT0:-}"} diff --git a/tests/queries/skip_list.json b/tests/queries/skip_list.json index d41a41bd524..740a8d33ae7 100644 --- a/tests/queries/skip_list.json +++ b/tests/queries/skip_list.json @@ -696,6 +696,7 @@ "01682_cache_dictionary_complex_key", "01684_ssd_cache_dictionary_simple_key", "01685_ssd_cache_dictionary_complex_key", + "01737_clickhouse_server_wait_server_pool_long", // This test is fully compatible to run in parallel, however under ASAN processes are pretty heavy and may fail under flaky adress check. 
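For reference while reading this test: laid out on separate lines, the connection-draining loop from the waitServersToFinish hunk in the previous patch of this series (PATCH 026), which the test's --shutdown_wait_unfinished=0 above exercises, comes out roughly as follows. This is a readability reconstruction of that hunk, not new code; the surrounding signature and the derivation of sleep_max_ms are simplified.

    /// Reconstructed from the Server.cpp hunk in PATCH 026; fragment of waitServersToFinish().
    /// The connection count is now taken before deciding whether to sleep again, so even with
    /// a zero wait budget the caller still learns how many connections remain open.
    const int sleep_max_ms = 1000 * seconds_to_wait;   /// presumably derived from the function argument
    const int sleep_one_ms = 100;
    int sleep_current_ms = 0;
    int current_connections = 0;
    for (;;)
    {
        current_connections = 0;
        for (auto & server : servers)
        {
            server.stop();
            current_connections += server.currentConnections();
        }

        if (!current_connections)
            break;

        sleep_current_ms += sleep_one_ms;
        if (sleep_current_ms < sleep_max_ms)
            std::this_thread::sleep_for(std::chrono::milliseconds(sleep_one_ms));
        else
            break;
    }
    return current_connections;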
"01760_system_dictionaries", "01760_polygon_dictionaries", "01778_hierarchical_dictionaries", From 1b61580fa3e157d7160db5798fd0d187b4573b2d Mon Sep 17 00:00:00 2001 From: alexey-milovidov Date: Tue, 13 Apr 2021 21:58:04 +0300 Subject: [PATCH 028/108] Update 01676_clickhouse_client_autocomplete.sh --- .../queries/0_stateless/01676_clickhouse_client_autocomplete.sh | 2 -- 1 file changed, 2 deletions(-) diff --git a/tests/queries/0_stateless/01676_clickhouse_client_autocomplete.sh b/tests/queries/0_stateless/01676_clickhouse_client_autocomplete.sh index 21f415b59ce..1ed5c6be272 100755 --- a/tests/queries/0_stateless/01676_clickhouse_client_autocomplete.sh +++ b/tests/queries/0_stateless/01676_clickhouse_client_autocomplete.sh @@ -69,8 +69,6 @@ compwords_positive=( max_concurrent_queries_for_all_users # system.clusters test_shard_localhost - # system.metrics - PartsPreCommitted # system.macros default_path_test # system.storage_policies, egh not uniq From 760bd0dc76509ea1f9d7d4c688d2239a6af1c1fe Mon Sep 17 00:00:00 2001 From: Alexey Milovidov Date: Tue, 13 Apr 2021 22:11:45 +0300 Subject: [PATCH 029/108] Maybe better --- programs/install/Install.cpp | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/programs/install/Install.cpp b/programs/install/Install.cpp index c6c21d43739..1179ce4cba3 100644 --- a/programs/install/Install.cpp +++ b/programs/install/Install.cpp @@ -560,21 +560,25 @@ int mainEntryClickHouseInstall(int argc, char ** argv) /// Set up password for default user. + bool stdin_is_a_tty = isatty(STDIN_FILENO); bool stdout_is_a_tty = isatty(STDOUT_FILENO); - bool is_interactive = stdout_is_a_tty; + bool is_interactive = stdin_is_a_tty && stdout_is_a_tty; if (has_password_for_default_user) { fmt::print(HILITE "Password for default user is already specified. To remind or reset, see {} and {}." END_HILITE, users_config_file.string(), users_d.string()); } - else if (!is_interactive) + else if (!stdout_is_a_tty) { fmt::print(HILITE "Password for default user is empty string. See {} and {} to change it." END_HILITE, users_config_file.string(), users_d.string()); } else { + /// NOTE: When installing debian package with dpkg -i, stdin is not a terminal but we are still being able to enter password. + /// More sophisticated method with /dev/tty is used inside the `readpassphrase` function. 
+ char buf[1000] = {}; std::string password; if (auto * result = readpassphrase("Enter password for default user: ", buf, sizeof(buf), 0)) From 05e4c8d3efc5106ba180e69b3a005ccc1f455a7c Mon Sep 17 00:00:00 2001 From: Alexander Tokmakov Date: Tue, 13 Apr 2021 22:13:26 +0300 Subject: [PATCH 030/108] fix attach mv in atomic db --- src/Interpreters/InterpreterCreateQuery.cpp | 2 +- src/Storages/StorageMaterializedView.cpp | 8 +++- .../01153_attach_mv_uuid.reference | 22 ++++++++++ .../0_stateless/01153_attach_mv_uuid.sql | 42 +++++++++++++++++++ tests/queries/skip_list.json | 4 +- 5 files changed, 74 insertions(+), 4 deletions(-) create mode 100644 tests/queries/0_stateless/01153_attach_mv_uuid.reference create mode 100644 tests/queries/0_stateless/01153_attach_mv_uuid.sql diff --git a/src/Interpreters/InterpreterCreateQuery.cpp b/src/Interpreters/InterpreterCreateQuery.cpp index 5d58d19ffaa..fbb6e5f3378 100644 --- a/src/Interpreters/InterpreterCreateQuery.cpp +++ b/src/Interpreters/InterpreterCreateQuery.cpp @@ -723,7 +723,7 @@ static void generateUUIDForTable(ASTCreateQuery & create) /// If destination table (to_table_id) is not specified for materialized view, /// then MV will create inner table. We should generate UUID of inner table here, /// so it will be the same on all hosts if query in ON CLUSTER or database engine is Replicated. - bool need_uuid_for_inner_table = create.is_materialized_view && !create.to_table_id; + bool need_uuid_for_inner_table = !create.attach && create.is_materialized_view && !create.to_table_id; if (need_uuid_for_inner_table && create.to_inner_uuid == UUIDHelpers::Nil) create.to_inner_uuid = UUIDHelpers::generateV4(); } diff --git a/src/Storages/StorageMaterializedView.cpp b/src/Storages/StorageMaterializedView.cpp index ffd57008e6e..1e86ce6a4e3 100644 --- a/src/Storages/StorageMaterializedView.cpp +++ b/src/Storages/StorageMaterializedView.cpp @@ -76,10 +76,14 @@ StorageMaterializedView::StorageMaterializedView( storage_metadata.setSelectQuery(select); setInMemoryMetadata(storage_metadata); + bool point_to_itself_by_uuid = has_inner_table && query.to_inner_uuid == table_id_.uuid; + bool point_to_itself_by_name = has_inner_table && query.to_table_id.database_name == table_id_.database_name + && query.to_table_id.table_name == table_id_.table_name; + if (point_to_itself_by_uuid || point_to_itself_by_name) + throw Exception(ErrorCodes::BAD_ARGUMENTS, "Materialized view {} cannot point to itself", table_id_.getFullTableName()); + if (!has_inner_table) { - if (query.to_table_id.database_name == table_id_.database_name && query.to_table_id.table_name == table_id_.table_name) - throw Exception(ErrorCodes::BAD_ARGUMENTS, "Materialized view {} cannot point to itself", table_id_.getFullTableName()); target_table_id = query.to_table_id; } else if (attach_) diff --git a/tests/queries/0_stateless/01153_attach_mv_uuid.reference b/tests/queries/0_stateless/01153_attach_mv_uuid.reference new file mode 100644 index 00000000000..e37fe28e303 --- /dev/null +++ b/tests/queries/0_stateless/01153_attach_mv_uuid.reference @@ -0,0 +1,22 @@ +1 1 +2 4 +1 1 +2 4 +3 9 +4 16 +CREATE MATERIALIZED VIEW default.mv UUID \'e15f3ab5-6cae-4df3-b879-f40deafd82c2\'\n(\n `n` Int32,\n `n2` Int64\n)\nENGINE = MergeTree\nPARTITION BY n % 10\nORDER BY n AS\nSELECT\n n,\n n * n AS n2\nFROM default.src +1 1 +2 4 +CREATE MATERIALIZED VIEW default.mv UUID \'e15f3ab5-6cae-4df3-b879-f40deafd82c2\'\n(\n `n` Int32,\n `n2` Int64\n)\nENGINE = MergeTree\nPARTITION BY n % 10\nORDER BY n AS\nSELECT\n n,\n n * n AS 
n2\nFROM default.src +1 1 +2 4 +3 9 +4 16 +CREATE MATERIALIZED VIEW default.mv UUID \'e15f3ab5-6cae-4df3-b879-f40deafd82c2\' TO INNER UUID \'3bd68e3c-2693-4352-ad66-a66eba9e345e\'\n(\n `n` Int32,\n `n2` Int64\n)\nENGINE = MergeTree\nPARTITION BY n % 10\nORDER BY n AS\nSELECT\n n,\n n * n AS n2\nFROM default.src +1 1 +2 4 +CREATE MATERIALIZED VIEW default.mv UUID \'e15f3ab5-6cae-4df3-b879-f40deafd82c2\' TO INNER UUID \'3bd68e3c-2693-4352-ad66-a66eba9e345e\'\n(\n `n` Int32,\n `n2` Int64\n)\nENGINE = MergeTree\nPARTITION BY n % 10\nORDER BY n AS\nSELECT\n n,\n n * n AS n2\nFROM default.src +1 1 +2 4 +3 9 +4 16 diff --git a/tests/queries/0_stateless/01153_attach_mv_uuid.sql b/tests/queries/0_stateless/01153_attach_mv_uuid.sql new file mode 100644 index 00000000000..86d768d94a7 --- /dev/null +++ b/tests/queries/0_stateless/01153_attach_mv_uuid.sql @@ -0,0 +1,42 @@ +DROP TABLE IF EXISTS src; +DROP TABLE IF EXISTS mv; +DROP TABLE IF EXISTS ".inner_id.e15f3ab5-6cae-4df3-b879-f40deafd82c2"; + +CREATE TABLE src (n UInt64) ENGINE=MergeTree ORDER BY n; +CREATE MATERIALIZED VIEW mv (n Int32, n2 Int64) ENGINE = MergeTree PARTITION BY n % 10 ORDER BY n AS SELECT n, n * n AS n2 FROM src; +INSERT INTO src VALUES (1), (2); +SELECT * FROM mv ORDER BY n; +DETACH TABLE mv; +ATTACH TABLE mv; +INSERT INTO src VALUES (3), (4); +SELECT * FROM mv ORDER BY n; +DROP TABLE mv SYNC; + +SET show_table_uuid_in_table_create_query_if_not_nil=1; +CREATE TABLE ".inner_id.e15f3ab5-6cae-4df3-b879-f40deafd82c2" (n Int32, n2 Int64) ENGINE = MergeTree PARTITION BY n % 10 ORDER BY n; +ATTACH MATERIALIZED VIEW mv UUID 'e15f3ab5-6cae-4df3-b879-f40deafd82c2' (n Int32, n2 Int64) ENGINE = MergeTree PARTITION BY n % 10 ORDER BY n AS SELECT n, n * n AS n2 FROM src; +SHOW CREATE TABLE mv; +INSERT INTO src VALUES (1), (2); +SELECT * FROM mv ORDER BY n; +DETACH TABLE mv; +ATTACH TABLE mv; +SHOW CREATE TABLE mv; +INSERT INTO src VALUES (3), (4); +SELECT * FROM mv ORDER BY n; +DROP TABLE mv SYNC; + +CREATE TABLE ".inner_id.e15f3ab5-6cae-4df3-b879-f40deafd82c2" UUID '3bd68e3c-2693-4352-ad66-a66eba9e345e' (n Int32, n2 Int64) ENGINE = MergeTree PARTITION BY n % 10 ORDER BY n; +ATTACH MATERIALIZED VIEW mv UUID 'e15f3ab5-6cae-4df3-b879-f40deafd82c2' TO INNER UUID '3bd68e3c-2693-4352-ad66-a66eba9e345e' (n Int32, n2 Int64) ENGINE = MergeTree PARTITION BY n % 10 ORDER BY n AS SELECT n, n * n AS n2 FROM src; +SHOW CREATE TABLE mv; +INSERT INTO src VALUES (1), (2); +SELECT * FROM mv ORDER BY n; +DETACH TABLE mv; +ATTACH TABLE mv; +SHOW CREATE TABLE mv; +INSERT INTO src VALUES (3), (4); +SELECT * FROM mv ORDER BY n; +DROP TABLE mv SYNC; + +ATTACH MATERIALIZED VIEW mv UUID '3bd68e3c-2693-4352-ad66-a66eba9e345e' TO INNER UUID '3bd68e3c-2693-4352-ad66-a66eba9e345e' (n Int32, n2 Int64) ENGINE = MergeTree PARTITION BY n % 10 ORDER BY n AS SELECT n, n * n AS n2 FROM src; -- { serverError 36 } + +DROP TABLE src; diff --git a/tests/queries/skip_list.json b/tests/queries/skip_list.json index d41a41bd524..ad48d443d08 100644 --- a/tests/queries/skip_list.json +++ b/tests/queries/skip_list.json @@ -105,7 +105,8 @@ "00604_show_create_database", "00609_mv_index_in_in", "00510_materizlized_view_and_deduplication_zookeeper", - "00738_lock_for_inner_table" + "00738_lock_for_inner_table", + "01153_attach_mv_uuid" ], "database-replicated": [ "memory_tracking", @@ -557,6 +558,7 @@ "01135_default_and_alter_zookeeper", "01148_zookeeper_path_macros_unfolding", "01150_ddl_guard_rwr", + "01153_attach_mv_uuid", "01185_create_or_replace_table", "01190_full_attach_syntax", 
"01191_rename_dictionary", From 39d7f50e8a06e91be81ff4abebbacf95739f7356 Mon Sep 17 00:00:00 2001 From: Alexey Milovidov Date: Tue, 13 Apr 2021 22:15:15 +0300 Subject: [PATCH 031/108] Fix style --- tests/queries/0_stateless/01812_basic_auth_http_server.sh | 1 + 1 file changed, 1 insertion(+) diff --git a/tests/queries/0_stateless/01812_basic_auth_http_server.sh b/tests/queries/0_stateless/01812_basic_auth_http_server.sh index b2caa32ef1a..6f553596656 100755 --- a/tests/queries/0_stateless/01812_basic_auth_http_server.sh +++ b/tests/queries/0_stateless/01812_basic_auth_http_server.sh @@ -1,4 +1,5 @@ #!/usr/bin/env bash +# shellcheck disable=SC2046 # In very old (e.g. 1.1.54385) versions of ClickHouse there was a bug in Poco HTTP library: # Basic HTTP authentication headers was not parsed if the size of URL is exactly 4077 + something bytes. From df44476307429444b2936050ac249183d94caefe Mon Sep 17 00:00:00 2001 From: Alexey Milovidov Date: Tue, 13 Apr 2021 22:32:02 +0300 Subject: [PATCH 032/108] Fix error --- src/Functions/array/arrayIndex.h | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/src/Functions/array/arrayIndex.h b/src/Functions/array/arrayIndex.h index a48cfb2edc5..1b58c9418bf 100644 --- a/src/Functions/array/arrayIndex.h +++ b/src/Functions/array/arrayIndex.h @@ -496,8 +496,11 @@ private: static inline bool allowArguments(const DataTypePtr & array_inner_type, const DataTypePtr & arg) { - return ((isNativeNumber(array_inner_type) || isEnum(array_inner_type)) && isNativeNumber(arg)) - || getLeastSupertype({array_inner_type, arg}); + auto inner_type_decayed = removeNullable(removeLowCardinality(array_inner_type)); + auto arg_decayed = removeNullable(removeLowCardinality(arg)); + + return ((isNativeNumber(inner_type_decayed) || isEnum(inner_type_decayed)) && isNativeNumber(arg_decayed)) + || getLeastSupertype({inner_type_decayed, arg_decayed}); } #define INTEGRAL_TPL_PACK UInt8, UInt16, UInt32, UInt64, Int8, Int16, Int32, Int64, Float32, Float64 From 7254ed5e7f6c8f238623393c9b0b3d8f3670bd23 Mon Sep 17 00:00:00 2001 From: Alexey Milovidov Date: Tue, 13 Apr 2021 22:32:10 +0300 Subject: [PATCH 033/108] This is correct --- .../00700_decimal_complex_types.reference | 24 ++++++++++ .../00700_decimal_complex_types.sql | 48 +++++++++---------- 2 files changed, 48 insertions(+), 24 deletions(-) diff --git a/tests/queries/0_stateless/00700_decimal_complex_types.reference b/tests/queries/0_stateless/00700_decimal_complex_types.reference index e81dd94513f..9c7c6fefefd 100644 --- a/tests/queries/0_stateless/00700_decimal_complex_types.reference +++ b/tests/queries/0_stateless/00700_decimal_complex_types.reference @@ -39,9 +39,33 @@ Tuple(Decimal(9, 1), Decimal(18, 1), Decimal(38, 1)) Decimal(9, 1) Decimal(18, 1 1 0 1 0 1 0 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 1 0 2 0 3 0 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 [0.100,0.200,0.300,0.400,0.500,0.600] Array(Decimal(18, 3)) [0.100,0.200,0.300,0.700,0.800,0.900] Array(Decimal(38, 3)) [0.400,0.500,0.600,0.700,0.800,0.900] Array(Decimal(38, 3)) diff --git a/tests/queries/0_stateless/00700_decimal_complex_types.sql b/tests/queries/0_stateless/00700_decimal_complex_types.sql index 2d506b124a2..f4b29e77be9 100644 --- a/tests/queries/0_stateless/00700_decimal_complex_types.sql +++ b/tests/queries/0_stateless/00700_decimal_complex_types.sql @@ -58,35 +58,35 @@ SELECT has(a, toDecimal32(0.1, 3)), has(a, toDecimal32(1.0, 3)) FROM decimal; SELECT has(b, toDecimal64(0.4, 3)), has(b, toDecimal64(1.0, 3)) FROM decimal; SELECT has(c, 
toDecimal128(0.7, 3)), has(c, toDecimal128(1.0, 3)) FROM decimal; -SELECT has(a, toDecimal32(0.1, 2)) FROM decimal; -- { serverError 43 } -SELECT has(a, toDecimal32(0.1, 4)) FROM decimal; -- { serverError 43 } -SELECT has(a, toDecimal64(0.1, 3)) FROM decimal; -- { serverError 43 } -SELECT has(a, toDecimal128(0.1, 3)) FROM decimal; -- { serverError 43 } -SELECT has(b, toDecimal32(0.4, 3)) FROM decimal; -- { serverError 43 } -SELECT has(b, toDecimal64(0.4, 2)) FROM decimal; -- { serverError 43 } -SELECT has(b, toDecimal64(0.4, 4)) FROM decimal; -- { serverError 43 } -SELECT has(b, toDecimal128(0.4, 3)) FROM decimal; -- { serverError 43 } -SELECT has(c, toDecimal32(0.7, 3)) FROM decimal; -- { serverError 43 } -SELECT has(c, toDecimal64(0.7, 3)) FROM decimal; -- { serverError 43 } -SELECT has(c, toDecimal128(0.7, 2)) FROM decimal; -- { serverError 43 } -SELECT has(c, toDecimal128(0.7, 4)) FROM decimal; -- { serverError 43 } +SELECT has(a, toDecimal32(0.1, 2)) FROM decimal; +SELECT has(a, toDecimal32(0.1, 4)) FROM decimal; +SELECT has(a, toDecimal64(0.1, 3)) FROM decimal; +SELECT has(a, toDecimal128(0.1, 3)) FROM decimal; +SELECT has(b, toDecimal32(0.4, 3)) FROM decimal; +SELECT has(b, toDecimal64(0.4, 2)) FROM decimal; +SELECT has(b, toDecimal64(0.4, 4)) FROM decimal; +SELECT has(b, toDecimal128(0.4, 3)) FROM decimal; +SELECT has(c, toDecimal32(0.7, 3)) FROM decimal; +SELECT has(c, toDecimal64(0.7, 3)) FROM decimal; +SELECT has(c, toDecimal128(0.7, 2)) FROM decimal; +SELECT has(c, toDecimal128(0.7, 4)) FROM decimal; SELECT indexOf(a, toDecimal32(0.1, 3)), indexOf(a, toDecimal32(1.0, 3)) FROM decimal; SELECT indexOf(b, toDecimal64(0.5, 3)), indexOf(b, toDecimal64(1.0, 3)) FROM decimal; SELECT indexOf(c, toDecimal128(0.9, 3)), indexOf(c, toDecimal128(1.0, 3)) FROM decimal; -SELECT indexOf(a, toDecimal32(0.1, 2)) FROM decimal; -- { serverError 43 } -SELECT indexOf(a, toDecimal32(0.1, 4)) FROM decimal; -- { serverError 43 } -SELECT indexOf(a, toDecimal64(0.1, 3)) FROM decimal; -- { serverError 43 } -SELECT indexOf(a, toDecimal128(0.1, 3)) FROM decimal; -- { serverError 43 } -SELECT indexOf(b, toDecimal32(0.4, 3)) FROM decimal; -- { serverError 43 } -SELECT indexOf(b, toDecimal64(0.4, 2)) FROM decimal; -- { serverError 43 } -SELECT indexOf(b, toDecimal64(0.4, 4)) FROM decimal; -- { serverError 43 } -SELECT indexOf(b, toDecimal128(0.4, 3)) FROM decimal; -- { serverError 43 } -SELECT indexOf(c, toDecimal32(0.7, 3)) FROM decimal; -- { serverError 43 } -SELECT indexOf(c, toDecimal64(0.7, 3)) FROM decimal; -- { serverError 43 } -SELECT indexOf(c, toDecimal128(0.7, 2)) FROM decimal; -- { serverError 43 } -SELECT indexOf(c, toDecimal128(0.7, 4)) FROM decimal; -- { serverError 43 } +SELECT indexOf(a, toDecimal32(0.1, 2)) FROM decimal; +SELECT indexOf(a, toDecimal32(0.1, 4)) FROM decimal; +SELECT indexOf(a, toDecimal64(0.1, 3)) FROM decimal; +SELECT indexOf(a, toDecimal128(0.1, 3)) FROM decimal; +SELECT indexOf(b, toDecimal32(0.4, 3)) FROM decimal; +SELECT indexOf(b, toDecimal64(0.4, 2)) FROM decimal; +SELECT indexOf(b, toDecimal64(0.4, 4)) FROM decimal; +SELECT indexOf(b, toDecimal128(0.4, 3)) FROM decimal; +SELECT indexOf(c, toDecimal32(0.7, 3)) FROM decimal; +SELECT indexOf(c, toDecimal64(0.7, 3)) FROM decimal; +SELECT indexOf(c, toDecimal128(0.7, 2)) FROM decimal; +SELECT indexOf(c, toDecimal128(0.7, 4)) FROM decimal; SELECT arrayConcat(a, b) AS x, toTypeName(x) FROM decimal; SELECT arrayConcat(a, c) AS x, toTypeName(x) FROM decimal; From 0b39a64a6e1173ce473bffbd002071b7b2c01578 Mon Sep 17 00:00:00 2001 
From: Alexey Date: Tue, 13 Apr 2021 19:35:44 +0000 Subject: [PATCH 034/108] hex() added to murmurHash3_128 example --- docs/en/sql-reference/functions/hash-functions.md | 8 ++++---- docs/ja/sql-reference/functions/hash-functions.md | 8 ++++---- docs/ru/sql-reference/functions/hash-functions.md | 10 +++++----- 3 files changed, 13 insertions(+), 13 deletions(-) diff --git a/docs/en/sql-reference/functions/hash-functions.md b/docs/en/sql-reference/functions/hash-functions.md index 22929ef36f7..0ea4cfd6fbe 100644 --- a/docs/en/sql-reference/functions/hash-functions.md +++ b/docs/en/sql-reference/functions/hash-functions.md @@ -437,13 +437,13 @@ A [FixedString(16)](../../sql-reference/data-types/fixedstring.md) data type has **Example** ``` sql -SELECT murmurHash3_128('example_string') AS MurmurHash3, toTypeName(MurmurHash3) AS type; +SELECT hex(murmurHash3_128('example_string')) AS MurmurHash3, toTypeName(MurmurHash3) AS type; ``` ``` text -┌─MurmurHash3──────┬─type────────────┐ -│ 6�1�4"S5KT�~~q | FixedString(16) │ -└──────────────────┴─────────────────┘ +┌─MurmurHash3──────────────────────┬─type───┐ +│ 368A1A311CB7342253354B548E7E7E71 │ String │ +└──────────────────────────────────┴────────┘ ``` ## xxHash32, xxHash64 {#hash-functions-xxhash32} diff --git a/docs/ja/sql-reference/functions/hash-functions.md b/docs/ja/sql-reference/functions/hash-functions.md index 3de3e40d0eb..a98ae60690d 100644 --- a/docs/ja/sql-reference/functions/hash-functions.md +++ b/docs/ja/sql-reference/functions/hash-functions.md @@ -434,13 +434,13 @@ A [FixedString(16)](../../sql-reference/data-types/fixedstring.md) データ型 **例** ``` sql -SELECT murmurHash3_128('example_string') AS MurmurHash3, toTypeName(MurmurHash3) AS type +SELECT hex(murmurHash3_128('example_string')) AS MurmurHash3, toTypeName(MurmurHash3) AS type; ``` ``` text -┌─MurmurHash3──────┬─type────────────┐ -│ 6�1�4"S5KT�~~q │ FixedString(16) │ -└──────────────────┴─────────────────┘ +┌─MurmurHash3──────────────────────┬─type───┐ +│ 368A1A311CB7342253354B548E7E7E71 │ String │ +└──────────────────────────────────┴────────┘ ``` ## xxHash32,xxHash64 {#hash-functions-xxhash32} diff --git a/docs/ru/sql-reference/functions/hash-functions.md b/docs/ru/sql-reference/functions/hash-functions.md index e680778e1f7..07c741e0588 100644 --- a/docs/ru/sql-reference/functions/hash-functions.md +++ b/docs/ru/sql-reference/functions/hash-functions.md @@ -430,7 +430,7 @@ murmurHash3_128( expr ) **Аргументы** -- `expr` — [выражение](../syntax.md#syntax-expressions), возвращающее значение типа[String](../../sql-reference/functions/hash-functions.md). +- `expr` — [выражение](../syntax.md#syntax-expressions), возвращающее значение типа [String](../../sql-reference/functions/hash-functions.md). 
**Возвращаемое значение** @@ -439,13 +439,13 @@ murmurHash3_128( expr ) **Пример** ``` sql -SELECT murmurHash3_128('example_string') AS MurmurHash3, toTypeName(MurmurHash3) AS type; +SELECT hex(murmurHash3_128('example_string')) AS MurmurHash3, toTypeName(MurmurHash3) AS type; ``` ``` text -┌─MurmurHash3──────┬─type────────────┐ -│ 6�1�4"S5KT�~~q │ FixedString(16) │ -└──────────────────┴─────────────────┘ +┌─MurmurHash3──────────────────────┬─type───┐ +│ 368A1A311CB7342253354B548E7E7E71 │ String │ +└──────────────────────────────────┴────────┘ ``` ## xxHash32, xxHash64 {#hash-functions-xxhash32-xxhash64} From 081ea84a41fa6d1328c551c1c3ecab61950cb30d Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Sat, 13 Mar 2021 02:47:20 +0300 Subject: [PATCH 035/108] save --- src/CMakeLists.txt | 1 + src/Server/S3TaskServer.h | 104 ++++++++++++++++++ src/Server/grpc_protos/CMakeLists.txt | 6 + .../clickhouse_s3_task_server.proto | 20 ++++ src/Storages/StorageDistributed.cpp | 16 +-- src/TableFunctions/TableFunctionRemote.h | 2 +- 6 files changed, 138 insertions(+), 11 deletions(-) create mode 100644 src/Server/S3TaskServer.h create mode 100644 src/Server/grpc_protos/clickhouse_s3_task_server.proto diff --git a/src/CMakeLists.txt b/src/CMakeLists.txt index 43f6ae8fea5..6f3bf988c17 100644 --- a/src/CMakeLists.txt +++ b/src/CMakeLists.txt @@ -423,6 +423,7 @@ endif () if (USE_GRPC) dbms_target_link_libraries (PUBLIC clickhouse_grpc_protos) + dbms_target_link_libraries (PUBLIC clickhouse_s3_task_server_protos) endif() if (USE_HDFS) diff --git a/src/Server/S3TaskServer.h b/src/Server/S3TaskServer.h new file mode 100644 index 00000000000..d1345ad8532 --- /dev/null +++ b/src/Server/S3TaskServer.h @@ -0,0 +1,104 @@ +#pragma once + +#include +#if !defined(ARCADIA_BUILD) +#include +#endif + +#if USE_GRPC +#include +#include "clickhouse_s3_task_server.grpc.pb.h" + + +#include +#include +#include + +#include +#include +#include + +using grpc::Server; +using grpc::ServerBuilder; +using grpc::ServerContext; +using grpc::Status; +using clickhouse::s3_task_server::S3TaskServer; +using clickhouse::s3_task_server::S3TaskRequest; +using clickhouse::s3_task_server::S3TaskReply; + + +namespace DB +{ + +namespace ErrorCodes +{ + extern const int LOGICAL_ERROR; +} + + +class S3Task +{ +public: + S3Task() = delete; + + explicit S3Task(std::vector && paths_) + : paths(std::move(paths_)) + {} + + std::optional getNext() { + static size_t next = 0; + if (next >= paths.size()) + return std::nullopt; + const auto result = paths[next]; + ++next; + return result; + } +private: + std::vector paths; +}; + + +// Logic and data behind the server's behavior. +class S3TaskServer final : public S3TaskServer::Service { + Status GetNext(ServerContext* context, const S3TaskRequest* request, S3TaskReply* reply) override { + std::string prefix("Hello"); + const auto query_id = request->query_id(); + auto it = handlers.find(query_id); + if (it == handlers.end()) { + reply->set_message(""); + reply->set_error(ErrorCodes::LOGICAL_ERROR); + return Status::CANCELLED; + } + + reply->set_error(0); + reply->set_message(it->second.getNext()); + return Status::OK; + } + + private: + std::unordered_map handlers; +}; + + +void RunServer() { + std::string server_address("0.0.0.0:50051"); + static S3TaskServer service; + + grpc::EnableDefaultHealthCheckService(true); + grpc::reflection::InitProtoReflectionServerBuilderPlugin(); + ServerBuilder builder; + // Listen on the given address without any authentication mechanism. 
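Alongside the server wiring here, a hedged sketch of the client side of the GetNext call may help orient the reader (the .proto itself appears further down in this patch). It follows the stock gRPC C++ stub API and the clickhouse.s3_task_server package names; the helper function and its error handling are illustrative and are not code from this series.

    // Sketch only: assumes the stub generated from clickhouse_s3_task_server.proto.
    #include <optional>
    #include <string>
    #include <grpcpp/grpcpp.h>
    #include "clickhouse_s3_task_server.grpc.pb.h"

    std::optional<std::string> fetchNextS3Key(const std::string & address, const std::string & query_id)
    {
        auto channel = grpc::CreateChannel(address, grpc::InsecureChannelCredentials());
        auto stub = clickhouse::s3_task_server::S3TaskServer::NewStub(channel);

        clickhouse::s3_task_server::S3TaskRequest request;
        request.set_query_id(query_id);

        clickhouse::s3_task_server::S3TaskReply reply;
        grpc::ClientContext context;
        grpc::Status status = stub->GetNext(&context, request, &reply);

        if (!status.ok() || reply.error() != 0 || reply.message().empty())
            return std::nullopt;   // no task for this query_id, or the list is exhausted
        return reply.message();    // next S3 key to read
    }

A worker would poll this until it gets an error or an empty message, mirroring how S3Task::getNext() above hands out paths one at a time.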
+ builder.AddListeningPort(server_address, grpc::InsecureServerCredentials()); + // Register "service" as the instance through which we'll communicate with + // clients. In this case it corresponds to an *synchronous* service. + builder.RegisterService(&service); + // Finally assemble the server. + std::unique_ptr server(builder.BuildAndStart()); + std::cout << "Server listening on " << server_address << std::endl; + server->Wait(); +} + +} + + +#endif diff --git a/src/Server/grpc_protos/CMakeLists.txt b/src/Server/grpc_protos/CMakeLists.txt index 584cf015a65..b8a5d5290c5 100644 --- a/src/Server/grpc_protos/CMakeLists.txt +++ b/src/Server/grpc_protos/CMakeLists.txt @@ -1,4 +1,5 @@ PROTOBUF_GENERATE_GRPC_CPP(clickhouse_grpc_proto_sources clickhouse_grpc_proto_headers clickhouse_grpc.proto) +PROTOBUF_GENERATE_GRPC_CPP(clickhouse_s3_task_server_sources clickhouse_s3_task_server_headers clickhouse_s3_task_server.proto) # Ignore warnings while compiling protobuf-generated *.pb.h and *.pb.cpp files. set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -w") @@ -9,3 +10,8 @@ set (CMAKE_CXX_CLANG_TIDY "") add_library(clickhouse_grpc_protos ${clickhouse_grpc_proto_headers} ${clickhouse_grpc_proto_sources}) target_include_directories(clickhouse_grpc_protos SYSTEM PUBLIC ${gRPC_INCLUDE_DIRS} ${Protobuf_INCLUDE_DIR} ${CMAKE_CURRENT_BINARY_DIR}) target_link_libraries (clickhouse_grpc_protos PUBLIC ${gRPC_LIBRARIES}) + + +add_library(clickhouse_s3_task_server_protos ${clickhouse_s3_task_server_headers} ${clickhouse_s3_task_server_sources}) +target_include_directories(clickhouse_s3_task_server_protos SYSTEM PUBLIC ${gRPC_INCLUDE_DIRS} ${Protobuf_INCLUDE_DIR} ${CMAKE_CURRENT_BINARY_DIR}) +target_link_libraries (clickhouse_s3_task_server_protos PUBLIC ${gRPC_LIBRARIES}) diff --git a/src/Server/grpc_protos/clickhouse_s3_task_server.proto b/src/Server/grpc_protos/clickhouse_s3_task_server.proto new file mode 100644 index 00000000000..6b3b8a34ad4 --- /dev/null +++ b/src/Server/grpc_protos/clickhouse_s3_task_server.proto @@ -0,0 +1,20 @@ + +syntax = "proto3"; + +package clickhouse.s3_task_server; + + +service S3TaskServer { + rpc GetNext (S3TaskRequest) returns (S3TaskReply) {} +} + + +message S3TaskRequest { + string query_id = 1; +} + + +message S3TaskReply { + string message = 1; + int32 error = 2; +} \ No newline at end of file diff --git a/src/Storages/StorageDistributed.cpp b/src/Storages/StorageDistributed.cpp index e42e53d3f1b..cd5fee30b1b 100644 --- a/src/Storages/StorageDistributed.cpp +++ b/src/Storages/StorageDistributed.cpp @@ -452,7 +452,8 @@ StorageDistributed::StorageDistributed( const DistributedSettings & distributed_settings_, bool attach, ClusterPtr owned_cluster_) - : StorageDistributed(id_, columns_, constraints_, String{}, String{}, cluster_name_, context_, sharding_key_, storage_policy_name_, relative_data_path_, distributed_settings_, attach, std::move(owned_cluster_)) + : StorageDistributed(id_, columns_, constraints_, String{}, String{}, cluster_name_, context_, sharding_key_, + storage_policy_name_, relative_data_path_, distributed_settings_, attach, std::move(owned_cluster_)) { remote_table_function_ptr = std::move(remote_table_function_ptr_); } @@ -473,20 +474,15 @@ QueryProcessingStage::Enum StorageDistributed::getQueryProcessingStage( ClusterPtr optimized_cluster = getOptimizedCluster(local_context, metadata_snapshot, query_info.query); if (optimized_cluster) { - LOG_DEBUG( - log, - "Skipping irrelevant shards - the query will be sent to the following shards of the cluster (shard numbers): 
{}", - makeFormattedListOfShards(optimized_cluster)); + LOG_DEBUG(log, "Skipping irrelevant shards - the query will be sent to the following shards of the cluster (shard numbers): {}", + makeFormattedListOfShards(optimized_cluster)); cluster = optimized_cluster; query_info.optimized_cluster = cluster; } else { - LOG_DEBUG( - log, - "Unable to figure out irrelevant shards from WHERE/PREWHERE clauses - the query will be sent to all shards of the " - "cluster{}", - has_sharding_key ? "" : " (no sharding key)"); + LOG_DEBUG(log, "Unable to figure out irrelevant shards from WHERE/PREWHERE clauses - the query will be sent to all shards of the cluster{}", + has_sharding_key ? "" : " (no sharding key)"); } } diff --git a/src/TableFunctions/TableFunctionRemote.h b/src/TableFunctions/TableFunctionRemote.h index 8fc918dfc20..845c36182dc 100644 --- a/src/TableFunctions/TableFunctionRemote.h +++ b/src/TableFunctions/TableFunctionRemote.h @@ -18,7 +18,7 @@ namespace DB class TableFunctionRemote : public ITableFunction { public: - TableFunctionRemote(const std::string & name_, bool secure_ = false); + explicit TableFunctionRemote(const std::string & name_, bool secure_ = false); std::string getName() const override { return name; } From 44ca65a9a4b52242698ac7f2d2e2eaa0ae7f803a Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Tue, 16 Mar 2021 21:41:29 +0300 Subject: [PATCH 036/108] save --- src/CMakeLists.txt | 2 +- src/Client/Connection.cpp | 11 ++ src/Client/Connection.h | 2 + src/Core/Protocol.h | 8 +- src/IO/S3Common.cpp | 7 + src/IO/S3Common.h | 4 + src/Server/TCPHandler.cpp | 19 ++- src/Server/TCPHandler.h | 3 +- src/Server/grpc_protos/CMakeLists.txt | 5 - src/Storages/CMakeLists.txt | 4 + src/Storages/StorageProxy.h | 2 +- src/Storages/StorageS3.cpp | 64 ++++----- src/Storages/StorageS3.h | 26 +++- src/Storages/StorageS3ReaderFollower.cpp | 99 ++++++++++++++ src/Storages/StorageS3ReaderFollower.h | 27 ++++ src/Storages/StorageTaskManager.cpp | 0 src/Storages/StorageTaskManager.h | 121 ++++++++++++++++++ src/Storages/System/StorageSystemOne.h | 2 +- src/Storages/protos/CMakeLists.txt | 11 ++ .../protos/clickhouse_s3_reader.proto} | 6 +- .../TableFunctionS3Distributed.cpp | 117 +++++++++++++++++ .../TableFunctionS3Distributed.h | 61 +++++++++ src/TableFunctions/registerTableFunctions.cpp | 1 + src/TableFunctions/registerTableFunctions.h | 1 + 24 files changed, 543 insertions(+), 60 deletions(-) create mode 100644 src/Storages/StorageS3ReaderFollower.cpp create mode 100644 src/Storages/StorageS3ReaderFollower.h create mode 100644 src/Storages/StorageTaskManager.cpp create mode 100644 src/Storages/StorageTaskManager.h create mode 100644 src/Storages/protos/CMakeLists.txt rename src/{Server/grpc_protos/clickhouse_s3_task_server.proto => Storages/protos/clickhouse_s3_reader.proto} (85%) create mode 100644 src/TableFunctions/TableFunctionS3Distributed.cpp create mode 100644 src/TableFunctions/TableFunctionS3Distributed.h diff --git a/src/CMakeLists.txt b/src/CMakeLists.txt index 6f3bf988c17..b93af56ae4a 100644 --- a/src/CMakeLists.txt +++ b/src/CMakeLists.txt @@ -423,7 +423,7 @@ endif () if (USE_GRPC) dbms_target_link_libraries (PUBLIC clickhouse_grpc_protos) - dbms_target_link_libraries (PUBLIC clickhouse_s3_task_server_protos) + dbms_target_link_libraries (PUBLIC clickhouse_s3_reader_protos) endif() if (USE_HDFS) diff --git a/src/Client/Connection.cpp b/src/Client/Connection.cpp index 939a48d949f..7c6675873a2 100644 --- a/src/Client/Connection.cpp +++ b/src/Client/Connection.cpp @@ -20,6 +20,7 @@ 
#include #include #include +#include "Core/Protocol.h" #include #include #include @@ -551,6 +552,16 @@ void Connection::sendIgnoredPartUUIDs(const std::vector & uuids) out->next(); } + +void Connection::sendNextTaskRequest(const std::string & id) +{ + std::cout << "Connection::sendNextTaskRequest" << std::endl; + std::cout << StackTrace().toString() << std::endl; + writeVarUInt(Protocol::Client::NextTaskRequest, *out); + writeStringBinary(id, *out); + out->next(); +} + void Connection::sendPreparedData(ReadBuffer & input, size_t size, const String & name) { /// NOTE 'Throttler' is not used in this method (could use, but it's not important right now). diff --git a/src/Client/Connection.h b/src/Client/Connection.h index 65ed956a60b..62f17d6ce2d 100644 --- a/src/Client/Connection.h +++ b/src/Client/Connection.h @@ -158,6 +158,8 @@ public: void sendExternalTablesData(ExternalTablesData & data); /// Send parts' uuids to excluded them from query processing void sendIgnoredPartUUIDs(const std::vector & uuids); + /// Send request to acquire a new task + void sendNextTaskRequest(const std::string & id); /// Send prepared block of data (serialized and, if need, compressed), that will be read from 'input'. /// You could pass size of serialized/compressed block. diff --git a/src/Core/Protocol.h b/src/Core/Protocol.h index df51a0cb61a..a6678ccae62 100644 --- a/src/Core/Protocol.h +++ b/src/Core/Protocol.h @@ -76,8 +76,9 @@ namespace Protocol Log = 10, /// System logs of the query execution TableColumns = 11, /// Columns' description for default values calculation PartUUIDs = 12, /// List of unique parts ids. + NextTaskReply = 13, /// String that describes the next task (a file to read from S3) - MAX = PartUUIDs, + MAX = NextTaskReply, }; /// NOTE: If the type of packet argument would be Enum, the comparison packet >= 0 && packet < 10 @@ -100,6 +101,7 @@ namespace Protocol "Log", "TableColumns", "PartUUIDs", + "NextTaskReply" }; return packet <= MAX ? data[packet] @@ -135,8 +137,9 @@ namespace Protocol KeepAlive = 6, /// Keep the connection alive Scalar = 7, /// A block of data (compressed or not). IgnoredPartUUIDs = 8, /// List of unique parts ids to exclude from query processing + NextTaskRequest = 9, /// String which contains an id to request a new task - MAX = IgnoredPartUUIDs, + MAX = NextTaskRequest, }; inline const char * toString(UInt64 packet) @@ -151,6 +154,7 @@ namespace Protocol "KeepAlive", "Scalar", "IgnoredPartUUIDs", + "NextTaskRequest" }; return packet <= MAX ? data[packet] diff --git a/src/IO/S3Common.cpp b/src/IO/S3Common.cpp index f9962735ddc..1e498c03a45 100644 --- a/src/IO/S3Common.cpp +++ b/src/IO/S3Common.cpp @@ -326,6 +326,7 @@ namespace S3 URI::URI(const Poco::URI & uri_) { + full = uri_.toString(); /// Case when bucket name represented in domain name of S3 URL. /// E.g. 
(https://bucket-name.s3.Region.amazonaws.com/key) /// https://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html#virtual-hosted-style-access @@ -399,6 +400,12 @@ namespace S3 else throw Exception("Bucket or key name are invalid in S3 URI: " + uri.toString(), ErrorCodes::BAD_ARGUMENTS); } + + + String URI::toString() const + { + return full; + } } } diff --git a/src/IO/S3Common.h b/src/IO/S3Common.h index b071daefee1..54493bb4d44 100644 --- a/src/IO/S3Common.h +++ b/src/IO/S3Common.h @@ -67,9 +67,13 @@ struct URI String key; String storage_name; + /// Full representation of URI + String full; + bool is_virtual_hosted_style; explicit URI(const Poco::URI & uri_); + String toString() const; }; } diff --git a/src/Server/TCPHandler.cpp b/src/Server/TCPHandler.cpp index d1a0ea61066..2c22a1421c0 100644 --- a/src/Server/TCPHandler.cpp +++ b/src/Server/TCPHandler.cpp @@ -25,6 +25,7 @@ #include #include #include +#include #include #include #include @@ -33,6 +34,7 @@ #include +#include "Core/Protocol.h" #include "TCPHandler.h" #if !defined(ARCADIA_BUILD) @@ -963,10 +965,13 @@ bool TCPHandler::receivePacket() UInt64 packet_type = 0; readVarUInt(packet_type, *in); -// std::cerr << "Server got packet: " << Protocol::Client::toString(packet_type) << "\n"; - switch (packet_type) { + case Protocol::Client::NextTaskRequest: + std::cout << "Protocol::Client::NextTaskRequest" << std::endl; + std::cout << StackTrace().toString() << std::endl; + receiveNextTaskRequest(); + return false; case Protocol::Client::IgnoredPartUUIDs: /// Part uuids packet if any comes before query. receiveIgnoredPartUUIDs(); @@ -1006,6 +1011,16 @@ bool TCPHandler::receivePacket() } } + +void TCPHandler::receiveNextTaskRequest() +{ + std::string id; + readStringBinary(id, *in); + LOG_DEBUG(log, "Got nextTaskRequest {}", id); + auto next = TaskSupervisor::instance().getNextTaskForId(id); + LOG_DEBUG(log, "Nexttask for id is {} ", next); +} + void TCPHandler::receiveIgnoredPartUUIDs() { state.part_uuids = true; diff --git a/src/Server/TCPHandler.h b/src/Server/TCPHandler.h index e0160b82962..ca5f720273d 100644 --- a/src/Server/TCPHandler.h +++ b/src/Server/TCPHandler.h @@ -89,7 +89,7 @@ struct QueryState *this = QueryState(); } - bool empty() + bool empty() const { return is_empty; } @@ -169,6 +169,7 @@ private: bool receivePacket(); void receiveQuery(); void receiveIgnoredPartUUIDs(); + void receiveNextTaskRequest(); bool receiveData(bool scalar); bool readDataNext(const size_t & poll_interval, const int & receive_timeout); void readData(const Settings & connection_settings); diff --git a/src/Server/grpc_protos/CMakeLists.txt b/src/Server/grpc_protos/CMakeLists.txt index b8a5d5290c5..22a834f96a1 100644 --- a/src/Server/grpc_protos/CMakeLists.txt +++ b/src/Server/grpc_protos/CMakeLists.txt @@ -1,5 +1,4 @@ PROTOBUF_GENERATE_GRPC_CPP(clickhouse_grpc_proto_sources clickhouse_grpc_proto_headers clickhouse_grpc.proto) -PROTOBUF_GENERATE_GRPC_CPP(clickhouse_s3_task_server_sources clickhouse_s3_task_server_headers clickhouse_s3_task_server.proto) # Ignore warnings while compiling protobuf-generated *.pb.h and *.pb.cpp files. 
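The hunks above add only the request direction: Connection::sendNextTaskRequest on the client and TCPHandler::receiveNextTaskRequest on the server, which so far just logs the looked-up task. Protocol::Server::NextTaskReply is reserved but not sent anywhere yet. As a hedged sketch of the counterpart these pieces point toward, mirroring the existing writeVarUInt/writeStringBinary framing, one might expect something like the following; none of it is code from this patch.

    /// Sketch only (not in this series). Uses the ClickHouse-internal IO helpers.
    #include <IO/ReadHelpers.h>
    #include <IO/WriteHelpers.h>
    #include <Core/Protocol.h>

    /// Server side: answer a NextTaskRequest with the next file to process.
    void sendNextTaskReply(DB::WriteBuffer & out, const std::string & next_file)
    {
        DB::writeVarUInt(DB::Protocol::Server::NextTaskReply, out);
        DB::writeStringBinary(next_file, out);   /// an empty string could mean "no more tasks"
        out.next();
    }

    /// Client side, after Connection::sendNextTaskRequest(id): read the reply packet.
    std::string receiveNextTaskReply(DB::ReadBuffer & in)
    {
        UInt64 packet_type = 0;
        DB::readVarUInt(packet_type, in);
        /// a real implementation would also handle Exception and other server packets here
        std::string next_file;
        DB::readStringBinary(next_file, in);
        return next_file;
    }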
set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -w") @@ -11,7 +10,3 @@ add_library(clickhouse_grpc_protos ${clickhouse_grpc_proto_headers} ${clickhouse target_include_directories(clickhouse_grpc_protos SYSTEM PUBLIC ${gRPC_INCLUDE_DIRS} ${Protobuf_INCLUDE_DIR} ${CMAKE_CURRENT_BINARY_DIR}) target_link_libraries (clickhouse_grpc_protos PUBLIC ${gRPC_LIBRARIES}) - -add_library(clickhouse_s3_task_server_protos ${clickhouse_s3_task_server_headers} ${clickhouse_s3_task_server_sources}) -target_include_directories(clickhouse_s3_task_server_protos SYSTEM PUBLIC ${gRPC_INCLUDE_DIRS} ${Protobuf_INCLUDE_DIR} ${CMAKE_CURRENT_BINARY_DIR}) -target_link_libraries (clickhouse_s3_task_server_protos PUBLIC ${gRPC_LIBRARIES}) diff --git a/src/Storages/CMakeLists.txt b/src/Storages/CMakeLists.txt index deb1c9f6716..2dbc2013648 100644 --- a/src/Storages/CMakeLists.txt +++ b/src/Storages/CMakeLists.txt @@ -4,3 +4,7 @@ add_subdirectory(System) if(ENABLE_TESTS) add_subdirectory(tests) endif() + +if (USE_GRPC) + add_subdirectory(protos) +endif() diff --git a/src/Storages/StorageProxy.h b/src/Storages/StorageProxy.h index 6fd2a86b6eb..2c3e9d610b0 100644 --- a/src/Storages/StorageProxy.h +++ b/src/Storages/StorageProxy.h @@ -11,7 +11,7 @@ class StorageProxy : public IStorage { public: - StorageProxy(const StorageID & table_id_) : IStorage(table_id_) {} + explicit StorageProxy(const StorageID & table_id_) : IStorage(table_id_) {} virtual StoragePtr getNested() const = 0; diff --git a/src/Storages/StorageS3.cpp b/src/Storages/StorageS3.cpp index af4b1777223..e31862d68cb 100644 --- a/src/Storages/StorageS3.cpp +++ b/src/Storages/StorageS3.cpp @@ -207,32 +207,26 @@ StorageS3::StorageS3( ContextPtr context_, const String & compression_method_) : IStorage(table_id_) - , WithContext(context_->getGlobalContext()) - , uri(uri_) - , access_key_id(access_key_id_) - , secret_access_key(secret_access_key_) - , max_connections(max_connections_) + , client_auth{uri_, access_key_id_, secret_access_key_, max_connections_, {}, {}} /// Client and settings will be updated later , format_name(format_name_) , min_upload_part_size(min_upload_part_size_) , max_single_part_upload_size(max_single_part_upload_size_) , compression_method(compression_method_) , name(uri_.storage_name) { - getContext()->getRemoteHostFilter().checkURL(uri_.uri); + context_->getGlobalContext()->getRemoteHostFilter().checkURL(uri_.uri); StorageInMemoryMetadata storage_metadata; storage_metadata.setColumns(columns_); storage_metadata.setConstraints(constraints_); setInMemoryMetadata(storage_metadata); - updateAuthSettings(context_); + updateClientAndAuthSettings(context_, client_auth); } -namespace -{ - /* "Recursive" directory listing with matched paths as a result. +/* "Recursive" directory listing with matched paths as a result. * Have the same method in StorageFile. 
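 *
 * In outline: the key is cut at the first glob character ('*', '?' or '{'), the part before it
 * is sent to S3 as a ListObjectsV2 prefix, and every returned key that matches a regexp built
 * from the original glob is kept. The returned strings are keys within the bucket, without the
 * endpoint or bucket name.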
*/ -Strings listFilesWithRegexpMatching(Aws::S3::S3Client & client, const S3::URI & globbed_uri) +Strings StorageS3::listFilesWithRegexpMatching(Aws::S3::S3Client & client, const S3::URI & globbed_uri) { if (globbed_uri.bucket.find_first_of("*?{") != globbed_uri.bucket.npos) { @@ -283,8 +277,6 @@ Strings listFilesWithRegexpMatching(Aws::S3::S3Client & client, const S3::URI & return result; } -} - Pipe StorageS3::read( const Names & column_names, @@ -295,7 +287,7 @@ Pipe StorageS3::read( size_t max_block_size, unsigned num_streams) { - updateAuthSettings(local_context); + updateClientAndAuthSettings(local_context, client_auth); Pipes pipes; bool need_path_column = false; @@ -308,7 +300,7 @@ Pipe StorageS3::read( need_file_column = true; } - for (const String & key : listFilesWithRegexpMatching(*client, uri)) + for (const String & key : listFilesWithRegexpMatching(*client_auth.client, client_auth.uri)) pipes.emplace_back(std::make_shared( need_path_column, need_file_column, @@ -318,9 +310,9 @@ Pipe StorageS3::read( local_context, metadata_snapshot->getColumns(), max_block_size, - chooseCompressionMethod(uri.key, compression_method), - client, - uri.bucket, + chooseCompressionMethod(client_auth.uri.key, compression_method), + client_auth.client, + client_auth.uri.bucket, key)); auto pipe = Pipe::unitePipes(std::move(pipes)); @@ -332,49 +324,49 @@ Pipe StorageS3::read( BlockOutputStreamPtr StorageS3::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context) { - updateAuthSettings(local_context); + updateClientAndAuthSettings(local_context, client_auth); return std::make_shared( format_name, metadata_snapshot->getSampleBlock(), - getContext(), - chooseCompressionMethod(uri.key, compression_method), - client, - uri.bucket, - uri.key, + context, + chooseCompressionMethod(client_auth.uri.key, compression_method), + client_auth.client, + client_auth.uri.bucket, + client_auth.uri.key, min_upload_part_size, max_single_part_upload_size); } -void StorageS3::updateAuthSettings(ContextPtr local_context) +void StorageS3::updateClientAndAuthSettings(ContextPtr ctx, StorageS3::ClientAuthentificaiton & upd) { - auto settings = local_context->getStorageS3Settings().getSettings(uri.uri.toString()); - if (client && (!access_key_id.empty() || settings == auth_settings)) + auto settings = ctx->getStorageS3Settings().getSettings(upd.uri.uri.toString()); + if (upd.client && (!upd.access_key_id.empty() || settings == upd.auth_settings)) return; - Aws::Auth::AWSCredentials credentials(access_key_id, secret_access_key); + Aws::Auth::AWSCredentials credentials(upd.access_key_id, upd.secret_access_key); HeaderCollection headers; - if (access_key_id.empty()) + if (upd.access_key_id.empty()) { credentials = Aws::Auth::AWSCredentials(settings.access_key_id, settings.secret_access_key); headers = settings.headers; } S3::PocoHTTPClientConfiguration client_configuration = S3::ClientFactory::instance().createClientConfiguration( - local_context->getRemoteHostFilter(), local_context->getGlobalContext()->getSettingsRef().s3_max_redirects); + ctx->getRemoteHostFilter(), ctx->getGlobalContext()->getSettingsRef().s3_max_redirects); - client_configuration.endpointOverride = uri.endpoint; - client_configuration.maxConnections = max_connections; + client_configuration.endpointOverride = upd.uri.endpoint; + client_configuration.maxConnections = upd.max_connections; - client = S3::ClientFactory::instance().create( + upd.client = S3::ClientFactory::instance().create( client_configuration, - 
uri.is_virtual_hosted_style, + upd.uri.is_virtual_hosted_style, credentials.GetAWSAccessKeyId(), credentials.GetAWSSecretKey(), settings.server_side_encryption_customer_key_base64, std::move(headers), - settings.use_environment_credentials.value_or(getContext()->getConfigRef().getBool("s3.use_environment_credentials", false))); + settings.use_environment_credentials.value_or(ctx->getConfigRef().getBool("s3.use_environment_credentials", false))); - auth_settings = std::move(settings); + upd.auth_settings = std::move(settings); } void registerStorageS3Impl(const String & name, StorageFactory & factory) diff --git a/src/Storages/StorageS3.h b/src/Storages/StorageS3.h index 14946f881f2..6f75592f1d2 100644 --- a/src/Storages/StorageS3.h +++ b/src/Storages/StorageS3.h @@ -1,6 +1,7 @@ #pragma once #include +#include "TableFunctions/TableFunctionS3Distributed.h" #if USE_AWS_S3 @@ -9,6 +10,7 @@ #include #include #include +#include namespace Aws::S3 { @@ -58,20 +60,30 @@ public: NamesAndTypesList getVirtuals() const override; private: - const S3::URI uri; - const String access_key_id; - const String secret_access_key; - const UInt64 max_connections; + struct ClientAuthentificaiton + { + const S3::URI uri; + const String access_key_id; + const String secret_access_key; + const UInt64 max_connections; + + std::shared_ptr client; + S3AuthSettings auth_settings; + }; + + ClientAuthentificaiton client_auth; String format_name; size_t min_upload_part_size; size_t max_single_part_upload_size; String compression_method; - std::shared_ptr client; String name; - S3AuthSettings auth_settings; - void updateAuthSettings(ContextPtr context); + + friend class TableFunctionS3Distributed; + + static Strings listFilesWithRegexpMatching(Aws::S3::S3Client & client, const S3::URI & globbed_uri); + static void updateClientAndAuthSettings(ContextPtr, ClientAuthentificaiton &); }; } diff --git a/src/Storages/StorageS3ReaderFollower.cpp b/src/Storages/StorageS3ReaderFollower.cpp new file mode 100644 index 00000000000..075b28ddb0e --- /dev/null +++ b/src/Storages/StorageS3ReaderFollower.cpp @@ -0,0 +1,99 @@ +#include "Storages/StorageS3ReaderFollower.h" + +#include +#include +#include +#include "Common/Throttler.h" +#include "clickhouse_s3_reader.grpc.pb.h" + + +#include +#include + +using grpc::Channel; +using grpc::ClientContext; +using grpc::Status; +using S3TaskServer = clickhouse::s3_reader::S3TaskServer; +using S3TaskRequest = clickhouse::s3_reader::S3TaskRequest; +using S3TaskReply = clickhouse::s3_reader::S3TaskReply; + + + +namespace DB +{ + + +namespace ErrorCodes +{ + extern const int LOGICAL_ERROR; +} + +namespace +{ + +class S3TaskManagerClient +{ +public: + explicit S3TaskManagerClient(std::shared_ptr channel) + : stub(S3TaskServer::NewStub(channel)) + {} + + + [[ maybe_unused ]] std::string GetNext(const std::string & query_id) + { + S3TaskRequest request; + request.set_query_id(query_id); + + S3TaskReply reply; + ClientContext context; + + Status status = stub->GetNext(&context, request, &reply); + + if (status.ok()) { + return reply.message(); + } else { + throw Exception("RPC Failed", ErrorCodes::LOGICAL_ERROR); + } + + } +private: + std::unique_ptr stub; +}; + +} + +const auto * target_str = "localhost:50051"; + +class StorageS3ReaderFollower::Impl +{ +public: + Impl() + : client(grpc::CreateChannel(target_str, grpc::InsecureChannelCredentials())) { + } + void startup(); +private: + S3TaskManagerClient client; +}; + +void StorageS3ReaderFollower::Impl::startup() { + +} + + +void 
StorageS3ReaderFollower::startup() +{ + return pimpl->startup(); +} + +bool StorageS3ReaderFollower::isRemote() const +{ + return true; +} + +std::string StorageS3ReaderFollower::getName() const +{ + return "StorageS3ReaderFollower"; +} + +} + diff --git a/src/Storages/StorageS3ReaderFollower.h b/src/Storages/StorageS3ReaderFollower.h new file mode 100644 index 00000000000..f07670227c6 --- /dev/null +++ b/src/Storages/StorageS3ReaderFollower.h @@ -0,0 +1,27 @@ +#pragma once + +#include "Storages/IStorage.h" + +#include + +namespace DB +{ + +namespace ErrorCodes +{ + extern const int LOGICAL_ERROR; +} + + +class StorageS3ReaderFollower : public IStorage +{ + std::string getName() const override; + bool isRemote() const override; + void startup() override; +private: + class Impl; + std::unique_ptr pimpl; +}; + + +} diff --git a/src/Storages/StorageTaskManager.cpp b/src/Storages/StorageTaskManager.cpp new file mode 100644 index 00000000000..e69de29bb2d diff --git a/src/Storages/StorageTaskManager.h b/src/Storages/StorageTaskManager.h new file mode 100644 index 00000000000..87fd32c0359 --- /dev/null +++ b/src/Storages/StorageTaskManager.h @@ -0,0 +1,121 @@ +#pragma once + + +#include +#include +#include +#include +#include +#include +#include +#include +#include + + +#include + +namespace DB +{ +namespace ErrorCodes +{ + extern const int LOGICAL_ERROR; +} + +using QueryId = std::string; +using Task = std::string; +using Tasks = std::vector; +using TasksIterator = Tasks::iterator; + + +class NextTaskResolverBase +{ +public: + virtual ~NextTaskResolverBase() = default; + virtual std::string next() = 0; + virtual std::string getName() = 0; + virtual std::string getId() = 0; +}; + +using NextTaskResolverBasePtr = std::unique_ptr; + + +class S3NextTaskResolver : public NextTaskResolverBase +{ +public: + S3NextTaskResolver(QueryId query_id, Tasks && all_tasks) + : id(query_id) + , tasks(all_tasks) + , current(tasks.begin()) + {} + + ~S3NextTaskResolver() override = default; + + std::string next() override + { + auto it = current; + ++current; + return it == tasks.end() ? 
"" : *it; + } + + std::string getName() override + { + return "S3NextTaskResolverBase"; + } + + std::string getId() override + { + return id; + } + +private: + QueryId id; + Tasks tasks; + TasksIterator current; +}; + + + +class TaskSupervisor +{ +public: + using QueryId = std::string; + + TaskSupervisor() + { + auto nexttask = std::make_unique("12345", std::vector{"anime1", "anime2", "anime3"}); + registerNextTaskResolver(std::move(nexttask)); + } + + static TaskSupervisor & instance() + { + static TaskSupervisor task_manager; + return task_manager; + } + + void registerNextTaskResolver(NextTaskResolverBasePtr resolver) + { + std::lock_guard lock(rwlock); + auto & target = dict[resolver->getId()]; + if (target) + throw Exception(fmt::format("NextTaskResolver with name {} is already registered for query {}", + target->getId(), resolver->getId()), ErrorCodes::LOGICAL_ERROR); + target = std::move(resolver); + } + + + Task getNextTaskForId(const QueryId & id) + { + std::shared_lock lock(rwlock); + auto it = dict.find(id); + if (it == dict.end()) + throw Exception(fmt::format("NextTaskResolver is not registered for query {}", id), ErrorCodes::LOGICAL_ERROR); + return it->second->next(); + } +private: + using ResolverDict = std::unordered_map; + ResolverDict dict; + std::shared_mutex rwlock; +}; + + +} diff --git a/src/Storages/System/StorageSystemOne.h b/src/Storages/System/StorageSystemOne.h index a34f9562025..a14d5e15726 100644 --- a/src/Storages/System/StorageSystemOne.h +++ b/src/Storages/System/StorageSystemOne.h @@ -31,7 +31,7 @@ public: unsigned num_streams) override; protected: - StorageSystemOne(const StorageID & table_id_); + explicit StorageSystemOne(const StorageID & table_id_); }; } diff --git a/src/Storages/protos/CMakeLists.txt b/src/Storages/protos/CMakeLists.txt new file mode 100644 index 00000000000..42df78dd87b --- /dev/null +++ b/src/Storages/protos/CMakeLists.txt @@ -0,0 +1,11 @@ +PROTOBUF_GENERATE_GRPC_CPP(clickhouse_s3_reader_proto_sources clickhouse_s3_reader_proto_headers clickhouse_s3_reader.proto) + +# Ignore warnings while compiling protobuf-generated *.pb.h and *.pb.cpp files. +set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -w") + +# Disable clang-tidy for protobuf-generated *.pb.h and *.pb.cpp files. 
+set (CMAKE_CXX_CLANG_TIDY "") + +add_library(clickhouse_s3_reader_protos ${clickhouse_s3_reader_proto_headers} ${clickhouse_s3_reader_proto_sources}) +target_include_directories(clickhouse_s3_reader_protos SYSTEM PUBLIC ${gRPC_INCLUDE_DIRS} ${Protobuf_INCLUDE_DIR} ${CMAKE_CURRENT_BINARY_DIR}) +target_link_libraries (clickhouse_s3_reader_protos PUBLIC ${gRPC_LIBRARIES}) diff --git a/src/Server/grpc_protos/clickhouse_s3_task_server.proto b/src/Storages/protos/clickhouse_s3_reader.proto similarity index 85% rename from src/Server/grpc_protos/clickhouse_s3_task_server.proto rename to src/Storages/protos/clickhouse_s3_reader.proto index 6b3b8a34ad4..18d1102d40b 100644 --- a/src/Server/grpc_protos/clickhouse_s3_task_server.proto +++ b/src/Storages/protos/clickhouse_s3_reader.proto @@ -1,7 +1,6 @@ - syntax = "proto3"; -package clickhouse.s3_task_server; +package clickhouse.s3_reader; service S3TaskServer { @@ -13,8 +12,7 @@ message S3TaskRequest { string query_id = 1; } - message S3TaskReply { string message = 1; int32 error = 2; -} \ No newline at end of file +} diff --git a/src/TableFunctions/TableFunctionS3Distributed.cpp b/src/TableFunctions/TableFunctionS3Distributed.cpp new file mode 100644 index 00000000000..f16371f2d47 --- /dev/null +++ b/src/TableFunctions/TableFunctionS3Distributed.cpp @@ -0,0 +1,117 @@ +#include +#include +#include "Storages/System/StorageSystemOne.h" + +#if USE_AWS_S3 + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "registerTableFunctions.h" + +namespace DB +{ + +namespace ErrorCodes +{ + extern const int LOGICAL_ERROR; + extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH; +} + +void TableFunctionS3Distributed::parseArguments(const ASTPtr & ast_function, const Context & context) +{ + /// Parse args + ASTs & args_func = ast_function->children; + + if (args_func.size() != 1) + throw Exception("Table function '" + getName() + "' must have arguments.", ErrorCodes::LOGICAL_ERROR); + + ASTs & args = args_func.at(0)->children; + + if (args.size() < 3 || args.size() > 6) + throw Exception("Table function '" + getName() + "' requires 3 to 6 arguments: url, [access_key_id, secret_access_key,] format, structure and [compression_method].", + ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); + + for (auto & arg : args) + arg = evaluateConstantExpressionOrIdentifierAsLiteral(arg, context); + + filename = args[0]->as().value.safeGet(); + + if (args.size() < 5) + { + format = args[1]->as().value.safeGet(); + structure = args[2]->as().value.safeGet(); + } + else + { + access_key_id = args[1]->as().value.safeGet(); + secret_access_key = args[2]->as().value.safeGet(); + format = args[3]->as().value.safeGet(); + structure = args[4]->as().value.safeGet(); + } + + if (args.size() == 4 || args.size() == 6) + compression_method = args.back()->as().value.safeGet(); +} + + +ColumnsDescription TableFunctionS3Distributed::getActualTableStructure(const Context & context) const +{ + return parseColumnsListFromString(structure, context); +} + +StoragePtr TableFunctionS3Distributed::executeImpl(const ASTPtr & /*ast_function*/, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const +{ + Poco::URI uri (filename); + S3::URI s3_uri (uri); + // UInt64 min_upload_part_size = context.getSettingsRef().s3_min_upload_part_size; + // UInt64 max_single_part_upload_size = context.getSettingsRef().s3_max_single_part_upload_size; + UInt64 max_connections = context.getSettingsRef().s3_max_connections; + + 
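    /// The steps below: build the S3 client/credentials bundle, expand the glob into concrete
    /// keys, turn each key into a full "endpoint/bucket/key" task string, and register the list
    /// with the TaskSupervisor under the current query id so that other servers can pull tasks
    /// one by one. The storage returned from this function is only a placeholder.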
StorageS3::ClientAuthentificaiton client_auth{s3_uri, access_key_id, secret_access_key, max_connections, {}, {}}; + StorageS3::updateClientAndAuthSettings(context, client_auth); + + auto lists = StorageS3::listFilesWithRegexpMatching(*client_auth.client, client_auth.uri); + Strings tasks; + tasks.reserve(lists.size()); + + for (auto & value : lists) { + tasks.emplace_back(client_auth.uri.endpoint + '/' + client_auth.uri.bucket + '/' + value); + } + + std::cout << "query_id " << context.getCurrentQueryId() << std::endl; + + /// Register resolver, which will give other nodes a task to execute + TaskSupervisor::instance().registerNextTaskResolver( + std::make_unique(context.getCurrentQueryId(), std::move(tasks))); + + StoragePtr storage = StorageSystemOne::create(StorageID(getDatabaseName(), table_name)); + + storage->startup(); + + std::this_thread::sleep_for(std::chrono::seconds(60)); + + return storage; +} + + +void registerTableFunctionS3Distributed(TableFunctionFactory & factory) +{ + factory.registerFunction(); +} + +void registerTableFunctionCOSDistributed(TableFunctionFactory & factory) +{ + factory.registerFunction(); +} + +} + +#endif diff --git a/src/TableFunctions/TableFunctionS3Distributed.h b/src/TableFunctions/TableFunctionS3Distributed.h new file mode 100644 index 00000000000..767077f07b5 --- /dev/null +++ b/src/TableFunctions/TableFunctionS3Distributed.h @@ -0,0 +1,61 @@ +#pragma once + +#include + +#if USE_AWS_S3 + +#include + + +namespace DB +{ + +class Context; + +/* s3(source, [access_key_id, secret_access_key,] format, structure) - creates a temporary storage for a file in S3 + */ +class TableFunctionS3Distributed : public ITableFunction +{ +public: + static constexpr auto name = "s3Distributed"; + std::string getName() const override + { + return name; + } + bool hasStaticStructure() const override { return true; } + +protected: + StoragePtr executeImpl( + const ASTPtr & ast_function, + const Context & context, + const std::string & table_name, + ColumnsDescription cached_columns) const override; + + const char * getStorageTypeName() const override { return "S3Distributed"; } + + ColumnsDescription getActualTableStructure(const Context & context) const override; + void parseArguments(const ASTPtr & ast_function, const Context & context) override; + + String filename; + String format; + String structure; + String access_key_id; + String secret_access_key; + String compression_method = "auto"; +}; + +class TableFunctionCOSDistributed : public TableFunctionS3Distributed +{ +public: + static constexpr auto name = "cosnDistributed"; + std::string getName() const override + { + return name; + } +private: + const char * getStorageTypeName() const override { return "COSNDistributed"; } +}; + +} + +#endif diff --git a/src/TableFunctions/registerTableFunctions.cpp b/src/TableFunctions/registerTableFunctions.cpp index 2e55c16d815..2a1d4070b44 100644 --- a/src/TableFunctions/registerTableFunctions.cpp +++ b/src/TableFunctions/registerTableFunctions.cpp @@ -21,6 +21,7 @@ void registerTableFunctions() #if USE_AWS_S3 registerTableFunctionS3(factory); + registerTableFunctionS3Distributed(factory); registerTableFunctionCOS(factory); #endif diff --git a/src/TableFunctions/registerTableFunctions.h b/src/TableFunctions/registerTableFunctions.h index 2654ab2afc2..5cb948761da 100644 --- a/src/TableFunctions/registerTableFunctions.h +++ b/src/TableFunctions/registerTableFunctions.h @@ -21,6 +21,7 @@ void registerTableFunctionGenerate(TableFunctionFactory & factory); #if USE_AWS_S3 void 
registerTableFunctionS3(TableFunctionFactory & factory); +void registerTableFunctionS3Distributed(TableFunctionFactory & factory); void registerTableFunctionCOS(TableFunctionFactory & factory); #endif From 31b4f9b17f4ff35356852d832fc3205be4a0a72e Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Sat, 20 Mar 2021 00:49:18 +0300 Subject: [PATCH 037/108] save --- programs/client/Client.cpp | 23 ++- programs/client/Suggest.cpp | 4 + src/Client/Connection.cpp | 26 +++- src/Client/Connection.h | 4 +- src/Client/ConnectionPool.h | 2 +- src/Common/PoolBase.h | 8 +- src/Core/Protocol.h | 2 +- src/Server/TCPHandler.cpp | 3 + src/Storages/StorageS3Distributed.cpp | 140 ++++++++++++++++++ src/Storages/StorageS3Distributed.h | 48 ++++++ src/Storages/StorageS3ReaderFollower.cpp | 99 ------------- src/Storages/StorageS3ReaderFollower.h | 27 ---- .../TableFunctionS3Distributed.cpp | 33 +++-- .../TableFunctionS3Distributed.h | 1 + 14 files changed, 266 insertions(+), 154 deletions(-) create mode 100644 src/Storages/StorageS3Distributed.cpp create mode 100644 src/Storages/StorageS3Distributed.h delete mode 100644 src/Storages/StorageS3ReaderFollower.cpp delete mode 100644 src/Storages/StorageS3ReaderFollower.h diff --git a/programs/client/Client.cpp b/programs/client/Client.cpp index 1aec3677b41..fde3daa6c43 100644 --- a/programs/client/Client.cpp +++ b/programs/client/Client.cpp @@ -561,6 +561,16 @@ private: connect(); + + if (config().has("next_task")) + { + std::cout << "has next task" << std::endl; + auto next_task = config().getString("next_task", "12345"); + std::cout << "got next task " << next_task << std::endl; + sendNextTaskRequest(next_task); + std::cout << "sended " << std::endl; + } + /// Initialize DateLUT here to avoid counting time spent here as query execution time. const auto local_tz = DateLUT::instance().getTimeZone(); if (!context->getSettingsRef().use_client_time_zone) @@ -1620,8 +1630,7 @@ private: const auto * insert = parsed_query->as(); if (insert && insert->settings_ast) apply_query_settings(*insert->settings_ast); - /// FIXME: try to prettify this cast using `as<>()` - const auto * with_output = dynamic_cast(parsed_query.get()); + const auto * with_output = parsed_query->as(); if (with_output && with_output->settings_ast) apply_query_settings(*with_output->settings_ast); @@ -1699,6 +1708,13 @@ private: } + + void sendNextTaskRequest(std::string id) + { + connection->sendNextTaskRequest(id); + } + + /// Process the query that doesn't require transferring data blocks to the server. 
void processOrdinaryQuery() { @@ -2629,6 +2645,7 @@ public: ("opentelemetry-traceparent", po::value(), "OpenTelemetry traceparent header as described by W3C Trace Context recommendation") ("opentelemetry-tracestate", po::value(), "OpenTelemetry tracestate header as described by W3C Trace Context recommendation") ("history_file", po::value(), "path to history file") + ("next_task", po::value(), "request new task from server") ; Settings cmd_settings; @@ -2792,6 +2809,8 @@ public: config().setBool("highlight", options["highlight"].as()); if (options.count("history_file")) config().setString("history_file", options["history_file"].as()); + if (options.count("next_task")) + config().setString("next_task", options["next_task"].as()); if ((query_fuzzer_runs = options["query-fuzzer-runs"].as())) { diff --git a/programs/client/Suggest.cpp b/programs/client/Suggest.cpp index dfa7048349e..04f90897dd9 100644 --- a/programs/client/Suggest.cpp +++ b/programs/client/Suggest.cpp @@ -3,6 +3,7 @@ #include #include #include +#include "Core/Protocol.h" namespace DB { @@ -33,6 +34,8 @@ void Suggest::load(const ConnectionParameters & connection_parameters, size_t su connection_parameters.compression, connection_parameters.security); + std::cerr << "Connection created" << std::endl; + loadImpl(connection, connection_parameters.timeouts, suggestion_limit); } catch (const Exception & e) @@ -156,6 +159,7 @@ void Suggest::fetch(Connection & connection, const ConnectionTimeouts & timeouts Packet packet = connection.receivePacket(); switch (packet.type) { + case Protocol::Server::NextTaskReply: [[fallthrough]]; case Protocol::Server::Data: fillWordsFromBlock(packet.block); continue; diff --git a/src/Client/Connection.cpp b/src/Client/Connection.cpp index 7c6675873a2..54f31d2a34a 100644 --- a/src/Client/Connection.cpp +++ b/src/Client/Connection.cpp @@ -1,3 +1,4 @@ +#include #include #include #include @@ -553,13 +554,13 @@ void Connection::sendIgnoredPartUUIDs(const std::vector & uuids) } -void Connection::sendNextTaskRequest(const std::string & id) +void Connection::sendNextTaskRequest(const std::string &) { - std::cout << "Connection::sendNextTaskRequest" << std::endl; - std::cout << StackTrace().toString() << std::endl; - writeVarUInt(Protocol::Client::NextTaskRequest, *out); - writeStringBinary(id, *out); - out->next(); + // std::cout << "Connection::sendNextTaskRequest" << std::endl; + // std::cout << StackTrace().toString() << std::endl; + // writeVarUInt(Protocol::Client::NextTaskRequest, *out); + // writeStringBinary(id, *out); + // out->next(); } void Connection::sendPreparedData(ReadBuffer & input, size_t size, const String & name) @@ -783,10 +784,15 @@ Packet Connection::receivePacket() readVarUInt(res.type, *in); } + std::cerr << "res.type " << res.type << ' ' << Protocol::Server::NextTaskReply << std::endl; + switch (res.type) { case Protocol::Server::Data: [[fallthrough]]; case Protocol::Server::Totals: [[fallthrough]]; + case Protocol::Server::NextTaskReply: + res.next_task = receiveNextTask(); + return res; case Protocol::Server::Extremes: res.block = receiveData(); return res; @@ -840,6 +846,14 @@ Packet Connection::receivePacket() } +std::string Connection::receiveNextTask() +{ + String next_task; + readStringBinary(next_task, *in); + return next_task; +} + + Block Connection::receiveData() { initBlockInput(); diff --git a/src/Client/Connection.h b/src/Client/Connection.h index 62f17d6ce2d..1b10202576c 100644 --- a/src/Client/Connection.h +++ b/src/Client/Connection.h @@ -65,6 +65,7 @@ struct Packet 
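Taken together with the Connection changes above, the worker side of the exchange is expected to look roughly like this (a sketch assuming the server answers every NextTaskRequest with exactly one NextTaskReply; error handling omitted):

    #include <Client/Connection.h>
    #include <Core/Protocol.h>

    namespace DB
    {

    /// Illustrative worker loop: keep asking the initiator for S3 keys until none are left.
    void drainTasksSketch(Connection & connection, const String & initial_query_id)
    {
        while (true)
        {
            connection.sendNextTaskRequest(initial_query_id);
            Packet packet = connection.receivePacket();

            /// Stop on anything unexpected as well as on the empty string that marks exhaustion.
            if (packet.type != Protocol::Server::NextTaskReply || packet.next_task.empty())
                break;

            /// ... open packet.next_task as an S3 object and stream it into the query pipeline ...
        }
    }

    }
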
Progress progress; BlockStreamProfileInfo profile_info; std::vector part_uuids; + std::string next_task; Packet() : type(Protocol::Server::Hello) {} }; @@ -271,7 +272,7 @@ private: class LoggerWrapper { public: - LoggerWrapper(Connection & parent_) + explicit LoggerWrapper(Connection & parent_) : log(nullptr), parent(parent_) { } @@ -302,6 +303,7 @@ private: #endif bool ping(); + std::string receiveNextTask(); Block receiveData(); Block receiveLogData(); Block receiveDataImpl(BlockInputStreamPtr & stream); diff --git a/src/Client/ConnectionPool.h b/src/Client/ConnectionPool.h index 9e1d5f78b03..bf73e9756d2 100644 --- a/src/Client/ConnectionPool.h +++ b/src/Client/ConnectionPool.h @@ -26,7 +26,7 @@ public: using Entry = PoolBase::Entry; public: - virtual ~IConnectionPool() {} + virtual ~IConnectionPool() = default; /// Selects the connection to work. /// If force_connected is false, the client must manually ensure that returned connection is good. diff --git a/src/Common/PoolBase.h b/src/Common/PoolBase.h index 43f4fbff9fe..6fc5aee26dd 100644 --- a/src/Common/PoolBase.h +++ b/src/Common/PoolBase.h @@ -51,7 +51,7 @@ private: */ struct PoolEntryHelper { - PoolEntryHelper(PooledObject & data_) : data(data_) { data.in_use = true; } + explicit PoolEntryHelper(PooledObject & data_) : data(data_) { data.in_use = true; } ~PoolEntryHelper() { std::unique_lock lock(data.pool.mutex); @@ -69,7 +69,7 @@ public: public: friend class PoolBase; - Entry() {} /// For deferred initialization. + Entry() = default; /// For deferred initialization. /** The `Entry` object protects the resource from being used by another thread. * The following methods are forbidden for `rvalue`, so you can not write a similar to @@ -99,10 +99,10 @@ public: private: std::shared_ptr data; - Entry(PooledObject & object) : data(std::make_shared(object)) {} + explicit Entry(PooledObject & object) : data(std::make_shared(object)) {} }; - virtual ~PoolBase() {} + virtual ~PoolBase() = default; /** Allocates the object. Wait for free object in pool for 'timeout'. With 'timeout' < 0, the timeout is infinite. */ Entry get(Poco::Timespan::TimeDiff timeout) diff --git a/src/Core/Protocol.h b/src/Core/Protocol.h index a6678ccae62..38dfd171cd9 100644 --- a/src/Core/Protocol.h +++ b/src/Core/Protocol.h @@ -76,7 +76,7 @@ namespace Protocol Log = 10, /// System logs of the query execution TableColumns = 11, /// Columns' description for default values calculation PartUUIDs = 12, /// List of unique parts ids. 
- NextTaskReply = 13, /// String that describes the next task (a file to read from S3) + NextTaskReply = 13, /// String that describes the next task (a file to read from S3) MAX = NextTaskReply, }; diff --git a/src/Server/TCPHandler.cpp b/src/Server/TCPHandler.cpp index 2c22a1421c0..6e14f8c0800 100644 --- a/src/Server/TCPHandler.cpp +++ b/src/Server/TCPHandler.cpp @@ -965,6 +965,9 @@ bool TCPHandler::receivePacket() UInt64 packet_type = 0; readVarUInt(packet_type, *in); + + std::cout << "TCPHander receivePacket" << packet_type << ' ' << Protocol::Client::NextTaskRequest << std::endl; + switch (packet_type) { case Protocol::Client::NextTaskRequest: diff --git a/src/Storages/StorageS3Distributed.cpp b/src/Storages/StorageS3Distributed.cpp new file mode 100644 index 00000000000..16b07aa3403 --- /dev/null +++ b/src/Storages/StorageS3Distributed.cpp @@ -0,0 +1,140 @@ +#include "Storages/StorageS3Distributed.h" + +#include +#include "Client/Connection.h" +#include "DataStreams/RemoteBlockInputStream.h" +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include + + +namespace DB +{ + + +namespace ErrorCodes +{ + extern const int LOGICAL_ERROR; +} + + + +StorageS3Distributed::StorageS3Distributed( + const StorageID & table_id_, + std::string cluster_name_, + const Context & context) + : IStorage(table_id_) + , cluster_name(cluster_name_) + , cluster(context.getCluster(cluster_name)->getClusterWithReplicasAsShards(context.getSettings())) +{ + StorageInMemoryMetadata storage_metadata; + storage_metadata.setColumns(ColumnsDescription({{"dummy", std::make_shared()}})); + setInMemoryMetadata(storage_metadata); +} + + + +Pipe StorageS3Distributed::read( + const Names & column_names, + const StorageMetadataPtr & metadata_snapshot, + SelectQueryInfo & query_info, + const Context & context, + QueryProcessingStage::Enum /*processed_stage*/, + size_t /*max_block_size*/, + unsigned /*num_streams*/) +{ + /// Secondary query, need to read from S3 + if (context.getCurrentQueryId() != context.getInitialQueryId()) { + std::cout << "Secondary query" << std::endl; + auto initial_host = context.getClientInfo().initial_address.host().toString(); + auto initial_port = std::to_string(context.getClientInfo().initial_address.port()); + // auto client_info = context.getClientInfo(); + + std::cout << initial_host << ' ' << initial_port << std::endl; + + + String password; + String cluster_anime; + String cluster_secret; + + // auto connection = std::make_shared( + // /*host=*/initial_host, + // /*port=*/initial_port, + // /*default_database=*/context.getGlobalContext().getCurrentDatabase(), + // /*user=*/client_info.initial_user, + // /*password=*/password, + // /*cluster=*/cluster_anime, + // /*cluster_secret=*/cluster_secret + // ); + + // connection->sendNextTaskRequest(context.getInitialQueryId()); + // auto packet = connection->receivePacket(); + + + std::this_thread::sleep_for(std::chrono::seconds(1)); + + + Block header{ColumnWithTypeAndName( + DataTypeUInt8().createColumn(), + std::make_shared(), + "dummy")}; + + auto column = DataTypeUInt8().createColumnConst(1, 0u)->convertToFullColumnIfConst(); + Chunk chunk({ std::move(column) }, 1); + + return Pipe(std::make_shared(std::move(header), std::move(chunk))); + } + + + Pipes pipes; + connections.reserve(cluster->getShardCount()); + + std::cout << "StorageS3Distributed::read" << std::endl; + + for (const auto & replicas : cluster->getShardsAddresses()) { + /// There will be only one replica, because we 
consider each replica as a shard + for (const auto & node : replicas) + { + connections.emplace_back(std::make_shared( + /*host=*/node.host_name, + /*port=*/node.port, + /*default_database=*/context.getGlobalContext().getCurrentDatabase(), + /*user=*/node.user, + /*password=*/node.password, + /*cluster=*/node.cluster, + /*cluster_secret=*/node.cluster_secret + )); + auto stream = std::make_shared( + /*connection=*/*connections.back(), + /*query=*/queryToString(query_info.query), + /*header=*/metadata_snapshot->getSampleBlock(), + /*context=*/context + ); + pipes.emplace_back(std::make_shared(std::move(stream))); + } + } + + + metadata_snapshot->check(column_names, getVirtuals(), getStorageID()); + + + return Pipe::unitePipes(std::move(pipes)); +} + + + + + + +} + diff --git a/src/Storages/StorageS3Distributed.h b/src/Storages/StorageS3Distributed.h new file mode 100644 index 00000000000..9cbedc4543a --- /dev/null +++ b/src/Storages/StorageS3Distributed.h @@ -0,0 +1,48 @@ +#pragma once + +#include "Client/Connection.h" +#include "Interpreters/Cluster.h" +#include "Storages/IStorage.h" + +#include +#include "ext/shared_ptr_helper.h" + +namespace DB +{ + +namespace ErrorCodes +{ + extern const int LOGICAL_ERROR; +} + + +class Context; + + +class StorageS3Distributed : public ext::shared_ptr_helper, public IStorage +{ + friend struct ext::shared_ptr_helper; +public: + std::string getName() const override { return "S3Distributed"; } + + Pipe read( + const Names & /*column_names*/, + const StorageMetadataPtr & /*metadata_snapshot*/, + SelectQueryInfo & /*query_info*/, + const Context & /*context*/, + QueryProcessingStage::Enum /*processed_stage*/, + size_t /*max_block_size*/, + unsigned /*num_streams*/) override; + + +protected: + StorageS3Distributed(const StorageID & table_id_, std::string cluster_name_, const Context & context); + +private: + std::vector> connections; + std::string cluster_name; + ClusterPtr cluster; +}; + + +} diff --git a/src/Storages/StorageS3ReaderFollower.cpp b/src/Storages/StorageS3ReaderFollower.cpp deleted file mode 100644 index 075b28ddb0e..00000000000 --- a/src/Storages/StorageS3ReaderFollower.cpp +++ /dev/null @@ -1,99 +0,0 @@ -#include "Storages/StorageS3ReaderFollower.h" - -#include -#include -#include -#include "Common/Throttler.h" -#include "clickhouse_s3_reader.grpc.pb.h" - - -#include -#include - -using grpc::Channel; -using grpc::ClientContext; -using grpc::Status; -using S3TaskServer = clickhouse::s3_reader::S3TaskServer; -using S3TaskRequest = clickhouse::s3_reader::S3TaskRequest; -using S3TaskReply = clickhouse::s3_reader::S3TaskReply; - - - -namespace DB -{ - - -namespace ErrorCodes -{ - extern const int LOGICAL_ERROR; -} - -namespace -{ - -class S3TaskManagerClient -{ -public: - explicit S3TaskManagerClient(std::shared_ptr channel) - : stub(S3TaskServer::NewStub(channel)) - {} - - - [[ maybe_unused ]] std::string GetNext(const std::string & query_id) - { - S3TaskRequest request; - request.set_query_id(query_id); - - S3TaskReply reply; - ClientContext context; - - Status status = stub->GetNext(&context, request, &reply); - - if (status.ok()) { - return reply.message(); - } else { - throw Exception("RPC Failed", ErrorCodes::LOGICAL_ERROR); - } - - } -private: - std::unique_ptr stub; -}; - -} - -const auto * target_str = "localhost:50051"; - -class StorageS3ReaderFollower::Impl -{ -public: - Impl() - : client(grpc::CreateChannel(target_str, grpc::InsecureChannelCredentials())) { - } - void startup(); -private: - S3TaskManagerClient client; -}; - -void 
StorageS3ReaderFollower::Impl::startup() { - -} - - -void StorageS3ReaderFollower::startup() -{ - return pimpl->startup(); -} - -bool StorageS3ReaderFollower::isRemote() const -{ - return true; -} - -std::string StorageS3ReaderFollower::getName() const -{ - return "StorageS3ReaderFollower"; -} - -} - diff --git a/src/Storages/StorageS3ReaderFollower.h b/src/Storages/StorageS3ReaderFollower.h deleted file mode 100644 index f07670227c6..00000000000 --- a/src/Storages/StorageS3ReaderFollower.h +++ /dev/null @@ -1,27 +0,0 @@ -#pragma once - -#include "Storages/IStorage.h" - -#include - -namespace DB -{ - -namespace ErrorCodes -{ - extern const int LOGICAL_ERROR; -} - - -class StorageS3ReaderFollower : public IStorage -{ - std::string getName() const override; - bool isRemote() const override; - void startup() override; -private: - class Impl; - std::unique_ptr pimpl; -}; - - -} diff --git a/src/TableFunctions/TableFunctionS3Distributed.cpp b/src/TableFunctions/TableFunctionS3Distributed.cpp index f16371f2d47..40cfd42d322 100644 --- a/src/TableFunctions/TableFunctionS3Distributed.cpp +++ b/src/TableFunctions/TableFunctionS3Distributed.cpp @@ -1,5 +1,8 @@ #include #include +#include "DataStreams/RemoteBlockInputStream.h" +#include "Processors/Sources/SourceFromInputStream.h" +#include "Storages/StorageS3Distributed.h" #include "Storages/System/StorageSystemOne.h" #if USE_AWS_S3 @@ -35,29 +38,31 @@ void TableFunctionS3Distributed::parseArguments(const ASTPtr & ast_function, con ASTs & args = args_func.at(0)->children; - if (args.size() < 3 || args.size() > 6) - throw Exception("Table function '" + getName() + "' requires 3 to 6 arguments: url, [access_key_id, secret_access_key,] format, structure and [compression_method].", + if (args.size() < 4 || args.size() > 7) + throw Exception("Table function '" + getName() + "' requires 4 to 7 arguments: cluster, url," + + "[access_key_id, secret_access_key,] format, structure and [compression_method].", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); for (auto & arg : args) arg = evaluateConstantExpressionOrIdentifierAsLiteral(arg, context); - filename = args[0]->as().value.safeGet(); + cluster_name = args[0]->as().value.safeGet(); + filename = args[1]->as().value.safeGet(); if (args.size() < 5) { - format = args[1]->as().value.safeGet(); - structure = args[2]->as().value.safeGet(); + format = args[2]->as().value.safeGet(); + structure = args[3]->as().value.safeGet(); } else { - access_key_id = args[1]->as().value.safeGet(); - secret_access_key = args[2]->as().value.safeGet(); - format = args[3]->as().value.safeGet(); - structure = args[4]->as().value.safeGet(); + access_key_id = args[2]->as().value.safeGet(); + secret_access_key = args[3]->as().value.safeGet(); + format = args[4]->as().value.safeGet(); + structure = args[5]->as().value.safeGet(); } - if (args.size() == 4 || args.size() == 6) + if (args.size() == 5 || args.size() == 7) compression_method = args.back()->as().value.safeGet(); } @@ -67,7 +72,9 @@ ColumnsDescription TableFunctionS3Distributed::getActualTableStructure(const Con return parseColumnsListFromString(structure, context); } -StoragePtr TableFunctionS3Distributed::executeImpl(const ASTPtr & /*ast_function*/, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const +StoragePtr TableFunctionS3Distributed::executeImpl( + const ASTPtr & /*ast_function*/, const Context & context, + const std::string & table_name, ColumnsDescription /*cached_columns*/) const { Poco::URI uri (filename); S3::URI 
s3_uri (uri); @@ -92,11 +99,11 @@ StoragePtr TableFunctionS3Distributed::executeImpl(const ASTPtr & /*ast_function TaskSupervisor::instance().registerNextTaskResolver( std::make_unique(context.getCurrentQueryId(), std::move(tasks))); - StoragePtr storage = StorageSystemOne::create(StorageID(getDatabaseName(), table_name)); + StoragePtr storage = StorageS3Distributed::create(StorageID(getDatabaseName(), table_name), cluster_name, context); storage->startup(); - std::this_thread::sleep_for(std::chrono::seconds(60)); + // std::this_thread::sleep_for(std::chrono::seconds(60)); return storage; } diff --git a/src/TableFunctions/TableFunctionS3Distributed.h b/src/TableFunctions/TableFunctionS3Distributed.h index 767077f07b5..d531ef175bb 100644 --- a/src/TableFunctions/TableFunctionS3Distributed.h +++ b/src/TableFunctions/TableFunctionS3Distributed.h @@ -36,6 +36,7 @@ protected: ColumnsDescription getActualTableStructure(const Context & context) const override; void parseArguments(const ASTPtr & ast_function, const Context & context) override; + String cluster_name; String filename; String format; String structure; From 64b4cd0e63ba0941c92dcfd679a180f216419921 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Mon, 22 Mar 2021 20:12:31 +0300 Subject: [PATCH 038/108] save --- src/Client/Connection.cpp | 36 +-- src/Client/Connection.h | 17 +- src/Server/TCPHandler.cpp | 22 +- src/Server/TCPHandler.h | 3 +- src/Storages/StorageS3.cpp | 152 +++++------ src/Storages/StorageS3.h | 51 +++- src/Storages/StorageS3Distributed.cpp | 250 +++++++++++++++--- src/Storages/StorageS3Distributed.h | 24 +- .../TableFunctionS3Distributed.cpp | 13 +- 9 files changed, 405 insertions(+), 163 deletions(-) diff --git a/src/Client/Connection.cpp b/src/Client/Connection.cpp index 54f31d2a34a..7fcd8332249 100644 --- a/src/Client/Connection.cpp +++ b/src/Client/Connection.cpp @@ -1,5 +1,6 @@ #include #include +#include #include #include #include @@ -91,7 +92,10 @@ void Connection::connect(const ConnectionTimeouts & timeouts) socket = std::make_unique(); } - current_resolved_address = DNSResolver::instance().resolveAddress(host, port); + if (!explicitly_resolved_address) + current_resolved_address = DNSResolver::instance().resolveAddress(host, port); + else + current_resolved_address = Poco::Net::SocketAddress(explicitly_resolved_address.value()); const auto & connection_timeout = static_cast(secure) ? 
timeouts.secure_connection_timeout : timeouts.connection_timeout; socket->connect(*current_resolved_address, connection_timeout); @@ -554,13 +558,12 @@ void Connection::sendIgnoredPartUUIDs(const std::vector & uuids) } -void Connection::sendNextTaskRequest(const std::string &) +void Connection::sendNextTaskRequest(const std::string & id) { - // std::cout << "Connection::sendNextTaskRequest" << std::endl; - // std::cout << StackTrace().toString() << std::endl; - // writeVarUInt(Protocol::Client::NextTaskRequest, *out); - // writeStringBinary(id, *out); - // out->next(); + std::cout << "Connection::sendNextTaskRequest" << std::endl; + writeVarUInt(Protocol::Client::NextTaskRequest, *out); + writeStringBinary(id, *out); + out->next(); } void Connection::sendPreparedData(ReadBuffer & input, size_t size, const String & name) @@ -784,15 +787,10 @@ Packet Connection::receivePacket() readVarUInt(res.type, *in); } - std::cerr << "res.type " << res.type << ' ' << Protocol::Server::NextTaskReply << std::endl; - switch (res.type) { case Protocol::Server::Data: [[fallthrough]]; case Protocol::Server::Totals: [[fallthrough]]; - case Protocol::Server::NextTaskReply: - res.next_task = receiveNextTask(); - return res; case Protocol::Server::Extremes: res.block = receiveData(); return res; @@ -824,6 +822,10 @@ Packet Connection::receivePacket() readVectorBinary(res.part_uuids, *in); return res; + case Protocol::Server::NextTaskReply: + res.next_task = receiveNextTask(); + return res; + default: /// In unknown state, disconnect - to not leave unsynchronised connection. disconnect(); @@ -846,7 +848,7 @@ Packet Connection::receivePacket() } -std::string Connection::receiveNextTask() +String Connection::receiveNextTask() const { String next_task; readStringBinary(next_task, *in); @@ -932,13 +934,13 @@ void Connection::setDescription() } -std::unique_ptr Connection::receiveException() +std::unique_ptr Connection::receiveException() const { return std::make_unique(readException(*in, "Received from " + getDescription(), true /* remote */)); } -std::vector Connection::receiveMultistringMessage(UInt64 msg_type) +std::vector Connection::receiveMultistringMessage(UInt64 msg_type) const { size_t num = Protocol::Server::stringsInMessage(msg_type); std::vector strings(num); @@ -948,7 +950,7 @@ std::vector Connection::receiveMultistringMessage(UInt64 msg_type) } -Progress Connection::receiveProgress() +Progress Connection::receiveProgress() const { Progress progress; progress.read(*in, server_revision); @@ -956,7 +958,7 @@ Progress Connection::receiveProgress() } -BlockStreamProfileInfo Connection::receiveProfileInfo() +BlockStreamProfileInfo Connection::receiveProfileInfo() const { BlockStreamProfileInfo profile_info; profile_info.read(*in); diff --git a/src/Client/Connection.h b/src/Client/Connection.h index 1b10202576c..e38b0501964 100644 --- a/src/Client/Connection.h +++ b/src/Client/Connection.h @@ -65,7 +65,7 @@ struct Packet Progress progress; BlockStreamProfileInfo profile_info; std::vector part_uuids; - std::string next_task; + String next_task; Packet() : type(Protocol::Server::Hello) {} }; @@ -205,9 +205,12 @@ public: in->setAsyncCallback(std::move(async_callback)); } -private: +public: String host; UInt16 port; + + std::optional explicitly_resolved_address; + String default_database; String user; String password; @@ -303,15 +306,15 @@ private: #endif bool ping(); - std::string receiveNextTask(); + String receiveNextTask() const; Block receiveData(); Block receiveLogData(); Block 
receiveDataImpl(BlockInputStreamPtr & stream); - std::vector receiveMultistringMessage(UInt64 msg_type); - std::unique_ptr receiveException(); - Progress receiveProgress(); - BlockStreamProfileInfo receiveProfileInfo(); + std::vector receiveMultistringMessage(UInt64 msg_type) const; + std::unique_ptr receiveException() const; + Progress receiveProgress() const; + BlockStreamProfileInfo receiveProfileInfo() const; void initInputBuffers(); void initBlockInput(); diff --git a/src/Server/TCPHandler.cpp b/src/Server/TCPHandler.cpp index 6e14f8c0800..0270c015e6c 100644 --- a/src/Server/TCPHandler.cpp +++ b/src/Server/TCPHandler.cpp @@ -757,6 +757,15 @@ void TCPHandler::sendPartUUIDs() } } + +void TCPHandler::sendNextTaskReply(String reply) +{ + LOG_DEBUG(log, "Nexttask for id is {} ", reply); + writeVarUInt(Protocol::Server::NextTaskReply, *out); + writeStringBinary(reply, *out); + out->next(); +} + void TCPHandler::sendProfileInfo(const BlockStreamProfileInfo & info) { writeVarUInt(Protocol::Server::ProfileInfo, *out); @@ -971,10 +980,14 @@ bool TCPHandler::receivePacket() switch (packet_type) { case Protocol::Client::NextTaskRequest: + { std::cout << "Protocol::Client::NextTaskRequest" << std::endl; - std::cout << StackTrace().toString() << std::endl; - receiveNextTaskRequest(); + auto id = receiveNextTaskRequest(); + auto next = TaskSupervisor::instance().getNextTaskForId(id); + sendNextTaskReply(next); return false; + } + case Protocol::Client::IgnoredPartUUIDs: /// Part uuids packet if any comes before query. receiveIgnoredPartUUIDs(); @@ -1015,13 +1028,12 @@ bool TCPHandler::receivePacket() } -void TCPHandler::receiveNextTaskRequest() +String TCPHandler::receiveNextTaskRequest() { std::string id; readStringBinary(id, *in); LOG_DEBUG(log, "Got nextTaskRequest {}", id); - auto next = TaskSupervisor::instance().getNextTaskForId(id); - LOG_DEBUG(log, "Nexttask for id is {} ", next); + return id; } void TCPHandler::receiveIgnoredPartUUIDs() diff --git a/src/Server/TCPHandler.h b/src/Server/TCPHandler.h index ca5f720273d..b737f1f2c7a 100644 --- a/src/Server/TCPHandler.h +++ b/src/Server/TCPHandler.h @@ -169,7 +169,7 @@ private: bool receivePacket(); void receiveQuery(); void receiveIgnoredPartUUIDs(); - void receiveNextTaskRequest(); + String receiveNextTaskRequest(); bool receiveData(bool scalar); bool readDataNext(const size_t & poll_interval, const int & receive_timeout); void readData(const Settings & connection_settings); @@ -199,6 +199,7 @@ private: void sendLogs(); void sendEndOfStream(); void sendPartUUIDs(); + void sendNextTaskReply(String reply); void sendProfileInfo(const BlockStreamProfileInfo & info); void sendTotals(const Block & totals); void sendExtremes(const Block & extremes); diff --git a/src/Storages/StorageS3.cpp b/src/Storages/StorageS3.cpp index e31862d68cb..e81d8da6817 100644 --- a/src/Storages/StorageS3.cpp +++ b/src/Storages/StorageS3.cpp @@ -47,98 +47,84 @@ namespace ErrorCodes } -namespace +Block StorageS3Source::getHeader(Block sample_block, bool with_path_column, bool with_file_column) { - class StorageS3Source : public SourceWithProgress + if (with_path_column) + sample_block.insert({DataTypeString().createColumn(), std::make_shared(), "_path"}); + if (with_file_column) + sample_block.insert({DataTypeString().createColumn(), std::make_shared(), "_file"}); + + return sample_block; +} + +StorageS3Source::StorageS3Source( + bool need_path, + bool need_file, + const String & format, + String name_, + const Block & sample_block, + const Context & context, + const 
ColumnsDescription & columns, + UInt64 max_block_size, + const CompressionMethod compression_method, + const std::shared_ptr & client, + const String & bucket, + const String & key) + : SourceWithProgress(getHeader(sample_block, need_path, need_file)) + , name(std::move(name_)) + , with_file_column(need_file) + , with_path_column(need_path) + , file_path(bucket + "/" + key) +{ + read_buf = wrapReadBufferWithCompressionMethod(std::make_unique(client, bucket, key), compression_method); + auto input_format = FormatFactory::instance().getInput(format, *read_buf, sample_block, context, max_block_size); + reader = std::make_shared(input_format); + + if (columns.hasDefaults()) + reader = std::make_shared(reader, columns, context); +} + +String StorageS3Source::getName() const +{ + return name; +} + +Chunk StorageS3Source::generate() +{ + if (!reader) + return {}; + + if (!initialized) { - public: + reader->readSuffix(); + initialized = true; + } - static Block getHeader(Block sample_block, bool with_path_column, bool with_file_column) + if (auto block = reader->read()) + { + auto columns = block.getColumns(); + UInt64 num_rows = block.rows(); + + if (with_path_column) + columns.push_back(DataTypeString().createColumnConst(num_rows, file_path)->convertToFullColumnIfConst()); + if (with_file_column) { - if (with_path_column) - sample_block.insert({DataTypeString().createColumn(), std::make_shared(), "_path"}); - if (with_file_column) - sample_block.insert({DataTypeString().createColumn(), std::make_shared(), "_file"}); - - return sample_block; + size_t last_slash_pos = file_path.find_last_of('/'); + columns.push_back(DataTypeString().createColumnConst(num_rows, file_path.substr( + last_slash_pos + 1))->convertToFullColumnIfConst()); } - StorageS3Source( - bool need_path, - bool need_file, - const String & format, - String name_, - const Block & sample_block, - ContextPtr context, - const ColumnsDescription & columns, - UInt64 max_block_size, - const CompressionMethod compression_method, - const std::shared_ptr & client, - const String & bucket, - const String & key) - : SourceWithProgress(getHeader(sample_block, need_path, need_file)) - , name(std::move(name_)) - , with_file_column(need_file) - , with_path_column(need_path) - , file_path(bucket + "/" + key) - { - read_buf = wrapReadBufferWithCompressionMethod(std::make_unique(client, bucket, key), compression_method); - auto input_format = FormatFactory::instance().getInput(format, *read_buf, sample_block, context, max_block_size); - reader = std::make_shared(input_format); + return Chunk(std::move(columns), num_rows); + } - if (columns.hasDefaults()) - reader = std::make_shared(reader, columns, context); - } + reader.reset(); - String getName() const override - { - return name; - } + return {}; +} - Chunk generate() override - { - if (!reader) - return {}; - - if (!initialized) - { - reader->readSuffix(); - initialized = true; - } - - if (auto block = reader->read()) - { - auto columns = block.getColumns(); - UInt64 num_rows = block.rows(); - - if (with_path_column) - columns.push_back(DataTypeString().createColumnConst(num_rows, file_path)->convertToFullColumnIfConst()); - if (with_file_column) - { - size_t last_slash_pos = file_path.find_last_of('/'); - columns.push_back(DataTypeString().createColumnConst(num_rows, file_path.substr( - last_slash_pos + 1))->convertToFullColumnIfConst()); - } - - return Chunk(std::move(columns), num_rows); - } - - reader.reset(); - - return {}; - } - - private: - String name; - std::unique_ptr read_buf; - 
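    /// One source reads one S3 object: read_buf wraps ReadBufferFromS3 (plus optional
    /// decompression), reader parses it with the requested input format, and generate()
    /// forwards the resulting blocks, appending the _path and _file virtual columns on demand.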
BlockInputStreamPtr reader; - bool initialized = false; - bool with_file_column = false; - bool with_path_column = false; - String file_path; - }; - - class StorageS3BlockOutputStream : public IBlockOutputStream +namespace +{ + class StorageS3BlockOutputStream : public IBlockOutputStream { public: StorageS3BlockOutputStream( diff --git a/src/Storages/StorageS3.h b/src/Storages/StorageS3.h index 6f75592f1d2..c10f8ec12fb 100644 --- a/src/Storages/StorageS3.h +++ b/src/Storages/StorageS3.h @@ -1,16 +1,22 @@ #pragma once #include -#include "TableFunctions/TableFunctionS3Distributed.h" #if USE_AWS_S3 +#include + +#include + #include #include + +#include #include #include #include #include +#include namespace Aws::S3 { @@ -20,6 +26,41 @@ namespace Aws::S3 namespace DB { +class StorageS3SequentialSource; +class StorageS3Source : public SourceWithProgress +{ +public: + + static Block getHeader(Block sample_block, bool with_path_column, bool with_file_column); + + StorageS3Source( + bool need_path, + bool need_file, + const String & format, + String name_, + const Block & sample_block, + const Context & context, + const ColumnsDescription & columns, + UInt64 max_block_size, + const CompressionMethod compression_method, + const std::shared_ptr & client, + const String & bucket, + const String & key); + + String getName() const override; + + Chunk generate() override; + +private: + String name; + std::unique_ptr read_buf; + BlockInputStreamPtr reader; + bool initialized = false; + bool with_file_column = false; + bool with_path_column = false; + String file_path; +}; + /** * This class represents table engine for external S3 urls. * It sends HTTP GET to server when select is called and @@ -60,6 +101,11 @@ public: NamesAndTypesList getVirtuals() const override; private: + + friend class StorageS3Distributed; + friend class TableFunctionS3Distributed; + friend class StorageS3SequentialSource; + friend class StorageS3Distributed; struct ClientAuthentificaiton { const S3::URI uri; @@ -79,9 +125,6 @@ private: String compression_method; String name; - - friend class TableFunctionS3Distributed; - static Strings listFilesWithRegexpMatching(Aws::S3::S3Client & client, const S3::URI & globbed_uri); static void updateClientAndAuthSettings(ContextPtr, ClientAuthentificaiton &); }; diff --git a/src/Storages/StorageS3Distributed.cpp b/src/Storages/StorageS3Distributed.cpp index 16b07aa3403..02459700779 100644 --- a/src/Storages/StorageS3Distributed.cpp +++ b/src/Storages/StorageS3Distributed.cpp @@ -1,21 +1,49 @@ #include "Storages/StorageS3Distributed.h" +#include "Common/Exception.h" #include #include "Client/Connection.h" #include "DataStreams/RemoteBlockInputStream.h" #include #include +#include + +#include +#include +#include +#include + +#include + +#include +#include +#include + +#include + + #include #include #include #include #include #include +#include #include +#include +#include + #include #include #include +#include + +#include +#include +#include + +#include namespace DB @@ -27,19 +55,164 @@ namespace ErrorCodes extern const int LOGICAL_ERROR; } +class StorageS3SequentialSource : public SourceWithProgress +{ +public: + + static Block getHeader(Block sample_block, bool with_path_column, bool with_file_column) + { + if (with_path_column) + sample_block.insert({DataTypeString().createColumn(), std::make_shared(), "_path"}); + if (with_file_column) + sample_block.insert({DataTypeString().createColumn(), std::make_shared(), "_file"}); + + return sample_block; + } + + StorageS3SequentialSource( + String 
initial_query_id_, + bool need_path_, + bool need_file_, + const String & format_, + String name_, + const Block & sample_block_, + const Context & context_, + const ColumnsDescription & columns_, + UInt64 max_block_size_, + const CompressionMethod compression_method_, + StorageS3::ClientAuthentificaiton & client_auth_) + : SourceWithProgress(getHeader(sample_block_, need_path_, need_file_)) + , need_path(need_path_) + , need_file(need_file_) + , format(format_) + , name(name_) + , sample_block(sample_block_) + , context(context_) + , columns(columns_) + , max_block_size(max_block_size_) + , compression_method(compression_method_) + , client_auth(client_auth_) + , initial_query_id(initial_query_id_) + { + initiator_connection = std::make_shared( + /*host*/"127.0.0.1", + /*port*/9000, + /*default_database=*/context.getGlobalContext().getCurrentDatabase(), + /*user=*/context.getClientInfo().initial_user, + /*password=*/"", + /*cluster=*/"", + /*cluster_secret=*/"" + ); + + createOrUpdateInnerSource(); + } + + String getName() const override + { + return name; + } + + Chunk generate() override + { + auto chunk = inner->generate(); + if (!chunk && !createOrUpdateInnerSource()) + return {}; + return inner->generate(); + } + +private: + + String askAboutNextKey() + { + try + { + initiator_connection->connect(timeouts); + initiator_connection->sendNextTaskRequest(initial_query_id); + auto packet = initiator_connection->receivePacket(); + assert(packet.type = Protocol::Server::NextTaskReply); + LOG_DEBUG(&Poco::Logger::get("StorageS3SequentialSource"), "Got new task {}", packet.next_task); + return packet.next_task; + } + catch (...) + { + tryLogCurrentException(&Poco::Logger::get("StorageS3SequentialSource")); + throw; + } + } + + + bool createOrUpdateInnerSource() + { + auto next_uri = S3::URI(Poco::URI(askAboutNextKey())); + + if (next_uri.uri.empty()) + return false; + + assert(next_uri.bucket == client_auth.uri.bucket); + + inner = std::make_unique( + need_path, + need_file, + format, + name, + sample_block, + context, + columns, + max_block_size, + compression_method, + client_auth.client, + client_auth.uri.bucket, + next_uri.key + ); + return true; + } + + bool need_path; + bool need_file; + String format; + String name; + Block sample_block; + const Context & context; + const ColumnsDescription & columns; + UInt64 max_block_size; + const CompressionMethod compression_method; + + std::unique_ptr inner; + StorageS3::ClientAuthentificaiton client_auth; + + /// One second just in case + ConnectionTimeouts timeouts{{1, 0}, {1, 0}, {1, 0}}; + std::shared_ptr initiator_connection; + /// This is used to ask about next task + String initial_query_id; +}; + StorageS3Distributed::StorageS3Distributed( - const StorageID & table_id_, - std::string cluster_name_, - const Context & context) + const S3::URI & uri_, + const String & access_key_id_, + const String & secret_access_key_, + const StorageID & table_id_, + String cluster_name_, + const String & format_name_, + UInt64 max_connections_, + const ColumnsDescription & columns_, + const ConstraintsDescription & constraints_, + const Context & context_, + const String & compression_method_) : IStorage(table_id_) , cluster_name(cluster_name_) - , cluster(context.getCluster(cluster_name)->getClusterWithReplicasAsShards(context.getSettings())) + , cluster(context_.getCluster(cluster_name)->getClusterWithReplicasAsShards(context_.getSettings())) + , client_auth{uri_, access_key_id_, secret_access_key_, max_connections_, {}, {}} + , 
format_name(format_name_) + , compression_method(compression_method_) { StorageInMemoryMetadata storage_metadata; - storage_metadata.setColumns(ColumnsDescription({{"dummy", std::make_shared()}})); + storage_metadata.setColumns(columns_); + storage_metadata.setConstraints(constraints_); setInMemoryMetadata(storage_metadata); + StorageS3::updateClientAndAuthSettings(context_, client_auth); } @@ -50,49 +223,40 @@ Pipe StorageS3Distributed::read( SelectQueryInfo & query_info, const Context & context, QueryProcessingStage::Enum /*processed_stage*/, - size_t /*max_block_size*/, + size_t max_block_size, unsigned /*num_streams*/) { /// Secondary query, need to read from S3 - if (context.getCurrentQueryId() != context.getInitialQueryId()) { - std::cout << "Secondary query" << std::endl; - auto initial_host = context.getClientInfo().initial_address.host().toString(); - auto initial_port = std::to_string(context.getClientInfo().initial_address.port()); - // auto client_info = context.getClientInfo(); + if (context.getCurrentQueryId() != context.getInitialQueryId()) + { + StorageS3::updateClientAndAuthSettings(context, client_auth); - std::cout << initial_host << ' ' << initial_port << std::endl; + Pipes pipes; + bool need_path_column = false; + bool need_file_column = false; + for (const auto & column : column_names) + { + if (column == "_path") + need_path_column = true; + if (column == "_file") + need_file_column = true; + } + std::cout << metadata_snapshot->getSampleBlock().dumpStructure() << std::endl; - String password; - String cluster_anime; - String cluster_secret; - - // auto connection = std::make_shared( - // /*host=*/initial_host, - // /*port=*/initial_port, - // /*default_database=*/context.getGlobalContext().getCurrentDatabase(), - // /*user=*/client_info.initial_user, - // /*password=*/password, - // /*cluster=*/cluster_anime, - // /*cluster_secret=*/cluster_secret - // ); - - // connection->sendNextTaskRequest(context.getInitialQueryId()); - // auto packet = connection->receivePacket(); - - - std::this_thread::sleep_for(std::chrono::seconds(1)); - - - Block header{ColumnWithTypeAndName( - DataTypeUInt8().createColumn(), - std::make_shared(), - "dummy")}; - - auto column = DataTypeUInt8().createColumnConst(1, 0u)->convertToFullColumnIfConst(); - Chunk chunk({ std::move(column) }, 1); - - return Pipe(std::make_shared(std::move(header), std::move(chunk))); + return Pipe(std::make_shared( + context.getInitialQueryId(), + need_path_column, + need_file_column, + format_name, + getName(), + metadata_snapshot->getSampleBlock(), + context, + metadata_snapshot->getColumns(), + max_block_size, + chooseCompressionMethod(client_auth.uri.key, compression_method), + client_auth + )); } @@ -126,8 +290,6 @@ Pipe StorageS3Distributed::read( metadata_snapshot->check(column_names, getVirtuals(), getStorageID()); - - return Pipe::unitePipes(std::move(pipes)); } diff --git a/src/Storages/StorageS3Distributed.h b/src/Storages/StorageS3Distributed.h index 9cbedc4543a..e7c5c96900e 100644 --- a/src/Storages/StorageS3Distributed.h +++ b/src/Storages/StorageS3Distributed.h @@ -3,8 +3,10 @@ #include "Client/Connection.h" #include "Interpreters/Cluster.h" #include "Storages/IStorage.h" +#include "Storages/StorageS3.h" #include +#include #include "ext/shared_ptr_helper.h" namespace DB @@ -36,12 +38,32 @@ public: protected: - StorageS3Distributed(const StorageID & table_id_, std::string cluster_name_, const Context & context); + StorageS3Distributed( + const S3::URI & uri_, + const String & access_key_id_, + const 
String & secret_access_key_, + const StorageID & table_id_, + String cluster_name_, + const String & format_name_, + UInt64 max_connections_, + const ColumnsDescription & columns_, + const ConstraintsDescription & constraints_, + const Context & context_, + const String & compression_method_ = ""); private: + /// Connections from initiator to other nodes std::vector> connections; std::string cluster_name; ClusterPtr cluster; + + /// This will be used on non-initiator nodes. + std::optional initiator; + std::shared_ptr initiator_connection; + StorageS3::ClientAuthentificaiton client_auth; + + String format_name; + String compression_method; }; diff --git a/src/TableFunctions/TableFunctionS3Distributed.cpp b/src/TableFunctions/TableFunctionS3Distributed.cpp index 40cfd42d322..debd89604a8 100644 --- a/src/TableFunctions/TableFunctionS3Distributed.cpp +++ b/src/TableFunctions/TableFunctionS3Distributed.cpp @@ -99,7 +99,18 @@ StoragePtr TableFunctionS3Distributed::executeImpl( TaskSupervisor::instance().registerNextTaskResolver( std::make_unique(context.getCurrentQueryId(), std::move(tasks))); - StoragePtr storage = StorageS3Distributed::create(StorageID(getDatabaseName(), table_name), cluster_name, context); + StoragePtr storage = StorageS3Distributed::create( + s3_uri, + access_key_id, + secret_access_key, + StorageID(getDatabaseName(), table_name), + cluster_name, + format, + max_connections, + getActualTableStructure(context), + ConstraintsDescription{}, + const_cast(context), + compression_method); storage->startup(); From 0be3fa178bee1babacbe3bf95990393a5cc37605 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Tue, 23 Mar 2021 02:06:10 +0300 Subject: [PATCH 039/108] save --- src/IO/ReadBufferFromS3.cpp | 3 +++ src/Storages/StorageS3Distributed.cpp | 4 +++- 2 files changed, 6 insertions(+), 1 deletion(-) diff --git a/src/IO/ReadBufferFromS3.cpp b/src/IO/ReadBufferFromS3.cpp index fd07a7f309a..be5497a709b 100644 --- a/src/IO/ReadBufferFromS3.cpp +++ b/src/IO/ReadBufferFromS3.cpp @@ -43,6 +43,9 @@ bool ReadBufferFromS3::nextImpl() initialized = true; } + if (hasPendingData()) + return true; + Stopwatch watch; auto res = impl->next(); watch.stop(); diff --git a/src/Storages/StorageS3Distributed.cpp b/src/Storages/StorageS3Distributed.cpp index 02459700779..d520310ad66 100644 --- a/src/Storages/StorageS3Distributed.cpp +++ b/src/Storages/StorageS3Distributed.cpp @@ -3,6 +3,7 @@ #include "Common/Exception.h" #include #include "Client/Connection.h" +#include "Core/QueryProcessingStage.h" #include "DataStreams/RemoteBlockInputStream.h" #include #include @@ -282,7 +283,8 @@ Pipe StorageS3Distributed::read( /*connection=*/*connections.back(), /*query=*/queryToString(query_info.query), /*header=*/metadata_snapshot->getSampleBlock(), - /*context=*/context + /*context=*/context, + nullptr, Scalars(), Tables(), QueryProcessingStage::WithMergeableState ); pipes.emplace_back(std::make_shared(std::move(stream))); } From 2549468c142de5429024d70b2b65ff99bc03c7a3 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Tue, 23 Mar 2021 20:58:29 +0300 Subject: [PATCH 040/108] better --- programs/client/Client.cpp | 19 ---- programs/client/Suggest.cpp | 4 - src/CMakeLists.txt | 1 - src/Client/Connection.cpp | 4 +- src/Client/Connection.h | 2 +- src/DataStreams/RemoteBlockInputStream.cpp | 1 + src/DataStreams/RemoteQueryExecutor.cpp | 22 +++- src/Functions/IFunction.cpp | 2 +- src/IO/ReadBufferFromS3.cpp | 3 - src/IO/S3Common.cpp | 6 - src/IO/S3Common.h | 4 - src/Server/S3TaskServer.h | 104 ------------------ 
src/Server/TCPHandler.cpp | 5 +- src/Server/grpc_protos/CMakeLists.txt | 1 - src/Storages/CMakeLists.txt | 4 - src/Storages/StorageS3Distributed.cpp | 52 +++++++-- src/Storages/StorageTaskManager.h | 21 ++-- src/Storages/protos/CMakeLists.txt | 11 -- .../protos/clickhouse_s3_reader.proto | 18 --- .../TableFunctionS3Distributed.cpp | 17 ++- 20 files changed, 86 insertions(+), 215 deletions(-) delete mode 100644 src/Server/S3TaskServer.h delete mode 100644 src/Storages/protos/CMakeLists.txt delete mode 100644 src/Storages/protos/clickhouse_s3_reader.proto diff --git a/programs/client/Client.cpp b/programs/client/Client.cpp index fde3daa6c43..308076f9033 100644 --- a/programs/client/Client.cpp +++ b/programs/client/Client.cpp @@ -562,15 +562,6 @@ private: connect(); - if (config().has("next_task")) - { - std::cout << "has next task" << std::endl; - auto next_task = config().getString("next_task", "12345"); - std::cout << "got next task " << next_task << std::endl; - sendNextTaskRequest(next_task); - std::cout << "sended " << std::endl; - } - /// Initialize DateLUT here to avoid counting time spent here as query execution time. const auto local_tz = DateLUT::instance().getTimeZone(); if (!context->getSettingsRef().use_client_time_zone) @@ -1708,13 +1699,6 @@ private: } - - void sendNextTaskRequest(std::string id) - { - connection->sendNextTaskRequest(id); - } - - /// Process the query that doesn't require transferring data blocks to the server. void processOrdinaryQuery() { @@ -2645,7 +2629,6 @@ public: ("opentelemetry-traceparent", po::value(), "OpenTelemetry traceparent header as described by W3C Trace Context recommendation") ("opentelemetry-tracestate", po::value(), "OpenTelemetry tracestate header as described by W3C Trace Context recommendation") ("history_file", po::value(), "path to history file") - ("next_task", po::value(), "request new task from server") ; Settings cmd_settings; @@ -2809,8 +2792,6 @@ public: config().setBool("highlight", options["highlight"].as()); if (options.count("history_file")) config().setString("history_file", options["history_file"].as()); - if (options.count("next_task")) - config().setString("next_task", options["next_task"].as()); if ((query_fuzzer_runs = options["query-fuzzer-runs"].as())) { diff --git a/programs/client/Suggest.cpp b/programs/client/Suggest.cpp index 04f90897dd9..dfa7048349e 100644 --- a/programs/client/Suggest.cpp +++ b/programs/client/Suggest.cpp @@ -3,7 +3,6 @@ #include #include #include -#include "Core/Protocol.h" namespace DB { @@ -34,8 +33,6 @@ void Suggest::load(const ConnectionParameters & connection_parameters, size_t su connection_parameters.compression, connection_parameters.security); - std::cerr << "Connection created" << std::endl; - loadImpl(connection, connection_parameters.timeouts, suggestion_limit); } catch (const Exception & e) @@ -159,7 +156,6 @@ void Suggest::fetch(Connection & connection, const ConnectionTimeouts & timeouts Packet packet = connection.receivePacket(); switch (packet.type) { - case Protocol::Server::NextTaskReply: [[fallthrough]]; case Protocol::Server::Data: fillWordsFromBlock(packet.block); continue; diff --git a/src/CMakeLists.txt b/src/CMakeLists.txt index b93af56ae4a..43f6ae8fea5 100644 --- a/src/CMakeLists.txt +++ b/src/CMakeLists.txt @@ -423,7 +423,6 @@ endif () if (USE_GRPC) dbms_target_link_libraries (PUBLIC clickhouse_grpc_protos) - dbms_target_link_libraries (PUBLIC clickhouse_s3_reader_protos) endif() if (USE_HDFS) diff --git a/src/Client/Connection.cpp b/src/Client/Connection.cpp index 
7fcd8332249..018544f969f 100644 --- a/src/Client/Connection.cpp +++ b/src/Client/Connection.cpp @@ -1,4 +1,3 @@ -#include #include #include #include @@ -22,7 +21,7 @@ #include #include #include -#include "Core/Protocol.h" +#include #include #include #include @@ -560,7 +559,6 @@ void Connection::sendIgnoredPartUUIDs(const std::vector & uuids) void Connection::sendNextTaskRequest(const std::string & id) { - std::cout << "Connection::sendNextTaskRequest" << std::endl; writeVarUInt(Protocol::Client::NextTaskRequest, *out); writeStringBinary(id, *out); out->next(); diff --git a/src/Client/Connection.h b/src/Client/Connection.h index e38b0501964..123b10942f1 100644 --- a/src/Client/Connection.h +++ b/src/Client/Connection.h @@ -205,7 +205,7 @@ public: in->setAsyncCallback(std::move(async_callback)); } -public: +private: String host; UInt16 port; diff --git a/src/DataStreams/RemoteBlockInputStream.cpp b/src/DataStreams/RemoteBlockInputStream.cpp index c633600d37f..5ab226acd13 100644 --- a/src/DataStreams/RemoteBlockInputStream.cpp +++ b/src/DataStreams/RemoteBlockInputStream.cpp @@ -62,6 +62,7 @@ Block RemoteBlockInputStream::readImpl() if (isCancelledOrThrowIfKilled()) return Block(); + std::cout << "RemoteBlockInputStream " << block.rows() << ' ' << block.dumpStructure() << std::endl; return block; } diff --git a/src/DataStreams/RemoteQueryExecutor.cpp b/src/DataStreams/RemoteQueryExecutor.cpp index 4aa659854b9..847baf555ee 100644 --- a/src/DataStreams/RemoteQueryExecutor.cpp +++ b/src/DataStreams/RemoteQueryExecutor.cpp @@ -114,7 +114,7 @@ RemoteQueryExecutor::~RemoteQueryExecutor() /** If we receive a block with slightly different column types, or with excessive columns, * we will adapt it to expected structure. */ -static Block adaptBlockStructure(const Block & block, const Block & header) +[[maybe_unused]] static Block adaptBlockStructure(const Block & block, const Block & header) { /// Special case when reader doesn't care about result structure. Deprecated and used only in Benchmark, PerformanceTest. if (!header) @@ -123,6 +123,9 @@ static Block adaptBlockStructure(const Block & block, const Block & header) Block res; res.info = block.info; + std::cout << "block " << block.dumpStructure() << std::endl; + std::cout << "header " << header.dumpStructure() << std::endl; + for (const auto & elem : header) { ColumnPtr column; @@ -153,7 +156,17 @@ static Block adaptBlockStructure(const Block & block, const Block & header) column = elem.column->cloneResized(block.rows()); } else + { + // if (!block.has(elem.name)) + // { + // column = elem.type->createColumn(); + // } + // else + // { + // column = castColumn(block.getByName(elem.name), elem.type); + // } column = castColumn(block.getByName(elem.name), elem.type); + } res.insert({column, elem.type, elem.name}); } @@ -314,7 +327,12 @@ std::optional RemoteQueryExecutor::processPacket(Packet packet) case Protocol::Server::Data: /// If the block is not empty and is not a header block if (packet.block && (packet.block.rows() > 0)) - return adaptBlockStructure(packet.block, header); + { + // return packet.block; + Block anime = adaptBlockStructure(packet.block, header); + std::cout << "RemoteQueryExecutor " << anime.dumpStructure() << std::endl; + return anime; + } break; /// If the block is empty - we will receive other packets before EndOfStream. 
case Protocol::Server::Exception: diff --git a/src/Functions/IFunction.cpp b/src/Functions/IFunction.cpp index e4a1adb8525..9636573c5f4 100644 --- a/src/Functions/IFunction.cpp +++ b/src/Functions/IFunction.cpp @@ -477,7 +477,7 @@ DataTypePtr FunctionOverloadResolverAdaptor::getReturnTypeDefaultImplementationF } if (null_presence.has_nullable) { - Block nested_columns = createBlockWithNestedColumns(arguments); + Block nested_columns{createBlockWithNestedColumns(arguments)}; auto return_type = getter(ColumnsWithTypeAndName(nested_columns.begin(), nested_columns.end())); return makeNullable(return_type); } diff --git a/src/IO/ReadBufferFromS3.cpp b/src/IO/ReadBufferFromS3.cpp index be5497a709b..fd07a7f309a 100644 --- a/src/IO/ReadBufferFromS3.cpp +++ b/src/IO/ReadBufferFromS3.cpp @@ -43,9 +43,6 @@ bool ReadBufferFromS3::nextImpl() initialized = true; } - if (hasPendingData()) - return true; - Stopwatch watch; auto res = impl->next(); watch.stop(); diff --git a/src/IO/S3Common.cpp b/src/IO/S3Common.cpp index 1e498c03a45..e0d0709bbab 100644 --- a/src/IO/S3Common.cpp +++ b/src/IO/S3Common.cpp @@ -326,7 +326,6 @@ namespace S3 URI::URI(const Poco::URI & uri_) { - full = uri_.toString(); /// Case when bucket name represented in domain name of S3 URL. /// E.g. (https://bucket-name.s3.Region.amazonaws.com/key) /// https://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html#virtual-hosted-style-access @@ -401,11 +400,6 @@ namespace S3 throw Exception("Bucket or key name are invalid in S3 URI: " + uri.toString(), ErrorCodes::BAD_ARGUMENTS); } - - String URI::toString() const - { - return full; - } } } diff --git a/src/IO/S3Common.h b/src/IO/S3Common.h index 54493bb4d44..b071daefee1 100644 --- a/src/IO/S3Common.h +++ b/src/IO/S3Common.h @@ -67,13 +67,9 @@ struct URI String key; String storage_name; - /// Full representation of URI - String full; - bool is_virtual_hosted_style; explicit URI(const Poco::URI & uri_); - String toString() const; }; } diff --git a/src/Server/S3TaskServer.h b/src/Server/S3TaskServer.h deleted file mode 100644 index d1345ad8532..00000000000 --- a/src/Server/S3TaskServer.h +++ /dev/null @@ -1,104 +0,0 @@ -#pragma once - -#include -#if !defined(ARCADIA_BUILD) -#include -#endif - -#if USE_GRPC -#include -#include "clickhouse_s3_task_server.grpc.pb.h" - - -#include -#include -#include - -#include -#include -#include - -using grpc::Server; -using grpc::ServerBuilder; -using grpc::ServerContext; -using grpc::Status; -using clickhouse::s3_task_server::S3TaskServer; -using clickhouse::s3_task_server::S3TaskRequest; -using clickhouse::s3_task_server::S3TaskReply; - - -namespace DB -{ - -namespace ErrorCodes -{ - extern const int LOGICAL_ERROR; -} - - -class S3Task -{ -public: - S3Task() = delete; - - explicit S3Task(std::vector && paths_) - : paths(std::move(paths_)) - {} - - std::optional getNext() { - static size_t next = 0; - if (next >= paths.size()) - return std::nullopt; - const auto result = paths[next]; - ++next; - return result; - } -private: - std::vector paths; -}; - - -// Logic and data behind the server's behavior. 
-class S3TaskServer final : public S3TaskServer::Service { - Status GetNext(ServerContext* context, const S3TaskRequest* request, S3TaskReply* reply) override { - std::string prefix("Hello"); - const auto query_id = request->query_id(); - auto it = handlers.find(query_id); - if (it == handlers.end()) { - reply->set_message(""); - reply->set_error(ErrorCodes::LOGICAL_ERROR); - return Status::CANCELLED; - } - - reply->set_error(0); - reply->set_message(it->second.getNext()); - return Status::OK; - } - - private: - std::unordered_map handlers; -}; - - -void RunServer() { - std::string server_address("0.0.0.0:50051"); - static S3TaskServer service; - - grpc::EnableDefaultHealthCheckService(true); - grpc::reflection::InitProtoReflectionServerBuilderPlugin(); - ServerBuilder builder; - // Listen on the given address without any authentication mechanism. - builder.AddListeningPort(server_address, grpc::InsecureServerCredentials()); - // Register "service" as the instance through which we'll communicate with - // clients. In this case it corresponds to an *synchronous* service. - builder.RegisterService(&service); - // Finally assemble the server. - std::unique_ptr server(builder.BuildAndStart()); - std::cout << "Server listening on " << server_address << std::endl; - server->Wait(); -} - -} - - -#endif diff --git a/src/Server/TCPHandler.cpp b/src/Server/TCPHandler.cpp index 0270c015e6c..d6c5aed4fc3 100644 --- a/src/Server/TCPHandler.cpp +++ b/src/Server/TCPHandler.cpp @@ -760,7 +760,7 @@ void TCPHandler::sendPartUUIDs() void TCPHandler::sendNextTaskReply(String reply) { - LOG_DEBUG(log, "Nexttask for id is {} ", reply); + LOG_TRACE(log, "Nexttask for id is {} ", reply); writeVarUInt(Protocol::Server::NextTaskReply, *out); writeStringBinary(reply, *out); out->next(); @@ -975,13 +975,10 @@ bool TCPHandler::receivePacket() readVarUInt(packet_type, *in); - std::cout << "TCPHander receivePacket" << packet_type << ' ' << Protocol::Client::NextTaskRequest << std::endl; - switch (packet_type) { case Protocol::Client::NextTaskRequest: { - std::cout << "Protocol::Client::NextTaskRequest" << std::endl; auto id = receiveNextTaskRequest(); auto next = TaskSupervisor::instance().getNextTaskForId(id); sendNextTaskReply(next); diff --git a/src/Server/grpc_protos/CMakeLists.txt b/src/Server/grpc_protos/CMakeLists.txt index 22a834f96a1..584cf015a65 100644 --- a/src/Server/grpc_protos/CMakeLists.txt +++ b/src/Server/grpc_protos/CMakeLists.txt @@ -9,4 +9,3 @@ set (CMAKE_CXX_CLANG_TIDY "") add_library(clickhouse_grpc_protos ${clickhouse_grpc_proto_headers} ${clickhouse_grpc_proto_sources}) target_include_directories(clickhouse_grpc_protos SYSTEM PUBLIC ${gRPC_INCLUDE_DIRS} ${Protobuf_INCLUDE_DIR} ${CMAKE_CURRENT_BINARY_DIR}) target_link_libraries (clickhouse_grpc_protos PUBLIC ${gRPC_LIBRARIES}) - diff --git a/src/Storages/CMakeLists.txt b/src/Storages/CMakeLists.txt index 2dbc2013648..deb1c9f6716 100644 --- a/src/Storages/CMakeLists.txt +++ b/src/Storages/CMakeLists.txt @@ -4,7 +4,3 @@ add_subdirectory(System) if(ENABLE_TESTS) add_subdirectory(tests) endif() - -if (USE_GRPC) - add_subdirectory(protos) -endif() diff --git a/src/Storages/StorageS3Distributed.cpp b/src/Storages/StorageS3Distributed.cpp index d520310ad66..f64e6fb3622 100644 --- a/src/Storages/StorageS3Distributed.cpp +++ b/src/Storages/StorageS3Distributed.cpp @@ -32,9 +32,14 @@ #include #include +#include +#include +#include + #include #include +#include #include #include #include @@ -115,10 +120,19 @@ public: Chunk generate() override { - auto chunk = 
inner->generate(); - if (!chunk && !createOrUpdateInnerSource()) + if (!inner) return {}; - return inner->generate(); + + auto chunk = inner->generate(); + if (!chunk) + { + if (!createOrUpdateInnerSource()) + return {}; + else + chunk = inner->generate(); + } + std::cout << "generate() " << chunk.dumpStructure() << std::endl; + return chunk; } private: @@ -131,7 +145,7 @@ private: initiator_connection->sendNextTaskRequest(initial_query_id); auto packet = initiator_connection->receivePacket(); assert(packet.type = Protocol::Server::NextTaskReply); - LOG_DEBUG(&Poco::Logger::get("StorageS3SequentialSource"), "Got new task {}", packet.next_task); + LOG_TRACE(&Poco::Logger::get("StorageS3SequentialSource"), "Got new task {}", packet.next_task); return packet.next_task; } catch (...) @@ -144,11 +158,13 @@ private: bool createOrUpdateInnerSource() { - auto next_uri = S3::URI(Poco::URI(askAboutNextKey())); - - if (next_uri.uri.empty()) + auto next_string = askAboutNextKey(); + std::cout << "createOrUpdateInnerSource " << next_string << std::endl; + if (next_string.empty()) return false; + auto next_uri = S3::URI(Poco::URI(next_string)); + assert(next_uri.bucket == client_auth.uri.bucket); inner = std::make_unique( @@ -223,7 +239,7 @@ Pipe StorageS3Distributed::read( const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, const Context & context, - QueryProcessingStage::Enum /*processed_stage*/, + QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned /*num_streams*/) { @@ -232,7 +248,6 @@ Pipe StorageS3Distributed::read( { StorageS3::updateClientAndAuthSettings(context, client_auth); - Pipes pipes; bool need_path_column = false; bool need_file_column = false; for (const auto & column : column_names) @@ -243,7 +258,10 @@ Pipe StorageS3Distributed::read( need_file_column = true; } - std::cout << metadata_snapshot->getSampleBlock().dumpStructure() << std::endl; + std::cout << need_file_column << std::boolalpha << need_file_column << std::endl; + std::cout << need_path_column << std::boolalpha << need_path_column << std::endl; + + std::cout << "metadata_snapshot->getSampleBlock().dumpStructure() " << metadata_snapshot->getSampleBlock().dumpStructure() << std::endl; return Pipe(std::make_shared( context.getInitialQueryId(), @@ -265,6 +283,13 @@ Pipe StorageS3Distributed::read( connections.reserve(cluster->getShardCount()); std::cout << "StorageS3Distributed::read" << std::endl; + std::cout << "QueryProcessingStage " << processed_stage << std::endl; + + + Block header = + InterpreterSelectQuery(query_info.query, context, SelectQueryOptions(processed_stage).analyze()).getSampleBlock(); + + const Scalars & scalars = context.hasQueryContext() ? 
context.getQueryContext().getScalars() : Scalars{}; for (const auto & replicas : cluster->getShardsAddresses()) { /// There will be only one replica, because we consider each replica as a shard @@ -282,9 +307,12 @@ Pipe StorageS3Distributed::read( auto stream = std::make_shared( /*connection=*/*connections.back(), /*query=*/queryToString(query_info.query), - /*header=*/metadata_snapshot->getSampleBlock(), + /*header=*/header, /*context=*/context, - nullptr, Scalars(), Tables(), QueryProcessingStage::WithMergeableState + nullptr, + scalars, + Tables(), + QueryProcessingStage::FetchColumns ); pipes.emplace_back(std::make_shared(std::move(stream))); } diff --git a/src/Storages/StorageTaskManager.h b/src/Storages/StorageTaskManager.h index 87fd32c0359..bb8b7952a4f 100644 --- a/src/Storages/StorageTaskManager.h +++ b/src/Storages/StorageTaskManager.h @@ -38,7 +38,6 @@ public: using NextTaskResolverBasePtr = std::unique_ptr; - class S3NextTaskResolver : public NextTaskResolverBase { public: @@ -73,19 +72,13 @@ private: TasksIterator current; }; - - class TaskSupervisor { public: using QueryId = std::string; - TaskSupervisor() - { - auto nexttask = std::make_unique("12345", std::vector{"anime1", "anime2", "anime3"}); - registerNextTaskResolver(std::move(nexttask)); - } - + TaskSupervisor() = default; + static TaskSupervisor & instance() { static TaskSupervisor task_manager; @@ -105,12 +98,16 @@ public: Task getNextTaskForId(const QueryId & id) { - std::shared_lock lock(rwlock); + std::lock_guard lock(rwlock); auto it = dict.find(id); if (it == dict.end()) - throw Exception(fmt::format("NextTaskResolver is not registered for query {}", id), ErrorCodes::LOGICAL_ERROR); - return it->second->next(); + return ""; + auto answer = it->second->next(); + if (answer.empty()) + dict.erase(it); + return answer; } + private: using ResolverDict = std::unordered_map; ResolverDict dict; diff --git a/src/Storages/protos/CMakeLists.txt b/src/Storages/protos/CMakeLists.txt deleted file mode 100644 index 42df78dd87b..00000000000 --- a/src/Storages/protos/CMakeLists.txt +++ /dev/null @@ -1,11 +0,0 @@ -PROTOBUF_GENERATE_GRPC_CPP(clickhouse_s3_reader_proto_sources clickhouse_s3_reader_proto_headers clickhouse_s3_reader.proto) - -# Ignore warnings while compiling protobuf-generated *.pb.h and *.pb.cpp files. -set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -w") - -# Disable clang-tidy for protobuf-generated *.pb.h and *.pb.cpp files. 
-set (CMAKE_CXX_CLANG_TIDY "")
-
-add_library(clickhouse_s3_reader_protos ${clickhouse_s3_reader_proto_headers} ${clickhouse_s3_reader_proto_sources})
-target_include_directories(clickhouse_s3_reader_protos SYSTEM PUBLIC ${gRPC_INCLUDE_DIRS} ${Protobuf_INCLUDE_DIR} ${CMAKE_CURRENT_BINARY_DIR})
-target_link_libraries (clickhouse_s3_reader_protos PUBLIC ${gRPC_LIBRARIES})
diff --git a/src/Storages/protos/clickhouse_s3_reader.proto b/src/Storages/protos/clickhouse_s3_reader.proto
deleted file mode 100644
index 18d1102d40b..00000000000
--- a/src/Storages/protos/clickhouse_s3_reader.proto
+++ /dev/null
@@ -1,18 +0,0 @@
-syntax = "proto3";
-
-package clickhouse.s3_reader;
-
-
-service S3TaskServer {
-    rpc GetNext (S3TaskRequest) returns (S3TaskReply) {}
-}
-
-
-message S3TaskRequest {
-    string query_id = 1;
-}
-
-message S3TaskReply {
-    string message = 1;
-    int32 error = 2;
-}
diff --git a/src/TableFunctions/TableFunctionS3Distributed.cpp b/src/TableFunctions/TableFunctionS3Distributed.cpp
index debd89604a8..8717a5aa5bc 100644
--- a/src/TableFunctions/TableFunctionS3Distributed.cpp
+++ b/src/TableFunctions/TableFunctionS3Distributed.cpp
@@ -1,6 +1,8 @@
 #include
 #include
 #include "DataStreams/RemoteBlockInputStream.h"
+#include "Parsers/ASTFunction.h"
+#include "Parsers/IAST_fwd.h"
 #include "Processors/Sources/SourceFromInputStream.h"
 #include "Storages/StorageS3Distributed.h"
 #include "Storages/System/StorageSystemOne.h"
@@ -73,7 +75,7 @@ ColumnsDescription TableFunctionS3Distributed::getActualTableStructure(const Con
 }
 
 StoragePtr TableFunctionS3Distributed::executeImpl(
-    const ASTPtr & /*ast_function*/, const Context & context,
+    const ASTPtr & ast_function, const Context & context,
     const std::string & table_name, ColumnsDescription /*cached_columns*/) const
 {
     Poco::URI uri (filename);
@@ -89,12 +91,19 @@ StoragePtr TableFunctionS3Distributed::executeImpl(
     Strings tasks;
     tasks.reserve(lists.size());
 
-    for (auto & value : lists) {
+    for (auto & value : lists)
+    {
         tasks.emplace_back(client_auth.uri.endpoint + '/' + client_auth.uri.bucket + '/' + value);
+        std::cout << tasks.back() << std::endl;
     }
 
     std::cout << "query_id " << context.getCurrentQueryId() << std::endl;
 
+    std::cout << ast_function->dumpTree() << std::endl;
+    auto * func = ast_function->as();
+
+    std::cout << func->arguments->dumpTree() << std::endl;
+
     /// Register resolver, which will give other nodes a task to execute
     TaskSupervisor::instance().registerNextTaskResolver(
         std::make_unique(context.getCurrentQueryId(), std::move(tasks)));

From 4e4b3832146354082b3e6488f6813b963f7caca4 Mon Sep 17 00:00:00 2001
From: Nikita Mikhaylov
Date: Wed, 24 Mar 2021 21:36:31 +0300
Subject: [PATCH 041/108] added hash of initiator address

---
 src/DataStreams/RemoteQueryExecutor.cpp | 22 +-
 src/IO/S3Common.cpp | 1 -
 src/Interpreters/Cluster.cpp | 11 +
 src/Interpreters/Cluster.h | 3 +
 src/Interpreters/DatabaseAndTableWithAlias.h | 8 +-
 .../ASTFunctionWithKeyValueArguments.h | 4 +-
 src/Parsers/ExpressionElementParsers.h | 12 +-
 src/Storages/StorageS3Distributed.cpp | 208 +++++++++++-------
 src/Storages/StorageS3Distributed.h | 19 +-
 .../TableFunctionS3Distributed.cpp | 55 ++---
 .../TableFunctionS3Distributed.h | 2 +-
 11 files changed, 192 insertions(+),
153 deletions(-) diff --git a/src/DataStreams/RemoteQueryExecutor.cpp b/src/DataStreams/RemoteQueryExecutor.cpp index 847baf555ee..4aa659854b9 100644 --- a/src/DataStreams/RemoteQueryExecutor.cpp +++ b/src/DataStreams/RemoteQueryExecutor.cpp @@ -114,7 +114,7 @@ RemoteQueryExecutor::~RemoteQueryExecutor() /** If we receive a block with slightly different column types, or with excessive columns, * we will adapt it to expected structure. */ -[[maybe_unused]] static Block adaptBlockStructure(const Block & block, const Block & header) +static Block adaptBlockStructure(const Block & block, const Block & header) { /// Special case when reader doesn't care about result structure. Deprecated and used only in Benchmark, PerformanceTest. if (!header) @@ -123,9 +123,6 @@ RemoteQueryExecutor::~RemoteQueryExecutor() Block res; res.info = block.info; - std::cout << "block " << block.dumpStructure() << std::endl; - std::cout << "header " << header.dumpStructure() << std::endl; - for (const auto & elem : header) { ColumnPtr column; @@ -156,17 +153,7 @@ RemoteQueryExecutor::~RemoteQueryExecutor() column = elem.column->cloneResized(block.rows()); } else - { - // if (!block.has(elem.name)) - // { - // column = elem.type->createColumn(); - // } - // else - // { - // column = castColumn(block.getByName(elem.name), elem.type); - // } column = castColumn(block.getByName(elem.name), elem.type); - } res.insert({column, elem.type, elem.name}); } @@ -327,12 +314,7 @@ std::optional RemoteQueryExecutor::processPacket(Packet packet) case Protocol::Server::Data: /// If the block is not empty and is not a header block if (packet.block && (packet.block.rows() > 0)) - { - // return packet.block; - Block anime = adaptBlockStructure(packet.block, header); - std::cout << "RemoteQueryExecutor " << anime.dumpStructure() << std::endl; - return anime; - } + return adaptBlockStructure(packet.block, header); break; /// If the block is empty - we will receive other packets before EndOfStream. 
case Protocol::Server::Exception: diff --git a/src/IO/S3Common.cpp b/src/IO/S3Common.cpp index e0d0709bbab..f9962735ddc 100644 --- a/src/IO/S3Common.cpp +++ b/src/IO/S3Common.cpp @@ -399,7 +399,6 @@ namespace S3 else throw Exception("Bucket or key name are invalid in S3 URI: " + uri.toString(), ErrorCodes::BAD_ARGUMENTS); } - } } diff --git a/src/Interpreters/Cluster.cpp b/src/Interpreters/Cluster.cpp index bac688fe81e..20ec3a794d1 100644 --- a/src/Interpreters/Cluster.cpp +++ b/src/Interpreters/Cluster.cpp @@ -138,6 +138,17 @@ String Cluster::Address::toString() const return toString(host_name, port); } + +String Cluster::Address::getHash() const +{ + SipHash hash; + hash.update(host_name); + hash.update(std::to_string(port)); + hash.update(user); + hash.update(password); + return std::to_string(hash.get64()); +} + String Cluster::Address::toString(const String & host_name, UInt16 port) { return escapeForFileName(host_name) + ':' + DB::toString(port); diff --git a/src/Interpreters/Cluster.h b/src/Interpreters/Cluster.h index 5976074ec7a..89d508396ad 100644 --- a/src/Interpreters/Cluster.h +++ b/src/Interpreters/Cluster.h @@ -122,6 +122,9 @@ public: /// Returns 'escaped_host_name:port' String toString() const; + + /// Returns hash of all fields + String getHash() const; /// Returns 'host_name:port' String readableString() const; diff --git a/src/Interpreters/DatabaseAndTableWithAlias.h b/src/Interpreters/DatabaseAndTableWithAlias.h index d2b1d655de7..a4773ec435b 100644 --- a/src/Interpreters/DatabaseAndTableWithAlias.h +++ b/src/Interpreters/DatabaseAndTableWithAlias.h @@ -26,9 +26,9 @@ struct DatabaseAndTableWithAlias UUID uuid = UUIDHelpers::Nil; DatabaseAndTableWithAlias() = default; - DatabaseAndTableWithAlias(const ASTPtr & identifier_node, const String & current_database = ""); - DatabaseAndTableWithAlias(const ASTIdentifier & identifier, const String & current_database = ""); - DatabaseAndTableWithAlias(const ASTTableExpression & table_expression, const String & current_database = ""); + explicit DatabaseAndTableWithAlias(const ASTPtr & identifier_node, const String & current_database = ""); + explicit DatabaseAndTableWithAlias(const ASTIdentifier & identifier, const String & current_database = ""); + explicit DatabaseAndTableWithAlias(const ASTTableExpression & table_expression, const String & current_database = ""); /// "alias." or "table." 
if alias is empty String getQualifiedNamePrefix(bool with_dot = true) const; @@ -80,7 +80,7 @@ private: void addAdditionalColumns(NamesAndTypesList & target, const NamesAndTypesList & addition) { target.insert(target.end(), addition.begin(), addition.end()); - for (auto & col : addition) + for (const auto & col : addition) names.insert(col.name); } diff --git a/src/Parsers/ASTFunctionWithKeyValueArguments.h b/src/Parsers/ASTFunctionWithKeyValueArguments.h index 88ab712cc04..f5eaa33bfc7 100644 --- a/src/Parsers/ASTFunctionWithKeyValueArguments.h +++ b/src/Parsers/ASTFunctionWithKeyValueArguments.h @@ -20,7 +20,7 @@ public: bool second_with_brackets; public: - ASTPair(bool second_with_brackets_) + explicit ASTPair(bool second_with_brackets_) : second_with_brackets(second_with_brackets_) { } @@ -49,7 +49,7 @@ public: /// Has brackets around arguments bool has_brackets; - ASTFunctionWithKeyValueArguments(bool has_brackets_ = true) + explicit ASTFunctionWithKeyValueArguments(bool has_brackets_ = true) : has_brackets(has_brackets_) { } diff --git a/src/Parsers/ExpressionElementParsers.h b/src/Parsers/ExpressionElementParsers.h index cbbbd3f6d3b..f8b2408ac16 100644 --- a/src/Parsers/ExpressionElementParsers.h +++ b/src/Parsers/ExpressionElementParsers.h @@ -45,7 +45,7 @@ protected: class ParserIdentifier : public IParserBase { public: - ParserIdentifier(bool allow_query_parameter_ = false) : allow_query_parameter(allow_query_parameter_) {} + explicit ParserIdentifier(bool allow_query_parameter_ = false) : allow_query_parameter(allow_query_parameter_) {} protected: const char * getName() const override { return "identifier"; } bool parseImpl(Pos & pos, ASTPtr & node, Expected & expected) override; @@ -59,7 +59,7 @@ protected: class ParserCompoundIdentifier : public IParserBase { public: - ParserCompoundIdentifier(bool table_name_with_optional_uuid_ = false, bool allow_query_parameter_ = false) + explicit ParserCompoundIdentifier(bool table_name_with_optional_uuid_ = false, bool allow_query_parameter_ = false) : table_name_with_optional_uuid(table_name_with_optional_uuid_), allow_query_parameter(allow_query_parameter_) { } @@ -85,7 +85,7 @@ public: using ColumnTransformers = MultiEnum; static constexpr auto AllTransformers = ColumnTransformers{ColumnTransformer::APPLY, ColumnTransformer::EXCEPT, ColumnTransformer::REPLACE}; - ParserColumnsTransformers(ColumnTransformers allowed_transformers_ = AllTransformers, bool is_strict_ = false) + explicit ParserColumnsTransformers(ColumnTransformers allowed_transformers_ = AllTransformers, bool is_strict_ = false) : allowed_transformers(allowed_transformers_) , is_strict(is_strict_) {} @@ -103,7 +103,7 @@ class ParserAsterisk : public IParserBase { public: using ColumnTransformers = ParserColumnsTransformers::ColumnTransformers; - ParserAsterisk(ColumnTransformers allowed_transformers_ = ParserColumnsTransformers::AllTransformers) + explicit ParserAsterisk(ColumnTransformers allowed_transformers_ = ParserColumnsTransformers::AllTransformers) : allowed_transformers(allowed_transformers_) {} @@ -129,7 +129,7 @@ class ParserColumnsMatcher : public IParserBase { public: using ColumnTransformers = ParserColumnsTransformers::ColumnTransformers; - ParserColumnsMatcher(ColumnTransformers allowed_transformers_ = ParserColumnsTransformers::AllTransformers) + explicit ParserColumnsMatcher(ColumnTransformers allowed_transformers_ = ParserColumnsTransformers::AllTransformers) : allowed_transformers(allowed_transformers_) {} @@ -149,7 +149,7 @@ protected: class 
ParserFunction : public IParserBase { public: - ParserFunction(bool allow_function_parameters_ = true, bool is_table_function_ = false) + explicit ParserFunction(bool allow_function_parameters_ = true, bool is_table_function_ = false) : allow_function_parameters(allow_function_parameters_), is_table_function(is_table_function_) { } diff --git a/src/Storages/StorageS3Distributed.cpp b/src/Storages/StorageS3Distributed.cpp index f64e6fb3622..6254b7f15df 100644 --- a/src/Storages/StorageS3Distributed.cpp +++ b/src/Storages/StorageS3Distributed.cpp @@ -31,10 +31,12 @@ #include #include #include +#include #include #include #include +#include #include #include @@ -61,6 +63,19 @@ namespace ErrorCodes extern const int LOGICAL_ERROR; } +struct StorageS3SourceBuilder +{ + bool need_path; + bool need_file; + String format; + String name; + Block sample_block; + const Context & context; + const ColumnsDescription & columns; + UInt64 max_block_size; + String compression_method; +}; + class StorageS3SequentialSource : public SourceWithProgress { public: @@ -77,37 +92,23 @@ public: StorageS3SequentialSource( String initial_query_id_, - bool need_path_, - bool need_file_, - const String & format_, - String name_, - const Block & sample_block_, - const Context & context_, - const ColumnsDescription & columns_, - UInt64 max_block_size_, - const CompressionMethod compression_method_, - StorageS3::ClientAuthentificaiton & client_auth_) - : SourceWithProgress(getHeader(sample_block_, need_path_, need_file_)) - , need_path(need_path_) - , need_file(need_file_) - , format(format_) - , name(name_) - , sample_block(sample_block_) - , context(context_) - , columns(columns_) - , max_block_size(max_block_size_) - , compression_method(compression_method_) - , client_auth(client_auth_) + Cluster::Address initiator, + const ClientAuthentificationBuilder & client_auth_builder_, + const StorageS3SourceBuilder & s3_builder_) + : SourceWithProgress(getHeader(s3_builder_.sample_block, s3_builder_.need_path, s3_builder_.need_file)) , initial_query_id(initial_query_id_) + , s3_source_builder(s3_builder_) + , cli_builder(client_auth_builder_) { - initiator_connection = std::make_shared( - /*host*/"127.0.0.1", - /*port*/9000, - /*default_database=*/context.getGlobalContext().getCurrentDatabase(), - /*user=*/context.getClientInfo().initial_user, - /*password=*/"", - /*cluster=*/"", - /*cluster_secret=*/"" + connections = std::make_shared( + /*max_connections*/3, + /*host*/initiator.host_name, + /*port*/initiator.port, + /*default_database=*/s3_builder_.context.getGlobalContext().getCurrentDatabase(), + /*user=*/s3_builder_.context.getClientInfo().initial_user, + /*password=*/initiator.password, + /*cluster=*/initiator.cluster, + /*cluster_secret=*/initiator.cluster_secret ); createOrUpdateInnerSource(); @@ -115,7 +116,7 @@ public: String getName() const override { - return name; + return "StorageS3SequentialSource"; } Chunk generate() override @@ -131,7 +132,6 @@ public: else chunk = inner->generate(); } - std::cout << "generate() " << chunk.dumpStructure() << std::endl; return chunk; } @@ -141,9 +141,9 @@ private: { try { - initiator_connection->connect(timeouts); - initiator_connection->sendNextTaskRequest(initial_query_id); - auto packet = initiator_connection->receivePacket(); + auto connection = connections->get(timeouts); + connection->sendNextTaskRequest(initial_query_id); + auto packet = connection->receivePacket(); assert(packet.type = Protocol::Server::NextTaskReply); 
LOG_TRACE(&Poco::Logger::get("StorageS3SequentialSource"), "Got new task {}", packet.next_task); return packet.next_task; @@ -155,28 +155,32 @@ private: } } - bool createOrUpdateInnerSource() { auto next_string = askAboutNextKey(); - std::cout << "createOrUpdateInnerSource " << next_string << std::endl; if (next_string.empty()) return false; auto next_uri = S3::URI(Poco::URI(next_string)); - assert(next_uri.bucket == client_auth.uri.bucket); + auto client_auth = StorageS3::ClientAuthentificaiton{ + next_uri, + cli_builder.access_key_id, + cli_builder.secret_access_key, + cli_builder.max_connections, + {}, {}}; + StorageS3::updateClientAndAuthSettings(s3_source_builder.context, client_auth); inner = std::make_unique( - need_path, - need_file, - format, - name, - sample_block, - context, - columns, - max_block_size, - compression_method, + s3_source_builder.need_path, + s3_source_builder.need_file, + s3_source_builder.format, + s3_source_builder.name, + s3_source_builder.sample_block, + s3_source_builder.context, + s3_source_builder.columns, + s3_source_builder.max_block_size, + chooseCompressionMethod(client_auth.uri.key, s3_source_builder.compression_method), client_auth.client, client_auth.uri.bucket, next_uri.key @@ -184,30 +188,24 @@ private: return true; } - bool need_path; - bool need_file; - String format; - String name; - Block sample_block; - const Context & context; - const ColumnsDescription & columns; - UInt64 max_block_size; - const CompressionMethod compression_method; + /// This is used to ask about next task + String initial_query_id; + + StorageS3SourceBuilder s3_source_builder; + ClientAuthentificationBuilder cli_builder; std::unique_ptr inner; - StorageS3::ClientAuthentificaiton client_auth; /// One second just in case ConnectionTimeouts timeouts{{1, 0}, {1, 0}, {1, 0}}; - std::shared_ptr initiator_connection; - /// This is used to ask about next task - String initial_query_id; + std::shared_ptr connections; }; StorageS3Distributed::StorageS3Distributed( - const S3::URI & uri_, + IAST::Hash tree_hash_, + const String & address_hash_or_filename_, const String & access_key_id_, const String & secret_access_key_, const StorageID & table_id_, @@ -219,17 +217,18 @@ StorageS3Distributed::StorageS3Distributed( const Context & context_, const String & compression_method_) : IStorage(table_id_) + , tree_hash(tree_hash_) + , address_hash_or_filename(address_hash_or_filename_) , cluster_name(cluster_name_) , cluster(context_.getCluster(cluster_name)->getClusterWithReplicasAsShards(context_.getSettings())) - , client_auth{uri_, access_key_id_, secret_access_key_, max_connections_, {}, {}} , format_name(format_name_) , compression_method(compression_method_) + , cli_builder{access_key_id_, secret_access_key_, max_connections_} { StorageInMemoryMetadata storage_metadata; storage_metadata.setColumns(columns_); storage_metadata.setConstraints(constraints_); setInMemoryMetadata(storage_metadata); - StorageS3::updateClientAndAuthSettings(context_, client_auth); } @@ -246,7 +245,16 @@ Pipe StorageS3Distributed::read( /// Secondary query, need to read from S3 if (context.getCurrentQueryId() != context.getInitialQueryId()) { - StorageS3::updateClientAndAuthSettings(context, client_auth); + /// Find initiator in cluster + Cluster::Address initiator; + for (const auto & replicas : cluster->getShardsAddresses()) + for (const auto & node : replicas) + if (node.getHash() == address_hash_or_filename) + { + initiator = node; + break; + } + bool need_path_column = false; bool need_file_column = 
false; @@ -258,13 +266,8 @@ Pipe StorageS3Distributed::read( need_file_column = true; } - std::cout << need_file_column << std::boolalpha << need_file_column << std::endl; - std::cout << need_path_column << std::boolalpha << need_path_column << std::endl; - - std::cout << "metadata_snapshot->getSampleBlock().dumpStructure() " << metadata_snapshot->getSampleBlock().dumpStructure() << std::endl; - - return Pipe(std::make_shared( - context.getInitialQueryId(), + StorageS3SourceBuilder s3builder + { need_path_column, need_file_column, format_name, @@ -273,24 +276,65 @@ Pipe StorageS3Distributed::read( context, metadata_snapshot->getColumns(), max_block_size, - chooseCompressionMethod(client_auth.uri.key, compression_method), - client_auth + compression_method + }; + + return Pipe(std::make_shared( + context.getInitialQueryId(), + /*initiator*/initiator, + cli_builder, + s3builder )); } - Pipes pipes; - connections.reserve(cluster->getShardCount()); + /// This part of code executes on initiator - std::cout << "StorageS3Distributed::read" << std::endl; - std::cout << "QueryProcessingStage " << processed_stage << std::endl; + String hash_of_address; + for (const auto & replicas : cluster->getShardsAddresses()) + for (const auto & node : replicas) + if (node.is_local && node.port == context.getTCPPort()) + { + hash_of_address = node.getHash(); + break; + } + /// FIXME: better exception + if (hash_of_address.empty()) + throw Exception(fmt::format("Could not find outself in cluster {}", ""), ErrorCodes::LOGICAL_ERROR); + + auto remote_query_ast = query_info.query->clone(); + auto table_expressions_from_whole_query = getTableExpressions(remote_query_ast->as()); + + String remote_query; + for (const auto & table_expression : table_expressions_from_whole_query) + { + const auto & table_function_ast = table_expression->table_function; + if (table_function_ast->getTreeHash() == tree_hash) + { + std::cout << table_function_ast->dumpTree() << std::endl; + auto & arguments = table_function_ast->children.at(0)->children; + auto & bucket = arguments[1]->as().value.safeGet(); + /// We rewrite query, and insert a port to connect as a first parameter + /// So, we write hash_of_address here as buckey name to find initiator node + /// in cluster from config on remote replica + bucket = hash_of_address; + remote_query = queryToString(remote_query_ast); + break; + } + } + + if (remote_query.empty()) + throw Exception("No table function", ErrorCodes::LOGICAL_ERROR); Block header = - InterpreterSelectQuery(query_info.query, context, SelectQueryOptions(processed_stage).analyze()).getSampleBlock(); + InterpreterSelectQuery(remote_query_ast, context, SelectQueryOptions(processed_stage).analyze()).getSampleBlock(); const Scalars & scalars = context.hasQueryContext() ? 
context.getQueryContext().getScalars() : Scalars{}; + Pipes pipes; + connections.reserve(cluster->getShardCount()); + for (const auto & replicas : cluster->getShardsAddresses()) { /// There will be only one replica, because we consider each replica as a shard for (const auto & node : replicas) @@ -306,13 +350,13 @@ Pipe StorageS3Distributed::read( )); auto stream = std::make_shared( /*connection=*/*connections.back(), - /*query=*/queryToString(query_info.query), + /*query=*/remote_query, /*header=*/header, /*context=*/context, nullptr, scalars, Tables(), - QueryProcessingStage::FetchColumns + processed_stage ); pipes.emplace_back(std::make_shared(std::move(stream))); } @@ -322,11 +366,5 @@ Pipe StorageS3Distributed::read( metadata_snapshot->check(column_names, getVirtuals(), getStorageID()); return Pipe::unitePipes(std::move(pipes)); } - - - - - - } diff --git a/src/Storages/StorageS3Distributed.h b/src/Storages/StorageS3Distributed.h index e7c5c96900e..ba7dfb88330 100644 --- a/src/Storages/StorageS3Distributed.h +++ b/src/Storages/StorageS3Distributed.h @@ -20,6 +20,12 @@ namespace ErrorCodes class Context; +struct ClientAuthentificationBuilder +{ + String access_key_id; + String secret_access_key; + UInt64 max_connections; +}; class StorageS3Distributed : public ext::shared_ptr_helper, public IStorage { @@ -39,7 +45,8 @@ public: protected: StorageS3Distributed( - const S3::URI & uri_, + IAST::Hash tree_hash_, + const String & address_hash_or_filename_, const String & access_key_id_, const String & secret_access_key_, const StorageID & table_id_, @@ -49,21 +56,19 @@ protected: const ColumnsDescription & columns_, const ConstraintsDescription & constraints_, const Context & context_, - const String & compression_method_ = ""); + const String & compression_method_); private: /// Connections from initiator to other nodes std::vector> connections; + IAST::Hash tree_hash; + String address_hash_or_filename; std::string cluster_name; ClusterPtr cluster; - /// This will be used on non-initiator nodes. 
- std::optional initiator; - std::shared_ptr initiator_connection; - StorageS3::ClientAuthentificaiton client_auth; - String format_name; String compression_method; + ClientAuthentificationBuilder cli_builder; }; diff --git a/src/TableFunctions/TableFunctionS3Distributed.cpp b/src/TableFunctions/TableFunctionS3Distributed.cpp index 8717a5aa5bc..3c17faff456 100644 --- a/src/TableFunctions/TableFunctionS3Distributed.cpp +++ b/src/TableFunctions/TableFunctionS3Distributed.cpp @@ -1,6 +1,7 @@ #include #include #include "DataStreams/RemoteBlockInputStream.h" +#include "Parsers/ASTExpressionList.h" #include "Parsers/ASTFunction.h" #include "Parsers/IAST_fwd.h" #include "Processors/Sources/SourceFromInputStream.h" @@ -49,7 +50,7 @@ void TableFunctionS3Distributed::parseArguments(const ASTPtr & ast_function, con arg = evaluateConstantExpressionOrIdentifierAsLiteral(arg, context); cluster_name = args[0]->as().value.safeGet(); - filename = args[1]->as().value.safeGet(); + filename_or_initiator_hash = args[1]->as().value.safeGet(); if (args.size() < 5) { @@ -78,38 +79,38 @@ StoragePtr TableFunctionS3Distributed::executeImpl( const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { - Poco::URI uri (filename); - S3::URI s3_uri (uri); - // UInt64 min_upload_part_size = context.getSettingsRef().s3_min_upload_part_size; - // UInt64 max_single_part_upload_size = context.getSettingsRef().s3_max_single_part_upload_size; UInt64 max_connections = context.getSettingsRef().s3_max_connections; - StorageS3::ClientAuthentificaiton client_auth{s3_uri, access_key_id, secret_access_key, max_connections, {}, {}}; - StorageS3::updateClientAndAuthSettings(context, client_auth); - - auto lists = StorageS3::listFilesWithRegexpMatching(*client_auth.client, client_auth.uri); - Strings tasks; - tasks.reserve(lists.size()); - - for (auto & value : lists) + /// Initiator specific logic + while (context.getInitialQueryId() == context.getCurrentQueryId()) { - tasks.emplace_back(client_auth.uri.endpoint + '/' + client_auth.uri.bucket + '/' + value); - std::cout << tasks.back() << std::endl; + auto poco_uri = Poco::URI{filename_or_initiator_hash}; + + /// This is needed, because secondary query on local replica has the same query-id + if (poco_uri.getHost().empty() || poco_uri.getPort() == 0) + break; + + S3::URI s3_uri(poco_uri); + StorageS3::ClientAuthentificaiton client_auth{s3_uri, access_key_id, secret_access_key, max_connections, {}, {}}; + StorageS3::updateClientAndAuthSettings(context, client_auth); + + auto lists = StorageS3::listFilesWithRegexpMatching(*client_auth.client, client_auth.uri); + Strings tasks; + tasks.reserve(lists.size()); + + for (auto & value : lists) + tasks.emplace_back(client_auth.uri.endpoint + '/' + client_auth.uri.bucket + '/' + value); + + /// Register resolver, which will give other nodes a task to execute + TaskSupervisor::instance().registerNextTaskResolver( + std::make_unique(context.getCurrentQueryId(), std::move(tasks))); + + break; } - std::cout << "query_id " << context.getCurrentQueryId() << std::endl; - - std::cout << ast_function->dumpTree() << std::endl; - auto * func = ast_function->as(); - - std::cout << func->arguments->dumpTree() << std::endl; - - /// Register resolver, which will give other nodes a task to execute - TaskSupervisor::instance().registerNextTaskResolver( - std::make_unique(context.getCurrentQueryId(), std::move(tasks))); - StoragePtr storage = StorageS3Distributed::create( - s3_uri, + 
ast_function->getTreeHash(), + filename_or_initiator_hash, access_key_id, secret_access_key, StorageID(getDatabaseName(), table_name), diff --git a/src/TableFunctions/TableFunctionS3Distributed.h b/src/TableFunctions/TableFunctionS3Distributed.h index d531ef175bb..2fab786dee6 100644 --- a/src/TableFunctions/TableFunctionS3Distributed.h +++ b/src/TableFunctions/TableFunctionS3Distributed.h @@ -37,7 +37,7 @@ protected: void parseArguments(const ASTPtr & ast_function, const Context & context) override; String cluster_name; - String filename; + String filename_or_initiator_hash; String format; String structure; String access_key_id; From 9b7a6e66d03c1cf29e2e3121fe71e7f724f6fa37 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Thu, 25 Mar 2021 00:02:21 +0300 Subject: [PATCH 042/108] add ifded --- src/Storages/StorageS3Distributed.cpp | 5 +++++ src/Storages/StorageS3Distributed.h | 6 ++++++ 2 files changed, 11 insertions(+) diff --git a/src/Storages/StorageS3Distributed.cpp b/src/Storages/StorageS3Distributed.cpp index 6254b7f15df..369d8d200b3 100644 --- a/src/Storages/StorageS3Distributed.cpp +++ b/src/Storages/StorageS3Distributed.cpp @@ -1,5 +1,9 @@ #include "Storages/StorageS3Distributed.h" +#include + +#if USE_AWS_S3 + #include "Common/Exception.h" #include #include "Client/Connection.h" @@ -368,3 +372,4 @@ Pipe StorageS3Distributed::read( } } +#endif diff --git a/src/Storages/StorageS3Distributed.h b/src/Storages/StorageS3Distributed.h index ba7dfb88330..23c3230c6c6 100644 --- a/src/Storages/StorageS3Distributed.h +++ b/src/Storages/StorageS3Distributed.h @@ -1,5 +1,9 @@ #pragma once +#include + +#if USE_AWS_S3 + #include "Client/Connection.h" #include "Interpreters/Cluster.h" #include "Storages/IStorage.h" @@ -73,3 +77,5 @@ private: } + +#endif From 17acfefbcd88c63bde43b906dfa280b816907db1 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Thu, 25 Mar 2021 02:22:23 +0300 Subject: [PATCH 043/108] add integration test --- .../test_s3_distributed/__init__.py | 0 .../test_s3_distributed/configs/cluster.xml | 50 ++++++++++++++++ .../data/clickhouse/part1.csv | 10 ++++ .../data/clickhouse/part123.csv | 3 + .../data/database/part2.csv | 5 ++ .../data/database/partition675.csv | 7 +++ tests/integration/test_s3_distributed/test.py | 57 +++++++++++++++++++ 7 files changed, 132 insertions(+) create mode 100644 tests/integration/test_s3_distributed/__init__.py create mode 100644 tests/integration/test_s3_distributed/configs/cluster.xml create mode 100644 tests/integration/test_s3_distributed/data/clickhouse/part1.csv create mode 100644 tests/integration/test_s3_distributed/data/clickhouse/part123.csv create mode 100644 tests/integration/test_s3_distributed/data/database/part2.csv create mode 100644 tests/integration/test_s3_distributed/data/database/partition675.csv create mode 100644 tests/integration/test_s3_distributed/test.py diff --git a/tests/integration/test_s3_distributed/__init__.py b/tests/integration/test_s3_distributed/__init__.py new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/integration/test_s3_distributed/configs/cluster.xml b/tests/integration/test_s3_distributed/configs/cluster.xml new file mode 100644 index 00000000000..9813e1ce7e3 --- /dev/null +++ b/tests/integration/test_s3_distributed/configs/cluster.xml @@ -0,0 +1,50 @@ + + + + + + + s0_0_0 + 9000 + + + s0_0_1 + 9000 + + + + + s0_1_0 + 9000 + + + + + + + + + \ No newline at end of file diff --git a/tests/integration/test_s3_distributed/data/clickhouse/part1.csv 
b/tests/integration/test_s3_distributed/data/clickhouse/part1.csv new file mode 100644 index 00000000000..a44d3ca1ffb --- /dev/null +++ b/tests/integration/test_s3_distributed/data/clickhouse/part1.csv @@ -0,0 +1,10 @@ +"fSRH",976027584,"[[(-1.5346513608456012e-204,-2.867937504545497e266),(3.1627675144114637e-231,-2.20343471241604e-54),(-1.866886218651809e-89,-7.695893036366416e100),(8.196307577166986e-169,-8.203793887684096e-263),(-1.6150328830402252e-215,8.531116551449711e-296),(4.3378407855931477e92,1.1313645428723989e117),(-4.238081208165573e137,-8.969951719788361e67)],[(-3.409639554701108e169,-7.277093176871153e-254),(1.1466207153308928e-226,3.429893348531029e96),(6.451302850199177e-189,-7.52379443153242e125),(-1.7132078539493614e-127,-2.3177814806867505e241),(1.4996520594989919e-257,4.271017883966942e128)],[(65460976657479156000,1.7055814145588595e253),(-1.921491101580189e154,3.2912740465446566e-286),(0.0008437955075350972,-5.143493717005472e-107),(8.637208599142187e-150,7.825076274945548e136),(1.8077733932468565e-159,5.51061479974026e-77),(1.300406236793709e-260,10669142.497111017),(-1.731981751951159e91,-1.270795062098902e102)],[(3.336706342781395e-7,-1.1919528866481513e266)]]" +"sX6>",733011552,"[[(-3.737863336077909e-44,3.066510481088993e-161),(-1.0047259170558555e-31,8.066145272086467e-274)],[(1.2261835328136691e-58,-6.154561379350395e258),(8.26019994651558e35,-6.736984599062694e-19),(-1.4143671344485664e-238,-1.220003479858045e203),(2.466089772925698e-207,1.0025476904532926e-242),(-6.3786667153054354e240,-7.010834902137467e-103),(-6.766918514324285e-263,7.404639608483947e188),(2.753493977757937e126,-4.089565842001999e-152)],[(4.339873790493155e239,-5.022554811588342e24),(-1.7712390083519473e-66,1.3290563068463308e112),(3.3648764781548893e233,1.1123394188044336e112),(-5.415278137684864e195,5.590597851016202e-270),(-2.1032310903543943e99,-2.2335799924679948e-184)]]" +"",2396526460,"[[(1.9925796792641788e-261,1.647618305107044e158),(3.014593666207223e-222,-9.016473078578002e-20),(-1.5307802021477097e-230,-7.867078587209265e-243),(-7.330317098800564e295,1.7496539408601967e-281)],[(2.2816938730052074e98,-3.3089122320442997e-136),(-4.930983789361344e-263,-6.526758521792829e59),(-2.6482873886835413e34,-4.1985691142515947e83),(1.5496810029349365e238,-4.790553105593492e71),(-7.597436233325566e83,-1.3791763752378415e137),(-1.917321980700588e-307,-1.5913257477581824e62)]]" +"=@ep",3618088392,"[[(-2.2303235811290024e-306,8.64070367587338e-13),(-7.403012423264767e-129,-1.0825508572345856e-147),(-3.6080301450167e286,1.7302718548299961e285),(-1.3839239794870825e-156,4.255424291564323e107),(2.3191305762555e-33,-2.873899421579949e-145),(7.237414513124649e-159,-4.926574547865783e178),(4.251831312243431e-199,1.2164714479391436e201)],[(-5.114074387943793e242,2.0119340496886292e295),(-3.3663670765548e-262,-6.1992631068472835e221),(1.1539386993255106e-261,1.582903697171063e-33),(-6.1914577817088e118,-1.0401495621681123e145)],[],[(-5.9815907467493136e82,4.369047439375412e219),(-4.485368440431237e89,-3.633023372434946e-59),(-2.087497331251707e-180,1.0524018118646965e257)],[(-1.2636503461000215e-228,-4.8426877075223456e204),(2.74943107551342e281,-7.453097760262003e-14)]]" +"",3467776823,"[]" +"b'zQ",484159052,"[[(3.041838095219909e276,-6.956822159518612e-87)],[(6.636906358770296e-97,1.0531865724169307e-214)],[(-8.429249069245283e-243,-2.134779842898037e243)],[(-0.4657586598569572,2.799768548127799e187),(-5.961335445789657e-129,2.560331789344886e293),(-3.139409694983184e45,2.8011384557268085e-47)]]" 
+"6xGw",4126129912,"[]" +"Q",3109335413,"[[(-2.8435266267772945e39,9.548278488724291e26),(-1.1682790407223344e46,-3.925561182768867e-266),(2.8381633655721614e-202,-3.472921303086527e40),(3.3968328275944204e-150,-2.2188876184777275e-69),(-1.2612795000783405e-88,-1.2942793285205966e-49),(1.3678466236967012e179,1.721664680964459e97),(-1.1020844667744628e198,-3.403142062758506e-47)],[],[(1.343149099058239e-279,9.397894929770352e-132),(-5.280854317597215e250,9.862550191577643e-292),(-7.11468799151533e-58,7.510011657942604e96),(1.183774454157175e-288,-1.5697197095936546e272),(-3.727289017361602e120,2.422831380775067e-107),(1.4345094301262986e-177,2.4990983297605437e-91)],[(9.195226893854516e169,6.546374357272709e-236),(2.320311199531441e-126,2.2257031285964243e-185),(3.351868475505779e-184,1.84394695526876e88)],[(1.6290814396647987e-112,-3.589542711073253e38),(4.0060174859833907e-261,-1.9900431208726192e-296),(2.047468933030435e56,8.483912759156179e-57),(3.1165727272872075e191,-1.5487136748040008e-156),(0.43564020198461034,4.618165048931035e-244),(-7.674951896752824e-214,1.1652522629091777e-105),(4.838653901829244e-89,5.3085904574780206e169)],[(1.8286703553352283e-246,2.0403170465657044e255),(2.040810692623279e267,4.3956975402250484e-8),(2.4101343663018673e131,-8.672394158504762e167),(3.092080945239809e-219,-3.775474693770226e293),(-1.527991241079512e-15,-1.2603969180963007e226),(9.17470637459212e-56,1.6021090930395906e-133),(7.877647227721046e58,3.2592118033868903e-108)],[(1.4334765313272463e170,2.6971234798957105e-50)]]" +"^ip",1015254922,"[[(-2.227414144223298e-63,1.2391785738638914e276),(1.2668491759136862e207,2.5656762953078853e-67),(2.385410876813441e-268,1.451107969531624e25),(-5.475956161647574e131,2239495689376746),(1.5591286361054593e180,3.672868971445151e117)]]" +"5N]",1720727300,"[[(-2.0670321228319122e-258,-2.6893477429616666e-32),(-2.2424105705209414e225,3.547832127050775e25),(4.452916756606404e-121,-3.71114618421911e156),(-1.966961937965055e-110,3.1217044497868816e227),(20636923519704216,1.3500210618276638e30),(3.3195926701816527e-276,1.5557140338374535e234)],[]]" diff --git a/tests/integration/test_s3_distributed/data/clickhouse/part123.csv b/tests/integration/test_s3_distributed/data/clickhouse/part123.csv new file mode 100644 index 00000000000..1ca3353b741 --- /dev/null +++ b/tests/integration/test_s3_distributed/data/clickhouse/part123.csv @@ -0,0 +1,3 @@ +"b'zQ",2960084897,"[[(3.014593666207223e-222,-7.277093176871153e-254),(-1.5307802021477097e-230,3.429893348531029e96),(-7.330317098800564e295,-7.52379443153242e125),(2.2816938730052074e98,-2.3177814806867505e241),(-4.930983789361344e-263,4.271017883966942e128)],[(-2.6482873886835413e34,1.7055814145588595e253),(1.5496810029349365e238,3.2912740465446566e-286),(-7.597436233325566e83,-5.143493717005472e-107),(-1.917321980700588e-307,7.825076274945548e136)],[(-2.2303235811290024e-306,5.51061479974026e-77),(-7.403012423264767e-129,10669142.497111017),(-3.6080301450167e286,-1.270795062098902e102),(-1.3839239794870825e-156,-1.1919528866481513e266),(2.3191305762555e-33,3.066510481088993e-161),(7.237414513124649e-159,8.066145272086467e-274)],[(4.251831312243431e-199,-6.154561379350395e258),(-5.114074387943793e242,-6.736984599062694e-19),(-3.3663670765548e-262,-1.220003479858045e203),(1.1539386993255106e-261,1.0025476904532926e-242),(-6.1914577817088e118,-7.010834902137467e-103),(-5.9815907467493136e82,7.404639608483947e188),(-4.485368440431237e89,-4.089565842001999e-152)]]" 
+"6xGw",2107128550,"[[(-2.087497331251707e-180,-5.022554811588342e24),(-1.2636503461000215e-228,1.3290563068463308e112),(2.74943107551342e281,1.1123394188044336e112),(3.041838095219909e276,5.590597851016202e-270)],[],[(6.636906358770296e-97,-2.2335799924679948e-184),(-8.429249069245283e-243,1.647618305107044e158),(-0.4657586598569572,-9.016473078578002e-20)]]" +"Q",2713167232,"[[(-5.961335445789657e-129,-7.867078587209265e-243),(-3.139409694983184e45,1.7496539408601967e-281)],[(-2.8435266267772945e39,-3.3089122320442997e-136)]]" diff --git a/tests/integration/test_s3_distributed/data/database/part2.csv b/tests/integration/test_s3_distributed/data/database/part2.csv new file mode 100644 index 00000000000..572676e47c6 --- /dev/null +++ b/tests/integration/test_s3_distributed/data/database/part2.csv @@ -0,0 +1,5 @@ +"~m`",820408404,"[]" +"~E",3621610983,"[[(1.183772215004139e-238,-1.282774073199881e211),(1.6787305112393978e-46,7.500499989257719e25),(-2.458759475104641e-260,3.1724599388651864e-171),(-2.0163203163062471e118,-4.677226438945462e-162),(-5.52491070012707e-135,7.051780441780731e-236)]]" +"~1",1715555780,"[[(-6.847404226505131e-267,5.939552045362479e-272),(8.02275075985457e-160,8.369250185716419e-104),(-1.193940928527857e-258,-1.132580458849774e39)],[(1.1866087552639048e253,3.104988412734545e57),(-3.37278669639914e84,-2.387628643569968e287),(-2.452136349495753e73,3.194309776006896e-204),(-1001997440265471100,3.482122851077378e-182)],[],[(-5.754682082202988e-20,6.598766936241908e156)],[(8.386764833095757e300,1.2049637765877942e229),(3.136243074210055e53,5.764669663844127e-100),(-4.190632347661851e195,-5.053553379163823e302),(2.0805194731736336e-19,-1.0849036699112485e-271),(1.1292361211411365e227,-8.767824448179629e229),(-3.6938137156625264e-19,-5.387931698392423e109),(-1.2240482125885677e189,-1.5631467861525635e-103)],[(-2.3917431782202442e138,7.817228281030191e-242),(-1.1462343232899826e279,-1.971215065504208e-225),(5.4316119855340265e-62,3.761081156597423e-60),(8.111852137718836e306,8.115485489580134e-208)],[]]" +"~%]",1606443384,"[[]]" +"}or",726681547,"[]" \ No newline at end of file diff --git a/tests/integration/test_s3_distributed/data/database/partition675.csv b/tests/integration/test_s3_distributed/data/database/partition675.csv new file mode 100644 index 00000000000..e8496680368 --- /dev/null +++ b/tests/integration/test_s3_distributed/data/database/partition675.csv @@ -0,0 +1,7 @@ +"kvUES",4281162618,"[[(2.4538308454074088e303,1.2209370543175666e178),(1.4564007891121754e-186,2.340773478952682e-273),(-1.01791181533976e165,-3.9617466227377253e248)]]" +"Gu",4280623186,"[[(-1.623487579335014e38,-1.0633405021023563e225),(-4.373688812751571e180,2.5511550357717127e138)]]" +"J_u1",4277430503,"[[(2.981826196369429e-294,-6.059236590410922e236),(8.502045137575854e-296,3.0210403188125657e-91),(-9.370591842861745e175,4.150870185764185e129),(1.011801592194125e275,-9.236010982686472e266),(-3.1830638196303316e277,2.417706446545472e-105),(-1.4369143023804266e-201,4.7529126795899655e238)],[(-2.118789593804697e186,-1.8760231612433755e-280),(2.5982563179976053e200,-1.4683025762313524e-40)],[(-1.873397623255704e-240,1.4363190147949886e-283),(-1.5760337746177136e153,1.5272278536086246e-34),(-8.117473317695919e155,2.4375370926733504e150),(-1.179230972881795e99,1.7693459774706515e-259),(2.2102106250558424e-40,4.734162675762768e-56),(6.058833110550111e-8,8.892471775821198e164),(-1.8208740799996599e59,6.446958261080721e178)]]" 
+"s:\",4265055390,"[[(-3.291651377214531e-167,3.9198636942402856e185),(2.4897781692770126e176,2.579309759138358e188),(4.653945381397663e205,3.216314556208208e158),(-5.3373279440714224e-39,2.404386813826413e212),(-1.4217294382527138e307,8.874978978402512e-173)],[(8.527603121149904e-58,-5.0520795335878225e88),(-0.00022870878520550814,-3.2334214176860943e-68),(-6.97683613433404e304,-2.1573757788072144e-82),(-1.1394163455875937e36,-3.817990182461824e271),(2.4099027412881423e-209,8.542179392011098e-156),(3.2610511540394803e174,1.1692631657517616e-20)],[(3.625474290538107e261,-5.359205062039837e-193),(-3.574126569378072e-112,-5.421804160994412e265),(-4.873653931207849e-76,3219678918284.317),(-7.030770825898911e-57,1.4647389742249787e-274),(-4.4882439220492357e-203,6.569338333730439e-38)],[(-2.2418056002374865e-136,5.113251922954469e-16),(2.5156744571032497e297,-3.0536957683846124e-192)],[(1.861112291954516e306,-1.8160882143331256e129),(1.982573454900027e290,-2.451412311394593e170)],[(-2.8292230178712157e-18,1.2570198161962067e216),(6.24832495972797e-164,-2.0770908334330718e-273)],[(980143647.1858811,1.2738714961511727e106),(6.516450532397311e-184,4.088688742052062e31),(-2.246311532913914e269,-7.418103885850518e-179),(1.2222973942835046e-289,2.750544834553288e-46),(9.503169349701076e159,-1.355457053256579e215)]]" +":hzO",4263726959,"[[(-2.553206398375626e-90,1.6536977728640226e199),(1.5630078027143848e-36,2.805242683101373e-211),(2.2573933085983554e-92,3.450501333524858e292),(-1.215900901292646e-275,-3.860558658606121e272),(6.65716072773856e-145,2.5359010031217893e217)],[(-1.3308039625779135e308,1.7464622720773261e258),(-3.2986890093446374e179,3.9038871583175653e-69),(-4.3594764087383885e-95,4.229921973278908e-123),(-5.455694205415656e137,3.597894902167716e108),(1.2480860990110662e-29,-1.4873488392480292e-185),(7.563210285835444e55,-5624068447.488605)],[(3.9517937289943195e181,-3.2799189227094424e-68),(8.906762198487649e-167,3.952452177941537e-159)]]" +"a",4258301804,"[[(5.827965576703262e-281,2.2523852665173977e90)],[(-6.837604072282348e-97,8.125864241406046e-61)],[(-2.3047912084435663e53,-8.814499720685194e36),(1.2072558137199047e-79,1.2096862541827071e142),(2.2000026293774143e275,-3.2571689055108606e-199),(1.1822278574921316e134,2.9571188365006754e-86),(1.0448954272555034e-169,1.2182183489600953e-60)],[(-3.1366540817730525e89,9.327128058982966e-306),(6.588968210928936e73,-11533531378.938957),(-2.6715943555840563e44,-4.557428011172859e224),(-3.8334913754415923e285,-4.748721454106074e-173),(-1.6912052107425128e275,-4.789382438422238e-219),(1.8538365229016863e151,-3.5698172075468775e-37)],[(-2.1963131282037294e49,-5.53604352524995e-296)],[(-8.834414834987965e167,1.3186354307320576e247),(2.109209547987338e298,1.2191009105107557e-32),(-3.896880410603213e-92,-3.4589588698231044e-121),(-3.252529090888335e138,-7.862741341454407e204)],[(-9.673078095447289e-207,8.839303128607278e123),(2.6043620378793597e-244,-6.898328199987363e-308),(-2.5921142292355475e-54,1.0352159149517285e-143)]]" 
+"S+",4257734123,"[[(1.5714269203495863e245,-15651321.549208183),(-3.7292056272445236e-254,-4.556927533596056e-234),(-3.0309414401442555e-203,-3.84393827531526e-12)],[(1.7718777510571518e219,3.972086323144777e139),(1.5723805735454373e-67,-3.805243648123396e226),(154531069271292800000,1.1384408025183933e-285),(-2.009892367470994e-247,2.0325742976832167e81)],[(1.2145787097670788e55,-5.0579298233321666e-30),(5.05577441452021e-182,-2.968914705509665e-175),(-1.702335524921919e67,-2.852552828587631e-226),(-2.7664498327826963e-99,-1.2967072085088717e-305),(7.68881162387673e-68,-1.2506915095983359e-142),(-7.60308693295946e-40,5.414853590549086e218)],[(8.595602987813848e226,-3.9708286611967497e-206),(-5.80352787694746e-52,5.610493934761672e236),(2.1336999375861025e217,-5.431988994371099e-154),(-6.2758614367782974e29,-8.359901046980544e-55)],[(1.6910790690897504e54,9.798739710823911e197),(-6.530270107036228e-284,8.758552462406328e-302),(2.931625032390877e-118,2.8793800873550273e83),(-3.293986884112906e-88,11877326093331202),(0.0008071321465157103,1.0720860516457485e-298)]]" diff --git a/tests/integration/test_s3_distributed/test.py b/tests/integration/test_s3_distributed/test.py new file mode 100644 index 00000000000..6977fbd4b4d --- /dev/null +++ b/tests/integration/test_s3_distributed/test.py @@ -0,0 +1,57 @@ +import logging +import os + +import pytest +from helpers.cluster import ClickHouseCluster +from helpers.test_tools import TSV + +logging.getLogger().setLevel(logging.INFO) +logging.getLogger().addHandler(logging.StreamHandler()) + +SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__)) +S3_DATA = ['data/clickhouse/part1.csv', 'data/clickhouse/part123.csv', 'data/database/part2.csv', 'data/database/partition675.csv'] + +def create_buckets_s3(cluster): + minio = cluster.minio_client + for file in S3_DATA: + minio.fput_object(bucket_name=cluster.minio_bucket, object_name=file, file_path=os.path.join(SCRIPT_DIR, file)) + for obj in minio.list_objects(cluster.minio_bucket, recursive=True): + print(obj.object_name) + + +@pytest.fixture(scope="module") +def started_cluster(): + try: + cluster = ClickHouseCluster(__file__) + cluster.add_instance('s0_0_0', main_configs=["configs/cluster.xml"], with_minio=True) + cluster.add_instance('s0_0_1', main_configs=["configs/cluster.xml"]) + cluster.add_instance('s0_1_0', main_configs=["configs/cluster.xml"]) + + logging.info("Starting cluster...") + cluster.start() + logging.info("Cluster started") + + create_buckets_s3(cluster) + + yield cluster + finally: + cluster.shutdown() + + +def test_log_family_s3(started_cluster): + node = started_cluster.instances['s0_0_0'] + pure_s3 = node.query("SELECT * from s3('http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') ORDER BY (name, value, polygon)") + # print(pure_s3) + s3_distibuted = node.query("SELECT * from s3Distributed('cluster_simple', 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') ORDER BY (name, value, polygon)") + # print(s3_distibuted) + + assert TSV(pure_s3) == TSV(s3_distibuted) + +def test_count(started_cluster): + node = started_cluster.instances['s0_0_0'] + pure_s3 = node.query("SELECT count(*) from s3('http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))')") + # print(pure_s3) + 
s3_distibuted = node.query("SELECT count(*) from s3Distributed('cluster_simple', 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))')") + # print(s3_distibuted) + + assert TSV(pure_s3) == TSV(s3_distibuted) \ No newline at end of file From a5d55781b10ddc749fdb326c7413ca5050f32131 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Thu, 25 Mar 2021 02:23:29 +0300 Subject: [PATCH 044/108] remote debug prints --- src/DataStreams/RemoteBlockInputStream.cpp | 1 - src/Storages/StorageS3Distributed.cpp | 1 - 2 files changed, 2 deletions(-) diff --git a/src/DataStreams/RemoteBlockInputStream.cpp b/src/DataStreams/RemoteBlockInputStream.cpp index 5ab226acd13..c633600d37f 100644 --- a/src/DataStreams/RemoteBlockInputStream.cpp +++ b/src/DataStreams/RemoteBlockInputStream.cpp @@ -62,7 +62,6 @@ Block RemoteBlockInputStream::readImpl() if (isCancelledOrThrowIfKilled()) return Block(); - std::cout << "RemoteBlockInputStream " << block.rows() << ' ' << block.dumpStructure() << std::endl; return block; } diff --git a/src/Storages/StorageS3Distributed.cpp b/src/Storages/StorageS3Distributed.cpp index 369d8d200b3..0353422c667 100644 --- a/src/Storages/StorageS3Distributed.cpp +++ b/src/Storages/StorageS3Distributed.cpp @@ -316,7 +316,6 @@ Pipe StorageS3Distributed::read( const auto & table_function_ast = table_expression->table_function; if (table_function_ast->getTreeHash() == tree_hash) { - std::cout << table_function_ast->dumpTree() << std::endl; auto & arguments = table_function_ast->children.at(0)->children; auto & bucket = arguments[1]->as().value.safeGet(); /// We rewrite query, and insert a port to connect as a first parameter From cef9e19eb2cd21dbcd3ddfcc0e66db5d16ed109a Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Thu, 25 Mar 2021 16:49:07 +0300 Subject: [PATCH 045/108] better --- programs/client/Client.cpp | 1 - src/Client/Connection.cpp | 10 +++------- src/Client/Connection.h | 3 --- src/Functions/IFunction.cpp | 2 +- src/Server/TCPHandler.cpp | 3 +++ src/Storages/StorageS3.h | 2 +- 6 files changed, 8 insertions(+), 13 deletions(-) diff --git a/programs/client/Client.cpp b/programs/client/Client.cpp index 308076f9033..f7605e364f8 100644 --- a/programs/client/Client.cpp +++ b/programs/client/Client.cpp @@ -561,7 +561,6 @@ private: connect(); - /// Initialize DateLUT here to avoid counting time spent here as query execution time. const auto local_tz = DateLUT::instance().getTimeZone(); if (!context->getSettingsRef().use_client_time_zone) diff --git a/src/Client/Connection.cpp b/src/Client/Connection.cpp index 018544f969f..c302a3ff47c 100644 --- a/src/Client/Connection.cpp +++ b/src/Client/Connection.cpp @@ -1,5 +1,4 @@ #include -#include #include #include #include @@ -21,7 +20,6 @@ #include #include #include -#include #include #include #include @@ -91,10 +89,7 @@ void Connection::connect(const ConnectionTimeouts & timeouts) socket = std::make_unique(); } - if (!explicitly_resolved_address) - current_resolved_address = DNSResolver::instance().resolveAddress(host, port); - else - current_resolved_address = Poco::Net::SocketAddress(explicitly_resolved_address.value()); + current_resolved_address = DNSResolver::instance().resolveAddress(host, port); const auto & connection_timeout = static_cast(secure) ? 
timeouts.secure_connection_timeout : timeouts.connection_timeout; socket->connect(*current_resolved_address, connection_timeout); @@ -417,7 +412,7 @@ void Connection::sendQuery( { if (!connected) connect(timeouts); - + TimeoutSetter timeout_setter(*socket, timeouts.send_timeout, timeouts.receive_timeout, true); if (settings) @@ -452,6 +447,7 @@ void Connection::sendQuery( /// Per query settings. if (settings) { + std::cout << "Settings enabled" << std::endl; auto settings_format = (server_revision >= DBMS_MIN_REVISION_WITH_SETTINGS_SERIALIZED_AS_STRINGS) ? SettingsWriteFormat::STRINGS_WITH_FLAGS : SettingsWriteFormat::BINARY; settings->write(*out, settings_format); diff --git a/src/Client/Connection.h b/src/Client/Connection.h index 123b10942f1..16a509021da 100644 --- a/src/Client/Connection.h +++ b/src/Client/Connection.h @@ -208,9 +208,6 @@ public: private: String host; UInt16 port; - - std::optional explicitly_resolved_address; - String default_database; String user; String password; diff --git a/src/Functions/IFunction.cpp b/src/Functions/IFunction.cpp index 9636573c5f4..0a8e8f426b0 100644 --- a/src/Functions/IFunction.cpp +++ b/src/Functions/IFunction.cpp @@ -477,7 +477,7 @@ DataTypePtr FunctionOverloadResolverAdaptor::getReturnTypeDefaultImplementationF } if (null_presence.has_nullable) { - Block nested_columns{createBlockWithNestedColumns(arguments)}; + auto nested_columns = Block(createBlockWithNestedColumns(arguments)); auto return_type = getter(ColumnsWithTypeAndName(nested_columns.begin(), nested_columns.end())); return makeNullable(return_type); } diff --git a/src/Server/TCPHandler.cpp b/src/Server/TCPHandler.cpp index d6c5aed4fc3..059ba19f340 100644 --- a/src/Server/TCPHandler.cpp +++ b/src/Server/TCPHandler.cpp @@ -1102,6 +1102,9 @@ void TCPHandler::receiveQuery() Settings passed_settings; passed_settings.read(*in, settings_format); + std::cout << "receive Query" << std::endl; + std::cout << passed_settings.output_format_json_named_tuples_as_objects << std::endl; + /// Interserver secret. std::string received_hash; if (client_tcp_protocol_version >= DBMS_MIN_REVISION_WITH_INTERSERVER_SECRET) diff --git a/src/Storages/StorageS3.h b/src/Storages/StorageS3.h index c10f8ec12fb..c47a88e35d9 100644 --- a/src/Storages/StorageS3.h +++ b/src/Storages/StorageS3.h @@ -105,7 +105,7 @@ private: friend class StorageS3Distributed; friend class TableFunctionS3Distributed; friend class StorageS3SequentialSource; - friend class StorageS3Distributed; + struct ClientAuthentificaiton { const S3::URI uri; From e601a432e5a5c7cb9db00194160472324bc7ce48 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Thu, 25 Mar 2021 16:50:19 +0300 Subject: [PATCH 046/108] better[2] --- src/Client/Connection.cpp | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/src/Client/Connection.cpp b/src/Client/Connection.cpp index c302a3ff47c..1d3efefa6ec 100644 --- a/src/Client/Connection.cpp +++ b/src/Client/Connection.cpp @@ -412,7 +412,7 @@ void Connection::sendQuery( { if (!connected) connect(timeouts); - + TimeoutSetter timeout_setter(*socket, timeouts.send_timeout, timeouts.receive_timeout, true); if (settings) @@ -447,7 +447,6 @@ void Connection::sendQuery( /// Per query settings. if (settings) { - std::cout << "Settings enabled" << std::endl; auto settings_format = (server_revision >= DBMS_MIN_REVISION_WITH_SETTINGS_SERIALIZED_AS_STRINGS) ? 
SettingsWriteFormat::STRINGS_WITH_FLAGS : SettingsWriteFormat::BINARY; settings->write(*out, settings_format); From 851644d8bfad6b6ec748de5efe77debd22c7597d Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Thu, 25 Mar 2021 17:33:11 +0300 Subject: [PATCH 047/108] fix test --- programs/client/Client.cpp | 3 ++- src/Server/TCPHandler.cpp | 4 ++-- 2 files changed, 4 insertions(+), 3 deletions(-) diff --git a/programs/client/Client.cpp b/programs/client/Client.cpp index f7605e364f8..1aec3677b41 100644 --- a/programs/client/Client.cpp +++ b/programs/client/Client.cpp @@ -1620,7 +1620,8 @@ private: const auto * insert = parsed_query->as(); if (insert && insert->settings_ast) apply_query_settings(*insert->settings_ast); - const auto * with_output = parsed_query->as(); + /// FIXME: try to prettify this cast using `as<>()` + const auto * with_output = dynamic_cast(parsed_query.get()); if (with_output && with_output->settings_ast) apply_query_settings(*with_output->settings_ast); diff --git a/src/Server/TCPHandler.cpp b/src/Server/TCPHandler.cpp index 059ba19f340..4915a435afe 100644 --- a/src/Server/TCPHandler.cpp +++ b/src/Server/TCPHandler.cpp @@ -1102,8 +1102,6 @@ void TCPHandler::receiveQuery() Settings passed_settings; passed_settings.read(*in, settings_format); - std::cout << "receive Query" << std::endl; - std::cout << passed_settings.output_format_json_named_tuples_as_objects << std::endl; /// Interserver secret. std::string received_hash; @@ -1120,6 +1118,8 @@ void TCPHandler::receiveQuery() readStringBinary(state.query, *in); + std::cout << state.query << std::endl; + /// It is OK to check only when query != INITIAL_QUERY, /// since only in that case the actions will be done. if (!cluster.empty() && client_info.query_kind != ClientInfo::QueryKind::INITIAL_QUERY) From c1d1313dd8213447f8848a239ec34d15d8d52e50 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Thu, 25 Mar 2021 17:34:01 +0300 Subject: [PATCH 048/108] remote prints --- src/Server/TCPHandler.cpp | 2 -- 1 file changed, 2 deletions(-) diff --git a/src/Server/TCPHandler.cpp b/src/Server/TCPHandler.cpp index 4915a435afe..67a17069655 100644 --- a/src/Server/TCPHandler.cpp +++ b/src/Server/TCPHandler.cpp @@ -1118,8 +1118,6 @@ void TCPHandler::receiveQuery() readStringBinary(state.query, *in); - std::cout << state.query << std::endl; - /// It is OK to check only when query != INITIAL_QUERY, /// since only in that case the actions will be done. 
if (!cluster.empty() && client_info.query_kind != ClientInfo::QueryKind::INITIAL_QUERY) From b3094412b15d33435edf9630998f483047da9b24 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Fri, 26 Mar 2021 18:33:14 +0300 Subject: [PATCH 049/108] better --- src/IO/ReadBufferFromIStream.h | 2 +- src/Processors/Sources/SourceWithProgress.h | 4 +- src/Server/TCPHandler.cpp | 2 +- src/Storages/StorageS3.cpp | 8 +- src/Storages/StorageS3Distributed.cpp | 109 ++++++++-------- src/Storages/StorageS3Distributed.h | 76 +++++++++++ src/Storages/StorageTaskManager.cpp | 0 src/Storages/StorageTaskManager.h | 118 ------------------ .../TableFunctionS3Distributed.cpp | 6 +- .../TableFunctionS3Distributed.h | 9 +- .../test_s3_distributed/configs/cluster.xml | 10 +- tests/integration/test_s3_distributed/test.py | 63 ++++++++-- 12 files changed, 213 insertions(+), 194 deletions(-) delete mode 100644 src/Storages/StorageTaskManager.cpp delete mode 100644 src/Storages/StorageTaskManager.h diff --git a/src/IO/ReadBufferFromIStream.h b/src/IO/ReadBufferFromIStream.h index 7f804783ba2..67cc60c053f 100644 --- a/src/IO/ReadBufferFromIStream.h +++ b/src/IO/ReadBufferFromIStream.h @@ -17,7 +17,7 @@ private: bool nextImpl() override; public: - ReadBufferFromIStream(std::istream & istr_, size_t size = DBMS_DEFAULT_BUFFER_SIZE); + explicit ReadBufferFromIStream(std::istream & istr_, size_t size = DBMS_DEFAULT_BUFFER_SIZE); }; } diff --git a/src/Processors/Sources/SourceWithProgress.h b/src/Processors/Sources/SourceWithProgress.h index 3aa7a81f418..25ff3eacec7 100644 --- a/src/Processors/Sources/SourceWithProgress.h +++ b/src/Processors/Sources/SourceWithProgress.h @@ -55,12 +55,12 @@ public: void setProgressCallback(const ProgressCallback & callback) final { progress_callback = callback; } void addTotalRowsApprox(size_t value) final { total_rows_approx += value; } + void work() override; + protected: /// Call this method to provide information about progress. 
void progress(const Progress & value); - void work() override; - private: StreamLocalLimits limits; SizeLimits leaf_limits; diff --git a/src/Server/TCPHandler.cpp b/src/Server/TCPHandler.cpp index 67a17069655..545403ccfa4 100644 --- a/src/Server/TCPHandler.cpp +++ b/src/Server/TCPHandler.cpp @@ -25,7 +25,7 @@ #include #include #include -#include +#include #include #include #include diff --git a/src/Storages/StorageS3.cpp b/src/Storages/StorageS3.cpp index e81d8da6817..c3cfa8b35d5 100644 --- a/src/Storages/StorageS3.cpp +++ b/src/Storages/StorageS3.cpp @@ -96,7 +96,7 @@ Chunk StorageS3Source::generate() if (!initialized) { - reader->readSuffix(); + reader->readPrefix(); initialized = true; } @@ -226,6 +226,11 @@ Strings StorageS3::listFilesWithRegexpMatching(Aws::S3::S3Client & client, const } Aws::S3::Model::ListObjectsV2Request request; + + std::cout << "Will list objects: " << std::endl; + std::cout << globbed_uri.bucket << std::endl; + std::cout << key_prefix << std::endl; + request.SetBucket(globbed_uri.bucket); request.SetPrefix(key_prefix); @@ -252,6 +257,7 @@ Strings StorageS3::listFilesWithRegexpMatching(Aws::S3::S3Client & client, const for (const auto & row : outcome.GetResult().GetContents()) { String key = row.GetKey(); + std::cout << "KEY " << key << std::endl; if (re2::RE2::FullMatch(key, matcher)) result.emplace_back(std::move(key)); } diff --git a/src/Storages/StorageS3Distributed.cpp b/src/Storages/StorageS3Distributed.cpp index 0353422c667..5b40a3420cf 100644 --- a/src/Storages/StorageS3Distributed.cpp +++ b/src/Storages/StorageS3Distributed.cpp @@ -1,6 +1,7 @@ #include "Storages/StorageS3Distributed.h" #include +#include "Processors/Sources/SourceWithProgress.h" #if USE_AWS_S3 @@ -12,38 +13,32 @@ #include #include #include - #include #include #include #include - -#include - -#include -#include -#include - -#include - - -#include -#include -#include -#include #include -#include -#include -#include -#include - #include #include #include #include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include -#include -#include +#include +#include +#include #include #include @@ -51,13 +46,6 @@ #include #include -#include -#include -#include - -#include - - namespace DB { @@ -184,11 +172,12 @@ private: s3_source_builder.context, s3_source_builder.columns, s3_source_builder.max_block_size, - chooseCompressionMethod(client_auth.uri.key, s3_source_builder.compression_method), + chooseCompressionMethod(client_auth.uri.key, ""), client_auth.client, client_auth.uri.bucket, next_uri.key ); + return true; } @@ -236,7 +225,6 @@ StorageS3Distributed::StorageS3Distributed( } - Pipe StorageS3Distributed::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, @@ -247,17 +235,20 @@ Pipe StorageS3Distributed::read( unsigned /*num_streams*/) { /// Secondary query, need to read from S3 - if (context.getCurrentQueryId() != context.getInitialQueryId()) + if (context.getClientInfo().query_kind == ClientInfo::QueryKind::SECONDARY_QUERY) { /// Find initiator in cluster Cluster::Address initiator; - for (const auto & replicas : cluster->getShardsAddresses()) - for (const auto & node : replicas) - if (node.getHash() == address_hash_or_filename) - { - initiator = node; - break; - } + [&]() + { + for (const auto & replicas : cluster->getShardsAddresses()) + for (const auto & node : replicas) + if (node.getHash() == address_hash_or_filename) + { + initiator = node; + return; + } + }(); bool need_path_column = 
false; @@ -292,21 +283,29 @@ Pipe StorageS3Distributed::read( } - /// This part of code executes on initiator + /// The code from here and below executes on initiator String hash_of_address; - for (const auto & replicas : cluster->getShardsAddresses()) - for (const auto & node : replicas) - if (node.is_local && node.port == context.getTCPPort()) - { - hash_of_address = node.getHash(); - break; - } + [&]() + { + for (const auto & replicas : cluster->getShardsAddresses()) + for (const auto & node : replicas) + /// Finding ourselves in cluster + if (node.is_local && node.port == context.getTCPPort()) + { + hash_of_address = node.getHash(); + break; + } + }(); - /// FIXME: better exception if (hash_of_address.empty()) - throw Exception(fmt::format("Could not find outself in cluster {}", ""), ErrorCodes::LOGICAL_ERROR); + throw Exception(fmt::format("The initiator must be a part of a cluster {}", cluster_name), ErrorCodes::BAD_ARGUMENTS); + /// Our purpose to change some arguments of this function to store some relevant + /// information. Then we will send changed query to another hosts. + /// We got a pointer to table function representation in AST (a pointer to subtree) + /// as parameter of TableFunctionRemote::execute and saved its hash value. + /// Here we find it in the AST of whole query, change parameter and format it to string. auto remote_query_ast = query_info.query->clone(); auto table_expressions_from_whole_query = getTableExpressions(remote_query_ast->as()); @@ -328,8 +327,9 @@ Pipe StorageS3Distributed::read( } if (remote_query.empty()) - throw Exception("No table function", ErrorCodes::LOGICAL_ERROR); + throw Exception(fmt::format("There is no table function with hash of AST equals to {}", hash_of_address), ErrorCodes::LOGICAL_ERROR); + /// Calculate the header. This is significant, because some columns could be thrown away in some cases like query with count(*) Block header = InterpreterSelectQuery(remote_query_ast, context, SelectQueryOptions(processed_stage).analyze()).getSampleBlock(); @@ -356,16 +356,15 @@ Pipe StorageS3Distributed::read( /*query=*/remote_query, /*header=*/header, /*context=*/context, - nullptr, - scalars, - Tables(), - processed_stage + /*throttler=*/nullptr, + /*scalars*/scalars, + /*external_tables*/Tables(), + /*stage*/processed_stage ); pipes.emplace_back(std::make_shared(std::move(stream))); } } - metadata_snapshot->check(column_names, getVirtuals(), getStorageID()); return Pipe::unitePipes(std::move(pipes)); } diff --git a/src/Storages/StorageS3Distributed.h b/src/Storages/StorageS3Distributed.h index 23c3230c6c6..e686efd8c78 100644 --- a/src/Storages/StorageS3Distributed.h +++ b/src/Storages/StorageS3Distributed.h @@ -31,6 +31,82 @@ struct ClientAuthentificationBuilder UInt64 max_connections; }; +using QueryId = std::string; +using Task = std::string; +using Tasks = std::vector; +using TasksIterator = Tasks::iterator; + +class S3NextTaskResolver +{ +public: + S3NextTaskResolver(QueryId query_id, Tasks && all_tasks) + : id(query_id) + , tasks(all_tasks) + , current(tasks.begin()) + {} + + std::string next() + { + auto it = current; + ++current; + return it == tasks.end() ? 
"" : *it; + } + + std::string getId() + { + return id; + } + +private: + QueryId id; + Tasks tasks; + TasksIterator current; +}; + +using S3NextTaskResolverPtr = std::shared_ptr; + +class TaskSupervisor +{ +public: + using QueryId = std::string; + + TaskSupervisor() = default; + + static TaskSupervisor & instance() + { + static TaskSupervisor task_manager; + return task_manager; + } + + void registerNextTaskResolver(S3NextTaskResolverPtr resolver) + { + std::lock_guard lock(mutex); + auto & target = dict[resolver->getId()]; + if (target) + throw Exception(fmt::format("NextTaskResolver with name {} is already registered for query {}", + target->getId(), resolver->getId()), ErrorCodes::LOGICAL_ERROR); + target = std::move(resolver); + } + + + Task getNextTaskForId(const QueryId & id) + { + std::lock_guard lock(mutex); + auto it = dict.find(id); + if (it == dict.end()) + return ""; + auto answer = it->second->next(); + if (answer.empty()) + dict.erase(it); + return answer; + } + +private: + using ResolverDict = std::unordered_map; + ResolverDict dict; + std::mutex mutex; +}; + class StorageS3Distributed : public ext::shared_ptr_helper, public IStorage { friend struct ext::shared_ptr_helper; diff --git a/src/Storages/StorageTaskManager.cpp b/src/Storages/StorageTaskManager.cpp deleted file mode 100644 index e69de29bb2d..00000000000 diff --git a/src/Storages/StorageTaskManager.h b/src/Storages/StorageTaskManager.h deleted file mode 100644 index bb8b7952a4f..00000000000 --- a/src/Storages/StorageTaskManager.h +++ /dev/null @@ -1,118 +0,0 @@ -#pragma once - - -#include -#include -#include -#include -#include -#include -#include -#include -#include - - -#include - -namespace DB -{ -namespace ErrorCodes -{ - extern const int LOGICAL_ERROR; -} - -using QueryId = std::string; -using Task = std::string; -using Tasks = std::vector; -using TasksIterator = Tasks::iterator; - - -class NextTaskResolverBase -{ -public: - virtual ~NextTaskResolverBase() = default; - virtual std::string next() = 0; - virtual std::string getName() = 0; - virtual std::string getId() = 0; -}; - -using NextTaskResolverBasePtr = std::unique_ptr; - -class S3NextTaskResolver : public NextTaskResolverBase -{ -public: - S3NextTaskResolver(QueryId query_id, Tasks && all_tasks) - : id(query_id) - , tasks(all_tasks) - , current(tasks.begin()) - {} - - ~S3NextTaskResolver() override = default; - - std::string next() override - { - auto it = current; - ++current; - return it == tasks.end() ? 
"" : *it; - } - - std::string getName() override - { - return "S3NextTaskResolverBase"; - } - - std::string getId() override - { - return id; - } - -private: - QueryId id; - Tasks tasks; - TasksIterator current; -}; - -class TaskSupervisor -{ -public: - using QueryId = std::string; - - TaskSupervisor() = default; - - static TaskSupervisor & instance() - { - static TaskSupervisor task_manager; - return task_manager; - } - - void registerNextTaskResolver(NextTaskResolverBasePtr resolver) - { - std::lock_guard lock(rwlock); - auto & target = dict[resolver->getId()]; - if (target) - throw Exception(fmt::format("NextTaskResolver with name {} is already registered for query {}", - target->getId(), resolver->getId()), ErrorCodes::LOGICAL_ERROR); - target = std::move(resolver); - } - - - Task getNextTaskForId(const QueryId & id) - { - std::lock_guard lock(rwlock); - auto it = dict.find(id); - if (it == dict.end()) - return ""; - auto answer = it->second->next(); - if (answer.empty()) - dict.erase(it); - return answer; - } - -private: - using ResolverDict = std::unordered_map; - ResolverDict dict; - std::shared_mutex rwlock; -}; - - -} diff --git a/src/TableFunctions/TableFunctionS3Distributed.cpp b/src/TableFunctions/TableFunctionS3Distributed.cpp index 3c17faff456..f0a12c26705 100644 --- a/src/TableFunctions/TableFunctionS3Distributed.cpp +++ b/src/TableFunctions/TableFunctionS3Distributed.cpp @@ -1,12 +1,12 @@ #include #include #include "DataStreams/RemoteBlockInputStream.h" +#include "Interpreters/ClientInfo.h" #include "Parsers/ASTExpressionList.h" #include "Parsers/ASTFunction.h" #include "Parsers/IAST_fwd.h" #include "Processors/Sources/SourceFromInputStream.h" #include "Storages/StorageS3Distributed.h" -#include "Storages/System/StorageSystemOne.h" #if USE_AWS_S3 @@ -18,7 +18,6 @@ #include #include #include -#include #include #include "registerTableFunctions.h" @@ -82,7 +81,7 @@ StoragePtr TableFunctionS3Distributed::executeImpl( UInt64 max_connections = context.getSettingsRef().s3_max_connections; /// Initiator specific logic - while (context.getInitialQueryId() == context.getCurrentQueryId()) + while (context.getClientInfo().query_kind == ClientInfo::QueryKind::INITIAL_QUERY) { auto poco_uri = Poco::URI{filename_or_initiator_hash}; @@ -108,6 +107,7 @@ StoragePtr TableFunctionS3Distributed::executeImpl( break; } + StoragePtr storage = StorageS3Distributed::create( ast_function->getTreeHash(), filename_or_initiator_hash, diff --git a/src/TableFunctions/TableFunctionS3Distributed.h b/src/TableFunctions/TableFunctionS3Distributed.h index 2fab786dee6..a2dd526ab05 100644 --- a/src/TableFunctions/TableFunctionS3Distributed.h +++ b/src/TableFunctions/TableFunctionS3Distributed.h @@ -12,7 +12,14 @@ namespace DB class Context; -/* s3(source, [access_key_id, secret_access_key,] format, structure) - creates a temporary storage for a file in S3 +/** + * s3Distributed(cluster_name, source, [access_key_id, secret_access_key,] format, structure) + * A table function, which allows to process many files from S3 on a specific cluster + * On initiator it creates a conneciton to _all_ nodes in cluster, discloses asterics + * in S3 file path and register all tasks (paths in S3) in NextTaskResolver to dispatch + * them dynamically. + * On worker node it asks initiator about next task to process, processes it. + * This is repeated until the tasks are finished. 
*/ class TableFunctionS3Distributed : public ITableFunction { diff --git a/tests/integration/test_s3_distributed/configs/cluster.xml b/tests/integration/test_s3_distributed/configs/cluster.xml index 9813e1ce7e3..675994af2bf 100644 --- a/tests/integration/test_s3_distributed/configs/cluster.xml +++ b/tests/integration/test_s3_distributed/configs/cluster.xml @@ -20,22 +20,22 @@ - + diff --git a/tests/integration/test_s3_distributed/test.py b/tests/integration/test_s3_distributed/test.py index 6977fbd4b4d..8e55557f3f6 100644 --- a/tests/integration/test_s3_distributed/test.py +++ b/tests/integration/test_s3_distributed/test.py @@ -38,20 +38,69 @@ def started_cluster(): cluster.shutdown() -def test_log_family_s3(started_cluster): +def test_select_all(started_cluster): node = started_cluster.instances['s0_0_0'] - pure_s3 = node.query("SELECT * from s3('http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') ORDER BY (name, value, polygon)") + pure_s3 = node.query(""" + SELECT * from s3( + 'http://minio1:9001/root/data/{clickhouse,database}/*', + 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') + ORDER BY (name, value, polygon)""") # print(pure_s3) - s3_distibuted = node.query("SELECT * from s3Distributed('cluster_simple', 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') ORDER BY (name, value, polygon)") + s3_distibuted = node.query(""" + SELECT * from s3Distributed( + 'cluster_simple', + 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') ORDER BY (name, value, polygon)""") # print(s3_distibuted) assert TSV(pure_s3) == TSV(s3_distibuted) -def test_count(started_cluster): + +def test_select_all_with_dead_replica(started_cluster): node = started_cluster.instances['s0_0_0'] - pure_s3 = node.query("SELECT count(*) from s3('http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))')") + pure_s3 = node.query(""" + SELECT * from s3( + 'http://minio1:9001/root/data/{clickhouse,database}/*', + 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') + ORDER BY (name, value, polygon)""") # print(pure_s3) - s3_distibuted = node.query("SELECT count(*) from s3Distributed('cluster_simple', 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))')") + s3_distibuted = node.query(""" + SELECT * from s3Distributed('cluster_simple', + 'http://minio1:9001/root/data/{clickhouse,database}/*', + 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') + ORDER BY (name, value, polygon)""") # print(s3_distibuted) - assert TSV(pure_s3) == TSV(s3_distibuted) \ No newline at end of file + assert TSV(pure_s3) == TSV(s3_distibuted) + + +def test_count(started_cluster): + node = started_cluster.instances['s0_0_0'] + pure_s3 = node.query(""" + SELECT count(*) from s3( + 'http://minio1:9001/root/data/{clickhouse,database}/*', + 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))')""") + # print(pure_s3) + 
s3_distibuted = node.query(""" + SELECT count(*) from s3Distributed( + 'cluster_simple', 'http://minio1:9001/root/data/{clickhouse,database}/*', + 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))')""") + # print(s3_distibuted) + + assert TSV(pure_s3) == TSV(s3_distibuted) + + +def test_wrong_cluster(started_cluster): + node = started_cluster.instances['s0_0_0'] + error = node.query_and_get_error(""" + SELECT count(*) from s3Distributed( + 'non_existent_cluster', + 'http://minio1:9001/root/data/{clickhouse,database}/*', + 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))')""") + + assert "not found" in error \ No newline at end of file From b05b72093534778cb46e296b5d604e001cf741e1 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Fri, 26 Mar 2021 18:39:44 +0300 Subject: [PATCH 050/108] update test --- .../test_s3_distributed/configs/cluster.xml | 28 +------------------ tests/integration/test_s3_distributed/test.py | 20 ------------- 2 files changed, 1 insertion(+), 47 deletions(-) diff --git a/tests/integration/test_s3_distributed/configs/cluster.xml b/tests/integration/test_s3_distributed/configs/cluster.xml index 675994af2bf..53fb0aa58e8 100644 --- a/tests/integration/test_s3_distributed/configs/cluster.xml +++ b/tests/integration/test_s3_distributed/configs/cluster.xml @@ -18,33 +18,7 @@ 9000 - - - - - true - - s0_0_0 - 9000 - - - s0_0_1 - 9000 - - - - true - - s0_1_0 - 9000 - - - 255.255.255.255 - 9000 - - - - + s \ No newline at end of file diff --git a/tests/integration/test_s3_distributed/test.py b/tests/integration/test_s3_distributed/test.py index 8e55557f3f6..c8ede0fddf8 100644 --- a/tests/integration/test_s3_distributed/test.py +++ b/tests/integration/test_s3_distributed/test.py @@ -57,26 +57,6 @@ def test_select_all(started_cluster): assert TSV(pure_s3) == TSV(s3_distibuted) -def test_select_all_with_dead_replica(started_cluster): - node = started_cluster.instances['s0_0_0'] - pure_s3 = node.query(""" - SELECT * from s3( - 'http://minio1:9001/root/data/{clickhouse,database}/*', - 'minio', 'minio123', 'CSV', - 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') - ORDER BY (name, value, polygon)""") - # print(pure_s3) - s3_distibuted = node.query(""" - SELECT * from s3Distributed('cluster_simple', - 'http://minio1:9001/root/data/{clickhouse,database}/*', - 'minio', 'minio123', 'CSV', - 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') - ORDER BY (name, value, polygon)""") - # print(s3_distibuted) - - assert TSV(pure_s3) == TSV(s3_distibuted) - - def test_count(started_cluster): node = started_cluster.instances['s0_0_0'] pure_s3 = node.query(""" From 3ab17233cfa78beecc4a54c22ed3eebe25f2e2e4 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Fri, 26 Mar 2021 18:44:15 +0300 Subject: [PATCH 051/108] cleanup --- src/Storages/StorageS3.cpp | 5 ----- tests/integration/test_s3_distributed/configs/cluster.xml | 3 +-- 2 files changed, 1 insertion(+), 7 deletions(-) diff --git a/src/Storages/StorageS3.cpp b/src/Storages/StorageS3.cpp index c3cfa8b35d5..678a6cc3270 100644 --- a/src/Storages/StorageS3.cpp +++ b/src/Storages/StorageS3.cpp @@ -226,11 +226,6 @@ Strings StorageS3::listFilesWithRegexpMatching(Aws::S3::S3Client & client, const } Aws::S3::Model::ListObjectsV2Request request; - - std::cout << "Will list objects: " << std::endl; - std::cout << globbed_uri.bucket << std::endl; - std::cout << key_prefix << std::endl; - 
request.SetBucket(globbed_uri.bucket); request.SetPrefix(key_prefix); diff --git a/tests/integration/test_s3_distributed/configs/cluster.xml b/tests/integration/test_s3_distributed/configs/cluster.xml index 53fb0aa58e8..e3a15bac7be 100644 --- a/tests/integration/test_s3_distributed/configs/cluster.xml +++ b/tests/integration/test_s3_distributed/configs/cluster.xml @@ -20,5 +20,4 @@ s - - \ No newline at end of file + \ No newline at end of file From ebf1e2fa9c80183050b28328c190d7429d1ee5c6 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Fri, 26 Mar 2021 20:29:15 +0300 Subject: [PATCH 052/108] fix build --- src/Server/TCPHandler.cpp | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/src/Server/TCPHandler.cpp b/src/Server/TCPHandler.cpp index 545403ccfa4..bfd81f54915 100644 --- a/src/Server/TCPHandler.cpp +++ b/src/Server/TCPHandler.cpp @@ -980,7 +980,11 @@ bool TCPHandler::receivePacket() case Protocol::Client::NextTaskRequest: { auto id = receiveNextTaskRequest(); +#if USE_AWS_S3 auto next = TaskSupervisor::instance().getNextTaskForId(id); +#else + auto next = ""; +#endif sendNextTaskReply(next); return false; } From 5f48e4769fd5213e9cc849c92f8f1867b5d7e093 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Mon, 29 Mar 2021 15:19:09 +0300 Subject: [PATCH 053/108] fix test --- tests/integration/test_s3_distributed/configs/cluster.xml | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/tests/integration/test_s3_distributed/configs/cluster.xml b/tests/integration/test_s3_distributed/configs/cluster.xml index e3a15bac7be..8334ace15eb 100644 --- a/tests/integration/test_s3_distributed/configs/cluster.xml +++ b/tests/integration/test_s3_distributed/configs/cluster.xml @@ -18,6 +18,7 @@ 9000 - s + - \ No newline at end of file + + \ No newline at end of file From 4843f863294f5753be3e2b4b55a91fd105469607 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Tue, 6 Apr 2021 14:05:47 +0300 Subject: [PATCH 054/108] use only one connection between initiator and worker --- src/Client/Connection.cpp | 12 +- src/Client/Connection.h | 8 +- src/Client/HedgedConnections.h | 4 + src/Client/IConnections.h | 2 + src/Client/MultiplexedConnections.cpp | 8 ++ src/Client/MultiplexedConnections.h | 2 + src/Core/Protocol.h | 15 +- src/DataStreams/RemoteQueryExecutor.cpp | 11 ++ src/DataStreams/RemoteQueryExecutor.h | 2 + src/Interpreters/Context.cpp | 26 ++++ src/Interpreters/Context.h | 17 +++ src/Server/TCPHandler.cpp | 128 +++++++++++++----- src/Server/TCPHandler.h | 9 +- src/Storages/StorageS3Distributed.cpp | 32 ++--- src/Storages/StorageS3Distributed.h | 76 ----------- src/Storages/TaskSupervisor.h | 90 ++++++++++++ .../TableFunctionS3Distributed.cpp | 5 +- 17 files changed, 290 insertions(+), 157 deletions(-) create mode 100644 src/Storages/TaskSupervisor.h diff --git a/src/Client/Connection.cpp b/src/Client/Connection.cpp index 1d3efefa6ec..7bcd8970f8d 100644 --- a/src/Client/Connection.cpp +++ b/src/Client/Connection.cpp @@ -552,10 +552,10 @@ void Connection::sendIgnoredPartUUIDs(const std::vector & uuids) } -void Connection::sendNextTaskRequest(const std::string & id) +void Connection::sendReadTaskResponse(const std::string & responce) { - writeVarUInt(Protocol::Client::NextTaskRequest, *out); - writeStringBinary(id, *out); + writeVarUInt(Protocol::Client::ReadTaskResponse, *out); + writeStringBinary(responce, *out); out->next(); } @@ -815,8 +815,8 @@ Packet Connection::receivePacket() readVectorBinary(res.part_uuids, *in); return res; - case Protocol::Server::NextTaskReply: 
- res.next_task = receiveNextTask(); + case Protocol::Server::ReadTaskRequest: + res.read_task_request = receiveReadTaskRequest(); return res; default: @@ -841,7 +841,7 @@ Packet Connection::receivePacket() } -String Connection::receiveNextTask() const +String Connection::receiveReadTaskRequest() const { String next_task; readStringBinary(next_task, *in); diff --git a/src/Client/Connection.h b/src/Client/Connection.h index 16a509021da..7c21a282ce1 100644 --- a/src/Client/Connection.h +++ b/src/Client/Connection.h @@ -65,7 +65,7 @@ struct Packet Progress progress; BlockStreamProfileInfo profile_info; std::vector part_uuids; - String next_task; + String read_task_request; Packet() : type(Protocol::Server::Hello) {} }; @@ -159,8 +159,8 @@ public: void sendExternalTablesData(ExternalTablesData & data); /// Send parts' uuids to excluded them from query processing void sendIgnoredPartUUIDs(const std::vector & uuids); - /// Send request to acquire a new task - void sendNextTaskRequest(const std::string & id); + + void sendReadTaskResponse(const std::string &); /// Send prepared block of data (serialized and, if need, compressed), that will be read from 'input'. /// You could pass size of serialized/compressed block. @@ -303,7 +303,7 @@ private: #endif bool ping(); - String receiveNextTask() const; + String receiveReadTaskRequest() const; Block receiveData(); Block receiveLogData(); Block receiveDataImpl(BlockInputStreamPtr & stream); diff --git a/src/Client/HedgedConnections.h b/src/Client/HedgedConnections.h index f1675108349..331394c2322 100644 --- a/src/Client/HedgedConnections.h +++ b/src/Client/HedgedConnections.h @@ -84,6 +84,10 @@ public: const ClientInfo & client_info, bool with_pending_data) override; + void sendReadTaskResponce(const String &) override { + throw Exception("sendReadTaskResponce in not supported with HedgedConnections", ErrorCodes::LOGICAL_ERROR); + } + Packet receivePacket() override; Packet receivePacketUnlocked(AsyncCallback async_callback) override; diff --git a/src/Client/IConnections.h b/src/Client/IConnections.h index 38730922456..a5e7638c0bd 100644 --- a/src/Client/IConnections.h +++ b/src/Client/IConnections.h @@ -24,6 +24,8 @@ public: const ClientInfo & client_info, bool with_pending_data) = 0; + virtual void sendReadTaskResponce(const String &) = 0; + /// Get packet from any replica. 
virtual Packet receivePacket() = 0; diff --git a/src/Client/MultiplexedConnections.cpp b/src/Client/MultiplexedConnections.cpp index 8b2b7c49f26..73b68b9fc50 100644 --- a/src/Client/MultiplexedConnections.cpp +++ b/src/Client/MultiplexedConnections.cpp @@ -155,6 +155,13 @@ void MultiplexedConnections::sendIgnoredPartUUIDs(const std::vector & uuid } } + +void MultiplexedConnections::sendReadTaskResponce(const String & response) +{ + /// No lock_guard because assume it is already called under lock + current_connection->sendReadTaskResponse(response); +} + Packet MultiplexedConnections::receivePacket() { std::lock_guard lock(cancel_mutex); @@ -273,6 +280,7 @@ Packet MultiplexedConnections::receivePacketUnlocked(AsyncCallback async_callbac switch (packet.type) { + case Protocol::Server::ReadTaskRequest: case Protocol::Server::PartUUIDs: case Protocol::Server::Data: case Protocol::Server::Progress: diff --git a/src/Client/MultiplexedConnections.h b/src/Client/MultiplexedConnections.h index c04b06e525e..0021ecd863d 100644 --- a/src/Client/MultiplexedConnections.h +++ b/src/Client/MultiplexedConnections.h @@ -39,6 +39,8 @@ public: const ClientInfo & client_info, bool with_pending_data) override; + void sendReadTaskResponce(const String &) override; + Packet receivePacket() override; void disconnect() override; diff --git a/src/Core/Protocol.h b/src/Core/Protocol.h index 38dfd171cd9..10a05d8dde0 100644 --- a/src/Core/Protocol.h +++ b/src/Core/Protocol.h @@ -76,9 +76,10 @@ namespace Protocol Log = 10, /// System logs of the query execution TableColumns = 11, /// Columns' description for default values calculation PartUUIDs = 12, /// List of unique parts ids. - NextTaskReply = 13, /// String that describes the next task (a file to read from S3) - - MAX = NextTaskReply, + ReadTaskRequest = 13, /// String (UUID) describes a request for which next task is needed + /// This is such an inverted logic, where server sends requests + /// And client returns back response + MAX = ReadTaskRequest, }; /// NOTE: If the type of packet argument would be Enum, the comparison packet >= 0 && packet < 10 @@ -101,7 +102,7 @@ namespace Protocol "Log", "TableColumns", "PartUUIDs", - "NextTaskReply" + "ReadTaskRequest" }; return packet <= MAX ? data[packet] @@ -137,9 +138,9 @@ namespace Protocol KeepAlive = 6, /// Keep the connection alive Scalar = 7, /// A block of data (compressed or not). IgnoredPartUUIDs = 8, /// List of unique parts ids to exclude from query processing - NextTaskRequest = 9, /// String which contains an id to request a new task + ReadTaskResponse = 9, /// TODO: - MAX = NextTaskRequest, + MAX = ReadTaskResponse, }; inline const char * toString(UInt64 packet) @@ -154,7 +155,7 @@ namespace Protocol "KeepAlive", "Scalar", "IgnoredPartUUIDs", - "NextTaskRequest" + "ReadTaskResponse", }; return packet <= MAX ? 
data[packet] diff --git a/src/DataStreams/RemoteQueryExecutor.cpp b/src/DataStreams/RemoteQueryExecutor.cpp index 4aa659854b9..dc161a52ac3 100644 --- a/src/DataStreams/RemoteQueryExecutor.cpp +++ b/src/DataStreams/RemoteQueryExecutor.cpp @@ -16,6 +16,7 @@ #include #include #include +#include namespace DB { @@ -307,6 +308,9 @@ std::optional RemoteQueryExecutor::processPacket(Packet packet) { switch (packet.type) { + case Protocol::Server::ReadTaskRequest: + processReadTaskRequest(packet.read_task_request); + break; case Protocol::Server::PartUUIDs: if (!setPartUUIDs(packet.part_uuids)) got_duplicated_part_uuids = true; @@ -385,6 +389,13 @@ bool RemoteQueryExecutor::setPartUUIDs(const std::vector & uuids) return true; } +void RemoteQueryExecutor::processReadTaskRequest(const String & request) +{ + auto query_context = context->getQueryContext(); + String responce = query_context->getReadTaskSupervisor()->getNextTaskForId(request); + connections->sendReadTaskResponce(responce); +} + void RemoteQueryExecutor::finish(std::unique_ptr * read_context) { /** If one of: diff --git a/src/DataStreams/RemoteQueryExecutor.h b/src/DataStreams/RemoteQueryExecutor.h index 45a633230b7..cdd3eda5897 100644 --- a/src/DataStreams/RemoteQueryExecutor.h +++ b/src/DataStreams/RemoteQueryExecutor.h @@ -179,6 +179,8 @@ private: /// Return true if duplicates found. bool setPartUUIDs(const std::vector & uuids); + void processReadTaskRequest(const String &); + /// Cancell query and restart it with info about duplicated UUIDs /// only for `allow_experimental_query_deduplication`. std::variant restartQueryWithoutDuplicatedUUIDs(std::unique_ptr * read_context = nullptr); diff --git a/src/Interpreters/Context.cpp b/src/Interpreters/Context.cpp index 187edf8843f..689c211d94b 100644 --- a/src/Interpreters/Context.cpp +++ b/src/Interpreters/Context.cpp @@ -2615,6 +2615,32 @@ PartUUIDsPtr Context::getPartUUIDs() return part_uuids; } + +TaskSupervisorPtr Context::getReadTaskSupervisor() const +{ + return read_task_supervisor; +} + + +void Context::setReadTaskSupervisor(TaskSupervisorPtr resolver) +{ + read_task_supervisor = resolver; +} + + +NextTaskCallback Context::getNextTaskCallback() const +{ + if (!next_task_callback.has_value()) + throw Exception(fmt::format("Next task callback is not set for query {}", getInitialQueryId()), ErrorCodes::LOGICAL_ERROR); + return next_task_callback.value(); +} + + +void Context::setNextTaskCallback(NextTaskCallback && callback) +{ + next_task_callback = callback; +} + PartUUIDsPtr Context::getIgnoredPartUUIDs() { auto lock = getLock(); diff --git a/src/Interpreters/Context.h b/src/Interpreters/Context.h index b5912738833..6516a342b12 100644 --- a/src/Interpreters/Context.h +++ b/src/Interpreters/Context.h @@ -128,6 +128,11 @@ using InputInitializer = std::function; /// Callback for reading blocks of data from client for function input() using InputBlocksReader = std::function; +/// Class which gives tasks to other nodes in cluster +class TaskSupervisor; +using TaskSupervisorPtr = std::shared_ptr; +using NextTaskCallback = std::function; + /// An empty interface for an arbitrary object that may be attached by a shared pointer /// to query context, when using ClickHouse as a library. 
struct IHostContext @@ -189,6 +194,10 @@ private: TemporaryTablesMapping external_tables_mapping; Scalars scalars; + /// Fields for distributed s3 function + TaskSupervisorPtr read_task_supervisor; + std::optional next_task_callback; + /// Record entities accessed by current query, and store this information in system.query_log. struct QueryAccessInfo { @@ -769,6 +778,14 @@ public: PartUUIDsPtr getPartUUIDs(); PartUUIDsPtr getIgnoredPartUUIDs(); + + /// A bunch of functions for distributed s3 function + TaskSupervisorPtr getReadTaskSupervisor() const; + void setReadTaskSupervisor(TaskSupervisorPtr); + + NextTaskCallback getNextTaskCallback() const; + void setNextTaskCallback(NextTaskCallback && callback); + private: std::unique_lock getLock() const; diff --git a/src/Server/TCPHandler.cpp b/src/Server/TCPHandler.cpp index bfd81f54915..3341e1b9eb2 100644 --- a/src/Server/TCPHandler.cpp +++ b/src/Server/TCPHandler.cpp @@ -26,6 +26,7 @@ #include #include #include +#include #include #include #include @@ -287,6 +288,16 @@ void TCPHandler::runImpl() customizeContext(query_context); + /// This callback is needed for requsting read tasks inside pipeline for distributed processing + query_context->setNextTaskCallback([this](String request) -> String + { + std::lock_guard lock(buffer_mutex); + sendReadTaskRequestAssumeLocked(request); + return receiveReadTaskResponseAssumeLocked(); + }); + + query_context->setReadTaskSupervisor(std::make_shared()); + bool may_have_embedded_data = client_tcp_protocol_version >= DBMS_MIN_REVISION_WITH_CLIENT_SUPPORT_EMBEDDED_DATA; /// Processing Query state.io = executeQuery(state.query, query_context, false, state.stage, may_have_embedded_data); @@ -460,8 +471,11 @@ bool TCPHandler::readDataNext(const size_t & poll_interval, const int & receive_ /// We are waiting for a packet from the client. Thus, every `POLL_INTERVAL` seconds check whether we need to shut down. while (true) { - if (static_cast(*in).poll(poll_interval)) - break; + { + std::lock_guard lock(buffer_mutex); + if (static_cast(*in).poll(poll_interval)) + break; + } /// Do we need to shut down? if (server.isCancelled()) @@ -480,12 +494,15 @@ bool TCPHandler::readDataNext(const size_t & poll_interval, const int & receive_ } } - /// If client disconnected. - if (in->eof()) { - LOG_INFO(log, "Client has dropped the connection, cancel the query."); - state.is_connection_closed = true; - return false; + std::lock_guard lock(buffer_mutex); + /// If client disconnected. + if (in->eof()) + { + LOG_INFO(log, "Client has dropped the connection, cancel the query."); + state.is_connection_closed = true; + return false; + } } /// We accept and process data. And if they are over, then we leave. 
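The callback registered in TCPHandler::runImpl above inverts the usual packet flow: when the reading pipeline on a worker needs another S3 key, the callback sends a ReadTaskRequest back over the query connection and blocks until the initiator answers, while the initiator serves such requests from the TaskSupervisor kept in its query context (see processReadTaskRequest earlier in this patch). Below is a minimal, self-contained sketch of this pull-based task distribution with the network round trip replaced by a plain function call; TaskQueue, workerLoop and the sample keys are illustrative stand-ins, not the actual ClickHouse classes.

```cpp
#include <functional>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

/// Initiator side: hands out one task (an S3 key) per request until the list is exhausted.
class TaskQueue
{
public:
    explicit TaskQueue(std::vector<std::string> tasks_) : tasks(std::move(tasks_)) {}

    std::string next()
    {
        std::lock_guard<std::mutex> lock(mutex);
        if (pos == tasks.size())
            return "";          /// Empty string means "no more work", as in the patch.
        return tasks[pos++];
    }

private:
    std::vector<std::string> tasks;
    size_t pos = 0;
    std::mutex mutex;
};

/// Worker side: keeps asking for the next task through a callback.
/// In ClickHouse this callback hides a ReadTaskRequest/ReadTaskResponse round trip.
void workerLoop(int id, const std::function<std::string()> & next_task)
{
    while (true)
    {
        std::string task = next_task();
        if (task.empty())
            break;
        std::cout << "worker " << id << " reads " << task << "\n";
    }
}

int main()
{
    TaskQueue queue({"s3://bucket/a.csv", "s3://bucket/b.csv", "s3://bucket/c.csv"});
    auto callback = [&queue] { return queue.next(); };

    std::thread w1(workerLoop, 1, callback);
    std::thread w2(workerLoop, 2, callback);
    w1.join();
    w2.join();
}
```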
@@ -737,6 +754,8 @@ void TCPHandler::processTablesStatusRequest() void TCPHandler::receiveUnexpectedTablesStatusRequest() { + std::lock_guard lock(buffer_mutex); + TablesStatusRequest skip_request; skip_request.read(*in, client_tcp_protocol_version); @@ -745,6 +764,8 @@ void TCPHandler::receiveUnexpectedTablesStatusRequest() void TCPHandler::sendPartUUIDs() { + std::lock_guard lock(buffer_mutex); + auto uuids = query_context->getPartUUIDs()->get(); if (!uuids.empty()) { @@ -758,16 +779,17 @@ void TCPHandler::sendPartUUIDs() } -void TCPHandler::sendNextTaskReply(String reply) +void TCPHandler::sendReadTaskRequestAssumeLocked(const String & request) { - LOG_TRACE(log, "Nexttask for id is {} ", reply); - writeVarUInt(Protocol::Server::NextTaskReply, *out); - writeStringBinary(reply, *out); + writeVarUInt(Protocol::Server::ReadTaskRequest, *out); + writeStringBinary(request, *out); out->next(); } void TCPHandler::sendProfileInfo(const BlockStreamProfileInfo & info) { + std::lock_guard lock(buffer_mutex); + writeVarUInt(Protocol::Server::ProfileInfo, *out); info.write(*out); out->next(); @@ -776,6 +798,8 @@ void TCPHandler::sendProfileInfo(const BlockStreamProfileInfo & info) void TCPHandler::sendTotals(const Block & totals) { + std::lock_guard lock(buffer_mutex); + if (totals) { initBlockOutput(totals); @@ -792,6 +816,8 @@ void TCPHandler::sendTotals(const Block & totals) void TCPHandler::sendExtremes(const Block & extremes) { + std::lock_guard lock(buffer_mutex); + if (extremes) { initBlockOutput(extremes); @@ -808,6 +834,8 @@ void TCPHandler::sendExtremes(const Block & extremes) bool TCPHandler::receiveProxyHeader() { + std::lock_guard lock(buffer_mutex); + if (in->eof()) { LOG_WARNING(log, "Client has not sent any data."); @@ -880,6 +908,8 @@ bool TCPHandler::receiveProxyHeader() void TCPHandler::receiveHello() { + std::lock_guard lock(buffer_mutex); + /// Receive `hello` packet. UInt64 packet_type = 0; String user; @@ -937,6 +967,8 @@ void TCPHandler::receiveHello() void TCPHandler::receiveUnexpectedHello() { + std::lock_guard lock(buffer_mutex); + UInt64 skip_uint_64; String skip_string; @@ -954,6 +986,8 @@ void TCPHandler::receiveUnexpectedHello() void TCPHandler::sendHello() { + std::lock_guard lock(buffer_mutex); + writeVarUInt(Protocol::Server::Hello, *out); writeStringBinary(DBMS_NAME, *out); writeVarUInt(DBMS_VERSION_MAJOR, *out); @@ -972,23 +1006,16 @@ void TCPHandler::sendHello() bool TCPHandler::receivePacket() { UInt64 packet_type = 0; - readVarUInt(packet_type, *in); + { + std::lock_guard lock(buffer_mutex); + readVarUInt(packet_type, *in); + } switch (packet_type) { - case Protocol::Client::NextTaskRequest: - { - auto id = receiveNextTaskRequest(); -#if USE_AWS_S3 - auto next = TaskSupervisor::instance().getNextTaskForId(id); -#else - auto next = ""; -#endif - sendNextTaskReply(next); - return false; - } - + case Protocol::Client::ReadTaskResponse: + throw Exception("ReadTaskResponse must be received only after requesting in callback", ErrorCodes::LOGICAL_ERROR); case Protocol::Client::IgnoredPartUUIDs: /// Part uuids packet if any comes before query. 
receiveIgnoredPartUUIDs(); @@ -1029,16 +1056,10 @@ bool TCPHandler::receivePacket() } -String TCPHandler::receiveNextTaskRequest() -{ - std::string id; - readStringBinary(id, *in); - LOG_DEBUG(log, "Got nextTaskRequest {}", id); - return id; -} - void TCPHandler::receiveIgnoredPartUUIDs() { + std::lock_guard lock(buffer_mutex); + state.part_uuids = true; std::vector uuids; readVectorBinary(uuids, *in); @@ -1047,10 +1068,29 @@ void TCPHandler::receiveIgnoredPartUUIDs() query_context->getIgnoredPartUUIDs()->add(uuids); } + +String TCPHandler::receiveReadTaskResponseAssumeLocked() +{ + UInt64 packet_type = 0; + readVarUInt(packet_type, *in); + + if (packet_type != Protocol::Client::ReadTaskResponse) + throw Exception(fmt::format("Received {} packet after requesting read task", + Protocol::Client::toString(packet_type)), ErrorCodes::LOGICAL_ERROR); + + String response; + readStringBinary(response, *in); + return response; +} + + void TCPHandler::receiveClusterNameAndSalt() { - readStringBinary(cluster, *in); - readStringBinary(salt, *in, 32); + { + std::lock_guard lock(buffer_mutex); + readStringBinary(cluster, *in); + readStringBinary(salt, *in, 32); + } try { @@ -1074,6 +1114,8 @@ void TCPHandler::receiveClusterNameAndSalt() void TCPHandler::receiveQuery() { + std::lock_guard lock(buffer_mutex); + UInt64 stage = 0; UInt64 compression = 0; @@ -1215,6 +1257,8 @@ void TCPHandler::receiveQuery() void TCPHandler::receiveUnexpectedQuery() { + std::lock_guard lock(buffer_mutex); + UInt64 skip_uint_64; String skip_string; @@ -1243,6 +1287,8 @@ void TCPHandler::receiveUnexpectedQuery() bool TCPHandler::receiveData(bool scalar) { + std::lock_guard lock(buffer_mutex); + initBlockInput(); /// The name of the temporary table for writing data, default to empty string @@ -1302,6 +1348,8 @@ bool TCPHandler::receiveData(bool scalar) void TCPHandler::receiveUnexpectedData() { + std::lock_guard lock(buffer_mutex); + String skip_external_table_name; readStringBinary(skip_external_table_name, *in); @@ -1440,6 +1488,8 @@ bool TCPHandler::isQueryCancelled() void TCPHandler::sendData(const Block & block) { + std::lock_guard lock(buffer_mutex); + initBlockOutput(block); auto prev_bytes_written_out = out->count(); @@ -1502,6 +1552,8 @@ void TCPHandler::sendLogData(const Block & block) void TCPHandler::sendTableColumns(const ColumnsDescription & columns) { + std::lock_guard lock(buffer_mutex); + writeVarUInt(Protocol::Server::TableColumns, *out); /// Send external table name (empty name is the main table) @@ -1513,6 +1565,8 @@ void TCPHandler::sendTableColumns(const ColumnsDescription & columns) void TCPHandler::sendException(const Exception & e, bool with_stack_trace) { + std::lock_guard lock(buffer_mutex); + writeVarUInt(Protocol::Server::Exception, *out); writeException(e, *out, with_stack_trace); out->next(); @@ -1521,6 +1575,8 @@ void TCPHandler::sendException(const Exception & e, bool with_stack_trace) void TCPHandler::sendEndOfStream() { + std::lock_guard lock(buffer_mutex); + state.sent_all_data = true; writeVarUInt(Protocol::Server::EndOfStream, *out); out->next(); @@ -1535,6 +1591,8 @@ void TCPHandler::updateProgress(const Progress & value) void TCPHandler::sendProgress() { + std::lock_guard lock(buffer_mutex); + writeVarUInt(Protocol::Server::Progress, *out); auto increment = state.progress.fetchAndResetPiecewiseAtomically(); increment.write(*out, client_tcp_protocol_version); @@ -1544,6 +1602,8 @@ void TCPHandler::sendProgress() void TCPHandler::sendLogs() { + std::lock_guard lock(buffer_mutex); + if 
(!state.logs_queue) return; diff --git a/src/Server/TCPHandler.h b/src/Server/TCPHandler.h index b737f1f2c7a..e30b1ded2cc 100644 --- a/src/Server/TCPHandler.h +++ b/src/Server/TCPHandler.h @@ -138,6 +138,8 @@ private: /// Streams for reading/writing from/to client connection socket. std::shared_ptr in; std::shared_ptr out; + std::mutex buffer_mutex; + /// Time after the last check to stop the request and send the progress. Stopwatch after_check_cancelled; @@ -169,10 +171,11 @@ private: bool receivePacket(); void receiveQuery(); void receiveIgnoredPartUUIDs(); - String receiveNextTaskRequest(); + String receiveReadTaskResponseAssumeLocked(); bool receiveData(bool scalar); bool readDataNext(const size_t & poll_interval, const int & receive_timeout); void readData(const Settings & connection_settings); + void receiveClusterNameAndSalt(); // ASSUMELocked std::tuple getReadTimeouts(const Settings & connection_settings); [[noreturn]] void receiveUnexpectedData(); @@ -199,13 +202,11 @@ private: void sendLogs(); void sendEndOfStream(); void sendPartUUIDs(); - void sendNextTaskReply(String reply); + void sendReadTaskRequestAssumeLocked(const String &); void sendProfileInfo(const BlockStreamProfileInfo & info); void sendTotals(const Block & totals); void sendExtremes(const Block & extremes); - void receiveClusterNameAndSalt(); - /// Creates state.block_in/block_out for blocks read/write, depending on whether compression is enabled. void initBlockInput(); void initBlockOutput(const Block & block); diff --git a/src/Storages/StorageS3Distributed.cpp b/src/Storages/StorageS3Distributed.cpp index 5b40a3420cf..2bf47ab9f9f 100644 --- a/src/Storages/StorageS3Distributed.cpp +++ b/src/Storages/StorageS3Distributed.cpp @@ -84,25 +84,15 @@ public: StorageS3SequentialSource( String initial_query_id_, - Cluster::Address initiator, + NextTaskCallback read_task_callback_, const ClientAuthentificationBuilder & client_auth_builder_, const StorageS3SourceBuilder & s3_builder_) : SourceWithProgress(getHeader(s3_builder_.sample_block, s3_builder_.need_path, s3_builder_.need_file)) , initial_query_id(initial_query_id_) , s3_source_builder(s3_builder_) , cli_builder(client_auth_builder_) + , read_task_callback(read_task_callback_) { - connections = std::make_shared( - /*max_connections*/3, - /*host*/initiator.host_name, - /*port*/initiator.port, - /*default_database=*/s3_builder_.context.getGlobalContext().getCurrentDatabase(), - /*user=*/s3_builder_.context.getClientInfo().initial_user, - /*password=*/initiator.password, - /*cluster=*/initiator.cluster, - /*cluster_secret=*/initiator.cluster_secret - ); - createOrUpdateInnerSource(); } @@ -133,12 +123,7 @@ private: { try { - auto connection = connections->get(timeouts); - connection->sendNextTaskRequest(initial_query_id); - auto packet = connection->receivePacket(); - assert(packet.type = Protocol::Server::NextTaskReply); - LOG_TRACE(&Poco::Logger::get("StorageS3SequentialSource"), "Got new task {}", packet.next_task); - return packet.next_task; + return read_task_callback(initial_query_id); } catch (...) 
{ @@ -189,9 +174,7 @@ private: std::unique_ptr inner; - /// One second just in case - ConnectionTimeouts timeouts{{1, 0}, {1, 0}, {1, 0}}; - std::shared_ptr connections; + NextTaskCallback read_task_callback; }; @@ -276,7 +259,7 @@ Pipe StorageS3Distributed::read( return Pipe(std::make_shared( context.getInitialQueryId(), - /*initiator*/initiator, + context.getNextTaskCallback(), cli_builder, s3builder )); @@ -349,7 +332,10 @@ Pipe StorageS3Distributed::read( /*user=*/node.user, /*password=*/node.password, /*cluster=*/node.cluster, - /*cluster_secret=*/node.cluster_secret + /*cluster_secret=*/node.cluster_secret, + "StorageS3DistributedInititiator", + Protocol::Compression::Disable, + Protocol::Secure::Disable )); auto stream = std::make_shared( /*connection=*/*connections.back(), diff --git a/src/Storages/StorageS3Distributed.h b/src/Storages/StorageS3Distributed.h index e686efd8c78..23c3230c6c6 100644 --- a/src/Storages/StorageS3Distributed.h +++ b/src/Storages/StorageS3Distributed.h @@ -31,82 +31,6 @@ struct ClientAuthentificationBuilder UInt64 max_connections; }; -using QueryId = std::string; -using Task = std::string; -using Tasks = std::vector; -using TasksIterator = Tasks::iterator; - -class S3NextTaskResolver -{ -public: - S3NextTaskResolver(QueryId query_id, Tasks && all_tasks) - : id(query_id) - , tasks(all_tasks) - , current(tasks.begin()) - {} - - std::string next() - { - auto it = current; - ++current; - return it == tasks.end() ? "" : *it; - } - - std::string getId() - { - return id; - } - -private: - QueryId id; - Tasks tasks; - TasksIterator current; -}; - -using S3NextTaskResolverPtr = std::shared_ptr; - -class TaskSupervisor -{ -public: - using QueryId = std::string; - - TaskSupervisor() = default; - - static TaskSupervisor & instance() - { - static TaskSupervisor task_manager; - return task_manager; - } - - void registerNextTaskResolver(S3NextTaskResolverPtr resolver) - { - std::lock_guard lock(mutex); - auto & target = dict[resolver->getId()]; - if (target) - throw Exception(fmt::format("NextTaskResolver with name {} is already registered for query {}", - target->getId(), resolver->getId()), ErrorCodes::LOGICAL_ERROR); - target = std::move(resolver); - } - - - Task getNextTaskForId(const QueryId & id) - { - std::lock_guard lock(mutex); - auto it = dict.find(id); - if (it == dict.end()) - return ""; - auto answer = it->second->next(); - if (answer.empty()) - dict.erase(it); - return answer; - } - -private: - using ResolverDict = std::unordered_map; - ResolverDict dict; - std::mutex mutex; -}; - class StorageS3Distributed : public ext::shared_ptr_helper, public IStorage { friend struct ext::shared_ptr_helper; diff --git a/src/Storages/TaskSupervisor.h b/src/Storages/TaskSupervisor.h new file mode 100644 index 00000000000..7de0081d048 --- /dev/null +++ b/src/Storages/TaskSupervisor.h @@ -0,0 +1,90 @@ +#pragma once + +#include +#include +#include +#include + +#include + +namespace DB +{ + + +namespace ErrorCodes +{ + extern const int LOGICAL_ERROR; +} + +using QueryId = std::string; +using Task = std::string; +using Tasks = std::vector; +using TasksIterator = Tasks::iterator; + +class S3NextTaskResolver +{ +public: + S3NextTaskResolver(QueryId query_id, Tasks && all_tasks) + : id(query_id) + , tasks(all_tasks) + , current(tasks.begin()) + {} + + std::string next() + { + auto it = current; + ++current; + return it == tasks.end() ? 
"" : *it; + } + + std::string getId() + { + return id; + } + +private: + QueryId id; + Tasks tasks; + TasksIterator current; +}; + +using S3NextTaskResolverPtr = std::shared_ptr; + +class TaskSupervisor +{ +public: + using QueryId = std::string; + + TaskSupervisor() = default; + + void registerNextTaskResolver(S3NextTaskResolverPtr resolver) + { + std::lock_guard lock(mutex); + auto & target = dict[resolver->getId()]; + if (target) + throw Exception(fmt::format("NextTaskResolver with name {} is already registered for query {}", + target->getId(), resolver->getId()), ErrorCodes::LOGICAL_ERROR); + target = std::move(resolver); + } + + + Task getNextTaskForId(const QueryId & id) + { + std::lock_guard lock(mutex); + auto it = dict.find(id); + if (it == dict.end()) + return ""; + auto answer = it->second->next(); + if (answer.empty()) + dict.erase(it); + return answer; + } + +private: + using ResolverDict = std::unordered_map; + ResolverDict dict; + std::mutex mutex; +}; + + +} diff --git a/src/TableFunctions/TableFunctionS3Distributed.cpp b/src/TableFunctions/TableFunctionS3Distributed.cpp index f0a12c26705..a5b9012e7a2 100644 --- a/src/TableFunctions/TableFunctionS3Distributed.cpp +++ b/src/TableFunctions/TableFunctionS3Distributed.cpp @@ -7,6 +7,7 @@ #include "Parsers/IAST_fwd.h" #include "Processors/Sources/SourceFromInputStream.h" #include "Storages/StorageS3Distributed.h" +#include #if USE_AWS_S3 @@ -101,9 +102,7 @@ StoragePtr TableFunctionS3Distributed::executeImpl( tasks.emplace_back(client_auth.uri.endpoint + '/' + client_auth.uri.bucket + '/' + value); /// Register resolver, which will give other nodes a task to execute - TaskSupervisor::instance().registerNextTaskResolver( - std::make_unique(context.getCurrentQueryId(), std::move(tasks))); - + context.getReadTaskSupervisor()->registerNextTaskResolver(std::make_unique(context.getCurrentQueryId(), std::move(tasks))); break; } From fea3aaafe6f6461cc5a0800d8438dff583e96f3a Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Tue, 6 Apr 2021 14:51:16 +0300 Subject: [PATCH 055/108] simplify storages3dist --- src/Storages/StorageS3Distributed.cpp | 62 +-------------------------- 1 file changed, 2 insertions(+), 60 deletions(-) diff --git a/src/Storages/StorageS3Distributed.cpp b/src/Storages/StorageS3Distributed.cpp index 2bf47ab9f9f..12a1f146ad5 100644 --- a/src/Storages/StorageS3Distributed.cpp +++ b/src/Storages/StorageS3Distributed.cpp @@ -220,20 +220,6 @@ Pipe StorageS3Distributed::read( /// Secondary query, need to read from S3 if (context.getClientInfo().query_kind == ClientInfo::QueryKind::SECONDARY_QUERY) { - /// Find initiator in cluster - Cluster::Address initiator; - [&]() - { - for (const auto & replicas : cluster->getShardsAddresses()) - for (const auto & node : replicas) - if (node.getHash() == address_hash_or_filename) - { - initiator = node; - return; - } - }(); - - bool need_path_column = false; bool need_file_column = false; for (const auto & column : column_names) @@ -268,53 +254,9 @@ Pipe StorageS3Distributed::read( /// The code from here and below executes on initiator - String hash_of_address; - [&]() - { - for (const auto & replicas : cluster->getShardsAddresses()) - for (const auto & node : replicas) - /// Finding ourselves in cluster - if (node.is_local && node.port == context.getTCPPort()) - { - hash_of_address = node.getHash(); - break; - } - }(); - - if (hash_of_address.empty()) - throw Exception(fmt::format("The initiator must be a part of a cluster {}", cluster_name), ErrorCodes::BAD_ARGUMENTS); - - /// Our 
purpose to change some arguments of this function to store some relevant - /// information. Then we will send changed query to another hosts. - /// We got a pointer to table function representation in AST (a pointer to subtree) - /// as parameter of TableFunctionRemote::execute and saved its hash value. - /// Here we find it in the AST of whole query, change parameter and format it to string. - auto remote_query_ast = query_info.query->clone(); - auto table_expressions_from_whole_query = getTableExpressions(remote_query_ast->as()); - - String remote_query; - for (const auto & table_expression : table_expressions_from_whole_query) - { - const auto & table_function_ast = table_expression->table_function; - if (table_function_ast->getTreeHash() == tree_hash) - { - auto & arguments = table_function_ast->children.at(0)->children; - auto & bucket = arguments[1]->as().value.safeGet(); - /// We rewrite query, and insert a port to connect as a first parameter - /// So, we write hash_of_address here as buckey name to find initiator node - /// in cluster from config on remote replica - bucket = hash_of_address; - remote_query = queryToString(remote_query_ast); - break; - } - } - - if (remote_query.empty()) - throw Exception(fmt::format("There is no table function with hash of AST equals to {}", hash_of_address), ErrorCodes::LOGICAL_ERROR); - /// Calculate the header. This is significant, because some columns could be thrown away in some cases like query with count(*) Block header = - InterpreterSelectQuery(remote_query_ast, context, SelectQueryOptions(processed_stage).analyze()).getSampleBlock(); + InterpreterSelectQuery(query_info.query, context, SelectQueryOptions(processed_stage).analyze()).getSampleBlock(); const Scalars & scalars = context.hasQueryContext() ? context.getQueryContext().getScalars() : Scalars{}; @@ -339,7 +281,7 @@ Pipe StorageS3Distributed::read( )); auto stream = std::make_shared( /*connection=*/*connections.back(), - /*query=*/remote_query, + /*query=*/queryToString(query_info.query), /*header=*/header, /*context=*/context, /*throttler=*/nullptr, From 587fbdd10d93f45c3ffc5d555f240e18cd3bf12e Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Tue, 6 Apr 2021 22:18:45 +0300 Subject: [PATCH 056/108] better --- src/Processors/Sources/SourceWithProgress.h | 4 +- src/Server/TCPHandler.cpp | 75 ++------- src/Storages/StorageS3.cpp | 156 +++++++++++------- src/Storages/StorageS3.h | 12 +- src/Storages/StorageS3Distributed.cpp | 32 ++-- src/Storages/StorageS3Distributed.h | 12 +- src/Storages/TaskSupervisor.h | 41 ++--- .../TableFunctionS3Distributed.cpp | 82 ++++++--- .../TableFunctionS3Distributed.h | 2 +- 9 files changed, 208 insertions(+), 208 deletions(-) diff --git a/src/Processors/Sources/SourceWithProgress.h b/src/Processors/Sources/SourceWithProgress.h index 25ff3eacec7..3aa7a81f418 100644 --- a/src/Processors/Sources/SourceWithProgress.h +++ b/src/Processors/Sources/SourceWithProgress.h @@ -55,12 +55,12 @@ public: void setProgressCallback(const ProgressCallback & callback) final { progress_callback = callback; } void addTotalRowsApprox(size_t value) final { total_rows_approx += value; } - void work() override; - protected: /// Call this method to provide information about progress. 
void progress(const Progress & value); + void work() override; + private: StreamLocalLimits limits; SizeLimits leaf_limits; diff --git a/src/Server/TCPHandler.cpp b/src/Server/TCPHandler.cpp index 3341e1b9eb2..3b8823e1e86 100644 --- a/src/Server/TCPHandler.cpp +++ b/src/Server/TCPHandler.cpp @@ -471,11 +471,8 @@ bool TCPHandler::readDataNext(const size_t & poll_interval, const int & receive_ /// We are waiting for a packet from the client. Thus, every `POLL_INTERVAL` seconds check whether we need to shut down. while (true) { - { - std::lock_guard lock(buffer_mutex); - if (static_cast(*in).poll(poll_interval)) - break; - } + if (static_cast(*in).poll(poll_interval)) + break; /// Do we need to shut down? if (server.isCancelled()) @@ -494,15 +491,12 @@ bool TCPHandler::readDataNext(const size_t & poll_interval, const int & receive_ } } + /// If client disconnected. + if (in->eof()) { - std::lock_guard lock(buffer_mutex); - /// If client disconnected. - if (in->eof()) - { - LOG_INFO(log, "Client has dropped the connection, cancel the query."); - state.is_connection_closed = true; - return false; - } + LOG_INFO(log, "Client has dropped the connection, cancel the query."); + state.is_connection_closed = true; + return false; } /// We accept and process data. And if they are over, then we leave. @@ -670,6 +664,8 @@ void TCPHandler::processOrdinaryQueryWithProcessors() break; } + std::lock_guard lock(buffer_mutex); + if (after_send_progress.elapsed() / 1000 >= query_context->getSettingsRef().interactive_delay) { /// Some time passed and there is a progress. @@ -754,8 +750,6 @@ void TCPHandler::processTablesStatusRequest() void TCPHandler::receiveUnexpectedTablesStatusRequest() { - std::lock_guard lock(buffer_mutex); - TablesStatusRequest skip_request; skip_request.read(*in, client_tcp_protocol_version); @@ -764,8 +758,6 @@ void TCPHandler::receiveUnexpectedTablesStatusRequest() void TCPHandler::sendPartUUIDs() { - std::lock_guard lock(buffer_mutex); - auto uuids = query_context->getPartUUIDs()->get(); if (!uuids.empty()) { @@ -788,8 +780,6 @@ void TCPHandler::sendReadTaskRequestAssumeLocked(const String & request) void TCPHandler::sendProfileInfo(const BlockStreamProfileInfo & info) { - std::lock_guard lock(buffer_mutex); - writeVarUInt(Protocol::Server::ProfileInfo, *out); info.write(*out); out->next(); @@ -798,8 +788,6 @@ void TCPHandler::sendProfileInfo(const BlockStreamProfileInfo & info) void TCPHandler::sendTotals(const Block & totals) { - std::lock_guard lock(buffer_mutex); - if (totals) { initBlockOutput(totals); @@ -816,8 +804,6 @@ void TCPHandler::sendTotals(const Block & totals) void TCPHandler::sendExtremes(const Block & extremes) { - std::lock_guard lock(buffer_mutex); - if (extremes) { initBlockOutput(extremes); @@ -834,8 +820,6 @@ void TCPHandler::sendExtremes(const Block & extremes) bool TCPHandler::receiveProxyHeader() { - std::lock_guard lock(buffer_mutex); - if (in->eof()) { LOG_WARNING(log, "Client has not sent any data."); @@ -908,8 +892,6 @@ bool TCPHandler::receiveProxyHeader() void TCPHandler::receiveHello() { - std::lock_guard lock(buffer_mutex); - /// Receive `hello` packet. 
UInt64 packet_type = 0; String user; @@ -967,8 +949,6 @@ void TCPHandler::receiveHello() void TCPHandler::receiveUnexpectedHello() { - std::lock_guard lock(buffer_mutex); - UInt64 skip_uint_64; String skip_string; @@ -986,8 +966,6 @@ void TCPHandler::receiveUnexpectedHello() void TCPHandler::sendHello() { - std::lock_guard lock(buffer_mutex); - writeVarUInt(Protocol::Server::Hello, *out); writeStringBinary(DBMS_NAME, *out); writeVarUInt(DBMS_VERSION_MAJOR, *out); @@ -1006,11 +984,7 @@ void TCPHandler::sendHello() bool TCPHandler::receivePacket() { UInt64 packet_type = 0; - { - std::lock_guard lock(buffer_mutex); - readVarUInt(packet_type, *in); - } - + readVarUInt(packet_type, *in); switch (packet_type) { @@ -1058,8 +1032,6 @@ bool TCPHandler::receivePacket() void TCPHandler::receiveIgnoredPartUUIDs() { - std::lock_guard lock(buffer_mutex); - state.part_uuids = true; std::vector uuids; readVectorBinary(uuids, *in); @@ -1086,11 +1058,8 @@ String TCPHandler::receiveReadTaskResponseAssumeLocked() void TCPHandler::receiveClusterNameAndSalt() { - { - std::lock_guard lock(buffer_mutex); - readStringBinary(cluster, *in); - readStringBinary(salt, *in, 32); - } + readStringBinary(cluster, *in); + readStringBinary(salt, *in, 32); try { @@ -1114,8 +1083,6 @@ void TCPHandler::receiveClusterNameAndSalt() void TCPHandler::receiveQuery() { - std::lock_guard lock(buffer_mutex); - UInt64 stage = 0; UInt64 compression = 0; @@ -1257,8 +1224,6 @@ void TCPHandler::receiveQuery() void TCPHandler::receiveUnexpectedQuery() { - std::lock_guard lock(buffer_mutex); - UInt64 skip_uint_64; String skip_string; @@ -1287,8 +1252,6 @@ void TCPHandler::receiveUnexpectedQuery() bool TCPHandler::receiveData(bool scalar) { - std::lock_guard lock(buffer_mutex); - initBlockInput(); /// The name of the temporary table for writing data, default to empty string @@ -1348,8 +1311,6 @@ bool TCPHandler::receiveData(bool scalar) void TCPHandler::receiveUnexpectedData() { - std::lock_guard lock(buffer_mutex); - String skip_external_table_name; readStringBinary(skip_external_table_name, *in); @@ -1488,8 +1449,6 @@ bool TCPHandler::isQueryCancelled() void TCPHandler::sendData(const Block & block) { - std::lock_guard lock(buffer_mutex); - initBlockOutput(block); auto prev_bytes_written_out = out->count(); @@ -1552,8 +1511,6 @@ void TCPHandler::sendLogData(const Block & block) void TCPHandler::sendTableColumns(const ColumnsDescription & columns) { - std::lock_guard lock(buffer_mutex); - writeVarUInt(Protocol::Server::TableColumns, *out); /// Send external table name (empty name is the main table) @@ -1565,8 +1522,6 @@ void TCPHandler::sendTableColumns(const ColumnsDescription & columns) void TCPHandler::sendException(const Exception & e, bool with_stack_trace) { - std::lock_guard lock(buffer_mutex); - writeVarUInt(Protocol::Server::Exception, *out); writeException(e, *out, with_stack_trace); out->next(); @@ -1575,8 +1530,6 @@ void TCPHandler::sendException(const Exception & e, bool with_stack_trace) void TCPHandler::sendEndOfStream() { - std::lock_guard lock(buffer_mutex); - state.sent_all_data = true; writeVarUInt(Protocol::Server::EndOfStream, *out); out->next(); @@ -1591,8 +1544,6 @@ void TCPHandler::updateProgress(const Progress & value) void TCPHandler::sendProgress() { - std::lock_guard lock(buffer_mutex); - writeVarUInt(Protocol::Server::Progress, *out); auto increment = state.progress.fetchAndResetPiecewiseAtomically(); increment.write(*out, client_tcp_protocol_version); @@ -1602,8 +1553,6 @@ void TCPHandler::sendProgress() void 
TCPHandler::sendLogs() { - std::lock_guard lock(buffer_mutex); - if (!state.logs_queue) return; diff --git a/src/Storages/StorageS3.cpp b/src/Storages/StorageS3.cpp index 678a6cc3270..7d77d420584 100644 --- a/src/Storages/StorageS3.cpp +++ b/src/Storages/StorageS3.cpp @@ -46,6 +46,95 @@ namespace ErrorCodes extern const int S3_ERROR; } +class StorageS3Source::DisclosedGlobIterator::Impl +{ + +public: + Impl(Aws::S3::S3Client & client_, const S3::URI & globbed_uri_) + : client(client_), globbed_uri(globbed_uri_) { + + if (globbed_uri.bucket.find_first_of("*?{") != globbed_uri.bucket.npos) + throw Exception("Expression can not have wildcards inside bucket name", ErrorCodes::UNEXPECTED_EXPRESSION); + + const String key_prefix = globbed_uri.key.substr(0, globbed_uri.key.find_first_of("*?{")); + + if (key_prefix.size() == globbed_uri.key.size()) + buffer.emplace_back(globbed_uri.key); + + request.SetBucket(globbed_uri.bucket); + request.SetPrefix(key_prefix); + + matcher = std::make_unique(makeRegexpPatternFromGlobs(globbed_uri.key)); + + /// Don't forget about iterator invalidation + buffer_iter = buffer.begin(); + } + + std::optional next() + { + if (buffer_iter != buffer.end()) + { + auto answer = *buffer_iter; + ++buffer_iter; + return answer; + } + + if (is_finished) + return std::nullopt; // Or throw? + + fillInternalBuffer(); + + return next(); + } + +private: + + void fillInternalBuffer() + { + buffer.clear(); + + outcome = client.ListObjectsV2(request); + if (!outcome.IsSuccess()) + throw Exception(ErrorCodes::S3_ERROR, "Could not list objects in bucket {} with prefix {}, S3 exception: {}, message: {}", + quoteString(request.GetBucket()), quoteString(request.GetPrefix()), + backQuote(outcome.GetError().GetExceptionName()), quoteString(outcome.GetError().GetMessage())); + + const auto & result_batch = outcome.GetResult().GetContents(); + + buffer.reserve(result_batch.size()); + for (const auto & row : result_batch) + { + String key = row.GetKey(); + if (re2::RE2::FullMatch(key, *matcher)) + buffer.emplace_back(std::move(key)); + } + /// Set iterator only after the whole batch is processed + buffer_iter = buffer.begin(); + + request.SetContinuationToken(outcome.GetResult().GetNextContinuationToken()); + + /// It returns false when all objects were returned + is_finished = !outcome.GetResult().GetIsTruncated(); + } + + Strings buffer; + Strings::iterator buffer_iter; + Aws::S3::S3Client client; + S3::URI globbed_uri; + Aws::S3::Model::ListObjectsV2Request request; + Aws::S3::Model::ListObjectsV2Outcome outcome; + std::unique_ptr matcher; + bool is_finished{false}; +}; + +StorageS3Source::DisclosedGlobIterator::DisclosedGlobIterator(Aws::S3::S3Client & client_, const S3::URI & globbed_uri_) + : pimpl(std::make_unique(client_, globbed_uri_)) {} + +std::optional StorageS3Source::DisclosedGlobIterator::next() +{ + return pimpl->next(); +} + Block StorageS3Source::getHeader(Block sample_block, bool with_path_column, bool with_file_column) { @@ -209,62 +298,6 @@ StorageS3::StorageS3( } -/* "Recursive" directory listing with matched paths as a result. - * Have the same method in StorageFile. 
- */ -Strings StorageS3::listFilesWithRegexpMatching(Aws::S3::S3Client & client, const S3::URI & globbed_uri) -{ - if (globbed_uri.bucket.find_first_of("*?{") != globbed_uri.bucket.npos) - { - throw Exception("Expression can not have wildcards inside bucket name", ErrorCodes::UNEXPECTED_EXPRESSION); - } - - const String key_prefix = globbed_uri.key.substr(0, globbed_uri.key.find_first_of("*?{")); - if (key_prefix.size() == globbed_uri.key.size()) - { - return {globbed_uri.key}; - } - - Aws::S3::Model::ListObjectsV2Request request; - request.SetBucket(globbed_uri.bucket); - request.SetPrefix(key_prefix); - - re2::RE2 matcher(makeRegexpPatternFromGlobs(globbed_uri.key)); - Strings result; - Aws::S3::Model::ListObjectsV2Outcome outcome; - int page = 0; - do - { - ++page; - outcome = client.ListObjectsV2(request); - if (!outcome.IsSuccess()) - { - if (page > 1) - throw Exception(ErrorCodes::S3_ERROR, "Could not list objects in bucket {} with prefix {}, page {}, S3 exception: {}, message: {}", - quoteString(request.GetBucket()), quoteString(request.GetPrefix()), page, - backQuote(outcome.GetError().GetExceptionName()), quoteString(outcome.GetError().GetMessage())); - - throw Exception(ErrorCodes::S3_ERROR, "Could not list objects in bucket {} with prefix {}, S3 exception: {}, message: {}", - quoteString(request.GetBucket()), quoteString(request.GetPrefix()), - backQuote(outcome.GetError().GetExceptionName()), quoteString(outcome.GetError().GetMessage())); - } - - for (const auto & row : outcome.GetResult().GetContents()) - { - String key = row.GetKey(); - std::cout << "KEY " << key << std::endl; - if (re2::RE2::FullMatch(key, matcher)) - result.emplace_back(std::move(key)); - } - - request.SetContinuationToken(outcome.GetResult().GetNextContinuationToken()); - } - while (outcome.GetResult().GetIsTruncated()); - - return result; -} - - Pipe StorageS3::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, @@ -287,7 +320,12 @@ Pipe StorageS3::read( need_file_column = true; } - for (const String & key : listFilesWithRegexpMatching(*client_auth.client, client_auth.uri)) + /// Iterate through disclosed globs and make a source for each file + StorageS3Source::DisclosedGlobIterator glob_iterator(*client_auth.client, client_auth.uri); + /// TODO: better to put first num_streams keys into pipeline + /// and put others dynamically in runtime + while (auto key = glob_iterator.next()) + { pipes.emplace_back(std::make_shared( need_path_column, need_file_column, @@ -300,8 +338,8 @@ Pipe StorageS3::read( chooseCompressionMethod(client_auth.uri.key, compression_method), client_auth.client, client_auth.uri.bucket, - key)); - + key.value())); + } auto pipe = Pipe::unitePipes(std::move(pipes)); // It's possible to have many buckets read from s3, resize(num_streams) might open too many handles at the same time. // Using narrowPipe instead. 
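The DisclosedGlobIterator above splits a globbed key into a literal prefix, which is passed to ListObjectsV2, and a regular expression, which filters every returned page, following the truncation flag and continuation token until the listing is exhausted. A rough sketch of that pattern follows; the S3 call is stubbed out by a hypothetical listPage() helper, std::regex stands in for re2, and the hand-written pattern only approximates what makeRegexpPatternFromGlobs() would generate.

```cpp
#include <iostream>
#include <regex>
#include <string>
#include <vector>

/// Stand-in for one ListObjectsV2 page: a batch of keys plus a flag telling whether more pages follow.
struct ListingPage
{
    std::vector<std::string> keys;
    bool truncated = false;
};

/// Hypothetical paged listing; in the patch this role is played by Aws::S3::S3Client::ListObjectsV2.
ListingPage listPage(const std::string & /*prefix*/, size_t page)
{
    if (page == 0)
        return {{"data/2021/a.csv", "data/2021/b.tsv"}, true};
    return {{"data/2022/c.csv"}, false};
}

int main()
{
    /// The glob "data/*/{a,c}.csv" would be turned into a regexp by makeRegexpPatternFromGlobs();
    /// here the translation is written out by hand for illustration.
    const std::string prefix = "data/";                 /// Everything before the first *, ?, or {.
    const std::regex matcher("data/[^/]*/(a|c)\\.csv");

    std::vector<std::string> matched;
    size_t page = 0;
    bool truncated = true;
    while (truncated)
    {
        ListingPage batch = listPage(prefix, page++);
        for (const auto & key : batch.keys)
            if (std::regex_match(key, matcher))
                matched.push_back(key);
        truncated = batch.truncated;                    /// Mirrors GetIsTruncated() and the continuation token.
    }

    for (const auto & key : matched)
        std::cout << key << "\n";
}
```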
diff --git a/src/Storages/StorageS3.h b/src/Storages/StorageS3.h index c47a88e35d9..6e9202abb6f 100644 --- a/src/Storages/StorageS3.h +++ b/src/Storages/StorageS3.h @@ -31,6 +31,17 @@ class StorageS3Source : public SourceWithProgress { public: + class DisclosedGlobIterator + { + public: + DisclosedGlobIterator(Aws::S3::S3Client &, const S3::URI &); + std::optional next(); + private: + class Impl; + /// shared_ptr to have copy constructor + std::shared_ptr pimpl; + }; + static Block getHeader(Block sample_block, bool with_path_column, bool with_file_column); StorageS3Source( @@ -125,7 +136,6 @@ private: String compression_method; String name; - static Strings listFilesWithRegexpMatching(Aws::S3::S3Client & client, const S3::URI & globbed_uri); static void updateClientAndAuthSettings(ContextPtr, ClientAuthentificaiton &); }; diff --git a/src/Storages/StorageS3Distributed.cpp b/src/Storages/StorageS3Distributed.cpp index 12a1f146ad5..2a257ed922e 100644 --- a/src/Storages/StorageS3Distributed.cpp +++ b/src/Storages/StorageS3Distributed.cpp @@ -29,6 +29,7 @@ #include #include #include +#include #include #include #include @@ -180,8 +181,7 @@ private: StorageS3Distributed::StorageS3Distributed( - IAST::Hash tree_hash_, - const String & address_hash_or_filename_, + const String & filename_, const String & access_key_id_, const String & secret_access_key_, const StorageID & table_id_, @@ -193,8 +193,7 @@ StorageS3Distributed::StorageS3Distributed( const Context & context_, const String & compression_method_) : IStorage(table_id_) - , tree_hash(tree_hash_) - , address_hash_or_filename(address_hash_or_filename_) + , filename(filename_) , cluster_name(cluster_name_) , cluster(context_.getCluster(cluster_name)->getClusterWithReplicasAsShards(context_.getSettings())) , format_name(format_name_) @@ -268,28 +267,17 @@ Pipe StorageS3Distributed::read( for (const auto & node : replicas) { connections.emplace_back(std::make_shared( - /*host=*/node.host_name, - /*port=*/node.port, - /*default_database=*/context.getGlobalContext().getCurrentDatabase(), - /*user=*/node.user, - /*password=*/node.password, - /*cluster=*/node.cluster, - /*cluster_secret=*/node.cluster_secret, + node.host_name, node.port, context.getGlobalContext().getCurrentDatabase(), + node.user, node.password, node.cluster, node.cluster_secret, "StorageS3DistributedInititiator", Protocol::Compression::Disable, Protocol::Secure::Disable )); - auto stream = std::make_shared( - /*connection=*/*connections.back(), - /*query=*/queryToString(query_info.query), - /*header=*/header, - /*context=*/context, - /*throttler=*/nullptr, - /*scalars*/scalars, - /*external_tables*/Tables(), - /*stage*/processed_stage - ); - pipes.emplace_back(std::make_shared(std::move(stream))); + + auto remote_query_executor = std::make_shared( + *connections.back(), queryToString(query_info.query), header, context, /*throttler=*/nullptr, scalars, Tables(), processed_stage); + + pipes.emplace_back(createRemoteSourcePipe(remote_query_executor, false, false, false, false)); } } diff --git a/src/Storages/StorageS3Distributed.h b/src/Storages/StorageS3Distributed.h index 23c3230c6c6..13e28d1a7aa 100644 --- a/src/Storages/StorageS3Distributed.h +++ b/src/Storages/StorageS3Distributed.h @@ -46,11 +46,16 @@ public: size_t /*max_block_size*/, unsigned /*num_streams*/) override; + QueryProcessingStage::Enum getQueryProcessingStage(const Context &, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const override + { + return QueryProcessingStage::Enum::WithMergeableState; + } 
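Pinning the processing stage to WithMergeableState means the initiator only merges what the workers send back; the commits that follow also stop materialising the whole key list up front and turn the task supervisor into a registry of per-query callbacks, with the table function registering a callback that lazily pulls the next key from the glob iterator (see the refactoring below). A standalone sketch of that registry shape, using illustrative names and a std::map where the patch uses an unordered_map:

```cpp
#include <functional>
#include <iostream>
#include <map>
#include <mutex>
#include <string>
#include <vector>

/// A registry of per-query "give me the next task" callbacks, keyed by query id.
/// This mirrors the callback-based TaskSupervisor from the patch in a standalone form.
class TaskRegistry
{
public:
    void registerResolver(const std::string & query_id, std::function<std::string()> callback)
    {
        std::lock_guard<std::mutex> lock(mutex);
        resolvers[query_id] = std::move(callback);
    }

    std::string nextTask(const std::string & query_id)
    {
        std::lock_guard<std::mutex> lock(mutex);
        auto it = resolvers.find(query_id);
        if (it == resolvers.end())
            return "";
        std::string task = it->second();
        if (task.empty())
            resolvers.erase(it);   /// Exhausted resolvers are dropped, as in getNextTaskForId().
        return task;
    }

private:
    std::map<std::string, std::function<std::string()>> resolvers;
    std::mutex mutex;
};

int main()
{
    TaskRegistry registry;

    /// Hand out keys lazily instead of building the whole list up front.
    std::vector<std::string> keys = {"bucket/x.csv", "bucket/y.csv"};
    size_t pos = 0;
    registry.registerResolver("query-1", [&]() -> std::string
    {
        return pos < keys.size() ? keys[pos++] : "";
    });

    for (std::string task; !(task = registry.nextTask("query-1")).empty();)
        std::cout << task << "\n";
}
```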
+ + NamesAndTypesList getVirtuals() const override; protected: StorageS3Distributed( - IAST::Hash tree_hash_, - const String & address_hash_or_filename_, + const String & filename_, const String & access_key_id_, const String & secret_access_key_, const StorageID & table_id_, @@ -65,8 +70,7 @@ protected: private: /// Connections from initiator to other nodes std::vector> connections; - IAST::Hash tree_hash; - String address_hash_or_filename; + String filename; std::string cluster_name; ClusterPtr cluster; diff --git a/src/Storages/TaskSupervisor.h b/src/Storages/TaskSupervisor.h index 7de0081d048..20e2489d120 100644 --- a/src/Storages/TaskSupervisor.h +++ b/src/Storages/TaskSupervisor.h @@ -21,34 +21,15 @@ using Task = std::string; using Tasks = std::vector; using TasksIterator = Tasks::iterator; -class S3NextTaskResolver +struct ReadTaskResolver { -public: - S3NextTaskResolver(QueryId query_id, Tasks && all_tasks) - : id(query_id) - , tasks(all_tasks) - , current(tasks.begin()) - {} - - std::string next() - { - auto it = current; - ++current; - return it == tasks.end() ? "" : *it; - } - - std::string getId() - { - return id; - } - -private: - QueryId id; - Tasks tasks; - TasksIterator current; + ReadTaskResolver(String name_, std::function callback_) + : name(name_), callback(callback_) {} + String name; + std::function callback; }; -using S3NextTaskResolverPtr = std::shared_ptr; +using ReadTaskResolverPtr = std::unique_ptr; class TaskSupervisor { @@ -57,13 +38,13 @@ public: TaskSupervisor() = default; - void registerNextTaskResolver(S3NextTaskResolverPtr resolver) + void registerNextTaskResolver(ReadTaskResolverPtr resolver) { std::lock_guard lock(mutex); - auto & target = dict[resolver->getId()]; + auto & target = dict[resolver->name]; if (target) throw Exception(fmt::format("NextTaskResolver with name {} is already registered for query {}", - target->getId(), resolver->getId()), ErrorCodes::LOGICAL_ERROR); + target->name, resolver->name), ErrorCodes::LOGICAL_ERROR); target = std::move(resolver); } @@ -74,14 +55,14 @@ public: auto it = dict.find(id); if (it == dict.end()) return ""; - auto answer = it->second->next(); + auto answer = it->second->callback(); if (answer.empty()) dict.erase(it); return answer; } private: - using ResolverDict = std::unordered_map; + using ResolverDict = std::unordered_map; ResolverDict dict; std::mutex mutex; }; diff --git a/src/TableFunctions/TableFunctionS3Distributed.cpp b/src/TableFunctions/TableFunctionS3Distributed.cpp index a5b9012e7a2..814b2586242 100644 --- a/src/TableFunctions/TableFunctionS3Distributed.cpp +++ b/src/TableFunctions/TableFunctionS3Distributed.cpp @@ -1,3 +1,5 @@ +#include +#include #include #include #include "DataStreams/RemoteBlockInputStream.h" @@ -11,6 +13,8 @@ #if USE_AWS_S3 + +#include #include #include #include @@ -29,8 +33,10 @@ namespace ErrorCodes { extern const int LOGICAL_ERROR; extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH; + extern const int UNEXPECTED_EXPRESSION; } + void TableFunctionS3Distributed::parseArguments(const ASTPtr & ast_function, const Context & context) { /// Parse args @@ -41,32 +47,51 @@ void TableFunctionS3Distributed::parseArguments(const ASTPtr & ast_function, con ASTs & args = args_func.at(0)->children; + const auto message = fmt::format( + "The signature of table function {} could be the following:\n" \ + " - cluster, url, format, structure\n" \ + " - cluster, url, format, structure, compression_method\n" \ + " - cluster, url, access_key_id, secret_access_key, format, structure\n" \ + " - 
cluster, url, access_key_id, secret_access_key, format, structure, compression_method", + getName()); + if (args.size() < 4 || args.size() > 7) - throw Exception("Table function '" + getName() + "' requires 4 to 7 arguments: cluster, url," + - "[access_key_id, secret_access_key,] format, structure and [compression_method].", - ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); + throw Exception(message, ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); for (auto & arg : args) arg = evaluateConstantExpressionOrIdentifierAsLiteral(arg, context); cluster_name = args[0]->as().value.safeGet(); - filename_or_initiator_hash = args[1]->as().value.safeGet(); + filename = args[1]->as().value.safeGet(); - if (args.size() < 5) + if (args.size() == 4) { format = args[2]->as().value.safeGet(); structure = args[3]->as().value.safeGet(); + } + else if (args.size() == 5) + { + format = args[2]->as().value.safeGet(); + structure = args[3]->as().value.safeGet(); + compression_method = args[4]->as().value.safeGet(); } - else + else if (args.size() == 6) { access_key_id = args[2]->as().value.safeGet(); secret_access_key = args[3]->as().value.safeGet(); format = args[4]->as().value.safeGet(); structure = args[5]->as().value.safeGet(); } - - if (args.size() == 5 || args.size() == 7) - compression_method = args.back()->as().value.safeGet(); + else if (args.size() == 7) + { + access_key_id = args[2]->as().value.safeGet(); + secret_access_key = args[3]->as().value.safeGet(); + format = args[4]->as().value.safeGet(); + structure = args[5]->as().value.safeGet(); + compression_method = args[4]->as().value.safeGet(); + } + else + throw Exception(message, ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); } @@ -76,7 +101,7 @@ ColumnsDescription TableFunctionS3Distributed::getActualTableStructure(const Con } StoragePtr TableFunctionS3Distributed::executeImpl( - const ASTPtr & ast_function, const Context & context, + const ASTPtr & /*filename*/, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { UInt64 max_connections = context.getSettingsRef().s3_max_connections; @@ -84,32 +109,28 @@ StoragePtr TableFunctionS3Distributed::executeImpl( /// Initiator specific logic while (context.getClientInfo().query_kind == ClientInfo::QueryKind::INITIAL_QUERY) { - auto poco_uri = Poco::URI{filename_or_initiator_hash}; - - /// This is needed, because secondary query on local replica has the same query-id - if (poco_uri.getHost().empty() || poco_uri.getPort() == 0) - break; - + auto poco_uri = Poco::URI{filename}; S3::URI s3_uri(poco_uri); StorageS3::ClientAuthentificaiton client_auth{s3_uri, access_key_id, secret_access_key, max_connections, {}, {}}; StorageS3::updateClientAndAuthSettings(context, client_auth); + StorageS3Source::DisclosedGlobIterator iterator(*client_auth.client, client_auth.uri); - auto lists = StorageS3::listFilesWithRegexpMatching(*client_auth.client, client_auth.uri); - Strings tasks; - tasks.reserve(lists.size()); + auto callback = [endpoint = client_auth.uri.endpoint, bucket = client_auth.uri.bucket, iterator = std::move(iterator)]() mutable -> String + { + if (auto value = iterator.next()) + return endpoint + '/' + bucket + '/' + *value; + return {}; + }; - for (auto & value : lists) - tasks.emplace_back(client_auth.uri.endpoint + '/' + client_auth.uri.bucket + '/' + value); - - /// Register resolver, which will give other nodes a task to execute - context.getReadTaskSupervisor()->registerNextTaskResolver(std::make_unique(context.getCurrentQueryId(), std::move(tasks))); + 
/// Register resolver, which will give other nodes a task std::make_unique + context.getReadTaskSupervisor()->registerNextTaskResolver( + std::make_unique(context.getCurrentQueryId(), std::move(callback))); break; } StoragePtr storage = StorageS3Distributed::create( - ast_function->getTreeHash(), - filename_or_initiator_hash, + filename, access_key_id, secret_access_key, StorageID(getDatabaseName(), table_name), @@ -137,6 +158,15 @@ void registerTableFunctionCOSDistributed(TableFunctionFactory & factory) factory.registerFunction(); } + +NamesAndTypesList StorageS3Distributed::getVirtuals() const +{ + return NamesAndTypesList{ + {"_path", std::make_shared()}, + {"_file", std::make_shared()} + }; +} + } #endif diff --git a/src/TableFunctions/TableFunctionS3Distributed.h b/src/TableFunctions/TableFunctionS3Distributed.h index a2dd526ab05..ff94eaa83e3 100644 --- a/src/TableFunctions/TableFunctionS3Distributed.h +++ b/src/TableFunctions/TableFunctionS3Distributed.h @@ -44,7 +44,7 @@ protected: void parseArguments(const ASTPtr & ast_function, const Context & context) override; String cluster_name; - String filename_or_initiator_hash; + String filename; String format; String structure; String access_key_id; From 2555ae5d3f7f88b46cc5611c149136ed8357d31d Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Wed, 7 Apr 2021 17:43:34 +0300 Subject: [PATCH 057/108] better processing stage --- src/Client/Connection.h | 2 + src/Storages/StorageS3Distributed.cpp | 74 ++++++++++++--------------- src/Storages/StorageS3Distributed.h | 5 +- 3 files changed, 36 insertions(+), 45 deletions(-) diff --git a/src/Client/Connection.h b/src/Client/Connection.h index 7c21a282ce1..502cf8ad9e8 100644 --- a/src/Client/Connection.h +++ b/src/Client/Connection.h @@ -65,6 +65,8 @@ struct Packet Progress progress; BlockStreamProfileInfo profile_info; std::vector part_uuids; + /// String describes an identifier for a request. + /// Used for dynamic distributed data processing (S3 downloading) String read_task_request; Packet() : type(Protocol::Server::Hello) {} diff --git a/src/Storages/StorageS3Distributed.cpp b/src/Storages/StorageS3Distributed.cpp index 2a257ed922e..604750e4e9a 100644 --- a/src/Storages/StorageS3Distributed.cpp +++ b/src/Storages/StorageS3Distributed.cpp @@ -90,8 +90,8 @@ public: const StorageS3SourceBuilder & s3_builder_) : SourceWithProgress(getHeader(s3_builder_.sample_block, s3_builder_.need_path, s3_builder_.need_file)) , initial_query_id(initial_query_id_) - , s3_source_builder(s3_builder_) - , cli_builder(client_auth_builder_) + , s3b(s3_builder_) + , cab(client_auth_builder_) , read_task_callback(read_task_callback_) { createOrUpdateInnerSource(); @@ -128,7 +128,7 @@ private: } catch (...) 
{ - tryLogCurrentException(&Poco::Logger::get("StorageS3SequentialSource")); + tryLogCurrentException(&Poco::Logger::get(getName())); throw; } } @@ -143,21 +143,15 @@ private: auto client_auth = StorageS3::ClientAuthentificaiton{ next_uri, - cli_builder.access_key_id, - cli_builder.secret_access_key, - cli_builder.max_connections, + cab.access_key_id, + cab.secret_access_key, + cab.max_connections, {}, {}}; - StorageS3::updateClientAndAuthSettings(s3_source_builder.context, client_auth); + StorageS3::updateClientAndAuthSettings(s3b.context, client_auth); inner = std::make_unique( - s3_source_builder.need_path, - s3_source_builder.need_file, - s3_source_builder.format, - s3_source_builder.name, - s3_source_builder.sample_block, - s3_source_builder.context, - s3_source_builder.columns, - s3_source_builder.max_block_size, + s3b.need_path, s3b.need_file, s3b.format, s3b.name, + s3b.sample_block, s3b.context, s3b.columns, s3b.max_block_size, chooseCompressionMethod(client_auth.uri.key, ""), client_auth.client, client_auth.uri.bucket, @@ -167,14 +161,10 @@ private: return true; } - /// This is used to ask about next task String initial_query_id; - - StorageS3SourceBuilder s3_source_builder; - ClientAuthentificationBuilder cli_builder; - + StorageS3SourceBuilder s3b; + ClientAuthentificationBuilder cab; std::unique_ptr inner; - NextTaskCallback read_task_callback; }; @@ -229,28 +219,18 @@ Pipe StorageS3Distributed::read( need_file_column = true; } - StorageS3SourceBuilder s3builder - { - need_path_column, - need_file_column, - format_name, - getName(), - metadata_snapshot->getSampleBlock(), - context, - metadata_snapshot->getColumns(), - max_block_size, + StorageS3SourceBuilder s3builder{ + need_path_column, need_file_column, + format_name, getName(), + metadata_snapshot->getSampleBlock(), context, + metadata_snapshot->getColumns(), max_block_size, compression_method }; return Pipe(std::make_shared( - context.getInitialQueryId(), - context.getNextTaskCallback(), - cli_builder, - s3builder - )); + context.getInitialQueryId(), context.getNextTaskCallback(), cli_builder, s3builder)); } - /// The code from here and below executes on initiator /// Calculate the header. This is significant, because some columns could be thrown away in some cases like query with count(*) @@ -269,21 +249,33 @@ Pipe StorageS3Distributed::read( connections.emplace_back(std::make_shared( node.host_name, node.port, context.getGlobalContext().getCurrentDatabase(), node.user, node.password, node.cluster, node.cluster_secret, - "StorageS3DistributedInititiator", - Protocol::Compression::Disable, - Protocol::Secure::Disable + "S3DistributedInititiator", + node.compression, + node.secure )); auto remote_query_executor = std::make_shared( *connections.back(), queryToString(query_info.query), header, context, /*throttler=*/nullptr, scalars, Tables(), processed_stage); - pipes.emplace_back(createRemoteSourcePipe(remote_query_executor, false, false, false, false)); + pipes.emplace_back(std::make_shared(remote_query_executor, false, false)); } } metadata_snapshot->check(column_names, getVirtuals(), getStorageID()); return Pipe::unitePipes(std::move(pipes)); } + +QueryProcessingStage::Enum StorageS3Distributed::getQueryProcessingStage(const Context & context, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const +{ + /// Initiator executes query on remote node. 
+ if (context.getClientInfo().query_kind == ClientInfo::QueryKind::INITIAL_QUERY) { + return QueryProcessingStage::Enum::WithMergeableState; + } + /// Follower just reads the data. + return QueryProcessingStage::Enum::FetchColumns; +} + + } #endif diff --git a/src/Storages/StorageS3Distributed.h b/src/Storages/StorageS3Distributed.h index 13e28d1a7aa..9bfb792766d 100644 --- a/src/Storages/StorageS3Distributed.h +++ b/src/Storages/StorageS3Distributed.h @@ -46,10 +46,7 @@ public: size_t /*max_block_size*/, unsigned /*num_streams*/) override; - QueryProcessingStage::Enum getQueryProcessingStage(const Context &, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const override - { - return QueryProcessingStage::Enum::WithMergeableState; - } + QueryProcessingStage::Enum getQueryProcessingStage(const Context & context, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const override; NamesAndTypesList getVirtuals() const override; From 7276b40556ec657b077e21b83dbd545768d23b48 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Thu, 8 Apr 2021 03:09:15 +0300 Subject: [PATCH 058/108] better --- src/DataStreams/RemoteQueryExecutor.cpp | 2 + src/Interpreters/ClientInfo.cpp | 4 + src/Interpreters/ClientInfo.h | 3 + src/Storages/StorageS3.cpp | 88 +++++++---- src/Storages/StorageS3.h | 54 ++++++- src/Storages/StorageS3Distributed.cpp | 143 +++--------------- src/Storages/StorageS3Distributed.h | 28 +++- .../TableFunctionS3Distributed.cpp | 9 +- 8 files changed, 168 insertions(+), 163 deletions(-) diff --git a/src/DataStreams/RemoteQueryExecutor.cpp b/src/DataStreams/RemoteQueryExecutor.cpp index dc161a52ac3..bb41a460e2b 100644 --- a/src/DataStreams/RemoteQueryExecutor.cpp +++ b/src/DataStreams/RemoteQueryExecutor.cpp @@ -200,6 +200,8 @@ void RemoteQueryExecutor::sendQuery() connections->sendIgnoredPartUUIDs(duplicated_part_uuids); } + std::cout << "RemoteQueryExecutor " << toString(context.getClientInfo().task_identifier) << std::endl; + connections->sendQuery(timeouts, query, query_id, stage, modified_client_info, true); established = false; diff --git a/src/Interpreters/ClientInfo.cpp b/src/Interpreters/ClientInfo.cpp index 223837aaf3d..45b3e8aeb28 100644 --- a/src/Interpreters/ClientInfo.cpp +++ b/src/Interpreters/ClientInfo.cpp @@ -88,6 +88,8 @@ void ClientInfo::write(WriteBuffer & out, const UInt64 server_protocol_revision) writeBinary(uint8_t(0), out); } } + + writeBinary(task_identifier, out); } @@ -163,6 +165,8 @@ void ClientInfo::read(ReadBuffer & in, const UInt64 client_protocol_revision) readBinary(client_trace_context.trace_flags, in); } } + + readBinary(task_identifier, in); } diff --git a/src/Interpreters/ClientInfo.h b/src/Interpreters/ClientInfo.h index 21aae45bfab..127d50706fc 100644 --- a/src/Interpreters/ClientInfo.h +++ b/src/Interpreters/ClientInfo.h @@ -1,5 +1,6 @@ #pragma once +#include #include #include #include @@ -97,6 +98,8 @@ public: String quota_key; UInt64 distributed_depth = 0; + /// For distributed file processing (e.g. 
s3Distributed) + String task_identifier; bool empty() const { return query_kind == QueryKind::NO_QUERY; } diff --git a/src/Storages/StorageS3.cpp b/src/Storages/StorageS3.cpp index 7d77d420584..71a3bcdf3a9 100644 --- a/src/Storages/StorageS3.cpp +++ b/src/Storages/StorageS3.cpp @@ -149,28 +149,62 @@ Block StorageS3Source::getHeader(Block sample_block, bool with_path_column, bool StorageS3Source::StorageS3Source( bool need_path, bool need_file, - const String & format, + const String & format_, String name_, - const Block & sample_block, - const Context & context, - const ColumnsDescription & columns, - UInt64 max_block_size, - const CompressionMethod compression_method, - const std::shared_ptr & client, - const String & bucket, - const String & key) - : SourceWithProgress(getHeader(sample_block, need_path, need_file)) + const Block & sample_block_, + const Context & context_, + const ColumnsDescription & columns_, + UInt64 max_block_size_, + const String compression_hint_, + const std::shared_ptr & client_, + const String & bucket_, + std::shared_ptr file_iterator_) + : SourceWithProgress(getHeader(sample_block_, need_path, need_file)) , name(std::move(name_)) + , bucket(bucket_) + , format(format_) + , context(context_) + , columns_desc(columns_) + , max_block_size(max_block_size_) + , compression_hint(compression_hint_) + , client(client_) + , sample_block(sample_block_) , with_file_column(need_file) , with_path_column(need_path) - , file_path(bucket + "/" + key) + , file_iterator(file_iterator_) { - read_buf = wrapReadBufferWithCompressionMethod(std::make_unique(client, bucket, key), compression_method); + initialize(); +} + + +bool StorageS3Source::initialize() +{ + String current_key; + if (auto result = file_iterator->next()) + { + current_key = result.value(); + if (current_key.empty()) { + return false; + } + file_path = bucket + "/" + current_key; + std::cout << "StorageS3Source " << file_path << std::endl; + } + else + { + /// Do not initialize read_buffer and stream. 
+ return false; + } + + read_buf = wrapReadBufferWithCompressionMethod( + std::make_unique(client, bucket, current_key), chooseCompressionMethod(current_key, compression_hint)); auto input_format = FormatFactory::instance().getInput(format, *read_buf, sample_block, context, max_block_size); reader = std::make_shared(input_format); - if (columns.hasDefaults()) - reader = std::make_shared(reader, columns, context); + if (columns_desc.hasDefaults()) + reader = std::make_shared(reader, columns_desc, context); + + initialized = false; + return true; } String StorageS3Source::getName() const @@ -206,9 +240,14 @@ Chunk StorageS3Source::generate() return Chunk(std::move(columns), num_rows); } + reader->readSuffix(); reader.reset(); + read_buf.reset(); - return {}; + if (!initialize()) + return {}; + + return generate(); } namespace @@ -322,9 +361,9 @@ Pipe StorageS3::read( /// Iterate through disclosed globs and make a source for each file StorageS3Source::DisclosedGlobIterator glob_iterator(*client_auth.client, client_auth.uri); - /// TODO: better to put first num_streams keys into pipeline - /// and put others dynamically in runtime - while (auto key = glob_iterator.next()) + + auto file_iterator = std::make_shared(glob_iterator); + for (size_t i = 0; i < num_streams; ++i) { pipes.emplace_back(std::make_shared( need_path_column, @@ -335,16 +374,12 @@ Pipe StorageS3::read( local_context, metadata_snapshot->getColumns(), max_block_size, - chooseCompressionMethod(client_auth.uri.key, compression_method), + compression_method, client_auth.client, client_auth.uri.bucket, - key.value())); + file_iterator)); } - auto pipe = Pipe::unitePipes(std::move(pipes)); - // It's possible to have many buckets read from s3, resize(num_streams) might open too many handles at the same time. - // Using narrowPipe instead. 
- narrowPipe(pipe, num_streams); - return pipe; + return Pipe::unitePipes(std::move(pipes)); } BlockOutputStreamPtr StorageS3::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context) @@ -402,7 +437,8 @@ void registerStorageS3Impl(const String & name, StorageFactory & factory) if (engine_args.size() < 2 || engine_args.size() > 5) throw Exception( - "Storage S3 requires 2 to 5 arguments: url, [access_key_id, secret_access_key], name of used format and [compression_method].", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); + "Storage S3 requires 2 to 5 arguments: url, [access_key_id, secret_access_key], name of used format and [compression_method].", + ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); for (auto & engine_arg : engine_args) engine_arg = evaluateConstantExpressionOrIdentifierAsLiteral(engine_arg, args.getLocalContext()); diff --git a/src/Storages/StorageS3.h b/src/Storages/StorageS3.h index 6e9202abb6f..1cb26470c51 100644 --- a/src/Storages/StorageS3.h +++ b/src/Storages/StorageS3.h @@ -17,6 +17,7 @@ #include #include #include +#include namespace Aws::S3 { @@ -30,7 +31,6 @@ class StorageS3SequentialSource; class StorageS3Source : public SourceWithProgress { public: - class DisclosedGlobIterator { public: @@ -42,6 +42,13 @@ public: std::shared_ptr pimpl; }; + struct FileIterator + { + virtual ~FileIterator() = default; + virtual std::optional next() = 0; + }; + + static Block getHeader(Block sample_block, bool with_path_column, bool with_file_column); StorageS3Source( @@ -50,13 +57,13 @@ public: const String & format, String name_, const Block & sample_block, - const Context & context, - const ColumnsDescription & columns, - UInt64 max_block_size, - const CompressionMethod compression_method, - const std::shared_ptr & client, + const Context & context_, + const ColumnsDescription & columns_, + UInt64 max_block_size_, + const String compression_hint_, + const std::shared_ptr & client_, const String & bucket, - const String & key); + std::shared_ptr file_iterator_); String getName() const override; @@ -64,12 +71,26 @@ public: private: String name; + String bucket; + String file_path; + String format; + Context context; + ColumnsDescription columns_desc; + UInt64 max_block_size; + String compression_hint; + std::shared_ptr client; + Block sample_block; + + std::unique_ptr read_buf; BlockInputStreamPtr reader; bool initialized = false; bool with_file_column = false; bool with_path_column = false; - String file_path; + std::shared_ptr file_iterator; + + /// Recreate ReadBuffer and BlockInputStream for each file. 
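+    /// Takes the next key from file_iterator and rebuilds read_buf and reader for it.
+    /// Returns false once the iterator is exhausted; generate() calls it again after
+    /// every finished file, so a single source can read many files one after another.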
+ bool initialize(); }; /** @@ -136,6 +157,23 @@ private: String compression_method; String name; + struct LocalFileIterator : public StorageS3Source::FileIterator + { + explicit LocalFileIterator(StorageS3Source::DisclosedGlobIterator glob_iterator_) + : glob_iterator(glob_iterator_) {} + + StorageS3Source::DisclosedGlobIterator glob_iterator; + /// Several files could be processed in parallel + /// from different sources + std::mutex iterator_mutex; + + std::optional next() override + { + std::lock_guard lock(iterator_mutex); + return glob_iterator.next(); + } + }; + static void updateClientAndAuthSettings(ContextPtr, ClientAuthentificaiton &); }; diff --git a/src/Storages/StorageS3Distributed.cpp b/src/Storages/StorageS3Distributed.cpp index 604750e4e9a..9a7707cb418 100644 --- a/src/Storages/StorageS3Distributed.cpp +++ b/src/Storages/StorageS3Distributed.cpp @@ -9,6 +9,7 @@ #include #include "Client/Connection.h" #include "Core/QueryProcessingStage.h" +#include #include "DataStreams/RemoteBlockInputStream.h" #include #include @@ -34,7 +35,6 @@ #include #include #include -#include #include #include @@ -56,119 +56,6 @@ namespace ErrorCodes extern const int LOGICAL_ERROR; } -struct StorageS3SourceBuilder -{ - bool need_path; - bool need_file; - String format; - String name; - Block sample_block; - const Context & context; - const ColumnsDescription & columns; - UInt64 max_block_size; - String compression_method; -}; - -class StorageS3SequentialSource : public SourceWithProgress -{ -public: - - static Block getHeader(Block sample_block, bool with_path_column, bool with_file_column) - { - if (with_path_column) - sample_block.insert({DataTypeString().createColumn(), std::make_shared(), "_path"}); - if (with_file_column) - sample_block.insert({DataTypeString().createColumn(), std::make_shared(), "_file"}); - - return sample_block; - } - - StorageS3SequentialSource( - String initial_query_id_, - NextTaskCallback read_task_callback_, - const ClientAuthentificationBuilder & client_auth_builder_, - const StorageS3SourceBuilder & s3_builder_) - : SourceWithProgress(getHeader(s3_builder_.sample_block, s3_builder_.need_path, s3_builder_.need_file)) - , initial_query_id(initial_query_id_) - , s3b(s3_builder_) - , cab(client_auth_builder_) - , read_task_callback(read_task_callback_) - { - createOrUpdateInnerSource(); - } - - String getName() const override - { - return "StorageS3SequentialSource"; - } - - Chunk generate() override - { - if (!inner) - return {}; - - auto chunk = inner->generate(); - if (!chunk) - { - if (!createOrUpdateInnerSource()) - return {}; - else - chunk = inner->generate(); - } - return chunk; - } - -private: - - String askAboutNextKey() - { - try - { - return read_task_callback(initial_query_id); - } - catch (...) 
- { - tryLogCurrentException(&Poco::Logger::get(getName())); - throw; - } - } - - bool createOrUpdateInnerSource() - { - auto next_string = askAboutNextKey(); - if (next_string.empty()) - return false; - - auto next_uri = S3::URI(Poco::URI(next_string)); - - auto client_auth = StorageS3::ClientAuthentificaiton{ - next_uri, - cab.access_key_id, - cab.secret_access_key, - cab.max_connections, - {}, {}}; - StorageS3::updateClientAndAuthSettings(s3b.context, client_auth); - - inner = std::make_unique( - s3b.need_path, s3b.need_file, s3b.format, s3b.name, - s3b.sample_block, s3b.context, s3b.columns, s3b.max_block_size, - chooseCompressionMethod(client_auth.uri.key, ""), - client_auth.client, - client_auth.uri.bucket, - next_uri.key - ); - - return true; - } - - String initial_query_id; - StorageS3SourceBuilder s3b; - ClientAuthentificationBuilder cab; - std::unique_ptr inner; - NextTaskCallback read_task_callback; -}; - - StorageS3Distributed::StorageS3Distributed( const String & filename_, @@ -183,17 +70,18 @@ StorageS3Distributed::StorageS3Distributed( const Context & context_, const String & compression_method_) : IStorage(table_id_) + , client_auth{S3::URI{Poco::URI{filename_}}, access_key_id_, secret_access_key_, max_connections_, {}, {}} /// Client and settings will be updated later , filename(filename_) , cluster_name(cluster_name_) , cluster(context_.getCluster(cluster_name)->getClusterWithReplicasAsShards(context_.getSettings())) , format_name(format_name_) , compression_method(compression_method_) - , cli_builder{access_key_id_, secret_access_key_, max_connections_} { StorageInMemoryMetadata storage_metadata; storage_metadata.setColumns(columns_); storage_metadata.setConstraints(constraints_); setInMemoryMetadata(storage_metadata); + StorageS3::updateClientAndAuthSettings(context_, client_auth); } @@ -206,6 +94,8 @@ Pipe StorageS3Distributed::read( size_t max_block_size, unsigned /*num_streams*/) { + StorageS3::updateClientAndAuthSettings(context, client_auth); + /// Secondary query, need to read from S3 if (context.getClientInfo().query_kind == ClientInfo::QueryKind::SECONDARY_QUERY) { @@ -219,16 +109,21 @@ Pipe StorageS3Distributed::read( need_file_column = true; } - StorageS3SourceBuilder s3builder{ - need_path_column, need_file_column, - format_name, getName(), + std::cout << "Got UUID on worker " << toString(context.getClientInfo().task_identifier) << std::endl; + + auto file_iterator = std::make_shared( + context.getNextTaskCallback(), + context.getInitialQueryId()); + + return Pipe(std::make_shared( + need_path_column, need_file_column, format_name, getName(), metadata_snapshot->getSampleBlock(), context, metadata_snapshot->getColumns(), max_block_size, - compression_method - }; - - return Pipe(std::make_shared( - context.getInitialQueryId(), context.getNextTaskCallback(), cli_builder, s3builder)); + compression_method, + client_auth.client, + client_auth.uri.bucket, + file_iterator + )); } /// The code from here and below executes on initiator @@ -254,6 +149,8 @@ Pipe StorageS3Distributed::read( node.secure )); + std::cout << "S3Distributed initiator " << toString(context.getClientInfo().task_identifier) << std::endl; + auto remote_query_executor = std::make_shared( *connections.back(), queryToString(query_info.query), header, context, /*throttler=*/nullptr, scalars, Tables(), processed_stage); diff --git a/src/Storages/StorageS3Distributed.h b/src/Storages/StorageS3Distributed.h index 9bfb792766d..4a811f4f84f 100644 --- a/src/Storages/StorageS3Distributed.h +++ 
b/src/Storages/StorageS3Distributed.h @@ -5,9 +5,9 @@ #if USE_AWS_S3 #include "Client/Connection.h" -#include "Interpreters/Cluster.h" -#include "Storages/IStorage.h" -#include "Storages/StorageS3.h" +#include +#include +#include #include #include @@ -67,13 +67,33 @@ protected: private: /// Connections from initiator to other nodes std::vector> connections; + StorageS3::ClientAuthentificaiton client_auth; + String filename; std::string cluster_name; ClusterPtr cluster; String format_name; String compression_method; - ClientAuthentificationBuilder cli_builder; + + + struct DistributedFileIterator : public StorageS3Source::FileIterator + { + DistributedFileIterator(NextTaskCallback callback_, String identifier_) + : callback(callback_), identifier(identifier_) {} + + NextTaskCallback callback; + String identifier; + + std::optional next() override + { + std::cout << "DistributedFileIterator" << std::endl; + std::cout << identifier << std::endl; + auto answer = callback(identifier); + std::cout << answer << std::endl; + return answer; + } + }; }; diff --git a/src/TableFunctions/TableFunctionS3Distributed.cpp b/src/TableFunctions/TableFunctionS3Distributed.cpp index 814b2586242..36f16e12d4d 100644 --- a/src/TableFunctions/TableFunctionS3Distributed.cpp +++ b/src/TableFunctions/TableFunctionS3Distributed.cpp @@ -115,10 +115,15 @@ StoragePtr TableFunctionS3Distributed::executeImpl( StorageS3::updateClientAndAuthSettings(context, client_auth); StorageS3Source::DisclosedGlobIterator iterator(*client_auth.client, client_auth.uri); - auto callback = [endpoint = client_auth.uri.endpoint, bucket = client_auth.uri.bucket, iterator = std::move(iterator)]() mutable -> String + auto task_identifier = UUIDHelpers::generateV4(); + const_cast(context).getClientInfo().task_identifier = toString(task_identifier); + + std::cout << "Created UUID: " << toString(context.getClientInfo().task_identifier) << std::endl; + + auto callback = [iterator = std::move(iterator)]() mutable -> String { if (auto value = iterator.next()) - return endpoint + '/' + bucket + '/' + *value; + return *value; return {}; }; From 8a4b5a586e3f11397ab8eb36b8d613ad047557d1 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Thu, 8 Apr 2021 17:22:19 +0300 Subject: [PATCH 059/108] fixed uuid --- src/Core/Defines.h | 3 ++ src/DataStreams/RemoteQueryExecutor.cpp | 42 +++++++------------ src/DataStreams/RemoteQueryExecutor.h | 8 ++-- src/Interpreters/ClientInfo.cpp | 6 ++- src/Storages/StorageS3Distributed.cpp | 31 ++++++++++---- .../TableFunctionS3Distributed.cpp | 32 +------------- 6 files changed, 53 insertions(+), 69 deletions(-) diff --git a/src/Core/Defines.h b/src/Core/Defines.h index e7c1c86a23e..aba3a98b8ef 100644 --- a/src/Core/Defines.h +++ b/src/Core/Defines.h @@ -74,6 +74,9 @@ /// Minimum revision supporting OpenTelemetry #define DBMS_MIN_REVISION_WITH_OPENTELEMETRY 54442 +/// Minimum revision supporting task processing on cluster +#define DBMS_MIN_REVISION_WITH_CLUSTER_PROCESSING 54443 + /// Minimum revision supporting interserver secret. 
#define DBMS_MIN_REVISION_WITH_INTERSERVER_SECRET 54441 diff --git a/src/DataStreams/RemoteQueryExecutor.cpp b/src/DataStreams/RemoteQueryExecutor.cpp index bb41a460e2b..ae5368975e1 100644 --- a/src/DataStreams/RemoteQueryExecutor.cpp +++ b/src/DataStreams/RemoteQueryExecutor.cpp @@ -29,14 +29,11 @@ namespace ErrorCodes RemoteQueryExecutor::RemoteQueryExecutor( Connection & connection, - const String & query_, - const Block & header_, - ContextPtr context_, - ThrottlerPtr throttler, - const Scalars & scalars_, - const Tables & external_tables_, - QueryProcessingStage::Enum stage_) - : header(header_), query(query_), context(context_), scalars(scalars_), external_tables(external_tables_), stage(stage_) + const String & query_, const Block & header_, ContextPtr context_, + ThrottlerPtr throttler, const Scalars & scalars_, const Tables & external_tables_, + QueryProcessingStage::Enum stage_, std::optional task_identifier_) + : header(header_), query(query_), context(context_) + , scalars(scalars_), external_tables(external_tables_), stage(stage_), task_identifier(task_identifier_) { create_connections = [this, &connection, throttler]() { @@ -46,14 +43,11 @@ RemoteQueryExecutor::RemoteQueryExecutor( RemoteQueryExecutor::RemoteQueryExecutor( std::vector && connections_, - const String & query_, - const Block & header_, - ContextPtr context_, - const ThrottlerPtr & throttler, - const Scalars & scalars_, - const Tables & external_tables_, - QueryProcessingStage::Enum stage_) - : header(header_), query(query_), context(context_), scalars(scalars_), external_tables(external_tables_), stage(stage_) + const String & query_, const Block & header_, ContextPtr context_, + const ThrottlerPtr & throttler, const Scalars & scalars_, const Tables & external_tables_, + QueryProcessingStage::Enum stage_, std::optional task_identifier_) + : header(header_), query(query_), context(context_) + , scalars(scalars_), external_tables(external_tables_), stage(stage_), task_identifier(task_identifier_) { create_connections = [this, connections_, throttler]() mutable { return std::make_unique(std::move(connections_), context->getSettingsRef(), throttler); @@ -62,14 +56,11 @@ RemoteQueryExecutor::RemoteQueryExecutor( RemoteQueryExecutor::RemoteQueryExecutor( const ConnectionPoolWithFailoverPtr & pool, - const String & query_, - const Block & header_, - ContextPtr context_, - const ThrottlerPtr & throttler, - const Scalars & scalars_, - const Tables & external_tables_, - QueryProcessingStage::Enum stage_) - : header(header_), query(query_), context(context_), scalars(scalars_), external_tables(external_tables_), stage(stage_) + const String & query_, const Block & header_, ContextPtr context_, + const ThrottlerPtr & throttler, const Scalars & scalars_, const Tables & external_tables_, + QueryProcessingStage::Enum stage_, std::optional task_identifier_) + : header(header_), query(query_), context(context_) + , scalars(scalars_), external_tables(external_tables_), stage(stage_), task_identifier(task_identifier_) { create_connections = [this, pool, throttler]()->std::unique_ptr { @@ -189,6 +180,7 @@ void RemoteQueryExecutor::sendQuery() auto timeouts = ConnectionTimeouts::getTCPTimeoutsWithFailover(settings); ClientInfo modified_client_info = context->getClientInfo(); modified_client_info.query_kind = ClientInfo::QueryKind::SECONDARY_QUERY; + modified_client_info.task_identifier = task_identifier ? 
*task_identifier : ""; if (CurrentThread::isInitialized()) { modified_client_info.client_trace_context = CurrentThread::get().thread_trace_context; @@ -200,8 +192,6 @@ void RemoteQueryExecutor::sendQuery() connections->sendIgnoredPartUUIDs(duplicated_part_uuids); } - std::cout << "RemoteQueryExecutor " << toString(context.getClientInfo().task_identifier) << std::endl; - connections->sendQuery(timeouts, query, query_id, stage, modified_client_info, true); established = false; diff --git a/src/DataStreams/RemoteQueryExecutor.h b/src/DataStreams/RemoteQueryExecutor.h index cdd3eda5897..e0748474c37 100644 --- a/src/DataStreams/RemoteQueryExecutor.h +++ b/src/DataStreams/RemoteQueryExecutor.h @@ -37,21 +37,21 @@ public: Connection & connection, const String & query_, const Block & header_, ContextPtr context_, ThrottlerPtr throttler_ = nullptr, const Scalars & scalars_ = Scalars(), const Tables & external_tables_ = Tables(), - QueryProcessingStage::Enum stage_ = QueryProcessingStage::Complete); + QueryProcessingStage::Enum stage_ = QueryProcessingStage::Complete, std::optional task_identifier_ = {}); /// Accepts several connections already taken from pool. RemoteQueryExecutor( std::vector && connections_, const String & query_, const Block & header_, ContextPtr context_, const ThrottlerPtr & throttler = nullptr, const Scalars & scalars_ = Scalars(), const Tables & external_tables_ = Tables(), - QueryProcessingStage::Enum stage_ = QueryProcessingStage::Complete); + QueryProcessingStage::Enum stage_ = QueryProcessingStage::Complete, std::optional task_identifier_ = {}); /// Takes a pool and gets one or several connections from it. RemoteQueryExecutor( const ConnectionPoolWithFailoverPtr & pool, const String & query_, const Block & header_, ContextPtr context_, const ThrottlerPtr & throttler = nullptr, const Scalars & scalars_ = Scalars(), const Tables & external_tables_ = Tables(), - QueryProcessingStage::Enum stage_ = QueryProcessingStage::Complete); + QueryProcessingStage::Enum stage_ = QueryProcessingStage::Complete, std::optional task_identifier_ = {}); ~RemoteQueryExecutor(); @@ -119,6 +119,8 @@ private: /// Temporary tables needed to be sent to remote servers Tables external_tables; QueryProcessingStage::Enum stage; + /// Initiator identifier for distributed task processing + std::optional task_identifier; /// Streams for reading from temporary tables and following sending of data /// to remote servers for GLOBAL-subqueries diff --git a/src/Interpreters/ClientInfo.cpp b/src/Interpreters/ClientInfo.cpp index 45b3e8aeb28..50164ea4074 100644 --- a/src/Interpreters/ClientInfo.cpp +++ b/src/Interpreters/ClientInfo.cpp @@ -89,7 +89,8 @@ void ClientInfo::write(WriteBuffer & out, const UInt64 server_protocol_revision) } } - writeBinary(task_identifier, out); + if (server_protocol_revision >= DBMS_MIN_REVISION_WITH_CLUSTER_PROCESSING) + writeBinary(task_identifier, out); } @@ -166,7 +167,8 @@ void ClientInfo::read(ReadBuffer & in, const UInt64 client_protocol_revision) } } - readBinary(task_identifier, in); + if (client_protocol_revision >= DBMS_MIN_REVISION_WITH_CLUSTER_PROCESSING) + readBinary(task_identifier, in); } diff --git a/src/Storages/StorageS3Distributed.cpp b/src/Storages/StorageS3Distributed.cpp index 9a7707cb418..dfae9038dd7 100644 --- a/src/Storages/StorageS3Distributed.cpp +++ b/src/Storages/StorageS3Distributed.cpp @@ -108,12 +108,10 @@ Pipe StorageS3Distributed::read( if (column == "_file") need_file_column = true; } - - std::cout << "Got UUID on worker " << 
toString(context.getClientInfo().task_identifier) << std::endl; auto file_iterator = std::make_shared( context.getNextTaskCallback(), - context.getInitialQueryId()); + context.getClientInfo().task_identifier); return Pipe(std::make_shared( need_path_column, need_file_column, format_name, getName(), @@ -127,6 +125,23 @@ Pipe StorageS3Distributed::read( } /// The code from here and below executes on initiator + S3::URI s3_uri(Poco::URI{filename}); + StorageS3::updateClientAndAuthSettings(context, client_auth); + + auto callback = [iterator = StorageS3Source::DisclosedGlobIterator(*client_auth.client, client_auth.uri)]() mutable -> String + { + if (auto value = iterator.next()) + return *value; + return {}; + }; + + auto task_identifier = toString(UUIDHelpers::generateV4()); + + std::cout << "Generated UUID : " << task_identifier << std::endl; + + /// Register resolver, which will give other nodes a task std::make_unique + context.getReadTaskSupervisor()->registerNextTaskResolver( + std::make_unique(task_identifier, std::move(callback))); /// Calculate the header. This is significant, because some columns could be thrown away in some cases like query with count(*) Block header = @@ -149,10 +164,11 @@ Pipe StorageS3Distributed::read( node.secure )); - std::cout << "S3Distributed initiator " << toString(context.getClientInfo().task_identifier) << std::endl; - + /// For unknown reason global context is passed to IStorage::read() method + /// So, task_identifier is passed as constructor argument. It is more obvious. auto remote_query_executor = std::make_shared( - *connections.back(), queryToString(query_info.query), header, context, /*throttler=*/nullptr, scalars, Tables(), processed_stage); + *connections.back(), queryToString(query_info.query), header, context, + /*throttler=*/nullptr, scalars, Tables(), processed_stage, task_identifier); pipes.emplace_back(std::make_shared(remote_query_executor, false, false)); } @@ -162,7 +178,8 @@ Pipe StorageS3Distributed::read( return Pipe::unitePipes(std::move(pipes)); } -QueryProcessingStage::Enum StorageS3Distributed::getQueryProcessingStage(const Context & context, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const +QueryProcessingStage::Enum StorageS3Distributed::getQueryProcessingStage( + const Context & context, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const { /// Initiator executes query on remote node. 
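     /// It only merges the partial (mergeable) states returned by the workers, hence WithMergeableState.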
if (context.getClientInfo().query_kind == ClientInfo::QueryKind::INITIAL_QUERY) { diff --git a/src/TableFunctions/TableFunctionS3Distributed.cpp b/src/TableFunctions/TableFunctionS3Distributed.cpp index 36f16e12d4d..b11a8d43b27 100644 --- a/src/TableFunctions/TableFunctionS3Distributed.cpp +++ b/src/TableFunctions/TableFunctionS3Distributed.cpp @@ -104,36 +104,6 @@ StoragePtr TableFunctionS3Distributed::executeImpl( const ASTPtr & /*filename*/, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { - UInt64 max_connections = context.getSettingsRef().s3_max_connections; - - /// Initiator specific logic - while (context.getClientInfo().query_kind == ClientInfo::QueryKind::INITIAL_QUERY) - { - auto poco_uri = Poco::URI{filename}; - S3::URI s3_uri(poco_uri); - StorageS3::ClientAuthentificaiton client_auth{s3_uri, access_key_id, secret_access_key, max_connections, {}, {}}; - StorageS3::updateClientAndAuthSettings(context, client_auth); - StorageS3Source::DisclosedGlobIterator iterator(*client_auth.client, client_auth.uri); - - auto task_identifier = UUIDHelpers::generateV4(); - const_cast(context).getClientInfo().task_identifier = toString(task_identifier); - - std::cout << "Created UUID: " << toString(context.getClientInfo().task_identifier) << std::endl; - - auto callback = [iterator = std::move(iterator)]() mutable -> String - { - if (auto value = iterator.next()) - return *value; - return {}; - }; - - /// Register resolver, which will give other nodes a task std::make_unique - context.getReadTaskSupervisor()->registerNextTaskResolver( - std::make_unique(context.getCurrentQueryId(), std::move(callback))); - break; - } - - StoragePtr storage = StorageS3Distributed::create( filename, access_key_id, @@ -141,7 +111,7 @@ StoragePtr TableFunctionS3Distributed::executeImpl( StorageID(getDatabaseName(), table_name), cluster_name, format, - max_connections, + context.getSettingsRef().s3_max_connections, getActualTableStructure(context), ConstraintsDescription{}, context, From 3b22375c363c882ea1928e62988c40d6be7996f8 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Thu, 8 Apr 2021 18:57:39 +0300 Subject: [PATCH 060/108] add func test --- .../0_stateless/01801_s3_distributed.reference | 0 tests/queries/0_stateless/01801_s3_distributed.sh | 12 ++++++++++++ 2 files changed, 12 insertions(+) create mode 100644 tests/queries/0_stateless/01801_s3_distributed.reference create mode 100644 tests/queries/0_stateless/01801_s3_distributed.sh diff --git a/tests/queries/0_stateless/01801_s3_distributed.reference b/tests/queries/0_stateless/01801_s3_distributed.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01801_s3_distributed.sh b/tests/queries/0_stateless/01801_s3_distributed.sh new file mode 100644 index 00000000000..10aa91d5ae7 --- /dev/null +++ b/tests/queries/0_stateless/01801_s3_distributed.sh @@ -0,0 +1,12 @@ +#!/usr/bin/env bash + +# NOTE: this is a partial copy of the 01683_dist_INSERT_block_structure_mismatch, +# but this test also checks the log messages + +CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. 
"$CUR_DIR"/../shell_config.sh + + +$CLICKHOUSE_CLIENT -q "SELECT * FROM s3('https://s3.mds.yandex.net/clickhouse-test-reports/*/*/functional_stateless_tests_(ubsan)/test_results.tsv', '$S3_ACCESS_KEY_ID', '$S3_SECRET_ACCESS', 'LineAsString', 'line String') limit 100 FORMAT NUll;" +$CLICKHOUSE_CLIENT -q "SELECT * FROM s3Distributed('test_cluster_two_shards', 'https://s3.mds.yandex.net/clickhouse-test-reports/*/*/functional_stateless_tests_(ubsan)/test_results.tsv', '$S3_ACCESS_KEY_ID', '$S3_SECRET_ACCESS', 'LineAsString', 'line String') limit 100 FORMAT Null;" \ No newline at end of file From 6f268476e6e5621c7f48e133305299140b09942b Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Thu, 8 Apr 2021 19:39:20 +0300 Subject: [PATCH 061/108] chmod --- tests/queries/0_stateless/01801_s3_distributed.sh | 0 1 file changed, 0 insertions(+), 0 deletions(-) mode change 100644 => 100755 tests/queries/0_stateless/01801_s3_distributed.sh diff --git a/tests/queries/0_stateless/01801_s3_distributed.sh b/tests/queries/0_stateless/01801_s3_distributed.sh old mode 100644 new mode 100755 From 6f7f3a4ec5930352696bc7e2da9d0ffbf97474fc Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Thu, 8 Apr 2021 19:47:51 +0300 Subject: [PATCH 062/108] better test --- tests/queries/0_stateless/01801_s3_distributed.sh | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/tests/queries/0_stateless/01801_s3_distributed.sh b/tests/queries/0_stateless/01801_s3_distributed.sh index 10aa91d5ae7..e426587ea0f 100755 --- a/tests/queries/0_stateless/01801_s3_distributed.sh +++ b/tests/queries/0_stateless/01801_s3_distributed.sh @@ -8,5 +8,5 @@ CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) . "$CUR_DIR"/../shell_config.sh -$CLICKHOUSE_CLIENT -q "SELECT * FROM s3('https://s3.mds.yandex.net/clickhouse-test-reports/*/*/functional_stateless_tests_(ubsan)/test_results.tsv', '$S3_ACCESS_KEY_ID', '$S3_SECRET_ACCESS', 'LineAsString', 'line String') limit 100 FORMAT NUll;" -$CLICKHOUSE_CLIENT -q "SELECT * FROM s3Distributed('test_cluster_two_shards', 'https://s3.mds.yandex.net/clickhouse-test-reports/*/*/functional_stateless_tests_(ubsan)/test_results.tsv', '$S3_ACCESS_KEY_ID', '$S3_SECRET_ACCESS', 'LineAsString', 'line String') limit 100 FORMAT Null;" \ No newline at end of file +${CLICKHOUSE_CLIENT_BINARY} --send_logs_level="none" -q "SELECT * FROM s3('https://s3.mds.yandex.net/clickhouse-test-reports/*/*/functional_stateless_tests_(ubsan)/test_results.tsv', '$S3_ACCESS_KEY_ID', '$S3_SECRET_ACCESS', 'LineAsString', 'line String') limit 100 FORMAT Null;" +${CLICKHOUSE_CLIENT_BINARY} --send_logs_level="none" -q "SELECT * FROM s3Distributed('test_cluster_two_shards', 'https://s3.mds.yandex.net/clickhouse-test-reports/*/*/functional_stateless_tests_(ubsan)/test_results.tsv', '$S3_ACCESS_KEY_ID', '$S3_SECRET_ACCESS', 'LineAsString', 'line String') limit 100 FORMAT Null;" \ No newline at end of file From 05e04f792e259ea951c3913cd9cfc243a1c16ff6 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Thu, 8 Apr 2021 20:18:14 +0300 Subject: [PATCH 063/108] disable func test in fast test --- docker/test/fasttest/run.sh | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/docker/test/fasttest/run.sh b/docker/test/fasttest/run.sh index 14c6ee0d337..51185fd0d17 100755 --- a/docker/test/fasttest/run.sh +++ b/docker/test/fasttest/run.sh @@ -337,7 +337,7 @@ function run_tests secure sha256 xz - + # Not sure why these two fail even in sequential mode. Disabled for now # to make some progress. 
00646_url_engine @@ -365,7 +365,10 @@ function run_tests 01622_defaults_for_url_engine # JSON functions - 01666_blns + 01666_blnsi + + # Depends on AWS + 01801_s3_distributed.sh ) (time clickhouse-test --hung-check -j 8 --order=random --use-skip-list --no-long --testname --shard --zookeeper --skip "${TESTS_TO_SKIP[@]}" -- "$FASTTEST_FOCUS" 2>&1 ||:) | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/test_log.txt" From 36a8419f600f50d4a51e2249849174d6eaa592c6 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Thu, 8 Apr 2021 22:00:39 +0300 Subject: [PATCH 064/108] style --- src/Client/Connection.cpp | 10 +++--- src/Client/HedgedConnections.h | 5 +-- src/Client/IConnections.h | 2 +- src/Client/MultiplexedConnections.cpp | 3 +- src/Client/MultiplexedConnections.h | 2 +- src/DataStreams/RemoteQueryExecutor.cpp | 6 ++-- src/DataStreams/RemoteQueryExecutor.h | 2 +- src/Interpreters/Cluster.cpp | 11 +------ src/Interpreters/Cluster.h | 3 -- src/Interpreters/Context.cpp | 6 ++-- src/Interpreters/Context.h | 10 +++--- src/Server/TCPHandler.cpp | 11 ++++--- src/Server/TCPHandler.h | 3 +- src/Storages/StorageS3.cpp | 1 - src/Storages/StorageS3Distributed.cpp | 4 +-- src/Storages/StorageS3Distributed.h | 4 +-- src/Storages/TaskSupervisor.h | 8 ++--- .../TableFunctionS3Distributed.cpp | 17 +++------- tests/integration/test_s3_distributed/test.py | 31 +++++++++++++++++++ .../0_stateless/01801_s3_distributed.sh | 5 ++- 20 files changed, 79 insertions(+), 65 deletions(-) diff --git a/src/Client/Connection.cpp b/src/Client/Connection.cpp index 7bcd8970f8d..14c5d8d710d 100644 --- a/src/Client/Connection.cpp +++ b/src/Client/Connection.cpp @@ -552,10 +552,10 @@ void Connection::sendIgnoredPartUUIDs(const std::vector & uuids) } -void Connection::sendReadTaskResponse(const std::string & responce) +void Connection::sendReadTaskResponse(const std::string & response) { writeVarUInt(Protocol::Client::ReadTaskResponse, *out); - writeStringBinary(responce, *out); + writeStringBinary(response, *out); out->next(); } @@ -843,9 +843,9 @@ Packet Connection::receivePacket() String Connection::receiveReadTaskRequest() const { - String next_task; - readStringBinary(next_task, *in); - return next_task; + String read_task; + readStringBinary(read_task, *in); + return read_task; } diff --git a/src/Client/HedgedConnections.h b/src/Client/HedgedConnections.h index 331394c2322..20640460009 100644 --- a/src/Client/HedgedConnections.h +++ b/src/Client/HedgedConnections.h @@ -84,8 +84,9 @@ public: const ClientInfo & client_info, bool with_pending_data) override; - void sendReadTaskResponce(const String &) override { - throw Exception("sendReadTaskResponce in not supported with HedgedConnections", ErrorCodes::LOGICAL_ERROR); + void sendReadTaskResponse(const String &) override + { + throw Exception("sendReadTaskResponse in not supported with HedgedConnections", ErrorCodes::LOGICAL_ERROR); } Packet receivePacket() override; diff --git a/src/Client/IConnections.h b/src/Client/IConnections.h index a5e7638c0bd..d251a5fb3ab 100644 --- a/src/Client/IConnections.h +++ b/src/Client/IConnections.h @@ -24,7 +24,7 @@ public: const ClientInfo & client_info, bool with_pending_data) = 0; - virtual void sendReadTaskResponce(const String &) = 0; + virtual void sendReadTaskResponse(const String &) = 0; /// Get packet from any replica. 
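    /// The packet may also be a ReadTaskRequest, which the caller answers via sendReadTaskResponse().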
virtual Packet receivePacket() = 0; diff --git a/src/Client/MultiplexedConnections.cpp b/src/Client/MultiplexedConnections.cpp index 73b68b9fc50..28740873d44 100644 --- a/src/Client/MultiplexedConnections.cpp +++ b/src/Client/MultiplexedConnections.cpp @@ -156,7 +156,7 @@ void MultiplexedConnections::sendIgnoredPartUUIDs(const std::vector & uuid } -void MultiplexedConnections::sendReadTaskResponce(const String & response) +void MultiplexedConnections::sendReadTaskResponse(const String & response) { /// No lock_guard because assume it is already called under lock current_connection->sendReadTaskResponse(response); @@ -217,6 +217,7 @@ Packet MultiplexedConnections::drain() switch (packet.type) { + case Protocol::Server::ReadTaskRequest: case Protocol::Server::PartUUIDs: case Protocol::Server::Data: case Protocol::Server::Progress: diff --git a/src/Client/MultiplexedConnections.h b/src/Client/MultiplexedConnections.h index 0021ecd863d..f642db1c4cd 100644 --- a/src/Client/MultiplexedConnections.h +++ b/src/Client/MultiplexedConnections.h @@ -39,7 +39,7 @@ public: const ClientInfo & client_info, bool with_pending_data) override; - void sendReadTaskResponce(const String &) override; + void sendReadTaskResponse(const String &) override; Packet receivePacket() override; diff --git a/src/DataStreams/RemoteQueryExecutor.cpp b/src/DataStreams/RemoteQueryExecutor.cpp index ae5368975e1..47ca57eaf0b 100644 --- a/src/DataStreams/RemoteQueryExecutor.cpp +++ b/src/DataStreams/RemoteQueryExecutor.cpp @@ -383,9 +383,9 @@ bool RemoteQueryExecutor::setPartUUIDs(const std::vector & uuids) void RemoteQueryExecutor::processReadTaskRequest(const String & request) { - auto query_context = context->getQueryContext(); - String responce = query_context->getReadTaskSupervisor()->getNextTaskForId(request); - connections->sendReadTaskResponce(responce); + Context & query_context = context.getQueryContext(); + String response = query_context.getTaskSupervisor()->getNextTaskForId(request); + connections->sendReadTaskResponse(response); } void RemoteQueryExecutor::finish(std::unique_ptr * read_context) diff --git a/src/DataStreams/RemoteQueryExecutor.h b/src/DataStreams/RemoteQueryExecutor.h index e0748474c37..5a0c2713623 100644 --- a/src/DataStreams/RemoteQueryExecutor.h +++ b/src/DataStreams/RemoteQueryExecutor.h @@ -88,7 +88,7 @@ public: /// Set the query_id. For now, used by performance test to later find the query /// in the server query_log. Must be called before sending the query to the server. - void setQueryId(const std::string& query_id_) { assert(!sent_query); query_id = query_id_; } + void setQueryId(const std::string& query_id_) { assert(!sent_query); std::cout << query_id_ << std::endl; query_id = query_id_; } /// Specify how we allocate connections on a shard. 
void setPoolMode(PoolMode pool_mode_) { pool_mode = pool_mode_; } diff --git a/src/Interpreters/Cluster.cpp b/src/Interpreters/Cluster.cpp index 20ec3a794d1..7ccb28ae0f5 100644 --- a/src/Interpreters/Cluster.cpp +++ b/src/Interpreters/Cluster.cpp @@ -139,21 +139,12 @@ String Cluster::Address::toString() const } -String Cluster::Address::getHash() const -{ - SipHash hash; - hash.update(host_name); - hash.update(std::to_string(port)); - hash.update(user); - hash.update(password); - return std::to_string(hash.get64()); -} - String Cluster::Address::toString(const String & host_name, UInt16 port) { return escapeForFileName(host_name) + ':' + DB::toString(port); } + String Cluster::Address::readableString() const { String res; diff --git a/src/Interpreters/Cluster.h b/src/Interpreters/Cluster.h index 89d508396ad..5976074ec7a 100644 --- a/src/Interpreters/Cluster.h +++ b/src/Interpreters/Cluster.h @@ -122,9 +122,6 @@ public: /// Returns 'escaped_host_name:port' String toString() const; - - /// Returns hash of all fields - String getHash() const; /// Returns 'host_name:port' String readableString() const; diff --git a/src/Interpreters/Context.cpp b/src/Interpreters/Context.cpp index 689c211d94b..4ed0bc78c5b 100644 --- a/src/Interpreters/Context.cpp +++ b/src/Interpreters/Context.cpp @@ -2616,7 +2616,7 @@ PartUUIDsPtr Context::getPartUUIDs() } -TaskSupervisorPtr Context::getReadTaskSupervisor() const +TaskSupervisorPtr Context::getTaskSupervisor() const { return read_task_supervisor; } @@ -2628,7 +2628,7 @@ void Context::setReadTaskSupervisor(TaskSupervisorPtr resolver) } -NextTaskCallback Context::getNextTaskCallback() const +ReadTaskCallback Context::getReadTaskCallback() const { if (!next_task_callback.has_value()) throw Exception(fmt::format("Next task callback is not set for query {}", getInitialQueryId()), ErrorCodes::LOGICAL_ERROR); @@ -2636,7 +2636,7 @@ NextTaskCallback Context::getNextTaskCallback() const } -void Context::setNextTaskCallback(NextTaskCallback && callback) +void Context::setReadTaskCallback(ReadTaskCallback && callback) { next_task_callback = callback; } diff --git a/src/Interpreters/Context.h b/src/Interpreters/Context.h index 6516a342b12..ee3d7519c8e 100644 --- a/src/Interpreters/Context.h +++ b/src/Interpreters/Context.h @@ -131,7 +131,7 @@ using InputBlocksReader = std::function; /// Class which gives tasks to other nodes in cluster class TaskSupervisor; using TaskSupervisorPtr = std::shared_ptr; -using NextTaskCallback = std::function; +using ReadTaskCallback = std::function; /// An empty interface for an arbitrary object that may be attached by a shared pointer /// to query context, when using ClickHouse as a library. @@ -196,7 +196,7 @@ private: /// Fields for distributed s3 function TaskSupervisorPtr read_task_supervisor; - std::optional next_task_callback; + std::optional next_task_callback; /// Record entities accessed by current query, and store this information in system.query_log. 
struct QueryAccessInfo @@ -780,11 +780,11 @@ public: PartUUIDsPtr getIgnoredPartUUIDs(); /// A bunch of functions for distributed s3 function - TaskSupervisorPtr getReadTaskSupervisor() const; + TaskSupervisorPtr getTaskSupervisor() const; void setReadTaskSupervisor(TaskSupervisorPtr); - NextTaskCallback getNextTaskCallback() const; - void setNextTaskCallback(NextTaskCallback && callback); + ReadTaskCallback getReadTaskCallback() const; + void setReadTaskCallback(ReadTaskCallback && callback); private: std::unique_lock getLock() const; diff --git a/src/Server/TCPHandler.cpp b/src/Server/TCPHandler.cpp index 3b8823e1e86..e49ae27ccdd 100644 --- a/src/Server/TCPHandler.cpp +++ b/src/Server/TCPHandler.cpp @@ -288,16 +288,17 @@ void TCPHandler::runImpl() customizeContext(query_context); + /// Create task supervisor for distributed task processing + query_context->setReadTaskSupervisor(std::make_shared()); + /// This callback is needed for requsting read tasks inside pipeline for distributed processing - query_context->setNextTaskCallback([this](String request) -> String + query_context->setReadTaskCallback([this](String request) mutable -> String { - std::lock_guard lock(buffer_mutex); + std::lock_guard lock(task_callback_mutex); sendReadTaskRequestAssumeLocked(request); return receiveReadTaskResponseAssumeLocked(); }); - query_context->setReadTaskSupervisor(std::make_shared()); - bool may_have_embedded_data = client_tcp_protocol_version >= DBMS_MIN_REVISION_WITH_CLIENT_SUPPORT_EMBEDDED_DATA; /// Processing Query state.io = executeQuery(state.query, query_context, false, state.stage, may_have_embedded_data); @@ -664,7 +665,7 @@ void TCPHandler::processOrdinaryQueryWithProcessors() break; } - std::lock_guard lock(buffer_mutex); + std::lock_guard lock(task_callback_mutex); if (after_send_progress.elapsed() / 1000 >= query_context->getSettingsRef().interactive_delay) { diff --git a/src/Server/TCPHandler.h b/src/Server/TCPHandler.h index e30b1ded2cc..9c752a4db41 100644 --- a/src/Server/TCPHandler.h +++ b/src/Server/TCPHandler.h @@ -138,8 +138,6 @@ private: /// Streams for reading/writing from/to client connection socket. std::shared_ptr in; std::shared_ptr out; - std::mutex buffer_mutex; - /// Time after the last check to stop the request and send the progress. Stopwatch after_check_cancelled; @@ -152,6 +150,7 @@ private: String cluster; String cluster_secret; + std::mutex task_callback_mutex; /// At the moment, only one ongoing query in the connection is supported at a time. 
QueryState state; diff --git a/src/Storages/StorageS3.cpp b/src/Storages/StorageS3.cpp index 71a3bcdf3a9..63c40499c4c 100644 --- a/src/Storages/StorageS3.cpp +++ b/src/Storages/StorageS3.cpp @@ -187,7 +187,6 @@ bool StorageS3Source::initialize() return false; } file_path = bucket + "/" + current_key; - std::cout << "StorageS3Source " << file_path << std::endl; } else { diff --git a/src/Storages/StorageS3Distributed.cpp b/src/Storages/StorageS3Distributed.cpp index dfae9038dd7..f41e87ef892 100644 --- a/src/Storages/StorageS3Distributed.cpp +++ b/src/Storages/StorageS3Distributed.cpp @@ -110,7 +110,7 @@ Pipe StorageS3Distributed::read( } auto file_iterator = std::make_shared( - context.getNextTaskCallback(), + context.getReadTaskCallback(), context.getClientInfo().task_identifier); return Pipe(std::make_shared( @@ -140,7 +140,7 @@ Pipe StorageS3Distributed::read( std::cout << "Generated UUID : " << task_identifier << std::endl; /// Register resolver, which will give other nodes a task std::make_unique - context.getReadTaskSupervisor()->registerNextTaskResolver( + context.getTaskSupervisor()->registerNextTaskResolver( std::make_unique(task_identifier, std::move(callback))); /// Calculate the header. This is significant, because some columns could be thrown away in some cases like query with count(*) diff --git a/src/Storages/StorageS3Distributed.h b/src/Storages/StorageS3Distributed.h index 4a811f4f84f..c4927d68681 100644 --- a/src/Storages/StorageS3Distributed.h +++ b/src/Storages/StorageS3Distributed.h @@ -79,10 +79,10 @@ private: struct DistributedFileIterator : public StorageS3Source::FileIterator { - DistributedFileIterator(NextTaskCallback callback_, String identifier_) + DistributedFileIterator(ReadTaskCallback callback_, String identifier_) : callback(callback_), identifier(identifier_) {} - NextTaskCallback callback; + ReadTaskCallback callback; String identifier; std::optional next() override diff --git a/src/Storages/TaskSupervisor.h b/src/Storages/TaskSupervisor.h index 20e2489d120..132aa257f5e 100644 --- a/src/Storages/TaskSupervisor.h +++ b/src/Storages/TaskSupervisor.h @@ -48,17 +48,15 @@ public: target = std::move(resolver); } - + /// Do not erase anything from the map, because TaskSupervisor is stored + /// into context and will be deleted after query ends. 
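+    /// An unknown id or an exhausted iterator produces an empty string, which the worker
+    /// interprets as "no more files to process".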
Task getNextTaskForId(const QueryId & id) { std::lock_guard lock(mutex); auto it = dict.find(id); if (it == dict.end()) return ""; - auto answer = it->second->callback(); - if (answer.empty()) - dict.erase(it); - return answer; + return it->second->callback(); } private: diff --git a/src/TableFunctions/TableFunctionS3Distributed.cpp b/src/TableFunctions/TableFunctionS3Distributed.cpp index b11a8d43b27..6e70d2f3766 100644 --- a/src/TableFunctions/TableFunctionS3Distributed.cpp +++ b/src/TableFunctions/TableFunctionS3Distributed.cpp @@ -101,21 +101,14 @@ ColumnsDescription TableFunctionS3Distributed::getActualTableStructure(const Con } StoragePtr TableFunctionS3Distributed::executeImpl( - const ASTPtr & /*filename*/, const Context & context, + const ASTPtr & /*function*/, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { StoragePtr storage = StorageS3Distributed::create( - filename, - access_key_id, - secret_access_key, - StorageID(getDatabaseName(), table_name), - cluster_name, - format, - context.getSettingsRef().s3_max_connections, - getActualTableStructure(context), - ConstraintsDescription{}, - context, - compression_method); + filename, access_key_id, secret_access_key, StorageID(getDatabaseName(), table_name), + cluster_name, format, context.getSettingsRef().s3_max_connections, + getActualTableStructure(context), ConstraintsDescription{}, + context, compression_method); storage->startup(); diff --git a/tests/integration/test_s3_distributed/test.py b/tests/integration/test_s3_distributed/test.py index c8ede0fddf8..a7826dc5582 100644 --- a/tests/integration/test_s3_distributed/test.py +++ b/tests/integration/test_s3_distributed/test.py @@ -75,9 +75,40 @@ def test_count(started_cluster): assert TSV(pure_s3) == TSV(s3_distibuted) +def test_union_all(started_cluster): + node = started_cluster.instances['s0_0_0'] + pure_s3 = node.query(""" + SELECT * from s3( + 'http://minio1:9001/root/data/{clickhouse,database}/*', + 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') + ORDER BY (name, value, polygon) + UNION ALL + SELECT * from s3( + 'http://minio1:9001/root/data/{clickhouse,database}/*', + 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') + ORDER BY (name, value, polygon) + """) + # print(pure_s3) + s3_distibuted = node.query(""" + SELECT * from s3Distributed( + 'cluster_simple', + 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') ORDER BY (name, value, polygon)""") + # print(s3_distibuted) + + assert TSV(pure_s3) == TSV(s3_distibuted) + + def test_wrong_cluster(started_cluster): node = started_cluster.instances['s0_0_0'] error = node.query_and_get_error(""" + SELECT count(*) from s3Distributed( + 'non_existent_cluster', + 'http://minio1:9001/root/data/{clickhouse,database}/*', + 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') + UNION ALL SELECT count(*) from s3Distributed( 'non_existent_cluster', 'http://minio1:9001/root/data/{clickhouse,database}/*', diff --git a/tests/queries/0_stateless/01801_s3_distributed.sh b/tests/queries/0_stateless/01801_s3_distributed.sh index e426587ea0f..0ae4861b534 100755 --- a/tests/queries/0_stateless/01801_s3_distributed.sh +++ b/tests/queries/0_stateless/01801_s3_distributed.sh @@ -8,5 +8,8 @@ CUR_DIR=$(cd "$(dirname 
"${BASH_SOURCE[0]}")" && pwd) . "$CUR_DIR"/../shell_config.sh +echo $S3_ACCESS_KEY_ID +echo $S3_SECRET_ACCESS + ${CLICKHOUSE_CLIENT_BINARY} --send_logs_level="none" -q "SELECT * FROM s3('https://s3.mds.yandex.net/clickhouse-test-reports/*/*/functional_stateless_tests_(ubsan)/test_results.tsv', '$S3_ACCESS_KEY_ID', '$S3_SECRET_ACCESS', 'LineAsString', 'line String') limit 100 FORMAT Null;" -${CLICKHOUSE_CLIENT_BINARY} --send_logs_level="none" -q "SELECT * FROM s3Distributed('test_cluster_two_shards', 'https://s3.mds.yandex.net/clickhouse-test-reports/*/*/functional_stateless_tests_(ubsan)/test_results.tsv', '$S3_ACCESS_KEY_ID', '$S3_SECRET_ACCESS', 'LineAsString', 'line String') limit 100 FORMAT Null;" \ No newline at end of file +${CLICKHOUSE_CLIENT_BINARY} --send_logs_level="none" -q "SELECT * FROM s3Distributed('test_cluster_two_shards', 'https://s3.mds.yandex.net/clickhouse-test-reports/*/*/functional_stateless_tests_(ubsan)/test_results.tsv', '$S3_ACCESS_KEY_ID', '$S3_SECRET_ACCESS', 'LineAsString', 'line String') limit 100 FORMAT Null;" From 9144bc717638cc631830e12dcac5bf990ba7d9ac Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Thu, 8 Apr 2021 23:25:25 +0300 Subject: [PATCH 065/108] Update 01801_s3_distributed.sh --- tests/queries/0_stateless/01801_s3_distributed.sh | 2 ++ 1 file changed, 2 insertions(+) diff --git a/tests/queries/0_stateless/01801_s3_distributed.sh b/tests/queries/0_stateless/01801_s3_distributed.sh index 0ae4861b534..1c82eaac5f1 100755 --- a/tests/queries/0_stateless/01801_s3_distributed.sh +++ b/tests/queries/0_stateless/01801_s3_distributed.sh @@ -11,5 +11,7 @@ CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) echo $S3_ACCESS_KEY_ID echo $S3_SECRET_ACCESS +printenv + ${CLICKHOUSE_CLIENT_BINARY} --send_logs_level="none" -q "SELECT * FROM s3('https://s3.mds.yandex.net/clickhouse-test-reports/*/*/functional_stateless_tests_(ubsan)/test_results.tsv', '$S3_ACCESS_KEY_ID', '$S3_SECRET_ACCESS', 'LineAsString', 'line String') limit 100 FORMAT Null;" ${CLICKHOUSE_CLIENT_BINARY} --send_logs_level="none" -q "SELECT * FROM s3Distributed('test_cluster_two_shards', 'https://s3.mds.yandex.net/clickhouse-test-reports/*/*/functional_stateless_tests_(ubsan)/test_results.tsv', '$S3_ACCESS_KEY_ID', '$S3_SECRET_ACCESS', 'LineAsString', 'line String') limit 100 FORMAT Null;" From efef179b8987e10f8aae7dc4957a4988789fefac Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Fri, 9 Apr 2021 03:08:02 +0300 Subject: [PATCH 066/108] remove prints --- docker/test/fasttest/run.sh | 4 ++-- src/DataStreams/RemoteQueryExecutor.h | 2 +- src/Storages/StorageS3Distributed.cpp | 2 -- src/Storages/StorageS3Distributed.h | 6 +----- tests/queries/0_stateless/01801_s3_distributed.sh | 5 ----- tests/queries/0_stateless/arcadia_skip_list.txt | 1 + 6 files changed, 5 insertions(+), 15 deletions(-) diff --git a/docker/test/fasttest/run.sh b/docker/test/fasttest/run.sh index 51185fd0d17..9500a7854ba 100755 --- a/docker/test/fasttest/run.sh +++ b/docker/test/fasttest/run.sh @@ -337,7 +337,7 @@ function run_tests secure sha256 xz - + # Not sure why these two fail even in sequential mode. Disabled for now # to make some progress. 
00646_url_engine @@ -368,7 +368,7 @@ function run_tests 01666_blnsi # Depends on AWS - 01801_s3_distributed.sh + 01801_s3_distributed ) (time clickhouse-test --hung-check -j 8 --order=random --use-skip-list --no-long --testname --shard --zookeeper --skip "${TESTS_TO_SKIP[@]}" -- "$FASTTEST_FOCUS" 2>&1 ||:) | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/test_log.txt" diff --git a/src/DataStreams/RemoteQueryExecutor.h b/src/DataStreams/RemoteQueryExecutor.h index 5a0c2713623..e0748474c37 100644 --- a/src/DataStreams/RemoteQueryExecutor.h +++ b/src/DataStreams/RemoteQueryExecutor.h @@ -88,7 +88,7 @@ public: /// Set the query_id. For now, used by performance test to later find the query /// in the server query_log. Must be called before sending the query to the server. - void setQueryId(const std::string& query_id_) { assert(!sent_query); std::cout << query_id_ << std::endl; query_id = query_id_; } + void setQueryId(const std::string& query_id_) { assert(!sent_query); query_id = query_id_; } /// Specify how we allocate connections on a shard. void setPoolMode(PoolMode pool_mode_) { pool_mode = pool_mode_; } diff --git a/src/Storages/StorageS3Distributed.cpp b/src/Storages/StorageS3Distributed.cpp index f41e87ef892..fab2b204027 100644 --- a/src/Storages/StorageS3Distributed.cpp +++ b/src/Storages/StorageS3Distributed.cpp @@ -137,8 +137,6 @@ Pipe StorageS3Distributed::read( auto task_identifier = toString(UUIDHelpers::generateV4()); - std::cout << "Generated UUID : " << task_identifier << std::endl; - /// Register resolver, which will give other nodes a task std::make_unique context.getTaskSupervisor()->registerNextTaskResolver( std::make_unique(task_identifier, std::move(callback))); diff --git a/src/Storages/StorageS3Distributed.h b/src/Storages/StorageS3Distributed.h index c4927d68681..283b7f2ac65 100644 --- a/src/Storages/StorageS3Distributed.h +++ b/src/Storages/StorageS3Distributed.h @@ -87,11 +87,7 @@ private: std::optional next() override { - std::cout << "DistributedFileIterator" << std::endl; - std::cout << identifier << std::endl; - auto answer = callback(identifier); - std::cout << answer << std::endl; - return answer; + return callback(identifier); } }; }; diff --git a/tests/queries/0_stateless/01801_s3_distributed.sh b/tests/queries/0_stateless/01801_s3_distributed.sh index 1c82eaac5f1..05710a21214 100755 --- a/tests/queries/0_stateless/01801_s3_distributed.sh +++ b/tests/queries/0_stateless/01801_s3_distributed.sh @@ -8,10 +8,5 @@ CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) . 
"$CUR_DIR"/../shell_config.sh -echo $S3_ACCESS_KEY_ID -echo $S3_SECRET_ACCESS - -printenv - ${CLICKHOUSE_CLIENT_BINARY} --send_logs_level="none" -q "SELECT * FROM s3('https://s3.mds.yandex.net/clickhouse-test-reports/*/*/functional_stateless_tests_(ubsan)/test_results.tsv', '$S3_ACCESS_KEY_ID', '$S3_SECRET_ACCESS', 'LineAsString', 'line String') limit 100 FORMAT Null;" ${CLICKHOUSE_CLIENT_BINARY} --send_logs_level="none" -q "SELECT * FROM s3Distributed('test_cluster_two_shards', 'https://s3.mds.yandex.net/clickhouse-test-reports/*/*/functional_stateless_tests_(ubsan)/test_results.tsv', '$S3_ACCESS_KEY_ID', '$S3_SECRET_ACCESS', 'LineAsString', 'line String') limit 100 FORMAT Null;" diff --git a/tests/queries/0_stateless/arcadia_skip_list.txt b/tests/queries/0_stateless/arcadia_skip_list.txt index c8a0971bb28..0af6f1b72c1 100644 --- a/tests/queries/0_stateless/arcadia_skip_list.txt +++ b/tests/queries/0_stateless/arcadia_skip_list.txt @@ -229,3 +229,4 @@ 01791_dist_INSERT_block_structure_mismatch 01801_distinct_group_by_shard 01804_dictionary_decimal256_type +01801_s3_distributed From c333c3dedb498597d9eb99a5a37240858fc585b2 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Sat, 10 Apr 2021 05:21:18 +0300 Subject: [PATCH 067/108] review fixes --- src/Client/Connection.cpp | 14 +--- src/Client/Connection.h | 6 +- src/Client/HedgedConnections.h | 2 +- src/Client/IConnections.h | 2 +- src/Client/MultiplexedConnections.cpp | 6 +- src/Client/MultiplexedConnections.h | 2 +- src/Core/Defines.h | 1 + src/DataStreams/RemoteQueryExecutor.cpp | 23 +++---- src/DataStreams/RemoteQueryExecutor.h | 13 ++-- src/Interpreters/ClientInfo.cpp | 6 -- src/Interpreters/ClientInfo.h | 2 - src/Interpreters/Context.cpp | 12 ---- src/Interpreters/Context.h | 11 +-- src/Server/TCPHandler.cpp | 38 ++++++---- src/Server/TCPHandler.h | 4 +- src/Storages/StorageS3.cpp | 38 ++++++---- src/Storages/StorageS3.h | 28 +------- src/Storages/StorageS3Distributed.cpp | 25 +++---- src/Storages/StorageS3Distributed.h | 15 ---- src/Storages/TaskSupervisor.h | 69 ------------------- .../TableFunctionS3Distributed.cpp | 1 - 21 files changed, 95 insertions(+), 223 deletions(-) delete mode 100644 src/Storages/TaskSupervisor.h diff --git a/src/Client/Connection.cpp b/src/Client/Connection.cpp index 14c5d8d710d..9988c3d6b5c 100644 --- a/src/Client/Connection.cpp +++ b/src/Client/Connection.cpp @@ -552,10 +552,11 @@ void Connection::sendIgnoredPartUUIDs(const std::vector & uuids) } -void Connection::sendReadTaskResponse(const std::string & response) +void Connection::sendReadTaskResponse(const std::optional & response) { writeVarUInt(Protocol::Client::ReadTaskResponse, *out); - writeStringBinary(response, *out); + writeVarUInt(DBMS_CLUSTER_PROCESSING_PROTOCOL_VERSION, *out); + writeStringBinary(response.has_value() ? 
String(*response) : "", *out); out->next(); } @@ -816,7 +817,6 @@ Packet Connection::receivePacket() return res; case Protocol::Server::ReadTaskRequest: - res.read_task_request = receiveReadTaskRequest(); return res; default: @@ -841,14 +841,6 @@ Packet Connection::receivePacket() } -String Connection::receiveReadTaskRequest() const -{ - String read_task; - readStringBinary(read_task, *in); - return read_task; -} - - Block Connection::receiveData() { initBlockInput(); diff --git a/src/Client/Connection.h b/src/Client/Connection.h index 502cf8ad9e8..3c45832db8a 100644 --- a/src/Client/Connection.h +++ b/src/Client/Connection.h @@ -65,9 +65,6 @@ struct Packet Progress progress; BlockStreamProfileInfo profile_info; std::vector part_uuids; - /// String describes an identifier for a request. - /// Used for dynamic distributed data processing (S3 downloading) - String read_task_request; Packet() : type(Protocol::Server::Hello) {} }; @@ -162,7 +159,7 @@ public: /// Send parts' uuids to excluded them from query processing void sendIgnoredPartUUIDs(const std::vector & uuids); - void sendReadTaskResponse(const std::string &); + void sendReadTaskResponse(const std::optional &); /// Send prepared block of data (serialized and, if need, compressed), that will be read from 'input'. /// You could pass size of serialized/compressed block. @@ -305,7 +302,6 @@ private: #endif bool ping(); - String receiveReadTaskRequest() const; Block receiveData(); Block receiveLogData(); Block receiveDataImpl(BlockInputStreamPtr & stream); diff --git a/src/Client/HedgedConnections.h b/src/Client/HedgedConnections.h index 20640460009..eef2ffdfcf2 100644 --- a/src/Client/HedgedConnections.h +++ b/src/Client/HedgedConnections.h @@ -84,7 +84,7 @@ public: const ClientInfo & client_info, bool with_pending_data) override; - void sendReadTaskResponse(const String &) override + void sendReadTaskResponse(const std::optional &) override { throw Exception("sendReadTaskResponse in not supported with HedgedConnections", ErrorCodes::LOGICAL_ERROR); } diff --git a/src/Client/IConnections.h b/src/Client/IConnections.h index d251a5fb3ab..65e3542f4f7 100644 --- a/src/Client/IConnections.h +++ b/src/Client/IConnections.h @@ -24,7 +24,7 @@ public: const ClientInfo & client_info, bool with_pending_data) = 0; - virtual void sendReadTaskResponse(const String &) = 0; + virtual void sendReadTaskResponse(const std::optional &) = 0; /// Get packet from any replica. 
virtual Packet receivePacket() = 0; diff --git a/src/Client/MultiplexedConnections.cpp b/src/Client/MultiplexedConnections.cpp index 28740873d44..2d60183e098 100644 --- a/src/Client/MultiplexedConnections.cpp +++ b/src/Client/MultiplexedConnections.cpp @@ -156,9 +156,11 @@ void MultiplexedConnections::sendIgnoredPartUUIDs(const std::vector & uuid } -void MultiplexedConnections::sendReadTaskResponse(const String & response) +void MultiplexedConnections::sendReadTaskResponse(const std::optional & response) { - /// No lock_guard because assume it is already called under lock + std::lock_guard lock(cancel_mutex); + if (cancelled) + return; current_connection->sendReadTaskResponse(response); } diff --git a/src/Client/MultiplexedConnections.h b/src/Client/MultiplexedConnections.h index f642db1c4cd..de8f5479bbf 100644 --- a/src/Client/MultiplexedConnections.h +++ b/src/Client/MultiplexedConnections.h @@ -39,7 +39,7 @@ public: const ClientInfo & client_info, bool with_pending_data) override; - void sendReadTaskResponse(const String &) override; + void sendReadTaskResponse(const std::optional &) override; Packet receivePacket() override; diff --git a/src/Core/Defines.h b/src/Core/Defines.h index aba3a98b8ef..6cfe2273c7d 100644 --- a/src/Core/Defines.h +++ b/src/Core/Defines.h @@ -76,6 +76,7 @@ /// Minimum revision supporting task processing on cluster #define DBMS_MIN_REVISION_WITH_CLUSTER_PROCESSING 54443 +#define DBMS_CLUSTER_PROCESSING_PROTOCOL_VERSION 1 /// Minimum revision supporting interserver secret. #define DBMS_MIN_REVISION_WITH_INTERSERVER_SECRET 54441 diff --git a/src/DataStreams/RemoteQueryExecutor.cpp b/src/DataStreams/RemoteQueryExecutor.cpp index 47ca57eaf0b..d8dc804817c 100644 --- a/src/DataStreams/RemoteQueryExecutor.cpp +++ b/src/DataStreams/RemoteQueryExecutor.cpp @@ -16,7 +16,6 @@ #include #include #include -#include namespace DB { @@ -31,9 +30,9 @@ RemoteQueryExecutor::RemoteQueryExecutor( Connection & connection, const String & query_, const Block & header_, ContextPtr context_, ThrottlerPtr throttler, const Scalars & scalars_, const Tables & external_tables_, - QueryProcessingStage::Enum stage_, std::optional task_identifier_) + QueryProcessingStage::Enum stage_, std::shared_ptr task_iterator_) : header(header_), query(query_), context(context_) - , scalars(scalars_), external_tables(external_tables_), stage(stage_), task_identifier(task_identifier_) + , scalars(scalars_), external_tables(external_tables_), stage(stage_), task_iterator(task_iterator_) { create_connections = [this, &connection, throttler]() { @@ -45,9 +44,9 @@ RemoteQueryExecutor::RemoteQueryExecutor( std::vector && connections_, const String & query_, const Block & header_, ContextPtr context_, const ThrottlerPtr & throttler, const Scalars & scalars_, const Tables & external_tables_, - QueryProcessingStage::Enum stage_, std::optional task_identifier_) + QueryProcessingStage::Enum stage_, std::shared_ptr task_iterator_) : header(header_), query(query_), context(context_) - , scalars(scalars_), external_tables(external_tables_), stage(stage_), task_identifier(task_identifier_) + , scalars(scalars_), external_tables(external_tables_), stage(stage_), task_iterator(task_iterator_) { create_connections = [this, connections_, throttler]() mutable { return std::make_unique(std::move(connections_), context->getSettingsRef(), throttler); @@ -58,9 +57,9 @@ RemoteQueryExecutor::RemoteQueryExecutor( const ConnectionPoolWithFailoverPtr & pool, const String & query_, const Block & header_, ContextPtr context_, const 
ThrottlerPtr & throttler, const Scalars & scalars_, const Tables & external_tables_, - QueryProcessingStage::Enum stage_, std::optional task_identifier_) + QueryProcessingStage::Enum stage_, std::shared_ptr task_iterator_) : header(header_), query(query_), context(context_) - , scalars(scalars_), external_tables(external_tables_), stage(stage_), task_identifier(task_identifier_) + , scalars(scalars_), external_tables(external_tables_), stage(stage_), task_iterator(task_iterator_) { create_connections = [this, pool, throttler]()->std::unique_ptr { @@ -180,7 +179,6 @@ void RemoteQueryExecutor::sendQuery() auto timeouts = ConnectionTimeouts::getTCPTimeoutsWithFailover(settings); ClientInfo modified_client_info = context->getClientInfo(); modified_client_info.query_kind = ClientInfo::QueryKind::SECONDARY_QUERY; - modified_client_info.task_identifier = task_identifier ? *task_identifier : ""; if (CurrentThread::isInitialized()) { modified_client_info.client_trace_context = CurrentThread::get().thread_trace_context; @@ -301,7 +299,7 @@ std::optional RemoteQueryExecutor::processPacket(Packet packet) switch (packet.type) { case Protocol::Server::ReadTaskRequest: - processReadTaskRequest(packet.read_task_request); + processReadTaskRequest(); break; case Protocol::Server::PartUUIDs: if (!setPartUUIDs(packet.part_uuids)) @@ -381,10 +379,11 @@ bool RemoteQueryExecutor::setPartUUIDs(const std::vector & uuids) return true; } -void RemoteQueryExecutor::processReadTaskRequest(const String & request) +void RemoteQueryExecutor::processReadTaskRequest() { - Context & query_context = context.getQueryContext(); - String response = query_context.getTaskSupervisor()->getNextTaskForId(request); + if (!task_iterator) + throw Exception("Distributed task iterator is not initialized", ErrorCodes::LOGICAL_ERROR); + auto response = (*task_iterator)(); connections->sendReadTaskResponse(response); } diff --git a/src/DataStreams/RemoteQueryExecutor.h b/src/DataStreams/RemoteQueryExecutor.h index e0748474c37..584961f1baa 100644 --- a/src/DataStreams/RemoteQueryExecutor.h +++ b/src/DataStreams/RemoteQueryExecutor.h @@ -26,6 +26,9 @@ using ProfileInfoCallback = std::function()>; + /// This class allows one to launch queries on remote replicas of one shard and get results class RemoteQueryExecutor { @@ -37,21 +40,21 @@ public: Connection & connection, const String & query_, const Block & header_, ContextPtr context_, ThrottlerPtr throttler_ = nullptr, const Scalars & scalars_ = Scalars(), const Tables & external_tables_ = Tables(), - QueryProcessingStage::Enum stage_ = QueryProcessingStage::Complete, std::optional task_identifier_ = {}); + QueryProcessingStage::Enum stage_ = QueryProcessingStage::Complete, std::shared_ptr task_iterator_ = {}); /// Accepts several connections already taken from pool. RemoteQueryExecutor( std::vector && connections_, const String & query_, const Block & header_, ContextPtr context_, const ThrottlerPtr & throttler = nullptr, const Scalars & scalars_ = Scalars(), const Tables & external_tables_ = Tables(), - QueryProcessingStage::Enum stage_ = QueryProcessingStage::Complete, std::optional task_identifier_ = {}); + QueryProcessingStage::Enum stage_ = QueryProcessingStage::Complete, std::shared_ptr task_iterator_ = {}); /// Takes a pool and gets one or several connections from it. 
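Each `RemoteQueryExecutor` constructor above takes the iterator as a `std::shared_ptr` defaulted to `{}`, and `processReadTaskRequest` refuses to run when it was never supplied. A reduced sketch of that guard using only standard-library types; the exception type and message are stand-ins, not the actual ClickHouse error handling:

``` cpp
#include <functional>
#include <iostream>
#include <memory>
#include <optional>
#include <stdexcept>
#include <string>

using TaskIterator = std::function<std::optional<std::string>()>;

// Only the initiator constructs the executor with a real iterator; a query that
// receives ReadTaskRequest without one must fail loudly instead of replying.
std::optional<std::string> nextTask(const std::shared_ptr<TaskIterator> & task_iterator)
{
    if (!task_iterator || !*task_iterator)
        throw std::logic_error("Distributed task iterator is not initialized");
    return (*task_iterator)();
}

int main()
{
    auto iterator = std::make_shared<TaskIterator>(
        []() -> std::optional<std::string> { return "part-0001.csv"; });

    std::cout << nextTask(iterator).value_or("<none>") << '\n';

    try
    {
        nextTask(nullptr);
    }
    catch (const std::logic_error & e)
    {
        std::cout << "rejected: " << e.what() << '\n';
    }
}
```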
RemoteQueryExecutor( const ConnectionPoolWithFailoverPtr & pool, const String & query_, const Block & header_, ContextPtr context_, const ThrottlerPtr & throttler = nullptr, const Scalars & scalars_ = Scalars(), const Tables & external_tables_ = Tables(), - QueryProcessingStage::Enum stage_ = QueryProcessingStage::Complete, std::optional task_identifier_ = {}); + QueryProcessingStage::Enum stage_ = QueryProcessingStage::Complete, std::shared_ptr task_iterator_ = {}); ~RemoteQueryExecutor(); @@ -120,7 +123,7 @@ private: Tables external_tables; QueryProcessingStage::Enum stage; /// Initiator identifier for distributed task processing - std::optional task_identifier; + std::shared_ptr task_iterator; /// Streams for reading from temporary tables and following sending of data /// to remote servers for GLOBAL-subqueries @@ -181,7 +184,7 @@ private: /// Return true if duplicates found. bool setPartUUIDs(const std::vector & uuids); - void processReadTaskRequest(const String &); + void processReadTaskRequest(); /// Cancell query and restart it with info about duplicated UUIDs /// only for `allow_experimental_query_deduplication`. diff --git a/src/Interpreters/ClientInfo.cpp b/src/Interpreters/ClientInfo.cpp index 50164ea4074..223837aaf3d 100644 --- a/src/Interpreters/ClientInfo.cpp +++ b/src/Interpreters/ClientInfo.cpp @@ -88,9 +88,6 @@ void ClientInfo::write(WriteBuffer & out, const UInt64 server_protocol_revision) writeBinary(uint8_t(0), out); } } - - if (server_protocol_revision >= DBMS_MIN_REVISION_WITH_CLUSTER_PROCESSING) - writeBinary(task_identifier, out); } @@ -166,9 +163,6 @@ void ClientInfo::read(ReadBuffer & in, const UInt64 client_protocol_revision) readBinary(client_trace_context.trace_flags, in); } } - - if (client_protocol_revision >= DBMS_MIN_REVISION_WITH_CLUSTER_PROCESSING) - readBinary(task_identifier, in); } diff --git a/src/Interpreters/ClientInfo.h b/src/Interpreters/ClientInfo.h index 127d50706fc..60abb0dd671 100644 --- a/src/Interpreters/ClientInfo.h +++ b/src/Interpreters/ClientInfo.h @@ -98,8 +98,6 @@ public: String quota_key; UInt64 distributed_depth = 0; - /// For distributed file processing (e.g. s3Distributed) - String task_identifier; bool empty() const { return query_kind == QueryKind::NO_QUERY; } diff --git a/src/Interpreters/Context.cpp b/src/Interpreters/Context.cpp index 4ed0bc78c5b..18cf6da2259 100644 --- a/src/Interpreters/Context.cpp +++ b/src/Interpreters/Context.cpp @@ -2616,18 +2616,6 @@ PartUUIDsPtr Context::getPartUUIDs() } -TaskSupervisorPtr Context::getTaskSupervisor() const -{ - return read_task_supervisor; -} - - -void Context::setReadTaskSupervisor(TaskSupervisorPtr resolver) -{ - read_task_supervisor = resolver; -} - - ReadTaskCallback Context::getReadTaskCallback() const { if (!next_task_callback.has_value()) diff --git a/src/Interpreters/Context.h b/src/Interpreters/Context.h index ee3d7519c8e..e5c49458ee3 100644 --- a/src/Interpreters/Context.h +++ b/src/Interpreters/Context.h @@ -128,10 +128,8 @@ using InputInitializer = std::function; /// Callback for reading blocks of data from client for function input() using InputBlocksReader = std::function; -/// Class which gives tasks to other nodes in cluster -class TaskSupervisor; -using TaskSupervisorPtr = std::shared_ptr; -using ReadTaskCallback = std::function; +/// Used in distributed task processing +using ReadTaskCallback = std::function()>; /// An empty interface for an arbitrary object that may be attached by a shared pointer /// to query context, when using ClickHouse as a library. 
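`ReadTaskCallback` above is a plain `std::function` returning an `std::optional` string, so a worker node simply pulls keys in a loop until it sees `std::nullopt`. A self-contained sketch of that pull model; the file names and the local vector are made up, since the real callback performs a network round trip to the initiator:

``` cpp
#include <functional>
#include <iostream>
#include <optional>
#include <string>
#include <vector>

using ReadTaskCallback = std::function<std::optional<std::string>()>;

int main()
{
    std::vector<std::string> keys{"data/a.csv", "data/b.csv", "data/c.csv"};
    std::size_t next = 0;

    // In the real code the callback asks the initiator over TCP;
    // here it simply walks a local vector.
    ReadTaskCallback callback = [&]() -> std::optional<std::string>
    {
        if (next == keys.size())
            return std::nullopt;
        return keys[next++];
    };

    while (auto key = callback())
        std::cout << "processing " << *key << '\n';
}
```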
@@ -195,7 +193,6 @@ private: Scalars scalars; /// Fields for distributed s3 function - TaskSupervisorPtr read_task_supervisor; std::optional next_task_callback; /// Record entities accessed by current query, and store this information in system.query_log. @@ -779,10 +776,6 @@ public: PartUUIDsPtr getPartUUIDs(); PartUUIDsPtr getIgnoredPartUUIDs(); - /// A bunch of functions for distributed s3 function - TaskSupervisorPtr getTaskSupervisor() const; - void setReadTaskSupervisor(TaskSupervisorPtr); - ReadTaskCallback getReadTaskCallback() const; void setReadTaskCallback(ReadTaskCallback && callback); diff --git a/src/Server/TCPHandler.cpp b/src/Server/TCPHandler.cpp index e49ae27ccdd..88c8aae4c60 100644 --- a/src/Server/TCPHandler.cpp +++ b/src/Server/TCPHandler.cpp @@ -26,7 +26,6 @@ #include #include #include -#include #include #include #include @@ -288,14 +287,11 @@ void TCPHandler::runImpl() customizeContext(query_context); - /// Create task supervisor for distributed task processing - query_context->setReadTaskSupervisor(std::make_shared()); - /// This callback is needed for requsting read tasks inside pipeline for distributed processing - query_context->setReadTaskCallback([this](String request) mutable -> String + query_context->setReadTaskCallback([this]() -> std::optional { std::lock_guard lock(task_callback_mutex); - sendReadTaskRequestAssumeLocked(request); + sendReadTaskRequestAssumeLocked(); return receiveReadTaskResponseAssumeLocked(); }); @@ -658,6 +654,8 @@ void TCPHandler::processOrdinaryQueryWithProcessors() Block block; while (executor.pull(block, query_context->getSettingsRef().interactive_delay / 1000)) { + std::lock_guard lock(task_callback_mutex); + if (isQueryCancelled()) { /// A packet was received requesting to stop execution of the request. 
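The callback registered in `runImpl` above holds `task_callback_mutex` across the request/response pair, and the processors loop now takes the same lock, so one pipeline thread can never interleave its exchange with another thread's. A toy sketch of why both steps must sit under a single lock; the in-memory `Channel` below stands in for the client connection and is not ClickHouse code:

``` cpp
#include <mutex>
#include <optional>
#include <string>
#include <thread>
#include <vector>

struct Channel
{
    // Stand-in for the connection: request and response must stay paired,
    // otherwise replies would be attributed to the wrong thread.
    std::size_t next = 0;
    std::vector<std::string> tasks{"a.csv", "b.csv", "c.csv", "d.csv"};

    void sendReadTaskRequest() {}
    std::optional<std::string> receiveReadTaskResponse()
    {
        if (next == tasks.size())
            return std::nullopt;
        return tasks[next++];
    }
};

int main()
{
    Channel channel;
    std::mutex task_callback_mutex;

    auto callback = [&]() -> std::optional<std::string>
    {
        std::lock_guard<std::mutex> lock(task_callback_mutex);   // covers both steps
        channel.sendReadTaskRequest();
        return channel.receiveReadTaskResponse();
    };

    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i)
        workers.emplace_back([&] { while (callback()) {} });
    for (auto & worker : workers)
        worker.join();
}
```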
@@ -665,7 +663,6 @@ void TCPHandler::processOrdinaryQueryWithProcessors() break; } - std::lock_guard lock(task_callback_mutex); if (after_send_progress.elapsed() / 1000 >= query_context->getSettingsRef().interactive_delay) { @@ -772,10 +769,9 @@ void TCPHandler::sendPartUUIDs() } -void TCPHandler::sendReadTaskRequestAssumeLocked(const String & request) +void TCPHandler::sendReadTaskRequestAssumeLocked() { writeVarUInt(Protocol::Server::ReadTaskRequest, *out); - writeStringBinary(request, *out); out->next(); } @@ -1042,17 +1038,31 @@ void TCPHandler::receiveIgnoredPartUUIDs() } -String TCPHandler::receiveReadTaskResponseAssumeLocked() +std::optional TCPHandler::receiveReadTaskResponseAssumeLocked() { UInt64 packet_type = 0; readVarUInt(packet_type, *in); - if (packet_type != Protocol::Client::ReadTaskResponse) - throw Exception(fmt::format("Received {} packet after requesting read task", - Protocol::Client::toString(packet_type)), ErrorCodes::LOGICAL_ERROR); - + { + if (packet_type == Protocol::Client::Cancel) + { + state.is_cancelled = true; + return {}; + } + else + { + throw Exception(fmt::format("Received {} packet after requesting read task", + Protocol::Client::toString(packet_type)), ErrorCodes::UNEXPECTED_PACKET_FROM_CLIENT); + } + } + UInt64 version; + readVarUInt(version, *in); + if (version != DBMS_CLUSTER_PROCESSING_PROTOCOL_VERSION) + throw Exception("Protocol version for distributed processing mismatched", ErrorCodes::LOGICAL_ERROR); String response; readStringBinary(response, *in); + if (response.empty()) + return std::nullopt; return response; } diff --git a/src/Server/TCPHandler.h b/src/Server/TCPHandler.h index 9c752a4db41..d1c6432799d 100644 --- a/src/Server/TCPHandler.h +++ b/src/Server/TCPHandler.h @@ -170,7 +170,7 @@ private: bool receivePacket(); void receiveQuery(); void receiveIgnoredPartUUIDs(); - String receiveReadTaskResponseAssumeLocked(); + std::optional receiveReadTaskResponseAssumeLocked(); bool receiveData(bool scalar); bool readDataNext(const size_t & poll_interval, const int & receive_timeout); void readData(const Settings & connection_settings); @@ -201,7 +201,7 @@ private: void sendLogs(); void sendEndOfStream(); void sendPartUUIDs(); - void sendReadTaskRequestAssumeLocked(const String &); + void sendReadTaskRequestAssumeLocked(); void sendProfileInfo(const BlockStreamProfileInfo & info); void sendTotals(const Block & totals); void sendExtremes(const Block & extremes); diff --git a/src/Storages/StorageS3.cpp b/src/Storages/StorageS3.cpp index 63c40499c4c..a5283448e38 100644 --- a/src/Storages/StorageS3.cpp +++ b/src/Storages/StorageS3.cpp @@ -45,7 +45,6 @@ namespace ErrorCodes extern const int UNEXPECTED_EXPRESSION; extern const int S3_ERROR; } - class StorageS3Source::DisclosedGlobIterator::Impl { @@ -53,6 +52,8 @@ public: Impl(Aws::S3::S3Client & client_, const S3::URI & globbed_uri_) : client(client_), globbed_uri(globbed_uri_) { + std::lock_guard lock(mutex); + if (globbed_uri.bucket.find_first_of("*?{") != globbed_uri.bucket.npos) throw Exception("Expression can not have wildcards inside bucket name", ErrorCodes::UNEXPECTED_EXPRESSION); @@ -71,6 +72,14 @@ public: } std::optional next() + { + std::lock_guard lock(mutex); + return nextAssumeLocked(); + } + +private: + + std::optional nextAssumeLocked() { if (buffer_iter != buffer.end()) { @@ -82,14 +91,12 @@ public: if (is_finished) return std::nullopt; // Or throw? 
- fillInternalBuffer(); + fillInternalBufferAssumeLocked(); - return next(); + return nextAssumeLocked(); } -private: - - void fillInternalBuffer() + void fillInternalBufferAssumeLocked() { buffer.clear(); @@ -117,6 +124,7 @@ private: is_finished = !outcome.GetResult().GetIsTruncated(); } + std::mutex mutex; Strings buffer; Strings::iterator buffer_iter; Aws::S3::S3Client client; @@ -128,7 +136,7 @@ private: }; StorageS3Source::DisclosedGlobIterator::DisclosedGlobIterator(Aws::S3::S3Client & client_, const S3::URI & globbed_uri_) - : pimpl(std::make_unique(client_, globbed_uri_)) {} + : pimpl(std::make_shared(client_, globbed_uri_)) {} std::optional StorageS3Source::DisclosedGlobIterator::next() { @@ -158,7 +166,7 @@ StorageS3Source::StorageS3Source( const String compression_hint_, const std::shared_ptr & client_, const String & bucket_, - std::shared_ptr file_iterator_) + std::shared_ptr file_iterator_) : SourceWithProgress(getHeader(sample_block_, need_path, need_file)) , name(std::move(name_)) , bucket(bucket_) @@ -180,12 +188,9 @@ StorageS3Source::StorageS3Source( bool StorageS3Source::initialize() { String current_key; - if (auto result = file_iterator->next()) + if (auto result = (*file_iterator)()) { current_key = result.value(); - if (current_key.empty()) { - return false; - } file_path = bucket + "/" + current_key; } else @@ -359,9 +364,12 @@ Pipe StorageS3::read( } /// Iterate through disclosed globs and make a source for each file - StorageS3Source::DisclosedGlobIterator glob_iterator(*client_auth.client, client_auth.uri); + auto glob_iterator = std::make_shared(*client_auth.client, client_auth.uri); + auto iterator_wrapper = std::make_shared([glob_iterator]() + { + return glob_iterator->next(); + }); - auto file_iterator = std::make_shared(glob_iterator); for (size_t i = 0; i < num_streams; ++i) { pipes.emplace_back(std::make_shared( @@ -376,7 +384,7 @@ Pipe StorageS3::read( compression_method, client_auth.client, client_auth.uri.bucket, - file_iterator)); + iterator_wrapper)); } return Pipe::unitePipes(std::move(pipes)); } diff --git a/src/Storages/StorageS3.h b/src/Storages/StorageS3.h index 1cb26470c51..7b6cc235be4 100644 --- a/src/Storages/StorageS3.h +++ b/src/Storages/StorageS3.h @@ -42,12 +42,7 @@ public: std::shared_ptr pimpl; }; - struct FileIterator - { - virtual ~FileIterator() = default; - virtual std::optional next() = 0; - }; - + using IteratorWrapper = std::function()>; static Block getHeader(Block sample_block, bool with_path_column, bool with_file_column); @@ -63,7 +58,7 @@ public: const String compression_hint_, const std::shared_ptr & client_, const String & bucket, - std::shared_ptr file_iterator_); + std::shared_ptr file_iterator_); String getName() const override; @@ -87,7 +82,7 @@ private: bool initialized = false; bool with_file_column = false; bool with_path_column = false; - std::shared_ptr file_iterator; + std::shared_ptr file_iterator; /// Recreate ReadBuffer and BlockInputStream for each file. 
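`DisclosedGlobIterator::Impl` above locks once in the public `next()` and then works through private `...AssumeLocked()` helpers, so the refill path can call `nextAssumeLocked()` again without re-taking the non-recursive mutex. A reduced standalone sketch of that idiom; the hard-coded "page" of keys replaces the real ListObjectsV2 paging:

``` cpp
#include <iostream>
#include <mutex>
#include <optional>
#include <string>
#include <vector>

class BufferedKeyIterator
{
public:
    std::optional<std::string> next()
    {
        std::lock_guard<std::mutex> lock(mutex);
        return nextAssumeLocked();
    }

private:
    std::optional<std::string> nextAssumeLocked()
    {
        if (position < buffer.size())
            return buffer[position++];
        if (is_finished)
            return std::nullopt;
        fillInternalBufferAssumeLocked();
        return nextAssumeLocked();      // safe: the caller already owns the lock
    }

    void fillInternalBufferAssumeLocked()
    {
        buffer = {"data/key1.csv", "data/key2.csv"};   // one simulated listing page
        position = 0;
        is_finished = true;                            // no further pages in this sketch
    }

    std::mutex mutex;
    std::vector<std::string> buffer;
    std::size_t position = 0;
    bool is_finished = false;
};

int main()
{
    BufferedKeyIterator iterator;
    while (auto key = iterator.next())
        std::cout << *key << '\n';
}
```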
bool initialize(); @@ -157,23 +152,6 @@ private: String compression_method; String name; - struct LocalFileIterator : public StorageS3Source::FileIterator - { - explicit LocalFileIterator(StorageS3Source::DisclosedGlobIterator glob_iterator_) - : glob_iterator(glob_iterator_) {} - - StorageS3Source::DisclosedGlobIterator glob_iterator; - /// Several files could be processed in parallel - /// from different sources - std::mutex iterator_mutex; - - std::optional next() override - { - std::lock_guard lock(iterator_mutex); - return glob_iterator.next(); - } - }; - static void updateClientAndAuthSettings(ContextPtr, ClientAuthentificaiton &); }; diff --git a/src/Storages/StorageS3Distributed.cpp b/src/Storages/StorageS3Distributed.cpp index fab2b204027..5bd4a4ffd81 100644 --- a/src/Storages/StorageS3Distributed.cpp +++ b/src/Storages/StorageS3Distributed.cpp @@ -109,9 +109,11 @@ Pipe StorageS3Distributed::read( need_file_column = true; } - auto file_iterator = std::make_shared( - context.getReadTaskCallback(), - context.getClientInfo().task_identifier); + /// Save callback not to capture context by reference of copy it. + auto file_iterator = std::make_shared( + [callback = context.getReadTaskCallback()]() -> std::optional { + return callback(); + }); return Pipe(std::make_shared( need_path_column, need_file_column, format_name, getName(), @@ -128,18 +130,11 @@ Pipe StorageS3Distributed::read( S3::URI s3_uri(Poco::URI{filename}); StorageS3::updateClientAndAuthSettings(context, client_auth); - auto callback = [iterator = StorageS3Source::DisclosedGlobIterator(*client_auth.client, client_auth.uri)]() mutable -> String + auto iterator = std::make_shared(*client_auth.client, client_auth.uri); + auto callback = std::make_shared([iterator]() mutable -> std::optional { - if (auto value = iterator.next()) - return *value; - return {}; - }; - - auto task_identifier = toString(UUIDHelpers::generateV4()); - - /// Register resolver, which will give other nodes a task std::make_unique - context.getTaskSupervisor()->registerNextTaskResolver( - std::make_unique(task_identifier, std::move(callback))); + return iterator->next(); + }); /// Calculate the header. This is significant, because some columns could be thrown away in some cases like query with count(*) Block header = @@ -166,7 +161,7 @@ Pipe StorageS3Distributed::read( /// So, task_identifier is passed as constructor argument. It is more obvious. 
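On the initiator, the hunk above wraps a single glob iterator into one `shared_ptr`-owned callback that every remote query executor (constructed just below) drains, which is what makes each S3 key go to exactly one shard. A toy sketch of several consumers sharing one iterator; the shard count, key names, and synchronization details are invented for illustration:

``` cpp
#include <cstddef>
#include <functional>
#include <iostream>
#include <memory>
#include <mutex>
#include <optional>
#include <string>
#include <thread>
#include <vector>

using TaskIterator = std::function<std::optional<std::string>()>;

int main()
{
    std::vector<std::string> keys{"k0.csv", "k1.csv", "k2.csv", "k3.csv", "k4.csv"};
    auto position = std::make_shared<std::size_t>(0);
    auto mutex = std::make_shared<std::mutex>();

    // One iterator shared by every "shard": whoever asks first gets the next key,
    // so no key is processed twice and slow shards simply receive fewer keys.
    auto task_iterator = std::make_shared<TaskIterator>(
        [keys, position, mutex]() -> std::optional<std::string>
        {
            std::lock_guard<std::mutex> lock(*mutex);
            if (*position == keys.size())
                return std::nullopt;
            return keys[(*position)++];
        });

    std::vector<std::thread> shards;
    for (int shard = 0; shard < 3; ++shard)
        shards.emplace_back([task_iterator, shard]
        {
            while (auto key = (*task_iterator)())
                std::cout << "shard " << shard << " -> " << *key << '\n';
        });
    for (auto & thread : shards)
        thread.join();
}
```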
auto remote_query_executor = std::make_shared( *connections.back(), queryToString(query_info.query), header, context, - /*throttler=*/nullptr, scalars, Tables(), processed_stage, task_identifier); + /*throttler=*/nullptr, scalars, Tables(), processed_stage, callback); pipes.emplace_back(std::make_shared(remote_query_executor, false, false)); } diff --git a/src/Storages/StorageS3Distributed.h b/src/Storages/StorageS3Distributed.h index 283b7f2ac65..3ec2b5ec813 100644 --- a/src/Storages/StorageS3Distributed.h +++ b/src/Storages/StorageS3Distributed.h @@ -75,21 +75,6 @@ private: String format_name; String compression_method; - - - struct DistributedFileIterator : public StorageS3Source::FileIterator - { - DistributedFileIterator(ReadTaskCallback callback_, String identifier_) - : callback(callback_), identifier(identifier_) {} - - ReadTaskCallback callback; - String identifier; - - std::optional next() override - { - return callback(identifier); - } - }; }; diff --git a/src/Storages/TaskSupervisor.h b/src/Storages/TaskSupervisor.h deleted file mode 100644 index 132aa257f5e..00000000000 --- a/src/Storages/TaskSupervisor.h +++ /dev/null @@ -1,69 +0,0 @@ -#pragma once - -#include -#include -#include -#include - -#include - -namespace DB -{ - - -namespace ErrorCodes -{ - extern const int LOGICAL_ERROR; -} - -using QueryId = std::string; -using Task = std::string; -using Tasks = std::vector; -using TasksIterator = Tasks::iterator; - -struct ReadTaskResolver -{ - ReadTaskResolver(String name_, std::function callback_) - : name(name_), callback(callback_) {} - String name; - std::function callback; -}; - -using ReadTaskResolverPtr = std::unique_ptr; - -class TaskSupervisor -{ -public: - using QueryId = std::string; - - TaskSupervisor() = default; - - void registerNextTaskResolver(ReadTaskResolverPtr resolver) - { - std::lock_guard lock(mutex); - auto & target = dict[resolver->name]; - if (target) - throw Exception(fmt::format("NextTaskResolver with name {} is already registered for query {}", - target->name, resolver->name), ErrorCodes::LOGICAL_ERROR); - target = std::move(resolver); - } - - /// Do not erase anything from the map, because TaskSupervisor is stored - /// into context and will be deleted after query ends. 
- Task getNextTaskForId(const QueryId & id) - { - std::lock_guard lock(mutex); - auto it = dict.find(id); - if (it == dict.end()) - return ""; - return it->second->callback(); - } - -private: - using ResolverDict = std::unordered_map; - ResolverDict dict; - std::mutex mutex; -}; - - -} diff --git a/src/TableFunctions/TableFunctionS3Distributed.cpp b/src/TableFunctions/TableFunctionS3Distributed.cpp index 6e70d2f3766..af950538e52 100644 --- a/src/TableFunctions/TableFunctionS3Distributed.cpp +++ b/src/TableFunctions/TableFunctionS3Distributed.cpp @@ -9,7 +9,6 @@ #include "Parsers/IAST_fwd.h" #include "Processors/Sources/SourceFromInputStream.h" #include "Storages/StorageS3Distributed.h" -#include #if USE_AWS_S3 From 704fb049412df2be2fc4cef4c1c8764b7caf5774 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Sat, 10 Apr 2021 05:35:07 +0300 Subject: [PATCH 068/108] better --- src/Interpreters/Cluster.cpp | 2 -- src/Server/TCPHandler.cpp | 3 --- src/Server/TCPHandler.h | 2 +- 3 files changed, 1 insertion(+), 6 deletions(-) diff --git a/src/Interpreters/Cluster.cpp b/src/Interpreters/Cluster.cpp index 7ccb28ae0f5..bac688fe81e 100644 --- a/src/Interpreters/Cluster.cpp +++ b/src/Interpreters/Cluster.cpp @@ -138,13 +138,11 @@ String Cluster::Address::toString() const return toString(host_name, port); } - String Cluster::Address::toString(const String & host_name, UInt16 port) { return escapeForFileName(host_name) + ':' + DB::toString(port); } - String Cluster::Address::readableString() const { String res; diff --git a/src/Server/TCPHandler.cpp b/src/Server/TCPHandler.cpp index 88c8aae4c60..d0d150c98cd 100644 --- a/src/Server/TCPHandler.cpp +++ b/src/Server/TCPHandler.cpp @@ -663,7 +663,6 @@ void TCPHandler::processOrdinaryQueryWithProcessors() break; } - if (after_send_progress.elapsed() / 1000 >= query_context->getSettingsRef().interactive_delay) { /// Some time passed and there is a progress. @@ -1026,7 +1025,6 @@ bool TCPHandler::receivePacket() } } - void TCPHandler::receiveIgnoredPartUUIDs() { state.part_uuids = true; @@ -1126,7 +1124,6 @@ void TCPHandler::receiveQuery() Settings passed_settings; passed_settings.read(*in, settings_format); - /// Interserver secret. 
std::string received_hash; if (client_tcp_protocol_version >= DBMS_MIN_REVISION_WITH_INTERSERVER_SECRET) diff --git a/src/Server/TCPHandler.h b/src/Server/TCPHandler.h index d1c6432799d..67dc2ade1bc 100644 --- a/src/Server/TCPHandler.h +++ b/src/Server/TCPHandler.h @@ -174,7 +174,7 @@ private: bool receiveData(bool scalar); bool readDataNext(const size_t & poll_interval, const int & receive_timeout); void readData(const Settings & connection_settings); - void receiveClusterNameAndSalt(); // ASSUMELocked + void receiveClusterNameAndSalt(); std::tuple getReadTimeouts(const Settings & connection_settings); [[noreturn]] void receiveUnexpectedData(); From 4465a0627f154fa92efe1cad5566b72ad73c7cb0 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Sat, 10 Apr 2021 17:58:29 +0300 Subject: [PATCH 069/108] better --- docker/test/fasttest/run.sh | 2 +- src/Storages/StorageS3.cpp | 8 +++--- src/Storages/StorageS3Distributed.cpp | 4 ++- src/Storages/StorageS3Distributed.h | 2 ++ tests/integration/test_s3_distributed/test.py | 8 +++++- tests/queries/0_stateless/trace_raw | 27 +++++++++++++++++++ 6 files changed, 45 insertions(+), 6 deletions(-) create mode 100644 tests/queries/0_stateless/trace_raw diff --git a/docker/test/fasttest/run.sh b/docker/test/fasttest/run.sh index 9500a7854ba..a482ab35d78 100755 --- a/docker/test/fasttest/run.sh +++ b/docker/test/fasttest/run.sh @@ -365,7 +365,7 @@ function run_tests 01622_defaults_for_url_engine # JSON functions - 01666_blnsi + 01666_blns # Depends on AWS 01801_s3_distributed diff --git a/src/Storages/StorageS3.cpp b/src/Storages/StorageS3.cpp index a5283448e38..51e8a3225e4 100644 --- a/src/Storages/StorageS3.cpp +++ b/src/Storages/StorageS3.cpp @@ -67,8 +67,7 @@ public: matcher = std::make_unique(makeRegexpPatternFromGlobs(globbed_uri.key)); - /// Don't forget about iterator invalidation - buffer_iter = buffer.begin(); + fillInternalBufferAssumeLocked(); } std::optional next() @@ -386,7 +385,10 @@ Pipe StorageS3::read( client_auth.uri.bucket, iterator_wrapper)); } - return Pipe::unitePipes(std::move(pipes)); + auto pipe = Pipe::unitePipes(std::move(pipes)); + + narrowPipe(pipe, num_streams); + return pipe; } BlockOutputStreamPtr StorageS3::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context) diff --git a/src/Storages/StorageS3Distributed.cpp b/src/Storages/StorageS3Distributed.cpp index 5bd4a4ffd81..29784bd73df 100644 --- a/src/Storages/StorageS3Distributed.cpp +++ b/src/Storages/StorageS3Distributed.cpp @@ -1,7 +1,8 @@ #include "Storages/StorageS3Distributed.h" +#if !defined(ARCADIA_BUILD) #include -#include "Processors/Sources/SourceWithProgress.h" +#endif #if USE_AWS_S3 @@ -30,6 +31,7 @@ #include #include #include +#include "Processors/Sources/SourceWithProgress.h" #include #include #include diff --git a/src/Storages/StorageS3Distributed.h b/src/Storages/StorageS3Distributed.h index 3ec2b5ec813..34d9612f5d4 100644 --- a/src/Storages/StorageS3Distributed.h +++ b/src/Storages/StorageS3Distributed.h @@ -1,6 +1,8 @@ #pragma once +#if !defined(ARCADIA_BUILD) #include +#endif #if USE_AWS_S3 diff --git a/tests/integration/test_s3_distributed/test.py b/tests/integration/test_s3_distributed/test.py index a7826dc5582..7ad7f201ce6 100644 --- a/tests/integration/test_s3_distributed/test.py +++ b/tests/integration/test_s3_distributed/test.py @@ -95,7 +95,13 @@ def test_union_all(started_cluster): SELECT * from s3Distributed( 'cluster_simple', 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 
'minio123', 'CSV', - 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') ORDER BY (name, value, polygon)""") + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') ORDER BY (name, value, polygon) + UNION ALL + SELECT * from s3Distributed( + 'cluster_simple', + 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') ORDER BY (name, value, polygon) + """) # print(s3_distibuted) assert TSV(pure_s3) == TSV(s3_distibuted) diff --git a/tests/queries/0_stateless/trace_raw b/tests/queries/0_stateless/trace_raw new file mode 100644 index 00000000000..032acbef6bf --- /dev/null +++ b/tests/queries/0_stateless/trace_raw @@ -0,0 +1,27 @@ +[Thread debugging using libthread_db enabled] +Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". +0x00007fb11d0bedd7 in __GI___select (nfds=0, readfds=0x0, writefds=0x0, exceptfds=0x0, timeout=0x7fff4e474300) at ../sysdeps/unix/sysv/linux/select.c:41 + +Thread 1 (Thread 0x7fb11d5ad740 (LWP 38888)): +#0 0x00007fb11d0bedd7 in __GI___select (nfds=0, readfds=0x0, writefds=0x0, exceptfds=0x0, timeout=0x7fff4e474300) at ../sysdeps/unix/sysv/linux/select.c:41 +#1 0x00000000005bbd3a in ?? () +#2 0x000000000050a2cc in ?? () +#3 0x000000000050bf44 in _PyEval_EvalFrameDefault () +#4 0x00000000005096c8 in ?? () +#5 0x000000000050a3fd in ?? () +#6 0x000000000050bf44 in _PyEval_EvalFrameDefault () +#7 0x0000000000507cd4 in ?? () +#8 0x0000000000509a00 in ?? () +#9 0x000000000050a3fd in ?? () +#10 0x000000000050bf44 in _PyEval_EvalFrameDefault () +#11 0x0000000000507cd4 in ?? () +#12 0x0000000000509a00 in ?? () +#13 0x000000000050a3fd in ?? () +#14 0x000000000050bf44 in _PyEval_EvalFrameDefault () +#15 0x0000000000507cd4 in ?? () +#16 0x000000000050ae13 in PyEval_EvalCode () +#17 0x0000000000635262 in ?? () +#18 0x0000000000635317 in PyRun_FileExFlags () +#19 0x0000000000638acf in PyRun_SimpleFileExFlags () +#20 0x0000000000639671 in Py_Main () +#21 0x00000000004b0e40 in main () From 7c2b662e280cb6c7e2463caf895f686a8165c3a1 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Sat, 10 Apr 2021 18:00:11 +0300 Subject: [PATCH 070/108] delete file --- tests/queries/0_stateless/trace_raw | 27 --------------------------- 1 file changed, 27 deletions(-) delete mode 100644 tests/queries/0_stateless/trace_raw diff --git a/tests/queries/0_stateless/trace_raw b/tests/queries/0_stateless/trace_raw deleted file mode 100644 index 032acbef6bf..00000000000 --- a/tests/queries/0_stateless/trace_raw +++ /dev/null @@ -1,27 +0,0 @@ -[Thread debugging using libthread_db enabled] -Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". -0x00007fb11d0bedd7 in __GI___select (nfds=0, readfds=0x0, writefds=0x0, exceptfds=0x0, timeout=0x7fff4e474300) at ../sysdeps/unix/sysv/linux/select.c:41 - -Thread 1 (Thread 0x7fb11d5ad740 (LWP 38888)): -#0 0x00007fb11d0bedd7 in __GI___select (nfds=0, readfds=0x0, writefds=0x0, exceptfds=0x0, timeout=0x7fff4e474300) at ../sysdeps/unix/sysv/linux/select.c:41 -#1 0x00000000005bbd3a in ?? () -#2 0x000000000050a2cc in ?? () -#3 0x000000000050bf44 in _PyEval_EvalFrameDefault () -#4 0x00000000005096c8 in ?? () -#5 0x000000000050a3fd in ?? () -#6 0x000000000050bf44 in _PyEval_EvalFrameDefault () -#7 0x0000000000507cd4 in ?? () -#8 0x0000000000509a00 in ?? () -#9 0x000000000050a3fd in ?? () -#10 0x000000000050bf44 in _PyEval_EvalFrameDefault () -#11 0x0000000000507cd4 in ?? 
() -#12 0x0000000000509a00 in ?? () -#13 0x000000000050a3fd in ?? () -#14 0x000000000050bf44 in _PyEval_EvalFrameDefault () -#15 0x0000000000507cd4 in ?? () -#16 0x000000000050ae13 in PyEval_EvalCode () -#17 0x0000000000635262 in ?? () -#18 0x0000000000635317 in PyRun_FileExFlags () -#19 0x0000000000638acf in PyRun_SimpleFileExFlags () -#20 0x0000000000639671 in Py_Main () -#21 0x00000000004b0e40 in main () From 7b95ff579dc2d3e0e2a8437d9b594bd09f5312ab Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Mon, 12 Apr 2021 19:51:52 +0300 Subject: [PATCH 071/108] fix S3 test --- src/Storages/StorageS3.cpp | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/src/Storages/StorageS3.cpp b/src/Storages/StorageS3.cpp index 51e8a3225e4..165bb6519e8 100644 --- a/src/Storages/StorageS3.cpp +++ b/src/Storages/StorageS3.cpp @@ -59,8 +59,14 @@ public: const String key_prefix = globbed_uri.key.substr(0, globbed_uri.key.find_first_of("*?{")); + /// We don't have to list bucket, because there is no asterics. if (key_prefix.size() == globbed_uri.key.size()) + { buffer.emplace_back(globbed_uri.key); + buffer_iter = buffer.begin(); + is_finished = true; + return; + } request.SetBucket(globbed_uri.bucket); request.SetPrefix(key_prefix); From 7a68820342e4d8c0c5e83b0221a3bda345eec729 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Mon, 12 Apr 2021 20:07:01 +0300 Subject: [PATCH 072/108] style --- src/Client/HedgedConnections.h | 6 ++ src/DataStreams/RemoteQueryExecutor.cpp | 1 + src/Server/TCPHandler.cpp | 2 +- src/Storages/StorageDistributed.cpp | 2 +- src/Storages/StorageS3.cpp | 100 +++++++++--------- src/Storages/StorageS3Distributed.cpp | 19 ++-- src/Storages/StorageS3Distributed.h | 8 +- .../TableFunctionS3Distributed.cpp | 24 ++--- .../TableFunctionS3Distributed.h | 2 +- 9 files changed, 79 insertions(+), 85 deletions(-) diff --git a/src/Client/HedgedConnections.h b/src/Client/HedgedConnections.h index eef2ffdfcf2..bf78dca236f 100644 --- a/src/Client/HedgedConnections.h +++ b/src/Client/HedgedConnections.h @@ -14,6 +14,12 @@ namespace DB { +namespace ErrorCodes +{ + extern const int LOGICAL_ERROR; +} + + /** To receive data from multiple replicas (connections) from one shard asynchronously. 
* The principe of Hedged Connections is used to reduce tail latency: * if we don't receive data from replica and there is no progress in query execution diff --git a/src/DataStreams/RemoteQueryExecutor.cpp b/src/DataStreams/RemoteQueryExecutor.cpp index d8dc804817c..0961dd41458 100644 --- a/src/DataStreams/RemoteQueryExecutor.cpp +++ b/src/DataStreams/RemoteQueryExecutor.cpp @@ -22,6 +22,7 @@ namespace DB namespace ErrorCodes { + extern const int LOGICAL_ERROR; extern const int UNKNOWN_PACKET_FROM_SERVER; extern const int DUPLICATED_PART_UUIDS; } diff --git a/src/Server/TCPHandler.cpp b/src/Server/TCPHandler.cpp index d0d150c98cd..451f1545708 100644 --- a/src/Server/TCPHandler.cpp +++ b/src/Server/TCPHandler.cpp @@ -287,7 +287,7 @@ void TCPHandler::runImpl() customizeContext(query_context); - /// This callback is needed for requsting read tasks inside pipeline for distributed processing + /// This callback is needed for requesting read tasks inside pipeline for distributed processing query_context->setReadTaskCallback([this]() -> std::optional { std::lock_guard lock(task_callback_mutex); diff --git a/src/Storages/StorageDistributed.cpp b/src/Storages/StorageDistributed.cpp index cd5fee30b1b..5c82e5378e8 100644 --- a/src/Storages/StorageDistributed.cpp +++ b/src/Storages/StorageDistributed.cpp @@ -452,7 +452,7 @@ StorageDistributed::StorageDistributed( const DistributedSettings & distributed_settings_, bool attach, ClusterPtr owned_cluster_) - : StorageDistributed(id_, columns_, constraints_, String{}, String{}, cluster_name_, context_, sharding_key_, + : StorageDistributed(id_, columns_, constraints_, String{}, String{}, cluster_name_, context_, sharding_key_, storage_policy_name_, relative_data_path_, distributed_settings_, attach, std::move(owned_cluster_)) { remote_table_function_ptr = std::move(remote_table_function_ptr_); diff --git a/src/Storages/StorageS3.cpp b/src/Storages/StorageS3.cpp index 165bb6519e8..2fa24e3b2f7 100644 --- a/src/Storages/StorageS3.cpp +++ b/src/Storages/StorageS3.cpp @@ -50,8 +50,8 @@ class StorageS3Source::DisclosedGlobIterator::Impl public: Impl(Aws::S3::S3Client & client_, const S3::URI & globbed_uri_) - : client(client_), globbed_uri(globbed_uri_) { - + : client(client_), globbed_uri(globbed_uri_) + { std::lock_guard lock(mutex); if (globbed_uri.bucket.find_first_of("*?{") != globbed_uri.bucket.npos) @@ -70,9 +70,7 @@ public: request.SetBucket(globbed_uri.bucket); request.SetPrefix(key_prefix); - matcher = std::make_unique(makeRegexpPatternFromGlobs(globbed_uri.key)); - fillInternalBufferAssumeLocked(); } @@ -124,7 +122,7 @@ private: buffer_iter = buffer.begin(); request.SetContinuationToken(outcome.GetResult().GetNextContinuationToken()); - + /// It returns false when all objects were returned is_finished = !outcome.GetResult().GetIsTruncated(); } @@ -259,60 +257,60 @@ Chunk StorageS3Source::generate() return generate(); } -namespace +namespace { - class StorageS3BlockOutputStream : public IBlockOutputStream +class StorageS3BlockOutputStream : public IBlockOutputStream +{ +public: + StorageS3BlockOutputStream( + const String & format, + const Block & sample_block_, + ContextPtr context, + const CompressionMethod compression_method, + const std::shared_ptr & client, + const String & bucket, + const String & key, + size_t min_upload_part_size, + size_t max_single_part_upload_size) + : sample_block(sample_block_) { - public: - StorageS3BlockOutputStream( - const String & format, - const Block & sample_block_, - ContextPtr context, - const CompressionMethod 
compression_method, - const std::shared_ptr & client, - const String & bucket, - const String & key, - size_t min_upload_part_size, - size_t max_single_part_upload_size) - : sample_block(sample_block_) - { - write_buf = wrapWriteBufferWithCompressionMethod( - std::make_unique(client, bucket, key, min_upload_part_size, max_single_part_upload_size), compression_method, 3); - writer = FormatFactory::instance().getOutputStreamParallelIfPossible(format, *write_buf, sample_block, context); - } + write_buf = wrapWriteBufferWithCompressionMethod( + std::make_unique(client, bucket, key, min_upload_part_size, max_single_part_upload_size), compression_method, 3); + writer = FormatFactory::instance().getOutputStreamParallelIfPossible(format, *write_buf, sample_block, context); + } - Block getHeader() const override - { - return sample_block; - } + Block getHeader() const override + { + return sample_block; + } - void write(const Block & block) override - { - writer->write(block); - } + void write(const Block & block) override + { + writer->write(block); + } - void writePrefix() override - { - writer->writePrefix(); - } + void writePrefix() override + { + writer->writePrefix(); + } - void flush() override - { - writer->flush(); - } + void flush() override + { + writer->flush(); + } - void writeSuffix() override - { - writer->writeSuffix(); - writer->flush(); - write_buf->finalize(); - } + void writeSuffix() override + { + writer->writeSuffix(); + writer->flush(); + write_buf->finalize(); + } - private: - Block sample_block; - std::unique_ptr write_buf; - BlockOutputStreamPtr writer; - }; +private: + Block sample_block; + std::unique_ptr write_buf; + BlockOutputStreamPtr writer; +}; } diff --git a/src/Storages/StorageS3Distributed.cpp b/src/Storages/StorageS3Distributed.cpp index 29784bd73df..d55fe4bb363 100644 --- a/src/Storages/StorageS3Distributed.cpp +++ b/src/Storages/StorageS3Distributed.cpp @@ -53,12 +53,6 @@ namespace DB { -namespace ErrorCodes -{ - extern const int LOGICAL_ERROR; -} - - StorageS3Distributed::StorageS3Distributed( const String & filename_, const String & access_key_id_, @@ -72,7 +66,7 @@ StorageS3Distributed::StorageS3Distributed( const Context & context_, const String & compression_method_) : IStorage(table_id_) - , client_auth{S3::URI{Poco::URI{filename_}}, access_key_id_, secret_access_key_, max_connections_, {}, {}} /// Client and settings will be updated later + , client_auth{S3::URI{Poco::URI{filename_}}, access_key_id_, secret_access_key_, max_connections_, {}, {}} , filename(filename_) , cluster_name(cluster_name_) , cluster(context_.getCluster(cluster_name)->getClusterWithReplicasAsShards(context_.getSettings())) @@ -110,7 +104,7 @@ Pipe StorageS3Distributed::read( if (column == "_file") need_file_column = true; } - + /// Save callback not to capture context by reference of copy it. 
auto file_iterator = std::make_shared( [callback = context.getReadTaskCallback()]() -> std::optional { @@ -147,7 +141,8 @@ Pipe StorageS3Distributed::read( Pipes pipes; connections.reserve(cluster->getShardCount()); - for (const auto & replicas : cluster->getShardsAddresses()) { + for (const auto & replicas : cluster->getShardsAddresses()) + { /// There will be only one replica, because we consider each replica as a shard for (const auto & node : replicas) { @@ -165,7 +160,7 @@ Pipe StorageS3Distributed::read( *connections.back(), queryToString(query_info.query), header, context, /*throttler=*/nullptr, scalars, Tables(), processed_stage, callback); - pipes.emplace_back(std::make_shared(remote_query_executor, false, false)); + pipes.emplace_back(std::make_shared(remote_query_executor, false, false)); } } @@ -177,9 +172,9 @@ QueryProcessingStage::Enum StorageS3Distributed::getQueryProcessingStage( const Context & context, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const { /// Initiator executes query on remote node. - if (context.getClientInfo().query_kind == ClientInfo::QueryKind::INITIAL_QUERY) { + if (context.getClientInfo().query_kind == ClientInfo::QueryKind::INITIAL_QUERY) return QueryProcessingStage::Enum::WithMergeableState; - } + /// Follower just reads the data. return QueryProcessingStage::Enum::FetchColumns; } diff --git a/src/Storages/StorageS3Distributed.h b/src/Storages/StorageS3Distributed.h index 34d9612f5d4..eac0fc2ca31 100644 --- a/src/Storages/StorageS3Distributed.h +++ b/src/Storages/StorageS3Distributed.h @@ -18,12 +18,6 @@ namespace DB { -namespace ErrorCodes -{ - extern const int LOGICAL_ERROR; -} - - class Context; struct ClientAuthentificationBuilder @@ -33,7 +27,7 @@ struct ClientAuthentificationBuilder UInt64 max_connections; }; -class StorageS3Distributed : public ext::shared_ptr_helper, public IStorage +class StorageS3Distributed : public ext::shared_ptr_helper, public IStorage { friend struct ext::shared_ptr_helper; public: diff --git a/src/TableFunctions/TableFunctionS3Distributed.cpp b/src/TableFunctions/TableFunctionS3Distributed.cpp index af950538e52..99c18a57017 100644 --- a/src/TableFunctions/TableFunctionS3Distributed.cpp +++ b/src/TableFunctions/TableFunctionS3Distributed.cpp @@ -1,29 +1,30 @@ -#include -#include -#include + #include -#include "DataStreams/RemoteBlockInputStream.h" -#include "Interpreters/ClientInfo.h" -#include "Parsers/ASTExpressionList.h" -#include "Parsers/ASTFunction.h" -#include "Parsers/IAST_fwd.h" -#include "Processors/Sources/SourceFromInputStream.h" -#include "Storages/StorageS3Distributed.h" #if USE_AWS_S3 +#include #include +#include #include #include #include #include +#include #include #include #include #include #include -#include "registerTableFunctions.h" +#include +#include +#include +#include +#include + +#include +#include namespace DB { @@ -32,7 +33,6 @@ namespace ErrorCodes { extern const int LOGICAL_ERROR; extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH; - extern const int UNEXPECTED_EXPRESSION; } diff --git a/src/TableFunctions/TableFunctionS3Distributed.h b/src/TableFunctions/TableFunctionS3Distributed.h index ff94eaa83e3..9d9492e1eb4 100644 --- a/src/TableFunctions/TableFunctionS3Distributed.h +++ b/src/TableFunctions/TableFunctionS3Distributed.h @@ -15,7 +15,7 @@ class Context; /** * s3Distributed(cluster_name, source, [access_key_id, secret_access_key,] format, structure) * A table function, which allows to process many files from S3 on a specific cluster - * On initiator it creates a 
conneciton to _all_ nodes in cluster, discloses asterics + * On initiator it creates a connection to _all_ nodes in cluster, discloses asterics * in S3 file path and register all tasks (paths in S3) in NextTaskResolver to dispatch * them dynamically. * On worker node it asks initiator about next task to process, processes it. From 75230e37013a5d75433ee05735886026206576f1 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Mon, 12 Apr 2021 20:33:55 +0300 Subject: [PATCH 073/108] better --- src/Storages/StorageS3Distributed.cpp | 9 +++++ src/TableFunctions/TableFunctionS3.cpp | 38 +++++++++++++++---- .../TableFunctionS3Distributed.cpp | 16 +++----- 3 files changed, 44 insertions(+), 19 deletions(-) diff --git a/src/Storages/StorageS3Distributed.cpp b/src/Storages/StorageS3Distributed.cpp index d55fe4bb363..ced01fcc504 100644 --- a/src/Storages/StorageS3Distributed.cpp +++ b/src/Storages/StorageS3Distributed.cpp @@ -180,6 +180,15 @@ QueryProcessingStage::Enum StorageS3Distributed::getQueryProcessingStage( } +NamesAndTypesList StorageS3Distributed::getVirtuals() const +{ + return NamesAndTypesList{ + {"_path", std::make_shared()}, + {"_file", std::make_shared()} + }; +} + + } #endif diff --git a/src/TableFunctions/TableFunctionS3.cpp b/src/TableFunctions/TableFunctionS3.cpp index 1a67bae48b0..4aa5fbc45ce 100644 --- a/src/TableFunctions/TableFunctionS3.cpp +++ b/src/TableFunctions/TableFunctionS3.cpp @@ -26,35 +26,57 @@ void TableFunctionS3::parseArguments(const ASTPtr & ast_function, ContextPtr con /// Parse args ASTs & args_func = ast_function->children; + const auto message = fmt::format( + "The signature of table function {} could be the following:\n" \ + " - url, format, structure\n" \ + " - url, format, structure, compression_method\n" \ + " - url, access_key_id, secret_access_key, format, structure\n" \ + " - url, access_key_id, secret_access_key, format, structure, compression_method", + getName()); + if (args_func.size() != 1) - throw Exception("Table function '" + getName() + "' must have arguments.", ErrorCodes::LOGICAL_ERROR); + throw Exception("Table function '" + getName() + "' must have arguments.", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); ASTs & args = args_func.at(0)->children; if (args.size() < 3 || args.size() > 6) - throw Exception("Table function '" + getName() + "' requires 3 to 6 arguments: url, [access_key_id, secret_access_key,] format, structure and [compression_method].", - ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); + throw Exception(message, ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); for (auto & arg : args) arg = evaluateConstantExpressionOrIdentifierAsLiteral(arg, context); filename = args[0]->as().value.safeGet(); - if (args.size() < 5) + if (args.size() == 3) { format = args[1]->as().value.safeGet(); structure = args[2]->as().value.safeGet(); } - else + else if (args.size() == 4) + { + format = args[1]->as().value.safeGet(); + structure = args[2]->as().value.safeGet(); + compression_method = args[3]->as().value.safeGet(); + } + else if (args.size() == 5) { access_key_id = args[1]->as().value.safeGet(); secret_access_key = args[2]->as().value.safeGet(); format = args[3]->as().value.safeGet(); structure = args[4]->as().value.safeGet(); } - - if (args.size() == 4 || args.size() == 6) - compression_method = args.back()->as().value.safeGet(); + else if (args.size() == 6) + { + access_key_id = args[1]->as().value.safeGet(); + secret_access_key = args[2]->as().value.safeGet(); + format = args[3]->as().value.safeGet(); + structure = 
args[4]->as().value.safeGet(); + compression_method = args[5]->as().value.safeGet(); + } + else + { + throw Exception("Table function '" + getName() + "' must have arguments.", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); + } } ColumnsDescription TableFunctionS3::getActualTableStructure(ContextPtr context) const diff --git a/src/TableFunctions/TableFunctionS3Distributed.cpp b/src/TableFunctions/TableFunctionS3Distributed.cpp index 99c18a57017..a4fa33c60dd 100644 --- a/src/TableFunctions/TableFunctionS3Distributed.cpp +++ b/src/TableFunctions/TableFunctionS3Distributed.cpp @@ -1,5 +1,6 @@ - +#if !defined(ARCADIA_BUILD) #include +#endif #if USE_AWS_S3 @@ -21,7 +22,8 @@ #include #include #include -#include + +#include "registerTableFunctions.h" #include #include @@ -42,7 +44,7 @@ void TableFunctionS3Distributed::parseArguments(const ASTPtr & ast_function, con ASTs & args_func = ast_function->children; if (args_func.size() != 1) - throw Exception("Table function '" + getName() + "' must have arguments.", ErrorCodes::LOGICAL_ERROR); + throw Exception("Table function '" + getName() + "' must have arguments.", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); ASTs & args = args_func.at(0)->children; @@ -126,14 +128,6 @@ void registerTableFunctionCOSDistributed(TableFunctionFactory & factory) } -NamesAndTypesList StorageS3Distributed::getVirtuals() const -{ - return NamesAndTypesList{ - {"_path", std::make_shared()}, - {"_file", std::make_shared()} - }; -} - } #endif From 507cb8514aebbc199016f4fc1028c9502d9cf9b6 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Mon, 12 Apr 2021 20:48:16 +0300 Subject: [PATCH 074/108] review fixes --- src/Server/TCPHandler.cpp | 3 +- src/Storages/StorageS3.cpp | 2 +- src/Storages/StorageS3Distributed.cpp | 5 ++- tests/integration/test_s3_distributed/test.py | 44 +++++++++++-------- 4 files changed, 31 insertions(+), 23 deletions(-) diff --git a/src/Server/TCPHandler.cpp b/src/Server/TCPHandler.cpp index 451f1545708..469a23651e4 100644 --- a/src/Server/TCPHandler.cpp +++ b/src/Server/TCPHandler.cpp @@ -57,6 +57,7 @@ namespace ErrorCodes extern const int SOCKET_TIMEOUT; extern const int UNEXPECTED_PACKET_FROM_CLIENT; extern const int SUPPORT_IS_DISABLED; + extern const int UNKNOWN_PROTOCOL; } TCPHandler::TCPHandler(IServer & server_, const Poco::Net::StreamSocket & socket_, bool parse_proxy_protocol_, std::string server_display_name_) @@ -1056,7 +1057,7 @@ std::optional TCPHandler::receiveReadTaskResponseAssumeLocked() UInt64 version; readVarUInt(version, *in); if (version != DBMS_CLUSTER_PROCESSING_PROTOCOL_VERSION) - throw Exception("Protocol version for distributed processing mismatched", ErrorCodes::LOGICAL_ERROR); + throw Exception("Protocol version for distributed processing mismatched", ErrorCodes::UNKNOWN_PROTOCOL); String response; readStringBinary(response, *in); if (response.empty()) diff --git a/src/Storages/StorageS3.cpp b/src/Storages/StorageS3.cpp index 2fa24e3b2f7..9c50bb050d0 100644 --- a/src/Storages/StorageS3.cpp +++ b/src/Storages/StorageS3.cpp @@ -92,7 +92,7 @@ private: } if (is_finished) - return std::nullopt; // Or throw? 
+ return std::nullopt; fillInternalBufferAssumeLocked(); diff --git a/src/Storages/StorageS3Distributed.cpp b/src/Storages/StorageS3Distributed.cpp index ced01fcc504..24742555822 100644 --- a/src/Storages/StorageS3Distributed.cpp +++ b/src/Storages/StorageS3Distributed.cpp @@ -169,11 +169,12 @@ Pipe StorageS3Distributed::read( } QueryProcessingStage::Enum StorageS3Distributed::getQueryProcessingStage( - const Context & context, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const + const Context & context, QueryProcessingStage::Enum to_stage, SelectQueryInfo &) const { /// Initiator executes query on remote node. if (context.getClientInfo().query_kind == ClientInfo::QueryKind::INITIAL_QUERY) - return QueryProcessingStage::Enum::WithMergeableState; + if (to_stage >= QueryProcessingStage::Enum::WithMergeableState) + return QueryProcessingStage::Enum::WithMergeableState; /// Follower just reads the data. return QueryProcessingStage::Enum::FetchColumns; diff --git a/tests/integration/test_s3_distributed/test.py b/tests/integration/test_s3_distributed/test.py index 7ad7f201ce6..671d611fca4 100644 --- a/tests/integration/test_s3_distributed/test.py +++ b/tests/integration/test_s3_distributed/test.py @@ -78,29 +78,35 @@ def test_count(started_cluster): def test_union_all(started_cluster): node = started_cluster.instances['s0_0_0'] pure_s3 = node.query(""" - SELECT * from s3( - 'http://minio1:9001/root/data/{clickhouse,database}/*', - 'minio', 'minio123', 'CSV', - 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') - ORDER BY (name, value, polygon) - UNION ALL - SELECT * from s3( - 'http://minio1:9001/root/data/{clickhouse,database}/*', - 'minio', 'minio123', 'CSV', - 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') + SELECT * FROM + ( + SELECT * from s3( + 'http://minio1:9001/root/data/{clickhouse,database}/*', + 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') + UNION ALL + SELECT * from s3( + 'http://minio1:9001/root/data/{clickhouse,database}/*', + 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') + ) ORDER BY (name, value, polygon) """) # print(pure_s3) s3_distibuted = node.query(""" - SELECT * from s3Distributed( - 'cluster_simple', - 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', - 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') ORDER BY (name, value, polygon) - UNION ALL - SELECT * from s3Distributed( - 'cluster_simple', - 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', - 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') ORDER BY (name, value, polygon) + SELECT * FROM + ( + SELECT * from s3Distributed( + 'cluster_simple', + 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') + UNION ALL + SELECT * from s3Distributed( + 'cluster_simple', + 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') + ) + ORDER BY (name, value, polygon) """) # print(s3_distibuted) From a743442a17660887984eb0327f410c598554a2d3 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Mon, 12 Apr 2021 22:35:26 +0300 Subject: [PATCH 075/108] build fixes --- src/Core/Protocol.h | 2 +- 
src/Storages/StorageS3.cpp | 14 ++++------ src/Storages/StorageS3.h | 6 ++-- src/Storages/StorageS3Distributed.cpp | 20 ++++++------- src/Storages/StorageS3Distributed.h | 28 +++++-------------- src/TableFunctions/TableFunctionS3.cpp | 1 - .../TableFunctionS3Distributed.cpp | 11 ++++---- .../TableFunctionS3Distributed.h | 6 ++-- 8 files changed, 34 insertions(+), 54 deletions(-) diff --git a/src/Core/Protocol.h b/src/Core/Protocol.h index 10a05d8dde0..92e780104b5 100644 --- a/src/Core/Protocol.h +++ b/src/Core/Protocol.h @@ -77,7 +77,7 @@ namespace Protocol TableColumns = 11, /// Columns' description for default values calculation PartUUIDs = 12, /// List of unique parts ids. ReadTaskRequest = 13, /// String (UUID) describes a request for which next task is needed - /// This is such an inverted logic, where server sends requests + /// This is such an inverted logic, where server sends requests /// And client returns back response MAX = ReadTaskRequest, }; diff --git a/src/Storages/StorageS3.cpp b/src/Storages/StorageS3.cpp index 9c50bb050d0..86ed97052fa 100644 --- a/src/Storages/StorageS3.cpp +++ b/src/Storages/StorageS3.cpp @@ -163,7 +163,7 @@ StorageS3Source::StorageS3Source( const String & format_, String name_, const Block & sample_block_, - const Context & context_, + ContextPtr context_, const ColumnsDescription & columns_, UInt64 max_block_size_, const String compression_hint_, @@ -171,10 +171,10 @@ StorageS3Source::StorageS3Source( const String & bucket_, std::shared_ptr file_iterator_) : SourceWithProgress(getHeader(sample_block_, need_path, need_file)) + , WithContext(context_) , name(std::move(name_)) , bucket(bucket_) , format(format_) - , context(context_) , columns_desc(columns_) , max_block_size(max_block_size_) , compression_hint(compression_hint_) @@ -204,11 +204,11 @@ bool StorageS3Source::initialize() read_buf = wrapReadBufferWithCompressionMethod( std::make_unique(client, bucket, current_key), chooseCompressionMethod(current_key, compression_hint)); - auto input_format = FormatFactory::instance().getInput(format, *read_buf, sample_block, context, max_block_size); + auto input_format = FormatFactory::instance().getInput(format, *read_buf, sample_block, getContext(), max_block_size); reader = std::make_shared(input_format); if (columns_desc.hasDefaults()) - reader = std::make_shared(reader, columns_desc, context); + reader = std::make_shared(reader, columns_desc, getContext()); initialized = false; return true; @@ -257,8 +257,7 @@ Chunk StorageS3Source::generate() return generate(); } -namespace -{ + class StorageS3BlockOutputStream : public IBlockOutputStream { public: @@ -311,7 +310,6 @@ private: std::unique_ptr write_buf; BlockOutputStreamPtr writer; }; -} StorageS3::StorageS3( @@ -401,7 +399,7 @@ BlockOutputStreamPtr StorageS3::write(const ASTPtr & /*query*/, const StorageMet return std::make_shared( format_name, metadata_snapshot->getSampleBlock(), - context, + local_context, chooseCompressionMethod(client_auth.uri.key, compression_method), client_auth.client, client_auth.uri.bucket, diff --git a/src/Storages/StorageS3.h b/src/Storages/StorageS3.h index 7b6cc235be4..6fb3d0004a4 100644 --- a/src/Storages/StorageS3.h +++ b/src/Storages/StorageS3.h @@ -28,7 +28,7 @@ namespace DB { class StorageS3SequentialSource; -class StorageS3Source : public SourceWithProgress +class StorageS3Source : public SourceWithProgress, WithContext { public: class DisclosedGlobIterator @@ -52,7 +52,7 @@ public: const String & format, String name_, const Block & sample_block, - const Context & 
context_, + ContextPtr context_, const ColumnsDescription & columns_, UInt64 max_block_size_, const String compression_hint_, @@ -69,7 +69,6 @@ private: String bucket; String file_path; String format; - Context context; ColumnsDescription columns_desc; UInt64 max_block_size; String compression_hint; @@ -139,7 +138,6 @@ private: const String access_key_id; const String secret_access_key; const UInt64 max_connections; - std::shared_ptr client; S3AuthSettings auth_settings; }; diff --git a/src/Storages/StorageS3Distributed.cpp b/src/Storages/StorageS3Distributed.cpp index 24742555822..c68ca16efd1 100644 --- a/src/Storages/StorageS3Distributed.cpp +++ b/src/Storages/StorageS3Distributed.cpp @@ -63,13 +63,13 @@ StorageS3Distributed::StorageS3Distributed( UInt64 max_connections_, const ColumnsDescription & columns_, const ConstraintsDescription & constraints_, - const Context & context_, + ContextPtr context_, const String & compression_method_) : IStorage(table_id_) , client_auth{S3::URI{Poco::URI{filename_}}, access_key_id_, secret_access_key_, max_connections_, {}, {}} , filename(filename_) , cluster_name(cluster_name_) - , cluster(context_.getCluster(cluster_name)->getClusterWithReplicasAsShards(context_.getSettings())) + , cluster(context_->getCluster(cluster_name)->getClusterWithReplicasAsShards(context_->getSettings())) , format_name(format_name_) , compression_method(compression_method_) { @@ -85,7 +85,7 @@ Pipe StorageS3Distributed::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned /*num_streams*/) @@ -93,7 +93,7 @@ Pipe StorageS3Distributed::read( StorageS3::updateClientAndAuthSettings(context, client_auth); /// Secondary query, need to read from S3 - if (context.getClientInfo().query_kind == ClientInfo::QueryKind::SECONDARY_QUERY) + if (context->getClientInfo().query_kind == ClientInfo::QueryKind::SECONDARY_QUERY) { bool need_path_column = false; bool need_file_column = false; @@ -107,7 +107,7 @@ Pipe StorageS3Distributed::read( /// Save callback not to capture context by reference of copy it. auto file_iterator = std::make_shared( - [callback = context.getReadTaskCallback()]() -> std::optional { + [callback = context->getReadTaskCallback()]() -> std::optional { return callback(); }); @@ -136,7 +136,7 @@ Pipe StorageS3Distributed::read( Block header = InterpreterSelectQuery(query_info.query, context, SelectQueryOptions(processed_stage).analyze()).getSampleBlock(); - const Scalars & scalars = context.hasQueryContext() ? context.getQueryContext().getScalars() : Scalars{}; + const Scalars & scalars = context->hasQueryContext() ? context->getQueryContext()->getScalars() : Scalars{}; Pipes pipes; connections.reserve(cluster->getShardCount()); @@ -147,7 +147,7 @@ Pipe StorageS3Distributed::read( for (const auto & node : replicas) { connections.emplace_back(std::make_shared( - node.host_name, node.port, context.getGlobalContext().getCurrentDatabase(), + node.host_name, node.port, context->getGlobalContext()->getCurrentDatabase(), node.user, node.password, node.cluster, node.cluster_secret, "S3DistributedInititiator", node.compression, @@ -157,7 +157,7 @@ Pipe StorageS3Distributed::read( /// For unknown reason global context is passed to IStorage::read() method /// So, task_identifier is passed as constructor argument. It is more obvious. 
auto remote_query_executor = std::make_shared( - *connections.back(), queryToString(query_info.query), header, context, + *connections.back(), queryToString(query_info.query), header, context, /*throttler=*/nullptr, scalars, Tables(), processed_stage, callback); pipes.emplace_back(std::make_shared(remote_query_executor, false, false)); @@ -169,10 +169,10 @@ Pipe StorageS3Distributed::read( } QueryProcessingStage::Enum StorageS3Distributed::getQueryProcessingStage( - const Context & context, QueryProcessingStage::Enum to_stage, SelectQueryInfo &) const + ContextPtr context, QueryProcessingStage::Enum to_stage, SelectQueryInfo &) const { /// Initiator executes query on remote node. - if (context.getClientInfo().query_kind == ClientInfo::QueryKind::INITIAL_QUERY) + if (context->getClientInfo().query_kind == ClientInfo::QueryKind::INITIAL_QUERY) if (to_stage >= QueryProcessingStage::Enum::WithMergeableState) return QueryProcessingStage::Enum::WithMergeableState; diff --git a/src/Storages/StorageS3Distributed.h b/src/Storages/StorageS3Distributed.h index eac0fc2ca31..30fbcf3aeb2 100644 --- a/src/Storages/StorageS3Distributed.h +++ b/src/Storages/StorageS3Distributed.h @@ -33,32 +33,18 @@ class StorageS3Distributed : public ext::shared_ptr_helper public: std::string getName() const override { return "S3Distributed"; } - Pipe read( - const Names & /*column_names*/, - const StorageMetadataPtr & /*metadata_snapshot*/, - SelectQueryInfo & /*query_info*/, - const Context & /*context*/, - QueryProcessingStage::Enum /*processed_stage*/, - size_t /*max_block_size*/, - unsigned /*num_streams*/) override; + Pipe read(const Names &, const StorageMetadataPtr &, SelectQueryInfo &, + ContextPtr, QueryProcessingStage::Enum, size_t /*max_block_size*/, unsigned /*num_streams*/) override; - QueryProcessingStage::Enum getQueryProcessingStage(const Context & context, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const override; + QueryProcessingStage::Enum getQueryProcessingStage(ContextPtr, QueryProcessingStage::Enum, SelectQueryInfo &) const override; NamesAndTypesList getVirtuals() const override; protected: StorageS3Distributed( - const String & filename_, - const String & access_key_id_, - const String & secret_access_key_, - const StorageID & table_id_, - String cluster_name_, - const String & format_name_, - UInt64 max_connections_, - const ColumnsDescription & columns_, - const ConstraintsDescription & constraints_, - const Context & context_, - const String & compression_method_); + const String & filename_, const String & access_key_id_, const String & secret_access_key_, const StorageID & table_id_, + String cluster_name_, const String & format_name_, UInt64 max_connections_, const ColumnsDescription & columns_, + const ConstraintsDescription & constraints_, ContextPtr context_, const String & compression_method_); private: /// Connections from initiator to other nodes @@ -66,7 +52,7 @@ private: StorageS3::ClientAuthentificaiton client_auth; String filename; - std::string cluster_name; + String cluster_name; ClusterPtr cluster; String format_name; diff --git a/src/TableFunctions/TableFunctionS3.cpp b/src/TableFunctions/TableFunctionS3.cpp index 4aa5fbc45ce..8e70c31542a 100644 --- a/src/TableFunctions/TableFunctionS3.cpp +++ b/src/TableFunctions/TableFunctionS3.cpp @@ -17,7 +17,6 @@ namespace DB namespace ErrorCodes { - extern const int LOGICAL_ERROR; extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH; } diff --git a/src/TableFunctions/TableFunctionS3Distributed.cpp 
b/src/TableFunctions/TableFunctionS3Distributed.cpp index a4fa33c60dd..d0cf6e7fe54 100644 --- a/src/TableFunctions/TableFunctionS3Distributed.cpp +++ b/src/TableFunctions/TableFunctionS3Distributed.cpp @@ -33,12 +33,11 @@ namespace DB namespace ErrorCodes { - extern const int LOGICAL_ERROR; extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH; } -void TableFunctionS3Distributed::parseArguments(const ASTPtr & ast_function, const Context & context) +void TableFunctionS3Distributed::parseArguments(const ASTPtr & ast_function, ContextPtr context) { /// Parse args ASTs & args_func = ast_function->children; @@ -69,7 +68,7 @@ void TableFunctionS3Distributed::parseArguments(const ASTPtr & ast_function, con { format = args[2]->as().value.safeGet(); structure = args[3]->as().value.safeGet(); - } + } else if (args.size() == 5) { format = args[2]->as().value.safeGet(); @@ -96,18 +95,18 @@ void TableFunctionS3Distributed::parseArguments(const ASTPtr & ast_function, con } -ColumnsDescription TableFunctionS3Distributed::getActualTableStructure(const Context & context) const +ColumnsDescription TableFunctionS3Distributed::getActualTableStructure(ContextPtr context) const { return parseColumnsListFromString(structure, context); } StoragePtr TableFunctionS3Distributed::executeImpl( - const ASTPtr & /*function*/, const Context & context, + const ASTPtr & /*function*/, ContextPtr context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { StoragePtr storage = StorageS3Distributed::create( filename, access_key_id, secret_access_key, StorageID(getDatabaseName(), table_name), - cluster_name, format, context.getSettingsRef().s3_max_connections, + cluster_name, format, context->getSettingsRef().s3_max_connections, getActualTableStructure(context), ConstraintsDescription{}, context, compression_method); diff --git a/src/TableFunctions/TableFunctionS3Distributed.h b/src/TableFunctions/TableFunctionS3Distributed.h index 9d9492e1eb4..e2c03e53b76 100644 --- a/src/TableFunctions/TableFunctionS3Distributed.h +++ b/src/TableFunctions/TableFunctionS3Distributed.h @@ -34,14 +34,14 @@ public: protected: StoragePtr executeImpl( const ASTPtr & ast_function, - const Context & context, + ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns) const override; const char * getStorageTypeName() const override { return "S3Distributed"; } - ColumnsDescription getActualTableStructure(const Context & context) const override; - void parseArguments(const ASTPtr & ast_function, const Context & context) override; + ColumnsDescription getActualTableStructure(ContextPtr) const override; + void parseArguments(const ASTPtr &, ContextPtr) override; String cluster_name; String filename; From 09a62e713ad47b2543dde2d72cb42fa4330816c6 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Tue, 13 Apr 2021 00:42:52 +0300 Subject: [PATCH 076/108] rename to s3Cluster --- src/Server/TCPHandler.cpp | 2 +- src/Storages/StorageS3.h | 4 ++-- src/Storages/StorageS3Distributed.cpp | 12 ++++++------ src/Storages/StorageS3Distributed.h | 8 ++++---- .../TableFunctionS3Distributed.cpp | 16 ++++++++-------- src/TableFunctions/TableFunctionS3Distributed.h | 10 +++++----- src/TableFunctions/registerTableFunctions.cpp | 2 +- src/TableFunctions/registerTableFunctions.h | 2 +- 8 files changed, 28 insertions(+), 28 deletions(-) diff --git a/src/Server/TCPHandler.cpp b/src/Server/TCPHandler.cpp index 469a23651e4..a997884d68b 100644 --- a/src/Server/TCPHandler.cpp +++ b/src/Server/TCPHandler.cpp @@ -25,7 +25,7 @@ 
#include #include #include -#include +#include #include #include #include diff --git a/src/Storages/StorageS3.h b/src/Storages/StorageS3.h index 6fb3d0004a4..ceac2d1b46f 100644 --- a/src/Storages/StorageS3.h +++ b/src/Storages/StorageS3.h @@ -128,8 +128,8 @@ public: private: - friend class StorageS3Distributed; - friend class TableFunctionS3Distributed; + friend class StorageS3Cluster; + friend class TableFunctionS3Cluster; friend class StorageS3SequentialSource; struct ClientAuthentificaiton diff --git a/src/Storages/StorageS3Distributed.cpp b/src/Storages/StorageS3Distributed.cpp index c68ca16efd1..b91880e143d 100644 --- a/src/Storages/StorageS3Distributed.cpp +++ b/src/Storages/StorageS3Distributed.cpp @@ -1,4 +1,4 @@ -#include "Storages/StorageS3Distributed.h" +#include "Storages/StorageS3Cluster.h" #if !defined(ARCADIA_BUILD) #include @@ -53,7 +53,7 @@ namespace DB { -StorageS3Distributed::StorageS3Distributed( +StorageS3Cluster::StorageS3Cluster( const String & filename_, const String & access_key_id_, const String & secret_access_key_, @@ -81,7 +81,7 @@ StorageS3Distributed::StorageS3Distributed( } -Pipe StorageS3Distributed::read( +Pipe StorageS3Cluster::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, @@ -149,7 +149,7 @@ Pipe StorageS3Distributed::read( connections.emplace_back(std::make_shared( node.host_name, node.port, context->getGlobalContext()->getCurrentDatabase(), node.user, node.password, node.cluster, node.cluster_secret, - "S3DistributedInititiator", + "S3ClusterInititiator", node.compression, node.secure )); @@ -168,7 +168,7 @@ Pipe StorageS3Distributed::read( return Pipe::unitePipes(std::move(pipes)); } -QueryProcessingStage::Enum StorageS3Distributed::getQueryProcessingStage( +QueryProcessingStage::Enum StorageS3Cluster::getQueryProcessingStage( ContextPtr context, QueryProcessingStage::Enum to_stage, SelectQueryInfo &) const { /// Initiator executes query on remote node. 
@@ -181,7 +181,7 @@ QueryProcessingStage::Enum StorageS3Distributed::getQueryProcessingStage( } -NamesAndTypesList StorageS3Distributed::getVirtuals() const +NamesAndTypesList StorageS3Cluster::getVirtuals() const { return NamesAndTypesList{ {"_path", std::make_shared()}, diff --git a/src/Storages/StorageS3Distributed.h b/src/Storages/StorageS3Distributed.h index 30fbcf3aeb2..f7d9481d991 100644 --- a/src/Storages/StorageS3Distributed.h +++ b/src/Storages/StorageS3Distributed.h @@ -27,11 +27,11 @@ struct ClientAuthentificationBuilder UInt64 max_connections; }; -class StorageS3Distributed : public ext::shared_ptr_helper, public IStorage +class StorageS3Cluster : public ext::shared_ptr_helper, public IStorage { - friend struct ext::shared_ptr_helper; + friend struct ext::shared_ptr_helper; public: - std::string getName() const override { return "S3Distributed"; } + std::string getName() const override { return "S3Cluster"; } Pipe read(const Names &, const StorageMetadataPtr &, SelectQueryInfo &, ContextPtr, QueryProcessingStage::Enum, size_t /*max_block_size*/, unsigned /*num_streams*/) override; @@ -41,7 +41,7 @@ public: NamesAndTypesList getVirtuals() const override; protected: - StorageS3Distributed( + StorageS3Cluster( const String & filename_, const String & access_key_id_, const String & secret_access_key_, const StorageID & table_id_, String cluster_name_, const String & format_name_, UInt64 max_connections_, const ColumnsDescription & columns_, const ConstraintsDescription & constraints_, ContextPtr context_, const String & compression_method_); diff --git a/src/TableFunctions/TableFunctionS3Distributed.cpp b/src/TableFunctions/TableFunctionS3Distributed.cpp index d0cf6e7fe54..f0d0da9a759 100644 --- a/src/TableFunctions/TableFunctionS3Distributed.cpp +++ b/src/TableFunctions/TableFunctionS3Distributed.cpp @@ -4,7 +4,7 @@ #if USE_AWS_S3 -#include +#include #include #include @@ -15,7 +15,7 @@ #include #include #include -#include +#include #include #include #include @@ -37,7 +37,7 @@ namespace ErrorCodes } -void TableFunctionS3Distributed::parseArguments(const ASTPtr & ast_function, ContextPtr context) +void TableFunctionS3Cluster::parseArguments(const ASTPtr & ast_function, ContextPtr context) { /// Parse args ASTs & args_func = ast_function->children; @@ -95,16 +95,16 @@ void TableFunctionS3Distributed::parseArguments(const ASTPtr & ast_function, Con } -ColumnsDescription TableFunctionS3Distributed::getActualTableStructure(ContextPtr context) const +ColumnsDescription TableFunctionS3Cluster::getActualTableStructure(ContextPtr context) const { return parseColumnsListFromString(structure, context); } -StoragePtr TableFunctionS3Distributed::executeImpl( +StoragePtr TableFunctionS3Cluster::executeImpl( const ASTPtr & /*function*/, ContextPtr context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { - StoragePtr storage = StorageS3Distributed::create( + StoragePtr storage = StorageS3Cluster::create( filename, access_key_id, secret_access_key, StorageID(getDatabaseName(), table_name), cluster_name, format, context->getSettingsRef().s3_max_connections, getActualTableStructure(context), ConstraintsDescription{}, @@ -116,9 +116,9 @@ StoragePtr TableFunctionS3Distributed::executeImpl( } -void registerTableFunctionS3Distributed(TableFunctionFactory & factory) +void registerTableFunctionS3Cluster(TableFunctionFactory & factory) { - factory.registerFunction(); + factory.registerFunction(); } void registerTableFunctionCOSDistributed(TableFunctionFactory & factory) diff 
--git a/src/TableFunctions/TableFunctionS3Distributed.h b/src/TableFunctions/TableFunctionS3Distributed.h index e2c03e53b76..4e3ac227c02 100644 --- a/src/TableFunctions/TableFunctionS3Distributed.h +++ b/src/TableFunctions/TableFunctionS3Distributed.h @@ -13,7 +13,7 @@ namespace DB class Context; /** - * s3Distributed(cluster_name, source, [access_key_id, secret_access_key,] format, structure) + * s3Cluster(cluster_name, source, [access_key_id, secret_access_key,] format, structure) * A table function, which allows to process many files from S3 on a specific cluster * On initiator it creates a connection to _all_ nodes in cluster, discloses asterics * in S3 file path and register all tasks (paths in S3) in NextTaskResolver to dispatch @@ -21,10 +21,10 @@ class Context; * On worker node it asks initiator about next task to process, processes it. * This is repeated until the tasks are finished. */ -class TableFunctionS3Distributed : public ITableFunction +class TableFunctionS3Cluster : public ITableFunction { public: - static constexpr auto name = "s3Distributed"; + static constexpr auto name = "s3Cluster"; std::string getName() const override { return name; @@ -38,7 +38,7 @@ protected: const std::string & table_name, ColumnsDescription cached_columns) const override; - const char * getStorageTypeName() const override { return "S3Distributed"; } + const char * getStorageTypeName() const override { return "S3Cluster"; } ColumnsDescription getActualTableStructure(ContextPtr) const override; void parseArguments(const ASTPtr &, ContextPtr) override; @@ -52,7 +52,7 @@ protected: String compression_method = "auto"; }; -class TableFunctionCOSDistributed : public TableFunctionS3Distributed +class TableFunctionCOSDistributed : public TableFunctionS3Cluster { public: static constexpr auto name = "cosnDistributed"; diff --git a/src/TableFunctions/registerTableFunctions.cpp b/src/TableFunctions/registerTableFunctions.cpp index 2a1d4070b44..6cf40c4f090 100644 --- a/src/TableFunctions/registerTableFunctions.cpp +++ b/src/TableFunctions/registerTableFunctions.cpp @@ -21,7 +21,7 @@ void registerTableFunctions() #if USE_AWS_S3 registerTableFunctionS3(factory); - registerTableFunctionS3Distributed(factory); + registerTableFunctionS3Cluster(factory); registerTableFunctionCOS(factory); #endif diff --git a/src/TableFunctions/registerTableFunctions.h b/src/TableFunctions/registerTableFunctions.h index 5cb948761da..c49fafc5f86 100644 --- a/src/TableFunctions/registerTableFunctions.h +++ b/src/TableFunctions/registerTableFunctions.h @@ -21,7 +21,7 @@ void registerTableFunctionGenerate(TableFunctionFactory & factory); #if USE_AWS_S3 void registerTableFunctionS3(TableFunctionFactory & factory); -void registerTableFunctionS3Distributed(TableFunctionFactory & factory); +void registerTableFunctionS3Cluster(TableFunctionFactory & factory); void registerTableFunctionCOS(TableFunctionFactory & factory); #endif From a15757a9c9e7cc826a49d4bea60b5ea62d4b777c Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Tue, 13 Apr 2021 00:52:16 +0300 Subject: [PATCH 077/108] better renaming --- docker/test/fasttest/run.sh | 2 +- ...StorageS3Distributed.cpp => StorageS3Cluster.cpp} | 0 .../{StorageS3Distributed.h => StorageS3Cluster.h} | 0 ...nS3Distributed.cpp => TableFunctionS3Cluster.cpp} | 0 ...ctionS3Distributed.h => TableFunctionS3Cluster.h} | 0 .../__init__.py | 0 .../configs/cluster.xml | 0 .../data/clickhouse/part1.csv | 0 .../data/clickhouse/part123.csv | 0 .../data/database/part2.csv | 0 .../data/database/partition675.csv | 
0 .../{test_s3_distributed => test_s3_cluster}/test.py | 12 ++++++------ ...tributed.reference => 01801_s3_cluster.reference} | 0 .../{01801_s3_distributed.sh => 01801_s3_cluster.sh} | 2 +- 14 files changed, 8 insertions(+), 8 deletions(-) rename src/Storages/{StorageS3Distributed.cpp => StorageS3Cluster.cpp} (100%) rename src/Storages/{StorageS3Distributed.h => StorageS3Cluster.h} (100%) rename src/TableFunctions/{TableFunctionS3Distributed.cpp => TableFunctionS3Cluster.cpp} (100%) rename src/TableFunctions/{TableFunctionS3Distributed.h => TableFunctionS3Cluster.h} (100%) rename tests/integration/{test_s3_distributed => test_s3_cluster}/__init__.py (100%) rename tests/integration/{test_s3_distributed => test_s3_cluster}/configs/cluster.xml (100%) rename tests/integration/{test_s3_distributed => test_s3_cluster}/data/clickhouse/part1.csv (100%) rename tests/integration/{test_s3_distributed => test_s3_cluster}/data/clickhouse/part123.csv (100%) rename tests/integration/{test_s3_distributed => test_s3_cluster}/data/database/part2.csv (100%) rename tests/integration/{test_s3_distributed => test_s3_cluster}/data/database/partition675.csv (100%) rename tests/integration/{test_s3_distributed => test_s3_cluster}/test.py (95%) rename tests/queries/0_stateless/{01801_s3_distributed.reference => 01801_s3_cluster.reference} (100%) rename tests/queries/0_stateless/{01801_s3_distributed.sh => 01801_s3_cluster.sh} (70%) diff --git a/docker/test/fasttest/run.sh b/docker/test/fasttest/run.sh index a482ab35d78..a42cb25f6f0 100755 --- a/docker/test/fasttest/run.sh +++ b/docker/test/fasttest/run.sh @@ -368,7 +368,7 @@ function run_tests 01666_blns # Depends on AWS - 01801_s3_distributed + 01801_s3_cluster ) (time clickhouse-test --hung-check -j 8 --order=random --use-skip-list --no-long --testname --shard --zookeeper --skip "${TESTS_TO_SKIP[@]}" -- "$FASTTEST_FOCUS" 2>&1 ||:) | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/test_log.txt" diff --git a/src/Storages/StorageS3Distributed.cpp b/src/Storages/StorageS3Cluster.cpp similarity index 100% rename from src/Storages/StorageS3Distributed.cpp rename to src/Storages/StorageS3Cluster.cpp diff --git a/src/Storages/StorageS3Distributed.h b/src/Storages/StorageS3Cluster.h similarity index 100% rename from src/Storages/StorageS3Distributed.h rename to src/Storages/StorageS3Cluster.h diff --git a/src/TableFunctions/TableFunctionS3Distributed.cpp b/src/TableFunctions/TableFunctionS3Cluster.cpp similarity index 100% rename from src/TableFunctions/TableFunctionS3Distributed.cpp rename to src/TableFunctions/TableFunctionS3Cluster.cpp diff --git a/src/TableFunctions/TableFunctionS3Distributed.h b/src/TableFunctions/TableFunctionS3Cluster.h similarity index 100% rename from src/TableFunctions/TableFunctionS3Distributed.h rename to src/TableFunctions/TableFunctionS3Cluster.h diff --git a/tests/integration/test_s3_distributed/__init__.py b/tests/integration/test_s3_cluster/__init__.py similarity index 100% rename from tests/integration/test_s3_distributed/__init__.py rename to tests/integration/test_s3_cluster/__init__.py diff --git a/tests/integration/test_s3_distributed/configs/cluster.xml b/tests/integration/test_s3_cluster/configs/cluster.xml similarity index 100% rename from tests/integration/test_s3_distributed/configs/cluster.xml rename to tests/integration/test_s3_cluster/configs/cluster.xml diff --git a/tests/integration/test_s3_distributed/data/clickhouse/part1.csv b/tests/integration/test_s3_cluster/data/clickhouse/part1.csv similarity index 100% rename from 
tests/integration/test_s3_distributed/data/clickhouse/part1.csv rename to tests/integration/test_s3_cluster/data/clickhouse/part1.csv diff --git a/tests/integration/test_s3_distributed/data/clickhouse/part123.csv b/tests/integration/test_s3_cluster/data/clickhouse/part123.csv similarity index 100% rename from tests/integration/test_s3_distributed/data/clickhouse/part123.csv rename to tests/integration/test_s3_cluster/data/clickhouse/part123.csv diff --git a/tests/integration/test_s3_distributed/data/database/part2.csv b/tests/integration/test_s3_cluster/data/database/part2.csv similarity index 100% rename from tests/integration/test_s3_distributed/data/database/part2.csv rename to tests/integration/test_s3_cluster/data/database/part2.csv diff --git a/tests/integration/test_s3_distributed/data/database/partition675.csv b/tests/integration/test_s3_cluster/data/database/partition675.csv similarity index 100% rename from tests/integration/test_s3_distributed/data/database/partition675.csv rename to tests/integration/test_s3_cluster/data/database/partition675.csv diff --git a/tests/integration/test_s3_distributed/test.py b/tests/integration/test_s3_cluster/test.py similarity index 95% rename from tests/integration/test_s3_distributed/test.py rename to tests/integration/test_s3_cluster/test.py index 671d611fca4..f60e6e6862f 100644 --- a/tests/integration/test_s3_distributed/test.py +++ b/tests/integration/test_s3_cluster/test.py @@ -48,7 +48,7 @@ def test_select_all(started_cluster): ORDER BY (name, value, polygon)""") # print(pure_s3) s3_distibuted = node.query(""" - SELECT * from s3Distributed( + SELECT * from s3Cluster( 'cluster_simple', 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') ORDER BY (name, value, polygon)""") @@ -66,7 +66,7 @@ def test_count(started_cluster): 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))')""") # print(pure_s3) s3_distibuted = node.query(""" - SELECT count(*) from s3Distributed( + SELECT count(*) from s3Cluster( 'cluster_simple', 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))')""") @@ -96,12 +96,12 @@ def test_union_all(started_cluster): s3_distibuted = node.query(""" SELECT * FROM ( - SELECT * from s3Distributed( + SELECT * from s3Cluster( 'cluster_simple', 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') UNION ALL - SELECT * from s3Distributed( + SELECT * from s3Cluster( 'cluster_simple', 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') @@ -116,12 +116,12 @@ def test_union_all(started_cluster): def test_wrong_cluster(started_cluster): node = started_cluster.instances['s0_0_0'] error = node.query_and_get_error(""" - SELECT count(*) from s3Distributed( + SELECT count(*) from s3Cluster( 'non_existent_cluster', 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') UNION ALL - SELECT count(*) from s3Distributed( + SELECT count(*) from s3Cluster( 'non_existent_cluster', 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon 
Array(Array(Tuple(Float64, Float64)))')""") diff --git a/tests/queries/0_stateless/01801_s3_distributed.reference b/tests/queries/0_stateless/01801_s3_cluster.reference similarity index 100% rename from tests/queries/0_stateless/01801_s3_distributed.reference rename to tests/queries/0_stateless/01801_s3_cluster.reference diff --git a/tests/queries/0_stateless/01801_s3_distributed.sh b/tests/queries/0_stateless/01801_s3_cluster.sh similarity index 70% rename from tests/queries/0_stateless/01801_s3_distributed.sh rename to tests/queries/0_stateless/01801_s3_cluster.sh index 05710a21214..215d5500be5 100755 --- a/tests/queries/0_stateless/01801_s3_distributed.sh +++ b/tests/queries/0_stateless/01801_s3_cluster.sh @@ -9,4 +9,4 @@ CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) ${CLICKHOUSE_CLIENT_BINARY} --send_logs_level="none" -q "SELECT * FROM s3('https://s3.mds.yandex.net/clickhouse-test-reports/*/*/functional_stateless_tests_(ubsan)/test_results.tsv', '$S3_ACCESS_KEY_ID', '$S3_SECRET_ACCESS', 'LineAsString', 'line String') limit 100 FORMAT Null;" -${CLICKHOUSE_CLIENT_BINARY} --send_logs_level="none" -q "SELECT * FROM s3Distributed('test_cluster_two_shards', 'https://s3.mds.yandex.net/clickhouse-test-reports/*/*/functional_stateless_tests_(ubsan)/test_results.tsv', '$S3_ACCESS_KEY_ID', '$S3_SECRET_ACCESS', 'LineAsString', 'line String') limit 100 FORMAT Null;" +${CLICKHOUSE_CLIENT_BINARY} --send_logs_level="none" -q "SELECT * FROM s3Cluster('test_cluster_two_shards', 'https://s3.mds.yandex.net/clickhouse-test-reports/*/*/functional_stateless_tests_(ubsan)/test_results.tsv', '$S3_ACCESS_KEY_ID', '$S3_SECRET_ACCESS', 'LineAsString', 'line String') limit 100 FORMAT Null;" From f36a715c324658309baa19f9225478ca8be9f516 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Tue, 13 Apr 2021 00:55:48 +0300 Subject: [PATCH 078/108] delete unused code --- src/TableFunctions/TableFunctionS3Cluster.cpp | 5 ----- src/TableFunctions/TableFunctionS3Cluster.h | 12 ------------ 2 files changed, 17 deletions(-) diff --git a/src/TableFunctions/TableFunctionS3Cluster.cpp b/src/TableFunctions/TableFunctionS3Cluster.cpp index f0d0da9a759..ab73bd51b75 100644 --- a/src/TableFunctions/TableFunctionS3Cluster.cpp +++ b/src/TableFunctions/TableFunctionS3Cluster.cpp @@ -121,11 +121,6 @@ void registerTableFunctionS3Cluster(TableFunctionFactory & factory) factory.registerFunction(); } -void registerTableFunctionCOSDistributed(TableFunctionFactory & factory) -{ - factory.registerFunction(); -} - } diff --git a/src/TableFunctions/TableFunctionS3Cluster.h b/src/TableFunctions/TableFunctionS3Cluster.h index 4e3ac227c02..02f3b634ac9 100644 --- a/src/TableFunctions/TableFunctionS3Cluster.h +++ b/src/TableFunctions/TableFunctionS3Cluster.h @@ -52,18 +52,6 @@ protected: String compression_method = "auto"; }; -class TableFunctionCOSDistributed : public TableFunctionS3Cluster -{ -public: - static constexpr auto name = "cosnDistributed"; - std::string getName() const override - { - return name; - } -private: - const char * getStorageTypeName() const override { return "COSNDistributed"; } -}; - } #endif From 024374a2ece25763f6cb87a92e9edbde3e3bf67b Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Tue, 13 Apr 2021 13:59:02 +0300 Subject: [PATCH 079/108] review fixes --- src/Client/Connection.cpp | 4 ++-- src/Client/Connection.h | 2 +- src/Client/HedgedConnections.h | 2 +- src/Client/IConnections.h | 2 +- src/Client/MultiplexedConnections.cpp | 2 +- src/Client/MultiplexedConnections.h | 2 +- src/Core/Defines.h | 3 +-- 
src/DataStreams/RemoteQueryExecutor.h | 2 +- src/Interpreters/Context.h | 2 +- src/Server/TCPHandler.cpp | 6 ++---- src/Server/TCPHandler.h | 2 +- src/Storages/StorageS3.cpp | 22 ++++++++------------- src/Storages/StorageS3.h | 4 ++-- src/Storages/StorageS3Cluster.cpp | 8 +++++--- src/TableFunctions/TableFunctionS3Cluster.h | 3 +-- 15 files changed, 29 insertions(+), 37 deletions(-) diff --git a/src/Client/Connection.cpp b/src/Client/Connection.cpp index 9988c3d6b5c..70d8109545b 100644 --- a/src/Client/Connection.cpp +++ b/src/Client/Connection.cpp @@ -552,11 +552,11 @@ void Connection::sendIgnoredPartUUIDs(const std::vector & uuids) } -void Connection::sendReadTaskResponse(const std::optional & response) +void Connection::sendReadTaskResponse(const String & response) { writeVarUInt(Protocol::Client::ReadTaskResponse, *out); writeVarUInt(DBMS_CLUSTER_PROCESSING_PROTOCOL_VERSION, *out); - writeStringBinary(response.has_value() ? String(*response) : "", *out); + writeStringBinary(response, *out); out->next(); } diff --git a/src/Client/Connection.h b/src/Client/Connection.h index 3c45832db8a..b4b0d36fb1f 100644 --- a/src/Client/Connection.h +++ b/src/Client/Connection.h @@ -159,7 +159,7 @@ public: /// Send parts' uuids to excluded them from query processing void sendIgnoredPartUUIDs(const std::vector & uuids); - void sendReadTaskResponse(const std::optional &); + void sendReadTaskResponse(const String &); /// Send prepared block of data (serialized and, if need, compressed), that will be read from 'input'. /// You could pass size of serialized/compressed block. diff --git a/src/Client/HedgedConnections.h b/src/Client/HedgedConnections.h index bf78dca236f..9f7d8837536 100644 --- a/src/Client/HedgedConnections.h +++ b/src/Client/HedgedConnections.h @@ -90,7 +90,7 @@ public: const ClientInfo & client_info, bool with_pending_data) override; - void sendReadTaskResponse(const std::optional &) override + void sendReadTaskResponse(const String &) override { throw Exception("sendReadTaskResponse in not supported with HedgedConnections", ErrorCodes::LOGICAL_ERROR); } diff --git a/src/Client/IConnections.h b/src/Client/IConnections.h index 65e3542f4f7..d251a5fb3ab 100644 --- a/src/Client/IConnections.h +++ b/src/Client/IConnections.h @@ -24,7 +24,7 @@ public: const ClientInfo & client_info, bool with_pending_data) = 0; - virtual void sendReadTaskResponse(const std::optional &) = 0; + virtual void sendReadTaskResponse(const String &) = 0; /// Get packet from any replica. 
virtual Packet receivePacket() = 0; diff --git a/src/Client/MultiplexedConnections.cpp b/src/Client/MultiplexedConnections.cpp index 2d60183e098..2992e991df7 100644 --- a/src/Client/MultiplexedConnections.cpp +++ b/src/Client/MultiplexedConnections.cpp @@ -156,7 +156,7 @@ void MultiplexedConnections::sendIgnoredPartUUIDs(const std::vector & uuid } -void MultiplexedConnections::sendReadTaskResponse(const std::optional & response) +void MultiplexedConnections::sendReadTaskResponse(const String & response) { std::lock_guard lock(cancel_mutex); if (cancelled) diff --git a/src/Client/MultiplexedConnections.h b/src/Client/MultiplexedConnections.h index de8f5479bbf..f642db1c4cd 100644 --- a/src/Client/MultiplexedConnections.h +++ b/src/Client/MultiplexedConnections.h @@ -39,7 +39,7 @@ public: const ClientInfo & client_info, bool with_pending_data) override; - void sendReadTaskResponse(const std::optional &) override; + void sendReadTaskResponse(const String &) override; Packet receivePacket() override; diff --git a/src/Core/Defines.h b/src/Core/Defines.h index 6cfe2273c7d..668a60f9be8 100644 --- a/src/Core/Defines.h +++ b/src/Core/Defines.h @@ -74,8 +74,7 @@ /// Minimum revision supporting OpenTelemetry #define DBMS_MIN_REVISION_WITH_OPENTELEMETRY 54442 -/// Minimum revision supporting task processing on cluster -#define DBMS_MIN_REVISION_WITH_CLUSTER_PROCESSING 54443 + #define DBMS_CLUSTER_PROCESSING_PROTOCOL_VERSION 1 /// Minimum revision supporting interserver secret. diff --git a/src/DataStreams/RemoteQueryExecutor.h b/src/DataStreams/RemoteQueryExecutor.h index 584961f1baa..a9cffd9cf97 100644 --- a/src/DataStreams/RemoteQueryExecutor.h +++ b/src/DataStreams/RemoteQueryExecutor.h @@ -27,7 +27,7 @@ using ProfileInfoCallback = std::function()>; +using TaskIterator = std::function; /// This class allows one to launch queries on remote replicas of one shard and get results class RemoteQueryExecutor diff --git a/src/Interpreters/Context.h b/src/Interpreters/Context.h index e5c49458ee3..680ee7c779f 100644 --- a/src/Interpreters/Context.h +++ b/src/Interpreters/Context.h @@ -129,7 +129,7 @@ using InputInitializer = std::function; using InputBlocksReader = std::function; /// Used in distributed task processing -using ReadTaskCallback = std::function()>; +using ReadTaskCallback = std::function; /// An empty interface for an arbitrary object that may be attached by a shared pointer /// to query context, when using ClickHouse as a library. 
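Taken together, TaskIterator (used by RemoteQueryExecutor) and ReadTaskCallback (stored in the query context) form a simple pull protocol: a worker sends ReadTaskRequest, the initiator replies with the next S3 key, and an empty string means no keys are left. A minimal sketch of both sides, assuming the types introduced in this patch series; local variable names are illustrative only:

    /// Initiator side: resolve the glob once, then hand out one key per request.
    auto glob_iterator = std::make_shared<StorageS3Source::DisclosedGlobIterator>(*client, s3_uri);
    auto read_task_callback = [glob_iterator]() -> String
    {
        return glob_iterator->next();   /// returns "" once all keys have been dispatched
    };

    /// Worker side: wrap the callback received from the query context into the same
    /// IteratorWrapper that StorageS3Source already consumes for local globs.
    auto file_iterator = std::make_shared<StorageS3Source::IteratorWrapper>(
        [callback = context->getReadTaskCallback()]() -> String { return callback(); });
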
diff --git a/src/Server/TCPHandler.cpp b/src/Server/TCPHandler.cpp index a997884d68b..c6cd74f6c6a 100644 --- a/src/Server/TCPHandler.cpp +++ b/src/Server/TCPHandler.cpp @@ -289,7 +289,7 @@ void TCPHandler::runImpl() customizeContext(query_context); /// This callback is needed for requesting read tasks inside pipeline for distributed processing - query_context->setReadTaskCallback([this]() -> std::optional + query_context->setReadTaskCallback([this]() -> String { std::lock_guard lock(task_callback_mutex); sendReadTaskRequestAssumeLocked(); @@ -1037,7 +1037,7 @@ void TCPHandler::receiveIgnoredPartUUIDs() } -std::optional TCPHandler::receiveReadTaskResponseAssumeLocked() +String TCPHandler::receiveReadTaskResponseAssumeLocked() { UInt64 packet_type = 0; readVarUInt(packet_type, *in); @@ -1060,8 +1060,6 @@ std::optional TCPHandler::receiveReadTaskResponseAssumeLocked() throw Exception("Protocol version for distributed processing mismatched", ErrorCodes::UNKNOWN_PROTOCOL); String response; readStringBinary(response, *in); - if (response.empty()) - return std::nullopt; return response; } diff --git a/src/Server/TCPHandler.h b/src/Server/TCPHandler.h index 67dc2ade1bc..708d21c8251 100644 --- a/src/Server/TCPHandler.h +++ b/src/Server/TCPHandler.h @@ -170,7 +170,7 @@ private: bool receivePacket(); void receiveQuery(); void receiveIgnoredPartUUIDs(); - std::optional receiveReadTaskResponseAssumeLocked(); + String receiveReadTaskResponseAssumeLocked(); bool receiveData(bool scalar); bool readDataNext(const size_t & poll_interval, const int & receive_timeout); void readData(const Settings & connection_settings); diff --git a/src/Storages/StorageS3.cpp b/src/Storages/StorageS3.cpp index 86ed97052fa..0c4ed6482dc 100644 --- a/src/Storages/StorageS3.cpp +++ b/src/Storages/StorageS3.cpp @@ -74,7 +74,7 @@ public: fillInternalBufferAssumeLocked(); } - std::optional next() + String next() { std::lock_guard lock(mutex); return nextAssumeLocked(); @@ -82,7 +82,7 @@ public: private: - std::optional nextAssumeLocked() + String nextAssumeLocked() { if (buffer_iter != buffer.end()) { @@ -92,7 +92,7 @@ private: } if (is_finished) - return std::nullopt; + return {}; fillInternalBufferAssumeLocked(); @@ -141,7 +141,7 @@ private: StorageS3Source::DisclosedGlobIterator::DisclosedGlobIterator(Aws::S3::S3Client & client_, const S3::URI & globbed_uri_) : pimpl(std::make_shared(client_, globbed_uri_)) {} -std::optional StorageS3Source::DisclosedGlobIterator::next() +String StorageS3Source::DisclosedGlobIterator::next() { return pimpl->next(); } @@ -190,17 +190,11 @@ StorageS3Source::StorageS3Source( bool StorageS3Source::initialize() { - String current_key; - if (auto result = (*file_iterator)()) - { - current_key = result.value(); - file_path = bucket + "/" + current_key; - } - else - { - /// Do not initialize read_buffer and stream. 
+ String current_key = (*file_iterator)(); + if (current_key.empty()) return false; - } + + file_path = bucket + "/" + current_key; read_buf = wrapReadBufferWithCompressionMethod( std::make_unique(client, bucket, current_key), chooseCompressionMethod(current_key, compression_hint)); diff --git a/src/Storages/StorageS3.h b/src/Storages/StorageS3.h index ceac2d1b46f..512074479e5 100644 --- a/src/Storages/StorageS3.h +++ b/src/Storages/StorageS3.h @@ -35,14 +35,14 @@ public: { public: DisclosedGlobIterator(Aws::S3::S3Client &, const S3::URI &); - std::optional next(); + String next(); private: class Impl; /// shared_ptr to have copy constructor std::shared_ptr pimpl; }; - using IteratorWrapper = std::function()>; + using IteratorWrapper = std::function; static Block getHeader(Block sample_block, bool with_path_column, bool with_file_column); diff --git a/src/Storages/StorageS3Cluster.cpp b/src/Storages/StorageS3Cluster.cpp index b91880e143d..0b6a2b4b070 100644 --- a/src/Storages/StorageS3Cluster.cpp +++ b/src/Storages/StorageS3Cluster.cpp @@ -107,7 +107,7 @@ Pipe StorageS3Cluster::read( /// Save callback not to capture context by reference of copy it. auto file_iterator = std::make_shared( - [callback = context->getReadTaskCallback()]() -> std::optional { + [callback = context->getReadTaskCallback()]() -> String { return callback(); }); @@ -127,7 +127,7 @@ Pipe StorageS3Cluster::read( StorageS3::updateClientAndAuthSettings(context, client_auth); auto iterator = std::make_shared(*client_auth.client, client_auth.uri); - auto callback = std::make_shared([iterator]() mutable -> std::optional + auto callback = std::make_shared([iterator]() mutable -> String { return iterator->next(); }); @@ -141,6 +141,8 @@ Pipe StorageS3Cluster::read( Pipes pipes; connections.reserve(cluster->getShardCount()); + const bool add_agg_info = processed_stage == QueryProcessingStage::WithMergeableState; + for (const auto & replicas : cluster->getShardsAddresses()) { /// There will be only one replica, because we consider each replica as a shard @@ -160,7 +162,7 @@ Pipe StorageS3Cluster::read( *connections.back(), queryToString(query_info.query), header, context, /*throttler=*/nullptr, scalars, Tables(), processed_stage, callback); - pipes.emplace_back(std::make_shared(remote_query_executor, false, false)); + pipes.emplace_back(std::make_shared(remote_query_executor, add_agg_info, false)); } } diff --git a/src/TableFunctions/TableFunctionS3Cluster.h b/src/TableFunctions/TableFunctionS3Cluster.h index 02f3b634ac9..cc857725ce6 100644 --- a/src/TableFunctions/TableFunctionS3Cluster.h +++ b/src/TableFunctions/TableFunctionS3Cluster.h @@ -16,8 +16,7 @@ class Context; * s3Cluster(cluster_name, source, [access_key_id, secret_access_key,] format, structure) * A table function, which allows to process many files from S3 on a specific cluster * On initiator it creates a connection to _all_ nodes in cluster, discloses asterics - * in S3 file path and register all tasks (paths in S3) in NextTaskResolver to dispatch - * them dynamically. + * in S3 file path and dispatch each file dynamically. * On worker node it asks initiator about next task to process, processes it. * This is repeated until the tasks are finished. 
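 * Example call (adapted from the integration test in this patch series; the cluster,
 * endpoint and credentials below are test fixtures, not production values):
 *   SELECT * FROM s3Cluster('cluster_simple',
 *       'http://minio1:9001/root/data/{clickhouse,database}/*',
 *       'minio', 'minio123', 'CSV', 'name String, value UInt32')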
*/ From c9ae4cb467625de111b6a85d770e07f6eac74250 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Tue, 13 Apr 2021 17:26:43 +0300 Subject: [PATCH 080/108] multiplem pipes on remote --- src/Storages/StorageS3Cluster.cpp | 28 ++++++++++++++++++---------- 1 file changed, 18 insertions(+), 10 deletions(-) diff --git a/src/Storages/StorageS3Cluster.cpp b/src/Storages/StorageS3Cluster.cpp index 0b6a2b4b070..dee20d2dea3 100644 --- a/src/Storages/StorageS3Cluster.cpp +++ b/src/Storages/StorageS3Cluster.cpp @@ -88,7 +88,7 @@ Pipe StorageS3Cluster::read( ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, - unsigned /*num_streams*/) + unsigned num_streams) { StorageS3::updateClientAndAuthSettings(context, client_auth); @@ -110,16 +110,24 @@ Pipe StorageS3Cluster::read( [callback = context->getReadTaskCallback()]() -> String { return callback(); }); + + Pipes pipes; + for (size_t i = 0; i < num_streams; ++i) + { + pipes.emplace_back(std::make_shared( + need_path_column, need_file_column, format_name, getName(), + metadata_snapshot->getSampleBlock(), context, + metadata_snapshot->getColumns(), max_block_size, + compression_method, + client_auth.client, + client_auth.uri.bucket, + file_iterator + )); + } + auto pipe = Pipe::unitePipes(std::move(pipes)); - return Pipe(std::make_shared( - need_path_column, need_file_column, format_name, getName(), - metadata_snapshot->getSampleBlock(), context, - metadata_snapshot->getColumns(), max_block_size, - compression_method, - client_auth.client, - client_auth.uri.bucket, - file_iterator - )); + narrowPipe(pipe, num_streams); + return pipe; } /// The code from here and below executes on initiator From b05718feac4dd59c707a838b8f6318c3de24a83c Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Tue, 13 Apr 2021 17:28:05 +0300 Subject: [PATCH 081/108] style --- src/Storages/StorageS3Cluster.cpp | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/Storages/StorageS3Cluster.cpp b/src/Storages/StorageS3Cluster.cpp index dee20d2dea3..8b159d0baad 100644 --- a/src/Storages/StorageS3Cluster.cpp +++ b/src/Storages/StorageS3Cluster.cpp @@ -110,7 +110,7 @@ Pipe StorageS3Cluster::read( [callback = context->getReadTaskCallback()]() -> String { return callback(); }); - + Pipes pipes; for (size_t i = 0; i < num_streams; ++i) { From 3954eff276f94a1417e06b73f2d989a041b0ded1 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Tue, 13 Apr 2021 20:55:01 +0300 Subject: [PATCH 082/108] do not use cluster from config on remote replicas --- src/Storages/StorageS3Cluster.cpp | 3 ++- src/Storages/StorageS3Cluster.h | 2 -- 2 files changed, 2 insertions(+), 3 deletions(-) diff --git a/src/Storages/StorageS3Cluster.cpp b/src/Storages/StorageS3Cluster.cpp index 8b159d0baad..e8b73bc2acb 100644 --- a/src/Storages/StorageS3Cluster.cpp +++ b/src/Storages/StorageS3Cluster.cpp @@ -69,7 +69,6 @@ StorageS3Cluster::StorageS3Cluster( , client_auth{S3::URI{Poco::URI{filename_}}, access_key_id_, secret_access_key_, max_connections_, {}, {}} , filename(filename_) , cluster_name(cluster_name_) - , cluster(context_->getCluster(cluster_name)->getClusterWithReplicasAsShards(context_->getSettings())) , format_name(format_name_) , compression_method(compression_method_) { @@ -131,6 +130,8 @@ Pipe StorageS3Cluster::read( } /// The code from here and below executes on initiator + + auto cluster = context->getCluster(cluster_name)->getClusterWithReplicasAsShards(context->getSettings()); S3::URI s3_uri(Poco::URI{filename}); 
StorageS3::updateClientAndAuthSettings(context, client_auth); diff --git a/src/Storages/StorageS3Cluster.h b/src/Storages/StorageS3Cluster.h index f7d9481d991..c98840d62fc 100644 --- a/src/Storages/StorageS3Cluster.h +++ b/src/Storages/StorageS3Cluster.h @@ -53,8 +53,6 @@ private: String filename; String cluster_name; - ClusterPtr cluster; - String format_name; String compression_method; }; From 4914860f91d82e7bde6e741948d6f3c4d767e959 Mon Sep 17 00:00:00 2001 From: Alexey Milovidov Date: Tue, 13 Apr 2021 22:41:07 +0300 Subject: [PATCH 083/108] Fix style --- tests/queries/0_stateless/01812_basic_auth_http_server.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tests/queries/0_stateless/01812_basic_auth_http_server.sh b/tests/queries/0_stateless/01812_basic_auth_http_server.sh index 6f553596656..4b993137bbd 100755 --- a/tests/queries/0_stateless/01812_basic_auth_http_server.sh +++ b/tests/queries/0_stateless/01812_basic_auth_http_server.sh @@ -13,7 +13,7 @@ CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) # In this test we do the opposite: passing the invalid credentials while server is accepting default user without a password. # And if the bug exists, they will be ignored (treat as empty credentials) and query succeed. -for i in {3950..4100}; do ${CLICKHOUSE_CURL} --user default:12345 "${CLICKHOUSE_URL}&query=SELECT+1"$(perl -e "print '+'x$i") | grep -v -F 'password' && echo 'Fail' ||:; done +for i in {3950..4100}; do ${CLICKHOUSE_CURL} --user default:12345 "${CLICKHOUSE_URL}&query=SELECT+1"$(perl -e "print '+'x$i") | grep -v -F 'password' ||:; done # You can check that the bug exists in old version by running the old server in Docker: # docker run --network host -it --rm yandex/clickhouse-server:1.1.54385 From 616d7d19f8d99fdd3d85dcf88290dc0fc546c711 Mon Sep 17 00:00:00 2001 From: Maksim Kita Date: Tue, 13 Apr 2021 22:53:36 +0300 Subject: [PATCH 084/108] LibraryDictionary bridge library interface --- programs/library-bridge/CMakeLists.txt | 2 +- .../library-bridge/LibraryInterface.cpp | 3 +- .../library-bridge/LibraryInterface.h | 0 programs/library-bridge/LibraryUtils.h | 3 +- programs/library-bridge/library-log.cpp | 66 ------------------- src/Dictionaries/LibraryDictionarySource.cpp | 1 - 6 files changed, 5 insertions(+), 70 deletions(-) rename src/Dictionaries/LibraryDictionarySourceExternal.cpp => programs/library-bridge/LibraryInterface.cpp (97%) rename src/Dictionaries/LibraryDictionarySourceExternal.h => programs/library-bridge/LibraryInterface.h (100%) delete mode 100644 programs/library-bridge/library-log.cpp diff --git a/programs/library-bridge/CMakeLists.txt b/programs/library-bridge/CMakeLists.txt index a9aa5b4f366..7147c2875f2 100644 --- a/programs/library-bridge/CMakeLists.txt +++ b/programs/library-bridge/CMakeLists.txt @@ -1,6 +1,6 @@ set (CLICKHOUSE_LIBRARY_BRIDGE_SOURCES library-bridge.cpp - library-log.cpp + LibraryInterface.cpp LibraryBridge.cpp Handlers.cpp HandlerFactory.cpp diff --git a/src/Dictionaries/LibraryDictionarySourceExternal.cpp b/programs/library-bridge/LibraryInterface.cpp similarity index 97% rename from src/Dictionaries/LibraryDictionarySourceExternal.cpp rename to programs/library-bridge/LibraryInterface.cpp index 259d0a2846a..3975368c17f 100644 --- a/src/Dictionaries/LibraryDictionarySourceExternal.cpp +++ b/programs/library-bridge/LibraryInterface.cpp @@ -1,4 +1,5 @@ -#include "LibraryDictionarySourceExternal.h" +#include "LibraryInterface.h" + #include namespace diff --git 
a/src/Dictionaries/LibraryDictionarySourceExternal.h b/programs/library-bridge/LibraryInterface.h similarity index 100% rename from src/Dictionaries/LibraryDictionarySourceExternal.h rename to programs/library-bridge/LibraryInterface.h diff --git a/programs/library-bridge/LibraryUtils.h b/programs/library-bridge/LibraryUtils.h index 359d1de93e3..8ced8df1c48 100644 --- a/programs/library-bridge/LibraryUtils.h +++ b/programs/library-bridge/LibraryUtils.h @@ -1,11 +1,12 @@ #pragma once #include -#include #include #include #include +#include "LibraryInterface.h" + namespace DB { diff --git a/programs/library-bridge/library-log.cpp b/programs/library-bridge/library-log.cpp deleted file mode 100644 index 89fb31623b3..00000000000 --- a/programs/library-bridge/library-log.cpp +++ /dev/null @@ -1,66 +0,0 @@ -#include -#include - -namespace -{ -const char DICT_LOGGER_NAME[] = "LibraryDictionarySourceExternal"; -} - -namespace ClickHouseLibrary -{ - -std::string_view LIBRARY_CREATE_NEW_FUNC_NAME = "ClickHouseDictionary_v3_libNew"; -std::string_view LIBRARY_CLONE_FUNC_NAME = "ClickHouseDictionary_v3_libClone"; -std::string_view LIBRARY_DELETE_FUNC_NAME = "ClickHouseDictionary_v3_libDelete"; - -std::string_view LIBRARY_DATA_NEW_FUNC_NAME = "ClickHouseDictionary_v3_dataNew"; -std::string_view LIBRARY_DATA_DELETE_FUNC_NAME = "ClickHouseDictionary_v3_dataDelete"; - -std::string_view LIBRARY_LOAD_ALL_FUNC_NAME = "ClickHouseDictionary_v3_loadAll"; -std::string_view LIBRARY_LOAD_IDS_FUNC_NAME = "ClickHouseDictionary_v3_loadIds"; -std::string_view LIBRARY_LOAD_KEYS_FUNC_NAME = "ClickHouseDictionary_v3_loadKeys"; - -std::string_view LIBRARY_IS_MODIFIED_FUNC_NAME = "ClickHouseDictionary_v3_isModified"; -std::string_view LIBRARY_SUPPORTS_SELECTIVE_LOAD_FUNC_NAME = "ClickHouseDictionary_v3_supportsSelectiveLoad"; - -void log(LogLevel level, CString msg) -{ - auto & logger = Poco::Logger::get(DICT_LOGGER_NAME); - switch (level) - { - case LogLevel::TRACE: - if (logger.trace()) - logger.trace(msg); - break; - case LogLevel::DEBUG: - if (logger.debug()) - logger.debug(msg); - break; - case LogLevel::INFORMATION: - if (logger.information()) - logger.information(msg); - break; - case LogLevel::NOTICE: - if (logger.notice()) - logger.notice(msg); - break; - case LogLevel::WARNING: - if (logger.warning()) - logger.warning(msg); - break; - case LogLevel::ERROR: - if (logger.error()) - logger.error(msg); - break; - case LogLevel::CRITICAL: - if (logger.critical()) - logger.critical(msg); - break; - case LogLevel::FATAL: - if (logger.fatal()) - logger.fatal(msg); - break; - } -} - -} diff --git a/src/Dictionaries/LibraryDictionarySource.cpp b/src/Dictionaries/LibraryDictionarySource.cpp index d327c85f979..b14545cd6d4 100644 --- a/src/Dictionaries/LibraryDictionarySource.cpp +++ b/src/Dictionaries/LibraryDictionarySource.cpp @@ -10,7 +10,6 @@ #include "DictionarySourceFactory.h" #include "DictionarySourceHelpers.h" #include "DictionaryStructure.h" -#include "LibraryDictionarySourceExternal.h" #include "registerDictionaries.h" #include #include From 5aac762d9c0695ee3fa33446b34879b54e6695f2 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Tue, 13 Apr 2021 22:57:01 +0300 Subject: [PATCH 085/108] apply suggestion from Maksim Polyanskiy --- src/TableFunctions/TableFunctionS3.cpp | 56 +++++++++---------- src/TableFunctions/TableFunctionS3Cluster.cpp | 52 +++++++++-------- 2 files changed, 51 insertions(+), 57 deletions(-) diff --git a/src/TableFunctions/TableFunctionS3.cpp b/src/TableFunctions/TableFunctionS3.cpp index 
8e70c31542a..2da597f49ff 100644 --- a/src/TableFunctions/TableFunctionS3.cpp +++ b/src/TableFunctions/TableFunctionS3.cpp @@ -44,38 +44,34 @@ void TableFunctionS3::parseArguments(const ASTPtr & ast_function, ContextPtr con for (auto & arg : args) arg = evaluateConstantExpressionOrIdentifierAsLiteral(arg, context); + /// Size -> argument indexes + static auto size_to_args = std::map> + { + {3, {{"format", 1}, {"structure", 2}}}, + {4, {{"format", 1}, {"structure", 2}, {"compression_method", 3}}}, + {5, {{"access_key_id", 1}, {"secret_access_key", 2}, {"format", 3}, {"structure", 4}}}, + {6, {{"access_key_id", 1}, {"secret_access_key", 2}, {"format", 3}, {"structure", 4}, {"compression_method", 5}}} + }; + + /// This argument is always the first filename = args[0]->as().value.safeGet(); - if (args.size() == 3) - { - format = args[1]->as().value.safeGet(); - structure = args[2]->as().value.safeGet(); - } - else if (args.size() == 4) - { - format = args[1]->as().value.safeGet(); - structure = args[2]->as().value.safeGet(); - compression_method = args[3]->as().value.safeGet(); - } - else if (args.size() == 5) - { - access_key_id = args[1]->as().value.safeGet(); - secret_access_key = args[2]->as().value.safeGet(); - format = args[3]->as().value.safeGet(); - structure = args[4]->as().value.safeGet(); - } - else if (args.size() == 6) - { - access_key_id = args[1]->as().value.safeGet(); - secret_access_key = args[2]->as().value.safeGet(); - format = args[3]->as().value.safeGet(); - structure = args[4]->as().value.safeGet(); - compression_method = args[5]->as().value.safeGet(); - } - else - { - throw Exception("Table function '" + getName() + "' must have arguments.", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); - } + auto & args_to_idx = size_to_args[args.size()]; + + if (args_to_idx.contains("format")) + format = args[args_to_idx["format"]]->as().value.safeGet(); + + if (args_to_idx.contains("structure")) + structure = args[args_to_idx["structure"]]->as().value.safeGet(); + + if (args_to_idx.contains("compression_method")) + compression_method = args[args_to_idx["compression_method"]]->as().value.safeGet(); + + if (args_to_idx.contains("access_key_id")) + access_key_id = args[args_to_idx["access_key_id"]]->as().value.safeGet(); + + if (args_to_idx.contains("secret_access_key")) + secret_access_key = args[args_to_idx["secret_access_key"]]->as().value.safeGet(); } ColumnsDescription TableFunctionS3::getActualTableStructure(ContextPtr context) const diff --git a/src/TableFunctions/TableFunctionS3Cluster.cpp b/src/TableFunctions/TableFunctionS3Cluster.cpp index ab73bd51b75..30526057ca5 100644 --- a/src/TableFunctions/TableFunctionS3Cluster.cpp +++ b/src/TableFunctions/TableFunctionS3Cluster.cpp @@ -61,37 +61,35 @@ void TableFunctionS3Cluster::parseArguments(const ASTPtr & ast_function, Context for (auto & arg : args) arg = evaluateConstantExpressionOrIdentifierAsLiteral(arg, context); + /// This arguments are always the first cluster_name = args[0]->as().value.safeGet(); filename = args[1]->as().value.safeGet(); - if (args.size() == 4) + /// Size -> argument indexes + static auto size_to_args = std::map> { - format = args[2]->as().value.safeGet(); - structure = args[3]->as().value.safeGet(); - } - else if (args.size() == 5) - { - format = args[2]->as().value.safeGet(); - structure = args[3]->as().value.safeGet(); - compression_method = args[4]->as().value.safeGet(); - } - else if (args.size() == 6) - { - access_key_id = args[2]->as().value.safeGet(); - secret_access_key = 
args[3]->as().value.safeGet(); - format = args[4]->as().value.safeGet(); - structure = args[5]->as().value.safeGet(); - } - else if (args.size() == 7) - { - access_key_id = args[2]->as().value.safeGet(); - secret_access_key = args[3]->as().value.safeGet(); - format = args[4]->as().value.safeGet(); - structure = args[5]->as().value.safeGet(); - compression_method = args[4]->as().value.safeGet(); - } - else - throw Exception(message, ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); + {4, {{"format", 2}, {"structure", 3}}}, + {5, {{"format", 2}, {"structure", 3}, {"compression_method", 4}}}, + {6, {{"access_key_id", 2}, {"secret_access_key", 3}, {"format", 4}, {"structure", 5}}}, + {7, {{"access_key_id", 2}, {"secret_access_key", 3}, {"format", 4}, {"structure", 5}, {"compression_method", 6}}} + }; + + auto & args_to_idx = size_to_args[args.size()]; + + if (args_to_idx.contains("format")) + format = args[args_to_idx["format"]]->as().value.safeGet(); + + if (args_to_idx.contains("structure")) + structure = args[args_to_idx["structure"]]->as().value.safeGet(); + + if (args_to_idx.contains("compression_method")) + compression_method = args[args_to_idx["compression_method"]]->as().value.safeGet(); + + if (args_to_idx.contains("access_key_id")) + access_key_id = args[args_to_idx["access_key_id"]]->as().value.safeGet(); + + if (args_to_idx.contains("secret_access_key")) + secret_access_key = args[args_to_idx["secret_access_key"]]->as().value.safeGet(); } From 07b610cf70b726da2d9156c01891b9f703362761 Mon Sep 17 00:00:00 2001 From: Alexey Milovidov Date: Tue, 13 Apr 2021 23:04:13 +0300 Subject: [PATCH 086/108] Remove useless files --- .../InternalTextLogsRowOutputStream.h | 2 +- src/Formats/IRowInputStream.cpp | 18 ------ src/Formats/IRowInputStream.h | 51 --------------- src/Formats/IRowOutputStream.cpp | 37 ----------- src/Formats/IRowOutputStream.h | 63 ------------------- src/Formats/ya.make | 2 - src/Processors/Formats/IRowInputFormat.h | 2 +- 7 files changed, 2 insertions(+), 173 deletions(-) delete mode 100644 src/Formats/IRowInputStream.cpp delete mode 100644 src/Formats/IRowInputStream.h delete mode 100644 src/Formats/IRowOutputStream.cpp delete mode 100644 src/Formats/IRowOutputStream.h diff --git a/src/DataStreams/InternalTextLogsRowOutputStream.h b/src/DataStreams/InternalTextLogsRowOutputStream.h index 0f333f70d18..8ade76b34a7 100644 --- a/src/DataStreams/InternalTextLogsRowOutputStream.h +++ b/src/DataStreams/InternalTextLogsRowOutputStream.h @@ -8,7 +8,7 @@ namespace DB /// Prints internal server logs /// Input blocks have to have the same structure as SystemLogsQueue::getSampleBlock() -/// NOTE: IRowOutputStream does not suite well for this case +/// NOTE: IRowOutputFormat does not suite well for this case class InternalTextLogsRowOutputStream : public IBlockOutputStream { public: diff --git a/src/Formats/IRowInputStream.cpp b/src/Formats/IRowInputStream.cpp deleted file mode 100644 index f3603982de5..00000000000 --- a/src/Formats/IRowInputStream.cpp +++ /dev/null @@ -1,18 +0,0 @@ -#include -#include - - -namespace DB -{ - -namespace ErrorCodes -{ - extern const int NOT_IMPLEMENTED; -} - -void IRowInputStream::syncAfterError() -{ - throw Exception("Method syncAfterError is not implemented for input format", ErrorCodes::NOT_IMPLEMENTED); -} - -} diff --git a/src/Formats/IRowInputStream.h b/src/Formats/IRowInputStream.h deleted file mode 100644 index e0b8a574f17..00000000000 --- a/src/Formats/IRowInputStream.h +++ /dev/null @@ -1,51 +0,0 @@ -#pragma once - -#include -#include -#include - 
-#include - - -namespace DB -{ - -/// Contains extra information about read data. -struct RowReadExtension -{ - /// IRowInputStream.read() output. It contains non zero for columns that actually read from the source and zero otherwise. - /// It's used to attach defaults for partially filled rows. - /// Can be empty, this means that all columns are read. - std::vector read_columns; -}; - -/** Interface of stream, that allows to read data by rows. - */ -class IRowInputStream : private boost::noncopyable -{ -public: - /** Read next row and append it to the columns. - * If no more rows - return false. - */ - virtual bool read(MutableColumns & columns, RowReadExtension & extra) = 0; - - virtual void readPrefix() {} /// delimiter before begin of result - virtual void readSuffix() {} /// delimiter after end of result - - /// Skip data until next row. - /// This is intended for text streams, that allow skipping of errors. - /// By default - throws not implemented exception. - virtual bool allowSyncAfterError() const { return false; } - virtual void syncAfterError(); - - /// In case of parse error, try to roll back and parse last one or two rows very carefully - /// and collect as much as possible diagnostic information about error. - /// If not implemented, returns empty string. - virtual std::string getDiagnosticInfo() { return {}; } - - virtual ~IRowInputStream() {} -}; - -using RowInputStreamPtr = std::shared_ptr; - -} diff --git a/src/Formats/IRowOutputStream.cpp b/src/Formats/IRowOutputStream.cpp deleted file mode 100644 index f84d810b8e8..00000000000 --- a/src/Formats/IRowOutputStream.cpp +++ /dev/null @@ -1,37 +0,0 @@ -#include -#include -#include - - -namespace DB -{ -namespace ErrorCodes -{ - extern const int NOT_IMPLEMENTED; -} - - -void IRowOutputStream::write(const Block & block, size_t row_num) -{ - size_t columns = block.columns(); - - writeRowStartDelimiter(); - - for (size_t i = 0; i < columns; ++i) - { - if (i != 0) - writeFieldDelimiter(); - - const auto & col = block.getByPosition(i); - writeField(*col.column, *col.type, row_num); - } - - writeRowEndDelimiter(); -} - -void IRowOutputStream::writeField(const IColumn &, const IDataType &, size_t) -{ - throw Exception("Method writeField is not implemented for output format", ErrorCodes::NOT_IMPLEMENTED); -} - -} diff --git a/src/Formats/IRowOutputStream.h b/src/Formats/IRowOutputStream.h deleted file mode 100644 index 7cf6251cd0d..00000000000 --- a/src/Formats/IRowOutputStream.h +++ /dev/null @@ -1,63 +0,0 @@ -#pragma once - -#include -#include -#include -#include - - -namespace DB -{ - -class Block; -class IColumn; -class IDataType; -struct Progress; - - -/** Interface of stream for writing data by rows (for example: for output to terminal). - */ -class IRowOutputStream : private boost::noncopyable -{ -public: - - /** Write a row. - * Default implementation calls methods to write single values and delimiters - * (except delimiter between rows (writeRowBetweenDelimiter())). - */ - virtual void write(const Block & block, size_t row_num); - - /** Write single value. */ - virtual void writeField(const IColumn & column, const IDataType & type, size_t row_num); - - /** Write delimiter. 
*/ - virtual void writeFieldDelimiter() {} /// delimiter between values - virtual void writeRowStartDelimiter() {} /// delimiter before each row - virtual void writeRowEndDelimiter() {} /// delimiter after each row - virtual void writeRowBetweenDelimiter() {} /// delimiter between rows - virtual void writePrefix() {} /// delimiter before resultset - virtual void writeSuffix() {} /// delimiter after resultset - - /** Flush output buffers if any. */ - virtual void flush() {} - - /** Methods to set additional information for output in formats, that support it. - */ - virtual void setRowsBeforeLimit(size_t /*rows_before_limit*/) {} - virtual void setTotals(const Block & /*totals*/) {} - virtual void setExtremes(const Block & /*extremes*/) {} - - /** Notify about progress. Method could be called from different threads. - * Passed value are delta, that must be summarized. - */ - virtual void onProgress(const Progress & /*progress*/) {} - - /** Content-Type to set when sending HTTP response. */ - virtual String getContentType() const { return "text/plain; charset=UTF-8"; } - - virtual ~IRowOutputStream() {} -}; - -using RowOutputStreamPtr = std::shared_ptr; - -} diff --git a/src/Formats/ya.make b/src/Formats/ya.make index 8fe938be125..476e13f9a4f 100644 --- a/src/Formats/ya.make +++ b/src/Formats/ya.make @@ -13,8 +13,6 @@ PEERDIR( SRCS( FormatFactory.cpp FormatSchemaInfo.cpp - IRowInputStream.cpp - IRowOutputStream.cpp JSONEachRowUtils.cpp MySQLBlockInputStream.cpp NativeFormat.cpp diff --git a/src/Processors/Formats/IRowInputFormat.h b/src/Processors/Formats/IRowInputFormat.h index c802bd3066b..8c600ad7285 100644 --- a/src/Processors/Formats/IRowInputFormat.h +++ b/src/Processors/Formats/IRowInputFormat.h @@ -14,7 +14,7 @@ namespace DB /// Contains extra information about read data. struct RowReadExtension { - /// IRowInputStream.read() output. It contains non zero for columns that actually read from the source and zero otherwise. + /// IRowInputFormat::read output. It contains non zero for columns that actually read from the source and zero otherwise. /// It's used to attach defaults for partially filled rows. 
std::vector read_columns; }; From 39d55556b8d965346dbecbade25c9f19ff25e5dc Mon Sep 17 00:00:00 2001 From: tavplubix Date: Tue, 13 Apr 2021 23:14:05 +0300 Subject: [PATCH 087/108] Update StorageMaterializedView.cpp --- src/Storages/StorageMaterializedView.cpp | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/src/Storages/StorageMaterializedView.cpp b/src/Storages/StorageMaterializedView.cpp index 1e86ce6a4e3..89b8bc72526 100644 --- a/src/Storages/StorageMaterializedView.cpp +++ b/src/Storages/StorageMaterializedView.cpp @@ -76,9 +76,10 @@ StorageMaterializedView::StorageMaterializedView( storage_metadata.setSelectQuery(select); setInMemoryMetadata(storage_metadata); - bool point_to_itself_by_uuid = has_inner_table && query.to_inner_uuid == table_id_.uuid; - bool point_to_itself_by_name = has_inner_table && query.to_table_id.database_name == table_id_.database_name - && query.to_table_id.table_name == table_id_.table_name; + bool point_to_itself_by_uuid = has_inner_table && query.to_inner_uuid != UUIDHelpers::Nil + && query.to_inner_uuid == table_id_.uuid; + bool point_to_itself_by_name = !has_inner_table && query.to_table_id.database_name == table_id_.database_name + && query.to_table_id.table_name == table_id_.table_name; if (point_to_itself_by_uuid || point_to_itself_by_name) throw Exception(ErrorCodes::BAD_ARGUMENTS, "Materialized view {} cannot point to itself", table_id_.getFullTableName()); From 0cfe018fd4048817e39298e78d30ac0375e30def Mon Sep 17 00:00:00 2001 From: Maksim Kita Date: Mon, 12 Apr 2021 23:36:13 +0300 Subject: [PATCH 088/108] Moved BorrowedObjectPool to common --- {src/Common => base/common}/BorrowedObjectPool.h | 3 +-- {src/Common => base/common}/MoveOrCopyIfThrow.h | 0 src/Common/ConcurrentBoundedQueue.h | 2 +- src/Dictionaries/ExecutablePoolDictionarySource.h | 3 ++- 4 files changed, 4 insertions(+), 4 deletions(-) rename {src/Common => base/common}/BorrowedObjectPool.h (99%) rename {src/Common => base/common}/MoveOrCopyIfThrow.h (100%) diff --git a/src/Common/BorrowedObjectPool.h b/base/common/BorrowedObjectPool.h similarity index 99% rename from src/Common/BorrowedObjectPool.h rename to base/common/BorrowedObjectPool.h index d5263cf92a8..6a90a7e7122 100644 --- a/src/Common/BorrowedObjectPool.h +++ b/base/common/BorrowedObjectPool.h @@ -7,8 +7,7 @@ #include #include - -#include +#include /** Pool for limited size objects that cannot be used from different threads simultaneously. * The main use case is to have fixed size of objects that can be reused in difference threads during their lifetime diff --git a/src/Common/MoveOrCopyIfThrow.h b/base/common/MoveOrCopyIfThrow.h similarity index 100% rename from src/Common/MoveOrCopyIfThrow.h rename to base/common/MoveOrCopyIfThrow.h diff --git a/src/Common/ConcurrentBoundedQueue.h b/src/Common/ConcurrentBoundedQueue.h index 7bc7f362095..cb29efc3349 100644 --- a/src/Common/ConcurrentBoundedQueue.h +++ b/src/Common/ConcurrentBoundedQueue.h @@ -6,7 +6,7 @@ #include #include -#include +#include /** A very simple thread-safe queue of limited size. * If you try to pop an item from an empty queue, the thread is blocked until the queue becomes nonempty. 
diff --git a/src/Dictionaries/ExecutablePoolDictionarySource.h b/src/Dictionaries/ExecutablePoolDictionarySource.h index 7f24e56257a..7a0b8681a21 100644 --- a/src/Dictionaries/ExecutablePoolDictionarySource.h +++ b/src/Dictionaries/ExecutablePoolDictionarySource.h @@ -1,7 +1,8 @@ #pragma once +#include + #include -#include #include #include "IDictionarySource.h" From 98b7274b5152f83a9e88e855bc1145b70f025633 Mon Sep 17 00:00:00 2001 From: Maksim Kita Date: Tue, 13 Apr 2021 21:52:59 +0300 Subject: [PATCH 089/108] Fixed pool in ODBC bridge --- programs/odbc-bridge/ODBCConnectionFactory.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/programs/odbc-bridge/ODBCConnectionFactory.h b/programs/odbc-bridge/ODBCConnectionFactory.h index 958cf03cfce..56961ddb2fb 100644 --- a/programs/odbc-bridge/ODBCConnectionFactory.h +++ b/programs/odbc-bridge/ODBCConnectionFactory.h @@ -3,7 +3,7 @@ #include #include #include -#include +#include #include From ec35a878d39b0b57d4ae2254d9135b536ef34094 Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Tue, 13 Apr 2021 23:17:25 +0300 Subject: [PATCH 090/108] simplify storages3cluster --- src/Storages/StorageS3.cpp | 27 ++++++++--- src/Storages/StorageS3.h | 5 +- src/Storages/StorageS3Cluster.cpp | 46 ++----------------- src/TableFunctions/TableFunctionS3Cluster.cpp | 28 +++++++++-- 4 files changed, 49 insertions(+), 57 deletions(-) diff --git a/src/Storages/StorageS3.cpp b/src/Storages/StorageS3.cpp index 0c4ed6482dc..2ae90ff7a31 100644 --- a/src/Storages/StorageS3.cpp +++ b/src/Storages/StorageS3.cpp @@ -318,7 +318,8 @@ StorageS3::StorageS3( const ColumnsDescription & columns_, const ConstraintsDescription & constraints_, ContextPtr context_, - const String & compression_method_) + const String & compression_method_, + bool distributed_processing_) : IStorage(table_id_) , client_auth{uri_, access_key_id_, secret_access_key_, max_connections_, {}, {}} /// Client and settings will be updated later , format_name(format_name_) @@ -326,6 +327,7 @@ StorageS3::StorageS3( , max_single_part_upload_size(max_single_part_upload_size_) , compression_method(compression_method_) , name(uri_.storage_name) + , distributed_processing(distributed_processing_) { context_->getGlobalContext()->getRemoteHostFilter().checkURL(uri_.uri); StorageInMemoryMetadata storage_metadata; @@ -358,13 +360,24 @@ Pipe StorageS3::read( need_file_column = true; } - /// Iterate through disclosed globs and make a source for each file - auto glob_iterator = std::make_shared(*client_auth.client, client_auth.uri); - auto iterator_wrapper = std::make_shared([glob_iterator]() + std::shared_ptr iterator_wrapper{nullptr}; + if (distributed_processing) { - return glob_iterator->next(); - }); - + iterator_wrapper = std::make_shared( + [callback = local_context->getReadTaskCallback()]() -> String { + return callback(); + }); + } + else + { + /// Iterate through disclosed globs and make a source for each file + auto glob_iterator = std::make_shared(*client_auth.client, client_auth.uri); + iterator_wrapper = std::make_shared([glob_iterator]() + { + return glob_iterator->next(); + }); + } + for (size_t i = 0; i < num_streams; ++i) { pipes.emplace_back(std::make_shared( diff --git a/src/Storages/StorageS3.h b/src/Storages/StorageS3.h index 512074479e5..1e1d76fa6e3 100644 --- a/src/Storages/StorageS3.h +++ b/src/Storages/StorageS3.h @@ -106,7 +106,8 @@ public: const ColumnsDescription & columns_, const ConstraintsDescription & constraints_, ContextPtr context_, - const String & compression_method_ = 
""); + const String & compression_method_ = "", + bool distributed_processing_ = false); String getName() const override { @@ -130,7 +131,6 @@ private: friend class StorageS3Cluster; friend class TableFunctionS3Cluster; - friend class StorageS3SequentialSource; struct ClientAuthentificaiton { @@ -149,6 +149,7 @@ private: size_t max_single_part_upload_size; String compression_method; String name; + const bool distributed_processing; static void updateClientAndAuthSettings(ContextPtr, ClientAuthentificaiton &); }; diff --git a/src/Storages/StorageS3Cluster.cpp b/src/Storages/StorageS3Cluster.cpp index e8b73bc2acb..8afc0e44023 100644 --- a/src/Storages/StorageS3Cluster.cpp +++ b/src/Storages/StorageS3Cluster.cpp @@ -79,58 +79,18 @@ StorageS3Cluster::StorageS3Cluster( StorageS3::updateClientAndAuthSettings(context_, client_auth); } - +/// The code executes on initiator Pipe StorageS3Cluster::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, ContextPtr context, QueryProcessingStage::Enum processed_stage, - size_t max_block_size, - unsigned num_streams) + size_t /*max_block_size*/, + unsigned /*num_streams*/) { StorageS3::updateClientAndAuthSettings(context, client_auth); - /// Secondary query, need to read from S3 - if (context->getClientInfo().query_kind == ClientInfo::QueryKind::SECONDARY_QUERY) - { - bool need_path_column = false; - bool need_file_column = false; - for (const auto & column : column_names) - { - if (column == "_path") - need_path_column = true; - if (column == "_file") - need_file_column = true; - } - - /// Save callback not to capture context by reference of copy it. - auto file_iterator = std::make_shared( - [callback = context->getReadTaskCallback()]() -> String { - return callback(); - }); - - Pipes pipes; - for (size_t i = 0; i < num_streams; ++i) - { - pipes.emplace_back(std::make_shared( - need_path_column, need_file_column, format_name, getName(), - metadata_snapshot->getSampleBlock(), context, - metadata_snapshot->getColumns(), max_block_size, - compression_method, - client_auth.client, - client_auth.uri.bucket, - file_iterator - )); - } - auto pipe = Pipe::unitePipes(std::move(pipes)); - - narrowPipe(pipe, num_streams); - return pipe; - } - - /// The code from here and below executes on initiator - auto cluster = context->getCluster(cluster_name)->getClusterWithReplicasAsShards(context->getSettings()); S3::URI s3_uri(Poco::URI{filename}); StorageS3::updateClientAndAuthSettings(context, client_auth); diff --git a/src/TableFunctions/TableFunctionS3Cluster.cpp b/src/TableFunctions/TableFunctionS3Cluster.cpp index 30526057ca5..4a48ed83135 100644 --- a/src/TableFunctions/TableFunctionS3Cluster.cpp +++ b/src/TableFunctions/TableFunctionS3Cluster.cpp @@ -102,11 +102,29 @@ StoragePtr TableFunctionS3Cluster::executeImpl( const ASTPtr & /*function*/, ContextPtr context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { - StoragePtr storage = StorageS3Cluster::create( - filename, access_key_id, secret_access_key, StorageID(getDatabaseName(), table_name), - cluster_name, format, context->getSettingsRef().s3_max_connections, - getActualTableStructure(context), ConstraintsDescription{}, - context, compression_method); + StoragePtr storage; + if (context->getClientInfo().query_kind == ClientInfo::QueryKind::SECONDARY_QUERY) + { + /// On worker node this filename won't contains globs + Poco::URI uri (filename); + S3::URI s3_uri (uri); + /// Actually this parameters are not used + UInt64 
min_upload_part_size = context->getSettingsRef().s3_min_upload_part_size; + UInt64 max_single_part_upload_size = context->getSettingsRef().s3_max_single_part_upload_size; + UInt64 max_connections = context->getSettingsRef().s3_max_connections; + storage = StorageS3::create( + s3_uri, access_key_id, secret_access_key, StorageID(getDatabaseName(), table_name), + format, min_upload_part_size, max_single_part_upload_size, max_connections, + getActualTableStructure(context), ConstraintsDescription{}, + context, compression_method, /*distributed_processing=*/true); + } + else { + storage = StorageS3Cluster::create( + filename, access_key_id, secret_access_key, StorageID(getDatabaseName(), table_name), + cluster_name, format, context->getSettingsRef().s3_max_connections, + getActualTableStructure(context), ConstraintsDescription{}, + context, compression_method); + } storage->startup(); From 2a86d76ccdce0ea00665f70f0bcd8a39ab3613de Mon Sep 17 00:00:00 2001 From: Nikita Mikhaylov Date: Tue, 13 Apr 2021 23:19:04 +0300 Subject: [PATCH 091/108] style --- src/Storages/StorageS3.cpp | 2 +- src/TableFunctions/TableFunctionS3Cluster.cpp | 3 ++- 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/src/Storages/StorageS3.cpp b/src/Storages/StorageS3.cpp index 2ae90ff7a31..a5cbd004d55 100644 --- a/src/Storages/StorageS3.cpp +++ b/src/Storages/StorageS3.cpp @@ -377,7 +377,7 @@ Pipe StorageS3::read( return glob_iterator->next(); }); } - + for (size_t i = 0; i < num_streams; ++i) { pipes.emplace_back(std::make_shared( diff --git a/src/TableFunctions/TableFunctionS3Cluster.cpp b/src/TableFunctions/TableFunctionS3Cluster.cpp index 4a48ed83135..26ef07ef97f 100644 --- a/src/TableFunctions/TableFunctionS3Cluster.cpp +++ b/src/TableFunctions/TableFunctionS3Cluster.cpp @@ -118,7 +118,8 @@ StoragePtr TableFunctionS3Cluster::executeImpl( getActualTableStructure(context), ConstraintsDescription{}, context, compression_method, /*distributed_processing=*/true); } - else { + else + { storage = StorageS3Cluster::create( filename, access_key_id, secret_access_key, StorageID(getDatabaseName(), table_name), cluster_name, format, context->getSettingsRef().s3_max_connections, From 403d56a17276d114e079283a7e79553a72a9b7b3 Mon Sep 17 00:00:00 2001 From: Alexey Date: Tue, 13 Apr 2021 20:44:34 +0000 Subject: [PATCH 092/108] build.py removes single.md after build --- docs/tools/single_page.py | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/docs/tools/single_page.py b/docs/tools/single_page.py index b88df5a03cb..a1e650d3ad3 100644 --- a/docs/tools/single_page.py +++ b/docs/tools/single_page.py @@ -109,7 +109,8 @@ def build_single_page_version(lang, args, nav, cfg): extra['single_page'] = True extra['is_amp'] = False - with open(os.path.join(args.docs_dir, lang, 'single.md'), 'w') as single_md: + single_md_path = os.path.join(args.docs_dir, lang, 'single.md') + with open(single_md_path, 'w') as single_md: concatenate(lang, args.docs_dir, single_md, nav) with util.temp_dir() as site_temp: @@ -221,3 +222,7 @@ def build_single_page_version(lang, args, nav, cfg): subprocess.check_call(' '.join(create_pdf_command), shell=True) logging.info(f'Finished building single page version for {lang}') + + if os.path.exists(single_md_path): + os.unlink(single_md_path) + \ No newline at end of file From d811f6a5de8440f5a5ae4052b99f06972f8fdfe5 Mon Sep 17 00:00:00 2001 From: alexey-milovidov Date: Tue, 13 Apr 2021 23:51:44 +0300 Subject: [PATCH 093/108] Update Install.cpp --- programs/install/Install.cpp | 10 +++++----- 
1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/programs/install/Install.cpp b/programs/install/Install.cpp index 1179ce4cba3..11ae4231aa5 100644 --- a/programs/install/Install.cpp +++ b/programs/install/Install.cpp @@ -566,12 +566,12 @@ int mainEntryClickHouseInstall(int argc, char ** argv) if (has_password_for_default_user) { - fmt::print(HILITE "Password for default user is already specified. To remind or reset, see {} and {}." END_HILITE, + fmt::print(HILITE "Password for default user is already specified. To remind or reset, see {} and {}." END_HILITE "\n", users_config_file.string(), users_d.string()); } else if (!stdout_is_a_tty) { - fmt::print(HILITE "Password for default user is empty string. See {} and {} to change it." END_HILITE, + fmt::print(HILITE "Password for default user is empty string. See {} and {} to change it." END_HILITE "\n", users_config_file.string(), users_d.string()); } else @@ -606,7 +606,7 @@ int mainEntryClickHouseInstall(int argc, char ** argv) "\n"; out.sync(); out.finalize(); - fmt::print("Password for default user is saved in file {}.\n", password_file); + fmt::print(HILITE "Password for default user is saved in file {}." END_HILITE "\n", password_file); #else out << "\n" " \n" @@ -617,12 +617,12 @@ int mainEntryClickHouseInstall(int argc, char ** argv) "\n"; out.sync(); out.finalize(); - fmt::print("Password for default user is saved in plaintext in file {}.\n", password_file); + fmt::print(HILITE "Password for default user is saved in plaintext in file {}." END_HILITE "\n", password_file); #endif has_password_for_default_user = true; } else - fmt::print("Password for default user is empty string. See {} and {} to change it.\n", + fmt::print(HILITE "Password for default user is empty string. See {} and {} to change it." END_HILITE "\n", users_config_file.string(), users_d.string()); } From 77bdb5b391bf0f03ce1197828fc72267fd706e5a Mon Sep 17 00:00:00 2001 From: Vasily Nemkov Date: Tue, 13 Apr 2021 13:50:58 +0300 Subject: [PATCH 094/108] Fixed erroneus failure of extractAllGroupsHorizontal on large columns --- src/Functions/extractAllGroups.h | 11 ++++++----- .../01661_extract_all_groups_throw_fast.sql | 1 + 2 files changed, 7 insertions(+), 5 deletions(-) diff --git a/src/Functions/extractAllGroups.h b/src/Functions/extractAllGroups.h index c77d497cf17..934d69c0b97 100644 --- a/src/Functions/extractAllGroups.h +++ b/src/Functions/extractAllGroups.h @@ -172,11 +172,12 @@ public: for (size_t group = 1; group <= groups_count; ++group) all_matches.push_back(matched_groups[group]); - /// Additional limit to fail fast on supposedly incorrect usage. - static constexpr size_t MAX_GROUPS_PER_ROW = 1000000; - - if (all_matches.size() > MAX_GROUPS_PER_ROW) - throw Exception(ErrorCodes::TOO_LARGE_ARRAY_SIZE, "Too large array size in the result of function {}", getName()); + /// Additional limit to fail fast on supposedly incorrect usage, aribtrary value. 
+ static constexpr size_t MAX_MATCHES_PER_ROW = 1000; + if (matches_per_row > MAX_MATCHES_PER_ROW) + throw Exception(ErrorCodes::TOO_LARGE_ARRAY_SIZE, + "Too many matches per row (> {}) in the result of function {}", + MAX_MATCHES_PER_ROW, getName()); pos = matched_groups[0].data() + std::max(1, matched_groups[0].size()); diff --git a/tests/queries/0_stateless/01661_extract_all_groups_throw_fast.sql b/tests/queries/0_stateless/01661_extract_all_groups_throw_fast.sql index 48d3baba0c5..a056d77896c 100644 --- a/tests/queries/0_stateless/01661_extract_all_groups_throw_fast.sql +++ b/tests/queries/0_stateless/01661_extract_all_groups_throw_fast.sql @@ -1 +1,2 @@ SELECT repeat('abcdefghijklmnopqrstuvwxyz', number * 100) AS haystack, extractAllGroupsHorizontal(haystack, '(\\w)') AS matches FROM numbers(1023); -- { serverError 128 } +SELECT count(extractAllGroupsHorizontal(materialize('a'), '(a)')) FROM numbers(1000000) FORMAT Null; -- shouldn't fail From 58281eadc0a425a0d840f04ce8a88f7714a1adce Mon Sep 17 00:00:00 2001 From: Alexey Milovidov Date: Wed, 14 Apr 2021 01:08:09 +0300 Subject: [PATCH 095/108] Mark test as long --- ...erence => 01676_long_clickhouse_client_autocomplete.reference} | 0 ...tocomplete.sh => 01676_long_clickhouse_client_autocomplete.sh} | 0 2 files changed, 0 insertions(+), 0 deletions(-) rename tests/queries/0_stateless/{01676_clickhouse_client_autocomplete.reference => 01676_long_clickhouse_client_autocomplete.reference} (100%) rename tests/queries/0_stateless/{01676_clickhouse_client_autocomplete.sh => 01676_long_clickhouse_client_autocomplete.sh} (100%) diff --git a/tests/queries/0_stateless/01676_clickhouse_client_autocomplete.reference b/tests/queries/0_stateless/01676_long_clickhouse_client_autocomplete.reference similarity index 100% rename from tests/queries/0_stateless/01676_clickhouse_client_autocomplete.reference rename to tests/queries/0_stateless/01676_long_clickhouse_client_autocomplete.reference diff --git a/tests/queries/0_stateless/01676_clickhouse_client_autocomplete.sh b/tests/queries/0_stateless/01676_long_clickhouse_client_autocomplete.sh similarity index 100% rename from tests/queries/0_stateless/01676_clickhouse_client_autocomplete.sh rename to tests/queries/0_stateless/01676_long_clickhouse_client_autocomplete.sh From 6a2a9cecdd4eb9c0f1724708af4c9f80629bc4fc Mon Sep 17 00:00:00 2001 From: alexey-milovidov Date: Wed, 14 Apr 2021 01:24:46 +0300 Subject: [PATCH 096/108] Update extractAllGroups.h --- src/Functions/extractAllGroups.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/Functions/extractAllGroups.h b/src/Functions/extractAllGroups.h index 934d69c0b97..289cff6ca4a 100644 --- a/src/Functions/extractAllGroups.h +++ b/src/Functions/extractAllGroups.h @@ -172,7 +172,7 @@ public: for (size_t group = 1; group <= groups_count; ++group) all_matches.push_back(matched_groups[group]); - /// Additional limit to fail fast on supposedly incorrect usage, aribtrary value. + /// Additional limit to fail fast on supposedly incorrect usage, arbitrary value. 
static constexpr size_t MAX_MATCHES_PER_ROW = 1000; if (matches_per_row > MAX_MATCHES_PER_ROW) throw Exception(ErrorCodes::TOO_LARGE_ARRAY_SIZE, From 0fca2acbdd1dc4668eb8f52d3120ae6c5f840096 Mon Sep 17 00:00:00 2001 From: alexey-milovidov Date: Wed, 14 Apr 2021 01:32:20 +0300 Subject: [PATCH 097/108] Update CHANGELOG.md --- CHANGELOG.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 0f895c7c482..a501fd358f8 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -11,6 +11,8 @@ * Now replicas that are processing the `ALTER TABLE ATTACH PART[ITION]` command search in their `detached/` folders before fetching the data from other replicas. As an implementation detail, a new command `ATTACH_PART` is introduced in the replicated log. Parts are searched and compared by their checksums. [#18978](https://github.com/ClickHouse/ClickHouse/pull/18978) ([Mike Kot](https://github.com/myrrc)). **Note**: * `ATTACH PART[ITION]` queries may not work during cluster upgrade. * It's not possible to rollback to older ClickHouse version after executing `ALTER ... ATTACH` query in new version as the old servers would fail to pass the `ATTACH_PART` entry in the replicated log. + * In this version, empty `` will block all access to remote hosts while in previous versions it did nothing. If you want to keep old behaviour and you have empty `remote_url_allow_hosts` element in configuration file, remove it. [#20058](https://github.com/ClickHouse/ClickHouse/pull/20058) ([Vladimir Chebotarev](https://github.com/excitoon)). + #### New Feature @@ -132,7 +134,6 @@ * Fix receive and send timeouts and non-blocking read in secure socket. [#21429](https://github.com/ClickHouse/ClickHouse/pull/21429) ([Kruglov Pavel](https://github.com/Avogar)). * `force_drop_table` flag didn't work for `MATERIALIZED VIEW`, it's fixed. Fixes [#18943](https://github.com/ClickHouse/ClickHouse/issues/18943). [#20626](https://github.com/ClickHouse/ClickHouse/pull/20626) ([tavplubix](https://github.com/tavplubix)). * Fix name clashes in `PredicateRewriteVisitor`. It caused incorrect `WHERE` filtration after full join. Close [#20497](https://github.com/ClickHouse/ClickHouse/issues/20497). [#20622](https://github.com/ClickHouse/ClickHouse/pull/20622) ([Vladimir](https://github.com/vdimir)). -* Fixed open behavior of remote host filter in case when there is `remote_url_allow_hosts` section in configuration but no entries there. [#20058](https://github.com/ClickHouse/ClickHouse/pull/20058) ([Vladimir Chebotarev](https://github.com/excitoon)). #### Build/Testing/Packaging Improvement From d85eb22691bd088722c7b158330710ffcf8b77f2 Mon Sep 17 00:00:00 2001 From: alexey-milovidov Date: Wed, 14 Apr 2021 01:32:31 +0300 Subject: [PATCH 098/108] Update CHANGELOG.md --- CHANGELOG.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index a501fd358f8..cc1ec835a7b 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -11,7 +11,7 @@ * Now replicas that are processing the `ALTER TABLE ATTACH PART[ITION]` command search in their `detached/` folders before fetching the data from other replicas. As an implementation detail, a new command `ATTACH_PART` is introduced in the replicated log. Parts are searched and compared by their checksums. [#18978](https://github.com/ClickHouse/ClickHouse/pull/18978) ([Mike Kot](https://github.com/myrrc)). **Note**: * `ATTACH PART[ITION]` queries may not work during cluster upgrade. 
   * It's not possible to rollback to older ClickHouse version after executing `ALTER ... ATTACH` query in new version as the old servers would fail to pass the `ATTACH_PART` entry in the replicated log.
+  * In this version, empty `<remote_url_allow_hosts>` will block all access to remote hosts while in previous versions it did nothing. If you want to keep old behaviour and you have empty `remote_url_allow_hosts` element in configuration file, remove it. [#20058](https://github.com/ClickHouse/ClickHouse/pull/20058) ([Vladimir Chebotarev](https://github.com/excitoon)).
+
 
 #### New Feature
 
@@ -132,7 +134,6 @@
 * Fix receive and send timeouts and non-blocking read in secure socket. [#21429](https://github.com/ClickHouse/ClickHouse/pull/21429) ([Kruglov Pavel](https://github.com/Avogar)).
 * `force_drop_table` flag didn't work for `MATERIALIZED VIEW`, it's fixed. Fixes [#18943](https://github.com/ClickHouse/ClickHouse/issues/18943). [#20626](https://github.com/ClickHouse/ClickHouse/pull/20626) ([tavplubix](https://github.com/tavplubix)).
 * Fix name clashes in `PredicateRewriteVisitor`. It caused incorrect `WHERE` filtration after full join. Close [#20497](https://github.com/ClickHouse/ClickHouse/issues/20497). [#20622](https://github.com/ClickHouse/ClickHouse/pull/20622) ([Vladimir](https://github.com/vdimir)).
-* Fixed open behavior of remote host filter in case when there is `remote_url_allow_hosts` section in configuration but no entries there. [#20058](https://github.com/ClickHouse/ClickHouse/pull/20058) ([Vladimir Chebotarev](https://github.com/excitoon)).
 
 #### Build/Testing/Packaging Improvement
 

From d85eb22691bd088722c7b158330710ffcf8b77f2 Mon Sep 17 00:00:00 2001
From: alexey-milovidov 
Date: Wed, 14 Apr 2021 01:32:31 +0300
Subject: [PATCH 098/108] Update CHANGELOG.md

---
 CHANGELOG.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index a501fd358f8..cc1ec835a7b 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -11,7 +11,7 @@
 * Now replicas that are processing the `ALTER TABLE ATTACH PART[ITION]` command search in their `detached/` folders before fetching the data from other replicas. As an implementation detail, a new command `ATTACH_PART` is introduced in the replicated log. Parts are searched and compared by their checksums. [#18978](https://github.com/ClickHouse/ClickHouse/pull/18978) ([Mike Kot](https://github.com/myrrc)). **Note**:
   * `ATTACH PART[ITION]` queries may not work during cluster upgrade.
   * It's not possible to rollback to older ClickHouse version after executing `ALTER ... ATTACH` query in new version as the old servers would fail to pass the `ATTACH_PART` entry in the replicated log.
-  * In this version, empty `<remote_url_allow_hosts>` will block all access to remote hosts while in previous versions it did nothing. If you want to keep old behaviour and you have empty `remote_url_allow_hosts` element in configuration file, remove it. [#20058](https://github.com/ClickHouse/ClickHouse/pull/20058) ([Vladimir Chebotarev](https://github.com/excitoon)).
+* In this version, empty `<remote_url_allow_hosts>` will block all access to remote hosts while in previous versions it did nothing. If you want to keep old behaviour and you have empty `remote_url_allow_hosts` element in configuration file, remove it. [#20058](https://github.com/ClickHouse/ClickHouse/pull/20058) ([Vladimir Chebotarev](https://github.com/excitoon)).
 
 #### New Feature
 

From caf97d297b5d4931a85a05d6936cf15ee209fbcf Mon Sep 17 00:00:00 2001
From: tavplubix 
Date: Wed, 14 Apr 2021 12:17:41 +0300
Subject: [PATCH 099/108] Update arcadia_skip_list.txt

---
 tests/queries/0_stateless/arcadia_skip_list.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tests/queries/0_stateless/arcadia_skip_list.txt b/tests/queries/0_stateless/arcadia_skip_list.txt
index c8a0971bb28..58a836bed46 100644
--- a/tests/queries/0_stateless/arcadia_skip_list.txt
+++ b/tests/queries/0_stateless/arcadia_skip_list.txt
@@ -91,6 +91,7 @@
 01125_dict_ddl_cannot_add_column
 01129_dict_get_join_lose_constness
 01138_join_on_distributed_and_tmp
+01153_attach_mv_uuid
 01191_rename_dictionary
 01200_mutations_memory_consumption
 01211_optimize_skip_unused_shards_type_mismatch

From c34e27ed1c020e651c2206b12049dc3ea15d3066 Mon Sep 17 00:00:00 2001
From: Alexander Tokmakov 
Date: Wed, 14 Apr 2021 14:07:56 +0300
Subject: [PATCH 100/108] fix test

---
 tests/queries/0_stateless/01152_cross_replication.reference | 2 --
 tests/queries/0_stateless/01152_cross_replication.sql       | 4 ++--
 tests/queries/skip_list.json                                 | 1 +
 3 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/tests/queries/0_stateless/01152_cross_replication.reference b/tests/queries/0_stateless/01152_cross_replication.reference
index f409f3e65fa..389d14ff28b 100644
--- a/tests/queries/0_stateless/01152_cross_replication.reference
+++ b/tests/queries/0_stateless/01152_cross_replication.reference
@@ -6,7 +6,5 @@ CREATE TABLE shard_0.demo_loan_01568\n(\n    `id` Int64 COMMENT \'id\',\n    `da
 CREATE TABLE shard_1.demo_loan_01568\n(\n    `id` Int64 COMMENT \'id\',\n    `date_stat` Date COMMENT \'date of stat\',\n    `customer_no` String COMMENT \'customer no\',\n    `loan_principal` Float64 COMMENT \'loan principal\'\n)\nENGINE = ReplacingMergeTree\nPARTITION BY toYYYYMM(date_stat)\nORDER BY id\nSETTINGS index_granularity = 8192
 1	2021-04-13	qwerty	3.14159
 2	2021-04-14	asdfgh	2.71828
-1	2021-04-13	qwerty	3.14159
-2	2021-04-14	asdfgh	2.71828
 2	2021-04-14	asdfgh	2.71828
 1	2021-04-13	qwerty	3.14159
diff --git a/tests/queries/0_stateless/01152_cross_replication.sql b/tests/queries/0_stateless/01152_cross_replication.sql
index 4137554ca85..23507c41fd0 100644
--- a/tests/queries/0_stateless/01152_cross_replication.sql
+++ b/tests/queries/0_stateless/01152_cross_replication.sql
@@ -1,15 +1,16 @@
 DROP DATABASE IF EXISTS shard_0;
 DROP DATABASE IF EXISTS shard_1;
+SET distributed_ddl_output_mode='none';
 DROP TABLE IF EXISTS demo_loan_01568_dist;
 CREATE DATABASE shard_0;
 CREATE DATABASE shard_1;
-SET distributed_ddl_output_mode='none';
 CREATE TABLE demo_loan_01568 ON CLUSTER 
test_cluster_two_shards_different_databases ( `id` Int64 COMMENT 'id', `date_stat` Date COMMENT 'date of stat', `customer_no` String COMMENT 'customer no', `loan_principal` Float64 COMMENT 'loan principal' ) ENGINE=ReplacingMergeTree() ORDER BY id PARTITION BY toYYYYMM(date_stat); -- { serverError 371 } SET distributed_ddl_output_mode='throw'; CREATE TABLE shard_0.demo_loan_01568 ON CLUSTER test_cluster_two_shards_different_databases ( `id` Int64 COMMENT 'id', `date_stat` Date COMMENT 'date of stat', `customer_no` String COMMENT 'customer no', `loan_principal` Float64 COMMENT 'loan principal' ) ENGINE=ReplacingMergeTree() ORDER BY id PARTITION BY toYYYYMM(date_stat); CREATE TABLE shard_1.demo_loan_01568 ON CLUSTER test_cluster_two_shards_different_databases ( `id` Int64 COMMENT 'id', `date_stat` Date COMMENT 'date of stat', `customer_no` String COMMENT 'customer no', `loan_principal` Float64 COMMENT 'loan principal' ) ENGINE=ReplacingMergeTree() ORDER BY id PARTITION BY toYYYYMM(date_stat); +SET distributed_ddl_output_mode='none'; SHOW TABLES FROM shard_0; SHOW TABLES FROM shard_1; @@ -18,7 +19,6 @@ SHOW CREATE TABLE shard_1.demo_loan_01568; CREATE TABLE demo_loan_01568_dist AS shard_0.demo_loan_01568 ENGINE=Distributed('test_cluster_two_shards_different_databases', '', 'demo_loan_01568', id % 2); INSERT INTO demo_loan_01568_dist VALUES (1, '2021-04-13', 'qwerty', 3.14159), (2, '2021-04-14', 'asdfgh', 2.71828); -SELECT * FROM demo_loan_01568_dist ORDER BY id; SYSTEM FLUSH DISTRIBUTED demo_loan_01568_dist; SELECT * FROM demo_loan_01568_dist ORDER BY id; diff --git a/tests/queries/skip_list.json b/tests/queries/skip_list.json index d41a41bd524..59ee6ae89f4 100644 --- a/tests/queries/skip_list.json +++ b/tests/queries/skip_list.json @@ -557,6 +557,7 @@ "01135_default_and_alter_zookeeper", "01148_zookeeper_path_macros_unfolding", "01150_ddl_guard_rwr", + "01152_cross_replication", "01185_create_or_replace_table", "01190_full_attach_syntax", "01191_rename_dictionary", From 969b63d1acdd989c07e6005c2fe496ce07c654ca Mon Sep 17 00:00:00 2001 From: Christian Date: Wed, 14 Apr 2021 13:19:07 +0200 Subject: [PATCH 101/108] Excludes views from syncing in MaterializeMySQL (#22760) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Excludes views from syncing in MaterializeMySQL * Adds integration test for ignoring MySQL views on initial replication * Fixes bug in initial integration test for excluding views from replicating to ClickHouse on initial replication * Fixes bug in initial integration test for excluding views from replicating to ClickHouse on initial replication * Replace assert with check_query Co-authored-by: Christian Frøystad Co-authored-by: Ivan <5627721+abyss7@users.noreply.github.com> --- src/Databases/MySQL/MaterializeMetadata.cpp | 2 +- .../materialize_with_ddl.py | 39 +++++++++++++++++++ .../test_materialize_mysql_database/test.py | 2 + 3 files changed, 42 insertions(+), 1 deletion(-) diff --git a/src/Databases/MySQL/MaterializeMetadata.cpp b/src/Databases/MySQL/MaterializeMetadata.cpp index a54d378f813..f5e648903ed 100644 --- a/src/Databases/MySQL/MaterializeMetadata.cpp +++ b/src/Databases/MySQL/MaterializeMetadata.cpp @@ -52,7 +52,7 @@ static std::unordered_map fetchTablesCreateQuery( static std::vector fetchTablesInDB(const mysqlxx::PoolWithFailover::Entry & connection, const std::string & database) { Block header{{std::make_shared(), "table_name"}}; - String query = "SELECT TABLE_NAME AS table_name FROM INFORMATION_SCHEMA.TABLES WHERE 
TABLE_SCHEMA = " + quoteString(database); + String query = "SELECT TABLE_NAME AS table_name FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE != 'VIEW' AND TABLE_SCHEMA = " + quoteString(database); std::vector tables_in_db; MySQLBlockInputStream input(connection, query, header, DEFAULT_BLOCK_SIZE); diff --git a/tests/integration/test_materialize_mysql_database/materialize_with_ddl.py b/tests/integration/test_materialize_mysql_database/materialize_with_ddl.py index 1675b72e0c4..38574a81d0a 100644 --- a/tests/integration/test_materialize_mysql_database/materialize_with_ddl.py +++ b/tests/integration/test_materialize_mysql_database/materialize_with_ddl.py @@ -117,6 +117,45 @@ def dml_with_materialize_mysql_database(clickhouse_node, mysql_node, service_nam mysql_node.query("DROP DATABASE test_database") +def materialize_mysql_database_with_views(clickhouse_node, mysql_node, service_name): + mysql_node.query("DROP DATABASE IF EXISTS test_database") + clickhouse_node.query("DROP DATABASE IF EXISTS test_database") + mysql_node.query("CREATE DATABASE test_database DEFAULT CHARACTER SET 'utf8'") + # existed before the mapping was created + + mysql_node.query("CREATE TABLE test_database.test_table_1 (" + "`key` INT NOT NULL PRIMARY KEY, " + "unsigned_tiny_int TINYINT UNSIGNED, tiny_int TINYINT, " + "unsigned_small_int SMALLINT UNSIGNED, small_int SMALLINT, " + "unsigned_medium_int MEDIUMINT UNSIGNED, medium_int MEDIUMINT, " + "unsigned_int INT UNSIGNED, _int INT, " + "unsigned_integer INTEGER UNSIGNED, _integer INTEGER, " + "unsigned_bigint BIGINT UNSIGNED, _bigint BIGINT, " + "/* Need ClickHouse support read mysql decimal unsigned_decimal DECIMAL(19, 10) UNSIGNED, _decimal DECIMAL(19, 10), */" + "unsigned_float FLOAT UNSIGNED, _float FLOAT, " + "unsigned_double DOUBLE UNSIGNED, _double DOUBLE, " + "_varchar VARCHAR(10), _char CHAR(10), binary_col BINARY(8), " + "/* Need ClickHouse support Enum('a', 'b', 'v') _enum ENUM('a', 'b', 'c'), */" + "_date Date, _datetime DateTime, _timestamp TIMESTAMP, _bool BOOLEAN) ENGINE = InnoDB;") + + mysql_node.query("CREATE VIEW test_database.test_table_1_view AS SELECT SUM(tiny_int) FROM test_database.test_table_1 GROUP BY _date;") + + # it already has some data + mysql_node.query(""" + INSERT INTO test_database.test_table_1 VALUES(1, 1, -1, 2, -2, 3, -3, 4, -4, 5, -5, 6, -6, 3.2, -3.2, 3.4, -3.4, 'varchar', 'char', 'binary', + '2020-01-01', '2020-01-01 00:00:00', '2020-01-01 00:00:00', true); + """) + clickhouse_node.query( + "CREATE DATABASE test_database ENGINE = MaterializeMySQL('{}:3306', 'test_database', 'root', 'clickhouse')".format( + service_name)) + + assert "test_database" in clickhouse_node.query("SHOW DATABASES") + check_query(clickhouse_node, "SHOW TABLES FROM test_database FORMAT TSV", "test_table_1\n") + + clickhouse_node.query("DROP DATABASE test_database") + mysql_node.query("DROP DATABASE test_database") + + def materialize_mysql_database_with_datetime_and_decimal(clickhouse_node, mysql_node, service_name): mysql_node.query("DROP DATABASE IF EXISTS test_database") clickhouse_node.query("DROP DATABASE IF EXISTS test_database") diff --git a/tests/integration/test_materialize_mysql_database/test.py b/tests/integration/test_materialize_mysql_database/test.py index 730305a6f16..3c41c0a2177 100644 --- a/tests/integration/test_materialize_mysql_database/test.py +++ b/tests/integration/test_materialize_mysql_database/test.py @@ -150,12 +150,14 @@ def started_mysql_8_0(): @pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) def 
test_materialize_database_dml_with_mysql_5_7(started_cluster, started_mysql_5_7, clickhouse_node): materialize_with_ddl.dml_with_materialize_mysql_database(clickhouse_node, started_mysql_5_7, "mysql1") + materialize_with_ddl.materialize_mysql_database_with_views(clickhouse_node, started_mysql_5_7, "mysql1") materialize_with_ddl.materialize_mysql_database_with_datetime_and_decimal(clickhouse_node, started_mysql_5_7, "mysql1") @pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) def test_materialize_database_dml_with_mysql_8_0(started_cluster, started_mysql_8_0, clickhouse_node): materialize_with_ddl.dml_with_materialize_mysql_database(clickhouse_node, started_mysql_8_0, "mysql8_0") + materialize_with_ddl.materialize_mysql_database_with_views(clickhouse_node, started_mysql_8_0, "mysql8_0") materialize_with_ddl.materialize_mysql_database_with_datetime_and_decimal(clickhouse_node, started_mysql_8_0, "mysql8_0") From 2594ee08c50cccda751d34231446b0843d9c9d76 Mon Sep 17 00:00:00 2001 From: Alexander Kuzmenkov <36882414+akuzm@users.noreply.github.com> Date: Wed, 14 Apr 2021 16:13:41 +0300 Subject: [PATCH 102/108] Update code-review.md --- website/blog/en/2021/code-review.md | 17 +++++++++++------ 1 file changed, 11 insertions(+), 6 deletions(-) diff --git a/website/blog/en/2021/code-review.md b/website/blog/en/2021/code-review.md index 63a99c13e04..9fac7e203cd 100644 --- a/website/blog/en/2021/code-review.md +++ b/website/blog/en/2021/code-review.md @@ -1,10 +1,13 @@ -# Code review in ClickHouse -# Understanding Why Your Program Works -# Effective Code Review -# Explaining Why Your Program Works -# The Tests Are Passing, Why Would I Read The Diff Again? +--- +title: 'The Tests Are Passing, Why Would I Read The Diff Again?' +image: 'https://blog-images.clickhouse.tech/en/2021/code-review/smaller-crazy-duck.jpg' +date: '2021-04-14' +author: '[Alexander Kuzmenkov](https://github.com/akuzm)' +tags: ['code review', 'development'] +--- -Code review is one of the few software development techniques that is consistently found to reduce the incidence of defects. Why is it effective? This article offers some wild conjecture on this topic, complete with practical advice on getting the most out of your code review. + +Code review is one of the few software development techniques that are consistently found to reduce the incidence of defects. Why is it effective? This article offers some wild conjecture on this topic, complete with practical advice on getting the most out of your code review. ## Understanding Why Your Program Works @@ -25,6 +28,8 @@ When working in a team, you even have a luxury of explaining your code to anothe Code review is often framed as a gatekeeping process, where each contribution is vetted by maintainers to ensure that it is in line with project direction, has acceptable quality, meets the coding guidelines and so on. This perspective might seem natural when dealing with external contributions, but makes less sense if you apply it to internal ones. After all, our fellow maintainers have perfect understanding of project goals and guidelines, probably they are more talented and experienced than us, and can be trusted to produce the best solution possible. How can an additional review be helpful? + + A less-obvious, but very important, part of reviewing the code is just seeing whether it can be understood by another person. It is helpful regardless of the administrative roles and programming proficiency of the parties. 
What should you do as a reviewer if ease of understanding is your main priority? You probably don't need to be concerned with trivia such as code style. There are automated tools for that. You might find some bugs, but this is probably a side effect. Your main task is making sense of the code. From d3e06e6cfd89b5428459892cfd76ebbef131e3c8 Mon Sep 17 00:00:00 2001 From: Alexander Kuzmenkov Date: Wed, 14 Apr 2021 16:51:01 +0300 Subject: [PATCH 103/108] fixes --- website/blog/en/2021/code-review.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/website/blog/en/2021/code-review.md b/website/blog/en/2021/code-review.md index 9fac7e203cd..099893491a4 100644 --- a/website/blog/en/2021/code-review.md +++ b/website/blog/en/2021/code-review.md @@ -1,6 +1,6 @@ --- title: 'The Tests Are Passing, Why Would I Read The Diff Again?' -image: 'https://blog-images.clickhouse.tech/en/2021/code-review/smaller-crazy-duck.jpg' +image: 'https://blog-images.clickhouse.tech/en/2021/code-review/normie-duck.jpg' date: '2021-04-14' author: '[Alexander Kuzmenkov](https://github.com/akuzm)' tags: ['code review', 'development'] @@ -28,7 +28,7 @@ When working in a team, you even have a luxury of explaining your code to anothe Code review is often framed as a gatekeeping process, where each contribution is vetted by maintainers to ensure that it is in line with project direction, has acceptable quality, meets the coding guidelines and so on. This perspective might seem natural when dealing with external contributions, but makes less sense if you apply it to internal ones. After all, our fellow maintainers have perfect understanding of project goals and guidelines, probably they are more talented and experienced than us, and can be trusted to produce the best solution possible. How can an additional review be helpful? - + A less-obvious, but very important, part of reviewing the code is just seeing whether it can be understood by another person. It is helpful regardless of the administrative roles and programming proficiency of the parties. What should you do as a reviewer if ease of understanding is your main priority? From 2ce494aa0fa9f9fd631c28340c443db7ad771eea Mon Sep 17 00:00:00 2001 From: Alexey Milovidov Date: Wed, 14 Apr 2021 17:27:13 +0300 Subject: [PATCH 104/108] Tweaks for Debian installer --- programs/install/Install.cpp | 13 +++++++++++-- 1 file changed, 11 insertions(+), 2 deletions(-) diff --git a/programs/install/Install.cpp b/programs/install/Install.cpp index 11ae4231aa5..2b0f390f709 100644 --- a/programs/install/Install.cpp +++ b/programs/install/Install.cpp @@ -562,14 +562,23 @@ int mainEntryClickHouseInstall(int argc, char ** argv) bool stdin_is_a_tty = isatty(STDIN_FILENO); bool stdout_is_a_tty = isatty(STDOUT_FILENO); - bool is_interactive = stdin_is_a_tty && stdout_is_a_tty; + + /// dpkg or apt installers can ask for non-interactive work explicitly. + + const char * debian_frontend_var = getenv("DEBIAN_FRONTEND"); + bool noninteractive = debian_frontend_var && debian_frontend_var == std::string_view("noninteractive"); + + bool is_interactive = !noninteractive && stdin_is_a_tty && stdout_is_a_tty; + + /// We can ask password even if stdin is closed/redirected but /dev/tty is available. + bool can_ask_password = !noninteractive && stdout_is_a_tty; if (has_password_for_default_user) { fmt::print(HILITE "Password for default user is already specified. To remind or reset, see {} and {}." 
END_HILITE "\n", users_config_file.string(), users_d.string()); } - else if (!stdout_is_a_tty) + else if (!can_ask_password) { fmt::print(HILITE "Password for default user is empty string. See {} and {} to change it." END_HILITE "\n", users_config_file.string(), users_d.string()); From 45931a52621d8a117632f691cc061db4f649b169 Mon Sep 17 00:00:00 2001 From: Alexander Kuzmenkov <36882414+akuzm@users.noreply.github.com> Date: Wed, 14 Apr 2021 18:09:51 +0300 Subject: [PATCH 105/108] Update code-review.md --- website/blog/en/2021/code-review.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/en/2021/code-review.md b/website/blog/en/2021/code-review.md index 099893491a4..efaf4e43ac6 100644 --- a/website/blog/en/2021/code-review.md +++ b/website/blog/en/2021/code-review.md @@ -62,7 +62,7 @@ It is common to hear objections to the idea of commenting the code, so let's dis ### Self-documenting Code -You can often see a perplexing idea that the source code can somehow be "self-documenting", or that the comments are a "code smell", and their presence indicates that the code is badly written. I have trouble imagining how this belief can be compatible with any experience in maintaining sufficiently complex and large software, over the years, in collaboration with others. The code and the comments describe different parts of the solution. The code describes the data structures and their transformations, but it cannot convey meaning. The names in the code serve as pointers that map the data and its transformations to the domain concepts, but they are schematic and lack nuance. It is not so difficult to write code that makes it easy to understand what's going on in terms of data manipulation. What it takes is mostly moderation, that is, stopping yourself from being too clever. For most code, it is easy to see what it does, but why? Why this way and not that way? Why is it correct? Why this fast path here helps? Why did you choose this data layout? How is this invariant guaranteed? Why do you need this data now and not later? And so on. This might be not so evident for a developer who is working alone on a short-lived project, because they have all the necessary context in their head. But when they have to work with other people (or even with themselves from past and future), or in an unfamiliar area, the importance of non-code, higher-level context becomes painfully clear. The idea that we should, or even can, somehow encode comments such as [this one](https://github.com/postgres/postgres/blob/55dc86eca70b1dc18a79c141b3567efed910329d/src/backend/optimizer/path/indxpath.c#L2226) into names or control flow is just absurd. +You can often see a perplexing idea that the source code can somehow be "self-documenting", or that the comments are a "code smell", and their presence indicates that the code is badly written. I have trouble imagining how this belief can be compatible with any experience in maintaining sufficiently complex and large software, over the years, in collaboration with others. The code and the comments describe different parts of the solution. The code describes the data structures and their transformations, but it cannot convey meaning. The names in the code serve as pointers that map the data and its transformations to the domain concepts, but they are schematic and lack nuance. It is not so difficult to write code that makes it easy to understand what's going on in terms of data manipulation. 
What it takes is mostly moderation, that is, stopping yourself from being too clever. For most code, it is easy to see what it does, but why? Why this way and not that way? Why is it correct? Why this fast path here helps? Why did you choose this data layout? How is this invariant guaranteed? Why do you need this data now and not later? And so on. This might be not so evident for a developer who is working alone on a short-lived project, because they have all the necessary context in their head. But when they have to work with other people (or even with themselves from past and future), or in an unfamiliar area, the importance of non-code, higher-level context becomes painfully clear. The idea that we should, or even can, somehow encode comments such as [this one](https://github.com/ClickHouse/ClickHouse/blob/495c6e03aa9437dac3cd7a44ab3923390bef9982/src/Storages/MergeTree/KeyCondition.cpp#L1312-L1347) into names or control flow is just absurd. ### Obsolete Comments From 1f359b080e1f84595c3b7fcc3e88fd11f9155588 Mon Sep 17 00:00:00 2001 From: Alexander Kuzmenkov Date: Wed, 14 Apr 2021 19:17:34 +0300 Subject: [PATCH 106/108] fixes --- website/blog/en/2021/code-review.md | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/website/blog/en/2021/code-review.md b/website/blog/en/2021/code-review.md index 099893491a4..6a81d55abc8 100644 --- a/website/blog/en/2021/code-review.md +++ b/website/blog/en/2021/code-review.md @@ -1,6 +1,6 @@ --- title: 'The Tests Are Passing, Why Would I Read The Diff Again?' -image: 'https://blog-images.clickhouse.tech/en/2021/code-review/normie-duck.jpg' +image: 'https://blog-images.clickhouse.tech/en/2021/code-review/two-ducks.jpg' date: '2021-04-14' author: '[Alexander Kuzmenkov](https://github.com/akuzm)' tags: ['code review', 'development'] @@ -18,9 +18,9 @@ The correct understanding is also important when modifying and extending softwar How can we make our software easier to understand? It is often said that to see if you really understand something, you have to try explaining it to somebody. For example, as a science student taking an exam, you might be expected to give an explanation to some well-known observed effect, deriving it from the basic laws of this domain. In a similar way, if we are modeling some problem in software, we can start from domain knowledge and general programming knowledge, and build an argument as to why our model is applicable to the problem, why it is correct, has optimal performance and so on. This explanation takes the form of code comments, or, at a higher level, design documents. -If you have a habit of thoroughly commenting your code, you might have noticed that writing the comments is often much harder than writing the code itself. It also has an unpleasant side effect -- at times, while writing a comment, it becomes increasingly clear to you that the code is incomprehensible and takes forever to explain, or maybe is downright wrong, and you have to rewrite it. This is exactly the major positive effect of writing the comments. It helps you find bugs and make the code more understandable, and you wouldn't have noticed these problems unless you tried to explain the code. +If you have a habit of thoroughly commenting your code, you might have noticed that writing the comments is often much harder than writing the code itself. 
It also has an unpleasant side effect — at times, while writing a comment, it becomes increasingly clear to you that the code is incomprehensible and takes forever to explain, or maybe is downright wrong, and you have to rewrite it. This is exactly the major positive effect of writing the comments. It helps you find bugs and make the code more understandable, and you wouldn't have noticed these problems unless you tried to explain the code. -Understanding why your program works is inseparable from understanding why it fails, so it's no surprise that there is a similar process for the latter, called "rubber duck debugging". To debug a particularly nasty bug, you start explaining the program logic step by step to an imaginary partner or even to an inanimate object such as a yellow rubber duck. This process is often very effective, much in excess of what one would expect given the limited conversational abilities of rubber ducks. The underlying mechanism is probably the same as with comments &emdash; you start to understand your program better by just trying to explain it, and this lets you find bugs. +Understanding why your program works is inseparable from understanding why it fails, so it's no surprise that there is a similar process for the latter, called "rubber duck debugging". To debug a particularly nasty bug, you start explaining the program logic step by step to an imaginary partner or even to an inanimate object such as a yellow rubber duck. This process is often very effective, much in excess of what one would expect given the limited conversational abilities of rubber ducks. The underlying mechanism is probably the same as with comments — you start to understand your program better by just trying to explain it, and this lets you find bugs. When working in a team, you even have a luxury of explaining your code to another developer who works on the same project. It's probably more entertaining than talking to a duck. More importantly, they are going to maintain the code you wrote, so better make sure that _they_ can understand it as well. A good formal occasion for explaining how your code works is the code review process. Let's see how you can get the most out of it, in terms of making your code understandable. @@ -28,14 +28,14 @@ When working in a team, you even have a luxury of explaining your code to anothe Code review is often framed as a gatekeeping process, where each contribution is vetted by maintainers to ensure that it is in line with project direction, has acceptable quality, meets the coding guidelines and so on. This perspective might seem natural when dealing with external contributions, but makes less sense if you apply it to internal ones. After all, our fellow maintainers have perfect understanding of project goals and guidelines, probably they are more talented and experienced than us, and can be trusted to produce the best solution possible. How can an additional review be helpful? - - A less-obvious, but very important, part of reviewing the code is just seeing whether it can be understood by another person. It is helpful regardless of the administrative roles and programming proficiency of the parties. What should you do as a reviewer if ease of understanding is your main priority? You probably don't need to be concerned with trivia such as code style. There are automated tools for that. You might find some bugs, but this is probably a side effect. Your main task is making sense of the code. 
Start with checking the high-level description of the problem that the pull request is trying to solve. Read the description of the bug it fixes, or the docs for the feature it adds. For bigger features, there is normally a design document that describes the overall implementation without getting too deep into the code details. After you understand the problem, start reading the code. Does it make sense to you? You shouldn't try too hard to understand it. Imagine that you are tired and under time pressure. If you feel you have to make a lot of effort to understand the code, ask the author for clarifications. As you talk, you might discover that the code is not correct, or it may be rewritten in a more straightforward way, or it needs more comments. + + After you get the answers, don't forget to update the code and the comments to reflect them. Don't just stop after getting it explained to you personally. If you had a question as a reviewer, chances are that other people will also have this question later, but there might be nobody around to ask. They will have to resort to `git blame` and re-reading the entire pull request or several of them. Code archaeology is sometimes fun, but it's the last thing you want to do when you are investigating an urgent bug. All the answers should be on the surface. Working with the author, you should ensure that the code is mostly obvious to anyone with basic domain and programming knowledge, and all non-obvious parts are clearly explained. @@ -62,22 +62,22 @@ It is common to hear objections to the idea of commenting the code, so let's dis ### Self-documenting Code -You can often see a perplexing idea that the source code can somehow be "self-documenting", or that the comments are a "code smell", and their presence indicates that the code is badly written. I have trouble imagining how this belief can be compatible with any experience in maintaining sufficiently complex and large software, over the years, in collaboration with others. The code and the comments describe different parts of the solution. The code describes the data structures and their transformations, but it cannot convey meaning. The names in the code serve as pointers that map the data and its transformations to the domain concepts, but they are schematic and lack nuance. It is not so difficult to write code that makes it easy to understand what's going on in terms of data manipulation. What it takes is mostly moderation, that is, stopping yourself from being too clever. For most code, it is easy to see what it does, but why? Why this way and not that way? Why is it correct? Why this fast path here helps? Why did you choose this data layout? How is this invariant guaranteed? Why do you need this data now and not later? And so on. This might be not so evident for a developer who is working alone on a short-lived project, because they have all the necessary context in their head. But when they have to work with other people (or even with themselves from past and future), or in an unfamiliar area, the importance of non-code, higher-level context becomes painfully clear. The idea that we should, or even can, somehow encode comments such as [this one](https://github.com/postgres/postgres/blob/55dc86eca70b1dc18a79c141b3567efed910329d/src/backend/optimizer/path/indxpath.c#L2226) into names or control flow is just absurd. 
+You can often see a perplexing idea that the source code can somehow be "self-documenting", or that the comments are a "code smell", and their presence indicates that the code is badly written. I have trouble imagining how this belief can be compatible with any experience in maintaining sufficiently complex and large software, over the years, in collaboration with others. The code and the comments describe different parts of the solution. The code describes the data structures and their transformations, but it cannot convey meaning. The names in the code serve as pointers that map the data and its transformations to the domain concepts, but they are schematic and lack nuance. It is not so difficult to write code that makes it easy to understand what's going on in terms of data manipulation. What it takes is mostly moderation, that is, stopping yourself from being too clever. For most code, it is easy to see what it does, but why? Why this way and not that way? Why is it correct? Why this fast path here helps? Why did you choose this data layout? How is this invariant guaranteed? And so on. This might be not so evident for a developer who is working alone on a short-lived project, because they have all the necessary context in their head. But when they have to work with other people (or even with themselves from past and future), or in an unfamiliar area, the importance of non-code, higher-level context becomes painfully clear. The idea that we should, or even can, somehow encode comments such as [this one](https://github.com/postgres/postgres/blob/55dc86eca70b1dc18a79c141b3567efed910329d/src/backend/optimizer/path/indxpath.c#L2226) into names or control flow is just absurd. ### Obsolete Comments The comments can't be checked by the compiler or the tests, so there is no automated way to make sure that they are up to date with the rest of the comments and the code. The possibility of comments gradually getting incorrect is sometimes used as an argument against having any comments at all. -This problem is not exclusive to the comments -- the code also can and does become obsolete. Simple cases such as dead code can be detected by static analysis or studying the test coverage of code. More complex cases can only be found by proofreading, such as maintaining an invariant that is not important anymore, or preparing some data that is not needed. +This problem is not exclusive to the comments — the code also can and does become obsolete. Simple cases such as dead code can be detected by static analysis or studying the test coverage of code. More complex cases can only be found by proofreading, such as maintaining an invariant that is not important anymore, or preparing some data that is not needed. While an obsolete comment can lead to a mistake, the same applies, perhaps more strongly, to the lack of comments. When you need some higher-level knowledge about the code, but it is not written down, you are forced to perform an entire investigation from first principles to understand what's going on, and this is error-prone. Even an obsolete comment likely gives a better starting point than nothing. Moreover, in a code base that makes an active use of the comments, they tend to be mostly correct. This is because the developers rely on comments, read and write them, pay attention to them during code review. The comments are routinely changed along with changing the code, and the outdated comments are soon noticed and fixed. This does require some habit. 
A lone comment in a vast desert of impenetrable self-documenting code is not going to fare well. ## Conclusion -Code review makes your software better, and a significant part of this probably comes from trying to understand what your software actually does. By paying attention specifically to this aspect of code review, you can make it even more efficient. You'll have less bugs, and your code will be easier to maintain -- and what else could we ask for as software developers? +Code review makes your software better, and a significant part of this probably comes from trying to understand what your software actually does. By paying attention specifically to this aspect of code review, you can make it even more efficient. You'll have less bugs, and your code will be easier to maintain — and what else could we ask for as software developers? -_2021-04-13 [Alexander Kuzmenkov](https://github.com/akuzm)_ +_2021-04-13 [Alexander Kuzmenkov](https://github.com/akuzm). Title photo by [Nikita Mikhaylov](https://github.com/nikitamikhaylov)_ _P.S. This text contains the personal opinions of the author, and is not an authoritative manual for ClickHouse maintainers._ From 57f61c954c455e6def186e33aeb49f7a80a839b9 Mon Sep 17 00:00:00 2001 From: Ivan <5627721+abyss7@users.noreply.github.com> Date: Wed, 14 Apr 2021 19:35:17 +0300 Subject: [PATCH 107/108] Another attempt to enable pytest (#22664) * Fix some tests * Fix tests --- .../00632_aggregation_window_funnel.sql | 2 + tests/queries/0_stateless/01300_read_wkt.sql | 2 + tests/queries/0_stateless/01300_svg.sql | 2 + tests/queries/0_stateless/01300_wkt.sql | 2 + .../0_stateless/01302_polygons_distance.sql | 2 + .../01591_window_functions.reference | 1 + .../0_stateless/01591_window_functions.sql | 2 + .../01658_read_file_to_stringcolumn.sh | 2 +- ..._parallel_parsing_infinite_segmentation.sh | 14 +++--- .../01720_country_perimeter_and_area.sh | 4 +- .../0_stateless/01736_null_as_default.sql | 4 +- .../01758_optimize_skip_unused_shards_once.sh | 2 + ...ptimize_skip_unused_shards_zero_shards.sql | 3 +- .../01760_polygon_dictionaries.sql | 2 + tests/queries/query_test.py | 44 ++++++++++++++++--- 15 files changed, 72 insertions(+), 16 deletions(-) diff --git a/tests/queries/0_stateless/00632_aggregation_window_funnel.sql b/tests/queries/0_stateless/00632_aggregation_window_funnel.sql index d9991be5583..aa0dc804238 100644 --- a/tests/queries/0_stateless/00632_aggregation_window_funnel.sql +++ b/tests/queries/0_stateless/00632_aggregation_window_funnel.sql @@ -87,3 +87,5 @@ select 5 = windowFunnel(10000)(timestamp, event = 1000, event = 1001, event = 10 select 2 = windowFunnel(10000, 'strict_increase')(timestamp, event = 1000, event = 1001, event = 1002, event = 1003, event = 1004) from funnel_test_strict_increase; select 3 = windowFunnel(10000)(timestamp, event = 1004, event = 1004, event = 1004) from funnel_test_strict_increase; select 1 = windowFunnel(10000, 'strict_increase')(timestamp, event = 1004, event = 1004, event = 1004) from funnel_test_strict_increase; + +drop table funnel_test_strict_increase; diff --git a/tests/queries/0_stateless/01300_read_wkt.sql b/tests/queries/0_stateless/01300_read_wkt.sql index 590305fddae..8121bdf6084 100644 --- a/tests/queries/0_stateless/01300_read_wkt.sql +++ b/tests/queries/0_stateless/01300_read_wkt.sql @@ -26,3 +26,5 @@ INSERT INTO geo VALUES ('MULTIPOLYGON(((1 0,10 0,10 10,0 10,1 0),(4 4,5 4,5 5,4 INSERT INTO geo VALUES ('MULTIPOLYGON(((0 0,10 0,10 10,0 10,0 0),(4 4,5 4,5 5,4 5,4 4)),((-10 -10,-10 -9,-9 10,-10 
-10)))', 2); INSERT INTO geo VALUES ('MULTIPOLYGON(((2 0,10 0,10 10,0 10,2 0),(4 4,5 4,5 5,4 5,4 4)),((-10 -10,-10 -9,-9 10,-10 -10)))', 3); SELECT readWktMultiPolygon(s) FROM geo ORDER BY id; + +DROP TABLE geo; diff --git a/tests/queries/0_stateless/01300_svg.sql b/tests/queries/0_stateless/01300_svg.sql index 3e70182023b..a1deb1745c3 100644 --- a/tests/queries/0_stateless/01300_svg.sql +++ b/tests/queries/0_stateless/01300_svg.sql @@ -46,3 +46,5 @@ SELECT svg(p) FROM geo ORDER BY id; SELECT svg(p, 'b') FROM geo ORDER BY id; SELECT svg([[[(0., 0.), (10, 0), (10, 10), (0, 10)], [(4., 4.), (5, 4), (5, 5), (4, 5)]], [[(-10., -10.), (-10, -9), (-9, 10)]]], s) FROM geo ORDER BY id; SELECT svg(p, s) FROM geo ORDER BY id; + +DROP TABLE geo; diff --git a/tests/queries/0_stateless/01300_wkt.sql b/tests/queries/0_stateless/01300_wkt.sql index 7047bb698bb..00063d0a612 100644 --- a/tests/queries/0_stateless/01300_wkt.sql +++ b/tests/queries/0_stateless/01300_wkt.sql @@ -30,3 +30,5 @@ INSERT INTO geo VALUES ([[[(0, 0), (10, 0), (10, 10), (0, 10)], [(4, 4), (5, 4), INSERT INTO geo VALUES ([[[(1, 0), (10, 0), (10, 10), (0, 10)], [(4, 4), (5, 4), (5, 5), (4, 5)]], [[(-10, -10), (-10, -9), (-9, 10)]]], 2); INSERT INTO geo VALUES ([[[(2, 0), (10, 0), (10, 10), (0, 10)], [(4, 4), (5, 4), (5, 5), (4, 5)]], [[(-10, -10), (-10, -9), (-9, 10)]]], 3); SELECT wkt(p) FROM geo ORDER BY id; + +DROP TABLE geo; diff --git a/tests/queries/0_stateless/01302_polygons_distance.sql b/tests/queries/0_stateless/01302_polygons_distance.sql index fdbd0254983..a69b5017a5f 100644 --- a/tests/queries/0_stateless/01302_polygons_distance.sql +++ b/tests/queries/0_stateless/01302_polygons_distance.sql @@ -6,3 +6,5 @@ drop table if exists polygon_01302; create table polygon_01302 (x Array(Array(Array(Tuple(Float64, Float64)))), y Array(Array(Array(Tuple(Float64, Float64))))) engine=Memory(); insert into polygon_01302 values ([[[(23.725750, 37.971536)]]], [[[(4.3826169, 50.8119483)]]]); select polygonsDistanceSpherical(x, y) from polygon_01302; + +drop table polygon_01302; diff --git a/tests/queries/0_stateless/01591_window_functions.reference b/tests/queries/0_stateless/01591_window_functions.reference index afc20f67406..21a2e72fea4 100644 --- a/tests/queries/0_stateless/01591_window_functions.reference +++ b/tests/queries/0_stateless/01591_window_functions.reference @@ -1108,3 +1108,4 @@ from ( -- -INT_MIN row offset that can lead to problems with negation, found when fuzzing -- under UBSan. Should be limited to at most INT_MAX. select count() over (rows between 2147483648 preceding and 2147493648 following) from numbers(2); -- { serverError 36 } +drop table window_mt; diff --git a/tests/queries/0_stateless/01591_window_functions.sql b/tests/queries/0_stateless/01591_window_functions.sql index 44b0bba0b27..afbf26d0b5c 100644 --- a/tests/queries/0_stateless/01591_window_functions.sql +++ b/tests/queries/0_stateless/01591_window_functions.sql @@ -414,3 +414,5 @@ from ( -- -INT_MIN row offset that can lead to problems with negation, found when fuzzing -- under UBSan. Should be limited to at most INT_MAX. 
select count() over (rows between 2147483648 preceding and 2147493648 following) from numbers(2); -- { serverError 36 } + +drop table window_mt; diff --git a/tests/queries/0_stateless/01658_read_file_to_stringcolumn.sh b/tests/queries/0_stateless/01658_read_file_to_stringcolumn.sh index 593f0e59ea7..072e8d75f52 100755 --- a/tests/queries/0_stateless/01658_read_file_to_stringcolumn.sh +++ b/tests/queries/0_stateless/01658_read_file_to_stringcolumn.sh @@ -8,7 +8,7 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) # Data preparation. # Now we can get the user_files_path by use the table file function for trick. also we can get it by query as: # "insert into function file('exist.txt', 'CSV', 'val1 char') values ('aaaa'); select _path from file('exist.txt', 'CSV', 'val1 char')" -user_files_path=$(clickhouse-client --query "select _path,_file from file('nonexist.txt', 'CSV', 'val1 char')" 2>&1 |grep Exception | awk '{gsub("/nonexist.txt","",$9); print $9}') +user_files_path=$(clickhouse-client --query "select _path,_file from file('nonexist.txt', 'CSV', 'val1 char')" 2>&1 | grep Exception | awk '{gsub("/nonexist.txt","",$9); print $9}') mkdir -p ${user_files_path}/ echo -n aaaaaaaaa > ${user_files_path}/a.txt diff --git a/tests/queries/0_stateless/01701_parallel_parsing_infinite_segmentation.sh b/tests/queries/0_stateless/01701_parallel_parsing_infinite_segmentation.sh index d3e634eb560..edc4f6916ff 100755 --- a/tests/queries/0_stateless/01701_parallel_parsing_infinite_segmentation.sh +++ b/tests/queries/0_stateless/01701_parallel_parsing_infinite_segmentation.sh @@ -1,9 +1,11 @@ -#!/usr/bin/env bash - -CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) -# shellcheck source=../shell_config.sh -. "$CURDIR"/../shell_config.sh +#!/usr/bin/env bash + +CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. "$CURDIR"/../shell_config.sh ${CLICKHOUSE_CLIENT} -q "create table insert_big_json(a String, b String) engine=MergeTree() order by tuple()"; -python3 -c "[print('{{\"a\":\"{}\", \"b\":\"{}\"'.format('clickhouse'* 1000000, 'dbms' * 1000000)) for i in range(10)]; [print('{{\"a\":\"{}\", \"b\":\"{}\"}}'.format('clickhouse'* 100000, 'dbms' * 100000)) for i in range(10)]" 2>/dev/null | ${CLICKHOUSE_CLIENT} --input_format_parallel_parsing=1 --max_memory_usage=0 -q "insert into insert_big_json FORMAT JSONEachRow" 2>&1 | grep -q "min_chunk_bytes_for_parallel_parsing" && echo "Ok." || echo "FAIL" ||: \ No newline at end of file +python3 -c "[print('{{\"a\":\"{}\", \"b\":\"{}\"'.format('clickhouse'* 1000000, 'dbms' * 1000000)) for i in range(10)]; [print('{{\"a\":\"{}\", \"b\":\"{}\"}}'.format('clickhouse'* 100000, 'dbms' * 100000)) for i in range(10)]" 2>/dev/null | ${CLICKHOUSE_CLIENT} --input_format_parallel_parsing=1 --max_memory_usage=0 -q "insert into insert_big_json FORMAT JSONEachRow" 2>&1 | grep -q "min_chunk_bytes_for_parallel_parsing" && echo "Ok." 
|| echo "FAIL" ||: + +${CLICKHOUSE_CLIENT} -q "drop table insert_big_json" diff --git a/tests/queries/0_stateless/01720_country_perimeter_and_area.sh b/tests/queries/0_stateless/01720_country_perimeter_and_area.sh index 76dc403fb2f..75016ee1d1f 100755 --- a/tests/queries/0_stateless/01720_country_perimeter_and_area.sh +++ b/tests/queries/0_stateless/01720_country_perimeter_and_area.sh @@ -22,4 +22,6 @@ ${CLICKHOUSE_CLIENT} -q "SELECT name, polygonPerimeterSpherical(p) from country_ ${CLICKHOUSE_CLIENT} -q "SELECT '-------------------------------------'" ${CLICKHOUSE_CLIENT} -q "SELECT name, polygonAreaSpherical(p) from country_rings" ${CLICKHOUSE_CLIENT} -q "SELECT '-------------------------------------'" -${CLICKHOUSE_CLIENT} -q "drop table if exists country_rings;" \ No newline at end of file +${CLICKHOUSE_CLIENT} -q "drop table if exists country_rings;" + +${CLICKHOUSE_CLIENT} -q "drop table country_polygons" diff --git a/tests/queries/0_stateless/01736_null_as_default.sql b/tests/queries/0_stateless/01736_null_as_default.sql index f9a4bc69acf..a00011b06d4 100644 --- a/tests/queries/0_stateless/01736_null_as_default.sql +++ b/tests/queries/0_stateless/01736_null_as_default.sql @@ -1,5 +1,5 @@ -drop table if exists test_num; +drop table if exists test_enum; create table test_enum (c Nullable(Enum16('A' = 1, 'B' = 2))) engine Log; insert into test_enum values (1), (NULL); select * from test_enum; -drop table if exists test_num; +drop table test_enum; diff --git a/tests/queries/0_stateless/01758_optimize_skip_unused_shards_once.sh b/tests/queries/0_stateless/01758_optimize_skip_unused_shards_once.sh index b26961eda8e..d18ea8694a9 100755 --- a/tests/queries/0_stateless/01758_optimize_skip_unused_shards_once.sh +++ b/tests/queries/0_stateless/01758_optimize_skip_unused_shards_once.sh @@ -10,3 +10,5 @@ $CLICKHOUSE_CLIENT --optimize_skip_unused_shards=1 -nm -q " create table dist_01758 as system.one engine=Distributed(test_cluster_two_shards, system, one, dummy); select * from dist_01758 where dummy = 0 format Null; " |& grep -o "StorageDistributed (dist_01758).*" + +$CLICKHOUSE_CLIENT -q "drop table dist_01758" 2>/dev/null diff --git a/tests/queries/0_stateless/01759_optimize_skip_unused_shards_zero_shards.sql b/tests/queries/0_stateless/01759_optimize_skip_unused_shards_zero_shards.sql index b95d640ca1a..2ddf318313f 100644 --- a/tests/queries/0_stateless/01759_optimize_skip_unused_shards_zero_shards.sql +++ b/tests/queries/0_stateless/01759_optimize_skip_unused_shards_zero_shards.sql @@ -1,2 +1,3 @@ create table dist_01756 (dummy UInt8) ENGINE = Distributed('test_cluster_two_shards', 'system', 'one', dummy); -select ignore(1), * from dist_01756 where 0 settings optimize_skip_unused_shards=1, force_optimize_skip_unused_shards=1 +select ignore(1), * from dist_01756 where 0 settings optimize_skip_unused_shards=1, force_optimize_skip_unused_shards=1; +drop table dist_01756; diff --git a/tests/queries/0_stateless/01760_polygon_dictionaries.sql b/tests/queries/0_stateless/01760_polygon_dictionaries.sql index 5e26d2fc306..406e9af27ea 100644 --- a/tests/queries/0_stateless/01760_polygon_dictionaries.sql +++ b/tests/queries/0_stateless/01760_polygon_dictionaries.sql @@ -65,3 +65,5 @@ SELECT tuple(inf, inf) as key, dictGet('01760_db.dict_array', 'name', key); --{s DROP DICTIONARY 01760_db.dict_array; DROP TABLE 01760_db.points; DROP TABLE 01760_db.polygons; + +DROP DATABASE 01760_db; diff --git a/tests/queries/query_test.py b/tests/queries/query_test.py index b747ac2944e..6ebeccbeeac 100644 --- 
a/tests/queries/query_test.py +++ b/tests/queries/query_test.py @@ -1,5 +1,3 @@ -import pytest - import difflib import os import random @@ -7,6 +5,8 @@ import string import subprocess import sys +import pytest + SKIP_LIST = [ # these couple of tests hangs everything @@ -14,44 +14,63 @@ SKIP_LIST = [ "00987_distributed_stack_overflow", # just fail + "00133_long_shard_memory_tracker_and_exception_safety", "00505_secure", "00505_shard_secure", "00646_url_engine", "00725_memory_tracking", # BROKEN + "00738_lock_for_inner_table", + "00821_distributed_storage_with_join_on", + "00825_protobuf_format_array_3dim", + "00825_protobuf_format_array_of_arrays", + "00825_protobuf_format_enum_mapping", + "00825_protobuf_format_nested_in_nested", + "00825_protobuf_format_nested_optional", + "00825_protobuf_format_no_length_delimiter", + "00825_protobuf_format_persons", + "00825_protobuf_format_squares", + "00825_protobuf_format_table_default", "00834_cancel_http_readonly_queries_on_client_close", + "00877_memory_limit_for_new_delete", + "00900_parquet_load", "00933_test_fix_extra_seek_on_compressed_cache", "00965_logs_level_bugfix", "00965_send_logs_level_concurrent_queries", + "00974_query_profiler", "00990_hasToken", "00990_metric_log_table_not_empty", "01014_lazy_database_concurrent_recreate_reattach_and_show_tables", + "01017_uniqCombined_memory_usage", "01018_Distributed__shard_num", "01018_ip_dictionary_long", + "01035_lc_empty_part_bug", # FLAKY "01050_clickhouse_dict_source_with_subquery", "01053_ssd_dictionary", "01054_cache_dictionary_overflow_cell", "01057_http_compression_prefer_brotli", "01080_check_for_error_incorrect_size_of_nested_column", "01083_expressions_in_engine_arguments", - # "01086_odbc_roundtrip", + "01086_odbc_roundtrip", "01088_benchmark_query_id", + "01092_memory_profiler", "01098_temporary_and_external_tables", "01099_parallel_distributed_insert_select", "01103_check_cpu_instructions_at_startup", "01114_database_atomic", "01148_zookeeper_path_macros_unfolding", + "01175_distributed_ddl_output_mode_long", "01181_db_atomic_drop_on_cluster", # tcp port in reference "01280_ssd_complex_key_dictionary", "01293_client_interactive_vertical_multiline", # expect-test "01293_client_interactive_vertical_singleline", # expect-test - "01293_system_distribution_queue", # FLAKY "01293_show_clusters", + "01293_show_settings", + "01293_system_distribution_queue", # FLAKY "01294_lazy_database_concurrent_recreate_reattach_and_show_tables_long", "01294_system_distributed_on_cluster", "01300_client_save_history_when_terminated", # expect-test "01304_direct_io", "01306_benchmark_json", - "01035_lc_empty_part_bug", # FLAKY "01320_create_sync_race_condition_zookeeper", "01355_CSV_input_format_allow_errors", "01370_client_autocomplete_word_break_characters", # expect-test @@ -66,18 +85,33 @@ SKIP_LIST = [ "01514_distributed_cancel_query_on_error", "01520_client_print_query_id", # expect-test "01526_client_start_and_exit", # expect-test + "01526_max_untracked_memory", "01527_dist_sharding_key_dictGet_reload", + "01528_play", "01545_url_file_format_settings", "01553_datetime64_comparison", "01555_system_distribution_queue_mask", "01558_ttest_scipy", "01561_mann_whitney_scipy", "01582_distinct_optimization", + "01591_window_functions", "01599_multiline_input_and_singleline_comments", # expect-test "01601_custom_tld", + "01606_git_import", "01610_client_spawn_editor", # expect-test + "01658_read_file_to_stringcolumn", + "01666_merge_tree_max_query_limit", + "01674_unicode_asan", 
"01676_clickhouse_client_autocomplete", # expect-test (partially) "01683_text_log_deadlock", # secure tcp + "01684_ssd_cache_dictionary_simple_key", + "01685_ssd_cache_dictionary_complex_key", + "01746_executable_pool_dictionary", + "01747_executable_pool_dictionary_implicit_key", + "01747_join_view_filter_dictionary", + "01748_dictionary_table_dot", + "01754_cluster_all_replicas_shard_num", + "01759_optimize_skip_unused_shards_zero_shards", ] From fd424eceb00e4aa535f2e312c4f1e9957a2cac84 Mon Sep 17 00:00:00 2001 From: alexey-milovidov Date: Wed, 14 Apr 2021 23:02:28 +0300 Subject: [PATCH 108/108] Update code-review.md --- website/blog/en/2021/code-review.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/blog/en/2021/code-review.md b/website/blog/en/2021/code-review.md index 6336d132317..dcde371629b 100644 --- a/website/blog/en/2021/code-review.md +++ b/website/blog/en/2021/code-review.md @@ -62,7 +62,7 @@ It is common to hear objections to the idea of commenting the code, so let's dis ### Self-documenting Code -You can often see a perplexing idea that the source code can somehow be "self-documenting", or that the comments are a "code smell", and their presence indicates that the code is badly written. I have trouble imagining how this belief can be compatible with any experience in maintaining sufficiently complex and large software, over the years, in collaboration with others. The code and the comments describe different parts of the solution. The code describes the data structures and their transformations, but it cannot convey meaning. The names in the code serve as pointers that map the data and its transformations to the domain concepts, but they are schematic and lack nuance. It is not so difficult to write code that makes it easy to understand what's going on in terms of data manipulation. What it takes is mostly moderation, that is, stopping yourself from being too clever. For most code, it is easy to see what it does, but why? Why this way and not that way? Why is it correct? Why this fast path here helps? Why did you choose this data layout? How is this invariant guaranteed? And so on. This might be not so evident for a developer who is working alone on a short-lived project, because they have all the necessary context in their head. But when they have to work with other people (or even with themselves from past and future), or in an unfamiliar area, the importance of non-code, higher-level context becomes painfully clear. The idea that we should, or even can, somehow encode comments such as [this one](https://github.com/ClickHouse/ClickHouse/blob/495c6e03aa9437dac3cd7a44ab3923390bef9982/src/Storages/MergeTree/KeyCondition.cpp#L1312-L1347) into names or control flow is just absurd. +You can often see a perplexing idea that the source code can somehow be "self-documenting", or that the comments are a "code smell", and their presence indicates that the code is badly written. I have trouble imagining how this belief can be compatible with any experience in maintaining sufficiently complex and large software, over the years, in collaboration with others. The code and the comments describe different parts of the solution. The code describes the data structures and their transformations, but it cannot convey meaning. The names in the code serve as pointers that map the data and its transformations to the domain concepts, but they are schematic and lack nuance. 
It is not so difficult to write code that makes it easy to understand what's going on in terms of data manipulation. What it takes is mostly moderation, that is, stopping yourself from being too clever. For most code, it is easy to see what it does, but why? Why this way and not that way? Why is it correct? Why this fast path here helps? Why did you choose this data layout? How is this invariant guaranteed? And so on. This might be not so evident for a developer who is working alone on a short-lived project, because they have all the necessary context in their head. But when they have to work with other people (or even with themselves from past and future), or in an unfamiliar area, the importance of non-code, higher-level context becomes painfully clear. The idea that we should, or even can, somehow encode comments such as [this one](https://github.com/ClickHouse/ClickHouse/blob/26d5db32ae5c9f54b8825e2eca1f077a3b17c84a/src/Storages/MergeTree/KeyCondition.cpp#L1312-L1347) into names or control flow is just absurd. ### Obsolete Comments
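Coming back to the self-documenting code argument for a moment: to illustrate the kind of "why" comment it calls for, here is a minimal sketch. The scenario and every name in it are invented and do not come from ClickHouse or PostgreSQL; the code by itself only shows *what* happens, and only the comment preserves *why* it is done this way.

```python
import bisect

def owning_part(part_starts: list, key: int) -> int:
    """Return the start offset of the part that contains `key`."""
    # Why binary search instead of a hash map: the starts are kept sorted
    # anyway to answer range queries, and there are rarely more than a few
    # dozen parts, so a separate index would cost more to maintain than it
    # saves on lookups.
    # Why `bisect_right(...) - 1`: a part covers keys from its own start up
    # to (but not including) the start of the next part, so we need the
    # rightmost start that is <= key.
    i = bisect.bisect_right(part_starts, key) - 1
    if i < 0:
        raise KeyError(key)
    return part_starts[i]


assert owning_part([0, 100, 200], 150) == 100
assert owning_part([0, 100, 200], 200) == 200
```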