diff --git a/docs/en/faq/general/how-do-i-contribute-code-to-clickhouse.md b/docs/en/faq/general/how-do-i-contribute-code-to-clickhouse.md new file mode 100644 index 00000000000..731dc9dface --- /dev/null +++ b/docs/en/faq/general/how-do-i-contribute-code-to-clickhouse.md @@ -0,0 +1,15 @@ +--- +title: How do I contribute code to ClickHouse? +toc_hidden: true +toc_priority: 120 +--- + +# How do I contribute code to ClickHouse? {#how-do-i-contribute-code-to-clickhouse} + +ClickHouse is an open-source project [developed on GitHub](https://github.com/ClickHouse/ClickHouse). + +As is customary, contribution instructions are published in the [CONTRIBUTING.md](https://github.com/ClickHouse/ClickHouse/blob/master/CONTRIBUTING.md) file in the root of the source code repository. + +If you want to suggest a substantial change to ClickHouse, consider [opening a GitHub issue](https://github.com/ClickHouse/ClickHouse/issues/new/choose) that explains what you want to do, so you can discuss it with the maintainers and the community first. See these [examples of such RFC issues](https://github.com/ClickHouse/ClickHouse/issues?q=is%3Aissue+is%3Aopen+rfc). + +If your contribution is security-related, please also check out [our security policy](https://github.com/ClickHouse/ClickHouse/security/policy/). diff --git a/docs/en/faq/general/index.md b/docs/en/faq/general/index.md index cd2368be1cf..51fff9a53ae 100644 --- a/docs/en/faq/general/index.md +++ b/docs/en/faq/general/index.md @@ -17,6 +17,7 @@ Questions: - [What is OLAP?](../../faq/general/olap.md) - [What is a columnar database?](../../faq/general/columnar-database.md) - [Why not use something like MapReduce?](../../faq/general/mapreduce.md) +- [How do I contribute code to ClickHouse?](../../faq/general/how-do-i-contribute-code-to-clickhouse.md) !!! info "Don’t see what you were looking for?" Check out [other F.A.Q. categories](../../faq/index.md) or browse around main documentation articles found in the left sidebar. diff --git a/docs/en/interfaces/third-party/client-libraries.md b/docs/en/interfaces/third-party/client-libraries.md index 342b1c9a496..a116c8e2222 100644 --- a/docs/en/interfaces/third-party/client-libraries.md +++ b/docs/en/interfaces/third-party/client-libraries.md @@ -6,7 +6,7 @@ toc_title: Client Libraries # Client Libraries from Third-party Developers {#client-libraries-from-third-party-developers} !!! warning "Disclaimer" - Yandex does **not** maintain the libraries listed below and hasn’t done any extensive testing to ensure their quality. + ClickHouse, Inc. does **not** maintain the libraries listed below and hasn’t done any extensive testing to ensure their quality. - Python - [infi.clickhouse_orm](https://github.com/Infinidat/infi.clickhouse_orm) diff --git a/docs/en/operations/settings/settings.md b/docs/en/operations/settings/settings.md index 9b4db0e026e..0045fa875d5 100644 --- a/docs/en/operations/settings/settings.md +++ b/docs/en/operations/settings/settings.md @@ -817,9 +817,19 @@ If the number of rows to be read from a file of a [MergeTree](../../engines/tabl Possible values: -- Any positive integer. +- Positive integer. -Default value: 163840. +Default value: `163840`. + +## merge_tree_min_rows_for_concurrent_read_for_remote_filesystem {#merge-tree-min-rows-for-concurrent-read-for-remote-filesystem} + +The minimum number of rows to read from one file before the [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) engine can parallelize reading when reading from a remote filesystem. + +Possible values: + +- Positive integer.
+ +Default value: `163840`. ## merge_tree_min_bytes_for_concurrent_read {#setting-merge-tree-min-bytes-for-concurrent-read} @@ -827,9 +837,19 @@ If the number of bytes to read from one file of a [MergeTree](../../engines/tabl Possible value: -- Any positive integer. +- Positive integer. -Default value: 251658240. +Default value: `251658240`. + +## merge_tree_min_bytes_for_concurrent_read_for_remote_filesystem {#merge-tree-min-bytes-for-concurrent-read-for-remote-filesystem} + +The minimum number of bytes to read from one file before the [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) engine can parallelize reading when reading from a remote filesystem. + +Possible values: + +- Positive integer. + +Default value: `251658240`. ## merge_tree_min_rows_for_seek {#setting-merge-tree-min-rows-for-seek} diff --git a/docs/en/sql-reference/aggregate-functions/reference/uniq.md b/docs/en/sql-reference/aggregate-functions/reference/uniq.md index 598af24c0de..33bfe72548b 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/uniq.md +++ b/docs/en/sql-reference/aggregate-functions/reference/uniq.md @@ -24,9 +24,7 @@ Function: - Calculates a hash for all parameters in the aggregate, then uses it in calculations. -- Uses an adaptive sampling algorithm. For the calculation state, the function uses a sample of element hash values up to 65536. - - This algorithm is very accurate and very efficient on the CPU. When the query contains several of these functions, using `uniq` is almost as fast as using other aggregate functions. +- Uses an adaptive sampling algorithm. For the calculation state, the function uses a sample of up to 65536 element hash values. This algorithm is very accurate and very efficient on the CPU. When the query contains several of these functions, using `uniq` is almost as fast as using other aggregate functions. - Provides the result deterministically (it does not depend on the query processing order). diff --git a/docs/en/sql-reference/data-types/aggregatefunction.md b/docs/en/sql-reference/data-types/aggregatefunction.md index 81945eeece6..e483a20eed9 100644 --- a/docs/en/sql-reference/data-types/aggregatefunction.md +++ b/docs/en/sql-reference/data-types/aggregatefunction.md @@ -11,9 +11,7 @@ Aggregate functions can have an implementation-defined intermediate state that c **Parameters** -- Name of the aggregate function. If the function is parametric, specify its parameters too. - - If the function is parametric, specify its parameters too. +- Name of the aggregate function. If the function is parametric, specify its parameters too. - Types of the aggregate function arguments.
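A note on the `uniq` doc change above: the merged bullet describes an adaptive sampling algorithm that keeps a sample of up to 65536 element hash values. The sketch below is a minimal model of that technique, not ClickHouse's actual implementation (which lives in `UniquesHashSet`); the 65536 bound comes from the docs, while the hash function and all names here are illustrative.

```cpp
#include <cstdint>
#include <functional>
#include <iostream>
#include <string>
#include <unordered_set>

// Minimal model of adaptive-sampling distinct counting: keep a hash only
// if it falls below the current threshold; when the sample outgrows
// 65536 entries, halve the threshold and evict hashes that no longer
// qualify. The estimate is the sample size divided by the fraction of
// the hash space still accepted. A real implementation would use a
// strong 64-bit hash instead of std::hash.
class AdaptiveUniq
{
public:
    void add(const std::string & value)
    {
        uint64_t h = std::hash<std::string>{}(value);
        if (h < threshold)
        {
            sample.insert(h);
            while (sample.size() > max_sample_size)
                shrink();
        }
    }

    double estimate() const
    {
        double fraction = static_cast<double>(threshold) / static_cast<double>(UINT64_MAX);
        return sample.size() / fraction;
    }

private:
    void shrink()
    {
        threshold /= 2;
        std::erase_if(sample, [this](uint64_t h) { return h >= threshold; });
    }

    static constexpr size_t max_sample_size = 65536;
    uint64_t threshold = UINT64_MAX;
    std::unordered_set<uint64_t> sample;
};

int main()
{
    AdaptiveUniq uniq;
    for (int i = 0; i < 1000000; ++i)
        uniq.add("key_" + std::to_string(i % 200000));
    std::cout << "estimated distinct: " << uniq.estimate() << '\n'; // ~200000
}
```

Because the keep/evict decision depends only on a value's hash, the state stays bounded and the estimate is deterministic for a given input set, which matches the "deterministic result" bullet in the same doc.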
diff --git a/docs/ja/faq/general/how-do-i-contribute-code-to-clickhouse.md b/docs/ja/faq/general/how-do-i-contribute-code-to-clickhouse.md new file mode 120000 index 00000000000..5ac9a615386 --- /dev/null +++ b/docs/ja/faq/general/how-do-i-contribute-code-to-clickhouse.md @@ -0,0 +1 @@ +../../../en/faq/general/how-do-i-contribute-code-to-clickhouse.md \ No newline at end of file diff --git a/docs/ru/faq/general/how-do-i-contribute-code-to-clickhouse.md b/docs/ru/faq/general/how-do-i-contribute-code-to-clickhouse.md new file mode 120000 index 00000000000..5ac9a615386 --- /dev/null +++ b/docs/ru/faq/general/how-do-i-contribute-code-to-clickhouse.md @@ -0,0 +1 @@ +../../../en/faq/general/how-do-i-contribute-code-to-clickhouse.md \ No newline at end of file diff --git a/docs/ru/operations/settings/settings.md b/docs/ru/operations/settings/settings.md index 94bd2078373..933060482e3 100644 --- a/docs/ru/operations/settings/settings.md +++ b/docs/ru/operations/settings/settings.md @@ -761,9 +761,20 @@ ClickHouse может парсить только базовый формат `Y Возможные значения: -- Любое положительное целое число. +- Положительное целое число. -Значение по умолчанию: 163840. +Значение по умолчанию: `163840`. + + +## merge_tree_min_rows_for_concurrent_read_for_remote_filesystem {#merge-tree-min-rows-for-concurrent-read-for-remote-filesystem} + +Минимальное количество строк для чтения из одного файла, прежде чем движок [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) может выполнять параллельное чтение из удаленной файловой системы. + +Возможные значения: + +- Положительное целое число. + +Значение по умолчанию: `163840`. ## merge_tree_min_bytes_for_concurrent_read {#setting-merge-tree-min-bytes-for-concurrent-read} @@ -773,7 +784,17 @@ ClickHouse может парсить только базовый формат `Y - Положительное целое число. -Значение по умолчанию: 251658240. +Значение по умолчанию: `251658240`. + +## merge_tree_min_bytes_for_concurrent_read_for_remote_filesystem {#merge-tree-min-bytes-for-concurrent-read-for-remote-filesystem} + +Минимальное количество байтов для чтения из одного файла, прежде чем движок [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) может выполнять параллельное чтение из удаленной файловой системы. + +Возможное значение: + +- Положительное целое число. + +Значение по умолчанию: `251658240`. 
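The `*_for_concurrent_read*` settings documented above (in both the English and Russian docs) act as lower bounds on how finely a read may be split. Below is a minimal sketch of that idea with hypothetical names and deliberately simplified logic; the actual MergeTree stream-selection code is more involved and also considers marks, parts, and byte thresholds.

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>

// Hypothetical illustration (not the actual MergeTree code): a read is
// split into at most max_threads streams, but never so finely that a
// stream would get fewer rows than the concurrent-read threshold.
size_t chooseReadStreams(size_t total_rows, size_t max_threads, size_t min_rows_for_concurrent_read)
{
    size_t by_rows = total_rows / min_rows_for_concurrent_read; // streams the data volume justifies
    return std::clamp<size_t>(by_rows, 1, max_threads);
}

int main()
{
    // With the documented default of 163840 rows, ~1M rows justify 6 streams.
    std::cout << chooseReadStreams(1'000'000, 16, 163840) << '\n'; // 6
    // A small read stays single-streamed even if many threads are available.
    std::cout << chooseReadStreams(50'000, 16, 163840) << '\n';    // 1
}
```

The `_for_remote_filesystem` variants exist because remote reads have much higher per-request latency, so a separate (typically larger) threshold can be tuned without affecting local reads.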
## merge_tree_min_rows_for_seek {#setting-merge-tree-min-rows-for-seek} diff --git a/docs/zh/faq/general/how-do-i-contribute-code-to-clickhouse.md b/docs/zh/faq/general/how-do-i-contribute-code-to-clickhouse.md new file mode 120000 index 00000000000..5ac9a615386 --- /dev/null +++ b/docs/zh/faq/general/how-do-i-contribute-code-to-clickhouse.md @@ -0,0 +1 @@ +../../../en/faq/general/how-do-i-contribute-code-to-clickhouse.md \ No newline at end of file diff --git a/programs/keeper/Keeper.cpp b/programs/keeper/Keeper.cpp index afd6a36ea15..d144b4d332e 100644 --- a/programs/keeper/Keeper.cpp +++ b/programs/keeper/Keeper.cpp @@ -6,6 +6,7 @@ #include #include #include +#include #include #include #include @@ -379,11 +380,11 @@ int Keeper::main(const std::vector & /*args*/) socket.setReceiveTimeout(settings.receive_timeout); socket.setSendTimeout(settings.send_timeout); servers->emplace_back( + listen_host, port_name, - std::make_unique( - new KeeperTCPHandlerFactory(*this, false), server_pool, socket, new Poco::Net::TCPServerParams)); - - LOG_INFO(log, "Listening for connections to Keeper (tcp): {}", address.toString()); + "Keeper (tcp): " + address.toString(), + std::make_unique( + new KeeperTCPHandlerFactory(*this, false), server_pool, socket)); }); const char * secure_port_name = "keeper_server.tcp_port_secure"; @@ -395,10 +396,11 @@ int Keeper::main(const std::vector & /*args*/) socket.setReceiveTimeout(settings.receive_timeout); socket.setSendTimeout(settings.send_timeout); servers->emplace_back( + listen_host, secure_port_name, - std::make_unique( - new KeeperTCPHandlerFactory(*this, true), server_pool, socket, new Poco::Net::TCPServerParams)); - LOG_INFO(log, "Listening for connections to Keeper with secure protocol (tcp_secure): {}", address.toString()); + "Keeper with secure protocol (tcp_secure): " + address.toString(), + std::make_unique( + new KeeperTCPHandlerFactory(*this, true), server_pool, socket)); #else UNUSED(port); throw Exception{"SSL support for TCP protocol is disabled because Poco library was built without NetSSL support.", @@ -408,7 +410,10 @@ int Keeper::main(const std::vector & /*args*/) } for (auto & server : *servers) + { server.start(); + LOG_INFO(log, "Listening for {}", server.getDescription()); + } zkutil::EventPtr unused_event = std::make_shared(); zkutil::ZooKeeperNodeCache unused_cache([] { return nullptr; }); diff --git a/programs/server/Server.cpp b/programs/server/Server.cpp index 45f7834c96c..9e8f214ba74 100644 --- a/programs/server/Server.cpp +++ b/programs/server/Server.cpp @@ -34,6 +34,7 @@ #include #include #include +#include #include #include #include @@ -70,6 +71,7 @@ #include "MetricsTransmitter.h" #include #include +#include #include #include #include @@ -127,6 +129,11 @@ namespace CurrentMetrics extern const Metric MaxPushedDDLEntryID; } +namespace ProfileEvents +{ + extern const Event MainConfigLoads; +} + namespace fs = std::filesystem; #if USE_JEMALLOC @@ -344,16 +351,53 @@ Poco::Net::SocketAddress Server::socketBindListen(Poco::Net::ServerSocket & sock return address; } -void Server::createServer(const std::string & listen_host, const char * port_name, bool listen_try, CreateServerFunc && func) const +std::vector getListenHosts(const Poco::Util::AbstractConfiguration & config) +{ + auto listen_hosts = DB::getMultipleValuesFromConfig(config, "", "listen_host"); + if (listen_hosts.empty()) + { + listen_hosts.emplace_back("::1"); + listen_hosts.emplace_back("127.0.0.1"); + } + return listen_hosts; +} + +bool getListenTry(const 
Poco::Util::AbstractConfiguration & config) +{ + bool listen_try = config.getBool("listen_try", false); + if (!listen_try) + listen_try = DB::getMultipleValuesFromConfig(config, "", "listen_host").empty(); + return listen_try; +} + + +void Server::createServer( + Poco::Util::AbstractConfiguration & config, + const std::string & listen_host, + const char * port_name, + bool listen_try, + bool start_server, + std::vector & servers, + CreateServerFunc && func) const { /// For testing purposes, user may omit tcp_port or http_port or https_port in configuration file. - if (!config().has(port_name)) + if (config.getString(port_name, "").empty()) return; - auto port = config().getInt(port_name); + /// If we already have an active server for this listen_host/port_name, don't create it again + for (const auto & server : servers) + if (!server.isStopping() && server.getListenHost() == listen_host && server.getPortName() == port_name) + return; + + auto port = config.getInt(port_name); try { - func(port); + servers.push_back(func(port)); + if (start_server) + { + servers.back().start(); + LOG_INFO(&logger(), "Listening for {}", servers.back().getDescription()); + } global_context->registerServerPort(port_name, port); } catch (const Poco::Exception &) @@ -515,6 +559,27 @@ if (ThreadFuzzer::instance().isEffective()) config().getUInt("thread_pool_queue_size", 10000) ); + Poco::ThreadPool server_pool(3, config().getUInt("max_connections", 1024)); + std::mutex servers_lock; + std::vector servers; + std::vector servers_to_start_before_tables; + /// This object will periodically calculate some metrics. + AsynchronousMetrics async_metrics( + global_context, config().getUInt("asynchronous_metrics_update_period_s", 1), + [&]() -> std::vector + { + std::vector metrics; + metrics.reserve(servers_to_start_before_tables.size()); + for (const auto & server : servers_to_start_before_tables) + metrics.emplace_back(ProtocolServerMetrics{server.getPortName(), server.currentThreads()}); + + std::lock_guard lock(servers_lock); + for (const auto & server : servers) + metrics.emplace_back(ProtocolServerMetrics{server.getPortName(), server.currentThreads()}); + return metrics; + } + ); + ConnectionCollector::init(global_context, config().getUInt("max_threads_for_connection_collector", 10)); bool has_zookeeper = config().has("zookeeper"); @@ -883,12 +948,17 @@ if (ThreadFuzzer::instance().isEffective()) global_context->reloadZooKeeperIfChanged(config); global_context->reloadAuxiliaryZooKeepersConfigIfChanged(config); + + std::lock_guard lock(servers_lock); + updateServers(*config, server_pool, async_metrics, servers); } global_context->updateStorageConfiguration(*config); global_context->updateInterserverCredentials(*config); CompressionCodecEncrypted::Configuration::instance().tryLoad(*config, "encryption_codecs"); + + ProfileEvents::increment(ProfileEvents::MainConfigLoads); }, /* already_loaded = */ false); /// Reload it right now (initial loading) @@ -1000,24 +1070,8 @@ if (ThreadFuzzer::instance().isEffective()) /// try set up encryption. There are some errors in config, error will be printed and server wouldn't start. 
CompressionCodecEncrypted::Configuration::instance().load(config(), "encryption_codecs"); - Poco::Timespan keep_alive_timeout(config().getUInt("keep_alive_timeout", 10), 0); - - Poco::ThreadPool server_pool(3, config().getUInt("max_connections", 1024)); - Poco::Net::HTTPServerParams::Ptr http_params = new Poco::Net::HTTPServerParams; - http_params->setTimeout(settings.http_receive_timeout); - http_params->setKeepAliveTimeout(keep_alive_timeout); - - auto servers_to_start_before_tables = std::make_shared>(); - - std::vector listen_hosts = DB::getMultipleValuesFromConfig(config(), "", "listen_host"); - - bool listen_try = config().getBool("listen_try", false); - if (listen_hosts.empty()) - { - listen_hosts.emplace_back("::1"); - listen_hosts.emplace_back("127.0.0.1"); - listen_try = true; - } + const auto listen_hosts = getListenHosts(config()); + const auto listen_try = getListenTry(config()); if (config().has("keeper_server")) { @@ -1040,39 +1094,46 @@ if (ThreadFuzzer::instance().isEffective()) { /// TCP Keeper const char * port_name = "keeper_server.tcp_port"; - createServer(listen_host, port_name, listen_try, [&](UInt16 port) - { - Poco::Net::ServerSocket socket; - auto address = socketBindListen(socket, listen_host, port); - socket.setReceiveTimeout(settings.receive_timeout); - socket.setSendTimeout(settings.send_timeout); - servers_to_start_before_tables->emplace_back( - port_name, - std::make_unique( - new KeeperTCPHandlerFactory(*this, false), server_pool, socket, new Poco::Net::TCPServerParams)); - - LOG_INFO(log, "Listening for connections to Keeper (tcp): {}", address.toString()); - }); + createServer( + config(), listen_host, port_name, listen_try, /* start_server: */ false, + servers_to_start_before_tables, + [&](UInt16 port) -> ProtocolServerAdapter + { + Poco::Net::ServerSocket socket; + auto address = socketBindListen(socket, listen_host, port); + socket.setReceiveTimeout(settings.receive_timeout); + socket.setSendTimeout(settings.send_timeout); + return ProtocolServerAdapter( + listen_host, + port_name, + "Keeper (tcp): " + address.toString(), + std::make_unique( + new KeeperTCPHandlerFactory(*this, false), server_pool, socket)); + }); const char * secure_port_name = "keeper_server.tcp_port_secure"; - createServer(listen_host, secure_port_name, listen_try, [&](UInt16 port) - { + createServer( + config(), listen_host, secure_port_name, listen_try, /* start_server: */ false, + servers_to_start_before_tables, + [&](UInt16 port) -> ProtocolServerAdapter + { #if USE_SSL - Poco::Net::SecureServerSocket socket; - auto address = socketBindListen(socket, listen_host, port, /* secure = */ true); - socket.setReceiveTimeout(settings.receive_timeout); - socket.setSendTimeout(settings.send_timeout); - servers_to_start_before_tables->emplace_back( - secure_port_name, - std::make_unique( - new KeeperTCPHandlerFactory(*this, true), server_pool, socket, new Poco::Net::TCPServerParams)); - LOG_INFO(log, "Listening for connections to Keeper with secure protocol (tcp_secure): {}", address.toString()); + Poco::Net::SecureServerSocket socket; + auto address = socketBindListen(socket, listen_host, port, /* secure = */ true); + socket.setReceiveTimeout(settings.receive_timeout); + socket.setSendTimeout(settings.send_timeout); + return ProtocolServerAdapter( + listen_host, + secure_port_name, + "Keeper with secure protocol (tcp_secure): " + address.toString(), + std::make_unique( + new KeeperTCPHandlerFactory(*this, true), server_pool, socket)); #else - UNUSED(port); - throw Exception{"SSL support 
for TCP protocol is disabled because Poco library was built without NetSSL support.", - ErrorCodes::SUPPORT_IS_DISABLED}; + UNUSED(port); + throw Exception{"SSL support for TCP protocol is disabled because Poco library was built without NetSSL support.", + ErrorCodes::SUPPORT_IS_DISABLED}; #endif - }); + }); } #else throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "ClickHouse server built without NuRaft library. Cannot use internal coordination."); @@ -1080,14 +1141,19 @@ if (ThreadFuzzer::instance().isEffective()) } - for (auto & server : *servers_to_start_before_tables) + for (auto & server : servers_to_start_before_tables) + { server.start(); + LOG_INFO(log, "Listening for {}", server.getDescription()); + } SCOPE_EXIT({ /// Stop reloading of the main config. This must be done before `global_context->shutdown()` because /// otherwise the reloading may pass a changed config to some destroyed parts of ContextSharedPart. main_config_reloader.reset(); + async_metrics.stop(); + /** Ask to cancel background jobs all table engines, * and also query_log. * It is important to do early, not in destructor of Context, because @@ -1099,11 +1165,11 @@ if (ThreadFuzzer::instance().isEffective()) LOG_DEBUG(log, "Shut down storages."); - if (!servers_to_start_before_tables->empty()) + if (!servers_to_start_before_tables.empty()) { LOG_DEBUG(log, "Waiting for current connections to servers for tables to finish."); int current_connections = 0; - for (auto & server : *servers_to_start_before_tables) + for (auto & server : servers_to_start_before_tables) { server.stop(); current_connections += server.currentConnections(); @@ -1115,7 +1181,7 @@ if (ThreadFuzzer::instance().isEffective()) LOG_INFO(log, "Closed all listening sockets."); if (current_connections > 0) - current_connections = waitServersToFinish(*servers_to_start_before_tables, config().getInt("shutdown_wait_unfinished", 5)); + current_connections = waitServersToFinish(servers_to_start_before_tables, config().getInt("shutdown_wait_unfinished", 5)); if (current_connections) LOG_INFO(log, "Closed connections to servers for tables. But {} remain. Probably some tables of other users cannot finish their connections after context shutdown.", current_connections); @@ -1269,223 +1335,18 @@ if (ThreadFuzzer::instance().isEffective()) LOG_INFO(log, "TaskStats is not implemented for this OS. IO accounting will be disabled."); #endif - auto servers = std::make_shared>(); { - /// This object will periodically calculate some metrics. 
- AsynchronousMetrics async_metrics( - global_context, config().getUInt("asynchronous_metrics_update_period_s", 1), servers_to_start_before_tables, servers); attachSystemTablesAsync(global_context, *DatabaseCatalog::instance().getSystemDatabase(), async_metrics); - for (const auto & listen_host : listen_hosts) { - /// HTTP - const char * port_name = "http_port"; - createServer(listen_host, port_name, listen_try, [&](UInt16 port) - { - Poco::Net::ServerSocket socket; - auto address = socketBindListen(socket, listen_host, port); - socket.setReceiveTimeout(settings.http_receive_timeout); - socket.setSendTimeout(settings.http_send_timeout); - - servers->emplace_back( - port_name, - std::make_unique( - context(), createHandlerFactory(*this, async_metrics, "HTTPHandler-factory"), server_pool, socket, http_params)); - - LOG_INFO(log, "Listening for http://{}", address.toString()); - }); - - /// HTTPS - port_name = "https_port"; - createServer(listen_host, port_name, listen_try, [&](UInt16 port) - { -#if USE_SSL - Poco::Net::SecureServerSocket socket; - auto address = socketBindListen(socket, listen_host, port, /* secure = */ true); - socket.setReceiveTimeout(settings.http_receive_timeout); - socket.setSendTimeout(settings.http_send_timeout); - servers->emplace_back( - port_name, - std::make_unique( - context(), createHandlerFactory(*this, async_metrics, "HTTPSHandler-factory"), server_pool, socket, http_params)); - - LOG_INFO(log, "Listening for https://{}", address.toString()); -#else - UNUSED(port); - throw Exception{"HTTPS protocol is disabled because Poco library was built without NetSSL support.", - ErrorCodes::SUPPORT_IS_DISABLED}; -#endif - }); - - /// TCP - port_name = "tcp_port"; - createServer(listen_host, port_name, listen_try, [&](UInt16 port) - { - Poco::Net::ServerSocket socket; - auto address = socketBindListen(socket, listen_host, port); - socket.setReceiveTimeout(settings.receive_timeout); - socket.setSendTimeout(settings.send_timeout); - servers->emplace_back(port_name, std::make_unique( - new TCPHandlerFactory(*this, /* secure */ false, /* proxy protocol */ false), - server_pool, - socket, - new Poco::Net::TCPServerParams)); - - LOG_INFO(log, "Listening for connections with native protocol (tcp): {}", address.toString()); - }); - - /// TCP with PROXY protocol, see https://github.com/wolfeidau/proxyv2/blob/master/docs/proxy-protocol.txt - port_name = "tcp_with_proxy_port"; - createServer(listen_host, port_name, listen_try, [&](UInt16 port) - { - Poco::Net::ServerSocket socket; - auto address = socketBindListen(socket, listen_host, port); - socket.setReceiveTimeout(settings.receive_timeout); - socket.setSendTimeout(settings.send_timeout); - servers->emplace_back(port_name, std::make_unique( - new TCPHandlerFactory(*this, /* secure */ false, /* proxy protocol */ true), - server_pool, - socket, - new Poco::Net::TCPServerParams)); - - LOG_INFO(log, "Listening for connections with native protocol (tcp) with PROXY: {}", address.toString()); - }); - - /// TCP with SSL - port_name = "tcp_port_secure"; - createServer(listen_host, port_name, listen_try, [&](UInt16 port) - { -#if USE_SSL - Poco::Net::SecureServerSocket socket; - auto address = socketBindListen(socket, listen_host, port, /* secure = */ true); - socket.setReceiveTimeout(settings.receive_timeout); - socket.setSendTimeout(settings.send_timeout); - servers->emplace_back(port_name, std::make_unique( - new TCPHandlerFactory(*this, /* secure */ true, /* proxy protocol */ false), - server_pool, - socket, - new 
Poco::Net::TCPServerParams)); - LOG_INFO(log, "Listening for connections with secure native protocol (tcp_secure): {}", address.toString()); -#else - UNUSED(port); - throw Exception{"SSL support for TCP protocol is disabled because Poco library was built without NetSSL support.", - ErrorCodes::SUPPORT_IS_DISABLED}; -#endif - }); - - /// Interserver IO HTTP - port_name = "interserver_http_port"; - createServer(listen_host, port_name, listen_try, [&](UInt16 port) - { - Poco::Net::ServerSocket socket; - auto address = socketBindListen(socket, listen_host, port); - socket.setReceiveTimeout(settings.http_receive_timeout); - socket.setSendTimeout(settings.http_send_timeout); - servers->emplace_back( - port_name, - std::make_unique( - context(), - createHandlerFactory(*this, async_metrics, "InterserverIOHTTPHandler-factory"), - server_pool, - socket, - http_params)); - - LOG_INFO(log, "Listening for replica communication (interserver): http://{}", address.toString()); - }); - - port_name = "interserver_https_port"; - createServer(listen_host, port_name, listen_try, [&](UInt16 port) - { -#if USE_SSL - Poco::Net::SecureServerSocket socket; - auto address = socketBindListen(socket, listen_host, port, /* secure = */ true); - socket.setReceiveTimeout(settings.http_receive_timeout); - socket.setSendTimeout(settings.http_send_timeout); - servers->emplace_back( - port_name, - std::make_unique( - context(), - createHandlerFactory(*this, async_metrics, "InterserverIOHTTPSHandler-factory"), - server_pool, - socket, - http_params)); - - LOG_INFO(log, "Listening for secure replica communication (interserver): https://{}", address.toString()); -#else - UNUSED(port); - throw Exception{"SSL support for TCP protocol is disabled because Poco library was built without NetSSL support.", - ErrorCodes::SUPPORT_IS_DISABLED}; -#endif - }); - - port_name = "mysql_port"; - createServer(listen_host, port_name, listen_try, [&](UInt16 port) - { - Poco::Net::ServerSocket socket; - auto address = socketBindListen(socket, listen_host, port, /* secure = */ true); - socket.setReceiveTimeout(Poco::Timespan()); - socket.setSendTimeout(settings.send_timeout); - servers->emplace_back(port_name, std::make_unique( - new MySQLHandlerFactory(*this), - server_pool, - socket, - new Poco::Net::TCPServerParams)); - - LOG_INFO(log, "Listening for MySQL compatibility protocol: {}", address.toString()); - }); - - port_name = "postgresql_port"; - createServer(listen_host, port_name, listen_try, [&](UInt16 port) - { - Poco::Net::ServerSocket socket; - auto address = socketBindListen(socket, listen_host, port, /* secure = */ true); - socket.setReceiveTimeout(Poco::Timespan()); - socket.setSendTimeout(settings.send_timeout); - servers->emplace_back(port_name, std::make_unique( - new PostgreSQLHandlerFactory(*this), - server_pool, - socket, - new Poco::Net::TCPServerParams)); - - LOG_INFO(log, "Listening for PostgreSQL compatibility protocol: " + address.toString()); - }); - -#if USE_GRPC - port_name = "grpc_port"; - createServer(listen_host, port_name, listen_try, [&](UInt16 port) - { - Poco::Net::SocketAddress server_address(listen_host, port); - servers->emplace_back(port_name, std::make_unique(*this, makeSocketAddress(listen_host, port, log))); - LOG_INFO(log, "Listening for gRPC protocol: " + server_address.toString()); - }); -#endif - - /// Prometheus (if defined and not setup yet with http_port) - port_name = "prometheus.port"; - createServer(listen_host, port_name, listen_try, [&](UInt16 port) - { - Poco::Net::ServerSocket socket; - auto 
address = socketBindListen(socket, listen_host, port); - socket.setReceiveTimeout(settings.http_receive_timeout); - socket.setSendTimeout(settings.http_send_timeout); - servers->emplace_back( - port_name, - std::make_unique( - context(), - createHandlerFactory(*this, async_metrics, "PrometheusHandler-factory"), - server_pool, - socket, - http_params)); - - LOG_INFO(log, "Listening for Prometheus: http://{}", address.toString()); - }); + std::lock_guard lock(servers_lock); + createServers(config(), listen_hosts, listen_try, server_pool, async_metrics, servers); + if (servers.empty()) + throw Exception( + "No servers started (add valid listen_host and 'tcp_port' or 'http_port' to configuration file.)", + ErrorCodes::NO_ELEMENTS_IN_CONFIG); } - if (servers->empty()) - throw Exception("No servers started (add valid listen_host and 'tcp_port' or 'http_port' to configuration file.)", - ErrorCodes::NO_ELEMENTS_IN_CONFIG); - - /// Must be done after initialization of `servers`, because async_metrics will access `servers` variable from its thread. async_metrics.start(); { @@ -1564,9 +1425,15 @@ if (ThreadFuzzer::instance().isEffective()) &CurrentMetrics::MaxDDLEntryID, &CurrentMetrics::MaxPushedDDLEntryID)); } - for (auto & server : *servers) - server.start(); - LOG_INFO(log, "Ready for connections."); + { + std::lock_guard lock(servers_lock); + for (auto & server : servers) + { + server.start(); + LOG_INFO(log, "Listening for {}", server.getDescription()); + } + LOG_INFO(log, "Ready for connections."); + } SCOPE_EXIT_SAFE({ LOG_DEBUG(log, "Received termination signal."); @@ -1575,10 +1442,13 @@ if (ThreadFuzzer::instance().isEffective()) is_cancelled = true; int current_connections = 0; - for (auto & server : *servers) { - server.stop(); - current_connections += server.currentConnections(); + std::lock_guard lock(servers_lock); + for (auto & server : servers) + { + server.stop(); + current_connections += server.currentConnections(); + } } if (current_connections) @@ -1591,7 +1461,7 @@ if (ThreadFuzzer::instance().isEffective()) global_context->getProcessList().killAllQueries(); if (current_connections) - current_connections = waitServersToFinish(*servers, config().getInt("shutdown_wait_unfinished", 5)); + current_connections = waitServersToFinish(servers, config().getInt("shutdown_wait_unfinished", 5)); if (current_connections) LOG_INFO(log, "Closed connections. But {} remain." 
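Before the final Server.cpp hunk below, it is worth noting the shape of the `AsynchronousMetrics` change in this patch: the shared server vectors are replaced with a `ProtocolServerMetricsFunc` callback, so the metrics thread no longer needs to know where servers are stored or how they are locked. Here is a minimal sketch of that inversion with simplified types; only `ProtocolServerMetrics` and the callback shape mirror the patch, the rest is illustrative.

```cpp
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// Mirrors the struct added in AsynchronousMetrics.h.
struct ProtocolServerMetrics
{
    std::string port_name;
    size_t current_threads;
};

using MetricsFunc = std::function<std::vector<ProtocolServerMetrics>()>;

// Simplified stand-in for AsynchronousMetrics: the collector is handed a
// callback that snapshots whatever servers currently exist, instead of
// holding pointers to the server containers themselves.
class MetricsCollector
{
public:
    explicit MetricsCollector(MetricsFunc func) : func_(std::move(func)) {}

    void update()
    {
        // The collector no longer cares where servers live or how they
        // are locked; the callback owns that concern.
        for (const auto & m : func_())
            std::cout << m.port_name << ": " << m.current_threads << " threads\n";
    }

private:
    MetricsFunc func_;
};

int main()
{
    std::vector<ProtocolServerMetrics> servers = {{"http_port", 4}, {"tcp_port", 2}};
    MetricsCollector collector([&] { return servers; });
    collector.update();
    servers.push_back({"mysql_port", 1}); // the server list can change between updates
    collector.update();
}
```

This decoupling is what lets `updateServers` add and remove servers at config-reload time without `AsynchronousMetrics` ever holding a stale reference.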
@@ -1627,4 +1497,273 @@ if (ThreadFuzzer::instance().isEffective()) return Application::EXIT_OK; } + +void Server::createServers( + Poco::Util::AbstractConfiguration & config, + const std::vector & listen_hosts, + bool listen_try, + Poco::ThreadPool & server_pool, + AsynchronousMetrics & async_metrics, + std::vector & servers, + bool start_servers) +{ + const Settings & settings = global_context->getSettingsRef(); + + Poco::Timespan keep_alive_timeout(config.getUInt("keep_alive_timeout", 10), 0); + Poco::Net::HTTPServerParams::Ptr http_params = new Poco::Net::HTTPServerParams; + http_params->setTimeout(settings.http_receive_timeout); + http_params->setKeepAliveTimeout(keep_alive_timeout); + + for (const auto & listen_host : listen_hosts) + { + /// HTTP + const char * port_name = "http_port"; + createServer(config, listen_host, port_name, listen_try, start_servers, servers, [&](UInt16 port) -> ProtocolServerAdapter + { + Poco::Net::ServerSocket socket; + auto address = socketBindListen(socket, listen_host, port); + socket.setReceiveTimeout(settings.http_receive_timeout); + socket.setSendTimeout(settings.http_send_timeout); + + return ProtocolServerAdapter( + listen_host, + port_name, + "http://" + address.toString(), + std::make_unique( + context(), createHandlerFactory(*this, async_metrics, "HTTPHandler-factory"), server_pool, socket, http_params)); + }); + + /// HTTPS + port_name = "https_port"; + createServer(config, listen_host, port_name, listen_try, start_servers, servers, [&](UInt16 port) -> ProtocolServerAdapter + { +#if USE_SSL + Poco::Net::SecureServerSocket socket; + auto address = socketBindListen(socket, listen_host, port, /* secure = */ true); + socket.setReceiveTimeout(settings.http_receive_timeout); + socket.setSendTimeout(settings.http_send_timeout); + return ProtocolServerAdapter( + listen_host, + port_name, + "https://" + address.toString(), + std::make_unique( + context(), createHandlerFactory(*this, async_metrics, "HTTPSHandler-factory"), server_pool, socket, http_params)); +#else + UNUSED(port); + throw Exception{"HTTPS protocol is disabled because Poco library was built without NetSSL support.", + ErrorCodes::SUPPORT_IS_DISABLED}; +#endif + }); + + /// TCP + port_name = "tcp_port"; + createServer(config, listen_host, port_name, listen_try, start_servers, servers, [&](UInt16 port) -> ProtocolServerAdapter + { + Poco::Net::ServerSocket socket; + auto address = socketBindListen(socket, listen_host, port); + socket.setReceiveTimeout(settings.receive_timeout); + socket.setSendTimeout(settings.send_timeout); + return ProtocolServerAdapter( + listen_host, + port_name, + "native protocol (tcp): " + address.toString(), + std::make_unique( + new TCPHandlerFactory(*this, /* secure */ false, /* proxy protocol */ false), + server_pool, + socket, + new Poco::Net::TCPServerParams)); + }); + + /// TCP with PROXY protocol, see https://github.com/wolfeidau/proxyv2/blob/master/docs/proxy-protocol.txt + port_name = "tcp_with_proxy_port"; + createServer(config, listen_host, port_name, listen_try, start_servers, servers, [&](UInt16 port) -> ProtocolServerAdapter + { + Poco::Net::ServerSocket socket; + auto address = socketBindListen(socket, listen_host, port); + socket.setReceiveTimeout(settings.receive_timeout); + socket.setSendTimeout(settings.send_timeout); + return ProtocolServerAdapter( + listen_host, + port_name, + "native protocol (tcp) with PROXY: " + address.toString(), + std::make_unique( + new TCPHandlerFactory(*this, /* secure */ false, /* proxy protocol */ true), + 
server_pool, + socket, + new Poco::Net::TCPServerParams)); + }); + + /// TCP with SSL + port_name = "tcp_port_secure"; + createServer(config, listen_host, port_name, listen_try, start_servers, servers, [&](UInt16 port) -> ProtocolServerAdapter + { +#if USE_SSL + Poco::Net::SecureServerSocket socket; + auto address = socketBindListen(socket, listen_host, port, /* secure = */ true); + socket.setReceiveTimeout(settings.receive_timeout); + socket.setSendTimeout(settings.send_timeout); + return ProtocolServerAdapter( + listen_host, + port_name, + "secure native protocol (tcp_secure): " + address.toString(), + std::make_unique( + new TCPHandlerFactory(*this, /* secure */ true, /* proxy protocol */ false), + server_pool, + socket, + new Poco::Net::TCPServerParams)); +#else + UNUSED(port); + throw Exception{"SSL support for TCP protocol is disabled because Poco library was built without NetSSL support.", + ErrorCodes::SUPPORT_IS_DISABLED}; +#endif + }); + + /// Interserver IO HTTP + port_name = "interserver_http_port"; + createServer(config, listen_host, port_name, listen_try, start_servers, servers, [&](UInt16 port) -> ProtocolServerAdapter + { + Poco::Net::ServerSocket socket; + auto address = socketBindListen(socket, listen_host, port); + socket.setReceiveTimeout(settings.http_receive_timeout); + socket.setSendTimeout(settings.http_send_timeout); + return ProtocolServerAdapter( + listen_host, + port_name, + "replica communication (interserver): http://" + address.toString(), + std::make_unique( + context(), + createHandlerFactory(*this, async_metrics, "InterserverIOHTTPHandler-factory"), + server_pool, + socket, + http_params)); + }); + + port_name = "interserver_https_port"; + createServer(config, listen_host, port_name, listen_try, start_servers, servers, [&](UInt16 port) -> ProtocolServerAdapter + { +#if USE_SSL + Poco::Net::SecureServerSocket socket; + auto address = socketBindListen(socket, listen_host, port, /* secure = */ true); + socket.setReceiveTimeout(settings.http_receive_timeout); + socket.setSendTimeout(settings.http_send_timeout); + return ProtocolServerAdapter( + listen_host, + port_name, + "secure replica communication (interserver): https://" + address.toString(), + std::make_unique( + context(), + createHandlerFactory(*this, async_metrics, "InterserverIOHTTPSHandler-factory"), + server_pool, + socket, + http_params)); +#else + UNUSED(port); + throw Exception{"SSL support for TCP protocol is disabled because Poco library was built without NetSSL support.", + ErrorCodes::SUPPORT_IS_DISABLED}; +#endif + }); + + port_name = "mysql_port"; + createServer(config, listen_host, port_name, listen_try, start_servers, servers, [&](UInt16 port) -> ProtocolServerAdapter + { + Poco::Net::ServerSocket socket; + auto address = socketBindListen(socket, listen_host, port, /* secure = */ true); + socket.setReceiveTimeout(Poco::Timespan()); + socket.setSendTimeout(settings.send_timeout); + return ProtocolServerAdapter( + listen_host, + port_name, + "MySQL compatibility protocol: " + address.toString(), + std::make_unique(new MySQLHandlerFactory(*this), server_pool, socket, new Poco::Net::TCPServerParams)); + }); + + port_name = "postgresql_port"; + createServer(config, listen_host, port_name, listen_try, start_servers, servers, [&](UInt16 port) -> ProtocolServerAdapter + { + Poco::Net::ServerSocket socket; + auto address = socketBindListen(socket, listen_host, port, /* secure = */ true); + socket.setReceiveTimeout(Poco::Timespan()); + socket.setSendTimeout(settings.send_timeout); + return 
ProtocolServerAdapter( + listen_host, + port_name, + "PostgreSQL compatibility protocol: " + address.toString(), + std::make_unique(new PostgreSQLHandlerFactory(*this), server_pool, socket, new Poco::Net::TCPServerParams)); + }); + +#if USE_GRPC + port_name = "grpc_port"; + createServer(config, listen_host, port_name, listen_try, start_servers, servers, [&](UInt16 port) -> ProtocolServerAdapter + { + Poco::Net::SocketAddress server_address(listen_host, port); + return ProtocolServerAdapter( + listen_host, + port_name, + "gRPC protocol: " + server_address.toString(), + std::make_unique(*this, makeSocketAddress(listen_host, port, &logger()))); + }); +#endif + + /// Prometheus (if defined and not setup yet with http_port) + port_name = "prometheus.port"; + createServer(config, listen_host, port_name, listen_try, start_servers, servers, [&](UInt16 port) -> ProtocolServerAdapter + { + Poco::Net::ServerSocket socket; + auto address = socketBindListen(socket, listen_host, port); + socket.setReceiveTimeout(settings.http_receive_timeout); + socket.setSendTimeout(settings.http_send_timeout); + return ProtocolServerAdapter( + listen_host, + port_name, + "Prometheus: http://" + address.toString(), + std::make_unique( + context(), createHandlerFactory(*this, async_metrics, "PrometheusHandler-factory"), server_pool, socket, http_params)); + }); + } + +} + +void Server::updateServers( + Poco::Util::AbstractConfiguration & config, + Poco::ThreadPool & server_pool, + AsynchronousMetrics & async_metrics, + std::vector & servers) +{ + Poco::Logger * log = &logger(); + /// Gracefully shutdown servers when their port is removed from config + const auto listen_hosts = getListenHosts(config); + const auto listen_try = getListenTry(config); + + for (auto & server : servers) + if (!server.isStopping()) + { + bool has_host = std::find(listen_hosts.begin(), listen_hosts.end(), server.getListenHost()) != listen_hosts.end(); + bool has_port = !config.getString(server.getPortName(), "").empty(); + if (!has_host || !has_port || config.getInt(server.getPortName()) != server.portNumber()) + { + server.stop(); + LOG_INFO(log, "Stopped listening for {}", server.getDescription()); + } + } + + createServers(config, listen_hosts, listen_try, server_pool, async_metrics, servers, /* start_servers: */ true); + + /// Remove servers once all their connections are closed + while (std::any_of(servers.begin(), servers.end(), [](const auto & server) { return server.isStopping(); })) + { + std::this_thread::sleep_for(std::chrono::milliseconds(100)); + std::erase_if(servers, [&log](auto & server) + { + if (!server.isStopping()) + return false; + auto is_finished = server.currentConnections() == 0; + if (is_finished) + LOG_DEBUG(log, "Server finished: {}", server.getDescription()); + else + LOG_TRACE(log, "Waiting server to finish: {}", server.getDescription()); + return is_finished; + }); + } +} + } diff --git a/programs/server/Server.h b/programs/server/Server.h index 45e5fccd51d..b4f2ea3bb79 100644 --- a/programs/server/Server.h +++ b/programs/server/Server.h @@ -24,6 +24,8 @@ namespace Poco namespace DB { +class AsynchronousMetrics; +class ProtocolServerAdapter; class Server : public BaseDaemon, public IServer { @@ -67,8 +69,30 @@ private: ContextMutablePtr global_context; Poco::Net::SocketAddress socketBindListen(Poco::Net::ServerSocket & socket, const std::string & host, UInt16 port, [[maybe_unused]] bool secure = false) const; - using CreateServerFunc = std::function; - void createServer(const std::string & listen_host, const char 
* port_name, bool listen_try, CreateServerFunc && func) const; + using CreateServerFunc = std::function; + void createServer( + Poco::Util::AbstractConfiguration & config, + const std::string & listen_host, + const char * port_name, + bool listen_try, + bool start_server, + std::vector & servers, + CreateServerFunc && func) const; + + void createServers( + Poco::Util::AbstractConfiguration & config, + const std::vector & listen_hosts, + bool listen_try, + Poco::ThreadPool & server_pool, + AsynchronousMetrics & async_metrics, + std::vector & servers, + bool start_servers = false); + + void updateServers( + Poco::Util::AbstractConfiguration & config, + Poco::ThreadPool & server_pool, + AsynchronousMetrics & async_metrics, + std::vector & servers); }; } diff --git a/src/Common/ProfileEvents.cpp b/src/Common/ProfileEvents.cpp index cb9c6f594a6..e0383da29f4 100644 --- a/src/Common/ProfileEvents.cpp +++ b/src/Common/ProfileEvents.cpp @@ -284,7 +284,8 @@ M(MergeTreeMetaCacheHit, "Number of times the read of meta file was done from MergeTree meta cache") \ M(MergeTreeMetaCacheMiss, "Number of times the read of meta file was not done from MergeTree meta cache") \ \ - + M(MainConfigLoads, "Number of times the main configuration was reloaded.") \ + \ namespace ProfileEvents { diff --git a/src/Interpreters/AsynchronousMetrics.cpp b/src/Interpreters/AsynchronousMetrics.cpp index 121f7c4153f..d1c5fbebbc7 100644 --- a/src/Interpreters/AsynchronousMetrics.cpp +++ b/src/Interpreters/AsynchronousMetrics.cpp @@ -69,12 +69,10 @@ static std::unique_ptr openFileIfExists(const std::stri AsynchronousMetrics::AsynchronousMetrics( ContextPtr global_context_, int update_period_seconds, - std::shared_ptr> servers_to_start_before_tables_, - std::shared_ptr> servers_) + const ProtocolServerMetricsFunc & protocol_server_metrics_func_) : WithContext(global_context_) , update_period(update_period_seconds) - , servers_to_start_before_tables(servers_to_start_before_tables_) - , servers(servers_) + , protocol_server_metrics_func(protocol_server_metrics_func_) , log(&Poco::Logger::get("AsynchronousMetrics")) { #if defined(OS_LINUX) @@ -238,7 +236,7 @@ void AsynchronousMetrics::start() thread = std::make_unique([this] { run(); }); } -AsynchronousMetrics::~AsynchronousMetrics() +void AsynchronousMetrics::stop() { try { @@ -249,7 +247,10 @@ AsynchronousMetrics::~AsynchronousMetrics() wait_cond.notify_one(); if (thread) + { thread->join(); + thread.reset(); + } } catch (...) 
{ @@ -257,6 +258,11 @@ AsynchronousMetrics::~AsynchronousMetrics() } } +AsynchronousMetrics::~AsynchronousMetrics() +{ + stop(); +} + AsynchronousMetricValues AsynchronousMetrics::getValues() const { @@ -1381,22 +1387,11 @@ void AsynchronousMetrics::update(std::chrono::system_clock::time_point update_ti return it->second; }; - if (servers_to_start_before_tables) + const auto server_metrics = protocol_server_metrics_func(); + for (const auto & server_metric : server_metrics) { - for (const auto & server : *servers_to_start_before_tables) - { - if (const auto * name = get_metric_name(server.getPortName())) - new_values[name] = server.currentThreads(); - } - } - - if (servers) - { - for (const auto & server : *servers) - { - if (const auto * name = get_metric_name(server.getPortName())) - new_values[name] = server.currentThreads(); - } + if (const auto * name = get_metric_name(server_metric.port_name)) + new_values[name] = server_metric.current_threads; } } diff --git a/src/Interpreters/AsynchronousMetrics.h b/src/Interpreters/AsynchronousMetrics.h index 7a5c2d638d7..3c7581ce1a3 100644 --- a/src/Interpreters/AsynchronousMetrics.h +++ b/src/Interpreters/AsynchronousMetrics.h @@ -30,6 +30,11 @@ class ReadBuffer; using AsynchronousMetricValue = double; using AsynchronousMetricValues = std::unordered_map; +struct ProtocolServerMetrics +{ + String port_name; + size_t current_threads; +}; /** Periodically (by default, each minute, starting at 30 seconds offset) * calculates and updates some metrics, @@ -41,24 +46,25 @@ using AsynchronousMetricValues = std::unordered_map()>; AsynchronousMetrics( ContextPtr global_context_, int update_period_seconds, - std::shared_ptr> servers_to_start_before_tables_, - std::shared_ptr> servers_); + const ProtocolServerMetricsFunc & protocol_server_metrics_func_); ~AsynchronousMetrics(); /// Separate method allows to initialize the `servers` variable beforehand. void start(); + void stop(); + /// Returns copy of all values. AsynchronousMetricValues getValues() const; private: const std::chrono::seconds update_period; - std::shared_ptr> servers_to_start_before_tables{nullptr}; - std::shared_ptr> servers{nullptr}; + ProtocolServerMetricsFunc protocol_server_metrics_func; mutable std::mutex mutex; std::condition_variable wait_cond; diff --git a/src/Server/GRPCServer.h b/src/Server/GRPCServer.h index 25c3813c11d..e2b48f1c16b 100644 --- a/src/Server/GRPCServer.h +++ b/src/Server/GRPCServer.h @@ -4,6 +4,7 @@ #if USE_GRPC #include +#include #include "clickhouse_grpc.grpc.pb.h" namespace Poco { class Logger; } @@ -30,6 +31,9 @@ public: /// Stops the server. No new connections will be accepted. void stop(); + /// Returns the port this server is listening to. + UInt16 portNumber() const { return address_to_listen.port(); } + /// Returns the number of currently handled connections. 
size_t currentConnections() const; diff --git a/src/Server/HTTP/HTTPServer.cpp b/src/Server/HTTP/HTTPServer.cpp index 42e6467d0af..2e91fad1c0f 100644 --- a/src/Server/HTTP/HTTPServer.cpp +++ b/src/Server/HTTP/HTTPServer.cpp @@ -5,31 +5,13 @@ namespace DB { -HTTPServer::HTTPServer( - ContextPtr context, - HTTPRequestHandlerFactoryPtr factory_, - UInt16 port_number, - Poco::Net::HTTPServerParams::Ptr params) - : TCPServer(new HTTPServerConnectionFactory(context, params, factory_), port_number, params), factory(factory_) -{ -} - -HTTPServer::HTTPServer( - ContextPtr context, - HTTPRequestHandlerFactoryPtr factory_, - const Poco::Net::ServerSocket & socket, - Poco::Net::HTTPServerParams::Ptr params) - : TCPServer(new HTTPServerConnectionFactory(context, params, factory_), socket, params), factory(factory_) -{ -} - HTTPServer::HTTPServer( ContextPtr context, HTTPRequestHandlerFactoryPtr factory_, Poco::ThreadPool & thread_pool, - const Poco::Net::ServerSocket & socket, + Poco::Net::ServerSocket & socket_, Poco::Net::HTTPServerParams::Ptr params) - : TCPServer(new HTTPServerConnectionFactory(context, params, factory_), thread_pool, socket, params), factory(factory_) + : TCPServer(new HTTPServerConnectionFactory(context, params, factory_), thread_pool, socket_, params), factory(factory_) { } diff --git a/src/Server/HTTP/HTTPServer.h b/src/Server/HTTP/HTTPServer.h index 3518fd66d20..07ad54d267f 100644 --- a/src/Server/HTTP/HTTPServer.h +++ b/src/Server/HTTP/HTTPServer.h @@ -1,9 +1,9 @@ #pragma once #include +#include #include -#include #include @@ -13,26 +13,14 @@ namespace DB class Context; -class HTTPServer : public Poco::Net::TCPServer +class HTTPServer : public TCPServer { public: explicit HTTPServer( - ContextPtr context, - HTTPRequestHandlerFactoryPtr factory, - UInt16 port_number = 80, - Poco::Net::HTTPServerParams::Ptr params = new Poco::Net::HTTPServerParams); - - HTTPServer( - ContextPtr context, - HTTPRequestHandlerFactoryPtr factory, - const Poco::Net::ServerSocket & socket, - Poco::Net::HTTPServerParams::Ptr params); - - HTTPServer( ContextPtr context, HTTPRequestHandlerFactoryPtr factory, Poco::ThreadPool & thread_pool, - const Poco::Net::ServerSocket & socket, + Poco::Net::ServerSocket & socket, Poco::Net::HTTPServerParams::Ptr params); ~HTTPServer() override; diff --git a/src/Server/HTTP/HTTPServerConnection.cpp b/src/Server/HTTP/HTTPServerConnection.cpp index de81da20ead..7020b8e9a23 100644 --- a/src/Server/HTTP/HTTPServerConnection.cpp +++ b/src/Server/HTTP/HTTPServerConnection.cpp @@ -1,4 +1,5 @@ #include +#include #include @@ -7,10 +8,11 @@ namespace DB HTTPServerConnection::HTTPServerConnection( ContextPtr context_, + TCPServer & tcp_server_, const Poco::Net::StreamSocket & socket, Poco::Net::HTTPServerParams::Ptr params_, HTTPRequestHandlerFactoryPtr factory_) - : TCPServerConnection(socket), context(Context::createCopy(context_)), params(params_), factory(factory_), stopped(false) + : TCPServerConnection(socket), context(Context::createCopy(context_)), tcp_server(tcp_server_), params(params_), factory(factory_), stopped(false) { poco_check_ptr(factory); } @@ -20,12 +22,12 @@ void HTTPServerConnection::run() std::string server = params->getSoftwareVersion(); Poco::Net::HTTPServerSession session(socket(), params); - while (!stopped && session.hasMoreRequests()) + while (!stopped && tcp_server.isOpen() && session.hasMoreRequests()) { try { std::unique_lock lock(mutex); - if (!stopped) + if (!stopped && tcp_server.isOpen()) { HTTPServerResponse response(session); 
HTTPServerRequest request(context, response, session); @@ -48,6 +50,11 @@ void HTTPServerConnection::run() response.set("Server", server); try { + if (!tcp_server.isOpen()) + { + sendErrorResponse(session, Poco::Net::HTTPResponse::HTTP_SERVICE_UNAVAILABLE); + break; + } std::unique_ptr handler(factory->createRequestHandler(request)); if (handler) diff --git a/src/Server/HTTP/HTTPServerConnection.h b/src/Server/HTTP/HTTPServerConnection.h index 1c7ae6cd2b7..db3969f6ffb 100644 --- a/src/Server/HTTP/HTTPServerConnection.h +++ b/src/Server/HTTP/HTTPServerConnection.h @@ -9,12 +9,14 @@ namespace DB { +class TCPServer; class HTTPServerConnection : public Poco::Net::TCPServerConnection { public: HTTPServerConnection( ContextPtr context, + TCPServer & tcp_server, const Poco::Net::StreamSocket & socket, Poco::Net::HTTPServerParams::Ptr params, HTTPRequestHandlerFactoryPtr factory); @@ -26,6 +28,7 @@ protected: private: ContextPtr context; + TCPServer & tcp_server; Poco::Net::HTTPServerParams::Ptr params; HTTPRequestHandlerFactoryPtr factory; bool stopped; diff --git a/src/Server/HTTP/HTTPServerConnectionFactory.cpp b/src/Server/HTTP/HTTPServerConnectionFactory.cpp index 0e4fb6cfcec..008da222c79 100644 --- a/src/Server/HTTP/HTTPServerConnectionFactory.cpp +++ b/src/Server/HTTP/HTTPServerConnectionFactory.cpp @@ -11,9 +11,9 @@ HTTPServerConnectionFactory::HTTPServerConnectionFactory( poco_check_ptr(factory); } -Poco::Net::TCPServerConnection * HTTPServerConnectionFactory::createConnection(const Poco::Net::StreamSocket & socket) +Poco::Net::TCPServerConnection * HTTPServerConnectionFactory::createConnection(const Poco::Net::StreamSocket & socket, TCPServer & tcp_server) { - return new HTTPServerConnection(context, socket, params, factory); + return new HTTPServerConnection(context, tcp_server, socket, params, factory); } } diff --git a/src/Server/HTTP/HTTPServerConnectionFactory.h b/src/Server/HTTP/HTTPServerConnectionFactory.h index 3f11eca0f69..a19dc6d4d5c 100644 --- a/src/Server/HTTP/HTTPServerConnectionFactory.h +++ b/src/Server/HTTP/HTTPServerConnectionFactory.h @@ -2,19 +2,19 @@ #include #include +#include #include -#include namespace DB { -class HTTPServerConnectionFactory : public Poco::Net::TCPServerConnectionFactory +class HTTPServerConnectionFactory : public TCPServerConnectionFactory { public: HTTPServerConnectionFactory(ContextPtr context, Poco::Net::HTTPServerParams::Ptr params, HTTPRequestHandlerFactoryPtr factory); - Poco::Net::TCPServerConnection * createConnection(const Poco::Net::StreamSocket & socket) override; + Poco::Net::TCPServerConnection * createConnection(const Poco::Net::StreamSocket & socket, TCPServer & tcp_server) override; private: ContextPtr context; diff --git a/src/Server/KeeperTCPHandlerFactory.h b/src/Server/KeeperTCPHandlerFactory.h index 67bb3dab268..58dc73d7c27 100644 --- a/src/Server/KeeperTCPHandlerFactory.h +++ b/src/Server/KeeperTCPHandlerFactory.h @@ -1,7 +1,7 @@ #pragma once #include -#include +#include #include #include #include @@ -10,7 +10,7 @@ namespace DB { -class KeeperTCPHandlerFactory : public Poco::Net::TCPServerConnectionFactory +class KeeperTCPHandlerFactory : public TCPServerConnectionFactory { private: IServer & server; @@ -29,7 +29,7 @@ public: { } - Poco::Net::TCPServerConnection * createConnection(const Poco::Net::StreamSocket & socket) override + Poco::Net::TCPServerConnection * createConnection(const Poco::Net::StreamSocket & socket, TCPServer &) override { try { diff --git a/src/Server/MySQLHandler.cpp b/src/Server/MySQLHandler.cpp index 
deebc073ad5..2836ee05c30 100644 --- a/src/Server/MySQLHandler.cpp +++ b/src/Server/MySQLHandler.cpp @@ -16,6 +16,7 @@ #include #include #include +#include #include #include #include @@ -62,10 +63,11 @@ static String showTableStatusReplacementQuery(const String & query); static String killConnectionIdReplacementQuery(const String & query); static String selectLimitReplacementQuery(const String & query); -MySQLHandler::MySQLHandler(IServer & server_, const Poco::Net::StreamSocket & socket_, +MySQLHandler::MySQLHandler(IServer & server_, TCPServer & tcp_server_, const Poco::Net::StreamSocket & socket_, bool ssl_enabled, size_t connection_id_) : Poco::Net::TCPServerConnection(socket_) , server(server_) + , tcp_server(tcp_server_) , log(&Poco::Logger::get("MySQLHandler")) , connection_id(connection_id_) , auth_plugin(new MySQLProtocol::Authentication::Native41()) @@ -138,11 +140,14 @@ void MySQLHandler::run() OKPacket ok_packet(0, handshake_response.capability_flags, 0, 0, 0); packet_endpoint->sendPacket(ok_packet, true); - while (true) + while (tcp_server.isOpen()) { packet_endpoint->resetSequenceId(); MySQLPacketPayloadReadBuffer payload = packet_endpoint->getPayload(); + while (!in->poll(1000000)) + if (!tcp_server.isOpen()) + return; char command = 0; payload.readStrict(command); @@ -152,6 +157,8 @@ void MySQLHandler::run() LOG_DEBUG(log, "Received command: {}. Connection id: {}.", static_cast(static_cast(command)), connection_id); + if (!tcp_server.isOpen()) + return; try { switch (command) @@ -369,8 +376,8 @@ void MySQLHandler::finishHandshakeSSL( } #if USE_SSL -MySQLHandlerSSL::MySQLHandlerSSL(IServer & server_, const Poco::Net::StreamSocket & socket_, bool ssl_enabled, size_t connection_id_, RSA & public_key_, RSA & private_key_) - : MySQLHandler(server_, socket_, ssl_enabled, connection_id_) +MySQLHandlerSSL::MySQLHandlerSSL(IServer & server_, TCPServer & tcp_server_, const Poco::Net::StreamSocket & socket_, bool ssl_enabled, size_t connection_id_, RSA & public_key_, RSA & private_key_) + : MySQLHandler(server_, tcp_server_, socket_, ssl_enabled, connection_id_) , public_key(public_key_) , private_key(private_key_) {} diff --git a/src/Server/MySQLHandler.h b/src/Server/MySQLHandler.h index 7ef212bf36e..3af5f7a0eb2 100644 --- a/src/Server/MySQLHandler.h +++ b/src/Server/MySQLHandler.h @@ -24,11 +24,14 @@ namespace CurrentMetrics namespace DB { +class ReadBufferFromPocoSocket; +class TCPServer; + /// Handler for MySQL wire protocol connections. Allows to connect to ClickHouse using MySQL client. 
class MySQLHandler : public Poco::Net::TCPServerConnection { public: - MySQLHandler(IServer & server_, const Poco::Net::StreamSocket & socket_, bool ssl_enabled, size_t connection_id_); + MySQLHandler(IServer & server_, TCPServer & tcp_server_, const Poco::Net::StreamSocket & socket_, bool ssl_enabled, size_t connection_id_); void run() final; @@ -52,6 +55,7 @@ protected: virtual void finishHandshakeSSL(size_t packet_size, char * buf, size_t pos, std::function read_bytes, MySQLProtocol::ConnectionPhase::HandshakeResponse & packet); IServer & server; + TCPServer & tcp_server; Poco::Logger * log; UInt64 connection_id = 0; @@ -68,7 +72,7 @@ protected: Replacements replacements; std::unique_ptr auth_plugin; - std::shared_ptr in; + std::shared_ptr in; std::shared_ptr out; bool secure_connection = false; }; @@ -77,7 +81,7 @@ protected: class MySQLHandlerSSL : public MySQLHandler { public: - MySQLHandlerSSL(IServer & server_, const Poco::Net::StreamSocket & socket_, bool ssl_enabled, size_t connection_id_, RSA & public_key_, RSA & private_key_); + MySQLHandlerSSL(IServer & server_, TCPServer & tcp_server_, const Poco::Net::StreamSocket & socket_, bool ssl_enabled, size_t connection_id_, RSA & public_key_, RSA & private_key_); private: void authPluginSSL() override; diff --git a/src/Server/MySQLHandlerFactory.cpp b/src/Server/MySQLHandlerFactory.cpp index 7a0bfd8ab09..f7bb073e275 100644 --- a/src/Server/MySQLHandlerFactory.cpp +++ b/src/Server/MySQLHandlerFactory.cpp @@ -118,14 +118,14 @@ void MySQLHandlerFactory::generateRSAKeys() } #endif -Poco::Net::TCPServerConnection * MySQLHandlerFactory::createConnection(const Poco::Net::StreamSocket & socket) +Poco::Net::TCPServerConnection * MySQLHandlerFactory::createConnection(const Poco::Net::StreamSocket & socket, TCPServer & tcp_server) { size_t connection_id = last_connection_id++; LOG_TRACE(log, "MySQL connection. Id: {}. 
     LOG_TRACE(log, "MySQL connection. Id: {}. Address: {}", connection_id, socket.peerAddress().toString());
 #if USE_SSL
-    return new MySQLHandlerSSL(server, socket, ssl_enabled, connection_id, *public_key, *private_key);
+    return new MySQLHandlerSSL(server, tcp_server, socket, ssl_enabled, connection_id, *public_key, *private_key);
 #else
-    return new MySQLHandler(server, socket, ssl_enabled, connection_id);
+    return new MySQLHandler(server, tcp_server, socket, ssl_enabled, connection_id);
 #endif
 }

diff --git a/src/Server/MySQLHandlerFactory.h b/src/Server/MySQLHandlerFactory.h
index 106fdfdf341..25f1af85273 100644
--- a/src/Server/MySQLHandlerFactory.h
+++ b/src/Server/MySQLHandlerFactory.h
@@ -1,9 +1,9 @@
 #pragma once
 
-#include <Poco/Net/TCPServerConnectionFactory.h>
 #include
 #include
 #include
+#include <Server/TCPServerConnectionFactory.h>
 #include
@@ -13,8 +13,9 @@
 namespace DB
 {
+class TCPServer;
 
-class MySQLHandlerFactory : public Poco::Net::TCPServerConnectionFactory
+class MySQLHandlerFactory : public TCPServerConnectionFactory
 {
 private:
     IServer & server;
@@ -43,7 +44,7 @@ public:
 
     void generateRSAKeys();
 
-    Poco::Net::TCPServerConnection * createConnection(const Poco::Net::StreamSocket & socket) override;
+    Poco::Net::TCPServerConnection * createConnection(const Poco::Net::StreamSocket & socket, TCPServer & tcp_server) override;
 };
 
 }

diff --git a/src/Server/PostgreSQLHandler.cpp b/src/Server/PostgreSQLHandler.cpp
index fee4ace3452..9808b538280 100644
--- a/src/Server/PostgreSQLHandler.cpp
+++ b/src/Server/PostgreSQLHandler.cpp
@@ -6,6 +6,7 @@
 #include
 #include "PostgreSQLHandler.h"
 #include
+#include <Server/TCPServer.h>
 #include
 #include
 #include
@@ -28,11 +29,13 @@ namespace ErrorCodes
 
 PostgreSQLHandler::PostgreSQLHandler(
     const Poco::Net::StreamSocket & socket_,
     IServer & server_,
+    TCPServer & tcp_server_,
     bool ssl_enabled_,
     Int32 connection_id_,
     std::vector<std::shared_ptr<PostgreSQLProtocol::PGAuthentication::AuthenticationMethod>> & auth_methods_)
     : Poco::Net::TCPServerConnection(socket_)
     , server(server_)
+    , tcp_server(tcp_server_)
     , ssl_enabled(ssl_enabled_)
     , connection_id(connection_id_)
     , authentication_manager(auth_methods_)
@@ -60,11 +63,18 @@ void PostgreSQLHandler::run()
     if (!startup())
         return;
 
-    while (true)
+    while (tcp_server.isOpen())
     {
         message_transport->send(PostgreSQLProtocol::Messaging::ReadyForQuery(), true);
+
+        constexpr size_t connection_check_timeout = 1; // 1 second
+        while (!in->poll(1000000 * connection_check_timeout))
+            if (!tcp_server.isOpen())
+                return;
         PostgreSQLProtocol::Messaging::FrontMessageType message_type = message_transport->receiveMessageType();
 
+        if (!tcp_server.isOpen())
+            return;
         switch (message_type)
         {
             case PostgreSQLProtocol::Messaging::FrontMessageType::QUERY:

diff --git a/src/Server/PostgreSQLHandler.h b/src/Server/PostgreSQLHandler.h
index 1d33f41f255..4fd08cc2606 100644
--- a/src/Server/PostgreSQLHandler.h
+++ b/src/Server/PostgreSQLHandler.h
@@ -18,8 +18,9 @@ namespace CurrentMetrics
 namespace DB
 {
-
+class ReadBufferFromPocoSocket;
 class Session;
+class TCPServer;
 
 /** PostgreSQL wire protocol implementation.
  * For more info see https://www.postgresql.org/docs/current/protocol.html
@@ -30,6 +31,7 @@ public:
     PostgreSQLHandler(
         const Poco::Net::StreamSocket & socket_,
         IServer & server_,
+        TCPServer & tcp_server_,
         bool ssl_enabled_,
         Int32 connection_id_,
         std::vector<std::shared_ptr<PostgreSQLProtocol::PGAuthentication::AuthenticationMethod>> & auth_methods_);
 
@@ -40,12 +42,13 @@ private:
     Poco::Logger * log = &Poco::Logger::get("PostgreSQLHandler");
 
     IServer & server;
+    TCPServer & tcp_server;
     std::unique_ptr<Session> session;
     bool ssl_enabled = false;
     Int32 connection_id = 0;
     Int32 secret_key = 0;
 
-    std::shared_ptr<ReadBuffer> in;
+    std::shared_ptr<ReadBufferFromPocoSocket> in;
     std::shared_ptr<WriteBuffer> out;
     std::shared_ptr<PostgreSQLProtocol::Messaging::MessageTransport> message_transport;

diff --git a/src/Server/PostgreSQLHandlerFactory.cpp b/src/Server/PostgreSQLHandlerFactory.cpp
index 1158cf5835e..6f2124861e7 100644
--- a/src/Server/PostgreSQLHandlerFactory.cpp
+++ b/src/Server/PostgreSQLHandlerFactory.cpp
@@ -1,5 +1,4 @@
 #include "PostgreSQLHandlerFactory.h"
-#include <Poco/Net/TCPServerConnectionFactory.h>
 #include
 #include
@@ -17,11 +16,11 @@ PostgreSQLHandlerFactory::PostgreSQLHandlerFactory(IServer & server_)
     };
 }
 
-Poco::Net::TCPServerConnection * PostgreSQLHandlerFactory::createConnection(const Poco::Net::StreamSocket & socket)
+Poco::Net::TCPServerConnection * PostgreSQLHandlerFactory::createConnection(const Poco::Net::StreamSocket & socket, TCPServer & tcp_server)
 {
     Int32 connection_id = last_connection_id++;
     LOG_TRACE(log, "PostgreSQL connection. Id: {}. Address: {}", connection_id, socket.peerAddress().toString());
-    return new PostgreSQLHandler(socket, server, ssl_enabled, connection_id, auth_methods);
+    return new PostgreSQLHandler(socket, server, tcp_server, ssl_enabled, connection_id, auth_methods);
 }
 
 }

diff --git a/src/Server/PostgreSQLHandlerFactory.h b/src/Server/PostgreSQLHandlerFactory.h
index dc3d4047d2a..e9241da6f0e 100644
--- a/src/Server/PostgreSQLHandlerFactory.h
+++ b/src/Server/PostgreSQLHandlerFactory.h
@@ -1,16 +1,16 @@
 #pragma once
 
-#include <Poco/Net/TCPServerConnectionFactory.h>
 #include
 #include
 #include
+#include <Server/TCPServerConnectionFactory.h>
 #include
 #include
 
 namespace DB
 {
 
-class PostgreSQLHandlerFactory : public Poco::Net::TCPServerConnectionFactory
+class PostgreSQLHandlerFactory : public TCPServerConnectionFactory
 {
 private:
     IServer & server;
@@ -28,6 +28,6 @@ private:
 public:
     explicit PostgreSQLHandlerFactory(IServer & server_);
 
-    Poco::Net::TCPServerConnection * createConnection(const Poco::Net::StreamSocket & socket) override;
+    Poco::Net::TCPServerConnection * createConnection(const Poco::Net::StreamSocket & socket, TCPServer & server) override;
 };
 }

diff --git a/src/Server/ProtocolServerAdapter.cpp b/src/Server/ProtocolServerAdapter.cpp
index 6ec1ec572f7..b41ad2376f1 100644
--- a/src/Server/ProtocolServerAdapter.cpp
+++ b/src/Server/ProtocolServerAdapter.cpp
@@ -1,5 +1,5 @@
 #include
-#include <Poco/Net/TCPServer.h>
+#include <Server/TCPServer.h>
 
 #if USE_GRPC
 #include
@@ -11,20 +11,29 @@ namespace DB
 class ProtocolServerAdapter::TCPServerAdapterImpl : public Impl
 {
 public:
-    explicit TCPServerAdapterImpl(std::unique_ptr<Poco::Net::TCPServer> tcp_server_) : tcp_server(std::move(tcp_server_)) {}
+    explicit TCPServerAdapterImpl(std::unique_ptr<TCPServer> tcp_server_) : tcp_server(std::move(tcp_server_)) {}
     ~TCPServerAdapterImpl() override = default;
 
     void start() override { tcp_server->start(); }
     void stop() override { tcp_server->stop(); }
+    bool isStopping() const override { return !tcp_server->isOpen(); }
+    UInt16 portNumber() const override { return tcp_server->portNumber(); }
     size_t currentConnections() const override { return tcp_server->currentConnections(); }
     size_t currentThreads() const override { return tcp_server->currentThreads(); }
 
 private:
-    std::unique_ptr<Poco::Net::TCPServer> tcp_server;
+    std::unique_ptr<TCPServer> tcp_server;
 };
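A note on the pattern running through the handler hunks above: instead of blocking indefinitely on a read, each connection handler now polls the socket in bounded slices and re-checks `TCPServer::isOpen()` between slices, so a `stop()` is observed within roughly one poll interval. Below is a minimal, self-contained C++ sketch of that idiom; `poll_socket` and `server_open` are invented stand-ins for `ReadBufferFromPocoSocket::poll()` and `TCPServer::isOpen()`, not the actual ClickHouse classes.

```cpp
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

std::atomic<bool> server_open{true};  // stand-in for TCPServer::isOpen()

// Pretends to be a socket poll: waits up to `timeout` and reports whether
// data arrived (here it never does, so every call times out).
bool poll_socket(std::chrono::microseconds timeout)
{
    std::this_thread::sleep_for(timeout);
    return false;
}

// Returns false if the server was stopped while waiting for the next packet.
bool wait_for_packet()
{
    while (!poll_socket(std::chrono::microseconds(1000000)))  // 1 s, as in the diff
        if (!server_open.load())
            return false;  // the handler reacts to stop() within one interval
    return true;
}

int main()
{
    std::thread stopper([]
    {
        std::this_thread::sleep_for(std::chrono::milliseconds(1500));
        server_open = false;  // roughly what TCPServer::stop() signals
    });
    std::cout << (wait_for_packet() ? "packet" : "server closed") << '\n';
    stopper.join();
}
```

The one-second slice trades a small amount of idle wake-up work for a bounded reaction time to configuration reloads that close a listener.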
-ProtocolServerAdapter::ProtocolServerAdapter(const char * port_name_, std::unique_ptr<Poco::Net::TCPServer> tcp_server_)
-    : port_name(port_name_), impl(std::make_unique<TCPServerAdapterImpl>(std::move(tcp_server_)))
+ProtocolServerAdapter::ProtocolServerAdapter(
+    const std::string & listen_host_,
+    const char * port_name_,
+    const std::string & description_,
+    std::unique_ptr<TCPServer> tcp_server_)
+    : listen_host(listen_host_)
+    , port_name(port_name_)
+    , description(description_)
+    , impl(std::make_unique<TCPServerAdapterImpl>(std::move(tcp_server_)))
 {
 }
 
@@ -36,16 +45,30 @@ public:
     ~GRPCServerAdapterImpl() override = default;
 
     void start() override { grpc_server->start(); }
-    void stop() override { grpc_server->stop(); }
+    void stop() override
+    {
+        is_stopping = true;
+        grpc_server->stop();
+    }
+    bool isStopping() const override { return is_stopping; }
+    UInt16 portNumber() const override { return grpc_server->portNumber(); }
     size_t currentConnections() const override { return grpc_server->currentConnections(); }
     size_t currentThreads() const override { return grpc_server->currentThreads(); }
 
 private:
     std::unique_ptr<GRPCServer> grpc_server;
+    bool is_stopping = false;
 };
 
-ProtocolServerAdapter::ProtocolServerAdapter(const char * port_name_, std::unique_ptr<GRPCServer> grpc_server_)
-    : port_name(port_name_), impl(std::make_unique<GRPCServerAdapterImpl>(std::move(grpc_server_)))
+ProtocolServerAdapter::ProtocolServerAdapter(
+    const std::string & listen_host_,
+    const char * port_name_,
+    const std::string & description_,
+    std::unique_ptr<GRPCServer> grpc_server_)
+    : listen_host(listen_host_)
+    , port_name(port_name_)
+    , description(description_)
+    , impl(std::make_unique<GRPCServerAdapterImpl>(std::move(grpc_server_)))
 {
 }
 #endif

diff --git a/src/Server/ProtocolServerAdapter.h b/src/Server/ProtocolServerAdapter.h
index 04c46b53356..9b3b1af0301 100644
--- a/src/Server/ProtocolServerAdapter.h
+++ b/src/Server/ProtocolServerAdapter.h
@@ -2,14 +2,14 @@
 
 #include
+#include <base/types.h>
 #include
 #include
 
-namespace Poco::Net { class TCPServer; }
-
 namespace DB
 {
 class GRPCServer;
+class TCPServer;
 
 /// Provides a unified interface to access a protocol-implementing server,
 /// no matter what type it has (HTTPServer, TCPServer, MySQLServer, GRPCServer, ...).
@@ -19,10 +19,10 @@ class ProtocolServerAdapter
 public:
     ProtocolServerAdapter(ProtocolServerAdapter && src) = default;
     ProtocolServerAdapter & operator =(ProtocolServerAdapter && src) = default;
-    ProtocolServerAdapter(const char * port_name_, std::unique_ptr<Poco::Net::TCPServer> tcp_server_);
+    ProtocolServerAdapter(const std::string & listen_host_, const char * port_name_, const std::string & description_, std::unique_ptr<TCPServer> tcp_server_);
 
 #if USE_GRPC
-    ProtocolServerAdapter(const char * port_name_, std::unique_ptr<GRPCServer> grpc_server_);
+    ProtocolServerAdapter(const std::string & listen_host_, const char * port_name_, const std::string & description_, std::unique_ptr<GRPCServer> grpc_server_);
 #endif
 
     /// Starts the server. A new thread will be created that waits for and accepts incoming connections.
@@ -31,14 +31,23 @@ public:
     /// Stops the server. No new connections will be accepted.
     void stop() { impl->stop(); }
 
+    bool isStopping() const { return impl->isStopping(); }
+
     /// Returns the number of currently handled connections.
     size_t currentConnections() const { return impl->currentConnections(); }
 
     /// Returns the number of current threads.
     size_t currentThreads() const { return impl->currentThreads(); }
 
+    /// Returns the port this server is listening to.
+    UInt16 portNumber() const { return impl->portNumber(); }
+
+    const std::string & getListenHost() const { return listen_host; }
+    const std::string & getPortName() const { return port_name; }
+    const std::string & getDescription() const { return description; }
+
 private:
     class Impl
     {
@@ -46,13 +55,17 @@ private:
         virtual ~Impl() {}
         virtual void start() = 0;
         virtual void stop() = 0;
+        virtual bool isStopping() const = 0;
+        virtual UInt16 portNumber() const = 0;
        virtual size_t currentConnections() const = 0;
         virtual size_t currentThreads() const = 0;
     };
     class TCPServerAdapterImpl;
     class GRPCServerAdapterImpl;
 
+    std::string listen_host;
     std::string port_name;
+    std::string description;
     std::unique_ptr<Impl> impl;
 };

diff --git a/src/Server/TCPHandler.cpp b/src/Server/TCPHandler.cpp
index 3b1ce4cc846..6b4f77dd7d0 100644
--- a/src/Server/TCPHandler.cpp
+++ b/src/Server/TCPHandler.cpp
@@ -32,6 +32,7 @@
 #include
 #include
 #include
+#include <Server/TCPServer.h>
 #include
 #include
 #include
@@ -81,9 +82,10 @@ namespace ErrorCodes
     extern const int UNKNOWN_PROTOCOL;
 }
 
-TCPHandler::TCPHandler(IServer & server_, const Poco::Net::StreamSocket & socket_, bool parse_proxy_protocol_, std::string server_display_name_)
+TCPHandler::TCPHandler(IServer & server_, TCPServer & tcp_server_, const Poco::Net::StreamSocket & socket_, bool parse_proxy_protocol_, std::string server_display_name_)
     : Poco::Net::TCPServerConnection(socket_)
     , server(server_)
+    , tcp_server(tcp_server_)
     , parse_proxy_protocol(parse_proxy_protocol_)
     , log(&Poco::Logger::get("TCPHandler"))
     , server_display_name(std::move(server_display_name_))
@@ -172,13 +174,13 @@ void TCPHandler::runImpl()
         throw;
     }
 
-    while (true)
+    while (tcp_server.isOpen())
     {
         /// We are waiting for a packet from the client. Thus, every `poll_interval` seconds check whether we need to shut down.
         {
             Stopwatch idle_time;
             UInt64 timeout_ms = std::min(poll_interval, idle_connection_timeout) * 1000000;
-            while (!server.isCancelled() && !static_cast<ReadBufferFromPocoSocket &>(*in).poll(timeout_ms))
+            while (tcp_server.isOpen() && !server.isCancelled() && !static_cast<ReadBufferFromPocoSocket &>(*in).poll(timeout_ms))
             {
                 if (idle_time.elapsedSeconds() > idle_connection_timeout)
                 {
@@ -189,7 +191,7 @@ void TCPHandler::runImpl()
         }
 
         /// If we need to shut down, or client disconnects.
-        if (server.isCancelled() || in->eof())
+        if (!tcp_server.isOpen() || server.isCancelled() || in->eof())
             break;
 
         Stopwatch watch;

diff --git a/src/Server/TCPHandler.h b/src/Server/TCPHandler.h
index 54af44759e7..4c4aeb0d913 100644
--- a/src/Server/TCPHandler.h
+++ b/src/Server/TCPHandler.h
@@ -35,6 +35,7 @@ class Session;
 struct Settings;
 class ColumnsDescription;
 struct ProfileInfo;
+class TCPServer;
 
 /// State of query processing.
 struct QueryState
@@ -127,7 +128,7 @@ public:
      * because it allows to check the IP ranges of the trusted proxy.
      * Proxy-forwarded (original client) IP address is used for quota accounting if quota is keyed by forwarded IP.
      */
-    TCPHandler(IServer & server_, const Poco::Net::StreamSocket & socket_, bool parse_proxy_protocol_, std::string server_display_name_);
+    TCPHandler(IServer & server_, TCPServer & tcp_server_, const Poco::Net::StreamSocket & socket_, bool parse_proxy_protocol_, std::string server_display_name_);
     ~TCPHandler() override;
 
     void run() override;
 
@@ -137,6 +138,7 @@ public:
 private:
     IServer & server;
+    TCPServer & tcp_server;
     bool parse_proxy_protocol = false;
     Poco::Logger * log;

diff --git a/src/Server/TCPHandlerFactory.h b/src/Server/TCPHandlerFactory.h
index e610bea330c..03b2592198d 100644
--- a/src/Server/TCPHandlerFactory.h
+++ b/src/Server/TCPHandlerFactory.h
@@ -1,17 +1,17 @@
 #pragma once
 
-#include <Poco/Net/TCPServerConnectionFactory.h>
 #include
 #include
 #include
 #include
+#include <Server/TCPServerConnectionFactory.h>
 
 namespace Poco { class Logger; }
 
 namespace DB
 {
 
-class TCPHandlerFactory : public Poco::Net::TCPServerConnectionFactory
+class TCPHandlerFactory : public TCPServerConnectionFactory
 {
 private:
     IServer & server;
@@ -38,13 +38,13 @@ public:
         server_display_name = server.config().getString("display_name", getFQDNOrHostName());
     }
 
-    Poco::Net::TCPServerConnection * createConnection(const Poco::Net::StreamSocket & socket) override
+    Poco::Net::TCPServerConnection * createConnection(const Poco::Net::StreamSocket & socket, TCPServer & tcp_server) override
     {
         try
         {
             LOG_TRACE(log, "TCP Request. Address: {}", socket.peerAddress().toString());
 
-            return new TCPHandler(server, socket, parse_proxy_protocol, server_display_name);
+            return new TCPHandler(server, tcp_server, socket, parse_proxy_protocol, server_display_name);
         }
         catch (const Poco::Net::NetException &)
         {

diff --git a/src/Server/TCPServer.cpp b/src/Server/TCPServer.cpp
new file mode 100644
index 00000000000..380c4ef9924
--- /dev/null
+++ b/src/Server/TCPServer.cpp
@@ -0,0 +1,36 @@
+#include <Server/TCPServer.h>
+#include <Poco/Net/TCPServerConnectionFactory.h>
+
+namespace DB
+{
+
+class TCPServerConnectionFactoryImpl : public Poco::Net::TCPServerConnectionFactory
+{
+public:
+    TCPServerConnectionFactoryImpl(TCPServer & tcp_server_, DB::TCPServerConnectionFactory::Ptr factory_)
+        : tcp_server(tcp_server_)
+        , factory(factory_)
+    {}
+
+    Poco::Net::TCPServerConnection * createConnection(const Poco::Net::StreamSocket & socket) override
+    {
+        return factory->createConnection(socket, tcp_server);
+    }
+private:
+    TCPServer & tcp_server;
+    DB::TCPServerConnectionFactory::Ptr factory;
+};
+
+TCPServer::TCPServer(
+    TCPServerConnectionFactory::Ptr factory_,
+    Poco::ThreadPool & thread_pool,
+    Poco::Net::ServerSocket & socket_,
+    Poco::Net::TCPServerParams::Ptr params)
+    : Poco::Net::TCPServer(new TCPServerConnectionFactoryImpl(*this, factory_), thread_pool, socket_, params)
+    , factory(factory_)
+    , socket(socket_)
+    , is_open(true)
+    , port_number(socket.address().port())
+{}
+
+}

diff --git a/src/Server/TCPServer.h b/src/Server/TCPServer.h
new file mode 100644
index 00000000000..219fed5342b
--- /dev/null
+++ b/src/Server/TCPServer.h
@@ -0,0 +1,47 @@
+#pragma once
+
+#include <Poco/Net/TCPServer.h>
+
+#include <base/types.h>
+#include <Server/TCPServerConnectionFactory.h>
+
+
+namespace DB
+{
+class Context;
+
+class TCPServer : public Poco::Net::TCPServer
+{
+public:
+    explicit TCPServer(
+        TCPServerConnectionFactory::Ptr factory,
+        Poco::ThreadPool & thread_pool,
+        Poco::Net::ServerSocket & socket,
+        Poco::Net::TCPServerParams::Ptr params = new Poco::Net::TCPServerParams);
+
+    /// Close the socket and ask existing connections to stop serving queries
+    void stop()
+    {
+        Poco::Net::TCPServer::stop();
+        // This notifies already established connections that they should stop serving
+        // queries and close their socket as soon as they can.
+        is_open = false;
+        // Poco's stop() stops listening on the socket but leaves it open.
+        // To be able to hand over control of the listening port to a new server, and
+        // to get fast connection refusal instead of timeouts, we also need to close
+        // the listening socket.
+        socket.close();
+    }
+
+    bool isOpen() const { return is_open; }
+
+    UInt16 portNumber() const { return port_number; }
+
+private:
+    TCPServerConnectionFactory::Ptr factory;
+    Poco::Net::ServerSocket socket;
+    std::atomic<bool> is_open;
+    UInt16 port_number;
+};
+
+}

diff --git a/src/Server/TCPServerConnectionFactory.h b/src/Server/TCPServerConnectionFactory.h
new file mode 100644
index 00000000000..613f98352bd
--- /dev/null
+++ b/src/Server/TCPServerConnectionFactory.h
@@ -0,0 +1,27 @@
+#pragma once
+
+#include <Poco/SharedPtr.h>
+
+namespace Poco
+{
+namespace Net
+{
+    class StreamSocket;
+    class TCPServerConnection;
+}
+}
+namespace DB
+{
+class TCPServer;
+
+class TCPServerConnectionFactory
+{
+public:
+    using Ptr = Poco::SharedPtr<TCPServerConnectionFactory>;
+
+    virtual ~TCPServerConnectionFactory() = default;
+
+    /// Same as Poco::Net::TCPServerConnectionFactory except we can pass the TCPServer
+    virtual Poco::Net::TCPServerConnection * createConnection(const Poco::Net::StreamSocket & socket, TCPServer & tcp_server) = 0;
+};
+}

diff --git a/src/Storages/PostgreSQL/PostgreSQLReplicationHandler.cpp b/src/Storages/PostgreSQL/PostgreSQLReplicationHandler.cpp
index 984a9cdd47a..a406ae62693 100644
--- a/src/Storages/PostgreSQL/PostgreSQLReplicationHandler.cpp
+++ b/src/Storages/PostgreSQL/PostgreSQLReplicationHandler.cpp
@@ -278,9 +278,8 @@ ASTPtr PostgreSQLReplicationHandler::getCreateNestedTableQuery(StorageMaterializ
 {
     postgres::Connection connection(connection_info);
     pqxx::nontransaction tx(connection.getRef());
-    auto table_structure = std::make_unique<PostgreSQLTableStructure>(fetchPostgreSQLTableStructure(tx, table_name, postgres_schema, true, true, true));
-    if (!table_structure)
-        throw Exception(ErrorCodes::LOGICAL_ERROR, "Failed to get PostgreSQL table structure");
+    auto [postgres_table_schema, postgres_table_name] = getSchemaAndTableName(table_name);
+    auto table_structure = std::make_unique<PostgreSQLTableStructure>(fetchPostgreSQLTableStructure(tx, postgres_table_name, postgres_table_schema, true, true, true));
     auto table_override = tryGetTableOverride(current_database_name, table_name);
     return storage->getCreateNestedTableQuery(std::move(table_structure), table_override ? table_override->as<ASTTableOverride>() : nullptr);
@@ -516,17 +515,25 @@ void PostgreSQLReplicationHandler::dropPublication(pqxx::nontransaction & tx)
 
 void PostgreSQLReplicationHandler::addTableToPublication(pqxx::nontransaction & ntx, const String & table_name)
 {
-    std::string query_str = fmt::format("ALTER PUBLICATION {} ADD TABLE ONLY {}", publication_name, doubleQuoteString(table_name));
+    std::string query_str = fmt::format("ALTER PUBLICATION {} ADD TABLE ONLY {}", publication_name, doubleQuoteWithSchema(table_name));
     ntx.exec(query_str);
-    LOG_TRACE(log, "Added table `{}` to publication `{}`", table_name, publication_name);
+    LOG_TRACE(log, "Added table {} to publication `{}`", doubleQuoteWithSchema(table_name), publication_name);
 }
 
 void PostgreSQLReplicationHandler::removeTableFromPublication(pqxx::nontransaction & ntx, const String & table_name)
 {
-    std::string query_str = fmt::format("ALTER PUBLICATION {} DROP TABLE ONLY {}", publication_name, doubleQuoteString(table_name));
-    ntx.exec(query_str);
-    LOG_TRACE(log, "Removed table `{}` from publication `{}`", table_name, publication_name);
+    try
+    {
+        std::string query_str = fmt::format("ALTER PUBLICATION {} DROP TABLE ONLY {}", publication_name, doubleQuoteWithSchema(table_name));
+        ntx.exec(query_str);
+        LOG_TRACE(log, "Removed table {} from publication `{}`", doubleQuoteWithSchema(table_name), publication_name);
+    }
+    catch (const pqxx::undefined_table &)
+    {
+        /// Removing table from replication must succeed even if table does not exist in PostgreSQL.
+        LOG_WARNING(log, "Did not remove table {} from publication `{}`, because table does not exist in PostgreSQL", doubleQuoteWithSchema(table_name), publication_name);
+    }
 }

diff --git a/src/Storages/StorageBuffer.cpp b/src/Storages/StorageBuffer.cpp
index f5526781f41..87a8ea2315d 100644
--- a/src/Storages/StorageBuffer.cpp
+++ b/src/Storages/StorageBuffer.cpp
@@ -455,10 +455,8 @@ static void appendBlock(const Block & from, Block & to)
     size_t rows = from.rows();
     size_t bytes = from.bytes();
 
-    CurrentMetrics::add(CurrentMetrics::StorageBufferRows, rows);
-    CurrentMetrics::add(CurrentMetrics::StorageBufferBytes, bytes);
-
     size_t old_rows = to.rows();
+    size_t old_bytes = to.bytes();
 
     MutableColumnPtr last_col;
     try
@@ -468,6 +466,8 @@ static void appendBlock(const Block & from, Block & to)
         if (to.rows() == 0)
         {
             to = from;
+            CurrentMetrics::add(CurrentMetrics::StorageBufferRows, rows);
+            CurrentMetrics::add(CurrentMetrics::StorageBufferBytes, bytes);
         }
         else
         {
@@ -480,6 +480,8 @@ static void appendBlock(const Block & from, Block & to)
                 to.getByPosition(column_no).column = std::move(last_col);
             }
+            CurrentMetrics::add(CurrentMetrics::StorageBufferRows, rows);
+            CurrentMetrics::add(CurrentMetrics::StorageBufferBytes, to.bytes() - old_bytes);
         }
     }
     catch (...)
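The StorageBuffer hunk above moves the metric updates to after the append has succeeded, and in the append branch accounts bytes as the observed growth of the destination block (`to.bytes() - old_bytes`) rather than the source block's size. For dictionary-encoded columns such as LowCardinality, appending N rows can grow the destination by less than `from.bytes()`; the integration test added further below checks exactly that (24 bytes after the first insert, only 25 after the second). A toy C++ sketch of delta-based accounting follows; `Block` and the counter here are invented stand-ins, not ClickHouse's types.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

struct Block
{
    std::vector<char> data;
    size_t bytes() const { return data.size(); }
};

size_t storage_buffer_bytes = 0;  // stand-in for CurrentMetrics::StorageBufferBytes

void append_block(const Block & from, Block & to)
{
    const size_t old_bytes = to.bytes();
    to.data.insert(to.data.end(), from.data.begin(), from.data.end());
    // Account what the destination actually grew by. With a shared-dictionary
    // encoding the growth can be smaller than from.bytes(), which is why the
    // hunk above uses `to.bytes() - old_bytes` instead of the upfront size.
    storage_buffer_bytes += to.bytes() - old_bytes;
}

int main()
{
    Block from{{'h', 'i'}};
    Block to;
    append_block(from, to);
    append_block(from, to);
    std::cout << storage_buffer_bytes << '\n';  // prints 4
}
```

Updating the counters only after a successful append also keeps them consistent when the append throws and is rolled back.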
diff --git a/src/Storages/System/CMakeLists.txt b/src/Storages/System/CMakeLists.txt
index 96c05a59173..133761cbe22 100644
--- a/src/Storages/System/CMakeLists.txt
+++ b/src/Storages/System/CMakeLists.txt
@@ -9,6 +9,36 @@ get_property (BUILD_COMPILE_DEFINITIONS DIRECTORY ${ClickHouse_SOURCE_DIR} PROPE
 get_property(TZDATA_VERSION GLOBAL PROPERTY TZDATA_VERSION_PROP)
 
+
+find_package(Git)
+if(Git_FOUND)
+    # The commit's git hash, and whether the building workspace was dirty or not
+    execute_process(COMMAND
+      "${GIT_EXECUTABLE}" rev-parse HEAD
+      WORKING_DIRECTORY "${ClickHouse_SOURCE_DIR}"
+      OUTPUT_VARIABLE GIT_HASH
+      ERROR_QUIET OUTPUT_STRIP_TRAILING_WHITESPACE)
+    # Git branch name
+    execute_process(COMMAND
+      "${GIT_EXECUTABLE}" rev-parse --abbrev-ref HEAD
+      WORKING_DIRECTORY "${ClickHouse_SOURCE_DIR}"
+      OUTPUT_VARIABLE GIT_BRANCH
+      ERROR_QUIET OUTPUT_STRIP_TRAILING_WHITESPACE)
+    # The date of the commit
+    SET(ENV{TZ} "UTC")
+    execute_process(COMMAND
+      "${GIT_EXECUTABLE}" log -1 --format=%ad --date=iso-local
+      WORKING_DIRECTORY "${ClickHouse_SOURCE_DIR}"
+      OUTPUT_VARIABLE GIT_DATE
+      ERROR_QUIET OUTPUT_STRIP_TRAILING_WHITESPACE)
+    # The subject of the commit
+    execute_process(COMMAND
+      "${GIT_EXECUTABLE}" log -1 --format=%s
+      WORKING_DIRECTORY "${ClickHouse_SOURCE_DIR}"
+      OUTPUT_VARIABLE GIT_COMMIT_SUBJECT
+      ERROR_QUIET OUTPUT_STRIP_TRAILING_WHITESPACE)
+endif()
+
 configure_file (StorageSystemBuildOptions.generated.cpp.in ${CONFIG_BUILD})
 
 include("${ClickHouse_SOURCE_DIR}/cmake/dbms_glob_sources.cmake")

diff --git a/src/Storages/System/StorageSystemBuildOptions.generated.cpp.in b/src/Storages/System/StorageSystemBuildOptions.generated.cpp.in
index da563cc245b..8a19d7649aa 100644
--- a/src/Storages/System/StorageSystemBuildOptions.generated.cpp.in
+++ b/src/Storages/System/StorageSystemBuildOptions.generated.cpp.in
@@ -50,6 +50,10 @@ const char * auto_config_build[]
     "USE_KRB5", "@USE_KRB5@",
     "USE_FILELOG", "@USE_FILELOG@",
     "USE_BZIP2", "@USE_BZIP2@",
+    "GIT_HASH", "@GIT_HASH@",
+    "GIT_BRANCH", "@GIT_BRANCH@",
+    "GIT_DATE", "@GIT_DATE@",
+    "GIT_COMMIT_SUBJECT", "@GIT_COMMIT_SUBJECT@",
 
     nullptr, nullptr
 };

diff --git a/src/Storages/tests/gtest_transform_query_for_external_database.cpp b/src/Storages/tests/gtest_transform_query_for_external_database.cpp
index f161400630b..57b9e73bbbd 100644
--- a/src/Storages/tests/gtest_transform_query_for_external_database.cpp
+++ b/src/Storages/tests/gtest_transform_query_for_external_database.cpp
@@ -120,7 +120,7 @@ TEST(TransformQueryForExternalDatabase, InWithSingleElement)
     check(state, 1,
           "SELECT column FROM test.table WHERE 1 IN (1)",
-          R"(SELECT "column" FROM "test"."table" WHERE 1)");
+          R"(SELECT "column" FROM "test"."table" WHERE 1 = 1)");
     check(state, 1,
           "SELECT column FROM test.table WHERE column IN (1, 2)",
           R"(SELECT "column" FROM "test"."table" WHERE "column" IN (1, 2))");
@@ -135,7 +135,7 @@ TEST(TransformQueryForExternalDatabase, InWithMultipleColumns)
     check(state, 1,
           "SELECT column FROM test.table WHERE (1,1) IN ((1,1))",
-          R"(SELECT "column" FROM "test"."table" WHERE 1)");
+          R"(SELECT "column" FROM "test"."table" WHERE 1 = 1)");
     check(state, 1,
           "SELECT field, value FROM test.table WHERE (field, value) IN (('foo', 'bar'))",
           R"(SELECT "field", "value" FROM "test"."table" WHERE ("field", "value") IN (('foo', 'bar')))");

diff --git a/src/Storages/transformQueryForExternalDatabase.cpp b/src/Storages/transformQueryForExternalDatabase.cpp
index 4d6c1787a34..c42fb7fa965 100644
--- a/src/Storages/transformQueryForExternalDatabase.cpp
+++ b/src/Storages/transformQueryForExternalDatabase.cpp
@@ -306,6 +306,18 @@ String transformQueryForExternalDatabase(
             throw Exception("Query contains non-compatible expressions (and external_table_strict_query=true)", ErrorCodes::INCORRECT_QUERY);
     }
 
+    auto * literal_expr = typeid_cast<ASTLiteral *>(original_where.get());
+    UInt64 value;
+    if (literal_expr && literal_expr->value.tryGet(value) && (value == 0 || value == 1))
+    {
+        /// WHERE 1 -> WHERE 1=1, WHERE 0 -> WHERE 1=0.
+        if (value)
+            original_where = makeASTFunction("equals", std::make_shared<ASTLiteral>(1), std::make_shared<ASTLiteral>(1));
+        else
+            original_where = makeASTFunction("equals", std::make_shared<ASTLiteral>(1), std::make_shared<ASTLiteral>(0));
+        select->setExpression(ASTSelectQuery::Expression::WHERE, std::move(original_where));
+    }
+
     ASTPtr select_ptr = select;
     dropAliases(select_ptr);

diff --git a/tests/ci/cancel_and_rerun_workflow_lambda/app.py b/tests/ci/cancel_and_rerun_workflow_lambda/app.py
index bd1dc394086..b79eb292dc6 100644
--- a/tests/ci/cancel_and_rerun_workflow_lambda/app.py
+++ b/tests/ci/cancel_and_rerun_workflow_lambda/app.py
@@ -11,7 +11,6 @@ import boto3
 NEED_RERUN_OR_CANCELL_WORKFLOWS = {
     13241696,  # PR
     15834118,  # Docs
-    15522500,  # MasterCI
     15516108,  # ReleaseCI
     15797242,  # BackportPR
 }
@@ -86,10 +85,23 @@ WorkflowDescription = namedtuple('WorkflowDescription',
 def get_workflows_description_for_pull_request(pull_request_event):
     head_branch = pull_request_event['head']['ref']
     print("PR", pull_request_event['number'], "has head ref", head_branch)
-    workflows = _exec_get_with_retry(API_URL + f"/actions/runs?branch={head_branch}")
+    workflows_data = []
+    workflows = _exec_get_with_retry(API_URL + f"/actions/runs?branch={head_branch}&event=pull_request&page=1")
+    workflows_data += workflows['workflow_runs']
+    i = 2
+    while len(workflows['workflow_runs']) > 0:
+        workflows = _exec_get_with_retry(API_URL + f"/actions/runs?branch={head_branch}&event=pull_request&page={i}")
+        workflows_data += workflows['workflow_runs']
+        i += 1
+        if i > 30:
+            print("Too many workflows found")
+            break
+
     workflow_descriptions = []
-    for workflow in workflows['workflow_runs']:
-        if workflow['workflow_id'] in NEED_RERUN_OR_CANCELL_WORKFLOWS:
+    for workflow in workflows_data:
+        # unfortunately we cannot filter out workflows from forks in the API request itself, so we do it manually
+        if (workflow['head_repository']['full_name'] == pull_request_event['head']['repo']['full_name']
+                and workflow['workflow_id'] in NEED_RERUN_OR_CANCELL_WORKFLOWS):
             workflow_descriptions.append(WorkflowDescription(
                 run_id=workflow['id'],
                 status=workflow['status'],

diff --git a/tests/ci/keeper_jepsen_check.py b/tests/ci/keeper_jepsen_check.py
index 5c7582242a9..2c2b8b4783f 100644
--- a/tests/ci/keeper_jepsen_check.py
+++ b/tests/ci/keeper_jepsen_check.py
@@ -120,6 +120,8 @@ if __name__ == "__main__":
 
     pr_info = PRInfo()
 
+    logging.info("Start at PR number %s, commit sha %s, labels %s", pr_info.number, pr_info.sha, pr_info.labels)
+
     if pr_info.number != 0 and 'jepsen-test' not in pr_info.labels():
         logging.info("No jepsen-test label in labels list, skipping")
         sys.exit(0)

diff --git a/tests/integration/helpers/cluster.py b/tests/integration/helpers/cluster.py
index d440f2de0ca..03520f0e6c0 100644
--- a/tests/integration/helpers/cluster.py
+++ b/tests/integration/helpers/cluster.py
@@ -2043,7 +2043,8 @@ class ClickHouseInstance:
                                   user=user, password=password, database=database)
 
     # Connects to the instance via HTTP interface, sends a query and returns the answer
-    def http_query(self, sql, data=None, params=None, user=None,
password=None, expect_fail_and_get_error=False): + def http_query(self, sql, data=None, params=None, user=None, password=None, expect_fail_and_get_error=False, + port=8123, timeout=None, retry_strategy=None): logging.debug(f"Executing query {sql} on {self.name} via HTTP interface") if params is None: params = {} @@ -2057,12 +2058,19 @@ class ClickHouseInstance: auth = requests.auth.HTTPBasicAuth(user, password) elif user: auth = requests.auth.HTTPBasicAuth(user, '') - url = "http://" + self.ip_address + ":8123/?" + urllib.parse.urlencode(params) + url = f"http://{self.ip_address}:{port}/?" + urllib.parse.urlencode(params) - if data: - r = requests.post(url, data, auth=auth) + if retry_strategy is None: + requester = requests else: - r = requests.get(url, auth=auth) + adapter = requests.adapters.HTTPAdapter(max_retries=retry_strategy) + requester = requests.Session() + requester.mount("https://", adapter) + requester.mount("http://", adapter) + if data: + r = requester.post(url, data, auth=auth, timeout=timeout) + else: + r = requester.get(url, auth=auth, timeout=timeout) def http_code_and_message(): code = r.status_code diff --git a/tests/integration/test_input_format_parallel_parsing_memory_tracking/configs/conf.xml b/tests/integration/test_input_format_parallel_parsing_memory_tracking/configs/conf.xml index 3e4c885d1f6..3adba1d402a 100644 --- a/tests/integration/test_input_format_parallel_parsing_memory_tracking/configs/conf.xml +++ b/tests/integration/test_input_format_parallel_parsing_memory_tracking/configs/conf.xml @@ -1,4 +1,23 @@ - - 3000000000 + + 4000000000 + + + + + + + + + + + + + + diff --git a/tests/integration/test_input_format_parallel_parsing_memory_tracking/test.py b/tests/integration/test_input_format_parallel_parsing_memory_tracking/test.py index bc7f32bf544..1c686c7982e 100644 --- a/tests/integration/test_input_format_parallel_parsing_memory_tracking/test.py +++ b/tests/integration/test_input_format_parallel_parsing_memory_tracking/test.py @@ -24,16 +24,13 @@ def start_cluster(): # max_memory_usage_for_user cannot be used, since the memory for user accounted -# correctly, only total is not +# correctly, only total is not (it is set via conf.xml) def test_memory_tracking_total(): - instance.query(''' - CREATE TABLE null (row String) ENGINE=Null; - ''') + instance.query('CREATE TABLE null (row String) ENGINE=Null') instance.exec_in_container(['bash', '-c', 'clickhouse local -q "SELECT arrayStringConcat(arrayMap(x->toString(cityHash64(x)), range(1000)), \' \') from numbers(10000)" > data.json']) for it in range(0, 20): # the problem can be triggered only via HTTP, # since clickhouse-client parses the data by itself. 
assert instance.exec_in_container(['curl', '--silent', '--show-error', '--data-binary', '@data.json', - 'http://127.1:8123/?query=INSERT%20INTO%20null%20FORMAT%20TSV']) == '', 'Failed on {} iteration'.format( - it) + 'http://127.1:8123/?query=INSERT%20INTO%20null%20FORMAT%20TSV']) == '', f'Failed on {it} iteration' diff --git a/tests/integration/test_postgresql_replica_database_engine_2/test.py b/tests/integration/test_postgresql_replica_database_engine_2/test.py index c8b63d8e667..7aee454c4a9 100644 --- a/tests/integration/test_postgresql_replica_database_engine_2/test.py +++ b/tests/integration/test_postgresql_replica_database_engine_2/test.py @@ -178,7 +178,7 @@ def assert_number_of_columns(expected, table_name, database_name='test_database' def check_tables_are_synchronized(table_name, order_by='key', postgres_database='postgres_database', materialized_database='test_database', schema_name=''): assert_nested_table_is_created(table_name, materialized_database, schema_name) - print("Checking table is synchronized:", table_name) + print(f"Checking table is synchronized. Table name: {table_name}, table schema: {schema_name}") expected = instance.query('select * from {}.{} order by {};'.format(postgres_database, table_name, order_by)) if len(schema_name) == 0: result = instance.query('select * from {}.{} order by {};'.format(materialized_database, table_name, order_by)) @@ -356,6 +356,11 @@ def test_remove_table_from_replication(started_cluster): for i in range(NUM_TABLES): cursor.execute('drop table if exists postgresql_replica_{};'.format(i)) + # Removing from replication table which does not exist in PostgreSQL must be ok. + instance.query('DETACH TABLE test_database.postgresql_replica_0'); + assert instance.contains_in_log("from publication, because table does not exist in PostgreSQL") + drop_materialized_db() + def test_predefined_connection_configuration(started_cluster): drop_materialized_db() @@ -379,6 +384,7 @@ def test_database_with_single_non_default_schema(started_cluster): NUM_TABLES=5 schema_name = 'test_schema' + materialized_db = 'test_database' clickhouse_postgres_db = 'postgres_database_with_schema' global insert_counter insert_counter = 0 @@ -430,6 +436,14 @@ def test_database_with_single_non_default_schema(started_cluster): instance.query(f"INSERT INTO {clickhouse_postgres_db}.postgresql_replica_{altered_table} SELECT number, number, number from numbers(5000, 1000)") assert_number_of_columns(3, f'postgresql_replica_{altered_table}') check_tables_are_synchronized(f"postgresql_replica_{altered_table}", postgres_database=clickhouse_postgres_db); + + print('DETACH-ATTACH') + detached_table_name = "postgresql_replica_1" + instance.query(f"DETACH TABLE {materialized_db}.{detached_table_name}") + assert not instance.contains_in_log("from publication, because table does not exist in PostgreSQL") + instance.query(f"ATTACH TABLE {materialized_db}.{detached_table_name}") + check_tables_are_synchronized(detached_table_name, postgres_database=clickhouse_postgres_db); + drop_materialized_db() @@ -440,6 +454,7 @@ def test_database_with_multiple_non_default_schemas_1(started_cluster): NUM_TABLES = 5 schema_name = 'test_schema' clickhouse_postgres_db = 'postgres_database_with_schema' + materialized_db = 'test_database' publication_tables = '' global insert_counter insert_counter = 0 @@ -494,6 +509,15 @@ def test_database_with_multiple_non_default_schemas_1(started_cluster): instance.query(f"INSERT INTO {clickhouse_postgres_db}.postgresql_replica_{altered_table} SELECT number, number, 
number from numbers(5000, 1000)") assert_number_of_columns(3, f'{schema_name}.postgresql_replica_{altered_table}') check_tables_are_synchronized(f"postgresql_replica_{altered_table}", schema_name=schema_name, postgres_database=clickhouse_postgres_db); + + print('DETACH-ATTACH') + detached_table_name = "postgresql_replica_1" + instance.query(f"DETACH TABLE {materialized_db}.`{schema_name}.{detached_table_name}`") + assert not instance.contains_in_log("from publication, because table does not exist in PostgreSQL") + instance.query(f"ATTACH TABLE {materialized_db}.`{schema_name}.{detached_table_name}`") + assert_show_tables("test_schema.postgresql_replica_0\ntest_schema.postgresql_replica_1\ntest_schema.postgresql_replica_2\ntest_schema.postgresql_replica_3\ntest_schema.postgresql_replica_4\n") + check_tables_are_synchronized(detached_table_name, schema_name=schema_name, postgres_database=clickhouse_postgres_db); + drop_materialized_db() @@ -504,6 +528,7 @@ def test_database_with_multiple_non_default_schemas_2(started_cluster): NUM_TABLES = 2 schemas_num = 2 schema_list = 'schema0, schema1' + materialized_db = 'test_database' global insert_counter insert_counter = 0 @@ -557,11 +582,23 @@ def test_database_with_multiple_non_default_schemas_2(started_cluster): print('ALTER') altered_schema = random.randint(0, schemas_num-1) altered_table = random.randint(0, NUM_TABLES-1) + clickhouse_postgres_db = f'clickhouse_postgres_db{altered_schema}' cursor.execute(f"ALTER TABLE schema{altered_schema}.postgresql_replica_{altered_table} ADD COLUMN value2 integer") instance.query(f"INSERT INTO clickhouse_postgres_db{altered_schema}.postgresql_replica_{altered_table} SELECT number, number, number from numbers(1000 * {insert_counter}, 1000)") assert_number_of_columns(3, f'schema{altered_schema}.postgresql_replica_{altered_table}') - check_tables_are_synchronized(f"postgresql_replica_{altered_table}", schema_name=schema_name, postgres_database=clickhouse_postgres_db); + check_tables_are_synchronized(f"postgresql_replica_{altered_table}", schema_name=f"schema{altered_schema}", postgres_database=clickhouse_postgres_db); + + print('DETACH-ATTACH') + detached_table_name = "postgresql_replica_1" + detached_table_schema = "schema0" + clickhouse_postgres_db = f'clickhouse_postgres_db0' + instance.query(f"DETACH TABLE {materialized_db}.`{detached_table_schema}.{detached_table_name}`") + assert not instance.contains_in_log("from publication, because table does not exist in PostgreSQL") + instance.query(f"ATTACH TABLE {materialized_db}.`{detached_table_schema}.{detached_table_name}`") + assert_show_tables("schema0.postgresql_replica_0\nschema0.postgresql_replica_1\nschema1.postgresql_replica_0\nschema1.postgresql_replica_1\n") + check_tables_are_synchronized(f"postgresql_replica_{altered_table}", schema_name=detached_table_schema, postgres_database=clickhouse_postgres_db); + drop_materialized_db() diff --git a/tests/integration/test_server_reload/.gitignore b/tests/integration/test_server_reload/.gitignore new file mode 100644 index 00000000000..edf565ec632 --- /dev/null +++ b/tests/integration/test_server_reload/.gitignore @@ -0,0 +1 @@ +_gen diff --git a/tests/integration/test_server_reload/__init__.py b/tests/integration/test_server_reload/__init__.py new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/integration/test_server_reload/configs/default_passwd.xml b/tests/integration/test_server_reload/configs/default_passwd.xml new file mode 100644 index 00000000000..5c23be0dcb0 --- /dev/null +++ 
b/tests/integration/test_server_reload/configs/default_passwd.xml @@ -0,0 +1,13 @@ + + + + + + + + + + 123 + + + diff --git a/tests/integration/test_server_reload/configs/dhparam.pem b/tests/integration/test_server_reload/configs/dhparam.pem new file mode 100644 index 00000000000..fb935b9c898 --- /dev/null +++ b/tests/integration/test_server_reload/configs/dhparam.pem @@ -0,0 +1,8 @@ +-----BEGIN DH PARAMETERS----- +MIIBCAKCAQEAkPGhfLY5nppeQkFBKYRpiisxzrRQfyyTUu6aabZP2CbAMAuoYzaC +Z+iqeWSQZKRYeA21SZXkC9xE1e5FJsc5IWzCRiMNZeLuj4ApUNysMu89DpX8/b91 ++Ka6wRJnaO43ZqHj/9FpU4JiYtxoIpXDC9HeiSAnwLwJc3L+nkYfnSGgvzWIxhGV +gCoVmVBoTe7wrqCyVlM5nrNZSjhlSugvXmu2bSK3MwYF08QLKvlF68eedbs0PMWh +WC0bFM/X7gMBEqL4DiINufAShbZPKxD6eL2APiHPUo6xun3ed/Po/5j8QBmiku0c +5Jb12ZhOTRTQjaRg2aFF8LPdW2tDE7HmewIBAg== +-----END DH PARAMETERS----- diff --git a/tests/integration/test_server_reload/configs/ports_from_zk.xml b/tests/integration/test_server_reload/configs/ports_from_zk.xml new file mode 100644 index 00000000000..ae3435a3d3c --- /dev/null +++ b/tests/integration/test_server_reload/configs/ports_from_zk.xml @@ -0,0 +1,9 @@ + + + + + + + + + \ No newline at end of file diff --git a/tests/integration/test_server_reload/configs/server.crt b/tests/integration/test_server_reload/configs/server.crt new file mode 100644 index 00000000000..6f4deca038f --- /dev/null +++ b/tests/integration/test_server_reload/configs/server.crt @@ -0,0 +1,18 @@ +-----BEGIN CERTIFICATE----- +MIIC+zCCAeOgAwIBAgIJAIhI9ozZJ+TWMA0GCSqGSIb3DQEBCwUAMBQxEjAQBgNV +BAMMCWxvY2FsaG9zdDAeFw0xOTA0MjIwNDMyNTJaFw0yMDA0MjEwNDMyNTJaMBQx +EjAQBgNVBAMMCWxvY2FsaG9zdDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoC +ggEBAK+wVUEdqF2uXvN0MJBgnAHyXi6JTi4p/F6igsrCjSNjJWzHH0vQmK8ujfcF +CkifW88i+W5eHctuEtQqNHK+t9x9YiZtXrj6m/XkOXs20mYgENSmbbbHbriTPnZB +zZrq6UqMlwIHNNAa+I3NMORQxVRaI0ybXnGVO5elr70xHpk03xL0JWKHpEqYp4db +2aBQgF6y3Ww4khxjIYqpUYXWXGFnVIRU7FKVEAM1xyKqvQzXjQ5sVM/wyHknveEF +3b/X4ggN+KNl5KOc0cWDh1/XaatJAPaUUPqZcq76tynLbP64Xm3dxHcj+gtRkO67 +ef6MSg6l63m3XQP6Qb+MIkd06OsCAwEAAaNQME4wHQYDVR0OBBYEFDmODTO8QLDN +ykR3x0LIOnjNhrKhMB8GA1UdIwQYMBaAFDmODTO8QLDNykR3x0LIOnjNhrKhMAwG +A1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAAwaiJc7uqEpnH3aukbftDwX +m8GfEnj1HVdgg+9GGNq+9rvUYBF6gdPmjRCX9dO0cclLFx8jc2org0rTSq9WoOhX +E6qL4Eqrmc5SE3Y9jZM0h6GRD4oXK014FmtZ3T6ddZU3dQLj3BS2r1XrvmubTvGN +ZuTJNY8nx8Hh6H5XINmsEjUF9E5hog+PwCE03xt2adIdYL+gsbxASeNYyeUFpZv5 +zcXR3VoakBWnAaOVgCHq2qh96QAnL7ZKzFkGf/MdwV10KU3dmb+ICbQUUdf9Gc17 +aaDCIRws312F433FdXBkGs2UkB7ZZme9dfn6O1QbeTNvex2VLMqYx/CTkfFbOQA= +-----END CERTIFICATE----- diff --git a/tests/integration/test_server_reload/configs/server.key b/tests/integration/test_server_reload/configs/server.key new file mode 100644 index 00000000000..6eddb3295db --- /dev/null +++ b/tests/integration/test_server_reload/configs/server.key @@ -0,0 +1,28 @@ +-----BEGIN PRIVATE KEY----- +MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCvsFVBHahdrl7z +dDCQYJwB8l4uiU4uKfxeooLKwo0jYyVsxx9L0JivLo33BQpIn1vPIvluXh3LbhLU +KjRyvrfcfWImbV64+pv15Dl7NtJmIBDUpm22x264kz52Qc2a6ulKjJcCBzTQGviN +zTDkUMVUWiNMm15xlTuXpa+9MR6ZNN8S9CVih6RKmKeHW9mgUIBest1sOJIcYyGK +qVGF1lxhZ1SEVOxSlRADNcciqr0M140ObFTP8Mh5J73hBd2/1+IIDfijZeSjnNHF +g4df12mrSQD2lFD6mXKu+rcpy2z+uF5t3cR3I/oLUZDuu3n+jEoOpet5t10D+kG/ +jCJHdOjrAgMBAAECggEARF66zrxb6RkSmmt8+rKeA6PuQu3sHsr4C1vyyjUr97l9 +tvdGlpp20LWtSZQMjHZ3pARYTTsTHTeY3DgQcRcHNicVKx8k3ZepWeeW9vw+pL+V +zSt3RsoVrH6gsCSrfr4sS3aqzX9AbjwQvh48CJ3mLQ1m70kHV+xbZIh1+4pB/hyP +1wKyUE18ZkOptXvO/TtoHzLQCecpkXtWzmry1Eh2isvXA+NMrAtLibGsyM1mtm7i 
+5ozevzHabvvCDBEe+KgZdONgVhhhvm2eOd+/s4w3rw4ETud4fI/ZAJyWXhiIKFnA +VJbElWruSAoVBW7p2bsF5PbmVzvo8vXL+VylxYD+AQKBgQDhLoRKTVhNkn/QjKxq +sdOh+QZra0LzjVpAmkQzu7wZMSHEz9qePQciDQQrYKrmRF1vNcIRCVUTqWYheJ/1 +lKRrCGa0ab6k96zkWMqLHD5u+UeJV7r1dJIx08ME9kNJ+x/XtB8klRIji16NiQUS +qc6p8z0M2AnbJzsRfWZRH8FeYwKBgQDHu8dzdtVGI7MtxfPOE/bfajiopDg8BdTC +pdug2T8XofRHRq7Q+0vYjTAZFT/slib91Pk6VvvPdo9VBZiL4omv4dAq6mOOdX/c +U14mJe1X5GCrr8ExZ8BfNJ3t/6sV1fcxyJwAw7iBguqxA2JqdM/wFk10K8XqvzVn +CD6O9yGt2QKBgFX1BMi8N538809vs41S7l9hCQNOQZNo/O+2M5yv6ECRkbtoQKKw +1x03bMUGNJaLuELweXE5Z8GGo5bZTe5X3F+DKHlr+DtO1C+ieUaa9HY2MAmMdLCn +2/qrREGLo+oEs4YKmuzC/taUp/ZNPKOAMISNdluFyFVg51pozPrgrVbTAoGBAKkE +LBl3O67o0t0vH8sJdeVFG8EJhlS0koBMnfgVHqC++dm+5HwPyvTrNQJkyv1HaqNt +r6FArkG3ED9gRuBIyT6+lctbIPgSUip9mbQqcBfqOCvQxGksZMur2ODncz09HLtS +CUFUXjOqNzOnq4ZuZu/Bz7U4vXiSaXxQq6+LTUKxAoGAFZU/qrI06XxnrE9A1X0W +l7DSkpZaDcu11NrZ473yONih/xOZNh4SSBpX8a7F6Pmh9BdtGqphML8NFPvQKcfP +b9H2iid2tc292uyrUEb5uTMmv61zoTwtitqLzO0+tS6PT3fXobX+eyeEWKzPBljL +HFtxG5CCXpkdnWRmaJnhTzA= +-----END PRIVATE KEY----- diff --git a/tests/integration/test_server_reload/configs/ssl_conf.xml b/tests/integration/test_server_reload/configs/ssl_conf.xml new file mode 100644 index 00000000000..43b25032059 --- /dev/null +++ b/tests/integration/test_server_reload/configs/ssl_conf.xml @@ -0,0 +1,18 @@ + + + + + + + /etc/clickhouse-server/config.d/server.crt + /etc/clickhouse-server/config.d/server.key + + /etc/clickhouse-server/config.d/dhparam.pem + none + true + true + sslv2,sslv3 + true + + + diff --git a/tests/integration/test_server_reload/protos/clickhouse_grpc.proto b/tests/integration/test_server_reload/protos/clickhouse_grpc.proto new file mode 100644 index 00000000000..c6cafaf6e40 --- /dev/null +++ b/tests/integration/test_server_reload/protos/clickhouse_grpc.proto @@ -0,0 +1,174 @@ +/* This file describes gRPC protocol supported in ClickHouse. + * + * To use this protocol a client should send one or more messages of the QueryInfo type + * and then receive one or more messages of the Result type. + * According to that the service provides four methods for that: + * ExecuteQuery(QueryInfo) returns (Result) + * ExecuteQueryWithStreamInput(stream QueryInfo) returns (Result) + * ExecuteQueryWithStreamOutput(QueryInfo) returns (stream Result) + * ExecuteQueryWithStreamIO(stream QueryInfo) returns (stream Result) + * It's up to the client to choose which method to use. + * For example, ExecuteQueryWithStreamInput() allows the client to add data multiple times + * while executing a query, which is suitable for inserting many rows. + */ + +syntax = "proto3"; + +package clickhouse.grpc; + +message NameAndType { + string name = 1; + string type = 2; +} + +// Describes an external table - a table which will exists only while a query is executing. +message ExternalTable { + // Name of the table. If omitted, "_data" is used. + string name = 1; + + // Columns of the table. Types are required, names can be omitted. If the names are omitted, "_1", "_2", ... is used. + repeated NameAndType columns = 2; + + // Data to insert to the external table. + // If a method with streaming input (i.e. ExecuteQueryWithStreamInput() or ExecuteQueryWithStreamIO()) is used, + // then data for insertion to the same external table can be split between multiple QueryInfos. + bytes data = 3; + + // Format of the data to insert to the external table. + string format = 4; + + // Settings for executing that insertion, applied after QueryInfo.settings. 
+    map<string, string> settings = 5;
+}
+
+enum CompressionAlgorithm {
+    NO_COMPRESSION = 0;
+    DEFLATE = 1;
+    GZIP = 2;
+    STREAM_GZIP = 3;
+}
+
+enum CompressionLevel {
+    COMPRESSION_NONE = 0;
+    COMPRESSION_LOW = 1;
+    COMPRESSION_MEDIUM = 2;
+    COMPRESSION_HIGH = 3;
+}
+
+message Compression {
+    CompressionAlgorithm algorithm = 1;
+    CompressionLevel level = 2;
+}
+
+// Information about a query which a client sends to a ClickHouse server.
+// The first QueryInfo can set any of the following fields. Extra QueryInfos only add extra data.
+// In extra QueryInfos only `input_data`, `external_tables`, `next_query_info` and `cancel` fields can be set.
+message QueryInfo {
+    string query = 1;
+    string query_id = 2;
+    map<string, string> settings = 3;
+
+    // Default database.
+    string database = 4;
+
+    // Input data, used both as data for INSERT query and as data for the input() function.
+    bytes input_data = 5;
+
+    // Delimiter for input_data, inserted between input_data from adjacent QueryInfos.
+    bytes input_data_delimiter = 6;
+
+    // Default output format. If not specified, 'TabSeparated' is used.
+    string output_format = 7;
+
+    repeated ExternalTable external_tables = 8;
+
+    string user_name = 9;
+    string password = 10;
+    string quota = 11;
+
+    // Works exactly like sessions in the HTTP protocol.
+    string session_id = 12;
+    bool session_check = 13;
+    uint32 session_timeout = 14;
+
+    // Set `cancel` to true to stop executing the query.
+    bool cancel = 15;
+
+    // If true, there will be at least one more QueryInfo in the input stream.
+    // `next_query_info` is allowed to be set only if a method with streaming input (i.e. ExecuteQueryWithStreamInput() or ExecuteQueryWithStreamIO()) is used.
+    bool next_query_info = 16;
+
+    /// Controls how a ClickHouse server will compress query execution results before sending back to the client.
+    /// If not set, the compression settings from the configuration file will be used.
+    Compression result_compression = 17;
+}
+
+enum LogsLevel {
+    LOG_NONE = 0;
+    LOG_FATAL = 1;
+    LOG_CRITICAL = 2;
+    LOG_ERROR = 3;
+    LOG_WARNING = 4;
+    LOG_NOTICE = 5;
+    LOG_INFORMATION = 6;
+    LOG_DEBUG = 7;
+    LOG_TRACE = 8;
+}
+
+message LogEntry {
+    uint32 time = 1;
+    uint32 time_microseconds = 2;
+    uint64 thread_id = 3;
+    string query_id = 4;
+    LogsLevel level = 5;
+    string source = 6;
+    string text = 7;
+}
+
+message Progress {
+    uint64 read_rows = 1;
+    uint64 read_bytes = 2;
+    uint64 total_rows_to_read = 3;
+    uint64 written_rows = 4;
+    uint64 written_bytes = 5;
+}
+
+message Stats {
+    uint64 rows = 1;
+    uint64 blocks = 2;
+    uint64 allocated_bytes = 3;
+    bool applied_limit = 4;
+    uint64 rows_before_limit = 5;
+}
+
+message Exception {
+    int32 code = 1;
+    string name = 2;
+    string display_text = 3;
+    string stack_trace = 4;
+}
+
+// Result of execution of a query which is sent back by the ClickHouse server to the client.
+message Result {
+    // Output of the query, represented in the `output_format` or in a format specified in `query`.
+    bytes output = 1;
+    bytes totals = 2;
+    bytes extremes = 3;
+
+    repeated LogEntry logs = 4;
+    Progress progress = 5;
+    Stats stats = 6;
+
+    // Set by the ClickHouse server if there was an exception thrown while executing.
+    Exception exception = 7;
+
+    // Set by the ClickHouse server if executing was cancelled by the `cancel` field in QueryInfo.
+ bool cancelled = 8; +} + +service ClickHouse { + rpc ExecuteQuery(QueryInfo) returns (Result) {} + rpc ExecuteQueryWithStreamInput(stream QueryInfo) returns (Result) {} + rpc ExecuteQueryWithStreamOutput(QueryInfo) returns (stream Result) {} + rpc ExecuteQueryWithStreamIO(stream QueryInfo) returns (stream Result) {} +} diff --git a/tests/integration/test_server_reload/test.py b/tests/integration/test_server_reload/test.py new file mode 100644 index 00000000000..3c22b476f64 --- /dev/null +++ b/tests/integration/test_server_reload/test.py @@ -0,0 +1,284 @@ +import contextlib +import grpc +import psycopg2 +import pymysql.connections +import pymysql.err +import pytest +import sys +import time +from helpers.cluster import ClickHouseCluster, run_and_check +from helpers.client import Client, QueryRuntimeException +from kazoo.exceptions import NodeExistsError +from pathlib import Path +from requests.exceptions import ConnectionError +from urllib3.util.retry import Retry + +cluster = ClickHouseCluster(__file__) +instance = cluster.add_instance( + "instance", + main_configs=[ + "configs/ports_from_zk.xml", "configs/ssl_conf.xml", "configs/dhparam.pem", "configs/server.crt", "configs/server.key" + ], + user_configs=["configs/default_passwd.xml"], + with_zookeeper=True) + + +LOADS_QUERY = "SELECT value FROM system.events WHERE event = 'MainConfigLoads'" + + +# Use grpcio-tools to generate *pb2.py files from *.proto. + +proto_dir = Path(__file__).parent / "protos" +gen_dir = Path(__file__).parent / "_gen" +gen_dir.mkdir(exist_ok=True) +run_and_check( + f"python3 -m grpc_tools.protoc -I{proto_dir!s} --python_out={gen_dir!s} --grpc_python_out={gen_dir!s} \ + {proto_dir!s}/clickhouse_grpc.proto", shell=True) + +sys.path.append(str(gen_dir)) +import clickhouse_grpc_pb2 +import clickhouse_grpc_pb2_grpc + + +@pytest.fixture(name="cluster", scope="module") +def fixture_cluster(): + try: + cluster.add_zookeeper_startup_command(configure_ports_from_zk) + cluster.start() + yield cluster + finally: + cluster.shutdown() + + +@pytest.fixture(name="zk", scope="module") +def fixture_zk(cluster): + return cluster.get_kazoo_client("zoo1") + + +def get_client(cluster, port): + return Client(host=cluster.get_instance_ip("instance"), port=port, command=cluster.client_bin_path) + + +def get_mysql_client(cluster, port): + start_time = time.monotonic() + while True: + try: + return pymysql.connections.Connection( + host=cluster.get_instance_ip("instance"), user="default", password="", database="default", port=port) + except pymysql.err.OperationalError: + if time.monotonic() - start_time > 10: + raise + time.sleep(0.1) + + +def get_pgsql_client(cluster, port): + start_time = time.monotonic() + while True: + try: + return psycopg2.connect( + host=cluster.get_instance_ip("instance"), user="postgresql", password="123", database="default", port=port) + except psycopg2.OperationalError: + if time.monotonic() - start_time > 10: + raise + time.sleep(0.1) + + +def get_grpc_channel(cluster, port): + host_port = cluster.get_instance_ip("instance") + f":{port}" + channel = grpc.insecure_channel(host_port) + grpc.channel_ready_future(channel).result(timeout=10) + return channel + + +def grpc_query(channel, query_text): + query_info = clickhouse_grpc_pb2.QueryInfo(query=query_text) + stub = clickhouse_grpc_pb2_grpc.ClickHouseStub(channel) + result = stub.ExecuteQuery(query_info) + if result and result.HasField("exception"): + raise Exception(result.exception.display_text) + return result.output.decode() + + +def 
configure_ports_from_zk(zk, querier=None):
+    default_config = [
+        ("/clickhouse/listen_hosts", b"0.0.0.0"),
+        ("/clickhouse/ports/tcp", b"9000"),
+        ("/clickhouse/ports/http", b"8123"),
+        ("/clickhouse/ports/mysql", b"9004"),
+        ("/clickhouse/ports/postgresql", b"9005"),
+        ("/clickhouse/ports/grpc", b"9100"),
+    ]
+    for path, value in default_config:
+        if querier is not None:
+            loads_before = querier(LOADS_QUERY)
+        has_changed = False
+        try:
+            zk.create(path=path, value=value, makepath=True)
+            has_changed = True
+        except NodeExistsError:
+            if zk.get(path)[0] != value:
+                zk.set(path=path, value=value)
+                has_changed = True
+        if has_changed and querier is not None:
+            wait_loaded_config_changed(loads_before, querier)
+
+
+@contextlib.contextmanager
+def sync_loaded_config(querier):
+    # Depending on whether we test a change on tcp or http,
+    # we monitor changes using the other, untouched, protocol.
+    loads_before = querier(LOADS_QUERY)
+    yield
+    wait_loaded_config_changed(loads_before, querier)
+
+
+def wait_loaded_config_changed(loads_before, querier):
+    loads_after = None
+    start_time = time.monotonic()
+    while time.monotonic() - start_time < 10:
+        try:
+            loads_after = querier(LOADS_QUERY)
+            if loads_after != loads_before:
+                return
+        except (QueryRuntimeException, ConnectionError):
+            pass
+        time.sleep(0.1)
+    assert loads_after is not None and loads_after != loads_before
+
+
+@contextlib.contextmanager
+def default_client(cluster, zk, restore_via_http=False):
+    client = get_client(cluster, port=9000)
+    try:
+        yield client
+    finally:
+        querier = instance.http_query if restore_via_http else client.query
+        configure_ports_from_zk(zk, querier)
+
+
+def test_change_tcp_port(cluster, zk):
+    with default_client(cluster, zk, restore_via_http=True) as client:
+        assert client.query("SELECT 1") == "1\n"
+        with sync_loaded_config(instance.http_query):
+            zk.set("/clickhouse/ports/tcp", b"9090")
+        with pytest.raises(QueryRuntimeException, match="Connection refused"):
+            client.query("SELECT 1")
+        client_on_new_port = get_client(cluster, port=9090)
+        assert client_on_new_port.query("SELECT 1") == "1\n"
+
+
+def test_change_http_port(cluster, zk):
+    with default_client(cluster, zk) as client:
+        retry_strategy = Retry(total=10, backoff_factor=0.1)
+        assert instance.http_query("SELECT 1", retry_strategy=retry_strategy) == "1\n"
+        with sync_loaded_config(client.query):
+            zk.set("/clickhouse/ports/http", b"9090")
+        with pytest.raises(ConnectionError, match="Connection refused"):
+            instance.http_query("SELECT 1")
+        assert instance.http_query("SELECT 1", port=9090) == "1\n"
+
+
+def test_change_mysql_port(cluster, zk):
+    with default_client(cluster, zk) as client:
+        mysql_client = get_mysql_client(cluster, port=9004)
+        assert mysql_client.query("SELECT 1") == 1
+        with sync_loaded_config(client.query):
+            zk.set("/clickhouse/ports/mysql", b"9090")
+        with pytest.raises(pymysql.err.OperationalError, match="Lost connection"):
+            mysql_client.query("SELECT 1")
+        mysql_client_on_new_port = get_mysql_client(cluster, port=9090)
+        assert mysql_client_on_new_port.query("SELECT 1") == 1
+
+
+def test_change_postgresql_port(cluster, zk):
+    with default_client(cluster, zk) as client:
+        pgsql_client = get_pgsql_client(cluster, port=9005)
+        cursor = pgsql_client.cursor()
+        cursor.execute("SELECT 1")
+        assert cursor.fetchall() == [(1,)]
+        with sync_loaded_config(client.query):
+            zk.set("/clickhouse/ports/postgresql", b"9090")
+        with pytest.raises(psycopg2.OperationalError, match="closed"):
+            cursor.execute("SELECT 1")
+        pgsql_client_on_new_port = get_pgsql_client(cluster, port=9090)
+        cursor = pgsql_client_on_new_port.cursor()
+        cursor.execute("SELECT 1")
+        assert cursor.fetchall() == [(1,)]
+
+
+def test_change_grpc_port(cluster, zk):
+    with default_client(cluster, zk) as client:
+        grpc_channel = get_grpc_channel(cluster, port=9100)
+        assert grpc_query(grpc_channel, "SELECT 1") == "1\n"
+        with sync_loaded_config(client.query):
+            zk.set("/clickhouse/ports/grpc", b"9090")
+        with pytest.raises(grpc._channel._InactiveRpcError, match="StatusCode.UNAVAILABLE"):
+            grpc_query(grpc_channel, "SELECT 1")
+        grpc_channel_on_new_port = get_grpc_channel(cluster, port=9090)
+        assert grpc_query(grpc_channel_on_new_port, "SELECT 1") == "1\n"
+
+
+def test_remove_tcp_port(cluster, zk):
+    with default_client(cluster, zk, restore_via_http=True) as client:
+        assert client.query("SELECT 1") == "1\n"
+        with sync_loaded_config(instance.http_query):
+            zk.delete("/clickhouse/ports/tcp")
+        with pytest.raises(QueryRuntimeException, match="Connection refused"):
+            client.query("SELECT 1")
+
+
+def test_remove_http_port(cluster, zk):
+    with default_client(cluster, zk) as client:
+        assert instance.http_query("SELECT 1") == "1\n"
+        with sync_loaded_config(client.query):
+            zk.delete("/clickhouse/ports/http")
+        with pytest.raises(ConnectionError, match="Connection refused"):
+            instance.http_query("SELECT 1")
+
+
+def test_remove_mysql_port(cluster, zk):
+    with default_client(cluster, zk) as client:
+        mysql_client = get_mysql_client(cluster, port=9004)
+        assert mysql_client.query("SELECT 1") == 1
+        with sync_loaded_config(client.query):
+            zk.delete("/clickhouse/ports/mysql")
+        with pytest.raises(pymysql.err.OperationalError, match="Lost connection"):
+            mysql_client.query("SELECT 1")
+
+
+def test_remove_postgresql_port(cluster, zk):
+    with default_client(cluster, zk) as client:
+        pgsql_client = get_pgsql_client(cluster, port=9005)
+        cursor = pgsql_client.cursor()
+        cursor.execute("SELECT 1")
+        assert cursor.fetchall() == [(1,)]
+        with sync_loaded_config(client.query):
+            zk.delete("/clickhouse/ports/postgresql")
+        with pytest.raises(psycopg2.OperationalError, match="closed"):
+            cursor.execute("SELECT 1")
+
+
+def test_remove_grpc_port(cluster, zk):
+    with default_client(cluster, zk) as client:
+        grpc_channel = get_grpc_channel(cluster, port=9100)
+        assert grpc_query(grpc_channel, "SELECT 1") == "1\n"
+        with sync_loaded_config(client.query):
+            zk.delete("/clickhouse/ports/grpc")
+        with pytest.raises(grpc._channel._InactiveRpcError, match="StatusCode.UNAVAILABLE"):
+            grpc_query(grpc_channel, "SELECT 1")
+
+
+def test_change_listen_host(cluster, zk):
+    localhost_client = Client(host="127.0.0.1", port=9000, command="/usr/bin/clickhouse")
+    localhost_client.command = ["docker", "exec", "-i", instance.docker_id] + localhost_client.command
+    try:
+        client = get_client(cluster, port=9000)
+        with sync_loaded_config(localhost_client.query):
+            zk.set("/clickhouse/listen_hosts", b"127.0.0.1")
+        with pytest.raises(QueryRuntimeException, match="Connection refused"):
+            client.query("SELECT 1")
+        assert localhost_client.query("SELECT 1") == "1\n"
+    finally:
+        with sync_loaded_config(localhost_client.query):
+            configure_ports_from_zk(zk)
+

diff --git a/tests/integration/test_storage_postgresql/test.py b/tests/integration/test_storage_postgresql/test.py
index 6f43036e64d..b6ac121cd0c 100644
--- a/tests/integration/test_storage_postgresql/test.py
+++ b/tests/integration/test_storage_postgresql/test.py
@@ -424,6 +424,21 @@
+    try:
+        client = get_client(cluster, port=9000)
+        with sync_loaded_config(localhost_client.query):
+            zk.set("/clickhouse/listen_hosts", b"127.0.0.1")
+        with pytest.raises(QueryRuntimeException, match="Connection refused"):
+            client.query("SELECT 1")
+        assert localhost_client.query("SELECT 1") == "1\n"
+    finally:
+        with sync_loaded_config(localhost_client.query):
+            configure_ports_from_zk(zk)
+
diff --git a/tests/integration/test_storage_postgresql/test.py b/tests/integration/test_storage_postgresql/test.py
index 6f43036e64d..b6ac121cd0c 100644
--- a/tests/integration/test_storage_postgresql/test.py
+++ b/tests/integration/test_storage_postgresql/test.py
@@ -424,6 +424,21 @@ def test_predefined_connection_configuration(started_cluster):
     cursor.execute(f'DROP TABLE test_table ')
+
+def test_where_false(started_cluster):
+    cursor = started_cluster.postgres_conn.cursor()
+    cursor.execute("DROP TABLE IF EXISTS test")
+    cursor.execute('CREATE TABLE test (a Integer)')
+    cursor.execute("INSERT INTO test SELECT 1")
+
+    result = node1.query("SELECT count() FROM postgresql('postgres1:5432', 'postgres', 'test', 'postgres', 'mysecretpassword') WHERE 1=0")
+    assert int(result) == 0
+    result = node1.query("SELECT count() FROM postgresql('postgres1:5432', 'postgres', 'test', 'postgres', 'mysecretpassword') WHERE 0")
+    assert int(result) == 0
+    result = node1.query("SELECT count() FROM postgresql('postgres1:5432', 'postgres', 'test', 'postgres', 'mysecretpassword') WHERE 1=1")
+    assert int(result) == 1
+    cursor.execute("DROP TABLE test")
+
+
 if __name__ == '__main__':
     cluster.start()
     input("Cluster created, press any key to destroy...")
diff --git a/tests/integration/test_system_metrics/test.py b/tests/integration/test_system_metrics/test.py
index 9e8eac162f6..efcc6f88a24 100644
--- a/tests/integration/test_system_metrics/test.py
+++ b/tests/integration/test_system_metrics/test.py
@@ -59,3 +59,32 @@ def test_readonly_metrics(start_cluster):
     node1.query("ATTACH TABLE test.test_table")
     assert_eq_with_retry(node1, "SELECT value FROM system.metrics WHERE metric = 'ReadonlyReplica'", "0\n", retry_count=300, sleep_time=1)
+
+# For LowCardinality columns the size of N rows is not N times the size of one row: values are dictionary-encoded, so a second identical row only adds a small index entry (24 bytes for one row vs. 25 bytes for two below).
+def test_metrics_storage_buffer_size(start_cluster):
+    node1.query('''
+        CREATE TABLE test.test_mem_table
+        (
+            `str` LowCardinality(String)
+        )
+        ENGINE = Memory;
+
+        CREATE TABLE test.buffer_table
+        (
+            `str` LowCardinality(String)
+        )
+        ENGINE = Buffer('test', 'test_mem_table', 1, 600, 600, 1000, 100000, 100000, 10000000); -- args: database, table, num_layers, min_time, max_time, min_rows, max_rows, min_bytes, max_bytes
+    ''')
+
+    # before flush
+    node1.query("INSERT INTO test.buffer_table VALUES('hello');")
+    assert node1.query("SELECT value FROM system.metrics WHERE metric = 'StorageBufferRows'") == "1\n"
+    assert node1.query("SELECT value FROM system.metrics WHERE metric = 'StorageBufferBytes'") == "24\n"
+
+    node1.query("INSERT INTO test.buffer_table VALUES('hello');")
+    assert node1.query("SELECT value FROM system.metrics WHERE metric = 'StorageBufferRows'") == "2\n"
+    assert node1.query("SELECT value FROM system.metrics WHERE metric = 'StorageBufferBytes'") == "25\n"
+
+    # flush
+    node1.query("OPTIMIZE TABLE test.buffer_table")
+    assert node1.query("SELECT value FROM system.metrics WHERE metric = 'StorageBufferRows'") == "0\n"
+    assert node1.query("SELECT value FROM system.metrics WHERE metric = 'StorageBufferBytes'") == "0\n"
diff --git a/tests/queries/0_stateless/01370_client_autocomplete_word_break_characters.expect b/tests/queries/0_stateless/01370_client_autocomplete_word_break_characters.expect
index e0d01d905bb..f3a28bbee9b 100755
--- a/tests/queries/0_stateless/01370_client_autocomplete_word_break_characters.expect
+++ b/tests/queries/0_stateless/01370_client_autocomplete_word_break_characters.expect
@@ -20,6 +20,7 @@ expect "SET max_distributed"
 # Wait for suggestions to load, they are loaded in background
 set is_done 0
+set timeout 1
 while {$is_done == 0} {
     send -- "\t"
     expect {
@@ -27,10 +28,15 @@ while {$is_done == 0} {
             set is_done 1
         }
         default {
-            sleep 1
+            # If the suggestions are not loaded yet, expect "_" runs into the
+            # timeout (shortened to 1 second above) and lands in this default
+            # branch. It intentionally does nothing: handling the timeout here
+            # resets it, so the loop can send TAB again and keep retrying
+            # until the completion finally appears.
} } } +set timeout 60 send -- "\3\4" expect eof diff --git a/utils/ci/jobs/quick-build/README.md b/utils/ci/jobs/quick-build/README.md deleted file mode 100644 index 803acae0f93..00000000000 --- a/utils/ci/jobs/quick-build/README.md +++ /dev/null @@ -1,5 +0,0 @@ -## Build with debug mode and without many libraries - -This job is intended as first check that build is not broken on wide variety of platforms. - -Results of this build are not intended for production usage. diff --git a/utils/ci/jobs/quick-build/run.sh b/utils/ci/jobs/quick-build/run.sh deleted file mode 100755 index af977d14465..00000000000 --- a/utils/ci/jobs/quick-build/run.sh +++ /dev/null @@ -1,30 +0,0 @@ -#!/usr/bin/env bash -set -e -x - -# How to run: -# From "ci" directory: -# jobs/quick-build/run.sh -# or: -# ./run-with-docker.sh ubuntu:bionic jobs/quick-build/run.sh - -cd "$(dirname $0)"/../.. - -. default-config - -SOURCES_METHOD=local -COMPILER=clang -COMPILER_INSTALL_METHOD=packages -COMPILER_PACKAGE_VERSION=6.0 -BUILD_METHOD=normal -BUILD_TARGETS=clickhouse -BUILD_TYPE=Debug -ENABLE_EMBEDDED_COMPILER=0 - -CMAKE_FLAGS="-D CMAKE_C_FLAGS_ADD=-g0 -D CMAKE_CXX_FLAGS_ADD=-g0 -D ENABLE_JEMALLOC=0 -D ENABLE_CAPNP=0 -D ENABLE_RDKAFKA=0 -D ENABLE_UNWIND=0 -D ENABLE_ICU=0 -D ENABLE_POCO_MONGODB=0 -D ENABLE_POCO_REDIS=0 -D ENABLE_POCO_NETSSL=0 -D ENABLE_ODBC=0 -D ENABLE_MYSQL=0 -D ENABLE_SSL=0 -D ENABLE_POCO_NETSSL=0 -D ENABLE_CASSANDRA=0 -D ENABLE_LDAP=0" - -[[ $(uname) == "FreeBSD" ]] && COMPILER_PACKAGE_VERSION=devel && export COMPILER_PATH=/usr/local/bin - -. get-sources.sh -. prepare-toolchain.sh -. install-libraries.sh -. build-normal.sh diff --git a/utils/ci/vagrant-freebsd/.gitignore b/utils/ci/vagrant-freebsd/.gitignore deleted file mode 100644 index 8000dd9db47..00000000000 --- a/utils/ci/vagrant-freebsd/.gitignore +++ /dev/null @@ -1 +0,0 @@ -.vagrant diff --git a/utils/ci/vagrant-freebsd/Vagrantfile b/utils/ci/vagrant-freebsd/Vagrantfile deleted file mode 100644 index c01ae5fa6e2..00000000000 --- a/utils/ci/vagrant-freebsd/Vagrantfile +++ /dev/null @@ -1,3 +0,0 @@ -Vagrant.configure("2") do |config| - config.vm.box = "generic/freebsd11" -end diff --git a/utils/grammar-fuzzer/ClickHouseUnlexer.py b/utils/grammar-fuzzer/ClickHouseUnlexer.py deleted file mode 100644 index c91522bd7be..00000000000 --- a/utils/grammar-fuzzer/ClickHouseUnlexer.py +++ /dev/null @@ -1,1771 +0,0 @@ -# Generated by Grammarinator 19.3 - -from itertools import chain -from grammarinator.runtime import * - -charset_0 = list(chain(*multirange_diff(printable_unicode_ranges, [(39, 40),(92, 93)]))) -charset_1 = list(chain(range(97, 98), range(65, 66))) -charset_2 = list(chain(range(98, 99), range(66, 67))) -charset_3 = list(chain(range(99, 100), range(67, 68))) -charset_4 = list(chain(range(100, 101), range(68, 69))) -charset_5 = list(chain(range(101, 102), range(69, 70))) -charset_6 = list(chain(range(102, 103), range(70, 71))) -charset_7 = list(chain(range(103, 104), range(71, 72))) -charset_8 = list(chain(range(104, 105), range(72, 73))) -charset_9 = list(chain(range(105, 106), range(73, 74))) -charset_10 = list(chain(range(106, 107), range(74, 75))) -charset_11 = list(chain(range(107, 108), range(75, 76))) -charset_12 = list(chain(range(108, 109), range(76, 77))) -charset_13 = list(chain(range(109, 110), range(77, 78))) -charset_14 = list(chain(range(110, 111), range(78, 79))) -charset_15 = list(chain(range(111, 112), range(79, 80))) -charset_16 = list(chain(range(112, 113), range(80, 81))) -charset_17 = list(chain(range(113, 114), range(81, 
82))) -charset_18 = list(chain(range(114, 115), range(82, 83))) -charset_19 = list(chain(range(115, 116), range(83, 84))) -charset_20 = list(chain(range(116, 117), range(84, 85))) -charset_21 = list(chain(range(117, 118), range(85, 86))) -charset_22 = list(chain(range(118, 119), range(86, 87))) -charset_23 = list(chain(range(119, 120), range(87, 88))) -charset_24 = list(chain(range(120, 121), range(88, 89))) -charset_25 = list(chain(range(121, 122), range(89, 90))) -charset_26 = list(chain(range(122, 123), range(90, 91))) -charset_27 = list(chain(range(97, 123), range(65, 91))) -charset_28 = list(chain(range(48, 58))) -charset_29 = list(chain(range(48, 58), range(97, 103), range(65, 71))) -charset_30 = list(chain(*multirange_diff(printable_unicode_ranges, [(92, 93),(92, 93)]))) -charset_31 = list(chain(range(32, 33), range(11, 12), range(12, 13), range(9, 10), range(13, 14), range(10, 11))) - - -class ClickHouseUnlexer(Grammarinator): - - def __init__(self, *, max_depth=float('inf'), weights=None, cooldown=1.0): - super(ClickHouseUnlexer, self).__init__() - self.unlexer = self - self.max_depth = max_depth - self.weights = weights or dict() - self.cooldown = cooldown - - def EOF(self, *args, **kwargs): - pass - - @depthcontrol - def INTERVAL_TYPE(self): - current = self.create_node(UnlexerRule(name='INTERVAL_TYPE')) - choice = self.choice([0 if [2, 2, 2, 2, 2, 2, 2, 2][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_0', i), 1) for i, w in enumerate([1, 1, 1, 1, 1, 1, 1, 1])]) - self.unlexer.weights[('alt_0', choice)] = self.unlexer.weights.get(('alt_0', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.SECOND() - elif choice == 1: - current += self.unlexer.MINUTE() - elif choice == 2: - current += self.unlexer.HOUR() - elif choice == 3: - current += self.unlexer.DAY() - elif choice == 4: - current += self.unlexer.WEEK() - elif choice == 5: - current += self.unlexer.MONTH() - elif choice == 6: - current += self.unlexer.QUARTER() - elif choice == 7: - current += self.unlexer.YEAR() - return current - INTERVAL_TYPE.min_depth = 2 - - @depthcontrol - def ALIAS(self): - current = self.create_node(UnlexerRule(name='ALIAS')) - current += self.unlexer.A() - current += self.unlexer.L() - current += self.unlexer.I() - current += self.unlexer.A() - current += self.unlexer.S() - return current - ALIAS.min_depth = 1 - - @depthcontrol - def ALL(self): - current = self.create_node(UnlexerRule(name='ALL')) - current += self.unlexer.A() - current += self.unlexer.L() - current += self.unlexer.L() - return current - ALL.min_depth = 1 - - @depthcontrol - def AND(self): - current = self.create_node(UnlexerRule(name='AND')) - current += self.unlexer.A() - current += self.unlexer.N() - current += self.unlexer.D() - return current - AND.min_depth = 1 - - @depthcontrol - def ANTI(self): - current = self.create_node(UnlexerRule(name='ANTI')) - current += self.unlexer.A() - current += self.unlexer.N() - current += self.unlexer.T() - current += self.unlexer.I() - return current - ANTI.min_depth = 1 - - @depthcontrol - def ANY(self): - current = self.create_node(UnlexerRule(name='ANY')) - current += self.unlexer.A() - current += self.unlexer.N() - current += self.unlexer.Y() - return current - ANY.min_depth = 1 - - @depthcontrol - def ARRAY(self): - current = self.create_node(UnlexerRule(name='ARRAY')) - current += self.unlexer.A() - current += self.unlexer.R() - current += self.unlexer.R() - current += self.unlexer.A() - current += self.unlexer.Y() - return current - 
ARRAY.min_depth = 1 - - @depthcontrol - def AS(self): - current = self.create_node(UnlexerRule(name='AS')) - current += self.unlexer.A() - current += self.unlexer.S() - return current - AS.min_depth = 1 - - @depthcontrol - def ASCENDING(self): - current = self.create_node(UnlexerRule(name='ASCENDING')) - choice = self.choice([0 if [1, 1][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_9', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_9', choice)] = self.unlexer.weights.get(('alt_9', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.A() - current += self.unlexer.S() - current += self.unlexer.C() - elif choice == 1: - current += self.unlexer.A() - current += self.unlexer.S() - current += self.unlexer.C() - current += self.unlexer.E() - current += self.unlexer.N() - current += self.unlexer.D() - current += self.unlexer.I() - current += self.unlexer.N() - current += self.unlexer.G() - return current - ASCENDING.min_depth = 1 - - @depthcontrol - def ASOF(self): - current = self.create_node(UnlexerRule(name='ASOF')) - current += self.unlexer.A() - current += self.unlexer.S() - current += self.unlexer.O() - current += self.unlexer.F() - return current - ASOF.min_depth = 1 - - @depthcontrol - def BETWEEN(self): - current = self.create_node(UnlexerRule(name='BETWEEN')) - current += self.unlexer.B() - current += self.unlexer.E() - current += self.unlexer.T() - current += self.unlexer.W() - current += self.unlexer.E() - current += self.unlexer.E() - current += self.unlexer.N() - return current - BETWEEN.min_depth = 1 - - @depthcontrol - def BOTH(self): - current = self.create_node(UnlexerRule(name='BOTH')) - current += self.unlexer.B() - current += self.unlexer.O() - current += self.unlexer.T() - current += self.unlexer.H() - return current - BOTH.min_depth = 1 - - @depthcontrol - def BY(self): - current = self.create_node(UnlexerRule(name='BY')) - current += self.unlexer.B() - current += self.unlexer.Y() - return current - BY.min_depth = 1 - - @depthcontrol - def CASE(self): - current = self.create_node(UnlexerRule(name='CASE')) - current += self.unlexer.C() - current += self.unlexer.A() - current += self.unlexer.S() - current += self.unlexer.E() - return current - CASE.min_depth = 1 - - @depthcontrol - def CAST(self): - current = self.create_node(UnlexerRule(name='CAST')) - current += self.unlexer.C() - current += self.unlexer.A() - current += self.unlexer.S() - current += self.unlexer.T() - return current - CAST.min_depth = 1 - - @depthcontrol - def CLUSTER(self): - current = self.create_node(UnlexerRule(name='CLUSTER')) - current += self.unlexer.C() - current += self.unlexer.L() - current += self.unlexer.U() - current += self.unlexer.S() - current += self.unlexer.T() - current += self.unlexer.E() - current += self.unlexer.R() - return current - CLUSTER.min_depth = 1 - - @depthcontrol - def COLLATE(self): - current = self.create_node(UnlexerRule(name='COLLATE')) - current += self.unlexer.C() - current += self.unlexer.O() - current += self.unlexer.L() - current += self.unlexer.L() - current += self.unlexer.A() - current += self.unlexer.T() - current += self.unlexer.E() - return current - COLLATE.min_depth = 1 - - @depthcontrol - def CREATE(self): - current = self.create_node(UnlexerRule(name='CREATE')) - current += self.unlexer.C() - current += self.unlexer.R() - current += self.unlexer.E() - current += self.unlexer.A() - current += self.unlexer.T() - current += self.unlexer.E() - return current - CREATE.min_depth = 1 - - 
@depthcontrol - def CROSS(self): - current = self.create_node(UnlexerRule(name='CROSS')) - current += self.unlexer.C() - current += self.unlexer.R() - current += self.unlexer.O() - current += self.unlexer.S() - current += self.unlexer.S() - return current - CROSS.min_depth = 1 - - @depthcontrol - def DATABASE(self): - current = self.create_node(UnlexerRule(name='DATABASE')) - current += self.unlexer.D() - current += self.unlexer.A() - current += self.unlexer.T() - current += self.unlexer.A() - current += self.unlexer.B() - current += self.unlexer.A() - current += self.unlexer.S() - current += self.unlexer.E() - return current - DATABASE.min_depth = 1 - - @depthcontrol - def DAY(self): - current = self.create_node(UnlexerRule(name='DAY')) - current += self.unlexer.D() - current += self.unlexer.A() - current += self.unlexer.Y() - return current - DAY.min_depth = 1 - - @depthcontrol - def DEFAULT(self): - current = self.create_node(UnlexerRule(name='DEFAULT')) - current += self.unlexer.D() - current += self.unlexer.E() - current += self.unlexer.F() - current += self.unlexer.A() - current += self.unlexer.U() - current += self.unlexer.L() - current += self.unlexer.T() - return current - DEFAULT.min_depth = 1 - - @depthcontrol - def DELETE(self): - current = self.create_node(UnlexerRule(name='DELETE')) - current += self.unlexer.D() - current += self.unlexer.E() - current += self.unlexer.L() - current += self.unlexer.E() - current += self.unlexer.T() - current += self.unlexer.E() - return current - DELETE.min_depth = 1 - - @depthcontrol - def DESCENDING(self): - current = self.create_node(UnlexerRule(name='DESCENDING')) - choice = self.choice([0 if [1, 1][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_12', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_12', choice)] = self.unlexer.weights.get(('alt_12', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.D() - current += self.unlexer.E() - current += self.unlexer.S() - current += self.unlexer.C() - elif choice == 1: - current += self.unlexer.D() - current += self.unlexer.E() - current += self.unlexer.S() - current += self.unlexer.C() - current += self.unlexer.E() - current += self.unlexer.N() - current += self.unlexer.D() - current += self.unlexer.I() - current += self.unlexer.N() - current += self.unlexer.G() - return current - DESCENDING.min_depth = 1 - - @depthcontrol - def DISK(self): - current = self.create_node(UnlexerRule(name='DISK')) - current += self.unlexer.D() - current += self.unlexer.I() - current += self.unlexer.S() - current += self.unlexer.K() - return current - DISK.min_depth = 1 - - @depthcontrol - def DISTINCT(self): - current = self.create_node(UnlexerRule(name='DISTINCT')) - current += self.unlexer.D() - current += self.unlexer.I() - current += self.unlexer.S() - current += self.unlexer.T() - current += self.unlexer.I() - current += self.unlexer.N() - current += self.unlexer.C() - current += self.unlexer.T() - return current - DISTINCT.min_depth = 1 - - @depthcontrol - def DROP(self): - current = self.create_node(UnlexerRule(name='DROP')) - current += self.unlexer.D() - current += self.unlexer.R() - current += self.unlexer.O() - current += self.unlexer.P() - return current - DROP.min_depth = 1 - - @depthcontrol - def ELSE(self): - current = self.create_node(UnlexerRule(name='ELSE')) - current += self.unlexer.E() - current += self.unlexer.L() - current += self.unlexer.S() - current += self.unlexer.E() - return current - ELSE.min_depth = 1 - - @depthcontrol 
- def END(self): - current = self.create_node(UnlexerRule(name='END')) - current += self.unlexer.E() - current += self.unlexer.N() - current += self.unlexer.D() - return current - END.min_depth = 1 - - @depthcontrol - def ENGINE(self): - current = self.create_node(UnlexerRule(name='ENGINE')) - current += self.unlexer.E() - current += self.unlexer.N() - current += self.unlexer.G() - current += self.unlexer.I() - current += self.unlexer.N() - current += self.unlexer.E() - return current - ENGINE.min_depth = 1 - - @depthcontrol - def EXISTS(self): - current = self.create_node(UnlexerRule(name='EXISTS')) - current += self.unlexer.E() - current += self.unlexer.X() - current += self.unlexer.I() - current += self.unlexer.S() - current += self.unlexer.T() - current += self.unlexer.S() - return current - EXISTS.min_depth = 1 - - @depthcontrol - def EXTRACT(self): - current = self.create_node(UnlexerRule(name='EXTRACT')) - current += self.unlexer.E() - current += self.unlexer.X() - current += self.unlexer.T() - current += self.unlexer.R() - current += self.unlexer.A() - current += self.unlexer.C() - current += self.unlexer.T() - return current - EXTRACT.min_depth = 1 - - @depthcontrol - def FINAL(self): - current = self.create_node(UnlexerRule(name='FINAL')) - current += self.unlexer.F() - current += self.unlexer.I() - current += self.unlexer.N() - current += self.unlexer.A() - current += self.unlexer.L() - return current - FINAL.min_depth = 1 - - @depthcontrol - def FIRST(self): - current = self.create_node(UnlexerRule(name='FIRST')) - current += self.unlexer.F() - current += self.unlexer.I() - current += self.unlexer.R() - current += self.unlexer.S() - current += self.unlexer.T() - return current - FIRST.min_depth = 1 - - @depthcontrol - def FORMAT(self): - current = self.create_node(UnlexerRule(name='FORMAT')) - current += self.unlexer.F() - current += self.unlexer.O() - current += self.unlexer.R() - current += self.unlexer.M() - current += self.unlexer.A() - current += self.unlexer.T() - return current - FORMAT.min_depth = 1 - - @depthcontrol - def FROM(self): - current = self.create_node(UnlexerRule(name='FROM')) - current += self.unlexer.F() - current += self.unlexer.R() - current += self.unlexer.O() - current += self.unlexer.M() - return current - FROM.min_depth = 1 - - @depthcontrol - def FULL(self): - current = self.create_node(UnlexerRule(name='FULL')) - current += self.unlexer.F() - current += self.unlexer.U() - current += self.unlexer.L() - current += self.unlexer.L() - return current - FULL.min_depth = 1 - - @depthcontrol - def GLOBAL(self): - current = self.create_node(UnlexerRule(name='GLOBAL')) - current += self.unlexer.G() - current += self.unlexer.L() - current += self.unlexer.O() - current += self.unlexer.B() - current += self.unlexer.A() - current += self.unlexer.L() - return current - GLOBAL.min_depth = 1 - - @depthcontrol - def GROUP(self): - current = self.create_node(UnlexerRule(name='GROUP')) - current += self.unlexer.G() - current += self.unlexer.R() - current += self.unlexer.O() - current += self.unlexer.U() - current += self.unlexer.P() - return current - GROUP.min_depth = 1 - - @depthcontrol - def HAVING(self): - current = self.create_node(UnlexerRule(name='HAVING')) - current += self.unlexer.H() - current += self.unlexer.A() - current += self.unlexer.V() - current += self.unlexer.I() - current += self.unlexer.N() - current += self.unlexer.G() - return current - HAVING.min_depth = 1 - - @depthcontrol - def HOUR(self): - current = 
self.create_node(UnlexerRule(name='HOUR')) - current += self.unlexer.H() - current += self.unlexer.O() - current += self.unlexer.U() - current += self.unlexer.R() - return current - HOUR.min_depth = 1 - - @depthcontrol - def IF(self): - current = self.create_node(UnlexerRule(name='IF')) - current += self.unlexer.I() - current += self.unlexer.F() - return current - IF.min_depth = 1 - - @depthcontrol - def IN(self): - current = self.create_node(UnlexerRule(name='IN')) - current += self.unlexer.I() - current += self.unlexer.N() - return current - IN.min_depth = 1 - - @depthcontrol - def INF(self): - current = self.create_node(UnlexerRule(name='INF')) - choice = self.choice([0 if [1, 1][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_15', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_15', choice)] = self.unlexer.weights.get(('alt_15', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.I() - current += self.unlexer.N() - current += self.unlexer.F() - elif choice == 1: - current += self.unlexer.I() - current += self.unlexer.N() - current += self.unlexer.F() - current += self.unlexer.I() - current += self.unlexer.N() - current += self.unlexer.I() - current += self.unlexer.T() - current += self.unlexer.Y() - return current - INF.min_depth = 1 - - @depthcontrol - def INNER(self): - current = self.create_node(UnlexerRule(name='INNER')) - current += self.unlexer.I() - current += self.unlexer.N() - current += self.unlexer.N() - current += self.unlexer.E() - current += self.unlexer.R() - return current - INNER.min_depth = 1 - - @depthcontrol - def INSERT(self): - current = self.create_node(UnlexerRule(name='INSERT')) - current += self.unlexer.I() - current += self.unlexer.N() - current += self.unlexer.S() - current += self.unlexer.E() - current += self.unlexer.R() - current += self.unlexer.T() - return current - INSERT.min_depth = 1 - - @depthcontrol - def INTERVAL(self): - current = self.create_node(UnlexerRule(name='INTERVAL')) - current += self.unlexer.I() - current += self.unlexer.N() - current += self.unlexer.T() - current += self.unlexer.E() - current += self.unlexer.R() - current += self.unlexer.V() - current += self.unlexer.A() - current += self.unlexer.L() - return current - INTERVAL.min_depth = 1 - - @depthcontrol - def INTO(self): - current = self.create_node(UnlexerRule(name='INTO')) - current += self.unlexer.I() - current += self.unlexer.N() - current += self.unlexer.T() - current += self.unlexer.O() - return current - INTO.min_depth = 1 - - @depthcontrol - def IS(self): - current = self.create_node(UnlexerRule(name='IS')) - current += self.unlexer.I() - current += self.unlexer.S() - return current - IS.min_depth = 1 - - @depthcontrol - def JOIN(self): - current = self.create_node(UnlexerRule(name='JOIN')) - current += self.unlexer.J() - current += self.unlexer.O() - current += self.unlexer.I() - current += self.unlexer.N() - return current - JOIN.min_depth = 1 - - @depthcontrol - def KEY(self): - current = self.create_node(UnlexerRule(name='KEY')) - current += self.unlexer.K() - current += self.unlexer.E() - current += self.unlexer.Y() - return current - KEY.min_depth = 1 - - @depthcontrol - def LAST(self): - current = self.create_node(UnlexerRule(name='LAST')) - current += self.unlexer.L() - current += self.unlexer.A() - current += self.unlexer.S() - current += self.unlexer.T() - return current - LAST.min_depth = 1 - - @depthcontrol - def LEADING(self): - current = self.create_node(UnlexerRule(name='LEADING')) - 
current += self.unlexer.L() - current += self.unlexer.E() - current += self.unlexer.A() - current += self.unlexer.D() - current += self.unlexer.I() - current += self.unlexer.N() - current += self.unlexer.G() - return current - LEADING.min_depth = 1 - - @depthcontrol - def LEFT(self): - current = self.create_node(UnlexerRule(name='LEFT')) - current += self.unlexer.L() - current += self.unlexer.E() - current += self.unlexer.F() - current += self.unlexer.T() - return current - LEFT.min_depth = 1 - - @depthcontrol - def LIKE(self): - current = self.create_node(UnlexerRule(name='LIKE')) - current += self.unlexer.L() - current += self.unlexer.I() - current += self.unlexer.K() - current += self.unlexer.E() - return current - LIKE.min_depth = 1 - - @depthcontrol - def LIMIT(self): - current = self.create_node(UnlexerRule(name='LIMIT')) - current += self.unlexer.L() - current += self.unlexer.I() - current += self.unlexer.M() - current += self.unlexer.I() - current += self.unlexer.T() - return current - LIMIT.min_depth = 1 - - @depthcontrol - def LOCAL(self): - current = self.create_node(UnlexerRule(name='LOCAL')) - current += self.unlexer.L() - current += self.unlexer.O() - current += self.unlexer.C() - current += self.unlexer.A() - current += self.unlexer.L() - return current - LOCAL.min_depth = 1 - - @depthcontrol - def MATERIALIZED(self): - current = self.create_node(UnlexerRule(name='MATERIALIZED')) - current += self.unlexer.M() - current += self.unlexer.A() - current += self.unlexer.T() - current += self.unlexer.E() - current += self.unlexer.R() - current += self.unlexer.I() - current += self.unlexer.A() - current += self.unlexer.L() - current += self.unlexer.I() - current += self.unlexer.Z() - current += self.unlexer.E() - current += self.unlexer.D() - return current - MATERIALIZED.min_depth = 1 - - @depthcontrol - def MINUTE(self): - current = self.create_node(UnlexerRule(name='MINUTE')) - current += self.unlexer.M() - current += self.unlexer.I() - current += self.unlexer.N() - current += self.unlexer.U() - current += self.unlexer.T() - current += self.unlexer.E() - return current - MINUTE.min_depth = 1 - - @depthcontrol - def MONTH(self): - current = self.create_node(UnlexerRule(name='MONTH')) - current += self.unlexer.M() - current += self.unlexer.O() - current += self.unlexer.N() - current += self.unlexer.T() - current += self.unlexer.H() - return current - MONTH.min_depth = 1 - - @depthcontrol - def NAN_SQL(self): - current = self.create_node(UnlexerRule(name='NAN_SQL')) - current += self.unlexer.N() - current += self.unlexer.A() - current += self.unlexer.N() - return current - NAN_SQL.min_depth = 1 - - @depthcontrol - def NOT(self): - current = self.create_node(UnlexerRule(name='NOT')) - current += self.unlexer.N() - current += self.unlexer.O() - current += self.unlexer.T() - return current - NOT.min_depth = 1 - - @depthcontrol - def NULL_SQL(self): - current = self.create_node(UnlexerRule(name='NULL_SQL')) - current += self.unlexer.N() - current += self.unlexer.U() - current += self.unlexer.L() - current += self.unlexer.L() - return current - NULL_SQL.min_depth = 1 - - @depthcontrol - def NULLS(self): - current = self.create_node(UnlexerRule(name='NULLS')) - current += self.unlexer.N() - current += self.unlexer.U() - current += self.unlexer.L() - current += self.unlexer.L() - current += self.unlexer.S() - return current - NULLS.min_depth = 1 - - @depthcontrol - def OFFSET(self): - current = self.create_node(UnlexerRule(name='OFFSET')) - current += self.unlexer.O() - current += 
self.unlexer.F() - current += self.unlexer.F() - current += self.unlexer.S() - current += self.unlexer.E() - current += self.unlexer.T() - return current - OFFSET.min_depth = 1 - - @depthcontrol - def ON(self): - current = self.create_node(UnlexerRule(name='ON')) - current += self.unlexer.O() - current += self.unlexer.N() - return current - ON.min_depth = 1 - - @depthcontrol - def OR(self): - current = self.create_node(UnlexerRule(name='OR')) - current += self.unlexer.O() - current += self.unlexer.R() - return current - OR.min_depth = 1 - - @depthcontrol - def ORDER(self): - current = self.create_node(UnlexerRule(name='ORDER')) - current += self.unlexer.O() - current += self.unlexer.R() - current += self.unlexer.D() - current += self.unlexer.E() - current += self.unlexer.R() - return current - ORDER.min_depth = 1 - - @depthcontrol - def OUTER(self): - current = self.create_node(UnlexerRule(name='OUTER')) - current += self.unlexer.O() - current += self.unlexer.U() - current += self.unlexer.T() - current += self.unlexer.E() - current += self.unlexer.R() - return current - OUTER.min_depth = 1 - - @depthcontrol - def OUTFILE(self): - current = self.create_node(UnlexerRule(name='OUTFILE')) - current += self.unlexer.O() - current += self.unlexer.U() - current += self.unlexer.T() - current += self.unlexer.F() - current += self.unlexer.I() - current += self.unlexer.L() - current += self.unlexer.E() - return current - OUTFILE.min_depth = 1 - - @depthcontrol - def PARTITION(self): - current = self.create_node(UnlexerRule(name='PARTITION')) - current += self.unlexer.P() - current += self.unlexer.A() - current += self.unlexer.R() - current += self.unlexer.T() - current += self.unlexer.I() - current += self.unlexer.T() - current += self.unlexer.I() - current += self.unlexer.O() - current += self.unlexer.N() - return current - PARTITION.min_depth = 1 - - @depthcontrol - def PREWHERE(self): - current = self.create_node(UnlexerRule(name='PREWHERE')) - current += self.unlexer.P() - current += self.unlexer.R() - current += self.unlexer.E() - current += self.unlexer.W() - current += self.unlexer.H() - current += self.unlexer.E() - current += self.unlexer.R() - current += self.unlexer.E() - return current - PREWHERE.min_depth = 1 - - @depthcontrol - def PRIMARY(self): - current = self.create_node(UnlexerRule(name='PRIMARY')) - current += self.unlexer.P() - current += self.unlexer.R() - current += self.unlexer.I() - current += self.unlexer.M() - current += self.unlexer.A() - current += self.unlexer.R() - current += self.unlexer.Y() - return current - PRIMARY.min_depth = 1 - - @depthcontrol - def QUARTER(self): - current = self.create_node(UnlexerRule(name='QUARTER')) - current += self.unlexer.Q() - current += self.unlexer.U() - current += self.unlexer.A() - current += self.unlexer.R() - current += self.unlexer.T() - current += self.unlexer.E() - current += self.unlexer.R() - return current - QUARTER.min_depth = 1 - - @depthcontrol - def RIGHT(self): - current = self.create_node(UnlexerRule(name='RIGHT')) - current += self.unlexer.R() - current += self.unlexer.I() - current += self.unlexer.G() - current += self.unlexer.H() - current += self.unlexer.T() - return current - RIGHT.min_depth = 1 - - @depthcontrol - def SAMPLE(self): - current = self.create_node(UnlexerRule(name='SAMPLE')) - current += self.unlexer.S() - current += self.unlexer.A() - current += self.unlexer.M() - current += self.unlexer.P() - current += self.unlexer.L() - current += self.unlexer.E() - return current - SAMPLE.min_depth = 1 - - 
@depthcontrol - def SECOND(self): - current = self.create_node(UnlexerRule(name='SECOND')) - current += self.unlexer.S() - current += self.unlexer.E() - current += self.unlexer.C() - current += self.unlexer.O() - current += self.unlexer.N() - current += self.unlexer.D() - return current - SECOND.min_depth = 1 - - @depthcontrol - def SELECT(self): - current = self.create_node(UnlexerRule(name='SELECT')) - current += self.unlexer.S() - current += self.unlexer.E() - current += self.unlexer.L() - current += self.unlexer.E() - current += self.unlexer.C() - current += self.unlexer.T() - return current - SELECT.min_depth = 1 - - @depthcontrol - def SEMI(self): - current = self.create_node(UnlexerRule(name='SEMI')) - current += self.unlexer.S() - current += self.unlexer.E() - current += self.unlexer.M() - current += self.unlexer.I() - return current - SEMI.min_depth = 1 - - @depthcontrol - def SET(self): - current = self.create_node(UnlexerRule(name='SET')) - current += self.unlexer.S() - current += self.unlexer.E() - current += self.unlexer.T() - return current - SET.min_depth = 1 - - @depthcontrol - def SETTINGS(self): - current = self.create_node(UnlexerRule(name='SETTINGS')) - current += self.unlexer.S() - current += self.unlexer.E() - current += self.unlexer.T() - current += self.unlexer.T() - current += self.unlexer.I() - current += self.unlexer.N() - current += self.unlexer.G() - current += self.unlexer.S() - return current - SETTINGS.min_depth = 1 - - @depthcontrol - def TABLE(self): - current = self.create_node(UnlexerRule(name='TABLE')) - current += self.unlexer.T() - current += self.unlexer.A() - current += self.unlexer.B() - current += self.unlexer.L() - current += self.unlexer.E() - return current - TABLE.min_depth = 1 - - @depthcontrol - def TEMPORARY(self): - current = self.create_node(UnlexerRule(name='TEMPORARY')) - current += self.unlexer.T() - current += self.unlexer.E() - current += self.unlexer.M() - current += self.unlexer.P() - current += self.unlexer.O() - current += self.unlexer.R() - current += self.unlexer.A() - current += self.unlexer.R() - current += self.unlexer.Y() - return current - TEMPORARY.min_depth = 1 - - @depthcontrol - def THEN(self): - current = self.create_node(UnlexerRule(name='THEN')) - current += self.unlexer.T() - current += self.unlexer.H() - current += self.unlexer.E() - current += self.unlexer.N() - return current - THEN.min_depth = 1 - - @depthcontrol - def TO(self): - current = self.create_node(UnlexerRule(name='TO')) - current += self.unlexer.T() - current += self.unlexer.O() - return current - TO.min_depth = 1 - - @depthcontrol - def TOTALS(self): - current = self.create_node(UnlexerRule(name='TOTALS')) - current += self.unlexer.T() - current += self.unlexer.O() - current += self.unlexer.T() - current += self.unlexer.A() - current += self.unlexer.L() - current += self.unlexer.S() - return current - TOTALS.min_depth = 1 - - @depthcontrol - def TRAILING(self): - current = self.create_node(UnlexerRule(name='TRAILING')) - current += self.unlexer.T() - current += self.unlexer.R() - current += self.unlexer.A() - current += self.unlexer.I() - current += self.unlexer.L() - current += self.unlexer.I() - current += self.unlexer.N() - current += self.unlexer.G() - return current - TRAILING.min_depth = 1 - - @depthcontrol - def TRIM(self): - current = self.create_node(UnlexerRule(name='TRIM')) - current += self.unlexer.T() - current += self.unlexer.R() - current += self.unlexer.I() - current += self.unlexer.M() - return current - TRIM.min_depth = 1 - - 
@depthcontrol - def TTL(self): - current = self.create_node(UnlexerRule(name='TTL')) - current += self.unlexer.T() - current += self.unlexer.T() - current += self.unlexer.L() - return current - TTL.min_depth = 1 - - @depthcontrol - def UNION(self): - current = self.create_node(UnlexerRule(name='UNION')) - current += self.unlexer.U() - current += self.unlexer.N() - current += self.unlexer.I() - current += self.unlexer.O() - current += self.unlexer.N() - return current - UNION.min_depth = 1 - - @depthcontrol - def USING(self): - current = self.create_node(UnlexerRule(name='USING')) - current += self.unlexer.U() - current += self.unlexer.S() - current += self.unlexer.I() - current += self.unlexer.N() - current += self.unlexer.G() - return current - USING.min_depth = 1 - - @depthcontrol - def VALUES(self): - current = self.create_node(UnlexerRule(name='VALUES')) - current += self.unlexer.V() - current += self.unlexer.A() - current += self.unlexer.L() - current += self.unlexer.U() - current += self.unlexer.E() - current += self.unlexer.S() - return current - VALUES.min_depth = 1 - - @depthcontrol - def VOLUME(self): - current = self.create_node(UnlexerRule(name='VOLUME')) - current += self.unlexer.V() - current += self.unlexer.O() - current += self.unlexer.L() - current += self.unlexer.U() - current += self.unlexer.M() - current += self.unlexer.E() - return current - VOLUME.min_depth = 1 - - @depthcontrol - def WEEK(self): - current = self.create_node(UnlexerRule(name='WEEK')) - current += self.unlexer.W() - current += self.unlexer.E() - current += self.unlexer.E() - current += self.unlexer.K() - return current - WEEK.min_depth = 1 - - @depthcontrol - def WHEN(self): - current = self.create_node(UnlexerRule(name='WHEN')) - current += self.unlexer.W() - current += self.unlexer.H() - current += self.unlexer.E() - current += self.unlexer.N() - return current - WHEN.min_depth = 1 - - @depthcontrol - def WHERE(self): - current = self.create_node(UnlexerRule(name='WHERE')) - current += self.unlexer.W() - current += self.unlexer.H() - current += self.unlexer.E() - current += self.unlexer.R() - current += self.unlexer.E() - return current - WHERE.min_depth = 1 - - @depthcontrol - def WITH(self): - current = self.create_node(UnlexerRule(name='WITH')) - current += self.unlexer.W() - current += self.unlexer.I() - current += self.unlexer.T() - current += self.unlexer.H() - return current - WITH.min_depth = 1 - - @depthcontrol - def YEAR(self): - current = self.create_node(UnlexerRule(name='YEAR')) - current += self.unlexer.Y() - current += self.unlexer.E() - current += self.unlexer.A() - current += self.unlexer.R() - return current - YEAR.min_depth = 1 - - @depthcontrol - def IDENTIFIER(self): - current = self.create_node(UnlexerRule(name='IDENTIFIER')) - choice = self.choice([0 if [1, 1][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_18', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_18', choice)] = self.unlexer.weights.get(('alt_18', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.LETTER() - elif choice == 1: - current += self.unlexer.UNDERSCORE() - if self.unlexer.max_depth >= 1: - for _ in self.zero_or_more(): - choice = self.choice([0 if [1, 1, 1][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_22', i), 1) for i, w in enumerate([1, 1, 1])]) - self.unlexer.weights[('alt_22', choice)] = self.unlexer.weights.get(('alt_22', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.LETTER() 
- elif choice == 1: - current += self.unlexer.UNDERSCORE() - elif choice == 2: - current += self.unlexer.DEC_DIGIT() - - return current - IDENTIFIER.min_depth = 1 - - @depthcontrol - def FLOATING_LITERAL(self): - current = self.create_node(UnlexerRule(name='FLOATING_LITERAL')) - choice = self.choice([0 if [2, 2, 2, 2][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_26', i), 1) for i, w in enumerate([1, 1, 1, 1])]) - self.unlexer.weights[('alt_26', choice)] = self.unlexer.weights.get(('alt_26', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.HEXADECIMAL_LITERAL() - current += self.unlexer.DOT() - if self.unlexer.max_depth >= 1: - for _ in self.zero_or_more(): - current += self.unlexer.HEX_DIGIT() - - if self.unlexer.max_depth >= 1: - for _ in self.zero_or_one(): - choice = self.choice([0 if [1, 1][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_33', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_33', choice)] = self.unlexer.weights.get(('alt_33', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.P() - elif choice == 1: - current += self.unlexer.E() - if self.unlexer.max_depth >= 1: - for _ in self.zero_or_one(): - choice = self.choice([0 if [1, 1][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_37', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_37', choice)] = self.unlexer.weights.get(('alt_37', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.PLUS() - elif choice == 1: - current += self.unlexer.DASH() - - if self.unlexer.max_depth >= 0: - for _ in self.one_or_more(): - current += self.unlexer.DEC_DIGIT() - - - elif choice == 1: - current += self.unlexer.HEXADECIMAL_LITERAL() - choice = self.choice([0 if [1, 1][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_40', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_40', choice)] = self.unlexer.weights.get(('alt_40', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.P() - elif choice == 1: - current += self.unlexer.E() - if self.unlexer.max_depth >= 1: - for _ in self.zero_or_one(): - choice = self.choice([0 if [1, 1][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_44', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_44', choice)] = self.unlexer.weights.get(('alt_44', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.PLUS() - elif choice == 1: - current += self.unlexer.DASH() - - if self.unlexer.max_depth >= 0: - for _ in self.one_or_more(): - current += self.unlexer.DEC_DIGIT() - - elif choice == 2: - current += self.unlexer.INTEGER_LITERAL() - current += self.unlexer.DOT() - if self.unlexer.max_depth >= 1: - for _ in self.zero_or_more(): - current += self.unlexer.DEC_DIGIT() - - if self.unlexer.max_depth >= 1: - for _ in self.zero_or_one(): - current += self.unlexer.E() - if self.unlexer.max_depth >= 1: - for _ in self.zero_or_one(): - choice = self.choice([0 if [1, 1][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_50', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_50', choice)] = self.unlexer.weights.get(('alt_50', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.PLUS() - elif choice == 1: - current += self.unlexer.DASH() - - if self.unlexer.max_depth >= 0: - for _ in self.one_or_more(): - current += self.unlexer.DEC_DIGIT() - - - elif 
choice == 3: - current += self.unlexer.INTEGER_LITERAL() - current += self.unlexer.E() - if self.unlexer.max_depth >= 1: - for _ in self.zero_or_one(): - choice = self.choice([0 if [1, 1][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_54', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_54', choice)] = self.unlexer.weights.get(('alt_54', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.PLUS() - elif choice == 1: - current += self.unlexer.DASH() - - if self.unlexer.max_depth >= 0: - for _ in self.one_or_more(): - current += self.unlexer.DEC_DIGIT() - - return current - FLOATING_LITERAL.min_depth = 2 - - @depthcontrol - def HEXADECIMAL_LITERAL(self): - current = self.create_node(UnlexerRule(name='HEXADECIMAL_LITERAL')) - current += self.create_node(UnlexerRule(src='0')) - current += self.unlexer.X() - if self.unlexer.max_depth >= 0: - for _ in self.one_or_more(): - current += self.unlexer.HEX_DIGIT() - - return current - HEXADECIMAL_LITERAL.min_depth = 1 - - @depthcontrol - def INTEGER_LITERAL(self): - current = self.create_node(UnlexerRule(name='INTEGER_LITERAL')) - if self.unlexer.max_depth >= 0: - for _ in self.one_or_more(): - current += self.unlexer.DEC_DIGIT() - - return current - INTEGER_LITERAL.min_depth = 1 - - @depthcontrol - def STRING_LITERAL(self): - current = self.create_node(UnlexerRule(name='STRING_LITERAL')) - current += self.unlexer.QUOTE_SINGLE() - if self.unlexer.max_depth >= 0: - for _ in self.zero_or_more(): - choice = self.choice([0 if [0, 1][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_59', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_59', choice)] = self.unlexer.weights.get(('alt_59', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += UnlexerRule(src=self.char_from_list(charset_0)) - elif choice == 1: - current += self.unlexer.BACKSLASH() - current += UnlexerRule(src=self.any_char()) - - current += self.unlexer.QUOTE_SINGLE() - return current - STRING_LITERAL.min_depth = 1 - - @depthcontrol - def A(self): - current = self.create_node(UnlexerRule(name='A')) - current += self.create_node(UnlexerRule(src=self.char_from_list(charset_1))) - return current - A.min_depth = 0 - - @depthcontrol - def B(self): - current = self.create_node(UnlexerRule(name='B')) - current += self.create_node(UnlexerRule(src=self.char_from_list(charset_2))) - return current - B.min_depth = 0 - - @depthcontrol - def C(self): - current = self.create_node(UnlexerRule(name='C')) - current += self.create_node(UnlexerRule(src=self.char_from_list(charset_3))) - return current - C.min_depth = 0 - - @depthcontrol - def D(self): - current = self.create_node(UnlexerRule(name='D')) - current += self.create_node(UnlexerRule(src=self.char_from_list(charset_4))) - return current - D.min_depth = 0 - - @depthcontrol - def E(self): - current = self.create_node(UnlexerRule(name='E')) - current += self.create_node(UnlexerRule(src=self.char_from_list(charset_5))) - return current - E.min_depth = 0 - - @depthcontrol - def F(self): - current = self.create_node(UnlexerRule(name='F')) - current += self.create_node(UnlexerRule(src=self.char_from_list(charset_6))) - return current - F.min_depth = 0 - - @depthcontrol - def G(self): - current = self.create_node(UnlexerRule(name='G')) - current += self.create_node(UnlexerRule(src=self.char_from_list(charset_7))) - return current - G.min_depth = 0 - - @depthcontrol - def H(self): - current = self.create_node(UnlexerRule(name='H')) - current += 
self.create_node(UnlexerRule(src=self.char_from_list(charset_8))) - return current - H.min_depth = 0 - - @depthcontrol - def I(self): - current = self.create_node(UnlexerRule(name='I')) - current += self.create_node(UnlexerRule(src=self.char_from_list(charset_9))) - return current - I.min_depth = 0 - - @depthcontrol - def J(self): - current = self.create_node(UnlexerRule(name='J')) - current += self.create_node(UnlexerRule(src=self.char_from_list(charset_10))) - return current - J.min_depth = 0 - - @depthcontrol - def K(self): - current = self.create_node(UnlexerRule(name='K')) - current += self.create_node(UnlexerRule(src=self.char_from_list(charset_11))) - return current - K.min_depth = 0 - - @depthcontrol - def L(self): - current = self.create_node(UnlexerRule(name='L')) - current += self.create_node(UnlexerRule(src=self.char_from_list(charset_12))) - return current - L.min_depth = 0 - - @depthcontrol - def M(self): - current = self.create_node(UnlexerRule(name='M')) - current += self.create_node(UnlexerRule(src=self.char_from_list(charset_13))) - return current - M.min_depth = 0 - - @depthcontrol - def N(self): - current = self.create_node(UnlexerRule(name='N')) - current += self.create_node(UnlexerRule(src=self.char_from_list(charset_14))) - return current - N.min_depth = 0 - - @depthcontrol - def O(self): - current = self.create_node(UnlexerRule(name='O')) - current += self.create_node(UnlexerRule(src=self.char_from_list(charset_15))) - return current - O.min_depth = 0 - - @depthcontrol - def P(self): - current = self.create_node(UnlexerRule(name='P')) - current += self.create_node(UnlexerRule(src=self.char_from_list(charset_16))) - return current - P.min_depth = 0 - - @depthcontrol - def Q(self): - current = self.create_node(UnlexerRule(name='Q')) - current += self.create_node(UnlexerRule(src=self.char_from_list(charset_17))) - return current - Q.min_depth = 0 - - @depthcontrol - def R(self): - current = self.create_node(UnlexerRule(name='R')) - current += self.create_node(UnlexerRule(src=self.char_from_list(charset_18))) - return current - R.min_depth = 0 - - @depthcontrol - def S(self): - current = self.create_node(UnlexerRule(name='S')) - current += self.create_node(UnlexerRule(src=self.char_from_list(charset_19))) - return current - S.min_depth = 0 - - @depthcontrol - def T(self): - current = self.create_node(UnlexerRule(name='T')) - current += self.create_node(UnlexerRule(src=self.char_from_list(charset_20))) - return current - T.min_depth = 0 - - @depthcontrol - def U(self): - current = self.create_node(UnlexerRule(name='U')) - current += self.create_node(UnlexerRule(src=self.char_from_list(charset_21))) - return current - U.min_depth = 0 - - @depthcontrol - def V(self): - current = self.create_node(UnlexerRule(name='V')) - current += self.create_node(UnlexerRule(src=self.char_from_list(charset_22))) - return current - V.min_depth = 0 - - @depthcontrol - def W(self): - current = self.create_node(UnlexerRule(name='W')) - current += self.create_node(UnlexerRule(src=self.char_from_list(charset_23))) - return current - W.min_depth = 0 - - @depthcontrol - def X(self): - current = self.create_node(UnlexerRule(name='X')) - current += self.create_node(UnlexerRule(src=self.char_from_list(charset_24))) - return current - X.min_depth = 0 - - @depthcontrol - def Y(self): - current = self.create_node(UnlexerRule(name='Y')) - current += self.create_node(UnlexerRule(src=self.char_from_list(charset_25))) - return current - Y.min_depth = 0 - - @depthcontrol - def Z(self): - current = 
self.create_node(UnlexerRule(name='Z')) - current += self.create_node(UnlexerRule(src=self.char_from_list(charset_26))) - return current - Z.min_depth = 0 - - @depthcontrol - def LETTER(self): - current = self.create_node(UnlexerRule(name='LETTER')) - current += self.create_node(UnlexerRule(src=self.char_from_list(charset_27))) - return current - LETTER.min_depth = 0 - - @depthcontrol - def DEC_DIGIT(self): - current = self.create_node(UnlexerRule(name='DEC_DIGIT')) - current += self.create_node(UnlexerRule(src=self.char_from_list(charset_28))) - return current - DEC_DIGIT.min_depth = 0 - - @depthcontrol - def HEX_DIGIT(self): - current = self.create_node(UnlexerRule(name='HEX_DIGIT')) - current += self.create_node(UnlexerRule(src=self.char_from_list(charset_29))) - return current - HEX_DIGIT.min_depth = 0 - - @depthcontrol - def ARROW(self): - current = self.create_node(UnlexerRule(name='ARROW')) - current += self.create_node(UnlexerRule(src='->')) - return current - ARROW.min_depth = 0 - - @depthcontrol - def ASTERISK(self): - current = self.create_node(UnlexerRule(name='ASTERISK')) - current += self.create_node(UnlexerRule(src='*')) - return current - ASTERISK.min_depth = 0 - - @depthcontrol - def BACKQUOTE(self): - current = self.create_node(UnlexerRule(name='BACKQUOTE')) - current += self.create_node(UnlexerRule(src='`')) - return current - BACKQUOTE.min_depth = 0 - - @depthcontrol - def BACKSLASH(self): - current = self.create_node(UnlexerRule(name='BACKSLASH')) - current += self.create_node(UnlexerRule(src='\\')) - return current - BACKSLASH.min_depth = 0 - - @depthcontrol - def COLON(self): - current = self.create_node(UnlexerRule(name='COLON')) - current += self.create_node(UnlexerRule(src=':')) - return current - COLON.min_depth = 0 - - @depthcontrol - def COMMA(self): - current = self.create_node(UnlexerRule(name='COMMA')) - current += self.create_node(UnlexerRule(src=',')) - return current - COMMA.min_depth = 0 - - @depthcontrol - def CONCAT(self): - current = self.create_node(UnlexerRule(name='CONCAT')) - current += self.create_node(UnlexerRule(src='||')) - return current - CONCAT.min_depth = 0 - - @depthcontrol - def DASH(self): - current = self.create_node(UnlexerRule(name='DASH')) - current += self.create_node(UnlexerRule(src='-')) - return current - DASH.min_depth = 0 - - @depthcontrol - def DOT(self): - current = self.create_node(UnlexerRule(name='DOT')) - current += self.create_node(UnlexerRule(src='.')) - return current - DOT.min_depth = 0 - - @depthcontrol - def EQ_DOUBLE(self): - current = self.create_node(UnlexerRule(name='EQ_DOUBLE')) - current += self.create_node(UnlexerRule(src='==')) - return current - EQ_DOUBLE.min_depth = 0 - - @depthcontrol - def EQ_SINGLE(self): - current = self.create_node(UnlexerRule(name='EQ_SINGLE')) - current += self.create_node(UnlexerRule(src='=')) - return current - EQ_SINGLE.min_depth = 0 - - @depthcontrol - def GE(self): - current = self.create_node(UnlexerRule(name='GE')) - current += self.create_node(UnlexerRule(src='>=')) - return current - GE.min_depth = 0 - - @depthcontrol - def GT(self): - current = self.create_node(UnlexerRule(name='GT')) - current += self.create_node(UnlexerRule(src='>')) - return current - GT.min_depth = 0 - - @depthcontrol - def LBRACKET(self): - current = self.create_node(UnlexerRule(name='LBRACKET')) - current += self.create_node(UnlexerRule(src='[')) - return current - LBRACKET.min_depth = 0 - - @depthcontrol - def LE(self): - current = self.create_node(UnlexerRule(name='LE')) - current += 
self.create_node(UnlexerRule(src='<=')) - return current - LE.min_depth = 0 - - @depthcontrol - def LPAREN(self): - current = self.create_node(UnlexerRule(name='LPAREN')) - current += self.create_node(UnlexerRule(src='(')) - return current - LPAREN.min_depth = 0 - - @depthcontrol - def LT(self): - current = self.create_node(UnlexerRule(name='LT')) - current += self.create_node(UnlexerRule(src='<')) - return current - LT.min_depth = 0 - - @depthcontrol - def NOT_EQ(self): - current = self.create_node(UnlexerRule(name='NOT_EQ')) - choice = self.choice([0 if [0, 0][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_79', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_79', choice)] = self.unlexer.weights.get(('alt_79', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.create_node(UnlexerRule(src='!=')) - elif choice == 1: - current += self.create_node(UnlexerRule(src='<>')) - return current - NOT_EQ.min_depth = 0 - - @depthcontrol - def PERCENT(self): - current = self.create_node(UnlexerRule(name='PERCENT')) - current += self.create_node(UnlexerRule(src='%')) - return current - PERCENT.min_depth = 0 - - @depthcontrol - def PLUS(self): - current = self.create_node(UnlexerRule(name='PLUS')) - current += self.create_node(UnlexerRule(src='+')) - return current - PLUS.min_depth = 0 - - @depthcontrol - def QUERY(self): - current = self.create_node(UnlexerRule(name='QUERY')) - current += self.create_node(UnlexerRule(src='?')) - return current - QUERY.min_depth = 0 - - @depthcontrol - def QUOTE_SINGLE(self): - current = self.create_node(UnlexerRule(name='QUOTE_SINGLE')) - current += self.create_node(UnlexerRule(src='\'')) - return current - QUOTE_SINGLE.min_depth = 0 - - @depthcontrol - def RBRACKET(self): - current = self.create_node(UnlexerRule(name='RBRACKET')) - current += self.create_node(UnlexerRule(src=']')) - return current - RBRACKET.min_depth = 0 - - @depthcontrol - def RPAREN(self): - current = self.create_node(UnlexerRule(name='RPAREN')) - current += self.create_node(UnlexerRule(src=')')) - return current - RPAREN.min_depth = 0 - - @depthcontrol - def SEMICOLON(self): - current = self.create_node(UnlexerRule(name='SEMICOLON')) - current += self.create_node(UnlexerRule(src=';')) - return current - SEMICOLON.min_depth = 0 - - @depthcontrol - def SLASH(self): - current = self.create_node(UnlexerRule(name='SLASH')) - current += self.create_node(UnlexerRule(src='/')) - return current - SLASH.min_depth = 0 - - @depthcontrol - def UNDERSCORE(self): - current = self.create_node(UnlexerRule(name='UNDERSCORE')) - current += self.create_node(UnlexerRule(src='_')) - return current - UNDERSCORE.min_depth = 0 - - @depthcontrol - def SINGLE_LINE_COMMENT(self): - current = self.create_node(UnlexerRule(name='SINGLE_LINE_COMMENT')) - current += self.create_node(UnlexerRule(src='--')) - if self.unlexer.max_depth >= 0: - for _ in self.zero_or_more(): - current += UnlexerRule(src=self.char_from_list(charset_30)) - - choice = self.choice([0 if [0, 0, 1][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_95', i), 1) for i, w in enumerate([1, 1, 1])]) - self.unlexer.weights[('alt_95', choice)] = self.unlexer.weights.get(('alt_95', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.create_node(UnlexerRule(src='\n')) - elif choice == 1: - current += self.create_node(UnlexerRule(src='\r')) - elif choice == 2: - current += self.unlexer.EOF() - return current - SINGLE_LINE_COMMENT.min_depth = 0 - - @depthcontrol - def 
MULTI_LINE_COMMENT(self): - current = self.create_node(UnlexerRule(name='MULTI_LINE_COMMENT')) - current += self.create_node(UnlexerRule(src='/*')) - if self.unlexer.max_depth >= 0: - for _ in self.zero_or_more(): - current += UnlexerRule(src=self.any_char()) - - current += self.create_node(UnlexerRule(src='*/')) - return current - MULTI_LINE_COMMENT.min_depth = 0 - - @depthcontrol - def WHITESPACE(self): - current = self.create_node(UnlexerRule(name='WHITESPACE')) - current += self.create_node(UnlexerRule(src=self.char_from_list(charset_31))) - return current - WHITESPACE.min_depth = 0 - diff --git a/utils/grammar-fuzzer/ClickHouseUnparser.py b/utils/grammar-fuzzer/ClickHouseUnparser.py deleted file mode 100644 index 7fa5eb96d31..00000000000 --- a/utils/grammar-fuzzer/ClickHouseUnparser.py +++ /dev/null @@ -1,1815 +0,0 @@ -# Generated by Grammarinator 19.3 - -from itertools import chain -from grammarinator.runtime import * - -import ClickHouseUnlexer - - -class ClickHouseUnparser(Grammarinator): - - def __init__(self, unlexer): - super(ClickHouseUnparser, self).__init__() - self.unlexer = unlexer - @depthcontrol - def queryList(self): - current = self.create_node(UnparserRule(name='queryList')) - current += self.queryStmt() - if self.unlexer.max_depth >= 8: - for _ in self.zero_or_more(): - current += self.unlexer.SEMICOLON() - current += self.queryStmt() - - if self.unlexer.max_depth >= 1: - for _ in self.zero_or_one(): - current += self.unlexer.SEMICOLON() - - current += self.unlexer.EOF() - return current - queryList.min_depth = 8 - - @depthcontrol - def queryStmt(self): - current = self.create_node(UnparserRule(name='queryStmt')) - current += self.query() - if self.unlexer.max_depth >= 2: - for _ in self.zero_or_one(): - current += self.unlexer.INTO() - current += self.unlexer.OUTFILE() - current += self.unlexer.STRING_LITERAL() - - if self.unlexer.max_depth >= 3: - for _ in self.zero_or_one(): - current += self.unlexer.FORMAT() - current += self.identifier() - - return current - queryStmt.min_depth = 7 - - @depthcontrol - def query(self): - current = self.create_node(UnparserRule(name='query')) - choice = self.choice([0 if [6, 7, 6, 6][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_108', i), 1) for i, w in enumerate([1, 1, 1, 1])]) - self.unlexer.weights[('alt_108', choice)] = self.unlexer.weights.get(('alt_108', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.distributedStmt() - elif choice == 1: - current += self.insertStmt() - elif choice == 2: - current += self.selectUnionStmt() - elif choice == 3: - current += self.setStmt() - return current - query.min_depth = 6 - - @depthcontrol - def distributedStmt(self): - current = self.create_node(UnparserRule(name='distributedStmt')) - choice = self.choice([0 if [5, 6, 6][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_113', i), 1) for i, w in enumerate([1, 1, 1])]) - self.unlexer.weights[('alt_113', choice)] = self.unlexer.weights.get(('alt_113', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.createDatabaseStmt() - elif choice == 1: - current += self.createTableStmt() - elif choice == 2: - current += self.dropStmt() - if self.unlexer.max_depth >= 3: - for _ in self.zero_or_one(): - current += self.unlexer.ON() - current += self.unlexer.CLUSTER() - current += self.identifier() - - return current - distributedStmt.min_depth = 5 - - @depthcontrol - def createDatabaseStmt(self): - current = self.create_node(UnparserRule(name='createDatabaseStmt')) - 
current += self.unlexer.CREATE() - current += self.unlexer.DATABASE() - if self.unlexer.max_depth >= 2: - for _ in self.zero_or_one(): - current += self.unlexer.IF() - current += self.unlexer.NOT() - current += self.unlexer.EXISTS() - - current += self.databaseIdentifier() - if self.unlexer.max_depth >= 4: - for _ in self.zero_or_one(): - current += self.engineExpr() - - return current - createDatabaseStmt.min_depth = 4 - - @depthcontrol - def createTableStmt(self): - current = self.create_node(UnparserRule(name='createTableStmt')) - current += self.unlexer.CREATE() - current += self.unlexer.TABLE() - if self.unlexer.max_depth >= 2: - for _ in self.zero_or_one(): - current += self.unlexer.IF() - current += self.unlexer.NOT() - current += self.unlexer.EXISTS() - - current += self.tableIdentifier() - current += self.schemaClause() - return current - createTableStmt.min_depth = 5 - - @depthcontrol - def schemaClause(self): - current = self.create_node(UnparserRule(name='schemaClause')) - choice = self.choice([0 if [8, 7, 5, 4][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_121', i), 1) for i, w in enumerate([1, 1, 1, 1])]) - self.unlexer.weights[('alt_121', choice)] = self.unlexer.weights.get(('alt_121', choice), 1) * self.unlexer.cooldown - if choice == 0: - current = self.schemaClause_SchemaDescriptionClause() - elif choice == 1: - current = self.schemaClause_SchemaAsSubqueryClause() - elif choice == 2: - current = self.schemaClause_SchemaAsTableClause() - elif choice == 3: - current = self.schemaClause_SchemaAsFunctionClause() - return current - schemaClause.min_depth = 4 - - @depthcontrol - def schemaClause_SchemaDescriptionClause(self): - current = self.create_node(UnparserRule(name='schemaClause_SchemaDescriptionClause')) - current += self.unlexer.LPAREN() - current += self.tableElementExpr() - if self.unlexer.max_depth >= 7: - for _ in self.zero_or_more(): - current += self.unlexer.COMMA() - current += self.tableElementExpr() - - current += self.unlexer.RPAREN() - current += self.engineClause() - return current - schemaClause_SchemaDescriptionClause.min_depth = 7 - - @depthcontrol - def schemaClause_SchemaAsSubqueryClause(self): - current = self.create_node(UnparserRule(name='schemaClause_SchemaAsSubqueryClause')) - if self.unlexer.max_depth >= 5: - for _ in self.zero_or_one(): - current += self.engineClause() - - current += self.unlexer.AS() - current += self.selectUnionStmt() - return current - schemaClause_SchemaAsSubqueryClause.min_depth = 6 - - @depthcontrol - def schemaClause_SchemaAsTableClause(self): - current = self.create_node(UnparserRule(name='schemaClause_SchemaAsTableClause')) - current += self.unlexer.AS() - current += self.tableIdentifier() - if self.unlexer.max_depth >= 5: - for _ in self.zero_or_one(): - current += self.engineClause() - - return current - schemaClause_SchemaAsTableClause.min_depth = 4 - - @depthcontrol - def schemaClause_SchemaAsFunctionClause(self): - current = self.create_node(UnparserRule(name='schemaClause_SchemaAsFunctionClause')) - current += self.unlexer.AS() - current += self.identifier() - current += self.unlexer.LPAREN() - if self.unlexer.max_depth >= 5: - for _ in self.zero_or_one(): - current += self.tableArgList() - - current += self.unlexer.RPAREN() - return current - schemaClause_SchemaAsFunctionClause.min_depth = 3 - - @depthcontrol - def engineClause(self): - current = self.create_node(UnparserRule(name='engineClause')) - current += self.engineExpr() - if self.unlexer.max_depth >= 6: - for _ in self.zero_or_one(): 
- current += self.orderByClause() - - if self.unlexer.max_depth >= 4: - for _ in self.zero_or_one(): - current += self.partitionByClause() - - if self.unlexer.max_depth >= 4: - for _ in self.zero_or_one(): - current += self.primaryKeyClause() - - if self.unlexer.max_depth >= 4: - for _ in self.zero_or_one(): - current += self.sampleByClause() - - if self.unlexer.max_depth >= 5: - for _ in self.zero_or_one(): - current += self.ttlClause() - - if self.unlexer.max_depth >= 6: - for _ in self.zero_or_one(): - current += self.settingsClause() - - return current - engineClause.min_depth = 4 - - @depthcontrol - def partitionByClause(self): - current = self.create_node(UnparserRule(name='partitionByClause')) - current += self.unlexer.PARTITION() - current += self.unlexer.BY() - current += self.columnExpr() - return current - partitionByClause.min_depth = 3 - - @depthcontrol - def primaryKeyClause(self): - current = self.create_node(UnparserRule(name='primaryKeyClause')) - current += self.unlexer.PRIMARY() - current += self.unlexer.KEY() - current += self.columnExpr() - return current - primaryKeyClause.min_depth = 3 - - @depthcontrol - def sampleByClause(self): - current = self.create_node(UnparserRule(name='sampleByClause')) - current += self.unlexer.SAMPLE() - current += self.unlexer.BY() - current += self.columnExpr() - return current - sampleByClause.min_depth = 3 - - @depthcontrol - def ttlClause(self): - current = self.create_node(UnparserRule(name='ttlClause')) - current += self.unlexer.TTL() - current += self.ttlExpr() - if self.unlexer.max_depth >= 4: - for _ in self.zero_or_more(): - current += self.unlexer.COMMA() - current += self.ttlExpr() - - return current - ttlClause.min_depth = 4 - - @depthcontrol - def engineExpr(self): - current = self.create_node(UnparserRule(name='engineExpr')) - current += self.unlexer.ENGINE() - if self.unlexer.max_depth >= 1: - for _ in self.zero_or_one(): - current += self.unlexer.EQ_SINGLE() - - current += self.identifier() - if self.unlexer.max_depth >= 1: - for _ in self.zero_or_one(): - current += self.unlexer.LPAREN() - if self.unlexer.max_depth >= 5: - for _ in self.zero_or_one(): - current += self.tableArgList() - - current += self.unlexer.RPAREN() - - return current - engineExpr.min_depth = 3 - - @depthcontrol - def tableElementExpr(self): - current = self.create_node(UnparserRule(name='tableElementExpr')) - current = self.tableElementExpr_TableElementColumn() - return current - tableElementExpr.min_depth = 6 - - @depthcontrol - def tableElementExpr_TableElementColumn(self): - current = self.create_node(UnparserRule(name='tableElementExpr_TableElementColumn')) - current += self.identifier() - current += self.columnTypeExpr() - if self.unlexer.max_depth >= 4: - for _ in self.zero_or_one(): - current += self.tableColumnPropertyExpr() - - if self.unlexer.max_depth >= 3: - for _ in self.zero_or_one(): - current += self.unlexer.TTL() - current += self.columnExpr() - - return current - tableElementExpr_TableElementColumn.min_depth = 5 - - @depthcontrol - def tableColumnPropertyExpr(self): - current = self.create_node(UnparserRule(name='tableColumnPropertyExpr')) - choice = self.choice([0 if [2, 2, 2][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_142', i), 1) for i, w in enumerate([1, 1, 1])]) - self.unlexer.weights[('alt_142', choice)] = self.unlexer.weights.get(('alt_142', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.DEFAULT() - elif choice == 1: - current += self.unlexer.MATERIALIZED() - elif 
choice == 2: - current += self.unlexer.ALIAS() - current += self.columnExpr() - return current - tableColumnPropertyExpr.min_depth = 3 - - @depthcontrol - def ttlExpr(self): - current = self.create_node(UnparserRule(name='ttlExpr')) - current += self.columnExpr() - if self.unlexer.max_depth >= 2: - for _ in self.zero_or_one(): - choice = self.choice([0 if [2, 2, 2][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_147', i), 1) for i, w in enumerate([1, 1, 1])]) - self.unlexer.weights[('alt_147', choice)] = self.unlexer.weights.get(('alt_147', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.DELETE() - elif choice == 1: - current += self.unlexer.TO() - current += self.unlexer.DISK() - current += self.unlexer.STRING_LITERAL() - elif choice == 2: - current += self.unlexer.TO() - current += self.unlexer.VOLUME() - current += self.unlexer.STRING_LITERAL() - - return current - ttlExpr.min_depth = 3 - - @depthcontrol - def dropStmt(self): - current = self.create_node(UnparserRule(name='dropStmt')) - choice = self.choice([0 if [5, 5][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_151', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_151', choice)] = self.unlexer.weights.get(('alt_151', choice), 1) * self.unlexer.cooldown - if choice == 0: - current = self.dropStmt_DropDatabaseStmt() - elif choice == 1: - current = self.dropStmt_DropTableStmt() - return current - dropStmt.min_depth = 5 - - @depthcontrol - def dropStmt_DropDatabaseStmt(self): - current = self.create_node(UnparserRule(name='dropStmt_DropDatabaseStmt')) - current += self.unlexer.DROP() - current += self.unlexer.DATABASE() - if self.unlexer.max_depth >= 2: - for _ in self.zero_or_one(): - current += self.unlexer.IF() - current += self.unlexer.EXISTS() - - current += self.databaseIdentifier() - return current - dropStmt_DropDatabaseStmt.min_depth = 4 - - @depthcontrol - def dropStmt_DropTableStmt(self): - current = self.create_node(UnparserRule(name='dropStmt_DropTableStmt')) - current += self.unlexer.DROP() - if self.unlexer.max_depth >= 2: - for _ in self.zero_or_one(): - current += self.unlexer.TEMPORARY() - - current += self.unlexer.TABLE() - if self.unlexer.max_depth >= 2: - for _ in self.zero_or_one(): - current += self.unlexer.IF() - current += self.unlexer.EXISTS() - - current += self.tableIdentifier() - return current - dropStmt_DropTableStmt.min_depth = 4 - - @depthcontrol - def insertStmt(self): - current = self.create_node(UnparserRule(name='insertStmt')) - current += self.unlexer.INSERT() - current += self.unlexer.INTO() - current += self.tableIdentifier() - if self.unlexer.max_depth >= 3: - for _ in self.zero_or_one(): - current += self.unlexer.LPAREN() - current += self.identifier() - if self.unlexer.max_depth >= 3: - for _ in self.zero_or_more(): - current += self.unlexer.COMMA() - current += self.identifier() - - current += self.unlexer.RPAREN() - - current += self.valuesClause() - return current - insertStmt.min_depth = 6 - - @depthcontrol - def valuesClause(self): - current = self.create_node(UnparserRule(name='valuesClause')) - choice = self.choice([0 if [5, 6][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_159', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_159', choice)] = self.unlexer.weights.get(('alt_159', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.VALUES() - current += self.valueTupleExpr() - if self.unlexer.max_depth >= 5: - for _ in 
self.zero_or_more(): - current += self.unlexer.COMMA() - current += self.valueTupleExpr() - - elif choice == 1: - current += self.selectUnionStmt() - return current - valuesClause.min_depth = 5 - - @depthcontrol - def valueTupleExpr(self): - current = self.create_node(UnparserRule(name='valueTupleExpr')) - current += self.unlexer.LPAREN() - current += self.valueExprList() - current += self.unlexer.RPAREN() - return current - valueTupleExpr.min_depth = 4 - - @depthcontrol - def selectUnionStmt(self): - current = self.create_node(UnparserRule(name='selectUnionStmt')) - current += self.selectStmt() - if self.unlexer.max_depth >= 5: - for _ in self.zero_or_more(): - current += self.unlexer.UNION() - current += self.unlexer.ALL() - current += self.selectStmt() - - return current - selectUnionStmt.min_depth = 5 - - @depthcontrol - def selectStmt(self): - current = self.create_node(UnparserRule(name='selectStmt')) - if self.unlexer.max_depth >= 5: - for _ in self.zero_or_one(): - current += self.withClause() - - current += self.unlexer.SELECT() - if self.unlexer.max_depth >= 2: - for _ in self.zero_or_one(): - current += self.unlexer.DISTINCT() - - current += self.columnExprList() - if self.unlexer.max_depth >= 8: - for _ in self.zero_or_one(): - current += self.fromClause() - - if self.unlexer.max_depth >= 4: - for _ in self.zero_or_one(): - current += self.sampleClause() - - if self.unlexer.max_depth >= 5: - for _ in self.zero_or_one(): - current += self.arrayJoinClause() - - if self.unlexer.max_depth >= 4: - for _ in self.zero_or_one(): - current += self.prewhereClause() - - if self.unlexer.max_depth >= 4: - for _ in self.zero_or_one(): - current += self.whereClause() - - if self.unlexer.max_depth >= 5: - for _ in self.zero_or_one(): - current += self.groupByClause() - - if self.unlexer.max_depth >= 4: - for _ in self.zero_or_one(): - current += self.havingClause() - - if self.unlexer.max_depth >= 6: - for _ in self.zero_or_one(): - current += self.orderByClause() - - if self.unlexer.max_depth >= 5: - for _ in self.zero_or_one(): - current += self.limitByClause() - - if self.unlexer.max_depth >= 4: - for _ in self.zero_or_one(): - current += self.limitClause() - - if self.unlexer.max_depth >= 6: - for _ in self.zero_or_one(): - current += self.settingsClause() - - return current - selectStmt.min_depth = 4 - - @depthcontrol - def withClause(self): - current = self.create_node(UnparserRule(name='withClause')) - current += self.unlexer.WITH() - current += self.columnExprList() - return current - withClause.min_depth = 4 - - @depthcontrol - def fromClause(self): - current = self.create_node(UnparserRule(name='fromClause')) - current += self.unlexer.FROM() - current += self.joinExpr() - if self.unlexer.max_depth >= 2: - for _ in self.zero_or_one(): - current += self.unlexer.FINAL() - - return current - fromClause.min_depth = 7 - - @depthcontrol - def sampleClause(self): - current = self.create_node(UnparserRule(name='sampleClause')) - current += self.unlexer.SAMPLE() - current += self.ratioExpr() - if self.unlexer.max_depth >= 3: - for _ in self.zero_or_one(): - current += self.unlexer.OFFSET() - current += self.ratioExpr() - - return current - sampleClause.min_depth = 3 - - @depthcontrol - def arrayJoinClause(self): - current = self.create_node(UnparserRule(name='arrayJoinClause')) - if self.unlexer.max_depth >= 2: - for _ in self.zero_or_one(): - current += self.unlexer.LEFT() - - current += self.unlexer.ARRAY() - current += self.unlexer.JOIN() - current += self.columnExprList() - return current 
- arrayJoinClause.min_depth = 4 - - @depthcontrol - def prewhereClause(self): - current = self.create_node(UnparserRule(name='prewhereClause')) - current += self.unlexer.PREWHERE() - current += self.columnExpr() - return current - prewhereClause.min_depth = 3 - - @depthcontrol - def whereClause(self): - current = self.create_node(UnparserRule(name='whereClause')) - current += self.unlexer.WHERE() - current += self.columnExpr() - return current - whereClause.min_depth = 3 - - @depthcontrol - def groupByClause(self): - current = self.create_node(UnparserRule(name='groupByClause')) - current += self.unlexer.GROUP() - current += self.unlexer.BY() - current += self.columnExprList() - if self.unlexer.max_depth >= 2: - for _ in self.zero_or_one(): - current += self.unlexer.WITH() - current += self.unlexer.TOTALS() - - return current - groupByClause.min_depth = 4 - - @depthcontrol - def havingClause(self): - current = self.create_node(UnparserRule(name='havingClause')) - current += self.unlexer.HAVING() - current += self.columnExpr() - return current - havingClause.min_depth = 3 - - @depthcontrol - def orderByClause(self): - current = self.create_node(UnparserRule(name='orderByClause')) - current += self.unlexer.ORDER() - current += self.unlexer.BY() - current += self.orderExprList() - return current - orderByClause.min_depth = 5 - - @depthcontrol - def limitByClause(self): - current = self.create_node(UnparserRule(name='limitByClause')) - current += self.unlexer.LIMIT() - current += self.limitExpr() - current += self.unlexer.BY() - current += self.columnExprList() - return current - limitByClause.min_depth = 4 - - @depthcontrol - def limitClause(self): - current = self.create_node(UnparserRule(name='limitClause')) - current += self.unlexer.LIMIT() - current += self.limitExpr() - return current - limitClause.min_depth = 3 - - @depthcontrol - def settingsClause(self): - current = self.create_node(UnparserRule(name='settingsClause')) - current += self.unlexer.SETTINGS() - current += self.settingExprList() - return current - settingsClause.min_depth = 5 - - @depthcontrol - def joinExpr(self): - current = self.create_node(UnparserRule(name='joinExpr')) - choice = self.choice([0 if [6, 8, 8, 8][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_181', i), 1) for i, w in enumerate([1, 1, 1, 1])]) - self.unlexer.weights[('alt_181', choice)] = self.unlexer.weights.get(('alt_181', choice), 1) * self.unlexer.cooldown - if choice == 0: - current = self.joinExpr_JoinExprTable() - elif choice == 1: - current = self.joinExpr_JoinExprParens() - elif choice == 2: - current = self.joinExpr_JoinExprOp() - elif choice == 3: - current = self.joinExpr_JoinExprCrossOp() - return current - joinExpr.min_depth = 6 - - @depthcontrol - def joinExpr_JoinExprTable(self): - current = self.create_node(UnparserRule(name='joinExpr_JoinExprTable')) - current += self.tableExpr() - return current - joinExpr_JoinExprTable.min_depth = 5 - - @depthcontrol - def joinExpr_JoinExprParens(self): - current = self.create_node(UnparserRule(name='joinExpr_JoinExprParens')) - current += self.unlexer.LPAREN() - current += self.joinExpr() - current += self.unlexer.RPAREN() - return current - joinExpr_JoinExprParens.min_depth = 7 - - @depthcontrol - def joinExpr_JoinExprOp(self): - current = self.create_node(UnparserRule(name='joinExpr_JoinExprOp')) - current += self.joinExpr() - if self.unlexer.max_depth >= 2: - for _ in self.zero_or_one(): - choice = self.choice([0 if [2, 2][i] > self.unlexer.max_depth else w * 
self.unlexer.weights.get(('alt_187', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_187', choice)] = self.unlexer.weights.get(('alt_187', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.GLOBAL() - elif choice == 1: - current += self.unlexer.LOCAL() - - current += self.joinOp() - current += self.unlexer.JOIN() - current += self.joinExpr() - current += self.joinConstraintClause() - return current - joinExpr_JoinExprOp.min_depth = 7 - - @depthcontrol - def joinExpr_JoinExprCrossOp(self): - current = self.create_node(UnparserRule(name='joinExpr_JoinExprCrossOp')) - current += self.joinExpr() - current += self.joinOpCross() - current += self.joinExpr() - return current - joinExpr_JoinExprCrossOp.min_depth = 7 - - @depthcontrol - def joinOp(self): - current = self.create_node(UnparserRule(name='joinOp')) - choice = self.choice([0 if [3, 3, 3][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_190', i), 1) for i, w in enumerate([1, 1, 1])]) - self.unlexer.weights[('alt_190', choice)] = self.unlexer.weights.get(('alt_190', choice), 1) * self.unlexer.cooldown - if choice == 0: - current = self.joinOp_JoinOpInner() - elif choice == 1: - current = self.joinOp_JoinOpLeftRight() - elif choice == 2: - current = self.joinOp_JoinOpFull() - return current - joinOp.min_depth = 3 - - @depthcontrol - def joinOp_JoinOpInner(self): - current = self.create_node(UnparserRule(name='joinOp_JoinOpInner')) - choice = self.choice([0 if [2, 2][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_194', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_194', choice)] = self.unlexer.weights.get(('alt_194', choice), 1) * self.unlexer.cooldown - if choice == 0: - if self.unlexer.max_depth >= 2: - for _ in self.zero_or_one(): - current += self.unlexer.ANY() - - current += self.unlexer.INNER() - elif choice == 1: - current += self.unlexer.INNER() - if self.unlexer.max_depth >= 2: - for _ in self.zero_or_one(): - current += self.unlexer.ANY() - - return current - joinOp_JoinOpInner.min_depth = 2 - - @depthcontrol - def joinOp_JoinOpLeftRight(self): - current = self.create_node(UnparserRule(name='joinOp_JoinOpLeftRight')) - choice = self.choice([0 if [2, 2][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_199', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_199', choice)] = self.unlexer.weights.get(('alt_199', choice), 1) * self.unlexer.cooldown - if choice == 0: - if self.unlexer.max_depth >= 2: - for _ in self.zero_or_one(): - choice = self.choice([0 if [2, 2, 2, 2, 2][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_203', i), 1) for i, w in enumerate([1, 1, 1, 1, 1])]) - self.unlexer.weights[('alt_203', choice)] = self.unlexer.weights.get(('alt_203', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.OUTER() - elif choice == 1: - current += self.unlexer.SEMI() - elif choice == 2: - current += self.unlexer.ANTI() - elif choice == 3: - current += self.unlexer.ANY() - elif choice == 4: - current += self.unlexer.ASOF() - - choice = self.choice([0 if [2, 2][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_209', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_209', choice)] = self.unlexer.weights.get(('alt_209', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.LEFT() - elif choice == 1: - current += self.unlexer.RIGHT() - elif choice == 1: - choice = self.choice([0 if [2, 
2][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_212', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_212', choice)] = self.unlexer.weights.get(('alt_212', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.LEFT() - elif choice == 1: - current += self.unlexer.RIGHT() - if self.unlexer.max_depth >= 2: - for _ in self.zero_or_one(): - choice = self.choice([0 if [2, 2, 2, 2, 2][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_216', i), 1) for i, w in enumerate([1, 1, 1, 1, 1])]) - self.unlexer.weights[('alt_216', choice)] = self.unlexer.weights.get(('alt_216', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.OUTER() - elif choice == 1: - current += self.unlexer.SEMI() - elif choice == 2: - current += self.unlexer.ANTI() - elif choice == 3: - current += self.unlexer.ANY() - elif choice == 4: - current += self.unlexer.ASOF() - - return current - joinOp_JoinOpLeftRight.min_depth = 2 - - @depthcontrol - def joinOp_JoinOpFull(self): - current = self.create_node(UnparserRule(name='joinOp_JoinOpFull')) - choice = self.choice([0 if [2, 2][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_222', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_222', choice)] = self.unlexer.weights.get(('alt_222', choice), 1) * self.unlexer.cooldown - if choice == 0: - if self.unlexer.max_depth >= 2: - for _ in self.zero_or_one(): - choice = self.choice([0 if [2, 2][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_226', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_226', choice)] = self.unlexer.weights.get(('alt_226', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.OUTER() - elif choice == 1: - current += self.unlexer.ANY() - - current += self.unlexer.FULL() - elif choice == 1: - current += self.unlexer.FULL() - if self.unlexer.max_depth >= 2: - for _ in self.zero_or_one(): - choice = self.choice([0 if [2, 2][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_230', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_230', choice)] = self.unlexer.weights.get(('alt_230', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.OUTER() - elif choice == 1: - current += self.unlexer.ANY() - - return current - joinOp_JoinOpFull.min_depth = 2 - - @depthcontrol - def joinOpCross(self): - current = self.create_node(UnparserRule(name='joinOpCross')) - choice = self.choice([0 if [2, 1][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_233', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_233', choice)] = self.unlexer.weights.get(('alt_233', choice), 1) * self.unlexer.cooldown - if choice == 0: - if self.unlexer.max_depth >= 2: - for _ in self.zero_or_one(): - choice = self.choice([0 if [2, 2][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_237', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_237', choice)] = self.unlexer.weights.get(('alt_237', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.GLOBAL() - elif choice == 1: - current += self.unlexer.LOCAL() - - current += self.unlexer.CROSS() - current += self.unlexer.JOIN() - elif choice == 1: - current += self.unlexer.COMMA() - return current - joinOpCross.min_depth = 1 - - @depthcontrol - def joinConstraintClause(self): - current = self.create_node(UnparserRule(name='joinConstraintClause')) 
- choice = self.choice([0 if [4, 4, 4][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_240', i), 1) for i, w in enumerate([1, 1, 1])]) - self.unlexer.weights[('alt_240', choice)] = self.unlexer.weights.get(('alt_240', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.ON() - current += self.columnExprList() - elif choice == 1: - current += self.unlexer.USING() - current += self.unlexer.LPAREN() - current += self.columnExprList() - current += self.unlexer.RPAREN() - elif choice == 2: - current += self.unlexer.USING() - current += self.columnExprList() - return current - joinConstraintClause.min_depth = 4 - - @depthcontrol - def limitExpr(self): - current = self.create_node(UnparserRule(name='limitExpr')) - current += self.unlexer.INTEGER_LITERAL() - if self.unlexer.max_depth >= 2: - for _ in self.zero_or_one(): - choice = self.choice([0 if [1, 2][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_245', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_245', choice)] = self.unlexer.weights.get(('alt_245', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.COMMA() - elif choice == 1: - current += self.unlexer.OFFSET() - current += self.unlexer.INTEGER_LITERAL() - - return current - limitExpr.min_depth = 2 - - @depthcontrol - def orderExprList(self): - current = self.create_node(UnparserRule(name='orderExprList')) - current += self.orderExpr() - if self.unlexer.max_depth >= 4: - for _ in self.zero_or_more(): - current += self.unlexer.COMMA() - current += self.orderExpr() - - return current - orderExprList.min_depth = 4 - - @depthcontrol - def orderExpr(self): - current = self.create_node(UnparserRule(name='orderExpr')) - current += self.columnExpr() - if self.unlexer.max_depth >= 2: - for _ in self.zero_or_one(): - choice = self.choice([0 if [2, 2][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_250', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_250', choice)] = self.unlexer.weights.get(('alt_250', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.ASCENDING() - elif choice == 1: - current += self.unlexer.DESCENDING() - - if self.unlexer.max_depth >= 2: - for _ in self.zero_or_one(): - current += self.unlexer.NULLS() - choice = self.choice([0 if [2, 2][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_254', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_254', choice)] = self.unlexer.weights.get(('alt_254', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.FIRST() - elif choice == 1: - current += self.unlexer.LAST() - - if self.unlexer.max_depth >= 2: - for _ in self.zero_or_one(): - current += self.unlexer.COLLATE() - current += self.unlexer.STRING_LITERAL() - - return current - orderExpr.min_depth = 3 - - @depthcontrol - def ratioExpr(self): - current = self.create_node(UnparserRule(name='ratioExpr')) - current += self.unlexer.INTEGER_LITERAL() - current += self.unlexer.SLASH() - current += self.unlexer.INTEGER_LITERAL() - return current - ratioExpr.min_depth = 2 - - @depthcontrol - def settingExprList(self): - current = self.create_node(UnparserRule(name='settingExprList')) - current += self.settingExpr() - if self.unlexer.max_depth >= 4: - for _ in self.zero_or_more(): - current += self.unlexer.COMMA() - current += self.settingExpr() - - return current - settingExprList.min_depth = 4 - - @depthcontrol - def settingExpr(self): - current = 
self.create_node(UnparserRule(name='settingExpr')) - current += self.identifier() - current += self.unlexer.EQ_SINGLE() - current += self.literal() - return current - settingExpr.min_depth = 3 - - @depthcontrol - def setStmt(self): - current = self.create_node(UnparserRule(name='setStmt')) - current += self.unlexer.SET() - current += self.settingExprList() - return current - setStmt.min_depth = 5 - - @depthcontrol - def valueExprList(self): - current = self.create_node(UnparserRule(name='valueExprList')) - current += self.valueExpr() - if self.unlexer.max_depth >= 3: - for _ in self.zero_or_more(): - current += self.unlexer.COMMA() - current += self.valueExpr() - - return current - valueExprList.min_depth = 3 - - @depthcontrol - def valueExpr(self): - current = self.create_node(UnparserRule(name='valueExpr')) - choice = self.choice([0 if [4, 6, 2][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_260', i), 1) for i, w in enumerate([1, 1, 1])]) - self.unlexer.weights[('alt_260', choice)] = self.unlexer.weights.get(('alt_260', choice), 1) * self.unlexer.cooldown - if choice == 0: - current = self.valueExpr_ValueExprLiteral() - elif choice == 1: - current = self.valueExpr_ValueExprTuple() - elif choice == 2: - current = self.valueExpr_ValueExprArray() - return current - valueExpr.min_depth = 2 - - @depthcontrol - def valueExpr_ValueExprLiteral(self): - current = self.create_node(UnparserRule(name='valueExpr_ValueExprLiteral')) - current += self.literal() - return current - valueExpr_ValueExprLiteral.min_depth = 3 - - @depthcontrol - def valueExpr_ValueExprTuple(self): - current = self.create_node(UnparserRule(name='valueExpr_ValueExprTuple')) - current += self.valueTupleExpr() - return current - valueExpr_ValueExprTuple.min_depth = 5 - - @depthcontrol - def valueExpr_ValueExprArray(self): - current = self.create_node(UnparserRule(name='valueExpr_ValueExprArray')) - current += self.unlexer.LBRACKET() - if self.unlexer.max_depth >= 4: - for _ in self.zero_or_one(): - current += self.valueExprList() - - current += self.unlexer.RBRACKET() - return current - valueExpr_ValueExprArray.min_depth = 1 - - @depthcontrol - def columnTypeExpr(self): - current = self.create_node(UnparserRule(name='columnTypeExpr')) - choice = self.choice([0 if [4, 5, 4, 6][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_265', i), 1) for i, w in enumerate([1, 1, 1, 1])]) - self.unlexer.weights[('alt_265', choice)] = self.unlexer.weights.get(('alt_265', choice), 1) * self.unlexer.cooldown - if choice == 0: - current = self.columnTypeExpr_ColumnTypeExprSimple() - elif choice == 1: - current = self.columnTypeExpr_ColumnTypeExprParam() - elif choice == 2: - current = self.columnTypeExpr_ColumnTypeExprEnum() - elif choice == 3: - current = self.columnTypeExpr_ColumnTypeExprComplex() - return current - columnTypeExpr.min_depth = 4 - - @depthcontrol - def columnTypeExpr_ColumnTypeExprSimple(self): - current = self.create_node(UnparserRule(name='columnTypeExpr_ColumnTypeExprSimple')) - current += self.identifier() - return current - columnTypeExpr_ColumnTypeExprSimple.min_depth = 3 - - @depthcontrol - def columnTypeExpr_ColumnTypeExprParam(self): - current = self.create_node(UnparserRule(name='columnTypeExpr_ColumnTypeExprParam')) - current += self.identifier() - current += self.unlexer.LPAREN() - current += self.columnParamList() - current += self.unlexer.RPAREN() - return current - columnTypeExpr_ColumnTypeExprParam.min_depth = 4 - - @depthcontrol - def 
columnTypeExpr_ColumnTypeExprEnum(self): - current = self.create_node(UnparserRule(name='columnTypeExpr_ColumnTypeExprEnum')) - current += self.identifier() - current += self.unlexer.LPAREN() - current += self.enumValue() - if self.unlexer.max_depth >= 3: - for _ in self.zero_or_more(): - current += self.unlexer.COMMA() - current += self.enumValue() - - current += self.unlexer.RPAREN() - return current - columnTypeExpr_ColumnTypeExprEnum.min_depth = 3 - - @depthcontrol - def columnTypeExpr_ColumnTypeExprComplex(self): - current = self.create_node(UnparserRule(name='columnTypeExpr_ColumnTypeExprComplex')) - current += self.identifier() - current += self.unlexer.LPAREN() - current += self.columnTypeExpr() - if self.unlexer.max_depth >= 5: - for _ in self.zero_or_more(): - current += self.unlexer.COMMA() - current += self.columnTypeExpr() - - current += self.unlexer.RPAREN() - return current - columnTypeExpr_ColumnTypeExprComplex.min_depth = 5 - - @depthcontrol - def columnExprList(self): - current = self.create_node(UnparserRule(name='columnExprList')) - current += self.columnExpr() - if self.unlexer.max_depth >= 3: - for _ in self.zero_or_more(): - current += self.unlexer.COMMA() - current += self.columnExpr() - - return current - columnExprList.min_depth = 3 - - @depthcontrol - def columnExpr(self): - current = self.create_node(UnparserRule(name='columnExpr')) - choice = self.choice([0 if [4, 2, 5, 2, 4, 4, 4, 4, 5, 4, 4, 4, 4, 4, 4, 4, 4, 4][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_273', i), 1) for i, w in enumerate([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])]) - self.unlexer.weights[('alt_273', choice)] = self.unlexer.weights.get(('alt_273', choice), 1) * self.unlexer.cooldown - if choice == 0: - current = self.columnExpr_ColumnExprLiteral() - elif choice == 1: - current = self.columnExpr_ColumnExprAsterisk() - elif choice == 2: - current = self.columnExpr_ColumnExprTuple() - elif choice == 3: - current = self.columnExpr_ColumnExprArray() - elif choice == 4: - current = self.columnExpr_ColumnExprCase() - elif choice == 5: - current = self.columnExpr_ColumnExprExtract() - elif choice == 6: - current = self.columnExpr_ColumnExprTrim() - elif choice == 7: - current = self.columnExpr_ColumnExprInterval() - elif choice == 8: - current = self.columnExpr_ColumnExprIdentifier() - elif choice == 9: - current = self.columnExpr_ColumnExprFunction() - elif choice == 10: - current = self.columnExpr_ColumnExprArrayAccess() - elif choice == 11: - current = self.columnExpr_ColumnExprTupleAccess() - elif choice == 12: - current = self.columnExpr_ColumnExprUnaryOp() - elif choice == 13: - current = self.columnExpr_ColumnExprIsNull() - elif choice == 14: - current = self.columnExpr_ColumnExprBinaryOp() - elif choice == 15: - current = self.columnExpr_ColumnExprTernaryOp() - elif choice == 16: - current = self.columnExpr_ColumnExprBetween() - elif choice == 17: - current = self.columnExpr_ColumnExprAlias() - return current - columnExpr.min_depth = 2 - - @depthcontrol - def columnExpr_ColumnExprLiteral(self): - current = self.create_node(UnparserRule(name='columnExpr_ColumnExprLiteral')) - current += self.literal() - return current - columnExpr_ColumnExprLiteral.min_depth = 3 - - @depthcontrol - def columnExpr_ColumnExprAsterisk(self): - current = self.create_node(UnparserRule(name='columnExpr_ColumnExprAsterisk')) - current += self.unlexer.ASTERISK() - return current - columnExpr_ColumnExprAsterisk.min_depth = 1 - - @depthcontrol - def columnExpr_ColumnExprTuple(self): 
- current = self.create_node(UnparserRule(name='columnExpr_ColumnExprTuple')) - current += self.unlexer.LPAREN() - current += self.columnExprList() - current += self.unlexer.RPAREN() - return current - columnExpr_ColumnExprTuple.min_depth = 4 - - @depthcontrol - def columnExpr_ColumnExprArray(self): - current = self.create_node(UnparserRule(name='columnExpr_ColumnExprArray')) - current += self.unlexer.LBRACKET() - if self.unlexer.max_depth >= 4: - for _ in self.zero_or_one(): - current += self.columnExprList() - - current += self.unlexer.RBRACKET() - return current - columnExpr_ColumnExprArray.min_depth = 1 - - @depthcontrol - def columnExpr_ColumnExprCase(self): - current = self.create_node(UnparserRule(name='columnExpr_ColumnExprCase')) - current += self.unlexer.CASE() - if self.unlexer.max_depth >= 3: - for _ in self.zero_or_one(): - current += self.columnExpr() - - if self.unlexer.max_depth >= 0: - for _ in self.one_or_more(): - current += self.unlexer.WHEN() - current += self.columnExpr() - current += self.unlexer.THEN() - current += self.columnExpr() - - if self.unlexer.max_depth >= 3: - for _ in self.zero_or_one(): - current += self.unlexer.ELSE() - current += self.columnExpr() - - current += self.unlexer.END() - return current - columnExpr_ColumnExprCase.min_depth = 3 - - @depthcontrol - def columnExpr_ColumnExprExtract(self): - current = self.create_node(UnparserRule(name='columnExpr_ColumnExprExtract')) - current += self.unlexer.EXTRACT() - current += self.unlexer.LPAREN() - current += self.unlexer.INTERVAL_TYPE() - current += self.unlexer.FROM() - current += self.columnExpr() - current += self.unlexer.RPAREN() - return current - columnExpr_ColumnExprExtract.min_depth = 3 - - @depthcontrol - def columnExpr_ColumnExprTrim(self): - current = self.create_node(UnparserRule(name='columnExpr_ColumnExprTrim')) - current += self.unlexer.TRIM() - current += self.unlexer.LPAREN() - choice = self.choice([0 if [2, 2, 2][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_295', i), 1) for i, w in enumerate([1, 1, 1])]) - self.unlexer.weights[('alt_295', choice)] = self.unlexer.weights.get(('alt_295', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.BOTH() - elif choice == 1: - current += self.unlexer.LEADING() - elif choice == 2: - current += self.unlexer.TRAILING() - current += self.unlexer.STRING_LITERAL() - current += self.unlexer.FROM() - current += self.columnExpr() - current += self.unlexer.RPAREN() - return current - columnExpr_ColumnExprTrim.min_depth = 3 - - @depthcontrol - def columnExpr_ColumnExprInterval(self): - current = self.create_node(UnparserRule(name='columnExpr_ColumnExprInterval')) - current += self.unlexer.INTERVAL() - current += self.columnExpr() - current += self.unlexer.INTERVAL_TYPE() - return current - columnExpr_ColumnExprInterval.min_depth = 3 - - @depthcontrol - def columnExpr_ColumnExprIdentifier(self): - current = self.create_node(UnparserRule(name='columnExpr_ColumnExprIdentifier')) - current += self.columnIdentifier() - return current - columnExpr_ColumnExprIdentifier.min_depth = 4 - - @depthcontrol - def columnExpr_ColumnExprFunction(self): - current = self.create_node(UnparserRule(name='columnExpr_ColumnExprFunction')) - current += self.identifier() - if self.unlexer.max_depth >= 1: - for _ in self.zero_or_one(): - current += self.unlexer.LPAREN() - if self.unlexer.max_depth >= 4: - for _ in self.zero_or_one(): - current += self.columnParamList() - - current += self.unlexer.RPAREN() - - current += 
self.unlexer.LPAREN() - if self.unlexer.max_depth >= 5: - for _ in self.zero_or_one(): - current += self.columnArgList() - - current += self.unlexer.RPAREN() - return current - columnExpr_ColumnExprFunction.min_depth = 3 - - @depthcontrol - def columnExpr_ColumnExprArrayAccess(self): - current = self.create_node(UnparserRule(name='columnExpr_ColumnExprArrayAccess')) - current += self.columnExpr() - current += self.unlexer.LBRACKET() - current += self.columnExpr() - current += self.unlexer.RBRACKET() - return current - columnExpr_ColumnExprArrayAccess.min_depth = 3 - - @depthcontrol - def columnExpr_ColumnExprTupleAccess(self): - current = self.create_node(UnparserRule(name='columnExpr_ColumnExprTupleAccess')) - current += self.columnExpr() - current += self.unlexer.DOT() - current += self.unlexer.INTEGER_LITERAL() - return current - columnExpr_ColumnExprTupleAccess.min_depth = 3 - - @depthcontrol - def columnExpr_ColumnExprUnaryOp(self): - current = self.create_node(UnparserRule(name='columnExpr_ColumnExprUnaryOp')) - current += self.unaryOp() - current += self.columnExpr() - return current - columnExpr_ColumnExprUnaryOp.min_depth = 3 - - @depthcontrol - def columnExpr_ColumnExprIsNull(self): - current = self.create_node(UnparserRule(name='columnExpr_ColumnExprIsNull')) - current += self.columnExpr() - current += self.unlexer.IS() - if self.unlexer.max_depth >= 2: - for _ in self.zero_or_one(): - current += self.unlexer.NOT() - - current += self.unlexer.NULL_SQL() - return current - columnExpr_ColumnExprIsNull.min_depth = 3 - - @depthcontrol - def columnExpr_ColumnExprBinaryOp(self): - current = self.create_node(UnparserRule(name='columnExpr_ColumnExprBinaryOp')) - current += self.columnExpr() - current += self.binaryOp() - current += self.columnExpr() - return current - columnExpr_ColumnExprBinaryOp.min_depth = 3 - - @depthcontrol - def columnExpr_ColumnExprTernaryOp(self): - current = self.create_node(UnparserRule(name='columnExpr_ColumnExprTernaryOp')) - current += self.columnExpr() - current += self.unlexer.QUERY() - current += self.columnExpr() - current += self.unlexer.COLON() - current += self.columnExpr() - return current - columnExpr_ColumnExprTernaryOp.min_depth = 3 - - @depthcontrol - def columnExpr_ColumnExprBetween(self): - current = self.create_node(UnparserRule(name='columnExpr_ColumnExprBetween')) - current += self.columnExpr() - if self.unlexer.max_depth >= 2: - for _ in self.zero_or_one(): - current += self.unlexer.NOT() - - current += self.unlexer.BETWEEN() - current += self.columnExpr() - current += self.unlexer.AND() - current += self.columnExpr() - return current - columnExpr_ColumnExprBetween.min_depth = 3 - - @depthcontrol - def columnExpr_ColumnExprAlias(self): - current = self.create_node(UnparserRule(name='columnExpr_ColumnExprAlias')) - current += self.columnExpr() - current += self.unlexer.AS() - current += self.identifier() - return current - columnExpr_ColumnExprAlias.min_depth = 3 - - @depthcontrol - def columnParamList(self): - current = self.create_node(UnparserRule(name='columnParamList')) - current += self.literal() - if self.unlexer.max_depth >= 3: - for _ in self.zero_or_more(): - current += self.unlexer.COMMA() - current += self.literal() - - return current - columnParamList.min_depth = 3 - - @depthcontrol - def columnArgList(self): - current = self.create_node(UnparserRule(name='columnArgList')) - current += self.columnArgExpr() - if self.unlexer.max_depth >= 4: - for _ in self.zero_or_more(): - current += self.unlexer.COMMA() - current += 
self.columnArgExpr() - - return current - columnArgList.min_depth = 4 - - @depthcontrol - def columnArgExpr(self): - current = self.create_node(UnparserRule(name='columnArgExpr')) - choice = self.choice([0 if [4, 3][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_306', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_306', choice)] = self.unlexer.weights.get(('alt_306', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.columnLambdaExpr() - elif choice == 1: - current += self.columnExpr() - return current - columnArgExpr.min_depth = 3 - - @depthcontrol - def columnLambdaExpr(self): - current = self.create_node(UnparserRule(name='columnLambdaExpr')) - choice = self.choice([0 if [3, 3][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_309', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_309', choice)] = self.unlexer.weights.get(('alt_309', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.LPAREN() - current += self.identifier() - if self.unlexer.max_depth >= 3: - for _ in self.zero_or_more(): - current += self.unlexer.COMMA() - current += self.identifier() - - current += self.unlexer.RPAREN() - elif choice == 1: - current += self.identifier() - if self.unlexer.max_depth >= 3: - for _ in self.zero_or_more(): - current += self.unlexer.COMMA() - current += self.identifier() - - current += self.unlexer.ARROW() - current += self.columnExpr() - return current - columnLambdaExpr.min_depth = 3 - - @depthcontrol - def columnIdentifier(self): - current = self.create_node(UnparserRule(name='columnIdentifier')) - if self.unlexer.max_depth >= 4: - for _ in self.zero_or_one(): - current += self.tableIdentifier() - current += self.unlexer.DOT() - - current += self.identifier() - return current - columnIdentifier.min_depth = 3 - - @depthcontrol - def tableExpr(self): - current = self.create_node(UnparserRule(name='tableExpr')) - choice = self.choice([0 if [5, 4, 7, 6][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_315', i), 1) for i, w in enumerate([1, 1, 1, 1])]) - self.unlexer.weights[('alt_315', choice)] = self.unlexer.weights.get(('alt_315', choice), 1) * self.unlexer.cooldown - if choice == 0: - current = self.tableExpr_TableExprIdentifier() - elif choice == 1: - current = self.tableExpr_TableExprFunction() - elif choice == 2: - current = self.tableExpr_TableExprSubquery() - elif choice == 3: - current = self.tableExpr_TableExprAlias() - return current - tableExpr.min_depth = 4 - - @depthcontrol - def tableExpr_TableExprIdentifier(self): - current = self.create_node(UnparserRule(name='tableExpr_TableExprIdentifier')) - current += self.tableIdentifier() - return current - tableExpr_TableExprIdentifier.min_depth = 4 - - @depthcontrol - def tableExpr_TableExprFunction(self): - current = self.create_node(UnparserRule(name='tableExpr_TableExprFunction')) - current += self.identifier() - current += self.unlexer.LPAREN() - if self.unlexer.max_depth >= 5: - for _ in self.zero_or_one(): - current += self.tableArgList() - - current += self.unlexer.RPAREN() - return current - tableExpr_TableExprFunction.min_depth = 3 - - @depthcontrol - def tableExpr_TableExprSubquery(self): - current = self.create_node(UnparserRule(name='tableExpr_TableExprSubquery')) - current += self.unlexer.LPAREN() - current += self.selectUnionStmt() - current += self.unlexer.RPAREN() - return current - tableExpr_TableExprSubquery.min_depth = 6 - - @depthcontrol - def 
tableExpr_TableExprAlias(self): - current = self.create_node(UnparserRule(name='tableExpr_TableExprAlias')) - current += self.tableExpr() - current += self.unlexer.AS() - current += self.identifier() - return current - tableExpr_TableExprAlias.min_depth = 5 - - @depthcontrol - def tableIdentifier(self): - current = self.create_node(UnparserRule(name='tableIdentifier')) - if self.unlexer.max_depth >= 4: - for _ in self.zero_or_one(): - current += self.databaseIdentifier() - current += self.unlexer.DOT() - - current += self.identifier() - return current - tableIdentifier.min_depth = 3 - - @depthcontrol - def tableArgList(self): - current = self.create_node(UnparserRule(name='tableArgList')) - current += self.tableArgExpr() - if self.unlexer.max_depth >= 4: - for _ in self.zero_or_more(): - current += self.unlexer.COMMA() - current += self.tableArgExpr() - - return current - tableArgList.min_depth = 4 - - @depthcontrol - def tableArgExpr(self): - current = self.create_node(UnparserRule(name='tableArgExpr')) - choice = self.choice([0 if [3, 4][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_323', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_323', choice)] = self.unlexer.weights.get(('alt_323', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.literal() - elif choice == 1: - current += self.tableIdentifier() - return current - tableArgExpr.min_depth = 3 - - @depthcontrol - def databaseIdentifier(self): - current = self.create_node(UnparserRule(name='databaseIdentifier')) - current += self.identifier() - return current - databaseIdentifier.min_depth = 3 - - @depthcontrol - def literal(self): - current = self.create_node(UnparserRule(name='literal')) - choice = self.choice([0 if [2, 2, 2][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_326', i), 1) for i, w in enumerate([1, 1, 1])]) - self.unlexer.weights[('alt_326', choice)] = self.unlexer.weights.get(('alt_326', choice), 1) * self.unlexer.cooldown - if choice == 0: - if self.unlexer.max_depth >= 1: - for _ in self.zero_or_one(): - choice = self.choice([0 if [1, 1][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_331', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_331', choice)] = self.unlexer.weights.get(('alt_331', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.PLUS() - elif choice == 1: - current += self.unlexer.DASH() - - choice = self.choice([0 if [3, 2, 2, 2, 2][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_334', i), 1) for i, w in enumerate([1, 1, 1, 1, 1])]) - self.unlexer.weights[('alt_334', choice)] = self.unlexer.weights.get(('alt_334', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.FLOATING_LITERAL() - elif choice == 1: - current += self.unlexer.HEXADECIMAL_LITERAL() - elif choice == 2: - current += self.unlexer.INTEGER_LITERAL() - elif choice == 3: - current += self.unlexer.INF() - elif choice == 4: - current += self.unlexer.NAN_SQL() - elif choice == 1: - current += self.unlexer.STRING_LITERAL() - elif choice == 2: - current += self.unlexer.NULL_SQL() - return current - literal.min_depth = 2 - - @depthcontrol - def keyword(self): - current = self.create_node(UnparserRule(name='keyword')) - choice = self.choice([0 if [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_340', i), 1) for i, w in enumerate([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])]) - self.unlexer.weights[('alt_340', choice)] = self.unlexer.weights.get(('alt_340', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.ALIAS() - elif choice == 1: - current += self.unlexer.ALL() - elif choice == 2: - current += self.unlexer.AND() - elif choice == 3: - current += self.unlexer.ANTI() - elif choice == 4: - current += self.unlexer.ANY() - elif choice == 5: - current += self.unlexer.ARRAY() - elif choice == 6: - current += self.unlexer.AS() - elif choice == 7: - current += self.unlexer.ASCENDING() - elif choice == 8: - current += self.unlexer.ASOF() - elif choice == 9: - current += self.unlexer.BETWEEN() - elif choice == 10: - current += self.unlexer.BOTH() - elif choice == 11: - current += self.unlexer.BY() - elif choice == 12: - current += self.unlexer.CASE() - elif choice == 13: - current += self.unlexer.CAST() - elif choice == 14: - current += self.unlexer.CLUSTER() - elif choice == 15: - current += self.unlexer.COLLATE() - elif choice == 16: - current += self.unlexer.CREATE() - elif choice == 17: - current += self.unlexer.CROSS() - elif choice == 18: - current += self.unlexer.DAY() - elif choice == 19: - current += self.unlexer.DATABASE() - elif choice == 20: - current += self.unlexer.DEFAULT() - elif choice == 21: - current += self.unlexer.DELETE() - elif choice == 22: - current += self.unlexer.DESCENDING() - elif choice == 23: - current += self.unlexer.DISK() - elif choice == 24: - current += self.unlexer.DISTINCT() - elif choice == 25: - current += self.unlexer.DROP() - elif choice == 26: - current += self.unlexer.ELSE() - elif choice == 27: - current += self.unlexer.END() - elif choice == 28: - current += self.unlexer.ENGINE() - elif choice == 29: - current += self.unlexer.EXISTS() - elif choice == 30: - current += self.unlexer.EXTRACT() - elif choice == 31: - current += self.unlexer.FINAL() - elif choice == 32: - current += self.unlexer.FIRST() - elif choice == 33: - current += self.unlexer.FORMAT() - elif choice == 34: - current += self.unlexer.FROM() - elif choice == 35: - current += self.unlexer.FULL() - elif choice == 36: - current += self.unlexer.GLOBAL() - elif choice == 37: - current += self.unlexer.GROUP() - elif choice == 38: - current += self.unlexer.HAVING() - elif choice == 39: - current += self.unlexer.HOUR() - elif choice == 40: - current += self.unlexer.IF() - elif choice == 41: - current += self.unlexer.IN() - elif choice == 42: - current += self.unlexer.INNER() - elif choice == 43: - current += self.unlexer.INSERT() - elif choice == 44: - current += self.unlexer.INTERVAL() - elif choice == 45: - current += self.unlexer.INTO() - elif choice == 46: - current += self.unlexer.IS() - elif choice == 47: - current += self.unlexer.JOIN() - elif choice == 48: - current += self.unlexer.KEY() - elif choice == 49: - current += self.unlexer.LAST() - elif choice == 50: - current += self.unlexer.LEADING() - elif choice == 51: - current += self.unlexer.LEFT() - elif choice == 52: - current += self.unlexer.LIKE() - elif choice == 53: - current += self.unlexer.LIMIT() - elif choice == 54: - 
current += self.unlexer.LOCAL() - elif choice == 55: - current += self.unlexer.MATERIALIZED() - elif choice == 56: - current += self.unlexer.MINUTE() - elif choice == 57: - current += self.unlexer.MONTH() - elif choice == 58: - current += self.unlexer.NOT() - elif choice == 59: - current += self.unlexer.NULLS() - elif choice == 60: - current += self.unlexer.OFFSET() - elif choice == 61: - current += self.unlexer.ON() - elif choice == 62: - current += self.unlexer.OR() - elif choice == 63: - current += self.unlexer.ORDER() - elif choice == 64: - current += self.unlexer.OUTER() - elif choice == 65: - current += self.unlexer.OUTFILE() - elif choice == 66: - current += self.unlexer.PARTITION() - elif choice == 67: - current += self.unlexer.PREWHERE() - elif choice == 68: - current += self.unlexer.PRIMARY() - elif choice == 69: - current += self.unlexer.QUARTER() - elif choice == 70: - current += self.unlexer.RIGHT() - elif choice == 71: - current += self.unlexer.SAMPLE() - elif choice == 72: - current += self.unlexer.SECOND() - elif choice == 73: - current += self.unlexer.SEMI() - elif choice == 74: - current += self.unlexer.SET() - elif choice == 75: - current += self.unlexer.SETTINGS() - elif choice == 76: - current += self.unlexer.TABLE() - elif choice == 77: - current += self.unlexer.TEMPORARY() - elif choice == 78: - current += self.unlexer.THEN() - elif choice == 79: - current += self.unlexer.TOTALS() - elif choice == 80: - current += self.unlexer.TRAILING() - elif choice == 81: - current += self.unlexer.TRIM() - elif choice == 82: - current += self.unlexer.TO() - elif choice == 83: - current += self.unlexer.TTL() - elif choice == 84: - current += self.unlexer.UNION() - elif choice == 85: - current += self.unlexer.USING() - elif choice == 86: - current += self.unlexer.VALUES() - elif choice == 87: - current += self.unlexer.VOLUME() - elif choice == 88: - current += self.unlexer.WEEK() - elif choice == 89: - current += self.unlexer.WHEN() - elif choice == 90: - current += self.unlexer.WHERE() - elif choice == 91: - current += self.unlexer.WITH() - elif choice == 92: - current += self.unlexer.YEAR() - return current - keyword.min_depth = 2 - - @depthcontrol - def identifier(self): - current = self.create_node(UnparserRule(name='identifier')) - choice = self.choice([0 if [2, 3, 3][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_434', i), 1) for i, w in enumerate([1, 1, 1])]) - self.unlexer.weights[('alt_434', choice)] = self.unlexer.weights.get(('alt_434', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.IDENTIFIER() - elif choice == 1: - current += self.unlexer.INTERVAL_TYPE() - elif choice == 2: - current += self.keyword() - return current - identifier.min_depth = 2 - - @depthcontrol - def unaryOp(self): - current = self.create_node(UnparserRule(name='unaryOp')) - choice = self.choice([0 if [1, 2][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_438', i), 1) for i, w in enumerate([1, 1])]) - self.unlexer.weights[('alt_438', choice)] = self.unlexer.weights.get(('alt_438', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.DASH() - elif choice == 1: - current += self.unlexer.NOT() - return current - unaryOp.min_depth = 1 - - @depthcontrol - def binaryOp(self): - current = self.create_node(UnparserRule(name='binaryOp')) - choice = self.choice([0 if [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2][i] > self.unlexer.max_depth else w * self.unlexer.weights.get(('alt_441', i), 1) for i, w in 
enumerate([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])]) - self.unlexer.weights[('alt_441', choice)] = self.unlexer.weights.get(('alt_441', choice), 1) * self.unlexer.cooldown - if choice == 0: - current += self.unlexer.CONCAT() - elif choice == 1: - current += self.unlexer.ASTERISK() - elif choice == 2: - current += self.unlexer.SLASH() - elif choice == 3: - current += self.unlexer.PLUS() - elif choice == 4: - current += self.unlexer.DASH() - elif choice == 5: - current += self.unlexer.PERCENT() - elif choice == 6: - current += self.unlexer.EQ_DOUBLE() - elif choice == 7: - current += self.unlexer.EQ_SINGLE() - elif choice == 8: - current += self.unlexer.NOT_EQ() - elif choice == 9: - current += self.unlexer.LE() - elif choice == 10: - current += self.unlexer.GE() - elif choice == 11: - current += self.unlexer.LT() - elif choice == 12: - current += self.unlexer.GT() - elif choice == 13: - current += self.unlexer.AND() - elif choice == 14: - current += self.unlexer.OR() - elif choice == 15: - if self.unlexer.max_depth >= 2: - for _ in self.zero_or_one(): - current += self.unlexer.NOT() - - current += self.unlexer.LIKE() - elif choice == 16: - if self.unlexer.max_depth >= 2: - for _ in self.zero_or_one(): - current += self.unlexer.GLOBAL() - - if self.unlexer.max_depth >= 2: - for _ in self.zero_or_one(): - current += self.unlexer.NOT() - - current += self.unlexer.IN() - return current - binaryOp.min_depth = 1 - - @depthcontrol - def enumValue(self): - current = self.create_node(UnparserRule(name='enumValue')) - current += self.unlexer.STRING_LITERAL() - current += self.unlexer.EQ_SINGLE() - current += self.unlexer.INTEGER_LITERAL() - return current - enumValue.min_depth = 2 - - default_rule = queryList - diff --git a/utils/grammar-fuzzer/README.md b/utils/grammar-fuzzer/README.md deleted file mode 100644 index b3f233c8648..00000000000 --- a/utils/grammar-fuzzer/README.md +++ /dev/null @@ -1,41 +0,0 @@ -How to use the fuzzer -=== - -The fuzzer consists of two auto-generated files: - - ClickHouseUnlexer.py - ClickHouseUnparser.py - -They are generated from the grammar files (.g4) using Grammarinator: - - pip3 install grammarinator - grammarinator-process ClickHouseLexer.g4 ClickHouseParser.g4 -o fuzzer/ - -Then you can generate test input for the ClickHouse client: - - cd fuzzer - grammarinator-generate \ - -r query_list \ # top-level rule - -o /tmp/sql_test_%d.sql \ # template for output test names - -n 10 \ # number of tests - -c 0.3 \ # cooldown factor - -d 20 \ # depth of recursion - -p ClickHouseUnparser.py -l ClickHouseUnlexer.py \ # auto-generated unparser and unlexer - --test-transformers SpaceTransformer.single_line_whitespace \ # transform function to insert whitespace - -For more details see `grammarinator-generate --help`. `SpaceTransformer.multi_line_whitespace` can also be used as a test-transformer function; both functions reside in the `fuzzer/SpaceTransformer.py` file. - - -Parsing steps -=== - -1. Replace all operators with corresponding functions. -2. Replace all asterisks with columns; if an asterisk is inside a function call, expand it into multiple arguments. Warn about nondeterministic invocations when functions have positional arguments. - -Old vs. new parser -=== - -- `a as b [c]` - accessing an aliased array expression is not possible. - `a as b . 1` - accessing an aliased tuple expression is not possible. - `between a is not null and b` - the `between` operator should have lower priority than `is null`. - `*.1` - accessing an asterisk tuple expression is not possible.
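The deleted README stops once the `.sql` test files are on disk. As a rough sketch of how its pieces chain together, the hypothetical Python driver below runs the documented `grammarinator-generate` invocation and then replays each generated file; the `clickhouse-client --multiquery` replay step and the `/tmp` layout are assumptions made for illustration, not something the README specified.

```python
# Hypothetical end-to-end driver for the pipeline in the deleted README.
# Assumes grammarinator is installed and clickhouse-client is on PATH.
import glob
import subprocess

# Generate ten test files, mirroring the README's invocation.
subprocess.run(
    [
        "grammarinator-generate",
        "-r", "query_list",            # top-level rule
        "-o", "/tmp/sql_test_%d.sql",  # template for output test names
        "-n", "10",                    # number of tests
        "-c", "0.3",                   # cooldown factor
        "-d", "20",                    # depth of recursion
        "-p", "ClickHouseUnparser.py", "-l", "ClickHouseUnlexer.py",
        "--test-transformers", "SpaceTransformer.single_line_whitespace",
    ],
    check=True,
)

# Replay each generated file; a crash or abnormal exit code is the
# kind of signal this fuzzer exists to surface.
for path in sorted(glob.glob("/tmp/sql_test_*.sql")):
    with open(path) as sql_file:
        result = subprocess.run(["clickhouse-client", "--multiquery"], stdin=sql_file)
    print(path, "->", result.returncode)
```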
diff --git a/utils/grammar-fuzzer/SpaceTransformer.py b/utils/grammar-fuzzer/SpaceTransformer.py deleted file mode 100644 index ad96845c7e2..00000000000 --- a/utils/grammar-fuzzer/SpaceTransformer.py +++ /dev/null @@ -1,38 +0,0 @@ -# -*- coding: utf-8 -*- - -from grammarinator.runtime.tree import * - -from itertools import tee, islice, zip_longest -import random - - -def single_line_whitespace(node): - return _whitespace(node, ' \t') - - -def multi_line_whitespace(node): - return _whitespace(node, ' \t\r\n') - - -def _whitespace(node, symbols): - for child in node.children: - _whitespace(child, symbols) - - # helper function to look ahead one child - def with_next(iterable): - items, nexts = tee(iterable, 2) - nexts = islice(nexts, 1, None) - return zip_longest(items, nexts) - - if isinstance(node, UnparserRule): - new_children = [] - for child, next_child in with_next(node.children): - if (not next_child or - isinstance(next_child, UnlexerRule) and next_child.name == 'DOT' or - isinstance(child, UnlexerRule) and child.name == 'DOT'): - new_children.append(child) - else: - new_children.extend([child, UnlexerRule(src=random.choice(symbols))]) - node.children = new_children - - return node diff --git a/utils/grammar-fuzzer/__init__.py b/utils/grammar-fuzzer/__init__.py deleted file mode 100644 index 40a96afc6ff..00000000000 --- a/utils/grammar-fuzzer/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# -*- coding: utf-8 -*- diff --git a/utils/junit_to_html/junit-noframes.xsl b/utils/junit_to_html/junit-noframes.xsl deleted file mode 100644 index ae70e230ef6..00000000000 --- a/utils/junit_to_html/junit-noframes.xsl +++ /dev/null @@ -1,390 +0,0 @@ -[390 lines of XSLT omitted: the markup did not survive extraction. The stylesheet rendered a JUnit XML report as a single self-contained HTML page titled "Test Results", with a summary table (Tests, Failures, Errors, Success rate, Time), per-package and per-test detail tables, and a note that failures are anticipated and checked for with assertions while errors are unanticipated.]
diff --git a/utils/junit_to_html/junit_to_html b/utils/junit_to_html/junit_to_html deleted file mode 100755 index 132763c7d4c..00000000000 --- a/utils/junit_to_html/junit_to_html +++ /dev/null @@ -1,86 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- -import os -import lxml.etree as etree -import json -import argparse - -def export_testcases_json(report, path): - with open(os.path.join(path, "cases.jer"), "w") as testcases_file: - for testsuite in report.getroot(): - for testcase in testsuite: - row = {} - row["hostname"] = testsuite.get("hostname") - row["suite"] = testsuite.get("name") - row["suite_duration"] = testsuite.get("time") - row["timestamp"] = testsuite.get("timestamp") - row["testname"] = testcase.get("name") - row["classname"] = testcase.get("classname") - row["file"] = testcase.get("file") - row["line"] = testcase.get("line") - row["duration"] = testcase.get("time") - # default to empty strings so every row has both keys, - # then fill them from the captured output elements if present - row["stderr"] = "" - row["stdout"] = "" - for el in testcase: - if el.tag == "system-err": - row["stderr"] = el.text or "" - elif el.tag == "system-out": - row["stdout"] = el.text or "" - - json.dump(row, testcases_file) - testcases_file.write("\n") - -def export_testsuites_json(report, path): - with open(os.path.join(path, "suites.jer"), "w") as testsuites_file: - for testsuite in report.getroot(): - row = {} - row["suite"] = testsuite.get("name") - row["errors"] = testsuite.get("errors") - row["failures"] = testsuite.get("failures") - row["hostname"] = testsuite.get("hostname") - row["skipped"] = testsuite.get("skipped") - row["duration"] = testsuite.get("time") - row["timestamp"] = testsuite.get("timestamp") - json.dump(row, testsuites_file) - testsuites_file.write("\n") - - -def _convert_junit_to_html(junit_path, result_path, export_cases, export_suites): - with open(os.path.join(os.path.dirname(__file__), "junit-noframes.xsl")) as xslt_file: - junit_to_html_xslt = etree.parse(xslt_file) - if not os.path.exists(result_path): - os.makedirs(result_path) - - with open(junit_path) as junit_file: - junit_xml = etree.parse(junit_file) - - if export_suites: - export_testsuites_json(junit_xml, result_path) - if export_cases: - export_testcases_json(junit_xml, result_path) - transform = etree.XSLT(junit_to_html_xslt) - html = etree.tostring(transform(junit_xml), encoding="utf-8") - - # etree.tostring returns bytes, so write the result in binary mode - with open(os.path.join(result_path, "result.html"), "wb") as html_file: - html_file.write(html) - -if __name__ == "__main__": - - parser = argparse.ArgumentParser(description='Convert JUnit XML.') - parser.add_argument('junit', help='path to junit.xml report') - parser.add_argument('result_dir', nargs='?', help='directory for result files.
Default to junit.xml directory') - parser.add_argument('--export-cases', help='Export JSONEachRow result for testcases to upload in CI', action='store_true') - parser.add_argument('--export-suites', help='Export JSONEachRow result for testsuites to upload in CI', action='store_true') - - args = parser.parse_args() - - junit_path = args.junit - if args.result_dir: - result_path = args.result_dir - else: - result_path = os.path.dirname(junit_path) - print("junit_path: {}, result_path: {}, export cases: {}, export suites: {}".format(junit_path, result_path, args.export_cases, args.export_suites)) - _convert_junit_to_html(junit_path, result_path, args.export_cases, args.export_suites) diff --git a/utils/link-validate/link-validate.sh b/utils/link-validate/link-validate.sh deleted file mode 100755 index 2d8d57b95fc..00000000000 --- a/utils/link-validate/link-validate.sh +++ /dev/null @@ -1,42 +0,0 @@ -#!/bin/sh -# -# This script is used to validate the shared library dependencies of a binary -# -# Authors: FoundationDB team, https://github.com/apple/foundationdb/blame/master/build/link-validate.sh -# License: Apache License 2.0 - -verlte() { - [ "$1" = "`echo -e "$1\n$2" | sort -V | head -n1`" ] -} - -ALLOWED_SHARED_LIBS=("libdl.so.2" "libpthread.so.0" "librt.so.1" "libm.so.6" "libc.so.6" "ld-linux-x86-64.so.2") - -if [ "$#" -lt 1 ]; then - echo "USAGE: link-validate.sh BINNAME GLIBC_VERSION" - exit 1 -fi - -# Step 1: glibc version - -for i in $(objdump -T "$1" | awk '{print $5}' | grep GLIBC | sed 's/ *$//g' | sed 's/GLIBC_//' | sort | uniq); do - if ! verlte "$i" "${2:-2.10}"; then - echo "Dependency on newer libc detected: $i" - exit 1 - fi -done - -# Step 2: Other dynamic dependencies - -for j in $(objdump -p "$1" | grep NEEDED | awk '{print $2}'); do - PRESENT=0 - for k in "${ALLOWED_SHARED_LIBS[@]}"; do - if [[ "$k" == "$j" ]]; then - PRESENT=1 - break - fi - done - if ! [[ $PRESENT == 1 ]]; then - echo "Unexpected shared object dependency detected: $j" - exit 1 - fi -done diff --git a/utils/tests-visualizer/index.html b/utils/tests-visualizer/index.html index 690c42e486e..a15b09ea58e 100644 --- a/utils/tests-visualizer/index.html +++ b/utils/tests-visualizer/index.html @@ -69,12 +69,6 @@ function renderResponse(response) { document.body.style.height = canvas.height + 10 + 'px'; let ctx = canvas.getContext('2d'); - - ctx.imageSmoothingEnabled = false; - ctx.mozImageSmoothingEnabled = false; - ctx.webkitImageSmoothingEnabled = false; - ctx.msImageSmoothingEnabled = false; - let image = ctx.createImageData(canvas.width, canvas.height); let pixels = image.data; @@ -123,8 +117,6 @@ canvas.addEventListener('mousemove', event => { let pixel = canvas.getContext('2d').getImageData(x, y, 1, 1).data; - console.log(pixel); - let info = document.getElementById('info'); info.innerText = `${date}, ${test}`; diff --git a/utils/upload_test_results/README.md b/utils/upload_test_results/README.md deleted file mode 100644 index e6b361081a2..00000000000 --- a/utils/upload_test_results/README.md +++ /dev/null @@ -1,34 +0,0 @@ -## Tool to upload results to CI ClickHouse - -Currently allows uploading results from the `junit_to_html` tool to the ClickHouse CI database. - -``` -usage: upload_test_results [-h] --sha SHA --pr PR --file FILE --type - {suites,cases} [--user USER] --password PASSWORD - [--ca-cert CA_CERT] [--host HOST] [--db DB] - -Upload test result to CI ClickHouse. - -optional arguments: - -h, --help show this help message and exit - --sha SHA sha of current commit - --pr PR pr of current commit.
0 for master - --file FILE file to upload - --type {suites,cases} - Export type - --user USER user name - --password PASSWORD password - --ca-cert CA_CERT CA certificate path - --host HOST CI ClickHouse host - --db DB CI ClickHouse database name -``` - -$ ./upload_test_results --sha "cf7eaee3301d4634acdacbfa308ddbe0cc6a061d" --pr "0" --file xyz/cases.jer --type cases --password $PASSWD - -CI checks have a single commit sha and a PR identifier. -When uploading your local results for testing purposes, try to use the correct sha and PR. - -The CA certificate for ClickHouse CI can be obtained from Yandex.Cloud, where the CI database is hosted: -``` bash -wget "https://storage.yandexcloud.net/cloud-certs/CA.pem" -O YandexInternalRootCA.crt -``` \ No newline at end of file diff --git a/utils/upload_test_results/upload_test_results b/utils/upload_test_results/upload_test_results deleted file mode 100755 index 5916d0d85e8..00000000000 --- a/utils/upload_test_results/upload_test_results +++ /dev/null @@ -1,127 +0,0 @@ -#!/usr/bin/env python3 -import requests -import argparse - -# CREATE TABLE test_suites -# ( -# sha String, -# pr UInt16, -# suite String, -# errors UInt16, -# failures UInt16, -# hostname String, -# skipped UInt16, -# duration Double, -# timestamp DateTime -# ) ENGINE = MergeTree ORDER BY tuple(timestamp, suite); - -QUERY_SUITES="INSERT INTO test_suites "\ - "SELECT '{sha}' AS sha, "\ - "{pr} AS pr, "\ - "suite, "\ - "errors, "\ - "failures, "\ - "hostname, "\ - "skipped, "\ - "duration, "\ - "timestamp "\ - "FROM input('"\ - "suite String, "\ - "errors UInt16, "\ - "failures UInt16, "\ - "hostname String, "\ - "skipped UInt16, "\ - "duration Double, "\ - "timestamp DateTime"\ - "') FORMAT JSONEachRow" - -# CREATE TABLE test_cases -# ( -# sha String, -# pr UInt16, -# hostname String, -# suite String, -# timestamp DateTime, -# testname String, -# classname String, -# file String, -# line UInt16, -# duration Double, -# suite_duration Double, -# stderr String, -# stdout String -# ) ENGINE = MergeTree ORDER BY tuple(timestamp, testname); - -QUERY_CASES="INSERT INTO test_cases "\ - "SELECT '{sha}' AS sha, "\ - "{pr} AS pr, "\ - "hostname, "\ - "suite, "\ - "timestamp, "\ - "testname, "\ - "classname, "\ - "file, "\ - "line, "\ - "duration, "\ - "suite_duration, "\ - "stderr, "\ - "stdout "\ - "FROM input('"\ - "hostname String, "\ - "suite String, "\ - "timestamp DateTime, "\ - "testname String, "\ - "classname String, "\ - "file String, "\ - "line UInt16, "\ - "duration Double, "\ - "suite_duration Double, "\ - "stderr String, "\ - "stdout String"\ - "') FORMAT JSONEachRow" - - -def upload_request(sha, pr, file, q_type, user, password, ca_cert, host, db): - with open(file) as upload_f: - query = QUERY_SUITES if q_type == "suites" else QUERY_CASES - query = query.format(sha=sha, pr=pr) - url = 'https://{host}:8443/?database={db}&query={query}&date_time_input_format=best_effort'.format( - host=host, - db=db, - query=query - ) - data = upload_f - auth = { - 'X-ClickHouse-User': user, - 'X-ClickHouse-Key': password, - } - - print(query) - - res = requests.post( - url, - data=data, - headers=auth, - verify=ca_cert) - res.raise_for_status() - return res.text - -if __name__ == "__main__": - - parser = argparse.ArgumentParser(description='Upload test result to CI ClickHouse.') - parser.add_argument('--sha', help='sha of current commit', type=str, required=True) - parser.add_argument('--pr', help='pr of current commit.
0 for master', type=int, required=True) - parser.add_argument('--file', help='file to upload', required=True) - parser.add_argument('--type', help='Export type', choices=['suites', 'cases'], required=True) - parser.add_argument('--user', help='user name', type=str, default="clickhouse-ci") - parser.add_argument('--password', help='password', type=str, required=True) - parser.add_argument('--ca-cert', help='CA certificate path', type=str, default="/usr/local/share/ca-certificates/YandexInternalRootCA.crt") - parser.add_argument('--host', help='CI ClickHouse host', type=str, default="c1a-ity5agjmuhyu6nu9.mdb.yandexcloud.net") - parser.add_argument('--db', help='CI ClickHouse database name', type=str, default="clickhouse-ci") - - args = parser.parse_args() - - print(upload_request(args.sha, args.pr, args.file, args.type, args.user, args.password, args.ca_cert, args.host, args.db)) - - - diff --git a/website/benchmark/hardware/results/xeon_gold_6266.json b/website/benchmark/hardware/results/xeon_gold_6266.json index 4283b711091..0e68466a633 100644 --- a/website/benchmark/hardware/results/xeon_gold_6266.json +++ b/website/benchmark/hardware/results/xeon_gold_6266.json @@ -1,7 +1,7 @@ [ { - "system": "Xeon Gold 6266C, 3GHz, 4vCPU", - "system_full": "Xeon Gold 6266C, 3GHz, 4vCPU, 16GiB RAM, vda1 40GB", + "system": "Huawei Cloud c6.xlarge.4, 4vCPUs, 16 GiB", + "system_full": "Huawei Cloud c6.xlarge.4, Xeon Gold 6266C, 3GHz, 4vCPU, 16GiB RAM, vda1 40GB", "cpu_vendor": "Intel", "cpu_model": "Xeon Gold 6266C", "time": "2021-12-23 00:00:00", diff --git a/website/blog/en/2021/tests-visualization.md b/website/blog/en/2021/tests-visualization.md new file mode 100644 index 00000000000..259cb4d8e34 --- /dev/null +++ b/website/blog/en/2021/tests-visualization.md @@ -0,0 +1,45 @@ +--- +title: 'Decorating a Christmas Tree With the Help Of Flaky Tests' +image: 'https://blog-images.clickhouse.com/en/2021/tests-visualization/tests.png' +date: '2021-12-27' +author: '[Alexey Milovidov](https://github.com/alexey-milovidov)' +tags: ['tests', 'ci', 'flaky', 'christmas', 'visualization'] +--- + +Test suites and testing infrastructure are among the main assets of ClickHouse. We have tons of functional, integration, unit, performance, stress and fuzz tests. Tests are run on a per-commit basis and the results are publicly available. + +We also save the results of all test runs into a ClickHouse database. We started collecting results in June 2020, and we have 1 777 608 240 records so far. Now we run around 5 to 9 million tests every day. + +Tests are good (in general). A good test suite allows for fast development iterations, stable releases, and accepting more contributions from the community. We love tests. If there's something strange in ClickHouse, what are we gonna do? Write more tests. + +Some tests can be flaky. The reasons for flakiness are countless - most of them are simple timing issues in the test script itself, but sometimes a test that fails one time out of a thousand can uncover subtle logic errors in code. + +The problem is how to deal with flaky tests. Some people suggest automatically muting the "annoying" flaky tests, or adding automatic retries in case of failure. We believe that this is all wrong. Instead of trying to ignore flaky tests, we do the opposite: we put maximum effort into making the tests even more flaky!
+ +Our recipes for flaky tests: +— never mute or restart them; if a test failed even once, always investigate the cause; +— randomize the environment for every test run so the test will have more possible reasons to fail; +— if new tests are added, run them 100 times, and if at least one run fails, do not merge the pull request; +— if new tests are added, use them as a corpus for fuzzing - it will uncover corner cases even if the author did not write tests for them; +— [randomize thread scheduling](https://github.com/ClickHouse/ClickHouse/blob/master/src/Common/ThreadFuzzer.h) and add random sleeps and switching between CPU cores at random places and before and after mutex locks/unlocks; +— run everything in parallel on slow machines. + +Key point: to prevent flaky tests, we make our tests as flaky as possible. + +## Nice Way To Visualize Flaky Tests + +There is a test suite named "[functional stateless tests](https://github.com/ClickHouse/ClickHouse/tree/master/tests/queries/0_stateless)" that has 3772 tests. For every day since 2020-06-13 (561 days) and every test (3772 tests), I drew a picture of size 561x3772 where a pixel is green if all test runs finished successfully in the master branch during this day (for all commits and all combinations: release, debug+assertions, ASan, MSan, TSan, UBSan), and a pixel is red if at least one run failed. A pixel is transparent if the test did not exist on that day. + +This visualization is a toy that I've made for fun: + +![Visualization](https://blog-images.clickhouse.com/en/2021/tests-visualization/tree_half.png) + +It looks like a Christmas Tree (you need a bit of imagination). If you have a different kind of imagination, you can see it as a green field with flowers. + +The time goes from left to right. The tests are numbered with non-unique numbers (new tests usually get larger numbers), and these numbers are on the vertical axis (newer tests on top). + +If you see red dots in a horizontal line - it is a flaky test. If you see red dots in a vertical line - it means that one day we accidentally broke the master branch. If you see black horizontal lines or cuts in the tree - it means that tests were added with some old numbers, most likely because a long-lived feature branch was merged. If you see black vertical lines - it means that on some days the tests were not run. + +The velocity of adding new tests is represented by how tall and narrow the Christmas tree is. When we add a large number of tests, the tree grows with an almost vertical slope. + +The image is prepared by an [HTML page](https://github.com/ClickHouse/ClickHouse/pull/33185) with some JavaScript that queries a ClickHouse database directly and writes to a canvas. It took around ten seconds to build this picture. I also prepared an [interactive version](https://blog-images.clickhouse.com/en/2021/tests-visualization/demo.html) with already-saved data where you can play and find your favorite tests.
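For readers who want to poke at similar data themselves, below is a rough, hypothetical sketch of how per-day statistics could be pulled over the ClickHouse HTTP interface with `requests`, in the same style as the `upload_test_results` tool deleted earlier in this change. The `test_cases` table and `timestamp` column follow that tool's schema comments; the host and credentials are placeholders, not a public endpoint.

```python
# Hypothetical sketch: count test runs per day in the CI results database.
# Table layout follows the CREATE TABLE comments in upload_test_results;
# the host and credentials below are placeholders, not a real endpoint.
import requests

QUERY = (
    "SELECT toDate(timestamp) AS day, count() AS runs "
    "FROM test_cases GROUP BY day ORDER BY day FORMAT JSONEachRow"
)

response = requests.get(
    'https://ci-clickhouse.example.com:8443/',      # placeholder host
    params={'database': 'clickhouse-ci', 'query': QUERY},
    headers={
        'X-ClickHouse-User': 'clickhouse-ci',       # placeholder credentials
        'X-ClickHouse-Key': 'password',
    },
)
response.raise_for_status()
print(response.text)
```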
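And here is a rough Python equivalent of the pixel-painting rule described in the post. The real page does this in JavaScript on a canvas, so everything in this sketch - the input layout and the Pillow-based rendering - is an illustrative assumption rather than the actual code from the pull request.

```python
# Illustrative sketch of the pixel-painting rule, using Pillow.
# Input: statuses[(day_index, test_index)] -> True if all runs passed that day,
# False if at least one failed; absent keys mean the test did not exist yet.
from PIL import Image

def render(statuses, num_days=561, num_tests=3772):
    # RGBA so that "test did not exist that day" stays fully transparent.
    image = Image.new('RGBA', (num_days, num_tests), (0, 0, 0, 0))
    pixels = image.load()
    for (day, test), all_passed in statuses.items():
        # Newer tests on top: flip the vertical axis.
        y = num_tests - 1 - test
        pixels[day, y] = (0, 160, 0, 255) if all_passed else (200, 0, 0, 255)
    return image

# Example: a tiny three-day, two-test history with one failure.
demo = {(0, 0): True, (1, 0): False, (2, 0): True, (2, 1): True}
render(demo, num_days=3, num_tests=2).save('tree.png')
```

RGBA was chosen here so that the three states of a pixel (green, red, transparent) map directly onto the description above.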