Mirror of https://github.com/ClickHouse/ClickHouse.git (synced 2024-11-10 09:32:06 +00:00)
Merge remote-tracking branch 'upstream/master' into pytest
Drop all my changes for .sql tests
Commit 3c66942780

CHANGELOG.md
@@ -1,3 +1,65 @@
## ClickHouse release 19.7.3.9, 2019-05-30

### New Features
* Allow limiting the range of a setting that can be specified by the user. These constraints can be set up in the user's settings profile. [#4931](https://github.com/yandex/ClickHouse/pull/4931) ([Vitaly Baranov](https://github.com/vitlibar))
* Add a second version of the `groupUniqArray` function with an optional `max_size` parameter that limits the size of the resulting array. This behavior is similar to the `groupArray(max_size)(x)` function. [#5026](https://github.com/yandex/ClickHouse/pull/5026) ([Guillaume Tassery](https://github.com/YiuRULE))
* For the TSVWithNames/CSVWithNames input formats, the column order can now be determined from the file header. This is controlled by the `input_format_with_names_use_header` setting. [#5081](https://github.com/yandex/ClickHouse/pull/5081) ([Alexander](https://github.com/Akazz))
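A hedged sketch of how the last two items can be exercised from SQL; the table `visits`, its `UserID` column, and the inline CSV rows are made up for illustration and are not part of this commit:

```sql
-- New optional max_size parameter: keep at most 3 distinct values in the result array.
SELECT groupUniqArray(3)(UserID) FROM visits;

-- Let the CSVWithNames header decide the column order instead of the table definition.
SET input_format_with_names_use_header = 1;
INSERT INTO visits FORMAT CSVWithNames
"UserID","EventDate"
42,"2019-05-30"
```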
### Bug Fixes
* Crash with uncompressed_cache + JOIN during merge (#5197). [#5133](https://github.com/yandex/ClickHouse/pull/5133) ([Danila Kutenin](https://github.com/danlark1))
* Segmentation fault on a clickhouse-client query to system tables (#5066). [#5127](https://github.com/yandex/ClickHouse/pull/5127) ([Ivan](https://github.com/abyss7))
* Data loss under heavy load via KafkaEngine (#4736). [#5080](https://github.com/yandex/ClickHouse/pull/5080) ([Ivan](https://github.com/abyss7))
* Fixed a very rare data race condition that could happen when executing a query with UNION ALL involving at least two SELECTs from system.columns, system.tables, system.parts, system.parts_tables or tables of the Merge family while concurrently performing ALTER of columns of the related tables. [#5189](https://github.com/yandex/ClickHouse/pull/5189) ([alexey-milovidov](https://github.com/alexey-milovidov))

### Performance Improvements
* Use radix sort when sorting by a single numeric column in `ORDER BY` without `LIMIT`. [#5106](https://github.com/yandex/ClickHouse/pull/5106), [#4439](https://github.com/yandex/ClickHouse/pull/4439) ([Evgenii Pravda](https://github.com/kvinty), [alexey-milovidov](https://github.com/alexey-milovidov))
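For illustration, a query shape that can take the new radix-sort path under the conditions named above (a single numeric sort key and no `LIMIT`); the `hits` table and its columns are hypothetical:

```sql
-- Single numeric ORDER BY key, no LIMIT: eligible for radix sort.
SELECT UserID, RequestTime
FROM hits
ORDER BY RequestTime;

-- Adding LIMIT (or a second sort key) uses the general sorting path instead.
SELECT UserID, RequestTime
FROM hits
ORDER BY RequestTime
LIMIT 10;
```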
### Documentation
* Translate documentation for some table engines to Chinese. [#5107](https://github.com/yandex/ClickHouse/pull/5107), [#5094](https://github.com/yandex/ClickHouse/pull/5094), [#5087](https://github.com/yandex/ClickHouse/pull/5087) ([张风啸](https://github.com/AlexZFX)), [#5068](https://github.com/yandex/ClickHouse/pull/5068) ([never lee](https://github.com/neverlee))

### Build/Testing/Packaging Improvements
* Print UTF-8 characters properly in `clickhouse-test`. [#5084](https://github.com/yandex/ClickHouse/pull/5084) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Add a command line parameter for clickhouse-client to always load suggestion data. [#5102](https://github.com/yandex/ClickHouse/pull/5102) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Resolve some PVS-Studio warnings. [#5082](https://github.com/yandex/ClickHouse/pull/5082) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Update LZ4. [#5040](https://github.com/yandex/ClickHouse/pull/5040) ([Danila Kutenin](https://github.com/danlark1))
* Add gperf to build requirements for upcoming pull request #5030. [#5110](https://github.com/yandex/ClickHouse/pull/5110) ([proller](https://github.com/proller))
## ClickHouse release 19.6.2.11, 2019-05-13

### New Features
@@ -29,6 +91,7 @@
* Fixed hanging on start of the server when a dictionary depends on another dictionary via a database with engine=Dictionary. [#4962](https://github.com/yandex/ClickHouse/pull/4962) ([Vitaly Baranov](https://github.com/vitlibar))
* Partially fix distributed_product_mode = local. Columns of local tables can now be used in where/having/order by/... via table aliases. An exception is thrown if the table does not have an alias. Accessing columns without table aliases is not possible yet. [#4986](https://github.com/yandex/ClickHouse/pull/4986) ([Artem Zuikov](https://github.com/4ertus2))
* Fix a potentially wrong result for `SELECT DISTINCT` with `JOIN`. [#5001](https://github.com/yandex/ClickHouse/pull/5001) ([Artem Zuikov](https://github.com/4ertus2))
* Fixed a very rare data race condition that could happen when executing a query with UNION ALL involving at least two SELECTs from system.columns, system.tables, system.parts, system.parts_tables or tables of the Merge family while concurrently performing ALTER of columns of the related tables. [#5189](https://github.com/yandex/ClickHouse/pull/5189) ([alexey-milovidov](https://github.com/alexey-milovidov))

### Build/Testing/Packaging Improvements
* Fixed test failures when running clickhouse-server on a different host. [#4713](https://github.com/yandex/ClickHouse/pull/4713) ([Vasily Nemkov](https://github.com/Enmk))
@@ -1,3 +1,68 @@
## ClickHouse release 19.7.3.9, 2019-05-30

### New Features
* Added the ability to limit the values of configuration settings that a user can specify. These constraints are set up in the user's settings profile. [#4931](https://github.com/yandex/ClickHouse/pull/4931) ([Vitaly Baranov](https://github.com/vitlibar))
* Added a variant of the `groupUniqArray` function with an additional `max_size` parameter that limits the size of the resulting array, analogous to the `groupArray(max_size)(x)` function. [#5026](https://github.com/yandex/ClickHouse/pull/5026) ([Guillaume Tassery](https://github.com/YiuRULE))
* For the TSVWithNames and CSVWithNames input formats, the column order can now be determined from the file header. This behavior is controlled by the `input_format_with_names_use_header` setting. [#5081](https://github.com/yandex/ClickHouse/pull/5081) ([Alexander](https://github.com/Akazz))
### Bug Fixes
* Crash during merges when using uncompressed_cache together with JOIN (#5197). [#5133](https://github.com/yandex/ClickHouse/pull/5133) ([Danila Kutenin](https://github.com/danlark1))
* Segmentation fault on a query to system tables (#5066). [#5127](https://github.com/yandex/ClickHouse/pull/5127) ([Ivan](https://github.com/abyss7))
* Loss of ingested data under heavy load via KafkaEngine (#4736). [#5080](https://github.com/yandex/ClickHouse/pull/5080) ([Ivan](https://github.com/abyss7))
* Fixed a very rare data race condition that could occur when executing a query with UNION ALL involving at least two SELECTs from system.columns, system.tables, system.parts, system.parts_tables or tables of the Merge family while ALTER queries on columns of the related tables run concurrently. [#5189](https://github.com/yandex/ClickHouse/pull/5189) ([alexey-milovidov](https://github.com/alexey-milovidov))

### Performance Improvements
* Radix sort is now used for sorting by numeric columns in `ORDER BY` without `LIMIT`. [#5106](https://github.com/yandex/ClickHouse/pull/5106), [#4439](https://github.com/yandex/ClickHouse/pull/4439) ([Evgenii Pravda](https://github.com/kvinty), [alexey-milovidov](https://github.com/alexey-milovidov))
### Documentation
* Documentation for some table engines has been translated to Chinese. [#5107](https://github.com/yandex/ClickHouse/pull/5107), [#5094](https://github.com/yandex/ClickHouse/pull/5094), [#5087](https://github.com/yandex/ClickHouse/pull/5087) ([张风啸](https://github.com/AlexZFX)), [#5068](https://github.com/yandex/ClickHouse/pull/5068) ([never lee](https://github.com/neverlee))

### Build/Testing/Packaging Improvements
* UTF-8 characters are displayed correctly in `clickhouse-test`. [#5084](https://github.com/yandex/ClickHouse/pull/5084) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Added a command line parameter for `clickhouse-client` to always load suggestion data. [#5102](https://github.com/yandex/ClickHouse/pull/5102) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed some PVS-Studio warnings. [#5082](https://github.com/yandex/ClickHouse/pull/5082) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Updated the LZ4 library. [#5040](https://github.com/yandex/ClickHouse/pull/5040) ([Danila Kutenin](https://github.com/danlark1))
* Added gperf to build requirements to support the upcoming PR #5030. [#5110](https://github.com/yandex/ClickHouse/pull/5110) ([proller](https://github.com/proller))
## ClickHouse release 19.6.2.11, 2019-05-13

### New Features
@@ -6,7 +71,7 @@
* Added the `isValidUTF8` function to check whether a string contains valid UTF-8 data. [#4934](https://github.com/yandex/ClickHouse/pull/4934) ([Danila Kutenin](https://github.com/danlark1))
* Added a new load balancing rule (`load_balancing`) `first_or_random`, which sends queries to the first specified host and, if it is unavailable, to random hosts of the shard. Useful for cross-replication topologies. [#5012](https://github.com/yandex/ClickHouse/pull/5012) ([nvartolomei](https://github.com/nvartolomei))

### Experimental Features
* Added the `index_granularity_bytes` setting (adaptive index granularity) for MergeTree* family tables. [#4826](https://github.com/yandex/ClickHouse/pull/4826) ([alesapin](https://github.com/alesapin))

### Improvements
@@ -29,6 +94,7 @@
* Fixed hanging on server start when a dictionary depends on another dictionary via a table from a database with the `Dictionary` engine. [#4962](https://github.com/yandex/ClickHouse/pull/4962) ([Vitaly Baranov](https://github.com/vitlibar))
* With `distributed_product_mode = 'local'`, columns of local tables can now be used correctly in where/having/order by/... via table aliases. An exception is thrown if the table has no alias. Accessing columns without table aliases is not possible yet. [#4986](https://github.com/yandex/ClickHouse/pull/4986) ([Artem Zuikov](https://github.com/4ertus2))
* Fixed a potentially incorrect result for `SELECT DISTINCT` with `JOIN`. [#5001](https://github.com/yandex/ClickHouse/pull/5001) ([Artem Zuikov](https://github.com/4ertus2))
* Fixed a very rare data race condition that could occur when executing a query with UNION ALL involving at least two SELECTs from system.columns, system.tables, system.parts, system.parts_tables or tables of the Merge family while ALTER queries on columns of the related tables run concurrently. [#5189](https://github.com/yandex/ClickHouse/pull/5189) ([alexey-milovidov](https://github.com/alexey-milovidov))

### Build/Testing/Packaging Improvements
* Fixed test failures when `clickhouse-server` is running on a remote host. [#4713](https://github.com/yandex/ClickHouse/pull/4713) ([Vasily Nemkov](https://github.com/Enmk))
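A quick, illustrative way to try the two query-level additions above; the string literals and the session scope are arbitrary examples, not taken from this commit:

```sql
-- Returns 1 for well-formed UTF-8, 0 otherwise.
SELECT isValidUTF8('ClickHouse'), isValidUTF8(unhex('FF'));

-- Prefer the first listed replica; fall back to a random one only if it is down.
SET load_balancing = 'first_or_random';
```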
@@ -12,8 +12,7 @@ ClickHouse is an open-source column-oriented database management system that all
* You can also [fill this form](https://forms.yandex.com/surveys/meet-yandex-clickhouse-team/) to meet Yandex ClickHouse team in person.

## Upcoming Events
* ClickHouse at [Percona Live 2019](https://www.percona.com/live/19/other-open-source-databases-track) in Austin on May 28-30.
* [ClickHouse Community Meetup in San Francisco](https://www.meetup.com/San-Francisco-Bay-Area-ClickHouse-Meetup/events/261110652/) on June 4.
* [ClickHouse Community Meetup in Beijing](https://www.huodongxing.com/event/2483759276200) on June 8.
* [ClickHouse on HighLoad++ Siberia](https://www.highload.ru/siberia/2019/abstracts/5348) on June 24-25.
* [ClickHouse Community Meetup in Shenzhen](https://www.huodongxing.com/event/3483759917300) on October 20.
* [ClickHouse Community Meetup in Shanghai](https://www.huodongxing.com/event/4483760336000) on October 27.
@@ -1,6 +1,6 @@
 add_library(roaring
     roaring.c
-    roaring.h
-    roaring.hh)
+    roaring/roaring.h
+    roaring/roaring.hh)

 target_include_directories (roaring PUBLIC ${CMAKE_CURRENT_SOURCE_DIR})
@@ -1,5 +1,5 @@
 /* auto-generated on Tue Dec 18 09:42:59 CST 2018. Do not edit! */
-#include "roaring.h"
+#include "roaring/roaring.h"

 /* used for http://dmalloc.com/ Dmalloc - Debug Malloc Library */
 #ifdef DMALLOC
contrib/hyperscan (vendored)
@@ -1 +1 @@
-Subproject commit ed17d34a7c786512471946f9105eaa8d925f34c3
+Subproject commit 01e6b83f9fbdb4020cd68a5287bf3a0471eeb272
@@ -81,7 +81,7 @@
 #define PCG_128BIT_CONSTANT(high,low) \
         ((pcg128_t(high) << 64) + low)
 #else
-#include "pcg_uint128.hpp" // Y_IGNORE
+#include "pcg_uint128.hpp"
 namespace pcg_extras {
     typedef pcg_extras::uint_x4<uint32_t,uint64_t> pcg128_t;
 }
@@ -1,5 +1,5 @@
 #if __has_include(<rdkafka.h>) // maybe bundled
-# include_next <rdkafka.h> // Y_IGNORE
+# include_next <rdkafka.h>
 #else // system
 # include_next <librdkafka/rdkafka.h>
 #endif
@@ -67,7 +67,7 @@
 #include <Storages/ColumnsDescription.h>

 #if USE_READLINE
-#include "Suggest.h" // Y_IGNORE
+#include "Suggest.h"
 #endif

 #ifndef __clang__
@@ -15,7 +15,7 @@
 #endif

 #if USE_TCMALLOC
-#include <gperftools/malloc_extension.h> // Y_IGNORE
+#include <gperftools/malloc_extension.h>
 #endif

 #include <Common/StringUtils/StringUtils.h>
@@ -3,9 +3,9 @@
 #if USE_POCO_SQLODBC || USE_POCO_DATAODBC

 #if USE_POCO_SQLODBC
-#include <Poco/SQL/ODBC/ODBCException.h> // Y_IGNORE
-#include <Poco/SQL/ODBC/SessionImpl.h> // Y_IGNORE
-#include <Poco/SQL/ODBC/Utility.h> // Y_IGNORE
+#include <Poco/SQL/ODBC/ODBCException.h>
+#include <Poco/SQL/ODBC/SessionImpl.h>
+#include <Poco/SQL/ODBC/Utility.h>
 #define POCO_SQL_ODBC_CLASS Poco::SQL::ODBC
 #endif
 #if USE_POCO_DATAODBC

@@ -2,9 +2,9 @@
 #if USE_POCO_SQLODBC || USE_POCO_DATAODBC

 #if USE_POCO_SQLODBC
-#include <Poco/SQL/ODBC/ODBCException.h> // Y_IGNORE
-#include <Poco/SQL/ODBC/SessionImpl.h> // Y_IGNORE
-#include <Poco/SQL/ODBC/Utility.h> // Y_IGNORE
+#include <Poco/SQL/ODBC/ODBCException.h>
+#include <Poco/SQL/ODBC/SessionImpl.h>
+#include <Poco/SQL/ODBC/Utility.h>
 #define POCO_SQL_ODBC_CLASS Poco::SQL::ODBC
 #endif
 #if USE_POCO_DATAODBC

@@ -2,9 +2,9 @@
 #if USE_POCO_SQLODBC || USE_POCO_DATAODBC

 #if USE_POCO_SQLODBC
-#include <Poco/SQL/ODBC/ODBCException.h> // Y_IGNORE
-#include <Poco/SQL/ODBC/SessionImpl.h> // Y_IGNORE
-#include <Poco/SQL/ODBC/Utility.h> // Y_IGNORE
+#include <Poco/SQL/ODBC/ODBCException.h>
+#include <Poco/SQL/ODBC/SessionImpl.h>
+#include <Poco/SQL/ODBC/Utility.h>
 #define POCO_SQL_ODBC_CLASS Poco::SQL::ODBC
 #endif
 #if USE_POCO_DATAODBC

@@ -8,7 +8,7 @@
 #if USE_POCO_SQLODBC || USE_POCO_DATAODBC

 #if USE_POCO_SQLODBC
-#include <Poco/SQL/ODBC/Utility.h> // Y_IGNORE
+#include <Poco/SQL/ODBC/Utility.h>
 #endif
 #if USE_POCO_DATAODBC
 #include <Poco/Data/ODBC/Utility.h>
@ -312,11 +312,13 @@ void HTTPHandler::processQuery(
|
||||
client_supports_http_compression = true;
|
||||
http_response_compression_method = CompressionMethod::Zlib;
|
||||
}
|
||||
#if USE_BROTLI
|
||||
else if (http_response_compression_methods == "br")
|
||||
{
|
||||
client_supports_http_compression = true;
|
||||
http_response_compression_method = CompressionMethod::Brotli;
|
||||
}
|
||||
#endif
|
||||
}
|
||||
|
||||
/// Client can pass a 'compress' flag in the query string. In this case the query result is
|
||||
|
@ -1,10 +1,10 @@
|
||||
#pragma once
|
||||
|
||||
#include <roaring.h>
|
||||
#include <roaring/roaring.h>
|
||||
#include <IO/ReadHelpers.h>
|
||||
#include <IO/WriteHelpers.h>
|
||||
#include <boost/noncopyable.hpp>
|
||||
#include <roaring.hh>
|
||||
#include <roaring/roaring.hh>
|
||||
#include <Common/HashTable/SmallTable.h>
|
||||
#include <Common/PODArray.h>
|
||||
|
||||
|
@@ -45,7 +45,7 @@ namespace
 /// Such default parameters were picked because they did well on some tests,
 /// though fitting the parameters is still required to achieve a better result.
-auto learning_rate = Float64(0.00001);
+auto learning_rate = Float64(0.01);
 auto l2_reg_coef = Float64(0.1);
 UInt32 batch_size = 15;

@@ -134,9 +134,14 @@ void LinearModelData::update_state()
 }

 void LinearModelData::predict(
-    ColumnVector<Float64>::Container & container, Block & block, const ColumnNumbers & arguments, const Context & context) const
+    ColumnVector<Float64>::Container & container,
+    Block & block,
+    size_t offset,
+    size_t limit,
+    const ColumnNumbers & arguments,
+    const Context & context) const
 {
-    gradient_computer->predict(container, block, arguments, weights, bias, context);
+    gradient_computer->predict(container, block, offset, limit, arguments, weights, bias, context);
 }

 void LinearModelData::returnWeights(IColumn & to) const
@ -345,42 +350,38 @@ void IWeightsUpdater::add_to_batch(
|
||||
void LogisticRegression::predict(
|
||||
ColumnVector<Float64>::Container & container,
|
||||
Block & block,
|
||||
size_t offset,
|
||||
size_t limit,
|
||||
const ColumnNumbers & arguments,
|
||||
const std::vector<Float64> & weights,
|
||||
Float64 bias,
|
||||
const Context & context) const
|
||||
const Context & /*context*/) const
|
||||
{
|
||||
size_t rows_num = block.rows();
|
||||
std::vector<Float64> results(rows_num, bias);
|
||||
|
||||
if (offset > rows_num || offset + limit > rows_num)
|
||||
throw Exception("Invalid offset and limit for LogisticRegression::predict. "
|
||||
"Block has " + toString(rows_num) + " rows, but offset is " + toString(offset) +
|
||||
" and limit is " + toString(limit), ErrorCodes::LOGICAL_ERROR);
|
||||
|
||||
std::vector<Float64> results(limit, bias);
|
||||
|
||||
for (size_t i = 1; i < arguments.size(); ++i)
|
||||
{
|
||||
const ColumnWithTypeAndName & cur_col = block.getByPosition(arguments[i]);
|
||||
|
||||
if (!isNativeNumber(cur_col.type))
|
||||
{
|
||||
throw Exception("Prediction arguments must have numeric type", ErrorCodes::BAD_ARGUMENTS);
|
||||
}
|
||||
|
||||
/// If column type is already Float64 then castColumn simply returns it
|
||||
auto features_col_ptr = castColumn(cur_col, std::make_shared<DataTypeFloat64>(), context);
|
||||
auto features_column = typeid_cast<const ColumnFloat64 *>(features_col_ptr.get());
|
||||
auto & features_column = cur_col.column;
|
||||
|
||||
if (!features_column)
|
||||
{
|
||||
throw Exception("Unexpectedly cannot dynamically cast features column " + std::to_string(i), ErrorCodes::LOGICAL_ERROR);
|
||||
}
|
||||
|
||||
for (size_t row_num = 0; row_num != rows_num; ++row_num)
|
||||
{
|
||||
results[row_num] += weights[i - 1] * features_column->getElement(row_num);
|
||||
}
|
||||
for (size_t row_num = 0; row_num < limit; ++row_num)
|
||||
results[row_num] += weights[i - 1] * features_column->getFloat64(offset + row_num);
|
||||
}
|
||||
|
||||
container.reserve(rows_num);
|
||||
for (size_t row_num = 0; row_num != rows_num; ++row_num)
|
||||
{
|
||||
container.reserve(container.size() + limit);
|
||||
for (size_t row_num = 0; row_num < limit; ++row_num)
|
||||
container.emplace_back(1 / (1 + exp(-results[row_num])));
|
||||
}
|
||||
}
|
||||
|
||||
void LogisticRegression::compute(
|
||||
@ -413,10 +414,12 @@ void LogisticRegression::compute(
|
||||
void LinearRegression::predict(
|
||||
ColumnVector<Float64>::Container & container,
|
||||
Block & block,
|
||||
size_t offset,
|
||||
size_t limit,
|
||||
const ColumnNumbers & arguments,
|
||||
const std::vector<Float64> & weights,
|
||||
Float64 bias,
|
||||
const Context & context) const
|
||||
const Context & /*context*/) const
|
||||
{
|
||||
if (weights.size() + 1 != arguments.size())
|
||||
{
|
||||
@ -424,36 +427,33 @@ void LinearRegression::predict(
|
||||
}
|
||||
|
||||
size_t rows_num = block.rows();
|
||||
std::vector<Float64> results(rows_num, bias);
|
||||
|
||||
if (offset > rows_num || offset + limit > rows_num)
|
||||
throw Exception("Invalid offset and limit for LogisticRegression::predict. "
|
||||
"Block has " + toString(rows_num) + " rows, but offset is " + toString(offset) +
|
||||
" and limit is " + toString(limit), ErrorCodes::LOGICAL_ERROR);
|
||||
|
||||
std::vector<Float64> results(limit, bias);
|
||||
|
||||
for (size_t i = 1; i < arguments.size(); ++i)
|
||||
{
|
||||
const ColumnWithTypeAndName & cur_col = block.getByPosition(arguments[i]);
|
||||
if (!isNativeNumber(cur_col.type))
|
||||
{
|
||||
throw Exception("Prediction arguments must have numeric type", ErrorCodes::BAD_ARGUMENTS);
|
||||
}
|
||||
|
||||
/// If column type is already Float64 then castColumn simply returns it
|
||||
auto features_col_ptr = castColumn(cur_col, std::make_shared<DataTypeFloat64>(), context);
|
||||
auto features_column = typeid_cast<const ColumnFloat64 *>(features_col_ptr.get());
|
||||
if (!isNativeNumber(cur_col.type))
|
||||
throw Exception("Prediction arguments must have numeric type", ErrorCodes::BAD_ARGUMENTS);
|
||||
|
||||
auto features_column = cur_col.column;
|
||||
|
||||
if (!features_column)
|
||||
{
|
||||
throw Exception("Unexpectedly cannot dynamically cast features column " + std::to_string(i), ErrorCodes::LOGICAL_ERROR);
|
||||
}
|
||||
|
||||
for (size_t row_num = 0; row_num != rows_num; ++row_num)
|
||||
{
|
||||
results[row_num] += weights[i - 1] * features_column->getElement(row_num);
|
||||
}
|
||||
for (size_t row_num = 0; row_num < limit; ++row_num)
|
||||
results[row_num] += weights[i - 1] * features_column->getFloat64(row_num + offset);
|
||||
}
|
||||
|
||||
container.reserve(rows_num);
|
||||
for (size_t row_num = 0; row_num != rows_num; ++row_num)
|
||||
{
|
||||
container.reserve(container.size() + limit);
|
||||
for (size_t row_num = 0; row_num < limit; ++row_num)
|
||||
container.emplace_back(results[row_num]);
|
||||
}
|
||||
}
|
||||
|
||||
void LinearRegression::compute(
|
||||
|
@ -3,6 +3,7 @@
|
||||
#include <Columns/ColumnVector.h>
|
||||
#include <Columns/ColumnsCommon.h>
|
||||
#include <Columns/ColumnsNumber.h>
|
||||
#include <Common/typeid_cast.h>
|
||||
#include <DataTypes/DataTypesNumber.h>
|
||||
#include <DataTypes/DataTypeTuple.h>
|
||||
#include <DataTypes/DataTypeArray.h>
|
||||
@ -42,6 +43,8 @@ public:
|
||||
virtual void predict(
|
||||
ColumnVector<Float64>::Container & container,
|
||||
Block & block,
|
||||
size_t offset,
|
||||
size_t limit,
|
||||
const ColumnNumbers & arguments,
|
||||
const std::vector<Float64> & weights,
|
||||
Float64 bias,
|
||||
@ -67,6 +70,8 @@ public:
|
||||
void predict(
|
||||
ColumnVector<Float64>::Container & container,
|
||||
Block & block,
|
||||
size_t offset,
|
||||
size_t limit,
|
||||
const ColumnNumbers & arguments,
|
||||
const std::vector<Float64> & weights,
|
||||
Float64 bias,
|
||||
@ -92,6 +97,8 @@ public:
|
||||
void predict(
|
||||
ColumnVector<Float64>::Container & container,
|
||||
Block & block,
|
||||
size_t offset,
|
||||
size_t limit,
|
||||
const ColumnNumbers & arguments,
|
||||
const std::vector<Float64> & weights,
|
||||
Float64 bias,
|
||||
@ -218,8 +225,13 @@ public:
|
||||
|
||||
void read(ReadBuffer & buf);
|
||||
|
||||
void
|
||||
predict(ColumnVector<Float64>::Container & container, Block & block, const ColumnNumbers & arguments, const Context & context) const;
|
||||
void predict(
|
||||
ColumnVector<Float64>::Container & container,
|
||||
Block & block,
|
||||
size_t offset,
|
||||
size_t limit,
|
||||
const ColumnNumbers & arguments,
|
||||
const Context & context) const;
|
||||
|
||||
void returnWeights(IColumn & to) const;
|
||||
private:
|
||||
@ -228,11 +240,11 @@ private:
|
||||
|
||||
Float64 learning_rate;
|
||||
Float64 l2_reg_coef;
|
||||
UInt32 batch_capacity;
|
||||
UInt64 batch_capacity;
|
||||
|
||||
UInt32 iter_num = 0;
|
||||
UInt64 iter_num = 0;
|
||||
std::vector<Float64> gradient_batch;
|
||||
UInt32 batch_size;
|
||||
UInt64 batch_size;
|
||||
|
||||
std::shared_ptr<IGradientComputer> gradient_computer;
|
||||
std::shared_ptr<IWeightsUpdater> weights_updater;
|
||||
@ -316,7 +328,13 @@ public:
|
||||
void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override { this->data(place).read(buf); }
|
||||
|
||||
void predictValues(
|
||||
ConstAggregateDataPtr place, IColumn & to, Block & block, const ColumnNumbers & arguments, const Context & context) const override
|
||||
ConstAggregateDataPtr place,
|
||||
IColumn & to,
|
||||
Block & block,
|
||||
size_t offset,
|
||||
size_t limit,
|
||||
const ColumnNumbers & arguments,
|
||||
const Context & context) const override
|
||||
{
|
||||
if (arguments.size() != param_num + 1)
|
||||
throw Exception(
|
||||
@ -325,17 +343,12 @@ public:
|
||||
ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);
|
||||
|
||||
/// This cast might be correct because column type is based on getReturnTypeToPredict.
|
||||
ColumnVector<Float64> * column;
|
||||
try
|
||||
{
|
||||
column = &dynamic_cast<ColumnVector<Float64> &>(to);
|
||||
} catch (const std::bad_cast &)
|
||||
{
|
||||
auto * column = typeid_cast<ColumnFloat64 *>(&to);
|
||||
if (!column)
|
||||
throw Exception("Cast of column of predictions is incorrect. getReturnTypeToPredict must return same value as it is casted to",
|
||||
ErrorCodes::BAD_CAST);
|
||||
}
|
||||
|
||||
this->data(place).predict(column->getData(), block, arguments, context);
|
||||
this->data(place).predict(column->getData(), block, offset, limit, arguments, context);
|
||||
}
|
||||
|
||||
/** This function is called if aggregate function without State modifier is selected in a query.
|
||||
|
@ -100,9 +100,16 @@ public:
|
||||
/// Inserts results into a column.
|
||||
virtual void insertResultInto(ConstAggregateDataPtr place, IColumn & to) const = 0;
|
||||
|
||||
/// This function is used for machine learning methods
|
||||
virtual void predictValues(ConstAggregateDataPtr /* place */, IColumn & /*to*/,
|
||||
Block & /*block*/, const ColumnNumbers & /*arguments*/, const Context & /*context*/) const
|
||||
/// Used for machine learning methods. Predict result from trained model.
|
||||
/// Will insert result into `to` column for rows in range [offset, offset + limit).
|
||||
virtual void predictValues(
|
||||
ConstAggregateDataPtr /* place */,
|
||||
IColumn & /*to*/,
|
||||
Block & /*block*/,
|
||||
size_t /*offset*/,
|
||||
size_t /*limit*/,
|
||||
const ColumnNumbers & /*arguments*/,
|
||||
const Context & /*context*/) const
|
||||
{
|
||||
throw Exception("Method predictValues is not supported for " + getName(), ErrorCodes::NOT_IMPLEMENTED);
|
||||
}
|
||||
|
@ -92,13 +92,21 @@ MutableColumnPtr ColumnAggregateFunction::predictValues(Block & block, const Col
|
||||
auto ML_function = func.get();
|
||||
if (ML_function)
|
||||
{
|
||||
size_t row_num = 0;
|
||||
for (auto val : data)
|
||||
if (data.size() == 1)
|
||||
{
|
||||
ML_function->predictValues(val, *res, block, arguments, context);
|
||||
++row_num;
|
||||
/// Case for const column. Predict using single model.
|
||||
ML_function->predictValues(data[0], *res, block, 0, block.rows(), arguments, context);
|
||||
}
|
||||
else
|
||||
{
|
||||
/// Case for non-constant column. Use different aggregate function for each row.
|
||||
size_t row_num = 0;
|
||||
for (auto val : data)
|
||||
{
|
||||
ML_function->predictValues(val, *res, block, row_num, 1, arguments, context);
|
||||
++row_num;
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
else
|
||||
{
|
||||
|
@ -6,7 +6,7 @@
|
||||
#include <Common/config.h>
|
||||
#include <re2/re2.h>
|
||||
#if USE_RE2_ST
|
||||
#include <re2_st/re2.h> // Y_IGNORE
|
||||
#include <re2_st/re2.h>
|
||||
#else
|
||||
#define re2_st re2
|
||||
#endif
|
||||
|
@ -7,7 +7,7 @@
|
||||
# include <Common/Exception.h>
|
||||
namespace DB { namespace ErrorCodes { extern const int CPUID_ERROR; }}
|
||||
#elif USE_CPUINFO
|
||||
# include <cpuinfo.h> // Y_IGNORE
|
||||
# include <cpuinfo.h>
|
||||
#endif
|
||||
|
||||
|
||||
|
@ -16,7 +16,7 @@
|
||||
#pragma GCC diagnostic push
|
||||
#pragma GCC diagnostic ignored "-Wunused-parameter"
|
||||
|
||||
#include <llvm/IR/IRBuilder.h> // Y_IGNORE
|
||||
#include <llvm/IR/IRBuilder.h>
|
||||
|
||||
#pragma GCC diagnostic pop
|
||||
|
||||
|
@ -3,13 +3,13 @@
|
||||
|
||||
#include <IO/ReadBuffer.h>
|
||||
#include <Interpreters/Context.h>
|
||||
#include <Formats/CapnProtoRowInputStream.h> // Y_IGNORE
|
||||
#include <Formats/CapnProtoRowInputStream.h>
|
||||
#include <Formats/FormatFactory.h>
|
||||
#include <Formats/BlockInputStreamFromRowInputStream.h>
|
||||
#include <Formats/FormatSchemaInfo.h>
|
||||
#include <capnp/serialize.h> // Y_IGNORE
|
||||
#include <capnp/dynamic.h> // Y_IGNORE
|
||||
#include <capnp/common.h> // Y_IGNORE
|
||||
#include <capnp/serialize.h>
|
||||
#include <capnp/dynamic.h>
|
||||
#include <capnp/common.h>
|
||||
#include <boost/algorithm/string.hpp>
|
||||
#include <boost/range/join.hpp>
|
||||
#include <common/logger_useful.h>
|
||||
|
@ -2,8 +2,8 @@
|
||||
#if USE_PROTOBUF
|
||||
|
||||
#include <Formats/FormatSchemaInfo.h>
|
||||
#include <Formats/ProtobufSchemas.h> // Y_IGNORE
|
||||
#include <google/protobuf/compiler/importer.h> // Y_IGNORE
|
||||
#include <Formats/ProtobufSchemas.h>
|
||||
#include <google/protobuf/compiler/importer.h>
|
||||
#include <Common/Exception.h>
|
||||
|
||||
|
||||
|
@ -7,8 +7,8 @@
|
||||
#include <AggregateFunctions/IAggregateFunction.h>
|
||||
#include <DataTypes/DataTypesDecimal.h>
|
||||
#include <boost/numeric/conversion/cast.hpp>
|
||||
#include <google/protobuf/descriptor.h> // Y_IGNORE
|
||||
#include <google/protobuf/descriptor.pb.h> // Y_IGNORE
|
||||
#include <google/protobuf/descriptor.h>
|
||||
#include <google/protobuf/descriptor.pb.h>
|
||||
#include <IO/ReadHelpers.h>
|
||||
#include <IO/WriteHelpers.h>
|
||||
#include "ProtobufWriter.h"
|
||||
|
@ -7,7 +7,7 @@
|
||||
#include <Functions/FunctionHelpers.h>
|
||||
#include <Functions/GatherUtils/Algorithms.h>
|
||||
#include <IO/WriteHelpers.h>
|
||||
#include <libbase64.h> // Y_IGNORE
|
||||
#include <libbase64.h>
|
||||
|
||||
|
||||
namespace DB
|
||||
|
@ -24,7 +24,7 @@
|
||||
#if USE_EMBEDDED_COMPILER
|
||||
#pragma GCC diagnostic push
|
||||
#pragma GCC diagnostic ignored "-Wunused-parameter"
|
||||
#include <llvm/IR/IRBuilder.h> // Y_IGNORE
|
||||
#include <llvm/IR/IRBuilder.h>
|
||||
#pragma GCC diagnostic pop
|
||||
#endif
|
||||
|
||||
|
@ -22,8 +22,8 @@
|
||||
#pragma clang diagnostic ignored "-Wshift-negative-value"
|
||||
#endif
|
||||
|
||||
#include <vectorf128.h> // Y_IGNORE
|
||||
#include <vectormath_exp.h> // Y_IGNORE
|
||||
#include <vectorf128.h>
|
||||
#include <vectormath_exp.h>
|
||||
|
||||
#ifdef __clang__
|
||||
#pragma clang diagnostic pop
|
||||
|
@ -21,9 +21,9 @@
|
||||
#pragma clang diagnostic ignored "-Wshift-negative-value"
|
||||
#endif
|
||||
|
||||
#include <vectorf128.h> // Y_IGNORE
|
||||
#include <vectormath_exp.h> // Y_IGNORE
|
||||
#include <vectormath_trig.h> // Y_IGNORE
|
||||
#include <vectorf128.h>
|
||||
#include <vectormath_exp.h>
|
||||
#include <vectormath_trig.h>
|
||||
|
||||
#ifdef __clang__
|
||||
#pragma clang diagnostic pop
|
||||
|
@ -13,7 +13,7 @@
|
||||
#if USE_EMBEDDED_COMPILER
|
||||
#pragma GCC diagnostic push
|
||||
#pragma GCC diagnostic ignored "-Wunused-parameter"
|
||||
#include <llvm/IR/IRBuilder.h> // Y_IGNORE
|
||||
#include <llvm/IR/IRBuilder.h>
|
||||
#pragma GCC diagnostic pop
|
||||
#endif
|
||||
|
||||
|
@ -55,6 +55,12 @@ namespace ErrorCodes
|
||||
*
|
||||
* Two bitmap andnot calculation, return cardinality:
|
||||
* bitmapAndnotCardinality: bitmap,bitmap -> integer
|
||||
*
|
||||
* Judge if a bitmap is superset of the another one:
|
||||
* bitmapHasAll: bitmap,bitmap -> bool
|
||||
*
|
||||
* Judge if the intersection of two bitmap is nonempty:
|
||||
* bitmapHasAny: bitmap,bitmap -> bool
|
||||
*/
|
||||
|
||||
template <typename Name>
|
||||
@ -430,20 +436,32 @@ private:
|
||||
Block & block, const ColumnNumbers & arguments, size_t input_rows_count, typename ColumnVector<ToType>::Container & vec_to)
|
||||
{
|
||||
const ColumnAggregateFunction * columns[2];
|
||||
bool isColumnConst[2];
|
||||
for (size_t i = 0; i < 2; ++i)
|
||||
{
|
||||
if (auto argument_column_const = typeid_cast<const ColumnConst *>(block.getByPosition(arguments[i]).column.get()))
|
||||
columns[i] = typeid_cast<const ColumnAggregateFunction *>(argument_column_const->getDataColumnPtr().get());
|
||||
if (auto argument_column_const = typeid_cast<const ColumnConst*>(block.getByPosition(arguments[i]).column.get()))
|
||||
{
|
||||
columns[i] = typeid_cast<const ColumnAggregateFunction*>(argument_column_const->getDataColumnPtr().get());
|
||||
isColumnConst[i] = true;
|
||||
}
|
||||
else
|
||||
columns[i] = typeid_cast<const ColumnAggregateFunction *>(block.getByPosition(arguments[i]).column.get());
|
||||
{
|
||||
columns[i] = typeid_cast<const ColumnAggregateFunction*>(block.getByPosition(arguments[i]).column.get());
|
||||
isColumnConst[i] = false;
|
||||
}
|
||||
}
|
||||
|
||||
const PaddedPODArray<AggregateDataPtr> & container0 = columns[0]->getData();
|
||||
const PaddedPODArray<AggregateDataPtr> & container1 = columns[1]->getData();
|
||||
|
||||
for (size_t i = 0; i < input_rows_count; ++i)
|
||||
{
|
||||
const AggregateDataPtr dataPtr0 = isColumnConst[0] ? container0[0] : container0[i];
|
||||
const AggregateDataPtr dataPtr1 = isColumnConst[1] ? container1[0] : container1[i];
|
||||
const AggregateFunctionGroupBitmapData<T> & bd1
|
||||
= *reinterpret_cast<const AggregateFunctionGroupBitmapData<T> *>(columns[0]->getData()[i]);
|
||||
= *reinterpret_cast<const AggregateFunctionGroupBitmapData<T>*>(dataPtr0);
|
||||
const AggregateFunctionGroupBitmapData<T> & bd2
|
||||
= *reinterpret_cast<const AggregateFunctionGroupBitmapData<T> *>(columns[1]->getData()[i]);
|
||||
= *reinterpret_cast<const AggregateFunctionGroupBitmapData<T>*>(dataPtr1);
|
||||
vec_to[i] = Impl<T>::apply(bd1, bd2);
|
||||
}
|
||||
}
|
||||
|
@ -12,7 +12,7 @@
|
||||
|
||||
#include <Common/config.h>
|
||||
#if USE_XXHASH
|
||||
# include <xxhash.h> // Y_IGNORE
|
||||
# include <xxhash.h>
|
||||
#endif
|
||||
|
||||
#if USE_SSL
|
||||
|
@ -18,7 +18,7 @@
|
||||
|
||||
#pragma GCC diagnostic push
|
||||
#pragma GCC diagnostic ignored "-Wunused-parameter"
|
||||
#include <llvm/IR/IRBuilder.h> // Y_IGNORE
|
||||
#include <llvm/IR/IRBuilder.h>
|
||||
#pragma GCC diagnostic pop
|
||||
#endif
|
||||
|
||||
|
@ -27,7 +27,7 @@
|
||||
#endif
|
||||
|
||||
#if USE_RE2_ST
|
||||
# include <re2_st/re2.h> // Y_IGNORE
|
||||
# include <re2_st/re2.h>
|
||||
#else
|
||||
# define re2_st re2
|
||||
#endif
|
||||
|
@ -25,7 +25,7 @@
|
||||
#if USE_EMBEDDED_COMPILER
|
||||
#pragma GCC diagnostic push
|
||||
#pragma GCC diagnostic ignored "-Wunused-parameter"
|
||||
#include <llvm/IR/IRBuilder.h> // Y_IGNORE
|
||||
#include <llvm/IR/IRBuilder.h>
|
||||
#pragma GCC diagnostic pop
|
||||
#endif
|
||||
|
||||
|
@@ -53,10 +53,10 @@ public:
     DataTypePtr getReturnTypeImpl(const DataTypes & arguments) const override
     {
-        if (!arguments.size())
+        if (arguments.empty())
             throw Exception("Function " + getName() + " requires at least one argument", ErrorCodes::BAD_ARGUMENTS);

-        const DataTypeAggregateFunction * type = checkAndGetDataType<DataTypeAggregateFunction>(arguments[0].get());
+        const auto * type = checkAndGetDataType<DataTypeAggregateFunction>(arguments[0].get());
         if (!type)
             throw Exception("Argument for function " + getName() + " must have type AggregateFunction - state of aggregate function.",
                 ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);

@@ -66,19 +66,21 @@ public:

     void executeImpl(Block & block, const ColumnNumbers & arguments, size_t result, size_t /*input_rows_count*/) override
     {
-        if (!arguments.size())
+        if (arguments.empty())
             throw Exception("Function " + getName() + " requires at least one argument", ErrorCodes::BAD_ARGUMENTS);

-        const ColumnConst * column_with_states
-            = typeid_cast<const ColumnConst *>(&*block.getByPosition(arguments[0]).column);
+        const auto * model = block.getByPosition(arguments[0]).column.get();
+
+        if (const auto * column_with_states = typeid_cast<const ColumnConst *>(model))
+            model = column_with_states->getDataColumnPtr().get();

-        if (!column_with_states)
+        const auto * agg_function = typeid_cast<const ColumnAggregateFunction *>(model);
+
+        if (!agg_function)
             throw Exception("Illegal column " + block.getByPosition(arguments[0]).column->getName()
                 + " of first argument of function " + getName(), ErrorCodes::ILLEGAL_COLUMN);

-        block.getByPosition(result).column =
-            typeid_cast<const ColumnAggregateFunction *>(&*column_with_states->getDataColumnPtr())->predictValues(block, arguments, context);
+        block.getByPosition(result).column = agg_function->predictValues(block, arguments, context);
     }

     const Context & context;
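The change above lets the model argument be either a constant state or a per-row aggregate-function column before prediction is dispatched to `predictValues`. A hedged sketch of how such a model is typically trained and applied from SQL; the table names, feature columns, and hyperparameters are illustrative assumptions, not taken from this commit:

```sql
-- Train a model state once and store it (learning rate, L2 coefficient, batch size, method).
CREATE TABLE model ENGINE = Memory AS
SELECT stochasticLinearRegressionState(0.01, 0.1, 15, 'SGD')(target, feature1, feature2) AS state
FROM train_data;

-- Apply the trained state to new rows; this call is what ultimately reaches predictValues().
WITH (SELECT state FROM model) AS trained_model
SELECT evalMLMethod(trained_model, feature1, feature2) AS prediction
FROM test_data;
```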
@ -2,7 +2,7 @@
|
||||
#if USE_BROTLI
|
||||
|
||||
#include "BrotliReadBuffer.h"
|
||||
#include <brotli/decode.h> // Y_IGNORE
|
||||
#include <brotli/decode.h>
|
||||
|
||||
namespace DB
|
||||
{
|
||||
|
@ -4,7 +4,7 @@
|
||||
#include <Poco/URI.h>
|
||||
|
||||
#if USE_HDFS
|
||||
#include <hdfs/hdfs.h> // Y_IGNORE
|
||||
#include <hdfs/hdfs.h>
|
||||
|
||||
namespace DB
|
||||
{
|
||||
|
@ -158,6 +158,26 @@ void setTimeouts(Poco::Net::HTTPClientSession & session, const ConnectionTimeout
|
||||
auto retry_timeout = timeouts.connection_timeout.totalMicroseconds();
|
||||
auto session = pool_ptr->second->get(retry_timeout);
|
||||
|
||||
/// We store exception messages in session data.
|
||||
/// Poco HTTPSession also stores exception, but it can be removed at any time.
|
||||
const auto & sessionData = session->sessionData();
|
||||
if (!sessionData.empty())
|
||||
{
|
||||
auto msg = Poco::AnyCast<std::string>(sessionData);
|
||||
if (!msg.empty())
|
||||
{
|
||||
LOG_TRACE((&Logger::get("HTTPCommon")), "Failed communicating with " << host << " with error '" << msg << "' will try to reconnect session");
|
||||
/// Host can change IP
|
||||
const auto ip = DNSResolver::instance().resolveHost(host).toString();
|
||||
if (ip != session->getHost())
|
||||
{
|
||||
session->reset();
|
||||
session->setHost(ip);
|
||||
session->attachSessionData({});
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
setTimeouts(*session, timeouts);
|
||||
|
||||
return session;
|
||||
|
@ -2,10 +2,10 @@
|
||||
|
||||
#if USE_HDFS
|
||||
|
||||
#include <IO/ReadBufferFromHDFS.h> // Y_IGNORE
|
||||
#include <IO/ReadBufferFromHDFS.h>
|
||||
#include <IO/HDFSCommon.h>
|
||||
#include <Poco/URI.h>
|
||||
#include <hdfs/hdfs.h> // Y_IGNORE
|
||||
#include <hdfs/hdfs.h>
|
||||
|
||||
|
||||
namespace DB
|
||||
|
@ -6,6 +6,7 @@
|
||||
#include <IO/HTTPCommon.h>
|
||||
#include <IO/ReadBuffer.h>
|
||||
#include <IO/ReadBufferFromIStream.h>
|
||||
#include <Poco/Any.h>
|
||||
#include <Poco/Net/HTTPBasicCredentials.h>
|
||||
#include <Poco/Net/HTTPClientSession.h>
|
||||
#include <Poco/Net/HTTPRequest.h>
|
||||
@ -69,14 +70,24 @@ namespace detail
|
||||
|
||||
LOG_TRACE((&Logger::get("ReadWriteBufferFromHTTP")), "Sending request to " << uri.toString());
|
||||
|
||||
auto & stream_out = session->sendRequest(request);
|
||||
try
|
||||
{
|
||||
auto & stream_out = session->sendRequest(request);
|
||||
|
||||
if (out_stream_callback)
|
||||
out_stream_callback(stream_out);
|
||||
if (out_stream_callback)
|
||||
out_stream_callback(stream_out);
|
||||
|
||||
istr = receiveResponse(*session, request, response);
|
||||
istr = receiveResponse(*session, request, response);
|
||||
|
||||
impl = std::make_unique<ReadBufferFromIStream>(*istr, buffer_size_);
|
||||
impl = std::make_unique<ReadBufferFromIStream>(*istr, buffer_size_);
|
||||
}
|
||||
catch (const Poco::Exception & e)
|
||||
{
|
||||
/// We use session data storage as storage for exception text
|
||||
/// Depend on it we can deduce to reconnect session or reresolve session host
|
||||
session->attachSessionData(e.message());
|
||||
throw;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
|
@ -3,9 +3,9 @@
|
||||
#if USE_HDFS
|
||||
|
||||
#include <Poco/URI.h>
|
||||
#include <IO/WriteBufferFromHDFS.h> // Y_IGNORE
|
||||
#include <IO/WriteBufferFromHDFS.h>
|
||||
#include <IO/HDFSCommon.h>
|
||||
#include <hdfs/hdfs.h> // Y_IGNORE
|
||||
#include <hdfs/hdfs.h>
|
||||
|
||||
|
||||
namespace DB
|
||||
|
@ -128,6 +128,7 @@ void WriteBufferFromHTTPServerResponse::nextImpl()
|
||||
deflating_buf.emplace(*out_raw, compression_method, compression_level, working_buffer.size(), working_buffer.begin());
|
||||
out = &*deflating_buf;
|
||||
}
|
||||
#if USE_BROTLI
|
||||
else if (compression_method == CompressionMethod::Brotli)
|
||||
{
|
||||
#if defined(POCO_CLICKHOUSE_PATCH)
|
||||
@ -140,6 +141,7 @@ void WriteBufferFromHTTPServerResponse::nextImpl()
|
||||
brotli_buf.emplace(*out_raw, compression_level, working_buffer.size(), working_buffer.begin());
|
||||
out = &*brotli_buf;
|
||||
}
|
||||
#endif
|
||||
|
||||
else
|
||||
throw Exception("Logical error: unknown compression method passed to WriteBufferFromHTTPServerResponse",
|
||||
|
@ -61,7 +61,9 @@ private:
|
||||
|
||||
std::optional<WriteBufferFromOStream> out_raw;
|
||||
std::optional<ZlibDeflatingWriteBuffer> deflating_buf;
|
||||
#if USE_BROTLI
|
||||
std::optional<BrotliWriteBuffer> brotli_buf;
|
||||
#endif
|
||||
|
||||
WriteBuffer * out = nullptr; /// Uncompressed HTTP body is written to this buffer. Points to out_raw or possibly to deflating_buf.
|
||||
|
||||
|
@ -17,7 +17,7 @@
|
||||
#endif
|
||||
|
||||
#if USE_TCMALLOC
|
||||
#include <gperftools/malloc_extension.h> // Y_IGNORE
|
||||
#include <gperftools/malloc_extension.h>
|
||||
|
||||
/// Initializing malloc extension in global constructor as required.
|
||||
struct MallocExtensionInitializer
|
||||
|
@ -19,38 +19,30 @@
|
||||
#pragma GCC diagnostic ignored "-Wunused-parameter"
|
||||
#pragma GCC diagnostic ignored "-Wnon-virtual-dtor"
|
||||
|
||||
/** Y_IGNORE marker means that this header is not analyzed by Arcadia build system.
|
||||
* "Arcadia" is the name of internal Yandex source code repository.
|
||||
* ClickHouse have limited support for build in Arcadia
|
||||
* (ClickHouse source code is used in another Yandex products as a library).
|
||||
* Some libraries are not enabled when build inside Arcadia is used,
|
||||
* that what does Y_IGNORE indicate.
|
||||
*/
|
||||
|
||||
#include <llvm/Analysis/TargetTransformInfo.h> // Y_IGNORE
|
||||
#include <llvm/Config/llvm-config.h> // Y_IGNORE
|
||||
#include <llvm/IR/BasicBlock.h> // Y_IGNORE
|
||||
#include <llvm/IR/DataLayout.h> // Y_IGNORE
|
||||
#include <llvm/IR/DerivedTypes.h> // Y_IGNORE
|
||||
#include <llvm/IR/Function.h> // Y_IGNORE
|
||||
#include <llvm/IR/IRBuilder.h> // Y_IGNORE
|
||||
#include <llvm/IR/LLVMContext.h> // Y_IGNORE
|
||||
#include <llvm/IR/Mangler.h> // Y_IGNORE
|
||||
#include <llvm/IR/Module.h> // Y_IGNORE
|
||||
#include <llvm/IR/Type.h> // Y_IGNORE
|
||||
#include <llvm/ExecutionEngine/ExecutionEngine.h> // Y_IGNORE
|
||||
#include <llvm/ExecutionEngine/JITSymbol.h> // Y_IGNORE
|
||||
#include <llvm/ExecutionEngine/SectionMemoryManager.h> // Y_IGNORE
|
||||
#include <llvm/ExecutionEngine/Orc/CompileUtils.h> // Y_IGNORE
|
||||
#include <llvm/ExecutionEngine/Orc/IRCompileLayer.h> // Y_IGNORE
|
||||
#include <llvm/ExecutionEngine/Orc/RTDyldObjectLinkingLayer.h> // Y_IGNORE
|
||||
#include <llvm/Target/TargetMachine.h> // Y_IGNORE
|
||||
#include <llvm/MC/SubtargetFeature.h> // Y_IGNORE
|
||||
#include <llvm/Support/DynamicLibrary.h> // Y_IGNORE
|
||||
#include <llvm/Support/Host.h> // Y_IGNORE
|
||||
#include <llvm/Support/TargetRegistry.h> // Y_IGNORE
|
||||
#include <llvm/Support/TargetSelect.h> // Y_IGNORE
|
||||
#include <llvm/Transforms/IPO/PassManagerBuilder.h> // Y_IGNORE
|
||||
#include <llvm/Analysis/TargetTransformInfo.h>
|
||||
#include <llvm/Config/llvm-config.h>
|
||||
#include <llvm/IR/BasicBlock.h>
|
||||
#include <llvm/IR/DataLayout.h>
|
||||
#include <llvm/IR/DerivedTypes.h>
|
||||
#include <llvm/IR/Function.h>
|
||||
#include <llvm/IR/IRBuilder.h>
|
||||
#include <llvm/IR/LLVMContext.h>
|
||||
#include <llvm/IR/Mangler.h>
|
||||
#include <llvm/IR/Module.h>
|
||||
#include <llvm/IR/Type.h>
|
||||
#include <llvm/ExecutionEngine/ExecutionEngine.h>
|
||||
#include <llvm/ExecutionEngine/JITSymbol.h>
|
||||
#include <llvm/ExecutionEngine/SectionMemoryManager.h>
|
||||
#include <llvm/ExecutionEngine/Orc/CompileUtils.h>
|
||||
#include <llvm/ExecutionEngine/Orc/IRCompileLayer.h>
|
||||
#include <llvm/ExecutionEngine/Orc/RTDyldObjectLinkingLayer.h>
|
||||
#include <llvm/Target/TargetMachine.h>
|
||||
#include <llvm/MC/SubtargetFeature.h>
|
||||
#include <llvm/Support/DynamicLibrary.h>
|
||||
#include <llvm/Support/Host.h>
|
||||
#include <llvm/Support/TargetRegistry.h>
|
||||
#include <llvm/Support/TargetSelect.h>
|
||||
#include <llvm/Transforms/IPO/PassManagerBuilder.h>
|
||||
|
||||
#pragma GCC diagnostic pop
|
||||
|
||||
|
@ -293,8 +293,12 @@ void AlterCommand::apply(ColumnsDescription & columns_description, IndicesDescri
|
||||
});
|
||||
|
||||
if (erase_it == indices_description.indices.end())
|
||||
{
|
||||
if (if_exists)
|
||||
return;
|
||||
throw Exception("Wrong index name. Cannot find index `" + index_name + "` to drop.",
|
||||
ErrorCodes::LOGICAL_ERROR);
|
||||
ErrorCodes::LOGICAL_ERROR);
|
||||
}
|
||||
|
||||
indices_description.indices.erase(erase_it);
|
||||
}
|
||||
|
@@ -24,7 +24,7 @@ struct KafkaSettings : public SettingsCollection<KafkaSettings>
     M(SettingUInt64, kafka_num_consumers, 1, "The number of consumers per table for Kafka engine.") \
     M(SettingUInt64, kafka_max_block_size, 0, "The maximum block size per table for Kafka engine.") \
     M(SettingUInt64, kafka_skip_broken_messages, 0, "Skip at least this number of broken messages from Kafka topic per block") \
-    M(SettingUInt64, kafka_commit_every_batch, 1, "Commit every consumed and handled batch instead of a single commit after writing a whole block")
+    M(SettingUInt64, kafka_commit_every_batch, 0, "Commit every consumed and handled batch instead of a single commit after writing a whole block")

 DECLARE_SETTINGS_COLLECTION(LIST_OF_KAFKA_SETTINGS)
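The registerStorageKafka changes that follow make each positional engine argument fall back to the corresponding named setting, so a Kafka table can be declared with either style. An illustrative DDL sketch; the broker address, topic, group, and table definition are placeholders:

```sql
-- Named-settings form: anything not listed falls back to the KafkaSettings defaults,
-- e.g. kafka_commit_every_batch now defaults to 0 (one commit per written block).
CREATE TABLE queue (timestamp DateTime, message String)
ENGINE = Kafka
SETTINGS kafka_broker_list = 'localhost:9092',
         kafka_topic_list = 'events',
         kafka_group_name = 'clickhouse-consumer',
         kafka_format = 'JSONEachRow';
```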
@ -429,7 +429,7 @@ void registerStorageKafka(StorageFactory & factory)
|
||||
#undef CHECK_KAFKA_STORAGE_ARGUMENT
|
||||
|
||||
// Get and check broker list
|
||||
String brokers;
|
||||
String brokers = kafka_settings.kafka_broker_list.value;
|
||||
if (args_count >= 1)
|
||||
{
|
||||
const auto * ast = engine_args[0]->as<ASTLiteral>();
|
||||
@ -442,22 +442,15 @@ void registerStorageKafka(StorageFactory & factory)
|
||||
throw Exception(String("Kafka broker list must be a string"), ErrorCodes::BAD_ARGUMENTS);
|
||||
}
|
||||
}
|
||||
else if (kafka_settings.kafka_broker_list.changed)
|
||||
{
|
||||
brokers = kafka_settings.kafka_broker_list.value;
|
||||
}
|
||||
|
||||
// Get and check topic list
|
||||
String topic_list;
|
||||
String topic_list = kafka_settings.kafka_topic_list.value;
|
||||
if (args_count >= 2)
|
||||
{
|
||||
engine_args[1] = evaluateConstantExpressionAsLiteral(engine_args[1], args.local_context);
|
||||
topic_list = engine_args[1]->as<ASTLiteral &>().value.safeGet<String>();
|
||||
}
|
||||
else if (kafka_settings.kafka_topic_list.changed)
|
||||
{
|
||||
topic_list = kafka_settings.kafka_topic_list.value;
|
||||
}
|
||||
|
||||
Names topics;
|
||||
boost::split(topics, topic_list , [](char c){ return c == ','; });
|
||||
for (String & topic : topics)
|
||||
@ -466,19 +459,15 @@ void registerStorageKafka(StorageFactory & factory)
|
||||
}
|
||||
|
||||
// Get and check group name
|
||||
String group;
|
||||
String group = kafka_settings.kafka_group_name.value;
|
||||
if (args_count >= 3)
|
||||
{
|
||||
engine_args[2] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[2], args.local_context);
|
||||
group = engine_args[2]->as<ASTLiteral &>().value.safeGet<String>();
|
||||
}
|
||||
else if (kafka_settings.kafka_group_name.changed)
|
||||
{
|
||||
group = kafka_settings.kafka_group_name.value;
|
||||
}
|
||||
|
||||
// Get and check message format name
|
||||
String format;
|
||||
String format = kafka_settings.kafka_format.value;
|
||||
if (args_count >= 4)
|
||||
{
|
||||
engine_args[3] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[3], args.local_context);
|
||||
@ -493,13 +482,9 @@ void registerStorageKafka(StorageFactory & factory)
|
||||
throw Exception("Format must be a string", ErrorCodes::BAD_ARGUMENTS);
|
||||
}
|
||||
}
|
||||
else if (kafka_settings.kafka_format.changed)
|
||||
{
|
||||
format = kafka_settings.kafka_format.value;
|
||||
}
|
||||
|
||||
// Parse row delimiter (optional)
|
||||
char row_delimiter = '\0';
|
||||
char row_delimiter = kafka_settings.kafka_row_delimiter.value;
|
||||
if (args_count >= 5)
|
||||
{
|
||||
engine_args[4] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[4], args.local_context);
|
||||
@ -527,13 +512,9 @@ void registerStorageKafka(StorageFactory & factory)
|
||||
row_delimiter = arg[0];
|
||||
}
|
||||
}
|
||||
else if (kafka_settings.kafka_row_delimiter.changed)
|
||||
{
|
||||
row_delimiter = kafka_settings.kafka_row_delimiter.value;
|
||||
}
|
||||
|
||||
// Parse format schema if supported (optional)
|
||||
String schema;
|
||||
String schema = kafka_settings.kafka_schema.value;
|
||||
if (args_count >= 6)
|
||||
{
|
||||
engine_args[5] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[5], args.local_context);
|
||||
@ -548,13 +529,9 @@ void registerStorageKafka(StorageFactory & factory)
|
||||
throw Exception("Format schema must be a string", ErrorCodes::BAD_ARGUMENTS);
|
||||
}
|
||||
}
|
||||
else if (kafka_settings.kafka_schema.changed)
|
||||
{
|
||||
schema = kafka_settings.kafka_schema.value;
|
||||
}
|
||||
|
||||
// Parse number of consumers (optional)
|
||||
UInt64 num_consumers = 1;
|
||||
UInt64 num_consumers = kafka_settings.kafka_num_consumers.value;
|
||||
if (args_count >= 7)
|
||||
{
|
||||
const auto * ast = engine_args[6]->as<ASTLiteral>();
|
||||
@ -567,13 +544,9 @@ void registerStorageKafka(StorageFactory & factory)
|
||||
throw Exception("Number of consumers must be a positive integer", ErrorCodes::BAD_ARGUMENTS);
|
||||
}
|
||||
}
|
||||
else if (kafka_settings.kafka_num_consumers.changed)
|
||||
{
|
||||
num_consumers = kafka_settings.kafka_num_consumers.value;
|
||||
}
|
||||
|
||||
// Parse max block size (optional)
|
||||
UInt64 max_block_size = 0;
|
||||
UInt64 max_block_size = static_cast<size_t>(kafka_settings.kafka_max_block_size.value);
|
||||
if (args_count >= 8)
|
||||
{
|
||||
const auto * ast = engine_args[7]->as<ASTLiteral>();
|
||||
@ -587,12 +560,8 @@ void registerStorageKafka(StorageFactory & factory)
|
||||
throw Exception("Maximum block size must be a positive integer", ErrorCodes::BAD_ARGUMENTS);
|
||||
}
|
||||
}
|
||||
else if (kafka_settings.kafka_max_block_size.changed)
|
||||
{
|
||||
max_block_size = static_cast<size_t>(kafka_settings.kafka_max_block_size.value);
|
||||
}
|
||||
|
||||
size_t skip_broken = 0;
|
||||
size_t skip_broken = static_cast<size_t>(kafka_settings.kafka_skip_broken_messages.value);
|
||||
if (args_count >= 9)
|
||||
{
|
||||
const auto * ast = engine_args[8]->as<ASTLiteral>();
|
||||
@ -605,12 +574,8 @@ void registerStorageKafka(StorageFactory & factory)
|
||||
throw Exception("Number of broken messages to skip must be a non-negative integer", ErrorCodes::BAD_ARGUMENTS);
|
||||
}
|
||||
}
|
||||
else if (kafka_settings.kafka_skip_broken_messages.changed)
|
||||
{
|
||||
skip_broken = static_cast<size_t>(kafka_settings.kafka_skip_broken_messages.value);
|
||||
}
|
||||
|
||||
bool intermediate_commit = true;
|
||||
bool intermediate_commit = static_cast<bool>(kafka_settings.kafka_commit_every_batch);
|
||||
if (args_count >= 10)
|
||||
{
|
||||
const auto * ast = engine_args[9]->as<ASTLiteral>();
|
||||
@ -623,10 +588,6 @@ void registerStorageKafka(StorageFactory & factory)
|
||||
throw Exception("Flag for committing every batch must be 0 or 1", ErrorCodes::BAD_ARGUMENTS);
|
||||
}
|
||||
}
|
||||
else if (kafka_settings.kafka_commit_every_batch.changed)
|
||||
{
|
||||
intermediate_commit = static_cast<bool>(kafka_settings.kafka_commit_every_batch);
|
||||
}
|
||||
|
||||
return StorageKafka::create(
|
||||
args.table_name, args.database_name, args.context, args.columns,
|
||||
|
@@ -49,7 +49,7 @@ struct ReplicatedMergeTreeTableMetadata
     bool ttl_table_changed = false;
     String new_ttl_table;

-    bool empty() const { return !sorting_key_changed && !skip_indices_changed; }
+    bool empty() const { return !sorting_key_changed && !skip_indices_changed && !ttl_table_changed; }
 };

 Diff checkAndFindDiff(const ReplicatedMergeTreeTableMetadata & from_zk, bool allow_alter) const;
@ -3,12 +3,12 @@
|
||||
#if USE_HDFS
|
||||
|
||||
#include <Storages/StorageFactory.h>
|
||||
#include <Storages/StorageHDFS.h> // Y_IGNORE
|
||||
#include <Storages/StorageHDFS.h>
|
||||
#include <Interpreters/Context.h>
|
||||
#include <Interpreters/evaluateConstantExpression.h>
|
||||
#include <Parsers/ASTLiteral.h>
|
||||
#include <IO/ReadBufferFromHDFS.h> // Y_IGNORE
|
||||
#include <IO/WriteBufferFromHDFS.h> // Y_IGNORE
|
||||
#include <IO/ReadBufferFromHDFS.h>
|
||||
#include <IO/WriteBufferFromHDFS.h>
|
||||
#include <Formats/FormatFactory.h>
|
||||
#include <DataStreams/IBlockOutputStream.h>
|
||||
#include <DataStreams/UnionBlockInputStream.h>
|
||||
|
@ -4616,17 +4616,19 @@ void StorageReplicatedMergeTree::removePartsFromZooKeeper(
|
||||
zkutil::ZooKeeperPtr & zookeeper, const Strings & part_names, NameSet * parts_should_be_retried)
|
||||
{
|
||||
std::vector<std::future<Coordination::ExistsResponse>> exists_futures;
|
||||
exists_futures.reserve(part_names.size());
|
||||
for (const String & part_name : part_names)
|
||||
{
|
||||
String part_path = replica_path + "/parts/" + part_name;
|
||||
exists_futures.emplace_back(zookeeper->asyncExists(part_path));
|
||||
}
|
||||
|
||||
std::vector<std::future<Coordination::MultiResponse>> remove_futures;
|
||||
exists_futures.reserve(part_names.size());
|
||||
remove_futures.reserve(part_names.size());
|
||||
try
|
||||
{
|
||||
/// Exception can be thrown from loop
|
||||
/// if zk session will be dropped
|
||||
for (const String & part_name : part_names)
|
||||
{
|
||||
String part_path = replica_path + "/parts/" + part_name;
|
||||
exists_futures.emplace_back(zookeeper->asyncExists(part_path));
|
||||
}
|
||||
|
||||
for (size_t i = 0; i < part_names.size(); ++i)
|
||||
{
|
||||
Coordination::ExistsResponse exists_resp = exists_futures[i].get();
|
||||
|
@@ -1,9 +1,9 @@
#include <Common/config.h>

#if USE_HDFS
#include <Storages/StorageHDFS.h> // Y_IGNORE
#include <Storages/StorageHDFS.h>
#include <TableFunctions/TableFunctionFactory.h>
#include <TableFunctions/TableFunctionHDFS.h> // Y_IGNORE
#include <TableFunctions/TableFunctionHDFS.h>

namespace DB
{
@@ -9,7 +9,7 @@ endif ()

install (PROGRAMS clickhouse-test clickhouse-test-server DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse)
install (
DIRECTORY queries performance
DIRECTORY queries performance config
DESTINATION ${CMAKE_INSTALL_DATAROOTDIR}/clickhouse-test
USE_SOURCE_PERMISSIONS
COMPONENT clickhouse
@@ -19,7 +19,7 @@ from errno import ESRCH
import termcolor
from random import random
import commands
from multiprocessing import Pool
import multiprocessing
from contextlib import closing

@@ -369,6 +369,8 @@ def main(args):
continue

jobs = args.jobs
if jobs > tests_n:
jobs = tests_n
if jobs > run_total:
run_total = jobs

@@ -379,7 +381,7 @@ def main(args):
all_tests_array.append([all_tests[start : end], suite, suite_dir, suite_tmp_dir, run_total])

if jobs > 1:
with closing(Pool(processes=jobs)) as pool:
with closing(multiprocessing.Pool(processes=jobs)) as pool:
pool.map(run_tests_array, all_tests_array)
pool.terminate()
else:
@@ -432,7 +434,7 @@ if __name__ == '__main__':
parser.add_argument('--force-color', action='store_true', default=False)
parser.add_argument('--database', default='test', help='Default database for tests')
parser.add_argument('--parallel', default='1/1', help='One parallel test run number/total')
parser.add_argument('-j', '--jobs', default=1, help='Run all tests in parallel', type=int)
parser.add_argument('-j', '--jobs', default=1, help='Run all tests in parallel', type=int) # default=multiprocessing.cpu_count()

parser.add_argument('--no-stateless', action='store_true', help='Disable all stateless tests')
parser.add_argument('--no-stateful', action='store_true', help='Disable all stateful tests')
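
The `--jobs` hunks above cap the worker count at the number of tests and feed chunks of the suite to a multiprocessing pool. A condensed, hedged sketch of that scheduling (the chunking helper and test names are made up for illustration):

```python
# Simplified sketch of the parallel scheduling used by clickhouse-test above.
import multiprocessing
from contextlib import closing

def run_tests_array(batch):
    for test in batch:
        print("running %s" % test)

def run_in_parallel(all_tests, jobs):
    jobs = min(jobs, len(all_tests))            # never spawn more workers than tests
    step = (len(all_tests) + jobs - 1) // jobs  # split the suite into `jobs` chunks
    batches = [all_tests[i:i + step] for i in range(0, len(all_tests), step)]
    if jobs > 1:
        with closing(multiprocessing.Pool(processes=jobs)) as pool:
            pool.map(run_tests_array, batches)
            pool.terminate()
    else:
        run_tests_array(all_tests)

if __name__ == '__main__':
    run_in_parallel(['00061_alter', '00117_arrays', '00126_buffer'], jobs=2)
```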
@@ -7,7 +7,7 @@
<port>9000</port>
<user>default</user>
<password></password>
<db></db>
<db>test</db>
<table>decimals</table>
</clickhouse>
</source>
@@ -45,7 +45,7 @@
<port>9000</port>
<user>default</user>
<password></password>
<db></db>
<db>test</db>
<table>decimals</table>
</clickhouse>
</source>
@@ -83,7 +83,7 @@
<port>9000</port>
<user>default</user>
<password></password>
<db></db>
<db>test</db>
<table>decimals</table>
</clickhouse>
</source>
@@ -121,7 +121,7 @@
<port>9000</port>
<user>default</user>
<password></password>
<db></db>
<db>test</db>
<table>decimals</table>
</clickhouse>
</source>
@@ -162,7 +162,7 @@
<port>9000</port>
<user>default</user>
<password></password>
<db></db>
<db>test</db>
<table>decimals</table>
</clickhouse>
</source>

@@ -7,7 +7,7 @@
<port>9000</port>
<user>default</user>
<password></password>
<db></db>
<db>test</db>
<table>ints</table>
</clickhouse>
</source>
@@ -70,7 +70,7 @@
<port>9000</port>
<user>default</user>
<password></password>
<db></db>
<db>test</db>
<table>ints</table>
</clickhouse>
</source>
@@ -133,7 +133,7 @@
<port>9000</port>
<user>default</user>
<password></password>
<db></db>
<db>test</db>
<table>ints</table>
</clickhouse>
</source>
@@ -196,7 +196,7 @@
<port>9000</port>
<user>default</user>
<password></password>
<db></db>
<db>test</db>
<table>ints</table>
</clickhouse>
</source>
@@ -262,7 +262,7 @@
<port>9000</port>
<user>default</user>
<password></password>
<db></db>
<db>test</db>
<table>ints</table>
</clickhouse>
</source>

@@ -7,7 +7,7 @@
<port>9000</port>
<user>default</user>
<password></password>
<db></db>
<db>test</db>
<table>strings</table>
</clickhouse>
</source>
@@ -35,7 +35,7 @@
<port>9000</port>
<user>default</user>
<password></password>
<db></db>
<db>test</db>
<table>strings</table>
</clickhouse>
</source>
@@ -63,7 +63,7 @@
<port>9000</port>
<user>default</user>
<password></password>
<db></db>
<db>test</db>
<table>strings</table>
</clickhouse>
</source>
@@ -91,7 +91,7 @@
<port>9000</port>
<user>default</user>
<password></password>
<db></db>
<db>test</db>
<table>strings</table>
</clickhouse>
</source>
@@ -122,7 +122,7 @@
<port>9000</port>
<user>default</user>
<password></password>
<db></db>
<db>test</db>
<table>strings</table>
</clickhouse>
</source>
@@ -153,7 +153,7 @@
<port>9000</port>
<user>default</user>
<password></password>
<db></db>
<db>test</db>
<table>strings</table>
</clickhouse>
</source>
@@ -184,7 +184,7 @@
<port>9000</port>
<user>default</user>
<password></password>
<db></db>
<db>test</db>
<table>strings</table>
</clickhouse>
</source>

@@ -1 +1 @@
../../docker/test/stateless/decimals_dictionary.xml
../../dbms/tests/config/decimals_dictionary.xml
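
All of the dictionary-config hunks above make the same substitution: `<db></db>` becomes `<db>test</db>` for the `decimals`, `ints` and `strings` sources. If one wanted to stamp such blocks out from a single template instead of editing each file by hand, a throwaway sketch could look like this (the `<host>` value is an assumption, not taken from the diff):

```python
# Illustrative only: generate the repeated ClickHouse dictionary <source> blocks
# from one template so the <db> value stays consistent across files.
TEMPLATE = """<source>
    <clickhouse>
        <host>localhost</host>
        <port>9000</port>
        <user>default</user>
        <password></password>
        <db>{db}</db>
        <table>{table}</table>
    </clickhouse>
</source>"""

for table in ("decimals", "ints", "strings"):
    print(TEMPLATE.format(db="test", table=table))
```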
@@ -102,6 +102,7 @@ class ClickHouseCluster:
self.with_odbc_drivers = False
self.with_hdfs = False
self.with_mongo = False
self.with_net_trics = False

self.docker_client = None
self.is_up = False
@@ -136,12 +137,19 @@ class ClickHouseCluster:
env_variables=env_variables, image=image, stay_alive=stay_alive, ipv4_address=ipv4_address, ipv6_address=ipv6_address)

self.instances[name] = instance
if ipv4_address is not None or ipv6_address is not None:
self.with_net_trics = True
self.base_cmd.extend(['--file', p.join(HELPERS_DIR, 'docker_compose_net.yml')])

self.base_cmd.extend(['--file', instance.docker_compose_path])

cmds = []
if with_zookeeper and not self.with_zookeeper:
self.with_zookeeper = True
self.base_cmd.extend(['--file', p.join(HELPERS_DIR, 'docker_compose_zookeeper.yml')])
self.base_zookeeper_cmd = ['docker-compose', '--project-directory', self.base_dir, '--project-name',
self.project_name, '--file', p.join(HELPERS_DIR, 'docker_compose_zookeeper.yml')]
self.project_name, '--file', p.join(HELPERS_DIR, 'docker_compose_zookeeper.yml')]
cmds.append(self.base_zookeeper_cmd)

if with_mysql and not self.with_mysql:
self.with_mysql = True
@@ -149,11 +157,14 @@ class ClickHouseCluster:
self.base_mysql_cmd = ['docker-compose', '--project-directory', self.base_dir, '--project-name',
self.project_name, '--file', p.join(HELPERS_DIR, 'docker_compose_mysql.yml')]

cmds.append(self.base_mysql_cmd)

if with_postgres and not self.with_postgres:
self.with_postgres = True
self.base_cmd.extend(['--file', p.join(HELPERS_DIR, 'docker_compose_postgres.yml')])
self.base_postgres_cmd = ['docker-compose', '--project-directory', self.base_dir, '--project-name',
self.project_name, '--file', p.join(HELPERS_DIR, 'docker_compose_postgres.yml')]
cmds.append(self.base_postgres_cmd)

if with_odbc_drivers and not self.with_odbc_drivers:
self.with_odbc_drivers = True
@@ -162,29 +173,39 @@ class ClickHouseCluster:
self.base_cmd.extend(['--file', p.join(HELPERS_DIR, 'docker_compose_mysql.yml')])
self.base_mysql_cmd = ['docker-compose', '--project-directory', self.base_dir, '--project-name',
self.project_name, '--file', p.join(HELPERS_DIR, 'docker_compose_mysql.yml')]
cmds.append(self.base_mysql_cmd)

if not self.with_postgres:
self.with_postgres = True
self.base_cmd.extend(['--file', p.join(HELPERS_DIR, 'docker_compose_postgres.yml')])
self.base_postgres_cmd = ['docker-compose', '--project-directory', self.base_dir, '--project-name',
self.project_name, '--file', p.join(HELPERS_DIR, 'docker_compose_postgres.yml')]
cmds.append(self.base_postgres_cmd)

if with_kafka and not self.with_kafka:
self.with_kafka = True
self.base_cmd.extend(['--file', p.join(HELPERS_DIR, 'docker_compose_kafka.yml')])
self.base_kafka_cmd = ['docker-compose', '--project-directory', self.base_dir, '--project-name',
self.project_name, '--file', p.join(HELPERS_DIR, 'docker_compose_kafka.yml')]
cmds.append(self.base_kafka_cmd)

if with_hdfs and not self.with_hdfs:
self.with_hdfs = True
self.base_cmd.extend(['--file', p.join(HELPERS_DIR, 'docker_compose_hdfs.yml')])
self.base_hdfs_cmd = ['docker-compose', '--project-directory', self.base_dir, '--project-name',
self.project_name, '--file', p.join(HELPERS_DIR, 'docker_compose_hdfs.yml')]
cmds.append(self.base_hdfs_cmd)

if with_mongo and not self.with_mongo:
self.with_mongo = True
self.base_cmd.extend(['--file', p.join(HELPERS_DIR, 'docker_compose_mongo.yml')])
self.base_mongo_cmd = ['docker-compose', '--project-directory', self.base_dir, '--project-name',
self.project_name, '--file', p.join(HELPERS_DIR, 'docker_compose_mongo.yml')]
cmds.append(self.base_mongo_cmd)

if self.with_net_trics:
for cmd in cmds:
cmd.extend(['--file', p.join(HELPERS_DIR, 'docker_compose_net.yml')])

return instance

@@ -193,6 +214,32 @@ class ClickHouseCluster:
# According to how docker-compose names containers.
return self.project_name + '_' + instance_name + '_1'

def _replace(self, path, what, to):
with open(path, 'r') as p:
data = p.read()
data = data.replace(what, to)
with open(path, 'w') as p:
p.write(data)

def restart_instance_with_ip_change(self, node, new_ip):
if '::' in new_ip:
if node.ipv6_address is None:
raise Exception("You shoud specity ipv6_address in add_node method")
self._replace(node.docker_compose_path, node.ipv6_address, new_ip)
node.ipv6_address = new_ip
else:
if node.ipv4_address is None:
raise Exception("You shoud specity ipv4_address in add_node method")
self._replace(node.docker_compose_path, node.ipv4_address, new_ip)
node.ipv4_address = new_ip
subprocess.check_call(self.base_cmd + ["stop", node.name])
subprocess.check_call(self.base_cmd + ["rm", "--force", "--stop", node.name])
subprocess.check_call(self.base_cmd + ["up", "--force-recreate", "--no-deps", "-d", node.name])
node.ip_address = self.get_instance_ip(node.name)
node.client = Client(node.ip_address, command=self.client_bin_path)
start_deadline = time.time() + 20.0 # seconds
node.wait_for_start(start_deadline)
return node

def get_instance_ip(self, instance_name):
docker_id = self.get_instance_docker_id(instance_name)
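
The new `restart_instance_with_ip_change` helper rewrites the instance's docker-compose file with the new address, recreates only that container and waits for it to come back. A sketch of how a test is expected to call it, mirroring the `test_host_ip_change` test added later in this commit (fixture wiring abbreviated):

```python
# Sketch of intended usage; the fixtures and config files come from the real
# helpers.cluster module, and the addresses are the ones used by this commit's test.
from helpers.cluster import ClickHouseCluster

cluster = ClickHouseCluster(__file__)
node1 = cluster.add_instance('node1', with_zookeeper=True, ipv6_address='2001:3984:3989::1:1111')

def test_ip_change(start_cluster):
    # Re-render node1's docker-compose file with the new address and recreate
    # only that container; peers then need SYSTEM DROP DNS CACHE to reconnect.
    cluster.restart_instance_with_ip_change(node1, '2001:3984:3989::1:7777')
```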
@@ -397,20 +444,9 @@ services:
{app_net}
{ipv4_address}
{ipv6_address}

networks:
app_net:
driver: bridge
enable_ipv6: true
ipam:
driver: default
config:
- subnet: 10.5.0.0/12
gateway: 10.5.1.1
- subnet: 2001:3984:3989::/64
gateway: 2001:3984:3989::1
'''


class ClickHouseInstance:

def __init__(
@@ -527,7 +563,7 @@ class ClickHouseInstance:


def stop(self):
self.get_docker_handle().stop(self.default_timeout)
self.get_docker_handle().stop()


def start(self):
@@ -708,7 +744,7 @@ class ClickHouseInstance:
app_net = ""
else:
networks = "networks:"
app_net = "app_net:"
app_net = "default:"
if self.ipv4_address is not None:
ipv4_address = "ipv4_address: " + self.ipv4_address
if self.ipv6_address is not None:
dbms/tests/integration/helpers/docker_compose_net.yml (new file)
@@ -0,0 +1,11 @@
version: '2.2'
networks:
default:
driver: bridge
enable_ipv6: true
ipam:
config:
- subnet: 10.5.0.0/12
gateway: 10.5.1.1
- subnet: 2001:3984:3989::/64
gateway: 2001:3984:3989::1
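
The new `docker_compose_net.yml` pins the bridge network to `10.5.0.0/12` and `2001:3984:3989::/64`. A quick, purely illustrative check (Python 3 standard library) that the static addresses used by the tests in this commit fall inside those subnets:

```python
# Sanity-check sketch: the static IPv6 addresses assigned to test nodes must lie
# inside the subnets declared by docker_compose_net.yml above.
import ipaddress

subnets = [ipaddress.ip_network('10.5.0.0/12'), ipaddress.ip_network('2001:3984:3989::/64')]
for addr in ('2001:3984:3989::1:1111', '2001:3984:3989::1:1112', '2001:3984:3989::1:7777'):
    assert any(ipaddress.ip_address(addr) in net for net in subnets), addr
```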
@@ -1,5 +1,5 @@
# docker build -t yandex/clickhouse-integration-tests-runner .
FROM ubuntu:18.04
# yandex/clickhouse-integration-tests-runner

RUN apt-get update \
&& env DEBIAN_FRONTEND=noninteractive apt-get install --yes \

@@ -0,0 +1,3 @@
<yandex>
<listen_host>::</listen_host>
</yandex>
@@ -0,0 +1,14 @@
<yandex>
<remote_servers>
<test_cluster>
<shard>
<internal_replication>true</internal_replication>
<replica>
<default_database>shard_0</default_database>
<host>node1</host>
<port>9000</port>
</replica>
</shard>
</test_cluster>
</remote_servers>
</yandex>

dbms/tests/integration/test_host_ip_change/test.py (new file)
@@ -0,0 +1,66 @@
import time
import pytest

import subprocess
from helpers.cluster import ClickHouseCluster
from helpers.test_tools import assert_eq_with_retry


cluster = ClickHouseCluster(__file__)

node1 = cluster.add_instance('node1', main_configs=['configs/remote_servers.xml', 'configs/listen_host.xml'], with_zookeeper=True, ipv6_address='2001:3984:3989::1:1111')
node2 = cluster.add_instance('node2', main_configs=['configs/remote_servers.xml', 'configs/listen_host.xml'], with_zookeeper=True, ipv6_address='2001:3984:3989::1:1112')


@pytest.fixture(scope="module")
def start_cluster():
try:
cluster.start()

for node in [node1, node2]:
node.query(
'''
CREATE DATABASE IF NOT EXISTS test;
CREATE TABLE IF NOT EXISTS test_table(date Date, id UInt32)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated', '{}')
ORDER BY id PARTITION BY toYYYYMM(date);
'''.format(node.name)
)

yield cluster

except Exception as ex:
print ex

finally:
cluster.shutdown()
pass


def test_merge_doesnt_work_without_zookeeper(start_cluster):
# First we check, that normal replication works
node1.query("INSERT INTO test_table VALUES ('2018-10-01', 1), ('2018-10-02', 2), ('2018-10-03', 3)")
assert node1.query("SELECT count(*) from test_table") == "3\n"
assert_eq_with_retry(node2, "SELECT count(*) from test_table", "3")

# We change source node ip
cluster.restart_instance_with_ip_change(node1, "2001:3984:3989::1:7777")

# Put some data to source node1
node1.query("INSERT INTO test_table VALUES ('2018-10-01', 5), ('2018-10-02', 6), ('2018-10-03', 7)")
# Check that data is placed on node1
assert node1.query("SELECT count(*) from test_table") == "6\n"

# Because of DNS cache dest node2 cannot download data from node1
with pytest.raises(Exception):
assert_eq_with_retry(node2, "SELECT count(*) from test_table", "6")

# drop DNS cache
node2.query("SYSTEM DROP DNS CACHE")
# Data is downloaded
assert_eq_with_retry(node2, "SELECT count(*) from test_table", "6")

# Just to be sure check one more time
node1.query("INSERT INTO test_table VALUES ('2018-10-01', 8)")
assert node1.query("SELECT count(*) from test_table") == "7\n"
assert_eq_with_retry(node2, "SELECT count(*) from test_table", "7")
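
`assert_eq_with_retry` comes from `helpers/test_tools.py` and is not shown in this diff; roughly, it polls the query until the output matches or the retries run out. A minimal sketch of that idea (parameter names and defaults are guesses, not the real helper):

```python
# Rough idea of an assert_eq_with_retry helper; the actual implementation lives
# in dbms/tests/integration/helpers/test_tools.py and may differ.
import time

def assert_eq_with_retry(instance, query, expected, retries=20, sleep=0.5):
    for _ in range(retries):
        if instance.query(query).strip() == expected.strip():
            return
        time.sleep(sleep)
    raise AssertionError("%r did not converge to %r" % (query, expected))
```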
@@ -0,0 +1,14 @@
<yandex>
<remote_servers>
<test_cluster>
<shard>
<internal_replication>true</internal_replication>
<replica>
<default_database>shard_0</default_database>
<host>node1</host>
<port>9000</port>
</replica>
</shard>
</test_cluster>
</remote_servers>
</yandex>

dbms/tests/integration/test_parts_delete_zookeeper/test.py (new file)
@@ -0,0 +1,61 @@
import time
import pytest

from helpers.network import PartitionManager
from helpers.cluster import ClickHouseCluster
from helpers.test_tools import assert_eq_with_retry


cluster = ClickHouseCluster(__file__)
node1 = cluster.add_instance('node1', main_configs=['configs/remote_servers.xml'], with_zookeeper=True)


@pytest.fixture(scope="module")
def start_cluster():
try:
cluster.start()

node1.query(
'''
CREATE DATABASE test;
CREATE TABLE test_table(date Date, id UInt32)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/replicated', 'node1')
ORDER BY id PARTITION BY toYYYYMM(date) SETTINGS old_parts_lifetime=4, cleanup_delay_period=1;
'''
)

yield cluster

except Exception as ex:
print ex

finally:
cluster.shutdown()


# Test that outdated parts are not removed when they cannot be removed from zookeeper
def test_merge_doesnt_work_without_zookeeper(start_cluster):
node1.query("INSERT INTO test_table VALUES ('2018-10-01', 1), ('2018-10-02', 2), ('2018-10-03', 3)")
node1.query("INSERT INTO test_table VALUES ('2018-10-01', 4), ('2018-10-02', 5), ('2018-10-03', 6)")
assert node1.query("SELECT count(*) from system.parts where table = 'test_table'") == "2\n"

node1.query("OPTIMIZE TABLE test_table FINAL")
assert node1.query("SELECT count(*) from system.parts") == "3\n"

assert_eq_with_retry(node1, "SELECT count(*) from system.parts", "1")

node1.query("TRUNCATE TABLE test_table")

assert node1.query("SELECT count(*) from system.parts") == "0\n"

node1.query("INSERT INTO test_table VALUES ('2018-10-01', 1), ('2018-10-02', 2), ('2018-10-03', 3)")
node1.query("INSERT INTO test_table VALUES ('2018-10-01', 4), ('2018-10-02', 5), ('2018-10-03', 6)")
assert node1.query("SELECT count(*) from system.parts where table = 'test_table'") == "2\n"

with PartitionManager() as pm:
node1.query("OPTIMIZE TABLE test_table FINAL")
pm.drop_instance_zk_connections(node1)
time.sleep(10) # > old_parts_lifetime
assert node1.query("SELECT count(*) from system.parts") == "3\n"

assert_eq_with_retry(node1, "SELECT count(*) from system.parts", "1")
@@ -1 +1 @@
../../docker/test/stateless/ints_dictionary.xml
../../dbms/tests/config/ints_dictionary.xml

@@ -9,4 +9,3 @@ SELECT s, n.x, n.y FROM nested_test ARRAY JOIN nest AS n;
SELECT s, n.x, n.y, nest.x FROM nested_test ARRAY JOIN nest AS n;
SELECT s, n.x, n.y, nest.x, nest.y FROM nested_test ARRAY JOIN nest AS n;
SELECT s, n.x, n.y, nest.x, nest.y, num FROM nested_test ARRAY JOIN nest AS n, arrayEnumerate(nest.x) AS num;
DROP TABLE nested_test;

@@ -1,2 +1 @@
253984050
253984050

@@ -1,7 +1,6 @@
DROP TABLE IF EXISTS big_array;
CREATE TABLE big_array (x Array(UInt8)) ENGINE=TinyLog;
CREATE DATABASE IF NOT EXISTS test;
DROP TABLE IF EXISTS test.big_array;
CREATE TABLE test.big_array (x Array(UInt8)) ENGINE=TinyLog;
SET min_insert_block_size_rows = 0, min_insert_block_size_bytes = 0;
INSERT INTO big_array SELECT groupArray(number % 255) AS x FROM (SELECT * FROM system.numbers LIMIT 1000000);
SELECT sum(y) AS s FROM remote('127.0.0.{2,3}', currentDatabase(), big_array) ARRAY JOIN x AS y;
SELECT sum(s) FROM (SELECT y AS s FROM remote('127.0.0.{2,3}', currentDatabase(), big_array) ARRAY JOIN x AS y);
DROP TABLE big_array;
INSERT INTO test.big_array SELECT groupArray(number % 255) AS x FROM (SELECT * FROM system.numbers LIMIT 1000000);
SELECT sum(y) AS s FROM remote('127.0.0.{2,3}', test, big_array) ARRAY JOIN x AS y;

@@ -0,0 +1 @@
253984050

@@ -0,0 +1,2 @@
SELECT sum(s) FROM (SELECT y AS s FROM remote('127.0.0.{2,3}', test, big_array) ARRAY JOIN x AS y);
DROP TABLE test.big_array;
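
The `.sql`/`.reference` pairs being touched here are driven by `clickhouse-test`: each statement file is piped through `clickhouse-client` and the captured output is compared against the reference file. A simplified sketch of that check (paths and flags reduced to the essentials; not the actual harness code):

```python
# Simplified sketch of how a stateless .sql case is checked against its .reference file.
import subprocess

def run_case(sql_path, reference_path, database='test'):
    with open(sql_path) as sql:
        out = subprocess.check_output(
            ['clickhouse-client', '--database', database, '--multiquery'],
            stdin=sql)
    with open(reference_path, 'rb') as ref:
        return out == ref.read()  # a failing case shows up as a reference mismatch
```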
@ -1,14 +1,14 @@
|
||||
d Date
|
||||
k UInt64
|
||||
i32 Int32
|
||||
CREATE TABLE default.alter (`d` Date, `k` UInt64, `i32` Int32) ENGINE = MergeTree(d, k, 8192)
|
||||
CREATE TABLE test.alter_00061 (`d` Date, `k` UInt64, `i32` Int32) ENGINE = MergeTree(d, k, 8192)
|
||||
2015-01-01 10 42
|
||||
d Date
|
||||
k UInt64
|
||||
i32 Int32
|
||||
n.ui8 Array(UInt8)
|
||||
n.s Array(String)
|
||||
CREATE TABLE default.alter (`d` Date, `k` UInt64, `i32` Int32, `n.ui8` Array(UInt8), `n.s` Array(String)) ENGINE = MergeTree(d, k, 8192)
|
||||
CREATE TABLE test.alter_00061 (`d` Date, `k` UInt64, `i32` Int32, `n.ui8` Array(UInt8), `n.s` Array(String)) ENGINE = MergeTree(d, k, 8192)
|
||||
2015-01-01 8 40 [1,2,3] ['12','13','14']
|
||||
2015-01-01 10 42 [] []
|
||||
d Date
|
||||
@ -17,7 +17,7 @@ i32 Int32
|
||||
n.ui8 Array(UInt8)
|
||||
n.s Array(String)
|
||||
n.d Array(Date)
|
||||
CREATE TABLE default.alter (`d` Date, `k` UInt64, `i32` Int32, `n.ui8` Array(UInt8), `n.s` Array(String), `n.d` Array(Date)) ENGINE = MergeTree(d, k, 8192)
|
||||
CREATE TABLE test.alter_00061 (`d` Date, `k` UInt64, `i32` Int32, `n.ui8` Array(UInt8), `n.s` Array(String), `n.d` Array(Date)) ENGINE = MergeTree(d, k, 8192)
|
||||
2015-01-01 7 39 [10,20,30] ['120','130','140'] ['2000-01-01','2000-01-01','2000-01-03']
|
||||
2015-01-01 8 40 [1,2,3] ['12','13','14'] ['0000-00-00','0000-00-00','0000-00-00']
|
||||
2015-01-01 10 42 [] [] []
|
||||
@ -28,7 +28,7 @@ n.ui8 Array(UInt8)
|
||||
n.s Array(String)
|
||||
n.d Array(Date)
|
||||
s String DEFAULT \'0\'
|
||||
CREATE TABLE default.alter (`d` Date, `k` UInt64, `i32` Int32, `n.ui8` Array(UInt8), `n.s` Array(String), `n.d` Array(Date), `s` String DEFAULT \'0\') ENGINE = MergeTree(d, k, 8192)
|
||||
CREATE TABLE test.alter_00061 (`d` Date, `k` UInt64, `i32` Int32, `n.ui8` Array(UInt8), `n.s` Array(String), `n.d` Array(Date), `s` String DEFAULT \'0\') ENGINE = MergeTree(d, k, 8192)
|
||||
2015-01-01 6 38 [10,20,30] ['asd','qwe','qwe'] ['2000-01-01','2000-01-01','2000-01-03'] 100500
|
||||
2015-01-01 7 39 [10,20,30] ['120','130','140'] ['2000-01-01','2000-01-01','2000-01-03'] 0
|
||||
2015-01-01 8 40 [1,2,3] ['12','13','14'] ['0000-00-00','0000-00-00','0000-00-00'] 0
|
||||
@ -39,7 +39,7 @@ i32 Int32
|
||||
n.ui8 Array(UInt8)
|
||||
n.s Array(String)
|
||||
s Int64
|
||||
CREATE TABLE default.alter (`d` Date, `k` UInt64, `i32` Int32, `n.ui8` Array(UInt8), `n.s` Array(String), `s` Int64) ENGINE = MergeTree(d, k, 8192)
|
||||
CREATE TABLE test.alter_00061 (`d` Date, `k` UInt64, `i32` Int32, `n.ui8` Array(UInt8), `n.s` Array(String), `s` Int64) ENGINE = MergeTree(d, k, 8192)
|
||||
2015-01-01 6 38 [10,20,30] ['asd','qwe','qwe'] 100500
|
||||
2015-01-01 7 39 [10,20,30] ['120','130','140'] 0
|
||||
2015-01-01 8 40 [1,2,3] ['12','13','14'] 0
|
||||
@ -51,7 +51,7 @@ n.ui8 Array(UInt8)
|
||||
n.s Array(String)
|
||||
s UInt32
|
||||
n.d Array(Date)
|
||||
CREATE TABLE default.alter (`d` Date, `k` UInt64, `i32` Int32, `n.ui8` Array(UInt8), `n.s` Array(String), `s` UInt32, `n.d` Array(Date)) ENGINE = MergeTree(d, k, 8192)
|
||||
CREATE TABLE test.alter_00061 (`d` Date, `k` UInt64, `i32` Int32, `n.ui8` Array(UInt8), `n.s` Array(String), `s` UInt32, `n.d` Array(Date)) ENGINE = MergeTree(d, k, 8192)
|
||||
2015-01-01 6 38 [10,20,30] ['asd','qwe','qwe'] 100500 ['0000-00-00','0000-00-00','0000-00-00']
|
||||
2015-01-01 7 39 [10,20,30] ['120','130','140'] 0 ['0000-00-00','0000-00-00','0000-00-00']
|
||||
2015-01-01 8 40 [1,2,3] ['12','13','14'] 0 ['0000-00-00','0000-00-00','0000-00-00']
|
||||
@ -65,7 +65,7 @@ k UInt64
|
||||
i32 Int32
|
||||
n.s Array(String)
|
||||
s UInt32
|
||||
CREATE TABLE default.alter (`d` Date, `k` UInt64, `i32` Int32, `n.s` Array(String), `s` UInt32) ENGINE = MergeTree(d, k, 8192)
|
||||
CREATE TABLE test.alter_00061 (`d` Date, `k` UInt64, `i32` Int32, `n.s` Array(String), `s` UInt32) ENGINE = MergeTree(d, k, 8192)
|
||||
2015-01-01 6 38 ['asd','qwe','qwe'] 100500
|
||||
2015-01-01 7 39 ['120','130','140'] 0
|
||||
2015-01-01 8 40 ['12','13','14'] 0
|
||||
@ -74,7 +74,7 @@ d Date
|
||||
k UInt64
|
||||
i32 Int32
|
||||
s UInt32
|
||||
CREATE TABLE default.alter (`d` Date, `k` UInt64, `i32` Int32, `s` UInt32) ENGINE = MergeTree(d, k, 8192)
|
||||
CREATE TABLE test.alter_00061 (`d` Date, `k` UInt64, `i32` Int32, `s` UInt32) ENGINE = MergeTree(d, k, 8192)
|
||||
2015-01-01 6 38 100500
|
||||
2015-01-01 7 39 0
|
||||
2015-01-01 8 40 0
|
||||
@ -85,7 +85,7 @@ i32 Int32
|
||||
s UInt32
|
||||
n.s Array(String)
|
||||
n.d Array(Date)
|
||||
CREATE TABLE default.alter (`d` Date, `k` UInt64, `i32` Int32, `s` UInt32, `n.s` Array(String), `n.d` Array(Date)) ENGINE = MergeTree(d, k, 8192)
|
||||
CREATE TABLE test.alter_00061 (`d` Date, `k` UInt64, `i32` Int32, `s` UInt32, `n.s` Array(String), `n.d` Array(Date)) ENGINE = MergeTree(d, k, 8192)
|
||||
2015-01-01 6 38 100500 [] []
|
||||
2015-01-01 7 39 0 [] []
|
||||
2015-01-01 8 40 0 [] []
|
||||
@ -94,7 +94,7 @@ d Date
|
||||
k UInt64
|
||||
i32 Int32
|
||||
s UInt32
|
||||
CREATE TABLE default.alter (`d` Date, `k` UInt64, `i32` Int32, `s` UInt32) ENGINE = MergeTree(d, k, 8192)
|
||||
CREATE TABLE test.alter_00061 (`d` Date, `k` UInt64, `i32` Int32, `s` UInt32) ENGINE = MergeTree(d, k, 8192)
|
||||
2015-01-01 6 38 100500
|
||||
2015-01-01 7 39 0
|
||||
2015-01-01 8 40 0
|
||||
|
@ -1,71 +1,71 @@
|
||||
DROP TABLE IF EXISTS alter;
|
||||
CREATE TABLE alter (d Date, k UInt64, i32 Int32) ENGINE=MergeTree(d, k, 8192);
|
||||
DROP TABLE IF EXISTS test.alter_00061;
|
||||
CREATE TABLE test.alter_00061 (d Date, k UInt64, i32 Int32) ENGINE=MergeTree(d, k, 8192);
|
||||
|
||||
INSERT INTO alter VALUES ('2015-01-01', 10, 42);
|
||||
INSERT INTO test.alter_00061 VALUES ('2015-01-01', 10, 42);
|
||||
|
||||
DESC TABLE alter;
|
||||
SHOW CREATE TABLE alter;
|
||||
SELECT * FROM alter ORDER BY k;
|
||||
DESC TABLE test.alter_00061;
|
||||
SHOW CREATE TABLE test.alter_00061;
|
||||
SELECT * FROM test.alter_00061 ORDER BY k;
|
||||
|
||||
ALTER TABLE alter ADD COLUMN n Nested(ui8 UInt8, s String);
|
||||
INSERT INTO alter VALUES ('2015-01-01', 8, 40, [1,2,3], ['12','13','14']);
|
||||
ALTER TABLE test.alter_00061 ADD COLUMN n Nested(ui8 UInt8, s String);
|
||||
INSERT INTO test.alter_00061 VALUES ('2015-01-01', 8, 40, [1,2,3], ['12','13','14']);
|
||||
|
||||
DESC TABLE alter;
|
||||
SHOW CREATE TABLE alter;
|
||||
SELECT * FROM alter ORDER BY k;
|
||||
DESC TABLE test.alter_00061;
|
||||
SHOW CREATE TABLE test.alter_00061;
|
||||
SELECT * FROM test.alter_00061 ORDER BY k;
|
||||
|
||||
ALTER TABLE alter ADD COLUMN `n.d` Array(Date);
|
||||
INSERT INTO alter VALUES ('2015-01-01', 7, 39, [10,20,30], ['120','130','140'],['2000-01-01','2000-01-01','2000-01-03']);
|
||||
ALTER TABLE test.alter_00061 ADD COLUMN `n.d` Array(Date);
|
||||
INSERT INTO test.alter_00061 VALUES ('2015-01-01', 7, 39, [10,20,30], ['120','130','140'],['2000-01-01','2000-01-01','2000-01-03']);
|
||||
|
||||
DESC TABLE alter;
|
||||
SHOW CREATE TABLE alter;
|
||||
SELECT * FROM alter ORDER BY k;
|
||||
DESC TABLE test.alter_00061;
|
||||
SHOW CREATE TABLE test.alter_00061;
|
||||
SELECT * FROM test.alter_00061 ORDER BY k;
|
||||
|
||||
ALTER TABLE alter ADD COLUMN s String DEFAULT '0';
|
||||
INSERT INTO alter VALUES ('2015-01-01', 6,38,[10,20,30],['asd','qwe','qwe'],['2000-01-01','2000-01-01','2000-01-03'],'100500');
|
||||
ALTER TABLE test.alter_00061 ADD COLUMN s String DEFAULT '0';
|
||||
INSERT INTO test.alter_00061 VALUES ('2015-01-01', 6,38,[10,20,30],['asd','qwe','qwe'],['2000-01-01','2000-01-01','2000-01-03'],'100500');
|
||||
|
||||
DESC TABLE alter;
|
||||
SHOW CREATE TABLE alter;
|
||||
SELECT * FROM alter ORDER BY k;
|
||||
DESC TABLE test.alter_00061;
|
||||
SHOW CREATE TABLE test.alter_00061;
|
||||
SELECT * FROM test.alter_00061 ORDER BY k;
|
||||
|
||||
ALTER TABLE alter DROP COLUMN `n.d`, MODIFY COLUMN s Int64;
|
||||
ALTER TABLE test.alter_00061 DROP COLUMN `n.d`, MODIFY COLUMN s Int64;
|
||||
|
||||
DESC TABLE alter;
|
||||
SHOW CREATE TABLE alter;
|
||||
SELECT * FROM alter ORDER BY k;
|
||||
DESC TABLE test.alter_00061;
|
||||
SHOW CREATE TABLE test.alter_00061;
|
||||
SELECT * FROM test.alter_00061 ORDER BY k;
|
||||
|
||||
ALTER TABLE alter ADD COLUMN `n.d` Array(Date), MODIFY COLUMN s UInt32;
|
||||
ALTER TABLE test.alter_00061 ADD COLUMN `n.d` Array(Date), MODIFY COLUMN s UInt32;
|
||||
|
||||
DESC TABLE alter;
|
||||
SHOW CREATE TABLE alter;
|
||||
SELECT * FROM alter ORDER BY k;
|
||||
DESC TABLE test.alter_00061;
|
||||
SHOW CREATE TABLE test.alter_00061;
|
||||
SELECT * FROM test.alter_00061 ORDER BY k;
|
||||
|
||||
OPTIMIZE TABLE alter;
|
||||
OPTIMIZE TABLE test.alter_00061;
|
||||
|
||||
SELECT * FROM alter ORDER BY k;
|
||||
SELECT * FROM test.alter_00061 ORDER BY k;
|
||||
|
||||
ALTER TABLE alter DROP COLUMN n.ui8, DROP COLUMN n.d;
|
||||
ALTER TABLE test.alter_00061 DROP COLUMN n.ui8, DROP COLUMN n.d;
|
||||
|
||||
DESC TABLE alter;
|
||||
SHOW CREATE TABLE alter;
|
||||
SELECT * FROM alter ORDER BY k;
|
||||
DESC TABLE test.alter_00061;
|
||||
SHOW CREATE TABLE test.alter_00061;
|
||||
SELECT * FROM test.alter_00061 ORDER BY k;
|
||||
|
||||
ALTER TABLE alter DROP COLUMN n.s;
|
||||
ALTER TABLE test.alter_00061 DROP COLUMN n.s;
|
||||
|
||||
DESC TABLE alter;
|
||||
SHOW CREATE TABLE alter;
|
||||
SELECT * FROM alter ORDER BY k;
|
||||
DESC TABLE test.alter_00061;
|
||||
SHOW CREATE TABLE test.alter_00061;
|
||||
SELECT * FROM test.alter_00061 ORDER BY k;
|
||||
|
||||
ALTER TABLE alter ADD COLUMN n.s Array(String), ADD COLUMN n.d Array(Date);
|
||||
ALTER TABLE test.alter_00061 ADD COLUMN n.s Array(String), ADD COLUMN n.d Array(Date);
|
||||
|
||||
DESC TABLE alter;
|
||||
SHOW CREATE TABLE alter;
|
||||
SELECT * FROM alter ORDER BY k;
|
||||
DESC TABLE test.alter_00061;
|
||||
SHOW CREATE TABLE test.alter_00061;
|
||||
SELECT * FROM test.alter_00061 ORDER BY k;
|
||||
|
||||
ALTER TABLE alter DROP COLUMN n;
|
||||
ALTER TABLE test.alter_00061 DROP COLUMN n;
|
||||
|
||||
DESC TABLE alter;
|
||||
SHOW CREATE TABLE alter;
|
||||
SELECT * FROM alter ORDER BY k;
|
||||
DESC TABLE test.alter_00061;
|
||||
SHOW CREATE TABLE test.alter_00061;
|
||||
SELECT * FROM test.alter_00061 ORDER BY k;
|
||||
|
||||
DROP TABLE alter;
|
||||
DROP TABLE test.alter_00061;
|
||||
|
@@ -14,6 +14,3 @@ CREATE TABLE check_query_log (N UInt32,S String) Engine = Log;
INSERT INTO check_query_log VALUES (1, 'A'), (2, 'B'), (3, 'C')

CHECK TABLE check_query_log;

DROP TABLE check_query_log;
DROP TABLE check_query_tiny_log;

@@ -125,4 +125,3 @@ CREATE TABLE addresses(addr String) ENGINE = Memory;
INSERT INTO addresses(addr) VALUES ('00000000000000000000FFFFC1FC110A'), ('00000000000000000000FFFF4D583737'), ('00000000000000000000FFFF7F000001');
SELECT cutIPv6(toFixedString(unhex(addr), 16), 0, 3) FROM addresses ORDER BY addr ASC;

DROP TABLE addresses;

@@ -39,5 +39,3 @@ INSERT INTO summing (k, s) VALUES (0, 1), (666, 1), (666, 0);
OPTIMIZE TABLE summing PARTITION 197001;

SELECT k, s FROM summing ORDER BY k;

DROP TABLE summing;

@@ -3,4 +3,4 @@
CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
[ "$NO_SHELL_CONFIG" ] || . $CURDIR/../shell_config.sh

seq 1 1000 | sed -r 's/.+/CREATE TABLE IF NOT EXISTS test.buf (a UInt8) ENGINE = Buffer(test, b, 1, 1, 1, 1, 1, 1, 1); DROP TABLE test.buf;/' | $CLICKHOUSE_CLIENT -n
seq 1 1000 | sed -r 's/.+/CREATE TABLE IF NOT EXISTS buf_00097 (a UInt8) ENGINE = Buffer(currentDatabase(), b, 1, 1, 1, 1, 1, 1, 1); DROP TABLE buf_00097;/' | $CLICKHOUSE_CLIENT -n
@ -1,13 +1,13 @@
|
||||
DROP TABLE IF EXISTS report1;
|
||||
DROP TABLE IF EXISTS report2;
|
||||
DROP TABLE IF EXISTS test.report1;
|
||||
DROP TABLE IF EXISTS test.report2;
|
||||
|
||||
CREATE TABLE report1(id UInt32, event_date Date, priority UInt32, description String) ENGINE = MergeTree(event_date, intHash32(id), (id, event_date, intHash32(id)), 8192);
|
||||
CREATE TABLE report2(id UInt32, event_date Date, priority UInt32, description String) ENGINE = MergeTree(event_date, intHash32(id), (id, event_date, intHash32(id)), 8192);
|
||||
CREATE TABLE test.report1(id UInt32, event_date Date, priority UInt32, description String) ENGINE = MergeTree(event_date, intHash32(id), (id, event_date, intHash32(id)), 8192);
|
||||
CREATE TABLE test.report2(id UInt32, event_date Date, priority UInt32, description String) ENGINE = MergeTree(event_date, intHash32(id), (id, event_date, intHash32(id)), 8192);
|
||||
|
||||
INSERT INTO report1(id,event_date,priority,description) VALUES (1, '2015-01-01', 1, 'foo')(2, '2015-02-01', 2, 'bar')(3, '2015-03-01', 3, 'foo')(4, '2015-04-01', 4, 'bar')(5, '2015-05-01', 5, 'foo');
|
||||
INSERT INTO report2(id,event_date,priority,description) VALUES (1, '2016-01-01', 6, 'bar')(2, '2016-02-01', 7, 'foo')(3, '2016-03-01', 8, 'bar')(4, '2016-04-01', 9, 'foo')(5, '2016-05-01', 10, 'bar');
|
||||
INSERT INTO test.report1(id,event_date,priority,description) VALUES (1, '2015-01-01', 1, 'foo')(2, '2015-02-01', 2, 'bar')(3, '2015-03-01', 3, 'foo')(4, '2015-04-01', 4, 'bar')(5, '2015-05-01', 5, 'foo');
|
||||
INSERT INTO test.report2(id,event_date,priority,description) VALUES (1, '2016-01-01', 6, 'bar')(2, '2016-02-01', 7, 'foo')(3, '2016-03-01', 8, 'bar')(4, '2016-04-01', 9, 'foo')(5, '2016-05-01', 10, 'bar');
|
||||
|
||||
SELECT * FROM (SELECT id, event_date, priority, description FROM remote('127.0.0.{2,3}', currentDatabase(), report1) UNION ALL SELECT id, event_date, priority, description FROM remote('127.0.0.{2,3}', currentDatabase(), report2)) ORDER BY id, event_date ASC;
|
||||
SELECT * FROM (SELECT id, event_date, priority, description FROM remote('127.0.0.{2,3}', test, report1) UNION ALL SELECT id, event_date, priority, description FROM remote('127.0.0.{2,3}', test, report2)) ORDER BY id, event_date ASC;
|
||||
|
||||
DROP TABLE report1;
|
||||
DROP TABLE report2;
|
||||
DROP TABLE test.report1;
|
||||
DROP TABLE test.report2;
|
||||
|
@ -1,26 +1,24 @@
|
||||
DROP TABLE IF EXISTS test_table;
|
||||
DROP TABLE IF EXISTS test_view;
|
||||
DROP TABLE IF EXISTS test_view_filtered;
|
||||
DROP TABLE IF EXISTS default.test_table;
|
||||
DROP TABLE IF EXISTS default.test_view;
|
||||
DROP TABLE IF EXISTS default.test_view_filtered;
|
||||
|
||||
CREATE TABLE test_table (EventDate Date, CounterID UInt32, UserID UInt64, EventTime DateTime, UTCEventTime DateTime) ENGINE = MergeTree(EventDate, CounterID, 8192);
|
||||
CREATE MATERIALIZED VIEW test_view (Rows UInt64, MaxHitTime DateTime) ENGINE = Memory AS SELECT count() AS Rows, max(UTCEventTime) AS MaxHitTime FROM test_table;
|
||||
CREATE MATERIALIZED VIEW test_view_filtered (EventDate Date, CounterID UInt32) ENGINE = Memory POPULATE AS SELECT CounterID, EventDate FROM test_table WHERE EventDate < '2013-01-01';
|
||||
CREATE TABLE default.test_table (EventDate Date, CounterID UInt32, UserID UInt64, EventTime DateTime, UTCEventTime DateTime) ENGINE = MergeTree(EventDate, CounterID, 8192);
|
||||
CREATE MATERIALIZED VIEW default.test_view (Rows UInt64, MaxHitTime DateTime) ENGINE = Memory AS SELECT count() AS Rows, max(UTCEventTime) AS MaxHitTime FROM default.test_table;
|
||||
CREATE MATERIALIZED VIEW default.test_view_filtered (EventDate Date, CounterID UInt32) ENGINE = Memory POPULATE AS SELECT CounterID, EventDate FROM default.test_table WHERE EventDate < '2013-01-01';
|
||||
|
||||
INSERT INTO test_table (EventDate, UTCEventTime) VALUES ('2014-01-02', '2014-01-02 03:04:06');
|
||||
INSERT INTO default.test_table (EventDate, UTCEventTime) VALUES ('2014-01-02', '2014-01-02 03:04:06');
|
||||
|
||||
SELECT * FROM test_table;
|
||||
SELECT * FROM test_view;
|
||||
SELECT * FROM test_view_filtered;
|
||||
SELECT * FROM default.test_table;
|
||||
SELECT * FROM default.test_view;
|
||||
SELECT * FROM default.test_view_filtered;
|
||||
|
||||
DROP TABLE test_table;
|
||||
DROP TABLE test_view;
|
||||
DROP TABLE test_view_filtered;
|
||||
DROP TABLE default.test_table;
|
||||
DROP TABLE default.test_view;
|
||||
DROP TABLE default.test_view_filtered;
|
||||
|
||||
-- Check only sophisticated constructors and desctructors:
|
||||
|
||||
CREATE DATABASE IF NOT EXISTS test00101;
|
||||
USE test00101;
|
||||
|
||||
USE test;
|
||||
DROP TABLE IF EXISTS tmp;
|
||||
DROP TABLE IF EXISTS tmp_mv;
|
||||
DROP TABLE IF EXISTS tmp_mv2;
|
||||
@ -46,5 +44,3 @@ EXISTS TABLE `.inner.tmp_mv`;
|
||||
EXISTS TABLE `.inner.tmp_mv2`;
|
||||
EXISTS TABLE `.inner.tmp_mv3`;
|
||||
EXISTS TABLE `.inner.tmp_mv4`;
|
||||
|
||||
DROP DATABASE test00101;
|
||||
|
@ -2,19 +2,19 @@ SET max_rows_to_group_by = 100000;
|
||||
SET max_block_size = 100001;
|
||||
SET group_by_overflow_mode = 'any';
|
||||
|
||||
DROP TABLE IF EXISTS numbers500k;
|
||||
CREATE VIEW numbers500k AS SELECT number FROM system.numbers LIMIT 500000;
|
||||
DROP TABLE IF EXISTS test.numbers500k;
|
||||
CREATE VIEW test.numbers500k AS SELECT number FROM system.numbers LIMIT 500000;
|
||||
|
||||
SET totals_mode = 'after_having_auto';
|
||||
SELECT intDiv(number, 2) AS k, count(), argMax(toString(number), number) FROM remote('127.0.0.{2,3}', currentDatabase(), numbers500k) GROUP BY k WITH TOTALS ORDER BY k LIMIT 10;
|
||||
SELECT intDiv(number, 2) AS k, count(), argMax(toString(number), number) FROM remote('127.0.0.{2,3}', test, numbers500k) GROUP BY k WITH TOTALS ORDER BY k LIMIT 10;
|
||||
|
||||
SET totals_mode = 'after_having_inclusive';
|
||||
SELECT intDiv(number, 2) AS k, count(), argMax(toString(number), number) FROM remote('127.0.0.{2,3}', currentDatabase(), numbers500k) GROUP BY k WITH TOTALS ORDER BY k LIMIT 10;
|
||||
SELECT intDiv(number, 2) AS k, count(), argMax(toString(number), number) FROM remote('127.0.0.{2,3}', test, numbers500k) GROUP BY k WITH TOTALS ORDER BY k LIMIT 10;
|
||||
|
||||
SET totals_mode = 'after_having_exclusive';
|
||||
SELECT intDiv(number, 2) AS k, count(), argMax(toString(number), number) FROM remote('127.0.0.{2,3}', currentDatabase(), numbers500k) GROUP BY k WITH TOTALS ORDER BY k LIMIT 10;
|
||||
SELECT intDiv(number, 2) AS k, count(), argMax(toString(number), number) FROM remote('127.0.0.{2,3}', test, numbers500k) GROUP BY k WITH TOTALS ORDER BY k LIMIT 10;
|
||||
|
||||
SET totals_mode = 'before_having';
|
||||
SELECT intDiv(number, 2) AS k, count(), argMax(toString(number), number) FROM remote('127.0.0.{2,3}', currentDatabase(), numbers500k) GROUP BY k WITH TOTALS ORDER BY k LIMIT 10;
|
||||
SELECT intDiv(number, 2) AS k, count(), argMax(toString(number), number) FROM remote('127.0.0.{2,3}', test, numbers500k) GROUP BY k WITH TOTALS ORDER BY k LIMIT 10;
|
||||
|
||||
DROP TABLE numbers500k;
|
||||
DROP TABLE test.numbers500k;
|
||||
|
@ -1,9 +1,9 @@
|
||||
SET max_memory_usage = 100000000;
|
||||
SET max_bytes_before_external_sort = 20000000;
|
||||
|
||||
DROP TABLE IF EXISTS numbers10m;
|
||||
CREATE VIEW numbers10m AS SELECT number FROM system.numbers LIMIT 10000000;
|
||||
DROP TABLE IF EXISTS test.numbers10m;
|
||||
CREATE VIEW test.numbers10m AS SELECT number FROM system.numbers LIMIT 10000000;
|
||||
|
||||
SELECT number FROM remote('127.0.0.{2,3}', currentDatabase(), numbers10m) ORDER BY number * 1234567890123456789 LIMIT 19999980, 20;
|
||||
SELECT number FROM remote('127.0.0.{2,3}', test, numbers10m) ORDER BY number * 1234567890123456789 LIMIT 19999980, 20;
|
||||
|
||||
DROP TABLE numbers10m;
|
||||
DROP TABLE test.numbers10m;
|
||||
|
@ -4,34 +4,34 @@ SELECT '';
|
||||
SELECT length(toString(groupArrayState(toDate(number)))) FROM (SELECT * FROM system.numbers LIMIT 10);
|
||||
SELECT length(toString(groupArrayState(toDateTime(number)))) FROM (SELECT * FROM system.numbers LIMIT 10);
|
||||
|
||||
DROP TABLE IF EXISTS numbers_mt;
|
||||
CREATE TABLE numbers_mt (number UInt64) ENGINE = Log;
|
||||
INSERT INTO numbers_mt SELECT * FROM system.numbers LIMIT 1, 1000000;
|
||||
DROP TABLE IF EXISTS test.numbers_mt;
|
||||
CREATE TABLE test.numbers_mt (number UInt64) ENGINE = Log;
|
||||
INSERT INTO test.numbers_mt SELECT * FROM system.numbers LIMIT 1, 1000000;
|
||||
|
||||
SELECT count(), sum(ns), max(ns) FROM (SELECT intDiv(number, 100) AS k, groupArray(number) AS ns FROM numbers_mt GROUP BY k) ARRAY JOIN ns;
|
||||
SELECT count(), sum(toUInt64(ns)), max(toUInt64(ns)) FROM (SELECT intDiv(number, 100) AS k, groupArray(toString(number)) AS ns FROM numbers_mt GROUP BY k) ARRAY JOIN ns;
|
||||
SELECT count(), sum(toUInt64(ns[1])), max(toUInt64(ns[1])), sum(toUInt64(ns[2]))/10 FROM (SELECT intDiv(number, 100) AS k, groupArray([toString(number), toString(number*10)]) AS ns FROM numbers_mt GROUP BY k) ARRAY JOIN ns;
|
||||
SELECT count(), sum(ns[1]), max(ns[1]), sum(ns[2])/10 FROM (SELECT intDiv(number, 100) AS k, groupArray([number, number*10]) AS ns FROM numbers_mt GROUP BY k) ARRAY JOIN ns;
|
||||
SELECT count(), sum(ns), max(ns) FROM (SELECT intDiv(number, 100) AS k, groupArray(number) AS ns FROM test.numbers_mt GROUP BY k) ARRAY JOIN ns;
|
||||
SELECT count(), sum(toUInt64(ns)), max(toUInt64(ns)) FROM (SELECT intDiv(number, 100) AS k, groupArray(toString(number)) AS ns FROM test.numbers_mt GROUP BY k) ARRAY JOIN ns;
|
||||
SELECT count(), sum(toUInt64(ns[1])), max(toUInt64(ns[1])), sum(toUInt64(ns[2]))/10 FROM (SELECT intDiv(number, 100) AS k, groupArray([toString(number), toString(number*10)]) AS ns FROM test.numbers_mt GROUP BY k) ARRAY JOIN ns;
|
||||
SELECT count(), sum(ns[1]), max(ns[1]), sum(ns[2])/10 FROM (SELECT intDiv(number, 100) AS k, groupArray([number, number*10]) AS ns FROM test.numbers_mt GROUP BY k) ARRAY JOIN ns;
|
||||
|
||||
SELECT count(), sum(ns), max(ns) FROM (SELECT intDiv(number, 100) AS k, groupArray(number) AS ns FROM remote('127.0.0.{2,3}', currentDatabase(), 'numbers_mt') GROUP BY k) ARRAY JOIN ns;
|
||||
SELECT count(), sum(toUInt64(ns)), max(toUInt64(ns)) FROM (SELECT intDiv(number, 100) AS k, groupArray(toString(number)) AS ns FROM remote('127.0.0.{2,3}', currentDatabase(), 'numbers_mt') GROUP BY k) ARRAY JOIN ns;
|
||||
SELECT count(), sum(toUInt64(ns[1])), max(toUInt64(ns[1])), sum(toUInt64(ns[2]))/10 FROM (SELECT intDiv(number, 100) AS k, groupArray([toString(number), toString(number*10)]) AS ns FROM remote('127.0.0.{2,3}', currentDatabase(), 'numbers_mt') GROUP BY k) ARRAY JOIN ns;
|
||||
SELECT count(), sum(ns), max(ns) FROM (SELECT intDiv(number, 100) AS k, groupArray(number) AS ns FROM remote('127.0.0.{2,3}', 'test', 'numbers_mt') GROUP BY k) ARRAY JOIN ns;
|
||||
SELECT count(), sum(toUInt64(ns)), max(toUInt64(ns)) FROM (SELECT intDiv(number, 100) AS k, groupArray(toString(number)) AS ns FROM remote('127.0.0.{2,3}', 'test', 'numbers_mt') GROUP BY k) ARRAY JOIN ns;
|
||||
SELECT count(), sum(toUInt64(ns[1])), max(toUInt64(ns[1])), sum(toUInt64(ns[2]))/10 FROM (SELECT intDiv(number, 100) AS k, groupArray([toString(number), toString(number*10)]) AS ns FROM remote('127.0.0.{2,3}', 'test', 'numbers_mt') GROUP BY k) ARRAY JOIN ns;
|
||||
|
||||
DROP TABLE numbers_mt;
|
||||
CREATE TABLE numbers_mt (number UInt64) ENGINE = Log;
|
||||
INSERT INTO numbers_mt SELECT * FROM system.numbers LIMIT 1, 1048575;
|
||||
DROP TABLE test.numbers_mt;
|
||||
CREATE TABLE test.numbers_mt (number UInt64) ENGINE = Log;
|
||||
INSERT INTO test.numbers_mt SELECT * FROM system.numbers LIMIT 1, 1048575;
|
||||
|
||||
SELECT '';
|
||||
SELECT roundToExp2(number) AS k, length(groupArray(1)(number AS i)), length(groupArray(1024)(i)), length(groupArray(65536)(i)) AS s FROM numbers_mt GROUP BY k ORDER BY k LIMIT 9, 11;
|
||||
SELECT roundToExp2(number) AS k, length(groupArray(1)(hex(number) AS i)), length(groupArray(1024)(i)), length(groupArray(65536)(i)) AS s FROM numbers_mt GROUP BY k ORDER BY k LIMIT 9, 11;
|
||||
SELECT roundToExp2(number) AS k, length(groupArray(1)([hex(number)] AS i)), length(groupArray(1024)(i)), length(groupArray(65536)(i)) AS s FROM numbers_mt GROUP BY k ORDER BY k LIMIT 9, 11;
|
||||
SELECT roundToExp2(number) AS k, length(groupArray(1)(number AS i)), length(groupArray(1024)(i)), length(groupArray(65536)(i)) AS s FROM test.numbers_mt GROUP BY k ORDER BY k LIMIT 9, 11;
|
||||
SELECT roundToExp2(number) AS k, length(groupArray(1)(hex(number) AS i)), length(groupArray(1024)(i)), length(groupArray(65536)(i)) AS s FROM test.numbers_mt GROUP BY k ORDER BY k LIMIT 9, 11;
|
||||
SELECT roundToExp2(number) AS k, length(groupArray(1)([hex(number)] AS i)), length(groupArray(1024)(i)), length(groupArray(65536)(i)) AS s FROM test.numbers_mt GROUP BY k ORDER BY k LIMIT 9, 11;
|
||||
|
||||
SELECT '';
|
||||
SELECT roundToExp2(number) AS k, length(groupArray(1)(number AS i)), length(groupArray(1500)(i)), length(groupArray(70000)(i)) AS s FROM remote('127.0.0.{2,3}', currentDatabase(), 'numbers_mt') GROUP BY k ORDER BY k LIMIT 9, 11;
|
||||
SELECT roundToExp2(number) AS k, length(groupArray(1)(hex(number) AS i)), length(groupArray(1500)(i)), length(groupArray(70000)(i)) AS s FROM remote('127.0.0.{2,3}', currentDatabase(), 'numbers_mt') GROUP BY k ORDER BY k LIMIT 9, 11;
|
||||
SELECT roundToExp2(number) AS k, length(groupArray(1)([hex(number)] AS i)), length(groupArray(1500)(i)), length(groupArray(70000)(i)) AS s FROM remote('127.0.0.{2,3}', currentDatabase(), 'numbers_mt') GROUP BY k ORDER BY k LIMIT 9, 11;
|
||||
SELECT roundToExp2(number) AS k, length(groupArray(1)(number AS i)), length(groupArray(1500)(i)), length(groupArray(70000)(i)) AS s FROM remote('127.0.0.{2,3}', 'test', 'numbers_mt') GROUP BY k ORDER BY k LIMIT 9, 11;
|
||||
SELECT roundToExp2(number) AS k, length(groupArray(1)(hex(number) AS i)), length(groupArray(1500)(i)), length(groupArray(70000)(i)) AS s FROM remote('127.0.0.{2,3}', 'test', 'numbers_mt') GROUP BY k ORDER BY k LIMIT 9, 11;
|
||||
SELECT roundToExp2(number) AS k, length(groupArray(1)([hex(number)] AS i)), length(groupArray(1500)(i)), length(groupArray(70000)(i)) AS s FROM remote('127.0.0.{2,3}', 'test', 'numbers_mt') GROUP BY k ORDER BY k LIMIT 9, 11;
|
||||
|
||||
DROP TABLE numbers_mt;
|
||||
DROP TABLE test.numbers_mt;
|
||||
|
||||
-- Check binary compatibility:
|
||||
-- clickhouse-client -h old -q "SELECT arrayReduce('groupArrayState', [['1'], ['22'], ['333']]) FORMAT RowBinary" | clickhouse-local -s --input-format RowBinary --structure "d AggregateFunction(groupArray2, Array(String))" -q "SELECT groupArray2Merge(d) FROM table"
|
||||
|
@ -1,7 +1,9 @@
|
||||
DROP TABLE IF EXISTS set;
|
||||
DROP TABLE IF EXISTS set2;
|
||||
DROP TABLE IF EXISTS test.set;
|
||||
DROP TABLE IF EXISTS test.set2;
|
||||
|
||||
CREATE TABLE set (x String) ENGINE = Set;
|
||||
CREATE TABLE test.set (x String) ENGINE = Set;
|
||||
|
||||
USE test;
|
||||
|
||||
SELECT arrayJoin(['Hello', 'test', 'World', 'world', 'abc', 'xyz']) AS s WHERE s IN set;
|
||||
SELECT arrayJoin(['Hello', 'test', 'World', 'world', 'abc', 'xyz']) AS s WHERE s NOT IN set;
|
||||
@ -12,10 +14,10 @@ SELECT arrayJoin(['Hello', 'test', 'World', 'world', 'abc', 'xyz']) AS s WHERE s
|
||||
RENAME TABLE set TO set2;
|
||||
SELECT arrayJoin(['Hello', 'test', 'World', 'world', 'abc', 'xyz']) AS s WHERE s IN set2;
|
||||
|
||||
INSERT INTO set2 VALUES ('Hello'), ('World');
|
||||
INSERT INTO test.set2 VALUES ('Hello'), ('World');
|
||||
SELECT arrayJoin(['Hello', 'test', 'World', 'world', 'abc', 'xyz']) AS s WHERE s IN set2;
|
||||
|
||||
INSERT INTO set2 VALUES ('abc'), ('World');
|
||||
INSERT INTO test.set2 VALUES ('abc'), ('World');
|
||||
SELECT arrayJoin(['Hello', 'test', 'World', 'world', 'abc', 'xyz']) AS s WHERE s IN set2;
|
||||
|
||||
DETACH TABLE set2;
|
||||
@ -26,4 +28,6 @@ SELECT arrayJoin(['Hello', 'test', 'World', 'world', 'abc', 'xyz']) AS s WHERE s
|
||||
RENAME TABLE set2 TO set;
|
||||
SELECT arrayJoin(['Hello', 'test', 'World', 'world', 'abc', 'xyz']) AS s WHERE s IN set;
|
||||
|
||||
DROP TABLE set;
|
||||
USE default;
|
||||
|
||||
DROP TABLE test.set;
|
||||
|
@ -1,10 +1,10 @@
|
||||
DROP TABLE IF EXISTS null;
|
||||
CREATE TABLE null (a Array(UInt64), b Array(String), c Array(Array(Date))) ENGINE = Memory;
|
||||
DROP TABLE IF EXISTS null_00117;
|
||||
CREATE TABLE null_00117 (a Array(UInt64), b Array(String), c Array(Array(Date))) ENGINE = Memory;
|
||||
|
||||
INSERT INTO null (a) VALUES ([1,2]), ([3, 4]), ([ 5 ,6]), ([ 7 , 8 ]), ([]), ([ ]);
|
||||
INSERT INTO null (b) VALUES ([ 'Hello' , 'World' ]);
|
||||
INSERT INTO null (c) VALUES ([ ]), ([ [ ] ]), ([[],[]]), ([['2015-01-01', '2015-01-02'], ['2015-01-03', '2015-01-04']]);
|
||||
INSERT INTO null_00117 (a) VALUES ([1,2]), ([3, 4]), ([ 5 ,6]), ([ 7 , 8 ]), ([]), ([ ]);
|
||||
INSERT INTO null_00117 (b) VALUES ([ 'Hello' , 'World' ]);
|
||||
INSERT INTO null_00117 (c) VALUES ([ ]), ([ [ ] ]), ([[],[]]), ([['2015-01-01', '2015-01-02'], ['2015-01-03', '2015-01-04']]);
|
||||
|
||||
SELECT a, b, c FROM null ORDER BY a, b, c;
|
||||
SELECT a, b, c FROM null_00117 ORDER BY a, b, c;
|
||||
|
||||
DROP TABLE null;
|
||||
DROP TABLE null_00117;
|
@ -1,11 +1,15 @@
|
||||
DROP TABLE IF EXISTS join;
|
||||
DROP TABLE IF EXISTS test.join;
|
||||
|
||||
CREATE TABLE join (k UInt64, s String) ENGINE = Join(ANY, LEFT, k);
|
||||
CREATE TABLE test.join (k UInt64, s String) ENGINE = Join(ANY, LEFT, k);
|
||||
|
||||
INSERT INTO join VALUES (1, 'abc'), (2, 'def');
|
||||
USE test;
|
||||
|
||||
INSERT INTO test.join VALUES (1, 'abc'), (2, 'def');
|
||||
SELECT k, s FROM (SELECT number AS k FROM system.numbers LIMIT 10) ANY LEFT JOIN join USING k;
|
||||
|
||||
INSERT INTO join VALUES (6, 'ghi');
|
||||
INSERT INTO test.join VALUES (6, 'ghi');
|
||||
SELECT k, s FROM (SELECT number AS k FROM system.numbers LIMIT 10) ANY LEFT JOIN join USING k;
|
||||
|
||||
DROP TABLE join;
|
||||
USE default;
|
||||
|
||||
DROP TABLE test.join;
|
||||
|
@ -1,27 +1,27 @@
|
||||
DROP TABLE IF EXISTS alter;
|
||||
CREATE TABLE alter (d Date, x UInt8) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/alter', 'r1', d, (d), 8192);
|
||||
DROP TABLE IF EXISTS alter_00121;
|
||||
CREATE TABLE alter_00121 (d Date, x UInt8) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/alter_00121', 'r1', d, (d), 8192);
|
||||
|
||||
INSERT INTO alter VALUES ('2014-01-01', 1);
|
||||
ALTER TABLE alter DROP COLUMN x;
|
||||
INSERT INTO alter_00121 VALUES ('2014-01-01', 1);
|
||||
ALTER TABLE alter_00121 DROP COLUMN x;
|
||||
|
||||
SELECT value FROM system.zookeeper WHERE path = '/clickhouse/tables/test/alter/replicas/r1/parts/20140101_20140101_0_0_0' AND name = 'columns' FORMAT TabSeparatedRaw;
|
||||
SELECT value FROM system.zookeeper WHERE path = '/clickhouse/tables/test/alter_00121/replicas/r1/parts/20140101_20140101_0_0_0' AND name = 'columns' FORMAT TabSeparatedRaw;
|
||||
|
||||
DROP TABLE alter;
|
||||
DROP TABLE alter_00121;
|
||||
|
||||
|
||||
CREATE TABLE alter (d Date) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/alter', 'r1', d, (d), 8192);
|
||||
CREATE TABLE alter_00121 (d Date) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test/alter_00121', 'r1', d, (d), 8192);
|
||||
|
||||
INSERT INTO alter VALUES ('2014-01-01');
|
||||
SELECT * FROM alter ORDER BY d;
|
||||
INSERT INTO alter_00121 VALUES ('2014-01-01');
|
||||
SELECT * FROM alter_00121 ORDER BY d;
|
||||
|
||||
ALTER TABLE alter ADD COLUMN x UInt8;
|
||||
ALTER TABLE alter_00121 ADD COLUMN x UInt8;
|
||||
|
||||
INSERT INTO alter VALUES ('2014-02-01', 1);
|
||||
SELECT * FROM alter ORDER BY d;
|
||||
INSERT INTO alter_00121 VALUES ('2014-02-01', 1);
|
||||
SELECT * FROM alter_00121 ORDER BY d;
|
||||
|
||||
ALTER TABLE alter DROP COLUMN x;
|
||||
SELECT * FROM alter ORDER BY d;
|
||||
ALTER TABLE alter_00121 DROP COLUMN x;
|
||||
SELECT * FROM alter_00121 ORDER BY d;
|
||||
|
||||
SELECT value FROM system.zookeeper WHERE path = '/clickhouse/tables/test/alter/replicas/r1/parts/20140201_20140201_0_0_0' AND name = 'columns' FORMAT TabSeparatedRaw;
|
||||
SELECT value FROM system.zookeeper WHERE path = '/clickhouse/tables/test/alter_00121/replicas/r1/parts/20140201_20140201_0_0_0' AND name = 'columns' FORMAT TabSeparatedRaw;
|
||||
|
||||
DROP TABLE alter;
|
||||
DROP TABLE alter_00121;
|
||||
|
@ -1,10 +1,11 @@
|
||||
SET max_parallel_replicas = 2;
|
||||
|
||||
DROP TABLE IF EXISTS report;
|
||||
DROP TABLE IF EXISTS test.report;
|
||||
|
||||
CREATE TABLE report(id UInt32, event_date Date, priority UInt32, description String) ENGINE = MergeTree(event_date, intHash32(id), (id, event_date, intHash32(id)), 8192);
|
||||
CREATE TABLE test.report(id UInt32, event_date Date, priority UInt32, description String) ENGINE = MergeTree(event_date, intHash32(id), (id, event_date, intHash32(id)), 8192);
|
||||
|
||||
INSERT INTO report(id,event_date,priority,description) VALUES (1, '2015-01-01', 1, 'foo')(2, '2015-02-01', 2, 'bar')(3, '2015-03-01', 3, 'foo')(4, '2015-04-01', 4, 'bar')(5, '2015-05-01', 5, 'foo');
|
||||
SELECT * FROM (SELECT id, event_date, priority, description FROM remote('127.0.0.{2|3}', currentDatabase(), report)) ORDER BY id ASC;
|
||||
INSERT INTO test.report(id,event_date,priority,description) VALUES (1, '2015-01-01', 1, 'foo')(2, '2015-02-01', 2, 'bar')(3, '2015-03-01', 3, 'foo')(4, '2015-04-01', 4, 'bar')(5, '2015-05-01', 5, 'foo');
|
||||
SELECT * FROM (SELECT id, event_date, priority, description FROM remote('127.0.0.{2|3}', test, report)) ORDER BY id ASC;
|
||||
|
||||
DROP TABLE test.report;
|
||||
|
||||
DROP TABLE report;
|
||||
|
@ -1,62 +1,62 @@
|
||||
DROP TABLE IF EXISTS buffer_00126;
|
||||
DROP TABLE IF EXISTS null_sink_00126;
|
||||
DROP TABLE IF EXISTS test.buffer_00126;
|
||||
DROP TABLE IF EXISTS test.null_sink_00126;
|
||||
|
||||
CREATE TABLE null_sink_00126 (a UInt8, b String, c Array(UInt32)) ENGINE = Null;
|
||||
CREATE TABLE buffer_00126 (a UInt8, b String, c Array(UInt32)) ENGINE = Buffer(currentDatabase(), null_sink_00126, 1, 1000, 1000, 1000, 1000, 1000000, 1000000);
|
||||
CREATE TABLE test.null_sink_00126 (a UInt8, b String, c Array(UInt32)) ENGINE = Null;
|
||||
CREATE TABLE test.buffer_00126 (a UInt8, b String, c Array(UInt32)) ENGINE = Buffer(test, null_sink_00126, 1, 1000, 1000, 1000, 1000, 1000000, 1000000);

INSERT INTO buffer_00126 VALUES (1, '2', [3]);
INSERT INTO test.buffer_00126 VALUES (1, '2', [3]);

SELECT a, b, c FROM buffer_00126 ORDER BY a, b, c;
SELECT b, c, a FROM buffer_00126 ORDER BY a, b, c;
SELECT c, a, b FROM buffer_00126 ORDER BY a, b, c;
SELECT a, c, b FROM buffer_00126 ORDER BY a, b, c;
SELECT b, a, c FROM buffer_00126 ORDER BY a, b, c;
SELECT c, b, a FROM buffer_00126 ORDER BY a, b, c;
SELECT a, b FROM buffer_00126 ORDER BY a, b, c;
SELECT b, c FROM buffer_00126 ORDER BY a, b, c;
SELECT c, a FROM buffer_00126 ORDER BY a, b, c;
SELECT a, c FROM buffer_00126 ORDER BY a, b, c;
SELECT b, a FROM buffer_00126 ORDER BY a, b, c;
SELECT c, b FROM buffer_00126 ORDER BY a, b, c;
SELECT a FROM buffer_00126 ORDER BY a, b, c;
SELECT b FROM buffer_00126 ORDER BY a, b, c;
SELECT c FROM buffer_00126 ORDER BY a, b, c;
SELECT a, b, c FROM test.buffer_00126 ORDER BY a, b, c;
SELECT b, c, a FROM test.buffer_00126 ORDER BY a, b, c;
SELECT c, a, b FROM test.buffer_00126 ORDER BY a, b, c;
SELECT a, c, b FROM test.buffer_00126 ORDER BY a, b, c;
SELECT b, a, c FROM test.buffer_00126 ORDER BY a, b, c;
SELECT c, b, a FROM test.buffer_00126 ORDER BY a, b, c;
SELECT a, b FROM test.buffer_00126 ORDER BY a, b, c;
SELECT b, c FROM test.buffer_00126 ORDER BY a, b, c;
SELECT c, a FROM test.buffer_00126 ORDER BY a, b, c;
SELECT a, c FROM test.buffer_00126 ORDER BY a, b, c;
SELECT b, a FROM test.buffer_00126 ORDER BY a, b, c;
SELECT c, b FROM test.buffer_00126 ORDER BY a, b, c;
SELECT a FROM test.buffer_00126 ORDER BY a, b, c;
SELECT b FROM test.buffer_00126 ORDER BY a, b, c;
SELECT c FROM test.buffer_00126 ORDER BY a, b, c;

INSERT INTO buffer_00126 (c, b, a) VALUES ([7], '8', 9);
INSERT INTO test.buffer_00126 (c, b, a) VALUES ([7], '8', 9);

SELECT a, b, c FROM buffer_00126 ORDER BY a, b, c;
SELECT b, c, a FROM buffer_00126 ORDER BY a, b, c;
SELECT c, a, b FROM buffer_00126 ORDER BY a, b, c;
SELECT a, c, b FROM buffer_00126 ORDER BY a, b, c;
SELECT b, a, c FROM buffer_00126 ORDER BY a, b, c;
SELECT c, b, a FROM buffer_00126 ORDER BY a, b, c;
SELECT a, b FROM buffer_00126 ORDER BY a, b, c;
SELECT b, c FROM buffer_00126 ORDER BY a, b, c;
SELECT c, a FROM buffer_00126 ORDER BY a, b, c;
SELECT a, c FROM buffer_00126 ORDER BY a, b, c;
SELECT b, a FROM buffer_00126 ORDER BY a, b, c;
SELECT c, b FROM buffer_00126 ORDER BY a, b, c;
SELECT a FROM buffer_00126 ORDER BY a, b, c;
SELECT b FROM buffer_00126 ORDER BY a, b, c;
SELECT c FROM buffer_00126 ORDER BY a, b, c;
SELECT a, b, c FROM test.buffer_00126 ORDER BY a, b, c;
SELECT b, c, a FROM test.buffer_00126 ORDER BY a, b, c;
SELECT c, a, b FROM test.buffer_00126 ORDER BY a, b, c;
SELECT a, c, b FROM test.buffer_00126 ORDER BY a, b, c;
SELECT b, a, c FROM test.buffer_00126 ORDER BY a, b, c;
SELECT c, b, a FROM test.buffer_00126 ORDER BY a, b, c;
SELECT a, b FROM test.buffer_00126 ORDER BY a, b, c;
SELECT b, c FROM test.buffer_00126 ORDER BY a, b, c;
SELECT c, a FROM test.buffer_00126 ORDER BY a, b, c;
SELECT a, c FROM test.buffer_00126 ORDER BY a, b, c;
SELECT b, a FROM test.buffer_00126 ORDER BY a, b, c;
SELECT c, b FROM test.buffer_00126 ORDER BY a, b, c;
SELECT a FROM test.buffer_00126 ORDER BY a, b, c;
SELECT b FROM test.buffer_00126 ORDER BY a, b, c;
SELECT c FROM test.buffer_00126 ORDER BY a, b, c;

INSERT INTO buffer_00126 (a, c) VALUES (11, [33]);
INSERT INTO test.buffer_00126 (a, c) VALUES (11, [33]);
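One more hedged aside (not part of the test): because the column list above omits `b`, the insert fills it with the column's default, an empty string, which the SELECT blocks below confirm. A tiny standalone sketch with a hypothetical table name:

```sql
-- Illustrative only; `defaults_demo` is not one of the test tables.
CREATE TABLE defaults_demo (a UInt8, b String, c Array(UInt32)) ENGINE = Memory;
INSERT INTO defaults_demo (a, c) VALUES (11, [33]);
SELECT a, b, c FROM defaults_demo;  -- 11, '' (default for b), [33]
```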

SELECT a, b, c FROM buffer_00126 ORDER BY a, b, c;
SELECT b, c, a FROM buffer_00126 ORDER BY a, b, c;
SELECT c, a, b FROM buffer_00126 ORDER BY a, b, c;
SELECT a, c, b FROM buffer_00126 ORDER BY a, b, c;
SELECT b, a, c FROM buffer_00126 ORDER BY a, b, c;
SELECT c, b, a FROM buffer_00126 ORDER BY a, b, c;
SELECT a, b FROM buffer_00126 ORDER BY a, b, c;
SELECT b, c FROM buffer_00126 ORDER BY a, b, c;
SELECT c, a FROM buffer_00126 ORDER BY a, b, c;
SELECT a, c FROM buffer_00126 ORDER BY a, b, c;
SELECT b, a FROM buffer_00126 ORDER BY a, b, c;
SELECT c, b FROM buffer_00126 ORDER BY a, b, c;
SELECT a FROM buffer_00126 ORDER BY a, b, c;
SELECT b FROM buffer_00126 ORDER BY a, b, c;
SELECT c FROM buffer_00126 ORDER BY a, b, c;
SELECT a, b, c FROM test.buffer_00126 ORDER BY a, b, c;
SELECT b, c, a FROM test.buffer_00126 ORDER BY a, b, c;
SELECT c, a, b FROM test.buffer_00126 ORDER BY a, b, c;
SELECT a, c, b FROM test.buffer_00126 ORDER BY a, b, c;
SELECT b, a, c FROM test.buffer_00126 ORDER BY a, b, c;
SELECT c, b, a FROM test.buffer_00126 ORDER BY a, b, c;
SELECT a, b FROM test.buffer_00126 ORDER BY a, b, c;
SELECT b, c FROM test.buffer_00126 ORDER BY a, b, c;
SELECT c, a FROM test.buffer_00126 ORDER BY a, b, c;
SELECT a, c FROM test.buffer_00126 ORDER BY a, b, c;
SELECT b, a FROM test.buffer_00126 ORDER BY a, b, c;
SELECT c, b FROM test.buffer_00126 ORDER BY a, b, c;
SELECT a FROM test.buffer_00126 ORDER BY a, b, c;
SELECT b FROM test.buffer_00126 ORDER BY a, b, c;
SELECT c FROM test.buffer_00126 ORDER BY a, b, c;

DROP TABLE buffer_00126;
DROP TABLE null_sink_00126;
DROP TABLE test.buffer_00126;
DROP TABLE test.null_sink_00126;
Some files were not shown because too many files have changed in this diff.