Merge branch 'master' into in_memory_raft
commit 9c04d912ec

CHANGELOG.md
@@ -1,5 +1,35 @@

## ClickHouse release 21.1

### ClickHouse release v21.1.3.32-stable, 2021-02-03

#### Bug Fix

* BloomFilter index crash fix. Fixes [#19757](https://github.com/ClickHouse/ClickHouse/issues/19757). [#19884](https://github.com/ClickHouse/ClickHouse/pull/19884) ([Maksim Kita](https://github.com/kitaisreal)).
* Fix crash when pushing down predicates to union distinct subquery. This fixes [#19855](https://github.com/ClickHouse/ClickHouse/issues/19855). [#19861](https://github.com/ClickHouse/ClickHouse/pull/19861) ([Amos Bird](https://github.com/amosbird)).
* Fix filtering by UInt8 greater than 127. [#19799](https://github.com/ClickHouse/ClickHouse/pull/19799) ([Anton Popov](https://github.com/CurtizJ)).
* In previous versions, unusual arguments for function arrayEnumerateUniq could cause a crash or infinite loop. This closes [#19787](https://github.com/ClickHouse/ClickHouse/issues/19787). [#19788](https://github.com/ClickHouse/ClickHouse/pull/19788) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed stack overflow when using accurate comparison of arithmetic type with string type. [#19773](https://github.com/ClickHouse/ClickHouse/pull/19773) ([tavplubix](https://github.com/tavplubix)).
* Fix crash when nested column name was used in `WHERE` or `PREWHERE`. Fixes [#19755](https://github.com/ClickHouse/ClickHouse/issues/19755). [#19763](https://github.com/ClickHouse/ClickHouse/pull/19763) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a segmentation fault in `bitmapAndnot` function. Fixes [#19668](https://github.com/ClickHouse/ClickHouse/issues/19668). [#19713](https://github.com/ClickHouse/ClickHouse/pull/19713) ([Maksim Kita](https://github.com/kitaisreal)).
* Some functions with big integers could cause a segfault. Big integers are an experimental feature. This closes [#19667](https://github.com/ClickHouse/ClickHouse/issues/19667). [#19672](https://github.com/ClickHouse/ClickHouse/pull/19672) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix wrong result of function `neighbor` for `LowCardinality` argument. Fixes [#10333](https://github.com/ClickHouse/ClickHouse/issues/10333). [#19617](https://github.com/ClickHouse/ClickHouse/pull/19617) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix use-after-free of the CompressedWriteBuffer in Connection after disconnect. [#19599](https://github.com/ClickHouse/ClickHouse/pull/19599) ([Azat Khuzhin](https://github.com/azat)).
* `DROP/DETACH TABLE table ON CLUSTER cluster SYNC` query might hang, it's fixed. Fixes [#19568](https://github.com/ClickHouse/ClickHouse/issues/19568). [#19572](https://github.com/ClickHouse/ClickHouse/pull/19572) ([tavplubix](https://github.com/tavplubix)).
* Query CREATE DICTIONARY id expression fix. [#19571](https://github.com/ClickHouse/ClickHouse/pull/19571) ([Maksim Kita](https://github.com/kitaisreal)).
* Fix SIGSEGV with merge_tree_min_rows_for_concurrent_read/merge_tree_min_bytes_for_concurrent_read=0/UINT64_MAX. [#19528](https://github.com/ClickHouse/ClickHouse/pull/19528) ([Azat Khuzhin](https://github.com/azat)).
* Buffer overflow (on memory read) was possible if `addMonth` function was called with specifically crafted arguments. This fixes [#19441](https://github.com/ClickHouse/ClickHouse/issues/19441). This fixes [#19413](https://github.com/ClickHouse/ClickHouse/issues/19413). [#19472](https://github.com/ClickHouse/ClickHouse/pull/19472) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Uninitialized memory read was possible in encrypt/decrypt functions if empty string was passed as IV. This closes [#19391](https://github.com/ClickHouse/ClickHouse/issues/19391). [#19397](https://github.com/ClickHouse/ClickHouse/pull/19397) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix possible buffer overflow in Uber H3 library. See https://github.com/uber/h3/issues/392. This closes [#19219](https://github.com/ClickHouse/ClickHouse/issues/19219). [#19383](https://github.com/ClickHouse/ClickHouse/pull/19383) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix system.parts _state column (LOGICAL_ERROR when querying this column, due to incorrect order). [#19346](https://github.com/ClickHouse/ClickHouse/pull/19346) ([Azat Khuzhin](https://github.com/azat)).
* Fixed possible wrong result or segfault on aggregation when Materialized View and its target table have different structure. Fixes [#18063](https://github.com/ClickHouse/ClickHouse/issues/18063). [#19322](https://github.com/ClickHouse/ClickHouse/pull/19322) ([tavplubix](https://github.com/tavplubix)).
* Fix error `Cannot convert column now64() because it is constant but values of constants are different in source and result`. Continuation of [#7156](https://github.com/ClickHouse/ClickHouse/issues/7156). [#19316](https://github.com/ClickHouse/ClickHouse/pull/19316) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix bug when concurrent `ALTER` and `DROP` queries may hang while processing ReplicatedMergeTree table. [#19237](https://github.com/ClickHouse/ClickHouse/pull/19237) ([alesapin](https://github.com/alesapin)).
* Fixed `There is no checkpoint` error when inserting data through http interface using `Template` or `CustomSeparated` format. Fixes [#19021](https://github.com/ClickHouse/ClickHouse/issues/19021). [#19072](https://github.com/ClickHouse/ClickHouse/pull/19072) ([tavplubix](https://github.com/tavplubix)).
* Disable constant folding for subqueries on the analysis stage, when the result cannot be calculated. [#18446](https://github.com/ClickHouse/ClickHouse/pull/18446) ([Azat Khuzhin](https://github.com/azat)).
* Mutation might hang waiting for some non-existent part after `MOVE` or `REPLACE PARTITION` or, in rare cases, after `DETACH` or `DROP PARTITION`. It's fixed. [#15537](https://github.com/ClickHouse/ClickHouse/pull/15537) ([tavplubix](https://github.com/tavplubix)).


### ClickHouse release v21.1.2.15-stable, 2021-01-18

#### Backward Incompatible Change
@@ -278,6 +278,31 @@ public:
        return res / 3600;
    }

    /** Calculate the offset from UTC in seconds:
      * take the same literal time as `t` but interpret it as if it were UTC to get the corresponding timestamp,
      * then subtract the former from the latter to get the offset.
      * The boundaries around a DST (daylight saving time) change must be handled very carefully.
      */
    inline time_t timezoneOffset(time_t t) const
    {
        DayNum index = findIndex(t);

        /// Calculate the daylight saving offset first.
        /// The `amount_of_offset_change` field of a LUT entry is only set on the day of the change, so scanning from the very beginning is costly.
        /// Instead, all offsets accumulated since 1970-01-01 follow from the difference between lut[].date values,
        /// and subtracting whole multiples of 86400 leaves the pure DST offset (leap seconds are not considered here).
        time_t res = (lut[index].date - lut[0].date) % 86400;
        /// As far as is known, the maximal DST offset never exceeds 2 hours, so after the modulo operation the remainder
        /// sits in [-offset, 0, offset], which corresponds to moving the clock backward or forward.
        res = res > 43200 ? (86400 - res) : (0 - res);

        /// Check whether an offset change happens during this day, and add it once `t` crosses the change point.
        if (lut[index].amount_of_offset_change != 0 && t >= lut[index].date + lut[index].time_at_offset_change)
            res += lut[index].amount_of_offset_change;

        return res + offset_at_start_of_epoch;
    }

    /** Only for time zones whose offset from UTC is a multiple of five minutes.
      * This is true for all time zones: right now, every time zone has an offset that is a multiple of 15 minutes.
      *
@@ -168,14 +168,6 @@ public:

static_assert(sizeof(LocalDate) == 4);


inline std::ostream & operator<< (std::ostream & ostr, const LocalDate & date)
{
    return ostr << date.year()
        << '-' << (date.month() / 10) << (date.month() % 10)
        << '-' << (date.day() / 10) << (date.day() % 10);
}


namespace std
{
inline string to_string(const LocalDate & date)
@@ -169,20 +169,6 @@ public:

static_assert(sizeof(LocalDateTime) == 8);


inline std::ostream & operator<< (std::ostream & ostr, const LocalDateTime & datetime)
{
    ostr << std::setfill('0') << std::setw(4) << datetime.year();

    ostr << '-' << (datetime.month() / 10) << (datetime.month() % 10)
        << '-' << (datetime.day() / 10) << (datetime.day() % 10)
        << ' ' << (datetime.hour() / 10) << (datetime.hour() % 10)
        << ':' << (datetime.minute() / 10) << (datetime.minute() % 10)
        << ':' << (datetime.second() / 10) << (datetime.second() % 10);

    return ostr;
}


namespace std
{
inline string to_string(const LocalDateTime & datetime)
@@ -12,6 +12,8 @@
#include <dlfcn.h>
#include <fcntl.h>
#include <fstream>
#include <fmt/format.h>


namespace
{
@@ -189,8 +191,8 @@ void ReplxxLineReader::openEditor()
        return;
    }

    String editor = std::getenv("EDITOR");
    if (editor.empty())
    const char * editor = std::getenv("EDITOR");
    if (!editor || !*editor)
        editor = "vim";

    replxx::Replxx::State state(rx.get_state());
@@ -204,7 +206,7 @@ void ReplxxLineReader::openEditor()
        if ((-1 == res || 0 == res) && errno != EINTR)
        {
            rx.print("Cannot write to temporary query file %s: %s\n", filename, errnoToString(errno).c_str());
            return;
            break;
        }
        bytes_written += res;
    }
@@ -215,7 +217,7 @@ void ReplxxLineReader::openEditor()
        return;
    }

    if (0 == execute(editor + " " + filename))
    if (0 == execute(fmt::format("{} {}", editor, filename)))
    {
        try
        {
@@ -230,10 +230,10 @@ public:
        }
        else
        {
            siginfo_t info;
            ucontext_t context;
            siginfo_t info{};
            ucontext_t context{};
            StackTrace stack_trace(NoCapture{});
            UInt32 thread_num;
            UInt32 thread_num{};
            std::string query_id;
            DB::ThreadStatus * thread_ptr{};
@@ -3,7 +3,6 @@ add_library (mysqlxx
    Exception.cpp
    Query.cpp
    ResultBase.cpp
    StoreQueryResult.cpp
    UseQueryResult.cpp
    Row.cpp
    Value.cpp
@@ -39,7 +39,6 @@ private:
/** MySQL connection.
  * Usage:
  *     mysqlxx::Connection connection("Test", "127.0.0.1", "root", "qwerty", 3306);
  *     std::cout << connection.query("SELECT 'Hello, World!'").store().at(0).at(0).getString() << std::endl;
  *
  * Or with Poco library configuration:
  *     mysqlxx::Connection connection("mysql_params");
@@ -71,16 +71,6 @@ UseQueryResult Query::use()
    return UseQueryResult(res, conn, this);
}

StoreQueryResult Query::store()
{
    executeImpl();
    MYSQL_RES * res = mysql_store_result(conn->getDriver());
    if (!res)
        checkError(conn->getDriver());

    return StoreQueryResult(res, conn, this);
}

void Query::execute()
{
    executeImpl();
@@ -3,7 +3,6 @@
#include <sstream>

#include <mysqlxx/UseQueryResult.h>
#include <mysqlxx/StoreQueryResult.h>


namespace mysqlxx
@@ -46,11 +45,6 @@ public:
      */
    UseQueryResult use();

    /** Execute the query, loading all rows to the client.
      * Requires enough RAM to hold the entire result, but in return rows can be accessed in arbitrary order.
      */
    StoreQueryResult store();

    /// The auto-increment value after the last INSERT.
    UInt64 insertID();
@@ -9,7 +9,7 @@ class Connection;
class Query;


/** Base class for UseQueryResult and StoreQueryResult.
/** Base class for UseQueryResult.
  * Contains the common part of the implementation.
  * References the Connection. If the Connection is destroyed, ResultBase and any result can no longer be used.
  * The object may only be used for the result of a single query!
@@ -35,7 +35,7 @@ public:
    {
    }

    /** To create a Row, use the corresponding methods of UseQueryResult or StoreQueryResult. */
    /** To create a Row, use the corresponding methods of UseQueryResult. */
    Row(MYSQL_ROW row_, ResultBase * res_, MYSQL_LENGTHS lengths_)
        : row(row_), res(res_), lengths(lengths_)
    {
@@ -1,30 +0,0 @@
#if __has_include(<mysql.h>)
#include <mysql.h>
#else
#include <mysql/mysql.h>
#endif

#include <mysqlxx/Connection.h>
#include <mysqlxx/StoreQueryResult.h>


namespace mysqlxx
{

StoreQueryResult::StoreQueryResult(MYSQL_RES * res_, Connection * conn_, const Query * query_) : ResultBase(res_, conn_, query_)
{
    UInt64 rows = mysql_num_rows(res);
    reserve(rows);
    lengths.resize(rows * num_fields);

    for (UInt64 i = 0; MYSQL_ROW row = mysql_fetch_row(res); ++i)
    {
        MYSQL_LENGTHS lengths_for_row = mysql_fetch_lengths(res);
        memcpy(&lengths[i * num_fields], lengths_for_row, sizeof(lengths[0]) * num_fields);

        push_back(Row(row, this, &lengths[i * num_fields]));
    }
    checkError(conn->getDriver());
}

}
@@ -1,45 +0,0 @@
#pragma once

#include <vector>

#include <mysqlxx/ResultBase.h>
#include <mysqlxx/Row.h>


namespace mysqlxx
{

class Connection;


/** The result of a query, fully loaded to the client.
  * Requires enough RAM to hold the entire result,
  * but in return provides random access to rows by index.
  * If the result is large, prefer UseQueryResult.
  * The object holds a reference to the Connection.
  * If the Connection is destroyed, the object and all result rows become invalid.
  * Issuing the next query on the connection also invalidates the object and all rows.
  * The object may only be used for the result of a single query!
  * (Attempting to assign the result of the next query to the object is UB.)
  */
class StoreQueryResult : public std::vector<Row>, public ResultBase
{
public:
    StoreQueryResult(MYSQL_RES * res_, Connection * conn_, const Query * query_);

    size_t num_rows() const { return size(); }

private:

    /** Even though the entire query result is loaded to the client
      * and all the MYSQL_ROW pointers to individual rows are distinct,
      * mysql_fetch_lengths() returns the lengths
      * for the current row at one and the same address.
      * So, to be able to use several Rows at once,
      * all the lengths have to be stored somewhere in advance.
      */
    using Lengths = std::vector<MYSQL_LENGTH>;
    Lengths lengths;
};

}
@@ -12,8 +12,7 @@ class Connection;

/** The result of a query, intended for reading rows one after another.
  * Only one row, the current one, is kept in memory.
  * Unlike StoreQueryResult, random access to rows is impossible,
  * and when the next row is read, the previous one becomes invalid.
  * When the next row is read, the previous one becomes invalid.
  * You must read all rows from the result
  * (call fetch() until it returns a value convertible to false),
  * otherwise the next query will throw an exception with the text "Commands out of sync".
@@ -25,7 +25,7 @@ class ResultBase;

/** Represents a single value read from MySQL.
  * It doesn't own the value. It's just a wrapper around a pair (const char *, size_t).
  * If the UseQueryResult/StoreQueryResult or Connection is destroyed,
  * If the UseQueryResult or Connection is destroyed,
  * or you have read the next Row while using UseQueryResult, then the object is invalidated.
  * Allows transforming (parsing) the value to various data types:
  * - with getUInt(), getString(), ... (recommended);
@@ -38,15 +38,6 @@ int main(int, char **)
        }
    }

    {
        mysqlxx::Query query = connection.query();
        query << "SELECT 1234567890 abc, 12345.67890 def UNION ALL SELECT 9876543210, 98765.43210";
        mysqlxx::StoreQueryResult result = query.store();

        std::cerr << result.at(0)["abc"].getUInt() << ", " << result.at(0)["def"].getDouble() << std::endl
            << result.at(1)["abc"].getUInt() << ", " << result.at(1)["def"].getDouble() << std::endl;
    }

    {
        mysqlxx::UseQueryResult result = connection.query("SELECT 'abc\\\\def' x").use();
        mysqlxx::Row row = result.fetch();
@@ -54,27 +45,6 @@ int main(int, char **)
        std::cerr << row << std::endl;
    }

    {
        mysqlxx::Query query = connection.query("SEL");
        query << "ECT 1";

        std::cerr << query.store().at(0).at(0) << std::endl;
    }

    {
        /// Copying a Query
        mysqlxx::Query query = connection.query("SELECT 'Ok' x");
        using Queries = std::vector<mysqlxx::Query>;
        Queries queries;
        queries.push_back(query);

        for (auto & q : queries)
        {
            std::cerr << q.str() << std::endl;
            std::cerr << q.store().at(0) << std::endl;
        }
    }

    {
        /// Copying a Query
        mysqlxx::Query query1 = connection.query("SELECT");
@@ -84,62 +54,6 @@ int main(int, char **)
        std::cerr << query1.str() << ", " << query2.str() << std::endl;
    }

    {
        /// Copying a Query
        using Queries = std::list<mysqlxx::Query>;
        Queries queries;
        queries.push_back(connection.query("SELECT"));
        mysqlxx::Query & qref = queries.back();
        qref << " 1";

        for (auto & query : queries)
        {
            std::cerr << query.str() << std::endl;
            std::cerr << query.store().at(0) << std::endl;
        }
    }

    {
        /// Transactions
        connection.query("DROP TABLE IF EXISTS tmp").execute();
        connection.query("CREATE TABLE tmp (x INT, PRIMARY KEY (x)) ENGINE = InnoDB").execute();

        mysqlxx::Transaction trans(connection);
        connection.query("INSERT INTO tmp VALUES (1)").execute();

        std::cerr << connection.query("SELECT * FROM tmp").store().size() << std::endl;

        trans.rollback();

        std::cerr << connection.query("SELECT * FROM tmp").store().size() << std::endl;
    }

    {
        /// Transactions
        connection.query("DROP TABLE IF EXISTS tmp").execute();
        connection.query("CREATE TABLE tmp (x INT, PRIMARY KEY (x)) ENGINE = InnoDB").execute();

        {
            mysqlxx::Transaction trans(connection);
            connection.query("INSERT INTO tmp VALUES (1)").execute();
            std::cerr << connection.query("SELECT * FROM tmp").store().size() << std::endl;
        }

        std::cerr << connection.query("SELECT * FROM tmp").store().size() << std::endl;
    }

    {
        /// Transactions
        mysqlxx::Connection connection2("test", "127.0.0.1", "root", "qwerty", 3306);
        connection2.query("DROP TABLE IF EXISTS tmp").execute();
        connection2.query("CREATE TABLE tmp (x INT, PRIMARY KEY (x)) ENGINE = InnoDB").execute();

        mysqlxx::Transaction trans(connection2);
        connection2.query("INSERT INTO tmp VALUES (1)").execute();
        std::cerr << connection2.query("SELECT * FROM tmp").store().size() << std::endl;
    }
    std::cerr << connection.query("SELECT * FROM tmp").store().size() << std::endl;

    {
        /// NULL
        mysqlxx::Null<int> x = mysqlxx::null;
@@ -152,59 +66,6 @@ int main(int, char **)
        std::cerr << (x == 1 ? "Ok" : "Fail") << std::endl;
        std::cerr << (x.isNull() ? "Fail" : "Ok") << std::endl;
    }

    {
        /// Exceptions when trying to extract a value of the wrong type
        try
        {
            connection.query("SELECT -1").store().at(0).at(0).getUInt();
            std::cerr << "Fail" << std::endl;
        }
        catch (const mysqlxx::Exception & e)
        {
            std::cerr << "Ok, " << e.message() << std::endl;
        }

        try
        {
            connection.query("SELECT 'xxx'").store().at(0).at(0).getInt();
            std::cerr << "Fail" << std::endl;
        }
        catch (const mysqlxx::Exception & e)
        {
            std::cerr << "Ok, " << e.message() << std::endl;
        }

        try
        {
            connection.query("SELECT NULL").store().at(0).at(0).getString();
            std::cerr << "Fail" << std::endl;
        }
        catch (const mysqlxx::Exception & e)
        {
            std::cerr << "Ok, " << e.message() << std::endl;
        }

        try
        {
            connection.query("SELECT 123").store().at(0).at(0).getDate();
            std::cerr << "Fail" << std::endl;
        }
        catch (const mysqlxx::Exception & e)
        {
            std::cerr << "Ok, " << e.message() << std::endl;
        }

        try
        {
            connection.query("SELECT '2011-01-01'").store().at(0).at(0).getDateTime();
            std::cerr << "Fail" << std::endl;
        }
        catch (const mysqlxx::Exception & e)
        {
            std::cerr << "Ok, " << e.message() << std::endl;
        }
    }
}
catch (const mysqlxx::Exception & e)
{
@@ -120,7 +120,7 @@ if [ -n "$(ls /docker-entrypoint-initdb.d/)" ] || [ -n "$CLICKHOUSE_DB" ]; then
        sleep 1
    done

    clickhouseclient=( clickhouse-client --multiquery -u "$CLICKHOUSE_USER" --password "$CLICKHOUSE_PASSWORD" )
    clickhouseclient=( clickhouse-client --multiquery --host "127.0.0.1" -u "$CLICKHOUSE_USER" --password "$CLICKHOUSE_PASSWORD" )

    echo
@@ -14,7 +14,7 @@ services:
      ports:
        - 1006:1006
        - 50070:50070
        - 9000:9000
        - 9010:9010
      depends_on:
        - hdfskerberos
      entrypoint: /etc/bootstrap.sh -d
@@ -46,7 +46,7 @@ toc_title: Adopters
| <a href="https://www.exness.com" class="favicon">Exness</a> | Trading | Metrics, Logging | — | — | [Talk in Russian, May 2019](https://youtu.be/_rpU-TvSfZ8?t=3215) |
| <a href="https://fastnetmon.com/" class="favicon">FastNetMon</a> | DDoS Protection | Main Product | | — | [Official website](https://fastnetmon.com/docs-fnm-advanced/fastnetmon-advanced-traffic-persistency/) |
| <a href="https://www.flipkart.com/" class="favicon">Flipkart</a> | e-Commerce | — | — | — | [Talk in English, July 2020](https://youtu.be/GMiXCMFDMow?t=239) |
| <a href="https://fun.co/rp" class="favicon">FunCorp</a> | Games | | — | — | [Article](https://www.altinity.com/blog/migrating-from-redshift-to-clickhouse) |
| <a href="https://fun.co/rp" class="favicon">FunCorp</a> | Games | | — | 14 bn records/day as of Jan 2021 | [Article](https://www.altinity.com/blog/migrating-from-redshift-to-clickhouse) |
| <a href="https://geniee.co.jp" class="favicon">Geniee</a> | Ad network | Main product | — | — | [Blog post in Japanese, July 2017](https://tech.geniee.co.jp/entry/2017/07/20/160100) |
| <a href="https://www.genotek.ru/" class="favicon">Genotek</a> | Bioinformatics | Main product | — | — | [Video, August 2020](https://youtu.be/v3KyZbz9lEE) |
| <a href="https://www.huya.com/" class="favicon">HUYA</a> | Video Streaming | Analytics | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/7.%20ClickHouse万亿数据分析实践%20李本旺(sundy-li)%20虎牙.pdf) |
@@ -296,11 +296,33 @@ Useful for breaking away from a specific network interface.
<interserver_http_host>example.yandex.ru</interserver_http_host>
```

## interserver_https_port {#interserver-https-port}

Port for exchanging data between ClickHouse servers over `HTTPS`.

**Example**

``` xml
<interserver_https_port>9010</interserver_https_port>
```

## interserver_https_host {#interserver-https-host}

Similar to `interserver_http_host`, except that this hostname can be used by other servers to access this server over `HTTPS`.

**Example**

``` xml
<interserver_https_host>example.yandex.ru</interserver_https_host>
```

## interserver_http_credentials {#server-settings-interserver-http-credentials}

The username and password used to authenticate during [replication](../../engines/table-engines/mergetree-family/replication.md) with the Replicated\* engines. These credentials are used only for communication between replicas and are unrelated to credentials for ClickHouse clients. The server checks these credentials for connecting replicas and uses the same credentials when connecting to other replicas, so these credentials should be set identically for all replicas in a cluster.
By default, the authentication is not used.

**Note:** These credentials are common for replication through `HTTP` and `HTTPS`.

This section contains the following parameters:

- `user` — username.
@@ -12,7 +12,7 @@ Columns:

- `event_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — Timestamp of the sampling moment.

- `event_time_microseconds` ([DateTime](../../sql-reference/data-types/datetime.md)) — Timestamp of the sampling moment with microseconds precision.
- `event_time_microseconds` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — Timestamp of the sampling moment with microseconds precision.

- `timestamp_ns` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Timestamp of the sampling moment in nanoseconds.
@@ -79,6 +79,40 @@
└───────────────────────────────────────────────┘
```

# quantilesTimingWeighted {#quantilestimingweighted}

Same as `quantileTimingWeighted`, but accepts multiple quantile levels as parameters and returns an array filled with the quantiles for those levels.


**Example**

Input table:

``` text
┌─response_time─┬─weight─┐
│            68 │      1 │
│           104 │      2 │
│           112 │      3 │
│           126 │      2 │
│           138 │      1 │
│           162 │      1 │
└───────────────┴────────┘
```

Query:

``` sql
SELECT quantilesTimingWeighted(0.5, 0.99)(response_time, weight) FROM t
```

Result:

``` text
┌─quantilesTimingWeighted(0.5, 0.99)(response_time, weight)─┐
│ [112,162]                                                 │
└───────────────────────────────────────────────────────────┘
```

**See Also**

- [median](../../../sql-reference/aggregate-functions/reference/median.md#median)
@@ -182,13 +182,102 @@ If `NULL` is passed to the function as input, then it returns the `Nullable(Nothing)`
Gets the size of the block.
In ClickHouse, queries are always run on blocks (sets of column parts). This function allows getting the size of the block that you called it for.

## byteSize(...) {#function-bytesize}
## byteSize {#function-bytesize}

Get an estimate of the uncompressed byte size of its arguments in memory.
E.g. for a UInt32 argument it will return a constant 4; for a String argument, the string length + 9 (terminating zero + length).
The function can take multiple arguments. The typical application is byteSize(*).
Returns an estimate of the uncompressed byte size of its arguments in memory.

Use case: Suppose you have a service that stores data for multiple clients in one table. Users pay per data volume, so you need to implement accounting of users' data volume. The function allows calculating the data size on a per-row basis.
**Syntax**

```sql
byteSize(argument [, ...])
```

**Parameters**

- `argument` — Value.

**Returned value**

- Estimation of byte size of the arguments in memory.

Type: [UInt64](../../sql-reference/data-types/int-uint.md).

**Examples**

For [String](../../sql-reference/data-types/string.md) arguments the function returns the string length + 9 (terminating zero + length).

Query:

```sql
SELECT byteSize('string');
```

Result:

```text
┌─byteSize('string')─┐
│                 15 │
└────────────────────┘
```

Query:

```sql
CREATE TABLE test
(
    `key` Int32,
    `u8` UInt8,
    `u16` UInt16,
    `u32` UInt32,
    `u64` UInt64,
    `i8` Int8,
    `i16` Int16,
    `i32` Int32,
    `i64` Int64,
    `f32` Float32,
    `f64` Float64
)
ENGINE = MergeTree
ORDER BY key;

INSERT INTO test VALUES(1, 8, 16, 32, 64, -8, -16, -32, -64, 32.32, 64.64);

SELECT key, byteSize(u8) AS `byteSize(UInt8)`, byteSize(u16) AS `byteSize(UInt16)`, byteSize(u32) AS `byteSize(UInt32)`, byteSize(u64) AS `byteSize(UInt64)`, byteSize(i8) AS `byteSize(Int8)`, byteSize(i16) AS `byteSize(Int16)`, byteSize(i32) AS `byteSize(Int32)`, byteSize(i64) AS `byteSize(Int64)`, byteSize(f32) AS `byteSize(Float32)`, byteSize(f64) AS `byteSize(Float64)` FROM test ORDER BY key ASC FORMAT Vertical;
```

Result:

``` text
Row 1:
──────
key:               1
byteSize(UInt8):   1
byteSize(UInt16):  2
byteSize(UInt32):  4
byteSize(UInt64):  8
byteSize(Int8):    1
byteSize(Int16):   2
byteSize(Int32):   4
byteSize(Int64):   8
byteSize(Float32): 4
byteSize(Float64): 8
```

If the function takes multiple arguments, it returns their combined byte size.

Query:

```sql
SELECT byteSize(NULL, 1, 0.3, '');
```

Result:

```text
┌─byteSize(NULL, 1, 0.3, '')─┐
│                         19 │
└────────────────────────────┘
```
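
The accounting use case above can be sketched with `byteSize(*)`, which expands to all columns of a table; the `events` table and `user_id` column here are hypothetical, for illustration only:

```sql
-- Hypothetical table and column names, for illustration only.
SELECT
    user_id,
    sum(byteSize(*)) AS uncompressed_bytes
FROM events
GROUP BY user_id
ORDER BY uncompressed_bytes DESC;
```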

## materialize(x) {#materializex}
@@ -14,8 +14,6 @@ The search is case-sensitive by default in all these functions. There are separate variants for case insensitive search.

Returns the position (in bytes) of the found substring in the string, starting from 1.

Works under the assumption that the string contains a set of bytes representing a single-byte encoded text. If this assumption is not met and a character can't be represented using a single byte, the function doesn't throw an exception and returns some unexpected result. If a character can be represented using two bytes, it will use two bytes, and so on.

For a case-insensitive search, use the function [positionCaseInsensitive](#positioncaseinsensitive).

**Syntax**
@@ -303,9 +303,30 @@ SELECT toFixedString('foo\0bar', 8) AS s, toStringCutToZero(s) AS s_cut
└────────────┴───────┘
```

## reinterpretAsUInt(8\|16\|32\|64) {#reinterpretasuint8163264}
## reinterpretAs(x, T) {#type_conversion_function-cast}

## reinterpretAsInt(8\|16\|32\|64) {#reinterpretasint8163264}
Performs byte reinterpretation of `x` as the `T` data type.

The following reinterpretations are allowed:
1. Any type that has a fixed size and whose values can be represented contiguously can be reinterpreted as FixedString.
2. Any type whose values can be represented contiguously can be reinterpreted as String. Null bytes are dropped from the end. For example, a UInt32 value of 255 becomes a string that is one byte long.
3. FixedString, String, and types that can be interpreted as numeric (integers, Float, Date, DateTime, UUID) can be reinterpreted as types that can be interpreted as numeric (integers, Float, Date, DateTime, UUID) or as FixedString.

``` sql
SELECT reinterpretAs(toInt8(-1), 'UInt8') as int_to_uint,
    reinterpretAs(toInt8(1), 'Float32') as int_to_float,
    reinterpretAs('1', 'UInt32') as string_to_int;
```

``` text
┌─int_to_uint─┬─int_to_float─┬─string_to_int─┐
│         255 │        1e-45 │            49 │
└─────────────┴──────────────┴───────────────┘
```

## reinterpretAsUInt(8\|16\|32\|64\|256) {#reinterpretasuint8163264256}

## reinterpretAsInt(8\|16\|32\|64\|128\|256) {#reinterpretasint8163264128256}

## reinterpretAsFloat(32\|64) {#reinterpretasfloat3264}
@@ -313,71 +334,13 @@ SELECT toFixedString('foo\0bar', 8) AS s, toStringCutToZero(s) AS s_cut

## reinterpretAsDateTime {#reinterpretasdatetime}

These functions accept a string and interpret the bytes placed at the beginning of the string as a number in host order (little endian). If the string isn't long enough, the functions work as if the string were padded with the necessary number of null bytes. If the string is longer than needed, the extra bytes are ignored. A date is interpreted as the number of days since the beginning of the Unix Epoch, and a date with time is interpreted as the number of seconds since the beginning of the Unix Epoch.
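
A minimal sketch of the padding rule above, assuming `reinterpretAsString` drops the trailing null byte of the UInt16 so that `reinterpretAsDate` sees a single `0x01` byte and pads it back with nulls:

```sql
SELECT reinterpretAsDate(reinterpretAsString(toUInt16(1))) AS d;
-- Expected: 1970-01-02, i.e. day 1 of the Unix Epoch.
```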

## reinterpretAsString {#type_conversion_functions-reinterpretAsString}

This function accepts a number or date or date with time, and returns a string containing bytes representing the corresponding value in host order (little endian). Null bytes are dropped from the end. For example, a UInt32 type value of 255 is a string that is one byte long.

## reinterpretAsFixedString {#reinterpretasfixedstring}

This function accepts a number or date or date with time, and returns a FixedString containing bytes representing the corresponding value in host order (little endian). Null bytes are dropped from the end. For example, a UInt32 type value of 255 is a FixedString that is one byte long.

## reinterpretAsUUID {#reinterpretasuuid}

This function accepts a 16-byte string and returns a UUID containing bytes representing the corresponding value in network byte order (big-endian). If the string isn't long enough, the function works as if the string were padded with the necessary number of null bytes at the end. If the string is longer than 16 bytes, the extra bytes at the end are ignored.

**Syntax**

``` sql
reinterpretAsUUID(fixed_string)
```

**Parameters**

- `fixed_string` — Big-endian byte string. [FixedString](../../sql-reference/data-types/fixedstring.md#fixedstring).

**Returned value**

- The UUID type value. [UUID](../../sql-reference/data-types/uuid.md#uuid-data-type).

**Examples**

String to UUID.

Query:

``` sql
SELECT reinterpretAsUUID(reverse(unhex('000102030405060708090a0b0c0d0e0f')))
```

Result:

``` text
┌─reinterpretAsUUID(reverse(unhex('000102030405060708090a0b0c0d0e0f')))─┐
│ 08090a0b-0c0d-0e0f-0001-020304050607                                  │
└───────────────────────────────────────────────────────────────────────┘
```

Going back and forth from String to UUID.

Query:

``` sql
WITH
    generateUUIDv4() AS uuid,
    identity(lower(hex(reverse(reinterpretAsString(uuid))))) AS str,
    reinterpretAsUUID(reverse(unhex(str))) AS uuid2
SELECT uuid = uuid2;
```

Result:

``` text
┌─equals(uuid, uuid2)─┐
│                   1 │
└─────────────────────┘
```
These functions are aliases for the `reinterpretAs` function.
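
A minimal sketch of that equivalence, assuming the `reinterpretAs(x, T)` form introduced above (the output shown is an expectation, not a verified result):

```sql
SELECT
    reinterpretAsUInt8(toInt8(-1)) AS via_alias,
    reinterpretAs(toInt8(-1), 'UInt8') AS via_generic;
-- Both columns are expected to return 255.
```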

## CAST(x, T) {#type_conversion_function-cast}
docs/en/sql-reference/window-functions/index.md (new file)
@@ -0,0 +1,38 @@
# [development] Window Functions

!!! warning "Warning"
    This is an experimental feature that is currently in development and is not ready
    for general use. It will change in unpredictable backwards-incompatible ways in
    future releases.

ClickHouse currently supports calculation of aggregate functions over a window.
Pure window functions such as `rank`, `lag`, `lead` and so on are not yet supported.

The window can be specified either with an `OVER` clause or with a separate
`WINDOW` clause.

Only two variants of frame are supported, `ROWS` and `RANGE`. The only supported
frame boundaries are `ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW`.
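
A minimal sketch of the supported grammar (an assumed illustration, not taken from the tests linked below): an aggregate computed over a window with the default frame.

```sql
SELECT
    number,
    sum(number) OVER (PARTITION BY number % 2 ORDER BY number) AS running_sum
FROM numbers(10);
```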

## References

### GitHub Issues
The roadmap for the initial support of window functions is [in this issue](https://github.com/ClickHouse/ClickHouse/issues/18097).

All GitHub issues related to window functions have the [comp-window-functions](https://github.com/ClickHouse/ClickHouse/labels/comp-window-functions) tag.

### Tests
These tests contain the examples of the currently supported grammar:
https://github.com/ClickHouse/ClickHouse/blob/master/tests/performance/window_functions.xml
https://github.com/ClickHouse/ClickHouse/blob/master/tests/queries/0_stateless/01591_window_functions.sql

### Postgres Docs
https://www.postgresql.org/docs/devel/sql-expressions.html#SYNTAX-WINDOW-FUNCTIONS
https://www.postgresql.org/docs/devel/functions-window.html
https://www.postgresql.org/docs/devel/tutorial-window.html

### MySQL Docs
https://dev.mysql.com/doc/refman/8.0/en/window-function-descriptions.html
https://dev.mysql.com/doc/refman/8.0/en/window-functions-usage.html
https://dev.mysql.com/doc/refman/8.0/en/window-functions-frames.html
@@ -5,6 +5,22 @@ toc_title: '2020'

### ClickHouse release 20.12

### ClickHouse release v20.12.5.14-stable, 2020-12-28

#### Bug Fix

* Disable write with AIO during merges because it can lead to extremely rare data corruption of primary key columns during merge. [#18481](https://github.com/ClickHouse/ClickHouse/pull/18481) ([alesapin](https://github.com/alesapin)).
* Fixed `value is too short` error when executing `toType(...)` functions (`toDate`, `toUInt32`, etc) with argument of type `Nullable(String)`. Now such functions return `NULL` on parsing errors instead of throwing exception. Fixes [#7673](https://github.com/ClickHouse/ClickHouse/issues/7673). [#18445](https://github.com/ClickHouse/ClickHouse/pull/18445) ([tavplubix](https://github.com/tavplubix)).
* Restrict merges from wide to compact parts. In case of vertical merge it led to broken result part. [#18381](https://github.com/ClickHouse/ClickHouse/pull/18381) ([Anton Popov](https://github.com/CurtizJ)).
* Fix filling table `system.settings_profile_elements`. This PR fixes [#18231](https://github.com/ClickHouse/ClickHouse/issues/18231). [#18379](https://github.com/ClickHouse/ClickHouse/pull/18379) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix possible crashes in aggregate functions with combinator `Distinct`, while using two-level aggregation. Fixes [#17682](https://github.com/ClickHouse/ClickHouse/issues/17682). [#18365](https://github.com/ClickHouse/ClickHouse/pull/18365) ([Anton Popov](https://github.com/CurtizJ)).
* Fix error when query `MODIFY COLUMN ... REMOVE TTL` doesn't actually remove column TTL. [#18130](https://github.com/ClickHouse/ClickHouse/pull/18130) ([alesapin](https://github.com/alesapin)).

#### Build/Testing/Packaging Improvement

* Update timezones info to 2020e. [#18531](https://github.com/ClickHouse/ClickHouse/pull/18531) ([alesapin](https://github.com/alesapin)).


### ClickHouse release v20.12.4.5-stable, 2020-12-24

#### Bug Fix
@@ -142,6 +158,70 @@ toc_title: '2020'

## ClickHouse release 20.11

### ClickHouse release v20.11.7.16-stable, 2021-03-02

#### Improvement

* Explicitly set uid / gid of clickhouse user & group to the fixed values (101) in clickhouse-server images. [#19096](https://github.com/ClickHouse/ClickHouse/pull/19096) ([filimonov](https://github.com/filimonov)).

#### Bug Fix

* BloomFilter index crash fix. Fixes [#19757](https://github.com/ClickHouse/ClickHouse/issues/19757). [#19884](https://github.com/ClickHouse/ClickHouse/pull/19884) ([Maksim Kita](https://github.com/kitaisreal)).
* Deadlock was possible if system.text_log is enabled. This fixes [#19874](https://github.com/ClickHouse/ClickHouse/issues/19874). [#19875](https://github.com/ClickHouse/ClickHouse/pull/19875) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* In previous versions, unusual arguments for function arrayEnumerateUniq could cause a crash or infinite loop. This closes [#19787](https://github.com/ClickHouse/ClickHouse/issues/19787). [#19788](https://github.com/ClickHouse/ClickHouse/pull/19788) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed stack overflow when using accurate comparison of arithmetic type with string type. [#19773](https://github.com/ClickHouse/ClickHouse/pull/19773) ([tavplubix](https://github.com/tavplubix)).
* Fix a segmentation fault in `bitmapAndnot` function. Fixes [#19668](https://github.com/ClickHouse/ClickHouse/issues/19668). [#19713](https://github.com/ClickHouse/ClickHouse/pull/19713) ([Maksim Kita](https://github.com/kitaisreal)).
* Some functions with big integers could cause a segfault. Big integers are an experimental feature. This closes [#19667](https://github.com/ClickHouse/ClickHouse/issues/19667). [#19672](https://github.com/ClickHouse/ClickHouse/pull/19672) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix wrong result of function `neighbor` for `LowCardinality` argument. Fixes [#10333](https://github.com/ClickHouse/ClickHouse/issues/10333). [#19617](https://github.com/ClickHouse/ClickHouse/pull/19617) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix use-after-free of the CompressedWriteBuffer in Connection after disconnect. [#19599](https://github.com/ClickHouse/ClickHouse/pull/19599) ([Azat Khuzhin](https://github.com/azat)).
* `DROP/DETACH TABLE table ON CLUSTER cluster SYNC` query might hang, it's fixed. Fixes [#19568](https://github.com/ClickHouse/ClickHouse/issues/19568). [#19572](https://github.com/ClickHouse/ClickHouse/pull/19572) ([tavplubix](https://github.com/tavplubix)).
* Query CREATE DICTIONARY id expression fix. [#19571](https://github.com/ClickHouse/ClickHouse/pull/19571) ([Maksim Kita](https://github.com/kitaisreal)).
* Fix SIGSEGV with merge_tree_min_rows_for_concurrent_read/merge_tree_min_bytes_for_concurrent_read=0/UINT64_MAX. [#19528](https://github.com/ClickHouse/ClickHouse/pull/19528) ([Azat Khuzhin](https://github.com/azat)).
* Buffer overflow (on memory read) was possible if `addMonth` function was called with specifically crafted arguments. This fixes [#19441](https://github.com/ClickHouse/ClickHouse/issues/19441). This fixes [#19413](https://github.com/ClickHouse/ClickHouse/issues/19413). [#19472](https://github.com/ClickHouse/ClickHouse/pull/19472) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Mark distributed batch as broken in case of empty data block in one of files. [#19449](https://github.com/ClickHouse/ClickHouse/pull/19449) ([Azat Khuzhin](https://github.com/azat)).
* Fix possible buffer overflow in Uber H3 library. See https://github.com/uber/h3/issues/392. This closes [#19219](https://github.com/ClickHouse/ClickHouse/issues/19219). [#19383](https://github.com/ClickHouse/ClickHouse/pull/19383) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix system.parts _state column (LOGICAL_ERROR when querying this column, due to incorrect order). [#19346](https://github.com/ClickHouse/ClickHouse/pull/19346) ([Azat Khuzhin](https://github.com/azat)).
* Fix error `Cannot convert column now64() because it is constant but values of constants are different in source and result`. Continuation of [#7156](https://github.com/ClickHouse/ClickHouse/issues/7156). [#19316](https://github.com/ClickHouse/ClickHouse/pull/19316) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix bug when concurrent `ALTER` and `DROP` queries may hang while processing ReplicatedMergeTree table. [#19237](https://github.com/ClickHouse/ClickHouse/pull/19237) ([alesapin](https://github.com/alesapin)).
* Fix infinite reading from file in `ORC` format (was introduced in [#10580](https://github.com/ClickHouse/ClickHouse/issues/10580)). Fixes [#19095](https://github.com/ClickHouse/ClickHouse/issues/19095). [#19134](https://github.com/ClickHouse/ClickHouse/pull/19134) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix startup bug when clickhouse was not able to read compression codec from `LowCardinality(Nullable(...))` and throws exception `Attempt to read after EOF`. Fixes [#18340](https://github.com/ClickHouse/ClickHouse/issues/18340). [#19101](https://github.com/ClickHouse/ClickHouse/pull/19101) ([alesapin](https://github.com/alesapin)).
* Fixed `There is no checkpoint` error when inserting data through http interface using `Template` or `CustomSeparated` format. Fixes [#19021](https://github.com/ClickHouse/ClickHouse/issues/19021). [#19072](https://github.com/ClickHouse/ClickHouse/pull/19072) ([tavplubix](https://github.com/tavplubix)).
* Restrict `MODIFY TTL` queries for `MergeTree` tables created in old syntax. Previously the query succeeded, but actually it had no effect. [#19064](https://github.com/ClickHouse/ClickHouse/pull/19064) ([Anton Popov](https://github.com/CurtizJ)).
* Make sure `groupUniqArray` returns correct type for argument of Enum type. This closes [#17875](https://github.com/ClickHouse/ClickHouse/issues/17875). [#19019](https://github.com/ClickHouse/ClickHouse/pull/19019) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix possible error `Expected single dictionary argument for function` if use function `ignore` with `LowCardinality` argument. Fixes [#14275](https://github.com/ClickHouse/ClickHouse/issues/14275). [#19016](https://github.com/ClickHouse/ClickHouse/pull/19016) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix inserting of `LowCardinality` column to table with `TinyLog` engine. Fixes [#18629](https://github.com/ClickHouse/ClickHouse/issues/18629). [#19010](https://github.com/ClickHouse/ClickHouse/pull/19010) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Disable `optimize_move_functions_out_of_any` because optimization is not always correct. This closes [#18051](https://github.com/ClickHouse/ClickHouse/issues/18051). This closes [#18973](https://github.com/ClickHouse/ClickHouse/issues/18973). [#18981](https://github.com/ClickHouse/ClickHouse/pull/18981) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed very rare deadlock at shutdown. [#18977](https://github.com/ClickHouse/ClickHouse/pull/18977) ([tavplubix](https://github.com/tavplubix)).
* Fix bug when mutation with some escaped text (like `ALTER ... UPDATE e = CAST('foo', 'Enum8(\'foo\' = 1')` serialized incorrectly. Fixes [#18878](https://github.com/ClickHouse/ClickHouse/issues/18878). [#18944](https://github.com/ClickHouse/ClickHouse/pull/18944) ([alesapin](https://github.com/alesapin)).
* Attach partition should reset the mutation. [#18804](https://github.com/ClickHouse/ClickHouse/issues/18804). [#18935](https://github.com/ClickHouse/ClickHouse/pull/18935) ([fastio](https://github.com/fastio)).
* Fix possible hang at shutdown in clickhouse-local. This fixes [#18891](https://github.com/ClickHouse/ClickHouse/issues/18891). [#18893](https://github.com/ClickHouse/ClickHouse/pull/18893) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix *If combinator with unary function and Nullable types. [#18806](https://github.com/ClickHouse/ClickHouse/pull/18806) ([Azat Khuzhin](https://github.com/azat)).
* Asynchronous distributed INSERTs can be rejected by the server if the setting `network_compression_method` is globally set to non-default value. This fixes [#18741](https://github.com/ClickHouse/ClickHouse/issues/18741). [#18776](https://github.com/ClickHouse/ClickHouse/pull/18776) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed `Attempt to read after eof` error when trying to `CAST` `NULL` from `Nullable(String)` to `Nullable(Decimal(P, S))`. Now function `CAST` returns `NULL` when it cannot parse decimal from nullable string. Fixes [#7690](https://github.com/ClickHouse/ClickHouse/issues/7690). [#18718](https://github.com/ClickHouse/ClickHouse/pull/18718) ([Winter Zhang](https://github.com/zhang2014)).
* Fix Logger with unmatched arg size. [#18717](https://github.com/ClickHouse/ClickHouse/pull/18717) ([sundyli](https://github.com/sundy-li)).
* Add FixedString Data type support. I'll get this exception "Code: 50, e.displayText() = DB::Exception: Unsupported type FixedString(1)" when replicating data from MySQL to ClickHouse. This patch fixes bug [#18450](https://github.com/ClickHouse/ClickHouse/issues/18450) Also fixes [#6556](https://github.com/ClickHouse/ClickHouse/issues/6556). [#18553](https://github.com/ClickHouse/ClickHouse/pull/18553) ([awesomeleo](https://github.com/awesomeleo)).
* Fix possible `Pipeline stuck` error while using `ORDER BY` after subquery with `RIGHT` or `FULL` join. [#18550](https://github.com/ClickHouse/ClickHouse/pull/18550) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix bug which may lead to `ALTER` queries hung after corresponding mutation kill. Found by thread fuzzer. [#18518](https://github.com/ClickHouse/ClickHouse/pull/18518) ([alesapin](https://github.com/alesapin)).
* Disable write with AIO during merges because it can lead to extremely rare data corruption of primary key columns during merge. [#18481](https://github.com/ClickHouse/ClickHouse/pull/18481) ([alesapin](https://github.com/alesapin)).
* Disable constant folding for subqueries on the analysis stage, when the result cannot be calculated. [#18446](https://github.com/ClickHouse/ClickHouse/pull/18446) ([Azat Khuzhin](https://github.com/azat)).
* Fixed `value is too short` error when executing `toType(...)` functions (`toDate`, `toUInt32`, etc) with argument of type `Nullable(String)`. Now such functions return `NULL` on parsing errors instead of throwing exception. Fixes [#7673](https://github.com/ClickHouse/ClickHouse/issues/7673). [#18445](https://github.com/ClickHouse/ClickHouse/pull/18445) ([tavplubix](https://github.com/tavplubix)).
* Restrict merges from wide to compact parts. In case of vertical merge it led to broken result part. [#18381](https://github.com/ClickHouse/ClickHouse/pull/18381) ([Anton Popov](https://github.com/CurtizJ)).
* Fix filling table `system.settings_profile_elements`. This PR fixes [#18231](https://github.com/ClickHouse/ClickHouse/issues/18231). [#18379](https://github.com/ClickHouse/ClickHouse/pull/18379) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix index analysis of binary functions with constant argument which leads to wrong query results. This fixes [#18364](https://github.com/ClickHouse/ClickHouse/issues/18364). [#18373](https://github.com/ClickHouse/ClickHouse/pull/18373) ([Amos Bird](https://github.com/amosbird)).
* Fix possible crashes in aggregate functions with combinator `Distinct`, while using two-level aggregation. Fixes [#17682](https://github.com/ClickHouse/ClickHouse/issues/17682). [#18365](https://github.com/ClickHouse/ClickHouse/pull/18365) ([Anton Popov](https://github.com/CurtizJ)).
* `SELECT count() FROM table` now can be executed if only one any column can be selected from the `table`. This PR fixes [#10639](https://github.com/ClickHouse/ClickHouse/issues/10639). [#18233](https://github.com/ClickHouse/ClickHouse/pull/18233) ([Vitaly Baranov](https://github.com/vitlibar)).
* `SELECT JOIN` now requires the `SELECT` privilege on each of the joined tables. This PR fixes [#17654](https://github.com/ClickHouse/ClickHouse/issues/17654). [#18232](https://github.com/ClickHouse/ClickHouse/pull/18232) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix possible incomplete query result while reading from `MergeTree*` in case of read backoff (message `<Debug> MergeTreeReadPool: Will lower number of threads` in logs). Was introduced in [#16423](https://github.com/ClickHouse/ClickHouse/issues/16423). Fixes [#18137](https://github.com/ClickHouse/ClickHouse/issues/18137). [#18216](https://github.com/ClickHouse/ClickHouse/pull/18216) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix error when query `MODIFY COLUMN ... REMOVE TTL` doesn't actually remove column TTL. [#18130](https://github.com/ClickHouse/ClickHouse/pull/18130) ([alesapin](https://github.com/alesapin)).
* Fix non-deterministic functions with predicate optimizer. This fixes [#17244](https://github.com/ClickHouse/ClickHouse/issues/17244). [#17273](https://github.com/ClickHouse/ClickHouse/pull/17273) ([Winter Zhang](https://github.com/zhang2014)).
* Mutation might hang waiting for some non-existent part after `MOVE` or `REPLACE PARTITION` or, in rare cases, after `DETACH` or `DROP PARTITION`. It's fixed. [#15537](https://github.com/ClickHouse/ClickHouse/pull/15537) ([tavplubix](https://github.com/tavplubix)).

#### Build/Testing/Packaging Improvement

* Update timezones info to 2020e. [#18531](https://github.com/ClickHouse/ClickHouse/pull/18531) ([alesapin](https://github.com/alesapin)).


### ClickHouse release v20.11.6.6-stable, 2020-12-24

#### Bug Fix
@ -588,6 +668,60 @@ toc_title: '2020'
|
||||
|
||||
## ClickHouse release 20.9

### ClickHouse release v20.9.7.11-stable, 2020-12-07

#### Performance Improvement

* Fix performance of reading from `Merge` tables over huge number of `MergeTree` tables. Fixes [#7748](https://github.com/ClickHouse/ClickHouse/issues/7748). [#16988](https://github.com/ClickHouse/ClickHouse/pull/16988) ([Anton Popov](https://github.com/CurtizJ)).

#### Bug Fix

* Do not restore parts from WAL if `in_memory_parts_enable_wal` is disabled. [#17802](https://github.com/ClickHouse/ClickHouse/pull/17802) ([detailyang](https://github.com/detailyang)).
* Fixed segfault when there is not enough space when inserting into `Distributed` table. [#17737](https://github.com/ClickHouse/ClickHouse/pull/17737) ([tavplubix](https://github.com/tavplubix)).
* Fixed problem when ClickHouse fails to resume connection to MySQL servers. [#17681](https://github.com/ClickHouse/ClickHouse/pull/17681) ([Alexander Kazakov](https://github.com/Akazz)).
* Fixed `Function not implemented` error when executing `RENAME` query in `Atomic` database with ClickHouse running on Windows Subsystem for Linux. Fixes [#17661](https://github.com/ClickHouse/ClickHouse/issues/17661). [#17664](https://github.com/ClickHouse/ClickHouse/pull/17664) ([tavplubix](https://github.com/tavplubix)).
* When clickhouse-client is used in interactive mode with multiline queries, a single-line comment was erroneously extended to the end of the query. This fixes [#13654](https://github.com/ClickHouse/ClickHouse/issues/13654). [#17565](https://github.com/ClickHouse/ClickHouse/pull/17565) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix the issue when the server can stop accepting connections in very rare cases. [#17542](https://github.com/ClickHouse/ClickHouse/pull/17542) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `ALTER` query hang when the corresponding mutation was killed on a different replica. Fixes [#16953](https://github.com/ClickHouse/ClickHouse/issues/16953). [#17499](https://github.com/ClickHouse/ClickHouse/pull/17499) ([alesapin](https://github.com/alesapin)).
* Fix bug when mark cache size was underestimated by ClickHouse. It may happen when there are a lot of tiny files with marks. [#17496](https://github.com/ClickHouse/ClickHouse/pull/17496) ([alesapin](https://github.com/alesapin)).
* Fix `ORDER BY` with enabled setting `optimize_redundant_functions_in_order_by`. [#17471](https://github.com/ClickHouse/ClickHouse/pull/17471) ([Anton Popov](https://github.com/CurtizJ)).
* Fix duplicates after `DISTINCT` which were possible because of incorrect optimization. Fixes [#17294](https://github.com/ClickHouse/ClickHouse/issues/17294). [#17296](https://github.com/ClickHouse/ClickHouse/pull/17296) ([li chengxiang](https://github.com/chengxianglibra)). [#17439](https://github.com/ClickHouse/ClickHouse/pull/17439) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix crash while reading from `JOIN` table with `LowCardinality` types. Fixes [#17228](https://github.com/ClickHouse/ClickHouse/issues/17228). [#17397](https://github.com/ClickHouse/ClickHouse/pull/17397) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix set index invalidation when there are const columns in the subquery. This fixes [#17246](https://github.com/ClickHouse/ClickHouse/issues/17246). [#17249](https://github.com/ClickHouse/ClickHouse/pull/17249) ([Amos Bird](https://github.com/amosbird)).
* Fix `ColumnConst` comparison which leads to crash. This fixes [#17088](https://github.com/ClickHouse/ClickHouse/issues/17088). [#17135](https://github.com/ClickHouse/ClickHouse/pull/17135) ([Amos Bird](https://github.com/amosbird)).
* Fixed crash on `CREATE TABLE ... AS some_table` query when `some_table` was created `AS table_function()`. Fixes [#16944](https://github.com/ClickHouse/ClickHouse/issues/16944). [#17072](https://github.com/ClickHouse/ClickHouse/pull/17072) ([tavplubix](https://github.com/tavplubix)).
* Bug fix for function `fuzzBits`, related issue: [#16980](https://github.com/ClickHouse/ClickHouse/issues/16980). [#17051](https://github.com/ClickHouse/ClickHouse/pull/17051) ([hexiaoting](https://github.com/hexiaoting)).
* Avoid unnecessary network errors for remote queries which may be cancelled while execution, like queries with `LIMIT`. [#17006](https://github.com/ClickHouse/ClickHouse/pull/17006) ([Azat Khuzhin](https://github.com/azat)).
* TODO. [#16866](https://github.com/ClickHouse/ClickHouse/pull/16866) ([tavplubix](https://github.com/tavplubix)).
* Return number of affected rows for INSERT queries via MySQL protocol. Previously ClickHouse used to always return 0, it's fixed. Fixes [#16605](https://github.com/ClickHouse/ClickHouse/issues/16605). [#16715](https://github.com/ClickHouse/ClickHouse/pull/16715) ([Winter Zhang](https://github.com/zhang2014)).

#### Build/Testing/Packaging Improvement

* Update embedded timezone data to version 2020d (also update cctz to the latest master). [#17204](https://github.com/ClickHouse/ClickHouse/pull/17204) ([filimonov](https://github.com/filimonov)).


### ClickHouse release v20.9.6.14-stable, 2020-11-20

#### Improvement

* Make it possible to connect to `clickhouse-server` secure endpoint which requires SNI. This is possible when `clickhouse-server` is hosted behind TLS proxy. [#16938](https://github.com/ClickHouse/ClickHouse/pull/16938) ([filimonov](https://github.com/filimonov)).
* Conditional aggregate functions (for example: `avgIf`, `sumIf`, `maxIf`) now return `NULL` when no rows match and the arguments are nullable. [#13964](https://github.com/ClickHouse/ClickHouse/pull/13964) ([Winter Zhang](https://github.com/zhang2014)).

#### Bug Fix

* Fix bug when `ON CLUSTER` queries may hang forever for non-leader `ReplicatedMergeTree` tables. [#17089](https://github.com/ClickHouse/ClickHouse/pull/17089) ([alesapin](https://github.com/alesapin)).
* Re-resolve the IP of the `format_avro_schema_registry_url` in case of errors. [#16985](https://github.com/ClickHouse/ClickHouse/pull/16985) ([filimonov](https://github.com/filimonov)).
* Fix possible server crash after `ALTER TABLE ... MODIFY COLUMN ... NewType` when a `SELECT` has a `WHERE` expression on the column being altered and the alter has not finished yet. [#16968](https://github.com/ClickHouse/ClickHouse/pull/16968) ([Amos Bird](https://github.com/amosbird)).
* Install script should always create subdirectories in config folders. This is only relevant for Docker build with custom config. [#16936](https://github.com/ClickHouse/ClickHouse/pull/16936) ([filimonov](https://github.com/filimonov)).
* Fix possible error `Illegal type of argument` for queries with `ORDER BY`. Fixes [#16580](https://github.com/ClickHouse/ClickHouse/issues/16580). [#16928](https://github.com/ClickHouse/ClickHouse/pull/16928) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Abort multipart upload if no data was written to `WriteBufferFromS3`. [#16840](https://github.com/ClickHouse/ClickHouse/pull/16840) ([Pavel Kovalenko](https://github.com/Jokser)).
* Fix crash when using `any` without any arguments. This is for [#16803](https://github.com/ClickHouse/ClickHouse/issues/16803). [#16826](https://github.com/ClickHouse/ClickHouse/pull/16826) ([Amos Bird](https://github.com/amosbird)).
* Fix `IN` operator over several columns and tuples with enabled `transform_null_in` setting. Fixes [#15310](https://github.com/ClickHouse/ClickHouse/issues/15310). [#16722](https://github.com/ClickHouse/ClickHouse/pull/16722) ([Anton Popov](https://github.com/CurtizJ)).
* Fix `optimize_read_in_order`/`optimize_aggregation_in_order` with `max_threads > 0` and an expression in `ORDER BY`. [#16637](https://github.com/ClickHouse/ClickHouse/pull/16637) ([Azat Khuzhin](https://github.com/azat)).
* Fix remote query failure when using the `If` suffix of an aggregate function. Fixes [#16574](https://github.com/ClickHouse/ClickHouse/issues/16574), fixes [#16231](https://github.com/ClickHouse/ClickHouse/issues/16231). [#16610](https://github.com/ClickHouse/ClickHouse/pull/16610) ([Winter Zhang](https://github.com/zhang2014)).
* Query is finished faster in case of exception. Cancel execution on remote replicas if exception happens. [#15578](https://github.com/ClickHouse/ClickHouse/pull/15578) ([Azat Khuzhin](https://github.com/azat)).


### ClickHouse release v20.9.5.5-stable, 2020-11-13

#### Bug Fix
@@ -744,6 +878,23 @@ toc_title: '2020'
## ClickHouse release 20.8

### ClickHouse release v20.8.12.2-lts, 2021-01-16

#### Bug Fix

* Fix `*If` combinator with unary function and Nullable types. [#18806](https://github.com/ClickHouse/ClickHouse/pull/18806) ([Azat Khuzhin](https://github.com/azat)).
* Restrict merges from wide to compact parts. In case of a vertical merge it led to a broken result part. [#18381](https://github.com/ClickHouse/ClickHouse/pull/18381) ([Anton Popov](https://github.com/CurtizJ)).


### ClickHouse release v20.8.11.17-lts, 2020-12-25

#### Bug Fix

* Disable write with AIO during merges because it can lead to extremely rare data corruption of primary key columns during merge. [#18481](https://github.com/ClickHouse/ClickHouse/pull/18481) ([alesapin](https://github.com/alesapin)).
* Fixed `value is too short` error when executing `toType(...)` functions (`toDate`, `toUInt32`, etc) with argument of type `Nullable(String)`. Now such functions return `NULL` on parsing errors instead of throwing exception. Fixes [#7673](https://github.com/ClickHouse/ClickHouse/issues/7673). [#18445](https://github.com/ClickHouse/ClickHouse/pull/18445) ([tavplubix](https://github.com/tavplubix)).
* Fix possible crashes in aggregate functions with combinator `Distinct`, while using two-level aggregation. Fixes [#17682](https://github.com/ClickHouse/ClickHouse/issues/17682). [#18365](https://github.com/ClickHouse/ClickHouse/pull/18365) ([Anton Popov](https://github.com/CurtizJ)).


### ClickHouse release v20.8.10.13-lts, 2020-12-24

#### Bug Fix
@@ -183,6 +183,103 @@ SELECT visibleWidth(NULL)

Get the size of the block.

In ClickHouse, a query is always executed on blocks (sets of column chunks). This function lets you get the size of the block it was called for.

## byteSize {#function-bytesize}

Returns an estimate of the uncompressed in-memory size of its arguments, in bytes.

**Syntax**

```sql
byteSize(argument [, ...])
```

**Parameters**

- `argument` — Value.

**Returned value**

- Estimate of the in-memory size of the arguments, in bytes.

Type: [UInt64](../../sql-reference/data-types/int-uint.md).

**Examples**

For arguments of type [String](../../sql-reference/data-types/string.md), the function returns the string length + 9 (null terminator + length).

Query:

```sql
SELECT byteSize('string');
```

Result:

```text
┌─byteSize('string')─┐
│                 15 │
└────────────────────┘
```

Query:

```sql
CREATE TABLE test
(
    `key` Int32,
    `u8` UInt8,
    `u16` UInt16,
    `u32` UInt32,
    `u64` UInt64,
    `i8` Int8,
    `i16` Int16,
    `i32` Int32,
    `i64` Int64,
    `f32` Float32,
    `f64` Float64
)
ENGINE = MergeTree
ORDER BY key;

INSERT INTO test VALUES(1, 8, 16, 32, 64, -8, -16, -32, -64, 32.32, 64.64);

SELECT key, byteSize(u8) AS `byteSize(UInt8)`, byteSize(u16) AS `byteSize(UInt16)`, byteSize(u32) AS `byteSize(UInt32)`, byteSize(u64) AS `byteSize(UInt64)`, byteSize(i8) AS `byteSize(Int8)`, byteSize(i16) AS `byteSize(Int16)`, byteSize(i32) AS `byteSize(Int32)`, byteSize(i64) AS `byteSize(Int64)`, byteSize(f32) AS `byteSize(Float32)`, byteSize(f64) AS `byteSize(Float64)` FROM test ORDER BY key ASC FORMAT Vertical;
```

Result:

```text
Row 1:
──────
key:               1
byteSize(UInt8):   1
byteSize(UInt16):  2
byteSize(UInt32):  4
byteSize(UInt64):  8
byteSize(Int8):    1
byteSize(Int16):   2
byteSize(Int32):   4
byteSize(Int64):   8
byteSize(Float32): 4
byteSize(Float64): 8
```

If the function takes multiple arguments, it returns their combined size in bytes.

Query:

```sql
SELECT byteSize(NULL, 1, 0.3, '');
```

Result:

```text
┌─byteSize(NULL, 1, 0.3, '')─┐
│                         19 │
└────────────────────────────┘
```

## materialize(x) {#materializex}

Turns a constant into a full column containing just one value.
@@ -13,8 +13,6 @@ toc_title: "\u0424\u0443\u043d\u043a\u0446\u0438\u0438\u0020\u043f\u043e\u0438\u

Returns the position (in bytes) of the found substring in the string, starting from 1, or 0 if the substring was not found.

Works under the assumption that the string contains a set of bytes representing single-byte encoded text. If this assumption does not hold, the function does not throw an exception and returns an undefined result. If a character can be represented with two bytes, it will be represented with two bytes, and so on.

For a case-insensitive search, use the [positionCaseInsensitive](#positioncaseinsensitive) function.

**Syntax**
@@ -60,7 +60,7 @@ public:
         return std::make_shared<DataTypeUInt8>();
     }

-    void create(AggregateDataPtr place) const override
+    void create(AggregateDataPtr __restrict place) const override
     {
         if (std::uniform_real_distribution<>(0.0, 1.0)(thread_local_rng) <= throw_probability)
             throw Exception("Aggregate function " + getName() + " has thrown exception successfully", ErrorCodes::AGGREGATE_FUNCTION_THROW);
@@ -68,7 +68,7 @@ public:
         new (place) Data;
     }

-    void destroy(AggregateDataPtr place) const noexcept override
+    void destroy(AggregateDataPtr __restrict place) const noexcept override
     {
         data(place).~Data();
     }
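Nearly every hunk in this commit makes the same change: the aggregate-state pointer gains the `__restrict` qualifier. A minimal standalone sketch of what that buys, assuming only standard C++ plus the `__restrict` extension supported by GCC, Clang and MSVC; `SumState` and `addBatch` here are hypothetical stand-ins, not ClickHouse code:

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical aggregate state, for illustration only.
struct SumState
{
    uint64_t sum = 0;
    uint64_t count = 0;
};

// With __restrict the compiler may assume 'place' does not alias 'values',
// so it can keep sum/count in registers for the whole loop instead of
// reloading them after every store. Without the qualifier it must assume
// a store through 'place' could change *values.
static void addBatch(char * __restrict place, const uint64_t * values, size_t n)
{
    auto * state = reinterpret_cast<SumState *>(place);
    for (size_t i = 0; i < n; ++i)
    {
        state->sum += values[i];
        ++state->count;
    }
}
```

Each aggregation state lives in its own arena slot and never aliases the input columns, so the no-aliasing promise is cheap to make and mostly helps tight per-row loops such as `add`.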
@@ -70,25 +70,25 @@ public:
         return type_res;
     }

-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena * arena) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena * arena) const override
     {
         if (this->data(place).value.changeIfBetter(*columns[1], row_num, arena))
             this->data(place).result.change(*columns[0], row_num, arena);
     }

-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena * arena) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena * arena) const override
     {
         if (this->data(place).value.changeIfBetter(this->data(rhs).value, arena))
             this->data(place).result.change(this->data(rhs).result, arena);
     }

-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         this->data(place).result.write(buf, *type_res);
         this->data(place).value.write(buf, *type_val);
     }

-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena * arena) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena * arena) const override
     {
         this->data(place).result.read(buf, *type_res, arena);
         this->data(place).value.read(buf, *type_val, arena);
@@ -96,7 +96,7 @@ public:

     bool allocatesMemoryInArena() const override { return Data::allocatesMemoryInArena(); }

-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         if (tuple_argument)
         {

@@ -47,12 +47,12 @@ public:
         return nested_func->getReturnType();
     }

-    void create(AggregateDataPtr place) const override
+    void create(AggregateDataPtr __restrict place) const override
     {
         nested_func->create(place);
     }

-    void destroy(AggregateDataPtr place) const noexcept override
+    void destroy(AggregateDataPtr __restrict place) const noexcept override
     {
         nested_func->destroy(place);
     }
@@ -77,7 +77,7 @@ public:
         return nested_func->isState();
     }

-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena * arena) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena * arena) const override
     {
         const IColumn * nested[num_arguments];

@@ -104,22 +104,22 @@ public:
             nested_func->add(place, nested, i, arena);
     }

-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena * arena) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena * arena) const override
     {
         nested_func->merge(place, rhs, arena);
     }

-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         nested_func->serialize(place, buf);
     }

-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena * arena) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena * arena) const override
     {
         nested_func->deserialize(place, buf, arena);
     }

-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena * arena) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena * arena) const override
     {
         nested_func->insertResultInto(place, to, arena);
     }

@@ -98,13 +98,13 @@ public:

     DataTypePtr getReturnType() const final { return std::make_shared<DataTypeNumber<Float64>>(); }

-    void NO_SANITIZE_UNDEFINED merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void NO_SANITIZE_UNDEFINED merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         this->data(place).numerator += this->data(rhs).numerator;
         this->data(place).denominator += this->data(rhs).denominator;
     }

-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         writeBinary(this->data(place).numerator, buf);

@@ -114,7 +114,7 @@ public:
         writeBinary(this->data(place).denominator, buf);
     }

-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override
     {
         readBinary(this->data(place).numerator, buf);

@@ -124,7 +124,7 @@ public:
         readBinary(this->data(place).denominator, buf);
     }

-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         if constexpr (IsDecimalNumber<Numerator> || IsDecimalNumber<Denominator>)
             assert_cast<ColumnVector<Float64> &>(to).getData().push_back(
@@ -148,7 +148,7 @@ class AggregateFunctionAvg final : public AggregateFunctionAvgBase<AvgFieldType<
 public:
     using AggregateFunctionAvgBase<AvgFieldType<T>, UInt64, AggregateFunctionAvg<T>>::AggregateFunctionAvgBase;

-    void NO_SANITIZE_UNDEFINED add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena *) const final
+    void NO_SANITIZE_UNDEFINED add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena *) const final
     {
         this->data(place).numerator += static_cast<const DecimalOrVectorCol<T> &>(*columns[0]).getData()[row_num];
         ++this->data(place).denominator;
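The hunks above show why `avg` keeps a separate numerator and denominator rather than a running mean. A small standalone model under simplified types (the real class is templated over decimal and floating-point numerators):

```cpp
#include <cstdint>

// Standalone model of the (numerator, denominator) representation: partial
// states from different threads or shards merge by plain addition, and the
// division happens only once, when the result is materialized.
struct AvgState
{
    double numerator = 0;
    uint64_t denominator = 0;

    void add(double x) { numerator += x; ++denominator; }

    void merge(const AvgState & rhs)
    {
        numerator += rhs.numerator;
        denominator += rhs.denominator;
    }

    double result() const { return denominator ? numerator / denominator : 0.0; }
};
```

This is what makes the aggregate mergeable under two-level aggregation: combining partials is exact, unlike averaging two already-divided averages.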
@@ -28,7 +28,7 @@ public:

     using ValueT = MaxFieldType<Value, Weight>;

-    void NO_SANITIZE_UNDEFINED add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena *) const override
+    void NO_SANITIZE_UNDEFINED add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena *) const override
     {
         const auto& weights = static_cast<const DecimalOrVectorCol<Weight> &>(*columns[1]);

@@ -54,27 +54,27 @@ public:
         return std::make_shared<DataTypeNumber<T>>();
     }

-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena *) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena *) const override
     {
         this->data(place).update(assert_cast<const ColumnVector<T> &>(*columns[0]).getData()[row_num]);
     }

-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         this->data(place).update(this->data(rhs).value);
     }

-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         writeBinary(this->data(place).value, buf);
     }

-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override
     {
         readBinary(this->data(place).value, buf);
     }

-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         assert_cast<ColumnVector<T> &>(to).getData().push_back(this->data(place).value);
     }

@@ -127,7 +127,7 @@ public:
         return std::make_shared<DataTypeFloat64>();
     }

-    void add(AggregateDataPtr place, const IColumn ** columns, const size_t row_num, Arena *) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, const size_t row_num, Arena *) const override
     {
         /// NOTE Slightly inefficient.
         const auto x = columns[0]->getFloat64(row_num);
@@ -135,22 +135,22 @@ public:
         data(place).add(x, y);
     }

-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         data(place).merge(data(rhs));
     }

-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         data(place).serialize(buf);
     }

-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override
     {
         data(place).deserialize(buf);
     }

-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         assert_cast<ColumnFloat64 &>(to).getData().push_back(getBoundingRatio(data(place)));
     }

@@ -33,7 +33,7 @@ public:
         return "categoricalInformationValue";
     }

-    void create(AggregateDataPtr place) const override
+    void create(AggregateDataPtr __restrict place) const override
     {
         memset(place, 0, sizeOfData());
     }

@@ -38,7 +38,7 @@ public:
         return std::make_shared<DataTypeUInt64>();
     }

-    void add(AggregateDataPtr place, const IColumn **, size_t, Arena *) const override
+    void add(AggregateDataPtr __restrict place, const IColumn **, size_t, Arena *) const override
     {
         ++data(place).count;
     }
@@ -76,28 +76,28 @@ public:
         }
     }

-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         data(place).count += data(rhs).count;
     }

-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         writeVarUInt(data(place).count, buf);
     }

-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override
     {
         readVarUInt(data(place).count, buf);
     }

-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         assert_cast<ColumnUInt64 &>(to).getData().push_back(data(place).count);
     }

     /// Reset the state to specified value. This function is not the part of common interface.
-    void set(AggregateDataPtr place, UInt64 new_count)
+    void set(AggregateDataPtr __restrict place, UInt64 new_count)
     {
         data(place).count = new_count;
     }
@@ -126,27 +126,27 @@ public:
         return std::make_shared<DataTypeUInt64>();
     }

-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena *) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena *) const override
     {
         data(place).count += !assert_cast<const ColumnNullable &>(*columns[0]).isNullAt(row_num);
     }

-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         data(place).count += data(rhs).count;
     }

-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         writeVarUInt(data(place).count, buf);
     }

-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override
     {
         readVarUInt(data(place).count, buf);
     }

-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         assert_cast<ColumnUInt64 &>(to).getData().push_back(data(place).count);
     }

@@ -156,12 +156,12 @@ private:
     AggregateFunctionPtr nested_func;
     size_t arguments_num;

-    AggregateDataPtr getNestedPlace(AggregateDataPtr place) const noexcept
+    AggregateDataPtr getNestedPlace(AggregateDataPtr __restrict place) const noexcept
     {
         return place + prefix_size;
     }

-    ConstAggregateDataPtr getNestedPlace(ConstAggregateDataPtr place) const noexcept
+    ConstAggregateDataPtr getNestedPlace(ConstAggregateDataPtr __restrict place) const noexcept
     {
         return place + prefix_size;
     }
@@ -172,27 +172,27 @@ public:
         , nested_func(nested_func_)
         , arguments_num(arguments.size()) {}

-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena * arena) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena * arena) const override
     {
         this->data(place).add(columns, arguments_num, row_num, arena);
     }

-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena * arena) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena * arena) const override
     {
         this->data(place).merge(this->data(rhs), arena);
     }

-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         this->data(place).serialize(buf);
     }

-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena * arena) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena * arena) const override
     {
         this->data(place).deserialize(buf, arena);
     }

-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena * arena) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena * arena) const override
     {
         auto arguments = this->data(place).getArguments(this->argument_types);
         ColumnRawPtrs arguments_raw(arguments.size());
@@ -209,13 +209,13 @@ public:
         return prefix_size + nested_func->sizeOfData();
     }

-    void create(AggregateDataPtr place) const override
+    void create(AggregateDataPtr __restrict place) const override
     {
         new (place) Data;
         nested_func->create(getNestedPlace(place));
     }

-    void destroy(AggregateDataPtr place) const noexcept override
+    void destroy(AggregateDataPtr __restrict place) const noexcept override
     {
         this->data(place).~Data();
         nested_func->destroy(getNestedPlace(place));
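The `getNestedPlace`, `create`, and `destroy` methods above imply a single allocation holding two states back to back. A sketch of that layout under the stated assumption (a fixed-size combinator prefix followed by the nested function's state); the types and names here are illustrative:

```cpp
#include <cstddef>
#include <new>

// Hypothetical prefix type standing in for the combinator's own Data.
struct CombinatorData { /* e.g. the set of argument tuples seen so far */ };

// Assumed slot layout, matching what getNestedPlace() above implies:
//
//   place ──► [ CombinatorData (prefix_size bytes) | nested function's state ]
//
constexpr size_t prefix_size = sizeof(CombinatorData);

char * getNestedPlace(char * __restrict place)
{
    return place + prefix_size;   // nested state starts right after the prefix
}

void createBoth(char * __restrict place)
{
    new (place) CombinatorData;                      // construct the prefix in place
    // nested_func->create(getNestedPlace(place));   // then the nested state
}
```

Because the offset is a compile-time-ish constant, forwarding to the nested function costs one pointer addition and no extra allocation.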
@@ -103,7 +103,7 @@ public:
         return std::make_shared<DataTypeNumber<Float64>>();
     }

-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena *) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena *) const override
     {
         if constexpr (!std::is_same_v<UInt128, Value>)
         {
@@ -117,22 +117,22 @@ public:
         }
     }

-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         this->data(place).merge(this->data(rhs));
     }

-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         this->data(const_cast<AggregateDataPtr>(place)).serialize(buf);
     }

-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override
     {
         this->data(place).deserialize(buf);
     }

-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         auto & column = assert_cast<ColumnVector<Float64> &>(to);
         column.getData().push_back(this->data(place).get());

@@ -50,7 +50,7 @@ private:
     size_t nested_size_of_data = 0;
     size_t num_arguments;

-    AggregateFunctionForEachData & ensureAggregateData(AggregateDataPtr place, size_t new_size, Arena & arena) const
+    AggregateFunctionForEachData & ensureAggregateData(AggregateDataPtr __restrict place, size_t new_size, Arena & arena) const
     {
         AggregateFunctionForEachData & state = data(place);

@@ -128,7 +128,7 @@ public:
         return std::make_shared<DataTypeArray>(nested_func->getReturnType());
     }

-    void destroy(AggregateDataPtr place) const noexcept override
+    void destroy(AggregateDataPtr __restrict place) const noexcept override
     {
         AggregateFunctionForEachData & state = data(place);

@@ -145,7 +145,7 @@ public:
         return nested_func->hasTrivialDestructor();
     }

-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena * arena) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena * arena) const override
     {
         const IColumn * nested[num_arguments];

@@ -178,7 +178,7 @@ public:
         }
     }

-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena * arena) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena * arena) const override
     {
         const AggregateFunctionForEachData & rhs_state = data(rhs);
         AggregateFunctionForEachData & state = ensureAggregateData(place, rhs_state.dynamic_array_size, *arena);
@@ -195,7 +195,7 @@ public:
         }
     }

-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         const AggregateFunctionForEachData & state = data(place);
         writeBinary(state.dynamic_array_size, buf);
@@ -208,7 +208,7 @@ public:
         }
     }

-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena * arena) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena * arena) const override
     {
         AggregateFunctionForEachData & state = data(place);

@@ -225,7 +225,7 @@ public:
         }
     }

-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena * arena) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena * arena) const override
     {
         AggregateFunctionForEachData & state = data(place);

@@ -142,14 +142,14 @@ public:
         }
     }

-    void create(AggregateDataPtr place) const override
+    void create(AggregateDataPtr __restrict place) const override
     {
         [[maybe_unused]] auto a = new (place) Data;
         if constexpr (Trait::sampler == Sampler::RNG)
             a->rng.seed(seed);
     }

-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena * arena) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena * arena) const override
     {
         if constexpr (Trait::sampler == Sampler::NONE)
         {
@@ -176,7 +176,7 @@ public:
         // if constexpr (Trait::sampler == Sampler::DETERMINATOR)
     }

-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena * arena) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena * arena) const override
     {
         if constexpr (Trait::sampler == Sampler::NONE)
         {
@@ -235,7 +235,7 @@ public:
         // if constexpr (Trait::sampler == Sampler::DETERMINATOR)
     }

-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         const auto & value = this->data(place).value;
         size_t size = value.size();
@@ -254,7 +254,7 @@ public:
         // if constexpr (Trait::sampler == Sampler::DETERMINATOR)
     }

-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena * arena) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena * arena) const override
     {
         size_t size = 0;
         readVarUInt(size, buf);
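The `serialize`/`deserialize` pairs above follow the size-prefixed pattern: write `writeVarUInt(size)` first, then the elements, and read the count back before the payload. A standalone sketch of the LEB128-style varint that `writeVarUInt`/`readVarUInt` implement, assuming the usual 7-bits-per-byte encoding with a continuation bit:

```cpp
#include <cstdint>
#include <vector>

// 7 data bits per byte; the high bit says "more bytes follow".
static void writeVarUInt(uint64_t x, std::vector<uint8_t> & out)
{
    while (x >= 0x80)
    {
        out.push_back(static_cast<uint8_t>(x) | 0x80);
        x >>= 7;
    }
    out.push_back(static_cast<uint8_t>(x));
}

// Advances the caller's cursor past the decoded bytes.
static uint64_t readVarUInt(const uint8_t *& in)
{
    uint64_t x = 0;
    for (int shift = 0; ; shift += 7)
    {
        uint8_t byte = *in++;
        x |= static_cast<uint64_t>(byte & 0x7F) << shift;
        if (!(byte & 0x80))
            return x;
    }
}
```

Writing the size first lets the reading side pre-reserve the container before pulling in the elements.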
@@ -283,7 +283,7 @@ public:
         // if constexpr (Trait::sampler == Sampler::DETERMINATOR)
     }

-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         const auto & value = this->data(place).value;
         size_t size = value.size();
@@ -416,8 +416,8 @@ class GroupArrayGeneralImpl final
 {
     static constexpr bool limit_num_elems = Trait::has_limit;
     using Data = GroupArrayGeneralData<Node, Trait::sampler != Sampler::NONE>;
-    static Data & data(AggregateDataPtr place) { return *reinterpret_cast<Data *>(place); }
-    static const Data & data(ConstAggregateDataPtr place) { return *reinterpret_cast<const Data *>(place); }
+    static Data & data(AggregateDataPtr __restrict place) { return *reinterpret_cast<Data *>(place); }
+    static const Data & data(ConstAggregateDataPtr __restrict place) { return *reinterpret_cast<const Data *>(place); }

     DataTypePtr & data_type;
     UInt64 max_elems;
@@ -450,14 +450,14 @@ public:
         }
     }

-    void create(AggregateDataPtr place) const override
+    void create(AggregateDataPtr __restrict place) const override
     {
         [[maybe_unused]] auto a = new (place) Data;
         if constexpr (Trait::sampler == Sampler::RNG)
             a->rng.seed(seed);
     }

-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena * arena) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena * arena) const override
     {
         if constexpr (Trait::sampler == Sampler::NONE)
         {
@@ -485,7 +485,7 @@ public:
         // if constexpr (Trait::sampler == Sampler::DETERMINATOR)
     }

-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena * arena) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena * arena) const override
     {
         if constexpr (Trait::sampler == Sampler::NONE)
             mergeNoSampler(place, rhs, arena);
@@ -495,7 +495,7 @@ public:
         // else if constexpr (Trait::sampler == Sampler::DETERMINATOR)
     }

-    void ALWAYS_INLINE mergeNoSampler(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena * arena) const
+    void ALWAYS_INLINE mergeNoSampler(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena * arena) const
     {
         if (data(rhs).value.empty()) /// rhs state is empty
             return;
@@ -517,7 +517,7 @@ public:
             a.push_back(b[i]->clone(arena), arena);
     }

-    void ALWAYS_INLINE mergeWithRNGSampler(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena * arena) const
+    void ALWAYS_INLINE mergeWithRNGSampler(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena * arena) const
     {
         if (data(rhs).value.empty()) /// rhs state is empty
             return;
@@ -553,7 +553,7 @@ public:
         }
     }

-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         writeVarUInt(data(place).value.size(), buf);

@@ -573,7 +573,7 @@ public:
         // if constexpr (Trait::sampler == Sampler::DETERMINATOR)
     }

-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena * arena) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena * arena) const override
     {
         UInt64 elems;
         readVarUInt(elems, buf);
@@ -606,7 +606,7 @@ public:
         // if constexpr (Trait::sampler == Sampler::DETERMINATOR)
     }

-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         auto & column_array = assert_cast<ColumnArray &>(to);

@@ -692,8 +692,8 @@ class GroupArrayGeneralListImpl final
 {
     static constexpr bool limit_num_elems = Trait::has_limit;
     using Data = GroupArrayGeneralListData<Node>;
-    static Data & data(AggregateDataPtr place) { return *reinterpret_cast<Data *>(place); }
-    static const Data & data(ConstAggregateDataPtr place) { return *reinterpret_cast<const Data *>(place); }
+    static Data & data(AggregateDataPtr __restrict place) { return *reinterpret_cast<Data *>(place); }
+    static const Data & data(ConstAggregateDataPtr __restrict place) { return *reinterpret_cast<const Data *>(place); }

     DataTypePtr & data_type;
     UInt64 max_elems;
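The two `data()` overloads above are the standard state-access idiom in these classes: aggregate state lives in raw arena memory and is viewed through `reinterpret_cast`. A self-contained sketch of the full lifecycle, with a hypothetical `Data` struct standing in for the real node list:

```cpp
#include <new>
#include <cstdint>

struct Data { uint64_t elems = 0; };   // stand-in for GroupArrayGeneralListData

// 'place' is raw, untyped memory handed out by the arena; the casts give it
// a type without copying anything.
static Data & data(char * __restrict place) { return *reinterpret_cast<Data *>(place); }
static const Data & data(const char * __restrict place) { return *reinterpret_cast<const Data *>(place); }

static void create(char * __restrict place) { new (place) Data; }               // placement new
static void destroy(char * __restrict place) noexcept { data(place).~Data(); }  // manual destructor call
```

The aggregator, not the state, owns allocation and lifetime, which is why `create`/`destroy` are explicit calls rather than ordinary constructors.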
@@ -710,7 +710,7 @@ public:

     DataTypePtr getReturnType() const override { return std::make_shared<DataTypeArray>(data_type); }

-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena * arena) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena * arena) const override
     {
         if (limit_num_elems && data(place).elems >= max_elems)
             return;
@@ -731,7 +731,7 @@ public:
         ++data(place).elems;
     }

-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena * arena) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena * arena) const override
     {
         /// It is sadly, but rhs's Arena could be destroyed

@@ -780,7 +780,7 @@ public:
         data(place).elems = new_elems;
     }

-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         writeVarUInt(data(place).elems, buf);

@@ -792,7 +792,7 @@ public:
         }
     }

-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena * arena) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena * arena) const override
     {
         UInt64 elems;
         readVarUInt(elems, buf);
@@ -821,7 +821,7 @@ public:
         data(place).last = prev;
     }

-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         auto & column_array = assert_cast<ColumnArray &>(to);

@@ -102,7 +102,7 @@ public:
         return std::make_shared<DataTypeArray>(type);
     }

-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena *) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena *) const override
     {
         /// TODO Do positions need to be 1-based for this function?
         size_t position = columns[1]->getUInt(row_num);
@@ -126,7 +126,7 @@ public:
         columns[0]->get(row_num, arr[position]);
     }

-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         Array & arr_lhs = data(place).value;
         const Array & arr_rhs = data(rhs).value;
@@ -139,7 +139,7 @@ public:
             arr_lhs[i] = arr_rhs[i];
     }

-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         const Array & arr = data(place).value;
         size_t size = arr.size();
@@ -159,7 +159,7 @@ public:
         }
     }

-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override
     {
         size_t size = 0;
         readVarUInt(size, buf);
@@ -179,7 +179,7 @@ public:
         }
     }

-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         ColumnArray & to_array = assert_cast<ColumnArray &>(to);
         IColumn & to_data = to_array.getData();

@@ -114,13 +114,13 @@ public:
         return std::make_shared<DataTypeArray>(std::make_shared<DataTypeResult>());
     }

-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena * arena) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena * arena) const override
     {
         auto value = static_cast<const ColumnSource &>(*columns[0]).getData()[row_num];
         this->data(place).add(static_cast<ResultT>(value), arena);
     }

-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena * arena) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena * arena) const override
     {
         auto & cur_elems = this->data(place);
         auto & rhs_elems = this->data(rhs);
@@ -138,7 +138,7 @@ public:
         cur_elems.sum += rhs_elems.sum;
     }

-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         const auto & value = this->data(place).value;
         size_t size = value.size();
@@ -146,7 +146,7 @@ public:
         buf.write(reinterpret_cast<const char *>(value.data()), size * sizeof(value[0]));
     }

-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena * arena) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena * arena) const override
     {
         size_t size = 0;
         readVarUInt(size, buf);
@@ -163,7 +163,7 @@ public:
         }
     }

-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         const auto & data = this->data(place);
         size_t size = data.value.size();

@@ -22,21 +22,21 @@ public:

     DataTypePtr getReturnType() const override { return std::make_shared<DataTypeNumber<T>>(); }

-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena *) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena *) const override
     {
         this->data(place).rbs.add(assert_cast<const ColumnVector<T> &>(*columns[0]).getData()[row_num]);
     }

-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         this->data(place).rbs.merge(this->data(rhs).rbs);
     }

-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override { this->data(place).rbs.write(buf); }
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override { this->data(place).rbs.write(buf); }

-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override { this->data(place).rbs.read(buf); }
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override { this->data(place).rbs.read(buf); }

-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         assert_cast<ColumnVector<T> &>(to).getData().push_back(this->data(place).rbs.size());
     }
@@ -56,7 +56,7 @@ public:

     DataTypePtr getReturnType() const override { return std::make_shared<DataTypeNumber<T>>(); }

-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena *) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena *) const override
     {
         Data & data_lhs = this->data(place);
         const Data & data_rhs = this->data(assert_cast<const ColumnAggregateFunction &>(*columns[0]).getData()[row_num]);
@@ -71,7 +71,7 @@ public:
         }
     }

-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         Data & data_lhs = this->data(place);
         const Data & data_rhs = this->data(rhs);
@@ -90,11 +90,11 @@ public:
         }
     }

-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override { this->data(place).rbs.write(buf); }
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override { this->data(place).rbs.write(buf); }

-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override { this->data(place).rbs.read(buf); }
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override { this->data(place).rbs.read(buf); }

-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         assert_cast<ColumnVector<T> &>(to).getData().push_back(this->data(place).rbs.size());
     }

@@ -59,14 +59,14 @@ public:
         return std::make_shared<DataTypeArray>(this->argument_types[0]);
     }

-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena *) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena *) const override
     {
         if (limit_num_elems && this->data(place).value.size() >= max_elems)
             return;
         this->data(place).value.insert(assert_cast<const ColumnVector<T> &>(*columns[0]).getData()[row_num]);
     }

-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         if (!limit_num_elems)
             this->data(place).value.merge(this->data(rhs).value);
@@ -84,7 +84,7 @@ public:
         }
     }

-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         auto & set = this->data(place).value;
         size_t size = set.size();
@@ -93,12 +93,12 @@ public:
             writeIntBinary(elem, buf);
     }

-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override
     {
         this->data(place).value.read(buf);
     }

-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         ColumnArray & arr_to = assert_cast<ColumnArray &>(to);
         ColumnArray::Offsets & offsets_to = arr_to.getOffsets();
@@ -166,7 +166,7 @@ public:
         return true;
     }

-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         auto & set = this->data(place).value;
         writeVarUInt(set.size(), buf);
@@ -177,7 +177,7 @@ public:
         }
     }

-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena * arena) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena * arena) const override
     {
         auto & set = this->data(place).value;
         size_t size;
@@ -188,7 +188,7 @@ public:
         set.insert(readStringBinaryInto(*arena, buf));
     }

-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena * arena) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena * arena) const override
     {
         auto & set = this->data(place).value;
         if (limit_num_elems && set.size() >= max_elems)
@@ -200,7 +200,7 @@ public:
         set.emplace(key_holder, it, inserted);
     }

-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena * arena) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena * arena) const override
     {
         auto & cur_set = this->data(place).value;
         auto & rhs_set = this->data(rhs).value;
@@ -218,7 +218,7 @@ public:
         }
     }

-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         ColumnArray & arr_to = assert_cast<ColumnArray &>(to);
         ColumnArray::Offsets & offsets_to = arr_to.getOffsets();
@@ -332,28 +332,28 @@ public:
         return std::make_shared<DataTypeArray>(tuple);
     }

-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena *) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena *) const override
     {
         auto val = assert_cast<const ColumnVector<T> &>(*columns[0]).getData()[row_num];
         this->data(place).add(static_cast<Data::Mean>(val), 1, max_bins);
     }

-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         this->data(place).merge(this->data(rhs), max_bins);
     }

-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         this->data(place).write(buf);
     }

-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override
     {
         this->data(place).read(buf, max_bins);
     }

-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         auto & data = this->data(place);

@@ -97,7 +97,7 @@ public:
         return assert_cast<const ColumnUInt8 &>(*filter_column).getData()[row_num];
     }

-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena * arena) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena * arena) const override
     {
         const ColumnNullable * column = assert_cast<const ColumnNullable *>(columns[0]);
         const IColumn * nested_column = &column->getNestedColumn();
@@ -140,7 +140,7 @@ public:
         return assert_cast<const ColumnUInt8 &>(*columns[num_arguments - 1]).getData()[row_num];
     }

-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena * arena) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena * arena) const override
     {
         /// This container stores the columns we really pass to the nested function.
         const IColumn * nested_columns[number_of_arguments];

@@ -49,12 +49,12 @@ public:
         return nested_func->getReturnType();
     }

-    void create(AggregateDataPtr place) const override
+    void create(AggregateDataPtr __restrict place) const override
     {
         nested_func->create(place);
     }

-    void destroy(AggregateDataPtr place) const noexcept override
+    void destroy(AggregateDataPtr __restrict place) const noexcept override
     {
         nested_func->destroy(place);
     }
@@ -74,7 +74,7 @@ public:
         return nested_func->alignOfData();
     }

-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena * arena) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena * arena) const override
     {
         if (assert_cast<const ColumnUInt8 &>(*columns[num_arguments - 1]).getData()[row_num])
             nested_func->add(place, columns, row_num, arena);
@ -108,22 +108,22 @@ public:
|
||||
nested_func->addBatchSinglePlaceNotNull(batch_size, place, columns, null_map, arena, num_arguments - 1);
|
||||
}
|
||||
|
||||
void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena * arena) const override
|
||||
void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena * arena) const override
|
||||
{
|
||||
nested_func->merge(place, rhs, arena);
|
||||
}
|
||||
|
||||
void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
|
||||
void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
|
||||
{
|
||||
nested_func->serialize(place, buf);
|
||||
}
|
||||
|
||||
void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena * arena) const override
|
||||
void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena * arena) const override
|
||||
{
|
||||
nested_func->deserialize(place, buf, arena);
|
||||
}
|
||||
|
||||
void insertResultInto(AggregateDataPtr place, IColumn & to, Arena * arena) const override
|
||||
void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena * arena) const override
|
||||
{
|
||||
nested_func->insertResultInto(place, to, arena);
|
||||
}
|
||||
|
@@ -329,7 +329,7 @@ public:
         return std::make_shared<DataTypeNumber<Float64>>();
     }
 
-    void create(AggregateDataPtr place) const override
+    void create(AggregateDataPtr __restrict place) const override
     {
         std::shared_ptr<IWeightsUpdater> new_weights_updater;
         if (weights_updater_name == "SGD")
@@ -346,16 +346,16 @@ public:
         new (place) Data(learning_rate, l2_reg_coef, param_num, batch_size, gradient_computer, new_weights_updater);
     }
 
-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena *) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena *) const override
     {
         this->data(place).add(columns, row_num);
     }
 
-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override { this->data(place).merge(this->data(rhs)); }
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override { this->data(place).merge(this->data(rhs)); }
 
-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override { this->data(place).write(buf); }
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override { this->data(place).write(buf); }
 
-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override { this->data(place).read(buf); }
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override { this->data(place).read(buf); }
 
     void predictValues(
         ConstAggregateDataPtr place,
@@ -383,7 +383,7 @@ public:
     /** This function is called if aggregate function without State modifier is selected in a query.
       * Inserts all weights of the model into the column 'to', so user may use such information if needed
      */
-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         this->data(place).returnWeights(to);
     }
@@ -194,7 +194,7 @@ public:
         );
     }
 
-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena * arena) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena * arena) const override
     {
         Float64 value = columns[0]->getFloat64(row_num);
         UInt8 is_second = columns[1]->getUInt(row_num);
@@ -205,7 +205,7 @@ public:
             this->data(place).addX(value, arena);
     }
 
-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena * arena) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena * arena) const override
     {
         auto & a = this->data(place);
         auto & b = this->data(rhs);
@@ -213,17 +213,17 @@ public:
         a.merge(b, arena);
     }
 
-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         this->data(place).write(buf);
     }
 
-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena * arena) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena * arena) const override
     {
         this->data(place).read(buf, arena);
     }
 
-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         if (!this->data(place).size_x || !this->data(place).size_y)
             throw Exception("Aggregate function " + getName() + " require both samples to be non empty", ErrorCodes::BAD_ARGUMENTS);
@@ -87,7 +87,7 @@ public:
         return std::make_shared<DataTypeNumber<PointType>>();
     }
 
-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena * arena) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena * arena) const override
     {
         PointType left = assert_cast<const ColumnVector<PointType> &>(*columns[0]).getData()[row_num];
         PointType right = assert_cast<const ColumnVector<PointType> &>(*columns[1]).getData()[row_num];
@@ -99,7 +99,7 @@ public:
             this->data(place).value.push_back(std::make_pair(right, Int64(-1)), arena);
     }
 
-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena * arena) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena * arena) const override
     {
         auto & cur_elems = this->data(place);
         auto & rhs_elems = this->data(rhs);
@@ -107,7 +107,7 @@ public:
         cur_elems.value.insert(rhs_elems.value.begin(), rhs_elems.value.end(), arena);
     }
 
-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         const auto & value = this->data(place).value;
         size_t size = value.size();
@@ -115,7 +115,7 @@ public:
         buf.write(reinterpret_cast<const char *>(value.data()), size * sizeof(value[0]));
     }
 
-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena * arena) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena * arena) const override
     {
         size_t size = 0;
         readVarUInt(size, buf);
@@ -129,7 +129,7 @@ public:
         buf.read(reinterpret_cast<char *>(value.data()), size * sizeof(value[0]));
     }
 
-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         Int64 current_intersections = 0;
         Int64 max_intersections = 0;
@@ -48,12 +48,12 @@ public:
         return nested_func->getReturnType();
     }
 
-    void create(AggregateDataPtr place) const override
+    void create(AggregateDataPtr __restrict place) const override
     {
         nested_func->create(place);
     }
 
-    void destroy(AggregateDataPtr place) const noexcept override
+    void destroy(AggregateDataPtr __restrict place) const noexcept override
     {
         nested_func->destroy(place);
     }
@@ -73,27 +73,27 @@ public:
         return nested_func->alignOfData();
     }
 
-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena * arena) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena * arena) const override
     {
         nested_func->merge(place, assert_cast<const ColumnAggregateFunction &>(*columns[0]).getData()[row_num], arena);
     }
 
-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena * arena) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena * arena) const override
     {
         nested_func->merge(place, rhs, arena);
     }
 
-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         nested_func->serialize(place, buf);
     }
 
-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena * arena) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena * arena) const override
     {
         nested_func->deserialize(place, buf, arena);
     }
 
-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena * arena) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena * arena) const override
     {
         nested_func->insertResultInto(place, to, arena);
     }
@@ -721,22 +721,22 @@ public:
         return type;
     }
 
-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena * arena) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena * arena) const override
     {
         this->data(place).changeIfBetter(*columns[0], row_num, arena);
     }
 
-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena * arena) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena * arena) const override
     {
         this->data(place).changeIfBetter(this->data(rhs), arena);
     }
 
-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         this->data(place).write(buf, *type.get());
     }
 
-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena * arena) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena * arena) const override
     {
         this->data(place).read(buf, *type.get(), arena);
     }
@@ -746,7 +746,7 @@ public:
         return Data::allocatesMemoryInArena();
     }
 
-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         this->data(place).insertResultInto(to);
     }
@@ -45,29 +45,29 @@ protected:
      * We use prefix_size bytes for flag to satisfy the alignment requirement of nested state.
      */
 
-    AggregateDataPtr nestedPlace(AggregateDataPtr place) const noexcept
+    AggregateDataPtr nestedPlace(AggregateDataPtr __restrict place) const noexcept
     {
         return place + prefix_size;
     }
 
-    ConstAggregateDataPtr nestedPlace(ConstAggregateDataPtr place) const noexcept
+    ConstAggregateDataPtr nestedPlace(ConstAggregateDataPtr __restrict place) const noexcept
     {
         return place + prefix_size;
     }
 
-    static void initFlag(AggregateDataPtr place) noexcept
+    static void initFlag(AggregateDataPtr __restrict place) noexcept
     {
         if constexpr (result_is_nullable)
             place[0] = 0;
     }
 
-    static void setFlag(AggregateDataPtr place) noexcept
+    static void setFlag(AggregateDataPtr __restrict place) noexcept
     {
         if constexpr (result_is_nullable)
             place[0] = 1;
     }
 
-    static bool getFlag(ConstAggregateDataPtr place) noexcept
+    static bool getFlag(ConstAggregateDataPtr __restrict place) noexcept
     {
         return result_is_nullable ? place[0] : 1;
     }
@@ -95,13 +95,13 @@ public:
             : nested_function->getReturnType();
     }
 
-    void create(AggregateDataPtr place) const override
+    void create(AggregateDataPtr __restrict place) const override
     {
         initFlag(place);
         nested_function->create(nestedPlace(place));
     }
 
-    void destroy(AggregateDataPtr place) const noexcept override
+    void destroy(AggregateDataPtr __restrict place) const noexcept override
     {
         nested_function->destroy(nestedPlace(place));
     }
@@ -121,7 +121,7 @@ public:
         return nested_function->alignOfData();
     }
 
-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena * arena) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena * arena) const override
     {
         if (result_is_nullable && getFlag(rhs))
             setFlag(place);
@@ -129,7 +129,7 @@ public:
         nested_function->merge(nestedPlace(place), nestedPlace(rhs), arena);
     }
 
-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         bool flag = getFlag(place);
         if constexpr (serialize_flag)
@@ -138,7 +138,7 @@ public:
         nested_function->serialize(nestedPlace(place), buf);
     }
 
-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena * arena) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena * arena) const override
     {
         bool flag = 1;
         if constexpr (serialize_flag)
@@ -150,7 +150,7 @@ public:
         }
     }
 
-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena * arena) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena * arena) const override
     {
         if constexpr (result_is_nullable)
         {
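The comment at the top of this class describes the state layout used by the null-handling wrapper: one flag byte, padded out to `prefix_size` so the nested state keeps its alignment. A hedged sketch of that layout (illustrative only; the names and the exact padding rule are assumptions, not the real implementation):

```cpp
#include <cstddef>

/// Assumed layout: [flag byte][padding][nested state].
/// Rounding the 1-byte flag up to the nested state's alignment keeps
/// nestedPlace(place) correctly aligned, as the comment above describes.
struct NullPrefixLayout
{
    size_t nested_align;   // alignof of the wrapped function's state

    size_t prefixSize() const { return nested_align; }
    char * nestedPlace(char * place) const { return place + prefixSize(); }
};
```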
@@ -200,7 +200,7 @@ public:
     {
     }
 
-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena * arena) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena * arena) const override
     {
         const ColumnNullable * column = assert_cast<const ColumnNullable *>(columns[0]);
         const IColumn * nested_column = &column->getNestedColumn();
@@ -250,7 +250,7 @@ public:
             is_nullable[i] = arguments[i]->isNullable();
     }
 
-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena * arena) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena * arena) const override
     {
         /// This container stores the columns we really pass to the nested function.
         const IColumn * nested_columns[number_of_arguments];
@@ -76,13 +76,13 @@ public:
         return nested_function->alignOfData();
     }
 
-    void create(AggregateDataPtr place) const override
+    void create(AggregateDataPtr __restrict place) const override
     {
         nested_function->create(place);
         place[size_of_data] = 0;
     }
 
-    void destroy(AggregateDataPtr place) const noexcept override
+    void destroy(AggregateDataPtr __restrict place) const noexcept override
     {
         nested_function->destroy(place);
     }
@@ -103,7 +103,7 @@ public:
         return res;
     }
 
-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena *) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena *) const override
     {
         auto value = static_cast<const ColVecType &>(*columns[0]).getData()[row_num];
 
@@ -122,23 +122,23 @@ public:
         this->data(place).add(value);
     }
 
-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         this->data(place).merge(this->data(rhs));
     }
 
-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         /// const_cast is required because some data structures apply finalizaton (like compactization) before serializing.
         this->data(const_cast<AggregateDataPtr>(place)).serialize(buf);
     }
 
-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override
     {
         this->data(place).deserialize(buf);
     }
 
-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         /// const_cast is required because some data structures apply finalizaton (like sorting) for obtain a result.
         auto & data = this->data(place);
@@ -63,7 +63,7 @@ public:
         return std::make_shared<DataTypeNumber<Float64>>();
     }
 
-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena * arena) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena * arena) const override
     {
         Float64 new_x = columns[0]->getFloat64(row_num);
         Float64 new_y = columns[1]->getFloat64(row_num);
@@ -71,7 +71,7 @@ public:
         this->data(place).addY(new_y, arena);
     }
 
-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena * arena) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena * arena) const override
     {
         auto & a = this->data(place);
         auto & b = this->data(rhs);
@@ -79,17 +79,17 @@ public:
         a.merge(b, arena);
     }
 
-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         this->data(place).write(buf);
     }
 
-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena * arena) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena * arena) const override
     {
         this->data(place).read(buf, arena);
     }
 
-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         auto answer = this->data(place).getResult();
@@ -110,7 +110,7 @@ public:
         return align_of_data;
     }
 
-    void create(AggregateDataPtr place) const override
+    void create(AggregateDataPtr __restrict place) const override
     {
         for (size_t i = 0; i < total; ++i)
         {
@@ -127,7 +127,7 @@ public:
         }
     }
 
-    void destroy(AggregateDataPtr place) const noexcept override
+    void destroy(AggregateDataPtr __restrict place) const noexcept override
     {
         for (size_t i = 0; i < total; ++i)
             nested_function->destroy(place + i * size_of_data);
@@ -94,7 +94,7 @@ public:
         return std::make_shared<DataTypeArray>(std::make_shared<DataTypeUInt8>());
     }
 
-    void add(AggregateDataPtr place, const IColumn ** columns, const size_t row_num, Arena *) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, const size_t row_num, Arena *) const override
     {
         for (const auto i : ext::range(0, events_size))
         {
@@ -106,22 +106,22 @@ public:
         }
     }
 
-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         this->data(place).merge(this->data(rhs));
     }
 
-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         this->data(place).serialize(buf);
     }
 
-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override
     {
         this->data(place).deserialize(buf);
     }
 
-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         auto & data_to = assert_cast<ColumnUInt8 &>(assert_cast<ColumnArray &>(to).getData()).getData();
         auto & offsets_to = assert_cast<ColumnArray &>(to).getOffsets();
@@ -149,7 +149,7 @@ public:
         parsePattern();
     }
 
-    void add(AggregateDataPtr place, const IColumn ** columns, const size_t row_num, Arena *) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, const size_t row_num, Arena *) const override
     {
         const auto timestamp = assert_cast<const ColumnVector<T> *>(columns[0])->getData()[row_num];
 
@@ -163,17 +163,17 @@ public:
         this->data(place).add(timestamp, events);
     }
 
-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         this->data(place).merge(this->data(rhs));
     }
 
-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         this->data(place).serialize(buf);
     }
 
-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override
     {
         this->data(place).deserialize(buf);
     }
@@ -560,7 +560,7 @@ public:
 
     DataTypePtr getReturnType() const override { return std::make_shared<DataTypeUInt8>(); }
 
-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         this->data(place).sort();
 
@@ -588,14 +588,14 @@ public:
 
     DataTypePtr getReturnType() const override { return std::make_shared<DataTypeUInt64>(); }
 
-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         this->data(place).sort();
         assert_cast<ColumnUInt64 &>(to).getData().push_back(count(place));
     }
 
 private:
-    UInt64 count(const ConstAggregateDataPtr & place) const
+    UInt64 count(ConstAggregateDataPtr __restrict place) const
     {
         const auto & data_ref = this->data(place);
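Note that `count` above also changes from taking `const ConstAggregateDataPtr &` to taking the pointer by value with `__restrict`. A reference to a pointer adds an indirection and cannot carry a no-alias promise, while a by-value `__restrict` pointer can. A hedged sketch of the two signatures (stand-in types, not the real ones):

```cpp
using ConstStatePtr = const char *;   // stand-in for ConstAggregateDataPtr

// Pass-by-reference: the pointer itself lives in memory, so the compiler may
// have to reload it, and no aliasing promise attaches to what it designates.
unsigned long countByRef(const ConstStatePtr & place);

// Pass-by-value with __restrict: the pointer sits in a register and is
// promised not to alias any other pointer in scope.
unsigned long countByValue(ConstStatePtr __restrict place);
```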
@@ -48,9 +48,9 @@ public:
         return storage_type;
     }
 
-    void create(AggregateDataPtr place) const override { nested_func->create(place); }
+    void create(AggregateDataPtr __restrict place) const override { nested_func->create(place); }
 
-    void destroy(AggregateDataPtr place) const noexcept override { nested_func->destroy(place); }
+    void destroy(AggregateDataPtr __restrict place) const noexcept override { nested_func->destroy(place); }
 
     bool hasTrivialDestructor() const override { return nested_func->hasTrivialDestructor(); }
 
@@ -58,21 +58,21 @@ public:
 
     size_t alignOfData() const override { return nested_func->alignOfData(); }
 
-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena * arena) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena * arena) const override
     {
         nested_func->add(place, columns, row_num, arena);
     }
 
-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena * arena) const override { nested_func->merge(place, rhs, arena); }
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena * arena) const override { nested_func->merge(place, rhs, arena); }
 
-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override { nested_func->serialize(place, buf); }
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override { nested_func->serialize(place, buf); }
 
-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena * arena) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena * arena) const override
     {
         nested_func->deserialize(place, buf, arena);
     }
 
-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena * arena) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena * arena) const override
     {
         nested_func->insertResultInto(place, to, arena);
     }
@@ -34,12 +34,12 @@ public:
 
     DataTypePtr getReturnType() const override;
 
-    void create(AggregateDataPtr place) const override
+    void create(AggregateDataPtr __restrict place) const override
     {
         nested_func->create(place);
     }
 
-    void destroy(AggregateDataPtr place) const noexcept override
+    void destroy(AggregateDataPtr __restrict place) const noexcept override
     {
         nested_func->destroy(place);
     }
@@ -59,27 +59,27 @@ public:
         return nested_func->alignOfData();
     }
 
-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena * arena) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena * arena) const override
     {
         nested_func->add(place, columns, row_num, arena);
     }
 
-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena * arena) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena * arena) const override
     {
         nested_func->merge(place, rhs, arena);
     }
 
-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         nested_func->serialize(place, buf);
     }
 
-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena * arena) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena * arena) const override
     {
         nested_func->deserialize(place, buf, arena);
     }
 
-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         assert_cast<ColumnAggregateFunction &>(to).getData().push_back(place);
     }
@@ -123,27 +123,27 @@ public:
         return std::make_shared<DataTypeFloat64>();
     }
 
-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena *) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena *) const override
     {
         this->data(place).update(*columns[0], row_num);
     }
 
-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         this->data(place).mergeWith(this->data(rhs));
     }
 
-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         this->data(place).serialize(buf);
     }
 
-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override
     {
         this->data(place).deserialize(buf);
     }
 
-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         this->data(place).publish(to);
     }
@@ -375,27 +375,27 @@ public:
         return std::make_shared<DataTypeFloat64>();
     }
 
-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena *) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena *) const override
     {
         this->data(place).update(*columns[0], *columns[1], row_num);
     }
 
-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         this->data(place).mergeWith(this->data(rhs));
     }
 
-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         this->data(place).serialize(buf);
     }
 
-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override
     {
         this->data(place).deserialize(buf);
     }
 
-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         this->data(place).publish(to);
     }
@@ -121,7 +121,7 @@ public:
         return std::make_shared<DataTypeNumber<ResultType>>();
     }
 
-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena *) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena *) const override
     {
         if constexpr (StatFunc::num_args == 2)
             this->data(place).add(
@@ -141,22 +141,22 @@ public:
         }
     }
 
-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         this->data(place).merge(this->data(rhs));
     }
 
-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         this->data(place).write(buf);
     }
 
-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override
     {
         this->data(place).read(buf);
     }
 
-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         const auto & data = this->data(place);
         auto & dst = static_cast<ColVecResult &>(to).getData();
@@ -314,7 +314,7 @@ public:
         return std::make_shared<ResultDataType>();
     }
 
-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena *) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena *) const override
     {
         const auto & column = assert_cast<const ColVecType &>(*columns[0]);
         if constexpr (is_big_int_v<T>)
@@ -361,22 +361,22 @@ public:
         }
     }
 
-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         this->data(place).merge(this->data(rhs));
     }
 
-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         this->data(place).write(buf);
     }
 
-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override
     {
         this->data(place).read(buf);
     }
 
-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         auto & column = assert_cast<ColVecResult &>(to);
         column.getData().push_back(this->data(place).get());
@@ -136,7 +136,7 @@ public:
         }
     }
 
-    void add(AggregateDataPtr place, const IColumn ** columns_, const size_t row_num, Arena *) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns_, const size_t row_num, Arena *) const override
     {
         const auto & columns = getArgumentColumns(columns_);
 
@@ -212,7 +212,7 @@ public:
         }
     }
 
-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         auto & merged_maps = this->data(place).merged_maps;
         const auto & rhs_maps = this->data(rhs).merged_maps;
@@ -231,7 +231,7 @@ public:
         }
     }
 
-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         const auto & merged_maps = this->data(place).merged_maps;
         size_t size = merged_maps.size();
@@ -245,7 +245,7 @@ public:
         }
     }
 
-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override
     {
         auto & merged_maps = this->data(place).merged_maps;
         size_t size = 0;
@@ -268,7 +268,7 @@ public:
         }
     }
 
-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         size_t num_columns = values_types.size();
@@ -109,7 +109,7 @@ public:
         );
     }
 
-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena *) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena *) const override
     {
         Float64 value = columns[0]->getFloat64(row_num);
         UInt8 is_second = columns[1]->getUInt(row_num);
@@ -120,22 +120,22 @@ public:
             this->data(place).addX(value);
     }
 
-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         this->data(place).merge(this->data(rhs));
     }
 
-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         this->data(place).write(buf);
     }
 
-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override
     {
         this->data(place).read(buf);
     }
 
-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         auto [t_statistic, p_value] = this->data(place).getResult();
@@ -50,7 +50,7 @@ public:
         return std::make_shared<DataTypeArray>(this->argument_types[0]);
     }
 
-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena *) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena *) const override
     {
         auto & set = this->data(place).value;
         if (set.capacity() != reserved)
@@ -62,7 +62,7 @@ public:
         set.insert(assert_cast<const ColumnVector<T> &>(*columns[0]).getData()[row_num]);
     }
 
-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         auto & set = this->data(place).value;
         if (set.capacity() != reserved)
@@ -70,19 +70,19 @@ public:
         set.merge(this->data(rhs).value);
     }
 
-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         this->data(place).value.write(buf);
     }
 
-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override
     {
         auto & set = this->data(place).value;
         set.resize(reserved);
         set.read(buf);
     }
 
-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         ColumnArray & arr_to = assert_cast<ColumnArray &>(to);
         ColumnArray::Offsets & offsets_to = arr_to.getOffsets();
@@ -145,12 +145,12 @@ public:
         return true;
     }
 
-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         this->data(place).value.write(buf);
     }
 
-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena * arena) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena * arena) const override
     {
         auto & set = this->data(place).value;
         set.clear();
@@ -173,7 +173,7 @@ public:
         set.readAlphaMap(buf);
     }
 
-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena * arena) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena * arena) const override
     {
         auto & set = this->data(place).value;
         if (set.capacity() != reserved)
@@ -198,7 +198,7 @@ public:
         }
     }
 
-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         auto & set = this->data(place).value;
         if (set.capacity() != reserved)
@@ -206,7 +206,7 @@ public:
         set.merge(this->data(rhs).value);
     }
 
-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         ColumnArray & arr_to = assert_cast<ColumnArray &>(to);
         ColumnArray::Offsets & offsets_to = arr_to.getOffsets();
@@ -211,27 +211,27 @@ public:
     }
 
     /// ALWAYS_INLINE is required to have better code layout for uniqHLL12 function
-    void ALWAYS_INLINE add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena *) const override
+    void ALWAYS_INLINE add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena *) const override
     {
         detail::OneAdder<T, Data>::add(this->data(place), *columns[0], row_num);
     }
 
-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         this->data(place).set.merge(this->data(rhs).set);
     }
 
-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         this->data(place).set.write(buf);
     }
 
-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override
     {
         this->data(place).set.read(buf);
     }
 
-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         assert_cast<ColumnUInt64 &>(to).getData().push_back(this->data(place).set.size());
     }
@@ -265,28 +265,28 @@ public:
         return std::make_shared<DataTypeUInt64>();
     }
 
-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena *) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena *) const override
     {
         this->data(place).set.insert(typename Data::Set::value_type(
             UniqVariadicHash<is_exact, argument_is_tuple>::apply(num_args, columns, row_num)));
     }
 
-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         this->data(place).set.merge(this->data(rhs).set);
     }
 
-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         this->data(place).set.write(buf);
     }
 
-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override
     {
         this->data(place).set.read(buf);
     }
 
-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         assert_cast<ColumnUInt64 &>(to).getData().push_back(this->data(place).set.size());
     }
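The `/// ALWAYS_INLINE is required...` comment above relies on a force-inline macro. In GCC/Clang code bases such a macro is commonly defined along these lines (a sketch; ClickHouse's exact definition may differ):

```cpp
#if defined(__GNUC__) || defined(__clang__)
    /// Inline regardless of the optimizer's cost model; used on small hot
    /// functions where call overhead or code layout dominates.
    #define ALWAYS_INLINE inline __attribute__((__always_inline__))
#else
    #define ALWAYS_INLINE inline
#endif
```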
@@ -141,7 +141,7 @@ public:
         return std::make_shared<DataTypeUInt64>();
     }
 
-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena *) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena *) const override
     {
         if constexpr (!std::is_same_v<T, String>)
         {
@@ -155,22 +155,22 @@ public:
         }
     }
 
-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         this->data(place).set.merge(this->data(rhs).set);
     }
 
-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         this->data(place).set.write(buf);
     }
 
-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override
     {
         this->data(place).set.read(buf);
     }
 
-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         assert_cast<ColumnUInt64 &>(to).getData().push_back(this->data(place).set.size());
     }
@@ -211,28 +211,28 @@ public:
         return std::make_shared<DataTypeUInt64>();
     }
 
-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena *) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena *) const override
     {
         this->data(place).set.insert(typename AggregateFunctionUniqCombinedData<UInt64, K, HashValueType>::Set::value_type(
             UniqVariadicHash<is_exact, argument_is_tuple>::apply(num_args, columns, row_num)));
     }
 
-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         this->data(place).set.merge(this->data(rhs).set);
     }
 
-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         this->data(place).set.write(buf);
     }
 
-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override
     {
         this->data(place).set.read(buf);
     }
 
-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         assert_cast<ColumnUInt64 &>(to).getData().push_back(this->data(place).set.size());
     }
@@ -185,27 +185,27 @@ public:
     }
 
     /// ALWAYS_INLINE is required to have better code layout for uniqUpTo function
-    void ALWAYS_INLINE add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena *) const override
+    void ALWAYS_INLINE add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena *) const override
     {
         this->data(place).add(*columns[0], row_num, threshold);
     }
 
-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         this->data(place).merge(this->data(rhs), threshold);
     }
 
-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         this->data(place).write(buf, threshold);
     }
 
-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override
     {
         this->data(place).read(buf, threshold);
     }
 
-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         assert_cast<ColumnUInt64 &>(to).getData().push_back(this->data(place).size());
     }
@@ -247,27 +247,27 @@ public:
         return std::make_shared<DataTypeUInt64>();
     }
 
-    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena *) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena *) const override
     {
         this->data(place).insert(UInt64(UniqVariadicHash<is_exact, argument_is_tuple>::apply(num_args, columns, row_num)), threshold);
     }
 
-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         this->data(place).merge(this->data(rhs), threshold);
     }
 
-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         this->data(place).write(buf, threshold);
     }
 
-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override
     {
         this->data(place).read(buf, threshold);
     }
 
-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         assert_cast<ColumnUInt64 &>(to).getData().push_back(this->data(place).size());
     }
@@ -149,7 +149,6 @@ private:
     UInt8 strict_order; // When the 'strict_order' is set, it doesn't allow interventions of other events.
                         // In the case of 'A->B->D->C', it stops finding 'A->B->C' at the 'D' and the max event level is 2.
-
 
     // Loop through the entire events_list, update the event timestamp value
     // The level path must be 1---2---3---...---check_events_size, find the max event level that satisfied the path in the sliding window.
     // If found, returns the max event level, else return 0.
@@ -250,7 +249,7 @@ public:
         return std::make_shared<AggregateFunctionNullVariadic<false, false, false>>(nested_function, arguments, params);
     }
 
-    void add(AggregateDataPtr place, const IColumn ** columns, const size_t row_num, Arena *) const override
+    void add(AggregateDataPtr __restrict place, const IColumn ** columns, const size_t row_num, Arena *) const override
     {
         bool has_event = false;
         const auto timestamp = assert_cast<const ColumnVector<T> *>(columns[0])->getData()[row_num];
@@ -269,22 +268,22 @@ public:
             this->data(place).add(timestamp, 0);
     }
 
-    void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena *) const override
+    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
     {
         this->data(place).merge(this->data(rhs));
     }
 
-    void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const override
+    void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
     {
         this->data(place).serialize(buf);
     }
 
-    void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena *) const override
+    void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override
     {
         this->data(place).deserialize(buf);
     }
 
-    void insertResultInto(AggregateDataPtr place, IColumn & to, Arena *) const override
+    void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
     {
         assert_cast<ColumnUInt8 &>(to).getData().push_back(getEventLevel(this->data(place)));
     }
@ -72,10 +72,10 @@ public:
|
||||
/** Create empty data for aggregation with `placement new` at the specified location.
|
||||
* You will have to destroy them using the `destroy` method.
|
||||
*/
|
||||
virtual void create(AggregateDataPtr place) const = 0;
|
||||
virtual void create(AggregateDataPtr __restrict place) const = 0;
|
||||
|
||||
/// Delete data for aggregation.
|
||||
-    virtual void destroy(AggregateDataPtr place) const noexcept = 0;
+    virtual void destroy(AggregateDataPtr __restrict place) const noexcept = 0;

     /// It is not necessary to delete data.
     virtual bool hasTrivialDestructor() const = 0;
@@ -91,16 +91,16 @@ public:
      * row_num is number of row which should be added.
      * Additional parameter arena should be used instead of standard memory allocator if the addition requires memory allocation.
      */
-    virtual void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena * arena) const = 0;
+    virtual void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena * arena) const = 0;

     /// Merges state (on which place points to) with other state of current aggregation function.
-    virtual void merge(AggregateDataPtr place, ConstAggregateDataPtr rhs, Arena * arena) const = 0;
+    virtual void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena * arena) const = 0;

     /// Serializes state (to transmit it over the network, for example).
-    virtual void serialize(ConstAggregateDataPtr place, WriteBuffer & buf) const = 0;
+    virtual void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const = 0;

     /// Deserializes state. This function is called only for empty (just created) states.
-    virtual void deserialize(AggregateDataPtr place, ReadBuffer & buf, Arena * arena) const = 0;
+    virtual void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena * arena) const = 0;

     /// Returns true if a function requires Arena to handle own states (see add(), merge(), deserialize()).
     virtual bool allocatesMemoryInArena() const { return false; }
@@ -111,7 +111,7 @@ public:
     /// insertResultInto must work correctly. This kind of call sequence occurs
     /// in `runningAccumulate`, or when calculating an aggregate function as a
     /// window function.
-    virtual void insertResultInto(AggregateDataPtr place, IColumn & to, Arena * arena) const = 0;
+    virtual void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena * arena) const = 0;

     /// Used for machine learning methods. Predict result from trained model.
     /// Will insert result into `to` column for rows in range [offset, offset + limit).
@@ -387,8 +387,8 @@ class IAggregateFunctionDataHelper : public IAggregateFunctionHelper<Derived>
 protected:
     using Data = T;

-    static Data & data(AggregateDataPtr place) { return *reinterpret_cast<Data *>(place); }
-    static const Data & data(ConstAggregateDataPtr place) { return *reinterpret_cast<const Data *>(place); }
+    static Data & data(AggregateDataPtr __restrict place) { return *reinterpret_cast<Data *>(place); }
+    static const Data & data(ConstAggregateDataPtr __restrict place) { return *reinterpret_cast<const Data *>(place); }

 public:
     // Derived class can `override` this to flag that DateTime64 is not supported.
@@ -399,9 +399,9 @@ public:
     {
     }

-    void create(AggregateDataPtr place) const override { new (place) Data; }
+    void create(AggregateDataPtr __restrict place) const override { new (place) Data; }

-    void destroy(AggregateDataPtr place) const noexcept override { data(place).~Data(); }
+    void destroy(AggregateDataPtr __restrict place) const noexcept override { data(place).~Data(); }

     bool hasTrivialDestructor() const override { return std::is_trivially_destructible_v<Data>; }
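Editor's note on the hunks above: `__restrict` (a GCC/Clang/MSVC extension) promises the compiler that the aggregate-state pointer does not alias the input columns, so per-row hot loops can keep the state in a register instead of reloading it after every store. A minimal standalone sketch of the effect, not ClickHouse code:

    #include <cstddef>

    // With __restrict, the compiler may assume *sum and values[] never overlap,
    // so *sum can live in a register for the whole loop.
    void accumulate(long * __restrict sum, const long * values, size_t n)
    {
        for (size_t i = 0; i < n; ++i)
            *sum += values[i];
    }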
@@ -32,6 +32,8 @@ namespace ErrorCodes
  * - a histogram (that is, value -> number), consisting of two parts
  * -- for values from 0 to 1023 - in increments of 1;
  * -- for values from 1024 to 30,000 - in increments of 16;
+ *
+ * NOTE: 64-bit integer weight can overflow, see also QuantileExactWeighted.h::get()
  */

 #define TINY_MAX_ELEMS 31
@@ -396,9 +398,9 @@ namespace detail
         /// Get the value of the `level` quantile. The level must be between 0 and 1.
         UInt16 get(double level) const
         {
-            UInt64 pos = std::ceil(count * level);
+            double pos = std::ceil(count * level);

-            UInt64 accumulated = 0;
+            double accumulated = 0;
             Iterator it(*this);

             while (it.isValid())
@@ -422,9 +424,9 @@ namespace detail
             const auto * indices_end = indices + size;
             const auto * index = indices;

-            UInt64 pos = std::ceil(count * levels[*index]);
+            double pos = std::ceil(count * levels[*index]);

-            UInt64 accumulated = 0;
+            double accumulated = 0;
             Iterator it(*this);

             while (it.isValid())
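A self-contained sketch of the overflow the NOTE warns about (made-up values, not ClickHouse code): summing large 64-bit weights wraps silently in UInt64, while a double accumulator stays finite (at some precision cost), which is why the quantile position and accumulator above become `double`:

    #include <cstdint>
    #include <iostream>

    int main()
    {
        uint64_t w = 15'000'000'000'000'000'000ull;  // 1.5e19, fits in UInt64
        uint64_t wrapped = w + w;                    // unsigned wrap-around: wrong total
        double exact = double(w) + double(w);        // ~3e19, representable as double

        std::cout << wrapped << " vs " << exact << '\n';
    }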
@@ -136,7 +136,7 @@ public:
     Field operator[](size_t n) const override { return DecimalField(data[n], scale); }
     void get(size_t n, Field & res) const override { res = (*this)[n]; }
     bool getBool(size_t n) const override { return bool(data[n].value); }
-    Int64 getInt(size_t n) const override { return Int64(data[n].value * scale); }
+    Int64 getInt(size_t n) const override { return Int64(data[n].value) * scale; }
     UInt64 get64(size_t n) const override;
     bool isDefaultAt(size_t n) const override { return data[n].value == 0; }
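The fix above moves the cast before the multiplication: `data[n].value * scale` is computed in the decimal's (possibly narrow) underlying type before widening, so it can overflow; widening to Int64 first keeps the product in 64-bit range. A hedged standalone illustration with an assumed Decimal32-like 32-bit underlying value:

    #include <cstdint>
    #include <iostream>

    int main()
    {
        int32_t value = 2'000'000'000;  // hypothetical Decimal32 underlying value
        int32_t scale = 100;

        // Old form, Int64(value * scale): the product is formed in 32 bits first
        // (shown via unsigned arithmetic to keep the wrap-around well defined).
        int64_t wrapped = int64_t(int32_t(uint32_t(value) * uint32_t(scale)));

        // New form, Int64(value) * scale: widen first, then multiply.
        int64_t correct = int64_t(value) * scale;

        std::cout << wrapped << " vs " << correct << '\n';  // garbage vs 200000000000
    }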
@@ -207,4 +207,22 @@ bool callOnIndexAndDataType(TypeIndex number, F && f, ExtraArgs && ... args)
     return false;
 }

+template <typename F>
+static bool callOnTwoTypeIndexes(TypeIndex left_type, TypeIndex right_type, F && func)
+{
+    return callOnIndexAndDataType<void>(left_type, [&](const auto & left_types) -> bool
+    {
+        using LeftTypes = std::decay_t<decltype(left_types)>;
+        using LeftType = typename LeftTypes::LeftType;
+
+        return callOnIndexAndDataType<void>(right_type, [&](const auto & right_types) -> bool
+        {
+            using RightTypes = std::decay_t<decltype(right_types)>;
+            using RightType = typename RightTypes::LeftType;
+
+            return std::forward<F>(func)(TypePair<LeftType, RightType>());
+        });
+    });
+}
+
 }
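The helper above nests two one-type dispatches so a single generic callback sees both static types at once. A simplified, self-contained sketch of the same technique (plain tags instead of ClickHouse's TypeIndex/data types):

    #include <cstdio>
    #include <utility>

    enum class Tag { I32, F64 };

    template <typename L, typename R> struct TypePair { using Left = L; using Right = R; };

    template <typename F>
    bool callOnTag(Tag t, F && f)
    {
        switch (t)
        {
            case Tag::I32: return f(int{});      // hand the callback a value of the static type
            case Tag::F64: return f(double{});
        }
        return false;
    }

    template <typename F>
    bool callOnTwoTags(Tag left, Tag right, F && func)
    {
        return callOnTag(left, [&](auto l)
        {
            return callOnTag(right, [&](auto r)
            {
                return std::forward<F>(func)(TypePair<decltype(l), decltype(r)>());
            });
        });
    }

    int main()
    {
        callOnTwoTags(Tag::I32, Tag::F64, [](auto pair)
        {
            using P = decltype(pair);
            std::printf("sizes: %zu and %zu\n", sizeof(typename P::Left), sizeof(typename P::Right));
            return true;
        });
    }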
@@ -121,7 +121,7 @@ PushingToViewsBlockOutputStream::PushingToViewsBlockOutputStream(
             out = std::make_shared<PushingToViewsBlockOutputStream>(
                 dependent_table, dependent_metadata_snapshot, *insert_context, ASTPtr());

-        views.emplace_back(ViewInfo{std::move(query), database_table, std::move(out), nullptr});
+        views.emplace_back(ViewInfo{std::move(query), database_table, std::move(out), nullptr, 0 /* elapsed_ms */});
     }

     /// Do not push to destination table if the flag is set
@@ -146,8 +146,6 @@ Block PushingToViewsBlockOutputStream::getHeader() const

 void PushingToViewsBlockOutputStream::write(const Block & block)
 {
-    Stopwatch watch;
-
     /** Throw an exception if the sizes of arrays - elements of nested data structures doesn't match.
       * We have to make this assertion before writing to table, because storage engine may assume that they have equal sizes.
       * NOTE It'd better to do this check in serialization of nested structures (in place when this assumption is required),
@@ -177,15 +175,15 @@ void PushingToViewsBlockOutputStream::write(const Block & block)
     {
         // Push to views concurrently if enabled and more than one view is attached
         ThreadPool pool(std::min(size_t(settings.max_threads), views.size()));
-        for (size_t view_num = 0; view_num < views.size(); ++view_num)
+        for (auto & view : views)
         {
             auto thread_group = CurrentThread::getGroup();
-            pool.scheduleOrThrowOnError([=, this]
+            pool.scheduleOrThrowOnError([=, &view, this]
             {
                 setThreadName("PushingToViews");
                 if (thread_group)
                     CurrentThread::attachToIfDetached(thread_group);
-                process(block, view_num);
+                process(block, view);
             });
         }
         // Wait for concurrent view processing
@@ -194,22 +192,14 @@ void PushingToViewsBlockOutputStream::write(const Block & block)
     else
     {
         // Process sequentially
-        for (size_t view_num = 0; view_num < views.size(); ++view_num)
+        for (auto & view : views)
         {
-            process(block, view_num);
+            process(block, view);

-            if (views[view_num].exception)
-                std::rethrow_exception(views[view_num].exception);
+            if (view.exception)
+                std::rethrow_exception(view.exception);
         }
     }
-
-    UInt64 milliseconds = watch.elapsedMilliseconds();
-    if (views.size() > 1)
-    {
-        LOG_TRACE(log, "Pushing from {} to {} views took {} ms.",
-            storage->getStorageID().getNameForLogs(), views.size(),
-            milliseconds);
-    }
 }

 void PushingToViewsBlockOutputStream::writePrefix()
@@ -257,12 +247,13 @@ void PushingToViewsBlockOutputStream::writeSuffix()
         if (view.exception)
             continue;

-        pool.scheduleOrThrowOnError([thread_group, &view]
+        pool.scheduleOrThrowOnError([thread_group, &view, this]
         {
             setThreadName("PushingToViews");
             if (thread_group)
                 CurrentThread::attachToIfDetached(thread_group);

+            Stopwatch watch;
             try
             {
                 view.out->writeSuffix();
@@ -271,6 +262,12 @@ void PushingToViewsBlockOutputStream::writeSuffix()
             {
                 view.exception = std::current_exception();
             }
+            view.elapsed_ms += watch.elapsedMilliseconds();
+
+            LOG_TRACE(log, "Pushing from {} to {} took {} ms.",
+                storage->getStorageID().getNameForLogs(),
+                view.table_id.getNameForLogs(),
+                view.elapsed_ms);
         });
     }
     // Wait for concurrent view processing
@@ -290,6 +287,7 @@ void PushingToViewsBlockOutputStream::writeSuffix()
         if (parallel_processing)
             continue;

+        Stopwatch watch;
         try
         {
             view.out->writeSuffix();
@@ -299,10 +297,24 @@ void PushingToViewsBlockOutputStream::writeSuffix()
             ex.addMessage("while write prefix to view " + view.table_id.getNameForLogs());
             throw;
         }
+        view.elapsed_ms += watch.elapsedMilliseconds();
+
+        LOG_TRACE(log, "Pushing from {} to {} took {} ms.",
+            storage->getStorageID().getNameForLogs(),
+            view.table_id.getNameForLogs(),
+            view.elapsed_ms);
     }

     if (first_exception)
         std::rethrow_exception(first_exception);

+    UInt64 milliseconds = main_watch.elapsedMilliseconds();
+    if (views.size() > 1)
+    {
+        LOG_TRACE(log, "Pushing from {} to {} views took {} ms.",
+            storage->getStorageID().getNameForLogs(), views.size(),
+            milliseconds);
+    }
 }

 void PushingToViewsBlockOutputStream::flush()
@@ -314,10 +326,9 @@ void PushingToViewsBlockOutputStream::flush()
         view.out->flush();
 }

-void PushingToViewsBlockOutputStream::process(const Block & block, size_t view_num)
+void PushingToViewsBlockOutputStream::process(const Block & block, ViewInfo & view)
 {
     Stopwatch watch;
-    auto & view = views[view_num];

     try
     {
@@ -379,11 +390,7 @@ void PushingToViewsBlockOutputStream::process(const Block & block, size_t view_n
         view.exception = std::current_exception();
     }

-    UInt64 milliseconds = watch.elapsedMilliseconds();
-    LOG_TRACE(log, "Pushing from {} to {} took {} ms.",
-        storage->getStorageID().getNameForLogs(),
-        view.table_id.getNameForLogs(),
-        milliseconds);
+    view.elapsed_ms += watch.elapsedMilliseconds();
 }

 }
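Note the capture change to `[=, &view, this]`: each scheduled task must take its `ViewInfo` by reference so it operates on the right element and can write `exception`/`elapsed_ms` back into the shared vector. A standalone sketch of the same pattern (std::thread stands in for ClickHouse's ThreadPool):

    #include <thread>
    #include <vector>

    struct ViewInfo { int id = 0; long elapsed_ms = 0; };

    int main()
    {
        std::vector<ViewInfo> views{{1}, {2}, {3}};
        std::vector<std::thread> workers;

        for (auto & view : views)   // one task per view
            workers.emplace_back([&view] { view.elapsed_ms += view.id; });  // capture by reference

        for (auto & w : workers)
            w.join();               // writes are visible in `views` after join
    }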
@@ -1,6 +1,7 @@
 #pragma once

 #include <DataStreams/IBlockOutputStream.h>
+#include <Common/Stopwatch.h>
 #include <Parsers/IAST_fwd.h>
 #include <Storages/IStorage.h>

@@ -44,6 +45,7 @@ private:

     const Context & context;
     ASTPtr query_ptr;
+    Stopwatch main_watch;

     struct ViewInfo
     {
@@ -51,13 +53,14 @@ private:
         StorageID table_id;
         BlockOutputStreamPtr out;
         std::exception_ptr exception;
+        UInt64 elapsed_ms = 0;
     };

     std::vector<ViewInfo> views;
     std::unique_ptr<Context> select_context;
     std::unique_ptr<Context> insert_context;

-    void process(const Block & block, size_t view_num);
+    void process(const Block & block, ViewInfo & view);
 };
@@ -172,6 +172,26 @@ void DatabaseOrdinary::loadStoredObjects(Context & context, bool has_force_resto

     ThreadPool pool;

+    /// We must attach dictionaries before attaching tables
+    /// because while we're attaching tables we may need to have some dictionaries attached
+    /// (for example, dictionaries can be used in the default expressions for some tables).
+    /// On the other hand we can attach any dictionary (even sourced from ClickHouse table)
+    /// without having any tables attached. It is so because attaching of a dictionary means
+    /// loading of its config only, it doesn't involve loading the dictionary itself.
+
+    /// Attach dictionaries.
+    for (const auto & [name, query] : file_names)
+    {
+        auto create_query = query->as<const ASTCreateQuery &>();
+        if (create_query.is_dictionary)
+        {
+            tryAttachDictionary(query, *this, getMetadataPath() + name, context);
+
+            /// Messages, so that it's not boring to wait for the server to load for a long time.
+            logAboutProgress(log, ++dictionaries_processed, total_dictionaries, watch);
+        }
+    }
+
     /// Attach tables.
     for (const auto & name_with_query : file_names)
     {
@@ -196,19 +216,6 @@ void DatabaseOrdinary::loadStoredObjects(Context & context, bool has_force_resto

     /// After all tables was basically initialized, startup them.
     startupTables(pool);
-
-    /// Attach dictionaries.
-    for (const auto & [name, query] : file_names)
-    {
-        auto create_query = query->as<const ASTCreateQuery &>();
-        if (create_query.is_dictionary)
-        {
-            tryAttachDictionary(query, *this, getMetadataPath() + name, context);
-
-            /// Messages, so that it's not boring to wait for the server to load for a long time.
-            logAboutProgress(log, ++dictionaries_processed, total_dictionaries, watch);
-        }
-    }
 }
@@ -1291,7 +1291,6 @@ void CacheDictionary::update(UpdateUnitPtr & update_unit_ptr)
         BlockInputStreamPtr stream = current_source_ptr->loadIds(update_unit_ptr->requested_ids);
         stream->readPrefix();

-
         while (true)
         {
             Block block = stream->read();
@@ -41,7 +41,7 @@ DictionaryPtr DictionaryFactory::create(
         throw Exception{name + ": element dictionary.layout should have exactly one child element",
                         ErrorCodes::EXCESSIVE_ELEMENT_IN_CONFIG};

-    const DictionaryStructure dict_struct{config, config_prefix + ".structure"};
+    const DictionaryStructure dict_struct{config, config_prefix};

     DictionarySourcePtr source_ptr = DictionarySourceFactory::instance().create(
         name, config, config_prefix + ".source", dict_struct, context, config.getString(config_prefix + ".database", ""), check_source_config);
@@ -135,17 +135,19 @@ DictionarySpecialAttribute::DictionarySpecialAttribute(const Poco::Util::Abstrac

 DictionaryStructure::DictionaryStructure(const Poco::Util::AbstractConfiguration & config, const std::string & config_prefix)
 {
-    const auto has_id = config.has(config_prefix + ".id");
-    const auto has_key = config.has(config_prefix + ".key");
+    std::string structure_prefix = config_prefix + ".structure";
+
+    const auto has_id = config.has(structure_prefix + ".id");
+    const auto has_key = config.has(structure_prefix + ".key");

     if (has_key && has_id)
         throw Exception{"Only one of 'id' and 'key' should be specified", ErrorCodes::BAD_ARGUMENTS};

     if (has_id)
-        id.emplace(config, config_prefix + ".id");
+        id.emplace(config, structure_prefix + ".id");
     else if (has_key)
     {
-        key.emplace(getAttributes(config, config_prefix + ".key", false, false));
+        key.emplace(getAttributes(config, structure_prefix + ".key", false, false));
         if (key->empty())
             throw Exception{"Empty 'key' supplied", ErrorCodes::BAD_ARGUMENTS};
     }
@@ -158,11 +160,11 @@ DictionaryStructure::DictionaryStructure(const Poco::Util::AbstractConfiguration
         throw Exception{"'id' cannot be empty", ErrorCodes::BAD_ARGUMENTS};

     const char * range_default_type = "Date";
-    if (config.has(config_prefix + ".range_min"))
-        range_min.emplace(makeDictionaryTypedSpecialAttribute(config, config_prefix + ".range_min", range_default_type));
+    if (config.has(structure_prefix + ".range_min"))
+        range_min.emplace(makeDictionaryTypedSpecialAttribute(config, structure_prefix + ".range_min", range_default_type));

-    if (config.has(config_prefix + ".range_max"))
-        range_max.emplace(makeDictionaryTypedSpecialAttribute(config, config_prefix + ".range_max", range_default_type));
+    if (config.has(structure_prefix + ".range_max"))
+        range_max.emplace(makeDictionaryTypedSpecialAttribute(config, structure_prefix + ".range_max", range_default_type));

     if (range_min.has_value() != range_max.has_value())
     {
@@ -194,10 +196,13 @@ DictionaryStructure::DictionaryStructure(const Poco::Util::AbstractConfiguration
         has_expressions = true;
     }

-    attributes = getAttributes(config, config_prefix);
+    attributes = getAttributes(config, structure_prefix);

     if (attributes.empty())
         throw Exception{"Dictionary has no attributes defined", ErrorCodes::BAD_ARGUMENTS};
+
+    if (config.getBool(config_prefix + ".layout.ip_trie.access_to_key_from_attributes", false))
+        access_to_key_from_attributes = true;
 }

@@ -218,21 +223,32 @@ void DictionaryStructure::validateKeyTypes(const DataTypes & key_types) const
     }
 }

-const DictionaryAttribute & DictionaryStructure::getAttribute(const String& attribute_name, const DataTypePtr & type) const
+const DictionaryAttribute & DictionaryStructure::getAttribute(const String & attribute_name) const
 {
     auto find_iter
         = std::find_if(attributes.begin(), attributes.end(), [&](const auto & attribute) { return attribute.name == attribute_name; });
+    if (find_iter != attributes.end())
+        return *find_iter;

-    if (find_iter == attributes.end())
-        throw Exception{"No such attribute '" + attribute_name + "'", ErrorCodes::BAD_ARGUMENTS};
+    if (key && access_to_key_from_attributes)
+    {
+        find_iter = std::find_if(key->begin(), key->end(), [&](const auto & attribute) { return attribute.name == attribute_name; });
+        if (find_iter != key->end())
+            return *find_iter;
+    }

-    const auto & attribute = *find_iter;
+    throw Exception{"No such attribute '" + attribute_name + "'", ErrorCodes::BAD_ARGUMENTS};
+}
+
+const DictionaryAttribute & DictionaryStructure::getAttribute(const String & attribute_name, const DataTypePtr & type) const
+{
+    const auto & attribute = getAttribute(attribute_name);

     if (!areTypesEqual(attribute.type, type))
         throw Exception{"Attribute type does not match, expected " + attribute.type->getName() + ", found " + type->getName(),
                         ErrorCodes::TYPE_MISMATCH};

-    return *find_iter;
+    return attribute;
 }

 std::string DictionaryStructure::getKeyDescription() const
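The refactor above factors the name-only lookup out of the type-checked overload, which then simply delegates. A self-contained sketch of the shape (simplified stand-in types, not the ClickHouse headers):

    #include <stdexcept>
    #include <string>
    #include <vector>

    struct Attribute { std::string name; std::string type; };

    struct Structure
    {
        std::vector<Attribute> attributes;

        const Attribute & getAttribute(const std::string & name) const
        {
            for (const auto & attr : attributes)
                if (attr.name == name)
                    return attr;
            throw std::runtime_error("No such attribute '" + name + "'");
        }

        const Attribute & getAttribute(const std::string & name, const std::string & type) const
        {
            const auto & attr = getAttribute(name);   // reuse the name-only lookup
            if (attr.type != type)
                throw std::runtime_error("Attribute type does not match");
            return attr;
        }
    };

    int main()
    {
        Structure s{{{"region", "UInt64"}}};
        return s.getAttribute("region", "UInt64").name == "region" ? 0 : 1;
    }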
@@ -150,11 +150,13 @@ struct DictionaryStructure final
     std::optional<DictionaryTypedSpecialAttribute> range_min;
     std::optional<DictionaryTypedSpecialAttribute> range_max;
     bool has_expressions = false;
+    bool access_to_key_from_attributes = false;

     DictionaryStructure(const Poco::Util::AbstractConfiguration & config, const std::string & config_prefix);

     void validateKeyTypes(const DataTypes & key_types) const;
-    const DictionaryAttribute &getAttribute(const String& attribute_name, const DataTypePtr & type) const;
+    const DictionaryAttribute & getAttribute(const String & attribute_name) const;
+    const DictionaryAttribute & getAttribute(const String & attribute_name, const DataTypePtr & type) const;
     std::string getKeyDescription() const;
     bool isKeySizeFixed() const;
     size_t getKeySize() const;
@@ -186,6 +186,9 @@ namespace
         if (!err.empty())
             LOG_ERROR(log, "Having stderr: {}", err);

+        if (thread.joinable())
+            thread.join();
+
         command->wait();
     }
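The guard added above matters because calling `join()` on a non-joinable `std::thread` throws `std::system_error`. A minimal standalone illustration of the pattern:

    #include <thread>

    int main()
    {
        std::thread worker([] { /* e.g. drain a child process's stderr */ });
        if (worker.joinable())
            worker.join();   // join exactly once, and only if the thread is active
    }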
@@ -247,21 +247,15 @@ IPAddressDictionary::IPAddressDictionary(
     const DictionaryStructure & dict_struct_,
     DictionarySourcePtr source_ptr_,
     const DictionaryLifetime dict_lifetime_,
-    bool require_nonempty_,
-    bool access_to_key_from_attributes_)
+    bool require_nonempty_)
     : IDictionaryBase(dict_id_)
     , dict_struct(dict_struct_)
     , source_ptr{std::move(source_ptr_)}
     , dict_lifetime(dict_lifetime_)
     , require_nonempty(require_nonempty_)
-    , access_to_key_from_attributes(access_to_key_from_attributes_)
+    , access_to_key_from_attributes(dict_struct_.access_to_key_from_attributes)
     , logger(&Poco::Logger::get("IPAddressDictionary"))
 {
-    if (access_to_key_from_attributes)
-    {
-        dict_struct.attributes.emplace_back(dict_struct.key->front());
-    }
-
     createAttributes();

     loadData();
@@ -367,18 +361,23 @@ ColumnUInt8::Ptr IPAddressDictionary::hasKeys(const Columns & key_columns, const

 void IPAddressDictionary::createAttributes()
 {
-    const auto size = dict_struct.attributes.size();
-    attributes.reserve(size);
-
-    for (const auto & attribute : dict_struct.attributes)
+    auto create_attributes_from_dictionary_attributes = [this](const std::vector<DictionaryAttribute> & dict_attrs)
     {
-        attribute_index_by_name.emplace(attribute.name, attributes.size());
-        attributes.push_back(createAttributeWithType(attribute.underlying_type, attribute.null_value));
+        attributes.reserve(attributes.size() + dict_attrs.size());
+        for (const auto & attribute : dict_attrs)
+        {
+            attribute_index_by_name.emplace(attribute.name, attributes.size());
+            attributes.push_back(createAttributeWithType(attribute.underlying_type, attribute.null_value));

-        if (attribute.hierarchical)
-            throw Exception{full_name + ": hierarchical attributes not supported for dictionary of type " + getTypeName(),
-                            ErrorCodes::TYPE_MISMATCH};
-    }
+            if (attribute.hierarchical)
+                throw Exception{full_name + ": hierarchical attributes not supported for dictionary of type " + getTypeName(),
+                                ErrorCodes::TYPE_MISMATCH};
+        }
+    };
+
+    create_attributes_from_dictionary_attributes(dict_struct.attributes);
+    if (access_to_key_from_attributes)
+        create_attributes_from_dictionary_attributes(*dict_struct.key);
 }

 void IPAddressDictionary::loadData()
@@ -396,19 +395,13 @@ void IPAddressDictionary::loadData()
         element_count += rows;

         const ColumnPtr key_column_ptr = block.safeGetByPosition(0).column;

-        size_t attributes_size = dict_struct.attributes.size();
-        if (access_to_key_from_attributes)
-        {
-            /// last attribute contains key and will be filled in code below
-            attributes_size--;
-        }
-        const auto attribute_column_ptrs = ext::map<Columns>(ext::range(0, attributes_size),
+        const auto attribute_column_ptrs = ext::map<Columns>(
+            ext::range(0, dict_struct.attributes.size()),
             [&](const size_t attribute_idx) { return block.safeGetByPosition(attribute_idx + 1).column; });

         for (const auto row : ext::range(0, rows))
         {
-            for (const auto attribute_idx : ext::range(0, attribute_column_ptrs.size()))
+            for (const auto attribute_idx : ext::range(0, dict_struct.attributes.size()))
             {
                 const auto & attribute_column = *attribute_column_ptrs[attribute_idx];
                 auto & attribute = attributes[attribute_idx];
@@ -991,11 +984,8 @@ void registerDictionaryTrie(DictionaryFactory & factory)
         const DictionaryLifetime dict_lifetime{config, config_prefix + ".lifetime"};
         const bool require_nonempty = config.getBool(config_prefix + ".require_nonempty", false);

-        const auto & layout_prefix = config_prefix + ".layout.ip_trie";
-        const bool access_to_key_from_attributes = config.getBool(layout_prefix + ".access_to_key_from_attributes", false);
         // This is specialised dictionary for storing IPv4 and IPv6 prefixes.
-        return std::make_unique<IPAddressDictionary>(dict_id, dict_struct, std::move(source_ptr), dict_lifetime,
-            require_nonempty, access_to_key_from_attributes);
+        return std::make_unique<IPAddressDictionary>(dict_id, dict_struct, std::move(source_ptr), dict_lifetime, require_nonempty);
     };
     factory.registerLayout("ip_trie", create_layout, true);
 }
@@ -28,8 +28,7 @@ public:
         const DictionaryStructure & dict_struct_,
         DictionarySourcePtr source_ptr_,
         const DictionaryLifetime dict_lifetime_,
-        bool require_nonempty_,
-        bool access_to_key_from_attributes_);
+        bool require_nonempty_);

     std::string getKeyDescription() const { return key_description; }

@@ -47,8 +46,7 @@ public:

     std::shared_ptr<const IExternalLoadable> clone() const override
     {
-        return std::make_shared<IPAddressDictionary>(getDictionaryID(), dict_struct, source_ptr->clone(), dict_lifetime,
-            require_nonempty, access_to_key_from_attributes);
+        return std::make_shared<IPAddressDictionary>(getDictionaryID(), dict_struct, source_ptr->clone(), dict_lifetime, require_nonempty);
     }

     const IDictionarySource * getSource() const override { return source_ptr.get(); }
@@ -407,6 +407,23 @@ struct ToHourImpl
     using FactorTransform = ToDateImpl;
 };

+struct TimezoneOffsetImpl
+{
+    static constexpr auto name = "timezoneOffset";
+
+    static inline time_t execute(UInt32 t, const DateLUTImpl & time_zone)
+    {
+        return time_zone.timezoneOffset(t);
+    }
+
+    static inline time_t execute(UInt16, const DateLUTImpl &)
+    {
+        return dateIsNotSupported(name);
+    }
+
+    using FactorTransform = ToTimeImpl;
+};
+
 struct ToMinuteImpl
 {
     static constexpr auto name = "toMinute";
@@ -121,25 +121,26 @@ public:
         return getDictionary(dict_name_col->getValue<String>())->isInjective(attr_name_col->getValue<String>());
     }

-    DictionaryAttribute getDictionaryAttribute(std::shared_ptr<const IDictionaryBase> dictionary, const String& attribute_name) const
+    DictionaryStructure getDictionaryStructure(const String & dictionary_name) const
     {
-        const DictionaryStructure & structure = dictionary->getStructure();
-
-        auto find_iter = std::find_if(structure.attributes.begin(), structure.attributes.end(), [&](const auto &attribute)
-        {
-            return attribute.name == attribute_name;
-        });
-
-        if (find_iter == structure.attributes.end())
-            throw Exception{"No such attribute '" + attribute_name + "'", ErrorCodes::BAD_ARGUMENTS};
-
-        return *find_iter;
+        String resolved_name = DatabaseCatalog::instance().resolveDictionaryName(dictionary_name);
+        auto load_result = external_loader.getLoadResult(resolved_name);
+        if (!load_result.config)
+            throw Exception("Dictionary " + backQuote(dictionary_name) + " not found", ErrorCodes::BAD_ARGUMENTS);
+        return ExternalDictionariesLoader::getDictionaryStructure(*load_result.config);
     }

 private:
     const Context & context;
     const ExternalDictionariesLoader & external_loader;
     /// Access cannot be not granted, since in this case checkAccess() will throw and access_checked will not be updated.
     std::atomic<bool> access_checked = false;
+
+    /// We must not cache dictionary or dictionary's structure here, because there are places
+    /// where ExpressionActionsPtr is cached (StorageDistributed caching it for sharding_key_expr and
+    /// optimize_skip_unused_shards), and if the dictionary will be cached within "query" then
+    /// cached ExpressionActionsPtr will always have first version of the query and the dictionary
+    /// will not be updated after reload (see https://github.com/ClickHouse/ClickHouse/pull/16205)
 };

@@ -267,10 +268,7 @@ public:
         if (arguments.size() < 3)
             throw Exception{"Wrong argument count for function " + getName(), ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH};

-        /// TODO: We can load only dictionary structure
-
         String dictionary_name;

         if (const auto * name_col = checkAndGetColumnConst<ColumnString>(arguments[0].column.get()))
             dictionary_name = name_col->getValue<String>();
         else
@@ -278,16 +276,14 @@ public:
                 + ", expected a const string.", ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT};

         String attribute_name;

         if (const auto * name_col = checkAndGetColumnConst<ColumnString>(arguments[1].column.get()))
             attribute_name = name_col->getValue<String>();
         else
             throw Exception{"Illegal type " + arguments[1].type->getName() + " of second argument of function " + getName()
                 + ", expected a const string.", ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT};

-        auto dictionary = helper.getDictionary(dictionary_name);
-
-        return helper.getDictionaryAttribute(dictionary, attribute_name).type;
+        /// We're extracting the return type from the dictionary's config, without loading the dictionary.
+        return helper.getDictionaryStructure(dictionary_name).getAttribute(attribute_name).type;
     }

     ColumnPtr executeImpl(const ColumnsWithTypeAndName & arguments, const DataTypePtr & result_type, size_t input_rows_count) const override
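The change above derives the return type from the dictionary's configuration rather than from a loaded dictionary instance, so type inference can no longer trigger (or block on) a dictionary load. A rough standalone sketch of that separation, with made-up names standing in for the ClickHouse loader machinery:

    #include <map>
    #include <stdexcept>
    #include <string>

    struct Structure { std::map<std::string, std::string> attribute_types; };

    // Metadata registry: cheap to query; the heavyweight dictionary object lives elsewhere.
    const Structure & getStructureFromConfig(const std::string & name,
                                             const std::map<std::string, Structure> & configs)
    {
        auto it = configs.find(name);
        if (it == configs.end())
            throw std::runtime_error("Dictionary " + name + " not found");
        return it->second;
    }

    int main()
    {
        std::map<std::string, Structure> configs;
        configs["geo"].attribute_types["region"] = "UInt64";
        return getStructureFromConfig("geo", configs).attribute_types.at("region") == "UInt64" ? 0 : 1;
    }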
@@ -69,6 +69,8 @@ void registerFunctionFormatDateTime(FunctionFactory &);
 void registerFunctionFromModifiedJulianDay(FunctionFactory &);
 void registerFunctionDateTrunc(FunctionFactory &);

+void registerFunctiontimezoneOffset(FunctionFactory &);
+
 void registerFunctionsDateTime(FunctionFactory & factory)
 {
     registerFunctionToYear(factory);
@@ -136,6 +138,7 @@ void registerFunctionsDateTime(FunctionFactory & factory)
     registerFunctionFormatDateTime(factory);
     registerFunctionFromModifiedJulianDay(factory);
     registerFunctionDateTrunc(factory);
+    registerFunctiontimezoneOffset(factory);
 }

 }
@@ -4,14 +4,10 @@ namespace DB
 class FunctionFactory;

 void registerFunctionsReinterpretAs(FunctionFactory & factory);
-void registerFunctionReinterpretAsString(FunctionFactory & factory);
-void registerFunctionReinterpretAsFixedString(FunctionFactory & factory);

 void registerFunctionsReinterpret(FunctionFactory & factory)
 {
     registerFunctionsReinterpretAs(factory);
-    registerFunctionReinterpretAsString(factory);
-    registerFunctionReinterpretAsFixedString(factory);
 }

 }
@@ -1,5 +1,8 @@
 #include <Functions/FunctionFactory.h>
 #include <Functions/castTypeToEither.h>
+#include <Functions/FunctionHelpers.h>
+
+#include <Core/callOnTypeIndex.h>

 #include <DataTypes/DataTypesNumber.h>
 #include <DataTypes/DataTypeString.h>
@@ -7,6 +10,7 @@
 #include <DataTypes/DataTypeDate.h>
 #include <DataTypes/DataTypeDateTime.h>
 #include <DataTypes/DataTypeUUID.h>
+#include <DataTypes/DataTypeFactory.h>
 #include <Columns/ColumnString.h>
 #include <Columns/ColumnFixedString.h>
 #include <Columns/ColumnConst.h>
@@ -21,178 +25,389 @@ namespace DB
 {
 namespace ErrorCodes
 {
     extern const int ILLEGAL_COLUMN;
     extern const int ILLEGAL_TYPE_OF_ARGUMENT;
 }

 namespace
 {
-template <typename ToDataType, typename Name, bool support_between_float_integer>
+
+/** Performs byte reinterpretation similar to reinterpret_cast.
+ *
+ * Following reinterpretations are allowed:
+ * 1. Any type that isValueUnambiguouslyRepresentedInFixedSizeContiguousMemoryRegion into FixedString.
+ * 2. Any type that isValueUnambiguouslyRepresentedInContiguousMemoryRegion into String.
+ * 3. Types that can be interpreted as numeric (Integers, Float, Date, DateTime, UUID) into FixedString,
+ *    String, and types that can be interpreted as numeric (Integers, Float, Date, DateTime, UUID).
+ */
 class FunctionReinterpretAs : public IFunction
 {
-    template <typename F>
-    static bool castType(const IDataType * type, F && f)
-    {
-        return castTypeToEither<DataTypeUInt32, DataTypeInt32, DataTypeUInt64, DataTypeInt64, DataTypeFloat32, DataTypeFloat64>(
-            type, std::forward<F>(f));
-    }
+public:
+    static constexpr auto name = "reinterpretAs";
+
+    static FunctionPtr create(const Context &) { return std::make_shared<FunctionReinterpretAs>(); }
+
+    String getName() const override { return name; }
+
+    size_t getNumberOfArguments() const override { return 2; }
+
+    bool useDefaultImplementationForConstants() const override { return true; }
+
+    ColumnNumbers getArgumentsThatAreAlwaysConstant() const override { return {1}; }
+
+    DataTypePtr getReturnTypeImpl(const ColumnsWithTypeAndName & arguments) const override
+    {
+        const auto & column = arguments.back().column;
+
+        DataTypePtr from_type = arguments[0].type;
+
+        const auto * type_col = checkAndGetColumnConst<ColumnString>(column.get());
+        if (!type_col)
+            throw Exception("Second argument to " + getName() + " must be a constant string describing type."
+                " Instead there is non-constant column of type " + arguments.back().type->getName(),
+                ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
+
+        DataTypePtr to_type = DataTypeFactory::instance().get(type_col->getValue<String>());
+
+        WhichDataType result_reinterpret_type(to_type);
+
+        if (result_reinterpret_type.isFixedString())
+        {
+            if (!from_type->isValueUnambiguouslyRepresentedInFixedSizeContiguousMemoryRegion())
+                throw Exception("Cannot reinterpret " + from_type->getName() +
+                    " as FixedString because it is not fixed size and contiguous in memory",
+                    ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
+        }
+        else if (result_reinterpret_type.isString())
+        {
+            if (!from_type->isValueUnambiguouslyRepresentedInContiguousMemoryRegion())
+                throw Exception("Cannot reinterpret " + from_type->getName() +
+                    " as String because it is not contiguous in memory",
+                    ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
+        }
+        else if (canBeReinterpretedAsNumeric(result_reinterpret_type))
+        {
+            WhichDataType from_data_type(from_type);
+
+            if (!canBeReinterpretedAsNumeric(from_data_type) && !from_data_type.isStringOrFixedString())
+                throw Exception("Cannot reinterpret " + from_type->getName() + " as " + to_type->getName()
+                    + " because only Numeric, String or FixedString can be reinterpreted in Numeric",
+                    ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
+        }
+
+        return to_type;
+    }
+
+    ColumnPtr executeImpl(const ColumnsWithTypeAndName & arguments, const DataTypePtr & result_type, size_t /*input_rows_count*/) const override
+    {
+        auto from_type = arguments[0].type;
+
+        ColumnPtr result;
+
+        if (!callOnTwoTypeIndexes(from_type->getTypeId(), result_type->getTypeId(), [&](const auto & types)
+        {
+            using Types = std::decay_t<decltype(types)>;
+            using FromType = typename Types::LeftType;
+            using ToType = typename Types::RightType;
+
+            /// Place this check before std::is_same_v<FromType, ToType> because the same FixedString
+            /// types do not necessarily have the same fixed byte size.
+            if constexpr (std::is_same_v<ToType, DataTypeFixedString>)
+            {
+                const IColumn & src = *arguments[0].column;
+                MutableColumnPtr dst = result_type->createColumn();
+
+                ColumnFixedString * dst_concrete = assert_cast<ColumnFixedString *>(dst.get());
+
+                if (src.isFixedAndContiguous() && src.sizeOfValueIfFixed() == dst_concrete->getN())
+                    executeContiguousToFixedString(src, *dst_concrete, dst_concrete->getN());
+                else
+                    executeToFixedString(src, *dst_concrete, dst_concrete->getN());
+
+                result = std::move(dst);
+
+                return true;
+            }
+            else if constexpr (std::is_same_v<FromType, ToType>)
+            {
+                result = arguments[0].column;
+
+                return true;
+            }
+            else if constexpr (std::is_same_v<ToType, DataTypeString>)
+            {
+                const IColumn & src = *arguments[0].column;
+                MutableColumnPtr dst = result_type->createColumn();
+
+                ColumnString * dst_concrete = assert_cast<ColumnString *>(dst.get());
+                executeToString(src, *dst_concrete);
+
+                result = std::move(dst);
+
+                return true;
+            }
+            else if constexpr (CanBeReinterpretedAsNumeric<ToType>)
+            {
+                using ToColumnType = typename ToType::ColumnType;
+                using ToFieldType = typename ToType::FieldType;
+
+                if constexpr (std::is_same_v<FromType, DataTypeString>)
+                {
+                    const auto * col_from = assert_cast<const ColumnString *>(arguments[0].column.get());
+
+                    auto col_res = ToColumnType::create();
+
+                    const ColumnString::Chars & data_from = col_from->getChars();
+                    const ColumnString::Offsets & offsets_from = col_from->getOffsets();
+                    size_t size = offsets_from.size();
+                    typename ToColumnType::Container & vec_res = col_res->getData();
+                    vec_res.resize(size);
+
+                    size_t offset = 0;
+                    for (size_t i = 0; i < size; ++i)
+                    {
+                        ToFieldType value{};
+                        memcpy(&value,
+                            &data_from[offset],
+                            std::min(static_cast<UInt64>(sizeof(ToFieldType)), offsets_from[i] - offset - 1));
+                        vec_res[i] = value;
+                        offset = offsets_from[i];
+                    }
+
+                    result = std::move(col_res);
+
+                    return true;
+                }
+                else if constexpr (std::is_same_v<FromType, DataTypeFixedString>)
+                {
+                    const auto * col_from_fixed = assert_cast<const ColumnFixedString *>(arguments[0].column.get());
+
+                    auto col_res = ToColumnType::create();
+
+                    const ColumnString::Chars & data_from = col_from_fixed->getChars();
+                    size_t step = col_from_fixed->getN();
+                    size_t size = data_from.size() / step;
+                    typename ToColumnType::Container & vec_res = col_res->getData();
+                    vec_res.resize(size);
+
+                    size_t offset = 0;
+                    size_t copy_size = std::min(step, sizeof(ToFieldType));
+                    for (size_t i = 0; i < size; ++i)
+                    {
+                        ToFieldType value{};
+                        memcpy(&value, &data_from[offset], copy_size);
+                        vec_res[i] = value;
+                        offset += step;
+                    }
+
+                    result = std::move(col_res);
+
+                    return true;
+                }
+                else if constexpr (CanBeReinterpretedAsNumeric<FromType>)
+                {
+                    using FromTypeFieldType = typename FromType::FieldType;
+                    const auto * col = assert_cast<const ColumnVector<FromTypeFieldType>*>(arguments[0].column.get());
+
+                    auto col_res = ToColumnType::create();
+                    reinterpretImpl(col->getData(), col_res->getData());
+                    result = std::move(col_res);
+
+                    return true;
+                }
+            }
+
+            return false;
+        }))
+        {
+            throw Exception("Cannot reinterpret " + from_type->getName() + " as " + result_type->getName(),
+                ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
+        }
+
+        return result;
+    }
+
+private:
+    template <typename T>
+    static constexpr auto CanBeReinterpretedAsNumeric =
+        IsDataTypeNumber<T> ||
+        std::is_same_v<T, DataTypeDate> ||
+        std::is_same_v<T, DataTypeDateTime> ||
+        std::is_same_v<T, DataTypeUUID>;
+
+    static bool canBeReinterpretedAsNumeric(const WhichDataType & type)
+    {
+        return type.isUInt() ||
+            type.isInt() ||
+            type.isDateOrDateTime() ||
+            type.isFloat() ||
+            type.isUUID();
+    }
+
+    static void NO_INLINE executeToFixedString(const IColumn & src, ColumnFixedString & dst, size_t n)
+    {
+        size_t rows = src.size();
+        ColumnFixedString::Chars & data_to = dst.getChars();
+        data_to.resize_fill(n * rows);
+
+        ColumnFixedString::Offset offset = 0;
+        for (size_t i = 0; i < rows; ++i)
+        {
+            StringRef data = src.getDataAt(i);
+
+            std::memcpy(&data_to[offset], data.data, std::min(n, data.size));
+            offset += n;
+        }
+    }
+
+    static void NO_INLINE executeContiguousToFixedString(const IColumn & src, ColumnFixedString & dst, size_t n)
+    {
+        size_t rows = src.size();
+        ColumnFixedString::Chars & data_to = dst.getChars();
+        data_to.resize(n * rows);
+
+        memcpy(data_to.data(), src.getRawData().data, data_to.size());
+    }
+
+    static void NO_INLINE executeToString(const IColumn & src, ColumnString & dst)
+    {
+        size_t rows = src.size();
+        ColumnString::Chars & data_to = dst.getChars();
+        ColumnString::Offsets & offsets_to = dst.getOffsets();
+        offsets_to.resize(rows);
+
+        ColumnString::Offset offset = 0;
+        for (size_t i = 0; i < rows; ++i)
+        {
+            StringRef data = src.getDataAt(i);
+
+            /// Cut trailing zero bytes.
+            while (data.size && data.data[data.size - 1] == 0)
+                --data.size;
+
+            data_to.resize(offset + data.size + 1);
+            memcpy(&data_to[offset], data.data, data.size);
+            offset += data.size;
+            data_to[offset] = 0;
+            ++offset;
+            offsets_to[i] = offset;
+        }
+    }

     template <typename From, typename To>
     static void reinterpretImpl(const PaddedPODArray<From> & from, PaddedPODArray<To> & to)
     {
         size_t size = from.size();
-        to.resize(size);
+        to.resize_fill(size);

         for (size_t i = 0; i < size; ++i)
         {
-            to[i] = unalignedLoad<To>(&(from.data()[i]));
+            memcpy(static_cast<void*>(&to[i]),
+                static_cast<const void*>(&from[i]),
+                std::min(sizeof(From), sizeof(To)));
         }
     }
 };

+template <typename ToDataType, typename Name>
+class FunctionReinterpretAsTyped : public IFunction
+{
+public:
     static constexpr auto name = Name::name;
-    static FunctionPtr create(const Context &) { return std::make_shared<FunctionReinterpretAs>(); }
-
-    using ToFieldType = typename ToDataType::FieldType;
-    using ColumnType = typename ToDataType::ColumnType;
+    static FunctionPtr create(const Context &) { return std::make_shared<FunctionReinterpretAsTyped>(); }

     String getName() const override { return name; }

     size_t getNumberOfArguments() const override { return 1; }

-    DataTypePtr getReturnTypeImpl(const DataTypes & arguments) const override
-    {
-        const IDataType & type = *arguments[0];
-        if constexpr (support_between_float_integer)
-        {
-            if (!isStringOrFixedString(type) && !isNumber(type))
-                throw Exception(
-                    "Cannot reinterpret " + type.getName() + " as " + ToDataType().getName(), ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
-
-            if (isNumber(type))
-            {
-                if (type.getSizeOfValueInMemory() != ToDataType{}.getSizeOfValueInMemory())
-                    throw Exception(
-                        "Cannot reinterpret " + type.getName() + " as " + ToDataType().getName(), ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
-            }
-        }
-        else
-        {
-            if (!isStringOrFixedString(type))
-                throw Exception(
-                    "Cannot reinterpret " + type.getName() + " as " + ToDataType().getName(), ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
-        }
-        return std::make_shared<ToDataType>();
-    }
-
     bool useDefaultImplementationForConstants() const override { return true; }

-    ColumnPtr executeImpl(const ColumnsWithTypeAndName & arguments, const DataTypePtr &, size_t /*input_rows_count*/) const override
+    static ColumnsWithTypeAndName addTypeColumnToArguments(const ColumnsWithTypeAndName & arguments)
     {
-        if (const ColumnString * col_from = typeid_cast<const ColumnString *>(arguments[0].column.get()))
-        {
-            auto col_res = ColumnType::create();
-
-            const ColumnString::Chars & data_from = col_from->getChars();
-            const ColumnString::Offsets & offsets_from = col_from->getOffsets();
-            size_t size = offsets_from.size();
-            typename ColumnType::Container & vec_res = col_res->getData();
-            vec_res.resize(size);
-
-            size_t offset = 0;
-            for (size_t i = 0; i < size; ++i)
-            {
-                ToFieldType value{};
-                memcpy(&value, &data_from[offset], std::min(static_cast<UInt64>(sizeof(ToFieldType)), offsets_from[i] - offset - 1));
-                vec_res[i] = value;
-                offset = offsets_from[i];
-            }
-
-            return col_res;
-        }
-        else if (const ColumnFixedString * col_from_fixed = typeid_cast<const ColumnFixedString *>(arguments[0].column.get()))
-        {
-            auto col_res = ColumnVector<ToFieldType>::create();
-
-            const ColumnString::Chars & data_from = col_from_fixed->getChars();
-            size_t step = col_from_fixed->getN();
-            size_t size = data_from.size() / step;
-            typename ColumnVector<ToFieldType>::Container & vec_res = col_res->getData();
-            vec_res.resize(size);
-
-            size_t offset = 0;
-            size_t copy_size = std::min(step, sizeof(ToFieldType));
-            for (size_t i = 0; i < size; ++i)
-            {
-                ToFieldType value{};
-                memcpy(&value, &data_from[offset], copy_size);
-                vec_res[i] = value;
-                offset += step;
-            }
-
-            return col_res;
-        }
-        else if constexpr (support_between_float_integer)
-        {
-            ColumnPtr res;
-            if (castType(arguments[0].type.get(), [&](const auto & type)
-            {
-                using DataType = std::decay_t<decltype(type)>;
-                using T = typename DataType::FieldType;
-
-                const ColumnVector<T> * col = checkAndGetColumn<ColumnVector<T>>(arguments[0].column.get());
-                auto col_res = ColumnType::create();
-                reinterpretImpl(col->getData(), col_res->getData());
-                res = std::move(col_res);
-
-                return true;
-            }))
-            {
-                return res;
-            }
-            else
-            {
-                throw Exception(
-                    "Illegal column " + arguments[0].column->getName() + " of argument of function " + getName(),
-                    ErrorCodes::ILLEGAL_COLUMN);
-            }
-        }
-        else
-        {
-            throw Exception(
-                "Illegal column " + arguments[0].column->getName() + " of argument of function " + getName(),
-                ErrorCodes::ILLEGAL_COLUMN);
-        }
-    }
-};
+        const auto & argument = arguments[0];
+
+        DataTypePtr data_type;
+
+        if constexpr (std::is_same_v<ToDataType, DataTypeFixedString>)
+        {
+            const auto & type = argument.type;
+
+            if (!type->isValueUnambiguouslyRepresentedInFixedSizeContiguousMemoryRegion())
+                throw Exception("Cannot reinterpret " + type->getName() +
+                    " as FixedString because it is not fixed size and contiguous in memory",
+                    ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
+
+            size_t type_value_size_in_memory = type->getSizeOfValueInMemory();
+            data_type = std::make_shared<DataTypeFixedString>(type_value_size_in_memory);
+        }
+        else
+            data_type = std::make_shared<ToDataType>();
+
+        auto type_name_column = DataTypeString().createColumnConst(1, data_type->getName());
+        ColumnWithTypeAndName type_column(type_name_column, std::make_shared<DataTypeString>(), "");
+
+        ColumnsWithTypeAndName arguments_with_type
+        {
+            argument,
+            type_column
+        };
+
+        return arguments_with_type;
+    }
+
+    DataTypePtr getReturnTypeImpl(const ColumnsWithTypeAndName & arguments) const override
+    {
+        auto arguments_with_type = addTypeColumnToArguments(arguments);
+        return impl.getReturnTypeImpl(arguments_with_type);
+    }
+
+    ColumnPtr executeImpl(const ColumnsWithTypeAndName & arguments, const DataTypePtr & return_type, size_t input_rows_count) const override
+    {
+        auto arguments_with_type = addTypeColumnToArguments(arguments);
+        return impl.executeImpl(arguments_with_type, return_type, input_rows_count);
+    }
+
+    FunctionReinterpretAs impl;
+};

 struct NameReinterpretAsUInt8 { static constexpr auto name = "reinterpretAsUInt8"; };
 struct NameReinterpretAsUInt16 { static constexpr auto name = "reinterpretAsUInt16"; };
 struct NameReinterpretAsUInt32 { static constexpr auto name = "reinterpretAsUInt32"; };
 struct NameReinterpretAsUInt64 { static constexpr auto name = "reinterpretAsUInt64"; };
+struct NameReinterpretAsUInt256 { static constexpr auto name = "reinterpretAsUInt256"; };
 struct NameReinterpretAsInt8 { static constexpr auto name = "reinterpretAsInt8"; };
 struct NameReinterpretAsInt16 { static constexpr auto name = "reinterpretAsInt16"; };
 struct NameReinterpretAsInt32 { static constexpr auto name = "reinterpretAsInt32"; };
 struct NameReinterpretAsInt64 { static constexpr auto name = "reinterpretAsInt64"; };
+struct NameReinterpretAsInt128 { static constexpr auto name = "reinterpretAsInt128"; };
+struct NameReinterpretAsInt256 { static constexpr auto name = "reinterpretAsInt256"; };
 struct NameReinterpretAsFloat32 { static constexpr auto name = "reinterpretAsFloat32"; };
 struct NameReinterpretAsFloat64 { static constexpr auto name = "reinterpretAsFloat64"; };
 struct NameReinterpretAsDate { static constexpr auto name = "reinterpretAsDate"; };
 struct NameReinterpretAsDateTime { static constexpr auto name = "reinterpretAsDateTime"; };
 struct NameReinterpretAsUUID { static constexpr auto name = "reinterpretAsUUID"; };
+struct NameReinterpretAsString { static constexpr auto name = "reinterpretAsString"; };
+struct NameReinterpretAsFixedString { static constexpr auto name = "reinterpretAsFixedString"; };

-using FunctionReinterpretAsUInt8 = FunctionReinterpretAs<DataTypeUInt8, NameReinterpretAsUInt8, false>;
-using FunctionReinterpretAsUInt16 = FunctionReinterpretAs<DataTypeUInt16, NameReinterpretAsUInt16, false>;
-using FunctionReinterpretAsUInt32 = FunctionReinterpretAs<DataTypeUInt32, NameReinterpretAsUInt32, true>;
-using FunctionReinterpretAsUInt64 = FunctionReinterpretAs<DataTypeUInt64, NameReinterpretAsUInt64, true>;
-using FunctionReinterpretAsInt8 = FunctionReinterpretAs<DataTypeInt8, NameReinterpretAsInt8, false>;
-using FunctionReinterpretAsInt16 = FunctionReinterpretAs<DataTypeInt16, NameReinterpretAsInt16, false>;
-using FunctionReinterpretAsInt32 = FunctionReinterpretAs<DataTypeInt32, NameReinterpretAsInt32, true>;
-using FunctionReinterpretAsInt64 = FunctionReinterpretAs<DataTypeInt64, NameReinterpretAsInt64, true>;
-using FunctionReinterpretAsFloat32 = FunctionReinterpretAs<DataTypeFloat32, NameReinterpretAsFloat32, true>;
-using FunctionReinterpretAsFloat64 = FunctionReinterpretAs<DataTypeFloat64, NameReinterpretAsFloat64, true>;
-using FunctionReinterpretAsDate = FunctionReinterpretAs<DataTypeDate, NameReinterpretAsDate, false>;
-using FunctionReinterpretAsDateTime = FunctionReinterpretAs<DataTypeDateTime, NameReinterpretAsDateTime, false>;
-using FunctionReinterpretAsUUID = FunctionReinterpretAs<DataTypeUUID, NameReinterpretAsUUID, false>;
+using FunctionReinterpretAsUInt8 = FunctionReinterpretAsTyped<DataTypeUInt8, NameReinterpretAsUInt8>;
+using FunctionReinterpretAsUInt16 = FunctionReinterpretAsTyped<DataTypeUInt16, NameReinterpretAsUInt16>;
+using FunctionReinterpretAsUInt32 = FunctionReinterpretAsTyped<DataTypeUInt32, NameReinterpretAsUInt32>;
+using FunctionReinterpretAsUInt64 = FunctionReinterpretAsTyped<DataTypeUInt64, NameReinterpretAsUInt64>;
+using FunctionReinterpretAsUInt256 = FunctionReinterpretAsTyped<DataTypeUInt256, NameReinterpretAsUInt256>;
+using FunctionReinterpretAsInt8 = FunctionReinterpretAsTyped<DataTypeInt8, NameReinterpretAsInt8>;
+using FunctionReinterpretAsInt16 = FunctionReinterpretAsTyped<DataTypeInt16, NameReinterpretAsInt16>;
+using FunctionReinterpretAsInt32 = FunctionReinterpretAsTyped<DataTypeInt32, NameReinterpretAsInt32>;
+using FunctionReinterpretAsInt64 = FunctionReinterpretAsTyped<DataTypeInt64, NameReinterpretAsInt64>;
+using FunctionReinterpretAsInt128 = FunctionReinterpretAsTyped<DataTypeInt128, NameReinterpretAsInt128>;
+using FunctionReinterpretAsInt256 = FunctionReinterpretAsTyped<DataTypeInt256, NameReinterpretAsInt256>;
+using FunctionReinterpretAsFloat32 = FunctionReinterpretAsTyped<DataTypeFloat32, NameReinterpretAsFloat32>;
+using FunctionReinterpretAsFloat64 = FunctionReinterpretAsTyped<DataTypeFloat64, NameReinterpretAsFloat64>;
+using FunctionReinterpretAsDate = FunctionReinterpretAsTyped<DataTypeDate, NameReinterpretAsDate>;
+using FunctionReinterpretAsDateTime = FunctionReinterpretAsTyped<DataTypeDateTime, NameReinterpretAsDateTime>;
+using FunctionReinterpretAsUUID = FunctionReinterpretAsTyped<DataTypeUUID, NameReinterpretAsUUID>;
+
+using FunctionReinterpretAsString = FunctionReinterpretAsTyped<DataTypeString, NameReinterpretAsString>;
+
+using FunctionReinterpretAsFixedString = FunctionReinterpretAsTyped<DataTypeFixedString, NameReinterpretAsFixedString>;

 }

 void registerFunctionsReinterpretAs(FunctionFactory & factory)
@@ -201,15 +416,24 @@ void registerFunctionsReinterpretAs(FunctionFactory & factory)
     factory.registerFunction<FunctionReinterpretAsUInt16>();
     factory.registerFunction<FunctionReinterpretAsUInt32>();
     factory.registerFunction<FunctionReinterpretAsUInt64>();
+    factory.registerFunction<FunctionReinterpretAsUInt256>();
     factory.registerFunction<FunctionReinterpretAsInt8>();
     factory.registerFunction<FunctionReinterpretAsInt16>();
     factory.registerFunction<FunctionReinterpretAsInt32>();
     factory.registerFunction<FunctionReinterpretAsInt64>();
+    factory.registerFunction<FunctionReinterpretAsInt128>();
+    factory.registerFunction<FunctionReinterpretAsInt256>();
     factory.registerFunction<FunctionReinterpretAsFloat32>();
     factory.registerFunction<FunctionReinterpretAsFloat64>();
     factory.registerFunction<FunctionReinterpretAsDate>();
     factory.registerFunction<FunctionReinterpretAsDateTime>();
     factory.registerFunction<FunctionReinterpretAsUUID>();
+
+    factory.registerFunction<FunctionReinterpretAsString>();
+
+    factory.registerFunction<FunctionReinterpretAsFixedString>();
+
+    factory.registerFunction<FunctionReinterpretAs>();
 }

 }
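The conversion rule at the heart of the new `reinterpretImpl` above is: zero-initialise the destination, then copy `min(sizeof(From), sizeof(To))` bytes, so narrowing truncates and widening zero-extends (byte order follows the host, little-endian on the usual platforms). A standalone sketch, not ClickHouse code:

    #include <algorithm>
    #include <cstdint>
    #include <cstring>
    #include <iostream>

    template <typename From, typename To>
    To reinterpret_value(const From & from)
    {
        To to{};  // zero padding for the widening case
        std::memcpy(&to, &from, std::min(sizeof(From), sizeof(To)));
        return to;
    }

    int main()
    {
        std::cout << reinterpret_value<uint32_t, uint16_t>(0x11223344u) << '\n'; // 0x3344 on LE
        std::cout << reinterpret_value<uint16_t, uint64_t>(0x3344u) << '\n';     // 0x3344, zero-extended
    }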
@@ -1,96 +0,0 @@
-#include <Functions/FunctionFactory.h>
-
-#include <DataTypes/DataTypeFixedString.h>
-#include <Columns/ColumnFixedString.h>
-#include <Common/typeid_cast.h>
-#include <Common/memcpySmall.h>
-
-
-namespace DB
-{
-namespace ErrorCodes
-{
-    extern const int ILLEGAL_TYPE_OF_ARGUMENT;
-    extern const int ILLEGAL_COLUMN;
-}
-
-namespace
-{
-
-class FunctionReinterpretAsFixedString : public IFunction
-{
-public:
-    static FunctionPtr create(const Context &) { return std::make_shared<FunctionReinterpretAsFixedString>(); }
-
-    static constexpr auto name = "reinterpretAsFixedString";
-
-    String getName() const override
-    {
-        return name;
-    }
-
-    size_t getNumberOfArguments() const override { return 1; }
-
-    DataTypePtr getReturnTypeImpl(const DataTypes & arguments) const override
-    {
-        const IDataType & type = *arguments[0];
-
-        if (type.isValueUnambiguouslyRepresentedInFixedSizeContiguousMemoryRegion())
-            return std::make_shared<DataTypeFixedString>(type.getSizeOfValueInMemory());
-        throw Exception("Cannot reinterpret " + type.getName() + " as FixedString because it is not fixed size and contiguous in memory", ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
-    }
-
-    static void NO_INLINE executeToFixedString(const IColumn & src, ColumnFixedString & dst, size_t n)
-    {
-        size_t rows = src.size();
-        ColumnFixedString::Chars & data_to = dst.getChars();
-        data_to.resize(n * rows);
-
-        ColumnFixedString::Offset offset = 0;
-        for (size_t i = 0; i < rows; ++i)
-        {
-            StringRef data = src.getDataAt(i);
-            memcpySmallAllowReadWriteOverflow15(&data_to[offset], data.data, n);
-            offset += n;
-        }
-    }
-
-    static void NO_INLINE executeContiguousToFixedString(const IColumn & src, ColumnFixedString & dst, size_t n)
-    {
-        size_t rows = src.size();
-        ColumnFixedString::Chars & data_to = dst.getChars();
-        data_to.resize(n * rows);
-
-        memcpy(data_to.data(), src.getRawData().data, data_to.size());
-    }
-
-    bool useDefaultImplementationForConstants() const override { return true; }
-
-    ColumnPtr executeImpl(const ColumnsWithTypeAndName & arguments, const DataTypePtr & result_type, size_t /*input_rows_count*/) const override
-    {
-        const IColumn & src = *arguments[0].column;
-        MutableColumnPtr dst = result_type->createColumn();
-
-        if (ColumnFixedString * dst_concrete = typeid_cast<ColumnFixedString *>(dst.get()))
-        {
-            if (src.isFixedAndContiguous() && src.sizeOfValueIfFixed() == dst_concrete->getN())
-                executeContiguousToFixedString(src, *dst_concrete, dst_concrete->getN());
-            else
-                executeToFixedString(src, *dst_concrete, dst_concrete->getN());
-        }
-        else
-            throw Exception("Illegal column " + src.getName() + " of argument of function " + getName(), ErrorCodes::ILLEGAL_COLUMN);
-
-        return dst;
-    }
-};
-
-}
-
-void registerFunctionReinterpretAsFixedString(FunctionFactory & factory)
-{
-    factory.registerFunction<FunctionReinterpretAsFixedString>();
-}
-
-}
@ -1,92 +0,0 @@
|
||||
#include <Functions/FunctionFactory.h>

#include <DataTypes/DataTypeString.h>
#include <Columns/ColumnString.h>
#include <Common/typeid_cast.h>
#include <Common/memcpySmall.h>


namespace DB
{
namespace ErrorCodes
{
    extern const int ILLEGAL_COLUMN;
    extern const int ILLEGAL_TYPE_OF_ARGUMENT;
}

namespace
{

/** Function for transforming numbers and dates to strings that contain the same set of bytes in the machine representation. */
class FunctionReinterpretAsString : public IFunction
{
public:
    static FunctionPtr create(const Context &) { return std::make_shared<FunctionReinterpretAsString>(); }

    static constexpr auto name = "reinterpretAsString";

    String getName() const override
    {
        return name;
    }

    size_t getNumberOfArguments() const override { return 1; }

    DataTypePtr getReturnTypeImpl(const DataTypes & arguments) const override
    {
        const IDataType & type = *arguments[0];

        if (type.isValueUnambiguouslyRepresentedInContiguousMemoryRegion())
            return std::make_shared<DataTypeString>();
        throw Exception("Cannot reinterpret " + type.getName() + " as String because it is not contiguous in memory", ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
    }

    static void executeToString(const IColumn & src, ColumnString & dst)
    {
        size_t rows = src.size();
        ColumnString::Chars & data_to = dst.getChars();
        ColumnString::Offsets & offsets_to = dst.getOffsets();
        offsets_to.resize(rows);

        ColumnString::Offset offset = 0;
        for (size_t i = 0; i < rows; ++i)
        {
            StringRef data = src.getDataAt(i);

            /// Cut trailing zero bytes.
            while (data.size && data.data[data.size - 1] == 0)
                --data.size;

            data_to.resize(offset + data.size + 1);
            memcpySmallAllowReadWriteOverflow15(&data_to[offset], data.data, data.size);
            offset += data.size;
            data_to[offset] = 0;
            ++offset;
            offsets_to[i] = offset;
        }
    }

    bool useDefaultImplementationForConstants() const override { return true; }

    ColumnPtr executeImpl(const ColumnsWithTypeAndName & arguments, const DataTypePtr & result_type, size_t /*input_rows_count*/) const override
    {
        const IColumn & src = *arguments[0].column;
        MutableColumnPtr dst = result_type->createColumn();

        if (ColumnString * dst_concrete = typeid_cast<ColumnString *>(dst.get()))
            executeToString(src, *dst_concrete);
        else
            throw Exception("Illegal column " + src.getName() + " of argument of function " + getName(), ErrorCodes::ILLEGAL_COLUMN);

        return dst;
    }
};

}

void registerFunctionReinterpretAsString(FunctionFactory & factory)
{
    factory.registerFunction<FunctionReinterpretAsString>();
}

}
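The trailing-zero trimming in `executeToString` above is what distinguishes this function from the FixedString variant; a hedged example (again assuming a little-endian host):

```sql
-- 65 as UInt32 is laid out as bytes 41 00 00 00; the trailing zero bytes are
-- cut, so the result is the one-character string 'A'.
SELECT reinterpretAsString(toUInt32(65));
-- expected: A
```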
src/Functions/timezoneOffset.cpp (new file, 19 lines)
@@ -0,0 +1,19 @@
#include <Functions/FunctionFactory.h>
#include <Functions/DateTimeTransforms.h>
#include <Functions/FunctionDateOrDateTimeToSomething.h>
#include <DataTypes/DataTypesNumber.h>


namespace DB
{

using FunctiontimezoneOffset = FunctionDateOrDateTimeToSomething<DataTypeInt32, TimezoneOffsetImpl>;

void registerFunctiontimezoneOffset(FunctionFactory & factory)
{
    factory.registerFunction<FunctiontimezoneOffset>();
}

}
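A minimal smoke test for the new function (a sketch: it assumes the registered name is `timezoneOffset`, as the file name and the SRCS entry below suggest, and that the server has tzdata for Europe/Moscow, which is UTC+3 with no DST, i.e. an offset of 10800 seconds):

```sql
-- timezoneOffset returns the UTC offset, in seconds, of the DateTime's time
-- zone at that specific moment (DST and historical changes included).
SELECT timezoneOffset(toDateTime('2021-02-01 00:00:00', 'Europe/Moscow'));
-- expected: 10800
```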
@@ -409,8 +409,6 @@ SRCS(
     registerFunctionsUnixTimestamp64.cpp
     registerFunctionsVisitParam.cpp
     reinterpretAs.cpp
-    reinterpretAsFixedString.cpp
-    reinterpretAsString.cpp
     repeat.cpp
     replaceAll.cpp
     replaceOne.cpp
@@ -454,6 +452,7 @@ SRCS(
     timeSlot.cpp
     timeSlots.cpp
     timezone.cpp
+    timezoneOffset.cpp
     toColumnTypeName.cpp
     toCustomWeek.cpp
     toDayOfMonth.cpp
@@ -38,11 +38,6 @@ public:
             peeked_size = 0;
         }
         checkpoint.emplace(pos);
-
-        // FIXME: we check checkpoint existence in a few places (rollbackToCheckpoint/dropCheckpoint)
-        // with a simple if (checkpoint), but checkpoint can be nullptr after
-        // setCheckpoint is called on an empty (non-initialized/EOF) buffer,
-        // so we can't just use a simple if (checkpoint).
     }

     /// Forget checkpoint and all data between checkpoint and position
@@ -4,6 +4,7 @@
 #include <cstring>
 #include <memory>
 #include <iostream>
+#include <cassert>

 #include <Common/Exception.h>
 #include <IO/BufferBase.h>
@@ -37,7 +38,7 @@ public:
      */
     inline void next()
     {
-        if (!offset() && available())
+        if (!offset())
             return;
         bytes += offset();
@@ -73,6 +74,9 @@
     {
         size_t bytes_copied = 0;

+        /// Otherwise an empty working buffer produces an endless loop.
+        assert(working_buffer.size() > 0);
+
         while (bytes_copied < n)
         {
             nextIfAtEnd();
@@ -1,7 +1,6 @@
 #include <Interpreters/AggregateDescription.h>
 #include <Common/FieldVisitors.h>
 #include <IO/Operators.h>
-#include <Parsers/ASTFunction.h>

 namespace DB
 {
@@ -100,31 +99,4 @@ void AggregateDescription::explain(WriteBuffer & out, size_t indent) const
     }
 }

-std::string WindowFunctionDescription::dump() const
-{
-    WriteBufferFromOwnString ss;
-
-    ss << "window function '" << column_name << "'\n";
-    ss << "function node " << function_node->dumpTree() << "\n";
-    ss << "aggregate function '" << aggregate_function->getName() << "'\n";
-    if (!function_parameters.empty())
-    {
-        ss << "parameters " << toString(function_parameters) << "\n";
-    }
-
-    return ss.str();
-}
-
-std::string WindowDescription::dump() const
-{
-    WriteBufferFromOwnString ss;
-
-    ss << "window '" << window_name << "'\n";
-    ss << "partition_by " << dumpSortDescription(partition_by) << "\n";
-    ss << "order_by " << dumpSortDescription(order_by) << "\n";
-    ss << "full_sort_description " << dumpSortDescription(full_sort_description) << "\n";
-
-    return ss.str();
-}
-
 }
(Some files were not shown because too many files have changed in this diff.)