toc_priority: 78
toc_title: 2018

ClickHouse Release 18.16

ClickHouse Release 18.16.1, 2018-12-21

Bug Fixes:

  • Fixed an error that led to problems with updating dictionaries with the ODBC source. #3825, #3829
  • JIT compilation of aggregate functions now works with LowCardinality columns. #3838

Improvements:

  • Added the low_cardinality_allow_in_native_format setting (enabled by default). When disabled, LowCardinality columns will be converted to ordinary columns for SELECT queries and ordinary columns will be expected for INSERT queries. #3879

Build Improvements:

  • Fixes for builds on macOS and ARM.

ClickHouse Release 18.16.0, 2018-12-14

New Features:

  • DEFAULT expressions are evaluated for missing fields when loading data in semi-structured input formats (JSONEachRow, TSKV). The feature is enabled with the insert_sample_with_metadata setting. #3555
  • The ALTER TABLE query now has the MODIFY ORDER BY action for changing the sorting key when adding or removing a table column. This is useful for tables in the MergeTree family that perform additional tasks when merging based on this sorting key, such as SummingMergeTree, AggregatingMergeTree, and so on. #3581 #3755
  • For tables in the MergeTree family, now you can specify a different sorting key (ORDER BY) and index (PRIMARY KEY). The sorting key can be longer than the index (see the sketch after this list). #3581
  • Added the hdfs table function and the HDFS table engine for importing and exporting data to HDFS. chenxing-xc
  • Added functions for working with base64: base64Encode, base64Decode, tryBase64Decode. Alexander Krasheninnikov
  • Now you can use a parameter to configure the precision of the uniqCombined aggregate function (select the number of HyperLogLog cells). #3406
  • Added the system.contributors table that contains the names of everyone who made commits in ClickHouse. #3452
  • Added the ability to omit the partition for the ALTER TABLE ... FREEZE query in order to back up all partitions at once. #3514
  • Added dictGet and dictGetOrDefault functions that do not require specifying the type of return value. The type is determined automatically from the dictionary description. Amos Bird
  • Now you can specify comments for a column in the table description and change it using ALTER. #3377
  • Reading is supported for Join type tables with simple keys. Amos Bird
  • Now you can specify the options join_use_nulls, max_rows_in_join, max_bytes_in_join, and join_overflow_mode when creating a Join type table. Amos Bird
  • Added the joinGet function that allows you to use a Join type table like a dictionary. Amos Bird
  • Added the partition_key, sorting_key, primary_key, and sampling_key columns to the system.tables table in order to provide information about table keys. #3609
  • Added the is_in_partition_key, is_in_sorting_key, is_in_primary_key, and is_in_sampling_key columns to the system.columns table. #3609
  • Added the min_time and max_time columns to the system.parts table. These columns are populated when the partitioning key is an expression consisting of DateTime columns. Emmanuel Donin de Rosière
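
A minimal sketch of the two sorting-key features above, assuming a hypothetical SummingMergeTree table (the table and column names are illustrative, not taken from the changelog):

```sql
-- The sorting key (ORDER BY) can now be longer than the primary key index (PRIMARY KEY).
CREATE TABLE example_sums
(
    key UInt64,
    category String,
    value UInt64
)
ENGINE = SummingMergeTree
ORDER BY (key, category)
PRIMARY KEY key;

-- MODIFY ORDER BY: extend the sorting key together with adding the new column.
ALTER TABLE example_sums
    ADD COLUMN subcategory String,
    MODIFY ORDER BY (key, category, subcategory);
```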

Bug Fixes:

  • Fixes and performance improvements for the LowCardinality data type. GROUP BY using LowCardinality(Nullable(...)). Getting the values of extremes. Processing high-order functions. LEFT ARRAY JOIN. Distributed GROUP BY. Functions that return Array. Execution of ORDER BY. Writing to Distributed tables (nicelulu). Backward compatibility for INSERT queries from old clients that implement the Native protocol. Support for LowCardinality for JOIN. Improved performance when working in a single stream. #3823 #3803 #3799 #3769 #3744 #3681 #3651 #3649 #3641 #3632 #3568 #3523 #3518
  • Fixed how the select_sequential_consistency option works. Previously, when this setting was enabled, an incomplete result was sometimes returned after beginning to write to a new partition. #2863
  • Databases are correctly specified when executing DDL ON CLUSTER queries and ALTER UPDATE/DELETE. #3772 #3460
  • Databases are correctly specified for subqueries inside a VIEW. #3521
  • Fixed a bug in PREWHERE with FINAL for VersionedCollapsingMergeTree. 7167bfd7
  • Now you can use KILL QUERY to cancel queries that have not started yet because they are waiting for the table to be locked. #3517
  • Corrected date and time calculations if the clocks were moved back at midnight (this happens in Iran, and happened in Moscow from 1981 to 1983). Previously, this led to the time being reset a day earlier than necessary, and also caused incorrect formatting of the date and time in text format. #3819
  • Fixed bugs in some cases of VIEW and subqueries that omit the database. Winter Zhang
  • Fixed a race condition when simultaneously reading from a MATERIALIZED VIEW and deleting a MATERIALIZED VIEW due to not locking the internal MATERIALIZED VIEW. #3404 #3694
  • Fixed the error Lock handler cannot be nullptr. #3689
  • Fixed query processing when the compile_expressions option is enabled (it's enabled by default). Nondeterministic constant expressions like the now function are no longer unfolded. #3457
  • Fixed a crash when specifying a non-constant scale argument in toDecimal32/64/128 functions.
  • Fixed an error when trying to insert an array with NULL elements in the Values format into a column of type Array without Nullable (if input_format_values_interpret_expressions = 1). #3487 #3503
  • Fixed continuous error logging in DDLWorker if ZooKeeper is not available. 8f50c620
  • Fixed the return type for quantile* functions from Date and DateTime types of arguments. #3580
  • Fixed the WITH clause if it specifies a simple alias without expressions. #3570
  • Fixed processing of queries with named sub-queries and qualified column names when enable_optimize_predicate_expression is enabled. Winter Zhang
  • Fixed the error Attempt to attach to nullptr thread group when working with materialized views. Marek Vavruša
  • Fixed a crash when passing certain incorrect arguments to the arrayReverse function. 73e3a7b6
  • Fixed the buffer overflow in the extractURLParameter function. Improved performance. Added correct processing of strings containing zero bytes. 141e9799
  • Fixed buffer overflow in the lowerUTF8 and upperUTF8 functions. Removed the ability to execute these functions over FixedString type arguments. #3662
  • Fixed a rare race condition when deleting MergeTree tables. #3680
  • Fixed a race condition when reading from Buffer tables and simultaneously performing ALTER or DROP on the target tables. #3719
  • Fixed a segfault if the max_temporary_non_const_columns limit was exceeded. #3788

Improvements:

  • The server does not write the processed configuration files to the /etc/clickhouse-server/ directory. Instead, it saves them in the preprocessed_configs directory inside path. This means that the /etc/clickhouse-server/ directory does not have write access for the clickhouse user, which improves security. #2443
  • The min_merge_bytes_to_use_direct_io option is set to 10 GiB by default. A merge that forms large parts of tables from the MergeTree family will be performed in O_DIRECT mode, which prevents excessive page cache eviction. #3504
  • Accelerated server start when there is a very large number of tables. #3398
  • Added a connection pool and HTTP Keep-Alive for connections between replicas. #3594
  • If the query syntax is invalid, the 400 Bad Request code is returned in the HTTP interface (500 was returned previously). 31bc680a
  • The join_default_strictness option is set to ALL by default for compatibility. 120e2cbe
  • Removed logging to stderr from the re2 library for invalid or complex regular expressions. #3723
  • Added for the Kafka table engine: checks for subscriptions before beginning to read from Kafka; the kafka_max_block_size setting for the table. Marek Vavruša
  • The cityHash64, farmHash64, metroHash64, sipHash64, halfMD5, murmurHash2_32, murmurHash2_64, murmurHash3_32, and murmurHash3_64 functions now work for any number of arguments and for arguments in the form of tuples. #3451 #3519
  • The arrayReverse function now works with any types of arrays. 73e3a7b6
  • Added an optional parameter: the slot size for the timeSlots function. Kirill Shvakov
  • For FULL and RIGHT JOIN, the max_block_size setting is used for a stream of non-joined data from the right table. Amos Bird
  • Added the --secure command line parameter in clickhouse-benchmark and clickhouse-performance-test to enable TLS. #3688 #3690
  • Type conversion when the structure of a Buffer type table does not match the structure of the destination table. Vitaly Baranov
  • Added the tcp_keep_alive_timeout option to enable keep-alive packets after inactivity for the specified time interval. #3441
  • Removed unnecessary quoting of values for the partition key in the system.parts table if it consists of a single column. #3652
  • The modulo function works for Date and DateTime data types. #3385
  • Added synonyms for the POWER, LN, LCASE, UCASE, REPLACE, LOCATE, SUBSTR, and MID functions. #3774 #3763 Some function names are case-insensitive for compatibility with the SQL standard. Added the syntactic sugar SUBSTRING(expr FROM start FOR length) for compatibility with SQL (see the example after this list). #3804
  • Added the ability to mlock memory pages corresponding to clickhouse-server executable code to prevent it from being forced out of memory. This feature is disabled by default. #3553
  • Improved performance when reading from O_DIRECT (with the min_bytes_to_use_direct_io option enabled). #3405
  • Improved performance of the dictGet...OrDefault function for a constant key argument and a non-constant default argument. Amos Bird
  • The firstSignificantSubdomain function now processes the domains gov, mil, and edu (Igor Hatarist). Improved performance. #3628
  • Ability to specify custom environment variables for starting clickhouse-server using the SYS-V init.d script by defining CLICKHOUSE_PROGRAM_ENV in /etc/default/clickhouse. Pavlo Bashynskyi
  • Correct return code for the clickhouse-server init script. #3516
  • The system.metrics table now has the VersionInteger metric, and system.build_options has the added line VERSION_INTEGER, which contains the numeric form of the ClickHouse version, such as 18016000. #3644
  • Removed the ability to compare the Date type with a number to avoid potential errors like date = 2018-12-17, where quotes around the date are omitted by mistake. #3687
  • Fixed the behavior of stateful functions like rowNumberInAllBlocks. They previously output a result that was one number larger due to starting during query analysis. Amos Bird
  • If the force_restore_data file can't be deleted, an error message is displayed. Amos Bird
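
For illustration, the SQL-standard substring syntax mentioned above; both forms should return 'House':

```sql
SELECT
    SUBSTRING('ClickHouse' FROM 6 FOR 5) AS standard_form,
    substring('ClickHouse', 6, 5)        AS function_form;
```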

Build Improvements:

  • Updated the jemalloc library, which fixes a potential memory leak. Amos Bird
  • Profiling with jemalloc is enabled by default in debug builds. 2cc82f5c
  • Added the ability to run integration tests when only Docker is installed on the system. #3650
  • Added the fuzz expression test in SELECT queries. #3442
  • Added a stress test for commits, which performs functional tests in parallel and in random order to detect more race conditions. #3438
  • Improved the method for starting clickhouse-server in a Docker image. Elghazal Ahmed
  • For a Docker image, added support for initializing databases using files in the /docker-entrypoint-initdb.d directory. Konstantin Lebedev
  • Fixes for builds on ARM. #3709

Backward Incompatible Changes:

  • Removed the ability to compare the Date type with a number. Instead of toDate('2018-12-18') = 17883, you must use an explicit type conversion: = toDate(17883), as shown below. #3687
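
A short illustration of the change (17883 corresponds to 2018-12-18 as a day number):

```sql
-- No longer accepted: toDate('2018-12-18') = 17883
-- Compare with an explicit conversion instead:
SELECT toDate('2018-12-18') = toDate(17883);
```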

ClickHouse Release 18.14

ClickHouse Release 18.14.19, 2018-12-19

Bug Fixes:

  • Fixed an error that led to problems with updating dictionaries with the ODBC source. #3825, #3829
  • Databases are correctly specified when executing DDL ON CLUSTER queries. #3460
  • Fixed a segfault if the max_temporary_non_const_columns limit was exceeded. #3788

Build Improvements:

  • Fixes for builds on ARM.

ClickHouse Release 18.14.18, 2018-12-04

Bug Fixes:

  • Fixed an error in the dictGet... functions for dictionaries of type range if one of the arguments is constant and the other is not. #3751
  • Fixed an error that caused the message netlink: '...': attribute type 1 has an invalid length to be printed in the Linux kernel log; this happened only on sufficiently recent Linux kernel versions. #3749
  • Fixed a segfault in the empty function for arguments of the FixedString type. Daniel, Dao Quang Minh
  • Fixed excessive memory allocation when using a large value of the max_query_size setting (a memory chunk of max_query_size bytes was preallocated at once). #3720

Build Changes:

  • Fixed build with LLVM/Clang libraries of version 7 from the OS packages (these libraries are used for runtime query compilation). #3582

ClickHouse Release 18.14.17, 2018-11-30

Bug Fixes:

  • Fixed cases when the ODBC bridge process did not terminate with the main server process. #3642
  • Fixed synchronous insertion into the Distributed table with a columns list that differs from the column list of the remote table. #3673
  • Fixed a rare race condition that can lead to a crash when dropping a MergeTree table. #3643
  • Fixed a query deadlock in case when query thread creation fails with the Resource temporarily unavailable error. #3643
  • Fixed parsing of the ENGINE clause when the CREATE AS table syntax was used and the ENGINE clause was specified before the AS table (the error resulted in ignoring the specified engine). #3692

ClickHouse Release 18.14.15, 2018-11-21

Bug Fixes:

  • The size of a memory chunk was overestimated while deserializing a column of type Array(String), which led to “Memory limit exceeded” errors. The issue appeared in version 18.12.13. #3589

ClickHouse Release 18.14.14, 2018-11-20

Bug Fixes:

  • Fixed ON CLUSTER queries when the cluster is configured as secure (the <secure> flag). #3599

Build Changes:

  • Fixed build problems with llvm-7 from the system packages and on macOS. #3582

ClickHouse Release 18.14.13, 2018-11-08

Bug Fixes:

  • Fixed the Block structure mismatch in MergingSorted stream error. #3162
  • Fixed ON CLUSTER queries in case when secure connections were turned on in the cluster config (the <secure> flag). #3465
  • Fixed an error in queries that used SAMPLE, PREWHERE and alias columns. #3543
  • Fixed a rare unknown compression method error when the min_bytes_to_use_direct_io setting was enabled. #3544

Performance Improvements:

  • Fixed performance regression of queries with GROUP BY of columns of UInt16 or Date type when executing on AMD EPYC processors. Igor Lapko
  • Fixed performance regression of queries that process long strings. #3530

Build Improvements:

  • Improvements for simplifying the Arcadia build. #3475, #3535

ClickHouse Release 18.14.12, 2018-11-02

Bug Fixes:

  • Fixed a crash on joining two unnamed subqueries. #3505
  • Fixed generating incorrect queries (with an empty WHERE clause) when querying external databases. hotid
  • Fixed using an incorrect timeout value in ODBC dictionaries. Marek Vavruša

ClickHouse Release 18.14.11, 2018-10-29

Bug Fixes:

  • Fixed the error Block structure mismatch in UNION stream: different number of columns in LIMIT queries. #2156
  • Fixed errors when merging data in tables containing arrays inside Nested structures. #3397
  • Fixed incorrect query results if the merge_tree_uniform_read_distribution setting is disabled (it is enabled by default). #3429
  • Fixed an error on inserts to a Distributed table in Native format. #3411

ClickHouse Release 18.14.10, 2018-10-23

  • The compile_expressions setting (JIT compilation of expressions) is disabled by default. #3410
  • The enable_optimize_predicate_expression setting is disabled by default.

ClickHouse Release 18.14.9, 2018-10-16

New Features:

  • The WITH CUBE modifier for GROUP BY (the alternative syntax GROUP BY CUBE(...) is also available); see the sketch after this list. #3172
  • Added the formatDateTime function. Alexandr Krasheninnikov
  • Added the JDBC table engine and jdbc table function (requires installing clickhouse-jdbc-bridge). Alexandr Krasheninnikov
  • Added functions for working with the ISO week number: toISOWeek, toISOYear, toStartOfISOYear, and toDayOfYear. #3146
  • Now you can use Nullable columns for MySQL and ODBC tables. #3362
  • Nested data structures can be read as nested objects in JSONEachRow format. Added the input_format_import_nested_json setting. Veloman Yunkan
  • Parallel processing is available for many MATERIALIZED VIEWs when inserting data. See the parallel_view_processing setting. Marek Vavruša
  • Added the SYSTEM FLUSH LOGS query (forced log flushes to system tables such as query_log) #3321
  • Now you can use pre-defined database and table macros when declaring Replicated tables. #3251
  • Added the ability to read Decimal type values in engineering notation (indicating powers of ten). #3153
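
A minimal sketch of the WITH CUBE modifier; the sales table and its columns are hypothetical:

```sql
-- Aggregates for every combination of the grouping keys, including totals.
SELECT region, product, sum(amount) AS total
FROM sales
GROUP BY region, product WITH CUBE;

-- Equivalent alternative syntax:
-- SELECT region, product, sum(amount) AS total FROM sales GROUP BY CUBE(region, product);
```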

Experimental Features:

  • Optimization of the GROUP BY clause for LowCardinality data types. #3138
  • Optimized calculation of expressions for LowCardinality data types. #3200

Improvements:

  • Significantly reduced memory consumption for queries with ORDER BY and LIMIT. See the max_bytes_before_remerge_sort setting. #3205
  • In the absence of JOIN (LEFT, INNER, …), INNER JOIN is assumed (see the example after this list). #3147
  • Qualified asterisks work correctly in queries with JOIN. Winter Zhang
  • The ODBC table engine correctly chooses the method for quoting identifiers in the SQL dialect of a remote database. Alexandr Krasheninnikov
  • The compile_expressions setting (JIT compilation of expressions) is enabled by default.
  • Fixed behavior for simultaneous DROP DATABASE/TABLE IF EXISTS and CREATE DATABASE/TABLE IF NOT EXISTS. Previously, a CREATE DATABASE ... IF NOT EXISTS query could return the error message “File … already exists”, and the CREATE TABLE ... IF NOT EXISTS and DROP TABLE IF EXISTS queries could return Table ... is creating or attaching right now. #3101
  • LIKE and IN expressions with a constant right half are passed to the remote server when querying from MySQL or ODBC tables. #3182
  • Comparisons with constant expressions in a WHERE clause are passed to the remote server when querying from MySQL and ODBC tables. Previously, only comparisons with constants were passed. #3182
  • Correct calculation of row width in the terminal for Pretty formats, including strings with hieroglyphs. Amos Bird.
  • ON CLUSTER can be specified for ALTER UPDATE queries.
  • Improved performance for reading data in JSONEachRow format. #3332
  • Added synonyms for the LENGTH and CHARACTER_LENGTH functions for compatibility. The CONCAT function is no longer case-sensitive. #3306
  • Added the TIMESTAMP synonym for the DateTime type. #3390
  • There is always space reserved for query_id in the server logs, even if the log line is not related to a query. This makes it easier to parse server text logs with third-party tools.
  • Memory consumption by a query is logged when it exceeds the next level of an integer number of gigabytes. #3205
  • Added compatibility mode for the case when the client library that uses the Native protocol sends fewer columns by mistake than the server expects for the INSERT query. This scenario was possible when using the clickhouse-cpp library. Previously, this scenario caused the server to crash. #3171
  • In a user-defined WHERE expression in clickhouse-copier, you can now use a partition_key alias (for additional filtering by source table partition). This is useful if the partitioning scheme changes during copying, but only changes slightly. #3166
  • The workflow of the Kafka engine has been moved to a background thread pool in order to automatically reduce the speed of data reading at high loads. Marek Vavruša.
  • Support for reading Tuple and Nested values of structures like struct in the Cap'n'Proto format. Marek Vavruša
  • The list of top-level domains for the firstSignificantSubdomain function now includes the domain biz. decaseal
  • In the configuration of external dictionaries, null_value is interpreted as the value of the default data type. #3330
  • Support for the intDiv and intDivOrZero functions for Decimal. b48402e8
  • Support for the Date, DateTime, UUID, and Decimal types as a key for the sumMap aggregate function. #3281
  • Support for the Decimal data type in external dictionaries. #3324
  • Support for the Decimal data type in SummingMergeTree tables. #3348
  • Added specializations for UUID in if. #3366
  • Reduced the number of open and close system calls when reading from a MergeTree table. #3283
  • A TRUNCATE TABLE query can be executed on any replica (the query is passed to the leader replica). Kirill Shvakov
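
A sketch of the implicit join type mentioned above, assuming hypothetical users and orders tables (the ALL strictness is written out explicitly; see the join_default_strictness notes elsewhere in this changelog):

```sql
-- The join type is omitted, so this is interpreted as an INNER JOIN.
SELECT id, amount
FROM users
ALL JOIN orders USING (id);
```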

Bug Fixes:

  • Fixed an issue with Dictionary tables for range_hashed dictionaries. This error occurred in version 18.12.17. #1702
  • Fixed an error when loading range_hashed dictionaries (the message Unsupported type Nullable (...)). This error occurred in version 18.12.17. #3362
  • Fixed errors in the pointInPolygon function due to the accumulation of inaccurate calculations for polygons with a large number of vertices located close to each other. #3331 #3341
  • If after merging data parts, the checksum for the resulting part differs from the result of the same merge in another replica, the result of the merge is deleted and the data part is downloaded from the other replica (this is the correct behavior). But after downloading the data part, it couldn't be added to the working set because of an error that the part already exists (because the data part was deleted with some delay after the merge). This led to cyclical attempts to download the same data. #3194
  • Fixed incorrect calculation of total memory consumption by queries (because of incorrect calculation, the max_memory_usage_for_all_queries setting worked incorrectly and the MemoryTracking metric had an incorrect value). This error occurred in version 18.12.13. Marek Vavruša
  • Fixed the functionality of CREATE TABLE ... ON CLUSTER ... AS SELECT ... This error occurred in version 18.12.13. #3247
  • Fixed unnecessary preparation of data structures for JOINs on the server that initiates the query if the JOIN is only performed on remote servers. #3340
  • Fixed bugs in the Kafka engine: deadlocks after exceptions when starting to read data, and locks upon completion Marek Vavruša.
  • For Kafka tables, the optional schema parameter was not passed (the schema of the Cap'n'Proto format). Vojtech Splichal
  • If the ensemble of ZooKeeper servers has servers that accept the connection but then immediately close it instead of responding to the handshake, ClickHouse chooses to connect to another server. Previously, this produced the error Cannot read all data. Bytes read: 0. Bytes expected: 4. and the server couldn't start. 8218cf3a
  • If the ensemble of ZooKeeper servers contains servers for which the DNS query returns an error, these servers are ignored. 17b8e209
  • Fixed type conversion between Date and DateTime when inserting data in the VALUES format (if input_format_values_interpret_expressions = 1). Previously, the conversion was performed between the numerical value of the number of days in Unix Epoch time and the Unix timestamp, which led to unexpected results. #3229
  • Corrected type conversion between Decimal and integer numbers. #3211
  • Fixed errors in the enable_optimize_predicate_expression setting. Winter Zhang
  • Fixed a parsing error in CSV format with floating-point numbers if a non-default CSV separator is used, such as ';'. #3155
  • Fixed the arrayCumSumNonNegative function (it does not accumulate negative values if the accumulator is less than zero). Aleksey Studnev
  • Fixed how Merge tables work on top of Distributed tables when using PREWHERE. #3165
  • Bug fixes in the ALTER UPDATE query.
  • Fixed bugs in the odbc table function that appeared in version 18.12. #3197
  • Fixed the operation of aggregate functions with StateArray combinators. #3188
  • Fixed a crash when dividing a Decimal value by zero. 69dd6609
  • Fixed output of types for operations using Decimal and integer arguments. #3224
  • Fixed the segfault during GROUP BY on Decimal128. 3359ba06
  • The log_query_threads setting (logging information about each thread of query execution) now takes effect only if the log_queries option (logging information about queries) is set to 1. Since the log_query_threads option is enabled by default, information about threads was previously logged even if query logging was disabled. #3241
  • Fixed an error in the distributed operation of the quantiles aggregate function (the error message Not found column quantile...). 292a8855
  • Fixed the compatibility problem when working on a cluster of version 18.12.17 servers and older servers at the same time. For distributed queries with GROUP BY keys of both fixed and non-fixed length, if there was a large amount of data to aggregate, the returned data was not always fully aggregated (two different rows contained the same aggregation keys). #3254
  • Fixed handling of substitutions in clickhouse-performance-test, if the query contains only part of the substitutions declared in the test. #3263
  • Fixed an error when using FINAL with PREWHERE. #3298
  • Fixed an error when using PREWHERE over columns that were added during ALTER. #3298
  • Added a check for the absence of arrayJoin for DEFAULT and MATERIALIZED expressions. Previously, arrayJoin led to an error when inserting data. #3337
  • Added a check for the absence of arrayJoin in a PREWHERE clause. Previously, this led to messages like Size ... does not match or Unknown compression method when executing queries. #3357
  • Fixed segfault that could occur in rare cases after optimization that replaced AND chains from equality evaluations with the corresponding IN expression. liuyimin-bytedance
  • Minor corrections to clickhouse-benchmark: previously, client information was not sent to the server; now the number of queries executed is calculated more accurately when shutting down and for limiting the number of iterations. #3351 #3352

Backward Incompatible Changes:

  • Removed the allow_experimental_decimal_type option. The Decimal data type is available by default. #3329

ClickHouse Release 18.12

ClickHouse Release 18.12.17, 2018-09-16

New Features:

  • invalidate_query (the ability to specify a query to check whether an external dictionary needs to be updated) is implemented for the clickhouse source. #3126
  • Added the ability to use UInt*, Int*, and DateTime data types (along with the Date type) as a range_hashed external dictionary key that defines the boundaries of ranges. Now NULL can be used to designate an open range. Vasily Nemkov
  • The Decimal type now supports var* and stddev* aggregate functions. #3129
  • The Decimal type now supports mathematical functions (exp, sin, and so on); see the sketch after this list. #3129
  • The system.part_log table now has the partition_id column. #3089
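
A minimal sketch of Decimal support in aggregate and mathematical functions; at this release the type is still gated by the allow_experimental_decimal_type setting described further down in this changelog:

```sql
SET allow_experimental_decimal_type = 1;

SELECT
    varSamp(toDecimal64(number, 2))   AS dec_var,
    stddevPop(toDecimal64(number, 2)) AS dec_stddev,
    exp(toDecimal32('1.5', 2))        AS exp_of_decimal
FROM numbers(10);
```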

Bug Fixes:

  • Merge now works correctly on Distributed tables. Winter Zhang
  • Fixed incompatibility (unnecessary dependency on the glibc version) that made it impossible to run ClickHouse on Ubuntu Precise and older versions. The incompatibility arose in version 18.12.13. #3130
  • Fixed errors in the enable_optimize_predicate_expression setting. Winter Zhang
  • Fixed a minor issue with backwards compatibility that appeared when working with a cluster of replicas on versions earlier than 18.12.13 and simultaneously creating a new replica of a table on a server with a newer version (shown in the message Can not clone replica, because the ... updated to new ClickHouse version, which is logical, but shouldn't happen). #3122

Backward Incompatible Changes:

  • The enable_optimize_predicate_expression option is enabled by default (which is rather optimistic). If query analysis errors occur that are related to searching for the column names, set enable_optimize_predicate_expression to 0. Winter Zhang

ClickHouse Release 18.12.14, 2018-09-13

New Features:

  • Added support for ALTER UPDATE queries (see the sketch after this list). #3035
  • Added the allow_ddl option, which restricts the user's access to DDL queries. #3104
  • Added the min_merge_bytes_to_use_direct_io option for MergeTree engines, which allows you to set a threshold for the total size of the merge (when above the threshold, data part files will be handled using O_DIRECT). #3117
  • The system.merges system table now contains the partition_id column. #3099
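
A minimal sketch of an ALTER UPDATE mutation; the visits table and its columns are hypothetical:

```sql
-- Rewrites the column value in the affected data parts as a background mutation.
ALTER TABLE visits
    UPDATE is_bot = 1
    WHERE user_agent LIKE '%crawler%';
```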

Improvements

  • If a data part remains unchanged during mutation, it isn't downloaded by replicas. #3103
  • Autocomplete is available for names of settings when working with clickhouse-client. #3106

Bug Fixes:

  • Added a check for the sizes of arrays that are elements of Nested type fields when inserting. #3118
  • Fixed an error updating external dictionaries with the ODBC source and hashed storage. This error occurred in version 18.12.13.
  • Fixed a crash when creating a temporary table from a query with an IN condition. Winter Zhang
  • Fixed an error in aggregate functions for arrays that can have NULL elements. Winter Zhang

ClickHouse Release 18.12.13, 2018-09-10

New Features:

  • Added the DECIMAL(digits, scale) data type (Decimal32(scale), Decimal64(scale), Decimal128(scale)). To enable it, use the setting allow_experimental_decimal_type. #2846 #2970 #3008 #3047
  • New WITH ROLLUP modifier for GROUP BY (alternative syntax: GROUP BY ROLLUP(...)); see the sketch after this list. #2948
  • In queries with JOIN, the star character expands to a list of columns in all tables, in compliance with the SQL standard. You can restore the old behavior by setting asterisk_left_columns_only to 1 on the user configuration level. Winter Zhang
  • Added support for JOIN with table functions. Winter Zhang
  • Autocomplete by pressing Tab in clickhouse-client. Sergey Shcherbin
  • Ctrl+C in clickhouse-client clears a query that was entered. #2877
  • Added the join_default_strictness setting (values: '', 'any', 'all'). This allows you to not specify ANY or ALL for JOIN. #2982
  • Each line of the server log related to query processing shows the query ID. #2482
  • Now you can get query execution logs in clickhouse-client (use the send_logs_level setting). With distributed query processing, logs are cascaded from all the servers. #2482
  • The system.query_log and system.processes (SHOW PROCESSLIST) tables now have information about all changed settings when you run a query (the nested structure of the Settings data). Added the log_query_settings setting. #2482
  • The system.query_log and system.processes tables now show information about the number of threads that are participating in query execution (see the thread_numbers column). #2482
  • Added ProfileEvents counters that measure the time spent on reading and writing over the network and reading and writing to disk, the number of network errors, and the time spent waiting when network bandwidth is limited. #2482
  • Added ProfileEvents counters that contain the system metrics from rusage (you can use them to get information about CPU usage in userspace and the kernel, page faults, and context switches), as well as taskstats metrics (use these to obtain information about I/O wait time, CPU wait time, and the amount of data read and recorded, both with and without page cache). #2482
  • The ProfileEvents counters are applied globally and for each query, as well as for each query execution thread, which allows you to profile resource consumption by query in detail. #2482
  • Added the system.query_thread_log table, which contains information about each query execution thread. Added the log_query_threads setting. #2482
  • The system.metrics and system.events tables now have built-in documentation. #3016
  • Added the arrayEnumerateDense function. Amos Bird
  • Added the arrayCumSumNonNegative and arrayDifference functions. Aleksey Studnev
  • Added the retention aggregate function. Sundy Li
  • Now you can add (merge) states of aggregate functions by using the plus operator, and multiply the states of aggregate functions by a nonnegative constant. #3062 #3034
  • Tables in the MergeTree family now have the virtual column _partition_id. #3089
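
A minimal sketch of the WITH ROLLUP modifier, assuming a hypothetical sales table:

```sql
-- Subtotals for each prefix of the grouping keys, plus a grand-total row.
SELECT year, month, sum(amount) AS total
FROM sales
GROUP BY year, month WITH ROLLUP;

-- Equivalent alternative syntax:
-- SELECT year, month, sum(amount) AS total FROM sales GROUP BY ROLLUP(year, month);
```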

Experimental Features:

  • Added the LowCardinality(T) data type. This data type automatically creates a local dictionary of values and allows data processing without unpacking the dictionary. #2830
  • Added a cache of JIT-compiled functions and a counter for the number of uses before compiling. To JIT compile expressions, enable the compile_expressions setting. #2990 #3077

Improvements:

  • Fixed the problem with unlimited accumulation of the replication log when there are abandoned replicas. Added an effective recovery mode for replicas with a long lag.
  • Improved performance of GROUP BY with multiple aggregation fields when one of them is string and the others are fixed length.
  • Improved performance when using PREWHERE and with implicit transfer of expressions in PREWHERE.
  • Improved parsing performance for text formats (CSV, TSV). Amos Bird #2980
  • Improved performance of reading strings and arrays in binary formats. Amos Bird
  • Increased performance and reduced memory consumption for queries to system.tables and system.columns when there is a very large number of tables on a single server. #2953
  • Fixed a performance problem in the case of a large stream of queries that result in an error (the _dl_addr function is visible in perf top, but the server isn't using much CPU). #2938
  • Conditions are pushed down into the View (when enable_optimize_predicate_expression is enabled). Winter Zhang
  • Improvements to the functionality for the UUID data type. #3074 #2985
  • The UUID data type is supported in external dictionaries (The-Alchemist). #2822
  • The visitParamExtractRaw function works correctly with nested structures. Winter Zhang
  • When the input_format_skip_unknown_fields setting is enabled, object fields in JSONEachRow format are skipped correctly. BlahGeek
  • For a CASE expression with conditions, you can now omit ELSE, which is equivalent to ELSE NULL. #2920
  • The operation timeout can now be configured when working with ZooKeeper. urykhy
  • You can specify an offset for LIMIT n, m as LIMIT n OFFSET m. #2840
  • You can use the SELECT TOP n syntax as an alternative for LIMIT (see the example after this list). #2840
  • Increased the size of the queue to write to system tables, so the SystemLog parameter queue is full error does not happen as often.
  • The windowFunnel aggregate function now supports events that meet multiple conditions. Amos Bird
  • Duplicate columns can be used in a USING clause for JOIN. #3006
  • Pretty formats now have a limit on column alignment by width. Use the output_format_pretty_max_column_pad_width setting. If a value is wider, it will still be displayed in its entirety, but the other cells in the table will not be too wide. #3003
  • The odbc table function now allows you to specify the database/schema name. Amos Bird
  • Added the ability to use a username specified in the clickhouse-client config file. Vladimir Kozbin
  • The ZooKeeperExceptions counter has been split into three counters: ZooKeeperUserExceptions, ZooKeeperHardwareExceptions, and ZooKeeperOtherExceptions.
  • ALTER DELETE queries work for materialized views.
  • Added randomization when running the cleanup thread periodically for ReplicatedMergeTree tables in order to avoid periodic load spikes when there are a very large number of ReplicatedMergeTree tables.
  • Support for ATTACH TABLE ... ON CLUSTER queries. #3025
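
For illustration, the offset and TOP forms mentioned above, using the built-in system.numbers table:

```sql
-- Skip the first 5 rows and return the next 10 (the two forms are equivalent).
SELECT number FROM system.numbers LIMIT 5, 10;
SELECT number FROM system.numbers LIMIT 10 OFFSET 5;

-- SELECT TOP n as an alternative to a plain LIMIT n.
SELECT TOP 10 number FROM system.numbers;
```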

Bug Fixes:

  • Fixed an issue with Dictionary tables (throws the Size of offsets does not match size of column or Unknown compression method exception). This bug appeared in version 18.10.3. #2913
  • Fixed a bug when merging CollapsingMergeTree tables if one of the data parts is empty (these parts are formed during merge or ALTER DELETE if all data was deleted), and the vertical algorithm was used for the merge. #3049
  • Fixed a race condition during DROP or TRUNCATE for Memory tables with a simultaneous SELECT, which could lead to server crashes. This bug appeared in version 1.1.54388. #3038
  • Fixed the possibility of data loss when inserting in Replicated tables if the Session is expired error is returned (data loss can be detected by the ReplicatedDataLoss metric). This error occurred in version 1.1.54378. #2939 #2949 #2964
  • Fixed a segfault during JOIN ... ON. #3000
  • Fixed the error searching column names when the WHERE expression consists entirely of a qualified column name, such as WHERE table.column. #2994
  • Fixed the “Not found column” error that occurred when executing distributed queries if a single column consisting of an IN expression with a subquery is requested from a remote server. #3087
  • Fixed the Block structure mismatch in UNION stream: different number of columns error that occurred for distributed queries if one of the shards is local and the other is not, and optimization of the move to PREWHERE is triggered. #2226 #3037 #3055 #3065 #3073 #3090 #3093
  • Fixed the pointInPolygon function for certain cases of non-convex polygons. #2910
  • Fixed the incorrect result when comparing nan with integers. #3024
  • Fixed an error in the zlib-ng library that could lead to segfault in rare cases. #2854
  • Fixed a memory leak when inserting into a table with AggregateFunction columns, if the state of the aggregate function is not simple (allocates memory separately), and if a single insertion request results in multiple small blocks. #3084
  • Fixed a race condition when creating and deleting the same Buffer or MergeTree table simultaneously.
  • Fixed the possibility of a segfault when comparing tuples made up of certain non-trivial types, such as tuples. #2989
  • Fixed the possibility of a segfault when running certain ON CLUSTER queries. Winter Zhang
  • Fixed an error in the arrayDistinct function for Nullable array elements. #2845 #2937
  • The enable_optimize_predicate_expression option now correctly supports cases with SELECT *. Winter Zhang
  • Fixed the segfault when re-initializing the ZooKeeper session. #2917
  • Fixed potential blocking when working with ZooKeeper.
  • Fixed incorrect code for adding nested data structures in a SummingMergeTree.
  • When allocating memory for states of aggregate functions, alignment is correctly taken into account, which makes it possible to use operations that require alignment when implementing states of aggregate functions. chenxing-xc

Security Fix:

  • Safe use of ODBC data sources. Interaction with ODBC drivers uses a separate clickhouse-odbc-bridge process. Errors in third-party ODBC drivers no longer cause problems with server stability or vulnerabilities. #2828 #2879 #2886 #2893 #2921
  • Fixed incorrect validation of the file path in the catBoostPool table function. #2894
  • The contents of system tables (tables, databases, parts, columns, parts_columns, merges, mutations, replicas, and replication_queue) are filtered according to the user's configured access to databases (allow_databases). Winter Zhang

Backward Incompatible Changes:

  • In queries with JOIN, the star character expands to a list of columns in all tables, in compliance with the SQL standard. You can restore the old behavior by setting asterisk_left_columns_only to 1 on the user configuration level.

Build Changes:

  • Most integration tests can now be run by commit.
  • Code style checks can also be run by commit.
  • The memcpy implementation is chosen correctly when building on CentOS7/Fedora. Etienne Champetier
  • When using clang to build, some warnings from -Weverything have been added, in addition to the regular -Wall -Wextra -Werror. #2957
  • Debug builds use the jemalloc debug option.
  • The interface of the library for interacting with ZooKeeper is declared abstract. #2950

ClickHouse Release 18.10

ClickHouse Release 18.10.3, 2018-08-13

New Features:

  • HTTPS can be used for replication. #2760
  • Added the functions murmurHash2_64, murmurHash3_32, murmurHash3_64, and murmurHash3_128 in addition to the existing murmurHash2_32. #2791
  • Support for Nullable types in the ClickHouse ODBC driver (ODBCDriver2 output format). #2834
  • Support for UUID in the key columns.

Improvements:

  • Clusters can be removed without restarting the server when they are deleted from the config files. #2777
  • External dictionaries can be removed without restarting the server when they are removed from config files. #2779
  • Added SETTINGS support for the Kafka table engine. Alexander Marshalov
  • Improvements for the UUID data type (not yet complete). #2618
  • Support for empty parts after merges in the SummingMergeTree, CollapsingMergeTree and VersionedCollapsingMergeTree engines. #2815
  • Old records of completed mutations are deleted (ALTER DELETE). #2784
  • Added the system.merge_tree_settings table. Kirill Shvakov
  • The system.tables table now has dependency columns: dependencies_database and dependencies_table. Winter Zhang
  • Added the max_partition_size_to_drop config option. #2782
  • Added the output_format_json_escape_forward_slashes option. Alexander Bocharov
  • Added the max_fetch_partition_retries_count setting. #2831
  • Added the prefer_localhost_replica setting for disabling the preference for a local replica and going to a local replica without inter-process interaction. #2832
  • The quantileExact aggregate function returns nan in the case of aggregation on an empty Float32 or Float64 set. Sundy Li

Bug Fixes:

  • Removed unnecessary escaping of the connection string parameters for ODBC, which made it impossible to establish a connection. This error occurred in version 18.6.0.
  • Fixed the logic for processing REPLACE PARTITION commands in the replication queue. If there are two REPLACE commands for the same partition, the incorrect logic could cause one of them to remain in the replication queue and not be executed. #2814
  • Fixed a merge bug when all data parts were empty (parts that were formed from a merge or from ALTER DELETE if all data was deleted). This bug appeared in version 18.1.0. #2930
  • Fixed an error for concurrent Set or Join. Amos Bird
  • Fixed the Block structure mismatch in UNION stream: different number of columns error that occurred for UNION ALL queries inside a sub-query if one of the SELECT queries contains duplicate column names. Winter Zhang
  • Fixed a memory leak if an exception occurred when connecting to a MySQL server.
  • Fixed incorrect clickhouse-client response code in case of a query error.
  • Fixed incorrect behavior of materialized views containing DISTINCT. #2795

Backward Incompatible Changes

  • Removed support for CHECK TABLE queries for Distributed tables.

Build Changes:

  • The allocator has been replaced: jemalloc is now used instead of tcmalloc. In some scenarios, this increases speed up to 20%. However, there are queries that have slowed by up to 20%. Memory consumption has been reduced by approximately 10% in some scenarios, with improved stability. With highly concurrent loads, CPU usage in userspace and in system shows just a slight increase. #2773
  • Use of libressl from a submodule. #1983 #2807
  • Use of unixodbc from a submodule. #2789
  • Use of mariadb-connector-c from a submodule. #2785
  • Added functional test files to the repository that depend on the availability of test data (for the time being, without the test data itself).

ClickHouse Release 18.6

ClickHouse Release 18.6.0, 2018-08-02

New Features:

  • Added support for ON expressions for the JOIN ON syntax: JOIN ON Expr([table.]column ...) = Expr([table.]column, ...) [AND Expr([table.]column, ...) = Expr([table.]column, ...) ...]. The expression must be a chain of equalities joined by the AND operator. Each side of the equality can be an arbitrary expression over the columns of one of the tables. The use of fully qualified column names is supported (table.name, database.table.name, table_alias.name, subquery_alias.name) for the right table (see the sketch after this list). #2742
  • HTTPS can be enabled for replication. #2760
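
A minimal sketch of the JOIN ON equality-chain syntax; left_table and right_table are hypothetical:

```sql
-- A chain of equalities joined by AND; qualified names may be used for the right table.
SELECT l.id, r.total
FROM left_table AS l
ALL INNER JOIN right_table AS r
    ON l.id = r.id AND l.region = r.region;
```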

Improvements:

  • The server passes the patch component of its version to the client. Data about the patch version component is in system.processes and query_log. #2646

ClickHouse Release 18.5

ClickHouse Release 18.5.1, 2018-07-31

New Features:

  • Added the hash function murmurHash2_32 #2756.

Improvements:

  • Now you can use the from_env attribute to set values in config files from environment variables. #2741
  • Added case-insensitive versions of the coalesce, ifNull, and nullIf functions #2752.

Bug Fixes:

  • Fixed a possible bug when starting a replica #2759.

ClickHouse Release 18.4

ClickHouse Release 18.4.0, 2018-07-28

New Features:

  • Added system tables: formats, data_type_families, aggregate_function_combinators, table_functions, table_engines, collations #2721.
  • Added the ability to use a table function instead of a table as an argument of a remote or cluster table function #2708.
  • Support for HTTP Basic authentication in the replication protocol #2727.
  • The has function now allows searching for a numeric value in an array of Enum values Maxim Khrisanfov.
  • Support for adding arbitrary message separators when reading from Kafka Amos Bird.

Improvements:

  • The ALTER TABLE t DELETE WHERE query does not rewrite data parts that were not affected by the WHERE condition #2694.
  • The use_minimalistic_checksums_in_zookeeper option for ReplicatedMergeTree tables is enabled by default. This setting was added in version 1.1.54378, 2018-04-16. Versions that are older than 1.1.54378 can no longer be installed.
  • Support for running KILL and OPTIMIZE queries that specify ON CLUSTER Winter Zhang.

Bug Fixes:

  • Fixed the error Column ... is not under an aggregate function and not in GROUP BY for aggregation with an IN expression. This bug appeared in version 18.1.0. (bbdd780b)
  • Fixed a bug in the windowFunnel aggregate function Winter Zhang.
  • Fixed a bug in the anyHeavy aggregate function (a2101df2)
  • Fixed server crash when using the countArray() aggregate function.

Backward Incompatible Changes:

  • Parameters for the Kafka engine were changed from Kafka(kafka_broker_list, kafka_topic_list, kafka_group_name, kafka_format[, kafka_schema, kafka_num_consumers]) to Kafka(kafka_broker_list, kafka_topic_list, kafka_group_name, kafka_format[, kafka_row_delimiter, kafka_schema, kafka_num_consumers]). If your tables use the kafka_schema or kafka_num_consumers parameters, you have to manually edit the metadata files path/metadata/database/table.sql and add the kafka_row_delimiter parameter with the value ''.

ClickHouse Release 18.1

ClickHouse Release 18.1.0, 2018-07-23

New Features:

  • Support for the ALTER TABLE t DELETE WHERE query for non-replicated MergeTree tables (see the sketch after this list) (#2634).
  • Support for arbitrary types for the uniq* family of aggregate functions (#2010).
  • Support for arbitrary types in comparison operators (#2026).
  • The users.xml file allows setting a subnet mask in the format 10.0.0.1/255.255.255.0. This is necessary for using masks for IPv6 networks with zeros in the middle (#2637).
  • Added the arrayDistinct function (#2670).
  • The SummingMergeTree engine can now work with AggregateFunction type columns (Constantin S. Pan).
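
A minimal sketch of the DELETE mutation and the arrayDistinct function; the hits table and its columns are hypothetical:

```sql
-- Background mutation that removes matching rows from a non-replicated MergeTree table.
ALTER TABLE hits DELETE WHERE EventDate < '2018-01-01';

-- Returns only the distinct elements of the array.
SELECT arrayDistinct([1, 1, 2, 3, 2]);
```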

Improvements:

  • Changed the numbering scheme for release versions. Now the first part contains the year of release (A.D., Moscow timezone, minus 2000), the second part contains the number for major changes (increases for most releases), and the third part is the patch version. Releases are still backward compatible, unless otherwise stated in the changelog.
  • Faster conversions of floating-point numbers to a string (Amos Bird).
  • If some rows were skipped during an insert due to parsing errors (this is possible with the input_allow_errors_num and input_allow_errors_ratio settings enabled), the number of skipped rows is now written to the server log (Leonardo Cecchi).

Bug Fixes:

  • Fixed the TRUNCATE command for temporary tables (Amos Bird).
  • Fixed a rare deadlock in the ZooKeeper client library that occurred when there was a network error while reading the response (c315200).
  • Fixed an error during a CAST to Nullable types (#1322).
  • Fixed the incorrect result of the maxIntersection() function when the boundaries of intervals coincided (Michael Furmur).
  • Fixed incorrect transformation of the OR expression chain in a function argument (chenxing-xc).
  • Fixed performance degradation for queries containing IN (subquery) expressions inside another subquery (#2571).
  • Fixed incompatibility between servers with different versions in distributed queries that use a CAST function that isn't in uppercase letters (fe8c4d6).
  • Added missing quoting of identifiers for queries to an external DBMS (#2635).

Backward Incompatible Changes:

  • Converting a string containing the number zero to DateTime does not work. Example: SELECT toDateTime('0'). This is also the reason that DateTime DEFAULT '0' does not work in tables, as well as <null_value>0</null_value> in dictionaries. Solution: replace 0 with 0000-00-00 00:00:00.

ClickHouse Release 1.1

ClickHouse Release 1.1.54394, 2018-07-12

New Features:

  • Added the histogram aggregate function (Mikhail Surin).
  • Now OPTIMIZE TABLE ... FINAL can be used without specifying partitions for ReplicatedMergeTree (Amos Bird).

Bug Fixes:

  • Fixed a problem with a very small timeout for sockets (one second) for reading and writing when sending and downloading replicated data, which made it impossible to download larger parts if there is a load on the network or disk (it resulted in cyclical attempts to download parts). This error occurred in version 1.1.54388.
  • Fixed issues when using chroot in ZooKeeper if you inserted duplicate data blocks in the table.
  • The has function now works correctly for an array with Nullable elements (#2115).
  • The system.tables table now works correctly when used in distributed queries. The metadata_modification_time and engine_full columns are now non-virtual. Fixed an error that occurred if only these columns were queried from the table.
  • Fixed how an empty TinyLog table works after inserting an empty data block (#2563).
  • The system.zookeeper table works if the value of the node in ZooKeeper is NULL.

ClickHouse Release 1.1.54390, 2018-07-06

New Features:

  • Queries can be sent in multipart/form-data format (in the query field), which is useful if external data is also sent for query processing (Olga Hvostikova).
  • Added the ability to enable or disable processing single or double quotes when reading data in CSV format. You can configure this in the format_csv_allow_single_quotes and format_csv_allow_double_quotes settings (Amos Bird).
  • Now OPTIMIZE TABLE ... FINAL can be used without specifying the partition for non-replicated variants of MergeTree (Amos Bird).

Improvements:

  • Improved performance, reduced memory consumption, and correct memory consumption tracking with use of the IN operator when a table index could be used (#2584).
  • Removed redundant checking of checksums when adding a data part. This is important when there are a large number of replicas, because in these cases the total number of checks was equal to N^2.
  • Added support for Array(Tuple(...)) arguments for the arrayEnumerateUniq function (#2573).
  • Added Nullable support for the runningDifference function (#2594).
  • Improved query analysis performance when there is a very large number of expressions (#2572).
  • Faster selection of data parts for merging in ReplicatedMergeTree tables. Faster recovery of the ZooKeeper session (#2597).
  • The format_version.txt file for MergeTree tables is re-created if it is missing, which makes sense if ClickHouse is launched after copying the directory structure without files (Ciprian Hacman).

Bug Fixes:

  • Fixed a bug when working with ZooKeeper that could make it impossible to recover the session and readonly states of tables before restarting the server.
  • Fixed a bug when working with ZooKeeper that could result in old nodes not being deleted if the session is interrupted.
  • Fixed an error in the quantileTDigest function for Float arguments (this bug was introduced in version 1.1.54388) (Mikhail Surin).
  • Fixed a bug in the index for MergeTree tables if the primary key column is located inside the function for converting types between signed and unsigned integers of the same size (#2603).
  • Fixed segfault if macros are used but they aren't in the config file (#2570).
  • Fixed switching to the default database when reconnecting the client (#2583).
  • Fixed a bug that occurred when the use_index_for_in_with_subqueries setting was disabled.

Security Fix:

  • Sending files is no longer possible when connected to MySQL (LOAD DATA LOCAL INFILE).

ClickHouse Release 1.1.54388, 2018-06-28

New Features:

  • Support for the ALTER TABLE t DELETE WHERE query for replicated tables. Added the system.mutations table to track progress of this type of queries.
  • Support for the ALTER TABLE t [REPLACE|ATTACH] PARTITION query for *MergeTree tables.
  • Support for the TRUNCATE TABLE query (Winter Zhang)
  • Several new SYSTEM queries for replicated tables (RESTART REPLICAS, SYNC REPLICA, [STOP|START] [MERGES|FETCHES|SENDS REPLICATED|REPLICATION QUEUES]).
  • Added the ability to write to a table with the MySQL engine and the corresponding table function (sundy-li).
  • Added the url() table function and the URL table engine (Alexander Sapin).
  • Added the windowFunnel aggregate function (sundy-li).
  • New startsWith and endsWith functions for strings (Vadim Plakhtinsky).
  • The numbers() table function now allows you to specify the offset (Winter Zhang); see the sketch after this list.
  • The password to clickhouse-client can be entered interactively.
  • Server logs can now be sent to syslog (Alexander Krasheninnikov).
  • Support for logging in dictionaries with a shared library source (Alexander Sapin).
  • Support for custom CSV delimiters (Ivan Zhukov)
  • Added the date_time_input_format setting. If you switch this setting to 'best_effort', DateTime values will be read in a wide range of formats.
  • Added the clickhouse-obfuscator utility for data obfuscation. Usage example: publishing data used in performance tests.
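
A short sketch of two of the additions above, using only built-in functions:

```sql
-- numbers() with an offset: returns the three rows 10, 11, 12.
SELECT number FROM numbers(10, 3);

-- New string predicates; both expressions return 1.
SELECT startsWith('ClickHouse', 'Click'), endsWith('ClickHouse', 'House');
```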

Experimental Features:

  • Added the ability to calculate the arguments of the and function only where they are needed (Anastasia Tsarkova)
  • JIT compilation to native code is now available for some expressions (pyos).

Bug Fixes:

  • Duplicates no longer appear for a query with DISTINCT and ORDER BY.
  • Queries with ARRAY JOIN and arrayFilter no longer return an incorrect result.
  • Fixed an error when reading an array column from a Nested structure (#2066).
  • Fixed an error when analyzing queries with a HAVING clause like HAVING tuple IN (...).
  • Fixed an error when analyzing queries with recursive aliases.
  • Fixed an error when reading from ReplacingMergeTree with a condition in PREWHERE that filters all rows (#2525).
  • User profile settings were not applied when using sessions in the HTTP interface.
  • Fixed how settings are applied from the command line parameters in clickhouse-local.
  • The ZooKeeper client library now uses the session timeout received from the server.
  • Fixed a bug in the ZooKeeper client library when the client waited for the server response longer than the timeout.
  • Fixed pruning of parts for queries with conditions on partition key columns (#2342).
  • Merges are now possible after CLEAR COLUMN IN PARTITION (#2315).
  • Type mapping in the ODBC table function has been fixed (sundy-li).
  • Type comparisons have been fixed for DateTime with and without the time zone (Alexander Bocharov).
  • Fixed syntactic parsing and formatting of the CAST operator.
  • Fixed insertion into a materialized view for the Distributed table engine (Babacar Diassé).
  • Fixed a race condition when writing data from the Kafka engine to materialized views (Yangkuan Liu).
  • Fixed SSRF in the remote() table function.
  • Fixed exit behavior of clickhouse-client in multiline mode (#2510).

Improvements:

  • Background tasks in replicated tables are now performed in a thread pool instead of in separate threads (Silviu Caragea).
  • Improved LZ4 compression performance.
  • Faster analysis for queries with a large number of JOINs and sub-queries.
  • The DNS cache is now updated automatically when there are too many network errors.
  • Table inserts no longer occur if the insert into one of the materialized views is not possible because it has too many parts.
  • Corrected the discrepancy in the event counters Query, SelectQuery, and InsertQuery.
  • Expressions like tuple IN (SELECT tuple) are allowed if the tuple types match.
  • A server with replicated tables can start even if you haven't configured ZooKeeper.
  • When calculating the number of available CPU cores, limits on cgroups are now taken into account (Atri Sharma).
  • Added chown for config directories in the systemd config file (Mikhail Shiryaev).

Build Changes:

  • The gcc8 compiler can be used for builds.
  • Added the ability to build llvm from a submodule.
  • The version of the librdkafka library has been updated to v0.11.4.
  • Added the ability to use the system libcpuid library. The library version has been updated to 0.4.0.
  • Fixed the build using the vectorclass library (Babacar Diassé).
  • CMake now generates files for ninja by default (as when using -G Ninja).
  • Added the ability to use the libtinfo library instead of libtermcap (Georgy Kondratiev).
  • Fixed a header file conflict in Fedora Rawhide (#2520).

Backward Incompatible Changes:

  • Removed escaping in Vertical and Pretty* formats and deleted the VerticalRaw format.
  • If servers with version 1.1.54388 (or newer) and servers with an older version are used simultaneously in a distributed query and the query has the cast(x, 'Type') expression without the AS keyword and does not have the word cast in uppercase, an exception will be thrown with a message like Not found column cast(0, 'UInt8') in block. Solution: Update the server on the entire cluster.
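
For reference, a small sketch of the spellings involved; per the entry above, only the lowercase functional form without AS is affected during a mixed-version rolling upgrade:

```sql
-- Lowercase functional form without AS: can fail in a mixed-version
-- distributed query with "Not found column cast(0, 'UInt8') in block".
SELECT cast(0, 'UInt8');

-- Forms described above as unaffected:
SELECT CAST(0, 'UInt8');    -- the word cast in uppercase
SELECT CAST(0 AS UInt8);    -- with the AS keyword
```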

ClickHouse Release 1.1.54385, 2018-06-01

Bug Fixes:

  • Fixed an error that in some cases caused ZooKeeper operations to block.

ClickHouse Release 1.1.54383, 2018-05-22

Bug Fixes:

  • Fixed a slowdown of replication queue if a table has many replicas.

ClickHouse Release 1.1.54381, 2018-05-14

Bug Fixes:

  • Fixed a leak of nodes in ZooKeeper when ClickHouse loses its connection to the ZooKeeper server.

ClickHouse Release 1.1.54380, 2018-04-21

New Features:

  • Added the table function file(path, format, structure). An example reading bytes from /dev/urandom: ln -s /dev/urandom /var/lib/clickhouse/user_files/random, then clickhouse-client -q "SELECT * FROM file('random', 'RowBinary', 'd UInt8') LIMIT 10".

Improvements:

  • Subqueries can be wrapped in () brackets to enhance query readability. For example: (SELECT 1) UNION ALL (SELECT 1).
  • Simple SELECT queries from the system.processes table are not included in the max_concurrent_queries limit.

Bug Fixes:

  • Fixed incorrect behavior of the IN operator when selecting from a MATERIALIZED VIEW.
  • Fixed incorrect filtering by partition index in expressions like partition_key_column IN (...).
  • Fixed the inability to execute an OPTIMIZE query on a non-leader replica if RENAME was performed on the table.
  • Fixed the authorization error when executing OPTIMIZE or ALTER queries on a non-leader replica.
  • Fixed freezing of KILL QUERY.
  • Fixed an error in ZooKeeper client library which led to loss of watches, freezing of distributed DDL queue, and slowdowns in the replication queue if a non-empty chroot prefix is used in the ZooKeeper configuration.

Backward Incompatible Changes:

  • Removed support for expressions like (a, b) IN (SELECT (a, b)) (you can use the equivalent expression (a, b) IN (SELECT a, b)). In previous releases, these expressions led to undetermined WHERE filtering or caused errors.

ClickHouse Release 1.1.54378, 2018-04-16

New Features:

  • Logging level can be changed without restarting the server.
  • Added the SHOW CREATE DATABASE query.
  • The query_id can be passed to clickhouse-client (elBroom).
  • New setting: max_network_bandwidth_for_all_users.
  • Added support for ALTER TABLE ... PARTITION ... for MATERIALIZED VIEW.
  • Added information about the size of data parts in uncompressed form in the system table.
  • Server-to-server encryption support for distributed tables (<secure>1</secure> in the replica config in <remote_servers>).
  • Configuration at the table level for the ReplicatedMergeTree family in order to minimize the amount of data stored in ZooKeeper: use_minimalistic_checksums_in_zookeeper = 1.
  • Configuration of the clickhouse-client prompt. By default, server names are now output to the prompt. The server's display name can be changed. It's also sent in the X-ClickHouse-Display-Name HTTP header (Kirill Shvakov).
  • Multiple comma-separated topics can be specified for the Kafka engine (Tobias Adamson); see the sketch after this list.
  • When a query is stopped by KILL QUERY or replace_running_query, the client receives the Query was canceled exception instead of an incomplete result.
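
A minimal sketch of the comma-separated topic list for the Kafka engine, using the positional engine arguments (broker list, topic list, consumer group, format); the table, broker, topic, and group names are hypothetical:

```sql
-- One table consuming from two topics (all names are assumptions).
CREATE TABLE queue
(
    timestamp UInt64,
    message String
)
ENGINE = Kafka('localhost:9092', 'events_eu,events_us', 'clickhouse_group', 'JSONEachRow');
```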

Improvements:

  • ALTER TABLE ... DROP/DETACH PARTITION queries are run at the front of the replication queue.
  • SELECT ... FINAL and OPTIMIZE ... FINAL can be used even when the table has a single data part.
  • A query_log table is recreated on the fly if it was deleted manually (Kirill Shvakov).
  • The lengthUTF8 function runs faster (zhang2014).
  • Improved performance of synchronous inserts in Distributed tables (insert_distributed_sync = 1) when there is a very large number of shards.
  • The server accepts the send_timeout and receive_timeout settings from the client and applies them when connecting to the client (they are applied in reverse order: the server socket's send_timeout is set to the receive_timeout value received from the client, and vice versa).
  • More robust crash recovery for asynchronous insertion into Distributed tables.
  • The return type of the countEqual function changed from UInt32 to UInt64 (谢磊).

Bug Fixes:

  • Fixed an error with IN when the left side of the expression is Nullable.
  • Correct results are now returned when using tuples with IN when some of the tuple components are in the table index.
  • The max_execution_time limit now works correctly with distributed queries.
  • Fixed errors when calculating the size of composite columns in the system.columns table.
  • Fixed an error when creating a temporary table CREATE TEMPORARY TABLE IF NOT EXISTS.
  • Fixed errors in StorageKafka (#2075)
  • Fixed server crashes from invalid arguments of certain aggregate functions.
  • Fixed the error that prevented the DETACH DATABASE query from stopping background tasks for ReplicatedMergeTree tables.
  • The Too many parts state is less likely to happen when inserting into aggregated materialized views (#2084).
  • Corrected recursive handling of substitutions in the config if a substitution must be followed by another substitution on the same level.
  • Corrected the syntax in the metadata file when creating a VIEW that uses a query with UNION ALL.
  • SummingMergeTree now works correctly for summation of nested data structures with a composite key.
  • Fixed the possibility of a race condition when choosing the leader for ReplicatedMergeTree tables.

Build Changes:

  • The build supports ninja instead of make and uses ninja by default for building releases.
  • Renamed packages: clickhouse-server-base to clickhouse-common-static; clickhouse-server-common to clickhouse-server; clickhouse-common-dbg to clickhouse-common-static-dbg. To install, use the clickhouse-server and clickhouse-client packages. Packages with the old names are still available in the repositories for backward compatibility.

Backward Incompatible Changes:

  • Removed the special interpretation of an IN expression if an array is specified on the left side. Previously, the expression arr IN (set) was interpreted as “at least one arr element belongs to the set”. To get the same behavior in the new version, write arrayExists(x -> x IN (set), arr).
  • Disabled the incorrect use of the socket option SO_REUSEPORT, which was incorrectly enabled by default in the Poco library. Note that on Linux there is no longer any reason to simultaneously specify the addresses :: and 0.0.0.0 for listen use just ::, which allows listening to the connection both over IPv4 and IPv6 (with the default kernel config settings). You can also revert to the behavior from previous versions by specifying <listen_reuse_port>1</listen_reuse_port> in the config.

ClickHouse Release 1.1.54370, 2018-03-16

New Features:

  • Added the system.macros table and auto updating of macros when the config file is changed.
  • Added the SYSTEM RELOAD CONFIG query.
  • Added the maxIntersections(left_col, right_col) aggregate function, which returns the maximum number of simultaneously intersecting intervals [left; right]. The maxIntersectionsPosition(left, right) function returns the beginning of the “maximum” interval. (Michael Furmur).
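
A minimal sketch of maxIntersections and maxIntersectionsPosition, assuming a hypothetical sessions table that stores interval boundaries as Unix timestamps:

```sql
-- Peak number of simultaneously open sessions and when that peak begins
-- (sessions, start_ts, and end_ts are assumed names).
SELECT
    maxIntersections(start_ts, end_ts) AS peak_concurrent,
    maxIntersectionsPosition(start_ts, end_ts) AS peak_starts_at
FROM sessions;
```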

Improvements:

  • When inserting data in a Replicated table, fewer requests are made to ZooKeeper (and most of the user-level errors have disappeared from the ZooKeeper log).
  • Added the ability to create aliases for data sets. Example: WITH (1, 2, 3) AS set SELECT number IN set FROM system.numbers LIMIT 10.

Bug Fixes:

  • Fixed the Illegal PREWHERE error when reading from Merge tables for Distributed tables.
  • Added fixes that allow you to start clickhouse-server in IPv4-only Docker containers.
  • Fixed a race condition when reading from the system.parts_columns system table.
  • Removed double buffering during a synchronous insert to a Distributed table, which could have caused the connection to timeout.
  • Fixed a bug that caused excessively long waits for an unavailable replica before beginning a SELECT query.
  • Fixed incorrect dates in the system.parts table.
  • Fixed a bug that made it impossible to insert data in a Replicated table if chroot was non-empty in the configuration of the ZooKeeper cluster.
  • Fixed the vertical merging algorithm for an empty ORDER BY table.
  • Restored the ability to use dictionaries in queries to remote tables, even if these dictionaries are not present on the requestor server. This functionality was lost in release 1.1.54362.
  • Restored the behavior for queries like SELECT * FROM remote('server2', default.table) WHERE col IN (SELECT col2 FROM default.table) when the right side of the IN should use a remote default.table instead of a local one. This behavior was broken in version 1.1.54358.
  • Removed extraneous error-level logging of Not found column ... in block.

ClickHouse Release 1.1.54362, 2018-03-11

New Features:

  • Aggregation without GROUP BY for an empty set (such as SELECT count(*) FROM table WHERE 0) now returns a result with one row with null values for aggregate functions, in compliance with the SQL standard. To restore the old behavior (return an empty result), set empty_result_for_aggregation_by_empty_set to 1.
  • Added type conversion for UNION ALL. Different alias names are allowed in SELECT positions in UNION ALL, in compliance with the SQL standard.
  • Arbitrary expressions are supported in LIMIT BY clauses. Previously, it was only possible to use columns resulting from SELECT.
  • An index of MergeTree tables is used when IN is applied to a tuple of expressions from the columns of the primary key. Example: WHERE (UserID, EventDate) IN ((123, '2000-01-01'), ...) (Anastasiya Tsarkova).
  • Added the clickhouse-copier tool for copying between clusters and resharding data (beta).
  • Added consistent hashing functions: yandexConsistentHash, jumpConsistentHash, sumburConsistentHash. They can be used as a sharding key in order to reduce the amount of network traffic during subsequent reshardings.
  • Added functions: arrayAny, arrayAll, hasAny, hasAll, arrayIntersect, arrayResize.
  • Added the arrayCumSum function (Javi Santana).
  • Added the parseDateTimeBestEffort, parseDateTimeBestEffortOrZero, and parseDateTimeBestEffortOrNull functions to read the DateTime from a string containing text in a wide variety of possible formats.
  • Data can be partially reloaded from external dictionaries during updating (load just the records in which the value of the specified field is greater than in the previous download) (Arsen Hakobyan).
  • Added the cluster table function. Example: cluster(cluster_name, db, table). The remote table function can accept the cluster name as the first argument, if it is specified as an identifier.
  • The remote and cluster table functions can be used in INSERT queries.
  • Added the create_table_query and engine_full virtual columns to the system.tables table. The metadata_modification_time column is virtual.
  • Added the data_path and metadata_path columns to the system.tables and system.databases tables, and added the path column to the system.parts and system.parts_columns tables.
  • Added additional information about merges in the system.part_log table.
  • An arbitrary partitioning key can be used for the system.query_log table (Kirill Shvakov).
  • The SHOW TABLES query now also shows temporary tables. Added temporary tables and the is_temporary column to system.tables (zhang2014).
  • Added DROP TEMPORARY TABLE and EXISTS TEMPORARY TABLE queries (zhang2014).
  • Support for SHOW CREATE TABLE for temporary tables (zhang2014).
  • Added the system_profile configuration parameter for the settings used by internal processes.
  • Support for loading object_id as an attribute in MongoDB dictionaries (Pavel Litvinenko).
  • Reading null as the default value when loading data for an external dictionary with the MongoDB source (Pavel Litvinenko).
  • Reading DateTime values in the Values format from a Unix timestamp without single quotes.
  • Failover is supported in remote table functions for cases when some of the replicas are missing the requested table.
  • Configuration settings can be overridden in the command line when you run clickhouse-server. Example: clickhouse-server -- --logger.level=information.
  • Implemented the empty function from a FixedString argument: the function returns 1 if the string consists entirely of null bytes (zhang2014).
  • Added the listen_try configuration parameter for listening to at least one of the listen addresses without quitting, if some of the addresses can't be listened to (useful for systems with disabled support for IPv4 or IPv6).
  • Added the VersionedCollapsingMergeTree table engine.
  • Support for rows and arbitrary numeric types for the library dictionary source.
  • MergeTree tables can be used without a primary key (you need to specify ORDER BY tuple()); see the sketch after this list.
  • A Nullable type can be CAST to a non-Nullable type if the argument is not NULL.
  • RENAME TABLE can be performed for VIEW.
  • Added the throwIf function.
  • Added the odbc_default_field_size option, which allows you to extend the maximum size of the value loaded from an ODBC source (by default, it is 1024).
  • The system.processes table and SHOW PROCESSLIST now have the is_cancelled and peak_memory_usage columns.
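
A minimal sketch of a MergeTree table without a primary key, as mentioned above; the table and column names are hypothetical:

```sql
-- No sorting key: ORDER BY tuple() is required in place of a key.
CREATE TABLE unsorted_log
(
    event_date Date,
    message String
)
ENGINE = MergeTree
ORDER BY tuple();
```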

Improvements:

  • Limits and quotas on the result are no longer applied to intermediate data for INSERT SELECT queries or for SELECT subqueries.
  • Fewer false triggers of force_restore_data when checking the status of Replicated tables when the server starts.
  • Added the allow_distributed_ddl option.
  • Nondeterministic functions are not allowed in expressions for MergeTree table keys.
  • Files with substitutions from config.d directories are loaded in alphabetical order.
  • Improved performance of the arrayElement function in the case of a constant multidimensional array with an empty array as one of the elements. Example: [[1], []][x].
  • The server starts faster now when using configuration files with very large substitutions (for instance, very large lists of IP networks).
  • When running a query, table valued functions run once. Previously, remote and mysql table valued functions performed the same query twice to retrieve the table structure from a remote server.
  • The MkDocs documentation generator is used.
  • When you try to delete a table column that DEFAULT/MATERIALIZED expressions of other columns depend on, an exception is thrown (zhang2014).
  • Added the ability to parse an empty line in text formats as the number 0 for Float data types. This feature was previously available but was lost in release 1.1.54342.
  • Enum values can be used in min, max, sum and some other functions. In these cases, the corresponding numeric values are used; see the sketch after this list. This feature was previously available but was lost in release 1.1.54337.
  • Added max_expanded_ast_elements to restrict the size of the AST after recursively expanding aliases.
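
A small sketch of using Enum values in aggregate functions, as described above; the table and its values are hypothetical, and the aggregates operate on the numeric values behind the Enum:

```sql
CREATE TABLE visits
(
    device Enum8('desktop' = 1, 'mobile' = 2, 'tablet' = 3)
)
ENGINE = Memory;

INSERT INTO visits VALUES ('desktop'), ('mobile'), ('mobile');

-- min/max/sum use the numeric values that back the Enum.
SELECT min(device), max(device), sum(device) FROM visits;
```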

Bug Fixes:

  • Fixed cases when unnecessary columns were removed from subqueries in error, or not removed from subqueries containing UNION ALL.
  • Fixed a bug in merges for ReplacingMergeTree tables.
  • Fixed synchronous insertions in Distributed tables (insert_distributed_sync = 1).
  • Fixed segfault for certain uses of FULL and RIGHT JOIN with duplicate columns in subqueries.
  • Fixed segfault for certain uses of replace_running_query and KILL QUERY.
  • Fixed the order of the source and last_exception columns in the system.dictionaries table.
  • Fixed a bug when the DROP DATABASE query did not delete the file with metadata.
  • Fixed the DROP DATABASE query for Dictionary databases.
  • Fixed the low precision of uniqHLL12 and uniqCombined functions for cardinalities greater than 100 million items (Alex Bocharov).
  • Fixed the calculation of implicit default values when it is necessary to simultaneously calculate explicit default expressions in INSERT queries (zhang2014).
  • Fixed a rare case when a query to a MergeTree table couldn't finish (chenxing-xc).
  • Fixed a crash that occurred when running a CHECK query for Distributed tables if all shards are local (chenxing.xc).
  • Fixed a slight performance regression with functions that use regular expressions.
  • Fixed a performance regression when creating multidimensional arrays from complex expressions.
  • Fixed a bug that could cause an extra FORMAT section to appear in an .sql file with metadata.
  • Fixed a bug that caused the max_table_size_to_drop limit to apply when trying to delete a MATERIALIZED VIEW looking at an explicitly specified table.
  • Fixed incompatibility with old clients (old clients were sometimes sent data with the DateTime('timezone') type, which they do not understand).
  • Fixed a bug when reading Nested column elements of structures that were added using ALTER but that are empty for the old partitions, when the conditions for these columns moved to PREWHERE.
  • Fixed a bug when filtering tables by virtual _table columns in queries to Merge tables.
  • Fixed a bug when using ALIAS columns in Distributed tables.
  • Fixed a bug that made dynamic compilation impossible for queries with aggregate functions from the quantile family.
  • Fixed a race condition in the query execution pipeline that occurred in very rare cases when using Merge tables with a large number of tables, and when using GLOBAL subqueries.
  • Fixed a crash when passing arrays of different sizes to an arrayReduce function when using aggregate functions from multiple arguments.
  • Prohibited the use of queries with UNION ALL in a MATERIALIZED VIEW.
  • Fixed an error during initialization of the part_log system table when the server starts (by default, part_log is disabled).

Backward Incompatible Changes:

  • Removed the distributed_ddl_allow_replicated_alter option. This behavior is enabled by default.
  • Removed the strict_insert_defaults setting. If you were using this functionality, write to feedback@clickhouse.com.
  • Removed the UnsortedMergeTree engine.

ClickHouse Release 1.1.54343, 2018-02-05

  • Added macros support for defining cluster names in distributed DDL queries and constructors of Distributed tables: CREATE TABLE distr ON CLUSTER '{cluster}' (...) ENGINE = Distributed('{cluster}', 'db', 'table').
  • Now queries like SELECT ... FROM table WHERE expr IN (subquery) are processed using the table index.
  • Improved processing of duplicates when inserting to Replicated tables, so they no longer slow down execution of the replication queue.

ClickHouse Release 1.1.54342, 2018-01-22

This release contains bug fixes for the previous release 1.1.54337:

  • Fixed a regression in 1.1.54337: if the default user has readonly access, then the server refuses to start up with the message Cannot create database in readonly mode.
  • Fixed a regression in 1.1.54337: on systems with systemd, logs are always written to syslog regardless of the configuration; the watchdog script still uses init.d.
  • Fixed a regression in 1.1.54337: wrong default configuration in the Docker image.
  • Fixed nondeterministic behavior of GraphiteMergeTree (you can see it in log messages Data after merge is not byte-identical to the data on another replicas).
  • Fixed a bug that may lead to inconsistent merges after OPTIMIZE query to Replicated tables (you may see it in log messages Part ... intersects the previous part).
  • Buffer tables now work correctly when MATERIALIZED columns are present in the destination table (by zhang2014).
  • Fixed a bug in implementation of NULL.

ClickHouse Release 1.1.54337, 2018-01-18

New Features:

  • Added support for storage of multi-dimensional arrays and tuples (Tuple data type) in tables.
  • Support for table functions for DESCRIBE and INSERT queries. Added support for subqueries in DESCRIBE. Examples: DESC TABLE remote('host', default.hits); DESC TABLE (SELECT 1); INSERT INTO TABLE FUNCTION remote('host', default.hits). Support for INSERT INTO TABLE in addition to INSERT INTO.
  • Improved support for time zones. The DateTime data type can be annotated with the timezone that is used for parsing and formatting in text formats. Example: DateTime('Asia/Istanbul'). When timezones are specified in functions for DateTime arguments, the return type will track the timezone, and the value will be displayed as expected (see the sketch after this list).
  • Added the functions toTimeZone, timeDiff, toQuarter, toRelativeQuarterNum. The toRelativeHour/Minute/Second functions can take a value of type Date as an argument. The now function name is case-sensitive.
  • Added the toStartOfFifteenMinutes function (Kirill Shvakov).
  • Added the clickhouse format tool for formatting queries.
  • Added the format_schema_path configuration parameter (Marek Vavruşa). It is used for specifying a schema in Cap'n Proto format. Schema files can be located only in the specified directory.
  • Added support for config substitutions (incl and conf.d) for configuration of external dictionaries and models (Pavel Yakunin).
  • Added a column with documentation for the system.settings table (Kirill Shvakov).
  • Added the system.parts_columns table with information about column sizes in each data part of MergeTree tables.
  • Added the system.models table with information about loaded CatBoost machine learning models.
  • Added the mysql and odbc table functions and the corresponding MySQL and ODBC table engines for accessing remote databases. This functionality is in the beta stage.
  • Added the possibility to pass an argument of type AggregateFunction for the groupArray aggregate function (so you can create an array of states of some aggregate function).
  • Removed restrictions on various combinations of aggregate function combinators. For example, you can use avgForEachIf as well as avgIfForEach aggregate functions, which have different behaviors.
  • The -ForEach aggregate function combinator is extended for the case of aggregate functions of multiple arguments.
  • Added support for aggregate functions of Nullable arguments even for cases when the function returns a non-Nullable result (added with the contribution of Silviu Caragea). Example: groupArray, groupUniqArray, topK.
  • Added the max_client_network_bandwidth for clickhouse-client (Kirill Shvakov).
  • Users with the readonly = 2 setting are allowed to work with TEMPORARY tables (CREATE, DROP, INSERT…) (Kirill Shvakov).
  • Added support for using multiple consumers with the Kafka engine. Extended configuration options for Kafka (Marek Vavruša).
  • Added the intExp3 and intExp4 functions.
  • Added the sumKahan aggregate function.
  • Added the to*Number*OrNull conversion functions, where *Number* is a numeric type (for example, toUInt32OrNull).
  • Added support for WITH clauses for an INSERT SELECT query (author: zhang2014).
  • Added settings: http_connection_timeout, http_send_timeout, http_receive_timeout. In particular, these settings are used for downloading data parts for replication. Changing these settings allows for faster failover if the network is overloaded.
  • Added support for ALTER for tables of type Null (Anastasiya Tsarkova).
  • The reinterpretAsString function is extended for all data types that are stored contiguously in memory.
  • Added the --silent option for the clickhouse-local tool. It suppresses printing query execution info in stderr.
  • Added support for reading values of type Date from text in a format where the month and/or day of the month is specified using a single digit instead of two digits (Amos Bird).
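
A minimal sketch of the time zone support described above; the table name is hypothetical:

```sql
-- The column stores and formats values in the Asia/Istanbul time zone.
CREATE TABLE events_tz
(
    t DateTime('Asia/Istanbul')
)
ENGINE = Memory;

INSERT INTO events_tz VALUES ('2018-01-18 12:00:00');

-- toTimeZone keeps the same moment in time but changes the display time zone.
SELECT t, toTimeZone(t, 'UTC') FROM events_tz;
```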

Performance Optimizations:

  • Improved performance of aggregate functions min, max, any, anyLast, anyHeavy, argMin, argMax from string arguments.
  • Improved performance of the functions isInfinite, isFinite, isNaN, roundToExp2.
  • Improved performance of parsing and formatting Date and DateTime type values in text format.
  • Improved performance and precision of parsing floating point numbers.
  • Lowered memory usage for JOIN in the case when the left and right parts have columns with identical names that are not contained in USING.
  • Improved performance of aggregate functions varSamp, varPop, stddevSamp, stddevPop, covarSamp, covarPop, corr at the cost of reduced computational stability. The old functions are available under the names varSampStable, varPopStable, stddevSampStable, stddevPopStable, covarSampStable, covarPopStable, corrStable.

Bug Fixes:

  • Fixed data deduplication after running a DROP or DETACH PARTITION query. In the previous version, dropping a partition and inserting the same data again was not working because inserted blocks were considered duplicates.
  • Fixed a bug that could lead to incorrect interpretation of the WHERE clause for CREATE MATERIALIZED VIEW queries with POPULATE.
  • Fixed a bug in using the root_path parameter in the zookeeper_servers configuration.
  • Fixed unexpected results of passing the Date argument to toStartOfDay.
  • Fixed the addMonths and subtractMonths functions and the arithmetic for INTERVAL n MONTH in cases when the result has the previous year.
  • Added missing support for the UUID data type for DISTINCT, JOIN, and uniq aggregate functions and external dictionaries (Evgeniy Ivanov). Support for UUID is still incomplete.
  • Fixed SummingMergeTree behavior in cases when the rows summed to zero.
  • Various fixes for the Kafka engine (Marek Vavruša).
  • Fixed incorrect behavior of the Join table engine (Amos Bird).
  • Fixed incorrect allocator behavior under FreeBSD and OS X.
  • The extractAll function now supports empty matches.
  • Fixed an error that blocked usage of libressl instead of openssl.
  • Fixed the CREATE TABLE AS SELECT query from temporary tables.
  • Fixed non-atomicity of updating the replication queue. This could lead to replicas being out of sync until the server restarts.
  • Fixed possible overflow in gcd, lcm and modulo (% operator) (Maks Skorokhod).
  • -preprocessed files are now created after changing umask (umask can be changed in the config).
  • Fixed a bug in the background check of parts (MergeTreePartChecker) when using a custom partition key.
  • Fixed parsing of tuples (values of the Tuple data type) in text formats.
  • Improved error messages about incompatible types passed to multiIf, array and some other functions.
  • Redesigned support for Nullable types. Fixed bugs that may lead to a server crash. Fixed almost all other bugs related to NULL support: incorrect type conversions in INSERT SELECT, insufficient support for Nullable in HAVING and PREWHERE, join_use_nulls mode, Nullable types as arguments of OR operator, etc.
  • Fixed various bugs related to internal semantics of data types. Examples: unnecessary summing of Enum type fields in SummingMergeTree; alignment of Enum types in Pretty formats, etc.
  • Stricter checks for allowed combinations of composite columns.
  • Fixed the overflow when specifying a very large parameter for the FixedString data type.
  • Fixed a bug in the topK aggregate function in a generic case.
  • Added the missing check for equality of array sizes in arguments of n-ary variants of aggregate functions with an -Array combinator.
  • Fixed a bug in --pager for clickhouse-client (author: ks1322).
  • Fixed the precision of the exp10 function.
  • Fixed the behavior of the visitParamExtract function for better compliance with documentation.
  • Fixed the crash when incorrect data types are specified.
  • Fixed the behavior of DISTINCT in the case when all columns are constants.
  • Fixed query formatting in the case of using the tupleElement function with a complex constant expression as the tuple element index.
  • Fixed a bug in Dictionary tables for range_hashed dictionaries.
  • Fixed a bug that leads to excessive rows in the result of FULL and RIGHT JOIN (Amos Bird).
  • Fixed a server crash when creating and removing temporary files in config.d directories during config reload.
  • Fixed the SYSTEM DROP DNS CACHE query: the cache was flushed but addresses of cluster nodes were not updated.
  • Fixed the behavior of MATERIALIZED VIEW after executing DETACH TABLE for the table under the view (Marek Vavruša).

Build Improvements:

  • The pbuilder tool is used for builds. The build process is almost completely independent of the build host environment.
  • A single build is used for different OS versions. Packages and binaries have been made compatible with a wide range of Linux systems.
  • Added the clickhouse-test package. It can be used to run functional tests.
  • The source tarball can now be published to the repository. It can be used to reproduce the build without using GitHub.
  • Added limited integration with Travis CI. Due to limits on build time in Travis, only the debug build is tested and a limited subset of tests are run.
  • Added support for Cap'n Proto in the default build.
  • Changed the format of documentation sources from reStructuredText to Markdown.
  • Added support for systemd (Vladimir Smirnov). It is disabled by default due to incompatibility with some OS images and can be enabled manually.
  • For dynamic code generation, clang and lld are embedded into the clickhouse binary. They can also be invoked as clickhouse clang and clickhouse lld.
  • Removed usage of GNU extensions from the code. Enabled the -Wextra option. When building with clang the default is libc++ instead of libstdc++.
  • Extracted clickhouse_parsers and clickhouse_common_io libraries to speed up builds of various tools.

Backward Incompatible Changes:

  • The format for marks in Log type tables that contain Nullable columns was changed in a backward incompatible way. If you have these tables, you should convert them to the TinyLog type before starting up the new server version. To do this, replace ENGINE = Log with ENGINE = TinyLog in the corresponding .sql file in the metadata directory (see the sketch after this list). If your table does not have Nullable columns or if the type of your table is not Log, then you do not need to do anything.
  • Removed the experimental_allow_extended_storage_definition_syntax setting. Now this feature is enabled by default.
  • The runningIncome function was renamed to runningDifferenceStartingWithFirstValue to avoid confusion.
  • Removed the FROM ARRAY JOIN arr syntax when ARRAY JOIN is specified directly after FROM with no table (Amos Bird).
  • Removed the BlockTabSeparated format that was used solely for demonstration purposes.
  • Changed the state format for aggregate functions varSamp, varPop, stddevSamp, stddevPop, covarSamp, covarPop, corr. If you have stored states of these aggregate functions in tables (using the AggregateFunction data type or materialized views with corresponding states), please write to feedback@clickhouse.com.
  • In previous server versions there was an undocumented feature: if an aggregate function depends on parameters, you can still specify it without parameters in the AggregateFunction data type. Example: AggregateFunction(quantiles, UInt64) instead of AggregateFunction(quantiles(0.5, 0.9), UInt64). This feature was lost. Although it was undocumented, we plan to support it again in future releases.
  • Enum data types cannot be used in min/max aggregate functions. This ability will be returned in the next release.
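
A sketch of the metadata edit described in the first item of this list; the table definition is hypothetical, and the .sql file in the metadata directory normally contains an ATTACH statement like this:

```sql
-- Before: a Log table with a Nullable column
ATTACH TABLE events_log
(
    id UInt64,
    comment Nullable(String)
)
ENGINE = Log

-- After: only the engine changes
ATTACH TABLE events_log
(
    id UInt64,
    comment Nullable(String)
)
ENGINE = TinyLog
```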

Please Note When Upgrading:

  • When doing a rolling update on a cluster, at the point when some of the replicas are running the old version of ClickHouse and some are running the new version, replication is temporarily stopped and the message unknown parameter 'shard' appears in the log. Replication will continue after all replicas of the cluster are updated.
  • If different versions of ClickHouse are running on the cluster servers, it is possible that distributed queries using the following functions will have incorrect results: varSamp, varPop, stddevSamp, stddevPop, covarSamp, covarPop, corr. You should update all cluster nodes.

Changelog for 2017