mirror of
https://github.com/ClickHouse/ClickHouse.git
synced 2024-11-27 01:51:59 +00:00
124 lines
7.9 KiB
Markdown
# ClickHouse release 1.1.54284
* This is a bugfix release for the previous release, 1.1.54282. It fixes a leak of ZooKeeper nodes in the `parts/` directory.

# ClickHouse release 1.1.54282

This is a bugfix release. The following bugs were fixed:

* `DB::Exception: Assertion violation: !_path.empty()` error when inserting into a Distributed table.
* Error when parsing inserted data in RowBinary format if the data begins with the ';' character.
* Errors during runtime compilation of certain aggregate functions (e.g. `groupArray()`).

# ClickHouse release 1.1.54276

## New features:

* You can use an optional WITH clause in a SELECT query. Example query: `WITH 1+1 AS a SELECT a, a*a`
* INSERT can be performed synchronously in a Distributed table: OK is returned only after all the data is saved on all the shards. This is activated by the setting `insert_distributed_sync=1`.
* Added the UUID data type for working with 16-byte identifiers.
* Added aliases of CHAR, FLOAT, and other types for compatibility with Tableau.
* Added the functions `toYYYYMM`, `toYYYYMMDD`, and `toYYYYMMDDhhmmss` for converting time into numbers.
* You can use IP addresses (together with the hostname) to identify servers for clustered DDL queries.
* Added support for non-constant arguments and negative offsets in the function `substring(str, pos, len)`.
* Added the `max_size` parameter for the `groupArray(max_size)(column)` aggregate function, and optimized its performance.
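
The new `groupArray(max_size)` parameter and negative `substring` offsets can be sketched as follows (the `events` table and its columns are hypothetical):

```sql
-- Collect at most 3 event_type values per user (hypothetical table `events`).
SELECT user_id, groupArray(3)(event_type) AS some_events
FROM events
GROUP BY user_id;

-- Negative offsets count from the end of the string: this returns 'House'.
SELECT substring('ClickHouse', -5, 5);
```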

## Major changes:

* Improved security: all server files are created with 0640 permissions.
* Improved error messages for queries with invalid syntax.
* Significantly reduced memory consumption and improved performance when merging large parts of MergeTree data.
* Significantly increased the performance of data merges for the ReplacingMergeTree engine.
* Improved performance for asynchronous inserts from a Distributed table by batching multiple source inserts. To enable this functionality, use the setting `distributed_directory_monitor_batch_inserts=1`.

## Backward incompatible changes:

* Changed the binary format of aggregate states of `groupArray(array_column)` functions for arrays.

## Complete list of changes:

* Added the `output_format_json_quote_denormals` setting, which enables outputting `nan` and `inf` values in JSON format.
* Optimized thread allocation when reading from a Distributed table.
* Settings can be modified in readonly mode if the value doesn't change.
* Added the ability to read fractional granules of the MergeTree engine in order to meet restrictions on the block size specified in the `preferred_block_size_bytes` setting. The purpose is to reduce the consumption of RAM and increase cache locality when processing queries from tables with large columns.
* Efficient use of indexes that contain expressions like `toStartOfHour(x)` for conditions like `toStartOfHour(x) op constexpr`.
* Added new settings for MergeTree engines (the `merge_tree` section in config.xml):
    - `replicated_deduplication_window_seconds` sets the size of the deduplication window in seconds for Replicated tables.
    - `cleanup_delay_period` sets how often to start cleanup to remove outdated data.
    - `replicated_can_become_leader` can prevent a replica from becoming the leader (and assigning merges).
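
A hypothetical `merge_tree` section in config.xml using these settings might look like this (the values are illustrative, not defaults):

```xml
<merge_tree>
    <!-- Deduplication window for Replicated tables, in seconds (illustrative value). -->
    <replicated_deduplication_window_seconds>604800</replicated_deduplication_window_seconds>
    <!-- How often to start cleanup of outdated data, in seconds (illustrative value). -->
    <cleanup_delay_period>30</cleanup_delay_period>
    <!-- Set to 0 to prevent this replica from becoming the leader. -->
    <replicated_can_become_leader>0</replicated_can_become_leader>
</merge_tree>
```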
* Accelerated cleanup to remove outdated data from ZooKeeper.
* Multiple improvements and fixes for clustered DDL queries. Of particular interest is the new setting `distributed_ddl_task_timeout`, which limits the time to wait for a response from the servers in the cluster.
* Improved display of stack traces in the server logs.
* Added the "none" value for the compression method.
* You can use multiple `dictionaries_config` sections in config.xml.
* It is possible to connect to MySQL through a socket in the file system.
* The `system.parts` table has a new column with information about the size of marks, in bytes.
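
The `output_format_json_quote_denormals` setting can be tried as a sketch like this (with the setting disabled, denormal values are output as null in JSON):

```sql
SET output_format_json_quote_denormals = 1;
-- 0/0 evaluates to nan; with the setting enabled it is output as the string "nan" instead of null.
SELECT 0/0 AS x FORMAT JSON;
```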

## Bug fixes:

* Distributed tables using a Merge table now work correctly for a SELECT query with a condition on the `_table` field.
* Fixed a rare race condition in ReplicatedMergeTree when checking data parts.
* Fixed possible freezing on "leader election" when starting a server.
* The `max_replica_delay_for_distributed_queries` setting was ignored when using a local replica of the data source. This has been fixed.
* Fixed incorrect behavior of `ALTER TABLE CLEAR COLUMN IN PARTITION` when attempting to clear a non-existent column.
* Fixed an exception in the `multiIf` function when using empty arrays or strings.
* Fixed excessive memory allocations when deserializing the Native format.
* Fixed incorrect auto-update of Trie dictionaries.
* Fixed an exception when running queries with a GROUP BY clause from a Merge table when using SAMPLE.
* Fixed a crash of GROUP BY when using `distributed_aggregation_memory_efficient=1`.
* Now you can specify `database.table` on the right side of IN and JOIN.
* Too many threads were used for parallel aggregation. This has been fixed.
* Fixed how the `if` function works with FixedString arguments.
* SELECT worked incorrectly from a Distributed table for shards with a weight of 0. This has been fixed.
* Crashes no longer occur when running `CREATE VIEW IF EXISTS`.
* Fixed incorrect behavior when `input_format_skip_unknown_fields=1` is set and there are negative numbers.
* Fixed an infinite loop in the `dictGetHierarchy()` function if there is some invalid data in the dictionary.
* Fixed `Syntax error: unexpected (...)` errors when running distributed queries with subqueries in an IN or JOIN clause and Merge tables.
* Fixed the incorrect interpretation of a SELECT query from Dictionary tables.
* Fixed the "Cannot mremap" error when using arrays in IN and JOIN clauses with more than 2 billion elements.
* Fixed the failover for dictionaries with MySQL as the source.

## Improved workflow for developing and assembling ClickHouse:

* Builds can be assembled in Arcadia.
* You can use gcc 7 to compile ClickHouse.
* Parallel builds using ccache+distcc are faster now.

# ClickHouse release 1.1.54245

## New features:

* Distributed DDL (for example, `CREATE TABLE ON CLUSTER`).
* The replicated query `ALTER TABLE CLEAR COLUMN IN PARTITION`.
* The engine for Dictionary tables (access to dictionary data in the form of a table).
* Dictionary database engine (this type of database automatically has Dictionary tables available for all the connected external dictionaries).
* You can check for updates to the dictionary by sending a request to the source.
* Qualified column names.
* Quoting identifiers using double quotation marks.
* Sessions in the HTTP interface.
* The OPTIMIZE query for a Replicated table can run not only on the leader.
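
A distributed DDL query runs on every server of a named cluster; a minimal sketch (the cluster name, table, and columns here are hypothetical):

```sql
-- 'my_cluster' must be defined in the remote_servers section of the server config.
CREATE TABLE default.hits ON CLUSTER my_cluster
(
    EventDate Date,
    UserID UInt64
) ENGINE = MergeTree(EventDate, UserID, 8192);
```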

## Backward incompatible changes:

* Removed `SET GLOBAL`.

## Minor changes:

* If an alert is triggered, the full stack trace is printed into the log.
* Relaxed the verification of the number of damaged or extra data parts at startup (there were too many false positives).

## Bug fixes:

* Fixed a bad connection "sticking" when inserting into a Distributed table.
* GLOBAL IN now works for a query from a Merge table that looks at a Distributed table.
* The incorrect number of cores was detected on a Google Compute Engine virtual machine. This has been fixed.
* Changed how an executable source of cached external dictionaries works.
* Fixed the comparison of strings containing null characters.
* Fixed the comparison of Float32 primary key fields with constants.
* Previously, an incorrect estimate of the size of a field could lead to overly large allocations. This has been fixed.
* Fixed a crash when querying a Nullable column added to a table using ALTER.
* Fixed a crash when sorting by a Nullable column, if the number of rows is less than LIMIT.
* Fixed an ORDER BY subquery consisting of only constant values.
* Previously, a Replicated table could remain in an invalid state after a failed DROP TABLE.
* Aliases for scalar subqueries with empty results are no longer lost.
* Now a query that used compilation does not fail with an error if the .so file gets damaged.