Since killing does not happen instantly, the subsequent start can fail [1]:
```
The process with pid = 157 is running.
Will terminate forcefully.
Sent kill signal.
/var/run/clickhouse-server/clickhouse-server.pid file exists and contains pid = 157.
The process with pid = 157 is already running.
+ for _ in {1..120}
+ clickhouse-client --query 'SELECT 1'
Code: 210. DB::NetException: Connection refused (localhost:9000)
```
[1]: https://clickhouse-test-reports.s3.yandex.net/21318/4327e9e1d1e4c9c3576b00f41a8444237549dffd/functional_stateful_tests_(debug).html#fail1
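One way to avoid the race is to wait until the old process has really exited before starting a new one. A minimal sketch of that idea in C++ (illustrative only, not the actual start script, which is written in shell):

```cpp
#include <signal.h>
#include <errno.h>
#include <unistd.h>

/// Poll until the process with the given pid has exited, or give up after
/// `attempts` seconds. kill(pid, 0) delivers no signal; it only checks
/// whether the process still exists.
bool waitForExit(pid_t pid, int attempts = 120)
{
    for (int i = 0; i < attempts; ++i)
    {
        if (kill(pid, 0) == -1 && errno == ESRCH)
            return true; /// the process is gone
        sleep(1);
    }
    return false;
}
```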
The year 1925 is the starting point because most time zones switched
to saner (mostly 15-minute-based) offsets during 1924 or earlier,
which significantly simplifies the implementation.
The year 2238 is chosen to simplify the arithmetic for sanitizing LUT index access:
there are fewer than 0x1ffff days between 1925 and 2238.
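To see why bounding the day count by 0x1ffff helps: a LUT of 0x20000 entries (the next power of two) can sanitize any computed index with a single bitwise AND. A sketch under that assumption, not the actual DateLUTImpl code:

```cpp
#include <cstddef>
#include <cstdint>

/// Sketch: fewer than 0x1ffff days fit the 1925..2238 range, so a table of
/// 0x20000 entries (a power of two) covers it, and masking the computed
/// index keeps every access in bounds without branching range checks.
constexpr size_t LUT_SIZE = 0x20000; /// assumed size for this illustration

struct Values
{
    int64_t date = 0; /// start of day in seconds; negative before 1970
};

Values lut[LUT_SIZE];

inline const Values & find(size_t index)
{
    return lut[index & (LUT_SIZE - 1)]; /// same as index % LUT_SIZE
}
```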
* Extended DateLUTImpl internal LUT to 0x1ffff items, some of which
represent negative (pre-1970) time values.
As a collateral benefit, Date now correctly supports dates up to 2149
(instead of 2106).
* Added a new strong typedef ExtendedDayNum, which represents dates
before 1970 and after 2149.
* Functions that used to return DayNum now return ExtendedDayNum.
* Refactored DateLUTImpl to untie DayNum from its dual role of being
both a value and an index (a role that breaks down once time values
can be negative). The index is now a distinct type, LUTIndex, with
explicit conversion functions from DayNum, time_t, and ExtendedDayNum;
see the sketch after this list.
* Updated DateLUTImpl to properly support values close to the epoch start
(1970-01-01 00:00), including negative ones.
* Reduced the resolution of DateLUTImpl::Values::time_at_offset_change
to multiples of 15 minutes, allowing 64 bits of time_t to be stored in
DateLUTImpl::Values while keeping the same size.
* Minor performance improvements to DateLUTImpl when building the month
LUT, by skipping days that are not the start of a month.
* Fixed extractTimeZoneFromFunctionArguments to work correctly
with DateTime64.
* New unit tests and stateless integration tests for both DateTime
and DateTime64.
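The strong-typedef refactoring mentioned above can be pictured roughly like this (a hedged sketch with a hand-computed DAYS_BEFORE_EPOCH constant, not the real ClickHouse types):

```cpp
#include <cstdint>
#include <ctime>

/// Sketch of the strong-typedef idea: distinct wrapper types so a day
/// number can no longer be passed where a LUT index is expected.
struct ExtendedDayNum
{
    int32_t value; /// days relative to 1970-01-01; negative before the epoch
    explicit ExtendedDayNum(int32_t v) : value(v) {}
};

struct LUTIndex
{
    uint32_t value; /// position inside the LUT, always non-negative

    /// Stand-in constant: days from 1925-01-01 to 1970-01-01.
    static constexpr int32_t DAYS_BEFORE_EPOCH = 16436;

    static LUTIndex fromDayNum(ExtendedDayNum d)
    {
        return LUTIndex{static_cast<uint32_t>(d.value + DAYS_BEFORE_EPOCH)};
    }

    static LUTIndex fromTime(time_t t)
    {
        /// Truncation toward zero; real code must round down for negative t.
        return fromDayNum(ExtendedDayNum{static_cast<int32_t>(t / 86400)});
    }
};
```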
* Refactoring: part 1
* Refactoring: part 2
* Handle request using ReadBuffer interface
* Struggles with ReadBuffers
* Fix URI parsing
* Implement parsing of multipart/form-data
* Check HTTP_LENGTH_REQUIRED before eof(), or it will hang
* Fix HTTPChunkedReadBuffer
* Fix build and style
* Fix test
* Resist double-eof
* Fix Arcadia build
If you push data via the Buffer engine, then all the queries will be executed
as one user; however, this is not always the desired behavior, since it
prevents limiting queries with max_concurrent_queries_for_user and
similar settings.
```
Undefined symbols for architecture x86_64:
"vtable for DB::IModel", referenced from:
DB::IModel::IModel() in Obfuscator.cpp.o
NOTE: a missing vtable usually means the first non-inline virtual member function has no definition.
ld: symbol(s) not found for architecture x86_64
clang-11: error: linker command failed with exit code 1 (use -v to see invocation)
ninja: build stopped: subcommand failed.
```
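The NOTE in the linker output describes the usual cause: under the common C++ ABI, the vtable is emitted in the translation unit that defines the class's first non-inline virtual member function, so if that function is declared but never defined, no object file provides the vtable. A contrived illustration (this IModel is a stand-in, not the actual ClickHouse class):

```cpp
/// model.h
struct IModel
{
    IModel() = default;
    virtual void train();       /// first non-inline virtual function: the
                                /// vtable is emitted where this is defined
    virtual ~IModel() = default;
};

/// model.cpp -- omit this definition and every TU that constructs an
/// IModel references a vtable no object file provides, reproducing
/// "Undefined symbols: vtable for IModel".
void IModel::train() {}
```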
* Add query data deduplication that excludes duplicated parts in MergeTree-family engines.
Query deduplication is based on part UUIDs, which must first be enabled with the MergeTree setting
assign_part_uuids = 1.
The allow_experimental_query_deduplication setting enables part deduplication; it defaults to false.
A data part UUID is a mechanism for giving a data part a unique identifier.
Having UUIDs and a deduplication mechanism makes it possible to move parts
between shards while preserving data consistency on the read path:
duplicated UUIDs will cause the root executor to retry the query against one of the replicas,
explicitly asking it to exclude the encountered duplicated fingerprints during distributed query execution.
NOTE: this implementation does not provide any knobs to lock a part and hence its UUID;
any mutation or merge will update the part's UUID.
* Add a _part_uuid virtual column, allowing UUIDs to be used in predicates.
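As a rough sketch of the fingerprint bookkeeping described above (plain C++ with hypothetical types, not ClickHouse's actual interfaces):

```cpp
#include <set>
#include <string>
#include <vector>

using UUID = std::string; /// stand-in for a real UUID type

struct ReplicaResult
{
    std::vector<UUID> part_uuids; /// UUIDs of parts a replica read from
};

/// The initiator collects part UUIDs from every participating replica; any
/// UUID seen more than once marks a duplicated part, and the query is
/// retried with those fingerprints explicitly excluded.
std::set<UUID> findDuplicates(const std::vector<ReplicaResult> & results)
{
    std::set<UUID> seen;
    std::set<UUID> duplicates;
    for (const auto & result : results)
        for (const auto & uuid : result.part_uuids)
            if (!seen.insert(uuid).second)
                duplicates.insert(uuid);
    return duplicates;
}
```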
Signed-off-by: Aleksei Semiglazov <asemiglazov@cloudflare.com>
Address comments