mirror of https://github.com/ClickHouse/ClickHouse.git
synced 2024-09-19 16:20:50 +00:00

Merge remote-tracking branch 'origin/master' into pr-local-plan

This commit is contained in:
commit 60153d428d
@@ -37,6 +37,7 @@ RUN pip3 install \
    tqdm==4.66.4 \
    types-requests \
    unidiff \
    jwt \
    && rm -rf /root/.cache/pip

RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && locale-gen en_US.UTF-8
@@ -193,6 +193,7 @@ You can pass parameters to `clickhouse-client` (all parameters have a default value)
- `--hardware-utilization` — Print hardware utilization information in progress bar.
- `--print-profile-events` — Print `ProfileEvents` packets.
- `--profile-events-delay-ms` — Delay between printing `ProfileEvents` packets (-1 - print only totals, 0 - print every single packet).
- `--jwt` — If specified, enables authorization via JSON Web Token. Server JWT authorization is available only in ClickHouse Cloud.

Instead of `--host`, `--port`, `--user` and `--password` options, ClickHouse client also supports connection strings (see next section).
@@ -36,9 +36,24 @@ $ echo 0 | sudo tee /proc/sys/vm/overcommit_memory
Use `perf top` to watch the time spent in the kernel for memory management.
Permanent huge pages also do not need to be allocated.

### Using less than 16GB of RAM

The recommended amount of RAM is 32 GB or more.

If your system has less than 16 GB of RAM, you may experience various memory exceptions because the default settings do not match this amount of memory. You can use ClickHouse in a system with a small amount of RAM (as low as 2 GB), but these setups require additional tuning and can only ingest at a low rate.

When using ClickHouse with less than 16GB of RAM, we recommend the following:

- Lower the size of the mark cache in the `config.xml`. It can be set as low as 500 MB, but it cannot be set to zero.
- Lower the number of query processing threads down to `1`.
- Lower `max_block_size` to `8192`. Values as low as `1024` can still be practical.
- Lower `max_download_threads` to `1`.
- Set `input_format_parallel_parsing` and `output_format_parallel_formatting` to `0`.

Additional notes:

- To flush the memory cached by the memory allocator, you can run the `SYSTEM JEMALLOC PURGE` command.
- We do not recommend using S3 or Kafka integrations on low-memory machines because they require significant memory for buffers.
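As a sketch, the server-side part of these recommendations could look like the following `config.xml` fragment. The element placement is an assumption to check against your server's reference configuration; the 500 MB mark-cache floor is expressed in bytes.

```xml
<!-- Sketch of a low-memory override for config.xml (assumed layout).
     500 MB mark cache expressed in bytes; it must not be set to zero. -->
<clickhouse>
    <mark_cache_size>524288000</mark_cache_size>
</clickhouse>
```

The remaining items in the list (`max_threads`, `max_block_size`, `max_download_threads`, `input_format_parallel_parsing`, `output_format_parallel_formatting`) are query-level settings and can be applied in a user profile or per session with `SET`.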

## Storage Subsystem {#storage-subsystem}
@@ -6,26 +6,297 @@ sidebar_label: NLP (experimental)

# Natural Language Processing (NLP) Functions

:::warning
This is an experimental feature that is currently in development and is not ready for general use. It will change in unpredictable backwards-incompatible ways in future releases. Set `allow_experimental_nlp_functions = 1` to enable it.
:::

## detectCharset

The `detectCharset` function detects the character set of a non-UTF8-encoded input string.

*Syntax*

``` sql
detectCharset('text_to_be_analyzed')
```

*Arguments*

- `text_to_be_analyzed` — A collection (or sentences) of strings to analyze. [String](../data-types/string.md#string).

*Returned value*

- A `String` containing the code of the detected character set.

*Examples*

Query:

```sql
SELECT detectCharset('Ich bleibe für ein paar Tage.');
```

Result:

```response
┌─detectCharset('Ich bleibe für ein paar Tage.')─┐
│ WINDOWS-1252                                   │
└────────────────────────────────────────────────┘
```

## detectLanguage

Detects the language of the UTF8-encoded input string. The function uses the [CLD2 library](https://github.com/CLD2Owners/cld2) for detection, and it returns the 2-letter ISO language code.

The `detectLanguage` function works best when providing over 200 characters in the input string.

*Syntax*

``` sql
detectLanguage('text_to_be_analyzed')
```

*Arguments*

- `text_to_be_analyzed` — A collection (or sentences) of strings to analyze. [String](../data-types/string.md#string).

*Returned value*

- The 2-letter ISO code of the detected language.

Other possible results:

- `un` = unknown, cannot detect any language.
- `other` = the detected language does not have a 2-letter code.

*Examples*

Query:

```sql
SELECT detectLanguage('Je pense que je ne parviendrai jamais à parler français comme un natif. Where there’s a will, there’s a way.');
```

Result:

```response
fr
```

## detectLanguageMixed

Similar to the `detectLanguage` function, but `detectLanguageMixed` returns a `Map` of 2-letter language codes that are mapped to the percentage of each language in the text.

*Syntax*

``` sql
detectLanguageMixed('text_to_be_analyzed')
```

*Arguments*

- `text_to_be_analyzed` — A collection (or sentences) of strings to analyze. [String](../data-types/string.md#string).

*Returned value*

- `Map(String, Float32)`: The keys are 2-letter ISO codes and the values are the percentage of text found for that language.

*Examples*

Query:

```sql
SELECT detectLanguageMixed('二兎を追う者は一兎をも得ず二兎を追う者は一兎をも得ず A vaincre sans peril, on triomphe sans gloire.');
```

Result:

```response
┌─detectLanguageMixed()─┐
│ {'ja':0.62,'fr':0.36} │
└───────────────────────┘
```

## detectProgrammingLanguage

Determines the programming language of the source code. It calculates all the unigrams and bigrams of commands in the source code, then looks them up in a marked-up dictionary of unigram and bigram weights for various programming languages, and returns the language with the largest total weight.

*Syntax*

``` sql
detectProgrammingLanguage('source_code')
```

*Arguments*

- `source_code` — String representation of the source code to analyze. [String](../data-types/string.md#string).

*Returned value*

- Programming language. [String](../data-types/string.md).

*Examples*

Query:

```sql
SELECT detectProgrammingLanguage('#include <iostream>');
```

Result:

```response
┌─detectProgrammingLanguage('#include <iostream>')─┐
│ C++                                              │
└──────────────────────────────────────────────────┘
```
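The unigram/bigram weighting described above can be sketched in Python. The weight tables here are tiny illustrative stand-ins, not the marked-up dictionary that ships with ClickHouse:

```python
from collections import Counter

# Tiny illustrative weight tables; the real function uses a large
# marked-up dictionary of unigram/bigram weights per language.
WEIGHTS = {
    "C++":    {("#include",): 2.0, ("std",): 1.0, ("#include", "<iostream>"): 3.0},
    "Python": {("import",): 2.0, ("def",): 2.0, ("import", "sys"): 3.0},
}

def detect_programming_language(source_code: str) -> str:
    tokens = source_code.split()
    grams = Counter()
    grams.update((t,) for t in tokens)        # unigrams
    grams.update(zip(tokens, tokens[1:]))     # bigrams
    # Sum the weight of every known gram per language, pick the maximum.
    scores = {
        lang: sum(w * grams[g] for g, w in table.items())
        for lang, table in WEIGHTS.items()
    }
    return max(scores, key=scores.get)

print(detect_programming_language("#include <iostream>"))  # C++
```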

## detectLanguageUnknown

Similar to the `detectLanguage` function, except the `detectLanguageUnknown` function works with non-UTF8-encoded strings. Prefer this version when your character set is UTF-16 or UTF-32.

*Syntax*

``` sql
detectLanguageUnknown('text_to_be_analyzed')
```

*Arguments*

- `text_to_be_analyzed` — A collection (or sentences) of strings to analyze. [String](../data-types/string.md#string).

*Returned value*

- The 2-letter ISO code of the detected language.

Other possible results:

- `un` = unknown, cannot detect any language.
- `other` = the detected language does not have a 2-letter code.

*Examples*

Query:

```sql
SELECT detectLanguageUnknown('Ich bleibe für ein paar Tage.');
```

Result:

```response
┌─detectLanguageUnknown('Ich bleibe für ein paar Tage.')─┐
│ de                                                     │
└────────────────────────────────────────────────────────┘
```

## detectTonality

Determines the sentiment of text data. Uses a marked-up sentiment dictionary, in which each word has a tonality ranging from `-12` to `6`.
For each text, it calculates the average sentiment value of its words and returns it in the range `[-1,1]`.

:::note
This function is limited in its current form. Currently it makes use of the embedded emotional dictionary at `/contrib/nlp-data/tonality_ru.zst` and only works for the Russian language.
:::

*Syntax*

``` sql
detectTonality(text)
```

*Arguments*

- `text` — The text to be analyzed. [String](../data-types/string.md#string).

*Returned value*

- The average sentiment value of the words in `text`. [Float32](../data-types/float.md).

*Examples*

Query:

```sql
SELECT detectTonality('Шарик - хороший пёс'), -- Sharik is a good dog
       detectTonality('Шарик - пёс'), -- Sharik is a dog
       detectTonality('Шарик - плохой пёс'); -- Sharik is a bad dog
```

Result:

```response
┌─detectTonality('Шарик - хороший пёс')─┬─detectTonality('Шарик - пёс')─┬─detectTonality('Шарик - плохой пёс')─┐
│                               0.44445 │                             0 │                                 -0.3 │
└───────────────────────────────────────┴───────────────────────────────┴──────────────────────────────────────┘
```
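The averaging step described above can be sketched in Python. The word weights here are invented for illustration (the real function reads them from the embedded dictionary), and dividing the per-text average by the dictionary's maximum magnitude is an assumed normalization that keeps results inside `[-1,1]`, not the exact formula ClickHouse uses:

```python
# Illustrative tonality weights; the real function loads a marked-up
# Russian dictionary where each word scores between -12 and 6.
TONALITY = {"хороший": 4, "плохой": -3}

def detect_tonality(text: str) -> float:
    words = text.split()
    if not words:
        return 0.0
    total = sum(TONALITY.get(w, 0) for w in words)
    # Assumed normalization: divide the per-text average by the
    # dictionary's maximum magnitude so results land in [-1, 1].
    return total / len(words) / 12

print(detect_tonality("Шарик - хороший пёс"))
```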
## lemmatize

Performs lemmatization on a given word. Needs dictionaries to operate, which can be obtained [here](https://github.com/vpodpecan/lemmagen3/tree/master/src/lemmagen3/models).

*Syntax*

``` sql
lemmatize('language', word)
```

*Arguments*

- `language` — Language whose rules will be applied. [String](../data-types/string.md#string).
- `word` — Word that needs to be lemmatized. Must be lowercase. [String](../data-types/string.md#string).

*Examples*

Query:

``` sql
SELECT lemmatize('en', 'wolves');
```

Result:

``` text
┌─lemmatize("wolves")─┐
│ "wolf"              │
└─────────────────────┘
```

*Configuration*

This configuration specifies that the dictionary `en.bin` should be used for lemmatization of English (`en`) words. The `.bin` files can be downloaded from
[here](https://github.com/vpodpecan/lemmagen3/tree/master/src/lemmagen3/models).

``` xml
<lemmatizers>
    <lemmatizer>
        <!-- highlight-start -->
        <lang>en</lang>
        <path>en.bin</path>
        <!-- highlight-end -->
    </lemmatizer>
</lemmatizers>
```

## stem

Performs stemming on a given word.

*Syntax*

``` sql
stem('language', word)
```

*Arguments*

- `language` — Language whose rules will be applied. Use the two-letter [ISO 639-1 code](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes).
- `word` — Word that needs to be stemmed. Must be in lowercase. [String](../data-types/string.md#string).

*Examples*

Query:
@@ -40,7 +311,7 @@ Result:
│ ['I','think','it','is','a','bless','in','disguis'] │
└────────────────────────────────────────────────────┘
```

*Supported languages for stem()*

:::note
The stem() function uses the [Snowball stemming](https://snowballstem.org/) library; see the Snowball website for updated languages etc.
@@ -76,53 +347,6 @@ The stem() function uses the [Snowball stemming](https://snowballstem.org/) library
- Turkish
- Yiddish

## synonyms

Finds synonyms to a given word. There are two types of synonym extensions: `plain` and `wordnet`.
@@ -131,18 +355,18 @@ With the `plain` extension type we need to provide a path to a simple text file,

With the `wordnet` extension type we need to provide a path to a directory with the WordNet thesaurus in it. The thesaurus must contain a WordNet sense index.

*Syntax*

``` sql
synonyms('extension_name', word)
```

*Arguments*

- `extension_name` — Name of the extension in which the search will be performed. [String](../data-types/string.md#string).
- `word` — Word that will be searched in the extension. [String](../data-types/string.md#string).

*Examples*

Query:

@@ -158,7 +382,7 @@ Result:
└──────────────────────────────────────────┘
```

*Configuration*

``` xml
<synonyms_extensions>
    <extension>
@@ -173,153 +397,3 @@ Result:
    </extension>
</synonyms_extensions>
```
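The `plain` extension described above is just a simple text file of synonym sets; a minimal Python sketch of the lookup it implies (the file format here — one comma-separated synonym set per line — is an assumption for illustration) could look like:

```python
# Assumed 'plain' extension format: each line is one comma-separated
# set of mutual synonyms.
PLAIN_FILE = """\
big,large,huge
fast,quick,rapid
"""

def load_plain_extension(text: str) -> dict[str, list[str]]:
    index: dict[str, list[str]] = {}
    for line in text.splitlines():
        words = [w.strip() for w in line.split(",") if w.strip()]
        for w in words:
            # A word's synonyms are the other members of its set.
            index[w] = [s for s in words if s != w]
    return index

def synonyms(index: dict[str, list[str]], word: str) -> list[str]:
    return index.get(word, [])

idx = load_plain_extension(PLAIN_FILE)
print(synonyms(idx, "big"))  # ['large', 'huge']
```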
@@ -3820,3 +3820,43 @@ Result:
 10. │ df      │       │
     └────┴───────────────────────┘
```

## displayName

Returns the value of `display_name` from [config](../../operations/configuration-files.md/#configuration-files), or the server's Fully Qualified Domain Name (FQDN) if it is not set.

**Syntax**

```sql
displayName()
```

**Returned value**

- Value of `display_name` from config, or the server FQDN if it is not set. [String](../data-types/string.md).

**Example**

The `display_name` can be set in `config.xml`. Taking for example a server with `display_name` configured to 'production':

```xml
<!-- It is the name that will be shown in the clickhouse-client.
     By default, anything with "production" will be highlighted in red in the query prompt.
-->
<display_name>production</display_name>
```

Query:

```sql
SELECT displayName();
```

Result:

```response
┌─displayName()─┐
│ production    │
└───────────────┘
```
@@ -9,8 +9,8 @@ sidebar_label: CONSTRAINT

Constraints can be added or deleted using the following syntax:

``` sql
ALTER TABLE [db].name [ON CLUSTER cluster] ADD CONSTRAINT [IF NOT EXISTS] constraint_name CHECK expression;
ALTER TABLE [db].name [ON CLUSTER cluster] DROP CONSTRAINT [IF EXISTS] constraint_name;
```

See more on [constraints](../../../sql-reference/statements/create/table.md#constraints).
@@ -141,6 +141,7 @@ $ clickhouse-client --param_tbl="numbers" --param_db="system" --param_col="number"
- `--secure` — if specified, a secure channel will be used.
- `--history_file` — path to the command history file.
- `--param_<name>` — value of a parameter for a [query with parameters](#cli-queries-with-parameters).
- `--jwt` — authorization via JSON Web Token. Available only in ClickHouse Cloud.

Instead of the `--host`, `--port`, `--user` and `--password` options, the ClickHouse client also supports connection strings (see the next section).
@@ -11,8 +11,8 @@ sidebar_label: "Manipulating Constraints"

Constraints can be added or deleted using the following queries:

``` sql
ALTER TABLE [db].name [ON CLUSTER cluster] ADD CONSTRAINT [IF NOT EXISTS] constraint_name CHECK expression;
ALTER TABLE [db].name [ON CLUSTER cluster] DROP CONSTRAINT [IF EXISTS] constraint_name;
```

These queries add or remove metadata about the constraints of the table `[db].name`, so they complete immediately.
@@ -9,8 +9,8 @@ sidebar_label: Constraints

Constraints can be added or deleted using the following syntax:

``` sql
ALTER TABLE [db].name [ON CLUSTER cluster] ADD CONSTRAINT [IF NOT EXISTS] constraint_name CHECK expression;
ALTER TABLE [db].name [ON CLUSTER cluster] DROP CONSTRAINT [IF EXISTS] constraint_name;
```

See [constraints](../../../sql-reference/statements/create/table.mdx#constraints).
@@ -64,6 +64,7 @@ namespace ErrorCodes
    extern const int NETWORK_ERROR;
    extern const int AUTHENTICATION_FAILED;
    extern const int NO_ELEMENTS_IN_CONFIG;
    extern const int USER_EXPIRED;
}

@@ -74,6 +75,12 @@ void Client::processError(const String & query) const
    fmt::print(stderr, "Received exception from server (version {}):\n{}\n",
        server_version,
        getExceptionMessage(*server_exception, print_stack_trace, true));

    if (server_exception->code() == ErrorCodes::USER_EXPIRED)
    {
        server_exception->rethrow();
    }

    if (is_interactive)
    {
        fmt::print(stderr, "\n");
@@ -944,6 +951,7 @@ void Client::addOptions(OptionsDescription & options_description)
    ("ssh-key-file", po::value<std::string>(), "File containing the SSH private key for authentication with the server.")
    ("ssh-key-passphrase", po::value<std::string>(), "Passphrase for the SSH private key specified by --ssh-key-file.")
    ("quota_key", po::value<std::string>(), "A string to differentiate quotas when the user has keyed quotas configured on the server")
    ("jwt", po::value<std::string>(), "Use JWT for authentication")

    ("max_client_network_bandwidth", po::value<int>(), "the maximum speed of data exchange over the network for the client in bytes per second.")
    ("compression", po::value<bool>(), "enable or disable compression (enabled by default for remote communication and disabled for localhost communication).")
@@ -1102,6 +1110,12 @@ void Client::processOptions(const OptionsDescription & options_description,
        config().setBool("no-warnings", true);
    if (options.count("fake-drop"))
        config().setString("ignore_drop_queries_probability", "1");
    if (options.count("jwt"))
    {
        if (!options["user"].defaulted())
            throw Exception(ErrorCodes::BAD_ARGUMENTS, "User and JWT flags can't be specified together");
        config().setString("jwt", options["jwt"].as<std::string>());
    }
    if (options.count("accept-invalid-certificate"))
    {
        config().setString("openSSL.client.invalidCertificateHandler.name", "AcceptCertificateHandler");
@@ -577,8 +577,7 @@ try
#if USE_SSL
            CertificateReloader::instance().tryLoad(*config);
#endif
        },
        /* already_loaded = */ false); /// Reload it right now (initial loading)
    });

    SCOPE_EXIT({
        LOG_INFO(log, "Shutting down.");

@@ -1540,6 +1540,8 @@ try
    global_context->setMaxDictionaryNumToWarn(new_server_settings.max_dictionary_num_to_warn);
    global_context->setMaxDatabaseNumToWarn(new_server_settings.max_database_num_to_warn);
    global_context->setMaxPartNumToWarn(new_server_settings.max_part_num_to_warn);
    /// Only for system.server_settings
    global_context->setConfigReloaderInterval(new_server_settings.config_reload_interval_ms);

    SlotCount concurrent_threads_soft_limit = UnlimitedSlots;
    if (new_server_settings.concurrent_threads_soft_limit_num > 0 && new_server_settings.concurrent_threads_soft_limit_num < concurrent_threads_soft_limit)
@@ -1702,8 +1704,7 @@ try

        /// Must be the last.
        latest_config = config;
    },
    /* already_loaded = */ false); /// Reload it right now (initial loading)
    });

    const auto listen_hosts = getListenHosts(config());
    const auto interserver_listen_hosts = getInterserverListenHosts(config());
@@ -108,6 +108,9 @@ bool Authentication::areCredentialsValid(
        case AuthenticationType::HTTP:
            throw Authentication::Require<BasicCredentials>("ClickHouse Basic Authentication");

        case AuthenticationType::JWT:
            throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "JWT is available only in ClickHouse Cloud");

        case AuthenticationType::KERBEROS:
            return external_authenticators.checkKerberosCredentials(auth_data.getKerberosRealm(), *gss_acceptor_context);

@@ -149,6 +152,9 @@ bool Authentication::areCredentialsValid(
        case AuthenticationType::SSL_CERTIFICATE:
            throw Authentication::Require<BasicCredentials>("ClickHouse X.509 Authentication");

        case AuthenticationType::JWT:
            throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "JWT is available only in ClickHouse Cloud");

        case AuthenticationType::SSH_KEY:
#if USE_SSH
            throw Authentication::Require<SshCredentials>("SSH Keys Authentication");
@@ -193,6 +199,9 @@ bool Authentication::areCredentialsValid(
            throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "SSH is disabled, because ClickHouse is built without libssh");
#endif

        case AuthenticationType::JWT:
            throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "JWT is available only in ClickHouse Cloud");

        case AuthenticationType::BCRYPT_PASSWORD:
            return checkPasswordBcrypt(basic_credentials->getPassword(), auth_data.getPasswordHashBinary());

@@ -222,6 +231,9 @@ bool Authentication::areCredentialsValid(
        case AuthenticationType::HTTP:
            throw Authentication::Require<BasicCredentials>("ClickHouse Basic Authentication");

        case AuthenticationType::JWT:
            throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "JWT is available only in ClickHouse Cloud");

        case AuthenticationType::KERBEROS:
            throw Authentication::Require<GSSAcceptorContext>(auth_data.getKerberosRealm());

@@ -254,6 +266,9 @@ bool Authentication::areCredentialsValid(
        case AuthenticationType::HTTP:
            throw Authentication::Require<BasicCredentials>("ClickHouse Basic Authentication");

        case AuthenticationType::JWT:
            throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "JWT is available only in ClickHouse Cloud");

        case AuthenticationType::KERBEROS:
            throw Authentication::Require<GSSAcceptorContext>(auth_data.getKerberosRealm());
@@ -135,6 +135,7 @@ void AuthenticationData::setPassword(const String & password_)
        case AuthenticationType::BCRYPT_PASSWORD:
        case AuthenticationType::NO_PASSWORD:
        case AuthenticationType::LDAP:
        case AuthenticationType::JWT:
        case AuthenticationType::KERBEROS:
        case AuthenticationType::SSL_CERTIFICATE:
        case AuthenticationType::SSH_KEY:
@@ -251,6 +252,7 @@ void AuthenticationData::setPasswordHashBinary(const Digest & hash)

        case AuthenticationType::NO_PASSWORD:
        case AuthenticationType::LDAP:
        case AuthenticationType::JWT:
        case AuthenticationType::KERBEROS:
        case AuthenticationType::SSL_CERTIFICATE:
        case AuthenticationType::SSH_KEY:
@@ -322,6 +324,10 @@ std::shared_ptr<ASTAuthenticationData> AuthenticationData::toAST() const
            node->children.push_back(std::make_shared<ASTLiteral>(getLDAPServerName()));
            break;
        }
        case AuthenticationType::JWT:
        {
            throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "JWT is available only in ClickHouse Cloud");
        }
        case AuthenticationType::KERBEROS:
        {
            const auto & realm = getKerberosRealm();
@@ -72,6 +72,11 @@ const AuthenticationTypeInfo & AuthenticationTypeInfo::get(AuthenticationType ty
        {
            static const auto info = make_info(Keyword::HTTP);
            return info;
        }
        case AuthenticationType::JWT:
        {
            static const auto info = make_info(Keyword::JWT);
            return info;
        }
        case AuthenticationType::MAX:
            break;
    }
@@ -41,6 +41,9 @@ enum class AuthenticationType : uint8_t
    /// Authentication through HTTP protocol
    HTTP,

    /// JSON Web Token
    JWT,

    MAX,
};
@@ -33,6 +33,8 @@ void User::setName(const String & name_)
        throw Exception(ErrorCodes::BAD_ARGUMENTS, "User name '{}' is reserved", name_);
    if (name_.starts_with(EncodedUserInfo::SSH_KEY_AUTHENTICAION_MARKER))
        throw Exception(ErrorCodes::BAD_ARGUMENTS, "User name '{}' is reserved", name_);
    if (name_.starts_with(EncodedUserInfo::JWT_AUTHENTICAION_MARKER))
        throw Exception(ErrorCodes::BAD_ARGUMENTS, "User name '{}' is reserved", name_);
    name = name_;
}
@@ -880,8 +880,7 @@ void UsersConfigAccessStorage::load(
        Settings::checkNoSettingNamesAtTopLevel(*new_config, users_config_path);
        parseFromConfig(*new_config);
        access_control.getChangesNotifier().sendNotifications();
    },
    /* already_loaded = */ false);
    });
}

void UsersConfigAccessStorage::startPeriodicReloading()
@@ -43,50 +43,56 @@ public:
        bool replaced_argument = false;
        auto replaced_uniq_function_arguments_nodes = function_node->getArguments().getNodes();

        for (auto & uniq_function_argument_node : replaced_uniq_function_arguments_nodes)
        /// Replace injective function with its single argument
        auto remove_injective_function = [&replaced_argument](QueryTreeNodePtr & arg) -> bool
        {
            auto * uniq_function_argument_node_typed = uniq_function_argument_node->as<FunctionNode>();
            if (!uniq_function_argument_node_typed || !uniq_function_argument_node_typed->isOrdinaryFunction())
                continue;

            auto & uniq_function_argument_node_argument_nodes = uniq_function_argument_node_typed->getArguments().getNodes();
            auto * arg_typed = arg->as<FunctionNode>();
            if (!arg_typed || !arg_typed->isOrdinaryFunction())
                return false;

            /// Do not apply optimization if injective function contains multiple arguments
            if (uniq_function_argument_node_argument_nodes.size() != 1)
                continue;
            auto & arg_arguments_nodes = arg_typed->getArguments().getNodes();
            if (arg_arguments_nodes.size() != 1)
                return false;

            const auto & uniq_function_argument_node_function = uniq_function_argument_node_typed->getFunction();
            if (!uniq_function_argument_node_function->isInjective({}))
                continue;
            const auto & arg_function = arg_typed->getFunction();
            if (!arg_function->isInjective({}))
                return false;

            /// Replace injective function with its single argument
            uniq_function_argument_node = uniq_function_argument_node_argument_nodes[0];
            replaced_argument = true;
            arg = arg_arguments_nodes[0];
            return replaced_argument = true;
        };

        for (auto & uniq_function_argument_node : replaced_uniq_function_arguments_nodes)
        {
            while (remove_injective_function(uniq_function_argument_node))
                ;
        }

        if (!replaced_argument)
            return;

        DataTypes argument_types;
        argument_types.reserve(replaced_uniq_function_arguments_nodes.size());
        DataTypes replaced_argument_types;
        replaced_argument_types.reserve(replaced_uniq_function_arguments_nodes.size());

        for (const auto & function_node_argument : replaced_uniq_function_arguments_nodes)
            argument_types.emplace_back(function_node_argument->getResultType());
            replaced_argument_types.emplace_back(function_node_argument->getResultType());

        auto current_aggregate_function = function_node->getAggregateFunction();
        AggregateFunctionProperties properties;
        auto aggregate_function = AggregateFunctionFactory::instance().get(
        auto replaced_aggregate_function = AggregateFunctionFactory::instance().get(
            function_node->getFunctionName(),
            NullsAction::EMPTY,
            argument_types,
            function_node->getAggregateFunction()->getParameters(),
            replaced_argument_types,
            current_aggregate_function->getParameters(),
            properties);

        /// uniqCombined returns nullable with nullable arguments so the result type might change which breaks the pass
        if (!aggregate_function->getResultType()->equals(*function_node->getAggregateFunction()->getResultType()))
        if (!replaced_aggregate_function->getResultType()->equals(*current_aggregate_function->getResultType()))
            return;

        function_node->getArguments().getNodes() = replaced_uniq_function_arguments_nodes;
        function_node->resolveAsAggregateFunction(std::move(aggregate_function));
        function_node->getArguments().getNodes() = std::move(replaced_uniq_function_arguments_nodes);
        function_node->resolveAsAggregateFunction(std::move(replaced_aggregate_function));
    }
};
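The pass above relies on one fact: for an injective function `f`, `uniq(f(x))` equals `uniq(x)`, so the wrapper call can be stripped from the aggregate's argument (repeatedly, via the `while (remove_injective_function(...))` loop). A minimal Python sketch of that property — illustrative only, not ClickHouse code:

```python
def count_uniq(values):
    """uniq() counts distinct values."""
    return len(set(values))

def negate(x):
    # Injective: distinct inputs always map to distinct outputs.
    return -x

xs = [1, 2, 2, 3]

# Because negate is injective, uniq(negate(x)) == uniq(x), so an optimizer
# may drop the wrapper and aggregate over the bare column instead.
assert count_uniq(map(negate, xs)) == count_uniq(xs) == 3
```

The `uniqCombined` guard at the end of the pass exists because stripping a wrapper can change argument nullability and hence the aggregate's result type, in which case the rewrite must be abandoned.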
@@ -109,6 +109,7 @@ namespace ErrorCodes
    extern const int USER_SESSION_LIMIT_EXCEEDED;
    extern const int NOT_IMPLEMENTED;
    extern const int CANNOT_READ_FROM_FILE_DESCRIPTOR;
    extern const int USER_EXPIRED;
}

}
@@ -2270,7 +2271,7 @@ bool ClientBase::executeMultiQuery(const String & all_queries_text)
            catch (...)
            {
                // Surprisingly, this is a client error. A server error would
                // have been reported without throwing (see onReceiveSeverException()).
                // have been reported without throwing (see onReceiveExceptionFromServer()).
                client_exception = std::make_unique<Exception>(getCurrentExceptionMessageAndPattern(print_stack_trace), getCurrentExceptionCode());
                have_error = true;
            }
@@ -2643,6 +2644,9 @@ void ClientBase::runInteractive()
        }
        catch (const Exception & e)
        {
            if (e.code() == ErrorCodes::USER_EXPIRED)
                break;

            /// We don't need to handle the test hints in the interactive mode.
            std::cerr << "Exception on client:" << std::endl << getExceptionMessage(e, print_stack_trace, true) << std::endl << std::endl;
            client_exception.reset(e.clone());
@@ -129,6 +129,7 @@ protected:
        const std::vector<Arguments> & hosts_and_ports_arguments) = 0;
    virtual void processConfig() = 0;

    /// Returns true if query processing was successful.
    bool processQueryText(const String & text);

    virtual void readArguments(
@@ -74,6 +74,7 @@ Connection::Connection(const String & host_, UInt16 port_,
    const String & default_database_,
    const String & user_, const String & password_,
    [[maybe_unused]] const SSHKey & ssh_private_key_,
    const String & jwt_,
    const String & quota_key_,
    const String & cluster_,
    const String & cluster_secret_,
@@ -86,6 +87,7 @@ Connection::Connection(const String & host_, UInt16 port_,
    , ssh_private_key(ssh_private_key_)
#endif
    , quota_key(quota_key_)
    , jwt(jwt_)
    , cluster(cluster_)
    , cluster_secret(cluster_secret_)
    , client_name(client_name_)
@@ -341,6 +343,11 @@ void Connection::sendHello()
        performHandshakeForSSHAuth();
    }
#endif
    else if (!jwt.empty())
    {
        writeStringBinary(EncodedUserInfo::JWT_AUTHENTICAION_MARKER, *out);
        writeStringBinary(jwt, *out);
    }
    else
    {
        writeStringBinary(user, *out);
@@ -1310,6 +1317,7 @@ ServerConnectionPtr Connection::createConnection(const ConnectionParameters & pa
        parameters.user,
        parameters.password,
        parameters.ssh_private_key,
        parameters.jwt,
        parameters.quota_key,
        "", /* cluster */
        "", /* cluster_secret */
@@ -53,6 +53,7 @@ public:
        const String & default_database_,
        const String & user_, const String & password_,
        const SSHKey & ssh_private_key_,
        const String & jwt_,
        const String & quota_key_,
        const String & cluster_,
        const String & cluster_secret_,
@@ -173,6 +174,7 @@ private:
    SSHKey ssh_private_key;
#endif
    String quota_key;
    String jwt;

    /// For inter-server authorization
    String cluster;
@@ -52,31 +52,11 @@ ConnectionParameters::ConnectionParameters(const Poco::Util::AbstractConfigurati
    /// changed the default value to "default" to fix the issue when the user in the prompt is blank
    user = config.getString("user", "default");

    if (!config.has("ssh-key-file"))
    if (config.has("jwt"))
    {
        bool password_prompt = false;
        if (config.getBool("ask-password", false))
        {
            if (config.has("password"))
                throw Exception(ErrorCodes::BAD_ARGUMENTS, "Specified both --password and --ask-password. Remove one of them");
            password_prompt = true;
        }
        else
        {
            password = config.getString("password", "");
            /// if the value of --password is omitted, the password will be set implicitly to "\n"
            if (password == ASK_PASSWORD)
                password_prompt = true;
        }
        if (password_prompt)
        {
            std::string prompt{"Password for user (" + user + "): "};
            char buf[1000] = {};
            if (auto * result = readpassphrase(prompt.c_str(), buf, sizeof(buf), 0))
                password = result;
        }
        jwt = config.getString("jwt");
    }
    else
    else if (config.has("ssh-key-file"))
    {
#if USE_SSH
        std::string filename = config.getString("ssh-key-file");
@@ -102,6 +82,30 @@ ConnectionParameters::ConnectionParameters(const Poco::Util::AbstractConfigurati
        throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "SSH is disabled, because ClickHouse is built without libssh");
#endif
    }
    else
    {
        bool password_prompt = false;
        if (config.getBool("ask-password", false))
        {
            if (config.has("password"))
                throw Exception(ErrorCodes::BAD_ARGUMENTS, "Specified both --password and --ask-password. Remove one of them");
            password_prompt = true;
        }
        else
        {
            password = config.getString("password", "");
            /// if the value of --password is omitted, the password will be set implicitly to "\n"
            if (password == ASK_PASSWORD)
                password_prompt = true;
        }
        if (password_prompt)
        {
            std::string prompt{"Password for user (" + user + "): "};
            char buf[1000] = {};
            if (auto * result = readpassphrase(prompt.c_str(), buf, sizeof(buf), 0))
                password = result;
        }
    }

    quota_key = config.getString("quota_key", "");

@@ -139,7 +143,7 @@ ConnectionParameters::ConnectionParameters(const Poco::Util::AbstractConfigurati
}

UInt16 ConnectionParameters::getPortFromConfig(const Poco::Util::AbstractConfiguration & config,
    std::string connection_host)
    const std::string & connection_host)
{
    bool is_secure = enableSecureConnection(config, connection_host);
    return config.getInt("port",
@@ -22,6 +22,7 @@ struct ConnectionParameters
    std::string password;
    std::string quota_key;
    SSHKey ssh_private_key;
    std::string jwt;
    Protocol::Secure security = Protocol::Secure::Disable;
    Protocol::Compression compression = Protocol::Compression::Enable;
    ConnectionTimeouts timeouts;
@@ -30,7 +31,7 @@ struct ConnectionParameters
    ConnectionParameters(const Poco::Util::AbstractConfiguration & config, std::string host);
    ConnectionParameters(const Poco::Util::AbstractConfiguration & config, std::string host, std::optional<UInt16> port);

    static UInt16 getPortFromConfig(const Poco::Util::AbstractConfiguration & config, std::string connection_host);
    static UInt16 getPortFromConfig(const Poco::Util::AbstractConfiguration & config, const std::string & connection_host);

    /// Ask to enter the user's password if password option contains this value.
    /// "\n" is used because there is hardly a chance that a user would use '\n' as password.
@@ -123,7 +123,7 @@ protected:
    {
        return std::make_shared<Connection>(
            host, port,
            default_database, user, password, SSHKey(), quota_key,
            default_database, user, password, SSHKey(), /*jwt*/ "", quota_key,
            cluster, cluster_secret,
            client_name, compression, secure);
    }
@@ -19,8 +19,7 @@ ConfigReloader::ConfigReloader(
    const std::string & preprocessed_dir_,
    zkutil::ZooKeeperNodeCache && zk_node_cache_,
    const zkutil::EventPtr & zk_changed_event_,
    Updater && updater_,
    bool already_loaded)
    Updater && updater_)
    : config_path(config_path_)
    , extra_paths(extra_paths_)
    , preprocessed_dir(preprocessed_dir_)
@@ -28,10 +27,15 @@ ConfigReloader::ConfigReloader(
    , zk_changed_event(zk_changed_event_)
    , updater(std::move(updater_))
{
    if (!already_loaded)
        reloadIfNewer(/* force = */ true, /* throw_on_error = */ true, /* fallback_to_preprocessed = */ true, /* initial_loading = */ true);
}
    auto config = reloadIfNewer(/* force = */ true, /* throw_on_error = */ true, /* fallback_to_preprocessed = */ true, /* initial_loading = */ true);

    if (config.has_value())
        reload_interval = std::chrono::milliseconds(config->configuration->getInt64("config_reload_interval_ms", DEFAULT_RELOAD_INTERVAL.count()));
    else
        reload_interval = DEFAULT_RELOAD_INTERVAL;

    LOG_TRACE(log, "Config reload interval set to {}ms", reload_interval.count());
}

void ConfigReloader::start()
{
@@ -82,7 +86,17 @@ void ConfigReloader::run()
        if (quit)
            return;

        reloadIfNewer(zk_changed, /* throw_on_error = */ false, /* fallback_to_preprocessed = */ false, /* initial_loading = */ false);
        auto config = reloadIfNewer(zk_changed, /* throw_on_error = */ false, /* fallback_to_preprocessed = */ false, /* initial_loading = */ false);
        if (config.has_value())
        {
            auto new_reload_interval = std::chrono::milliseconds(config->configuration->getInt64("config_reload_interval_ms", DEFAULT_RELOAD_INTERVAL.count()));
            if (new_reload_interval != reload_interval)
            {
                reload_interval = new_reload_interval;
                LOG_TRACE(log, "Config reload interval changed to {}ms", reload_interval.count());
            }
        }

    }
    catch (...)
    {
@@ -92,7 +106,7 @@ void ConfigReloader::run()
    }
}

void ConfigReloader::reloadIfNewer(bool force, bool throw_on_error, bool fallback_to_preprocessed, bool initial_loading)
std::optional<ConfigProcessor::LoadedConfig> ConfigReloader::reloadIfNewer(bool force, bool throw_on_error, bool fallback_to_preprocessed, bool initial_loading)
{
    std::lock_guard lock(reload_mutex);

@@ -120,7 +134,7 @@ void ConfigReloader::reloadIfNewer(bool force, bool throw_on_error, bool fallbac
            throw;

        tryLogCurrentException(log, "ZooKeeper error when loading config from '" + config_path + "'");
        return;
        return std::nullopt;
    }
    catch (...)
    {
@@ -128,7 +142,7 @@ void ConfigReloader::reloadIfNewer(bool force, bool throw_on_error, bool fallbac
            throw;

        tryLogCurrentException(log, "Error loading config from '" + config_path + "'");
        return;
        return std::nullopt;
    }
    config_processor.savePreprocessedConfig(loaded_config, preprocessed_dir);

@@ -154,11 +168,13 @@ void ConfigReloader::reloadIfNewer(bool force, bool throw_on_error, bool fallbac
            if (throw_on_error)
                throw;
            tryLogCurrentException(log, "Error updating configuration from '" + config_path + "' config.");
            return;
            return std::nullopt;
        }

        LOG_DEBUG(log, "Loaded config '{}', performed update on configuration", config_path);
        return loaded_config;
    }
    return std::nullopt;
}

struct ConfigReloader::FileWithTimestamp
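The ConfigReloader change above makes the polling period itself hot-reloadable: after every successful reload the loop re-reads `config_reload_interval_ms` from the freshly loaded config, falling back to the 2000 ms default when the reload fails (`reloadIfNewer` now returns `std::nullopt` on error). A small Python sketch of that selection logic, with illustrative names only:

```python
# Mirrors the interval-selection logic: a failed reload (None, i.e. the
# std::nullopt case) keeps the default; a successful reload takes the
# value of config_reload_interval_ms, defaulting to 2000 ms if absent.
DEFAULT_RELOAD_INTERVAL_MS = 2000

def pick_interval(loaded_config):
    """Return the polling interval (ms) to use after a reload attempt."""
    if loaded_config is None:  # reload failed -> keep the built-in default
        return DEFAULT_RELOAD_INTERVAL_MS
    return int(loaded_config.get("config_reload_interval_ms",
                                 DEFAULT_RELOAD_INTERVAL_MS))

assert pick_interval(None) == 2000
assert pick_interval({"config_reload_interval_ms": 1000}) == 1000
assert pick_interval({}) == 2000
```

The integration test added later in this commit (`test_config_reloader_interval`) exercises exactly this: it flips the setting from 1000 to 7777 in the config file and waits for the "Config reload interval changed" log line.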
@@ -17,8 +17,6 @@ namespace Poco { class Logger; }
namespace DB
{

class Context;

/** Every two seconds checks configuration files for update.
  * If configuration is changed, then config will be reloaded by ConfigProcessor
  * and the reloaded config will be applied via Updater functor.
@@ -27,6 +25,8 @@ class Context;
class ConfigReloader
{
public:
    static constexpr auto DEFAULT_RELOAD_INTERVAL = std::chrono::milliseconds(2000);

    using Updater = std::function<void(ConfigurationPtr, bool)>;

    ConfigReloader(
@@ -35,8 +35,7 @@ public:
        const std::string & preprocessed_dir,
        zkutil::ZooKeeperNodeCache && zk_node_cache,
        const zkutil::EventPtr & zk_changed_event,
        Updater && updater,
        bool already_loaded);
        Updater && updater);

    ~ConfigReloader();

@@ -53,7 +52,7 @@ public:
private:
    void run();

    void reloadIfNewer(bool force, bool throw_on_error, bool fallback_to_preprocessed, bool initial_loading);
    std::optional<ConfigProcessor::LoadedConfig> reloadIfNewer(bool force, bool throw_on_error, bool fallback_to_preprocessed, bool initial_loading);

    struct FileWithTimestamp;

@@ -67,8 +66,6 @@ private:

    FilesChangesTracker getNewFileList() const;

    static constexpr auto reload_interval = std::chrono::seconds(2);

    LoggerPtr log = getLogger("ConfigReloader");

    std::string config_path;
@@ -85,6 +82,8 @@ private:
    std::atomic<bool> quit{false};
    ThreadFromGlobalPool thread;

    std::chrono::milliseconds reload_interval = DEFAULT_RELOAD_INTERVAL;

    /// Locked inside reloadIfNewer.
    std::mutex reload_mutex;
};
@@ -63,6 +63,9 @@ const char USER_INTERSERVER_MARKER[] = " INTERSERVER SECRET ";
/// Marker for SSH-keys-based authentication (passed as the user name)
const char SSH_KEY_AUTHENTICAION_MARKER[] = " SSH KEY AUTHENTICATION ";

/// Marker for JSON Web Token authentication
const char JWT_AUTHENTICAION_MARKER[] = " JWT AUTHENTICATION ";

};

namespace Protocol
@@ -154,6 +154,7 @@ namespace DB
    M(String, merge_workload, "default", "Name of workload to be used to access resources for all merges (may be overridden by a merge tree setting)", 0) \
    M(String, mutation_workload, "default", "Name of workload to be used to access resources for all mutations (may be overridden by a merge tree setting)", 0) \
    M(Double, gwp_asan_force_sample_probability, 0, "Probability that an allocation from specific places will be sampled by GWP Asan (i.e. PODArray allocations)", 0) \
    M(UInt64, config_reload_interval_ms, 2000, "How often clickhouse will reload config and check for new changes", 0) \

    /// If you add a setting which can be updated at runtime, please update 'changeable_settings' map in StorageSystemServerSettings.cpp
@@ -6,7 +6,7 @@ namespace DB

static constexpr int FILECACHE_DEFAULT_MAX_FILE_SEGMENT_SIZE = 32 * 1024 * 1024; /// 32Mi
static constexpr int FILECACHE_DEFAULT_FILE_SEGMENT_ALIGNMENT = 4 * 1024 * 1024; /// 4Mi
static constexpr int FILECACHE_DEFAULT_BACKGROUND_DOWNLOAD_THREADS = 5;
static constexpr int FILECACHE_DEFAULT_BACKGROUND_DOWNLOAD_THREADS = 0;
static constexpr int FILECACHE_DEFAULT_BACKGROUND_DOWNLOAD_QUEUE_SIZE_LIMIT = 5000;
static constexpr int FILECACHE_DEFAULT_LOAD_METADATA_THREADS = 16;
static constexpr int FILECACHE_DEFAULT_MAX_ELEMENTS = 10000000;
@@ -91,6 +91,7 @@
#include <Common/StackTrace.h>
#include <Common/Config/ConfigHelper.h>
#include <Common/Config/ConfigProcessor.h>
#include <Common/Config/ConfigReloader.h>
#include <Common/Config/AbstractConfigurationComparison.h>
#include <Common/ZooKeeper/ZooKeeper.h>
#include <Common/ShellCommand.h>
@@ -367,6 +368,9 @@ struct ContextSharedPart : boost::noncopyable
    std::atomic_size_t max_view_num_to_warn = 10000lu;
    std::atomic_size_t max_dictionary_num_to_warn = 1000lu;
    std::atomic_size_t max_part_num_to_warn = 100000lu;
    /// Only for system.server_settings, the actual value is stored in the reloader itself
    std::atomic_size_t config_reload_interval_ms = ConfigReloader::DEFAULT_RELOAD_INTERVAL.count();

    String format_schema_path; /// Path to a directory that contains schema files used by input formats.
    String google_protos_path; /// Path to a directory that contains the proto files for the well-known Protobuf types.
    mutable OnceFlag action_locks_manager_initialized;
@@ -4503,6 +4507,16 @@ void Context::checkPartitionCanBeDropped(const String & database, const String &
    checkCanBeDropped(database, table, partition_size, max_partition_size_to_drop);
}

void Context::setConfigReloaderInterval(size_t value_ms)
{
    shared->config_reload_interval_ms.store(value_ms, std::memory_order_relaxed);
}

size_t Context::getConfigReloaderInterval() const
{
    return shared->config_reload_interval_ms.load(std::memory_order_relaxed);
}

InputFormatPtr Context::getInputFormat(const String & name, ReadBuffer & buf, const Block & sample, UInt64 max_block_size, const std::optional<FormatSettings> & format_settings, std::optional<size_t> max_parsing_threads) const
{
    return FormatFactory::instance().getInput(name, buf, sample, shared_from_this(), max_block_size, format_settings, max_parsing_threads);
@@ -1161,6 +1161,9 @@ public:
    size_t getMaxPartitionSizeToDrop() const;
    void checkPartitionCanBeDropped(const String & database, const String & table, const size_t & partition_size) const;
    void checkPartitionCanBeDropped(const String & database, const String & table, const size_t & partition_size, const size_t & max_partition_size_to_drop) const;
    /// Only for system.server_settings, actual value is stored in ConfigReloader
    void setConfigReloaderInterval(size_t value_ms);
    size_t getConfigReloaderInterval() const;

    /// Lets you select the compression codec according to the conditions described in the configuration file.
    std::shared_ptr<ICompressionCodec> chooseCompressionCodec(size_t part_size, double part_size_ratio) const;
@@ -86,6 +86,7 @@ ColumnsDescription SessionLogElement::getColumnsDescription()
        AUTH_TYPE_NAME_AND_VALUE(AuthType::SHA256_PASSWORD),
        AUTH_TYPE_NAME_AND_VALUE(AuthType::DOUBLE_SHA1_PASSWORD),
        AUTH_TYPE_NAME_AND_VALUE(AuthType::LDAP),
        AUTH_TYPE_NAME_AND_VALUE(AuthType::JWT),
        AUTH_TYPE_NAME_AND_VALUE(AuthType::KERBEROS),
        AUTH_TYPE_NAME_AND_VALUE(AuthType::SSH_KEY),
        AUTH_TYPE_NAME_AND_VALUE(AuthType::SSL_CERTIFICATE),
@@ -93,7 +94,7 @@ ColumnsDescription SessionLogElement::getColumnsDescription()
        AUTH_TYPE_NAME_AND_VALUE(AuthType::HTTP),
    });
#undef AUTH_TYPE_NAME_AND_VALUE
    static_assert(static_cast<int>(AuthenticationType::MAX) == 10);
    static_assert(static_cast<int>(AuthenticationType::MAX) == 11);

    auto interface_type_column = std::make_shared<DataTypeEnum8>(
        DataTypeEnum8::Values
@@ -89,6 +89,12 @@ void ASTAuthenticationData::formatImpl(const FormatSettings & settings, FormatSt
            password = true;
            break;
        }
        case AuthenticationType::JWT:
        {
            prefix = "CLAIMS";
            parameter = true;
            break;
        }
        case AuthenticationType::LDAP:
        {
            prefix = "SERVER";
@@ -250,6 +250,7 @@ namespace DB
    MR_MACROS(IS_NOT_NULL, "IS NOT NULL") \
    MR_MACROS(IS_NULL, "IS NULL") \
    MR_MACROS(JOIN, "JOIN") \
    MR_MACROS(JWT, "JWT") \
    MR_MACROS(KERBEROS, "KERBEROS") \
    MR_MACROS(KEY_BY, "KEY BY") \
    MR_MACROS(KEY, "KEY") \
@@ -1256,7 +1256,7 @@ Pipe ReadFromMergeTree::spreadMarkRangesAmongStreamsFinal(
        bool no_merging_final = do_not_merge_across_partitions_select_final &&
            std::distance(parts_to_merge_ranges[range_index], parts_to_merge_ranges[range_index + 1]) == 1 &&
            parts_to_merge_ranges[range_index]->data_part->info.level > 0 &&
            data.merging_params.is_deleted_column.empty();
            data.merging_params.is_deleted_column.empty() && !reader_settings.read_in_order;

        if (no_merging_final)
        {
@@ -1291,7 +1291,7 @@ Pipe ReadFromMergeTree::spreadMarkRangesAmongStreamsFinal(
        /// Parts of non-zero level still may contain duplicate PK values to merge on FINAL if there's is_deleted column,
        /// so we have to process all ranges. It would be more optimal to remove this flag and add an extra filtering step.
        bool split_parts_ranges_into_intersecting_and_non_intersecting_final = settings.split_parts_ranges_into_intersecting_and_non_intersecting_final &&
            data.merging_params.is_deleted_column.empty();
            data.merging_params.is_deleted_column.empty() && !reader_settings.read_in_order;

        SplitPartsWithRangesByPrimaryKeyResult split_ranges_result = splitPartsWithRangesByPrimaryKey(
            metadata_for_reading->getPrimaryKey(),
@@ -90,6 +90,7 @@ message QueryInfo {
  string user_name = 9;
  string password = 10;
  string quota = 11;
  string jwt = 25;

  // Works exactly like sessions in the HTTP protocol.
  string session_id = 12;
@@ -1057,7 +1057,7 @@ bool AlterCommand::isRemovingProperty() const

bool AlterCommand::isDropSomething() const
{
    return type == Type::DROP_COLUMN || type == Type::DROP_INDEX
    return type == Type::DROP_COLUMN || type == Type::DROP_INDEX || type == Type::DROP_STATISTICS
        || type == Type::DROP_CONSTRAINT || type == Type::DROP_PROJECTION;
}
|
@ -8083,6 +8083,13 @@ void MergeTreeData::checkDropCommandDoesntAffectInProgressMutations(const AlterC
|
||||
throw_exception(mutation_name, "column", command.column_name);
|
||||
}
|
||||
}
|
||||
else if (command.type == AlterCommand::DROP_STATISTICS)
|
||||
{
|
||||
for (const auto & stats_col1 : command.statistics_columns)
|
||||
for (const auto & stats_col2 : mutation_command.statistics_columns)
|
||||
if (stats_col1 == stats_col2)
|
||||
throw_exception(mutation_name, "statistics", stats_col1);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@@ -2004,7 +2004,9 @@ MutationCommands ReplicatedMergeTreeQueue::getMutationCommands(
    MutationCommands commands;
    for (auto it = begin; it != end; ++it)
    {
        chassert(mutation_pointer < it->second->entry->znode_name);
        /// FIXME : This was supposed to be fixed after releasing 23.5 (it fails in Upgrade check)
        /// but it's still present https://github.com/ClickHouse/ClickHouse/issues/65275
        /// chassert(mutation_pointer < it->second->entry->znode_name);
        mutation_ids.push_back(it->second->entry->znode_name);
        const auto & commands_from_entry = it->second->entry->commands;
        commands.insert(commands.end(), commands_from_entry.begin(), commands_from_entry.end());
@@ -1269,6 +1269,7 @@ MergeMutateSelectedEntryPtr StorageMergeTree::selectPartsToMutate(
            if (command.type != MutationCommand::Type::DROP_COLUMN
                && command.type != MutationCommand::Type::DROP_INDEX
                && command.type != MutationCommand::Type::DROP_PROJECTION
                && command.type != MutationCommand::Type::DROP_STATISTICS
                && command.type != MutationCommand::Type::RENAME_COLUMN)
            {
                commands_for_size_validation.push_back(command);
@@ -5656,7 +5656,7 @@ std::optional<QueryPipeline> StorageReplicatedMergeTree::distributedWriteFromClu
        {
            auto connection = std::make_shared<Connection>(
                node.host_name, node.port, query_context->getGlobalContext()->getCurrentDatabase(),
                node.user, node.password, SSHKey(), node.quota_key, node.cluster, node.cluster_secret,
                node.user, node.password, SSHKey(), /*jwt*/"", node.quota_key, node.cluster, node.cluster_secret,
                "ParallelInsertSelectInititiator",
                node.compression,
                node.secure
@@ -6,6 +6,7 @@
#include <IO/MMappedFileCache.h>
#include <IO/UncompressedCache.h>
#include <Interpreters/Context.h>
#include <Common/Config/ConfigReloader.h>
#include <Interpreters/ProcessList.h>
#include <Storages/MarkCache.h>
#include <Storages/MergeTree/MergeTreeBackgroundExecutor.h>
@@ -84,7 +85,8 @@ void StorageSystemServerSettings::fillData(MutableColumns & res_columns, Context
        {"mmap_cache_size", {std::to_string(context->getMMappedFileCache()->maxSizeInBytes()), ChangeableWithoutRestart::Yes}},

        {"merge_workload", {context->getMergeWorkload(), ChangeableWithoutRestart::Yes}},
        {"mutation_workload", {context->getMutationWorkload(), ChangeableWithoutRestart::Yes}}
        {"mutation_workload", {context->getMutationWorkload(), ChangeableWithoutRestart::Yes}},
        {"config_reload_interval_ms", {std::to_string(context->getConfigReloaderInterval()), ChangeableWithoutRestart::Yes}}
    };

    if (context->areBackgroundExecutorsInitialized())
@@ -1,13 +1,4 @@
00725_memory_tracking
01624_soft_constraints
02354_vector_search_queries
02901_parallel_replicas_rollup
02999_scalar_subqueries_bug_2
# Flaky list
01825_type_json_in_array
01414_mutations_and_errors_zookeeper
01287_max_execution_speed
# Check after ConstantNode refactoring
02154_parser_backtracking
02944_variant_as_common_type
02942_variant_cast
@@ -63,7 +63,10 @@ def get_access_token_by_key_app(private_key: str, app_id: int) -> str:
        "iss": app_id,
    }

    encoded_jwt = jwt.encode(payload, private_key, algorithm="RS256")
    # FIXME: apparently should be switched to this so that mypy is happy
    # jwt_instance = JWT()
    # encoded_jwt = jwt_instance.encode(payload, private_key, algorithm="RS256")
    encoded_jwt = jwt.encode(payload, private_key, algorithm="RS256")  # type: ignore
    installation_id = get_installation_id(encoded_jwt)
    return get_access_token_by_jwt(encoded_jwt, installation_id)
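The CI helper above signs a GitHub App JWT with `jwt.encode(..., algorithm="RS256")`. Structurally a JWT is just two base64url-encoded JSON segments (header and payload) plus a signature over them. A stdlib-only sketch of that structure, using HS256 instead of RS256 so no key material is needed (illustrative, not the CI code or the `jwt` package's API):

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # JWT uses unpadded base64url segments.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def encode_hs256(payload: dict, secret: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = ".".join(
        b64url(json.dumps(part, separators=(",", ":")).encode())
        for part in (header, payload)
    )
    # HS256 = HMAC-SHA256 over "header.payload".
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

now = int(time.time())
token = encode_hs256({"iat": now - 60, "exp": now + 600, "iss": 42}, b"secret")
assert token.count(".") == 2  # header.payload.signature
```

The real script's payload has the same shape (`iat`, `exp`, `iss` = app id); only the algorithm and signing key differ.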
@@ -0,0 +1 @@
#!/usr/bin/env python3
@@ -0,0 +1,4 @@
<?xml version="1.0" encoding="utf-8"?>
<clickhouse>
    <config_reload_interval_ms>1000</config_reload_interval_ms>
</clickhouse>
tests/integration/test_config_reloader_interval/test.py
@@ -0,0 +1,52 @@
#!/usr/bin/env python3

import pytest
import fnmatch

from helpers.cluster import ClickHouseCluster
from helpers.client import QueryRuntimeException

cluster = ClickHouseCluster(__file__)

node = cluster.add_instance(
    "node",
    main_configs=["configs/config_reloader.xml"],
)


@pytest.fixture(scope="module")
def start_cluster():
    try:
        cluster.start()
        yield cluster
    finally:
        cluster.shutdown()


def test_reload_config(start_cluster):
    assert node.wait_for_log_line(
        f"Config reload interval set to 1000ms", look_behind_lines=2000
    )

    assert (
        node.query(
            "SELECT value from system.server_settings where name = 'config_reload_interval_ms'"
        )
        == "1000\n"
    )
    node.replace_in_config(
        "/etc/clickhouse-server/config.d/config_reloader.xml",
        "1000",
        "7777",
    )

    assert node.wait_for_log_line(
        f"Config reload interval changed to 7777ms", look_behind_lines=2000
    )

    assert (
        node.query(
            "SELECT value from system.server_settings where name = 'config_reload_interval_ms'"
        )
        == "7777\n"
    )
@@ -1136,7 +1136,7 @@ CREATE TABLE system.users
    `name` String,
    `id` UUID,
    `storage` String,
    `auth_type` Enum8('no_password' = 0, 'plaintext_password' = 1, 'sha256_password' = 2, 'double_sha1_password' = 3, 'ldap' = 4, 'kerberos' = 5, 'ssl_certificate' = 6, 'bcrypt_password' = 7, 'ssh_key' = 8, 'http' = 9),
    `auth_type` Enum8('no_password' = 0, 'plaintext_password' = 1, 'sha256_password' = 2, 'double_sha1_password' = 3, 'ldap' = 4, 'kerberos' = 5, 'ssl_certificate' = 6, 'bcrypt_password' = 7, 'ssh_key' = 8, 'http' = 9, 'jwt' = 10),
    `auth_params` String,
    `host_ip` Array(String),
    `host_names` Array(String),
@@ -1,4 +1,6 @@
#!/usr/bin/env bash
# Tags: no-tsan
# ^ TSan uses more stack

CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
# shellcheck source=../shell_config.sh
@@ -1,2 +1,2 @@
1
102400 10000000 33554432 4194304 0 0 0 0 /var/lib/clickhouse/filesystem_caches/02344_describe_cache_test 5 5000 0 16
102400 10000000 33554432 4194304 0 0 0 0 /var/lib/clickhouse/filesystem_caches/02344_describe_cache_test 0 5000 0 16
@@ -18,3 +18,73 @@ QUERY id: 0
        LIST id: 9, nodes: 1
          CONSTANT id: 10, constant_value: UInt64_1, constant_value_type: UInt8
1
QUERY id: 0
  PROJECTION COLUMNS
    uniqCombined((materialize((number)))) UInt64
  PROJECTION
    LIST id: 1, nodes: 1
      FUNCTION id: 2, function_name: uniqCombined, function_type: aggregate, result_type: UInt64
        ARGUMENTS
          LIST id: 3, nodes: 1
            COLUMN id: 4, column_name: number, result_type: UInt64, source_id: 5
  JOIN TREE
    TABLE_FUNCTION id: 5, alias: __table1, table_function_name: numbers
      ARGUMENTS
        LIST id: 6, nodes: 1
          CONSTANT id: 7, constant_value: UInt64_10, constant_value_type: UInt8
10
QUERY id: 0
  PROJECTION COLUMNS
    uniq(abs(number)) UInt64
  PROJECTION
    LIST id: 1, nodes: 1
      FUNCTION id: 2, function_name: uniq, function_type: aggregate, result_type: UInt64
        ARGUMENTS
          LIST id: 3, nodes: 1
            FUNCTION id: 4, function_name: abs, function_type: ordinary, result_type: UInt64
              ARGUMENTS
                LIST id: 5, nodes: 1
                  COLUMN id: 6, column_name: number, result_type: UInt64, source_id: 7
  JOIN TREE
    TABLE_FUNCTION id: 7, alias: __table1, table_function_name: numbers
      ARGUMENTS
        LIST id: 8, nodes: 1
          CONSTANT id: 9, constant_value: UInt64_10, constant_value_type: UInt8
QUERY id: 0
  PROJECTION COLUMNS
    uniq(toString(abs(materialize(number)))) UInt64
  PROJECTION
    LIST id: 1, nodes: 1
      FUNCTION id: 2, function_name: uniq, function_type: aggregate, result_type: UInt64
        ARGUMENTS
          LIST id: 3, nodes: 1
            FUNCTION id: 4, function_name: abs, function_type: ordinary, result_type: UInt64
              ARGUMENTS
                LIST id: 5, nodes: 1
                  FUNCTION id: 6, function_name: materialize, function_type: ordinary, result_type: UInt64
                    ARGUMENTS
                      LIST id: 7, nodes: 1
                        COLUMN id: 8, column_name: number, result_type: UInt64, source_id: 9
  JOIN TREE
    TABLE_FUNCTION id: 9, alias: __table1, table_function_name: numbers
      ARGUMENTS
        LIST id: 10, nodes: 1
          CONSTANT id: 11, constant_value: UInt64_10, constant_value_type: UInt8
QUERY id: 0
  PROJECTION COLUMNS
    uniq((number, 1)) UInt64
  PROJECTION
    LIST id: 1, nodes: 1
      FUNCTION id: 2, function_name: uniq, function_type: aggregate, result_type: UInt64
        ARGUMENTS
          LIST id: 3, nodes: 1
            FUNCTION id: 4, function_name: tuple, function_type: ordinary, result_type: Tuple(UInt64, UInt8)
              ARGUMENTS
                LIST id: 5, nodes: 2
                  COLUMN id: 6, column_name: number, result_type: UInt64, source_id: 7
                  CONSTANT id: 8, constant_value: UInt64_1, constant_value_type: UInt8
  JOIN TREE
    TABLE_FUNCTION id: 7, alias: __table1, table_function_name: numbers
      ARGUMENTS
        LIST id: 9, nodes: 1
          CONSTANT id: 10, constant_value: UInt64_10, constant_value_type: UInt8
@@ -1,5 +1,14 @@
SET allow_experimental_analyzer = 1;
SET allow_experimental_analyzer = 1, optimize_injective_functions_inside_uniq = 1;

-- Simple test
EXPLAIN QUERY TREE SELECT uniqCombined(tuple('')) FROM numbers(1);
SELECT uniqCombined(tuple('')) FROM numbers(1);

-- Test with chain of injective functions
EXPLAIN QUERY TREE SELECT uniqCombined(tuple(materialize(tuple(number)))) FROM numbers(10);
SELECT uniqCombined(tuple(materialize(toString(number)))) FROM numbers(10);

-- No or partial optimization cases
EXPLAIN QUERY TREE SELECT uniq(abs(number)) FROM numbers(10); -- no elimination as `abs` is not injective
EXPLAIN QUERY TREE SELECT uniq(toString(abs(materialize(number)))) FROM numbers(10); -- only eliminate `toString`
EXPLAIN QUERY TREE SELECT uniq(tuple(number, 1)) FROM numbers(10); -- no elimination as `tuple` has multiple arguments
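The reasoning behind these cases can be checked directly: an injective function maps distinct inputs to distinct outputs, so wrapping the argument of `uniq` in one never changes the count and the wrapper can be dropped, while a non-injective function such as `abs` can collapse values and must be kept. A small sketch of that invariant:

```python
# Injective transforms preserve the number of distinct values;
# non-injective ones (like abs) may collapse distinct inputs.
xs = list(range(-5, 6))  # -5 .. 5: 11 distinct values

distinct = len(set(xs))
distinct_tostring = len(set(str(x) for x in xs))  # str is injective on ints
distinct_abs = len(set(abs(x) for x in xs))       # abs(-1) == abs(1): not injective

print(distinct, distinct_tostring, distinct_abs)  # 11 11 6
```

This is exactly why the optimizer eliminates `toString` but leaves `abs` in place in the EXPLAIN outputs above.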
@@ -14,6 +14,7 @@ SETTINGS disk = disk(type = cache,
                     max_size = '1Gi',
                     max_file_segment_size = '40Mi',
                     boundary_alignment = '20Mi',
                     background_download_threads = 2,
                     path = '$CLICKHOUSE_TEST_UNIQUE_NAME',
                     disk = 's3_disk');
@@ -2,6 +2,7 @@ DROP TABLE IF EXISTS t1;

SET allow_experimental_statistics = 1;
SET allow_statistics_optimize = 1;
SET mutations_sync = 1;

CREATE TABLE t1
(
@@ -1,2 +1,2 @@
1048576 10000000 33554432 4194304 0 0 0 0 /var/lib/clickhouse/filesystem_caches/collection_sql 5 5000 0 16
1048576 10000000 33554432 4194304 0 0 0 0 /var/lib/clickhouse/filesystem_caches/collection 5 5000 0 16
1048576 10000000 33554432 4194304 0 0 0 0 /var/lib/clickhouse/filesystem_caches/collection_sql 0 5000 0 16
1048576 10000000 33554432 4194304 0 0 0 0 /var/lib/clickhouse/filesystem_caches/collection 0 5000 0 16
@@ -1,20 +1,20 @@
100 10 10 10 0 0 0 0 /var/lib/clickhouse/filesystem_caches/s3_cache_02944/ 5 5000 0 16
100 10 10 10 0 0 0 0 /var/lib/clickhouse/filesystem_caches/s3_cache_02944/ 0 5000 0 16
0
10
98
set max_size from 100 to 10
10 10 10 10 0 0 8 1 /var/lib/clickhouse/filesystem_caches/s3_cache_02944/ 5 5000 0 16
10 10 10 10 0 0 8 1 /var/lib/clickhouse/filesystem_caches/s3_cache_02944/ 0 5000 0 16
1
8
set max_size from 10 to 100
100 10 10 10 0 0 8 1 /var/lib/clickhouse/filesystem_caches/s3_cache_02944/ 5 5000 0 16
100 10 10 10 0 0 8 1 /var/lib/clickhouse/filesystem_caches/s3_cache_02944/ 0 5000 0 16
10
98
set max_elements from 10 to 2
100 2 10 10 0 0 18 2 /var/lib/clickhouse/filesystem_caches/s3_cache_02944/ 5 5000 0 16
100 2 10 10 0 0 18 2 /var/lib/clickhouse/filesystem_caches/s3_cache_02944/ 0 5000 0 16
2
18
set max_elements from 2 to 10
100 10 10 10 0 0 18 2 /var/lib/clickhouse/filesystem_caches/s3_cache_02944/ 5 5000 0 16
100 10 10 10 0 0 18 2 /var/lib/clickhouse/filesystem_caches/s3_cache_02944/ 0 5000 0 16
10
98
tests/queries/0_stateless/03172_error_log_table_not_empty.sh (new executable file, 41 lines)
@@ -0,0 +1,41 @@
#!/usr/bin/env bash

CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
# shellcheck source=../shell_config.sh
. "$CURDIR"/../shell_config.sh

# Get the previous number of errors for 111, 222 and 333
errors_111=$($CLICKHOUSE_CLIENT -q "SELECT sum(value) FROM system.error_log WHERE code = 111")
errors_222=$($CLICKHOUSE_CLIENT -q "SELECT sum(value) FROM system.error_log WHERE code = 222")
errors_333=$($CLICKHOUSE_CLIENT -q "SELECT sum(value) FROM system.error_log WHERE code = 333")

# Throw three random errors: 111, 222 and 333 and wait for more than collect_interval_milliseconds to ensure system.error_log is flushed
$CLICKHOUSE_CLIENT -mn -q "
SELECT throwIf(true, 'error_log', toInt16(111)) SETTINGS allow_custom_error_code_in_throwif=1; -- { serverError 111 }
SELECT throwIf(true, 'error_log', toInt16(222)) SETTINGS allow_custom_error_code_in_throwif=1; -- { serverError 222 }
SELECT throwIf(true, 'error_log', toInt16(333)) SETTINGS allow_custom_error_code_in_throwif=1; -- { serverError 333 }
SELECT sleep(2) format NULL;
SYSTEM FLUSH LOGS;
"

# Check that the three random errors are propagated
$CLICKHOUSE_CLIENT -mn -q "
SELECT sum(value) > $errors_111 FROM system.error_log WHERE code = 111;
SELECT sum(value) > $errors_222 FROM system.error_log WHERE code = 222;
SELECT sum(value) > $errors_333 FROM system.error_log WHERE code = 333;
"

# Ensure that if we throw them again, they're still propagated
$CLICKHOUSE_CLIENT -mn -q "
SELECT throwIf(true, 'error_log', toInt16(111)) SETTINGS allow_custom_error_code_in_throwif=1; -- { serverError 111 }
SELECT throwIf(true, 'error_log', toInt16(222)) SETTINGS allow_custom_error_code_in_throwif=1; -- { serverError 222 }
SELECT throwIf(true, 'error_log', toInt16(333)) SETTINGS allow_custom_error_code_in_throwif=1; -- { serverError 333 }
SELECT sleep(2) format NULL;
SYSTEM FLUSH LOGS;
"

$CLICKHOUSE_CLIENT -mn -q "
SELECT sum(value) > $(($errors_111+1)) FROM system.error_log WHERE code = 111;
SELECT sum(value) > $(($errors_222+1)) FROM system.error_log WHERE code = 222;
SELECT sum(value) > $(($errors_333+1)) FROM system.error_log WHERE code = 333;
"
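The shell test follows a baseline/delta pattern: read the current counter, trigger the event, flush, then assert the counter grew. That makes the test robust against errors already recorded by earlier tests. A generic sketch of the pattern with an in-memory counter standing in for `system.error_log` (all names here are illustrative, not ClickHouse APIs):

```python
from collections import Counter

error_log = Counter({111: 4, 222: 0, 333: 7})  # pre-existing totals

def throw_error(code):
    # Stand-in for throwIf(): bumps the per-code error counter.
    error_log[code] += 1

baseline = dict(error_log)          # step 1: record baselines
for code in (111, 222, 333):        # step 2: trigger each error once
    throw_error(code)

# step 3: each counter must exceed its own baseline, regardless of its start value
deltas = {code: error_log[code] - baseline[code] for code in (111, 222, 333)}
print(deltas)  # {111: 1, 222: 1, 333: 1}
```

Comparing against a recorded baseline rather than against zero is what the `> $errors_111` style assertions above implement.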
@@ -1,25 +0,0 @@
-- Throw three random errors: 111, 222 and 333
SELECT throwIf(true, 'error_log', toInt16(111)) SETTINGS allow_custom_error_code_in_throwif=1; -- { serverError 111 }
SELECT throwIf(true, 'error_log', toInt16(222)) SETTINGS allow_custom_error_code_in_throwif=1; -- { serverError 222 }
SELECT throwIf(true, 'error_log', toInt16(333)) SETTINGS allow_custom_error_code_in_throwif=1; -- { serverError 333 }

-- Wait for more than collect_interval_milliseconds to ensure system.error_log is flushed
SELECT sleep(2) FORMAT NULL;
SYSTEM FLUSH LOGS;

-- Check that the three random errors are propagated
SELECT sum(value) > 0 FROM system.error_log WHERE code = 111 AND event_time > now() - INTERVAL 1 MINUTE;
SELECT sum(value) > 0 FROM system.error_log WHERE code = 222 AND event_time > now() - INTERVAL 1 MINUTE;
SELECT sum(value) > 0 FROM system.error_log WHERE code = 333 AND event_time > now() - INTERVAL 1 MINUTE;

-- Ensure that if we throw them again, they're still propagated
SELECT throwIf(true, 'error_log', toInt16(111)) SETTINGS allow_custom_error_code_in_throwif=1; -- { serverError 111 }
SELECT throwIf(true, 'error_log', toInt16(222)) SETTINGS allow_custom_error_code_in_throwif=1; -- { serverError 222 }
SELECT throwIf(true, 'error_log', toInt16(333)) SETTINGS allow_custom_error_code_in_throwif=1; -- { serverError 333 }

SELECT sleep(2) FORMAT NULL;
SYSTEM FLUSH LOGS;

SELECT sum(value) > 1 FROM system.error_log WHERE code = 111 AND event_time > now() - INTERVAL 1 MINUTE;
SELECT sum(value) > 1 FROM system.error_log WHERE code = 222 AND event_time > now() - INTERVAL 1 MINUTE;
SELECT sum(value) > 1 FROM system.error_log WHERE code = 333 AND event_time > now() - INTERVAL 1 MINUTE;
@@ -0,0 +1,116 @@
2000-01-01 00:00:00	3732436800	3732436800	0
2000-01-02 00:00:00	11197396800	11197396800	0
2000-01-03 00:00:00	18662356800	18662356800	0
2000-01-04 00:00:00	26127316800	26127316800	0
2000-01-05 00:00:00	33592276800	33592276800	0
2000-01-06 00:00:00	41057236800	41057236800	0
2000-01-07 00:00:00	48522196800	48522196800	0
2000-01-08 00:00:00	55987156800	55987156800	0
2000-01-09 00:00:00	63452116800	63452116800	0
2000-01-10 00:00:00	70917076800	70917076800	0
2000-01-11 00:00:00	78382036800	78382036800	0
2000-01-12 00:00:00	85846996800	85846996800	0
2000-01-13 00:00:00	93311956800	93311956800	0
2000-01-14 00:00:00	100776916800	100776916800	0
2000-01-15 00:00:00	108241876800	108241876800	0
2000-01-16 00:00:00	115706836800	115706836800	0
2000-01-17 00:00:00	123171796800	123171796800	0
2000-01-18 00:00:00	130636756800	130636756800	0
2000-01-19 00:00:00	138101716800	138101716800	0
2000-01-20 00:00:00	145566676800	145566676800	0
2000-01-21 00:00:00	153031636800	153031636800	0
2000-01-22 00:00:00	160496596800	160496596800	0
2000-01-23 00:00:00	167961556800	167961556800	0
2000-01-24 00:00:00	175426516800	175426516800	0
2000-01-25 00:00:00	182891476800	182891476800	0
2000-01-26 00:00:00	190356436800	190356436800	0
2000-01-27 00:00:00	197821396800	197821396800	0
2000-01-28 00:00:00	205286356800	205286356800	0
2000-01-29 00:00:00	212751316800	212751316800	0
2000-01-30 00:00:00	220216276800	220216276800	0
2000-01-31 00:00:00	227681236800	227681236800	0
2000-02-01 00:00:00	235146196800	235146196800	0
2000-02-02 00:00:00	242611156800	242611156800	0
2000-02-03 00:00:00	250076116800	250076116800	0
2000-02-04 00:00:00	257541076800	257541076800	0
2000-02-05 00:00:00	265006036800	265006036800	0
2000-02-06 00:00:00	272470996800	272470996800	0
2000-02-07 00:00:00	279935956800	279935956800	0
2000-02-08 00:00:00	287400916800	287400916800	0
2000-02-09 00:00:00	294865876800	294865876800	0
2000-02-10 00:00:00	302330836800	302330836800	0
2000-02-11 00:00:00	309795796800	309795796800	0
2000-02-12 00:00:00	317260756800	317260756800	0
2000-02-13 00:00:00	324725716800	324725716800	0
2000-02-14 00:00:00	332190676800	332190676800	0
2000-02-15 00:00:00	339655636800	339655636800	0
2000-02-16 00:00:00	347120596800	347120596800	0
2000-02-17 00:00:00	354585556800	354585556800	0
2000-02-18 00:00:00	362050516800	362050516800	0
2000-02-19 00:00:00	369515476800	369515476800	0
2000-02-20 00:00:00	376980436800	376980436800	0
2000-02-21 00:00:00	384445396800	384445396800	0
2000-02-22 00:00:00	391910356800	391910356800	0
2000-02-23 00:00:00	399375316800	399375316800	0
2000-02-24 00:00:00	406840276800	406840276800	0
2000-02-25 00:00:00	414305236800	414305236800	0
2000-02-26 00:00:00	421770196800	421770196800	0
2000-02-27 00:00:00	429235156800	429235156800	0
2000-02-28 00:00:00	436700116800	436700116800	0
2000-02-29 00:00:00	444165076800	444165076800	0
2000-03-01 00:00:00	451630036800	451630036800	0
2000-03-02 00:00:00	459094996800	459094996800	0
2000-03-03 00:00:00	466559956800	466559956800	0
2000-03-04 00:00:00	474024916800	474024916800	0
2000-03-05 00:00:00	481489876800	481489876800	0
2000-03-06 00:00:00	488954836800	488954836800	0
2000-03-07 00:00:00	496419796800	496419796800	0
2000-03-08 00:00:00	503884756800	503884756800	0
2000-03-09 00:00:00	511349716800	511349716800	0
2000-03-10 00:00:00	518814676800	518814676800	0
2000-03-11 00:00:00	526279636800	526279636800	0
2000-03-12 00:00:00	533744596800	533744596800	0
2000-03-13 00:00:00	541209556800	541209556800	0
2000-03-14 00:00:00	548674516800	548674516800	0
2000-03-15 00:00:00	556139476800	556139476800	0
2000-03-16 00:00:00	563604436800	563604436800	0
2000-03-17 00:00:00	571069396800	571069396800	0
2000-03-18 00:00:00	578534356800	578534356800	0
2000-03-19 00:00:00	585999316800	585999316800	0
2000-03-20 00:00:00	593464276800	593464276800	0
2000-03-21 00:00:00	600929236800	600929236800	0
2000-03-22 00:00:00	608394196800	608394196800	0
2000-03-23 00:00:00	615859156800	615859156800	0
2000-03-24 00:00:00	623324116800	623324116800	0
2000-03-25 00:00:00	630789076800	630789076800	0
2000-03-26 00:00:00	638254036800	638254036800	0
2000-03-27 00:00:00	645718996800	645718996800	0
2000-03-28 00:00:00	653183956800	653183956800	0
2000-03-29 00:00:00	660648916800	660648916800	0
2000-03-30 00:00:00	668113876800	668113876800	0
2000-03-31 00:00:00	675578836800	675578836800	0
2000-04-01 00:00:00	683043796800	683043796800	0
2000-04-02 00:00:00	690508756800	690508756800	0
2000-04-03 00:00:00	697973716800	697973716800	0
2000-04-04 00:00:00	705438676800	705438676800	0
2000-04-05 00:00:00	712903636800	712903636800	0
2000-04-06 00:00:00	720368596800	720368596800	0
2000-04-07 00:00:00	727833556800	727833556800	0
2000-04-08 00:00:00	735298516800	735298516800	0
2000-04-09 00:00:00	742763476800	742763476800	0
2000-04-10 00:00:00	750228436800	750228436800	0
2000-04-11 00:00:00	757693396800	757693396800	0
2000-04-12 00:00:00	765158356800	765158356800	0
2000-04-13 00:00:00	772623316800	772623316800	0
2000-04-14 00:00:00	780088276800	780088276800	0
2000-04-15 00:00:00	787553236800	787553236800	0
2000-04-16 00:00:00	795018196800	795018196800	0
2000-04-17 00:00:00	802483156800	802483156800	0
2000-04-18 00:00:00	809948116800	809948116800	0
2000-04-19 00:00:00	817413076800	817413076800	0
2000-04-20 00:00:00	824878036800	824878036800	0
2000-04-21 00:00:00	832342996800	832342996800	0
2000-04-22 00:00:00	839807956800	839807956800	0
2000-04-23 00:00:00	847272916800	847272916800	0
2000-04-24 00:00:00	854737876800	854737876800	0
2000-04-25 00:00:00	637951968000	862202836800	224250868800
@@ -0,0 +1,12 @@
-- Tags: no-tsan, no-asan, no-msan, no-fasttest
-- Test is slow
create table tab (x DateTime('UTC'), y UInt32, v Int32) engine = ReplacingMergeTree(v) order by x;
insert into tab select toDateTime('2000-01-01', 'UTC') + number, number, 1 from numbers(1e7);
optimize table tab final;

WITH (60 * 60) * 24 AS d
select toStartOfDay(x) as k, sum(y) as v,
    (z + d) * (z + d - 1) / 2 - (toUInt64(k - toDateTime('2000-01-01', 'UTC')) as z) * (z - 1) / 2 as est,
    est - v as delta
from tab final group by k order by k
settings max_threads=8, optimize_aggregation_in_order=1, split_parts_ranges_into_intersecting_and_non_intersecting_final=1;
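The `est` column above is a closed form for the expected per-day sum: `y` holds the consecutive integers `z, z+1, …, z+d-1` within each day, and their sum is the difference of two triangular numbers. The first rows of the reference output can be checked directly:

```python
d = 60 * 60 * 24  # seconds per day, the d in the query above

def est(z):
    # Sum of z + (z+1) + ... + (z+d-1), written as a difference
    # of two triangular numbers, mirroring the SQL expression.
    return (z + d) * (z + d - 1) // 2 - z * (z - 1) // 2

# Day 1 starts at z = 0, day 2 at z = 86400; both match the reference rows.
print(est(0), est(d))  # 3732436800 11197396800
```

A nonzero `delta` therefore means the FINAL merge dropped or duplicated rows, which is what the last reference row (2000-04-25, the partial day beyond the 1e7 inserted rows) reflects.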
@@ -48,6 +48,7 @@ AutoML
Autocompletion
AvroConfluent
BIGINT
bigrams
BIGSERIAL
BORO
BSON
@@ -1008,6 +1009,7 @@ UncompressedCacheBytes
UncompressedCacheCells
UnidirectionalEdgeIsValid
UniqThetaSketch
unigrams
Updatable
Uppercased
Uptime
@@ -1507,9 +1509,11 @@ deserializing
destructor
destructors
detectCharset
detectTonality
detectLanguage
detectLanguageMixed
detectLanguageUnknown
detectProgrammingLanguage
determinator
deterministically
dictGet
@@ -1526,6 +1530,7 @@ disableProtocols
disjunction
disjunctions
displaySecretsInShowAndSelect
displayName
distro
divideDecimal
dmesg
File diff suppressed because one or more lines are too long