Fixed broken links

Commit 72a00e2c62, parent 5d4a877785 (ClickHouse/ClickHouse).
@@ -151,7 +151,7 @@ checks page](../development/build.md#you-dont-have-to-build-clickhouse), or buil
 ## Functional Stateful Tests
-Runs [stateful functional tests](tests.md#functional-tests). Treat them in the same way as the functional stateless tests. The difference is that they require `hits` and `visits` tables from the [Yandex.Metrica dataset](../getting-started/example-datasets/metrica.md) to run.
+Runs [stateful functional tests](tests.md#functional-tests). Treat them in the same way as the functional stateless tests. The difference is that they require `hits` and `visits` tables from the [clickstream dataset](../example-datasets/metrica.md) to run.
 ## Integration Tests
@@ -134,7 +134,7 @@ Example:
 SELECT level, sum(total) FROM daily GROUP BY level;
 ```
-To improve performance, received messages are grouped into blocks the size of [max_insert_block_size](../../../operations/settings/settings/#settings-max_insert_block_size). If the block wasn’t formed within [stream_flush_interval_ms](../../../operations/settings/settings/#stream-flush-interval-ms) milliseconds, the data will be flushed to the table regardless of the completeness of the block.
+To improve performance, received messages are grouped into blocks the size of [max_insert_block_size](../../../operations/settings/settings.md#settings-max_insert_block_size). If the block wasn’t formed within [stream_flush_interval_ms](../../../operations/settings/settings.md/#stream-flush-interval-ms) milliseconds, the data will be flushed to the table regardless of the completeness of the block.
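Editorial aside: both settings can also be adjusted per session. A minimal illustrative sketch — the values are assumptions, not part of this commit:

```sql
-- Tune how eagerly incomplete blocks are flushed to the table.
SET max_insert_block_size = 1048576;    -- max rows per block formed from messages
SET stream_flush_interval_ms = 7500;    -- flush an incomplete block after 7.5 s
```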
 To stop receiving topic data or to change the conversion logic, detach the materialized view:
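The code block that followed this line is cut off in this view. A minimal sketch, assuming the materialized view is named `consumer` as elsewhere in the Kafka engine docs:

```sql
DETACH TABLE consumer;
-- ...change the target table or the conversion logic here...
ATTACH TABLE consumer;
```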
@@ -95,7 +95,7 @@ SELECT mcc, count() FROM cell_towers GROUP BY mcc ORDER BY count() DESC LIMIT 10
 So, the top countries are: the USA, Germany, and Russia.
-You may want to create an [External Dictionary](../../sql-reference/dictionaries/external-dictionaries/external-dicts.md) in ClickHouse to decode these values.
+You may want to create an [External Dictionary](../sql-reference/dictionaries/external-dictionaries/external-dicts.md) in ClickHouse to decode these values.
 ## Use case {#use-case}
@@ -39,7 +39,7 @@ The data is normalized consisted of four tables:
 ## Create the Tables {#create-tables}
-We use the [Decimal](../../sql-reference/data-types/decimal.md) data type to store prices.
+We use the [Decimal](../sql-reference/data-types/decimal.md) data type to store prices.
 ```sql
 CREATE TABLE dish
@@ -115,17 +115,17 @@ clickhouse-client --format_csv_allow_single_quotes 0 --input_format_null_as_defa
 clickhouse-client --format_csv_allow_single_quotes 0 --input_format_null_as_default 0 --date_time_input_format best_effort --query "INSERT INTO menu_item FORMAT CSVWithNames" < MenuItem.csv
 ```
-We use the [CSVWithNames](../../interfaces/formats.md#csvwithnames) format as the data is represented by CSV with a header.
+We use the [CSVWithNames](../interfaces/formats.md#csvwithnames) format as the data is represented by CSV with a header.
 We disable `format_csv_allow_single_quotes` as only double quotes are used for data fields, and single quotes can occur inside the values and should not confuse the CSV parser.
-We disable [input_format_null_as_default](../../operations/settings/settings.md#settings-input-format-null-as-default) as our data does not have [NULL](../../sql-reference/syntax.md#null-literal). Otherwise ClickHouse would try to parse `\N` sequences and could be confused by a `\` in the data.
+We disable [input_format_null_as_default](../operations/settings/settings.md#settings-input-format-null-as-default) as our data does not have [NULL](../sql-reference/syntax.md#null-literal). Otherwise ClickHouse would try to parse `\N` sequences and could be confused by a `\` in the data.
-The setting [date_time_input_format best_effort](../../operations/settings/settings.md#settings-date_time_input_format) allows parsing [DateTime](../../sql-reference/data-types/datetime.md) fields in a wide variety of formats. For example, ISO-8601 without seconds, like '2000-01-01 01:02', will be recognized. Without this setting, only the fixed DateTime format is allowed.
+The setting [date_time_input_format best_effort](../operations/settings/settings.md#settings-date_time_input_format) allows parsing [DateTime](../sql-reference/data-types/datetime.md) fields in a wide variety of formats. For example, ISO-8601 without seconds, like '2000-01-01 01:02', will be recognized. Without this setting, only the fixed DateTime format is allowed.
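Editorial aside: the effect of best-effort parsing can be previewed directly with the `parseDateTimeBestEffort` function; this example is illustrative and not part of the commit:

```sql
-- Best-effort parsing accepts many layouts, including ISO-8601 without seconds.
SELECT parseDateTimeBestEffort('2000-01-01 01:02') AS a,
       parseDateTimeBestEffort('2000-01-01T01:02:00+00:00') AS b;
```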
 ## Denormalize the Data {#denormalize-data}
-Data is presented in multiple tables in [normalized form](https://en.wikipedia.org/wiki/Database_normalization#Normal_forms). It means you have to perform a [JOIN](../../sql-reference/statements/select/join.md#select-join) if you want to query, e.g., dish names from menu items.
+Data is presented in multiple tables in [normalized form](https://en.wikipedia.org/wiki/Database_normalization#Normal_forms). It means you have to perform a [JOIN](../sql-reference/statements/select/join.md#select-join) if you want to query, e.g., dish names from menu items.
 For typical analytical tasks it is way more efficient to deal with pre-JOINed data to avoid doing a `JOIN` every time. This is called "denormalized" data.
 We will create a table `menu_item_denorm` that will contain all the data JOINed together:
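The CREATE statement itself is cut off in this view. A minimal sketch of the idea only — table and column names here are assumptions based on the four tables named above, not the commit's actual statement:

```sql
-- Hypothetical shape: one row per menu item with dish and menu attributes JOINed in.
CREATE TABLE menu_item_denorm
ENGINE = MergeTree ORDER BY (dish_id, menu_id) AS
SELECT mi.price, mi.dish_id, d.name AS dish_name, m.id AS menu_id
FROM menu_item AS mi
JOIN dish AS d ON mi.dish_id = d.id
JOIN menu_page AS mp ON mi.menu_page_id = mp.id
JOIN menu AS m ON mp.menu_id = m.id;
```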
@@ -73,6 +73,6 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1"
 ## Example Queries {#example-queries}
-[The ClickHouse tutorial](../../getting-started/tutorial.md) is based on this web analytics dataset, and the recommended way to get started with this dataset is to go through the tutorial.
+[The ClickHouse tutorial](../../tutorial.md) is based on this web analytics dataset, and the recommended way to get started with this dataset is to go through the tutorial.
 Additional examples of queries to these tables can be found among [stateful tests](https://github.com/ClickHouse/ClickHouse/tree/master/tests/queries/1_stateful) of ClickHouse (they are named `test.hits` and `test.visits` there).
@@ -60,9 +60,9 @@ ls -1 flightlist_*.csv.gz | xargs -P100 -I{} bash -c 'gzip -c -d "{}" | clickhou
 - `xargs -P100` specifies to use up to 100 parallel workers, but as we only have 30 files, the number of workers will be only 30.
 - For every file, `xargs` will run a script with `bash -c`. The script has a substitution in the form of `{}`, and the `xargs` command will substitute the filename for it (we asked `xargs` for this with `-I{}`).
 - The script will decompress the file (`gzip -c -d "{}"`) to standard output (the `-c` parameter) and the output is redirected to `clickhouse-client`.
-- We also asked to parse [DateTime](../../sql-reference/data-types/datetime.md) fields with the extended parser ([--date_time_input_format best_effort](../../operations/settings/settings.md#settings-date_time_input_format)) to recognize ISO-8601 format with timezone offsets.
+- We also asked to parse [DateTime](../sql-reference/data-types/datetime.md) fields with the extended parser ([--date_time_input_format best_effort](../operations/settings/settings.md#settings-date_time_input_format)) to recognize ISO-8601 format with timezone offsets.
-Finally, `clickhouse-client` will do the insertion. It will read input data in the [CSVWithNames](../../interfaces/formats.md#csvwithnames) format.
+Finally, `clickhouse-client` will do the insertion. It will read input data in the [CSVWithNames](../interfaces/formats.md#csvwithnames) format.
 Parallel upload takes 24 seconds.
@@ -50,13 +50,13 @@ clickhouse-client --query "
 This is a showcase of how to parse custom CSV, as it requires multiple tweaks.
 Explanation:
-- The dataset is in CSV format, but it requires some preprocessing on insertion; we use the table function [input](../../sql-reference/table-functions/input.md) to perform preprocessing;
+- The dataset is in CSV format, but it requires some preprocessing on insertion; we use the table function [input](../sql-reference/table-functions/input.md) to perform preprocessing;
 - The structure of the CSV file is specified in the argument of the table function `input`;
 - The field `num` (row number) is unneeded - we parse it from the file and ignore it;
 - We use `FORMAT CSVWithNames`, but the header in CSV will be ignored (by the command-line parameter `--input_format_with_names_use_header 0`), because the header does not contain the name for the first field;
 - The file uses only double quotes to enclose CSV strings; some strings are not enclosed in double quotes, and a single quote must not be parsed as the string enclosure - that's why we also add the `--format_csv_allow_single_quotes 0` parameter;
 - Some strings from the CSV cannot be parsed, because they contain the `\M/` sequence at the beginning of the value; the only value that can start with a backslash in CSV is `\N`, which is parsed as SQL NULL. We add the `--input_format_allow_errors_num 10` parameter, so up to ten malformed records can be skipped;
-- There are arrays for the ingredients, directions and NER fields; these arrays are represented in an unusual form: they are serialized into a string as JSON and then placed in CSV - we parse them as String and then use the [JSONExtract](../../sql-reference/functions/json-functions/) function to transform it to Array.
+- There are arrays for the ingredients, directions and NER fields; these arrays are represented in an unusual form: they are serialized into a string as JSON and then placed in CSV - we parse them as String and then use the [JSONExtract](../sql-reference/functions/json-functions/) function to transform it to Array.
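Editorial aside: a condensed sketch of the pattern these bullets describe. Column names are assumptions and the real statement in the docs is longer; the CSV itself is piped in via `clickhouse-client`:

```sql
-- Hypothetical: read raw CSV through input(), drop `num`,
-- and turn the JSON-in-string array into a real Array.
INSERT INTO recipes
SELECT title, JSONExtract(ingredients, 'Array(String)')
FROM input('num UInt32, title String, ingredients String')
FORMAT CSVWithNames;
```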
 ## Validate the Inserted Data
@@ -80,7 +80,7 @@ Result:
 ### Top Components by the Number of Recipes:
-In this example we learn how to use the [arrayJoin](../../sql-reference/functions/array-join/) function to expand an array into a set of rows.
+In this example we learn how to use the [arrayJoin](../sql-reference/functions/array-join/) function to expand an array into a set of rows.
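Editorial aside: a tiny self-contained illustration of `arrayJoin`, separate from the dataset query that follows:

```sql
-- One input row becomes three output rows, one per array element.
SELECT arrayJoin(['flour', 'sugar', 'eggs']) AS ingredient;
```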
 Query:
@@ -185,7 +185,7 @@ Result:
 10 rows in set. Elapsed: 0.215 sec. Processed 2.23 million rows, 1.48 GB (10.35 million rows/s., 6.86 GB/s.)
 ```
-In this example, we involve the [has](../../sql-reference/functions/array-functions/#hasarr-elem) function to filter by array elements and sort by the number of directions.
+In this example, we involve the [has](../sql-reference/functions/array-functions/#hasarr-elem) function to filter by array elements and sort by the number of directions.
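Editorial aside: `has` in isolation, for reference:

```sql
-- has(arr, elem) returns 1 when the array contains the element, else 0.
SELECT has(['bake', 'cool', 'serve'], 'bake') AS found;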
 There is a wedding cake that requires a whole 126 steps to produce! Show those directions:
@@ -54,9 +54,9 @@ In this example, we define the structure of source data from the CSV file and sp
 The preprocessing is:
 - splitting the postcode into two different columns `postcode1` and `postcode2`, which is better for storage and queries;
 - converting the `time` field to date, as it only contains 00:00 time;
-- ignoring the [UUid](../../sql-reference/data-types/uuid.md) field because we don't need it for analysis;
-- transforming `type` and `duration` to more readable Enum fields with the function [transform](../../sql-reference/functions/other-functions.md#transform);
-- transforming `is_new` and `category` fields from single-character strings (`Y`/`N` and `A`/`B`) to a [UInt8](../../sql-reference/data-types/int-uint.md#uint8-uint16-uint32-uint64-uint256-int8-int16-int32-int64-int128-int256) field with 0 and 1.
+- ignoring the [UUid](../sql-reference/data-types/uuid.md) field because we don't need it for analysis;
+- transforming `type` and `duration` to more readable Enum fields with the function [transform](../sql-reference/functions/other-functions.md#transform);
+- transforming `is_new` and `category` fields from single-character strings (`Y`/`N` and `A`/`B`) to a [UInt8](../sql-reference/data-types/int-uint.md#uint8-uint16-uint32-uint64-uint256-int8-int16-int32-int64-int128-int256) field with 0 and 1.
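Editorial aside: a hedged illustration of the `transform` mapping used for the `type` field — the letters and labels here are assumptions, not quoted from the commit:

```sql
-- Map a single-letter property type to a readable label.
SELECT transform('D',
    ['D', 'S', 'T', 'F', 'O'],
    ['detached', 'semi-detached', 'terraced', 'flat', 'other']) AS type;
```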
 Preprocessed data is piped directly to `clickhouse-client` to be inserted into the ClickHouse table in a streaming fashion.
@@ -352,7 +352,7 @@ Result:
 ## Let's Speed Up Queries Using Projections {#speedup-with-projections}
-[Projections](../../sql-reference/statements/alter/projection.md) allow improving query speed by storing pre-aggregated data.
+[Projections](../sql-reference/statements/alter/projection.md) allow improving query speed by storing pre-aggregated data.
 ### Build a Projection {#build-projection}
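The hunk ends before the example under this heading. A minimal sketch of what building a projection looks like, with table and column names assumed from the surrounding UK price paid walkthrough:

```sql
-- Hypothetical: pre-aggregate average price per county and year.
ALTER TABLE uk_price_paid
    ADD PROJECTION projection_by_year_county (
        SELECT toYear(date) AS year, county, avg(price)
        GROUP BY year, county
    );
-- Populate the projection for existing data.
ALTER TABLE uk_price_paid MATERIALIZE PROJECTION projection_by_year_county
SETTINGS mutations_sync = 1;
```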
@@ -388,7 +388,7 @@ SETTINGS mutations_sync = 1;
 Let's run the same 3 queries.
-[Enable](../../operations/settings/settings.md#allow-experimental-projection-optimization) projections for selects:
+[Enable](../operations/settings/settings.md#allow-experimental-projection-optimization) projections for selects:
 ```sql
 SET allow_experimental_projection_optimization = 1;
@@ -216,7 +216,7 @@ Use the `clickhouse client` to connect to the server, or `clickhouse local` to p
 ### From Sources {#from-sources}
-To manually compile ClickHouse, follow the instructions for [Linux](../development/build.md) or [Mac OS X](../development/build-osx.md).
+To manually compile ClickHouse, follow the instructions for [Linux](./development/build.md) or [Mac OS X](./development/build-osx.md).
 You can compile packages and install them, or use programs without installing packages. Also, by building manually you can disable the SSE 4.2 requirement or build for AArch64 CPUs.
@@ -271,7 +271,7 @@ If the configuration file is in the current directory, you do not need to specif
 ClickHouse supports access restriction settings. They are located in the `users.xml` file (next to `config.xml`).
 By default, access is allowed from anywhere for the `default` user, without a password. See `user/default/networks`.
-For more information, see the section [“Configuration Files”](../operations/configuration-files.md).
+For more information, see the section [“Configuration Files”](./operations/configuration-files.md).
 After launching the server, you can use the command-line client to connect to it:
@@ -282,7 +282,7 @@ $ clickhouse-client
 By default, it connects to `localhost:9000` on behalf of the user `default` without a password. It can also be used to connect to a remote server using the `--host` argument.
 The terminal must use UTF-8 encoding.
-For more information, see the section [“Command-line client”](../interfaces/cli.md).
+For more information, see the section [“Command-line client”](./interfaces/cli.md).
 Example:
@@ -7,7 +7,7 @@ sidebar_label: Command-Line Client
 ClickHouse provides a native command-line client: `clickhouse-client`. The client supports command-line options and configuration files. For more information, see [Configuring](#interfaces_cli_configuration).
-[Install](../getting-started/index.md) it from the `clickhouse-client` package and run it with the command `clickhouse-client`.
+[Install](../../quick-start.mdx) it from the `clickhouse-client` package and run it with the command `clickhouse-client`.
 ``` bash
 $ clickhouse-client
@@ -21,7 +21,7 @@ The default sampling frequency is one sample per second and both CPU and real ti
 To analyze the `trace_log` system table:
-- Install the `clickhouse-common-static-dbg` package. See [Install from DEB Packages](../../getting-started/install.md#install-from-deb-packages).
+- Install the `clickhouse-common-static-dbg` package. See [Install from DEB Packages](../../install.md#install-from-deb-packages).
 - Allow introspection functions by the [allow_introspection_functions](../../operations/settings/settings.md#settings-allow_introspection_functions) setting.
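Editorial aside: a minimal sketch of the pattern this page builds toward — enabling introspection and symbolizing sampled stack frames; illustrative only:

```sql
SET allow_introspection_functions = 1;
-- Count samples by the symbol of the top stack frame.
SELECT count(), demangle(addressToSymbol(trace[1])) AS frame
FROM system.trace_log
GROUP BY frame
ORDER BY count() DESC
LIMIT 10;
```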
@@ -59,7 +59,7 @@ wget https://raw.githubusercontent.com/ClickHouse/ClickHouse/master/benchmark/cl
 chmod a+x benchmark-new.sh
 wget https://raw.githubusercontent.com/ClickHouse/ClickHouse/master/benchmark/clickhouse/queries.sql
 ```
-3. Download the [web analytics dataset](../getting-started/example-datasets/metrica.md) (“hits” table containing 100 million rows).
+3. Download the [web analytics dataset](../example-datasets/metrica.md) (“hits” table containing 100 million rows).
 ```bash
 wget https://datasets.clickhouse.com/hits/partitions/hits_100m_obfuscated_v1.tar.xz
 tar xvf hits_100m_obfuscated_v1.tar.xz -C .
@@ -3,7 +3,7 @@ sidebar_position: 44
 sidebar_label: Requirements
 ---
-# Requirements {#requirements}
+# Requirements
 ## CPU {#cpu}
@@ -56,4 +56,4 @@ The network bandwidth is critical for processing distributed queries with a larg
 ClickHouse is developed primarily for the Linux family of operating systems. The recommended Linux distribution is Ubuntu. The `tzdata` package should be installed in the system.
-ClickHouse can also work in other operating system families. See details in the [Getting started](../getting-started/index.md) section of the documentation.
+ClickHouse can also work in other operating system families. See details in the [install guide](../install.md) section of the documentation.
@@ -3,7 +3,7 @@ sidebar_position: 46
 sidebar_label: Troubleshooting
 ---
-# Troubleshooting {#troubleshooting}
+# Troubleshooting
 - [Installation](#troubleshooting-installation-errors)
 - [Connecting to the server](#troubleshooting-accepts-no-connections)
@@ -15,7 +15,7 @@ sidebar_label: Troubleshooting
 ### You Cannot Get Deb Packages from ClickHouse Repository with Apt-get {#you-cannot-get-deb-packages-from-clickhouse-repository-with-apt-get}
 - Check firewall settings.
-- If you cannot access the repository for any reason, download packages as described in the [Getting started](../getting-started/index.md) article and install them manually using the `sudo dpkg -i <packages>` command. You will also need the `tzdata` package.
+- If you cannot access the repository for any reason, download packages as described in the [install guide](../install.md) article and install them manually using the `sudo dpkg -i <packages>` command. You will also need the `tzdata` package.
 ## Connecting to the Server {#troubleshooting-accepts-no-connections}
@@ -11,7 +11,7 @@ slug: /en/getting-started/playground
 [ClickHouse Playground](https://play.clickhouse.com/play?user=play) allows people to experiment with ClickHouse by running queries instantly, without setting up their server or cluster.
 Several example datasets are available in Playground.
-You can make queries to Playground using any HTTP client, for example [curl](https://curl.haxx.se) or [wget](https://www.gnu.org/software/wget/), or set up a connection using [JDBC](../interfaces/jdbc.md) or [ODBC](../interfaces/odbc.md) drivers. More information about software products that support ClickHouse is available [here](../interfaces/index.md).
+You can make queries to Playground using any HTTP client, for example [curl](https://curl.haxx.se) or [wget](https://www.gnu.org/software/wget/), or set up a connection using [JDBC](./interfaces/jdbc.md) or [ODBC](./interfaces/odbc.md) drivers. More information about software products that support ClickHouse is available [here](./interfaces/index.md).
 ## Credentials {#credentials}
@@ -39,7 +39,7 @@ HTTPS endpoint example with `curl`:
 curl "https://play.clickhouse.com/?user=explorer" --data-binary "SELECT 'Play ClickHouse'"
 ```
-TCP endpoint example with [CLI](../interfaces/cli.md):
+TCP endpoint example with [CLI](./interfaces/cli.md):
 ``` bash
 clickhouse client --secure --host play.clickhouse.com --user explorer
@@ -16,7 +16,7 @@ anyHeavy(column)
 **Example**
-Take the [OnTime](../../../getting-started/example-datasets/ontime.md) data set and select any frequently occurring value in the `AirlineID` column.
+Take the [OnTime](../../../example-datasets/ontime.md) data set and select any frequently occurring value in the `AirlineID` column.
 ``` sql
 SELECT anyHeavy(AirlineID) AS res
@@ -28,7 +28,7 @@ If the parameter is omitted, default value 10 is used.
 **Example**
-Take the [OnTime](../../../getting-started/example-datasets/ontime.md) data set and select the three most frequently occurring values in the `AirlineID` column.
+Take the [OnTime](../../../example-datasets/ontime.md) data set and select the three most frequently occurring values in the `AirlineID` column.
 ``` sql
 SELECT topK(3)(AirlineID) AS res
@@ -5,3 +5,4 @@ collapsed: true
 link:
 type: generated-index
 title: External Dictionaries
+slug: /en/sql-reference/dictionaries/external-dictionaries
@@ -5,7 +5,7 @@ sidebar_label: Hierarchical dictionaries
 # Hierarchical Dictionaries
-ClickHouse supports hierarchical dictionaries with a [numeric key](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md#ext_dict-numeric-key).
+ClickHouse supports hierarchical dictionaries with a [numeric key](../../dictionaries/external-dictionaries/external-dicts-dict-structure.md#numeric-key).
 Look at the following hierarchical structure:
@@ -35,7 +35,7 @@ This hierarchy can be expressed as the following dictionary table.
 This table contains a column `parent_region` that contains the key of the nearest parent for the element.
-ClickHouse supports the [hierarchical](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md#hierarchical-dict-attr) property for [external dictionary](../../../sql-reference/dictionaries/external-dictionaries/index.md) attributes. This property allows you to configure a hierarchical dictionary similar to the one described above.
+ClickHouse supports the [hierarchical](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md#hierarchical-dict-attr) property for [external dictionary](../../../sql-reference/dictionaries/external-dictionaries/) attributes. This property allows you to configure a hierarchical dictionary similar to the one described above.
 The [dictGetHierarchy](../../../sql-reference/functions/ext-dict-functions.md#dictgethierarchy) function allows you to get the parent chain of an element.
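Editorial aside: a short illustration of `dictGetHierarchy`; the dictionary name and key are assumptions:

```sql
-- Returns the key together with all of its ancestors, nearest parent first.
SELECT dictGetHierarchy('regions_hierarchical', toUInt64(5)) AS chain;
```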
@@ -478,7 +478,7 @@ The `table` and `query` fields cannot be used together. And either one of the `t
 ClickHouse receives quoting symbols from the ODBC driver and quotes all settings in queries to the driver, so it's necessary to set the table name according to the table name case in the database.
-If you have problems with encodings when using Oracle, see the corresponding [F.A.Q.](../../../faq/integration/oracle-odbc.md) item.
+If you have problems with encodings when using Oracle, see the corresponding [FAQ](../../../../faq/integration/oracle-odbc.md) item.
 ### Mysql {#dicts-external_dicts_dict_sources-mysql}
@@ -60,7 +60,7 @@ An xml structure can contain either `<id>` or `<key>`. DDL-query must contain si
 You must not describe the key as an attribute.
 :::
-### Numeric Key {#ext_dict-numeric-key}
+### Numeric Key {#numeric-key}
 Type: `UInt64`.
@@ -1216,7 +1216,7 @@ SELECT * FROM table WHERE indexHint(<expression>)
 **Example**
-Here is an example of test data from the table [ontime](../../getting-started/example-datasets/ontime.md).
+Here is an example of test data from the table [ontime](../../example-datasets/ontime.md).
 Input table:
@@ -239,7 +239,7 @@ Codecs:
 High compression levels are useful for asymmetric scenarios, like compress once, decompress repeatedly. Higher levels mean better compression and higher CPU usage.
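Editorial aside: for instance, a column that is written once and read many times can afford a high ZSTD level; the table shape here is assumed:

```sql
-- Compress-once, read-many data: trade insert-time CPU for storage.
CREATE TABLE archive (payload String CODEC(ZSTD(19)))
ENGINE = MergeTree ORDER BY tuple();
```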
-### Specialized Codecs {#create-query-specialized-codecs}
+### Specialized Codecs {#specialized-codecs}
 These codecs are designed to make compression more effective by using specific features of the data. Some of these codecs do not compress data themselves. Instead, they prepare the data for a common-purpose codec, which compresses it better than without this preparation.
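Editorial aside: a sketch of chaining a specialized codec with a general-purpose one; the column names are assumptions:

```sql
-- Delta stores differences between consecutive values,
-- which ZSTD then compresses better than the raw timestamps.
CREATE TABLE metrics (ts DateTime CODEC(Delta, ZSTD))
ENGINE = MergeTree ORDER BY tuple();
```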
@@ -2,7 +2,7 @@
 sidebar_label: INTO OUTFILE
 ---
-# INTO OUTFILE Clause {#into-outfile-clause}
+# INTO OUTFILE Clause
 The `INTO OUTFILE` clause redirects the result of a `SELECT` query to a file on the **client** side.
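Editorial aside: a minimal example, run from `clickhouse-client` (the file is created where the client runs, not on the server):

```sql
-- Writes ten rows to numbers.tsv in the client's working directory.
SELECT * FROM system.numbers LIMIT 10 INTO OUTFILE 'numbers.tsv';
```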
@@ -67,7 +67,7 @@ SELECT name, status FROM system.dictionaries;
 ## RELOAD MODELS {#query_language-system-reload-models}
-Reloads all [CatBoost](../../guides/apply-catboost-model.md#applying-catboost-model-in-clickhouse) models if the configuration was updated without restarting the server.
+Reloads all [CatBoost](../../../guides/developer/apply-catboost-model.md) models if the configuration was updated without restarting the server.
 **Syntax**
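The syntax block itself was cut off by the page loader; based on the section heading, it should be the plain statement:

```sql
SYSTEM RELOAD MODELS
```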