Merge pull request #21220 from Slach/add_postgresql_engine_docs

Add postgresql engine docs
alexey-milovidov 2021-03-15 15:40:15 +03:00 committed by GitHub
commit fba8d66f51
454 changed files with 1211 additions and 1961 deletions

docs/.gitignore (vendored, new file)
View File

@@ -0,0 +1 @@
+build

View File

@@ -39,4 +39,4 @@ ENGINE = EmbeddedRocksDB
 PRIMARY KEY key
 ```
-[Original article](https://clickhouse.tech/docs/en/operations/table_engines/embedded-rocksdb/) <!--hide-->
+[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/embedded-rocksdb/) <!--hide-->

View File

@@ -185,7 +185,6 @@ Similar to GraphiteMergeTree, the HDFS engine supports extended configuration us
 |hadoop\_kerberos\_kinit\_command | kinit |
 #### Limitations {#limitations}
 * hadoop\_security\_kerberos\_ticket\_cache\_path can be global only, not user specific
 ## Kerberos support {#kerberos-support}
@@ -207,4 +206,4 @@ If hadoop\_kerberos\_keytab, hadoop\_kerberos\_principal or hadoop\_kerberos\_ki
 - [Virtual columns](../../../engines/table-engines/index.md#table_engines-virtual_columns)
-[Original article](https://clickhouse.tech/docs/en/operations/table_engines/hdfs/) <!--hide-->
+[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/hdfs/) <!--hide-->

View File

@@ -18,3 +18,6 @@ List of supported integrations:
 - [Kafka](../../../engines/table-engines/integrations/kafka.md)
 - [EmbeddedRocksDB](../../../engines/table-engines/integrations/embedded-rocksdb.md)
 - [RabbitMQ](../../../engines/table-engines/integrations/rabbitmq.md)
+- [PostgreSQL](../../../engines/table-engines/integrations/postgresql.md)
+[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/) <!--hide-->

View File

@@ -85,4 +85,4 @@ FROM jdbc_table
 - [JDBC table function](../../../sql-reference/table-functions/jdbc.md).
-[Original article](https://clickhouse.tech/docs/en/operations/table_engines/jdbc/) <!--hide-->
+[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/jdbc/) <!--hide-->

View File

@@ -194,4 +194,4 @@ Example:
 - [Virtual columns](../../../engines/table-engines/index.md#table_engines-virtual_columns)
 - [background_schedule_pool_size](../../../operations/settings/settings.md#background_schedule_pool_size)
-[Original article](https://clickhouse.tech/docs/en/operations/table_engines/kafka/) <!--hide-->
+[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/kafka/) <!--hide-->

View File

@@ -54,4 +54,4 @@ SELECT COUNT() FROM mongo_table;
 └─────────┘
 ```
-[Original article](https://clickhouse.tech/docs/en/operations/table_engines/integrations/mongodb/) <!--hide-->
+[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/mongodb/) <!--hide-->

View File

@@ -24,6 +24,7 @@ The table structure can differ from the original MySQL table structure:
 - Column names should be the same as in the original MySQL table, but you can use just some of these columns and in any order.
 - Column types may differ from those in the original MySQL table. ClickHouse tries to [cast](../../../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) values to the ClickHouse data types.
+- The `external_table_functions_use_nulls` setting defines how to handle Nullable columns. The default is true; if false, the table function does not make Nullable columns and inserts default values instead of nulls. This also applies to NULL values inside array data types.
 **Engine Parameters**
@@ -100,4 +101,4 @@ SELECT * FROM mysql_table
 - [The mysql table function](../../../sql-reference/table-functions/mysql.md)
 - [Using MySQL as a source of external dictionary](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md#dicts-external_dicts_dict_sources-mysql)
-[Original article](https://clickhouse.tech/docs/en/operations/table_engines/mysql/) <!--hide-->
+[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/mysql/) <!--hide-->

View File

@@ -29,6 +29,7 @@ The table structure can differ from the source table structure:
 - Column names should be the same as in the source table, but you can use just some of these columns and in any order.
 - Column types may differ from those in the source table. ClickHouse tries to [cast](../../../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) values to the ClickHouse data types.
+- The `external_table_functions_use_nulls` setting defines how to handle Nullable columns. The default is true; if false, the table function does not make Nullable columns and inserts default values instead of nulls. This also applies to NULL values inside array data types.
 **Engine Parameters**
@@ -127,4 +128,4 @@ SELECT * FROM odbc_t
 - [ODBC external dictionaries](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md#dicts-external_dicts_dict_sources-odbc)
 - [ODBC table function](../../../sql-reference/table-functions/odbc.md)
-[Original article](https://clickhouse.tech/docs/en/operations/table_engines/odbc/) <!--hide-->
+[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/odbc/) <!--hide-->

View File

@@ -0,0 +1,106 @@
---
toc_priority: 8
toc_title: PostgreSQL
---
# PostgreSQL {#postgresql}
The PostgreSQL engine allows you to perform `SELECT` queries on data that is stored on a remote PostgreSQL server.
## Creating a Table {#creating-a-table}
``` sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1] [TTL expr1],
name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2] [TTL expr2],
...
) ENGINE = PostgreSQL('host:port', 'database', 'table', 'user', 'password');
```
See a detailed description of the [CREATE TABLE](../../../sql-reference/statements/create/table.md#create-table-query) query.
The table structure can differ from the original PostgreSQL table structure:
- Column names should be the same as in the original PostgreSQL table, but you can use just some of these columns and in any order.
- Column types may differ from those in the original PostgreSQL table. ClickHouse tries to [cast](../../../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) values to the ClickHouse data types.
- The `external_table_functions_use_nulls` setting defines how to handle Nullable columns. The default is 1; if 0, the table function does not make Nullable columns and inserts default values instead of nulls. This also applies to NULL values inside array data types.
**Engine Parameters**
- `host:port` — PostgreSQL server address.
- `database` — Remote database name.
- `table` — Remote table name.
- `user` — PostgreSQL user.
- `password` — User password.
`SELECT` queries on the PostgreSQL side run as `COPY (SELECT ...) TO STDOUT` inside a read-only PostgreSQL transaction, with a commit after each `SELECT` query.
Simple `WHERE` clauses such as `=`, `!=`, `>`, `>=`, `<`, `<=`, and `IN` are executed on the PostgreSQL server.
All joins, aggregations, sorting, `IN [ array ]` conditions and the `LIMIT` sampling constraint are executed in ClickHouse only after the query to PostgreSQL finishes.
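To make the split concrete, here is a minimal sketch; the table name `pg_table` is hypothetical and stands for any table created with this engine:
``` sql
-- The simple comparison is sent to PostgreSQL inside the COPY query:
SELECT * FROM pg_table WHERE int_id <= 10;

-- The aggregation is not pushed down: PostgreSQL streams the rows matching
-- `int_id <= 10`, and ClickHouse computes count() on its own side:
SELECT count() FROM pg_table WHERE int_id <= 10;
```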
`INSERT` queries on the PostgreSQL side run as `COPY "table_name" (field1, field2, ... fieldN) FROM STDIN` inside a PostgreSQL transaction, with an auto-commit after each `INSERT` statement.
PostgreSQL `Array` types are converted into ClickHouse arrays.
Be careful: in PostgreSQL, an array column created as `type_name[]` may contain multi-dimensional arrays with different numbers of dimensions in different rows of the same column, while in ClickHouse multi-dimensional arrays must have the same number of dimensions in all rows of a column.
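A sketch of this difference, using hypothetical table and column names: PostgreSQL accepts rows of varying array depth in one `integer[]` column, while a ClickHouse column type fixes the nesting depth once.
``` sql
-- PostgreSQL side (shown as comments): both inserts succeed, because
-- integer[] does not fix the number of dimensions per row.
--   INSERT INTO arr_test (a) VALUES (ARRAY[1, 2, 3]);
--   INSERT INTO arr_test (a) VALUES (ARRAY[[1, 2], [3, 4]]);

-- ClickHouse side: the column type commits to one depth, so rows with a
-- different depth cannot be represented.
CREATE TABLE arr_mirror
(
    `a` Array(Int32)
)
ENGINE = PostgreSQL('localhost:5432', 'public', 'arr_test', 'postgres_user', 'postgres_password');
```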
## Usage Example {#usage-example}
Table in PostgreSQL:
``` text
postgres=# CREATE TABLE "public"."test" (
"int_id" SERIAL,
"int_nullable" INT NULL DEFAULT NULL,
"float" FLOAT NOT NULL,
"str" VARCHAR(100) NOT NULL DEFAULT '',
"float_nullable" FLOAT NULL DEFAULT NULL,
PRIMARY KEY (int_id));
CREATE TABLE
postgres=# insert into test (int_id, str, "float") VALUES (1,'test',2);
INSERT 0 1
postgresql> select * from test;
int_id | int_nullable | float | str | float_nullable
--------+--------------+-------+------+----------------
1 | | 2 | test |
(1 row)
```
Table in ClickHouse, retrieving data from the PostgreSQL table created above:
``` sql
CREATE TABLE default.postgresql_table
(
`float_nullable` Nullable(Float32),
`str` String,
`int_id` Int32
)
ENGINE = PostgreSQL('localhost:5432', 'public', 'test', 'postgres_user', 'postgres_password');
```
``` sql
SELECT * FROM postgresql_table WHERE str IN ('test')
```
``` text
┌─float_nullable─┬─str──┬─int_id─┐
│ ᴺᵁᴸᴸ │ test │ 1 │
└────────────────┴──────┴────────┘
1 rows in set. Elapsed: 0.019 sec.
```
## See Also {#see-also}
- [The postgresql table function](../../../sql-reference/table-functions/postgresql.md)
- [Using PostgreSQL as a source of external dictionary](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md#dicts-external_dicts_dict_sources-postgresql)
[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/postgresql/) <!--hide-->

View File

@@ -163,3 +163,5 @@ Example:
 - `_redelivered` - `redelivered` flag of the message.
 - `_message_id` - messageID of the received message; non-empty if was set, when message was published.
 - `_timestamp` - timestamp of the received message; non-empty if was set, when message was published.
+[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/rabbitmq/) <!--hide-->

View File

@@ -6,7 +6,7 @@ toc_title: S3
 # S3 {#table_engines-s3}
 This engine provides integration with [Amazon S3](https://aws.amazon.com/s3/) ecosystem. This engine is similar
-to the [HDFS](../../../engines/table-engines/special/file.md#table_engines-hdfs) engine, but provides S3-specific features.
+to the [HDFS](../../../engines/table-engines/integrations/hdfs.md#table_engines-hdfs) engine, but provides S3-specific features.
 ## Usage {#usage}
@@ -69,7 +69,7 @@ Constructions with `{}` are similar to the [remote](../../../sql-reference/table
 **Example**
-1. Suppose we have several files in TSV format with the following URIs on HDFS:
+1. Suppose we have several files in CSV format with the following URIs on S3:
 - https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_1.csv
 - https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_2.csv
@@ -124,7 +124,7 @@ The following settings can be set before query execution or placed into configur
 - `s3_max_single_part_upload_size` — Default value is `64Mb`. The maximum size of object to upload using single-part upload to S3.
 - `s3_min_upload_part_size` — Default value is `512Mb`. The minimum size of part to upload during multipart upload to [S3 Multipart upload](https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html).
-- `s3_max_redirects` — Default value is `10`. Max number of S3 redirects hops allowed.
+- `s3_max_redirects` — Default value is `10`. The maximum number of HTTP redirect hops allowed.
 Security consideration: if a malicious user can specify arbitrary S3 URLs, `s3_max_redirects` must be set to zero to avoid [SSRF](https://en.wikipedia.org/wiki/Server-side_request_forgery) attacks; or alternatively, `remote_host_filter` must be specified in server configuration.
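As a sketch of the hardened setup described above (session-level, since the page says these settings can be set before query execution):
``` sql
-- Refuse to follow any HTTP redirects for subsequent S3 queries in this
-- session, closing the SSRF vector for user-supplied URLs:
SET s3_max_redirects = 0;
```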
@@ -153,4 +153,4 @@ Example:
 </s3>
 ```
-[Original article](https://clickhouse.tech/docs/en/operations/table_engines/s3/) <!--hide-->
+[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/s3/) <!--hide-->

View File

@@ -38,10 +38,10 @@ The queries are executed as a read-only user. It implies some limitations:
 The following settings are also enforced:
-- [max_result_bytes=10485760](../operations/settings/query_complexity/#max-result-bytes)
+- [max_result_bytes=10485760](../operations/settings/query-complexity/#max-result-bytes)
-- [max_result_rows=2000](../operations/settings/query_complexity/#setting-max_result_rows)
+- [max_result_rows=2000](../operations/settings/query-complexity/#setting-max_result_rows)
-- [result_overflow_mode=break](../operations/settings/query_complexity/#result-overflow-mode)
+- [result_overflow_mode=break](../operations/settings/query-complexity/#result-overflow-mode)
-- [max_execution_time=60000](../operations/settings/query_complexity/#max-execution-time)
+- [max_execution_time=60000](../operations/settings/query-complexity/#max-execution-time)
 ## Examples {#examples}

View File

@@ -1254,7 +1254,7 @@ ClickHouse supports configurable precision of `Decimal` type. The `INSERT` query
 Unsupported Parquet data types: `DATE32`, `TIME32`, `FIXED_SIZE_BINARY`, `JSON`, `UUID`, `ENUM`.
-Data types of ClickHouse table columns can differ from the corresponding fields of the Parquet data inserted. When inserting data, ClickHouse interprets data types according to the table above and then [cast](../query_language/functions/type_conversion_functions/#type_conversion_function-cast) the data to that data type which is set for the ClickHouse table column.
+Data types of ClickHouse table columns can differ from the corresponding fields of the Parquet data inserted. When inserting data, ClickHouse interprets data types according to the table above and then [cast](../sql-reference/functions/type-conversion-functions/#type_conversion_function-cast) the data to that data type which is set for the ClickHouse table column.
 ### Inserting and Selecting Data {#inserting-and-selecting-data}

View File

@@ -1,6 +1,6 @@
 # system.data_type_families {#system_tables-data_type_families}
-Contains information about supported [data types](../../sql-reference/data-types/).
+Contains information about supported [data types](../../sql-reference/data-types/index.md).
 Columns:

View File

@@ -76,6 +76,6 @@ last_postpone_time: 1970-01-01 03:00:00
 **See Also**
-- [Managing ReplicatedMergeTree Tables](../../sql-reference/statements/system.md/#query-language-system-replicated)
+- [Managing ReplicatedMergeTree Tables](../../sql-reference/statements/system.md#query-language-system-replicated)
 [Original article](https://clickhouse.tech/docs/en/operations/system_tables/replication_queue) <!--hide-->

View File

@@ -250,4 +250,3 @@ FROM people
 ```
-[Original article](https://clickhouse.tech/docs/en/query_language/agg_functions/combinators/) <!--hide-->

View File

@@ -59,4 +59,3 @@ SELECT groupArray(y) FROM t_null_big
 `groupArray` does not include `NULL` in the resulting array.
-[Original article](https://clickhouse.tech/docs/en/query_language/agg_functions/) <!--hide-->

View File

@@ -500,7 +500,6 @@ Problem: Generate a report that shows only keywords that produced at least 5 uni
 Solution: Write in the GROUP BY query SearchPhrase HAVING uniqUpTo(4)(UserID) >= 5
 ```
-[Original article](https://clickhouse.tech/docs/en/query_language/agg_functions/parametric_functions/) <!--hide-->
 ## sumMapFiltered(keys_to_keep)(keys, values) {#summapfilteredkeys-to-keepkeys-values}

View File

@@ -65,4 +65,3 @@ For our example, the structure of dictionary can be the following:
 </dictionary>
 ```
-[Original article](https://clickhouse.tech/docs/en/query_language/dicts/external_dicts_dict_hierarchical/) <!--hide-->

View File

@@ -445,4 +445,3 @@ Other types are not supported yet. The function returns the attribute for the pr
 Data must completely fit into RAM.
-[Original article](https://clickhouse.tech/docs/en/query_language/dicts/external_dicts_dict_layout/) <!--hide-->

View File

@@ -19,6 +19,8 @@ Example of settings:
 </dictionary>
 ```
+or
 ``` sql
 CREATE DICTIONARY (...)
 ...
@@ -58,7 +60,7 @@ When upgrading the dictionaries, the ClickHouse server applies different logic d
 - For MySQL source, the time of modification is checked using a `SHOW TABLE STATUS` query (in case of MySQL 8 you need to disable meta-information caching in MySQL by `set global information_schema_stats_expiry=0`).
 - Dictionaries from other sources are updated every time by default.
-For other sources (ODBC, ClickHouse, etc), you can set up a query that will update the dictionaries only if they really changed, rather than each time. To do this, follow these steps:
+For other sources (ODBC, PostgreSQL, ClickHouse, etc), you can set up a query that will update the dictionaries only if they really changed, rather than each time. To do this, follow these steps:
 - The dictionary table must have a field that always changes when the source data is updated.
 - The settings of the source must specify a query that retrieves the changing field. The ClickHouse server interprets the query result as a row, and if this row has changed relative to its previous state, the dictionary is updated. Specify the query in the `<invalidate_query>` field in the settings for the [source](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md).
@@ -84,4 +86,3 @@ SOURCE(ODBC(... invalidate_query 'SELECT update_time FROM dictionary_source wher
 ...
 ```
-[Original article](https://clickhouse.tech/docs/en/query_language/dicts/external_dicts_dict_lifetime/) <!--hide-->

View File

@@ -65,6 +65,7 @@ Types of sources (`source_type`):
 - DBMS
 - [ODBC](#dicts-external_dicts_dict_sources-odbc)
 - [MySQL](#dicts-external_dicts_dict_sources-mysql)
+- [PostgreSQL](#dicts-external_dicts_dict_sources-postgresql)
 - [ClickHouse](#dicts-external_dicts_dict_sources-clickhouse)
 - [MongoDB](#dicts-external_dicts_dict_sources-mongodb)
 - [Redis](#dicts-external_dicts_dict_sources-redis)
@@ -659,7 +660,7 @@ Example of settings:
 Setting fields:
 - `host` – The Cassandra host or comma-separated list of hosts.
-- `port` – The port on the Cassandra servers. If not specified, default port is used.
+- `port` – The port on the Cassandra servers. If not specified, default port 9042 is used.
 - `user` – Name of the Cassandra user.
 - `password` – Password of the Cassandra user.
 - `keyspace` – Name of the keyspace (database).
@@ -673,4 +674,52 @@ Default value is 1 (the first key column is a partition key and other key column
 - `where` – Optional selection criteria.
 - `max_threads` – The maximum number of threads to use for loading data from multiple partitions in composite key dictionaries.
-[Original article](https://clickhouse.tech/docs/en/query_language/dicts/external_dicts_dict_sources/) <!--hide-->
+### PostgreSQL {#dicts-external_dicts_dict_sources-postgresql}
+Example of settings:
+``` xml
+<source>
+    <postgresql>
+        <port>5432</port>
+        <user>clickhouse</user>
+        <password>qwerty</password>
+        <db>db_name</db>
+        <table>table_name</table>
+        <where>id=10</where>
+        <invalidate_query>SQL_QUERY</invalidate_query>
+    </postgresql>
+</source>
+```
+or
+``` sql
+SOURCE(POSTGRESQL(
+    port 5432
+    host 'postgresql-hostname'
+    user 'postgres_user'
+    password 'postgres_password'
+    db 'db_name'
+    table 'table_name'
+    replica(host 'example01-1' port 5432 priority 1)
+    replica(host 'example01-2' port 5432 priority 2)
+    where 'id=10'
+    invalidate_query 'SQL_QUERY'
+))
+```
+Setting fields:
+- `host` – The PostgreSQL server host. You can specify it for all replicas, or for each one individually (inside `<replica>`).
+- `port` – The port on the PostgreSQL server. You can specify it for all replicas, or for each one individually (inside `<replica>`).
+- `user` – Name of the PostgreSQL user. You can specify it for all replicas, or for each one individually (inside `<replica>`).
+- `password` – Password of the PostgreSQL user. You can specify it for all replicas, or for each one individually (inside `<replica>`).
+- `replica` – Section of replica configurations. There can be multiple sections.
+- `replica/host` – The PostgreSQL host.
+- `replica/port` – The PostgreSQL port.
+- `replica/priority` – The replica priority. When attempting to connect, ClickHouse traverses the replicas in order of priority. The lower the number, the higher the priority.
+- `db` – Name of the database.
+- `table` – Name of the table.
+- `where` – The selection criteria. The syntax for conditions is the same as for the `WHERE` clause in PostgreSQL, for example, `id > 10 AND id < 20`. Optional parameter.
+- `invalidate_query` – Query for checking the dictionary status. Optional parameter. Read more in the section [Updating dictionaries](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-lifetime.md).
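To show the source in context, here is a hedged sketch of a complete dictionary DDL built on a PostgreSQL source and a lookup with `dictGet`; the dictionary name, attribute names, layout, and lifetime values are hypothetical:
``` sql
CREATE DICTIONARY postgres_dict
(
    id UInt64,
    value String
)
PRIMARY KEY id
SOURCE(POSTGRESQL(
    host 'postgresql-hostname'
    port 5432
    user 'postgres_user'
    password 'postgres_password'
    db 'db_name'
    table 'table_name'
))
LAYOUT(HASHED())
LIFETIME(MIN 300 MAX 600);

-- Fetch the `value` attribute for key 1:
SELECT dictGet('postgres_dict', 'value', toUInt64(1));
```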

View File

@@ -170,4 +170,3 @@ Configuration fields:
 - [Functions for working with external dictionaries](../../../sql-reference/functions/ext-dict-functions.md).
-[Original article](https://clickhouse.tech/docs/en/query_language/dicts/external_dicts_dict_structure/) <!--hide-->

View File

@@ -48,4 +48,3 @@ LIFETIME(...) -- Lifetime of dictionary in memory
 - [structure](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md) — Structure of the dictionary. A key and attributes that can be retrieved by this key.
 - [lifetime](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-lifetime.md) — Frequency of dictionary updates.
-[Original article](https://clickhouse.tech/docs/en/query_language/dicts/external_dicts_dict/) <!--hide-->

View File

@@ -57,4 +57,3 @@ You can [configure](../../../sql-reference/dictionaries/external-dictionaries/ex
 - [Dictionary Key and Fields](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md)
 - [Functions for Working with External Dictionaries](../../../sql-reference/functions/ext-dict-functions.md)
-[Original article](https://clickhouse.tech/docs/en/query_language/dicts/external_dicts/) <!--hide-->

View File

@@ -17,4 +17,3 @@ ClickHouse supports:
 - [Built-in dictionaries](../../sql-reference/dictionaries/internal-dicts.md#internal_dicts) with a specific [set of functions](../../sql-reference/functions/ym-dict-functions.md).
 - [Plug-in (external) dictionaries](../../sql-reference/dictionaries/external-dictionaries/external-dicts.md#dicts-external-dicts) with a [set of functions](../../sql-reference/functions/ext-dict-functions.md).
-[Original article](https://clickhouse.tech/docs/en/query_language/dicts/) <!--hide-->

View File

@@ -50,4 +50,3 @@ We recommend periodically updating the dictionaries with the geobase. During an
 There are also functions for working with OS identifiers and Yandex.Metrica search engines, but they shouldn't be used.
-[Original article](https://clickhouse.tech/docs/en/query_language/dicts/internal_dicts/) <!--hide-->

View File

@@ -82,4 +82,3 @@ An exception is thrown when dividing by zero or when dividing a minimal negative
 Returns the least common multiple of the numbers.
 An exception is thrown when dividing by zero or when dividing a minimal negative number by minus one.
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/arithmetic_functions/) <!--hide-->

View File

@@ -1541,4 +1541,3 @@ SELECT arrayCumSumNonNegative([1, 1, -4, 1]) AS res
 ```
 Note that the `arraySumNonNegative` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/array_functions/) <!--hide-->

View File

@@ -32,4 +32,3 @@ SELECT arrayJoin([1, 2, 3] AS src) AS dst, 'Hello', src
 └─────┴───────────┴─────────┘
 ```
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/array_join/) <!--hide-->

View File

@@ -250,4 +250,3 @@ Result:
 └───────────────┘
 ```
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/bit_functions/) <!--hide-->

View File

@@ -491,4 +491,3 @@ SELECT bitmapAndnotCardinality(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res
 └─────┘
 ```
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/bitmap_functions/) <!--hide-->

View File

@@ -32,4 +32,3 @@ Strings are compared by bytes. A shorter string is smaller than all strings that
 ## greaterOrEquals, \>= operator {#function-greaterorequals}
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/comparison_functions/) <!--hide-->

View File

@@ -202,4 +202,3 @@ FROM LEFT_RIGHT
 └──────┴───────┴──────────────────┘
 ```
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/conditional_functions/) <!--hide-->

View File

@@ -1070,4 +1070,3 @@ Result:
 └────────────────────────────────────┘
 ```
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/date_time_functions/) <!--hide-->

View File

@@ -172,4 +172,3 @@ Accepts an integer. Returns a string containing the list of powers of two that t
 Accepts an integer. Returns an array of UInt64 numbers containing the list of powers of two that total the source number when summed. Numbers in the array are in ascending order.
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/encoding_functions/) <!--hide-->

View File

@@ -203,4 +203,3 @@ dictGet[Type]OrDefault('dict_name', 'attr_name', id_expr, default_value_expr)
 ClickHouse throws an exception if it cannot parse the value of the attribute or the value doesn't match the attribute data type.
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/ext_dict_functions/) <!--hide-->

View File

@@ -309,4 +309,3 @@ SELECT toTypeName(toNullable(10))
 └────────────────────────────┘
 ```
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/functions_for_nulls/) <!--hide-->

View File

@@ -482,4 +482,3 @@ Result:
 - [xxHash](http://cyan4973.github.io/xxHash/).
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/hash_functions/) <!--hide-->

View File

@@ -9,4 +9,3 @@ toc_title: IN Operator
 See the section [IN operators](../../sql-reference/operators/in.md#select-in-operators).
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/in_functions/) <!--hide-->

View File

@@ -84,4 +84,3 @@ Another example is the `hostName` function, which returns the name of the server
 If a function in a query is performed on the requestor server, but you need to perform it on remote servers, you can wrap it in an `any` aggregate function or add it to a key in `GROUP BY`.
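A minimal sketch of that trick; the cluster name `my_cluster` is hypothetical:
``` sql
-- Plain hostName() may be computed on the initiating server; wrapping it
-- in the any() aggregate forces evaluation on each remote server:
SELECT any(hostName()) FROM cluster('my_cluster', system.one);
```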
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/) <!--hide-->

View File

@@ -369,4 +369,3 @@ Result:
 └──────────────────────────────┘
 ```
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/introspection/) <!--hide-->

View File

@@ -394,4 +394,3 @@ Result:
 └──────────────────┴────────────────────┘
 ```
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/ip_address_functions/) <!--hide-->

View File

@@ -292,4 +292,3 @@ Result:
 └───────────────────────────────────────────────────────────────────────────────────────────────────────┘
 ```
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/json_functions/) <!--hide-->

View File

@@ -17,4 +17,3 @@ Zero as an argument is considered “false,” while any non-zero value is consi
 ## xor {#xor}
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/logical_functions/) <!--hide-->

View File

@@ -94,4 +94,3 @@ Result:
 }
 ```
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/machine-learning-functions/) <!--hide-->

View File

@@ -477,4 +477,3 @@ Result:
 └──────────┘
 ```
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/math_functions/) <!--hide-->

View File

@@ -1971,4 +1971,3 @@ Result:
 - [tcp_port](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-tcp_port)
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/other_functions/) <!--hide-->

View File

@@ -102,4 +102,3 @@ FROM numbers(3)
 │ aeca2A │
 └───────────────────────────────────────┘
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/random_functions/) <!--hide-->

View File

@@ -185,4 +185,3 @@ Accepts a number. If the number is less than 18, it returns 0. Otherwise, it rou
 Accepts a number and rounds it down to an element in the specified array. If the value is less than the lowest bound, the lowest bound is returned.
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/rounding_functions/) <!--hide-->

View File

@@ -150,4 +150,3 @@ Result:
 └───────────────────────────────────────────────────────────────────────┘
 ```
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/splitting_merging_functions/) <!--hide-->

View File

@@ -648,4 +648,3 @@ Result:
 - [List of XML and HTML character entity references](https://en.wikipedia.org/wiki/List_of_XML_and_HTML_character_entity_references)
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/string_functions/) <!--hide-->

View File

@@ -92,4 +92,3 @@ Predefined characters: `\0`, `\\`, `|`, `(`, `)`, `^`, `$`, `.`, `[`, `]`, `?`,
 This implementation slightly differs from re2::RE2::QuoteMeta. It escapes zero byte as `\0` instead of `\x00` and it escapes only required characters.
 For more information, see the link: [RE2](https://github.com/google/re2/blob/master/re2/re2.cc#L473)
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/string_replace_functions/) <!--hide-->

View File

@@ -773,4 +773,3 @@ Result:
 └───────────────────────────────┘
 ```
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/string_search_functions/) <!--hide-->

View File

@@ -1210,4 +1210,3 @@ Result:
 └───────────────────────────────────────────┘
 ```
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/type_conversion_functions/) <!--hide-->

View File

@@ -420,4 +420,3 @@ Removes the query string and fragment identifier. The question mark and number s
 Removes the name URL parameter, if present. This function works under the assumption that the parameter name is encoded in the URL exactly the same way as in the passed argument.
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/url_functions/) <!--hide-->

View File

@@ -165,4 +165,3 @@ SELECT
 - [dictGetUUID](../../sql-reference/functions/ext-dict-functions.md#ext_dict_functions-other)
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/uuid_function/) <!--hide-->

View File

@@ -150,4 +150,3 @@ Accepts a UInt32 number — the region ID from the Yandex geobase. A string with
 `ua` and `uk` both mean Ukrainian.
-[Original article](https://clickhouse.tech/docs/en/query_language/functions/ym_dict_functions/) <!--hide-->

View File

@@ -296,4 +296,3 @@ SELECT * FROM t_null WHERE y IS NOT NULL
 └───┴───┘
 ```
-[Original article](https://clickhouse.tech/docs/en/query_language/operators/) <!--hide-->

View File

@@ -47,4 +47,3 @@ For `ALTER ... ATTACH|DETACH|DROP` queries, you can use the `replication_alter_p
 For `ALTER TABLE ... UPDATE|DELETE` queries the synchronicity is defined by the [mutations_sync](../../../operations/settings/settings.md#mutations_sync) setting.
-[Original article](https://clickhouse.tech/docs/en/query_language/alter/) <!--hide-->

View File

@@ -81,5 +81,5 @@ The `TTL` is no longer there, so the second row is not deleted:
 ### See Also
-- More about the [TTL-expression](../../../../sql-reference/statements/create/table#ttl-expression).
+- More about the [TTL-expression](../../../sql-reference/statements/create/table.md#ttl-expression).
-- Modify column [with TTL](../../../../sql-reference/statements/alter/column#alter_modify-column).
+- Modify column [with TTL](../../../sql-reference/statements/alter/column.md#alter_modify-column).

View File

@@ -333,5 +333,3 @@ SELECT * FROM base.t1;
 │ 3 │
 └───┘
 ```
-[Original article](https://clickhouse.tech/docs/en/sql-reference/statements/create/table) <!--hide-->

View File

@@ -473,4 +473,3 @@ Doesn't grant any privileges.
 The `ADMIN OPTION` privilege allows a user to grant their role to another user.
-[Original article](https://clickhouse.tech/docs/en/query_language/grant/) <!--hide-->

View File

@@ -117,4 +117,3 @@ Performance will not decrease if:
 - Data is added in real time.
 - You upload data that is usually sorted by time.
-[Original article](https://clickhouse.tech/docs/en/query_language/insert_into/) <!--hide-->

View File

@@ -277,4 +277,3 @@ SYSTEM RESTART REPLICA [db.]replicated_merge_tree_family_table_name
 Provides possibility to reinitialize Zookeeper session state for all `ReplicatedMergeTree` tables; compares the current state with Zookeeper as the source of truth and adds tasks to the Zookeeper queue if needed.
-[Original article](https://clickhouse.tech/docs/en/query_language/system/) <!--hide-->

View File

@@ -102,5 +102,5 @@ WATCH lv EVENTS LIMIT 1
 The `FORMAT` clause works the same way as for the [SELECT](../../sql-reference/statements/select/format.md#format-clause).
 !!! info "Note"
-    The [JSONEachRowWithProgress](../../../interfaces/formats/#jsoneachrowwithprogress) format should be used when watching [live view](./create/view.md#live-view) tables over the HTTP interface. The progress messages will be added to the output to keep the long-lived HTTP connection alive until the query result changes. The interval between progress messages is controlled using the [live_view_heartbeat_interval](./create/view.md#live-view-settings) setting.
+    The [JSONEachRowWithProgress](../../interfaces/formats/#jsoneachrowwithprogress) format should be used when watching [live view](./create/view.md#live-view) tables over the HTTP interface. The progress messages will be added to the output to keep the long-lived HTTP connection alive until the query result changes. The interval between progress messages is controlled using the [live_view_heartbeat_interval](./create/view.md#live-view-settings) setting.
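For instance, a sketch of such a query, reusing the live view `lv` from the example above and intended to be sent over the HTTP interface:
``` sql
-- Progress rows interleaved with results keep the HTTP connection alive:
WATCH lv EVENTS FORMAT JSONEachRowWithProgress;
```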

View File

@@ -124,6 +124,6 @@ SELECT count(*) FROM file('big_dir/file{0..9}{0..9}{0..9}', 'CSV', 'name String,
 **See Also**
-- [Virtual columns](index.md#table_engines-virtual_columns)
+- [Virtual columns](../../engines/table-engines/index.md#table_engines-virtual_columns)
 [Original article](https://clickhouse.tech/docs/en/sql-reference/table-functions/file/) <!--hide-->

View File

@@ -39,4 +39,3 @@ SELECT * FROM generateRandom('a Array(Int8), d Decimal32(4), c Tuple(DateTime64(
 └──────────┴──────────────┴────────────────────────────────────────────────────────────────────┘
 ```
-[Original article](https://clickhouse.tech/docs/en/query_language/table_functions/generate/) <!--hide-->

View File

@@ -97,6 +97,5 @@ FROM hdfs('hdfs://hdfs1:9000/big_dir/file{0..9}{0..9}{0..9}', 'CSV', 'name Strin
 **See Also**
-- [Virtual columns](https://clickhouse.tech/docs/en/operations/table_engines/#table_engines-virtual_columns)
+- [Virtual columns](../../engines/table-engines/index.md#table_engines-virtual_columns)
-[Original article](https://clickhouse.tech/docs/en/query_language/table_functions/hdfs/) <!--hide-->

View File

@@ -22,16 +22,15 @@ You can use table functions in:
 You can't use table functions if the [allow_ddl](../../operations/settings/permissions-for-queries.md#settings_allow_ddl) setting is disabled.
 | Function | Description |
-|-----------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------|
+|-----------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------|
-| [file](../../sql-reference/table-functions/file.md) | Creates a [File](../../engines/table-engines/special/file.md)-engine table. |
+| [file](../../sql-reference/table-functions/file.md) | Creates a File-engine table. |
-| [merge](../../sql-reference/table-functions/merge.md) | Creates a [Merge](../../engines/table-engines/special/merge.md)-engine table. |
+| [merge](../../sql-reference/table-functions/merge.md) | Creates a Merge-engine table. |
 | [numbers](../../sql-reference/table-functions/numbers.md) | Creates a table with a single column filled with integer numbers. |
-| [remote](../../sql-reference/table-functions/remote.md) | Allows you to access remote servers without creating a [Distributed](../../engines/table-engines/special/distributed.md)-engine table. |
+| [remote](../../sql-reference/table-functions/remote.md) | Allows you to access remote servers without creating a Distributed-engine table. |
-| [url](../../sql-reference/table-functions/url.md) | Creates a [Url](../../engines/table-engines/special/url.md)-engine table. |
+| [url](../../sql-reference/table-functions/url.md) | Creates a URL-engine table. |
-| [mysql](../../sql-reference/table-functions/mysql.md) | Creates a [MySQL](../../engines/table-engines/integrations/mysql.md)-engine table. |
+| [mysql](../../sql-reference/table-functions/mysql.md) | Creates a MySQL-engine table. |
+| [postgresql](../../sql-reference/table-functions/postgresql.md) | Creates a PostgreSQL-engine table. |
-| [jdbc](../../sql-reference/table-functions/jdbc.md) | Creates a [JDBC](../../engines/table-engines/integrations/jdbc.md)-engine table. |
+| [jdbc](../../sql-reference/table-functions/jdbc.md) | Creates a JDBC-engine table. |
-| [odbc](../../sql-reference/table-functions/odbc.md) | Creates a [ODBC](../../engines/table-engines/integrations/odbc.md)-engine table. |
+| [odbc](../../sql-reference/table-functions/odbc.md) | Creates a ODBC-engine table. |
-| [hdfs](../../sql-reference/table-functions/hdfs.md) | Creates a [HDFS](../../engines/table-engines/integrations/hdfs.md)-engine table. |
+| [hdfs](../../sql-reference/table-functions/hdfs.md) | Creates a HDFS-engine table. |
-| [s3](../../sql-reference/table-functions/s3.md) | Creates a [S3](../../engines/table-engines/integrations/s3.md)-engine table. |
+| [s3](../../sql-reference/table-functions/s3.md) | Creates a S3-engine table. |
-[Original article](https://clickhouse.tech/docs/en/query_language/table_functions/) <!--hide-->

View File

@@ -42,4 +42,3 @@ $ cat data.csv | clickhouse-client --query="INSERT INTO test FORMAT CSV"
 $ cat data.csv | clickhouse-client --query="INSERT INTO test SELECT * FROM input('test_structure') FORMAT CSV"
 ```
-[Original article](https://clickhouse.tech/docs/en/query_language/table_functions/input/) <!--hide-->

View File

@@ -24,4 +24,3 @@ SELECT * FROM jdbc('mysql://localhost:3306/?user=root&password=root', 'schema',
 SELECT * FROM jdbc('datasource://mysql-local', 'schema', 'table')
 ```
-[Original article](https://clickhouse.tech/docs/en/query_language/table_functions/jdbc/) <!--hide-->

View File

@@ -9,4 +9,3 @@ toc_title: merge
 The table structure is taken from the first table encountered that matches the regular expression.
-[Original article](https://clickhouse.tech/docs/en/query_language/table_functions/merge/) <!--hide-->

View File

@@ -25,4 +25,3 @@ Examples:
 select toDate('2010-01-01') + number as d FROM numbers(365);
 ```
-[Original article](https://clickhouse.tech/docs/en/query_language/table_functions/numbers/) <!--hide-->

View File

@@ -102,5 +102,3 @@ SELECT * FROM odbc('DSN=mysqlconn', 'test', 'test')
 - [ODBC external dictionaries](../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md#dicts-external_dicts_dict_sources-odbc)
 - [ODBC table engine](../../engines/table-engines/integrations/odbc.md).
-[Original article](https://clickhouse.tech/docs/en/query_language/table_functions/jdbc/) <!--hide-->

View File

@@ -0,0 +1,104 @@
---
toc_priority: 42
toc_title: postgresql
---
# postgresql {#postgresql}
Allows `SELECT` and `INSERT` queries to be performed on data that is stored on a remote PostgreSQL server.
**Syntax**
``` sql
postgresql('host:port', 'database', 'table', 'user', 'password')
```
**Arguments**
- `host:port` — PostgreSQL server address.
- `database` — Remote database name.
- `table` — Remote table name.
- `user` — PostgreSQL user.
- `password` — User password.
`SELECT` queries on the PostgreSQL side run as `COPY (SELECT ...) TO STDOUT` inside a read-only PostgreSQL transaction, with a commit after each `SELECT` query.
Simple `WHERE` clauses such as `=`, `!=`, `>`, `>=`, `<`, `<=`, and `IN` are executed on the PostgreSQL server.
All joins, aggregations, sorting, `IN [ array ]` conditions and the `LIMIT` sampling constraint are executed in ClickHouse only after the query to PostgreSQL finishes.
`INSERT` queries on the PostgreSQL side run as `COPY "table_name" (field1, field2, ... fieldN) FROM STDIN` inside a PostgreSQL transaction, with an auto-commit after each `INSERT` statement.
PostgreSQL `Array` types are converted into ClickHouse arrays.
Be careful: in PostgreSQL, an array column of a type like `Integer[]` may contain arrays with different numbers of dimensions in different rows, while in ClickHouse multi-dimensional arrays must have the same number of dimensions in all rows.
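As a sketch of the array conversion (all names are hypothetical; the exact mapped ClickHouse type can also be Nullable depending on `external_table_functions_use_nulls`):
``` sql
-- Assuming a PostgreSQL table db_name.arr_test with a column "a" of type
-- integer[] whose rows are all one-dimensional, the column arrives as a
-- ClickHouse array and ordinary array functions apply:
SELECT a, length(a)
FROM postgresql('localhost:5432', 'db_name', 'arr_test', 'postgres_user', 'postgres_password');
```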
**Returned Value**
A table object with the same columns as the original PostgreSQL table.
!!! info "Note"
    In an `INSERT` query, use the keywords `FUNCTION` or `TABLE FUNCTION` to distinguish the `postgresql(...)` table function from a table name with a list of column names. See the examples below.
**Examples**
Table in PostgreSQL:
``` text
postgres=# CREATE TABLE "public"."test" (
"int_id" SERIAL,
"int_nullable" INT NULL DEFAULT NULL,
"float" FLOAT NOT NULL,
"str" VARCHAR(100) NOT NULL DEFAULT '',
"float_nullable" FLOAT NULL DEFAULT NULL,
PRIMARY KEY (int_id));
CREATE TABLE
postgres=# insert into test (int_id, str, "float") VALUES (1,'test',2);
INSERT 0 1
postgresql> select * from test;
int_id | int_nullable | float | str | float_nullable
--------+--------------+-------+------+----------------
1 | | 2 | test |
(1 row)
```
Selecting data from ClickHouse:
```sql
SELECT * FROM postgresql('localhost:5432', 'test', 'test', 'postgresql_user', 'password') WHERE str IN ('test');
```
``` text
┌─int_id─┬─int_nullable─┬─float─┬─str──┬─float_nullable─┐
│ 1 │ ᴺᵁᴸᴸ │ 2 │ test │ ᴺᵁᴸᴸ │
└────────┴──────────────┴───────┴──────┴────────────────┘
```
Inserting:
```sql
INSERT INTO TABLE FUNCTION postgresql('localhost:5432', 'test', 'test', 'postgresql_user', 'password') (int_id, float) VALUES (2, 3);
SELECT * FROM postgresql('localhost:5432', 'test', 'test', 'postgresql_user', 'password');
```
``` text
┌─int_id─┬─int_nullable─┬─float─┬─str──┬─float_nullable─┐
│ 1 │ ᴺᵁᴸᴸ │ 2 │ test │ ᴺᵁᴸᴸ │
│ 2 │ ᴺᵁᴸᴸ │ 3 │ │ ᴺᵁᴸᴸ │
└────────┴──────────────┴───────┴──────┴────────────────┘
```
**See Also**
- [The PostgreSQL table engine](../../engines/table-engines/integrations/postgresql.md)
- [Using PostgreSQL as a source of external dictionary](../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md#dicts-external_dicts_dict_sources-postgresql)
[Original article](https://clickhouse.tech/docs/en/sql-reference/table-functions/postgresql/) <!--hide-->

View File

@@ -164,6 +164,5 @@ Security consideration: if malicious user can specify arbitrary S3 URLs, `s3_max
 **See Also**
-- [Virtual columns](https://clickhouse.tech/docs/en/operations/table_engines/#table_engines-virtual_columns)
+- [Virtual columns](../../engines/table-engines/index.md#table_engines-virtual_columns)
-[Original article](https://clickhouse.tech/docs/en/query_language/table_functions/s3/) <!--hide-->

View File

@@ -27,7 +27,7 @@ A table with the specified format and structure and with data from the defined `
 **Examples**
-Getting the first 3 lines of a table that contains columns of `String` and [UInt32](../../sql-reference/data-types/int-uint.md) type from HTTP-server which answers in [CSV](../../interfaces/formats.md/#csv) format.
+Getting the first 3 lines of a table that contains columns of `String` and [UInt32](../../sql-reference/data-types/int-uint.md) type from HTTP-server which answers in [CSV](../../interfaces/formats.md#csv) format.
 ``` sql
 SELECT * FROM url('http://127.0.0.1:12345/', CSV, 'column1 String, column2 UInt32') LIMIT 3;

View File

@@ -64,4 +64,5 @@ SELECT * FROM cluster(`cluster_name`, view(SELECT a, b, c FROM table_name))
 **See Also**
 - [View Table Engine](https://clickhouse.tech/docs/en/engines/table-engines/special/view/)
-[Original article](https://clickhouse.tech/docs/en/query_language/table_functions/view/) <!--hide-->
+[Original article](https://clickhouse.tech/docs/en/sql-reference/table-functions/view/) <!--hide-->

View File

@@ -29,4 +29,3 @@ toc_title: "ClickHouse Cloud Service Providers"
 - cross-AZ scaling for better performance and high availability
 - built-in monitoring and SQL query editor
-{## [Original article](https://clickhouse.tech/docs/ru/commercial/cloud/) ##}

View File

@@ -911,4 +911,3 @@ function(
 size_t limit)
 ```
-[Original article](https://clickhouse.tech/docs/ru/development/style/) <!--hide-->

View File

@@ -18,4 +18,3 @@ toc_title: "Introduction"
 - [Lazy](../../engines/database-engines/lazy.md)
-[Original article](https://clickhouse.tech/docs/ru/database_engines/) <!--hide-->

View File

@@ -15,4 +15,3 @@ toc_title: Lazy
 CREATE DATABASE testlazy ENGINE = Lazy(expiration_time_in_seconds);
 ```
-[Original article](https://clickhouse.tech/docs/ru/database_engines/lazy/) <!--hide-->

View File

@@ -157,4 +157,3 @@ SELECT * FROM mysql.test;
 └───┴─────┴──────┘
 ```
-[Original article](https://clickhouse.tech/docs/ru/engines/database-engines/materialize-mysql/) <!--hide-->

View File

@ -80,4 +80,3 @@ toc_title: "Introduction"
If you create a table with a column whose name matches one of the table's virtual columns, the virtual column becomes inaccessible. Do not do this. To help avoid such conflicts, virtual column names are usually prefixed with an underscore.
[Original article](https://clickhouse.tech/docs/ru/operations/table_engines/) <!--hide-->

View File

@ -41,4 +41,3 @@ ENGINE = EmbeddedRocksDB
PRIMARY KEY key;
```
[Original article](https://clickhouse.tech/docs/ru/operations/table_engines/embedded-rocksdb/) <!--hide-->

View File

@ -102,16 +102,103 @@ CREATE TABLE table_with_asterisk (name String, value UInt32) ENGINE = HDFS('hdfs
Let's create a table with files named `file000`, `file001`, … , `file999`:
``` sql
CREARE TABLE big_table (name String, value UInt32) ENGINE = HDFS('hdfs://hdfs1:9000/big_dir/file{0..9}{0..9}{0..9}', 'CSV') CREATE TABLE big_table (name String, value UInt32) ENGINE = HDFS('hdfs://hdfs1:9000/big_dir/file{0..9}{0..9}{0..9}', 'CSV')
```
## Configuration {#configuration}
Similar to GraphiteMergeTree, the HDFS engine supports extended configuration using the ClickHouse configuration file. There are two configuration sections you can use: global (`hdfs`) and user-level (`hdfs_*`). The global settings are applied first, then the user-level settings (if specified).
``` xml
<!-- Global settings for the HDFS engine -->
<hdfs>
  <hadoop_kerberos_keytab>/tmp/keytab/clickhouse.keytab</hadoop_kerberos_keytab>
  <hadoop_kerberos_principal>clickuser@TEST.CLICKHOUSE.TECH</hadoop_kerberos_principal>
  <hadoop_security_authentication>kerberos</hadoop_security_authentication>
</hdfs>

<!-- Settings specific to the user "root" -->
<hdfs_root>
  <hadoop_kerberos_principal>root@TEST.CLICKHOUSE.TECH</hadoop_kerberos_principal>
</hdfs_root>
```
### List of possible configuration options with default values
#### Supported by libhdfs3
| **parameter** | **default value** |
| - | - |
| rpc\_client\_connect\_tcpnodelay | true |
| dfs\_client\_read\_shortcircuit | true |
| output\_replace-datanode-on-failure | true |
| input\_notretry-another-node | false |
| input\_localread\_mappedfile | true |
| dfs\_client\_use\_legacy\_blockreader\_local | false |
| rpc\_client\_ping\_interval | 10 * 1000 |
| rpc\_client\_connect\_timeout | 600 * 1000 |
| rpc\_client\_read\_timeout | 3600 * 1000 |
| rpc\_client\_write\_timeout | 3600 * 1000 |
| rpc\_client\_socekt\_linger\_timeout | -1 |
| rpc\_client\_connect\_retry | 10 |
| rpc\_client\_timeout | 3600 * 1000 |
| dfs\_default\_replica | 3 |
| input\_connect\_timeout | 600 * 1000 |
| input\_read\_timeout | 3600 * 1000 |
| input\_write\_timeout | 3600 * 1000 |
| input\_localread\_default\_buffersize | 1 * 1024 * 1024 |
| dfs\_prefetchsize | 10 |
| input\_read\_getblockinfo\_retry | 3 |
| input\_localread\_blockinfo\_cachesize | 1000 |
| input\_read\_max\_retry | 60 |
| output\_default\_chunksize | 512 |
| output\_default\_packetsize | 64 * 1024 |
| output\_default\_write\_retry | 10 |
| output\_connect\_timeout | 600 * 1000 |
| output\_read\_timeout | 3600 * 1000 |
| output\_write\_timeout | 3600 * 1000 |
| output\_close\_timeout | 3600 * 1000 |
| output\_packetpool\_size | 1024 |
| output\_heeartbeat\_interval | 10 * 1000 |
| dfs\_client\_failover\_max\_attempts | 15 |
| dfs\_client\_read\_shortcircuit\_streams\_cache\_size | 256 |
| dfs\_client\_socketcache\_expiryMsec | 3000 |
| dfs\_client\_socketcache\_capacity | 16 |
| dfs\_default\_blocksize | 64 * 1024 * 1024 |
| dfs\_default\_uri | "hdfs://localhost:9000" |
| hadoop\_security\_authentication | "simple" |
| hadoop\_security\_kerberos\_ticket\_cache\_path | "" |
| dfs\_client\_log\_severity | "INFO" |
| dfs\_domain\_socket\_path | "" |
The [HDFS Configuration Reference](https://hawq.apache.org/docs/userguide/2.3.0.0-incubating/reference/HDFSConfigurationParameterReference.html) may explain the meaning of some of the parameters.
#### ClickHouse extras {#clickhouse-extras}
| **parameter** | **default value** |
| - | - |
|hadoop\_kerberos\_keytab | "" |
|hadoop\_kerberos\_principal | "" |
|hadoop\_kerberos\_kinit\_command | kinit |
#### Limitations {#limitations}
* hadoop\_security\_kerberos\_ticket\_cache\_path can be set at the global level only, not per user
## Kerberos support {#kerberos-support}
If the hadoop\_security\_authentication parameter has the value 'kerberos', ClickHouse authenticates via Kerberos.
The [extra parameters](#clickhouse-extras) and hadoop\_security\_kerberos\_ticket\_cache\_path help with this.
Note that due to libhdfs3 limitations only the old-fashioned approach is supported; datanode communication is not secured by SASL (HADOOP\_SECURE\_DN\_USER is a reliable indicator of such a security approach). Use tests/integration/test\_storage\_kerberized\_hdfs/hdfs_configs/bootstrap.sh as a reference for the setup.
If hadoop\_kerberos\_keytab, hadoop\_kerberos\_principal or hadoop\_kerberos\_kinit\_command is specified in the settings, kinit will be invoked. hadoop\_kerberos\_keytab and hadoop\_kerberos\_principal are mandatory in this case. kinit and the krb5 configuration files must also be installed.
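With such a configuration in place, the table definition itself does not change: authentication is driven entirely by the config sections above. A minimal sketch (the directory and file name here are illustrative, reusing the cluster address from the earlier examples):

``` sql
-- Kerberos settings are taken from the <hdfs> / <hdfs_*> config sections;
-- the CREATE TABLE statement is the same as for an unsecured cluster.
CREATE TABLE hdfs_kerberized_table (name String, value UInt32)
    ENGINE = HDFS('hdfs://hdfs1:9000/secured_dir/some_file', 'TSV')
```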
## Virtual columns {#virtualnye-stolbtsy}
- `_path` — Path to the file.
- `_file` — Name of the file.
**See Also**
- [Virtual columns](index.md#table_engines-virtual_columns) - [Virtual columns](../../../engines/table-engines/index.md#table_engines-virtual_columns)
[Original article](https://clickhouse.tech/docs/ru/operations/table_engines/hdfs/) <!--hide-->

View File

@ -14,8 +14,9 @@ toc_priority: 30
- [MySQL](../../../engines/table-engines/integrations/mysql.md)
- [MongoDB](../../../engines/table-engines/integrations/mongodb.md)
- [HDFS](../../../engines/table-engines/integrations/hdfs.md)
- [S3](../../../engines/table-engines/integrations/s3.md)
- [Kafka](../../../engines/table-engines/integrations/kafka.md)
- [EmbeddedRocksDB](../../../engines/table-engines/integrations/embedded-rocksdb.md)
- [RabbitMQ](../../../engines/table-engines/integrations/rabbitmq.md)
- [PostgreSQL](../../../engines/table-engines/integrations/postgresql.md)
[Original article](https://clickhouse.tech/docs/ru/engines/table-engines/integrations/) <!--hide-->

View File

@ -89,4 +89,3 @@ FROM jdbc_table
- [JDBC table function](../../../engines/table-engines/integrations/jdbc.md).
[Original article](https://clickhouse.tech/docs/ru/operations/table_engines/jdbc/) <!--hide-->

View File

@ -193,4 +193,3 @@ ClickHouse can support Kerberos credentials
- [Virtual columns](index.md#table_engines-virtual_columns)
- [background_schedule_pool_size](../../../operations/settings/settings.md#background_schedule_pool_size)
[Original article](https://clickhouse.tech/docs/ru/operations/table_engines/kafka/) <!--hide-->

View File

@ -54,4 +54,4 @@ SELECT COUNT() FROM mongo_table;
└─────────┘
```
[Original article](https://clickhouse.tech/docs/ru/operations/table_engines/integrations/mongodb/) <!--hide--> [Original article](https://clickhouse.tech/docs/ru/engines/table-engines/integrations/mongodb/) <!--hide-->

View File

@ -18,12 +18,13 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
) ENGINE = MySQL('host:port', 'database', 'table', 'user', 'password'[, replace_query, 'on_duplicate_clause']);
```
See the detailed description of the [CREATE TABLE](../../../engines/table-engines/integrations/mysql.md#create-table-query) query. See the detailed description of the [CREATE TABLE](../../../sql-reference/statements/create/table.md#create-table-query) query.
The table structure can differ from the original MySQL table structure:
- Column names must be the same as in the original MySQL table, but you can use just some of these columns, in any order.
- Column types may differ from those in the original MySQL table. ClickHouse tries to [cast](../../../engines/table-engines/integrations/mysql.md#type_conversion_function-cast) values to the ClickHouse data types. - Column types may differ from those in the original MySQL table. ClickHouse tries to [cast](../../../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) values to the ClickHouse data types.
- The `external_table_functions_use_nulls` setting defines how to handle Nullable columns. The default is 1; if 0, the table function does not make Nullable columns and inserts default values instead of NULLs. This also applies to NULL values inside arrays.
**Engine Parameters**
@ -100,4 +101,3 @@ SELECT * FROM mysql_table
- [The mysql table function](../../../engines/table-engines/integrations/mysql.md)
- [Using MySQL as a source of external dictionary](../../../engines/table-engines/integrations/mysql.md#dicts-external_dicts_dict_sources-mysql)
[Original article](https://clickhouse.tech/docs/ru/operations/table_engines/mysql/) <!--hide-->

View File

@ -29,6 +29,7 @@ ENGINE = ODBC(connection_settings, external_database, external_table)
- Column names must be the same as in the original table, but you can use just some of these columns, in any order.
- Column types may differ from those in the original table. ClickHouse tries to [cast](../../../engines/table-engines/integrations/odbc.md#type_conversion_function-cast) values to the ClickHouse data types.
- The `external_table_functions_use_nulls` setting defines how to handle Nullable columns. The default is 1; if 0, the table function does not make Nullable columns and inserts default values instead of NULLs. This also applies to NULL values inside arrays.
**Engine Parameters**
@ -127,4 +128,3 @@ SELECT * FROM odbc_t
- [ODBC external dictionaries](../../../engines/table-engines/integrations/odbc.md#dicts-external_dicts_dict_sources-odbc)
- [The odbc table function](../../../engines/table-engines/integrations/odbc.md)
[Original article](https://clickhouse.tech/docs/ru/operations/table_engines/odbc/) <!--hide-->

View File

@ -0,0 +1,105 @@
---
toc_priority: 8
toc_title: PostgreSQL
---
# PostgreSQL {#postgresql}
The PostgreSQL engine allows running `SELECT` queries over data stored on a remote PostgreSQL server.
## Creating a Table {#creating-a-table}
``` sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1] [TTL expr1],
name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2] [TTL expr2],
...
) ENGINE = PostgreSQL('host:port', 'database', 'table', 'user', 'password');
```
See a detailed description of the [CREATE TABLE](../../../sql-reference/statements/create/table.md#create-table-query) query.
The table structure can differ from the original PostgreSQL table structure:
- Column names must be the same as in the original PostgreSQL table, but you can use just some of these columns, in any order.
- Column types may differ from those in the original PostgreSQL table. ClickHouse tries to [cast](../../../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) values to the ClickHouse data types.
- The `external_table_functions_use_nulls` setting defines how to handle Nullable columns. The default is 1; if 0, the table function does not make Nullable columns and inserts default values instead of NULLs. This also applies to NULL values inside arrays.
**Engine Parameters**
- `host:port` — PostgreSQL server address.
- `database` — Database name on the PostgreSQL server.
- `table` — Table name.
- `user` — PostgreSQL user name.
- `password` — PostgreSQL user password.
On the PostgreSQL side, `SELECT` queries run as `COPY (SELECT ...) TO STDOUT` inside a read-only PostgreSQL transaction, with a commit after each `SELECT` query.
Simple `WHERE` conditions such as `=, !=, >, >=, <, <=, IN` are executed on the PostgreSQL server.
All joins, aggregations, sorting, `IN [ array ]` conditions and the `LIMIT` clause are executed in ClickHouse only after the query to PostgreSQL has finished.
On the PostgreSQL side, `INSERT` queries run as `COPY "table_name" (field1, field2, ... fieldN) FROM STDIN` inside a PostgreSQL transaction, with an automatic commit after each `INSERT` statement.
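To illustrate the split, a sketch using the `postgresql_table` defined in the usage example below: the comparison is evaluated by PostgreSQL, while the aggregation happens in ClickHouse after the rows arrive.

``` sql
-- `int_id = 1` is pushed down to the PostgreSQL server;
-- count() runs in ClickHouse over the returned rows.
SELECT count() FROM postgresql_table WHERE int_id = 1
```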
PostgreSQL arrays are converted into ClickHouse arrays.
Be careful: in PostgreSQL, an array column created as type_name[] is multidimensional and may contain a different number of dimensions in different rows of the same table, while in ClickHouse only multidimensional arrays with the same number of dimensions in all rows are allowed.
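A sketch of the mapping (the PostgreSQL table `test_arrays`, its `tags text[]` column, and the credentials are hypothetical, not part of the example below):

``` sql
CREATE TABLE postgresql_array_table
(
    `int_id` Int32,
    `tags` Array(String) -- maps to a one-dimensional text[] column in PostgreSQL
)
ENGINE = PostgreSQL('localhost:5432', 'public', 'test_arrays', 'postgres_user', 'postgres_password')
```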
## Usage Example {#usage-example}
Table in PostgreSQL:
``` text
postgres=# CREATE TABLE "public"."test" (
"int_id" SERIAL,
"int_nullable" INT NULL DEFAULT NULL,
"float" FLOAT NOT NULL,
"str" VARCHAR(100) NOT NULL DEFAULT '',
"float_nullable" FLOAT NULL DEFAULT NULL,
PRIMARY KEY (int_id));
CREATE TABLE
postgres=# insert into test (int_id, str, "float") VALUES (1,'test',2);
INSERT 0 1
postgresql> select * from test;
int_id | int_nullable | float | str | float_nullable
--------+--------------+-------+------+----------------
1 | | 2 | test |
(1 row)
```
Table in ClickHouse, retrieving data from the PostgreSQL table created above:
``` sql
CREATE TABLE default.postgresql_table
(
`float_nullable` Nullable(Float32),
`str` String,
`int_id` Int32
)
ENGINE = PostgreSQL('localhost:5432', 'public', 'test', 'postgres_user', 'postgres_password');
```
``` sql
SELECT * FROM postgresql_table WHERE str IN ('test')
```
``` text
┌─float_nullable─┬─str──┬─int_id─┐
│ ᴺᵁᴸᴸ │ test │ 1 │
└────────────────┴──────┴────────┘
1 rows in set. Elapsed: 0.019 sec.
```
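Writes go through the same engine. Note that the PostgreSQL column "float" is declared NOT NULL without a default, so a ClickHouse table used for inserts has to include it. A minimal sketch (the table name `postgresql_table_for_insert` is illustrative):

``` sql
CREATE TABLE default.postgresql_table_for_insert
(
    `int_id` Int32,
    `float` Float32, -- required: NOT NULL on the PostgreSQL side
    `str` String
)
ENGINE = PostgreSQL('localhost:5432', 'public', 'test', 'postgres_user', 'postgres_password');

INSERT INTO default.postgresql_table_for_insert (`int_id`, `float`, `str`) VALUES (2, 3.5, 'inserted from ClickHouse');
```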
## See Also {#see-also}
- [The postgresql table function](../../../sql-reference/table-functions/postgresql.md)
- [Using PostgreSQL as a source of an external dictionary](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md#dicts-external_dicts_dict_sources-postgresql)

View File

@ -155,3 +155,4 @@ Example:
- `_redelivered` — the `redelivered` flag of the message. (Non-zero if there is a chance that the message was received by more than one channel.)
- `_message_id` — the `messageID` of the received message; non-empty if it was set when the message was published.
- `_timestamp` — the `timestamp` of the received message; non-empty if it was set when the message was published.

View File

@ -0,0 +1,155 @@
---
toc_priority: 4
toc_title: S3
---
# S3 {#table_engines-s3}
This engine provides integration with the [Amazon S3](https://aws.amazon.com/s3/) ecosystem. It is similar to the [HDFS](../../../engines/table-engines/integrations/hdfs.md#table_engines-hdfs) engine, but provides S3-specific features.
## Usage {#usage}
```sql
ENGINE = S3(path, [aws_access_key_id, aws_secret_access_key,] format, structure, [compression])
```
**Parameters**
- `path` — URL pointing to a file in S3. In read-only mode several files can be read as one; the following wildcards are supported in the path: `*`, `?`, `{abc,def}` and `{N..M}`, where N, M are numbers and `abc`, `def` are strings.
- `format` — The [format](../../../interfaces/formats.md#formats) of the file.
- `structure` — Structure of the table. Format: `'column1_name column1_type, column2_name column2_type, ...'`.
- `compression` — Compression algorithm; optional. Supported values: none, gzip/gz, brotli/br, xz/LZMA, zstd/zst. By default, the compression algorithm is detected from the file name extension.
**Example:**
**1.** Create the `s3_engine_table` table:
```sql
CREATE TABLE s3_engine_table (name String, value UInt32) ENGINE=S3('https://storage.yandexcloud.net/my-test-bucket-768/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip')
```
**2.** Fill the file:
```sql
INSERT INTO s3_engine_table VALUES ('one', 1), ('two', 2), ('three', 3)
```
**3.** Query the data:
```sql
SELECT * FROM s3_engine_table LIMIT 2
```
```text
┌─name─┬─value─┐
│ one │ 1 │
│ two │ 2 │
└──────┴───────┘
```
## Implementation Details {#implementation-details}
- Reads and writes can be concurrent and parallel.
- Not supported:
    - `ALTER` and `SELECT...SAMPLE` operations.
    - Indexes.
    - Replication.
**Wildcards in the path parameter**
Multiple components of the `path` parameter can contain wildcards. To be processed, a file must exist in S3 and match the whole path pattern. The list of files is determined at `SELECT` time (not at `CREATE` time).
- `*` — Substitutes any number of any characters except `/`, including the empty string.
- `?` — Substitutes any single character.
- `{some_string,another_string,yet_another_one}` — Substitutes any of the strings `'some_string', 'another_string', 'yet_another_one'`.
- `{N..M}` — Substitutes any number in the range from N to M inclusive. N and M can have leading zeros, e.g. `000..078`.
Constructions with `{}` are similar to those of the [remote](../../../sql-reference/table-functions/remote.md) table function.
**Example**
1. Suppose we have several files in CSV format with the following URIs in S3:
- https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_1.csv
- https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_2.csv
- https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_3.csv
- https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_1.csv
- https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_2.csv
- https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_3.csv
2. There are several ways to make a table consisting of all six files:
<!-- -->
```sql
CREATE TABLE table_with_range (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/some_file_{1..3}', 'CSV')
```
3. Another way:
```sql
CREATE TABLE table_with_question_mark (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/some_file_?', 'CSV')
```
4. A table made from all the files in both directories (all files must match the format and schema described in the query):
```sql
CREATE TABLE table_with_asterisk (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/*', 'CSV')
```
!!! warning "Warning"
    If the list of files contains number ranges with leading zeros, use the braces construction for each digit separately, or use `?`.
**Example**
Create a table with files named `file-000.csv`, `file-001.csv`, … , `file-999.csv`:
```sql
CREATE TABLE big_table (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/big_prefix/file-{000..999}.csv', 'CSV')
```
## Virtual Columns {#virtual-columns}
- `_path` — Path to the file.
- `_file` — Name of the file.
**See Also**
- [Virtual columns](../../../engines/table-engines/index.md#table_engines-virtual_columns)
## S3-specific Settings {#settings}
The following settings can be set before running a query or placed in a configuration file for a user profile.
- `s3_max_single_part_upload_size` — Default: `64Mb`. The maximum size of an object to upload to S3 as a single-part upload.
- `s3_min_upload_part_size` — Default: `512Mb`. The minimum part size for uploads to S3 via [S3 multipart upload](https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html).
- `s3_max_redirects` — Default: `10`. The maximum allowed number of HTTP redirects from S3 servers.
Security note: if a malicious user can specify arbitrary S3 URLs, set `s3_max_redirects` to zero to avoid [SSRF](https://en.wikipedia.org/wiki/Server-side_request_forgery) attacks, or restrict the hosts that can be reached with the `remote_host_filter` server setting.
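For instance, a hardened session might pin the setting explicitly (a sketch; the same value can also be fixed in a user profile):

``` sql
-- Disallow following any HTTP redirects from S3 servers for this session.
SET s3_max_redirects = 0
```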
### Endpoint-specific Settings {#endpointsettings}
The following settings can be specified in the configuration file for a given endpoint (matched by the exact URL prefix):
- `endpoint` — Required. Specifies the URL prefix of the endpoint.
- `access_key_id` and `secret_access_key` — Optional. Specify the credentials to use with this endpoint.
- `use_environment_credentials` — Optional, default `false`. If set to `true`, the S3 client will try to obtain credentials from environment variables and Amazon EC2 metadata for this endpoint.
- `header` — Optional, can be specified multiple times. Adds the specified HTTP header to requests whose URL matches the `endpoint` prefix.
- `server_side_encryption_customer_key_base64` — Optional. If specified, the headers required to access S3 objects with SSE-C encryption will be set.
Example:
```
<s3>
<endpoint-name>
<endpoint>https://storage.yandexcloud.net/my-test-bucket-768/</endpoint>
<!-- <access_key_id>ACCESS_KEY_ID</access_key_id> -->
<!-- <secret_access_key>SECRET_ACCESS_KEY</secret_access_key> -->
<!-- <use_environment_credentials>false</use_environment_credentials> -->
<!-- <header>Authorization: Bearer SOME-TOKEN</header> -->
<!-- <server_side_encryption_customer_key_base64>BASE64-ENCODED-KEY</server_side_encryption_customer_key_base64> -->
</endpoint-name>
</s3>
```
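With the endpoint above in place, a table whose URL falls under that prefix can omit inline credentials; they are taken from the matching `<endpoint-name>` section. A sketch, reusing the bucket from the earlier examples (the file name `endpoint-data.csv` is hypothetical):

``` sql
-- Credentials and headers come from the endpoint config, not the query.
CREATE TABLE s3_endpoint_table (name String, value UInt32)
    ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/endpoint-data.csv', 'CSV')
```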

View File

@ -42,4 +42,3 @@ toc_priority: 29
The `Log` and `StripeLog` engines support parallel reads. When reading data, ClickHouse uses multiple threads; each thread processes a separate data block. The `Log` engine stores each table column in a separate file, while the `StripeLog` engine stores all data in a single file. As a result, `StripeLog` uses fewer file descriptors in the operating system, while `Log` provides more efficient reads.
[Original article](https://clickhouse.tech/docs/ru/operations/table_engines/log_family/) <!--hide-->

View File

@ -11,4 +11,3 @@ toc_title: Log
Under concurrent access, reads can run simultaneously, while writes block both reads and each other.
The Log engine does not support indexes. Also, if a write to the table fails, the table becomes broken and reads from it return an error. The Log engine is suitable for temporary data, write-once tables, and for testing or demonstration purposes.
[Original article](https://clickhouse.tech/docs/ru/operations/table_engines/log/) <!--hide-->

View File

@ -90,4 +90,3 @@ SELECT * FROM stripe_log_table ORDER BY timestamp
└─────────────────────┴──────────────┴────────────────────────────┘
```
[Original article](https://clickhouse.tech/docs/ru/operations/table_engines/stripelog/) <!--hide-->

Some files were not shown because too many files have changed in this diff.