---
toc_priority: 4
toc_title: S3
---

# S3 {#table_engines-s3}

This engine provides integration with the [Amazon S3](https://aws.amazon.com/s3/) ecosystem. It is similar to the [HDFS](../../../engines/table-engines/integrations/hdfs.md#table_engines-hdfs) engine, but provides S3-specific features.

## Usage {#usage}

``` sql
ENGINE = S3(path, [aws_access_key_id, aws_secret_access_key,] format, structure, [compression])
```
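
If the bucket is not publicly readable, the optional `aws_access_key_id` and `aws_secret_access_key` parameters can be supplied before the format, following the signature above. A minimal sketch (the bucket URL, table name and credential values here are placeholders, not working examples):

``` sql
-- hypothetical bucket, table name and credentials, shown only to illustrate the parameter order
CREATE TABLE s3_private_table (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-private-bucket/data.csv', 'ACCESS_KEY_ID', 'SECRET_ACCESS_KEY', 'CSV', 'name String, value UInt32')
```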

**Input parameters**

- `path` — Bucket URL with a path to the file. Supports the following wildcards in read-only mode: `*`, `?`, `{abc,def}` and `{N..M}` where `N`, `M` are numbers and `'abc'`, `'def'` are strings.
- `format` — The [format](../../../interfaces/formats.md#formats) of the file.
- `structure` — Structure of the table. Format `'column1_name column1_type, column2_name column2_type, ...'`.
- `compression` — Optional parameter. Supported values: `none`, `gzip`/`gz`, `brotli`/`br`, `xz`/`LZMA`, `zstd`/`zst`. By default, the compression method is autodetected from the file extension.

**Example:**

**1.** Set up the `s3_engine_table` table:

``` sql
CREATE TABLE s3_engine_table (name String, value UInt32) ENGINE=S3('https://storage.yandexcloud.net/my-test-bucket-768/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip')
```

**2.** Fill the file:

``` sql
INSERT INTO s3_engine_table VALUES ('one', 1), ('two', 2), ('three', 3)
```

**3.** Query the data:

``` sql
SELECT * FROM s3_engine_table LIMIT 2
```

``` text
┌─name─┬─value─┐
│ one  │     1 │
│ two  │     2 │
└──────┴───────┘
```

## Implementation Details {#implementation-details}

- Reads and writes can be parallel.
- Not supported:
    - `ALTER` and `SELECT...SAMPLE` operations.
    - Indexes.
    - Replication.

**Globs in path**

Multiple path components can have globs. To be processed, a file must exist and match the whole path pattern. The listing of files is determined during `SELECT` (not at the `CREATE` moment).

- `*` — Substitutes any number of any characters except `/` including empty string.
- `?` — Substitutes any single character.
- `{some_string,another_string,yet_another_one}` — Substitutes any of the strings `'some_string', 'another_string', 'yet_another_one'`.
- `{N..M}` — Substitutes any number in the range from N to M including both borders. N and M can have leading zeroes e.g. `000..078`.

Constructions with `{}` are similar to the [remote](../../../sql-reference/table-functions/remote.md) table function.

**Example**

1. Suppose we have several files in CSV format with the following URIs in S3:

- 'https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_1.csv'
- 'https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_2.csv'
- 'https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_3.csv'
- 'https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_1.csv'
- 'https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_2.csv'
- 'https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_3.csv'

2. There are several ways to make a table consisting of all six files:

<!-- -->

``` sql
CREATE TABLE table_with_range (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/some_file_{1..3}.csv', 'CSV')
```

3. Another way:

``` sql
CREATE TABLE table_with_question_mark (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/some_file_?.csv', 'CSV')
```

4. The table consists of all the files in both directories (all files should match the format and schema described in the query):

``` sql
CREATE TABLE table_with_asterisk (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/*', 'CSV')
```

!!! warning "Warning"
    If the listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use `?`.

**Example**

Create a table with files named `file-000.csv`, `file-001.csv`, … , `file-999.csv`:

``` sql
CREATE TABLE big_table (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/big_prefix/file-{000..999}.csv', 'CSV')
```
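
Following the warning above, an equivalent definition that avoids a leading-zero range by using a brace construction for each digit separately might look like this sketch (same hypothetical bucket and file layout as in the example above, with a different table name):

``` sql
-- sketch: each {0..9} matches one digit, so the pattern covers file-000.csv … file-999.csv
CREATE TABLE big_table_digits (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/big_prefix/file-{0..9}{0..9}{0..9}.csv', 'CSV')
```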

## Virtual Columns {#virtual-columns}

- `_path` — Path to the file.
- `_file` — Name of the file.
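
For example, a query can use these columns to see which file each row came from. A minimal sketch against the `s3_engine_table` table defined in the Usage example above (the exact output depends on the bucket contents):

``` sql
-- _path and _file are virtual columns; s3_engine_table is the table from the Usage example
SELECT _path, _file, name, value FROM s3_engine_table LIMIT 2
```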

**See Also**

- [Virtual columns](../../../engines/table-engines/index.md#table_engines-virtual_columns)

## S3-related settings {#settings}

The following settings can be set before query execution or placed into the configuration file.

- `s3_max_single_part_upload_size` — The maximum size of an object to upload using single-part upload to S3. Default value is `64Mb`.
- `s3_min_upload_part_size` — The minimum size of a part to upload during multipart upload to [S3 Multipart upload](https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html). Default value is `512Mb`.
- `s3_max_redirects` — The maximum number of S3 redirect hops allowed. Default value is `10`.

Security consideration: if a malicious user can specify arbitrary S3 URLs, `s3_max_redirects` must be set to zero to avoid [SSRF](https://en.wikipedia.org/wiki/Server-side_request_forgery) attacks; alternatively, `remote_host_filter` must be specified in the server configuration.
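
For instance, these settings can be adjusted for a single session before running a query. A minimal sketch (the values below are arbitrary illustrations, not recommendations):

``` sql
-- example values only; the defaults listed above are usually fine
SET s3_max_redirects = 0;
SET s3_min_upload_part_size = 33554432;

INSERT INTO s3_engine_table VALUES ('four', 4)
```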

### Endpoint-based settings {#endpointsettings}

The following settings can be specified in the configuration file for a given endpoint (which will be matched by the exact prefix of a URL):

- `endpoint` — Mandatory. Specifies the prefix of an endpoint.
- `access_key_id` and `secret_access_key` — Optional. Specifies credentials to use with the given endpoint.
- `use_environment_credentials` — Optional, default value is `false`. If set to `true`, the S3 client will try to obtain credentials from environment variables and Amazon EC2 metadata for the given endpoint.
- `header` — Optional, can be specified multiple times. Adds the specified HTTP header to a request to the given endpoint.
- `server_side_encryption_customer_key_base64` — Optional. If specified, the required headers for accessing S3 objects with SSE-C encryption will be set.

Example:

``` xml
<s3>
    <endpoint-name>
        <endpoint>https://storage.yandexcloud.net/my-test-bucket-768/</endpoint>
        <!-- <access_key_id>ACCESS_KEY_ID</access_key_id> -->
        <!-- <secret_access_key>SECRET_ACCESS_KEY</secret_access_key> -->
        <!-- <use_environment_credentials>false</use_environment_credentials> -->
        <!-- <header>Authorization: Bearer SOME-TOKEN</header> -->
        <!-- <server_side_encryption_customer_key_base64>BASE64-ENCODED-KEY</server_side_encryption_customer_key_base64> -->
    </endpoint-name>
</s3>
```

[Original article](https://clickhouse.tech/docs/en/operations/table_engines/s3/) <!--hide-->