---
slug: /en/sql-reference/table-functions/s3
sidebar_position: 180
sidebar_label: s3
---

# s3 Table Function
Provides a table-like interface to select/insert files in Amazon S3 and Google Cloud Storage. This table function is similar to the hdfs function, but provides S3-specific features.
If you have multiple replicas in your cluster, you can use the s3Cluster function instead to parallelize inserts.
When using the s3 table function with `INSERT INTO ... SELECT`, data is read and inserted in a streaming fashion. Only a few blocks of data reside in memory while the blocks are continuously read from S3 and pushed into the destination table.
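For example, a minimal sketch of such a streaming insert, assuming a hypothetical destination table `trips` with a matching structure and a hypothetical bucket and path:

```sql
-- Hypothetical destination table and bucket/path, used only to illustrate the streaming INSERT INTO ... SELECT pattern
INSERT INTO trips
SELECT *
FROM s3('https://my-bucket.s3.amazonaws.com/data/trips_*.csv.gz', 'CSVWithNames');
```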
## Syntax

```sql
s3(path [, NOSIGN | aws_access_key_id, aws_secret_access_key] [,format] [,structure] [,compression])
```
:::tip GCS
The S3 Table Function integrates with Google Cloud Storage by using the GCS XML API and HMAC keys. See the Google interoperability docs for more details about the endpoint and HMAC.

For GCS, substitute your HMAC key and HMAC secret where you see `aws_access_key_id` and `aws_secret_access_key`.
:::
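For example, a sketch of a GCS query with placeholder HMAC credentials and a hypothetical bucket and object:

```sql
-- Placeholder HMAC key/secret and a hypothetical GCS bucket/object
SELECT count(*)
FROM s3(
    'https://storage.googleapis.com/my-gcs-bucket/data/events.csv',
    'GOOGEXAMPLEHMACKEYID',   -- HMAC key where aws_access_key_id would go
    'examplehmacsecret',      -- HMAC secret where aws_secret_access_key would go
    'CSVWithNames'
);
```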
## Arguments

- `path` — Bucket URL with path to file. Supports the following wildcards in read-only mode: `*`, `**`, `?`, `{abc,def}` and `{N..M}` where `N`, `M` — numbers, `'abc'`, `'def'` — strings. For more information see here.

  :::note GCS
  The GCS path is in this format, as the endpoint for the Google XML API is different from the JSON API:

  `https://storage.googleapis.com/<bucket>/<folder>/<filename(s)>`

  and not https://storage.cloud.google.com.
  :::
- `NOSIGN` — If this keyword is provided in place of credentials, all requests will not be signed.
- `format` — The format of the file.
- `structure` — Structure of the table. Format: `'column1_name column1_type, column2_name column2_type, ...'`.
- `compression` — Optional parameter. Supported values: `none`, `gzip`/`gz`, `brotli`/`br`, `xz`/`LZMA`, `zstd`/`zst`. By default, compression is autodetected from the file extension.
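For illustration, a sketch that spells out every optional argument explicitly (the bucket, path, and credentials below are placeholders):

```sql
-- Placeholder bucket/path and credentials; format, structure, and compression are passed explicitly
SELECT *
FROM s3(
    'https://my-bucket.s3.amazonaws.com/data/values.tsv.xz',
    'EXAMPLE_KEY_ID',
    'EXAMPLE_SECRET_KEY',
    'TSV',
    'id UInt64, name String',
    'xz'
)
LIMIT 10;
```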
## Returned value
A table with the specified structure for reading or writing data in the specified file.
## Examples

Selecting the first 5 rows from the table from the S3 file `https://datasets-documentation.s3.eu-west-3.amazonaws.com/aapl_stock.csv`:

```sql
SELECT *
FROM s3(
   'https://datasets-documentation.s3.eu-west-3.amazonaws.com/aapl_stock.csv',
   'CSVWithNames'
)
LIMIT 5;
```

```text
┌───────Date─┬────Open─┬────High─┬─────Low─┬───Close─┬───Volume─┬─OpenInt─┐
│ 1984-09-07 │ 0.42388 │ 0.42902 │ 0.41874 │ 0.42388 │ 23220030 │       0 │
│ 1984-09-10 │ 0.42388 │ 0.42516 │ 0.41366 │ 0.42134 │ 18022532 │       0 │
│ 1984-09-11 │ 0.42516 │ 0.43668 │ 0.42516 │ 0.42902 │ 42498199 │       0 │
│ 1984-09-12 │ 0.42902 │ 0.43157 │ 0.41618 │ 0.41618 │ 37125801 │       0 │
│ 1984-09-13 │ 0.43927 │ 0.44052 │ 0.43927 │ 0.43927 │ 57822062 │       0 │
└────────────┴─────────┴─────────┴─────────┴─────────┴──────────┴─────────┘
```
:::note
ClickHouse uses filename extensions to determine the format of the data. For example, we could have run the previous command without the `CSVWithNames`:

```sql
SELECT *
FROM s3(
   'https://datasets-documentation.s3.eu-west-3.amazonaws.com/aapl_stock.csv'
)
LIMIT 5;
```

ClickHouse can also determine the compression of the file. For example, if the file were compressed with a `.csv.gz` extension, ClickHouse would decompress the file automatically.
:::
## Usage

Suppose that we have several files with the following URIs on S3:
- 'https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/some_prefix/some_file_1.csv'
- 'https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/some_prefix/some_file_2.csv'
- 'https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/some_prefix/some_file_3.csv'
- 'https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/some_prefix/some_file_4.csv'
- 'https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/another_prefix/some_file_1.csv'
- 'https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/another_prefix/some_file_2.csv'
- 'https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/another_prefix/some_file_3.csv'
- 'https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/another_prefix/some_file_4.csv'
Count the number of rows in files ending with numbers from 1 to 3:

```sql
SELECT count(*)
FROM s3('https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/{some,another}_prefix/some_file_{1..3}.csv', 'CSV', 'name String, value UInt32')
```

```text
┌─count()─┐
│      18 │
└─────────┘
```
Count the total number of rows in all files in these two directories:

```sql
SELECT count(*)
FROM s3('https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/{some,another}_prefix/*', 'CSV', 'name String, value UInt32')
```

```text
┌─count()─┐
│      24 │
└─────────┘
```
:::tip
If your listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use `?`.
:::
Count the total number of rows in files named `file-000.csv`, `file-001.csv`, … , `file-999.csv`:

```sql
SELECT count(*)
FROM s3('https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/big_prefix/file-{000..999}.csv', 'CSV', 'name String, value UInt32');
```

```text
┌─count()─┐
│      12 │
└─────────┘
```
Insert data into file `test-data.csv.gz`:

```sql
INSERT INTO FUNCTION s3('https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip')
VALUES ('test-data', 1), ('test-data-2', 2);
```
Insert data into file `test-data.csv.gz` from an existing table:

```sql
INSERT INTO FUNCTION s3('https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip')
SELECT name, value FROM existing_table;
```
Glob `**` can be used for recursive directory traversal. Consider the example below; it will fetch all files from the `my-test-bucket-768` directory recursively:

```sql
SELECT * FROM s3('https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/**', 'CSV', 'name String, value UInt32', 'gzip');
```
The query below gets data from all `test-data.csv.gz` files in any folder inside the `my-test-bucket` directory recursively:

```sql
SELECT * FROM s3('https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/**/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip');
```
Note: it is possible to specify custom URL mappers in the server configuration file. Example:

```sql
SELECT * FROM s3('s3://clickhouse-public-datasets/my-test-bucket-768/**/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip');
```

The URL `'s3://clickhouse-public-datasets/my-test-bucket-768/**/test-data.csv.gz'` would be replaced with `'http://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/**/test-data.csv.gz'`.

A custom mapper can be added to `config.xml`:
```xml
<url_scheme_mappers>
   <s3>
      <to>https://{bucket}.s3.amazonaws.com</to>
   </s3>
   <gs>
      <to>https://{bucket}.storage.googleapis.com</to>
   </gs>
   <oss>
      <to>https://{bucket}.oss.aliyuncs.com</to>
   </oss>
</url_scheme_mappers>
```
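With the `gs` mapper above, a `gs://` URL would expand to the corresponding googleapis endpoint. A sketch with a hypothetical bucket and object:

```sql
-- 'gs://my-gcs-bucket/...' would be rewritten to 'https://my-gcs-bucket.storage.googleapis.com/...'
SELECT * FROM s3('gs://my-gcs-bucket/data/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip');
```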
## Partitioned Write

If you specify a `PARTITION BY` expression when inserting data into an `S3` table, a separate file is created for each partition value. Splitting the data into separate files helps to improve the efficiency of read operations.
**Examples**

- Using partition ID in a key creates separate files:

```sql
INSERT INTO TABLE FUNCTION
    s3('http://bucket.amazonaws.com/my_bucket/file_{_partition_id}.csv', 'CSV', 'a String, b UInt32, c UInt32')
    PARTITION BY a VALUES ('x', 2, 3), ('x', 4, 5), ('y', 11, 12), ('y', 13, 14), ('z', 21, 22), ('z', 23, 24);
```

As a result, the data is written into three files: `file_x.csv`, `file_y.csv`, and `file_z.csv`.
- Using partition ID in a bucket name creates files in different buckets:

```sql
INSERT INTO TABLE FUNCTION
    s3('http://bucket.amazonaws.com/my_bucket_{_partition_id}/file.csv', 'CSV', 'a UInt32, b UInt32, c UInt32')
    PARTITION BY a VALUES (1, 2, 3), (1, 4, 5), (10, 11, 12), (10, 13, 14), (20, 21, 22), (20, 23, 24);
```

As a result, the data is written into three files in different buckets: `my_bucket_1/file.csv`, `my_bucket_10/file.csv`, and `my_bucket_20/file.csv`.
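The partition expression is not limited to a plain column. As a sketch (reusing the placeholder bucket from the examples above), partitioning on `toYYYYMM(d)` would write one file per month:

```sql
-- One file per distinct toYYYYMM(d) value, e.g. file_202301.csv and file_202302.csv
INSERT INTO TABLE FUNCTION
    s3('http://bucket.amazonaws.com/my_bucket/file_{_partition_id}.csv', 'CSV', 'd Date, value UInt32')
    PARTITION BY toYYYYMM(d) VALUES ('2023-01-15', 1), ('2023-02-20', 2);
```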
## Accessing public buckets

ClickHouse tries to fetch credentials from many different types of sources. Sometimes, this can cause problems when accessing buckets that are public, causing the client to return a `403` error code. This issue can be avoided by using the `NOSIGN` keyword, forcing the client to ignore all credentials and not sign the requests.
```sql
SELECT *
FROM s3(
   'https://datasets-documentation.s3.eu-west-3.amazonaws.com/aapl_stock.csv',
   NOSIGN,
   'CSVWithNames'
)
LIMIT 5;
```
## Storage Settings

- `s3_truncate_on_insert` — allows truncating the file before inserting into it. Disabled by default.
- `s3_create_multiple_files` — allows creating a new file on each insert if the format has a suffix. Disabled by default.
- `s3_skip_empty_files` — allows skipping empty files while reading. Disabled by default.
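These settings can be applied per query. As a sketch (with a hypothetical bucket and path), overwriting the target file instead of failing when it already exists might look like this:

```sql
-- s3_truncate_on_insert = 1 truncates an existing object before writing; bucket/path are placeholders
INSERT INTO FUNCTION s3('https://my-bucket.s3.amazonaws.com/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip')
SETTINGS s3_truncate_on_insert = 1
VALUES ('test-data', 1);
```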
## See Also