
---
sidebar_position: 45
sidebar_label: s3
---

# s3 Table Function

Provides a table-like interface to select/insert files in Amazon S3. This table function is similar to the `hdfs` table function, but provides S3-specific features.

**Syntax**

``` sql
s3(path, [aws_access_key_id, aws_secret_access_key,] format, structure, [compression])
```

**Arguments**

-   `path` — Bucket URL with a path to the file. Supports the following wildcards in read-only mode: `*`, `?`, `{abc,def}` and `{N..M}` where `N`, `M` — numbers, `'abc'`, `'def'` — strings. For more information see [here](../../engines/table-engines/integrations/s3.md#wildcards-in-path).
-   `format` — The [format](../../interfaces/formats.md#formats) of the file.
-   `structure` — Structure of the table. Format `'column1_name column1_type, column2_name column2_type, ...'`.
-   `compression` — Parameter is optional. Supported values: `none`, `gzip/gz`, `brotli/br`, `xz/LZMA`, `zstd/zst`. By default, it will autodetect compression by file extension.
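
As an illustrative sketch only (the bucket URL and credential values below are placeholders, not real objects), a query against a private bucket can pass the access key and secret key positionally before the format:

``` sql
SELECT *
FROM s3(
    'https://my-private-bucket.s3.amazonaws.com/data.csv',  -- placeholder bucket/path
    'AKIAIOSFODNN7EXAMPLE',                                  -- aws_access_key_id (placeholder)
    'wJalrXUtnFEMI/K7MDENG/bPxRCY',                          -- aws_secret_access_key (placeholder)
    'CSV',
    'column1 UInt32, column2 UInt32, column3 UInt32')
LIMIT 2;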

**Returned value**

A table with the specified structure for reading or writing data in the specified file.

**Examples**

Selecting the first two rows from the table in the S3 file `https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/data.csv`:

``` sql
SELECT *
FROM s3('https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/data.csv', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32')
LIMIT 2;
```

``` text
┌─column1─┬─column2─┬─column3─┐
│       1 │       2 │       3 │
│       3 │       2 │       1 │
└─────────┴─────────┴─────────┘
```

A similar query, but reading from a file with gzip compression:

``` sql
SELECT *
FROM s3('https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/data.csv.gz', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32', 'gzip')
LIMIT 2;
```

``` text
┌─column1─┬─column2─┬─column3─┐
│       1 │       2 │       3 │
│       3 │       2 │       1 │
└─────────┴─────────┴─────────┘
```
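
Because compression is autodetected from the file extension by default, the same query should also work for a `.gz` file without the explicit compression argument, e.g.:

``` sql
-- Compression is inferred from the .gz extension.
SELECT *
FROM s3('https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/data.csv.gz', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32')
LIMIT 2;
```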

## Usage

Suppose that we have several files with the following URIs on S3:

-   'https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/some_prefix/some_file_1.csv'
-   'https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/some_prefix/some_file_2.csv'
-   'https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/some_prefix/some_file_3.csv'
-   'https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/some_prefix/some_file_4.csv'
-   'https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/another_prefix/some_file_1.csv'
-   'https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/another_prefix/some_file_2.csv'
-   'https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/another_prefix/some_file_3.csv'
-   'https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/another_prefix/some_file_4.csv'

Count the number of rows in files ending with numbers from 1 to 3:

``` sql
SELECT count(*)
FROM s3('https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/{some,another}_prefix/some_file_{1..3}.csv', 'CSV', 'name String, value UInt32');
```

``` text
┌─count()─┐
│      18 │
└─────────┘
```

Count the total number of rows in all files in these two directories:

``` sql
SELECT count(*)
FROM s3('https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/{some,another}_prefix/*', 'CSV', 'name String, value UInt32');
```

``` text
┌─count()─┐
│      24 │
└─────────┘
```

:::warning
If your listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use `?`.
:::

Count the total number of rows in files named `file-000.csv`, `file-001.csv`, … , `file-999.csv`:

``` sql
SELECT count(*)
FROM s3('https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/big_prefix/file-{000..999}.csv', 'CSV', 'name String, value UInt32');
```

``` text
┌─count()─┐
│      12 │
└─────────┘
```
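
For illustration, following the warning above, the same leading-zero range could also be expressed with a brace group per digit or with `?` wildcards:

``` sql
-- One brace group per digit:
SELECT count(*)
FROM s3('https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/big_prefix/file-{0..9}{0..9}{0..9}.csv', 'CSV', 'name String, value UInt32');

-- One ? per digit:
SELECT count(*)
FROM s3('https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/big_prefix/file-???.csv', 'CSV', 'name String, value UInt32');
```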

Insert data into the file `test-data.csv.gz`:

``` sql
INSERT INTO FUNCTION s3('https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip')
VALUES ('test-data', 1), ('test-data-2', 2);
```

Insert data into the file `test-data.csv.gz` from an existing table:

``` sql
INSERT INTO FUNCTION s3('https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip')
SELECT name, value FROM existing_table;
```

## Partitioned Write

If you specify a `PARTITION BY` expression when inserting data into an S3 table, a separate file is created for each partition value. Splitting the data into separate files helps to improve the efficiency of read operations.

**Examples**

1. Using the partition ID in a key creates separate files:

``` sql
INSERT INTO TABLE FUNCTION
    s3('http://bucket.amazonaws.com/my_bucket/file_{_partition_id}.csv', 'CSV', 'a String, b UInt32, c UInt32')
    PARTITION BY a VALUES ('x', 2, 3), ('x', 4, 5), ('y', 11, 12), ('y', 13, 14), ('z', 21, 22), ('z', 23, 24);
```

As a result, the data is written into three files: `file_x.csv`, `file_y.csv`, and `file_z.csv`.

2. Using the partition ID in a bucket name creates files in different buckets:

``` sql
INSERT INTO TABLE FUNCTION
    s3('http://bucket.amazonaws.com/my_bucket_{_partition_id}/file.csv', 'CSV', 'a UInt32, b UInt32, c UInt32')
    PARTITION BY a VALUES (1, 2, 3), (1, 4, 5), (10, 11, 12), (10, 13, 14), (20, 21, 22), (20, 23, 24);
```

As a result, the data is written into three files in different buckets: `my_bucket_1/file.csv`, `my_bucket_10/file.csv`, and `my_bucket_20/file.csv`.

## See Also

-   [S3 engine](../../engines/table-engines/integrations/s3.md)

[Original article](https://clickhouse.com/docs/en/sql-reference/table-functions/s3/)