---
toc_priority: 45
toc_title: s3
---

# s3 {#s3}

Provides a table-like interface to select/insert files in S3. This table function is similar to [hdfs](../../sql-reference/table-functions/hdfs.md).

``` sql
s3(path, [aws_access_key_id, aws_secret_access_key,] format, structure, [compression])
```

**Input parameters**

- `path` — Bucket URL with a path to the file. Supports the following wildcards in read-only mode: `*`, `?`, `{abc,def}` and `{N..M}`, where `N`, `M` are numbers and `'abc'`, `'def'` are strings.
- `aws_access_key_id`, `aws_secret_access_key` — Credentials for the S3 account. These parameters are optional.
- `format` — The [format](../../interfaces/formats.md#formats) of the file.
- `structure` — Structure of the table. Format: `'column1_name column1_type, column2_name column2_type, ...'`.
- `compression` — Optional parameter. Supported values: `none`, `gzip`/`gz`, `brotli`/`br`, `xz`/`LZMA`, `zstd`/`zst`. By default, compression is autodetected by the file extension.

**Returned value**

A table with the specified structure for reading or writing data in the specified file.

**Example**

Selecting the first two rows from the table in the S3 file `https://storage.yandexcloud.net/my-test-bucket-768/data.csv`:

``` sql
SELECT *
FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/data.csv', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32')
LIMIT 2
```

``` text
┌─column1─┬─column2─┬─column3─┐
│       1 │       2 │       3 │
│       3 │       2 │       1 │
└─────────┴─────────┴─────────┘
```

The same query, but reading from a file with `gzip` compression:

``` sql
SELECT *
FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/data.csv.gz', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32', 'gzip')
LIMIT 2
```

``` text
┌─column1─┬─column2─┬─column3─┐
│       1 │       2 │       3 │
│       3 │       2 │       1 │
└─────────┴─────────┴─────────┘
```
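If the bucket is not public, the optional `aws_access_key_id` and `aws_secret_access_key` parameters can be passed before the format, as shown in the syntax above. A minimal sketch for the same file, assuming the bucket requires authorization (the key values below are placeholders, not real credentials):

``` sql
SELECT *
FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/data.csv', 'ACCESS_KEY_ID', 'SECRET_ACCESS_KEY', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32')
LIMIT 2
```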
**Globs in path**

Multiple path components can have globs. To be processed, a file must exist and match the whole path pattern (not only a suffix or prefix).

- `*` — Substitutes any number of any characters except `/`, including the empty string.
- `?` — Substitutes any single character.
- `{some_string,another_string,yet_another_one}` — Substitutes any of the strings `'some_string'`, `'another_string'`, `'yet_another_one'`.
- `{N..M}` — Substitutes any number in the range from N to M, including both borders. N and M can have leading zeroes, e.g. `000..078`.

Constructions with `{}` are similar to the [remote table function](../../sql-reference/table-functions/remote.md).

**Example**

1. Suppose that we have several files with the following URIs on S3:

- 'https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_1.csv'
- 'https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_2.csv'
- 'https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_3.csv'
- 'https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_4.csv'
- 'https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_1.csv'
- 'https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_2.csv'
- 'https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_3.csv'
- 'https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_4.csv'

2. Query the number of rows in the files ending with numbers from 1 to 3:

``` sql
SELECT count(*)
FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/some_file_{1..3}.csv', 'CSV', 'name String, value UInt32')
```

``` text
┌─count()─┐
│      18 │
└─────────┘
```

3. Query the number of rows in all files of these two directories:

``` sql
SELECT count(*)
FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/*', 'CSV', 'name String, value UInt32')
```

``` text
┌─count()─┐
│      24 │
└─────────┘
```

!!! warning "Warning"
    If your listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use `?`.

**Example**

Query the data from files named `file-000.csv`, `file-001.csv`, …, `file-999.csv`:

``` sql
SELECT count(*)
FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/big_prefix/file-{000..999}.csv', 'CSV', 'name String, value UInt32')
```

``` text
┌─count()─┐
│      12 │
└─────────┘
```

**Data insert**

The S3 table function may be used for data insert as well.

**Example**

Insert data into the file `test-data.csv.gz`:

``` sql
INSERT INTO s3('https://storage.yandexcloud.net/my-test-bucket-768/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip')
VALUES ('test-data', 1), ('test-data-2', 2)
```

Insert data into the file `test-data.csv.gz` from an existing table:

``` sql
INSERT INTO s3('https://storage.yandexcloud.net/my-test-bucket-768/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip')
SELECT name, value FROM existing_table
```

## Virtual Columns {#virtual-columns}

- `_path` — Path to the file.
- `_file` — Name of the file.

## S3-related settings {#settings}

The following settings can be set before query execution or placed into the configuration file.

- `s3_max_single_part_upload_size` — The maximum size of an object to upload using single-part upload to S3. Default value is `64Mb`.
- `s3_min_upload_part_size` — The minimum size of a part to upload during [S3 Multipart upload](https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html). Default value is `512Mb`.
- `s3_max_redirects` — The maximum number of S3 redirect hops allowed. Default value is `10`. Security consideration: if a malicious user can specify arbitrary S3 URLs, `s3_max_redirects` must be set to zero to avoid [SSRF](https://en.wikipedia.org/wiki/Server-side_request_forgery) attacks; alternatively, `remote_host_filter` must be specified in the server configuration.

A short sketch of setting these values for the current session is given at the end of this page.

**See Also**

- [Virtual columns](../../engines/table-engines/index.md#table_engines-virtual_columns)
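As noted in the settings section above, these settings can be changed before query execution. A minimal sketch of overriding them for the current session, assuming your user profile allows changing settings (the values below are illustrative, not recommendations):

``` sql
-- Lower the single-part upload threshold and disable redirects for this session.
SET s3_max_single_part_upload_size = 33554432; -- 32Mb
SET s3_max_redirects = 0;

SELECT count(*)
FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/data.csv', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32')
```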