
---
sidebar_position: 21
sidebar_label: Input and Output Formats
---

Formats for Input and Output Data

ClickHouse can accept and return data in various formats. A format supported for input can be used to parse the data provided to INSERTs, to perform SELECTs from a file-backed table such as File, URL or HDFS, or to read an external dictionary. A format supported for output can be used to arrange the results of a SELECT, and to perform INSERTs into a file-backed table.
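
For instance, here is a sketch of reading a CSV file through the file table function (the file name and column structure are hypothetical):

SELECT * FROM file('data.csv', 'CSV', 'name String, value UInt32')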

The supported formats are:

TabSeparated
TabSeparatedRaw
TabSeparatedWithNames
TabSeparatedWithNamesAndTypes
TabSeparatedRawWithNames
TabSeparatedRawWithNamesAndTypes
Template
TemplateIgnoreSpaces
CSV
CSVWithNames
CSVWithNamesAndTypes
CustomSeparated
CustomSeparatedWithNames
CustomSeparatedWithNamesAndTypes
SQLInsert
Values
Vertical
JSON
JSONAsString
JSONStrings
JSONColumns
JSONColumnsWithMetadata
JSONCompact
JSONCompactStrings
JSONCompactColumns
JSONEachRow
JSONEachRowWithProgress
JSONStringsEachRow
JSONStringsEachRowWithProgress
JSONCompactEachRow
JSONCompactEachRowWithNames
JSONCompactEachRowWithNamesAndTypes
JSONCompactStringsEachRow
JSONCompactStringsEachRowWithNames
JSONCompactStringsEachRowWithNamesAndTypes
TSKV
Pretty
PrettyCompact
PrettyCompactMonoBlock
PrettyNoEscapes
PrettySpace
Prometheus
Protobuf
ProtobufSingle
Avro
AvroConfluent
Parquet
Arrow
ArrowStream
ORC
RowBinary
RowBinaryWithNames
RowBinaryWithNamesAndTypes
Native
Null
XML
CapnProto
LineAsString
Regexp
RawBLOB
MsgPack
MySQLDump

You can control some format processing parameters with the ClickHouse settings. For more information read the Settings section.

TabSeparated

In TabSeparated format, data is written by row. Each row contains values separated by tabs. Each value is followed by a tab, except the last value in the row, which is followed by a line feed. Strictly Unix line feeds are assumed everywhere. The last row also must contain a line feed at the end. Values are written in text format, without enclosing quotation marks, and with special characters escaped.

This format is also available under the name TSV.

The TabSeparated format is convenient for processing data using custom programs and scripts. It is used by default in the HTTP interface and in the command-line client's batch mode. This format also allows transferring data between different DBMSs. For example, you can get a dump from MySQL and upload it to ClickHouse, or vice versa.
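
For example, a query sent over the HTTP interface with no FORMAT clause comes back as TabSeparated (a sketch assuming a server on the default port 8123):

$ echo 'SELECT 1, 2' | curl 'http://localhost:8123/' --data-binary @-
1	2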

The TabSeparated format supports outputting total values (when using WITH TOTALS) and extreme values (when extremes is set to 1). In these cases, the total values and extremes are output after the main data. The main result, total values, and extremes are separated from each other by an empty line. Example:

SELECT EventDate, count() AS c FROM test.hits GROUP BY EventDate WITH TOTALS ORDER BY EventDate FORMAT TabSeparated
2014-03-17      1406958
2014-03-18      1383658
2014-03-19      1405797
2014-03-20      1353623
2014-03-21      1245779
2014-03-22      1031592
2014-03-23      1046491

1970-01-01      8873898

2014-03-17      1031592
2014-03-23      1406958

Data Formatting

Integer numbers are written in decimal form. Numbers can contain an extra "+" character at the beginning (ignored when parsing, and not recorded when formatting). Non-negative numbers can't contain the negative sign. When reading, it is allowed to parse an empty string as a zero, or (for signed types) a string consisting of just a minus sign as a zero. Numbers that do not fit into the corresponding data type may be parsed as a different number, without an error message.

Floating-point numbers are written in decimal form. The dot is used as the decimal separator. Exponential entries are supported, as are inf, +inf, -inf, and nan. An entry of floating-point numbers may begin or end with a decimal point. During formatting, accuracy may be lost on floating-point numbers. During parsing, it is not strictly required to read the nearest machine-representable number.

Dates are written in YYYY-MM-DD format and parsed in the same format, but with any characters as separators. Dates with times are written in the format YYYY-MM-DD hh:mm:ss and parsed in the same format, but with any characters as separators. This all occurs in the system time zone at the time the client or server starts (depending on which of them formats data). For dates with times, daylight saving time is not specified. So if a dump has times during daylight saving time, the dump does not unequivocally match the data, and parsing will select one of the two times. During a read operation, incorrect dates and dates with times can be parsed with natural overflow or as null dates and times, without an error message.

As an exception, parsing dates with times is also supported in Unix timestamp format, if it consists of exactly 10 decimal digits. The result is not time zone-dependent. The formats YYYY-MM-DD hh:mm:ss and NNNNNNNNNN are differentiated automatically.
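
For example, assuming a hypothetical table with a single DateTime column, both of the following input lines parse to the same value (in the UTC time zone):

CREATE TABLE ts_test (t DateTime) ENGINE = Memory;
INSERT INTO ts_test FORMAT TabSeparated
2014-03-17 12:00:00
1395057600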

Strings are output with backslash-escaped special characters. The following escape sequences are used for output: \b, \f, \r, \n, \t, \0, \', \\. Parsing also supports the sequences \a, \v, and \xHH (hex escape sequences) and any \c sequences, where c is any character (these sequences are converted to c). Thus, reading data supports formats where a line feed can be written as \n or \, or as a line feed. For example, the string Hello world with a line feed between the words instead of space can be parsed in any of the following variations:

Hello\nworld

Hello\
world

The second variant is supported because MySQL uses it when writing tab-separated dumps.

The minimum set of characters that you need to escape when passing data in TabSeparated format: tab, line feed (LF) and backslash.

Only a small set of symbols are escaped. You can easily stumble onto a string value that your terminal will ruin in output.

Arrays are written as a list of comma-separated values in square brackets. Number items in the array are formatted as usual. Date and DateTime types are written in single quotes. Strings are written in single quotes with the same escaping rules as above.

NULL is formatted according to the setting format_tsv_null_representation (default value is \N).
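
For instance, a sketch that outputs NULL as the literal word NULL instead of \N:

SELECT NULL AS x FORMAT TSV SETTINGS format_tsv_null_representation = 'NULL'
NULL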

In input data, ENUM values can be represented as names or as ids. First, we try to match the input value to the ENUM name. If that fails and the input value is a number, we try to match this number to the ENUM id. If the input data contains only ENUM ids, it's recommended to enable the setting input_format_tsv_enum_as_number to optimize ENUM parsing.
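
A sketch with a hypothetical Enum column; with the setting enabled, the input is matched directly against the ENUM ids:

CREATE TABLE enum_test (e Enum('first' = 1, 'second' = 2)) ENGINE = Memory;
SET input_format_tsv_enum_as_number = 1;
INSERT INTO enum_test FORMAT TSV 2

SELECT * FROM enum_test then returns second.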

Each element of Nested structures is represented as an array.

For example:

CREATE TABLE nestedt
(
    `id` UInt8,
    `aux` Nested(
        a UInt8,
        b String
    )
)
ENGINE = TinyLog;
INSERT INTO nestedt VALUES (1, [1], ['a']);
SELECT * FROM nestedt FORMAT TSV;
1  [1]    ['a']

TabSeparated format settings

TabSeparatedRaw

Differs from the TabSeparated format in that the rows are written without escaping. When parsing with this format, tabs and line feeds are not allowed inside fields.

This format is also available under the name TSVRaw.

TabSeparatedWithNames

Differs from the TabSeparated format in that the column names are written in the first row.

During parsing, the first row is expected to contain the column names. You can use column names to determine their position and to check their correctness.

If the setting input_format_with_names_use_header is set to 1, the columns from the input data will be mapped to the columns of the table by their names; columns with unknown names will be skipped if the setting input_format_skip_unknown_fields is set to 1. Otherwise, the first row will be skipped.

This format is also available under the name TSVWithNames.

TabSeparatedWithNamesAndTypes

Differs from the TabSeparated format in that the column names are written in the first row, while the column types are in the second row. The first row with names is processed the same way as in the TabSeparatedWithNames format. If the setting input_format_with_types_use_header is set to 1, the types from the input data will be compared with the types of the corresponding columns from the table. Otherwise, the second row will be skipped.

This format is also available under the name TSVWithNamesAndTypes.

TabSeparatedRawWithNames

Differs from the TabSeparatedWithNames format in that the rows are written without escaping. When parsing with this format, tabs and line feeds are not allowed inside fields.

This format is also available under the name TSVRawWithNames.

TabSeparatedRawWithNamesAndTypes

Differs from the TabSeparatedWithNamesAndTypes format in that the rows are written without escaping. When parsing with this format, tabs and line feeds are not allowed inside fields.

This format is also available under the name TSVRawWithNamesAndTypes.

Template

This format allows specifying a custom format string with placeholders for values with a specified escaping rule.

It uses the settings format_template_resultset, format_template_row, and format_template_rows_between_delimiter, as well as some settings of other formats (e.g. output_format_json_quote_64bit_integers when using JSON escaping; see further).

The setting format_template_row specifies the path to a file containing the format string for rows, with the following syntax:

delimiter_1${column_1:serializeAs_1}delimiter_2${column_2:serializeAs_2} ... delimiter_N,

where delimiter_i is a delimiter between values (the $ symbol can be escaped as $$), column_i is the name or index of a column whose values are to be selected or inserted (if empty, the column is skipped), and serializeAs_i is the escaping rule for the column values. The following escaping rules are supported:

  • CSV, JSON, XML (similarly to the formats of the same names)
  • Escaped (similarly to TSV)
  • Quoted (similarly to Values)
  • Raw (without escaping, similarly to TSVRaw)
  • None (no escaping rule, see further)

If an escaping rule is omitted, then None will be used. XML is suitable only for output.

So, for the following format string:

  `Search phrase: ${SearchPhrase:Quoted}, count: ${c:Escaped}, ad price: $$${price:JSON};`

the values of the SearchPhrase, c and price columns, which are escaped as Quoted, Escaped and JSON, will be printed (for select) or will be expected (for insert) between the `Search phrase: `, `, count: `, `, ad price: $` and `;` delimiters respectively. For example:

Search phrase: 'bathroom interior design', count: 2166, ad price: $3;

The format_template_rows_between_delimiter setting specifies the delimiter between rows, which is printed (or expected) after every row except the last one (\n by default).

The setting format_template_resultset specifies the path to a file containing the format string for the result set. It has the same syntax as the format string for rows and allows specifying a prefix, a suffix and a way to print some additional information. Instead of column names it contains the following placeholders:

  • data is the rows with data in format_template_row format, separated by format_template_rows_between_delimiter. This placeholder must be the first placeholder in the format string.
  • totals is the row with total values in format_template_row format (when using WITH TOTALS)
  • min is the row with minimum values in format_template_row format (when extremes are set to 1)
  • max is the row with maximum values in format_template_row format (when extremes are set to 1)
  • rows is the total number of output rows
  • rows_before_limit is the minimal number of rows there would have been without LIMIT. Output only if the query contains LIMIT. If the query contains GROUP BY, rows_before_limit_at_least is the exact number of rows there would have been without a LIMIT.
  • time is the request execution time in seconds
  • rows_read is the number of rows that have been read
  • bytes_read is the number of bytes (uncompressed) that have been read

The placeholders data, totals, min and max must not have an escaping rule specified (or None must be specified explicitly). The remaining placeholders may have any escaping rule specified. If the format_template_resultset setting is an empty string, ${data} is used as the default value. For insert queries, the format allows skipping some columns or fields if a prefix or suffix is specified (see example).

Select example:

SELECT SearchPhrase, count() AS c FROM test.hits GROUP BY SearchPhrase ORDER BY c DESC LIMIT 5 FORMAT Template SETTINGS
format_template_resultset = '/some/path/resultset.format', format_template_row = '/some/path/row.format', format_template_rows_between_delimiter = '\n    '

/some/path/resultset.format:

<!DOCTYPE HTML>
<html> <head> <title>Search phrases</title> </head>
 <body>
  <table border="1"> <caption>Search phrases</caption>
    <tr> <th>Search phrase</th> <th>Count</th> </tr>
    ${data}
  </table>
  <table border="1"> <caption>Max</caption>
    ${max}
  </table>
  <b>Processed ${rows_read:XML} rows in ${time:XML} sec</b>
 </body>
</html>

/some/path/row.format:

<tr> <td>${0:XML}</td> <td>${1:XML}</td> </tr>

Result:

<!DOCTYPE HTML>
<html> <head> <title>Search phrases</title> </head>
 <body>
  <table border="1"> <caption>Search phrases</caption>
    <tr> <th>Search phrase</th> <th>Count</th> </tr>
    <tr> <td></td> <td>8267016</td> </tr>
    <tr> <td>bathroom interior design</td> <td>2166</td> </tr>
    <tr> <td>clickhouse</td> <td>1655</td> </tr>
    <tr> <td>spring 2014 fashion</td> <td>1549</td> </tr>
    <tr> <td>freeform photos</td> <td>1480</td> </tr>
  </table>
  <table border="1"> <caption>Max</caption>
    <tr> <td></td> <td>8873898</td> </tr>
  </table>
  <b>Processed 3095973 rows in 0.1569913 sec</b>
 </body>
</html>

Insert example:

Some header
Page views: 5, User id: 4324182021466249494, Useless field: hello, Duration: 146, Sign: -1
Page views: 6, User id: 4324182021466249494, Useless field: world, Duration: 185, Sign: 1
Total rows: 2
INSERT INTO UserActivity SETTINGS
format_template_resultset = '/some/path/resultset.format', format_template_row = '/some/path/row.format'
FORMAT Template

/some/path/resultset.format:

Some header\n${data}\nTotal rows: ${:CSV}\n

/some/path/row.format:

Page views: ${PageViews:CSV}, User id: ${UserID:CSV}, Useless field: ${:CSV}, Duration: ${Duration:CSV}, Sign: ${Sign:CSV}

PageViews, UserID, Duration and Sign inside placeholders are names of columns in the table. Values after Useless field in rows and after \nTotal rows: in suffix will be ignored. All delimiters in the input data must be strictly equal to delimiters in specified format strings.

TemplateIgnoreSpaces

This format is suitable only for input. Similar to Template, but skips whitespace characters between delimiters and values in the input stream. However, if format strings contain whitespace characters, these characters will be expected in the input stream. It also allows specifying empty placeholders (${} or ${:None}) to split a delimiter into separate parts in order to ignore spaces between them. Such placeholders are used only for skipping whitespace characters. It's possible to read JSON using this format if the values of columns have the same order in all rows. For example, the following request can be used for inserting data from the output example of the JSON format:

INSERT INTO table_name SETTINGS
format_template_resultset = '/some/path/resultset.format', format_template_row = '/some/path/row.format', format_template_rows_between_delimiter = ','
FORMAT TemplateIgnoreSpaces

/some/path/resultset.format:

{${}"meta"${}:${:JSON},${}"data"${}:${}[${data}]${},${}"totals"${}:${:JSON},${}"extremes"${}:${:JSON},${}"rows"${}:${:JSON},${}"rows_before_limit_at_least"${}:${:JSON}${}}

/some/path/row.format:

{${}"SearchPhrase"${}:${}${phrase:JSON}${},${}"c"${}:${}${cnt:JSON}${}}

TSKV

Similar to TabSeparated, but outputs a value in name=value format. Names are escaped the same way as in TabSeparated format, and the = symbol is also escaped.

SearchPhrase=   count()=8267016
SearchPhrase=bathroom interior design    count()=2166
SearchPhrase=clickhouse     count()=1655
SearchPhrase=2014 spring fashion    count()=1549
SearchPhrase=freeform photos       count()=1480
SearchPhrase=angelina jolie    count()=1245
SearchPhrase=omsk       count()=1112
SearchPhrase=photos of dog breeds    count()=1091
SearchPhrase=curtain designs        count()=1064
SearchPhrase=baku       count()=1000

NULL is formatted as \N.

SELECT * FROM t_null FORMAT TSKV
x=1    y=\N

When there is a large number of small columns, this format is inefficient, and there is generally no reason to use it. Nevertheless, it is no worse than JSONEachRow in terms of efficiency.

Both data output and parsing are supported in this format. For parsing, any order is supported for the values of different columns. It is acceptable for some values to be omitted; they are treated as equal to their default values. In this case, zeros and blank rows are used as default values. Complex values that could be specified in the table are not supported as defaults.

Parsing allows the presence of the additional field tskv without the equal sign or a value. This field is ignored.

During import, columns with unknown names will be skipped if the setting input_format_skip_unknown_fields is set to 1.

CSV

Comma Separated Values format (RFC 4180).

When formatting, rows are enclosed in double-quotes. A double quote inside a string is output as two double quotes in a row. There are no other rules for escaping characters. Date and date-time are enclosed in double-quotes. Numbers are output without quotes. Values are separated by a delimiter character, which is , by default. The delimiter character is defined in the setting format_csv_delimiter. Rows are separated using the Unix line feed (LF). Arrays are serialized in CSV as follows: first, the array is serialized to a string as in TabSeparated format, and then the resulting string is output to CSV in double-quotes. Tuples in CSV format are serialized as separate columns (that is, their nesting in the tuple is lost).

$ clickhouse-client --format_csv_delimiter="|" --query="INSERT INTO test.csv FORMAT CSV" < data.csv

By default, the delimiter is ,. See the format_csv_delimiter setting for more information.

When parsing, all values can be parsed either with or without quotes. Both double and single quotes are supported. Rows can also be arranged without quotes. In this case, they are parsed up to the delimiter character or line feed (CR or LF). In violation of the RFC, when parsing rows without quotes, the leading and trailing spaces and tabs are ignored. For the line feed, Unix (LF), Windows (CR LF) and Mac OS Classic (CR) types are all supported.

NULL is formatted according to the setting format_csv_null_representation (default value is \N).

In input data, ENUM values can be represented as names or as ids. First, we try to match the input value to the ENUM name. If that fails and the input value is a number, we try to match this number to the ENUM id. If the input data contains only ENUM ids, it's recommended to enable the setting input_format_csv_enum_as_number to optimize ENUM parsing.

The CSV format supports the output of totals and extremes the same way as TabSeparated.

CSV format settings

CSVWithNames

Also prints the header row with column names, similar to TabSeparatedWithNames.

CSVWithNamesAndTypes

Also prints two header rows with column names and types, similar to TabSeparatedWithNamesAndTypes.

CustomSeparated

Similar to Template, but it prints or reads all names and types of columns and uses the escaping rule from the format_custom_escaping_rule setting and delimiters from the format_custom_field_delimiter, format_custom_row_before_delimiter, format_custom_row_after_delimiter, format_custom_row_between_delimiter, format_custom_result_before_delimiter and format_custom_result_after_delimiter settings, not from format strings.

There is also CustomSeparatedIgnoreSpaces format, which is similar to TemplateIgnoreSpaces.
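
A minimal sketch of the settings-driven behavior (the delimiter choices here are just an illustration):

SELECT number AS n, 'Hello' AS s FROM numbers(2)
FORMAT CustomSeparated
SETTINGS format_custom_escaping_rule = 'CSV', format_custom_field_delimiter = ';'
0;"Hello"
1;"Hello"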

CustomSeparatedWithNames

Also prints the header row with column names, similar to TabSeparatedWithNames.

CustomSeparatedWithNamesAndTypes

Also prints two header rows with column names and types, similar to TabSeparatedWithNamesAndTypes.

SQLInsert

Outputs data as a sequence of INSERT INTO table (columns...) VALUES (...), (...) ...; statements.

Example:

SELECT number AS x, number + 1 AS y, 'Hello' AS z FROM numbers(10) FORMAT SQLInsert SETTINGS output_format_sql_insert_max_batch_size = 2
INSERT INTO table (x, y, z) VALUES (0, 1, 'Hello'), (1, 2, 'Hello');
INSERT INTO table (x, y, z) VALUES (2, 3, 'Hello'), (3, 4, 'Hello');
INSERT INTO table (x, y, z) VALUES (4, 5, 'Hello'), (5, 6, 'Hello');
INSERT INTO table (x, y, z) VALUES (6, 7, 'Hello'), (7, 8, 'Hello');
INSERT INTO table (x, y, z) VALUES (8, 9, 'Hello'), (9, 10, 'Hello');

To read data output by this format you can use the MySQLDump input format.

SQLInsert format settings

JSON

Outputs data in JSON format. Besides data tables, it also outputs column names and types, along with some additional information: the total number of output rows, and the number of rows that could have been output if there weren't a LIMIT. Example:

SELECT SearchPhrase, count() AS c FROM test.hits GROUP BY SearchPhrase WITH TOTALS ORDER BY c DESC LIMIT 5 FORMAT JSON
{
        "meta":
        [
                {
                        "name": "num",
                        "type": "Int32"
                },
                {
                        "name": "str",
                        "type": "String"
                },
                {
                        "name": "arr",
                        "type": "Array(UInt8)"
                }
        ],

        "data":
        [
                {
                        "num": 42,
                        "str": "hello",
                        "arr": [0,1]
                },
                {
                        "num": 43,
                        "str": "hello",
                        "arr": [0,1,2]
                },
                {
                        "num": 44,
                        "str": "hello",
                        "arr": [0,1,2,3]
                }
        ],

        "rows": 3,

        "rows_before_limit_at_least": 3,

        "statistics":
        {
                "elapsed": 0.001137687,
                "rows_read": 3,
                "bytes_read": 24
        }
}

The JSON is compatible with JavaScript. To ensure this, some characters are additionally escaped: the slash / is escaped as \/; alternative line breaks U+2028 and U+2029, which break some browsers, are escaped as \uXXXX. ASCII control characters are escaped: backspace, form feed, line feed, carriage return, and horizontal tab are replaced with \b, \f, \n, \r, \t, and the remaining bytes in the 00-1F range are escaped using \uXXXX sequences. Invalid UTF-8 sequences are changed to the replacement character � so the output text will consist of valid UTF-8 sequences. For compatibility with JavaScript, Int64 and UInt64 integers are enclosed in double-quotes by default. To remove the quotes, you can set the configuration parameter output_format_json_quote_64bit_integers to 0.
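
For example, the following sketch makes the UInt64 value come back unquoted ("x": 42 instead of "x": "42"):

SELECT toUInt64(42) AS x FORMAT JSON SETTINGS output_format_json_quote_64bit_integers = 0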

rows: The total number of output rows.

rows_before_limit_at_least: The minimal number of rows there would have been without LIMIT. Output only if the query contains LIMIT. If the query contains GROUP BY, rows_before_limit_at_least is the exact number of rows there would have been without a LIMIT.

totals: Total values (when using WITH TOTALS).

extremes: Extreme values (when extremes are set to 1).

This format is only appropriate for outputting a query result, but not for parsing (retrieving data to insert in a table).

ClickHouse supports NULL, which is displayed as null in the JSON output. To enable +nan, -nan, +inf, -inf values in output, set output_format_json_quote_denormals to 1.
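
For instance, a sketch where log(0) evaluates to -inf; with the setting enabled the field is output as "-inf" instead of null:

SELECT log(0) AS x FORMAT JSON SETTINGS output_format_json_quote_denormals = 1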

JSONStrings

Differs from JSON only in that data fields are output in strings, not in typed JSON values.

Example:

{
        "meta":
        [
                {
                        "name": "num",
                        "type": "Int32"
                },
                {
                        "name": "str",
                        "type": "String"
                },
                {
                        "name": "arr",
                        "type": "Array(UInt8)"
                }
        ],

        "data":
        [
                {
                        "num": "42",
                        "str": "hello",
                        "arr": "[0,1]"
                },
                {
                        "num": "43",
                        "str": "hello",
                        "arr": "[0,1,2]"
                },
                {
                        "num": "44",
                        "str": "hello",
                        "arr": "[0,1,2,3]"
                }
        ],

        "rows": 3,

        "rows_before_limit_at_least": 3,

        "statistics":
        {
                "elapsed": 0.001403233,
                "rows_read": 3,
                "bytes_read": 24
        }
}

JSONColumns

In this format, all data is represented as a single JSON Object. Note that the JSONColumns output format buffers all data in memory to output it as a single block, which can lead to high memory consumption.

Example:

{
	"num": [42, 43, 44],
	"str": ["hello", "hello", "hello"],
	"arr": [[0,1], [0,1,2], [0,1,2,3]]
}

During import, columns with unknown names will be skipped if the setting input_format_skip_unknown_fields is set to 1. Columns that are not present in the block will be filled with default values (you can use the input_format_defaults_for_omitted_fields setting here).

JSONColumnsWithMetadata

Differs from the JSONColumns output format in that it also outputs some metadata and statistics (similar to the JSON output format). This format buffers all data in memory and then outputs it as a single block, so it can lead to high memory consumption.

Example:

{
        "meta":
        [
                {
                        "name": "num",
                        "type": "Int32"
                },
                {
                        "name": "str",
                        "type": "String"
                },
                {
                        "name": "arr",
                        "type": "Array(UInt8)"
                }
        ],

        "data":
        {
                "num": [42, 43, 44],
                "str": ["hello", "hello", "hello"],
                "arr": [[0,1], [0,1,2], [0,1,2,3]]
        },

        "rows": 3,

        "rows_before_limit_at_least": 3,

        "statistics":
        {
                "elapsed": 0.000272376,
                "rows_read": 3,
                "bytes_read": 24
        }
}

JSONAsString

In this format, a single JSON object is interpreted as a single value. If the input has several JSON objects (comma separated), they are interpreted as separate rows. If the input data is enclosed in square brackets, it is interpreted as an array of JSON objects.

This format can only be parsed for a table with a single field of type String. The remaining columns must be set to DEFAULT or MATERIALIZED, or omitted. Once you collect the whole JSON object into a string, you can use JSON functions to process it.

Examples

Query:

DROP TABLE IF EXISTS json_as_string;
CREATE TABLE json_as_string (json String) ENGINE = Memory;
INSERT INTO json_as_string (json) FORMAT JSONAsString {"foo":{"bar":{"x":"y"},"baz":1}},{},{"any json structure":1}
SELECT * FROM json_as_string;

Result:

┌─json──────────────────────────────┐
│ {"foo":{"bar":{"x":"y"},"baz":1}} │
│ {}                                │
│ {"any json stucture":1}           │
└───────────────────────────────────┘

An array of JSON objects

Query:

CREATE TABLE json_square_brackets (field String) ENGINE = Memory;
INSERT INTO json_square_brackets FORMAT JSONAsString [{"id": 1, "name": "name1"}, {"id": 2, "name": "name2"}];

SELECT * FROM json_square_brackets;

Result:

┌─field──────────────────────┐
│ {"id": 1, "name": "name1"} │
│ {"id": 2, "name": "name2"} │
└────────────────────────────┘

JSONCompact

Differs from JSON only in that data rows are output in arrays, not in objects.

Example:

{
        "meta":
        [
                {
                        "name": "num",
                        "type": "Int32"
                },
                {
                        "name": "str",
                        "type": "String"
                },
                {
                        "name": "arr",
                        "type": "Array(UInt8)"
                }
        ],

        "data":
        [
                [42, "hello", [0,1]],
                [43, "hello", [0,1,2]],
                [44, "hello", [0,1,2,3]]
        ],

        "rows": 3,

        "rows_before_limit_at_least": 3,

        "statistics":
        {
                "elapsed": 0.001222069,
                "rows_read": 3,
                "bytes_read": 24
        }
}

JSONCompactStrings

Differs from JSONStrings only in that data rows are output in arrays, not in objects.

Example:

{
        "meta":
        [
                {
                        "name": "num",
                        "type": "Int32"
                },
                {
                        "name": "str",
                        "type": "String"
                },
                {
                        "name": "arr",
                        "type": "Array(UInt8)"
                }
        ],

        "data":
        [
                ["42", "hello", "[0,1]"],
                ["43", "hello", "[0,1,2]"],
                ["44", "hello", "[0,1,2,3]"]
        ],

        "rows": 3,

        "rows_before_limit_at_least": 3,

        "statistics":
        {
                "elapsed": 0.001572097,
                "rows_read": 3,
                "bytes_read": 24
        }
}

JSONCompactColumns

In this format, all data is represented as a single JSON Array. Note that the JSONCompactColumns output format buffers all data in memory to output it as a single block, which can lead to high memory consumption.

Example:

[
	[42, 43, 44],
	["hello", "hello", "hello"],
	[[0,1], [0,1,2], [0,1,2,3]]
]

Columns that are not present in the block will be filled with default values (you can use the input_format_defaults_for_omitted_fields setting here).

JSONEachRow

In this format, ClickHouse outputs each row as a separate, newline-delimited JSON object.

Example:

{"num":42,"str":"hello","arr":[0,1]}
{"num":43,"str":"hello","arr":[0,1,2]}
{"num":44,"str":"hello","arr":[0,1,2,3]}

While importing data, columns with unknown names will be skipped if the setting input_format_skip_unknown_fields is set to 1.

JSONStringsEachRow

Differs from JSONEachRow only in that data fields are output in strings, not in typed JSON values.

Example:

{"num":"42","str":"hello","arr":"[0,1]"}
{"num":"43","str":"hello","arr":"[0,1,2]"}
{"num":"44","str":"hello","arr":"[0,1,2,3]"}

JSONCompactEachRow

Differs from JSONEachRow only in that data rows are output in arrays, not in objects.

Example:

[42, "hello", [0,1]]
[43, "hello", [0,1,2]]
[44, "hello", [0,1,2,3]]

JSONCompactStringsEachRow

Differs from JSONCompactEachRow only in that data fields are output in strings, not in typed JSON values.

Example:

["42", "hello", "[0,1]"]
["43", "hello", "[0,1,2]"]
["44", "hello", "[0,1,2,3]"]

JSONEachRowWithProgress

JSONStringsEachRowWithProgress

Differs from JSONEachRow/JSONStringsEachRow in that ClickHouse will also yield progress information as JSON values.

{"row":{"num":42,"str":"hello","arr":[0,1]}}
{"row":{"num":43,"str":"hello","arr":[0,1,2]}}
{"row":{"num":44,"str":"hello","arr":[0,1,2,3]}}
{"progress":{"read_rows":"3","read_bytes":"24","written_rows":"0","written_bytes":"0","total_rows_to_read":"3"}}

JSONCompactEachRowWithNames

Differs from JSONCompactEachRow format in that it also prints the header row with column names, similar to TabSeparatedWithNames.

JSONCompactEachRowWithNamesAndTypes

Differs from JSONCompactEachRow format in that it also prints two header rows with column names and types, similar to TabSeparatedWithNamesAndTypes.

JSONCompactStringsEachRowWithNames

Differs from JSONCompactStringsEachRow in that it also prints the header row with column names, similar to TabSeparatedWithNames.

JSONCompactStringsEachRowWithNamesAndTypes

Differs from JSONCompactStringsEachRow in that it also prints two header rows with column names and types, similar to TabSeparatedWithNamesAndTypes.

["num", "str", "arr"]
["Int32", "String", "Array(UInt8)"]
[42, "hello", [0,1]]
[43, "hello", [0,1,2]]
[44, "hello", [0,1,2,3]]

Inserting Data

INSERT INTO UserActivity FORMAT JSONEachRow {"PageViews":5, "UserID":"4324182021466249494", "Duration":146,"Sign":-1} {"UserID":"4324182021466249494","PageViews":6,"Duration":185,"Sign":1}

ClickHouse allows:

  • Any order of key-value pairs in the object.
  • Omitting some values.

ClickHouse ignores spaces between elements and commas after the objects. You can pass all the objects in one line. You do not have to separate them with line breaks.

Omitted values processing

ClickHouse substitutes omitted values with the default values for the corresponding data types.

If DEFAULT expr is specified, ClickHouse uses different substitution rules depending on the input_format_defaults_for_omitted_fields setting.

Consider the following table:

CREATE TABLE IF NOT EXISTS example_table
(
    x UInt32,
    a DEFAULT x * 2
) ENGINE = Memory;
  • If input_format_defaults_for_omitted_fields = 0, then the default value for x and a equals 0 (as the default value for the UInt32 data type).
  • If input_format_defaults_for_omitted_fields = 1, then the default value for x equals 0, but the default value of a equals x * 2.
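
A sketch of the difference, using example_table above:

SET input_format_defaults_for_omitted_fields = 1;
-- With the setting enabled, the omitted column a is calculated as x * 2 = 6;
-- with the setting disabled, a would be 0.
INSERT INTO example_table FORMAT JSONEachRow {"x":3}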

:::warning When inserting data with input_format_defaults_for_omitted_fields = 1, ClickHouse consumes more computational resources, compared to insertion with input_format_defaults_for_omitted_fields = 0. :::

Selecting Data

Consider the UserActivity table as an example:

┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
│ 4324182021466249494 │         5 │      146 │   -1 │
│ 4324182021466249494 │         6 │      185 │    1 │
└─────────────────────┴───────────┴──────────┴──────┘

The query SELECT * FROM UserActivity FORMAT JSONEachRow returns:

{"UserID":"4324182021466249494","PageViews":5,"Duration":146,"Sign":-1}
{"UserID":"4324182021466249494","PageViews":6,"Duration":185,"Sign":1}

Unlike the JSON format, there is no substitution of invalid UTF-8 sequences. Values are escaped in the same way as for JSON.

:::info Any set of bytes can be output in the strings. Use the JSONEachRow format if you are sure that the data in the table can be formatted as JSON without losing any information. :::

Usage of Nested Structures

If you have a table with Nested data type columns, you can insert JSON data with the same structure. Enable this feature with the input_format_import_nested_json setting.

For example, consider the following table:

CREATE TABLE json_each_row_nested (n Nested(s String, i Int32)) ENGINE = Memory

As you can see in the Nested data type description, ClickHouse treats each component of the nested structure as a separate column (n.s and n.i for our table). You can insert data in the following way:

INSERT INTO json_each_row_nested FORMAT JSONEachRow {"n.s": ["abc", "def"], "n.i": [1, 23]}

To insert data as a hierarchical JSON object, set input_format_import_nested_json=1.

{
    "n": {
        "s": ["abc", "def"],
        "i": [1, 23]
    }
}

Without this setting, ClickHouse throws an exception.

SELECT name, value FROM system.settings WHERE name = 'input_format_import_nested_json'
┌─name────────────────────────────┬─value─┐
│ input_format_import_nested_json │ 0     │
└─────────────────────────────────┴───────┘
INSERT INTO json_each_row_nested FORMAT JSONEachRow {"n": {"s": ["abc", "def"], "i": [1, 23]}}
Code: 117. DB::Exception: Unknown field found while parsing JSONEachRow format: n: (at row 1)
SET input_format_import_nested_json=1
INSERT INTO json_each_row_nested FORMAT JSONEachRow {"n": {"s": ["abc", "def"], "i": [1, 23]}}
SELECT * FROM json_each_row_nested
┌─n.s───────────┬─n.i────┐
│ ['abc','def'] │ [1,23] │
└───────────────┴────────┘

JSON formats settings

Native

The most efficient format. Data is written and read by blocks in binary format. For each block, the number of rows, number of columns, column names and types, and parts of columns in this block are recorded one after another. In other words, this format is "columnar": it does not convert columns to rows. This is the format used in the native interface for interaction between servers, for using the command-line client, and for C++ clients.

You can use this format to quickly generate dumps that can only be read by the ClickHouse DBMS. It does not make sense to work with this format yourself.

Null

Nothing is output. However, the query is processed, and when using the command-line client, data is transmitted to the client. This is used for tests, including performance testing. Obviously, this format is only appropriate for output, not for parsing.

Pretty

Outputs data as Unicode-art tables, also using ANSI-escape sequences for setting colours in the terminal. A full grid of the table is drawn, and each row occupies two lines in the terminal. Each result block is output as a separate table. This is necessary so that blocks can be output without buffering results (buffering would be necessary in order to pre-calculate the visible width of all the values).

NULL is output as ᴺᵁᴸᴸ.

Example (shown for the PrettyCompact format):

SELECT * FROM t_null
┌─x─┬────y─┐
│ 1 │ ᴺᵁᴸᴸ │
└───┴──────┘

Rows are not escaped in Pretty* formats. The example is shown for the PrettyCompact format:

SELECT 'String with \'quotes\' and \t character' AS Escaping_test
┌─Escaping_test───────────────────────────┐
│ String with 'quotes' and      character │
└─────────────────────────────────────────┘

To avoid dumping too much data to the terminal, only the first 10,000 rows are printed. If the number of rows is greater than or equal to 10,000, the message “Showed first 10 000” is printed. This format is only appropriate for outputting a query result, but not for parsing (retrieving data to insert in a table).

The Pretty format supports outputting total values (when using WITH TOTALS) and extremes (when extremes is set to 1). In these cases, total values and extreme values are output after the main data, in separate tables. Example (shown for the PrettyCompact format):

SELECT EventDate, count() AS c FROM test.hits GROUP BY EventDate WITH TOTALS ORDER BY EventDate FORMAT PrettyCompact
┌──EventDate─┬───────c─┐
│ 2014-03-17 │ 1406958 │
│ 2014-03-18 │ 1383658 │
│ 2014-03-19 │ 1405797 │
│ 2014-03-20 │ 1353623 │
│ 2014-03-21 │ 1245779 │
│ 2014-03-22 │ 1031592 │
│ 2014-03-23 │ 1046491 │
└────────────┴─────────┘

Totals:
┌──EventDate─┬───────c─┐
│ 1970-01-01 │ 8873898 │
└────────────┴─────────┘

Extremes:
┌──EventDate─┬───────c─┐
│ 2014-03-17 │ 1031592 │
│ 2014-03-23 │ 1406958 │
└────────────┴─────────┘

PrettyCompact

Differs from Pretty in that the grid is drawn between rows and the result is more compact. This format is used by default in the command-line client in interactive mode.

PrettyCompactMonoBlock

Differs from PrettyCompact in that up to 10,000 rows are buffered, then output as a single table, not by blocks.

PrettyNoEscapes

Differs from Pretty in that ANSI escape sequences aren't used. This is necessary for displaying this format in a browser, as well as for using the watch command-line utility.

Example:

$ watch -n1 "clickhouse-client --query='SELECT event, value FROM system.events FORMAT PrettyCompactNoEscapes'"

You can use the HTTP interface for displaying in the browser.

PrettyCompactNoEscapes

The same as PrettyNoEscapes, but for the PrettyCompact format.

PrettySpaceNoEscapes

The same as PrettyNoEscapes, but for the PrettySpace format.

PrettySpace

Differs from PrettyCompact in that whitespace (space characters) is used instead of the grid.

Pretty formats settings

RowBinary

Formats and parses data by row in binary format. Rows and values are listed consecutively, without separators. This format is less efficient than the Native format since it is row-based.

Integers use fixed-length little-endian representation. For example, UInt64 uses 8 bytes. DateTime is represented as UInt32 containing the Unix timestamp as the value. Date is represented as a UInt16 object that contains the number of days since 1970-01-01 as the value. String is represented as a varint length (unsigned LEB128), followed by the bytes of the string. FixedString is represented simply as a sequence of bytes.

Array is represented as a varint length (unsigned LEB128), followed by successive elements of the array.

For NULL support, an additional byte containing 1 or 0 is added before each Nullable value. If 1, then the value is NULL and this byte is interpreted as a separate value. If 0, the value after the byte is not NULL.
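
One way to inspect the layout is to pipe a small result through a hex dump (a sketch; xxd is assumed to be available). The UInt32 value 1 occupies four little-endian bytes, followed by the varint length 2 and the bytes of the string:

$ clickhouse-client --query="SELECT toUInt32(1) AS x, 'AB' AS s FORMAT RowBinary" | xxd
00000000: 0100 0000 0241 42                        .....AB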

RowBinaryWithNames

Similar to RowBinary, but with added header:

  • LEB128-encoded number of columns (N)
  • N Strings specifying column names

RowBinaryWithNamesAndTypes

Similar to RowBinary, but with added header:

  • LEB128-encoded number of columns (N)
  • N Strings specifying column names
  • N Strings specifying column types

Values

Prints every row in brackets. Rows are separated by commas. There is no comma after the last row. The values inside the brackets are also comma-separated. Numbers are output in a decimal format without quotes. Arrays are output in square brackets. Strings, dates, and dates with times are output in quotes. Escaping rules and parsing are similar to the TabSeparated format. During formatting, extra spaces aren't inserted, but during parsing, they are allowed and skipped (except for spaces inside array values, which are not allowed). NULL is represented as NULL.

The minimum set of characters that you need to escape when passing data in Values format: single quotes and backslashes.

This is the format that is used in INSERT INTO t VALUES ..., but you can also use it for formatting query results.
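
For example (the expected output is shown below the query):

SELECT number AS x, concat('t', toString(number)) AS s FROM numbers(3) FORMAT Values
(0,'t0'),(1,'t1'),(2,'t2')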

Values format settings

Vertical

Prints each value on a separate line with the column name specified. This format is convenient for printing just one or a few rows if each row consists of a large number of columns.

NULL is output as ᴺᵁᴸᴸ.

Example:

SELECT * FROM t_null FORMAT Vertical
Row 1:
──────
x: 1
y: ᴺᵁᴸᴸ

Rows are not escaped in Vertical format:

SELECT 'string with \'quotes\' and \t with some special \n characters' AS test FORMAT Vertical
Row 1:
──────
test: string with 'quotes' and      with some special
 characters

This format is only appropriate for outputting a query result, but not for parsing (retrieving data to insert in a table).

XML

XML format is suitable only for output, not for parsing. Example:

<?xml version='1.0' encoding='UTF-8' ?>
<result>
        <meta>
                <columns>
                        <column>
                                <name>SearchPhrase</name>
                                <type>String</type>
                        </column>
                        <column>
                                <name>count()</name>
                                <type>UInt64</type>
                        </column>
                </columns>
        </meta>
        <data>
                <row>
                        <SearchPhrase></SearchPhrase>
                        <field>8267016</field>
                </row>
                <row>
                        <SearchPhrase>bathroom interior design</SearchPhrase>
                        <field>2166</field>
                </row>
                <row>
                        <SearchPhrase>clickhouse</SearchPhrase>
                        <field>1655</field>
                </row>
                <row>
                        <SearchPhrase>2014 spring fashion</SearchPhrase>
                        <field>1549</field>
                </row>
                <row>
                        <SearchPhrase>freeform photos</SearchPhrase>
                        <field>1480</field>
                </row>
                <row>
                        <SearchPhrase>angelina jolie</SearchPhrase>
                        <field>1245</field>
                </row>
                <row>
                        <SearchPhrase>omsk</SearchPhrase>
                        <field>1112</field>
                </row>
                <row>
                        <SearchPhrase>photos of dog breeds</SearchPhrase>
                        <field>1091</field>
                </row>
                <row>
                        <SearchPhrase>curtain designs</SearchPhrase>
                        <field>1064</field>
                </row>
                <row>
                        <SearchPhrase>baku</SearchPhrase>
                        <field>1000</field>
                </row>
        </data>
        <rows>10</rows>
        <rows_before_limit_at_least>141137</rows_before_limit_at_least>
</result>

If the column name does not have an acceptable format, just field is used as the element name. In general, the XML structure follows the JSON structure. Just as for JSON, invalid UTF-8 sequences are changed to the replacement character � so the output text will consist of valid UTF-8 sequences.

In string values, the characters < and & are escaped as &lt; and &amp;.

Arrays are output as <array><elem>Hello</elem><elem>World</elem>...</array>, and tuples as <tuple><elem>Hello</elem><elem>World</elem>...</tuple>.

CapnProto

CapnProto is a binary message format similar to Protocol Buffers and Thrift, but unlike JSON or MessagePack.

CapnProto messages are strictly typed and not self-describing, meaning they need an external schema description. The schema is applied on the fly and cached for each query.

See also Format Schema.

Data Types Matching

The table below shows supported data types and how they match ClickHouse data types in INSERT and SELECT queries.

| CapnProto data type (INSERT)   | ClickHouse data type | CapnProto data type (SELECT)   |
|--------------------------------|----------------------|--------------------------------|
| UINT8, BOOL                    | UInt8                | UINT8                          |
| INT8                           | Int8                 | INT8                           |
| UINT16                         | UInt16, Date         | UINT16                         |
| INT16                          | Int16                | INT16                          |
| UINT32                         | UInt32, DateTime     | UINT32                         |
| INT32                          | Int32                | INT32                          |
| UINT64                         | UInt64               | UINT64                         |
| INT64                          | Int64, DateTime64    | INT64                          |
| FLOAT32                        | Float32              | FLOAT32                        |
| FLOAT64                        | Float64              | FLOAT64                        |
| TEXT, DATA                     | String, FixedString  | TEXT, DATA                     |
| union(T, Void), union(Void, T) | Nullable(T)          | union(T, Void), union(Void, T) |
| ENUM                           | Enum(8\|16)          | ENUM                           |
| LIST                           | Array                | LIST                           |
| STRUCT                         | Tuple                | STRUCT                         |

For working with Enum in CapnProto format use the format_capn_proto_enum_comparising_mode setting.

Arrays can be nested and can have a value of the Nullable type as an argument. The Tuple type can also be nested.

Inserting and Selecting Data

You can insert CapnProto data from a file into a ClickHouse table with the following command:

$ cat capnproto_messages.bin | clickhouse-client --query "INSERT INTO test.hits SETTINGS format_schema = 'schema:Message' FORMAT CapnProto"

Where schema.capnp looks like this:

struct Message {
  SearchPhrase @0 :Text;
  c @1 :UInt64;
}

You can select data from a ClickHouse table and save it to a file in the CapnProto format with the following command:

$ clickhouse-client --query="SELECT * FROM test.hits FORMAT CapnProto SETTINGS format_schema = 'schema:Message'"

Prometheus

Expose metrics in Prometheus text-based exposition format.

The output table should have a proper structure. The columns name (String) and value (number) are required. Rows may optionally contain help (String) and timestamp (number). The column type (String) is either counter, gauge, histogram, summary, untyped or empty. Each metric value may also have some labels (Map(String, String)). Several consecutive rows may refer to the same metric with different labels. The table should be sorted by metric name (e.g., with ORDER BY name).

There are special requirements for labels of histogram and summary metrics; see the Prometheus documentation for details. Special rules apply to rows with labels {'count':''} and {'sum':''}: they are converted to <metric_name>_count and <metric_name>_sum respectively.

Example:

┌─name────────────────────────────────┬─type──────┬─help──────────────────────────────────────┬─labels─────────────────────────┬────value─┬─────timestamp─┐
│ http_request_duration_seconds       │ histogram │ A histogram of the request duration.      │ {'le':'0.05'}                  │    24054 │             0 │
│ http_request_duration_seconds       │ histogram │                                           │ {'le':'0.1'}                   │    33444 │             0 │
│ http_request_duration_seconds       │ histogram │                                           │ {'le':'0.2'}                   │   100392 │             0 │
│ http_request_duration_seconds       │ histogram │                                           │ {'le':'0.5'}                   │   129389 │             0 │
│ http_request_duration_seconds       │ histogram │                                           │ {'le':'1'}                     │   133988 │             0 │
│ http_request_duration_seconds       │ histogram │                                           │ {'le':'+Inf'}                  │   144320 │             0 │
│ http_request_duration_seconds       │ histogram │                                           │ {'sum':''}                     │    53423 │             0 │
│ http_requests_total                 │ counter   │ Total number of HTTP requests             │ {'method':'post','code':'200'} │     1027 │ 1395066363000 │
│ http_requests_total                 │ counter   │                                           │ {'method':'post','code':'400'} │        3 │ 1395066363000 │
│ metric_without_timestamp_and_labels │           │                                           │ {}                             │    12.47 │             0 │
│ rpc_duration_seconds                │ summary   │ A summary of the RPC duration in seconds. │ {'quantile':'0.01'}            │     3102 │             0 │
│ rpc_duration_seconds                │ summary   │                                           │ {'quantile':'0.05'}            │     3272 │             0 │
│ rpc_duration_seconds                │ summary   │                                           │ {'quantile':'0.5'}             │     4773 │             0 │
│ rpc_duration_seconds                │ summary   │                                           │ {'quantile':'0.9'}             │     9001 │             0 │
│ rpc_duration_seconds                │ summary   │                                           │ {'quantile':'0.99'}            │    76656 │             0 │
│ rpc_duration_seconds                │ summary   │                                           │ {'count':''}                   │     2693 │             0 │
│ rpc_duration_seconds                │ summary   │                                           │ {'sum':''}                     │ 17560473 │             0 │
│ something_weird                     │           │                                           │ {'problem':'division by zero'} │      inf │      -3982045 │
└─────────────────────────────────────┴───────────┴───────────────────────────────────────────┴────────────────────────────────┴──────────┴───────────────┘

Will be formatted as:

# HELP http_request_duration_seconds A histogram of the request duration.
# TYPE http_request_duration_seconds histogram
http_request_duration_seconds_bucket{le="0.05"} 24054
http_request_duration_seconds_bucket{le="0.1"} 33444
http_request_duration_seconds_bucket{le="0.5"} 129389
http_request_duration_seconds_bucket{le="1"} 133988
http_request_duration_seconds_bucket{le="+Inf"} 144320
http_request_duration_seconds_sum 53423
http_request_duration_seconds_count 144320

# HELP http_requests_total Total number of HTTP requests
# TYPE http_requests_total counter
http_requests_total{code="200",method="post"} 1027 1395066363000
http_requests_total{code="400",method="post"} 3 1395066363000

metric_without_timestamp_and_labels 12.47

# HELP rpc_duration_seconds A summary of the RPC duration in seconds.
# TYPE rpc_duration_seconds summary
rpc_duration_seconds{quantile="0.01"} 3102
rpc_duration_seconds{quantile="0.05"} 3272
rpc_duration_seconds{quantile="0.5"} 4773
rpc_duration_seconds{quantile="0.9"} 9001
rpc_duration_seconds{quantile="0.99"} 76656
rpc_duration_seconds_sum 17560473
rpc_duration_seconds_count 2693

something_weird{problem="division by zero"} +Inf -3982045

Protobuf

Protobuf is a Protocol Buffers format.

This format requires an external format schema. The schema is cached between queries. ClickHouse supports both proto2 and proto3 syntaxes. Repeated/optional/required fields are supported.

Usage examples:

SELECT * FROM test.table FORMAT Protobuf SETTINGS format_schema = 'schemafile:MessageType'
cat protobuf_messages.bin | clickhouse-client --query "INSERT INTO test.table SETTINGS format_schema='schemafile:MessageType' FORMAT Protobuf"

where the file schemafile.proto looks like this:

syntax = "proto3";

message MessageType {
  string name = 1;
  string surname = 2;
  uint32 birthDate = 3;
  repeated string phoneNumbers = 4;
}

To find the correspondence between table columns and fields of the Protocol Buffers message type, ClickHouse compares their names. This comparison is case-insensitive, and the characters _ (underscore) and . (dot) are considered equal. If the types of a column and a field of the Protocol Buffers message differ, the necessary conversion is applied.

Nested messages are supported. For example, for the field z in the following message type

message MessageType {
  message XType {
    message YType {
      int32 z = 1;
    }
    repeated YType y = 1;
  }
  XType x = 1;
}

ClickHouse tries to find a column named x.y.z (or x_y_z or X.y_Z and so on). Nested messages are suitable for input or output of nested data structures.

Default values defined in a protobuf schema like this

syntax = "proto2";

message MessageType {
  optional int32 result_per_page = 3 [default = 10];
}

are not applied; the table defaults are used instead of them.

ClickHouse inputs and outputs protobuf messages in the length-delimited format. This means that every message is preceded by its length written as a varint. See also how to read/write length-delimited protobuf messages in popular languages.

ProtobufSingle

Same as Protobuf but for storing/parsing single Protobuf message without length delimiters.

Avro

Apache Avro is a row-oriented data serialization framework developed within Apache's Hadoop project.

The ClickHouse Avro format supports reading and writing Avro data files.

Data Types Matching

The table below shows supported data types and how they match ClickHouse data types in INSERT and SELECT queries.

| Avro data type (INSERT)           | ClickHouse data type            | Avro data type (SELECT)     |
|-----------------------------------|---------------------------------|-----------------------------|
| boolean, int, long, float, double | Int(8\|16\|32), UInt(8\|16\|32) | int                         |
| boolean, int, long, float, double | Int64, UInt64                   | long                        |
| boolean, int, long, float, double | Float32                         | float                       |
| boolean, int, long, float, double | Float64                         | double                      |
| bytes, string, fixed, enum        | String                          | bytes or string \*          |
| bytes, string, fixed              | FixedString(N)                  | fixed(N)                    |
| enum                              | Enum(8\|16)                     | enum                        |
| array(T)                          | Array(T)                        | array(T)                    |
| union(null, T), union(T, null)    | Nullable(T)                     | union(null, T)              |
| null                              | Nullable(Nothing)               | null                        |
| int (date) \**                    | Date                            | int (date) \**              |
| long (timestamp-millis) \**       | DateTime64(3)                   | long (timestamp-millis) \** |
| long (timestamp-micros) \**       | DateTime64(6)                   | long (timestamp-micros) \** |

\* bytes is default, controlled by output_format_avro_string_column_pattern
\** Avro logical types

Unsupported Avro data types: record (non-root), map

Unsupported Avro logical data types: time-millis, time-micros, duration

Inserting Data

To insert data from an Avro file into a ClickHouse table:

$ cat file.avro | clickhouse-client --query="INSERT INTO {some_table} FORMAT Avro"

The root schema of the input Avro file must be of record type.

To find the correspondence between table columns and fields of the Avro schema, ClickHouse compares their names. This comparison is case-sensitive. Unused fields are skipped.

Data types of ClickHouse table columns can differ from the corresponding fields of the Avro data inserted. When inserting data, ClickHouse interprets data types according to the table above and then casts the data to corresponding column type.

While importing data, when a field is not found in the schema and the setting input_format_avro_allow_missing_fields is enabled, the default value will be used instead of raising an error.

Selecting Data

To select data from a ClickHouse table into an Avro file:

$ clickhouse-client --query="SELECT * FROM {some_table} FORMAT Avro" > file.avro

Column names must:

  • start with [A-Za-z_]
  • subsequently contain only [A-Za-z0-9_]

Output Avro file compression and sync interval can be configured with output_format_avro_codec and output_format_avro_sync_interval respectively.
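
For example, a sketch that writes a snappy-compressed Avro file (the codec value is one of null, deflate or snappy):

$ clickhouse-client --query="SELECT * FROM {some_table} FORMAT Avro SETTINGS output_format_avro_codec = 'snappy'" > file.avro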

AvroConfluent

AvroConfluent supports decoding single-object Avro messages commonly used with Kafka and Confluent Schema Registry.

Each Avro message embeds a schema id that can be resolved to the actual schema with the help of the Schema Registry.

Schemas are cached once resolved.

Schema Registry URL is configured with format_avro_schema_registry_url.

Data Types Matching

Same as Avro.

Usage

To quickly verify schema resolution, you can use kafkacat with clickhouse-local:

$ kafkacat -b kafka-broker -C -t topic1 -o beginning -f '%s' -c 3 | clickhouse-local --input-format AvroConfluent --format_avro_schema_registry_url 'http://schema-registry' -S "field1 Int64, field2 String" -q 'select * from table'
1 a
2 b
3 c

To use AvroConfluent with Kafka:

CREATE TABLE topic1_stream
(
    field1 String,
    field2 String
)
ENGINE = Kafka()
SETTINGS
kafka_broker_list = 'kafka-broker',
kafka_topic_list = 'topic1',
kafka_group_name = 'group1',
kafka_format = 'AvroConfluent';

SET format_avro_schema_registry_url = 'http://schema-registry';

SELECT * FROM topic1_stream;

:::warning
Setting format_avro_schema_registry_url needs to be configured in users.xml to maintain its value after a restart. Also you can use the format_avro_schema_registry_url setting of the Kafka table engine.
:::
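In practice, data arriving in topic1_stream is usually persisted with a materialized view (a sketch; the target table name and its ordering key are illustrative):

```sql
-- durable storage for the rows decoded from Kafka
CREATE TABLE topic1_data
(
    field1 String,
    field2 String
) ENGINE = MergeTree ORDER BY field1;

-- moves rows from the Kafka table into MergeTree as they arrive
CREATE MATERIALIZED VIEW topic1_consumer TO topic1_data AS
SELECT field1, field2 FROM topic1_stream;
```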

Parquet

Apache Parquet is a columnar storage format widespread in the Hadoop ecosystem. ClickHouse supports read and write operations for this format.

Data Types Matching

The table below shows supported data types and how they match ClickHouse data types in INSERT and SELECT queries.

| Parquet data type (INSERT) | ClickHouse data type | Parquet data type (SELECT) |
|---|---|---|
| UINT8, BOOL | UInt8 | UINT8 |
| INT8 | Int8 | INT8 |
| UINT16 | UInt16 | UINT16 |
| INT16 | Int16 | INT16 |
| UINT32 | UInt32 | UINT32 |
| INT32 | Int32 | INT32 |
| UINT64 | UInt64 | UINT64 |
| INT64 | Int64 | INT64 |
| FLOAT, HALF_FLOAT | Float32 | FLOAT |
| DOUBLE | Float64 | DOUBLE |
| DATE32 | Date | UINT16 |
| DATE64, TIMESTAMP | DateTime | UINT32 |
| STRING, BINARY | String | BINARY |
|  | FixedString | BINARY |
| DECIMAL | Decimal | DECIMAL |
| LIST | Array | LIST |
| STRUCT | Tuple | STRUCT |
| MAP | Map | MAP |

Arrays can be nested and can have a value of the Nullable type as an argument. Tuple and Map types also can be nested.

ClickHouse supports configurable precision of the Decimal type. The INSERT query treats the Parquet DECIMAL type as the ClickHouse Decimal128 type.

Unsupported Parquet data types: TIME32, FIXED_SIZE_BINARY, JSON, UUID, ENUM.

Data types of ClickHouse table columns can differ from the corresponding fields of the Parquet data inserted. When inserting data, ClickHouse interprets data types according to the table above and then casts the data to the data type set for the ClickHouse table column.

Inserting and Selecting Data

You can insert Parquet data from a file into a ClickHouse table with the following command:

$ cat {filename} | clickhouse-client --query="INSERT INTO {some_table} FORMAT Parquet"

You can select data from a ClickHouse table and save it to a file in the Parquet format with the following command:

$ clickhouse-client --query="SELECT * FROM {some_table} FORMAT Parquet" > {some_file.pq}

To exchange data with Hadoop, you can use the HDFS table engine.
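For example, a table backed by a Parquet file in HDFS can be declared like this (a sketch; the URI is illustrative):

```sql
-- reads and writes Parquet data at the given HDFS URI
CREATE TABLE hdfs_parquet_table (x UInt32, s String)
ENGINE = HDFS('hdfs://hdfs1:9000/data/file.parquet', 'Parquet');
```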

Parquet format settings

Arrow

Apache Arrow comes with two built-in columnar storage formats. ClickHouse supports read and write operations for these formats.

Arrow is Apache Arrow's "file mode" format. It is designed for in-memory random access.

Data Types Matching

The table below shows supported data types and how they match ClickHouse data types in INSERT and SELECT queries.

| Arrow data type (INSERT) | ClickHouse data type | Arrow data type (SELECT) |
|---|---|---|
| UINT8, BOOL | UInt8 | UINT8 |
| INT8 | Int8 | INT8 |
| UINT16 | UInt16 | UINT16 |
| INT16 | Int16 | INT16 |
| UINT32 | UInt32 | UINT32 |
| INT32 | Int32 | INT32 |
| UINT64 | UInt64 | UINT64 |
| INT64 | Int64 | INT64 |
| FLOAT, HALF_FLOAT | Float32 | FLOAT32 |
| DOUBLE | Float64 | FLOAT64 |
| DATE32 | Date | UINT16 |
| DATE64, TIMESTAMP | DateTime | UINT32 |
| STRING, BINARY | String | BINARY |
| STRING, BINARY | FixedString | BINARY |
| DECIMAL | Decimal | DECIMAL |
| DECIMAL256 | Decimal256 | DECIMAL256 |
| LIST | Array | LIST |
| STRUCT | Tuple | STRUCT |
| MAP | Map | MAP |

Arrays can be nested and can have a value of the Nullable type as an argument. Tuple and Map types also can be nested.

The DICTIONARY type is supported for INSERT queries, and for SELECT queries there is an output_format_arrow_low_cardinality_as_dictionary setting that allows outputting the LowCardinality type as the DICTIONARY type.
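For example (a sketch; the table and file names are illustrative):

```bash
# LowCardinality columns are written as Arrow DICTIONARY instead of plain values
$ clickhouse-client --query="SELECT * FROM some_table SETTINGS output_format_arrow_low_cardinality_as_dictionary = 1 FORMAT Arrow" > file.arrow
```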

ClickHouse supports configurable precision of the Decimal type. The INSERT query treats the Arrow DECIMAL type as the ClickHouse Decimal128 type.

Unsupported Arrow data types: TIME32, FIXED_SIZE_BINARY, JSON, UUID, ENUM.

The data types of ClickHouse table columns do not have to match the corresponding Arrow data fields. When inserting data, ClickHouse interprets data types according to the table above and then casts the data to the data type set for the ClickHouse table column.

Inserting Data

You can insert Arrow data from a file into a ClickHouse table with the following command:

$ cat filename.arrow | clickhouse-client --query="INSERT INTO some_table FORMAT Arrow"

Selecting Data

You can select data from a ClickHouse table and save it to a file in the Arrow format with the following command:

$ clickhouse-client --query="SELECT * FROM {some_table} FORMAT Arrow" > {filename.arrow}

Arrow format settings

ArrowStream

ArrowStream is Apache Arrow's "stream mode" format. It is designed for in-memory stream processing.

ORC

Apache ORC is a columnar storage format widespread in the Hadoop ecosystem.

Data Types Matching

The table below shows supported data types and how they match ClickHouse data types in INSERT and SELECT queries.

| ORC data type (INSERT) | ClickHouse data type | ORC data type (SELECT) |
|---|---|---|
| UINT8, BOOL | UInt8 | UINT8 |
| INT8 | Int8 | INT8 |
| UINT16 | UInt16 | UINT16 |
| INT16 | Int16 | INT16 |
| UINT32 | UInt32 | UINT32 |
| INT32 | Int32 | INT32 |
| UINT64 | UInt64 | UINT64 |
| INT64 | Int64 | INT64 |
| FLOAT, HALF_FLOAT | Float32 | FLOAT |
| DOUBLE | Float64 | DOUBLE |
| DATE32 | Date | DATE32 |
| DATE64, TIMESTAMP | DateTime | TIMESTAMP |
| STRING, BINARY | String | BINARY |
| DECIMAL | Decimal | DECIMAL |
| LIST | Array | LIST |
| STRUCT | Tuple | STRUCT |
| MAP | Map | MAP |

Arrays can be nested and can have a value of the Nullable type as an argument. Tuple and Map types also can be nested.

ClickHouse supports configurable precision of the Decimal type. The INSERT query treats the ORC DECIMAL type as the ClickHouse Decimal128 type.

Unsupported ORC data types: TIME32, FIXED_SIZE_BINARY, JSON, UUID, ENUM.

The data types of ClickHouse table columns do not have to match the corresponding ORC data fields. When inserting data, ClickHouse interprets data types according to the table above and then casts the data to the data type set for the ClickHouse table column.

Inserting Data

You can insert ORC data from a file into a ClickHouse table with the following command:

$ cat filename.orc | clickhouse-client --query="INSERT INTO some_table FORMAT ORC"

Selecting Data

You can select data from a ClickHouse table and save it to a file in the ORC format with the following command:

$ clickhouse-client --query="SELECT * FROM {some_table} FORMAT ORC" > {filename.orc}

ORC format settings

To exchange data with Hadoop, you can use the HDFS table engine.

LineAsString

In this format, every line of input data is interpreted as a single string value. This format can only be parsed for a table with a single field of type String. The remaining columns must be set to DEFAULT or MATERIALIZED, or omitted.

Example

Query:

DROP TABLE IF EXISTS line_as_string;
CREATE TABLE line_as_string (field String) ENGINE = Memory;
INSERT INTO line_as_string FORMAT LineAsString "I love apple", "I love banana", "I love orange";
SELECT * FROM line_as_string;

Result:

┌─field─────────────────────────────────────────────┐
│ "I love apple", "I love banana", "I love orange"; │
└───────────────────────────────────────────────────┘

Regexp

Each line of imported data is parsed according to the regular expression.

When working with the Regexp format, you can use the following settings:

  • format_regexp (String). Contains the regular expression in the re2 format.

  • format_regexp_escaping_rule (String). The following escaping rules are supported:

    • CSV (similarly to CSV)
    • JSON (similarly to JSONEachRow)
    • Escaped (similarly to TSV)
    • Quoted (similarly to Values)
    • Raw (extracts subpatterns as a whole, no escaping rules, similarly to TSVRaw)
  • format_regexp_skip_unmatched (UInt8). Defines whether a line that does not match the format_regexp expression is silently skipped (1) or causes an exception (0).

Usage

The regular expression from the format_regexp setting is applied to every line of imported data. The number of subpatterns in the regular expression must be equal to the number of columns in the imported dataset.

Lines of the imported data must be separated by the newline character '\n' or the DOS-style newline "\r\n".

The content of every matched subpattern is parsed with the method of the corresponding data type, according to the format_regexp_escaping_rule setting.

If the regular expression does not match a line and format_regexp_skip_unmatched is set to 1, the line is silently skipped. Otherwise, an exception is thrown.

Example

Consider the file data.tsv:

id: 1 array: [1,2,3] string: str1 date: 2020-01-01
id: 2 array: [1,2,3] string: str2 date: 2020-01-02
id: 3 array: [1,2,3] string: str3 date: 2020-01-03

and the table:

CREATE TABLE imp_regex_table (id UInt32, array Array(UInt32), string String, date Date) ENGINE = Memory;

Import command:

$ cat data.tsv | clickhouse-client  --query "INSERT INTO imp_regex_table SETTINGS format_regexp='id: (.+?) array: (.+?) string: (.+?) date: (.+?)', format_regexp_escaping_rule='Escaped', format_regexp_skip_unmatched=0 FORMAT Regexp;"

Query:

SELECT * FROM imp_regex_table;

Result:

┌─id─┬─array───┬─string─┬───────date─┐
│  1 │ [1,2,3] │ str1   │ 2020-01-01 │
│  2 │ [1,2,3] │ str2   │ 2020-01-02 │
│  3 │ [1,2,3] │ str3   │ 2020-01-03 │
└────┴─────────┴────────┴────────────┘

Format Schema

The file name containing the format schema is set by the setting format_schema. It is required to set this setting when one of the formats Cap'n Proto or Protobuf is used. The format schema is a combination of a file name and the name of a message type in this file, delimited by a colon, e.g. schemafile.proto:MessageType. If the file has the standard extension for the format (for example, .proto for Protobuf), it can be omitted, and in this case the format schema looks like schemafile:MessageType.

If you input or output data via the client in interactive mode, the file name specified in the format schema can contain an absolute path or a path relative to the current directory on the client. If you use the client in batch mode, the path to the schema must be relative, for security reasons.

If you input or output data via the HTTP interface, the file name specified in the format schema should be located in the directory specified in format_schema_path in the server configuration.

Skipping Errors

Some formats, such as CSV, TabSeparated, TSKV, JSONEachRow, Template, CustomSeparated and Protobuf, can skip a broken row if a parsing error occurred and continue parsing from the beginning of the next row. See the input_format_allow_errors_num and input_format_allow_errors_ratio settings, and the example after the list below. Limitations:

  • In case of a parsing error, JSONEachRow skips all data until the newline (or EOF), so rows must be delimited by \n to count errors correctly.
  • Template and CustomSeparated use the delimiter after the last column and the delimiter between rows to find the beginning of the next row, so skipping errors works only if at least one of them is not empty.
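For example, a limited number of malformed rows can be tolerated in a single INSERT (a sketch; the table and file names are illustrative):

```bash
# up to 10 broken rows, and no more than 10% of all rows, are silently skipped
$ cat data.csv | clickhouse-client --query="INSERT INTO some_table SETTINGS input_format_allow_errors_num = 10, input_format_allow_errors_ratio = 0.1 FORMAT CSV"
```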

RawBLOB

In this format, all input data is read into a single value. It is possible to parse only a table with a single field of type String or similar. The result is output in binary format without delimiters and escaping. If more than one value is output, the format is ambiguous, and it will be impossible to read the data back.

Below is a comparison of the formats RawBLOB and TabSeparatedRaw.

RawBLOB:
  • data is output in binary format, no escaping;
  • there are no delimiters between values;
  • no newline at the end of each value.

TabSeparatedRaw:
  • data is output without escaping;
  • the rows contain values separated by tabs;
  • there is a line feed after the last value in every row.

The following is a comparison of the RawBLOB and RowBinary formats.

RawBLOB:
  • String fields are output without being prefixed by length.

RowBinary:
  • String fields are represented as length in varint format (unsigned LEB128, https://en.wikipedia.org/wiki/LEB128), followed by the bytes of the string.

When empty data is passed to the RawBLOB input, ClickHouse throws an exception:

Code: 108. DB::Exception: No data to insert

Example

$ clickhouse-client --query "CREATE TABLE {some_table} (a String) ENGINE = Memory;"
$ cat {filename} | clickhouse-client --query="INSERT INTO {some_table} FORMAT RawBLOB"
$ clickhouse-client --query "SELECT * FROM {some_table} FORMAT RawBLOB" | md5sum

Result:

f9725a22f9191e064120d718e26862a9  -

MsgPack

ClickHouse supports reading and writing MessagePack data files.

Data Types Matching

| MessagePack data type (INSERT) | ClickHouse data type | MessagePack data type (SELECT) |
|---|---|---|
| uint N, positive fixint | UIntN | uint N |
| int N | IntN | int N |
| bool | UInt8 | uint 8 |
| fixstr, str 8, str 16, str 32, bin 8, bin 16, bin 32 | String | bin 8, bin 16, bin 32 |
| fixstr, str 8, str 16, str 32, bin 8, bin 16, bin 32 | FixedString | bin 8, bin 16, bin 32 |
| float 32 | Float32 | float 32 |
| float 64 | Float64 | float 64 |
| uint 16 | Date | uint 16 |
| uint 32 | DateTime | uint 32 |
| uint 64 | DateTime64 | uint 64 |
| fixarray, array 16, array 32 | Array | fixarray, array 16, array 32 |
| fixmap, map 16, map 32 | Map | fixmap, map 16, map 32 |

Example:

Writing to a file with the ".msgpk" extension:

$ clickhouse-client --query="CREATE TABLE msgpack (array Array(UInt8)) ENGINE = Memory;"
$ clickhouse-client --query="INSERT INTO msgpack VALUES ([0, 1, 2, 3, 42, 253, 254, 255]), ([255, 254, 253, 42, 3, 2, 1, 0])";
$ clickhouse-client --query="SELECT * FROM msgpack FORMAT MsgPack" > tmp_msgpack.msgpk;

MsgPack format settings

MySQLDump

ClickHouse supports reading MySQL dumps. It reads all data from the INSERT queries belonging to one table in the dump. If there is more than one table, by default it reads data from the first one. You can specify the name of the table to read data from using the input_format_mysql_dump_table_name setting. If the setting input_format_mysql_dump_map_columns is set to 1 and the dump contains a CREATE query for the specified table, or column names in the INSERT query, the columns from the input data are mapped to the columns of the table by their names; columns with unknown names are skipped if the setting input_format_skip_unknown_fields is set to 1. This format supports schema inference: if the dump contains a CREATE query for the specified table, the structure is extracted from it; otherwise, the schema is inferred from the data of the INSERT queries.

Examples:

File dump.sql:

/*!40101 SET @saved_cs_client     = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `test` (
  `x` int DEFAULT NULL,
  `y` int DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
/*!40101 SET character_set_client = @saved_cs_client */;
INSERT INTO `test` VALUES (1,NULL),(2,NULL),(3,NULL),(3,NULL),(4,NULL),(5,NULL),(6,7);
/*!40101 SET @saved_cs_client     = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `test 3` (
  `y` int DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
/*!40101 SET character_set_client = @saved_cs_client */;
INSERT INTO `test 3` VALUES (1);
/*!40101 SET @saved_cs_client     = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `test2` (
  `x` int DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
/*!40101 SET character_set_client = @saved_cs_client */;
INSERT INTO `test2` VALUES (1),(2),(3);

Queries:

:) desc file(dump.sql, MySQLDump) settings input_format_mysql_dump_table_name='test2'

DESCRIBE TABLE file(dump.sql, MySQLDump)
SETTINGS input_format_mysql_dump_table_name = 'test2'

Query id: 25e66c89-e10a-42a8-9b42-1ee8bbbde5ef

┌─name─┬─type────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ x    │ Nullable(Int32) │              │                    │         │                  │                │
└──────┴─────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘

:) select * from file(dump.sql, MySQLDump) settings input_format_mysql_dump_table_name='test2'

SELECT *
FROM file(dump.sql, MySQLDump)
         SETTINGS input_format_mysql_dump_table_name = 'test2'

Query id: 17d59664-ebce-4053-bb79-d46a516fb590

┌─x─┐
│ 1 │
│ 2 │
│ 3 │
└───┘
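The dump can also be inserted directly into an existing table (a sketch, assuming a table with a matching structure):

```bash
$ clickhouse-client --query="INSERT INTO some_table SETTINGS input_format_mysql_dump_table_name = 'test2' FORMAT MySQLDump" < dump.sql
```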