---
slug: /en/interfaces/formats
sidebar_position: 21
sidebar_label: View all formats...
title: Formats for Input and Output Data
---
ClickHouse can accept and return data in various formats. A format supported for input can be used to parse the data provided to `INSERT`s, to perform `SELECT`s from a file-backed table such as File, URL or HDFS, or to read a dictionary. A format supported for output can be used to arrange the results of a `SELECT`, and to perform `INSERT`s into a file-backed table.
The supported formats are described in the sections below.
You can control some format processing parameters with ClickHouse settings. For more information read the Settings section.
TabSeparated
In TabSeparated format, data is written by row. Each row contains values separated by tabs. Each value is followed by a tab, except the last value in the row, which is followed by a line feed. Strictly Unix line feeds are assumed everywhere. The last row also must contain a line feed at the end. Values are written in text format, without enclosing quotation marks, and with special characters escaped.
This format is also available under the name `TSV`.
The `TabSeparated` format is convenient for processing data using custom programs and scripts. It is used by default in the HTTP interface, and in the command-line client's batch mode. This format also allows transferring data between different DBMSs. For example, you can get a dump from MySQL and upload it to ClickHouse, or vice versa.
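For example, a minimal sketch of moving a table through a TSV file with clickhouse-client (the table name some_table and the file name data.tsv are hypothetical):
$ clickhouse-client --query="SELECT * FROM some_table FORMAT TabSeparated" > data.tsv
$ clickhouse-client --query="INSERT INTO some_table FORMAT TabSeparated" < data.tsv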
The `TabSeparated` format supports outputting total values (when using WITH TOTALS) and extreme values (when ‘extremes’ is set to 1). In these cases, the total values and extremes are output after the main data. The main result, total values, and extremes are separated from each other by an empty line. Example:
SELECT EventDate, count() AS c FROM test.hits GROUP BY EventDate WITH TOTALS ORDER BY EventDate FORMAT TabSeparated
2014-03-17 1406958
2014-03-18 1383658
2014-03-19 1405797
2014-03-20 1353623
2014-03-21 1245779
2014-03-22 1031592
2014-03-23 1046491
1970-01-01 8873898
2014-03-17 1031592
2014-03-23 1406958
Data Formatting
Integer numbers are written in decimal form. Numbers can contain an extra “+” character at the beginning (ignored when parsing, and not recorded when formatting). Non-negative numbers can’t contain the negative sign. When reading, it is allowed to parse an empty string as a zero, or (for signed types) a string consisting of just a minus sign as a zero. Numbers that do not fit into the corresponding data type may be parsed as a different number, without an error message.
Floating-point numbers are written in decimal form. The dot is used as the decimal separator. Exponential entries are supported, as are ‘inf’, ‘+inf’, ‘-inf’, and ‘nan’. An entry of floating-point numbers may begin or end with a decimal point. During formatting, accuracy may be lost on floating-point numbers. During parsing, it is not strictly required to read the nearest machine-representable number.
Dates are written in YYYY-MM-DD format and parsed in the same format, but with any characters as separators.
Dates with times are written in the format `YYYY-MM-DD hh:mm:ss` and parsed in the same format, but with any characters as separators.
This all occurs in the system time zone at the time the client or server starts (depending on which of them formats data). For dates with times, daylight saving time is not specified. So if a dump has times during daylight saving time, the dump does not unequivocally match the data, and parsing will select one of the two times.
During a read operation, incorrect dates and dates with times can be parsed with natural overflow or as null dates and times, without an error message.
As an exception, parsing dates with times is also supported in Unix timestamp format, if it consists of exactly 10 decimal digits. The result is not time zone-dependent. The formats `YYYY-MM-DD hh:mm:ss` and `NNNNNNNNNN` are differentiated automatically.
Strings are output with backslash-escaped special characters. The following escape sequences are used for output: `\b`, `\f`, `\r`, `\n`, `\t`, `\0`, `\'`, `\\`. Parsing also supports the sequences `\a`, `\v`, and `\xHH` (hex escape sequences) and any `\c` sequences, where `c` is any character (these sequences are converted to `c`). Thus, reading data supports formats where a line feed can be written as `\n` or `\`, or as a line feed. For example, the string `Hello world` with a line feed between the words instead of a space can be parsed in any of the following variations:
Hello\nworld
Hello\
world
The second variant is supported because MySQL uses it when writing tab-separated dumps.
The minimum set of characters that you need to escape when passing data in TabSeparated format: tab, line feed (LF) and backslash.
Only a small set of symbols is escaped. You can easily stumble onto a string value that your terminal will ruin in output.
Arrays are written as a list of comma-separated values in square brackets. Numeric items in the array are formatted as usual. `Date` and `DateTime` types are written in single quotes. Strings are written in single quotes with the same escaping rules as above.
NULL is formatted according to setting format_tsv_null_representation (default value is `\N`).
In input data, ENUM values can be represented as names or as ids. First, we try to match the input value to the ENUM name. If that fails and the input value is a number, we try to match this number to the ENUM id. If input data contains only ENUM ids, it's recommended to enable the setting input_format_tsv_enum_as_number to optimize ENUM parsing.
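For illustration, a hedged sketch of both representations (the table enum_t and its column definition are hypothetical):
CREATE TABLE enum_t (e Enum('first' = 1, 'second' = 2)) ENGINE = Memory;
INSERT INTO enum_t FORMAT TSV first
INSERT INTO enum_t SETTINGS input_format_tsv_enum_as_number = 1 FORMAT TSV 2
The first insert matches the input value by name; the second treats the input as the enum id.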
Each element of Nested structures is represented as an array.
For example:
CREATE TABLE nestedt
(
`id` UInt8,
`aux` Nested(
a UInt8,
b String
)
)
ENGINE = TinyLog
INSERT INTO nestedt VALUES (1, [1], ['a'])
SELECT * FROM nestedt FORMAT TSV
1 [1] ['a']
TabSeparated format settings
- format_tsv_null_representation - custom NULL representation in TSV format. Default value - `\N`.
- input_format_tsv_empty_as_default - treat empty fields in TSV input as default values. Default value - `false`. For complex default expressions input_format_defaults_for_omitted_fields must be enabled too.
- input_format_tsv_enum_as_number - treat inserted enum values in TSV formats as enum indices. Default value - `false`.
- input_format_tsv_use_best_effort_in_schema_inference - use some tweaks and heuristics to infer schema in TSV format. If disabled, all fields will be inferred as Strings. Default value - `true`.
- output_format_tsv_crlf_end_of_line - if it is set to true, end of line in TSV output format will be `\r\n` instead of `\n`. Default value - `false`.
- input_format_tsv_skip_first_lines - skip the specified number of lines at the beginning of data. Default value - `0`.
- input_format_tsv_detect_header - automatically detect header with names and types in TSV format. Default value - `true`.
- input_format_tsv_skip_trailing_empty_lines - skip trailing empty lines at the end of data. Default value - `false`.
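For instance, a hedged sketch of skipping a two-line preamble during import with the file table function (the file name data.tsv and its schema are hypothetical):
SELECT * FROM file('data.tsv', TSV, 'id UInt8, name String') SETTINGS input_format_tsv_skip_first_lines = 2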
TabSeparatedRaw
Differs from the `TabSeparated` format in that the rows are written without escaping.
When parsing with this format, tabs or linefeeds are not allowed in each field.
This format is also available under the name `TSVRaw`.
TabSeparatedWithNames
Differs from the `TabSeparated` format in that the column names are written in the first row.
During parsing, the first row is expected to contain the column names. You can use column names to determine their position and to check their correctness.
:::note If setting input_format_with_names_use_header is set to 1, the columns from the input data will be mapped to the columns of the table by their names, columns with unknown names will be skipped if setting input_format_skip_unknown_fields is set to 1. Otherwise, the first row will be skipped. :::
This format is also available under the name `TSVWithNames`.
TabSeparatedWithNamesAndTypes
Differs from the `TabSeparated` format in that the column names are written to the first row, while the column types are in the second row.
:::note If setting input_format_with_names_use_header is set to 1, the columns from the input data will be mapped to the columns in the table by their names, columns with unknown names will be skipped if setting input_format_skip_unknown_fields is set to 1. Otherwise, the first row will be skipped. If setting input_format_with_types_use_header is set to 1, the types from input data will be compared with the types of the corresponding columns from the table. Otherwise, the second row will be skipped. :::
This format is also available under the name `TSVWithNamesAndTypes`.
TabSeparatedRawWithNames
Differs from the `TabSeparatedWithNames` format in that the rows are written without escaping.
When parsing with this format, tabs or linefeeds are not allowed in each field.
This format is also available under the name `TSVRawWithNames`.
TabSeparatedRawWithNamesAndTypes
Differs from the `TabSeparatedWithNamesAndTypes` format in that the rows are written without escaping.
When parsing with this format, tabs or linefeeds are not allowed in each field.
This format is also available under the name `TSVRawWithNamesAndTypes`.
Template
This format allows specifying a custom format string with placeholders for values with a specified escaping rule.
It uses the settings `format_template_resultset`, `format_template_row`, `format_template_rows_between_delimiter` and some settings of other formats (e.g. `output_format_json_quote_64bit_integers` when using `JSON` escaping; see further).
The setting `format_template_row` specifies the path to the file containing format strings for rows with the following syntax:
delimiter_1${column_1:serializeAs_1}delimiter_2${column_2:serializeAs_2} ... delimiter_N
where `delimiter_i` is a delimiter between values (the `$` symbol can be escaped as `$$`),
`column_i` is a name or index of a column whose values are to be selected or inserted (if empty, the column will be skipped),
`serializeAs_i` is an escaping rule for the column values. The following escaping rules are supported:
- `CSV`, `JSON`, `XML` (similar to the formats of the same names)
- `Escaped` (similar to `TSV`)
- `Quoted` (similar to `Values`)
- `Raw` (without escaping, similar to `TSVRaw`)
- `None` (no escaping rule, see further)
If an escaping rule is omitted, then `None` will be used. `XML` is suitable only for output.
So, for the following format string:
`Search phrase: ${SearchPhrase:Quoted}, count: ${c:Escaped}, ad price: $$${price:JSON};`
the values of the `SearchPhrase`, `c` and `price` columns, which are escaped as `Quoted`, `Escaped` and `JSON`, will be printed (for select) or will be expected (for insert) between the delimiters `Search phrase: `, `, count: `, `, ad price: $` and `;` respectively. For example:
Search phrase: 'bathroom interior design', count: 2166, ad price: $3;
The `format_template_rows_between_delimiter` setting specifies the delimiter between rows, which is printed (or expected) after every row except the last one (`\n` by default).
The setting `format_template_resultset` specifies the path to the file containing a format string for the result set. It has the same syntax as a format string for rows, allows specifying a prefix, a suffix and a way to print some additional information, and contains the following placeholders instead of column names:
- `data` is the rows with data in `format_template_row` format, separated by `format_template_rows_between_delimiter`. This placeholder must be the first placeholder in the format string.
- `totals` is the row with total values in `format_template_row` format (when using WITH TOTALS)
- `min` is the row with minimum values in `format_template_row` format (when extremes are set to 1)
- `max` is the row with maximum values in `format_template_row` format (when extremes are set to 1)
- `rows` is the total number of output rows
- `rows_before_limit` is the minimal number of rows there would have been without LIMIT. Output only if the query contains LIMIT. If the query contains GROUP BY, rows_before_limit_at_least is the exact number of rows there would have been without a LIMIT.
- `time` is the request execution time in seconds
- `rows_read` is the number of rows that have been read
- `bytes_read` is the number of bytes (uncompressed) that have been read
The placeholders `data`, `totals`, `min` and `max` must not have an escaping rule specified (or `None` must be specified explicitly). The remaining placeholders may have any escaping rule specified.
If the `format_template_resultset` setting is an empty string, `${data}` is used as the default value.
For insert queries, the format allows skipping some columns or fields if a prefix or suffix is specified (see example).
Select example:
SELECT SearchPhrase, count() AS c FROM test.hits GROUP BY SearchPhrase ORDER BY c DESC LIMIT 5 FORMAT Template SETTINGS
format_template_resultset = '/some/path/resultset.format', format_template_row = '/some/path/row.format', format_template_rows_between_delimiter = '\n '
`/some/path/resultset.format`:
<!DOCTYPE HTML>
<html> <head> <title>Search phrases</title> </head>
<body>
<table border="1"> <caption>Search phrases</caption>
<tr> <th>Search phrase</th> <th>Count</th> </tr>
${data}
</table>
<table border="1"> <caption>Max</caption>
${max}
</table>
<b>Processed ${rows_read:XML} rows in ${time:XML} sec</b>
</body>
</html>
`/some/path/row.format`:
<tr> <td>${0:XML}</td> <td>${1:XML}</td> </tr>
Result:
<!DOCTYPE HTML>
<html> <head> <title>Search phrases</title> </head>
<body>
<table border="1"> <caption>Search phrases</caption>
<tr> <th>Search phrase</th> <th>Count</th> </tr>
<tr> <td></td> <td>8267016</td> </tr>
<tr> <td>bathroom interior design</td> <td>2166</td> </tr>
<tr> <td>clickhouse</td> <td>1655</td> </tr>
<tr> <td>spring 2014 fashion</td> <td>1549</td> </tr>
<tr> <td>freeform photos</td> <td>1480</td> </tr>
</table>
<table border="1"> <caption>Max</caption>
<tr> <td></td> <td>8873898</td> </tr>
</table>
<b>Processed 3095973 rows in 0.1569913 sec</b>
</body>
</html>
Insert example:
Some header
Page views: 5, User id: 4324182021466249494, Useless field: hello, Duration: 146, Sign: -1
Page views: 6, User id: 4324182021466249494, Useless field: world, Duration: 185, Sign: 1
Total rows: 2
INSERT INTO UserActivity SETTINGS
format_template_resultset = '/some/path/resultset.format', format_template_row = '/some/path/row.format'
FORMAT Template
`/some/path/resultset.format`:
Some header\n${data}\nTotal rows: ${:CSV}\n
`/some/path/row.format`:
Page views: ${PageViews:CSV}, User id: ${UserID:CSV}, Useless field: ${:CSV}, Duration: ${Duration:CSV}, Sign: ${Sign:CSV}
`PageViews`, `UserID`, `Duration` and `Sign` inside placeholders are names of columns in the table. Values after `Useless field` in rows and after `\nTotal rows:` in suffix will be ignored.
All delimiters in the input data must be strictly equal to delimiters in specified format strings.
TemplateIgnoreSpaces
This format is suitable only for input.
Similar to `Template`, but skips whitespace characters between delimiters and values in the input stream. However, if format strings contain whitespace characters, these characters will be expected in the input stream. Also allows specifying empty placeholders (`${}` or `${:None}`) to split some delimiter into separate parts to ignore spaces between them. Such placeholders are used only for skipping whitespace characters.
It’s possible to read `JSON` using this format if the values of columns have the same order in all rows. For example, the following request can be used for inserting data from the output example of the JSON format:
INSERT INTO table_name SETTINGS
format_template_resultset = '/some/path/resultset.format', format_template_row = '/some/path/row.format', format_template_rows_between_delimiter = ','
FORMAT TemplateIgnoreSpaces
`/some/path/resultset.format`:
{${}"meta"${}:${:JSON},${}"data"${}:${}[${data}]${},${}"totals"${}:${:JSON},${}"extremes"${}:${:JSON},${}"rows"${}:${:JSON},${}"rows_before_limit_at_least"${}:${:JSON}${}}
`/some/path/row.format`:
{${}"SearchPhrase"${}:${}${phrase:JSON}${},${}"c"${}:${}${cnt:JSON}${}}
TSKV
Similar to TabSeparated, but outputs a value in name=value format. Names are escaped the same way as in TabSeparated format, and the = symbol is also escaped.
SearchPhrase= count()=8267016
SearchPhrase=bathroom interior design count()=2166
SearchPhrase=clickhouse count()=1655
SearchPhrase=2014 spring fashion count()=1549
SearchPhrase=freeform photos count()=1480
SearchPhrase=angelina jolie count()=1245
SearchPhrase=omsk count()=1112
SearchPhrase=photos of dog breeds count()=1091
SearchPhrase=curtain designs count()=1064
SearchPhrase=baku count()=1000
NULL is formatted as `\N`.
SELECT * FROM t_null FORMAT TSKV
x=1 y=\N
When there is a large number of small columns, this format is inefficient, and there is generally no reason to use it. Nevertheless, it is no worse than JSONEachRow in terms of efficiency.
Both data output and parsing are supported in this format. For parsing, any order is supported for the values of different columns. It is acceptable for some values to be omitted – they are treated as equal to their default values. In this case, zeros and blank rows are used as default values. Complex values that could be specified in the table are not supported as defaults.
Parsing allows the presence of the additional field `tskv` without the equal sign or a value. This field is ignored.
During import, columns with unknown names will be skipped if setting input_format_skip_unknown_fields is set to 1.
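For illustration, a hedged sketch of parsing TSKV input into the t_null table from above (key=value pairs are separated by tabs; the omitted y is treated as its default):
INSERT INTO t_null FORMAT TSKV x=3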
CSV
Comma Separated Values format (RFC).
When formatting, strings are enclosed in double quotes. A double quote inside a string is output as two double quotes in a row. There are no other rules for escaping characters. Date and date-time are enclosed in double quotes. Numbers are output without quotes. Values are separated by a delimiter character, which is `,` by default. The delimiter character is defined in the setting format_csv_delimiter. Rows are separated using the Unix line feed (LF). Arrays are serialized in CSV as follows: first, the array is serialized to a string as in TabSeparated format, and then the resulting string is output to CSV in double quotes. Tuples in CSV format are serialized as separate columns (that is, their nesting in the tuple is lost).
$ clickhouse-client --format_csv_delimiter="|" --query="INSERT INTO test.csv FORMAT CSV" < data.csv
By default, the delimiter is `,`. See the format_csv_delimiter setting for more information.
When parsing, all values can be parsed either with or without quotes. Both double and single quotes are supported. Strings can also be arranged without quotes. In this case, they are parsed up to the delimiter character or line feed (CR or LF). In violation of the RFC, when parsing strings without quotes, the leading and trailing spaces and tabs are ignored. For the line feed, Unix (LF), Windows (CR LF) and Mac OS Classic (CR) types are all supported.
`NULL` is formatted according to setting format_csv_null_representation (default value is `\N`).
In input data, ENUM values can be represented as names or as ids. First, we try to match the input value to the ENUM name. If we fail and the input value is a number, we try to match this number to the ENUM id. If input data contains only ENUM ids, it's recommended to enable the setting input_format_csv_enum_as_number to optimize ENUM parsing.
The CSV format supports the output of totals and extremes the same way as `TabSeparated`.
CSV format settings
- format_csv_delimiter - the character to be considered as a delimiter in CSV data. Default value - `,`.
- format_csv_allow_single_quotes - allow strings in single quotes. Default value - `true`.
- format_csv_allow_double_quotes - allow strings in double quotes. Default value - `true`.
- format_csv_null_representation - custom NULL representation in CSV format. Default value - `\N`.
- input_format_csv_empty_as_default - treat empty fields in CSV input as default values. Default value - `true`. For complex default expressions, input_format_defaults_for_omitted_fields must be enabled too.
- input_format_csv_enum_as_number - treat inserted enum values in CSV formats as enum indices. Default value - `false`.
- input_format_csv_use_best_effort_in_schema_inference - use some tweaks and heuristics to infer schema in CSV format. If disabled, all fields will be inferred as Strings. Default value - `true`.
- input_format_csv_arrays_as_nested_csv - when reading Array from CSV, expect that its elements were serialized in nested CSV and then put into a string. Default value - `false`.
- output_format_csv_crlf_end_of_line - if it is set to true, end of line in CSV output format will be `\r\n` instead of `\n`. Default value - `false`.
- input_format_csv_skip_first_lines - skip the specified number of lines at the beginning of data. Default value - `0`.
- input_format_csv_detect_header - automatically detect header with names and types in CSV format. Default value - `true`.
- input_format_csv_skip_trailing_empty_lines - skip trailing empty lines at the end of data. Default value - `false`.
- input_format_csv_trim_whitespaces - trim spaces and tabs in non-quoted CSV strings. Default value - `true`.
- [input_format_csv_allow_whitespace_or_tab_as_delimiter](/docs/en/operations/settings/settings-formats.md/#input_format_csv_allow_whitespace_or_tab_as_delimiter) - allow to use whitespace or tab as field delimiter in CSV strings. Default value - `false`.
CSVWithNames
Also prints the header row with column names, similar to TabSeparatedWithNames.
:::note If setting input_format_with_names_use_header is set to 1, the columns from input data will be mapped to the columns from the table by their names, columns with unknown names will be skipped if setting input_format_skip_unknown_fields is set to 1. Otherwise, the first row will be skipped. :::
CSVWithNamesAndTypes
Also prints two header rows with column names and types, similar to TabSeparatedWithNamesAndTypes.
:::note If setting input_format_with_names_use_header is set to 1, the columns from input data will be mapped to the columns from the table by their names, columns with unknown names will be skipped if setting input_format_skip_unknown_fields is set to 1. Otherwise, the first row will be skipped. If setting input_format_with_types_use_header is set to 1, the types from input data will be compared with the types of the corresponding columns from the table. Otherwise, the second row will be skipped. :::
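For example, a short sketch of the two header rows this format produces (expected output shown below the query):
SELECT 1 AS x, 'hello' AS s FORMAT CSVWithNamesAndTypes
"x","s"
"UInt8","String"
1,"hello"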
CustomSeparated
Similar to Template, but it prints or reads all names and types of columns and uses escaping rule from format_custom_escaping_rule setting and delimiters from format_custom_field_delimiter, format_custom_row_before_delimiter, format_custom_row_after_delimiter, format_custom_row_between_delimiter, format_custom_result_before_delimiter and format_custom_result_after_delimiter settings, not from format strings.
If setting input_format_custom_detect_header is enabled, ClickHouse will automatically detect header with names and types if any.
If setting input_format_custom_skip_trailing_empty_lines is enabled, trailing empty lines at the end of file will be skipped.
There is also the `CustomSeparatedIgnoreSpaces` format, which is similar to TemplateIgnoreSpaces.
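A minimal sketch of CustomSeparated output, with all delimiters set explicitly so that the expected output below is unambiguous:
SELECT number AS x, 'hello' AS s FROM numbers(2) FORMAT CustomSeparated SETTINGS format_custom_escaping_rule = 'Quoted', format_custom_field_delimiter = '; ', format_custom_row_before_delimiter = '[', format_custom_row_after_delimiter = ']', format_custom_row_between_delimiter = '\n'
[0; 'hello']
[1; 'hello']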
CustomSeparatedWithNames
Also prints the header row with column names, similar to TabSeparatedWithNames.
:::note If setting input_format_with_names_use_header is set to 1, the columns from input data will be mapped to the columns from the table by their names, columns with unknown names will be skipped if setting input_format_skip_unknown_fields is set to 1. Otherwise, the first row will be skipped. :::
CustomSeparatedWithNamesAndTypes
Also prints two header rows with column names and types, similar to TabSeparatedWithNamesAndTypes.
:::note If setting input_format_with_names_use_header is set to 1, the columns from input data will be mapped to the columns from the table by their names, columns with unknown names will be skipped if setting input_format_skip_unknown_fields is set to 1. Otherwise, the first row will be skipped. If setting input_format_with_types_use_header is set to 1, the types from input data will be compared with the types of the corresponding columns from the table. Otherwise, the second row will be skipped. :::
SQLInsert
Outputs data as a sequence of `INSERT INTO table (columns...) VALUES (...), (...) ...;` statements.
Example:
SELECT number AS x, number + 1 AS y, 'Hello' AS z FROM numbers(10) FORMAT SQLInsert SETTINGS output_format_sql_insert_max_batch_size = 2
INSERT INTO table (x, y, z) VALUES (0, 1, 'Hello'), (1, 2, 'Hello');
INSERT INTO table (x, y, z) VALUES (2, 3, 'Hello'), (3, 4, 'Hello');
INSERT INTO table (x, y, z) VALUES (4, 5, 'Hello'), (5, 6, 'Hello');
INSERT INTO table (x, y, z) VALUES (6, 7, 'Hello'), (7, 8, 'Hello');
INSERT INTO table (x, y, z) VALUES (8, 9, 'Hello'), (9, 10, 'Hello');
To read data output by this format you can use MySQLDump input format.
SQLInsert format settings
- output_format_sql_insert_max_batch_size - the maximum number of rows in one INSERT statement. Default value - `65505`.
- output_format_sql_insert_table_name - the name of the table in the output INSERT query. Default value - `'table'`.
- output_format_sql_insert_include_column_names - include column names in the INSERT query. Default value - `true`.
- output_format_sql_insert_use_replace - use REPLACE statement instead of INSERT. Default value - `false`.
- output_format_sql_insert_quote_names - quote column names with "`" characters. Default value - `true`.
JSON
Outputs data in JSON format. Besides data tables, it also outputs column names and types, along with some additional information: the total number of output rows, and the number of rows that could have been output if there weren’t a LIMIT. Example:
SELECT SearchPhrase, count() AS c FROM test.hits GROUP BY SearchPhrase WITH TOTALS ORDER BY c DESC LIMIT 5 FORMAT JSON
{
"meta":
[
{
"name": "num",
"type": "Int32"
},
{
"name": "str",
"type": "String"
},
{
"name": "arr",
"type": "Array(UInt8)"
}
],
"data":
[
{
"num": 42,
"str": "hello",
"arr": [0,1]
},
{
"num": 43,
"str": "hello",
"arr": [0,1,2]
},
{
"num": 44,
"str": "hello",
"arr": [0,1,2,3]
}
],
"rows": 3,
"rows_before_limit_at_least": 3,
"statistics":
{
"elapsed": 0.001137687,
"rows_read": 3,
"bytes_read": 24
}
}
The JSON is compatible with JavaScript. To ensure this, some characters are additionally escaped: the slash `/` is escaped as `\/`; alternative line breaks `U+2028` and `U+2029`, which break some browsers, are escaped as `\uXXXX`. ASCII control characters are escaped: backspace, form feed, line feed, carriage return, and horizontal tab are replaced with `\b`, `\f`, `\n`, `\r`, `\t`, as well as the remaining bytes in the 00-1F range using `\uXXXX` sequences. Invalid UTF-8 sequences are changed to the replacement character � so the output text will consist of valid UTF-8 sequences. For compatibility with JavaScript, Int64 and UInt64 integers are enclosed in double quotes by default. To remove the quotes, you can set the configuration parameter output_format_json_quote_64bit_integers to 0.
`rows` – The total number of output rows.
`rows_before_limit_at_least` – The minimal number of rows there would have been without LIMIT. Output only if the query contains LIMIT. If the query contains GROUP BY, rows_before_limit_at_least is the exact number of rows there would have been without a LIMIT.
`totals` – Total values (when using WITH TOTALS).
`extremes` – Extreme values (when extremes are set to 1).
ClickHouse supports NULL, which is displayed as `null` in the JSON output. To enable `+nan`, `-nan`, `+inf`, `-inf` values in output, set output_format_json_quote_denormals to 1.
See Also
- JSONEachRow format
- output_format_json_array_of_rows setting
For JSON input format, if setting input_format_json_validate_types_from_metadata is set to 1, the types from metadata in input data will be compared with the types of the corresponding columns from the table.
JSONStrings
Differs from JSON only in that data fields are output in strings, not in typed JSON values.
Example:
{
"meta":
[
{
"name": "num",
"type": "Int32"
},
{
"name": "str",
"type": "String"
},
{
"name": "arr",
"type": "Array(UInt8)"
}
],
"data":
[
{
"num": "42",
"str": "hello",
"arr": "[0,1]"
},
{
"num": "43",
"str": "hello",
"arr": "[0,1,2]"
},
{
"num": "44",
"str": "hello",
"arr": "[0,1,2,3]"
}
],
"rows": 3,
"rows_before_limit_at_least": 3,
"statistics":
{
"elapsed": 0.001403233,
"rows_read": 3,
"bytes_read": 24
}
}
JSONColumns
:::tip The output of the JSONColumns* formats provides the ClickHouse field name and then the content of each row of the table for that field; visually, the data is rotated 90 degrees to the left. :::
In this format, all data is represented as a single JSON Object. Note that the JSONColumns output format buffers all data in memory in order to output it as a single block, which can lead to high memory consumption.
Example:
{
"num": [42, 43, 44],
"str": ["hello", "hello", "hello"],
"arr": [[0,1], [0,1,2], [0,1,2,3]]
}
During import, columns with unknown names will be skipped if setting input_format_skip_unknown_fields is set to 1. Columns that are not present in the block will be filled with default values (you can use the input_format_defaults_for_omitted_fields setting here).
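A hedged sketch of reading such data back from a file (the file name cols.json is hypothetical and is expected in the server's user_files directory):
SELECT * FROM file('cols.json', JSONColumns, 'num Int32, str String, arr Array(UInt8)')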
JSONColumnsWithMetadata
Differs from the JSONColumns format in that it also contains some metadata and statistics (similar to the JSON format). The output format buffers all data in memory and then outputs it as a single block, so it can lead to high memory consumption.
Example:
{
"meta":
[
{
"name": "num",
"type": "Int32"
},
{
"name": "str",
"type": "String"
},
{
"name": "arr",
"type": "Array(UInt8)"
}
],
"data":
{
"num": [42, 43, 44],
"str": ["hello", "hello", "hello"],
"arr": [[0,1], [0,1,2], [0,1,2,3]]
},
"rows": 3,
"rows_before_limit_at_least": 3,
"statistics":
{
"elapsed": 0.000272376,
"rows_read": 3,
"bytes_read": 24
}
}
For JSONColumnsWithMetadata input format, if setting input_format_json_validate_types_from_metadata is set to 1, the types from metadata in input data will be compared with the types of the corresponding columns from the table.
JSONAsString
In this format, a single JSON object is interpreted as a single value. If the input has several JSON objects (comma separated), they are interpreted as separate rows. If the input data is enclosed in square brackets, it is interpreted as an array of JSONs.
This format can only be parsed for a table with a single field of type String. The remaining columns must be set to DEFAULT or MATERIALIZED, or omitted. Once you collect the whole JSON object to string you can use JSON functions to process it.
Examples
Query:
DROP TABLE IF EXISTS json_as_string;
CREATE TABLE json_as_string (json String) ENGINE = Memory;
INSERT INTO json_as_string (json) FORMAT JSONAsString {"foo":{"bar":{"x":"y"},"baz":1}},{},{"any json structure":1}
SELECT * FROM json_as_string;
Result:
┌─json──────────────────────────────┐
│ {"foo":{"bar":{"x":"y"},"baz":1}} │
│ {} │
│ {"any json structure":1}         │
└───────────────────────────────────┘
An array of JSON objects
Query:
CREATE TABLE json_square_brackets (field String) ENGINE = Memory;
INSERT INTO json_square_brackets FORMAT JSONAsString [{"id": 1, "name": "name1"}, {"id": 2, "name": "name2"}];
SELECT * FROM json_square_brackets;
Result:
┌─field──────────────────────┐
│ {"id": 1, "name": "name1"} │
│ {"id": 2, "name": "name2"} │
└────────────────────────────┘
JSONCompact
Differs from JSON only in that data rows are output in arrays, not in objects.
Example:
{
"meta":
[
{
"name": "num",
"type": "Int32"
},
{
"name": "str",
"type": "String"
},
{
"name": "arr",
"type": "Array(UInt8)"
}
],
"data":
[
[42, "hello", [0,1]],
[43, "hello", [0,1,2]],
[44, "hello", [0,1,2,3]]
],
"rows": 3,
"rows_before_limit_at_least": 3,
"statistics":
{
"elapsed": 0.001222069,
"rows_read": 3,
"bytes_read": 24
}
}
JSONCompactStrings
Differs from JSONStrings only in that data rows are output in arrays, not in objects.
Example:
{
"meta":
[
{
"name": "num",
"type": "Int32"
},
{
"name": "str",
"type": "String"
},
{
"name": "arr",
"type": "Array(UInt8)"
}
],
"data":
[
["42", "hello", "[0,1]"],
["43", "hello", "[0,1,2]"],
["44", "hello", "[0,1,2,3]"]
],
"rows": 3,
"rows_before_limit_at_least": 3,
"statistics":
{
"elapsed": 0.001572097,
"rows_read": 3,
"bytes_read": 24
}
}
JSONCompactColumns
In this format, all data is represented as a single JSON Array. Note that the JSONCompactColumns output format buffers all data in memory to output it as a single block, which can lead to high memory consumption.
Example:
[
[42, 43, 44],
["hello", "hello", "hello"],
[[0,1], [0,1,2], [0,1,2,3]]
]
Columns that are not present in the block will be filled with default values (you can use the input_format_defaults_for_omitted_fields setting here).
JSONEachRow
In this format, ClickHouse outputs each row as a separate, newline-delimited JSON object.
Example:
{"num":42,"str":"hello","arr":[0,1]}
{"num":43,"str":"hello","arr":[0,1,2]}
{"num":44,"str":"hello","arr":[0,1,2,3]}
During import, columns with unknown names will be skipped if setting input_format_skip_unknown_fields is set to 1.
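For example, a hedged sketch of importing rows that carry an extra, unknown key (reusing the UserActivity table described later in this section):
SET input_format_skip_unknown_fields = 1;
INSERT INTO UserActivity FORMAT JSONEachRow {"PageViews":5, "UserID":"4324182021466249494", "Duration":146, "Sign":-1, "Comment":"ignored"}
The Comment key does not correspond to a column of the table, so it is silently skipped.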
PrettyJSONEachRow
Differs from JSONEachRow only in that JSON is pretty formatted with new line delimiters and 4 space indents. Suitable only for output.
Example
{
"num": "42",
"str": "hello",
"arr": [
"0",
"1"
],
"tuple": {
"num": 42,
"str": "world"
}
}
{
"num": "43",
"str": "hello",
"arr": [
"0",
"1",
"2"
],
"tuple": {
"num": 43,
"str": "world"
}
}
JSONStringsEachRow
Differs from JSONEachRow only in that data fields are output in strings, not in typed JSON values.
Example:
{"num":"42","str":"hello","arr":"[0,1]"}
{"num":"43","str":"hello","arr":"[0,1,2]"}
{"num":"44","str":"hello","arr":"[0,1,2,3]"}
JSONCompactEachRow
Differs from JSONEachRow only in that data rows are output in arrays, not in objects.
Example:
[42, "hello", [0,1]]
[43, "hello", [0,1,2]]
[44, "hello", [0,1,2,3]]
JSONCompactStringsEachRow
Differs from JSONCompactEachRow only in that data fields are output in strings, not in typed JSON values.
Example:
["42", "hello", "[0,1]"]
["43", "hello", "[0,1,2]"]
["44", "hello", "[0,1,2,3]"]
JSONEachRowWithProgress
JSONStringsEachRowWithProgress
Differs from `JSONEachRow`/`JSONStringsEachRow` in that ClickHouse will also yield progress information as JSON values.
{"row":{"num":42,"str":"hello","arr":[0,1]}}
{"row":{"num":43,"str":"hello","arr":[0,1,2]}}
{"row":{"num":44,"str":"hello","arr":[0,1,2,3]}}
{"progress":{"read_rows":"3","read_bytes":"24","written_rows":"0","written_bytes":"0","total_rows_to_read":"3"}}
JSONCompactEachRowWithNames
Differs from the `JSONCompactEachRow` format in that it also prints the header row with column names, similar to TabSeparatedWithNames.
:::note If setting input_format_with_names_use_header is set to 1, the columns from input data will be mapped to the columns from the table by their names, columns with unknown names will be skipped if setting input_format_skip_unknown_fields is set to 1. Otherwise, the first row will be skipped. :::
JSONCompactEachRowWithNamesAndTypes
Differs from the `JSONCompactEachRow` format in that it also prints two header rows with column names and types, similar to TabSeparatedWithNamesAndTypes.
:::note If setting input_format_with_names_use_header is set to 1, the columns from input data will be mapped to the columns from the table by their names, columns with unknown names will be skipped if setting input_format_skip_unknown_fields is set to 1. Otherwise, the first row will be skipped. If setting input_format_with_types_use_header is set to 1, the types from input data will be compared with the types of the corresponding columns from the table. Otherwise, the second row will be skipped. :::
JSONCompactStringsEachRowWithNames
Differs from `JSONCompactStringsEachRow` in that it also prints the header row with column names, similar to TabSeparatedWithNames.
:::note If setting input_format_with_names_use_header is set to 1, the columns from input data will be mapped to the columns from the table by their names, columns with unknown names will be skipped if setting input_format_skip_unknown_fields is set to 1. Otherwise, the first row will be skipped. :::
JSONCompactStringsEachRowWithNamesAndTypes
Differs from `JSONCompactStringsEachRow` in that it also prints two header rows with column names and types, similar to TabSeparatedWithNamesAndTypes.
:::note If setting input_format_with_names_use_header is set to 1, the columns from input data will be mapped to the columns from the table by their names, columns with unknown names will be skipped if setting input_format_skip_unknown_fields is set to 1. Otherwise, the first row will be skipped. If setting input_format_with_types_use_header is set to 1, the types from input data will be compared with the types of the corresponding columns from the table. Otherwise, the second row will be skipped. :::
["num", "str", "arr"]
["Int32", "String", "Array(UInt8)"]
[42, "hello", [0,1]]
[43, "hello", [0,1,2]]
[44, "hello", [0,1,2,3]]
JSONObjectEachRow
In this format, all data is represented as a single JSON Object, and each row is represented as a separate field of this object, similar to the JSONEachRow format.
Example:
{
"row_1": {"num": 42, "str": "hello", "arr": [0,1]},
"row_2": {"num": 43, "str": "hello", "arr": [0,1,2]},
"row_3": {"num": 44, "str": "hello", "arr": [0,1,2,3]}
}
To use an object name as a column value, you can use the special setting format_json_object_each_row_column_for_object_name. Its value is set to the name of the column that is used as the JSON key for each row in the resulting object. Examples:
For output:
Let's say we have the table `test` with two columns:
┌─object_name─┬─number─┐
│ first_obj │ 1 │
│ second_obj │ 2 │
│ third_obj │ 3 │
└─────────────┴────────┘
Let's output it in `JSONObjectEachRow` format using the `format_json_object_each_row_column_for_object_name` setting:
select * from test settings format_json_object_each_row_column_for_object_name='object_name'
The output:
{
"first_obj": {"number": 1},
"second_obj": {"number": 2},
"third_obj": {"number": 3}
}
For input:
Let's say we stored output from the previous example in a file named `data.json`:
select * from file('data.json', JSONObjectEachRow, 'object_name String, number UInt64') settings format_json_object_each_row_column_for_object_name='object_name'
┌─object_name─┬─number─┐
│ first_obj │ 1 │
│ second_obj │ 2 │
│ third_obj │ 3 │
└─────────────┴────────┘
It also works in schema inference:
desc file('data.json', JSONObjectEachRow) settings format_json_object_each_row_column_for_object_name='object_name'
┌─name────────┬─type────────────┐
│ object_name │ String │
│ number │ Nullable(Int64) │
└─────────────┴─────────────────┘
Inserting Data
INSERT INTO UserActivity FORMAT JSONEachRow {"PageViews":5, "UserID":"4324182021466249494", "Duration":146,"Sign":-1} {"UserID":"4324182021466249494","PageViews":6,"Duration":185,"Sign":1}
ClickHouse allows:
- Any order of key-value pairs in the object.
- Omitting some values.
ClickHouse ignores spaces between elements and commas after the objects. You can pass all the objects in one line. You do not have to separate them with line breaks.
Omitted values processing
ClickHouse substitutes omitted values with the default values for the corresponding data types.
If `DEFAULT expr` is specified, ClickHouse uses different substitution rules depending on the input_format_defaults_for_omitted_fields setting.
Consider the following table:
CREATE TABLE IF NOT EXISTS example_table
(
x UInt32,
a DEFAULT x * 2
) ENGINE = Memory;
- If input_format_defaults_for_omitted_fields = 0, then the default value for `x` and `a` equals `0` (as the default value for the `UInt32` data type).
- If input_format_defaults_for_omitted_fields = 1, then the default value for `x` equals `0`, but the default value of `a` equals `x * 2`.
:::note
When inserting data with `input_format_defaults_for_omitted_fields = 1`, ClickHouse consumes more computational resources, compared to insertion with `input_format_defaults_for_omitted_fields = 0`.
:::
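A brief sketch illustrating the two rules above on example_table (with the setting enabled, the omitted a is computed from its DEFAULT expression):
SET input_format_defaults_for_omitted_fields = 1;
INSERT INTO example_table FORMAT JSONEachRow {"x":3}
SELECT * FROM example_table;
┌─x─┬─a─┐
│ 3 │ 6 │
└───┴───┘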
Selecting Data
Consider the `UserActivity` table as an example:
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
│ 4324182021466249494 │ 5 │ 146 │ -1 │
│ 4324182021466249494 │ 6 │ 185 │ 1 │
└─────────────────────┴───────────┴──────────┴──────┘
The query `SELECT * FROM UserActivity FORMAT JSONEachRow` returns:
{"UserID":"4324182021466249494","PageViews":5,"Duration":146,"Sign":-1}
{"UserID":"4324182021466249494","PageViews":6,"Duration":185,"Sign":1}
Unlike the JSON format, there is no substitution of invalid UTF-8 sequences. Values are escaped in the same way as for `JSON`.
:::info
Any set of bytes can be output in the strings. Use the `JSONEachRow` format if you are sure that the data in the table can be formatted as JSON without losing any information.
:::
Usage of Nested Structures
If you have a table with Nested data type columns, you can insert JSON data with the same structure. Enable this feature with the input_format_import_nested_json setting.
For example, consider the following table:
CREATE TABLE json_each_row_nested (n Nested (s String, i Int32) ) ENGINE = Memory
As you can see in the `Nested` data type description, ClickHouse treats each component of the nested structure as a separate column (`n.s` and `n.i` for our table). You can insert data in the following way:
INSERT INTO json_each_row_nested FORMAT JSONEachRow {"n.s": ["abc", "def"], "n.i": [1, 23]}
To insert data as a hierarchical JSON object, set input_format_import_nested_json=1.
{
"n": {
"s": ["abc", "def"],
"i": [1, 23]
}
}
Without this setting, ClickHouse throws an exception.
SELECT name, value FROM system.settings WHERE name = 'input_format_import_nested_json'
┌─name────────────────────────────┬─value─┐
│ input_format_import_nested_json │ 0 │
└─────────────────────────────────┴───────┘
INSERT INTO json_each_row_nested FORMAT JSONEachRow {"n": {"s": ["abc", "def"], "i": [1, 23]}}
Code: 117. DB::Exception: Unknown field found while parsing JSONEachRow format: n: (at row 1)
SET input_format_import_nested_json=1
INSERT INTO json_each_row_nested FORMAT JSONEachRow {"n": {"s": ["abc", "def"], "i": [1, 23]}}
SELECT * FROM json_each_row_nested
┌─n.s───────────┬─n.i────┐
│ ['abc','def'] │ [1,23] │
└───────────────┴────────┘
JSON formats settings
- input_format_import_nested_json - map nested JSON data to nested tables (it works for JSONEachRow format). Default value - `false`.
- input_format_json_read_bools_as_numbers - allow to parse bools as numbers in JSON input formats. Default value - `true`.
- input_format_json_read_numbers_as_strings - allow to parse numbers as strings in JSON input formats. Default value - `false`.
- input_format_json_read_objects_as_strings - allow to parse JSON objects as strings in JSON input formats. Default value - `false`.
- input_format_json_named_tuples_as_objects - parse named tuple columns as JSON objects. Default value - `true`.
- input_format_json_defaults_for_missing_elements_in_named_tuple - insert default values for missing elements in JSON object while parsing named tuple. Default value - `true`.
- input_format_json_ignore_unknown_keys_in_named_tuple - ignore unknown keys in JSON object for named tuples. Default value - `false`.
- output_format_json_quote_64bit_integers - controls quoting of 64-bit integers in JSON output format. Default value - `true`.
- output_format_json_quote_64bit_floats - controls quoting of 64-bit floats in JSON output format. Default value - `false`.
- output_format_json_quote_denormals - enables '+nan', '-nan', '+inf', '-inf' outputs in JSON output format. Default value - `false`.
- output_format_json_quote_decimals - controls quoting of decimals in JSON output format. Default value - `false`.
- output_format_json_escape_forward_slashes - controls escaping forward slashes for string outputs in JSON output format. Default value - `true`.
- output_format_json_named_tuples_as_objects - serialize named tuple columns as JSON objects. Default value - `true`.
- output_format_json_array_of_rows - output a JSON array of all rows in JSONEachRow(Compact) format. Default value - `false`.
- output_format_json_validate_utf8 - enables validation of UTF-8 sequences in JSON output formats (note that it doesn't impact formats JSON/JSONCompact/JSONColumnsWithMetadata, they always validate UTF-8). Default value - `false`.
BSONEachRow
In this format, ClickHouse formats/parses data as a sequence of BSON documents without any separator between them. Each row is formatted as a single document and each column is formatted as a single BSON document field with column name as a key.
For output it uses the following correspondence between ClickHouse types and BSON types:
ClickHouse type | BSON Type |
---|---|
Bool | \x08 boolean |
Int8/UInt8/Enum8 | \x10 int32 |
Int16/UInt16/Enum16 | \x10 int32 |
Int32 | \x10 int32 |
UInt32 | \x12 int64 |
Int64/UInt64 | \x12 int64 |
Float32/Float64 | \x01 double |
Date/Date32 | \x10 int32 |
DateTime | \x12 int64 |
DateTime64 | \x09 datetime |
Decimal32 | \x10 int32 |
Decimal64 | \x12 int64 |
Decimal128 | \x05 binary, \x00 binary subtype, size = 16 |
Decimal256 | \x05 binary, \x00 binary subtype, size = 32 |
Int128/UInt128 | \x05 binary, \x00 binary subtype, size = 16 |
Int256/UInt256 | \x05 binary, \x00 binary subtype, size = 32 |
String/FixedString | \x05 binary, \x00 binary subtype or \x02 string if setting output_format_bson_string_as_string is enabled |
UUID | \x05 binary, \x04 uuid subtype, size = 16 |
Array | \x04 array |
Tuple | \x04 array |
Named Tuple | \x03 document |
Map | \x03 document |
IPv4 | \x10 int32 |
IPv6 | \x05 binary, \x00 binary subtype |
For input it uses the following correspondence between BSON types and ClickHouse types:
BSON Type | ClickHouse Type |
---|---|
\x01 double | Float32/Float64 |
\x02 string | String/FixedString |
\x03 document | Map/Named Tuple |
\x04 array | Array/Tuple |
\x05 binary, \x00 binary subtype | String/FixedString/IPv6 |
\x05 binary, \x02 old binary subtype | String/FixedString |
\x05 binary, \x03 old uuid subtype | UUID |
\x05 binary, \x04 uuid subtype | UUID |
\x07 ObjectId | String/FixedString |
\x08 boolean | Bool |
\x09 datetime | DateTime64 |
\x0A null value | NULL |
\x0D JavaScript code | String/FixedString |
\x0E symbol | String/FixedString |
\x10 int32 | Int32/UInt32/Decimal32/IPv4/Enum8/Enum16 |
\x12 int64 | Int64/UInt64/Decimal64/DateTime64 |
Other BSON types are not supported. Also, it performs conversion between different integer types (for example, you can insert BSON int32 value into ClickHouse UInt8).
Big integers and decimals (Int128/UInt128/Int256/UInt256/Decimal128/Decimal256) can be parsed from a BSON Binary value with the `\x00` binary subtype. In this case, the format will validate that the size of the binary data equals the size of the expected value.
Note: this format doesn't work properly on big-endian platforms.
BSON format settings
- output_format_bson_string_as_string - use BSON String type instead of Binary for String columns. Default value - `false`.
- input_format_bson_skip_fields_with_unsupported_types_in_schema_inference - allow skipping columns with unsupported types while schema inference for format BSONEachRow. Default value - `false`.
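For instance, a hedged sketch of writing and reading BSON through a file (the file name data.bson is hypothetical; note that INTO OUTFILE writes on the client side, while file() reads from the server's user_files directory):
SELECT number AS n, toString(number) AS s FROM numbers(3) INTO OUTFILE 'data.bson' FORMAT BSONEachRow
SELECT * FROM file('data.bson', BSONEachRow, 'n UInt64, s String')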
Native
The most efficient format. Data is written and read by blocks in binary format. For each block, the number of rows, number of columns, column names and types, and parts of columns in this block are recorded one after another. In other words, this format is “columnar” – it does not convert columns to rows. This is the format used in the native interface for interaction between servers, for using the command-line client, and for C++ clients.
You can use this format to quickly generate dumps that can only be read by the ClickHouse DBMS. It does not make sense to work with this format yourself.
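For example, a minimal sketch of such a dump with clickhouse-client (the table name t and the file name dump.native are hypothetical):
$ clickhouse-client --query="SELECT * FROM t FORMAT Native" > dump.native
$ clickhouse-client --query="INSERT INTO t FORMAT Native" < dump.native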
Null
Nothing is output. However, the query is processed, and when using the command-line client, data is transmitted to the client. This is used for tests, including performance testing. Obviously, this format is only appropriate for output, not for parsing.
Pretty
Outputs data as Unicode-art tables, also using ANSI-escape sequences for setting colours in the terminal. A full grid of the table is drawn, and each row occupies two lines in the terminal. Each result block is output as a separate table. This is necessary so that blocks can be output without buffering results (buffering would be necessary in order to pre-calculate the visible width of all the values).
NULL is output as `ᴺᵁᴸᴸ`.
Example (shown for the PrettyCompact format):
SELECT * FROM t_null
┌─x─┬────y─┐
│ 1 │ ᴺᵁᴸᴸ │
└───┴──────┘
Rows are not escaped in Pretty* formats. Example is shown for the PrettyCompact format:
SELECT 'String with \'quotes\' and \t character' AS Escaping_test
┌─Escaping_test────────────────────────┐
│ String with 'quotes' and character │
└──────────────────────────────────────┘
To avoid dumping too much data to the terminal, only the first 10,000 rows are printed. If the number of rows is greater than or equal to 10,000, the message “Showed first 10 000” is printed. This format is only appropriate for outputting a query result, but not for parsing (retrieving data to insert in a table).
The Pretty format supports outputting total values (when using WITH TOTALS) and extremes (when ‘extremes’ is set to 1). In these cases, total values and extreme values are output after the main data, in separate tables. Example (shown for the PrettyCompact format):
SELECT EventDate, count() AS c FROM test.hits GROUP BY EventDate WITH TOTALS ORDER BY EventDate FORMAT PrettyCompact
┌──EventDate─┬───────c─┐
│ 2014-03-17 │ 1406958 │
│ 2014-03-18 │ 1383658 │
│ 2014-03-19 │ 1405797 │
│ 2014-03-20 │ 1353623 │
│ 2014-03-21 │ 1245779 │
│ 2014-03-22 │ 1031592 │
│ 2014-03-23 │ 1046491 │
└────────────┴─────────┘
Totals:
┌──EventDate─┬───────c─┐
│ 1970-01-01 │ 8873898 │
└────────────┴─────────┘
Extremes:
┌──EventDate─┬───────c─┐
│ 2014-03-17 │ 1031592 │
│ 2014-03-23 │ 1406958 │
└────────────┴─────────┘
PrettyNoEscapes
Differs from Pretty in that ANSI-escape sequences aren’t used. This is necessary for displaying this format in a browser, as well as for using the ‘watch’ command-line utility.
Example:
$ watch -n1 "clickhouse-client --query='SELECT event, value FROM system.events FORMAT PrettyCompactNoEscapes'"
You can use the HTTP interface for displaying in the browser.
PrettyMonoBlock
Differs from Pretty in that up to 10,000 rows are buffered, then output as a single table, not by blocks.
PrettyNoEscapesMonoBlock
Differs from PrettyNoEscapes in that up to 10,000 rows are buffered, then output as a single table, not by blocks.
PrettyCompact
Differs from Pretty in that the grid is drawn between rows and the result is more compact. This format is used by default in the command-line client in interactive mode.
PrettyCompactNoEscapes
Differs from PrettyCompact in that ANSI-escape sequences aren’t used. This is necessary for displaying this format in a browser, as well as for using the ‘watch’ command-line utility.
PrettyCompactMonoBlock
Differs from PrettyCompact in that up to 10,000 rows are buffered, then output as a single table, not by blocks.
PrettyCompactNoEscapesMonoBlock
Differs from PrettyCompactNoEscapes in that up to 10,000 rows are buffered, then output as a single table, not by blocks.
PrettySpace
Differs from PrettyCompact in that whitespace (space characters) is used instead of the grid.
PrettySpaceNoEscapes
Differs from PrettySpace in that ANSI-escape sequences aren’t used. This is necessary for displaying this format in a browser, as well as for using the ‘watch’ command-line utility.
PrettySpaceMonoBlock
Differs from PrettySpace in that up to 10,000 rows are buffered, then output as a single table, not by blocks.
PrettySpaceNoEscapesMonoBlock
Differs from PrettySpaceNoEscapes in that up to 10,000 rows are buffered, then output as a single table, not by blocks.
Pretty formats settings
- output_format_pretty_max_rows - rows limit for Pretty formats. Default value - `10000`.
- output_format_pretty_max_column_pad_width - maximum width to pad all values in a column in Pretty formats. Default value - `250`.
- output_format_pretty_max_value_width - maximum width of value to display in Pretty formats. If greater, it will be cut. Default value - `10000`.
- output_format_pretty_color - use ANSI escape sequences to paint colors in Pretty formats. Default value - `true`.
- output_format_pretty_grid_charset - charset for printing grid borders. Available charsets: ASCII, UTF-8. Default value - `UTF-8`.
- output_format_pretty_row_numbers - add row numbers before each row for pretty output format. Default value - `false`.
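For example, a brief sketch of output_format_pretty_row_numbers (the rendering shown is approximate):
SELECT number FROM numbers(3) FORMAT PrettyCompact SETTINGS output_format_pretty_row_numbers = 1
   ┌─number─┐
1. │      0 │
2. │      1 │
3. │      2 │
   └────────┘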
RowBinary
Formats and parses data by row in binary format. Rows and values are listed consecutively, without separators. Because data is in the binary format, the delimiter after `FORMAT RowBinary` is strictly specified as follows: any number of whitespaces (`' '` - space, code `0x20`; `'\t'` - tab, code `0x09`; `'\f'` - form feed, code `0x0C`) followed by exactly one new line sequence (Windows style `"\r\n"` or Unix style `'\n'`), immediately followed by binary data.
This format is less efficient than the Native format since it is row-based.
Integers use fixed-length little-endian representation. For example, UInt64 uses 8 bytes. DateTime is represented as UInt32 containing the Unix timestamp as the value. Date is represented as a UInt16 object that contains the number of days since 1970-01-01 as the value. String is represented as a varint length (unsigned LEB128), followed by the bytes of the string. FixedString is represented simply as a sequence of bytes.
Array is represented as a varint length (unsigned LEB128), followed by successive elements of the array.
For NULL support, an additional byte containing 1 or 0 is added before each Nullable value. If 1, then the value is `NULL` and this byte is interpreted as a separate value. If 0, the value after the byte is not `NULL`.
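As a rough illustration of this layout, the bytes of one row can be inspected with a hex dump (a sketch: the UInt16 value 1 becomes the little-endian bytes 01 00, and the string 'abc' becomes the varint length 03 followed by its bytes; the exact xxd rendering may differ):
$ clickhouse-client --query="SELECT CAST(1, 'UInt16') AS n, 'abc' AS s FORMAT RowBinary" | xxd
00000000: 0100 0361 6263                           ...abc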
RowBinaryWithNames
Similar to RowBinary, but with added header:
- LEB128-encoded number of columns (N)
- N `String`s specifying column names
:::note If setting input_format_with_names_use_header is set to 1, the columns from input data will be mapped to the columns from the table by their names, columns with unknown names will be skipped if setting input_format_skip_unknown_fields is set to 1. Otherwise, the first row will be skipped. :::
RowBinaryWithNamesAndTypes
Similar to RowBinary, but with added header:
- LEB128-encoded number of columns (N)
- N `String`s specifying column names
- N `String`s specifying column types
:::note If setting input_format_with_names_use_header is set to 1, the columns from input data will be mapped to the columns from the table by their names, columns with unknown names will be skipped if setting input_format_skip_unknown_fields is set to 1. Otherwise, the first row will be skipped. If setting input_format_with_types_use_header is set to 1, the types from input data will be compared with the types of the corresponding columns from the table. Otherwise, the second row will be skipped. :::
RowBinary format settings
- format_binary_max_string_size - the maximum allowed size for String in RowBinary format. Default value - `1GiB`.
Values
Prints every row in brackets. Rows are separated by commas. There is no comma after the last row. The values inside the brackets are also comma-separated. Numbers are output in a decimal format without quotes. Arrays are output in square brackets. Strings, dates, and dates with times are output in quotes. Escaping rules and parsing are similar to the TabSeparated format. During formatting, extra spaces aren’t inserted, but during parsing, they are allowed and skipped (except for spaces inside array values, which are not allowed). NULL is represented as `NULL`.
The minimum set of characters that you need to escape when passing data in Values format: single quotes and backslashes.
This is the format that is used in `INSERT INTO t VALUES ...`, but you can also use it for formatting query results.
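For example, a short sketch of Values output:
SELECT number AS x, 'hello' AS s FROM numbers(2) FORMAT Values
(0,'hello'),(1,'hello')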
Values format settings
- input_format_values_interpret_expressions - if the field could not be parsed by the streaming parser, run the SQL parser and try to interpret it as an SQL expression. Default value - `true`.
- input_format_values_deduce_templates_of_expressions - if the field could not be parsed by the streaming parser, run the SQL parser, deduce a template of the SQL expression, try to parse all rows using the template and then interpret the expression for all rows. Default value - `true`.
- input_format_values_accurate_types_of_literals - when parsing and interpreting expressions using the template, check the actual type of the literal to avoid possible overflow and precision issues. Default value - `true`.
Vertical
Prints each value on a separate line with the column name specified. This format is convenient for printing just one or a few rows if each row consists of a large number of columns.
NULL is output as `ᴺᵁᴸᴸ`.
Example:
SELECT * FROM t_null FORMAT Vertical
Row 1:
──────
x: 1
y: ᴺᵁᴸᴸ
Rows are not escaped in Vertical format:
SELECT 'string with \'quotes\' and \t with some special \n characters' AS test FORMAT Vertical
Row 1:
──────
test: string with 'quotes' and with some special
characters
This format is only appropriate for outputting a query result, but not for parsing (retrieving data to insert in a table).
XML
XML format is suitable only for output, not for parsing. Example:
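The output below could be produced by a query of roughly this shape (the table name is an assumption):

```sql
SELECT SearchPhrase, count() FROM test.hits GROUP BY SearchPhrase ORDER BY count() DESC LIMIT 10 FORMAT XML
```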
<?xml version='1.0' encoding='UTF-8' ?>
<result>
<meta>
<columns>
<column>
<name>SearchPhrase</name>
<type>String</type>
</column>
<column>
<name>count()</name>
<type>UInt64</type>
</column>
</columns>
</meta>
<data>
<row>
<SearchPhrase></SearchPhrase>
<field>8267016</field>
</row>
<row>
<SearchPhrase>bathroom interior design</SearchPhrase>
<field>2166</field>
</row>
<row>
<SearchPhrase>clickhouse</SearchPhrase>
<field>1655</field>
</row>
<row>
<SearchPhrase>2014 spring fashion</SearchPhrase>
<field>1549</field>
</row>
<row>
<SearchPhrase>freeform photos</SearchPhrase>
<field>1480</field>
</row>
<row>
<SearchPhrase>angelina jolie</SearchPhrase>
<field>1245</field>
</row>
<row>
<SearchPhrase>omsk</SearchPhrase>
<field>1112</field>
</row>
<row>
<SearchPhrase>photos of dog breeds</SearchPhrase>
<field>1091</field>
</row>
<row>
<SearchPhrase>curtain designs</SearchPhrase>
<field>1064</field>
</row>
<row>
<SearchPhrase>baku</SearchPhrase>
<field>1000</field>
</row>
</data>
<rows>10</rows>
<rows_before_limit_at_least>141137</rows_before_limit_at_least>
</result>
If the column name does not have an acceptable format, just ‘field’ is used as the element name. In general, the XML structure follows the JSON structure. Just as for JSON, invalid UTF-8 sequences are changed to the replacement character � so the output text will consist of valid UTF-8 sequences.
In string values, the characters `<` and `&` are escaped as `&lt;` and `&amp;`.
Arrays are output as `<array><elem>Hello</elem><elem>World</elem>...</array>`, and tuples as `<tuple><elem>Hello</elem><elem>World</elem>...</tuple>`.
CapnProto
CapnProto is a binary message format similar to Protocol Buffers and Thrift, but not like JSON or MessagePack.
CapnProto messages are strictly typed and not self-describing, meaning they need an external schema description. The schema is applied on the fly and cached for each query.
See also Format Schema.
Data Types Matching
The table below shows supported data types and how they match ClickHouse data types in INSERT
and SELECT
queries.

| CapnProto data type (`INSERT`) | ClickHouse data type | CapnProto data type (`SELECT`) |
|---|---|---|
| `UINT8`, `BOOL` | UInt8 | `UINT8` |
| `INT8` | Int8 | `INT8` |
| `UINT16` | UInt16, Date | `UINT16` |
| `INT16` | Int16 | `INT16` |
| `UINT32` | UInt32, DateTime | `UINT32` |
| `INT32` | Int32, Decimal32 | `INT32` |
| `UINT64` | UInt64 | `UINT64` |
| `INT64` | Int64, DateTime64, Decimal64 | `INT64` |
| `FLOAT32` | Float32 | `FLOAT32` |
| `FLOAT64` | Float64 | `FLOAT64` |
| `TEXT`, `DATA` | String, FixedString | `TEXT`, `DATA` |
| `union(T, Void)`, `union(Void, T)` | Nullable(T) | `union(T, Void)`, `union(Void, T)` |
| `ENUM` | Enum(8/16) | `ENUM` |
| `LIST` | Array | `LIST` |
| `STRUCT` | Tuple | `STRUCT` |
| `UINT32` | IPv4 | `UINT32` |
| `DATA` | IPv6 | `DATA` |
| `DATA` | Int128/UInt128/Int256/UInt256 | `DATA` |
| `DATA` | Decimal128/Decimal256 | `DATA` |
| `STRUCT(entries LIST(STRUCT(key Key, value Value)))` | Map | `STRUCT(entries LIST(STRUCT(key Key, value Value)))` |
Integer types can be converted into each other during input/output.

For working with `Enum` in CapnProto format use the format_capn_proto_enum_comparising_mode setting.

Arrays can be nested and can have a value of the `Nullable` type as an argument. `Tuple` and `Map` types also can be nested.
Inserting and Selecting Data
You can insert CapnProto data from a file into a ClickHouse table with the following command:
$ cat capnproto_messages.bin | clickhouse-client --query "INSERT INTO test.hits SETTINGS format_schema = 'schema:Message' FORMAT CapnProto"
Where `schema.capnp` looks like this:

struct Message {
    SearchPhrase @0 :Text;
    c @1 :UInt64;
}
You can select data from a ClickHouse table and save it into a file in the CapnProto format with the following command:

$ clickhouse-client --query="SELECT * FROM test.hits FORMAT CapnProto SETTINGS format_schema = 'schema:Message'"
Prometheus
Expose metrics in Prometheus text-based exposition format.
The output table should have a proper structure. Columns `name` (String) and `value` (number) are required. Rows may optionally contain `help` (String) and `timestamp` (number). Column `type` (String) is either `counter`, `gauge`, `histogram`, `summary`, `untyped` or empty. Each metric value may also have some `labels` (Map(String, String)). Several consecutive rows may refer to the same metric with different labels. The table should be sorted by metric name (e.g., with `ORDER BY name`).

There are special requirements for labels of `histogram` and `summary` metrics, see the Prometheus documentation for details. Special rules apply to rows with labels `{'count':''}` and `{'sum':''}`: they are converted to `<metric_name>_count` and `<metric_name>_sum` respectively.
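A table satisfying these requirements could be sketched like this (the engine, ordering key, and numeric column types are assumptions):

```sql
CREATE TABLE metrics
(
    name String,                  -- required: metric name
    type String,                  -- counter, gauge, histogram, summary, untyped or empty
    help String,                  -- optional help text
    labels Map(String, String),   -- optional labels per value
    value Float64,                -- required: metric value
    timestamp Int64               -- optional timestamp
)
ENGINE = MergeTree
ORDER BY name;
```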
Example:
┌─name────────────────────────────────┬─type──────┬─help──────────────────────────────────────┬─labels─────────────────────────┬────value─┬─────timestamp─┐
│ http_request_duration_seconds │ histogram │ A histogram of the request duration. │ {'le':'0.05'} │ 24054 │ 0 │
│ http_request_duration_seconds │ histogram │ │ {'le':'0.1'} │ 33444 │ 0 │
│ http_request_duration_seconds │ histogram │ │ {'le':'0.2'} │ 100392 │ 0 │
│ http_request_duration_seconds │ histogram │ │ {'le':'0.5'} │ 129389 │ 0 │
│ http_request_duration_seconds │ histogram │ │ {'le':'1'} │ 133988 │ 0 │
│ http_request_duration_seconds │ histogram │ │ {'le':'+Inf'} │ 144320 │ 0 │
│ http_request_duration_seconds │ histogram │ │ {'sum':''} │ 53423 │ 0 │
│ http_requests_total │ counter │ Total number of HTTP requests │ {'method':'post','code':'200'} │ 1027 │ 1395066363000 │
│ http_requests_total │ counter │ │ {'method':'post','code':'400'} │ 3 │ 1395066363000 │
│ metric_without_timestamp_and_labels │ │ │ {} │ 12.47 │ 0 │
│ rpc_duration_seconds │ summary │ A summary of the RPC duration in seconds. │ {'quantile':'0.01'} │ 3102 │ 0 │
│ rpc_duration_seconds │ summary │ │ {'quantile':'0.05'} │ 3272 │ 0 │
│ rpc_duration_seconds │ summary │ │ {'quantile':'0.5'} │ 4773 │ 0 │
│ rpc_duration_seconds │ summary │ │ {'quantile':'0.9'} │ 9001 │ 0 │
│ rpc_duration_seconds │ summary │ │ {'quantile':'0.99'} │ 76656 │ 0 │
│ rpc_duration_seconds │ summary │ │ {'count':''} │ 2693 │ 0 │
│ rpc_duration_seconds │ summary │ │ {'sum':''} │ 17560473 │ 0 │
│ something_weird │ │ │ {'problem':'division by zero'} │ inf │ -3982045 │
└─────────────────────────────────────┴───────────┴───────────────────────────────────────────┴────────────────────────────────┴──────────┴───────────────┘
Will be formatted as:
# HELP http_request_duration_seconds A histogram of the request duration.
# TYPE http_request_duration_seconds histogram
http_request_duration_seconds_bucket{le="0.05"} 24054
http_request_duration_seconds_bucket{le="0.1"} 33444
http_request_duration_seconds_bucket{le="0.5"} 129389
http_request_duration_seconds_bucket{le="1"} 133988
http_request_duration_seconds_bucket{le="+Inf"} 144320
http_request_duration_seconds_sum 53423
http_request_duration_seconds_count 144320
# HELP http_requests_total Total number of HTTP requests
# TYPE http_requests_total counter
http_requests_total{code="200",method="post"} 1027 1395066363000
http_requests_total{code="400",method="post"} 3 1395066363000
metric_without_timestamp_and_labels 12.47
# HELP rpc_duration_seconds A summary of the RPC duration in seconds.
# TYPE rpc_duration_seconds summary
rpc_duration_seconds{quantile="0.01"} 3102
rpc_duration_seconds{quantile="0.05"} 3272
rpc_duration_seconds{quantile="0.5"} 4773
rpc_duration_seconds{quantile="0.9"} 9001
rpc_duration_seconds{quantile="0.99"} 76656
rpc_duration_seconds_sum 17560473
rpc_duration_seconds_count 2693
something_weird{problem="division by zero"} +Inf -3982045
Protobuf
Protobuf is a Protocol Buffers format.
This format requires an external format schema. The schema is cached between queries.
ClickHouse supports both `proto2` and `proto3` syntaxes. Repeated/optional/required fields are supported.
Usage examples:
SELECT * FROM test.table FORMAT Protobuf SETTINGS format_schema = 'schemafile:MessageType'
cat protobuf_messages.bin | clickhouse-client --query "INSERT INTO test.table SETTINGS format_schema='schemafile:MessageType' FORMAT Protobuf"
where the file `schemafile.proto` looks like this:
syntax = "proto3";
message MessageType {
string name = 1;
string surname = 2;
uint32 birthDate = 3;
repeated string phoneNumbers = 4;
};
To find the correspondence between table columns and fields of Protocol Buffers’ message type, ClickHouse compares their names.
This comparison is case-insensitive, and the characters `_` (underscore) and `.` (dot) are considered equal.
If the types of a column and a field of the Protocol Buffers’ message differ, the necessary conversion is applied.
Nested messages are supported. For example, for the field `z` in the following message type
message MessageType {
  message XType {
    message YType {
      int32 z = 1;
    };
    repeated YType y = 1;
  };
  XType x = 1;
};
ClickHouse tries to find a column named `x.y.z` (or `x_y_z` or `X.y_Z` and so on).
Nested messages are suitable for input or output of nested data structures.
Default values defined in a protobuf schema like this
syntax = "proto2";
message MessageType {
optional int32 result_per_page = 3 [default = 10];
}
are not applied; the table defaults are used instead.
ClickHouse inputs and outputs protobuf messages in the `length-delimited` format.
This means that every message is preceded by its length written as a varint.
See also how to read/write length-delimited protobuf messages in popular languages.
ProtobufSingle
Same as Protobuf but for storing/parsing a single Protobuf message without length delimiters.
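For example, a single message could be written to a file like this (a sketch; the query reuses the schema from the Protobuf example above, and the output path is a placeholder):

```bash
$ clickhouse-client --query "SELECT * FROM test.table LIMIT 1 FORMAT ProtobufSingle SETTINGS format_schema = 'schemafile:MessageType'" > message.bin
```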
Avro
Apache Avro is a row-oriented data serialization framework developed within Apache’s Hadoop project.
ClickHouse Avro format supports reading and writing Avro data files.
Data Types Matching
The table below shows supported data types and how they match ClickHouse data types in `INSERT` and `SELECT` queries.

| Avro data type (`INSERT`) | ClickHouse data type | Avro data type (`SELECT`) |
|---|---|---|
| `boolean`, `int`, `long`, `float`, `double` | Int(8/16/32), UInt(8/16/32) | `int` |
| `boolean`, `int`, `long`, `float`, `double` | Int64, UInt64 | `long` |
| `boolean`, `int`, `long`, `float`, `double` | Float32 | `float` |
| `boolean`, `int`, `long`, `float`, `double` | Float64 | `double` |
| `bytes`, `string`, `fixed`, `enum` | String | `bytes` or `string` * |
| `bytes`, `string`, `fixed` | FixedString(N) | `fixed(N)` |
| `enum` | Enum(8/16) | `enum` |
| `array(T)` | Array(T) | `array(T)` |
| `map(V, K)` | Map(V, K) | `map(string, K)` |
| `union(null, T)`, `union(T, null)` | Nullable(T) | `union(null, T)` |
| `null` | Nullable(Nothing) | `null` |
| `int (date)` ** | Date, Date32 | `int (date)` ** |
| `long (timestamp-millis)` ** | DateTime64(3) | `long (timestamp-millis)` ** |
| `long (timestamp-micros)` ** | DateTime64(6) | `long (timestamp-micros)` ** |
| `bytes (decimal)` ** | DateTime64(N) | `bytes (decimal)` ** |
| `int` | IPv4 | `int` |
| `fixed(16)` | IPv6 | `fixed(16)` |
| `bytes (decimal)` ** | Decimal(P, S) | `bytes (decimal)` ** |
| `string (uuid)` ** | UUID | `string (uuid)` ** |
| `fixed(16)` | Int128/UInt128 | `fixed(16)` |
| `fixed(32)` | Int256/UInt256 | `fixed(32)` |
| `record` | Tuple | `record` |
\* `bytes` is default, controlled by output_format_avro_string_column_pattern

\*\* Avro logical types

Unsupported Avro logical data types: `time-millis`, `time-micros`, `duration`
Inserting Data
To insert data from an Avro file into a ClickHouse table:
$ cat file.avro | clickhouse-client --query="INSERT INTO {some_table} FORMAT Avro"
The root schema of the input Avro file must be of `record` type.

To find the correspondence between table columns and fields of the Avro schema, ClickHouse compares their names. This comparison is case-sensitive. Unused fields are skipped.

Data types of ClickHouse table columns can differ from the corresponding fields of the Avro data inserted. When inserting data, ClickHouse interprets data types according to the table above and then casts the data to the corresponding column type.

When importing data, if a field is not found in the schema and the setting input_format_avro_allow_missing_fields is enabled, the default value is used instead of raising an error.
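For example (a sketch; the file and table names are placeholders, as in the insert command above):

```bash
$ cat file.avro | clickhouse-client --query="INSERT INTO {some_table} SETTINGS input_format_avro_allow_missing_fields = 1 FORMAT Avro"
```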
Selecting Data
To select data from ClickHouse table into an Avro file:
$ clickhouse-client --query="SELECT * FROM {some_table} FORMAT Avro" > file.avro
Column names must:

- start with `[A-Za-z_]`
- subsequently contain only `[A-Za-z0-9_]`
Output Avro file compression and sync interval can be configured with output_format_avro_codec and output_format_avro_sync_interval respectively.
Example Data
Using the ClickHouse DESCRIBE function, you can quickly view the inferred format of an Avro file like the following example. This example includes the URL of a publicly accessible Avro file in the ClickHouse S3 public bucket:

DESCRIBE url('https://clickhouse-public-datasets.s3.eu-central-1.amazonaws.com/hits.avro','Avro');
┌─name───────────────────────┬─type────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ WatchID │ Int64 │ │ │ │ │ │
│ JavaEnable │ Int32 │ │ │ │ │ │
│ Title │ String │ │ │ │ │ │
│ GoodEvent │ Int32 │ │ │ │ │ │
│ EventTime │ Int32 │ │ │ │ │ │
│ EventDate │ Date32 │ │ │ │ │ │
│ CounterID │ Int32 │ │ │ │ │ │
│ ClientIP │ Int32 │ │ │ │ │ │
│ ClientIP6 │ FixedString(16) │ │ │ │ │ │
│ RegionID │ Int32 │ │ │ │ │ │
...
│ IslandID │ FixedString(16) │ │ │ │ │ │
│ RequestNum │ Int32 │ │ │ │ │ │
│ RequestTry │ Int32 │ │ │ │ │ │
└────────────────────────────┴─────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
AvroConfluent
AvroConfluent supports decoding single-object Avro messages commonly used with Kafka and Confluent Schema Registry.
Each Avro message embeds a schema id that can be resolved to the actual schema with the help of the Schema Registry.
Schemas are cached once resolved.
Schema Registry URL is configured with format_avro_schema_registry_url.
Data Types Matching
Same as Avro.
Usage
To quickly verify schema resolution you can use kafkacat with clickhouse-local:
$ kafkacat -b kafka-broker -C -t topic1 -o beginning -f '%s' -c 3 | clickhouse-local --input-format AvroConfluent --format_avro_schema_registry_url 'http://schema-registry' -S "field1 Int64, field2 String" -q 'select * from table'
1 a
2 b
3 c
To use `AvroConfluent` with Kafka:
CREATE TABLE topic1_stream
(
field1 String,
field2 String
)
ENGINE = Kafka()
SETTINGS
kafka_broker_list = 'kafka-broker',
kafka_topic_list = 'topic1',
kafka_group_name = 'group1',
kafka_format = 'AvroConfluent';
-- for debug purposes you can set format_avro_schema_registry_url in a session.
-- this way cannot be used in production
SET format_avro_schema_registry_url = 'http://schema-registry';
SELECT * FROM topic1_stream;
:::note
Setting `format_avro_schema_registry_url` needs to be configured in `users.xml` to maintain its value after a restart. Also you can use the `format_avro_schema_registry_url` setting of the `Kafka` table engine.
:::
Parquet
Apache Parquet is a columnar storage format widespread in the Hadoop ecosystem. ClickHouse supports read and write operations for this format.
Data Types Matching
The table below shows supported data types and how they match ClickHouse data types in `INSERT` and `SELECT` queries.

| Parquet data type (`INSERT`) | ClickHouse data type | Parquet data type (`SELECT`) |
|---|---|---|
| `BOOL` | Bool | `BOOL` |
| `UINT8`, `BOOL` | UInt8 | `UINT8` |
| `INT8` | Int8/Enum8 | `INT8` |
| `UINT16` | UInt16 | `UINT16` |
| `INT16` | Int16/Enum16 | `INT16` |
| `UINT32` | UInt32 | `UINT32` |
| `INT32` | Int32 | `INT32` |
| `UINT64` | UInt64 | `UINT64` |
| `INT64` | Int64 | `INT64` |
| `FLOAT` | Float32 | `FLOAT` |
| `DOUBLE` | Float64 | `DOUBLE` |
| `DATE` | Date32 | `DATE` |
| `TIME (ms)` | DateTime | `UINT32` |
| `TIMESTAMP`, `TIME (us, ns)` | DateTime64 | `TIMESTAMP` |
| `STRING`, `BINARY` | String | `BINARY` |
| `STRING`, `BINARY`, `FIXED_LENGTH_BYTE_ARRAY` | FixedString | `FIXED_LENGTH_BYTE_ARRAY` |
| `DECIMAL` | Decimal | `DECIMAL` |
| `LIST` | Array | `LIST` |
| `STRUCT` | Tuple | `STRUCT` |
| `MAP` | Map | `MAP` |
| `UINT32` | IPv4 | `UINT32` |
| `FIXED_LENGTH_BYTE_ARRAY`, `BINARY` | IPv6 | `FIXED_LENGTH_BYTE_ARRAY` |
| `FIXED_LENGTH_BYTE_ARRAY`, `BINARY` | Int128/UInt128/Int256/UInt256 | `FIXED_LENGTH_BYTE_ARRAY` |
Arrays can be nested and can have a value of the `Nullable` type as an argument. `Tuple` and `Map` types also can be nested.

Unsupported Parquet data types: `FIXED_SIZE_BINARY`, `JSON`, `UUID`, `ENUM`.
Data types of ClickHouse table columns can differ from the corresponding fields of the Parquet data inserted. When inserting data, ClickHouse interprets data types according to the table above and then casts the data to the data type set for the ClickHouse table column.
Inserting and Selecting Data
You can insert Parquet data from a file into a ClickHouse table with the following command:
$ cat {filename} | clickhouse-client --query="INSERT INTO {some_table} FORMAT Parquet"
You can select data from a ClickHouse table and save it into a file in the Parquet format with the following command:
$ clickhouse-client --query="SELECT * FROM {some_table} FORMAT Parquet" > {some_file.pq}
To exchange data with Hadoop, you can use HDFS table engine.
Parquet format settings
- output_format_parquet_row_group_size - row group size in rows while data output. Default value - `1000000`.
- output_format_parquet_string_as_string - use Parquet String type instead of Binary for String columns. Default value - `false`.
- input_format_parquet_import_nested - allow inserting array of structs into Nested table in Parquet input format. Default value - `false`.
- input_format_parquet_case_insensitive_column_matching - ignore case when matching Parquet columns with ClickHouse columns. Default value - `false`.
- input_format_parquet_allow_missing_columns - allow missing columns while reading Parquet data. Default value - `false`.
- input_format_parquet_skip_columns_with_unsupported_types_in_schema_inference - allow skipping columns with unsupported types while schema inference for Parquet format. Default value - `false`.
- output_format_parquet_fixed_string_as_fixed_byte_array - use Parquet FIXED_LENGTH_BYTE_ARRAY type instead of Binary/String for FixedString columns. Default value - `true`.
- output_format_parquet_version - The version of Parquet format used in output format. Default value - `2.latest`.
- output_format_parquet_compression_method - compression method used in output Parquet format. Default value - `snappy`.
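For instance, to write zstd-compressed Parquet (a sketch mirroring the select command above; the table and file names are placeholders):

```bash
$ clickhouse-client --query="SELECT * FROM {some_table} FORMAT Parquet SETTINGS output_format_parquet_compression_method = 'zstd'" > {some_file.pq}
```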
ParquetMetadata
Special format for reading Parquet file metadata (https://parquet.apache.org/docs/file-format/metadata/). It always outputs one row with the following structure/content:
- num_columns - the number of columns
- num_rows - the total number of rows
- num_row_groups - the total number of row groups
- format_version - parquet format version, always 1.0 or 2.6
- total_uncompressed_size - total uncompressed bytes size of the data, calculated as the sum of total_byte_size from all row groups
- total_compressed_size - total compressed bytes size of the data, calculated as the sum of total_compressed_size from all row groups
- columns - the list of columns metadata with the following structure:
- name - column name
- path - column path (differs from name for nested column)
- max_definition_level - maximum definition level
- max_repetition_level - maximum repetition level
- physical_type - column physical type
- logical_type - column logical type
- compression - compression used for this column
- total_uncompressed_size - total uncompressed bytes size of the column, calculated as the sum of total_uncompressed_size of the column from all row groups
- total_compressed_size - total compressed bytes size of the column, calculated as the sum of total_compressed_size of the column from all row groups
- space_saved - percent of space saved by compression, calculated as (1 - total_compressed_size/total_uncompressed_size).
- encodings - the list of encodings used for this column
- row_groups - the list of row groups metadata with the following structure:
- num_columns - the number of columns in the row group
- num_rows - the number of rows in the row group
- total_uncompressed_size - total uncompressed bytes size of the row group
- total_compressed_size - total compressed bytes size of the row group
- columns - the list of column chunks metadata with the following structure:
- name - column name
- path - column path
- total_compressed_size - total compressed bytes size of the column
- total_uncompressed_size - total uncompressed bytes size of the column
- have_statistics - boolean flag that indicates if column chunk metadata contains column statistics
- statistics - column chunk statistics (all fields are NULL if have_statistics = false) with the following structure:
- num_values - the number of non-null values in the column chunk
- null_count - the number of NULL values in the column chunk
- distinct_count - the number of distinct values in the column chunk
- min - the minimum value of the column chunk
- max - the maximum value of the column chunk
Example:
SELECT * FROM file(data.parquet, ParquetMetadata) format PrettyJSONEachRow
{
"num_columns": "2",
"num_rows": "100000",
"num_row_groups": "2",
"format_version": "2.6",
"metadata_size": "577",
"total_uncompressed_size": "282436",
"total_compressed_size": "26633",
"columns": [
{
"name": "number",
"path": "number",
"max_definition_level": "0",
"max_repetition_level": "0",
"physical_type": "INT32",
"logical_type": "Int(bitWidth=16, isSigned=false)",
"compression": "LZ4",
"total_uncompressed_size": "133321",
"total_compressed_size": "13293",
"space_saved": "90.03%",
"encodings": [
"RLE_DICTIONARY",
"PLAIN",
"RLE"
]
},
{
"name": "concat('Hello', toString(modulo(number, 1000)))",
"path": "concat('Hello', toString(modulo(number, 1000)))",
"max_definition_level": "0",
"max_repetition_level": "0",
"physical_type": "BYTE_ARRAY",
"logical_type": "None",
"compression": "LZ4",
"total_uncompressed_size": "149115",
"total_compressed_size": "13340",
"space_saved": "91.05%",
"encodings": [
"RLE_DICTIONARY",
"PLAIN",
"RLE"
]
}
],
"row_groups": [
{
"num_columns": "2",
"num_rows": "65409",
"total_uncompressed_size": "179809",
"total_compressed_size": "14163",
"columns": [
{
"name": "number",
"path": "number",
"total_compressed_size": "7070",
"total_uncompressed_size": "85956",
"have_statistics": true,
"statistics": {
"num_values": "65409",
"null_count": "0",
"distinct_count": null,
"min": "0",
"max": "999"
}
},
{
"name": "concat('Hello', toString(modulo(number, 1000)))",
"path": "concat('Hello', toString(modulo(number, 1000)))",
"total_compressed_size": "7093",
"total_uncompressed_size": "93853",
"have_statistics": true,
"statistics": {
"num_values": "65409",
"null_count": "0",
"distinct_count": null,
"min": "Hello0",
"max": "Hello999"
}
}
]
},
...
]
}
Arrow
Apache Arrow comes with two built-in columnar storage formats. ClickHouse supports read and write operations for these formats.
`Arrow` is Apache Arrow’s "file mode" format. It is designed for in-memory random access.
Data Types Matching
The table below shows supported data types and how they match ClickHouse data types in `INSERT` and `SELECT` queries.

| Arrow data type (`INSERT`) | ClickHouse data type | Arrow data type (`SELECT`) |
|---|---|---|
| `BOOL` | Bool | `BOOL` |
| `UINT8`, `BOOL` | UInt8 | `UINT8` |
| `INT8` | Int8/Enum8 | `INT8` |
| `UINT16` | UInt16 | `UINT16` |
| `INT16` | Int16/Enum16 | `INT16` |
| `UINT32` | UInt32 | `UINT32` |
| `INT32` | Int32 | `INT32` |
| `UINT64` | UInt64 | `UINT64` |
| `INT64` | Int64 | `INT64` |
| `FLOAT`, `HALF_FLOAT` | Float32 | `FLOAT32` |
| `DOUBLE` | Float64 | `FLOAT64` |
| `DATE32` | Date32 | `UINT16` |
| `DATE64` | DateTime | `UINT32` |
| `TIMESTAMP`, `TIME32`, `TIME64` | DateTime64 | `UINT32` |
| `STRING`, `BINARY` | String | `BINARY` |
| `STRING`, `BINARY`, `FIXED_SIZE_BINARY` | FixedString | `FIXED_SIZE_BINARY` |
| `DECIMAL` | Decimal | `DECIMAL` |
| `DECIMAL256` | Decimal256 | `DECIMAL256` |
| `LIST` | Array | `LIST` |
| `STRUCT` | Tuple | `STRUCT` |
| `MAP` | Map | `MAP` |
| `UINT32` | IPv4 | `UINT32` |
| `FIXED_SIZE_BINARY`, `BINARY` | IPv6 | `FIXED_SIZE_BINARY` |
| `FIXED_SIZE_BINARY`, `BINARY` | Int128/UInt128/Int256/UInt256 | `FIXED_SIZE_BINARY` |
Arrays can be nested and can have a value of the `Nullable` type as an argument. `Tuple` and `Map` types also can be nested.

The `DICTIONARY` type is supported for `INSERT` queries, and for `SELECT` queries there is an output_format_arrow_low_cardinality_as_dictionary setting that allows outputting the LowCardinality type as a `DICTIONARY` type.

Unsupported Arrow data types: `FIXED_SIZE_BINARY`, `JSON`, `UUID`, `ENUM`.
The data types of ClickHouse table columns do not have to match the corresponding Arrow data fields. When inserting data, ClickHouse interprets data types according to the table above and then casts the data to the data type set for the ClickHouse table column.
Inserting Data
You can insert Arrow data from a file into a ClickHouse table with the following command:
$ cat filename.arrow | clickhouse-client --query="INSERT INTO some_table FORMAT Arrow"
Selecting Data
You can select data from a ClickHouse table and save it into a file in the Arrow format with the following command:
$ clickhouse-client --query="SELECT * FROM {some_table} FORMAT Arrow" > {filename.arrow}
Arrow format settings
- output_format_arrow_low_cardinality_as_dictionary - enable output ClickHouse LowCardinality type as Dictionary Arrow type. Default value - `false`.
- output_format_arrow_string_as_string - use Arrow String type instead of Binary for String columns. Default value - `false`.
- input_format_arrow_import_nested - allow inserting array of structs into Nested table in Arrow input format. Default value - `false`.
- input_format_arrow_case_insensitive_column_matching - ignore case when matching Arrow columns with ClickHouse columns. Default value - `false`.
- input_format_arrow_allow_missing_columns - allow missing columns while reading Arrow data. Default value - `false`.
- input_format_arrow_skip_columns_with_unsupported_types_in_schema_inference - allow skipping columns with unsupported types while schema inference for Arrow format. Default value - `false`.
- output_format_arrow_fixed_string_as_fixed_byte_array - use Arrow FIXED_SIZE_BINARY type instead of Binary/String for FixedString columns. Default value - `true`.
- output_format_arrow_compression_method - compression method used in output Arrow format. Default value - `lz4_frame`.
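For instance, allowing missing columns on import could look like this (a sketch based on the insert command above; the file and table names are placeholders):

```bash
$ cat filename.arrow | clickhouse-client --query="INSERT INTO some_table SETTINGS input_format_arrow_allow_missing_columns = 1 FORMAT Arrow"
```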
ArrowStream
`ArrowStream` is Apache Arrow’s "stream mode" format. It is designed for in-memory stream processing.
ORC
Apache ORC is a columnar storage format widespread in the Hadoop ecosystem.
Data Types Matching
The table below shows supported data types and how they match ClickHouse data types in `INSERT` and `SELECT` queries.

| ORC data type (`INSERT`) | ClickHouse data type | ORC data type (`SELECT`) |
|---|---|---|
| `Boolean` | UInt8 | `Boolean` |
| `Tinyint` | Int8/UInt8/Enum8 | `Tinyint` |
| `Smallint` | Int16/UInt16/Enum16 | `Smallint` |
| `Int` | Int32/UInt32 | `Int` |
| `Bigint` | Int64/UInt64 | `Bigint` |
| `Float` | Float32 | `Float` |
| `Double` | Float64 | `Double` |
| `Decimal` | Decimal | `Decimal` |
| `Date` | Date32 | `Date` |
| `Timestamp` | DateTime64 | `Timestamp` |
| `String`, `Char`, `Varchar`, `Binary` | String | `Binary` |
| `List` | Array | `List` |
| `Struct` | Tuple | `Struct` |
| `Map` | Map | `Map` |
| `Int` | IPv4 | `Int` |
| `Binary` | IPv6 | `Binary` |
| `Binary` | Int128/UInt128/Int256/UInt256 | `Binary` |
| `Binary` | Decimal256 | `Binary` |
Other types are not supported.
Arrays can be nested and can have a value of the `Nullable` type as an argument. `Tuple` and `Map` types also can be nested.
The data types of ClickHouse table columns do not have to match the corresponding ORC data fields. When inserting data, ClickHouse interprets data types according to the table above and then casts the data to the data type set for the ClickHouse table column.
Inserting Data
You can insert ORC data from a file into a ClickHouse table with the following command:
$ cat filename.orc | clickhouse-client --query="INSERT INTO some_table FORMAT ORC"
Selecting Data
You can select data from a ClickHouse table and save it into a file in the ORC format with the following command:
$ clickhouse-client --query="SELECT * FROM {some_table} FORMAT ORC" > {filename.orc}
ORC format settings
- output_format_orc_string_as_string - use ORC String type instead of Binary for String columns. Default value - `false`.
- output_format_orc_compression_method - compression method used in output ORC format. Default value - `none`.
- input_format_orc_import_nested - allow inserting array of structs into Nested table in ORC input format. Default value - `false`.
- input_format_orc_case_insensitive_column_matching - ignore case when matching ORC columns with ClickHouse columns. Default value - `false`.
- input_format_orc_allow_missing_columns - allow missing columns while reading ORC data. Default value - `false`.
- input_format_orc_skip_columns_with_unsupported_types_in_schema_inference - allow skipping columns with unsupported types while schema inference for ORC format. Default value - `false`.
To exchange data with Hadoop, you can use HDFS table engine.
LineAsString
In this format, every line of input data is interpreted as a single string value. This format can only be parsed for a table with a single field of type String. The remaining columns must be set to DEFAULT or MATERIALIZED, or omitted.
Example
Query:
DROP TABLE IF EXISTS line_as_string;
CREATE TABLE line_as_string (field String) ENGINE = Memory;
INSERT INTO line_as_string FORMAT LineAsString "I love apple", "I love banana", "I love orange";
SELECT * FROM line_as_string;
Result:
┌─field─────────────────────────────────────────────┐
│ "I love apple", "I love banana", "I love orange"; │
└───────────────────────────────────────────────────┘
Regexp
Each line of imported data is parsed according to the regular expression.
When working with the `Regexp` format, you can use the following settings:
- `format_regexp` — String. Contains regular expression in the re2 format.
- `format_regexp_escaping_rule` — String. The following escaping rules are supported:
  - CSV (similarly to CSV)
  - JSON (similarly to JSONEachRow)
  - Escaped (similarly to TSV)
  - Quoted (similarly to Values)
  - Raw (extracts subpatterns as a whole, no escaping rules, similarly to TSVRaw)
- `format_regexp_skip_unmatched` — UInt8. Defines the need to throw an exception in case the `format_regexp` expression does not match the imported data. Can be set to `0` or `1`.
Usage
The regular expression from the format_regexp setting is applied to every line of imported data. The number of subpatterns in the regular expression must be equal to the number of columns in the imported dataset.
Lines of the imported data must be separated by the newline character `'\n'` or the DOS-style newline `"\r\n"`.
The content of every matched subpattern is parsed with the method of corresponding data type, according to format_regexp_escaping_rule setting.
If the regular expression does not match the line and format_regexp_skip_unmatched is set to 1, the line is silently skipped. Otherwise, an exception is thrown.
Example
Consider the file data.tsv:
id: 1 array: [1,2,3] string: str1 date: 2020-01-01
id: 2 array: [1,2,3] string: str2 date: 2020-01-02
id: 3 array: [1,2,3] string: str3 date: 2020-01-03
and the table:
CREATE TABLE imp_regex_table (id UInt32, array Array(UInt32), string String, date Date) ENGINE = Memory;
Import command:
$ cat data.tsv | clickhouse-client --query "INSERT INTO imp_regex_table SETTINGS format_regexp='id: (.+?) array: (.+?) string: (.+?) date: (.+?)', format_regexp_escaping_rule='Escaped', format_regexp_skip_unmatched=0 FORMAT Regexp;"
Query:
SELECT * FROM imp_regex_table;
Result:
┌─id─┬─array───┬─string─┬───────date─┐
│ 1 │ [1,2,3] │ str1 │ 2020-01-01 │
│ 2 │ [1,2,3] │ str2 │ 2020-01-02 │
│ 3 │ [1,2,3] │ str3 │ 2020-01-03 │
└────┴─────────┴────────┴────────────┘
Format Schema
The file name containing the format schema is set by the setting `format_schema`.
This setting is required when one of the formats `Cap'n Proto` or `Protobuf` is used.
The format schema is a combination of a file name and the name of a message type in this file, delimited by a colon, e.g. `schemafile.proto:MessageType`.
If the file has the standard extension for the format (for example, `.proto` for `Protobuf`), it can be omitted, and in this case the format schema looks like `schemafile:MessageType`.
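For instance, the following two invocations should be equivalent (the file and message type names reuse the Protobuf example above):

```bash
$ clickhouse-client --query="SELECT * FROM test.table FORMAT Protobuf SETTINGS format_schema = 'schemafile.proto:MessageType'"
$ clickhouse-client --query="SELECT * FROM test.table FORMAT Protobuf SETTINGS format_schema = 'schemafile:MessageType'"
```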
If you input or output data via the client in interactive mode, the file name specified in the format schema can contain an absolute path or a path relative to the current directory on the client. If you use the client in batch mode, the path to the schema must be relative, for security reasons.

If you input or output data via the HTTP interface, the file name specified in the format schema should be located in the directory specified in format_schema_path in the server configuration.
Skipping Errors
Some formats such as `CSV`, `TabSeparated`, `TSKV`, `JSONEachRow`, `Template`, `CustomSeparated` and `Protobuf` can skip a broken row if a parsing error occurred and continue parsing from the beginning of the next row. See the input_format_allow_errors_num and input_format_allow_errors_ratio settings.
Limitations:

- In case of a parsing error `JSONEachRow` skips all data until the new line (or EOF), so rows must be delimited by `\n` to count errors correctly.
- `Template` and `CustomSeparated` use the delimiter after the last column and the delimiter between rows to find the beginning of the next row, so skipping errors works only if at least one of them is not empty.
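For example, the following command tolerates up to ten malformed rows (the file and table names are placeholders):

```bash
$ cat data.csv | clickhouse-client --query="INSERT INTO some_table SETTINGS input_format_allow_errors_num = 10 FORMAT CSV"
```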
RawBLOB
In this format, all input data is read to a single value. It is possible to parse only a table with a single field of type String or similar. The result is output in binary format without delimiters and escaping. If more than one value is output, the format is ambiguous, and it will be impossible to read the data back.
Below is a comparison of the formats `RawBLOB` and TabSeparatedRaw.

`RawBLOB`:
- data is output in binary format, no escaping;
- there are no delimiters between values;
- no newline at the end of each value.

TabSeparatedRaw:
- data is output without escaping;
- the rows contain values separated by tabs;
- there is a line feed after the last value in every row.
The following is a comparison of the `RawBLOB` and RowBinary formats.

`RawBLOB`:
- String fields are output without being prefixed by length.

RowBinary:
- String fields are represented as length in varint format (unsigned LEB128), followed by the bytes of the string.
When empty data is passed to the `RawBLOB` input, ClickHouse throws an exception:

Code: 108. DB::Exception: No data to insert
Example
$ clickhouse-client --query "CREATE TABLE {some_table} (a String) ENGINE = Memory;"
$ cat {filename} | clickhouse-client --query="INSERT INTO {some_table} FORMAT RawBLOB"
$ clickhouse-client --query "SELECT * FROM {some_table} FORMAT RawBLOB" | md5sum
Result:
f9725a22f9191e064120d718e26862a9 -
MsgPack
ClickHouse supports reading and writing MessagePack data files.
Data Types Matching

| MessagePack data type (`INSERT`) | ClickHouse data type | MessagePack data type (`SELECT`) |
|---|---|---|
| `uint N`, `positive fixint` | UIntN | `uint N` |
| `int N`, `negative fixint` | IntN | `int N` |
| `bool` | UInt8 | `uint 8` |
| `fixstr`, `str 8`, `str 16`, `str 32`, `bin 8`, `bin 16`, `bin 32` | String | `bin 8`, `bin 16`, `bin 32` |
| `fixstr`, `str 8`, `str 16`, `str 32`, `bin 8`, `bin 16`, `bin 32` | FixedString | `bin 8`, `bin 16`, `bin 32` |
| `float 32` | Float32 | `float 32` |
| `float 64` | Float64 | `float 64` |
| `uint 16` | Date | `uint 16` |
| `int 32` | Date32 | `int 32` |
| `uint 32` | DateTime | `uint 32` |
| `uint 64` | DateTime64 | `uint 64` |
| `fixarray`, `array 16`, `array 32` | Array/Tuple | `fixarray`, `array 16`, `array 32` |
| `fixmap`, `map 16`, `map 32` | Map | `fixmap`, `map 16`, `map 32` |
| `uint 32` | IPv4 | `uint 32` |
| `bin 8` | String | `bin 8` |
| `int 8` | Enum8 | `int 8` |
| `bin 8` | (U)Int128/(U)Int256 | `bin 8` |
| `int 32` | Decimal32 | `int 32` |
| `int 64` | Decimal64 | `int 64` |
| `bin 8` | Decimal128/Decimal256 | `bin 8` |
Example:
Writing to a file ".msgpk":
$ clickhouse-client --query="CREATE TABLE msgpack (array Array(UInt8)) ENGINE = Memory;"
$ clickhouse-client --query="INSERT INTO msgpack VALUES ([0, 1, 2, 3, 42, 253, 254, 255]), ([255, 254, 253, 42, 3, 2, 1, 0])";
$ clickhouse-client --query="SELECT * FROM msgpack FORMAT MsgPack" > tmp_msgpack.msgpk;
MsgPack format settings
- input_format_msgpack_number_of_columns - the number of columns in inserted MsgPack data. Used for automatic schema inference from data. Default value - `0`.
- output_format_msgpack_uuid_representation - how to output UUID in MsgPack format. Default value - `EXT`.
MySQLDump
ClickHouse supports reading MySQL dumps. It reads all data from the INSERT queries belonging to one table in the dump. If there is more than one table, by default it reads data from the first one. You can specify the name of the table to read data from using the input_format_mysql_dump_table_name setting. If the setting input_format_mysql_dump_map_columns is set to 1 and the dump contains a CREATE query for the specified table or column names in the INSERT query, the columns from the input data will be mapped to the columns of the table by their names; columns with unknown names will be skipped if the setting input_format_skip_unknown_fields is set to 1. This format supports schema inference: if the dump contains a CREATE query for the specified table, the structure is extracted from it; otherwise, the schema is inferred from the data of the INSERT queries.
Examples:
File dump.sql:
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `test` (
`x` int DEFAULT NULL,
`y` int DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
/*!40101 SET character_set_client = @saved_cs_client */;
INSERT INTO `test` VALUES (1,NULL),(2,NULL),(3,NULL),(3,NULL),(4,NULL),(5,NULL),(6,7);
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `test 3` (
`y` int DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
/*!40101 SET character_set_client = @saved_cs_client */;
INSERT INTO `test 3` VALUES (1);
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `test2` (
`x` int DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
/*!40101 SET character_set_client = @saved_cs_client */;
INSERT INTO `test2` VALUES (1),(2),(3);
Queries:
DESCRIBE TABLE file(dump.sql, MySQLDump) SETTINGS input_format_mysql_dump_table_name = 'test2'
┌─name─┬─type────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ x │ Nullable(Int32) │ │ │ │ │ │
└──────┴─────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
SELECT *
FROM file(dump.sql, MySQLDump)
SETTINGS input_format_mysql_dump_table_name = 'test2'
┌─x─┐
│ 1 │
│ 2 │
│ 3 │
└───┘
Markdown
You can export results using Markdown format to generate output ready to be pasted into your `.md` files:
SELECT
number,
number * 2
FROM numbers(5)
FORMAT Markdown
| number | multiply(number, 2) |
|-:|-:|
| 0 | 0 |
| 1 | 2 |
| 2 | 4 |
| 3 | 6 |
| 4 | 8 |
The Markdown table will be generated automatically and can be used on markdown-enabled platforms, like GitHub. This format is used only for output.