# Kafka

This engine works with [Apache Kafka](http://kafka.apache.org/).

Kafka lets you:

- Publish or subscribe to data flows.
- Organize fault-tolerant storage.
- Process streams as they become available.

Old format:

```
Kafka(kafka_broker_list, kafka_topic_list, kafka_group_name, kafka_format
      [, kafka_row_delimiter, kafka_schema, kafka_num_consumers])
```
New format:

```
Kafka SETTINGS
    kafka_broker_list = 'localhost:9092',
    kafka_topic_list = 'topic1,topic2',
    kafka_group_name = 'group1',
    kafka_format = 'JSONEachRow',
    kafka_row_delimiter = '\n',
    kafka_schema = '',
    kafka_num_consumers = 2
```
Required parameters:

- `kafka_broker_list` – A comma-separated list of brokers (`localhost:9092`).
- `kafka_topic_list` – A list of Kafka topics (`my_topic`).
- `kafka_group_name` – A group of Kafka consumers (`group1`). Reading offsets are tracked for each group separately. If you don't want messages to be duplicated in the cluster, use the same group name everywhere.
- `kafka_format` – Message format. Uses the same notation as the SQL `FORMAT` function, such as `JSONEachRow`. For more information, see the "Formats" section.

Optional parameters:

- `kafka_row_delimiter` – Delimiter character, which ends a message.
- `kafka_schema` – An optional parameter that must be used if the format requires a schema definition. For example, [Cap'n Proto](https://capnproto.org/) requires the path to the schema file and the name of the root `schema.capnp:Message` object.
- `kafka_num_consumers` – The number of consumers per table. Default: `1`. Specify more consumers if the throughput of one consumer is insufficient. The total number of consumers should not exceed the number of partitions in the topic, since only one consumer can be assigned per partition.

Examples:
``` sql
CREATE TABLE queue (
    timestamp UInt64,
    level String,
    message String
) ENGINE = Kafka('localhost:9092', 'topic', 'group1', 'JSONEachRow');

SELECT * FROM queue LIMIT 5;

CREATE TABLE queue2 (
    timestamp UInt64,
    level String,
    message String
) ENGINE = Kafka SETTINGS kafka_broker_list = 'localhost:9092',
                          kafka_topic_list = 'topic',
                          kafka_group_name = 'group1',
                          kafka_format = 'JSONEachRow',
                          kafka_num_consumers = 4;

CREATE TABLE queue2 (
    timestamp UInt64,
    level String,
    message String
) ENGINE = Kafka('localhost:9092', 'topic', 'group1')
  SETTINGS kafka_format = 'JSONEachRow',
           kafka_num_consumers = 4;
```

The delivered messages are tracked automatically, so each message in a group is only counted once. If you want to get the data twice, then create a copy of the table with another group name.
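For example, a second consumer table that subscribes to the same topic under a different group name (the table and group names below are illustrative) receives its own full copy of the stream:

``` sql
-- Reads the same topic as `queue`, but tracks offsets under 'group2',
-- so every message is delivered to both tables independently.
CREATE TABLE queue_copy (
    timestamp UInt64,
    level String,
    message String
) ENGINE = Kafka('localhost:9092', 'topic', 'group2', 'JSONEachRow');
```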
Groups are flexible and synced on the cluster. For instance, if you have 10 topics and 5 copies of a table in a cluster, then each copy gets 2 topics. If the number of copies changes, the topics are redistributed across the copies automatically. Read more about this at [http://kafka.apache.org/intro](http://kafka.apache.org/intro).

`SELECT` is not particularly useful for reading messages (except for debugging), because each message can be read only once. It is more practical to create real-time streams using materialized views. To do this:

1. Use the engine to create a Kafka consumer and consider it a data stream.
2. Create a table with the desired structure.
3. Create a materialized view that converts data from the engine and puts it into a previously created table.

When the `MATERIALIZED VIEW` is attached to the engine, it starts collecting data in the background. This allows you to continually receive messages from Kafka and convert them to the required format using `SELECT`.

Example:
``` sql
CREATE TABLE queue (
    timestamp UInt64,
    level String,
    message String
) ENGINE = Kafka('localhost:9092', 'topic', 'group1', 'JSONEachRow');

CREATE TABLE daily (
    day Date,
    level String,
    total UInt64
) ENGINE = SummingMergeTree(day, (day, level), 8192);

CREATE MATERIALIZED VIEW consumer TO daily
    AS SELECT toDate(toDateTime(timestamp)) AS day, level, count() AS total
    FROM queue GROUP BY day, level;

SELECT level, sum(total) FROM daily GROUP BY level;
```

To improve performance, received messages are grouped into blocks the size of [max_insert_block_size](../settings/settings.md#settings-settings-max_insert_block_size). If the block wasn't formed within [stream_flush_interval_ms](../settings/settings.md) milliseconds, the data will be flushed to the table regardless of the completeness of the block.
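As a sketch, assuming the `default` profile in `users.xml`, the flush interval can be lowered so that incomplete blocks reach the table sooner (the value below is illustrative; the default is 7500 ms):

```xml
<profiles>
    <default>
        <!-- Flush incomplete blocks to the table every second -->
        <stream_flush_interval_ms>1000</stream_flush_interval_ms>
    </default>
</profiles>
```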
To stop receiving topic data or to change the conversion logic, detach the materialized view:

``` sql
DETACH TABLE consumer;
ATTACH MATERIALIZED VIEW consumer;
```

If you want to change the target table by using `ALTER`, we recommend disabling the materialized view to avoid discrepancies between the target table and the data from the view.
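A minimal sketch of that sequence, continuing the example above (the added column is purely illustrative):

``` sql
-- Stop consuming while the target table changes
DETACH TABLE consumer;
ALTER TABLE daily ADD COLUMN hour UInt8 DEFAULT 0;
-- Resume consuming into the updated table
ATTACH MATERIALIZED VIEW consumer;
```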
## Configuration

Similar to GraphiteMergeTree, the Kafka engine supports extended configuration using the ClickHouse config file. There are two configuration keys that you can use: global (`kafka`) and topic-level (`kafka_*`). The global configuration is applied first, and then the topic-level configuration is applied (if it exists).

```xml
<!-- Global configuration options for all tables of Kafka engine type -->
<kafka>
    <debug>cgrp</debug>
    <auto_offset_reset>smallest</auto_offset_reset>
</kafka>

<!-- Configuration specific for topic "logs" -->
<kafka_logs>
    <retry_backoff_ms>250</retry_backoff_ms>
    <fetch_min_bytes>100000</fetch_min_bytes>
</kafka_logs>
```

For a list of possible configuration options, see the [librdkafka configuration reference](https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md). Use the underscore (`_`) instead of a dot in the ClickHouse configuration. For example, `check.crcs=true` will be `<check_crcs>true</check_crcs>`.
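For instance (both option names below come from the librdkafka reference; the values are illustrative):

```xml
<kafka>
    <!-- librdkafka option check.crcs -->
    <check_crcs>true</check_crcs>
    <!-- librdkafka option session.timeout.ms -->
    <session_timeout_ms>30000</session_timeout_ms>
</kafka>
```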
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/kafka/) <!--hide-->