Added /zh docs

rfraposa 2022-04-10 17:08:18 -06:00
parent 8f01fe9c49
commit 4feb622c9f
283 changed files with 855 additions and 528 deletions

View File

@ -5,4 +5,4 @@ collapsed: true
link:
type: generated-index
title: Example Datasets
slug: /en/example-datasets
slug: /en/getting-started/example-datasets

View File

@ -1,6 +1,7 @@
---
sidebar_position: 28
sidebar_label: SQL Reference
slug: en/sql-reference/
---
# SQL Reference {#sql-reference}

docs/zh/_category_.yml Normal file
View File

@ -0,0 +1,7 @@
position: 50
label: '文档'
collapsible: true
collapsed: true
link:
type: generated-index
title: 文档

View File

@ -1,7 +1,7 @@
---
machine_translated: true
machine_translated_rev: b111334d6614a02564cf32f379679e9ff970d9b1
toc_title: "\u53D8\u66F4\u65E5\u5FD7"
sidebar_label: "\u53D8\u66F4\u65E5\u5FD7"
---
## ClickHouse Release V20.3 {#clickhouse-release-v20-3}

View File

@ -1,6 +1,6 @@
---
toc_priority: 1
toc_title: 云
sidebar_position: 1
sidebar_label: 云
---
# ClickHouse 云服务提供商 {#clickhouse-cloud-service-providers}

View File

@ -1,7 +1,6 @@
---
toc_folder_title: 商业支持
toc_priority: 70
toc_title: 简介
sidebar_label: 商业支持
sidebar_position: 70
---
# ClickHouse 商业服务 {#clickhouse-commercial-services}

View File

@ -1,6 +1,6 @@
---
toc_priority: 3
toc_title: 支持
sidebar_position: 3
sidebar_label: 支持
---
# ClickHouse 商业支持服务提供商 {#clickhouse-commercial-support-service-providers}

View File

@ -1,6 +1,6 @@
---
toc_priority: 63
toc_title: "\u6D4F\u89C8\u6E90\u4EE3\u7801"
sidebar_position: 63
sidebar_label: "\u6D4F\u89C8\u6E90\u4EE3\u7801"
---
# 浏览ClickHouse源代码 {#browse-clickhouse-source-code}

View File

@ -1,8 +1,8 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 67
toc_title: "\u5982\u4F55\u5728Linux\u4E0A\u6784\u5EFAClickHouse for AARCH64\uFF08\
sidebar_position: 67
sidebar_label: "\u5982\u4F55\u5728Linux\u4E0A\u6784\u5EFAClickHouse for AARCH64\uFF08\
ARM64)"
---

View File

@ -0,0 +1,6 @@
label: '引擎'
collapsible: true
collapsed: true
link:
type: generated-index
title: 引擎

View File

@ -1,6 +1,6 @@
---
toc_priority: 32
toc_title: Atomic
sidebar_position: 32
sidebar_label: Atomic
---
# Atomic {#atomic}

View File

@ -1,7 +1,6 @@
---
toc_folder_title: 数据库引擎
toc_priority: 27
toc_title: Introduction
sidebar_label: 数据库引擎
sidebar_position: 27
---
# 数据库引擎 {#database-engines}

View File

@ -1,6 +1,6 @@
---
toc_priority: 31
toc_title: Lazy
sidebar_position: 31
sidebar_label: Lazy
---
# Lazy {#lazy}

View File

@ -1,6 +1,6 @@
---
toc_priority: 29
toc_title: "[experimental] MaterializedMySQL"
sidebar_position: 29
sidebar_label: "[experimental] MaterializedMySQL"
---
# [experimental] MaterializedMySQL {#materialized-mysql}

View File

@ -1,6 +1,6 @@
---
toc_priority: 29
toc_title: MaterializedMySQL
sidebar_position: 29
sidebar_label: MaterializedMySQL
---
# [experimental] MaterializedMySQL {#materialized-mysql}
@ -53,7 +53,7 @@ CREATE DATABASE mysql ENGINE = MaterializedMySQL('localhost:3306', 'db', 'user',
- `default_authentication_plugin = mysql_native_password `,因为 `MaterializedMySQL` 只能授权使用该方法。
- `gtid_mode = on`因为基于GTID的日志记录是提供正确的 `MaterializedMySQL`复制的强制要求。
!!! attention "注意"
:::info "注意"
当打开`gtid_mode`时,您还应该指定`enforce_gtid_consistency = on`。
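A hedged sketch of applying these MySQL-side requirements (illustrative only; `default_authentication_plugin` normally has to be set in the server configuration, e.g. `my.cnf`, before users are created):

``` sql
-- On the source MySQL server; gtid_mode can only be raised one step at a time
SET GLOBAL enforce_gtid_consistency = ON;
SET GLOBAL gtid_mode = OFF_PERMISSIVE;
SET GLOBAL gtid_mode = ON_PERMISSIVE;
SET GLOBAL gtid_mode = ON;
```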
## 虚拟列 {#virtual-columns}

View File

@ -1,6 +1,6 @@
---
toc_priority: 30
toc_title: MaterializedPostgreSQL
sidebar_position: 30
sidebar_label: MaterializedPostgreSQL
---
# [experimental] MaterializedPostgreSQL {#materialize-postgresql}

View File

@ -1,6 +1,6 @@
---
toc_priority: 30
toc_title: MySQL
sidebar_position: 30
sidebar_label: MySQL
---
# MySQL {#mysql}

View File

@ -1,6 +1,6 @@
---
toc_priority: 35
toc_title: PostgreSQL
sidebar_position: 35
sidebar_label: PostgreSQL
---
# PostgreSQL {#postgresql}

View File

@ -1,6 +1,6 @@
---
toc_priority: 32
toc_title: SQLite
sidebar_position: 32
sidebar_label: SQLite
---
# SQLite {#sqlite}

View File

@ -1,6 +0,0 @@
---
toc_folder_title: "\u5f15\u64ce"
toc_priority: 25
---

View File

@ -1,6 +1,6 @@
---
toc_priority: 9
toc_title: EmbeddedRocksDB
sidebar_position: 9
sidebar_label: EmbeddedRocksDB
---
# EmbeddedRocksDB 引擎 {#EmbeddedRocksDB-engine}

View File

@ -1,6 +1,6 @@
---
toc_priority: 36
toc_title: HDFS
sidebar_position: 36
sidebar_label: HDFS
---
# HDFS {#table_engines-hdfs}

View File

@ -1,6 +1,6 @@
---
toc_priority: 4
toc_title: Hive
sidebar_position: 4
sidebar_label: Hive
---
# Hive {#hive}

View File

@ -1,6 +1,6 @@
---
toc_folder_title: "\u96C6\u6210"
toc_priority: 30
sidebar_label: 集成的表引擎
sidebar_position: 30
---
# 集成的表引擎 {#table-engines-for-integrations}

View File

@ -1,6 +1,6 @@
---
toc_priority: 34
toc_title: JDBC表引擎
sidebar_position: 34
sidebar_label: JDBC表引擎
---
# JDBC {#table-engine-jdbc}

View File

@ -1,6 +1,6 @@
---
toc_priority: 5
toc_title: MongoDB
sidebar_position: 5
sidebar_label: MongoDB
---
# MongoDB {#mongodb}

View File

@ -1,6 +1,6 @@
---
toc_priority: 35
toc_title: ODBC
sidebar_position: 35
sidebar_label: ODBC
---
# ODBC {#table-engine-odbc}

View File

@ -1,6 +1,6 @@
---
toc_priority: 11
toc_title: PostgreSQL
sidebar_position: 11
sidebar_label: PostgreSQL
---
# PostgreSQL {#postgresql}

View File

@ -1,6 +1,6 @@
---
toc_priority: 10
toc_title: RabbitMQ
sidebar_position: 10
sidebar_label: RabbitMQ
---
# RabbitMQ 引擎 {#rabbitmq-engine}

View File

@ -1,6 +1,6 @@
---
toc_priority: 7
toc_title: S3
sidebar_position: 7
sidebar_label: S3
---
# S3 表引擎 {#table-engine-s3}

View File

@ -1,6 +1,6 @@
---
toc_priority: 7
toc_title: SQLite
sidebar_position: 7
sidebar_label: SQLite
---
# SQLite {#sqlite}

View File

@ -1,7 +1,6 @@
---
toc_folder_title: "\u65E5\u5FD7\u7CFB\u5217"
toc_title: 日志引擎系列
toc_priority: 29
sidebar_label: 日志引擎系列
sidebar_position: 29
---
# 日志引擎系列 {#table_engines-log-engine-family}

View File

@ -37,7 +37,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
<summary>已弃用的建表方法</summary>
!!! attention "注意"
:::info "注意"
不要在新项目中使用该方法,可能的话,请将旧项目切换到上述方法。
``` sql

View File

@ -37,7 +37,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
<summary>已弃用的建表方法</summary>
!!! attention "注意"
:::info "注意"
不要在新项目中使用该方法,可能的话,请将旧项目切换到上述方法。
``` sql

View File

@ -1,6 +1,6 @@
---
toc_priority: 38
toc_title: GraphiteMergeTree
sidebar_position: 38
sidebar_label: GraphiteMergeTree
---
# GraphiteMergeTree {#graphitemergetree}

View File

@ -1,6 +1,6 @@
---
toc_folder_title: "合并树家族"
toc_priority: 28
sidebar_label: "合并树家族"
sidebar_position: 28
---

View File

@ -118,7 +118,7 @@ ENGINE MergeTree() PARTITION BY toYYYYMM(EventDate) ORDER BY (CounterID, EventDa
<details markdown="1">
<summary>已弃用的建表方法</summary>
!!! attention "注意"
:::attention "注意"
不要在新版项目中使用该方法,可能的话,请将旧项目切换到上述方法。
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
@ -186,24 +186,24 @@ ClickHouse 会为每个数据片段创建一个索引文件来存储这些标记
ClickHouse 不要求主键唯一,所以您可以插入多条具有相同主键的行。
您可以在`PRIMARY KEY`与`ORDER BY`条件中使用`可为空的`类型的表达式,但强烈建议不要这么做。为了启用这项功能,请打开[allow_nullable_key](https://clickhouse.com/docs/zh/operations/settings/settings/#allow-nullable-key)[NULLS_LAST](https://clickhouse.com/docs/zh/sql-reference/statements/select/order-by/#sorting-of-special-values)规则也适用于`ORDER BY`条件中有NULL值的情况下。
您可以在`PRIMARY KEY`与`ORDER BY`条件中使用`可为空的`类型的表达式,但强烈建议不要这么做。为了启用这项功能,请打开[allow_nullable_key](../../../operations/settings/#allow-nullable-key)[NULLS_LAST](../../../sql-reference/statements/select/order-by.md/#sorting-of-special-values)规则也适用于`ORDER BY`条件中有NULL值的情况下。
### 主键的选择 {#zhu-jian-de-xuan-ze}
主键中列的数量并没有明确的限制。依据数据结构,您可以在主键包含多些或少些列。这样可以:
- 改善索引的性能。
- 改善索引的性能。
如果当前主键是 `(a, b)` ,在下列情况下添加另一个 `c` 列会提升性能:
- 如果当前主键是 `(a, b)` ,在下列情况下添加另一个 `c` 列会提升性能:
- 查询会使用 `c` 列作为条件
- 很长的数据范围( `index_granularity` 的数倍)里 `(a, b)` 都是相同的值,并且这样的情况很普遍。换言之,就是加入另一列后,可以让您的查询略过很长的数据范围。
- 改善数据压缩。
- 改善数据压缩。
ClickHouse 以主键排序片段数据,所以,数据的一致性越高,压缩越好。
- 在[CollapsingMergeTree](collapsingmergetree.md#table_engine-collapsingmergetree) 和 [SummingMergeTree](summingmergetree.md) 引擎里进行数据合并时会提供额外的处理逻辑。
- 在[CollapsingMergeTree](collapsingmergetree.md#table_engine-collapsingmergetree) 和 [SummingMergeTree](summingmergetree.md) 引擎里进行数据合并时会提供额外的处理逻辑。
在这种情况下,指定与主键不同的 *排序键* 也是有意义的。
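As an illustrative sketch (hypothetical table), a sorting key can extend the primary key with an extra column that the merge logic needs, while the primary key stays a short prefix of it:

``` sql
CREATE TABLE agg_hits
(
    CounterID UInt32,
    EventDate Date,
    URL String,
    Views UInt64
)
ENGINE = SummingMergeTree(Views)
ORDER BY (CounterID, EventDate, URL)  -- sorting key groups rows for summation
PRIMARY KEY (CounterID, EventDate);   -- must be a prefix of the sorting key
```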
@ -227,10 +227,6 @@ Clickhouse可以做到指定一个跟排序键不一样的主键此时排序
对于 `SELECT` 查询ClickHouse 分析是否可以使用索引。如果 `WHERE/PREWHERE` 子句具有下面这些表达式作为完整WHERE条件的一部分或全部则可以使用索引进行相等/不相等的比较;对主键列或分区列进行`IN`运算、有固定前缀的`LIKE`运算(如name like 'test%')、函数运算(部分函数适用),还有对上述表达式进行逻辑运算。
<!-- It is too hard for me to translate this section as the original text completely. So I did it with my own understanding. If you have good idea, please help me. -->
<!-- It is hard for me to translate this section too, but I think change the sentence struct is helpful for understanding. So I change the phraseology-->
<!--I try to translate it in Chinese,don't worry. -->
因此,在索引键的一个或多个区间上快速地执行查询是可能的。下面例子中,指定标签;指定标签和日期范围;指定标签和日期;指定多个标签和日期范围等执行查询,都会非常快。

View File

@ -40,7 +40,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
<summary>已弃用的建表方法</summary>
!!! attention "注意"
:::info "注意"
不要在新项目中使用该方法,可能的话,请将旧项目切换到上述方法。
``` sql

View File

@ -34,7 +34,7 @@
<summary>已弃用的建表方法</summary>
!!! attention "注意"
:::info "注意"
不要在新项目中使用该方法,可能的话,请将旧项目切换到上述方法。
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]

View File

@ -1,6 +1,6 @@
---
toc_priority: 37
toc_title: "版本折叠MergeTree"
sidebar_position: 37
sidebar_label: "版本折叠MergeTree"
---
# VersionedCollapsingMergeTree {#versionedcollapsingmergetree}
@ -53,7 +53,7 @@ VersionedCollapsingMergeTree(sign, version)
<summary>不推荐使用的创建表的方法</summary>
!!! attention "注意"
:::info "注意"
不要在新项目中使用此方法。 如果可能,请将旧项目切换到上述方法。
``` sql

View File

@ -1,6 +1,6 @@
---
toc_priority: 33
toc_title: 分布式引擎
sidebar_position: 33
sidebar_label: 分布式引擎
---
# 分布式引擎 {#distributed}

View File

@ -1,6 +1,6 @@
---
toc_priority: 46
toc_title: 随机数生成
sidebar_position: 46
sidebar_label: 随机数生成
---
# 随机数生成表引擎 {#table_engines-generate}

View File

@ -1,8 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_folder_title: "\u7279\u522B"
toc_priority: 31
---

View File

@ -1,6 +1,6 @@
---
toc_priority: 40
toc_title: 关联表引擎
sidebar_position: 40
sidebar_label: 关联表引擎
---
# 关联表引擎 {#join}

View File

@ -1,6 +1,6 @@
---
toc_priority: 44
toc_title: Memory
sidebar_position: 44
sidebar_label: Memory
---
# 内存表 {#memory}

View File

@ -1,7 +1,7 @@
---
title: 什么是列存储数据库?
toc_hidden: true
toc_priority: 101
sidebar_position: 101
---
# 什么是列存储数据库? {#what-is-a-columnar-database}

View File

@ -1,7 +1,7 @@
---
title: "\u201CClickHouse\u201D 有什么含义?"
toc_hidden: true
toc_priority: 10
sidebar_position: 10
---
# “ClickHouse” 有什么含义? {#what-does-clickhouse-mean}

View File

@ -1,7 +1,7 @@
---
title: 我如何为ClickHouse贡献代码?
toc_hidden: true
toc_priority: 120
sidebar_position: 120
---
# 我如何为ClickHouse贡献代码? {#how-do-i-contribute-code-to-clickhouse}

View File

@ -1,8 +1,8 @@
---
title: ClickHouse 有关常见问题
toc_hidden_folder: true
toc_priority: 1
toc_title: General
sidebar_position: 1
sidebar_label: General
---
# ClickHouse 有关常见问题 {#general-questions}

View File

@ -1,7 +1,7 @@
---
title: 为何不使用 MapReduce等技术?
toc_hidden: true
toc_priority: 110
sidebar_position: 110
---
# 为何不使用 MapReduce等技术? {#why-not-use-something-like-mapreduce}

View File

@ -1 +0,0 @@
../../../en/faq/general/ne-tormozit.md

View File

@ -0,0 +1,26 @@
---
title: "What does \u201C\u043D\u0435 \u0442\u043E\u0440\u043C\u043E\u0437\u0438\u0442\
\u201D mean?"
toc_hidden: true
sidebar_position: 11
---
# What Does “Не тормозит” Mean? {#what-does-ne-tormozit-mean}
This question usually arises when people see official ClickHouse t-shirts. They have the large words **“ClickHouse не тормозит”** on the front.
Before ClickHouse became open-source, it had been developed as an in-house storage system by the largest Russian IT company, [Yandex](https://yandex.com/company/). That's why it initially got its slogan in Russian, which is “не тормозит” (pronounced “ne tormozit”). After the open-source release, we first produced some of those t-shirts for events in Russia, and it was a no-brainer to use the slogan as-is.
One of the following batches of those t-shirts was supposed to be given away at events outside of Russia, and we tried to make an English version of the slogan. Unfortunately, Russian is rather elegant at expressing things, and there was the restriction of limited space on a t-shirt, so we failed to come up with a good enough translation (most options appeared to be either too long or inaccurate) and decided to keep the slogan in Russian even on t-shirts produced for international events. It appeared to be a great decision because people all over the world are positively surprised and curious when they see it.
So, what does it mean? Here are some ways to translate *“не тормозит”*:
- If you translate it literally, it'd be something like *“ClickHouse does not press the brake pedal”*.
- If you wanted to express it close to how it sounds to a Russian person with an IT background, it'd be something like *“If your larger system lags, it's not because it uses ClickHouse”*.
- Shorter, but not so precise, versions could be *“ClickHouse is not slow”*, *“ClickHouse does not lag”*, or just *“ClickHouse is fast”*.
If you haven't seen one of those t-shirts in person, you can check them out online in many ClickHouse-related videos. For example, this one:
![iframe](https://www.youtube.com/embed/bSyQahMVZ7w)
P.S. These t-shirts are not for sale; they are given away for free at most [ClickHouse Meetups](https://clickhouse.com/#meet), usually for the best questions or other forms of active participation.

View File

@ -1 +0,0 @@
../../../en/faq/general/olap.md

View File

@ -0,0 +1,39 @@
---
title: What is OLAP?
toc_hidden: true
sidebar_position: 100
---
# What Is OLAP? {#what-is-olap}
[OLAP](https://en.wikipedia.org/wiki/Online_analytical_processing) stands for Online Analytical Processing. It is a broad term that can be looked at from two perspectives: technical and business. But at a very high level, you can just read these words backward:
Processing
: Some source data is processed…
Analytical
: …to produce some analytical reports and insights…
Online
: …in real-time.
## OLAP from the Business Perspective {#olap-from-the-business-perspective}
In recent years, business people started to realize the value of data. Companies that make their decisions blindly more often than not fail to keep up with the competition. The data-driven approach of successful companies forces them to collect all data that might be remotely useful for making business decisions, and to have mechanisms to analyze that data in a timely manner. Here's where OLAP database management systems (DBMS) come in.
In a business sense, OLAP allows companies to continuously plan, analyze, and report on operational activities, thus maximizing efficiency, reducing expenses, and ultimately conquering market share. It could be done either in an in-house system or outsourced to SaaS providers like web/mobile analytics services, CRM services, etc. OLAP is the technology behind many BI (Business Intelligence) applications.
ClickHouse is an OLAP database management system that is pretty often used as a backend for those SaaS solutions for analyzing domain-specific data. However, some businesses are still reluctant to share their data with third-party providers, so an in-house data warehouse scenario is also viable.
## OLAP from the Technical Perspective {#olap-from-the-technical-perspective}
All database management systems could be classified into two groups: OLAP (Online **Analytical** Processing) and OLTP (Online **Transactional** Processing). The former focuses on building reports, each based on large volumes of historical data, but doing it not so frequently. The latter usually handles a continuous stream of transactions, constantly modifying the current state of data.
In practice, OLAP and OLTP are not strict categories but rather a spectrum. Most real systems usually focus on one of them but provide some solutions or workarounds if the opposite kind of workload is also desired. This situation often forces businesses to operate multiple storage systems integrated with each other, which might not be a big deal by itself, but having more systems makes them more expensive to maintain. So the trend of recent years is HTAP (**Hybrid Transactional/Analytical Processing**), where both kinds of workload are handled equally well by a single database management system.
Even if a DBMS started as pure OLAP or pure OLTP, it is forced to move in the HTAP direction to keep up with the competition. And ClickHouse is no exception: initially, it was designed as a [fast-as-possible OLAP system](../../faq/general/why-clickhouse-is-so-fast.md), and it still does not have full-fledged transaction support, but some features like consistent reads/writes and mutations for updating/deleting data had to be added.
The fundamental trade-off between OLAP and OLTP systems remains:
- To build analytical reports efficiently, it's crucial to be able to read columns separately; thus, most OLAP databases are [columnar](../../faq/general/columnar-database.md).
- Storing columns separately increases the cost of operations on rows, like appends or in-place modifications, proportionally to the number of columns (which can be huge if the system tries to collect all details of an event just in case). Thus, most OLTP systems store data arranged by rows.
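As an illustration (hypothetical table and column names), a typical analytical query reads only a couple of columns of a potentially very wide table, which is exactly the access pattern columnar storage serves best:

``` sql
SELECT
    toStartOfMonth(EventDate) AS month,
    count() AS page_views
FROM hits  -- a hypothetical wide table with hundreds of columns
GROUP BY month
ORDER BY month;
```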

View File

@ -1,7 +1,7 @@
---
title: 谁在使用 ClickHouse?
toc_hidden: true
toc_priority: 9
sidebar_position: 9
---
# 谁在使用 ClickHouse? {#who-is-using-clickhouse}

View File

@ -1 +0,0 @@
../../../en/faq/general/why-clickhouse-is-so-fast.md

View File

@ -0,0 +1,61 @@
---
title: Why is ClickHouse so fast?
toc_hidden: true
sidebar_position: 8
---
# Why Is ClickHouse So Fast? {#why-clickhouse-is-so-fast}
It was designed to be fast. Query execution performance has always been a top priority during the development process, but other important characteristics like user-friendliness, scalability, and security were also considered so ClickHouse could become a real production system.
ClickHouse was initially built as a prototype to do just a single task well: to filter and aggregate data as fast as possible. That's what needs to be done to build a typical analytical report, and that's what a typical [GROUP BY](../../en/sql-reference/statements/select/group-by/) query does. The ClickHouse team made several high-level decisions that, combined, made achieving this task possible:
Column-oriented storage
: Source data often contains hundreds or even thousands of columns, while a report can use just a few of them. The system needs to avoid reading unnecessary columns; otherwise, the most expensive disk read operations would be wasted.
Indexes
: ClickHouse keeps data structures in memory that allow reading not only the columns in use, but only the necessary row ranges of those columns.
Data compression
: Storing different values of the same column together often leads to better compression ratios (compared to row-oriented systems) because in real data a column often has the same value, or not so many different values, for neighboring rows. In addition to general-purpose compression, ClickHouse supports [specialized codecs](../../en/sql-reference/statements/create/table/#create-query-specialized-codecs) that can make data even more compact.
Vectorized query execution
: ClickHouse not only stores data in columns but also processes data in columns. This leads to better CPU cache utilization and allows the use of [SIMD](https://en.wikipedia.org/wiki/SIMD) CPU instructions.
Scalability
: ClickHouse can leverage all available CPU cores and disks to execute even a single query. Not only on a single server, but on all the CPU cores and disks of a cluster as well.
But many other database management systems use similar techniques. What really makes ClickHouse stand out is **attention to low-level details**. Most programming languages provide implementations for the most common algorithms and data structures, but they tend to be too generic to be effective. Every task should be treated as a landscape with its own characteristics, instead of something to throw a random implementation at. For example, if you need a hash table, here are some key questions to consider:
- Which hash function to choose?
- Collision resolution algorithm: [open addressing](https://en.wikipedia.org/wiki/Open_addressing) vs [chaining](https://en.wikipedia.org/wiki/Hash_table#Separate_chaining)?
- Memory layout: one array for keys and values or separate arrays? Will it store small or large values?
- Fill factor: when and how to resize? How to move values around on resize?
- Will values be removed, and if so, which algorithm will work better?
- Will we need fast probing with bitmaps, inline placement of string keys, support for non-movable values, prefetch, and batching?
The hash table is a key data structure for the `GROUP BY` implementation, and ClickHouse automatically chooses one of [30+ variations](https://github.com/ClickHouse/ClickHouse/blob/master/src/Interpreters/Aggregator.h) for each specific query.
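For illustration (hypothetical table and columns), these two aggregations would typically be served by different specialized hash tables, because their key types differ:

``` sql
-- Fixed-width integer key: a compact integer-keyed hash table fits
SELECT RegionID, count() FROM hits GROUP BY RegionID;

-- Variable-length string key: a string-specialized hash table is used instead
SELECT URL, count() FROM hits GROUP BY URL;
```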
The same goes for algorithms. For example, for sorting you might consider:
- What will be sorted: an array of numbers, tuples, strings, or structures?
- Is all data available completely in RAM?
- Do we need a stable sort?
- Do we need a full sort? Maybe a partial sort or the n-th element will suffice?
- How to implement comparisons?
- Are we sorting data that has already been partially sorted?
Algorithms that rely on the characteristics of the data they work with can often do better than their generic counterparts. If the characteristics are not really known in advance, the system can try various implementations and choose the one that works best at runtime. For example, see an [article on how LZ4 decompression is implemented in ClickHouse](https://habr.com/en/company/yandex/blog/457612/).
Last but not least, the ClickHouse team always monitors the Internet for people claiming that they came up with the best implementation, algorithm, or data structure to do something, and tries it out. Those claims mostly turn out to be false, but from time to time you'll indeed find a gem.
:::info Tips for building your own high-performance software
- Keep in mind low-level details when designing your system.
- Design based on hardware capabilities.
- Choose data structures and abstractions based on the needs of the task.
- Provide specializations for special cases.
- Try the new, “best” algorithms that you read about yesterday.
- Choose an algorithm in runtime based on statistics.
- Benchmark on real datasets.
- Test for performance regressions in CI.
- Measure and observe everything.
:::

View File

@ -1,7 +1,7 @@
---
toc_folder_title: F.A.Q.
sidebar_label: F.A.Q.
toc_hidden: true
toc_priority: 76
sidebar_position: 76
---
# ClickHouse 问答 F.A.Q {#clickhouse-f-a-q}

View File

@ -1,7 +1,7 @@
---
title: 如何从 ClickHouse 导出数据到一个文件?
toc_hidden: true
toc_priority: 10
sidebar_position: 10
---
# 如何从 ClickHouse 导出数据到一个文件? {#how-to-export-to-file}

View File

@ -1,8 +1,8 @@
---
title: 关于集成ClickHouse和其他系统的问题
toc_hidden_folder: true
toc_priority: 4
toc_title: Integration
sidebar_position: 4
sidebar_label: Integration
---
# 关于集成ClickHouse和其他系统的问题 {#question-about-integrating-clickhouse-and-other-systems}

View File

@ -1 +0,0 @@
../../../en/faq/integration/json-import.md

View File

@ -0,0 +1,34 @@
---
title: How to import JSON into ClickHouse?
toc_hidden: true
sidebar_position: 11
---
# How to Import JSON Into ClickHouse? {#how-to-import-json-into-clickhouse}
ClickHouse supports a wide range of [data formats for input and output](../../en/interfaces/formats/). There are multiple JSON variations among them, but the most commonly used for data ingestion is [JSONEachRow](../../en/interfaces/formats/#jsoneachrow). It expects one JSON object per row, each object separated by a newline.
## Examples {#examples}
Using [HTTP interface](../../en/interfaces/http/):
``` bash
$ echo '{"foo":"bar"}' | curl 'http://localhost:8123/?query=INSERT%20INTO%20test%20FORMAT%20JSONEachRow' --data-binary @-
```
Using [CLI interface](../../en/interfaces/cli/):
``` bash
$ echo '{"foo":"bar"}' | clickhouse-client --query="INSERT INTO test FORMAT JSONEachRow"
```
Instead of inserting data manually, you might consider using one of the [client libraries](../../en/interfaces/).
## Useful Settings {#useful-settings}
- `input_format_skip_unknown_fields` allows inserting JSON even if there are additional fields not present in the table schema (they are discarded).
- `input_format_import_nested_json` allows inserting nested JSON objects into columns of [Nested](../../en/sql-reference/data-types/nested-data-structures/nested/) type.
:::note
Settings are specified as `GET` parameters for the HTTP interface or as additional command-line arguments prefixed with `--` for the `CLI` interface.
:::
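Alternatively, in an interactive `clickhouse-client` session the setting can be enabled with `SET`. A minimal sketch, assuming a table `test` whose schema contains only a `foo` column:

``` sql
SET input_format_skip_unknown_fields = 1;

-- "unknown_field" is not in the table schema and is silently discarded
INSERT INTO test FORMAT JSONEachRow {"foo":"bar","unknown_field":42}
```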

View File

@ -1 +0,0 @@
../../../en/faq/integration/oracle-odbc.md

View File

@ -0,0 +1,15 @@
---
title: What if I have a problem with encodings when using Oracle via ODBC?
toc_hidden: true
sidebar_position: 20
---
# What If I Have a Problem with Encodings When Using Oracle Via ODBC? {#oracle-odbc-encodings}
If you use Oracle as a source of ClickHouse external dictionaries via the Oracle ODBC driver, you need to set the correct value for the `NLS_LANG` environment variable in `/etc/default/clickhouse`. For more information, see the [Oracle NLS_LANG FAQ](https://www.oracle.com/technetwork/products/globalization/nls-lang-099431.html).
**Example**
``` bash
NLS_LANG=RUSSIAN_RUSSIA.UTF8
```

View File

@ -1 +0,0 @@
../../../en/faq/operations/delete-old-data.md

View File

@ -0,0 +1,43 @@
---
title: Is it possible to delete old records from a ClickHouse table?
toc_hidden: true
sidebar_position: 20
---
# Is It Possible to Delete Old Records from a ClickHouse Table? {#is-it-possible-to-delete-old-records-from-a-clickhouse-table}
The short answer is “yes”. ClickHouse has multiple mechanisms that allow freeing up disk space by removing old data. Each mechanism is aimed at a different scenario.
## TTL {#ttl}
ClickHouse allows automatically dropping values when some condition is met. This condition is configured as an expression based on any columns, usually just a static offset for a timestamp column.
The key advantage of this approach is that it does not need any external system to trigger it: once TTL is configured, data removal happens automatically in the background.
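For illustration, a minimal sketch (the table and column names are hypothetical): rows older than three months are dropped during background merges.

``` sql
CREATE TABLE events
(
    EventDate Date,
    UserID UInt64
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(EventDate)
ORDER BY (EventDate, UserID)
TTL EventDate + INTERVAL 3 MONTH;
```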
:::note
TTL can also be used to move data not only to [/dev/null](https://en.wikipedia.org/wiki/Null_device), but also between different storage systems, like from SSD to HDD.
:::
More details on [configuring TTL](../../en/engines/table-engines/mergetree-family/mergetree/#table_engine-mergetree-ttl).
## ALTER DELETE {#alter-delete}
ClickHouse does not have real-time point deletes like [OLTP](https://en.wikipedia.org/wiki/Online_transaction_processing) databases do. The closest thing to them are mutations. They are issued as `ALTER ... DELETE` or `ALTER ... UPDATE` queries to distinguish them from normal `DELETE` or `UPDATE`, as they are asynchronous batch operations, not immediate modifications. The rest of the syntax after the `ALTER TABLE` prefix is similar.
`ALTER DELETE` can be issued to flexibly remove old data. If you need to do it regularly, the main downside will be the need for an external system to submit the query. There are also some performance considerations, since mutations rewrite complete parts even if there's only a single row to be deleted.
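A hedged example against the hypothetical `events` table from the TTL sketch above:

``` sql
-- Asynchronously rewrites the affected parts without the matching rows
ALTER TABLE events DELETE WHERE EventDate < '2020-01-01';
```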
This is the most common approach to make your system based on ClickHouse [GDPR](https://gdpr-info.eu)-compliant.
More details on [mutations](../../en/sql-reference/statements/alter/#alter-mutations).
## DROP PARTITION {#drop-partition}
`ALTER TABLE ... DROP PARTITION` provides a cost-efficient way to drop a whole partition. It's not that flexible and needs a proper partitioning scheme configured on table creation, but it still covers most common cases. Like mutations, it needs to be executed from an external system for regular use.
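Continuing with the hypothetical `events` table, which the sketch above partitioned by `toYYYYMM(EventDate)`:

``` sql
-- Drops the whole January 2020 partition almost instantly
ALTER TABLE events DROP PARTITION 202001;
```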
More details on [manipulating partitions](../../en/sql-reference/statements/alter/partition/#alter_drop-partition).
## TRUNCATE {#truncate}
It's rather radical to drop all data from a table, but in some cases it might be exactly what you need.
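Again on the hypothetical `events` table:

``` sql
-- Removes all rows but keeps the table definition
TRUNCATE TABLE events;
```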
More details on [table truncation](../../en/sql-reference/statements/truncate/).

View File

@ -1,8 +1,8 @@
---
title: 关于操作ClickHouse服务器和集群的问题
toc_hidden_folder: true
toc_priority: 3
toc_title: Operations
sidebar_position: 3
sidebar_label: Operations
---
# 关于操作ClickHouse服务器和集群的问题 {#question-about-operating-clickhouse-servers-and-clusters}

View File

@ -1,7 +1,7 @@
---
title: ClickHouse支持多区域复制吗?
toc_hidden: true
toc_priority: 30
sidebar_position: 30
---
# ClickHouse支持多区域复制吗? {#does-clickhouse-support-multi-region-replication}

View File

@ -1 +0,0 @@
../../../en/faq/operations/production.md

View File

@ -0,0 +1,71 @@
---
title: Which ClickHouse version to use in production?
toc_hidden: true
sidebar_position: 10
---
# Which ClickHouse Version to Use in Production? {#which-clickhouse-version-to-use-in-production}
First of all, let's discuss why people ask this question in the first place. There are two key reasons:
1. ClickHouse is developed with pretty high velocity, and usually there are 10+ stable releases per year. That makes for a wide range of releases to choose from, which is not so trivial a choice.
2. Some users want to avoid spending time figuring out which version works best for their use case and just follow someone else's advice.
The second reason is more fundamental, so we'll start with it and then get back to navigating through the various ClickHouse releases.
## Which ClickHouse Version Do You Recommend? {#which-clickhouse-version-do-you-recommend}
It's tempting to hire consultants or trust some known experts to get rid of responsibility for your production environment. You install some specific ClickHouse version that someone else recommended; now if there's some issue with it, it's not your fault, it's someone else's. This line of reasoning is a big trap. No external person knows better than you what's going on in your company's production environment.
So how do you properly choose which ClickHouse version to upgrade to? Or how do you choose your first ClickHouse version? First of all, you need to invest in setting up a **realistic pre-production environment**. In an ideal world, it could be a completely identical shadow copy, but that's usually expensive.
Here are some key points for getting reasonable fidelity in a pre-production environment at not-so-high cost:
- The pre-production environment needs to run a set of queries as close as possible to what you intend to run in production:
    - Don't make it read-only with some frozen data.
    - Don't make it write-only, just copying data without building typical reports.
    - Don't wipe it clean instead of applying schema migrations.
- Use a sample of real production data and queries. Try to choose a sample that's still representative and makes `SELECT` queries return reasonable results. Use obfuscation if your data is sensitive and internal policies do not allow it to leave the production environment.
- Make sure that pre-production is covered by your monitoring and alerting software the same way your production environment is.
- If your production spans multiple datacenters or regions, make sure your pre-production does the same.
- If your production uses complex features like replication, distributed tables, or cascading materialized views, make sure they are configured similarly in pre-production.
- There's a trade-off between using roughly the same number of servers or VMs as in production but of smaller size, or far fewer of them but of the same size. The first option might catch extra network-related issues, while the latter is easier to manage.
The second area to invest in is **automated testing infrastructure**. Don't assume that if some kind of query has executed successfully once, it'll continue to do so forever. It's OK to have some unit tests where ClickHouse is mocked, but make sure your product has a reasonable set of automated tests that are run against real ClickHouse and check that all important use cases are still working as expected.
An extra step forward could be contributing those automated tests to [ClickHouse's open-source test infrastructure](https://github.com/ClickHouse/ClickHouse/tree/master/tests), which is continuously used in its day-to-day development. It will definitely take some additional time and effort to learn [how to run it](../../en/development/tests.md) and then how to adapt your tests to this framework, but it'll pay off by ensuring that ClickHouse releases are already tested against them when they are announced stable, instead of you repeatedly losing time reporting an issue after the fact and then waiting for a bugfix to be implemented, backported, and released. Some companies even have contributing such tests to shared infrastructure as an internal policy; most notably, it's called the [Beyoncé Rule](https://www.oreilly.com/library/view/software-engineering-at/9781492082781/ch01.html#policies_that_scale_well) at Google.
When you have your pre-production environment and testing infrastructure in place, choosing the best version is straightforward:
1. Routinely run your automated tests against new ClickHouse releases. You can do it even for ClickHouse releases that are marked as `testing`, but going forward to the next steps with them is not recommended.
2. Deploy the ClickHouse release that passed the tests to pre-production and check that all processes are running as expected.
3. Report any issues you discovered to [ClickHouse GitHub Issues](https://github.com/ClickHouse/ClickHouse/issues).
4. If there were no major issues, it should be safe to start deploying ClickHouse release to your production environment. Investing in gradual release automation that implements an approach similar to [canary releases](https://martinfowler.com/bliki/CanaryRelease.html) or [green-blue deployments](https://martinfowler.com/bliki/BlueGreenDeployment.html) might further reduce the risk of issues in production.
As you might have noticed, there's nothing specific to ClickHouse in the approach described above; people do that for any piece of infrastructure they rely on if they take their production environment seriously.
## How to Choose Between ClickHouse Releases? {#how-to-choose-between-clickhouse-releases}
If you look into the contents of the ClickHouse package repository, you'll see four kinds of packages:
1. `testing`
2. `prestable`
3. `stable`
4. `lts` (long-term support)
As was mentioned earlier, `testing` is good mostly for noticing issues early; running such releases in production is not recommended because they are not tested as thoroughly as the other kinds of packages.
`prestable` is a release candidate that generally looks promising and is likely to be announced as `stable` soon. You can try it out in pre-production and report issues if you see any.
For production use, there are two key options: `stable` and `lts`. Here is some guidance on how to choose between them:
- `stable` is the kind of package we recommend by default. They are released roughly monthly (and thus provide new features with reasonable delay), and the three latest stable releases are supported in terms of diagnostics and backporting of bugfixes.
- `lts` are released twice a year and are supported for a year after their initial release. You might prefer them over `stable` in the following cases:
    - Your company has internal policies that do not allow frequent upgrades or the use of non-LTS software.
    - You are using ClickHouse in some secondary products that either do not require complex ClickHouse features or do not have enough resources allocated to keep them updated.
Many teams that initially thought `lts` was the way to go often switch to `stable` anyway because of some recent feature that's important for their product.
:::warning
One more thing to keep in mind when upgrading ClickHouse: we're always keeping an eye on compatibility across releases, but sometimes it's not reasonable to preserve it, and some minor details might change. So make sure you check the [changelog](../../en/whats-new/changelog/) before upgrading to see if there are any notes about backward-incompatible changes.
:::

View File

@ -1,8 +1,8 @@
---
title: 关于ClickHouse使用案例的问题
toc_hidden_folder: true
toc_priority: 2
toc_title: 使用案例
sidebar_position: 2
sidebar_label: 使用案例
---
# 关于ClickHouse使用案例的问题 {#questions-about-clickhouse-use-cases}

View File

@ -1,7 +1,7 @@
---
title: 我能把 ClickHouse 当做Key-value 键值存储来使用吗?
toc_hidden: true
toc_priority: 101
sidebar_position: 101
---
# 我能把 ClickHouse 当做Key-value 键值存储来使用吗? {#can-i-use-clickhouse-as-a-key-value-storage}

View File

@ -1,7 +1,7 @@
---
title: 我能把 ClickHouse 当做时序数据库来使用吗?
toc_hidden: true
toc_priority: 101
sidebar_position: 101
---
# 我能把 ClickHouse 当做时序数据库来使用吗? {#can-i-use-clickhouse-as-a-time-series-database}

View File

@ -1,6 +1,6 @@
---
toc_priority: 19
toc_title: AMPLab Big Data Benchmark
sidebar_position: 19
sidebar_label: AMPLab Big Data Benchmark
---
# AMPLab Big Data Benchmark {#amplab-big-data-benchmark}

View File

@ -1,6 +1,6 @@
---
toc_priority: 18
toc_title: Terabyte Click Logs from Criteo
sidebar_position: 18
sidebar_label: Terabyte Click Logs from Criteo
---
# Terabyte of Click Logs from Criteo {#criteo-tbji-bie-dian-ji-ri-zhi}

View File

@ -1,6 +1,6 @@
---
toc_priority: 11
toc_title: GitHub 事件数据集
sidebar_position: 11
sidebar_label: GitHub 事件数据集
---
# GitHub 事件数据集

View File

@ -1,7 +1,6 @@
---
toc_folder_title: "\u793A\u4F8B\u6570\u636E\u96C6"
toc_priority: 12
toc_title: "\u5BFC\u8A00"
sidebar_label: 示例数据集
sidebar_position: 12
---
# 示例数据集 {#example-datasets}

View File

@ -1,6 +1,6 @@
---
toc_priority: 15
toc_title: Yandex.Metrica Data
sidebar_position: 15
sidebar_label: Yandex.Metrica Data
---
# Anonymized Yandex.Metrica Data {#anonymized-yandex-metrica-data}

View File

@ -1,6 +1,6 @@
---
toc_priority: 20
toc_title: New York Taxi Data
sidebar_position: 20
sidebar_label: New York Taxi Data
---
# 纽约出租车数据 {#niu-yue-shi-chu-zu-che-shu-ju}

View File

@ -1,6 +1,6 @@
---
toc_priority: 21
toc_title: OnTime
sidebar_position: 21
sidebar_label: OnTime
---
# OnTime {#ontime}

View File

@ -1,6 +1,6 @@
---
toc_priority: 16
toc_title: Star Schema Benchmark
sidebar_position: 16
sidebar_label: Star Schema Benchmark
---
# Star Schema Benchmark {#star-schema-benchmark}

View File

@ -1,6 +1,6 @@
---
toc_priority: 17
toc_title: WikiStat
sidebar_position: 17
sidebar_label: WikiStat
---
# WikiStat {#wikistat}

View File

@ -1,6 +1,6 @@
---
toc_folder_title: 快速上手
toc_priority: 2
sidebar_label: 快速上手
sidebar_position: 2
---
# 入门 {#ru-men}

View File

@ -1,6 +1,6 @@
---
toc_priority: 11
toc_title: 安装部署
sidebar_position: 11
sidebar_label: 安装部署
---
# 安装 {#clickhouse-an-zhuang}
@ -24,15 +24,37 @@ $ grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not
建议使用Debian或Ubuntu的官方预编译`deb`软件包。运行以下命令来安装包:
``` bash
{% include 'install/deb.sh' %}
sudo apt-get install -y apt-transport-https ca-certificates dirmngr
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 8919F6BD2B48D754
echo "deb https://packages.clickhouse.com/deb stable main" | sudo tee \
/etc/apt/sources.list.d/clickhouse.list
sudo apt-get update
sudo apt-get install -y clickhouse-server clickhouse-client
sudo service clickhouse-server start
clickhouse-client # or "clickhouse-client --password" if you've set up a password.
```
<details markdown="1">
<summary>Deprecated Method for installing deb-packages</summary>
``` bash
{% include 'install/deb_repo.sh' %}
sudo apt-get install apt-transport-https ca-certificates dirmngr
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv E0C56BD4
echo "deb https://repo.clickhouse.com/deb/stable/ main/" | sudo tee \
/etc/apt/sources.list.d/clickhouse.list
sudo apt-get update
sudo apt-get install -y clickhouse-server clickhouse-client
sudo service clickhouse-server start
clickhouse-client # or "clickhouse-client --password" if you set up a password.
```
</details>
如果您想使用最新的版本,请用`testing`替代`stable`(我们只推荐您用于测试环境)。
@ -53,15 +75,28 @@ $ grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not
首先,您需要添加官方存储库:
``` bash
{% include 'install/rpm.sh' %}
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://packages.clickhouse.com/rpm/clickhouse.repo
sudo yum install -y clickhouse-server clickhouse-client
sudo /etc/init.d/clickhouse-server start
clickhouse-client # or "clickhouse-client --password" if you set up a password.
```
<details markdown="1">
<summary>Deprecated Method for installing rpm-packages</summary>
``` bash
{% include 'install/rpm_repo.sh' %}
sudo yum install yum-utils
sudo rpm --import https://repo.clickhouse.com/CLICKHOUSE-KEY.GPG
sudo yum-config-manager --add-repo https://repo.clickhouse.com/rpm/clickhouse.repo
sudo yum install clickhouse-server clickhouse-client
sudo /etc/init.d/clickhouse-server start
clickhouse-client # or "clickhouse-client --password" if you set up a password.
```
</details>
如果您想使用最新的版本,请用`testing`替代`stable`(我们只推荐您用于测试环境)。`prestable`有时也可用。
@ -83,15 +118,54 @@ sudo yum install clickhouse-server clickhouse-client
下载后解压缩下载资源文件并使用安装脚本进行安装。以下是一个最新稳定版本的安装示例:
``` bash
{% include 'install/tgz.sh' %}
LATEST_VERSION=$(curl -s https://packages.clickhouse.com/tgz/stable/ | \
grep -Eo '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | sort -V -r | head -n 1)
export LATEST_VERSION
curl -O "https://packages.clickhouse.com/tgz/stable/clickhouse-common-static-$LATEST_VERSION.tgz"
curl -O "https://packages.clickhouse.com/tgz/stable/clickhouse-common-static-dbg-$LATEST_VERSION.tgz"
curl -O "https://packages.clickhouse.com/tgz/stable/clickhouse-server-$LATEST_VERSION.tgz"
curl -O "https://packages.clickhouse.com/tgz/stable/clickhouse-client-$LATEST_VERSION.tgz"
tar -xzvf "clickhouse-common-static-$LATEST_VERSION.tgz"
sudo "clickhouse-common-static-$LATEST_VERSION/install/doinst.sh"
tar -xzvf "clickhouse-common-static-dbg-$LATEST_VERSION.tgz"
sudo "clickhouse-common-static-dbg-$LATEST_VERSION/install/doinst.sh"
tar -xzvf "clickhouse-server-$LATEST_VERSION.tgz"
sudo "clickhouse-server-$LATEST_VERSION/install/doinst.sh"
sudo /etc/init.d/clickhouse-server start
tar -xzvf "clickhouse-client-$LATEST_VERSION.tgz"
sudo "clickhouse-client-$LATEST_VERSION/install/doinst.sh"
```
<details markdown="1">
<summary>Deprecated Method for installing tgz archives</summary>
``` bash
{% include 'install/tgz_repo.sh' %}
export LATEST_VERSION=$(curl -s https://repo.clickhouse.com/tgz/stable/ | \
grep -Eo '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | sort -V -r | head -n 1)
curl -O https://repo.clickhouse.com/tgz/stable/clickhouse-common-static-$LATEST_VERSION.tgz
curl -O https://repo.clickhouse.com/tgz/stable/clickhouse-common-static-dbg-$LATEST_VERSION.tgz
curl -O https://repo.clickhouse.com/tgz/stable/clickhouse-server-$LATEST_VERSION.tgz
curl -O https://repo.clickhouse.com/tgz/stable/clickhouse-client-$LATEST_VERSION.tgz
tar -xzvf clickhouse-common-static-$LATEST_VERSION.tgz
sudo clickhouse-common-static-$LATEST_VERSION/install/doinst.sh
tar -xzvf clickhouse-common-static-dbg-$LATEST_VERSION.tgz
sudo clickhouse-common-static-dbg-$LATEST_VERSION/install/doinst.sh
tar -xzvf clickhouse-server-$LATEST_VERSION.tgz
sudo clickhouse-server-$LATEST_VERSION/install/doinst.sh
sudo /etc/init.d/clickhouse-server start
tar -xzvf clickhouse-client-$LATEST_VERSION.tgz
sudo clickhouse-client-$LATEST_VERSION/install/doinst.sh
```
</details>
对于生产环境,建议使用最新的`stable`版本。你可以在GitHub页面 https://github.com/ClickHouse/ClickHouse/tags 找到它,它以后缀`-stable`标志。

View File

@ -1,6 +1,6 @@
---
toc_priority: 14
toc_title: 体验平台
sidebar_position: 14
sidebar_label: 体验平台
---
# ClickHouse Playground {#clickhouse-playground}

View File

@ -1,6 +1,6 @@
---
toc_priority: 12
toc_title: 使用教程
sidebar_position: 12
sidebar_label: 使用教程
---
# ClickHouse教程 {#clickhouse-tutorial}

View File

@ -1,6 +1,6 @@
---
toc_priority: 41
toc_title: "\u5E94\u7528CatBoost\u6A21\u578B"
sidebar_position: 41
sidebar_label: "\u5E94\u7528CatBoost\u6A21\u578B"
---
# 在ClickHouse中应用Catboost模型 {#applying-catboost-model-in-clickhouse}

View File

@ -1,9 +1,6 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_folder_title: "\u6307\u5357"
toc_priority: 38
toc_title: "\u6982\u8FF0"
sidebar_position: 38
sidebar_label: ClickHouse指南
---
# ClickHouse指南 {#clickhouse-guides}

View File

@ -1,6 +1,6 @@
---
toc_priority: 17
toc_title: 命令行客户端
sidebar_position: 17
sidebar_label: 命令行客户端
---
# 命令行客户端 {#command-line-client}

View File

@ -1,6 +1,6 @@
---
toc_priority: 24
toc_title: C++客户端库
sidebar_position: 24
sidebar_label: C++客户端库
---
# C++客户端库 {#c-client-library}

View File

@ -1,6 +1,6 @@
---
toc_priority: 21
toc_title: 输入/输出格式
sidebar_position: 21
sidebar_label: 输入/输出格式
---
# 输入/输出格式 {#formats}

View File

@ -1,6 +1,6 @@
---
toc_priority: 19
toc_title: HTTP客户端
sidebar_position: 19
sidebar_label: HTTP客户端
---
# HTTP客户端 {#http-interface}

View File

@ -1,7 +1,6 @@
---
toc_folder_title: 接口
toc_priority: 14
toc_title: 客户端
sidebar_label: 接口
sidebar_position: 14
---
# 客户端 {#interfaces}

View File

@ -1,6 +1,6 @@
---
toc_priority: 22
toc_title: JDBC驱动
sidebar_position: 22
sidebar_label: JDBC驱动
---
# JDBC驱动 {#jdbc-driver}

View File

@ -1,6 +1,6 @@
---
toc_priority: 20
toc_title: MySQL接口
sidebar_position: 20
sidebar_label: MySQL接口
---
# MySQL接口 {#mysql-interface}

View File

@ -1,6 +1,6 @@
---
toc_priority: 23
toc_title: ODBC驱动
sidebar_position: 23
sidebar_label: ODBC驱动
---
# ODBC驱动 {#odbc-driver}

View File

@ -1,6 +1,6 @@
---
toc_priority: 18
toc_title: 原生接口(TCP)
sidebar_position: 18
sidebar_label: 原生接口(TCP)
---
# 原生接口(TCP) {#native-interface-tcp}

View File

@ -1,6 +1,6 @@
---
toc_priority: 26
toc_title: 客户端开发库
sidebar_position: 26
sidebar_label: 客户端开发库
---
# 第三方开发库 {#client-libraries-from-third-party-developers}

View File

@ -1,6 +1,6 @@
---
toc_folder_title: 第三方工具
toc_priority: 24
sidebar_label: 第三方工具
sidebar_position: 24
---
# 第三方工具 {#third-party-interfaces}

View File

@ -1,6 +1,6 @@
---
toc_priority: 27
toc_title: 第三方集成库
sidebar_position: 27
sidebar_label: 第三方集成库
---
# 第三方集成库 {#integration-libraries-from-third-party-developers}

Some files were not shown because too many files have changed in this diff.