Resharding
----------

.. code-block:: sql

    ALTER TABLE t RESHARD [COPY] [PARTITION partition] TO cluster description USING sharding key

The query works only for Replicated tables and for Distributed tables that look at Replicated tables.
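
For context, a minimal sketch of such a setup, using the old-style engine parameters (the cluster name ``my_cluster``, the columns, and the ``hits_all`` table name are hypothetical):

.. code-block:: sql

    -- A Replicated table on each node; the replica name comes from macros:
    CREATE TABLE merge.hits
    (
        EventDate Date,
        UserID UInt64,
        URL String
    ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/01-01/hits', '{replica}',
                                   EventDate, (EventDate, UserID), 8192)

    -- A Distributed table looking at the Replicated tables:
    CREATE TABLE merge.hits_all AS merge.hits
    ENGINE = Distributed(my_cluster, merge, hits, intHash32(UserID))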

When executed, the query first checks its own correctness and the sufficiency of free space on the nodes, then writes the task to be performed to a certain path in ZooKeeper. Further work is done asynchronously.

To use resharding, you must specify the ZooKeeper path for the task queue in the configuration file:

.. code-block:: xml

    <resharding>
        <task_queue_path>/clickhouse/task_queue</task_queue_path>
    </resharding>

When an ``ALTER TABLE t RESHARD`` query is run, the node in ZooKeeper is created if it does not exist.

The cluster description is a list of shards together with weights for distributing the data.
A shard is specified as the address of a table in ZooKeeper. Example: /clickhouse/tables/01-03/hits
The relative weight of a shard (optional, 1 by default) can be specified after the WEIGHT keyword.
Example:

.. code-block:: sql

    ALTER TABLE merge.hits
    RESHARD PARTITION 201501
    TO
        '/clickhouse/tables/01-01/hits' WEIGHT 1,
        '/clickhouse/tables/01-02/hits' WEIGHT 2,
        '/clickhouse/tables/01-03/hits' WEIGHT 1,
        '/clickhouse/tables/01-04/hits' WEIGHT 1
    USING UserID

The sharding key (``UserID`` in the example) has the same semantics as for Distributed tables. You can specify ``rand()`` as the sharding key to distribute the data randomly.
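
For instance, a variant of the example above that scatters the partition's rows uniformly at random (reusing the same hypothetical paths):

.. code-block:: sql

    ALTER TABLE merge.hits
    RESHARD PARTITION 201501
    TO
        '/clickhouse/tables/01-01/hits' WEIGHT 1,
        '/clickhouse/tables/01-02/hits' WEIGHT 1
    USING rand()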

When the query is run, it checks:

* the identity of the table structure on all shards;
* the availability of free space on the local node, in the amount of the partition size in bytes, with an additional 10% reserve;
* the availability of free space on all replicas of each specified shard (except the local replica, if it exists), in the amount of the partition size multiplied by the ratio of the shard's weight to the total weight of all shards, with an additional 10% reserve.
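
To illustrate the last check with the weights from the example above (total weight 1 + 2 + 1 + 1 = 5): for a hypothetical 100 GB partition, each non-local replica of the shard with WEIGHT 2 needs roughly 100 × 2/5 × 1.1 = 44 GB free. The same arithmetic as a query:

.. code-block:: sql

    -- Hypothetical numbers: 100 GB partition, shard weight 2, total weight 5, 10% reserve.
    SELECT 100 * (2 / (1 + 2 + 1 + 1)) * 1.1 AS required_free_gb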

Next, the asynchronous processing of the query consists of the following steps:

#. Splitting the partition into parts on the local node.

   All parts forming the partition are merged and, at the same time, split into several new parts according to the sharding key.
   The result is placed in the /reshard directory inside the table data directory.
   The source parts are not modified, and the whole process does not touch the table's working data set.

#. Copying all parts to the remote nodes (to each replica of the corresponding shard).

#. Executing ``ALTER TABLE t DROP PARTITION`` on the local node and ``ALTER TABLE t ATTACH PARTITION`` on all shards.

   Note: this operation is not atomic. There is a moment in time when a user could observe missing data.

   When the ``COPY`` keyword is specified, the source data is not removed. This is suitable for copying data from one cluster to another while changing the sharding scheme at the same time (see the sketch after this list).

#. Removing the temporary data from the local node.
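
As referenced above, a hypothetical sketch of the ``COPY`` form, which copies a partition to the tables of another cluster without removing the source data (the ZooKeeper paths are made up):

.. code-block:: sql

    ALTER TABLE merge.hits
    RESHARD COPY PARTITION 201501
    TO
        '/clickhouse/tables/02-01/hits' WEIGHT 1,
        '/clickhouse/tables/02-02/hits' WEIGHT 1
    USING UserID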

If multiple resharding queries are issued, their tasks are executed sequentially.

The query above reshards a single partition.
If no partition is specified in the query, tasks to reshard all partitions are created. Example:

.. code-block:: sql

    ALTER TABLE merge.hits
    RESHARD
    TO ...

When resharding a Distributed table, each of its shards is resharded (the corresponding query is sent to each shard).

You can reshard a Distributed table to itself or to another table.
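
For example, assuming ``merge.hits_all`` is a Distributed table over the Replicated tables (a hypothetical name from the sketch above), the same query issued against it would be forwarded to every shard:

.. code-block:: sql

    ALTER TABLE merge.hits_all
    RESHARD PARTITION 201501
    TO
        '/clickhouse/tables/01-01/hits' WEIGHT 1,
        '/clickhouse/tables/01-02/hits' WEIGHT 1
    USING UserID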

Resharding is intended for "old" data: if the partition being resharded is modified while the job is running, the task for that partition is cancelled.

On each server, resharding is performed in a single thread, so as not to disturb normal query processing.

As of June 2016, resharding is in a "beta" state: it has only been tested on small data sets, up to 5 TB.