Mirror of https://github.com/ClickHouse/ClickHouse.git
Commit 2fb95d9ee0
Before this patch it was not possible to optimize a simple SELECT * FROM dist ORDER BY (without GROUP BY and DISTINCT) to the more optimal processing stage (QueryProcessingStage::WithMergeableStateAfterAggregationAndLimit), since that code was guarded by allow_nondeterministic_optimize_skip_unused_shards. Rework it and make the optimization possible. Also, distributed_push_down_limit is now respected for optimize_distributed_group_by_sharding_key. The next step will be to enable distributed_push_down_limit by default.

v2: fix detection of aggregates
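As a rough, hedged illustration (not code from the patch; the Distributed table 'dist' and the column 'key' are placeholder names), this is the shape of query that can now reach the WithMergeableStateAfterAggregationAndLimit stage when the LIMIT push-down is enabled:

    -- Sketch only: 'dist' is assumed to be a Distributed table, 'key' one of its columns.
    -- With distributed_push_down_limit = 1 the LIMIT can be applied on each shard
    -- before the initiator merges the partially sorted streams.
    SELECT *
    FROM dist
    ORDER BY key
    LIMIT 10
    SETTINGS distributed_push_down_limit = 1;

This shape has no GROUP BY or DISTINCT, which is exactly the case the commit message describes.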
26 lines · 166 B · Plaintext
distributed_push_down_limit=0
100 100
distributed_push_down_limit=1
0
1
2
3
4
5
6
7
8
9
40 40
distributed_push_down_limit=1 with OFFSET
97
96
96
95
95
94
94
93
93
92
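The reference output above is grouped under three section headers that name the setting being exercised. A heavily hedged sketch of queries in that spirit follows; 'dist', 'key', and the LIMIT/OFFSET values are hypothetical and not taken from the actual test:

    -- Sketch only: 'dist' and 'key' are placeholders; values are illustrative.
    -- Without push-down the LIMIT is applied on the initiator after merging.
    SELECT key FROM dist ORDER BY key LIMIT 10
    SETTINGS distributed_push_down_limit = 0;

    -- LIMIT pushed down to the shards.
    SELECT key FROM dist ORDER BY key LIMIT 10
    SETTINGS distributed_push_down_limit = 1;

    -- LIMIT with OFFSET pushed down to the shards.
    SELECT key FROM dist ORDER BY key DESC LIMIT 10 OFFSET 1
    SETTINGS distributed_push_down_limit = 1;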