Commit Graph

123904 Commits

Author SHA1 Message Date
Bharat Nallan Chakravarthy
63c3f400a3 fix fast test 2023-08-30 22:48:44 -07:00
Bharat Nallan Chakravarthy
f6d1f6ce0a fix style check 2023-08-30 22:09:56 -07:00
Bharat Nallan Chakravarthy
e281a78950 add basic tests 2023-08-30 21:57:43 -07:00
Bharat Nallan Chakravarthy
7889626046 add docs 2023-08-30 21:42:51 -07:00
Bharat Nallan Chakravarthy
61a6316164 add script to generate hash function 2023-08-30 21:30:33 -07:00
Alexey Gerasimchuck
3a212217a3 Implemented globs to select * from '<file>' 2023-08-31 04:20:44 +00:00
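For illustration, a minimal sketch of the kind of query this enables in clickhouse-local, assuming hypothetical file names; the exact glob patterns supported are defined by the change itself:

    -- Hypothetical files data_1.csv, data_2.csv, ... read via the shorthand table syntax;
    -- the glob in the quoted path selects all of them at once.
    SELECT count()
    FROM 'data_*.csv';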
robot-ch-test-poll
10898ee96f
Merge pull request #54092 from arenadata/ADQM-1080
Increased log waiting timeout in test_profile_max_sessions_for_user
2023-08-31 06:04:51 +02:00
Tiakon
8c21bd8342
Update ClusterCopierApp.cpp
Shows that the type of task-upload-force is Boolean.
2023-08-31 10:42:45 +08:00
Tiakon
0dcc0be377
Update clickhouse-copier.md
Shows that the type of task-upload-force is Boolean.
2023-08-31 10:38:19 +08:00
Bharat Nallan Chakravarthy
6e1e625230 few fixes to code 2023-08-30 19:10:03 -07:00
Bharat Nallan Chakravarthy
fd1be0a601 initial implementation 2023-08-30 18:41:04 -07:00
Alexey Milovidov
9382dd90ee
Merge pull request #54094 from ClickHouse/changelog-23.8
Changelog for 23.8
2023-08-31 04:03:25 +03:00
Alexey Milovidov
f6cac3c45c Changelog for 23.8 2023-08-31 01:46:46 +02:00
ltrk2
0b2a32b0ba Use iterators instead of std::ranges
For loops for fun factor
2023-08-30 16:36:02 -07:00
Alexey Gerasimchuck
3de967ecbe Increased log waiting timeout 2023-08-30 22:05:38 +00:00
robot-clickhouse-ci-2
1cf0952d28
Merge pull request #54089 from ClickHouse/docs-mysql-interface-cloud-private-preview
Clarify that the cloud MySQL interface is under private preview
2023-08-30 22:47:18 +02:00
Anton Popov
0387556a34
Merge pull request #53914 from Chen768959/fix-53543-2
'from' is supported as an Expression and fixes #53543
2023-08-30 22:25:08 +02:00
Mikhail f. Shiryaev
ab3e9df57f
Address review points 2023-08-30 22:06:32 +02:00
Mikhail f. Shiryaev
e553242957
Get rid of CLICKHOUSE_CI_LOGS_* secrets 2023-08-30 22:06:31 +02:00
Mikhail f. Shiryaev
6fb0257ee0
Use CiLogsCredentials in build_check 2023-08-30 22:06:30 +02:00
Mikhail f. Shiryaev
e113ba9024
Add vim filetype for *.lib files 2023-08-30 22:06:26 +02:00
Mikhail f. Shiryaev
f0c18d4bd7
Rework setup of CI logs export 2023-08-30 22:04:37 +02:00
Mikhail f. Shiryaev
a683d5e2f3
Replace default assignment by default usage 2023-08-30 22:04:34 +02:00
Antonio Andelic
0148e15aee
Merge pull request #53880 from ClickHouse/archive-improvements-2
Improve schema inference for archives
2023-08-30 21:11:34 +02:00
Justin de Guzman
f8ca303b25
Clarify that the cloud MySQL interface is under private preview 2023-08-30 11:22:47 -07:00
Azat Khuzhin
d0397acafc Add ability to override credentials for accessing base backup in S3
Sometimes the credentials with which the backup had been made are already
inactive, so ClickHouse will not be able to read the metadata file to
continue and will fail.

Add a setting that allows ignoring the credentials from base_backup:
`use_same_s3_credentials_for_base_backup` (defaults to true).

And the same for RESTORE.

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-08-30 20:16:22 +02:00
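A minimal sketch of how the setting could be used, assuming the documented `BACKUP ... SETTINGS base_backup = S3(...)` syntax; the bucket URLs and key pairs below are placeholders:

    -- Incremental backup whose base backup must be read with different credentials,
    -- so the new setting is turned off and the base backup gets its own key pair.
    BACKUP TABLE db.events
        TO S3('https://bucket.s3.amazonaws.com/backups/incr', 'NEW_KEY_ID', 'NEW_SECRET')
        SETTINGS base_backup = S3('https://bucket.s3.amazonaws.com/backups/base', 'OLD_KEY_ID', 'OLD_SECRET'),
                 use_same_s3_credentials_for_base_backup = 0;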
Mikhail f. Shiryaev
b32345242c
Merge pull request #54081 from ClickHouse/apache-archive
Replace dlcdn.apache.org by archive domain
2023-08-30 19:43:43 +02:00
Mikhail f. Shiryaev
c111adb7ce
Replace dlcdn.apache.org by archive domain 2023-08-30 18:40:36 +02:00
Alexander Tokmakov
5cfd1c6d63
Merge pull request #54066 from kssenii/fix-named-collections-access-type-change
Fix named_collection_admin alias
2023-08-30 18:34:03 +02:00
ltrk2
b6d4b5598b Fix SipHash128 reference for big-endian platforms 2023-08-30 08:15:09 -07:00
vdimir
dbdcea30a0
fix style, add built-in documentation 2023-08-30 15:06:35 +00:00
robot-ch-test-poll4
aa3b687d4a
Merge pull request #54064 from kssenii/fix-logical-error
Fix after #52943
2023-08-30 17:06:25 +02:00
Roman Vasin
92b89e8b9e Use NodePtr instead of Node * 2023-08-30 13:50:10 +00:00
vdimir
dd094d1f55
Parse IS NOT DISTINCT and <=> operators 2023-08-30 13:12:43 +00:00
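For context, a sketch of the null-safe comparison these operators express; the table and column names are made up:

    -- Unlike plain `=`, both forms treat two NULLs as equal,
    -- so rows with NULL keys still match each other.
    SELECT * FROM t1 JOIN t2 ON t1.key IS NOT DISTINCT FROM t2.key;
    SELECT * FROM t1 JOIN t2 ON t1.key <=> t2.key;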
Alexander Tokmakov
4d70624ca3
Merge pull request #53907 from arenadata/ADQM-1126
Added validate_tcp_client_information server setting
2023-08-30 15:10:13 +02:00
kssenii
185e3819ac Fix 2023-08-30 13:41:18 +02:00
kssenii
662f22aed0 Fix 2023-08-30 13:31:54 +02:00
Mikhail Koviazin
021c607725
ConfigReloader: use last_write_time for FileWithTimestamp
Previously it used `FS::getModificationTime`, which has only one-second precision.
This caused an issue in cases where the configuration was changed more than
once per second and the change wasn't propagated. This commit fixes that issue.
Closes #53276
2023-08-30 11:06:19 +00:00
Alexander Tokmakov
55af6e5c3e
Merge pull request #54063 from ClickHouse/revert-53677-mutations_subcolumns
Revert "Fix bug on mutations with subcolumns of type JSON in predicates of UPDATE and DELETE queries."
2023-08-30 12:59:51 +02:00
Alexander Tokmakov
83c5e2fba6
Revert "Fix bug on mutations with subcolumns of type JSON in predicates of UPDATE and DELETE queries." 2023-08-30 12:56:17 +02:00
robot-ch-test-poll2
3e5790451e
Merge pull request #54056 from lucasfcnunes/patch-1
fix typo on s3queue.md
2023-08-30 12:53:17 +02:00
Jiebin Sun
7c529e5691
Optimize the merge if all hashSets are singleLevel in UniqExactSet (#52973)
* Optimize the merge if all hashSets are singleLevel

In PR https://github.com/ClickHouse/ClickHouse/pull/50748, a new phase
`parallelizeMergePrepare` was added before the merge for the case where the
hashSets are neither all singleLevel nor all twoLevel. It converts all the
singleLevel sets to twoLevel sets in parallel, which increases CPU utilization
and QPS.

But if all the hash tables are singleLevel, they can also benefit from the
`parallelizeMergePrepare` optimization in most cases, provided the hash tables
are not too small. By tuning the query `SELECT COUNT(DISTINCT SearchPhase) FROM hits_v1`
with different thread counts, we arrived at a threshold of 6,000.

We tested the patch with the query 'SELECT COUNT(DISTINCT Title) FROM hits_v1' on a
2x80 vCPU server. With fewer than 48 threads, the hashSets are all twoLevel or a mix
of singleLevel and twoLevel; with more than 56 threads, all the hashSets are
singleLevel, and QPS gains up to 2.35x.

Threads	Opt/Base
8	100.0%
16	99.4%
24	110.3%
32	99.9%
40	99.3%
48	99.8%
56	183.0%
64	234.7%
72	233.1%
80	229.9%
88	224.5%
96	229.6%
104	235.1%
112	229.5%
120	229.1%
128	217.8%
136	222.9%
144	217.8%
152	204.3%
160	203.2%

Signed-off-by: Jiebin Sun <jiebin.sun@intel.com>

* Add the comment and explanation for PR#52973

Signed-off-by: Jiebin Sun <jiebin.sun@intel.com>

---------

Signed-off-by: Jiebin Sun <jiebin.sun@intel.com>
2023-08-30 11:26:16 +02:00
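As a hedged illustration, the affected aggregate can also be targeted directly, assuming the benchmark's COUNT(DISTINCT ...) maps to uniqExact, whose per-thread hash sets are what the merge combines; the thread count below is only indicative of the high-concurrency case:

    -- Force a high thread count so each thread builds its own (likely singleLevel) hash set;
    -- merging those per-thread sets is where the optimization applies.
    SELECT uniqExact(Title)
    FROM hits_v1
    SETTINGS max_threads = 64;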
Antonio Andelic
f406019413 Apply PR comments 2023-08-30 09:26:01 +00:00
Raúl Marín
4f9ddcab2b Correct doc for filesystem_prefetch_max_memory_usage 2023-08-30 10:27:29 +02:00
Antonio Andelic
ddb58217d4 Merge branch 'master' into archive-improvements-2 2023-08-30 07:43:25 +00:00
Antonio Andelic
36fb7cfbd1
Merge pull request #54012 from ClickHouse/refactor-async-insert-with-dedup
Refactor logic around async insert with deduplication
2023-08-30 09:37:55 +02:00
Lucas Fernando Cardoso Nunes
6bb406fda9
fix typo on s3queue.md
Signed-off-by: Lucas Fernando Cardoso Nunes <lucasfc.nunes@gmail.com>
2023-08-30 04:30:34 -03:00
Alexey Gerasimchuck
f7d1041e61 minor improvement 2023-08-30 06:14:39 +00:00
Azat Khuzhin
aaa68a525a Add no-replicated-database for 02870_move_partition_to_volume_io_throttling
ALTER TABLE MOVE PARTITION TO DISK/VOLUME should not be replicated since
replicas will not contain this part:

    azat:~/ch/tmp/53338$ for i in clickhouse-server*.log.zst; do echo -n "$i: " && zstd -cdq $i | grep -m1 -e Executed.*ALTER.*test_cnmf4xnb.test_move_partition_throttling -e Exception.*ALTER.*test_cnmf4xnb.test_move_partition_throttling; done
    clickhouse-server1.log.zst: 2023.08.29 16:46:53.960065 [ 1843 ] {19b13d67-54c0-496d-96b9-b1ec09df3618} <Error> executeQuery: Code: 232. DB::Exception: Nothing to move (check that the partition exists). (NO_SUCH_DATA_PART) (version 23.8.1.2862) (from 0.0.0.0:0) (comment: 02870_move_partition_to_volume_io_throttling.sql) (in query: /* ddl_entry=query-0000000005 */ ALTER TABLE test_cnmf4xnb.test_move_partition_throttling MOVE PARTITION tuple() TO VOLUME 'remote' SETTINGS max_remote_write_network_bandwidth = 1600000), Stack trace (when copying this message, always include the lines below):
    clickhouse-server2.log.zst: 2023.08.29 16:46:53.959560 [ 1842 ] {3cd2b5e8-24b9-4cfd-aa47-854e634936f2} <Error> executeQuery: Code: 232. DB::Exception: Nothing to move (check that the partition exists). (NO_SUCH_DATA_PART) (version 23.8.1.2862) (from 0.0.0.0:0) (comment: 02870_move_partition_to_volume_io_throttling.sql) (in query: /* ddl_entry=query-0000000005 */ ALTER TABLE test_cnmf4xnb.test_move_partition_throttling MOVE PARTITION tuple() TO VOLUME 'remote' SETTINGS max_remote_write_network_bandwidth = 1600000), Stack trace (when copying this message, always include the lines below):
    clickhouse-server.log.zst: 2023.08.29 16:46:53.950730 [ 721 ] {df6b20ee-1903-404a-9398-7fead209ccd7} <Debug> DDLWorker(test_cnmf4xnb): Executed query: /* ddl_entry=query-0000000005 */ ALTER TABLE test_cnmf4xnb.test_move_partition_throttling MOVE PARTITION tuple() TO VOLUME 'remote' SETTINGS max_remote_write_network_bandwidth = 1600000

CI: https://s3.amazonaws.com/clickhouse-test-reports/53338/ed401cba9b8254b4a29a5ed9e5ad838c26ffaac1/stateless_tests__release__databasereplicated__[4_4].html
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-08-30 07:27:11 +02:00
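As a sketch, stateless tests opt out of the Replicated-database run with a tag comment at the top of the .sql file; the exact header of this test lives in the PR, and the table name below is shortened:

    -- Tags: no-replicated-database
    -- The statement is taken from the log excerpt above; under DatabaseReplicated it would be
    -- forwarded to replicas that do not have the part and fail with NO_SUCH_DATA_PART.
    ALTER TABLE test_move_partition_throttling
        MOVE PARTITION tuple() TO VOLUME 'remote'
        SETTINGS max_remote_write_network_bandwidth = 1600000;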
Alexey Gerasimchuck
be2f80cc1c minor corrections 2023-08-29 23:59:36 +00:00