Commit Graph

107257 Commits

Author SHA1 Message Date
bkuschel
dc995f7c67
Fix size, free ctx, formatting 2023-02-04 15:03:04 -05:00
Boris Kuschel
d9f3698a43
Fix copy/pasta 2023-02-04 14:59:47 -05:00
Boris Kuschel
1d4cf4fe69
Add encryption support with openssl 2023-02-04 14:59:34 -05:00
Alexander Gololobov
4ef09ecc0f
Update gtest_azure_sdk.cpp 2023-02-04 19:05:05 +01:00
xiedeyantu
f13eedd644 change settings name 2023-02-04 22:11:14 +08:00
Azat Khuzhin
177c98b6a9 Use "exact" matching for fuzzy search
Right now fuzzy search is too smart for SQL: it even takes the case
into account, which it should not (you don't want to have to type
"SELECT" instead of "select" to find the query).

And to tell the truth, I think overly smart fuzzy searching for SQL
queries is not required, and only does harm.

Exact matching seems like a better algorithm for SQL. It is not 100%
exact: it splits the query by spaces and applies a separate matcher to
each word. Note that if you think a space is not enough as the
delimiter, you should first know that it delimits only the input
query, so to match "system.query_log" you can use "sy qu log" (you can
also disable exact mode by prepending the "'" char).

But it ignores the case by default, which is the behaviour expected
from CaseMatching::Ignore.
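
For illustration only (this is not the skim implementation; the
function name and shape are assumptions), a minimal Rust sketch of
such a word-wise, case-insensitive matcher:

    // Split the input query on spaces and require every token to
    // occur, case-insensitively, somewhere in the candidate line.
    fn exact_match(line: &str, query: &str) -> bool {
        let line_lower = line.to_lowercase();
        query
            .split_whitespace()
            .all(|token| line_lower.contains(&token.to_lowercase()))
    }

    fn main() {
        // "sy qu log" matches "system.query_log": every token is a
        // substring, so spaces delimit only the input query.
        assert!(exact_match("SELECT * FROM system.query_log", "sy qu log"));
        // Case is ignored: "select" finds "SELECT".
        assert!(exact_match("SELECT 1", "select"));
        assert!(!exact_match("SELECT 1", "insert"));
    }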

TL;DR:

Just for the history, I will describe what had been tried.

At first I tried CaseMatching::Ignore - it does not help for the
SkimV1/SkimV2/Clangd matchers.

So I converted the lines from the history and the input query to lower
case. However, this does not work for UPPER CASE, since only the
initial portion of the query had been converted to lower case.

Then I looked into the skim/fuzzy-matcher crates' code for the reason
why CaseMatching::Ignore does not work, and found that there is still
a penalty for case mismatch, but there is no way to pass it in from
user code, so I tried guerrilla to monkey-patch the library's code,
and it works:

    // Avoid penalty for case mismatch (even with CaseMatching::Ignore)
    let _guard = guerrilla::patch0(SkimScoreConfig::default, || {
        let score_match = 16;
        let gap_start = -3;
        let gap_extension = -1;
        let bonus_first_char_multiplier = 2;

        return SkimScoreConfig{
            score_match,
            gap_start,
            gap_extension,
            bonus_first_char_multiplier,
            bonus_head: score_match / 2,
            bonus_break: score_match / 2 + gap_extension,
            bonus_camel: score_match / 2 + 2 * gap_extension,
            bonus_consecutive: -(gap_start + gap_extension),
            // penalty_case_mismatch: gap_extension * 2,
            penalty_case_mismatch: 0,
        };
    });

But this does not sound like trivial code, so I decided to look
around, and realized that "exact" matching should do what is required
for query completion (at least from my point of view).

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-02-04 14:15:02 +01:00
李扬
ad6f39389d
Update tests/performance/column_array_filter.xml
Co-authored-by: Alexander Gololobov <440544+davenger@users.noreply.github.com>
2023-02-04 18:49:13 +08:00
Azat Khuzhin
0a598c79f4 Remove misleading information that sanitizer errors are in the docker.log
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-02-04 11:23:26 +01:00
Azat Khuzhin
88dc85e13e Dump sanitizer errors in the integration tests logs
Previously you needed to download the artifacts (~0.5 GB) or reproduce
the problem.

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-02-04 11:23:26 +01:00
Azat Khuzhin
a3a5867b07 Fix data race in BACKUP
Fixes the following data race:

<details>

WARNING: ThreadSanitizer: data race (pid=1)
  Write of size 8 at 0x7b580016ff20 by thread T218 (mutexes: write M0):
    0 DB::BackupImpl::writeFile() build_docker/../src/Backups/BackupImpl.cpp:1000:9 (clickhouse+0x1bd0b7a6) (BuildId: 3558ba44526114e01870f02cc410103fa6cb8de3)
    1 DB::writeBackupEntries()::$_0::operator()(bool) const build_docker/../src/Backups/BackupUtils.cpp:109:25 (clickhouse+0x1bc19cda) (BuildId: 3558ba44526114e01870f02cc410103fa6cb8de3)

  Previous read of size 8 at 0x7b580016ff20 by thread T238:
    0 DB::BackupImpl::writeFile() build_docker/../src/Backups/BackupImpl.cpp:956:14 (clickhouse+0x1bd0ae8d) (BuildId: 3558ba44526114e01870f02cc410103fa6cb8de3)
    1 DB::writeBackupEntries()::$_0::operator()(bool) const build_docker/../src/Backups/BackupUtils.cpp:109:25 (clickhouse+0x1bc19cda) (BuildId: 3558ba44526114e01870f02cc410103fa6cb8de3)

</details>
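
A data race like this is typically fixed by doing both the read and
the write of the shared state under the same lock. Purely as an
illustration of that pattern (the actual fix is in the C++
BackupImpl::writeFile; the names below are hypothetical), a minimal
Rust sketch:

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Hypothetical stand-in for the shared per-backup state.
        let total_size = Arc::new(Mutex::new(0u64));

        let handles: Vec<_> = (0..4u64)
            .map(|i| {
                let total_size = Arc::clone(&total_size);
                thread::spawn(move || {
                    // Read and update inside a single lock scope, so
                    // no thread observes the value mid-update.
                    let mut guard = total_size.lock().unwrap();
                    *guard += (i + 1) * 100;
                })
            })
            .collect();

        for h in handles {
            h.join().unwrap();
        }
        println!("total = {}", total_size.lock().unwrap());
    }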

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-02-04 11:09:11 +01:00
Han Fei
9ea3de14ce use re2 by default 2023-02-04 10:53:54 +01:00
Alexey Milovidov
496cacf25e
Merge pull request #45985 from ClickHouse/fix-crash-in-regression
Fix crash in stochasticLinearRegression.
2023-02-04 03:01:46 +01:00
Alexey Milovidov
22bd0b6f69
Merge pull request #38983 from CurtizJ/randomize-mt-settings
Allow to randomize merge tree settings in tests
2023-02-04 02:59:52 +01:00
MeenaRenganathan22
722e389847 Updating the submodule to the local branch HDFS_PowerPC 2023-02-03 17:45:48 -08:00
Mikhail f. Shiryaev
0c8c04090c
Add checks for installable packages to workflows 2023-02-03 23:40:56 +01:00
Mikhail f. Shiryaev
cd2e1cfada
Merge pull request #45568 from ClickHouse/keeper-systemd
Add systemd.service for clickhouse-keeper
2023-02-03 23:08:02 +01:00
Anton Popov
e9f80c650a fix memory leak at creation of curl connection in azure sdk 2023-02-03 20:10:39 +00:00
Anton Popov
20ebd38242
Merge branch 'master' into fix-sparse-columns-crash 2023-02-03 21:00:02 +01:00
Dan Roscigno
ed6c884083
Merge pull request #46023 from DanRoscigno/fix-headingsincomparison-docs
fix heading level on comparison page
2023-02-03 14:14:25 -05:00
DanRoscigno
8464357bca fix heading level 2023-02-03 13:57:47 -05:00
vdimir
e175b72d79
Update ru doc for sparkbar function 2023-02-03 17:25:28 +00:00
vdimir
6e0d5e4150
Update doc for sparkbar function 2023-02-03 17:23:10 +00:00
vdimir
18e699f459
Add testcases to 02016_aggregation_spark_bar 2023-02-03 17:22:32 +00:00
vdimir
1e45033531
Update AggregateFunctionSparkbar 2023-02-03 17:22:08 +00:00
Igor Nikonov
f49f5d7091 Update tests
+ update sorting properties after applying aggregation in order
2023-02-03 17:10:31 +00:00
Anton Popov
a394f9c92a check if storage supports subcolumns 2023-02-03 17:05:57 +00:00
Alexander Tokmakov
fa620cc927
Merge pull request #45459 from FrankChen021/stack_trace_in_part_log
Save exception stack trace in part_log
2023-02-03 20:05:35 +03:00
Alexander Tokmakov
3f11948bb0
Merge branch 'master' into stack_trace_in_part_log 2023-02-03 20:05:00 +03:00
Alexander Tokmakov
7e6f7c79f2
Merge pull request #45457 from FrankChen021/exception_time
Add last_exception_time to replication_queue
2023-02-03 20:00:15 +03:00
vdimir
c6e473a66a
Canonize 02016_aggregation_spark_bar 2023-02-03 16:55:57 +00:00
Alexander Tokmakov
e21c29275a
Merge pull request #45937 from ClickHouse/stress_report_add_context2
Better context for stress tests failures
2023-02-03 18:45:30 +03:00
Alexander Tokmakov
352ccfb156
Merge branch 'master' into stress_report_add_context2 2023-02-03 18:44:53 +03:00
Han Fei
061204408a
Merge pull request #45952 from ucasfl/tuple
Fix tupleElement with Null arguments
2023-02-03 16:15:54 +01:00
Anton Popov
88f2068bfb
Merge branch 'master' into fix-sparse-columns-crash 2023-02-03 16:01:11 +01:00
Dan Roscigno
9f7d493850
Merge pull request #46015 from DanRoscigno/lwd
Lwd
2023-02-03 09:57:51 -05:00
Robert Schulze
85cbb9288c
Merge pull request #45456 from FrankChen021/uncaught_exception
Fix uncaught exception in HTTPHandler
2023-02-03 15:26:02 +01:00
Kseniia Sumarokova
0d77f29a99
Update .reference 2023-02-03 15:22:02 +01:00
DanRoscigno
c9244335ef move title to frontmatter 2023-02-03 09:21:30 -05:00
DanRoscigno
7889b632d6 add metadata 2023-02-03 09:16:33 -05:00
DanRoscigno
26a6c5a25b combine guide and reference for lightweight deletes 2023-02-03 09:13:48 -05:00
Nikita Mikhaylov
33877b5e00
Parallel replicas. Part [2] (#43772) 2023-02-03 14:34:18 +01:00
Antonio Andelic
d5117f2aa6
Define S3 client with bucket and endpoint resolution (#45783)
* Update aws

* Define S3 client with bucket and endpoint resolution

* Add defines for ErrorCodes

* Use S3Client everywhere

* Remove unused errorcode

* Add DROP S3 CLIENT CACHE query

* Add a comment

* Fix style

* Update aws

* Update reference files

* Add missing include

* Fix unit test

* Remove unneeded declarations

* Correctly use RetryStrategy

* Rename S3Client to Client

* Fix retry count

* fix clang-tidy warnings
2023-02-03 14:30:52 +01:00
Han Fei
2656027c9f make it work if we don't define use_vectorscan macro 2023-02-03 14:25:53 +01:00
Mikhail f. Shiryaev
c34a09f215
Merge pull request #46012 from ClickHouse/auto/v23.1.3.5-stable
Update version_date.tsv and changelogs after v23.1.3.5-stable
2023-02-03 14:25:35 +01:00
Anton Popov
cdbe145bc1
Merge pull request #45796 from CurtizJ/fix-leak-in-azure-sdk
Fix test `test_azure_blob_storage_zero_copy_replication` (memory leak in azure sdk)
2023-02-03 14:16:19 +01:00
Igor Nikonov
ed00db7580 Update sorting properties after reading in order applied 2023-02-03 13:15:06 +00:00
Vitaly Baranov
45d2d678ab
Merge pull request #45800 from vitlibar/rename-new-columns-in-system-backups
Rename new columns in system.backups
2023-02-03 14:00:16 +01:00
robot-clickhouse
a9ab22e45d Update version_date.tsv and changelogs after v23.1.3.5-stable 2023-02-03 13:00:13 +00:00
Azat Khuzhin
a196f995b1 Fix error message for broken distributed batches ("While sending batch")
There was an error from the beginning: the code does not respect
file_indices and iterates only over file_index_to_path, which is not
correct, since there can be fewer files in the batch than in
file_index_to_path, and that is what file_indices is for.

Note that only the error message was wrong; the logic was fine. You
can verify this in the logs:

    2022.12.07 11:55:50.951976 [ 39217 ] {} <Debug> default.dist.DirectoryMonitor: Sending a batch of 10 files to localhost:9000 (128.42 thousand rows, 36.32 MiB bytes).
    2022.12.07 11:55:50.953762 [ 39217 ] {} <Error> default.dist.DirectoryMonitor: Code: 516. DB::Exception: Received from localhost:9000. DB::Exception: Interserver authentication failed. Stack trace:
    ...
    : While sending batch, nums: 62, files: /work6/clickhouse/data/default/dist/shard1_replica1/66827258.bin

As you can see, it says "Sending a batch of 10 files" but "nums: 62".
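
Purely to illustrate the pattern (the real code is C++ in the
DirectoryMonitor; the helper and paths here are hypothetical), a Rust
sketch where the batch is a subset of the known files, so reporting
must iterate file_indices rather than the whole file_index_to_path:

    use std::collections::HashMap;

    // The batch holds indices into file_index_to_path; a message
    // about the batch must report its files, not every known file.
    fn batch_files(
        file_indices: &[u64],
        file_index_to_path: &HashMap<u64, String>,
    ) -> Vec<String> {
        file_indices
            .iter()
            .filter_map(|idx| file_index_to_path.get(idx).cloned())
            .collect()
    }

    fn main() {
        let mut file_index_to_path = HashMap::new();
        for idx in 0..62u64 {
            file_index_to_path.insert(idx, format!("/data/dist/{idx}.bin"));
        }
        // Only 10 of the 62 known files belong to this batch.
        let file_indices: Vec<u64> = (0..10).collect();
        let files = batch_files(&file_indices, &file_index_to_path);
        assert_eq!(files.len(), 10); // report 10 files, not 62
    }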

Fixes: #23856
Refs: #41813
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-02-03 13:54:40 +01:00
Alexey Milovidov
3e3df376c0
Merge pull request #45995 from CurtizJ/check-dynamic-columns
Check dynamic columns of part before its commit
2023-02-03 15:39:54 +03:00