Merge branch 'master' into temporary-tables-clickhouse-local

Alexey Milovidov 2024-08-18 05:58:33 +02:00
commit 7469c46286
12 changed files with 111 additions and 47 deletions

View File

@@ -8,6 +8,8 @@ endif ()
# when instantiated from JSON.cpp. Try again when libcxx(abi) and Clang are upgraded to 16.
set (CMAKE_CXX_STANDARD 20)
configure_file(GitHash.cpp.in GitHash.generated.cpp)
set (SRCS
argsToConfig.cpp
cgroupsv2.cpp
@@ -33,6 +35,7 @@ set (SRCS
safeExit.cpp
throwError.cpp
Numa.cpp
GitHash.generated.cpp
)
add_library (common ${SRCS})
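For context: configure_file renders GitHash.cpp.in into GitHash.generated.cpp at CMake configure time, and this change compiles the result into the common library (the matching removal from the daemon library appears further down). A minimal sketch of what the generated file could look like after substitution; the symbol name and template contents here are illustrative, not the real ones:

// GitHash.generated.cpp (illustrative) -- configure_file replaced a
// placeholder such as @GIT_HASH@ in GitHash.cpp.in with the hash that
// CMake captured from the working tree at configure time.
#include <string_view>

std::string_view getGitHash()
{
    return "7469c46286"; // baked in when CMake ran, not at build time
}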

View File

@@ -10,7 +10,7 @@ sidebar_label: Visual Interfaces
### ch-ui {#ch-ui}
[ch-ui](https://github.com/caioricciuti/ch-ui) is a simple React.js app interface for ClickHouse databases, designed for executing queries and visualizing data. Built with React and the ClickHouse client for web, it offers a sleek and user-friendly UI for easy database interactions.
[ch-ui](https://github.com/caioricciuti/ch-ui) is a simple React.js app interface for ClickHouse databases designed for executing queries and visualizing data. Built with React and the ClickHouse client for web, it offers a sleek and user-friendly UI for easy database interactions.
Features:
@@ -25,7 +25,7 @@ Web interface for ClickHouse in the [Tabix](https://github.com/tabixio/tabix) project.
Features:
- Works with ClickHouse directly from the browser, without the need to install additional software.
- Works with ClickHouse directly from the browser without the need to install additional software.
- Query editor with syntax highlighting.
- Auto-completion of commands.
- Tools for graphical analysis of query execution.
@@ -63,7 +63,7 @@ Features:
- Table list with filtering and metadata.
- Table preview with filtering and sorting.
- Read-only queries execution.
- Read-only query execution.
### Redash {#redash}
@@ -75,23 +75,23 @@ Features:
- Powerful editor of queries.
- Database explorer.
- Visualization tools, that allow you to represent data in different forms.
- Visualization tool that allows you to represent data in different forms.
### Grafana {#grafana}
[Grafana](https://grafana.com/grafana/plugins/grafana-clickhouse-datasource/) is a platform for monitoring and visualization.
"Grafana allows you to query, visualize, alert on and understand your metrics no matter where they are stored. Create, explore, and share dashboards with your team and foster a data driven culture. Trusted and loved by the community" — grafana.com.
"Grafana allows you to query, visualize, alert on and understand your metrics no matter where they are stored. Create, explore, and share dashboards with your team and foster a data-driven culture. Trusted and loved by the community" — grafana.com.
ClickHouse datasource plugin provides a support for ClickHouse as a backend database.
ClickHouse data source plugin provides support for ClickHouse as a backend database.
### qryn (#qryn)
### qryn {#qryn}
[qryn](https://metrico.in) is a polyglot, high-performance observability stack for ClickHouse _(formerly cLoki)_ with native Grafana integrations allowing users to ingest and analyze logs, metrics and telemetry traces from any agent supporting Loki/LogQL, Prometheus/PromQL, OTLP/Tempo, Elastic, InfluxDB and many more.
Features:
- Built in Explore UI and LogQL CLI for querying, extracting and visualizing data
- Built-in Explore UI and LogQL CLI for querying, extracting and visualizing data
- Native Grafana APIs support for querying, processing, ingesting, tracing and alerting without plugins
- Powerful pipeline to dynamically search, filter and extract data from logs, events, traces and beyond
- Ingestion and PUSH APIs transparently compatible with LogQL, PromQL, InfluxDB, Elastic and many more
@@ -139,7 +139,7 @@ Features:
### DBM {#dbm}
[DBM](https://dbm.incubator.edurt.io/) DBM is a visual management tool for ClickHouse!
[DBM](https://github.com/devlive-community/dbm) DBM is a visual management tool for ClickHouse!
Features:
@@ -151,7 +151,7 @@ Features:
- Support custom query
- Support multiple data sources management(connection test, monitoring)
- Support monitor (processor, connection, query)
- Support migrate data
- Support migrating data
### Bytebase {#bytebase}
@@ -169,7 +169,7 @@ Features:
### Zeppelin-Interpreter-for-ClickHouse {#zeppelin-interpreter-for-clickhouse}
[Zeppelin-Interpreter-for-ClickHouse](https://github.com/SiderZhang/Zeppelin-Interpreter-for-ClickHouse) is a [Zeppelin](https://zeppelin.apache.org) interpreter for ClickHouse. Compared with JDBC interpreter, it can provide better timeout control for long running queries.
[Zeppelin-Interpreter-for-ClickHouse](https://github.com/SiderZhang/Zeppelin-Interpreter-for-ClickHouse) is a [Zeppelin](https://zeppelin.apache.org) interpreter for ClickHouse. Compared with the JDBC interpreter, it can provide better timeout control for long-running queries.
### ClickCat {#clickcat}
@@ -179,7 +179,7 @@ Features:
- An online SQL editor which can run your SQL code without any installing.
- You can observe all processes and mutations. For those unfinished processes, you can kill them in ui.
- The Metrics contains Cluster Analysis,Data Analysis,Query Analysis.
- The Metrics contain Cluster Analysis, Data Analysis, and Query Analysis.
### ClickVisual {#clickvisual}
@@ -332,7 +332,7 @@ Learn more about the product at [TABLUM.IO](https://tablum.io/)
### CKMAN {#ckman}
[CKMAN] (https://www.github.com/housepower/ckman) is a tool for managing and monitoring ClickHouse clusters!
[CKMAN](https://www.github.com/housepower/ckman) is a tool for managing and monitoring ClickHouse clusters!
Features:

View File

@@ -79,6 +79,5 @@ SELECT * FROM json FORMAT JSONEachRow
## Related Content
- [Using JSON in ClickHouse](/docs/en/integrations/data-formats/json)
- [Using JSON in ClickHouse](/en/integrations/data-formats/json/overview)
- [Getting Data Into ClickHouse - Part 2 - A JSON detour](https://clickhouse.com/blog/getting-data-into-clickhouse-part-2-json)
-

View File

@@ -16,39 +16,29 @@
#include <sys/resource.h>
#if defined(OS_LINUX)
#include <sys/prctl.h>
#include <sys/prctl.h>
#endif
#include <cerrno>
#include <cstring>
#include <unistd.h>
#include <algorithm>
#include <typeinfo>
#include <iostream>
#include <fstream>
#include <memory>
#include <base/scope_guard.h>
#include <Poco/Message.h>
#include <Poco/Util/Application.h>
#include <Poco/Exception.h>
#include <Poco/ErrorHandler.h>
#include <Poco/Pipe.h>
#include <Common/ErrorHandlers.h>
#include <Common/SignalHandlers.h>
#include <base/argsToConfig.h>
#include <base/getThreadId.h>
#include <base/coverage.h>
#include <base/sleep.h>
#include <IO/WriteBufferFromFileDescriptorDiscardOnFailure.h>
#include <IO/ReadBufferFromFileDescriptor.h>
#include <IO/ReadHelpers.h>
#include <IO/WriteHelpers.h>
#include <Common/Exception.h>
#include <Common/PipeFDs.h>
#include <Common/StackTrace.h>
#include <Common/getMultipleKeysFromConfig.h>
#include <Common/ClickHouseRevision.h>
#include <Common/Config/ConfigProcessor.h>
@@ -459,13 +449,7 @@ void BaseDaemon::initializeTerminationAndSignalProcessing()
signal_listener_thread.start(*signal_listener);
#if defined(__ELF__) && !defined(OS_FREEBSD)
String build_id_hex = SymbolIndex::instance().getBuildIDHex();
if (build_id_hex.empty())
build_id = "";
else
build_id = build_id_hex;
#else
build_id = "";
build_id = SymbolIndex::instance().getBuildIDHex();
#endif
git_hash = GIT_HASH;
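The replaced branch was a no-op: when build_id_hex is empty, assigning it to build_id already yields an empty string, so the unconditional assignment is equivalent. A standalone sketch of the equivalence:

#include <cassert>
#include <string>

int main()
{
    for (std::string build_id_hex : {std::string{}, std::string{"abc123"}})
    {
        std::string old_form = build_id_hex.empty() ? std::string{} : build_id_hex;
        std::string new_form = build_id_hex;
        assert(old_form == new_form); // identical for empty and non-empty input
    }
}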

View File

@@ -1,10 +1,7 @@
configure_file(GitHash.cpp.in GitHash.generated.cpp)
add_library (daemon
BaseDaemon.cpp
GraphiteWriter.cpp
SentryWriter.cpp
GitHash.generated.cpp
)
target_link_libraries (daemon PUBLIC loggers common PRIVATE clickhouse_parsers clickhouse_common_io clickhouse_common_config)

View File

@@ -14,6 +14,8 @@ namespace ErrorCodes
{
extern const int LOGICAL_ERROR;
extern const int CANNOT_GET_CREATE_TABLE_QUERY;
extern const int BAD_ARGUMENTS;
extern const int UNKNOWN_TABLE;
}
DatabasesOverlay::DatabasesOverlay(const String & name_, ContextPtr context_)
@@ -124,6 +126,39 @@ StoragePtr DatabasesOverlay::detachTable(ContextPtr context_, const String & tab
getEngineName());
}
void DatabasesOverlay::renameTable(
ContextPtr current_context,
const String & name,
IDatabase & to_database,
const String & to_name,
bool exchange,
bool dictionary)
{
for (auto & db : databases)
{
if (db->isTableExist(name, current_context))
{
if (DatabasesOverlay * to_overlay_database = typeid_cast<DatabasesOverlay *>(&to_database))
{
/// Renaming from Overlay database inside itself or into another Overlay database.
/// Just use the first database in the overlay as a destination.
if (to_overlay_database->databases.empty())
throw Exception(ErrorCodes::BAD_ARGUMENTS, "The destination Overlay database {} does not have any members", to_database.getDatabaseName());
db->renameTable(current_context, name, *to_overlay_database->databases[0], to_name, exchange, dictionary);
}
else
{
/// Renaming into a different type of database. E.g. from Overlay on top of Atomic database into just Atomic database.
db->renameTable(current_context, name, to_database, to_name, exchange, dictionary);
}
return;
}
}
throw Exception(ErrorCodes::UNKNOWN_TABLE, "Table {}.{} doesn't exist", backQuote(getDatabaseName()), backQuote(name));
}
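The branch on typeid_cast is the crux of this function: it yields nullptr when the destination is not itself an overlay, so such renames fall through to the destination's own renameTable. A reduced, self-contained sketch of that dispatch, with illustrative stand-ins for the real types (dynamic_cast stands in for typeid_cast):

#include <iostream>
#include <memory>
#include <vector>

struct IDatabase { virtual ~IDatabase() = default; };

struct DatabasesOverlay : IDatabase
{
    // The first member database receives tables renamed into the overlay.
    std::vector<std::shared_ptr<IDatabase>> databases;
};

void renameInto(IDatabase & to_database)
{
    // nullptr means "not an overlay", so use the generic rename path.
    if (auto * overlay = dynamic_cast<DatabasesOverlay *>(&to_database))
        std::cout << "rename into first of " << overlay->databases.size() << " overlay members\n";
    else
        std::cout << "rename directly into the destination database\n";
}

int main()
{
    DatabasesOverlay overlay;
    overlay.databases.push_back(std::make_shared<IDatabase>());
    renameInto(overlay);     // overlay path: delegate to databases[0]
    IDatabase atomic_like;
    renameInto(atomic_like); // generic path, e.g. a plain Atomic database
}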
ASTPtr DatabasesOverlay::getCreateTableQueryImpl(const String & name, ContextPtr context_, bool throw_on_error) const
{
ASTPtr result = nullptr;
@@ -178,6 +213,18 @@ String DatabasesOverlay::getTableDataPath(const ASTCreateQuery & query) const
return result;
}
UUID DatabasesOverlay::getUUID() const
{
UUID result = UUIDHelpers::Nil;
for (const auto & db : databases)
{
result = db->getUUID();
if (result != UUIDHelpers::Nil)
break;
}
return result;
}
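getUUID scans the overlay members in order and returns the first non-Nil UUID, falling back to Nil when no member has one. The same first-match scan in a standalone sketch, where a zeroed value stands in for UUIDHelpers::Nil:

#include <array>
#include <cassert>
#include <cstdint>
#include <vector>

using UUID = std::array<std::uint64_t, 2>;
constexpr UUID Nil{}; // stand-in for UUIDHelpers::Nil

UUID firstNonNil(const std::vector<UUID> & members)
{
    for (const auto & uuid : members)
        if (uuid != Nil)
            return uuid; // earliest member with a real UUID wins
    return Nil;
}

int main()
{
    const UUID real{42, 7};
    const std::vector<UUID> members{Nil, real};
    assert((firstNonNil(members) == real));
    assert((firstNonNil({}) == Nil));
}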
UUID DatabasesOverlay::tryGetTableUUID(const String & table_name) const
{
UUID result = UUIDHelpers::Nil;

View File

@@ -35,12 +35,21 @@ public:
StoragePtr detachTable(ContextPtr context, const String & table_name) override;
void renameTable(
ContextPtr current_context,
const String & name,
IDatabase & to_database,
const String & to_name,
bool exchange,
bool dictionary) override;
ASTPtr getCreateTableQueryImpl(const String & name, ContextPtr context, bool throw_on_error) const override;
ASTPtr getCreateDatabaseQuery() const override;
String getTableDataPath(const String & table_name) const override;
String getTableDataPath(const ASTCreateQuery & query) const override;
UUID getUUID() const override;
UUID tryGetTableUUID(const String & table_name) const override;
void drop(ContextPtr context) override;

View File

@@ -27,7 +27,6 @@ class ASTQueryWithTableAndOutput;
class ASTTableIdentifier;
class Context;
// TODO(ilezhankin): refactor and merge |ASTTableIdentifier|
struct StorageID
{
String database_name;

View File

@@ -1,5 +1,20 @@
-- Tags: no-fasttest
#!/usr/bin/env bash
# Tags: no-fasttest
CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
# shellcheck source=../shell_config.sh
. "$CURDIR"/../shell_config.sh
session_log_query_prefix="
system flush logs;
select distinct type, user, auth_type, toString(client_address)!='::ffff:0.0.0.0' as a, client_port!=0 as b, interface from system.session_log
where user in ('default', 'nonexistsnt_user_1119', ' ', ' INTERSERVER SECRET ')
and interface in ('HTTP', 'TCP', 'TCP_Interserver')
and (user != 'default' or (a=1 and b=1)) -- FIXME: we should not write uninitialized address and port (but we do sometimes)
and event_time >= now() - interval 5 minute"
$CLICKHOUSE_CLIENT -nm -q "
select * from remote('127.0.0.2', system, one, 'default', '');
select * from remote('127.0.0.2', system, one, 'default', 'wrong password'); -- { serverError AUTHENTICATION_FAILED }
select * from remote('127.0.0.2', system, one, 'nonexistsnt_user_1119', ''); -- { serverError AUTHENTICATION_FAILED }
@@ -16,9 +31,17 @@ select * from url('http://127.0.0.1:8123/?query=select+1&user=+++', LineAsString
select * from cluster('test_cluster_interserver_secret', system, one);
system flush logs;
select distinct type, user, auth_type, toString(client_address)!='::ffff:0.0.0.0' as a, client_port!=0 as b, interface from system.session_log
where user in ('default', 'nonexistsnt_user_1119', ' ', ' INTERSERVER SECRET ')
and interface in ('HTTP', 'TCP', 'TCP_Interserver')
and (user != 'default' or (a=1 and b=1)) -- FIXME: we should not write uninitialized address and port (but we do sometimes)
and event_time >= now() - interval 5 minute order by type, user, interface;
$session_log_query_prefix and type != 'Logout' order by type, user, interface;
"
# Wait for logout events.
for _ in {1..10}
do
if [ "`$CLICKHOUSE_CLIENT -q "$session_log_query_prefix and type = 'Logout'" | wc -l`" -eq 3 ]
then
break
fi
sleep 2
done
$CLICKHOUSE_CLIENT -q "$session_log_query_prefix and type = 'Logout' order by user, interface"
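The loop above is a bounded poll: up to 10 attempts, two seconds apart, exiting early once the three expected Logout rows appear (logout records land in system.session_log asynchronously, after the session closes). The same shape as a standalone sketch:

#include <chrono>
#include <functional>
#include <iostream>
#include <thread>

// Mirrors the bash retry loop: re-check `ready` up to `attempts` times,
// sleeping `pause` between tries; true as soon as the condition holds.
bool waitFor(const std::function<bool()> & ready, int attempts, std::chrono::seconds pause)
{
    for (int i = 0; i < attempts; ++i)
    {
        if (ready())
            return true; // condition met, stop early like `break` in the test
        std::this_thread::sleep_for(pause);
    }
    return false; // attempts exhausted; the test then queries whatever is there
}

int main()
{
    int polls = 0;
    bool ok = waitFor([&] { return ++polls == 3; }, 10, std::chrono::seconds(0));
    std::cout << (ok ? "ready after " : "gave up after ") << polls << " polls\n";
}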

View File

@@ -15,6 +15,7 @@ fi
# TCP CLIENT: As of today (02/12/21) uses PullingAsyncPipelineExecutor
### Should be cancelled after 1 second and return a 159 exception (timeout)
### However, in the test, the server can be overloaded, so we assert query duration in the interval of 1 to 60 seconds.
query_id=$(random_str 12)
$CLICKHOUSE_CLIENT --query_id "$query_id" --max_execution_time 1 -q "
SELECT * FROM
@@ -33,7 +34,7 @@ $CLICKHOUSE_CLIENT --query_id "$query_id" --max_execution_time 1 -q "
FORMAT Null
" 2>&1 | grep -m1 -o "Code: 159"
$CLICKHOUSE_CLIENT -q "system flush logs"
${CLICKHOUSE_CURL} -q -sS "$CLICKHOUSE_URL" -d "select 'query_duration', round(query_duration_ms/1000) from system.query_log where current_database = '$CLICKHOUSE_DATABASE' and query_id = '$query_id' and type != 'QueryStart'"
${CLICKHOUSE_CURL} -q -sS "$CLICKHOUSE_URL" -d "select 'query_duration', round(query_duration_ms/1000) BETWEEN 1 AND 60 from system.query_log where current_database = '$CLICKHOUSE_DATABASE' and query_id = '$query_id' and type != 'QueryStart'"
### Should stop pulling data and return what has been generated already (return code 0)
@@ -52,7 +53,7 @@ $CLICKHOUSE_CLIENT --query_id "$query_id" -q "
"
echo $?
$CLICKHOUSE_CLIENT -q "system flush logs"
${CLICKHOUSE_CURL} -q -sS "$CLICKHOUSE_URL" -d "select 'query_duration', round(query_duration_ms/1000) from system.query_log where current_database = '$CLICKHOUSE_DATABASE' and query_id = '$query_id' and type != 'QueryStart'"
${CLICKHOUSE_CURL} -q -sS "$CLICKHOUSE_URL" -d "select 'query_duration', round(query_duration_ms/1000) BETWEEN 1 AND 60 from system.query_log where current_database = '$CLICKHOUSE_DATABASE' and query_id = '$query_id' and type != 'QueryStart'"
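Both checks switch from printing the raw rounded duration, which is inherently flaky, to printing whether it falls in the 1 to 60 second window the comment at the top of the test promises. The predicate, restated standalone:

#include <cassert>
#include <cmath>

// Mirrors round(query_duration_ms / 1000) BETWEEN 1 AND 60 from the test.
bool durationInExpectedRange(double query_duration_ms)
{
    const double seconds = std::round(query_duration_ms / 1000.0);
    return seconds >= 1.0 && seconds <= 60.0;
}

int main()
{
    assert(durationInExpectedRange(1200));   // ~1 s: query cancelled on time
    assert(!durationInExpectedRange(90000)); // 90 s: server pathologically slow
}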
# HTTP CLIENT: As of today (02/12/21) uses PullingPipelineExecutor

View File

@@ -1,7 +1,9 @@
#!/usr/bin/env bash
# shellcheck disable=SC2266
CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
# shellcheck source=../shell_config.sh
. "$CUR_DIR"/../shell_config.sh
$CLICKHOUSE_CLIENT -q "SELECT count(*) FROM numbers(10) AS a, numbers(11) AS b, numbers(12) AS c SETTINGS max_block_size = 0" 2>&1 | grep -q "Sanity check: 'max_block_size' cannot be 0. Set to default value" && echo "OK" || echo "FAIL"
$CLICKHOUSE_CLIENT -q "SELECT count(*) FROM numbers(10) AS a, numbers(11) AS b, numbers(12) AS c SETTINGS max_block_size = 0" 2>&1 |
[ "$(grep -c "Sanity check: 'max_block_size' cannot be 0. Set to default value")" -gt 0 ] && echo "OK" || echo "FAIL"