Merge pull request #50150 from JackyWoo/support_redis: Add Redis table Engine/Function (commit 91d794cf0a)
@@ -53,6 +53,7 @@ Engines in the family:

- [JDBC](../../engines/table-engines/integrations/jdbc.md)
- [MySQL](../../engines/table-engines/integrations/mysql.md)
- [MongoDB](../../engines/table-engines/integrations/mongodb.md)
- [Redis](../../engines/table-engines/integrations/redis.md)
- [HDFS](../../engines/table-engines/integrations/hdfs.md)
- [S3](../../engines/table-engines/integrations/s3.md)
- [Kafka](../../engines/table-engines/integrations/kafka.md)
@@ -7,4 +7,3 @@ sidebar_label: Integrations

# Table Engines for Integrations

ClickHouse provides various means for integrating with external systems, including table engines. As with all other table engines, the configuration is done using `CREATE TABLE` or `ALTER TABLE` queries. From a user perspective, the configured integration then looks like a normal table, but queries to it are proxied to the external system. This transparent querying is one of the key advantages of this approach over alternative integration methods, like dictionaries or table functions, which require custom query methods on each use.
docs/en/engines/table-engines/integrations/redis.md (new file, 119 lines)
@@ -0,0 +1,119 @@
---
slug: /en/engines/table-engines/integrations/redis
sidebar_position: 43
sidebar_label: Redis
---

# Redis

This engine allows integrating ClickHouse with [Redis](https://redis.io/). Since Redis uses a key-value model, we strongly recommend querying it only in a point-wise way, such as `WHERE k = xx` or `WHERE k IN (xx, xx)`.

## Creating a Table {#creating-a-table}

``` sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name
(
    name1 [type1],
    name2 [type2],
    ...
) ENGINE = Redis(host:port[, db_index[, password[, pool_size]]]) PRIMARY KEY(primary_key_name);
```

**Engine Parameters**

- `host:port` — Redis server address. The port can be omitted, in which case the default Redis port 6379 is used.
- `db_index` — Redis database index, from 0 to 15. Default is 0.
- `password` — User password. Default is an empty string.
- `pool_size` — Maximum size of the Redis connection pool. Default is 16.
- `primary_key_name` — Any column name from the column list.

- The primary key must be specified, and it supports only one column. The primary key is serialized in binary as the Redis key.
- Columns other than the primary key are serialized in binary, in order, as the Redis value.
- Queries that filter the key with equality or `IN` are optimized into a multi-key lookup in Redis. Queries without a filter on the key trigger a full table scan, which is a heavy operation.
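For illustration, a minimal sketch of the two access patterns (assuming the `redis_table` defined in the usage example below):

``` sql
-- Point lookups: pushed down to a multi-key lookup in Redis (fast).
SELECT * FROM redis_table WHERE k IN ('1', '2');

-- No filter on the key column: requires a full scan (heavy).
SELECT * FROM redis_table WHERE n > 10;
```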
## Usage Example {#usage-example}

Create a table in ClickHouse which allows reading data from Redis:

``` sql
CREATE TABLE redis_table
(
    `k` String,
    `m` String,
    `n` UInt32
)
ENGINE = Redis('redis1:6379') PRIMARY KEY(k);
```

Insert:

```sql
INSERT INTO redis_table Values('1', '1', 1), ('2', '2', 2);
```

Query:

``` sql
SELECT COUNT(*) FROM redis_table;
```

``` text
┌─count()─┐
│       2 │
└─────────┘
```

``` sql
SELECT * FROM redis_table WHERE k='1';
```

```text
┌─k─┬─m─┬─n─┐
│ 1 │ 1 │ 1 │
└───┴───┴───┘
```

``` sql
SELECT * FROM redis_table WHERE n=2;
```

```text
┌─k─┬─m─┬─n─┐
│ 2 │ 2 │ 2 │
└───┴───┴───┘
```
Update:

Note that the primary key cannot be updated.

```sql
ALTER TABLE redis_table UPDATE n=2 WHERE k='1';
```

Delete:

```sql
ALTER TABLE redis_table DELETE WHERE k='1';
```

Truncate:

Flushes the Redis database asynchronously by default. `TRUNCATE` also supports SYNC mode:

```sql
TRUNCATE TABLE redis_table SYNC;
```
## Limitations {#limitations}

The Redis engine also supports scanning queries, such as `WHERE k > xx`, but with some limitations:
1. A scanning query may produce duplicated keys in the very rare case that Redis is rehashing. See details in [Redis Scan](https://github.com/redis/redis/blob/e4d183afd33e0b2e6e8d1c79a832f678a04a7886/src/dict.c#L1186-L1269).
2. Keys can be created and deleted during the scan, so the resulting dataset may not represent a valid point in time.
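For example, a scanning query of the kind described above (a sketch against the `redis_table` from the usage example; expect it to read the whole keyspace):

``` sql
SELECT * FROM redis_table WHERE k > '1';
```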
docs/en/sql-reference/table-functions/redis.md (new file, 67 lines)
@@ -0,0 +1,67 @@
---
slug: /en/sql-reference/table-functions/redis
sidebar_position: 10
sidebar_label: Redis
---

# Redis

This table function allows integrating ClickHouse with [Redis](https://redis.io/).

**Syntax**

```sql
redis(host:port, key, structure[, db_index[, password[, pool_size]]])
```

**Arguments**

- `host:port` — Redis server address. The port can be omitted, in which case the default Redis port 6379 is used.
- `key` — Any column name from the column list to use as the Redis key.
- `structure` — The schema of the ClickHouse table returned by this function.
- `db_index` — Redis database index, from 0 to 15. Default is 0.
- `password` — User password. Default is an empty string.
- `pool_size` — Maximum size of the Redis connection pool. Default is 16.

- The key must be specified, and it supports only one column. The key is serialized in binary as the Redis key.
- Columns other than the key are serialized in binary, in order, as the Redis value.
- Queries that filter the key with equality or `IN` are optimized into a multi-key lookup in Redis. Queries without a filter on the key trigger a full table scan, which is a heavy operation.
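For instance, a hedged sketch of a call passing all optional arguments (the database index, password, and pool size values here are placeholders):

```sql
SELECT * FROM redis(
    'redis1:6379',
    'k',
    'k String, m String, n UInt32',
    0,        -- db_index
    '',       -- password
    16        -- pool_size
)
```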
**Returned Value**

A table object in which the key corresponds to the Redis key and the other columns are packaged together as the Redis value.

## Usage Example {#usage-example}

Create a table in ClickHouse which allows reading data from Redis:

``` sql
CREATE TABLE redis_table
(
    `k` String,
    `m` String,
    `n` UInt32
)
ENGINE = Redis('redis1:6379') PRIMARY KEY(k);
```

```sql
SELECT * FROM redis(
    'redis1:6379',
    'k',
    'k String, m String, n UInt32'
)
```
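Assuming the two rows inserted in the table-engine example above are present, the call would return something like:

```text
┌─k─┬─m─┬─n─┐
│ 1 │ 1 │ 1 │
│ 2 │ 2 │ 2 │
└───┴───┴───┘
```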
**See Also**

- [The `Redis` table engine](/docs/en/engines/table-engines/integrations/redis.md)
- [Using redis as a dictionary source](/docs/en/sql-reference/dictionaries/index.md#redis)
@@ -201,6 +201,7 @@ enum class AccessType
     M(URL, "", GLOBAL, SOURCES) \
     M(REMOTE, "", GLOBAL, SOURCES) \
     M(MONGO, "", GLOBAL, SOURCES) \
+    M(REDIS, "", GLOBAL, SOURCES) \
     M(MEILISEARCH, "", GLOBAL, SOURCES) \
     M(MYSQL, "", GLOBAL, SOURCES) \
     M(POSTGRES, "", GLOBAL, SOURCES) \
@@ -413,6 +413,7 @@ dbms_target_link_libraries (
         boost::system
         clickhouse_common_io
         Poco::MongoDB
+        Poco::Redis
 )

 if (TARGET ch::mysqlxx)
@@ -580,6 +580,8 @@
     M(695, ASYNC_LOAD_FAILED) \
     M(696, ASYNC_LOAD_CANCELED) \
     M(697, CANNOT_RESTORE_TO_NONENCRYPTED_DISK) \
+    M(698, INVALID_REDIS_STORAGE_TYPE) \
+    M(699, INVALID_REDIS_TABLE_STRUCTURE) \
     \
     M(999, KEEPER_EXCEPTION) \
     M(1000, POCO_EXCEPTION) \
@@ -3,10 +3,6 @@
 #include "DictionaryStructure.h"
 #include "registerDictionaries.h"

-#include <Poco/Redis/Array.h>
-#include <Poco/Redis/Client.h>
-#include <Poco/Redis/Command.h>
-#include <Poco/Redis/Type.h>
 #include <Poco/Util/AbstractConfiguration.h>
 #include <Interpreters/Context.h>
 #include <QueryPipeline/QueryPipeline.h>
@@ -21,19 +17,7 @@ namespace DB
 {
     extern const int UNSUPPORTED_METHOD;
-    extern const int INVALID_CONFIG_PARAMETER;
-    extern const int INTERNAL_REDIS_ERROR;
     extern const int LOGICAL_ERROR;
-    extern const int TIMEOUT_EXCEEDED;
 }

-    static RedisStorageType parseStorageType(const String & storage_type_str)
-    {
-        if (storage_type_str == "hash_map")
-            return RedisStorageType::HASH_MAP;
-        else if (!storage_type_str.empty() && storage_type_str != "simple")
-            throw Exception(ErrorCodes::INVALID_CONFIG_PARAMETER, "Unknown storage type {} for Redis dictionary", storage_type_str);
-
-        return RedisStorageType::SIMPLE;
-    }

     void registerDictionarySourceRedis(DictionarySourceFactory & factory)
@@ -52,14 +36,14 @@ namespace DB
         auto port = config.getUInt(redis_config_prefix + ".port");
         global_context->getRemoteHostFilter().checkHostAndPort(host, toString(port));

-        RedisDictionarySource::Configuration configuration =
+        RedisConfiguration configuration =
         {
             .host = host,
             .port = static_cast<UInt16>(port),
-            .db_index = config.getUInt(redis_config_prefix + ".db_index", 0),
-            .password = config.getString(redis_config_prefix + ".password", ""),
+            .db_index = config.getUInt(redis_config_prefix + ".db_index", DEFAULT_REDIS_DB_INDEX),
+            .password = config.getString(redis_config_prefix + ".password", DEFAULT_REDIS_PASSWORD),
             .storage_type = parseStorageType(config.getString(redis_config_prefix + ".storage_type", "")),
-            .pool_size = config.getUInt(redis_config_prefix + ".pool_size", 16),
+            .pool_size = config.getUInt(redis_config_prefix + ".pool_size", DEFAULT_REDIS_POOL_SIZE),
         };

         return std::make_unique<RedisDictionarySource>(dict_struct, configuration, sample_block);
@@ -68,26 +52,13 @@ namespace DB
     factory.registerSource("redis", create_table_source);
 }

-    RedisDictionarySource::Connection::Connection(PoolPtr pool_, ClientPtr client_)
-        : pool(std::move(pool_)), client(std::move(client_))
-    {
-    }
-
-    RedisDictionarySource::Connection::~Connection()
-    {
-        pool->returnObject(std::move(client));
-    }
-
-    static constexpr size_t REDIS_MAX_BLOCK_SIZE = DEFAULT_BLOCK_SIZE;
-    static constexpr size_t REDIS_LOCK_ACQUIRE_TIMEOUT_MS = 5000;
-
     RedisDictionarySource::RedisDictionarySource(
         const DictionaryStructure & dict_struct_,
-        const Configuration & configuration_,
+        const RedisConfiguration & configuration_,
         const Block & sample_block_)
         : dict_struct{dict_struct_}
         , configuration(configuration_)
-        , pool(std::make_shared<Pool>(configuration.pool_size))
+        , pool(std::make_shared<RedisPool>(configuration.pool_size))
         , sample_block{sample_block_}
     {
         if (dict_struct.attributes.size() != 1)
@@ -122,24 +93,9 @@ namespace DB

     RedisDictionarySource::~RedisDictionarySource() = default;

-    static String storageTypeToKeyType(RedisStorageType type)
-    {
-        switch (type)
-        {
-            case RedisStorageType::SIMPLE:
-                return "string";
-            case RedisStorageType::HASH_MAP:
-                return "hash";
-            default:
-                return "none";
-        }
-
-        UNREACHABLE();
-    }
-
     QueryPipeline RedisDictionarySource::loadAll()
     {
-        auto connection = getConnection();
+        auto connection = getRedisConnection(pool, configuration);

         RedisCommand command_for_keys("KEYS");
         command_for_keys << "*";
@@ -159,33 +115,7 @@ namespace DB

         if (configuration.storage_type == RedisStorageType::HASH_MAP)
         {
-            RedisArray hkeys;
-            for (const auto & key : keys)
-            {
-                RedisCommand command_for_secondary_keys("HKEYS");
-                command_for_secondary_keys.addRedisType(key);
-
-                auto secondary_keys = connection->client->execute<RedisArray>(command_for_secondary_keys);
-
-                RedisArray primary_with_secondary;
-                primary_with_secondary.addRedisType(key);
-                for (const auto & secondary_key : secondary_keys)
-                {
-                    primary_with_secondary.addRedisType(secondary_key);
-                    /// Do not store more than max_block_size values for one request.
-                    if (primary_with_secondary.size() == REDIS_MAX_BLOCK_SIZE + 1)
-                    {
-                        hkeys.add(primary_with_secondary);
-                        primary_with_secondary.clear();
-                        primary_with_secondary.addRedisType(key);
-                    }
-                }
-
-                if (primary_with_secondary.size() > 1)
-                    hkeys.add(primary_with_secondary);
-            }
-
-            keys = hkeys;
+            keys = *getRedisHashMapKeys(connection, keys);
         }

         return QueryPipeline(std::make_shared<RedisSource>(
@@ -195,7 +125,7 @@ namespace DB

     QueryPipeline RedisDictionarySource::loadIds(const std::vector<UInt64> & ids)
     {
-        auto connection = getConnection();
+        auto connection = getRedisConnection(pool, configuration);

         if (configuration.storage_type == RedisStorageType::HASH_MAP)
             throw Exception(ErrorCodes::UNSUPPORTED_METHOD, "Cannot use loadIds with 'hash_map' storage type");
@@ -215,7 +145,7 @@ namespace DB

     QueryPipeline RedisDictionarySource::loadKeys(const Columns & key_columns, const std::vector<size_t> & requested_rows)
     {
-        auto connection = getConnection();
+        auto connection = getRedisConnection(pool, configuration);

         if (key_columns.size() != dict_struct.key->size())
             throw Exception(ErrorCodes::LOGICAL_ERROR, "The size of key_columns does not equal to the size of dictionary key");
@@ -248,55 +178,4 @@ namespace DB
         return "Redis: " + configuration.host + ':' + DB::toString(configuration.port);
     }

-    RedisDictionarySource::ConnectionPtr RedisDictionarySource::getConnection() const
-    {
-        ClientPtr client;
-        bool ok = pool->tryBorrowObject(client,
-            [] { return std::make_unique<Poco::Redis::Client>(); },
-            REDIS_LOCK_ACQUIRE_TIMEOUT_MS);
-
-        if (!ok)
-            throw Exception(ErrorCodes::TIMEOUT_EXCEEDED,
-                "Could not get connection from pool, timeout exceeded {} seconds",
-                REDIS_LOCK_ACQUIRE_TIMEOUT_MS);
-
-        if (!client->isConnected())
-        {
-            try
-            {
-                client->connect(configuration.host, configuration.port);
-
-                if (!configuration.password.empty())
-                {
-                    RedisCommand command("AUTH");
-                    command << configuration.password;
-                    String reply = client->execute<String>(command);
-                    if (reply != "OK")
-                        throw Exception(ErrorCodes::INTERNAL_REDIS_ERROR,
-                            "Authentication failed with reason {}", reply);
-                }
-
-                if (configuration.db_index != 0)
-                {
-                    RedisCommand command("SELECT");
-                    command << std::to_string(configuration.db_index);
-                    String reply = client->execute<String>(command);
-                    if (reply != "OK")
-                        throw Exception(ErrorCodes::INTERNAL_REDIS_ERROR,
-                            "Selecting database with index {} failed with reason {}",
-                            configuration.db_index, reply);
-                }
-            }
-            catch (...)
-            {
-                if (client->isConnected())
-                    client->disconnect();
-
-                pool->returnObject(std::move(client));
-                throw;
-            }
-        }
-
-        return std::make_unique<Connection>(pool, std::move(client));
-    }
 }
@@ -5,16 +5,7 @@

 #include "DictionaryStructure.h"
 #include "IDictionarySource.h"
-
-namespace Poco
-{
-    namespace Redis
-    {
-        class Client;
-        class Array;
-        class Command;
-    }
-}
+#include <Storages/RedisCommon.h>

 namespace DB
 {
@@ -23,47 +14,12 @@ namespace DB
         extern const int NOT_IMPLEMENTED;
     }

-    enum class RedisStorageType
-    {
-        SIMPLE,
-        HASH_MAP,
-        UNKNOWN
-    };
-
     class RedisDictionarySource final : public IDictionarySource
     {
     public:
-        using RedisArray = Poco::Redis::Array;
-        using RedisCommand = Poco::Redis::Command;
-
-        using ClientPtr = std::unique_ptr<Poco::Redis::Client>;
-        using Pool = BorrowedObjectPool<ClientPtr>;
-        using PoolPtr = std::shared_ptr<Pool>;
-
-        struct Configuration
-        {
-            const std::string host;
-            const UInt16 port;
-            const UInt32 db_index;
-            const std::string password;
-            const RedisStorageType storage_type;
-            const size_t pool_size;
-        };
-
-        struct Connection
-        {
-            Connection(PoolPtr pool_, ClientPtr client_);
-            ~Connection();
-
-            PoolPtr pool;
-            ClientPtr client;
-        };
-
-        using ConnectionPtr = std::unique_ptr<Connection>;
-
         RedisDictionarySource(
             const DictionaryStructure & dict_struct_,
-            const Configuration & configuration_,
+            const RedisConfiguration & configuration_,
             const Block & sample_block_);

         RedisDictionarySource(const RedisDictionarySource & other);
@@ -92,12 +48,10 @@ namespace DB
         std::string toString() const override;

     private:
-        ConnectionPtr getConnection() const;
-
         const DictionaryStructure dict_struct;
-        const Configuration configuration;
+        const RedisConfiguration configuration;

-        PoolPtr pool;
+        RedisPoolPtr pool;
         Block sample_block;
     };
 }
@@ -1,20 +1,12 @@
 #include "RedisSource.h"

 #include <string>
 #include <vector>

-#include <Poco/Redis/Array.h>
-#include <Poco/Redis/Client.h>
-#include <Poco/Redis/Command.h>
-#include <Poco/Redis/Type.h>
-
 #include <Columns/ColumnNullable.h>
 #include <Columns/ColumnString.h>
 #include <Columns/ColumnsNumber.h>
 #include <IO/ReadHelpers.h>
 #include <IO/WriteHelpers.h>
-
-#include "DictionaryStructure.h"
+#include <IO/ReadBufferFromString.h>

 namespace DB
@@ -30,7 +22,7 @@ namespace DB

     RedisSource::RedisSource(
-        ConnectionPtr connection_,
+        RedisConnectionPtr connection_,
         const RedisArray & keys_,
         const RedisStorageType & storage_type_,
         const DB::Block & sample_block,
@@ -6,15 +6,7 @@
 #include <Processors/ISource.h>
-#include <Poco/Redis/Array.h>
-#include <Poco/Redis/Type.h>
-#include "RedisDictionarySource.h"
-
-namespace Poco
-{
-    namespace Redis
-    {
-        class Client;
-    }
-}
+#include <Storages/RedisCommon.h>


 namespace DB
@@ -22,15 +14,11 @@ namespace DB
     class RedisSource final : public ISource
     {
     public:
-        using RedisArray = Poco::Redis::Array;
-        using RedisBulkString = Poco::Redis::BulkString;
-        using ConnectionPtr = RedisDictionarySource::ConnectionPtr;
-
         RedisSource(
-            ConnectionPtr connection_,
-            const Poco::Redis::Array & keys_,
+            RedisConnectionPtr connection_,
+            const RedisArray & keys_,
             const RedisStorageType & storage_type_,
-            const Block & sample_block,
+            const DB::Block & sample_block,
             size_t max_block_size);

         ~RedisSource() override;
@@ -40,7 +28,7 @@ namespace DB
     private:
         Chunk generate() override;

-        ConnectionPtr connection;
+        RedisConnectionPtr connection;
         Poco::Redis::Array keys;
         RedisStorageType storage_type;
         const size_t max_block_size;
@@ -175,7 +175,7 @@ std::vector<std::string> serializeKeysToRawString(const ColumnWithTypeAndName &
     return result;
 }

-/// In current implementation rocks db can have key with only one column.
+/// In current implementation rocks db/redis can have key with only one column.
 size_t getPrimaryKeyPos(const Block & header, const Names & primary_key)
 {
     if (primary_key.size() != 1)
@@ -36,6 +36,10 @@ struct MongoDBEqualKeysSet
     static constexpr std::array<std::pair<std::string_view, std::string_view>, 4> equal_keys{
         std::pair{"username", "user"}, std::pair{"database", "db"}, std::pair{"hostname", "host"}, std::pair{"table", "collection"}};
 };
+struct RedisEqualKeysSet
+{
+    static constexpr std::array<std::pair<std::string_view, std::string_view>, 4> equal_keys{std::pair{"hostname", "host"}};
+};

 template <typename EqualKeys> struct NamedCollectionValidateKey
 {
src/Storages/RedisCommon.cpp (new file, 150 lines)
@@ -0,0 +1,150 @@
#include "RedisCommon.h"
#include <Common/Exception.h>
#include <Common/parseAddress.h>
#include <Interpreters/evaluateConstantExpression.h>

namespace DB
{

namespace ErrorCodes
{
    extern const int INTERNAL_REDIS_ERROR;
    extern const int TIMEOUT_EXCEEDED;
    extern const int INVALID_REDIS_STORAGE_TYPE;
}

RedisConnection::RedisConnection(RedisPoolPtr pool_, RedisClientPtr client_)
    : pool(std::move(pool_)), client(std::move(client_))
{
}

RedisConnection::~RedisConnection()
{
    pool->returnObject(std::move(client));
}

String storageTypeToKeyType(RedisStorageType type)
{
    switch (type)
    {
        case RedisStorageType::SIMPLE:
            return "string";
        case RedisStorageType::HASH_MAP:
            return "hash";
        default:
            return "none";
    }

    UNREACHABLE();
}

String serializeStorageType(RedisStorageType storage_type)
{
    switch (storage_type)
    {
        case RedisStorageType::SIMPLE:
            return "simple";
        case RedisStorageType::HASH_MAP:
            return "hash_map";
        default:
            return "none";
    }
}

RedisStorageType parseStorageType(const String & storage_type_str)
{
    if (storage_type_str == "hash_map")
        return RedisStorageType::HASH_MAP;
    else if (!storage_type_str.empty() && storage_type_str != "simple")
        throw Exception(ErrorCodes::INVALID_REDIS_STORAGE_TYPE, "Unknown storage type {} for Redis dictionary", storage_type_str);

    return RedisStorageType::SIMPLE;
}

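/// Borrow a client from the pool and lazily establish the session:
/// a freshly created or previously disconnected client is connected,
/// authenticated (AUTH) and switched to the configured database (SELECT) on first use.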
RedisConnectionPtr getRedisConnection(RedisPoolPtr pool, const RedisConfiguration & configuration)
{
    RedisClientPtr client;
    bool ok = pool->tryBorrowObject(client,
        [] { return std::make_unique<Poco::Redis::Client>(); },
        REDIS_LOCK_ACQUIRE_TIMEOUT_MS);

    if (!ok)
        throw Exception(ErrorCodes::TIMEOUT_EXCEEDED,
            "Could not get connection from pool, timeout exceeded {} seconds",
            REDIS_LOCK_ACQUIRE_TIMEOUT_MS);

    if (!client->isConnected())
    {
        try
        {
            client->connect(configuration.host, configuration.port);

            if (!configuration.password.empty())
            {
                RedisCommand command("AUTH");
                command << configuration.password;
                String reply = client->execute<String>(command);
                if (reply != "OK")
                    throw Exception(ErrorCodes::INTERNAL_REDIS_ERROR,
                        "Authentication failed with reason {}", reply);
            }

            if (configuration.db_index != 0)
            {
                RedisCommand command("SELECT");
                command << std::to_string(configuration.db_index);
                String reply = client->execute<String>(command);
                if (reply != "OK")
                    throw Exception(ErrorCodes::INTERNAL_REDIS_ERROR,
                        "Selecting database with index {} failed with reason {}",
                        configuration.db_index, reply);
            }
        }
        catch (...)
        {
            if (client->isConnected())
                client->disconnect();

            pool->returnObject(std::move(client));
            throw;
        }
    }

    return std::make_unique<RedisConnection>(pool, std::move(client));
}

RedisArrayPtr getRedisHashMapKeys(const RedisConnectionPtr & connection, RedisArray & keys)
{
    RedisArrayPtr hkeys = std::make_shared<RedisArray>();
    for (const auto & key : keys)
    {
        RedisCommand command_for_secondary_keys("HKEYS");
        command_for_secondary_keys.addRedisType(key);

        auto secondary_keys = connection->client->execute<RedisArray>(command_for_secondary_keys);
        if (secondary_keys.isNull())
            continue;

        RedisArray primary_with_secondary;
        primary_with_secondary.addRedisType(key);
        for (const auto & secondary_key : secondary_keys)
        {
            primary_with_secondary.addRedisType(secondary_key);
            /// Do not store more than max_block_size values for one request.
            if (primary_with_secondary.size() == REDIS_MAX_BLOCK_SIZE + 1)
            {
                hkeys->add(primary_with_secondary);
                primary_with_secondary.clear();
                primary_with_secondary.addRedisType(key);
            }
        }

        if (primary_with_secondary.size() > 1)
            hkeys->add(primary_with_secondary);
    }

    return hkeys;
}

}
src/Storages/RedisCommon.h (new file, 78 lines)
@@ -0,0 +1,78 @@
#pragma once

#include <Poco/Redis/Client.h>
#include <Poco/Redis/Command.h>
#include <Poco/Redis/Array.h>
#include <Poco/Types.h>

#include <Core/Defines.h>
#include <base/BorrowedObjectPool.h>
#include <Core/Names.h>
#include <Storages/ColumnsDescription.h>
#include <Storages/StorageFactory.h>

namespace DB
{
static constexpr size_t REDIS_MAX_BLOCK_SIZE = DEFAULT_BLOCK_SIZE;
static constexpr size_t REDIS_LOCK_ACQUIRE_TIMEOUT_MS = 5000;

enum class RedisStorageType
{
    SIMPLE,
    HASH_MAP,
    UNKNOWN
};


/// storage type to Redis key type
String storageTypeToKeyType(RedisStorageType type);

RedisStorageType parseStorageType(const String & storage_type_str);
String serializeStorageType(RedisStorageType storage_type);

struct RedisConfiguration
{
    String host;
    uint32_t port;
    uint32_t db_index;
    String password;
    RedisStorageType storage_type;
    uint32_t pool_size;
};

static uint32_t DEFAULT_REDIS_DB_INDEX = 0;
static uint32_t DEFAULT_REDIS_POOL_SIZE = 16;
static String DEFAULT_REDIS_PASSWORD;

using RedisCommand = Poco::Redis::Command;
using RedisArray = Poco::Redis::Array;
using RedisArrayPtr = std::shared_ptr<RedisArray>;
using RedisBulkString = Poco::Redis::BulkString;
using RedisSimpleString = String;
using RedisInteger = Poco::Int64;

using RedisClientPtr = std::unique_ptr<Poco::Redis::Client>;
using RedisPool = BorrowedObjectPool<RedisClientPtr>;
using RedisPoolPtr = std::shared_ptr<RedisPool>;

/// Redis scan iterator
using RedisIterator = int64_t;

struct RedisConnection
{
    RedisConnection(RedisPoolPtr pool_, RedisClientPtr client_);
    ~RedisConnection();

    RedisPoolPtr pool;
    RedisClientPtr client;
};

using RedisConnectionPtr = std::unique_ptr<RedisConnection>;

RedisConnectionPtr getRedisConnection(RedisPoolPtr pool, const RedisConfiguration & configuration);

/// Get all Redis hash key arrays,
/// e.g. keys -> [key1, key2] gives [[key1, field1, field2], [key2, field1, field2]]
RedisArrayPtr getRedisHashMapKeys(const RedisConnectionPtr & connection, RedisArray & keys);

}
@@ -2,6 +2,7 @@

 #include <Common/NamePrompter.h>
 #include <Parsers/IAST_fwd.h>
+#include <Parsers/ASTCreateQuery.h>
 #include <Storages/ColumnsDescription.h>
 #include <Storages/ConstraintsDescription.h>
 #include <Storages/IStorage_fwd.h>
@@ -14,8 +15,6 @@ namespace DB
 {

 class Context;
-class ASTCreateQuery;
 class ASTStorage;
 struct StorageID;
src/Storages/StorageRedis.cpp (new file, 586 lines)
@@ -0,0 +1,586 @@
#include <unordered_set>
#include <IO/WriteHelpers.h>
#include <Interpreters/MutationsInterpreter.h>
#include <Interpreters/evaluateConstantExpression.h>
#include <Parsers/ASTCreateQuery.h>
#include <Parsers/ASTDropQuery.h>
#include <Parsers/ASTLiteral.h>
#include <Processors/Executors/PullingPipelineExecutor.h>
#include <Processors/Sinks/SinkToStorage.h>
#include <QueryPipeline/Pipe.h>
#include <QueryPipeline/QueryPipelineBuilder.h>

#include <Storages/KVStorageUtils.h>
#include <Storages/KeyDescription.h>
#include <Storages/NamedCollectionsHelpers.h>
#include <Storages/StorageFactory.h>
#include <Storages/StorageInMemoryMetadata.h>
#include <Storages/StorageRedis.h>
#include <Storages/checkAndGetLiteralArgument.h>

#include <Common/Exception.h>
#include <Common/checkStackSize.h>
#include <Common/logger_useful.h>
#include <Common/parseAddress.h>

namespace DB
{

namespace ErrorCodes
{
    extern const int BAD_ARGUMENTS;
    extern const int LOGICAL_ERROR;
    extern const int INTERNAL_REDIS_ERROR;
}

class RedisDataSource : public ISource
{
public:
    RedisDataSource(
        StorageRedis & storage_,
        const Block & header,
        FieldVectorPtr keys_,
        FieldVector::const_iterator begin_,
        FieldVector::const_iterator end_,
        const size_t max_block_size_)
        : ISource(header)
        , storage(storage_)
        , primary_key_pos(getPrimaryKeyPos(header, storage.getPrimaryKey()))
        , keys(keys_)
        , begin(begin_)
        , end(end_)
        , it(begin)
        , max_block_size(max_block_size_)
    {
    }

    RedisDataSource(StorageRedis & storage_, const Block & header, const size_t max_block_size_, const String & pattern_ = "*")
        : ISource(header)
        , storage(storage_)
        , primary_key_pos(getPrimaryKeyPos(header, storage.getPrimaryKey()))
        , iterator(-1)
        , pattern(pattern_)
        , max_block_size(max_block_size_)
    {
    }

    String getName() const override { return storage.getName(); }

    Chunk generate() override
    {
        if (keys)
            return generateWithKeys();
        return generateFullScan();
    }

    Chunk generateWithKeys()
    {
        const auto & sample_block = getPort().getHeader();
        if (it >= end)
        {
            it = {};
            return {};
        }

        const auto & key_column_type = sample_block.getByName(storage.getPrimaryKey().at(0)).type;
        auto raw_keys = serializeKeysToRawString(it, end, key_column_type, max_block_size);

        return storage.getBySerializedKeys(raw_keys, nullptr);
    }

    /// TODO scan may get duplicated keys when Redis is rehashing, it is a very rare case.
    Chunk generateFullScan()
    {
        checkStackSize();

        /// redis scan ending
        if (iterator == 0)
            return {};

        RedisArray scan_keys;
        RedisIterator next_iterator;

        std::tie(next_iterator, scan_keys) = storage.scan(iterator == -1 ? 0 : iterator, pattern, max_block_size);
        iterator = next_iterator;

        /// redis scan can return nothing
        if (scan_keys.isNull() || scan_keys.size() == 0)
            return generateFullScan();

        const auto & sample_block = getPort().getHeader();
        MutableColumns columns = sample_block.cloneEmptyColumns();

        RedisArray values = storage.multiGet(scan_keys);
        for (size_t i = 0; i < scan_keys.size() && !values.get<RedisBulkString>(i).isNull(); i++)
        {
            fillColumns(scan_keys.get<RedisBulkString>(i).value(),
                values.get<RedisBulkString>(i).value(),
                primary_key_pos, sample_block, columns
            );
        }

        Block block = sample_block.cloneWithColumns(std::move(columns));
        return Chunk(block.getColumns(), block.rows());
    }

private:
    StorageRedis & storage;

    size_t primary_key_pos;

    /// For key scan
    FieldVectorPtr keys = nullptr;
    FieldVector::const_iterator begin;
    FieldVector::const_iterator end;
    FieldVector::const_iterator it;

    /// For full scan
    RedisIterator iterator;
    String pattern;

    const size_t max_block_size;
};

class RedisSink : public SinkToStorage
{
public:
    RedisSink(StorageRedis & storage_, const StorageMetadataPtr & metadata_snapshot_);

    void consume(Chunk chunk) override;
    String getName() const override { return "RedisSink"; }

private:
    StorageRedis & storage;
    StorageMetadataPtr metadata_snapshot;
    size_t primary_key_pos = 0;
};

RedisSink::RedisSink(StorageRedis & storage_, const StorageMetadataPtr & metadata_snapshot_)
    : SinkToStorage(metadata_snapshot_->getSampleBlock())
    , storage(storage_)
    , metadata_snapshot(metadata_snapshot_)
{
    for (const auto & column : getHeader())
    {
        if (column.name == storage.getPrimaryKey()[0])
            break;
        ++primary_key_pos;
    }
}

void RedisSink::consume(Chunk chunk)
{
    auto rows = chunk.getNumRows();
    auto block = getHeader().cloneWithColumns(chunk.detachColumns());

    WriteBufferFromOwnString wb_key;
    WriteBufferFromOwnString wb_value;

    RedisArray data;
    for (size_t i = 0; i < rows; ++i)
    {
        wb_key.restart();
        wb_value.restart();

        size_t idx = 0;
        for (const auto & elem : block)
        {
            elem.type->getDefaultSerialization()->serializeBinary(*elem.column, i, idx == primary_key_pos ? wb_key : wb_value, {});
            ++idx;
        }
        data.add(wb_key.str());
        data.add(wb_value.str());
    }
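    /// MSET treats its arguments as a flat key1 value1 key2 value2 ... list,
    /// which is why keys and values were appended to `data` in alternation above.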
    storage.multiSet(data);
}

StorageRedis::StorageRedis(
    const StorageID & table_id_,
    const RedisConfiguration & configuration_,
    ContextPtr context_,
    const StorageInMemoryMetadata & storage_metadata,
    const String & primary_key_)
    : IStorage(table_id_)
    , WithContext(context_->getGlobalContext())
    , table_id(table_id_)
    , configuration(configuration_)
    , log(&Poco::Logger::get("StorageRedis"))
    , primary_key(primary_key_)
{
    pool = std::make_shared<RedisPool>(configuration.pool_size);
    setInMemoryMetadata(storage_metadata);
}

Pipe StorageRedis::read(
    const Names & column_names,
    const StorageSnapshotPtr & storage_snapshot,
    SelectQueryInfo & query_info,
    ContextPtr context_,
    QueryProcessingStage::Enum /*processed_stage*/,
    size_t max_block_size,
    size_t num_streams)
{
    storage_snapshot->check(column_names);

    FieldVectorPtr keys;
    bool all_scan = false;

    Block header = storage_snapshot->metadata->getSampleBlock();
    auto primary_key_data_type = header.getByName(primary_key).type;

    std::tie(keys, all_scan) = getFilterKeys(primary_key, primary_key_data_type, query_info, context_);

    if (all_scan)
    {
        return Pipe(std::make_shared<RedisDataSource>(*this, header, max_block_size));
    }
    else
    {
        if (keys->empty())
            return {};

        Pipes pipes;

        ::sort(keys->begin(), keys->end());
        keys->erase(std::unique(keys->begin(), keys->end()), keys->end());
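        /// Split the sorted, de-duplicated keys evenly across the sources;
        /// the number of parallel sources is capped by both num_streams and the connection pool size.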
        size_t num_keys = keys->size();
        size_t num_threads = std::min<size_t>(num_streams, keys->size());

        num_threads = std::min<size_t>(num_threads, configuration.pool_size);
        assert(num_keys <= std::numeric_limits<uint32_t>::max());

        for (size_t thread_idx = 0; thread_idx < num_threads; ++thread_idx)
        {
            size_t begin = num_keys * thread_idx / num_threads;
            size_t end = num_keys * (thread_idx + 1) / num_threads;

            pipes.emplace_back(
                std::make_shared<RedisDataSource>(*this, header, keys, keys->begin() + begin, keys->begin() + end, max_block_size));
        }
        return Pipe::unitePipes(std::move(pipes));
    }
}

namespace
{
    // host:port, db_index, password, pool_size
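    // Arguments may alternatively be given as a named collection; the keys accepted
    // in that form (host/hostname, port, password, db_index, pool_size) are validated below.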
    RedisConfiguration getRedisConfiguration(ASTs & engine_args, ContextPtr context)
    {
        RedisConfiguration configuration;

        if (engine_args.empty())
            throw Exception(ErrorCodes::BAD_ARGUMENTS, "Bad arguments count when creating Redis table engine");

        if (auto named_collection = tryGetNamedCollectionWithOverrides(engine_args, context))
        {
            validateNamedCollection(
                *named_collection,
                ValidateKeysMultiset<RedisEqualKeysSet>{"host", "port", "hostname", "password", "db_index", "pool_size"},
                {});

            configuration.host = named_collection->getAny<String>({"host", "hostname"});
            configuration.port = static_cast<uint32_t>(named_collection->getOrDefault<UInt64>("port", 6379));
            configuration.password = named_collection->getOrDefault<String>("password", DEFAULT_REDIS_PASSWORD);
            configuration.db_index = static_cast<uint32_t>(named_collection->getOrDefault<UInt64>("db_index", DEFAULT_REDIS_DB_INDEX));
            configuration.pool_size = static_cast<uint32_t>(named_collection->getOrDefault<UInt64>("pool_size", DEFAULT_REDIS_POOL_SIZE));
        }
        else
        {
            for (auto & engine_arg : engine_args)
                engine_arg = evaluateConstantExpressionOrIdentifierAsLiteral(engine_arg, context);

            /// 6379 is the default Redis port.
            auto parsed_host_port = parseAddress(checkAndGetLiteralArgument<String>(engine_args[0], "host:port"), 6379);
            configuration.host = parsed_host_port.first;
            configuration.port = parsed_host_port.second;

            if (engine_args.size() > 1)
                configuration.db_index = static_cast<uint32_t>(checkAndGetLiteralArgument<UInt64>(engine_args[1], "db_index"));
            else
                configuration.db_index = DEFAULT_REDIS_DB_INDEX;
            if (engine_args.size() > 2)
                configuration.password = checkAndGetLiteralArgument<String>(engine_args[2], "password");
            else
                configuration.password = DEFAULT_REDIS_PASSWORD;
            if (engine_args.size() > 3)
                configuration.pool_size = static_cast<uint32_t>(checkAndGetLiteralArgument<UInt64>(engine_args[3], "pool_size"));
            else
                configuration.pool_size = DEFAULT_REDIS_POOL_SIZE;
        }

        context->getRemoteHostFilter().checkHostAndPort(configuration.host, toString(configuration.port));
        return configuration;
    }

    StoragePtr createStorageRedis(const StorageFactory::Arguments & args)
    {
        auto configuration = getRedisConfiguration(args.engine_args, args.getLocalContext());

        StorageInMemoryMetadata metadata;
        metadata.setColumns(args.columns);
        metadata.setConstraints(args.constraints);
        metadata.setComment(args.comment);

        if (!args.storage_def->primary_key)
            throw Exception(ErrorCodes::BAD_ARGUMENTS, "StorageRedis must require one column in primary key");

        auto primary_key_desc = KeyDescription::getKeyFromAST(args.storage_def->primary_key->ptr(), metadata.columns, args.getContext());
        auto primary_key_names = primary_key_desc.expression->getRequiredColumns();

        if (primary_key_names.size() != 1)
        {
            throw Exception(ErrorCodes::BAD_ARGUMENTS, "StorageRedis must require one column in primary key");
        }

        return std::make_shared<StorageRedis>(args.table_id, configuration, args.getContext(), metadata, primary_key_names[0]);
    }
}

Chunk StorageRedis::getBySerializedKeys(const std::vector<std::string> & keys, PaddedPODArray<UInt8> * null_map) const
{
    RedisArray redis_keys;
    for (const auto & key : keys)
        redis_keys.add(key);
    return getBySerializedKeys(redis_keys, null_map);
}

Chunk StorageRedis::getBySerializedKeys(const RedisArray & keys, PaddedPODArray<UInt8> * null_map) const
{
    Block sample_block = getInMemoryMetadataPtr()->getSampleBlock();

    size_t primary_key_pos = getPrimaryKeyPos(sample_block, getPrimaryKey());
    MutableColumns columns = sample_block.cloneEmptyColumns();

    RedisArray values = multiGet(keys);
    if (values.isNull() || values.size() == 0)
        return {};

    if (null_map)
    {
        null_map->clear();
        null_map->resize_fill(keys.size(), 1);
    }

    for (size_t i = 0; i < values.size(); ++i)
    {
        if (!values.get<RedisBulkString>(i).isNull())
        {
            fillColumns(keys.get<RedisBulkString>(i).value(),
                values.get<RedisBulkString>(i).value(),
                primary_key_pos, sample_block, columns
            );
        }
        else /// key not found
        {
            if (null_map)
            {
                (*null_map)[i] = 0;
                for (size_t col_idx = 0; col_idx < sample_block.columns(); ++col_idx)
                {
                    columns[col_idx]->insert(sample_block.getByPosition(col_idx).type->getDefault());
                }
            }
        }
    }

    size_t num_rows = columns.at(0)->size();
    return Chunk(std::move(columns), num_rows);
}

std::pair<RedisIterator, RedisArray> StorageRedis::scan(RedisIterator iterator, const String & pattern, uint64_t max_count)
{
    auto connection = getRedisConnection(pool, configuration);
    RedisCommand scan("SCAN");
    scan << toString(iterator) << "MATCH" << pattern << "COUNT" << toString(max_count);
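    /// SCAN replies with a two-element array: the next cursor (a bulk string,
    /// "0" once the iteration is complete) followed by the batch of matched keys.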
    const auto & result = connection->client->execute<RedisArray>(scan);
    RedisIterator next = parse<RedisIterator>(result.get<RedisBulkString>(0).value());

    return {next, result.get<RedisArray>(1)};
}

RedisArray StorageRedis::multiGet(const RedisArray & keys) const
{
    auto connection = getRedisConnection(pool, configuration);

    RedisCommand cmd_mget("MGET");
    for (size_t i = 0; i < keys.size(); ++i)
        cmd_mget.add(keys.get<RedisBulkString>(i));

    return connection->client->execute<RedisArray>(cmd_mget);
}

void StorageRedis::multiSet(const RedisArray & data) const
{
    auto connection = getRedisConnection(pool, configuration);

    RedisCommand cmd_mset("MSET");
    for (size_t i = 0; i < data.size(); ++i)
        cmd_mset.add(data.get<RedisBulkString>(i));

    auto ret = connection->client->execute<RedisSimpleString>(cmd_mset);
    if (ret != "OK")
        throw Exception(ErrorCodes::INTERNAL_REDIS_ERROR, "Fail to write to redis table {}, for {}", table_id.getFullNameNotQuoted(), ret);
}

RedisInteger StorageRedis::multiDelete(const RedisArray & keys) const
{
    auto connection = getRedisConnection(pool, configuration);

    RedisCommand cmd("DEL");
    for (size_t i = 0; i < keys.size(); ++i)
        cmd.add(keys.get<RedisBulkString>(i));

    auto ret = connection->client->execute<RedisInteger>(cmd);
    if (ret != static_cast<RedisInteger>(keys.size()))
        LOG_DEBUG(
            log,
            "Try to delete {} rows but actually deleted {} rows from redis table {}.",
            keys.size(),
            ret,
            table_id.getFullNameNotQuoted());

    return ret;
}

Chunk StorageRedis::getByKeys(const ColumnsWithTypeAndName & keys, PaddedPODArray<UInt8> & null_map, const Names &) const
{
    if (keys.size() != 1)
        throw Exception(ErrorCodes::LOGICAL_ERROR, "StorageRedis supports only one key, got: {}", keys.size());

    auto raw_keys = serializeKeysToRawString(keys[0]);

    if (raw_keys.size() != keys[0].column->size())
        throw DB::Exception(ErrorCodes::LOGICAL_ERROR, "Assertion failed: {} != {}", raw_keys.size(), keys[0].column->size());

    return getBySerializedKeys(raw_keys, &null_map);
}

Block StorageRedis::getSampleBlock(const Names &) const
{
    return getInMemoryMetadataPtr()->getSampleBlock();
}

SinkToStoragePtr StorageRedis::write(
    const ASTPtr & /*query*/,
    const StorageMetadataPtr & metadata_snapshot,
    ContextPtr /*context*/,
    bool /*async_insert*/)
{
    return std::make_shared<RedisSink>(*this, metadata_snapshot);
}

void StorageRedis::truncate(const ASTPtr & query, const StorageMetadataPtr &, ContextPtr, TableExclusiveLockHolder &)
{
    auto connection = getRedisConnection(pool, configuration);

    auto * truncate_query = query->as<ASTDropQuery>();
    assert(truncate_query != nullptr);

    RedisCommand cmd_flush_db("FLUSHDB");
    if (!truncate_query->sync)
        cmd_flush_db.add("ASYNC");

    auto ret = connection->client->execute<RedisSimpleString>(cmd_flush_db);

    if (ret != "OK")
        throw Exception(ErrorCodes::INTERNAL_REDIS_ERROR, "Fail to truncate redis table {}, for {}", table_id.getFullNameNotQuoted(), ret);
}

void StorageRedis::checkMutationIsPossible(const MutationCommands & commands, const Settings & /* settings */) const
{
    if (commands.empty())
        return;

    if (commands.size() > 1)
        throw Exception(ErrorCodes::BAD_ARGUMENTS, "Mutations cannot be combined for StorageRedis");

    const auto command_type = commands.front().type;
    if (command_type != MutationCommand::Type::UPDATE && command_type != MutationCommand::Type::DELETE)
        throw Exception(ErrorCodes::BAD_ARGUMENTS, "Only DELETE and UPDATE mutation supported for StorageRedis");
}

void StorageRedis::mutate(const MutationCommands & commands, ContextPtr context_)
{
    if (commands.empty())
        return;

    assert(commands.size() == 1);

    auto metadata_snapshot = getInMemoryMetadataPtr();
    auto storage = getStorageID();
    auto storage_ptr = DatabaseCatalog::instance().getTable(storage, context_);

    if (commands.front().type == MutationCommand::Type::DELETE)
    {
        MutationsInterpreter::Settings settings(true);
        settings.return_all_columns = true;
        settings.return_mutated_rows = true;

        auto interpreter = std::make_unique<MutationsInterpreter>(storage_ptr, metadata_snapshot, commands, context_, settings);
        auto pipeline = QueryPipelineBuilder::getPipeline(interpreter->execute());
        PullingPipelineExecutor executor(pipeline);

        auto sink = std::make_shared<RedisSink>(*this, metadata_snapshot);

        auto header = interpreter->getUpdatedHeader();
        auto primary_key_pos = header.getPositionByName(primary_key);

        Block block;
        while (executor.pull(block))
        {
            auto & column_type_name = block.getByPosition(primary_key_pos);

            auto column = column_type_name.column;
            auto size = column->size();

            RedisArray keys;
            WriteBufferFromOwnString wb_key;
            for (size_t i = 0; i < size; ++i)
            {
                wb_key.restart();
                column_type_name.type->getDefaultSerialization()->serializeBinary(*column, i, wb_key, {});
                keys.add(wb_key.str());
            }
            multiDelete(keys);
        }
        return;
    }

    assert(commands.front().type == MutationCommand::Type::UPDATE);
    if (commands.front().column_to_update_expression.contains(primary_key))
        throw Exception(ErrorCodes::BAD_ARGUMENTS, "Primary key cannot be updated (cannot update column {})", primary_key);

    MutationsInterpreter::Settings settings(true);
    settings.return_all_columns = true;
    settings.return_mutated_rows = true;

    auto interpreter = std::make_unique<MutationsInterpreter>(storage_ptr, metadata_snapshot, commands, context_, settings);
    auto pipeline = QueryPipelineBuilder::getPipeline(interpreter->execute());
    PullingPipelineExecutor executor(pipeline);

    auto sink = std::make_shared<RedisSink>(*this, metadata_snapshot);

    Block block;
    while (executor.pull(block))
    {
        sink->consume(Chunk{block.getColumns(), block.rows()});
    }
}

/// TODO support ttl
void registerStorageRedis(StorageFactory & factory)
{
    StorageFactory::StorageFeatures features{
        .supports_sort_order = true,
        .supports_parallel_insert = true,
        .source_access_type = AccessType::REDIS,
    };

    factory.registerStorage("Redis", createStorageRedis, features);
}

}
src/Storages/StorageRedis.h (new file, 83 lines)
@@ -0,0 +1,83 @@
#pragma once

#include <Poco/Redis/Redis.h>
#include <Storages/IStorage.h>
#include <Storages/RedisCommon.h>
#include <Interpreters/IKeyValueEntity.h>
#include <Interpreters/Context_fwd.h>
#include <Storages/MutationCommands.h>

namespace DB
{
/* Implements storage in Redis.
 * Use ENGINE = Redis(host:port[, db_index[, password[, pool_size]]]) PRIMARY KEY(key);
 */
class StorageRedis : public IStorage, public IKeyValueEntity, WithContext
{
public:
    StorageRedis(
        const StorageID & table_id_,
        const RedisConfiguration & configuration_,
        ContextPtr context_,
        const StorageInMemoryMetadata & storage_metadata,
        const String & primary_key_);

    std::string getName() const override { return "Redis"; }

    Pipe read(
        const Names & column_names,
        const StorageSnapshotPtr & storage_snapshot,
        SelectQueryInfo & query_info,
        ContextPtr context_,
        QueryProcessingStage::Enum processed_stage,
        size_t max_block_size,
        size_t num_streams) override;

    SinkToStoragePtr write(
        const ASTPtr & query,
        const StorageMetadataPtr & /*metadata_snapshot*/,
        ContextPtr context,
        bool /*async_insert*/) override;

    void truncate(const ASTPtr &,
        const StorageMetadataPtr & metadata_snapshot,
        ContextPtr,
        TableExclusiveLockHolder &) override;

    void checkMutationIsPossible(const MutationCommands & commands, const Settings & settings) const override;
    void mutate(const MutationCommands &, ContextPtr) override;

    Names getPrimaryKey() const override { return {primary_key}; }

    /// Return chunk with data for given serialized keys.
    /// If out_null_map is passed, fill it with 1/0 depending on whether each key was found. The result chunk may contain default values.
    /// If out_null_map is not passed, rows that were not found are excluded from the result chunk.
    Chunk getBySerializedKeys(
        const std::vector<std::string> & keys,
        PaddedPODArray<UInt8> * out_null_map) const;

    Chunk getBySerializedKeys(
        const RedisArray & keys,
        PaddedPODArray<UInt8> * out_null_map) const;

    std::pair<RedisIterator, RedisArray> scan(RedisIterator iterator, const String & pattern, uint64_t max_count);

    RedisArray multiGet(const RedisArray & keys) const;
    void multiSet(const RedisArray & data) const;
    RedisInteger multiDelete(const RedisArray & keys) const;

    Chunk getByKeys(const ColumnsWithTypeAndName & keys, PaddedPODArray<UInt8> & null_map, const Names &) const override;

    Block getSampleBlock(const Names &) const override;

private:
    StorageID table_id;
    RedisConfiguration configuration;

    Poco::Logger * log;
    RedisPoolPtr pool;

    const String primary_key;
};

}
@@ -59,6 +59,7 @@ void registerStorageMySQL(StorageFactory & factory);
 #endif

 void registerStorageMongoDB(StorageFactory & factory);
+void registerStorageRedis(StorageFactory & factory);


 #if USE_RDKAFKA
@@ -160,6 +161,7 @@ void registerStorages()
 #endif

     registerStorageMongoDB(factory);
+    registerStorageRedis(factory);

 #if USE_RDKAFKA
     registerStorageKafka(factory);
src/TableFunctions/TableFunctionRedis.cpp (new file, 94 lines)
@@ -0,0 +1,94 @@
|
||||
#include <TableFunctions/TableFunctionRedis.h>

#include <Common/Exception.h>
#include <Common/parseAddress.h>

#include <Interpreters/Context.h>

#include <Parsers/ASTFunction.h>
#include <Parsers/ASTIdentifier.h>

#include <Interpreters/parseColumnsListForTableFunction.h>
#include <Storages/ColumnsDescription.h>
#include <TableFunctions/TableFunctionFactory.h>
#include <TableFunctions/registerTableFunctions.h>
#include <Storages/checkAndGetLiteralArgument.h>
#include <Interpreters/evaluateConstantExpression.h>


namespace DB
{

namespace ErrorCodes
{
    extern const int BAD_ARGUMENTS;
}

StoragePtr TableFunctionRedis::executeImpl(
    const ASTPtr & /*ast_function*/, ContextPtr context, const String & table_name, ColumnsDescription /*cached_columns*/) const
{
    auto columns = getActualTableStructure(context);

    StorageInMemoryMetadata metadata;
    metadata.setColumns(columns);

    String db_name = "redis" + getDatabaseName() + "_db_" + toString(configuration.db_index);
    auto storage = std::make_shared<StorageRedis>(
        StorageID(db_name, table_name), configuration, context, metadata, primary_key);
    storage->startup();
    return storage;
}

ColumnsDescription TableFunctionRedis::getActualTableStructure(ContextPtr context) const
{
    return parseColumnsListFromString(structure, context);
}

void TableFunctionRedis::parseArguments(const ASTPtr & ast_function, ContextPtr context)
{
    const auto & func_args = ast_function->as<ASTFunction &>();
    if (!func_args.arguments)
        throw Exception(ErrorCodes::BAD_ARGUMENTS, "Table function 'redis' must have arguments.");

    ASTs & args = func_args.arguments->children;

    if (args.size() < 3)
        throw Exception(ErrorCodes::BAD_ARGUMENTS, "Bad arguments count when creating Redis table function");

    for (auto & arg : args)
        arg = evaluateConstantExpressionOrIdentifierAsLiteral(arg, context);

    auto parsed_host_port = parseAddress(checkAndGetLiteralArgument<String>(args[0], "host:port"), 6379);
    configuration.host = parsed_host_port.first;
    configuration.port = parsed_host_port.second;

    primary_key = checkAndGetLiteralArgument<String>(args[1], "key");
    structure = checkAndGetLiteralArgument<String>(args[2], "structure");

    if (args.size() > 3)
        configuration.db_index = static_cast<uint32_t>(checkAndGetLiteralArgument<UInt64>(args[3], "db_index"));
    else
        configuration.db_index = DEFAULT_REDIS_DB_INDEX;
    if (args.size() > 4)
        configuration.password = checkAndGetLiteralArgument<String>(args[4], "password");
    else
        configuration.password = DEFAULT_REDIS_PASSWORD;
    if (args.size() > 5)
        configuration.pool_size = static_cast<uint32_t>(checkAndGetLiteralArgument<UInt64>(args[5], "pool_size"));
    else
        configuration.pool_size = DEFAULT_REDIS_POOL_SIZE;

    context->getRemoteHostFilter().checkHostAndPort(configuration.host, toString(configuration.port));

    auto columns = parseColumnsListFromString(structure, context);
    if (!columns.has(primary_key))
        throw Exception(ErrorCodes::BAD_ARGUMENTS, "Bad arguments: redis table function structure should contain the key column.");
}


void registerTableFunctionRedis(TableFunctionFactory & factory)
{
    factory.registerFunction<TableFunctionRedis>();
}

}
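For reference, `parseArguments` above binds the positional arguments in the order host:port, key, structure, then the optional db_index, password and pool_size. A minimal sketch of a matching call (the address, password and column names here are placeholders, not values from this PR):

```python
# Hypothetical query string exercising every positional argument parsed above.
query = """
    SELECT *
    FROM redis('redis1:6379', 'k', 'k String, v UInt32', 0, 'clickhouse', 16)
    WHERE k = '0'
"""
```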
34
src/TableFunctions/TableFunctionRedis.h
Normal file
@ -0,0 +1,34 @@
#pragma once

#include <Storages/StorageRedis.h>
#include <TableFunctions/ITableFunction.h>
#include <Storages/ExternalDataSourceConfiguration.h>

namespace DB
{

/* Implements Redis table function.
 * Use redis(host:port, key, structure[, db_index[, password[, pool_size]]]);
 */
class TableFunctionRedis : public ITableFunction
{
public:
    static constexpr auto name = "redis";
    String getName() const override { return name; }

private:
    StoragePtr executeImpl(
        const ASTPtr & ast_function, ContextPtr context,
        const String & table_name, ColumnsDescription cached_columns) const override;

    const char * getStorageTypeName() const override { return "Redis"; }

    ColumnsDescription getActualTableStructure(ContextPtr context) const override;
    void parseArguments(const ASTPtr & ast_function, ContextPtr context) override;

    RedisConfiguration configuration;
    String structure;
    String primary_key;
};

}
@ -21,6 +21,7 @@ void registerTableFunctions()
    registerTableFunctionInput(factory);
    registerTableFunctionGenerate(factory);
    registerTableFunctionMongoDB(factory);
    registerTableFunctionRedis(factory);

    registerTableFunctionMeiliSearch(factory);

@ -18,6 +18,7 @@ void registerTableFunctionValues(TableFunctionFactory & factory);
void registerTableFunctionInput(TableFunctionFactory & factory);
void registerTableFunctionGenerate(TableFunctionFactory & factory);
void registerTableFunctionMongoDB(TableFunctionFactory & factory);
void registerTableFunctionRedis(TableFunctionFactory & factory);

void registerTableFunctionMeiliSearch(TableFunctionFactory & factory);
0
tests/integration/test_storage_redis/__init__.py
Normal file
388
tests/integration/test_storage_redis/test.py
Normal file
@ -0,0 +1,388 @@
## sudo -H pip install redis
import redis
import pytest
import struct
import sys

from helpers.client import QueryRuntimeException
from helpers.cluster import ClickHouseCluster
from helpers.test_tools import TSV

cluster = ClickHouseCluster(__file__)

node = cluster.add_instance("node", with_redis=True)


@pytest.fixture(scope="module")
def started_cluster():
    try:
        cluster.start()
        yield cluster
    finally:
        cluster.shutdown()


def get_redis_connection(db_id=0):
    client = redis.Redis(
        host="localhost", port=cluster.redis_port, password="clickhouse", db=db_id
    )
    return client


def get_address_for_ch():
    return cluster.redis_host + ":6379"


def drop_table(table):
    node.query(f"DROP TABLE IF EXISTS {table} SYNC")


# see SerializationString.serializeBinary
def serialize_binary_for_string(x):
    var_uint_max = (1 << 63) - 1
    buf = bytearray()
    # write length
    length = len(x)
    # length = (length << 1) ^ (length >> 63)
    if length > var_uint_max:
        raise ValueError("Value too large for varint encoding")
    for i in range(9):
        byte = length & 0x7F
        if length > 0x7F:
            byte |= 0x80
        buf += bytes([byte])
        length >>= 7
        if not length:
            break
    # write data
    buf += x.encode("utf-8")
    return bytes(buf)
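The loop above emits ClickHouse's LEB128-style varint length prefix: seven payload bits per byte, with the high bit set as a continuation flag. A quick sanity check of the helper under that encoding:

```python
# One-byte prefix for short strings, two bytes once the length exceeds 127:
assert serialize_binary_for_string("0") == b"\x010"  # length 1, then b"0"
assert serialize_binary_for_string("x" * 200)[:2] == b"\xc8\x01"  # 200 -> 0xC8 0x01
```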
# see SerializationNumber.serializeBinary
def serialize_binary_for_uint32(x):
    buf = bytearray()
    packed_num = struct.pack("I", x)
    buf += packed_num
    if sys.byteorder != "little":
        buf.reverse()
    return bytes(buf)
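In other words, the UInt32 wire format is the plain 4-byte little-endian value regardless of host byte order (assuming the usual 4-byte native `unsigned int`); for example:

```python
# struct.pack("I", ...) is native-endian, so the explicit reverse above
# normalizes big-endian hosts to little-endian output.
assert serialize_binary_for_uint32(5) == b"\x05\x00\x00\x00"
assert serialize_binary_for_uint32(1685577600) == struct.pack("<I", 1685577600)
```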
def test_simple_select(started_cluster):
    client = get_redis_connection()
    address = get_address_for_ch()

    # clean all
    client.flushall()
    drop_table("test_simple_select")

    data = {}
    for i in range(100):
        packed = serialize_binary_for_string(str(i))
        data[packed] = packed

    client.mset(data)
    client.close()

    # create table
    node.query(
        f"""
        CREATE TABLE test_simple_select(
            k String,
            v String
        ) Engine=Redis('{address}', 0, 'clickhouse') PRIMARY KEY (k)
        """
    )

    response = TSV.toMat(
        node.query("SELECT k, v FROM test_simple_select WHERE k='0' FORMAT TSV")
    )
    assert len(response) == 1
    assert response[0] == ["0", "0"]

    response = TSV.toMat(
        node.query("SELECT * FROM test_simple_select ORDER BY k FORMAT TSV")
    )
    assert len(response) == 100
    assert response[0] == ["0", "0"]


def test_select_int(started_cluster):
    client = get_redis_connection()
    address = get_address_for_ch()

    # clean all
    client.flushall()
    drop_table("test_select_int")

    data = {}
    for i in range(100):
        packed = serialize_binary_for_uint32(i)
        data[packed] = packed

    client.mset(data)
    client.close()

    # create table
    node.query(
        f"""
        CREATE TABLE test_select_int(
            k UInt32,
            v UInt32
        ) Engine=Redis('{address}', 0, 'clickhouse') PRIMARY KEY (k)
        """
    )

    response = TSV.toMat(
        node.query("SELECT k, v FROM test_select_int WHERE k=0 FORMAT TSV")
    )
    assert len(response) == 1
    assert response[0] == ["0", "0"]

    response = TSV.toMat(
        node.query("SELECT * FROM test_select_int ORDER BY k FORMAT TSV")
    )
    assert len(response) == 100
    assert response[0] == ["0", "0"]


def test_create_table(started_cluster):
    address = get_address_for_ch()

    # simple creation
    drop_table("test_create_table")
    node.query(
        f"""
        CREATE TABLE test_create_table(
            k String,
            v UInt32
        ) Engine=Redis('{address}') PRIMARY KEY (k)
        """
    )

    # simple creation with full engine args
    drop_table("test_create_table")
    node.query(
        f"""
        CREATE TABLE test_create_table(
            k String,
            v UInt32
        ) Engine=Redis('{address}', 0, 'clickhouse', 10) PRIMARY KEY (k)
        """
    )

    drop_table("test_create_table")
    node.query(
        f"""
        CREATE TABLE test_create_table(
            k String,
            f String,
            v UInt32
        ) Engine=Redis('{address}', 0, 'clickhouse', 10) PRIMARY KEY (k)
        """
    )

    drop_table("test_create_table")
    with pytest.raises(QueryRuntimeException):
        node.query(
            f"""
            CREATE TABLE test_create_table(
                k String,
                f String,
                v UInt32
            ) Engine=Redis('{address}', 0, 'clickhouse', 10) PRIMARY KEY ()
            """
        )

    drop_table("test_create_table")
    with pytest.raises(QueryRuntimeException):
        node.query(
            f"""
            CREATE TABLE test_create_table(
                k String,
                f String,
                v UInt32
            ) Engine=Redis('{address}', 0, 'clickhouse', 10)
            """
        )


def test_simple_insert(started_cluster):
    client = get_redis_connection()
    address = get_address_for_ch()

    # clean all
    client.flushall()
    drop_table("test_simple_insert")

    node.query(
        f"""
        CREATE TABLE test_simple_insert(
            k UInt32,
            m DateTime,
            n String
        ) Engine=Redis('{address}', 0, 'clickhouse') PRIMARY KEY (k)
        """
    )

    node.query(
        """
        INSERT INTO test_simple_insert Values
        (1, '2023-06-01 00:00:00', 'lili'), (2, '2023-06-02 00:00:00', 'lucy')
        """
    )

    response = node.query("SELECT COUNT(*) FROM test_simple_insert FORMAT Values")
    assert response == "(2)"

    response = TSV.toMat(
        node.query("SELECT k, m, n FROM test_simple_insert WHERE k=1 FORMAT TSV")
    )
    assert len(response) == 1
    assert response[0] == ["1", "2023-06-01 00:00:00", "lili"]

    response = TSV.toMat(
        node.query(
            "SELECT k, m, n FROM test_simple_insert WHERE m='2023-06-01 00:00:00' FORMAT TSV"
        )
    )
    assert len(response) == 1
    assert response[0] == ["1", "2023-06-01 00:00:00", "lili"]

    response = TSV.toMat(
        node.query("SELECT k, m, n FROM test_simple_insert WHERE n='lili' FORMAT TSV")
    )
    assert len(response) == 1
    assert response[0] == ["1", "2023-06-01 00:00:00", "lili"]
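Of the three lookups in `test_simple_insert`, only the `WHERE k=1` filter can be served as a Redis point read; the `m` and `n` predicates are presumably evaluated after fetching rows, since only the primary key is addressable in Redis. A direct peek at the Redis side after the INSERT (a sketch reusing the helpers above) would show exactly two keys:

```python
# Sketch: verify the row count straight from Redis after the INSERT above.
client = get_redis_connection()
assert client.dbsize() == 2  # one Redis key per primary-key value
client.close()
```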
def test_update(started_cluster):
    client = get_redis_connection()
    address = get_address_for_ch()

    # clean all
    client.flushall()
    drop_table("test_update")

    node.query(
        f"""
        CREATE TABLE test_update(
            k UInt32,
            m DateTime,
            n String
        ) Engine=Redis('{address}', 0, 'clickhouse') PRIMARY KEY (k)
        """
    )

    node.query(
        """
        INSERT INTO test_update Values
        (1, '2023-06-01 00:00:00', 'lili'), (2, '2023-06-02 00:00:00', 'lucy')
        """
    )

    response = node.query(
        """
        ALTER TABLE test_update UPDATE m='2023-06-03 00:00:00' WHERE k=1
        """
    )

    print("update response: ", response)

    response = TSV.toMat(
        node.query("SELECT k, m, n FROM test_update WHERE k=1 FORMAT TSV")
    )
    assert len(response) == 1
    assert response[0] == ["1", "2023-06-03 00:00:00", "lili"]

    # cannot update the primary key
    with pytest.raises(QueryRuntimeException):
        node.query(
            """
            ALTER TABLE test_update UPDATE k=2 WHERE k=1
            """
        )


def test_delete(started_cluster):
    client = get_redis_connection()
    address = get_address_for_ch()

    # clean all
    client.flushall()
    drop_table("test_delete")

    node.query(
        f"""
        CREATE TABLE test_delete(
            k UInt32,
            m DateTime,
            n String
        ) Engine=Redis('{address}', 0, 'clickhouse') PRIMARY KEY (k)
        """
    )

    node.query(
        """
        INSERT INTO test_delete Values
        (1, '2023-06-01 00:00:00', 'lili'), (2, '2023-06-02 00:00:00', 'lucy')
        """
    )

    response = node.query(
        """
        ALTER TABLE test_delete DELETE WHERE k=1
        """
    )

    print("delete response: ", response)

    response = TSV.toMat(node.query("SELECT k, m, n FROM test_delete FORMAT TSV"))
    assert len(response) == 1
    assert response[0] == ["2", "2023-06-02 00:00:00", "lucy"]

    response = node.query(
        """
        ALTER TABLE test_delete DELETE WHERE m='2023-06-02 00:00:00'
        """
    )

    response = TSV.toMat(node.query("SELECT k, m, n FROM test_delete FORMAT TSV"))
    assert len(response) == 0


def test_truncate(started_cluster):
    client = get_redis_connection()
    address = get_address_for_ch()

    # clean all
    client.flushall()
    drop_table("test_truncate")

    node.query(
        f"""
        CREATE TABLE test_truncate(
            k UInt32,
            m DateTime,
            n String
        ) Engine=Redis('{address}', 0, 'clickhouse') PRIMARY KEY (k)
        """
    )

    node.query(
        """
        INSERT INTO test_truncate Values
        (1, '2023-06-01 00:00:00', 'lili'), (2, '2023-06-02 00:00:00', 'lucy')
        """
    )

    response = node.query(
        """
        TRUNCATE TABLE test_truncate
        """
    )

    print("truncate table response: ", response)

    response = TSV.toMat(node.query("SELECT COUNT(*) FROM test_truncate FORMAT TSV"))
    assert len(response) == 1
    assert response[0] == ["0"]
230
tests/integration/test_table_function_redis/test.py
Normal file
@ -0,0 +1,230 @@
import datetime

import redis
import pytest
import sys
import struct

from helpers.client import QueryRuntimeException
from helpers.cluster import ClickHouseCluster
from helpers.test_tools import TSV

cluster = ClickHouseCluster(__file__)

node = cluster.add_instance("node", with_redis=True)


@pytest.fixture(scope="module")
def started_cluster():
    try:
        cluster.start()
        yield cluster
    finally:
        cluster.shutdown()


def get_redis_connection(db_id=0):
    client = redis.Redis(
        host="localhost", port=cluster.redis_port, password="clickhouse", db=db_id
    )
    return client


def get_address_for_ch():
    return cluster.redis_host + ":6379"


# see SerializationString.serializeBinary
def serialize_binary_for_string(x):
    var_uint_max = (1 << 63) - 1
    buf = bytearray()
    # write length
    length = len(x)
    # length = (length << 1) ^ (length >> 63)
    if length > var_uint_max:
        raise ValueError("Value too large for varint encoding")
    for i in range(9):
        byte = length & 0x7F
        if length > 0x7F:
            byte |= 0x80
        buf += bytes([byte])
        length >>= 7
        if not length:
            break
    # write data
    buf += x.encode("utf-8")
    return bytes(buf)


# see SerializationNumber.serializeBinary
def serialize_binary_for_uint32(x):
    buf = bytearray()
    packed_num = struct.pack("I", x)
    buf += packed_num
    if sys.byteorder != "little":
        buf.reverse()
    return bytes(buf)
def test_simple_select(started_cluster):
    client = get_redis_connection()
    address = get_address_for_ch()

    # clean all
    client.flushall()

    data = {}
    for i in range(100):
        packed = serialize_binary_for_string(str(i))
        data[packed] = packed

    client.mset(data)
    client.close()

    response = TSV.toMat(
        node.query(
            f"""
            SELECT
                key, value
            FROM
                redis('{address}', 'key', 'key String, value String', 0, 'clickhouse', 10)
            WHERE
                key='0'
            FORMAT TSV
            """
        )
    )

    assert len(response) == 1
    assert response[0] == ["0", "0"]

    response = TSV.toMat(
        node.query(
            f"""
            SELECT
                *
            FROM
                redis('{address}', 'key', 'key String, value String', 0, 'clickhouse', 10)
            ORDER BY
                key
            FORMAT TSV
            """
        )
    )

    assert len(response) == 100
    assert response[0] == ["0", "0"]


def test_create_table(started_cluster):
    client = get_redis_connection()
    address = get_address_for_ch()

    # clean all
    client.flushall()
    client.close()

    node.query(
        f"""
        SELECT
            *
        FROM
            redis('{address}', 'k', 'k String, v UInt32', 0, 'clickhouse', 10)
        """
    )

    # illegal data type
    with pytest.raises(QueryRuntimeException):
        node.query(
            f"""
            SELECT
                *
            FROM
                redis('{address}', 'k', 'k not_exist_type, v String', 0, 'clickhouse', 10)
            """
        )

    # illegal key
    with pytest.raises(QueryRuntimeException):
        node.query(
            f"""
            SELECT
                *
            FROM
                redis('{address}', 'not_exist_key', 'k not_exist_type, v String', 0, 'clickhouse', 10)
            """
        )


def test_data_type(started_cluster):
    client = get_redis_connection()
    address = get_address_for_ch()

    # string
    client.flushall()
    value = serialize_binary_for_string("0")
    client.set(value, value)

    response = TSV.toMat(
        node.query(
            f"""
            SELECT
                *
            FROM
                redis('{address}', 'k', 'k String, v String', 0, 'clickhouse', 10)
            WHERE
                k='0'
            FORMAT TSV
            """
        )
    )

    assert len(response) == 1
    assert response[0] == ["0", "0"]

    # number
    client.flushall()
    value = serialize_binary_for_uint32(0)
    client.set(value, value)

    response = TSV.toMat(
        node.query(
            f"""
            SELECT
                *
            FROM
                redis('{address}', 'k', 'k UInt32, v UInt32', 0, 'clickhouse', 10)
            WHERE
                k=0
            FORMAT TSV
            """
        )
    )

    assert len(response) == 1
    assert response[0] == ["0", "0"]

    # datetime
    client.flushall()
    # ClickHouse stores DateTime as UInt32 internally
    dt = datetime.datetime(2023, 6, 1, 0, 0, 0)
    seconds_since_epoch = dt.timestamp()
    value = serialize_binary_for_uint32(int(seconds_since_epoch))
    client.set(value, value)

    response = TSV.toMat(
        node.query(
            f"""
            SELECT
                *
            FROM
                redis('{address}', 'k', 'k DateTime, v DateTime', 0, 'clickhouse', 10)
            WHERE
                k='2023-06-01 00:00:00'
            FORMAT TSV
            """
        )
    )

    assert len(response) == 1
    assert response[0] == ["2023-06-01 00:00:00", "2023-06-01 00:00:00"]
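The DateTime case works because ClickHouse's binary DateTime is just the UInt32 epoch-second count, so the key bytes come from the same helper as plain numbers. A small check (the exact epoch value depends on the node's timezone, hence the indirection through `timestamp()`):

```python
# 2023-06-01 00:00:00 is 1685577600 seconds since the epoch in UTC;
# dt.timestamp() uses the local timezone, so we compute rather than hardcode it.
dt = datetime.datetime(2023, 6, 1, 0, 0, 0)
epoch = int(dt.timestamp())
assert serialize_binary_for_uint32(epoch) == struct.pack("<I", epoch)
```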
@ -149,6 +149,7 @@ FILE [] GLOBAL SOURCES
URL [] GLOBAL SOURCES
REMOTE [] GLOBAL SOURCES
MONGO [] GLOBAL SOURCES
REDIS [] GLOBAL SOURCES
MEILISEARCH [] GLOBAL SOURCES
MYSQL [] GLOBAL SOURCES
POSTGRES [] GLOBAL SOURCES
@ -297,7 +297,7 @@ CREATE TABLE system.grants
(
    `user_name` Nullable(String),
    `role_name` Nullable(String),
    `access_type` Enum16('SHOW DATABASES' = 0, 'SHOW TABLES' = 1, 'SHOW COLUMNS' = 2, 'SHOW DICTIONARIES' = 3, 'SHOW' = 4, 'SHOW FILESYSTEM CACHES' = 5, 'SELECT' = 6, 'INSERT' = 7, 'ALTER UPDATE' = 8, 'ALTER DELETE' = 9, 'ALTER ADD COLUMN' = 10, 'ALTER MODIFY COLUMN' = 11, 'ALTER DROP COLUMN' = 12, 'ALTER COMMENT COLUMN' = 13, 'ALTER CLEAR COLUMN' = 14, 'ALTER RENAME COLUMN' = 15, 'ALTER MATERIALIZE COLUMN' = 16, 'ALTER COLUMN' = 17, 'ALTER MODIFY COMMENT' = 18, 'ALTER ORDER BY' = 19, 'ALTER SAMPLE BY' = 20, 'ALTER ADD INDEX' = 21, 'ALTER DROP INDEX' = 22, 'ALTER MATERIALIZE INDEX' = 23, 'ALTER CLEAR INDEX' = 24, 'ALTER INDEX' = 25, 'ALTER ADD PROJECTION' = 26, 'ALTER DROP PROJECTION' = 27, 'ALTER MATERIALIZE PROJECTION' = 28, 'ALTER CLEAR PROJECTION' = 29, 'ALTER PROJECTION' = 30, 'ALTER ADD CONSTRAINT' = 31, 'ALTER DROP CONSTRAINT' = 32, 'ALTER CONSTRAINT' = 33, 'ALTER TTL' = 34, 'ALTER MATERIALIZE TTL' = 35, 'ALTER SETTINGS' = 36, 'ALTER MOVE PARTITION' = 37, 'ALTER FETCH PARTITION' = 38, 'ALTER FREEZE PARTITION' = 39, 'ALTER DATABASE SETTINGS' = 40, 'ALTER NAMED COLLECTION' = 41, 'ALTER TABLE' = 42, 'ALTER DATABASE' = 43, 'ALTER VIEW REFRESH' = 44, 'ALTER VIEW MODIFY QUERY' = 45, 'ALTER VIEW' = 46, 'ALTER' = 47, 'CREATE DATABASE' = 48, 'CREATE TABLE' = 49, 'CREATE VIEW' = 50, 'CREATE DICTIONARY' = 51, 'CREATE TEMPORARY TABLE' = 52, 'CREATE ARBITRARY TEMPORARY TABLE' = 53, 'CREATE FUNCTION' = 54, 'CREATE NAMED COLLECTION' = 55, 'CREATE' = 56, 'DROP DATABASE' = 57, 'DROP TABLE' = 58, 'DROP VIEW' = 59, 'DROP DICTIONARY' = 60, 'DROP FUNCTION' = 61, 'DROP NAMED COLLECTION' = 62, 'DROP' = 63, 'UNDROP TABLE' = 64, 'TRUNCATE' = 65, 'OPTIMIZE' = 66, 'BACKUP' = 67, 'KILL QUERY' = 68, 'KILL TRANSACTION' = 69, 'MOVE PARTITION BETWEEN SHARDS' = 70, 'CREATE USER' = 71, 'ALTER USER' = 72, 'DROP USER' = 73, 'CREATE ROLE' = 74, 'ALTER ROLE' = 75, 'DROP ROLE' = 76, 'ROLE ADMIN' = 77, 'CREATE ROW POLICY' = 78, 'ALTER ROW POLICY' = 79, 'DROP ROW POLICY' = 80, 'CREATE QUOTA' = 81, 'ALTER QUOTA' = 82, 'DROP QUOTA' = 83, 'CREATE SETTINGS PROFILE' = 84, 'ALTER SETTINGS PROFILE' = 85, 'DROP SETTINGS PROFILE' = 86, 'SHOW USERS' = 87, 'SHOW ROLES' = 88, 'SHOW ROW POLICIES' = 89, 'SHOW QUOTAS' = 90, 'SHOW SETTINGS PROFILES' = 91, 'SHOW ACCESS' = 92, 'ACCESS MANAGEMENT' = 93, 'SHOW NAMED COLLECTIONS' = 94, 'SHOW NAMED COLLECTIONS SECRETS' = 95, 'NAMED COLLECTION CONTROL' = 96, 'SYSTEM SHUTDOWN' = 97, 'SYSTEM DROP DNS CACHE' = 98, 'SYSTEM DROP MARK CACHE' = 99, 'SYSTEM DROP UNCOMPRESSED CACHE' = 100, 'SYSTEM DROP MMAP CACHE' = 101, 'SYSTEM DROP QUERY CACHE' = 102, 'SYSTEM DROP COMPILED EXPRESSION CACHE' = 103, 'SYSTEM DROP FILESYSTEM CACHE' = 104, 'SYSTEM DROP SCHEMA CACHE' = 105, 'SYSTEM DROP S3 CLIENT CACHE' = 106, 'SYSTEM DROP CACHE' = 107, 'SYSTEM RELOAD CONFIG' = 108, 'SYSTEM RELOAD USERS' = 109, 'SYSTEM RELOAD SYMBOLS' = 110, 'SYSTEM RELOAD DICTIONARY' = 111, 'SYSTEM RELOAD MODEL' = 112, 'SYSTEM RELOAD FUNCTION' = 113, 'SYSTEM RELOAD EMBEDDED DICTIONARIES' = 114, 'SYSTEM RELOAD' = 115, 'SYSTEM RESTART DISK' = 116, 'SYSTEM MERGES' = 117, 'SYSTEM TTL MERGES' = 118, 'SYSTEM FETCHES' = 119, 'SYSTEM MOVES' = 120, 'SYSTEM DISTRIBUTED SENDS' = 121, 'SYSTEM REPLICATED SENDS' = 122, 'SYSTEM SENDS' = 123, 'SYSTEM REPLICATION QUEUES' = 124, 'SYSTEM DROP REPLICA' = 125, 'SYSTEM SYNC REPLICA' = 126, 'SYSTEM RESTART REPLICA' = 127, 'SYSTEM RESTORE REPLICA' = 128, 'SYSTEM WAIT LOADING PARTS' = 129, 'SYSTEM SYNC DATABASE REPLICA' = 130, 'SYSTEM SYNC TRANSACTION LOG' = 131, 'SYSTEM SYNC FILE CACHE' = 132, 'SYSTEM FLUSH DISTRIBUTED' = 133, 'SYSTEM FLUSH LOGS' = 134, 'SYSTEM FLUSH' = 135, 'SYSTEM THREAD FUZZER' = 136, 'SYSTEM UNFREEZE' = 137, 'SYSTEM FAILPOINT' = 138, 'SYSTEM' = 139, 'dictGet' = 140, 'displaySecretsInShowAndSelect' = 141, 'addressToLine' = 142, 'addressToLineWithInlines' = 143, 'addressToSymbol' = 144, 'demangle' = 145, 'INTROSPECTION' = 146, 'FILE' = 147, 'URL' = 148, 'REMOTE' = 149, 'MONGO' = 150, 'MEILISEARCH' = 151, 'MYSQL' = 152, 'POSTGRES' = 153, 'SQLITE' = 154, 'ODBC' = 155, 'JDBC' = 156, 'HDFS' = 157, 'S3' = 158, 'HIVE' = 159, 'AZURE' = 160, 'SOURCES' = 161, 'CLUSTER' = 162, 'ALL' = 163, 'NONE' = 164),
    `access_type` Enum16('SHOW DATABASES' = 0, 'SHOW TABLES' = 1, 'SHOW COLUMNS' = 2, 'SHOW DICTIONARIES' = 3, 'SHOW' = 4, 'SHOW FILESYSTEM CACHES' = 5, 'SELECT' = 6, 'INSERT' = 7, 'ALTER UPDATE' = 8, 'ALTER DELETE' = 9, 'ALTER ADD COLUMN' = 10, 'ALTER MODIFY COLUMN' = 11, 'ALTER DROP COLUMN' = 12, 'ALTER COMMENT COLUMN' = 13, 'ALTER CLEAR COLUMN' = 14, 'ALTER RENAME COLUMN' = 15, 'ALTER MATERIALIZE COLUMN' = 16, 'ALTER COLUMN' = 17, 'ALTER MODIFY COMMENT' = 18, 'ALTER ORDER BY' = 19, 'ALTER SAMPLE BY' = 20, 'ALTER ADD INDEX' = 21, 'ALTER DROP INDEX' = 22, 'ALTER MATERIALIZE INDEX' = 23, 'ALTER CLEAR INDEX' = 24, 'ALTER INDEX' = 25, 'ALTER ADD PROJECTION' = 26, 'ALTER DROP PROJECTION' = 27, 'ALTER MATERIALIZE PROJECTION' = 28, 'ALTER CLEAR PROJECTION' = 29, 'ALTER PROJECTION' = 30, 'ALTER ADD CONSTRAINT' = 31, 'ALTER DROP CONSTRAINT' = 32, 'ALTER CONSTRAINT' = 33, 'ALTER TTL' = 34, 'ALTER MATERIALIZE TTL' = 35, 'ALTER SETTINGS' = 36, 'ALTER MOVE PARTITION' = 37, 'ALTER FETCH PARTITION' = 38, 'ALTER FREEZE PARTITION' = 39, 'ALTER DATABASE SETTINGS' = 40, 'ALTER NAMED COLLECTION' = 41, 'ALTER TABLE' = 42, 'ALTER DATABASE' = 43, 'ALTER VIEW REFRESH' = 44, 'ALTER VIEW MODIFY QUERY' = 45, 'ALTER VIEW' = 46, 'ALTER' = 47, 'CREATE DATABASE' = 48, 'CREATE TABLE' = 49, 'CREATE VIEW' = 50, 'CREATE DICTIONARY' = 51, 'CREATE TEMPORARY TABLE' = 52, 'CREATE ARBITRARY TEMPORARY TABLE' = 53, 'CREATE FUNCTION' = 54, 'CREATE NAMED COLLECTION' = 55, 'CREATE' = 56, 'DROP DATABASE' = 57, 'DROP TABLE' = 58, 'DROP VIEW' = 59, 'DROP DICTIONARY' = 60, 'DROP FUNCTION' = 61, 'DROP NAMED COLLECTION' = 62, 'DROP' = 63, 'UNDROP TABLE' = 64, 'TRUNCATE' = 65, 'OPTIMIZE' = 66, 'BACKUP' = 67, 'KILL QUERY' = 68, 'KILL TRANSACTION' = 69, 'MOVE PARTITION BETWEEN SHARDS' = 70, 'CREATE USER' = 71, 'ALTER USER' = 72, 'DROP USER' = 73, 'CREATE ROLE' = 74, 'ALTER ROLE' = 75, 'DROP ROLE' = 76, 'ROLE ADMIN' = 77, 'CREATE ROW POLICY' = 78, 'ALTER ROW POLICY' = 79, 'DROP ROW POLICY' = 80, 'CREATE QUOTA' = 81, 'ALTER QUOTA' = 82, 'DROP QUOTA' = 83, 'CREATE SETTINGS PROFILE' = 84, 'ALTER SETTINGS PROFILE' = 85, 'DROP SETTINGS PROFILE' = 86, 'SHOW USERS' = 87, 'SHOW ROLES' = 88, 'SHOW ROW POLICIES' = 89, 'SHOW QUOTAS' = 90, 'SHOW SETTINGS PROFILES' = 91, 'SHOW ACCESS' = 92, 'ACCESS MANAGEMENT' = 93, 'SHOW NAMED COLLECTIONS' = 94, 'SHOW NAMED COLLECTIONS SECRETS' = 95, 'NAMED COLLECTION CONTROL' = 96, 'SYSTEM SHUTDOWN' = 97, 'SYSTEM DROP DNS CACHE' = 98, 'SYSTEM DROP MARK CACHE' = 99, 'SYSTEM DROP UNCOMPRESSED CACHE' = 100, 'SYSTEM DROP MMAP CACHE' = 101, 'SYSTEM DROP QUERY CACHE' = 102, 'SYSTEM DROP COMPILED EXPRESSION CACHE' = 103, 'SYSTEM DROP FILESYSTEM CACHE' = 104, 'SYSTEM DROP SCHEMA CACHE' = 105, 'SYSTEM DROP S3 CLIENT CACHE' = 106, 'SYSTEM DROP CACHE' = 107, 'SYSTEM RELOAD CONFIG' = 108, 'SYSTEM RELOAD USERS' = 109, 'SYSTEM RELOAD SYMBOLS' = 110, 'SYSTEM RELOAD DICTIONARY' = 111, 'SYSTEM RELOAD MODEL' = 112, 'SYSTEM RELOAD FUNCTION' = 113, 'SYSTEM RELOAD EMBEDDED DICTIONARIES' = 114, 'SYSTEM RELOAD' = 115, 'SYSTEM RESTART DISK' = 116, 'SYSTEM MERGES' = 117, 'SYSTEM TTL MERGES' = 118, 'SYSTEM FETCHES' = 119, 'SYSTEM MOVES' = 120, 'SYSTEM DISTRIBUTED SENDS' = 121, 'SYSTEM REPLICATED SENDS' = 122, 'SYSTEM SENDS' = 123, 'SYSTEM REPLICATION QUEUES' = 124, 'SYSTEM DROP REPLICA' = 125, 'SYSTEM SYNC REPLICA' = 126, 'SYSTEM RESTART REPLICA' = 127, 'SYSTEM RESTORE REPLICA' = 128, 'SYSTEM WAIT LOADING PARTS' = 129, 'SYSTEM SYNC DATABASE REPLICA' = 130, 'SYSTEM SYNC TRANSACTION LOG' = 131, 'SYSTEM SYNC FILE CACHE' = 132, 'SYSTEM FLUSH DISTRIBUTED' = 133, 'SYSTEM FLUSH LOGS' = 134, 'SYSTEM FLUSH' = 135, 'SYSTEM THREAD FUZZER' = 136, 'SYSTEM UNFREEZE' = 137, 'SYSTEM FAILPOINT' = 138, 'SYSTEM' = 139, 'dictGet' = 140, 'displaySecretsInShowAndSelect' = 141, 'addressToLine' = 142, 'addressToLineWithInlines' = 143, 'addressToSymbol' = 144, 'demangle' = 145, 'INTROSPECTION' = 146, 'FILE' = 147, 'URL' = 148, 'REMOTE' = 149, 'MONGO' = 150, 'REDIS' = 151, 'MEILISEARCH' = 152, 'MYSQL' = 153, 'POSTGRES' = 154, 'SQLITE' = 155, 'ODBC' = 156, 'JDBC' = 157, 'HDFS' = 158, 'S3' = 159, 'HIVE' = 160, 'AZURE' = 161, 'SOURCES' = 162, 'CLUSTER' = 163, 'ALL' = 164, 'NONE' = 165),
    `database` Nullable(String),
    `table` Nullable(String),
    `column` Nullable(String),
@ -581,10 +581,10 @@ ENGINE = SystemPartsColumns
COMMENT 'SYSTEM TABLE is built on the fly.'
CREATE TABLE system.privileges
(
    `privilege` Enum16('SHOW DATABASES' = 0, 'SHOW TABLES' = 1, 'SHOW COLUMNS' = 2, 'SHOW DICTIONARIES' = 3, 'SHOW' = 4, 'SHOW FILESYSTEM CACHES' = 5, 'SELECT' = 6, 'INSERT' = 7, 'ALTER UPDATE' = 8, 'ALTER DELETE' = 9, 'ALTER ADD COLUMN' = 10, 'ALTER MODIFY COLUMN' = 11, 'ALTER DROP COLUMN' = 12, 'ALTER COMMENT COLUMN' = 13, 'ALTER CLEAR COLUMN' = 14, 'ALTER RENAME COLUMN' = 15, 'ALTER MATERIALIZE COLUMN' = 16, 'ALTER COLUMN' = 17, 'ALTER MODIFY COMMENT' = 18, 'ALTER ORDER BY' = 19, 'ALTER SAMPLE BY' = 20, 'ALTER ADD INDEX' = 21, 'ALTER DROP INDEX' = 22, 'ALTER MATERIALIZE INDEX' = 23, 'ALTER CLEAR INDEX' = 24, 'ALTER INDEX' = 25, 'ALTER ADD PROJECTION' = 26, 'ALTER DROP PROJECTION' = 27, 'ALTER MATERIALIZE PROJECTION' = 28, 'ALTER CLEAR PROJECTION' = 29, 'ALTER PROJECTION' = 30, 'ALTER ADD CONSTRAINT' = 31, 'ALTER DROP CONSTRAINT' = 32, 'ALTER CONSTRAINT' = 33, 'ALTER TTL' = 34, 'ALTER MATERIALIZE TTL' = 35, 'ALTER SETTINGS' = 36, 'ALTER MOVE PARTITION' = 37, 'ALTER FETCH PARTITION' = 38, 'ALTER FREEZE PARTITION' = 39, 'ALTER DATABASE SETTINGS' = 40, 'ALTER NAMED COLLECTION' = 41, 'ALTER TABLE' = 42, 'ALTER DATABASE' = 43, 'ALTER VIEW REFRESH' = 44, 'ALTER VIEW MODIFY QUERY' = 45, 'ALTER VIEW' = 46, 'ALTER' = 47, 'CREATE DATABASE' = 48, 'CREATE TABLE' = 49, 'CREATE VIEW' = 50, 'CREATE DICTIONARY' = 51, 'CREATE TEMPORARY TABLE' = 52, 'CREATE ARBITRARY TEMPORARY TABLE' = 53, 'CREATE FUNCTION' = 54, 'CREATE NAMED COLLECTION' = 55, 'CREATE' = 56, 'DROP DATABASE' = 57, 'DROP TABLE' = 58, 'DROP VIEW' = 59, 'DROP DICTIONARY' = 60, 'DROP FUNCTION' = 61, 'DROP NAMED COLLECTION' = 62, 'DROP' = 63, 'UNDROP TABLE' = 64, 'TRUNCATE' = 65, 'OPTIMIZE' = 66, 'BACKUP' = 67, 'KILL QUERY' = 68, 'KILL TRANSACTION' = 69, 'MOVE PARTITION BETWEEN SHARDS' = 70, 'CREATE USER' = 71, 'ALTER USER' = 72, 'DROP USER' = 73, 'CREATE ROLE' = 74, 'ALTER ROLE' = 75, 'DROP ROLE' = 76, 'ROLE ADMIN' = 77, 'CREATE ROW POLICY' = 78, 'ALTER ROW POLICY' = 79, 'DROP ROW POLICY' = 80, 'CREATE QUOTA' = 81, 'ALTER QUOTA' = 82, 'DROP QUOTA' = 83, 'CREATE SETTINGS PROFILE' = 84, 'ALTER SETTINGS PROFILE' = 85, 'DROP SETTINGS PROFILE' = 86, 'SHOW USERS' = 87, 'SHOW ROLES' = 88, 'SHOW ROW POLICIES' = 89, 'SHOW QUOTAS' = 90, 'SHOW SETTINGS PROFILES' = 91, 'SHOW ACCESS' = 92, 'ACCESS MANAGEMENT' = 93, 'SHOW NAMED COLLECTIONS' = 94, 'SHOW NAMED COLLECTIONS SECRETS' = 95, 'NAMED COLLECTION CONTROL' = 96, 'SYSTEM SHUTDOWN' = 97, 'SYSTEM DROP DNS CACHE' = 98, 'SYSTEM DROP MARK CACHE' = 99, 'SYSTEM DROP UNCOMPRESSED CACHE' = 100, 'SYSTEM DROP MMAP CACHE' = 101, 'SYSTEM DROP QUERY CACHE' = 102, 'SYSTEM DROP COMPILED EXPRESSION CACHE' = 103, 'SYSTEM DROP FILESYSTEM CACHE' = 104, 'SYSTEM DROP SCHEMA CACHE' = 105, 'SYSTEM DROP S3 CLIENT CACHE' = 106, 'SYSTEM DROP CACHE' = 107, 'SYSTEM RELOAD CONFIG' = 108, 'SYSTEM RELOAD USERS' = 109, 'SYSTEM RELOAD SYMBOLS' = 110, 'SYSTEM RELOAD DICTIONARY' = 111, 'SYSTEM RELOAD MODEL' = 112, 'SYSTEM RELOAD FUNCTION' = 113, 'SYSTEM RELOAD EMBEDDED DICTIONARIES' = 114, 'SYSTEM RELOAD' = 115, 'SYSTEM RESTART DISK' = 116, 'SYSTEM MERGES' = 117, 'SYSTEM TTL MERGES' = 118, 'SYSTEM FETCHES' = 119, 'SYSTEM MOVES' = 120, 'SYSTEM DISTRIBUTED SENDS' = 121, 'SYSTEM REPLICATED SENDS' = 122, 'SYSTEM SENDS' = 123, 'SYSTEM REPLICATION QUEUES' = 124, 'SYSTEM DROP REPLICA' = 125, 'SYSTEM SYNC REPLICA' = 126, 'SYSTEM RESTART REPLICA' = 127, 'SYSTEM RESTORE REPLICA' = 128, 'SYSTEM WAIT LOADING PARTS' = 129, 'SYSTEM SYNC DATABASE REPLICA' = 130, 'SYSTEM SYNC TRANSACTION LOG' = 131, 'SYSTEM SYNC FILE CACHE' = 132, 'SYSTEM FLUSH DISTRIBUTED' = 133, 'SYSTEM FLUSH LOGS' = 134, 'SYSTEM FLUSH' = 135, 'SYSTEM THREAD FUZZER' = 136, 'SYSTEM UNFREEZE' = 137, 'SYSTEM FAILPOINT' = 138, 'SYSTEM' = 139, 'dictGet' = 140, 'displaySecretsInShowAndSelect' = 141, 'addressToLine' = 142, 'addressToLineWithInlines' = 143, 'addressToSymbol' = 144, 'demangle' = 145, 'INTROSPECTION' = 146, 'FILE' = 147, 'URL' = 148, 'REMOTE' = 149, 'MONGO' = 150, 'MEILISEARCH' = 151, 'MYSQL' = 152, 'POSTGRES' = 153, 'SQLITE' = 154, 'ODBC' = 155, 'JDBC' = 156, 'HDFS' = 157, 'S3' = 158, 'HIVE' = 159, 'AZURE' = 160, 'SOURCES' = 161, 'CLUSTER' = 162, 'ALL' = 163, 'NONE' = 164),
    `privilege` Enum16('SHOW DATABASES' = 0, 'SHOW TABLES' = 1, 'SHOW COLUMNS' = 2, 'SHOW DICTIONARIES' = 3, 'SHOW' = 4, 'SHOW FILESYSTEM CACHES' = 5, 'SELECT' = 6, 'INSERT' = 7, 'ALTER UPDATE' = 8, 'ALTER DELETE' = 9, 'ALTER ADD COLUMN' = 10, 'ALTER MODIFY COLUMN' = 11, 'ALTER DROP COLUMN' = 12, 'ALTER COMMENT COLUMN' = 13, 'ALTER CLEAR COLUMN' = 14, 'ALTER RENAME COLUMN' = 15, 'ALTER MATERIALIZE COLUMN' = 16, 'ALTER COLUMN' = 17, 'ALTER MODIFY COMMENT' = 18, 'ALTER ORDER BY' = 19, 'ALTER SAMPLE BY' = 20, 'ALTER ADD INDEX' = 21, 'ALTER DROP INDEX' = 22, 'ALTER MATERIALIZE INDEX' = 23, 'ALTER CLEAR INDEX' = 24, 'ALTER INDEX' = 25, 'ALTER ADD PROJECTION' = 26, 'ALTER DROP PROJECTION' = 27, 'ALTER MATERIALIZE PROJECTION' = 28, 'ALTER CLEAR PROJECTION' = 29, 'ALTER PROJECTION' = 30, 'ALTER ADD CONSTRAINT' = 31, 'ALTER DROP CONSTRAINT' = 32, 'ALTER CONSTRAINT' = 33, 'ALTER TTL' = 34, 'ALTER MATERIALIZE TTL' = 35, 'ALTER SETTINGS' = 36, 'ALTER MOVE PARTITION' = 37, 'ALTER FETCH PARTITION' = 38, 'ALTER FREEZE PARTITION' = 39, 'ALTER DATABASE SETTINGS' = 40, 'ALTER NAMED COLLECTION' = 41, 'ALTER TABLE' = 42, 'ALTER DATABASE' = 43, 'ALTER VIEW REFRESH' = 44, 'ALTER VIEW MODIFY QUERY' = 45, 'ALTER VIEW' = 46, 'ALTER' = 47, 'CREATE DATABASE' = 48, 'CREATE TABLE' = 49, 'CREATE VIEW' = 50, 'CREATE DICTIONARY' = 51, 'CREATE TEMPORARY TABLE' = 52, 'CREATE ARBITRARY TEMPORARY TABLE' = 53, 'CREATE FUNCTION' = 54, 'CREATE NAMED COLLECTION' = 55, 'CREATE' = 56, 'DROP DATABASE' = 57, 'DROP TABLE' = 58, 'DROP VIEW' = 59, 'DROP DICTIONARY' = 60, 'DROP FUNCTION' = 61, 'DROP NAMED COLLECTION' = 62, 'DROP' = 63, 'UNDROP TABLE' = 64, 'TRUNCATE' = 65, 'OPTIMIZE' = 66, 'BACKUP' = 67, 'KILL QUERY' = 68, 'KILL TRANSACTION' = 69, 'MOVE PARTITION BETWEEN SHARDS' = 70, 'CREATE USER' = 71, 'ALTER USER' = 72, 'DROP USER' = 73, 'CREATE ROLE' = 74, 'ALTER ROLE' = 75, 'DROP ROLE' = 76, 'ROLE ADMIN' = 77, 'CREATE ROW POLICY' = 78, 'ALTER ROW POLICY' = 79, 'DROP ROW POLICY' = 80, 'CREATE QUOTA' = 81, 'ALTER QUOTA' = 82, 'DROP QUOTA' = 83, 'CREATE SETTINGS PROFILE' = 84, 'ALTER SETTINGS PROFILE' = 85, 'DROP SETTINGS PROFILE' = 86, 'SHOW USERS' = 87, 'SHOW ROLES' = 88, 'SHOW ROW POLICIES' = 89, 'SHOW QUOTAS' = 90, 'SHOW SETTINGS PROFILES' = 91, 'SHOW ACCESS' = 92, 'ACCESS MANAGEMENT' = 93, 'SHOW NAMED COLLECTIONS' = 94, 'SHOW NAMED COLLECTIONS SECRETS' = 95, 'NAMED COLLECTION CONTROL' = 96, 'SYSTEM SHUTDOWN' = 97, 'SYSTEM DROP DNS CACHE' = 98, 'SYSTEM DROP MARK CACHE' = 99, 'SYSTEM DROP UNCOMPRESSED CACHE' = 100, 'SYSTEM DROP MMAP CACHE' = 101, 'SYSTEM DROP QUERY CACHE' = 102, 'SYSTEM DROP COMPILED EXPRESSION CACHE' = 103, 'SYSTEM DROP FILESYSTEM CACHE' = 104, 'SYSTEM DROP SCHEMA CACHE' = 105, 'SYSTEM DROP S3 CLIENT CACHE' = 106, 'SYSTEM DROP CACHE' = 107, 'SYSTEM RELOAD CONFIG' = 108, 'SYSTEM RELOAD USERS' = 109, 'SYSTEM RELOAD SYMBOLS' = 110, 'SYSTEM RELOAD DICTIONARY' = 111, 'SYSTEM RELOAD MODEL' = 112, 'SYSTEM RELOAD FUNCTION' = 113, 'SYSTEM RELOAD EMBEDDED DICTIONARIES' = 114, 'SYSTEM RELOAD' = 115, 'SYSTEM RESTART DISK' = 116, 'SYSTEM MERGES' = 117, 'SYSTEM TTL MERGES' = 118, 'SYSTEM FETCHES' = 119, 'SYSTEM MOVES' = 120, 'SYSTEM DISTRIBUTED SENDS' = 121, 'SYSTEM REPLICATED SENDS' = 122, 'SYSTEM SENDS' = 123, 'SYSTEM REPLICATION QUEUES' = 124, 'SYSTEM DROP REPLICA' = 125, 'SYSTEM SYNC REPLICA' = 126, 'SYSTEM RESTART REPLICA' = 127, 'SYSTEM RESTORE REPLICA' = 128, 'SYSTEM WAIT LOADING PARTS' = 129, 'SYSTEM SYNC DATABASE REPLICA' = 130, 'SYSTEM SYNC TRANSACTION LOG' = 131, 'SYSTEM SYNC FILE CACHE' = 132, 'SYSTEM FLUSH DISTRIBUTED' = 133, 'SYSTEM FLUSH LOGS' = 134, 'SYSTEM FLUSH' = 135, 'SYSTEM THREAD FUZZER' = 136, 'SYSTEM UNFREEZE' = 137, 'SYSTEM FAILPOINT' = 138, 'SYSTEM' = 139, 'dictGet' = 140, 'displaySecretsInShowAndSelect' = 141, 'addressToLine' = 142, 'addressToLineWithInlines' = 143, 'addressToSymbol' = 144, 'demangle' = 145, 'INTROSPECTION' = 146, 'FILE' = 147, 'URL' = 148, 'REMOTE' = 149, 'MONGO' = 150, 'REDIS' = 151, 'MEILISEARCH' = 152, 'MYSQL' = 153, 'POSTGRES' = 154, 'SQLITE' = 155, 'ODBC' = 156, 'JDBC' = 157, 'HDFS' = 158, 'S3' = 159, 'HIVE' = 160, 'AZURE' = 161, 'SOURCES' = 162, 'CLUSTER' = 163, 'ALL' = 164, 'NONE' = 165),
    `aliases` Array(String),
    `level` Nullable(Enum8('GLOBAL' = 0, 'DATABASE' = 1, 'TABLE' = 2, 'DICTIONARY' = 3, 'VIEW' = 4, 'COLUMN' = 5, 'NAMED_COLLECTION' = 6)),
    `parent_group` Nullable(Enum16('SHOW DATABASES' = 0, 'SHOW TABLES' = 1, 'SHOW COLUMNS' = 2, 'SHOW DICTIONARIES' = 3, 'SHOW' = 4, 'SHOW FILESYSTEM CACHES' = 5, 'SELECT' = 6, 'INSERT' = 7, 'ALTER UPDATE' = 8, 'ALTER DELETE' = 9, 'ALTER ADD COLUMN' = 10, 'ALTER MODIFY COLUMN' = 11, 'ALTER DROP COLUMN' = 12, 'ALTER COMMENT COLUMN' = 13, 'ALTER CLEAR COLUMN' = 14, 'ALTER RENAME COLUMN' = 15, 'ALTER MATERIALIZE COLUMN' = 16, 'ALTER COLUMN' = 17, 'ALTER MODIFY COMMENT' = 18, 'ALTER ORDER BY' = 19, 'ALTER SAMPLE BY' = 20, 'ALTER ADD INDEX' = 21, 'ALTER DROP INDEX' = 22, 'ALTER MATERIALIZE INDEX' = 23, 'ALTER CLEAR INDEX' = 24, 'ALTER INDEX' = 25, 'ALTER ADD PROJECTION' = 26, 'ALTER DROP PROJECTION' = 27, 'ALTER MATERIALIZE PROJECTION' = 28, 'ALTER CLEAR PROJECTION' = 29, 'ALTER PROJECTION' = 30, 'ALTER ADD CONSTRAINT' = 31, 'ALTER DROP CONSTRAINT' = 32, 'ALTER CONSTRAINT' = 33, 'ALTER TTL' = 34, 'ALTER MATERIALIZE TTL' = 35, 'ALTER SETTINGS' = 36, 'ALTER MOVE PARTITION' = 37, 'ALTER FETCH PARTITION' = 38, 'ALTER FREEZE PARTITION' = 39, 'ALTER DATABASE SETTINGS' = 40, 'ALTER NAMED COLLECTION' = 41, 'ALTER TABLE' = 42, 'ALTER DATABASE' = 43, 'ALTER VIEW REFRESH' = 44, 'ALTER VIEW MODIFY QUERY' = 45, 'ALTER VIEW' = 46, 'ALTER' = 47, 'CREATE DATABASE' = 48, 'CREATE TABLE' = 49, 'CREATE VIEW' = 50, 'CREATE DICTIONARY' = 51, 'CREATE TEMPORARY TABLE' = 52, 'CREATE ARBITRARY TEMPORARY TABLE' = 53, 'CREATE FUNCTION' = 54, 'CREATE NAMED COLLECTION' = 55, 'CREATE' = 56, 'DROP DATABASE' = 57, 'DROP TABLE' = 58, 'DROP VIEW' = 59, 'DROP DICTIONARY' = 60, 'DROP FUNCTION' = 61, 'DROP NAMED COLLECTION' = 62, 'DROP' = 63, 'UNDROP TABLE' = 64, 'TRUNCATE' = 65, 'OPTIMIZE' = 66, 'BACKUP' = 67, 'KILL QUERY' = 68, 'KILL TRANSACTION' = 69, 'MOVE PARTITION BETWEEN SHARDS' = 70, 'CREATE USER' = 71, 'ALTER USER' = 72, 'DROP USER' = 73, 'CREATE ROLE' = 74, 'ALTER ROLE' = 75, 'DROP ROLE' = 76, 'ROLE ADMIN' = 77, 'CREATE ROW POLICY' = 78, 'ALTER ROW POLICY' = 79, 'DROP ROW POLICY' = 80, 'CREATE QUOTA' = 81, 'ALTER QUOTA' = 82, 'DROP QUOTA' = 83, 'CREATE SETTINGS PROFILE' = 84, 'ALTER SETTINGS PROFILE' = 85, 'DROP SETTINGS PROFILE' = 86, 'SHOW USERS' = 87, 'SHOW ROLES' = 88, 'SHOW ROW POLICIES' = 89, 'SHOW QUOTAS' = 90, 'SHOW SETTINGS PROFILES' = 91, 'SHOW ACCESS' = 92, 'ACCESS MANAGEMENT' = 93, 'SHOW NAMED COLLECTIONS' = 94, 'SHOW NAMED COLLECTIONS SECRETS' = 95, 'NAMED COLLECTION CONTROL' = 96, 'SYSTEM SHUTDOWN' = 97, 'SYSTEM DROP DNS CACHE' = 98, 'SYSTEM DROP MARK CACHE' = 99, 'SYSTEM DROP UNCOMPRESSED CACHE' = 100, 'SYSTEM DROP MMAP CACHE' = 101, 'SYSTEM DROP QUERY CACHE' = 102, 'SYSTEM DROP COMPILED EXPRESSION CACHE' = 103, 'SYSTEM DROP FILESYSTEM CACHE' = 104, 'SYSTEM DROP SCHEMA CACHE' = 105, 'SYSTEM DROP S3 CLIENT CACHE' = 106, 'SYSTEM DROP CACHE' = 107, 'SYSTEM RELOAD CONFIG' = 108, 'SYSTEM RELOAD USERS' = 109, 'SYSTEM RELOAD SYMBOLS' = 110, 'SYSTEM RELOAD DICTIONARY' = 111, 'SYSTEM RELOAD MODEL' = 112, 'SYSTEM RELOAD FUNCTION' = 113, 'SYSTEM RELOAD EMBEDDED DICTIONARIES' = 114, 'SYSTEM RELOAD' = 115, 'SYSTEM RESTART DISK' = 116, 'SYSTEM MERGES' = 117, 'SYSTEM TTL MERGES' = 118, 'SYSTEM FETCHES' = 119, 'SYSTEM MOVES' = 120, 'SYSTEM DISTRIBUTED SENDS' = 121, 'SYSTEM REPLICATED SENDS' = 122, 'SYSTEM SENDS' = 123, 'SYSTEM REPLICATION QUEUES' = 124, 'SYSTEM DROP REPLICA' = 125, 'SYSTEM SYNC REPLICA' = 126, 'SYSTEM RESTART REPLICA' = 127, 'SYSTEM RESTORE REPLICA' = 128, 'SYSTEM WAIT LOADING PARTS' = 129, 'SYSTEM SYNC DATABASE REPLICA' = 130, 'SYSTEM SYNC TRANSACTION LOG' = 131, 'SYSTEM SYNC FILE CACHE' = 132, 'SYSTEM FLUSH DISTRIBUTED' = 133, 'SYSTEM FLUSH LOGS' = 134, 'SYSTEM FLUSH' = 135, 'SYSTEM THREAD FUZZER' = 136, 'SYSTEM UNFREEZE' = 137, 'SYSTEM FAILPOINT' = 138, 'SYSTEM' = 139, 'dictGet' = 140, 'displaySecretsInShowAndSelect' = 141, 'addressToLine' = 142, 'addressToLineWithInlines' = 143, 'addressToSymbol' = 144, 'demangle' = 145, 'INTROSPECTION' = 146, 'FILE' = 147, 'URL' = 148, 'REMOTE' = 149, 'MONGO' = 150, 'MEILISEARCH' = 151, 'MYSQL' = 152, 'POSTGRES' = 153, 'SQLITE' = 154, 'ODBC' = 155, 'JDBC' = 156, 'HDFS' = 157, 'S3' = 158, 'HIVE' = 159, 'AZURE' = 160, 'SOURCES' = 161, 'CLUSTER' = 162, 'ALL' = 163, 'NONE' = 164))
    `parent_group` Nullable(Enum16('SHOW DATABASES' = 0, 'SHOW TABLES' = 1, 'SHOW COLUMNS' = 2, 'SHOW DICTIONARIES' = 3, 'SHOW' = 4, 'SHOW FILESYSTEM CACHES' = 5, 'SELECT' = 6, 'INSERT' = 7, 'ALTER UPDATE' = 8, 'ALTER DELETE' = 9, 'ALTER ADD COLUMN' = 10, 'ALTER MODIFY COLUMN' = 11, 'ALTER DROP COLUMN' = 12, 'ALTER COMMENT COLUMN' = 13, 'ALTER CLEAR COLUMN' = 14, 'ALTER RENAME COLUMN' = 15, 'ALTER MATERIALIZE COLUMN' = 16, 'ALTER COLUMN' = 17, 'ALTER MODIFY COMMENT' = 18, 'ALTER ORDER BY' = 19, 'ALTER SAMPLE BY' = 20, 'ALTER ADD INDEX' = 21, 'ALTER DROP INDEX' = 22, 'ALTER MATERIALIZE INDEX' = 23, 'ALTER CLEAR INDEX' = 24, 'ALTER INDEX' = 25, 'ALTER ADD PROJECTION' = 26, 'ALTER DROP PROJECTION' = 27, 'ALTER MATERIALIZE PROJECTION' = 28, 'ALTER CLEAR PROJECTION' = 29, 'ALTER PROJECTION' = 30, 'ALTER ADD CONSTRAINT' = 31, 'ALTER DROP CONSTRAINT' = 32, 'ALTER CONSTRAINT' = 33, 'ALTER TTL' = 34, 'ALTER MATERIALIZE TTL' = 35, 'ALTER SETTINGS' = 36, 'ALTER MOVE PARTITION' = 37, 'ALTER FETCH PARTITION' = 38, 'ALTER FREEZE PARTITION' = 39, 'ALTER DATABASE SETTINGS' = 40, 'ALTER NAMED COLLECTION' = 41, 'ALTER TABLE' = 42, 'ALTER DATABASE' = 43, 'ALTER VIEW REFRESH' = 44, 'ALTER VIEW MODIFY QUERY' = 45, 'ALTER VIEW' = 46, 'ALTER' = 47, 'CREATE DATABASE' = 48, 'CREATE TABLE' = 49, 'CREATE VIEW' = 50, 'CREATE DICTIONARY' = 51, 'CREATE TEMPORARY TABLE' = 52, 'CREATE ARBITRARY TEMPORARY TABLE' = 53, 'CREATE FUNCTION' = 54, 'CREATE NAMED COLLECTION' = 55, 'CREATE' = 56, 'DROP DATABASE' = 57, 'DROP TABLE' = 58, 'DROP VIEW' = 59, 'DROP DICTIONARY' = 60, 'DROP FUNCTION' = 61, 'DROP NAMED COLLECTION' = 62, 'DROP' = 63, 'UNDROP TABLE' = 64, 'TRUNCATE' = 65, 'OPTIMIZE' = 66, 'BACKUP' = 67, 'KILL QUERY' = 68, 'KILL TRANSACTION' = 69, 'MOVE PARTITION BETWEEN SHARDS' = 70, 'CREATE USER' = 71, 'ALTER USER' = 72, 'DROP USER' = 73, 'CREATE ROLE' = 74, 'ALTER ROLE' = 75, 'DROP ROLE' = 76, 'ROLE ADMIN' = 77, 'CREATE ROW POLICY' = 78, 'ALTER ROW POLICY' = 79, 'DROP ROW POLICY' = 80, 'CREATE QUOTA' = 81, 'ALTER QUOTA' = 82, 'DROP QUOTA' = 83, 'CREATE SETTINGS PROFILE' = 84, 'ALTER SETTINGS PROFILE' = 85, 'DROP SETTINGS PROFILE' = 86, 'SHOW USERS' = 87, 'SHOW ROLES' = 88, 'SHOW ROW POLICIES' = 89, 'SHOW QUOTAS' = 90, 'SHOW SETTINGS PROFILES' = 91, 'SHOW ACCESS' = 92, 'ACCESS MANAGEMENT' = 93, 'SHOW NAMED COLLECTIONS' = 94, 'SHOW NAMED COLLECTIONS SECRETS' = 95, 'NAMED COLLECTION CONTROL' = 96, 'SYSTEM SHUTDOWN' = 97, 'SYSTEM DROP DNS CACHE' = 98, 'SYSTEM DROP MARK CACHE' = 99, 'SYSTEM DROP UNCOMPRESSED CACHE' = 100, 'SYSTEM DROP MMAP CACHE' = 101, 'SYSTEM DROP QUERY CACHE' = 102, 'SYSTEM DROP COMPILED EXPRESSION CACHE' = 103, 'SYSTEM DROP FILESYSTEM CACHE' = 104, 'SYSTEM DROP SCHEMA CACHE' = 105, 'SYSTEM DROP S3 CLIENT CACHE' = 106, 'SYSTEM DROP CACHE' = 107, 'SYSTEM RELOAD CONFIG' = 108, 'SYSTEM RELOAD USERS' = 109, 'SYSTEM RELOAD SYMBOLS' = 110, 'SYSTEM RELOAD DICTIONARY' = 111, 'SYSTEM RELOAD MODEL' = 112, 'SYSTEM RELOAD FUNCTION' = 113, 'SYSTEM RELOAD EMBEDDED DICTIONARIES' = 114, 'SYSTEM RELOAD' = 115, 'SYSTEM RESTART DISK' = 116, 'SYSTEM MERGES' = 117, 'SYSTEM TTL MERGES' = 118, 'SYSTEM FETCHES' = 119, 'SYSTEM MOVES' = 120, 'SYSTEM DISTRIBUTED SENDS' = 121, 'SYSTEM REPLICATED SENDS' = 122, 'SYSTEM SENDS' = 123, 'SYSTEM REPLICATION QUEUES' = 124, 'SYSTEM DROP REPLICA' = 125, 'SYSTEM SYNC REPLICA' = 126, 'SYSTEM RESTART REPLICA' = 127, 'SYSTEM RESTORE REPLICA' = 128, 'SYSTEM WAIT LOADING PARTS' = 129, 'SYSTEM SYNC DATABASE REPLICA' = 130, 'SYSTEM SYNC TRANSACTION LOG' = 131, 'SYSTEM SYNC FILE CACHE' = 132, 'SYSTEM FLUSH DISTRIBUTED' = 133, 'SYSTEM FLUSH LOGS' = 134, 'SYSTEM FLUSH' = 135, 'SYSTEM THREAD FUZZER' = 136, 'SYSTEM UNFREEZE' = 137, 'SYSTEM FAILPOINT' = 138, 'SYSTEM' = 139, 'dictGet' = 140, 'displaySecretsInShowAndSelect' = 141, 'addressToLine' = 142, 'addressToLineWithInlines' = 143, 'addressToSymbol' = 144, 'demangle' = 145, 'INTROSPECTION' = 146, 'FILE' = 147, 'URL' = 148, 'REMOTE' = 149, 'MONGO' = 150, 'REDIS' = 151, 'MEILISEARCH' = 152, 'MYSQL' = 153, 'POSTGRES' = 154, 'SQLITE' = 155, 'ODBC' = 156, 'JDBC' = 157, 'HDFS' = 158, 'S3' = 159, 'HIVE' = 160, 'AZURE' = 161, 'SOURCES' = 162, 'CLUSTER' = 163, 'ALL' = 164, 'NONE' = 165))
)
ENGINE = SystemPrivileges
COMMENT 'SYSTEM TABLE is built on the fly.'
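Note that inserting 'REDIS' = 151 shifts every later source enum value up by one, which is why the entire `access_type`, `privilege`, and `parent_group` lines change in the reference output rather than gaining a single entry.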
@ -13,6 +13,7 @@ null
numbers
numbers_mt
odbc
redis
remote
remoteSecure
url
@ -2046,6 +2046,7 @@ reconnection
recurse
redash
reddit
redis
redisstreams
refcounter
regexpExtract