Merge branch 'master' into fix-odbc

This commit is contained in:
alexey-milovidov 2021-04-17 21:29:03 +03:00 committed by GitHub
commit 212f68e279
20 changed files with 107 additions and 56 deletions

View File

@ -17,8 +17,6 @@ CREATE [ROW] POLICY [IF NOT EXISTS | OR REPLACE] policy_name1 [ON CLUSTER cluste
[TO {role1 [, role2 ...] | ALL | ALL EXCEPT role1 [, role2 ...]}]
```
The `ON CLUSTER` clause allows creating row policies on a cluster, see [Distributed DDL](../../../sql-reference/distributed-ddl.md).
## USING Clause {#create-row-policy-using}
Allows specifying a condition to filter rows. A user will see a row if the condition evaluates to non-zero for the row.
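For instance, a minimal sketch of a policy with a row filter (the policy name `pol0` and the condition `b > 0` are illustrative; `mydb.table1` follows the examples below):

``` sql
CREATE ROW POLICY pol0 ON mydb.table1 USING b > 0 TO ALL
```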
@ -30,20 +28,20 @@ In the section `TO` you can provide a list of users and roles this policy should
The keyword `ALL` means all ClickHouse users, including the current user. The keyword `ALL EXCEPT` allows excluding some users from the list of all users, for example: `CREATE ROW POLICY ... TO ALL EXCEPT accountant, john@localhost`
!!! note "Note"
If there are no row policies defined for a table, then any user can `SELECT` all the rows from the table.
Defining one or more row policies for the table makes access to the table depend on the row policies, no matter
whether those row policies are defined for the current user or not. For example, the following row policy
If there are no row policies defined for a table, then any user can `SELECT` all the rows from the table. Defining one or more row policies for the table makes access to the table depend on the row policies, no matter whether those row policies are defined for the current user or not. For example, the following policy
`CREATE ROW POLICY pol1 ON mydb.table1 USING b=1 TO mira, peter`
forbids the users `mira` and `peter` from seeing rows with `b != 1`, and any user not mentioned (e.g., the user `paul`) will see no rows from `mydb.table1` at all! If that isn't desirable, you can fix it by adding one more row policy, for example:
forbids the users `mira` and `peter` from seeing rows with `b != 1`, and any user not mentioned (e.g., the user `paul`) will see no rows from `mydb.table1` at all.
If that's not desirable, it can be fixed by adding one more row policy, like the following:
`CREATE ROW POLICY pol2 ON mydb.table1 USING 1 TO ALL EXCEPT mira, peter`
## AS Clause {#create-row-policy-as}
It's allowed to have more than one policy enabled on the same table for the same user at the same time.
So we need a way to combine the conditions from multiple policies.
It's allowed to have more than one policy enabled on the same table for the same user at the same time. So we need a way to combine the conditions from multiple policies.
By default, policies are combined using the boolean `OR` operator. For example, the following policies
``` sql
@ -53,14 +51,15 @@ CREATE ROW POLICY pol2 ON mydb.table1 USING c=2 TO peter, antonio
enable the user `peter` to see rows with either `b=1` or `c=2`.
The `AS` clause specifies how policies should be combined with other policies. Policies can be either permissive or restrictive.
By default, policies are permissive, which means they are combined using the boolean `OR` operator.
The `AS` clause specifies how policies should be combined with other policies. Policies can be either permissive or restrictive. By default, policies are permissive, which means they are combined using the boolean `OR` operator.
Alternatively, a policy can be defined as restrictive. Restrictive policies are combined using the boolean `AND` operator.
Here is the formula:
Here is the general formula:
```
row_is_visible = (one or more of the permissive policies' conditions are non-zero) AND (all of the restrictive policies' conditions are non-zero)
row_is_visible = (one or more of the permissive policies' conditions are non-zero) AND
                 (all of the restrictive policies' conditions are non-zero)
```
For example, the following policies
@ -72,6 +71,10 @@ CREATE ROW POLICY pol2 ON mydb.table1 USING c=2 AS RESTRICTIVE TO peter, antonio
enable the user `peter` to see rows only if both `b=1` AND `c=2` are true.
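For reference, a self-contained sketch of such a permissive/restrictive pair (the `pol1` definition here is assumed to match the earlier `OR` example):

``` sql
CREATE ROW POLICY pol1 ON mydb.table1 USING b=1 TO mira, peter
CREATE ROW POLICY pol2 ON mydb.table1 USING c=2 AS RESTRICTIVE TO peter, antonio
```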
## ON CLUSTER Clause {#create-row-policy-on-cluster}
Allows creating row policies on a cluster, see [Distributed DDL](../../../sql-reference/distributed-ddl.md).
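A minimal sketch, assuming a cluster named `mycluster` is defined in the server configuration:

``` sql
CREATE ROW POLICY pol1 ON CLUSTER mycluster ON mydb.table1 USING b=1 TO ALL
```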
## Examples

View File

@ -17,9 +17,7 @@ CREATE [ROW] POLICY [IF NOT EXISTS | OR REPLACE] policy_name1 [ON CLUSTER cluste
[TO {role [,...] | ALL | ALL EXCEPT role [,...]}]
```
The `ON CLUSTER` clause allows creating policies on a cluster, see [Distributed DDL](../../../sql-reference/distributed-ddl.md).
## USING Clause {#create-row-policy-using}
## USING Clause {#create-row-policy-using}
The `USING` clause specifies a condition for filtering rows. A user will see a row if the condition, evaluated for that row, yields a non-zero result.
@ -30,22 +28,21 @@ CREATE [ROW] POLICY [IF NOT EXISTS | OR REPLACE] policy_name1 [ON CLUSTER cluste
The keyword `ALL` denotes all users, including the current one. The keywords `ALL EXCEPT` allow excluding some users from the list of all users. For example: `CREATE ROW POLICY ... TO ALL EXCEPT accountant, john@localhost`
!!! note "Note"
If no row policies are defined for a table, then any user can execute `SELECT` and get all the rows of the table.
If at least one policy is defined for the table, then access to rows is governed by those policies, for all users
(even those for whom no policies were defined). For example, the following policy
If no row policies are defined for a table, then any user can execute a SELECT query and get all the rows of the table. If at least one policy is defined for the table, then access to rows is governed by those policies, for all users (even those for whom no policies were defined). For example, the following policy
`CREATE ROW POLICY pol1 ON mydb.table1 USING b=1 TO mira, peter`
forbids the users `mira` and `peter` from seeing rows with `b != 1`, and also forbids all other users (e.g., the user `paul`)
from seeing any rows at all from the table `mydb.table1`! If that is undesirable, this behavior can be fixed by defining an additional policy:
forbids the users `mira` and `peter` from seeing rows with `b != 1`, and also forbids all other users (e.g., the user `paul`) from seeing any rows at all from the table `mydb.table1`.
If that is undesirable, this behavior can be fixed by defining an additional policy:
`CREATE ROW POLICY pol2 ON mydb.table1 USING 1 TO ALL EXCEPT mira, peter`
## AS Clause {#create-row-policy-as}
More than one policy can be active at the same time for the same table and the same user.
So we need a way to combine policies. By default, policies are combined using the boolean `OR` operator.
For example, the policies:
More than one policy can be active at the same time for the same table and the same user. So we need a way to combine policies.
By default, policies are combined using the boolean `OR` operator. For example, the policies:
``` sql
CREATE ROW POLICY pol1 ON mydb.table1 USING b=1 TO mira, peter
@ -55,10 +52,15 @@ CREATE ROW POLICY pol2 ON mydb.table1 USING c=2 TO peter, antonio
allow the user named `peter` to see rows for which `b=1` or `c=2` holds.
The `AS` clause specifies how policies should be combined with other policies. Policies can be either permissive (`PERMISSIVE`) or restrictive (`RESTRICTIVE`). By default, policies are created as permissive (`PERMISSIVE`); such policies are combined using the boolean `OR` operator.
Restrictive (`RESTRICTIVE`) policies are combined using the boolean `AND` operator.
The following formula is used:
`row_is_visible = (one or more permissive policies yielded a non-zero condition result) AND (all restrictive policies yielded a non-zero condition result)`
Restrictive (`RESTRICTIVE`) policies are combined using the boolean `AND` operator.
The general formula looks like this:
```
row_is_visible = (one or more permissive policies yielded a non-zero condition result) AND
(all restrictive policies yielded a non-zero condition result)
```
For example, the policies
@ -69,6 +71,10 @@ CREATE ROW POLICY pol2 ON mydb.table1 USING c=2 AS RESTRICTIVE TO peter, antonio
allow the user named `peter` to see only those rows for which both `b=1` and `c=2` hold.
## ON CLUSTER Clause {#create-row-policy-on-cluster}
The `ON CLUSTER` clause allows creating policies on a cluster, see [Distributed DDL](../../../sql-reference/distributed-ddl.md).
## Examples
`CREATE ROW POLICY filter1 ON mydb.mytable USING a<1000 TO accountant, john@localhost`

View File

@ -122,7 +122,7 @@ namespace
else if (auto * data_uint64 = getIndexesData<UInt64>(column))
return mapUniqueIndexImpl(*data_uint64);
else
throw Exception("Indexes column for getUniqueIndex must be ColumnUInt, got" + column.getName(),
throw Exception("Indexes column for getUniqueIndex must be ColumnUInt, got " + column.getName(),
ErrorCodes::LOGICAL_ERROR);
}
}
@ -151,7 +151,7 @@ void ColumnLowCardinality::insertFrom(const IColumn & src, size_t n)
const auto * low_cardinality_src = typeid_cast<const ColumnLowCardinality *>(&src);
if (!low_cardinality_src)
throw Exception("Expected ColumnLowCardinality, got" + src.getName(), ErrorCodes::ILLEGAL_COLUMN);
throw Exception("Expected ColumnLowCardinality, got " + src.getName(), ErrorCodes::ILLEGAL_COLUMN);
size_t position = low_cardinality_src->getIndexes().getUInt(n);

View File

@ -66,7 +66,7 @@ ColumnPtr selectIndexImpl(const Column & column, const IColumn & indexes, size_t
else if (auto * data_uint64 = detail::getIndexesData<UInt64>(indexes))
return column.template indexImpl<UInt64>(*data_uint64, limit);
else
throw Exception("Indexes column for IColumn::select must be ColumnUInt, got" + indexes.getName(),
throw Exception("Indexes column for IColumn::select must be ColumnUInt, got " + indexes.getName(),
ErrorCodes::LOGICAL_ERROR);
}

View File

@ -466,7 +466,7 @@ namespace
else if (auto * data_uint64 = getIndexesData<UInt64>(column))
return mapIndexWithAdditionalKeys(*data_uint64, dict_size);
else
throw Exception("Indexes column for mapIndexWithAdditionalKeys must be UInt, got" + column.getName(),
throw Exception("Indexes column for mapIndexWithAdditionalKeys must be UInt, got " + column.getName(),
ErrorCodes::LOGICAL_ERROR);
}
}

View File

@ -58,7 +58,7 @@ std::pair<String, StoragePtr> createTableFromAST(
auto table_function = factory.get(ast_create_query.as_table_function, context);
ColumnsDescription columns;
if (ast_create_query.columns_list && ast_create_query.columns_list->columns)
columns = InterpreterCreateQuery::getColumnsDescription(*ast_create_query.columns_list->columns, context, false);
columns = InterpreterCreateQuery::getColumnsDescription(*ast_create_query.columns_list->columns, context, true);
StoragePtr storage = table_function->execute(ast_create_query.as_table_function, context, ast_create_query.table, std::move(columns));
storage->renameInMemory(ast_create_query);
return {ast_create_query.table, storage};
@ -69,7 +69,7 @@ std::pair<String, StoragePtr> createTableFromAST(
if (!ast_create_query.columns_list || !ast_create_query.columns_list->columns)
throw Exception("Missing definition of columns.", ErrorCodes::EMPTY_LIST_OF_COLUMNS_PASSED);
ColumnsDescription columns = InterpreterCreateQuery::getColumnsDescription(*ast_create_query.columns_list->columns, context, false);
ColumnsDescription columns = InterpreterCreateQuery::getColumnsDescription(*ast_create_query.columns_list->columns, context, true);
ConstraintsDescription constraints = InterpreterCreateQuery::getConstraintsDescription(ast_create_query.columns_list->constraints);
return

View File

@ -35,7 +35,7 @@ public:
{
const auto * type = typeid_cast<const DataTypeLowCardinality *>(arguments[0].get());
if (!type)
throw Exception("First first argument of function lowCardinalityIndexes must be ColumnLowCardinality, but got"
throw Exception("First first argument of function lowCardinalityIndexes must be ColumnLowCardinality, but got "
+ arguments[0]->getName(), ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
return std::make_shared<DataTypeUInt64>();

View File

@ -33,7 +33,7 @@ public:
{
const auto * type = typeid_cast<const DataTypeLowCardinality *>(arguments[0].get());
if (!type)
throw Exception("First first argument of function lowCardinalityKeys must be ColumnLowCardinality, but got"
throw Exception("First first argument of function lowCardinalityKeys must be ColumnLowCardinality, but got "
+ arguments[0]->getName(), ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
return type->getDictionaryType();

View File

@ -650,7 +650,7 @@ std::optional<NameAndTypePair> ActionsMatcher::getNameAndTypeFromAST(const ASTPt
return NameAndTypePair(child_column_name, node->result_type);
if (!data.only_consts)
throw Exception("Unknown identifier: " + child_column_name + " there are columns: " + data.actions_stack.dumpNames(),
throw Exception("Unknown identifier: " + child_column_name + "; there are columns: " + data.actions_stack.dumpNames(),
ErrorCodes::UNKNOWN_IDENTIFIER);
return {};

View File

@ -363,7 +363,7 @@ ASTPtr InterpreterCreateQuery::formatConstraints(const ConstraintsDescription &
}
ColumnsDescription InterpreterCreateQuery::getColumnsDescription(
const ASTExpressionList & columns_ast, ContextPtr context_, bool sanity_check_compression_codecs)
const ASTExpressionList & columns_ast, ContextPtr context_, bool attach)
{
/// First, deduce implicit types.
@ -372,6 +372,7 @@ ColumnsDescription InterpreterCreateQuery::getColumnsDescription(
ASTPtr default_expr_list = std::make_shared<ASTExpressionList>();
NamesAndTypesList column_names_and_types;
bool make_columns_nullable = !attach && context_->getSettingsRef().data_type_default_nullable;
for (const auto & ast : columns_ast.children)
{
@ -390,8 +391,7 @@ ColumnsDescription InterpreterCreateQuery::getColumnsDescription(
if (*col_decl.null_modifier)
column_type = makeNullable(column_type);
}
/// XXX: context_ or context ?
else if (context_->getSettingsRef().data_type_default_nullable)
else if (make_columns_nullable)
{
column_type = makeNullable(column_type);
}
@ -436,6 +436,7 @@ ColumnsDescription InterpreterCreateQuery::getColumnsDescription(
if (!default_expr_list->children.empty())
defaults_sample_block = validateColumnsDefaultsAndGetSampleBlock(default_expr_list, column_names_and_types, context_);
bool sanity_check_compression_codecs = !attach && !context_->getSettingsRef().allow_suspicious_codecs;
ColumnsDescription res;
auto name_type_it = column_names_and_types.begin();
for (auto ast_it = columns_ast.children.begin(); ast_it != columns_ast.children.end(); ++ast_it, ++name_type_it)
@ -511,8 +512,7 @@ InterpreterCreateQuery::TableProperties InterpreterCreateQuery::setProperties(AS
if (create.columns_list->columns)
{
bool sanity_check_compression_codecs = !create.attach && !getContext()->getSettingsRef().allow_suspicious_codecs;
properties.columns = getColumnsDescription(*create.columns_list->columns, getContext(), sanity_check_compression_codecs);
properties.columns = getColumnsDescription(*create.columns_list->columns, getContext(), create.attach);
}
if (create.columns_list->indices)

View File

@ -53,7 +53,7 @@ public:
/// Obtain information about columns, their types, default values and column comments,
/// for the case when columns in the CREATE query are specified explicitly.
static ColumnsDescription getColumnsDescription(const ASTExpressionList & columns, ContextPtr context, bool sanity_check_compression_codecs);
static ColumnsDescription getColumnsDescription(const ASTExpressionList & columns, ContextPtr context, bool attach);
static ConstraintsDescription getConstraintsDescription(const ASTExpressionList * constraints);
static void prepareOnClusterQuery(ASTCreateQuery & create, ContextPtr context, const String & cluster_name);

View File

@ -429,7 +429,7 @@ StoragePtr InterpreterSystemQuery::tryRestartReplica(const StorageID & replica,
auto & create = create_ast->as<ASTCreateQuery &>();
create.attach = true;
auto columns = InterpreterCreateQuery::getColumnsDescription(*create.columns_list->columns, system_context, false);
auto columns = InterpreterCreateQuery::getColumnsDescription(*create.columns_list->columns, system_context, true);
auto constraints = InterpreterCreateQuery::getConstraintsDescription(create.columns_list->constraints);
auto data_path = database->getTableDataPath(create);

View File

@ -25,7 +25,7 @@ ColumnsDescription parseColumnsListFromString(const std::string & structure, Con
if (!columns_list)
throw Exception("Could not cast AST to ASTExpressionList", ErrorCodes::LOGICAL_ERROR);
return InterpreterCreateQuery::getColumnsDescription(*columns_list, context, !settings.allow_suspicious_codecs);
return InterpreterCreateQuery::getColumnsDescription(*columns_list, context, false);
}
}

View File

@ -538,3 +538,26 @@ def test_odbc_long_column_names(started_cluster):
cursor.execute("DROP TABLE IF EXISTS clickhouse.test_long_column_names")
node1.query("DROP TABLE IF EXISTS test_long_column_names")
def test_odbc_long_text(started_cluster):
conn = get_postgres_conn()
cursor = conn.cursor()
cursor.execute("drop table if exists clickhouse.test_long_text")
cursor.execute("create table clickhouse.test_long_text(flen int, field1 text)");
# sample test from issue 9363
text_from_issue = """BEGIN These examples only show the order that data is arranged in. The values from different columns are stored separately, and data from the same column is stored together. Examples of a column-oriented DBMS: Vertica, Paraccel (Actian Matrix and Amazon Redshift), Sybase IQ, Exasol, Infobright, InfiniDB, MonetDB (VectorWise and Actian Vector), LucidDB, SAP HANA, Google Dremel, Google PowerDrill, Druid, and kdb+. Different orders for storing data are better suited to different scenarios. The data access scenario refers to what queries are made, how often, and in what proportion; how much data is read for each type of query rows, columns, and bytes; the relationship between reading and updating data; the working size of the data and how locally it is used; whether transactions are used, and how isolated they are; requirements for data replication and logical integrity; requirements for latency and throughput for each type of query, and so on. The higher the load on the system, the more important it is to customize the system set up to match the requirements of the usage scenario, and the more fine grained this customization becomes. There is no system that is equally well-suited to significantly different scenarios. If a system is adaptable to a wide set of scenarios, under a high load, the system will handle all the scenarios equally poorly, or will work well for just one or few of possible scenarios. Key Properties of OLAP Scenario¶ The vast majority of requests are for read access. Data is updated in fairly large batches (> 1000 rows), not by single rows; or it is not updated at all. Data is added to the DB but is not modified. For reads, quite a large number of rows are extracted from the DB, but only a small subset of columns. Tables are "wide," meaning they contain a large number of columns. Queries are relatively rare (usually hundreds of queries per server or less per second). For simple queries, latencies around 50 ms are allowed. Column values are fairly small: numbers and short strings (for example, 60 bytes per URL). Requires high throughput when processing a single query (up to billions of rows per second per server). Transactions are not necessary. Low requirements for data consistency. There is one large table per query. All tables are small, except for one. A query result is significantly smaller than the source data. In other words, data is filtered or aggregated, so the result fits in a single server"s RAM. It is easy to see that the OLAP scenario is very different from other popular scenarios (such as OLTP or Key-Value access). So it doesn"t make sense to try to use OLTP or a Key-Value DB for processing analytical queries if you want to get decent performance. For example, if you try to use MongoDB or Redis for analytics, you will get very poor performance compared to OLAP databases. Why Column-Oriented Databases Work Better in the OLAP Scenario¶ Column-oriented databases are better suited to OLAP scenarios: they are at least 100 times faster in processing most queries. The reasons are explained in detail below, but the fact is easier to demonstrate visually. END"""
cursor.execute("""insert into clickhouse.test_long_text (flen, field1) values (3248, '{}')""".format(text_from_issue));
node1.query('''
DROP TABLE IF EXISTS test_long_text;
CREATE TABLE test_long_text (flen UInt32, field1 String)
ENGINE = ODBC('DSN=postgresql_odbc; Servername=postgre-sql.local', 'clickhouse', 'test_long_text')''')
result = node1.query("select field1 from test_long_text;")
assert(result.strip() == text_from_issue)
long_text = "text" * 1000000
cursor.execute("""insert into clickhouse.test_long_text (flen, field1) values (400000, '{}')""".format(long_text));
result = node1.query("select field1 from test_long_text where flen=400000;")
assert(result.strip() == long_text)

View File

@ -26,6 +26,7 @@
<disk>s31</disk>
</external>
</volumes>
<move_factor>0.0</move_factor>
</hybrid>
</policies>
</storage_configuration>

View File

@ -36,6 +36,15 @@ def get_large_objects_count(cluster, size=100):
return counter
def wait_for_large_objects_count(cluster, expected, size=100, timeout=30):
while timeout > 0:
if get_large_objects_count(cluster, size) == expected:
return
timeout -= 1
time.sleep(1)
assert get_large_objects_count(cluster, size) == expected
@pytest.mark.parametrize(
"policy", ["s3"]
)
@ -67,23 +76,15 @@ def test_s3_zero_copy_replication(cluster, policy):
assert node1.query("SELECT * FROM s3_test order by id FORMAT Values") == "(0,'data'),(1,'data'),(2,'data'),(3,'data')"
# Based on version 20.x - two parts
assert get_large_objects_count(cluster) == 2
wait_for_large_objects_count(cluster, 2)
node1.query("OPTIMIZE TABLE s3_test")
time.sleep(1)
# Based on version 20.x - after merge, two old parts and one merged
assert get_large_objects_count(cluster) == 3
wait_for_large_objects_count(cluster, 3)
# Based on version 20.x - after cleanup - only one merged part
countdown = 60
while countdown > 0:
if get_large_objects_count(cluster) == 1:
break
time.sleep(1)
countdown -= 1
assert get_large_objects_count(cluster) == 1
wait_for_large_objects_count(cluster, 1, timeout=60)
node1.query("DROP TABLE IF EXISTS s3_test NO DELAY")
node2.query("DROP TABLE IF EXISTS s3_test NO DELAY")
@ -127,7 +128,7 @@ def test_s3_zero_copy_on_hybrid_storage(cluster):
assert node2.query("SELECT partition_id,disk_name FROM system.parts WHERE table='hybrid_test' FORMAT Values") == "('all','s31')"
# Check that after moving the partition to node2 there are no new objects on s3
assert get_large_objects_count(cluster, 0) == s3_objects
wait_for_large_objects_count(cluster, s3_objects, size=0)
assert node1.query("SELECT * FROM hybrid_test ORDER BY id FORMAT Values") == "(0,'data'),(1,'data')"
assert node2.query("SELECT * FROM hybrid_test ORDER BY id FORMAT Values") == "(0,'data'),(1,'data')"

View File

@ -2,3 +2,6 @@ Nullable(Int32) Int32 Nullable(Int32) Int32
CREATE TABLE default.data_null\n(\n `a` Nullable(Int32),\n `b` Int32,\n `c` Nullable(Int32),\n `d` Int32\n)\nENGINE = Memory
Nullable(Int32) Int32 Nullable(Int32) Nullable(Int32)
CREATE TABLE default.set_null\n(\n `a` Nullable(Int32),\n `b` Int32,\n `c` Nullable(Int32),\n `d` Nullable(Int32)\n)\nENGINE = Memory
CREATE TABLE default.set_null\n(\n `a` Nullable(Int32),\n `b` Int32,\n `c` Nullable(Int32),\n `d` Nullable(Int32)\n)\nENGINE = Memory
CREATE TABLE default.cannot_be_nullable\n(\n `n` Nullable(Int8),\n `a` Array(UInt8)\n)\nENGINE = Memory
CREATE TABLE default.cannot_be_nullable\n(\n `n` Nullable(Int8),\n `a` Array(UInt8)\n)\nENGINE = Memory

View File

@ -1,5 +1,6 @@
DROP TABLE IF EXISTS data_null;
DROP TABLE IF EXISTS set_null;
DROP TABLE IF EXISTS cannot_be_nullable;
SET data_type_default_nullable='false';
@ -45,6 +46,17 @@ INSERT INTO set_null VALUES (NULL, 2, NULL, NULL);
SELECT toTypeName(a), toTypeName(b), toTypeName(c), toTypeName(d) FROM set_null;
SHOW CREATE TABLE set_null;
DETACH TABLE set_null;
ATTACH TABLE set_null;
SHOW CREATE TABLE set_null;
CREATE TABLE cannot_be_nullable (n Int8, a Array(UInt8)) ENGINE=Memory; -- { serverError 43 }
CREATE TABLE cannot_be_nullable (n Int8, a Array(UInt8) NOT NULL) ENGINE=Memory;
SHOW CREATE TABLE cannot_be_nullable;
DETACH TABLE cannot_be_nullable;
ATTACH TABLE cannot_be_nullable;
SHOW CREATE TABLE cannot_be_nullable;
DROP TABLE data_null;
DROP TABLE set_null;
DROP TABLE cannot_be_nullable;

View File

@ -0,0 +1 @@
select case 1.1 when 0.1 then 'a' when 1.1 then 'b' when 2.1 then 'c' else 'default' end as f;