diff --git a/docs/ru/engines/table-engines/mergetree-family/collapsingmergetree.md b/docs/ru/engines/table-engines/mergetree-family/collapsingmergetree.md index dac490468d0..e3b4238a200 100644 --- a/docs/ru/engines/table-engines/mergetree-family/collapsingmergetree.md +++ b/docs/ru/engines/table-engines/mergetree-family/collapsingmergetree.md @@ -89,7 +89,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster] └─────────────────────┴───────────┴──────────┴──────┘ ``` -Первая строка отменяет предыдущее состояние объекта (пользователя). Она должен повторять все поля из ключа сортировки для отменённого состояния за исключением `Sign`. +Первая строка отменяет предыдущее состояние объекта (пользователя). Она должна повторять все поля из ключа сортировки для отменённого состояния за исключением `Sign`. Вторая строка содержит текущее состояние. diff --git a/docs/ru/engines/table-engines/mergetree-family/mergetree.md b/docs/ru/engines/table-engines/mergetree-family/mergetree.md index 7269cc023e4..24e0f8dbbb8 100644 --- a/docs/ru/engines/table-engines/mergetree-family/mergetree.md +++ b/docs/ru/engines/table-engines/mergetree-family/mergetree.md @@ -584,7 +584,7 @@ TTL d + INTERVAL 1 MONTH GROUP BY k1, k2 SET x = max(x), y = min(y); Данные с истекшим `TTL` удаляются, когда ClickHouse мёржит куски данных. -Когда ClickHouse видит, что некоторые данные устарели, он выполняет внеплановые мёржи. Для управление частотой подобных мёржей, можно задать настройку `merge_with_ttl_timeout`. Если её значение слишком низкое, придется выполнять много внеплановых мёржей, которые могут начать потреблять значительную долю ресурсов сервера. +Когда ClickHouse видит, что некоторые данные устарели, он выполняет внеплановые мёржи. Для управления частотой подобных мёржей, можно задать настройку `merge_with_ttl_timeout`. Если её значение слишком низкое, придется выполнять много внеплановых мёржей, которые могут начать потреблять значительную долю ресурсов сервера. Если вы выполните запрос `SELECT` между слияниями вы можете получить устаревшие данные. Чтобы избежать этого используйте запрос [OPTIMIZE](../../../engines/table-engines/mergetree-family/mergetree.md#misc_operations-optimize) перед `SELECT`. @@ -679,7 +679,7 @@ TTL d + INTERVAL 1 MONTH GROUP BY k1, k2 SET x = max(x), y = min(y); - `policy_name_N` — название политики. Названия политик должны быть уникальны. - `volume_name_N` — название тома. Названия томов должны быть уникальны. - `disk` — диск, находящийся внутри тома. -- `max_data_part_size_bytes` — максимальный размер куска данных, который может находится на любом из дисков этого тома. Если в результате слияния размер куска ожидается больше, чем max_data_part_size_bytes, то этот кусок будет записан в следующий том. В основном эта функция позволяет хранить новые / мелкие куски на горячем (SSD) томе и перемещать их на холодный (HDD) том, когда они достигают большого размера. Не используйте этот параметр, если политика имеет только один том. +- `max_data_part_size_bytes` — максимальный размер куска данных, который может находиться на любом из дисков этого тома. Если в результате слияния размер куска ожидается больше, чем max_data_part_size_bytes, то этот кусок будет записан в следующий том. В основном эта функция позволяет хранить новые / мелкие куски на горячем (SSD) томе и перемещать их на холодный (HDD) том, когда они достигают большого размера. Не используйте этот параметр, если политика имеет только один том. 
- `move_factor` — доля доступного свободного места на томе, если места становится меньше, то данные начнут перемещение на следующий том, если он есть (по умолчанию 0.1). Для перемещения куски сортируются по размеру от большего к меньшему (по убыванию) и выбираются куски, совокупный размер которых достаточен для соблюдения условия `move_factor`, если совокупный размер всех партов недостаточен, будут перемещены все парты. - `prefer_not_to_merge` — Отключает слияние кусков данных, хранящихся на данном томе. Если данная настройка включена, то слияние данных, хранящихся на данном томе, не допускается. Это позволяет контролировать работу ClickHouse с медленными дисками. @@ -730,7 +730,7 @@ TTL d + INTERVAL 1 MONTH GROUP BY k1, k2 SET x = max(x), y = min(y); В приведенном примере, политика `hdd_in_order` реализует прицип [round-robin](https://ru.wikipedia.org/wiki/Round-robin_(%D0%B0%D0%BB%D0%B3%D0%BE%D1%80%D0%B8%D1%82%D0%BC)). Так как в политике есть всего один том (`single`), то все записи производятся на его диски по круговому циклу. Такая политика может быть полезна при наличии в системе нескольких похожих дисков, но при этом не сконфигурирован RAID. Учтите, что каждый отдельный диск ненадёжен и чтобы не потерять важные данные это необходимо скомпенсировать за счет хранения данных в трёх копиях. -Если система содержит диски различных типов, то может пригодиться политика `moving_from_ssd_to_hdd`. В томе `hot` находится один SSD-диск (`fast_ssd`), а также задается ограничение на максимальный размер куска, который может храниться на этом томе (1GB). Все куски такой таблицы больше 1GB будут записываться сразу на том `cold`, в котором содержится один HDD-диск `disk1`. Также, при заполнении диска `fast_ssd` более чем на 80% данные будут переносится на диск `disk1` фоновым процессом. +Если система содержит диски различных типов, то может пригодиться политика `moving_from_ssd_to_hdd`. В томе `hot` находится один SSD-диск (`fast_ssd`), а также задается ограничение на максимальный размер куска, который может храниться на этом томе (1GB). Все куски такой таблицы больше 1GB будут записываться сразу на том `cold`, в котором содержится один HDD-диск `disk1`. Также при заполнении диска `fast_ssd` более чем на 80% данные будут переноситься на диск `disk1` фоновым процессом. Порядок томов в политиках хранения важен, при достижении условий на переполнение тома данные переносятся на следующий. Порядок дисков в томах так же важен, данные пишутся по очереди на каждый из них. diff --git a/docs/ru/interfaces/third-party/client-libraries.md b/docs/ru/interfaces/third-party/client-libraries.md index f55bbe2a47d..a4659e9ac4e 100644 --- a/docs/ru/interfaces/third-party/client-libraries.md +++ b/docs/ru/interfaces/third-party/client-libraries.md @@ -8,6 +8,7 @@ sidebar_label: "Клиентские библиотеки от сторонни :::danger "Disclaimer" Яндекс не поддерживает перечисленные ниже библиотеки и не проводит тщательного тестирования для проверки их качества. +::: - Python: - [infi.clickhouse_orm](https://github.com/Infinidat/infi.clickhouse_orm) diff --git a/docs/ru/operations/clickhouse-keeper.md b/docs/ru/operations/clickhouse-keeper.md index 67be83e13b2..3a931529b32 100644 --- a/docs/ru/operations/clickhouse-keeper.md +++ b/docs/ru/operations/clickhouse-keeper.md @@ -325,21 +325,21 @@ clickhouse-keeper-converter --zookeeper-logs-dir /var/lib/zookeeper/version-2 -- Например, для кластера из 3 нод, алгоритм кворума продолжает работать при отказе не более чем одной ноды. 
Конфигурация кластера может быть изменена динамически с некоторыми ограничениями. -Переконфигурация также использует Raft, поэтому для добавление новой ноды кластера или исключения старой ноды из него требуется достижения кворума в рамках текущей конфигурации кластера. +Переконфигурация также использует Raft, поэтому для добавления новой ноды кластера или исключения старой ноды требуется достижение кворума в рамках текущей конфигурации кластера. Если в вашем кластере произошел отказ большего числа нод, чем допускает Raft для вашей текущей конфигурации и у вас нет возможности восстановить их работоспособность, Raft перестанет работать и не позволит изменить конфигурацию стандартным механизмом. -Тем не менее ClickHousr Keeper имеет возможность запуститься в режиме восстановления, который позволяет переконфигурировать класте используя только одну ноду кластера. +Тем не менее ClickHouse Keeper имеет возможность запуститься в режиме восстановления, который позволяет переконфигурировать кластер, используя только одну ноду кластера. Этот механизм может использоваться только как крайняя мера, когда вы не можете восстановить существующие ноды кластера или запустить новый сервер с тем же идентификатором. Важно: - Удостоверьтесь, что отказавшие ноды не смогут в дальнейшем подключиться к кластеру в будущем. -- Не запускайте новые ноды, пока не завешите процедуру ниже. +- Не запускайте новые ноды, пока не завершите процедуру ниже. После того, как выполнили действия выше выполните следующие шаги. -1. Выберете одну ноду Keeper, которая станет новым лидером. Учтите, что данные которые с этой ноды будут испольщзованы всем кластером, поэтому рекомендуется выбрать ноду с наиболее актуальным состоянием. +1. Выберите одну ноду Keeper, которая станет новым лидером. Учтите, что данные с этой ноды будут использованы всем кластером, поэтому рекомендуется выбрать ноду с наиболее актуальным состоянием. 2. Перед дальнейшими действиям сделайте резервную копию данных из директорий `log_storage_path` и `snapshot_storage_path`. 3. Измените настройки на всех нодах кластера, которые вы собираетесь использовать. -4. Отправьте команду `rcvr` на ноду, которую вы выбрали или остановите ее и запустите заново с аргументом `--force-recovery`. Это переведет ноду в режим восстановления. +4. Отправьте команду `rcvr` на ноду, которую вы выбрали, или остановите ее и запустите заново с аргументом `--force-recovery`. Это переведет ноду в режим восстановления. 5. Запускайте остальные ноды кластера по одной и проверяйте, что команда `mntr` возвращает `follower` в выводе состояния `zk_server_state` перед тем, как запустить следующую ноду. -6. Пока нода работает в режиме восстановления, лидер будет возвращать ошибку на запрос `mntr` пока кворум не будет достигнут с помощью новых нод. Любые запросы от клиентов и постедователей будут возвращать ошибку. +6. Пока нода работает в режиме восстановления, лидер будет возвращать ошибку на запрос `mntr`, пока кворум не будет достигнут с помощью новых нод. Любые запросы от клиентов и последователей будут возвращать ошибку. 7. После достижения кворума лидер перейдет в нормальный режим работы и станет обрабатывать все запросы через Raft. Удостоверьтесь, что запрос `mntr` возвращает `leader` в выводе состояния `zk_server_state`.
diff --git a/docs/ru/operations/opentelemetry.md b/docs/ru/operations/opentelemetry.md index b6c5e89bcc6..4e127e9e0f0 100644 --- a/docs/ru/operations/opentelemetry.md +++ b/docs/ru/operations/opentelemetry.md @@ -10,6 +10,7 @@ ClickHouse поддерживает [OpenTelemetry](https://opentelemetry.io/) :::danger "Предупреждение" Поддержка стандарта экспериментальная и будет со временем меняться. +::: ## Обеспечение поддержки контекста трассировки в ClickHouse diff --git a/docs/ru/operations/server-configuration-parameters/settings.md b/docs/ru/operations/server-configuration-parameters/settings.md index bffa3c39a60..e29b9def9d4 100644 --- a/docs/ru/operations/server-configuration-parameters/settings.md +++ b/docs/ru/operations/server-configuration-parameters/settings.md @@ -26,6 +26,7 @@ ClickHouse перезагружает встроенные словари с з :::danger "Внимание" Лучше не использовать, если вы только начали работать с ClickHouse. +::: Общий вид конфигурации: @@ -1064,6 +1065,7 @@ ClickHouse использует потоки из глобального пул :::danger "Обратите внимание" Завершающий слеш обязателен. +::: **Пример** @@ -1330,6 +1332,7 @@ TCP порт для защищённого обмена данными с кли :::danger "Обратите внимание" Завершающий слеш обязателен. +::: **Пример** diff --git a/docs/ru/operations/storing-data.md b/docs/ru/operations/storing-data.md index 2f5c9c95ea4..56081c82bc9 100644 --- a/docs/ru/operations/storing-data.md +++ b/docs/ru/operations/storing-data.md @@ -82,7 +82,7 @@ sidebar_label: "Хранение данных на внешних дисках" - `type` — `encrypted`. Иначе зашифрованный диск создан не будет. - `disk` — тип диска для хранения данных. -- `key` — ключ для шифрования и расшифровки. Тип: [Uint64](../sql-reference/data-types/int-uint.md). Вы можете использовать параметр `key_hex` для шифрования в шестнадцатеричной форме. +- `key` — ключ для шифрования и расшифровки. Тип: [UInt64](../sql-reference/data-types/int-uint.md). Вы можете использовать параметр `key_hex` для шифрования в шестнадцатеричной форме. Вы можете указать несколько ключей, используя атрибут `id` (смотрите пример выше). Необязательные параметры: diff --git a/docs/ru/sql-reference/data-types/aggregatefunction.md b/docs/ru/sql-reference/data-types/aggregatefunction.md index 21b452acb1d..e42b467e4af 100644 --- a/docs/ru/sql-reference/data-types/aggregatefunction.md +++ b/docs/ru/sql-reference/data-types/aggregatefunction.md @@ -6,7 +6,7 @@ sidebar_label: AggregateFunction # AggregateFunction {#data-type-aggregatefunction} -Агрегатные функции могут обладать определяемым реализацией промежуточным состоянием, которое может быть сериализовано в тип данных, соответствующий AggregateFunction(…), и быть записано в таблицу обычно посредством [материализованного представления] (../../sql-reference/statements/create/view.md). Чтобы получить промежуточное состояние, обычно используются агрегатные функции с суффиксом `-State`. Чтобы в дальнейшем получить агрегированные данные необходимо использовать те же агрегатные функции с суффиксом `-Merge`. +Агрегатные функции могут обладать определяемым реализацией промежуточным состоянием, которое может быть сериализовано в тип данных, соответствующий AggregateFunction(…), и быть записано в таблицу обычно посредством [материализованного представления](../../sql-reference/statements/create/view.md). Чтобы получить промежуточное состояние, обычно используются агрегатные функции с суффиксом `-State`. Чтобы в дальнейшем получить агрегированные данные необходимо использовать те же агрегатные функции с суффиксом `-Merge`. 
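A minimal sketch of the `-State` / `-Merge` round trip described in the paragraph above. The table and column names (`visits_raw`, `visits_agg`, `user_id`) are hypothetical, not taken from the patch:

```sql
-- Table that stores the intermediate aggregation state.
CREATE TABLE visits_agg
(
    day Date,
    uniq_users AggregateFunction(uniq, UInt64)
)
ENGINE = AggregatingMergeTree
ORDER BY day;

-- Write states with the -State suffix (often done via a materialized view).
INSERT INTO visits_agg
SELECT toDate(event_time) AS day, uniqState(user_id)
FROM visits_raw
GROUP BY day;

-- Read the final values back with the matching -Merge suffix.
SELECT day, uniqMerge(uniq_users) AS uniq_users
FROM visits_agg
GROUP BY day;
```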
`AggregateFunction(name, types_of_arguments…)` — параметрический тип данных. diff --git a/docs/ru/sql-reference/data-types/geo.md b/docs/ru/sql-reference/data-types/geo.md index a7c5f79b0be..24d981195f5 100644 --- a/docs/ru/sql-reference/data-types/geo.md +++ b/docs/ru/sql-reference/data-types/geo.md @@ -10,6 +10,7 @@ ClickHouse поддерживает типы данных для отображ :::danger "Предупреждение" Сейчас использование типов данных для работы с географическими структурами является экспериментальной возможностью. Чтобы использовать эти типы данных, включите настройку `allow_experimental_geo_types = 1`. +::: **См. также** - [Хранение географических структур данных](https://ru.wikipedia.org/wiki/GeoJSON). diff --git a/docs/ru/sql-reference/data-types/special-data-types/interval.md b/docs/ru/sql-reference/data-types/special-data-types/interval.md index 856275ed8f2..109ceee7852 100644 --- a/docs/ru/sql-reference/data-types/special-data-types/interval.md +++ b/docs/ru/sql-reference/data-types/special-data-types/interval.md @@ -10,6 +10,7 @@ sidebar_label: Interval :::danger "Внимание" Нельзя использовать типы данных `Interval` для хранения данных в таблице. +::: Структура: diff --git a/docs/ru/sql-reference/data-types/tuple.md b/docs/ru/sql-reference/data-types/tuple.md index 76370d01c0d..8953134d154 100644 --- a/docs/ru/sql-reference/data-types/tuple.md +++ b/docs/ru/sql-reference/data-types/tuple.md @@ -34,7 +34,7 @@ SELECT tuple(1,'a') AS x, toTypeName(x) ## Особенности работы с типами данных {#osobennosti-raboty-s-tipami-dannykh} -При создании кортежа «на лету» ClickHouse автоматически определяет тип каждого аргументов как минимальный из типов, который может сохранить значение аргумента. Если аргумент — [NULL](../../sql-reference/data-types/tuple.md#null-literal), то тип элемента кортежа — [Nullable](nullable.md). +При создании кортежа «на лету» ClickHouse автоматически определяет тип всех аргументов как минимальный из типов, который может сохранить значение аргумента. Если аргумент — [NULL](../../sql-reference/data-types/tuple.md#null-literal), то тип элемента кортежа — [Nullable](nullable.md). Пример автоматического определения типа данных: diff --git a/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-polygon.md b/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-polygon.md index 64637edc4a4..24f29d3bf53 100644 --- a/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-polygon.md +++ b/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-polygon.md @@ -61,7 +61,7 @@ LAYOUT(POLYGON(STORE_POLYGON_KEY_COLUMN 1)) - Мультиполигон. Представляет из себя массив полигонов. Каждый полигон задается двумерным массивом точек — первый элемент этого массива задает внешнюю границу полигона, последующие элементы могут задавать дырки, вырезаемые из него. -Точки могут задаваться массивом или кортежем из своих координат. В текущей реализации поддерживается только двумерные точки. +Точки могут задаваться массивом или кортежем из своих координат. В текущей реализации поддерживаются только двумерные точки. Пользователь может [загружать свои собственные данные](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md) во всех поддерживаемых ClickHouse форматах. @@ -80,7 +80,7 @@ LAYOUT(POLYGON(STORE_POLYGON_KEY_COLUMN 1)) - `POLYGON`. Синоним к `POLYGON_INDEX_CELL`. 
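A hedged DDL sketch of how one of the layouts listed above might be selected when the dictionary is created; the dictionary name, attribute, and source table are invented, and the key type follows the multipolygon representation described earlier:

```sql
CREATE DICTIONARY regions_polygon_dict
(
    key Array(Array(Array(Array(Float64)))), -- multipolygon key
    region_name String DEFAULT ''
)
PRIMARY KEY key
SOURCE(CLICKHOUSE(TABLE 'region_polygons'))
LIFETIME(0)
LAYOUT(POLYGON_INDEX_CELL());
```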
Запросы к словарю осуществляются с помощью стандартных [функций](../../../sql-reference/functions/ext-dict-functions.md) для работы со внешними словарями. -Важным отличием является то, что здесь ключами будут являются точки, для которых хочется найти содержащий их полигон. +Важным отличием является то, что здесь ключами являются точки, для которых хочется найти содержащий их полигон. **Пример** diff --git a/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md b/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md index 8c01b8295bf..a711287ae8e 100644 --- a/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md +++ b/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md @@ -59,6 +59,7 @@ ClickHouse поддерживает следующие виды ключей: :::danger "Обратите внимание" Ключ не надо дополнительно описывать в атрибутах. +::: ### Числовой ключ {#ext_dict-numeric-key} diff --git a/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts.md b/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts.md index 314fefab5eb..a262a354889 100644 --- a/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts.md +++ b/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts.md @@ -14,7 +14,7 @@ ClickHouse: - Периодически обновляет их и динамически подгружает отсутствующие значения. - Позволяет создавать внешние словари с помощью xml-файлов или [DDL-запросов](../../statements/create/dictionary.md#create-dictionary-query). -Конфигурация внешних словарей может находится в одном или нескольких xml-файлах. Путь к конфигурации указывается в параметре [dictionaries_config](../../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-dictionaries_config). +Конфигурация внешних словарей может находиться в одном или нескольких xml-файлах. Путь к конфигурации указывается в параметре [dictionaries_config](../../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-dictionaries_config). Словари могут загружаться при старте сервера или при первом использовании, в зависимости от настройки [dictionaries_lazy_load](../../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-dictionaries_lazy_load). diff --git a/docs/ru/sql-reference/functions/introspection.md b/docs/ru/sql-reference/functions/introspection.md index 7d04dff6b72..26497ef21d3 100644 --- a/docs/ru/sql-reference/functions/introspection.md +++ b/docs/ru/sql-reference/functions/introspection.md @@ -22,7 +22,7 @@ sidebar_label: "Функции интроспекции" ClickHouse сохраняет отчеты профилировщика в [журнал трассировки](../../operations/system-tables/trace_log.md#system_tables-trace_log) в системной таблице. Убедитесь, что таблица и профилировщик настроены правильно. -## addresssToLine {#addresstoline} +## addressToLine {#addresstoline} Преобразует адрес виртуальной памяти внутри процесса сервера ClickHouse в имя файла и номер строки в исходном коде ClickHouse. diff --git a/docs/ru/sql-reference/operators/exists.md b/docs/ru/sql-reference/operators/exists.md index 3fc085fe021..38855abbcf3 100644 --- a/docs/ru/sql-reference/operators/exists.md +++ b/docs/ru/sql-reference/operators/exists.md @@ -8,7 +8,8 @@ slug: /ru/sql-reference/operators/exists `EXISTS` может быть использован в секции [WHERE](../../sql-reference/statements/select/where.md). 
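A minimal usage sketch for `EXISTS` in `WHERE`, deliberately non-correlated to stay within the limitation noted in the warning below; the table names are hypothetical:

```sql
-- The subquery does not reference the outer query, so the condition is
-- either true for every row of `hits` or for none of them.
SELECT count()
FROM hits
WHERE EXISTS (SELECT 1 FROM visits WHERE duration > 60);
```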
:::danger "Предупреждение" - Ссылки на таблицы или столбцы основного запроса не поддерживаются в подзапросе. + Ссылки на таблицы или столбцы основного запроса не поддерживаются в подзапросе. +::: **Синтаксис** diff --git a/docs/ru/sql-reference/operators/in.md b/docs/ru/sql-reference/operators/in.md index fa679b890a7..60400fb2b31 100644 --- a/docs/ru/sql-reference/operators/in.md +++ b/docs/ru/sql-reference/operators/in.md @@ -38,9 +38,9 @@ SELECT '1' IN (SELECT 1); └──────────────────────┘ ``` -Если в качестве правой части оператора указано имя таблицы (например, `UserID IN users`), то это эквивалентно подзапросу `UserID IN (SELECT * FROM users)`. Это используется при работе с внешними данными, отправляемым вместе с запросом. Например, вместе с запросом может быть отправлено множество идентификаторов посетителей, загруженное во временную таблицу users, по которому следует выполнить фильтрацию. +Если в качестве правой части оператора указано имя таблицы (например, `UserID IN users`), то это эквивалентно подзапросу `UserID IN (SELECT * FROM users)`. Это используется при работе с внешними данными, отправляемыми вместе с запросом. Например, вместе с запросом может быть отправлено множество идентификаторов посетителей, загруженное во временную таблицу users, по которому следует выполнить фильтрацию. -Если в качестве правой части оператора, указано имя таблицы, имеющий движок Set (подготовленное множество, постоянно находящееся в оперативке), то множество не будет создаваться заново при каждом запросе. +Если в качестве правой части оператора, указано имя таблицы, имеющей движок Set (подготовленное множество, постоянно находящееся в оперативке), то множество не будет создаваться заново при каждом запросе. В подзапросе может быть указано более одного столбца для фильтрации кортежей. Пример: @@ -49,9 +49,9 @@ SELECT '1' IN (SELECT 1); SELECT (CounterID, UserID) IN (SELECT CounterID, UserID FROM ...) FROM ... ``` -Типы столбцов слева и справа оператора IN, должны совпадать. +Типы столбцов слева и справа оператора IN должны совпадать. -Оператор IN и подзапрос могут встречаться в любой части запроса, в том числе в агрегатных и лямбда функциях. +Оператор IN и подзапрос могут встречаться в любой части запроса, в том числе в агрегатных и лямбда-функциях. Пример: ``` sql @@ -122,7 +122,7 @@ FROM t_null Существует два варианта IN-ов с подзапросами (аналогично для JOIN-ов): обычный `IN` / `JOIN` и `GLOBAL IN` / `GLOBAL JOIN`. Они отличаются способом выполнения при распределённой обработке запроса. -:::note "Attention" +:::note "Внимание" Помните, что алгоритмы, описанные ниже, могут работать иначе в зависимости от [настройки](../../operations/settings/settings.md) `distributed_product_mode`. ::: При использовании обычного IN-а, запрос отправляется на удалённые серверы, и на каждом из них выполняются подзапросы в секциях `IN` / `JOIN`. 
@@ -228,7 +228,7 @@ SELECT CounterID, count() FROM distributed_table_1 WHERE UserID IN (SELECT UserI SETTINGS max_parallel_replicas=3 ``` -преобразуются на каждом сервере в +преобразуется на каждом сервере в ```sql SELECT CounterID, count() FROM local_table_1 WHERE UserID IN (SELECT UserID FROM local_table_2 WHERE CounterID < 100) diff --git a/docs/ru/sql-reference/operators/index.md b/docs/ru/sql-reference/operators/index.md index 57c426cb5ad..b5fec3cb38c 100644 --- a/docs/ru/sql-reference/operators/index.md +++ b/docs/ru/sql-reference/operators/index.md @@ -263,6 +263,7 @@ SELECT toDateTime('2014-10-26 00:00:00', 'Europe/Moscow') AS time, time + 60 * 6 │ 2014-10-26 00:00:00 │ 2014-10-26 23:00:00 │ 2014-10-27 00:00:00 │ └─────────────────────┴─────────────────────┴─────────────────────┘ ``` +::: **Смотрите также** diff --git a/docs/ru/sql-reference/statements/alter/view.md b/docs/ru/sql-reference/statements/alter/view.md index 2d4823bba3a..e6f6730ff99 100644 --- a/docs/ru/sql-reference/statements/alter/view.md +++ b/docs/ru/sql-reference/statements/alter/view.md @@ -6,7 +6,7 @@ sidebar_label: VIEW # Выражение ALTER TABLE … MODIFY QUERY {#alter-modify-query} -Вы можеие изменить запрос `SELECT`, который был задан при создании [материализованного представления](../create/view.md#materialized), с помощью запроса 'ALTER TABLE … MODIFY QUERY'. Используйте его если при создании материализованного представления не использовалась секция `TO [db.]name`. Настройка `allow_experimental_alter_materialized_view_structure` должна быть включена. +Вы можете изменить запрос `SELECT`, который был задан при создании [материализованного представления](../create/view.md#materialized), с помощью запроса 'ALTER TABLE … MODIFY QUERY'. Используйте его, если при создании материализованного представления не использовалась секция `TO [db.]name`. Настройка `allow_experimental_alter_materialized_view_structure` должна быть включена. Если при создании материализованного представления использовалась конструкция `TO [db.]name`, то для изменения отсоедините представление с помощью [DETACH](../detach.md), измените таблицу с помощью [ALTER TABLE](index.md), а затем снова присоедините запрос с помощью [ATTACH](../attach.md). diff --git a/docs/ru/sql-reference/statements/optimize.md b/docs/ru/sql-reference/statements/optimize.md index b70bba2d765..26993183232 100644 --- a/docs/ru/sql-reference/statements/optimize.md +++ b/docs/ru/sql-reference/statements/optimize.md @@ -10,6 +10,7 @@ sidebar_label: OPTIMIZE :::danger "Внимание" `OPTIMIZE` не устраняет причину появления ошибки `Too many parts`.
+::: **Синтаксис** diff --git a/src/Analyzer/FunctionNode.cpp b/src/Analyzer/FunctionNode.cpp index 718dcf4bb58..fe170c8482e 100644 --- a/src/Analyzer/FunctionNode.cpp +++ b/src/Analyzer/FunctionNode.cpp @@ -2,18 +2,21 @@ #include #include -#include -#include #include #include +#include +#include + #include #include #include +#include +#include #include namespace DB @@ -44,17 +47,29 @@ const DataTypes & FunctionNode::getArgumentTypes() const ColumnsWithTypeAndName FunctionNode::getArgumentColumns() const { const auto & arguments = getArguments().getNodes(); + size_t arguments_size = arguments.size(); + ColumnsWithTypeAndName argument_columns; argument_columns.reserve(arguments.size()); - for (const auto & arg : arguments) + for (size_t i = 0; i < arguments_size; ++i) { - ColumnWithTypeAndName argument; - argument.type = arg->getResultType(); - if (auto * constant = arg->as()) - argument.column = argument.type->createColumnConst(1, constant->getValue()); - argument_columns.push_back(std::move(argument)); + const auto & argument = arguments[i]; + + ColumnWithTypeAndName argument_column; + + if (isNameOfInFunction(function_name) && i == 1) + argument_column.type = std::make_shared(); + else + argument_column.type = argument->getResultType(); + + auto * constant = argument->as(); + if (constant && !isNotCreatable(argument_column.type)) + argument_column.column = argument_column.type->createColumnConst(1, constant->getValue()); + + argument_columns.push_back(std::move(argument_column)); } + return argument_columns; } diff --git a/src/Analyzer/InDepthQueryTreeVisitor.h b/src/Analyzer/InDepthQueryTreeVisitor.h index af69fc55589..1cc48fb1e53 100644 --- a/src/Analyzer/InDepthQueryTreeVisitor.h +++ b/src/Analyzer/InDepthQueryTreeVisitor.h @@ -99,8 +99,9 @@ class InDepthQueryTreeVisitorWithContext public: using VisitQueryTreeNodeType = std::conditional_t; - explicit InDepthQueryTreeVisitorWithContext(ContextPtr context) + explicit InDepthQueryTreeVisitorWithContext(ContextPtr context, size_t initial_subquery_depth = 0) : current_context(std::move(context)) + , subquery_depth(initial_subquery_depth) {} /// Return true if visitor should traverse tree top to bottom, false otherwise @@ -125,11 +126,17 @@ public: return current_context->getSettingsRef(); } + size_t getSubqueryDepth() const + { + return subquery_depth; + } + void visit(VisitQueryTreeNodeType & query_tree_node) { auto current_scope_context_ptr = current_context; SCOPE_EXIT( current_context = std::move(current_scope_context_ptr); + --subquery_depth; ); if (auto * query_node = query_tree_node->template as()) @@ -137,6 +144,8 @@ public: else if (auto * union_node = query_tree_node->template as()) current_context = union_node->getContext(); + ++subquery_depth; + bool traverse_top_to_bottom = getDerived().shouldTraverseTopToBottom(); if (!traverse_top_to_bottom) visitChildren(query_tree_node); @@ -145,7 +154,12 @@ public: if (traverse_top_to_bottom) visitChildren(query_tree_node); + + getDerived().leaveImpl(query_tree_node); } + + void leaveImpl(VisitQueryTreeNodeType & node [[maybe_unused]]) + {} private: Derived & getDerived() { @@ -172,6 +186,7 @@ private: } ContextPtr current_context; + size_t subquery_depth = 0; }; template diff --git a/src/Analyzer/JoinNode.h b/src/Analyzer/JoinNode.h index 0d856985794..f58fe3f1af5 100644 --- a/src/Analyzer/JoinNode.h +++ b/src/Analyzer/JoinNode.h @@ -106,6 +106,12 @@ public: return locality; } + /// Set join locality + void setLocality(JoinLocality locality_value) + { + locality = locality_value; + } + 
/// Get join strictness JoinStrictness getStrictness() const { diff --git a/src/Analyzer/Passes/AutoFinalOnQueryPass.cpp b/src/Analyzer/Passes/AutoFinalOnQueryPass.cpp index fa5fc0e75a8..15326ca1dc8 100644 --- a/src/Analyzer/Passes/AutoFinalOnQueryPass.cpp +++ b/src/Analyzer/Passes/AutoFinalOnQueryPass.cpp @@ -42,7 +42,7 @@ private: return; const auto & storage = table_node ? table_node->getStorage() : table_function_node->getStorage(); - bool is_final_supported = storage && storage->supportsFinal() && !storage->isRemote(); + bool is_final_supported = storage && storage->supportsFinal(); if (!is_final_supported) return; diff --git a/src/Analyzer/Passes/LogicalExpressionOptimizer.cpp b/src/Analyzer/Passes/LogicalExpressionOptimizerPass.cpp similarity index 97% rename from src/Analyzer/Passes/LogicalExpressionOptimizer.cpp rename to src/Analyzer/Passes/LogicalExpressionOptimizerPass.cpp index 73585a4cd23..3d65035f9fd 100644 --- a/src/Analyzer/Passes/LogicalExpressionOptimizer.cpp +++ b/src/Analyzer/Passes/LogicalExpressionOptimizerPass.cpp @@ -7,8 +7,6 @@ #include #include -#include - namespace DB { @@ -100,6 +98,9 @@ private: } } + if (and_operands.size() == function_node.getArguments().getNodes().size()) + return; + if (and_operands.size() == 1) { /// AND operator can have UInt8 or bool as its type. @@ -207,6 +208,9 @@ private: or_operands.push_back(std::move(in_function)); } + if (or_operands.size() == function_node.getArguments().getNodes().size()) + return; + if (or_operands.size() == 1) { /// if the result type of operand is the same as the result type of OR diff --git a/src/Analyzer/Passes/OptimizeGroupByFunctionKeysPass.cpp b/src/Analyzer/Passes/OptimizeGroupByFunctionKeysPass.cpp index f6c4d2bc15d..c97645219da 100644 --- a/src/Analyzer/Passes/OptimizeGroupByFunctionKeysPass.cpp +++ b/src/Analyzer/Passes/OptimizeGroupByFunctionKeysPass.cpp @@ -69,8 +69,7 @@ private: for (auto it = function_arguments.rbegin(); it != function_arguments.rend(); ++it) candidates.push_back({ *it, is_deterministic }); - // Using DFS we traverse function tree and try to find if it uses other keys as function arguments. - // TODO: Also process CONSTANT here. We can simplify GROUP BY x, x + 1 to GROUP BY x. + /// Using DFS we traverse function tree and try to find if it uses other keys as function arguments. while (!candidates.empty()) { auto [candidate, parents_are_only_deterministic] = candidates.back(); @@ -108,6 +107,7 @@ private: return false; } } + return true; } diff --git a/src/Analyzer/Passes/QueryAnalysisPass.cpp b/src/Analyzer/Passes/QueryAnalysisPass.cpp index 34c03a9ffb6..38575965973 100644 --- a/src/Analyzer/Passes/QueryAnalysisPass.cpp +++ b/src/Analyzer/Passes/QueryAnalysisPass.cpp @@ -193,13 +193,9 @@ namespace ErrorCodes * lookup should not be continued, and exception must be thrown because if lookup continues identifier can be resolved from parent scope. * * TODO: Update exception messages - * TODO: JOIN TREE subquery constant columns * TODO: Table identifiers with optional UUID. * TODO: Lookup functions arrayReduce(sum, [1, 2, 3]); - * TODO: SELECT (compound_expression).*, (compound_expression).COLUMNS are not supported on parser level. - * TODO: SELECT a.b.c.*, a.b.c.COLUMNS. Qualified matcher where identifier size is greater than 2 are not supported on parser level. * TODO: Support function identifier resolve from parent query scope, if lambda in parent scope does not capture any columns. - * TODO: Scalar subqueries cache. 
*/ namespace @@ -701,7 +697,9 @@ struct IdentifierResolveScope } if (auto * union_node = scope_node->as()) + { context = union_node->getContext(); + } else if (auto * query_node = scope_node->as()) { context = query_node->getContext(); @@ -1336,6 +1334,9 @@ private: /// Global resolve expression node to projection names map std::unordered_map resolved_expressions; + /// Global resolve expression node to tree size + std::unordered_map node_to_tree_size; + /// Global scalar subquery to scalar value map std::unordered_map scalar_subquery_to_scalar_value; @@ -1864,7 +1865,10 @@ void QueryAnalyzer::evaluateScalarSubqueryIfNeeded(QueryTreeNodePtr & node, Iden Block scalar_block; - QueryTreeNodePtrWithHash node_with_hash(node); + auto node_without_alias = node->clone(); + node_without_alias->removeAlias(); + + QueryTreeNodePtrWithHash node_with_hash(node_without_alias); auto scalar_value_it = scalar_subquery_to_scalar_value.find(node_with_hash); if (scalar_value_it != scalar_subquery_to_scalar_value.end()) @@ -1954,21 +1958,7 @@ void QueryAnalyzer::evaluateScalarSubqueryIfNeeded(QueryTreeNodePtr & node, Iden * * Example: SELECT (SELECT 2 AS x, x) */ - NameSet block_column_names; - size_t unique_column_name_counter = 1; - - for (auto & column_with_type : block) - { - if (!block_column_names.contains(column_with_type.name)) - { - block_column_names.insert(column_with_type.name); - continue; - } - - column_with_type.name += '_'; - column_with_type.name += std::to_string(unique_column_name_counter); - ++unique_column_name_counter; - } + makeUniqueColumnNamesInBlock(block); scalar_block.insert({ ColumnTuple::create(block.getColumns()), @@ -2348,7 +2338,13 @@ QueryTreeNodePtr QueryAnalyzer::tryResolveTableIdentifierFromDatabaseCatalog(con storage_id = context->resolveStorageID(storage_id); bool is_temporary_table = storage_id.getDatabaseName() == DatabaseCatalog::TEMPORARY_DATABASE; - auto storage = DatabaseCatalog::instance().tryGetTable(storage_id, context); + StoragePtr storage; + + if (is_temporary_table) + storage = DatabaseCatalog::instance().getTable(storage_id, context); + else + storage = DatabaseCatalog::instance().tryGetTable(storage_id, context); + if (!storage) return {}; @@ -2914,7 +2910,10 @@ QueryTreeNodePtr QueryAnalyzer::tryResolveIdentifierFromTableExpression(const Id break; IdentifierLookup column_identifier_lookup = {qualified_identifier_with_removed_part, IdentifierLookupContext::EXPRESSION}; - if (tryBindIdentifierToAliases(column_identifier_lookup, scope) || + if (tryBindIdentifierToAliases(column_identifier_lookup, scope)) + break; + + if (table_expression_data.should_qualify_columns && tryBindIdentifierToTableExpressions(column_identifier_lookup, table_expression_node, scope)) break; @@ -3018,11 +3017,39 @@ QueryTreeNodePtr QueryAnalyzer::tryResolveIdentifierFromJoin(const IdentifierLoo resolved_identifier = std::move(result_column_node); } - else if (scope.joins_count == 1 && scope.context->getSettingsRef().single_join_prefer_left_table) + else if (left_resolved_identifier->isEqual(*right_resolved_identifier, IQueryTreeNode::CompareOptions{.compare_aliases = false})) { + const auto & identifier_path_part = identifier_lookup.identifier.front(); + auto * left_resolved_identifier_column = left_resolved_identifier->as(); + auto * right_resolved_identifier_column = right_resolved_identifier->as(); + + if (left_resolved_identifier_column && right_resolved_identifier_column) + { + const auto & left_column_source_alias = left_resolved_identifier_column->getColumnSource()->getAlias(); 
+ const auto & right_column_source_alias = right_resolved_identifier_column->getColumnSource()->getAlias(); + + /** If column from right table was resolved using alias, we prefer column from right table. + * + * Example: SELECT dummy FROM system.one JOIN system.one AS A ON A.dummy = system.one.dummy; + * + * If alias is specified for left table, and alias is not specified for right table and identifier was resolved + * without using left table alias, we prefer column from right table. + * + * Example: SELECT dummy FROM system.one AS A JOIN system.one ON A.dummy = system.one.dummy; + * + * Otherwise we prefer column from left table. + */ + if (identifier_path_part == right_column_source_alias) + return right_resolved_identifier; + else if (!left_column_source_alias.empty() && + right_column_source_alias.empty() && + identifier_path_part != left_column_source_alias) + return right_resolved_identifier; + } + return left_resolved_identifier; } - else if (left_resolved_identifier->isEqual(*right_resolved_identifier, IQueryTreeNode::CompareOptions{.compare_aliases = false})) + else if (scope.joins_count == 1 && scope.context->getSettingsRef().single_join_prefer_left_table) { return left_resolved_identifier; } @@ -4466,6 +4493,7 @@ ProjectionNames QueryAnalyzer::resolveFunction(QueryTreeNodePtr & node, Identifi bool is_special_function_dict_get = false; bool is_special_function_join_get = false; bool is_special_function_exists = false; + bool is_special_function_if = false; if (!lambda_expression_untyped) { @@ -4473,6 +4501,7 @@ ProjectionNames QueryAnalyzer::resolveFunction(QueryTreeNodePtr & node, Identifi is_special_function_dict_get = functionIsDictGet(function_name); is_special_function_join_get = functionIsJoinGet(function_name); is_special_function_exists = function_name == "exists"; + is_special_function_if = function_name == "if"; auto function_name_lowercase = Poco::toLower(function_name); @@ -4571,6 +4600,60 @@ ProjectionNames QueryAnalyzer::resolveFunction(QueryTreeNodePtr & node, Identifi is_special_function_in = true; } + if (is_special_function_if && !function_node_ptr->getArguments().getNodes().empty()) + { + /** Handle special case with constant If function, even if some of the arguments are invalid. + * + * SELECT if(hasColumnInTable('system', 'numbers', 'not_existing_column'), not_existing_column, 5) FROM system.numbers; + */ + auto & if_function_arguments = function_node_ptr->getArguments().getNodes(); + auto if_function_condition = if_function_arguments[0]; + resolveExpressionNode(if_function_condition, scope, false /*allow_lambda_expression*/, false /*allow_table_expression*/); + + auto constant_condition = tryExtractConstantFromConditionNode(if_function_condition); + + if (constant_condition.has_value() && if_function_arguments.size() == 3) + { + QueryTreeNodePtr constant_if_result_node; + QueryTreeNodePtr possibly_invalid_argument_node; + + if (*constant_condition) + { + possibly_invalid_argument_node = if_function_arguments[2]; + constant_if_result_node = if_function_arguments[1]; + } + else + { + possibly_invalid_argument_node = if_function_arguments[1]; + constant_if_result_node = if_function_arguments[2]; + } + + bool apply_constant_if_optimization = false; + + try + { + resolveExpressionNode(possibly_invalid_argument_node, + scope, + false /*allow_lambda_expression*/, + false /*allow_table_expression*/); + } + catch (...) 
+ { + apply_constant_if_optimization = true; + } + + if (apply_constant_if_optimization) + { + auto result_projection_names = resolveExpressionNode(constant_if_result_node, + scope, + false /*allow_lambda_expression*/, + false /*allow_table_expression*/); + node = std::move(constant_if_result_node); + return result_projection_names; + } + } + } + /// Resolve function arguments bool allow_table_expressions = is_special_function_in; @@ -5059,7 +5142,7 @@ ProjectionNames QueryAnalyzer::resolveFunction(QueryTreeNodePtr & node, Identifi /// Do not constant fold get scalar functions bool disable_constant_folding = function_name == "__getScalar" || function_name == "shardNum" || - function_name == "shardCount"; + function_name == "shardCount" || function_name == "hostName"; /** If function is suitable for constant folding try to convert it to constant. * Example: SELECT plus(1, 1); @@ -5085,7 +5168,8 @@ ProjectionNames QueryAnalyzer::resolveFunction(QueryTreeNodePtr & node, Identifi /** Do not perform constant folding if there are aggregate or arrayJoin functions inside function. * Example: SELECT toTypeName(sum(number)) FROM numbers(10); */ - if (column && isColumnConst(*column) && (!hasAggregateFunctionNodes(node) && !hasFunctionNode(node, "arrayJoin"))) + if (column && isColumnConst(*column) && !typeid_cast(column.get())->getDataColumn().isDummy() && + (!hasAggregateFunctionNodes(node) && !hasFunctionNode(node, "arrayJoin"))) { /// Replace function node with result constant node Field column_constant_value; @@ -5433,9 +5517,9 @@ ProjectionNames QueryAnalyzer::resolveExpressionNode(QueryTreeNodePtr & node, Id } } - if (node - && scope.nullable_group_by_keys.contains(node) - && !scope.expressions_in_resolve_process_stack.hasAggregateFunction()) + validateTreeSize(node, scope.context->getSettingsRef().max_expanded_ast_elements, node_to_tree_size); + + if (scope.nullable_group_by_keys.contains(node) && !scope.expressions_in_resolve_process_stack.hasAggregateFunction()) { node = node->clone(); node->convertToNullable(); @@ -6592,6 +6676,17 @@ void QueryAnalyzer::resolveQuery(const QueryTreeNodePtr & query_node, Identifier /// Resolve query node sections. + NamesAndTypes projection_columns; + + if (!scope.group_by_use_nulls) + { + projection_columns = resolveProjectionExpressionNodeList(query_node_typed.getProjectionNode(), scope); + if (query_node_typed.getProjection().getNodes().empty()) + throw Exception(ErrorCodes::EMPTY_LIST_OF_COLUMNS_QUERIED, + "Empty list of columns in projection. In scope {}", + scope.scope_node->formatASTForErrorMessage()); + } + if (query_node_typed.hasWith()) resolveExpressionNodeList(query_node_typed.getWithNode(), scope, true /*allow_lambda_expression*/, false /*allow_table_expression*/); @@ -6686,11 +6781,14 @@ void QueryAnalyzer::resolveQuery(const QueryTreeNodePtr & query_node, Identifier convertLimitOffsetExpression(query_node_typed.getOffset(), "OFFSET", scope); } - auto projection_columns = resolveProjectionExpressionNodeList(query_node_typed.getProjectionNode(), scope); - if (query_node_typed.getProjection().getNodes().empty()) - throw Exception(ErrorCodes::EMPTY_LIST_OF_COLUMNS_QUERIED, - "Empty list of columns in projection. 
In scope {}", - scope.scope_node->formatASTForErrorMessage()); + if (scope.group_by_use_nulls) + { + projection_columns = resolveProjectionExpressionNodeList(query_node_typed.getProjectionNode(), scope); + if (query_node_typed.getProjection().getNodes().empty()) + throw Exception(ErrorCodes::EMPTY_LIST_OF_COLUMNS_QUERIED, + "Empty list of columns in projection. In scope {}", + scope.scope_node->formatASTForErrorMessage()); + } /** Resolve nodes with duplicate aliases. * Table expressions cannot have duplicate aliases. @@ -6757,6 +6855,15 @@ void QueryAnalyzer::resolveQuery(const QueryTreeNodePtr & query_node, Identifier validateAggregates(query_node, { .group_by_use_nulls = scope.group_by_use_nulls }); + for (const auto & column : projection_columns) + { + if (isNotCreatable(column.type)) + throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT, + "Invalid projection column with type {}. In scope {}", + column.type->getName(), + scope.scope_node->formatASTForErrorMessage()); + } + /** WITH section can be safely removed, because WITH section only can provide aliases to query expressions * and CTE for other sections to use. * diff --git a/src/Analyzer/TableNode.cpp b/src/Analyzer/TableNode.cpp index a746986be04..f315d372bc9 100644 --- a/src/Analyzer/TableNode.cpp +++ b/src/Analyzer/TableNode.cpp @@ -61,12 +61,17 @@ bool TableNode::isEqualImpl(const IQueryTreeNode & rhs) const void TableNode::updateTreeHashImpl(HashState & state) const { - auto full_name = storage_id.getFullNameNotQuoted(); - state.update(full_name.size()); - state.update(full_name); - - state.update(temporary_table_name.size()); - state.update(temporary_table_name); + if (!temporary_table_name.empty()) + { + state.update(temporary_table_name.size()); + state.update(temporary_table_name); + } + else + { + auto full_name = storage_id.getFullNameNotQuoted(); + state.update(full_name.size()); + state.update(full_name); + } if (table_expression_modifiers) table_expression_modifiers->updateTreeHash(state); diff --git a/src/Analyzer/Utils.cpp b/src/Analyzer/Utils.cpp index c5a5c042cbc..eb7aceef1e8 100644 --- a/src/Analyzer/Utils.cpp +++ b/src/Analyzer/Utils.cpp @@ -8,6 +8,7 @@ #include #include #include +#include #include #include @@ -32,6 +33,7 @@ namespace DB namespace ErrorCodes { extern const int LOGICAL_ERROR; + extern const int BAD_ARGUMENTS; } bool isNodePartOfTree(const IQueryTreeNode * node, const IQueryTreeNode * root) @@ -79,6 +81,75 @@ bool isNameOfInFunction(const std::string & function_name) return is_special_function_in; } +bool isNameOfLocalInFunction(const std::string & function_name) +{ + bool is_special_function_in = function_name == "in" || + function_name == "notIn" || + function_name == "nullIn" || + function_name == "notNullIn" || + function_name == "inIgnoreSet" || + function_name == "notInIgnoreSet" || + function_name == "nullInIgnoreSet" || + function_name == "notNullInIgnoreSet"; + + return is_special_function_in; +} + +bool isNameOfGlobalInFunction(const std::string & function_name) +{ + bool is_special_function_in = function_name == "globalIn" || + function_name == "globalNotIn" || + function_name == "globalNullIn" || + function_name == "globalNotNullIn" || + function_name == "globalInIgnoreSet" || + function_name == "globalNotInIgnoreSet" || + function_name == "globalNullInIgnoreSet" || + function_name == "globalNotNullInIgnoreSet"; + + return is_special_function_in; +} + +std::string getGlobalInFunctionNameForLocalInFunctionName(const std::string & function_name) +{ + if (function_name == "in") + return 
"globalIn"; + else if (function_name == "notIn") + return "globalNotIn"; + else if (function_name == "nullIn") + return "globalNullIn"; + else if (function_name == "notNullIn") + return "globalNotNullIn"; + else if (function_name == "inIgnoreSet") + return "globalInIgnoreSet"; + else if (function_name == "notInIgnoreSet") + return "globalNotInIgnoreSet"; + else if (function_name == "nullInIgnoreSet") + return "globalNullInIgnoreSet"; + else if (function_name == "notNullInIgnoreSet") + return "globalNotNullInIgnoreSet"; + + throw Exception(ErrorCodes::BAD_ARGUMENTS, "Invalid local IN function name {}", function_name); +} + +void makeUniqueColumnNamesInBlock(Block & block) +{ + NameSet block_column_names; + size_t unique_column_name_counter = 1; + + for (auto & column_with_type : block) + { + if (!block_column_names.contains(column_with_type.name)) + { + block_column_names.insert(column_with_type.name); + continue; + } + + column_with_type.name += '_'; + column_with_type.name += std::to_string(unique_column_name_counter); + ++unique_column_name_counter; + } +} + QueryTreeNodePtr buildCastFunction(const QueryTreeNodePtr & expression, const DataTypePtr & type, const ContextPtr & context, @@ -102,6 +173,27 @@ QueryTreeNodePtr buildCastFunction(const QueryTreeNodePtr & expression, return cast_function_node; } +std::optional tryExtractConstantFromConditionNode(const QueryTreeNodePtr & condition_node) +{ + const auto * constant_node = condition_node->as(); + if (!constant_node) + return {}; + + const auto & value = constant_node->getValue(); + auto constant_type = constant_node->getResultType(); + constant_type = removeNullable(removeLowCardinality(constant_type)); + + auto which_constant_type = WhichDataType(constant_type); + if (!which_constant_type.isUInt8() && !which_constant_type.isNothing()) + return {}; + + if (value.isNull()) + return false; + + UInt8 predicate_value = value.safeGet(); + return predicate_value > 0; +} + static ASTPtr convertIntoTableExpressionAST(const QueryTreeNodePtr & table_expression_node) { ASTPtr table_expression_node_ast; diff --git a/src/Analyzer/Utils.h b/src/Analyzer/Utils.h index 3e2d95c6012..5802c86c462 100644 --- a/src/Analyzer/Utils.h +++ b/src/Analyzer/Utils.h @@ -13,6 +13,18 @@ bool isNodePartOfTree(const IQueryTreeNode * node, const IQueryTreeNode * root); /// Returns true if function name is name of IN function or its variations, false otherwise bool isNameOfInFunction(const std::string & function_name); +/// Returns true if function name is name of local IN function or its variations, false otherwise +bool isNameOfLocalInFunction(const std::string & function_name); + +/// Returns true if function name is name of global IN function or its variations, false otherwise +bool isNameOfGlobalInFunction(const std::string & function_name); + +/// Returns global IN function name for local IN function name +std::string getGlobalInFunctionNameForLocalInFunctionName(const std::string & function_name); + +/// Add unique suffix to names of duplicate columns in block +void makeUniqueColumnNamesInBlock(Block & block); + /** Build cast function that cast expression into type. * If resolve = true, then result cast function is resolved during build, otherwise * result cast function is not resolved during build. 
@@ -22,6 +34,9 @@ QueryTreeNodePtr buildCastFunction(const QueryTreeNodePtr & expression, const ContextPtr & context, bool resolve = true); +/// Try extract boolean constant from condition node +std::optional tryExtractConstantFromConditionNode(const QueryTreeNodePtr & condition_node); + /** Add table expression in tables in select query children. * If table expression node is not of identifier node, table node, query node, table function node, join node or array join node type throws logical error exception. */ diff --git a/src/Analyzer/ValidationUtils.cpp b/src/Analyzer/ValidationUtils.cpp index 8ccecc9769c..d70ed1170fc 100644 --- a/src/Analyzer/ValidationUtils.cpp +++ b/src/Analyzer/ValidationUtils.cpp @@ -16,6 +16,7 @@ namespace ErrorCodes { extern const int NOT_AN_AGGREGATE; extern const int NOT_IMPLEMENTED; + extern const int BAD_ARGUMENTS; } class ValidateGroupByColumnsVisitor : public ConstInDepthQueryTreeVisitor @@ -283,4 +284,52 @@ void assertNoFunctionNodes(const QueryTreeNodePtr & node, visitor.visit(node); } +void validateTreeSize(const QueryTreeNodePtr & node, + size_t max_size, + std::unordered_map & node_to_tree_size) +{ + size_t tree_size = 0; + std::vector> nodes_to_process; + nodes_to_process.emplace_back(node, false); + + while (!nodes_to_process.empty()) + { + const auto [node_to_process, processed_children] = nodes_to_process.back(); + nodes_to_process.pop_back(); + + if (processed_children) + { + ++tree_size; + node_to_tree_size.emplace(node_to_process, tree_size); + continue; + } + + auto node_to_size_it = node_to_tree_size.find(node_to_process); + if (node_to_size_it != node_to_tree_size.end()) + { + tree_size += node_to_size_it->second; + continue; + } + + nodes_to_process.emplace_back(node_to_process, true); + + for (const auto & node_to_process_child : node_to_process->getChildren()) + { + if (!node_to_process_child) + continue; + + nodes_to_process.emplace_back(node_to_process_child, false); + } + + auto * constant_node = node_to_process->as(); + if (constant_node && constant_node->hasSourceExpression()) + nodes_to_process.emplace_back(constant_node->getSourceExpression(), false); + } + + if (tree_size > max_size) + throw Exception(ErrorCodes::BAD_ARGUMENTS, + "Query tree is too big. Maximum: {}", + max_size); +} + } diff --git a/src/Analyzer/ValidationUtils.h b/src/Analyzer/ValidationUtils.h index b8ba6b8cc10..c15a3531c8d 100644 --- a/src/Analyzer/ValidationUtils.h +++ b/src/Analyzer/ValidationUtils.h @@ -7,7 +7,7 @@ namespace DB struct ValidationParams { - bool group_by_use_nulls; + bool group_by_use_nulls = false; }; /** Validate aggregates in query node. @@ -31,4 +31,11 @@ void assertNoFunctionNodes(const QueryTreeNodePtr & node, std::string_view exception_function_name, std::string_view exception_place_message); +/** Validate tree size. If size of tree is greater than max size throws exception. + * Additionally for each node in tree, update node to tree size map. 
+ */ +void validateTreeSize(const QueryTreeNodePtr & node, + size_t max_size, + std::unordered_map & node_to_tree_size); + } diff --git a/src/Analyzer/WindowNode.cpp b/src/Analyzer/WindowNode.cpp index 3e8537302e5..d516f7a58b8 100644 --- a/src/Analyzer/WindowNode.cpp +++ b/src/Analyzer/WindowNode.cpp @@ -113,11 +113,17 @@ ASTPtr WindowNode::toASTImpl() const window_definition->parent_window_name = parent_window_name; - window_definition->children.push_back(getPartitionByNode()->toAST()); - window_definition->partition_by = window_definition->children.back(); + if (hasPartitionBy()) + { + window_definition->children.push_back(getPartitionByNode()->toAST()); + window_definition->partition_by = window_definition->children.back(); + } - window_definition->children.push_back(getOrderByNode()->toAST()); - window_definition->order_by = window_definition->children.back(); + if (hasOrderBy()) + { + window_definition->children.push_back(getOrderByNode()->toAST()); + window_definition->order_by = window_definition->children.back(); + } window_definition->frame_is_default = window_frame.is_default; window_definition->frame_type = window_frame.type; diff --git a/src/Common/CurrentThread.cpp b/src/Common/CurrentThread.cpp index b54cf3b9371..188e78fe69b 100644 --- a/src/Common/CurrentThread.cpp +++ b/src/Common/CurrentThread.cpp @@ -110,23 +110,4 @@ ThreadGroupStatusPtr CurrentThread::getGroup() return current_thread->getThreadGroup(); } -MemoryTracker * CurrentThread::getUserMemoryTracker() -{ - if (unlikely(!current_thread)) - return nullptr; - - auto * tracker = current_thread->memory_tracker.getParent(); - while (tracker && tracker->level != VariableContext::User) - tracker = tracker->getParent(); - - return tracker; -} - -void CurrentThread::flushUntrackedMemory() -{ - if (unlikely(!current_thread)) - return; - current_thread->flushUntrackedMemory(); -} - } diff --git a/src/Common/CurrentThread.h b/src/Common/CurrentThread.h index ffc00c77504..f4975e800ca 100644 --- a/src/Common/CurrentThread.h +++ b/src/Common/CurrentThread.h @@ -40,12 +40,6 @@ public: /// Group to which belongs current thread static ThreadGroupStatusPtr getGroup(); - /// MemoryTracker for user that owns current thread if any - static MemoryTracker * getUserMemoryTracker(); - - /// Adjust counters in MemoryTracker hierarchy if untracked_memory is not 0. 
- static void flushUntrackedMemory(); - /// A logs queue used by TCPHandler to pass logs to a client static void attachInternalTextLogsQueue(const std::shared_ptr & logs_queue, LogsLevel client_logs_level); diff --git a/src/Functions/FunctionFile.cpp b/src/Functions/FunctionFile.cpp index 240732965f4..fa7dda82e1c 100644 --- a/src/Functions/FunctionFile.cpp +++ b/src/Functions/FunctionFile.cpp @@ -38,6 +38,8 @@ public: String getName() const override { return name; } size_t getNumberOfArguments() const override { return 0; } bool isSuitableForShortCircuitArgumentsExecution(const DataTypesWithConstInfo & /*arguments*/) const override { return true; } + bool isDeterministic() const override { return false; } + bool isDeterministicInScopeOfQuery() const override { return false; } DataTypePtr getReturnTypeImpl(const ColumnsWithTypeAndName & arguments) const override { diff --git a/src/Functions/URL/decodeURLComponent.cpp b/src/Functions/URL/decodeURLComponent.cpp index 7d98ccd63a0..05e3fbea3fd 100644 --- a/src/Functions/URL/decodeURLComponent.cpp +++ b/src/Functions/URL/decodeURLComponent.cpp @@ -14,28 +14,33 @@ namespace ErrorCodes static size_t encodeURL(const char * __restrict src, size_t src_size, char * __restrict dst, bool space_as_plus) { char * dst_pos = dst; - for (size_t i = 0; i < src_size; i++) + for (size_t i = 0; i < src_size; ++i) { if ((src[i] >= '0' && src[i] <= '9') || (src[i] >= 'a' && src[i] <= 'z') || (src[i] >= 'A' && src[i] <= 'Z') || src[i] == '-' || src[i] == '_' || src[i] == '.' || src[i] == '~') { - *dst_pos++ = src[i]; + *dst_pos = src[i]; + ++dst_pos; } else if (src[i] == ' ' && space_as_plus) { - *dst_pos++ = '+'; + *dst_pos = '+'; + ++dst_pos; } else { - *dst_pos++ = '%'; - *dst_pos++ = hexDigitUppercase(src[i] >> 4); - *dst_pos++ = hexDigitUppercase(src[i] & 0xf); + dst_pos[0] = '%'; + ++dst_pos; + writeHexByteUppercase(src[i], dst_pos); + dst_pos += 2; } } - *dst_pos++ = src[src_size]; + *dst_pos = 0; + ++dst_pos; return dst_pos - dst; } + /// We assume that size of the dst buf isn't less than src_size. 
static size_t decodeURL(const char * __restrict src, size_t src_size, char * __restrict dst, bool plus_as_space) { @@ -120,10 +125,14 @@ struct CodeURLComponentImpl ColumnString::Chars & res_data, ColumnString::Offsets & res_offsets) { if (code_strategy == encode) - //the destination(res_data) string is at most three times the length of the source string + { + /// the destination(res_data) string is at most three times the length of the source string res_data.resize(data.size() * 3); + } else + { res_data.resize(data.size()); + } size_t size = offsets.size(); res_offsets.resize(size); diff --git a/src/Interpreters/AsynchronousInsertQueue.cpp b/src/Interpreters/AsynchronousInsertQueue.cpp index 78b173de6dc..590cbc9ba83 100644 --- a/src/Interpreters/AsynchronousInsertQueue.cpp +++ b/src/Interpreters/AsynchronousInsertQueue.cpp @@ -18,7 +18,6 @@ #include #include #include -#include #include #include #include @@ -104,10 +103,9 @@ bool AsynchronousInsertQueue::InsertQuery::operator==(const InsertQuery & other) return query_str == other.query_str && settings == other.settings; } -AsynchronousInsertQueue::InsertData::Entry::Entry(String && bytes_, String && query_id_, MemoryTracker * user_memory_tracker_) +AsynchronousInsertQueue::InsertData::Entry::Entry(String && bytes_, String && query_id_) : bytes(std::move(bytes_)) , query_id(std::move(query_id_)) - , user_memory_tracker(user_memory_tracker_) , create_time(std::chrono::system_clock::now()) { } @@ -236,7 +234,7 @@ AsynchronousInsertQueue::push(ASTPtr query, ContextPtr query_context) if (auto quota = query_context->getQuota()) quota->used(QuotaType::WRITTEN_BYTES, bytes.size()); - auto entry = std::make_shared(std::move(bytes), query_context->getCurrentQueryId(), CurrentThread::getUserMemoryTracker()); + auto entry = std::make_shared(std::move(bytes), query_context->getCurrentQueryId()); InsertQuery key{query, settings}; InsertDataPtr data_to_process; diff --git a/src/Interpreters/AsynchronousInsertQueue.h b/src/Interpreters/AsynchronousInsertQueue.h index e6b7bff8d26..23a2860364d 100644 --- a/src/Interpreters/AsynchronousInsertQueue.h +++ b/src/Interpreters/AsynchronousInsertQueue.h @@ -1,7 +1,6 @@ #pragma once #include -#include #include #include #include @@ -60,31 +59,6 @@ private: UInt128 calculateHash() const; }; - struct UserMemoryTrackerSwitcher - { - explicit UserMemoryTrackerSwitcher(MemoryTracker * new_tracker) - { - auto * thread_tracker = CurrentThread::getMemoryTracker(); - prev_untracked_memory = current_thread->untracked_memory; - prev_memory_tracker_parent = thread_tracker->getParent(); - - current_thread->untracked_memory = 0; - thread_tracker->setParent(new_tracker); - } - - ~UserMemoryTrackerSwitcher() - { - CurrentThread::flushUntrackedMemory(); - auto * thread_tracker = CurrentThread::getMemoryTracker(); - - current_thread->untracked_memory = prev_untracked_memory; - thread_tracker->setParent(prev_memory_tracker_parent); - } - - MemoryTracker * prev_memory_tracker_parent; - Int64 prev_untracked_memory; - }; - struct InsertData { struct Entry @@ -92,10 +66,9 @@ private: public: const String bytes; const String query_id; - MemoryTracker * const user_memory_tracker; const std::chrono::time_point create_time; - Entry(String && bytes_, String && query_id_, MemoryTracker * user_memory_tracker_); + Entry(String && bytes_, String && query_id_); void finish(std::exception_ptr exception_ = nullptr); std::future getFuture() { return promise.get_future(); } @@ -106,19 +79,6 @@ private: std::atomic_bool finished = false; }; - 
    ~InsertData()
-    {
-        auto it = entries.begin();
-        // Entries must be destroyed in context of user who runs async insert.
-        // Each entry in the list may correspond to a different user,
-        // so we need to switch current thread's MemoryTracker parent on each iteration.
-        while (it != entries.end())
-        {
-            UserMemoryTrackerSwitcher switcher((*it)->user_memory_tracker);
-            it = entries.erase(it);
-        }
-    }
-
     using EntryPtr = std::shared_ptr<Entry>;
 
     std::list<EntryPtr> entries;
diff --git a/src/Interpreters/FullSortingMergeJoin.h b/src/Interpreters/FullSortingMergeJoin.h
index fa7d0478535..a94d7a7dfc6 100644
--- a/src/Interpreters/FullSortingMergeJoin.h
+++ b/src/Interpreters/FullSortingMergeJoin.h
@@ -44,6 +44,10 @@ public:
         const auto & on_expr = table_join->getOnlyClause();
         bool support_conditions = !on_expr.on_filter_condition_left && !on_expr.on_filter_condition_right;
 
+        if (!on_expr.analyzer_left_filter_condition_column_name.empty() ||
+            !on_expr.analyzer_right_filter_condition_column_name.empty())
+            support_conditions = false;
+
         /// Key column can change nullability and it's not handled on type conversion stage, so algorithm should be aware of it
         bool support_using_and_nulls = !table_join->hasUsing() || !table_join->joinUseNulls();
diff --git a/src/Interpreters/InterpreterSelectQueryAnalyzer.cpp b/src/Interpreters/InterpreterSelectQueryAnalyzer.cpp
index 0536ee10f7c..98f70c25dcd 100644
--- a/src/Interpreters/InterpreterSelectQueryAnalyzer.cpp
+++ b/src/Interpreters/InterpreterSelectQueryAnalyzer.cpp
@@ -226,6 +226,12 @@ BlockIO InterpreterSelectQueryAnalyzer::execute()
     return result;
 }
 
+QueryPlan & InterpreterSelectQueryAnalyzer::getQueryPlan()
+{
+    planner.buildQueryPlanIfNeeded();
+    return planner.getQueryPlan();
+}
+
 QueryPlan && InterpreterSelectQueryAnalyzer::extractQueryPlan() &&
 {
     planner.buildQueryPlanIfNeeded();
diff --git a/src/Interpreters/InterpreterSelectQueryAnalyzer.h b/src/Interpreters/InterpreterSelectQueryAnalyzer.h
index 681a9cfe5a3..2c8af49cf0e 100644
--- a/src/Interpreters/InterpreterSelectQueryAnalyzer.h
+++ b/src/Interpreters/InterpreterSelectQueryAnalyzer.h
@@ -51,6 +51,8 @@ public:
 
     BlockIO execute() override;
 
+    QueryPlan & getQueryPlan();
+
     QueryPlan && extractQueryPlan() &&;
 
     QueryPipelineBuilder buildQueryPipeline();
diff --git a/src/Interpreters/ReplaceQueryParameterVisitor.cpp b/src/Interpreters/ReplaceQueryParameterVisitor.cpp
index f271de26ca4..893c93f0950 100644
--- a/src/Interpreters/ReplaceQueryParameterVisitor.cpp
+++ b/src/Interpreters/ReplaceQueryParameterVisitor.cpp
@@ -50,7 +50,16 @@ void ReplaceQueryParameterVisitor::visit(ASTPtr & ast)
 void ReplaceQueryParameterVisitor::visitChildren(ASTPtr & ast)
 {
     for (auto & child : ast->children)
+    {
+        void * old_ptr = child.get();
         visit(child);
+        void * new_ptr = child.get();
+
+        /// Some AST classes have naked pointers to children elements as members.
+        /// We have to replace them if the child was replaced.
+        if (new_ptr != old_ptr)
+            ast->updatePointerToChild(old_ptr, new_ptr);
+    }
 }
 
 const String & ReplaceQueryParameterVisitor::getParamValue(const String & name)
@@ -89,6 +98,7 @@ void ReplaceQueryParameterVisitor::visitQueryParameter(ASTPtr & ast)
         literal = value;
     else
         literal = temp_column[0];
+
     ast = addTypeConversionToAST(std::make_shared<ASTLiteral>(literal), type_name);
 
     /// Keep the original alias.
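Note on the ReplaceQueryParameterVisitor / IAST hunks above: `visit(child)` may replace a child `ASTPtr` inside `children`, while some AST classes also cache a naked pointer to that same child in a named member, which would otherwise be left dangling. A minimal, self-contained sketch of the pattern (illustrative names only, not ClickHouse code):

```cpp
#include <functional>
#include <memory>
#include <vector>

struct Node
{
    std::vector<std::shared_ptr<Node>> children;
    Node * partition_by = nullptr; /// naked pointer into `children` (name illustrative)

    void forEachPointerToChild(const std::function<void(void **)> & f)
    {
        /// Each class enumerates its naked member pointers once.
        f(reinterpret_cast<void **>(&partition_by));
    }

    void updatePointerToChild(void * old_ptr, void * new_ptr)
    {
        /// Generic fix-up: re-point whichever member still refers to the old child.
        forEachPointerToChild([old_ptr, new_ptr](void ** ptr)
        {
            if (*ptr == old_ptr)
                *ptr = new_ptr;
        });
    }
};

int main()
{
    Node root;
    root.children.push_back(std::make_shared<Node>());
    root.partition_by = root.children.back().get();

    /// A visitor replaces the child (as ReplaceQueryParameterVisitor::visitChildren may)...
    void * old_child = root.children.back().get();
    root.children.back() = std::make_shared<Node>();

    /// ...and then repairs the naked member pointer.
    root.updatePointerToChild(old_child, root.children.back().get());

    return root.partition_by == root.children.back().get() ? 0 : 1;
}
```

The design keeps visitors generic: instead of teaching `ReplaceQueryParameterVisitor` about every AST class's members, each class lists its naked pointers in `forEachPointerToChild`, and `IAST::updatePointerToChild` re-points the one that matched the replaced child. The per-class overrides in the following hunks are exactly that enumeration.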
diff --git a/src/Parsers/ASTAlterQuery.h b/src/Parsers/ASTAlterQuery.h
index 2a48f5bbd9e..1400113fa9c 100644
--- a/src/Parsers/ASTAlterQuery.h
+++ b/src/Parsers/ASTAlterQuery.h
@@ -256,6 +256,11 @@ protected:
     void formatQueryImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const override;
 
     bool isOneCommandTypeOnly(const ASTAlterCommand::Type & type) const;
+
+    void forEachPointerToChild(std::function<void(void**)> f) override
+    {
+        f(reinterpret_cast<void **>(&command_list));
+    }
 };
 
 }
diff --git a/src/Parsers/ASTBackupQuery.h b/src/Parsers/ASTBackupQuery.h
index a3e3a144c72..0201c2b14f9 100644
--- a/src/Parsers/ASTBackupQuery.h
+++ b/src/Parsers/ASTBackupQuery.h
@@ -94,5 +94,12 @@ public:
     void formatImpl(const FormatSettings & format, FormatState &, FormatStateStacked) const override;
     ASTPtr getRewrittenASTWithoutOnCluster(const WithoutOnClusterASTRewriteParams &) const override;
     QueryKind getQueryKind() const override;
+
+    void forEachPointerToChild(std::function<void(void**)> f) override
+    {
+        f(reinterpret_cast<void **>(&backup_name));
+        f(reinterpret_cast<void **>(&base_backup_name));
+    }
 };
+
 }
diff --git a/src/Parsers/ASTConstraintDeclaration.h b/src/Parsers/ASTConstraintDeclaration.h
index 437aab1a82d..f48d7ef77fe 100644
--- a/src/Parsers/ASTConstraintDeclaration.h
+++ b/src/Parsers/ASTConstraintDeclaration.h
@@ -25,5 +25,11 @@ public:
 
     ASTPtr clone() const override;
     void formatImpl(const FormatSettings & s, FormatState & state, FormatStateStacked frame) const override;
+
+    void forEachPointerToChild(std::function<void(void**)> f) override
+    {
+        f(reinterpret_cast<void **>(&expr));
+    }
 };
+
 }
diff --git a/src/Parsers/ASTCreateQuery.cpp b/src/Parsers/ASTCreateQuery.cpp
index 955ce62b0f7..e28e863c21f 100644
--- a/src/Parsers/ASTCreateQuery.cpp
+++ b/src/Parsers/ASTCreateQuery.cpp
@@ -91,6 +91,11 @@ public:
 
     ASTPtr clone() const override;
     void formatImpl(const FormatSettings & s, FormatState & state, FormatStateStacked frame) const override;
+
+    void forEachPointerToChild(std::function<void(void**)> f) override
+    {
+        f(reinterpret_cast<void **>(&elem));
+    }
 };
 
 ASTPtr ASTColumnsElement::clone() const
diff --git a/src/Parsers/ASTCreateQuery.h b/src/Parsers/ASTCreateQuery.h
index 90a15e09369..230996f610e 100644
--- a/src/Parsers/ASTCreateQuery.h
+++ b/src/Parsers/ASTCreateQuery.h
@@ -32,6 +32,17 @@ public:
     void formatImpl(const FormatSettings & s, FormatState & state, FormatStateStacked frame) const override;
 
     bool isExtendedStorageDefinition() const;
+
+    void forEachPointerToChild(std::function<void(void**)> f) override
+    {
+        f(reinterpret_cast<void **>(&engine));
+        f(reinterpret_cast<void **>(&partition_by));
+        f(reinterpret_cast<void **>(&primary_key));
+        f(reinterpret_cast<void **>(&order_by));
+        f(reinterpret_cast<void **>(&sample_by));
+        f(reinterpret_cast<void **>(&ttl_table));
+        f(reinterpret_cast<void **>(&settings));
+    }
 };
 
 
@@ -57,6 +68,16 @@ public:
         return (!columns || columns->children.empty()) && (!indices || indices->children.empty()) && (!constraints || constraints->children.empty())
             && (!projections || projections->children.empty());
     }
+
+    void forEachPointerToChild(std::function<void(void**)> f) override
+    {
+        f(reinterpret_cast<void **>(&columns));
+        f(reinterpret_cast<void **>(&indices));
+        f(reinterpret_cast<void **>(&primary_key));
+        f(reinterpret_cast<void **>(&constraints));
+        f(reinterpret_cast<void **>(&projections));
+        f(reinterpret_cast<void **>(&primary_key));
+    }
 };
 
 
@@ -126,6 +147,19 @@ public:
 
 protected:
     void formatQueryImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const override;
+
+    void forEachPointerToChild(std::function<void(void**)> f) override
+    {
+        f(reinterpret_cast<void **>(&columns_list));
+        f(reinterpret_cast<void **>(&inner_storage));
+        f(reinterpret_cast<void **>(&storage));
+        f(reinterpret_cast<void **>(&as_table_function));
+        f(reinterpret_cast<void **>(&select));
+        f(reinterpret_cast<void **>(&comment));
+        f(reinterpret_cast<void **>(&table_overrides));
+        f(reinterpret_cast<void **>(&dictionary_attributes_list));
+        f(reinterpret_cast<void **>(&dictionary));
+    }
 };
 
 }
diff --git a/src/Parsers/ASTDictionary.h b/src/Parsers/ASTDictionary.h
index 3611621b8ad..8c332247d52 100644
--- a/src/Parsers/ASTDictionary.h
+++ b/src/Parsers/ASTDictionary.h
@@ -47,6 +47,11 @@ public:
 
     ASTPtr clone() const override;
     void formatImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const override;
+
+    void forEachPointerToChild(std::function<void(void**)> f) override
+    {
+        f(reinterpret_cast<void **>(&parameters));
+    }
 };
 
 
diff --git a/src/Parsers/ASTExternalDDLQuery.h b/src/Parsers/ASTExternalDDLQuery.h
index 7913d44b970..96600b07f29 100644
--- a/src/Parsers/ASTExternalDDLQuery.h
+++ b/src/Parsers/ASTExternalDDLQuery.h
@@ -41,6 +41,11 @@ public:
     }
 
     QueryKind getQueryKind() const override { return QueryKind::ExternalDDL; }
+
+    void forEachPointerToChild(std::function<void(void**)> f) override
+    {
+        f(reinterpret_cast<void **>(&from));
+    }
 };
 
 }
diff --git a/src/Parsers/ASTFunctionWithKeyValueArguments.h b/src/Parsers/ASTFunctionWithKeyValueArguments.h
index 67d591dfcdc..75a8ae0415e 100644
--- a/src/Parsers/ASTFunctionWithKeyValueArguments.h
+++ b/src/Parsers/ASTFunctionWithKeyValueArguments.h
@@ -33,6 +33,11 @@ public:
     bool hasSecretParts() const override;
 
     void updateTreeHashImpl(SipHash & hash_state) const override;
+
+    void forEachPointerToChild(std::function<void(void**)> f) override
+    {
+        f(reinterpret_cast<void **>(&second));
+    }
 };
 
 
diff --git a/src/Parsers/ASTIndexDeclaration.h b/src/Parsers/ASTIndexDeclaration.h
index e22c1da4489..bd52a611f3f 100644
--- a/src/Parsers/ASTIndexDeclaration.h
+++ b/src/Parsers/ASTIndexDeclaration.h
@@ -23,6 +23,12 @@ public:
 
     ASTPtr clone() const override;
     void formatImpl(const FormatSettings & s, FormatState & state, FormatStateStacked frame) const override;
+
+    void forEachPointerToChild(std::function<void(void**)> f) override
+    {
+        f(reinterpret_cast<void **>(&expr));
+        f(reinterpret_cast<void **>(&type));
+    }
 };
 
 }
diff --git a/src/Parsers/ASTProjectionDeclaration.h b/src/Parsers/ASTProjectionDeclaration.h
index 53c681c3ec1..df7a7c832a6 100644
--- a/src/Parsers/ASTProjectionDeclaration.h
+++ b/src/Parsers/ASTProjectionDeclaration.h
@@ -18,6 +18,11 @@ public:
 
     ASTPtr clone() const override;
     void formatImpl(const FormatSettings & s, FormatState & state, FormatStateStacked frame) const override;
+
+    void forEachPointerToChild(std::function<void(void**)> f) override
+    {
+        f(reinterpret_cast<void **>(&query));
+    }
 };
 
 }
diff --git a/src/Parsers/ASTTableOverrides.h b/src/Parsers/ASTTableOverrides.h
index c47260789d8..1df267acaa9 100644
--- a/src/Parsers/ASTTableOverrides.h
+++ b/src/Parsers/ASTTableOverrides.h
@@ -27,6 +27,12 @@ public:
     String getID(char) const override { return "TableOverride " + table_name; }
     ASTPtr clone() const override;
     void formatImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const override;
+
+    void forEachPointerToChild(std::function<void(void**)> f) override
+    {
+        f(reinterpret_cast<void **>(&columns));
+        f(reinterpret_cast<void **>(&storage));
+    }
 };
 
 /// List of table overrides, for example:
diff --git a/src/Parsers/IAST.h b/src/Parsers/IAST.h
index 627b1174b33..5928506aa5b 100644
--- a/src/Parsers/IAST.h
+++ b/src/Parsers/IAST.h
@@ -175,6 +175,16 @@ public:
         field = nullptr;
     }
 
+    /// After changing one of `children` elements, update the corresponding member pointer if needed.
+    void updatePointerToChild(void * old_ptr, void * new_ptr)
+    {
+        forEachPointerToChild([old_ptr, new_ptr](void ** ptr) mutable
+        {
+            if (*ptr == old_ptr)
+                *ptr = new_ptr;
+        });
+    }
+
     /// Convert to a string.
 
     /// Format settings.
@@ -295,6 +305,10 @@ public:
 protected:
     bool childrenHaveSecretParts() const;
 
+    /// Some AST classes have naked pointers to children elements as members.
+    /// This method allows iterating over them.
+    virtual void forEachPointerToChild(std::function<void(void**)>) {}
+
 private:
     size_t checkDepthImpl(size_t max_depth) const;
 
diff --git a/src/Parsers/MySQL/ASTAlterCommand.h b/src/Parsers/MySQL/ASTAlterCommand.h
index f097ed71219..87b665ec6a5 100644
--- a/src/Parsers/MySQL/ASTAlterCommand.h
+++ b/src/Parsers/MySQL/ASTAlterCommand.h
@@ -80,6 +80,15 @@ protected:
     {
         throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Method formatImpl is not supported by MySQLParser::ASTAlterCommand.");
     }
+
+    void forEachPointerToChild(std::function<void(void**)> f) override
+    {
+        f(reinterpret_cast<void **>(&index_decl));
+        f(reinterpret_cast<void **>(&default_expression));
+        f(reinterpret_cast<void **>(&additional_columns));
+        f(reinterpret_cast<void **>(&order_by_columns));
+        f(reinterpret_cast<void **>(&properties));
+    }
 };
 
 class ParserAlterCommand : public IParserBase
diff --git a/src/Parsers/MySQL/ASTCreateDefines.h b/src/Parsers/MySQL/ASTCreateDefines.h
index 3d2a79568ab..7c23d1cb87f 100644
--- a/src/Parsers/MySQL/ASTCreateDefines.h
+++ b/src/Parsers/MySQL/ASTCreateDefines.h
@@ -31,6 +31,13 @@ protected:
     {
         throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Method formatImpl is not supported by MySQLParser::ASTCreateDefines.");
     }
+
+    void forEachPointerToChild(std::function<void(void**)> f) override
+    {
+        f(reinterpret_cast<void **>(&columns));
+        f(reinterpret_cast<void **>(&indices));
+        f(reinterpret_cast<void **>(&constraints));
+    }
 };
 
 class ParserCreateDefines : public IParserBase
@@ -44,4 +51,3 @@
 
 }
 }
-
diff --git a/src/Planner/Planner.cpp b/src/Planner/Planner.cpp
index 2ce470d9ecf..37a4614bad3 100644
--- a/src/Planner/Planner.cpp
+++ b/src/Planner/Planner.cpp
@@ -214,9 +214,14 @@ public:
     {
         /// Constness of limit is validated during query analysis stage
         limit_length = query_node.getLimit()->as<ConstantNode &>().getValue().safeGet<UInt64>();
-    }
 
-    if (query_node.hasOffset())
+        if (query_node.hasOffset() && limit_length)
+        {
+            /// Constness of offset is validated during query analysis stage
+            limit_offset = query_node.getOffset()->as<ConstantNode &>().getValue().safeGet<UInt64>();
+        }
+    }
+    else if (query_node.hasOffset())
     {
         /// Constness of offset is validated during query analysis stage
         limit_offset = query_node.getOffset()->as<ConstantNode &>().getValue().safeGet<UInt64>();
diff --git a/src/Planner/PlannerContext.cpp b/src/Planner/PlannerContext.cpp
index 9f4a489bf5f..59ae0f20fac 100644
--- a/src/Planner/PlannerContext.cpp
+++ b/src/Planner/PlannerContext.cpp
@@ -45,7 +45,7 @@ bool GlobalPlannerContext::hasColumnIdentifier(const ColumnIdentifier & column_i
     return column_identifiers.contains(column_identifier);
 }
 
-PlannerContext::PlannerContext(ContextPtr query_context_, GlobalPlannerContextPtr global_planner_context_)
+PlannerContext::PlannerContext(ContextMutablePtr query_context_, GlobalPlannerContextPtr global_planner_context_)
     : query_context(std::move(query_context_))
     , global_planner_context(std::move(global_planner_context_))
 {}
diff --git a/src/Planner/PlannerContext.h b/src/Planner/PlannerContext.h
index 63874bf7ab9..e47198bfe5f 100644
--- a/src/Planner/PlannerContext.h
+++ b/src/Planner/PlannerContext.h
@@ -88,16 +88,22 @@ class PlannerContext
 {
 public:
     /// Create planner context with query context and global planner context
-    PlannerContext(ContextPtr query_context_, GlobalPlannerContextPtr global_planner_context_);
+    PlannerContext(ContextMutablePtr query_context_, GlobalPlannerContextPtr global_planner_context_);
 
     /// Get planner context query context
-    const ContextPtr & getQueryContext() const
+    ContextPtr getQueryContext() const
     {
         return query_context;
     }
 
-    /// Get planner context query context
-    ContextPtr & getQueryContext()
+    /// Get planner context mutable query context
+    const ContextMutablePtr & getMutableQueryContext() const
+    {
+        return query_context;
+    }
+
+    /// Get planner context mutable query context
+    ContextMutablePtr & getMutableQueryContext()
     {
         return query_context;
     }
@@ -137,12 +143,18 @@ public:
       */
     TableExpressionData * getTableExpressionDataOrNull(const QueryTreeNodePtr & table_expression_node);
 
-    /// Get table expression node to data read only map
+    /// Get table expression node to data map
    const std::unordered_map<QueryTreeNodePtr, TableExpressionData> & getTableExpressionNodeToData() const
     {
         return table_expression_node_to_data;
     }
 
+    /// Get table expression node to data map
+    std::unordered_map<QueryTreeNodePtr, TableExpressionData> & getTableExpressionNodeToData()
+    {
+        return table_expression_node_to_data;
+    }
+
     /** Get column node identifier.
       * For column node source check if table expression data is registered.
       * If table expression data is not registered exception is thrown.
@@ -184,7 +196,7 @@ public:
 
 private:
     /// Query context
-    ContextPtr query_context;
+    ContextMutablePtr query_context;
 
     /// Global planner context
     GlobalPlannerContextPtr global_planner_context;
 
diff --git a/src/Planner/PlannerExpressionAnalysis.cpp b/src/Planner/PlannerExpressionAnalysis.cpp
index 9a7340f936c..11444503c5f 100644
--- a/src/Planner/PlannerExpressionAnalysis.cpp
+++ b/src/Planner/PlannerExpressionAnalysis.cpp
@@ -34,15 +34,13 @@ namespace
  * It is client responsibility to update filter analysis result if filter column must be removed after chain is finalized.
  */
 FilterAnalysisResult analyzeFilter(const QueryTreeNodePtr & filter_expression_node,
-    const ColumnsWithTypeAndName & current_output_columns,
+    const ColumnsWithTypeAndName & input_columns,
     const PlannerContextPtr & planner_context,
     ActionsChain & actions_chain)
 {
-    const auto & filter_input = current_output_columns;
-
     FilterAnalysisResult result;
 
-    result.filter_actions = buildActionsDAGFromExpressionNode(filter_expression_node, filter_input, planner_context);
+    result.filter_actions = buildActionsDAGFromExpressionNode(filter_expression_node, input_columns, planner_context);
     result.filter_column_name = result.filter_actions->getOutputs().at(0)->result_name;
     actions_chain.addStep(std::make_unique<ActionsChainStep>(result.filter_actions));
 
@@ -52,8 +50,8 @@ FilterAnalysisResult analyzeFilter(const QueryTreeNodePtr & filter_expression_no
 
 /** Construct aggregation analysis result if query tree has GROUP BY or aggregates.
   * Actions before aggregation are added into actions chain, if result is not null optional.
*/ -std::pair, std::optional> analyzeAggregation(const QueryTreeNodePtr & query_tree, - const ColumnsWithTypeAndName & current_output_columns, +std::optional analyzeAggregation(const QueryTreeNodePtr & query_tree, + const ColumnsWithTypeAndName & input_columns, const PlannerContextPtr & planner_context, ActionsChain & actions_chain) { @@ -69,9 +67,7 @@ std::pair, std::optional(group_by_input); + ActionsDAGPtr before_aggregation_actions = std::make_shared(input_columns); before_aggregation_actions->getOutputs().clear(); std::unordered_set before_aggregation_actions_output_node_names; @@ -203,14 +199,14 @@ std::pair, std::optional analyzeWindow(const QueryTreeNodePtr & query_tree, - const ColumnsWithTypeAndName & current_output_columns, + const ColumnsWithTypeAndName & input_columns, const PlannerContextPtr & planner_context, ActionsChain & actions_chain) { @@ -220,11 +216,9 @@ std::optional analyzeWindow(const QueryTreeNodePtr & query auto window_descriptions = extractWindowDescriptions(window_function_nodes, *planner_context); - const auto & window_input = current_output_columns; - PlannerActionsVisitor actions_visitor(planner_context); - ActionsDAGPtr before_window_actions = std::make_shared(window_input); + ActionsDAGPtr before_window_actions = std::make_shared(input_columns); before_window_actions->getOutputs().clear(); std::unordered_set before_window_actions_output_node_names; @@ -299,12 +293,11 @@ std::optional analyzeWindow(const QueryTreeNodePtr & query * It is client responsibility to update projection analysis result with project names actions after chain is finalized. */ ProjectionAnalysisResult analyzeProjection(const QueryNode & query_node, - const ColumnsWithTypeAndName & current_output_columns, + const ColumnsWithTypeAndName & input_columns, const PlannerContextPtr & planner_context, ActionsChain & actions_chain) { - const auto & projection_input = current_output_columns; - auto projection_actions = buildActionsDAGFromExpressionNode(query_node.getProjectionNode(), projection_input, planner_context); + auto projection_actions = buildActionsDAGFromExpressionNode(query_node.getProjectionNode(), input_columns, planner_context); auto projection_columns = query_node.getProjectionColumns(); size_t projection_columns_size = projection_columns.size(); @@ -347,13 +340,11 @@ ProjectionAnalysisResult analyzeProjection(const QueryNode & query_node, * Actions before sort are added into actions chain. */ SortAnalysisResult analyzeSort(const QueryNode & query_node, - const ColumnsWithTypeAndName & current_output_columns, + const ColumnsWithTypeAndName & input_columns, const PlannerContextPtr & planner_context, ActionsChain & actions_chain) { - const auto & order_by_input = current_output_columns; - - ActionsDAGPtr before_sort_actions = std::make_shared(order_by_input); + ActionsDAGPtr before_sort_actions = std::make_shared(input_columns); auto & before_sort_actions_outputs = before_sort_actions->getOutputs(); before_sort_actions_outputs.clear(); @@ -436,13 +427,12 @@ SortAnalysisResult analyzeSort(const QueryNode & query_node, * Actions before limit by are added into actions chain. 
*/ LimitByAnalysisResult analyzeLimitBy(const QueryNode & query_node, - const ColumnsWithTypeAndName & current_output_columns, + const ColumnsWithTypeAndName & input_columns, const PlannerContextPtr & planner_context, const NameSet & required_output_nodes_names, ActionsChain & actions_chain) { - const auto & limit_by_input = current_output_columns; - auto before_limit_by_actions = buildActionsDAGFromExpressionNode(query_node.getLimitByNode(), limit_by_input, planner_context); + auto before_limit_by_actions = buildActionsDAGFromExpressionNode(query_node.getLimitByNode(), input_columns, planner_context); NameSet limit_by_column_names_set; Names limit_by_column_names; @@ -480,8 +470,7 @@ PlannerExpressionsAnalysisResult buildExpressionAnalysisResult(const QueryTreeNo std::optional where_analysis_result_optional; std::optional where_action_step_index_optional; - const auto * input_columns = actions_chain.getLastStepAvailableOutputColumnsOrNull(); - ColumnsWithTypeAndName current_output_columns = input_columns ? *input_columns : join_tree_input_columns; + ColumnsWithTypeAndName current_output_columns = join_tree_input_columns; if (query_node.hasWhere()) { @@ -490,9 +479,9 @@ PlannerExpressionsAnalysisResult buildExpressionAnalysisResult(const QueryTreeNo current_output_columns = actions_chain.getLastStepAvailableOutputColumns(); } - auto [aggregation_analysis_result_optional, aggregated_columns_optional] = analyzeAggregation(query_tree, current_output_columns, planner_context, actions_chain); - if (aggregated_columns_optional) - current_output_columns = std::move(*aggregated_columns_optional); + auto aggregation_analysis_result_optional = analyzeAggregation(query_tree, current_output_columns, planner_context, actions_chain); + if (aggregation_analysis_result_optional) + current_output_columns = actions_chain.getLastStepAvailableOutputColumns(); std::optional having_analysis_result_optional; std::optional having_action_step_index_optional; diff --git a/src/Planner/PlannerJoinTree.cpp b/src/Planner/PlannerJoinTree.cpp index 6f818e2c8f7..a48cceebfb6 100644 --- a/src/Planner/PlannerJoinTree.cpp +++ b/src/Planner/PlannerJoinTree.cpp @@ -246,17 +246,87 @@ bool applyTrivialCountIfPossible( return true; } -JoinTreeQueryPlan buildQueryPlanForTableExpression(const QueryTreeNodePtr & table_expression, - const SelectQueryInfo & select_query_info, - const SelectQueryOptions & select_query_options, - PlannerContextPtr & planner_context, - bool is_single_table_expression) +void prepareBuildQueryPlanForTableExpression(const QueryTreeNodePtr & table_expression, PlannerContextPtr & planner_context) { const auto & query_context = planner_context->getQueryContext(); const auto & settings = query_context->getSettingsRef(); + auto & table_expression_data = planner_context->getTableExpressionDataOrThrow(table_expression); + auto columns_names = table_expression_data.getColumnNames(); + + auto * table_node = table_expression->as(); + auto * table_function_node = table_expression->as(); + auto * query_node = table_expression->as(); + auto * union_node = table_expression->as(); + + /** The current user must have the SELECT privilege. + * We do not check access rights for table functions because they have been already checked in ITableFunction::execute(). 
+ */ + if (table_node) + { + auto column_names_with_aliases = columns_names; + const auto & alias_columns_names = table_expression_data.getAliasColumnsNames(); + column_names_with_aliases.insert(column_names_with_aliases.end(), alias_columns_names.begin(), alias_columns_names.end()); + checkAccessRights(*table_node, column_names_with_aliases, query_context); + } + + if (columns_names.empty()) + { + NameAndTypePair additional_column_to_read; + + if (table_node || table_function_node) + { + const auto & storage = table_node ? table_node->getStorage() : table_function_node->getStorage(); + const auto & storage_snapshot = table_node ? table_node->getStorageSnapshot() : table_function_node->getStorageSnapshot(); + additional_column_to_read = chooseSmallestColumnToReadFromStorage(storage, storage_snapshot); + + } + else if (query_node || union_node) + { + const auto & projection_columns = query_node ? query_node->getProjectionColumns() : union_node->computeProjectionColumns(); + NamesAndTypesList projection_columns_list(projection_columns.begin(), projection_columns.end()); + additional_column_to_read = ExpressionActions::getSmallestColumn(projection_columns_list); + } + else + { + throw Exception(ErrorCodes::LOGICAL_ERROR, "Expected table, table function, query or union. Actual {}", + table_expression->formatASTForErrorMessage()); + } + + auto & global_planner_context = planner_context->getGlobalPlannerContext(); + const auto & column_identifier = global_planner_context->createColumnIdentifier(additional_column_to_read, table_expression); + columns_names.push_back(additional_column_to_read.name); + table_expression_data.addColumn(additional_column_to_read, column_identifier); + } + + /// Limitation on the number of columns to read + if (settings.max_columns_to_read && columns_names.size() > settings.max_columns_to_read) + throw Exception(ErrorCodes::TOO_MANY_COLUMNS, + "Limit for number of columns to read exceeded. Requested: {}, maximum: {}", + columns_names.size(), + settings.max_columns_to_read); +} + +JoinTreeQueryPlan buildQueryPlanForTableExpression(QueryTreeNodePtr table_expression, + const SelectQueryInfo & select_query_info, + const SelectQueryOptions & select_query_options, + PlannerContextPtr & planner_context, + bool is_single_table_expression, + bool wrap_read_columns_in_subquery) +{ + const auto & query_context = planner_context->getQueryContext(); + const auto & settings = query_context->getSettingsRef(); + + auto & table_expression_data = planner_context->getTableExpressionDataOrThrow(table_expression); + QueryProcessingStage::Enum from_stage = QueryProcessingStage::Enum::FetchColumns; + if (wrap_read_columns_in_subquery) + { + auto columns = table_expression_data.getColumns(); + table_expression = buildSubqueryToReadColumnsFromTableExpression(columns, table_expression, query_context); + } + auto * table_node = table_expression->as(); auto * table_function_node = table_expression->as(); auto * query_node = table_expression->as(); @@ -264,8 +334,6 @@ JoinTreeQueryPlan buildQueryPlanForTableExpression(const QueryTreeNodePtr & tabl QueryPlan query_plan; - auto & table_expression_data = planner_context->getTableExpressionDataOrThrow(table_expression); - if (table_node || table_function_node) { const auto & storage = table_node ? 
table_node->getStorage() : table_function_node->getStorage(); @@ -362,32 +430,6 @@ JoinTreeQueryPlan buildQueryPlanForTableExpression(const QueryTreeNodePtr & tabl auto columns_names = table_expression_data.getColumnNames(); - /** The current user must have the SELECT privilege. - * We do not check access rights for table functions because they have been already checked in ITableFunction::execute(). - */ - if (table_node) - { - auto column_names_with_aliases = columns_names; - const auto & alias_columns_names = table_expression_data.getAliasColumnsNames(); - column_names_with_aliases.insert(column_names_with_aliases.end(), alias_columns_names.begin(), alias_columns_names.end()); - checkAccessRights(*table_node, column_names_with_aliases, planner_context->getQueryContext()); - } - - /// Limitation on the number of columns to read - if (settings.max_columns_to_read && columns_names.size() > settings.max_columns_to_read) - throw Exception(ErrorCodes::TOO_MANY_COLUMNS, - "Limit for number of columns to read exceeded. Requested: {}, maximum: {}", - columns_names.size(), - settings.max_columns_to_read); - - if (columns_names.empty()) - { - auto additional_column_to_read = chooseSmallestColumnToReadFromStorage(storage, storage_snapshot); - const auto & column_identifier = planner_context->getGlobalPlannerContext()->createColumnIdentifier(additional_column_to_read, table_expression); - columns_names.push_back(additional_column_to_read.name); - table_expression_data.addColumn(additional_column_to_read, column_identifier); - } - bool need_rewrite_query_with_final = storage->needRewriteQueryWithFinal(columns_names); if (need_rewrite_query_with_final) { @@ -423,6 +465,17 @@ JoinTreeQueryPlan buildQueryPlanForTableExpression(const QueryTreeNodePtr & tabl { from_stage = storage->getQueryProcessingStage(query_context, select_query_options.to_stage, storage_snapshot, table_expression_query_info); storage->read(query_plan, columns_names, storage_snapshot, table_expression_query_info, query_context, from_stage, max_block_size, max_streams); + + if (query_context->hasQueryContext() && !select_query_options.is_internal) + { + auto local_storage_id = storage->getStorageID(); + query_context->getQueryContext()->addQueryAccessInfo( + backQuoteIfNeed(local_storage_id.getDatabaseName()), + local_storage_id.getFullTableName(), + columns_names, + {}, + {}); + } } if (query_plan.isInitialized()) @@ -464,16 +517,6 @@ JoinTreeQueryPlan buildQueryPlanForTableExpression(const QueryTreeNodePtr & tabl } else { - if (table_expression_data.getColumnNames().empty()) - { - const auto & projection_columns = query_node ? 
query_node->getProjectionColumns() : union_node->computeProjectionColumns(); - NamesAndTypesList projection_columns_list(projection_columns.begin(), projection_columns.end()); - auto additional_column_to_read = ExpressionActions::getSmallestColumn(projection_columns_list); - - const auto & column_identifier = planner_context->getGlobalPlannerContext()->createColumnIdentifier(additional_column_to_read, table_expression); - table_expression_data.addColumn(additional_column_to_read, column_identifier); - } - auto subquery_options = select_query_options.subquery(); Planner subquery_planner(table_expression, subquery_options, planner_context->getGlobalPlannerContext()); /// Propagate storage limits to subquery @@ -516,10 +559,11 @@ JoinTreeQueryPlan buildQueryPlanForTableExpression(const QueryTreeNodePtr & tabl planner.buildQueryPlanIfNeeded(); auto expected_header = planner.getQueryPlan().getCurrentDataStream().header; - materializeBlockInplace(expected_header); if (!blocksHaveEqualStructure(query_plan.getCurrentDataStream().header, expected_header)) { + materializeBlockInplace(expected_header); + auto rename_actions_dag = ActionsDAG::makeConvertingActions( query_plan.getCurrentDataStream().header.getColumnsWithTypeAndName(), expected_header.getColumnsWithTypeAndName(), @@ -1059,14 +1103,40 @@ JoinTreeQueryPlan buildJoinTreeQueryPlan(const QueryTreeNodePtr & query_node, const ColumnIdentifierSet & outer_scope_columns, PlannerContextPtr & planner_context) { - const auto & query_node_typed = query_node->as(); - auto table_expressions_stack = buildTableExpressionsStack(query_node_typed.getJoinTree()); + auto table_expressions_stack = buildTableExpressionsStack(query_node->as().getJoinTree()); size_t table_expressions_stack_size = table_expressions_stack.size(); bool is_single_table_expression = table_expressions_stack_size == 1; std::vector table_expressions_outer_scope_columns(table_expressions_stack_size); ColumnIdentifierSet current_outer_scope_columns = outer_scope_columns; + /// For each table, table function, query, union table expressions prepare before query plan build + for (size_t i = 0; i < table_expressions_stack_size; ++i) + { + const auto & table_expression = table_expressions_stack[i]; + auto table_expression_type = table_expression->getNodeType(); + if (table_expression_type == QueryTreeNodeType::JOIN || + table_expression_type == QueryTreeNodeType::ARRAY_JOIN) + continue; + + prepareBuildQueryPlanForTableExpression(table_expression, planner_context); + } + + /** If left most table expression query plan is planned to stage that is not equal to fetch columns, + * then left most table expression is responsible for providing valid JOIN TREE part of final query plan. + * + * Examples: Distributed, LiveView, Merge storages. 
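+  * For example (query shape illustrative): in `SELECT ... FROM distributed_table AS d JOIN local_table AS l ON ...`
+  * the Distributed storage reports a stage past FetchColumns for the left-most table expression, so the plan
+  * built for it below already provides the whole JOIN TREE part and is returned as-is.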
+ */ + auto left_table_expression = table_expressions_stack.front(); + auto left_table_expression_query_plan = buildQueryPlanForTableExpression(left_table_expression, + select_query_info, + select_query_options, + planner_context, + is_single_table_expression, + false /*wrap_read_columns_in_subquery*/); + if (left_table_expression_query_plan.from_stage != QueryProcessingStage::FetchColumns) + return left_table_expression_query_plan; + for (Int64 i = static_cast(table_expressions_stack_size) - 1; i >= 0; --i) { table_expressions_outer_scope_columns[i] = current_outer_scope_columns; @@ -1120,19 +1190,23 @@ JoinTreeQueryPlan buildJoinTreeQueryPlan(const QueryTreeNodePtr & query_node, } else { - const auto & table_expression_data = planner_context->getTableExpressionDataOrThrow(table_expression); - if (table_expression_data.isRemote() && i != 0) - throw Exception(ErrorCodes::UNSUPPORTED_METHOD, - "JOIN with multiple remote storages is unsupported"); + if (table_expression == left_table_expression) + { + query_plans_stack.push_back(std::move(left_table_expression_query_plan)); /// NOLINT + left_table_expression = {}; + continue; + } + /** If table expression is remote and it is not left most table expression, we wrap read columns from such + * table expression in subquery. + */ + bool is_remote = planner_context->getTableExpressionDataOrThrow(table_expression).isRemote(); query_plans_stack.push_back(buildQueryPlanForTableExpression(table_expression, select_query_info, select_query_options, planner_context, - is_single_table_expression)); - - if (query_plans_stack.back().from_stage != QueryProcessingStage::FetchColumns) - break; + is_single_table_expression, + is_remote /*wrap_read_columns_in_subquery*/)); } } diff --git a/src/Planner/PlannerJoins.cpp b/src/Planner/PlannerJoins.cpp index 2a7bd49d6a3..63fe3cc7b55 100644 --- a/src/Planner/PlannerJoins.cpp +++ b/src/Planner/PlannerJoins.cpp @@ -18,6 +18,7 @@ #include #include +#include #include #include #include @@ -61,6 +62,8 @@ void JoinClause::dump(WriteBuffer & buffer) const for (const auto & dag_node : dag_nodes) { dag_nodes_dump += dag_node->result_name; + dag_nodes_dump += " "; + dag_nodes_dump += dag_node->result_type->getName(); dag_nodes_dump += ", "; } diff --git a/src/Planner/TableExpressionData.h b/src/Planner/TableExpressionData.h index e828f128e38..0f74e671ac7 100644 --- a/src/Planner/TableExpressionData.h +++ b/src/Planner/TableExpressionData.h @@ -101,6 +101,17 @@ public: return column_names; } + NamesAndTypes getColumns() const + { + NamesAndTypes result; + result.reserve(column_names.size()); + + for (const auto & column_name : column_names) + result.push_back(column_name_to_column.at(column_name)); + + return result; + } + ColumnIdentifiers getColumnIdentifiers() const { ColumnIdentifiers result; diff --git a/src/Planner/Utils.cpp b/src/Planner/Utils.cpp index 5c5eadac55d..2018ddafcdd 100644 --- a/src/Planner/Utils.cpp +++ b/src/Planner/Utils.cpp @@ -4,6 +4,7 @@ #include #include +#include #include #include @@ -19,6 +20,7 @@ #include #include +#include #include #include #include @@ -341,27 +343,6 @@ QueryTreeNodePtr mergeConditionNodes(const QueryTreeNodes & condition_nodes, con return function_node; } -std::optional tryExtractConstantFromConditionNode(const QueryTreeNodePtr & condition_node) -{ - const auto * constant_node = condition_node->as(); - if (!constant_node) - return {}; - - const auto & value = constant_node->getValue(); - auto constant_type = constant_node->getResultType(); - constant_type = 
        removeNullable(removeLowCardinality(constant_type));
-
-    auto which_constant_type = WhichDataType(constant_type);
-    if (!which_constant_type.isUInt8() && !which_constant_type.isNothing())
-        return {};
-
-    if (value.isNull())
-        return false;
-
-    UInt8 predicate_value = value.safeGet<UInt8>();
-    return predicate_value > 0;
-}
-
 QueryTreeNodePtr replaceTablesAndTableFunctionsWithDummyTables(const QueryTreeNodePtr & query_node,
     const ContextPtr & context,
     ResultReplacementMap * result_replacement_map)
@@ -391,4 +372,36 @@ QueryTreeNodePtr replaceTablesAndTableFunctionsWithDummyTables(const QueryTreeNo
     return query_node->cloneAndReplace(replacement_map);
 }
 
+QueryTreeNodePtr buildSubqueryToReadColumnsFromTableExpression(const NamesAndTypes & columns,
+    const QueryTreeNodePtr & table_expression,
+    const ContextPtr & context)
+{
+    auto projection_columns = columns;
+
+    QueryTreeNodes subquery_projection_nodes;
+    subquery_projection_nodes.reserve(projection_columns.size());
+
+    for (const auto & column : projection_columns)
+        subquery_projection_nodes.push_back(std::make_shared<ColumnNode>(column, table_expression));
+
+    if (subquery_projection_nodes.empty())
+    {
+        auto constant_data_type = std::make_shared<DataTypeUInt64>();
+        subquery_projection_nodes.push_back(std::make_shared<ConstantNode>(1UL, constant_data_type));
+        projection_columns.push_back({"1", std::move(constant_data_type)});
+    }
+
+    auto context_copy = Context::createCopy(context);
+    updateContextForSubqueryExecution(context_copy);
+
+    auto query_node = std::make_shared<QueryNode>(std::move(context_copy));
+
+    query_node->resolveProjectionColumns(projection_columns);
+    query_node->getProjection().getNodes() = std::move(subquery_projection_nodes);
+    query_node->getJoinTree() = table_expression;
+    query_node->setIsSubquery(true);
+
+    return query_node;
+}
+
 }
diff --git a/src/Planner/Utils.h b/src/Planner/Utils.h
index 0520bd67d26..0effb1d08ae 100644
--- a/src/Planner/Utils.h
+++ b/src/Planner/Utils.h
@@ -63,13 +63,15 @@ bool queryHasWithTotalsInAnySubqueryInJoinTree(const QueryTreeNodePtr & query_no
 /// Returns `and` function node that has condition nodes as its arguments
 QueryTreeNodePtr mergeConditionNodes(const QueryTreeNodes & condition_nodes, const ContextPtr & context);
 
-/// Try extract boolean constant from condition node
-std::optional<bool> tryExtractConstantFromConditionNode(const QueryTreeNodePtr & condition_node);
-
 /// Replace tables nodes and table function nodes with dummy table nodes
 using ResultReplacementMap = std::unordered_map<QueryTreeNodePtr, QueryTreeNodePtr>;
 QueryTreeNodePtr replaceTablesAndTableFunctionsWithDummyTables(const QueryTreeNodePtr & query_node,
     const ContextPtr & context,
     ResultReplacementMap * result_replacement_map = nullptr);
 
+/// Build subquery to read specified columns from table expression
+QueryTreeNodePtr buildSubqueryToReadColumnsFromTableExpression(const NamesAndTypes & columns,
+    const QueryTreeNodePtr & table_expression,
+    const ContextPtr & context);
+
 }
diff --git a/src/Processors/QueryPlan/ISourceStep.cpp b/src/Processors/QueryPlan/ISourceStep.cpp
index 0644d9b44eb..37f56bc7a43 100644
--- a/src/Processors/QueryPlan/ISourceStep.cpp
+++ b/src/Processors/QueryPlan/ISourceStep.cpp
@@ -12,10 +12,19 @@ ISourceStep::ISourceStep(DataStream output_stream_)
 QueryPipelineBuilderPtr ISourceStep::updatePipeline(QueryPipelineBuilders, const BuildQueryPipelineSettings & settings)
 {
     auto pipeline = std::make_unique<QueryPipelineBuilder>();
-    QueryPipelineProcessorsCollector collector(*pipeline, this);
+
+    /// For a `Source` step, `initializePipeline` does not add new Processors
+    /// to `pipeline->pipe`; instead it assigns a newly created Pipe, and the
+    /// Processors for the step are added there. So we do not need a
+    /// `QueryPipelineProcessorsCollector` to collect them.
     initializePipeline(*pipeline, settings);
-    auto added_processors = collector.detachProcessors();
-    processors.insert(processors.end(), added_processors.begin(), added_processors.end());
+
+    /// But we still need to set the QueryPlanStep for the Processors manually;
+    /// it will be used in `EXPLAIN PIPELINE`.
+    for (auto & processor : processors)
+    {
+        processor->setQueryPlanStep(this);
+    }
     return pipeline;
 }
diff --git a/src/Processors/QueryPlan/Optimizations/optimizeReadInOrder.cpp b/src/Processors/QueryPlan/Optimizations/optimizeReadInOrder.cpp
index 0874a3771ae..9407504579b 100644
--- a/src/Processors/QueryPlan/Optimizations/optimizeReadInOrder.cpp
+++ b/src/Processors/QueryPlan/Optimizations/optimizeReadInOrder.cpp
@@ -519,8 +519,9 @@ AggregationInputOrder buildInputOrderInfo(
 
     enreachFixedColumns(sorting_key_dag, fixed_key_columns);
 
-    for (auto it = matches.cbegin(); it != matches.cend(); ++it)
+    for (const auto * output : dag->getOutputs())
     {
+        auto it = matches.find(output);
         const MatchedTrees::Match * match = &it->second;
         if (match->node)
         {
diff --git a/src/QueryPipeline/printPipeline.h b/src/QueryPipeline/printPipeline.h
index 76143211875..e91909cb50b 100644
--- a/src/QueryPipeline/printPipeline.h
+++ b/src/QueryPipeline/printPipeline.h
@@ -10,7 +10,6 @@ namespace DB
  *   You can render it with:
  *    dot -T png < pipeline.dot > pipeline.png
  */
-
 template <typename Processors, typename Statuses>
 void printPipeline(const Processors & processors, const Statuses & statuses, WriteBuffer & out)
 {
@@ -70,5 +69,4 @@ void printPipeline(const Processors & processors, WriteBuffer & out)
 /// If QueryPlanStep wasn't set for processor, representation may be not correct.
 /// If with_header is set, prints block header for each edge.
 void printPipelineCompact(const Processors & processors, WriteBuffer & out, bool with_header);
-
 }
diff --git a/src/Storages/HDFS/StorageHDFSCluster.cpp b/src/Storages/HDFS/StorageHDFSCluster.cpp
index 8dbaa0796e9..a88470d01c7 100644
--- a/src/Storages/HDFS/StorageHDFSCluster.cpp
+++ b/src/Storages/HDFS/StorageHDFSCluster.cpp
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include <Interpreters/InterpreterSelectQueryAnalyzer.h>
 #include
 #include
 #include
@@ -83,8 +84,12 @@ Pipe StorageHDFSCluster::read(
     auto extension = getTaskIteratorExtension(query_info.query, context);
 
     /// Calculate the header. This is significant, because some columns could be thrown away in some cases like query with count(*)
-    Block header =
-        InterpreterSelectQuery(query_info.query, context, SelectQueryOptions(processed_stage).analyze()).getSampleBlock();
+    Block header;
+
+    if (context->getSettingsRef().allow_experimental_analyzer)
+        header = InterpreterSelectQueryAnalyzer::getSampleBlock(query_info.query, context, SelectQueryOptions(processed_stage).analyze());
+    else
+        header = InterpreterSelectQuery(query_info.query, context, SelectQueryOptions(processed_stage).analyze()).getSampleBlock();
 
     const Scalars & scalars = context->hasQueryContext() ?
context->getQueryContext()->getScalars() : Scalars{}; diff --git a/src/Storages/MergeTree/MergeTreeData.cpp b/src/Storages/MergeTree/MergeTreeData.cpp index 73007e3f178..d7cea944689 100644 --- a/src/Storages/MergeTree/MergeTreeData.cpp +++ b/src/Storages/MergeTree/MergeTreeData.cpp @@ -4121,9 +4121,9 @@ void MergeTreeData::delayInsertOrThrowIfNeeded(Poco::Event * until, const Contex ProfileEvents::increment(ProfileEvents::RejectedInserts); throw Exception( ErrorCodes::TOO_MANY_PARTS, - "Too many parts ({}) in all partitions in total. This indicates wrong choice of partition key. The threshold can be modified " + "Too many parts ({}) in all partitions in total in table '{}'. This indicates wrong choice of partition key. The threshold can be modified " "with 'max_parts_in_total' setting in element in config.xml or with per-table setting.", - parts_count_in_total); + parts_count_in_total, getLogName()); } size_t outdated_parts_over_threshold = 0; @@ -4137,8 +4137,8 @@ void MergeTreeData::delayInsertOrThrowIfNeeded(Poco::Event * until, const Contex ProfileEvents::increment(ProfileEvents::RejectedInserts); throw Exception( ErrorCodes::TOO_MANY_PARTS, - "Too many inactive parts ({}). Parts cleaning are processing significantly slower than inserts", - outdated_parts_count_in_partition); + "Too many inactive parts ({}) in table '{}'. Parts cleaning are processing significantly slower than inserts", + outdated_parts_count_in_partition, getLogName()); } if (settings->inactive_parts_to_delay_insert > 0 && outdated_parts_count_in_partition >= settings->inactive_parts_to_delay_insert) outdated_parts_over_threshold = outdated_parts_count_in_partition - settings->inactive_parts_to_delay_insert + 1; @@ -4151,6 +4151,7 @@ void MergeTreeData::delayInsertOrThrowIfNeeded(Poco::Event * until, const Contex const auto active_parts_to_throw_insert = query_settings.parts_to_throw_insert ? query_settings.parts_to_throw_insert : settings->parts_to_throw_insert; size_t active_parts_over_threshold = 0; + { bool parts_are_large_enough_in_average = settings->max_avg_part_size_for_too_many_parts && average_part_size > settings->max_avg_part_size_for_too_many_parts; @@ -4160,9 +4161,10 @@ void MergeTreeData::delayInsertOrThrowIfNeeded(Poco::Event * until, const Contex ProfileEvents::increment(ProfileEvents::RejectedInserts); throw Exception( ErrorCodes::TOO_MANY_PARTS, - "Too many parts ({} with average size of {}). Merges are processing significantly slower than inserts", + "Too many parts ({} with average size of {}) in table '{}'. 
Merges are processing significantly slower than inserts", parts_count_in_partition, - ReadableSize(average_part_size)); + ReadableSize(average_part_size), + getLogName()); } if (active_parts_to_delay_insert > 0 && parts_count_in_partition >= active_parts_to_delay_insert && !parts_are_large_enough_in_average) diff --git a/src/Storages/MergeTree/MergeTreeIndexInverted.cpp b/src/Storages/MergeTree/MergeTreeIndexInverted.cpp index e7d86f2a635..8e8409f3868 100644 --- a/src/Storages/MergeTree/MergeTreeIndexInverted.cpp +++ b/src/Storages/MergeTree/MergeTreeIndexInverted.cpp @@ -201,6 +201,7 @@ MergeTreeConditionInverted::MergeTreeConditionInverted( rpn.push_back(RPNElement::FUNCTION_UNKNOWN); return; } + rpn = std::move( RPNBuilder( query_info.filter_actions_dag->getOutputs().at(0), context_, @@ -208,10 +209,10 @@ MergeTreeConditionInverted::MergeTreeConditionInverted( { return this->traverseAtomAST(node, out); }).extractRPN()); + return; } ASTPtr filter_node = buildFilterNode(query_info.query); - if (!filter_node) { rpn.push_back(RPNElement::FUNCTION_UNKNOWN); @@ -226,7 +227,6 @@ MergeTreeConditionInverted::MergeTreeConditionInverted( query_info.prepared_sets, [&](const RPNBuilderTreeNode & node, RPNElement & out) { return traverseAtomAST(node, out); }); rpn = std::move(builder).extractRPN(); - } /// Keep in-sync with MergeTreeConditionFullText::alwaysUnknownOrTrue diff --git a/src/Storages/MergeTree/RPNBuilder.cpp b/src/Storages/MergeTree/RPNBuilder.cpp index cee5038ed21..fb3592a1541 100644 --- a/src/Storages/MergeTree/RPNBuilder.cpp +++ b/src/Storages/MergeTree/RPNBuilder.cpp @@ -59,7 +59,7 @@ void appendColumnNameWithoutAlias(const ActionsDAG::Node & node, WriteBuffer & o { auto name = node.function_base->getName(); if (legacy && name == "modulo") - writeCString("moduleLegacy", out); + writeCString("moduloLegacy", out); else writeString(name, out); diff --git a/src/Storages/MergeTree/ReplicatedMergeTreeAttachThread.cpp b/src/Storages/MergeTree/ReplicatedMergeTreeAttachThread.cpp index 557123ddae2..c859c994818 100644 --- a/src/Storages/MergeTree/ReplicatedMergeTreeAttachThread.cpp +++ b/src/Storages/MergeTree/ReplicatedMergeTreeAttachThread.cpp @@ -60,11 +60,11 @@ void ReplicatedMergeTreeAttachThread::run() if (needs_retry) { - LOG_ERROR(log, "Initialization failed. Error: {}", e.message()); + LOG_ERROR(log, "Initialization failed. Error: {}", getCurrentExceptionMessage(/* with_stacktrace */ true)); } else { - LOG_ERROR(log, "Initialization failed, table will remain readonly. Error: {}", e.message()); + LOG_ERROR(log, "Initialization failed, table will remain readonly. 
Error: {}", getCurrentExceptionMessage(/* with_stacktrace */ true)); storage.initialization_done = true; } } diff --git a/src/Storages/StorageDistributed.cpp b/src/Storages/StorageDistributed.cpp index 4eb454e5156..77972b67644 100644 --- a/src/Storages/StorageDistributed.cpp +++ b/src/Storages/StorageDistributed.cpp @@ -39,11 +39,16 @@ #include #include +#include +#include #include #include #include +#include +#include #include #include +#include #include #include @@ -55,6 +60,7 @@ #include #include #include +#include #include #include #include @@ -69,12 +75,14 @@ #include #include +#include #include #include #include #include +#include #include #include #include @@ -138,6 +146,7 @@ namespace ErrorCodes extern const int DISTRIBUTED_TOO_MANY_PENDING_BYTES; extern const int ARGUMENT_OUT_OF_BOUND; extern const int TOO_LARGE_DISTRIBUTED_DEPTH; + extern const int DISTRIBUTED_IN_JOIN_SUBQUERY_DENIED; } namespace ActionLocks @@ -634,12 +643,278 @@ StorageSnapshotPtr StorageDistributed::getStorageSnapshotForQuery( namespace { -QueryTreeNodePtr buildQueryTreeDistributedTableReplacedWithLocalTable(const SelectQueryInfo & query_info, +/// Visitor that collect column source to columns mapping from query and all subqueries +class CollectColumnSourceToColumnsVisitor : public InDepthQueryTreeVisitor +{ +public: + struct Columns + { + NameSet column_names; + NamesAndTypes columns; + + void addColumn(NameAndTypePair column) + { + if (column_names.contains(column.name)) + return; + + column_names.insert(column.name); + columns.push_back(std::move(column)); + } + }; + + const std::unordered_map & getColumnSourceToColumns() const + { + return column_source_to_columns; + } + + void visitImpl(QueryTreeNodePtr & node) + { + auto * column_node = node->as(); + if (!column_node) + return; + + auto column_source = column_node->getColumnSourceOrNull(); + if (!column_source) + return; + + auto it = column_source_to_columns.find(column_source); + if (it == column_source_to_columns.end()) + { + auto [insert_it, _] = column_source_to_columns.emplace(column_source, Columns()); + it = insert_it; + } + + it->second.addColumn(column_node->getColumn()); + } + +private: + std::unordered_map column_source_to_columns; +}; + +/** Visitor that rewrites IN and JOINs in query and all subqueries according to distributed_product_mode and + * prefer_global_in_and_join settings. + * + * Additionally collects GLOBAL JOIN and GLOBAL IN query nodes. + * + * If distributed_product_mode = deny, then visitor throws exception if there are multiple distributed tables. + * If distributed_product_mode = local, then visitor collects replacement map for tables that must be replaced + * with local tables. + * If distributed_product_mode = global or prefer_global_in_and_join setting is true, then visitor rewrites JOINs and IN functions that + * contain distributed tables to GLOBAL JOINs and GLOBAL IN functions. + * If distributed_product_mode = allow, then visitor does not rewrite query if there are multiple distributed tables. 
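+  *
+  * For example (table name illustrative), with distributed_product_mode = 'global' a query such as
+  *   SELECT uniq(user_id) FROM distributed_table WHERE user_id IN (SELECT user_id FROM distributed_table)
+  * is rewritten to use GLOBAL IN: the subquery is executed once on the initiator and its result is shipped
+  * to the shards, instead of being re-evaluated against the Distributed table on every shard.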
+ */ +class DistributedProductModeRewriteInJoinVisitor : public InDepthQueryTreeVisitorWithContext +{ +public: + using Base = InDepthQueryTreeVisitorWithContext; + using Base::Base; + + explicit DistributedProductModeRewriteInJoinVisitor(const ContextPtr & context_) + : Base(context_) + {} + + struct InFunctionOrJoin + { + QueryTreeNodePtr query_node; + size_t subquery_depth = 0; + }; + + const std::unordered_map & getReplacementMap() const + { + return replacement_map; + } + + const std::vector & getGlobalInOrJoinNodes() const + { + return global_in_or_join_nodes; + } + + static bool needChildVisit(QueryTreeNodePtr & parent, QueryTreeNodePtr & child) + { + auto * function_node = parent->as(); + if (function_node && isNameOfGlobalInFunction(function_node->getFunctionName())) + return false; + + auto * join_node = parent->as(); + if (join_node && join_node->getLocality() == JoinLocality::Global && join_node->getRightTableExpression() == child) + return false; + + return true; + } + + void visitImpl(QueryTreeNodePtr & node) + { + auto * function_node = node->as(); + auto * join_node = node->as(); + + if ((function_node && isNameOfGlobalInFunction(function_node->getFunctionName())) || + (join_node && join_node->getLocality() == JoinLocality::Global)) + { + InFunctionOrJoin in_function_or_join_entry; + in_function_or_join_entry.query_node = node; + in_function_or_join_entry.subquery_depth = getSubqueryDepth(); + global_in_or_join_nodes.push_back(std::move(in_function_or_join_entry)); + return; + } + + if ((function_node && isNameOfLocalInFunction(function_node->getFunctionName())) || + (join_node && join_node->getLocality() != JoinLocality::Global)) + { + InFunctionOrJoin in_function_or_join_entry; + in_function_or_join_entry.query_node = node; + in_function_or_join_entry.subquery_depth = getSubqueryDepth(); + in_function_or_join_stack.push_back(in_function_or_join_entry); + return; + } + + if (node->getNodeType() == QueryTreeNodeType::TABLE) + tryRewriteTableNodeIfNeeded(node); + } + + void leaveImpl(QueryTreeNodePtr & node) + { + if (!in_function_or_join_stack.empty() && node.get() == in_function_or_join_stack.back().query_node.get()) + in_function_or_join_stack.pop_back(); + } + +private: + void tryRewriteTableNodeIfNeeded(const QueryTreeNodePtr & table_node) + { + const auto & table_node_typed = table_node->as(); + const auto * distributed_storage = typeid_cast(table_node_typed.getStorage().get()); + if (!distributed_storage) + return; + + bool distributed_valid_for_rewrite = distributed_storage->getShardCount() >= 2; + if (!distributed_valid_for_rewrite) + return; + + auto distributed_product_mode = getSettings().distributed_product_mode; + + if (distributed_product_mode == DistributedProductMode::LOCAL) + { + StorageID remote_storage_id = StorageID{distributed_storage->getRemoteDatabaseName(), + distributed_storage->getRemoteTableName()}; + auto resolved_remote_storage_id = getContext()->resolveStorageID(remote_storage_id); + const auto & distributed_storage_columns = table_node_typed.getStorageSnapshot()->metadata->getColumns(); + auto storage = std::make_shared(resolved_remote_storage_id, distributed_storage_columns); + auto replacement_table_expression = std::make_shared(std::move(storage), getContext()); + replacement_map.emplace(table_node.get(), std::move(replacement_table_expression)); + } + else if ((distributed_product_mode == DistributedProductMode::GLOBAL || getSettings().prefer_global_in_and_join) && + !in_function_or_join_stack.empty()) + { + auto * in_or_join_node_to_modify 
+            auto * in_or_join_node_to_modify = in_function_or_join_stack.back().query_node.get();
+
+            if (auto * in_function_to_modify = in_or_join_node_to_modify->as<FunctionNode>())
+            {
+                auto global_in_function_name = getGlobalInFunctionNameForLocalInFunctionName(in_function_to_modify->getFunctionName());
+                auto global_in_function_resolver = FunctionFactory::instance().get(global_in_function_name, getContext());
+                in_function_to_modify->resolveAsFunction(global_in_function_resolver->build(in_function_to_modify->getArgumentColumns()));
+            }
+            else if (auto * join_node_to_modify = in_or_join_node_to_modify->as<JoinNode>())
+            {
+                join_node_to_modify->setLocality(JoinLocality::Global);
+            }
+
+            global_in_or_join_nodes.push_back(in_function_or_join_stack.back());
+        }
+        else if (distributed_product_mode == DistributedProductMode::ALLOW)
+        {
+            return;
+        }
+        else if (distributed_product_mode == DistributedProductMode::DENY)
+        {
+            throw Exception(ErrorCodes::DISTRIBUTED_IN_JOIN_SUBQUERY_DENIED,
+                "Double-distributed IN/JOIN subqueries is denied (distributed_product_mode = 'deny'). "
+                "You may rewrite query to use local tables "
+                "in subqueries, or use GLOBAL keyword, or set distributed_product_mode to suitable value.");
+        }
+    }
+
+    std::vector<InFunctionOrJoin> in_function_or_join_stack;
+    std::unordered_map<const IQueryTreeNode *, QueryTreeNodePtr> replacement_map;
+    std::vector<InFunctionOrJoin> global_in_or_join_nodes;
+};
+
+/** Execute subquery node and put result in mutable context temporary table.
+  * Returns table node that is initialized with temporary table storage.
+  */
+QueryTreeNodePtr executeSubqueryNode(const QueryTreeNodePtr & subquery_node,
+    ContextMutablePtr & mutable_context,
+    size_t subquery_depth)
+{
+    auto subquery_hash = subquery_node->getTreeHash();
+    String temporary_table_name = fmt::format("_data_{}_{}", subquery_hash.first, subquery_hash.second);
+
+    const auto & external_tables = mutable_context->getExternalTables();
+    auto external_table_it = external_tables.find(temporary_table_name);
+    if (external_table_it != external_tables.end())
+    {
+        auto temporary_table_expression_node = std::make_shared<TableNode>(external_table_it->second, mutable_context);
+        temporary_table_expression_node->setTemporaryTableName(temporary_table_name);
+        return temporary_table_expression_node;
+    }
+
+    auto subquery_options = SelectQueryOptions(QueryProcessingStage::Complete, subquery_depth, true /*is_subquery*/);
+    auto context_copy = Context::createCopy(mutable_context);
+    updateContextForSubqueryExecution(context_copy);
+
+    InterpreterSelectQueryAnalyzer interpreter(subquery_node, context_copy, subquery_options);
+    auto & query_plan = interpreter.getQueryPlan();
+
+    auto sample_block_with_unique_names = query_plan.getCurrentDataStream().header;
+    makeUniqueColumnNamesInBlock(sample_block_with_unique_names);
+
+    if (!blocksHaveEqualStructure(sample_block_with_unique_names, query_plan.getCurrentDataStream().header))
+    {
+        auto actions_dag = ActionsDAG::makeConvertingActions(
+            query_plan.getCurrentDataStream().header.getColumnsWithTypeAndName(),
+            sample_block_with_unique_names.getColumnsWithTypeAndName(),
+            ActionsDAG::MatchColumnsMode::Position);
+        auto converting_step = std::make_unique<ExpressionStep>(query_plan.getCurrentDataStream(), std::move(actions_dag));
+        query_plan.addStep(std::move(converting_step));
+    }
+
+    Block sample = interpreter.getSampleBlock();
+    NamesAndTypesList columns = sample.getNamesAndTypesList();
+
+    auto external_storage_holder = TemporaryTableHolder(
+        mutable_context,
+        ColumnsDescription{columns},
+        ConstraintsDescription{},
+        nullptr /*query*/,
+        true /*create_for_global_subquery*/);
+
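+    /// Materialize the subquery result into the temporary table and register the table in the
+    /// mutable context, so later identical subqueries are served from it instead of re-executing.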
+    StoragePtr external_storage = external_storage_holder.getTable();
+    auto temporary_table_expression_node = std::make_shared<TableNode>(external_storage, mutable_context);
+    temporary_table_expression_node->setTemporaryTableName(temporary_table_name);
+
+    auto table_out = external_storage->write({}, external_storage->getInMemoryMetadataPtr(), mutable_context);
+    auto io = interpreter.execute();
+    io.pipeline.complete(std::move(table_out));
+    CompletedPipelineExecutor executor(io.pipeline);
+    executor.execute();
+
+    mutable_context->addExternalTable(temporary_table_name, std::move(external_storage_holder));
+
+    return temporary_table_expression_node;
+}
+
+QueryTreeNodePtr buildQueryTreeDistributed(SelectQueryInfo & query_info,
+    const StorageSnapshotPtr & distributed_storage_snapshot,
+    const StorageID & remote_storage_id,
+    const ASTPtr & remote_table_function)
 {
-    const auto & query_context = query_info.planner_context->getQueryContext();
+    auto & planner_context = query_info.planner_context;
+    const auto & query_context = planner_context->getQueryContext();
+
+    std::optional<TableExpressionModifiers> table_expression_modifiers;
+
+    if (auto * query_info_table_node = query_info.table_expression->as<TableNode>())
+        table_expression_modifiers = query_info_table_node->getTableExpressionModifiers();
+    else if (auto * query_info_table_function_node = query_info.table_expression->as<TableFunctionNode>())
+        table_expression_modifiers = query_info_table_function_node->getTableExpressionModifiers();

     QueryTreeNodePtr replacement_table_expression;

@@ -651,6 +926,9 @@ QueryTreeNodePtr buildQueryTreeDistributedTableReplacedWithLocalTable(const Sele
     auto table_function_node = std::make_shared<TableFunctionNode>(remote_table_function_node.getFunctionName());
     table_function_node->getArgumentsNode() = remote_table_function_node.getArgumentsNode();

+    if (table_expression_modifiers)
+        table_function_node->setTableExpressionModifiers(*table_expression_modifiers);
+
     QueryAnalysisPass query_analysis_pass;
     query_analysis_pass.run(table_function_node, query_context);

@@ -660,13 +938,89 @@ QueryTreeNodePtr buildQueryTreeDistributedTableReplacedWithLocalTable(const Sele
     {
         auto resolved_remote_storage_id = query_context->resolveStorageID(remote_storage_id);
         auto storage = std::make_shared<StorageDummy>(resolved_remote_storage_id, distributed_storage_snapshot->metadata->getColumns());
+        auto table_node = std::make_shared<TableNode>(std::move(storage), query_context);

-        replacement_table_expression = std::make_shared<TableNode>(std::move(storage), query_context);
+        if (table_expression_modifiers)
+            table_node->setTableExpressionModifiers(*table_expression_modifiers);
+
+        replacement_table_expression = std::move(table_node);
     }

     replacement_table_expression->setAlias(query_info.table_expression->getAlias());

-    return query_info.query_tree->cloneAndReplace(query_info.table_expression, std::move(replacement_table_expression));
+    auto query_tree_to_modify = query_info.query_tree->cloneAndReplace(query_info.table_expression, std::move(replacement_table_expression));
+
+    CollectColumnSourceToColumnsVisitor collect_column_source_to_columns_visitor;
+    collect_column_source_to_columns_visitor.visit(query_tree_to_modify);
+
+    const auto & column_source_to_columns = collect_column_source_to_columns_visitor.getColumnSourceToColumns();
+
+    DistributedProductModeRewriteInJoinVisitor visitor(query_info.planner_context->getQueryContext());
+    visitor.visit(query_tree_to_modify);
+
+    auto replacement_map = visitor.getReplacementMap();
+    const auto & global_in_or_join_nodes = visitor.getGlobalInOrJoinNodes();
+
+    for (const auto & global_in_or_join_node : global_in_or_join_nodes)
+    {
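+        /// For a GLOBAL JOIN the right table expression is executed on the initiator and shipped
+        /// to remote shards as a temporary table; a plain table or table function on the right
+        /// side is first wrapped into a subquery that reads only the columns the join uses.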
+        if (auto * join_node = global_in_or_join_node.query_node->as<JoinNode>())
+        {
+            auto join_right_table_expression = join_node->getRightTableExpression();
+            auto join_right_table_expression_node_type = join_right_table_expression->getNodeType();
+
+            QueryTreeNodePtr subquery_node;
+
+            if (join_right_table_expression_node_type == QueryTreeNodeType::QUERY ||
+                join_right_table_expression_node_type == QueryTreeNodeType::UNION)
+            {
+                subquery_node = join_right_table_expression;
+            }
+            else if (join_right_table_expression_node_type == QueryTreeNodeType::TABLE ||
+                join_right_table_expression_node_type == QueryTreeNodeType::TABLE_FUNCTION)
+            {
+                const auto & columns = column_source_to_columns.at(join_right_table_expression).columns;
+                subquery_node = buildSubqueryToReadColumnsFromTableExpression(columns,
+                    join_right_table_expression,
+                    planner_context->getQueryContext());
+            }
+            else
+            {
+                throw Exception(ErrorCodes::LOGICAL_ERROR,
+                    "Expected JOIN right table expression to be table, table function, query or union node. Actual {}",
+                    join_right_table_expression->formatASTForErrorMessage());
+            }
+
+            auto temporary_table_expression_node = executeSubqueryNode(subquery_node,
+                planner_context->getMutableQueryContext(),
+                global_in_or_join_node.subquery_depth);
+            temporary_table_expression_node->setAlias(join_right_table_expression->getAlias());
+            replacement_map.emplace(join_right_table_expression.get(), std::move(temporary_table_expression_node));
+            continue;
+        }
+        else if (auto * in_function_node = global_in_or_join_node.query_node->as<FunctionNode>())
+        {
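+            /// For GLOBAL IN only query and union right-hand sides are materialized;
+            /// other right-hand sides (e.g. a table) are left as they are.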
Actual {}", + global_in_or_join_node.query_node->formatASTForErrorMessage()); + } + } + + if (!replacement_map.empty()) + query_tree_to_modify = query_tree_to_modify->cloneAndReplace(replacement_map); + + return query_tree_to_modify; } } @@ -694,17 +1048,13 @@ void StorageDistributed::read( if (!remote_table_function_ptr) remote_storage_id = StorageID{remote_database, remote_table}; - auto query_tree_with_replaced_distributed_table = buildQueryTreeDistributedTableReplacedWithLocalTable(query_info, + auto query_tree_distributed = buildQueryTreeDistributed(query_info, storage_snapshot, remote_storage_id, remote_table_function_ptr); - query_ast = queryNodeToSelectQuery(query_tree_with_replaced_distributed_table); - - Planner planner(query_tree_with_replaced_distributed_table, SelectQueryOptions(processed_stage).analyze()); - planner.buildQueryPlanIfNeeded(); - - header = planner.getQueryPlan().getCurrentDataStream().header; + query_ast = queryNodeToSelectQuery(query_tree_distributed); + header = InterpreterSelectQueryAnalyzer::getSampleBlock(query_ast, local_context, SelectQueryOptions(processed_stage).analyze()); } else { diff --git a/src/Storages/StorageMergeTree.cpp b/src/Storages/StorageMergeTree.cpp index aadd7b8c20a..d9bb189524c 100644 --- a/src/Storages/StorageMergeTree.cpp +++ b/src/Storages/StorageMergeTree.cpp @@ -18,6 +18,7 @@ #include #include #include +#include #include #include #include @@ -223,8 +224,12 @@ void StorageMergeTree::read( auto cluster = local_context->getCluster(local_context->getSettingsRef().cluster_for_parallel_replicas); - Block header = - InterpreterSelectQuery(modified_query_ast, local_context, SelectQueryOptions(processed_stage).analyze()).getSampleBlock(); + Block header; + + if (local_context->getSettingsRef().allow_experimental_analyzer) + header = InterpreterSelectQueryAnalyzer::getSampleBlock(modified_query_ast, local_context, SelectQueryOptions(processed_stage).analyze()); + else + header = InterpreterSelectQuery(modified_query_ast, local_context, SelectQueryOptions(processed_stage).analyze()).getSampleBlock(); ClusterProxy::SelectStreamFactory select_stream_factory = ClusterProxy::SelectStreamFactory( diff --git a/src/Storages/StorageS3Cluster.cpp b/src/Storages/StorageS3Cluster.cpp index c7535bb4550..a3fd24b4f98 100644 --- a/src/Storages/StorageS3Cluster.cpp +++ b/src/Storages/StorageS3Cluster.cpp @@ -17,6 +17,7 @@ #include #include #include +#include #include #include #include @@ -102,7 +103,20 @@ Pipe StorageS3Cluster::read( auto extension = getTaskIteratorExtension(query_info.query, context); /// Calculate the header. This is significant, because some columns could be thrown away in some cases like query with count(*) - auto interpreter = InterpreterSelectQuery(query_info.query, context, SelectQueryOptions(processed_stage).analyze()); + + Block sample_block; + ASTPtr query_to_send = query_info.query; + + if (context->getSettingsRef().allow_experimental_analyzer) + { + sample_block = InterpreterSelectQueryAnalyzer::getSampleBlock(query_info.query, context, SelectQueryOptions(processed_stage)); + } + else + { + auto interpreter = InterpreterSelectQuery(query_info.query, context, SelectQueryOptions(processed_stage).analyze()); + sample_block = interpreter.getSampleBlock(); + query_to_send = interpreter.getQueryInfo().query->clone(); + } const Scalars & scalars = context->hasQueryContext() ? 
     const Scalars & scalars = context->hasQueryContext() ? context->getQueryContext()->getScalars() : Scalars{};

@@ -110,7 +124,6 @@ Pipe StorageS3Cluster::read(

     const bool add_agg_info = processed_stage == QueryProcessingStage::WithMergeableState;

-    ASTPtr query_to_send = interpreter.getQueryInfo().query->clone();
     if (!structure_argument_was_provided)
         addColumnsStructureToQueryWithClusterEngine(
             query_to_send, StorageDictionary::generateNamesAndTypesDescription(storage_snapshot->metadata->getColumns().getAll()), 5, getName());
@@ -136,7 +149,7 @@ Pipe StorageS3Cluster::read(
             shard_info.pool,
             std::vector{try_result},
             queryToString(query_to_send),
-            interpreter.getSampleBlock(),
+            sample_block,
             context,
             /*throttler=*/nullptr,
             scalars,
diff --git a/src/Storages/StorageView.cpp b/src/Storages/StorageView.cpp
index 1a7050b4dff..7e12a972768 100644
--- a/src/Storages/StorageView.cpp
+++ b/src/Storages/StorageView.cpp
@@ -1,6 +1,7 @@
 #include
 #include
 #include
+#include <Interpreters/NormalizeSelectWithUnionQueryVisitor.h>
 #include
 #include

@@ -117,6 +118,10 @@ StorageView::StorageView(
     SelectQueryDescription description;

     description.inner_query = query.select->ptr();
+
+    NormalizeSelectWithUnionQueryVisitor::Data data{SetOperationMode::Unspecified};
+    NormalizeSelectWithUnionQueryVisitor{data}.visit(description.inner_query);
+
     is_parameterized_view = query.isParameterizedView();
     parameter_types = analyzeReceiveQueryParamsWithType(description.inner_query);
     storage_metadata.setSelectQuery(description);
@@ -167,7 +172,7 @@ void StorageView::read(
         query_plan.addStep(std::move(materializing));

     /// And also convert to expected structure.
-    const auto & expected_header = storage_snapshot->getSampleBlockForColumns(column_names,parameter_values);
+    const auto & expected_header = storage_snapshot->getSampleBlockForColumns(column_names, parameter_values);

     const auto & header = query_plan.getCurrentDataStream().header;

     const auto * select_with_union = current_inner_query->as<ASTSelectWithUnionQuery>();
diff --git a/src/Storages/WindowView/StorageWindowView.cpp b/src/Storages/WindowView/StorageWindowView.cpp
index 3a74fd5fc75..3471e4ea6bf 100644
--- a/src/Storages/WindowView/StorageWindowView.cpp
+++ b/src/Storages/WindowView/StorageWindowView.cpp
@@ -78,6 +78,7 @@ namespace ErrorCodes
     extern const int SUPPORT_IS_DISABLED;
     extern const int TABLE_WAS_NOT_DROPPED;
     extern const int NOT_IMPLEMENTED;
+    extern const int UNSUPPORTED_METHOD;
 }

 namespace
@@ -1158,6 +1159,10 @@ StorageWindowView::StorageWindowView(
     , fire_signal_timeout_s(context_->getSettingsRef().wait_for_window_view_fire_signal_timeout.totalSeconds())
     , clean_interval_usec(context_->getSettingsRef().window_view_clean_interval.totalMicroseconds())
 {
+    if (context_->getSettingsRef().allow_experimental_analyzer)
+        throw Exception(ErrorCodes::UNSUPPORTED_METHOD,
+                        "Experimental WINDOW VIEW feature is not supported with new infrastructure for query analysis (the setting 'allow_experimental_analyzer')");
+
     if (!query.select)
         throw Exception(ErrorCodes::INCORRECT_QUERY, "SELECT query is not specified for {}", getName());
diff --git a/src/TableFunctions/TableFunctionMerge.cpp b/src/TableFunctions/TableFunctionMerge.cpp
index 066caa8170d..586cee54085 100644
--- a/src/TableFunctions/TableFunctionMerge.cpp
+++ b/src/TableFunctions/TableFunctionMerge.cpp
@@ -53,7 +53,7 @@ std::vector<size_t> TableFunctionMerge::skipAnalysisForArguments(const QueryTree
         result.push_back(i);
     }

-    return {0};
+    return result;
 }

 void TableFunctionMerge::parseArguments(const ASTPtr & ast_function, ContextPtr context)
diff --git a/src/TableFunctions/TableFunctionURL.cpp b/src/TableFunctions/TableFunctionURL.cpp
index 5de6c6b4ccc..cc3a858e4dc 100644
--- a/src/TableFunctions/TableFunctionURL.cpp
+++ b/src/TableFunctions/TableFunctionURL.cpp
@@ -9,6 +9,8 @@
 #include
 #include
 #include
+#include <Analyzer/FunctionNode.h>
+#include <Analyzer/TableFunctionNode.h>
 #include
 #include
 #include
@@ -26,6 +28,24 @@ namespace ErrorCodes
     extern const int BAD_ARGUMENTS;
 }

+std::vector<size_t> TableFunctionURL::skipAnalysisForArguments(const QueryTreeNodePtr & query_node_table_function, ContextPtr) const
+{
+    auto & table_function_node = query_node_table_function->as<TableFunctionNode &>();
+    auto & table_function_arguments_nodes = table_function_node.getArguments().getNodes();
+    size_t table_function_arguments_size = table_function_arguments_nodes.size();
+
+    std::vector<size_t> result;
+
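+    /// Skip analysis only for `headers(...)` arguments: they configure the HTTP request and are
+    /// consumed during argument parsing rather than resolved as expressions.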
+    for (size_t i = 0; i < table_function_arguments_size; ++i)
+    {
+        auto * function_node = table_function_arguments_nodes[i]->as<FunctionNode>();
+        if (function_node && function_node->getFunctionName() == "headers")
+            result.push_back(i);
+    }
+
+    return result;
+}
+
 void TableFunctionURL::parseArguments(const ASTPtr & ast, ContextPtr context)
 {
     const auto & ast_function = assert_cast<const ASTFunction *>(ast.get());
diff --git a/src/TableFunctions/TableFunctionURL.h b/src/TableFunctions/TableFunctionURL.h
index a670bdc0682..dca5123fb69 100644
--- a/src/TableFunctions/TableFunctionURL.h
+++ b/src/TableFunctions/TableFunctionURL.h
@@ -12,7 +12,7 @@ class Context;

 /* url(source, format[, structure, compression]) - creates a temporary storage from url.
  */
-class TableFunctionURL : public ITableFunctionFileLike
+class TableFunctionURL final : public ITableFunctionFileLike
 {
 public:
     static constexpr auto name = "url";
@@ -23,10 +23,11 @@ public:

     ColumnsDescription getActualTableStructure(ContextPtr context) const override;

-protected:
+private:
+    std::vector<size_t> skipAnalysisForArguments(const QueryTreeNodePtr & query_node_table_function, ContextPtr context) const override;
+
     void parseArguments(const ASTPtr & ast, ContextPtr context) override;

-private:
     StoragePtr getStorage(
         const String & source, const String & format_, const ColumnsDescription & columns, ContextPtr global_context,
         const std::string & table_name, const String & compression_method_) const override;
diff --git a/tests/ci/stress.py b/tests/ci/stress.py
index 12c40ea1f66..5e151e6c098 100755
--- a/tests/ci/stress.py
+++ b/tests/ci/stress.py
@@ -36,7 +36,8 @@ def get_options(i, upgrade_check):
        client_options.append("join_algorithm='partial_merge'")
    if join_alg_num % 5 == 2:
        client_options.append("join_algorithm='full_sorting_merge'")
-    if join_alg_num % 5 == 3:
+    if join_alg_num % 5 == 3 and not upgrade_check:
+        # Some crashes are not fixed in 23.2 yet, so ignore the setting in Upgrade check
        client_options.append("join_algorithm='grace_hash'")
    if join_alg_num % 5 == 4:
        client_options.append("join_algorithm='auto'")
@@ -224,6 +225,20 @@ def prepare_for_hung_check(drop_databases):
    return True


+def is_ubsan_build():
+    try:
+        query = """clickhouse client -q "SELECT value FROM system.build_options WHERE name = 'CXX_FLAGS'" """
+        output = (
+            check_output(query, shell=True, stderr=STDOUT, timeout=30)
+            .decode("utf-8")
+            .strip()
+        )
+        return "-fsanitize=undefined" in output
+    except Exception as e:
+        logging.info("Failed to get build flags: %s", str(e))
+        return False
+
+
 if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
    parser = argparse.ArgumentParser(
@@ -243,6 +258,10 @@ if __name__ == "__main__":
    args = parser.parse_args()
    if args.drop_databases and not args.hung_check:
        raise Exception("--drop-databases only used in hung check (--hung-check)")
+
+    # FIXME Hung check with ubsan is temporarily disabled due to https://github.com/ClickHouse/ClickHouse/issues/45372
+    suppress_hung_check = is_ubsan_build()
+
    func_pipes = []
    func_pipes = run_func_test(
        args.test_cmd,
@@ -307,7 +326,7 @@ if __name__ == "__main__":
            res = call(cmd, shell=True, stdout=tee.stdin, stderr=STDOUT)
            if tee.stdin is not None:
                tee.stdin.close()
-        if res != 0 and have_long_running_queries:
+        if res != 0 and have_long_running_queries and not suppress_hung_check:
            logging.info("Hung check failed with exit code %d", res)
        else:
            hung_check_status = "No queries hung\tOK\t\\N\t\n"
diff --git a/tests/integration/test_async_insert_memory/test.py b/tests/integration/test_async_insert_memory/test.py
deleted file mode 100644
index 279542f087c..00000000000
--- a/tests/integration/test_async_insert_memory/test.py
+++ /dev/null
@@ -1,40 +0,0 @@
-import pytest
-
-from helpers.cluster import ClickHouseCluster
-
-cluster = ClickHouseCluster(__file__)
-
-node = cluster.add_instance("node")
-
-
-@pytest.fixture(scope="module", autouse=True)
-def start_cluster():
-    try:
-        cluster.start()
-        yield cluster
-    finally:
-        cluster.shutdown()
-
-
-def test_memory_usage():
-    node.query(
-        "CREATE TABLE async_table(data Array(UInt64)) ENGINE=MergeTree() ORDER BY data"
-    )
-
-    node.get_query_request("SELECT count() FROM system.numbers")
-
-    INSERT_QUERY = "INSERT INTO async_table SETTINGS async_insert=1, wait_for_async_insert=1 VALUES ({})"
-    for iter in range(10):
-        values = list(range(iter * 5000000, (iter + 1) * 5000000))
-        node.query(INSERT_QUERY.format(values))
-
-    response = node.get_query_request(
-        "SELECT groupArray(number) FROM numbers(1000000) SETTINGS max_memory_usage_for_user={}".format(
-            30 * (2**23)
-        )
-    )
-
-    _, err = response.get_answer_and_error()
-    assert err == "", "Query failed with error {}".format(err)
-
-    node.query("DROP TABLE async_table")
diff --git a/tests/queries/0_stateless/00378_json_quote_64bit_integers.reference b/tests/queries/0_stateless/00378_json_quote_64bit_integers.reference
index 5174c13a9e0..b8d51e5d078 100644
--- a/tests/queries/0_stateless/00378_json_quote_64bit_integers.reference
+++ b/tests/queries/0_stateless/00378_json_quote_64bit_integers.reference
@@ -48,10 +48,10 @@
        {
            "i0": "0",
            "u0": "0",
-            "ip": "9223372036854775807",
-            "in": "-9223372036854775808",
-            "up": "18446744073709551615",
-            "arr": ["0"],
+            "ip": "0",
+            "in": "0",
+            "up": "0",
+            "arr": [],
            "tuple": ["0","0"]
        },
@@ -119,7 +119,7 @@
        ["0", "0", "9223372036854775807", "-9223372036854775808", "18446744073709551615", ["0"], ["0","0"]]
    ],

-    "totals": ["0", "0", "9223372036854775807", "-9223372036854775808", "18446744073709551615", ["0"], ["0","0"]],
+    "totals": ["0", "0", "0", "0", "0", [], ["0","0"]],

    "extremes":
    {
@@ -180,10 +180,10 @@
        {
            "i0": 0,
            "u0": 0,
-            "ip": 9223372036854775807,
-            "in": -9223372036854775808,
-            "up": 18446744073709551615,
-            "arr": [0],
+            "ip": 0,
+            "in": 0,
+            "up": 0,
+            "arr": [],
            "tuple": [0,0]
        },
@@ -251,7 +251,7 @@
        [0, 0, 9223372036854775807, -9223372036854775808, 18446744073709551615, [0], [0,0]]
    ],

-    "totals": [0, 0, 9223372036854775807, -9223372036854775808, 18446744073709551615, [0], [0,0]],
+    "totals": [0, 0, 0, 0, 0, [], [0,0]],

    "extremes":
    {
diff --git a/tests/queries/0_stateless/00378_json_quote_64bit_integers.sql b/tests/queries/0_stateless/00378_json_quote_64bit_integers.sql
index 3a70b64bc86..e7b59bc3f7f 100644
--- a/tests/queries/0_stateless/00378_json_quote_64bit_integers.sql
+++ b/tests/queries/0_stateless/00378_json_quote_64bit_integers.sql
@@ -2,6 +2,7 @@
 SET output_format_write_statistics = 0;
 SET extremes = 1;
+SET allow_experimental_analyzer = 1;
 SET output_format_json_quote_64bit_integers = 1;

 SELECT toInt64(0) as i0, toUInt64(0) as u0, toInt64(9223372036854775807) as ip, toInt64(-9223372036854775808) as in, toUInt64(18446744073709551615) as up, [toInt64(0)] as arr, (toUInt64(0), toUInt64(0)) as tuple GROUP BY i0, u0, ip, in, up, arr, tuple WITH TOTALS FORMAT JSON;
diff --git a/tests/queries/0_stateless/00445_join_nullable_keys.reference b/tests/queries/0_stateless/00445_join_nullable_keys.reference
index afc8003910c..cc1c06d593b 100644
--- a/tests/queries/0_stateless/00445_join_nullable_keys.reference
+++ b/tests/queries/0_stateless/00445_join_nullable_keys.reference
@@ -22,13 +22,13 @@
 13 13
 14 14
 \N 8
-0 0
-0 2
-0 4
-0 6
-0 8
 1 1
 3 3
 5 5
 7 7
 9 9
+\N 0
+\N 2
+\N 4
+\N 6
+\N 8
diff --git a/tests/queries/0_stateless/00445_join_nullable_keys.sql b/tests/queries/0_stateless/00445_join_nullable_keys.sql
index a0453356e98..774594f90f3 100644
--- a/tests/queries/0_stateless/00445_join_nullable_keys.sql
+++ b/tests/queries/0_stateless/00445_join_nullable_keys.sql
@@ -1,3 +1,4 @@
+SET allow_experimental_analyzer = 1;
 SET join_use_nulls = 0;
 SET any_join_distinct_right_table_keys = 1;
diff --git a/tests/queries/0_stateless/00722_inner_join.reference b/tests/queries/0_stateless/00722_inner_join.reference
index 86c07e6e84e..b5e8a77a20d 100644
--- a/tests/queries/0_stateless/00722_inner_join.reference
+++ b/tests/queries/0_stateless/00722_inner_join.reference
@@ -16,24 +16,24 @@
 ┌─x──────┬─name─┐
 │ system │ one  │
 └────────┴──────┘
-┌─database─┬─t.name─┐
-│ system   │ one    │
-└──────────┴────────┘
+┌─database─┬─name─┐
+│ system   │ one  │
+└──────────┴──────┘
 ┌─db.x───┬─name─┐
 │ system │ one  │
 └────────┴──────┘
-┌─db.name─┬─t.name─┐
-│ system  │ one    │
-└─────────┴────────┘
-┌─db.name─┬─t.name─┐
-│ system  │ one    │
-└─────────┴────────┘
-┌─t.database─┬─t.name─┐
-│ system     │ one    │
-└────────────┴────────┘
-┌─database─┬─t.name─┐
-│ system   │ one    │
-└──────────┴────────┘
+┌─db.name─┬─name─┐
+│ system  │ one  │
+└─────────┴──────┘
+┌─db.name─┬─name─┐
+│ system  │ one  │
+└─────────┴──────┘
+┌─database─┬─name─┐
+│ system   │ one  │
+└──────────┴──────┘
+┌─database─┬─name─┐
+│ system   │ one  │
+└──────────┴──────┘
 2
 2
 2
diff --git a/tests/queries/0_stateless/00722_inner_join.sql b/tests/queries/0_stateless/00722_inner_join.sql
index 75ef40ff2b7..0d5a543b99d 100644
--- a/tests/queries/0_stateless/00722_inner_join.sql
+++ b/tests/queries/0_stateless/00722_inner_join.sql
@@ -1,3 +1,5 @@
+-- Tags: no-parallel
+
 SET allow_experimental_analyzer = 1;

 DROP TABLE IF EXISTS one;
diff --git a/tests/queries/0_stateless/00848_join_use_nulls_segfault.reference b/tests/queries/0_stateless/00848_join_use_nulls_segfault.reference
index 6bfe0db1448..43f48089b06 100644
--- a/tests/queries/0_stateless/00848_join_use_nulls_segfault.reference
+++ b/tests/queries/0_stateless/00848_join_use_nulls_segfault.reference
@@ -10,13 +10,13 @@ l \N \N String Nullable(String)
 \N \N
 \N \N
 using
-l \N String Nullable(String)
- \N String Nullable(String)
-l \N String Nullable(String)
+l \N Nullable(String) Nullable(String)
+l \N Nullable(String) Nullable(String)
+\N \N Nullable(String) Nullable(String)
+\N \N Nullable(String) Nullable(String)
+l \N Nullable(String) Nullable(String)
+l \N Nullable(String) Nullable(String)
 \N \N Nullable(String) Nullable(String)
-l \N String Nullable(String)
- \N String Nullable(String)
-l \N String Nullable(String)
 \N \N Nullable(String) Nullable(String)
 \N \N
 \N \N
@@ -32,13 +32,13 @@ l \N \N Nullable(String) Nullable(String)
 \N \N
 \N \N
 using + join_use_nulls
-l \N String Nullable(String)
 l \N Nullable(String) Nullable(String)
-\N \N Nullable(String) Nullable(String)
-\N \N Nullable(String) Nullable(String)
-l \N String Nullable(String)
 l \N Nullable(String) Nullable(String)
-\N \N Nullable(String) Nullable(String)
-\N \N Nullable(String) Nullable(String)
+r \N Nullable(String) Nullable(String)
+r \N Nullable(String) Nullable(String)
+l \N Nullable(String) Nullable(String)
+l \N Nullable(String) Nullable(String)
+r \N Nullable(String) Nullable(String)
+r \N Nullable(String) Nullable(String)
 \N \N
 \N \N
diff --git a/tests/queries/0_stateless/00848_join_use_nulls_segfault.sql b/tests/queries/0_stateless/00848_join_use_nulls_segfault.sql
index 57eca0eb9e0..2f6cca0284c 100644
--- a/tests/queries/0_stateless/00848_join_use_nulls_segfault.sql
+++ b/tests/queries/0_stateless/00848_join_use_nulls_segfault.sql
@@ -1,4 +1,5 @@
 SET any_join_distinct_right_table_keys = 1;
+SET allow_experimental_analyzer = 1;

 DROP TABLE IF EXISTS t1_00848;
 DROP TABLE IF EXISTS t2_00848;
@@ -53,16 +54,16 @@ SELECT t3.id = 'l', t3.not_id = 'l' FROM t1_00848 t1 LEFT JOIN t3_00848 t3 ON t1
 SELECT 'using + join_use_nulls';

-SELECT *, toTypeName(t1.id), toTypeName(t3.id) FROM t1_00848 t1 ANY LEFT JOIN t3_00848 t3 USING(id) ORDER BY t1.id, t3.id;
-SELECT *, toTypeName(t1.id), toTypeName(t3.id) FROM t1_00848 t1 ANY FULL JOIN t3_00848 t3 USING(id) ORDER BY t1.id, t3.id;
-SELECT *, toTypeName(t2.id), toTypeName(t3.id) FROM t2_00848 t2 ANY FULL JOIN t3_00848 t3 USING(id) ORDER BY t2.id, t3.id;
+SELECT *, toTypeName(t1.id), toTypeName(t3.id) FROM t1_00848 t1 ANY LEFT JOIN t3_00848 t3 USING(id) ORDER BY id;
+SELECT *, toTypeName(t1.id), toTypeName(t3.id) FROM t1_00848 t1 ANY FULL JOIN t3_00848 t3 USING(id) ORDER BY id;
+SELECT *, toTypeName(t2.id), toTypeName(t3.id) FROM t2_00848 t2 ANY FULL JOIN t3_00848 t3 USING(id) ORDER BY id;

-SELECT *, toTypeName(t1.id), toTypeName(t3.id) FROM t1_00848 t1 LEFT JOIN t3_00848 t3 USING(id) ORDER BY t1.id, t3.id;
-SELECT *, toTypeName(t1.id), toTypeName(t3.id) FROM t1_00848 t1 FULL JOIN t3_00848 t3 USING(id) ORDER BY t1.id, t3.id;
-SELECT *, toTypeName(t2.id), toTypeName(t3.id) FROM t2_00848 t2 FULL JOIN t3_00848 t3 USING(id) ORDER BY t2.id, t3.id;
+SELECT *, toTypeName(t1.id), toTypeName(t3.id) FROM t1_00848 t1 LEFT JOIN t3_00848 t3 USING(id) ORDER BY id;
+SELECT *, toTypeName(t1.id), toTypeName(t3.id) FROM t1_00848 t1 FULL JOIN t3_00848 t3 USING(id) ORDER BY id;
+SELECT *, toTypeName(t2.id), toTypeName(t3.id) FROM t2_00848 t2 FULL JOIN t3_00848 t3 USING(id) ORDER BY id;

-SELECT t3.id = 'l', t3.not_id = 'l' FROM t1_00848 t1 ANY LEFT JOIN t3_00848 t3 USING(id) ORDER BY t1.id, t3.id;
-SELECT t3.id = 'l', t3.not_id = 'l' FROM t1_00848 t1 LEFT JOIN t3_00848 t3 USING(id) ORDER BY t1.id, t3.id;
+SELECT t3.id = 'l', t3.not_id = 'l' FROM t1_00848 t1 ANY LEFT JOIN t3_00848 t3 USING(id) ORDER BY id;
+SELECT t3.id = 'l', t3.not_id = 'l' FROM t1_00848 t1 LEFT JOIN t3_00848 t3 USING(id) ORDER BY id;

 DROP TABLE t1_00848;
 DROP TABLE t2_00848;
diff --git a/tests/queries/0_stateless/00853_join_with_nulls_crash.reference b/tests/queries/0_stateless/00853_join_with_nulls_crash.reference
index 459b73acdbf..5df14d02d5e 100644
--- a/tests/queries/0_stateless/00853_join_with_nulls_crash.reference
+++ b/tests/queries/0_stateless/00853_join_with_nulls_crash.reference
@@ -15,5 +15,5 @@ bar bar 1 2 String Nullable(String)
 \N 0 1 String Nullable(String)
 foo 2 0 String
 bar 1 2 String
-test 0 1 String
+ 0 1 String
 0 1 String
diff --git a/tests/queries/0_stateless/00853_join_with_nulls_crash.sql b/tests/queries/0_stateless/00853_join_with_nulls_crash.sql
index c63c2d99cba..b620b8a7902 100644
--- a/tests/queries/0_stateless/00853_join_with_nulls_crash.sql
+++ b/tests/queries/0_stateless/00853_join_with_nulls_crash.sql
@@ -27,7 +27,7 @@ SELECT s1.other, s2.other, count_a, count_b, toTypeName(s1.other), toTypeName(s2
 ( SELECT other, count() AS count_a FROM table_a GROUP BY other ) s1
 ALL FULL JOIN
 ( SELECT other, count() AS count_b FROM table_b GROUP BY other ) s2
-USING other
+ON s1.other = s2.other
 ORDER BY s2.other DESC, count_a, s1.other;

 SELECT s1.something, s2.something, count_a, count_b, toTypeName(s1.something), toTypeName(s2.something) FROM
@@ -41,7 +41,7 @@ SELECT s1.something, s2.something, count_a, count_b, toTypeName(s1.something), t
 ( SELECT something, count() AS count_a FROM table_a GROUP BY something ) s1
 ALL RIGHT JOIN
 ( SELECT something, count() AS count_b FROM table_b GROUP BY something ) s2
-USING (something)
+ON s1.something = s2.something
 ORDER BY count_a DESC, s1.something, s2.something;

 SET joined_subquery_requires_alias = 0;

@@ -50,7 +50,7 @@ SELECT something, count_a, count_b, toTypeName(something) FROM
 ( SELECT something, count() AS count_a FROM table_a GROUP BY something ) as s1
 ALL FULL JOIN
 ( SELECT something, count() AS count_b FROM table_b GROUP BY something ) as s2
-USING (something)
+ON s1.something = s2.something
 ORDER BY count_a DESC, something DESC;

 DROP TABLE table_a;
diff --git a/tests/queries/0_stateless/00858_issue_4756.reference b/tests/queries/0_stateless/00858_issue_4756.reference
index d00491fd7e5..e8183f05f5d 100644
--- a/tests/queries/0_stateless/00858_issue_4756.reference
+++ b/tests/queries/0_stateless/00858_issue_4756.reference
@@ -1 +1,3 @@
 1
+1
+1
diff --git a/tests/queries/0_stateless/00858_issue_4756.sql b/tests/queries/0_stateless/00858_issue_4756.sql
index 3da0766c4e9..9eacd5ef364 100644
--- a/tests/queries/0_stateless/00858_issue_4756.sql
+++ b/tests/queries/0_stateless/00858_issue_4756.sql
@@ -1,3 +1,4 @@
+set allow_experimental_analyzer = 1;
 set distributed_product_mode = 'local';

 drop table if exists shard1;
@@ -21,7 +22,7 @@ where distr1.id in
     from distr1
     join distr2 on distr1.id = distr2.id
     where distr1.id > 0
-); -- { serverError 288 }
+);

 select distinct(d0.id) from distr1 d0
 where d0.id in
@@ -32,15 +33,14 @@ where d0.id in
     where d1.id > 0
 );

--- TODO
---select distinct(distr1.id) from distr1
---where distr1.id in
---(
---    select distr1.id
---    from distr1 as d1
---    join distr2 as d2 on distr1.id = distr2.id
---    where distr1.id > 0
---);
+select distinct(distr1.id) from distr1
+where distr1.id in
+(
+    select distr1.id
+    from distr1 as d1
+    join distr2 as d2 on distr1.id = distr2.id
+    where distr1.id > 0
+);

 drop table shard1;
 drop table shard2;
diff --git a/tests/queries/0_stateless/00933_test_fix_extra_seek_on_compressed_cache.reference b/tests/queries/0_stateless/00933_test_fix_extra_seek_on_compressed_cache.reference
index f1839bae259..e142c6c79fe 100644
--- a/tests/queries/0_stateless/00933_test_fix_extra_seek_on_compressed_cache.reference
+++ b/tests/queries/0_stateless/00933_test_fix_extra_seek_on_compressed_cache.reference
@@ -1 +1,3 @@
+99999
+99999
 0 0 13
diff --git a/tests/queries/0_stateless/00933_test_fix_extra_seek_on_compressed_cache.sh b/tests/queries/0_stateless/00933_test_fix_extra_seek_on_compressed_cache.sh
index 390d6a70ef1..7bf4a88e972 100755
--- a/tests/queries/0_stateless/00933_test_fix_extra_seek_on_compressed_cache.sh
+++ b/tests/queries/0_stateless/00933_test_fix_extra_seek_on_compressed_cache.sh
@@ -13,15 +13,24 @@ $CLICKHOUSE_CLIENT --query="CREATE TABLE small_table (a UInt64 default 0, n UInt
 $CLICKHOUSE_CLIENT --query="INSERT INTO small_table (n) SELECT * from system.numbers limit 100000;"
 $CLICKHOUSE_CLIENT --query="OPTIMIZE TABLE small_table FINAL;"

-cached_query="SELECT count() FROM small_table where n > 0;"
+cached_query="SELECT count() FROM small_table WHERE n > 0;"

-$CLICKHOUSE_CLIENT --use_uncompressed_cache=1 --query="$cached_query" &> /dev/null
-
-$CLICKHOUSE_CLIENT --use_uncompressed_cache=1 --allow_prefetched_read_pool_for_remote_filesystem=0 --allow_prefetched_read_pool_for_local_filesystem=0 --query_id="test-query-uncompressed-cache" --query="$cached_query" &> /dev/null
+$CLICKHOUSE_CLIENT --log_queries 1 --use_uncompressed_cache 1 --query="$cached_query"
+$CLICKHOUSE_CLIENT --log_queries 1 --use_uncompressed_cache 1 --allow_prefetched_read_pool_for_remote_filesystem 0 --allow_prefetched_read_pool_for_local_filesystem 0 --query_id="test-query-uncompressed-cache" --query="$cached_query"

 $CLICKHOUSE_CLIENT --query="SYSTEM FLUSH LOGS"
-
-$CLICKHOUSE_CLIENT --query="SELECT ProfileEvents['Seek'], ProfileEvents['ReadCompressedBytes'], ProfileEvents['UncompressedCacheHits'] AS hit FROM system.query_log WHERE (query_id = 'test-query-uncompressed-cache') and current_database = currentDatabase() AND (type = 2) AND event_date >= yesterday() ORDER BY event_time DESC LIMIT 1"
+$CLICKHOUSE_CLIENT --query="
+    SELECT
+        ProfileEvents['Seek'],
+        ProfileEvents['ReadCompressedBytes'],
+        ProfileEvents['UncompressedCacheHits'] AS hit
+    FROM system.query_log
+    WHERE query_id = 'test-query-uncompressed-cache'
+      AND current_database = currentDatabase()
+      AND type = 2
+      AND event_date >= yesterday()
+    ORDER BY event_time DESC
+    LIMIT 1"

 $CLICKHOUSE_CLIENT --query="DROP TABLE IF EXISTS small_table"
diff --git a/tests/queries/0_stateless/00988_expansion_aliases_limit.sql b/tests/queries/0_stateless/00988_expansion_aliases_limit.sql
index 15c9f82da6f..e78ccf56093 100644
--- a/tests/queries/0_stateless/00988_expansion_aliases_limit.sql
+++ b/tests/queries/0_stateless/00988_expansion_aliases_limit.sql
@@ -1 +1,3 @@
-SELECT 1 AS a, a + a AS b, b + b AS c, c + c AS d, d + d AS e, e + e AS f, f + f AS g, g + g AS h, h + h AS i, i + i AS j, j + j AS k, k + k AS l, l + l AS m, m + m AS n, n + n AS o, o + o AS p, p + p AS q, q + q AS r, r + r AS s, s + s AS t, t + t AS u, u + u AS v, v + v AS w, w + w AS x, x + x AS y, y + y AS z; -- { serverError 168 }
+SET allow_experimental_analyzer = 1;
+
+SELECT 1 AS a, a + a AS b, b + b AS c, c + c AS d, d + d AS e, e + e AS f, f + f AS g, g + g AS h, h + h AS i, i + i AS j, j + j AS k, k + k AS l, l + l AS m, m + m AS n, n + n AS o, o + o AS p, p + p AS q, q + q AS r, r + r AS s, s + s AS t, t + t AS u, u + u AS v, v + v AS w, w + w AS x, x + x AS y, y + y AS z; -- { serverError 36 }
diff --git a/tests/queries/0_stateless/01010_pmj_right_table_memory_limits.sql b/tests/queries/0_stateless/01010_pmj_right_table_memory_limits.sql
index 7804ce32a5a..f9f30b44700 100644
--- a/tests/queries/0_stateless/01010_pmj_right_table_memory_limits.sql
+++ b/tests/queries/0_stateless/01010_pmj_right_table_memory_limits.sql
@@ -3,7 +3,10 @@ SET max_memory_usage = 32000000;
 SET join_on_disk_max_files_to_merge = 4;

-SELECT number * 200000 as n, j FROM numbers(5) nums
+SELECT n, j FROM
+(
+    SELECT number * 200000 as n FROM numbers(5)
+) nums
 ANY LEFT JOIN (
     SELECT number * 2 AS n, number AS j
     FROM numbers(1000000)
@@ -13,14 +16,20 @@ USING n; -- { serverError 241 }
 SET join_algorithm = 'partial_merge';
 SET default_max_bytes_in_join = 0;

-SELECT number * 200000 as n, j FROM numbers(5) nums
+SELECT n, j FROM
+(
+    SELECT number * 200000 as n FROM numbers(5)
+) nums
 ANY LEFT JOIN (
     SELECT number * 2 AS n, number AS j
     FROM numbers(1000000)
 ) js2
 USING n; -- { serverError 12 }

-SELECT number * 200000 as n, j FROM numbers(5) nums
+SELECT n, j FROM
+(
+    SELECT number * 200000 as n FROM numbers(5)
+) nums
 ANY LEFT JOIN (
     SELECT number * 2 AS n, number AS j
     FROM numbers(1000000)
@@ -28,7 +37,10 @@ ANY LEFT JOIN (
 USING n
 SETTINGS max_bytes_in_join = 30000000; -- { serverError 241 }

-SELECT number * 200000 as n, j FROM numbers(5) nums
+SELECT n, j FROM
+(
+    SELECT number * 200000 as n FROM numbers(5)
+) nums
 ANY LEFT JOIN (
     SELECT number * 2 AS n, number AS j
     FROM numbers(1000000)
@@ -39,7 +51,10 @@ SETTINGS max_bytes_in_join = 10000000;

 SET partial_merge_join_optimizations = 1;

-SELECT number * 200000 as n, j FROM numbers(5) nums
+SELECT n, j FROM
+(
+    SELECT number * 200000 as n FROM numbers(5)
+) nums
 LEFT JOIN (
     SELECT number * 2 AS n, number AS j
     FROM numbers(1000000)
@@ -50,7 +65,10 @@ SETTINGS max_rows_in_join = 100000;

 SET default_max_bytes_in_join = 10000000;

-SELECT number * 200000 as n, j FROM numbers(5) nums
+SELECT n, j FROM
+(
+    SELECT number * 200000 as n FROM numbers(5)
+) nums
 JOIN (
     SELECT number * 2 AS n, number AS j
     FROM numbers(1000000)
diff --git a/tests/queries/0_stateless/01018_ambiguous_column.reference b/tests/queries/0_stateless/01018_ambiguous_column.reference
index a2a1d6ea4f6..308726fa184 100644
--- a/tests/queries/0_stateless/01018_ambiguous_column.reference
+++ b/tests/queries/0_stateless/01018_ambiguous_column.reference
@@ -1,12 +1,15 @@
 0 0
 0 0
+0 0
 0
 0
 0
 0
 0
-┌─one.dummy─┬─A.dummy─┬─B.dummy─┐
-│         0 │       0 │       0 │
-└───────────┴─────────┴─────────┘
+0
+0
+┌─system.one.dummy─┬─A.dummy─┬─B.dummy─┐
+│                0 │       0 │       0 │
+└──────────────────┴─────────┴─────────┘
 ┌─A.dummy─┬─one.dummy─┬─two.dummy─┐
 │       0 │         0 │         0 │
 └─────────┴───────────┴───────────┘
diff --git a/tests/queries/0_stateless/01018_ambiguous_column.sql b/tests/queries/0_stateless/01018_ambiguous_column.sql
index 54603aab810..620bdb6ba3f 100644
--- a/tests/queries/0_stateless/01018_ambiguous_column.sql
+++ b/tests/queries/0_stateless/01018_ambiguous_column.sql
@@ -1,4 +1,6 @@
-select * from system.one cross join system.one; -- { serverError 352 }
+SET allow_experimental_analyzer = 1;
+
+select * from system.one cross join system.one;
 select * from system.one cross join system.one r;
 select * from system.one l cross join system.one;
 select * from system.one left join system.one using dummy;
@@ -8,10 +10,10 @@ USE system;
 SELECT dummy FROM one AS A JOIN one ON A.dummy = one.dummy;
 SELECT dummy FROM one JOIN one AS A ON A.dummy = one.dummy;

-SELECT dummy FROM one l JOIN one r ON dummy = r.dummy; -- { serverError 352 }
-SELECT dummy FROM one l JOIN one r ON l.dummy = dummy; -- { serverError 352 }
-SELECT dummy FROM one l JOIN one r ON one.dummy = r.dummy; -- { serverError 352 }
-SELECT dummy FROM one l JOIN one r ON l.dummy = one.dummy; -- { serverError 352 }
+SELECT dummy FROM one l JOIN one r ON dummy = r.dummy;
+SELECT dummy FROM one l JOIN one r ON l.dummy = dummy; -- { serverError 403 }
+SELECT dummy FROM one l JOIN one r ON one.dummy = r.dummy;
+SELECT dummy FROM one l JOIN one r ON l.dummy = one.dummy; -- { serverError 403 }

 SELECT * from one
 JOIN one A ON one.dummy = A.dummy
diff --git a/tests/queries/0_stateless/01040_dictionary_invalidate_query_switchover_long.reference b/tests/queries/0_stateless/01040_dictionary_invalidate_query_switchover_long.reference
index c89fe48d9f9..8d40aebacf2 100644
--- a/tests/queries/0_stateless/01040_dictionary_invalidate_query_switchover_long.reference
+++ b/tests/queries/0_stateless/01040_dictionary_invalidate_query_switchover_long.reference
@@ -1,5 +1,5 @@
 122
-Table dictdb_01041_01040.dict_invalidate doesn\'t exist
+1

 133
diff --git a/tests/queries/0_stateless/01040_dictionary_invalidate_query_switchover_long.sh b/tests/queries/0_stateless/01040_dictionary_invalidate_query_switchover_long.sh
index 7249d5e1a82..6856f952a47 100755
--- a/tests/queries/0_stateless/01040_dictionary_invalidate_query_switchover_long.sh
+++ b/tests/queries/0_stateless/01040_dictionary_invalidate_query_switchover_long.sh
@@ -53,7 +53,7 @@ function check_exception_detected()

 export -f check_exception_detected;
 timeout 30 bash -c check_exception_detected 2> /dev/null

-$CLICKHOUSE_CLIENT --query "SELECT last_exception FROM system.dictionaries WHERE database = 'dictdb_01041_01040' AND name = 'invalidate'" 2>&1 | grep -Eo "Table dictdb_01041_01040.dict_invalidate .* exist"
+$CLICKHOUSE_CLIENT --query "SELECT last_exception FROM system.dictionaries WHERE database = 'dictdb_01041_01040' AND name = 'invalidate'" 2>&1 | grep -Eo "dictdb_01041_01040.dict_invalidate.*UNKNOWN_TABLE" | wc -l

 $CLICKHOUSE_CLIENT --query "
 CREATE TABLE dictdb_01041_01040.dict_invalidate
diff --git a/tests/queries/0_stateless/01047_window_view_parser_inner_table.sql b/tests/queries/0_stateless/01047_window_view_parser_inner_table.sql
index 2d9911287a3..bf1ac254783 100644
--- a/tests/queries/0_stateless/01047_window_view_parser_inner_table.sql
+++ b/tests/queries/0_stateless/01047_window_view_parser_inner_table.sql
@@ -1,5 +1,6 @@
 -- Tags: no-parallel

+SET allow_experimental_analyzer = 0;
 SET allow_experimental_window_view = 1;
 DROP DATABASE IF EXISTS test_01047;
 set allow_deprecated_database_ordinary=1;
diff --git a/tests/queries/0_stateless/01048_window_view_parser.sql b/tests/queries/0_stateless/01048_window_view_parser.sql
index 4c329f99f6e..f87d9aa023e 100644
--- a/tests/queries/0_stateless/01048_window_view_parser.sql
+++ b/tests/queries/0_stateless/01048_window_view_parser.sql
@@ -1,5 +1,6 @@
 -- Tags: no-parallel

+SET allow_experimental_analyzer = 0;
 SET allow_experimental_window_view = 1;
 DROP DATABASE IF EXISTS test_01048;
 set allow_deprecated_database_ordinary=1;
diff --git a/tests/queries/0_stateless/01050_window_view_parser_tumble.sql b/tests/queries/0_stateless/01050_window_view_parser_tumble.sql
index d9604bb2b52..f49fbc251fd 100644
--- a/tests/queries/0_stateless/01050_window_view_parser_tumble.sql
+++ b/tests/queries/0_stateless/01050_window_view_parser_tumble.sql
@@ -1,3 +1,4 @@
+SET allow_experimental_analyzer = 0;
 SET allow_experimental_window_view = 1;

 DROP TABLE IF EXISTS mt;
diff --git a/tests/queries/0_stateless/01051_window_view_parser_hop.sql b/tests/queries/0_stateless/01051_window_view_parser_hop.sql
index 472dc66f1a2..45877cf0647 100644
--- a/tests/queries/0_stateless/01051_window_view_parser_hop.sql
+++ b/tests/queries/0_stateless/01051_window_view_parser_hop.sql
@@ -1,3 +1,4 @@
+SET allow_experimental_analyzer = 0;
 SET allow_experimental_window_view = 1;

 DROP TABLE IF EXISTS mt;
diff --git a/tests/queries/0_stateless/01052_window_view_proc_tumble_to_now.sh b/tests/queries/0_stateless/01052_window_view_proc_tumble_to_now.sh
index 9fdc66191d7..e75b7d9570b 100755
--- a/tests/queries/0_stateless/01052_window_view_proc_tumble_to_now.sh
+++ b/tests/queries/0_stateless/01052_window_view_proc_tumble_to_now.sh
@@ -4,7 +4,11 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
 # shellcheck source=../shell_config.sh
 . "$CURDIR"/../shell_config.sh

-$CLICKHOUSE_CLIENT --multiquery < /dev/null | grep Header -m 1 -A 8
+$CLICKHOUSE_CLIENT "${opts[@]}" -q "explain json = 1, description = 0, header = 1 select 1, 2 + dummy FORMAT TSVRaw" 2> /dev/null | grep Header -m 1 -A 8
 echo "--------"
-$CLICKHOUSE_CLIENT -q "EXPLAIN json = 1, actions = 1, header = 1, description = 0
+$CLICKHOUSE_CLIENT "${opts[@]}" -q "EXPLAIN json = 1, actions = 1, header = 1, description = 0
     SELECT quantile(0.2)(number), sumIf(number, number > 0) from numbers(2) group by number, number + 1 FORMAT TSVRaw
-    " | grep Aggregating -A 40
+    " | grep Aggregating -A 36
 echo "--------"
-$CLICKHOUSE_CLIENT -q "EXPLAIN json = 1, actions = 1, description = 0
+$CLICKHOUSE_CLIENT "${opts[@]}" -q "EXPLAIN json = 1, actions = 1, description = 0
     SELECT x, y from numbers(2) array join [number, 1] as x, [number + 1] as y FORMAT TSVRaw
     " | grep ArrayJoin -A 2
 echo "--------"
-$CLICKHOUSE_CLIENT -q "EXPLAIN json = 1, actions = 1, description = 0
+$CLICKHOUSE_CLIENT "${opts[@]}" -q "EXPLAIN json = 1, actions = 1, description = 0
     SELECT distinct intDiv(number, 2), intDiv(number, 3) from numbers(10) FORMAT TSVRaw
     " | grep Distinct -A 1
 echo "--------"
-$CLICKHOUSE_CLIENT -q "EXPLAIN json = 1, actions = 1, description = 0
+$CLICKHOUSE_CLIENT "${opts[@]}" -q "EXPLAIN json = 1, actions = 1, description = 0
     SELECT number + 1 from numbers(10) order by number desc, number + 1 limit 3 FORMAT TSVRaw
     " | grep "Sort Description" -A 12
diff --git a/tests/queries/0_stateless/01911_logical_error_minus.sql b/tests/queries/0_stateless/01911_logical_error_minus.sql
index 3dcdedd38f5..7f371a463f8 100644
--- a/tests/queries/0_stateless/01911_logical_error_minus.sql
+++ b/tests/queries/0_stateless/01911_logical_error_minus.sql
@@ -26,7 +26,7 @@ INSERT INTO codecTest (key, name, ref_valueF64, valueF64, ref_valueF32, valueF32
 INSERT INTO codecTest (key, name, ref_valueF64, valueF64, ref_valueF32, valueF32)
     SELECT number AS n, 'sin(n*n*n)*n', sin(n * n * n * n* n) AS v, v, v, v FROM system.numbers LIMIT 301, 100;

-SELECT IF(-2, NULL, 0.00009999999747378752), IF(1048577, 1048576, NULL), c1.key, IF(1, NULL, NULL), c2.key FROM codecTest AS c1 , codecTest AS c2 WHERE ignore(IF(257, -2, NULL), arrayJoin([65537]), IF(3, 1024, 9223372036854775807)) AND IF(NULL, 256, NULL) AND (IF(NULL, '1048576', NULL) = (c1.key - NULL)) LIMIT 65535;
+SELECT IF(2, NULL, 0.00009999999747378752), IF(104, 1048576, NULL), c1.key, IF(1, NULL, NULL), c2.key FROM codecTest AS c1 , codecTest AS c2 WHERE ignore(IF(255, -2, NULL), arrayJoin([65537]), IF(3, 1024, 9223372036854775807)) AND IF(NULL, 256, NULL) AND (IF(NULL, '1048576', NULL) = (c1.key - NULL)) LIMIT 65535;

 SELECT c1.key, c1.name, c1.ref_valueF64, c1.valueF64, c1.ref_valueF64 - c1.valueF64 AS dF64, '', c2.key, c2.ref_valueF64 FROM codecTest AS c1 , codecTest AS c2 WHERE (dF64 != 3) AND c1.valueF64 != 0 AND (c2.key = (c1.key - 1048576)) LIMIT 0;
@@ -72,7 +72,7 @@ INSERT INTO codecTest (key, ref_valueU64, valueU64, ref_valueU32, valueU32, ref_
     SELECT number as n, n + (rand64() - 9223372036854775807)/1000 as v, v, v, v, v, v, v, v, v, v, v, v, v, v, v, v, toDateTime(v), toDateTime(v), toDate(v), toDate(v) FROM system.numbers LIMIT 3001, 1000;

-SELECT IF(-2, NULL, 0.00009999999747378752), IF(1048577, 1048576, NULL), c1.key, IF(1, NULL, NULL), c2.key FROM codecTest AS c1 , codecTest AS c2 WHERE ignore(IF(257, -2, NULL), arrayJoin([65537]), IF(3, 1024, 9223372036854775807)) AND IF(NULL, 256, NULL) AND (IF(NULL, '1048576', NULL) = (c1.key - NULL)) LIMIT 65535;
+SELECT IF(2, NULL, 0.00009999999747378752), IF(104, 1048576, NULL), c1.key, IF(1, NULL, NULL), c2.key FROM codecTest AS c1 , codecTest AS c2 WHERE ignore(IF(255, -2, NULL), arrayJoin([65537]), IF(3, 1024, 9223372036854775807)) AND IF(NULL, 256, NULL) AND (IF(NULL, '1048576', NULL) = (c1.key - NULL)) LIMIT 65535;

 DROP TABLE codecTest;
diff --git a/tests/queries/0_stateless/01913_names_of_tuple_literal.sql b/tests/queries/0_stateless/01913_names_of_tuple_literal.sql
index 09de9e8cf37..879f4c91587 100644
--- a/tests/queries/0_stateless/01913_names_of_tuple_literal.sql
+++ b/tests/queries/0_stateless/01913_names_of_tuple_literal.sql
@@ -1,2 +1,4 @@
+SET allow_experimental_analyzer = 0;
+
 SELECT ((1, 2), (2, 3), (3, 4)) FORMAT TSVWithNames;
 SELECT ((1, 2), (2, 3), (3, 4)) FORMAT TSVWithNames SETTINGS legacy_column_name_of_tuple_literal = 1;
diff --git a/tests/queries/0_stateless/02048_clickhouse_local_stage.reference b/tests/queries/0_stateless/02048_clickhouse_local_stage.reference
index 44c39f2a444..00e0f4ddb2e 100644
--- a/tests/queries/0_stateless/02048_clickhouse_local_stage.reference
+++ b/tests/queries/0_stateless/02048_clickhouse_local_stage.reference
@@ -1,15 +1,15 @@
-execute: default
+execute: --allow_experimental_analyzer=1
 "foo"
 1
-execute: --stage fetch_columns
-"dummy"
+execute: --allow_experimental_analyzer=1 --stage fetch_columns
+"system.one.dummy_0"
 0
-execute: --stage with_mergeable_state
-"1"
+execute: --allow_experimental_analyzer=1 --stage with_mergeable_state
+"1_UInt8"
 1
-execute: --stage with_mergeable_state_after_aggregation
-"1"
+execute: --allow_experimental_analyzer=1 --stage with_mergeable_state_after_aggregation
+"1_UInt8"
 1
-execute: --stage complete
+execute: --allow_experimental_analyzer=1 --stage complete
 "foo"
 1
diff --git a/tests/queries/0_stateless/02048_clickhouse_local_stage.sh b/tests/queries/0_stateless/02048_clickhouse_local_stage.sh
index 5c1303b5160..182acc23a13 100755
--- a/tests/queries/0_stateless/02048_clickhouse_local_stage.sh
+++ b/tests/queries/0_stateless/02048_clickhouse_local_stage.sh
@@ -5,6 +5,10 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
 # shellcheck source=../shell_config.sh
 . "$CURDIR"/../shell_config.sh

+opts=(
+    "--allow_experimental_analyzer=1"
+)
+
 function execute_query()
 {
     if [ $# -eq 0 ]; then
@@ -15,8 +19,8 @@ function execute_query()
     ${CLICKHOUSE_LOCAL} "$@" --format CSVWithNames -q "SELECT 1 AS foo"
 }

-execute_query # default -- complete
-execute_query --stage fetch_columns
-execute_query --stage with_mergeable_state
-execute_query --stage with_mergeable_state_after_aggregation
-execute_query --stage complete
+execute_query "${opts[@]}" # default -- complete
+execute_query "${opts[@]}" --stage fetch_columns
+execute_query "${opts[@]}" --stage with_mergeable_state
+execute_query "${opts[@]}" --stage with_mergeable_state_after_aggregation
+execute_query "${opts[@]}" --stage complete
diff --git a/tests/queries/0_stateless/02125_constant_if_condition_and_not_existing_column.sql b/tests/queries/0_stateless/02125_constant_if_condition_and_not_existing_column.sql
index 4aad7ae3694..822ffb19764 100644
--- a/tests/queries/0_stateless/02125_constant_if_condition_and_not_existing_column.sql
+++ b/tests/queries/0_stateless/02125_constant_if_condition_and_not_existing_column.sql
@@ -6,9 +6,9 @@ insert into test values (0);
 select if(0, y, 42) from test;
 select if(1, 42, y) from test;
 select if(toUInt8(0), y, 42) from test;
-select if(toInt8(0), y, 42) from test;
+select if(toUInt8(0), y, 42) from test;
+select if(toUInt8(1), 42, y) from test;
 select if(toUInt8(1), 42, y) from test;
-select if(toInt8(1), 42, y) from test;
 select if(toUInt8(toUInt8(0)), y, 42) from test;
 select if(cast(cast(0, 'UInt8'), 'UInt8'), y, 42) from test;
 explain syntax select x, if((select hasColumnInTable(currentDatabase(), 'test', 'y')), y, x || '_') from test;
diff --git a/tests/queries/0_stateless/02125_query_views_log_window_function.sql b/tests/queries/0_stateless/02125_query_views_log_window_function.sql
index 1de2cc95b14..fff1e943c58 100644
--- a/tests/queries/0_stateless/02125_query_views_log_window_function.sql
+++ b/tests/queries/0_stateless/02125_query_views_log_window_function.sql
@@ -1,4 +1,6 @@
+set allow_experimental_analyzer = 0;
 set allow_experimental_window_view = 1;
+
 CREATE TABLE data ( `id` UInt64, `timestamp` DateTime) ENGINE = Memory;

 CREATE WINDOW VIEW wv Engine Memory as select count(id), tumbleStart(w_id) as window_start from data group by tumble(timestamp, INTERVAL '10' SECOND) as w_id;
diff --git a/tests/queries/0_stateless/02160_untuple_exponential_growth.sh b/tests/queries/0_stateless/02160_untuple_exponential_growth.sh
index 9ec6594af69..2bc8f74a524 100755
--- a/tests/queries/0_stateless/02160_untuple_exponential_growth.sh
+++ b/tests/queries/0_stateless/02160_untuple_exponential_growth.sh
@@ -7,5 +7,5 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)

 # Should finish in reasonable time (milliseconds).
 # In previous versions this query led to exponential complexity of query analysis.
-${CLICKHOUSE_LOCAL} --query "SELECT untuple(tuple(untuple((1, untuple((untuple(tuple(untuple(tuple(untuple((untuple((1, 1, 1, 1)), 1, 1, 1)))))), 1, 1))))))" 2>&1 | grep -cF 'TOO_BIG_AST'
-${CLICKHOUSE_LOCAL} --query "SELECT untuple(tuple(untuple(tuple(untuple(tuple(untuple(tuple(untuple(tuple(untuple(tuple(untuple(tuple(untuple(tuple(untuple(tuple(untuple(tuple(untuple(tuple(untuple(tuple(untuple((1, 1, 1, 1, 1))))))))))))))))))))))))))" 2>&1 | grep -cF 'TOO_BIG_AST'
+${CLICKHOUSE_LOCAL} --query "SELECT untuple(tuple(untuple((1, untuple((untuple(tuple(untuple(tuple(untuple((untuple((1, 1, 1, 1)), 1, 1, 1)))))), 1, 1))))))" 2>&1 | grep -cF 'too big'
+${CLICKHOUSE_LOCAL} --query "SELECT untuple(tuple(untuple(tuple(untuple(tuple(untuple(tuple(untuple(tuple(untuple(tuple(untuple(tuple(untuple(tuple(untuple(tuple(untuple(tuple(untuple(tuple(untuple(tuple(untuple((1, 1, 1, 1, 1))))))))))))))))))))))))))" 2>&1 | grep -cF 'too big'
diff --git a/tests/queries/0_stateless/02174_cte_scalar_cache.reference b/tests/queries/0_stateless/02174_cte_scalar_cache.reference
index 817116eda88..1acbef35325 100644
--- a/tests/queries/0_stateless/02174_cte_scalar_cache.reference
+++ b/tests/queries/0_stateless/02174_cte_scalar_cache.reference
@@ -1,3 +1,3 @@
-02177_CTE_GLOBAL_ON 5 500 11 0 5
-02177_CTE_GLOBAL_OFF 1 100 5 0 1
-02177_CTE_NEW_ANALYZER 2 200 3 0 2
+02177_CTE_GLOBAL_ON 1 100 4 0 1
+02177_CTE_GLOBAL_OFF 1 100 4 0 1
+02177_CTE_NEW_ANALYZER 1 100 4 0 1
diff --git a/tests/queries/0_stateless/02174_cte_scalar_cache.sql b/tests/queries/0_stateless/02174_cte_scalar_cache.sql
index 9ed80d08cff..50a10834e64 100644
--- a/tests/queries/0_stateless/02174_cte_scalar_cache.sql
+++ b/tests/queries/0_stateless/02174_cte_scalar_cache.sql
@@ -1,3 +1,5 @@
+SET allow_experimental_analyzer = 1;
+
 WITH
     ( SELECT sleep(0.0001) FROM system.one ) as a1,
     ( SELECT sleep(0.0001) FROM system.one ) as a2,
diff --git a/tests/queries/0_stateless/02184_hash_functions_and_ip_types.reference b/tests/queries/0_stateless/02184_hash_functions_and_ip_types.reference
index 07705827428..b305806cd08 100644
--- a/tests/queries/0_stateless/02184_hash_functions_and_ip_types.reference
+++ b/tests/queries/0_stateless/02184_hash_functions_and_ip_types.reference
@@ -1,54 +1,54 @@
 Row 1:
 ──────
-ipv4: 1.2.3.4
-halfMD5(toIPv4('1.2.3.4')): 14356538739656272800
-farmFingerprint64(toIPv4('1.2.3.4')): 5715546585361069049
-xxh3(toIPv4('1.2.3.4')): 14355428563589734825
-wyHash64(toIPv4('1.2.3.4')): 13096729196120951355
-xxHash32(toIPv4('1.2.3.4')): 2430391091
-gccMurmurHash(toIPv4('1.2.3.4')): 5478801830569062645
-murmurHash2_32(toIPv4('1.2.3.4')): 1658978282
-javaHashUTF16LE(toIPv4('1.2.3.4')): 24190
-intHash64(toIPv4('1.2.3.4')): 5715546585361069049
-intHash32(toIPv4('1.2.3.4')): 3152671896
-metroHash64(toIPv4('1.2.3.4')): 5715546585361069049
-hex(murmurHash3_128(toIPv4('1.2.3.4'))): 549E9EF692591F6BB55874EF9A0DE88E
-jumpConsistentHash(toIPv4('1.2.3.4'), 42): 37
-sipHash64(toIPv4('1.2.3.4')): 10711397536826262068
-hex(sipHash128(toIPv4('1.2.3.4'))): DBB6A76B92B59789EFB42185DC32311D
-kostikConsistentHash(toIPv4('1.2.3.4'), 42): 0
-xxHash64(toIPv4('1.2.3.4')): 14496144933713060978
-murmurHash2_64(toIPv4('1.2.3.4')): 10829690723193326442
-cityHash64(toIPv4('1.2.3.4')): 5715546585361069049
-hiveHash(toIPv4('1.2.3.4')): 122110
-murmurHash3_64(toIPv4('1.2.3.4')): 16570805747704317665
-murmurHash3_32(toIPv4('1.2.3.4')): 1165084099
-yandexConsistentHash(toIPv4('1.2.3.4'), 42): 0
+ipv4:                            1.2.3.4
+halfMD5(ipv4):                   14356538739656272800
+farmFingerprint64(ipv4):         5715546585361069049
+xxh3(ipv4):                      14355428563589734825
+wyHash64(ipv4):                  13096729196120951355
+xxHash32(ipv4):                  2430391091
+gccMurmurHash(ipv4):             5478801830569062645
+murmurHash2_32(ipv4):            1658978282
+javaHashUTF16LE(ipv4):           24190
+intHash64(ipv4):                 5715546585361069049
+intHash32(ipv4):                 3152671896
+metroHash64(ipv4):               5715546585361069049
+hex(murmurHash3_128(ipv4)):      549E9EF692591F6BB55874EF9A0DE88E
+jumpConsistentHash(ipv4, 42):    37
+sipHash64(ipv4):                 10711397536826262068
+hex(sipHash128(ipv4)):           DBB6A76B92B59789EFB42185DC32311D
+kostikConsistentHash(ipv4, 42):  0
+xxHash64(ipv4):                  14496144933713060978
+murmurHash2_64(ipv4):            10829690723193326442
+cityHash64(ipv4):                5715546585361069049
+hiveHash(ipv4):                  122110
+murmurHash3_64(ipv4):            16570805747704317665
+murmurHash3_32(ipv4):            1165084099
+yandexConsistentHash(ipv4, 42):  0
 Row 1:
 ──────
-ipv6: fe80::62:5aff:fed1:daf0
-halfMD5(toIPv6('fe80::62:5aff:fed1:daf0')): 9503062220758009199
-hex(MD4(toIPv6('fe80::62:5aff:fed1:daf0'))): E35A1A4FB3A3953421AB348B2E1A4A1A
-hex(MD5(toIPv6('fe80::62:5aff:fed1:daf0'))): 83E1A8BD8AB7456FC229208409F79798
-hex(SHA1(toIPv6('fe80::62:5aff:fed1:daf0'))): A6D5DCE882AC44804382DE4639E6001612E1C8B5
-hex(SHA224(toIPv6('fe80::62:5aff:fed1:daf0'))): F6995FD7BED2BCA21F68DAC6BBABE742DC1BA177BA8594CEF1715C52
-hex(SHA256(toIPv6('fe80::62:5aff:fed1:daf0'))): F75497BAD6F7747BD6B150B6F69BA2DEE354F1C2A34B7BEA6183973B78640250
-hex(SHA512(toIPv6('fe80::62:5aff:fed1:daf0'))): 0C2893CCBF44BC19CCF339AEED5B68CBFD5A2EF38263A48FE21C3379BA4438E7FF7A02F59D7542442C6E6ED538E6D13D65D3573DADB381651D3D8A5DEA232EAC
-farmFingerprint64(toIPv6('fe80::62:5aff:fed1:daf0')): 6643158734288374888
-javaHash(toIPv6('fe80::62:5aff:fed1:daf0')): 684606770
-xxh3(toIPv6('fe80::62:5aff:fed1:daf0')): 4051340969481364358
-wyHash64(toIPv6('fe80::62:5aff:fed1:daf0')): 18071806066582739916
-xxHash32(toIPv6('fe80::62:5aff:fed1:daf0')): 3353862080
-gccMurmurHash(toIPv6('fe80::62:5aff:fed1:daf0')): 11049311547848936878
-murmurHash2_32(toIPv6('fe80::62:5aff:fed1:daf0')): 1039121047
-javaHashUTF16LE(toIPv6('fe80::62:5aff:fed1:daf0')): -666938696
-metroHash64(toIPv6('fe80::62:5aff:fed1:daf0')): 15333045864940909774
-hex(sipHash128(toIPv6('fe80::62:5aff:fed1:daf0'))): 31D50562F877B1F92A99B05B646568B7
-hex(murmurHash3_128(toIPv6('fe80::62:5aff:fed1:daf0'))): 6FFEF0C1DF8B5B472FE2EDF0C76C12B9
-sipHash64(toIPv6('fe80::62:5aff:fed1:daf0')): 5681592867096972315
-xxHash64(toIPv6('fe80::62:5aff:fed1:daf0')): 4533874364641685764
-murmurHash2_64(toIPv6('fe80::62:5aff:fed1:daf0')): 11839090601505681839
-cityHash64(toIPv6('fe80::62:5aff:fed1:daf0')): 1599722731594796935
-hiveHash(toIPv6('fe80::62:5aff:fed1:daf0')): 684606770
-murmurHash3_64(toIPv6('fe80::62:5aff:fed1:daf0')): 18323430650022796352
-murmurHash3_32(toIPv6('fe80::62:5aff:fed1:daf0')): 3971193740
+ipv6:                         fe80::62:5aff:fed1:daf0
+halfMD5(ipv6):                9503062220758009199
+hex(MD4(ipv6)):               E35A1A4FB3A3953421AB348B2E1A4A1A
+hex(MD5(ipv6)):               83E1A8BD8AB7456FC229208409F79798
+hex(SHA1(ipv6)):              A6D5DCE882AC44804382DE4639E6001612E1C8B5
+hex(SHA224(ipv6)):            F6995FD7BED2BCA21F68DAC6BBABE742DC1BA177BA8594CEF1715C52
+hex(SHA256(ipv6)):            F75497BAD6F7747BD6B150B6F69BA2DEE354F1C2A34B7BEA6183973B78640250
+hex(SHA512(ipv6)):            0C2893CCBF44BC19CCF339AEED5B68CBFD5A2EF38263A48FE21C3379BA4438E7FF7A02F59D7542442C6E6ED538E6D13D65D3573DADB381651D3D8A5DEA232EAC
+farmFingerprint64(ipv6):      6643158734288374888
+javaHash(ipv6):               684606770
+xxh3(ipv6):                   4051340969481364358
+wyHash64(ipv6):               18071806066582739916
+xxHash32(ipv6):               3353862080
+gccMurmurHash(ipv6):          11049311547848936878
+murmurHash2_32(ipv6):         1039121047
+javaHashUTF16LE(ipv6):        -666938696
+metroHash64(ipv6):            15333045864940909774
+hex(sipHash128(ipv6)):        31D50562F877B1F92A99B05B646568B7
+hex(murmurHash3_128(ipv6)):   6FFEF0C1DF8B5B472FE2EDF0C76C12B9
+sipHash64(ipv6):              5681592867096972315
+xxHash64(ipv6):               4533874364641685764
+murmurHash2_64(ipv6):         11839090601505681839
+cityHash64(ipv6):             1599722731594796935
+hiveHash(ipv6):               684606770
+murmurHash3_64(ipv6):         18323430650022796352
+murmurHash3_32(ipv6):         3971193740
diff --git a/tests/queries/0_stateless/02184_hash_functions_and_ip_types.sql b/tests/queries/0_stateless/02184_hash_functions_and_ip_types.sql
index 67aae812144..d96574ef4fe 100644
--- a/tests/queries/0_stateless/02184_hash_functions_and_ip_types.sql
+++ b/tests/queries/0_stateless/02184_hash_functions_and_ip_types.sql
@@ -1,5 +1,7 @@
 -- Tags: no-fasttest

+SET allow_experimental_analyzer = 1;
+
 SELECT
     toIPv4('1.2.3.4') AS ipv4,
     halfMD5(ipv4),
diff --git a/tests/queries/0_stateless/02227_union_match_by_name.reference b/tests/queries/0_stateless/02227_union_match_by_name.reference
index cebcc42dcba..e51ea983f7f 100644
--- a/tests/queries/0_stateless/02227_union_match_by_name.reference
+++ b/tests/queries/0_stateless/02227_union_match_by_name.reference
@@ -1,40 +1,53 @@
--- { echo }
+-- { echoOn }
+
 EXPLAIN header = 1, optimize = 0 SELECT avgWeighted(x, y) FROM (SELECT NULL, 255 AS x, 1 AS y UNION ALL SELECT y, NULL AS x, 1 AS y);
-Expression (Projection)
+Expression (Project names)
 Header: avgWeighted(x, y) Nullable(Float64)
-  Expression (Before ORDER BY)
-  Header: avgWeighted(x, y) Nullable(Float64)
+  Expression (Projection)
+  Header: avgWeighted(x_0, y_1) Nullable(Float64)
     Aggregating
-    Header: avgWeighted(x, y) Nullable(Float64)
+    Header: avgWeighted(x_0, y_1) Nullable(Float64)
      Expression (Before GROUP BY)
-      Header: x Nullable(UInt8)
-              y UInt8
-        Union
-        Header: x Nullable(UInt8)
-                y UInt8
-          Expression (Conversion before UNION)
-          Header: x Nullable(UInt8)
+      Header: x_0 Nullable(UInt8)
+              y_1 UInt8
+        Expression (Change column names to column identifiers)
+        Header: x_0 Nullable(UInt8)
+                y_1 UInt8
+          Union
+          Header: NULL Nullable(UInt8)
+                  x Nullable(UInt8)
                  y UInt8
-            Expression (Projection)
-            Header: x UInt8
+            Expression (Conversion before UNION)
+            Header: NULL Nullable(UInt8)
+                    x Nullable(UInt8)
                    y UInt8
-              Expression (Before ORDER BY)
-              Header: 255 UInt8
-                      1 UInt8
-                      dummy UInt8
-                ReadFromStorage (SystemOne)
-                Header: dummy UInt8
-          Expression (Conversion before UNION)
-          Header: x Nullable(UInt8)
-                  y UInt8
-            Expression (Projection)
-            Header: x Nullable(Nothing)
+              Expression (Project names)
+              Header: NULL Nullable(Nothing)
+                      x UInt8
+                      y UInt8
+                Expression (Projection)
+                Header: NULL_Nullable(Nothing) Nullable(Nothing)
+                        255_UInt8 UInt8
+                        1_UInt8 UInt8
+                  Expression (Change column names to column identifiers)
+                  Header: system.one.dummy_0 UInt8
+                    ReadFromStorage (SystemOne)
+                    Header: dummy UInt8
+            Expression (Conversion before UNION)
+            Header: NULL Nullable(UInt8)
+                    x Nullable(UInt8)
                    y UInt8
-              Expression (Before ORDER BY)
-              Header: 1 UInt8
-                      NULL Nullable(Nothing)
-                      dummy UInt8
-                ReadFromStorage (SystemOne)
-                Header: dummy UInt8
+              Expression (Project names)
+              Header: y UInt8
+                      x Nullable(Nothing)
+                      y UInt8
+                Expression (Projection)
+                Header: 1_UInt8 UInt8
+                        NULL_Nullable(Nothing) Nullable(Nothing)
+                        1_UInt8 UInt8
+                  Expression (Change column names to column identifiers)
+                  Header: system.one.dummy_0 UInt8
+                    ReadFromStorage (SystemOne)
+                    Header: dummy UInt8
 SELECT avgWeighted(x, y) FROM (SELECT NULL, 255 AS x, 1 AS y
UNION ALL SELECT y, NULL AS x, 1 AS y); 255 diff --git a/tests/queries/0_stateless/02227_union_match_by_name.sql b/tests/queries/0_stateless/02227_union_match_by_name.sql index cc0ab8ba5aa..6a19add1d37 100644 --- a/tests/queries/0_stateless/02227_union_match_by_name.sql +++ b/tests/queries/0_stateless/02227_union_match_by_name.sql @@ -1,3 +1,8 @@ --- { echo } +SET allow_experimental_analyzer = 1; + +-- { echoOn } + EXPLAIN header = 1, optimize = 0 SELECT avgWeighted(x, y) FROM (SELECT NULL, 255 AS x, 1 AS y UNION ALL SELECT y, NULL AS x, 1 AS y); SELECT avgWeighted(x, y) FROM (SELECT NULL, 255 AS x, 1 AS y UNION ALL SELECT y, NULL AS x, 1 AS y); + +-- { echoOff } diff --git a/tests/queries/0_stateless/02286_drop_filesystem_cache.sh b/tests/queries/0_stateless/02286_drop_filesystem_cache.sh index b563c487646..991622446b8 100755 --- a/tests/queries/0_stateless/02286_drop_filesystem_cache.sh +++ b/tests/queries/0_stateless/02286_drop_filesystem_cache.sh @@ -16,7 +16,7 @@ for STORAGE_POLICY in 's3_cache' 'local_cache'; do ORDER BY key SETTINGS storage_policy='$STORAGE_POLICY', min_bytes_for_wide_part = 10485760" - $CLICKHOUSE_CLIENT --query "SYSTEM STOP MERGES" + $CLICKHOUSE_CLIENT --query "SYSTEM STOP MERGES test_02286" $CLICKHOUSE_CLIENT --query "SYSTEM DROP FILESYSTEM CACHE" $CLICKHOUSE_CLIENT --query "SELECT count() FROM system.filesystem_cache" diff --git a/tests/queries/0_stateless/02303_query_kind.reference b/tests/queries/0_stateless/02303_query_kind.reference index 163f8b0ed5e..5af8c2b743f 100644 --- a/tests/queries/0_stateless/02303_query_kind.reference +++ b/tests/queries/0_stateless/02303_query_kind.reference @@ -1,36 +1,36 @@ -clickhouse-client --query_kind secondary_query -q explain plan header=1 select toString(dummy) as dummy from system.one group by dummy -Expression ((Projection + Before ORDER BY)) +clickhouse-client --allow_experimental_analyzer=1 --query_kind secondary_query -q explain plan header=1 select toString(dummy) as dummy from system.one group by dummy +Expression ((Project names + Projection)) Header: dummy String Aggregating - Header: toString(dummy) String - Expression (Before GROUP BY) - Header: toString(dummy) String + Header: toString(system.one.dummy_0) String + Expression ((Before GROUP BY + Change column names to column identifiers)) + Header: toString(system.one.dummy_0) String ReadFromStorage (SystemOne) Header: dummy UInt8 -clickhouse-local --query_kind secondary_query -q explain plan header=1 select toString(dummy) as dummy from system.one group by dummy -Expression ((Projection + Before ORDER BY)) +clickhouse-local --allow_experimental_analyzer=1 --query_kind secondary_query -q explain plan header=1 select toString(dummy) as dummy from system.one group by dummy +Expression ((Project names + Projection)) Header: dummy String Aggregating - Header: toString(dummy) String - Expression (Before GROUP BY) - Header: toString(dummy) String + Header: toString(system.one.dummy_0) String + Expression ((Before GROUP BY + Change column names to column identifiers)) + Header: toString(system.one.dummy_0) String ReadFromStorage (SystemOne) Header: dummy UInt8 -clickhouse-client --query_kind initial_query -q explain plan header=1 select toString(dummy) as dummy from system.one group by dummy -Expression ((Projection + Before ORDER BY)) +clickhouse-client --allow_experimental_analyzer=1 --query_kind initial_query -q explain plan header=1 select toString(dummy) as dummy from system.one group by dummy +Expression ((Project names + Projection)) Header: dummy String 
Aggregating - Header: toString(dummy) String - Expression (Before GROUP BY) - Header: toString(dummy) String + Header: toString(system.one.dummy_0) String + Expression ((Before GROUP BY + Change column names to column identifiers)) + Header: toString(system.one.dummy_0) String ReadFromStorage (SystemOne) Header: dummy UInt8 -clickhouse-local --query_kind initial_query -q explain plan header=1 select toString(dummy) as dummy from system.one group by dummy -Expression ((Projection + Before ORDER BY)) +clickhouse-local --allow_experimental_analyzer=1 --query_kind initial_query -q explain plan header=1 select toString(dummy) as dummy from system.one group by dummy +Expression ((Project names + Projection)) Header: dummy String Aggregating - Header: toString(dummy) String - Expression (Before GROUP BY) - Header: toString(dummy) String + Header: toString(system.one.dummy_0) String + Expression ((Before GROUP BY + Change column names to column identifiers)) + Header: toString(system.one.dummy_0) String ReadFromStorage (SystemOne) Header: dummy UInt8 diff --git a/tests/queries/0_stateless/02303_query_kind.sh b/tests/queries/0_stateless/02303_query_kind.sh index 5ad5f9ec6f4..1d883a2dcc7 100755 --- a/tests/queries/0_stateless/02303_query_kind.sh +++ b/tests/queries/0_stateless/02303_query_kind.sh @@ -4,6 +4,10 @@ CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) # shellcheck source=../shell_config.sh . "$CUR_DIR"/../shell_config.sh +opts=( + "--allow_experimental_analyzer=1" +) + function run_query() { echo "clickhouse-client $*" @@ -12,5 +16,5 @@ function run_query() echo "clickhouse-local $*" $CLICKHOUSE_LOCAL "$@" } -run_query --query_kind secondary_query -q "explain plan header=1 select toString(dummy) as dummy from system.one group by dummy" -run_query --query_kind initial_query -q "explain plan header=1 select toString(dummy) as dummy from system.one group by dummy" +run_query "${opts[@]}" --query_kind secondary_query -q "explain plan header=1 select toString(dummy) as dummy from system.one group by dummy" +run_query "${opts[@]}" --query_kind initial_query -q "explain plan header=1 select toString(dummy) as dummy from system.one group by dummy" diff --git a/tests/queries/0_stateless/02342_window_view_different_struct.sql b/tests/queries/0_stateless/02342_window_view_different_struct.sql index c5bf8899cae..a5b2b8daa5a 100644 --- a/tests/queries/0_stateless/02342_window_view_different_struct.sql +++ b/tests/queries/0_stateless/02342_window_view_different_struct.sql @@ -1,3 +1,4 @@ +SET allow_experimental_analyzer = 0; SET allow_experimental_window_view = 1; DROP TABLE IF EXISTS data_02342; diff --git a/tests/queries/0_stateless/02364_window_view_segfault.sh b/tests/queries/0_stateless/02364_window_view_segfault.sh index d03a1e5ae3e..3def22f4a9e 100755 --- a/tests/queries/0_stateless/02364_window_view_segfault.sh +++ b/tests/queries/0_stateless/02364_window_view_segfault.sh @@ -5,7 +5,11 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) # shellcheck source=../shell_config.sh . 
"$CURDIR"/../shell_config.sh -${CLICKHOUSE_CLIENT} --multiquery --multiline --query """ +opts=( + "--allow_experimental_analyzer=0" +) + +${CLICKHOUSE_CLIENT} "${opts[@]}" --multiquery --multiline --query """ DROP TABLE IF EXISTS mt ON CLUSTER test_shard_localhost; DROP TABLE IF EXISTS wv ON CLUSTER test_shard_localhost; CREATE TABLE mt ON CLUSTER test_shard_localhost (a Int32, timestamp DateTime) ENGINE=MergeTree ORDER BY tuple(); diff --git a/tests/queries/0_stateless/02371_select_projection_normal_agg.sql b/tests/queries/0_stateless/02371_select_projection_normal_agg.sql index 283aec0b122..8650fb6b843 100644 --- a/tests/queries/0_stateless/02371_select_projection_normal_agg.sql +++ b/tests/queries/0_stateless/02371_select_projection_normal_agg.sql @@ -11,7 +11,8 @@ CREATE TABLE video_log ) ENGINE = MergeTree PARTITION BY toDate(datetime) -ORDER BY (user_id, device_id); +ORDER BY (user_id, device_id) +SETTINGS index_granularity = 8192, index_granularity_bytes = '10Mi'; DROP TABLE IF EXISTS rng; @@ -57,7 +58,8 @@ CREATE TABLE video_log_result ) ENGINE = MergeTree PARTITION BY toDate(hour) -ORDER BY sum_bytes; +ORDER BY sum_bytes +SETTINGS index_granularity = 8192, index_granularity_bytes = '10Mi'; INSERT INTO video_log_result SELECT toStartOfHour(datetime) AS hour, diff --git a/tests/queries/0_stateless/02381_join_dup_columns_in_plan.reference b/tests/queries/0_stateless/02381_join_dup_columns_in_plan.reference index bbf288c45d7..31a37862663 100644 --- a/tests/queries/0_stateless/02381_join_dup_columns_in_plan.reference +++ b/tests/queries/0_stateless/02381_join_dup_columns_in_plan.reference @@ -2,51 +2,51 @@ Expression Header: key String value String Join - Header: key String - value String + Header: s1.key_0 String + s2.value_1 String Expression - Header: key String + Header: s1.key_0 String ReadFromStorage Header: dummy UInt8 Union - Header: s2.key String - value String + Header: s2.key_2 String + s2.value_1 String Expression - Header: s2.key String - value String + Header: s2.key_2 String + s2.value_1 String ReadFromStorage Header: dummy UInt8 Expression - Header: s2.key String - value String + Header: s2.key_2 String + s2.value_1 String ReadFromStorage Header: dummy UInt8 Expression Header: key String value String Join - Header: key String - s2.key String - value String + Header: s1.key_0 String + s2.key_2 String + s2.value_1 String Sorting - Header: key String + Header: s1.key_0 String Expression - Header: key String + Header: s1.key_0 String ReadFromStorage Header: dummy UInt8 Sorting - Header: s2.key String - value String + Header: s2.key_2 String + s2.value_1 String Union - Header: s2.key String - value String + Header: s2.key_2 String + s2.value_1 String Expression - Header: s2.key String - value String + Header: s2.key_2 String + s2.value_1 String ReadFromStorage Header: dummy UInt8 Expression - Header: s2.key String - value String + Header: s2.key_2 String + s2.value_1 String ReadFromStorage Header: dummy UInt8 diff --git a/tests/queries/0_stateless/02381_join_dup_columns_in_plan.sql b/tests/queries/0_stateless/02381_join_dup_columns_in_plan.sql index 4ed6d965292..dfcd8c12e11 100644 --- a/tests/queries/0_stateless/02381_join_dup_columns_in_plan.sql +++ b/tests/queries/0_stateless/02381_join_dup_columns_in_plan.sql @@ -1,3 +1,4 @@ +SET allow_experimental_analyzer = 1; SET join_algorithm = 'hash'; EXPLAIN actions=0, description=0, header=1 diff --git a/tests/queries/0_stateless/02402_external_disk_mertrics.sql b/tests/queries/0_stateless/02402_external_disk_mertrics.sql index 
b675c05f45c..e9696eb7122 100644 --- a/tests/queries/0_stateless/02402_external_disk_mertrics.sql +++ b/tests/queries/0_stateless/02402_external_disk_mertrics.sql @@ -20,7 +20,8 @@ SET join_algorithm = 'partial_merge'; SET default_max_bytes_in_join = 0; SET max_bytes_in_join = 10000000; -SELECT number * 200000 as n, j * 2097152 FROM numbers(5) nums +SELECT n, j * 2097152 FROM +(SELECT number * 200000 as n FROM numbers(5)) nums ANY LEFT JOIN ( SELECT number * 2 AS n, number AS j FROM numbers(1000000) ) js2 USING n ORDER BY n diff --git a/tests/queries/0_stateless/02420_final_setting_analyzer.reference b/tests/queries/0_stateless/02420_final_setting_analyzer.reference index ee7c2541bcf..9a03c484765 100644 --- a/tests/queries/0_stateless/02420_final_setting_analyzer.reference +++ b/tests/queries/0_stateless/02420_final_setting_analyzer.reference @@ -108,9 +108,6 @@ select left_table.id,val_left, val_middle, val_right from left_table ORDER BY left_table.id, val_left, val_middle, val_right; 1 c a c 1 c b c --- no distributed tests because it is not currently supported: --- JOIN with remote storages is unsupported. - -- Quite exotic with Merge engine DROP TABLE IF EXISTS table_to_merge_a; DROP TABLE IF EXISTS table_to_merge_b; diff --git a/tests/queries/0_stateless/02420_final_setting_analyzer.sql b/tests/queries/0_stateless/02420_final_setting_analyzer.sql index 5937e536239..14c832cfaf5 100644 --- a/tests/queries/0_stateless/02420_final_setting_analyzer.sql +++ b/tests/queries/0_stateless/02420_final_setting_analyzer.sql @@ -79,9 +79,6 @@ select left_table.id,val_left, val_middle, val_right from left_table inner join (SELECT * FROM right_table WHERE id = 1) r on middle_table.id = r.id ORDER BY left_table.id, val_left, val_middle, val_right; --- no distributed tests because it is not currently supported: --- JOIN with remote storages is unsupported. 
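-- [Editorial note, not part of the original patch] 02420_final_setting_analyzer
-- exercises the query-level `final` setting, which implicitly adds FINAL to every
-- table in the FROM clause. A minimal sketch of the idea, assuming a
-- ReplacingMergeTree table t:
--   SET final = 1;
--   SELECT * FROM t;  -- behaves like: SELECT * FROM t FINAL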
- -- Quite exotic with Merge engine DROP TABLE IF EXISTS table_to_merge_a; DROP TABLE IF EXISTS table_to_merge_b; diff --git a/tests/queries/0_stateless/02451_order_by_monotonic.reference b/tests/queries/0_stateless/02451_order_by_monotonic.reference index d3de324a7e1..f9f0ef38be1 100644 --- a/tests/queries/0_stateless/02451_order_by_monotonic.reference +++ b/tests/queries/0_stateless/02451_order_by_monotonic.reference @@ -4,19 +4,19 @@ 2022-09-09 12:00:00 0x 2022-09-09 12:00:00 1 2022-09-09 12:00:00 1x - Prefix sort description: toStartOfMinute(t) ASC - Result sort description: toStartOfMinute(t) ASC, c1 ASC - Prefix sort description: toStartOfMinute(t) ASC - Result sort description: toStartOfMinute(t) ASC - Prefix sort description: negate(a) ASC - Result sort description: negate(a) ASC - Prefix sort description: negate(a) ASC, negate(b) ASC - Result sort description: negate(a) ASC, negate(b) ASC - Prefix sort description: a DESC, negate(b) ASC - Result sort description: a DESC, negate(b) ASC - Prefix sort description: negate(a) ASC, b DESC - Result sort description: negate(a) ASC, b DESC - Prefix sort description: negate(a) ASC - Result sort description: negate(a) ASC, b ASC - Prefix sort description: a ASC - Result sort description: a ASC, negate(b) ASC + Prefix sort description: toStartOfMinute(test.t_0) ASC + Result sort description: toStartOfMinute(test.t_0) ASC, test.c1_1 ASC + Prefix sort description: toStartOfMinute(test.t_0) ASC + Result sort description: toStartOfMinute(test.t_0) ASC + Prefix sort description: negate(test.a_0) ASC + Result sort description: negate(test.a_0) ASC + Prefix sort description: negate(test.a_0) ASC, negate(test.b_1) ASC + Result sort description: negate(test.a_0) ASC, negate(test.b_1) ASC + Prefix sort description: test.a_0 DESC, negate(test.b_1) ASC + Result sort description: test.a_0 DESC, negate(test.b_1) ASC + Prefix sort description: negate(test.a_0) ASC, test.b_1 DESC + Result sort description: negate(test.a_0) ASC, test.b_1 DESC + Prefix sort description: negate(test.a_0) ASC + Result sort description: negate(test.a_0) ASC, test.b_1 ASC + Prefix sort description: test.a_0 ASC + Result sort description: test.a_0 ASC, negate(test.b_1) ASC diff --git a/tests/queries/0_stateless/02451_order_by_monotonic.sh b/tests/queries/0_stateless/02451_order_by_monotonic.sh index cc26ba91e1c..7d1356b4445 100755 --- a/tests/queries/0_stateless/02451_order_by_monotonic.sh +++ b/tests/queries/0_stateless/02451_order_by_monotonic.sh @@ -4,37 +4,41 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) # shellcheck source=../shell_config.sh . 
"$CURDIR"/../shell_config.sh +opts=( + "--allow_experimental_analyzer=1" +) + function explain_sort_description() { - out=$($CLICKHOUSE_CLIENT --optimize_read_in_order=1 -q "EXPLAIN PLAN actions = 1 $1") + out=$($CLICKHOUSE_CLIENT "${opts[@]}" --optimize_read_in_order=1 -q "EXPLAIN PLAN actions = 1 $1") echo "$out" | grep "Prefix sort description:" echo "$out" | grep "Result sort description:" } -$CLICKHOUSE_CLIENT -q "DROP TABLE IF EXISTS t_order_by_monotonic" -$CLICKHOUSE_CLIENT -q "CREATE TABLE t_order_by_monotonic (t DateTime, c1 String) ENGINE = MergeTree ORDER BY (t, c1) +$CLICKHOUSE_CLIENT "${opts[@]}" -q "DROP TABLE IF EXISTS t_order_by_monotonic" +$CLICKHOUSE_CLIENT "${opts[@]}" -q "CREATE TABLE t_order_by_monotonic (t DateTime, c1 String) ENGINE = MergeTree ORDER BY (t, c1) AS SELECT '2022-09-09 12:00:00', toString(number % 2) FROM numbers(2) UNION ALL SELECT '2022-09-09 12:00:30', toString(number % 2)|| 'x' FROM numbers(3)" -$CLICKHOUSE_CLIENT --optimize_aggregation_in_order=1 -q "SELECT count() FROM - (SELECT toStartOfMinute(t) AS s, c1 FROM t_order_by_monotonic GROUP BY s, c1)" +$CLICKHOUSE_CLIENT "${opts[@]}" --optimize_aggregation_in_order=1 -q "SELECT count() FROM + (SELECT toStartOfMinute(t) AS s, c1 FROM t_order_by_monotonic AS test GROUP BY s, c1)" -$CLICKHOUSE_CLIENT --optimize_read_in_order=1 -q "SELECT toStartOfMinute(t) AS s, c1 FROM t_order_by_monotonic ORDER BY s, c1" +$CLICKHOUSE_CLIENT "${opts[@]}" --optimize_read_in_order=1 -q "SELECT toStartOfMinute(t) AS s, c1 FROM t_order_by_monotonic AS test ORDER BY s, c1" -explain_sort_description "SELECT toStartOfMinute(t) AS s, c1 FROM t_order_by_monotonic ORDER BY s, c1" -explain_sort_description "SELECT toStartOfMinute(t) AS s, c1 FROM t_order_by_monotonic ORDER BY s" +explain_sort_description "SELECT toStartOfMinute(t) AS s, c1 FROM t_order_by_monotonic AS test ORDER BY s, c1" +explain_sort_description "SELECT toStartOfMinute(t) AS s, c1 FROM t_order_by_monotonic AS test ORDER BY s" -$CLICKHOUSE_CLIENT -q "DROP TABLE IF EXISTS t_order_by_monotonic" +$CLICKHOUSE_CLIENT "${opts[@]}" -q "DROP TABLE IF EXISTS t_order_by_monotonic" -$CLICKHOUSE_CLIENT -q "CREATE TABLE t_order_by_monotonic (a Int64, b Int64) ENGINE = MergeTree ORDER BY (a, b)" +$CLICKHOUSE_CLIENT "${opts[@]}" -q "CREATE TABLE t_order_by_monotonic (a Int64, b Int64) ENGINE = MergeTree ORDER BY (a, b)" -$CLICKHOUSE_CLIENT -q "INSERT INTO t_order_by_monotonic VALUES (1, 1) (1, 2), (2, 1) (2, 2)" +$CLICKHOUSE_CLIENT "${opts[@]}" -q "INSERT INTO t_order_by_monotonic VALUES (1, 1) (1, 2), (2, 1) (2, 2)" -explain_sort_description "SELECT * FROM t_order_by_monotonic ORDER BY -a" -explain_sort_description "SELECT * FROM t_order_by_monotonic ORDER BY -a, -b" -explain_sort_description "SELECT * FROM t_order_by_monotonic ORDER BY a DESC, -b" -explain_sort_description "SELECT * FROM t_order_by_monotonic ORDER BY -a, b DESC" -explain_sort_description "SELECT * FROM t_order_by_monotonic ORDER BY -a, b" -explain_sort_description "SELECT * FROM t_order_by_monotonic ORDER BY a, -b" +explain_sort_description "SELECT * FROM t_order_by_monotonic AS test ORDER BY -a" +explain_sort_description "SELECT * FROM t_order_by_monotonic AS test ORDER BY -a, -b" +explain_sort_description "SELECT * FROM t_order_by_monotonic AS test ORDER BY a DESC, -b" +explain_sort_description "SELECT * FROM t_order_by_monotonic AS test ORDER BY -a, b DESC" +explain_sort_description "SELECT * FROM t_order_by_monotonic AS test ORDER BY -a, b" +explain_sort_description "SELECT * FROM t_order_by_monotonic AS 
test ORDER BY a, -b" -$CLICKHOUSE_CLIENT -q "DROP TABLE IF EXISTS t_order_by_monotonic" +$CLICKHOUSE_CLIENT "${opts[@]}" -q "DROP TABLE IF EXISTS t_order_by_monotonic" diff --git a/tests/queries/0_stateless/02483_cuturlparameter_with_arrays.reference b/tests/queries/0_stateless/02483_cuturlparameter_with_arrays.reference index dd677873c7c..348408a15cc 100644 --- a/tests/queries/0_stateless/02483_cuturlparameter_with_arrays.reference +++ b/tests/queries/0_stateless/02483_cuturlparameter_with_arrays.reference @@ -1,4 +1,5 @@ -- { echoOn } + SELECT cutURLParameter('http://bigmir.net/?a=b&c=d', []), cutURLParameter('http://bigmir.net/?a=b&c=d', ['a']), @@ -30,7 +31,7 @@ SELECT FORMAT Vertical; Row 1: ────── -cutURLParameter('http://bigmir.net/?a=b&c=d', []): http://bigmir.net/?a=b&c=d +cutURLParameter('http://bigmir.net/?a=b&c=d', array()): http://bigmir.net/?a=b&c=d cutURLParameter('http://bigmir.net/?a=b&c=d', ['a']): http://bigmir.net/?c=d cutURLParameter('http://bigmir.net/?a=b&c=d', ['a', 'c']): http://bigmir.net/? cutURLParameter('http://bigmir.net/?a=b&c=d', ['c']): http://bigmir.net/?a=b @@ -43,7 +44,7 @@ cutURLParameter('http://bigmir.net/?a=b&c=d#e&g=h', ['c', 'g']): http: cutURLParameter('http://bigmir.net/?a=b&c=d#e&g=h', ['e', 'g']): http://bigmir.net/?a=b&c=d#e cutURLParameter('http://bigmir.net/?a=b&c=d#test?e=f&g=h', ['test', 'e']): http://bigmir.net/?a=b&c=d#test?g=h cutURLParameter('http://bigmir.net/?a=b&c=d#test?e=f&g=h', ['test', 'g']): http://bigmir.net/?a=b&c=d#test?e=f -cutURLParameter('//bigmir.net/?a=b&c=d', []): //bigmir.net/?a=b&c=d +cutURLParameter('//bigmir.net/?a=b&c=d', array()): //bigmir.net/?a=b&c=d cutURLParameter('//bigmir.net/?a=b&c=d', ['a']): //bigmir.net/?c=d cutURLParameter('//bigmir.net/?a=b&c=d', ['a', 'c']): //bigmir.net/? cutURLParameter('//bigmir.net/?a=b&c=d#e=f', ['a', 'e']): //bigmir.net/?c=d# @@ -88,7 +89,7 @@ SELECT FORMAT Vertical; Row 1: ────── -cutURLParameter(materialize('http://bigmir.net/?a=b&c=d'), []): http://bigmir.net/?a=b&c=d +cutURLParameter(materialize('http://bigmir.net/?a=b&c=d'), array()): http://bigmir.net/?a=b&c=d cutURLParameter(materialize('http://bigmir.net/?a=b&c=d'), ['a']): http://bigmir.net/?c=d cutURLParameter(materialize('http://bigmir.net/?a=b&c=d'), ['a', 'c']): http://bigmir.net/? cutURLParameter(materialize('http://bigmir.net/?a=b&c=d'), ['c']): http://bigmir.net/?a=b @@ -101,7 +102,7 @@ cutURLParameter(materialize('http://bigmir.net/?a=b&c=d#e&g=h'), ['c', 'g']): cutURLParameter(materialize('http://bigmir.net/?a=b&c=d#e&g=h'), ['e', 'g']): http://bigmir.net/?a=b&c=d#e cutURLParameter(materialize('http://bigmir.net/?a=b&c=d#test?e=f&g=h'), ['test', 'e']): http://bigmir.net/?a=b&c=d#test?g=h cutURLParameter(materialize('http://bigmir.net/?a=b&c=d#test?e=f&g=h'), ['test', 'g']): http://bigmir.net/?a=b&c=d#test?e=f -cutURLParameter(materialize('//bigmir.net/?a=b&c=d'), []): //bigmir.net/?a=b&c=d +cutURLParameter(materialize('//bigmir.net/?a=b&c=d'), array()): //bigmir.net/?a=b&c=d cutURLParameter(materialize('//bigmir.net/?a=b&c=d'), ['a']): //bigmir.net/?c=d cutURLParameter(materialize('//bigmir.net/?a=b&c=d'), ['a', 'c']): //bigmir.net/? 
cutURLParameter(materialize('//bigmir.net/?a=b&c=d#e=f'), ['a', 'e']): //bigmir.net/?c=d# diff --git a/tests/queries/0_stateless/02483_cuturlparameter_with_arrays.sql b/tests/queries/0_stateless/02483_cuturlparameter_with_arrays.sql index ea2d6ae104f..6d64d2685b7 100644 --- a/tests/queries/0_stateless/02483_cuturlparameter_with_arrays.sql +++ b/tests/queries/0_stateless/02483_cuturlparameter_with_arrays.sql @@ -1,4 +1,7 @@ +SET allow_experimental_analyzer = 1; + -- { echoOn } + SELECT cutURLParameter('http://bigmir.net/?a=b&c=d', []), cutURLParameter('http://bigmir.net/?a=b&c=d', ['a']), diff --git a/tests/queries/0_stateless/02494_query_cache_explain.reference b/tests/queries/0_stateless/02494_query_cache_explain.reference index ecc965ac391..690e75bca7c 100644 --- a/tests/queries/0_stateless/02494_query_cache_explain.reference +++ b/tests/queries/0_stateless/02494_query_cache_explain.reference @@ -1,9 +1,9 @@ 1 1 -Expression ((Projection + Before ORDER BY)) +Expression ((Project names + (Projection + Change column names to column identifiers))) Limit (preliminary LIMIT (without OFFSET)) ReadFromStorage (SystemNumbers) -Expression ((Projection + Before ORDER BY)) +Expression ((Project names + (Projection + Change column names to column identifiers))) Limit (preliminary LIMIT (without OFFSET)) ReadFromStorage (SystemNumbers) (Expression) diff --git a/tests/queries/0_stateless/02494_query_cache_explain.sql b/tests/queries/0_stateless/02494_query_cache_explain.sql index 67717efde13..68b7e0005f8 100644 --- a/tests/queries/0_stateless/02494_query_cache_explain.sql +++ b/tests/queries/0_stateless/02494_query_cache_explain.sql @@ -1,6 +1,7 @@ -- Tags: no-parallel -- Tag no-parallel: Messes with internal cache +SET allow_experimental_analyzer = 1; SET allow_experimental_query_cache = true; SYSTEM DROP QUERY CACHE; diff --git a/tests/queries/0_stateless/02521_cannot-find-column-in-projection.sql b/tests/queries/0_stateless/02521_cannot-find-column-in-projection.sql deleted file mode 100644 index 31602c5bae2..00000000000 --- a/tests/queries/0_stateless/02521_cannot-find-column-in-projection.sql +++ /dev/null @@ -1,3 +0,0 @@ -create table test(day Date, id UInt32) engine=MergeTree partition by day order by tuple(); -insert into test select toDate('2023-01-05') AS day, number from numbers(10); -with toUInt64(id) as id_with select day, count(id_with) from test where day >= '2023-01-01' group by day limit 1000; -- { serverError NOT_FOUND_COLUMN_IN_BLOCK } diff --git a/tests/queries/0_stateless/02521_cannot_find_column_in_projection.reference b/tests/queries/0_stateless/02521_cannot_find_column_in_projection.reference new file mode 100644 index 00000000000..2cd767c8054 --- /dev/null +++ b/tests/queries/0_stateless/02521_cannot_find_column_in_projection.reference @@ -0,0 +1 @@ +2023-01-05 10 diff --git a/tests/queries/0_stateless/02521_cannot_find_column_in_projection.sql b/tests/queries/0_stateless/02521_cannot_find_column_in_projection.sql new file mode 100644 index 00000000000..255c6f56ab3 --- /dev/null +++ b/tests/queries/0_stateless/02521_cannot_find_column_in_projection.sql @@ -0,0 +1,7 @@ +SET allow_experimental_analyzer = 1; + +drop table if exists test; +create table test(day Date, id UInt32) engine=MergeTree partition by day order by tuple(); +insert into test select toDate('2023-01-05') AS day, number from numbers(10); +with toUInt64(id) as id_with select day, count(id_with) from test where day >= '2023-01-01' group by day limit 1000; +drop table test; diff --git 
a/tests/queries/0_stateless/02567_and_consistency.reference b/tests/queries/0_stateless/02567_and_consistency.reference index bcb2b5aecfb..e0014f187a8 100644 --- a/tests/queries/0_stateless/02567_and_consistency.reference +++ b/tests/queries/0_stateless/02567_and_consistency.reference @@ -6,10 +6,8 @@ true ===== true ===== -===== 1 ===== -===== allow_experimental_analyzer true #45440 diff --git a/tests/queries/0_stateless/02567_and_consistency.sql b/tests/queries/0_stateless/02567_and_consistency.sql index f02185a1a52..8ad06bd68cb 100644 --- a/tests/queries/0_stateless/02567_and_consistency.sql +++ b/tests/queries/0_stateless/02567_and_consistency.sql @@ -42,31 +42,10 @@ SETTINGS enable_optimize_predicate_expression = 0; SELECT '====='; -SELECT toBool(sin(SUM(number))) AS x -FROM -( - SELECT 1 AS number -) -GROUP BY number -HAVING 1 AND sin(sum(number)) -SETTINGS enable_optimize_predicate_expression = 1; -- { serverError 59 } - -SELECT '====='; - SELECT 1 and sin(1); SELECT '====='; -SELECT toBool(sin(SUM(number))) AS x -FROM -( - SELECT 1 AS number -) -GROUP BY number -HAVING x AND sin(1) -SETTINGS enable_optimize_predicate_expression = 0; -- { serverError 59 } - -SELECT '====='; SELECT 'allow_experimental_analyzer'; SET allow_experimental_analyzer = 1; diff --git a/tests/queries/0_stateless/02579_fill_empty_chunk.sql b/tests/queries/0_stateless/02579_fill_empty_chunk.sql index 14ae322d8c9..cbdbd7a9f84 100644 --- a/tests/queries/0_stateless/02579_fill_empty_chunk.sql +++ b/tests/queries/0_stateless/02579_fill_empty_chunk.sql @@ -1,5 +1,7 @@ -- this SELECT produces empty chunk in FillingTransform +SET enable_positional_arguments = 0; + SELECT 2 AS x, arrayJoin([NULL, NULL, NULL]) diff --git a/tests/queries/0_stateless/02675_predicate_push_down_filled_join_fix.sql b/tests/queries/0_stateless/02675_predicate_push_down_filled_join_fix.sql index 78cb423216b..73baad11634 100644 --- a/tests/queries/0_stateless/02675_predicate_push_down_filled_join_fix.sql +++ b/tests/queries/0_stateless/02675_predicate_push_down_filled_join_fix.sql @@ -1,4 +1,5 @@ SET allow_experimental_analyzer = 1; +SET single_join_prefer_left_table = 0; DROP TABLE IF EXISTS test_table; CREATE TABLE test_table diff --git a/tests/queries/0_stateless/02677_decode_url_component.reference b/tests/queries/0_stateless/02677_decode_url_component.reference new file mode 100644 index 00000000000..5f88856dc1c --- /dev/null +++ b/tests/queries/0_stateless/02677_decode_url_component.reference @@ -0,0 +1,2 @@ +%D0%BA%D0%BB%D0%B8%D0%BA%D1%85%D0%B0%D1%83%D1%81 1 +1 diff --git a/tests/queries/0_stateless/02677_decode_url_component.sql b/tests/queries/0_stateless/02677_decode_url_component.sql new file mode 100644 index 00000000000..68345b5de16 --- /dev/null +++ b/tests/queries/0_stateless/02677_decode_url_component.sql @@ -0,0 +1,5 @@ +SELECT + encodeURLComponent('кликхаус') AS encoded, + decodeURLComponent(encoded) = 'кликхаус' AS expected_EQ; + +SELECT DISTINCT decodeURLComponent(encodeURLComponent(randomString(100) AS x)) = x FROM numbers(100000); diff --git a/tests/queries/0_stateless/25340_grace_hash_limit_race.reference b/tests/queries/0_stateless/02677_grace_hash_limit_race.reference similarity index 100% rename from tests/queries/0_stateless/25340_grace_hash_limit_race.reference rename to tests/queries/0_stateless/02677_grace_hash_limit_race.reference diff --git a/tests/queries/0_stateless/25340_grace_hash_limit_race.sql b/tests/queries/0_stateless/02677_grace_hash_limit_race.sql similarity index 100% rename from 
tests/queries/0_stateless/25340_grace_hash_limit_race.sql rename to tests/queries/0_stateless/02677_grace_hash_limit_race.sql diff --git a/tests/integration/test_async_insert_memory/__init__.py b/tests/queries/0_stateless/02678_explain_pipeline_graph_with_projection.reference similarity index 100% rename from tests/integration/test_async_insert_memory/__init__.py rename to tests/queries/0_stateless/02678_explain_pipeline_graph_with_projection.reference diff --git a/tests/queries/0_stateless/02678_explain_pipeline_graph_with_projection.sql b/tests/queries/0_stateless/02678_explain_pipeline_graph_with_projection.sql new file mode 100644 index 00000000000..e8b7405d602 --- /dev/null +++ b/tests/queries/0_stateless/02678_explain_pipeline_graph_with_projection.sql @@ -0,0 +1,12 @@ +DROP TABLE IF EXISTS t1; +CREATE TABLE t1(ID UInt64, name String) engine=MergeTree order by ID; + +insert into t1(ID, name) values (1, 'abc'), (2, 'bbb'); + +-- The returned node order is uncertain +explain pipeline graph=1 select count(ID) from t1 FORMAT Null; +explain pipeline graph=1 select sum(1) from t1 FORMAT Null; +explain pipeline graph=1 select min(ID) from t1 FORMAT Null; +explain pipeline graph=1 select max(ID) from t1 FORMAT Null; + +DROP TABLE t1; diff --git a/tests/queries/0_stateless/02521_cannot-find-column-in-projection.reference b/tests/queries/0_stateless/02679_query_parameters_dangling_pointer.reference similarity index 100% rename from tests/queries/0_stateless/02521_cannot-find-column-in-projection.reference rename to tests/queries/0_stateless/02679_query_parameters_dangling_pointer.reference diff --git a/tests/queries/0_stateless/02679_query_parameters_dangling_pointer.sql b/tests/queries/0_stateless/02679_query_parameters_dangling_pointer.sql new file mode 100644 index 00000000000..7705b860e8e --- /dev/null +++ b/tests/queries/0_stateless/02679_query_parameters_dangling_pointer.sql @@ -0,0 +1,4 @@ +-- There is no use-after-free in the following query: + +SET param_o = 'a'; +CREATE TABLE test.xxx (a Int64) ENGINE=MergeTree ORDER BY ({o:String}); -- { serverError 44 }
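A closing usage note (editorial, not part of the patch): the last test above relies on
server-side query parameters, where a value assigned to param_<name> is substituted for
a {name:Type} placeholder at parse time. A minimal sketch of the mechanism, with a
hypothetical parameter id:

    SET param_id = '42';        -- define the parameter, as the test does with param_o
    SELECT {id:UInt64} + 1;     -- placeholder is substituted and cast, returning 43

In the test itself the placeholder appears inside ORDER BY ({o:String}); the expected
serverError 44 (presumably ILLEGAL_COLUMN) confirms that the server rejects the constant
sorting key cleanly instead of dereferencing a dangling pointer.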