Merge branch 'master' into keeper-linearizable-reads

Commit 0e6f8ec12b
@@ -19,7 +19,6 @@
* Make the remote filesystem cache composable, allow not to evict certain files (regarding idx, mrk, ..), delete old cache version. Now it is possible to configure cache over Azure blob storage disk, over Local disk, over StaticWeb disk, etc. This PR is marked backward incompatible because the cache configuration changes, and the config file needs to be updated for the cache to work; the old cache will still be used with the new configuration. The server will start up fine with the old cache configuration. Closes https://github.com/ClickHouse/ClickHouse/issues/36140. Closes https://github.com/ClickHouse/ClickHouse/issues/37889. [#36171](https://github.com/ClickHouse/ClickHouse/pull/36171) ([Kseniia Sumarokova](https://github.com/kssenii)).

#### New Feature

* Support SQL standard DELETE FROM syntax on merge tree tables and lightweight delete implementation for merge tree families. [#37893](https://github.com/ClickHouse/ClickHouse/pull/37893) ([Jianmei Zhang](https://github.com/zhangjmruc)) ([Alexander Gololobov](https://github.com/davenger)). Note: this new feature does not make ClickHouse an HTAP DBMS.
* Query parameters can be set in interactive mode as `SET param_abc = 'def'` and transferred via the native protocol as settings. [#39906](https://github.com/ClickHouse/ClickHouse/pull/39906) ([Nikita Taranov](https://github.com/nickitat)).
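  For instance, a parameter set this way can then be referenced through the standard `{name:Type}` substitution syntax:
  ```sql
  SET param_abc = 'def';
  SELECT {abc:String};  -- returns 'def'
  ```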
* Quota key can be set in the native protocol ([Yakov Olkhovsky](https://github.com/ClickHouse/ClickHouse/pull/39874)).
* Added a setting `exact_rows_before_limit` (0/1). When enabled, ClickHouse provides an exact value for the `rows_before_limit_at_least` statistic, at the cost of having to read the data before the limit completely. This closes [#6613](https://github.com/ClickHouse/ClickHouse/issues/6613). [#25333](https://github.com/ClickHouse/ClickHouse/pull/25333) ([kevin wan](https://github.com/MaxWk)).
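  A quick sketch of where the statistic shows up (formats that carry result metadata, such as `JSON`):
  ```sql
  SELECT number
  FROM numbers(10000)
  ORDER BY number
  LIMIT 5
  SETTINGS exact_rows_before_limit = 1
  FORMAT JSON;
  -- with the setting enabled, "rows_before_limit_at_least" in the JSON metadata is exact
  ```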
@@ -33,6 +32,8 @@
* Add formats `PrettyMonoBlock`, `PrettyNoEscapesMonoBlock`, `PrettyCompactNoEscapes`, `PrettyCompactNoEscapesMonoBlock`, `PrettySpaceNoEscapes`, `PrettySpaceMonoBlock`, `PrettySpaceNoEscapesMonoBlock`. [#39646](https://github.com/ClickHouse/ClickHouse/pull/39646) ([Kruglov Pavel](https://github.com/Avogar)).
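  Any of these output formats can be requested per query with the `FORMAT` clause, for example:
  ```sql
  SELECT number, toString(number) AS s
  FROM numbers(3)
  FORMAT PrettyCompactNoEscapesMonoBlock;
  ```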
* Add a new setting `schema_inference_hints` that allows specifying structure hints in schema inference for particular columns. Closes [#39569](https://github.com/ClickHouse/ClickHouse/issues/39569). [#40068](https://github.com/ClickHouse/ClickHouse/pull/40068) ([Kruglov Pavel](https://github.com/Avogar)).
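  The hints are given as a comma-separated list of `name type` pairs; the file name and column names below are hypothetical:
  ```sql
  DESCRIBE file('data.jsonl')
  SETTINGS schema_inference_hints = 'age UInt8, status String';
  ```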

#### Experimental Feature

* Support SQL standard DELETE FROM syntax on merge tree tables and lightweight delete implementation for merge tree families. [#37893](https://github.com/ClickHouse/ClickHouse/pull/37893) ([Jianmei Zhang](https://github.com/zhangjmruc)) ([Alexander Gololobov](https://github.com/davenger)). Note: this new feature does not make ClickHouse an HTAP DBMS.
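  A minimal usage sketch, assuming the feature is gated behind the `allow_experimental_lightweight_delete` setting of this release and that `visits` is a hypothetical MergeTree table:
  ```sql
  SET allow_experimental_lightweight_delete = 1;
  DELETE FROM visits WHERE visit_date < '2022-01-01';
  ```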

#### Performance Improvement

* Improved memory usage during memory-efficient merging of aggregation results. [#39429](https://github.com/ClickHouse/ClickHouse/pull/39429) ([Nikita Taranov](https://github.com/nickitat)).
@@ -1,6 +1,6 @@
---
sidebar_position: 69
sidebar_label: "Named connections"
sidebar_label: "Named collections"
---

# Storing details for connecting to external sources in configuration files

@@ -12,7 +12,7 @@ from users with only SQL access.
Parameters can be set in XML `<format>CSV</format>` and overridden in SQL `, format = 'TSV'`.
The parameters in SQL can be overridden using the format `key` = `value`: `compression_method = 'gzip'`.

Named connections are stored in the `config.xml` file of the ClickHouse server in the `<named_collections>` section and are applied when ClickHouse starts.
Named collections are stored in the `config.xml` file of the ClickHouse server in the `<named_collections>` section and are applied when ClickHouse starts.

Example of configuration:
```xml
@@ -24,7 +24,7 @@ $ cat /etc/clickhouse-server/config.d/named_collections.xml
</clickhouse>
```

## Named connections for accessing S3.
## Named collections for accessing S3.

For a description of the parameters, see [s3 Table Function](../sql-reference/table-functions/s3.md).

@@ -42,7 +42,7 @@ Example of configuration:
</clickhouse>
```

### Example of using named connections with the s3 function
### Example of using named collections with the s3 function

```sql
INSERT INTO FUNCTION s3(s3_mydata, filename = 'test_file.tsv.gz',
@@ -58,7 +58,7 @@ FROM s3(s3_mydata, filename = 'test_file.tsv.gz')
1 rows in set. Elapsed: 0.279 sec. Processed 10.00 thousand rows, 90.00 KB (35.78 thousand rows/s., 322.02 KB/s.)
```

### Example of using named connections with an S3 table
### Example of using named collections with an S3 table

```sql
CREATE TABLE s3_engine_table (number Int64)
@@ -73,7 +73,7 @@ SELECT * FROM s3_engine_table LIMIT 3;
└────────┘
```

## Named connections for accessing MySQL database
## Named collections for accessing MySQL database

For a description of the parameters, see [mysql](../sql-reference/table-functions/mysql.md).

@@ -95,7 +95,7 @@ Example of configuration:
</clickhouse>
```

### Example of using named connections with the mysql function
### Example of using named collections with the mysql function

```sql
SELECT count() FROM mysql(mymysql, table = 'test');
@@ -105,7 +105,7 @@ SELECT count() FROM mysql(mymysql, table = 'test');
└─────────┘
```

### Example of using named connections with a MySQL table
### Example of using named collections with a MySQL table

```sql
CREATE TABLE mytable(A Int64) ENGINE = MySQL(mymysql, table = 'test', connection_pool_size=3, replace_query=0);
@@ -116,7 +116,7 @@ SELECT count() FROM mytable;
└─────────┘
```
### Example of using named connections with a database with engine MySQL
### Example of using named collections with a database with engine MySQL

```sql
CREATE DATABASE mydatabase ENGINE = MySQL(mymysql);
@@ -129,7 +129,7 @@ SHOW TABLES FROM mydatabase;
└────────┘
```

### Example of using named connections with an external dictionary with source MySQL
### Example of using named collections with an external dictionary with source MySQL

```sql
CREATE DICTIONARY dict (A Int64, B String)
@@ -145,7 +145,7 @@ SELECT dictGet('dict', 'B', 2);
└─────────────────────────┘
```

## Named connections for accessing PostgreSQL database
## Named collections for accessing PostgreSQL database

For a description of the parameters, see [postgresql](../sql-reference/table-functions/postgresql.md).

@@ -166,7 +166,7 @@ Example of configuration:
</clickhouse>
```

### Example of using named connections with the postgresql function
### Example of using named collections with the postgresql function

```sql
SELECT * FROM postgresql(mypg, table = 'test');
@@ -186,8 +186,7 @@ SELECT * FROM postgresql(mypg, table = 'test', schema = 'public');
└───┘
```

### Example of using named connections with a database with engine PostgreSQL
### Example of using named collections with a database with engine PostgreSQL

```sql
CREATE TABLE mypgtable (a Int64) ENGINE = PostgreSQL(mypg, table = 'test', schema = 'public');
@@ -201,7 +200,7 @@ SELECT * FROM mypgtable;
└───┘
```

### Example of using named connections with a database with engine PostgreSQL
### Example of using named collections with a database with engine PostgreSQL

```sql
CREATE DATABASE mydatabase ENGINE = PostgreSQL(mypg);
@@ -213,7 +212,7 @@ SHOW TABLES FROM mydatabase
└──────┘
```

### Example of using named connections with an external dictionary with source POSTGRESQL
### Example of using named collections with an external dictionary with source POSTGRESQL

```sql
CREATE DICTIONARY dict (a Int64, b String)
@@ -228,3 +227,59 @@ SELECT dictGet('dict', 'b', 2);
│ two │
└─────────────────────────┘
```

## Named collections for accessing remote ClickHouse database

For a description of the parameters, see [remote](../sql-reference/table-functions/remote.md/#parameters).

Example of configuration:

```xml
<clickhouse>
    <named_collections>
        <remote1>
            <host>localhost</host>
            <port>9000</port>
            <database>system</database>
            <user>foo</user>
            <password>secret</password>
        </remote1>
    </named_collections>
</clickhouse>
```

### Example of using named collections with the `remote`/`remoteSecure` functions

```sql
SELECT * FROM remote(remote1, table = one);
┌─dummy─┐
│     0 │
└───────┘

SELECT * FROM remote(remote1, database = merge(system, '^one'));
┌─dummy─┐
│     0 │
└───────┘

INSERT INTO FUNCTION remote(remote1, database = default, table = test) VALUES (1,'a');

SELECT * FROM remote(remote1, database = default, table = test);
┌─a─┬─b─┐
│ 1 │ a │
└───┴───┘
```

### Example of using named collections with an external dictionary with source ClickHouse

```sql
CREATE DICTIONARY dict(a Int64, b String)
PRIMARY KEY a
SOURCE(CLICKHOUSE(NAME remote1 TABLE test DB default))
LIFETIME(MIN 1 MAX 2)
LAYOUT(HASHED());

SELECT dictGet('dict', 'b', 1);
┌─dictGet('dict', 'b', 1)─┐
│ a                       │
└─────────────────────────┘
```
@@ -34,6 +34,6 @@ install(FILES config.xml users.xml DESTINATION "${CLICKHOUSE_ETC_DIR}/clickhouse

clickhouse_embed_binaries(
    TARGET clickhouse_server_configs
    RESOURCES config.xml users.xml embedded.xml play.html
    RESOURCES config.xml users.xml embedded.xml play.html dashboard.html js/uplot.js
)
add_dependencies(clickhouse-server-lib clickhouse_server_configs)
905  programs/server/dashboard.html  Normal file
@@ -0,0 +1,905 @@
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>ClickHouse Dashboard</title>
    <link rel="icon" href="data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI1NCIgaGVpZ2h0PSI0OCIgdmlld0JveD0iMCAwIDkgOCI+PHN0eWxlPi5ve2ZpbGw6I2ZjMH0ucntmaWxsOnJlZH08L3N0eWxlPjxwYXRoIGQ9Ik0wLDcgaDEgdjEgaC0xIHoiIGNsYXNzPSJyIi8+PHBhdGggZD0iTTAsMCBoMSB2NyBoLTEgeiIgY2xhc3M9Im8iLz48cGF0aCBkPSJNMiwwIGgxIHY4IGgtMSB6IiBjbGFzcz0ibyIvPjxwYXRoIGQ9Ik00LDAgaDEgdjggaC0xIHoiIGNsYXNzPSJvIi8+PHBhdGggZD0iTTYsMCBoMSB2OCBoLTEgeiIgY2xhc3M9Im8iLz48cGF0aCBkPSJNOCwzLjI1IGgxIHYxLjUgaC0xIHoiIGNsYXNzPSJvIi8+PC9zdmc+">
    <script src="https://cdn.jsdelivr.net/npm/uplot@1.6.21/dist/uPlot.iife.min.js"></script>
    <style>
:root {
    --color: black;
    --background: linear-gradient(to bottom, #00CCFF, #00D0D0);
    --chart-background: white;
    --shadow-color: rgba(0, 0, 0, 0.25);
    --input-shadow-color: rgba(0, 255, 0, 1);
    --error-color: red;
    --legend-background: rgba(255, 255, 255, 0.75);
    --title-color: #666;
    --text-color: black;
    --edit-title-background: #FEE;
    --edit-title-border: #F88;
    --button-background-color: #FFCB80;
    --button-text-color: black;
    --new-chart-background-color: #EEE;
    --new-chart-text-color: black;
    --param-background-color: #EEE;
    --param-text-color: black;
    --input-background: white;
    --chart-button-hover-color: red;
}

[data-theme="dark"] {
    --color: white;
    --background: #151C2C;
    --chart-background: #1b2834;
    --shadow-color: rgba(0, 0, 0, 0);
    --input-shadow-color: rgba(255, 128, 0, 0.25);
    --error-color: #F66;
    --legend-background: rgba(255, 255, 255, 0.25);
    --title-color: white;
    --text-color: white;
    --edit-title-background: #364f69;
    --edit-title-border: #333;
    --button-background-color: orange;
    --button-text-color: black;
    --new-chart-background-color: #666;
    --new-chart-text-color: white;
    --param-background-color: #666;
    --param-text-color: white;
    --input-background: #364f69;
    --chart-button-hover-color: #F40;
}

* {
    box-sizing: border-box;
}
html, body {
    color: var(--color);
    height: 100%;
    overflow: auto;
    margin: 0;
}
body {
    font-family: Liberation Sans, DejaVu Sans, sans-serif, Noto Color Emoji, Apple Color Emoji, Segoe UI Emoji;
    padding: 1rem;
    overflow-x: hidden;
    background: var(--background);
    display: grid;
    grid-template-columns: auto;
    grid-template-rows: fit-content(10%) auto;
}
input {
    /* iPad, Safari */
    border-radius: 0;
    margin: 0;
}
#charts
{
    height: 100%;
    display: flex;
    flex-flow: row wrap;
    gap: 1rem;
}
.chart {
    flex: 1 40%;
    min-width: 20rem;
    min-height: 16rem;
    background: var(--chart-background);
    box-shadow: 0 0 1rem var(--shadow-color);
    overflow: hidden;
    position: relative;
}

.chart div { position: absolute; }

.inputs { font-size: 14pt; }

#connection-params {
    margin-bottom: 0.5rem;
    display: grid;
    grid-template-columns: auto 15% 15%;
    column-gap: 0.25rem;
}

.inputs input {
    box-shadow: 0 0 1rem var(--shadow-color);
    padding: 0.25rem;
}

#chart-params input {
    margin-right: 0.25rem;
}

input {
    font-family: Liberation Sans, DejaVu Sans, sans-serif, Noto Color Emoji, Apple Color Emoji, Segoe UI Emoji;
    outline: none;
    border: none;
    font-size: 14pt;
    background-color: var(--input-background);
    color: var(--text-color);
}

.u-legend th { display: none; }

.themes {
    float: right;
    font-size: 20pt;
    margin-bottom: 1rem;
}

#toggle-dark, #toggle-light {
    padding-right: 0.5rem;
    user-select: none;
    cursor: pointer;
}

#toggle-dark:hover, #toggle-light:hover {
    display: inline-block;
    transform: translate(1px, 1px);
    filter: brightness(125%);
}

#run {
    background: var(--button-background-color);
    color: var(--button-text-color);
    font-weight: bold;
    user-select: none;
    cursor: pointer;
    margin-bottom: 1rem;
}

#run:hover {
    filter: contrast(125%);
}

#add {
    font-weight: bold;
    user-select: none;
    cursor: pointer;
    padding-left: 0.5rem;
    padding-right: 0.5rem;
    background: var(--new-chart-background-color);
    color: var(--new-chart-text-color);
    float: right;
    margin-right: 0 !important;
    margin-left: 1rem;
    margin-bottom: 1rem;
}

#add:hover {
    background: var(--button-background-color);
}

form {
    display: inline;
}

form .param_name {
    font-size: 14pt;
    padding: 0.25rem;
    background: var(--param-background-color);
    color: var(--param-text-color);
    display: inline-block;
    box-shadow: 0 0 1rem var(--shadow-color);
    margin-bottom: 1rem;
}

input:focus {
    box-shadow: 0 0 1rem var(--input-shadow-color);
}

.title {
    left: 50%;
    top: 0.25em;
    transform: translate(-50%, 0);
    font-size: 16pt;
    font-weight: bold;
    color: var(--title-color);
    z-index: 10;
}

.chart-buttons {
    cursor: pointer;
    display: none;
    position: absolute;
    top: 0.25rem;
    right: 0.25rem;
    font-size: 200%;
    color: #888;
    z-index: 10;
}
.chart-buttons a {
    margin-right: 0.25rem;
}
.chart-buttons a:hover {
    color: var(--chart-button-hover-color);
}

.query-editor {
    display: none;
    grid-template-columns: auto fit-content(10%);
    grid-template-rows: auto fit-content(10%);
    z-index: 11;
    position: absolute;
    width: 100%;
    height: 100%;
}

.query-error {
    display: none;
    z-index: 10;
    position: absolute;
    color: var(--error-color);
    padding: 2rem;
}

.query-editor textarea {
    grid-row: 1;
    grid-column: 1 / span 2;
    z-index: 11;
    padding: 0.5rem;
    outline: none;
    border: none;
    font-size: 12pt;
    border-bottom: 1px solid var(--edit-title-border);
    background: var(--chart-background);
    color: var(--text-color);
    resize: none;
    margin: 0;
}

.query-editor input {
    grid-row: 2;
    padding: 0.5rem;
}

.edit-title {
    background: var(--edit-title-background);
}

.edit-confirm {
    background: var(--button-background-color);
    color: var(--button-text-color);
    font-weight: bold;
    cursor: pointer;
}
.edit-confirm:hover {
    filter: contrast(125%);
}

.nowrap {
    white-space: pre;
}

/* Source: https://cdn.jsdelivr.net/npm/uplot@1.6.21/dist/uPlot.min.css
 * It is copy-pasted to lower the number of requests.
 */
.uplot, .uplot *, .uplot *::before, .uplot *::after {box-sizing: border-box;}.uplot {font-family: system-ui, -apple-system, "Segoe UI", Roboto, "Helvetica Neue", Arial, "Noto Sans", sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol", "Noto Color Emoji";line-height: 1.5;width: min-content;}.u-title {text-align: center;font-size: 18px;font-weight: bold;}.u-wrap {position: relative;user-select: none;}.u-over, .u-under {position: absolute;}.u-under {overflow: hidden;}.uplot canvas {display: block;position: relative;width: 100%;height: 100%;}.u-axis {position: absolute;}.u-legend {font-size: 14px;margin: auto;text-align: center;}.u-inline {display: block;}.u-inline * {display: inline-block;}.u-inline tr {margin-right: 16px;}.u-legend th {font-weight: 600;}.u-legend th > * {vertical-align: middle;display: inline-block;}.u-legend .u-marker {width: 1em;height: 1em;margin-right: 4px;background-clip: padding-box !important;}.u-inline.u-live th::after {content: ":";vertical-align: middle;}.u-inline:not(.u-live) .u-value {display: none;}.u-series > * {padding: 4px;}.u-series th {cursor: pointer;}.u-legend .u-off > * {opacity: 0.3;}.u-select {background: rgba(0,0,0,0.07);position: absolute;pointer-events: none;}.u-cursor-x, .u-cursor-y {position: absolute;left: 0;top: 0;pointer-events: none;will-change: transform;z-index: 100;}.u-hz .u-cursor-x, .u-vt .u-cursor-y {height: 100%;border-right: 1px dashed #607D8B;}.u-hz .u-cursor-y, .u-vt .u-cursor-x {width: 100%;border-bottom: 1px dashed #607D8B;}.u-cursor-pt {position: absolute;top: 0;left: 0;border-radius: 50%;border: 0 solid;pointer-events: none;will-change: transform;z-index: 100;/*this has to be !important since we set inline "background" shorthand */background-clip: padding-box !important;}.u-axis.u-off, .u-select.u-off, .u-cursor-x.u-off, .u-cursor-y.u-off, .u-cursor-pt.u-off {display: none;}
    </style>
</head>
<body>
<div class="inputs">
    <form id="params">
        <div id="connection-params">
            <input spellcheck="false" id="url" type="text" value="" placeholder="URL" />
            <input spellcheck="false" id="user" type="text" value="" placeholder="user" />
            <input spellcheck="false" id="password" type="password" placeholder="password" />
        </div>
        <div>
            <input id="add" type="button" value="Add chart">
            <span class="nowrap themes"><span id="toggle-dark">🌚</span><span id="toggle-light">🌞</span></span>
            <div id="chart-params"></div>
        </div>
    </form>
</div>
<div id="charts"></div>
<script>

/** Implementation note: it might be more natural to use some reactive framework.
  * But for now it is small enough to avoid it. As a bonus we have fewer dependencies,
  * which is better for maintainability.
  *
  * TODO:
  * - zoom on the graphs should work on touch devices;
  * - add mass edit capability (edit the JSON config as a whole);
  * - compress the state for URL's #hash;
  * - save/load JSON configuration;
  * - footer with "about" or a link to source code;
  * - allow configuring a table on a server to save the dashboards;
  * - multiple lines on chart;
  * - if a query returned one value, display this value instead of a diagram;
  * - if a query returned something unusual, display the table;
  */

let host = 'https://play.clickhouse.com/';
let user = 'explorer';
let password = '';

/// If it is hosted on a server, assume that it is the address of ClickHouse.
if (location.protocol != 'file:') {
    host = location.origin;
    user = 'default';
}

/// This is just a demo configuration of the dashboard.

let queries = [
    {
        "title": "Queries/second",
        "query": `SELECT toStartOfInterval(event_time, INTERVAL {rounding:UInt32} SECOND)::INT AS t, avg(ProfileEvent_Query)
FROM system.metric_log
WHERE event_date >= toDate(now() - {seconds:UInt32}) AND event_time >= now() - {seconds:UInt32}
GROUP BY t
ORDER BY t`
    },
    {
        "title": "CPU Usage (cores)",
        "query": `SELECT toStartOfInterval(event_time, INTERVAL {rounding:UInt32} SECOND)::INT AS t, avg(ProfileEvent_OSCPUVirtualTimeMicroseconds) / 1000000
FROM system.metric_log
WHERE event_date >= toDate(now() - {seconds:UInt32}) AND event_time >= now() - {seconds:UInt32}
GROUP BY t
ORDER BY t`
    },
    {
        "title": "Queries Running",
        "query": `SELECT toStartOfInterval(event_time, INTERVAL {rounding:UInt32} SECOND)::INT AS t, avg(CurrentMetric_Query)
FROM system.metric_log
WHERE event_date >= toDate(now() - {seconds:UInt32}) AND event_time >= now() - {seconds:UInt32}
GROUP BY t
ORDER BY t`
    },
    {
        "title": "Merges Running",
        "query": `SELECT toStartOfInterval(event_time, INTERVAL {rounding:UInt32} SECOND)::INT AS t, avg(CurrentMetric_Merge)
FROM system.metric_log
WHERE event_date >= toDate(now() - {seconds:UInt32}) AND event_time >= now() - {seconds:UInt32}
GROUP BY t
ORDER BY t`
    },
    {
        "title": "Selected Bytes/second",
        "query": `SELECT toStartOfInterval(event_time, INTERVAL {rounding:UInt32} SECOND)::INT AS t, avg(ProfileEvent_SelectedBytes)
FROM system.metric_log
WHERE event_date >= toDate(now() - {seconds:UInt32}) AND event_time >= now() - {seconds:UInt32}
GROUP BY t
ORDER BY t`
    },
    {
        "title": "IO Wait",
        "query": `SELECT toStartOfInterval(event_time, INTERVAL {rounding:UInt32} SECOND)::INT AS t, avg(ProfileEvent_OSIOWaitMicroseconds) / 1000000
FROM system.metric_log
WHERE event_date >= toDate(now() - {seconds:UInt32}) AND event_time >= now() - {seconds:UInt32}
GROUP BY t
ORDER BY t`
    },
    {
        "title": "CPU Wait",
        "query": `SELECT toStartOfInterval(event_time, INTERVAL {rounding:UInt32} SECOND)::INT AS t, avg(ProfileEvent_OSCPUWaitMicroseconds) / 1000000
FROM system.metric_log
WHERE event_date >= toDate(now() - {seconds:UInt32}) AND event_time >= now() - {seconds:UInt32}
GROUP BY t
ORDER BY t`
    },
    {
        "title": "OS CPU Usage (Userspace)",
        "query": `SELECT toStartOfInterval(event_time, INTERVAL {rounding:UInt32} SECOND)::INT AS t, avg(value)
FROM system.asynchronous_metric_log
WHERE event_date >= toDate(now() - {seconds:UInt32}) AND event_time >= now() - {seconds:UInt32}
AND metric = 'OSUserTimeNormalized'
GROUP BY t
ORDER BY t`
    },
    {
        "title": "OS CPU Usage (Kernel)",
        "query": `SELECT toStartOfInterval(event_time, INTERVAL {rounding:UInt32} SECOND)::INT AS t, avg(value)
FROM system.asynchronous_metric_log
WHERE event_date >= toDate(now() - {seconds:UInt32}) AND event_time >= now() - {seconds:UInt32}
AND metric = 'OSSystemTimeNormalized'
GROUP BY t
ORDER BY t`
    },
    {
        "title": "Read From Disk",
        "query": `SELECT toStartOfInterval(event_time, INTERVAL {rounding:UInt32} SECOND)::INT AS t, avg(ProfileEvent_OSReadBytes)
FROM system.metric_log
WHERE event_date >= toDate(now() - {seconds:UInt32}) AND event_time >= now() - {seconds:UInt32}
GROUP BY t
ORDER BY t`
    },
    {
        "title": "Read From Filesystem",
        "query": `SELECT toStartOfInterval(event_time, INTERVAL {rounding:UInt32} SECOND)::INT AS t, avg(ProfileEvent_OSReadChars)
FROM system.metric_log
WHERE event_date >= toDate(now() - {seconds:UInt32}) AND event_time >= now() - {seconds:UInt32}
GROUP BY t
ORDER BY t`
    },
    {
        "title": "Memory (tracked)",
        "query": `SELECT toStartOfInterval(event_time, INTERVAL {rounding:UInt32} SECOND)::INT AS t, avg(CurrentMetric_MemoryTracking)
FROM system.metric_log
WHERE event_date >= toDate(now() - {seconds:UInt32}) AND event_time >= now() - {seconds:UInt32}
GROUP BY t
ORDER BY t`
    },
    {
        "title": "Load Average (15 minutes)",
        "query": `SELECT toStartOfInterval(event_time, INTERVAL {rounding:UInt32} SECOND)::INT AS t, avg(value)
FROM system.asynchronous_metric_log
WHERE event_date >= toDate(now() - {seconds:UInt32}) AND event_time >= now() - {seconds:UInt32}
AND metric = 'LoadAverage15'
GROUP BY t
ORDER BY t`
    },
    {
        "title": "Selected Rows/second",
        "query": `SELECT toStartOfInterval(event_time, INTERVAL {rounding:UInt32} SECOND)::INT AS t, avg(ProfileEvent_SelectedRows)
FROM system.metric_log
WHERE event_date >= toDate(now() - {seconds:UInt32}) AND event_time >= now() - {seconds:UInt32}
GROUP BY t
ORDER BY t`
    },
    {
        "title": "Inserted Rows/second",
        "query": `SELECT toStartOfInterval(event_time, INTERVAL {rounding:UInt32} SECOND)::INT AS t, avg(ProfileEvent_InsertedRows)
FROM system.metric_log
WHERE event_date >= toDate(now() - {seconds:UInt32}) AND event_time >= now() - {seconds:UInt32}
GROUP BY t
ORDER BY t`
    },
    {
        "title": "Total MergeTree Parts",
        "query": `SELECT toStartOfInterval(event_time, INTERVAL {rounding:UInt32} SECOND)::INT AS t, avg(value)
FROM system.asynchronous_metric_log
WHERE event_date >= toDate(now() - {seconds:UInt32}) AND event_time >= now() - {seconds:UInt32}
AND metric = 'TotalPartsOfMergeTreeTables'
GROUP BY t
ORDER BY t`
    },
    {
        "title": "Max Parts For Partition",
        "query": `SELECT toStartOfInterval(event_time, INTERVAL {rounding:UInt32} SECOND)::INT AS t, max(value)
FROM system.asynchronous_metric_log
WHERE event_date >= toDate(now() - {seconds:UInt32}) AND event_time >= now() - {seconds:UInt32}
AND metric = 'MaxPartCountForPartition'
GROUP BY t
ORDER BY t`
    }
];

/// Query parameters with predefined default values.
/// All other parameters will be automatically found in the queries.
let params = {
    "rounding": "60",
    "seconds": "86400"
};

let theme = 'light';

function setTheme(new_theme) {
    theme = new_theme;
    document.documentElement.setAttribute('data-theme', theme);
    window.localStorage.setItem('theme', theme);
    drawAll();
}

document.getElementById('toggle-light').addEventListener('click', e => setTheme('light'));
document.getElementById('toggle-dark').addEventListener('click', e => setTheme('dark'));

/// uPlot objects will go here.
let plots = [];
/// chart divs will be here.
let charts = document.getElementById('charts');

/// This is not quite correct (we cannot really parse SQL with a regexp) but tolerable.
const query_param_regexp = /\{(\w+):[^}]+\}/g;

/// Automatically parse more parameters from the queries.
function findParamsInQuery(query, new_params) {
    for (let match of query.matchAll(query_param_regexp)) {
        const name = match[1];
        new_params[name] = params[name] || '';
    }
}

function findParamsInQueries() {
    let new_params = {}
    queries.forEach(q => findParamsInQuery(q.query, new_params));
    params = new_params;
}

function insertParam(name, value) {
    let param_wrapper = document.createElement('span');
    param_wrapper.className = 'nowrap';

    let param_name = document.createElement('span');
    param_name.className = 'param_name';
    let param_name_text = document.createTextNode(`${name}: `);
    param_name.appendChild(param_name_text);

    let param_value = document.createElement('input');
    param_value.className = 'param';
    param_value.name = `${name}`;
    param_value.type = 'text';
    param_value.value = value;
    param_value.spellcheck = false;

    param_wrapper.appendChild(param_name);
    param_wrapper.appendChild(param_value);
    document.getElementById('chart-params').appendChild(param_wrapper);
}

function buildParams() {
    let params_elem = document.getElementById('chart-params');
    while (params_elem.firstChild) {
        params_elem.removeChild(params_elem.lastChild);
    }

    for (let [name, value] of Object.entries(params)) {
        insertParam(name, value);
    }

    let run = document.createElement('input');
    run.id = 'run';
    run.type = 'submit';
    run.value = 'Ok';

    document.getElementById('chart-params').appendChild(run);
}

function updateParams() {
    [...document.getElementsByClassName('param')].forEach(e => { params[e.name] = e.value });
}

function getParamsForURL() {
    let url = '';
    for (let [name, value] of Object.entries(params)) {
        url += `&param_${encodeURIComponent(name)}=${encodeURIComponent(value)}`;
    }
    return url;
}
function insertChart(i) {
    let q = queries[i];
    let chart = document.createElement('div');
    chart.className = 'chart';

    let chart_title = document.createElement('div');
    let title_text = document.createTextNode('');
    chart_title.appendChild(title_text);
    chart_title.className = 'title';
    chart.appendChild(chart_title);

    let query_error = document.createElement('div');
    query_error.className = 'query-error';
    query_error.appendChild(document.createTextNode(''));
    chart.appendChild(query_error);

    let query_editor = document.createElement('div');
    query_editor.className = 'query-editor';

    let query_editor_textarea = document.createElement('textarea');
    query_editor_textarea.spellcheck = false;
    query_editor_textarea.value = q.query;
    query_editor_textarea.placeholder = 'Query';
    query_editor.appendChild(query_editor_textarea);

    let query_editor_title = document.createElement('input');
    query_editor_title.type = 'text';
    query_editor_title.value = q.title;
    query_editor_title.placeholder = 'Chart title';
    query_editor_title.className = 'edit-title';
    query_editor.appendChild(query_editor_title);

    let query_editor_confirm = document.createElement('input');
    query_editor_confirm.type = 'submit';
    query_editor_confirm.value = 'Ok';
    query_editor_confirm.className = 'edit-confirm';

    function editConfirm() {
        query_editor.style.display = 'none';
        query_error.style.display = 'none';
        q.title = query_editor_title.value;
        q.query = query_editor_textarea.value;
        title_text.data = '';
        findParamsInQuery(q.query, params);
        buildParams();
        draw(i, chart, getParamsForURL(), q.query);
        saveState();
    }

    query_editor_confirm.addEventListener('click', editConfirm);

    /// Ctrl+Enter (or Cmd+Enter on Mac) will also confirm editing.
    query_editor.addEventListener('keydown', e => {
        if ((e.metaKey || e.ctrlKey) && (e.keyCode == 13 || e.keyCode == 10)) {
            editConfirm();
        }
    });

    query_editor.addEventListener('keyup', e => {
        if (e.key == 'Escape') {
            query_editor.style.display = 'none';
        }
    });

    query_editor.appendChild(query_editor_confirm);

    chart.appendChild(query_editor);

    let edit_buttons = document.createElement('div');
    edit_buttons.className = 'chart-buttons';

    let edit = document.createElement('a');
    let edit_text = document.createTextNode('✎');
    edit.appendChild(edit_text);

    function editStart() {
        query_editor.style.display = 'grid';
        query_editor_textarea.focus();
    }

    edit.addEventListener('click', e => editStart());
    if (!q.query) {
        editStart();
    }

    let trash = document.createElement('a');
    let trash_text = document.createTextNode('✕');
    trash.appendChild(trash_text);
    trash.addEventListener('click', e => {
        /// Indices may change after deletion of another element, hence the captured "i" may become incorrect.
        let idx = [...charts.querySelectorAll('.chart')].findIndex(child => chart == child);
        if (plots[idx]) {
            plots[idx].destroy();
            plots[idx] = null;
        }
        plots.splice(idx, 1);
        charts.removeChild(chart);
        queries.splice(idx, 1);
        findParamsInQueries();
        buildParams();
        resize();
        saveState();
    });

    edit_buttons.appendChild(edit);
    edit_buttons.appendChild(trash);

    chart.appendChild(edit_buttons);

    chart.addEventListener('mouseenter', e => { edit_buttons.style.display = 'block'; });
    chart.addEventListener('mouseleave', e => { edit_buttons.style.display = 'none'; });

    charts.appendChild(chart);
}

document.getElementById('add').addEventListener('click', e => {
    queries.push({ title: '', query: '' });
    insertChart(plots.length);
    plots.push(null);
    resize();
});

function legendAsTooltipPlugin({ className, style = { background: "var(--legend-background)" } } = {}) {
    let legendEl;

    function init(u, opts) {
        legendEl = u.root.querySelector(".u-legend");

        legendEl.classList.remove("u-inline");
        className && legendEl.classList.add(className);

        uPlot.assign(legendEl.style, {
            textAlign: "left",
            pointerEvents: "none",
            display: "none",
            position: "absolute",
            left: 0,
            top: 0,
            zIndex: 100,
            boxShadow: "2px 2px 10px rgba(0,0,0,0.1)",
            ...style
        });

        // hide series color markers
        const idents = legendEl.querySelectorAll(".u-marker");

        for (let i = 0; i < idents.length; i++)
            idents[i].style.display = "none";

        const overEl = u.over;

        overEl.appendChild(legendEl);

        overEl.addEventListener("mouseenter", () => {legendEl.style.display = null;});
        overEl.addEventListener("mouseleave", () => {legendEl.style.display = "none";});
    }

    function update(u) {
        let { left, top } = u.cursor;
        left -= legendEl.clientWidth / 2;
        top -= legendEl.clientHeight / 2;
        legendEl.style.transform = "translate(" + left + "px, " + top + "px)";
    }

    return {
        hooks: {
            init: init,
            setCursor: update,
        }
    };
}
async function draw(idx, chart, url_params, query) {
    if (plots[idx]) {
        plots[idx].destroy();
        plots[idx] = null;
    }

    host = document.getElementById('url').value;
    user = document.getElementById('user').value;
    password = document.getElementById('password').value;

    let url = `${host}?default_format=JSONCompactColumns`;
    if (user) {
        url += `&user=${encodeURIComponent(user)}`;
    }
    if (password) {
        url += `&password=${encodeURIComponent(password)}`;
    }

    let response, data, error;
    try {
        response = await fetch(url + url_params, { method: "POST", body: query });
        data = await response.text();
        if (response.ok) {
            data = JSON.parse(data);
        } else {
            error = data;
        }
    } catch (e) {
        console.log(e);
        error = e.toString();
    }

    if (!error) {
        if (!Array.isArray(data)) {
            error = "Query should return an array.";
        } else if (data.length == 0) {
            error = "Query returned empty result.";
        } else if (data.length != 2) {
            error = "Query should return exactly two columns: unix timestamp and value.";
        } else if (!Array.isArray(data[0]) || !Array.isArray(data[1]) || data[0].length != data[1].length) {
            error = "Wrong data format of the query.";
        }
    }

    let error_div = chart.querySelector('.query-error');
    let title_div = chart.querySelector('.title');
    if (error) {
        error_div.firstChild.data = error;
        title_div.style.display = 'none';
        error_div.style.display = 'block';
        return;
    } else {
        error_div.firstChild.data = '';
        error_div.style.display = 'none';
        title_div.style.display = 'block';
    }

    const [line_color, fill_color, grid_color, axes_color] = theme != 'dark'
        ? ["#F88", "#FEE", "#EED", "#2c3235"]
        : ["#864", "#045", "#2c3235", "#c7d0d9"];

    let sync = uPlot.sync("sync");

    const max_value = Math.max(...data[1]);

    const opts = {
        width: chart.clientWidth,
        height: chart.clientHeight,
        axes: [ { stroke: axes_color,
                  grid: { width: 1 / devicePixelRatio, stroke: grid_color },
                  ticks: { width: 1 / devicePixelRatio, stroke: grid_color } },
                { stroke: axes_color,
                  grid: { width: 1 / devicePixelRatio, stroke: grid_color },
                  ticks: { width: 1 / devicePixelRatio, stroke: grid_color } } ],
        series: [ { label: "x" },
                  { label: "y", stroke: line_color, fill: fill_color } ],
        padding: [ null, null, null, (Math.round(max_value * 100) / 100).toString().length * 6 - 10 ],
        plugins: [ legendAsTooltipPlugin() ],
        cursor: {
            sync: {
                key: "sync",
            }
        }
    };

    plots[idx] = new uPlot(opts, data, chart);
    sync.sub(plots[idx]);

    /// Set title
    const title = queries[idx].title.replaceAll(/\{(\w+)\}/g, (_, name) => params[name] );
    chart.querySelector('.title').firstChild.data = title;
}

async function drawAll() {
    let params = getParamsForURL();
    const charts = document.getElementsByClassName('chart');
    for (let i = 0; i < queries.length; ++i) {
        draw(i, charts[i], params, queries[i].query);
    }
}

function resize() {
    plots.forEach(plot => {
        if (plot) {
            let chart = plot.over.closest('.chart');
            plot.setSize({ width: chart.clientWidth, height: chart.clientHeight });
        }
    });
}

new ResizeObserver(resize).observe(document.body);

document.getElementById('params').onsubmit = function(event) {
    updateParams();
    drawAll();
    saveState();
    event.preventDefault();
}

function saveState() {
    const state = { host: host, user: user, queries: queries, params: params };
    history.pushState(state, '',
        window.location.pathname + (window.location.search || '') + '#' + btoa(JSON.stringify(state)));
}

function regenerate() {
    document.getElementById('url').value = host;
    document.getElementById('user').value = user;
    document.getElementById('password').value = password;

    findParamsInQueries();
    buildParams();

    plots.forEach(elem => elem && elem.destroy());
    plots = queries.map(e => null);

    while (charts.firstChild) {
        charts.removeChild(charts.lastChild);
    }

    for (let i = 0; i < queries.length; ++i) {
        insertChart(i);
    }
}

window.onpopstate = function(event) {
    if (!event.state) { return; }
    ({host, user, queries, params} = event.state);

    regenerate();
    drawAll();
};

if (window.location.hash) {
    try {
        ({host, user, queries, params} = JSON.parse(atob(window.location.hash.substring(1))));
    } catch {}
}

regenerate();

let new_theme = window.localStorage.getItem('theme');
if (new_theme && new_theme != theme) {
    setTheme(new_theme);
} else {
    drawAll();
}

</script>
</body>
</html>
2  programs/server/js/uplot.js  Normal file
File diff suppressed because one or more lines are too long
@@ -269,7 +269,7 @@ public:
    bool isFixedAndContiguous() const override { return data->isFixedAndContiguous(); }
    bool valuesHaveFixedSize() const override { return data->valuesHaveFixedSize(); }
    size_t sizeOfValueIfFixed() const override { return data->sizeOfValueIfFixed(); }
    StringRef getRawData() const override { return data->getRawData(); }
    std::string_view getRawData() const override { return data->getRawData(); }

    /// Not part of the common interface.

@@ -71,9 +71,9 @@ public:
        data.resize_assume_reserved(data.size() - n);
    }

    StringRef getRawData() const override
    std::string_view getRawData() const override
    {
        return StringRef(reinterpret_cast<const char*>(data.data()), byteSize());
        return {reinterpret_cast<const char*>(data.data()), byteSize()};
    }

    StringRef getDataAt(size_t n) const override

@@ -209,7 +209,7 @@ public:

    bool isFixedAndContiguous() const override { return true; }
    size_t sizeOfValueIfFixed() const override { return n; }
    StringRef getRawData() const override { return StringRef(chars.data(), chars.size()); }
    std::string_view getRawData() const override { return {reinterpret_cast<const char *>(chars.data()), chars.size()}; }

    /// Specialized part of interface, not from IColumn.
    void insertString(const String & string) { insertData(string.c_str(), string.size()); }

@@ -332,9 +332,9 @@ public:
    bool isFixedAndContiguous() const override { return true; }
    size_t sizeOfValueIfFixed() const override { return sizeof(T); }

    StringRef getRawData() const override
    std::string_view getRawData() const override
    {
        return StringRef(reinterpret_cast<const char*>(data.data()), byteSize());
        return {reinterpret_cast<const char*>(data.data()), byteSize()};
    }

    StringRef getDataAt(size_t n) const override

@@ -507,7 +507,7 @@ public:
    [[nodiscard]] virtual bool isFixedAndContiguous() const { return false; }

    /// If isFixedAndContiguous, returns the underlying data array, otherwise throws an exception.
    [[nodiscard]] virtual StringRef getRawData() const { throw Exception("Column " + getName() + " is not a contiguous block of memory", ErrorCodes::NOT_IMPLEMENTED); }
    [[nodiscard]] virtual std::string_view getRawData() const { throw Exception("Column " + getName() + " is not a contiguous block of memory", ErrorCodes::NOT_IMPLEMENTED); }

    /// If valuesHaveFixedSize, returns the size of a value, otherwise throws an exception.
    [[nodiscard]] virtual size_t sizeOfValueIfFixed() const { throw Exception("Values of column " + getName() + " are not fixed size.", ErrorCodes::CANNOT_GET_SIZE_OF_FIELD); }
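
The practical effect of this interface change at call sites is that StringRef's public `data` member becomes `std::string_view`'s `data()` accessor, as the HashMethod hunks below show. A minimal self-contained sketch of the new shape (a stand-in type, not the actual ClickHouse interface):

```cpp
#include <cstddef>
#include <string_view>

// Hypothetical stand-in for a fixed-and-contiguous column.
struct PodColumn
{
    const char * bytes;
    std::size_t length;

    // After the change: raw bytes are exposed as std::string_view instead of StringRef.
    std::string_view getRawData() const { return {bytes, length}; }
};

int main()
{
    PodColumn col{"\x01\x02\x03\x04", 4};
    // Call sites switch from `col.getRawData().data` to `col.getRawData().data()`.
    const char * vec = col.getRawData().data();
    return vec[0] == 1 ? 0 : 1;
}
```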

@@ -41,12 +41,12 @@ struct HashMethodOneNumber
    /// If the keys are of fixed length, then key_sizes contains their lengths; it is empty otherwise.
    HashMethodOneNumber(const ColumnRawPtrs & key_columns, const Sizes & /*key_sizes*/, const HashMethodContextPtr &)
    {
        vec = key_columns[0]->getRawData().data;
        vec = key_columns[0]->getRawData().data();
    }

    explicit HashMethodOneNumber(const IColumn * column)
    {
        vec = column->getRawData().data;
        vec = column->getRawData().data();
    }

    /// Creates context. Method is called once and result context is used in all threads.
@@ -577,7 +577,7 @@ struct HashMethodKeysFixed
        columns_data.reset(new const char*[keys_size]);

        for (size_t i = 0; i < keys_size; ++i)
            columns_data[i] = Base::getActualColumns()[i]->getRawData().data;
            columns_data[i] = Base::getActualColumns()[i]->getRawData().data();
    }
#endif
}

@@ -86,13 +86,10 @@ static void splitHostAndPort(const std::string & host_and_port, std::string & ou

static DNSResolver::IPAddresses hostByName(const std::string & host)
{
    /// Family: AF_UNSPEC
    /// AI_ALL is required for checking if the client is allowed to connect from an address
    auto flags = Poco::Net::DNS::DNS_HINT_AI_V4MAPPED | Poco::Net::DNS::DNS_HINT_AI_ALL;
    /// Do not resolve IPv6 (or IPv4) if no local IPv6 (or IPv4) addresses are configured.
    /// It should not affect client address checking, since a client cannot connect from an IPv6 address
    /// if the server has no IPv6 addresses.
    flags |= Poco::Net::DNS::DNS_HINT_AI_ADDRCONFIG;
    auto flags = Poco::Net::DNS::DNS_HINT_AI_ADDRCONFIG;

    DNSResolver::IPAddresses addresses;

@@ -430,19 +430,20 @@ void FileSegment::completeBatchAndResetDownloader()
    cv.notify_all();
}

void FileSegment::completeWithState(State state, bool auto_resize)
void FileSegment::completeWithState(State state)
{
    std::lock_guard cache_lock(cache->mutex);
    std::lock_guard segment_lock(mutex);

    assertNotDetached(segment_lock);

    bool is_downloader = isDownloaderImpl(segment_lock);
    if (!is_downloader)
    auto caller_id = getCallerId();
    if (caller_id != downloader_id)
    {
        cv.notify_all();
        throw Exception(ErrorCodes::REMOTE_FS_OBJECT_CACHE_ERROR,
            "File segment can be completed only by downloader or downloader's FileSegmentsHolder");
        throw Exception(
            ErrorCodes::LOGICAL_ERROR,
            "File segment completion can be done only by downloader. (CallerId: {}, downloader id: {})",
            caller_id, downloader_id);
    }

    if (state != State::DOWNLOADED
@@ -450,140 +451,48 @@ void FileSegment::completeWithState(State state, bool auto_resize)
        && state != State::PARTIALLY_DOWNLOADED_NO_CONTINUATION)
    {
        cv.notify_all();
        throw Exception(ErrorCodes::REMOTE_FS_OBJECT_CACHE_ERROR,
            "Cannot complete file segment with state: {}", stateToString(state));
    }

    if (state == State::DOWNLOADED)
    {
        if (auto_resize && downloaded_size != range().size())
        {
            LOG_TEST(log, "Resize cell {} to downloaded: {}", range().toString(), downloaded_size);
            assert(downloaded_size <= range().size());
            segment_range = Range(segment_range.left, segment_range.left + downloaded_size - 1);
        }

        /// Update states and finalize cache write buffer.
        setDownloaded(segment_lock);

        if (downloaded_size != range().size())
            throw Exception(
                ErrorCodes::REMOTE_FS_OBJECT_CACHE_ERROR,
                "Cannot complete file segment as DOWNLOADED, because downloaded size ({}) does not match expected size ({})",
                downloaded_size, range().size());
        throw Exception(
            ErrorCodes::REMOTE_FS_OBJECT_CACHE_ERROR,
            "Cannot complete file segment with state: {}", stateToString(state));
    }

    download_state = state;

    try
    {
        completeImpl(cache_lock, segment_lock);
    }
    catch (...)
    {
        if (!downloader_id.empty() && is_downloader)
            downloader_id.clear();

        cv.notify_all();
        throw;
    }

    cv.notify_all();
    completeBasedOnCurrentState(cache_lock, segment_lock);
}

void FileSegment::completeBasedOnCurrentState(std::lock_guard<std::mutex> & cache_lock)
void FileSegment::completeWithoutState(std::lock_guard<std::mutex> & cache_lock)
{
    std::lock_guard segment_lock(mutex);
    completeBasedOnCurrentState(cache_lock, segment_lock);
}

void FileSegment::completeBasedOnCurrentState(std::lock_guard<std::mutex> & cache_lock, std::lock_guard<std::mutex> & segment_lock)
{
    if (is_detached)
        return;

    assertNotDetached(segment_lock);

    completeBasedOnCurrentStateUnlocked(cache_lock, segment_lock);
}

void FileSegment::completeBasedOnCurrentStateUnlocked(
    std::lock_guard<std::mutex> & cache_lock, std::lock_guard<std::mutex> & segment_lock)
{
    bool is_downloader = isDownloaderImpl(segment_lock);
    bool is_last_holder = cache->isLastFileSegmentHolder(key(), offset(), cache_lock, segment_lock);
    bool can_update_segment_state = is_downloader || is_last_holder;
    size_t current_downloaded_size = getDownloadedSize(segment_lock);

    if (is_last_holder && download_state == State::SKIP_CACHE)
    {
        cache->remove(key(), offset(), cache_lock, segment_lock);
        return;
    }

    if (download_state == State::SKIP_CACHE || is_detached)
        return;

    if (isDownloaderImpl(segment_lock)
        && download_state != State::DOWNLOADED
        && getDownloadedSize(segment_lock) == range().size())
    {
        setDownloaded(segment_lock);
    }

    assertNotDetached(segment_lock);

    if (download_state == State::DOWNLOADING || download_state == State::EMPTY)
    {
        /// Segment state can be changed from DOWNLOADING or EMPTY only if the caller is the
        /// downloader or the only owner of the segment.

        bool can_update_segment_state = isDownloaderImpl(segment_lock) || is_last_holder;

        if (can_update_segment_state)
            download_state = State::PARTIALLY_DOWNLOADED;
    }

    try
    {
        completeImpl(cache_lock, segment_lock);
    }
    catch (...)
    {
        if (!downloader_id.empty() && isDownloaderImpl(segment_lock))
            downloader_id.clear();

        cv.notify_all();
        throw;
    }

    cv.notify_all();
}

void FileSegment::completeImpl(std::lock_guard<std::mutex> & cache_lock, std::lock_guard<std::mutex> & segment_lock)
{
    bool is_last_holder = cache->isLastFileSegmentHolder(key(), offset(), cache_lock, segment_lock);

    if (is_last_holder
        && (download_state == State::PARTIALLY_DOWNLOADED || download_state == State::PARTIALLY_DOWNLOADED_NO_CONTINUATION))
    {
        size_t current_downloaded_size = getDownloadedSize(segment_lock);
        if (current_downloaded_size == 0)
    SCOPE_EXIT({
        if (is_downloader)
        {
            download_state = State::SKIP_CACHE;
            LOG_TEST(log, "Remove cell {} (nothing downloaded)", range().toString());
            cache->remove(key(), offset(), cache_lock, segment_lock);
            cv.notify_all();
        }
    });

    LOG_TEST(log, "Complete without state (is_last_holder: {}). File segment info: {}", is_last_holder, getInfoForLogImpl(segment_lock));

    if (can_update_segment_state)
    {
        if (current_downloaded_size == range().size())
            setDownloaded(segment_lock);
        else
        {
            /**
             * Only the last holder of the current file segment can resize the cell,
             * because there is an invariant that file segments returned to users
             * in FileSegmentsHolder represent a contiguous range, so we can resize
             * it only when nobody needs it.
             */
            download_state = State::PARTIALLY_DOWNLOADED_NO_CONTINUATION;
            /// Resize this file segment by creating a copy file segment with DOWNLOADED state,
            /// but the current file segment should remain PARTIALLY_DOWNLOADED_NO_CONTINUATION and in detached state,
            /// because otherwise the invariant that getOrSet() returns a contiguous range of file segments will be broken
            /// (this will be crucial for other file segment holders, not for the current one).
            cache->reduceSizeToDownloaded(key(), offset(), cache_lock, segment_lock);
        }
        download_state = State::PARTIALLY_DOWNLOADED;

        markAsDetached(segment_lock);
        resetDownloaderImpl(segment_lock);

        if (cache_writer)
        {
@@ -593,10 +502,62 @@ void FileSegment::completeImpl(std::lock_guard<std::mutex> & cache_lock, std::lo
        }
    }

    if (!downloader_id.empty() && (isDownloaderImpl(segment_lock) || is_last_holder))
    switch (download_state)
    {
        LOG_TEST(log, "Clearing downloader id: {}, current state: {}", downloader_id, stateToString(download_state));
        downloader_id.clear();
        case State::SKIP_CACHE:
        {
            if (is_last_holder)
                cache->remove(key(), offset(), cache_lock, segment_lock);

            return;
        }
        case State::DOWNLOADED:
        {
            assert(downloaded_size == range().size());
            assert(is_downloaded);
            break;
        }
        case State::DOWNLOADING:
        case State::EMPTY:
        {
            assert(!is_last_holder);
            break;
        }
        case State::PARTIALLY_DOWNLOADED:
        case State::PARTIALLY_DOWNLOADED_NO_CONTINUATION:
        {
            if (is_last_holder)
            {
                if (current_downloaded_size == 0)
                {
                    LOG_TEST(log, "Remove cell {} (nothing downloaded)", range().toString());

                    download_state = State::SKIP_CACHE;
                    cache->remove(key(), offset(), cache_lock, segment_lock);
                }
                else
                {
                    LOG_TEST(log, "Resize cell {} to downloaded: {}", range().toString(), current_downloaded_size);

                    /**
                     * Only the last holder of the current file segment can resize the cell,
                     * because there is an invariant that file segments returned to users
                     * in FileSegmentsHolder represent a contiguous range, so we can resize
                     * it only when nobody needs it.
                     */
                    download_state = State::PARTIALLY_DOWNLOADED_NO_CONTINUATION;

                    /// Resize this file segment by creating a copy file segment with DOWNLOADED state,
                    /// but the current file segment should remain PARTIALLY_DOWNLOADED_NO_CONTINUATION and in detached state,
                    /// because otherwise the invariant that getOrSet() returns a contiguous range of file segments will be broken
                    /// (this will be crucial for other file segment holders, not for the current one).
                    cache->reduceSizeToDownloaded(key(), offset(), cache_lock, segment_lock);
                }

                markAsDetached(segment_lock);
            }
            break;
        }
    }

    LOG_TEST(log, "Completed file segment: {}", getInfoForLogImpl(segment_lock));
@ -793,7 +754,7 @@ FileSegmentsHolder::~FileSegmentsHolder()
|
||||
/// under the same mutex, because complete() checks for segment pointers.
|
||||
std::lock_guard cache_lock(cache->mutex);
|
||||
|
||||
file_segment->completeBasedOnCurrentState(cache_lock);
|
||||
file_segment->completeWithoutState(cache_lock);
|
||||
|
||||
file_segment_it = file_segments.erase(current_file_segment_it);
|
||||
}
|
||||
@ -859,13 +820,20 @@ void FileSegmentRangeWriter::completeFileSegment(FileSegment & file_segment)
|
||||
/// was initially set with a margin as `max_file_segment_size`. => We need to always
|
||||
/// resize to actual size after download finished.
|
||||
file_segment.getOrSetDownloader();
|
||||
file_segment.completeWithState(FileSegment::State::DOWNLOADED, /* auto_resize */true);
|
||||
|
||||
assert(file_segment.downloaded_size <= file_segment.range().size());
|
||||
file_segment.segment_range = FileSegment::Range(
|
||||
file_segment.segment_range.left, file_segment.segment_range.left + file_segment.downloaded_size - 1);
|
||||
file_segment.reserved_size = file_segment.downloaded_size;
|
||||
|
||||
file_segment.completeWithState(FileSegment::State::DOWNLOADED);
|
||||
|
||||
on_complete_file_segment_func(file_segment);
|
||||
}
|
||||
else
|
||||
{
|
||||
std::lock_guard cache_lock(cache->mutex);
|
||||
file_segment.completeBasedOnCurrentState(cache_lock);
|
||||
file_segment.completeWithoutState(cache_lock);
|
||||
}
|
||||
}
|
||||
|
||||
|
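The comments above both lean on the same invariant: the file segments returned to a user in a FileSegmentsHolder must cover one contiguous byte range, so a segment may only be shrunk to its downloaded prefix by the last holder, when nobody else can observe the gap; reduceSizeToDownloaded() then leaves a DOWNLOADED copy in place while the original segment stays detached. A self-contained sketch of the invariant itself, with hypothetical stand-in types rather than the actual FileSegment classes:

// Simplified model: segments held together must cover a contiguous range.
#include <cassert>
#include <cstddef>
#include <vector>

struct Range { std::size_t left; std::size_t right; }; /// inclusive bounds, like FileSegment::Range

bool isContiguous(const std::vector<Range> & segments)
{
    for (std::size_t i = 1; i < segments.size(); ++i)
        if (segments[i].left != segments[i - 1].right + 1)
            return false;
    return true;
}

int main()
{
    std::vector<Range> held = {{0, 99}, {100, 199}, {200, 299}};
    assert(isContiguous(held));

    /// Shrinking the last segment to its downloaded prefix keeps the cover gapless.
    held.back().right = 249;
    assert(isContiguous(held));

    /// Shrinking a middle segment would break the invariant for every other holder.
    held[1].right = 149;
    assert(!isContiguous(held));
}
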
@ -142,7 +142,7 @@ public:

    void completeBatchAndResetDownloader();

    void completeWithState(State state, bool auto_resize = false);
    void completeWithState(State state);

    String getInfoForLog() const;

@ -195,12 +195,8 @@ private:
    /// FileSegmentsHolder. complete() might check if the caller of the method
    /// is the last alive holder of the segment. Therefore, complete() and destruction
    /// of the file segment pointer must be done under the same cache mutex.
    void completeBasedOnCurrentState(std::lock_guard<std::mutex> & cache_lock);
    void completeBasedOnCurrentStateUnlocked(std::lock_guard<std::mutex> & cache_lock, std::lock_guard<std::mutex> & segment_lock);

    void completeImpl(
        std::lock_guard<std::mutex> & cache_lock,
        std::lock_guard<std::mutex> & segment_lock);
    void completeBasedOnCurrentState(std::lock_guard<std::mutex> & cache_lock, std::lock_guard<std::mutex> & segment_lock);
    void completeWithoutState(std::lock_guard<std::mutex> & cache_lock);

    void resetDownloaderImpl(std::lock_guard<std::mutex> & segment_lock);

@ -589,7 +589,6 @@ static constexpr UInt64 operator""_GiB(unsigned long long value)
    M(UInt64, remote_fs_read_max_backoff_ms, 10000, "Max wait time when trying to read data for remote disk", 0) \
    M(UInt64, remote_fs_read_backoff_max_tries, 5, "Max attempts to read with backoff", 0) \
    M(Bool, enable_filesystem_cache, true, "Use cache for remote filesystem. This setting does not turn on/off cache for disks (must be done via disk config), but allows to bypass cache for some queries if intended", 0) \
    M(UInt64, filesystem_cache_max_wait_sec, 5, "Allow to wait at most this number of seconds for download of current remote_fs_buffer_size bytes, and skip cache if exceeded", 0) \
    M(Bool, enable_filesystem_cache_on_write_operations, false, "Write into cache on write operations. For this setting to actually work, it must also be added to the disk config", 0) \
    M(Bool, enable_filesystem_cache_log, false, "Allows to record the filesystem caching log for each query", 0) \
    M(Bool, read_from_filesystem_cache_if_exists_otherwise_bypass_cache, false, "", 0) \

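Each `M(type, name, default, description, flags)` line above is one entry of an X-macro list: the trailing backslashes splice the entries into a single macro, and the surrounding code expands the list several times with different definitions of `M` (member declarations, metadata, and so on). A minimal sketch of that pattern with illustrative names, not ClickHouse's actual Settings machinery:

#include <cstdint>
#include <string>

#define APPLY_FOR_SETTINGS(M) \
    M(uint64_t, remote_fs_read_backoff_max_tries, 5, "Max attempts to read with backoff") \
    M(bool, enable_filesystem_cache, true, "Use cache for remote filesystem")

/// One expansion declares the struct members with their defaults...
struct Settings
{
#define DECLARE_MEMBER(TYPE, NAME, DEFAULT, DESCRIPTION) TYPE NAME = DEFAULT;
    APPLY_FOR_SETTINGS(DECLARE_MEMBER)
#undef DECLARE_MEMBER
};

/// ...another expansion of the same list produces the documentation.
inline std::string describeSettings()
{
    std::string out;
#define DESCRIBE(TYPE, NAME, DEFAULT, DESCRIPTION) out += #NAME ": " DESCRIPTION "\n";
    APPLY_FOR_SETTINGS(DESCRIBE)
#undef DESCRIBE
    return out;
}
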
@ -222,9 +222,6 @@ CachedOnDiskReadBufferFromFile::getReadBufferForFileSegment(FileSegmentPtr & fil
{
    auto range = file_segment->range();

    size_t wait_download_max_tries = settings.filesystem_cache_max_wait_sec;
    size_t wait_download_tries = 0;

    auto download_state = file_segment->state();
    LOG_TEST(log, "getReadBufferForFileSegment: {}", file_segment->getInfoForLog());

@ -274,16 +271,7 @@ CachedOnDiskReadBufferFromFile::getReadBufferForFileSegment(FileSegmentPtr & fil
        return getCacheReadBuffer(range.left);
    }

    if (wait_download_tries++ < wait_download_max_tries)
    {
        download_state = file_segment->wait();
    }
    else
    {
        LOG_DEBUG(log, "Retries to wait for file segment download exceeded ({})", wait_download_tries);
        download_state = FileSegment::State::SKIP_CACHE;
    }

    download_state = file_segment->wait();
    continue;
}
case FileSegment::State::DOWNLOADED:

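The hunk above removes the bounded retry: instead of waiting at most filesystem_cache_max_wait_sec times and then degrading to SKIP_CACHE, the reader now waits for the downloading segment unconditionally, which is why the setting is also dropped from Settings.h above and from ReadSettings and Context::getReadSettings() below. For context, a generic sketch of what such a bounded wait can look like; the types here are simplified stand-ins, and FileSegment::wait() itself is not shown in this diff:

#include <chrono>
#include <condition_variable>
#include <mutex>

enum class State { DOWNLOADING, DOWNLOADED, SKIP_CACHE };

struct Segment
{
    std::mutex m;
    std::condition_variable cv;
    State state = State::DOWNLOADING;

    /// Wait up to `timeout` for the downloader to finish; the caller decides
    /// what to do (e.g. bypass the cache) if the state is still DOWNLOADING.
    State waitFor(std::chrono::seconds timeout)
    {
        std::unique_lock lock(m);
        cv.wait_for(lock, timeout, [this] { return state != State::DOWNLOADING; });
        return state;
    }
};
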
@ -301,7 +301,7 @@ private:
    ColumnFixedString::Chars & data_to = dst.getChars();
    data_to.resize(n * rows);

    memcpy(data_to.data(), src.getRawData().data, data_to.size());
    memcpy(data_to.data(), src.getRawData().data(), data_to.size());
}

static void NO_INLINE executeToString(const IColumn & src, ColumnString & dst)

@ -183,23 +183,28 @@ bool HadoopSnappyReadBuffer::nextImpl()
    if (eof)
        return false;

    if (!in_available)
    do
    {
        in->nextIfAtEnd();
        in_available = in->buffer().end() - in->position();
        in_data = in->position();
        if (!in_available)
        {
            in->nextIfAtEnd();
            in_available = in->buffer().end() - in->position();
            in_data = in->position();
        }

        if (decoder->result == Status::NEEDS_MORE_INPUT && (!in_available || in->eof()))
        {
            throw Exception(String("hadoop snappy decode error:") + statusToString(decoder->result), ErrorCodes::SNAPPY_UNCOMPRESS_FAILED);
        }

        out_capacity = internal_buffer.size();
        out_data = internal_buffer.begin();
        decoder->result = decoder->readBlock(&in_available, &in_data, &out_capacity, &out_data);

        in->position() = in->buffer().end() - in_available;
    }
    while (decoder->result == Status::NEEDS_MORE_INPUT);

    if (decoder->result == Status::NEEDS_MORE_INPUT && (!in_available || in->eof()))
    {
        throw Exception(String("hadoop snappy decode error:") + statusToString(decoder->result), ErrorCodes::SNAPPY_UNCOMPRESS_FAILED);
    }

    out_capacity = internal_buffer.size();
    out_data = internal_buffer.begin();
    decoder->result = decoder->readBlock(&in_available, &in_data, &out_capacity, &out_data);

    in->position() = in->buffer().end() - in_available;
    working_buffer.resize(internal_buffer.size() - out_capacity);

    if (decoder->result == Status::OK)

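The restructuring above wraps the refill and decode steps in a do/while loop: a decoder that reports NEEDS_MORE_INPUT now keeps pulling more input until a whole block decodes (or the stream is truly exhausted), instead of failing after a single pass. A toy, self-contained model of that loop shape, using a hypothetical decoder rather than the HadoopSnappy API:

#include <cstddef>
#include <iostream>
#include <string>

enum class Status { OK, NEEDS_MORE_INPUT };

struct ToyDecoder
{
    std::string pending;
    Status result = Status::NEEDS_MORE_INPUT;

    /// Accumulates input and "decodes" once at least 8 bytes are available.
    Status readBlock(const std::string & chunk, std::string & out)
    {
        pending += chunk;
        if (pending.size() < 8)
            return Status::NEEDS_MORE_INPUT;
        out = pending; /// a real decoder would decompress here
        pending.clear();
        return Status::OK;
    }
};

int main()
{
    ToyDecoder decoder;
    const std::string chunks[] = {"abc", "def", "ghi"};
    std::string out;
    std::size_t i = 0;
    do /// same shape as the loop added to HadoopSnappyReadBuffer::nextImpl
    {
        decoder.result = decoder.readBlock(chunks[i++], out);
    }
    while (decoder.result == Status::NEEDS_MORE_INPUT && i < 3);
    std::cout << out << "\n"; /// prints "abcdefghi"
}
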
@ -80,7 +80,6 @@ struct ReadSettings
    size_t remote_fs_read_backoff_max_tries = 4;

    bool enable_filesystem_cache = true;
    size_t filesystem_cache_max_wait_sec = 1;
    bool read_from_filesystem_cache_if_exists_otherwise_bypass_cache = false;
    bool enable_filesystem_cache_log = false;
    bool is_file_cache_persistent = false; /// Some files can be made non-evictable.

@ -60,7 +60,8 @@ TEST(HadoopSnappyDecoder, repeatNeedMoreInput)
    String output;
    WriteBufferFromString out(output);
    copyData(read_buffer, out);
    out.finalize();
    UInt128 hashcode = sipHash128(output.c_str(), output.size());
    String hashcode_str = getHexUIntLowercase(hashcode);
    ASSERT_EQ(hashcode_str, "593afe14f61866915cc00b8c7bd86046");
    ASSERT_EQ(hashcode_str, "673e5b065186cec146789451c2a8f703");
}

@ -3451,7 +3451,6 @@ ReadSettings Context::getReadSettings() const
    res.remote_fs_read_max_backoff_ms = settings.remote_fs_read_max_backoff_ms;
    res.remote_fs_read_backoff_max_tries = settings.remote_fs_read_backoff_max_tries;
    res.enable_filesystem_cache = settings.enable_filesystem_cache;
    res.filesystem_cache_max_wait_sec = settings.filesystem_cache_max_wait_sec;
    res.read_from_filesystem_cache_if_exists_otherwise_bypass_cache = settings.read_from_filesystem_cache_if_exists_otherwise_bypass_cache;
    res.enable_filesystem_cache_log = settings.enable_filesystem_cache_log;
    res.enable_filesystem_cache_on_lower_level = settings.enable_filesystem_cache_on_lower_level;

@ -113,14 +113,14 @@ public:
    const auto & null_map_column = nullable_column->getNullMapColumn();

    auto nested_column_raw_data = nested_column.getRawData();
    __msan_unpoison(nested_column_raw_data.data, nested_column_raw_data.size);
    __msan_unpoison(nested_column_raw_data.data(), nested_column_raw_data.size());

    auto null_map_column_raw_data = null_map_column.getRawData();
    __msan_unpoison(null_map_column_raw_data.data, null_map_column_raw_data.size);
    __msan_unpoison(null_map_column_raw_data.data(), null_map_column_raw_data.size());
}
else
{
    __msan_unpoison(result_column->getRawData().data, result_column->getRawData().size);
    __msan_unpoison(result_column->getRawData().data(), result_column->getRawData().size());
}

#endif

@ -47,11 +47,11 @@ ColumnData getColumnData(const IColumn * column)

    if (const auto * nullable = typeid_cast<const ColumnNullable *>(column))
    {
        result.null_data = nullable->getNullMapColumn().getRawData().data;
        result.null_data = nullable->getNullMapColumn().getRawData().data();
        column = &nullable->getNestedColumn();
    }

    result.data = column->getRawData().data;
    result.data = column->getRawData().data();

    return result;
}

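This hunk and the two before it make the same mechanical change: callers of getRawData() switch from reading public fields (.data, .size) to calling accessors (.data(), .size()), consistent with the return type moving from a StringRef-style struct to a std::string_view-style type (the exact types are an assumption here, not visible in the diff). A stand-in sketch of the two calling conventions:

#include <cstddef>
#include <string_view>

struct RawRef { const char * data; std::size_t size; }; /// old style: public fields

inline std::size_t oldStyle(RawRef raw) { return raw.size; }            /// raw.data, raw.size
inline std::size_t newStyle(std::string_view raw) { return raw.size(); } /// raw.data(), raw.size()
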
@ -1,4 +1,5 @@
#include <Parsers/ASTQueryWithOutput.h>
#include <Parsers/ASTSetQuery.h>

namespace DB
{
@ -40,7 +41,7 @@ void ASTQueryWithOutput::formatImpl(const FormatSettings & s, FormatState & stat
    format->formatImpl(s, state, frame);
}

if (settings_ast)
if (settings_ast && assert_cast<ASTSetQuery *>(settings_ast.get())->print_in_format)
{
    s.ostr << (s.hilite ? hilite_keyword : "") << s.nl_or_ws << indent_str << "SETTINGS " << (s.hilite ? hilite_none : "");
    settings_ast->formatImpl(s, state, frame);

@ -192,7 +192,7 @@ void ASTSelectQuery::formatImpl(const FormatSettings & s, FormatState & state, F
    limitOffset()->formatImpl(s, state, frame);
}

if (settings())
if (settings() && assert_cast<ASTSetQuery *>(settings().get())->print_in_format)
{
    s.ostr << (s.hilite ? hilite_keyword : "") << s.nl_or_ws << indent_str << "SETTINGS " << (s.hilite ? hilite_none : "");
    settings()->formatImpl(s, state, frame);

@ -14,6 +14,12 @@ class ASTSetQuery : public IAST
public:
    bool is_standalone = true; /// If false, this AST is a part of another query, such as SELECT.

    /// To support overriding certain settings in a **subquery**, we add an ASTSetQuery with Settings to all subqueries, containing
    /// the list of all settings that affect them (specifically or globally to the whole query).
    /// We use `print_in_format` to avoid printing these nodes when they were left unchanged from the parent copy.
    /// See more: https://github.com/ClickHouse/ClickHouse/issues/38895
    bool print_in_format = true;

    SettingsChanges changes;
    NameToNameMap query_parameters;

@ -142,7 +142,9 @@ bool ParserQueryWithOutput::parseImpl(Pos & pos, ASTPtr & node, Expected & expec
    // Pass them manually, to apply in InterpreterSelectQuery::initSettings()
    if (query->as<ASTSelectWithUnionQuery>())
    {
        QueryWithOutputSettingsPushDownVisitor::Data data{query_with_output.settings_ast};
        auto settings = query_with_output.settings_ast->clone();
        assert_cast<ASTSetQuery *>(settings.get())->print_in_format = false;
        QueryWithOutputSettingsPushDownVisitor::Data data{settings};
        QueryWithOutputSettingsPushDownVisitor(data).visit(query);
    }
}

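Taken together with the ASTSetQuery and formatImpl() changes above: when SETTINGS is written at the output level of a SELECT ... UNION ALL query, the parser now pushes a clone of that ASTSetQuery into each child select and marks the clone with print_in_format = false, so the formatting code skips the copies and the query round-trips exactly as written (the new 02353_format_settings test below checks this). A minimal sketch of the clone-and-mark idea, with a simplified stand-in for the IAST interface:

#include <memory>
#include <vector>

struct SetNode
{
    bool print_in_format = true; /// pushed-down copies are not printed back
    std::shared_ptr<SetNode> clone() const { return std::make_shared<SetNode>(*this); }
};

int main()
{
    auto original = std::make_shared<SetNode>(); /// the SETTINGS clause the user wrote

    /// Push a marked copy into each child; only the original survives formatting.
    std::vector<std::shared_ptr<SetNode>> children(2);
    for (auto & child : children)
    {
        child = original->clone();
        child->print_in_format = false;
    }
}
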
@ -169,10 +169,20 @@ void addCommonDefaultHandlersFactory(HTTPRequestHandlerFactoryMain & factory, IS
    replicas_status_handler->allowGetAndHeadRequest();
    factory.addHandler(replicas_status_handler);

    auto web_ui_handler = std::make_shared<HandlingRuleHTTPHandlerFactory<WebUIRequestHandler>>(server, "play.html");
    web_ui_handler->attachNonStrictPath("/play");
    web_ui_handler->allowGetAndHeadRequest();
    factory.addHandler(web_ui_handler);
    auto play_handler = std::make_shared<HandlingRuleHTTPHandlerFactory<WebUIRequestHandler>>(server);
    play_handler->attachNonStrictPath("/play");
    play_handler->allowGetAndHeadRequest();
    factory.addHandler(play_handler);

    auto dashboard_handler = std::make_shared<HandlingRuleHTTPHandlerFactory<WebUIRequestHandler>>(server);
    dashboard_handler->attachNonStrictPath("/dashboard");
    dashboard_handler->allowGetAndHeadRequest();
    factory.addHandler(dashboard_handler);

    auto js_handler = std::make_shared<HandlingRuleHTTPHandlerFactory<WebUIRequestHandler>>(server);
    js_handler->attachNonStrictPath("/js/");
    js_handler->allowGetAndHeadRequest();
    factory.addHandler(js_handler);
}

void addDefaultHandlersFactory(HTTPRequestHandlerFactoryMain & factory, IServer & server, AsynchronousMetrics & async_metrics)

@ -8,12 +8,14 @@
#include <IO/HTTPCommon.h>
#include <Common/getResource.h>

#include <re2/re2.h>


namespace DB
{

WebUIRequestHandler::WebUIRequestHandler(IServer & server_, std::string resource_name_)
    : server(server_), resource_name(std::move(resource_name_))
WebUIRequestHandler::WebUIRequestHandler(IServer & server_)
    : server(server_)
{
}

@ -28,8 +30,38 @@ void WebUIRequestHandler::handleRequest(HTTPServerRequest & request, HTTPServerR
    response.setChunkedTransferEncoding(true);

    setResponseDefaultHeaders(response, keep_alive_timeout);
    response.setStatusAndReason(Poco::Net::HTTPResponse::HTTP_OK);
    *response.send() << getResource(resource_name);

    if (request.getURI().starts_with("/play"))
    {
        response.setStatusAndReason(Poco::Net::HTTPResponse::HTTP_OK);
        *response.send() << getResource("play.html");
    }
    else if (request.getURI().starts_with("/dashboard"))
    {
        response.setStatusAndReason(Poco::Net::HTTPResponse::HTTP_OK);

        std::string html(getResource("dashboard.html"));

        /// Replace the link to the external JavaScript file with the embedded one.
        /// This allows opening the HTML without running a server, while still hosting it on the server.
        /// Note: we could embed the JavaScript file inline in the HTML,
        /// but we don't, to keep "view-source" perfectly readable.

        static re2::RE2 uplot_url = R"(https://[^\s"'`]+u[Pp]lot[^\s"'`]*\.js)";
        RE2::Replace(&html, uplot_url, "/js/uplot.js");

        *response.send() << html;
    }
    else if (request.getURI() == "/js/uplot.js")
    {
        response.setStatusAndReason(Poco::Net::HTTPResponse::HTTP_OK);
        *response.send() << getResource("js/uplot.js");
    }
    else
    {
        response.setStatusAndReason(Poco::Net::HTTPResponse::HTTP_NOT_FOUND);
        *response.send() << "Not found.\n";
    }
}

}

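RE2::Replace rewrites the first match of the pattern in place and returns whether anything matched. A standalone sketch of the rewrite performed above; the CDN URL is made up for illustration, and the program needs to be linked against re2:

#include <iostream>
#include <string>
#include <re2/re2.h>

int main()
{
    std::string html = R"(<script src="https://cdn.example.com/uPlot.iife.min.js"></script>)";

    /// Same pattern as in WebUIRequestHandler: an absolute URL to a uPlot script.
    static re2::RE2 uplot_url(R"(https://[^\s"'`]+u[Pp]lot[^\s"'`]*\.js)");

    /// Replace the first match in place, pointing it at the locally served copy.
    if (RE2::Replace(&html, uplot_url, "/js/uplot.js"))
        std::cout << html << "\n"; /// <script src="/js/uplot.js"></script>
}
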
@ -13,11 +13,10 @@ class WebUIRequestHandler : public HTTPRequestHandler
{
private:
    IServer & server;
    std::string resource_name;

public:
    WebUIRequestHandler(IServer & server_, std::string resource_name_);
    WebUIRequestHandler(IServer & server_);
    void handleRequest(HTTPServerRequest & request, HTTPServerResponse & response) override;
};

}

@ -196,13 +196,14 @@
[url]
(let [encoded-url (md5 url)
      expected-file-name (.getName (io/file url))
      dest-file (str binaries-cache-dir "/" encoded-url)
      dest-folder (str binaries-cache-dir "/" encoded-url)
      dest-file (str dest-folder "/clickhouse")
      dest-symlink (str common-prefix "/" expected-file-name)
      wget-opts (concat cu/std-wget-opts [:-O dest-file])]
  (when-not (cu/exists? dest-file)
    (info "Downloading" url)
    (do (c/exec :mkdir :-p binaries-cache-dir)
        (c/cd binaries-cache-dir
    (do (c/exec :mkdir :-p dest-folder)
        (c/cd dest-folder
          (cu/wget-helper! wget-opts url))))
  (c/exec :rm :-rf dest-symlink)
  (c/exec :ln :-s dest-file dest-symlink)

tests/queries/0_stateless/02353_format_settings.reference (new normal file, 17 lines)
@ -0,0 +1,17 @@
SELECT 1
FORMAT CSV
SETTINGS max_execution_time = 0.001
SELECT 1
SETTINGS max_execution_time = 0.001
FORMAT CSV
SELECT 1
UNION ALL
SELECT 2
FORMAT CSV
SETTINGS max_execution_time = 0.001
SELECT 1
SETTINGS max_threads = 1
UNION ALL
SELECT 2
SETTINGS max_execution_time = 2
FORMAT `Null`

tests/queries/0_stateless/02353_format_settings.sh (new executable file, 16 lines)
@ -0,0 +1,16 @@
#!/usr/bin/env bash

CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
# shellcheck source=../shell_config.sh
. "$CUR_DIR"/../shell_config.sh

set -e

format="$CLICKHOUSE_FORMAT"

echo "select 1 format CSV settings max_execution_time = 0.001" | $format
echo "select 1 settings max_execution_time = 0.001 format CSV" | $format
echo "select 1 UNION ALL Select 2 format CSV settings max_execution_time = 0.001" | $format

# I don't think having multiple settings makes sense, but it's supported so test that it still works
echo "select 1 settings max_threads=1 UNION ALL select 2 settings max_execution_time=2 format Null" | $format

tests/queries/0_stateless/02389_dashboard.reference (new normal file, 2 lines)
@ -0,0 +1,2 @@
🌚
leeoniya

tests/queries/0_stateless/02389_dashboard.sh (new executable file, 8 lines)
@ -0,0 +1,8 @@
#!/usr/bin/env bash

CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
# shellcheck source=../shell_config.sh
. "$CURDIR"/../shell_config.sh

${CLICKHOUSE_CURL} -sS "${CLICKHOUSE_PORT_HTTP_PROTO}://${CLICKHOUSE_HOST}:${CLICKHOUSE_PORT_HTTP}/dashboard" | grep -oF '🌚'
${CLICKHOUSE_CURL} -sS "${CLICKHOUSE_PORT_HTTP_PROTO}://${CLICKHOUSE_HOST}:${CLICKHOUSE_PORT_HTTP}/js/uplot.js" | grep -oF 'leeoniya'