Merge branch 'master' of github.com:ClickHouse/ClickHouse

Commit 9388ec6b13

.gitmodules (vendored): 1 change
@@ -1,6 +1,7 @@
[submodule "contrib/poco"]
    path = contrib/poco
    url = https://github.com/ClickHouse-Extras/poco
    branch = clickhouse
[submodule "contrib/zstd"]
    path = contrib/zstd
    url = https://github.com/facebook/zstd.git
CHANGELOG.md: 18 changes
@@ -128,7 +128,7 @@ Kuzmenkov](https://github.com/akuzm))
Zuikov](https://github.com/4ertus2))
* Optimize partial merge join. [#7070](https://github.com/ClickHouse/ClickHouse/pull/7070)
([Artem Zuikov](https://github.com/4ertus2))
* Do not use more then 98K of memory in uniqCombined functions.
* Do not use more than 98K of memory in uniqCombined functions.
[#7236](https://github.com/ClickHouse/ClickHouse/pull/7236),
[#7270](https://github.com/ClickHouse/ClickHouse/pull/7270) ([Azat
Khuzhin](https://github.com/azat))
@@ -396,7 +396,7 @@ fix comments to make obvious that it may throw.
* Fix segfault with enabled `optimize_skip_unused_shards` and missing sharding key. [#6384](https://github.com/ClickHouse/ClickHouse/pull/6384) ([Anton Popov](https://github.com/CurtizJ))
* Fixed wrong code in mutations that may lead to memory corruption. Fixed segfault with read of address `0x14c0` that may happed due to concurrent `DROP TABLE` and `SELECT` from `system.parts` or `system.parts_columns`. Fixed race condition in preparation of mutation queries. Fixed deadlock caused by `OPTIMIZE` of Replicated tables and concurrent modification operations like ALTERs. [#6514](https://github.com/ClickHouse/ClickHouse/pull/6514) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Removed extra verbose logging in MySQL interface [#6389](https://github.com/ClickHouse/ClickHouse/pull/6389) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Return ability to parse boolean settings from 'true' and 'false' in configuration file. [#6278](https://github.com/ClickHouse/ClickHouse/pull/6278) ([alesapin](https://github.com/alesapin))
* Return the ability to parse boolean settings from 'true' and 'false' in the configuration file. [#6278](https://github.com/ClickHouse/ClickHouse/pull/6278) ([alesapin](https://github.com/alesapin))
* Fix crash in `quantile` and `median` function over `Nullable(Decimal128)`. [#6378](https://github.com/ClickHouse/ClickHouse/pull/6378) ([Artem Zuikov](https://github.com/4ertus2))
* Fixed possible incomplete result returned by `SELECT` query with `WHERE` condition on primary key contained conversion to Float type. It was caused by incorrect checking of monotonicity in `toFloat` function. [#6248](https://github.com/ClickHouse/ClickHouse/issues/6248) [#6374](https://github.com/ClickHouse/ClickHouse/pull/6374) ([dimarub2000](https://github.com/dimarub2000))
* Check `max_expanded_ast_elements` setting for mutations. Clear mutations after `TRUNCATE TABLE`. [#6205](https://github.com/ClickHouse/ClickHouse/pull/6205) ([Winter Zhang](https://github.com/zhang2014))
@@ -424,8 +424,8 @@ fix comments to make obvious that it may throw.
* Fix bug with writing secondary indices marks with adaptive granularity. [#6126](https://github.com/ClickHouse/ClickHouse/pull/6126) ([alesapin](https://github.com/alesapin))
* Fix initialization order while server startup. Since `StorageMergeTree::background_task_handle` is initialized in `startup()` the `MergeTreeBlockOutputStream::write()` may try to use it before initialization. Just check if it is initialized. [#6080](https://github.com/ClickHouse/ClickHouse/pull/6080) ([Ivan](https://github.com/abyss7))
* Clearing the data buffer from the previous read operation that was completed with an error. [#6026](https://github.com/ClickHouse/ClickHouse/pull/6026) ([Nikolay](https://github.com/bopohaa))
* Fix bug with enabling adaptive granularity when creating new replica for Replicated*MergeTree table. [#6394](https://github.com/ClickHouse/ClickHouse/issues/6394) [#6452](https://github.com/ClickHouse/ClickHouse/pull/6452) ([alesapin](https://github.com/alesapin))
* Fixed possible crash during server startup in case of exception happened in `libunwind` during exception at access to uninitialised `ThreadStatus` structure. [#6456](https://github.com/ClickHouse/ClickHouse/pull/6456) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov))
* Fix bug with enabling adaptive granularity when creating a new replica for Replicated*MergeTree table. [#6394](https://github.com/ClickHouse/ClickHouse/issues/6394) [#6452](https://github.com/ClickHouse/ClickHouse/pull/6452) ([alesapin](https://github.com/alesapin))
* Fixed possible crash during server startup in case of exception happened in `libunwind` during exception at access to uninitialized `ThreadStatus` structure. [#6456](https://github.com/ClickHouse/ClickHouse/pull/6456) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov))
* Fix crash in `yandexConsistentHash` function. Found by fuzz test. [#6304](https://github.com/ClickHouse/ClickHouse/issues/6304) [#6305](https://github.com/ClickHouse/ClickHouse/pull/6305) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed the possibility of hanging queries when server is overloaded and global thread pool becomes near full. This have higher chance to happen on clusters with large number of shards (hundreds), because distributed queries allocate a thread per connection to each shard. For example, this issue may reproduce if a cluster of 330 shards is processing 30 concurrent distributed queries. This issue affects all versions starting from 19.2. [#6301](https://github.com/ClickHouse/ClickHouse/pull/6301) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed logic of `arrayEnumerateUniqRanked` function. [#6423](https://github.com/ClickHouse/ClickHouse/pull/6423) ([alexey-milovidov](https://github.com/alexey-milovidov))
@@ -669,7 +669,7 @@ fix comments to make obvious that it may throw.
* Fix kafka tests. [#6805](https://github.com/ClickHouse/ClickHouse/pull/6805) ([Ivan](https://github.com/abyss7))

### Security Fix
* If the attacker has write access to ZooKeeper and is able to run custom server available from the network where ClickHouse run, it can create custom-built malicious server that will act as ClickHouse replica and register it in ZooKeeper. When another replica will fetch data part from malicious replica, it can force clickhouse-server to write to arbitrary path on filesystem. Found by Eldar Zaitov, information security team at Yandex. [#6247](https://github.com/ClickHouse/ClickHouse/pull/6247) ([alexey-milovidov](https://github.com/alexey-milovidov))
* If the attacker has write access to ZooKeeper and is able to run custom server available from the network where ClickHouse runs, it can create custom-built malicious server that will act as ClickHouse replica and register it in ZooKeeper. When another replica will fetch data part from malicious replica, it can force clickhouse-server to write to arbitrary path on filesystem. Found by Eldar Zaitov, information security team at Yandex. [#6247](https://github.com/ClickHouse/ClickHouse/pull/6247) ([alexey-milovidov](https://github.com/alexey-milovidov))

## ClickHouse release 19.13.3.26, 2019-08-22

@@ -697,7 +697,7 @@ fix comments to make obvious that it may throw.
* Now client receive logs from server with any desired level by setting `send_logs_level` regardless to the log level specified in server settings. [#5964](https://github.com/ClickHouse/ClickHouse/pull/5964) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov))

### Backward Incompatible Change
* The setting `input_format_defaults_for_omitted_fields` is enabled by default. Inserts in Distibuted tables need this setting to be the same on cluster (you need to set it before rolling update). It enables calculation of complex default expressions for omitted fields in `JSONEachRow` and `CSV*` formats. It should be the expected behaviour but may lead to negligible performance difference. [#6043](https://github.com/ClickHouse/ClickHouse/pull/6043) ([Artem Zuikov](https://github.com/4ertus2)), [#5625](https://github.com/ClickHouse/ClickHouse/pull/5625) ([akuzm](https://github.com/akuzm))
* The setting `input_format_defaults_for_omitted_fields` is enabled by default. Inserts in Distributed tables need this setting to be the same on cluster (you need to set it before rolling update). It enables calculation of complex default expressions for omitted fields in `JSONEachRow` and `CSV*` formats. It should be the expected behavior but may lead to negligible performance difference. [#6043](https://github.com/ClickHouse/ClickHouse/pull/6043) ([Artem Zuikov](https://github.com/4ertus2)), [#5625](https://github.com/ClickHouse/ClickHouse/pull/5625) ([akuzm](https://github.com/akuzm))

### Experimental features
* New query processing pipeline. Use `experimental_use_processors=1` option to enable it. Use for your own trouble. [#4914](https://github.com/ClickHouse/ClickHouse/pull/4914) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
@@ -1478,7 +1478,7 @@ lee](https://github.com/neverlee))

### Bug fixes

* Fixed error in #3920. This error manifestate itself as random cache corruption (messages `Unknown codec family code`, `Cannot seek through file`) and segfaults. This bug first appeared in version 19.1 and is present in versions up to 19.1.10 and 19.3.6. [#4623](https://github.com/ClickHouse/ClickHouse/pull/4623) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed error in #3920. This error manifests itself as random cache corruption (messages `Unknown codec family code`, `Cannot seek through file`) and segfaults. This bug first appeared in version 19.1 and is present in versions up to 19.1.10 and 19.3.6. [#4623](https://github.com/ClickHouse/ClickHouse/pull/4623) ([alexey-milovidov](https://github.com/alexey-milovidov))


## ClickHouse release 19.3.6, 2019-03-02
@@ -2335,7 +2335,7 @@ The expression must be a chain of equalities joined by the AND operator. Each si

### Improvements:

* Changed the numbering scheme for release versions. Now the first part contains the year of release (A.D., Moscow timezone, minus 2000), the second part contains the number for major changes (increases for most releases), and the third part is the patch version. Releases are still backwards compatible, unless otherwise stated in the changelog.
* Changed the numbering scheme for release versions. Now the first part contains the year of release (A.D., Moscow timezone, minus 2000), the second part contains the number for major changes (increases for most releases), and the third part is the patch version. Releases are still backward compatible, unless otherwise stated in the changelog.
* Faster conversions of floating-point numbers to a string ([Amos Bird](https://github.com/ClickHouse/ClickHouse/pull/2664)).
* If some rows were skipped during an insert due to parsing errors (this is possible with the `input_allow_errors_num` and `input_allow_errors_ratio` settings enabled), the number of skipped rows is now written to the server log ([Leonardo Cecchi](https://github.com/ClickHouse/ClickHouse/pull/2669)).

@@ -2534,7 +2534,7 @@ The expression must be a chain of equalities joined by the AND operator. Each si
* Configuration of the table level for the `ReplicatedMergeTree` family in order to minimize the amount of data stored in Zookeeper: `use_minimalistic_checksums_in_zookeeper = 1`
* Configuration of the `clickhouse-client` prompt. By default, server names are now output to the prompt. The server's display name can be changed. It's also sent in the `X-ClickHouse-Display-Name` HTTP header (Kirill Shvakov).
* Multiple comma-separated `topics` can be specified for the `Kafka` engine (Tobias Adamson)
* When a query is stopped by `KILL QUERY` or `replace_running_query`, the client receives the `Query was cancelled` exception instead of an incomplete result.
* When a query is stopped by `KILL QUERY` or `replace_running_query`, the client receives the `Query was canceled` exception instead of an incomplete result.

### Improvements:

README.md
@@ -13,7 +13,7 @@ ClickHouse is an open-source column-oriented database management system that all
* You can also [fill this form](https://forms.yandex.com/surveys/meet-yandex-clickhouse-team/) to meet Yandex ClickHouse team in person.

## Upcoming Events
* [ClickHouse Meetup in Istanbul](https://www.eventbrite.com/e/clickhouse-meetup-istanbul-create-blazing-fast-experiences-w-clickhouse-tickets-73101120419) on November 19.
* [ClickHouse Meetup in Ankara](https://www.eventbrite.com/e/clickhouse-meetup-ankara-create-blazing-fast-experiences-w-clickhouse-tickets-73100530655) on November 21.
* [ClickHouse Meetup in Singapore](https://www.meetup.com/Singapore-Clickhouse-Meetup-Group/events/265085331/) on November 23.
* [ClickHouse Meetup in San Francisco](https://www.eventbrite.com/e/clickhouse-december-meetup-registration-78642047481) on December 3.
contrib/poco (vendored): 2 changes
@@ -1 +1 @@
Subproject commit 6216cc01a107ce149863411ca29013a224f80343
Subproject commit 2b273bfe9db89429b2040c024484dee0197e48c7
@@ -216,15 +216,15 @@ void MySQLHandler::finishHandshake(MySQLProtocol::HandshakeResponse & packet)

void MySQLHandler::authenticate(const String & user_name, const String & auth_plugin_name, const String & initial_auth_response)
{
    // For compatibility with JavaScript MySQL client, Native41 authentication plugin is used when possible (if password is specified using double SHA1). Otherwise SHA256 plugin is used.
    auto user = connection_context.getUser(user_name);
    if (user->authentication.getType() != DB::Authentication::DOUBLE_SHA1_PASSWORD)
    {
        authPluginSSL();
    }

    try
    {
        // For compatibility with JavaScript MySQL client, Native41 authentication plugin is used when possible (if password is specified using double SHA1). Otherwise SHA256 plugin is used.
        auto user = connection_context.getUser(user_name);
        if (user->authentication.getType() != DB::Authentication::DOUBLE_SHA1_PASSWORD)
        {
            authPluginSSL();
        }

        std::optional<String> auth_response = auth_plugin_name == auth_plugin->getName() ? std::make_optional<String>(initial_auth_response) : std::nullopt;
        auth_plugin->authenticate(user_name, auth_response, connection_context, packet_sender, secure_connection, socket().peerAddress());
    }
@@ -294,12 +294,12 @@ void MySQLHandler::comQuery(ReadBuffer & payload)

void MySQLHandler::authPluginSSL()
{
    throw Exception("Compiled without SSL", ErrorCodes::SUPPORT_IS_DISABLED);
    throw Exception("ClickHouse was built without SSL support. Try specifying password using double SHA1 in users.xml.", ErrorCodes::SUPPORT_IS_DISABLED);
}

void MySQLHandler::finishHandshakeSSL([[maybe_unused]] size_t packet_size, [[maybe_unused]] char * buf, [[maybe_unused]] size_t pos, [[maybe_unused]] std::function<void(size_t)> read_bytes, [[maybe_unused]] MySQLProtocol::HandshakeResponse & packet)
{
    throw Exception("Compiled without SSL", ErrorCodes::SUPPORT_IS_DISABLED);
    throw Exception("Client requested SSL, while it is disabled.", ErrorCodes::SUPPORT_IS_DISABLED);
}

#if USE_SSL && USE_POCO_NETSSL
@@ -814,7 +814,6 @@ int Server::main(const std::vector<std::string> & /*args*/)

        create_server("mysql_port", [&](UInt16 port)
        {
#if USE_SSL
            Poco::Net::ServerSocket socket;
            auto address = socket_bind_listen(socket, listen_host, port, /* secure = */ true);
            socket.setReceiveTimeout(Poco::Timespan());
@@ -826,11 +825,6 @@ int Server::main(const std::vector<std::string> & /*args*/)
                new Poco::Net::TCPServerParams));

            LOG_INFO(log, "Listening for MySQL compatibility protocol: " + address.toString());
#else
            UNUSED(port);
            throw Exception{"SSL support for MySQL protocol is disabled because Poco library was built without NetSSL support.",
                ErrorCodes::SUPPORT_IS_DISABLED};
#endif
        });
    }

@@ -75,14 +75,15 @@ User::User(const String & name_, const String & config_elem, const Poco::Util::A
    const auto config_sub_elem = config_elem + ".allow_databases";
    if (config.has(config_sub_elem))
    {
        databases = DatabaseSet();
        Poco::Util::AbstractConfiguration::Keys config_keys;
        config.keys(config_sub_elem, config_keys);

        databases.reserve(config_keys.size());
        databases->reserve(config_keys.size());
        for (const auto & key : config_keys)
        {
            const auto database_name = config.getString(config_sub_elem + "." + key);
            databases.insert(database_name);
            databases->insert(database_name);
        }
    }

@@ -90,14 +91,15 @@ User::User(const String & name_, const String & config_elem, const Poco::Util::A
    const auto config_dictionary_sub_elem = config_elem + ".allow_dictionaries";
    if (config.has(config_dictionary_sub_elem))
    {
        dictionaries = DictionarySet();
        Poco::Util::AbstractConfiguration::Keys config_keys;
        config.keys(config_dictionary_sub_elem, config_keys);

        dictionaries.reserve(config_keys.size());
        dictionaries->reserve(config_keys.size());
        for (const auto & key : config_keys)
        {
            const auto dictionary_name = config.getString(config_dictionary_sub_elem + "." + key);
            dictionaries.insert(dictionary_name);
            dictionaries->insert(dictionary_name);
        }
    }

@@ -36,11 +36,11 @@ struct User

    /// List of allowed databases.
    using DatabaseSet = std::unordered_set<std::string>;
    DatabaseSet databases;
    std::optional<DatabaseSet> databases;

    /// List of allowed dictionaries.
    using DictionarySet = std::unordered_set<std::string>;
    DictionarySet dictionaries;
    std::optional<DictionarySet> dictionaries;

    /// Table properties.
    using PropertyMap = std::unordered_map<std::string /* name */, std::string /* value */>;
@@ -63,7 +63,7 @@ bool UsersManager::hasAccessToDatabase(const std::string & user_name, const std:
        throw Exception("Unknown user " + user_name, ErrorCodes::UNKNOWN_USER);

    auto user = it->second;
    return user->databases.empty() || user->databases.count(database_name);
    return !user->databases.has_value() || user->databases->count(database_name);
}

bool UsersManager::hasAccessToDictionary(const std::string & user_name, const std::string & dictionary_name) const
@@ -74,6 +74,6 @@ bool UsersManager::hasAccessToDictionary(const std::string & user_name, const st
        throw Exception("Unknown user " + user_name, ErrorCodes::UNKNOWN_USER);

    auto user = it->second;
    return user->dictionaries.empty() || user->dictionaries.count(dictionary_name);
    return !user->dictionaries.has_value() || user->dictionaries->count(dictionary_name);
}
}
dbms/tests/instructions/developer_instruction_en.md (new file): 268 lines
@@ -0,0 +1,268 @@
Building of ClickHouse is supported on Linux, FreeBSD and Mac OS X.

# If you use Windows

If you use Windows, you need to create a virtual machine with Ubuntu. To start working with a virtual machine, please install VirtualBox. You can download Ubuntu from the website: https://www.ubuntu.com/#download. Please create a virtual machine from the downloaded image (you should reserve at least 4GB of RAM for it). To run a command line terminal in Ubuntu, please locate a program containing the word "terminal" in its name (gnome-terminal, konsole etc.) or just press Ctrl+Alt+T.


# Creating a repository on GitHub

To start working with the ClickHouse repository you will need a GitHub account.

You probably already have one, but if you don't, please register at https://github.com. In case you do not have SSH keys, you should generate them and then upload them to GitHub. They are required for sending over your patches. It is also possible to use the same SSH keys that you use with any other SSH servers - probably you already have those.
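If you need to generate a new key pair, a minimal sketch using the standard OpenSSH client (the key type and the email comment are only placeholders, adjust them to your liking):
```
# generate a key pair; the public part printed below is what you paste into the GitHub settings
ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
cat ~/.ssh/id_rsa.pub
```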
Create a fork of the ClickHouse repository. To do that, please click the "fork" button in the upper right corner at https://github.com/ClickHouse/ClickHouse. It will create your own copy of ClickHouse/ClickHouse in your account.

The development process consists of first committing the intended changes into your fork of ClickHouse and then creating a "pull request" for these changes to be accepted into the main repository (ClickHouse/ClickHouse).

To work with git repositories, please install `git`.

To do that in Ubuntu you would run in the command line terminal:
```
sudo apt update
sudo apt install git
```

A brief manual on using Git can be found here: https://services.github.com/on-demand/downloads/github-git-cheat-sheet.pdf.
For a detailed manual on Git see: https://git-scm.com/book/ru/v2.


# Cloning a repository to your development machine

Next, you need to download the source files onto your working machine. This is called "cloning a repository" because it creates a local copy of the repository on your working machine.

In the command line terminal run:
```
git clone --recursive git@github.com:your_github_username/ClickHouse.git
cd ClickHouse
```
Note: please substitute *your_github_username* with what is appropriate!

This command will create a directory `ClickHouse` containing the working copy of the project.

It is important that the path to the working directory contains no whitespace, as it may lead to problems with running the build system.

Please note that the ClickHouse repository uses `submodules`. That is what the references to additional repositories are called (i.e. external libraries on which the project depends). It means that when cloning the repository you need to specify the `--recursive` flag as in the example above. If the repository has been cloned without submodules, you need to run the following to download them:
```
git submodule init
git submodule update
```
You can check the status with the command: `git submodule status`.

If you get the following error message:
```
Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
```
It generally means that the SSH keys for connecting to GitHub are missing. These keys are normally located in `~/.ssh`. For SSH keys to be accepted you need to upload them in the settings section of the GitHub UI.

You can also clone the repository via the https protocol:
```
git clone https://github.com/ClickHouse/ClickHouse.git
```
This, however, will not let you send your changes to the server. You can still use it temporarily and add the SSH keys later, replacing the remote address of the repository with the `git remote` command.
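For instance, a minimal sketch of switching an existing clone over to the SSH address of your fork (assuming the default remote name `origin` and the same *your_github_username* placeholder as above):
```
git remote set-url origin git@github.com:your_github_username/ClickHouse.git
git remote -v   # verify the new address
```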

You can also add the original ClickHouse repo's address to your local repository to pull updates from there:
```
git remote add upstream git@github.com:ClickHouse/ClickHouse.git
```
After successfully running this command you will be able to pull updates from the main ClickHouse repo by running `git pull upstream master`.
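One common way to use that upstream remote to keep your fork's master branch up to date is sketched below (branch names are assumptions; adjust them to your workflow):
```
git fetch upstream
git checkout master
git merge upstream/master   # or: git rebase upstream/master
git push origin master
```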


# Build System

ClickHouse uses CMake and Ninja for building.

CMake - a meta-build system that can generate Ninja files (build tasks).
Ninja - a smaller build system with a focus on speed, used to execute those CMake-generated tasks.

To install on Ubuntu, Debian or Mint run `sudo apt install cmake ninja-build`.

On CentOS, RedHat run `sudo yum install cmake ninja-build`.

If you use Arch or Gentoo, you probably know yourself how to install CMake.

For installing CMake and Ninja on Mac OS X first install Homebrew and then install everything else via brew:
```
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
brew install cmake ninja
```

Next, check the version of CMake: `cmake --version`. If it is below 3.3, you should install a newer version from the website: https://cmake.org/download/.


# Optional External Libraries

ClickHouse uses several external libraries for building. Most of them do not need to be installed separately as they are built together with ClickHouse from the sources located in the submodules. You can check the list in `contrib`.

There are a couple of libraries that are not built from sources but are supplied by the system: ICU and Readline; they are therefore recommended to be installed.

Ubuntu: `sudo apt install libicu-dev libreadline-dev`

Mac OS X: `brew install icu4c readline`

However, these libraries are optional and ClickHouse can well be built without them. ICU is used for support of `COLLATE` in `ORDER BY` (i.e. for sorting in the Turkish alphabet). Readline is used for more convenient command input in clickhouse-client.
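As a quick illustration of what ICU enables, a sketch of a collation-aware query (it assumes a locally running server built with ICU support; the sample strings are arbitrary):
```
clickhouse-client --query "SELECT arrayJoin(['z', 'ö', 'o', 'a']) AS s ORDER BY s COLLATE 'tr'"
```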


# C++ Compiler

GCC starting from version 7 and Clang version 7 or above are supported for building ClickHouse.

Official Yandex builds currently use GCC because it generates machine code with slightly better performance (yielding a difference of up to several percent according to our benchmarks). Clang is usually more convenient for development. Still, our continuous integration (CI) platform runs checks for about a dozen build combinations.

To install GCC on Ubuntu run: `sudo apt install gcc g++`

Check the version of gcc: `gcc --version`. If it is below 7, then follow the instruction here: https://clickhouse.yandex/docs/en/development/build/#install-gcc-7.

To install GCC on Mac OS X run: `brew install gcc`.

If you decide to use Clang, you can also install `libc++` and `lld`, if you know what they are. Using `ccache` is also recommended.
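If you go the Clang route, the only difference from the GCC steps below is the pair of compiler environment variables; a sketch assuming clang-7/clang++-7 binaries are installed and on PATH:
```
export CC=clang-7 CXX=clang++-7
```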


# The Building process

Now that you are ready to build ClickHouse, we recommend that you create a separate directory `build` inside `ClickHouse` that will contain all of the build artefacts:
```
mkdir build
cd build
```
You can have several different directories (build_release, build_debug, etc.) for different types of build.

While inside the `build` directory, configure your build by running CMake. Before the first run you need to define environment variables that specify the compiler (gcc version 7 in this example).
```
export CC=gcc-7 CXX=g++-7
cmake ..
```
The `CC` variable specifies the compiler for C (short for C Compiler), and the `CXX` variable instructs which C++ compiler is to be used for building.

For a faster build, you can resort to the `debug` build type - a build with no optimizations. For that, supply the following parameter `-D CMAKE_BUILD_TYPE=Debug`:
```
cmake -D CMAKE_BUILD_TYPE=Debug ..
```
You can change the type of build by running this command in the `build` directory.
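The same mechanism selects any other CMake build type; for instance, a sketch using the standard CMake `RelWithDebInfo` type (an optimized build with debug info):
```
cmake -D CMAKE_BUILD_TYPE=RelWithDebInfo ..
```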

Run ninja to build:
```
ninja clickhouse-server clickhouse-client
```
Only the required binaries are going to be built in this example.

If you need to build all the binaries (utilities and tests), you should run ninja with no parameters:
```
ninja
```

A full build requires about 30GB of free disk space, or 15GB to build the main binaries.

When only a limited amount of RAM is available on the build machine, you should limit the number of build tasks run in parallel with the `-j` param:
```
ninja -j 1 clickhouse-server clickhouse-client
```
On machines with 4GB of RAM it is recommended to specify 1; for 8GB of RAM `-j 2` is recommended.

If you get the message: `ninja: error: loading 'build.ninja': No such file or directory`, it means that generating a build configuration has failed and you need to inspect the message above.

Upon successful start of the building process you'll see the build progress: the number of processed tasks and the total number of tasks.

While building, messages about protobuf files in the libhdfs2 library like `libprotobuf WARNING` may show up. They affect nothing and are safe to ignore.

Upon successful build you get an executable file `ClickHouse/<build_dir>/dbms/programs/clickhouse`:
```
ls -l dbms/programs/clickhouse
```


# Running the built executable of ClickHouse

To run the server under the current user you need to navigate to `ClickHouse/dbms/programs/server/` (located outside of `build`) and run:

```
../../../build/dbms/programs/clickhouse server
```

In this case ClickHouse will use config files located in the current directory. You can run `clickhouse server` from any directory, specifying the path to a config file as the command line parameter `--config-file`.

To connect to ClickHouse with clickhouse-client in another terminal, navigate to `ClickHouse/build/dbms/programs/` and run `clickhouse client`.

If you get a `Connection refused` message on Mac OS X or FreeBSD, try specifying host address 127.0.0.1:
```
clickhouse client --host 127.0.0.1
```

You can replace the production version of the ClickHouse binary installed on your system with your custom-built ClickHouse binary. To do that, install ClickHouse on your machine following the instructions from the official website. Next, run the following:
```
sudo service clickhouse-server stop
sudo cp ClickHouse/build/dbms/programs/clickhouse /usr/bin/
sudo service clickhouse-server start
```

Note that `clickhouse-client`, `clickhouse-server` and others are symlinks to the commonly shared `clickhouse` binary.
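As an illustration (a sketch; the exact set of symlinks depends on how the package was installed), the same binary can be reached either through a symlink or through a subcommand:
```
ls -l /usr/bin/clickhouse-server   # usually a symlink to the shared clickhouse binary
clickhouse server --help           # same entry point as clickhouse-server
clickhouse client --help           # same entry point as clickhouse-client
```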

You can also run your custom-built ClickHouse binary with the config file from the ClickHouse package installed on your system:
```
sudo service clickhouse-server stop
sudo -u clickhouse ClickHouse/build/dbms/programs/clickhouse server --config-file /etc/clickhouse-server/config.xml
```


# IDE (Integrated Development Environment)

If you do not know which IDE to use, we recommend that you use CLion. CLion is commercial software, but it offers a 30-day free trial period. It is also free of charge for students. CLion can be used both on Linux and on Mac OS X.

KDevelop and QtCreator are other great alternative IDEs for developing ClickHouse. KDevelop is a very handy IDE, although unstable. If KDevelop crashes a while after opening a project, you should click the "Stop All" button as soon as it has opened the list of the project's files. After doing so, KDevelop should be fine to work with.

As simple code editors you can use Sublime Text or Visual Studio Code, or Kate (all of which are available on Linux).

Just in case, it is worth mentioning that CLion creates its own `build` path, selects `debug` as the build type by itself, uses the version of CMake that is bundled with CLion rather than the one installed by you, and finally uses `make` to run build tasks instead of `ninja`. This is normal behaviour; just keep it in mind to avoid confusion.


# Writing Code

The description of ClickHouse architecture can be found here: https://clickhouse.yandex/docs/en/development/architecture/

The Code Style Guide: https://clickhouse.yandex/docs/en/development/style/

Writing tests: https://clickhouse.yandex/docs/en/development/tests/

List of tasks: https://github.com/yandex/ClickHouse/blob/master/dbms/tests/instructions/easy_tasks_sorted_en.md


# Test Data

Developing ClickHouse often requires loading realistic datasets. It is particularly important for performance testing. We have a specially prepared set of anonymized data from Yandex.Metrica. It additionally requires some 3GB of free disk space. Note that this data is not required to accomplish most development tasks.

```
sudo apt install wget xz-utils

wget https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz
wget https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz

xz -v -d hits_v1.tsv.xz
xz -v -d visits_v1.tsv.xz

clickhouse-client

CREATE TABLE test.hits ( WatchID UInt64, JavaEnable UInt8, Title String, GoodEvent Int16, EventTime DateTime, EventDate Date, CounterID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RegionID UInt32, UserID UInt64, CounterClass Int8, OS UInt8, UserAgent UInt8, URL String, Referer String, URLDomain String, RefererDomain String, Refresh UInt8, IsRobot UInt8, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), ResolutionWidth UInt16, ResolutionHeight UInt16, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, FlashMinor2 String, NetMajor UInt8, NetMinor UInt8, UserAgentMajor UInt16, UserAgentMinor FixedString(2), CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, MobilePhone UInt8, MobilePhoneModel String, Params String, IPNetworkID UInt32, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, IsArtifical UInt8, WindowClientWidth UInt16, WindowClientHeight UInt16, ClientTimeZone Int16, ClientEventTime DateTime, SilverlightVersion1 UInt8, SilverlightVersion2 UInt8, SilverlightVersion3 UInt32, SilverlightVersion4 UInt16, PageCharset String, CodeVersion UInt32, IsLink UInt8, IsDownload UInt8, IsNotBounce UInt8, FUniqID UInt64, HID UInt32, IsOldCounter UInt8, IsEvent UInt8, IsParameter UInt8, DontCountHits UInt8, WithHash UInt8, HitColor FixedString(1), UTCEventTime DateTime, Age UInt8, Sex UInt8, Income UInt8, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), RemoteIP UInt32, RemoteIP6 FixedString(16), WindowName Int32, OpenerName Int32, HistoryLength Int16, BrowserLanguage FixedString(2), BrowserCountry FixedString(2), SocialNetwork String, SocialAction String, HTTPError UInt16, SendTiming Int32, DNSTiming Int32, ConnectTiming Int32, ResponseStartTiming Int32, ResponseEndTiming Int32, FetchTiming Int32, RedirectTiming Int32, DOMInteractiveTiming Int32, DOMContentLoadedTiming Int32, DOMCompleteTiming Int32, LoadEventStartTiming Int32, LoadEventEndTiming Int32, NSToDOMContentLoadedTiming Int32, FirstPaintTiming Int32, RedirectCount Int8, SocialSourceNetworkID UInt8, SocialSourcePage String, ParamPrice Int64, ParamOrderID String, ParamCurrency FixedString(3), ParamCurrencyID UInt16, GoalsReached Array(UInt32), OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, RefererHash UInt64, URLHash UInt64, CLID UInt32, YCLID UInt64, ShareService String, ShareURL String, ShareTitle String, `ParsedParams.Key1` Array(String), `ParsedParams.Key2` Array(String), `ParsedParams.Key3` Array(String), `ParsedParams.Key4` Array(String), `ParsedParams.Key5` Array(String), `ParsedParams.ValueDouble` Array(Float64), IslandID FixedString(16), RequestNum UInt32, RequestTry UInt8) ENGINE = MergeTree PARTITION BY toYYYYMM(EventDate) SAMPLE BY intHash32(UserID) ORDER BY (CounterID, EventDate, intHash32(UserID), EventTime);
CREATE TABLE test.visits ( CounterID UInt32, StartDate Date, Sign Int8, IsNew UInt8, VisitID UInt64, UserID UInt64, StartTime DateTime, Duration UInt32, UTCStartTime DateTime, PageViews Int32, Hits Int32, IsBounce UInt8, Referer String, StartURL String, RefererDomain String, StartURLDomain String, EndURL String, LinkURL String, IsDownload UInt8, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, PlaceID Int32, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), IsYandex UInt8, GoalReachesDepth Int32, GoalReachesURL Int32, GoalReachesAny Int32, SocialSourceNetworkID UInt8, SocialSourcePage String, MobilePhoneModel String, ClientEventTime DateTime, RegionID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RemoteIP UInt32, RemoteIP6 FixedString(16), IPNetworkID UInt32, SilverlightVersion3 UInt32, CodeVersion UInt32, ResolutionWidth UInt16, ResolutionHeight UInt16, UserAgentMajor UInt16, UserAgentMinor UInt16, WindowClientWidth UInt16, WindowClientHeight UInt16, SilverlightVersion2 UInt8, SilverlightVersion4 UInt16, FlashVersion3 UInt16, FlashVersion4 UInt16, ClientTimeZone Int16, OS UInt8, UserAgent UInt8, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, NetMajor UInt8, NetMinor UInt8, MobilePhone UInt8, SilverlightVersion1 UInt8, Age UInt8, Sex UInt8, Income UInt8, JavaEnable UInt8, CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, BrowserLanguage UInt16, BrowserCountry UInt16, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), Params Array(String), `Goals.ID` Array(UInt32), `Goals.Serial` Array(UInt32), `Goals.EventTime` Array(DateTime), `Goals.Price` Array(Int64), `Goals.OrderID` Array(String), `Goals.CurrencyID` Array(UInt32), WatchIDs Array(UInt64), ParamSumPrice Int64, ParamCurrency FixedString(3), ParamCurrencyID UInt16, ClickLogID UInt64, ClickEventID Int32, ClickGoodEvent Int32, ClickEventTime DateTime, ClickPriorityID Int32, ClickPhraseID Int32, ClickPageID Int32, ClickPlaceID Int32, ClickTypeID Int32, ClickResourceID Int32, ClickCost UInt32, ClickClientIP UInt32, ClickDomainID UInt32, ClickURL String, ClickAttempt UInt8, ClickOrderID UInt32, ClickBannerID UInt32, ClickMarketCategoryID UInt32, ClickMarketPP UInt32, ClickMarketCategoryName String, ClickMarketPPName String, ClickAWAPSCampaignName String, ClickPageName String, ClickTargetType UInt16, ClickTargetPhraseID UInt64, ClickContextType UInt8, ClickSelectType Int8, ClickOptions String, ClickGroupBannerID Int32, OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, FirstVisit DateTime, PredLastVisit Date, LastVisit Date, TotalVisits UInt32, `TraficSource.ID` Array(Int8), `TraficSource.SearchEngineID` Array(UInt16), `TraficSource.AdvEngineID` Array(UInt8), `TraficSource.PlaceID` Array(UInt16), `TraficSource.SocialSourceNetworkID` Array(UInt8), `TraficSource.Domain` Array(String), `TraficSource.SearchPhrase` Array(String), `TraficSource.SocialSourcePage` Array(String), Attendance FixedString(16), CLID UInt32, YCLID UInt64, NormalizedRefererHash UInt64, SearchPhraseHash UInt64, RefererDomainHash UInt64, NormalizedStartURLHash UInt64, StartURLDomainHash UInt64, NormalizedEndURLHash UInt64, TopLevelDomain UInt64, URLScheme UInt64, OpenstatServiceNameHash UInt64, OpenstatCampaignIDHash UInt64, OpenstatAdIDHash UInt64, 
OpenstatSourceIDHash UInt64, UTMSourceHash UInt64, UTMMediumHash UInt64, UTMCampaignHash UInt64, UTMContentHash UInt64, UTMTermHash UInt64, FromHash UInt64, WebVisorEnabled UInt8, WebVisorActivity UInt32, `ParsedParams.Key1` Array(String), `ParsedParams.Key2` Array(String), `ParsedParams.Key3` Array(String), `ParsedParams.Key4` Array(String), `ParsedParams.Key5` Array(String), `ParsedParams.ValueDouble` Array(Float64), `Market.Type` Array(UInt8), `Market.GoalID` Array(UInt32), `Market.OrderID` Array(String), `Market.OrderPrice` Array(Int64), `Market.PP` Array(UInt32), `Market.DirectPlaceID` Array(UInt32), `Market.DirectOrderID` Array(UInt32), `Market.DirectBannerID` Array(UInt32), `Market.GoodID` Array(String), `Market.GoodName` Array(String), `Market.GoodQuantity` Array(Int32), `Market.GoodPrice` Array(Int64), IslandID FixedString(16)) ENGINE = CollapsingMergeTree(Sign) PARTITION BY toYYYYMM(StartDate) SAMPLE BY intHash32(UserID) ORDER BY (CounterID, StartDate, intHash32(UserID), VisitID);

clickhouse-client --max_insert_block_size 100000 --query "INSERT INTO test.hits FORMAT TSV" < hits_v1.tsv
clickhouse-client --max_insert_block_size 100000 --query "INSERT INTO test.visits FORMAT TSV" < visits_v1.tsv
```


# Creating Pull Request

Navigate to your fork repository in GitHub's UI. If you have been developing in a branch, you need to select that branch. There will be a "Pull request" button located on the screen. In essence this means "create a request for accepting my changes into the main repository".

A pull request can be created even if the work is not completed yet. In this case please put the word "WIP" (work in progress) at the beginning of the title; it can be changed later. This is useful for cooperative reviewing and discussion of changes as well as for running all of the available tests. It is important that you provide a brief description of your changes; it will later be used for generating release changelogs.

Testing will commence as soon as Yandex employees label your PR with a tag "can be tested". The results of some first checks (e.g. code style) will come in within several minutes. Build check results will arrive within half an hour. And the main set of tests will report itself within an hour.

The system will prepare ClickHouse binary builds for your pull request individually. To retrieve these builds click the "Details" link next to the "ClickHouse build check" entry in the list of checks. There you will find direct links to the built .deb packages of ClickHouse which you can deploy even on your production servers (if you have no fear).

Most probably some of the builds will fail at first. This is due to the fact that we check builds with both gcc and clang, with almost all existing warnings (always with the `-Werror` flag) enabled for clang. On that same page you can find all of the build logs so that you do not have to build ClickHouse in all of the possible ways.
@@ -0,0 +1,31 @@
<?xml version="1.0"?>
<yandex>
    <logger>
        <level>trace</level>
        <log>/var/log/clickhouse-server/clickhouse-server.log</log>
        <errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
        <size>1000M</size>
        <count>10</count>
    </logger>

    <tcp_port>9000</tcp_port>
    <listen_host>127.0.0.1</listen_host>

    <openSSL>
        <client>
            <cacheSessions>true</cacheSessions>
            <verificationMode>none</verificationMode>
            <invalidCertificateHandler>
                <name>AcceptCertificateHandler</name>
            </invalidCertificateHandler>
        </client>
    </openSSL>

    <max_concurrent_queries>500</max_concurrent_queries>
    <mark_cache_size>5368709120</mark_cache_size>
    <path>./clickhouse/</path>
    <users_config>users.xml</users_config>

    <max_table_size_to_drop>1</max_table_size_to_drop>
    <max_partition_size_to_drop>1</max_partition_size_to_drop>
</yandex>
@@ -0,0 +1,46 @@
<?xml version="1.0"?>
<yandex>
    <profiles>
        <default>
        </default>
    </profiles>

    <users>
        <default>
            <password></password>
            <networks incl="networks" replace="replace">
                <ip>::/0</ip>
            </networks>
            <profile>default</profile>
            <quota>default</quota>
        </default>

        <no_access>
            <password></password>
            <networks incl="networks" replace="replace">
                <ip>::/0</ip>
            </networks>
            <profile>default</profile>
            <quota>default</quota>
            <allow_databases></allow_databases>
        </no_access>

        <has_access>
            <password></password>
            <networks incl="networks" replace="replace">
                <ip>::/0</ip>
            </networks>
            <profile>default</profile>
            <quota>default</quota>
            <allow_databases>
                <database>test</database>
                <database>db1</database>
            </allow_databases>
        </has_access>
    </users>

    <quotas>
        <default>
        </default>
    </quotas>
</yandex>
@@ -0,0 +1,64 @@
import time
import pytest

from helpers.cluster import ClickHouseCluster


cluster = ClickHouseCluster(__file__)
node = cluster.add_instance('node', config_dir="configs")


@pytest.fixture(scope="module")
def start_cluster():
    try:
        cluster.start()
        node.query("CREATE DATABASE test;")
        yield cluster
    finally:
        cluster.shutdown()


def test_user_zero_database_access(start_cluster):
    try:
        node.exec_in_container(["bash", "-c", "/usr/bin/clickhouse client --user 'no_access' --query 'DROP DATABASE test'"], user='root')
        assert False, "user with no access rights dropped database test"
    except AssertionError:
        raise
    except Exception as ex:
        print ex

    try:
        node.exec_in_container(["bash", "-c", "/usr/bin/clickhouse client --user 'has_access' --query 'DROP DATABASE test'"], user='root')
    except Exception as ex:
        assert False, "user with access rights can't drop database test"

    try:
        node.exec_in_container(["bash", "-c", "/usr/bin/clickhouse client --user 'has_access' --query 'CREATE DATABASE test'"], user='root')
    except Exception as ex:
        assert False, "user with access rights can't create database test"

    try:
        node.exec_in_container(["bash", "-c", "/usr/bin/clickhouse client --user 'no_access' --query 'CREATE DATABASE test2'"], user='root')
        assert False, "user with no access rights created database test2"
    except AssertionError:
        raise
    except Exception as ex:
        print ex

    try:
        node.exec_in_container(["bash", "-c", "/usr/bin/clickhouse client --user 'has_access' --query 'CREATE DATABASE test2'"], user='root')
        assert False, "user with limited access rights created database test2 which is outside of his scope of rights"
    except AssertionError:
        raise
    except Exception as ex:
        print ex

    try:
        node.exec_in_container(["bash", "-c", "/usr/bin/clickhouse client --user 'default' --query 'CREATE DATABASE test2'"], user='root')
    except Exception as ex:
        assert False, "user with full access rights can't create database test2"

    try:
        node.exec_in_container(["bash", "-c", "/usr/bin/clickhouse client --user 'default' --query 'DROP DATABASE test2'"], user='root')
    except Exception as ex:
        assert False, "user with full access rights can't drop database test2"
@@ -4,8 +4,6 @@

The classification of tasks is arbitrary; it is based on the well-known [classification of animals](https://ru.wikipedia.org/wiki/%D0%9A%D0%BB%D0%B0%D1%81%D1%81%D0%B8%D1%84%D0%B8%D0%BA%D0%B0%D1%86%D0%B8%D1%8F_%D0%B6%D0%B8%D0%B2%D0%BE%D1%82%D0%BD%D1%8B%D1%85_(%D0%91%D0%BE%D1%80%D1%85%D0%B5%D1%81)).

A note on terminology. The text sometimes uses the term "botched task". This is a technical term without any offensive connotation; it simply denotes a botched task.


## 1. Data storage, indexing.

@@ -165,9 +163,9 @@

Requires 3.1.

### 3.3. Fix the idiotic documentation search.
### 3.3. Fix the catastrophically, disgustingly unacceptable documentation search.

[Ivan Blinkov](https://github.com/blinkov/) is not doing it, and there are suspicions that he is not capable of completing this task. The documentation site itself is built on trash technologies that are hard to fix.
[Ivan Blinkov](https://github.com/blinkov/) is a very good person. The documentation site itself is built on trash technologies that are hard to fix.

### 3.4. Add Japanese to the documentation.

@@ -376,7 +374,7 @@ Wolf Kreuzerkrieg. Возможно, его уже не интересует э

### 7.26. Byte-for-byte identity of the repository with Arcadia.

Dmitry Kopylov, Anastasia Sidorovskaya, Alexander Artamonov, Ilya Zubkov. In practice nobody is doing anything.
The DevTools team. In practice nobody is doing anything.

### 7.27. Running the automated tests in Arcadia.

@@ -419,7 +417,7 @@ Wolf Kreuzerkrieg. Возможно, его уже не интересует э
### 7.37. Sort out repo.yandex.ru.

There are complaints about download speed. There is a suspicion that repo.yandex.ru is not a proper CDN. There is no simple access to monitoring and logs.
Very occasionally a package needs to be deleted, but that can only be done through Arkady Shein.
Very occasionally a package needs to be deleted, but that can only be done through one particular person.


## 8. Integration with external systems.
@@ -444,7 +442,7 @@ Wolf Kreuzerkrieg. Возможно, его уже не интересует э

### 8.6. Kerberos authentication for HDFS and Kafka.

In the queue, possibly [Ivan Lezhankin](https://github.com/abyss7).
Andrey Konyaev, ArenaData.

### 8.7. Fixing a minor HDFS issue on very old Linux kernels.

@@ -507,7 +505,7 @@ Wolf Kreuzerkrieg. Возможно, его уже не интересует э

### 10.1. Fixing a hang in the YT access library.

The library for accessing YT has idiotic behavior and does not survive the drills.
The library for accessing YT has catastrophically, disgustingly unacceptable behavior and does not survive the drills.
Needed for BK and Metrica. Root-cause analysis: [Alexander Sapin](https://github.com/alesapin). Further fixing may happen on the YT side.

### 10.2. Fixing a SIGILL in the YT access library.
@@ -521,7 +519,7 @@ Wolf Kreuzerkrieg. Возможно, его уже не интересует э

### 10.4. A dictionary source from YDB (KikiMR).

Needed for Metrica, and Alexander Gololobov will do it. Or he will read this now and say "I will never do this task".
Needed for Metrica, and a mysterious stranger from the KikiMR team will do it. Or he will read this now and say "I will never do this task".

### 10.5. Closing connections and reducing the number of connections for MySQL and ODBC.

@@ -739,6 +737,10 @@ zhang2014

### 15.5. Using the table key to optimize merge JOIN.


### 15.6. SEMI and ANTI JOIN.

Artyom Zuikov.


## 16. Data types and functions.

@@ -778,7 +780,7 @@ zhang2014

### 17.4. Speeding up geohash using a library from Arcadia.

Presumably [Andrey Chulkov](https://github.com/achulkov2). The original implementation in Arcadia is by Denis Shaposhnikov. Approval from management has been obtained.
Presumably [Andrey Chulkov](https://github.com/achulkov2). Approval from management has been obtained.

### 17.5. Validation in the pointInPolygon function.

@@ -921,7 +923,7 @@ Amos Bird.
### 21.19. Sorting optimization.

Vasily Morozov, Arslan Gumerov, Albert Kidrachev, HSE.
Last year Evgeny Pravda, HSE, started working on this task but almost completely botched it.
Last year Evgeny Pravda, HSE, started working on this task but left it almost completely undone.

### 21.20. Using materialized views for query optimization.

@@ -984,7 +986,7 @@ zhang2014.

[Vitaly Baranov](https://github.com/vitlibar), almost everything is ready.

### 22.12. Fixing the idiotically low performance of reading from Kafka.
### 22.12. Fixing the catastrophically, disgustingly unacceptably low performance of reading from Kafka.

[Ivan Lezhankin](https://github.com/abyss7).

@@ -998,7 +1000,7 @@ zhang2014.

[Ivan Lezhankin](https://github.com/abyss7), if he does not give up.

### 22.16. Fixing the idiotically low performance of the DoubleDelta codec.
### 22.16. Fixing the catastrophically, disgustingly unacceptably low performance of the DoubleDelta codec.

Vasily Nemkov, Altinity - is currently diligently stalling this task.

@@ -1259,7 +1261,7 @@ Amos Bird, но его решение слишком громоздкое и п
Speakers already lined up: Alexey Milovidov, [Nikolai Kochetov](https://github.com/KochetovNicolai), [Alexander Sapin](https://github.com/alesapin).
That gives at least 7 speakers in 2020.

### 25.10. Meetups in Russia: Moscow x2 + a developer meetup or hackathon, Saint Petersburg, Minsk, Nizhny Novgorod, Yekaterinburg, Novosibirsk and/or Akademgorodok, Innopolis or Kazan.
### 25.10. Meetups in Russia and Belarus: Moscow x2 + a developer meetup or hackathon, Saint Petersburg, Minsk, Nizhny Novgorod, Yekaterinburg, Novosibirsk and/or Akademgorodok, Innopolis or Kazan.

Ekaterina Minazova - organization

@@ -1309,11 +1311,11 @@ Amos Bird, но его решение слишком громоздкое и п

### 25.24. Bughunter or C++ code optimization contests.

Contests should start with Yandex employees; for now there is no sign-off from Lilia Nadezhdina and Alexey Bashkeev.
Contests should start with Yandex employees; for now there is no sign-off.

### 25.25. Seminars for potential Yandex.Cloud customers.

As needed. Alexey Milovidov; organization - Vsevolod Grabelnikov.
As needed. Alexey Milovidov; organization - Yandex.Cloud.

### 25.26. Participation in GSoC.

docs/ru/interfaces/third-party/gui.md (vendored): 12 changes
@@ -91,6 +91,18 @@

## Commercial

### Holistics Software

[Holistics](https://www.holistics.io/) was ranked among the top 2 most convenient business intelligence tools in the 2019 Gartner FrontRunners rating. Holistics is a full-stack data platform and business intelligence tool that lets you build your processes with SQL.

Key features:

- Automated reports delivered to email, Slack and Google Sheets.
- A powerful SQL editor with visualization, version control, auto-completion, reusable query components and dynamic filters.
- Built-in report analytics tools and pop-up (iframe) dashboards.
- Data preparation and ETL capabilities.
- SQL data modeling for relational mapping of the data.

### DataGrip

[DataGrip](https://www.jetbrains.com/datagrip/) is a database IDE from JetBrains with dedicated support for ClickHouse. It is also embedded in other IntelliJ-based tools: PyCharm, IntelliJ IDEA, GoLand, PhpStorm and others.