mirror of
https://github.com/ClickHouse/ClickHouse.git
synced 2024-11-21 15:12:02 +00:00
Merge pull request #28268 from infinivision/hdfs_namenode_ha
configurable LIBHDFS3_CONF, refers to #8159
commit 7784f4ebb0
@@ -184,9 +184,10 @@ Similar to GraphiteMergeTree, the HDFS engine supports extended configuration us
| hadoop\_kerberos\_keytab | "" |
| hadoop\_kerberos\_principal | "" |
| hadoop\_kerberos\_kinit\_command | kinit |
| libhdfs3\_conf | "" |

### Limitations {#limitations}
-* hadoop\_security\_kerberos\_ticket\_cache\_path can be global only, not user specific
+* hadoop\_security\_kerberos\_ticket\_cache\_path and libhdfs3\_conf can be global only, not user specific

## Kerberos support {#kerberos-support}
@@ -198,6 +199,22 @@ security approach). Use tests/integration/test\_storage\_kerberized\_hdfs/hdfs_c
If hadoop\_kerberos\_keytab, hadoop\_kerberos\_principal or hadoop\_kerberos\_kinit\_command is specified, kinit will be invoked. In this case, hadoop\_kerberos\_keytab and hadoop\_kerberos\_principal are mandatory. The kinit tool and krb5 configuration files are required.
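As an illustrative sketch, these Kerberos keys would sit in the same `<hdfs>` section of the ClickHouse config file that the HA example below uses; the keytab path and principal here are placeholders, not values from this change:

``` xml
<!-- Hypothetical values: keytab path and principal are placeholders -->
<hdfs>
  <hadoop_kerberos_keytab>/etc/clickhouse-server/clickhouse.keytab</hadoop_kerberos_keytab>
  <hadoop_kerberos_principal>clickhouse@EXAMPLE.COM</hadoop_kerberos_principal>
</hdfs>
```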
## HDFS Namenode HA support {#namenode-ha}

libhdfs3 supports HDFS namenode HA.

- Copy `hdfs-site.xml` from an HDFS node to `/etc/clickhouse-server/`.
- Add the following piece to the ClickHouse config file:

``` xml
<hdfs>
  <libhdfs3_conf>/etc/clickhouse-server/hdfs-site.xml</libhdfs3_conf>
</hdfs>
```

- Then use the `dfs.nameservices` tag value from `hdfs-site.xml` as the namenode address in the HDFS URI. For example, replace `hdfs://appadmin@192.168.101.11:8020/abc/` with `hdfs://appadmin@my_nameservice/abc/`. An illustrative `hdfs-site.xml` excerpt is shown after this list.
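For orientation, a minimal sketch of the `hdfs-site.xml` entries that typically define such a nameservice; only `my_nameservice` and `192.168.101.11:8020` come from the example above, while the namenode IDs `nn1`/`nn2` and the second RPC address are assumed for illustration:

``` xml
<!-- Illustrative excerpt only: nn1, nn2 and the second RPC address are assumed values -->
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>my_nameservice</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.my_nameservice</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.my_nameservice.nn1</name>
    <value>192.168.101.11:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.my_nameservice.nn2</name>
    <value>192.168.101.12:8020</value>
  </property>
</configuration>
```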
## Virtual Columns {#virtual-columns}
- `_path` — Path to the file.
@@ -123,6 +123,12 @@ HDFSBuilderWrapper createHDFSBuilder(const String & uri_str, const Poco::Util::A
    if (host.empty())
        throw Exception("Illegal HDFS URI: " + uri.toString(), ErrorCodes::BAD_ARGUMENTS);

    // The LIBHDFS3_CONF environment variable must be set *before* HDFSBuilderWrapper
    // construction, so that libhdfs3 picks up the configuration file.
    const String & libhdfs3_conf = config.getString(HDFSBuilderWrapper::CONFIG_PREFIX + ".libhdfs3_conf", "");
    if (!libhdfs3_conf.empty())
    {
        setenv("LIBHDFS3_CONF", libhdfs3_conf.c_str(), 1);
    }
    HDFSBuilderWrapper builder;
    if (builder.get() == nullptr)
        throw Exception("Unable to create builder to connect to HDFS: " +