How to do Kerberos authentication for ClickHouse HDFS disk? - hadoop

In the ClickHouse documentation it is explained how to add an HDFS disk by providing a URL:
<clickhouse>
    <storage_configuration>
        <disks>
            <hdfs>
                <type>hdfs</type>
                <endpoint>hdfs://hdfs1:9000/clickhouse/</endpoint>
            </hdfs>
        </disks>
        ...
    </storage_configuration>
</clickhouse>
However, HDFS only supports Kerberos authentication; it is not possible to authenticate with a URL alone.
ClickHouse also explains how to set up Kerberos authentication for the HDFS engine here:
<!-- Global configuration options for HDFS engine type -->
<hdfs>
    <hadoop_kerberos_keytab>/tmp/keytab/clickhouse.keytab</hadoop_kerberos_keytab>
    <hadoop_kerberos_principal>clickuser@TEST.CLICKHOUSE.TECH</hadoop_kerberos_principal>
    <hadoop_security_authentication>kerberos</hadoop_security_authentication>
</hdfs>
<!-- Configuration specific for user "root" -->
<hdfs_root>
    <hadoop_kerberos_principal>root@TEST.CLICKHOUSE.TECH</hadoop_kerberos_principal>
</hdfs_root>
How do I configure Kerberos for an HDFS disk used by the MergeTree engine?
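One configuration worth trying (a sketch only, not confirmed by the documentation quoted above): if the HDFS disk uses the same libhdfs3 client as the HDFS table engine, then placing the Kerberos settings in the global <hdfs> section of config.xml alongside the storage configuration may be enough. The values below are the placeholders from the engine example:
<clickhouse>
    <!-- Assumption: the HDFS disk honors the same global libhdfs3 settings as the HDFS engine -->
    <hdfs>
        <hadoop_security_authentication>kerberos</hadoop_security_authentication>
        <hadoop_kerberos_keytab>/tmp/keytab/clickhouse.keytab</hadoop_kerberos_keytab>
        <hadoop_kerberos_principal>clickuser@TEST.CLICKHOUSE.TECH</hadoop_kerberos_principal>
    </hdfs>
    <storage_configuration>
        <disks>
            <hdfs>
                <type>hdfs</type>
                <endpoint>hdfs://hdfs1:9000/clickhouse/</endpoint>
            </hdfs>
        </disks>
    </storage_configuration>
</clickhouse>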

Related

Why would I Kerberise my hadoop (HDP) cluster if it already uses AD/LDAP?

I have an HDP cluster.
This cluster is configured to use Active Directory as its authentication and authorization authority. To be more specific, we use Ranger to limit access to HDFS directories, Hive tables and YARN queues after said user has provided the correct username/password combination.
I have been tasked with Kerberising the cluster, which is very easy thanks to the "press buttons and skip"-like option in Ambari.
We Kerberised a test cluster. While interacting with Hive does not require any modification of our existing scripts on the cluster's machines, it is very, very difficult to find a way for end users to interact with Hive from OUTSIDE the cluster (PowerBI, DbVisualizer, a PHP application).
Kerberising seems to bring an unnecessary amount of work.
What concrete benefits would I get from Kerberising the cluster (apart from making the guys above in the hierarchy happy because, hey, we Kerberised, yoohoo)?
Edit:
One benefit:
Kerberising the cluster adds security because it runs on Linux machines, which the company's Active Directory is not able to handle on its own.
Ranger with AD/LDAP authentication and authorization is OK for external users, but AFAIK it will not secure machine-to-machine or command-line interactions.
I'm not sure if it still applies, but on a Cloudera cluster without Kerberos, you could fake a login by setting the environment variable HADOOP_USER_NAME on the command line:
sh-4.1$ whoami
ali
sh-4.1$ hadoop fs -ls /tmp/hive/zeppelin
ls: Permission denied: user=ali, access=READ_EXECUTE, inode="/tmp/hive/zeppelin":zeppelin:hdfs:drwx------
sh-4.1$ export HADOOP_USER_NAME=hdfs
sh-4.1$ hadoop fs -ls /tmp/hive/zeppelin
Found 4 items
drwx------ - zeppelin hdfs 0 2015-09-26 17:51 /tmp/hive/zeppelin/037f5062-56ba-4efc-b438-6f349cab51e4
For machine-to-machine communications, tools like Storm, Kafka, Solr or Spark are not secured by Ranger, but they are secured by Kerberos, so only dedicated processes can use those services.
Source: https://community.cloudera.com/t5/Support-Questions/Kerberos-AD-LDAP-and-Ranger/td-p/96755
Update: Apparently, Kafka and Solr integration has been implemented in Ranger since then.

What does it mean 'limited to Hive table data' in Apache Sentry reference?

Here, at https://www.cloudera.com/documentation/enterprise/5-9-x/topics/sentry_intro.html, we can read that:
Apache Sentry Overview: Apache Sentry is a granular, role-based authorization module for Hadoop. Sentry provides the ability to control and enforce precise levels of privileges on data for authenticated users and applications on a Hadoop cluster. Sentry currently works out of the box with Apache Hive, Hive Metastore/HCatalog, Apache Solr, Impala, and HDFS (limited to Hive table data).
What does it mean exactly that HDFS is limited to Hive table data?
Does it mean that I can't set access privileges for users to particular paths on HDFS?
For example,
I would like to set read access for user_A to path /my_test1
and write/read access for user_B to path /my_test1 and path /my_test2.
Is it possible with Apache Sentry?
Sentry controls do not replace HDFS ACLs. The synchronization between Sentry permissions and HDFS ACLs is one-way; that is, the Sentry plugin on the NameNode applies Sentry permissions along with HDFS ACLs, so that HDFS enforces access to Hive table data according to Sentry's configuration, even when the data is accessed with other tools. In that case, HDFS access control is simply a means of enforcing the policies defined in Sentry.
Enforcement of arbitrary file access in HDFS should still be done via HDFS ACLs.
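For the concrete example in the question, that means setting HDFS ACLs directly. A rough sketch, using the paths and user names from the question and assuming ACLs are enabled on the NameNode (dfs.namenode.acls.enabled=true):
hadoop fs -setfacl -m user:user_A:r-x /my_test1   # read (and traverse) access for user_A
hadoop fs -setfacl -m user:user_B:rwx /my_test1   # read/write access for user_B
hadoop fs -setfacl -m user:user_B:rwx /my_test2
hadoop fs -getfacl /my_test1                      # verify the resulting ACL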

Check permission in HDFS

I'm totally new to Hadoop. One of our SAS users has a problem saving a file from SAS Enterprise Guide to Hadoop, and I've been asked to check whether the permissions in HDFS have been granted properly; essentially, to make sure users are allowed to move data from one side and add it to the other side.
Where should I check for this on the SAS servers? Is it a file, and how can I check it?
A detailed answer would be much appreciated.
Thanks.
This question is too vague, but I can offer a few suggestions. First off, the SAS Enterprise Guide user should have a resulting SAS log from their job with any errors.
The Hadoop cluster's distribution, version, services in use (for example, Knox, Sentry, or Ranger security products must be set up), and authentication (Kerberos) all make a difference. I will assume you are not having Kerberos issues, are not running Knox, Sentry, Ranger, etc., and are using core Hadoop with no Kerberos. If you need help with those, you must be more specific.
1. You have to check permissions on the Hadoop side for this. You have to know where they are putting the data into Hadoop. These are paths in HDFS, not on the server's file system.
If connecting to Hive and not specifying any options, it is likely /user/hive/warehouse or the /user/username folder.
2. The Hadoop sticky bit, enabled by default, prevents users from writing to /tmp in HDFS. Some SAS programs write to the /tmp folder in HDFS to save metadata, along with other information.
Run the following command on a Hadoop node to check basic permissions in HDFS.
hadoop fs -ls /
You should see the /tmp folder along with its permissions; if the /tmp folder has a "t" at the end, the sticky bit is set, such as drwxrwxrwt. If the permissions are drwxrwxrwx, then the sticky bit isn't set, which is good for ruling out permissions issues.
If you have the sticky bit set on /tmp, which is usually the default, then you must either remove it or set an HDFS TEMP directory in the SAS program's libname for the Hadoop cluster.
Please see the following SAS/ACCESS to Hadoop guide about the libname options: SAS/ACCESS® 9.4 for Relational Databases: Reference, Ninth Edition | LIBNAME Statement Specifics for Hadoop
To remove/change the Hadoop sticky bit, see the following article or ask your Hadoop vendor: Configuring Hadoop Security in CDH 5, Step 14: Set the Sticky Bit on HDFS Directories. You will want to do the opposite of that article to remove the sticky bit, though.
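A rough sketch of what that looks like on the command line (run as the HDFS superuser; 1777 is the mode from the Cloudera article, and 0777 clears the sticky bit again):
hadoop fs -ls / | grep tmp               # a trailing "t" (drwxrwxrwt) means the sticky bit is set
sudo -u hdfs hadoop fs -chmod 0777 /tmp  # clear the sticky bit
sudo -u hdfs hadoop fs -chmod 1777 /tmp  # set it again later if needed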
3. SAS + Authentication + Users
If your Hadoop cluster is secured using Kerberos, then each SAS user must have a valid Kerberos ticket to talk to any Hadoop service. There are a number of guides on the SAS Hadoop Support page about Kerberos, along with other resources. With Kerberos they need a Kerberos ticket, not a username or password.
SAS 9.4 Support For Hadoop Reference
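For example, a SAS user would typically obtain and check a ticket before running the job; the principal and keytab path below are placeholders:
kinit sasuser@EXAMPLE.COM                               # password-based
kinit -kt /path/to/sasuser.keytab sasuser@EXAMPLE.COM   # or keytab-based
klist                                                   # verify the ticket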
If you are not using Kerberos, then you have either the Hadoop default of no authentication, or possibly some services, such as Hive, with LDAP enabled.
If you don't have LDAP enabled, then you can use any Hadoop username in the libname statement to connect, such as hive, hdfs, or yarn. You do not need to enter any password, and this user doesn't have to be the SAS user account. This is because the default Hadoop configuration does not require authentication. You can use another account, such as one you might create for the SAS user in your Hadoop cluster. If you do this, you must create a /user/username folder in HDFS by running something like the following as the HDFS superuser (or one with permissions in Hadoop), and then set the ownership to the user.
hadoop fs -mkdir /user/sasdemo
hadoop fs -chown sasdemo:sasusers /user/sasdemo
Then you can check to make sure it exists with
hadoop fs -ls /user/
Basically, whichever user they have in the libname statement in their SAS program must have a user home folder in Hadoop. The default Hadoop users will have one created on install, but you will need to create them for any additional users.
If you are using LDAP with Hadoop (not too common from what I've seen), then you will have to put the LDAP username along with a password for the user account in the libname statement. I believe you can encode the password if you like.
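If you do encode it, PROC PWENCODE is the usual route; a sketch (the encoded {SAS002}... value is written to the SAS log and then pasted into the libname; the user name and password here are placeholders):
proc pwencode in='MyLdapPassword' method=sas002; run;
libname myhive hadoop server=hiveserver.example.com port=10000 schema=default
        user=ldapuser password="{SAS002}...";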
Testing Connections to Hadoop from a SAS Program
You can modify the following SAS code to do a basic test: put one of the sashelp datasets into Hadoop over a serial connection to HiveServer2 using SAS Enterprise Guide. This is only a very basic test, but it should prove you can write to Hadoop.
libname myhive hadoop server=hiveserver.example.com port=10000 schema=default user=hive;
data myhive.cars;set sashelp.cars;run;
Then if you want you can use the Hadoop client of your choice to find the data in Hadoop in the location you stored it, likely /user/hive/warehouse.
hadoop fs -ls /user/hive/warehouse
And/Or you should be able to run a proc contents in SAS Enterprise Guide to display the contents of the Hadoop Hive table you just put into Hadoop.
PROC CONTENTS DATA=myhive.cars;run;
Hope this helps, good luck!
To find the proper groups that can access files in HDFS, we need to check Sentry. The file ACLs are described in Sentry, so if you want to grant or revoke access for anyone, it can be done through it. On the left-hand side is the file location and on the right-hand side are the ACLs of the groups.

Hive User Impersonation for Sentry

I was reading that while using Sentry you must disable Hive user impersonation.
Is it necessary to disable impersonation? If yes, is there any other way to impersonate the Hive user with Sentry enabled?
Impersonation and Sentry are two different ways to provide authorization in Hive. The first one is based on "POSIX-like" HDFS file system permissions, while Sentry is a role-based authorization module plus the SentryService.
There is no way to use Sentry with impersonation enabled in Hive. It would be a security issue: a user/application granted access to any entity (database, table) stored in the Hive metastore could gain access to any directory/file on HDFS that doesn't "belong" to it.
According to Cloudera, impersonation is not a recommended way to implement authorization in HiveServer2 (HiveServer2 Impersonation).
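Concretely, disabling impersonation means setting hive.server2.enable.doAs to false for HiveServer2, either through your cluster manager or directly in hive-site.xml; a minimal sketch:
<!-- hive-site.xml: queries run as the hive service user instead of the connected end user -->
<property>
    <name>hive.server2.enable.doAs</name>
    <value>false</value>
</property>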

Flume-ng hdfs security

I'm new to Hadoop and Flume NG and I need some help.
I don't understand how HDFS security is implemented.
Here are some configuration lines from the Flume User Guide:
# properties of hdfs-Cluster1-sink
agent_foo.sinks.hdfs-Cluster1-sink.type = hdfs
agent_foo.sinks.hdfs-Cluster1-sink.hdfs.path = hdfs://namenode/flume/webdata
Does it mean that anyone who knows my HDFS path can write any data to my HDFS?
The question is from some time ago, but I'll try to answer it for any other developer dealing with Flume and HDFS security.
Flume's HDFS sink just needs the endpoint where the data is going to be persisted. Whether such an endpoint is secured or not depends entirely on Hadoop, not on Flume.
The Hadoop ecosystem has several tools and systems for implementing security, but focusing on the native elements, we are talking about authentication and authorization.
Authentication is based on Kerberos and, as with any other auth mechanism, it is the process of determining whether someone or something is, in fact, who or what it declares itself to be. So with authentication it is not enough to know an HDFS user name; you have to demonstrate you own that user by previously authenticating against Kerberos and obtaining a ticket. Authentication may be password-based or keytab-based; you can see keytabs as "certificate files" containing the authentication keys.
Authorization can be implemented at the file-system level, by deciding which permissions each folder or file within HDFS has. Thus, if a certain file has only 600 permissions, then only its owner will be able to read or write it. Other authorization mechanisms like Hadoop ACLs can be used as well.
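For instance, locking down the Flume landing directory from the question so that only its owning user can read or write it (the "flume" account here is a placeholder):
hadoop fs -chown flume:flume /flume/webdata
hadoop fs -chmod 700 /flume/webdata   # drwx------ : owner-only access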
That being said, if you have a look at the Flume HDFS sink, you'll see that there are a couple of parameters about Kerberos:
hdfs.kerberosPrincipal – Kerberos user principal for accessing secure HDFS
hdfs.kerberosKeytab – Kerberos keytab for accessing secure HDFS
In Kerberos terminology, a principal is a unique identity to which Kerberos can assign tickets. Thus, for each user enabled in HDFS you will need a principal registered in Kerberos. The keytab, as previously said, is a container for the authentication keys a certain principal owns.
Thus, if you want to secure your HDFS, then install Kerberos, create principals and keytabs for each enabled user, and configure the HDFS sink properly. In addition, change the permissions appropriately in your HDFS.
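Putting that together with the agent from the question, a Kerberos-enabled sink configuration would look roughly like this (the principal and keytab path are placeholders):
agent_foo.sinks.hdfs-Cluster1-sink.type = hdfs
agent_foo.sinks.hdfs-Cluster1-sink.hdfs.path = hdfs://namenode/flume/webdata
agent_foo.sinks.hdfs-Cluster1-sink.hdfs.kerberosPrincipal = flume@EXAMPLE.COM
agent_foo.sinks.hdfs-Cluster1-sink.hdfs.kerberosKeytab = /etc/security/keytabs/flume.keytab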
