No Access Audit found in Ranger - hadoop

I am working on Apache Ranger to enable data security across my Hadoop platform. Ranger itself is working fine, but I am not able to see any Access Audit records on the Ranger portal.
I have enabled Audit to DB, Audit to HDFS, and the audit provider summary for the respective components in Ambari.
Please help me get the Access Audit records to show up on the Ranger portal.

Check the NameNode log (normally under /var/log/hadoop/hdfs/...-namenode.log) and see whether the driver for your DB can be found or whether an exception is thrown. If the latter is the case, place the driver JAR somewhere on the classpath, e.g. /usr/share/java/, to make sure the driver class is available.
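To make the log check concrete, here is a hedged sketch: the log lines below are fabricated for illustration, and com.mysql.jdbc.Driver is just an example class for a MySQL audit DB.

```shell
# Fabricated NameNode log snippet showing what a missing JDBC driver looks
# like; on a real node, grep the actual log under /var/log/hadoop/hdfs/.
cat > /tmp/sample-namenode.log <<'EOF'
2016-01-12 10:00:01,000 ERROR provider.DbAuditProvider: Error persisting audit events
java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
EOF

# A match here means the audit DB driver JAR is missing from the classpath.
grep -c 'ClassNotFoundException' /tmp/sample-namenode.log
```

If the count is non-zero, installing the matching JDBC driver JAR (e.g. under /usr/share/java/) and restarting the NameNode should let the audit events reach the DB.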

I ran into the same problem.
I followed every instruction, but the HDFS plugin didn't take effect.
It was solved by upgrading Hadoop from 2.6.3 to 2.7.2.
As stated on the official Apache Ranger site, Ranger 0.5 only works with Hadoop 2.7+.

Related

cloudera navigator multi-tenancy capability

In short, can Cloudera Navigator be configured for a multi-tenancy context?
In detail, we have a data lake (Hadoop cluster) shared by many business entities, and we want each business entity to view, manage, and access only its own data through Cloudera Navigator.
I didn't find any information on the net, and the UI does not seem to provide such an option.
Thanks in advance.
You may use Cloudera Manager to create Kerberos principals and keytabs, which you can then configure to grant access to the required directories.
Read: Configuring Authentication in Cloudera Manager
As of the current version, Cloudera Navigator is not multi-tenancy enabled.
So, in the short term, one solution is custom development using the Cloudera Navigator API coupled with other technologies.
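Since Navigator itself cannot enforce per-tenant isolation, one common workaround is to isolate tenants at the HDFS layer instead. The sketch below is illustrative only: the tenant names, paths, and groups are invented, and it is written out as a script because it needs a running cluster.

```shell
# Hypothetical per-tenant HDFS layout: one directory per business entity,
# owned by a per-tenant admin user and group, accessible only by that group.
cat > /tmp/tenant-dirs.sh <<'EOF'
#!/bin/sh
for tenant in finance marketing logistics; do
  hdfs dfs -mkdir -p "/data/${tenant}"
  hdfs dfs -chown "${tenant}_admin:${tenant}_grp" "/data/${tenant}"
  hdfs dfs -chmod 770 "/data/${tenant}"
done
EOF
chmod +x /tmp/tenant-dirs.sh
```

Combined with per-tenant Kerberos principals as suggested above, this keeps each entity's data invisible to the others, even though Navigator's UI remains shared.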

Adding Hbase service in kerberos enabled CDH cluster

I have a CDH cluster already running with Kerberos authentication.
I have a requirement to add the HBase service to the running cluster.
I am looking for documentation on enabling the HBase service on a Kerberized cluster. Both command-line and GUI options are welcome.
It would also be good to have a quick verification method, such as steps to create a small test table.
Thanks in advance!
If you add it through the Cloudera Manager Add Service wizard, CDH takes care of everything automatically (creating/distributing the Kerberos keytabs and adding the service).
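For the requested verification, a minimal smoke-test sketch is below; the keytab path and principal are assumptions (adjust them to your cluster), and it is written to a script because it needs a live Kerberized HBase.

```shell
# Hypothetical HBase smoke test on a Kerberized cluster: authenticate with
# the HBase keytab, then create, write to, scan, and drop a tiny table.
cat > /tmp/hbase-smoke-test.sh <<'EOF'
#!/bin/sh
kinit -kt /etc/security/keytabs/hbase.service.keytab "hbase/$(hostname -f)"
hbase shell <<'HBASE'
create 'smoke_test', 'cf'
put 'smoke_test', 'row1', 'cf:col1', 'value1'
scan 'smoke_test'
disable 'smoke_test'
drop 'smoke_test'
HBASE
EOF
chmod +x /tmp/hbase-smoke-test.sh
```

If the `scan` shows the row you just `put`, the service and its Kerberos setup are working end to end.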

how to enable HDFS file view in Ambari on hortonworks sandbox?

How do I enable the HDFS Files View in Ambari on the Hortonworks sandbox?
I logged in as the admin user.
I tried Admin -> Manage Views,
but I could not find a file system view or anything like that.
It does not directly answer your question, but you should be able to view the HDFS filesystem at [ambari.host]:50070/explorer.html#/
To enable the HDFS Files View in Ambari, the steps are well described in the Hortonworks documentation: you need to change the HDFS configuration and then add a Files View instance. Have a look here: http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.0.0/bk_ambari_views_guide/content/ch_using_files_view.html
The Ambari Files View is one of the views shipped with Ambari 2.1.0 in the IOP 4.1 release. The view provides a web user interface for browsing HDFS, creating/removing directories, downloading/uploading files, etc. The cluster must have HDFS and WebHDFS deployed in order to use the Ambari Files View.
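Since the Files View depends on WebHDFS, a quick way to confirm WebHDFS is reachable is a LISTSTATUS call against the NameNode's web port. The hostname below is an assumption (the usual sandbox name), and the check is written to a script because it needs a running sandbox.

```shell
# Hypothetical WebHDFS reachability check; a JSON "FileStatuses" response
# means the Files View's backend requirement is satisfied.
cat > /tmp/check-webhdfs.sh <<'EOF'
#!/bin/sh
curl -s "http://sandbox.hortonworks.com:50070/webhdfs/v1/?op=LISTSTATUS"
EOF
chmod +x /tmp/check-webhdfs.sh
```

If this returns an error instead, fix WebHDFS first; the Files View cannot work without it.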
If you are downloading the Hortonworks Sandbox HDP after 13 June 2019, please go through this website: https://www.roseindia.net/bigdata/hadoop/install-hortonworks-sandbox-on-virtualbox.shtml. Make sure you download the current version of the Sandbox HDP (version 3.1 is available now) together with VirtualBox 6.0.
This worked for me.

HBase region servers going down when try to configure Apache Phoenix

I'm using CDH 5.3.1 with HBase 0.98.6-cdh5.3.1 and trying to configure Apache Phoenix 4.4.0.
As per the documentation provided in the Apache Phoenix installation guide, I:
Copied the phoenix-4.4.0-HBase-0.98-server.jar file into the lib directory (/opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/hbase/lib) on both the master and the region servers.
Restarted the HBase service from Cloudera Manager.
When I check the HBase instances, I see that the region servers are down, and I don't see any problem in the log files.
I even tried copying all the JARs from the Phoenix folder and still face the same issue.
I have also tried configuring Phoenix 4.3.0 and 4.1.0, but still no luck.
Can someone point out what else I need to configure, or anything else I need to do, to resolve this issue?
I was able to configure Apache Phoenix using parcels. The following are the steps to install Phoenix using Cloudera Manager:
In Cloudera Manager, go to Hosts, then Parcels.
Select Edit Settings.
Click the + sign next to an existing Remote Parcel Repository URL, and add the following URL: http://archive.cloudera.com/cloudera-labs/phoenix/parcels/1.0/. Click Save Changes.
Select Hosts, then Parcels.
In the list of Parcel Names, CLABS_PHOENIX is now available. Select it and choose Download.
The first cluster is selected by default. To choose a different cluster for distribution, select it. Find CLABS_PHOENIX in the list, and click Distribute.
If you plan to use secondary indexing, add the following to the hbase-site.xml advanced configuration snippet. Go to the HBase service, click Configuration, and choose HBase Service Advanced Configuration Snippet (Safety Valve) for hbase-site.xml. Paste in the following XML, then save the changes.
<property>
  <name>hbase.regionserver.wal.codec</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>
Whether or not you edited the HBase configuration, restart the HBase service: click Actions > Restart.
For detailed installation steps and other details, refer to this link.
I don't think Phoenix 4.4.0 is compatible with the CDH version you are running. This discussion on the mailing list will help you: http://search-hadoop.com/m/9UY0h2n4MOg1IX6OR1

Can't start impala after updating CDH (5.0.0 -> 5.0.2)

I wasn't able to start Impala (server, state-store, catalog) after updating to CDH 5.0.2. From what I found, the startup script expects the executables to be in /usr/lib/impala/sbin, but there was no such directory. Instead there were /usr/lib/impala/sbin-debug and /usr/lib/impala/sbin-retail. I could finally start Impala by creating a symlink:
ln -s /usr/lib/impala/sbin-retail /usr/lib/impala/sbin
However, I'm still puzzled about the issue. What is the correct way to start Impala? Perhaps there is some sort of config variable that lets you choose whether you want to run the "debug" or the "retail" version.
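The workaround above can be sketched as a small guard that prefers the retail build when the expected sbin path is missing. The demo below runs against a temporary directory that simulates the 5.0.2 layout; on a real node the base would be /usr/lib/impala.

```shell
# Simulate the post-upgrade layout in a sandbox directory.
base=$(mktemp -d)
mkdir "$base/sbin-retail" "$base/sbin-debug"

# If the expected sbin path is absent, point it at the retail build.
if [ ! -e "$base/sbin" ] && [ -d "$base/sbin-retail" ]; then
  ln -s "$base/sbin-retail" "$base/sbin"
fi

basename "$(readlink "$base/sbin")"   # -> sbin-retail
```

The guard is idempotent: if /usr/lib/impala/sbin already exists (as in older CDH layouts), it leaves the directory untouched.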
You can read the Cloudera Manager Installation Guide; I think it can be helpful for you.
You can try to update with Cloudera Manager:
Installing Impala after Upgrading Cloudera Manager
If you have just upgraded Cloudera Manager from a version that did not support Impala, the Impala software is
not installed automatically. (Upgrading Cloudera Manager does not automatically upgrade CDH or other managed
services). You can add Impala using parcels; go to the Hosts tab, and select the Parcels tab. You should see at
least one Impala parcel available for download. See Parcels for detailed instructions on using parcels to install
or upgrade Impala. If you do not see any Impala parcels available, click the Edit Settings button on the Parcels
page to go to the Parcel configuration settings and verify that the Impala parcel repo URL
(http://archive.cloudera.com/impala/parcels/latest/) has been configured in the Parcels configuration page.
See Parcel Configuration Settings for more details.
Post Installation Configuration
See The Impala Service in Managing Clusters with Cloudera Manager for instructions on configuring the Impala
service.
Cloudera Manager 5.0.2 supports Impala 1.2.1 or later.
If the version of your Impala service is 1.1 or earlier, the upgrade will make Impala unavailable, so you also need to upgrade Impala to 1.2.1 or later.
