Permission error when setting up a Hadoop cluster in Pentaho - hadoop

I am trying to set up a new Hadoop cluster in Pentaho 9.3, but I get a permission error.
The connection requires a username and password for HDFS, and I don't know how to create a user and password for HDFS.
(Screenshots: setup step 1, setup step 2, and the resulting error.)
Hadoop, by default, uses regular OS user accounts. For Linux, you'd use the useradd command.
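For example, a minimal sketch on a Linux cluster node (the username pentaho is only an illustration, and it assumes the hdfs account is the HDFS superuser, as on most modern distributions):
sudo useradd -m pentaho
sudo passwd pentaho
sudo -u hdfs hadoop fs -mkdir -p /user/pentaho
sudo -u hdfs hadoop fs -chown pentaho /user/pentaho
That OS username and password are what the Pentaho cluster configuration dialog is asking for.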

Related

Not able to create new table in hive from Spark-shell

I am using a single-node setup on Red Hat and have installed Hadoop, Hive, Pig and Spark. I configured the Hive metastore in Derby and everything else. I created a new folder for the Hive tables and gave it full privileges (chmod 777). Then I created one table from the Hive CLI, and I am able to select that data in spark-shell and print the values to the console. But from spark-shell / Spark SQL I am not able to create new tables. It throws this error:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:file:/2016/hive/test2 is not a directory or unable to create one)
I checked the permissions and the user (I am using the same user for the installation of Hive, Hadoop, Spark, etc.).
Is there anything else that needs to be done to get full integration of Spark and Hive?
Thanks
Check that the permissions in HDFS are correct (not just on the local filesystem):
hadoop fs -chmod -R 755 /user
If the error message persists afterwards please update the question.
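For example, to compare the two sides before and after the change (the second path comes from the error above; the file: prefix suggests it is on the local filesystem):
hadoop fs -ls /user
ls -ld /2016/hive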

"Permission denied" for almost everything after a successful ssh into gcloud instance that was created using bdutil

Just created an instance and deployed a cluster using bdutil. SSH works fine, as I can ssh into the instance using ./bdutil shell.
When I try to access directories such as hadoop, hdfs, etc., it throws an error:
Permission Denied
The terminal prompt appears like this: username@hadoop-m $. I know hadoop-m is the name of the instance. What is the username? It shows my name, but I don't know where it got this from or what the password is.
I am using Ubuntu to ssh into the instance.
Not a Hadoop expert, so I can only answer a bit generally. On GCE, when you ssh in, gcloud creates a username from your Google account name. Hadoop directories such as hadoop or hdfs are probably owned by a different user. Please try using sudo chmod to give your username permission to read/write the directories you need.
To elaborate on Jeff's answer, bdutil-deployed clusters set up the user hadoop as the Hadoop admin (this 'admin' user may differ on other Hadoop systems, where Hadoop admin accounts may be split into separate users hdfs, yarn, mapred, etc.). Note that bdutil clusters should work without needing to deal with Hadoop admin accounts for normal jobs, but if you need to access those Hadoop directories, you can either do:
sudo su hadoop
or
sudo su
to open a shell as hadoop or root, respectively. Or, as Jeff mentions, you can use sudo chmod to grant broader access to your own username.
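For example, to browse HDFS as the Hadoop admin user without switching to a full shell (the path is only an illustration):
sudo -u hadoop hadoop fs -ls /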

Hadoop: Pseudo Distributed mode for multiple users

I appreciate your help in advance.
I have set up Hadoop in pseudo-distributed mode using the root user credentials. I want to provide access to multiple users (let us say hadoop1, hadoop2, etc.) so that they can submit and run MapReduce jobs on this cluster. How do we get this done?
What I have done so far:
> - Set up Hadoop to run in pseudo-distributed mode
> - Used "root" user credentials to set this up.
> - Added users hadoop1 and hadoop2 to a group called "hadoop".
> - Added root also to be part of the group "hadoop".
> - Created a folder called hdfstmp and set this as the path for hadoop.tmp.dir.
> - Started the cluster using bin/start-all.sh
> - Ran MapReduce jobs using hadoop1 and hadoop2 users.
I got the error below:
Exception in thread "main" java.io.IOException: Permission denied
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:1006)
at java.io.File.createTempFile(File.java:1989)
at org.apache.hadoop.util.RunJar.main(RunJar.java:119)
To overcome this error, I gave the group "hadoop" rwx permissions on the hdfstmp folder. The permissions on this folder now look like drwxrwxr-x.
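(For reference, that corresponds to something like the following, with the path taken from the DataNode log below:)
chgrp -R hadoop /data/hdfstmp
chmod -R 775 /data/hdfstmp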
I then submitted MapReduce jobs while logged in as the hadoop1 and hadoop2 users. The jobs ran fine without any errors.
However, if I do a stop-all.sh and then a start-all.sh, the DataNode (and occasionally even the NameNode) does not start up. When I check the logs, I see an error like the one below:
2013-09-21 16:43:54,518 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid directory in dfs.data.dir: Incorrect permission for /data/hdfstmp/dfs/data, expected: rwxr-xr-x, while actual: rwxrwxr-x
Now, without changing the group ownership of the hdfstmp directory, MR jobs submitted by the different users do not run. But when the NameNode gets restarted, I get the issue above.
How do I overcome this issue? What is the best practice here?
Also, is there a way to monitor the jobs that are being submitted by the different users? I am assuming the web UI should allow me to do this. Please confirm.
I appreciate any assistance you can provide me on this issue. Thanks.
Regards
Adding a dedicated Hadoop system user
We will use a dedicated Hadoop user account for running Hadoop. While that's not required, it is recommended because it helps separate the Hadoop installation from other software applications and user accounts running on the same machine (think: security, permissions, backups, etc.).
#addgroup hadoop
#adduser --ingroup hadoop hadoop1
#adduser --ingroup hadoop hadoop2
This will add the users hadoop1 and hadoop2 and the group hadoop to your local machine.
Change the ownership of your Hadoop installation directory:
chown -R hadoop1:hadoop hadoop
And lastly, change the permissions on Hadoop's temporary directory.
If your temp directory is /app/hadoop/tmp:
#mkdir -p /app/hadoop/tmp
#chown hadoop1:hadoop /app/hadoop/tmp
and if you want to tighten up security, chmod from 755 to 750...
#chmod 750 /app/hadoop/tmp
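Beyond the temp directory, a rough sketch of the remaining pieces for this question's setup (paths follow the /data/hdfstmp layout from the log above, and the commands assume root started the cluster and is therefore the HDFS superuser): keep the DataNode data directory at the permissions it insists on, and give each user an HDFS home directory instead of opening up the whole tree.
chmod 755 /data/hdfstmp/dfs/data
bin/hadoop fs -mkdir /user/hadoop1
bin/hadoop fs -mkdir /user/hadoop2
bin/hadoop fs -chown hadoop1:hadoop /user/hadoop1
bin/hadoop fs -chown hadoop2:hadoop /user/hadoop2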

Hive failed to create /user/hive/warehouse

I just got started with Apache Hive, and I am using my local Ubuntu box 12.04, with Hive 0.10.0 and Hadoop 1.1.2.
Following the official "Getting Started" guide on the Apache website, I am now stuck at the Hadoop command in the guide that creates the Hive metastore directory:
$ $HADOOP_HOME/bin/hadoop fs -mkdir /user/hive/warehouse
The error was: mkdir: failed to create /user/hive/warehouse
Does Hive require Hadoop in a specific mode? I know I didn't have to do much to my Hadoop installation other than update JAVA_HOME, so it is in standalone mode. I am sure Hadoop itself is working, since I can run the Pi example that comes with the Hadoop installation.
Also, the other command to create /tmp showed that the /tmp directory already exists, so it didn't recreate it, and bin/hadoop fs -ls is listing the current directory.
So, how can I get around it?
Almost all examples in the documentation have this command wrong. Just like in Unix, you will need the "-p" flag to create the parent directories as well, unless you have already created them. This command will work:
$HADOOP_HOME/bin/hadoop fs -mkdir -p /user/hive/warehouse
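The same Getting Started page also expects /tmp and the warehouse directory to be group-writable, so if you still hit permission errors afterwards, these two commands from the guide are worth running as well:
$HADOOP_HOME/bin/hadoop fs -chmod g+w /tmp
$HADOOP_HOME/bin/hadoop fs -chmod g+w /user/hive/warehouse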
When running Hive on a local system, just add the following to ~/.hiverc:
SET hive.metastore.warehouse.dir=${env:HOME}/Documents/hive-warehouse;
You can specify any folder to use as the warehouse. Obviously, any other Hive configuration method will do (hive-site.xml or hive -hiveconf, for example).
That's possibly what Ambarish Hazarnis had in mind when saying "or create the warehouse in your home directory".
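For example, to point a single session at a home-directory warehouse straight from the command line (the path is only an illustration):
hive -hiveconf hive.metastore.warehouse.dir=$HOME/Documents/hive-warehouse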
This seems like a permission issue. Do you have access to the root folder /?
Try the following options:
1. Run the command as the superuser,
OR
2. Create the warehouse in your home directory.
Let us know if this helps. Good luck!
When setting hadoop properties in the spark configuration, prefix them with spark.hadoop.
Therefore set
conf.set("spark.hadoop.hive.metastore.warehouse.dir","/new/location")
This works for older versions of Spark. The property changed in Spark 2.0.0, where spark.sql.warehouse.dir is used instead.
Adding an answer for reference for Cloudera CDH users who are seeing this same issue.
If you are using the Cloudera CDH distribution, make sure you have followed these steps:
Launch Cloudera Manager (Express / Enterprise) by clicking on the desktop icon.
Open the Cloudera Manager page in a browser.
Start all services.
Cloudera creates the /user/hive/warehouse folder by default. It's just that YARN and HDFS might not be up and running to access this path.
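Once HDFS and YARN are up, you can confirm the default path exists; on CDH the hdfs account is the HDFS superuser, so something like:
sudo -u hdfs hadoop fs -ls /user/hive/warehouse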
While this is a simple permission issue that was resolved with sudo in my comment above, there are a couple of notes:
Creating it in the home directory should work as well, but then you may need to update the Hive setting for the metastore path, which I think defaults to /user/hive/warehouse.
I ran into another error with a CREATE TABLE statement in the Hive shell; the error was something like this:
hive> CREATE TABLE pokes (foo INT, bar STRING);
FAILED: Error in metadata: MetaException(message:Got exception: java.io.FileNotFoundException File file:/user/hive/warehouse/pokes does not exist.)
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
It turns out to be another permission issue: you have to create a group called "hive", then add the current user to that group and change the ownership of /user/hive/warehouse to that group. After that, it works. Details can be found at the link below:
http://mail-archives.apache.org/mod_mbox/hive-user/201104.mbox/%3CBANLkTinq4XWjEawu6zGeyZPfDurQf+j8Bw#mail.gmail.com%3E
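A rough sketch of those steps (local-filesystem commands, since the FileNotFoundException above points at file:/user/hive/warehouse; log out and back in after the usermod so the new group membership takes effect):
sudo groupadd hive
sudo usermod -a -G hive $USER
sudo chgrp -R hive /user/hive/warehouse
sudo chmod -R g+w /user/hive/warehouse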
If you are running Linux, check the data directory and its permissions (in Hadoop's core-site.xml). It looks like you have kept the default, which is /data/tmp, and in most cases that will require root permission.
Change the XML config file, delete /data/tmp, and re-run the filesystem format (of course, after you have modified the core XML config).
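For reference, on a Hadoop 1.x install the re-format looks like the command below; note that it wipes existing HDFS metadata, so only do this on a fresh or throwaway setup:
hadoop namenode -format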
I recommend using a more recent version of Hive, i.e. 1.1.0; version 0.10.0 is very buggy.
Run this command and then try to create the directory again; it grants the owning user full permissions on the HDFS /user tree:
hadoop fs -chmod -R 755 /user
I am using macOS with Homebrew as the package manager. I had to set the property in hive-site.xml as:
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/usr/local/Cellar/hive/2.3.1/libexec/conf/warehouse</value>
</property>

Cloudera Hadoop access with Kerberos gives TokenCache error : Can't get Master Kerberos principal for use as renewer

I am trying to access a Cloudera Hadoop setup (Hive + Impala) from a MacBook Pro running OS X 10.8.4.
We have Cloudera CDH 4.3.0 installed on Linux servers. I have extracted the CDH 4.2.0 tarball to my MacBook Pro.
I have set up the proper configuration and Kerberos credentials, so that commands like 'hadoop fs -ls /' work and the Hive shell starts up.
However, when I run the 'show databases' command, it gives the following error:
> hive
> show databases;
>
Failed with exception java.io.IOException:java.io.IOException: Can't get Master Kerberos principal for use as renewer
The error is related to TokenCache.
When I searched for the error, it seems the method 'obtainTokensForNamenodesInternal' throws it when it tries to get a delegation token for a specific FS and fails:
http://hadoop.apache.org/docs/current/api/src-html/org/apache/hadoop/mapreduce/security/TokenCache.html
On the client side I don't see any error in the Hive shell logs. I have also tried using the CDH 4.3.0 tarballs with the same configuration, and I get the same error.
Any help or pointers for resolving this error would be highly appreciated.
It seems that you have not configured Kerberos for YARN. Add the following configuration to your yarn-site.xml:
<property>
<name>yarn.nodemanager.principal</name>
<value>yarn_priciple/fqdn@_HOST</value>
</property>
<property>
<name>yarn.resourcemanager.principal</name>
<value>yarn_priciple/fqdn@_HOST</value>
</property>
Create a new Gateway YARN role instance on the host from Cloudera Manager. It will automatically set up and update yarn-site.xml.
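Whichever route you take, also make sure the client still holds a valid Kerberos ticket before retrying the query (the principal and realm below are placeholders):
kinit your_user@YOUR.REALM.COM
klist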
