Ambari throws error E090 HDFS030 Error in creation - hadoop

I have set up a view for file browsing in Ambari admin (Views - Add view - files), but when I try to access this view, I get the following error:
E090 HDFS030 Error in creation /user//hive/jobs/hive-job-... [HdfsApiException]
Why?

Solved!
The solution to my problem was that the user running "ambari-server" was not allowed to act on behalf of the user logged into Ambari. In Hadoop terms, the Ambari daemon user was not allowed to impersonate the Ambari user.
To fix this, the HDFS configuration had to be modified to allow my ambari-server user to impersonate everybody. For a detailed how-to, see this page:
http://docs.hortonworks.com/HDPDocuments/Ambari-2.2.0.0/bk_ambari_views_guide/content/_configuring_your_cluster_for_files_view.html
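Concretely, the fix boils down to adding Hadoop proxyuser properties to core-site. A sketch, assuming ambari-server runs as root (substitute your actual daemon user):

hadoop.proxyuser.root.hosts=*
hadoop.proxyuser.root.groups=*

After saving the change, HDFS has to be restarted for the proxyuser settings to take effect.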

Related

ETCD warning The server needs to initialize the root user

I have a simple etcd server running and I am using a GitHub project called etcd-keeper to visualize the data in etcd.
You can find the etcd-keeper project here: https://github.com/evildecay/etcdkeeper
I have created the root user using etcdctl and everything works fine.
I also needed another user with limited view access, so I created a test-user account and granted it a read-only role with the relevant permissions (sketched below).
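The commands were roughly as follows, assuming the etcd v3 API; the role name and key prefix here are placeholders:

export ETCDCTL_API=3            # only needed on older etcdctl builds
etcdctl user add root
etcdctl user add test-user
etcdctl role add viewer
etcdctl role grant-permission viewer --prefix=true read /app/
etcdctl user grant-role test-user viewer
etcdctl auth enable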
Everything is good, but when I try to access the etcd server using etcd-keeper, it doesn't allow me to log in with the test-user credentials unless I sign in with the root user first.
I don't want to share the root credentials with the person who logs in as test-user; otherwise there is no point in creating a new user.
I get this warning: "The server needs to initialize the root user".
Can someone please help me fix this problem? Is this error from the etcd server side? Has anyone used this etcd-keeper before?
Thank you.

What determines which users / groups Ranger can see when setting policies?

I have users on local machines with HDFS /user directories that do not show up as possible users when setting Ranger policies.
I can see that Ranger already has a place in the settings menu of the Ranger UI where you can view and add users, but I am not sure where this list is populated from.
So my question is: what determines whether Ranger can see cluster users for setting policies (and is there an easy way to manage this via Ambari)?
The problem was that I had thought, based on an answer on the Hortonworks community forums, that for a user to be recognized as "existing" on the HDP cluster, all that was required was for the user to 1) exist on a cluster node and 2) have a folder in hdfs:///user/<the username>. This apparently is not correct (at least in the case of being recognized by Ranger as a valid user that can have policies set on them).
In order for a user to be recognized by Ranger (here, I do not have a cluster integrated with Kerberos or Active Directory), that user needs to exist on the usersync server machine which supports...
the ability [for Ranger] to get users and groups from the corporate AD to use in policy definitions.
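In my case (no Kerberos or AD, with Ranger usersync reading local UNIX accounts), simply creating the account on the usersync host was enough for it to appear in Ranger after the next sync cycle. A sketch, with a made-up username:

sudo useradd -m alice

Note that usersync typically only picks up accounts above a configurable minimum UID, so low-numbered system accounts will not be synced.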

How to use the ResourceManager web interface as a user

Every time I try to use the Hadoop ResourceManager web interface (http://resource-manger.host:8088/cluster/) I show up logged in as dr.who.
My question: how can I log in as another user? In this case I want to log in as myself and have a higher level of privileges than dr.who.
The user information is obtained from HttpServletRequest#getRemoteUser().
1. If you deployed an insecure cluster, the simplest way to pass the username to the server is via a URL parameter. For example, http://localhost:8088/cluster?user.name=babu
2. If you deployed a secure cluster, you probably use Kerberos authentication. You can use kinit to obtain a Kerberos TGT, then configure the browser to negotiate (network.negotiate-auth.trusted-uris for Firefox, --auth-server-whitelist for Chromium; there are plenty of answers about this). See the sketch after this list.
For more information, you can check the official Hadoop documentation: https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-common/HttpAuthentication.html
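A minimal sketch of the Kerberos route (realm, hostname, and username are placeholders):

kinit babu@EXAMPLE.COM
# Firefox: in about:config, add the RM address to network.negotiate-auth.trusted-uris, e.g.
#   network.negotiate-auth.trusted-uris = http://resource-manger.host:8088
# Chromium: start the browser with the whitelist flag, e.g.
chromium --auth-server-whitelist="resource-manger.host"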
You should set the access control list by changing the default configuration of:
yarn.resourcemanager.zk-acl
from
world:anyone:rwcda
to something else, which is cluster-specific.
The ACLs the ResourceManager uses for the znode structure to store the internal state.
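For example, on a Kerberized cluster where the ResourceManager authenticates to ZooKeeper via SASL, one illustrative value (adjust the principal to your environment) would be:

yarn.resourcemanager.zk-acl=sasl:rm:rwcda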

Weblogic user automatically getting deleted during weblogic server start up

I created a user in WebLogic by following the steps below:
1. Clicked on Security Realms in the left-side panel.
2. Clicked on myrealm.
3. Clicked on Users and Groups.
4. Clicked on New.
5. Provided a user name and password.
The user was created successfully. However, when I start the server after deleting the log, cache, tmp and data folders, the created user gets deleted automatically.
From my first-level analysis I found it is due to the deletion of the data folder.
I want to create a permanent user for security validation.
Can anyone please help me create a permanent user?
Regards
Asutosh Kar
I found the answer to my issue above.
There are 2 ways to solve it:
1. Export the LDAP data from the security realm to a directory on the server, delete the data directory and restart the server. After the server restarts, import the LDAP files again.
2. Modify the DefaultAuthenticatorInit.ldift file under the domain's security directory to add the user and group details. After that, delete the data directory and restart the server (see the sketch below).
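A user entry in DefaultAuthenticatorInit.ldift might look roughly like the following. This is an illustrative sketch only; check the attribute names against your own domain's ldift file, and note the password is stored as a hash rather than plain text:

dn: uid=opsuser, ou=people, ou=myrealm, dc=mydomain
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
objectclass: wlsUser
cn: opsuser
sn: opsuser
uid: opsuser
userpassword: <hashed-password>
description: Permanent user for security validation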
Regards
Asutosh Kar
I tried the following and it works:
Log in to the WebLogic console.
Navigate to the domain.
Under the domain, Security tab > Embedded LDAP.
Select "Master First" and then restart the servers.
The users created after the restart will remain in the system.

Login Hive, log4j file

I'm trying to access Hive from the command window.
I just run "hive" in the appropriate directory but I get a "Login denied" error.
I've read that log4j is used to log in, but I don't know whether I have to create an account and write my user data there or not.
Thank you very much
The Hive service should be working right now. From a FI-LAB VM of your own, you simply have to log into the Head Node using your Cosmos credentials (if you have no Cosmos credentials, get them by registering here):
[root@your_filab_vm]$ ssh cosmos.lab.fi-ware.org
Once logged in the Head Node, type the following command:
[your_cosmos_username@cosmosmaster-gi]$ hive
Logging initialized using configuration in jar:file:/usr/local/hive-0.9.0-shark-0.8.0-bin/lib/hive-common-0.9.0-shark-0.8.0.jar!/hive-log4j.properties
Hive history file=/tmp/<your_cosmos_username>/hive_job_log_<your_cosmos_username>_201407212017_1797291774.txt
hive>
As you can see, in this example your Hive history will be written within:
/tmp/<your_cosmos_username>/hive_job_log_<your_cosmos_username>_201407212017_1797291774.txt
