Hadoop single cluster user

I am reading this document here:
http://hadoop.apache.org/docs/r2.4.0/hadoop-project-dist/hadoop-common/SingleCluster.html#Pseudo-Distributed_Operation
It has this item:
Make the HDFS directories required to execute MapReduce jobs:
$ bin/hdfs dfs -mkdir /user
$ bin/hdfs dfs -mkdir /user/<username>
It is not clear to me what <username> here should be.
Is this the Linux dedicated user which I created for Hadoop or something else?
I am a beginner at Hadoop; I just installed it today
and I am just trying to play with a few basic examples.

Short Answer: It doesn't have to be any particular username; it's just whatever you choose to call the directory in HDFS where you want to put your output. But using /user/<username> is the convention and good practice.
Long-Winded Answer:
Peter, think of the "Hadoop username" merely as a way to keep your stuff in HDFS distinct from that of anyone else who's also using the same Hadoop cluster. It's really just the name of a directory that you're creating or using under /user in HDFS. You don't necessarily have to "log in" to use Hadoop, but very often the hadoop username just mimics your standard username/profile.
For example, at my previous employer, everyone's logins (for email address, chat client, accessing applications, connecting to servers, developing code, etc. -- pretty much anything at work that ever required a username & password) were in the format of <firstname.lastname>, so we'd log in to everything that way. Most of us had execution privileges on our grid, so we would ssh to an appropriate server (e.g. $ ssh trevor.allen@server-of-awesomeness), where we had permission to execute MapReduce jobs on the grid. Just like my user was always first.last on my own machine, as well as on all our Linux servers (e.g. home in /home/trevor.allen/), we would follow this precedent in HDFS as well, pointing any output to HDFS to /user/first.last. Of course, since the "username" was arbitrary (really just the name of a directory), you'd occasionally see typos (/user/john.deo), cases where someone got mixed up between Linux's usr convention and Hadoop's user convention (/user/john.doe vs /usr/john.doe), the occasional dropped last name (/user/john), and so on.
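To make it concrete, here is a minimal sketch assuming your Linux user for Hadoop is called hduser (an illustrative name only); the chown is only needed if you created the directory as a different user:
$ bin/hdfs dfs -mkdir /user
$ bin/hdfs dfs -mkdir /user/hduser
$ bin/hdfs dfs -chown hduser /user/hduser   # only if the mkdir was run as another (super)user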
Hope that helps!

The username corresponds to a user in HDFS. Here you can use the same user as your Linux account, or another one. For example, if you install Hive, Spark or HBase, you will have to create their directories in order to run those services.
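As a rough sketch, creating such service directories could look like this (the user names and whether a chown is needed depend on your install; run as the HDFS superuser):
$ bin/hdfs dfs -mkdir -p /user/hive /user/spark /user/hbase
$ bin/hdfs dfs -chown hive /user/hive
$ bin/hdfs dfs -chown spark /user/spark
$ bin/hdfs dfs -chown hbase /user/hbase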

The username here is the one you use to log in to Hadoop; by default it is your account name.

Related

How to log in to a real company Hadoop cluster?

I am new to the Hadoop environment. I joined a company and was given KT and the required documents for the project. They asked me to log in to the cluster and start work immediately. Can anyone suggest the steps to log in?
Not really clear what you're logging into. You should ask your coworkers for advice.
However, it sounds like you have a Kerberos keytab, and you would run
kinit -kt key.kt
There might be additional arguments necessary there, such as what's referred to as a principal, but only the cluster administrators can answer what that needs to be.
To verify your ticket is active
klist
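For instance, a full invocation might look like this (the principal below is purely illustrative; your administrators will tell you the real one):
kinit -kt key.kt myuser@COMPANY.EXAMPLE.COM
klist   # should now show a valid ticket and its expiry time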
Usually you will have edge nodes, i.e. client nodes, installed with all the clients, like:
HDFS Client
Sqoop Client
Hive Client, etc.
You need to get the hostnames/IP addresses for these machines. If you are using Windows, you can use PuTTY to log in to these nodes, either with a username and password or with the .ppk file provided for those nodes.
In my view, any company will have an infrastructure team which configures LDAP with the Hadoop cluster and grants access to users by adding their IDs to the group roles.
And by the way, are you using Cloudera/MapR/Hortonworks? Every distribution has its own way of doing things and its own best practices.
I am assuming KT means knowledge transfer. Also, the project document is about the application and not the Hadoop cluster/infrastructure.
I would follow this procedure:
1) Find out the name of the edge node (also called the client node) from your team or your TechOps. Also find out whether you will be using some generic Linux user (say "develteam") or whether you will have to get a user created on the edge node.
2) Assuming you are accessing from Windows, install an SSH client (like PuTTY).
3) Log in to the edge node using the credentials (for the generic or specific user as in #1).
4) Run the following command to check that you are on a Hadoop cluster:
> hadoop version
5) Try the Hive shell by typing:
> hive
6) Try running the following HDFS command:
> hdfs dfs -ls /
7) Ask a team member where to find the Hadoop config for that cluster. You will most probably not have write permissions, but maybe you can cat the following files to get an idea of the cluster (see the sketch after the list below):
core-site.xml
hdfs-site.xml
yarn-site.xml
mapred-site.xml
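As a rough sketch, assuming the client configuration lives under /etc/hadoop/conf (the exact path differs per distribution), you could inspect it like this:
cat /etc/hadoop/conf/core-site.xml
hdfs getconf -confKey fs.defaultFS   # prints the NameNode URI the client is configured to use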

Submitting an MR job to a Hadoop cluster with different IDs

What is the best way in which we can submit an MR job to the Hadoop cluster?
Scenario:
Developers have their own IDs, e.g. dev-user1, dev-user2, etc.
The Hadoop cluster has various IDs for various components, e.g. the hdfs user for HDFS, yarn for YARN, etc.
This means dev-user1 can't read/write HDFS, as it is the hdfs ID that has access to HDFS.
Can anyone help me understand what is the best practice by which a developer can submit a job to the Hadoop cluster? I don't want to share the Hadoop-specific ID details with anyone.
How does this work in real-life scenarios?
best practice by which a developer can submit a job to the Hadoop cluster?
Depends on the job... yarn jar would be used for MapReduce.
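For example, a submission might look like this (the jar name, main class and paths are placeholders):
yarn jar my-mr-job.jar com.example.WordCount /user/dev-user1/input /user/dev-user1/output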
This means dev-user1 can't read/write HDFS, as it is the hdfs ID that has access to HDFS.
Not everything is owned by the hdfs user. You need to make the /user/dev-user1 HDFS directory owned by that user so that the user has a "private" space there. You can still make directories elsewhere on HDFS that multiple users write to.
And permissions are only checked if you've explicitly enabled them on HDFS... And even if you have, you can still put both users into the same POSIX group, or make directories globally writable by all.
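A minimal sketch of what that could look like, run as the HDFS superuser (user, group and path names are illustrative):
hdfs dfs -mkdir -p /user/dev-user1
hdfs dfs -chown dev-user1:dev-user1 /user/dev-user1    # the user's "private" space
hdfs dfs -mkdir -p /data/shared                        # a shared area outside the private home
hdfs dfs -chgrp developers /data/shared
hdfs dfs -chmod 775 /data/shared                       # group-writable; 777 would open it to everyone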
https://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html
In production-grade clusters, Hadoop is secured with Kerberos credentials, and ACLs are managed via Apache Ranger or Sentry, both of which allow fine-grained permission management.

Why is another user required to run Hadoop?

I have a question regarding Hadoop configuration:
why do we need to create a user for running Hadoop? Can we not run Hadoop as the root user?
Yes, you can run it as root.
It is not a requirement to have a dedicated user for Hadoop but having one with lesser privileges than root is considered a good practice. It helps in separating Hadoop processes from other services running on the same machine.
This is not Hadoop-specific; it's common good practice in IT to have dedicated users for running daemons, for security reasons (for example, in Hadoop, if you run the MapReduce daemons as root, a malicious user could launch a MapReduce job which deletes not only HDFS data but operating system data), for better control, etc. Take a look at this:
https://unix.stackexchange.com/questions/29159/why-is-it-recommended-to-create-a-group-and-user-for-some-applications
It is not at all required to create a new user to run Hadoop. Also, the Hadoop user need not be (and should not be) in the sudoers file or a root user [ref]. Your login user for the machine can also act as the Hadoop user. But as mentioned by @Luis and @franklinsijo, it is good practice to have a specific user for a specific service.
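As an illustration only (the user name and the /opt/hadoop install path are assumptions; the exact commands depend on your distro and layout), creating such a dedicated, unprivileged user might look like:
sudo useradd -r -m hadoop                        # system user with no sudo/root privileges
sudo chown -R hadoop:hadoop /opt/hadoop          # give it ownership of the Hadoop install
sudo -u hadoop /opt/hadoop/sbin/start-dfs.sh     # run the daemons as that user, not as root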

Check permissions in HDFS

I'm totally new to Hadoop. One of our SAS users has a problem saving a file from SAS Enterprise Guide to Hadoop, and I've been asked to check whether permissions in HDFS have been granted properly; in other words, to make sure users are allowed to move data from one side and add it to the other.
Where should I check for this on the SAS servers? Is it a file, and how can I check it?
A detailed answer would be much appreciated.
Thanks.
This question is too vague, but I can offer a few suggestions. First off, the SAS Enterprise Guide user should have a resulting SAS log from their job with any errors.
The Hadoop cluster distribution, version, services being used (for example, whether the Knox, Sentry, or Ranger security products are set up), and authentication (Kerberos) all make a difference. I will assume you are not having Kerberos issues and are not running Knox, Sentry, Ranger, etc., and that you are using core Hadoop with no Kerberos. If you need help with those, you must be more specific.
1 - You have to check permissions on the Hadoop side for this. You have to know where they are putting the data in Hadoop. These are paths in HDFS, not the server's file system.
If connecting to Hive and not specifying any options, it is likely the /user/hive/warehouse or /user/<username> folder.
2 - The Hadoop sticky bit, enabled by default on /tmp, prevents users from deleting or moving files in /tmp in HDFS that they do not own. Some SAS programs write to the /tmp folder in HDFS to save metadata, along with other information.
Run the following command on a Hadoop node to check basic permissions in HDFS.
hadoop fs -ls /
You should see the /tmp folder along with its permissions. If the /tmp folder's permissions have a "t" at the end, such as drwxrwxrwt, the sticky bit is set. If the permissions are drwxrwxrwx, then the sticky bit isn't set, which is good for ruling out permission issues.
If you have the sticky bit set on /tmp, which is usually the case by default, then you must either remove it or set an HDFS temp directory in the SAS program's libname for the Hadoop cluster.
Please see the following SAS/Access to Hadoop Guide about the libname options at SAS/ACCESS® 9.4 for Relational Databases: Reference, Ninth Edition | LIBNAME Statement Specifics for Hadoop
To remove/change the Hadoop sticky bit, see the following article or one from your Hadoop vendor: Configuring Hadoop Security in CDH 5, Step 14: Set the Sticky Bit on HDFS Directories. You will want to do the opposite of that article to remove the sticky bit, though.
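As a sketch, run as the HDFS superuser (octal modes are shown because they work across Hadoop versions):
hadoop fs -chmod 0777 /tmp   # removes the sticky bit (drwxrwxrwx)
hadoop fs -chmod 1777 /tmp   # puts it back (drwxrwxrwt)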
3 - SAS + Authentication + Users
If your Hadoop cluster is secured using Kerberos, then each SAS user must have a valid Kerberos ticket to talk to any Hadoop service. There are a number of guides on the SAS Hadoop Support page about Kerberos, along with other resources. With Kerberos they need a Kerberos ticket, not a username or password.
SAS 9.4 Support For Hadoop Reference
If you are not using Kerberos, then you have either the Hadoop default of no authentication, or possibly some services, such as Hive, with LDAP enabled.
If you don't have LDAP enabled, then you can use any Hadoop username in the libname statement to connect, such as hive, hdfs, or yarn. You do not need to enter any password, and this user doesn't have to be the SAS user account. This is because the default Hadoop configuration does not require authentication. You can use another account, such as one you might create for the SAS user in your Hadoop cluster. If you do this, you must create a /user/<username> folder in HDFS by running something like the following as the HDFS superuser (or one with permissions in Hadoop), and then set the ownership to that user.
hadoop fs -mkdir /user/sasdemo
hadoop fs -chown sasdemo:sasusers /user/sasdemo
Then you can check to make sure it exists with
hadoop fs -ls /user/
Basically, whichever user they have in the libname statement in their SAS program must have a user home folder in Hadoop. The Hadoop service users will have one created by default on install, but you will need to create them for any additional users.
If you are using LDAP with Hadoop (not too common from what I've seen), then you will have to have the LDAP username along with a password for the user account in the libname statement. I believe you can encode the password if you like.
Testing Connections to Hadoop from SAS Program
You can modify the following SAS code to do a basic test that puts one of the sashelp datasets into Hadoop using a serial connection to HiveServer2 from SAS Enterprise Guide. This is only a very basic test, but it should prove you can write to Hadoop.
libname myhive hadoop server=hiveserver.example.com port=10000 schema=default user=hive;
data myhive.cars;set sashelp.cars;run;
Then if you want you can use the Hadoop client of your choice to find the data in Hadoop in the location you stored it, likely /user/hive/warehouse.
hadoop fs -ls /user/hive/warehouse
And/Or you should be able to run a proc contents in SAS Enterprise Guide to display the contents of the Hadoop Hive table you just put into Hadoop.
PROC CONTENTS DATA=myhive.cars;run;
Hope this helps, good luck!
To find the proper groups that can access files in HDFS, we need to check Sentry.
The file ACLs are described in Sentry, so if you want to grant or revoke access for anyone, it can be done through it.
On the left-hand side is the file location, and on the right-hand side are the ACLs of the groups.

HDFS configuration & what is the user directory for?

I am currently "playing around" with Hadoop in a VM (CDH4.1.3 image from cloudera). What I am wondering about is the following (and the documentation did not really help me in that regard).
Following the tutorial, I would format a NameNode first - OK, that is already done if one uses the Cloudera image. Likewise, the HDFS file structure is already present. In hdfs-site.xml the DataNode data dir is set to:
/var/lib/hadoop-hdfs/cache/${user.name}/dfs/data
which is obviously where the blocks are supposed to be copied to in a real distributed setting. In the Cloudera tutorial, one is told to create HDFS "home directories" for each user (/users/<username>), but I do not understand what they are for. Are they just for local test runs in a single-node setup?
Say I really had petabytes of data on tape, not fitting into my local storage. This data would have to be distributed straight away, rendering a local "home directory" entirely useless.
Could someone tell me, just to give me an intuition, what a real Hadoop workflow with massive data would look like? What kind of distinct nodes would I have running, for a start?
There's the master (JobTracker) with its slaves file (where would I put that?) allowing the master to resolve all the DataNodes. Then there is my NameNode that keeps track of where the block IDs are stored. The DataNodes also carry TaskTracker responsibility. In the config files, the NameNode's URI is included -- am I correct so far? Then there is still the ${user.name} variable in the configuration which apparently, if I understood it right, has something to do with WebHDFS; it would be great if someone could explain that to me as well. In the running examples, the directories tend to be hardcoded to
/var/lib/hadoop-hdfs/cache/1/dfs/data, /var/lib/hadoop-hdfs/cache/2/dfs/data and so on.
So, back to the example: say I have my tape and want to import data into my HDFS (and I am required to stream data into the filesystem because I lack the local storage to save it locally on a single machine). Where would I start with the migration process? On an arbitrary DataNode? On the NameNode that distributes the chunks? After all, I cannot assume the data will just "be there", because the NameNode has to be aware of the block IDs.
It would be great if someone could shortly elaborate on these topics:
What is the home directory really for?
Do I migrate data to the home directory first and to the real distributed system afterwards?
How does WebHDFS work and what role does it play with regards to the user.name variable
How would I migrate "big data" into my HDFS on the fly - or even if it's not big data, how do I populate my file system in a proper way (meaning that the chunks get randomly distributed across the cluster)?
What is the home directory really for?
You have a small confusion here. Just like /home exists for local filesystems on Linux, where users are given their own storage space, /users is a home mount ON HDFS (the distributed FS). The tutorial needs you to administratively create a home directory for the user you wish to later run data loads and queries as, so that they get adequate permissions and storage access on HDFS. The tutorial is not asking you to create these directories locally.
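For example, creating such a home directory administratively might look like this (run as the HDFS superuser, typically hdfs on a Cloudera image; "peter" is just an example username):
sudo -u hdfs hadoop fs -mkdir -p /users/peter
sudo -u hdfs hadoop fs -chown peter:peter /users/peter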
Do I migrate data to the home directory first and to the real distributed system afterwards?
I believe my above answer should clarify this for you. You should create your home directory on the HDFS, and then load all your data inside of that directory.
How does WebHDFS work and what role does it play with regards to the user.name variable
WebHDFS is one of the various ways to access HDFS. Regular clients that talk to HDFS require the use of Java APIs. WebHDFS (and also HttpFS) was added to HDFS to let other languages have their own set of APIs by providing a REST front end to HDFS. WebHDFS allows user authentication, which helps preserve the permission and security models.
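A minimal example of what a WebHDFS call looks like, and where user.name comes in (host, path and user are placeholders; 50070 is the default NameNode HTTP port in Hadoop 2.x):
curl "http://namenode.example.com:50070/webhdfs/v1/users/peter?op=LISTSTATUS&user.name=peter"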
How would I migrate "big data" into my HDFS on the fly - or even if it's not big data, how do I populate my file system in a proper way (meaning that the chunks get randomly distributed across the cluster)?
A large part of the problem HDFS solves for you is managing the distribution of data. When loading files or data streams into HDFS (via CLI tools, sinks from Apache Flume, etc.), the blocks are spread in an ideal distribution by HDFS itself, and the chunking is managed by it as well. All you need to do is use the regular user-side FileSystem-style APIs and forget about what goes where underneath -- it's all managed for you.
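For instance, a sketch of streaming data straight from a tape mount or pipe into HDFS without staging it locally (paths are illustrative); hadoop fs -put reads from stdin when the source is "-":
tar -cf - /mnt/tape/dataset | hadoop fs -put - /users/peter/dataset.tar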
