I am new to hadoop and I have a problem.
The problem is that I want to let someone use the hdfs command, but I cannot give them the root password, so anything that needs "sudo" or "su hdfs" does not work. I have my reasons for not being able to give others root permission.
I have found some solution like:
Create a group, give that group HDFS permissions, and add the user to it so that the user has HDFS permissions. I tried it, but it failed.
So, I want a user to be able to use hdfs commands without running the "sudo -su hdfs" command or any other command that needs sudo permission. Could you tell me which settings or files to change, in more detail, or point me to any useful reference website? Thank you all!
I believe that by just setting the permissions of /usr/bin/hdfs to '-rwxr-xr-x 1 root root', other accounts should be able to execute the hdfs command successfully. Have you tried it?
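For example, to check the current permissions and, if needed, set them to that value (a quick sketch, assuming the binary really is at /usr/bin/hdfs):
ls -l /usr/bin/hdfs                 # should show -rwxr-xr-x 1 root root
sudo chown root:root /usr/bin/hdfs  # one-time admin step
sudo chmod 755 /usr/bin/hdfs        # 755 = rwxr-xr-x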
Rajest is right: the hdfs command does not need sudo. You are probably using 'sudo -su hdfs' because the command is hitting a path where only the user 'hdfs' has permissions; you must organize the data for your users.
A workaround (if you are not using Kerberos) for using hdfs as any user is to execute this line before working:
export HADOOP_USER_NAME=<HDFS_USER>
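For example (a sketch; the user name and path below are only placeholders):
export HADOOP_USER_NAME=hdfs
hdfs dfs -mkdir /user/newuser      # now runs as the hdfs superuser
hdfs dfs -ls /user
Keep in mind this is only a workaround for unsecured clusters; with Kerberos enabled the variable is ignored.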
Related
I have a post-commit hook in my subversion that will export a copy of my repo to a desired location for deployment. That part works fine, but it comes in owned by apache:apache. I need this to be changed to prod_user:prod_user. If I try to add a chown statement in my script, it will fail. If I try to use sudo, it will ask for a password that I can't give because this is happening in a post-commit script. I'd like this to be as automated as possible.
My question is: How can I make this work? I need to export the contents of my repo to the production folder and convert the users/groups to match existing production users/groups.
Is there a way to pass my password as an argument to a sudo command?
Thank you for your help!
Is there a way to pass my password as an argument to a sudo command?
Don't do it, if at all possible. This will leak your password to anyone that can read the script.
But if you can't avoid it, use echo <password> | sudo -S <command>. The -S flag makes sudo read the password from stdin, so you can supply it from there.
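In the post-commit hook from the question that would look something like this (the path is a placeholder for your production folder):
echo 'YourPassword' | sudo -S chown -R prod_user:prod_user /path/to/production/folder
Again, anyone who can read the hook can read the password, so prefer one of the alternatives below.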
Don't do any of sudo, chown, chgrp. It is not the responsibility of the uploader to fix permissions on the remote server.
Have the server administrator properly setup these, so that pushing production files from the repository works straight without messing with sudo permission at the server.
If you are the one same person, then take the time to fix the server side to avoid having a remote user elevate its privileges (even temporarily with sudo) for the sake of fixing uploaded files permissions.
Use crontab -e as the root user; then you can change ownership without escalating privileges.
Or run the deployment as prod_user and have it check out the code; then it is already the owner of the files.
You can keep a file with the last deployment timestamp (or revision) and compare it against HEAD to decide whether anything new needs to be deployed, as in the sketch below.
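A minimal sketch of that idea, assuming prod_user runs it (e.g. from cron) and using the revision number in place of a timestamp; the repository and target paths are placeholders:
#!/bin/bash
# Runs as prod_user, so the exported files are already owned by prod_user.
REPO_PATH=/path/to/repo
TARGET=/path/to/production/folder
STAMP="$TARGET/.last_deployed_rev"

HEAD_REV=$(svnlook youngest "$REPO_PATH")
LAST_REV=$(cat "$STAMP" 2>/dev/null || echo 0)

if [ "$HEAD_REV" != "$LAST_REV" ]; then
    svn export --force "file://$REPO_PATH" "$TARGET"
    echo "$HEAD_REV" > "$STAMP"
fi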
I have set my local laravel 5 storage folder to permissions 755 and to user www-data, as I use apache2 on my machine. However I was getting blank screens instead of stack traces on errors so I changed the permissions to 777 which resolved the issue.
However I feel this is a (terrible) bandaid as really it is allowing any user to modify this directory with all permissions rather than the correct user with limited permissions. I don't know if this issue will affect the development or production server but granting those permissions in such a scenario is not an option.
How do I figure out which user (or group) actually needs permissions to use this directory for laravel logging so I can assign the directory to them and return the permissions back to 755?
I have tried
ps aux | egrep '(apache|httpd)'
but it shows that most processes are being run as www-data...
You're on the right track with ps aux | egrep '(apache|httpd)'.
Processes
Apache/httpd is started as user root, but then it spawns processes to handle incoming requests as the user defined in its configuration. That default user is usually either www-data or apache.
Server OS
On CentOS/RedHat servers, you'll likely see processes being run as user/group apache (this is the default).
On Debian/Ubuntu, the default user set for the processes handling requests is www-data.
This all assumes Apache is using mod_php. If you are using php-fpm, the user running PHP may be configured separately (although it has the same defaults as Apache in my experience).
Permissions for storage
As it sounds like you know, the storage directory needs to be writable by the user or group (depending on permissions) running those processes.
www-data?
It sounds like the result of ps aux | egrep '(apache|httpd)' was www-data, so it's likely (but not 100% definitive) that the directory needs to be writable by the user/group www-data: either by setting www-data as the owner and ensuring the owner has write permission, by granting write permission via the group, or by making the directory world-writable.
A quick test
One easy way to tell is to delete the log file and view-cache files from the storage directory and then temporarily make that directory world-writable.
Make some requests in Laravel that will regenerate those files, and then check what user/group is set on the new files.
That shows you which user/group the process running PHP is using.
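A rough version of that test (the paths assume a default Laravel 5 layout and the URL is just an example):
sudo rm storage/logs/laravel.log     # remove the old log file
sudo chmod -R 777 storage            # temporarily world-writable
curl http://localhost/some-page      # trigger a request that writes a log entry
ls -l storage/logs/laravel.log       # the owner/group shown here is the PHP user
Once you know the user, tighten the permissions back down.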
Are the folders in storage set to 755 too?
If not, you should change the permissions recursively by doing chmod -R 755 storage. Just take care when you use chmod -R because you could set the entire server to 755 by mistake.
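If the owner turns out not to be the web-server user, a common follow-up (just a sketch, assuming www-data as found above) is to hand the directory to that user and keep 755:
sudo chown -R www-data:www-data storage
sudo chmod -R 755 storage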
I have installed and setup a single node instance of hadoop using my username. I want to setup the same hadoop setup to a different user. How can I do this?
In hadoop we run different tasks and store data in HDFS.
If several users are doing tasks using the same user account, it will be difficult to trace the jobs and track the tasks/defects done by each user.
The other issue is security.
If everyone is given the same user account, all users will have the same privileges: they can access everyone's data, modify it, run jobs on it, and even delete it.
This is a very serious issue.
For this we need to create multiple user accounts.
Benefits of Creating multiple users
1) The directories/files of other users cannot be modified by a user.
2) Other users cannot add new files to a user’s directory.
3) Other users cannot perform any tasks (mapreduce etc) on a user’s files.
In short data is safe and is accessible only to the assigned user and the superuser.
Steps for setting up multiple User accounts
To add a new user capable of performing Hadoop operations, do the following steps.
Step 1
Creating a New User
For Ubuntu
sudo adduser --ingroup <groupname> <username>
For RedHat variants
useradd -g <groupname> <username>
passwd <username>
Then enter the user details and password.
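For example, on Ubuntu, creating a group named hadoop and a user named hduser inside it (the names are only examples):
sudo addgroup hadoop
sudo adduser --ingroup hadoop hduser
And on RedHat variants:
sudo groupadd hadoop
sudo useradd -g hadoop hduser
sudo passwd hduser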
Step 2
We need to change the permissions of the directory in HDFS where Hadoop stores its temporary data.
Open the core-site.xml file
Find the value of hadoop.tmp.dir.
In my core-site.xml it is /app/hadoop/tmp. In the following steps, I will be using /app/hadoop/tmp as my directory for storing Hadoop data (i.e. the value of hadoop.tmp.dir).
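The relevant entry in core-site.xml looks like this (the value will be whatever your installation uses):
<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
</property>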
Then from the superuser account do the following step.
hadoop fs -chmod -R 1777 /app/hadoop/tmp/mapred/staging
Step 3
The next step is to give write permission to our user group on hadoop.tmp.dir (here /app/hadoop/tmp; open core-site.xml to get the path from hadoop.tmp.dir). This should be done only on the machine (node) where the new user is added.
chmod 777 /app/hadoop/tmp
Step 4
The next step is to create a directory structure in HDFS for the new user.
For that, create the directory structure from the superuser account.
Eg: hadoop fs -mkdir /user/username/
Step 5
With this alone we will not be able to run MapReduce programs, because the ownership of the newly created directory structure still belongs to the superuser. So change the ownership of the newly created directory in HDFS to the new user.
hadoop fs -chown -R username:groupname <directory to access in HDFS>
Eg: hadoop fs -chown -R username:groupname /user/username/
Step 6
Log in as the new user and perform Hadoop jobs.
su - username
I had a similar file permission issue, and it did not get fixed by executing hadoop fs -chmod -R 1777 /app/hadoop/tmp/mapred/staging.
Instead, it got fixed by executing the following Unix command: $ sudo chmod -R 1777 /app/hadoop/tmp/mapred
I've been looking for a way to force usermod to modify the password/group/... files despite the user being in use.
What I do get now is this:
!! Failed to execute 'usermod --home '...' --password '...' --shell '/bin/false' 'zabbix' 2>&1':
usermod: user zabbix is currently used by process 518
I know that to be safe I need to restart the service, but this is done within a setup script and I am restarting all services at the end.
Is there any way to say --force? (well, except for modifying all necessary files.)
Thanks
If you can get root rights via sudo and are confident enough to change system files using vi, then I would change the files manually.
Only a few things need to be changed in
- /etc/passwd
here you can change the UID, GID, home directory, shell ...
- /etc/group
here you might need to update the GID or the group membership for the user as well if it changed
The file /etc/shadow will be changed automatically when you use passwd to set a new password; as root you can do this directly with "passwd username".
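For reference, an /etc/passwd line has the colon-separated fields name:password:UID:GID:comment:home-directory:shell, so for the zabbix user from the question an entry might look like this (the UID/GID values are just illustrative):
zabbix:x:110:117:Zabbix Monitoring:/var/lib/zabbix:/bin/false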
You can run usermod in a separate user namespace (with a recent enough Linux), but you need to map the root user to root (otherwise you won't have permissions to modify /etc/passwd).
I.e. something like this:
unshare --user --map-root-user usermod ...
Now usermod won't find the processes running with the UID of the user you are modifying.
You probably won't be able to modify the root user itself with this.
I want to run the following sample bash script, which needs the sudo password for one command:
#!/bin/bash
kinit #needs sudo password
vi hello.txt
While running the above script, it asks for a password.
How can I pass the username and password in the command itself, or is there a better way to avoid putting my password in the script?
TL;DR
You can't—at least, not the way you think.
Longer Answer with Alternatives
You have a couple of options:
Authenticate interactively with sudo before running your script, e.g. sudo -v. The credentials will be temporarily cached, giving you time to run your script (see the short example after these options).
Add a specific command such as /usr/lib/klibc/bin/kinit to your sudoers file with the NOPASSWD option. See sudoers(5) and visudo(8) for syntax.
Use gksudo(1) or kdesu(1) with the appropriate keyring to cache your credentials if you're using a desktop environment.
One or more of these will definitely get you where you want to go—just not the way you wanted to get there.
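For the first option, the pattern is simply (the script name is a placeholder):
sudo -v          # prompts once and caches the credentials (typically for 15 minutes)
./myscript.sh    # runs while the cached credentials are still valid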
So if you have access to your full system, you can change your sudoers file to allow certain sudo commands to be run without a password.
On the command line run visudo
Find your user and change the line to look something like this:
pi ALL=(ALL) NOPASSWD: /path/to/kinit, /path/to/another/command
That should do it. Give it another shot!
Hope that helps
You shouldn't pass the username and password. That is not secure, and it will stop working if the password is changed.
You can use this:
gksudo kinit # This is going to open a dialog asking for the password.
#sudo kinit # or this if you want to type your password in the terminal
vi hello.txt
Or you can run your script as root. But note that vi is going to be run as root as well, which means that it will probably create files that belong to root, which might not be what you want.