Retrieving only name of user who owns folder - bash

I'm making a bash script that has to create subfolders and expand files into a mounted folder. My problem is that I can't create the subfolders as root; I need to run the commands in my script as:
su - UnknownUser -c "mkdir MAKEDir"
So my question is: how can I retrieve only the user name that shows up in commands like
ls -l
Thanks for any input!

Use stat -c with a format of %U for the textual name of the owner, or %u for the numeric uid of the owner (or %G/%g for the group):
stat -c %U <filename>

The stat command varies greatly by implementation, but one of the following should work:
# GNU stat
# -c may be used in place of --format
$ stat --format %U file.ext
# BSD (Mac OS X, anyway) stat
$ stat -f %Su file.ext
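The two variants above can be folded into a small helper that tries the GNU syntax first and falls back to the BSD form (a sketch; it assumes one of the two stat implementations is on PATH):

```shell
#!/bin/sh
# owner_of FILE: print the textual owner name of FILE.
# Tries GNU stat (-c) first, then falls back to BSD stat (-f).
owner_of() {
    stat -c %U "$1" 2>/dev/null || stat -f %Su "$1"
}

owner_of /etc
```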

Use awk. For the user of file foo:
ls -l foo | awk '{print $3}'
or, for user and filename together:
ls -l | awk '{print $3, $9 }'

If "parent" is the parent directory...
user=$(ls -ld "${parent}" | awk '{print $3}')
sudo -u "$user" mkdir "${parent}/child1"
sudo -u "$user" mkdir "${parent}/child2"
sudo -u "$user" mkdir "${parent}/child1/grandchild1"
sudo -u "$user" mkdir "${parent}/child1/grandchild2"
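Parsing ls output is fragile; where GNU stat is available, the owner can be read directly instead (a sketch, with a hypothetical mount point, and sudo still needs the appropriate rights):

```shell
#!/bin/sh
parent=/mnt/shared              # hypothetical mounted folder
owner=$(stat -c %U "$parent")   # %U: textual owner name (GNU stat)
sudo -u "$owner" mkdir -p "$parent/child1/grandchild1"
sudo -u "$owner" mkdir -p "$parent/child1/grandchild2"
```

mkdir -p creates the intermediate child1 directory in one call, so a separate mkdir per level is not needed.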

Related

bash script to access a file in a remote host three layers deep

So in the terminal I access the remote host through ssh -p, and once I'm in I have to cd /directory1/directory2/. Then I want to find the latest directory, which I do using ls -td -- */ | head -n 1, and using this I want to cd into it and run tail -n 1 file1.
All these commands work in the terminal but I want to automate it to where I can just type ./tailer.sh and have that be output.
Any ideas would be appreciated.
The shell script tailer.sh can look something like this:
#!/bin/bash
ssh -p <PORT> <HOST_NAME> '( cd /directory1/directory2/ && LATEST_DIR=$(ls -td -- */ | head -n 1) && cd "${LATEST_DIR}" && tail -n 1 file1 )'
Then give execute permissions to tailer.sh using chmod u+x tailer.sh
Run the script using ./tailer.sh
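The "latest directory" step can be sanity-checked locally before wiring it into ssh (the directory names below are made up; this assumes names without embedded newlines):

```shell
#!/bin/sh
work=$(mktemp -d) && cd "$work" || exit 1
mkdir old_dir new_dir
touch -t 202001010000 old_dir       # backdate the older directory
latest=$(ls -td -- */ | head -n 1)  # newest-first listing of dirs only
printf '%s\n' "$latest"             # prints new_dir/
```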

Looping through file to create other command files

I am trying to create a script that will automatically log me in to a specific remote device (let's call it a fw). The "command" is a bit elaborate, as we are logging in from a protected network server, and there are hundreds of these devices to log in to.
I have created a file with two parameters (command and name) separated by "#". The first parameter is the "command" string with spaces (i.e. "sudo --user ....") which I want to put (echo) into an executable file called "name" (the name of the device I want to log in to).
My logic was originally:
for line in $(awk -F# '{print $1, $2}' list.txt), do touch $2; && echo "$1 > $2" && chmod +x $2; done
The end result should be x number of files named "$name", each containing only the one-line command "$command", and each executable.
I have tried several things to make this work. I can iterate over the file without much issue using for, while, and even [[ -n $name ]], but this only provides me with one variable and doesn't split the line into the two I need, "$command" and "$name". Even $1 and $2 would be fine for my purposes...
While testing:
$ while IFS=# read -r line; do echo "$line"; done < list
sudo --user xxxxxxxxxxxxxx#yyyyyyyyy
sudo --user xxxxxxxxxxxxxx#yyyyyyyyy
sudo --user xxxxxxxxxxxxxx#yyyyyyyyy
Even using IFS=# to split $line doesn't remove the "#" as expected.
for-looping:
$ for line in $(cat list); do echo $line; done
sudo --user xxxxxxxxxxxxxx
yyyyyyyyy
sudo --user xxxxxxxxxxxxxx
yyyyyyyyy
sudo --user xxxxxxxxxxxxxx
yyyyyyyyy
Trying to expand to:
bin$ for line in $(cat list); do awk -F# '{print $1, $2}' $line; done
awk: fatal: cannot open file ` xxxxxxxxxxxxxxxxx' for reading (No such file or directory)
awk: fatal: cannot open file `yyyyyyyyy
sudo --user xxxxxxxxxxxxxxxxx' for reading (No such file or directory)
awk: fatal: cannot open file `yyyyyyyyy
sudo --user xxxxxxxxxxxxxxxxx' for reading (No such file or directory)
I would like to parse (loop) through the file, separate the parameters, create $name with $command inside, and chmod +x $name so that I have an executable that will log me in automatically to the "$name" node.
I suggest inserting all your logic into the awk script.
script.awk
BEGIN {FS = "[\r#]"} # set field separator to # or <CR>
{ # for each input line
print $1 > $2; # write input 1st field to file named 2nd field
system("chmod a+x "$2); # set file named 2nd field, to be executable
}
running the script:
awk -f script.awk list.txt
input list.txt
sudo --user xxxxxxxxxxxxxx#yyyyyyyy1
sudo --user xxxxxxxxxxxxxx#yyyyyyyy2
sudo --user xxxxxxxxxxxxxx#yyyyyyyy3
output:
dudi#DM-840$ ls -l yy*
total 3
-rwxrwxrwx 1 dudi dudi 28 Jun 23 01:21 yyyyyyyy1*
-rwxrwxrwx 1 dudi dudi 28 Jun 23 01:21 yyyyyyyy2*
-rwxrwxrwx 1 dudi dudi 28 Jun 23 01:21 yyyyyyyy3*
update:
changed FS to include the <CR> character; otherwise, with DOS line endings, it was appended to the filenames (seen as ^M).
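For the record, the asker's IFS=# attempt was close: read only splits the line when given more than one variable. The same logic in pure bash (a sketch; it assumes one command#name pair per line and Unix line endings):

```shell
#!/bin/bash
while IFS='#' read -r cmd name; do
    printf '%s\n' "$cmd" > "$name"   # write the command into file "$name"
    chmod +x "$name"                 # make it executable
done < list.txt
```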

How do I not show the processes that I can't kill with the 'kill [pid]' command?

I was working on a project, "make a task manager in Linux", at school.
I used the command ps -u [username] -o stat,cmd --sort=-pid | awk '{print $2$3$4}' to get cmd names from ps.
If I use this command, part of the result looks like this:
awk{print$2$3$4}
ps-u[username]
When I try to terminate those processes using their PIDs, it won't work because the PIDs no longer exist.
How can I avoid showing those awk{print$2$3$4} and ps-u[username] entries? I couldn't come up with anything.
ps -u [username] -o stat,cmd --sort=-pid | awk '{print $2$3$4}'
You can't kill them because they were only alive while the pipeline itself was running; the awk and ps you see are part of the very command you used to generate that output.
There are a few ways you can suppress these. I think the easiest would be to filter them out in your awk script:
ps -u [username] -o stat,cmd --sort=-pid | awk '$2!="awk" && $2!="ps"{print $2$3$4}'
JNevill's solution excludes every running awk or ps process. I think it's better to exclude processes on the current terminal's tty. Also, you aren't getting complete commands with the way you use awk. I (kind of) solved it using sed.
$ ps -u $USER -o stat,tty,cmd --sort=-pid | grep -v `ps -h -o tty $$` | sed -r 's/.* (.*)$/\1/'
You can test it with the following command. I opened man ps in another terminal.
$ ps -u $USER -o stat,tty,cmd --sort=-pid | grep -v `ps -h -o tty $$` | grep -E '(ps|grep)'
S+ pts/14 man ps
The downside is that, besides excluding ps and grep, it excludes your own application as well, since it runs on the same tty.
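Another variation on the name-based filter: using the = form of -o suppresses the header line, so nothing needs to be skipped in awk (a sketch; assumes procps ps):

```shell
#!/bin/sh
# STAT and command name for the current user, minus the transient
# ps and awk processes spawned by this very pipeline.
ps -u "$(id -un)" -o stat=,comm= | awk '$2 != "ps" && $2 != "awk"'
```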

Cron with command requiring sudo

What would be my options to make a script from a command where I need to enter my sudo password?
I'm exporting an fsimage and would like to do it on a regular basis. It could be run from my account, but ideally I would like to create a user dedicated to making these exports.
I would like to stay away from using root's cron and use a more secure way of doing this.
Entire command looks like this:
sudo ssh czmorchr 'hdfs oiv -p Delimited -i $(ls -t /dfs/nn/current/fsimage_* | grep -v md5 |
head -1) -o /dev/stdout 2>/dev/null' | grep -v "/.Trash/" | sed -e 's/\r/\\r/g' | awk 'BEGIN
{ FS="\t"; OFS="\t" } $0 !~ /_impala_insert_staging/ && ($0 ~ /^\/user\/hive\/warehouse\/cz_prd/ ||
$0 ~ /^\/user\/hive\/warehouse\/cz_tst/) { split($1,a,"/"); db=a[5]; table=a[6]; gsub(".db$", "", table); }
db && $10 ~ /^d/ { par=""; for(i=7;i<=length(a);i++) par=par"/"a[i] } db && $10 !~ /^d/
{ par=""; for(i=7;i<=length(a) - 1;i++) par=par"/"a[i]; file=a[length(a)] } NR > 1 { print db, table, par, file, $0 }' |
hadoop fs -put -f - /user/hive/warehouse/cz_prd_mon_ma.db/hive_warehouse_files/fsimage.tsv
To run something via sudo without entering a password, there is an unsafe way, like
echo ubuntu | sudo -S ls
Here I'm running an ls command as the ubuntu user, with the password ubuntu.
As you can see, piping the password to sudo -S works.
Additionally, you need to make the user a sudoer;
here is an example: https://askubuntu.com/questions/7477/how-can-i-add-a-new-user-as-sudoer-using-the-command-line.
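A more conventional alternative to piping the password is a NOPASSWD sudoers rule scoped to the one command, installed with visudo (a sketch; the user name and command are hypothetical):

```
# /etc/sudoers.d/fsimage-export  (edit with: visudo -f /etc/sudoers.d/fsimage-export)
exportuser ALL=(root) NOPASSWD: /usr/bin/ssh czmorchr *
```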
I was able to resolve this issue using the setfacl command. For context: I set up another folder in HDFS where the standby node should dump its fsimages. Then I used the command below, and after that I was able to run the script above without sudo and from a crontab.
setfacl -m u:hdfs:rwx /home/user_name/fsimage-dump/namenode

Escape quotes in a remote command

I'm trying to run a command on a remote server.
The command works fine on the local server, but when I try to run it on the remote server through ssh, I get an error due to bad escaping.
ls -t /root/mysql/*.sql | awk 'NR>2 {system(\"rm \"" $0 \"\"")}'
Full command:
ssh root#host -p XXX "mysqldump --opt --all-databases > /root/mysql/$(date +%Y%m%d%H%M%S).sql;ls -t /root/mysql/*.sql | awk 'NR>2 {system(\"rm \"" $0 \"\"")}'"
Actually there is no need to use awk, and you avoid all that quote escaping:
ls -t /root/mysql/*.sql | tail -n +3 | xargs rm
This assumes your *.sql filenames don't contain any whitespace; otherwise you should use the stat command and sort its output with sort.
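Note that the original awk used NR>2, i.e. it kept the two newest dumps, so the matching tail offset is +3. The idiom can be checked on scratch files (the names below are invented):

```shell
#!/bin/sh
work=$(mktemp -d) && cd "$work" || exit 1
touch -t 202001010000 a.sql; touch -t 202001020000 b.sql
touch -t 202001030000 c.sql; touch -t 202001040000 d.sql
ls -t -- *.sql | tail -n +3 | xargs rm --   # delete all but the 2 newest
ls                                          # only c.sql and d.sql remain
```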
