awk for different delimiters piped from xargs command - bash

I run an xargs command invoking a bash shell with multiple commands. I am unable to figure out how to print two columns with different delimiters.
The command I run is below:
cd /etc/yp
cat "$userlist" | xargs -I {} bash -c "echo -e 'For user {} \n'
grep -w {} auto_*home|sed 's/:/ /' | awk '{print \$1'\t'\$NF}'
grep -w {} passwd group netgroup |cut -f1 -d ':'|sort|uniq;echo -e '\n'"
the output I get is
For user xyz
auto_homeabc.jkl.com:/rtw2kop/xyz
group
netgroup
passwd
I need a tab after the auto_home (since it is a filename), like in
auto_home abc.jkl.com:/rtw2kop/xyz
The entry from auto_home file is below
xyz -rw,intr,hard,rsize=32768,wsize=32768 abc.jkl.com:/rtw2kop/xyz
How do I awk for the first field (auto_home) and the last field (abc.jkl.com:/rtw2kop/xyz)? Since I pipe from the grep command into awk, '\t' isn't working in the above awk command.

If I understand what you are attempting correctly, then I suggest this approach:
while read user; do
echo "For user $user"
awk -v user="$user" '$1 == user { print FILENAME "\t" $NF }' auto_home
awk -F: -v user="$user" '$1 == user { print FILENAME; exit }' passwd group netgroup | sort -u
done < "$userlist"
The basic trick is the read loop, which will read a line into the variable $user from the file named in $userlist; after that, it's all straightforward awk.
I took the liberty of changing the selection criteria slightly; it looked as though you wanted to select for usernames, not strings anywhere in the line. This way, only lines will be selected in which the first token is equal to the currently inspected user, and lines in which other tokens are equal to the username but not the first are discarded. I believe this to be what you want; if it is not, please comment and we can work it out.
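To see the FILENAME trick in isolation, here is a self-contained sketch that builds a throwaway copy of the auto_home line from the question (the temp directory and sample data are for illustration only, not real system files):

```shell
# Build a scratch auto_home file containing the question's sample entry
tmp=$(mktemp -d)
printf 'xyz -rw,intr,hard,rsize=32768,wsize=32768 abc.jkl.com:/rtw2kop/xyz\n' > "$tmp/auto_home"

# FILENAME is the file awk is currently reading; $NF is the last field
(cd "$tmp" && awk -v user=xyz '$1 == user { print FILENAME "\t" $NF }' auto_home)
```

This prints auto_home, a real tab, then abc.jkl.com:/rtw2kop/xyz, which is exactly the output format the question asks for.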

In the 1st awk command, double-escape the \t to \\t. (You may also need to double-escape the \n.)
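A minimal way to see the escaping at work, feeding the sample line through a bash -c double-quoted string as in the question (the input line here is just the question's sample data):

```shell
# Inside bash -c "…", \\t reaches awk as \t, and \$ stops bash from expanding $1/$NF
printf 'auto_home abc.jkl.com:/rtw2kop/xyz\n' |
  bash -c "awk '{print \$1 \"\\t\" \$NF}'"
```

The inner command that bash actually runs is awk '{print $1 "\t" $NF}', so the two fields come out tab-separated.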

Loop that prints twice in bash

I am writing a bash script that is supposed to print out all the users that have never logged in, with an option to sort them. I have managed to get all the input working; however, I am encountering issues when it comes to printing the output. The loop goes as follows:
for user in $(lastlog | grep -i 'never' | awk '{print $1}'); do
grep $user /etc/passwd | awk -F ':' '{print $1, $3}'
done
Of course, this loop doesn't sort the output; however, from my limited understanding of shells and shell scripting, it should only be a matter of appending ' | sort' after the first awk '{print $1}'. My problem is that the output of this loop prints every user at least twice, and in some instances four times. Why is that and how can I fix it?
Well, let's try to debug it:
for user in $(lastlog | grep -i 'never' | awk '{print $1}'); do
echo "The user '$user' matches these lines:"
grep $user /etc/passwd | awk -F ':' '{print $1, $3}'
echo
done
This outputs:
The user 'daemon' matches these lines:
daemon 1
colord 112
The user 'bin' matches these lines:
root 0
daemon 1
bin 2
sys 3
sync 4
games 5
man 6
(...)
And indeed, the entry for colord does contain daemon:
colord:x:112:120:colord colour management daemon,,,:/var/lib/colord:/bin/false
^-- Here
And the games entry does match bin:
games:x:5:60:games:/usr/games:/usr/sbin/nologin
^-- Here
So instead of matching the username string anywhere, we just want to match it from the start of the line until the first colon:
for user in $(lastlog | grep -i 'never' | awk '{print $1}'); do
echo "The user '$user' matches these lines:"
grep "^$user:" /etc/passwd | awk -F ':' '{print $1, $3}'
echo
done
And now each entry only shows the single line it was supposed to, so you can remove the echos and keep going.
If you're interested in finesse and polish, here's an alternative solution that works efficiently across language settings, weird usernames, network auth, large lists, etc:
LC_ALL=C lastlog |
awk -F ' ' '/Never logged in/ {printf "%s\0", $1}' |
xargs -0 getent passwd |
awk -F : '{print $1,$3}' |
sort
Just think what happens with a user named sh. How many users would grep sh match? Probably all of them, since each is using some shell in the shell field.
You should think about
awk -F ':' '$1 == "'"$user"'" {print $1, $3}' /etc/passwd
or with an awk variable for user:
awk -F ':' -vuser="$user" '$1 == user {print $1, $3}' /etc/passwd
Your grep will match multiple lines (man will match manuel and norman etc.); anchor it to the beginning of the line and add a trailing :.
grep "^${user}:" /etc/passwd | awk -F ':' '{print $1, $3}'
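The difference is easy to demonstrate on a couple of passwd-style sample lines (sample_passwd is a scratch file made up for illustration):

```shell
# Two sample entries; note "bin" also occurs inside /bin and /usr/sbin/nologin
printf 'games:x:5:60:games:/usr/games:/usr/sbin/nologin\nbin:x:2:2:bin:/bin:/usr/sbin/nologin\n' > sample_passwd

grep -c "bin" sample_passwd    # 2: unanchored, matches both lines
grep -c "^bin:" sample_passwd  # 1: anchored, matches only the bin entry
```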
A better option might be to forget about grepping /etc/passwd completely and use the id command to get the user id:
id=$(id -u "${user}" 2>/dev/null) && printf "%s %d\n" "${user}" "${id}"
If the id command fails nothing is printed, or it could be modified to be:
id=$(id -u "${user}" 2>/dev/null)
printf "%s %s\n" "${user}" "${id:-(User not found)}"
On GNU/Linux I'm pretty sure a user reported by lastlog can't fail the id lookup, as lastlog only reports existing users, so the second example may be pointless.

How to parse file to commands in bash

I'm trying to achieve some goal here and while I do know partials steps, I am not successful in putting it all together. I'm looking for an inline command for single usage on multiple hosts. Let's have SW repository file organized like this:
# comments
PROD_NAME:INSTALL_DIR:OPTIONS
PROD_NAME:INSTALL_DIR:OPTIONS
PROD_NAME:INSTALL_DIR:OPTIONS
Now, let's say we want to process the file and do some copy action on every one of the products. I can pipe grep (to get rid of comment lines) into a while-do loop, where I use awk to break each line down into the product name and its path, and assemble that into copy commands. And that's too much nesting for my skill level, I'm afraid. Anyone who'd care to share?
you can use a bash loop to do the same
$ while IFS=: read -r p i o;
do echo "cp $o $p $i";
done < <(grep -v '^#' file)
cp OPTIONS PROD_NAME INSTALL_DIR
cp OPTIONS PROD_NAME INSTALL_DIR
cp OPTIONS PROD_NAME INSTALL_DIR
Remove the echo to run the commands as given.
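A self-contained way to try the loop, feeding the sample repository lines through a here-document instead of a file:

```shell
# IFS=: splits each line at colons into the three variables
while IFS=: read -r p i o; do
  echo "cp $o $p $i"
done < <(grep -v '^#' <<'EOF'
# comments
PROD_NAME:INSTALL_DIR:OPTIONS
EOF
)
```

This prints cp OPTIONS PROD_NAME INSTALL_DIR, one line per non-comment record.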
Comments can be removed by
grep -v '^#'
For awk you have to specify the field delimiter:
awk -F: '{print $1, $2, $3}'
In order to craft copy commands you have to pipe the result to a shell.
echo -e '# comments\nNAME:DIR:OPT' |
grep -v '^#' |
awk -F: '{print "cp", $3, $2, $1}' |
sh
Even better: read a book.
Or this:
http://linuxcommand.org/learning_the_shell.php
https://en.wikibooks.org/wiki/Bourne_Shell_Scripting

Bash Script - get User Name given UID

Given a user ID as a parameter, how do I find out the user's name? The problem is to write a Bash script, somehow using the /etc/passwd file.
The uid is the 3rd field in /etc/passwd, based on that, you can use:
awk -v val=$1 -F ":" '$3==val{print $1}' /etc/passwd
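To try this without touching the real /etc/passwd, run it against a scratch file with a couple of made-up entries (val is hard-coded here in place of the $1 script parameter):

```shell
# Two sample passwd entries; field 3 is the UID
printf 'root:x:0:0:root:/root:/bin/bash\ndaemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin\n' > sample_passwd

# Look up the name for UID 1
awk -v val=1 -F ":" '$3==val{print $1}' sample_passwd   # daemon
```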
4 ways to achieve what you need:
http://www.digitalinternals.com/unix/linux-get-username-from-uid/475/
Try this:
grep ":$1:" /etc/passwd | cut -f 1 -d ":"
This greps for the UID within /etc/passwd. (Note that ":$1:" can also match the GID field or other colon-delimited fields, so this is not fully robust.)
Alternatively you can use the getent command:
getent passwd "$1" | cut -f 1 -d ":"
It then does a cut and takes the first field, delimited by a colon. This first field is the username.
You might find the SS64 pages for cut and grep useful:
http://ss64.com/bash/grep.html
http://ss64.com/bash/cut.html

How to ensure that gid in /etc/passwd also exist in /etc/group

Background: The Unclassified RHEL 6 Security Technical Implementation Guide (STIG), a DoD guide, specifies in (STID-ID) RHEL-06-000294 that all user primary GIDs appearing in /etc/passwd must exist in /etc/group.
Instead of running the recommended 'pwck -rq' command, piping to a log, and forcing the admin to manually remediate, it makes more sense to programmatically check whether the user's GID exists and, if not, simply set it to something which does exist, such as "users" or "nobody".
I've been beating my head against this and can't quite get it. I've failed at sed, awk, and some piping of grep into sed or awk. The problem seems to be when I attempt to nest commands. I learned the hard way that awk won't nest in one-liners (or possibly at all).
The pseudo-code for a more traditional loop looks something like:
# do while not EOF /etc/passwd
# GIDREF = get 4th entry, separator ":" from line
# USERNAMEFOUND = get first entry, separator ":" from line
# grep ":$GIDREF:" /etc/group
# if not found then
# set GID for USERNAMEFOUND to "users"
# fi
# end-do.
This seems like it should be quite simple but I'm apparently missing something.
Thanks for the help.
-Kirk
List all GIDs from /etc/passwd that don't exist in /etc/group:
comm -23 <(awk -F: '{print $4}' /etc/passwd | sort -u) \
<(awk -F: '{print $3}' /etc/group | sort -u)
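A quick check of the comm approach on scratch files (sample_passwd and sample_group are made-up data; GID 999 is deliberately missing from the group file):

```shell
printf 'alice:x:1000:1000::/home/alice:/bin/bash\nbob:x:1001:999::/home/bob:/bin/bash\n' > sample_passwd
printf 'alice:x:1000:\nusers:x:100:\n' > sample_group

# comm -23 keeps lines unique to the first (passwd GID) list: only 999
comm -23 <(awk -F: '{print $4}' sample_passwd | sort -u) \
         <(awk -F: '{print $3}' sample_group | sort -u)
```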
Fix them:
nogroup=$(awk -F: '($1=="nobody") {print $3}' /etc/group)
for gid in $(
comm -23 <(awk -F: '{print $4}' /etc/passwd | sort -u) \
<(awk -F: '{print $3}' /etc/group | sort -u)); do
awk -v gid="$gid" -F: '($4==gid) {print $1}' /etc/passwd |
xargs -n 1 usermod -g "$nogroup"
done
I'm thinking along these lines:
if ! grep "^$(groups $USERNAME | cut -d\ -f 1):" /etc/group > /dev/null; then
usermod -g users $USERNAME
fi
Where
groups $USERNAME | cut -d\ -f 1
gives the primary group of $USERNAME by splitting the output of groups $USERNAME at the first space (before the :) and
grep ^foo: /etc/group
checks if group foo exists.
EDIT: Fix in the code: quoting and splitting at space instead of colon to allow the appended colon in the grep pattern (otherwise, grep ^foo would have also said that group foo existed if there was a group foobar).

Parsing CSV file in bash script [duplicate]

This question already has answers here:
How to extract one column of a csv file
(18 answers)
Closed 7 years ago.
I am trying to parse in a CSV file which contains a typical access control matrix table into a shell script. My sample CSV file would be
"user","admin","security"
"user1","x",""
"user2","","x"
"user3","x","x"
I would be using this list in order to create files in their respective folders. The problem is how do I get it to store the values of column 2/3 (admin/security)? The output I'm trying to achieve is to group/sort all users that have admin/security rights and create files in their respective folders. (My idea is to probably store all admin/security users into different files and run from there.)
The environment does not allow me to use any Perl or Python programs. However any awk or sed commands are greatly appreciated.
My desired output would be
$ cat sample.csv
"user","admin","security"
"user1","x",""
"user2","","x"
"user3","x","x"
$ cat security.csv
user2
user3
$ cat admin.csv
user1
user3
If you can use cut(1) (which you probably can if you're on any type of unix), you can use
cut -d , -f (n) (file)
where n is the column you want.
You can use a range of columns (2-3) or a list of columns (1,3).
This will leave the quotes but you can use a sed command or something light-weight for that.
$ cat sample.csv
"user","admin","security"
"user1","x",""
"user2","","x"
"user3","x","x"
$ cut -d , -f 2 sample.csv
"admin"
"x"
""
"x"
$ cut -d , -f 3 sample.csv
"security"
""
"x"
"x"
$ cut -d , -f 2-3 sample.csv
"admin","security"
"x",""
"","x"
"x","x"
$ cut -d , -f 1,3 sample.csv
"user","security"
"user1",""
"user2","x"
"user3","x"
note that this won't work for general csv files (doesn't deal with escaped commas) but it should work for files similar to the format in the example for simple usernames and x's.
If you want to just grab the list of usernames, then awk is pretty much the tool made for the job, and an answer below does a good job that I don't need to repeat.
But a grep solution might be quicker and more lightweight.
The grep solution:
grep '^\([^,]\+,\)\{N\}"x"'
where N is the Nth column, with the users being column 0.
$ grep '^\([^,]\+,\)\{1\}"x"' sample.csv
"user1","x",""
"user3","x","x"
$ grep '^\([^,]\+,\)\{2\}"x"' sample.csv
"user2","","x"
"user3","x","x"
from there on you can use cut to get the first column:
$ grep '^\([^,]\+,\)\{1\}"x"' sample.csv | cut -d , -f 1
"user1"
"user3"
and sed 's/"//g' to get rid of quotes:
$ grep '^\([^,]\+,\)\{1\}"x"' sample.csv | cut -d , -f 1 | sed 's/"//g'
user1
user3
$ grep '^\([^,]\+,\)\{2\}"x"' sample.csv | cut -d , -f 1 | sed 's/"//g'
user2
user3
Something to get you started (please note this will not work for csv files with embedded commas and you will have to use a csv parser):
awk -F, '
NR>1 {
gsub(/["]/,"",$0);
if($2!="" && $3!="")
print $1 " has both privileges";
print $1 > "file"
}' csv
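A sketch extending this idea to produce the admin.csv and security.csv files the question asks for (column positions are taken from the sample header; fields with embedded commas are still not handled):

```shell
cat > sample.csv <<'EOF'
"user","admin","security"
"user1","x",""
"user2","","x"
"user3","x","x"
EOF

awk -F, 'NR>1 {
  gsub(/"/, "")                   # strip quotes; modifying $0 makes awk re-split the fields
  if ($2 == "x") print $1 > "admin.csv"
  if ($3 == "x") print $1 > "security.csv"
}' sample.csv
```

After running this, admin.csv contains user1 and user3, and security.csv contains user2 and user3, matching the desired output above.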
