useradd with multiple parameters - bash

I've hit a wall after trying to find a solution to this issue. I have a CSV file of 1000 lines that I need to use to create users on CentOS.
The CSV file is structured as: username, first, last, gender, dob, country, color, fruits, OS, shell, permission
For lines 601 - 1000, I have to add users with the following requirements:
username
comment
shell
primary group must be their color
I have my shell script like this:
cat file.csv | cut -d, -f7 | tail -400 | while read group; do groupadd "$group"; done
cat file.csv | cut -d, -f1-3,7,10 | tail -400 | while read username first last color shell; do useradd "$username" -c "$first $last" -g "$color" -s "$shell"; done
When I run the first script, I get "groupadd: group 'color' already exists." I think that's because it added the group the first time that color appeared, and on later lines with the same color it reports that the group already exists. When I verify using /etc/group, I do see the groups listed in there.
Now when I run the second script, I get "useradd: group ' ' does not exist". Like I stated, when I looked in /etc/group the groups were there. What am I doing wrong?

For the "group already exists" problem, you could add sort -u into the pipeline.
For the useradd problem, see what the output of cut is:
$ echo '1,2,3,4,5,6,7,8,9,10' | cut -d, -f1-3,7,10
1,2,3,7,10
To split that into separate variables with the read command, you need to specify that the delimiter is a comma:
cut -d, -f1-3,7,10 file.csv \
| tail -400 \
| while IFS="," read -r username first last color shell; do
useradd "$username" -c "$first $last" -g "$color" -s "$shell"
done
Notes:
cut can read files itself, so you don't need cat
you almost always want read -r
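As a quick illustration of that last note, compare how a backslash in the input is handled with and without -r:
$ printf 'Smith\\Jones\n' | { read name; echo "$name"; }
SmithJones
$ printf 'Smith\\Jones\n' | { read -r name; echo "$name"; }
Smith\Jones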

Is there a way to combine the outputs of two bash commands?

I am working on displaying each user's name, the date and time of login, and their default Bash shell. I can get each piece to print, but I cannot find a way to display them together line by line.
This is what I have tried so far. I have tried piping the commands together, but putting the two outputs in variables has been the closest thing yet.
#!/bin/bash
x="$(who | tr -s [:space:] | cut -d' ' -f1,3,4)"
y="$(cat /etc/passwd | tr -s [:print:] | cut -d':' -f7)"
echo $x $y
For every user I have to display this:
username date of login time of login default bash shell
UPDATE:
The answers to the other questions combine a single string; this question is asking how to combine the multiple lines of output from the who command with each user's default Bash shell (which is again multiple lines). That is why I am asking here.
My desired output is the username, time and date of login, and the default Bash shell for every logged-in user, as the who command lists them (so basically I want to take the who output, cut out what I want, and then add the user's default shell to the end of each line). I have the two commands that give me those pieces, but I need the default shell to be appended to each user displayed by the who command. Right now the program prints out all the users logged on, and only after it prints them all does it print the default shells.
Here is a quick and dirty attempt.
who |
while read -r name _ time date _; do
printf "%-12s %-14s %-14s %s\n" \
"$name" "$time" "$date" \
"$(getent passwd "$name" | cut -d: -f7)"
done
I am guessing as to what the fields are and what widths you want for the columns; the who tool's output is very system-dependent.

Bash Script - get User Name given UID

Given a user ID as a parameter, how do I find out that user's name? The problem is to write a Bash script, and somehow use the /etc/passwd file.
The uid is the 3rd field in /etc/passwd, based on that, you can use:
awk -v val="$1" -F ":" '$3==val{print $1}' /etc/passwd
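For instance, wrapped in a small script (the script name uid2name.sh and the example UID are just placeholders):
#!/bin/bash
# usage: ./uid2name.sh 1000
# prints the username whose UID matches the first argument
awk -v val="$1" -F ":" '$3==val{print $1}' /etc/passwd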
4 ways to achieve what you need:
http://www.digitalinternals.com/unix/linux-get-username-from-uid/475/
Try this:
grep ":$1:" /etc/passwd | cut -f 1 -d ":"
This greps for ":$1:" within /etc/passwd. (Note that the pattern can in principle also match other numeric fields, such as the GID, which is one reason the getent approach below is more precise.)
Alternatively you can use the getent command:
getent passwd "$1" | cut -f 1 -d ":"
It then does a cut and takes the first field, delimited by a colon. This first field is the username.
You might find the SS64 pages for cut and grep useful:
http://ss64.com/bash/grep.html
http://ss64.com/bash/cut.html

How to compare a file to a list in linux with one line code?

Hey, so I've got another predicament that I am stuck in. I wanted to see approximately how many Indian people are using the Stampede computer. So I set up a text file in vim that has about 50 of the most common surnames in India, and I want to compare the names in that file to the user name list.
So far this is the code I have
getent passwd | cut -f 5 -d: | cut -f 2 -d' '
getent passwd gets the user list, where each entry is going to look like this
tg827313:x:827313:8144474:Brandon Williams
the cut commands will get just the last name, so the output for the example will be
Williams
Now I know I can use grep to compare files, but how do I use it to compare the getent passwd list with the file?
To count how many of the last names of computer users appear in the file namefile, use:
getent passwd | cut -f 5 -d: | cut -f 2 -d' ' | grep -wFf namefile | wc -l
How it works
getent passwd | cut -f 5 -d: | cut -f 2 -d' '
This is your code which I will assume works as intended for you.
grep -wFf namefile
This selects names that match a line in namefile. The -F option tells grep not to use regular expressions for the names. The names are assumed to be fixed strings. The option -f tells grep to read the strings from file. -w tells grep to match whole words only.
wc -l
This returns a count of the lines in the output.
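For reference, namefile is simply expected to contain one surname per line, for example (these names are only placeholders):
$ cat namefile
Singh
Patel
Kumar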

awk for different delimiters piped from xargs command

I run an xargs command invoking a bash shell with multiple commands. I am unable to figure out how to print two columns with different delimiters.
The command I run is below:
cd /etc/yp
cat "$userlist" | xargs -I {} bash -c "echo -e 'For user {} \n'
grep -w {} auto_*home|sed 's/:/ /' | awk '{print \$1'\t'\$NF}'
grep -w {} passwd group netgroup |cut -f1 -d ':'|sort|uniq;echo -e '\n'"
the output I get is
For user xyz
auto_homeabc.jkl.com:/rtw2kop/xyz
group
netgroup
passwd
I need a tab after auto_home (since it is a filename), like in
auto_home abc.jkl.com:/rtw2kop/xyz
The entry from auto_home file is below
xyz -rw,intr,hard,rsize=32768,wsize=32768 abc.jkl.com:/rtw2kop/xyz
How do I awk for the first field (auto_home) and the last field (abc.jkl.com:/rtw2kop/xyz), given that I am piping from grep into awk? '\t' isn't working in the above awk command.
If I understand what you are attempting correctly, then I suggest this approach:
while read -r user; do
    echo "For user $user"
    awk -v user="$user" '$1 == user { print FILENAME "\t" $NF }' auto_home
    awk -F: -v user="$user" '$1 == user { print FILENAME; nextfile }' passwd group netgroup | sort -u
done < "$userlist"
The basic trick is the read loop, which will read a line into the variable $user from the file named in $userlist; after that, it's all straightforward awk.
I took the liberty of changing the selection criteria slightly; it looked as though you wanted to select for usernames, not strings anywhere in the line. This way, only lines will be selected in which the first token is equal to the currently inspected user, and lines in which other tokens are equal to the username but not the first are discarded. I believe this to be what you want; if it is not, please comment and we can work it out.
In the 1st awk command, double-escape the \t to \\t. (You may also need to double-escape the \n.)

Parsing CSV file in bash script [duplicate]

This question already has answers here:
How to extract one column of a csv file
(18 answers)
Closed 7 years ago.
I am trying to parse a CSV file, which contains a typical access control matrix table, in a shell script. My sample CSV file would be
"user","admin","security"
"user1","x",""
"user2","","x"
"user3","x","x"
I would be using this list in order to create files in their respective folders. The problem is how do I get it to store the values of column 2/3 (admin/security)? The output I'm trying to achieve is to group/sort all users that have admin/security rights and create files in their respective folders. (My idea is to probably store all admin/security users into different files and run from there.)
The environment does not allow me to use any Perl or Python programs. However any awk or sed commands are greatly appreciated.
My desired output would be
$ cat sample.csv
"user","admin","security"
"user1","x",""
"user2","","x"
"user3","x","x"
$ cat security.csv
user2
user3
$ cat admin.csv
user1
user3
If you can use cut(1) (which you probably can if you're on any type of unix), you can use
cut -d , -f (n) (file)
where n is the column you want.
You can use a range of columns (2-3) or a list of columns (1,3).
This will leave the quotes but you can use a sed command or something light-weight for that.
$ cat sample.csv
"user","admin","security"
"user1","x",""
"user2","","x"
"user3","x","x"
$ cut -d , -f 2 sample.csv
"admin"
"x"
""
"x"
$ cut -d , -f 3 sample.csv
"security"
""
"x"
"x"
$ cut -d , -f 2-3 sample.csv
"admin","security"
"x",""
"","x"
"x","x"
$ cut -d , -f 1,3 sample.csv
"user","security"
"user1",""
"user2","x"
"user3","x"
note that this won't work for general csv files (doesn't deal with escaped commas) but it should work for files similar to the format in the example for simple usernames and x's.
If you want to just grab the list of usernames, then awk is pretty much the tool made for the job, and an answer below does a good job that I don't need to repeat.
But a grep solution might be quicker and more lightweight.
The grep solution:
grep '^\([^,]\+,\)\{N\}"x"'
where N is the Nth column, with the users being column 0.
$ grep '^\([^,]\+,\)\{1\}"x"' sample.csv
"user1","x",""
"user3","x","x"
$ grep '^\([^,]\+,\)\{2\}"x"' sample.csv
"user2","","x"
"user3","x","x"
from there on you can use cut to get the first column:
$ grep '^\([^,]\+,\)\{1\}"x"' sample.csv | cut -d , -f 1
"user1"
"user3"
and sed 's/"//g' to get rid of quotes:
$ grep '^\([^,]\+,\)\{1\}"x"' sample.csv | cut -d , -f 1 | sed 's/"//g'
user1
user3
$ grep '^\([^,]\+,\)\{2\}"x"' sample.csv | cut -d , -f 1 | sed 's/"//g'
user2
user3
Something to get you started (please note this will not work for csv files with embedded commas and you will have to use a csv parser):
awk -F, '
NR>1 {
    gsub(/["]/,"",$0);
    if($2!="" && $3!="")
        print $1 " has both privileges";
    print $1 > "file"
}' csv
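If the goal is exactly the admin.csv and security.csv files shown in the question, here is a minimal sketch along the same lines (again assuming the simple quoting of the sample, with no embedded commas):
awk -F, 'NR>1 {
    gsub(/"/, "")                              # strip the double quotes
    if ($2 == "x") print $1 > "admin.csv"      # column 2: admin rights
    if ($3 == "x") print $1 > "security.csv"   # column 3: security rights
}' sample.csv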
