Create users based on a text file using Bash script? - bash

So, I have a text file that is organized like this
<username>:<fullname>:<usergroups>
I need to create a new user for each line and put them into their groups. I am stuck on getting the username into a variable to use with useradd. I have tried using cut, but it wants a file name; I can't just pass it a line.
Here is what I currently have:
#! /bin/bash
linesNum=1
while read line
do
echo
name=$( cut -d ":" -f1 $( line ) )
((lineNum+=1))
done < "users.txt"
Thanks for your help!

#!/bin/bash
while IFS=: read username fullname usergroups
do
useradd -G $usergroups -c "$fullname" $username
done < users.txt
fullname is the only string that should contain whitespace (hence the quotes). A list of usergroups should be separated by commas with no intervening whitespace (so no quotes needed on that argument), and your username should not contain whitespace either.
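For example, given a hypothetical users.txt line like
jdoe:John Doe:wheel,developers
the loop above would run
useradd -G wheel,developers -c "John Doe" jdoe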
Update:
To get the list of usergroups to create first you can do this...
#!/bin/bash
while IFS=: read username fullname usergroups
do
echo $usergroups >> allgrouplist.txt
done < users.txt
while IFS=, read -r -a groups
do
printf '%s\n' "${groups[@]}" >> groups.txt
done < allgrouplist.txt
sort -u groups.txt | while read group
do
groupadd $group
done
This is a bit long winded, and could be compacted to avoid the use of the additional files allgrouplist.txt and groups.txt but I wanted to make this easy to read. For reference here's a more compact version.
while IFS=: read -r a b groups; do echo "$groups"; done < users.txt |
tr ',' '\n' | sort -u |
while read -r group
do
groupadd "$group"
done
(I screwed the compact version up a bit at first; it should be fine now, but please note I haven't tested it!)
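If GNU xargs is available, the whole thing collapses further still; an untested sketch along the same lines (-r skips running groupadd entirely when the input is empty):
cut -d: -f3 users.txt | tr ',' '\n' | sort -u | xargs -r -n1 groupadd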

while IFS=: read -r username fullname usergroups
do
useradd -G "$usergroups" -c "$fullname" "$username"
done < users.txt

Related

Download URLs from CSV into subdirectory given in first field

So I want to export my products to my new website. I have a CSV file with this data:
product id,image1,image2,image3,image4,image5
1,https://img.url/img1-1.png,https://img.url/img1-2.png,https://img.url/img1-3.png,https://img.url/img1-4.png,https://img.url/img1-5.png
2,https://img.url/img2-1.png,https://img.url/img2-2.png,https://img.url/img2-3.png,https://img.url/img2-4.png,https://img.url/img2-5.png
What I want to do is make a script that reads from that file, makes a directory named after the product id, downloads the product's images, and puts them inside their own folder (folder 1 => images 1-5 of product id 1, folder 2 => images 1-5 of product id 2, and so on).
I can make a plain text file instead of using the Excel format if that's easier to handle. Thanks in advance.
Sorry, I'm really new here. I haven't written the code yet because I'm clueless, but what I want to do is something like this:
for id in $product_id; do
mkdir $id && cd $id && curl -o $img1 $img2 $img3 $img4 $img5 && cd ..
done
Here is a quick and dirty attempt which should hopefully at least give you an idea of how to handle this.
#!/bin/bash
tr ',' ' ' <products.csv |
while read -r prod urls; do
mkdir -p "$prod"
# Potential bug: urls mustn't contain shell metacharacters
for url in $urls; do
wget -P "$prod" "$url"
done
done
You could equivalently do ( cd "$prod" && curl -O "$url" ) if you prefer curl; I generally do, though the availability of an option to set the output directory with wget is convenient.
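For reference, a minimal sketch of that curl variant (same caveat as above: the URLs mustn't contain shell metacharacters):
tr ',' ' ' <products.csv |
while read -r prod urls; do
mkdir -p "$prod"
for url in $urls; do
# subshell, so the cd doesn't leak into the outer loop
( cd "$prod" && curl -O "$url" )
done
done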
If your CSV contains quotes around the fields or you need to handle URLs which contain shell metacharacters (irregular spaces, wildcards which happen to match files in the current directory, etc; but most prominently & which means to run a shell command in the background) perhaps try something like
while IFS=, read -r prod url1 url2 url3 url4 url5; do
mkdir -p "$prod"
wget -P "$prod" "$url1"
wget -P "$prod" "$url2"
: etc
done <products.csv
which (modulo the fixed quoting) is pretty close to your attempt.
Or perhaps switch to a less wacky input format, maybe generate it on the fly from the CSV with
awk -F , 'function trim (value) {
# Trim leading and trailing double quotes
sub(/^"/, "", value); sub(/"$/, "", value);
return value; }
{ prod=trim($1);
for(i=2; i<=NF; ++i) {
# print space-separated prod, url
print prod, trim($i) } }' products.csv |
while read -r prod url; do
mkdir -p "$prod"
wget -P "$prod" "$url"
done
which splits the CSV into repeated lines with the same product ID and one URL each, with any CSV quoting removed, then simply loops over that instead. mkdir with the -p option helpfully doesn't mind if the directory already exists.
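To see what the awk stage feeds the loop, you can run it by itself; with the sample CSV above (the header row would pass through too) it prints one product/URL pair per line:
1 https://img.url/img1-1.png
1 https://img.url/img1-2.png
...
2 https://img.url/img2-5.png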
If you followed the good advice that @Aaron gave you, this code can help you. As you seem to be new to bash, I commented the code for easier comprehension.
#!/bin/bash
# your csv file
myFile=products.csv
# number of data lines (total minus the header row)
nLines=$(($(wc -l < $myFile) - 1))
echo "Total Lines=$nLines"
# loop over the data lines of the file
for i in `seq 1 $nLines`;
do
# the whole line for this row
line=$(sed -n $(($i+1))p $myFile)
# first column value
id=$(echo $line | awk -F "," '{print $1}')
#create the folder if it does not exist
mkdir $id 2>/dev/null
# number of fields in the line (the id plus the image urls)
nFields=$(echo $line | awk -F "," '{print NF}')
# go to id folder
cd $id
#loop over the image fields
for j in `seq 2 $nFields`;
do
# getting the image url to download it
img=$(echo $line | cut -d "," -f $j)
echo "Downloading image $img";echo
# downloading the image
wget $img
done
# go back path
cd ..
done

If statement matching words separated by special char

I'm new to Unix. I have a file with an unknown number of lines in the format "password, username", and I'm trying to make a function that checks this file against a user-entered login.
What I have so far:
Accounts file format:
AAA###, firstname.lastname
echo "Please enter Username:"
read username
if cut -d "," -f2 accounts | grep -w -q $username
then
echo "Success"
fi
This function returns "Success" for the inputs "firstname", "lastname", and "firstname.lastname", when I only want it to match "firstname.lastname".
Any help would be appreciated.
You could go for an exact match, with ^ and $ anchors, like this:
echo "Please enter Username:"
read username
if cut -d "," -f2 accounts | grep -q "^$username$"; then
echo "Success"
fi
While this would work even when the user gives an empty input, you might want to explicitly check for that.
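A minimal sketch of that check, using the same variable:
if [ -z "$username" ]; then
echo "No username entered" >&2
exit 1
fi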
If you loop over the file within the shell, you can use string equality operators instead of regular expressions:
read -rp "enter Username (first.last): " username
shopt -s extglob
found=false
while IFS=, read -r pass uname _othertext; do
# from your question, it looks like the separator is "comma space"
# so we'll remove leading whitespace from the $uname
if [[ "$username" = "${uname##+([[:blank:]])}" ]]; then
echo "Success"
found=true
break
fi
done < accounts
if ! $found; then
echo "$username not found in accounts file"
fi
while read loops in the shell are very slow compared to grep, but depending on the size of the accounts file you may not notice.
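For comparison, an exact-match pipeline without the shell loop; a sketch combining the pieces above (cut drops the password field, sed strips the leading space from the "comma space" separator, and grep -xF does a whole-line fixed-string match):
if cut -d, -f2- accounts | sed 's/^ //' | grep -qxF -- "$username"; then
echo "Success"
fi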
Based on your comment, the issue is that the field separator is a comma followed by a space, not just a comma. cut can't handle multi-character delimiters, but awk can. In your code, replace
cut -d "," -f2
with
awk -F ", " '{print $2}'
By the way, a few precautions are needed to guard against arbitrary user input:
# Use "-r" to avoid backslash escapes.
read -rp "Please enter Username:" username
# Always quote variables ("$username").
# Use "grep -F" for fixed-string mode.
# Use "--" to prevent arguments being interpreted as options.
if awk -F ", " '{print $2}' accounts | grep -wqF -- "$username"; then
echo "Success"
fi

How do I add to a column instead of a row using Bash Script and csv?

#!/bin/bash
# This file will gather who is information
while IFS=, read url
do
whois $url > output.txt
echo "$url," >> Registrants.csv
grep "Registrant Email:" output.txt >> Registrants.csv
done < $1
How do I get the grep output to go into a new column instead of a new row? I want column 1 to have the echo output and column 2 to have the grep output, then move down to a new row.
You can disable the trailing newline on echo with the -n flag.
#!/bin/bash
# This file will gather who is information
while IFS=, read url
do
whois $url > output.txt
echo -n "$url," >> Registrants.csv
grep "Registrant Email:" output.txt >> Registrants.csv
done < $1
Use printf, then you don't have to worry if the "echo" you are using accepts options.
printf "%s" "$url,"
printf is much more portable than "echo -n".
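Applied to the loop above, only the echo line changes; a sketch:
printf "%s" "$url," >> Registrants.csv
grep "Registrant Email:" output.txt >> Registrants.csv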

Bash Script Help Needed

So I'm working on an assignment and I'm very close to getting it; I'm just having issues with the last part. Here is the whole problem, so you know what I'm trying to do -
Write a shell script called make_uid which creates user login names given a file containing the user's full name. Your script needs to read the newusers file, and for each name in the file create a login name which consists of the first character of the users first name, and up to 7 characters of their last name. If the last name is less than seven characters, use the entire last name. If the user only has one name, use whatever is provided as a name (in the newusers file) to generate an 8 character long login name. Note: login names need to be all lower case!
Once you have created a login name, you need to check the passwd file to make sure that the login name which you just created does not exist. If the name exists, chop off the last character of the name that you created, and add a digit (starting at 1) and check the passwd file again. Repeat this process until you create a unique user login name. Once you have a unique user name, append it to the passwd file, and continue processing the newusers file.
This is my code so far. At this point, it makes a full passwd file with all of the login names. I'm just having trouble with the final step of sorting through the list and editing duplicates accordingly.
#!/bin/bash
#Let's make some login names!
declare -a first
declare -a last
declare -a password
file=newusers
first=( $(cat $file | cut -b1 | tr "[:upper:]" "[:lower:]" | tr '\n' ' ') )
for (( i=0; i<${#first[@]}; i++)); do
echo ${first[i]} >> temp1
done
last=( $(cat $file | awk '{print $NF}' $file | cut -b1-7 | tr "[:upper:]" "[:lower:]"))
for (( i=0; i<${#last[@]}; i++)); do
echo ${last[i]} >> temp2
done
paste -d "" temp1 temp2 >> passwd
sort -o passwd passwd
more passwd
rm temp1 temp2
Well, I probably shouldn't be answering a homework assignment but maybe it will help you learn.
#!/bin/bash
infile=./newusers
outfile=./passwd
echo -n "" > $outfile
cat $infile | while read line; do
read firstName lastName < <(echo $line)
if [ -z "$lastName" ]; then
login=${firstName:0:8}
else
login=${firstName:0:1}${lastName:0:7}
fi
digit=1
while fgrep -q $login $outfile; do
login=${login%?}$digit
let digit++
done
echo $login >> $outfile
done
There may be some way to do the fgrep check in a single command instead of a loop, but this is the most readable. Also, your problem statement didn't say what to do if a name is less than 8 characters, so this solution doesn't address that and will produce login names that are short when the names are short.
Edit: The fgrep loop assumes that there will be fewer than 10 duplicates. If not, you need to make it a bit more robust:
lastDigit="?"
nextDigit=1
while fgrep -q $login $outfile; do
login=${login%$lastDigit}$nextDigit
let lastDigit=nextDigit
let nextDigit++
done
Add all user names to another file before adding the digit. Then use fgrep -xc theusername thisotherfile, which returns a count. Append that count to the login name if it's not 0.
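A sketch of that suggestion (the file name used_logins.txt is hypothetical; fgrep -x matches whole lines only and -c prints the match count):
count=$(fgrep -xc "$login" used_logins.txt)
# append the count as the deduplicating digit if the name was already taken
if [ "$count" -ne 0 ]; then
login=$login$count
fi
echo "$login" >> used_logins.txt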

Bash: delete first line of stdin

I've created a script for account creation that reads from a csv file. I do not need the first line in the csv, as it has the titles for the columns. I'm trying to delete the first line using sed 1d $file, but it doesn't seem to work.
#!/bin/bash
FILE="applicants.csv"
sed 1d $FILE |while IFS=: read USERNAME PASSWORD SCHOOL PROGRAM STATUS; do
#------------------------------------------
groupadd -f $SCHOOL
useradd $USERNAME -p $PASSWORD -g $SCHOOL
if [ $? == 0 ]; then
echo
echo "Success! $USERNAME created"
grep $USERNAME /etc/passwd
echo
#------------------------------------------
else
echo "Failed to create account for $USERNAME"
fi
done < $FILE
here is the csv file
Full Name:DOB:School:Program:Status
JoseDavid:22-08-86:ACE:Bsc Computing:Unfinished
YasinAhmed:22-07-85:ACE:Bsc Networking:Complete
MohammedAli:21-04-84:ACE:Bsc Forensics:Complete
UtahKing:22-09-84:ACE:BSC IT:Unfinished
UsmanNaeem:21-09-75:ACE:BSC Computing:Complete
Here is a screenshot of the output
http://i.stack.imgur.com/R5zPN.jpg
Is there any way to skip the first line?
Try using tail -n +2 $FILE instead of sed.
You're reading from the unedited file with the redirect at the end: done < $FILE. Try changing that line to just done.
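Putting both suggestions together, a sketch of the fixed script (tail -n +2 and sed 1d are interchangeable here; the important part is that the loop no longer ends with < $FILE):
#!/bin/bash
FILE="applicants.csv"
tail -n +2 "$FILE" | while IFS=: read -r USERNAME PASSWORD SCHOOL PROGRAM STATUS; do
groupadd -f "$SCHOOL"
useradd "$USERNAME" -p "$PASSWORD" -g "$SCHOOL"
done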
