I'm writing a bash script that deletes users who are not permitted on the system, but I'm running into a problem.
#!/bin/bash
getent passwd {1000..60000} | cut -d: -f1 > allusers.txt;
diff allowedusers.txt allusers.txt > del.user;
for user in "cat del.user";
do userdel -r $user;
done
When I run it, everything goes smoothly until the userdel command, which just prints its usage message:
Usage: userdel [options] LOGIN
Options:
-f, --force force removal of files,
even if not owned by user
-h, --help display this help message and exit
-r, --remove remove home directory and mail spool
-R, --root CHROOT_DIR directory to chroot into
-Z, --selinux-user remove any SELinux user mapping for the user
No changes are made to users after the script has run. Any help would be appreciated.
diff annotates its output with the line numbers where the differences were found, so it does not produce a plain list of usernames:
:~$ cat 1
User1
User2
User3
$ cat 2
User233
User43
User234
User1
And result is:
$ diff 1 2
0a1,3
> User233
> User43
> User234
2,3d4
< User2
< User3
Instead of diff, try grep (to show the lines of the 2nd file that are missing from the 1st):
grep -v -F -x -f file1 file2
where:
-F, --fixed-strings
Interpret PATTERN as a list of fixed strings, separated by newlines, any of which is to be matched.
-x, --line-regexp
Select only those matches that exactly match the whole line.
-v, --invert-match
Invert the sense of matching, to select non-matching lines.
-f FILE, --file=FILE
Obtain patterns from FILE, one per line. The empty file contains zero patterns, and therefore matches nothing.
Example result is:
$ grep -v -F -x -f 1 2
User233
User43
User234
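An alternative, in case you prefer it: if both files are sorted, comm produces the same set (a sketch; note that comm requires sorted input, unlike the grep approach):

```shell
# comm -13 suppresses lines unique to the 1st file and lines common
# to both, leaving only the lines unique to the 2nd file
printf 'User1\nUser2\nUser3\n'             | sort > 1.sorted
printf 'User233\nUser43\nUser234\nUser1\n' | sort > 2.sorted
comm -13 1.sorted 2.sorted
# → User233, User234, User43 (in sorted order)
```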
Your user variable is not iterating over the users in the file. It is iterating over the literal string "cat del.user" instead of the contents of the file del.user.
To get the contents of the file, I believe you meant to use command substitution to cat the file:
for user in $(cat del.user); do
    userdel -r "$user"
done
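To hedge against the usual pitfalls of for-over-cat (word splitting, globbing), a while read loop is the more robust pattern. This sketch assumes del.user holds one bare username per line (e.g. produced with the grep approach rather than raw diff output), and echoes instead of deleting:

```shell
printf 'alice\nbob\n' > del.user    # sample data for illustration

# Read one line at a time; -r keeps backslashes literal and
# IFS= preserves leading/trailing whitespace in each line
while IFS= read -r user; do
    echo "userdel -r $user"         # echo stands in for the real userdel
done < del.user
```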
Related
I'm trying to manipulate CSV data to delete only users with specific header attributes. The script reads data from a CSV and should delete the users with specific information in their line, but when I run it, it gives errors and prints usage information such as the list of available options, and working through that list hasn't fixed it.
mycsv.csv
username,first,last,gender,dob,countries,airports,shells,cuisines,operands,water,nfl
mb8239,maaran,batey,m,april 16 1993,japan,tpa,sh,spanish,multiplication,hint,49ers
INPUT=mycsv.csv
OLDIFS=$IFS
IFS=','
while read username first last gender dob countries airports shells cuisines operands water nfl
do
if [ $shells == "sh" ]
then
userdel -r
fi
done < $INPUT
IFS=$OLDIFS
From what I understand, if I'm trying to remove the users whose shell is 'sh', then I would do userdel -r, no?
Finally had a chance to get to my ubuntu machine and sort the awk system() command syntax out. Try the following awk command:
awk -F',' '(NR>1) {if ($8 ~ /sh/) {out=system("sudo userdel -r " $1 ); print out}}' mycsv.csv
Output:
userdel: user 'mb8239' does not exist
6
The userdel: user 'mb8239' does not exist line is generated by the attempted deletion of a user that does not exist on the host, so it is expected output. The 6 is the exit code of the userdel command, which is also expected per the userdel man page when a user does not exist:
EXIT VALUES
The userdel command exits with the following values:
0
success
1
can't update password file
2
invalid command syntax
6
specified user doesn't exist
8
user currently logged in
10
can't update group file
12
can't remove home directory
If you do not care about the output from the userdel command then you can utilize the following to redirect the output to /dev/null:
awk -F',' '(NR>1) {if ($8 ~ /sh/) {system("sudo userdel -r " $1 " > /dev/null 2>&1")}}' mycsv.csv
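If you prefer to stay in pure bash rather than awk, a corrected version of the original while-read loop might look like this (a sketch: the sample CSV is recreated inline, an extra read consumes the header, and echo stands in for the real "sudo userdel -r" call):

```shell
INPUT=mycsv.csv
printf '%s\n' \
  'username,first,last,gender,dob,countries,airports,shells,cuisines,operands,water,nfl' \
  'mb8239,maaran,batey,m,april 16 1993,japan,tpa,sh,spanish,multiplication,hint,49ers' > "$INPUT"

{
  read -r _header   # skip the header line
  # split fields on commas; "rest" soaks up columns 9-12
  while IFS=',' read -r username first last gender dob countries airports shells rest; do
      if [ "$shells" = "sh" ]; then
          echo "sudo userdel -r $username"   # echo stands in for the real call
      fi
  done
} < "$INPUT"
```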
I'm trying to copy a file but skip a specific line that starts with 'An', using bash in the macOS terminal.
The file only has 4 lines:
Kalle Andersson 036-134571
Bengt Pettersson 031-111111
Anders Johansson 08-806712
Per Eriksson 0140-12321
I know how to copy the file using the command cp and to grab a specific line in the file using the grep command.
I do not know how I can delete a specific line in the file.
I have used the cp command:
cp file1.txt file2.txt
to copy the file.
And I used the
grep 'An' file2.txt
I expect a result where the new file has the three lines:
Kalle Andersson 036-134571
Bengt Pettersson 031-111111
Per Eriksson 0140-12321
Is there an way I can do this in a single command?
As Aaron said:
grep -vE '^An' file1.txt > file2.txt
What you do here is use grep with the -v option, which means: print every line except the ones that match. Furthermore, you instruct the shell to redirect the output of grep to file2.txt; that is the meaning of the >.
There are a lot of commands in Unix/Linux that can be used for this. sed is an obvious candidate; awk can do it too, as in
awk '{if (!/^An/) print}' file1.txt > file2.txt
Another option is ed:
ed file1.txt <<EOF
1
/^An
d
w file2.txt
q
EOF
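For completeness, the sed version mentioned above is a one-liner (a sketch with the sample data inlined):

```shell
# /^An/d deletes every line starting with "An"; the rest are copied through
printf '%s\n' 'Kalle Andersson 036-134571' \
              'Bengt Pettersson 031-111111' \
              'Anders Johansson 08-806712' \
              'Per Eriksson 0140-12321' > file1.txt
sed '/^An/d' file1.txt > file2.txt
cat file2.txt
```

Note the anchor ^ matters here: without it, "Kalle Andersson" would also match on the surname.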
I am having some problems with using the first column ${1} as input to a script.
Currently the portions of the script looks like this.
#!/bin/bash
INPUT="${1}"
for NAME in `cat ${INPUT}`
do
SIZE="`du -sm /FAServer/na3250-a/homes/${NAME} | sed 's|/FAServer/na3250-a/homes/||'`"
DATESTAMP=`ls -ld /FAServer/na3250-a/homes/${NAME} | awk '{print $6}'`
echo "${SIZE} ${DATESTAMP}"
done
However, I want to modify the INPUT="${1}" to take the first {1} within a specific file. This is so I can run the lines above in another script and use a file that is previously generated as the input. Also to have the output go out to a new file.
So something like:
INPUT="$location/DisabledActiveHome ${1}" ???
Here's my full script below.
#!/bin/bash
# This script will search through Disabled Users OU and compare that list of
# names against the current active Home directories. This is to find out
# how much space those Home directories take up and which need to be removed.
# MUST BE RUN AS SUDO!
# Setting variables for _adm and storage path.
echo "Please provide your _adm account name:"
read _adm
echo "Please state where you want the files to be generated: (absolute path)"
read location
# String of commands to lookup information using ldapsearch
ldapsearch -x -LLL -h "REDACTED" -D $_adm@"REDACTED" -W -b "OU=Accounts,OU=Disabled_Objects,DC="XX",DC="XX",DC="XX"" "cn=*" | grep 'sAMAccountName'| egrep -v '_adm$' | cut -d' ' -f2 > $location/DisabledHome
# Get a list of all the active Home directories
ls /FAServer/na3250-a/homes > $location/ActiveHome
# Compare the Disabled accounts against Active Home directories
grep -o -f $location/DisabledHome $location/ActiveHome > $location/DisabledActiveHome
# Now get the size and datestamp for the disabled folders
INPUT="${1}"
for NAME in `cat ${INPUT}`
do
SIZE="`du -sm /FAServer/na3250-a/homes/${NAME} | sed 's|/FAServer/na3250-a/homes/||'`"
DATESTAMP=`ls -ld /FAServer/na3250-a/homes/${NAME} | awk '{print $6}'`
echo "${SIZE} ${DATESTAMP}"
done
I'm new to all of this so any help is welcome. I will be happy to clarify any and all questions you might have.
EDIT: A little more explanation because I'm terrible at these things.
The lines of code below came from a previous script are a FOR loop:
INPUT="${1}"
for NAME in `cat ${INPUT}`
do
SIZE="`du -sm /FAServer/na3250-a/homes/${NAME} | sed 's|/FAServer/na3250-a/homes/||'`"
DATESTAMP=`ls -ld /FAServer/na3250-a/homes/${NAME} | awk '{print $6}'`
echo "${SIZE} ${DATESTAMP}"
done
It is executed by typing:
./Script ./file
The FILE that is being referenced has one column of user names and no other data:
User1
User2
User3
etc.
The script would take the file and look at the first user's name, which is referenced by
INPUT=${1}
then run a du command on that user and find out the size of their HOME drive. That is reported by the SIZE variable. It does the same thing with DATESTAMP, in regards to when the HOME drive was created for the user. When it is done doing the tasks for that user, it moves on to the next one in the column until it is done.
So following that logic, I want to automate the entire process. Instead of doing this in two steps, I would like to make this all a one step process.
The first process would be to generate the $location/DisabledActiveHome file, which would have all of the disabled users names. Then to run the last portion to get the Size and creation date of each HOME drive for all the users in the DisabledActiveHome file.
So to do that, I need to modify the
INPUT=${1}
line to reflect the previously generated file.
$location/DisabledActiveHome
I don't understand your question really, but I think you want this. Say your file is called file.txt and looks like this:
1 99
2 98
3 97
4 96
You can get the first column like this:
awk '{print $1}' file.txt
1
2
3
4
If you want to use that in your script, do this
while read NAME; do
echo $NAME
done < <(awk '{print $1}' file.txt)
1
2
3
4
Or you may prefer cut like this:
while read NAME; do
echo $NAME
done < <(cut -d" " -f1 file.txt)
1
2
3
4
Or this may suit even better
while read NAME OtherUnwantedJunk; do
echo $NAME
done < file.txt
1
2
3
4
This last, and probably best, solution above uses IFS, which is bash's Input Field Separator, so if your file looked like this
1:99
2:98
3:97
4:96
you would do this
while IFS=":" read NAME OtherUnwantedJunk; do
echo $NAME
done < file.txt
1
2
3
4
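Applying that to the original script, the positional parameter can simply be replaced with the file generated earlier in the same run (a sketch using stand-in paths; echo replaces the du/ls calls):

```shell
location=$(mktemp -d)              # stand-in for the user-supplied path
printf 'User1\nUser2\n' > "$location/DisabledActiveHome"

# No positional argument needed: point INPUT at the generated file
INPUT="$location/DisabledActiveHome"
while read -r NAME; do
    echo "processing $NAME"        # du/ls logic from the original loop goes here
done < "$INPUT"
```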
INPUT="$location/DisabledActiveHome" worked like a charm. I was confused about the syntax, the proper usage, and the output.
My friend recently asked how to compare two folders in Linux and then run meld against any text files that are different. I'm slowly catching on to the Unix philosophy of piping many granular utilities together, and I put together the following solution. My question is: how could I improve this script? There seems to be quite a bit of redundancy, and I'd appreciate learning better ways to script Unix.
#!/bin/bash
dir1=$1
dir2=$2
# show files that are different only
cmd="diff -rq $dir1 $dir2"
eval $cmd # print this out to the user too
filenames_str=`$cmd`
# remove lines that represent only one file, keep lines that have
# files in both dirs, but are just different
tmp1=`echo "$filenames_str" | sed -n '/ differ$/p'`
# grab just the first filename for the lines of output
tmp2=`echo "$tmp1" | awk '{ print $2 }'`
# convert newlines sep to space
fs=$(echo "$tmp2")
# convert string to array
fa=($fs)
for file in "${fa[@]}"
do
# drop first directory in path to get relative filename
rel=`echo $file | sed "s#${dir1}/##"`
# determine the type of file
file_type=`file -i $file | awk '{print $2}' | awk -F"/" '{print $1}'`
# if it's a text file send it to meld
if [ $file_type == "text" ]
then
# throw out error messages with &> /dev/null
meld $dir1/$rel $dir2/$rel &> /dev/null
fi
done
Please preserve/promote readability in your answers. An answer that is shorter but harder to understand won't qualify as an answer.
It's an old question, but let's work a bit on it just for fun, without thinking in the final goal (maybe SCM) nor in tools that already do this in a better way. Just let's focus in the script itself.
In the OP's script, there are a lot of string processing inside bash, using tools like sed and awk, sometimes more than once in the same command line or inside a loop executing n times (one per file).
That's ok, but it's necessary to remember that:
Each time the script calls one of those programs, a new process is created in the OS, and that is expensive in time and resources. So the fewer programs called, the better the script performs. This script calls:
diff 2 times (1 just to print to user)
sed 1 time processing diff result and 1 time for each file
awk 1 time processing sed result and 2 times for each file (processing file result)
file 1 time for each file
That doesn't apply to echo, read, test and others that are builtin commands of bash, so no external program is executed.
meld is the final command that will display the files to user, so it doesn't count.
Even with builtin commands, pipelines (|) have a cost too, because the shell has to create pipes, duplicate handles, and maybe even fork copies of itself (each a process of its own). So again: less is better.
The messages of the diff command are locale-dependent, so if the system is not in English, the whole script won't work.
With that in mind, let's clean up the original script a bit, maintaining the OP's logic:
#!/bin/bash
dir1=$1
dir2=$2
# Set english as current language
LANG=en_US.UTF-8
# (1) show files that are different only
diff -rq $dir1 $dir2 |
# (2) remove lines that represent only one file, keep lines that have
# files in both dirs, but are just different, delete all but left filename
sed '/ differ$/!d; s/^Files //; s/ and .*//' |
# (3) determine the type of file
file -i -f - |
# (4) for each file
while IFS=":" read file file_type
do
# (5) drop first directory in path to get relative filename
rel=${file#$dir1}
# (6) if it's a text file send it to meld
if [[ "$file_type" =~ "text/" ]]
then
# throw out error messages with &> /dev/null
meld ${dir1}${rel} ${dir2}${rel} &> /dev/null
fi
done
A little explaining:
A single chain of commands cmd1 | cmd2 | ... where the output (stdout) of the previous one is the input (stdin) of the next one.
Execute sed just once to perform 3 operations (separated with ;) on the diff output:
Deleting lines that do not end with " differ"
Delete "Files " at the beginning of remaining lines
Delete from " and " to the end of remaining lines
Execute command file once to process the file list in stdin (option -f -)
Use bash's while statement to read two values separated by : from each line of stdin.
Use bash variable substitution to extract filename from a variable
Use bash test to compare a file type with a regular expression
For clarity, I didn't consider that file and directory names may contain spaces. In such cases, both scripts will fail. To avoid that, it is necessary to enclose every reference to a file/dir name variable in double quotes.
I didn't use awk, because it is powerful enough that can replace almost the entire script ;-)
I have a bash script which reads lines from a text file with 4 columns(no headers). The number of lines can be a maximum of 4 lines or less. The words in each line are separated by SPACE character.
ab@from.com xyz@to.com;abc@to.com Sub1 MailBody1
xv@from.com abc@to.com;poy@to.com Sub2 MailBody2
mb@from.com gmc@to.com;abc@to.com Sub3 MailBody3
yt@from.com gqw@to.com;xyz@to.com Sub4 MailBody4
Currently, I am parsing the file and, for each line, storing every word into a variable and calling mailx four times. I am wondering if there is an elegant awk/sed solution to the logic below.
find total number of lines
while read $line, store each line in a variable
parse each line as i=( $line1 ), j=( $line2 ) etc
get values from each line as ${i[0]}, ${i[1]}, ${i[2]} and ${i[3]} etc
call mailx -s ${i[2]} -t ${i[1]} -r ${i[0]} < ${i[3]}
parse next line and call mailx
do this until no more lines or max 4 lines have been reached
Do awk or sed provide an elegant solution to the above iterating/looping logic?
Give this a shot:
head -n 4 mail.txt | while read from to subject body; do
mailx -s "$subject" -t "$to" -r "$from" <<< "$body"
done
head -n 4 reads up to four lines from your text file.
read can read multiple variables from one line, so we can use named variables for readability.
<<< is probably what you want for the redirection, rather than <. Probably.
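The difference matters because < expects a filename, while <<< (a here-string) feeds the string itself to stdin. A quick illustration:

```shell
body="MailBody1"
# <<< passes the contents of $body on stdin;
# < would instead try to open a file literally named MailBody1
cat <<< "$body"
# → MailBody1
```

So if the fourth column of your file names a file containing the message, keep <; if the column is the message text itself, use <<<.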
The above while loop works well as a simple alternative to sed and awk if you have a lot of control over the layout of the lines of text in the file. The read command can use a specified delimiter as well, via the -d flag.
Another simple example:
I had used mysql to grab a list of users and hosts, putting it into a file /tmp/userlist with text as shown:
user1 host1
user2 host2
user3 host3
I passed these variables into a mysql command to get grant info for these users and hosts and append to /tmp/grantlist:
cat /tmp/userlist | while read user hostname;
do
echo -e "\n\nGrabbing user $user for host $hostname..."
mysql -u root -h "localhost" -e "SHOW GRANTS FOR '$user'@$hostname" >> /tmp/grantlist
done