Bash while loop with sudo and awk

I am trying to read a file using a while loop running as sudo, and then run some awk on each line. File owner is user2 and I am running as user1.
sudo sh -c 'while IFS= read -r line; do
echo $line | awk '{print $NF}'; done < /home/user2/test.txt'
I am having trouble with the single quotes. Double quotes do not work in either of the two places where I have single quotes. Is there any way to get the same command to work with some adjustments?
NOTE: I got the output using the following methods:
sudo cat /home/user2/test.txt |
while IFS= read line; do
echo $line | awk '{print $NF}'
done
sudo awk {'print $NF'} /home/user2/test.txt
I am trying to understand whether there is any solution for using sudo, while, and awk with single quotes, all in a single line.

There are a few reasonable options:
sudo sh -c 'while IFS= read -r line; do echo "$line" | awk '"'"'{print $NF}'"'"';done < /home/user2/test.txt'
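To unpack that '"'"' sequence: it closes the single-quoted string, appends a double-quoted single quote, and reopens single quoting:
# '    closes the current single-quoted string
# "'"  a double-quoted string containing one single quote
# '    reopens single quoting
# net effect: the inner shell sees awk '{print $NF}'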
or (observing that this particular awk does not require single quotes):
sudo sh -c 'while IFS= read -r line; do echo "$line" | awk "{print \$NF}"; done < /home/user2/test.txt'
or (always a good choice to simplify things) write a script:
sudo sh -c '/path/to/script'
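where /path/to/script (the path is just a placeholder) might contain the loop verbatim, with no quoting gymnastics at all:
#!/bin/sh
while IFS= read -r line; do
echo "$line" | awk '{print $NF}'
done < /home/user2/test.txt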
or:
sudo sh << \EOF
while IFS= read -r line; do echo "$line" |
awk '{print $NF}'; done < /home/user2/test.txt
EOF
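The backslash in << \EOF quotes the delimiter, so the here-document body reaches sh literally, with the single quotes and $NF intact. A minimal illustration of the difference:
# quoted delimiter: the outer shell expands nothing in the body
sh << \EOF
echo '$HOME stays literal'
EOF
# unquoted delimiter: the outer shell expands $HOME, despite the single quotes
sh << EOF
echo '$HOME gets expanded'
EOF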
Note that for all of these it would be good to get rid of the loop completely and do:
sudo sh -c 'awk "{print \$NF}" /home/user2/test.txt'
but presumably this is a simplified version of the actual problem.

You should definitely only use sudo for the code which absolutely needs the privileges. This would perhaps be one of the very few cases where a single cat makes sense.
sudo cat /home/user2/test.txt | awk '{ print $NF }'
As others have noted already, the while read loop is completely useless and quite inefficient here, but if you really insist on doing something like that, it's not hard to apply the same principle. (Notice also the fixed quoting.)
sudo cat /home/user2/test.txt |
while IFS= read -r line; do
echo "$line" # double quotes are important
done | awk '{ print $NF }'
Tangentially, you cannot nest single quotes inside single quotes. The usual solution to that is to switch one set of quotes to double quotes, and probably also then correspondingly escape what needs escaping inside the double quotes.
sudo sh -c 'while IFS= read -r line; do
echo "$line" | awk "{print \$NF}"
# note the double quotes, and the escaped \$
done < /home/user2/test.txt'
Just to spell this out, the double quotes around the Awk script are weaker than single quotes, so the dollar sign in $NF needs to be backslashed to protect it from the (inner) shell.
... But as suggested above, the awk script doesn't need to run inside sudo at all here; and anyway, it's also more efficient to put it outside the done.
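Put together, that might look like this sketch, which keeps the loop only for illustration; just the file read runs under sudo, and a single unprivileged awk processes all of the loop's output:
sudo sh -c 'while IFS= read -r line; do echo "$line"; done < /home/user2/test.txt' |
awk '{print $NF}'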

Related

Script does not work with ` but works as a single command

In my bash, the whole script won't work when I use backticks (`).
My script is:
#!/bin/bash
yesterday=$(date --date "$c days ago" +%F)
while IFS= read -r line
do
dir=$(echo $line | awk -F, '{print $1 }')
country=$(echo $line | awk -F, '{print $2 }')
cd path/$dir
cat `ls -v | grep email.csv` > e.csv
done < "s.csv"
The above output is blank.
If I use double quotes ("") instead, the output is: No such file or directory
But if I run just this one line in the terminal, it works:
cat `ls -v | grep email.csv` > e.csv
I also tried with /, but that didn't work either...
You should generally avoid ls in scripts.
Also, you should generally prefer the modern POSIX $(command substitution) syntax like you already do in several other places in your script; the obsolescent backtick `command substitution` syntax is clunky and somewhat more error-prone.
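Nesting is where the difference bites; the modern form needs no extra escaping:
parent=$(basename "$(dirname "$PWD")")   # nests cleanly
parent=`basename \`dirname "$PWD"\``     # same result, with clunky escaped backticks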
If this works in the current directory but fails in others, it means that you have a file matching the regex in the current directory, but not in the other directory.
Anyway, the idiomatic way to do what you appear to be attempting is simply
cat *email?csv* >e.csv
If you meant to match a literal dot, that's \. in a regular expression. The ? is a literal interpretation of what your grep actually did; but in the following, I will assume you actually meant to match *email.csv* (or in fact probably even *email.csv without a trailing wildcard).
If you want to check if there are any files, and avoid creating e.csv if not, that's slightly tricky; maybe try
for file in *email.csv*; do
test -e "$file" || break
cat *email.csv* >e.csv
break
done
Alternatively, look into the nullglob feature of Bash. See also Check if a file exists with wildcard in shell script.
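A sketch of the nullglob variant: with the option set, an unmatched glob expands to nothing instead of staying as literal text, so an empty result is easy to detect.
shopt -s nullglob
files=(*email.csv*)
if [ "${#files[@]}" -gt 0 ]; then
cat "${files[@]}" > e.csv
fi
shopt -u nullglob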
On the other hand, if you just want to check whether email.csv exists, without a wildcard, that's easy:
if [ -e email.csv ]; then
cat email.csv >e.csv
fi
In fact, that can even be abbreviated down to
test -e email.csv && cat email.csv >e.csv
As an aside, read can perfectly well split a line into tokens.
#!/bin/bash
yesterday=$(date --date "$c days ago" +%F)
while IFS=, read -r dir country _
do
cd "path/$dir" # notice proper quoting, too
cat *email.csv* > e.csv
# probably don't forget to cd back
cd ../..
done < "s.csv"
If this is in fact all your script does, probably do away with the silly and slightly error-prone cd:
while IFS=, read -r dir country _
do
cat "path/$dir/"*email.csv* > "path/$dir/e.csv"
done < "s.csv"
See also When to wrap quotes around a shell variable.

Script : substitute value in script and display it

I have this SQL command: insert into users(username, password) values ($username, $password)
I want to display this line for every user.
This is my script:
#!/bin/bash
for name in $(cat /etc/passwd | cut -d: -f1)
do
pass= sudo grep -w $name /etc/shadow | cut -d: -f2
echo 'insert into `users`(`username`, `password`) values ($name, $pass)'
done
But when I execute the script, it doesn't do the substitution.
As root (sudo -s):
#!/bin/bash
while read name; do
pass=$(grep -w "$name" /etc/shadow | cut -d: -f2)
echo "INSERT INTO \`users\`(\`username\`, \`password\`) VALUES ($name, $pass)"
done < <(cut -d: -f1 /etc/passwd)
Notes
If you are a bash beginner, some good pointers to start learning:
FAQ,
Guide,
Ref,
bash hackers,
quotes,
Check your script
And avoid recommendations to learn from the tldp.org website; the TLDP bash guide (the ABS in particular) is outdated, and in some cases just plain wrong. The BashGuide and the bash-hackers' wiki are far more reliable.
Learn how to quote properly in shell, it's very important:
"Double quote" every literal that contains spaces/metacharacters and every expansion: "$var", "$(command "$var")", "${array[@]}", "a & b". Use 'single quotes' for code or literal $'s: 'Costs $5 US', ssh host 'echo "$HOSTNAME"'. See
http://mywiki.wooledge.org/Quotes
http://mywiki.wooledge.org/Arguments
http://wiki.bash-hackers.org/syntax/words
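A two-line demonstration of why the double quotes matter:
var='two  words'
printf '<%s>\n' $var     # unquoted: split into <two> and <words>
printf '<%s>\n' "$var"   # quoted: one argument, inner spacing preserved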
Here I have fixed the issues with your script. Please check if it works now:
#!/bin/bash
while read name
do
pass=$( sudo grep -w "$name" /etc/shadow | awk -F':' '{print $2}' )
echo "insert into 'users'('username', 'password') values ($name, $pass)"
done <<< "$(awk -F':' '{print $1}' /etc/passwd)"
Regards!
Try this Shellcheck-clean pure Bash code, which needs to be run as root:
#! /bin/bash -p
while IFS=: read -r name pass _ ; do
printf "insert into users (username, password) values ('%s', '%s')\\n" \
"$name" "$pass"
done </etc/shadow
/etc/shadow should contain the same users as /etc/passwd, so the code doesn't use /etc/passwd.
See BashFAQ/001 (How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?) for an explanation of how while IFS=: read -r ... works. It also explains the use of _ as a "junk variable".
See the accepted, and excellent, answer to Why is printf better than echo? for an explanation of why the code uses printf instead of echo.
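The short version: echo's behavior varies between shells when its argument looks like an option or contains backslashes, while printf is predictable:
var='-n'
echo "$var"            # many shells take -n as an option and print nothing
printf '%s\n' "$var"   # always prints -n literally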

Bash: reading regexes from a file and substituting them into sed inline as variables

I am stuck on how sed interacts with variables. I am reading a list of regexes from a file and then substituting each one into sed to mask certain sensitive information within a log file. If I hard-code the regex, the sed works perfectly; however, it behaves differently when used with a variable.
con-list.txt contains the following:
(HTTP\/)(.{2})(.*?)(.{2})(group\.com)
(end\sretrieve\sfacility\s)(.{2})(.*?)(.{3})$
I am not sure if the dollar sign in the regex is interfering with the sed command.
input="/c/Users/con-list.txt"
inputfiles="/c/Users/test.log"
echo $inputfiles
while IFS= read -r var
do
#echo "Searching $var"
count1=`zgrep -E "$var" "$inputfiles" | wc -l`
if [ ${count1} -ne 0 ]
then
echo "total:${count1} ::: ${var}"
sed -r -i "s|'[$]var'|'\1\2XXXX\4\5'|g" $inputfiles #this doesnt work
sed -r -i "s/(HTTP\/)(.{2})(.*?)(.{2})(group\.com)/'\1\2XXXX\4\5'/g" $inputfiles #This works
egrep -in "${var}" $inputfiles
fi
done < "$input"
I need sed to accept the regex as a variable read from the file, so I can automate masking of sensitive information within logs.
$ ./zgrep2.sh
/c/Users/test.log
total:4 ::: (HTTP\/)(.{2})(.*?)(.{2})(group\.comp\.com\#GROUP\.COM)
sed: -e expression #1, char 30: invalid reference \5 on `s' command's RHS
Your idea was right, but the regex in the sed command needs to stay under double quotes for $var to be expanded.
Also, you don't need wc -l to count the matching occurrences: the grep family of utilities all implement a -c flag that returns a count of matches. That said, you don't even need to count the matches; you can simply use the command's return code (whether a match was found or not):
if zgrep -qE "$var" "$inputfiles" ; then
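Fleshed out, that exit-status variant might look like this sketch, reusing the corrected sed from below:
while IFS= read -r var; do
if zgrep -qE "$var" "$inputfiles"; then
sed -r -i 's|'"$var"'|\1\2XXXX\4\5|g' "$inputfiles"
fi
done < "$input"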
Assuming you might need the count for debugging purposes, you can keep your approach, with the modifications to your script shown below.
Notice how the variable is interpolated into the sed substitution: $var is expanded under double quotes, while the surrounding parts of the expression stay literal inside single quotes.
while IFS= read -r var
do
count1=$(zgrep -Ec "$var" "$inputfiles")
if [ "${count1}" -ne 0 ]
then
sed -r -i 's|'"$var"'|\1\2XXXX\4\5|g' "$inputfiles"
sed -r -i "s/(HTTP\/)(.{2})(.*?)(.{2})(group\.com)/'\1\2XXXX\4\5'/g" "$inputfiles"
egrep -in "${var}" "$inputfiles"
fi
done < "$input"
You need:
sed -r -i "s/$var"'/\1\2XXXX\4\5/g' $inputfiles
You also need to provide sample input (a useful bit of the log file) so that we can verify our solutions.
EDIT: a slight change to $var and I think this is what you want:
$ cat ~/tmp/j
Got creds for HTTP/PPCKSAPOD81.group.com
Got creds for HTTP/PPCKSAPOD21.group.com
Got creds for HTTP/PPCKSAPOD91.group.com
Got creds for HTTP/PPCKSWAOD81.group.com
Got creds for HTTP/PPCKSDBOD81.group.com
Got creds for HTTP/PPCKSKAOD81.group.com
$ echo $var
(HTTP\/)(.{2})(.*?)(.{2})(.group\.com)
$ sed -r "s/$var"'/\1\2XXXX\4\5/' ~/tmp/j
Got creds for HTTP/PPXXXX81.group.com
Got creds for HTTP/PPXXXX21.group.com
Got creds for HTTP/PPXXXX91.group.com
Got creds for HTTP/PPXXXX81.group.com
Got creds for HTTP/PPXXXX81.group.com
Got creds for HTTP/PPXXXX81.group.com

Evaluating a log file using a sh script

I have a log file with a lot of lines with the following format:
IP - - [Timestamp Zone] 'Command Weblink Format' - size
I want to write a script.sh that gives me the number of times each website has been clicked.
The command:
awk '{print $7}' server.log | sort -u
should give me a list which puts each unique weblink in a separate line. The command
grep 'Weblink1' server.log | wc -l
should give me the number of times Weblink1 has been clicked. I want a command that converts each line created by the Awk command above to a variable and then creates a loop that runs the grep command on the extracted weblink. I could use
while IFS='' read -r line || [[ -n "$line" ]]; do
echo "Text read from file: $line"
done
(source: Read a file line by line assigning the value to a variable) but I don't want to save the output of the Awk script in a .txt file.
My guess would be:
while IFS='' read -r line || [[ -n "$line" ]]; do
grep '$line' server.log | wc -l | ='$variabel' |
echo " $line was clicked $variable times "
done
But I'm not really familiar with connecting commands in a loop, as this is my first time. Would this loop work and how do I connect my loop and the Awk script?
Shell commands in a loop connect the same way they do without a loop, and you aren't very close. But yes, this can be done in a loop if you want the horribly inefficient way for some reason such as a learning experience:
awk '{print $7}' server.log |
sort -u |
while IFS= read -r line; do
n=$(grep -c "$line" server.log)
echo "$line" clicked $n times
done
# you only need the read || [ -n ] idiom if the input can end with an
# unterminated partial line (is illformed); awk print output can't.
# you don't really need the IFS= and -r because the data here is URLs
# which cannot contain whitespace and shouldn't contain backslash,
# but I left them in as good-habit-forming.
# in general variable expansions should be doublequoted
# to prevent wordsplitting and/or globbing, although in this case
# $line is a URL which cannot contain whitespace and practically
# cannot be a glob. $n is a number and definitely safe.
# grep -c does the count so you don't need wc -l
or more simply
awk '{print $7}' server.log |
sort -u |
while IFS= read -r line; do
echo "$line" clicked $(grep -c "$line" server.log) times
done
However if you just want the correct results, it is much more efficient and somewhat simpler to do it in one pass in awk:
awk '{n[$7]++}
END{for(i in n){
print i,"clicked",n[i],"times"}}' server.log |
sort
# or GNU awk 4+ can do the sort itself, see the doc:
awk '{n[$7]++}
END{PROCINFO["sorted_in"]="#ind_str_asc";
for(i in n){
print i,"clicked",n[i],"times"}}' server.log
The associative array n collects the values from the seventh field as keys, and on each line, the value for the extracted key is incremented. Thus, at the end, the keys in n are all the URLs in the file, and the value for each is the number of times it occurred.

Extracting a pattern (grep output) in Linux from shell?

Grep output is usually like this:
after/ftplugin/python.vim:49: setlocal number
Is it possible for me to extract the file name and line number from this result using standard Linux utilities? I am looking for a generic solution that works pretty well.
I can think of using awk to get the first string, like:
Input
echo 'after/ftplugin/python.vim:49: setlocal number' | awk '{print $1}'
after/ftplugin/python.vim:49:
$
Expected
after/ftplugin/python.vim and 49
Goal: open in Vim
I am writing a small function that transforms the grep output into something Vim can understand, mostly for academic purposes. I know there are things like Ack.vim out there which do something similar. What are the standard lightweight utilities out there?
Edit: grep -n "text to find" file.ext | cut -f1 -d: seems to do it, if you don't mind parsing the string twice. sed, though, still needs to be used!
If you're using Bash you can do it this way:
IFS=: read FILE NUM __ < <(exec grep -Hn "string to find" file)
vim "+$NUM" "$FILE"
Or POSIX:
IFS=: read FILE NUM __ <<EOD
$(grep -Hn "string to find" file)
EOD
vim "+$NUM" "$FILE"
Style © konsolebox :)
This will do:
echo 'after/ftplugin/python.vim:49: setlocal number' | awk -F: '{print $1,"and",$2}'
after/ftplugin/python.vim and 49
But give us the data before grep; it may be that we can cut it down more. There is no need for both grep and awk.
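For example (a sketch, since the original grep command wasn't shown), awk can search and report the location in one pass; FILENAME and FNR are awk built-ins holding the current file name and line number:
awk '/setlocal number/ {print FILENAME " and " FNR}' after/ftplugin/python.vim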
If by "reverse parse" you mean you want to start from the end (and can safely assume that the file content contains no colons), parameter expansion makes that easy:
line='after/ftplugin/python.vim:49: setlocal number'
name_and_lineno=${line%:*}
name=${name_and_lineno%:*}
lineno=${name_and_lineno##*:}
Being all in-process (using shell built-in functionality), this is much faster than using external tools such as sed, awk, etc.
To connect it all together, consider a loop such as the following:
while read -r line; do
...
done < <(grep ...)
Now, to handle all possible filenames (including ones with colons) and all possible content (including strings with colons), you need a grep with GNU extensions:
while IFS='' read -u 4 -r -d '' file \
&& read -u 4 -r -d ':' lineno \
&& read -u 4 -r line; do
vim "+$lineno" "$file"
done 4< <(grep -HnZ -e "string to find" /dev/null file)
This works as follows:
Use grep -Z (a GNU extension) to terminate each filename with a NUL rather than a :
Use IFS='' read -r -d '' to read until the first NUL when reading filenames
Use read -r -d ':' lineno to read until a colon when reading line numbers
Read until the next newline when reading lines
Redirect contents on FD #4 to avoid overriding stdin, stdout or stderr (so vim will still work properly)
Use the -u 4 argument on all calls to read to handle contents from FD #4
How about this?:
echo 'after/ftplugin/python.vim:49: setlocal number' | cut -d: -f1-2 | sed -e 's/:/ and /'
Result:
after/ftplugin/python.vim and 49
