sed command garbled for bash 3.0 - bash

I have gone through other threads like "Sed command garbled", but they didn't help me.
flag=1
echo "enter the folder into which you want to capture"
read logs
mkdir $logs
path=/user/gur40139/shell/angel
for i in $path/*.tra*
do
    value=$( grep -ic \*= $i )
    if [ $value -ge $flag ]
    then
        name=`basename $i .tra\*`
        echo -e "count is $value\n" >> $path/$logs/log_"$name".txt
        sed -n '/\*=/ {n;p}' $i|sed 2n\;G >> $path/$logs/log_"$name".txt
    fi
done
echo -e "\nDone\n"
echo -e "\nDone\n"
Error:
sed: command garbled: /\*=/ {n;p}
Additional note: this code works properly with bash 4.1, but I want to test it on 3.0, where many options don't even work, such as sed --version.

sed -n '/\*=/ {n;p;}' ...
You need to terminate the command after the p, so add a ; or a newline. Your code will certainly work on recent GNU sed but not on a POSIX version.
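For reference, here is the line from the question with that terminator added (paths and variables exactly as in the original; quoting the second sed expression is equivalent to escaping the semicolon):
sed -n '/\*=/ {n;p;}' $i | sed '2n;G' >> $path/$logs/log_"$name".txt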

Related

OS version capture script - unexpected results when using awk

I have a small shell script, shown below, that I am using to log in to multiple servers and capture whether the target server is running Red Hat or Ubuntu as its OS.
#!/bin/ksh
if [ -f $HOME/osver.report.txt ];then
rm -rf $HOME/osver.report.txt
fi
for x in `cat hostlist`
do
OSVER=$(ssh $USER@${x} "cat /etc/redhat-release 2>/dev/null || grep -i DISTRIB_DESCRIPTION /etc/lsb-release 2>/dev/null")
echo -e "$x \t\t $OSVER" >> osver.report.txt
done
The above script works; however, if I add some awk as shown below and the server is a Red Hat server, the results in osver.report.txt show only the hostname and no OS version. I have played around with the quoting, but nothing seems to work.
OSVER=$(ssh $USER@${x} "cat /etc/redhat-release | awk {'print $1,$2,$6,$7'} 2>/dev/null || grep -i DISTRIB_DESCRIPTION /etc/lsb-release 2>/dev/null")
If I change the script as suggested to the following:
#!/bin/bash
if [ -f $HOME/osver.report.txt ];then
rm -rf $HOME/osver.report.txt
fi
for x in `cat hostlist`
do
OSVER=$(
ssh $USER@${x} bash << 'EOF'
awk '{print "$1,$2,$6,$7"}' /etc/redhat-release 2>/dev/null || grep -i DISTRIB_DESCRIPTION /etc/lsb-release 2>/dev/null
EOF
)
echo -e "$x \t\t $OSVER" >> osver.report.txt
done
Then I get the following errors:
./test.bash: line 9: unexpected EOF while looking for matching `)'
./test.bash: line 16: syntax error: unexpected end of file
You're suffering from a quoting problem. When you pass a quoted command to ssh, you effectively lose one level of quoting (as if you passed the same arguments to sh -c "..."). So the command that you're running on the remote host is actually:
cat /etc/redhat-release | awk '{print ,,,}' || grep -i DISTRIB_DESCRIPTION /etc/lsb-release
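You can see this quoting loss in isolation (hypothetical host name, purely illustrative): with double quotes the local shell expands $1 before ssh ever runs, while single quotes deliver it intact to the remote shell.
ssh somehost "echo $1"   # local shell expands $1 (usually empty) before ssh runs
ssh somehost 'echo $1'   # the remote shell receives $1 unexpanded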
One way of resolving this is to pipe your script into a shell, rather than passing it as arguments:
OSVER=$(
ssh $USER@${x} bash <<'EOF'
awk '{print "$1,$2,$6,$7"}' /etc/redhat-release 2>/dev/null ||
grep -i DISTRIB_DESCRIPTION /etc/lsb-release 2>/dev/null
EOF
)
The use of <<'EOF' here inhibits any variable expansion in the here document...without that, expressions like $1 would be expanded locally.
A better solution would be to look into something like ansible, which has built-in facilities for sshing to groups of hosts and collecting facts about them, including distribution version information.
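As a rough sketch of that approach (assuming the existing hostlist file can double as an ansible inventory and SSH access is already configured), the setup module can report the distribution facts directly:
ansible all -i hostlist -m setup -a 'filter=ansible_distribution*'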

shell script with tail/grep not working when run via cron - Mint 18.1

Running my script in a terminal works fine. It also works fine in Mint 18.2 when run at boot via /etc/rc.local, but on Mint 18.1 it doesn't work. Also on 18.1 it won't run via sudo crontab -e. I'm assuming it's got something to do with the tail/grep part.
Here is the relevant part of my script; up to this point the script works:
# Takes a screen capture every time I type a string that matches one from a list
sudo -i tail -fn0 "$path"k.log | \
while read line ; do
    echo "$line" | egrep --line-buffered -i -e "$pattern"
    if [ $? = 0 ]
    then
        matches=$(echo "$line" | egrep --line-buffered -i -o "$pattern")
        cap_split=$(echo "$matches" | sed -e ':a' -e 'N' -e '$!ba' -e 's/\n/ /g')
        cap_string=$(echo "$cap_split" | sed -e 's/[^A-Za-z0-9\\n._-]/_/g')
        sleep 1
        DISPLAY=:0.0 scrot "$path"cap/"$stamp"_"$cap_string".png
        echo -e "### Match found \"$cap_split\" and cap created ###"
    fi
done
Why will it only work in the terminal on Mint 18.1 and not from rc.local or cron?

sed or awk to append to specific line

There is a file that I'd like to edit from a bash script:
if [[ -n "$CHROME_USER_DATA_DIR" ]]; then
exec -a "$0" "$HERE/chrome" \
--user-data-dir="$CHROME_USER_DATA_DIR" "$@"
else
exec -a "$0" "$HERE/chrome" "$#"
fi
I would like to append to the end of any line containing exec -a "$0" "$HERE/chrome" "$@" the string
--user-data-dir
The result would overwrite the existing file with:
if [[ -n "$CHROME_USER_DATA_DIR" ]]; then
exec -a "$0" "$HERE/chrome" \
--user-data-dir="$CHROME_USER_DATA_DIR" "$@"
else
exec -a "$0" "$HERE/chrome" "$#" --user-data-dir
fi
I am having difficulty understanding sed and awk, but want to make this work.
Unfortunately you need to use a regexp instead of a string comparison, since your whitespace can apparently vary, and that introduces some escaping complexity. Using GNU awk for -i inplace and \s shorthand for [[:space:]]:
awk -i inplace '/exec\s+-a\s+"\$0"\s+"\$HERE\/chrome"\s+"\$@"/{$0=$0 " --user-data-dir"} 1' file
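If your awk lacks those GNU extensions (-i inplace appeared in gawk 4.1, and \s is GNU-only), a sketch of the same edit using POSIX character classes and a temporary file that is moved back over the original:
awk '/exec[[:space:]]+-a[[:space:]]+"\$0"[[:space:]]+"\$HERE\/chrome"[[:space:]]+"\$@"/ {$0 = $0 " --user-data-dir"} 1' file > file.tmp && mv file.tmp file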
Try this to edit your file with GNU sed:
sed -i '5s/$/& --user-data-dir/' file
With awk you could do:
awk 'NR==5{$0=$0" --user-data-dir"}1' file > newfile

How do I store a bash command as string for multiple substitutions?

I'm trying to clean up this script I have and this piece of code is annoying me because I know it can be more DRY:
if grep --version | grep "GNU" > /dev/null ;
then
grep -P -r -l "\x0d" $dir | grep "${fileRegex}"
else
grep -r -l "\x0d" $dir | grep "{$fileRegex}"
fi
My thoughts are to somehow conditionally set a string variable to either "grep -P" or "egrep" and then in a single line do something like:
$(cmdString) -r -l "\x0d" $dir | grep "${fileRegex}"
Or something like that but it doesn't work.
Are you worried about a host which has GNU grep but not egrep? Do such hosts exist?
If not, why not just always use egrep? (Though -P and egrep are not the same thing.)
That being said you don't use strings for this (see BashFAQ#50).
You use arrays: grepcmd=(egrep) or grepcmd=(grep -P) and then "${grepcmd[@]}" ....
You can also avoid needing perl mode entirely if you use $'\r' or similar (assuming your shell understands that quoting method).
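For instance, a minimal sketch of that carriage-return alternative, mirroring the original pipeline (assumes a shell with ANSI-C $'...' quoting, such as bash):
grep -r -l $'\r' "$dir" | grep "${fileRegex}"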
You can do this:
if grep --version | grep "GNU" > /dev/null
then
cmdString=(grep -P)
else
cmdString=(egrep)
fi
"${cmdString[#]}" -r -l "\x0d" "$dir" | grep "{$fileRegex}"
@Etan Reisner's suggestion worked well. For those interested in the final code (this case is for tabs, not Windows line endings, but it is similar):
fileRegex=${1:-".*\.java"}
if grep --version | grep "GNU" > /dev/null ;
then
cmdString=(grep -P)
else
cmdString=(grep)
fi
arr=$("${cmdString[#]}" -r -l "\x09" . | grep "${fileRegex}")
if [ -n "$dryRun" ]; then
for i in $arr; do echo "$i"; done
else
for i in $arr; do expand -t 7 "$i" > /tmp/e && mv /tmp/e "$i"; done
fi

Lynx is stopping loop?

I'll just apologize beforehand; this is my first ever post, so I'm sorry if I'm not specific enough, if the question has already been answered and I just didn't look hard enough, and if I use incorrect formatting of some kind.
That said, here is my issue: in bash, I am trying to create a script that reads a file listing several dozen URLs. Once it reads each line, I need it to run a set of actions on it, the first being to use lynx to navigate to the website. However, in practice, it runs perfectly once, on the first line: lynx goes, the download works, and the subsequent renaming and organizing of that file go through as well. But then it skips all the other lines and acts as if it has finished the whole file.
I have tested to see if it was lynx causing the issue by eliminating all the other parts of the code, and then by just eliminating lynx. It works without Lynx, but, of course, I need lynx for the rest of the output to be of any use to me. Let me just post the code:
#!/bin/bash
while read line; do
    echo $line
    lynx -accept_all_cookies $line
    echo "lynx done"
    od -N 2 -h *.zip | grep "4b50"
    echo "od done, if 1 starting..."
    if [[ $? -eq 0 ]]
    then ls *.* >> logs/zips.log
    else
        od -N 2 -h *.exe | grep "5a4d"
        echo "if 2 starting..."
        if [[ $? -eq 0 ]]
        then ls *.* >> logs/exes.log
        else
            od -N 2 -h *.exe | grep "5a4d, 4b50"
            echo "if 3 starting..."
            if [[ $? -eq 1 ]]
            then
                ls *.* >> logs/failed.log
            fi
            echo "if 3 done"
        fi
        echo "if 2 done"
    fi
    echo "if 1 done..."
    FILE=`(ls -tr *.* | head -1)`
    NOW=$(date +"%m_%d_%Y")
    echo "vars set"
    mv $FILE "criticalfreepri/${FILE%%.*}(ZCH,$NOW).${FILE#*.}" -u
    echo "file moved"
    rm *.zip *.exe
    echo "file removed"
done < "lynx"
$SHELL
Just to be sure, I do have a file called "lynx" that contains the URLs, one per line. Also, I used all those "echo"s for my own sort of debugging, but I have tried it with and without them. When I execute the script, the echos all show up...
Any help is appreciated, and thank you all so much! Hope I didn't break any rules on this post!
PS: I'm on Linux Mint running things through the "terminal" program. I'm scripting with bash in Gedit, if any of that info is relevant. Thanks!
EDIT: Actually, the echo tests repeat for all three lines. So it would appear that lynx simply can't start again in the same loop?
Here is a simplified version of the script, as requested:
#!/bin/bash
while read -r line; do
    echo $line
    lynx $line
    echo "lynx done"
done < "ref/url"
read "lynx"
$SHELL
Note that I have changed the sites the "url" file goes to:
www.google.com
www.majorgeeks.com
http://www.sophos.com/en-us/products/free-tools/virus-removal-tool.aspx
Lynx is not designed to be used in scripts because it locks the terminal; Lynx is an interactive console browser.
If you want to access URLs in a script, use wget, for example:
wget http://www.google.com/
For exit codes see: http://www.gnu.org/software/wget/manual/html_node/Exit-Status.html
To parse the HTML content, use:
VAR=`wget -qO- http://www.google.com/`
echo $VAR
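As a rough sketch of how wget drops into the original loop (same "ref/url" list as above; variable names purely illustrative), using its exit status to report failures:
while read -r line; do
    page=$(wget -qO- "$line")   # "$page" now holds the fetched HTML, as in the VAR example above
    status=$?
    if [ $status -eq 0 ]; then
        echo "fetched: $line"
    else
        echo "wget failed with exit code $status for: $line" >&2
    fi
done < "ref/url"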
I found a way that may fulfill your requirement to run the lynx command in a loop, substituting a different URL each time.
Use
echo `lynx $line`
(echo the lynx $line command wrapped in backticks)
instead of lynx $line. You may refer to the example below:
Your code:
#!/bin/bash
while read -r line; do
    echo $line
    lynx $line
    echo "lynx done"
done < "ref/url"
read "lynx"
$SHELL
Try the below instead:
#!/bin/bash
while read -r line; do
    echo $line
    echo `lynx $line`
    echo "lynx done"
done < "ref/url"
I should have answered this question a long time ago. I got the program working; it's now on GitHub!
Anyway, I simply had to wrap the loop inside a function. Something like this:
progdownload () {
    printlog "attempting download from ${URL}"
    if echo "${URL}" | grep -q "http://www.majorgeeks.com/" ; then
        lynx -cmd_script="${WORKINGDIR}/support/mgcmd.txt" --accept-all-cookies ${URL}
    else wget ${URL}
    fi
}
URL="something.com"
progdownload
