Bash stdout redirect on Solaris 10

OK, this is working:
trace -t lstat64 -v lstat64 ls "myfilename" 2>pipefile
cat pipefile | grep ct | cut -d '[' -f 2 | cut -d ' ' -f 2
But I don't want to have to use the file "pipefile"; how can I redirect the output straight to my grep and cut?

So, you want to ignore stdout and only consider stderr?
trace -t lstat64 -v lstat64 ls "myfilename" 2>&1 1>/dev/null |
grep ct | cut -d '[' -f 2 | cut -d ' ' -f 2
First, the stderr file handle is redirected to whatever the stdout file handle refers to, then the stdout file handle is redirected to /dev/null. Then grep can read from stdin whatever is emitted from trace's stderr.
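A quick way to see why the order matters (a throwaway sketch using ls on a nonexistent path, since it writes to both streams, rather than the original trace command):
# 2>&1 first, then 1>/dev/null: stderr now follows the pipe, stdout is discarded.
ls . /does-not-exist 2>&1 1>/dev/null | grep -c 'No such file'    # prints 1
# Reversed order: stdout goes to /dev/null, then stderr is pointed there as well,
# so nothing reaches the pipe.
ls . /does-not-exist 1>/dev/null 2>&1 | grep -c 'No such file'    # prints 0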

I got it. I just realized I was getting stderr confused with stdout; this was my solution:
trace -t lstat64 -v lstat64 ls "myfilename" 2>&1 | grep ct | cut -d '[' -f 2 | cut -d ' ' -f 2

Related

Bash: curl grep result as string variable

I have a bash script as below:
curl -s "$url" | grep "https://cdn" | tail -n 1 | awk -F[\",] '{print $2}'
which is working fine. When I run it, I am able to get the CDN URL as:
https://cdn.some-domain.com/some-result/
When I put it in a variable:
myvariable=$(curl -s "$url" | grep "https://cdn" | tail -n 1 | awk -F[\",] '{print $2}')
and echo it like this:
echo "CDN URL: '$myvariable'"
I get a blank result: CDN URL:
Any idea what could be wrong? Thanks.
If your curl command produces a trailing DOS carriage return, that will botch the output, though not exactly like you describe. Still, maybe try this:
myvariable=$(curl -s "$url" | awk -F[\",] '/https:\/\/cdn/{ sub(/\r/, ""); url=$2} END { print url }')
Notice also how I refactored the grep and the tail (and the carriage-return removal, via sub(/\r/, "")) into the Awk command. Tangentially, see "useless use of grep".
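If you want to confirm that a stray carriage return really is the culprit, od makes it visible (a throwaway check, reusing the pipeline from the question):
# A \r just before the \n in od's output confirms DOS line endings.
curl -s "$url" | grep "https://cdn" | tail -n 1 | od -c | tail -n 3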
The result could be blank if there's only one item after awk's split.
You might try grep -o to only return the matched string:
myvariable=$(curl -s "$url" | grep -oP 'https://cdn.*?[",].*' | tail -n 1 | awk -F[\",] '{print $2}')
echo "$myvariable"

How do I do a website health check using the curl command?

I'm trying to monitor a website using curl, but the output doesn't look right. Please see the commands below:
#!/bin/bash
varDate=$(date '+%Y-%m-%d %H:%M:%S')
varCurlError=$(curl -sSf https://website.com > /dev/null)
varHttpCode=$(curl -Is https://website.com | head -n 1)
varResponseTime=$(curl -s -w '%{time_total}' -o /dev/null website.com)
varOutput="$varDate | $varCurlError | $varHttpCode | $varResponseTime"
echo $varOutput
The output looks like this:
| 0.07323 18:51:40 | | HTTP/1.1 200 OK
What can I change or add to fix the output?
Much appreciated.
#!/bin/bash
varDate=$(date '+%Y-%m-%d %H:%M:%S')
varCurlError=$(curl -sSf https://website.com 2>&1 >/dev/null)
varHttpCode=$(curl -Is https://website.com | head -n 1 | tr -d '\r')
varResponseTime=$(curl -s -w '%{time_total}' -o /dev/null website.com | tr -d \\r )
varOutput="$varDate | $varCurlError | $varHttpCode | $varResponseTime"
echo $varOutput
There are two corrections:
tr -d '\r' was added, as per glenn jackman. The status line returned by curl -Is ends in a DOS carriage return, and that CR is what causes your varResponseTime to be printed at the beginning of the line; the tr command deletes it.
You need to redirect stderr to stdout before you point file descriptor 1 at /dev/null in your varCurlError statement. Errors reported by curl to stderr are then sent to stdout (and captured by your $(...) substitution), while curl's normal stdout goes to the bit bucket. Order is important: >/dev/null 2>&1 doesn't work, because it sends both stdout and stderr to /dev/null.
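A quick way to convince yourself of the ordering (a throwaway sketch; the URL is just a deliberately unreachable placeholder):
# 2>&1 before >/dev/null: curl's error text lands in the variable.
err=$(curl -sSf https://localhost:1/ 2>&1 >/dev/null)
echo "captured: '$err'"      # e.g. captured: 'curl: (7) Failed to connect ...'
# >/dev/null before 2>&1: both streams end up in /dev/null, so nothing is captured.
none=$(curl -sSf https://localhost:1/ >/dev/null 2>&1)
echo "captured: '$none'"     # prints: captured: ''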
@glenn jackman is correct about the need to pipe the curl output through tr -d '\r'.
That is, change your code to:
#!/bin/bash
varDate=$(date '+%Y-%m-%d %H:%M:%S' | tr -d '\r')
varCurlError=$(curl -sSf https://website.com 2>&1 >/dev/null | tr -d '\r')
varHttpCode=$(curl -Is https://website.com | tr -d '\r' | head -n 1)
varResponseTime=$(curl -s -w '%{time_total}' -o /dev/null website.com | tr -d '\r')
varOutput="$varDate | $varCurlError | $varHttpCode | $varResponseTime"
echo "$varOutput"
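As an aside, the status code and the response time can also come out of a single request, since curl's -w write-out format supports both %{http_code} and %{time_total}; a minimal sketch, not part of either answer above:
#!/bin/bash
varDate=$(date '+%Y-%m-%d %H:%M:%S')
# One request: -o discards the body, -w prints just the metrics we care about.
read -r varHttpCode varResponseTime < <(
    curl -s -o /dev/null -w '%{http_code} %{time_total}\n' https://website.com
)
echo "$varDate | $varHttpCode | $varResponseTime"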
It can also be done with wget, so you can see whether you get any data at all; it can be as simple as this:
#!/bin/bash
dt=$(date '+%d/%m/%Y %H:%M:%S');
wget domain/yourindex
if [ -f /home/$USER/yourindex ] ; then
    #echo $dt GOOD >> /var/log/fix.log
    echo GOOD >/dev/null 2>&1
else
    #counter measures like sudo systemctl restart php7.2-fpm.service && sudo systemctl restart nginx
    echo $dt BROKEN >> /var/log/fix.log
fi
rm login*
exit
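The same kind of up/down check can be done without leaving a downloaded file behind by testing curl's exit status with -f/--fail (a sketch in the same spirit, not part of the answer above):
#!/bin/bash
dt=$(date '+%d/%m/%Y %H:%M:%S')
# -f makes curl exit non-zero on HTTP errors (4xx/5xx) as well as connection failures.
if curl -sSf -o /dev/null https://website.com; then
    : # site responded; nothing to log
else
    echo "$dt BROKEN" >> /var/log/fix.log
    # counter measures (service restarts, alerts, ...) could go here
fi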

How to disable the error message printed by the "ping" command?

I wrote a simple ping sweeper as a bash script. I also use grep to filter out the result I want. The problem is that the console keeps printing the error message "ping: recvmsg: No route to host" no matter which grep command I try. I tried writing the output to a file: there is no error message inside the file, but the messages still appear on the console. I want to know what causes the console to print error messages like that and how to disable them. Thanks.
Here is the script I wrote.
#!/bin/bash
for ip in $(seq 1 254); do
    #ping -c 1 10.11.1.$ip | grep -v "recvmsg" | grep "bytes from" | cut -d " " -f 4 | cut -d ":" -f 1 &
    ping -c 1 10.11.1.$ip | grep -v "recvmsg" | grep -v "ping" | grep "bytes from" | cut -d " " -f 4 | cut -d ":" -f 1 | sort -d >> report &
done
wait
And here is the error message
ping: recvmsg: No route to host
You can use a redirection for stderr (standard error); you only need to add 2> error.log to the ping command:
#!/bin/bash
for ip in $(seq 1 254); do
    #ping -c 1 10.11.1.$ip | grep -v "recvmsg" | grep "bytes from" | cut -d " " -f 4 | cut -d ":" -f 1 &
    ping -c 1 10.11.1.$ip 2> error.log | grep -v "recvmsg" | grep -v "ping" | grep "bytes from" | cut -d " " -f 4 | cut -d ":" -f 1 | sort -d >> report &
done
wait
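If the goal is to silence the messages entirely rather than keep them, the same redirection can point at /dev/null instead of a log file; the line below would replace the ping line inside the loop (a minor variation on the answer above):
    # Discard ping's error output instead of logging it.
    ping -c 1 10.11.1.$ip 2>/dev/null | grep "bytes from" | cut -d " " -f 4 | cut -d ":" -f 1 | sort -d >> report &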

grep search with filename as parameter

I'm working on a shell script.
OUT=$1
Here, the OUT variable is my filename.
I'm using grep search as follows:
l=`grep "$pattern " -A 15 $OUT | grep -w $i | awk '{print $8}'|tail -1 | tr '\n' ','`
The issue is that the filename parameter I must pass is test.log. However, I have this set of files:
test.log
test.log.001
test.log.002
I would ideally like to pass the filename as test.log and have the search run over all of the log files. I know the usual way is to use test.log.* on the command line, but I'm having difficulty replicating that in the shell script.
My efforts:
var=$'.*'
l=`grep "$pattern " -A 15 $OUT$var | grep -w $i | awk '{print $8}'|tail -1 | tr '\n' ','`
However, I did not get the desired result.
Hopefully this will get you closer:
#!/bin/bash
for f in "${1}"*; do
    grep "$pattern" -A15 "$f"
done | grep -w "$i" | awk 'END{print $8}'
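The loop isn't strictly necessary either, since grep accepts multiple filenames; you could let the shell expand the glob in place (a sketch built on the question's own variables, so $pattern, $OUT and $i are assumed to be set as in the original script):
# The unquoted * must stay outside the quotes so the shell can expand it.
l=$(grep "$pattern " -A 15 "$OUT"* | grep -w "$i" | awk '{print $8}' | tail -1 | tr '\n' ',')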

Unable to substitute redirection for redundant cat

cat joined.txt | xargs -t -a <(cut --fields=1 | sort -u | grep -E '\S') -I{} --max-args=1 --max-procs=4 echo "mkdir -p imdb/movies/{}; grep '^{}' joined.txt > imdb/movies/{}/movies.txt" | bash
The code above works but substituting the redundant cat at the start of the code with a redirection like below doesn't work and leads to a cut input output error.
< joined.txt xargs -t -a <(cut --fields=1 | sort -u | grep -E '\S') -I{} --max-args=1 --max-procs=4 echo "mkdir -p imdb/movies/{}; grep '^{}' joined.txt > imdb/movies/{}/movies.txt" | bash
In either case, it is the cut command inside the process substitution (and not xargs) that should be reading from joined.txt, so to be completely safe, you should put either the pipe or the input redirection inside the process substitution. Actually, neither is necessary; cut can just take joined.txt as an argument.
xargs -t -a <( cat joined.txt | cut ... ) ... | bash
or
xargs -t -a <( cut -f1 joined.txt | ... ) ... | bash
However, it would be clearest to skip the process substitution altogether, and pipe the output of that pipeline to xargs:
cut -f1 joined.txt | sort -u | grep -E '\S' | xargs -t ...
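Putting that together with the options from the original command gives something like the following (a sketch; it assumes joined.txt is tab-separated with the key in field 1, as the question implies):
# One "mkdir + grep" command line is generated per unique key and handed to bash.
cut -f1 joined.txt | sort -u | grep -E '\S' |
    xargs -t -I{} --max-args=1 --max-procs=4 \
        echo "mkdir -p imdb/movies/{}; grep '^{}' joined.txt > imdb/movies/{}/movies.txt" |
    bash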
