How to search for a string in a text file and perform a specific action based on the result - bash

I have very little experience with Bash, but here is what I am trying to accomplish.
I have two different text files with a bunch of server names in them. Before installing any Windows updates and rebooting the servers, I need to disable all the Nagios host/service alerts for them.
hosts_file=/Users/bob/WSUS/wsus_test.txt
password="my_password"
while read -r host
do
    curl -vs -o /dev/null -d "cmd_mod=2&cmd_typ=25&host=$host&btnSubmit=Commit" "https://nagios.fqdn.here/nagios/cgi-bin/cmd.cgi" -u "bob:$password" -k
done < "$hosts_file" >> /Users/bob/WSUS/disable_test.log 2>&1
This is a reduced form of my current code, which works as intended. However, we have servers in a bunch of regions, and each server name is prefixed with a three-letter region code (e.g. LAX, NYC). We also have a Nagios server in each region, so the code above needs to connect to the correct regional Nagios server based on the server name being passed in.
I tried adding 4 test servers into a text file and just adding a line like this:
if grep -q lax1 /Users/bob/WSUS/wsus_test.txt; then
    <same command as above but with the regional nagios server name>
fi
This doesn't work as intended and nothing is actually disabled/enabled via API calls. Again, I've done very little with Bash so any pointers would be appreciated.

Extract the region from the host name and use it in the Nagios URL, like this:
while read -r host; do
    region=$(cut -f1 -d- <<< "$host")
    curl -vs -o /dev/null -d "cmd_mod=2&cmd_typ=25&host=$host&btnSubmit=Commit" "https://nagios-$region.fqdn.here/nagios/cgi-bin/cmd.cgi" -u "bob:$password" -k
done < wsus_test.txt >> /Users/bob/WSUS/disable_test.log 2>&1
This assumes names like lax-server1, where everything before the first dash is the region code.
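If your server names don't contain a dash (e.g. lax1web01), a substring works instead of cut. Here is a minimal sketch of that variant, assuming the region code is always the first three characters and that each regional Nagios server follows a nagios-<region>.fqdn.here naming scheme (both assumptions on my part):
while read -r host; do
    region=${host:0:3}    # first three characters, e.g. "lax" from "lax1web01"
    case "$region" in
        lax|nyc)          # known region codes; extend as needed
            curl -vs -o /dev/null -d "cmd_mod=2&cmd_typ=25&host=$host&btnSubmit=Commit" "https://nagios-$region.fqdn.here/nagios/cgi-bin/cmd.cgi" -u "bob:$password" -k
            ;;
        *)
            echo "unknown region for $host, skipping" >&2
            ;;
    esac
done < wsus_test.txt >> /Users/bob/WSUS/disable_test.log 2>&1
The case statement also gives you a place to catch typos in the input file instead of silently firing requests at a non-existent Nagios host.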


Limit Speed in Ubuntu based on traffic

I have come across the Wondershaper script.
The script is terrific, but is there any way to make it smarter, so that it only turns on after a certain amount of traffic has gone through?
Say a 1 TB daily limit is set: once 1 TB is hit, the script turns on automatically.
I have thought about setting a cron job: at 12 am it clears Wondershaper, and at 15-minute intervals it checks whether the server has crossed the 1 TB limit for the day; if it has, it runs the limiter.
But I am not sure how to set up the second part. How can I set it up so that the limiter runs once 1 TB is crossed?
Remove code:
wondershaper -ca eth0
Limit code:
wondershaper -a eth0 -u 154000
I have made a custom script for this. As it is not possible to do this within the system itself, I had to get creative: the script makes an API call to the datacenter and is run from a cron job. I also used bashjson to parse the response. The script is attached below.
date=$(date +%F)
url='API URL /metrics/datatraffic?from='
url1='T00:00:00Z&to='
url2='T23:59:59Z&aggregation=SUM'
final="$url$date$url1$date$url2"
wget --no-check-certificate -O output.txt \
    --method GET \
    --timeout=0 \
    --header 'X-Lsw-Auth: API AUTH' \
    "$final"
# remove '[]' from the response just to make things easier for bashjson to understand
sed 's/[][]//g' output.txt > test1.json
# pull the traffic counters out of the JSON into variables
down=$(/root/bashjson/bashjson.sh test1.json metrics DOWN_PUBLIC values value)
up=$(/root/bashjson/bashjson.sh test1.json metrics UP_PUBLIC values value)
# expand scientific notation, then round to a plain integer,
# since bash arithmetic does not understand either
newdown=$(printf "%.14f" "$down")
newup=$(printf "%.14f" "$up")
upp=$(printf "%.0f\n" "$newup")
downn=$(printf "%.0f\n" "$newdown")
if (( upp > 800000000000 )); then
    wondershaper -a eth0 -u 100000   # main command to limit upload
else
    echo uppworks
fi
if (( downn > 500000000000 )); then
    wondershaper -a eth0 -d 100000   # limit download
else
    echo downworks
fi
rm -f output.txt test1.json
echo "$upp"
echo "$downn"
You can always update it as per your preference.
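To wire the script into cron as described in the question (reset at midnight, check every 15 minutes), the crontab could look something like this; a minimal sketch, assuming the script above is saved as /root/check_traffic.sh, which is a hypothetical path:
# clear any existing limit at midnight
0 0 * * * wondershaper -ca eth0
# re-check the day's traffic every 15 minutes and limit if over quota
*/15 * * * * /root/check_traffic.sh
Keep in mind that cron runs with a minimal PATH, so full paths to wondershaper and the script are safer.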

Remote login (ssh differences)

I would like to know what is the difference between the below commands:
ssh vagrant@someipaddress
cd /home/vagrant/
grep -i "something" data.txt
and
ssh vagrant@someipaddress 'cd /home/vagrant; cat data.txt' | grep -i "something"
From this website, it mentions that you can send multiple commands to the remote server. Is the second option actually logging into the server? What is the benefit of the second approach?
Strictly speaking, from the example provided:
The first command:
Logs onto the remote server
Executes a couple commands, and
Stays logged on to the server
The second command runs half on the remote machine, logs out of the remote machine, and then pipes the output to grep on your local machine, all in one command line.
Breaking down what's happening:
ssh vagrant@someipaddress 'cd /home/vagrant; cat data.txt' | grep -i "something"
The final | grep -i "something" is running on your local PC, operating on the output from the ssh session.
The single quotes contain the entire command block sent to the remote machine;
the double quotes contain individual arguments within that command block.
You may have meant to do this:
ssh vagrant@someipaddress 'cd /home/vagrant; cat data.txt' | grep -i "something"
where the | grep -i "something" section runs locally.
Or you may have intentionally done this:
ssh vagrant@someipaddress 'cd /home/vagrant/ && grep -i "something" data.txt'
where the entire command runs on the server.
Either way, the end result is that you automatically log out of the remote machine, and the whole command sequence is executed in one hit.
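A quick way to convince yourself where each piece runs is to print the hostname on both sides; a small sketch along those lines (someipaddress stands in for the real address):
# hostname and cat run remotely; grep runs locally on the streamed output
ssh vagrant@someipaddress 'hostname; cat /home/vagrant/data.txt' | grep -i "something"
# here the grep itself runs remotely, so only matching lines cross the network
ssh vagrant@someipaddress 'grep -i "something" /home/vagrant/data.txt'
The second form matters when data.txt is large: filtering on the server means you transfer only the matches instead of the whole file.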

How to use curl -w switch with multiple data token as format parameter?

I want to get two things from curl, http_code and time_total, from a single request. How should I formulate the -w %{insert_formatting_here}?
These work:
result=$(curl -s -w '%{http_code}' -o temp.txt "http://127.0.0.1")
echo "$result"
result=$(curl -s -w '%{time_total}' -o temp.txt "http://127.0.0.1")
echo "$result"
Result:
200
0.004
But this didn't work as I expected:
result=$(curl -s -w '%{http_code time_total}' -o temp.txt "http://127.0.0.1")
echo "$result"
Result:
(a dump of the Tomcat default page HTML, with the literal, unexpanded text %{http_code… mixed into it)
I cannot find any tutorial that shows how to put multiple tokens in the format parameter. They all just list the available format tokens, with no example of combining them.
Each variable needs its own set of braces, i.e.:
curl -s -w "%{http_code}:%{time_total}" http://127.0.0.1
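If you then want the two values in separate shell variables, you can split on the delimiter; a minimal sketch, relying on the fact that neither of these two tokens can itself contain a colon:
result=$(curl -s -w '%{http_code}:%{time_total}' -o temp.txt "http://127.0.0.1")
http_code=${result%%:*}    # everything before the first colon
time_total=${result#*:}    # everything after it
echo "status=$http_code took=${time_total}s"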

Why use -Lo- with curl when piping to bash?

In the janus project, they use curl to download and pipe a bootstrap script into bash.
https://github.com/carlhuda/janus
It looks like this:
$ curl -Lo- https://bit.ly/janus-bootstrap | bash
Why would one want to use the args -Lo-?
-o is supposed to be for output, but wouldn't that happen anyway (i.e. to stdout)?
It's all in the man pages:
-L: in case the page has moved (3xx response), curl will redirect the request to the new address.
-o: output to a file instead of stdout (usually the screen). In your case the -o- is redundant, since -o - means "write to stdout", which is what curl does anyway; the output is piped to bash (for execution), not to a file.
The -o- is redundant; these produce the exact same output:
$ curl --silent example.com | sha256sum
3587cb776ce0e4e8237f215800b7dffba0f25865cb84550e87ea8bbac838c423 *-
$ curl --silent --output - example.com | sha256sum
3587cb776ce0e4e8237f215800b7dffba0f25865cb84550e87ea8bbac838c423 *-
They have used that syntax since that line was first introduced in 2011. You might ask Wael Nasreddine (@kalbasit on GitHub) why he did it; he is still active on that repo.
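As for -L, a quick way to see why it matters for a bit.ly link is to check the status code with and without it; the codes below are what a typical URL shortener returns, not something verified against this exact link:
curl -s -o /dev/null -w '%{http_code}\n' https://bit.ly/janus-bootstrap    # 301: redirect, empty body, nothing for bash to run
curl -sL -o /dev/null -w '%{http_code}\n' https://bit.ly/janus-bootstrap   # 200: followed the redirect to the actual script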

How to download a file with wget and save it according to the http-reported filename?

When you request a file with wget and that file is being served by some dynamic page (e.g. php), wget will try to use the path to that dynamic page (usually looking as if an angry child got hold of your keyboard: index.php?a8s7df6a8s=d6fa8sd6f90v78wg&l45i87ylqwiu45h=j76h2g461k326v).
However, these pages usually send an HTTP header with the file so that user agents can display a sensible file name. How do I get wget to listen to that and use it (instead of the url) to determine the name under which to save the file?
I found that a way to do this was to use the --server-response flag together with --spider and invoke wget twice (there is certainly room for improvement there!).
Assume the URL to be in $link:
wget --quiet --server-response --spider -O /dev/null -- "$link" 2>&1 \
    | sed -n 's/^.*filename=\([^;]*\)\(;.*\)\?$/\1/p' \
    | while read -r name; do
          wget -O "$name" -- "$link"
          break
      done
Seems to work like a charm for me.
Possibly, there is a direct way, though. This creates (completely unnecessarily) two connections to the server.
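For what it's worth, wget does have a more direct (though officially experimental) option for this: --content-disposition, which makes it name the file after the Content-Disposition header in a single request:
wget --content-disposition -- "$link"
How reliably it behaves depends on your wget version, so the two-pass approach above is still worth keeping around.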
